
Module 1 : Introduction to Algorithms

1. Describe Notion of Algorithm 4 Marks

The study of algorithms is the cornerstone of computer science. It can be recognized as the core
of computer science: computer programs would not exist without algorithms. With computers
becoming an essential part of our professional and personal lives, studying algorithms becomes a
necessity, more so for computer science engineers. Another reason for studying algorithms is
that knowing a standard set of important algorithms sharpens our analytical skills and helps
us develop new algorithms for required applications.
Algorithm: An algorithm is a finite set of instructions that, if followed, accomplishes a
particular task. In addition, all algorithms must satisfy the following criteria:
1. Input. Zero or more quantities are externally supplied.
2. Output. At least one quantity is produced.
3. Definiteness. Each instruction is clear and unambiguous.
4. Finiteness. If we trace out the instructions of an algorithm, then for all cases, the algorithm
terminates after a finite number of steps.
5. Effectiveness. Every instruction must be basic enough that it can be carried out, in principle,
by a person using only pencil and paper. It is not enough that each operation be definite as in
criterion 3; it must also be feasible.

2. List down the steps involved in analyzing an algorithm 4 Marks

We use a hypothetical model with the following assumptions:

• Total time taken by the algorithm is given as a function of its input size
• Logical units are identified as one step
• Every step requires ONE unit of time
• Total time taken = total number of steps executed

Input size: The time required by an algorithm is proportional to the size of the problem
instance. For example, more time is required to sort 20 elements than to sort 10 elements.
Units for measuring running time: Count the number of times the algorithm's basic operation is
executed. (Basic operation: the most important operation of the algorithm, the operation
contributing the most to the total running time.) The basic operation is usually the
most time-consuming operation in the algorithm's innermost loop.

Consider the following example:
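A minimal illustrative sketch, counting executions of the basic operation (the comparison A[i] > max) in a scan for the largest element; the function name and demo array are assumptions, since the original worked example is not reproduced above:

#include <stdio.h>

/* Returns the largest element of A[0..n-1].
   Basic operation: the comparison A[i] > max, executed
   exactly n-1 times, so C(n) = n - 1, which is in Θ(n). */
int maxElement(int A[], int n)
{
    int max = A[0];
    for (int i = 1; i < n; i++)
        if (A[i] > max)          /* basic operation */
            max = A[i];
    return max;
}

int main()
{
    int A[] = {3, 9, 2, 7, 5};
    printf("%d\n", maxElement(A, 5));   /* prints 9 */
    return 0;
}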

3. Define Worst-case, Best-case efficiencies 4 Marks


Algorithm efficiency depends on the input size n, and for some algorithms it also depends on the
type of input. Hence we have best-, worst- and average-case efficiencies.
• Worst-case efficiency: Efficiency (number of times the basic operation will be executed) for
the worst case input of size n. i.e. The algorithm runs the longest among all possible inputs of
size n.
• Best-case efficiency: Efficiency (number of times the basic operation will be executed) for the
best case input of size n. i.e. The algorithm runs the fastest among all possible inputs of size n.
• Average-case efficiency: Average time taken (number of times the basic operation will be
executed) to solve all the possible instances (random) of the input. NOTE: NOT the average of
worst and best case

4. Briefly explain asymptotic notations 10 Marks

Asymptotic notation is a way of comparing functions that ignores constant factors and small
input sizes. Three notations used to compare orders of growth of an algorithm's basic operation
count are the O, Ω and Θ notations.

Big Oh - O notation
Definition: A function t(n) is said to be in O(g(n)), denoted t(n) = O(g(n)), if t(n) is bounded above
by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and
some nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0.

Big Omega - Ω notation
Definition: A function t(n) is said to be in Ω(g(n)), denoted t(n) = Ω(g(n)), if t(n) is bounded
below by some constant multiple of g(n) for all large n, i.e., if there exist some positive
constant c and some nonnegative integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0.

Big Theta - Θ notation
Definition: A function t(n) is said to be in Θ(g(n)), denoted t(n) = Θ(g(n)), if t(n) is bounded
both above and below by some constant multiples of g(n) for all large n, i.e., if there exist some
positive constants c1 and c2 and some nonnegative integer n0 such that c2·g(n) ≤ t(n) ≤ c1·g(n)
for all n ≥ n0.
5. List down basic efficiency classes 4 Marks

The basic asymptotic efficiency classes, in increasing order of growth, are:
• Constant: 1
• Logarithmic: log n
• Linear: n
• N-log-n: n log n
• Quadratic: n^2
• Cubic: n^3
• Exponential: 2^n
• Factorial: n!

6. List down the steps involved in mathematical analysis of Non-Recursive Algorithms 6 Marks
General plan for analyzing efficiency of non-recursive algorithms:
1. Decide on parameter n indicating input size
2. Identify algorithm‘s basic operation
3. Check whether the number of times the basic operation is executed depends only on the
input size n. If it also depends on the type of input, investigate worst, average, and best case
efficiency separately.
4. Set up summation for C(n) reflecting the number of times the algorithm‘s basic operation is
executed.
5. Simplify summation using standard formulas
7. List down the steps involved in mathematical analysis of Recursive Algorithms 6 Marks
General plan for analyzing efficiency of recursive algorithms:
1. Decide on parameter n indicating input size
2. Identify algorithm‘s basic operation
3. Check whether the number of times the basic operation is executed depends only on the
input size n. If it also depends on the type of input, investigate worst, average, and best case
efficiency separately.
4. Set up recurrence relation, with an appropriate initial condition, for the number of times the
algorithm‘s basic operation is executed.
5. Solve the recurrence.
8. Identify the time complexity (upper bound) for the below iterative functions 6 Marks

A()
{
    int i = 1, s = 1;
    while (s <= n)
    {
        i++;
        s = s + i;
        printf("Ravi");
    }
}

Sol:

s: 1  3  6  10 ............... n
i: 1  2  3  4  ............... k

The loop stops when s exceeds n, i.e. when

    k(k+1)/2 > n
    k^2 + k > 2n
    k = O(√n)

∴ The time complexity is O(√n)
9. Identify the time complexity (upper bound) for the below iterative functions 6 Marks

A()
{
    int i, j, k, n;
    for (i = 1; i <= n; i++)
    {
        for (j = 1; j <= i; j++)
        {
            for (k = 1; k <= 100; k++)
            {
                printf("Ravi");
            }
        }
    }
}

Sol:

i: 1        2        3        4        ... n
j: 1 time   2 times  3 times  4 times  ... n times
k: 100      200      300      400      ... n*100

Total = 100 + 2*100 + 3*100 + ... + n*100
      = 100(1 + 2 + 3 + ... + n)
      = 100 · n(n+1)/2

∴ O(n^2)

10. Find the time complexity (upper bound) for the below iterative functions 10 Marks
A()
{
    for (i = 1; i < n; i = i*2)
    {
        printf("Ravi");
    }
}

Sol:

i: 1    2    4    ...  n
   2^0  2^1  2^2  ...  2^k

The loop stops when 2^k = n, i.e. k = log2 n.

∴ O(log2 n)
11. Find the time complexity (upper bound) for the below recursive functions 10 Marks
T(n) = n + T(n-1)   ; n > 1

T(n) = 1            ; n = 1

Sol:
T(n) = n + T(n-1) ........... (1)
T(n-1) = (n-1) + T(n-2) ..... (2)
T(n-2) = (n-2) + T(n-3) ..... (3)
Substituting (2) in (1):
T(n) = n + (n-1) + T(n-2) ........... (4)
Substituting (3) in (4):
T(n) = n + (n-1) + (n-2) + T(n-3)

After k substitutions:
T(n) = n + (n-1) + ... + (n-k+1) + T(n-k) .......... (5)
Setting n - k = 1, i.e. k = n - 1 .......... (6)
Substituting (6) in (5):
T(n) = n + (n-1) + ... + 2 + T(1)
     = n + (n-1) + ... + 2 + 1
     = n(n+1)/2
∴ T(n) = O(n^2)

12. Explain with an example how a new variable count introduced in a program can be used to
find the number of steps needed by a program to solve a problem instance.
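A minimal sketch (the program and counting scheme are illustrative assumptions, since no answer is given above): a global variable count is incremented alongside every statement whose executions we want to count. For summing n array elements, count ends up as 2n + 3, so the step count grows linearly with n.

#include <stdio.h>

int count = 0;   /* step counter */

int sum(int a[], int n)
{
    int s = 0;          count++;        /* assignment          */
    for (int i = 0; i < n; i++)
    {
        count++;                        /* loop-condition test */
        s += a[i];      count++;        /* addition/assignment */
    }
    count++;                            /* last (failing) test */
    count++;                            /* return statement    */
    return s;
}

int main()
{
    int a[] = {1, 2, 3, 4, 5};
    sum(a, 5);
    printf("steps = %d\n", count);      /* prints 2*5 + 3 = 13 */
    return 0;
}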

Module 2-Algorithm design techniques-Brute force

1. In brief explain brute force strategy of programming 4 Marks


Brute force is a straightforward approach to problem solving, usually directly based on the
problem's statement and definitions of the concepts involved. Though rarely a source of clever
or efficient algorithms, the brute-force approach should not be overlooked: unlike some other
design strategies, brute force is applicable to a very wide variety of problems. For some
important problems (e.g., sorting, searching, string matching), the brute-force approach yields
reasonable algorithms of at least some practical value with no limitation on instance size.
Even if too inefficient in general, a brute-force algorithm can still be useful for solving
small-size instances of a problem. A brute-force algorithm can also serve an important
theoretical or educational purpose.
2. Write and apply bubble sort algorithm on following set of integers 8,5, 7,3,2. 4 Marks

Initial list: 8 5 7 3 2

Pass-1: 8 5 7 3 2 → 5 8 7 3 2 → 5 7 8 3 2 → 5 7 3 8 2 → 5 7 3 2 8
Pass-2: 5 7 3 2 8 → 5 3 7 2 8 → 5 3 2 7 8
Pass-3: 5 3 2 7 8 → 3 5 2 7 8 → 3 2 5 7 8
Pass-4: 3 2 5 7 8 → 2 3 5 7 8

Sorted list: 2 3 5 7 8
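The question also asks for the algorithm; a minimal C sketch (the function name bubbleSort and the demo main are assumptions):

#include <stdio.h>

void bubbleSort(int a[], int n)
{
    for (int pass = 0; pass < n - 1; pass++)          /* n-1 passes              */
        for (int j = 0; j < n - 1 - pass; j++)        /* shrink the range        */
            if (a[j] > a[j + 1])                      /* swap if out of order    */
            {
                int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
            }
}

int main()
{
    int a[] = {8, 5, 7, 3, 2};
    bubbleSort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);  /* 2 3 5 7 8 */
    return 0;
}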

3. Define Knapsack problem and apply on following set of data having bag capacity m=15
6 Marks

Objects 1 2 3 4 5 6 7
Profits 10 5 15 7 6 18 3
Weights 2 3 5 7 1 4 1

Solution:
The (fractional) knapsack problem: given n objects, each with profit pi and weight wi, and a
knapsack of capacity m, fill the knapsack (fractions of objects allowed) so that the total profit
is maximized. The greedy strategy picks objects in decreasing order of the profit/weight ratio.

Objects   1     2     3     4     5     6     7
Profits   10    5     15    7     6     18    3
Weights   2     3     5     7     1     4     1
p/w       5     1.67  3     1     6     4.5   3

Order of selection: objects 5, 1, 6, 3, 7 taken fully (weight 1+2+4+5+1 = 13), then 2/3 of
object 2 to fill the remaining capacity of 2.

∑ xiwi = 15
∑ xipi = 6 + 10 + 18 + 15 + 3 + (2/3)·5 ≈ 55.33
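A greedy implementation sketch in C, using the object data above (the structure and names are illustrative assumptions):

#include <stdio.h>

int main()
{
    double p[] = {10, 5, 15, 7, 6, 18, 3};   /* profits */
    double w[] = {2, 3, 5, 7, 1, 4, 1};      /* weights */
    int n = 7;
    double m = 15, profit = 0;

    int used[7] = {0};
    while (m > 0)
    {
        /* pick the unused object with the best profit/weight ratio */
        int best = -1;
        for (int i = 0; i < n; i++)
            if (!used[i] && (best < 0 || p[i]/w[i] > p[best]/w[best]))
                best = i;
        if (best < 0) break;
        used[best] = 1;

        if (w[best] <= m) { profit += p[best]; m -= w[best]; }
        else              { profit += p[best] * (m / w[best]); m = 0; }
    }
    printf("Total profit = %.2f\n", profit);   /* prints 55.33 */
    return 0;
}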

4. Write and explain selection sort algorithm with example of your choice. 10 Marks

Algorithm: Selection-Sort (A)

for i ← 1 to n-1 do
    min_j ← i
    min_x ← A[i]
    for j ← i + 1 to n do
        if A[j] < min_x then
            min_j ← j
            min_x ← A[j]
    A[min_j] ← A[i]
    A[i] ← min_x

Consider the following depicted array as an example.


For the first position in the sorted list, the whole list is scanned sequentially. The first
position where 14 is stored presently, we search the whole list and find that 10 is the lowest
value.

So we replace 14 with 10. After one iteration 10, which happens to be the minimum value in
the list, appears in the first position of the sorted list.

For the second position, where 33 is residing, we start scanning the rest of the list in a
linear manner.

We find that 14 is the second lowest value in the list and it should appear at the second
place. We swap these values.

After two iterations, two least values are positioned at the beginning in a sorted manner.

The same process is applied to the rest of the items in the array −
5. Briefly explain Traveling Salesman Problem (TSP) using brute force strategy with example
10 Marks
Travelling Salesman Problem (TSP) : Given a set of cities and distances between every pair of
cities, the problem is to find the shortest possible route that visits every city exactly once and
returns to the starting point.
Note the difference between the Hamiltonian cycle problem and TSP. The Hamiltonian cycle problem
is to find whether there exists a tour that visits every city exactly once. In TSP we know that a
Hamiltonian tour exists (because the graph is complete), and in fact many such tours exist; the
problem is to find a minimum weight Hamiltonian cycle.
For example, consider the graph shown in the figure on the right side. A TSP tour in the graph is
1-2-4-3-1. The cost of the tour is 10+25+30+15 which is 80.
The problem is a famous NP-hard problem. There is no known polynomial-time solution for this
problem.

Output of Given Graph:


Minimum weight Hamiltonian Cycle:
10 + 25 + 30 + 15 = 80
6. Write an algorithm to find uniqueness of elements in an array and give the mathematical
analysis of this non recursive algorithm with all steps. 6 Marks
General Plan for Analyzing the Time Efficiency of Non-recursive Algorithms

1. Decide on a parameter (or parameters) indicating an input’s size.


2. Identify the algorithm's basic operation. (As a rule, it is located in the innermost loop.)
3. Check whether the number of times the basic operation is executed depends only on the size of
an input. If it also depends on some additional property, the worst-case, average-case, and, if
necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed form formula for
the count or, at the very least, establish its order of growth.
Uniqueness of elements in an array :
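The algorithm itself (a standard sketch in C; the function name is an assumption) returns 1 if all elements of A[0..n-1] are distinct:

#include <stdio.h>

int uniqueElements(int A[], int n)
{
    for (int i = 0; i <= n - 2; i++)
        for (int j = i + 1; j <= n - 1; j++)
            if (A[i] == A[j])        /* basic operation */
                return 0;            /* duplicate found */
    return 1;                        /* all distinct    */
}

int main()
{
    int A[] = {3, 7, 2, 9};
    printf("%d\n", uniqueElements(A, 4));   /* prints 1 */
    return 0;
}

Applying the plan: the input size is n; the basic operation is the comparison A[i] == A[j]; the count depends on the input type, so we consider the worst case (all elements distinct), where the comparison executes C_worst(n) = (n-1) + (n-2) + ... + 1 = n(n-1)/2, which is in Θ(n^2).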

7. Apply Traveling Salesman Problem (TSP) for below graph using A as source vertex. 10 Marks

Sol:
The shortest path that originates and ends at A is A → B → C → D → E → F → A

The cost of the path is: 16 + 21 + 12 + 15 + 16 + 34 = 114.

8. Write selection sort algorithm and apply on following set of integers 64, 25, 12, 22, 11

6 Marks
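Sol (applying the selection sort algorithm above; each pass swaps the minimum of the unsorted portion into place):

64 25 12 22 11 → 11 25 12 22 64 → 11 12 25 22 64 → 11 12 22 25 64 → 11 12 22 25 64 (sorted)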
9. List down the steps for linear search and mention its best case, worst case and average case,
and apply linear search on 10, 15, 30, 70, 80, 60, 20, 90, 40 to search for the key element 20.
4 Marks
Algorithm for Linear Search:
The algorithm for linear search can be broken down into the following steps:
 Start: Begin at the first element of the collection of elements.
 Compare: Compare the current element with the desired element.
 Found: If the current element is equal to the desired element, return true or index to the current
element.
 Move: Otherwise, move to the next element in the collection.
 Repeat: Repeat steps 2-4 until we have reached the end of collection.
 Not found: If the end of the collection is reached without finding the desired element, return that
the desired element is not in the array.
Time and Space Complexity of Linear Search:

Time Complexity:
 Best Case: In the best case, the key might be present at the first index. So the best case complexity
is O(1)
 Worst Case: In the worst case, the key might be present at the last index i.e., opposite to the end
from which the search has started in the list. So the worst-case complexity is O(N) where N is the
size of the list.
 Average Case: O(N)
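The application asked for in the question, followed by a minimal C sketch (the function name is an assumption):

Trace for key = 20 in 10, 15, 30, 70, 80, 60, 20, 90, 40:
compare 10, 15, 30, 70, 80, 60 (no match), then 20 — match found at index 6 (the 7th position).

#include <stdio.h>

int linearSearch(int a[], int n, int key)
{
    for (int i = 0; i < n; i++)     /* scan left to right   */
        if (a[i] == key)            /* compare with the key */
            return i;               /* found: return index  */
    return -1;                      /* not found            */
}

int main()
{
    int a[] = {10, 15, 30, 70, 80, 60, 20, 90, 40};
    printf("%d\n", linearSearch(a, 9, 20));   /* prints 6 */
    return 0;
}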
10. List down the steps involved for bubble sort and apply the same to sort 6 0 3 5

4 Marks

Bubble Sort is the simplest sorting algorithm that works by repeatedly swapping the adjacent
elements if they are in the wrong order. This algorithm is not suitable for large data sets as its
average and worst-case time complexity is quite high.
Bubble Sort Algorithm
In Bubble Sort algorithm,
 traverse from left and compare adjacent elements and the higher one is placed at right side.
 In this way, the largest element is moved to the rightmost end at first.
 This process is then continued to find the second largest and place it and so on until the data
is sorted.
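Applying the steps above to 6 0 3 5:

Pass-1: 6 0 3 5 → 0 6 3 5 → 0 3 6 5 → 0 3 5 6
Pass-2: 0 3 5 6 (no swaps, already sorted)

Sorted list: 0 3 5 6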
11. Write C Program or algorithm to Print all Distinct (Unique ) Elements in given Array 6 Marks
#include <stdio.h>

void printDistinct(int arr[], int n)
{
    for (int i = 0; i < n; i++)
    {
        int j;
        for (j = 0; j < i; j++)
            if (arr[i] == arr[j])
                break;

        // If not printed earlier, then print it
        if (i == j)
            printf("%d ", arr[i]);
    }
}

int main()
{
    int arr[] = {6, 10, 5, 4, 9, 120, 4, 6, 10};
    int n = sizeof(arr) / sizeof(arr[0]);
    printDistinct(arr, n);
    return 0;
}

12. Demonstrate pattern matching algorithm with suitable example 10 Marks

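A minimal brute-force pattern matching sketch in C (the function name and example strings are illustrative assumptions). It slides the pattern over the text one position at a time and compares character by character:

#include <stdio.h>
#include <string.h>

/* Returns the index of the first occurrence of pat in txt, or -1. */
int bruteForceMatch(const char *txt, const char *pat)
{
    int n = strlen(txt), m = strlen(pat);
    for (int i = 0; i <= n - m; i++)        /* each alignment      */
    {
        int j = 0;
        while (j < m && txt[i + j] == pat[j])
            j++;                            /* extend the match    */
        if (j == m)
            return i;                       /* full match at i     */
    }
    return -1;
}

int main()
{
    printf("%d\n", bruteForceMatch("NOBODY_NOTICED_HIM", "NOT"));  /* prints 7 */
    return 0;
}

In the worst case the algorithm makes m(n - m + 1) character comparisons, i.e. O(nm).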

Module 3-Divide-and-conquer, Decrease and Conquer

1. In brief explain Divide & conquer strategy of programming 4 Marks


Divide & conquer is a general algorithm design strategy with a general plan as follows:
1. DIVIDE: A problem‘s instance is divided into several smaller instances of the same problem,
ideally of about the same size.
2. RECUR: Solve the sub-problem recursively.
3. CONQUER: If necessary, the solutions obtained for the smaller instances are combined to get
a solution to the original instance.
Below diagram shows the general divide & conquer plan

2. List down the advantages and limitations of divide & conquer technique 4 Marks

Advantages of divide & conquer technique:

• For solving conceptually difficult problems like the Tower of Hanoi, divide & conquer is a
powerful tool

• Results in efficient algorithms

• Divide & conquer algorithms are adapted for execution in multi-processor machines

• Results in algorithms that use memory cache efficiently

Limitations of divide & conquer technique:

• Recursion is slow

• For a very simple problem, it may be more complicated than an iterative approach. Example:
adding n numbers, etc.

3. Explain the general divide & conquer recurrence relation 4 Marks


An instance of size n can be divided into b instances of size n/b, with "a" of them needing to be
solved [a ≥ 1, b > 1].

Assume size n is a power of b. The recurrence for the running time T(n) is as follows:

T(n) = aT(n/b) + f(n)

where:

f(n) – a function that accounts for the time spent on dividing the problem into smaller ones and
on combining their solutions

Therefore, the order of growth of T(n) depends on the values of the constants a & b and the
order of growth of the function f(n).

4. State master theorem and apply the same for recurrence relation T(n) = 2T(n/2) + 1 10 Marks
Theorem: If f(n) ∈ Θ(n^d) with d ≥ 0 in the recurrence equation
T(n) = aT(n/b) + f(n),
then
T(n) = Θ(n^d)          if a < b^d
       Θ(n^d log n)    if a = b^d
       Θ(n^(log_b a))  if a > b^d
Let T(n) = 2T(n/2) + 1; solve using the master theorem.
Solution:
Here: a = 2
b = 2
f(n) = Θ(1)
d = 0
Therefore:
a > b^d, i.e., 2 > 2^0 = 1
Case 3 of the master theorem holds. Therefore:
T(n) ∈ Θ(n^(log_b a))
     ∈ Θ(n^(log_2 2))
     ∈ Θ(n)
5. Write and explain binary search algorithm with an example c 10 Marks

Binary Search, also known as half-interval search is one of the most popular search techniques to find
elements in a sorted array. Here, you have to make sure that the array is sorted.

The algorithm follows the divide and conquer approach, where the complete array is divided into two
halves and the element to be searched is compared with the middle element of the list. If the
element to be searched is less than the middle element, then the search is narrowed down to 1st half
of the array. Else, the search continues to the second half of the list.

Binary Search Example

Consider the following array and the search element to understand the Binary Search techniques.

Array considered: 09 17 25 34 49

Element to be searched: 34

Step 1: Start the binary search algorithm by using the formula middle = (left + right)/2. Here,
left = 0 and right = 4, so middle = 2. This means 25 is the middle element of the array.
Step 2: Now compare 25 with the search element, i.e. 34. Since 25 < 34, left = middle + 1 = 3
and right = 4.
Step 3: The new middle = (3 + 4)/2 = 3.5, which integer division truncates to 3.
Step 4: Now the element to be searched equals the middle element found in the previous step. This
implies that the element is found at a[3].
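A minimal iterative C sketch (the function name is an assumption):

#include <stdio.h>

/* Returns the index of key in the sorted array a[0..n-1], or -1. */
int binarySearch(int a[], int n, int key)
{
    int left = 0, right = n - 1;
    while (left <= right)
    {
        int middle = (left + right) / 2;
        if (a[middle] == key)      return middle;
        else if (a[middle] < key)  left = middle + 1;   /* search right half */
        else                       right = middle - 1;  /* search left half  */
    }
    return -1;
}

int main()
{
    int a[] = {9, 17, 25, 34, 49};
    printf("%d\n", binarySearch(a, 5, 34));   /* prints 3 */
    return 0;
}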
6. Write and explain quick sort algorithm with an example. 10 Marks

QuickSort(A, p, r)
{
    if (p < r)
    {
        q = Partition(A, p, r);
        QuickSort(A, p, q - 1);
        QuickSort(A, q + 1, r);
    }
}

Partition(A, p, r)
{
    x = A[r];                 // last element as pivot
    i = p - 1;
    for (j = p to r - 1)
    {
        if (A[j] <= x)
        {
            i = i + 1;
            exchange A[i] and A[j];
        }
    }
    exchange A[i+1] and A[r];
    return i + 1;
}
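The question also asks for an example; a runnable C version of the same Lomuto-partition scheme (the demo array and function names are illustrative assumptions):

#include <stdio.h>

void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

int partition(int A[], int p, int r)
{
    int x = A[r], i = p - 1;           /* pivot = last element     */
    for (int j = p; j <= r - 1; j++)
        if (A[j] <= x)
            swap(&A[++i], &A[j]);      /* grow the "<= pivot" zone */
    swap(&A[i + 1], &A[r]);            /* place pivot in position  */
    return i + 1;
}

void quickSort(int A[], int p, int r)
{
    if (p < r)
    {
        int q = partition(A, p, r);
        quickSort(A, p, q - 1);
        quickSort(A, q + 1, r);
    }
}

int main()
{
    int A[] = {10, 80, 30, 90, 40, 50, 70};
    quickSort(A, 0, 6);
    for (int i = 0; i < 7; i++) printf("%d ", A[i]);  /* 10 30 40 50 70 80 90 */
    return 0;
}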
7. Briefly explain working of insertion sort algorithm with an example. 6 Marks

Solution:

To understand the working of the insertion sort algorithm, let's take an unsorted array. It
will be easier to understand insertion sort via an example.

Let the elements of the array be 12, 31, 25, 8, 32, 17.


Initially, the first two elements are compared in insertion sort.

Here, 31 is greater than 12. That means both elements are already in ascending order. So,
for now, 12 is stored in a sorted sub-array.

Now, move to the next two elements and compare them.

Here, 25 is smaller than 31. So, 31 is not at correct position. Now, swap 31 with 25. Along
with swapping, insertion sort will also check it with all elements in the sorted array.

For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the
sorted array remains sorted after swapping.

Now, two elements in the sorted array are 12 and 25. Move forward to the next elements
that are 31 and 8.

Both 31 and 8 are not sorted. So, swap them.


After swapping, elements 25 and 8 are unsorted.

So, swap them.

Now, elements 12 and 8 are unsorted.

So, swap them too.

Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are
31 and 32.

Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.

Move to the next elements that are 32 and 17.

17 is smaller than 32. So, swap them.


Swapping makes 31 and 17 unsorted. So, swap them too.

Now, swapping makes 25 and 17 unsorted. So, perform swapping again.

Now, the array is completely sorted.
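A compact C sketch of insertion sort (names are illustrative assumptions). Unlike the pairwise-swap narration above, this version shifts larger elements right and drops the key into place, which is the usual formulation:

#include <stdio.h>

void insertionSort(int a[], int n)
{
    for (int i = 1; i < n; i++)
    {
        int key = a[i], j = i - 1;
        while (j >= 0 && a[j] > key)   /* shift larger elements right */
        {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;                /* insert key into its slot    */
    }
}

int main()
{
    int a[] = {12, 31, 25, 8, 32, 17};
    insertionSort(a, 6);
    for (int i = 0; i < 6; i++) printf("%d ", a[i]);  /* 8 12 17 25 31 32 */
    return 0;
}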

8. Give an analysis of merge sort algorithm ? What types of Datasets work best for Merge
Sort? How does the Divide and Conquer Strategy work with Merge Sort? 6 Marks

Analysis of merge sort:


A merge sort consists of several passes over the input. The first pass merges segments of
size 1, the second merges segments of size 2, and the ith pass merges segments of size 2^(i-1).
Thus, the total number of passes is ⌈log2 n⌉. As merge showed, we can merge two sorted
segments in linear time, which means that each pass takes O(n) time. Since there are ⌈log2 n⌉
passes, the total computing time is O(n log2 n).

What types of datasets work best for Merge Sort?
Merge sort works well on any type of dataset, be it large or small. But quicksort generally is
more efficient for small datasets or for datasets where the elements are more or less evenly
distributed over the range.

How does the Divide and Conquer Strategy work with Merge Sort ?
The Divide and Conquer strategy divides the problem into smaller parts, solves them, and
combines the small solved sub problems to get the final solution. The same happens with
the Merge Sort algorithm. It keeps on dividing the array into two halves until their lengths
become 1. Then it starts combining them two at a time. First, the unit cells are combined
into sorted arrays of length 2 and these sorted subarrays are combined into another bigger
sorted subarrays and so on until the whole sorted array is formed.

9. When does the worst case occur in Merge Sort? 4 Marks


The worst case of Merge Sort will occur when the number of comparisons is maximum. In
Merge Sort, the number of comparisons between two subarrays is maximum when the
subarrays contain alternate elements of the sorted subarray formed by combining them. For
example, comparing {1, 3} and {2, 4} will have the maximum number of comparisons, as they
are the alternate elements of the sorted subarray {1, 2, 3, 4}.

10. Write and explain Merge sort algorithm 10 Marks

Merge sort is one of the most efficient sorting algorithms. It works on the principle of divide
and conquer: it repeatedly breaks down a list into several sublists until each sublist consists
of a single element, then merges those sublists in a manner that results in a sorted list.

A merge sort works as follows:

Top down Merge Sort Implementation

The top-down merge sort approach is the methodology which uses recursion mechanism. It
starts at the Top and proceeds downwards, with each recursive turn asking the same
question such as “What is required to be done to sort the array?” and having the answer as,
“split the array into two, make a recursive call, and merge the results.”, until one gets to the
bottom of the array-tree.

Example: Let us consider an example to understand the approach better.

1. Divide the unsorted list into n sublists, each comprising 1 element (a list of 1 element is
supposed sorted).
Top-down Implementation

2. Repeatedly merge sublists to produce newly sorted sublists until there is only 1 sublist
remaining. This will be the sorted list.

Merging is done as follows:

The first element of both lists is compared. If sorting in ascending order, the smaller element
among two becomes a new element of the sorted list. This procedure is repeated until both
the smaller sublists are empty and the newly combined sublist covers all the elements of
both the sublists.
Merging of two lists
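A compact top-down C sketch (function names and the demo array are assumptions):

#include <stdio.h>

void merge(int a[], int lo, int mid, int hi)
{
    int tmp[64], i = lo, j = mid + 1, k = 0;    /* buffer sized for this demo */
    while (i <= mid && j <= hi)                 /* pick the smaller head      */
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];         /* drain the leftovers        */
    while (j <= hi)  tmp[k++] = a[j++];
    for (k = 0; k <= hi - lo; k++) a[lo + k] = tmp[k];
}

void mergeSort(int a[], int lo, int hi)
{
    if (lo >= hi) return;
    int mid = (lo + hi) / 2;
    mergeSort(a, lo, mid);      /* sort left half  */
    mergeSort(a, mid + 1, hi);  /* sort right half */
    merge(a, lo, mid, hi);      /* combine         */
}

int main()
{
    int a[] = {38, 27, 43, 3, 9, 82, 10};
    mergeSort(a, 0, 6);
    for (int i = 0; i < 7; i++) printf("%d ", a[i]);  /* 3 9 10 27 38 43 82 */
    return 0;
}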

11. Briefly explain decrease and conquer with two advantages and disadvantages 6 Marks

Decrease or reduce problem instance to smaller instance of the same problem and extend
solution. Conquer the problem by solving smaller instance of the problem. Extend solution
of smaller instance to obtain solution to original problem. Basic idea of the decrease-and-
conquer technique is based on exploiting the relationship between a solution to a given
instance of a problem and a solution to its smaller instance. This approach is also known as
incremental or inductive approach. Decrease and conquer is a technique used to solve
problems by reducing the size of the input data at each step of the solution process. This
technique is similar to divide-and-conquer, in that it breaks down a problem into smaller sub
problems, but the difference is that in decrease-and-conquer, the size of the input data is
reduced at each step. The technique is used when it’s easier to solve a smaller version of the
problem, and the solution to the smaller problem can be used to find the solution to the
original problem.
Advantages of Decrease and Conquer:

1. Simplicity: Decrease-and-conquer is often simpler to implement compared to other


techniques like dynamic programming or divide-and-conquer.
2. Efficient Algorithms: The technique often leads to efficient algorithms as the size of the
input data is reduced at each step, reducing the time and space complexity of the solution.
3. Problem-Specific: The technique is well-suited for specific problems where it’s easier to
solve a smaller version of the problem.

Disadvantages of Decrease and Conquer:

1. Problem-Specific: The technique is not applicable to all problems and may not be
suitable for more complex problems.
2. Implementation Complexity: The technique can be more complex to implement when
compared to other techniques like divide-and-conquer, and may require more careful
planning.

12. What are different variation in decrease and conquer 6 Marks

Variations of Decrease and Conquer


There are three major variations of decrease-and-conquer:

1. Decrease by a constant
2. Decrease by a constant factor
3. Variable size decrease
Decrease by a Constant : In this variation, the size of an instance is reduced by the same
constant on each iteration of the algorithm. Typically, this constant is equal to one ,
although other constant size reductions do happen occasionally. Below are example
problems :
 Insertion sort
 Graph search algorithms: DFS, BFS
 Topological sorting
 Algorithms for generating permutations, subsets
Decrease by a Constant factor: This technique suggests reducing a problem instance by the
same constant factor on each iteration of the algorithm. In most applications, this constant
factor is equal to two. A reduction by a factor other than two is especially rare. Decrease by
a constant factor algorithms are very efficient especially when the factor is greater than 2 as
in the fake-coin problem. Below are example problems :
 Binary search
 Fake-coin problems
 Russian peasant multiplication
Variable-Size-Decrease: In this variation, the size-reduction pattern varies from one
iteration of an algorithm to another. For example, in the problem of finding the gcd of two
numbers, the value of the second argument is always smaller on the right-hand side than on the
left-hand side, but it decreases neither by a constant nor by a constant factor. Below are
example problems:
 Computing the median and the selection problem
 Interpolation search
 Euclid's algorithm
A problem may sometimes be solvable by both decrease-by-constant and decrease-by-factor
variations, and the implementations can be either recursive or iterative. Iterative
implementations may require more coding effort, but they avoid the overhead that accompanies
recursion.
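As an illustration of variable-size decrease, a C sketch of Euclid's algorithm: each step replaces (m, n) by (n, m mod n), so the instance shrinks by a different amount each iteration (the function name is an assumption):

#include <stdio.h>

int gcd(int m, int n)
{
    while (n != 0)
    {
        int r = m % n;   /* the new, smaller instance */
        m = n;
        n = r;
    }
    return m;
}

int main()
{
    printf("%d\n", gcd(60, 24));   /* prints 12 */
    return 0;
}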
Module 4 - Dynamic Programming and Greedy technique

1. Define Dynamic programming and briefly list down its properties 4 Marks
Dynamic programming (DP) is a general algorithm design technique for solving problems with
overlapping sub-problems.
Dynamic Programming Properties
• An instance is solved using the solutions for smaller instances.
• The solutions for a smaller instance might be needed multiple times, so store their results in a
table.
• Thus each smaller instance is solved only once.
• Additional space is used to save time.
2. Bring out at least three differences between divide & conquer and dynamic programming 4
Marks 4 Marks
Divide & Conquer vs. Dynamic Programming:

1. Divide & conquer partitions a problem into independent smaller sub-problems; dynamic
programming partitions a problem into overlapping sub-problems.
2. Divide & conquer doesn't store solutions of sub-problems (identical sub-problems may arise,
so the same computations are performed repeatedly); dynamic programming stores solutions of
sub-problems and thus avoids calculating the same quantity twice.
3. Divide & conquer uses top-down algorithms, which logically progress from the initial
instance down to the smallest sub-instances via intermediate sub-instances; dynamic programming
uses bottom-up algorithms, in which the smallest sub-problems are explicitly solved first and
their results are used to construct solutions to progressively larger sub-instances.

3. Compare and contrast between greedy method and dynamic programming method 4 Marks
• LIKE dynamic programming, the greedy method solves optimization problems.
• LIKE dynamic programming, greedy method problems exhibit optimal substructure.
• UNLIKE dynamic programming, greedy method problems exhibit the greedy choice property, which
avoids back-tracking.
4. List down the applications of the greedy strategy 4 Marks
• Optimal solutions:
Change making
Minimum Spanning Tree (MST)
Single-source shortest paths
Huffman codes
• Approximations:
Traveling Salesman Problem (TSP)
Fractional Knapsack problem

5. Briefly explain the Floyd-Warshall algorithm and the steps involved in it. 6 Marks

Suppose we have a graph G[][] with V vertices from 1 to N. We have to evaluate a
shortestPathMatrix[][] where shortestPathMatrix[i][j] represents the shortest path
between vertices i and j. The shortest path from i to j may pass through some k
intermediate nodes. The idea behind the Floyd-Warshall algorithm is to treat each
vertex from 1 to N as an intermediate node, one by one, exploiting the optimal
substructure of shortest paths.

Steps involved in the Floyd-Warshall algorithm:

1. Initialize the solution matrix same as the input graph matrix as a first step.
2. Then update the solution matrix by considering all vertices as an intermediate
vertex.
3. The idea is to pick all vertices one by one and updates all shortest paths which
include the picked vertex as an intermediate vertex in the shortest path.
4. When we pick vertex number k as an intermediate vertex, we already have
considered vertices {0, 1, 2, .. k-1} as intermediate vertices.
5. For every pair (i, j) of the source and destination vertices respectively, there are two
possible cases.
6. k is not an intermediate vertex in shortest path from i to j. We keep the value
of dist[i][j] as it is.
7. k is an intermediate vertex in shortest path from i to j. We update the value
of dist[i][j] as dist[i][k] + dist[k][j], if dist[i][j] > dist[i][k] + dist[k][j]
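A minimal C sketch of these steps (names and the sample graph are assumptions; INF stands for "no edge"):

#include <stdio.h>

#define V 4
#define INF 99999

void floydWarshall(int dist[V][V])
{
    /* dist[][] starts as the adjacency matrix of the graph */
    for (int k = 0; k < V; k++)            /* intermediate vertex */
        for (int i = 0; i < V; i++)        /* source              */
            for (int j = 0; j < V; j++)    /* destination         */
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}

int main()
{
    int dist[V][V] = {
        {0,   5,   INF, 10},
        {INF, 0,   3,   INF},
        {INF, INF, 0,   1},
        {INF, INF, INF, 0}
    };
    floydWarshall(dist);
    printf("%d\n", dist[0][3]);   /* shortest 0 -> 3 = 5 + 3 + 1 = 9 */
    return 0;
}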

6. Apply all pair shortest path algorithm (Floyd Warshall) for the below graph
10 Marks
7. Apply the Bellman-Ford algorithm for the below graph 10 Marks

Step 1: Initialize a distance array Dist[] to store the shortest distance for each vertex from the source
vertex. Initially distance of source will be 0 and Distance of other vertices will be INFINITY.

Step 2: Start relaxing the edges, during 1st Relaxation:


Step 3: During 2nd Relaxation:

Step 4: During 3rd Relaxation:


Step 5: During 4th Relaxation:

Step 6: During 5th Relaxation:


Step 7: Now the final relaxation, i.e. the 6th relaxation, should indicate the presence of a
negative cycle if there are any changes in the distance array compared to the 5th relaxation.
8. Briefly explain the Bellman-Ford algorithm, and why does relaxing edges N-1 times give
us the single-source shortest paths? 6 Marks

Bellman ford algorithm

A Bellman-Ford algorithm is also guaranteed to find the shortest path in a graph, similar
to Dijkstra’s algorithm. Although Bellman-Ford is slower than Dijkstra’s algorithm, it is capable
of handling graphs with negative edge weights, which makes it more versatile. The shortest
path cannot be found if there exists a negative cycle in the graph. If we continue to go around
the negative cycle an infinite number of times, then the cost of the path will continue to
decrease (even though the length of the path is increasing). As a result, Bellman-Ford is also
capable of detecting negative cycles, which is an important feature.

Why Relaxing Edges N-1 times, gives us Single Source Shortest Path?

In the worst-case scenario, a shortest path between two vertices can have at most N-1 edges,
where N is the number of vertices. This is because a simple path in a graph with N vertices can
have at most N-1 edges, as it’s impossible to form a closed loop without revisiting a vertex. By
relaxing edges N-1 times, the Bellman-Ford algorithm ensures that the distance estimates for
all vertices have been updated to their optimal values, assuming the graph doesn’t contain any
negative-weight cycles reachable from the source vertex. If a graph contains a negative-weight
cycle reachable from the source vertex, the algorithm can detect it after N-1 iterations, since
the negative cycle disrupts the shortest path lengths. In summary, relaxing edges N-1 times in
the Bellman-Ford algorithm guarantees that the algorithm has explored all possible paths of
length up to N-1, which is the maximum possible length of a shortest path in a graph
with N vertices. This allows the algorithm to correctly calculate the shortest paths from the
source vertex to all other vertices, given that there are no negative-weight cycles.

9. Why Dijkstra’s Algorithms fails for the Graphs having Negative Edges ? 6 Marks

The problem with negative weights arises from the fact that Dijkstra’s algorithm assumes that
once a node is added to the set of visited nodes, its distance is finalized and will not change.
However, in the presence of negative weights, this assumption can lead to incorrect results.
Consider the following graph for the example:
In the above graph, A is the source node, among the edges A to B and A to C , A to B is the
smaller weight and Dijkstra assigns the shortest distance of B as 2, but because of existence of
a negative edge from C to B, the actual shortest distance reduces to 1 which Dijkstra fails to
detect.
10. List down the steps involved in Dijkstra's algorithm 6 Marks

Algorithm

1. Create a set sptSet (shortest path tree set) that keeps track of vertices included in the
shortest path tree, i.e., whose minimum distance from the source is calculated and
finalized. Initially, this set is empty.
2. Assign a distance value to all vertices in the input graph. Initialize all distance values
as INFINITE. Assign the distance value as 0 for the source vertex so that it is picked
first.
3. While sptSet doesn’t include all vertices
1. Pick a vertex u that is not there in sptSet and has a minimum distance value.
2. Include u to sptSet.
3. Then update the distance value of all adjacent vertices of u.
1. To update the distance values, iterate through all adjacent vertices.
2. For every adjacent vertex v, if the sum of the distance value of u (from source)
and weight of edge u-v, is less than the distance value of v, then update the
distance value of v.
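A compact adjacency-matrix C sketch of the steps above (the names, the "0 means no edge" convention, and the sample graph are assumptions):

#include <stdio.h>

#define V 5
#define INF 99999

void dijkstra(int g[V][V], int src, int dist[V])
{
    int inSpt[V] = {0};                       /* sptSet membership */
    for (int i = 0; i < V; i++) dist[i] = INF;
    dist[src] = 0;

    for (int c = 0; c < V - 1; c++)
    {
        int u = -1;                           /* pick the min-dist vertex not in sptSet */
        for (int v = 0; v < V; v++)
            if (!inSpt[v] && (u == -1 || dist[v] < dist[u]))
                u = v;
        inSpt[u] = 1;

        for (int v = 0; v < V; v++)           /* relax edges out of u */
            if (!inSpt[v] && g[u][v] && dist[u] + g[u][v] < dist[v])
                dist[v] = dist[u] + g[u][v];
    }
}

int main()
{
    int g[V][V] = {
        {0, 4, 1, 0, 0},
        {4, 0, 2, 5, 0},
        {1, 2, 0, 8, 0},
        {0, 5, 8, 0, 3},
        {0, 0, 0, 3, 0}
    };
    int dist[V];
    dijkstra(g, 0, dist);
    for (int i = 0; i < V; i++) printf("%d ", dist[i]);  /* 0 3 1 8 11 */
    return 0;
}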
11. Why choose the greedy approach? Explain the greedy choice property.
The greedy approach has a few tradeoffs, which may make it suitable for optimization.
One prominent reason is to achieve the most feasible solution immediately. In the
activity selection problem (Explained below), if more activities can be done before
finishing the current activity, these activities can be performed within the same
time. Another reason is to divide a problem recursively based on a condition, with no
need to combine all the solutions. In the activity selection problem, the “recursive
division” step is achieved by scanning a list of items only once and considering certain
activities.
Greedy choice property:
This property says that the globally optimal solution can be obtained by making a
locally optimal solution (Greedy). The choice made by a Greedy algorithm may depend
on earlier choices but not on the future. It iteratively makes one Greedy choice after
another and reduces the given problem to a smaller one.

12. List down any five characteristic components of greedy algorithm 10 Marks

characteristic components of greedy algorithm

1. The feasible solution: A subset of given inputs that satisfies all specified
constraints of a problem is known as a “feasible solution”.
2. Optimal solution: The feasible solution that achieves the desired extremum is
called an “optimal solution”. In other words, the feasible solution that either
minimizes or maximizes the objective function specified in a problem is known as
an “optimal solution”.
3. Feasibility check: It investigates whether the selected input fulfils all constraints
mentioned in a problem or not. If it fulfils all the constraints then it is added to a
set of feasible solutions; otherwise, it is rejected.
4. Optimality check: It investigates whether a selected input produces either a
minimum or maximum value of the objective function by fulfilling all the specified
constraints. If an element in a solution set produces the desired extremum, then it is
added to a set of optimal solutions.
5. Optimal substructure property: The globally optimal solution to a problem
includes the optimal sub solutions within it.
6. Greedy choice property: The globally optimal solution is assembled by selecting
locally optimal choices. The greedy approach applies some locally optimal criteria
to obtain a partial solution that seems to be the best at that moment and then find
out the solution for the remaining sub-problem.

13. List down any five the advantages and disadvantages of greedy approach. 10 Marks

Advantages of the Greedy Approach:


1. The greedy approach is easy to implement.
2. Typically have less time complexity.
3. Greedy algorithms can be used for optimization purposes or finding close to
optimization in case of Hard problems.
4. Greedy algorithms can produce efficient solutions in many cases, especially when
the problem has a substructure that exhibits the greedy choice property.
5. Greedy algorithms are often faster than other optimization algorithms, such as
dynamic programming or branch and bound, because they require less computation
and memory.
6. The greedy approach is often used as a heuristic or approximation algorithm when
an exact solution is not feasible or when finding an exact solution would be too
time-consuming.
7. The greedy approach can be applied to a wide range of problems, including
problems in computer science, operations research, economics, and other fields.
8. The greedy approach can be used to solve problems in real-time, such as
scheduling problems or resource allocation problems, because it does not require
the solution to be computed in advance.
9. Greedy algorithms are often used as a first step in solving optimization problems,
because they provide a good starting point for more complex optimization
algorithms.
10. Greedy algorithms can be used in conjunction with other optimization algorithms,
such as local search or simulated annealing, to improve the quality of the solution.
Disadvantages of the Greedy Approach:
1. The local optimal solution may not always be globally optimal.
2. Greedy algorithms do not always guarantee to find the optimal solution, and may
produce suboptimal solutions in some cases.
3. The greedy approach relies heavily on the problem structure and the choice of
criteria used to make the local optimal choice. If the criteria are not chosen
carefully, the solution produced may be far from optimal.
4. Greedy algorithms may require a lot of preprocessing to transform the problem into
a form that can be solved by the greedy approach.
5. Greedy algorithms may not be applicable to problems where the optimal solution
depends on the order in which the inputs are processed.
6. Greedy algorithms may not be suitable for problems where the optimal solution
depends on the size or composition of the input, such as the bin packing problem.
7. Greedy algorithms may not be able to handle constraints on the solution space,
such as constraints on the total weight or capacity of the solution.
8. Greedy algorithms may be sensitive to small changes in the input, which can result
in large changes in the output. This can make the algorithm unstable and
unpredictable in some cases.

14. Characteristics of Greedy approach and any five Applications of Greedy Algorithms
10 Marks
Characteristics of Greedy approach
1. There is an ordered list of resources(profit, cost, value, etc.)
2. Maximum of all the resources(max profit, max value, etc.) are taken.
3. For example, in the fractional knapsack problem, the maximum value/weight is
taken first according to available capacity.

Applications of Greedy Algorithms


1. Finding an optimal solution (Activity selection, Fractional Knapsack, Job
Sequencing, Huffman Coding).
2. Finding close to the optimal solution for NP-Hard problems like TSP.
3. Network design: Greedy algorithms can be used to design efficient networks, such
as minimum spanning trees, shortest paths, and maximum flow networks. These
algorithms can be applied to a wide range of network design problems, such as
routing, resource allocation, and capacity planning.
4. Machine learning: Greedy algorithms can be used in machine learning
applications, such as feature selection, clustering, and classification. In feature
selection, greedy algorithms are used to select a subset of features that are most
relevant to a given problem. In clustering and classification, greedy algorithms can
be used to optimize the selection of clusters or classes.
5. Image processing: Greedy algorithms can be used to solve a wide range of image
processing problems, such as image compression, denoising, and segmentation.
For example, Huffman coding is a greedy algorithm that can be used to compress
digital images by efficiently encoding the most frequent pixels.
6. Combinatorial optimization: Greedy algorithms can be used to solve
combinatorial optimization problems, such as the traveling salesman problem,
graph coloring, and scheduling. Although these problems are typically NP-hard,
greedy algorithms can often provide close-to-optimal solutions that are practical
and efficient.
7. Game theory: Greedy algorithms can be used in game theory applications, such as
finding the optimal strategy for games like chess or poker. In these applications,
greedy algorithms can be used to identify the most promising moves or actions at
each turn, based on the current state of the game.
8. Financial optimization: Greedy algorithms can be used in financial applications,
such as portfolio optimization and risk management. In portfolio optimization,
greedy algorithms can be used to select a subset of assets that are most likely to
provide the best return on investment, based on historical data and current market
trends.
15. Let’s consider the case if it is needed to send the packet from the node ‘A’ to node ‘E’
in the graph shown below through the path which gives the least cost for routing. Cost
for routing for a node to another node is indicated in the link which connects those two
nodes. Hint: Dijkstra's algorithm
Solution:
Step 1:

Step 2:

Step 3:

Module 5-Backtracking

1. How does backtracking algorithm work? 4 Marks


In any backtracking algorithm, the algorithm seeks a path to a feasible solution that
includes some intermediate checkpoints. If the checkpoints do not lead to a viable
solution, the problem can return to the checkpoints and take another path to find a
solution. Consider the following scenario:

1. In this case, S represents the problem's starting point. You start at S and work your
way to solution S1 via the midway point M1. However, you discovered that solution
S1 is not a viable solution to our problem. As a result, you backtrack (return) from
S1, return to M1, return to S, and then look for the feasible solution S2. This process
is repeated until you arrive at a workable solution.

2. S1 and S2 are not viable options in this case. According to this example, only S3 is a
viable solution. When you look at this example, you can see that we go through all
possible combinations until you find a viable solution. As a result, you refer to
backtracking as a brute-force algorithmic technique.

3. A "space state tree" is the above tree representation of a problem. It represents all
possible states of a given problem (solution or non-solution).

2. List down the steps involved in back tracking 4 Marks

The final algorithm is as follows:

 Step 1: Return success if the current point is a viable solution.


 Step 2: Otherwise, if all paths have been exhausted (i.e., the current point is an
endpoint), return failure because there is no feasible solution.

 Step 3: If the current point is not an endpoint, backtrack and explore other points,
then repeat the preceding steps.

3. When to Use a Backtracking Algorithm? 4 Marks

There are the following scenarios in which you can use the backtracking:

 It is used to solve a variety of problems. You can use it, for example, to find a
feasible solution to a decision problem.

 Backtracking algorithms were also discovered to be very effective for solving


optimization problems.

 In some cases, it is used to find all feasible solutions to the enumeration problem.

 Backtracking, on the other hand, is not regarded as an optimal problem-solving


technique. It is useful when the solution to a problem does not have a time limit.


4. List and explain applications of backtracking 4 Marks

The backtracking algorithm has various practical applications, including:

1. Finding Hamiltonian Paths in a Graph:
Backtracking can be used to find all possible Hamiltonian paths in a graph, where each vertex
is visited exactly once. This is useful in optimizing travel routes or exploring graph
connectivity.

2. Solving the N-Queens Problem:
Backtracking is commonly employed to solve the N-Queens problem, which involves placing N
queens on an NxN chessboard without any two queens attacking each other. It helps in finding
all the distinct solutions or a single valid solution.

3. Maze Solving:
Backtracking algorithms are applied to solve maze problems, where the objective is to find a
path from the starting point to the destination. By exploring different paths and backtracking
when reaching dead ends, the algorithm determines a valid solution.

4. Knight's Tour Problem:
The backtracking algorithm is utilized to solve the Knight's Tour problem, which involves
finding a sequence of moves for a knight on a chessboard to visit every square exactly once.
It helps in identifying all possible tours or a single valid tour.
6. How does the backtracking algorithm differ from other search algorithms? Can the
backtracking algorithm handle problems with a large search space? 6Marks

backtracking algorithm differ from other search algorithms?


The backtracking algorithm is different from other search algorithms in that it
systematically explores all possible solutions by incrementally building a
solution candidate and backtracking whenever necessary. It exhaustively
searches the entire solution space, whereas other algorithms may use heuristics
or pruning techniques to optimize the search process.
Can the backtracking algorithm handle problems with a large search space?
The backtracking algorithm explores all possible solutions, which can be time-
consuming and resource-intensive for problems with large search spaces. In
such cases, optimization techniques like pruning or heuristics can be applied to
reduce the search space and improve the algorithm’s efficiency.

7. How do I determine the constraints or conditions for backtracking? What happens if


there is no valid solution in the search space? 6Marks

How do I determine the constraints or conditions for backtracking?


The constraints or conditions for backtracking depend on the specific problem
you are trying to solve. They define the rules that must be satisfied at each step
of the solution-building process. Understanding the problem domain and
requirements is crucial for identifying the constraints and formulating the
backtracking algorithm accordingly.
What happens if there is no valid solution in the search space?
If there is no valid solution in the search space, the backtracking algorithm will
exhaustively explore all possibilities and eventually backtrack to the previous
decision point. At that point, the algorithm might terminate without finding a
solution, indicating that no valid solution exists for the given problem.

8. Draw state space tree for N queens problem with 4 *4 chess board having 4 queens
Q1,Q2,Q3,Q4. 10 Marks

The N Queen is the problem of placing N chess queens on an N×N chessboard


so that no two queens attack each other. For example, the following is a solution
for the 4 Queen problem.
N Queen Problem using Backtracking:
The idea is to place queens one by one in different columns, starting from the
leftmost column. When we place a queen in a column, we check for clashes with
already placed queens. In the current column, if we find a row for which there is
no clash, we mark this row and column as part of the solution. If we do not find
such a row due to clashes, then we backtrack and return false.
Below is the recursive tree of the above approach:
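A C sketch of this column-by-column placement for the 4×4 board (names are assumptions); it prints the first solution found, with queens in rows 2, 4, 1, 3 of columns 1 to 4:

#include <stdio.h>

#define N 4
int board[N][N];

int isSafe(int row, int col)
{
    for (int i = 0; i < col; i++)                       /* same row to the left */
        if (board[row][i]) return 0;
    for (int i = row, j = col; i >= 0 && j >= 0; i--, j--)
        if (board[i][j]) return 0;                      /* upper-left diagonal  */
    for (int i = row, j = col; i < N && j >= 0; i++, j--)
        if (board[i][j]) return 0;                      /* lower-left diagonal  */
    return 1;
}

int solve(int col)
{
    if (col == N) return 1;                             /* all queens placed    */
    for (int row = 0; row < N; row++)
        if (isSafe(row, col))
        {
            board[row][col] = 1;                        /* place queen          */
            if (solve(col + 1)) return 1;
            board[row][col] = 0;                        /* backtrack            */
        }
    return 0;                                           /* no row works here    */
}

int main()
{
    if (solve(0))
        for (int i = 0; i < N; i++)
        {
            for (int j = 0; j < N; j++) printf("%d ", board[i][j]);
            printf("\n");
        }
    return 0;
}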
9. How backtracking approach is used to solve sum of subset problem 6
Marks
In the naive method to solve a subset sum problem, the algorithm generates all
the possible permutations and then checks for a valid solution one by one.
Whenever a solution satisfies the constraints, mark it as a part of the solution. In
solving the subset sum problem, the backtracking approach is used for selecting
a valid subset. When an item is not valid, we will backtrack to get the previous
subset and add another element to get the solution. In the worst-case scenario,
backtracking approach may generate all combinations, however, in general, it
performs better than the naive approach. Follow the below steps to solve subset
sum problem using the backtracking approach:

 First, take an empty subset.


 Include the next element, which is at index 0 to the empty set.
 If the subset is equal to the sum value, mark it as a part of the solution.
 If the subset is not a solution and it is less than the sum value, add next
element to the subset until a valid solution is found.
 Now, move to the next element in the set and check for another solution until
all combinations have been tried.

10. For a given set {3, 34, 4, 12, 5, 2} and the target sum = 9. Define a function
and use recursive method to check whether there exists a subset with the
given sum or not. 6 Marks
#include <stdio.h>

int isSubsetSum(int set[], int n, int sum)
{
    // Base cases
    if (sum == 0) return 1;   // Found a subset with the given sum
    if (n == 0) return 0;     // No more elements left to explore

    // If the last element is greater than the sum, skip it
    if (set[n-1] > sum) return isSubsetSum(set, n-1, sum);

    // Check two possibilities:
    // 1. Exclude the last element from the subset
    // 2. Include the last element in the subset
    return isSubsetSum(set, n-1, sum) || isSubsetSum(set, n-1, sum - set[n-1]);
}

int main()
{
    int set[] = {3, 34, 4, 12, 5, 2};
    int sum = 9;
    int n = sizeof(set) / sizeof(set[0]);

    if (isSubsetSum(set, n, sum)) printf("Found a subset with given sum");
    else printf("No subset with given sum");

    return 0;
}

11. Find the minimum spanning tree (MST) by applying Prim's algorithm with B as the
source vertex. 10 Marks

Step 1

Step 2
Step 3

Step 4

Step 5

12. Define minimum spanning tree (MST) and explain working principle of Prims
algorithm. 10 Marks

Spanning tree - A spanning tree is the subgraph of an undirected connected graph.

Minimum Spanning tree - Minimum spanning tree can be defined as the spanning tree
in which the sum of the weights of the edge is minimum. The weight of the spanning
tree is the sum of the weights given to the edges of the spanning tree.
Prim's Algorithm is a greedy algorithm that is used to find the minimum spanning tree
from a graph. Prim's algorithm finds the subset of edges that includes every vertex of
the graph such that the sum of the weights of the edges can be minimized.

Prim's algorithm starts with the single node and explores all the adjacent nodes with all
the connecting edges at every step. The edges with the minimal weights causing no
cycles in the graph got selected.

How does the prim's algorithm work?

Prim's algorithm is a greedy algorithm that starts from one vertex and continue to add
the edges with the smallest weight until the goal is reached. The steps to implement the
prim's algorithm are given as follows

o First, we have to initialize an MST with the randomly chosen vertex.

o Now, we have to find all the edges that connect the tree in the above step with the
new vertices. From the edges found, select the minimum edge and add it to the tree.

o Repeat step 2 until the minimum spanning tree is formed.

The applications of Prim's algorithm are:

o Prim's algorithm can be used in network design.

o It can be used to connect all nodes of a network at minimum cost without forming cycles.

o It can also be used to lay down electrical wiring cables.


Module 1
1. Define algorithm and list the characteristics of algorithm

An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining
a required output for any legitimate input in a finite amount of time. Its characteristics are
input, output, definiteness, finiteness and effectiveness (see Module 1, Question 1 above).
2. Write the algorithm to find a factorial of a given number using recursion.
The factorial of a number is defined as:

f(n) = n * f(n-1) → for all n >0


f(0) = 1 → for n = 0
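A direct C sketch of this definition (the function name is an assumption):

#include <stdio.h>

long factorial(int n)
{
    if (n == 0) return 1;            /* f(0) = 1          */
    return n * factorial(n - 1);     /* f(n) = n * f(n-1) */
}

int main()
{
    printf("%ld\n", factorial(5));   /* prints 120 */
    return 0;
}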

3. Derive the time complexity for fibonacci of a given number.

algorithm F(n):
// INPUT
// n = Some non-negative integer
// OUTPUT
// The nth number in the Fibonacci Sequence
if n <= 1: return n
else: return F(n - 1) + F(n - 2)

The recursive algorithm above makes two recursive calls per invocation, giving the recurrence
T(n) = T(n-1) + T(n-2) + O(1), which grows exponentially, roughly O(2^n). Analyzing the time
complexity for an iterative algorithm, which keeps the last two values in variables and loops
up to n, is a lot more straightforward. In that case, the most costly operation is assignment.
Firstly, the assignments of F[0] and F[1] cost O(1) each. Secondly, the loop performs one
assignment per iteration and executes about n-1 times, costing a total of O(n).

Therefore, the iterative algorithm has a time complexity of O(n) + O(1) + O(1) = O(n).
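A sketch of the iterative version referred to above (names are assumptions):

#include <stdio.h>

int fib(int n)
{
    if (n <= 1) return n;
    int prev = 0, curr = 1;
    for (int i = 2; i <= n; i++)   /* one assignment per iteration */
    {
        int next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;
}

int main()
{
    printf("%d\n", fib(10));   /* prints 55 */
    return 0;
}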

4. List various basic asymptotic efficiency classes?


The various basic efficiency classes are

• Constant: 1
• Logarithmic: log n
• Linear: n
• N-log-n: n log n
• Quadratic: n^2
• Cubic: n^3
• Exponential: 2^n
• Factorial: n!
5. Explain briefly any two asymptotic notations.

Big Omega Notations:

A function t(n) is said to be in Ω(g(n)) , denoted t(n) Є Ω((g(n)) , if t(n) is bounded below by some
positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and
some nonnegative integer n0 such that t(n) ≥cg(n) for all for all n ≥ n0

Big Theta Notations :

A function t(n) is said to be in Θ(g(n)) , denoted t(n) Є Θ (g(n)) , if t(n) is bounded both above and
below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive
constants c1 and c2 and some nonnegative integer n0 such that c1 g(n) ≤t(n) ≤ c2g(n) for all n ≥
n0

Big 'Oh' notation:

A function t(n) is said to be in O(g(n)), denoted t(n) Є O(g(n)), if t(n) is bounded above by some
constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0.

6. Find the time complexity (upper bound) for the below iterative functions
1. A()
{
    int i = 1, s = 1;
    while (s <= n)
    {
        i++;
        s = s + i;
        printf("Ravi");
    }
}

Sol:

s: 1  3  6  10 ............... n
i: 1  2  3  4  ............... k

The loop stops when s exceeds n, i.e. when

    k(k+1)/2 > n
    k^2 + k > 2n
    k = O(√n)

∴ The time complexity is O(√n)

2. A()
{
    int i, j, k, n;
    for (i = 1; i <= n; i++)
    {
        for (j = 1; j <= i; j++)
        {
            for (k = 1; k <= 100; k++)
            {
                printf("Ravi");
            }
        }
    }
}

Sol:

i: 1        2        3        4        ... n
j: 1 time   2 times  3 times  4 times  ... n times
k: 100      200      300      400      ... n*100

Total = 100 + 2*100 + 3*100 + ... + n*100
      = 100(1 + 2 + 3 + ... + n)
      = 100 · n(n+1)/2

∴ O(n^2)

3. A()
{
    for (i = 1; i < n; i = i*2)
    {
        printf("Ravi");
    }
}

Sol:

i: 1    2    4    ...  n
   2^0  2^1  2^2  ...  2^k

The loop stops when 2^k = n, i.e. k = log2 n.

∴ O(log2 n)

4. A()
{
    int i, j, k;
    for (i = n/2; i <= n; i++)          // n/2 iterations
        for (j = 1; j <= n; j = 2*j)    // log2 n iterations
            for (k = 1; k <= n; k++)    // n iterations
                printf("Ravi");
}

Sol:
Total = (n/2) · log2 n · n

∴ O(n^2 log2 n)

5. Find the time complexity (upper bound) for the below recursive functions
1. A(n)
{
    if (n > 1)
        return A(n-1);
}
Sol:
T(n) = 1 + T(n-1)   ; n > 1
T(n) = 1            ; n = 1
T(n) = 1 + T(n-1) ................. (1)
T(n-1) = 1 + T(n-2) ............... (2)
T(n-2) = 1 + T(n-3) ............... (3)
Substitute (2) in (1):
T(n) = 1 + 1 + T(n-2)
     = 2 + T(n-2) ...... (4)
Substitute (3) in (4):
     = 3 + T(n-3)
....
After k substitutions:
     = k + T(n-k) ............... (5)
Then n - k = 1, i.e.
k = n - 1 .............. (6)
Substituting (6) in (5):
T(n) = (n-1) + T(n-(n-1))
     = (n-1) + T(1)
     = n - 1 + 1
     = n
T(n) = n
∴ O(n)

2. T(n) = n + T(n-1)   ; n > 1

T(n) = 1               ; n = 1

Sol:
T(n)   = n + T(n-1)………..(1)
T(n-1) = (n-1) + T(n-2)……..(2)
T(n-2) = (n-2) + T(n-3)……..(3)
Substituting (2) in (1):
T(n) = n + (n-1) + T(n-2)………..(4)
Substituting (3) in (4):
T(n) = n + (n-1) + (n-2) + T(n-3)
….
T(n) = n + (n-1) + … + (n-k+1) + T(n-k)……….(5)
The substitution stops when n - k = 1, i.e.
k = n - 1……….(6)
Substituting (6) in (5):
T(n) = n + (n-1) + … + 2 + T(1)
     = n + (n-1) + … + 2 + 1
     = n(n+1)/2
∴ O(n²)

Module 2
13. Apply bubble sort algorithm on the following set of integers: 8, 5, 7, 3, 2

Pass-1: [8 5 7 3 2] → [5 8 7 3 2] → [5 7 8 3 2] → [5 7 3 8 2] → [5 7 3 2 8]
Pass-2: [5 7 3 2 8] → [5 7 3 2 8] → [5 3 7 2 8] → [5 3 2 7 8]
Pass-3: [5 3 2 7 8] → [3 5 2 7 8] → [3 2 5 7 8]
Pass-4: [3 2 5 7 8] → [2 3 5 7 8]

Sorted array: 2 3 5 7 8
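
A minimal C sketch of bubble sort as applied above (our own implementation):

#include <stdio.h>

/* Bubble sort: each pass bubbles the largest remaining
   element to the end of the unsorted region. */
void bubble_sort(int A[], int n)
{
    for (int pass = 0; pass < n - 1; pass++) {
        for (int j = 0; j < n - 1 - pass; j++) {
            if (A[j] > A[j + 1]) {    /* out of order: swap */
                int tmp = A[j];
                A[j] = A[j + 1];
                A[j + 1] = tmp;
            }
        }
    }
}

int main(void)
{
    int A[] = {8, 5, 7, 3, 2};
    bubble_sort(A, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", A[i]);          /* prints 2 3 5 7 8 */
    return 0;
}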

14. Write an algorithm to find uniqueness of elements in an array and give the mathematical
analysis of this non-recursive algorithm with all steps

General Plan for Analyzing the Time Efficiency of Non-recursive Algorithms

1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation. (As a rule, it is located in the innermost loop.)
3. Check whether the number of times the basic operation is executed depends only on the size of
an input. If it also depends on some additional property, the worst-case, average-case, and, if
necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed-form formula for
the count or, at the very least, establish its order of growth.
Uniqueness of elements in an array :
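
A C sketch of the standard element-uniqueness algorithm (identifier names are our own):

#include <stdbool.h>

/* UniqueElements: returns true if all elements of A[0..n-1]
   are distinct, false otherwise. The basic operation is the
   comparison A[i] == A[j] in the innermost loop. */
bool unique_elements(const int A[], int n)
{
    for (int i = 0; i <= n - 2; i++)
        for (int j = i + 1; j <= n - 1; j++)
            if (A[i] == A[j])
                return false;
    return true;
}

Analysis, following the plan above: the input size is n, and the basic operation is the
comparison A[i] == A[j]. The count depends not only on n but on the input itself (the algorithm
may exit early), so the worst case — all elements distinct — is analyzed:

C_worst(n) = Σ(i=0 to n-2) Σ(j=i+1 to n-1) 1 = (n-1) + (n-2) + … + 1 = n(n-1)/2 Є Θ(n²)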
15. Define Knapsack problem and apply it on the following set of data having bag capacity m = 15

Objects 1 2 3 4 5 6 7
Profits 10 5 15 7 6 18 3
Weights 2 3 5 7 1 4 1

Solun:
In the (fractional) knapsack problem we are given n objects, each with a profit pi and a weight
wi, and a knapsack of capacity m. We must choose fractions xi (0 ≤ xi ≤ 1) of the objects so that
∑ xiwi ≤ m and ∑ xipi is maximized. The greedy method picks objects in decreasing order of the
profit/weight ratio.

Objects   1     2     3     4     5     6     7
Profits  10     5    15     7     6    18     3
Weights   2     3     5     7     1     4     1
p/w       5    1.67   3     1     6    4.5    3

Picking in decreasing p/w order (objects 5, 1, 6, 3, 7) uses 1 + 2 + 4 + 5 + 1 = 13 units of the
15 available; the remaining 2 units are filled with the fraction x2 = 2/3 of object 2.

∑ xiwi = 13 + (2/3)·3 = 15
∑ xipi = 6 + 10 + 18 + 15 + 3 + (2/3)·5 = 55.33
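
A C sketch of the greedy fractional-knapsack computation used above (a minimal implementation that assumes the objects are already sorted by decreasing p/w; identifier names are our own):

#include <stdio.h>

/* Greedy fractional knapsack: take whole objects while they
   fit, then a fraction of the next one. Assumes the arrays
   are pre-sorted in decreasing order of profit/weight. */
double fractional_knapsack(const double p[], const double w[],
                           int n, double m)
{
    double profit = 0.0, remaining = m;
    for (int i = 0; i < n && remaining > 0; i++) {
        if (w[i] <= remaining) {      /* take the whole object */
            profit += p[i];
            remaining -= w[i];
        } else {                      /* take a fraction of it */
            profit += p[i] * (remaining / w[i]);
            remaining = 0;
        }
    }
    return profit;
}

int main(void)
{
    /* Objects from question 15, sorted by p/w: 5, 1, 6, 3, 7, 2, 4 */
    double p[] = {6, 10, 18, 15, 3, 5, 7};
    double w[] = {1,  2,  4,  5, 1, 3, 7};
    printf("%.2f\n", fractional_knapsack(p, w, 7, 15)); /* prints 55.33 */
    return 0;
}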

16. Apply knapsack algorithm on the following set of data

Objects: 1 2 3 4 5 6 7

Profit (P): 5 10 15 7 8 9 4

Weight(w): 1 3 5 4 1 3 2

W (Weight of the knapsack): 15

n (no of items): 7

Solution:

Object Profit Weight Remaining weight

5 8 1 15 - 1 = 14

1 5 1 14 - 1 = 13

2 10 3 13 - 3 = 10

3 15 5 10 - 5 = 5

6 9 3 5-3=2

7 4 2 2-2=0
Object 4 (weight 4) cannot be included, since no capacity remains.
The maximum profit is 8 + 5 + 10 + 15 + 9 + 4 = 51.

17. Write and explain selection sort algorithm with example of your choice.

Algorithm: Selection-Sort(A)

for i ← 1 to n-1 do
    min_j ← i
    min_x ← A[i]
    for j ← i+1 to n do
        if A[j] < min_x then
            min_j ← j
            min_x ← A[j]
    A[min_j] ← A[i]
    A[i] ← min_x
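
A runnable C version of the same algorithm (0-indexed; a sketch, not part of the original pseudocode):

#include <stdio.h>

/* Selection sort: on each pass, find the minimum of the
   unsorted region and swap it into place. */
void selection_sort(int A[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min_j = i;                /* index of current minimum */
        for (int j = i + 1; j < n; j++)
            if (A[j] < A[min_j])
                min_j = j;
        int tmp = A[min_j];           /* swap minimum into place */
        A[min_j] = A[i];
        A[i] = tmp;
    }
}

int main(void)
{
    int A[] = {14, 33, 27, 10, 35, 19};
    selection_sort(A, 6);
    for (int i = 0; i < 6; i++)
        printf("%d ", A[i]);          /* prints 10 14 19 27 33 35 */
    return 0;
}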

Consider, as an example, an unsorted array whose first element is 14, whose second element is
33, and whose smallest value is 10. For the first position in the sorted list, the whole list is
scanned sequentially. Starting from the first position, where 14 is stored presently, we search
the whole list and find that 10 is the lowest value.

So we swap 14 with 10. After one iteration, 10, which happens to be the minimum value in
the list, appears in the first position of the sorted list.
For the second position, where 33 is residing, we start scanning the rest of the list in a
linear manner.

We find that 14 is the second lowest value in the list and it should appear at the second
place. We swap these values.

After two iterations, two least values are positioned at the beginning in a sorted manner.

The same process is applied to the rest of the items in the array.
Module 3
18. Write and explain binary search algorithm with an example

Binary Search, also known as half-interval search is one of the most popular search
techniques to find elements in a sorted array. Here, you have to make sure that the array is
sorted.
The algorithm follows the divide-and-conquer approach: the complete array is divided into two
halves and the element to be searched is compared with the middle element of the list. If the
element to be searched is less than the middle element, the search is narrowed down to the
first half of the array; otherwise, the search continues in the second half of the list.

Binary Search Example

Consider the following array and the search element to understand the binary search
technique.
Array considered: 09 17 25 34 49
Element to be searched: 34
Step 1: Start the binary search by computing middle = (left + right)/2. Here, left = 0 and
right = 4, so middle = 2. This means 25 is the middle element of the array.
Step 2: Now compare 25 with the search element, i.e. 34. Since 25 < 34, set left =
middle + 1 = 3 and keep right = 4.
Step 3: So, the new middle = (3 + 4)/2 = 3 (integer division truncates 3.5 to 3).
Step 4: Now, if you observe, a[middle] equals the element to be searched. This implies that
the element is found at a[3].
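
An iterative C sketch matching the steps above (identifier names are our own):

#include <stdio.h>

/* Binary search on a sorted array: returns the index of key,
   or -1 if key is absent. */
int binary_search(const int a[], int n, int key)
{
    int left = 0, right = n - 1;
    while (left <= right) {
        int middle = (left + right) / 2;
        if (a[middle] == key)
            return middle;            /* found */
        else if (a[middle] < key)
            left = middle + 1;        /* search the right half */
        else
            right = middle - 1;       /* search the left half */
    }
    return -1;                        /* not present */
}

int main(void)
{
    int a[] = {9, 17, 25, 34, 49};
    printf("%d\n", binary_search(a, 5, 34));  /* prints 3 */
    return 0;
}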

19. Write and explain quick sort algorithm with an example.


QuickSort(A, p, r)
{
    if (p < r)
    {
        q = Partition(A, p, r);
        QuickSort(A, p, q - 1);
        QuickSort(A, q + 1, r);
    }
}

Partition(A, p, r)
{
    x = A[r];                  // the last element is the pivot
    i = p - 1;
    for (j = p to r - 1)
    {
        if (A[j] <= x)
        {
            i = i + 1;
            exchange A[i] and A[j];
        }
    }
    exchange A[i+1] and A[r];  // place the pivot in its final position
    return i + 1;
}
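
As an example (an array of our own choosing), take A = (5, 3, 8, 1, 4) with p = 1, r = 5:

• Partition picks x = 4 as the pivot and sweeps j from 1 to 4: 3 and 1 are ≤ 4 and are moved to
the left, giving (3, 1, 8, 5, 4); the final exchange puts the pivot in place, yielding
(3, 1, 4, 5, 8), and Partition returns q = 3.
• QuickSort then recurses on the left part (3, 1), which partitions to (1, 3), and on the right
part (5, 8), which is already in order.
• Final sorted array: (1, 3, 4, 5, 8).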
20. Apply Master’s theorem to solve recurrence relations of the form

T(n) = a·T(n/b) + θ(n^k · log^p n)

Here, a ≥ 1, b > 1, k ≥ 0 and p is a real number.

Then, we consider the following cases-

Case-01:

If a > b^k, then T(n) = θ(n^(logb a))

Case-02:

If a = b^k and

• If p < -1, then T(n) = θ(n^(logb a))
• If p = -1, then T(n) = θ(n^(logb a) · log log n)
• If p > -1, then T(n) = θ(n^(logb a) · log^(p+1) n)

Case-03:

If a < b^k and

• If p < 0, then T(n) = O(n^k)
• If p ≥ 0, then T(n) = θ(n^k · log^p n)

Solve the following recurrence relation using Master’s theorem-

T(n) = 3T(n/2) + n²

Solution-

We compare the given recurrence relation with T(n) = a·T(n/b) + θ(n^k · log^p n).

Then, we have-

a=3

b=2
k=2

p=0

Now, a = 3 and b^k = 2² = 4.

Clearly, a < b^k.

So, we follow case-03.

Since p = 0 ≥ 0, we have-

T(n) = θ(n^k · log^p n)

T(n) = θ(n² · log⁰ n)

Thus,

T(n) = θ(n²)

21. Briefly explain working of insertion sort algorithm with an example.

Solun:

To understand the working of the insertion sort algorithm, let's take an unsorted array. It will be easier
to understand insertion sort via an example.

Let the elements of the array be: 12, 31, 25, 8, 32, 17.
Initially, the first two elements are compared in insertion sort.

Here, 31 is greater than 12. That means both elements are already in ascending order. So, for now, 12 is
stored in a sorted sub-array.
Now, move to the next two elements and compare them.

Here, 25 is smaller than 31. So, 31 is not at the correct position. Now, swap 31 with 25. Along with
swapping, insertion sort will also check 25 against all elements in the sorted sub-array.

For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the sorted
array remains sorted after swapping.

Now, two elements in the sorted array are 12 and 25. Move forward to the next elements that are 31
and 8.

Elements 31 and 8 are not in order. So, swap them.

After swapping, elements 25 and 8 are unsorted.

So, swap them.

Now, elements 12 and 8 are unsorted.


So, swap them too.

Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are 31 and 32.

Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.

Move to the next elements that are 32 and 17.

17 is smaller than 32. So, swap them.

Swapping makes 31 and 17 unsorted. So, swap them too.

Now, swapping makes 25 and 17 unsorted. So, perform swapping again.


Now, the array is completely sorted.
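
A compact C sketch of insertion sort (our own; it shifts elements instead of repeatedly swapping, but performs the same insertions described above):

#include <stdio.h>

/* Insertion sort: grows a sorted prefix one element at a
   time, shifting larger elements one position right. */
void insertion_sort(int A[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = A[i];               /* element to insert */
        int j = i - 1;
        while (j >= 0 && A[j] > key) {
            A[j + 1] = A[j];          /* shift right */
            j--;
        }
        A[j + 1] = key;               /* insert into place */
    }
}

int main(void)
{
    int A[] = {12, 31, 25, 8, 32, 17};
    insertion_sort(A, 6);
    for (int i = 0; i < 6; i++)
        printf("%d ", A[i]);          /* prints 8 12 17 25 31 32 */
    return 0;
}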
