
BUDDHA SERIES

(Unit Wise Solved Question & Answers)

Course – B.Tech. (CSE)


College – Buddha Institute of Technology
(AKTU CODE-525)

Department: Computer Science & Engineering

Subject: Design & Analysis of Algorithms (BCS-503)

Faculty Name: Dr. Abhinandan Tripathi


Unit - 1

Q1. Define Algorithm. [AKTU 2021, 2020]

Ans. An algorithm is a finite sequence of instructions, each of which has a clear meaning and can
be performed with a finite amount of effort in a finite length of time.
An algorithm is any well-defined computational procedure that takes some value, or set of values,
as input and produces some value, or set of values, as output. It is a sequence of computational
steps that transform the input into the output.
Equivalently, it is a finite set of instructions which, if followed, accomplish a particular task.
In addition, every algorithm must satisfy the following criteria:
1. Input: zero or more quantities are externally supplied.
2. Output: at least one quantity is produced.
3. Definiteness: each instruction must be clear and unambiguous.
4. Finiteness: in all cases the algorithm must terminate after a finite number of steps.
5. Effectiveness: each instruction must be sufficiently basic that it can be carried out.

Q2. How do you analyze the performance of an algorithm in different cases? [AKTU 2022]
Ans: Analyzing the performance of an algorithm in different cases involves evaluating its
efficiency, accuracy, and scalability under various inputs and scenarios. This process helps you
understand how well the algorithm performs and whether it meets your requirements. Here's a
step-by-step guide on how to analyze the performance of an algorithm in different cases:

1. Analyze Time Complexity: Analyze the algorithm's time complexity in terms of Big O
notation. This helps you understand how the algorithm's runtime grows as the input size
increases. Perform worst-case, average-case, and best-case analyses to identify the
algorithm's performance characteristics.

2. Analyze Space Complexity: Assess the algorithm's space complexity to understand its
memory requirements as the input size changes. Consider factors like memory usage, data
structures, and temporary storage.

Q3. Derive the time complexity of Merge sort. [AKTU 2022].

ANS: Merge Sort is a popular sorting algorithm that follows the divide-and-conquer approach
to sort an array of elements. Here's the derivation of its time complexity:

1.Divide: The array is divided into two halves repeatedly until each subarray contains only
one element. This takes log₂(n) divisions, where "n" is the number of elements in the array.

2. Conquer: Each subarray of size one is considered sorted, which takes constant time O(1).

3.Merge: The key operation is the merging of two sorted subarrays. The merge operation
compares and combines the elements of the subarrays while maintaining the sorted order.
For an array of size "n," the merge operation takes O(n) time.
Now, let's consider the total time complexity of Merge Sort:

In the "Divide" step, the array is divided log₂(n) times.

In the "Merge" step, the subarrays merged at each level of recursion have sizes summing to n, so
the merging work per level takes O(n) time; across log₂(n) levels, the total time spent in the "Merge" step is O(n log n).

Since the "Divide" and "Conquer" steps contribute less to the time complexity compared to
the "Merge" step, the overall time complexity of Merge Sort is O(n log n).

In summary, the time complexity of Merge Sort is O(n log n), where "n" is the number of
elements in the array. Merge Sort's efficient time complexity makes it a popular choice for
sorting large datasets.

Q4. Solve the recurrence (i) T(n) = 3T(n/4) + cn² using the recursion tree method [AKTU 2022].

(ii) T(n) = n + 2T(n/2) using the iteration method. (Given T(1) = 1) [AKTU 2022].

ANS: (i) Convert the recurrence into a tree:


– Each node represents the cost incurred at various levels of recursion
– Sum up the costs of all levels

• Subproblem size at level i is: n/4^i

• Subproblem size hits 1 when n/4^i = 1 ⇒ i = log₄ n
• Cost of a node at level i = c(n/4^i)²
• Number of nodes at level i = 3^i ⇒ the last level has 3^(log₄ n) = n^(log₄ 3) nodes
• The total cost T(n) is given by:

T(n) = Σ_{i=0}^{log₄ n - 1} (3/16)^i · cn² + Θ(n^(log₄ 3))
     ≤ cn² · Σ_{i=0}^{∞} (3/16)^i + Θ(n^(log₄ 3))
     = cn² · 1/(1 - 3/16) + Θ(n^(log₄ 3))
     = (16/13) · cn² + Θ(n^(log₄ 3)) = O(n²)

(ii) T (n) = n + 2T (n/2) using Iteration method. (Given T(1)=1)


T(n) = n + 2T(n/2)
Substituting T(n/2) = n/2 + 2T(n/4):
= n + 2(n/2 + 2T(n/4))
= n + n + 4T(n/4)
= n + n + 4(n/4 + 2T(n/8))
= n + n + n + 8T(n/8)
… after k steps:
= kn + 2^k · T(n/2^k)
Assume k = lg n, so that n = 2^k and T(n/2^k) = T(1) = 1:
= n lg n + n · T(1)
= n lg n + n = Θ(n lg n)

Q5. Write Merge sort algorithm? [AKTU 2022]

ANS: To sort an array A[p . . r]:

• Divide
– Divide the n-element sequence to be sorted into two subsequences of n/2 elements
each
• Conquer
– Sort the subsequences recursively using merge sort
– When the size of the sequences is 1 there is nothing more to do
• Combine
– Merge the two sorted subsequences

Alg.: MERGE-SORT(A, p, r)

1) if p < r Check for base case


2) then q ← ⌊(p + r)/2⌋ Divide
a. MERGE-SORT(A, p, q) Conquer
b. MERGE-SORT(A, q + 1, r) Conquer
c. MERGE(A, p, q, r) Combine

Initial call: MERGE-SORT(A, 1, n)

Algo.: MERGE(A, p, q, r)

1. Compute n1 ← q - p + 1 and n2 ← r - q
2. Copy the first n1 elements A[p . . q] into L[1 . . n1] and the next n2 elements A[q + 1 . . r] into R[1 . . n2]
3. L[n1 + 1] ← ∞; R[n2 + 1] ← ∞ (sentinels)
4. i ← 1; j ← 1
5. for k ← p to r
6. do if L[ i ] ≤ R[ j ]
7. then A[k] ← L[ i ]
8. i ←i + 1
9. else A[k] ← R[ j ]
10. j←j+1

Q6. What do you understand by stable and unstable sorting?
ANS: Stable and unstable sorting refer to two different behaviors exhibited by sorting algorithms
when they encounter elements with equal keys (values) during the sorting process.

1. Stable Sorting:
A sorting algorithm is considered stable if it maintains the relative order of equal elements as
they appear in the original input. In other words, if you have two elements with equal keys, and
one appears before the other in the initial unsorted array, a stable sorting algorithm ensures that
their order remains the same in the sorted array.
Stable sorting is particularly useful in situations where you want to preserve the original order of
elements with equal keys. For example, if you're sorting a list of people by their ages and two
people have the same age, a stable sorting algorithm will keep them in the same order they were
in before sorting.
2. Unstable Sorting:
An unstable sorting algorithm does not guarantee the preservation of the relative order of equal
elements. When sorting elements with equal keys, an unstable algorithm might change the order
of these elements from their original sequence in the input array.
Unstable sorting is not concerned with maintaining the original order of equal elements. It might
be faster or require less memory compared to stable sorting algorithms, but it does not guarantee
consistent behavior when dealing with equal keys.

Example:
Consider an array of records containing names and ages:[("Alice", 25), ("Bob", 30), ("Carol", 25),
("David", 28)]
If you were to sort this array by ages using a stable sorting algorithm, the result would be:
[("Alice", 25), ("Carol", 25), ("David", 28), ("Bob", 30)]
Notice that "Alice" and "Carol" maintain their original order since the sorting algorithm is stable.

However, if you used an unstable sorting algorithm, the result might be:
[("Carol", 25), ("Alice", 25), ("David", 28), ("Bob", 30)] Here, the order of "Alice" and "Carol" got
switched during the sorting process.

In summary, stable sorting algorithms preserve the order of equal elements, while unstable
sorting algorithms do not guarantee this preservation and may change the order of equal
elements.

Q7. Construct a Max-Heap using the following list of key values: [AKTU 2022]

4 1 3 2 16 9 10 14 8 7
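Ans. A max-heap is built bottom-up by calling MAX-HEAPIFY on each internal node, from the last internal node up to the root. A worked construction for this list (the array after each heapify call is shown):

Initial array: [4, 1, 3, 2, 16, 9, 10, 14, 8, 7]
Heapify index 5 (key 16): its only child (7) is smaller — no change.
Heapify index 4 (key 2): swap with larger child 14 → [4, 1, 3, 14, 16, 9, 10, 2, 8, 7]
Heapify index 3 (key 3): swap with larger child 10 → [4, 1, 10, 14, 16, 9, 3, 2, 8, 7]
Heapify index 2 (key 1): swap with 16, then with 7 → [4, 16, 10, 14, 7, 9, 3, 2, 8, 1]
Heapify index 1 (key 4): swap with 16, then 14, then 8 → [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]

The resulting max-heap:

          16
        /    \
      14      10
     /  \    /  \
    8    7  9    3
   / \  /
  2   4 1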


Q8. What is a recurrence relation? How is a recurrence solved using the Master's theorem?

[AKTU 2021]
Ans. A recurrence relation is a mathematical equation that defines a sequence of values based on
one or more previous terms in the sequence. In the context of computer science and algorithms,
recurrence relations are commonly used to describe the time complexity of recursive algorithms
or divide-and-conquer algorithms. Solving a recurrence relation involves finding a closed-form
solution for the sequence of values it defines.

The Master's Theorem is a technique used to solve certain types of recurrence relations that arise
in the analysis of divide-and-conquer algorithms. It provides a convenient way to determine the
asymptotic (big-O) complexity of algorithms without explicitly solving the recurrence relation.
The Master's Theorem is typically applied to recurrences of the form:

T(n) = aT(n/b) + f(n)


Where:
- T(n) is the time complexity of the algorithm for input size n.
- a is the number of subproblems the algorithm divides the original problem into.
- b is the factor by which the input size is reduced in each subproblem.
- f(n) represents the time complexity of the work done outside the recursive calls (e.g., combining
the subproblem solutions).

The Master's Theorem provides three cases, each with specific conditions, for solving
recurrences of the form T(n) = aT(n/b) + f(n). Comparing f(n) with n^(log_b a):

Case 1: If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
Case 2: If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
Case 3: If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some
constant c < 1 and all sufficiently large n (the regularity condition), then T(n) = Θ(f(n)).

Q9. What is asymptotic notation? Explain Omega (Ω) notation? [AKTU 2021]
Ans. Asymptotic notation is a mathematical framework used in computer science and
mathematics to describe the behavior of functions as their input values become very large
(approaching infinity). It provides a concise way to analyze and compare the growth rates of
functions, which is especially useful when analyzing the time and space complexity of algorithms.
Omega notation describes a lower bound on the growth rate of a function. In other words, it
provides information about the best-case or lower limit of how a function behaves as its input
increases.
Formally, a function g(n) is said to be in Ω(f(n)) if there exist positive constants c and n₀ such that
for all values of n greater than or equal to n₀, the following inequality holds:

g(n) ≥ c * f(n)

In simpler terms, this means that the function g(n) grows at least as fast as c times the function
f(n) for sufficiently large values of n.

For example, if we have a function g(n) = 2n² + 3n and we say g(n) is Ω(n²), we are stating that
the growth rate of g(n) is at least quadratic. This means that no matter what constant factors or
lower-order terms are involved in the function, as long as the growth is at least quadratic for
sufficiently large n, we can express it using Ω(n²).
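For instance, choosing c = 2 and n₀ = 1 verifies the definition here: 2n² + 3n ≥ 2n² for all n ≥ 1.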
Omega notation is useful for understanding lower bounds on algorithm performance.

Q10. Solve the recurrence T(n) = 4T(n/2) + n² [AKTU 2021]


Ans To solve the recurrence T(n) = 4T(n/2) + n² using the Master's Theorem, we need to identify
the values of a, b, and f(n) in the recurrence relation:

T(n) = a * T(n/b) + f(n)

In this case:
- a = 4 (the number of subproblems)
- b = 2 (the factor by which the input size is reduced)
- f(n) = n² (the time complexity of the work done outside the recursive calls)

Now, we need to calculate n^(log_b a) = n^(log₂ 4) = n².

Since f(n) = n² = Θ(n^(log₂ 4)), we are in Case 2 of the Master's Theorem.

According to Case 2 of the Master's Theorem, T(n) = Θ(n^(log_b a) · log n).

So, the solution to the recurrence relation T(n) = 4T(n/2) + n²

using the Master's Theorem is T(n) = Θ(n² log n).

Q11. Explain how an algorithm's performance is analyzed. [AKTU 2021]


Ans. Analyzing the performance of algorithms involves assessing how an algorithm's runtime
(time complexity) and memory usage (space complexity) grow as the input size increases. This
analysis provides valuable insights into the efficiency and scalability of algorithms. There are
several key steps and techniques involved in algorithm performance analysis:
1. Counting Basic Operations: Start by identifying the basic operations that contribute to the
algorithm's execution time. This step requires an understanding of the algorithm's high-level
structure and the operations performed in each step. For example, counting assignments,
comparisons, arithmetic operations, and memory accesses can give you a rough idea of the
algorithm's computational workload.

2. Asymptotic Analysis: Asymptotic notation, including Big O, Omega, and Theta notations, is
used to describe how an algorithm's performance scales with the input size in the worst, best, or
average case. Asymptotic analysis provides a high-level view of an algorithm's efficiency without
getting bogged down in constant factors or lower-order terms.

3. Worst-case, Best-case, and Average-case Analysis: Algorithms are often analyzed based on
their worst-case, best-case, and average-case scenarios. The worst-case analysis provides an
upper bound on the algorithm's performance, while the best-case analysis provides a lower
bound. Average-case analysis takes into account the probabilities of various inputs and their
corresponding runtimes. In some cases, amortized analysis is used to average the time taken over
a sequence of operations.

4. Recurrence Relations: Divide-and-conquer and recursive algorithms often result in


recurrence relations that describe the algorithm's time complexity in terms of its subproblem
sizes. Solving these relations using techniques like the Master's Theorem helps determine the
overall time complexity of the algorithm.

Q12. Write an algorithm for counting sort? Illustrate the operation of counting sort on the
following array: A={4, 0, 2, 0, 1, 3, 5, 4, 1, 3, 2, 3}: [AKTU 2021]
Ans. Counting Sort is a linear time sorting algorithm that works well when the range of input
values is small compared to the input size. It operates by counting the occurrences of each
distinct element in the input array and then using this count information to place the elements in
their sorted order. Here's the algorithm for Counting Sort:

Algo.: COUNTING-SORT(A, B, k)

1. for i ← 0 to k
2. do C[i] ← 0
3. for j ← 1 to length[A]
4. do C[A[j]] ← C[A[j]] + 1
5. // now C[i] contains the number of elements equal to i
6. for i ← 1 to k
7. do C[i] ← C[i] + C[i - 1]
8. // now C[i] contains the number of elements ≤ i
9. for j ← length[A] downto 1
10. do B[C[A[j]]] ← A[j]
11. C[A[j]] ← C[A[j]] - 1

Operation of Counting Sort on the array A = [4, 0, 2, 0, 1, 3, 5, 4, 1, 3, 2, 3]:


Step 1: Find the maximum value in the array (max = 5).
Step 2: Initialize the count array (count) of size (max + 1) with all elements set to 0.
count = [0, 0, 0, 0, 0, 0]
Step 3: Iterate through the input array (A) and increment the corresponding count in the count
array.
After this step, the count array becomes:
count = [2, 2, 2, 3, 2, 1]
Step 4: Modify the count array to store the cumulative count.
After this step, the count array becomes:
count = [2, 4, 6, 9, 11, 12]
Step 5: Initialize the output array (output) of the same size as the input array.
output = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Step 6: Iterate through the input array (A) in reverse order (positions 12 down to 1):
a. For A[12] = 3 (the last element):
- Look up the position of 3 in the count array: count[3] = 9.
- Place 3 in the output array at position 9: output[9] = 3.
- Decrement the count of 3 in the count array: count[3] = 8.

b. For A[11] = 2:
- Look up the position of 2 in the count array: count[2] = 6.
- Place 2 in the output array at position 6: output[6] = 2.
- Decrement the count of 2 in the count array: count[2] = 5.

... Repeat this process for the remaining elements in reverse order.

Step 7: The sorted output array is [0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5].

Final sorted array: [0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5]

Q13. Solve the following recurrence relation: [AKTU 2021]


i. T(n) = T(n-1) + n⁴
ii. T(n) = T(n/4) + T(n/2) + n² [AKTU 2021]

Ans. (i) T(n) = T(n-1) + n⁴
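Unrolling the recurrence by iteration:

T(n) = T(n-1) + n⁴
     = T(n-2) + (n-1)⁴ + n⁴
     = …
     = T(0) + 1⁴ + 2⁴ + … + n⁴

Since the sum Σ_{i=1}^{n} i⁴ = Θ(n⁵), we get T(n) = Θ(n⁵).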


ii. T(n) = T(n/4) + T(n/2) + n² [AKTU 2021]

Apply the recursion tree method:
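At the root the cost is n². At level 1 the two subproblems contribute (n/4)² + (n/2)² = (1/16 + 1/4)·n² = (5/16)·n², and each further level again contributes at most 5/16 of the previous level's cost, so the cost at level i is at most (5/16)^i · n². Summing the geometric series:

T(n) ≤ n² · Σ_{i=0}^{∞} (5/16)^i = n² · 1/(1 - 5/16) = (16/11)·n² = O(n²)

Since the root alone already costs n², T(n) = Ω(n²), and therefore T(n) = Θ(n²).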


Q14. Write an algorithm for insertion sort. Find the time complexity of Insertion sort in all cases.
[AKTU 2021]

Ans.
InsertionSort(A, n)
{
    for i = 2 to n {
        key = A[i]           // element to insert into the sorted prefix A[1 .. i-1]
        j = i - 1
        while (j > 0) and (A[j] > key) {
            A[j + 1] = A[j]  // shift larger elements one position to the right
            j = j - 1
        }
        A[j + 1] = key       // place the key in its correct position
    }
}
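For example, on A = {5, 2, 4, 6, 1, 3} the outer loop transforms the array as follows:
{5, 2, 4, 6, 1, 3} → {2, 5, 4, 6, 1, 3} → {2, 4, 5, 6, 1, 3} → {2, 4, 5, 6, 1, 3} → {1, 2, 4, 5, 6, 3} → {1, 2, 3, 4, 5, 6}.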

Analysis of Insertion Sort:

Insertion Sort is an elementary sorting algorithm that builds the final sorted array one item at a
time. It is generally not the most efficient sorting algorithm for large lists, but it performs well for
small lists or nearly sorted lists. The time complexity of Insertion Sort in different cases is as
follows:

1. Best Case (Already Sorted List):


In the best case scenario, where the input array is already sorted, Insertion Sort only needs to
make a single pass through the array to confirm that each element is in its correct position. The
time complexity in this case is O(n), where n is the number of elements in the array.

2. Average Case:
In the average case, Insertion Sort requires approximately O(n²) comparisons and swaps. It
involves comparing each element with the previous elements in the sorted portion of the array
and potentially swapping them to their correct positions.

3. Worst Case (Reverse Sorted List):


In the worst case scenario, where the input array is sorted in reverse order, Insertion Sort will
require the maximum number of comparisons and swaps for each element. This results in a time
complexity of O(n²), similar to the average case.

It's worth noting that while Insertion Sort has a worst-case time complexity of O(n²), its actual
performance can be better than other O(n²) sorting algorithms (like Bubble Sort or Selection
Sort) due to its adaptive nature. In practice, when dealing with small lists or lists that are already
partially sorted, Insertion Sort can perform quite efficiently.

In summary:
- Best Case: O(n)
- Average Case: O(n²)
- Worst Case: O(n²)

Q15. What do you mean by best case, average case, and worst case time complexity of an
algorithm? [AKTU 2020]
Ans. The best-case, average-case, and worst-case time complexity of an algorithm are ways to
describe the performance of the algorithm under different scenarios or inputs. They provide
insights into how the algorithm behaves in different situations and help us understand its
efficiency and scalability.

1. Best-Case Time Complexity: This refers to the minimum amount of time an algorithm takes
to complete when given the best possible input. In other words, it represents the scenario where
the algorithm performs optimally. Best-case time complexity gives an idea of the lower bound of
the algorithm's performance. However, it may not always be a realistic measure because the best-
case scenario might not occur frequently in real-world scenarios.

2. Average-Case Time Complexity: This represents the expected or average amount of time an
algorithm takes to complete when given a random or typical input. It takes into account the
probabilities of various inputs and their corresponding runtimes. Average-case time complexity
provides a more realistic assessment of an algorithm's performance in practice. However,
calculating the exact average-case complexity can be complex and may involve probability
distributions and statistical analysis.

3. Worst-Case Time Complexity: This is the maximum amount of time an algorithm takes to
complete when given the worst possible input. It represents the scenario where the algorithm
performs least optimally. Worst-case time complexity is often used in theoretical analysis
because it provides an upper bound on the algorithm's performance that holds for all inputs. It
helps developers ensure that the algorithm won't take more time than the worst case in any
situation.

Q16. Why do we use asymptotic notation in the study of algorithms? Explain in brief the various
asymptotic notations. [AKTU 2020]

Ans. Asymptotic notation plays a vital role in algorithm analysis, design, comparison, and
communication. It provides a powerful tool for understanding and predicting the performance
characteristics of algorithms, allowing us to make informed decisions when solving
computational problems.
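In brief, the commonly used asymptotic notations are:
1. Big-O (O): f(n) = O(g(n)) if there exist positive constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀. It expresses an asymptotic upper bound.
2. Omega (Ω): f(n) = Ω(g(n)) if there exist positive constants c and n₀ such that f(n) ≥ c·g(n) for all n ≥ n₀. It expresses an asymptotic lower bound.
3. Theta (Θ): f(n) = Θ(g(n)) if f(n) is both O(g(n)) and Ω(g(n)). It expresses an asymptotically tight bound.
4. Little-o (o): f(n) = o(g(n)) if f(n)/g(n) → 0 as n → ∞; an upper bound that is not tight.
5. Little-omega (ω): f(n) = ω(g(n)) if f(n)/g(n) → ∞ as n → ∞; a lower bound that is not tight.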


Q17. Solve the recurrence T(n) = 16T(n/4) + n² by using the Master method. [AKTU 2020].
Ans. To solve the recurrence relation T(n) = 16T(n/4) + n2 using the Master Theorem, we'll
compare its parameters to the standard form of the Master Theorem and determine the
appropriate case.

The standard form of the Master Theorem is:


T(n) = aT(n/b) + f(n)

In your recurrence relation:


- a = 16 (number of sub problems created)
- b = 4 (input size reduction factor)
- f(n) = n² (work done outside the recursive calls)

Let's compare f(n) = n² with n^(log_b a):

n^(log_b a) = n^(log₄ 16) = n²

Since f(n) = n² = Θ(n^(log₄ 16)), we fall into Case 2 of the Master Theorem.

In Case 2, if f(n) = Θ(n^(log_b a)), then the time complexity is Θ(n^(log_b a) · log n).

In this case:
- a = 16
- b = 4
- log_b a = log₄ 16 = 2

Therefore, the time complexity of the recurrence relation T(n) = 16T(n/4) + n² is
T(n) = Θ(n² log n) according to Case 2 of the Master Theorem.

Q18. What is the running time of heap sort on an array A of length n that is already sorted in
increasing order? [AKTU 2020]
Ans. The running time of Heap Sort on an array A of length n that is already sorted in increasing
order is still O(n log n). Here's why:
Heap Sort operates by first building a max-heap (or min-heap) from the input array and then
repeatedly extracting the maximum (or minimum) element from the heap and placing it at the
end of the array. While building the heap from an arbitrary array takes O(n) time, the extraction
step (removing the root element and restoring the heap property) takes O(log n) time.
In the worst case, Heap Sort's time complexity is O(n log n), regardless of the initial order of the
input array. Building the heap from an arbitrary array takes O(n) time; once the heap is
constructed, the extraction step is performed n times, each taking O(log n) time, and this
extraction phase dominates the running time.
Even if the array is already sorted in increasing order, Heap Sort gains nothing from that order:
the build step still takes O(n) time and the n extractions still take O(log n) each, so the total
remains O(n log n). However, it's worth noting that other sorting algorithms like Insertion Sort or
Tim Sort might perform better on partially or nearly sorted arrays due to their specific
characteristics and optimizations.
So, in summary, the running time of Heap Sort on an already sorted array in increasing order is
still O(n log n), which is the same as its worst-case time complexity.

Q19. Illustrate the function of Heap sort on the following array: A={25,57,48,37,12,92,86,33}.
[AKTU 2020]
Ans. Heap Sort algorithm on the given array A = {25, 57, 48, 37, 12, 92, 86, 33} step by step:

1. Build Heap (Heapify):


We start by building a max heap from the array A. The max heap property ensures that the root
of each subtree is greater than its children.
Original Array: 25, 57, 48, 37, 12, 92, 86, 33
Heapify:
          92
        /    \
      57      86
     /  \    /  \
   37    12 48   25
   /
  33

2. Sorting Phase (Extract Max and Re-heapify):

In each step of this phase, we swap the maximum element (the root) with the last element of the
heap, shrink the heap by one, and re-heapify. The extracted maxima accumulate in sorted order
at the end of the array.

Step 1: Extract Max (92) and Re-heapify

Extracted Max: 92
Heap after extraction:
          86
        /    \
      57      48
     /  \    /  \
   37    12 33   25

Step 2: Extract Max (86) and Re-heapify

Extracted Max: 86
Heap after extraction:
        57
       /  \
     37    48
    /  \   /
  25    12 33

Step 3: Extract Max (57) and Re-heapify
Extracted Max: 57
Heap after extraction:
       48
      /  \
    37    33
   /  \
 25    12
Step 4: Extract Max (48) and Re-heapify
Extracted Max: 48
Heap after extraction:
      37
     /  \
   25    33
   /
 12

Step 5: Extract Max (37) and Re-heapify

Extracted Max: 37
Heap after extraction:
    33
   /  \
 25    12

Step 6: Extract Max (33) and Re-heapify

Extracted Max: 33
Heap after extraction:
   25
   /
 12

Step 7: Extract Max (25) and Re-heapify

Extracted Max: 25
Heap after extraction:
 12
3. Resulting Sorted Array:
Since each extracted maximum is placed at the end of the remaining array, the final array is
sorted in increasing order: [12, 25, 33, 37, 48, 57, 86, 92].

Q20. Explain and write partitioning algorithm for Quick Sort. [AKTU 2020]

Ans. Quick Sort is a popular divide-and-conquer sorting algorithm that efficiently sorts an array
or a list by selecting a "pivot" element and partitioning the other elements into two sub-arrays or
sub-lists, one containing elements less than the pivot and the other containing elements greater
than the pivot. The sub-arrays are then recursively sorted. The key to Quick Sort's efficiency is its
partitioning algorithm.

Here's how the partitioning algorithm works:

Partition Algorithm for Quick Sort:
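A standard choice (shown here in the same pseudocode style used for the earlier algorithms) is the Lomuto partition scheme, which uses the last element as the pivot:

Algo.: PARTITION(A, p, r)

1. x ← A[r] // choose the last element as the pivot
2. i ← p - 1 // i marks the end of the "≤ pivot" region
3. for j ← p to r - 1
4. do if A[j] ≤ x
5. then i ← i + 1
6. exchange A[i] ↔ A[j]
7. exchange A[i + 1] ↔ A[r] // put the pivot between the two regions
8. return i + 1 // final position of the pivot

QUICK-SORT(A, p, r) then computes q ← PARTITION(A, p, r) and recursively sorts A[p . . q - 1] and A[q + 1 . . r]. Each call to PARTITION runs in time linear in the size of the subarray it is given.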

Q21. Quick sort is the fastest comparison sorting algorithm in the average case. Do you agree with
this statement? Justify your answer. [AKTU2019]

Ans. Yes, I agree with the statement that Quick Sort is generally considered to be the fastest
comparison-based sorting algorithm in the average case. Here's the justification for this
assertion:
1. Average Case Time Complexity: In the average case, Quick Sort exhibits an average time
complexity of O(n log n), where 'n' is the number of elements to be sorted. This performance is
highly efficient and faster than many other sorting algorithms.

2. Partitioning Strategy: Quick Sort's efficiency lies in its partitioning strategy. It selects a pivot
element and partitions the array into two sub arrays such that elements on one side of the pivot
are smaller, and elements on the other side are larger. This step is performed in linear time, O(n).

3. Divide and Conquer: Quick Sort follows a divide-and-conquer approach. After partitioning, it
recursively sorts the two sub arrays. This recursive process helps in reducing the problem size
and contributes to the overall efficiency of the algorithm.

4. Cache Efficiency: Quick Sort's localized movements and better cache utilization contribute to
its performance. The algorithm accesses elements sequentially and is well-suited for modern
computer architectures with memory hierarchies.

5. Low Constant Factors: Quick Sort tends to have lower constant factors compared to some
other sorting algorithms like Merge Sort or Heap Sort. This can make a noticeable difference in
practice for smaller inputs.

However, it's important to note that Quick Sort's performance can degrade in certain cases, such
as when the pivot selection is poor and leads to unbalanced partitions, resulting in a worst-case
time complexity of O(n²). This worst-case scenario can be mitigated through various techniques,
such as choosing a good pivot or using hybrid sorting algorithms.

In summary, Quick Sort's average-case time complexity of O(n log n) and its efficient partitioning
strategy make it one of the fastest comparison-based sorting algorithms in practice. However, it's
essential to consider the potential for worst-case scenarios and take appropriate measures to
handle them.

Q22. Solve the recurrence: T (n) = 50 T (n/49) + log n! [AKTU2019].

Ans. To solve the recurrence relation T(n) = 50T(n/49) + log(n!), we can use the Master Theorem.
The Master Theorem provides a way to analyze and solve recurrence relations of the form:

T(n) = aT(n/b) + f(n)

In this case, we have:


a = 50,
b = 49,
f(n) = log(n!).

Let's break down the steps to solve the recurrence using the Master Theorem:
Step 1: Estimate f(n):
We have f(n) = log(n!). The factorial function is defined as n! = n · (n - 1) · (n - 2) · … · 2 · 1, so
log(n!) = log(n) + log(n - 1) + … + log(2) + log(1)
By Stirling's approximation, log(n!) = Θ(n log n).

Step 2: Compute n^(log_b a):
log_b(a) = log₄₉(50) ≈ 1.005, so n^(log_b a) ≈ n^1.005.

Step 3: Compare f(n) with n^(log_b a):
Since log n grows more slowly than any positive power of n, n log n = O(n^(1.005 - ε)) for a small
enough ε > 0 (for example, ε = 0.002). Hence f(n) = Θ(n log n) is polynomially smaller than
n^(log_b a).

Step 4: Apply the Master Theorem Case 1:
If f(n) = O(n^(log_b a - ε)), then Case 1 applies and the solution to the recurrence is:

T(n) = Θ(n^(log_b a))

Therefore, the solution to the recurrence T(n) = 50T(n/49) + log(n!) is:

T(n) = Θ(n^(log₄₉ 50)) ≈ Θ(n^1.005)

Q23. Illustrate the operation of Quick sort on the array, A= (9, 14, 87, 4, 32, 86, 67) [AKTU2019].

Ans. Let's trace the operation of the Quick Sort algorithm on the given array A = (9, 14, 87, 4, 32,
86, 67), walking through the steps of partitioning and sorting.

1.Initial Array: A = (9, 14, 87, 4, 32, 86, 67)

2. Choosing a Pivot: Let's choose the first element, 9, as the pivot.

3.Partitioning: We'll rearrange the array so that elements smaller than the pivot are on the left,
and elements greater than the pivot are on the right.

After partitioning: A = (4, 9, 87, 14, 32, 86, 67)

Pivot (9) is now in its correct sorted position. Elements less than 9 are on the left, and elements
greater than 9 are on the right.

4. Recursive Step: Now we recursively apply the Quick Sort algorithm to the left and right sub
arrays.

Left sub array: A = (4)

Right sub array: A = (87, 14, 32, 86, 67)

5. Left Sub array (4): No further partitioning is needed; a single-element sub array is already
sorted.

6. Right Sub array (87, 14, 32, 86, 67): We choose a new pivot, let's say 87 (the first element in the
right sub array).

After partitioning: A = (67, 14, 32, 86, 87)

Pivot (87) is now in its correct sorted position. Elements less than 87 are on the left, and
elements greater than 87 are on the right.

7. Recursive Step for Right Sub array: Now we recursively apply Quick Sort to the two subarrays
formed by partitioning the right sub array.

Left sub array: A = (67, 14, 32, 86)

Right sub array: empty (87 was the largest element)

8. Left Sub array (67, 14, 32, 86): We choose a new pivot, say 67.

After partitioning: A = (32, 14, 67, 86)

Pivot (67) is in its correct sorted position.

9. Remaining Sub arrays: (32, 14) is partitioned around pivot 32 to give (14, 32); the single-element sub arrays (14) and (86) need no further work.

10. Final Sorted Array: After completing all the recursive steps, the array is fully sorted.

Sorted array: A = (4, 9, 14, 32, 67, 86, 87)

Q24. Solve the recurrence using recursion tree method: T (n) = T (n/2) + T (n/4) + T (n/8) + n
[AKTU 2019].
Ans: The time complexity of the recurrence relation T(n) = T(n/2) + T(n/4) + T(n/8) + n using
the recursion tree method:

n
/|\
T(n/2) T(n/4) T(n/8)
/|\ /|\ /|\
... ... ... ...
In each level of the recursion tree, every subproblem of size m spawns three recursive calls with
input sizes m/2, m/4, and m/8. Let's analyze the tree depth and work done at each level:

- Level 0: Work = n (from the initial call)

- Level 1: Work = n/2 + n/4 + n/8 = (7/8)n (from three recursive calls)
- Level 2: each level-1 node again contributes 7/8 of its own cost, so Work = (7/8)²n
- And so on...

In general, the work done at level k is:

Work at level k = n · (7/8)^k

The longest root-to-leaf path follows the n/2 branch, so the tree has height O(log n). To bound the
total work, we sum the per-level costs; since they form a geometric series with common ratio
7/8 < 1, the infinite series already gives an upper bound:

Total work ≤ n · (1 + 7/8 + (7/8)² + (7/8)³ + …)

Using the sum formula for an infinite geometric series with first term a = 1 and ratio r = 7/8:

Sum = a / (1 - r) = 1 / (1 - 7/8) = 8

Total work ≤ 8n

Since the root alone already costs n, T(n) = Ω(n) as well, so T(n) = Θ(n).

Hence, using the recursion tree method, the time complexity of the recurrence relation T(n) =
T(n/2) + T(n/4) + T(n/8) + n is Θ(n).

Q25. Solve the following recurrence using Master method: [AKTU2019]


T(n) = 4T(n/3) + n²

Ans:
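Here a = 4, b = 3, and f(n) = n², so n^(log_b a) = n^(log₃ 4) ≈ n^1.26.

Since f(n) = n² = Ω(n^(log₃ 4 + ε)) (for example, with ε = 0.5), we check the regularity condition of Case 3:

a · f(n/b) = 4 · (n/3)² = (4/9)n² ≤ c · n² with c = 4/9 < 1

The condition holds, so Case 3 of the Master Theorem applies:

T(n) = Θ(f(n)) = Θ(n²)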

Q26. Name the sorting algorithm that is most practically used and also write its Time Complexity.
[AKTU 2019]

Ans: One of the most practically used sorting algorithms is Quick Sort. Quick Sort is widely used
because of its efficiency and average-case time complexity. It is a divide-and-conquer algorithm
that works by selecting a 'pivot' element and partitioning the array into two sub-arrays –
elements less than the pivot and elements greater than the pivot. The sub-arrays are then
recursively sorted.
Time Complexity of Quick Sort:
- Best Case: O(n log n)
- Average Case: O(n log n)
- Worst Case: O(n²), but can be mitigated with proper pivot selection and optimizations

In the average and best cases, Quick Sort demonstrates excellent performance and is often faster
than many other sorting algorithms. Its ability to efficiently sort large datasets and its
adaptability to different data distributions make it a popular choice for practical use.

Q27. Find the time complexity of the recurrence relation


T(n)= n +T(n/10)+T(7n/5) [AKTU2019]

Ans: First observe that one of the two subproblems, 7n/5, is larger than n. Taken literally, the
recursion therefore never reaches a base case: the work at level k of the recursion tree is
n · (1/10 + 7/5)^k = n · (3/2)^k, which grows without bound. The recurrence is presumably a
misprint for T(n) = n + T(n/10) + T(7n/10), which we analyze with the recursion tree method.

Here's how the recursion tree looks for T(n) = n + T(n/10) + T(7n/10):

n
/\
T(n/10) T(7n/10)
/\ /\
T(n/100) T(7n/100) T(7n/100) T(49n/100)

In each level of the recursion tree, a subproblem of size m spawns two recursive calls with input
sizes m/10 and 7m/10. Let's analyze the work done at each level:

- Level 0: Work = n (from the initial call)

- Level 1: Work = n/10 + 7n/10 = (4/5)n (from two recursive calls)
- Level 2: Work = (4/5)²n
- And so on: the work at level k is n · (4/5)^k.

The per-level costs form a geometric series with common ratio 4/5 < 1, so the infinite series
bounds the total:

Total work ≤ n · (1 + 4/5 + (4/5)² + …) = n · 1/(1 - 4/5) = 5n

The root alone costs n, so T(n) = Ω(n) as well. Hence, using the recursion tree method, the time
complexity of the recurrence (with the 7n/10 reading) is T(n) = Θ(n).

Q28. Compare Time Complexity with Space Complexity. [AKTU2019]


Ans: Time complexity and space complexity are both fundamental concepts in computer science
that are used to analyze the efficiency of algorithms. They provide insights into how an
algorithm's performance scales with respect to input size and available memory. Let's delve into
each concept and then compare them:
Time Complexity:
Time complexity measures the amount of time an algorithm takes to complete its execution in
terms of the input size. It gives you an understanding of how the algorithm's runtime increases as
the input size grows. Time complexity is usually expressed using Big O notation (e.g., O(1), O(log
n), O(n), O(n²), etc.).
For example, an algorithm with O(1) time complexity executes in constant time, meaning its
execution time doesn't change with the input size. An algorithm with O(n) time complexity has a
linear relationship between input size and execution time. As the input size increases, the
execution time increases proportionally. Algorithms with higher time complexities like O(n²), O(n
log n), etc., have increasing execution times as the input size grows.

Space Complexity:
Space complexity refers to the amount of memory space an algorithm requires for its execution
as a function of the input size. It provides insights into how an algorithm uses memory resources
and how that usage scales with varying input sizes. Space complexity is also expressed using Big
O notation.
Space complexity considers the memory required for both the algorithm's code and any
additional data structures it uses. Algorithms that utilize additional data structures like arrays,
lists, stacks, queues, and more will have space complexity that includes the memory required for
these data structures.
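For example, Merge Sort runs in O(n log n) time but needs O(n) auxiliary space for merging, while
Heap Sort also runs in O(n log n) time yet sorts in place with only O(1) extra space, so the two
measures can rank the same pair of algorithms differently.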
In summary, time complexity and space complexity provide complementary insights into the
efficiency of algorithms. Time complexity analyzes how an algorithm's execution time scales with
input size, while space complexity analyzes how its memory usage scales. Both are crucial
considerations when designing and evaluating algorithms.

Q29.What are the characteristics of the algorithm? [AKTU2019]

Ans: Algorithms are step-by-step procedures or sets of rules designed to perform specific tasks or
solve particular problems. Characteristics of algorithms help us understand their behavior,
efficiency, and suitability for different tasks. Here are some important characteristics of
algorithms:
1. Well-Defined Steps: An algorithm should consist of clear and well-defined steps or instructions
that are precise and unambiguous. Each step should be understandable and executable.
2. Input and Output: Algorithms take some input and produce an output. The input is the data or
information on which the algorithm operates, and the output is the result or solution produced
by the algorithm.
3. Finiteness: An algorithm should eventually terminate after a finite number of steps. It must not
run indefinitely, providing a solution or output within a reasonable time.

4. Definiteness: Each step of the algorithm should be precisely defined and understandable. There
should be no ambiguity in how to perform each step.
5. Effectiveness: An algorithm must be effective, meaning that it can be executed and carried out
using a finite amount of resources (such as time and memory).
6. Correctness: An algorithm should produce the correct and desired output for all valid inputs. It
should solve the problem it is designed for accurately.
7. Determinism: The steps of an algorithm should be deterministic, meaning that given the same
input and starting conditions, the algorithm will always produce the same output.
8. Efficiency: Algorithms should be designed to execute in a reasonable amount of time and with
minimal resource usage. Efficiency is often measured in terms of time complexity and space
complexity.

Q30. Solve the following By Recursion Tree Method T(n)=n+T(n/5)+T(4n/5) [AKTU2019]

Ans: The time complexity of the recurrence relation T(n) = n + T(n/5) + T(4n/5) using the
recursion tree method:

T(n)

/\

T(n/5) T(4n/5)

/\ /\

... ... ... ...

In each level of the recursion tree, a subproblem of size m spawns two recursive calls with input
sizes m/5 and 4m/5. Let's analyze the work done at each level:

- Level 0: Work = n (from the initial call)

- Level 1: Work = n/5 + 4n/5 = n (from two recursive calls)

- Level 2: Work = n/25 + 4n/25 + 4n/25 + 16n/25 = n (from four recursive calls)

- And so on...

Because the two fractions 1/5 and 4/5 sum to exactly 1, every level at which no branch has yet
reached the base case contributes exactly n. The per-level costs do not shrink, so a geometric-series
shortcut does not apply here.

Now, let's determine the depth of the tree. The shortest root-to-leaf path follows the n/5 branch
and has length log₅ n; the longest follows the 4n/5 branch and has length log_{5/4} n. Both are
Θ(log n). Therefore:

- For the first log₅ n levels the tree is complete, and each such level contributes n:

Total work ≥ n · log₅ n

- No level ever contributes more than n, and there are at most log_{5/4} n levels:

Total work ≤ n · log_{5/4} n

Both bounds are Θ(n log n), so

T(n) = Θ(n log n)

Hence, using the recursion tree method, the time complexity of the recurrence relation T(n) = n +
T(n/5) + T(4n/5) is Θ(n log n).

Q31. Explain HEAP-SORT on the array. Illustrate the operation of HEAP-SORT on the array
A= {6, 14, 3, 25, 2, 10, 20, 7, 6}. [AKTU2019]
Ans. Heap Sort is a comparison-based sorting algorithm that uses the properties of a binary heap
data structure. It consists of two main steps: building a heap (heapify) and repeatedly extracting
the maximum element from the heap and placing it at the end of the sorted array.
Let's walk through the operation of Heap Sort on the given array A = {6, 14, 3, 25, 2, 10, 20, 7, 6}
step by step:

1. Build Heap (Heapify):


We start by building a max heap from the array A. The max heap property ensures that the root
of each sub tree is greater than its children.

Original Array: 6, 14, 3, 25, 2, 10, 20, 7, 6

Heapify:
          25
        /    \
      14      20
     /  \    /  \
    7    6  10   3
   / \
  6   2

(heap as an array: [25, 14, 20, 7, 6, 10, 3, 6, 2])

2. Sorting Phase (Extract Max and Re-heapify):

In each step of this phase, we swap the maximum element (the root) with the last element of the
heap, shrink the heap by one, and re-heapify. The extracted maxima accumulate in sorted order
at the end of the array.

Step 1: Extract Max (25) and Re-heapify

Extracted Max: 25
Heap after extraction:
          20
        /    \
      14      10
     /  \    /  \
    7    6  2    3
   /
  6

Step 2: Extract Max (20) and Re-heapify

Extracted Max: 20
Heap after extraction:
        14
       /  \
      7    10
     / \   / \
    6   6 2   3

Step 3: Extract Max (14) and Re-heapify

Extracted Max: 14
Heap after extraction:
       10
      /  \
     7    3
    / \   /
   6   6 2
Step 4: Extract Max (10) and Re-heapify
Extracted Max: 10
Heap after extraction:
      7
     / \
    6   3
   / \
  2   6

Step 5: Extract Max (7) and Re-heapify

Extracted Max: 7
Heap after extraction:
     6
    / \
   6   3
  /
 2

Step 6: Extract Max (6) and Re-heapify

Extracted Max: 6
Heap after extraction:
    6
   / \
  2   3
Step 7: Extract Max (6) and Re-heapify
Extracted Max: 6
Heap after extraction:
   3
  /
 2

Step 8: Extract Max (3) and Re-heapify

Extracted Max: 3
Heap after extraction:
 2
Step 9: Extract Max (2)
Extracted Max: 2
Heap after extraction: (Empty Heap)

3. Resulting Sorted Array:
Since each extracted maximum is placed at the end of the remaining array, the final array is
sorted in increasing order: [2, 3, 6, 6, 7, 10, 14, 20, 25].

Heap Sort's time complexity is O(n log n) for all cases (best, average, worst), and it sorts in place
with O(1) extra space, making it a reliable choice for large datasets.

Q32. Solve the following recurrence relation : [AKTU2018]


T(n) = 2T(n-1) + 1 if n > 0
T(0) = 0
Ans:
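Unrolling the recurrence:

T(n) = 2T(n-1) + 1
     = 2(2T(n-2) + 1) + 1 = 2²T(n-2) + 2 + 1
     = 2³T(n-3) + 4 + 2 + 1
     …
     = 2^k · T(n-k) + (2^(k-1) + … + 2 + 1) = 2^k · T(n-k) + (2^k - 1)

Setting k = n and using T(0) = 0:

T(n) = 2ⁿ · 0 + (2ⁿ - 1) = 2ⁿ - 1

Hence T(n) = Θ(2ⁿ).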


Q33. Solve the following recurrence relation : [AKTU2018]


T(n) - 2T(n-1) = 3n

Ans:
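Reading the right-hand side as 3ⁿ (the exponent appears to have been lost in formatting), this is a linear non-homogeneous recurrence. The homogeneous part T(n) - 2T(n-1) = 0 has solution c · 2ⁿ. For a particular solution, try T_p(n) = A · 3ⁿ:

A · 3ⁿ - 2A · 3^(n-1) = 3ⁿ ⇒ A · 3^(n-1) · (3 - 2) = 3ⁿ ⇒ A = 3

So T_p(n) = 3^(n+1), and the general solution is:

T(n) = c · 2ⁿ + 3^(n+1)

where c is fixed by the initial condition. Asymptotically, T(n) = Θ(3ⁿ).

(If the right-hand side is instead the linear term 3n, the particular solution has the form An + B, which gives T(n) = c · 2ⁿ - 3n - 6 and T(n) = Θ(2ⁿ).)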


Q34. Solve the following recurrence relation: [AKTU2017]


T(n) = n if n = 0, 1, or 2
T(n) = 5T(n-1) - 8T(n-2) + 4T(n-3) otherwise

Ans:
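The recurrence T(n) = 5T(n-1) - 8T(n-2) + 4T(n-3) has the characteristic equation:

x³ - 5x² + 8x - 4 = 0, which factors as (x - 1)(x - 2)² = 0

The roots are x = 1 and x = 2 (a double root), so the general solution is:

T(n) = a · 1ⁿ + (b + c·n) · 2ⁿ

Using the initial conditions T(0) = 0, T(1) = 1, T(2) = 2:

a + b = 0
a + 2b + 2c = 1
a + 4b + 8c = 2

Solving gives a = -2, b = 2, c = -1/2, so:

T(n) = 2^(n+1) - n · 2^(n-1) - 2

(Check: the recurrence gives T(3) = 5·2 - 8·1 + 4·0 = 2, and the formula gives 2⁴ - 3·2² - 2 = 2.)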


