
ADA

UNIT-1:
Q1. What is an algorithm? What do you mean by a correct algorithm? What do you mean by an instance of a problem? List the criteria that all algorithms must satisfy. On what basis will you consider algorithm A better than algorithm B?
Ans:
- Algorithm: A step-by-step procedure to solve a problem or perform a task.

- Correct Algorithm: An algorithm that produces the correct output for all valid inputs.

- Instance of a Problem: A specific set of inputs for which the problem needs to be solved.

- Criteria for Algorithms:
1. Input: Takes zero or more inputs.
2. Output: Produces at least one output.
3. Finiteness: Must terminate after a finite number of steps.
4. Definiteness: Each step must be clearly defined.
5. Effectiveness: All steps must be simple enough to be executed in practice.

- Better Algorithm: Algorithm A is considered better than Algorithm B based on:
1. Time complexity (faster execution).
2. Space complexity (less memory usage).

Q2. What is an algorithm? What do you mean by linear inequalities and linear equations? Explain asymptotic notation with the help of example.
Ans:
1. Algorithm: An algorithm is a step-by-step procedure or set of rules designed to perform a specific task or solve a problem. It is a sequence of instructions that take input, process it, and produce an output.

2. Linear Equations: A linear equation is an equation in which the highest power of the variable(s) is 1. It can be written in the form:

ax + b = 0

3. Linear Inequalities: A linear inequality is similar to a linear equation, but instead of equality, it involves inequality symbols like <, >, ≤, or ≥. For example:

ax + b ≥ c
4. Asymptotic Notation: Asymptotic notation describes the behavior of an algorithm as its input size approaches infinity. The three common types are:

Big-O (O): Represents the upper bound of an algorithm's time complexity (worst-case).

Omega (Ω): Represents the lower bound (best-case).

Theta (Θ): Represents the exact bound (average-case).

Example:
Consider an algorithm that runs in 3n² + 2n + 1 steps. The Big-O is O(n²), focusing on the dominant term as n → ∞, which is n².

Q3. Why do we use asymptotic notations in the study of algorithms? Briefly describe the commonly used asymptotic notations.
Ans:
Why do we use Asymptotic Notations in the study of algorithms?

Asymptotic notations are used to analyze and describe the efficiency of algorithms as the input size grows large. They help compare algorithms based on:

- Time complexity (how fast an algorithm runs)
- Space complexity (how much memory it uses)

These notations provide a high-level understanding of an algorithm's performance without getting bogged down in specific machine details or minor variations.

Common Asymptotic Notations

1. Big-O Notation (O):
Describes the worst-case time complexity.
Upper bound of an algorithm's growth rate.
Example: O(n²) means the algorithm's time grows quadratically with input size.

2. Omega Notation (Ω):
Describes the best-case time complexity.
Lower bound of the algorithm's growth rate.
Example: Ω(n) means the algorithm takes at least linear time in the best case.

3. Theta Notation (Θ):
Describes the exact time complexity (both upper and lower bounds).
Example: Θ(n log n) means the algorithm grows at this rate in all cases.

These notations help compare different algorithms efficiently, especially as input size tends to infinity.

Q5. What is an amortized analysis? Explain accounting method and aggregate analysis with suitable example.
Ans:
Amortized Analysis:

Amortized analysis evaluates the average time per operation over a sequence of operations, ensuring that while some operations may be costly, the overall cost remains low.

Accounting Method:

Overcharge cheap operations and store "credits" to pay for future expensive ones.
Example: In a dynamic array that doubles when full, charge 3 units per insertion (1 for the insertion itself, 2 saved as credit to pay for copying elements when the array doubles in size). The stored credits cover the full cost of every resize.

Aggregate Analysis:

Calculate the total cost of all operations and divide by the number of operations to find the average cost.

Example: For n insertions in a dynamic array, the total cost is O(n), so the average cost per insertion is O(1).

Q6. Explain following terms with example.
1. Set 2. Relation 3. Function
Ans:
1. Set:

A collection of distinct objects or elements.
Example:
Set of natural numbers: N = {1, 2, 3, ...}

2. Relation:

A relationship between elements of two sets, pairing elements from one set with elements of another.
Example:
Let A = {1, 2} and B = {x, y}.
A relation can be: R = {(1, x), (2, y)}.

3. Function:

A special type of relation where each element in the first set (domain) maps to exactly one element in the second set (range).
Example:
f(x) = x + 1 is a function mapping each element of the domain to exactly one element of the range.

Q8. Define an amortized analysis. Briefly explain its different techniques. Carry out aggregate analysis for problem of implementing a k-bit binary counter that counts upward from 0.
Ans:
Amortized Analysis:

Amortized analysis finds the average cost per operation over a sequence of operations, ensuring that while some operations may be expensive, the overall cost remains low.

Techniques of Amortized Analysis:

1. Aggregate Analysis:
The total cost of a sequence of n operations is calculated, and the average cost is found by dividing the total cost by n.

2. Accounting Method:
Overcharge cheaper operations and store "credits" to pay for more expensive operations later.

3. Potential Method:
Uses a potential function to represent the stored work (like credits) and tracks changes in potential to balance expensive operations.

Aggregate Analysis for a k-bit Binary Counter:

For a k-bit counter that increments from 0:

- Each increment flips bit 0; bit 1 flips every 2 increments, bit 2 every 4 increments, and in general bit i flips every 2^i increments.

Total cost for n increments:

- In n increments, bit i flips ⌊n/2^i⌋ times.
- Total number of flips is Σ (i = 0 to k−1) ⌊n/2^i⌋ < 2n = O(n).

Thus, the average cost per increment is O(n)/n = O(1), meaning each increment takes constant time on average.
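
To make the aggregate bound concrete, here is a small sketch (my own, not part of the original notes) that counts the actual bit flips over n increments:

def increment(bits):
    """Increment a binary counter stored as a list of bits (LSB first).
    Returns the number of bit flips this increment performed."""
    flips = 0
    i = 0
    # The carry propagates through trailing 1s, flipping each to 0.
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0
        flips += 1
        i += 1
    # Flip the first 0 to 1 (if the counter has not overflowed).
    if i < len(bits):
        bits[i] = 1
        flips += 1
    return flips

k, n = 16, 1000
bits = [0] * k
total_flips = sum(increment(bits) for _ in range(n))
print(total_flips, total_flips / n)  # total < 2n, so under 2 flips per increment on average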

Q9. Define the following terms:
1. Quantifier
2. Algorithm
3. Big 'Oh' notation
4. Big 'Omega' notation
5. 'Theta' notation
Ans:-
1. Quantifier: A symbol in logic that
specifies the quantity of instances in a
statement, like “∀” (for all) or “∃” (there
exists).

2. Algorithm: A step-by-step procedure or set of rules to solve a problem or perform a task.
3. Big ‘O’ Notation (O): Describes the
worst-case upper bound on the time or
space complexity of an algorithm,
showing how it scales with input size.

4. Big 'Omega' Notation (Ω): Describes the best-case lower bound on the time or space complexity of an algorithm, indicating the minimum amount of resources required.

5. Theta Notation (Θ): Describes the exact asymptotic behavior of an algorithm, bounding it from both above and below (best and worst case).

Q10. Short questions


1. What is an algorithm?
2. What is worst case time complexity?
3. Big O notation
Ans:-
1. What is an algorithm?
An algorithm is a step-by-step set of
instructions designed to perform a
specific task or solve a problem.

2. What is worst-case time complexity?
Worst-case time complexity measures
the maximum time an algorithm takes to
complete, given the worst possible input
scenario.

3. Big O Notation
Big O notation expresses the upper
bound of an algorithm’s time or space
complexity, representing its performance
in the worst case as input size grows.

Q11. Define algorithm, time complexity and space complexity.
Ans:-
1. Algorithm: A defined set of steps or
instructions to solve a problem or
perform a task.

2. Time Complexity: A measure of the amount of time an algorithm takes to run as a function of the input size.

3. Space Complexity: A measure of the amount of memory an algorithm uses as a function of the input size.

Q12. Solve the recurrence T(n) = 7T(n/2) + n³.
Ans:-
The recurrence can be solved using the Master Theorem.

a = 7, b = 2, and f(n) = n³.
n^(log_b a) = n^(log₂ 7) ≈ n^2.81.

Since f(n) = n³ grows faster than n^2.81 (and the regularity condition holds), we are in Case 3 of the Master Theorem.

Thus, the solution is:

T(n) = Θ(n³)
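
As a quick sanity check (my own sketch, assuming n is a power of 2), the recurrence can be evaluated directly; the ratio T(n)/n³ climbs toward a constant (8 in the limit), consistent with Θ(n³):

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Evaluate T(n) = 7*T(n/2) + n^3 with T(1) = 1."""
    if n <= 1:
        return 1
    return 7 * T(n // 2) + n ** 3

for k in range(1, 11):
    n = 2 ** k
    print(n, T(n) / n ** 3)  # ratio grows toward a constant, so T(n) = Theta(n^3)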

Unit-1:
Q1. What is an algorithm? Explain various
properties of an algorithm.
An algorithm is a step-by-step procedure or
set of rules designed to perform a specific
task or solve a problem. It is a sequence of
instructions that, when followed, leads to
the desired output for a given input.
Properties of an Algorithm:
1. Finiteness: An algorithm must have
a finite number of steps, meaning it
should terminate after a set number of
operations.
2. Definiteness: Each step of the
algorithm must be clear and
unambiguous, with no uncertainty in
what to do at any stage.
3. Input: An algorithm should have
well-defined inputs, which may be zero
or more values provided before
execution.
4. Output: The algorithm must produce
at least one output, which is the result of
the computations performed on the
input.
5. Effectiveness: The steps of the
algorithm must be basic enough to be
performed by a person or machine
within a reasonable time, using basic
operations.
6. Deterministic: The algorithm should
produce the same output for the same
input each time it is executed, without
randomness (unless it's a probabilistic
algorithm).
7. Generality: An algorithm should be
applicable to a set of inputs, not just a
specific problem instance, providing a
general solution for a class of problems.

Q2. What do you mean by asymptotic notations? Explain.
Asymptotic notations describe the
efficiency of an algorithm as input size
grows large, focusing on time or space
complexity.
1. Big O (O): Upper bound (worst-
case).
2. Big Omega (Ω): Lower bound (best-
case).
3. Big Theta (Θ): Tight bound
(average-case).
4. Little o (o): Strictly less than.
5. Little omega (ω): Strictly greater
than.
Q3. Write a program/algorithm of selection
sort methods. What is complexity of the
method?
def selection_sort(arr):
    n = len(arr)
    for i in range(n):
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

# Example usage:
arr = [64, 25, 12, 22, 11]
sorted_arr = selection_sort(arr)
print(sorted_arr)  # Output: [11, 12, 22, 25, 64]

Complexity: Selection sort performs (n−1) + (n−2) + ... + 1 = n(n−1)/2 comparisons regardless of input order, so its time complexity is O(n²) in the best, worst and average cases; it does at most n−1 swaps and uses O(1) extra space.

Q4. Explain different asymptotic notations in brief.
Asymptotic notations describe the growth of an algorithm's time or space complexity relative to input size, focusing on large inputs.
1. Big O (O): Upper bound, shows the worst-case scenario. Example: O(n²).
2. Big Omega (Ω): Lower bound, shows the best-case scenario. Example: Ω(n).
3. Big Theta (Θ): Tight bound, shows both the best and worst-case. Example: Θ(n).
4. Little o (o): Strictly smaller growth rate than a function. Example: o(n²).
5. Little omega (ω): Strictly faster growth rate than a function. Example: ω(n).

Q5. What is an amortized analysis? Explain aggregate method of amortized analysis using simple example
Amortized analysis evaluates the average time per operation over a sequence, spreading out the cost of expensive operations.
Aggregate Method:
It calculates the total cost of all operations, then divides it by the number of operations to find the amortized cost.
Example (Dynamic Array):
Appending n elements to a dynamic array:
 Most operations are O(1), but resizing (which takes O(n)) happens occasionally.
 Total cost: O(n) for n operations.
 Amortized cost per operation: O(1).

Q6. Explain why analysis of algorithm is important? Explain: Worst case, Best case, Average case complexity.
Analysis of algorithms is important to evaluate an algorithm's efficiency in terms of time and space, helping to compare different algorithms, predict performance, and choose the best one for a problem, especially for large inputs.
Types of Complexity:
1. Worst-case complexity: Measures the maximum time or space an algorithm will take for any input. It shows the performance in the worst possible scenario (e.g., O(n²)).
2. Best-case complexity: Measures the minimum time or space an algorithm will take. It shows the performance in the most favorable scenario (e.g., O(n)).
3. Average-case complexity: Measures the expected time or space taken by the algorithm, considering all possible inputs. It represents typical performance (e.g., O(n log n)).

Q7. Define: Big Oh, Omega and Big Theta notation
 Big O (O): Describes the upper bound or worst-case time/space complexity of an algorithm (e.g., O(n²)).
 Omega (Ω): Describes the lower bound or best-case time/space complexity of an algorithm (e.g., Ω(n)).
 Big Theta (Θ): Describes the tight bound, indicating the algorithm's time/space complexity in both the best and worst cases (e.g., Θ(n)).

Q8. What is Recursion? Give the implementation of Tower of Hanoi Problem using recursion.
Recursion is a technique in which a function calls itself to solve a smaller instance of the same problem until it reaches a base case.
Tower of Hanoi using Recursion:

def tower_of_hanoi(n, source, destination, auxiliary):
    if n == 1:
        print(f"Move disk 1 from {source} to {destination}")
        return
    tower_of_hanoi(n - 1, source, auxiliary, destination)
    print(f"Move disk {n} from {source} to {destination}")
    tower_of_hanoi(n - 1, auxiliary, destination, source)

# Example usage:
n = 3  # Number of disks
tower_of_hanoi(n, 'A', 'C', 'B')

Q9. Explain why analysis of algorithm is important?
 Key Point: Analyzing algorithms helps
predict performance, compare efficiency,
and choose the best solution for a
problem.
Explanation: Algorithm analysis is essential
to understand time and space complexity,
especially for large inputs. It helps in
selecting the most efficient algorithm,
avoiding performance bottlenecks, and
optimizing resource usage.

Q10. Explain bubble sort algorithm. Derive the algorithmic complexity in best case, worst case and average case analysis.
Bubble Sort Algorithm:
Bubble Sort repeatedly compares adjacent elements and swaps them if they are in the wrong order, "bubbling" the largest unsorted element to its correct position in each pass.
Algorithm:
1. Loop through the array.
2. Compare each pair of adjacent elements.
3. Swap them if the first is greater than the second.
4. Repeat until no swaps are needed.
Complexity Analysis:
1. Best Case: O(n)
o Occurs when the array is already sorted and an early-exit flag is used: a single pass with no swaps confirms the order (see the sketch below).
2. Worst Case: O(n²)
o Occurs when the array is sorted in reverse order. Every element needs to be compared and swapped.
3. Average Case: O(n²)
o On average, half the elements will need to be swapped in each pass, leading to quadratic complexity.
Summary:
 Best Case: O(n)
 Worst Case: O(n²)
 Average Case: O(n²)
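
The O(n) best case assumes the early-exit optimization mentioned above. A sketch of bubble sort with that flag (my own code, not from the notes):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:   # no swaps in a full pass: array is already sorted
            break         # this break gives the O(n) best case on sorted input
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]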

Q11. Explain the heap sort in detail. Give its complexity
Heap Sort Algorithm:
Heap Sort is a comparison-based sorting algorithm that uses a binary heap data structure. It consists of two main phases: building a heap and sorting the elements.
Steps of Heap Sort:
1. Build a Max Heap: Convert the array into a max heap, where the largest element is at the root.
2. Extract Elements:
o Swap the root of the max heap with the last element in the array.
o Reduce the size of the heap by one.
o Heapify the root to maintain the max heap property.
o Repeat until the heap size is reduced to one.
Heapify Function:
The heapify function ensures that a subtree rooted at an index maintains the heap property.
Complexity Analysis:
1. Building the Heap:
o Time complexity: O(n) (using the bottom-up approach).
2. Sorting:
o Extracting the maximum element and heapifying takes O(log n) for each of the n elements.
o Time complexity: O(n log n).
Overall Complexity:
 Best Case: O(n log n)
 Worst Case: O(n log n)
 Average Case: O(n log n)
Space Complexity:
 Space Complexity: O(1) (in-place sorting).
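
A runnable sketch of these two phases (my own code, following the steps above; the data reuses the Q16 example):

def heapify(arr, n, i):
    """Sift arr[i] down so the subtree rooted at i is a max heap of size n."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):  # build max heap bottom-up: O(n)
        heapify(arr, n, i)
    for end in range(n - 1, 0, -1):      # n-1 extractions, O(log n) each
        arr[0], arr[end] = arr[end], arr[0]
        heapify(arr, end, 0)
    return arr

print(heap_sort([20, 50, 30, 75, 90, 60, 25, 10, 40]))
# [10, 20, 25, 30, 40, 50, 60, 75, 90]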

Q13. Sort the letters of word "DESIGN" in alphabetical order using bubble sort.

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

# Example usage:
word = "DESIGN"
sorted_word = ''.join(bubble_sort(list(word)))
print(sorted_word)  # Output: DEGINS

Q14. Write an algorithm for insertion sort. Analyze insertion sort algorithm for best case and worst case.
Insertion Sort Algorithm:
1. Start with the second element (assume the first element is sorted).
2. Compare the current element with the previous elements.
3. Shift all larger elements one position to the right.
4. Insert the current element in the correct position.
5. Repeat until the entire array is sorted.
Pseudocode:

for i from 1 to length(array) - 1:
    key = array[i]
    j = i - 1
    while j >= 0 and array[j] > key:
        array[j + 1] = array[j]
        j = j - 1
    array[j + 1] = key

Complexity Analysis:
1. Best Case: O(n)
o Occurs when the array is already sorted. Only one comparison per element is needed.
2. Worst Case: O(n²)
o Occurs when the array is sorted in reverse order. Every element needs to be compared with all previous elements.
Summary:
 Best Case: O(n)
 Worst Case: O(n²)

Q15. Explain Counting sort with example.
Counting Sort:
Counting Sort is a non-comparison-based sorting algorithm that counts the occurrences of each unique element in the input array, then uses that information to place elements in the correct order.
Steps:
1. Find the Range: Determine the minimum and maximum values in the input array.
2. Create a Count Array: Initialize a count array of size max − min + 1 to store the count of each element.
3. Count Occurrences: Iterate through the input array, incrementing the corresponding index in the count array.
4. Cumulative Count: Modify the count array to hold the cumulative counts, indicating the final positions of elements in the sorted output.
5. Build Output Array: Create an output array, placing each element in its correct position based on the count array.
6. Copy to Original Array: Optionally, copy the sorted output back to the original array.
Example:
Input: [4, 2, 2, 8, 3, 3, 1]
1. Range: Min = 1, Max = 8
2. Count Array (one slot per value 1-8):
o Initial: [0, 0, 0, 0, 0, 0, 0, 0] (size 8)
o After counting: [1, 2, 2, 1, 0, 0, 0, 1] (counts for 1-8)
3. Cumulative Count:
o Cumulative: [1, 3, 5, 6, 6, 6, 6, 7]
4. Output Array:
o Build output: [1, 2, 2, 3, 3, 4, 8]
Result:
Sorted Output: [1, 2, 2, 3, 3, 4, 8]
Complexity:
 Time Complexity: O(n + k), where n is the number of elements and k is the range of the input.
 Space Complexity: O(k).
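
A runnable sketch of these six steps (my own code, reproducing the example above):

def counting_sort(arr):
    lo, hi = min(arr), max(arr)
    count = [0] * (hi - lo + 1)       # one slot per value in the range
    for x in arr:                     # count occurrences
        count[x - lo] += 1
    for i in range(1, len(count)):    # cumulative counts = final positions
        count[i] += count[i - 1]
    out = [0] * len(arr)
    for x in reversed(arr):           # place from the right to keep the sort stable
        count[x - lo] -= 1
        out[count[x - lo]] = x
    return out

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]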

Q16. Sort the following data with Heap Sort Method: 20, 50, 30, 75, 90, 60, 25, 10, and 40. And explain it.
Heap Sort Steps for Data: [20, 50, 30, 75, 90, 60, 25, 10, 40]
1. Build a Max Heap:
o Start from the last non-leaf node and heapify upwards.
o Resulting Max Heap:

          90
        /    \
      75      60
     /  \    /  \
   40    50 30   25
  /  \
 10    20

2. Extract Elements:
o Swap the root (maximum) with the last element, reduce the heap size, and heapify the root.
o Repeat until the heap size is reduced to one.
Steps in Detail:
 Initial Array: [20, 50, 30, 75, 90, 60, 25, 10, 40]
 Max Heap Construction (heapify from the last non-leaf node up):
o After heapifying 75: no change
o After heapifying 30: [20, 50, 60, 75, 90, 30, 25, 10, 40]
o After heapifying 50: [20, 90, 60, 75, 50, 30, 25, 10, 40]
o After heapifying 20: [90, 75, 60, 40, 50, 30, 25, 10, 20]
 Sorting Process (swap root with the last heap element, shrink the heap, then heapify; the sorted suffix is shown after "|"):
1. Swap 90 and 20 → [20, 75, 60, 40, 50, 30, 25, 10 | 90], heapify → [75, 50, 60, 40, 20, 30, 25, 10]
2. Swap 75 and 10 → [10, 50, 60, 40, 20, 30, 25 | 75, 90], heapify → [60, 50, 30, 40, 20, 10, 25]
3. Swap 60 and 25 → [25, 50, 30, 40, 20, 10 | 60, 75, 90], heapify → [50, 40, 30, 25, 20, 10]
4. Swap 50 and 10 → [10, 40, 30, 25, 20 | 50, 60, 75, 90], heapify → [40, 25, 30, 10, 20]
5. Swap 40 and 20 → [20, 25, 30, 10 | 40, 50, 60, 75, 90], heapify → [30, 25, 20, 10]
6. Swap 30 and 10 → [10, 25, 20 | 30, 40, 50, 60, 75, 90], heapify → [25, 10, 20]
7. Swap 25 and 20 → [20, 10 | 25, 30, 40, 50, 60, 75, 90], heapify → [20, 10]
8. Swap 20 and 10 → [10 | 20, 25, 30, 40, 50, 60, 75, 90]
Final Sorted Array:
Output: [10, 20, 25, 30, 40, 50, 60, 75, 90]
This process ensures that the array is sorted in ascending order through heap properties.

Q17. What is an amortized analysis? Explain aggregate method of amortized analysis using suitable example
Amortized analysis evaluates the average time complexity of operations in a sequence, smoothing out the costs of expensive operations over time.
Aggregate Method:
The aggregate method computes the total cost of a sequence of operations and divides it by the number of operations to find the average (amortized) cost per operation.
Example (Dynamic Array Resizing):
1. Operations: Appending elements to a dynamic array that doubles in size when full.
2. Cost Breakdown:
o Appending an element is O(1) most of the time.
o When resizing occurs, it takes O(n) time (copying all elements).
3. Total Cost:
o For n operations: the appends themselves take O(n) time, and resizing occurs about log n times, at sizes 1, 2, 4, ..., n, for a total copy cost of 1 + 2 + 4 + ... + n < 2n = O(n).
o Total time: O(n) + O(n) = O(n).
4. Amortized Cost:
o Amortized cost per append = Total cost / Number of operations = O(n) / n = O(1).
Summary:
Even with occasional expensive operations, the average cost remains efficient at O(1) per operation.
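
A small sketch (my own, modeling a doubling array directly rather than using Python's built-in list) that counts element writes for n appends:

def append_cost_total(n):
    """Count element writes for n appends to a doubling array."""
    capacity, size, total = 1, 0, 0
    for _ in range(n):
        if size == capacity:   # array is full: double it and copy all elements
            total += size
            capacity *= 2
        total += 1             # write the new element
        size += 1
    return total

for n in (10, 100, 1000):
    print(n, append_cost_total(n) / n)  # ratio stays below 3: O(1) amortized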

Q18. Explain Selection Sort Algorithm and give its best case, worst case and average case complexity with example.
Selection Sort Algorithm:
Selection Sort is a simple comparison-based sorting algorithm. It works by dividing the array into a sorted and an unsorted region. The algorithm repeatedly selects the smallest (or largest) element from the unsorted region and moves it to the end of the sorted region.
Steps:
1. Start with the first element as the minimum.
2. Compare it with all other elements to find the smallest one.
3. Swap the smallest found with the first element.
4. Move the boundary of the sorted region one element to the right.
5. Repeat until the array is sorted.
Example:
Input: [64, 25, 12, 22, 11]
1. Find the smallest (11), swap with the first: [11, 25, 12, 22, 64]
2. Next, find the smallest (12), swap with the second: [11, 12, 25, 22, 64]
3. Find the smallest (22), swap with the third: [11, 12, 22, 25, 64]
4. Find the smallest (25), already in place: [11, 12, 22, 25, 64]
5. Array is now sorted.
Complexity:
 Best Case: O(n²) (even if the array is sorted, each element must be checked).
 Worst Case: O(n²) (reverse sorted array requires maximum comparisons).
 Average Case: O(n²) (comparing each element with the rest leads to quadratic complexity).
Summary:
Selection Sort has a time complexity of O(n²) for all cases, making it inefficient on large lists. It performs well for small datasets or when memory space is limited.

Q19. Sort the letters of word "EDUCATION" in alphabetical order using insertion sort

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

# Example usage:
word = "EDUCATION"
letters = list(word)
insertion_sort(letters)
print(''.join(letters))  # Output: "ACDEINOTU"

Q20. Apply the bubble sort algorithm for sorting {U,N,I,V,E,R,S}
Bubble Sort Implementation for {U, N, I, V, E, R, S}:

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]

# Example usage:
letters = ['U', 'N', 'I', 'V', 'E', 'R', 'S']
bubble_sort(letters)
sorted_letters = ''.join(letters)
print(sorted_letters)  # Output: "EINRSUV"

Result:
The letters sorted in alphabetical order are: "EINRSUV".

Q21. Let f(n) and g(n) be asymptotically nonnegative functions. Using the basic definition of Θ-notation, prove that max(f(n), g(n)) = Θ(f(n) + g(n))

To prove that max(f(n), g(n)) = Θ(f(n) + g(n)) using the definition of Θ-notation, we need to show that there exist positive constants c₁, c₂, and n₀ such that for all n ≥ n₀:

c₁ · (f(n) + g(n)) ≤ max(f(n), g(n)) ≤ c₂ · (f(n) + g(n))

Proof:
1. Upper Bound:
o Since both functions are nonnegative, the maximum is at most their sum:
max(f(n), g(n)) ≤ f(n) + g(n)
o Therefore, we can choose c₂ = 1.
2. Lower Bound:
o The maximum of two nonnegative numbers is at least their average:
max(f(n), g(n)) ≥ (f(n) + g(n)) / 2
o Therefore, we can choose c₁ = 1/2.

With c₁ = 1/2, c₂ = 1, and n₀ chosen large enough that both f(n) and g(n) are nonnegative, both inequalities hold, so max(f(n), g(n)) = Θ(f(n) + g(n)).
Q22. What is the smallest value of n such that an algorithm whose running time is 100n² runs faster than an algorithm whose running time is 2ⁿ on the same machine?
To find the smallest value of n such that 100n² < 2ⁿ, we can test values of n:
1. n = 1: 100(1)² = 100, 2¹ = 2 (false)
2. n = 2: 100(2)² = 400, 2² = 4 (false)
3. n = 3: 100(3)² = 900, 2³ = 8 (false)
4. n = 4: 100(4)² = 1600, 2⁴ = 16 (false)
5. n = 5: 100(5)² = 2500, 2⁵ = 32 (false)
6. n = 6: 100(6)² = 3600, 2⁶ = 64 (false)
7. n = 7: 100(7)² = 4900, 2⁷ = 128 (false)
8. n = 8: 100(8)² = 6400, 2⁸ = 256 (false)
9. n = 9: 100(9)² = 8100, 2⁹ = 512 (false)
10. n = 10: 100(10)² = 10000, 2¹⁰ = 1024 (false)
11. n = 11: 100(11)² = 12100, 2¹¹ = 2048 (false)
12. n = 12: 100(12)² = 14400, 2¹² = 4096 (false)
13. n = 13: 100(13)² = 16900, 2¹³ = 8192 (false)
14. n = 14: 100(14)² = 19600, 2¹⁴ = 16384 (false)
15. n = 15: 100(15)² = 22500, 2¹⁵ = 32768 (true)
Thus, the smallest value of n such that 100n² < 2ⁿ is n = 15.

Q23. Explain Tower of Hanoi Problem, derive its recursion equation and compute its time complexity.
Tower of Hanoi Problem:
The Tower of Hanoi is a classic problem that involves moving a stack of disks from one peg to another, using a third peg as auxiliary storage. The rules are:
1. Only one disk can be moved at a time.
2. A disk can only be placed on top of a larger disk or on an empty peg.
Steps to Solve:
1. Move n−1 disks from the source peg to the auxiliary peg.
2. Move the n-th (largest) disk directly to the destination peg.
3. Move the n−1 disks from the auxiliary peg to the destination peg.
Recursion Equation:
Let T(n) be the number of moves required to solve the Tower of Hanoi with n disks. The recurrence relation is:
T(n) = 2T(n−1) + 1
with the base case:
T(1) = 1
Solving the Recurrence:
Expanding the recurrence:
 T(n) = 2T(n−1) + 1
 T(n−1) = 2T(n−2) + 1
 Substitute: T(n) = 2(2T(n−2) + 1) + 1 = 4T(n−2) + 2 + 1
Continue this pattern:
T(n) = 2ᵏ T(n−k) + (2ᵏ − 1)
When k = n − 1:
T(n) = 2ⁿ⁻¹ T(1) + (2ⁿ⁻¹ − 1) = 2ⁿ⁻¹ · 1 + (2ⁿ⁻¹ − 1) = 2ⁿ − 1
Time Complexity:
The number of moves is T(n) = 2ⁿ − 1, so the time complexity is O(2ⁿ).
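
A quick check (my own sketch) that counts the moves made by the recursive solution and compares them with 2ⁿ − 1:

def hanoi_moves(n):
    """Return the number of moves the recursive solution makes for n disks."""
    if n == 1:
        return 1
    # move n-1 disks aside, move the largest disk, move n-1 disks back on top
    return 2 * hanoi_moves(n - 1) + 1

for n in range(1, 6):
    print(n, hanoi_moves(n), 2 ** n - 1)  # both columns agree: 1, 3, 7, 15, 31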

Unit-2
Q1. Prove that Greedy Algorithms do not always give optimal solution. What are the general characteristics of Greedy Algorithms? Also compare Greedy Algorithms with Dynamic Programming and Divide and Conquer methods to find out major difference between them.
Greedy Algorithm Suboptimality Proof:
0/1 Knapsack Example:
 Items: (Weight, Value) = (10, 60), (20, 100), (30, 120)
 Knapsack capacity = 50
Greedy by value/weight ratio picks Item 1 (ratio 6) and then Item 2 (ratio 5), for total weight 30 and total value 160; the remaining capacity of 20 cannot fit Item 3. The optimal solution is to pick Items 2 and 3 (weight exactly 50), total value = 220.
Thus, greedy may not always give the optimal solution.

General Characteristics of Greedy Algorithms:
1. Greedy Choice Property: Locally optimal choices.
2. Optimal Substructure: Global solution built from local choices.
3. Irreversibility: Decisions are final.

Comparison:

Aspect       | Greedy            | Dynamic Programming   | Divide and Conquer
Approach     | Locally optimal   | Solve all subproblems | Divide into independent parts
Backtracking | None              | Yes (memoization)     | Sometimes
Complexity   | Often O(n)        | O(n^2) or more        | Varies (depends on problem)
Examples     | Prim's, Kruskal's | Knapsack, Fibonacci   | Merge Sort, Quick Sort

 Greedy: Fast, may not be optimal.
 Dynamic Programming: Slower, always optimal.
 Divide and Conquer: Splits problems into subproblems, optimal in certain cases.
Q2. Justify the general statement that "if a problem can be split using Divide and Conquer strategy in almost equal portions at each stage, then it is a good candidate for recursive implementation, but if it cannot easily be so divided in equal portions, then it had better be implemented iteratively". Explain with an example.
The Divide and Conquer strategy
works well when a problem can be split
into almost equal portions because
recursion efficiently handles these
balanced subproblems. However, if the
portions are uneven, recursion can lead
to inefficiency due to unbalanced call
stacks and unnecessary recomputation.
In such cases, an iterative approach is
often better, as it avoids the overhead of
recursion.
Example:
1. Merge Sort (Balanced) –
Recursive:
o Splits the array into two nearly equal
halves at each step.
o Recursion handles the two halves
efficiently, leading to O(n log n)
time complexity.
2. Fibonacci Sequence
(Unbalanced) – Iterative:
o Recursive version recalculates
overlapping subproblems, leading to
exponential time complexity
(O(2^n)).
o Iterative version runs in O(n) time by
avoiding redundant calculations.
Thus, recursion fits well for balanced
problems (like Merge Sort), while
iterative methods are better for
unbalanced problems (like Fibonacci).

Q3. Write an algorithm for binary search. Calculate the time complexity for each case.
Binary Search Algorithm:

Algorithm BinarySearch(arr, target)
1. Set low = 0, high = length(arr) - 1
2. While low ≤ high:
   a. Set mid = (low + high) / 2
   b. If arr[mid] == target, return mid   // Target found
   c. If arr[mid] < target, set low = mid + 1
   d. Else, set high = mid - 1
3. Return -1   // Target not found

Time Complexity:
1. Best Case (Target is at mid):
o Occurs on the first comparison.
o O(1).
2. Worst and Average Case:
o Array is halved in each iteration (logarithmic reduction).
o O(log n), where n is the number of elements in the array.
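
A runnable Python version of the pseudocode above (my own sketch):

def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid          # target found
        elif arr[mid] < target:
            low = mid + 1       # discard the left half
        else:
            high = mid - 1      # discard the right half
    return -1                   # target not found

print(binary_search([2, 4, 7, 8, 10, 13, 14, 60], 14))  # 6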
Q4. Write an algorithm for merge sort with divide and conquer strategy. Analyze each case. List best case, worst case and average case complexity.

Algorithm MergeSort(arr)
1. If length(arr) > 1:
   a. Set mid = length(arr) / 2
   b. Left = MergeSort(arr[0:mid])
   c. Right = MergeSort(arr[mid:end])
   d. Merge(Left, Right)

Algorithm Merge(Left, Right)
1. Create empty result array
2. While Left and Right are not empty:
   a. Compare the first elements of Left and Right
   b. Append the smaller one to result
3. Append remaining elements from Left or Right
4. Return result

Time Complexity Analysis:
 Best Case: Even in the best case (array already sorted), the array is always split and merged.
o O(n log n).
 Worst Case: Occurs when the array is in reverse order (or any order). Same splitting and merging.
o O(n log n).
 Average Case: Average splitting and merging behavior for any random array.
o O(n log n).
In all cases, Merge Sort has the same time complexity: O(n log n).

Q5. Write an algorithm for quick sort with divide and conquer strategy. Analyze each case. In which case does it perform similar to selection sort?

Algorithm QuickSort(arr, low, high)
1. If low < high:
   a. Set pivot = Partition(arr, low, high)
   b. QuickSort(arr, low, pivot - 1)   // Recursively sort left side
   c. QuickSort(arr, pivot + 1, high)  // Recursively sort right side

Algorithm Partition(arr, low, high)
1. Set pivot = arr[high]
2. Set i = low - 1
3. For j = low to high - 1:
   a. If arr[j] ≤ pivot:
      i = i + 1
      Swap arr[i] and arr[j]
4. Swap arr[i + 1] and arr[high]
5. Return i + 1

Analysis:
 Best/Average Case: O(n log n), when the partitions are roughly balanced.
 Worst Case: O(n²), when the pivot is always the smallest or largest element (e.g., an already sorted array with the last element as pivot). In this case each pass places only one element in its final position, so quick sort performs similar to selection sort.

Q6. Differentiate divide and conquer with dynamic programming. Write recurrence for calculation of binomial coefficient.
Difference Between Divide and Conquer & Dynamic Programming:

Aspect                | Divide and Conquer                          | Dynamic Programming
Approach              | Breaks problem into independent subproblems | Solves overlapping subproblems by storing solutions
Subproblem dependency | Subproblems are independent                 | Subproblems overlap and are reused
Optimal substructure  | Yes                                         | Yes
Solution combination  | Combines results from subproblems           | Reuses stored results to avoid recomputation
Example               | Merge Sort, Quick Sort                      | Fibonacci, Knapsack

Recurrence for Binomial Coefficient:
The binomial coefficient C(n, k) can be computed using Dynamic Programming with the following recurrence:
C(n, k) = C(n−1, k−1) + C(n−1, k)
Base case:
C(n, 0) = C(n, n) = 1
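
A short DP sketch of this recurrence (my own code, building the table bottom-up):

def binomial(n, k):
    """Compute C(n, k) with the recurrence C(n,k) = C(n-1,k-1) + C(n-1,k)."""
    # table[i][j] holds C(i, j); base cases C(i, 0) = C(i, i) = 1
    table = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                table[i][j] = 1
            else:
                table[i][j] = table[i - 1][j - 1] + table[i - 1][j]
    return table[n][k]

print(binomial(5, 2))  # 10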

Q7. Explain how to apply the divide and conquer strategy for sorting the elements using merge sort.
Merge Sort with Divide and Conquer:
1. Divide:
o Split the array into two halves
recursively until each subarray has
only one element.
2. Conquer:
o Recursively sort each half by further
dividing.
3. Combine:
o Merge the two sorted halves by
comparing elements and combining
them in sorted order.
Steps:
1. Divide: Split array into left and right.
2. Conquer: Recursively apply Merge
Sort to left and right.
3. Combine: Merge the sorted halves
back together.
This results in a time complexity of O(n
log n).
Q8. Differentiate the following: 1. Divide and Conquer & Dynamic Programming 2. Greedy Algorithm & Dynamic Programming

1. Divide and Conquer vs. Dynamic Programming:

Aspect                | Divide and Conquer                       | Dynamic Programming
Subproblem dependency | Independent subproblems                  | Overlapping subproblems
Optimal substructure  | Yes                                      | Yes
Solution reuse        | No, subproblems are solved independently | Yes, stores and reuses solutions
Example               | Merge Sort, Quick Sort                   | Fibonacci, Knapsack

2. Greedy Algorithm vs. Dynamic Programming:

Aspect               | Greedy Algorithm                           | Dynamic Programming
Approach             | Makes locally optimal choices at each step | Solves subproblems and stores solutions
Solution guarantee   | May not guarantee optimal solution         | Guarantees optimal solution
Optimal substructure | Yes                                        | Yes
Example              | Prim's, Kruskal's algorithms               | Knapsack, Longest Common Subsequence

Q9. Show how divide and conquer technique is used to compute the product of two n-digit numbers, with example.
The Divide and Conquer technique can be used to compute the product of two n-digit numbers efficiently using Karatsuba's algorithm. Here's a brief overview:
Problem
Multiply two n-digit numbers x and y.
Approach
Split both numbers into two halves:
 x = x₁ · 10^m + x₀
 y = y₁ · 10^m + y₀
where x₁, x₀, y₁, y₀ are roughly n/2-digit numbers, and m = n/2.
Now, using these, we compute:
1. z₀ = x₀ · y₀
2. z₁ = (x₁ + x₀) · (y₁ + y₀)
3. z₂ = x₁ · y₁
Then the product x · y can be written as:
x · y = z₂ · 10^(2m) + (z₁ − z₂ − z₀) · 10^m + z₀
Example
Let's multiply x = 1234 and y = 5678 (4-digit numbers):
1. Split into halves:
o x = 12 · 10² + 34
o y = 56 · 10² + 78
2. Compute:
o z₀ = 34 · 78 = 2652
o z₁ = (12 + 34) · (56 + 78) = 46 · 134 = 6164
o z₂ = 12 · 56 = 672
3. Combine:
x · y = 672 · 10⁴ + (6164 − 672 − 2652) · 10² + 2652
x · y = 672 · 10000 + 2840 · 100 + 2652 = 6720000 + 284000 + 2652 = 7006652
Thus, 1234 · 5678 = 7006652.
This is how Karatsuba's algorithm works using the Divide and Conquer technique to compute the product of two n-digit numbers.
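
A recursive Python sketch of Karatsuba (my own code, assuming nonnegative integers):

def karatsuba(x, y):
    """Multiply x and y using three recursive multiplications."""
    if x < 10 or y < 10:              # base case: single-digit factor
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    x1, x0 = divmod(x, 10 ** m)       # split: x = x1*10^m + x0
    y1, y0 = divmod(y, 10 ** m)       # split: y = y1*10^m + y0
    z0 = karatsuba(x0, y0)
    z2 = karatsuba(x1, y1)
    z1 = karatsuba(x1 + x0, y1 + y0)
    return z2 * 10 ** (2 * m) + (z1 - z2 - z0) * 10 ** m + z0

print(karatsuba(1234, 5678))  # 7006652
print(karatsuba(981, 1234))   # 1210554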

Q10. Sort the following list using quick sort algorithm. Also discuss worst and best case of quick sort algorithm.
Quick Sort Algorithm
We are given the list:
List = ⟨50, 40, 20, 60, 80, 100, 45, 70, 105, 30, 90, 75⟩
Steps:
1. Choose a pivot (last element): Pivot = 75
2. Partition the list:
o Left (less than 75): ⟨50, 40, 20, 60, 45, 70, 30⟩
o Pivot: 75
o Right (greater than 75): ⟨80, 100, 105, 90⟩
3. Recursive quicksort on left and right parts:
o Left part ⟨50, 40, 20, 60, 45, 70, 30⟩:
   Pivot = 30
   Partition: Left: ⟨20⟩, Pivot: 30, Right: ⟨50, 40, 60, 45, 70⟩
   Recursive on ⟨50, 40, 60, 45, 70⟩:
      Pivot = 70
      Partition: Left: ⟨50, 40, 60, 45⟩, Pivot: 70, Right: ⟨⟩
      Recursive on ⟨50, 40, 60, 45⟩:
         Pivot = 45
         Partition: Left: ⟨40⟩, Pivot: 45, Right: ⟨50, 60⟩
         Sort ⟨50, 60⟩ (already sorted).
o Right part ⟨80, 100, 105, 90⟩:
   Pivot = 90
   Partition: Left: ⟨80⟩, Pivot: 90, Right: ⟨100, 105⟩ (already sorted).
4. Final sorted list:
⟨20, 30, 40, 45, 50, 60, 70, 75, 80, 90, 100, 105⟩
Best and Worst Case of Quick Sort
 Best Case: Occurs when the pivot splits the list into two equal parts, giving the time complexity O(n log n).
 Worst Case: Happens when the pivot is always the smallest or largest element, causing unbalanced splits, resulting in O(n²) time complexity.

Q11. Explain Binary search algorithm with divide and conquer strategy and use the recurrence tree to show that the solution to the binary search recurrence T(n) = T(n/2) + Θ(1) is T(n) = Θ(lg n).
Binary Search Algorithm with Divide and Conquer
Binary Search works on a sorted array to find a target element by dividing the search space in half during each iteration.
Steps:
1. Divide: Find the middle element of the array.
2. Conquer: Compare the target value with the middle element:
o If equal, return the index.
o If target is smaller, search the left half (discard the right half).
o If target is larger, search the right half (discard the left half).
3. Combine: The result is found when the target element is located, or the search space is reduced to zero.
Binary Search Recurrence
For an array of size n:
 The algorithm splits the array into two halves and works on one half, leading to the recurrence:
T(n) = T(n/2) + Θ(1)
where T(n/2) is the time to search one half, and Θ(1) is the constant time to compare with the middle element.
Solving the Recurrence using Recurrence Tree
1. First Level: The problem size is n, and the cost is Θ(1).
2. Second Level: The problem size is reduced to n/2, and the cost is Θ(1).
3. Third Level: The problem size is reduced to n/4, and the cost is Θ(1).
At each level, the problem size is reduced by half, but the cost remains Θ(1).
 Depth of the tree = log₂ n (since the size reduces by half each time).
 Total cost = Θ(1) at each level, repeated for log₂ n levels.
Thus, the total time complexity is:
T(n) = Θ(log n)
Conclusion
The solution to the recurrence T(n) = T(n/2) + Θ(1) is T(n) = Θ(log n), which means binary search runs in logarithmic time.
Q12. Explain how to apply the divide and conquer strategy for sorting the elements using quick sort with example. Write algorithm for quick sort method.
Quick Sort using Divide and Conquer Strategy
Quick Sort follows the divide and conquer strategy to sort an array:
Steps:
1. Divide: Select a pivot element and partition the array into two subarrays:
o Elements less than the pivot.
o Elements greater than the pivot.
2. Conquer: Recursively apply quick sort to both subarrays.
3. Combine: Since subarrays are sorted in-place, combining is implicit.
Quick Sort Example
Given the array: ⟨50, 40, 20, 60, 80, 100, 45, 70, 105, 30, 90, 75⟩
1. Choose pivot (last element): 75.
2. Partition:
o Left: ⟨50, 40, 20, 60, 45, 70, 30⟩
o Right: ⟨80, 100, 105, 90⟩
3. Recursively sort both parts:
o Left sorted: ⟨20, 30, 40, 45, 50, 60, 70⟩
o Right sorted: ⟨80, 90, 100, 105⟩
Final sorted array: ⟨20, 30, 40, 45, 50, 60, 70, 75, 80, 90, 100, 105⟩
Quick Sort Algorithm

def quicksort(arr, low, high):
    if low < high:
        pi = partition(arr, low, high)
        quicksort(arr, low, pi - 1)   # Sort left half
        quicksort(arr, pi + 1, high)  # Sort right half

def partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]  # Swap elements
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

Key Points:
 Pivot: Determines how the array is split.
 Recursion: Recursively sorts left and right subarrays.
 Time Complexity:
o Best/Average: O(n log n).
o Worst: O(n²) when the pivot is always the smallest or largest element (unbalanced split).

Q13. Discuss matrix multiplication problem using divide and conquer technique.
Matrix Multiplication using Divide and Conquer
In matrix multiplication, given two matrices A and B, the goal is to compute their product C = A × B.
Divide and Conquer Strategy
1. Divide: Split matrices A, B, and C into four submatrices each:
A = [A11 A12; A21 A22],  B = [B11 B12; B21 B22],  C = [C11 C12; C21 C22]
2. Conquer: Use the following matrix operations to compute C11, C12, C21, C22:
C11 = A11·B11 + A12·B21,  C12 = A11·B12 + A12·B22
C21 = A21·B11 + A22·B21,  C22 = A21·B12 + A22·B22
Each submatrix product is computed recursively.
3. Combine: Combine the results to get the final matrix C.
Example
Given two matrices:
A = [1 2; 3 4],  B = [5 6; 7 8]
1. Divide:
A11 = 1, A12 = 2, A21 = 3, A22 = 4,  B11 = 5, B12 = 6, B21 = 7, B22 = 8
2. Conquer:
C11 = (1×5) + (2×7) = 5 + 14 = 19,  C12 = (1×6) + (2×8) = 6 + 16 = 22
C21 = (3×5) + (4×7) = 15 + 28 = 43,  C22 = (3×6) + (4×8) = 18 + 32 = 50
3. Combine:
C = [19 22; 43 50]
Strassen's Algorithm (Improved Divide and Conquer)
 Strassen's algorithm improves matrix multiplication to O(n^2.81) using 7 recursive multiplications instead of 8.

Q14. Explain Strassen's algorithm for matrix multiplication.
Strassen's Algorithm for Matrix Multiplication
Strassen's algorithm improves the time complexity of standard matrix multiplication from O(n³) to approximately O(n^2.81) by reducing the number of recursive multiplications.
Basic Idea:
For two 2×2 matrices A and B:
A = [a b; c d],  B = [e f; g h]
Instead of performing 8 multiplications (as in regular divide and conquer), Strassen's algorithm uses 7 matrix multiplications and a few additions/subtractions.
Steps:
1. Compute 7 products:
M1 = (a + d)(e + h)
M2 = (c + d)e
M3 = a(f − h)
M4 = d(g − e)
M5 = (a + b)h
M6 = (c − a)(e + f)
M7 = (b − d)(g + h)
2. Compute the 4 submatrices of the result:
C11 = M1 + M4 − M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 − M2 + M3 + M6
3. Combine: Construct the resulting matrix C from the submatrices C11, C12, C21, C22.
Time Complexity
Strassen's algorithm performs 7 multiplications instead of 8 in the divide and conquer approach, reducing the time complexity to approximately O(n^2.81). This is faster than the standard O(n³) matrix multiplication.
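
A minimal sketch (my own) that checks the seven-product formulas on the 2×2 example from Q13, with scalar entries standing in for submatrices:

def strassen_2x2(A, B):
    """Multiply 2x2 matrices using Strassen's 7 products."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]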

Q15. Explain the use of Divide and Conquer Technique for Binary Search Method. What is the complexity of Binary Search Method? Explain it with example.
Binary Search using Divide and Conquer
Binary Search is a divide and conquer algorithm used to find a target element in a sorted array. It works by repeatedly dividing the array in half until the target is found or the search space is empty.
Steps:
1. Divide: Find the middle element of the array.
2. Conquer:
o If the target is equal to the middle element, return the index.
o If the target is smaller, search the left half.
o If the target is larger, search the right half.
3. Combine: There's no explicit combination step since the target is either found or not.
Example:
Given a sorted array A = ⟨10, 20, 30, 40, 50, 60, 70, 80⟩ and the target x = 50:
1. Middle element = A[3] = 40. Since x > 40, search the right half ⟨50, 60, 70, 80⟩.
2. New middle element = A[5] = 60. Since x < 60, search the left half ⟨50⟩.
3. Middle element = A[4] = 50. x = 50, target found.
Complexity of Binary Search
 Time Complexity: Each step reduces the array size by half, leading to a time complexity of O(log n).
o At each level, you divide the array in half, and the maximum number of divisions is log₂ n.
Example Calculation:
For an array of size n = 8, binary search takes at most log₂ 8 = 3 steps to find an element.
Conclusion:
 Best Case: O(1), when the target is the first middle element examined.
 Worst/Average Case: O(log n), when the target is found late or is not present.

Q16. Write a program/algorithm of Quick Sort Method and analyze it with example.
Quick Sort Algorithm

def quicksort(arr, low, high):
    if low < high:
        pi = partition(arr, low, high)   # Get the partition index
        quicksort(arr, low, pi - 1)      # Sort left of pivot
        quicksort(arr, pi + 1, high)     # Sort right of pivot

def partition(arr, low, high):
    pivot = arr[high]   # Select the pivot (last element)
    i = low - 1         # Index of smaller element
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]  # Swap elements
    arr[i + 1], arr[high] = arr[high], arr[i + 1]  # Place pivot correctly
    return i + 1        # Return the partition index

# Example Usage
arr = [50, 40, 20, 60, 80, 100, 45, 70, 105, 30, 90, 75]
quicksort(arr, 0, len(arr) - 1)
print(arr)  # Sorted array: [20, 30, 40, 45, 50, 60, 70, 75, 80, 90, 100, 105]

Analysis
Example:
Array = [50, 40, 20, 60, 80, 100, 45, 70, 105, 30, 90, 75]
1. First Partition: Pivot = 75. Rearrange elements:
o Left: [50, 40, 20, 60, 45, 70, 30]
o Pivot = 75
o Right: [80, 100, 105, 90]
2. Recursively sort left and right parts.
Time Complexity:
 Best/Average Case: O(n log n), when partitions are balanced.
 Worst Case: O(n²), when partitions are unbalanced (e.g., already sorted array).

Q17. Write an algorithm for quick sort and derive best case, worst case using divide and conquer technique; also trace given data (3,1,4,5,9,2,6,5)
Quick Sort Algorithm

def quicksort(arr, low, high):
    if low < high:
        pi = partition(arr, low, high)
        quicksort(arr, low, pi - 1)
        quicksort(arr, pi + 1, high)

def partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

Best Case and Worst Case Time Complexity
Best Case:
 Divide and Conquer: The pivot always divides the array into two equal halves.
 Recurrence Relation: T(n) = 2T(n/2) + Θ(n)
o This resolves to T(n) = O(n log n).
Worst Case:
 Divide and Conquer: The pivot is always the smallest or largest element, causing unbalanced partitions (one subarray has size n−1 and the other is empty).
 Recurrence Relation: T(n) = T(n−1) + Θ(n)
o This resolves to T(n) = O(n²).
Trace of Quick Sort on Data [3, 1, 4, 5, 9, 2, 6, 5]
1. Initial Array: [3, 1, 4, 5, 9, 2, 6, 5]
o Pivot = 5 (last element).
o Partition: [3, 1, 4, 5, 2], Pivot = 5, [9, 6].
2. Left Subarray: [3, 1, 4, 5, 2]
o Pivot = 2.
o Partition: [1], Pivot = 2, [3, 4, 5].
o Recursively sort [3, 4, 5].
3. Right Subarray: [9, 6]
o Pivot = 6.
o Partition: [], Pivot = 6, [9].
4. Final Sorted Array: [1, 2, 3, 4, 5, 5, 6, 9].
Q18. Multiply 981 by 1234 by divide and conquer method.
Multiplying 981 by 1234 using Divide and Conquer (Karatsuba Algorithm)
Given two numbers: x = 981 and y = 1234. Treat x as the 4-digit number 0981 so both numbers split at the same base 10² = 100.
Steps:
1. Divide x and y into two halves:
x = 981 → a = 9, b = 81 (x = a·100 + b)
y = 1234 → c = 12, d = 34 (y = c·100 + d)
2. Recursive Multiplications:
o ac = 9 × 12 = 108
o bd = 81 × 34 = 2754
o (a + b)(c + d) = (9 + 81)(12 + 34) = 90 × 46 = 4140
3. Combine the results:
o ad + bc = 4140 − 108 − 2754 = 1278
o Final product: x × y = ac × 10⁴ + (ad + bc) × 10² + bd
= 108 × 10⁴ + 1278 × 10² + 2754 = 1080000 + 127800 + 2754 = 1210554
Thus, 981 × 1234 = 1210554.

Q19. What do you mean by Divide & Conquer approach? List advantages and disadvantages of it.
Divide & Conquer Approach
The Divide and Conquer approach is an
algorithmic technique that breaks a problem
into smaller subproblems, solves them
recursively, and combines their solutions to
solve the original problem.
Steps:
1. Divide: Break the problem into
smaller, independent subproblems.
2. Conquer: Solve the subproblems
recursively.
3. Combine: Merge the results to form
the final solution.
Advantages:
 Efficiency: Often reduces time
complexity (e.g., quicksort, mergesort).
 Parallelism: Independent subproblems
can be solved in parallel.
 Optimal for Recursion: Naturally fits
recursive problem-solving.
Disadvantages:
 Overhead: Recursive calls and
combining solutions can add overhead.
 Space Complexity: Requires additional
memory for recursion stack.
 Not Always Optimal: May not be the
best approach for small problems or if
subproblems are not independent.

Q20. Solve the following recurrence relation


using iteration method. T(n) = 8T(n/2) + n2.
Here T(1) = 1.
Recurrence Relation:
T(n) = 8T(n/2) + n², with T(1) = 1.
Solving Using Iteration Method:
1. First Iteration:
T(n) = 8T(n/2) + n²
T(n/2) = 8T(n/4) + (n/2)² = 8T(n/4) + n²/4
Substituting:
T(n) = 8[8T(n/4) + n²/4] + n² = 64T(n/4) + 2n² + n² = 64T(n/4) + 3n²
2. Second Iteration:
T(n/4) = 8T(n/8) + (n/4)² = 8T(n/8) + n²/16
Substituting:
T(n) = 64[8T(n/8) + n²/16] + 3n² = 512T(n/8) + 4n² + 3n² = 512T(n/8) + 7n²
3. General Pattern: After k iterations the added terms form the geometric series n²(1 + 2 + 4 + … + 2^(k−1)):
T(n) = 8^k T(n/2^k) + (2^k − 1)n²
4. Base Case: When n/2^k = 1, k = log₂ n, and T(1) = 1:
T(n) = 8^(log₂ n) · 1 + (2^(log₂ n) − 1)n² = n³ + (n − 1)n² = 2n³ − n²
Final Solution:
T(n) = O(n³)
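A quick numerical check of the closed form 2n³ − n² against the recurrence for small powers of two (a verification sketch; the helper name is ours):
python
def T(n):
    # Direct evaluation of the recurrence T(n) = 8T(n/2) + n^2, T(1) = 1
    if n == 1:
        return 1
    return 8 * T(n // 2) + n * n

for n in [1, 2, 4, 8, 16]:
    print(n, T(n), 2 * n**3 - n**2)  # the two columns should match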
Q21. Write Merge sort algorithm and
compute its worst case and best-case time
complexity. Sort the List G,U,J,A,R,A,T in
alphabetical order using merge sort
Merge Sort Algorithm
python
def merge_sort(arr):
    if len(arr) > 1:
        mid = len(arr) // 2        # Divide the array into two halves
        left_half = arr[:mid]
        right_half = arr[mid:]

        merge_sort(left_half)      # Recursively sort left half
        merge_sort(right_half)     # Recursively sort right half

        # Merge the two sorted halves back into arr
        i = j = k = 0
        while i < len(left_half) and j < len(right_half):
            if left_half[i] < right_half[j]:
                arr[k] = left_half[i]
                i += 1
            else:
                arr[k] = right_half[j]
                j += 1
            k += 1

        # Copy any remaining elements of the left half
        while i < len(left_half):
            arr[k] = left_half[i]
            i += 1
            k += 1

        # Copy any remaining elements of the right half
        while j < len(right_half):
            arr[k] = right_half[j]
            j += 1
            k += 1
Worst Case and Best Case Time Complexity:
 Best Case: O(n log n) (the array is always split into two equal halves)
 Worst Case: O(n log n) (same as the best case, because merge sort always divides the array evenly regardless of input order)
Sort the List: G, U, J, A, R, A, T
1. Initial List: [G, U, J, A, R, A, T]
2. Merge Sort Process:
o Split: [G, U, J] and [A, R, A, T]
o Recursively split:
 [G, U, J] → [G], [U, J] → [U], [J]
 [A, R, A, T] → [A, R], [A, T] → [A], [R], [A], [T]
3. Merge and sort each pair:
o Merging [U], [J] → [J, U], then [G], [J, U] → [G, J, U]
o Merging [A], [R] → [A, R] and [A], [T] → [A, T]
o Merging [A, R], [A, T] → [A, A, R, T]
4. Final Merge:
o Merging [G, J, U] and [A, A, R, T]
o Sorted list: [A, A, G, J, R, T, U]
Final Sorted List: [A, A, G, J, R, T, U]
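Running the merge_sort function given above on this list reproduces the trace (a usage sketch):
python
letters = list("GUJARAT")   # ['G', 'U', 'J', 'A', 'R', 'A', 'T']
merge_sort(letters)
print(letters)              # ['A', 'A', 'G', 'J', 'R', 'T', 'U']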
Q22. Demonstrate Binary Search method to search Key = 14, from the array A = ⟨2, 4, 7, 8, 10, 13, 14, 60⟩.
Binary Search for Key = 14 in Array A = ⟨2, 4, 7, 8, 10, 13, 14, 60⟩
Steps:
1. Initial Array: A = ⟨2, 4, 7, 8, 10, 13, 14, 60⟩
o Start: low = 0, high = 7
o Middle Index: mid = (0 + 7) // 2 = 3
o Middle Element: A[3] = 8
o Since 14 > 8, search the right half.
2. New Subarray: ⟨10, 13, 14, 60⟩
o low = 4, high = 7
o Middle Index: mid = (4 + 7) // 2 = 5
o Middle Element: A[5] = 13
o Since 14 > 13, search the right half.
3. New Subarray: ⟨14, 60⟩
o low = 6, high = 7
o Middle Index: mid = (6 + 7) // 2 = 6
o Middle Element: A[6] = 14
4. Key Found at index 6.
Result:
 Key 14 is found at index 6.
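An iterative Python sketch of the search just traced (the function name is illustrative):
python
def binary_search(A, key):
    low, high = 0, len(A) - 1
    while low <= high:
        mid = (low + high) // 2
        if A[mid] == key:
            return mid        # key found
        elif key > A[mid]:
            low = mid + 1     # search the right half
        else:
            high = mid - 1    # search the left half
    return -1                 # key not present

print(binary_search([2, 4, 7, 8, 10, 13, 14, 60], 14))  # 6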
Q23. Write an algorithm for insertion sort. Analyze insertion sort algorithm for best case and worst case.
Insertion Sort Algorithm
python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]                 # element to insert into the sorted prefix
        j = i - 1
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]      # shift larger elements one place right
            j -= 1
        arr[j + 1] = key             # place the key in its sorted position
Best Case and Worst Case Analysis
Best Case:
 Scenario: The array is already sorted.
 Time Complexity: O(n), because each element is compared once and no shifting is required.
Worst Case:
 Scenario: The array is sorted in reverse order.
 Time Complexity: O(n²), because each element has to be compared and shifted all the way to the beginning.
Example:
 Initial Array: [2, 1, 3, 4]
 After sorting, the array becomes [1, 2, 3, 4].
Time Complexity Summary:
 Best Case: O(n)
 Worst Case: O(n²)
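To make the best-case/worst-case difference concrete, a variant of the insertion sort above that counts element shifts (the counter is our addition):
python
def insertion_sort_count(arr):
    shifts = 0
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]
            shifts += 1
            j -= 1
        arr[j + 1] = key
    return shifts

print(insertion_sort_count([1, 2, 3, 4]))  # 0 shifts: best case, O(n)
print(insertion_sort_count([4, 3, 2, 1]))  # 6 shifts = n(n-1)/2: worst case, O(n^2)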
Unit-4
Q1. Explain with example how games can be
formulated using graphs?
Formulating Games using Graphs
Graphs can represent games through their
structure, where nodes represent game
states and edges represent possible moves
or transitions between those states.
Example: Tic-Tac-Toe
1. Game States: Each unique
arrangement of the Tic-Tac-Toe board can
be represented as a node in the graph.
o For instance, the state where X has
moved to the center and O has taken
a corner.
2. Transitions: An edge connects two
nodes if one game state can be reached
from another by a legal move.
o If X moves from the center to a
corner, an edge connects the two
states.
3. Win Conditions: The terminal nodes
(leaf nodes) represent win, lose, or draw
outcomes.
o A node where X has three in a row is
a win state for X.
Game Tree Representation
 The game can be represented as a tree
where:
o The root node is the initial state.
o Each level represents a player's turn.
o Branches represent possible moves.
Example Diagram:
          (Start)
             |
        +----+----+
        |         |
        X         O
    (State 1) (State 2)
        |         |
     +--+--+   +--+--+
     |     |   |     |
     X1    O1  X2    O2
Applications:
 Minimax Algorithm: Used to determine
the best move by analyzing game states
and outcomes.
 AI Development: AI can explore
possible moves using graphs to make
optimal decisions.
Conclusion:
Graph representation allows for a clear
visualization of game dynamics, making it
easier to analyze and strategize.
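A minimal sketch of such a game graph as a Python adjacency structure, using simplified state labels instead of full Tic-Tac-Toe boards (all state names here are illustrative, not from the original text):
python
# Each key is a game state; each value lists states reachable by one legal move.
game_graph = {
    "start":    ["X_center", "X_corner"],
    "X_center": ["O_corner"],
    "X_corner": ["O_center"],
    "O_corner": [],   # terminal states (win/lose/draw) have no outgoing edges
    "O_center": [],
}

def lines_of_play(state, path=()):
    # Walk the graph from a state, printing every complete line of play
    path = path + (state,)
    if not game_graph[state]:        # terminal node: print the full line
        print(" -> ".join(path))
    for nxt in game_graph[state]:
        lines_of_play(nxt, path)

lines_of_play("start")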
Q2. Write down the algorithm to determine articulation points in a given undirected graph. Give any application where it is applicable.
Algorithm to Determine Articulation
Points
Definition: An articulation point (or cut
vertex) in a graph is a vertex whose removal
increases the number of connected
components.
Algorithm (Tarjan's Algorithm)
1. Initialization:
o Create a visited array to track visited
vertices.
o Create a disc array to store discovery
times of visited vertices.
o Create a low array to store the lowest
discovery time reachable.
o Initialize a parent array to store the
parent vertices.
2. DFS Function:
python
def articulation_points(u, graph, visited, disc, low, parent, ap, time):
    visited[u] = True
    disc[u] = low[u] = time[0]   # record discovery time
    time[0] += 1
    children = 0

    for v in graph[u]:           # for each adjacent vertex v
        if not visited[v]:
            parent[v] = u
            children += 1
            articulation_points(v, graph, visited, disc, low, parent, ap, time)
            low[u] = min(low[u], low[v])
            # A root with two or more children, or a non-root vertex whose
            # subtree cannot reach an ancestor of u, is an articulation point
            if (parent[u] is None and children > 1) or \
               (parent[u] is not None and low[v] >= disc[u]):
                ap[u] = True
        elif v != parent[u]:     # back edge: update the low value
            low[u] = min(low[u], disc[v])
3. Main Function:
python
def find_articulation_points(graph):
    n = len(graph)
    visited = [False] * n
    disc = [float('inf')] * n
    low = [float('inf')] * n
    parent = [None] * n
    ap = [False] * n             # marks articulation points
    time = [0]                   # mutable counter shared by recursive calls

    for i in range(n):
        if not visited[i]:
            articulation_points(i, graph, visited, disc, low, parent, ap, time)

    return [index for index, value in enumerate(ap) if value]
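A usage sketch on a small graph, in which vertex 1 is the only cut vertex (the example graph is ours):
python
# 0 - 1 - 2, with 3 also hanging off vertex 1
graph = [
    [1],        # neighbors of vertex 0
    [0, 2, 3],  # neighbors of vertex 1
    [1],        # neighbors of vertex 2
    [1],        # neighbors of vertex 3
]
print(find_articulation_points(graph))  # [1]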
Application
 Network Design: In communication
networks, articulation points help
identify critical routers or switches;
removing them could disconnect parts of
the network. This is crucial for
maintaining robust and resilient network
architectures.
Q3. Define: Acyclic Directed Graph, Articulation Point, Dense Graph, Preconditioning, Sparse Graph, Graph, Tree.
Definitions
1. Acyclic Directed Graph (DAG): A directed graph that has no cycles, meaning there is no way to start at a vertex and follow a sequence of directed edges back to the same vertex.
2. Articulation Point: A vertex in a graph whose removal increases the number of connected components, leading to a disconnection in the graph.
3. Dense Graph: A graph where the number of edges is close to the maximum possible. In an undirected graph with n vertices, a dense graph has approximately O(n²) edges.
4. Preconditioning: A process used to transform a problem into a simpler form to improve the efficiency of algorithms, often applied in numerical methods or optimization.
5. Sparse Graph: A graph where the number of edges is much less than the maximum possible, typically O(n) edges for n vertices.
6. Graph: A mathematical structure consisting of a set of vertices (or nodes) connected by edges. Graphs can be directed or undirected.
7. Tree: A special type of graph that is connected and acyclic. A tree with n vertices has exactly n − 1 edges and contains no cycles.
Q4. Explain Breadth First Traversal Method for Graph with algorithm.
Breadth-First Traversal (BFS)
Method
Definition: BFS is a graph traversal
method that explores all neighbors of a
vertex before moving to the next level of
vertices. It uses a queue data structure.
Algorithm
1. Initialization:
o Create a queue to keep track of
vertices to explore.
o Create a visited array to track visited
vertices.
2. BFS Function:
python
from collections import deque

def bfs(graph, start):
    visited = [False] * len(graph)
    queue = deque()              # deque gives O(1) dequeues from the left

    # Mark the starting node as visited and enqueue it
    visited[start] = True
    queue.append(start)

    while queue:
        # Dequeue a vertex and print it
        current = queue.popleft()
        print(current)

        # Enqueue all adjacent unvisited vertices
        for neighbor in graph[current]:
            if not visited[neighbor]:
                visited[neighbor] = True
                queue.append(neighbor)
Example Usage
python
# Example graph represented as an adjacency list
graph = [
    [1, 2],     # Neighbors of vertex 0
    [0, 3, 4],  # Neighbors of vertex 1
    [0, 4],     # Neighbors of vertex 2
    [1],        # Neighbors of vertex 3
    [1, 2],     # Neighbors of vertex 4
]

bfs(graph, 0)  # Start BFS from vertex 0 (prints 0 1 2 3 4)
Characteristics
 Time Complexity: O(V + E), where V is the number of vertices and E is the number of edges.
 Space Complexity: O(V) for the queue and visited array.
 Use Cases: Finding the shortest path in unweighted graphs, web crawling, and network broadcasting.
Q5. Differentiate BFS and DFS
Differences Between BFS and DFS

| Feature          | Breadth-First Search (BFS)                   | Depth-First Search (DFS)                                         |
|------------------|----------------------------------------------|------------------------------------------------------------------|
| Traversal Method | Explores neighbors level by level            | Explores as far as possible down one branch before backtracking  |
| Data Structure   | Uses a queue                                 | Uses a stack (or recursion)                                      |
| Path Finding     | Finds the shortest path in unweighted graphs | Does not guarantee the shortest path                             |
| Time Complexity  | O(V + E)                                     | O(V + E)                                                         |
| Space Complexity | O(V)                                         | O(V) in worst case (for recursion stack)                         |
| Use Cases        | Shortest path, broadcasting                  | Topological sorting, puzzle solving, maze exploration            |
| Completeness     | Complete (finds all solutions)               | Not complete in infinite graphs                                  |
| Optimality       | Optimal for unweighted graphs                | Not optimal                                                      |
Q6. How can you identify articulation points? Explain with an example. Describe the use of articulation points.
Identifying Articulation Points
Method: Using Depth-First Search (DFS)
and tracking discovery and low values.
Steps:
1. Initialization:
o Create arrays for visited, disc
(discovery time), low (lowest
discovery time reachable), and
parent.
2. DFS Traversal:
o For each vertex, perform a DFS:
 Update discovery and low values.
 For each adjacent vertex:
 If it’s not visited, mark it,
increment child count, and
recursively call DFS.
 Update low values based on
the child vertices.
 Determine articulation points
based on:
 If the root has two or more
children.
 If any other vertex
satisfies low[child] >=
disc[parent].
Example:
Consider the following undirected graph:
A --- B --- C
      |
      D
 Start DFS from A:
o disc[A] = 0, low[A] = 0
o Visit B:
 disc[B] = 1, low[B] = 1
 Visit C: disc[C] = 2, low[C] = 2 (C's only neighbor is its parent B, so low[C] stays at 2)
 Since low[C] = 2 ≥ disc[B] = 1, B is an articulation point.
 Visit D: disc[D] = 3, low[D] = 3; again low[D] ≥ disc[B].
o The root A has only one child, so A itself is not an articulation point.
o Articulation point found: B (removing it disconnects A from C and D).
Use of Articulation Points
 Network Design: Identifying critical
routers or servers whose failure would
disconnect parts of a network.
 Graph Analysis: Understanding the
robustness of networks, social networks,
and transportation systems by
pinpointing vulnerabilities.
Q8. Explain Depth First Traversal Method for Graph with algorithm.
Depth-First Traversal (DFS) Method
Definition: DFS is a graph traversal
method that explores as far down a
branch as possible before backtracking.
It uses a stack data structure (either
explicitly or via recursion).
Algorithm
1. Initialization:
o Create a visited array to track visited
vertices.
2. DFS Function:
python
def dfs(graph, vertex, visited):
    visited[vertex] = True                 # Mark the current vertex as visited
    print(vertex)                          # Process the vertex

    for neighbor in graph[vertex]:         # Explore all adjacent vertices
        if not visited[neighbor]:          # If not visited
            dfs(graph, neighbor, visited)  # Recur for the neighbor
3. Main Function:
python
def perform_dfs(graph, start):
    visited = [False] * len(graph)  # Initialize visited array
    dfs(graph, start, visited)      # Call DFS from the starting vertex
Example Usage
python
# Example graph represented as an adjacency list
graph = [
    [1, 2],     # Neighbors of vertex 0
    [0, 3, 4],  # Neighbors of vertex 1
    [0, 4],     # Neighbors of vertex 2
    [1],        # Neighbors of vertex 3
    [1, 2],     # Neighbors of vertex 4
]

perform_dfs(graph, 0)  # Start DFS from vertex 0 (prints 0 1 3 4 2)
Characteristics
 Time Complexity: O(V + E), where V is the number of vertices and E is the number of edges.
 Space Complexity: O(V) for the visited array and the recursion stack.
 Use Cases: Pathfinding, topological sorting, solving puzzles (like mazes), and exploring connected components in graphs.
Q9. Differentiate between depth first search and breadth first search.
Differences Between Depth First Search (DFS) and Breadth First Search (BFS)

| Feature          | Depth First Search (DFS)                                        | Breadth First Search (BFS)                                       |
|------------------|-----------------------------------------------------------------|------------------------------------------------------------------|
| Traversal Method | Explores as far down one branch as possible before backtracking | Explores all neighbors at the current depth before moving deeper |
| Data Structure   | Uses a stack (or recursion)                                     | Uses a queue                                                     |
| Path Finding     | Does not guarantee the shortest path                            | Guarantees the shortest path in unweighted graphs                |
| Time Complexity  | O(V + E)                                                        | O(V + E)                                                         |
| Space Complexity | O(V) (worst case for recursion stack)                           | O(V) (for the queue)                                             |
| Use Cases        | Topological sorting, solving puzzles                            | Shortest path finding, network broadcasting                      |
| Completeness     | Not complete in infinite graphs                                 | Complete for finite graphs                                       |
| Optimality       | Not optimal for shortest paths                                  | Optimal for unweighted graphs                                    |
Q10. Given an adjacency-list representation of a directed graph, how long does it take to compute the out-degree of every vertex? How long does it take to compute the in-degrees?
Computing Out-Degree and In-Degree in a Directed Graph
Out-Degree
 Definition: The out-degree of a vertex is the number of edges directed away from it.
 Time Complexity: O(V + E), where V is the number of vertices and E is the number of edges.
o Reason: The out-degree of a vertex is the length of its adjacency list; counting the entries of every list touches each edge exactly once. (If each list stores its length, this drops to O(V).)
In-Degree
 Definition: The in-degree of a vertex is the number of edges directed towards it.
 Time Complexity: O(V + E).
o Reason: You compute in-degrees by iterating over all edges in the adjacency lists, incrementing the in-degree of each destination vertex.
Summary
 Out-Degree: O(V + E), or O(V) if list lengths are stored
 In-Degree: O(V + E)
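A short sketch of both computations for an adjacency-list graph (names are ours):
python
def degrees(graph):
    n = len(graph)
    out_degree = [len(graph[u]) for u in range(n)]  # O(V) with stored lengths
    in_degree = [0] * n
    for u in range(n):              # scan every edge once: O(V + E)
        for v in graph[u]:
            in_degree[v] += 1
    return out_degree, in_degree

graph = [[1, 2], [2], [0], []]      # directed edges 0->1, 0->2, 1->2, 2->0
print(degrees(graph))               # ([2, 1, 1, 0], [1, 1, 2, 0])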