
DAA PBL

Q.1 (a) Define Greedy Approach. Also Mention its Properties.

Answer:
The Greedy Approach is a problem-solving technique that makes a sequence of
choices, each of which looks best at the moment, with the hope of finding a global
optimum. At each step, it selects the option that seems most beneficial without
considering future consequences.

Properties of the Greedy Approach:

1. Greedy-choice property: A global optimum can be arrived at by selecting a local optimum.
2. Optimal substructure: An optimal solution to the problem contains optimal
solutions to its sub-problems.
3. Feasibility: The choice must satisfy the problem's constraints.
4. Local Optimality: The choice made must be the best local choice among the
feasible options.
5. Non-retractable: Once a choice is made, it cannot be reversed.
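
To make these properties concrete, here is a minimal Python sketch of a classic greedy algorithm, activity selection: at each step it makes the locally best feasible choice (the activity that finishes earliest) and never retracts it. The activity list is invented for the example.

def select_activities(activities):
    """Greedy activity selection: repeatedly take the activity that
    finishes earliest among those compatible with what was chosen."""
    chosen = []
    last_finish = float("-inf")
    # Sorting by finish time puts the locally best choice first.
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:            # feasibility check
            chosen.append((start, finish))  # non-retractable choice
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (5, 7), (2, 9), (8, 11)]))
# -> [(1, 4), (5, 7), (8, 11)]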

Q.1 (b) What is the Hamiltonian Cycle Problem? Explain with an Example.

Answer:
The Hamiltonian Cycle Problem is the problem of determining whether there exists a cycle in an undirected or directed graph that visits each vertex exactly once and returns to the starting vertex.

Example: Consider a graph with vertices A, B, C, D and edges connecting them. If there exists a cycle A → B → C → D → A that visits each vertex exactly once and returns to A, then it is a Hamiltonian Cycle.
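
One standard way to check for a Hamiltonian cycle is backtracking; below is a minimal Python sketch, where the adjacency-set encoding of the four-vertex example above is an assumption for illustration.

def hamiltonian_cycle(graph, start):
    """Return a Hamiltonian cycle as a vertex list, or None.
    graph maps each vertex to the set of its neighbours."""
    path = [start]
    def extend(v):
        if len(path) == len(graph):
            # All vertices used; check the closing edge back to start.
            return start in graph[v]
        for w in graph[v]:
            if w not in path:
                path.append(w)
                if extend(w):
                    return True
                path.pop()          # backtrack
        return False
    return path + [start] if extend(start) else None

# The example graph: A-B, B-C, C-D, D-A
g = {"A": {"B", "D"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C", "A"}}
print(hamiltonian_cycle(g, "A"))    # e.g. ['A', 'B', 'C', 'D', 'A']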

Q.1 (c) Compare Merge Sort and Quick Sort w.r.t Their Time Complexity and
Space Complexity.

Answer:

Criterion                   Merge Sort                 Quick Sort
Time Complexity (Best)      O(n log n)                 O(n log n)
Time Complexity (Average)   O(n log n)                 O(n log n)
Time Complexity (Worst)     O(n log n)                 O(n^2)
Space Complexity            O(n) (uses extra array)    O(log n) (in-place sort)

Merge Sort is stable and performs predictably on large data sets, while Quick Sort is generally faster in practice for small and medium datasets but has a worse worst-case time complexity unless optimizations such as randomized pivot selection are used.
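
For reference, here are compact Python sketches of both algorithms. These versions return new lists for clarity rather than sorting in place, so the space figures in the table refer to the standard in-place Quick Sort variant.

import random

def merge_sort(a):
    """O(n log n) in all cases, but needs O(n) auxiliary space."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:        # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def quick_sort(a):
    """Expected O(n log n); a random pivot makes the O(n^2)
    worst case unlikely on any fixed input."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    return (quick_sort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quick_sort([x for x in a if x > pivot]))

print(merge_sort([5, 2, 9, 1]), quick_sort([5, 2, 9, 1]))  # [1, 2, 5, 9] twice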

Q.1 (d) Solve the Recurrence Relation T(n) = 2T(n - 1) + O(n).

Answer:

Let's go through the recurrence relation T(n) = 2T(n−1) + O(n) step-by-step using iterative substitution. This approach will help us see the pattern and reach the solution more systematically.

Given:

The recurrence relation is:

T(n) = 2T(n−1) + O(n)

For simplicity, let's assume O(n) = c·n, where c is a constant. So we rewrite the recurrence as:

T(n) = 2T(n−1) + c·n

Step 1: Expand the Recurrence Relation

Let's start by expanding T(n) in terms of T(n−1), T(n−2), and so on.


First Iteration:

Substitute T(n−1) = 2T(n−2) + c·(n−1) into the recurrence:

T(n) = 2(2T(n−2) + c·(n−1)) + c·n = 2^2·T(n−2) + 2·c·(n−1) + c·n

Second Iteration:

Now substitute T(n−2) = 2T(n−3) + c·(n−2):

T(n) = 2^2·(2T(n−3) + c·(n−2)) + 2·c·(n−1) + c·n = 2^3·T(n−3) + 2^2·c·(n−2) + 2·c·(n−1) + c·n

General Form after k Iterations:

After expanding k times, the recurrence will look like:

T(n) = 2^k·T(n−k) + c(n·2^0 + (n−1)·2^1 + (n−2)·2^2 + ⋯ + (n−k+1)·2^(k−1))

Step 2: Determine When to Stop Expanding

We stop expanding when we reach the base case, which we assume is T(0) (for simplicity, or we can assume T(1) = d if given).

Let's say the base case is T(0) = d. This means we need n − k = 0, which gives k = n.

Now, substituting k = n into the expanded form:

T(n) = 2^n·T(0) + c(n·2^0 + (n−1)·2^1 + (n−2)·2^2 + ⋯ + 1·2^(n−1))

Since T(0) = d, we have:

T(n) = 2^n·d + c·Σ_{j=0}^{n−1} (n−j)·2^j

Step 3: Simplify the Summation

The summation Σ_{j=0}^{n−1} (n−j)·2^j evaluates to 2^(n+1) − n − 2, which grows as Θ(2^n) because of the 2^j factor. Together with the 2^n·d term, the entire expression grows exponentially.

Conclusion
The asymptotic solution to the recurrence relation is:

T(n) = O(2^n)

So, T(n) grows exponentially as O(2^n).
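
A quick numerical sanity check of this conclusion, assuming c = 1 and the base case T(0) = d = 1: the ratio T(n)/2^n should settle near a constant.

def T(n, c=1, d=1):
    """Direct evaluation of T(n) = 2*T(n-1) + c*n with T(0) = d."""
    return d if n == 0 else 2 * T(n - 1, c, d) + c * n

for n in (5, 10, 15, 20):
    print(n, T(n) / 2 ** n)   # ratio approaches d + 2*c = 3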

Q.2 (a) How Strassen’s Algorithm Uses Divide and Conquer to Reduce the Time
Complexity of Matrix Multiplication?

Answer:

Strassen’s Algorithm reduces the time complexity of matrix multiplication by breaking the original n×n matrix multiplication problem into seven smaller matrix multiplications of size (n/2)×(n/2), rather than the eight required by the naive divide-and-conquer method. Traditional matrix multiplication has a time complexity of O(n^3), while Strassen’s Algorithm improves it to about O(n^2.81): the recurrence T(n) = 7T(n/2) + O(n^2) solves to O(n^(log2 7)) ≈ O(n^2.81).
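
A minimal sketch of the seven Strassen products in Python with NumPy, assuming the inputs are square matrices whose size is a power of two (in practice, other sizes are handled by padding):

import numpy as np

def strassen(A, B):
    """Multiply square power-of-two matrices using 7 recursive
    multiplications instead of 8."""
    n = A.shape[0]
    if n == 1:
        return A * B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Reassemble the quadrants of the product.
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])

A = np.random.randint(0, 10, (4, 4))
B = np.random.randint(0, 10, (4, 4))
print(np.array_equal(strassen(A, B), A @ B))   # True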

Q.2 (b) A 100-character text file uses 5 characters with the frequencies below. Construct a Huffman code and compare it with fixed-length encoding.

Answer:

Given:

● A 5-character text file with frequencies as follows:

○ a = 50%
○ b = 5%
○ c = 10%
○ d = 32%
○ e = 3%

Step 1: Build the Huffman Tree

1. Initialize Nodes: Create a node for each character with its frequency:
○ a: 50%, b: 5%, c: 10%, d: 32%, e: 3%.
2. Sort Nodes: Sort the nodes in ascending order based on frequency.
○ Order: e: 3%, b: 5%, c: 10%, d: 32%, a: 50%.
3. Build the Tree:
○ Combine the two lowest frequencies:
■ e + b = 3% + 5% = 8%.
○ Updated list: {8%, c: 10%, d: 32%, a: 50%}.
○ Combine the two lowest again:
■ 8% + 10% = 18%.
○ Updated list: {18%, d: 32%, a: 50%}.
○ Combine the two lowest:
■ 18% + 32% = 50%.
○ Updated list: {50%, a: 50%}.
○ Combine the last two nodes:
■ 50% + 50% = 100%.
○ This completes the Huffman Tree.

Step 2: Assign Codes Assign binary codes based on the tree structure, with shorter codes for more frequent characters. Reading the tree built above (0 for one branch, 1 for the other), the code lengths are: a = 1 bit, d = 2 bits, c = 3 bits, b = 4 bits, e = 4 bits (for example, a = 0, d = 10, c = 110, b = 1110, e = 1111).

Step 3: Calculate Fixed-Length and Variable-Length Encodings

1. Fixed-Length Encoding:
○ For 5 characters, each character would require ⌈log2(5)⌉ = 3 bits.
○ Total bits required for 100 characters: 100 × 3 = 300 bits.
2. Variable-Length (Huffman) Encoding:
○ A 100-character file contains 50 a's, 32 d's, 10 c's, 5 b's, and 3 e's, so using the code lengths above the total is 50×1 + 32×2 + 10×3 + 5×4 + 3×4 = 50 + 64 + 30 + 20 + 12 = 176 bits.
3. Space Savings:
○ Difference in bits: 300 − 176 = 124 bits.
○ Percentage savings: Savings = (Fixed-Length Bits − Huffman Bits) / Fixed-Length Bits × 100% = 124/300 × 100% ≈ 41.3%.
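
A compact Huffman construction in Python using heapq, which reproduces the code lengths derived above; the weights are counts per 100 characters, and the tie-breaking order is an implementation detail of this sketch.

import heapq

def huffman_code_lengths(freqs):
    """Build a Huffman tree bottom-up and return each symbol's
    code length (its depth in the tree). freqs maps symbol -> weight."""
    # Heap entries: (weight, tie_breaker, {symbol: depth_so_far}).
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)      # two lowest weights
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

freqs = {"a": 50, "b": 5, "c": 10, "d": 32, "e": 3}   # counts per 100 chars
lengths = huffman_code_lengths(freqs)
print(sorted(lengths.items()))  # [('a', 1), ('b', 4), ('c', 3), ('d', 2), ('e', 4)]
huffman_bits = sum(freqs[s] * lengths[s] for s in freqs)
print(huffman_bits, 300 - huffman_bits)   # 176 bits, saving 124 of 300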

Question 3(a): Define various asymptotic notations with the help of graph and
mathematical function.

Answer:

● Asymptotic Notations:
○ Asymptotic notations are mathematical tools used to describe the running time complexity of algorithms in terms of input size n as n → ∞.
○ Big-O Notation O(g(n)): Describes an upper bound. f(n) = O(g(n)) if there exist constants c > 0 and n0 such that f(n) ≤ c·g(n) for all n ≥ n0. Graphically, c·g(n) lies on or above f(n) beyond n0. Example: 3n^2 + 2n = O(n^2), taking c = 5 and n0 = 1.
○ Big-Omega Notation Ω(g(n)): Describes a lower bound. f(n) = Ω(g(n)) if there exist constants c > 0 and n0 such that f(n) ≥ c·g(n) for all n ≥ n0. Graphically, c·g(n) lies on or below f(n) beyond n0. Example: 3n^2 + 2n = Ω(n^2), taking c = 3 and n0 = 1.
○ Big-Theta Notation Θ(g(n)): Describes a tight bound. f(n) = Θ(g(n)) if f(n) = O(g(n)) and f(n) = Ω(g(n)), i.e., f(n) is sandwiched between c1·g(n) and c2·g(n) beyond n0. Example: 3n^2 + 2n = Θ(n^2).
Q3(b): Single Source Shortest Path using Dijkstra's Algorithm
Answer:

Assume that the nodes in the graph are labeled A, B, C, D, and E, and the weights of the edges are as provided in the image. Here’s a summary of the edges for easy reference:

● Edge A → B: Weight = 2
● Edge A → C: Weight = 3
● Edge B → D: Weight = 4
● Edge B → E: Weight = 5
● Edge C → D: Weight = 7
● Edge D → E: Weight = 3

Our goal is to find the shortest path from node A to all other nodes using Dijkstra’s Algorithm.

Step-by-Step Execution

1. Initialize Distances and Predecessors:
○ Set the distance of the source node A to 0.
○ Set the distance of all other nodes to infinity initially:
■ dist(A) = 0
■ dist(B) = ∞
■ dist(C) = ∞
■ dist(D) = ∞
■ dist(E) = ∞
2. Select Node with Minimum Distance:
○ Start with node A since it has the minimum distance (0).
3. Update Neighbor Distances of A:
○ For A → B: The new distance to B is 0 + 2 = 2. Update dist(B) to 2.
○ For A → C: The new distance to C is 0 + 3 = 3. Update dist(C) to 3.
4. Updated distances:
○ dist(A) = 0, dist(B) = 2, dist(C) = 3, dist(D) = ∞, dist(E) = ∞
5. Mark A as Visited and Move to the Next Node with the Smallest Distance:
○ A is now visited.
○ Among unvisited nodes, B has the smallest distance (2).
6. Update Neighbor Distances of B:
○ For B → D: The new distance to D is 2 + 4 = 6. Update dist(D) to 6.
○ For B → E: The new distance to E is 2 + 5 = 7. Update dist(E) to 7.
7. Updated distances:
○ dist(A) = 0, dist(B) = 2, dist(C) = 3, dist(D) = 6, dist(E) = 7
8. Mark B as Visited and Move to the Next Node with the Smallest Distance:
○ B is now visited.
○ Among unvisited nodes, C has the smallest distance (3).
9. Update Neighbor Distances of C:
○ For C → D: The new distance to D is 3 + 7 = 10. However, the current known distance to D is 6, which is smaller, so we do not update dist(D).
10. Updated distances remain the same.
11. Mark C as Visited and Move to the Next Node with the Smallest Distance:
○ C is now visited.
○ Among unvisited nodes, D has the smallest distance (6).
12. Update Neighbor Distances of D:
○ For D → E: The new distance to E is 6 + 3 = 9. The current known distance to E is 7, which is smaller, so we do not update dist(E).
13. Updated distances remain the same.
14. Mark D as Visited and Move to the Next Node with the Smallest Distance:
○ D is now visited.
○ E is the only remaining unvisited node.
15. Mark E as Visited:
○ All nodes are now visited, and the algorithm terminates.

Final Distances from Source Node A

After executing Dijkstra’s Algorithm, the shortest distances from node A to all other nodes are:

● dist(A) = 0
● dist(B) = 2
● dist(C) = 3
● dist(D) = 6
● dist(E) = 7

Shortest Paths

The shortest paths from A to each node can also be traced back through the nodes’ predecessors:

● A → B: Distance = 2
● A → C: Distance = 3
● A → B → D: Distance = 6
● A → B → E: Distance = 7
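
For reference, here is a short Python implementation using heapq, run on the same edge list (read as a directed graph, which matches the relaxations traced above):

import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, w), ...]}. Returns shortest distances and
    predecessors from source."""
    dist = {v: float("inf") for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:              # stale queue entry, skip
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:      # relax edge u -> v
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, prev

g = {"A": [("B", 2), ("C", 3)], "B": [("D", 4), ("E", 5)],
     "C": [("D", 7)], "D": [("E", 3)], "E": []}
dist, prev = dijkstra(g, "A")
print(dist)   # {'A': 0, 'B': 2, 'C': 3, 'D': 6, 'E': 7}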

Q.4 (a) Define Various Operations on Disjoint Sets. How the Time Complexity of
Operations on Disjoint Sets Can Be Reduced Using Heuristics.

Answer:
Operations on Disjoint Sets:

1. MakeSet: Create a set with a single element.
2. Find: Determine which set a particular element is in.
3. Union: Join two sets.

Optimizations:

● Union by Rank: Attach the root of the shallower (lower-rank) tree under the root of the deeper tree.
● Path Compression: During Find, make every node on the path point directly to the root.

Together, these heuristics reduce the amortized cost of each operation to near-constant time O(α(n)), where α is the inverse Ackermann function. A sketch combining both is shown below.
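
Here is a minimal illustrative Python sketch of both heuristics, assuming the elements are the integers 0 to n−1:

class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))   # MakeSet for elements 0..n-1
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point x directly at its root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: shallower root goes under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

ds = DisjointSet(5)
ds.union(0, 1); ds.union(1, 2)
print(ds.find(0) == ds.find(2), ds.find(0) == ds.find(4))  # True False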
Q.4 (b) List the Problems in Which Backtracking Can Be Applied. Solve n-Queen
Problem Using Backtracking.

Answer:

Part 1:

Backtracking is a powerful algorithmic technique used to solve combinatorial and constraint satisfaction problems. Here are some well-known problems that can be solved using backtracking:

1. N-Queens Problem: Placing n queens on an n×n chessboard so that no two queens threaten each other.
2. Sudoku Solver: Filling in a partially completed 9x9 grid so that every row, column,
and 3x3 subgrid contains all digits from 1 to 9 without repetition.
3. Subset Sum Problem: Finding subsets of numbers that sum up to a given target
value.
4. Knapsack Problem: Selecting a subset of items to maximize value without
exceeding weight constraints.
5. Graph Coloring: Coloring the vertices of a graph with a minimum number of
colors such that no two adjacent vertices share the same color.
6. Hamiltonian Path and Cycle: Finding a path or cycle in a graph that visits each
vertex exactly once.

Part 2:

Step 1: Start with an Empty Board

We begin with an empty 4×4 board.

Step 2: Place the First Queen in Row 0

We place the first queen in the first row, starting from the left. Let's put the queen in
column 0 of row 0.

Step 3: Move to the Next Row (Row 1)

In row 1, we cannot place a queen in column 0, because it would be in the same column
as the queen in row 0. We also cannot place it in column 1 because it’s on the diagonal
with the first queen.

The first safe position in row 1 is column 2, so we place the queen there.
Step 4: Move to the Next Row (Row 2) and Backtrack

With queens at (0, 0) and (1, 2), no column in row 2 is safe: column 0 shares a column with the queen in row 0, columns 1 and 3 lie on a diagonal with the queen at (1, 2), and column 2 shares its column. Since row 2 has no safe position, we backtrack to row 1 and move its queen to the next safe column, column 3.

Step 5: Try Row 2 Again and Backtrack Further

Row 2 now has exactly one safe square, column 1. But with queens at (0, 0), (1, 3), and (2, 1), every column in row 3 is attacked, so we backtrack again, all the way to row 0, and move the first queen to column 1.

Step 6: Complete the First Solution

With the queen in row 0 at column 1, the first safe column in row 1 is column 3, in row 2 it is column 0, and in row 3 it is column 2. No two queens share a row, column, or diagonal, so this configuration satisfies the N-Queens constraints and we have found one solution:

1. The queen in row 0 is placed in column 1.
2. The queen in row 1 is placed in column 3.
3. The queen in row 2 is placed in column 0.
4. The queen in row 3 is placed in column 2.

Backtracking to Find Other Solutions

Continuing the search in the same way yields the only other solution for n = 4, the mirror image of the first: row 0 → column 2, row 1 → column 0, row 2 → column 3, row 3 → column 1.
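
The complete search can be written as a short backtracking routine in Python; for n = 4 it finds exactly the two solutions described above (the function name and output format are illustrative):

def n_queens(n):
    """Return every solution; a solution lists the queen's column
    for rows 0..n-1."""
    solutions, cols = [], []

    def safe(col):
        row = len(cols)   # row currently being filled
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place():
        if len(cols) == n:
            solutions.append(cols[:])
            return
        for col in range(n):
            if safe(col):
                cols.append(col)
                place()
                cols.pop()     # backtrack and try the next column

    place()
    return solutions

print(n_queens(4))   # [[1, 3, 0, 2], [2, 0, 3, 1]]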
