2 Marks DAA

The document provides answers to various algorithm-related questions, covering topics such as Euclid's Algorithm for GCD, problem-solving steps using algorithms, running time measurement, proof techniques using loop invariants, and dynamic programming principles. It also discusses the time complexity of specific algorithms like Kruskal's and Dijkstra's, as well as the construction of Huffman trees and the naive string matching algorithm. Additionally, it explains concepts like flow networks and the purpose of slack variables in linear programming.



1. Euclid's Algorithm for gcd(m, n)


Algorithm: Euclid's algorithm computes the greatest common divisor (GCD) of two integers, m
and n.
1.​ If n is 0, return m.
2.​ Otherwise, calculate the remainder r when m is divided by n.
3.​ Replace m with n and n with r, and repeat from step 1.

Example: Find gcd(60, 24)


●​ m = 60, n = 24. Remainder of 60 / 24 is 12.
●​ m = 24, n = 12. Remainder of 24 / 12 is 0.
●​ m = 12, n = 0. Since n is 0, we return m.

Answer: The gcd(60, 24) is 12.
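The steps above can be sketched as a minimal Python function:

```python
def gcd(m, n):
    # Repeat: replace (m, n) with (n, m mod n) until n becomes 0.
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(60, 24))  # → 12
```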

2. Fundamental Steps to Solve a Problem using Algorithms


1.​ Understand the Problem: Clearly define the input and the desired output.
2.​ Design the Algorithm: Create a step-by-step procedure to solve the problem. This
involves choosing appropriate data structures and algorithmic techniques.
3.​ Prove Correctness: Ensure the algorithm yields the correct output for all legitimate
inputs.
4.​ Analyze the Algorithm: Determine the algorithm's efficiency in terms of time and space
complexity.
5.​ Code and Test the Algorithm: Implement the algorithm in a programming language
and test it with various inputs.

3. Parameters to Measure Running Time


The running time of an algorithm is measured by two key parameters:
1.​ Input Size (n): The primary factor influencing the running time. For example, the number
of elements in an array or the number of nodes in a graph.
2.​ Basic Operation Count: The number of times the algorithm's most fundamental and
frequently executed instruction is performed. The overall running time is estimated as a
function of the input size n.

4. Three Steps for Proof using Loop Invariant


A loop invariant is a property that holds true before and after each iteration of a loop. The three
steps to prove correctness using a loop invariant are:
1.​ Initialization: Show that the invariant is true before the first loop iteration.
2.​ Maintenance: Assume the invariant is true before an iteration and show it remains true
after that iteration.
3.​ Termination: When the loop terminates, the invariant (along with the termination
condition) should help prove the algorithm's correctness.

5. Mathematical Analysis of Non-Recursive Algorithms


The general steps to analyze the time efficiency of a non-recursive algorithm are:
1.​ Identify Input Size: Determine the parameter n that represents the size of the input.
2.​ Identify Basic Operation: Find the most frequently executed operation in the
algorithm's innermost loop.
3.​ Set up Summation: Formulate a sum corresponding to the number of times the basic
operation is executed. This may depend on the worst, best, or average case.
4.​ Find Closed-Form: Simplify the summation to derive a closed-form formula for the
operation count.

Example: Finding the maximum element in an array of size n.


●​ The basic operation is the comparison A[i] > max_val.
●​ This comparison is performed n-1 times.
●​ The efficiency is T(n) = Σ_{i=1}^{n−1} 1 = n − 1. Thus, the complexity is O(n).
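A sketch of this analysis in code, counting the basic operation explicitly:

```python
def find_max(A):
    # Basic operation: the comparison A[i] > max_val, executed n - 1 times.
    max_val = A[0]
    comparisons = 0
    for i in range(1, len(A)):
        comparisons += 1
        if A[i] > max_val:
            max_val = A[i]
    return max_val, comparisons

print(find_max([3, 9, 1, 7]))  # → (9, 3), i.e. n - 1 = 3 comparisons
```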

6. Comparison of Order of Growth: n(n-1)/2 vs. n


The expression n(n-1)/2 simplifies to (n² − n)/2, which is a quadratic function. Its order of growth is O(n²). The expression n is a linear function with an order of growth of O(n).

Comparison: For large values of n, the O(n²) function grows significantly faster than the O(n) function. Therefore, an algorithm with a running time of n(n-1)/2 is much less efficient for large inputs than an algorithm with a running time of n.

7. Travelling Salesman Problem by Exhaustive Search


Problem: Find the shortest tour that starts at a city, visits every other city exactly once, and
returns to the starting city.

Method: We list all possible tours (Hamiltonian cycles), calculate their total distance, and find
the minimum. Let's start from city 0.
1.​ Tour 1: 0 → 1 → 2 → 3 → 0
○​ Cost: 20 + 15 + 17 + 10 = 62
2.​ Tour 2: 0 → 1 → 3 → 2 → 0
○​ Cost: 20 + 11 + 17 + 12 = 60
3.​ Tour 3: 0 → 2 → 1 → 3 → 0
○​ Cost: 12 + 15 + 11 + 10 = 48

The other possible tours are reversals of these and have the same cost.

Optimal Solution: The minimum cost is 48. The optimal path is 0 → 2 → 1 → 3 → 0 (or its
reverse 0 → 3 → 1 → 2 → 0).
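The exhaustive search can be reproduced in Python. The distance matrix below is reconstructed from the pairwise costs implied by the tours above (an assumption, since the paper's original matrix is not shown):

```python
from itertools import permutations

# Symmetric distance matrix inferred from the tour costs quoted above.
d = [[0, 20, 12, 10],
     [20, 0, 15, 11],
     [12, 15, 0, 17],
     [10, 11, 17, 0]]

def tsp_exhaustive(d, start=0):
    # Try every Hamiltonian cycle from `start` and keep the cheapest.
    others = [v for v in range(len(d)) if v != start]
    best_cost, best_tour = float("inf"), None
    for perm in permutations(others):
        tour = [start, *perm, start]
        cost = sum(d[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

print(tsp_exhaustive(d))  # → (48, [0, 2, 1, 3, 0])
```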

8. Solving T(n) = T(n-1) + 1 by Substitution


This is a recurrence relation for a linear search-like algorithm. We assume a base case, for
instance, T(1)=1.

Method:
1.​ T(n) = T(n−1) + 1
2.​ Substitute for T(n−1): T(n) = [T(n−2) + 1] + 1 = T(n−2) + 2
3.​ Substitute for T(n−2): T(n) = [T(n−3) + 1] + 2 = T(n−3) + 3
4.​ Generalize the pattern after k substitutions: T(n) = T(n−k) + k
5.​ Let k = n−1 to reach the base case T(1): T(n) = T(n−(n−1)) + (n−1) = T(1) + n − 1
6.​ Using the base case T(1) = 1: T(n) = 1 + n − 1 = n

Solution: The time complexity is T(n)=O(n).

9. Divide and Conquer Technique


The Divide and Conquer strategy solves a problem by:
1.​ Divide: Breaking the main problem into several smaller, independent sub-problems of
the same type.
2.​ Conquer: Solving the sub-problems recursively. If a sub-problem is small enough, it is
solved directly.
3.​ Combine: Merging the solutions of the sub-problems to get the solution for the original
problem.

Sketch (Example: Merge Sort)


A diagram illustrating the process:

[Problem of size n]
/ \
[Sub-problem n/2] [Sub-problem n/2] <-- Divide
/ \ / \
... ... ... ... <-- Conquer (Recursive calls)
\ / \ /
[Solved n/2] [Solved n/2]
\ /
[Solution for size n] <-- Combine
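The sketch above corresponds to, for example, this minimal merge sort:

```python
def merge_sort(a):
    # Divide: split in half; Conquer: sort each half; Combine: merge.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # → [1, 2, 3, 5, 8, 9]
```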

10. Solving T(n) = 4T(n/2) + n² using Master's Theorem


The Master Theorem solves recurrences of the form T(n) = aT(n/b) + f(n).

Parameters:
●​ a = 4
●​ b = 2
●​ f(n) = n²

Analysis:
1.​ Calculate n^(log_b a) = n^(log₂ 4) = n².
2.​ Compare f(n) with n^(log_b a). In this case, f(n) = n² matches it asymptotically.
3.​ This corresponds to Case 2 of the Master Theorem, where f(n) = Θ(n^(log_b a)).

Solution:

According to Case 2, the solution is T(n) = Θ(n^(log_b a) · log n).

Therefore, T(n) = Θ(n² log n).

FAT 2


1. Key Steps in Divide & Conquer for Closest Pair of Points


The key steps to find the closest pair of points in a 2D plane using the Divide and Conquer
approach are:
1.​ Divide: Sort the points based on their x-coordinate. Find a vertical line L that splits the
set of points P into two equal-sized subsets, P_left and P_right.
2.​ Conquer: Recursively find the closest pair in P_left (let the distance be d_left) and in
P_right (let the distance be d_right). Let d be the minimum of d_left and d_right.
3.​ Combine: The closest pair might be a "split pair" where one point is in P_left and the
other is in P_right. Create a "strip" of points within distance d of the line L. For each point
in this strip, check its distance against its neighbors (only a constant number of checks
are needed per point if sorted by y-coordinate). If a closer pair is found, update d. The
final d is the answer.

2. Principle of Optimality in Dynamic Programming


The principle of optimality states that an optimal solution to a problem is composed of optimal
solutions to its subproblems. In simpler terms, if you have found the best overall solution, the
parts or stages of that solution must also be the best possible solutions for their respective
subproblems.

3. Memoization vs. Tabulation in Dynamic Programming


Memoization is a top-down approach in dynamic programming. You solve the problem
recursively, but store the result of each subproblem in a cache or lookup table. When the same
subproblem occurs again, you retrieve the stored result instead of re-computing it.

Key Differences:

| Feature | Memoization (Top-Down) | Tabulation (Bottom-Up) |
| :-- | :-- | :-- |
| Approach | Recursive | Iterative |
| Execution | Solves only the subproblems that are actually needed. | Solves all subproblems, starting from the smallest. |
| Overhead | May have overhead due to recursive function calls. | Avoids recursion, using loops instead. |
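The two approaches can be contrasted with a small Fibonacci sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)            # memoization: cache each subproblem result
def fib_memo(n):
    return n if n <= 1 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):
    # Tabulation: fill a table bottom-up from the smallest subproblems.
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(10), fib_tab(10))  # → 55 55
```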

4. Time Complexity of Kruskal's Algorithm


●​ Time Complexity: The time complexity of Kruskal's algorithm is O(E log E) or, equivalently, O(E log V), where E is the number of edges and V is the number of vertices.
●​ Optimizing Data Structure: The algorithm is optimized using a Union-Find (also known
as Disjoint Set Union) data structure. This structure is used to efficiently track the
connected components of the graph and detect if adding an edge would form a cycle.
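A minimal Union-Find sketch (with path compression) showing the cycle check Kruskal's algorithm relies on:

```python
def kruskal_cycle_check_demo():
    # Union-Find over 5 vertices; union returns False when an edge would
    # connect two vertices already in the same component (a cycle).
    parent = list(range(5))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx == ry:
            return False  # edge (x, y) would form a cycle
        parent[ry] = rx
        return True

    # Accept edges (0,1) and (1,2); reject (0,2) because it closes a cycle.
    return [union(0, 1), union(1, 2), union(0, 2)]

print(kruskal_cycle_check_demo())  # → [True, True, False]
```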

5. Dynamic Programming vs. Greedy Technique

| Basis for Comparison | Greedy Technique | Dynamic Programming |
| :-- | :-- | :-- |
| Choice | Makes the locally optimal choice at each step. | Makes a decision based on the optimal solutions to subproblems. |
| Guarantee | Does not guarantee a globally optimal solution. | Guarantees a globally optimal solution. |
| Approach | Makes one choice after another without revisiting. | Solves all necessary subproblems and combines them for the final solution. |
| Example | Dijkstra's Algorithm, Kruskal's Algorithm | Floyd-Warshall Algorithm, Knapsack Problem |

6. Why Dijkstra's Algorithm Fails with Negative Weights


Dijkstra's algorithm cannot handle negative edge weights because it is a greedy algorithm. It
operates on the assumption that once it selects a vertex and marks it as "visited," the path found
to it is the shortest possible path. This assumption is only true if all edge weights are
non-negative.

A negative edge could create a "shortcut" to a vertex that has already been marked as visited,
but the algorithm will not re-evaluate it.
Example:

Consider paths from A to C.

●​ Edges: A → C (cost 2), A → B (cost 3), B → C (cost -4).

Running Dijkstra's from A finalizes C first with cost 2 (since 2 < 3). It then finalizes B with cost 3, but the cheaper path A → B → C, with total cost 3 + (-4) = -1, is never considered because C has already been marked visited. The algorithm incorrectly reports 2 as the shortest distance to C.
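A short sketch showing the failure. The edge weights are an illustrative assumption, chosen so the shortcut's endpoint is finalized before the shortcut is discovered:

```python
import heapq

def dijkstra(graph, source):
    # Standard Dijkstra: once a vertex is popped (finalized), it is never revisited.
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    visited = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u]:
            if v not in visited and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Weights chosen so C is finalized before B (illustrative assumption).
graph = {"A": [("C", 2), ("B", 3)], "B": [("C", -4)], "C": []}
print(dijkstra(graph, "A")["C"])  # → 2, even though A → B → C costs -1
```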

7. Huffman Tree Construction


Given characters and their frequencies: A(5), B(9), C(12), D(13), E(16).
1.​ Combine the two lowest frequencies: A(5) and B(9) to create a new node with frequency
14.
2.​ The available nodes are now: C(12), D(13), Node(14), E(16).
3.​ Combine the next two lowest: C(12) and D(13) to create a new node with frequency 25.
4.​ The available nodes are now: Node(14), E(16), Node(25).
5.​ Combine the next two lowest: Node(14) and E(16) to create a new node with frequency
30.
6.​ Finally, combine the last two nodes: Node(25) and Node(30) to create the root with
frequency 55.

The resulting Huffman Tree is:

(55)
/ \
(30) (25)
/ \ / \
(14) E:16 C:12 D:13
/ \
A:5 B:9

(Assigning 0 to left branches and 1 to right branches)


●​ A: 000
●​ B: 001
●​ E: 01
●​ C: 10
●​ D: 11
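The construction can be sketched with a min-priority queue. The exact 0/1 bit patterns depend on tie-breaking and may differ from the tree above, but the code lengths (A, B → 3 bits; C, D, E → 2 bits) match:

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    # Repeatedly merge the two lowest-frequency nodes, prefixing 0/1.
    tick = count()  # tie-breaker so heap tuples never compare dicts
    heap = [(f, next(tick), {ch: ""}) for ch, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in left.items()}
        merged.update({ch: "1" + code for ch, code in right.items()})
        heapq.heappush(heap, (f1 + f2, next(tick), merged))
    return heap[0][2]

codes = huffman_codes({"A": 5, "B": 9, "C": 12, "D": 13, "E": 16})
print(sorted((ch, len(c)) for ch, c in codes.items()))
```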

8. Worst-Case Time Complexity of Naive String Matching


●​ Worst-Case Time Complexity: O(m * n), where n is the length of the text and m is the
length of the pattern.
●​ Justification: The naive algorithm slides the pattern over the text one position at a time
and checks for a match. In the worst case, for each of the (n-m+1) possible starting
positions, we might have to compare all m characters of the pattern. This occurs with
repetitive inputs like:
○​ Text: T = "AAAAAAAAAB"
○​ Pattern: P = "AAAAB" Here, almost every alignment requires checking all
characters in the pattern before finding a mismatch at the very end.
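A sketch of the naive matcher on the repetitive input above:

```python
def naive_match(text, pattern):
    # Try every alignment; worst case about (n - m + 1) * m comparisons.
    n, m = len(text), len(pattern)
    matches = []
    for s in range(n - m + 1):
        if text[s:s + m] == pattern:
            matches.append(s)
    return matches

print(naive_match("AAAAAAAAAB", "AAAAB"))  # → [5]
```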

9. Prefix Function for P = "ABABABC" (KMP Algorithm)


The prefix function π[q] stores the length of the longest proper prefix of the pattern P[1..q] that is
also a suffix of P[1..q].

| q | Pattern P[1..q] | Longest Proper Prefix that is also a Suffix | π[q] |

| :-: | :--- | :--- | :-: |

| 1 | A | (empty) | 0 |

| 2 | AB | (empty) | 0 |

| 3 | ABA | A | 1 |

| 4 | ABAB | AB | 2 |

| 5 | ABABA | ABA | 3 |

| 6 | ABABAB | ABAB | 4 |

| 7 | ABABABC | (empty) | 0 |

The prefix function array is [0, 0, 1, 2, 3, 4, 0].
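The table can be reproduced with the standard prefix-function computation (0-indexed output):

```python
def prefix_function(p):
    # pi[q] = length of the longest proper prefix of p[:q+1] that is also a suffix.
    pi = [0] * len(p)
    k = 0
    for q in range(1, len(p)):
        while k > 0 and p[k] != p[q]:
            k = pi[k - 1]  # fall back to the next shorter border
        if p[k] == p[q]:
            k += 1
        pi[q] = k
    return pi

print(prefix_function("ABABABC"))  # → [0, 0, 1, 2, 3, 4, 0]
```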

10. Flow Network Definition and Constraints


A flow network is a directed graph G = (V, E) where:
●​ Every edge (u, v) has a non-negative capacity c(u, v).
●​ There is a designated source vertex s (from which flow originates).
●​ There is a designated sink vertex t (where flow terminates).

A flow f in the network must satisfy two main constraints:


1.​ Capacity Constraint: The flow f(u, v) on any edge (u, v) cannot exceed that edge's capacity: 0 ≤ f(u, v) ≤ c(u, v).
2.​ Flow Conservation: For every vertex u other than the source and sink, the total flow entering the vertex equals the total flow leaving it: Σ_{x∈V} f(x, u) = Σ_{y∈V} f(u, y) for all u ∈ V − {s, t}.

FAT 3


1. Purpose of Slack Variables in Linear Programming


The primary purpose of introducing slack variables is to convert linear inequalities of the type
"less than or equal to" (≤) into strict equalities (=).

This transformation is essential for using the Simplex algorithm, which is designed to solve
systems of linear equations rather than inequalities. For each constraint like a₁x₁ + a₂x₂ ≤ b, a non-negative slack variable s is added to the left side, resulting in the equation a₁x₁ + a₂x₂ + s = b.

2. Standard Canonical Form of the Objective Function


Given the Linear Programming Problem:

Maximize z = 3x₁ + 2x₂

Subject to:

x₁ + 4x₂ ≤ 6

3x₁ + 2x₂ ≤ 18 (assuming a typo correction)

To prepare the objective function for the Simplex method, it is written in a standard canonical form where all variables are on one side, set equal to zero.

Standard Form:

z − 3x₁ − 2x₂ = 0

3. Graphical Method for Solving an LP Problem


Problem:
Maximize Z = x₁ + x₂

Subject to the constraints:

1.​ 3x₁ + 2x₂ ≤ 5
2.​ x₂ ≤ 2
3.​ x₁, x₂ ≥ 0

Steps:
1.​ Plot the constraints: Draw the lines 3x₁ + 2x₂ = 5 and x₂ = 2.
2.​ Identify the Feasible Region: The feasible region is the polygon formed by the intersection of all constraints, including x₁ ≥ 0 and x₂ ≥ 0.
3.​ Find Corner Points: The vertices (corners) of this feasible region are:
○​ A = (0, 0)
○​ B = (5/3, 0)
○​ C = (1/3, 2) ← intersection of 3x₁ + 2x₂ = 5 and x₂ = 2
○​ D = (0, 2)
4.​ Evaluate Objective Function: Compute Z = x₁ + x₂ at each corner point:
○​ At A(0, 0): Z = 0 + 0 = 0
○​ At B(5/3, 0): Z = 5/3 + 0 ≈ 1.67
○​ At D(0, 2): Z = 0 + 2 = 2
○​ At C(1/3, 2): Z = 1/3 + 2 = 7/3 ≈ 2.33 (Maximum)

Solution: The maximum value of Z is 7/3, which occurs at the point (x₁ = 1/3, x₂ = 2).
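The corner-point evaluation can be checked with exact fractions:

```python
# Evaluate Z = x1 + x2 at each corner point of the feasible region.
from fractions import Fraction

corners = {
    "A": (Fraction(0), Fraction(0)),
    "B": (Fraction(5, 3), Fraction(0)),
    "C": (Fraction(1, 3), Fraction(2)),
    "D": (Fraction(0), Fraction(2)),
}

best = max(corners.items(), key=lambda kv: kv[1][0] + kv[1][1])
print(best[0], best[1][0] + best[1][1])  # → C 7/3
```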

4. Feasible Region of Intersection


Problem:

Max: z = 45x₁ + 60x₂

Subject to:

●​ x₁ ≤ 45
●​ x₂ ≤ 50
●​ 10x₁ + 10x₂ ≥ 600 (i.e., x₁ + x₂ ≥ 60)
●​ 25x₁ + 5x₂ ≤ 750 (i.e., 5x₁ + x₂ ≤ 150)
●​ x₁, x₂ ≥ 0

The feasible region is the geometric space (a polygon) on the 2D plane that satisfies all the given constraints simultaneously. It is bounded by the lines x₁ = 45, x₂ = 50, x₁ + x₂ = 60, and 5x₁ + x₂ = 150, within the first quadrant. Any point within or on the boundary of this polygon represents a valid solution to the LPP.
5. Line Segment Properties in Computational Geometry
In computational geometry, line segments have several key properties used in algorithms:
●​ Representation: A line segment is defined by its two endpoints, e.g., P₁(x₁, y₁) and P₂(x₂, y₂).
●​ Intersection: A crucial property is whether two line segments intersect. This is
determined algorithmically using orientation tests.
●​ Orientation: Given an ordered triplet of points (p, q, r), their orientation can be
determined as collinear, clockwise (right turn), or counter-clockwise (left turn). This
is the fundamental building block for intersection tests and convex hull algorithms.
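The orientation test reduces to the sign of a 2D cross product; a minimal sketch:

```python
def orientation(p, q, r):
    # Sign of the cross product (q - p) x (r - p):
    # 1 = counter-clockwise (left turn), -1 = clockwise, 0 = collinear.
    cross = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (cross > 0) - (cross < 0)

print(orientation((0, 0), (1, 0), (1, 1)))  # → 1 (left turn)
print(orientation((0, 0), (1, 1), (2, 2)))  # → 0 (collinear)
```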

6. Comparison of Class P and Class NP Problems

| Feature | Class P (Polynomial Time) | Class NP (Nondeterministic Polynomial Time) |
| :-- | :-- | :-- |
| Core Idea | Problems that can be solved quickly. | Problems whose solutions can be verified quickly. |
| Definition | Contains all decision problems that can be solved by a deterministic algorithm in polynomial time. | Contains all decision problems for which a "yes" answer can be verified in polynomial time with a given certificate. |
| Relationship | P is a subset of NP (P ⊆ NP). | NP is a superset of P. Whether P = NP is a major unsolved question. |
| Example | Sorting an array, finding a minimum spanning tree. | Traveling Salesman Problem (TSP), Circuit Satisfiability (SAT). |

7. Differentiating Circuit Satisfiability from Formula Satisfiability


Both are classic NP-complete problems, but they differ in their input representation.
●​ Formula Satisfiability (SAT): The input is a Boolean formula (e.g., (x₁ ∨ ¬x₂) ∧ (x₂ ∨ x₃)). The goal is to determine whether some TRUE/FALSE assignment to the variables makes the formula TRUE.
●​ Circuit Satisfiability (CIRCUIT-SAT): The input is a Boolean combinational circuit made of logic gates (AND, OR, NOT). The goal is to determine whether some set of inputs makes the circuit's output TRUE.

The key difference is that a circuit can be much more compact than an equivalent formula
because it can reuse the output of gates, whereas a formula might have to repeat
sub-expressions.

8. Final Solution State for 8-Queens Problem


Here is one of the 92 possible solutions for the 8-Queens problem, where no two queens can
attack each other.

....Q...
......Q.
.Q......
.....Q..
..Q.....
Q.......
...Q....
.......Q

The queen positions (row, column) are: (0,4), (1,6), (2,1), (3,5), (4,2), (5,0), (6,3), (7,7).
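A quick validity check for the board above (columns and both diagonals must all be distinct):

```python
def is_valid(cols):
    # cols[r] = column of the queen in row r.
    # Queens are safe iff all columns, all r - c diagonals,
    # and all r + c anti-diagonals are distinct.
    n = len(cols)
    return (len(set(cols)) == n
            and len({r - c for r, c in enumerate(cols)}) == n
            and len({r + c for r, c in enumerate(cols)}) == n)

print(is_valid([4, 6, 1, 5, 2, 0, 3, 7]))  # → True
```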

9. Backtracking Algorithm for the N-Queens Problem


Code snippet

function solveNQueens(N):
// Initialize an N x N board with empty squares
board = create_board(N)

// Start placing queens from the first column (col=0)


if placeQueen(board, 0, N) == false:
print "Solution does not exist"
return

print_solution(board)

function placeQueen(board, col, N):


// BASE CASE: If all columns are filled, queens are placed successfully
if col >= N:
return true

// RECURSIVE STEP: Try placing a queen in each row of the current column
for row from 0 to N-1:
// Check if it's safe to place a queen at board[row][col]
if isSafe(board, row, col, N):
// Place the queen
board[row][col] = 'Q'

// Recur for the next column


if placeQueen(board, col + 1, N) == true:
return true

// BACKTRACK: If placing queen didn't lead to a solution, remove it


board[row][col] = '.'

// If no row in this column leads to a solution, return false


return false

// isSafe() function checks the row, upper-left diagonal, and lower-left diagonal

10. Max Clique Problem


The Max Clique Problem is the computational problem of finding the largest clique in a given
undirected graph.
●​ Clique: A subset of vertices in a graph where every two distinct vertices are connected
by an edge.
●​ Maximum Clique: A clique with the highest possible number of vertices in the entire
graph.

The decision version of this problem ("Does a clique of size k exist?") is a classic NP-complete
problem, meaning it is considered computationally hard to solve for large graphs.
PREV SEM


1. Algorithm for Greatest Common Divisor (GCD)


The most efficient method is Euclid's Algorithm.

Algorithm Steps:
1.​ Given two non-negative integers, m and n.
2.​ If n is 0, the GCD is m.
3.​ Otherwise, the GCD is the result of gcd(n, m % n), where m % n is the remainder of m
divided by n.

Pseudocode:

function gcd(m, n):


if n == 0:
return m
else:
return gcd(n, m % n)

2. Best, Worst, and Average Case Time Complexity


●​ Best Case: The minimum amount of time an algorithm takes to complete for an input of
a given size. This represents the most favorable scenario.
●​ Worst Case: The maximum amount of time an algorithm takes for an input of a given
size. This provides an upper bound on the running time and is the most common
complexity measure.
●​ Average Case: The expected running time of an algorithm for a "typical" or random
input of a given size. It requires making assumptions about the probability distribution of
the inputs.

3. Recursive Fibonacci Algorithm and Recurrence


Algorithm:

function fibonacci(n):
if n <= 1:
return n
else:
return fibonacci(n-1) + fibonacci(n-2)

Recurrence Relation:

The time complexity, T(n), can be described by the following recurrence:

T(n) = O(1) if n ≤ 1, and T(n) = T(n−1) + T(n−2) + c if n > 1.

This solves to an exponential time complexity, approximately O(1.618ⁿ), where 1.618 ≈ φ, the golden ratio.

4. Steps in Mathematical Analysis for Non-Recursive Algorithms


1.​ Identify Input Size: Determine the parameter, n, that represents the size of the input.
2.​ Identify Basic Operation: Find the most frequently executed instruction, which is
usually inside the algorithm's innermost loop.
3.​ Set up Summation: Create a mathematical sum that represents the total number of
times the basic operation is executed. This may be different for the worst, best, or
average cases.
4.​ Find Closed-Form Formula: Simplify the summation to get a closed-form expression
for the count, which defines the algorithm's time complexity.

5. General Strategy of Divide and Conquer


The Divide and Conquer method involves three main steps:
1.​ Divide: The problem is broken down into several smaller, independent subproblems of
the same type.
2.​ Conquer: The subproblems are solved recursively. If a subproblem is small enough, it is
solved directly as a base case.
3.​ Combine: The solutions to the subproblems are merged to form the solution for the
original problem.

6. Closest-Pair Problem
The closest-pair problem is a fundamental problem in computational geometry. Given n points
in a metric space (like a 2D plane), the objective is to find the pair of points that have the
smallest distance between them.

7. Complexity of Binary Search


The binary search algorithm works by repeatedly dividing the search interval in half.
●​ Recurrence Relation: Let T(n) be the time to search in an array of size n. At each step,
one comparison is made, and the problem size is reduced to n/2. This gives the
recurrence: T(n)=T(n/2)+c (where c is a constant for the comparison).
●​ Derivation: Unrolling the recurrence gives T(n) = T(n/2ᵏ) + k·c. The process stops when the problem size reaches 1, i.e., n/2ᵏ = 1, which means k = log₂ n.
●​ Complexity: Substituting k gives T(n) = T(1) + c·log₂ n. Therefore, the time complexity is O(log n).
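The halving behaviour can be sketched as:

```python
def binary_search(a, key):
    # Halve the search interval each step: at most about log2(n) + 1 probes.
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        if a[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # key not present

print(binary_search([2, 5, 8, 12, 16, 23], 12))  # → 3
```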

8. Order of Growth for Recurrences


We use the Master Theorem for recurrences of the form T(n) = aT(n/b) + f(n).

a. T(n) = 4T(n/2) + n
●​ a = 4, b = 2, f(n) = n.
●​ We compare f(n) with n^(log_b a) = n^(log₂ 4) = n².
●​ Since f(n) = n = O(n^(2−ε)) for ε = 1, this falls into Case 1.
●​ Order of growth: Θ(n²).

b. T(n) = 4T(n/2) + n²
●​ a = 4, b = 2, f(n) = n².
●​ We compare f(n) with n^(log_b a) = n^(log₂ 4) = n².
●​ Since f(n) = n² = Θ(n²), this falls into Case 2.
●​ Order of growth: Θ(n² log n).

9. Dynamic Programming vs. Divide-and-Conquer


●​ Commonality: Both strategies solve problems by breaking them into smaller
subproblems and using the principle of optimal substructure (where the optimal
solution to the main problem contains optimal solutions to subproblems).​

●​ Principal Difference: The main difference is how they handle subproblems.


Divide-and-conquer is suited for problems where subproblems are independent.
Dynamic programming is designed for problems with overlapping subproblems, where
it solves each subproblem only once and stores its result to avoid re-computation.​

10. Prim's Algorithm and Negative Edge Weights


Yes, Prim's algorithm works correctly on graphs with negative edge weights. The algorithm's
greedy approach—always adding the cheapest edge that connects a vertex in the growing
spanning tree to a vertex outside of it—is not invalidated by negative weights. It will still produce
a Minimum Spanning Tree (MST).

11. Pseudocode for Huffman-Tree Construction


Code snippet

function Huffman(C):
// C is a set of characters with their frequencies
n = number of characters in C
Q = a min-priority queue initialized with characters from C

for i from 1 to n-1:


z = allocate a new node
z.left = EXTRACT-MIN(Q) // Smallest frequency node
z.right = EXTRACT-MIN(Q) // Second smallest
z.freq = z.left.freq + z.right.freq
INSERT(Q, z)

return EXTRACT-MIN(Q) // The root of the Huffman tree

12. Pseudocode for 0/1 Knapsack (Bottom-Up DP)


Code snippet

function Knapsack(W, weights, values, n):


// W: max capacity, n: number of items
// V[i][w]: max value from first i items with capacity w
V = a 2D array of size (n+1) x (W+1), initialized to 0

for i from 1 to n:
for w from 1 to W:
if weights[i-1] <= w:
// Item i can fit. Choose max of taking it vs. not taking it.
V[i][w] = max(values[i-1] + V[i-1][w - weights[i-1]], V[i-1][w])
else:
// Item i cannot fit, so we don't take it.
V[i][w] = V[i-1][w]

return V[n][W]
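A runnable version of the pseudocode above, tried on a small illustrative instance (the item weights and values are an assumption for demonstration):

```python
def knapsack(W, weights, values):
    # V[i][w] = best value using the first i items with capacity w.
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if weights[i - 1] <= w:
                # Item i fits: max of taking it vs. skipping it.
                V[i][w] = max(values[i - 1] + V[i - 1][w - weights[i - 1]],
                              V[i - 1][w])
            else:
                V[i][w] = V[i - 1][w]
    return V[n][W]

print(knapsack(5, [2, 3, 4], [3, 4, 5]))  # → 7 (items 1 and 2)
```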

13. Perfect Matching in Bipartite Graphs


In a bipartite graph with two sets of vertices, U and V, where |U| = |V| = n, a perfect matching is a set of n edges where no two edges share a vertex. This matching covers every vertex in the graph exactly once.

14. Iterative Improvement Technique


Iterative improvement is an optimization strategy that works as follows:
1.​ Start with a feasible (but not necessarily optimal) solution.
2.​ Repeatedly try to make small, local changes to the current solution to find a better one.
3.​ The process stops when no further improvements can be made, reaching a locally
optimal solution.

15. Principle of Duality


In linear programming, the principle of duality states that every linear programming problem
(called the primal problem) has a corresponding dual problem. If the primal problem has an
optimal solution, then so does its dual, and their optimal objective values are equal.

16. Residual Network


In the context of flow networks, a residual network represents the remaining capacity for flow
between vertices. For a given flow f, the residual network has the same vertices as the original
graph but its edges represent:
●​ Forward Edges: The additional flow that can be sent in the original direction.
●​ Backward Edges: The flow that can be "pushed back" or canceled.

This network is crucial for algorithms like Ford-Fulkerson to find augmenting paths.

17. NP-Hard Problems


A problem is NP-hard if it is at least as difficult as the hardest problems in the class NP. This
means that any problem in NP can be reduced to an NP-hard problem in polynomial time.
NP-hard problems are not necessarily decision problems and may not be in NP themselves.

18. Feasible vs. Optimal Solution


●​ Feasible Solution: In an optimization problem, a feasible solution is any solution that
satisfies all the problem's constraints.
●​ Optimal Solution: An optimal solution is a feasible solution that achieves the best
possible value (maximum or minimum) for the objective function. It is the best among all
feasible solutions.
19. Hamiltonian Circuit Problem
The Hamiltonian circuit problem is a decision problem that asks: For a given graph G, does
there exist a cycle that visits every vertex in G exactly once before returning to the start? This is
a classic NP-complete problem.

20. Tractable vs. Intractable Problems


●​ Tractable Problems: Problems that can be solved in polynomial time (e.g., O(n2),
O(nlogn)). They are considered efficiently solvable. The class of tractable problems
corresponds to P.
●​ Intractable Problems: Problems for which no polynomial-time algorithm is known. They
typically require exponential time, making them practically unsolvable for large inputs.
Problems that are NP-hard are considered intractable.

PREV Arrear Paper


1. Fundamental Steps in Algorithmic Problem Solving


1.​ Understand the Problem: Clearly grasp the input, the desired output, and the
constraints.
2.​ Choose an Approach: Decide between exact and approximate problem-solving and
select an appropriate algorithmic strategy (e.g., brute force, divide and conquer).
3.​ Design the Algorithm: Develop a clear, step-by-step procedure to solve the problem.
This includes choosing the right data structures.
4.​ Prove Correctness: Formally or informally demonstrate that the algorithm produces the
correct output for all valid inputs.
5.​ Analyze the Algorithm: Determine the algorithm's efficiency in terms of time and space
complexity.
6.​ Code and Test: Implement the algorithm in a programming language and test it with
various inputs.

2. Important Problem Types


Some of the most important and frequently studied problem types in computer science include:
●​ Sorting: Arranging items in a predefined order.
●​ Searching: Finding a specific item in a dataset.
●​ String Processing: Manipulating sequences of characters (e.g., string matching).
●​ Graph Problems: Problems related to networks, like finding shortest paths or minimum
spanning trees.
●​ Combinatorial Problems: Finding a combination, permutation, or subset that satisfies
certain criteria (e.g., Traveling Salesman Problem).
●​ Geometric Problems: Problems involving geometric objects like points, lines, and
polygons (e.g., closest-pair, convex hull).

3. Analysis of Linear Search Algorithm


The provided algorithm performs a linear search.
●​ Analysis: The basic operation is the comparison A[i] != K inside the while loop.
○​ Best Case: The key K is the first element (A[0]). The loop runs only once.
○​ Worst Case: The key K is the last element or not in the array at all. The loop
runs n times.
●​ Efficiency Class: The algorithm's efficiency is determined by its worst-case
performance, which is directly proportional to the size of the array n. Therefore, the
efficiency class is O(n).

4. Order of Growth (Decreasing Order)


Here are the functions arranged in decreasing order of their growth rate, from fastest to slowest
growing:

n! > 3ⁿ > 2ⁿ > n³ > n² > n log n > n > log n

5. Merge Sort vs. Quick Sort

| Feature | Merge Sort | Quick Sort |
| :-- | :-- | :-- |
| Time Complexity | O(n log n) in all cases (best, average, worst). | O(n log n) on average, but O(n²) in the worst case. |
| Space Complexity | O(n) extra space is required (out-of-place). | O(log n) extra space on average (in-place). |
| Stability | Stable (maintains relative order of equal elements). | Unstable. |
| Primary Work | Most of the work is done in the merge step (combining). | Most of the work is done in the partition step (dividing). |

6. Advantages of the Divide and Conquer Technique


●​ Solves Complex Problems: It provides a way to solve difficult and complex problems
efficiently (e.g., Merge Sort, Strassen's matrix multiplication).
●​ Parallelism: The subproblems are independent and can be solved simultaneously on
multi-processor systems.
●​ Efficiency: It often leads to efficient algorithms, typically with logarithmic components in
their time complexity, like O(n log n).
●​ Clarity: It can lead to cleaner, more understandable recursive code.

7. Quick Sort: First Phase


Given the sequence [7, 11, 14, 6, 9, 4, 3, 12] with the first element 7 as the pivot.

Using Lomuto's partitioning scheme with the first element (7) as the pivot, all elements smaller than 7 end up to its left and all larger elements to its right.

The sequence after the first phase of partitioning will be:

[3, 6, 4, 7, 9, 14, 11, 12]
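The phase above can be reproduced with a Lomuto-style partition that takes the first element as the pivot (one common scheme among several; Hoare's scheme would yield a different but equally valid arrangement):

```python
def lomuto_partition(a):
    """Partition list a around pivot a[0]; return the pivot's final index."""
    p = a[0]
    s = 0                            # right edge of the "smaller than pivot" region
    for i in range(1, len(a)):
        if a[i] < p:
            s += 1
            a[s], a[i] = a[i], a[s]  # grow the smaller region
    a[0], a[s] = a[s], a[0]          # drop the pivot between the two regions
    return s
```

Running it on `[7, 11, 14, 6, 9, 4, 3, 12]` leaves the list as `[3, 6, 4, 7, 9, 14, 11, 12]` with the pivot at index 3.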

8. Applications of Closest-Pair and Convex Hull


●​ Closest-Pair Problem Applications:​

○​ Air Traffic Control: Detecting aircraft that are too close to each other.
○​ Computer Vision: Used in pattern recognition and object clustering.
○​ Molecular Modeling: Finding the closest pair of atoms in a molecule.
●​ Convex Hull Problem Applications:​

○​ Collision Avoidance: Calculating the path for a robot to move without hitting
obstacles.
○​ Image Processing: Identifying the shape or boundary of an object in an image.
○​ Geographic Information Systems (GIS): Defining the boundary of a set of
geographical points.

9. Huffman Code for Fibonacci Frequencies


For the frequencies a:1, b:1, c:2, d:3, e:5, f:8, g:13, h:21, the Huffman tree has a distinct skewed
structure. The optimal codes are:
●​ h: 1
●​ g: 01
●​ f: 001
●​ e: 0001
●​ d: 00001
●​ c: 000001
●​ b: 0000001
●​ a: 0000000
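These code lengths (1, 2, 3, 4, 5, 6, 7, 7) can be checked with a small heap-based Huffman builder; the exact 0/1 labels depend on tie-breaking, but the lengths do not:

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Return a dict symbol -> bit string for the given frequency table."""
    tie = count()  # unique tie-breaker keeps tuple comparison away from the dicts
    heap = [(f, next(tie), {s: ''}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + code for s, code in c1.items()}
        merged.update({s: '1' + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]
```

Because the Fibonacci frequencies force every merge to involve the previous merge's result, the tree degenerates into a path, which is why the code lengths grow by one at each step.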

10. Advantages of Variable-Length Encoding


The main advantage of variable-length encoding (like Huffman coding) over fixed-length
encoding is data compression. By assigning shorter binary codes to more frequent symbols
and longer codes to less frequent ones, the average code length per symbol is reduced, leading
to smaller file sizes.

11. Prim's vs. Kruskal's Algorithm

●​ Strategy: Prim's builds the MST by adding vertices to a single growing tree; Kruskal's builds it by repeatedly adding the cheapest remaining edge that does not create a cycle.
●​ Intermediate Structure: Prim's intermediate graph is always a connected tree; Kruskal's intermediate graph can be a forest (disconnected).
●​ Approach: Prim's starts from an arbitrary vertex and grows outward; Kruskal's sorts all edges by weight first.
●​ Best For: Prim's suits dense graphs (many edges); Kruskal's suits sparse graphs (few edges).
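Kruskal's edge-driven strategy is short to sketch with a union-find structure (an illustrative version, not a tuned one; the graph in the test is invented):

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v) with vertices 0..n-1.
    Return (total MST weight, list of chosen (u, v, w) edges)."""
    parent = list(range(n))

    def find(x):                      # union-find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):     # edges in nondecreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # adding (u, v) creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst
```

Note how the partial result is a forest until the last union joins everything — the "disconnected intermediate graph" from the comparison above.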

12. Binary Search Trees for (do, if, stop)


The alphabetical order is do < if < stop. There are 5 possible unique Binary Search Trees
(Catalan number C₃ = 5):
1.​ if as root:

      if
     /  \
   do    stop

2.​ do as root (right-skewed):

   do
     \
      if
        \
         stop

3.​ do as root (mixed):

   do
     \
      stop
      /
    if

4.​ stop as root (left-skewed):

       stop
       /
     if
     /
   do

5.​ stop as root (mixed):

     stop
     /
   do
     \
      if

13. Augmenting Path (Ford-Fulkerson)


In the Ford-Fulkerson method for finding the maximum flow in a network, an augmenting path
is a simple path from the source vertex s to the sink vertex t in the residual network.

The residual network represents the available capacity for sending more flow. The
Ford-Fulkerson algorithm works by repeatedly finding an augmenting path and pushing the
maximum possible flow along it until no more such paths can be found.
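The augment-and-repeat loop can be made concrete with a BFS-based sketch (the Edmonds–Karp flavor of Ford–Fulkerson); the tiny network in the test is invented for illustration:

```python
from collections import deque

def max_flow(capacity, s, t):
    """capacity: dict of dicts, u -> v -> edge capacity. Return max flow value."""
    residual = {u: dict(vs) for u, vs in capacity.items()}
    for u, vs in capacity.items():            # add reverse edges with capacity 0
        for v in vs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}                    # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:                   # no augmenting path left: done
            return flow
        path, v = [], t                       # rebuild the s -> t path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:                     # push flow, update residual network
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

The `residual` dictionary is exactly the residual network described above: forward edges hold remaining capacity, reverse edges allow flow to be "undone".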

14. Convert LPP into Slack Form


This question requires the Linear Programming Problem, which was not provided in the text. To
convert an LPP into slack form, you would:
1.​ Ensure it is a maximization problem.
2.​ For each ≤ inequality, add a new non-negative slack variable to the left side to turn it
into an equality.
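As a made-up illustration (the paper's actual LPP was not provided): the constraint x₁ + 2x₂ ≤ 8 gains a slack variable s₁ ≥ 0, and in slack form the slack variable is written as the basic variable on the left:

```latex
x_1 + 2x_2 \le 8
\quad\Longrightarrow\quad
s_1 = 8 - x_1 - 2x_2, \qquad x_1,\, x_2,\, s_1 \ge 0
```

The slack s₁ measures how much room the constraint has left; the constraint is tight exactly when s₁ = 0.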

15. Steps for the Simplex Method


1.​ Standard Form: Convert the LPP into a maximization problem with all constraints as
equalities by adding slack/surplus variables.
2.​ Initial Tableau: Construct the initial simplex tableau from the equations.
3.​ Check Optimality: If all indicators in the objective function row are non-negative, the
solution is optimal.
4.​ Select Pivot Column: Choose the column with the most negative indicator (the entering
variable).
5.​ Select Pivot Row: Calculate the non-negative ratios of the solution column to the pivot
column. The row with the smallest ratio is the pivot row (the leaving variable).
6.​ Pivot: Perform row operations to make the pivot element 1 and all other elements in its
column 0.
7.​ Repeat: Go back to Step 3 and continue until an optimal solution is found.

16. Converting Primal to Dual Problem


Primal Problem:
●​ Minimize Z = 4x₁ + 2x₂ − x₃
●​ Subject to:
○​ x₁ + x₂ + 2x₃ ≥ 3
○​ 2x₁ − 2x₂ + 4x₃ ≤ 5
○​ x₁, x₂, x₃ ≥ 0.

Dual Problem:
●​ Maximize W = 3y₁ + 5y₂
●​ Subject to:
○​ y₁ + 2y₂ ≤ 4
○​ y₁ − 2y₂ ≤ 2
○​ 2y₁ + 4y₂ ≤ −1
●​ And variable constraints: y₁ ≥ 0, y₂ ≤ 0.

17. Backtracking and 4-Queens Problem


Backtracking is an algorithmic technique for solving problems recursively by trying to build a
solution incrementally. When it determines that a partial solution cannot lead to a valid
complete solution, it "backtracks" to the previous step and tries a different option.

4-Queens Solution:

One of the two possible solutions places queens, as (row, column) pairs, at (0,2), (1,0), (2,3), and (3,1):

..Q.
Q...
...Q
.Q..
Portion of the State-Space Tree:

The tree starts with a queen at (0,0); every way of completing the remaining rows dead-ends,
so the search backtracks and tries the next column in row 0.

(Start)
   |
   +-- Q at (0,0)
   |      |
   |      ... all placements for rows 1-3 fail (DEAD END) --> backtrack
   |
   +-- Q at (0,1) --> leads to a solution
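The search sketched above can be written as a compact backtracking solver for n-queens; each row is one level of the state-space tree, and a row with no safe column triggers the backtrack:

```python
def solve_n_queens(n):
    """Return all solutions; each is a list where index = row, value = column."""
    solutions = []

    def safe(cols, col):
        row = len(cols)                       # row we are trying to fill
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:                    # all rows filled: record solution
            solutions.append(cols[:])
            return
        for col in range(n):
            if safe(cols, col):
                cols.append(col)              # try this column
                place(cols)                   # recurse into the next row
                cols.pop()                    # backtrack and try the next column

    place([])
    return solutions
```

For n = 4 this finds exactly the two solutions, [1, 3, 0, 2] and [2, 0, 3, 1].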

18. 0/1 Knapsack Problem in Branch and Bound


The 0/1 Knapsack Problem seeks to maximize the value of items placed in a knapsack without
exceeding its weight capacity, where each item must be either fully taken or left behind.

The Branch and Bound technique solves this by exploring a state-space tree where each node
represents a decision (take or leave an item). It efficiently prunes the tree by:
1.​ Calculating an upper bound at each node (e.g., by solving the fractional knapsack
problem for the remaining items).
2.​ If this bound is lower than the best solution found so far, the entire subtree from that
node is discarded.
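A best-first branch-and-bound sketch using the fractional-knapsack upper bound described above (the item data in the test is invented for illustration):

```python
import heapq

def knapsack_bb(values, weights, capacity):
    """0/1 knapsack by best-first branch and bound; returns the best value."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]            # items by decreasing value density
    w = [weights[i] for i in order]

    def bound(i, value, room):
        """Upper bound: greedily fill with fractions of remaining items."""
        b = value
        while i < n and w[i] <= room:
            room -= w[i]; b += v[i]; i += 1
        if i < n:
            b += v[i] * room / w[i]           # fractional part of the next item
        return b

    best = 0
    heap = [(-bound(0, 0, capacity), 0, 0, capacity)]  # max-heap on the bound
    while heap:
        neg_b, i, value, room = heapq.heappop(heap)
        if -neg_b <= best or i == n:
            continue                          # prune: bound can't beat incumbent
        if w[i] <= room:                      # branch 1: take item i
            nv = value + v[i]
            best = max(best, nv)
            heapq.heappush(heap, (-bound(i + 1, nv, room - w[i]),
                                  i + 1, nv, room - w[i]))
        heapq.heappush(heap, (-bound(i + 1, value, room),  # branch 2: skip item i
                              i + 1, value, room))
    return best
```

Nodes whose fractional bound cannot beat the best solution found so far are discarded without expansion — the pruning step described above.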

19. NP-Hard vs. NP-Complete


●​ NP-Complete: A problem is NP-Complete if it meets two conditions:​

1.​ It is in the class NP (a solution can be verified in polynomial time).


2.​ It is NP-Hard.
●​ NP-Hard: A problem is NP-Hard if it is at least as hard as any problem in NP. Any NP
problem can be reduced to an NP-Hard problem in polynomial time.​

Key Difference: All NP-Complete problems are NP-Hard, but an NP-Hard problem is not
necessarily in NP (e.g., the Halting Problem). NP-Complete problems are always decision
problems, while NP-Hard problems can be optimization problems.

20. Max Clique Problem


The Max Clique Problem is the problem of finding the largest clique in a given undirected
graph.
●​ A clique is a subset of vertices in a graph where every two distinct vertices are
connected by an edge.
●​ The goal is to find the clique with the maximum possible number of vertices. The
decision version of this problem ("Does a clique of size k exist?") is NP-Complete.
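A brute-force search makes the definition concrete — and its exponential cost matches the NP-completeness of the decision version (the small graph in the test is invented):

```python
from itertools import combinations

def max_clique(vertices, edges):
    """Brute-force maximum clique; exponential in the number of vertices."""
    adj = set(map(frozenset, edges))
    # try subset sizes from largest to smallest; first hit is a maximum clique
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            # a clique requires every pair in the subset to be an edge
            if all(frozenset(p) in adj for p in combinations(subset, 2)):
                return set(subset)
    return set()
```

Checking whether a given subset is a clique is fast (polynomial), which is exactly why the decision version is in NP; finding the largest one is the hard part.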
