Lab Manual DAA
EXP.NO:1
IMPLEMENT RECURSIVE AND NON-RECURSIVE ALGORITHMS AND STUDY THE
ORDER OF GROWTH FROM LOG2N TO N!.
Algorithm:
1. Define the function Factorial that takes an integer parameter n.
2. Check if n is 0 or 1. If so, return 1 because the factorial of 0 and 1 is 1.
3. If n is not 0 or 1, recursively call the Factorial function with the argument n - 1.
This means calculating the factorial of the previous number.
4. Multiply n by the result obtained from the recursive call and return the final result.
Program:
Recursive Algorithm:
def recursive_factorial(n):
    if n == 0:
        return 1
    else:
        return n * recursive_factorial(n - 1)

# Example usage
n = 5
recursive_result = recursive_factorial(n)
print("Recursive Factorial Result:", recursive_result)
OUTPUT:
Recursive Factorial Result: 120
The recursive algorithm calculates the factorial of a number n by recursively multiplying n with the
factorial of n-1. The base case is when n reaches 0, in which case the function returns 1.
Non-Recursive Algorithm:
Algorithm:
1. Initialize result to 1.
2. For each i from 1 to n, multiply result by i.
3. Return result.
Program:
def non_recursive_factorial(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

# Example usage
n = 5
non_recursive_result = non_recursive_factorial(n)
print("Non-Recursive Factorial Result:", non_recursive_result)
OUTPUT:
Non-Recursive Factorial Result: 120
The non-recursive algorithm calculates the factorial of a number n by using a loop. It initializes the
factorial variable to 1 and then iteratively multiplies it with each number from 1 to n.
The order of growth refers to how the runtime of an algorithm increases as the input size (n) grows. Both factorial algorithms above perform n multiplications, so each runs in O(n) time; the recursive version additionally uses O(n) stack space for its nested calls. More generally, the common growth rates, ordered from slowest-growing to fastest-growing, are:
log2n < n < nlog2n < n^2 < n^3 < ... < 2^n < n!
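To see this ordering concretely, a small script can tabulate each growth function for a few sample input sizes (a minimal illustrative sketch, not part of the experiment itself):

import math

# Tabulate the common growth functions for a few sample input sizes
growth_functions = [
    ("log2 n", lambda n: math.log2(n)),
    ("n", lambda n: n),
    ("n log2 n", lambda n: n * math.log2(n)),
    ("n^2", lambda n: n ** 2),
    ("n^3", lambda n: n ** 3),
    ("2^n", lambda n: 2 ** n),
    ("n!", lambda n: math.factorial(n)),
]
for n in [4, 8, 16]:
    print("n =", n)
    for name, f in growth_functions:
        print("  %-10s %d" % (name, f(n)))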
VIVA QUESTIONS:
1. What is a recursive algorithm?
A recursive algorithm is an algorithm that solves a problem by solving smaller instances of the same
problem. It involves breaking down a problem into smaller subproblems and solving them
recursively until reaching a base case, which is a simple case that can be solved directly.
2. What is a non-recursive algorithm?
A non-recursive algorithm is an algorithm that solves a problem without using recursion. It typically
uses iterative techniques such as loops and stack data structures to achieve the desired result.
3. What are the advantages of recursive algorithms?
Recursive algorithms are often simpler and more concise than their non-recursive counterparts, as
they express the problem in terms of smaller subproblems. They can be easier to understand and
implement for problems that naturally exhibit recursive structure.
4. What are the advantages of non-recursive algorithms?
Non-recursive algorithms can be more efficient in terms of time and space complexity compared to
recursive algorithms. They usually involve explicit control flow and avoid the overhead of function
calls, making them more suitable for large-scale problems.
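A rough way to observe this call overhead is to time the two factorial versions above with the standard timeit module; a minimal sketch (exact timings vary by machine and Python version):

import timeit

setup = """
def recursive_factorial(n):
    return 1 if n == 0 else n * recursive_factorial(n - 1)

def non_recursive_factorial(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result
"""
# Time 1000 calls of each version; the loop version avoids function-call overhead
print("recursive:", timeit.timeit("recursive_factorial(400)", setup=setup, number=1000))
print("iterative:", timeit.timeit("non_recursive_factorial(400)", setup=setup, number=1000))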
5. How do you analyze the order of growth of an algorithm?
To analyze the order of growth for algorithms, you can consider the time or space complexity. Time
complexity refers to how the algorithm's execution time increases with the input size, while space
complexity refers to how the algorithm's memory usage grows with the input size. Common
notations used to represent the order of growth include O(), Ω(), and Θ().
EXP.NO:2 STRASSEN'S MATRIX MULTIPLICATION USING DIVIDE AND CONQUER
Algorithm:
Program:
import numpy as np
def strassen_matrix_multiply(A, B):
    n = A.shape[0]
    if n == 1:  # Base case: 1x1 matrices multiply directly
        return A * B
    mid = n // 2
    # Split each matrix into four quadrants
    A11, A12, A21, A22 = A[:mid, :mid], A[:mid, mid:], A[mid:, :mid], A[mid:, mid:]
    B11, B12, B21, B22 = B[:mid, :mid], B[:mid, mid:], B[mid:, :mid], B[mid:, mid:]
    # Recursive steps: seven products instead of eight
    P1 = strassen_matrix_multiply(A11 + A22, B11 + B22)
    P2 = strassen_matrix_multiply(A21 + A22, B11)
    P3 = strassen_matrix_multiply(A11, B12 - B22)
    P4 = strassen_matrix_multiply(A22, B21 - B11)
    P5 = strassen_matrix_multiply(A11 + A12, B22)
    P6 = strassen_matrix_multiply(A21 - A11, B11 + B12)
    P7 = strassen_matrix_multiply(A12 - A22, B21 + B22)
    # Combine the seven products into the quadrants of the result
    C11 = P1 + P4 - P5 + P7
    C12 = P3 + P5
    C21 = P2 + P4
    C22 = P1 - P2 + P3 + P6
    return np.vstack((np.hstack((C11, C12)), np.hstack((C21, C22))))
# Example usage (B is not shown in the manual; an example matrix is assumed)
A = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12],
              [13, 14, 15, 16]])
B = np.array([[17, 18, 19, 20], [21, 22, 23, 24],
              [25, 26, 27, 28], [29, 30, 31, 32]])
product = strassen_matrix_multiply(A, B)
print("Product of A and B:")
print(product)
OUTPUT:
Product of A and B:
VIVA QUESTIONS:
1. What is Strassen's Matrix Multiplication algorithm?
Strassen's Matrix Multiplication algorithm is an efficient divide and conquer algorithm used to
multiply two matrices. It reduces the number of required multiplications by breaking down the
matrices into smaller submatrices and recursively computing partial products.
2. How does Strassen's Matrix Multiplication algorithm work?
Strassen's algorithm works by dividing each matrix into four submatrices of equal size,
recursively multiplying these submatrices using seven multiplications instead of the usual eight, and
combining the results to obtain the final product.
3. What are the advantages of Strassen's Matrix Multiplication algorithm?
Strassen's algorithm reduces the number of multiplications compared to the traditional matrix
multiplication algorithm, resulting in a lower time complexity. It is particularly efficient for large
matrices, leading to improved performance in certain scenarios.
4. What is the time complexity of Strassen's Matrix Multiplication algorithm?
The time complexity of Strassen's algorithm is O(n^log2(7)), where n is the dimension of the
matrices being multiplied. This is a more efficient time complexity compared to the traditional
matrix multiplication algorithm, which has a time complexity of O(n^3).
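This bound follows from the divide-and-conquer recurrence: each call performs seven half-size multiplications plus quadratic-time matrix additions,

T(n) = 7\,T(n/2) + \Theta(n^2) \;\implies\; T(n) = \Theta\bigl(n^{\log_2 7}\bigr) \approx \Theta\bigl(n^{2.81}\bigr)

by the Master Theorem, since \log_2 7 \approx 2.81 > 2.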
5. Are there any limitations or considerations when using Strassen's Matrix
Multiplication?
Strassen's algorithm is more efficient for large matrices, but it may not always be the fastest
algorithm for smaller matrices due to the overhead of recursion and additional operations involved.
Additionally, it requires matrices to have dimensions that are powers of 2.
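One common workaround, sketched below under the assumption that strassen_matrix_multiply is defined as in the program above, is to zero-pad the inputs up to the next power-of-two size and trim the result:

import numpy as np

def next_power_of_two(n):
    size = 1
    while size < n:
        size *= 2
    return size

def pad_to_power_of_two(M):
    # Zero-pad a matrix so both dimensions equal the next power of two
    size = next_power_of_two(max(M.shape))
    padded = np.zeros((size, size), dtype=M.dtype)
    padded[:M.shape[0], :M.shape[1]] = M
    return padded

# The true product is the top-left block of the padded product:
# C = strassen_matrix_multiply(pad_to_power_of_two(A), pad_to_power_of_two(B))[:rows, :cols]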
EXP.NO:3 TOPOLOGICAL SORTING USING DECREASE AND CONQUER
Algorithm:
Program:
from collections import defaultdict

def topological_sort(graph):
    def dfs(node):
        visited.add(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                dfs(neighbor)
        sorted_nodes.append(node)
    visited = set()
    sorted_nodes = []
    for node in graph:
        if node not in visited:
            dfs(node)
    return sorted_nodes[::-1]

# Example usage
graph = defaultdict(list)
graph[1] = [2, 3]
graph[2] = [4]
graph[3] = [4, 5]
graph[4] = [6]
graph[5] = []
graph[6] = []
sorted_order = topological_sort(graph)
print("Topological Sort Order:")
print(sorted_order)
OUTPUT:
[1, 3, 5, 2, 4, 6]
VIVA QUESTIONS:
EXP.NO:4 HEAP SORT USING TRANSFORM AND CONQUER
Algorithm:
Program:
def heapify(arr, n, i):
    largest = i
    left = 2 * i + 1
    right = 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    # Build a max heap from the unsorted array
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)
    # Perform sorting: move the current maximum to the end, then re-heapify
    for i in range(n - 1, 0, -1):
        arr[0], arr[i] = arr[i], arr[0]
        heapify(arr, i, 0)

# Example usage
arr = [12, 11, 13, 5, 6, 7]
heap_sort(arr)
print("Sorted array:")
print(arr)
OUTPUT:
Sorted array:
[5, 6, 7, 11, 12, 13]
VIVA QUESTIONS:
1. What is Heap Sort?
Heap Sort is an efficient comparison-based sorting algorithm that uses the concept of a binary heap
data structure. It involves building a heap from the given array and repeatedly extracting the
maximum (for ascending order) or minimum (for descending order) element to sort the array.
2. How does Heap Sort work?
Heap Sort works by first building a binary heap from the given array, which can be done in O(N) time
complexity. Then, it repeatedly extracts the root element of the heap (the maximum or minimum,
depending on the desired order) and places it at the end of the array. After each extraction, the heap
is restored to maintain its properties, and the process continues until the entire array is sorted.
3. What is the time complexity of Heap Sort?
The time complexity of Heap Sort is O(N log N), where N is the number of elements in the array.
Building the heap takes O(N) time, and each of the N extraction/restoration steps takes O(log N)
time, giving O(N log N) overall.
5. What are the advantages of Heap Sort?
● It has a guaranteed worst-case time complexity of O(N log N), making it efficient
for large datasets.
● It is an in-place sorting algorithm, meaning it does not require additional memory
beyond the input array.
● Its worst-case performance matches its average case, so it does not degrade on
adversarial inputs the way Quick Sort can.
6. What are the limitations of Heap Sort?
● It is not a stable sorting algorithm, meaning it may change the order of equal
elements.
● It has a relatively high constant factor and requires more comparisons than
some other sorting algorithms, which can impact performance for small datasets.
7. Can you explain the process of building a binary heap in Heap Sort?
Building a binary heap involves starting from the middle index of the array and repeatedly percolating
down each element to its proper position in the heap. The percolation process compares the
element with its children and swaps it with the larger (for max heap) or smaller (for min heap) child
until the heap property is satisfied.
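A minimal, self-contained sketch of this build-heap phase (the heapify here mirrors the one in the program above):

def heapify(arr, n, i):
    # Percolate arr[i] down until the max-heap property holds below it
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def build_max_heap(arr):
    # Start at the last non-leaf node (index n//2 - 1) and work back to the root
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)

# Example: after building, the maximum element sits at the root (index 0)
data = [5, 12, 7, 13, 6, 11]
build_max_heap(data)
print(data)  # [13, 12, 11, 5, 6, 7]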
EXP.NO:5(a) COIN CHANGE PROBLEM USING DYNAMIC PROGRAMMING
Algorithm:
Program:
def coin_change(coins, target):
    dp = [float('inf')] * (target + 1)
    dp[0] = 0
    # Fill the table bottom-up: dp[a] = minimum coins needed for amount a
    for a in range(1, target + 1):
        for coin in coins:
            if coin <= a:
                dp[a] = min(dp[a], dp[a - coin] + 1)
    if dp[target] == float('inf'):
        return -1
    return dp[target]

# Example usage
coins = [1, 2, 5]
target = 11
print("Minimum coins required:", coin_change(coins, target))
OUTPUT:
Minimum coins required: 3
VIVA QUESTIONS:
1. What is the Coin Change Problem?
The Coin Change Problem is a classic dynamic programming problem that involves finding the
minimum number of coins needed to make a given amount of money. Given a set of coin
denominations and a target amount, the goal is to determine the minimum number of coins required
to make up that amount.
3. What is the approach used in Dynamic Programming for the Coin Change
Problem?
The approach used in Dynamic Programming for the Coin Change Problem is known as the
"bottom-up" approach. It involves building a table or an array to store the optimal solutions for
subproblems, starting from the smallest possible subproblem and progressively filling the table until
reaching the target amount.
4. How does the Dynamic Programming algorithm initialize the table for the Coin
Change Problem?
The table for the Coin Change Problem is initialized with values representing an invalid or
unreachable state, such as infinity or a large value. This ensures that the algorithm can properly
track the minimum number of coins required to make each amount.
5. What is the recurrence relation used in the Dynamic Programming algorithm for
the Coin Change Problem?
The recurrence relation for the Coin Change Problem states that the minimum number of coins
required to make an amount a is dp[a] = 1 + min(dp[a - c]), taken over every coin denomination c
with c <= a, with the base case dp[0] = 0: the best solution for a uses some coin c on top of the
optimal solution for the remaining amount a - c. This relation is used to fill the table iteratively.
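As a worked instance of this relation, using the coins and target from the program above:

dp[11] = 1 + \min(dp[10],\, dp[9],\, dp[6]) = 1 + \min(2,\, 3,\, 2) = 3

since dp[10] = 2 (5+5), dp[9] = 3 (5+2+2), and dp[6] = 2 (5+1); three coins (5+5+1) are optimal for 11.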
6. What is the time complexity of the Dynamic Programming algorithm for the
Coin Change Problem?
The time complexity of the Dynamic Programming algorithm for the Coin Change Problem is O(A *
N), where A is the target amount and N is the number of coin denominations. It iterates over each
amount and each coin denomination once to fill the table.
EXP.NO:5(b) WARSHALL’S AND FLOYD‘S ALGORITHMS USING DYNAMIC
PROGRAMMING
Algorithm:
Program:
def warshall(adj_matrix):
    num_vertices = len(adj_matrix)
    tc = [row[:] for row in adj_matrix]  # Create a copy of the adjacency matrix
    for k in range(num_vertices):
        for i in range(num_vertices):
            for j in range(num_vertices):
                tc[i][j] = tc[i][j] or (tc[i][k] and tc[k][j])
    return tc

# Example usage
adj_matrix = [[0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0],
              [1, 0, 1, 0]]
transitive_closure = warshall(adj_matrix)
print("Transitive Closure Matrix:")
for row in transitive_closure:
    print(row)
OUTPUT:
Transitive Closure Matrix:
[1, 1, 1, 1]
[1, 1, 1, 1]
[0, 0, 0, 0]
[1, 1, 1, 1]
VIVA QUESTIONS:
1. What is Warshall's Algorithm?
Warshall's Algorithm is a dynamic programming algorithm used to compute the transitive closure
of a directed graph. It determines the reachability between all pairs of vertices in a graph by
iteratively considering intermediate vertices.
2. How does Warshall's Algorithm work?
Warshall's Algorithm works by initializing a matrix to represent the reachability between pairs of
vertices. It then iteratively updates the matrix by considering an intermediate vertex and updating the
reachability based on whether a path exists between two vertices that goes through the intermediate
vertex.
3. What is the time complexity of Warshall's Algorithm?
The time complexity of Warshall's Algorithm is O(N^3), where N is the number of vertices in the
graph. It involves three nested loops that iterate over all vertices, making it cubic in the worst case.
4. What is Floyd's Algorithm?
Floyd's Algorithm is a dynamic programming algorithm used to compute the shortest paths between
all pairs of vertices in a weighted directed graph. It determines the shortest distance between all
pairs of vertices by iteratively considering intermediate vertices.
5. How does Floyd's Algorithm work?
Floyd's Algorithm works by initializing a matrix to represent the shortest distances between pairs of
vertices. It then iteratively updates the matrix by considering an intermediate vertex and updating the
shortest distances based on whether a shorter path exists between two vertices that goes through
the intermediate vertex.
6. What is the time complexity of Floyd's Algorithm?
The time complexity of Floyd's Algorithm is O(N^3), where N is the number of vertices in the graph. It
involves three nested loops that iterate over all vertices, making it cubic in the worst case.
7. How do Warshall's and Floyd's Algorithms differ?
Warshall's Algorithm computes the transitive closure of a directed graph, determining reachability
between all pairs of vertices. Floyd's Algorithm computes the shortest paths between all pairs of
vertices in a weighted directed graph.
def floyd(adj_matrix):
    num_vertices = len(adj_matrix)
    dist = [row[:] for row in adj_matrix]  # Create a copy of the adjacency matrix
    for k in range(num_vertices):
        for i in range(num_vertices):
            for j in range(num_vertices):
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    return dist

# Example usage
adj_matrix = [[0, 1, float('inf'), 3],
              [2, 0, 4, float('inf')],
              [float('inf'), float('inf'), 0, 2],
              [float('inf'), float('inf'), 1, 0]]
shortest_distances = floyd(adj_matrix)
print("Shortest Distance Matrix:")
for row in shortest_distances:
    print(row)
OUTPUT:
Shortest Distance Matrix:
[0, 1, 4, 3]
[2, 0, 4, 5]
[inf, inf, 0, 2]
[inf, inf, 1, 0]
VIVA QUESTIONS:
1. What is Floyd's Algorithm?
Floyd's Algorithm is a dynamic programming algorithm used to compute the shortest paths
between all pairs of vertices in a weighted directed graph. It determines the shortest distance
between all pairs of vertices by iteratively considering intermediate vertices.
2. How does Floyd's Algorithm work?
Floyd's Algorithm works by initializing a matrix to represent the shortest distances between pairs
of vertices. It then iteratively updates the matrix by considering an intermediate vertex and updating
the shortest distances based on whether a shorter path exists between two vertices that goes
through the intermediate vertex.
3. What is the main idea behind Floyd's Algorithm?
The main idea behind Floyd's Algorithm is to consider all possible intermediate vertices and
check if using a particular intermediate vertex results in a shorter path between two vertices. By
iteratively updating the matrix, the algorithm gradually finds the shortest distances between all pairs
of vertices.
4. What is the role of intermediate vertices in Floyd's Algorithm?
The intermediate vertices in Floyd's Algorithm play a crucial role in determining the shortest paths.
By considering all possible intermediate vertices, the algorithm explores different paths and updates
the matrix to find the shortest distances between all pairs of vertices.
5. How is the Floyd-Warshall Algorithm related to Floyd's Algorithm?
The Floyd-Warshall Algorithm is the full name of Floyd's Algorithm: both names refer to the same
dynamic programming algorithm for computing all-pairs shortest paths in a weighted directed graph.
It remains correct in the presence of negative edge weights, as long as the graph contains no
negative-weight cycles.
6. What is the time complexity of Floyd's Algorithm?
The time complexity of Floyd's Algorithm is O(N^3), where N is the number of vertices in the
graph. It involves three nested loops that iterate over all vertices, making it cubic in the worst case.
7. What are the applications of Floyd's Algorithm?
Floyd's Algorithm has various applications, including finding the shortest paths in transportation
networks, computing distances between locations, network routing, and optimizing communication
networks.
EXP.NO:5(c) KNAPSACK PROBLEM USING DYNAMIC PROGRAMMING
Algorithm:
Program:
def knapsack(items, weights, values, capacity):
    n = len(items)
    # dp[i][j] = best value achievable using the first i items with capacity j
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(capacity + 1):
            dp[i][j] = dp[i - 1][j]  # Skip item i
            if weights[i - 1] <= j:  # Or take item i if it fits
                dp[i][j] = max(dp[i][j], dp[i - 1][j - weights[i - 1]] + values[i - 1])
    max_value = dp[n][capacity]
    # Retrieve selected items (optional)
    selected_items = []
    i = n
    j = capacity
    while i > 0 and j > 0:
        if dp[i][j] != dp[i - 1][j]:
            selected_items.append(items[i - 1])
            j -= weights[i - 1]
        i -= 1
    selected_items.reverse()
    return max_value, selected_items

# Example usage
items = ['Item1', 'Item2', 'Item3', 'Item4', 'Item5']
weights = [2, 3, 4, 5, 6]
values = [5, 7, 9, 11, 13]
capacity = 10
max_value, selected_items = knapsack(items, weights, values, capacity)
print("Maximum value:", max_value)
print("Selected items:", selected_items)
OUTPUT:
Maximum value: 23
Selected items: ['Item1', 'Item2', 'Item4']
VIVA QUESTIONS:
1. What is the Knapsack Problem?
The Knapsack Problem is a classic optimization problem that involves selecting items with certain
values and weights to maximize the total value while ensuring that the total weight does not exceed
a given capacity.
2. How does Dynamic Programming solve the Knapsack Problem?
Dynamic Programming solves the Knapsack Problem by breaking it down into smaller subproblems
and solving them iteratively. It builds an optimal solution for each subproblem based on previously
computed solutions, gradually solving larger subproblems until reaching the desired capacity.
3. What is the approach used in Dynamic Programming for the Knapsack Problem?
The approach used in Dynamic Programming for the Knapsack Problem is known as the "bottom-up"
approach. It involves building a table or an array to store the optimal solutions for subproblems,
starting from the smallest possible subproblem and progressively filling the table until reaching the
desired capacity.
4. What is the time complexity of the Dynamic Programming algorithm for the
Knapsack Problem?
The time complexity of the Dynamic Programming algorithm for the Knapsack Problem is O(NW),
where N is the number of items and W is the capacity of the knapsack. It iterates over each item and
each possible capacity once to fill the table.
5. How does the Dynamic Programming algorithm initialize the table for the
Knapsack Problem?
The table for the Knapsack Problem is initialized with appropriate values based on the problem
requirements. Typically, it is initialized with zeros or negative infinity to indicate the absence of any
items or a non-feasible solution.
6. What is the recurrence relation used in the Dynamic Programming algorithm for
the Knapsack Problem?
The recurrence relation for the Knapsack Problem states that the optimal value for a given capacity
is the maximum between taking the current item and considering the remaining capacity or not
taking the current item and considering the remaining items.
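Written out, with w_i and v_i the weight and value of item i and dp[i][j] the best value for capacity j using the first i items:

dp[i][j] = \begin{cases} \max\bigl(dp[i-1][j],\; dp[i-1][j - w_i] + v_i\bigr) & \text{if } w_i \le j \\ dp[i-1][j] & \text{otherwise} \end{cases}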
EXP.NO:6(a) DIJKSTRA'S ALGORITHM USING GREEDY TECHNIQUE
Algorithm:
Program:
import heapq
while pq:
dist, u = heapq.heappop(pq)
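The listing above is only a fragment. A minimal complete sketch with heapq is given below; the adjacency list is an assumed reconstruction (the manual does not show its graph), chosen so the result matches the printed output:

import heapq

def dijkstra(graph, source, num_vertices):
    dist = [float('inf')] * num_vertices
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # Stale queue entry: a shorter path was already found
        for v, weight in graph.get(u, []):
            if d + weight < dist[v]:  # Relax the edge (u, v)
                dist[v] = d + weight
                heapq.heappush(pq, (dist[v], v))
    return dist

# Assumed example graph: {vertex: [(neighbor, weight), ...]}
graph = {0: [(1, 3), (2, 2)], 1: [], 2: [(1, 1), (3, 1)], 3: []}
print(dijkstra(graph, 0, 5))  # [0, 3, 2, 3, inf]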
OUTPUT:
[0, 3, 2, 3, inf]
VIVA QUESTIONS:
1. What is Dijkstra's Algorithm?
Dijkstra's Algorithm is a greedy algorithm used to find the shortest path from a starting vertex to
all other vertices in a weighted graph with non-negative edge weights. It constructs a shortest path
tree by iteratively selecting the vertex with the minimum distance and relaxing its adjacent vertices.
2. How does Dijkstra's Algorithm work?
Dijkstra's Algorithm works by maintaining a set of vertices whose shortest path distances from
the starting vertex have been determined. It iteratively selects the vertex with the minimum distance
and updates the distances of its adjacent vertices if a shorter path is found. This process continues
until all vertices have been processed.
3. What is the main idea behind Dijkstra's Algorithm?
The main idea behind Dijkstra's Algorithm is to iteratively find the vertex with the minimum
distance and add it to the set of processed vertices. By doing so, the algorithm gradually builds the
shortest path tree, ensuring that the selected vertices have the shortest distances from the starting
vertex.
4. What data structures are used in Dijkstra's Algorithm?
Dijkstra's Algorithm typically uses a priority queue (such as a min-heap) to efficiently select the
vertex with the minimum distance. Additionally, it uses an array or a data structure to store the
shortest distances from the starting vertex to each vertex in the graph.
5. What is the time complexity of Dijkstra's Algorithm?
The time complexity of Dijkstra's Algorithm depends on the implementation and the data
structures used. With a binary heap as the priority queue, the time complexity is O((V + E) log V),
where V is the number of vertices and E is the number of edges in the graph.
6. What are the limitations of Dijkstra's Algorithm?
Dijkstra's Algorithm assumes non-negative edge weights. If the graph contains negative edge
weights, it may produce incorrect results; the Bellman-Ford algorithm handles that case instead.
Cycles are not a problem in themselves, provided all edge weights are non-negative.
EXP.NO:6(b) HUFFMAN TREES AND CODES USING GREEDY TECHNIQUE
Algorithm:
Program:
from queue import PriorityQueue

class Node:
    def __init__(self, char, frequency):
        self.char = char
        self.frequency = frequency
        self.left = None
        self.right = None

def build_huffman_tree(characters, frequencies):
    pq = PriorityQueue()
    # The tie-breaking counter keeps Node objects from being compared directly
    for i in range(len(characters)):
        node = Node(characters[i], frequencies[i])
        pq.put((node.frequency, i, node))
    count = len(characters)
    while pq.qsize() > 1:
        freq1, _, node1 = pq.get()
        freq2, _, node2 = pq.get()
        # Merge the two lowest-frequency nodes under a new internal node
        new_node = Node(None, freq1 + freq2)
        new_node.left = node1
        new_node.right = node2
        pq.put((new_node.frequency, count, new_node))
        count += 1
    root = pq.get()[2]
    return root

def generate_huffman_codes(root):
    codes = {}
    def traverse(node, code):
        if node is None:
            return
        if node.char is not None:  # Leaf node: record its code
            codes[node.char] = code
        traverse(node.left, code + "0")
        traverse(node.right, code + "1")
    traverse(root, "")
    return codes

# Example usage (the manual does not list the frequencies; these are assumed,
# chosen so that the resulting codes match the output shown below)
characters = ['A', 'B', 'C', 'D', 'E']
frequencies = [20, 10, 15, 12, 17]
root = build_huffman_tree(characters, frequencies)
huffman_codes = generate_huffman_codes(root)
print("Huffman Codes:")
for char, code in huffman_codes.items():
    print(char, ":", code)
OUTPUT:
Huffman Codes:
C : 00
E : 01
A : 10
B : 110
D : 111
VIVA QUESTIONS:
1. What are Huffman Trees and Codes?
Huffman Trees and Codes are a technique used for lossless data compression. They involve
constructing a binary tree known as a Huffman tree and assigning variable-length codes to different
characters or symbols based on their frequencies in the input data.
2. How does Huffman Coding work?
Huffman Coding works by assigning shorter codes to more frequently occurring symbols and
longer codes to less frequently occurring symbols. It achieves data compression by representing
frequently occurring symbols with fewer bits and less frequently occurring symbols with more bits.
3. What is the main idea behind Huffman Trees and Codes?
The main idea behind Huffman Trees and Codes is to create an efficient binary encoding scheme
by constructing a tree where the most frequent symbols are closer to the root. This ensures that the
most common symbols have shorter codes, reducing the overall number of bits required to
represent the input data.
4. How is a Huffman Tree constructed?
A Huffman Tree is constructed by iteratively merging the two nodes with the lowest frequencies to
create a new parent node. This process continues until all symbols are combined into a single root
node, resulting in a binary tree.
5. What is the advantage of Huffman Coding?
The advantage of Huffman Coding is that it produces an optimal prefix-free code, meaning that no
code is a prefix of any other code. This property ensures that the encoded data can be uniquely
decoded, allowing for lossless compression.
7. Can you explain the process of encoding and decoding using Huffman Trees
and Codes?
Encoding using Huffman Trees and Codes involves traversing the tree to find the corresponding
code for each symbol and concatenating these codes to form the encoded data. Decoding involves
traversing the tree based on the encoded bits to reconstruct the original symbols.
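A minimal sketch of both directions, using the codes from the output above; because the code is prefix-free, the decoder can emit a symbol as soon as the accumulated bits match a code:

def encode(text, codes):
    return "".join(codes[ch] for ch in text)

def decode(bits, codes):
    reverse = {code: ch for ch, code in codes.items()}
    decoded, current = [], ""
    for bit in bits:
        current += bit
        if current in reverse:  # Prefix-free: first match is a whole symbol
            decoded.append(reverse[current])
            current = ""
    return "".join(decoded)

codes = {'C': '00', 'E': '01', 'A': '10', 'B': '110', 'D': '111'}
encoded = encode("ABCDE", codes)
print(encoded)                 # 101100011101
print(decode(encoded, codes))  # ABCDE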
EXP.NO:7 SIMPLEX METHOD USING ITERATIVE IMPROVEMENT
Algorithm:
Program:
import numpy as np
from scipy.optimize import linprog
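The listing stops after the imports. A minimal sketch of a complete program is given below; the LP itself is assumed (the manual does not show one) and is chosen so that the optimum sits at the origin, matching the printed output. Note that modern SciPy solves linprog problems with the HiGHS solvers rather than the classic simplex tableau:

import numpy as np
from scipy.optimize import linprog

# Assumed LP: minimize 2x + 3y subject to x + y <= 4, 2x + y <= 5, x, y >= 0.
# A non-negative objective over constraints satisfied at the origin gives [0, 0].
c = np.array([2, 3])
A_ub = np.array([[1, 1], [2, 1]])
b_ub = np.array([4, 5])
result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("Optimal Solution:")
print(result.x)
print(result.fun)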
OUTPUT:
Optimal Solution:
[0. 0.]
0.0
VIVA QUESTIONS:
1. What is the Simplex Method?
The Simplex Method is an iterative algorithm used to solve linear programming problems. It
optimizes a linear objective function while satisfying a set of linear constraints by iteratively moving
from one feasible solution to another until an optimal solution is reached.
2. How does the Simplex Method work?
The Simplex Method starts with an initial feasible solution and iteratively improves it by moving
along the edges of the feasible region. It selects an entering variable (corresponding to a non-basic
variable with a positive coefficient in the objective function) and a departing variable (corresponding
to a basic variable to be replaced). It then performs pivot operations to obtain a new feasible
solution with a better objective function value.
3. What is the main idea behind the Iterative Improvement in the Simplex Method?
The main idea behind the Iterative Improvement in the Simplex Method is to iteratively improve the
current feasible solution by iteratively moving from one basic feasible solution to another, eventually
reaching the optimal solution. This improvement is achieved through pivot operations that change
the basic and non-basic variables.
4. What are the basic components of the Simplex Method algorithm?
The basic components of the Simplex Method algorithm include the initialization of the initial
feasible solution, selecting the entering and departing variables, performing pivot operations to
update the basic and non-basic variables, and checking for termination conditions to determine if an
optimal solution has been reached.
5. How does the Simplex Method handle unbounded or infeasible problems?
The Simplex Method detects unbounded solutions by checking if the objective function can be
improved indefinitely. If an unbounded solution is detected, it means that no optimal solution exists.
In the case of infeasible problems, the Simplex Method can detect infeasibility by examining the
constraints and determining that they cannot be satisfied simultaneously.
6. What is the time complexity of the Simplex Method?
The time complexity of the Simplex Method varies depending on the specific problem and the size of
the constraints and variables. In the worst-case scenario, the Simplex Method has an exponential
time complexity. However, in practice, it often performs efficiently for problems with a moderate
number of constraints and variables.
EXP.NO:8(a) N-QUEEN PROBLEM USING BACKTRACKING
Algorithm:
Program:
def is_safe(row, col, board, N):
    # Check if a queen can be placed at the current position without conflicts
    # Check column
    for i in range(row):
        if board[i][col] == 1:
            return False
    # Check upper-left diagonal
    i = row - 1
    j = col - 1
    while i >= 0 and j >= 0:
        if board[i][j] == 1:
            return False
        i -= 1
        j -= 1
    # Check upper-right diagonal
    i = row - 1
    j = col + 1
    while i >= 0 and j < N:
        if board[i][j] == 1:
            return False
        i -= 1
        j += 1
    return True

def place_queens(row, board, N, solutions):
    if row == N:
        # All queens have been placed successfully; store a deep copy
        solutions.append([r[:] for r in board])
        return
    for col in range(N):
        if is_safe(row, col, board, N):
            # Place the queen at the current position
            board[row][col] = 1
            # Recursively place queens in the next row
            place_queens(row + 1, board, N, solutions)
            # Backtrack: remove the queen and try the next column
            board[row][col] = 0

def solve_n_queen(N):
    board = [[0] * N for _ in range(N)]
    solutions = []
    place_queens(0, board, N, solutions)
    return solutions

# Example usage
N = 4
solutions = solve_n_queen(N)
print("Number of solutions:", len(solutions))
for index, solution in enumerate(solutions, 1):
    print("Solution", index)
    for row in solution:
        print(row)
OUTPUT:
Number of solutions: 2
Solution 1
[0, 1, 0, 0]
[0, 0, 0, 1]
[1, 0, 0, 0]
[0, 0, 1, 0]
Solution 2
[0, 0, 1, 0]
[1, 0, 0, 0]
[0, 0, 0, 1]
[0, 1, 0, 0]
VIVA QUESTIONS:
1. What is the N-Queen Problem?
The N-Queen Problem is a classic problem in computer science and mathematics that involves
placing N queens on an N x N chessboard such that no two queens threaten each other. The goal is
to find all possible configurations or solutions for placing the queens.
2. How does the Backtracking algorithm solve the N-Queen Problem?
The Backtracking algorithm solves the N-Queen Problem by systematically exploring all possible
placements of queens on the chessboard, backtracking whenever a conflict is detected. It tries
different possibilities and recursively explores the solution space until a valid solution is found or all
possibilities are exhausted.
3. What is the approach used in the Backtracking algorithm for the N-Queen
Problem?
The Backtracking algorithm for the N-Queen Problem uses a recursive approach. It starts with an
empty board and places queens row by row, making sure each placement is safe and does not
conflict with previously placed queens. If a conflict is detected, the algorithm backtracks and tries a
different possibility.
4. How do you check if a queen can be safely placed in a specific position on the
chessboard?
To check if a queen can be safely placed at a specific position (row, column) on the chessboard, you
need to ensure it does not conflict with any previously placed queens. This means checking that no
other queen occupies the same column or either diagonal; the same row is safe by construction,
since the algorithm places exactly one queen per row.
5. How does the algorithm handle backtracking in the N-Queen Problem?
During the backtracking process, if a placement of a queen leads to a conflict, the algorithm
removes the queen from that position and continues exploring other possibilities. It goes back to the
previous row and tries a different column to place the queen. This process continues until a valid
solution is found or all possibilities have been exhausted.
6. What is the time complexity of the Backtracking algorithm for the N-Queen
Problem?
The time complexity of the Backtracking algorithm for the N-Queen Problem is exponential,
specifically O(N!) in the worst-case scenario. This is because the algorithm explores all possible
permutations of queen placements. However, by using various optimizations and heuristics, the
actual number of backtracks can be significantly reduced in practice.
EXP.NO:8(b) SUBSET SUM PROBLEM USING BACKTRACKING
Algorithm:
Program:
def find_subsets(nums, target_sum):
    def backtrack(start, subset, current_sum):
        if current_sum == target_sum:
            subsets.append(subset[:])
            return
        for i in range(start, len(nums)):
            subset.append(nums[i])
            backtrack(i + 1, subset, current_sum + nums[i])
            subset.pop()
    subsets = []
    backtrack(0, [], 0)
    return subsets

# Example usage
nums = [2, 4, 6, 8]
target_sum = 8
for subset in find_subsets(nums, target_sum):
    print(subset)
OUTPUT:
[2, 6]
[8]
VIVA QUESTIONS:
1. What is the Subset Sum Problem?
The Subset Sum Problem is a classic problem in computer science that involves finding all
possible subsets of a given set whose elements sum up to a target value.
2. How does the Backtracking algorithm solve the Subset Sum Problem?
The Backtracking algorithm solves the Subset Sum Problem by systematically exploring all possible
subsets of the given set and checking if their elements sum up to the target value. It tries different
possibilities and recursively explores the solution space until a valid subset sum is found or all
possibilities are exhausted.
3. What is the approach used in the Backtracking algorithm for the Subset Sum
Problem?
The Backtracking algorithm for the Subset Sum Problem uses a recursive approach. It starts with an
empty subset and incrementally adds elements to it, checking if the current subset sum equals the
target sum. If the current subset sum exceeds the target sum, the algorithm backtracks and tries a
different possibility.
4. How does the Backtracking algorithm handle the backtracking process in the
Subset Sum Problem?
During the backtracking process, if the current subset sum exceeds the target sum, the algorithm
removes the most recently added element and continues exploring other possibilities. It goes back
to the previous level and tries a different element to include in the subset. This process continues
until a valid subset sum is found or all possibilities have been exhausted.
5. What is the time complexity of the Backtracking algorithm for the Subset Sum
Problem?
The time complexity of the Backtracking algorithm for the Subset Sum Problem can vary based on
the problem instance. In the worst-case scenario, the algorithm has an exponential time complexity
of O(2^N), where N is the number of elements in the set. However, using various optimizations and
heuristics, the actual number of backtracks can be significantly reduced in practice.
6. Can the Backtracking algorithm find all possible subsets with the desired sum
in the Subset Sum Problem?
Yes, the Backtracking algorithm can find all possible subsets of the given set that sum up to the
target value. By systematically exploring the solution space and trying different possibilities, it
exhaustively searches for all valid subsets with the desired sum.
EXP.NO:9(a) ASSIGNMENT PROBLEM USING BRANCH AND BOUND
Algorithm:
Program:
import numpy as np

def branch_and_bound(cost_matrix):
    N = cost_matrix.shape[0]
    assignment_matrix = np.zeros((N, N), dtype=int)
    min_cost = float('inf')
    def backtrack(selected_tasks, current_cost):
        nonlocal min_cost
        # Bound: prune partial assignments that cannot beat the best one found
        if current_cost >= min_cost:
            return
        if len(selected_tasks) == N:
            # Record the new best complete assignment
            min_cost = current_cost
            assignment_matrix.fill(0)
            for agent, task in enumerate(selected_tasks):
                assignment_matrix[task][agent] = 1
            return
        # Branch: try every still-unassigned task for the next agent (column)
        agent = len(selected_tasks)
        for task in range(N):
            if task not in selected_tasks:
                selected_tasks.append(task)
                backtrack(selected_tasks, current_cost + cost_matrix[task][agent])
                selected_tasks.pop()
    backtrack([], 0)
    return assignment_matrix, min_cost

# Example usage
cost_matrix = np.array([[5, 7, 3, 8],
                        [9, 2, 6, 4],
                        [1, 3, 8, 6],
                        [7, 6, 4, 2]])
assignment, min_cost = branch_and_bound(cost_matrix)
print("Assignment Matrix:")
print(assignment)
print("Minimum Cost:", min_cost)
OUTPUT:
Assignment Matrix:
[[0 0 1 0]
 [0 1 0 0]
 [1 0 0 0]
 [0 0 0 1]]
Minimum Cost: 8
VIVA QUESTIONS:
1. What is the Assignment Problem?
The Assignment Problem is a classic problem in operations research that involves finding the
optimal assignment of tasks to agents, given a cost matrix representing the cost of each
assignment. The objective is to minimize the total cost while ensuring that each task is assigned to
exactly one agent and each agent is assigned to at most one task.
2. How does the Branch and Bound algorithm solve the Assignment Problem?
The Branch and Bound algorithm solves the Assignment Problem by systematically exploring the
solution space and using a bounding mechanism to prune unpromising branches. It starts with an
initial solution and iteratively branches out, considering different possible assignments and
bounding the search based on the lower bounds of the current solutions.
3. What is the approach used in the Branch and Bound algorithm for the
Assignment Problem?
The Branch and Bound algorithm for the Assignment Problem uses a combination of depth-first
search and bounding techniques. It explores the solution space by considering different task-agent
assignments and uses lower bounds to determine which branches to prune, reducing the number of
solutions that need to be examined.
4. How does the Branch and Bound algorithm perform branching in the
Assignment Problem?
In the Assignment Problem, branching in the Branch and Bound algorithm involves considering
different task-agent assignments at each level of the search. It branches out by selecting an
unassigned task and trying different agents for the assignment, creating child nodes corresponding
to each possibility.
5. How does the Branch and Bound algorithm use bounding in the Assignment
Problem?
The Branch and Bound algorithm uses bounding in the Assignment Problem by evaluating the
current partial assignment and using lower bounds to estimate the potential minimum cost of
completing the assignment. If the lower bound exceeds the current best solution cost, the algorithm
prunes that branch of the search, avoiding unnecessary exploration.
6. What is the time complexity of the Branch and Bound algorithm for the
Assignment Problem?
The time complexity of the Branch and Bound algorithm for the Assignment Problem can vary
depending on the problem instance and the bounding techniques used. In the worst-case scenario,
the algorithm has an exponential time complexity of O(2^N * N^2).
EXP.NO:9(b) TRAVELING SALESMAN PROBLEM USING BRANCH AND BOUND
Algorithm:
Program:
import numpy as np
visited_cities.remove(city)
path.pop()
# Example usage
distances = np.array([[0, 2, 9, 10],
[1, 0, 6, 4],
[15, 7, 0, 8],
[6, 3, 12, 0]])
N = distances.shape[0]
min_distance = float('inf')
best_path = []
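The listing above is only a fragment; the remove/pop lines are the backtracking step inside the search loop. A minimal complete sketch around them, with a simple cost bound, might look like this (for the distance matrix above, the optimal tour is 0 -> 2 -> 3 -> 1 -> 0 with length 21):

import numpy as np

def tsp(city, visited_cities, path, current_distance):
    global min_distance, best_path
    # Bound: prune any partial tour already no shorter than the best full tour
    if current_distance >= min_distance:
        return
    if len(visited_cities) == N:
        # Close the tour by returning to the starting city
        total = current_distance + distances[city][0]
        if total < min_distance:
            min_distance, best_path = total, path[:]
        return
    for next_city in range(N):
        if next_city not in visited_cities:
            visited_cities.add(next_city)
            path.append(next_city)
            tsp(next_city, visited_cities, path,
                current_distance + distances[city][next_city])
            visited_cities.remove(next_city)
            path.pop()

distances = np.array([[0, 2, 9, 10],
                      [1, 0, 6, 4],
                      [15, 7, 0, 8],
                      [6, 3, 12, 0]])
N = distances.shape[0]
min_distance = float('inf')
best_path = []
tsp(0, {0}, [0], 0)
print("Best Path:", best_path)
print("Minimum Distance:", min_distance)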
OUTPUT:
Minimum Distance: 21
VIVA QUESTIONS :
1. What is the Traveling Salesman Problem?
The Traveling Salesman Problem is a well-known optimization problem in computer science that
involves finding the shortest possible route that visits a set of cities exactly once and returns to the
starting city, given the distances between each pair of cities.
2. How does the Branch and Bound algorithm solve the Traveling Salesman
Problem?
The Branch and Bound algorithm solves the Traveling Salesman Problem by systematically exploring
the solution space of possible routes and using lower bounds to prune unpromising branches. It
starts with an initial route and iteratively branches out, considering different possible paths and
bounding the search based on the lower bounds of the current solutions.
3. What is the approach used in the Branch and Bound algorithm for the Traveling
Salesman Problem?
The Branch and Bound algorithm for the Traveling Salesman Problem uses a combination of depth-
first search and bounding techniques. It explores the solution space by considering different city
permutations and uses lower bounds to determine which branches to prune, reducing the number of
solutions that need to be examined.
4. How does the Branch and Bound algorithm perform branching in the Traveling
Salesman Problem?
In the Traveling Salesman Problem, branching in the Branch and Bound algorithm involves
considering different city permutations at each level of the search. It branches out by selecting an
unvisited city and trying different positions for it in the current path, creating child nodes
corresponding to each possibility.
5. How does the Branch and Bound algorithm use bounding in the Traveling
Salesman Problem?
The Branch and Bound algorithm uses bounding in the Traveling Salesman Problem by evaluating
the current partial path and using lower bounds to estimate the potential minimum distance of
completing the route. If the lower bound exceeds the current best distance, the algorithm prunes
that branch of the search, avoiding unnecessary exploration.
6. What is the time complexity of the Branch and Bound algorithm for the
Traveling Salesman Problem?
The time complexity of the Branch and Bound algorithm for the Traveling Salesman Problem can
vary depending on the problem instance and the bounding techniques used. In the worst-case
scenario, the algorithm has an exponential time complexity of O(N^2 * 2^N).