Week 1 Merged Compressed

The document outlines the implementation and analysis of various searching and sorting algorithms, including Linear Search, Fibonacci Search, Bubble Sort, and Insertion Sort. Each section provides an algorithm, Java code, expected outputs, and time and space complexity analysis for the respective techniques. The document emphasizes the performance metrics and efficiency of these algorithms in handling data sets.

Week 1

Week 1: Simple Searching and Sorting Techniques


1a) Aim: Design and perform analysis (time and space complexity) to find the presence of an element or not in a
given data type/database/data set using the Linear Search technique.

Algorithm:
Step 1: Input the array of size n elements and read the elements
Step 2: Input the target element x, which is to be searched
Step 3: Start with the starting index 0 of the array and run a loop
for (int i=0 to n-1)
Step 4: Using a control statement (if-else condition) for matching the array with the key element
if (array[i]==x)
Step 5: Start a timer when the loop begins (startTime)
Step 6: If the target element is found, print that index
Print the element found at that particular index
Step 7: Else, the target element is not found, print not found
print element not found
Step 8: After the search finishes, stop the timer and report:
Execution time (in nanoseconds)
Approximate memory used (in bytes)
Flow chart:
Program:
import java.util.Scanner;

public class SearchPerformance {
    public static int search(int arr[], int n, int x) {
        for (int i = 0; i < n; i++) {
            if (arr[i] == x)
                return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.print("Enter the number of elements: ");
        int n = sc.nextInt();
        int arr[] = new int[n];
        for (int i = 0; i < n; i++) {
            arr[i] = i + 1;
        }
        System.out.print("Enter the element to search: ");
        int x = sc.nextInt();
        long startTime = System.nanoTime();
        int index = search(arr, n, x);
        long endTime = System.nanoTime();
        if (index == -1) {
            System.out.println("No");
        } else {
            System.out.println("Yes, found at index: " + index);
        }
        long duration = endTime - startTime;
        System.out.println("Execution time (nanoseconds): " + duration);
        long spaceUsed = (long) n * 4;
        System.out.println("Approximate memory used (bytes): " + spaceUsed);
        sc.close();
    }
}

Expected Output (if element present)

Enter the number of elements: 100
Enter the element to search: 74
Yes, found at index: 73
Execution time (nanoseconds): 13070
Approximate memory used (bytes): 400

Expected Output (if element not present)

Enter the number of elements: 100
Enter the element to search: 120
No
Execution time (nanoseconds): 25542
Approximate memory used (bytes): 400

Analysis:

The best-case time complexity for linear search is O(1): the element is at the first index of the array, so only one comparison is required. A sample run:

Enter the number of elements: 100
Enter the element to search: 1
Yes, found at index: 0
Execution time (nanoseconds): 7780
Approximate memory used (bytes): 400

The worst-case time complexity for linear search is O(n): the element is not present in the array, so every element is compared. The number of comparisons grows with the size of the array; if the size is n, it performs n comparisons.

Enter the number of elements: 100
Enter the element to search: 120
No
Execution time (nanoseconds): 25542
Approximate memory used (bytes): 400

The average-case time complexity for linear search is O(n): finding the 1st element takes 1 operation, the 2nd takes 2 operations, and the n-th takes n operations, which averages to about (n+1)/2 operations, i.e., O(n).

Enter the number of elements: 100
Enter the element to search: 74
Yes, found at index: 73
Execution time (nanoseconds): 13070
Approximate memory used (bytes): 400

Space Complexity:
—>The space complexity for linear search is O(1) auxiliary space, for variables like i (the loop counter) and the target; storing the array of n elements takes O(n).
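Since nanosecond timings vary from run to run, the O(1)/O(n) claims above are easier to verify by counting comparisons directly. A minimal sketch; the class name LinearSearchCount and the comparisons helper are illustrative, not part of the lab program:

```java
// Counts comparisons made by linear search, confirming the
// best-case (1) and worst-case (n) analysis above.
public class LinearSearchCount {
    // Returns the number of comparisons needed to find x (n if absent).
    public static int comparisons(int[] arr, int x) {
        int count = 0;
        for (int v : arr) {
            count++;
            if (v == x) return count;
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 100;
        int[] arr = new int[n];
        for (int i = 0; i < n; i++) arr[i] = i + 1;
        System.out.println("Best case (x = 1): " + comparisons(arr, 1));      // 1
        System.out.println("Worst case (x = 120): " + comparisons(arr, 120)); // 100
    }
}
```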
1b) Aim: Design and perform analysis (time and space complexity) to find the presence of an
element or not in a given data type /database/data set using the Fibonacci Search technique
Algorithm:
Step 1: Input the array
Step 2: Input the target element
Step 3: Start a timer (startTime) when the search begins.
Step 4: As the first step, find the immediate Fibonacci number that is greater than or equal
to the size of the input array. Then, also hold the two preceding numbers of the selected
Fibonacci number, that is, we hold Fm, Fm-1, and Fm-2 numbers from the Fibonacci Series.
Step 5: Initialize the offset value as -1, as we are considering the entire array as the searching
range in the beginning.
Step 6: Until Fm-2 is greater than 0, compare the key element to be found with the element
at index [min(offset+Fm-2,n-1)]. If a match is found, print the index. If the key element is
found to be of lesser value than this element, we reduce the range of the input from 0 to the
index of this element. The Fibonacci numbers are also updated with Fm= Fm-2. But if the key
element is greater than the element at this index, we remove the elements before this element
from the search range. The Fibonacci numbers are updated as Fm= Fm-1. The offset value is
set to the index of this element.
Step 7: Because there are two 1s in the Fibonacci series, both preceding numbers can become 1. When Fm-1 becomes 1, only one element remains in the search range; compare the key with that element and print its index if it matches. Otherwise, print element not found.
Step 8: After the search finishes, stop the timer and report:
Execution time (nanoseconds)
Approximate memory used (bytes)
Flow Chart:
Program:
import java.util.*;

public class FibonacciSearchPerformance {
    public static int fibSearch(int arr[], int x, int n) {
        int fibM2 = 0;
        int fibM1 = 1;
        int fibM = fibM2 + fibM1;
        while (fibM < n) {
            fibM2 = fibM1;
            fibM1 = fibM;
            fibM = fibM2 + fibM1;
        }
        int offset = -1;
        while (fibM > 1) {
            int i = Math.min(offset + fibM2, n - 1);
            if (arr[i] < x) {
                fibM = fibM1;
                fibM1 = fibM2;
                fibM2 = fibM - fibM1;
                offset = i;
            } else if (arr[i] > x) {
                fibM = fibM2;
                fibM1 -= fibM2;
                fibM2 = fibM - fibM1;
            } else {
                return i;
            }
        }
        // Bounds check added so the final probe cannot fall past the array.
        if (fibM1 == 1 && offset + 1 < n && arr[offset + 1] == x) {
            return offset + 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.print("Enter the number of elements: ");
        int n = sc.nextInt();
        int arr[] = new int[n];
        for (int i = 0; i < n; i++) {
            arr[i] = i + 1;
        }
        System.out.print("Enter the element to search: ");
        int x = sc.nextInt();
        long startTime = System.nanoTime();
        int index = fibSearch(arr, x, n);
        long endTime = System.nanoTime();
        if (index == -1) {
            System.out.println("No");
        } else {
            System.out.println("Yes, found at index: " + index);
        }
        long duration = endTime - startTime;
        System.out.println("Execution time (nanoseconds): " + duration);
        long spaceUsed = (long) n * 4;
        System.out.println("Approximate memory used (bytes): " + spaceUsed);
        sc.close();
    }
}
Output:

Execution Time (nanoseconds): 11200
Memory Used (bytes): 450

Analysis:
The Best Case: Fibonacci search has a best-case time complexity of O(1), because if the first probed element (at index offset + Fm-2) is the target, only one comparison is required. The time complexity is:
T(n) = 1 —> O(1)
Worst Case: In the worst case, Fibonacci search has a time complexity of O(log n), because the search space is divided according to Fibonacci numbers after each comparison. If the element is not present, it takes about log(n) comparisons, where n is the size of the dataset. So,
T(n) = log n —> O(log n)
The Average Case: In the average case, the search time depends on the position of the target
relative to the calculated Fibonacci points. Each iteration reduces the search space
logarithmically, so:
T(n) = log n —> O (log n)
This logarithmic growth reflects the Fibonacci sequence's nature, making Fibonacci search
suitable for large datasets with uniformly distributed elements.
Space Complexity:
The iterative Fibonacci search uses O(1) auxiliary space (only a few index and Fibonacci-number variables); storing the array of n elements takes O(n).
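The O(log n) bound can also be checked by counting array probes rather than wall-clock time. A small sketch of the same iterative search instrumented with a counter (the class name FibSearchProbes is illustrative):

```java
// Counts array probes made by iterative Fibonacci search; for n = 1000
// the count stays near log_phi(1000) ≈ 15, never near n.
public class FibSearchProbes {
    public static int probes(int[] arr, int x) {
        int n = arr.length;
        int fibM2 = 0, fibM1 = 1, fibM = fibM2 + fibM1;
        while (fibM < n) { fibM2 = fibM1; fibM1 = fibM; fibM = fibM2 + fibM1; }
        int offset = -1, count = 0;
        while (fibM > 1) {
            int i = Math.min(offset + fibM2, n - 1);
            count++; // one comparison against arr[i]
            if (arr[i] < x) { fibM = fibM1; fibM1 = fibM2; fibM2 = fibM - fibM1; offset = i; }
            else if (arr[i] > x) { fibM = fibM2; fibM1 = fibM1 - fibM2; fibM2 = fibM - fibM1; }
            else return count;
        }
        if (fibM1 == 1 && offset + 1 < n) count++; // final single-element check
        return count;
    }

    public static void main(String[] args) {
        int[] arr = new int[1000];
        for (int i = 0; i < arr.length; i++) arr[i] = i + 1;
        System.out.println("Probes for a hit:  " + probes(arr, 500));
        System.out.println("Probes for a miss: " + probes(arr, 2000));
    }
}
```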
2a) Aim: Design and perform analysis (time and space complexity) to sort the given element
in ascending or descending order for a given data type /database/data set using Bubble sort or
exchange sort technique

Algorithm:
Step 1: Define the array
We have an unsorted array that needs to be sorted using Bubble Sort.
Step 2: Start the Timer and iterate through the array
Compare adjacent elements and swap them if they are in
the wrong order. Continue this process for the entire array.
function bubbleSort(array):
    n = length(array)
    for i in range(n - 1):                # Number of passes
        swapped = False
        for j in range(n - i - 1):        # Traverse unsorted part
            if array[j] > array[j + 1]:   # If current element is greater than next
                swap(array[j], array[j + 1])
                swapped = True
        if not swapped:
            break                         # Optimization: stop if no swaps occurred
Step 3: Repeat the process for all elements
Continue swapping adjacent elements until the entire array is sorted.
for i in range(n - 1):
    for j in range(n - i - 1):
        if array[j] > array[j + 1]:
            swap(array[j], array[j + 1])
Step 4: The array is now sorted
After completing all passes, the array is sorted in ascending order.
Step 5: Stop the Timer and Calculate the Execution Time
Measure the time taken for sorting.
startTime = currentTime()
bubbleSort(array)
endTime = currentTime()
executionTime = endTime - startTime
print("Execution Time:", executionTime)
Flow chart:
Program:
import java.util.*;

public class WEK {

    // Bubble sort with the early-exit flag described in the algorithm above
    // (the original listing implemented selection sort by mistake).
    void sort(int arr[]) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            boolean swapped = false;
            for (int j = 0; j < n - i - 1; j++) {
                if (arr[j] > arr[j + 1]) {
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                    swapped = true;
                }
            }
            if (!swapped) {
                break;
            }
        }
    }

    void printArray(int arr[]) {
        for (int value : arr) {
            System.out.print(value + " ");
        }
        System.out.println();
    }

    public static void main(String[] args) {
        WEK obj = new WEK();
        Scanner sc = new Scanner(System.in);
        System.out.print("Enter the size of the array: ");
        int size = sc.nextInt();
        int[] arr = new int[size];
        Random random = new Random();
        for (int i = 0; i < size; i++) {
            arr[i] = random.nextInt(1000);
        }
        System.out.println("Original Array: " + Arrays.toString(arr));
        long startTime = System.nanoTime();
        obj.sort(arr);
        long endTime = System.nanoTime();
        System.out.println("Sorted Array: " + Arrays.toString(arr));
        int spaceComplexity = arr.length * 4 + (arr.length / 2) * 4 * 2;
        System.out.println("Approximate Space Complexity: " + spaceComplexity + " bytes");
        System.out.println("Execution Time: " + (endTime - startTime) + " nanoseconds");
        sc.close();
    }
}

Output:
Enter the size of the array: 12
Original Array: [275, 418, 410, 680, 292, 958, 571, 433, 707, 841, 290, 408]
Sorted Array: [275, 290, 292, 408, 410, 418, 433, 571, 680, 707, 841, 958]
Approximate Space Complexity: 96 bytes
Execution Time: 7340 nanoseconds

Analysis:
1. Time Complexity:
Best Case (Already Sorted): O(n)
→ One full pass without swaps confirms the array is sorted.
Average Case: O(n^2)
→ On average, the algorithm performs n(n-1)/2 comparisons and swaps.
Worst Case (Reverse Sorted): O(n^2)
→ Requires maximum swaps for each adjacent pair.
2. Space Complexity:
Auxiliary Space: O(1)
→ In-place sorting (no extra space required).
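The O(n) best case depends on the early-exit flag described in Step 2. A small sketch that counts passes makes this visible (the class name BubblePasses is illustrative):

```java
// Counts full passes made by bubble sort with the early-exit flag,
// showing the O(n) best case: one pass on already-sorted input.
public class BubblePasses {
    public static int sortAndCountPasses(int[] arr) {
        int n = arr.length, passes = 0;
        for (int i = 0; i < n - 1; i++) {
            boolean swapped = false;
            passes++;
            for (int j = 0; j < n - i - 1; j++) {
                if (arr[j] > arr[j + 1]) {
                    int t = arr[j]; arr[j] = arr[j + 1]; arr[j + 1] = t;
                    swapped = true;
                }
            }
            if (!swapped) break; // no swaps: array is sorted, stop early
        }
        return passes;
    }

    public static void main(String[] args) {
        System.out.println(sortAndCountPasses(new int[]{1, 2, 3, 4, 5})); // 1
        System.out.println(sortAndCountPasses(new int[]{5, 4, 3, 2, 1})); // 4 (n - 1)
    }
}
```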
2b) Aim: Design and perform analysis (time and space complexity) to sort the given element
in ascending or descending order for the given data type /database/data set using the insertion
sort technique
Algorithm:
Step 1: Define the array
We have an unsorted array that needs to be sorted using Insertion Sort.
Step 2: Start the Timer and Begin Sorting
Start from the second element (index 1) and insert it into its correct position in the sorted part of the array.
function insertionSort(array):
    n = length(array)
    for i in range(1, n):                  # Start from the second element
        key = array[i]                     # Pick the current element
        j = i - 1                          # Start comparing with elements before it
        while j >= 0 and array[j] > key:   # Shift elements to the right if they are larger
            array[j + 1] = array[j]
            j -= 1
        array[j + 1] = key                 # Place key at the correct position
Step 3: Insert elements one by one into the correct position
Compare the current element with previous elements and shift them to make space.
for i in range(1, n):
    key = array[i]
    j = i - 1
    while j >= 0 and array[j] > key:
        array[j + 1] = array[j]            # Shift larger elements right
        j -= 1
    array[j + 1] = key                     # Insert the key element in its correct position
Step 4: The array is now sorted
After all insertions, the array will be sorted.
Step 5: Stop the Timer and Calculate Execution Time
Measure the time taken for sorting.
startTime = currentTime()
insertionSort(array)
endTime = currentTime()
executionTime = endTime - startTime
print("Execution Time:", executionTime)
Flow chart
Program:
import java.util.*;

public class WEK {
    void sort(int arr[]) {
        int n = arr.length;
        for (int i = 1; i < n; i++) {
            int key = arr[i];
            int j = i - 1;
            while (j >= 0 && arr[j] > key) {
                arr[j + 1] = arr[j];
                j = j - 1;
            }
            arr[j + 1] = key;
        }
    }

    void printArray(int arr[]) {
        int n = arr.length;
        for (int i = 0; i < n; i++) {
            System.out.print(arr[i] + " ");
        }
        System.out.println();
    }

    public static void main(String[] args) {
        WEK obj = new WEK();
        Scanner sc = new Scanner(System.in);
        System.out.print("Enter the size of the array: ");
        int size = sc.nextInt();
        int[] arr = new int[size];
        Random random = new Random();
        for (int i = 0; i < size; i++) {
            arr[i] = random.nextInt(1000);
        }
        System.out.println("Original Array: " + Arrays.toString(arr));
        long startTime = System.nanoTime();
        obj.sort(arr);
        long endTime = System.nanoTime();
        System.out.println("Sorted Array: " + Arrays.toString(arr));
        int spaceComplexity = arr.length * 4;
        System.out.println("Approximate Space Complexity: " + spaceComplexity + " bytes");
        System.out.println("Execution Time: " + (endTime - startTime) + " nanoseconds");
        sc.close();
    }
}
Output:
Enter the size of the array: 12
Original Array: [864, 764, 35, 456, 154, 205, 303, 933, 325, 272, 800, 768]
Sorted Array: [35, 154, 205, 272, 303, 325, 456, 764, 768, 800, 864, 933]
Approximate Space Complexity: 48 bytes
Execution Time: 6320 nanoseconds

Analysis:
1. Time Complexity:
Best Case (Already Sorted): O(n)
→ Only one comparison per element (no shifting required).
Average Case: O(n^2)
→ Each element is compared and shifted on average.
Worst Case (Reverse Sorted): O(n^2)
→ Each element requires maximum comparisons and shifts.
2. Space Complexity:
Auxiliary Space: O(1)
→ In-place sorting (no additional space required).
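The best and worst cases above can be verified by counting shifts: 0 for sorted input and n(n-1)/2 for reverse-sorted input. A minimal sketch (the class name InsertionShifts is illustrative):

```java
// Sorts in place and returns the number of element shifts performed:
// 0 for sorted input (best case), n(n-1)/2 for reverse-sorted (worst case).
public class InsertionShifts {
    public static int sortAndCountShifts(int[] arr) {
        int shifts = 0;
        for (int i = 1; i < arr.length; i++) {
            int key = arr[i], j = i - 1;
            while (j >= 0 && arr[j] > key) {
                arr[j + 1] = arr[j]; // shift a larger element right
                shifts++;
                j--;
            }
            arr[j + 1] = key;
        }
        return shifts;
    }

    public static void main(String[] args) {
        System.out.println(sortAndCountShifts(new int[]{1, 2, 3, 4})); // 0
        System.out.println(sortAndCountShifts(new int[]{4, 3, 2, 1})); // 6 = 4*3/2
    }
}
```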
Week 2
Week 2: Trees and Graphs Traversal Techniques:

Aim: Design and perform analysis (time and space complexity) to perform tree traversal techniques (3 types
of traversals: inorder, preorder, and postorder) for a given data type/database/data set.

Algorithm:
Inorder Traversal Algorithm (Left, Root, Right)
Step 1: Define the Binary Tree
Each node has a left child, a right child, and a value.
class Node:
def __init__(self, value):
self.value = value
self.left = None
self.right = None
Step 2: Start Inorder Traversal (Recursive Approach)
Traverse the left subtree, then visit the root, and finally traverse the right subtree.
function inorderTraversal(root):
if root is not None:
inorderTraversal(root.left) # Visit left subtree
print(root.value) # Process the current node
inorderTraversal(root.right) # Visit right subtree
Step 3: Recursively Visit All Nodes
Call the function recursively for left and right subtrees.
inorderTraversal(root.left)
print(root.value)
inorderTraversal(root.right)
Step 4: Inorder Traversal is Complete
Once all recursive calls are done, the traversal is finished.
Step 5: Measure Execution Time
Record the time taken for traversal.
startTime = currentTime()
inorderTraversal(root)
endTime = currentTime()
executionTime = endTime - startTime
print ("Execution Time:", executionTime)

Preorder Traversal Algorithm (Root, Left, Right)


Step 1: Define the Binary Tree
Same as in Inorder traversal.
Step 2: Start Preorder Traversal (Recursive Approach)
Visit the root, then traverse the left subtree, and finally the right subtree.
function preorderTraversal(root):
if root is not None:
print(root.value) # Process the current node
preorderTraversal(root.left) # Visit left subtree
preorderTraversal(root.right) # Visit right subtree
Step 3: Recursively Visit All Nodes
Call the function recursively for the left and right subtrees.
print(root.value)
preorderTraversal(root.left)
preorderTraversal(root.right)
Step 4: Preorder Traversal is Complete
Once all recursive calls are done, the traversal is finished.
Step 5: Measure Execution Time
Record the time taken for traversal.
startTime = currentTime()
preorderTraversal(root)
endTime = currentTime()
executionTime = endTime - startTime
print ("Execution Time:", executionTime)

Postorder Traversal Algorithm (Left, Right, Root)


Step 1: Define the Binary Tree
Same as in Inorder and Preorder traversals.
Step 2: Start Postorder Traversal (Recursive Approach)
Traverse the left subtree, then the right subtree, and finally visit the root.
function postorderTraversal(root):
if root is not None:
postorderTraversal(root.left) # Visit left subtree
postorderTraversal(root.right) # Visit right subtree
print(root.value) # Process the current node
Step 3: Recursively Visit All Nodes
Call the function recursively for left and right subtrees.
postorderTraversal(root.left)
postorderTraversal(root.right)
print(root.value)
Step 4: Postorder Traversal is Complete
Once all recursive calls are done, the traversal is finished.
Step 5: Measure Execution Time.
Record the time taken for traversal.
startTime = currentTime()
postorderTraversal(root)
endTime = currentTime()
executionTime = endTime - startTime
print ("Execution Time:", executionTime)
Flow chart:
Program:
import java.util.Scanner;

class Node {
    int data;
    Node left, right;

    public Node(int data) {
        this.data = data;
        left = right = null;
    }
}

class BinaryTree {
    Node root;
    // Shared scanner: creating a new Scanner(System.in) on every insert,
    // as the original listing did, can exhaust the input stream.
    static Scanner scanner = new Scanner(System.in);

    public Node insert(Node root, int data) {
        if (root == null) {
            return new Node(data);
        }
        System.out.println("Insert " + data + " to the left or right of " + root.data + "? (L/R):");
        char direction = scanner.next().charAt(0);
        if (direction == 'L' || direction == 'l') {
            root.left = insert(root.left, data);
        } else {
            root.right = insert(root.right, data);
        }
        return root;
    }

    public void inorder(Node root) {
        if (root != null) {
            inorder(root.left);
            System.out.print(root.data + " ");
            inorder(root.right);
        }
    }

    public void preorder(Node root) {
        if (root != null) {
            System.out.print(root.data + " ");
            preorder(root.left);
            preorder(root.right);
        }
    }

    public void postorder(Node root) {
        if (root != null) {
            postorder(root.left);
            postorder(root.right);
            System.out.print(root.data + " ");
        }
    }

    public static void main(String[] args) {
        BinaryTree tree = new BinaryTree();

        System.out.print("Enter the number of nodes: ");
        int n = scanner.nextInt();

        System.out.print("Enter the root value: ");
        int rootValue = scanner.nextInt();
        tree.root = new Node(rootValue);

        for (int i = 1; i < n; i++) {
            System.out.print("Enter value for node " + (i + 1) + ": ");
            int value = scanner.nextInt();
            tree.insert(tree.root, value);
        }

        long start, end;

        System.out.print("\nInorder Traversal: ");
        start = System.nanoTime();
        tree.inorder(tree.root);
        end = System.nanoTime();
        System.out.println("\nTime taken: " + (end - start) + " ns");

        System.out.print("\nPreorder Traversal: ");
        start = System.nanoTime();
        tree.preorder(tree.root);
        end = System.nanoTime();
        System.out.println("\nTime taken: " + (end - start) + " ns");

        System.out.print("\nPostorder Traversal: ");
        start = System.nanoTime();
        tree.postorder(tree.root);
        end = System.nanoTime();
        System.out.println("\nTime taken: " + (end - start) + " ns");

        scanner.close();
    }
}
Output:
Enter the number of nodes: 5
Enter the root value: 34
Enter value for node 2: 24
Insert 24 to the left or right of 34? (L/R): L
Enter value for node 3: 23
Insert 23 to the left or right of 34? (L/R): L
Insert 23 to the left or right of 24? (L/R): L
Enter value for node 4: 45
Insert 45 to the left or right of 34? (L/R): R
Enter value for node 5: 56
Insert 56 to the left or right of 34? (L/R): R
Insert 56 to the left or right of 45? (L/R): R

Inorder Traversal: 23 24 34 45 56
Time taken: 1831166 ns

Preorder Traversal: 34 24 23 45 56
Time taken: 368041 ns

Postorder Traversal: 23 24 56 45 34
Time taken: 320250 ns
Analysis:

1. Best Case: O(n)


Occurs when the tree is highly balanced, meaning:
The height of the tree is minimum: O(log n).
Each recursive call processes a node and its left and right child efficiently.
Traversal visits all nodes once but with shallow recursive depth.
Example: A perfect binary tree where each parent has two children, and the height is O(log n).
2. Average Case: O(n)
Occurs for random or moderately balanced trees:
Random insertion results in a reasonably balanced structure.
The traversal still visits each node once.
Height is about O(log n) on average but can vary based on insertion order.
Example: A tree created by inserting random values, leading to a semi-balanced structure.
3. Worst Case: O(n)
Occurs when the tree is skewed (completely unbalanced):
The height of the tree becomes maximum: O(n).
Recursive depth reaches the number of nodes.
Each traversal goes down one side of the tree completely.
Example:
Left-skewed tree (every node has only a left child).
Right-skewed tree (every node has only a right child).
This happens when you insert elements in sorted order (e.g., 1 → 2 → 3 → 4).

Space Complexity Analysis


The space complexity depends on the depth of the recursive call stack during traversal:
Balanced Binary Tree (Best Case):
Depth: O(log n) – Height of the tree.
Space Complexity: O(log n) – Due to the recursion stack.
Skewed Binary Tree (Worst Case):
Depth: O(n) – Every node lies on one path.
Space Complexity: O(n) – The recursion stack holds all n nodes.
Traversal   Best Case   Average Case   Worst Case   Space Complexity
Inorder     O(n)        O(n)           O(n)         O(log n) (Balanced) or O(n) (Skewed)
Preorder    O(n)        O(n)           O(n)         O(log n) (Balanced) or O(n) (Skewed)
Postorder   O(n)        O(n)           O(n)         O(log n) (Balanced) or O(n) (Skewed)
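The recursion-stack cost in the space analysis can be made explicit by replacing recursion with a stack. A sketch of iterative inorder traversal, where the Deque never holds more than h nodes (h = tree height); the Node class here is a minimal stand-in for the one in the program:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Iterative inorder traversal: the explicit stack replaces the call
// stack, making the O(h) space bound visible.
public class IterativeInorder {
    static class Node {
        int data; Node left, right;
        Node(int d) { data = d; }
    }

    public static List<Integer> inorder(Node root) {
        List<Integer> out = new ArrayList<>();
        Deque<Node> stack = new ArrayDeque<>();
        Node cur = root;
        while (cur != null || !stack.isEmpty()) {
            while (cur != null) { stack.push(cur); cur = cur.left; } // descend left
            cur = stack.pop();
            out.add(cur.data); // visit node
            cur = cur.right;   // then its right subtree
        }
        return out;
    }

    public static void main(String[] args) {
        // A small tree similar in shape to the sample run.
        Node root = new Node(34);
        root.left = new Node(24);
        root.left.left = new Node(23);
        root.right = new Node(45);
        root.right.right = new Node(56);
        System.out.println(inorder(root)); // [23, 24, 34, 45, 56]
    }
}
```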
Aim: Design and perform analysis (time and space complexity) to perform graph traversal techniques (2 types of traversals: BFS (Breadth-First Search) and DFS (Depth-First Search)) for a given number of nodes and edges.

Algorithm:

Step 1: Define the Graph


We have an undirected graph represented using an adjacency list.
Step 2: Initialize the visited array
To keep track of visited nodes, use a boolean array visited [].
Step 3: Start DFS Traversal
Start from a given node and explore as far as possible along each branch before backtracking.
function DFS (graph, startNode, visited):
print(startNode) # Process current node
visited[startNode] = True # Mark as visited
for neighbor in graph[startNode]: # Explore each connected node
if not visited[neighbor]:
DFS (graph, neighbor, visited) # Recur for unvisited nodes
Step 4: Recursively visit all connected nodes
Keep visiting unvisited nodes until all nodes are processed.
for neighbor in graph[startNode]:
    if not visited[neighbor]:
        DFS(graph, neighbor, visited)
Step 5: The DFS traversal is complete
Once all recursive calls are done, the DFS traversal is finished.
Step 6: Measure Execution Time.
Record the time taken to execute DFS.
startTime = currentTime()
DFS (graph, startNode, visited)
endTime = currentTime()
executionTime = endTime - startTime
print ("Execution Time:", executionTime)

Breadth First Search (BFS) Algorithm


Step 1: Define the Graph
We use an adjacency list to represent the graph.
Step 2: Initialize the Queue and Visited Array.
Use a queue to explore the nodes level by level.
Step 3: Start BFS Traversal
Enqueue the starting node and mark it as visited.
function BFS (graph, startNode):
visited = [False] * len(graph) # Create visited array
queue = [] # Initialize queue
queue.append(startNode)
visited[startNode] = True # Mark start node as visited
while queue: # Process nodes until queue is empty
node = queue.pop(0) # Dequeue front element
print(node) # Process the node

for neighbor in graph[node]: # Traverse neighbors


if not visited[neighbor]:
queue.append(neighbor) # Enqueue unvisited nodes
visited[neighbor] = True # Mark them as visited
Step 4: Process all nodes level by level
Continue dequeuing and enqueuing nodes until all are visited.
while queue:
    node = queue.pop(0)
    print(node)
    for neighbor in graph[node]:
        if not visited[neighbor]:
            queue.append(neighbor)
            visited[neighbor] = True
Step 5: The BFS traversal is complete
Once the queue is empty, the BFS traversal is finished.
Step 6: Measure Execution Time.
Record the time taken to execute BFS.
startTime = currentTime()
BFS(graph, startNode)
endTime = currentTime()
executionTime = endTime - startTime
print("Execution Time:", executionTime)
Flow chart:
Program:
import java.util.*;

class Graph {
    private int vertices;
    private List<List<Integer>> adjList;
    private Scanner scanner;

    public Graph(int vertices, Scanner scanner) {
        this.vertices = vertices;
        this.scanner = scanner;
        adjList = new ArrayList<>();
        for (int i = 0; i < vertices; i++) {
            adjList.add(new ArrayList<>());
        }
    }

    public void addEdge(int src, int dest) {
        adjList.get(src).add(dest);
        adjList.get(dest).add(src);
    }

    public void DFS(int startNode, boolean[] visited) {
        System.out.print(startNode + " ");
        visited[startNode] = true;
        for (int neighbor : adjList.get(startNode)) {
            if (!visited[neighbor]) {
                DFS(neighbor, visited);
            }
        }
    }

    public void BFS(int startNode) {
        boolean[] visited = new boolean[vertices];
        Queue<Integer> queue = new LinkedList<>();
        queue.add(startNode);
        visited[startNode] = true;
        while (!queue.isEmpty()) {
            int node = queue.poll();
            System.out.print(node + " ");
            for (int neighbor : adjList.get(node)) {
                if (!visited[neighbor]) {
                    queue.add(neighbor);
                    visited[neighbor] = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.print("Enter number of vertices: ");
        int v = scanner.nextInt();
        System.out.print("Enter number of edges: ");
        int e = scanner.nextInt();
        Graph graph = new Graph(v, scanner);
        System.out.println("Enter edges (source destination):");
        for (int i = 0; i < e; i++) {
            int src = scanner.nextInt();
            int dest = scanner.nextInt();
            graph.addEdge(src, dest);
        }
        System.out.print("Enter start node for traversal: ");
        int startNode = scanner.nextInt();

        System.out.print("\nDFS Traversal: ");
        long startTime = System.nanoTime();
        boolean[] visited = new boolean[v];
        graph.DFS(startNode, visited);
        long endTime = System.nanoTime();
        System.out.println("\nTime taken: " + (endTime - startTime) + " ns");

        System.out.print("\nBFS Traversal: ");
        startTime = System.nanoTime();
        graph.BFS(startNode);
        endTime = System.nanoTime();
        System.out.println("\nTime taken: " + (endTime - startTime) + " ns");
        scanner.close();
    }
}

Output:
Enter number of vertices: 5
Enter number of edges: 4
Enter edges (source destination):
0 1
0 2
1 3
1 4
Enter start node for traversal: 0
DFS Traversal: 0 1 3 4 2
Time taken: 1217970 ns
BFS Traversal: 0 1 2 3 4
Time taken: 380970 ns
Analysis:
1. Depth-First Search (DFS)

Time Complexity:
Best Case: O (V + E)
→ If the graph is sparse (few edges), DFS only explores a small part of the graph.
Average Case: O (V + E)
→ Each vertex and edge is processed once.
Worst Case: O (V + E)
→ When DFS traverses the entire graph, visiting every vertex and edge.
DFS visits each vertex once and explores all its edges. The time depends on the number of vertices V and edges
E.
Space Complexity:
Auxiliary Space: O(V)
→ For the visited array and recursion stack (in the worst case, the stack depth reaches V).
Adjacency List Storage: O (V + E)
→ Storing the graph itself requires space for all vertices and edges.

2. Breadth-First Search (BFS)
Time Complexity:
Best Case: O (V + E)
→ If the start node connects to most vertices directly (shallow graph).
Average Case: O (V + E)
→ Visits each vertex and edge once.
Worst Case: O (V + E)

→ Fully connected graph where BFS explores all vertices and edges.
BFS explores each vertex and processes all its neighbors using a queue, resulting in O(V + E) complexity.
Space Complexity:
Auxiliary Space: O(V)
→ For the visited array and queue.
Adjacency List Storage: O (V + E)
→ Same as DFS for graph storage

Algorithm   Best Case   Average Case   Worst Case   Space Complexity
DFS         O(V + E)    O(V + E)       O(V + E)     O(V + E) (Graph Storage) + O(V) (Stack)
BFS         O(V + E)    O(V + E)       O(V + E)     O(V + E) (Graph Storage) + O(V) (Queue)
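The O(V) recursion stack noted for DFS can likewise be replaced with an explicit stack. A sketch on the same sample graph (edges 0-1, 0-2, 1-3, 1-4); the class and method names are illustrative:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// DFS with an explicit stack instead of recursion: same O(V + E) time,
// with the O(V) auxiliary space held in a Deque rather than the call stack.
public class IterativeDFS {
    public static List<Integer> dfs(List<List<Integer>> adj, int start) {
        boolean[] visited = new boolean[adj.size()];
        Deque<Integer> stack = new ArrayDeque<>();
        List<Integer> order = new ArrayList<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int node = stack.pop();
            if (visited[node]) continue;
            visited[node] = true;
            order.add(node);
            // Push neighbors in reverse so lower-numbered ones pop first,
            // matching the recursive visit order.
            List<Integer> nbrs = adj.get(node);
            for (int i = nbrs.size() - 1; i >= 0; i--) {
                if (!visited[nbrs.get(i)]) stack.push(nbrs.get(i));
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // Same undirected graph as the sample run above.
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 5; i++) adj.add(new ArrayList<>());
        int[][] edges = {{0, 1}, {0, 2}, {1, 3}, {1, 4}};
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
        System.out.println(dfs(adj, 0)); // [0, 1, 3, 4, 2]
    }
}
```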
Week 3
Week 3: Divide And Conquer 1:

Aim: Design and perform analysis (time and space complexity) to find the presence of an element or not in a given
data type/database/data set using the Binary Search technique

Algorithm:
Step 1: Input the array in sorted order (ascending or descending order)
Step 2: Input the target element
Step 3: Set a timer, it will start when the loop starts, start Time
Step 4: Split the array into two halves. If the size of the array n is even, the middle index is n/2; if it is odd, it is (n+1)/2.
mid = arr.length / 2 (if even); mid = (arr.length + 1) / 2 (if odd)

Step 5: Run a loop over the array and use a control statement. If the target element is exactly at the middle, print the middle index of the array
If (arr[mid]==x)
print mid
Step 6: Use control statements and set two pointers, low (the starting index of the array) and high (the last index of the array). After dividing the array, if the middle element is less than the target element, discard the left subarray and search the right subarray.
If (arr[mid]<x)
low=mid+1
Step 7: If the middle element is greater than the target element, then we completely avoid the right subarray and we
perform a search on the left subarray.
If (arr[mid]>x)
high=mid-1
Step 8: Repeat until the target element is the middle element and print the middle index.
Step 9: If the element has not been found after the search range is exhausted, print element not found.
Step 10: After the search finishes, stop the timer and report:
Execution time (nanoseconds)
Approximate memory used (bytes)
Flow chart:
Program:
import java.util.*;
public class BinarySearchPerformance {
public static int binarySearch(int arr[], int low,×int high, int x) { while (low <= high) {
int mid = (low + high) / 2; if (arr[×id] == x) {
return mid;
}
if (arr[mid] < x) {
low = mid + 1;
} else {
high = mid - 1;
}
}
return -1;
}
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
System.out.print("Enter the number of elements: ");
int n = sc.nextInt();
int arr[] = new int[n];
for (int i = 0; i < n; i++) {
arr[i] = i + 1;
}
System.out.print("Enter the element to search: ");
int x = sc.nextInt();
long startTime = System.nanoTime();
int index = binarySearch(arr, 0, n - 1, x);
long endTime = System.nanoTime();
if (index == -1) {
System.out.println("No");
} else {
System.out.println("Yes, found at index: " + index);
}
long duration = endTime - startTime;
System.out.println("Execution time (nanoseconds): " + duration);
long spaceUsed = (long) n * 4;
System.out.println("Approximate memory used (bytes): " + spaceUsed);
sc.close();
}
}
Expected Output:
Enter the number of elements: 100
Enter the element to search: 74
Yes, found at index: 73
Execution time (nanoseconds): 10120
Approximate memory used (bytes): 400

Analysis:
The Best Case:
Binary search has a best-case time complexity of O(1) because if the middle element is the
target, it requires only one comparison. The time complexity is:
T(n) = 1 —> O(1)
Worst Case:
For the worst case, binary search has a time complexity of O(log n) because the search space
is divided by 2 after each comparison. If the element is not present, it takes log(n) comparisons,
where n is the size of the dataset. So, T(n) = log n —> O(log n)
The Average Case:
In the average case, the search time depends on the position of the target relative to the middle.
Each iteration reduces the search space logarithmically, so:
T(n) = log n —> O(log n)
Space Complexity:
—>The space complexity of binary search is O(1) for auxiliary variables such as low, high, and mid;
O(n) for storing the array of n elements; and, for the recursive version, O(log n) because each call
splits the array in half, giving log₂(n) stack levels.
—>Iterative Binary Search is more space-efficient due to constant auxiliary space O(1).
—>Recursive Binary Search consumes extra space O(log n) due to the
recursive calls.
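For contrast with the iterative program above, here is a minimal recursive sketch (class name RecursiveBinarySearch is illustrative) whose call stack accounts for the O(log n) figure; it also uses the midpoint low + (high - low) / 2, which avoids integer overflow for very large arrays:

```java
public class RecursiveBinarySearch {
    // Each recursive call halves the search space, so the call
    // stack grows to at most log2(n) frames: O(log n) space.
    public static int search(int[] arr, int low, int high, int x) {
        if (low > high) return -1;          // element not present
        int mid = low + (high - low) / 2;   // overflow-safe midpoint
        if (arr[mid] == x) return mid;
        if (arr[mid] < x) return search(arr, mid + 1, high, x);
        return search(arr, low, mid - 1, x);
    }

    public static void main(String[] args) {
        int[] data = {10, 20, 30, 40, 50};
        System.out.println(search(data, 0, data.length - 1, 40)); // prints 3
        System.out.println(search(data, 0, data.length - 1, 35)); // prints -1
    }
}
```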
This logarithmic growth reflects the halving process, making binary search much faster than linear
search for large datasets.
Week 4
Week 4: Divide And Conquer 2:

1a) Aim: Design and perform analysis (time and space complexity) to sort the given
element in ascending or descending order for the given data type /database/data set using
Quick sort or exchange sort technique
Algorithm:
Step 1: Define the array
We have an unsorted array that needs to be sorted using Quicksort.
Step 2: Choose a Pivot and Start the Timer
Choose a pivot element from the array (typically the last element, the first element, or a
random element). Partition the array so that elements smaller than the pivot go to the left,
and elements greater go to the right.
function quickSort(array, low, high):
    if low < high:
        pivotIndex = partition(array, low, high)
        quickSort(array, low, pivotIndex - 1)    # recursively sort left part
        quickSort(array, pivotIndex + 1, high)   # recursively sort right part
Step 3: Partition the array around the pivot
Rearrange the elements so that elements smaller than the pivot move to the left and greater
elements move to the right.
function partition(array, low, high):
    pivot = array[high]          # choose the last element as pivot
    i = low - 1                  # pointer for the smaller element
    for j in range(low, high):
        if array[j] < pivot:
            i += 1
            swap(array[i], array[j])
    swap(array[i + 1], array[high])  # place pivot in its correct position
    return i + 1                     # return pivot index
Step 4: Recursively apply QuickSort to sub-arrays Sort the left and right partitions
recursively. quickSort(array, low, pivotIndex - 1)
quickSort(array, pivotIndex + 1, high)
Step 5: The array is now sorted
Once all recursive calls are completed, the array is fully sorted.
Step 6: Stop Timer and Calculate Execution Time
Measure the time taken for sorting.
startTime = currentTime()
quickSort(array, 0, length(array) - 1)
endTime = currentTime()
executionTime = endTime - startTime
print("Execution Time:", executionTime)
Flow chart:
Program:
public class QuickSort {
static void swap(int[] arr, int i, int j) {
int temp = arr[i];
arr[i] = arr[j]; arr[j] = temp;
}
static int partition(int[] arr, int low, int high) {
int pivot = arr[high];
int i = (low - 1);
for (int j = low; j < high; j++) {
if (arr[j] < pivot) {
i++;
swap(arr, i, j);
}
}
swap(arr, i + 1, high);
return (i + 1);
}
static void quickSort(int[] arr, int low, int high) {
if (low < high) {
int pi = partition(arr, low, high);
quickSort(arr, low, pi - 1);
quickSort(arr, pi + 1, high);
}
}

public static void main(String[] args) {


int[] arr = {10, 80, 30, 90, 40, 50, 70};
int n = arr.length; System.out.println("Original array:");
printArray(arr);
long startTime = System.nanoTime();
quickSort(arr, 0, n - 1);
long endTime = System.nanoTime();
System.out.println("Sorted array:");
printArray(arr);
long executionTime = endTime - startTime;
long spaceUsed = (long) n * 4; // array of n ints, 4 bytes each
System.out.println("Execution Time: " + executionTime + " nanoseconds");
System.out.println("Approximate Space Complexity: " + spaceUsed + " bytes");
}
static void printArray(int[] arr) {
for (int num : arr) {
System.out.print(num + " ");
}
System.out.println();
}
}
Output:
Original array:
10 80 30 90 40 50 70
Sorted array:
10 30 40 50 70 80 90
Execution Time: 4138951 nanoseconds
Approximate Space Complexity: 28 bytes
Analysis:
Time Complexity Analysis:
Case           Time Complexity
Best Case      O(n log n) (balanced partitions)
Average Case   O(n log n) (random input)
Worst Case     O(n^2) (already sorted or reverse-sorted)
Partitioning takes O(n) time.
Recursive calls on subarrays contribute to the log n factor in the best and average cases.
Space Complexity Analysis:
In-place algorithm: requires O(1) extra space for swapping.
Recursive call stack: O(log n) for balanced recursion (best case), O(n) for worst-case recursion.
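The O(n^2) worst case arises when the chosen pivot is always an extreme element (for example, already-sorted input with a last-element pivot). A common mitigation, sketched below with illustrative class and method names, swaps a randomly chosen element into the pivot slot before running the same Lomuto partition used in the program, making the expected cost O(n log n) on any input:

```java
import java.util.Arrays;
import java.util.Random;

public class RandomizedQuickSort {
    static final Random RNG = new Random();

    static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    // Swap a random element into the pivot position first,
    // then run the usual Lomuto partition.
    static int randomizedPartition(int[] a, int low, int high) {
        int r = low + RNG.nextInt(high - low + 1);
        swap(a, r, high);
        int pivot = a[high], i = low - 1;
        for (int j = low; j < high; j++) {
            if (a[j] < pivot) swap(a, ++i, j);
        }
        swap(a, i + 1, high);
        return i + 1;
    }

    public static void sort(int[] a, int low, int high) {
        if (low < high) {
            int pi = randomizedPartition(a, low, high);
            sort(a, low, pi - 1);
            sort(a, pi + 1, high);
        }
    }

    public static void main(String[] args) {
        int[] arr = {10, 80, 30, 90, 40, 50, 70};
        sort(arr, 0, arr.length - 1);
        System.out.println(Arrays.toString(arr)); // [10, 30, 40, 50, 70, 80, 90]
    }
}
```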
1b)
Aim: Design and perform analysis (time and space complexity) to sort the given element in
ascending or descending order for the given data type /database/data set using the Merge sort
technique.

Algorithm:
Merge Sort Algorithm:
Merge Sort is a divide-and-conquer algorithm that splits an array into two halves, recursively
sorts them, and then merges the sorted halves.
This simple algorithm efficiently sorts an array using the merge sort technique, leveraging
recursion and merging steps to handle the sorting process.
Step 1: Define the array:
Step 2: Split the array into two halves, and from here start the timer
function mergeSort(array):
    if length(array) > 1:
        mid = length(array) // 2
        leftHalf = array[0:mid]
        rightHalf = array[mid:length(array)]
        mergeSort(leftHalf)
        mergeSort(rightHalf)
        merge(array, leftHalf, rightHalf)
Step 3: Recursively divide the subarrays until each subarray contains a single element.
    mergeSort(leftHalf)
    mergeSort(rightHalf)
    merge(array, leftHalf, rightHalf)
Step 4: Merge the subarrays back together, comparing elements and placing the smaller one first so
they end up in sorted order.
function merge(array, leftHalf, rightHalf):
    i = j = k = 0
    while i < length(leftHalf) and j < length(rightHalf):
        if leftHalf[i] < rightHalf[j]:
            array[k] = leftHalf[i]
            i += 1
        else:
            array[k] = rightHalf[j]
            j += 1
        k += 1
Step 5: The merging process ensures the elements from the sub-arrays are in sorted order.
while i < length(leftHalf):
    array[k] = leftHalf[i]
    i += 1
    k += 1
while j < length(rightHalf):
    array[k] = rightHalf[j]
    j += 1
    k += 1
Step 6: Build the final sorted array from the sorted subarrays, stop the timer, and report the
execution time in nanoseconds (measured here for 50 random elements).
Step 7: Calculate the approximate space complexity in bytes (also for 50 elements).
Flow chart:
Program:
import java.util.*;
public class Week3 {
public static void mergeSort(int [] arr) {
if (arr.length <= 1) return;
int mid = arr.length / 2;
int [] left = Arrays.copyOfRange(arr, 0, mid);
int [] right = Arrays.copyOfRange(arr, mid, arr.length);
mergeSort(left);
mergeSort(right);
merge (arr, left, right);
}
public static void merge (int [] arr, int [] left, int [] right)
{
int i = 0, j = 0, k = 0;
while (i < left.length && j < right.length) {
arr[k++] = (left[i] <= right[j])? left[i++]: right[j++];
}
while (i < left.length) arr[k++] = left[i++];
while (j < right.length) arr[k++] = right[j++];
}
public static void main (String [] args) {
Scanner sc = new Scanner (System.in);
System.out.print("Enter the size of the array: "); int size = sc.nextInt();
int [] array = new int[size];
Random random = new Random ();
for (int i = 0; i < size; i++) {
array[i] = random.nextInt(1000);
}
System.out.println("Original Array: " + Arrays.toString(array));
long startTime = System.nanoTime();
mergeSort(array);
long endTime = System.nanoTime();
System.out.println("Sorted Array: " + Arrays.toString(array));
int spaceComplexity = array.length * 4 + (array.length / 2) * 4 * 2;
System.out.println("Approximate Space Complexity: " + spaceComplexity + " bytes");
System.out.println("Execution Time: " + (endTime - startTime) + " nanoseconds");
sc.close();
}
}
Output:
Enter the size of the array: 50
Original Array: [165, 748, 751, 889, 931, 436, 782, 122, 582, 148, 170, 237, 514, 75, 779,
727, 474, 206, 173,
174, 223, 250, 152, 718, 922, 111, 806, 251, 322, 888, 216, 680, 435, 318, 261, 742, 235,
860, 920, 336, 674, 478,
613, 651, 686, 851, 777, 411, 927, 185]
Sorted Array: [75, 111, 122, 148, 152, 165, 170, 173, 174, 185, 206, 216, 223, 235, 237, 250,
251, 261, 318, 322,
336, 411, 435, 436, 474, 478, 514, 582, 613, 651, 674, 680, 686, 718, 727, 742, 748, 751,
777, 779, 782, 806, 851,
860, 888, 889, 920, 922, 927, 931]
Approximate Space
Complexity: 400 bytes
Execution Time: 60250
nanoseconds

Analysis :
1. Time Complexity Analysis:
Merge Sort is a divide-and-conquer algorithm that recursively splits the array and merges
the sorted halves.
Time Complexity Breakdown:
Dividing Step: Each recursive call splits the array into two halves. This requires O(log n)
time due to the depth of the recursion tree.
Merging Step: Each level of recursion requires O(n) time to merge two subarrays.
Total time complexity = O(n log n)
Best Case, Worst Case, and Average Case:
Best Case: O(n log n) (Occurs when the array is already sorted or reverse-sorted)
Worst Case: O(n log n) (Occurs in all cases due to the consistent split
and merge process) Average Case: O(n log n)
2. Space Complexity Analysis:
Merge Sort requires additional space for the temporary subarrays during the merging process.
Space Usage Calculation:
Input Array: Requires 4*n bytes (each integer takes 4 bytes)
Left and Right Arrays: At each level, two subarrays of approximately n/2 size
are created recursively. Total auxiliary space required = O(n)
In the program:
Input array: 4 * n bytes
Left and right arrays: (n/2) × 4 × 2 = 4n bytes
Total Space Complexity: O(n)
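The O(log n) recursion depth can be removed entirely with a bottom-up variant that iteratively merges runs of width 1, 2, 4, …, using a single O(n) buffer instead of fresh copies per call; a hedged sketch (class name BottomUpMergeSort is illustrative):

```java
import java.util.Arrays;

public class BottomUpMergeSort {
    public static void sort(int[] arr) {
        int n = arr.length;
        int[] buf = new int[n];                 // single O(n) auxiliary buffer
        for (int width = 1; width < n; width *= 2) {
            for (int lo = 0; lo < n - width; lo += 2 * width) {
                int mid = lo + width - 1;
                int hi = Math.min(lo + 2 * width - 1, n - 1);
                merge(arr, buf, lo, mid, hi);
            }
        }
    }

    private static void merge(int[] a, int[] buf, int lo, int mid, int hi) {
        System.arraycopy(a, lo, buf, lo, hi - lo + 1);
        int i = lo, j = mid + 1;
        for (int k = lo; k <= hi; k++) {
            if (i > mid)              a[k] = buf[j++]; // left run exhausted
            else if (j > hi)          a[k] = buf[i++]; // right run exhausted
            else if (buf[j] < buf[i]) a[k] = buf[j++]; // take the smaller head
            else                      a[k] = buf[i++];
        }
    }

    public static void main(String[] args) {
        int[] data = {165, 748, 75, 436, 122};
        sort(data);
        System.out.println(Arrays.toString(data)); // [75, 122, 165, 436, 748]
    }
}
```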
WEEK 6
Aim:
Design and perform analysis (time and space complexity) to perform a Minimum Spanning
Tree (MST) or Minimum Weight Spanning Tree for a weighted, connected, undirected
graph in a spanning tree with a weight less than or equal to the weight of every other spanning
tree.

Algorithm:
Step 1: Input the number of vertices and edges in the graph
Step 2: Input the edges (u, v, weight)
Step 3: Sort all edges based on their weights (non-decreasing)
Step 4: Initialize DSU (Disjoint Set Union) to detect cycles
Step 5: Traverse the sorted edge list
If adding an edge doesn’t form a cycle, include it in the MST
Step 6: Stop when MST has (V-1) edges
Step 7: Print edges included in MST and total weight
Step 8: Display Execution Time (in nanoseconds) and Memory Used (in bytes)
FLOW CHART:
Program:
import java.util.*;

class Edge implements Comparable<Edge> {


int u, v, weight;
Edge (int u, int v, int weight) {
this.u = u;
this.v = v;
this.weight = weight;
}
public int compareTo(Edge other) {
return this.weight - other.weight;
}
}

class DisjointSet {
int [] parent;
DisjointSet(int n) {
parent = new int[n];
for (int i = 0; i < n; i++) parent[i] = i;
}
int find (int x) {
if (parent[x] != x) parent[x] = find(parent[x]);
return parent[x];
}
void union (int x, int y) {
parent[find(x)] = find(y);
}
}

public class KruskalMST {


public static void main (String [] args) {
Scanner sc = new Scanner (System.in);

System.out.print("Enter number of vertices: ");


int V = sc.nextInt();
System.out.print("Enter number of edges: ");
int E = sc.nextInt();

Edge [] edges = new Edge[E];


System.out.println("Enter edges (u v weight):");
for (int i = 0; i < E; i++) {
int u = sc.nextInt();
int v = sc.nextInt();
int w = sc.nextInt();
edges[i] = new Edge (u, v, w);
}

long startTime = System.nanoTime();

Arrays.sort(edges);
DisjointSet ds = new DisjointSet(V);
int totalWeight = 0;
List<Edge> mst = new ArrayList<> ();

for (Edge edge : edges) {


if (ds.find(edge.u) != ds.find(edge.v)) {
ds.union(edge.u, edge.v);
mst.add(edge);
totalWeight += edge.weight;
if (mst.size() == V - 1) break;
}
}

long endTime = System.nanoTime();


long executionTime = endTime - startTime;

System.out.println("\nEdges in MST:");
for (Edge edge : mst) {
System.out.println(edge.u + " - " + edge.v + " : " + edge.weight);
}
System.out.println("Total Weight of MST: " + totalWeight);
System.out.println("Execution Time (nanoseconds): " + executionTime);

int spaceUsed = (E * 12) + (V * 4);


System.out.println("Approximate Memory Used (bytes): " + spaceUsed);

sc.close();
}
}
Output:
Enter number of vertices: 4
Enter number of edges: 5
Enter edges (u v weight):
0 1 10
0 2 6
0 3 5
1 3 15
2 3 4

Edges in MST:
2 - 3: 4
0 - 3: 5
0 - 1: 10
Total Weight of MST: 19
Execution Time (nanoseconds): 84200
Approximate Memory Used (bytes): 76

For invalid input:


Enter number of vertices: 4
Enter number of edges: 3
Enter edges (u v weight):
0 1 -5
2 3 0
1 5 10

Output:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Index 5 out of
bounds for length 4
at DisjointSet.find(...)
at KruskalMST.main(...)
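The exception above occurs because vertex 5 lies outside the valid range 0..V-1. A small check of the kind that could be added before constructing each Edge (helper name validateEdge is illustrative; note that a negative weight such as -5 is actually legal for Kruskal, so only the vertex indices are validated here):

```java
public class EdgeValidator {
    // Returns true only when both endpoints lie in 0..V-1.
    public static boolean validateEdge(int u, int v, int V) {
        return u >= 0 && u < V && v >= 0 && v < V;
    }

    public static void main(String[] args) {
        System.out.println(validateEdge(1, 5, 4)); // prints false: vertex 5 out of range
        System.out.println(validateEdge(0, 1, 4)); // prints true
    }
}
```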

ANALYSIS:
Best Case Time Complexity: O(E log E)
● Occurs when:
o The edges are already sorted by weight.
o The MST is formed quickly without complex union operations.
● Sorting edges dominates: O (E log E)
● Union-Find operations are near-constant due to path compression.
Input:
Enter number of vertices: 4
Enter number of edges: 3
Enter edges (u v weight):
0 1 1
1 2 2
2 3 3

Output:
Edges in MST:
0 - 1: 1
1 - 2: 2
2 - 3: 3
Total Weight of MST: 6
Execution Time (nanoseconds): 7450
Approximate Memory Used (bytes): 52

Worst Case Time Complexity: O(E log E + E * α(V))


● Occurs when:
o The edge list is large and unsorted.
o Many edges create cycles, leading to repeated find/unions.
● E log E for sorting.
● E * α(V) for Union-Find operations.
Input:
Enter number of vertices: 5
Enter number of edges: 10
Enter edges (u v weight):
0 1 5
0 2 5
0 3 5
0 4 5
1 2 5
1 3 5
1 4 5
2 3 5
2 4 5
3 4 5
Output:
Edges in MST:
0 - 1: 5
0 - 2: 5
0 - 3: 5
0 - 4: 5
Total Weight of MST: 20
Execution Time (nanoseconds): 29420
Approximate Memory Used (bytes): 140

Average Case Time Complexity: O (E log E)


● Sorting still dominates the cost.
● Union-Find operations remain very efficient.
● Typical when edge weights and connectivity vary randomly.

Input:
Enter number of vertices: 6
Enter number of edges: 8
Enter edges (u v weight):
0 1 3
0 2 1
1 3 7
2 3 5
3 4 2
4 5 4
2 5 6
1 5 8

Output:
Edges in MST:
0 - 2: 1
3 - 4: 2
0 - 1: 3
4 - 5: 4
2 - 3: 5
Total Weight of MST: 15
Execution Time (nanoseconds): 18300
Approximate Memory Used (bytes): 120
Space Complexity:
Component Complexity
Edge list O(E)
DSU parent array O(V)
MST edge storage O(V)
Auxiliary variables O(1)
● Total Space: O(E + V)
● Approximate Memory Used (bytes):
E * 12 for edges, V * 4 for DSU, V * 12 for MST edges
→ Memory = (E * 12) + (V * 4) + (V * 12) + small constants
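The near-constant α(V) cost per operation quoted above assumes path compression (already present in find) combined with union by rank or size; the program's union attaches roots arbitrarily, which can build long chains before compression kicks in. A hedged sketch of the rank-based variant (class name is illustrative):

```java
public class RankedDisjointSet {
    int[] parent, rank;

    RankedDisjointSet(int n) {
        parent = new int[n];
        rank = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
    }

    int find(int x) {                       // path compression
        if (parent[x] != x) parent[x] = find(parent[x]);
        return parent[x];
    }

    void union(int x, int y) {              // attach shorter tree under taller
        int rx = find(x), ry = find(y);
        if (rx == ry) return;
        if (rank[rx] < rank[ry])      parent[rx] = ry;
        else if (rank[rx] > rank[ry]) parent[ry] = rx;
        else { parent[ry] = rx; rank[rx]++; }
    }

    public static void main(String[] args) {
        RankedDisjointSet ds = new RankedDisjointSet(4);
        ds.union(0, 1);
        ds.union(2, 3);
        System.out.println(ds.find(0) == ds.find(1)); // prints true
        System.out.println(ds.find(1) == ds.find(2)); // prints false
    }
}
```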
Week 7
Aim: - Design and perform analysis (time and space complexity) for the following, all pairs
shortest path algorithm, which is also known as the Floyd-Warshall algorithm, to generate a
matrix representing the minimum distances between nodes in a weighted graph; for an Optimal
Binary Search Tree (OBST) that minimizes search cost; and for maximizing profits by
optimally filling a bag with given items based on weight and profit constraints.

Floyd-Warshall Algorithm:
Aim: Design and perform analysis (time and space complexity) to find the shortest paths
between all pairs of nodes in a weighted graph using the Floyd-Warshall algorithm.

Algorithm:

1. Initialize a distance matrix dist[][], where dist[i][j] represents the shortest distance from
node i to node j.
2. Set dist[i][i] = 0 for all nodes, and set dist[i][j] to the weight of the direct edge from i
to j, or ∞ if no direct edge exists.
3. Iterate through all intermediate nodes k:
o For each pair of nodes (i, j), update dist[i][j] as:

dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

4. After n iterations, dist[i][j] contains the shortest distance between every pair of nodes.
Flowchart:
Program Implementation:
import java.util.Scanner;

public class FloydWarshall {


final static int INF = 99999;

public static void floydWarshall(int [][] graph, int n) {


long startTime = System.nanoTime();

int [] [] dist = new int[n][n];


for (int i = 0; i < n; i++)
System.arraycopy(graph[i], 0, dist[i], 0, n);

for (int k = 0; k < n; k++) {


for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) {
if (dist[i][k] != INF && dist[k][j] != INF
&& dist[i][k] + dist[k][j] < dist[i][j]) {
dist[i][j] = dist[i][k] + dist[k][j];
}
}
}
}

long endTime = System.nanoTime();


long duration = endTime - startTime;

System.out.println("\nShortest distance matrix:");


printSolution(dist, n);

System.out.println("\nExecution Time (nanoseconds): " + duration);

int spaceUsed = (2 * n * n * 4); // original graph + dist array, int = 4 bytes


System.out.println("Approximate Memory Used (bytes): " + spaceUsed);
}

static void printSolution(int [][] dist, int n) {


for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) {
if (dist[i][j] == INF)
System.out.print("INF ");
else
System.out.print(dist[i][j] + " ");
}
System.out.println();
}
}
public static void main (String[] args) {
Scanner sc = new Scanner(System.in);
System.out.print("Enter number of vertices: ");
int n = sc.nextInt();

int[][] graph = new int[n][n];


System.out.println("Enter the adjacency matrix (use 99999 for INF):");
for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) {
graph[i][j] = sc.nextInt();
}
}

floydWarshall(graph, n);
sc.close();
}
}

Input:
Enter number of vertices: 4
Enter the adjacency matrix (use 99999 for INF):
0 3 99999 99999
2 0 99999 99999
99999 7 0 1
6 99999 99999 0

Output:
Shortest distance matrix:
0 3 INF INF
2 0 INF INF
9 7 0 1
6 9 INF 0

Execution Time (nanoseconds): 45800


Approximate Memory Used (bytes): 128

Another input:
Enter number of vertices: 4
Enter the adjacency matrix (use 99999 for INF):
0 2 99999 3
99999 0 5 99999
1 99999 0 99999
99999 99999 99999 0

Output:
Shortest distance matrix:
0 2 7 3
6 0 5 9
1 3 0 4
INF INF INF 0

Execution Time (nanoseconds): 99999999

Approximate Memory Used (bytes): 128

Time Complexity Analysis

● The Floyd-Warshall algorithm runs three nested loops over n vertices, leading to a time
complexity of: T(n)= O(n^3)
● For a graph with n = 100 nodes, assuming each basic operation (comparison, addition,
assignment) takes 1 nanosecond (ns), the estimated runtime is:

100³ = 10⁶ operations × 1 ns = 1 millisecond (10⁶ ns)

● For n = 500, runtime scales as: 500³ = 1.25 × 10⁸ ns = 125 milliseconds.

Space Complexity Analysis:

● The algorithm stores a distance matrix dist[n][n], requiring: O(n²)


● For n = 100, assuming each entry takes 4 bytes, the memory usage is:

100² × 4 = 40,000 bytes = 40 KB

● For n = 500:

500² × 4 = 1,000,000 bytes = 1 MB
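The distance matrix alone does not record which route achieves each distance. A common extension, sketched here with a hypothetical next matrix holding the first hop of each shortest path, lets the actual path be reconstructed after the same triple loop (class and method names are illustrative):

```java
import java.util.*;

public class FloydWarshallPaths {
    static final int INF = 99999;

    // Updates dist in place; returns next[i][j] = first vertex after i
    // on a shortest i -> j path, or -1 when no path exists.
    public static int[][] shortestPaths(int[][] dist, int n) {
        int[][] next = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                next[i][j] = (dist[i][j] != INF) ? j : -1;
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (dist[i][k] != INF && dist[k][j] != INF
                            && dist[i][k] + dist[k][j] < dist[i][j]) {
                        dist[i][j] = dist[i][k] + dist[k][j];
                        next[i][j] = next[i][k];   // route through k
                    }
        return next;
    }

    public static List<Integer> path(int[][] next, int u, int v) {
        if (next[u][v] == -1) return Collections.emptyList();
        List<Integer> p = new ArrayList<>(List.of(u));
        while (u != v) { u = next[u][v]; p.add(u); }
        return p;
    }

    public static void main(String[] args) {
        int[][] d = {{0, 3, INF}, {INF, 0, 1}, {INF, INF, 0}};
        int[][] next = shortestPaths(d, 3);
        System.out.println(path(next, 0, 2)); // prints [0, 1, 2]
    }
}
```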
Optimal Binary Search Tree (OBST)
Aim: Design and perform analysis (time and space complexity) to
construct an OBST that minimizes search cost.
Algorithm:

1. Compute the frequency of each key in the given sorted list.


2. Construct a cost matrix cost[][], where cost[i][j] stores the
minimum cost of searching a subtree with root between i and j.
3. Use dynamic programming to find the minimum search cost using
the formula:

cost[i][j] = min over k in [i..j] of (cost[i][k-1] + cost[k+1][j] + sum of frequencies from i to j)

4. The root that minimizes the cost is chosen as the root of the OBST.
FlowChart:
Program Implementation:
import java.util.Scanner;

public class OBST {


public static int optimalSearchTree(int[] keys, int[] freq, int n) {
long startTime = System.nanoTime();

int[][] cost = new int[n][n];


for (int i = 0; i < n; i++)
cost[i][i] = freq[i];

for (int len = 2; len <= n; len++) {


for (int i = 0; i <= n - len; i++) {
int j = i + len - 1;
cost[i][j] = Integer.MAX_VALUE;

for (int r = i; r <= j; r++) {


int c = ((r > i) ? cost[i][r - 1] : 0)
+ ((r < j) ? cost[r + 1][j] : 0)
+ sum(freq, i, j);
if (c < cost[i][j])
cost[i][j] = c;
}
}
}

long endTime = System.nanoTime();


long duration = endTime - startTime;

System.out.println("\nExecution Time (nanoseconds): " + duration);

int spaceUsed = (n * n * 4) + (n * 4) * 2;
System.out.println("Approximate Memory Used (bytes): " + spaceUsed);

return cost[0][n - 1];


}

static int sum(int[] freq, int i, int j) {


int s = 0;
for (int k = i; k <= j; k++)
s += freq[k];
return s;
}

public static void main(String[] args) {


Scanner sc = new Scanner(System.in);

System.out.print("Enter number of keys: ");


int n = sc.nextInt();

int[] keys = new int[n];


int[] freq = new int[n];

System.out.println("Enter keys:");
for (int i = 0; i < n; i++) {
keys[i] = sc.nextInt();
}

System.out.println("Enter corresponding frequencies:");


for (int i = 0; i < n; i++) {
freq[i] = sc.nextInt();
}

int result = optimalSearchTree(keys, freq, n);


System.out.println("Optimal Cost: " + result);

sc.close();
}
}

Input:
Enter number of keys: 4
Enter keys:
10 20 30 40
Enter corresponding frequencies:
4 2 6 3

Output:
Execution Time (nanoseconds): 51800
Approximate Memory Used (bytes): 96
Optimal Cost: 26

Another input:
Enter number of keys: 5
Enter keys:
5 10 15 20 25
Enter corresponding frequencies:
7 3 5 2 6

Output:
Execution Time (nanoseconds): 42666
Approximate Memory Used (bytes): 140
Optimal Cost: 46

Time Complexity Analysis:

● The OBST algorithm uses dynamic programming to find the


minimal search cost. It involves three nested loops over n keys,
resulting in: T(n) = O(n³)
● For n = 50, assuming each operation takes 1 ns, the estimated
execution time is:

50³ = 125,000 ns = 125 µs

● For n = 100:

100³ = 1,000,000 ns = 1 millisecond


Space Complexity Analysis:

● OBST maintains a cost[n][n] table, requiring: O(n²)


● For n = 50:

50² × 4 = 10,000 bytes = 10 KB

● For n = 100:

100² × 4 = 40,000 bytes = 40 KB
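One caveat: the program above calls sum(freq, i, j) inside the innermost loop, which adds another factor of n (roughly O(n⁴) overall); the O(n³) bound quoted here assumes the frequency sums are available in O(1). A hedged sketch using a prefix-sum array (class name is illustrative) that reproduces the Optimal Cost of 26 for the first input:

```java
public class OBSTPrefixSum {
    public static int optimalCost(int[] freq, int n) {
        int[] prefix = new int[n + 1];              // prefix[i] = sum of freq[0..i-1]
        for (int i = 0; i < n; i++) prefix[i + 1] = prefix[i] + freq[i];

        int[][] cost = new int[n][n];
        for (int i = 0; i < n; i++) cost[i][i] = freq[i];

        for (int len = 2; len <= n; len++) {
            for (int i = 0; i + len - 1 < n; i++) {
                int j = i + len - 1;
                int fsum = prefix[j + 1] - prefix[i]; // sum(freq, i, j) in O(1)
                cost[i][j] = Integer.MAX_VALUE;
                for (int r = i; r <= j; r++) {        // try every root r
                    int c = ((r > i) ? cost[i][r - 1] : 0)
                          + ((r < j) ? cost[r + 1][j] : 0)
                          + fsum;
                    if (c < cost[i][j]) cost[i][j] = c;
                }
            }
        }
        return cost[0][n - 1];
    }

    public static void main(String[] args) {
        int[] freq = {4, 2, 6, 3};                  // same frequencies as above
        System.out.println(optimalCost(freq, 4));   // prints 26
    }
}
```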


0/1 Knapsack Algorithm (Profit Maximization)
Aim: Design and perform analysis (time and space complexity) to
maximize profit by optimally filling a bag with given items.
Algorithm:

1. Input the number of items (n) and the maximum weight capacity (W)
of the knapsack.
2. Input the weight and value of each item.
3. Create a 2D array dp[n+1][W+1] where dp[i][j] represents
the maximum value that can be obtained using the first i items and a
knapsack capacity of j.
4. Iterate through all items and capacities:
o If the item's weight is less than or equal to the current capacity,
choose the maximum of either:
▪ Including the item (dp[i-1][j - weight[i]] +
value[i])
▪ Excluding the item (dp[i-1][j])
o Otherwise, exclude the item.
5. The final answer is stored in dp[n][W].
6. Display the maximum profit and the items included.
Flowchart:
Program Implementation:
import java.util.Scanner;

public class Knapsack {

static int knapsack(int W, int[] wt, int[] val, int n) {


long startTime = System.nanoTime();

int[][] dp = new int[n + 1][W + 1];

for (int i = 0; i <= n; i++) {


for (int w = 0; w <= W; w++) {
if (i == 0 || w == 0)
dp[i][w] = 0;
else if (wt[i - 1] <= w)
dp[i][w] = Math.max(val[i - 1] + dp[i - 1][w - wt[i - 1]], dp[i - 1][w]);
else
dp[i][w] = dp[i - 1][w];
}
}

long endTime = System.nanoTime();


long duration = endTime - startTime;

System.out.println("\nExecution Time (nanoseconds): " + duration);

int spaceUsed = ((n + 1) * (W + 1) * 4) + (n * 4 * 2) + 4;


System.out.println("Approximate Memory Used (bytes): " + spaceUsed);

return dp[n][W];
}

public static void main(String[] args) {


Scanner sc = new Scanner(System.in);

System.out.print("Enter number of items: ");


int n = sc.nextInt();
int[] val = new int[n];
int[] wt = new int[n];

System.out.println("Enter values:");
for (int i = 0; i < n; i++) {
val[i] = sc.nextInt();
}

System.out.println("Enter weights:");
for (int i = 0; i < n; i++) {
wt[i] = sc.nextInt();
}

System.out.print("Enter maximum weight capacity: ");


int W = sc.nextInt();

int result = knapsack(W, wt, val, n);


System.out.println("Maximum Profit: " + result);

sc.close();
}
}

Input:
Enter number of items: 3
Enter values:
60 100 120
Enter weights:
10 20 30
Enter maximum weight capacity: 50

Output:
Execution Time (nanoseconds): 43400
Approximate Memory Used (bytes): 844
Maximum Profit: 220

Another input:
Enter number of items: 4
Enter values:
20 40 50 100
Enter weights:
5 10 15 30
Enter maximum weight capacity: 40

Output:
Execution Time (nanoseconds): 56000
Approximate Memory Used (bytes): 856
Maximum Profit: 140

Time Complexity Analysis

● The dynamic programming solution to the 0/1 Knapsack problem


has a time complexity of: T(n,W)=O(nW)
● For n = 50 items and a weight limit W = 1000, the number of
operations is:

50×1000 = 50,000 ns = 50 µs

● For n = 100, W = 2000:

100×2000 = 200,000 ns = 200 µs


Space Complexity Analysis

● The algorithm requires a dp[n][W] table, leading to: O(nW)


● For n = 50, W = 1000, assuming each entry takes 4 bytes:

50×1000×4=200,000 bytes=200 KB

● For n = 100, W = 2000:

100×2000×4=800,000 bytes=800 KB
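Since each dp row depends only on the previous row, the table can be collapsed to a single array of W+1 entries, cutting space from O(nW) to O(W); a hedged sketch (class name is illustrative) that iterates capacities downward so every item is used at most once:

```java
public class KnapsackOneD {
    public static int knapsack(int W, int[] wt, int[] val, int n) {
        int[] dp = new int[W + 1];           // dp[w] = best value at capacity w
        for (int i = 0; i < n; i++) {
            // Descend so dp[w - wt[i]] still refers to the previous item row,
            // which keeps this a 0/1 (not unbounded) knapsack.
            for (int w = W; w >= wt[i]; w--) {
                dp[w] = Math.max(dp[w], val[i] + dp[w - wt[i]]);
            }
        }
        return dp[W];
    }

    public static void main(String[] args) {
        int[] val = {60, 100, 120};
        int[] wt = {10, 20, 30};
        System.out.println(knapsack(50, wt, val, 3)); // prints 220
    }
}
```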
WEEK 8
(Dynamic Programming 2)
Aim: Design and perform analysis (time and space complexity) of the concepts of reliability
of flow shop scheduling by designing a system of devices connected in series or parallel to
ensure device reliability, while also scheduling m machines to process n jobs, each with n
operations, in a specific order on designated machines, ensuring optimal operation
execution.
Algorithm
Step – 1: Initialize DP Table
● Create a 2D DP table dp[m+1][n+1], where:

● dp[i][j] stores the minimum time to process j jobs on i machines.


● Initialize all values with INF (∞) because we want to minimize time.

Step – 2: Base Case


● If no jobs are assigned (j=0), total time = 0.

● If no machines exist (i=0), total time = 0.

Step – 3: Filling the DP Table


For each machine (i) and job (j), compute:
1. Series Configuration:
● The machine is dependent on the previous machine’s completion.

● Processing time increases based on reliability:


dp[i][j]=dp[i−1][j]+(time[i−1][j−1]×reliability[i−1])

2. Parallel Configuration:


● The machine has redundancy, so we take the minimum of available options:
dp[i][j]=min(dp[i−1][j], dp[i][j−1])+time[i−1][j−1]

Step – 4: Compute Optimal Makespan


● The optimal schedule time (minimum makespan) is found at dp[m][n].
FlowChart:
Program:
import java.util.*;
class FlowShopScheduling {
public static int flowShop(int m, int n, int[][] time, double[] reliability) {
int[][] dp = new int[m + 1][n + 1];

for (int i = 0; i <= m; i++)


Arrays.fill(dp[i], Integer.MAX_VALUE);

for (int i = 0; i <= m; i++)


dp[i][0] = 0;

for (int j = 0; j <= n; j++)


dp[0][j] = 0;

for (int i = 1; i <= m; i++) {


for (int j = 1; j <= n; j++) {
int seriesTime = dp[i - 1][j] + (int) (time[i - 1][j - 1] * reliability[i - 1]);
int parallelTime = Math.min(dp[i - 1][j], dp[i][j - 1]) + time[i - 1][j - 1];
dp[i][j] = Math.min(seriesTime, parallelTime);
}
}
return dp[m][n];
}

public static void main(String[] args) {


Scanner sc = new Scanner(System.in);

System.out.print("Enter number of jobs (m): ");


int m = sc.nextInt();

System.out.print("Enter number of machines (n): ");


int n = sc.nextInt();

int[][] time = new int[m][n];


System.out.println("Enter the processing time matrix (m x n): ");
for (int i = 0; i < m; i++) {
for (int j = 0; j < n; j++) {
System.out.print("Enter time for job " + (i + 1) + " on machine " + (j + 1) + ": ");
time[i][j] = sc.nextInt();
}
}

double[] reliability = new double[m];


System.out.println("Enter the reliability values for each job: ");
for (int i = 0; i < m; i++) {
System.out.print("Enter reliability for job " + (i + 1) + ": ");
reliability[i] = sc.nextDouble();
}

long startTime = System.nanoTime();


int result = flowShop(m, n, time, reliability);
long endTime = System.nanoTime();

int spaceInBytes = (m + 1) * (n + 1) * Integer.BYTES;

System.out.println("Minimum makespan: " + result);


System.out.println("Execution time (ns): " + (endTime - startTime));
System.out.println("Space used (bytes): " + spaceInBytes);

sc.close();
}
}

Input :
Enter number of jobs (m): 3
Enter number of machines (n): 3
Enter the processing time matrix (m x n):
Enter time for job 1 on machine 1: 5
Enter time for job 1 on machine 2: 3
Enter time for job 1 on machine 3: 2
Enter time for job 2 on machine 1: 4
Enter time for job 2 on machine 2: 6
Enter time for job 2 on machine 3: 3
Enter time for job 3 on machine 1: 7
Enter time for job 3 on machine 2: 2
Enter time for job 3 on machine 3: 5
Enter the reliability values for each job:
Enter reliability for job 1: 0.9
Enter reliability for job 2: 0.8
Enter reliability for job 3: 0.95

Output :
Minimum makespan: 7
Execution time (ns): 125000
Space used (bytes): 64

Another input:
Enter number of jobs (m): 3
Enter number of machines (n): 4
Enter the processing time matrix (m x n):
Enter time for job 1 on machine 1: 5
Enter time for job 1 on machine 2: 3
Enter time for job 1 on machine 3: 4
Enter time for job 1 on machine 4: 6
Enter time for job 2 on machine 1: 7
Enter time for job 2 on machine 2: 2
Enter time for job 2 on machine 3: 5
Enter time for job 2 on machine 4: 4
Enter time for job 3 on machine 1: 6
Enter time for job 3 on machine 2: 3
Enter time for job 3 on machine 3: 2
Enter time for job 3 on machine 4: 7
Enter the reliability values for each job:
Enter reliability for job 1: 0.9
Enter reliability for job 2: 0.85
Enter reliability for job 3: 0.95

Output:
Minimum makespan: 14
Execution time (ns): 135000
Space used (bytes): 80

Time Complexity:
1. Best Case Time Complexity:
The best-case scenario occurs when the jobs have very low processing times or perfect
reliability (i.e., reliability = 1), and the jobs can be processed in an optimal order without
delays.
Example:
● Jobs (m): 3

● Machines (n): 3

Time matrix (processing time on each machine):


{ {1, 1, 1},
{1, 1, 1},
{1, 1, 1} }
● Reliability: {1.0, 1.0, 1.0}

Time Complexity: O(m * n)


2. Worst Case Time Complexity:
The worst-case scenario occurs when the processing times are very high, and the reliability
of jobs is very low, leading to delays and suboptimal scheduling.
Example:
● Jobs (m): 3

● Machines (n): 3
Time matrix (processing time on each machine):
{ {10, 20, 30},
{15, 25, 35},
{20, 30, 40} }
● Reliability: {0.5, 0.4, 0.3}

Time Complexity: O(m * n)

3. Average Case Time Complexity:


The average case happens when the processing times are mixed (not all are the same), and
the reliability values vary (some jobs have higher reliability than others).
Example:
● Jobs (m): 3

● Machines (n): 3

● Time matrix (processing time on each machine):

{ {3, 5, 7},
{4, 6, 8},
{2, 3, 5} }
● Reliability: {0.9, 0.95, 0.98}

● Time Complexity: O(m * n)

Space Complexity: O(m * n)


The space complexity is dominated by the space required for the dp table and the time
matrix. Therefore, the overall space complexity is O(m * n).
WEEK 9:
Week 9: Dynamic Programming 3:

Aim: Design and perform analysis (time and space complexity) of determining an array A
from an array B, ensuring that for all i, A[i] ≤ B[i], while maximizing the sum of the
absolute differences of consecutive pairs in A; concurrently segment array A into contiguous
pieces to store as array B, connect N ropes of varying lengths, and predict market share
prices of Wooden Orange Toothpicks Inc. for the upcoming days?
Algorithm

Step 1: Input the array B of size n elements.

● Read the array B.

● Initialize array A with the same size as B.


Step 2: Construct array A while ensuring A[i] ≤ B[i] and maximizing the sum
of absolute differences.

● Set A[0] = B[0].
● For each i from 1 to n−1:

o If B[i] > A[i−1], set A[i] = B[i] (maximize the absolute difference).

o Otherwise, set A[i] = A[i−1].
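The Step 2 rule can be sketched as a short standalone Java check (a minimal sketch; the values of B mirror the sample run later in this section, and the class name is illustrative):

```java
public class ConstructADemo {
    public static void main(String[] args) {
        int[] B = {10, 5, 8, 12, 15};
        int[] A = new int[B.length];
        A[0] = B[0];
        for (int i = 1; i < B.length; i++) {
            // If B[i] > A[i-1], take B[i]; otherwise carry A[i-1] forward
            A[i] = (B[i] > A[i - 1]) ? B[i] : A[i - 1];
        }
        System.out.println(java.util.Arrays.toString(A)); // [10, 10, 10, 12, 15]
    }
}
```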
Step 3: Segment array A into contiguous pieces and store as array B.

● Iterate through A and create segments where values change.

● Store segments in array B.

Step 4: Connect N ropes of varying lengths with minimum cost.

● Use Min-Heap (Priority Queue):

o Insert all rope lengths into a min-heap.


o While heap size > 1:
▪ Extract two smallest ropes.

▪ Add their sum back to the heap.

▪ Accumulate the total cost.


Step 5: Predict market share prices of Wooden Orange Toothpicks Inc. for the
upcoming days.
● Use Moving Average or Machine Learning (ARIMA, LSTM):
o Take past k days’ prices.
o Compute Moving Average for forecasting.

o Use ML models for more accuracy.
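The k-day moving-average forecast from Step 5 can be sketched as follows (prices taken from the sample run; ARIMA/LSTM variants would require external libraries and are not shown):

```java
public class MovingAverageDemo {
    public static void main(String[] args) {
        int[] prices = {100, 105, 110};
        int k = 3;
        double sum = 0;
        // Average the last k closing prices as the next-day forecast
        for (int i = prices.length - k; i < prices.length; i++) {
            sum += prices[i];
        }
        double forecast = sum / k;
        System.out.println(forecast); // 105.0
    }
}
```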


Step 6: Start the timer before execution.

● Record start time.

Step 7: Perform steps 2-5 and compute results.

● Find array A.

● Segment array A into B.

● Connect ropes optimally.


● Predict stock prices.

Step 8: Stop the timer, calculate execution time, and measure memory usage.

● Record end time.

● Compute execution time in nanoseconds.

● Compute approximate memory used in bytes.


Flowchart:
Program:

import java.util.*;

public class ComplexProblemSolution {

    public static int[] constructArrayA(int[] B) {
        int n = B.length;
        int[] A = new int[n];
        A[0] = B[0];
        for (int i = 1; i < n; i++) {
            A[i] = Math.max(B[i], A[i - 1]);
        }
        return A;
    }

    public static List<List<Integer>> segmentArray(int[] A) {
        List<List<Integer>> segments = new ArrayList<>();
        List<Integer> segment = new ArrayList<>();
        segment.add(A[0]);
        for (int i = 1; i < A.length; i++) {
            if (A[i] != A[i - 1]) {
                segments.add(new ArrayList<>(segment));
                segment.clear();
            }
            segment.add(A[i]);
        }
        segments.add(segment);
        return segments;
    }

    public static int minCostToConnectRopes(int[] ropes) {
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();
        for (int rope : ropes) {
            minHeap.add(rope);
        }
        int totalCost = 0;
        while (minHeap.size() > 1) {
            int cost = minHeap.poll() + minHeap.poll();
            totalCost += cost;
            minHeap.add(cost);
        }
        return totalCost;
    }

    public static double predictStockPrice(int[] prices, int k) {
        if (prices.length < k) return -1;
        double sum = 0;
        for (int i = prices.length - k; i < prices.length; i++) {
            sum += prices[i];
        }
        return sum / k;
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);

        System.out.print("Enter the number of elements in B: ");
        int n = sc.nextInt();
        int[] B = new int[n];
        System.out.println("Enter elements of B:");
        for (int i = 0; i < n; i++) {
            B[i] = sc.nextInt();
        }

        long startTime = System.nanoTime();
        Runtime runtime = Runtime.getRuntime();
        long memoryBefore = runtime.totalMemory() - runtime.freeMemory();

        int[] A = constructArrayA(B);
        List<List<Integer>> segments = segmentArray(A);

        System.out.print("Enter the number of ropes: ");
        int m = sc.nextInt();
        int[] ropes = new int[m];
        System.out.println("Enter rope lengths:");
        for (int i = 0; i < m; i++) {
            ropes[i] = sc.nextInt();
        }
        int minCost = minCostToConnectRopes(ropes);

        System.out.print("Enter the number of stock prices to consider: ");
        int k = sc.nextInt();
        int[] prices = new int[k];
        System.out.println("Enter last " + k + " days' stock prices:");
        for (int i = 0; i < k; i++) {
            prices[i] = sc.nextInt();
        }
        double predictedPrice = predictStockPrice(prices, k);

        long endTime = System.nanoTime();
        long executionTime = endTime - startTime;
        long memoryAfter = runtime.totalMemory() - runtime.freeMemory();
        long memoryUsed = memoryAfter - memoryBefore;

        System.out.println("\nConstructed Array A: " + Arrays.toString(A));
        System.out.println("Segmented Array A into contiguous pieces: " + segments);
        System.out.println("Minimum cost to connect ropes: " + minCost);
        System.out.println("Predicted stock price for next day: " + predictedPrice);
        System.out.println("Execution Time: " + executionTime + " nanoseconds");
        System.out.println("Approximate Memory Used: " + memoryUsed + " bytes");

        sc.close();
    }
}

Input:
Enter the number of elements in B: 5
Enter elements of B:
10 5 8 12 15
Enter the number of ropes: 4
Enter rope lengths:
4 3 2 6
Enter the number of stock prices to consider: 3
Enter last 3 days' stock prices:
100 105 110

Output:
Constructed Array A: [10, 10, 10, 12, 15]
Segmented Array A into contiguous pieces: [[10, 10, 10], [12], [15]]
Minimum cost to connect ropes: 29
Predicted stock price for next day: 105.0
Execution Time: 142387 nanoseconds
Approximate Memory Used: 1140 bytes

Wrong Input:
Enter the number of elements in B: 5
Enter elements of B:
4 3 5 2 6
Enter the number of ropes: 4
Enter rope lengths:
8 3 4 5
Enter the number of stock prices to consider: 3
Enter last 3 days' stock prices:
100 105 110

Output:
Constructed Array A: [4, 4, 5, 5, 6]
Segmented Array A into contiguous pieces: [[4, 4], [5, 5], [6]]
Minimum cost to connect ropes: 39
Predicted stock price for next day: 105.0
Execution Time: 174000 nanoseconds
Approximate Memory Used: 32768 bytes

Worst-Case and Average-Case Time Complexity Analysis

1. Worst-Case Time Complexity
In the worst case, all operations take the longest possible time. The key contributing factors are:

● Constructing array A from B → O(n)

● Segmenting array A into contiguous pieces → O(n)

● Connecting m ropes using a Min-Heap:

o Building the Min-Heap: O(m)

o Extract-Min and Insert (heap operations for m−1 merges): O(m log m)

● Predicting stock prices → O(k)

Since heap operations are dominant when m ≈ n, the worst-case time complexity becomes:

O(n + m log m + k)

If m is approximately equal to n, the dominant term is O(n log n).

2. Average-Case Time Complexity

In a typical scenario where data distribution is relatively balanced:

● Constructing A and segmenting it still takes O(n).

● Heap operations (rope merging) generally take O(m log m), but with random rope lengths the cost may not always reach the full O(m log m).

● Stock price prediction remains O(k).

For most practical cases where m is not extremely large compared to n, the complexity simplifies to:

O(n + m log m)

If m is significantly smaller than n, then heap operations become negligible and the time complexity approaches O(n).

Final Summary

● Worst-Case Complexity: O(n + m log m + k) ≈ O(n log n) when m ≈ n.

● Average-Case Complexity: O(n + m log m); if m is much smaller than n, it simplifies to O(n).
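The nanosecond timings and byte counts quoted in these runs follow the same JDK measurement pattern used by every program in this manual; a minimal standalone sketch of that pattern (the loop is an arbitrary placeholder workload, and the class name is illustrative):

```java
public class MeasureDemo {
    public static void main(String[] args) {
        Runtime runtime = Runtime.getRuntime();
        long memBefore = runtime.totalMemory() - runtime.freeMemory();
        long start = System.nanoTime();

        // placeholder workload being measured
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;

        long elapsedNs = System.nanoTime() - start;
        long memUsed = (runtime.totalMemory() - runtime.freeMemory()) - memBefore;
        System.out.println("Workload result: " + sum);
        System.out.println("Execution Time: " + elapsedNs + " nanoseconds");
        System.out.println("Approximate Memory Used: " + memUsed + " bytes");
    }
}
```

Note that freeMemory-based deltas are only approximate: garbage collection between the two samples can make the reported byte count fluctuate, which is why the runs above label it "Approximate".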
WEEK 11:
Week 11: Back Tracking 2:

Design and perform analysis (time and space complexity) for: dividing an integer array of N
elements into K non-empty subsets such that the sum of elements in each subset is the same;
printing all the longest common subsequences of two strings in lexicographical order; and finding all
possible palindromic partitions of a given string, ensuring every element is part of exactly one
partition.
1. Dividing an Integer Array into K Subsets with Equal Sum:

Problem Description:
We need to divide an array of N integers into K non-empty subsets such that the sum of
elements in each subset is the same.
Algorithm:

Step-1: Check Validity

• Compute totalSum = sum(arr).

• If totalSum % K != 0, return false (partitioning is impossible).
• Target sum per subset = totalSum / K.
Step-2: Sort & Use Backtracking

• Sort arr (the program sorts in ascending order and rejects early if the largest element exceeds the target).

• Create a boolean used[] to track included elements.
• Use recursion to assign elements to K subsets, ensuring each subset reaches the target sum.

Step-3: Recursive Function:

• If K == 0, return true (all K subsets have been completed).

• If currentSubsetSum == targetSum, move to the next subset.
• Try placing each unused element into the current subset and backtrack if needed.
Flowchart:
Program:

import java.util.*;

public class PartitionKSubsets {

public static boolean canPartitionKSubsets(int[] nums, int k) {


int sum = Arrays.stream(nums).sum();
if (sum % k != 0) return false;

int target = sum / k;


Arrays.sort(nums);
int n = nums.length;

if (nums[n - 1] > target) return false;

boolean[] used = new boolean[n];


return backtrack(nums, used, k, 0, 0, target);
}

private static boolean backtrack(int[] nums, boolean[] used, int k, int start, int currSum, int target) {
if (k == 0) return true;
if (currSum == target) return backtrack(nums, used, k - 1, 0, 0, target);

for (int i = start; i < nums.length; i++) {


if (!used[i] && currSum + nums[i] <= target) {
used[i] = true;
if (backtrack(nums, used, k, i + 1, currSum + nums[i], target)) return true;
used[i] = false;
}
}
return false;
}

public static void main(String[] args) {


Scanner scanner = new Scanner(System.in);

System.out.print("Enter number of elements: ");


int n = scanner.nextInt();
int[] nums = new int[n];

System.out.println("Enter the elements:");


for (int i = 0; i < n; i++) {
nums[i] = scanner.nextInt();
}

System.out.print("Enter number of subsets (K): ");


int k = scanner.nextInt();

Runtime runtime = Runtime.getRuntime();


long memoryBefore = runtime.totalMemory() - runtime.freeMemory();
long startTime = System.nanoTime();

boolean possible = canPartitionKSubsets(nums, k);


long endTime = System.nanoTime();
long memoryAfter = runtime.totalMemory() - runtime.freeMemory();
long timeTaken = endTime - startTime;
long memoryUsed = memoryAfter - memoryBefore;

System.out.println(possible ? "Possible to partition" : "Not possible to partition");


System.out.println("Execution Time: " + timeTaken + " nanoseconds");
System.out.println("Approximate Space Used: " + memoryUsed + " bytes");

scanner.close();
}
}
Incorrect Input:
Enter number of elements: 5
Enter the elements:
4 3 2 3 5
Enter number of subsets (K): 4
Output:
Not possible to partition
Execution Time: 3283799 nanoseconds
Approximate Space Used: 928 bytes
Correct Example Input :

Enter number of elements: 4

Enter the elements:

4 3 2 3

Enter number of subsets (K): 2

Expected Output :

Possible to partition
Execution Time: 2673400 nanoseconds
Approximate Space Used: 800 bytes

ANALYSIS:

Time Complexity:

1. Best Case: O(N)

If the total sum is not divisible by K (detected in one pass), or the array is already balanced into
K equal-sum subsets, the function completes quickly with minimal backtracking.

Input:

Enter number of elements: 4

Enter the elements:
2 2 2 2
Enter number of subsets (K): 2
Output:
Possible to partition
2. Average Case: O(2^(N/2))

Some pruning occurs due to early termination when invalid subsets are detected.
Input:
Enter number of elements: 4
Enter the elements:
1 5 11 5
Enter number of subsets (K): 2
Output:
Possible to partition
3. Worst Case: O(2^N)

In the worst case, the function explores all possible subsets due to backtracking.
Input:
Enter number of elements: 5
Enter the elements:
3 3 3 3 3
Enter number of subsets (K): 3
Output:
Not possible to partition

Space Complexity:

1. Auxiliary Space: O(N)

• Used for the used[] boolean array tracking which elements are assigned to subsets.
2. Recursive Call Stack Space:
O(N) in the worst case.

Example:
For arr = [4, 3, 2, 3, 5, 2, 1] and K = 4, early pruning may allow quick partitioning, making it closer to
the average case.
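This example can be sanity-checked with the partition logic above, condensed here into a standalone class (sum = 20, target = 5, one valid split being {5}, {4,1}, {3,2}, {3,2}):

```java
import java.util.Arrays;

public class PartitionExample {
    // Same logic as canPartitionKSubsets above, condensed for a standalone check
    static boolean canPartition(int[] nums, int k) {
        int sum = Arrays.stream(nums).sum();
        if (sum % k != 0) return false;
        int target = sum / k;
        Arrays.sort(nums);
        if (nums[nums.length - 1] > target) return false;
        return backtrack(nums, new boolean[nums.length], k, 0, 0, target);
    }

    static boolean backtrack(int[] nums, boolean[] used, int k, int start, int currSum, int target) {
        if (k == 0) return true;
        if (currSum == target) return backtrack(nums, used, k - 1, 0, 0, target);
        for (int i = start; i < nums.length; i++) {
            if (!used[i] && currSum + nums[i] <= target) {
                used[i] = true;
                if (backtrack(nums, used, k, i + 1, currSum + nums[i], target)) return true;
                used[i] = false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(canPartition(new int[]{4, 3, 2, 3, 5, 2, 1}, 4)); // true
    }
}
```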
2. Printing All Longest Common Subsequences (LCS) in Lexicographical Order

Problem Definition:

Given two strings s1 and s2, find all longest common subsequences (LCS) and print them in
lexicographical order.

Algorithm:

Step-1:Compute LCS Length using DP

• Create dp[m+1][n+1] where dp[i][j] stores the LCS length of prefixes s1[0…i-1] and s2[0…j-1].

Step-2:Backtrack to Generate All LCSs

• Use recursion to reconstruct all LCSs from dp[m][n].


• Store results in a set to remove duplicates.
• Sort the set lexicographically.
Flowchart:
Program:

import java.util.*;

public class PrintAllLCS {

public static Set<String> findAllLCS(String s1, String s2) {


int m = s1.length(), n = s2.length();
int[][] dp = new int[m + 1][n + 1];

for (int i = 1; i <= m; i++) {


for (int j = 1; j <= n; j++) {
if (s1.charAt(i - 1) == s2.charAt(j - 1)) {
dp[i][j] = dp[i - 1][j - 1] + 1;
} else {
dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]);
}
}
}

return backtrackLCS(s1, s2, m, n, dp);


}

private static Set<String> backtrackLCS(String s1, String s2, int i, int j, int[][] dp) {
if (i == 0 || j == 0)
return new HashSet<>(Collections.singleton(""));

if (s1.charAt(i - 1) == s2.charAt(j - 1)) {


Set<String> lcsSet = backtrackLCS(s1, s2, i - 1, j - 1, dp);
Set<String> result = new TreeSet<>();
for (String lcs : lcsSet)
result.add(lcs + s1.charAt(i - 1));
return result;
}

Set<String> result = new TreeSet<>();


if (dp[i - 1][j] >= dp[i][j - 1])
result.addAll(backtrackLCS(s1, s2, i - 1, j, dp));
if (dp[i][j - 1] >= dp[i - 1][j])
result.addAll(backtrackLCS(s1, s2, i, j - 1, dp));
return result;
}

public static void main(String[] args) {


Scanner scanner = new Scanner(System.in);

System.out.print("Enter first string: ");


String s1 = scanner.next();

System.out.print("Enter second string: ");


String s2 = scanner.next();

Runtime runtime = Runtime.getRuntime();


long memoryBefore = runtime.totalMemory() - runtime.freeMemory();
long startTime = System.nanoTime();
Set<String> lcsSet = findAllLCS(s1, s2);

long endTime = System.nanoTime();


long memoryAfter = runtime.totalMemory() - runtime.freeMemory();

long timeTaken = endTime - startTime;


long memoryUsed = memoryAfter - memoryBefore;

System.out.println("\nLongest Common Subsequences in Lexicographical Order:");


for (String lcs : lcsSet)
System.out.println(lcs);

System.out.println("\nExecution Time: " + timeTaken + " nanoseconds");


System.out.println("Approximate Space Used: " + memoryUsed + " bytes");

scanner.close();
}
}
INPUT:
Enter first string: abcabcaa
Enter second string: acbacba
OUTPUT:
Longest Common Subsequences in Lexicographical Order:
ababa
abaca
abcba
acaba
acaca
acbaa
acbca

Execution Time: 3298041 nanoseconds


Approximate Space Used: 12568 bytes

Wrong Input:
Enter first string: babdc
Enter second string: abac
Output:
Longest Common Subsequences in Lexicographical Order:
abc
bac

Execution Time: 1489321 nanoseconds


Approximate Space Used: 11376 bytes
ANALYSIS:

Time Complexity:

1. Best Case: O(MN)

If there is only one LCS, only DP table computation is needed.


Input:
Enter first string: abcdef
Enter second string: abcdef
Output:
Longest Common Subsequences in Lexicographical Order:
abcdef
2. Average Case: O(MN + 2^min(M,N))

DP table is built, followed by backtracking generating multiple sequences.


Input:
Enter first string: abcabcaa
Enter second string: acbacba
Output:
Longest Common Subsequences in Lexicographical Order:
ababa
abaca
abcba
acaba
acaca
acbaa
acbca

3. Worst Case: O(MN + 2^min(M,N))

When all subsequences are different, backtracking explores every possibility.
Input:
Enter first string: abc
Enter second string: def
Output:
Longest Common Subsequences in Lexicographical Order:
(No common subsequences)
Space Complexity:

Auxiliary Space: O(MN)

• The DP table requires O(MN) space.

Result Storage: O(K · L)

• Where K is the number of LCS sequences and L is the length of each LCS.

Example:
For s1 = "abcabcaa" and s2 = "acbacba", multiple LCS exist, requiring exponential
backtracking in the worst case.
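A tiny standalone check of the backtracking idea (a condensed version of the findAllLCS/backtrackLCS pair above; for s1 = "ab" and s2 = "ba" the LCS length is 1, and both "a" and "b" are LCSs, printed in sorted order):

```java
import java.util.*;

public class LCSDemo {
    // Condensed version of the DP + backtracking approach defined above
    static Set<String> allLCS(String s1, String s2) {
        int m = s1.length(), n = s2.length();
        int[][] dp = new int[m + 1][n + 1];
        for (int i = 1; i <= m; i++)
            for (int j = 1; j <= n; j++)
                dp[i][j] = s1.charAt(i - 1) == s2.charAt(j - 1)
                        ? dp[i - 1][j - 1] + 1
                        : Math.max(dp[i - 1][j], dp[i][j - 1]);
        return backtrack(s1, s2, m, n, dp);
    }

    static Set<String> backtrack(String s1, String s2, int i, int j, int[][] dp) {
        if (i == 0 || j == 0) return new TreeSet<>(Collections.singleton(""));
        Set<String> result = new TreeSet<>();
        if (s1.charAt(i - 1) == s2.charAt(j - 1)) {
            for (String lcs : backtrack(s1, s2, i - 1, j - 1, dp))
                result.add(lcs + s1.charAt(i - 1));
            return result;
        }
        if (dp[i - 1][j] >= dp[i][j - 1]) result.addAll(backtrack(s1, s2, i - 1, j, dp));
        if (dp[i][j - 1] >= dp[i - 1][j]) result.addAll(backtrack(s1, s2, i, j - 1, dp));
        return result;
    }

    public static void main(String[] args) {
        System.out.println(allLCS("ab", "ba")); // [a, b]
    }
}
```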
3. Finding All Possible Palindromic Partitions

Problem Definition:

Given a string s, find all possible palindromic partitions ensuring every element is part of exactly one
partition.

Algorithm:

Step-1:Backtrack to Generate Partitions

• Start from index 0 and try forming palindromic substrings.


• If a valid palindrome is found, recursively check the remaining substring.
• Continue until the entire string is partitioned
Step-2: Check for Palindrome Efficiently

• Use a helper function isPalindrome(s, start, end).


• Use two-pointer technique to check in O(N).
Step-3:Store and Print Partitions

• Store valid partitions in a result list.


• Print all partitions where each character is used exactly once.
Flowchart:
Program:

import java.util.*;

public class PalindromicPartitions {

public static List<List<String>> partition(String s) {


List<List<String>> result = new ArrayList<>();
backtrack(s, 0, new ArrayList<>(), result);
return result;
}

private static void backtrack(String s, int start, List<String> current, List<List<String>> result) {
if (start == s.length()) {
result.add(new ArrayList<>(current));
return;
}
for (int end = start + 1; end <= s.length(); end++) {
String substring = s.substring(start, end);
if (isPalindrome(substring)) {
current.add(substring);
backtrack(s, end, current, result);
current.remove(current.size() - 1);
}
}
}

private static boolean isPalindrome(String s) {


int left = 0, right = s.length() - 1;
while (left < right) {
if (s.charAt(left++) != s.charAt(right--)) return false;
}
return true;
}

public static void main(String[] args) {


Scanner scanner = new Scanner(System.in);
System.out.print("Enter a string: ");
String s = scanner.next();
scanner.close();

Runtime runtime = Runtime.getRuntime();


long memoryBefore = runtime.totalMemory() - runtime.freeMemory();
long startTime = System.nanoTime();

List<List<String>> partitions = partition(s);

long endTime = System.nanoTime();


long memoryAfter = runtime.totalMemory() - runtime.freeMemory();
long timeTaken = endTime - startTime;
long memoryUsed = memoryAfter - memoryBefore;

System.out.println("\nAll possible palindromic partitions:");


for (List<String> partition : partitions)
System.out.println(partition);
System.out.println("\nExecution Time: " + timeTaken + " nanoseconds");
System.out.println("Approximate Space Used: " + memoryUsed + " bytes");
}
}

Input:

Enter a string: aab

Output:

All possible palindromic partitions:

[a, a, b]

[aa, b]

Execution Time: 153211 nanoseconds

Approximate Space Used: 4248 bytes

Wrong Input:

Enter a string: ababa

Output:

All possible palindromic partitions:

[a, b, a, b, a]

[a, b, aba]

[a, bab, a]

[aba, b, a]

[ababa]

Execution Time: 162745 nanoseconds

Approximate Space Used: 5120 bytes


ANALYSIS:

Time Complexity:

1. Best Case: O(N²)

If the string contains no multi-character palindromic substrings (all characters distinct), only the single-character partition is produced; each longer substring fails the palindrome check immediately.

Input:
Enter a string: back
Output:
All possible palindromic partitions:
[b, a, c, k]

2. Average Case: O(N · 2^N)

Recursive backtracking generates valid partitions efficiently.

Input:
Enter a string: aab
Output:
All possible palindromic partitions:
[a, a, b]
[aa, b]

3. Worst Case: O(N · 2^N)

If every substring is a palindrome (e.g., all identical characters), all 2^(N−1) partitions are generated.

Input:
Enter a string: aaaa
Output:
All possible palindromic partitions:
[a, a, a, a]
[a, a, aa]
[a, aa, a]
[a, aaa]
[aa, a, a]
[aa, aa]
[aaa, a]
[aaaa]
Space Complexity Analysis:

• Auxiliary Space: O(N)

o To store temporary partitions in recursive calls.

• Result Storage: O(N · 2^N)

o Storing all valid partitions.

Example:

Input:

s = "aab"

Output:

[ ["a", "a", "b"], ["aa", "b"] ]


WEEK 13: Branch and Bound 2
Aim: Design and perform analysis (time and space complexity) to generate all binary strings of length N using the branch
and bound technique, while also performing feature selection to identify essential features in a dataset, employing
the branch and bound method for both tasks.
ALGORITHM
Step 1: Initialization:
● Start with an empty binary string current_string = "".
● Start with an empty set of selected features current_set = {}.
● Initialize best_subset as None.
● Initialize best_accuracy as 0.

Step 2: Branching:
For the binary string generation:
● Append '0' to current_string and explore this branch.
● Append '1' to current_string and explore this branch.

For feature selection:


● Include the current feature in current_set and explore this branch.
● Exclude the current feature from current_set and explore this branch.

Step 3: Bounding:
For the binary string generation:
● Check if the length of current_string is equal to N. If yes, consider this a valid binary string.

For feature selection:


● Evaluate the performance of the current subset of features current_set using a predefined criterion (e.g.,
accuracy of a machine learning model).
● If the performance of current_set does not improve upon the best performance (best_accuracy), prune this
branch (i.e., discard it).

Step4: Repeat:
Continue branching and bounding for both binary string generation and feature selection:
For binary string generation, append '0' or '1' to the current string until the length is N.
For feature selection, include or exclude each feature until all subsets are evaluated.
Step5: Termination:
For binary string generation:
● Stop when all possible binary strings of length N are explored.

For feature selection:


● Stop when all possible subsets of features have been evaluated.
● Update best_subset with the subset of features that optimizes the performance criterion.
● Update best_accuracy with the highest accuracy achieved.
Flowchart:
PROGRAM:
import java.util.*;

public class BranchAndBoundDP {

public static List<String> generateBinaryStrings(int N) {


List<String> result = new ArrayList<>();
branchBinary("", N, result);
return result;
}

private static void branchBinary(String currentString, int N, List<String> result) {


if (currentString.length() == N) {
result.add(currentString);
return;
}
branchBinary(currentString + "0", N, result);
branchBinary(currentString + "1", N, result);
}

private static final int[][] dataset = {


{1, 0, 1, 0},
{0, 1, 0, 1},
{1, 1, 1, 0},
{0, 0, 0, 1}
};
private static final int[] labels = {1, 0, 1, 0};

private static double bestAccuracy = 0.0;


private static Set<Integer> bestSubset = new HashSet<>();
private static final Map<String, Double> memo = new HashMap<>();

public static Set<Integer> selectFeatures(int numFeatures) {


bestAccuracy = 0.0;
bestSubset.clear();
memo.clear();
branchFeaturesDP(new HashSet<>(), numFeatures);
return bestSubset;
}

private static void branchFeaturesDP(Set<Integer> currentSet, int numFeatures) {


if (currentSet.size() == numFeatures) {
String key = setToString(currentSet);
if (memo.containsKey(key)) return;

double accuracy = evaluateFeatureSet(currentSet);


memo.put(key, accuracy);

if (accuracy > bestAccuracy) {


bestAccuracy = accuracy;
bestSubset = new HashSet<>(currentSet);
}
return;
}

for (int feature = 0; feature < dataset[0].length; feature++) {


if (!currentSet.contains(feature)) {
currentSet.add(feature);
branchFeaturesDP(currentSet, numFeatures);
currentSet.remove(feature);
}
}
}

private static String setToString(Set<Integer> set) {


List<Integer> list = new ArrayList<>(set);
Collections.sort(list);
return list.toString();
}

private static double evaluateFeatureSet(Set<Integer> featureSet) {


return featureSet.size() * 0.25;
}

public static void main(String[] args) {


Scanner scanner = new Scanner(System.in);

System.out.print("Enter the length of the binary string (N): ");


int N = scanner.nextInt();

System.out.print("Enter the number of features to select (numFeatures): ");


int numFeatures = scanner.nextInt();

Runtime runtime = Runtime.getRuntime();


runtime.gc();
long memoryBefore = runtime.totalMemory() - runtime.freeMemory();
long startTime = System.nanoTime();

List<String> binaryStrings = generateBinaryStrings(N);


Set<Integer> selectedFeatures = selectFeatures(numFeatures);

long endTime = System.nanoTime();


long memoryAfter = runtime.totalMemory() - runtime.freeMemory();

long timeTaken = endTime - startTime;


long memoryUsed = memoryAfter - memoryBefore;

System.out.println("\nBinary strings of length " + N + ": " + binaryStrings);


System.out.println("Selected feature subset: " + selectedFeatures);
System.out.println("Best accuracy: " + bestAccuracy);
System.out.println("\nExecution Time: " + timeTaken + " nanoseconds");
System.out.println("Approximate Space Used: " + memoryUsed + " bytes");

scanner.close();
}
}
INPUT:
Enter the length of the binary string (N): 2
Enter the number of features to select (numFeatures): 2

Output:
Binary strings of length 2: [00, 01, 10, 11]
Selected feature subset: [0, 1]
Best accuracy: 0.5

Execution Time: 391274 nanoseconds


Approximate Space Used: 15432 bytes

Wrong Input:
Enter the length of the binary string (N): 5
Enter the number of features to select (numFeatures): 3

Output:
Binary strings of length 5: [00000, 00001, 00010, ..., 11111]
Selected feature subset: [0, 1, 2]
Best accuracy: 0.75
Execution Time: 5245398 nanoseconds
Approximate Space Used: 45312 bytes

Worst-Case Time Complexity:

● Generating Binary Strings of Length N:

o Branching at each step results in 2^N leaves.

o Complexity: O(2^N).

● Feature Selection:

o Branching explores 2^k subsets for k features.

o Complexity: O(2^k).

Average-Case Time Complexity:

● For both tasks, the exponential growth persists, as the branches explored are still subsets of 2^N or 2^k:

o Binary String Generation: O(2^N).

o Feature Selection: O(2^k).

Best-Case Time and Space Complexity:

Binary String Generation:
● Time Complexity: O(2^N) (all binary strings of length N are generated).

● Space Complexity: O(N · 2^N) (store all strings of length N).

Feature Selection:
● Time Complexity: O(2^m) (evaluate all subsets of m features).

● Space Complexity: O(m · 2^m) (store all feature subsets).
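The O(2^N) growth can be observed directly by counting the strings produced by the branching logic of generateBinaryStrings (condensed here into a standalone class; the class name is illustrative):

```java
import java.util.*;

public class BinaryGrowthDemo {
    // Same two-way branching as branchBinary in the program above
    static void branch(String cur, int n, List<String> out) {
        if (cur.length() == n) {
            out.add(cur);
            return;
        }
        branch(cur + "0", n, out); // branch on '0'
        branch(cur + "1", n, out); // branch on '1'
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 4; n++) {
            List<String> result = new ArrayList<>();
            branch("", n, result);
            // Count doubles with each extra bit: 2, 4, 8, 16
            System.out.println("N=" + n + " -> " + result.size() + " strings");
        }
    }
}
```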
