Advanced Algorithmic Problem Solving (R1UC601B)
AAPS Practice Questions for ETE
1. Analyse how different hashing methods can be applied to optimize search and
sort operations.
2. Analyse the performance and application scenarios of binomial and
Fibonacci heaps.
3. Investigate the disjoint set union data structure and its operations, evaluating
its efficiency in different applications.
4. Compare and contrast Depth-First Search (DFS) and Breadth-First Search (BFS)
in various contexts.
Difference Between BFS and DFS:

Parameter             | BFS                                               | DFS
Data Structure used   | BFS (Breadth First Search) uses a Queue data structure for finding the shortest path. | DFS (Depth First Search) uses a Stack data structure.
Conceptual Difference | BFS builds the tree level by level.               | DFS builds the tree sub-tree by sub-tree.
Approach used         | It works on the concept of FIFO (First In First Out). | It works on the concept of LIFO (Last In First Out).
Minimum spanning tree algorithms are used to find the subset of edges in a graph that connects all nodes together with the minimum total weight or cost. Some common minimum spanning tree algorithms are Kruskal's algorithm and Prim's algorithm.
Shortest path algorithms and minimum spanning tree algorithms have many applications in real-world problems, such as network routing, road and transport planning, and the design of electrical and communication networks.
Articulation points and bridges are crucial components in network design and
reliability, playing a vital role in maintaining network connectivity and
preventing fragmentation.
Articulation Points:
Bridges:
Significance of Bridges:
Real-World Applications:
1. Divide: Divide each matrix into four quadrants of roughly equal size.
2. Conquer: Compute seven products of these quadrants using Strassen's formulas:
   M1 = (A11 + A22)(B11 + B22)
   M2 = (A21 + A22)B11
   M3 = A11(B12 - B22)
   M4 = A22(B21 - B11)
   M5 = (A11 + A12)B22
   M6 = (A21 - A11)(B11 + B12)
   M7 = (A12 - A22)(B21 + B22)
3. Combine: Combine the seven products to form the final product matrix:
   C11 = M1 + M4 - M5 + M7
   C12 = M3 + M5
   C21 = M2 + M4
   C22 = M1 - M2 + M3 + M6
8 Analyse the time complexity and efficiency of algorithms based on divide and
conquer, such as counting inversions and finding the closest pair of points.
Divide and Conquer is a popular algorithm design paradigm that breaks down
complex problems into smaller sub-problems, solves them recursively, and
combines the solutions to obtain the final result. In this analysis, we'll examine
the time complexity and efficiency of two classic divide and conquer
algorithms: counting inversions and finding the closest pair of points.
1. Counting Inversions
Problem Statement: Given an array of n integers, count the number of
inversions, i.e., pairs of elements that are in the wrong order.
Algorithm:
1. Divide: Divide the array into two halves, each of size n/2.
2. Conquer: Recursively count the inversions in each half.
3. Combine: Combine the results by counting the inversions between the
two halves.
Time Complexity:
Let T(n) be the time complexity of the algorithm. The recurrence relation is T(n) = 2T(n/2) + O(n), since the merge step that counts cross-inversions is linear; by the Master Theorem this solves to O(n log n).
Efficiency Analysis:
The counting inversions algorithm has a time complexity of O(n log n), which
is much faster than the naive approach of iterating over all pairs of elements,
which has a time complexity of O(n^2). The divide and conquer approach
reduces the problem size by half at each recursive step, leading to a
logarithmic reduction in the number of operations.
Algorithm:
1. Divide: Divide the points into two halves, each containing n/2 points.
2. Conquer: Recursively find the closest pair of points in each half.
3. Combine: Combine the results by finding the closest pair of points
between the two halves.
Time Complexity:
The time complexity of the closest pair of points algorithm can be analyzed
using the Master Theorem.
Let T(n) be the time complexity of the algorithm. The recurrence relation is:
T(n) = 2T(n/2) + O(n), assuming the points are pre-sorted by y-coordinate so that the strip step is linear; by the Master Theorem this solves to O(n log n). (If the strip is re-sorted at every level, the combine step costs O(n log n) and the total becomes O(n log^2 n).)
Efficiency Analysis:
The closest pair of points algorithm has a time complexity of O(n log n), which
is much faster than the naive approach of iterating over all pairs of points,
which has a time complexity of O(n^2). The divide and conquer approach
reduces the problem size by half at each recursive step, leading to a
logarithmic reduction in the number of operations.
Advantages:
1. Efficient: Divide and conquer algorithms are often more efficient than
naive approaches, with a lower time complexity.
2. Scalable: Divide and conquer algorithms can be easily parallelized,
making them suitable for large datasets.
3. Easy to Implement: Divide and conquer algorithms are often easier to
implement than other algorithm design paradigms.
Disadvantages:
Scenarios:
10 Judge the suitability of hashing methods for optimization problems and justify
the chosen method.
Hashing by Division
In this method, the hash function is h(k) = k mod m, where m is the table size. This method is fast and simple, but it has some limitations. The table size m should not be a power of 2 (otherwise the hash depends only on the low-order bits of the key), and it is better to choose a prime number to minimize collisions. However, even with a prime table size, collisions can still occur if the key distribution is not uniform.
Hashing by Multiplication
In this method, the hash function is h(k) = floor(m * (k * c mod 1)), where c is a constant with 0 < c < 1.
This method is more robust and can handle non-uniform key distributions. The
constant c should be chosen carefully to minimize collisions. A good choice
for c is a fraction of the form s / 2^w, where s is an integer and w is the word
size of the machine.
In optimization problems, we often deal with large datasets and complex key
distributions. Therefore, I recommend using the hashing by
multiplication method, which is more robust and can handle non-uniform key
distributions.
11 Judge the effectiveness of shortest path algorithms and minimum spanning tree
algorithms in solving real-world problems.
Real-World Applications:
When faced with a graph problem, choosing the right algorithm is crucial to
achieve efficient and accurate results. The choice of algorithm depends on
various problem constraints and requirements. Here's a justification for
selecting graph algorithms based on common constraints and requirements:
Problem Constraints:
1. Graph Size: For large graphs, algorithms with a lower time complexity
are preferred to reduce computational time.
Example: Using Breadth-First Search (BFS) or Depth-First
Search (DFS) for traversing large graphs, as they have a lower
time complexity (O(|E| + |V|)) compared to algorithms
like Dijkstra's (O(|E| + |V|log|V|)).
2. Graph Type: Different algorithms are suited for different graph types
(e.g., weighted, unweighted, directed, undirected).
Example: Using Dijkstra's algorithm for finding the shortest
path in a weighted graph, while BFS is suitable for unweighted
graphs.
3. Memory Constraints: Algorithms with lower memory requirements are
preferred for systems with limited memory.
Example: Using Iterative Deepening Depth-First Search
(IDDFS), which has a lower memory requirement compared
to Recursive DFS.
Problem Requirements:
1. Optimality: Algorithms that guarantee optimality are preferred when
the problem requires finding the shortest path or minimum spanning
tree.
Example: Using Dijkstra's algorithm or A* algorithm for finding
the shortest path, as they guarantee optimality.
2. Approximation: Algorithms that provide a good approximation are
preferred when optimality is not required or is computationally
expensive.
Example: Using heuristic or approximate methods (such as greedy best-first search, or A* with an inadmissible heuristic) when an exact computation is too expensive; for instance, the Floyd-Warshall algorithm finds exact shortest paths between all pairs of nodes but costs O(V^3), which may be prohibitive on large graphs.
3. Scalability: Algorithms that can handle large graphs and are scalable
are preferred for big data applications.
Example: Using Graph Processing Systems like Apache
Giraph or GraphX, which are designed to handle large-scale
graph processing.
Additional Considerations:
Example Justification:
Suppose we need to find the shortest path between two nodes in a large,
weighted, directed graph. Based on the problem constraints and requirements,
we would choose Dijkstra's algorithm because it guarantees an optimal shortest
path on non-negative edge weights and, with a priority-queue implementation,
runs in O(|E| + |V|log|V|), which scales to large graphs.
12 Create a system that dynamically selects the appropriate search method based on
the dataset characteristics.
Components:
Selection Rules:
1. Graph Size: Small graphs (< 10,000 nodes) -> BFS/DFS, Medium
graphs (10,000 - 100,000 nodes) -> Dijkstra's/A*, Large graphs (>
100,000 nodes) -> Graph Processing Systems.
2. Graph Type: Weighted graphs -> Dijkstra's/A*, Unweighted graphs ->
BFS/DFS.
3. Node/Edge Distributions: Power-law distributions -> Graph
Processing Systems, Uniform distributions -> BFS/DFS.
4. Connectivity/Clustering Coefficients: Highly connected graphs ->
Graph Processing Systems, Low connectivity graphs -> BFS/DFS.
Heuristics:
Workflow:
Advantages:
Here are the C codes for greedy algorithms tailored to solve specific
optimization problems like activity selection and task scheduling:
Time Complexity: O(n log n) when the input activities must first be sorted by
finish time; O(n) when the input activities are guaranteed to be already sorted.
Auxiliary Space: O(1)
14 Apply the binary search technique to find the first occurrence of a number in
a sorted array.
#include <stdio.h>
int firstOccurrence(int arr[], int n, int x) {
    int low = 0, high = n - 1;
    int result = -1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[mid] == x) {
            result = mid;
            high = mid - 1; // record the match, keep searching the left half
        } else if (arr[mid] < x) {
            low = mid + 1;
        } else {
            high = mid - 1;
        }
    }
    return result;
}
int main() {
    int arr[] = {1, 2, 3, 4, 5, 5, 5, 6, 7, 8, 9};
    int n = sizeof(arr) / sizeof(arr[0]);
    int x = 5;
    int result = firstOccurrence(arr, n, x);
    if (result != -1) {
        printf("First occurrence of %d is at index %d\n", x, result);
    } else {
        printf("%d is not found in the array\n", x);
    }
    return 0;
}
Here's an example of how to apply the greedy technique to solve the activity
selection problem:
Given a set of activities and their start and finish times, select the maximum
number of activities that can be performed by a single person, assuming that a
person can only work on a single activity at a time.
Greedy Algorithm
1. Sort the activities by finish time.
2. Select the first activity and record its finish time.
3. For each remaining activity, select it if its start time is at least the
finish time of the last selected activity.
Pseudocode
    sort activities by finish time
    last = -infinity
    for each activity a in sorted order:
        if a.start >= last:
            select a
            last = a.finish
Example
Activity  Start  Finish
A         1      4
B         3      5
C         0      6
D         5      7
E         3      8
F         5      9
G         6      10
H         8      11
Sorted by finish time, the greedy choice proceeds as follows:
1. A (finish time 4) is selected first.
2. B (start 3) and C (start 0) overlap with A and are skipped.
3. D (start 5, finish 7) is selected.
4. E, F and G start before time 7 and are skipped.
5. H (start 8, finish 11) is selected.
The resulting set of activities is {A, D, H}, which is the maximum number of
activities that can be performed by a single person.
Correctness
The greedy algorithm is correct because it always selects the activity with the
earliest finish time that does not conflict with the previously selected activities.
This ensures that the maximum number of activities are selected.
Time Complexity
The time complexity of the greedy algorithm is O(n log n), where n is the
number of activities, due to the sorting step.
Space Complexity
The space complexity of the greedy algorithm is O(n), where n is the number
of activities, since we need to store the result set.
Explanation
Finally, the function returns the result of the evaluation, which is the top
element of the stack.
Stack Operations
17 Apply binary search tree operations to insert and find an element. Write the
function in C/C++/Java/Python.
18 Use BFS to implement a level order traversal of a binary tree. Write the function
in C/C++/Java/Python.
#include <stdio.h>
#include <stdlib.h>
typedef struct Node {
    int data;
    struct Node *left, *right;
} Node;

void levelOrderTraversal(Node* root) {
    if (root == NULL) return;
    Node* queue[100]; // fixed-size queue, large enough for this example
    int front = 0, rear = 0;
    queue[rear++] = root;
    while (front < rear) {
        Node* current = queue[front++];
        printf("%d ", current->data);
        if (current->left != NULL) {
            queue[rear++] = current->left;
        }
        if (current->right != NULL) {
            queue[rear++] = current->right;
        }
    }
    printf("\n");
}
int main() {
Node* root = (Node*)malloc(sizeof(Node));
root->data = 1;
root->left = (Node*)malloc(sizeof(Node));
root->right = (Node*)malloc(sizeof(Node));
root->left->data = 2;
root->right->data = 3;
root->left->left = (Node*)malloc(sizeof(Node));
root->left->right = (Node*)malloc(sizeof(Node));
root->right->left = (Node*)malloc(sizeof(Node));
root->right->right = (Node*)malloc(sizeof(Node));
root->left->left->data = 4;
root->left->right->data = 5;
root->right->left->data = 6;
root->right->right->data = 7;
// leaf nodes: child pointers must be set to NULL before traversal
root->left->left->left = root->left->left->right = NULL;
root->left->right->left = root->left->right->right = NULL;
root->right->left->left = root->right->left->right = NULL;
root->right->right->left = root->right->right->right = NULL;
levelOrderTraversal(root);
return 0;
}
19 Write a program that uses divide and conquer to find the closest pair of points in
a 2D plane.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
int main() {
int n;
printf("Enter the number of points: ");
scanf("%d", &n);
return 0;
}
Explanation
The program uses the divide and conquer approach to find the closest pair of
points in a 2D plane. The basic idea is to divide the points into two halves,
recursively find the closest pair in each half, and then find the closest pair that
spans the two halves.
Here's how the program works: it sorts the points by x-coordinate, recursively finds the closest pair in the left and right halves, and then checks a vertical strip around the dividing line for any closer pair that spans the two halves.
Note
This program has a time complexity of O(n log n), where n is the number of
points. This is because the divide and conquer approach reduces the problem
size by half at each recursive step, and the number of recursive steps is
logarithmic in the size of the input.
#include <stdio.h>

int max(int a, int b) { return a > b ? a : b; }

// 0/1 knapsack by dynamic programming: K[i][w] is the best value
// achievable using the first i items within capacity w.
int knapsack(int W, int wt[], int val[], int n) {
    int K[n + 1][W + 1];
    for (int i = 0; i <= n; i++) {
        for (int w = 0; w <= W; w++) {
            if (i == 0 || w == 0)
                K[i][w] = 0;
            else if (wt[i - 1] <= w)
                K[i][w] = max(val[i - 1] + K[i - 1][w - wt[i - 1]], K[i - 1][w]);
            else
                K[i][w] = K[i - 1][w];
        }
    }
    return K[n][W];
}
int main() {
int val[] = {60, 100, 120};
int wt[] = {10, 20, 30};
int W = 50;
int n = sizeof(val) / sizeof(val[0]);
printf("Maximum profit: %d\n", knapsack(W, wt, val, n));
return 0;
}
21 Evaluate the efficiency of using a sliding window technique for a given
dataset of temperature readings over brute force methods.
The sliding window technique is an efficient method for solving problems that
involve finding subarrays or substrings that meet certain conditions. In the
context of a dataset of temperature readings, the sliding window technique
can be used to find the maximum or minimum temperature within a given
window size.
A brute-force approach recomputes the maximum for every window from scratch,
which takes O(N*K) time for N readings and a window of size K. The sliding
window technique, on the other hand, solves the same problem in O(N) time.
The sliding window technique is more efficient than the brute force approach
because it avoids recalculating the maximum temperature for each window.
Instead, it uses the previous window's results to update the maximum
temperature, resulting in a significant reduction in time complexity.
22 Given a number n, find sum of first n natural numbers. To calculate the sum, we
will use a recursive function recur_sum().
#include <stdio.h>
int recur_sum(int n) {
if (n == 0) {
return 0;
} else {
return n + recur_sum(n - 1);
}
}
int main() {
int num;
printf("Enter a positive integer: ");
scanf("%d", &num);
printf("Sum = %d", recur_sum(num));
return 0;
}
23 Implement a recursive algorithm to solve the Tower of Hanoi problem. Find its
complexity also.
This program takes the number of disks as input from the user and solves the
Tower of Hanoi problem using the recursive function towerOfHanoi(). The
function takes four arguments: the number of disks to move, the source rod,
the destination rod, and the auxiliary rod.
The time complexity is O(2^n), since moving n disks requires 2^n - 1 moves.
The space complexity is O(n), since the maximum depth of the recursion tree
is n.
Note that the Tower of Hanoi problem has a closed-form solution, which is 2^n
- 1 moves. However, the recursive algorithm is often used to illustrate the
concept of recursion and to demonstrate how to break down a complex
problem into smaller sub-problems.
24 Given a Binary Search Tree and a node value X, find if the node with value X is
present in the BST or not.
The searchNode() function takes the root of the BST and the value X as input
and returns 1 if the node with value X is found, and 0 otherwise. The function
works as follows: if the current node is NULL, it returns 0; if the current
node's value equals X, it returns 1; if X is smaller than the current node's
value, it recurses into the left subtree, otherwise into the right subtree.
The time complexity of this algorithm is O(h), where h is the height of the BST.
In the worst case, the BST is skewed, and the time complexity becomes O(n),
where n is the number of nodes in the BST. However, for a balanced BST, the
time complexity is O(log n).
Note that this implementation assumes that the BST is a valid binary search
tree, where for each node, all elements in the left subtree are less than the
node, and all elements in the right subtree are greater than the node.
25 Given two Binary Search Trees. Find the nodes that are common in both of
them, ie- find the intersection of the two BSTs.
This program defines two BSTs, root1 and root2, and finds the common
nodes between them using the findCommonNodes() function. The function
performs inorder traversal on both BSTs, stores the node values in arrays, and
then finds the common elements in the arrays.
The time complexity of this algorithm is O(M + N), where M and N are the
number of nodes in the first and second trees, respectively. The auxiliary
space complexity is O(M + N), since we need to store the node values in
arrays.
Note that this implementation assumes that the BSTs are valid binary search
trees, where for each node, all elements in the left subtree are less than the
node, and all elements in the right subtree are greater than the node.
The time complexity of the quicksort algorithm with random pivot selection is
O(n log n) on average, where n is the size of the array. This is because the
random pivot selection ensures that the partitioning step divides the array into
two roughly equal-sized parts, leading to a logarithmic number of recursive
calls.
However, in the worst case, the time complexity can be O(n^2) if the pivot
selection is consistently poor, leading to highly unbalanced partitions. This can
occur if the input array is already sorted or nearly sorted, and the random pivot
selection consistently chooses the smallest or largest element as the pivot.
Space complexity: O(log n) on average for the recursion stack, and O(n) in the worst case of highly unbalanced partitions.
Note that the srand() function is used to seed the random number generator
with the current time, ensuring that the random pivot selection is different each
time the program is run.