
Advanced Algorithmic Problem Solving

(R1UC601B)
PRACTICE QUESTIONS FOR ETE
1. Analyse how different hashing methods can be applied to optimize search and
sort operations.
2. Analyse the performance and application scenarios of binomial and
Fibonacci heaps.
3. Investigate the disjoint set union data structure and its operations, evaluating
its efficiency in different applications.
4. Compare and contrast Depth-First Search (DFS) and Breadth-First Search (BFS)
in various contexts.
Difference Between BFS and DFS:

 Stands for: BFS stands for Breadth First Search; DFS stands for Depth First Search.
 Data Structure: BFS uses a Queue data structure for finding the shortest path; DFS uses a Stack data structure.
 Definition: BFS is a traversal approach in which we first walk through all nodes on the same level before moving on to the next level; DFS is a traversal approach in which the traversal begins at the root node and proceeds through the nodes as far as possible, until we reach a node with no unvisited neighbouring nodes.
 Conceptual Difference: BFS builds the tree level by level; DFS builds the tree sub-tree by sub-tree.
 Approach used: BFS works on the concept of FIFO (First In First Out); DFS works on the concept of LIFO (Last In First Out).
 Suitable for: BFS is more suitable for searching vertices closer to the given source; DFS is more suitable when there are solutions away from the source.
 Applications: BFS is used in various applications such as bipartite graphs, shortest paths, etc.; DFS is used in various applications such as acyclic graphs and finding strongly connected components, etc.

5. Evaluate shortest path algorithms, minimum spanning tree algorithms, and their
applications in real-world problems.

Minimum Spanning Tree Algorithms

Minimum spanning tree algorithms are used to find the subset of edges in a
graph that connects all nodes together with the minimum total weight or cost.
Some common minimum spanning tree algorithms are:

 Kruskal's Algorithm: This algorithm finds the minimum spanning tree of a graph by sorting the edges in increasing order of weight and then selecting, one by one, the edges that do not create a cycle with the edges already chosen (a disjoint set union structure is typically used for the cycle check); a sketch follows below.
 Prim's Algorithm: This algorithm finds the minimum spanning tree by growing a tree from a starting node, repeatedly adding the minimum-weight edge that connects a node inside the tree to a node outside it.
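
To make Kruskal's algorithm concrete, here is a minimal C sketch (the one mentioned above) that sorts the edges by weight and uses a simple disjoint set union with path compression to skip edges that would create a cycle; the fixed vertex limit and the sample edge list are illustrative assumptions:

#include <stdio.h>
#include <stdlib.h>

typedef struct { int u, v, w; } Edge;

int parent[100];                       /* parent[i] = representative of i's set */

int find(int x) {                      /* find with path compression */
    if (parent[x] != x) parent[x] = find(parent[x]);
    return parent[x];
}

int cmpEdge(const void* a, const void* b) {
    return ((Edge*)a)->w - ((Edge*)b)->w;
}

/* Kruskal: returns the total weight of the MST of a connected graph with n vertices */
int kruskal(Edge edges[], int e, int n) {
    for (int i = 0; i < n; i++) parent[i] = i;
    qsort(edges, e, sizeof(Edge), cmpEdge);    /* edges in increasing order of weight */

    int total = 0, taken = 0;
    for (int i = 0; i < e && taken < n - 1; i++) {
        int ru = find(edges[i].u), rv = find(edges[i].v);
        if (ru != rv) {                        /* edge joins two different trees: keep it */
            parent[ru] = rv;
            total += edges[i].w;
            taken++;
        }
    }
    return total;
}

int main() {
    Edge edges[] = {{0, 1, 10}, {0, 2, 6}, {0, 3, 5}, {1, 3, 15}, {2, 3, 4}};
    printf("MST weight: %d\n", kruskal(edges, 5, 4));   /* expected: 19 */
    return 0;
}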

Applications in Real-World Problems

Shortest path algorithms and minimum spanning tree algorithms have many
applications in real-world problems, such as:

 Network Design: Minimum spanning tree algorithms are used to


design networks, such as phone networks, that connect all nodes
together with the minimum total cost.
 Cluster Analysis: Minimum spanning tree algorithms are used in
cluster analysis to group similar objects together.
 Image Segmentation: Minimum spanning tree algorithms are used in
image segmentation to segment an image into different regions.
 Bioinformatics: Minimum spanning tree algorithms are used in
bioinformatics to construct phylogenetic trees that represent the
evolutionary relationship between different species.
 Facility Location: Minimum spanning tree algorithms are used to
determine the optimal location of facilities, such as warehouses or
power plants, in a network.
 Geographic Information Systems (GIS): Minimum spanning tree
algorithms are used in GIS to create a map of a region with the
minimum possible total distance between the locations.

6 Investigate the significance of articulation points and bridges in network design


and reliability.

Significance of Articulation Points and Bridges in Network Design and


Reliability

Articulation points and bridges are crucial components in network design and
reliability, playing a vital role in maintaining network connectivity and
preventing fragmentation.

Articulation Points:

An articulation point is a node in a graph that, when removed, increases the


number of connected components in the graph. In other words, an articulation
point is a node that connects two or more subgraphs together. If an
articulation point is removed, the subgraphs it connects become disconnected.

Significance of Articulation Points:

1. Network Fragmentation: Articulation points are crucial in preventing


network fragmentation. If an articulation point is removed, the network
may break into smaller, disconnected subgraphs, leading to
communication disruptions and reduced network reliability.
2. Network Robustness: Identifying and protecting articulation points can
improve network robustness by ensuring that the network remains
connected even in the event of node failures.
3. Network Design: Articulation points are essential in network design, as
they help designers create networks that are more resilient to node
failures and can maintain connectivity even in the presence of faults.
4. Network Optimization: Articulation points can be used to optimize
network performance by identifying critical nodes that require additional
resources or redundancy to ensure network reliability.

Bridges:

A bridge is an edge in a graph that, when removed, increases the number of


connected components in the graph. In other words, a bridge is an edge that
connects two subgraphs together. If a bridge is removed, the subgraphs it
connects become disconnected.

Significance of Bridges:

1. Network Connectivity: Bridges are critical in maintaining network


connectivity. If a bridge is removed, the network may become
disconnected, leading to communication disruptions and reduced
network reliability.
2. Network Redundancy: Identifying and protecting bridges can improve
network redundancy by ensuring that there are multiple paths between
nodes, reducing the risk of network fragmentation.
3. Network Optimization: Bridges can be used to optimize network
performance by identifying critical edges that require additional
resources or redundancy to ensure network reliability.
4. Network Fault Tolerance: Bridges can be used to improve network
fault tolerance by providing alternative paths for data transmission in
the event of edge failures.

Real-World Applications:

1. Computer Networks: Articulation points and bridges are crucial in


computer networks, where they help ensure that data packets can be
routed efficiently and reliably between nodes.
2. Transportation Networks: In transportation networks, articulation
points and bridges are essential in maintaining connectivity between
cities, towns, and other locations.
3. Social Networks: In social networks, articulation points and bridges
play a key role in maintaining connectivity between individuals and
groups.
4. Power Grids: In power grids, articulation points and bridges are critical
in maintaining the reliability and efficiency of power distribution.

Algorithms for Identifying Articulation Points and Bridges:

1. Depth-First Search (DFS): DFS can be used to identify articulation


points and bridges in a graph.
2. Biconnected Components Algorithm: This algorithm decomposes a graph into biconnected components, maximal subgraphs that contain no articulation point; the articulation points are exactly the vertices shared by two or more biconnected components.
3. Tarjan's Algorithm: Tarjan's DFS-based technique, which records discovery times and low-link values, identifies all articulation points and bridges of an undirected graph in O(V + E) time; see the sketch below. (The related Tarjan algorithm for strongly connected components applies to directed graphs.)
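
To make the DFS-based approach concrete, here is a minimal C sketch that reports the articulation points and bridges of a small undirected graph using discovery times and low-link values; the adjacency-matrix representation, the MAXN limit, and the sample edges are illustrative assumptions (a vertex may be reported once per qualifying DFS child):

#include <stdio.h>

#define MAXN 100

int n;                         /* number of vertices */
int adj[MAXN][MAXN];           /* adjacency matrix of an undirected graph */
int disc[MAXN], low[MAXN], visited[MAXN], timer = 0;

void dfs(int u, int parent) {
    visited[u] = 1;
    disc[u] = low[u] = ++timer;
    int children = 0;

    for (int v = 0; v < n; v++) {
        if (!adj[u][v]) continue;
        if (!visited[v]) {
            children++;
            dfs(v, u);
            if (low[v] < low[u]) low[u] = low[v];

            /* bridge: no back edge from v's subtree reaches u or above */
            if (low[v] > disc[u])
                printf("Bridge: %d - %d\n", u, v);

            /* articulation point (non-root): v's subtree cannot reach above u */
            if (parent != -1 && low[v] >= disc[u])
                printf("Articulation point: %d\n", u);
        } else if (v != parent && disc[v] < low[u]) {
            low[u] = disc[v];  /* back edge */
        }
    }

    /* articulation point (root): the root is one if it has two or more DFS children */
    if (parent == -1 && children > 1)
        printf("Articulation point: %d\n", u);
}

int main() {
    n = 5;
    int edges[][2] = {{0, 1}, {1, 2}, {2, 0}, {1, 3}, {3, 4}};
    for (int i = 0; i < 5; i++) {
        adj[edges[i][0]][edges[i][1]] = 1;
        adj[edges[i][1]][edges[i][0]] = 1;
    }
    dfs(0, -1);   /* expected: bridges 1-3 and 3-4; articulation points 1 and 3 */
    return 0;
}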

7 Examine Strassen's matrix multiplication algorithm and compare it


with conventional methods.

Strassen's Matrix Multiplication Algorithm

Strassen's matrix multiplication algorithm is an optimized algorithm for matrix


multiplication, developed by Volker Strassen in 1969. It is a divide-and-
conquer approach that reduces the time complexity of matrix multiplication
from O(n^3) to O(n^2.81), making it faster than conventional methods for large
matrices.

How Strassen's Algorithm Works:

1. Divide: Divide each matrix into four quadrants of roughly equal size.
2. Conquer: Compute seven products of these quadrants using the
following formulas:

If the first matrix is split into quadrants A, B, C, D and the second into E, F, G, H, the seven products are:

P1 = A(F - H), P2 = (A + B)H, P3 = (C + D)E, P4 = D(G - E),
P5 = (A + D)(E + H), P6 = (B - D)(G + H), P7 = (A - C)(E + F)

3. Combine: Combine the seven products to form the four quadrants of the product matrix:

C11 = P5 + P4 - P2 + P6, C12 = P1 + P2,
C21 = P3 + P4, C22 = P1 + P5 - P3 - P7

Comparison with Conventional Methods:

Conventional Method (Naive Approach):

The conventional method of matrix multiplication involves iterating over each


element of the matrices and performing a dot product to compute the resulting
matrix. This approach has a time complexity of O(n^3), making it inefficient for
large matrices.
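
For reference, here is a minimal sketch of the conventional triple-loop multiplication that the O(n^3) bound refers to (the fixed 2x2 size is purely illustrative):

#include <stdio.h>

#define N 2

/* Conventional O(n^3) matrix multiplication: C = A * B */
void multiply(int A[N][N], int B[N][N], int C[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            C[i][j] = 0;
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];   /* dot product of row i and column j */
        }
}

int main() {
    int A[N][N] = {{1, 2}, {3, 4}};
    int B[N][N] = {{5, 6}, {7, 8}};
    int C[N][N];
    multiply(A, B, C);
    printf("%d %d\n%d %d\n", C[0][0], C[0][1], C[1][0], C[1][1]);  /* 19 22 / 43 50 */
    return 0;
}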
Advantages of Strassen's Algorithm:

1. Faster Computation: Strassen's algorithm has a time complexity of


O(n^2.81), making it faster than the conventional method for large
matrices.
2. Reduced Number of Operations: Strassen's algorithm reduces the
number of operations required for matrix multiplication, making it more
efficient.
3. Parallelization: Strassen's algorithm can be parallelized, making it
suitable for distributed computing environments.

Disadvantages of Strassen's Algorithm:

1. Increased Memory Access: Strassen's algorithm requires more


memory accesses than the conventional method, which can lead to
increased memory latency.
2. Complexity: Strassen's algorithm is more complex than the
conventional method, making it more difficult to implement and debug.
3. Limited Applicability: Strassen's algorithm is only suitable for large
matrices, and its benefits may not be significant for smaller matrices.

When to Use Strassen's Algorithm:

1. Large Matrices: Strassen's algorithm is suitable for large matrices


where the benefits of reduced computation time outweigh the increased
memory access and complexity.
2. Distributed Computing: Strassen's algorithm can be parallelized,
making it suitable for distributed computing environments where
multiple processors can be utilized to speed up computation.
3. High-Performance Computing: Strassen's algorithm is suitable for
high-performance computing applications where every fraction of a
second counts.

In conclusion, Strassen's matrix multiplication algorithm is a faster and more


efficient alternative to conventional methods for large matrices. However, its
increased complexity and memory access requirements make it less suitable
for smaller matrices or applications where simplicity is preferred.

8 Analyse the time complexity and efficiency of algorithms based on divide and
conquer, such as counting inversions and finding the closest pair of points.

Divide and Conquer Algorithms: Time Complexity and Efficiency


Analysis

Divide and Conquer is a popular algorithm design paradigm that breaks down
complex problems into smaller sub-problems, solves them recursively, and
combines the solutions to obtain the final result. In this analysis, we'll examine
the time complexity and efficiency of two classic divide and conquer
algorithms: counting inversions and finding the closest pair of points.

1. Counting Inversions
Problem Statement: Given an array of n integers, count the number of
inversions, i.e., pairs of elements that are in the wrong order.

Algorithm:

1. Divide: Divide the array into two halves, each of size n/2.
2. Conquer: Recursively count the inversions in each half.
3. Combine: Combine the results by counting the inversions between the
two halves.

Time Complexity:

The time complexity of the counting inversions algorithm can be analyzed


using the Master Theorem, which is a general method for solving recurrence
relations.

Let T(n) be the time complexity of the algorithm. The recurrence relation is:

T(n) = 2T(n/2) + O(n)

Using the Master Theorem, we get:

T(n) = O(n log n)

Efficiency Analysis:

The counting inversions algorithm has a time complexity of O(n log n), much faster than the naive approach of checking every pair of elements, which takes O(n^2) time. The divide and conquer approach halves the problem size at each recursive step, so there are only O(log n) levels of recursion, with O(n) merging work per level. A sketch of the merge-sort-based implementation is given below.
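
A minimal C sketch of the merge-sort-based inversion count described above (the sample array is illustrative):

#include <stdio.h>

/* Merge two sorted halves and count inversions that cross the split */
long merge(int a[], int tmp[], int lo, int mid, int hi) {
    long inv = 0;
    int i = lo, j = mid, k = lo;
    while (i < mid && j <= hi) {
        if (a[i] <= a[j]) tmp[k++] = a[i++];
        else { tmp[k++] = a[j++]; inv += mid - i; }  /* every remaining left element exceeds a[j] */
    }
    while (i < mid) tmp[k++] = a[i++];
    while (j <= hi) tmp[k++] = a[j++];
    for (i = lo; i <= hi; i++) a[i] = tmp[i];
    return inv;
}

/* Divide, count inversions in each half, then count the cross inversions */
long countInversions(int a[], int tmp[], int lo, int hi) {
    if (lo >= hi) return 0;
    int mid = (lo + hi) / 2;
    long inv = countInversions(a, tmp, lo, mid);
    inv += countInversions(a, tmp, mid + 1, hi);
    inv += merge(a, tmp, lo, mid + 1, hi);
    return inv;
}

int main() {
    int a[] = {2, 4, 1, 3, 5};
    int tmp[5];
    printf("Inversions: %ld\n", countInversions(a, tmp, 0, 4));  /* expected: 3 */
    return 0;
}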

2. Finding the Closest Pair of Points

Problem Statement: Given a set of n points in a 2D plane, find the closest


pair of points.

Algorithm:

1. Divide: Divide the points into two halves, each containing n/2 points.
2. Conquer: Recursively find the closest pair of points in each half.
3. Combine: Combine the results by finding the closest pair of points
between the two halves.

Time Complexity:

The time complexity of the closest pair of points algorithm can be analyzed with a recurrence. Let T(n) be the time complexity. If the strip of candidate points near the dividing line is sorted by y-coordinate inside every recursive call, the recurrence is

T(n) = 2T(n/2) + O(n log n)

which solves to O(n log^2 n). If the points are pre-sorted by y-coordinate once, the combine step costs only O(n), the recurrence becomes

T(n) = 2T(n/2) + O(n)

and, by the Master Theorem,

T(n) = O(n log n)

Efficiency Analysis:

The closest pair of points algorithm has a time complexity of O(n log n), which
is much faster than the naive approach of iterating over all pairs of points,
which has a time complexity of O(n^2). The divide and conquer approach
reduces the problem size by half at each recursive step, leading to a
logarithmic reduction in the number of operations.

Common Characteristics of Divide and Conquer Algorithms:

1. Recursion: Divide and conquer algorithms use recursion to break


down the problem into smaller sub-problems.
2. Problem Size Reduction: The problem size is reduced by a constant factor at each recursive step, so the depth of the recursion is logarithmic in the input size.
3. Combining Solutions: The solutions to the sub-problems are
combined to obtain the final result.

Advantages of Divide and Conquer Algorithms:

1. Efficient: Divide and conquer algorithms are often more efficient than
naive approaches, with a lower time complexity.
2. Scalable: Divide and conquer algorithms can be easily parallelized,
making them suitable for large datasets.
3. Easy to Implement: Divide and conquer algorithms are often easier to
implement than other algorithm design paradigms.

In conclusion, divide and conquer algorithms, such as counting inversions and


finding the closest pair of points, offer efficient solutions to complex problems
by breaking them down into smaller sub-problems and combining the
solutions. The time complexity of these algorithms is often lower than naive
approaches, making them suitable for large datasets and parallel computing
environments.

9 Critically evaluate the effectiveness of universal hashing in various scenarios.

Universal Hashing Effectiveness Evaluation

Universal hashing is a technique used to design hash functions that minimize


the chance of collisions between distinct keys. Its effectiveness can be
critically evaluated in various scenarios:

Advantages:

 High degree of randomness: Universal hashing provides a high


degree of randomness in the selection of hash functions, reducing the
likelihood of collisions.
 Simple implementation: It is simple to implement and can be used in
a wide variety of applications.
 Flexibility: Universal hashing has a high degree of flexibility, allowing
for easy modification of the family of hash functions to optimize
performance for specific data sets.

Disadvantages:

 Computational expense: Generating a large number of hash functions


can be computationally expensive.
 Careful selection: The selection of hash functions must be done
carefully to ensure they are truly independent.
 Limited applicability: Universal hashing may not be suitable for all
types of data sets, particularly those with complex or unpredictable
distributions.

Scenarios:

 Hash table operations: Universal hashing is effective in minimizing


collisions in hash table operations, leading to improved performance.
 Cryptography: Universal hashing is used in cryptographic applications,
such as digital signatures and message authentication codes.
 Complexity theory: Universal hashing has applications in complexity
theory, particularly in the study of hash functions and their properties.
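
To make the idea concrete, here is a minimal sketch of one standard universal family, h_{a,b}(k) = ((a*k + b) mod p) mod m, where p is a prime larger than any key and a, b are chosen at random each time a table is created; the prime, table size, and struct name below are illustrative assumptions:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* One member of the universal family h_{a,b}(k) = ((a*k + b) mod p) mod m,
   where p is a prime larger than any key, 1 <= a < p and 0 <= b < p are random. */
typedef struct {
    long long a, b, p;   /* random multiplier, random offset, prime modulus */
    long long m;         /* number of buckets in the hash table */
} UniversalHash;

UniversalHash pickHash(long long p, long long m) {
    UniversalHash h = { 1 + rand() % (p - 1), rand() % p, p, m };
    return h;
}

long long hashKey(UniversalHash h, long long key) {
    return ((h.a * key + h.b) % h.p) % h.m;
}

int main(void) {
    srand((unsigned)time(NULL));
    UniversalHash h = pickHash(1000003, 101);   /* illustrative prime and table size */
    long long keys[] = {42, 1729, 65537, 99991};
    for (int i = 0; i < 4; i++)
        printf("key %lld -> bucket %lld\n", keys[i], hashKey(h, keys[i]));
    return 0;
}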

10 Judge the suitability of hashing methods for optimization problems and justify
the chosen method.

Hashing Methods for Optimization Problems

When it comes to optimization problems, the choice of hashing method can


significantly impact performance. Two popular hashing methods are hashing
by division and hashing by multiplication.

Hashing by Division (Mod Method)

h(key) = key % table_size

This method is fast and simple, but it has some limitations. The table size should not be a power of 2 (otherwise the hash depends only on the low-order bits of the key); choosing a prime table size helps spread the keys and minimize collisions. However, even with a prime table size, collisions can still occur if the key distribution is not uniform.

Hashing by Multiplication
h(key) = floor(m * (key * c mod 1))

This method is more robust and can handle non-uniform key distributions. The
constant c should be chosen carefully to minimize collisions. A good choice
for c is a fraction of the form s / 2^w, where s is an integer and w is the word
size of the machine.

Justification for Chosen Method

In optimization problems, we often deal with large datasets and complex key
distributions. Therefore, I recommend using the hashing by
multiplication method, which is more robust and can handle non-uniform key
distributions.

Here's an example implementation in C:

#include <stdio.h>
#include <stdint.h>

/* Multiplication-method hash with m = 2^p buckets and w = 32-bit words.
   s = floor(c * 2^w) where c = (sqrt(5) - 1) / 2 is Knuth's suggested constant. */
unsigned int hash_function(uint32_t key, int p) {
    const int w = 32;
    const uint32_t s = 2654435769u;              /* c * 2^w */
    uint64_t product = (uint64_t)key * s;
    uint32_t r1 = (uint32_t)(product >> w);      /* high-order word of the product (discarded) */
    uint32_t r0 = (uint32_t)product;             /* low-order word of the product */
    (void)r1;
    return r0 >> (w - p);                        /* the p most significant bits of r0 */
}

int main() {
    int p = 10;                                  /* table size m = 2^p = 1024 buckets */
    printf("bucket: %u\n", hash_function(123456789u, p));
    return 0;
}

In this implementation, we use the multiplication method with Knuth's suggested value for c. Bit shifts split the product into its high-order and low-order words and take the top p bits of the low-order word, which can be more efficient than using division and modulo operations.

11 Judge the effectiveness of shortest path algorithms and minimum spanning tree
algorithms in solving real-world problems.

Effectiveness of Shortest Path Algorithms and Minimum Spanning Tree


Algorithms

Shortest path algorithms and minimum spanning tree algorithms are


fundamental concepts in graph theory, with numerous applications in real-
world problems. Here's a judgment of their effectiveness in solving various
problems:

Shortest Path Algorithms:


1. Dijkstra's Algorithm: Effective in finding the shortest path between
two nodes in a weighted graph. It's widely used in:
 GPS navigation systems to find the shortest route between two
locations.
 Traffic routing and optimization in transportation networks.
 Network topology design and optimization.
2. A* Algorithm: An extension of Dijkstra's algorithm, effective in finding
the shortest path with an admissible heuristic function. It's commonly
used in:
 Video games for pathfinding and navigation.
 Robotics and autonomous systems for motion planning.
 Logistics and supply chain management for route optimization.
3. Floyd-Warshall Algorithm: Effective in finding the shortest path
between all pairs of nodes in a weighted graph. It's used in:
 Network analysis and optimization.
 Traffic simulation and modeling.
 Recommendation systems for finding the shortest path between
users and items.

Minimum Spanning Tree Algorithms:

1. Kruskal's Algorithm: Effective in finding the minimum spanning tree of


a connected weighted graph. It's widely used in:
 Network design and optimization (e.g., telecommunications,
transportation).
 Clustering and data analysis.
 Image segmentation and feature extraction.
2. Prim's Algorithm: Another effective algorithm for finding the minimum
spanning tree. It's used in:
 Network topology design and optimization.
 Data compression and encoding.
 Computational biology for phylogenetic tree reconstruction.

Real-World Applications:

1. Traffic and Transportation: Shortest path algorithms are used in GPS


navigation, traffic routing, and optimization to reduce congestion and
improve travel times.
2. Network Optimization: Minimum spanning tree algorithms are used in
network design and optimization to minimize costs and improve
connectivity.
3. Logistics and Supply Chain Management: Shortest path algorithms
are used to optimize routes and reduce transportation costs.
4. Computer Vision and Image Processing: Minimum spanning tree
algorithms are used in image segmentation and feature extraction.
5. Recommendation Systems: Shortest path algorithms are used to find
the shortest path between users and items, improving recommendation
accuracy.

Challenges and Limitations:

1. Scalability: Large graphs can be computationally expensive to


process, making scalability a challenge.
2. Complexity: Real-world graphs can be complex, with multiple edges
and weights, making algorithm selection and implementation
challenging.
3. Noise and Uncertainty: Real-world data can be noisy and uncertain,
affecting algorithm accuracy and reliability.

In conclusion, shortest path algorithms and minimum spanning tree algorithms


are highly effective in solving various real-world problems. However, it's
essential to consider the challenges and limitations of these algorithms and
adapt them to specific problem domains to achieve optimal results.

11 Justify the choice of graph algorithms based on problem constraints and


requirements.

Justifying the Choice of Graph Algorithms

When faced with a graph problem, choosing the right algorithm is crucial to
achieve efficient and accurate results. The choice of algorithm depends on
various problem constraints and requirements. Here's a justification for
selecting graph algorithms based on common constraints and requirements:

Problem Constraints:

1. Graph Size: For large graphs, algorithms with a lower time complexity
are preferred to reduce computational time.
 Example: Using Breadth-First Search (BFS) or Depth-First
Search (DFS) for traversing large graphs, as they have a lower
time complexity (O(|E| + |V|)) compared to algorithms
like Dijkstra's (O(|E| + |V|log|V|)).
2. Graph Type: Different algorithms are suited for different graph types
(e.g., weighted, unweighted, directed, undirected).
 Example: Using Dijkstra's algorithm for finding the shortest
path in a weighted graph, while BFS is suitable for unweighted
graphs.
3. Memory Constraints: Algorithms with lower memory requirements are
preferred for systems with limited memory.
 Example: Using Iterative Deepening Depth-First Search (IDDFS), which keeps the small memory footprint of DFS while exploring the graph level by level like BFS.

Problem Requirements:
1. Optimality: Algorithms that guarantee optimality are preferred when
the problem requires finding the shortest path or minimum spanning
tree.
 Example: Using Dijkstra's algorithm or A* algorithm for finding
the shortest path, as they guarantee optimality.
2. Approximation: Algorithms that provide a good approximation are
preferred when optimality is not required or is computationally
expensive.
 Example: Using a heuristic such as greedy best-first search to find a good (not necessarily optimal) path quickly when exact shortest paths are too expensive to compute. (By contrast, the Floyd-Warshall algorithm computes exact shortest paths between all pairs of nodes.)
3. Scalability: Algorithms that can handle large graphs and are scalable
are preferred for big data applications.
 Example: Using Graph Processing Systems like Apache
Giraph or GraphX, which are designed to handle large-scale
graph processing.

Additional Considerations:

1. Implementation Complexity: Algorithms with simpler implementations


are preferred when development time and resources are limited.
2. Parallelization: Algorithms that can be parallelized are preferred for
distributed computing environments.
3. Cache Efficiency: Algorithms that minimize cache misses are
preferred for systems with limited cache memory.

Example Justification:

Suppose we need to find the shortest path between two nodes in a large,
weighted, directed graph. Based on the problem constraints and requirements,
we would choose Dijkstra's algorithm because:

 It is suitable for weighted graphs.


 It guarantees optimality.
 It has a relatively low time complexity (O(|E| + |V|log|V|)) compared to
other algorithms.
 It can be parallelized for distributed computing environments.

By considering the problem constraints and requirements, we can justify the


choice of graph algorithm and ensure efficient and accurate results.

12 Create a system that dynamically selects the appropriate search method based on
the dataset characteristics.

Dynamic Search Method Selection System

Components:

1. Dataset Analyzer: Extracts dataset characteristics (graph size, type,


node/edge distributions, connectivity).
2. Search Method Repository: Database of search methods with
characteristics (time complexity, optimality guarantees, parallelization
capabilities).
3. Search Method Selector: Selects the most suitable search method
based on dataset characteristics and search method repository.
4. Search Engine: Executes the selected search method on the input
dataset.

Selection Rules:

1. Graph Size: Small graphs (< 10,000 nodes) -> BFS/DFS, Medium
graphs (10,000 - 100,000 nodes) -> Dijkstra's/A*, Large graphs (>
100,000 nodes) -> Graph Processing Systems.
2. Graph Type: Weighted graphs -> Dijkstra's/A*, Unweighted graphs ->
BFS/DFS.
3. Node/Edge Distributions: Power-law distributions -> Graph
Processing Systems, Uniform distributions -> BFS/DFS.
4. Connectivity/Clustering Coefficients: Highly connected graphs ->
Graph Processing Systems, Low connectivity graphs -> BFS/DFS.

Heuristics:

1. Optimality Guarantee: Prefer search methods with optimality


guarantees.
2. Parallelization Capabilities: Prefer search methods that can be
parallelized.
3. Implementation Complexity: Prefer search methods with simpler
implementations.

Workflow:

1. Input dataset -> Dataset Analyzer.


2. Dataset characteristics -> Search Method Selector.
3. Selected search method -> Search Engine.
4. Search Engine executes search method and returns results.

Advantages:

1. Improved Performance: Dynamic selection of search method.


2. Increased Flexibility: Handles various dataset characteristics and
search methods.
3. Simplified Development: Abstracts away search method selection
complexity.
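
As a rough illustration only, the selection rules above could be encoded in a small selector function; the DatasetProfile fields, thresholds, and returned method names below are illustrative assumptions rather than a fixed specification:

#include <stdio.h>

/* Hypothetical dataset profile produced by the Dataset Analyzer */
typedef struct {
    long nodes;
    int  weighted;        /* 1 = weighted graph, 0 = unweighted */
    int  powerLaw;        /* 1 = power-law degree distribution   */
} DatasetProfile;

/* Search Method Selector: encodes the selection rules listed above */
const char* selectSearchMethod(DatasetProfile d) {
    if (d.nodes > 100000 || d.powerLaw)
        return "Graph processing system (e.g. Giraph/GraphX)";
    if (d.weighted)
        return "Dijkstra's / A*";
    return "BFS / DFS";
}

int main() {
    DatasetProfile small_unweighted = { 5000, 0, 0 };
    DatasetProfile medium_weighted  = { 50000, 1, 0 };
    DatasetProfile huge_powerlaw    = { 2000000, 1, 1 };

    printf("%s\n", selectSearchMethod(small_unweighted));  /* BFS / DFS */
    printf("%s\n", selectSearchMethod(medium_weighted));   /* Dijkstra's / A* */
    printf("%s\n", selectSearchMethod(huge_powerlaw));     /* Graph processing system */
    return 0;
}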

12 Design a robust hashing system that minimizes collisions and optimizes


search and sort operations.
13 Design greedy algorithms tailored to solve specific optimization problems
like activity selection and task scheduling.

Here are the C codes for greedy algorithms tailored to solve specific
optimization problems like activity selection and task scheduling:

Activity Selection Problem

#include <stdio.h>

void printMaxActivities(int s[], int f[], int n) {
    int i, j;
    printf("Following activities are selected \n");
    i = 0;
    printf("%d ", i);
    for (j = 1; j < n; j++) {
        if (s[j] >= f[i]) {
            printf("%d ", j);
            i = j;
        }
    }
    printf("\n");
}

int main() {
    int s[] = {1, 3, 0, 5, 8, 5};
    int f[] = {2, 4, 6, 7, 9, 9};
    int n = sizeof(s) / sizeof(s[0]);
    printMaxActivities(s, f, n);
    return 0;
}

Output

Following activities are selected
0 1 3 4

Time Complexity: O(N), assuming the activities are already sorted by finish time (as in the example above).


Auxiliary Space: O(1)

Task Scheduling Problem

#include <stdio.h>

typedef struct {
    int first;   /* finish time */
    int second;  /* start time  */
} Pair;

void SelectActivities(int s[], int f[], int n) {
    Pair p[n];
    for (int i = 0; i < n; i++) {
        p[i].first = f[i];
        p[i].second = s[i];
    }

    /* Sort activities by finish time (simple swap-based sort) */
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            if (p[j].first < p[i].first) {
                Pair temp = p[i];
                p[i] = p[j];
                p[j] = temp;
            }
        }
    }

    Pair ans[n];
    int k = 0;
    ans[k].first = p[0].second;   /* selected activity's start time  */
    ans[k].second = p[0].first;   /* selected activity's finish time */
    k++;

    for (int i = 1; i < n; i++) {
        if (p[i].second >= ans[k - 1].second) {
            ans[k].first = p[i].second;
            ans[k].second = p[i].first;
            k++;
        }
    }

    printf("Following Activities should be selected. \n");
    for (int i = 0; i < k; i++) {
        printf("Activity started at: %d and ends at %d\n", ans[i].first, ans[i].second);
    }
}

int main() {
    int s[] = {1, 3, 0, 5, 8, 5};
    int f[] = {2, 4, 6, 7, 9, 9};
    int n = sizeof(s) / sizeof(s[0]);
    SelectActivities(s, f, n);
    return 0;
}

Output

Following Activities should be selected.
Activity started at: 1 and ends at 2
Activity started at: 3 and ends at 4
Activity started at: 5 and ends at 7
Activity started at: 8 and ends at 9

Time Complexity: O(N log N) when the input activities are not already sorted, dominated by the sorting step (the simple swap-based sort shown above is O(N^2) and can be replaced by any O(N log N) sort); if the activities are given already sorted by finish time, the selection itself takes O(N) time.
Auxiliary Space: O(N) for the sorted copy of the activities and the answer array.

14 Apply the binary search technique to find the first occurrence of a number in
a sorted array.
#include <stdio.h>
int firstOccurrence(int arr[], int n, int x) {
int low = 0, high = n - 1;
int result = -1;

while (low <= high) {


int mid = (low + high) / 2;

if (arr[mid] == x) {
result = mid;
high = mid - 1; // search in the left half
} else if (arr[mid] < x) {
low = mid + 1;
} else {
high = mid - 1;
}
}

return result;
}

int main() {
int arr[] = {1, 2, 3, 4, 5, 5, 5, 6, 7, 8, 9};
int n = sizeof(arr) / sizeof(arr[0]);
int x = 5;

int result = firstOccurrence(arr, n, x);

if (result != -1) {
printf("First occurrence of %d is at index %d\n", x, result);
} else {
printf("%d is not found in the array\n", x);
}

return 0;
}

15 Apply the greedy technique to solve the activity selection problem.

Here's an example of how to apply the greedy technique to solve the activity
selection problem:

Activity Selection Problem

Given a set of activities and their start and finish times, select the maximum
number of activities that can be performed by a single person, assuming that a
person can only work on a single activity at a time.

Greedy Algorithm

The greedy algorithm for the activity selection problem is as follows:

1. Sort the activities in increasing order of their finish times.


2. Select the first activity and add it to the result set.
3. Iterate through the remaining activities and select the next activity that
starts after the finish time of the previously selected activity.
4. Repeat step 3 until all activities have been considered.

Pseudocode

function activitySelection(activities) {
    // Sort activities by finish time
    activities.sort((a, b) => a.finish - b.finish);

    // Initialize result set with the first activity
    let result = [activities[0]];

    // Iterate through remaining activities
    for (let i = 1; i < activities.length; i++) {
        // Check if the current activity starts after the finish time of the last selected activity
        if (activities[i].start >= result[result.length - 1].finish) {
            // Add the current activity to the result set
            result.push(activities[i]);
        }
    }

    return result;
}

Example

Suppose we have the following activities:

Activity   Start Time   Finish Time
A          1            4
B          3            5
C          0            6
D          5            7
E          3            8
F          5            9
G          6            10
H          8            11

The greedy algorithm would select the following activities:

1. A (finish time 4)
2. D (finish time 7)
3. H (finish time 11)

The resulting set of activities is {A, D, H}, which is the maximum number of activities that can be performed by a single person.

Correctness

The greedy algorithm is correct because it always selects the activity with the
earliest finish time that does not conflict with the previously selected activities.
This ensures that the maximum number of activities are selected.

Time Complexity

The time complexity of the greedy algorithm is O(n log n), where n is the
number of activities, due to the sorting step.

Space Complexity

The space complexity of the greedy algorithm is O(n), where n is the number
of activities, since we need to store the result set.


16 How would you apply stack operations to evaluate a postfix expression?


Write the function in C/C++/Java/Python.

Here's an example of how to apply stack operations to evaluate a postfix


expression in C:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_STACK_SIZE 100

typedef struct {
    int data[MAX_STACK_SIZE];
    int top;
} Stack;

void initStack(Stack* stack) {
    stack->top = -1;
}

int isEmpty(Stack* stack) {
    return stack->top == -1;
}

int isFull(Stack* stack) {
    return stack->top == MAX_STACK_SIZE - 1;
}

void push(Stack* stack, int value) {
    if (isFull(stack)) {
        printf("Stack overflow!\n");
        exit(1);
    }
    stack->data[++stack->top] = value;
}

int pop(Stack* stack) {
    if (isEmpty(stack)) {
        printf("Stack underflow!\n");
        exit(1);
    }
    return stack->data[stack->top--];
}

int evaluatePostfix(char* expression) {
    Stack stack;
    initStack(&stack);

    for (char* token = strtok(expression, " "); token != NULL; token = strtok(NULL, " ")) {
        if (strcmp(token, "+") == 0) {
            int operand2 = pop(&stack);
            int operand1 = pop(&stack);
            push(&stack, operand1 + operand2);
        } else if (strcmp(token, "-") == 0) {
            int operand2 = pop(&stack);
            int operand1 = pop(&stack);
            push(&stack, operand1 - operand2);
        } else if (strcmp(token, "*") == 0) {
            int operand2 = pop(&stack);
            int operand1 = pop(&stack);
            push(&stack, operand1 * operand2);
        } else if (strcmp(token, "/") == 0) {
            int operand2 = pop(&stack);
            int operand1 = pop(&stack);
            push(&stack, operand1 / operand2);
        } else {
            push(&stack, atoi(token));
        }
    }

    return pop(&stack);
}

int main() {
    char expression[] = "3 4 + 2 *";
    int result = evaluatePostfix(expression);
    printf("Result: %d\n", result);
    return 0;
}

Explanation

The evaluatePostfix function takes a postfix expression as a string and


evaluates it using a stack. The function iterates through the tokens in the
expression, and for each token:

 If the token is an operator (+, -, *, /), it pops the required number of


operands from the stack, performs the operation, and pushes the result
back onto the stack.
 If the token is an operand, it pushes the operand onto the stack.

Finally, the function returns the result of the evaluation, which is the top
element of the stack.

Stack Operations

The stack operations used in this implementation are:

 initStack: Initializes the stack by setting the top index to -1.


 isEmpty: Checks if the stack is empty.
 isFull: Checks if the stack is full.
 push: Pushes an element onto the stack.
 pop: Pops an element from the stack.
Example

The example expression "3 4 + 2 *" is evaluated as follows:

1. Push 3 onto the stack.


2. Push 4 onto the stack.
3. Pop 4 and 3 from the stack, add them, and push the result (7) onto the
stack.
4. Push 2 onto the stack.
5. Pop 2 and 7 from the stack, multiply them, and push the result (14) onto
the stack.
6. The final result is 14, which is the top element of the stack.

17 Apply binary search tree operations to insert and find an element. Write the
function in C/C++/Java/Python.
#include <stdio.h>
#include <stdlib.h>

typedef struct Node {


int data;
struct Node* left;
struct Node* right;
} Node;

Node* createNode(int data) {


Node* newNode = (Node*)malloc(sizeof(Node));
newNode->data = data;
newNode->left = newNode->right = NULL;
return newNode;
}

Node* insert(Node* root, int data) {


if (root == NULL) {
root = createNode(data);
} else if (data < root->data) {
root->left = insert(root->left, data);
} else if (data > root->data) {
root->right = insert(root->right, data);
}
return root;
}

Node* find(Node* root, int data) {


if (root == NULL || root->data == data) {
return root;
} else if (data < root->data) {
return find(root->left, data);
} else {
return find(root->right, data);
}
}
int main() {
Node* root = NULL;
root = insert(root, 12);
root = insert(root, 7);
root = insert(root, 20);
root = insert(root, 5);
root = insert(root, 9);
root = insert(root, 21);

Node* found = find(root, 9);


if (found != NULL) {
printf("Found %d in the tree\n", found->data);
} else {
printf("Not found\n");
}

return 0;
}

18 Use BFS to implement a level order traversal of a binary tree. Write the function
in C/C++/Java/Python.
#include <stdio.h>
#include <stdlib.h>

typedef struct Node {


int data;
struct Node* left;
struct Node* right;
} Node;

void levelOrderTraversal(Node* root) {


if (root == NULL) return;

Node* queue[100]; // assume max 100 nodes in the tree


int front = 0, rear = 0;

queue[rear++] = root;

while (front < rear) {


Node* current = queue[front++];
printf("%d ", current->data);

if (current->left!= NULL) {
queue[rear++] = current->left;
}
if (current->right!= NULL) {
queue[rear++] = current->right;
}
}
printf("\n");
}

int main() {
    Node* root = (Node*)malloc(sizeof(Node));
    root->data = 1;
    root->left = (Node*)malloc(sizeof(Node));
    root->right = (Node*)malloc(sizeof(Node));
    root->left->data = 2;
    root->right->data = 3;
    root->left->left = (Node*)malloc(sizeof(Node));
    root->left->right = (Node*)malloc(sizeof(Node));
    root->right->left = (Node*)malloc(sizeof(Node));
    root->right->right = (Node*)malloc(sizeof(Node));
    root->left->left->data = 4;
    root->left->right->data = 5;
    root->right->left->data = 6;
    root->right->right->data = 7;
    /* leaf nodes must have NULL children, otherwise the traversal follows garbage pointers */
    root->left->left->left = root->left->left->right = NULL;
    root->left->right->left = root->left->right->right = NULL;
    root->right->left->left = root->right->left->right = NULL;
    root->right->right->left = root->right->right->right = NULL;

    levelOrderTraversal(root);

    return 0;
}
19 Write a program that uses divide and conquer to find the closest pair of points in
a 2D plane.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <float.h>

typedef struct Point {
    int x, y;
} Point;

// Function to calculate the distance between two points
double distance(Point p1, Point p2) {
    return sqrt((double)(p1.x - p2.x) * (p1.x - p2.x) + (double)(p1.y - p2.y) * (p1.y - p2.y));
}

// qsort comparators for sorting points by x and by y coordinate
int compareX(const void* a, const void* b) { return ((Point*)a)->x - ((Point*)b)->x; }
int compareY(const void* a, const void* b) { return ((Point*)a)->y - ((Point*)b)->y; }

// Base case: brute force over all pairs (used when 3 or fewer points remain)
double bruteForce(Point points[], int n) {
    double minDist = DBL_MAX;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (distance(points[i], points[j]) < minDist)
                minDist = distance(points[i], points[j]);
    return minDist;
}

// Closest distance inside the vertical strip; the strip is sorted by y,
// so the inner loop stops as soon as the y-gap alone exceeds the best distance
double stripClosest(Point strip[], int n, double d) {
    double minDist = d;
    qsort(strip, n, sizeof(Point), compareY);
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n && (strip[j].y - strip[i].y) < minDist; j++)
            if (distance(strip[i], strip[j]) < minDist)
                minDist = distance(strip[i], strip[j]);
    return minDist;
}

// Recursive divide and conquer; the points must already be sorted by x
double closestRec(Point points[], int n) {
    if (n <= 3)
        return bruteForce(points, n);

    // Divide the points into two halves around the middle x-coordinate
    int mid = n / 2;
    Point midPoint = points[mid];

    // Recursively find the closest pair in each half
    double dl = closestRec(points, mid);
    double dr = closestRec(points + mid, n - mid);
    double d = dl < dr ? dl : dr;

    // Find the closest pair that spans the two halves: only points within
    // distance d of the dividing line can beat the current best distance
    Point* strip = (Point*)malloc(n * sizeof(Point));
    int k = 0;
    for (int i = 0; i < n; i++)
        if (fabs((double)points[i].x - midPoint.x) < d)
            strip[k++] = points[i];

    double ds = stripClosest(strip, k, d);
    free(strip);
    return ds < d ? ds : d;
}

// Function to find the smallest distance between any pair of points
double closestPair(Point* points, int n) {
    qsort(points, n, sizeof(Point), compareX);
    return closestRec(points, n);
}

int main() {
    int n;
    printf("Enter the number of points: ");
    scanf("%d", &n);

    Point* points = (Point*)malloc(n * sizeof(Point));
    for (int i = 0; i < n; i++) {
        printf("Enter point %d (x y): ", i + 1);
        scanf("%d %d", &points[i].x, &points[i].y);
    }

    printf("Smallest distance between any pair of points: %f\n", closestPair(points, n));

    free(points);
    return 0;
}

Explanation

The program uses the divide and conquer approach to find the closest pair of
points in a 2D plane. The basic idea is to divide the points into two halves,
recursively find the closest pair in each half, and then find the closest pair that
spans the two halves.
Here's how the program works:

1. The distance function calculates the distance between two points.


2. The closestPair function sorts the points by x-coordinate and then finds the smallest distance between any pair using the divide and conquer approach:
 If there are 3 or fewer points, brute force the solution by calculating the distance between each pair of points.
 Divide the points into two halves and recursively find the smallest pairwise distance in each half.
 Find the closest pair that spans the two halves by examining only the points that lie within the current best distance of the dividing line (the strip), sorted by y-coordinate.
 Return the smallest of these distances.
3. In the main function, the user is prompted to enter the number of points and the coordinates of each point.
4. The closestPair function is called, and the smallest distance found between any pair of points is printed to the console.

Note

As written, this program runs in O(n log^2 n) time, because the strip is re-sorted by y-coordinate at every level of the recursion; if the points are pre-sorted by y-coordinate once, the combine step becomes linear and the overall complexity improves to O(n log n). In both cases the divide and conquer approach halves the problem size at each recursive step, so the depth of the recursion is logarithmic in the number of points.

20 Write a program to implement dynamic programming to solve the 0/1 knapsack


problem and analyse the memory usage.
#include <stdio.h>

int knapsack(int W, int wt[], int val[], int n) {


int i, w;
int K[n + 1][W + 1];

// Initialize the table


for (i = 0; i <= n; i++) {
for (w = 0; w <= W; w++) {
if (i == 0 || w == 0)
K[i][w] = 0;
else if (wt[i - 1] <= w)
K[i][w] = (val[i - 1] + K[i - 1][w - wt[i - 1]] > K[i - 1][w]) ? val[i - 1] +
K[i - 1][w - wt[i - 1]] : K[i - 1][w];
else
K[i][w] = K[i - 1][w];
}
}

return K[n][W];
}

int main() {
int val[] = {60, 100, 120};
int wt[] = {10, 20, 30};
int W = 50;
int n = sizeof(val) / sizeof(val[0]);
printf("Maximum profit: %d\n", knapsack(W, wt, val, n));
return 0;
}
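
On the memory usage asked about in the question: the table K in the program above stores (n + 1) x (W + 1) integers, i.e. O(n*W) space (4 x 51 = 204 entries for this input). Because each row of the table depends only on the row above it, the storage can be reduced to a single row of W + 1 entries, giving O(W) space. A minimal sketch of this space-optimized variant (the function name is illustrative):

#include <stdio.h>

/* Space-optimized 0/1 knapsack: one row of W + 1 entries, O(W) memory */
int knapsackOptimized(int W, int wt[], int val[], int n) {
    int dp[W + 1];
    for (int w = 0; w <= W; w++) dp[w] = 0;

    for (int i = 0; i < n; i++)
        for (int w = W; w >= wt[i]; w--)          /* go downwards so item i is used at most once */
            if (val[i] + dp[w - wt[i]] > dp[w])
                dp[w] = val[i] + dp[w - wt[i]];

    return dp[W];
}

int main() {
    int val[] = {60, 100, 120};
    int wt[] = {10, 20, 30};
    printf("Maximum profit: %d\n", knapsackOptimized(50, wt, val, 3));  /* expected: 220 */
    return 0;
}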
21 Evaluate the efficiency of using a sliding window technique for a given
dataset of temperature readings over brute force methods.

The sliding window technique is an efficient method for solving problems that
involve finding subarrays or substrings that meet certain conditions. In the
context of a dataset of temperature readings, the sliding window technique
can be used to find the maximum or minimum temperature within a given
window size.

Let's consider an example where we want to find the maximum sum of any window of K consecutive readings in an array of temperature readings. The brute force approach would be to recompute the sum for every window of size K, resulting in a time complexity of O(N*K).

On the other hand, the sliding window technique solves this problem in O(N) time. Here's an example implementation in C:

#include <stdio.h>

int max_window_sum(int arr[], int N, int K) {
    int window_sum = 0;

    /* Sum of the first window */
    for (int i = 0; i < K; i++)
        window_sum += arr[i];

    int max_sum = window_sum;

    /* Slide the window: add the entering reading, drop the leaving one */
    for (int right = K; right < N; right++) {
        window_sum += arr[right] - arr[right - K];
        if (window_sum > max_sum)
            max_sum = window_sum;
    }

    return max_sum;
}

int main() {
    int arr[] = {23, 12, 34, 45, 56, 67, 78, 89, 90};
    int N = sizeof(arr) / sizeof(arr[0]);
    int K = 3;

    printf("Maximum sum of a window of size %d: %d\n", K, max_window_sum(arr, N, K));

    return 0;
}
In this implementation, we maintain a window of size K and track its running sum as we iterate through the array. The variable window_sum is updated in constant time per step by adding the reading that enters the window on the right and subtracting the reading that leaves it on the left, while max_sum records the largest window sum seen so far.

The sliding window technique is more efficient than the brute force approach because it avoids recomputing each window's sum from scratch. Instead, it reuses the previous window's sum, resulting in a significant reduction in time complexity.

In terms of memory usage, the sliding window technique requires only a constant amount of extra memory to store the window sum and the maximum sum, giving a space complexity of O(1).

Overall, the sliding window technique is a more efficient and scalable


approach for solving problems that involve finding subarrays or substrings that
meet certain conditions, especially when dealing with large datasets.

22 Given a number n, find sum of first n natural numbers. To calculate the sum, we
will use a recursive function recur_sum().

#include <stdio.h>

int recur_sum(int n) {
if (n == 0) {
return 0;
} else {
return n + recur_sum(n - 1);
}
}

int main() {
int num;
printf("Enter a positive integer: ");
scanf("%d", &num);
printf("Sum = %d", recur_sum(num));
return 0;
}
23 Implement a recursive algorithm to solve the Tower of Hanoi problem. Find its
complexity also.

Here is a C implementation of a recursive algorithm to solve the Tower of


Hanoi problem:

#include <stdio.h>

void towerOfHanoi(int n, char from_rod, char to_rod, char aux_rod) {
    if (n == 1) {
        printf("Move disk 1 from rod %c to rod %c\n", from_rod, to_rod);
        return;
    }
    towerOfHanoi(n - 1, from_rod, aux_rod, to_rod);
    printf("Move disk %d from rod %c to rod %c\n", n, from_rod, to_rod);
    towerOfHanoi(n - 1, aux_rod, to_rod, from_rod);
}

int main() {
    int num;
    printf("Enter the number of disks: ");
    scanf("%d", &num);
    towerOfHanoi(num, 'A', 'C', 'B');
    return 0;
}

This program takes the number of disks as input from the user and solves the
Tower of Hanoi problem using the recursive function towerOfHanoi(). The
function takes four arguments:

 n: the number of disks


 from_rod: the rod where disks are initially located
 to_rod: the rod where disks need to be moved
 aux_rod: the auxiliary rod used to help move disks

The function works as follows:

1. If there is only one disk, move it directly from the from_rod to


the to_rod.
2. Otherwise, recursively move n - 1 disks from the from_rod to
the aux_rod, using the to_rod as an auxiliary rod.
3. Move the nth disk from the from_rod to the to_rod.
4. Recursively move the n - 1 disks that we left on the aux_rod to
the to_rod, using the from_rod as an auxiliary rod.

The time complexity of this algorithm is O(2^n), where n is the number of


disks. This is because each recursive call reduces the problem size by 1, and
there are 2 recursive calls for each problem size.

The space complexity is O(n), since the maximum depth of the recursion tree
is n.

Note that the Tower of Hanoi problem has a closed-form solution, which is 2^n
- 1 moves. However, the recursive algorithm is often used to illustrate the
concept of recursion and to demonstrate how to break down a complex
problem into smaller sub-problems.
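
The O(2^n) bound can be made precise by the recurrence for the number of moves T(n):

T(n) = 2T(n - 1) + 1, with T(1) = 1
     = 2(2T(n - 2) + 1) + 1 = 4T(n - 2) + 3
     = ... = 2^k * T(n - k) + (2^k - 1)
     = 2^(n-1) * T(1) + (2^(n-1) - 1) = 2^n - 1

so solving n disks takes exactly 2^n - 1 moves, which matches the closed-form solution mentioned above.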

24 Given a Binary Search Tree and a node value X, find if the node with value X is
present in the BST or not.

Here is a C implementation of a function to search for a node with value X in a


Binary Search Tree (BST):

#include <stdio.h>
#include <stdlib.h>

// Define the structure for a BST node
typedef struct Node {
    int data;
    struct Node* left;
    struct Node* right;
} Node;

// Function to create a new BST node
Node* createNode(int data) {
    Node* newNode = (Node*)malloc(sizeof(Node));
    if (!newNode) {
        printf("Memory error\n");
        return NULL;
    }
    newNode->data = data;
    newNode->left = newNode->right = NULL;
    return newNode;
}

// Function to search for a node with value X in the BST
int searchNode(Node* root, int X) {
    if (root == NULL) {
        return 0; // Node not found
    }
    if (root->data == X) {
        return 1; // Node found
    }
    if (root->data > X) {
        return searchNode(root->left, X); // Search in the left subtree
    } else {
        return searchNode(root->right, X); // Search in the right subtree
    }
}

int main() {
    Node* root = createNode(50);
    root->left = createNode(30);
    root->right = createNode(70);
    root->left->left = createNode(20);
    root->left->right = createNode(40);
    root->right->left = createNode(60);
    root->right->right = createNode(80);

    int X = 40;
    if (searchNode(root, X)) {
        printf("Node with value %d found in the BST\n", X);
    } else {
        printf("Node with value %d not found in the BST\n", X);
    }

    return 0;
}

This program defines a BST with the following structure:

        50
       /  \
     30    70
    / \    / \
  20  40  60  80

The searchNode() function takes the root of the BST and the value X as input
and returns 1 if the node with value X is found, and 0 otherwise. The function
works as follows:

1. If the root is NULL, return 0 (node not found).


2. If the root's data matches X, return 1 (node found).
3. If the root's data is greater than X, recursively search in the left subtree.
4. If the root's data is less than X, recursively search in the right subtree.

The time complexity of this algorithm is O(h), where h is the height of the BST.
In the worst case, the BST is skewed, and the time complexity becomes O(n),
where n is the number of nodes in the BST. However, for a balanced BST, the
time complexity is O(log n).

Note that this implementation assumes that the BST is a valid binary search
tree, where for each node, all elements in the left subtree are less than the
node, and all elements in the right subtree are greater than the node.

25 Given two Binary Search Trees. Find the nodes that are common in both of
them, ie- find the intersection of the two BSTs.

Here is a C implementation of a function to find the common nodes in two


Binary Search Trees (BSTs):

#include <stdio.h>
#include <stdlib.h>

// Define the structure for a BST node
typedef struct Node {
    int data;
    struct Node* left;
    struct Node* right;
} Node;

// Function to create a new BST node
Node* createNode(int data) {
    Node* newNode = (Node*)malloc(sizeof(Node));
    if (!newNode) {
        printf("Memory error\n");
        return NULL;
    }
    newNode->data = data;
    newNode->left = newNode->right = NULL;
    return newNode;
}

// Function to insert a node into a BST
Node* insert(Node* root, int data) {
    if (root == NULL) {
        return createNode(data);
    }
    if (data < root->data) {
        root->left = insert(root->left, data);
    } else if (data > root->data) {
        root->right = insert(root->right, data);
    }
    return root;
}

// Function to count the nodes of a BST
int countNodes(Node* root) {
    if (root == NULL) return 0;
    return 1 + countNodes(root->left) + countNodes(root->right);
}

// Function to perform inorder traversal of a BST, storing the values in arr
void inorder(Node* root, int* arr, int* i) {
    if (root == NULL) {
        return;
    }
    inorder(root->left, arr, i);
    arr[(*i)++] = root->data;
    inorder(root->right, arr, i);
}

// Function to find the common nodes in two BSTs
void findCommonNodes(Node* root1, Node* root2) {
    int m = countNodes(root1);
    int n = countNodes(root2);
    int* arr1 = (int*)malloc(m * sizeof(int));
    int* arr2 = (int*)malloc(n * sizeof(int));
    int i = 0, j = 0;
    inorder(root1, arr1, &i);
    inorder(root2, arr2, &j);

    // Both arrays are sorted (inorder of a BST), so intersect them with two pointers
    i = j = 0;
    while (i < m && j < n) {
        if (arr1[i] < arr2[j]) {
            i++;
        } else if (arr1[i] > arr2[j]) {
            j++;
        } else {
            printf("%d ", arr1[i]);
            i++;
            j++;
        }
    }
    printf("\n");
    free(arr1);
    free(arr2);
}

int main() {
    Node* root1 = NULL;
    root1 = insert(root1, 5);
    root1 = insert(root1, 1);
    root1 = insert(root1, 10);
    root1 = insert(root1, 0);
    root1 = insert(root1, 4);
    root1 = insert(root1, 7);
    root1 = insert(root1, 9);

    Node* root2 = NULL;
    root2 = insert(root2, 10);
    root2 = insert(root2, 7);
    root2 = insert(root2, 20);
    root2 = insert(root2, 4);
    root2 = insert(root2, 9);

    printf("Common nodes: ");
    findCommonNodes(root1, root2);
    return 0;
}

This program defines two BSTs, root1 and root2, and finds the common
nodes between them using the findCommonNodes() function. The function
performs inorder traversal on both BSTs, stores the node values in arrays, and
then finds the common elements in the arrays.

The time complexity of this algorithm is O(M + N), where M and N are the
number of nodes in the first and second trees, respectively. The auxiliary
space complexity is O(M + N), since we need to store the node values in
arrays.

Note that this implementation assumes that the BSTs are valid binary search
trees, where for each node, all elements in the left subtree are less than the
node, and all elements in the right subtree are greater than the node.

26 Design and implement a version of quicksort that randomly chooses pivot


elements. Calculate the time and space complexity of algorithm.

Here is a C implementation of the quicksort algorithm with random pivot


selection:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

// Function to swap two elements
void swap(int* a, int* b) {
    int temp = *a;
    *a = *b;
    *b = temp;
}

// Function to generate a random pivot index
int randomPivot(int low, int high) {
    return low + rand() % (high - low + 1);
}

// Function to partition the array around the pivot
int partition(int arr[], int low, int high) {
    int pivotIndex = randomPivot(low, high);
    swap(&arr[pivotIndex], &arr[high]);
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i + 1], &arr[high]);
    return i + 1;
}

// Function to implement quicksort with random pivot selection
void quicksort(int arr[], int low, int high) {
    if (low < high) {
        int pivot = partition(arr, low, high);
        quicksort(arr, low, pivot - 1);
        quicksort(arr, pivot + 1, high);
    }
}

// Function to print the array
void printArray(int arr[], int size) {
    for (int i = 0; i < size; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

int main() {
    int arr[] = {5, 2, 8, 1, 9, 3, 7, 6, 4};
    int size = sizeof(arr) / sizeof(arr[0]);
    srand(time(NULL)); // Seed the random number generator
    printf("Original array: ");
    printArray(arr, size);
    quicksort(arr, 0, size - 1);
    printf("Sorted array: ");
    printArray(arr, size);
    return 0;
}

This implementation uses the Lomuto partition scheme, a variation of Hoare's original partitioning that is simpler to implement and reason about, although Hoare's scheme typically performs fewer swaps in practice.
Time complexity:

The expected time complexity of quicksort with random pivot selection is
O(n log n), where n is the size of the array. On average the random pivot splits
the array into two reasonably balanced parts, so the recursion depth is O(log n)
and each level of recursion does O(n) work.

In the worst case the time complexity is still O(n^2). With a random pivot this
no longer depends on the input order (an already sorted array is no worse than any
other); it happens only in the unlikely event that the chosen pivot is the smallest
or largest remaining element at almost every step.

Space complexity:

The space complexity of the quicksort algorithm is O(log n) on average: the sort is
in place, and the only extra memory is the recursion stack, whose expected depth is
O(log n). In the worst case the recursion becomes highly unbalanced and the stack
can grow to O(n).

Random pivot selection:

The random pivot selection is implemented by the randomPivot() function, which uses
rand() to generate an index between low and high (inclusive), so the pivot is chosen
essentially uniformly at random from the current subarray.

Note that the srand() function is used to seed the random number generator
with the current time, ensuring that the random pivot selection is different each
time the program is run.

27 Design and implement an efficient algorithm to merge k sorted arrays.
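One common approach (a sketch, not the only solution) is to keep a min-heap that always
holds the smallest unmerged element of each array: repeatedly extract the minimum and
replace it with the next element from the same source array. For simplicity the sketch
below assumes all k arrays have the same fixed length n = 4; the structure and function
names are illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

// One heap entry: a value plus the array and position it came from.
typedef struct {
    int value;
    int arrIdx;   // which input array the value belongs to
    int elemIdx;  // index of the value inside that array
} HeapNode;

// Sift the node at index i down to restore the min-heap property.
void minHeapify(HeapNode heap[], int size, int i) {
    int smallest = i;
    int left = 2 * i + 1, right = 2 * i + 2;
    if (left < size && heap[left].value < heap[smallest].value) smallest = left;
    if (right < size && heap[right].value < heap[smallest].value) smallest = right;
    if (smallest != i) {
        HeapNode tmp = heap[i];
        heap[i] = heap[smallest];
        heap[smallest] = tmp;
        minHeapify(heap, size, smallest);
    }
}

// Merge k sorted arrays, each of length n (fixed at 4 in this sketch),
// into one sorted array of length k*n.
// Time: O(k*n*log k), extra space: O(k) for the heap plus the output array.
int* mergeKSortedArrays(int arr[][4], int k, int n) {
    int* output = malloc(sizeof(int) * k * n);
    HeapNode* heap = malloc(sizeof(HeapNode) * k);

    // Seed the heap with the first element of every array, then build the heap.
    for (int i = 0; i < k; i++) {
        heap[i].value = arr[i][0];
        heap[i].arrIdx = i;
        heap[i].elemIdx = 0;
    }
    for (int i = k / 2 - 1; i >= 0; i--) minHeapify(heap, k, i);

    // Repeatedly take the smallest element and replace it with the next
    // element from the same source array (or INT_MAX once it is exhausted).
    for (int count = 0; count < k * n; count++) {
        HeapNode root = heap[0];
        output[count] = root.value;
        if (root.elemIdx + 1 < n) {
            heap[0].value = arr[root.arrIdx][root.elemIdx + 1];
            heap[0].elemIdx = root.elemIdx + 1;
        } else {
            heap[0].value = INT_MAX;
        }
        minHeapify(heap, k, 0);
    }
    free(heap);
    return output;
}

int main() {
    int arr[3][4] = {{1, 3, 5, 7}, {2, 4, 6, 8}, {0, 9, 10, 11}};
    int* merged = mergeKSortedArrays(arr, 3, 4);
    for (int i = 0; i < 3 * 4; i++) printf("%d ", merged[i]);  // 0 1 2 ... 11
    printf("\n");
    free(merged);
    return 0;
}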


28 Given two strings str1 and str2 of length n and m respectively, find the length of the
longest subsequence present in both. A subsequence is a sequence that can be
derived from the given string by deleting some or no elements without changing
the order of the remaining elements. For example, "abe" is a subsequence of
"abcde". Example: Input: n = 6, str1 = ABCDGH and m = 6, str2 = AEDFHR.
Output: 3. Explanation: The LCS of "ABCDGH" and "AEDFHR" is "ADH", of length 3.
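A standard way to solve this (sketched below; other formulations exist) is bottom-up
dynamic programming over an (n+1) x (m+1) table, where dp[i][j] is the LCS length of
the first i characters of str1 and the first j characters of str2.

#include <stdio.h>
#include <string.h>

// Returns the length of the longest common subsequence of str1 and str2.
// Time: O(n*m), space: O(n*m) for the DP table.
int lcsLength(const char* str1, const char* str2) {
    int n = strlen(str1), m = strlen(str2);
    int dp[n + 1][m + 1];  // dp[i][j] = LCS of first i chars of str1, first j of str2

    for (int i = 0; i <= n; i++) {
        for (int j = 0; j <= m; j++) {
            if (i == 0 || j == 0)
                dp[i][j] = 0;                        // an empty prefix has LCS 0
            else if (str1[i - 1] == str2[j - 1])
                dp[i][j] = dp[i - 1][j - 1] + 1;     // characters match: extend the LCS
            else
                dp[i][j] = dp[i - 1][j] > dp[i][j - 1]
                             ? dp[i - 1][j] : dp[i][j - 1];  // skip one character
        }
    }
    return dp[n][m];
}

int main() {
    printf("%d\n", lcsLength("ABCDGH", "AEDFHR"));  // prints 3 ("ADH")
    return 0;
}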
29 You are given an amount denoted by value. You are also given an array of coins.
The array contains the denominations of the given coins. You need to find
the minimum number of coins to make the change for value using the coins of
given denominations. Also, keep in mind that you have an infinite supply of the
coins. Input:
value = 10
numberOfCoins = 4
coins[] = {2 5 3 6}
Output: 2
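A typical bottom-up dynamic-programming sketch: dp[v] holds the fewest coins needed to
make amount v, built up from dp[0] = 0. The function and variable names here are
illustrative, not a required interface.

#include <stdio.h>
#include <limits.h>

// Returns the minimum number of coins needed to make 'value', or -1 if impossible.
// Each coin denomination may be used any number of times.
// Time: O(value * numberOfCoins), space: O(value).
int minCoins(int coins[], int numberOfCoins, int value) {
    int dp[value + 1];
    dp[0] = 0;                                    // zero coins make amount 0
    for (int v = 1; v <= value; v++) {
        dp[v] = INT_MAX;                          // "not reachable" until proven otherwise
        for (int i = 0; i < numberOfCoins; i++) {
            if (coins[i] <= v && dp[v - coins[i]] != INT_MAX &&
                dp[v - coins[i]] + 1 < dp[v])
                dp[v] = dp[v - coins[i]] + 1;     // use coins[i] on top of amount v - coins[i]
        }
    }
    return dp[value] == INT_MAX ? -1 : dp[value];
}

int main() {
    int coins[] = {2, 5, 3, 6};
    printf("%d\n", minCoins(coins, 4, 10));  // prints 2 (5 + 5)
    return 0;
}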
30 Hashing is very useful to keep track of the frequency of the elements in a list.
You are given an array of integers. You need to print the count of non-
repeated elements in the array. Example: Input: 1 1 2 2 3 3 4 5 6 7. Output: 4
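A minimal sketch of the hashing idea, assuming for simplicity that the values are small
non-negative integers so a plain frequency array can stand in for a general hash table.

#include <stdio.h>
#include <string.h>

#define MAX_VAL 1000  // assumed upper bound on array values for this sketch

// Counts the elements that appear exactly once. Time: O(n), space: O(MAX_VAL).
int countNonRepeated(int arr[], int n) {
    int freq[MAX_VAL + 1];
    memset(freq, 0, sizeof(freq));
    for (int i = 0; i < n; i++) freq[arr[i]]++;   // hashing pass: record frequencies

    int count = 0;
    for (int i = 0; i <= MAX_VAL; i++)
        if (freq[i] == 1) count++;                // values seen exactly once
    return count;
}

int main() {
    int arr[] = {1, 1, 2, 2, 3, 3, 4, 5, 6, 7};
    printf("%d\n", countNonRepeated(arr, 10));  // prints 4 (4, 5, 6, 7)
    return 0;
}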
31 Given two arrays a[] and b[] of size n and m respectively. The task is to find the
number of elements in the union of these two arrays. The union of the two arrays
is the set containing the distinct elements from both arrays. If there are
repetitions, then only one occurrence of an element is counted in the union.
Input: a[] = 1 2 3 4 5, b[] = 1 2 3. Output: 5
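A minimal sketch, again assuming small non-negative values so a boolean "seen" array can
act as the hash set; a real hash set would remove that restriction.

#include <stdio.h>
#include <string.h>

#define MAX_VAL 1000  // assumed upper bound on element values for this sketch

// Counts the number of distinct elements present in a[] or b[]. Time: O(n + m).
int unionCount(int a[], int n, int b[], int m) {
    int seen[MAX_VAL + 1];
    memset(seen, 0, sizeof(seen));
    int count = 0;
    for (int i = 0; i < n; i++)
        if (!seen[a[i]]) { seen[a[i]] = 1; count++; }   // first time this value appears
    for (int i = 0; i < m; i++)
        if (!seen[b[i]]) { seen[b[i]] = 1; count++; }
    return count;
}

int main() {
    int a[] = {1, 2, 3, 4, 5};
    int b[] = {1, 2, 3};
    printf("%d\n", unionCount(a, 5, b, 3));  // prints 5
    return 0;
}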
32 Inorder traversal means traversing through the tree in a Left, Node, Right
manner. We first traverse left, then print the current node, and then traverse
right. This is done recursively for each node. Given a BST, find its in-order
traversal.
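A minimal sketch of BST insertion followed by a recursive inorder traversal; the node
structure and sample keys are illustrative. Because of the BST ordering property, the
inorder traversal prints the keys in sorted order.

#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int data;
    struct Node *left, *right;
} Node;

Node* newNode(int data) {
    Node* n = malloc(sizeof(Node));
    n->data = data;
    n->left = n->right = NULL;
    return n;
}

// Standard BST insert: smaller keys go left, larger (or equal) keys go right.
Node* insert(Node* root, int data) {
    if (root == NULL) return newNode(data);
    if (data < root->data) root->left = insert(root->left, data);
    else                   root->right = insert(root->right, data);
    return root;
}

// Inorder traversal: Left, Node, Right.
void inorder(Node* root) {
    if (root == NULL) return;
    inorder(root->left);
    printf("%d ", root->data);
    inorder(root->right);
}

int main() {
    Node* root = NULL;
    int keys[] = {5, 1, 10, 0, 4, 7, 9};
    for (int i = 0; i < 7; i++) root = insert(root, keys[i]);
    inorder(root);  // prints: 0 1 4 5 7 9 10
    printf("\n");
    return 0;
}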
Advanced Algorithmic Problem Solving
(R1UC601B)
PRACTICE QUESTIONS FOR ETE
These problems were given for practice for the MTE; you are advised to practice
them for the ETE as well.
1. What is meant by time complexity and space complexity? Explain in detail.
2. What are asymptotic notations? Define Theta, Omega, big O, small omega, and
small o.
3. Explain the meaning of O(2^n), O(n^2), O(n lg n), and O(lg n). Give one example of
each.
4. Explain sliding window protocol.
5. Explain Naive String-Matching algorithm. Discuss its time and space
complexity.
6 Explain the Rabin-Karp string-matching algorithm. Discuss its time and space
complexity.
7 Explain the Knuth-Morris-Pratt string-matching algorithm. Discuss its time and
space complexity.
8 What is a sliding window? Where is this technique used to solve
programming problems?
9 What are bit manipulation operators? Explain the AND, OR, NOT, and XOR
bitwise operators.
10 What are the benefits of linked lists over arrays?
11 Implement singly linked list.
11 Implement stack with singly linked list.
12 Implement queue with singly linked list.
12 Implement doubly linked list.
13 Implement circular linked list.
14 Implement circular queue with linked list.
15 What is recursion? What is tail recursion?
16 What is the tower of Hanoi problem? Write a program to implement the Tower
of Hanoi problem. Find the time and space complexity of the program.
17 What is backtracking in algorithms? What kind of problems are solved with this
technique?
18 Implement N-Queens problem. Find the time and space complexity.
19 What is the subset sum problem? Write a recursive function to solve the subset
sum problem.
20 Implement a function that uses the sliding window technique to find the
maximum sum of any contiguous subarray of size K (a sketch appears at the end of this list).
21 Write a recursive function to generate all possible subsets of a given set.
22 Write a program to find the first occurrence of a repeating character in a given
string.
23 Write a program to print all the LEADERS in the array. An element is a leader if
it is greater than all the elements to its right side. And the rightmost element is
always a leader.
24 Write a program to find the majority element in the array. A majority element in
an array A[] of size n is an element that appears more than n/2 times (a sketch
appears at the end of this list).
25 Given an integer k and a queue of integers, write a program to reverse the order
of the first k elements of the queue, leaving the other elements in the same
relative order.
26 Write a program to implement a stack using queues.
27 Write a program to implement queue using stacks.
28 Given a string S of lowercase alphabets, write a program to check if the string is an
isogram or not. An isogram is a string in which no letter occurs more than once.

29 Given a sorted array arr[] consisting of N integers, write a program to
find the frequency of each array element.
30 Write a program to delete the middle element from a stack.
31 Write a program to remove consecutive duplicates from a string.
33 Write a program to display the next greater element of every element in an array.
34 Write a program to evaluate a postfix expression.
35 Write a program to get the MIN element at pop from a stack.
36 Write a program to swap the kth nodes from both ends of a given singly linked list.
37 Write a program to detect a loop in a linked list.
38 Write a program to find the intersection point in a Y-shaped linked list.
39 Write a program to merge two sorted linked lists.
40 Write a program to find the max and second max of an array.
41 Write a program to find Smallest Positive missing number. You are given an
array arr[] of N integers. The task is to find the smallest positive number missing
from the array. Positive numbers start from 1.
42 Given a non-negative integer N. The task is to check if N is a power of 2. More
formally, check if N can be expressed as 2^x for some integer x. Return true if N
is a power of 2, else return false (a sketch appears at the end of this list).
43 Write a program to Count Total Digits in a Number using recursion. You are
given a number n. You need to find the count of digits in n.
44 Check whether K-th bit is set or not. Given a number N and a bit number K,
check if Kth index bit of N is set or not. A bit is called set if it is 1. Position of
set bit '1' should be indexed starting with 0 from LSB side in binary
representation of the number. Index is starting from 0. You just need to
return true or false.
45 Write a program to print 1 to N without a loop.
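For a few of the questions above (20, 24, and 42), minimal hedged sketches follow. Each
shows one reasonable approach, not the only correct solution, and the function and
variable names are illustrative.

For question 20, the sliding window keeps a running sum of the current K elements: each
step adds the element entering the window and subtracts the element leaving it, so every
element is touched only once.

#include <stdio.h>

// Maximum sum of any contiguous subarray of size k.
// Returns -1 if the array has fewer than k elements. Time: O(n), space: O(1).
int maxWindowSum(int arr[], int n, int k) {
    if (n < k) return -1;
    int windowSum = 0;
    for (int i = 0; i < k; i++) windowSum += arr[i];   // sum of the first window
    int best = windowSum;
    for (int i = k; i < n; i++) {
        windowSum += arr[i] - arr[i - k];              // slide: add new, drop old
        if (windowSum > best) best = windowSum;
    }
    return best;
}

int main() {
    int arr[] = {1, 4, 2, 10, 23, 3, 1, 0, 20};
    printf("%d\n", maxWindowSum(arr, 9, 4));  // prints 39 (4+2+10+23)
    return 0;
}

For question 24, one well-known approach is the Boyer-Moore voting algorithm: keep a
candidate and a counter in one pass, then verify the candidate in a second pass (the
verification matters because a majority element is not guaranteed to exist).

#include <stdio.h>

// Returns the majority element (appearing more than n/2 times) or -1 if none exists.
// Boyer-Moore voting: O(n) time, O(1) extra space.
int majorityElement(int a[], int n) {
    int candidate = -1, count = 0;
    for (int i = 0; i < n; i++) {          // pass 1: find a candidate
        if (count == 0) { candidate = a[i]; count = 1; }
        else if (a[i] == candidate) count++;
        else count--;
    }
    count = 0;
    for (int i = 0; i < n; i++)            // pass 2: verify the candidate
        if (a[i] == candidate) count++;
    return (count > n / 2) ? candidate : -1;
}

int main() {
    int a[] = {3, 1, 3, 3, 2, 3, 3};
    printf("%d\n", majorityElement(a, 7));  // prints 3
    return 0;
}

For question 42, the standard bit trick is that a power of two has exactly one set bit,
so N & (N - 1) clears that bit and yields 0 only for powers of two (N = 0 is handled
separately).

#include <stdio.h>
#include <stdbool.h>

// True if n can be written as 2^x for some integer x >= 0.
bool isPowerOfTwo(unsigned int n) {
    return n != 0 && (n & (n - 1)) == 0;   // exactly one bit set
}

int main() {
    printf("%d %d %d\n", isPowerOfTwo(1), isPowerOfTwo(64), isPowerOfTwo(12));  // 1 1 0
    return 0;
}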
