Quick Sort

QuickSort is a divide-and-conquer algorithm that sorts an array by partitioning it around a pivot.

Algorithm:

1. Choose a pivot element from the array.
2. Partition: Rearrange the array so that elements smaller than the pivot come before it and larger elements come after it.
3. Recursively apply QuickSort on the left and right subarrays.
4. The base case is when the subarray has one or zero elements.

Time Complexity:

• Best/Average Case: O(n log n) (when the partitioning is balanced)
• Worst Case: O(n²) (when the pivot is always the smallest or largest element)
• Space Complexity: O(log n) due to the recursion stack.
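A minimal Python sketch of the steps above (this out-of-place variant copies sublists for clarity; the O(log n) space figure quoted above assumes the in-place partitioning variant):

```python
def quick_sort(arr):
    # Base case: a subarray of one or zero elements is already sorted.
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]               # one common pivot choice
    left = [x for x in arr if x < pivot]     # partition around the pivot
    mid = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + mid + quick_sort(right)

print(quick_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```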
Merge Sort

Merge Sort is a divide-and-conquer algorithm that splits the array into smaller subarrays, sorts them, and then merges them back together in sorted order.

Algorithm:

1. Divide: Split the array into two halves.
2. Conquer: Recursively sort each half.
3. Merge: Merge the sorted halves back together.

Time Complexity:

• Best/Average/Worst Case: O(n log n) (always balanced)
• Space Complexity: O(n) (due to auxiliary arrays for merging)
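A minimal sketch of divide, conquer, and merge:

```python
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])     # Divide + Conquer: sort each half
    right = merge_sort(arr[mid:])
    # Merge: combine two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```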
Heap Sort

Heap Sort is a comparison-based sorting technique that uses a Binary Heap data structure. It is an efficient in-place sorting algorithm with a worst-case time complexity of O(n log n).

Algorithm:

1. Build a Max Heap: Rearrange the array so that it satisfies the max-heap property.
2. Extract the Maximum Element: Swap the root (largest element) with the last element and reduce the heap size by one.
3. Heapify: Restore the heap property by calling the heapify function on the root.
4. Repeat: Continue extracting elements and heapifying until the array is sorted.

Advantages of Heap Sort

• Guaranteed O(n log n) Time Complexity: Unlike QuickSort (which can degrade to O(n²)), Heap Sort always runs in O(n log n).
• Efficient for Large Datasets: Works well with large datasets due to predictable performance.
• In-Place Sorting: Requires only a constant amount of extra space (O(1)).

Disadvantages

• Not Stable: Heap Sort is not a stable sorting algorithm (it does not maintain the relative order of equal elements).
• More Swaps than Merge Sort: Heap Sort does more swaps, making it slower in practice than Merge Sort for some datasets.

Applications of Heap Sort

• Priority Queues: Used in implementing priority queues where the highest/lowest priority element is accessed first.
• Operating Systems: Process scheduling uses heap-based priority queues.
• Graph Algorithms: Dijkstra's shortest path algorithm uses heaps.
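A minimal in-place sketch of the four steps:

```python
def heapify(arr, n, i):
    # Sift arr[i] down so the subtree rooted at i satisfies the max-heap property.
    largest, left, right = i, 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):   # Step 1: build a max heap
        heapify(arr, n, i)
    for end in range(n - 1, 0, -1):       # Steps 2-4: extract max, shrink heap, re-heapify
        arr[0], arr[end] = arr[end], arr[0]
        heapify(arr, end, 0)
    return arr

print(heap_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```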
Counting Sort

Steps of Counting Sort

1. Find the Range: Identify the maximum value in the array (let's call it max).
2. Create a Count Array: Create an array of size max + 1 to store the frequency of each number.
3. Cumulative Sum: Modify the count array so that each element contains the sum of previous counts.
4. Build the Output Array: Place elements in the correct position using the count array.
5. Copy Back: Copy the sorted elements back to the original array.

Advantages

• Stable Sorting: Maintains the relative order of equal elements.
• Works Well for Small Integers: Ideal for sorting numbers within a limited range.

Disadvantages

• Not Suitable for Large Ranges: If k is much larger than n, the space complexity O(k) becomes inefficient.
• Not an In-Place Sorting Algorithm: Requires additional space for the count and output arrays.
• Limited to Non-Negative Integers: Requires modifications to handle negative numbers.

Applications of Counting Sort

• Sorting Student Grades (0-100)
• Sorting Small Positive Integers
• Used as a Subroutine in Radix Sort (which sorts larger numbers efficiently)
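A minimal sketch following the five steps (the reversed pass in step 4 keeps the sort stable):

```python
def counting_sort(arr):
    if not arr:
        return arr
    max_val = max(arr)                     # Step 1: find the range
    count = [0] * (max_val + 1)            # Step 2: count array of size max + 1
    for x in arr:
        count[x] += 1
    for i in range(1, len(count)):         # Step 3: cumulative sum
        count[i] += count[i - 1]
    output = [0] * len(arr)
    for x in reversed(arr):                # Step 4: place elements (reverse keeps it stable)
        count[x] -= 1
        output[count[x]] = x
    arr[:] = output                        # Step 5: copy back
    return arr

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```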
Radix Sort Algorithm

Radix Sort is a non-comparison-based sorting algorithm that sorts numbers digit by digit, from the least significant digit (LSD) to the most significant digit (MSD). It is typically implemented using Counting Sort as a subroutine.

Time Complexity Analysis

• Worst Case: O(nk)

Where:

• n is the number of elements.
• k is the number of digits in the maximum number.

Advantages of Radix Sort

• Faster for Large Numbers: Works well when k (the number of digits) is small.
• Stable Sorting: Maintains the relative order of equal elements.
• Good for Large Data Sets: Outperforms comparison-based algorithms when k is low.

Disadvantages

• Performance Depends on Number of Digits: If numbers are large (high k), other sorting algorithms like QuickSort might be better.
• Not In-Place: Requires extra space for the counting sort passes.
• Not Suitable for All Data Types: Only works for numerical values, not general-purpose sorting.

Applications of Radix Sort

• Sorting large lists of integers when the number of digits k is small.
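A minimal LSD sketch for non-negative integers, using a stable counting pass per decimal digit:

```python
def radix_sort(arr):
    if not arr:
        return arr
    exp = 1
    while max(arr) // exp > 0:             # one pass per decimal digit, LSD first
        count = [0] * 10
        for x in arr:
            count[(x // exp) % 10] += 1
        for i in range(1, 10):
            count[i] += count[i - 1]       # cumulative counts
        output = [0] * len(arr)
        for x in reversed(arr):            # reverse pass keeps each digit pass stable
            d = (x // exp) % 10
            count[d] -= 1
            output[count[d]] = x
        arr = output
        exp *= 10
    return arr

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```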
Bucket Sort

Bucket Sort distributes elements into a number of buckets, sorts each bucket individually (often with insertion sort), and concatenates the results.

Time Complexity Analysis

Case | Time Complexity
Best Case | O(n + k)
Average Case | O(n + k)
Worst Case | O(n²) (when all elements land in one bucket and insertion sort is used)

Where:

• n is the number of elements.
• k is the number of buckets.

Advantages of Bucket Sort

• Faster than Comparison-Based Sorts: Runs in O(n) when elements are uniformly distributed.
• Efficient for Floating-Point Numbers: Well-suited for sorting decimals between 0 and 1.
• Parallelizable: Each bucket can be sorted independently.

Disadvantages

• Not Suitable for Large Range of Inputs: Requires careful bucket selection.
• Worst-Case Complexity is O(n²): If elements are unevenly distributed.
• Extra Space Required: Uses additional memory for buckets.

Applications of Bucket Sort

• Sorting floating-point numbers uniformly distributed between 0 and 1.
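A minimal sketch assuming floats uniformly distributed in [0, 1); Python's built-in sort stands in for the per-bucket insertion sort:

```python
def bucket_sort(arr, num_buckets=10):
    # Assumes inputs lie in [0, 1), so int(x * num_buckets) is a valid index.
    buckets = [[] for _ in range(num_buckets)]
    for x in arr:
        buckets[int(x * num_buckets)].append(x)  # scatter into buckets
    result = []
    for b in buckets:
        result.extend(sorted(b))                 # sort each bucket, then concatenate
    return result

print(bucket_sort([0.42, 0.32, 0.23, 0.52, 0.25, 0.47]))
# [0.23, 0.25, 0.32, 0.42, 0.47, 0.52]
```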
Red-Black Trees

A Red-Black Tree follows these five properties:

1. Each node is either RED or BLACK.
2. The root is always BLACK.
3. Every leaf (NIL) is BLACK. (Leaf nodes are imaginary NIL nodes.)
4. Red nodes cannot have red children (no two consecutive RED nodes). (This prevents excessive tree skewing.)
5. Every path from a given node to its descendant NIL nodes must have the same number of BLACK nodes. (This ensures balance.)

Operations in Red-Black Trees

1⃣ Insertion

• A new node is always inserted as RED.
• If inserting a RED node violates the red-red property, rebalancing is required using rotations and recoloring.
• Case Handling:
  o Case 1: Parent is BLACK → No violation; nothing to do.
  o Case 2: Parent is RED and uncle is RED → Recoloring.
  o Case 3: Parent is RED and uncle is BLACK → Rotation & Recoloring.

2️⃣ Deletion

• Deleting a node may cause a black-height imbalance.
• If a RED node is deleted, no problem arises.
• If a BLACK node is deleted, rebalancing is done using double-black adjustments with rotations and recoloring.

Rotations

• A right rotation is used when a tree is left-heavy (and a left rotation when it is right-heavy).
• Rotations preserve the in-order traversal order of the BST.

Time Complexity Analysis

Operation | Time Complexity
Insertion | O(log n)
Deletion | O(log n)
Search | O(log n)

Advantages of Red-Black Trees

• Self-balancing → ensures O(log n) operations.
• Efficient insertions & deletions → faster than AVL trees in some cases.
• Used in real-world applications like databases & OS schedulers.

Applications of Red-Black Trees

• C++ STL (std::map, std::set, std::multimap, std::multiset)
• Linux Process Scheduling (Completely Fair Scheduler - CFS)
• Database Indexing (e.g., PostgreSQL, MongoDB, etc.)
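A minimal Python sketch of the rotation primitive only (subtree-local, returning the new subtree root); a full red-black insertion fix-up additionally tracks parent and uncle pointers to drive the case analysis above:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.color = "RED"   # new nodes are inserted RED

def left_rotate(x):
    # Rotate the subtree rooted at x to the left and return the new subtree
    # root (x's right child). In-order traversal order is preserved.
    y = x.right
    x.right = y.left
    y.left = x
    return y

def right_rotate(x):
    # Mirror image: used when a subtree is left-heavy.
    y = x.left
    x.left = y.right
    y.right = x
    return y
```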
Difference Between B-Tree and B+ Tree

Feature | B-Tree | B+ Tree
Leaf and Internal Nodes | Both leaf and internal nodes contain keys and data. | Internal nodes contain only keys; data is stored only in leaf nodes.
Search Performance | Requires more disk accesses because data is found at multiple levels. | Faster for sequential access since all data is in leaf nodes and can be scanned easily.
Usage | Used in general database indexing. | Used in file systems and databases requiring fast sequential access (e.g., MySQL, PostgreSQL).

Key Takeaway

• Use a B-Tree when you need quick access to any key (random access).
• Use a B+ Tree when sequential access and range queries are important (e.g., database indexing).
1⃣ Matrix Multiplication

Matrix multiplication is an operation where two matrices are combined to produce a new matrix.

Basic Formula

For two matrices:

C = A × B

where:

• A is an m × n matrix
• B is an n × p matrix
• C is an m × p matrix

Each element of C is computed as:

C[i][j] = Σ (k = 1 to n) A[i][k] × B[k][j]

Time Complexity

• Standard Algorithm: O(n³)

Strassen's Algorithm

• Divide and Conquer approach.
• Time Complexity: O(n^2.81) (better than the standard O(n³))
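A direct sketch of the standard O(n³) triple loop:

```python
def matrix_multiply(A, B):
    # A is m x n, B is n x p; returns C = A x B of shape m x p.
    m, n, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):             # C[i][j] = sum of A[i][k] * B[k][j]
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matrix_multiply(A, B))  # [[19, 22], [43, 50]]
```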
Convex Hull Algorithms

Algorithm | Time Complexity | Approach
QuickHull | O(n log n) (Worst: O(n²)) | Divide-and-conquer approach.
Monotone Chain | O(n log n) | Variant of Graham's Scan.

Used in: Computational Geometry, Image Processing, GIS (Geographical Information Systems).
3⃣ Searching

Types of Searching Algorithms

Algorithm | Time Complexity | Best Used For
Linear Search | O(n) | Unsorted or small datasets.
Binary Search | O(log n) | Sorted datasets.
Jump Search | O(√n) | Large sorted datasets.
Interpolation Search | O(log log n) | Uniformly distributed data.
Exponential Search | O(log n) | When the range of search is unknown.
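As an example from the table, a minimal iterative binary search:

```python
def binary_search(arr, target):
    # Returns the index of target in the sorted list arr, or -1 if absent.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1      # target lies in the right half
        else:
            hi = mid - 1      # target lies in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```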
Greedy Algorithms

Key Properties

• Greedy Choice Property – a globally optimal solution can be reached by making the locally optimal choice at each step.
• Optimal Substructure – A problem can be broken into smaller subproblems, and an optimal solution to the whole problem includes optimal solutions to subproblems.

2️⃣ Examples of Greedy Algorithms

1. Optimal Reliability Allocation

This problem involves allocating system components to maximize reliability while minimizing cost.

• Given a system with different subsystems, each having multiple configurations with different costs and reliability values.
• Goal: Allocate resources greedily to maximize system reliability under a budget constraint.

Greedy Approach (a code sketch follows at the end of this unit):

1. Sort components by reliability-to-cost ratio (highest first).
2. Select components greedily until the budget is exhausted.
3. Stop when no more components can be added without exceeding the budget.

Time Complexity: O(n log n) (dominated by the initial sort)
Use Case: Used in network design, embedded systems, and hardware selection.

5. Fractional Knapsack Problem

• Given items with values and weights, maximize the total value in a knapsack of fixed capacity.
• Greedy Choice: Always pick the item with the highest value-to-weight ratio first.
• Time Complexity: O(n log n)

3⃣ When to Use Greedy Algorithms?

• If the Greedy Choice Property and Optimal Substructure hold.
• When speed is more important than absolute accuracy.
• If a heuristic solution is acceptable.

4⃣ When Not to Use?

• When locally optimal choices do not lead to a global optimum – for example, the 0/1 Knapsack Problem below, which requires Dynamic Programming.
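Returning to example 1 (Optimal Reliability Allocation), a minimal sketch under an assumed additive model of reliability gain per component (a simplification; real reliability combines multiplicatively), with hypothetical (gain, cost) pairs:

```python
def allocate_components(components, budget):
    # components: list of (reliability_gain, cost) pairs (illustrative model).
    # Greedily pick by reliability-to-cost ratio until the budget is spent.
    ranked = sorted(components, key=lambda rc: rc[0] / rc[1], reverse=True)
    chosen, total_gain = [], 0.0
    for gain, cost in ranked:
        if cost <= budget:              # take the component if it still fits
            chosen.append((gain, cost))
            budget -= cost
            total_gain += gain
    return chosen, total_gain

print(allocate_components([(0.9, 30), (0.5, 10), (0.8, 40)], 50))
# ([(0.5, 10), (0.9, 30)], 1.4)
```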
The Knapsack Problem

The Knapsack Problem is a famous optimization problem in which you need to maximize profit while keeping the total weight within a given limit.

Types of Knapsack Problems

1⃣ 0/1 Knapsack Problem (Dynamic Programming)

• Each item can either be taken (1) or left (0).
• Cannot take a fraction of an item.
• Solved using Dynamic Programming.

2️⃣ Fractional Knapsack Problem (Greedy Algorithm)

• You can take fractions of an item.
• Items are sorted by value-to-weight ratio.
• The greedy approach works optimally.

Example:

Item | Weight | Value | Value/Weight Ratio
3 | 30 kg | ₹120 | 4

Knapsack Capacity = 50 kg

Approach:

1. Sort items by Value/Weight ratio (highest first).
2. Pick the full item if possible.
3. If the item is too heavy, take a fraction of it.

Time Complexity: O(n log n) (due to sorting)
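A minimal sketch under an assumed item list: the 30 kg/₹120 row above plus two hypothetical items (10 kg/₹60 and 20 kg/₹100) added so the 50 kg capacity forces a fractional pick:

```python
def fractional_knapsack(items, capacity):
    # items: list of (weight, value) pairs; returns the maximum total value.
    items = sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True)  # by value/weight
    total = 0.0
    for weight, value in items:
        if capacity >= weight:           # take the whole item
            total += value
            capacity -= weight
        else:                            # take only the fraction that fits
            total += value * (capacity / weight)
            break
    return total

print(fractional_knapsack([(10, 60), (20, 100), (30, 120)], 50))  # 240.0
```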
Huffman Coding (Data Compression)

Huffman coding is a Greedy Algorithm used for lossless data compression. It reduces the size of data by assigning shorter binary codes to frequently occurring characters.

Algorithm:

1. Count Frequencies: Count how often each character occurs in the input.
2. Build a Min-Heap: Create a leaf node for each character and insert all nodes into a min-heap keyed by frequency.
3. Merge Nodes:
   o Pick the two smallest frequency nodes from the heap.
   o Merge them into a new node with frequency = sum of the two.
   o Repeat until only one node remains (this becomes the root of the Huffman Tree).
4. Generate Codes: Assign 0 to the left branch and 1 to the right. Traverse the tree to assign codes to characters.
5. Encode Data: Replace characters with their binary codes for compression.
6. Decode Data: Use the Huffman Tree to convert binary back to text.

Example: "ABRACADABRA"

Step 1: Count character frequencies (A: 5, B: 2, R: 2, C: 1, D: 1).
Step 2: Merge the two smallest nodes, creating a parent node with their sum as frequency; repeat until only one node remains (this becomes the root).

Final Huffman Codes:

Character | Huffman Code
A | 0
B | 101
R | 100
C | 110
D | 111

Step 3: Encode Input

• "ABRACADABRA" → 0 101 100 0 110 0 111 0 101 100 0

Step 4: Decode (Reconstruct Text)

• Use the Huffman Tree to convert binary back to text.

3⃣ Time Complexity

• Building the tree takes O(n log n) heap operations; encoding and decoding a message are O(n) in its length.

4⃣ Applications of Huffman Coding

• Lossless compression, e.g., as a building block of the DEFLATE format (ZIP/GZIP) and of JPEG entropy coding.
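A runnable sketch using Python's heapq; tie-breaking means the exact codes can differ from the table above, but the total encoded length (23 bits for "ABRACADABRA") is the same:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # Returns a {character: binary code} dict built greedily with a min-heap.
    freq = Counter(text)                       # Step 1: count frequencies
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)                        # Step 2: min-heap of leaf nodes
    next_id = len(heap)                        # unique ids break frequency ties
    while len(heap) > 1:                       # Step 3: merge two smallest nodes
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (left, right)))
        next_id += 1
    codes = {}
    def walk(node, code):                      # Step 4: 0 = left branch, 1 = right
        if isinstance(node, tuple):
            walk(node[0], code + "0")
            walk(node[1], code + "1")
        else:
            codes[node] = code or "0"          # single-character edge case
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("ABRACADABRA")
encoded = "".join(codes[ch] for ch in "ABRACADABRA")   # Step 5: encode
print(codes)
print(len(encoded))  # 23 (bits in the compressed message)
```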
Backtracking: N-Queens Problem

The N-Queens Problem is a classic backtracking problem where we need to place N queens on an N × N chessboard such that no two queens attack each other.

Place N queens on an N × N chessboard so that no two queens:

• Share the same row
• Share the same column
• Share the same diagonal

Algorithm

1. Start from the first row and try placing a queen in each column.
2. Check if it's safe to place the queen:
   o No other queen in the same column.
   o No other queen in the left diagonal.
   o No other queen in the right diagonal.
3. If safe, place the queen and move to the next row.
4. If not safe, try the next column.
5. If all columns fail, backtrack to the previous row and move that queen to the next valid position.
6. Repeat until all N queens are placed or all possibilities are exhausted.

Time Complexity

• O(N!)
• Optimized using constraint checking.

7️⃣ Applications of N-Queens Problem

• AI & Constraint Solving (e.g., Sudoku solver)
• Parallel Processing & Scheduling
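A runnable backtracking sketch; sets record occupied columns and both diagonal directions so the safety check in step 2 is O(1):

```python
def solve_n_queens(n):
    solutions = []
    cols, diag1, diag2 = set(), set(), set()    # columns and the two diagonals
    placement = []                              # placement[row] = queen's column

    def place(row):
        if row == n:                            # all N queens placed
            solutions.append(placement[:])
            return
        for col in range(n):
            if col in cols or row - col in diag1 or row + col in diag2:
                continue                        # not safe: attacked by an earlier queen
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            placement.append(col)
            place(row + 1)                      # move on to the next row
            placement.pop()                     # backtrack
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)

    place(0)
    return solutions

print(len(solve_n_queens(8)))  # 92 solutions for the classic 8-queens puzzle
```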
Sum of Subset Problem (Backtracking)

1⃣ Problem Statement

Given a set of positive integers and a target sum (S), find all possible subsets whose sum equals S.

2️⃣ Example

Input: Set = {3, 34, 4, 12, 5, 2}, Target Sum = 9

Possible Solutions: {4, 5} and {3, 4, 2}

3⃣ Approach

We explore all possible subsets using recursive backtracking, making two choices at each step:

1. Include the current element and reduce the remaining sum.
2. Exclude the current element and move to the next one.
3. If the remaining sum reaches 0, a valid subset has been found.
4. If the sum exceeds S, backtrack (remove the last added element and try a different path).

4⃣ Time Complexity

• O(2ⁿ), since each element is either included or excluded.
• Optimized using pruning (stopping early when the sum exceeds S).
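A minimal backtracking sketch following the include/exclude choices above, with the pruning step:

```python
def subset_sums(nums, target):
    results = []

    def backtrack(i, remaining, chosen):
        if remaining == 0:                   # valid subset found
            results.append(chosen[:])
            return
        if i == len(nums) or remaining < 0:
            return                           # prune: sum exceeded or nothing left
        chosen.append(nums[i])               # choice 1: include nums[i]
        backtrack(i + 1, remaining - nums[i], chosen)
        chosen.pop()                         # undo the choice (backtrack)
        backtrack(i + 1, remaining, chosen)  # choice 2: exclude nums[i]

    backtrack(0, target, [])
    return results

print(subset_sums([3, 34, 4, 12, 5, 2], 9))  # [[3, 4, 2], [4, 5]]
```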
1⃣ Minimum Spanning Tree (MST)

A Minimum Spanning Tree (MST) is a subset of edges in a connected, weighted graph that:

• Connects all vertices without cycles.
• Has the minimum possible total edge weight.

Two MST Algorithms:

1. Kruskal's Algorithm (Greedy, Edge-Based)
2. Prim's Algorithm (Greedy, Vertex-Based)

2️⃣ Kruskal's Algorithm (Greedy)

Approach:

1. Sort all edges by weight (ascending).
2. Pick the smallest edge that doesn't form a cycle (using a Disjoint Set).
3. Repeat until all vertices are connected.

Time Complexity:

• Sorting Edges: O(E log E), which dominates the overall running time.
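A runnable sketch with a minimal union-find (disjoint set) for cycle detection; the edge list is illustrative:

```python
def kruskal(n, edges):
    # n: number of vertices (0..n-1); edges: list of (weight, u, v) tuples.
    parent = list(range(n))                 # disjoint set (union-find)

    def find(x):                            # path-compressing find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):           # 1. sort edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # 2. skip edges that would form a cycle
            parent[ru] = rv                 # union the two components
            mst.append((u, v, w))
            total += w
    return mst, total                       # 3. done once all vertices connect

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))  # ([(0, 1, 1), (1, 3, 2), (1, 2, 3)], 6)
```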
Breadth-First Search (BFS)

BFS is a level-order traversal of a graph, visiting all neighbors before moving deeper.

Algorithm

1. Start from a source node and mark it as visited.
2. Use a queue to explore neighbors first.
3. Repeat for each node until all reachable nodes are visited.

Time Complexity

• Adjacency List: O(V + E)
• Adjacency Matrix: O(V²)

3⃣ Depth-First Search (DFS)

DFS explores as deep as possible before backtracking.

Algorithm

1. Start from a source node and mark it as visited.
2. Use a stack (or recursion) to go deep, backtracking when no unvisited neighbors remain.

Time Complexity

• Same bounds as BFS: O(V + E) with an adjacency list.
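Runnable sketches of both traversals on an adjacency-list graph (the dict-of-lists representation is an assumption):

```python
from collections import deque

def bfs(graph, source):
    # graph: {node: [neighbors]}; returns nodes in visit order.
    visited, order = {source}, []
    queue = deque([source])
    while queue:
        node = queue.popleft()          # FIFO queue gives level-order traversal
        order.append(node)
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return order

def dfs(graph, source, visited=None):
    # Recursive DFS: go as deep as possible before backtracking.
    if visited is None:
        visited = set()
    visited.add(source)
    order = [source]
    for nxt in graph[source]:
        if nxt not in visited:
            order += dfs(graph, nxt, visited)
    return order

g = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(bfs(g, 0))  # [0, 1, 2, 3]
print(dfs(g, 0))  # [0, 1, 3, 2]
```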
4⃣ Single Source Shortest Path (Dijkstra's Algorithm)

Finds the shortest distance from a single source to all vertices in a weighted graph.

Approach:

1. Initialize distances (infinity except for the source).
2. Use a priority queue (min-heap) to process nodes with the smallest known distance.
3. Update neighbors if a shorter path is found.

Time Complexity:

• Using Min-Heap: O((V + E) log V)
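A minimal sketch using Python's heapq as the min-heap; stale heap entries are skipped rather than decreased in place:

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]}; returns shortest distances.
    dist = {node: float("inf") for node in graph}   # 1. all distances start at infinity
    dist[source] = 0
    heap = [(0, source)]                            # 2. min-heap of (distance, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue                                # stale entry, already improved
        for nxt, w in graph[node]:
            if d + w < dist[nxt]:                   # 3. relax: shorter path found
                dist[nxt] = d + w
                heapq.heappush(heap, (dist[nxt], nxt))
    return dist

g = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2)], 3: []}
print(dijkstra(g, 0))  # {0: 0, 1: 3, 2: 1, 3: 4}
```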
Floyd-Warshall Algorithm (All-Pairs Shortest Paths)

Finds the shortest paths between all pairs of vertices in a weighted graph.

Approach:

1. Use a distance matrix initialized with edge weights.
2. Iterate over all nodes as intermediate points to find shorter paths.

Time Complexity: O(n³)
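A minimal sketch; the input is assumed to be an n × n matrix with float('inf') for missing edges and 0 on the diagonal:

```python
def floyd_warshall(dist):
    # Updates dist in place to the all-pairs shortest path matrix.
    n = len(dist)
    for k in range(n):                  # try every node as an intermediate point
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float("inf")
d = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(d))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```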
6️⃣ Traveling Salesman Problem (TSP)

Finds the shortest possible route visiting all cities exactly once and returning to the start.

Approach:

1. Brute Force: O(N!)
2. Dynamic Programming (Held-Karp Algorithm): O(2^N × N)
3. Heuristics: Genetic Algorithm, Nearest Neighbor, etc.
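A sketch of the Nearest Neighbor heuristic mentioned above (fast, but with no optimality guarantee; the distance matrix is an illustrative assumption):

```python
def tsp_nearest_neighbor(dist, start=0):
    # dist: symmetric matrix of pairwise distances. Returns (tour, length).
    n = len(dist)
    tour, visited = [start], {start}
    total = 0
    while len(tour) < n:
        cur = tour[-1]
        nxt = min((c for c in range(n) if c not in visited),
                  key=lambda c: dist[cur][c])   # greedily pick the closest city
        total += dist[cur][nxt]
        tour.append(nxt)
        visited.add(nxt)
    total += dist[tour[-1]][start]              # return to the start
    return tour + [start], total

d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(tsp_nearest_neighbor(d))  # ([0, 1, 3, 2, 0], 18)
```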
Randomized Algorithms

Randomized algorithms use randomness in their logic to achieve a good expected performance.

Types:

1. Las Vegas Algorithm → Always gives the correct result but may take different times. (Example: QuickSort with a random pivot)
2. Monte Carlo Algorithm → May give incorrect results with a small probability. (Example: Primality Testing)
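A minimal Monte Carlo sketch, the Fermat primality test; it can be fooled by rare Carmichael numbers, which is exactly the "small probability of error" trade-off:

```python
import random

def probably_prime(n, trials=20):
    # "composite" answers are always correct; "prime" answers may
    # be wrong with small probability (the Monte Carlo trade-off).
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:       # Fermat's little theorem violated
            return False                # definitely composite
    return True                          # probably prime

print(probably_prime(97), probably_prime(100))  # True False
```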
2️⃣ String Matching Algorithms

Knuth-Morris-Pratt (KMP) Algorithm

• Uses a partial match table (LPS array) to skip redundant comparisons.
• Time Complexity: O(n + m) (better than the naive O(n·m), where n is the text length and m is the pattern length)
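A runnable sketch of the LPS construction and the search:

```python
def build_lps(pattern):
    # LPS[i] = length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix (the "partial match" table).
    lps = [0] * len(pattern)
    length = 0
    for i in range(1, len(pattern)):
        while length and pattern[i] != pattern[length]:
            length = lps[length - 1]    # fall back to a shorter border
        if pattern[i] == pattern[length]:
            length += 1
        lps[i] = length
    return lps

def kmp_search(text, pattern):
    # Returns the start index of every occurrence, in O(n + m) total.
    lps, matches, j = build_lps(pattern), [], 0
    for i, ch in enumerate(text):
        while j and ch != pattern[j]:
            j = lps[j - 1]              # skip redundant comparisons
        if ch == pattern[j]:
            j += 1
        if j == len(pattern):           # full match ending at position i
            matches.append(i - j + 1)
            j = lps[j - 1]
    return matches

print(kmp_search("ababcabcabababd", "ababd"))  # [10]
```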
P vs NP

• NP (Nondeterministic Polynomial Time): Problems whose solutions can be verified efficiently.

NP-Complete Problems

• If a problem is in NP and as hard as any problem in NP, it is NP-Complete.
• Example: SAT (Boolean Satisfiability), Traveling Salesman Problem (TSP).

NP-Hard Problems

• Even harder than NP-Complete problems.
• May not even be in NP (i.e., their solutions might not be efficiently verifiable).
• Example: Halting Problem, TSP (Decision Version), Graph Coloring.

Key Difference:

• NP-Complete → Solution is verifiable in polynomial time.
• NP-Hard → Solution may not be verifiable in polynomial time.

4⃣ Approximation Algorithms

For NP-Hard problems, finding an optimal solution is infeasible. Approximation Algorithms find a solution close to optimal in polynomial time.

Example: Approximate TSP (Traveling Salesman Problem)

1. Greedy Approach: Use an MST (Minimum Spanning Tree) as an approximation.
2. Christofides Algorithm: Gives a solution at most 1.5 times the optimal solution.

Summary Table

Topic | Algorithm | Time Complexity
Randomized Algo | QuickSort (Random Pivot) | O(n log n)
String Matching | KMP Algorithm | O(n + m)
NP-Complete | SAT, TSP | No known polynomial solution
NP-Hard | Halting Problem | Harder than NP-Complete
Approximation Algo | Christofides for TSP | O(n²)
Matrix Multiplication | Brute Force | O(n³)
Strassen's Algorithm | Divide & Conquer | O(n^2.81)