
QuickSort

QuickSort is a divide-and-conquer sorting algorithm that works by selecting a pivot element, partitioning the array into two subarrays (one with elements smaller than the pivot and one with elements greater than the pivot), and then recursively sorting the subarrays.

Algorithm:

1. Choose a pivot element (commonly the first, last, or a random element).

2. Partition the array:

o Elements smaller than the pivot go to the left.

o Elements greater than the pivot go to the right.

3. Recursively apply QuickSort to the left and right subarrays.

4. The base case is a subarray with one or zero elements.

Time Complexity:

• Best/Average Case: O(n log n) (when the partitioning is balanced)

• Worst Case: O(n²) (when the pivot is always the smallest or largest element)

• Space Complexity: O(log n) due to the recursion stack.
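A short Python sketch of the steps above (illustrative, assuming a Lomuto-style partition with the last element as the pivot):

```python
def quicksort(arr, low=0, high=None):
    """Sort arr in place between indices low and high (inclusive)."""
    if high is None:
        high = len(arr) - 1
    if low < high:                      # base case: 0 or 1 elements
        p = partition(arr, low, high)   # pivot ends up at index p
        quicksort(arr, low, p - 1)      # sort elements smaller than the pivot
        quicksort(arr, p + 1, high)     # sort elements greater than the pivot

def partition(arr, low, high):
    """Lomuto partition: use arr[high] as pivot, return its final index."""
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:             # smaller elements move to the left side
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

data = [9, 3, 7, 1, 8]
quicksort(data)
print(data)  # [1, 3, 7, 8, 9]
```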

Merge Sort

Merge Sort is a divide-and-conquer algorithm that splits the array into smaller subarrays, sorts them, and then merges them back together in sorted order.

Algorithm:

1. Divide: Split the array into two halves.

2. Conquer: Recursively sort each half.

3. Merge: Merge the sorted halves back together.

Time Complexity:

• Best/Average/Worst Case: O(n log n) (always balanced)

• Space Complexity: O(n) (due to auxiliary arrays for merging)
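A minimal Python sketch of the divide/conquer/merge steps (function names are illustrative):

```python
def merge_sort(arr):
    """Return a new sorted list (divide, conquer, merge)."""
    if len(arr) <= 1:                   # base case: already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])        # sort left half
    right = merge_sort(arr[mid:])       # sort right half
    return merge(left, right)

def merge(left, right):
    """Merge two sorted lists into one sorted list (O(n) extra space)."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])             # append whatever remains
    result.extend(right[j:])
    return result

print(merge_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```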
Heap Sort Algorithm

Heap Sort is a comparison-based sorting technique that uses a Binary Heap data structure. It is an efficient in-place sorting algorithm with a worst-case time complexity of O(n log n).

Steps of Heap Sort

1. Build a Max Heap: Convert the given array into a max heap (a complete binary tree where the parent node is always greater than its children).

2. Extract the Maximum Element: Swap the root (largest element) with the last element and reduce the heap size by one.

3. Heapify: Restore the heap property by calling the heapify function on the root.

4. Repeat: Continue extracting elements and heapifying until the array is sorted.

Time Complexity Analysis

Case | Time Complexity
Best Case | O(n log n)
Average Case | O(n log n)
Worst Case | O(n log n)

Advantages of Heap Sort

Guaranteed O(n log n) Time Complexity: Unlike QuickSort (which can degrade to O(n²)), Heap Sort always runs in O(n log n).

Efficient for Large Datasets: Works well with large datasets due to predictable performance.

In-Place Sorting: Requires only a constant amount of extra space (O(1)).

Disadvantages

Not Stable: Heap Sort is not a stable sorting algorithm (it does not maintain the order of equal elements).

More Swaps than Merge Sort: Heap Sort does more swaps, making it slower in practice than Merge Sort for some datasets.

Applications of Heap Sort

• Priority Queues: Used in implementing priority queues where the highest/lowest priority element is accessed first.

• Operating Systems: Process scheduling uses heap-based priority queues.

• Graph Algorithms: Dijkstra’s shortest path algorithm uses heaps.
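A Python sketch of the build-heap / extract / heapify loop described above (illustrative):

```python
def heapify(arr, n, i):
    """Sift arr[i] down so the subtree rooted at i is a max heap of size n."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)        # continue sifting down

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):   # step 1: build a max heap
        heapify(arr, n, i)
    for end in range(n - 1, 0, -1):       # steps 2-4: extract max, re-heapify
        arr[0], arr[end] = arr[end], arr[0]
        heapify(arr, end, 0)

data = [4, 10, 3, 5, 1]
heap_sort(data)
print(data)  # [1, 3, 4, 5, 10]
```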

Counting Sort Algorithm

Counting Sort is a non-comparison-based sorting algorithm that works efficiently when sorting integers in a small range. It uses the frequency of elements to place them in the correct position.

Steps of Counting Sort

1. Find the Range: Identify the maximum value in the array (call it max).

2. Create a Count Array: Create an array of size max + 1 to store the frequency of each number.

3. Cumulative Sum: Modify the count array so that each element contains the sum of previous counts.

4. Build the Output Array: Place elements in the correct position using the count array.

5. Copy Back: Copy the sorted elements back to the original array.

Time Complexity Analysis

Case | Time Complexity
Best Case | O(n + k)
Average Case | O(n + k)
Worst Case | O(n + k)

Where:

• n is the number of elements.

• k is the range of input values (max value in the array).

Advantages of Counting Sort

Faster than Comparison-Based Sorting: If k is not significantly larger than n, Counting Sort runs in O(n), which is faster than O(n log n) sorting algorithms like QuickSort and Merge Sort.

Stable Sorting: Maintains the relative order of equal elements.

Works Well for Small Integers: Ideal for sorting numbers within a limited range.

Disadvantages

Not Suitable for Large Ranges: If k is much larger than n, the space complexity O(k) becomes inefficient.

Not an In-Place Sorting Algorithm: Requires additional space for the count and output arrays.

Limited to Non-Negative Integers: Requires modifications to handle negative numbers.

Applications of Counting Sort

• Sorting Student Grades (0-100)

• Sorting Small Positive Integers

• Used as a Subroutine in Radix Sort (which sorts larger numbers efficiently)
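A Python sketch of the five steps, assuming non-negative integers (the right-to-left output pass keeps the sort stable):

```python
def counting_sort(arr):
    """Stable counting sort for non-negative integers."""
    if not arr:
        return []
    k = max(arr)                      # step 1: find the range
    count = [0] * (k + 1)             # step 2: frequency array
    for x in arr:
        count[x] += 1
    for i in range(1, k + 1):         # step 3: cumulative sums
        count[i] += count[i - 1]
    output = [0] * len(arr)           # step 4: build output (stable)
    for x in reversed(arr):
        count[x] -= 1
        output[count[x]] = x
    return output                     # step 5: caller copies back if needed

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```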
Radix Sort Algorithm

Radix Sort is a non-comparison-based sorting algorithm that sorts numbers digit by digit, from the least significant digit (LSD) to the most significant digit (MSD). It is typically implemented using Counting Sort as a subroutine.

Steps of Radix Sort

1. Find the Maximum Number: Determine the number with the most digits in the array.

2. Sort by Each Digit:

o Start with the least significant digit (LSD) (units place).

o Use Counting Sort to sort numbers based on this digit.

o Move to the next digit (tens place, hundreds place, etc.).

3. Repeat Until All Digits are Processed.

Time Complexity Analysis

Case | Time Complexity
Best Case | O(nk)
Average Case | O(nk)
Worst Case | O(nk)

Where:

• n is the number of elements.

• k is the number of digits in the maximum number.

Advantages of Radix Sort

Faster for Large Numbers: Works well when k (the number of digits) is small.

Stable Sorting: Maintains the relative order of equal elements.

Good for Large Data Sets: Outperforms comparison-based algorithms when k is low.

Disadvantages

Not In-Place: Requires extra space for the counting sort subroutine.

Performance Depends on Number of Digits: If numbers are large (high k), other sorting algorithms like QuickSort might be better.

Not Suitable for All Data Types: Only works for numerical values, not general-purpose sorting.

Applications of Radix Sort

• Sorting Large Integers (e.g., phone numbers, ZIP codes).

• Used in the DC3 Algorithm (suffix array construction).

• Sorting Strings (Lexicographically) using character-by-character sorting.
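A Python sketch of LSD radix sort using a stable counting sort on each decimal digit (illustrative):

```python
def radix_sort(arr):
    """LSD radix sort for non-negative integers, one decimal digit at a time."""
    if not arr:
        return arr
    exp = 1
    while max(arr) // exp > 0:          # process units, tens, hundreds, ...
        arr = counting_sort_by_digit(arr, exp)
        exp *= 10
    return arr

def counting_sort_by_digit(arr, exp):
    """Stable counting sort keyed on the digit at place value exp."""
    count = [0] * 10
    for x in arr:
        count[(x // exp) % 10] += 1
    for i in range(1, 10):              # cumulative sums
        count[i] += count[i - 1]
    output = [0] * len(arr)
    for x in reversed(arr):             # reversed scan keeps the sort stable
        d = (x // exp) % 10
        count[d] -= 1
        output[count[d]] = x
    return output

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```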

Bucket Sort Algorithm 🪣

Bucket Sort is a non-comparison-based sorting algorithm that distributes elements into multiple buckets (sub-arrays) and sorts them individually, typically using Insertion Sort or another sorting algorithm.

Steps of Bucket Sort

1. Create Buckets: Divide the range of input values into n buckets.

2. Distribute Elements: Place each element into the appropriate bucket.

3. Sort Each Bucket: Sort individual buckets using Insertion Sort (or another sorting algorithm).

4. Concatenate Buckets: Merge all sorted buckets back into a single array.

Time Complexity Analysis

Case | Time Complexity
Best Case | O(n + k)
Average Case | O(n + k)
Worst Case | O(n²) (when all elements land in one bucket and insertion sort is used)

Where:

• n is the number of elements.

• k is the number of buckets.

Advantages of Bucket Sort

Faster than Comparison-Based Sorts: Runs in O(n) when elements are uniformly distributed.

Efficient for Floating-Point Numbers: Well-suited for sorting decimals between 0 and 1.

Parallelizable: Each bucket can be sorted independently.

Disadvantages

Not Suitable for a Large Range of Inputs: Requires careful bucket selection.

Worst-Case Complexity is O(n²): If elements are unevenly distributed.

Extra Space Required: Uses additional memory for buckets.

Applications of Bucket Sort

• Sorting Floating-Point Numbers

• Processing Large Data Sets Efficiently

• Used in Histogram-Based Sorting Techniques
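A Python sketch, assuming floats uniformly distributed in [0, 1); Python's built-in sorted() stands in for the per-bucket insertion sort:

```python
def bucket_sort(arr):
    """Bucket sort for floats uniformly distributed in [0, 1)."""
    n = len(arr)
    if n == 0:
        return arr
    buckets = [[] for _ in range(n)]      # step 1: create n buckets
    for x in arr:
        buckets[int(x * n)].append(x)     # step 2: distribute elements
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))     # step 3: sort each bucket
    return result                         # step 4: concatenation

print(bucket_sort([0.42, 0.32, 0.33, 0.52, 0.37, 0.47, 0.51]))
# [0.32, 0.33, 0.37, 0.42, 0.47, 0.51, 0.52]
```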
Red-Black Trees: Basics

A Red-Black Tree (RBT) is a self-balancing binary search tree (BST) that ensures balanced tree height, allowing efficient insertions, deletions, and lookups in O(log n) time.

Properties of Red-Black Trees

A Red-Black Tree follows these five properties:

1. Each node is either RED or BLACK.

2. The root is always BLACK.

3. Every leaf (NIL) is BLACK. (Leaf nodes are imaginary NIL nodes.)

4. Red nodes cannot have red children (no two consecutive RED nodes). (This prevents excessive tree skewing.)

5. Every path from a given node to its descendant NIL nodes must have the same number of BLACK nodes. (This ensures balance.)

Operations in Red-Black Trees

1⃣ Insertion

• A new node is always inserted as RED.

• If inserting a RED node violates the red-red property, rebalancing is required using rotations and recoloring.

• Case Handling:

o Case 1: Parent is BLACK → No change needed.

o Case 2: Parent is RED and uncle is RED → Recoloring.

o Case 3: Parent is RED and uncle is BLACK → Rotation & Recoloring.

2️⃣ Deletion

• Deleting a node may cause a black-height imbalance.

• If a RED node is deleted, no problem arises.

• If a BLACK node is deleted, rebalancing is done using double-black adjustments with rotations and recoloring.

3⃣ Rotations (Left & Right)

Rotations are used to maintain balance after insertion or deletion.

Left Rotation

• Moves a node down to the left and shifts its right child up.

• Used when a tree is right-heavy.

Right Rotation

• Moves a node down to the right and shifts its left child up.

• Used when a tree is left-heavy.

Rotations preserve the in-order traversal order of the BST.

Time Complexity Analysis

Operation | Time Complexity
Insertion | O(log n)
Deletion | O(log n)
Search | O(log n)

Advantages of Red-Black Trees

Self-balancing → Ensures O(log n) operations.

Efficient insertions & deletions → Faster than AVL trees in some cases.

Used in real-world applications like databases & OS schedulers.

Applications of Red-Black Trees

C++ STL (std::map, std::set, std::multimap, std::multiset)

Linux Process Scheduling (Completely Fair Scheduler - CFS)

Database Indexing (e.g., PostgreSQL, MongoDB, etc.)
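A Python sketch of a left rotation on a simplified node structure (the full insert/delete fix-up machinery is omitted; the right rotation is symmetric):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.color = "RED"              # new nodes are inserted RED
        self.left = self.right = self.parent = None

def left_rotate(root, x):
    """Rotate x down-left; its right child y moves up. Returns the tree root."""
    y = x.right
    x.right = y.left                    # y's left subtree becomes x's right subtree
    if y.left:
        y.left.parent = x
    y.parent = x.parent                 # link y to x's old parent
    if x.parent is None:
        root = y                        # y becomes the new root
    elif x is x.parent.left:
        x.parent.left = y
    else:
        x.parent.right = y
    y.left = x                          # x becomes y's left child
    x.parent = y
    return root
```

Because the rotation only rewires x, y, and y's left subtree, the in-order sequence of keys is unchanged, which is why rotations can rebalance without breaking the BST property.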
Difference Between B-Tree and B+ Tree

Feature | B-Tree | B+ Tree
Leaf and Internal Nodes | Both leaf and internal nodes contain keys and data. | Internal nodes contain only keys; data is stored only in leaf nodes.
Data Retrieval | Data can be found in any node (leaf or internal). | Data is always found in the leaf nodes.
Traversal Efficiency | Inefficient for range queries, as data is spread across internal and leaf nodes. | More efficient for range queries, as all data is stored in a linked list of leaf nodes.
Structure | More compact, as data is distributed across all levels. | Better for disk access, since only leaf nodes store data, reducing redundant storage.
Leaf Node Linking | Leaf nodes are not linked. | Leaf nodes are linked together in a linked list, allowing efficient range queries.
Search Performance | Requires more disk access because data is found at multiple levels. | Faster for sequential access since all data is in leaf nodes and can be scanned easily.
Usage | Used in general database indexing. | Used in file systems and databases requiring fast sequential access (e.g., MySQL, PostgreSQL).

Key Takeaway

• Use a B-Tree when you need quick access to any key (random access).

• Use a B+ Tree when sequential access and range queries are important (e.g., database indexing).

1⃣ Matrix Multiplication

Matrix multiplication is an operation where two matrices are combined to produce a new matrix.

Basic Formula

For two matrices:

C = A × B

where:

• A is an m × n matrix

• B is an n × p matrix

• C is an m × p matrix

Each element of C is computed as:

C[i][j] = Σ (k = 1 to n) A[i][k] × B[k][j]

Time Complexity

• Standard Algorithm: O(n³)

• Strassen’s Algorithm: O(n^2.81)

• Fastest Known (Coppersmith-Winograd): O(n^2.373)

Used in: Graphics, Machine Learning, Cryptography.
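A Python sketch of the standard O(n³) algorithm, computing C[i][j] = Σₖ A[i][k] × B[k][j] directly:

```python
def mat_mul(A, B):
    """Standard O(n^3) matrix product: C[i][j] = sum_k A[i][k] * B[k][j]."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must match"
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):          # one multiply-add per k
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
```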

2️⃣ Convex Hull

A convex hull is the smallest convex shape that completely encloses a given set of points. Think of it as stretching a rubber band around the outermost points.

Algorithms for Convex Hull

Algorithm | Time Complexity | Description
Graham’s Scan | O(n log n) | Sorts points, then constructs the hull.
Jarvis March (Gift Wrapping) | O(nh) | Finds hull points one by one (h = number of hull points).
QuickHull | O(n log n) (Worst: O(n²)) | Divide-and-conquer approach.
Monotone Chain | O(n log n) | Variant of Graham’s Scan.

Used in: Computational Geometry, Image Processing, GIS (Geographical Information Systems).
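A Python sketch of the Monotone Chain variant listed above (points are (x, y) tuples; cross() tests the turn direction):

```python
def convex_hull(points):
    """Monotone chain, O(n log n): hull vertices in counter-clockwise order."""
    points = sorted(set(points))            # sort by x, then y
    if len(points) <= 2:
        return points

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a counter-clockwise turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in points:                        # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(points):              # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]          # drop duplicated endpoints

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```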
3⃣ Searching

Searching is the process of finding an element in a dataset.

Types of Searching Algorithms

Algorithm | Time Complexity | Best Used For
Linear Search | O(n) | Unsorted or small datasets.
Binary Search | O(log n) | Sorted datasets.
Jump Search | O(√n) | Large sorted datasets.
Interpolation Search | O(log log n) | Uniformly distributed data.
Exponential Search | O(log n) | When the range of the search is unknown.
Ternary Search | O(log n) | Finding maxima/minima in unimodal functions.

Used in: Databases, Web Searching, AI, and Big Data.

Final Thoughts

• Matrix Multiplication is key in mathematics and ML.

• Convex Hull is useful in computational geometry.

• Searching Algorithms improve efficiency in large datasets.

Greedy Method (Overview)

The Greedy Method is an approach where locally optimal choices are made at each step without reconsidering past choices, in the hope of reaching a globally optimal solution.

1⃣ Properties of Greedy Algorithms

Greedy Choice Property – A global optimal solution can be reached by choosing local optimal solutions.

Optimal Substructure – A problem can be broken into smaller subproblems, and an optimal solution to the whole problem includes optimal solutions to subproblems.

2️⃣ Examples of Greedy Algorithms

1. Optimal Reliability Allocation

This problem involves allocating system components to maximize reliability while minimizing cost.

Problem Statement

• Given a system with different subsystems, each having multiple configurations with different costs and reliability values.

• Goal: Allocate resources greedily to maximize system reliability under a budget constraint.

Greedy Approach

1. Sort components by reliability-to-cost ratio (highest first).

2. Select components greedily until the budget is exhausted.

3. Stop when no more components can be added without exceeding the budget.

Use Case

Used in network design, embedded systems, and hardware selection.

2. Huffman Coding (Data Compression)

• Used for lossless data compression (e.g., ZIP files, MP3).

• Greedy Choice: Always pick the two smallest-frequency characters and merge them.

Time Complexity: O(n log n)

3. Kruskal’s Algorithm (Minimum Spanning Tree)

• Used in network design (optical fiber, road networks).

• Greedy Choice: Always pick the smallest-weight edge that does not form a cycle (sketched in code after this section).

Time Complexity: O(E log E)

4. Dijkstra’s Algorithm (Shortest Path)

• Finds the shortest path from a source node to all other nodes.

• Greedy Choice: Always pick the node with the smallest known distance.

Time Complexity: O(V log V) (with a priority queue).

5. Fractional Knapsack Problem

• Given items with values and weights, maximize the total value in a knapsack of fixed capacity.

• Greedy Choice: Always pick the item with the highest value-to-weight ratio first.

Time Complexity: O(n log n)

3⃣ When to Use Greedy Algorithms?

If the Greedy Choice Property and Optimal Substructure hold.

When speed is more important than absolute accuracy.

If a heuristic solution is acceptable.

4⃣ When Not to Use?

When decisions depend on future choices.

When backtracking or dynamic programming is needed for an optimal solution.
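As a concrete illustration of the greedy choice in example 3, here is a minimal Kruskal sketch with a simple union-find; the edge encoding as (weight, u, v) tuples is an assumption made for the example:

```python
def kruskal(n, edges):
    """MST of a graph with vertices 0..n-1; edges are (weight, u, v) tuples."""
    parent = list(range(n))

    def find(x):                        # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):       # greedy: smallest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                    # skip edges that would form a cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 1, 3), (3, 2, 3)]
print(kruskal(4, edges))  # ([(0, 1, 1), (1, 2, 2), (2, 3, 3)], 6)
```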
Knapsack Problem

The Knapsack Problem is a famous optimization problem in which you need to maximize profit while keeping the total weight within a given limit.

Types of Knapsack Problems

1⃣ 0/1 Knapsack Problem (Dynamic Programming)

• Each item can either be taken (1) or left (0).

• Cannot take a fraction of an item.

• Solved using Dynamic Programming.

Example:

Item | Weight | Value
1 | 2 kg | ₹40
2 | 3 kg | ₹50
3 | 4 kg | ₹100

Knapsack Capacity = 5 kg

Approach (DP Table):

• Use a 2D table where dp[i][w] represents the maximum value that can be obtained using the first i items with weight limit w.

• Recurrence Relation:

dp[i][w] = max(dp[i-1][w], value[i] + dp[i-1][w - weight[i]])

• Time Complexity: O(nW) (where n = number of items, W = capacity)

2️⃣ Fractional Knapsack Problem (Greedy Algorithm)

• You can take fractions of an item.

• Items are sorted by value-to-weight ratio.

• The greedy approach works optimally.

Example:

Item | Weight | Value | Value/Weight Ratio
1 | 10 kg | ₹60 | 6
2 | 20 kg | ₹100 | 5
3 | 30 kg | ₹120 | 4

Knapsack Capacity = 50 kg

Approach:

1. Sort items by Value/Weight ratio (highest first).

2. Pick the full item if possible.

3. If the item is too heavy, take a fraction of it.

Optimal Solution in Greedy: this strategy always yields the optimal total value for the fractional variant.

Time Complexity: O(n log n) (due to sorting)

Key Differences Between 0/1 and Fractional Knapsack

Feature | 0/1 Knapsack | Fractional Knapsack
Item Selection | Take full or leave | Take full or fraction
Approach | Dynamic Programming | Greedy Algorithm
Complexity | O(nW) | O(n log n)
Optimality | Exact, but practical only for small W | Greedy is always optimal

Use Cases of Knapsack Algorithm

Resource Allocation (budgeting, investment strategies)

Cargo Loading (airlines, logistics)

Network Bandwidth Optimization

Stock Portfolio Management
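Python sketches of both variants, run on the example data above (illustrative):

```python
def knapsack_01(weights, values, W):
    """0/1 knapsack via dynamic programming; O(nW) time."""
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]                 # leave item i
            if weights[i - 1] <= w:                 # or take it, if it fits
                dp[i][w] = max(dp[i][w],
                               values[i - 1] + dp[i - 1][w - weights[i - 1]])
    return dp[n][W]

def fractional_knapsack(weights, values, W):
    """Greedy by value/weight ratio; optimal when fractions are allowed."""
    items = sorted(zip(weights, values), key=lambda it: it[1] / it[0], reverse=True)
    total = 0.0
    for w, v in items:
        if W >= w:                  # take the whole item
            W -= w
            total += v
        else:                       # take a fraction of it and stop
            total += v * W / w
            break
    return total

print(knapsack_01([2, 3, 4], [40, 50, 100], 5))              # 100 (item 3 alone)
print(fractional_knapsack([10, 20, 30], [60, 100, 120], 50)) # 240.0
```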

Huffman Coding (Data Compression)

Huffman coding is a Greedy Algorithm used for lossless data compression. It reduces the size of data by assigning shorter binary codes to frequently occurring characters.

1⃣ How Huffman Coding Works?

1. Count Frequency: Find the frequency of each character in the input.

2. Build a Min-Heap: Create a priority queue (min-heap) of nodes, where each node represents a character and its frequency.

3. Merge Nodes:

o Pick the two smallest-frequency nodes from the heap.

o Merge them into a new node with frequency = sum of the two.

o Repeat until only one node remains (this becomes the root of the Huffman Tree).

4. Generate Codes: Assign 0 to the left branch and 1 to the right. Traverse the tree to assign codes to characters.

5. Encode Data: Replace characters with their binary codes for compression.

6. Decode Data: Use the Huffman Tree to convert binary back to text.

2️⃣ Example of Huffman Coding

Given Input:

"ABRACADABRA"

Step 1: Character Frequency Table

Character | Frequency
A | 5
B | 2
R | 2
C | 1
D | 1

Step 2: Construct Huffman Tree

1. Create leaf nodes for each character and insert them into a min-heap.

2. Merge the two smallest nodes, creating a parent node with their sum as frequency (e.g., C + D → 2, then B + R → 4, then (C+D) + (B+R) → 6, then A + 6 → 11).

3. Repeat until only one node remains (this becomes the root).

Final Huffman Codes (one valid assignment):

Character | Huffman Code
A | 0
B | 101
R | 100
C | 110
D | 111

Step 3: Encode Input

• "ABRACADABRA" → 0 101 100 0 110 0 111 0 101 100 0 (23 bits)

Step 4: Decode (Reconstruct Text)

• Use the Huffman Tree to convert the binary back to text; because no code is a prefix of another, decoding is unambiguous.

3⃣ Time Complexity

• Building the Huffman Tree: O(n log n) (because of priority-queue operations)

• Encoding & Decoding: O(n)

4⃣ Applications of Huffman Coding

File Compression (ZIP, GZIP)

Multimedia Compression (JPEG, MP3)

Data Transmission (Minimizing bandwidth)
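A Python sketch of the tree construction and code generation, using heapq as the min-heap (a tie-breaking counter keeps heap entries comparable; because ties can be broken differently, the exact codes may differ from the table above while remaining optimal):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table for text using a min-heap of (freq, tree) pairs."""
    freq = Counter(text)                          # step 1: character frequencies
    # heap entries: (frequency, tie_breaker, tree); a tree is a char or a pair
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)                           # step 2: min-heap of nodes
    counter = len(heap)
    while len(heap) > 1:                          # step 3: merge two smallest
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (t1, t2)))
        counter += 1
    codes = {}
    def walk(tree, code):                         # step 4: 0 = left, 1 = right
        if isinstance(tree, tuple):
            walk(tree[0], code + "0")
            walk(tree[1], code + "1")
        else:
            codes[tree] = code or "0"             # single-character edge case
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("ABRACADABRA")
encoded = "".join(codes[c] for c in "ABRACADABRA")   # step 5: encode
print(codes, encoded)
```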

Backtracking: N-Queens Problem

The N-Queens Problem is a classic backtracking problem where we need to place N queens on an N × N chessboard such that no two queens attack each other.

1⃣ Problem Statement

Place N queens on an N × N chessboard so that no two queens:

Share the same row

Share the same column

Share the same diagonal

2️⃣ Backtracking Approach

1. Start from the first row and try placing a queen in each column.

2. Check if it's safe to place the queen:

o No other queen in the same column.

o No other queen on the left diagonal.

o No other queen on the right diagonal.

3. If safe, place the queen and move to the next row.

4. If not safe, try the next column.

5. If all columns fail, backtrack to the previous row and try a different position.

6. Repeat until all queens are placed or all possibilities are exhausted.

3⃣ Time Complexity

• Worst case: O(N!)

• Optimized using constraint checking.

4⃣ Example: Solving for N = 4

Step-by-Step Board Placement

1. Place Q in the first available column of row 1.

2. Move to the next row and place the next Q in a valid column.

3. If a row has no valid placements, backtrack to the previous row and move Q to the next valid position.

4. Continue until all N queens are placed.

Applications of the N-Queens Problem

AI & Constraint Solving (e.g., Sudoku solvers)

Parallel Processing & Scheduling

Robotics & Pathfinding
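A Python sketch of the backtracking search (each solution lists the queen's column per row; placing one queen per row makes the same-row check unnecessary):

```python
def solve_n_queens(n):
    """Return all solutions; each solution maps row -> column of its queen."""
    solutions, cols = [], []

    def safe(row, col):
        for r, c in enumerate(cols):
            if c == col or abs(c - col) == abs(r - row):   # same column/diagonal
                return False
        return True

    def place(row):
        if row == n:                      # all queens placed
            solutions.append(cols[:])
            return
        for col in range(n):              # try every column in this row
            if safe(row, col):
                cols.append(col)          # place the queen
                place(row + 1)
                cols.pop()                # backtrack

    place(0)
    return solutions

print(solve_n_queens(4))  # [[1, 3, 0, 2], [2, 0, 3, 1]]
```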


Sum of Subset Problem (Backtracking)

1⃣ Problem Statement

Given a set of positive integers and a target sum (S), find all possible subsets whose sum equals S.

2️⃣ Example

Input:

Set = {3, 34, 4, 12, 5, 2}, Target Sum = 9

Possible Solutions:

{4, 5}

{3, 2, 4}

3⃣ Approach Using Backtracking

We explore all possible subsets using recursive backtracking, making two choices at each step:

1. Include the current element and reduce the remaining sum.

2. Exclude the current element and move to the next one.

Steps:

1. Start with an empty subset.

2. Add elements one by one and check if the sum matches S.

3. If a valid subset is found, print it.

4. If the sum exceeds S, backtrack (remove the last added element and try a different path).

4⃣ Time Complexity

• Worst case: O(2^N) (exploring all subsets)

• Optimized using pruning (stopping early when the sum exceeds S)

Applications of the Sum of Subsets Problem

Resource Allocation (Choosing items to fit in a budget)

Subset Sum in Cryptography

Combinatorial Optimization Problems
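A Python sketch of the include/exclude backtracking with the pruning described above:

```python
def subset_sums(nums, target):
    """Print every subset of nums whose elements add up to target."""
    chosen = []

    def backtrack(i, remaining):
        if remaining == 0:                   # valid subset found
            print(chosen)
            return
        if i == len(nums) or remaining < 0:  # prune: overshoot or exhausted
            return
        chosen.append(nums[i])               # choice 1: include nums[i]
        backtrack(i + 1, remaining - nums[i])
        chosen.pop()                         # choice 2: exclude nums[i]
        backtrack(i + 1, remaining)

    backtrack(0, target)

subset_sums([3, 34, 4, 12, 5, 2], 9)   # prints [3, 4, 2] and [4, 5]
```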
Elementary Graph Algorithms

Graphs are data structures used to represent connections between objects, consisting of:

• Vertices (Nodes) → Represent objects.

• Edges (Links) → Represent relationships between objects.

1⃣ Types of Graphs

Directed Graph (Digraph) → Edges have a direction.

Undirected Graph → Edges have no direction.

Weighted Graph → Edges have weights/costs.

Unweighted Graph → All edges have the same cost.

2️⃣ Breadth-First Search (BFS)

BFS is a level-order traversal of a graph, visiting all neighbors before moving deeper.

Algorithm

1. Start from a source node and mark it as visited.

2. Use a queue to explore neighbors first.

3. Repeat for each node until all reachable nodes are visited.

Time Complexity

• Adjacency List: O(V + E)

• Adjacency Matrix: O(V²)

3⃣ Depth-First Search (DFS)

DFS explores as deep as possible before backtracking.

Algorithm

1. Start from a source node and mark it as visited.

2. Use a stack (or recursion) to go deep.

3. Backtrack when no more unexplored neighbors remain.

Time Complexity

• Adjacency List: O(V + E)

• Adjacency Matrix: O(V²)

4⃣ Differences Between BFS and DFS

Feature | BFS (Queue) | DFS (Stack/Recursion)
Approach | Level-wise | Depth-wise
Data Structure | Queue | Stack / Recursion
Use Case | Web Crawling, Shortest Path (Unweighted Graphs) | Maze Solving, Topological Sort

5⃣ Applications of BFS and DFS

BFS → Shortest Path (Dijkstra’s Algorithm), Social Networks

DFS → Maze Solving, Cycle Detection, Topological Sorting
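Python sketches of both traversals on a small adjacency-list graph (illustrative):

```python
from collections import deque

def bfs(graph, start):
    """Level-order traversal using a queue; returns visit order."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in graph[node]:            # explore all neighbors first
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
    return order

def dfs(graph, start, visited=None):
    """Depth-first traversal using recursion; returns visit order."""
    if visited is None:
        visited = set()
    visited.add(start)
    order = [start]
    for nb in graph[start]:               # go as deep as possible, then backtrack
        if nb not in visited:
            order += dfs(graph, nb, visited)
    return order

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(g, "A"))  # ['A', 'B', 'C', 'D']
print(dfs(g, "A"))  # ['A', 'B', 'D', 'C']
```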
Graph Algorithms: MST, Shortest Path, and TSP

1⃣ Minimum Spanning Tree (MST)

A Minimum Spanning Tree (MST) is a subset of edges in a connected, weighted graph that:

Connects all vertices without cycles.

Has the minimum possible total edge weight.

Two MST Algorithms:

1. Kruskal’s Algorithm (Greedy, Edge-Based)

2. Prim’s Algorithm (Greedy, Vertex-Based)

2️⃣ Kruskal’s Algorithm (Greedy)

Approach:

1. Sort all edges by weight (ascending).

2. Pick the smallest edge that doesn’t form a cycle (using a Disjoint Set).

3. Repeat until all vertices are connected.

Time Complexity:

• Sorting Edges: O(E log E)

• Union-Find Operations: O(E log V)

• Overall: O(E log E)

3⃣ Prim’s Algorithm (Greedy)

Approach:

1. Start from any node and add the minimum edge to the MST.

2. Use a priority queue (min-heap) to select the smallest edge at each step.

3. Continue until all vertices are included.

Time Complexity:

• Using a Min-Heap: O((V + E) log V)

4⃣ Single Source Shortest Path (Dijkstra’s Algorithm)

Finds the shortest distance from a single source to all vertices in a weighted graph.

Approach:

1. Initialize distances (infinity except for the source).

2. Use a priority queue (min-heap) to process nodes with the smallest known distance.

3. Update neighbors if a shorter path is found.

Time Complexity:

• Using a Min-Heap: O((V + E) log V)
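A Python sketch of Dijkstra's algorithm with heapq as the min-heap (the adjacency-list encoding, node -> list of (neighbor, weight) pairs, is an assumption made for the example):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; graph maps node -> [(neighbor, weight)]."""
    dist = {node: float("inf") for node in graph}   # step 1: initialize distances
    dist[source] = 0
    heap = [(0, source)]                            # step 2: min-heap of (dist, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:                             # stale entry, skip
            continue
        for v, w in graph[u]:                       # step 3: relax neighbors
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```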

5⃣ All-Pairs Shortest Path (Floyd-Warshall)

Finds the shortest paths between all pairs of vertices in a weighted graph.

Approach:

1. Use a distance matrix initialized with edge weights.

2. Iterate over all nodes as intermediate points to find shorter paths.

Time Complexity:

• Floyd-Warshall: O(V³)

6️⃣ Traveling Salesman Problem (TSP)

Finds the shortest possible route visiting all cities exactly once and returning to the start.

Approach:

1. Brute Force: O(N!)

2. Dynamic Programming (Held-Karp Algorithm): O(2^N × N)

3. Heuristics: Genetic Algorithm, Nearest Neighbor, etc.
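A Python sketch of the Floyd-Warshall triple loop over intermediate vertices (the matrix encoding with float('inf') for missing edges is an assumption made for the example):

```python
INF = float("inf")

def floyd_warshall(dist):
    """All-pairs shortest paths; dist is an n x n matrix (INF = no edge)."""
    n = len(dist)
    for k in range(n):                 # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

d = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(d))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```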


1⃣ Randomized Algorithms

Randomized algorithms use randomness in their logic to achieve good expected performance.

Types:

1. Las Vegas Algorithm → Always gives the correct result but may take a varying amount of time. (Example: QuickSort with a random pivot)

2. Monte Carlo Algorithm → May give incorrect results with a small probability. (Example: Primality Testing)

2️⃣ String Matching Algorithms

Naïve String Matching (Brute Force)

• Compares the pattern with the text one character at a time.

• Time Complexity: O(m × n)

KMP (Knuth-Morris-Pratt) Algorithm

• Uses a partial match table (LPS array) to skip redundant comparisons.

• Time Complexity: O(n + m)
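A Python sketch of KMP, building the LPS (longest proper prefix that is also a suffix) table first:

```python
def build_lps(pattern):
    """LPS[i] = length of the longest proper prefix of pattern[:i+1] that is also a suffix."""
    lps = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = lps[k - 1]              # fall back to a shorter border
        if pattern[i] == pattern[k]:
            k += 1
        lps[i] = k
    return lps

def kmp_search(text, pattern):
    """Return all start indices of pattern in text in O(n + m) time."""
    lps, k, hits = build_lps(pattern), 0, []
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = lps[k - 1]              # skip redundant comparisons
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):           # full match ending at index i
            hits.append(i - k + 1)
            k = lps[k - 1]
    return hits

print(kmp_search("ababcabcabababd", "ababd"))  # [10]
```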
3⃣ NP-Hard & NP-Completeness

P vs NP

• P (Polynomial Time): Problems that can be solved efficiently.

• NP (Nondeterministic Polynomial Time): Problems whose solutions can be verified efficiently.

NP-Complete Problems

• If a problem is in NP and is as hard as any problem in NP, it is NP-Complete.

• Examples: SAT (Boolean Satisfiability), Traveling Salesman Problem (TSP, decision version).

NP-Hard Problems

• At least as hard as NP-Complete problems.

• May not even be in NP (i.e., their solutions might not be efficiently verifiable).

• Examples: Halting Problem, TSP (optimization version), Graph Coloring.

Key Difference:

• NP-Complete → The solution is verifiable in polynomial time.

• NP-Hard → No guarantee that a solution can be verified in polynomial time.

4⃣ Approximation Algorithms

For NP-Hard problems, finding an optimal solution is often infeasible. Approximation Algorithms find a solution close to optimal in polynomial time.

Example: Approximate TSP (Traveling Salesman Problem)

1. Greedy Approach: Use an MST (Minimum Spanning Tree) as an approximation.

2. Christofides Algorithm: Gives a solution at most 1.5 times the optimal.

5⃣ Matrix Operations

1. Matrix Multiplication (Brute Force)

• Standard method: O(n³)

2. Strassen’s Matrix Multiplication

• Divide-and-conquer approach.

• Time Complexity: O(n^2.81) (better than O(n³))

Summary Table

Topic | Algorithm | Time Complexity
Randomized Algo | QuickSort (Random Pivot) | O(n log n)
String Matching | KMP Algorithm | O(n + m)
NP-Complete | SAT, TSP | No known polynomial solution
NP-Hard | Halting Problem | Harder than NP-Complete
Approximation Algo | Christofides for TSP | O(n²)
Matrix Multiplication | Brute Force | O(n³)
Strassen’s Algorithm | Divide & Conquer | O(n^2.81)
