Notion of an Algorithm
An algorithm is a finite, well-defined sequence of steps designed to solve a particular problem or
perform a specific task.
Key Characteristics
Input: Takes zero or more inputs.
Output: Produces at least one output.
Definiteness: Each step is precisely and unambiguously defined.
Finiteness: The algorithm must terminate after a finite number of steps.
Effectiveness: Each operation must be basic enough to be carried out using known techniques.
Simple Example
Let's say you want to find the largest number in a list:
1. Start with the first element as the maximum.
2. Compare it with the next element.
3. If the next element is larger, update the maximum.
4. Repeat until the end of the list.
5. Return the final maximum.
This is a basic algorithm: clear, effective, and finite.
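A minimal C++ sketch of these steps (the function name findMax and the assumption of a non-empty array are ours):
cpp
int findMax(const int arr[], int n) {
    int maxVal = arr[0];          // Step 1: first element is the current maximum
    for (int i = 1; i < n; i++) { // Steps 2-4: compare with each remaining element
        if (arr[i] > maxVal)
            maxVal = arr[i];      // Step 3: update the maximum
    }
    return maxVal;                // Step 5: return the final maximum
}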
Fundamentals of Algorithmic Problem Solving
1. Understanding the Problem
Identify the inputs and expected outputs.
Clarify edge cases and constraints (e.g., time or memory limits).
Translate vague statements into precise definitions.
Example: In a pathfinding algorithm, is diagonal movement allowed? Are there negative edge weights?
2. Analyzing the Problem Domain
Determine if the problem is computationally feasible.
Is it tractable (solvable in polynomial time) or NP-hard/undecidable?
3. Examining the Capabilities of the Computational Device
Consider the model (e.g., RAM, Turing Machine).
Recognize hardware/software limitations (memory, processing power).
Keep in mind whether it’s suitable for parallel processing, embedded devices, etc.
4. Exact vs Approximate Solving
Some problems are too complex for exact solutions within time limits.
Opt for:
o Exact algorithms (e.g., Dijkstra for shortest path)
o Heuristics/Approximations (e.g., Greedy for Knapsack)
5. Selecting the Right Algorithm Design Technique
Based on problem type, choose from:
o Divide and Conquer (e.g., Quick Sort)
o Greedy (e.g., Huffman Coding)
o Dynamic Programming (e.g., 0-1 Knapsack)
o Backtracking (e.g., Sudoku Solver)
o Brute-force (for exhaustive search)
o Randomization, greedy local search, etc.
6. Designing the Algorithm
Break the task into logical steps.
Sketch decision trees, pseudocode, or flowcharts.
Think about data structure synergy (arrays, trees, graphs, etc.).
7. Specifying the Algorithm
Use clear, language-independent formats:
o Pseudocode
o Flowcharts
o Structured English
8. Proving Correctness
Prove that it produces the correct output for every possible valid input.
Use techniques like:
o Loop invariants
o Mathematical induction
9. Analyzing Time and Space Efficiency
What’s the algorithm’s:
o Time complexity in best, average, worst cases?
o Space complexity?
Use Asymptotic Notations: O(n), Θ(n), Ω(n)
10. Coding and Implementation
Translate from pseudocode to real code (C++, Python, etc.).
Validate with sample test cases, including edge cases.
Fundamentals of the Analysis of Algorithm Efficiency
1. Analysis Framework
Evaluate an algorithm independent of any specific programming language.
Focus on resource usage:
o Time Complexity: How many basic steps does it take to solve a problem of size n?
o Space Complexity: How much memory does it require?
2. Measuring Input Size
Denoted by n: the number of elements, digits, vertices, etc.
Example:
o For a sorting algorithm: n = number of elements.
o For a graph algorithm: n = number of nodes or edges.
3. Units for Measuring Running Time
Based on primitive operations:
o Comparisons, assignments, additions, loop iterations, etc.
Focus on how the number of these operations grows with n.
4. Orders of Growth (Rate of Increase)
Describes how an algorithm’s resource usage grows as the input grows.
5. Types of Efficiency Analysis
Worst-Case: Maximum number of steps on any input of size n.
Best-Case: Minimum number of steps (often idealized).
Average-Case: Expected number of steps over all inputs.
6. Asymptotic Notation
Used to express efficiency for large n:
Notation | Meaning | Usage
O(f(n)) | Upper bound (≤ performance cap) | Worst-case focus
Ω(f(n)) | Lower bound (≥ performance floor) | Best-case analysis
Θ(f(n)) | Tight bound (sandwiched between upper and lower) | When bounds are exact
7. Properties & Tricks
Using Limits:
o If lim_{n→∞} f(n)/g(n) = 0, then f(n) = o(g(n)) (f grows strictly more slowly than g).
Transitivity:
o If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).
Additivity:
o If an algorithm does two things sequentially (e.g., searching and sorting), total time = the sum of their individual complexities.
8. Basic Efficiency Classes (Recap)
Class | Typical Use Cases
Constant | Simple access (e.g., array lookup)
Logarithmic | Binary search
Linear | Scanning arrays
Linearithmic | Efficient sorts like Merge Sort
Quadratic | Simple brute-force or nested loops
Exponential | Complex recursive problems (e.g., subsets, permutations)
Asymptotic Notations
Asymptotic Notations: The Language of Efficiency
Asymptotic analysis gives a high-level abstraction of an algorithm's performance by describing
its growth rate as the input size n → ∞. We focus on the dominant term and ignore
constants and low-order terms.
1. Big O Notation ( O(f(n)) )
Upper Bound
Describes the worst-case growth rate.
Ensures an algorithm never takes more time than this (up to constant factors).
Example: If an algorithm has time complexity O(n²), it won't take more than c·n² time for large enough n.
Used when we want a performance guarantee.
2. Omega Notation ( Ω(f(n)) )
Lower Bound
Describes the best-case growth rate.
Ensures the algorithm takes at least this much time.
Example: Ω(n) implies that for large n, the algorithm cannot do better than linear time.
Used to show limitations in performance.
3. Theta Notation ( Θ(f(n)) )
Tight Bound
Indicates both upper and lower bounds.
The algorithm's worst-case and best-case times are approximately the same.
Example: Θ(n log n) means the time taken is always proportional to n log n, regardless of input variations.
Best to use when you're confident about both bounds.
Comparing Functions Using Limits
To compare f(n) and g(n), examine:
lim_{n→∞} f(n)/g(n)
= 0 → f(n) = o(g(n))
= c (0 < c < ∞) → f(n) = Θ(g(n))
= ∞ → f(n) = ω(g(n))
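For example, lim_{n→∞} n/n² = lim_{n→∞} 1/n = 0, so n = o(n²); while lim_{n→∞} (2n² + 3n)/n² = 2, a finite positive constant, so 2n² + 3n = Θ(n²).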
Summary Table
Notation | Bound Type | Describes | Example Use Case
O(f(n)) | Upper | Worst-case time | Merge Sort: O(n log n)
Ω(f(n)) | Lower | Best-case time | Linear Search: Ω(1)
Θ(f(n)) | Tight | Exact bound | Insertion Sort: Θ(n²) for random input
O-notation
What is O-Notation (Big O)?
Big O notation describes the upper bound of an algorithm's running time. It expresses the
worst-case growth rate of the number of operations in relation to the input size n.
It answers: "How badly can the algorithm perform as the input grows?"
Formal Definition
If f(n) and g(n) are functions defined on the natural numbers:
f(n) = O(g(n)) if there exist constants c > 0 and n₀ > 0 such that f(n) ≤ c·g(n) for all n ≥ n₀
f(n): Actual cost or time function of the algorithm.
g(n): Bounding function (simplified dominant term).
c: A constant multiplier.
n₀: Starting point from which this relationship holds.
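Worked example: f(n) = 3n² + 5n is O(n²). Take c = 4 and n₀ = 5; for all n ≥ 5 we have 5n ≤ n², so 3n² + 5n ≤ 4n² = c·g(n).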
Examples
Algorithm | Time Complexity | Big O Notation
Accessing an array element | 1 step | O(1)
Linear search | n comparisons | O(n)
Binary search | log n steps | O(log n)
Bubble Sort | n² comparisons/swaps | O(n²)
Merge Sort | n log n steps | O(n log n)
Recursive Fibonacci (naive) | exponential | O(2ⁿ)
Key Properties
Ignore Constants: O(3n), O(100n), and O(n) are the same asymptotically.
Dominant Terms Matter: O(n² + n) ⇒ O(n²)
Focus on input size growth, not actual runtime.
Why Use Big O?
Language-agnostic way to compare algorithms.
Helps choose the most scalable solution.
Avoids hardware-specific performance bias.
Ω-notation
Ω-Notation Explained
Ω-notation provides a lower bound on the growth rate of an algorithm's running time. It tells
us the minimum time an algorithm must take, regardless of optimizations or best-case inputs.
Formal Definition
f(n) = Ω(g(n)) if there exist constants c > 0 and n₀ > 0 such that f(n) ≥ c·g(n) for all n ≥ n₀
f(n): Actual time or cost function of the algorithm.
g(n): Comparison function for the bound.
c: A constant multiplier.
n₀: A threshold after which the inequality holds.
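Worked example: f(n) = 3n² + 5n is Ω(n²). Take c = 3 and n₀ = 1; for all n ≥ 1, 3n² + 5n ≥ 3n² = c·g(n).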
Intuition
While Big-O is like saying "this car won't go faster than 100 km/h", Ω says "this car can't
go slower than 40 km/h" under optimal conditions.
It gives us insight into best-case scenarios: how well the algorithm can possibly perform.
Examples
Algorithm | Best-Case Input | Ω-Notation
Linear Search | Target at index 0 | Ω(1)
Bubble Sort | Already sorted | Ω(n)
Binary Search | Key at the first midpoint | Ω(1)
Merge Sort | All cases the same | Ω(n log n)
Note: If the best and worst cases are the same, you might have O(g(n)) = Ω(g(n)) = Θ(g(n)).
Why Use Ω-Notation?
Helps understand the efficiency floor: how fast an algorithm can ever be.
Useful in proving lower bounds on problem complexity (e.g., comparison-based sorting needs at least Ω(n log n) comparisons).
Θ-notation
Θ-Notation: The Tight Bound
Θ-notation describes the exact asymptotic behavior of an algorithm. It captures both the upper
and lower bounds, showing how the algorithm truly scales as the input size n grows large.
Formal Definition
f(n) = Θ(g(n)) if there exist constants c₁, c₂ > 0 and n₀ > 0 such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀
f(n): Time or space function of the algorithm.
g(n): Benchmark function.
c₁, c₂: Bounding constants.
n₀: Minimum input size from which the relation holds.
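Worked example: combining the two previous bounds, f(n) = 3n² + 5n is Θ(n²): take c₁ = 3, c₂ = 4, and n₀ = 5, so that 3n² ≤ 3n² + 5n ≤ 4n² for all n ≥ 5.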
Interpretation
It gives a sandwich bound: the algorithm's performance is both no worse than and no better than g(n), asymptotically.
So, if:
o f(n) = O(g(n)) (upper bound), and
o f(n) = Ω(g(n)) (lower bound),
o then f(n) = Θ(g(n)).
Examples
Algorithm | Time Complexity | Θ-Notation
Insertion Sort (average case) | n² | Θ(n²)
Merge Sort | n log n in all cases | Θ(n log n)
Binary Search (worst case) | log n | Θ(log n)
Linear Search (average case) | n | Θ(n)
Why It Matters
O gives you a ceiling, Ω a floor, but Θ defines the true growth rate.
It's the most informative of the three, but it requires matching both bounds, which isn't always possible.
Selection Sort Algorithm and Example
Selection Sort Algorithm
Selection Sort works by repeatedly selecting the minimum (or maximum) element from the
unsorted part and placing it in its correct sorted position.
Algorithm Steps (Ascending Order)
1. Start from index i = 0.
2. Find the minimum element in the remaining unsorted array from index i to n-1.
3. Swap this minimum element with the element at index i.
4. Increment i and repeat until the array is sorted.
Pseudocode
for i = 0 to n - 2
    min_index = i
    for j = i + 1 to n - 1
        if A[j] < A[min_index]
            min_index = j
    swap A[i] and A[min_index]
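The same logic as runnable C++ (a minimal sketch; the function name selectionSort is ours):
cpp
#include <utility> // std::swap

void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int min_index = i;                 // assume position i holds the minimum
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[min_index])
                min_index = j;             // remember the smaller element
        }
        std::swap(arr[i], arr[min_index]); // move the minimum into place
    }
}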
Example
Let’s say you want to sort this array:
Input: [29, 10, 14, 37, 13]
Pass-by-pass Breakdown:
1. First Pass (i = 0): Minimum = 10 → swap with 29 → [10, 29, 14, 37, 13]
2. Second Pass (i = 1): Minimum = 13 → swap with 29 → [10, 13, 14, 37, 29]
3. Third Pass (i = 2): Minimum = 14 → already in place → [10, 13, 14, 37, 29]
4. Fourth Pass (i = 3): Minimum = 29 → swap with 37 → [10, 13, 14, 29, 37]
5. Fifth Pass (i = 4): Already sorted; nothing left to do.
Final Sorted Array:
[10, 13, 14, 29, 37]
Time Complexity
Scenario | Time Complexity | Notes
Best Case | O(n²) | Still quadratic (the scan always runs)
Average Case | O(n²) |
Worst Case | O(n²) |
Space | O(1) | In-place sort
Bubble Sort Algorithm and Example
Bubble Sort Algorithm (Ascending Order)
Bubble Sort works by repeatedly swapping adjacent elements if they are in the wrong order. It
"bubbles" the largest unsorted element to its correct position in each pass.
Steps of the Algorithm:
1. Loop through the array.
2. For each pair of adjacent items:
o If the left element is greater than the right, swap them.
3. Repeat this process for all elements, reducing the range of comparison each time.
4. Stop when no swaps are needed (the array is sorted).
Pseudocode:
for i = 0 to n - 2:
    for j = 0 to n - 2 - i:
        if A[j] > A[j+1]:
            swap A[j] and A[j+1]
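The same logic as runnable C++ (a minimal sketch; the function name bubbleSort and the swapped flag, which gives the optimized O(n) best case noted in the table below, are ours):
cpp
#include <utility> // std::swap

void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;                  // did this pass change anything?
        for (int j = 0; j < n - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {
                std::swap(arr[j], arr[j + 1]);
                swapped = true;
            }
        }
        if (!swapped) break;                   // no swaps: already sorted
    }
}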
Example
Let’s sort this array: Input: [5, 1, 4, 2, 8]
Pass-by-Pass Breakdown:
1. First Pass:
o Compare 5 & 1 → swap → [1, 5, 4, 2, 8]
o 5 & 4 → swap → [1, 4, 5, 2, 8]
o 5 & 2 → swap → [1, 4, 2, 5, 8]
o 5 & 8 → OK
2. Second Pass:
o 1 & 4 → OK
o 4 & 2 → swap → [1, 2, 4, 5, 8]
o 4 & 5 → OK
3. Third Pass:
o 1 & 2 → OK
o 2 & 4 → OK
4. Fourth Pass:
o Sorted already
Sorted Output: [1, 2, 4, 5, 8]
Time and Space Complexity
Case | Input Behavior | Time Complexity
Best Case | No swaps needed | O(n²), or O(n) with the early-exit optimization
Average Case | Many swaps | O(n²)
Worst Case | Reversed array | O(n²)
Space | In-place sort | O(1)
Sequential Search (Linear Search)
This algorithm checks each element of a list one by one until it finds the target value or reaches
the end.
Algorithm Steps
1. Start from the first element.
2. Compare the current element with the target value.
3. If it matches, return the index.
4. If not, move to the next element.
5. If you reach the end and haven’t found it, return "Not Found".
C++ Implementation Example
cpp
int linearSearch(int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key)
            return i; // Found at index i
    }
    return -1; // Not found
}
Example
Let’s say we’re searching for 25 in the array:
Input Array: [12, 7, 25, 33, 5] Target: 25
Compare 12 → no
Compare 7 → no
Compare 25 → Found at index 2
Output: 2 (if indexing starts from 0)
Time and Space Complexity
Case | Time Complexity
Best Case | O(1)
Average Case | O(n)
Worst Case | O(n)
Space | O(1)
Use Cases
Small datasets
Unsorted arrays
When simplicity is preferred over performance
Naive String-Matching Algorithm
This algorithm checks for a given pattern in a text by sliding the pattern over the text one
character at a time and checking for a match at each position.
Algorithm Steps (Naive Approach)
1. Let:
o T = Text of length n
o P = Pattern of length m
2. For every position i from 0 to n - m:
o Compare the substring T[i...i+m-1] with P
o If all characters match, report a match at index i
C++ Code Example
cpp
#include <iostream>
#include <string>
using namespace std;

void naiveSearch(string text, string pattern) {
    int n = text.length();
    int m = pattern.length();
    for (int i = 0; i <= n - m; i++) {
        int j;
        for (j = 0; j < m; j++) {
            if (text[i + j] != pattern[j])
                break; // mismatch: slide the pattern one position
        }
        if (j == m) { // all m characters matched
            cout << "Pattern found at index " << i << endl;
        }
    }
}
Example
Text: "ABABDABACDABABCABAB" Pattern: "ABABCABAB"
1. Start comparing from index 0
2. Slide one by one…
3. Match found at index 10
Output: "Pattern found at index 10"
Time Complexity
Case | Complexity
Best Case | O(n), when mismatches occur early
Worst Case | O(n·m), when characters repeat heavily
Space | O(1) extra space
Quick Sort Algorithm and Example
Quick Sort Algorithm (Divide and Conquer)
Core Idea
Quick Sort picks a pivot element and partitions the array into two parts:
Elements less than the pivot
Elements greater than the pivot
Then it recursively applies the same logic to each part.
Algorithm Steps
1. Choose a Pivot (first, last, middle, or random element)
2. Partition the array around the pivot:
o Place elements smaller than the pivot to its left
o Place elements greater than the pivot to its right
3. Recursively apply the above steps to the left and right sub-arrays
4. Base case: Arrays of length 0 or 1 are already sorted
C++ Code Snippet
cpp
#include <utility> // std::swap
using std::swap;

int partition(int arr[], int low, int high) {
    int pivot = arr[high]; // Pivot chosen as the last element
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            swap(arr[i], arr[j]);
        }
    }
    swap(arr[i + 1], arr[high]); // place the pivot in its final position
    return i + 1; // Return pivot index
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);  // Sort left part
        quickSort(arr, pi + 1, high); // Sort right part
    }
}
Example
Input: [10, 80, 30, 90, 40, 50, 70] Let’s say the pivot is 70
1. Partition step: [10, 30, 40, 50] [70] [80, 90]
2. Recursively sort the left and right sides
o Left: sort [10, 30, 40, 50]
o Right: sort [80, 90]
Final Sorted Output: [10, 30, 40, 50, 70, 80, 90]
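A minimal driver for this example (assumes it is compiled together with the partition and quickSort functions above):
cpp
#include <iostream>

int main() {
    int arr[] = {10, 80, 30, 90, 40, 50, 70};
    int n = sizeof(arr) / sizeof(arr[0]);
    quickSort(arr, 0, n - 1);
    for (int i = 0; i < n; i++)
        std::cout << arr[i] << " "; // 10 30 40 50 70 80 90
    return 0;
}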
Time and Space Complexity
Case | Time Complexity | Description
Best/Average | O(n log n) | Balanced partitions
Worst Case | O(n²) | Unbalanced partitions (e.g., sorted input with a bad pivot)
Space | O(log n) | Due to the recursion stack
Quick Sort vs Selection/Bubble Sort
Faster on average than Selection and Bubble Sort
Not stable (doesn't preserve the relative order of equal keys)
Used internally in many standard libraries due to its speed
Binary Search Algorithm and Example
Binary Search Algorithm (Iterative)
Binary Search repeatedly divides the search interval in half to locate a target value.
Steps of the Algorithm
1. Set two pointers: low = 0 and high = n - 1
2. While low ≤ high:
o Compute the middle index: mid = (low + high) / 2
o If arr[mid] == key, return mid (target found)
o If arr[mid] < key, search the right half → low = mid + 1
o If arr[mid] > key, search the left half → high = mid - 1
3. If not found, return -1 (not present)
C++ Code Snippet
cpp
int binarySearch(int arr[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1; // Key not found
}
Example
Input Array: [5, 10, 15, 20, 25, 30, 35] Target (key): 25
Steps:
low = 0, high = 6, mid = 3 → arr[3] = 20
Since 25 > 20, search right half → low = 4
Now mid = (4 + 6)/2 = 5 → arr[5] = 30
Since 25 < 30, search the left half → high = 4
Now mid = (4 + 4)/2 = 4 → arr[4] = 25 → Found at index 4
Time and Space Complexity
Case | Time Complexity | Description
Best Case | O(1) | Found at mid on the first check
Worst Case | O(log n) | Each step halves the search space
Space | O(1) | Iterative version
If implemented recursively, the space becomes O(log n) due to the call stack.
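For comparison, a recursive variant (our own sketch, not part of the original notes); each half-interval call adds one stack frame, which is where the O(log n) space comes from:
cpp
int binarySearchRec(int arr[], int low, int high, int key) {
    if (low > high)
        return -1;                                       // base case: not found
    int mid = (low + high) / 2;
    if (arr[mid] == key)
        return mid;
    else if (arr[mid] < key)
        return binarySearchRec(arr, mid + 1, high, key); // search right half
    else
        return binarySearchRec(arr, low, mid - 1, key);  // search left half
}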
Merge Sort: The Idea
Merge Sort works by:
1. Dividing the array into halves recursively until each sub-array has one element.
2. Merging the sorted sub-arrays to produce new sorted arrays, until the whole array is
sorted.
Steps of the Algorithm
1. If the array has 0 or 1 element, it is already sorted.
2. Divide the array into two halves.
3. Recursively sort each half using Merge Sort.
4. Merge the two sorted halves into one sorted array.
C++ Code Snippet
cpp
#include <vector>
using std::vector;

void merge(int arr[], int l, int m, int r) {
    int n1 = m - l + 1;       // size of left sub-array
    int n2 = r - m;           // size of right sub-array
    vector<int> L(n1), R(n2); // temp arrays (std::vector instead of non-standard VLAs)
    for (int i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (int j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];
    // Merge the temp arrays back into arr[l..r]
    int i = 0, j = 0, k = l;
    while (i < n1 && j < n2) {
        arr[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    }
    while (i < n1) arr[k++] = L[i++]; // copy any leftovers from L
    while (j < n2) arr[k++] = R[j++]; // copy any leftovers from R
}

void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2;
        mergeSort(arr, l, m);     // sort left half
        mergeSort(arr, m + 1, r); // sort right half
        merge(arr, l, m, r);      // merge both halves
    }
}
Example
Input: [38, 27, 43, 3, 9, 82, 10]
Step-by-step Breakdown:
Divide:
[38, 27, 43, 3] and [9, 82, 10]
→ [38, 27] and [43, 3] → [9, 82] and [10]
→ [38] + [27] → merge → [27, 38]
→ [43] + [3] → merge → [3, 43]
→ [9] + [82] → merge → [9, 82]
Now:
[27, 38] + [3, 43] → merge → [3, 27, 38, 43]
[9, 82] + [10] → merge → [9, 10, 82]
Final merge:
[3, 27, 38, 43] + [9, 10, 82] → [3, 9, 10, 27, 38, 43, 82]
Sorted Output: [3, 9, 10, 27, 38, 43, 82]
Time and Space Complexity
Scenario | Time | Space
Best Case | O(n log n) | O(n) (due to temp arrays)
Worst Case | O(n log n) | O(n)
Average Case | O(n log n) | O(n)
Insertion Sort Algorithm and Example
Insertion Sort Algorithm
Core Idea
Insertion Sort works the way you might sort playing cards:
Start with an empty left hand and the cards face down on the table.
Take the cards one at a time and insert each into its correct position in the sorted part.
Algorithm Steps (Ascending Order)
1. Start from the second element (i = 1), treating the first as sorted.
2. Store the current element in a variable, say key.
3. Compare it with the elements before it (j = i - 1).
4. Shift larger elements to the right.
5. Insert key into the correct sorted position.
C++ Code Snippet
cpp
for (int i = 1; i < n; i++) {
    int key = arr[i];
    int j = i - 1;
    while (j >= 0 && arr[j] > key) {
        arr[j + 1] = arr[j]; // Shift right
        j--;
    }
    arr[j + 1] = key; // Insert key
}
Example: Sort [9, 5, 1, 4, 3]
Let’s walk through it step-by-step:
1. i = 1, key = 5: 9 > 5 → shift 9 → insert 5 → [5, 9, 1, 4, 3]
2. i = 2, key = 1: 9 > 1 → shift, 5 > 1 → shift → insert 1 → [1, 5, 9, 4, 3]
3. i = 3, key = 4: 9 > 4 → shift, 5 > 4 → shift → insert 4 → [1, 4, 5, 9, 3]
4. i = 4, key = 3: 9 > 3 → shift, 5 > 3 → shift, 4 > 3 → shift → insert 3 → [1, 3, 4, 5, 9]
Sorted Output: [1, 3, 4, 5, 9]
Time and Space Complexity
Case | Time Complexity | Notes
Best Case | O(n) | Already sorted
Average/Worst Case | O(n²) | Random or reverse-sorted order
Space | O(1) | In-place sort (no extra memory)
Topological Sorting Algorithm and Example
What is Topological Sorting?
Topological Sort is a linear ordering of the vertices of a Directed Acyclic Graph (DAG) such that for
every directed edge u → v, vertex u appears before v in the ordering.
It only works on DAGs: if the graph has a cycle, topological sorting is not possible.
Applications
Task scheduling (with dependencies)
Build systems (like Makefiles)
Course prerequisite planning
Data serialization
Algorithm: Kahn’s Algorithm (Using Indegree)
Steps:
1. Compute indegree of all nodes.
2. Add all nodes with indegree 0 to a queue.
3. While the queue is not empty:
o Remove a node from the queue and add it to the topological order.
o Decrease indegree of all its neighbors by 1.
o If any neighbor’s indegree becomes 0, add it to the queue.
4. If the topological order contains all nodes → success. Otherwise, a cycle exists.
C++ Code Snippet
cpp
#include <iostream>
#include <vector>
#include <queue>
using namespace std;

void topologicalSort(int V, vector<vector<int>>& adj) {
    vector<int> indegree(V, 0);
    for (int u = 0; u < V; u++) {
        for (int v : adj[u]) {
            indegree[v]++; // count incoming edges
        }
    }
    queue<int> q;
    for (int i = 0; i < V; i++)
        if (indegree[i] == 0)
            q.push(i); // start from nodes with no prerequisites
    vector<int> topo;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        topo.push_back(u);
        for (int v : adj[u]) {
            if (--indegree[v] == 0)
                q.push(v);
        }
    }
    for (int node : topo)
        cout << node << " ";
}
Example
Let’s say we have these edges:
Edges: 5→0, 5→2, 4→0, 4→1, 2→3, 3→1
This forms a DAG, and one valid topological sort is:
5→4→2→3→1→0
Multiple valid orders can exist!
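A minimal driver building this example graph (assumes it is compiled together with the topologicalSort function above; note that Kahn's algorithm with a FIFO queue prints 4 5 0 2 3 1 here, another valid order):
cpp
int main() {
    int V = 6;
    vector<vector<int>> adj(V);
    adj[5] = {0, 2}; // edges 5→0, 5→2
    adj[4] = {0, 1}; // edges 4→0, 4→1
    adj[2] = {3};    // edge 2→3
    adj[3] = {1};    // edge 3→1
    topologicalSort(V, adj); // prints: 4 5 0 2 3 1
    return 0;
}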
Time Complexity
Time: O(V + E)
Space: O(V)
Depth First Search Algorithm and Example
Depth First Search (DFS) Algorithm
Core Idea
DFS starts at a source node and explores as far down a path as possible before backing up to try
other paths.
It works beautifully on:
Graphs and Trees (directed or undirected)
Both connected and disconnected graphs
DFS Algorithm (Recursive)
1. Mark the current node as visited.
2. Explore each unvisited neighbor recursively.
3. Repeat until all vertices reachable from the starting point are visited.
C++ Code (Adjacency List, Recursive)
cpp
#include <iostream>
#include <vector>
using namespace std;
void DFS(int node, vector<vector<int>>& adj, vector<bool>& visited) {
    visited[node] = true;
    cout << node << " "; // Process current node
    for (int neighbor : adj[node]) {
        if (!visited[neighbor]) {
            DFS(neighbor, adj, visited);
        }
    }
}
Example
Graph:
Vertices: 0, 1, 2, 3, 4
Edges: 0→1, 0→2, 1→3, 1→4
Adjacency List:
0: [1, 2]
1: [3, 4]
2: []
3: []
4: []
DFS Starting from 0:
Visit 0 → 1 → 3 → backtrack → 4 → backtrack → 2
Output: 0 1 3 4 2
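A minimal driver for this example (assumes it is compiled together with the DFS function above):
cpp
int main() {
    vector<vector<int>> adj = {{1, 2}, {3, 4}, {}, {}, {}};
    vector<bool> visited(adj.size(), false);
    DFS(0, adj, visited); // prints: 0 1 3 4 2
    return 0;
}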
Time and Space Complexity
Aspect | Complexity
Time | O(V + E)
Space | O(V) (visited array + call stack)
Where:
V is the number of vertices
E is the number of edges
Breadth First Search Algorithm and Example
Breadth First Search (BFS) Algorithm
Core Idea
BFS starts from a source vertex and explores all of its immediate neighbors before going deeper,
unlike DFS, which dives deep first.
It uses a queue to track the next vertex to explore and a visited array to avoid revisiting nodes.
Algorithm Steps
1. Choose a starting node (source).
2. Mark it as visited and enqueue it.
3. While the queue is not empty:
o Dequeue a node u from the queue
o Process u (print or store)
o For each unvisited neighbor v of u:
Mark v as visited
Enqueue v
C++ Code (Adjacency List, Iterative)
cpp
#include <iostream>
#include <vector>
#include <queue>
using namespace std;

void bfs(int start, vector<vector<int>>& adj, vector<bool>& visited) {
    queue<int> q;
    q.push(start);
    visited[start] = true;
    while (!q.empty()) {
        int node = q.front();
        q.pop();
        cout << node << " "; // Process current node
        for (int neighbor : adj[node]) {
            if (!visited[neighbor]) {
                visited[neighbor] = true;
                q.push(neighbor);
            }
        }
    }
}
Example
Graph:
Vertices: 0, 1, 2, 3, 4
Edges: 0 → 1, 0 → 2, 1 → 3, 1 → 4
Adjacency List:
0: [1, 2]
1: [3, 4]
2: []
3: []
4: []
BFS Starting from Node 0:
Visit 0 → Enqueue 1, 2
Visit 1 → Enqueue 3, 4
Visit 2
Visit 3
Visit 4
Output: 0 1 2 3 4
Time and Space Complexity
Metric | Complexity
Time | O(V + E)
Space | O(V)
Where:
V = number of vertices
E = number of edges