1. Searching Algorithms
(a) Linear Search
1. Implementation:
def linear_search(arr, target):
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1
2. Example Usage:
arr = [3, 5, 2, 8, 6]
print(linear_search(arr, 8)) # Output: 3
3. Time Complexity:
Best: O(1) – target at the beginning.
Average: O(n)
Worst: O(n) – target at the end or not present.
4. Space Complexity:
O(1) – uses constant space.
(b) Binary Search
1. Implementation:
def binary_search(arr, target):
    # Requires arr to be sorted in ascending order.
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
2. Example Usage:
arr = [2, 4, 6, 8, 10]
print(binary_search(arr, 8)) # Output: 3
3. Time Complexity:
Best: O(1)
Average/Worst: O(log n)
4. Space Complexity:
O(1)
(c) Comparison
Prefer Linear Search: When the list is small or unsorted.
Prefer Binary Search: When the list is large and sorted.
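For example, using the two functions above: binary_search can return a wrong result on unsorted input, while linear_search does not care about order.
data = [7, 1, 9, 3, 5]
print(linear_search(data, 1))   # 1 – works on unsorted data
print(binary_search(data, 1))   # -1 – wrong: binary search assumes sorted input

data.sort()                     # [1, 3, 5, 7, 9]
print(binary_search(data, 1))   # 0 – correct once the list is sorted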
---
2. Sorting Algorithms
(a) Bubble Sort
1. Implementation:
def bubble_sort(arr):
    a = arr.copy()
    n = len(a)
    for i in range(n):
        swapped = False
        for j in range(0, n - i - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # no swaps in this pass: the list is already sorted
            break
    return a
2. Example Usage:
arr = [5, 3, 1, 4, 2]
print(bubble_sort(arr)) # Output: [1, 2, 3, 4, 5]
3. Time Complexity:
Best: O(n) – already sorted (the early-exit check stops after one pass).
Average/Worst: O(n²)
4. Space Complexity: O(1)
5. Stability: Yes, it’s stable.
(b) Insertion Sort
def insertion_sort(arr):
    a = arr.copy()
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and key < a[j]:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
Time: Best O(n), Average/Worst O(n²)
Space: O(1)
Stability: Yes
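Example usage (same input as the bubble sort example):
arr = [5, 3, 1, 4, 2]
print(insertion_sort(arr))  # Output: [1, 2, 3, 4, 5]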
(c) Merge Sort
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result
Time: O(n log n)
Space: O(n)
Stability: Yes
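Example usage:
arr = [5, 3, 1, 4, 2]
print(merge_sort(arr))  # Output: [1, 2, 3, 4, 5]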
(d) Quick Sort
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    less = [x for x in arr[1:] if x <= pivot]
    greater = [x for x in arr[1:] if x > pivot]
    return quick_sort(less) + [pivot] + quick_sort(greater)
Time: Best/Average O(n log n), Worst O(n²)
Space: O(n) on average for this list-building version; an in-place partition variant needs only O(log n) average stack space.
Stability: No
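Example usage:
arr = [5, 3, 1, 4, 2]
print(quick_sort(arr))  # Output: [1, 2, 3, 4, 5]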
(e) Sorting Comparison
Bubble/Insertion Sort: Efficient for small or nearly sorted data.
Merge vs Quick Sort:
Use Merge Sort for stable sorting needs and linked lists.
Use Quick Sort for fast average-case performance (in-place when implemented with an in-place partition; the list-building version above is not).
A short demonstration of the stability difference follows below.
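To make the stability claims concrete, here is a small sketch. Rec is a hypothetical helper class (not part of the notes) that compares records by key only, so records with equal keys stay distinguishable by their tag:
class Rec:
    # Hypothetical record: ordering looks only at key; tag identifies the record.
    def __init__(self, key, tag):
        self.key, self.tag = key, tag
    def __lt__(self, other):
        return self.key < other.key
    def __le__(self, other):
        return self.key <= other.key
    def __gt__(self, other):
        return self.key > other.key
    def __repr__(self):
        return f"{self.key}{self.tag}"

data = [Rec(2, 'a'), Rec(1, 'x'), Rec(2, 'b')]
print(merge_sort(data))  # [1x, 2a, 2b] – equal keys keep their original order (stable)
print(quick_sort(data))  # [1x, 2b, 2a] – the pivot 2a ends up after 2b (not stable)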
---
3. Tree Algorithms (Binary Search Tree)
(a) Implementation
class Node:
    def __init__(self, key):
        self.left = None
        self.right = None
        self.val = key
def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.val:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def search_bst(root, key):
    if root is None or root.val == key:
        return root
    if key < root.val:
        return search_bst(root.left, key)
    return search_bst(root.right, key)

def inorder(root):
    return inorder(root.left) + [root.val] + inorder(root.right) if root else []

def preorder(root):
    return [root.val] + preorder(root.left) + preorder(root.right) if root else []

def postorder(root):
    return postorder(root.left) + postorder(root.right) + [root.val] if root else []
(b) Complexity Analysis
Insertion/Search:
Best: O(log n) – balanced tree.
Worst: O(n) – unbalanced tree (see the sketch below).
Traversal: O(n)
Space: O(h), where h is the tree height (recursion stack).
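A quick sketch of the worst case: inserting keys that are already sorted produces a right-skewed tree of height n, so search degrades to O(n).
root = None
for key in [1, 2, 3, 4, 5]:
    root = insert(root, key)   # every new key goes to the right child
print(preorder(root))          # [1, 2, 3, 4, 5] – effectively a linked list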
4. Graph Algorithms
(a) Breadth-First Search (BFS)
from collections import deque

def bfs(graph, start):
    # graph: adjacency map whose values are sets of neighbors
    visited = set()
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        if node not in visited:
            visited.add(node)
            order.append(node)
            queue.extend(graph[node] - visited)
    return order
Time: O(V + E)
Space: O(V)
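A usage example; the implementation above assumes the graph is a dict mapping each node to a set of neighbors (which is what graph[node] - visited relies on):
graph = {
    'A': {'B', 'C'},
    'B': {'A', 'D'},
    'C': {'A', 'D'},
    'D': {'B', 'C'},
}
print(bfs(graph, 'A'))  # e.g. ['A', 'B', 'C', 'D'] – order within a level depends on set iteration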
(b) Depth-First Search (DFS)
def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    order = [start]
    for neighbor in graph[start] - visited:
        order += dfs(graph, neighbor, visited)
    return order
Time: O(V + E)
Space: O(V)
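Using the same set-based graph as in the BFS example:
print(dfs(graph, 'A'))  # e.g. ['A', 'B', 'D', 'C'] – one valid depth-first order; the exact order depends on set iteration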
(c) Comparison
BFS is preferred:
When you want the shortest path in an unweighted graph (see the sketch below).
For level-wise exploration.
DFS is preferred:
When exploring deep paths (e.g., solving puzzles).
When memory is limited.
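As a sketch of the shortest-path use case, bfs_shortest_path below is an illustrative helper (not part of the original notes) that runs BFS while recording parent links, then reconstructs one minimum-edge path on the same set-based graph used above:
from collections import deque

def bfs_shortest_path(graph, start, goal):
    # BFS with parent links: returns one path with the fewest edges,
    # or None if goal is unreachable from start.
    parents = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for neighbor in graph[node] - set(parents):
            parents[neighbor] = node
            queue.append(neighbor)
    return None

print(bfs_shortest_path(graph, 'A', 'D'))  # ['A', 'B', 'D'] or ['A', 'C', 'D']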