DSA - Notes 2

The document is a final course summary on Data Structures and Algorithms, covering key topics such as complexity analysis, iterative and recursive algorithms, hashing, collision handling, and graph algorithms. It includes detailed sections on various data structures, their properties, and algorithms for solving problems using recurrence relations. The content is structured to provide a comprehensive overview of essential concepts and techniques in the field.


Data Structures and Algorithms: Final Course Summary

Author: A Very Farig Guy

May 1, 2025

Contents
1 Complexity Analysis
  1.1 Definition
  1.2 Common Time Complexity Classes
  1.3 Asymptotic Notation
  1.4 Growth Rate Comparison Table
2 Iterative and Recursive Algorithms
  2.1 Iterative Algorithms
  2.2 Recursive Algorithms
  2.3 Comparison of Iteration vs Recursion
3 Hashing
  3.1 Definition
  3.2 Terminology
  3.3 Ideal Hash Function Properties
  3.4 Common Hashing Techniques
  3.5 Python-like Pseudocode for Hash Table
4 Collision Handling
  4.1 Separate Chaining
  4.2 Open Addressing
    4.2.1 Linear Probing
    4.2.2 Quadratic Probing
    4.2.3 Double Hashing
  4.3 Comparison of Methods
5 Recurrence Relations
  5.1 Definition
  5.2 Types of Recurrence Relations
  5.3 Solving Recurrences
    5.3.1 Example 1: Simple Recursion
    5.3.2 Example 2: Divide and Conquer Recurrence
    5.3.3 Example 3: Exponential Recurrence
6 Graphs and Shortest-Path Algorithms
  6.1 Graphs
  6.2 Shortest Path Algorithms
    6.2.1 Dijkstra's Algorithm
    6.2.2 Example
7 Trees and Binary Search Trees (BSTs)
  7.1 Tree Definition
  7.2 Binary Search Tree (BST)
  7.3 BST Operations
    7.3.1 Insertion
    7.3.2 Deletion
  7.4 Time Complexity of BST Operations
8 Graph Representation
  8.1 Graph Definition
  8.2 Types of Graph Representation
    8.2.1 Edge List
    8.2.2 Adjacency List
    8.2.3 Adjacency Matrix
  8.3 Choosing the Right Representation
9 Solving 10 Questions Using Recurrence Relations
  9.1 Problem 1: Linear Recurrence
  9.2 Problem 2: Divide and Conquer Recurrence
  9.3 Problem 3: Exponential Recurrence
  9.4 Problem 4: Nested Recurrence
  9.5 Problem 5: Logarithmic Recurrence
  9.6 Problem 6: Polynomial Recurrence
  9.7 Problem 7: Fibonacci Recurrence
  9.8 Problem 8: Log-Linear Recurrence
  9.9 Problem 9: Square Recurrence
  9.10 Problem 10: Cubic Recurrence
10 Solving 4 More Problems Using Back-Substitution
  10.1 Problem 1: Simple Linear Recurrence
  10.2 Problem 2: Decrease-and-Conquer Recurrence
  10.3 Problem 3: A Quadratic Recurrence
  10.4 Problem 4: Exponential Recurrence
11 Final Exam Style Questions with Solutions
  11.1 Time Complexity
  11.2 Recurrence Relations
  11.3 Hash Tables
  11.4 Graphs and Shortest-Path Algorithms
  11.5 Trees and Binary Search Trees (BSTs)
12 Conclusion

1 Complexity Analysis
1.1 Definition
Algorithmic complexity refers to the rate at which the computational resources used by an
algorithm (such as time or space) increase as the input size n increases. It allows us to:
• Compare the efficiency of algorithms.
• Predict performance on large inputs.
• Choose optimal solutions based on input constraints.

1.2 Common Time Complexity Classes


1. Constant Time (O(1)): Execution time remains constant regardless of input size.
• Example: Variable assignment, arithmetic operations.
2. Logarithmic Time (O(log n)): Time grows logarithmically with input size. Common in
divide-and-conquer approaches like binary search.
• Example: Binary Search.
3. Linear Time (O(n)): Time grows linearly with input size.
• Example: Iterating through an array.
4. Linearithmic Time (O(n log n)): Time grows slightly faster than linear but much slower
than quadratic.
• Example: Merge Sort.
5. Quadratic Time (O(n²)): Time grows with the square of the input size. Seen in nested
loops.
• Example: Bubble Sort, Matrix operations.
6. Cubic Time (O(n³)): Time grows with the cube of the input size. Rarely used for large
input sizes.
• Example: Naive Matrix Multiplication.
7. Exponential Time (O(2ⁿ)): Time doubles with each additional element. Infeasible for
large n.
• Example: Solving Tower of Hanoi.
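As a concrete sketch of the logarithmic class, a minimal binary search (an illustrative example, not from the original notes) halves the remaining range on every comparison, giving O(log n) comparisons:

```python
def binary_search(arr, target):
    # Each iteration halves the search range: O(log n) comparisons.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```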

1.3 Asymptotic Notation


To describe algorithm efficiency, we use the following:
• Big-O (O): Upper bound — the function grows no faster than g(n); often used for the worst case.
• Big-Ω (Ω): Lower bound — the function grows at least as fast as g(n); often used for the best case.
• Big-Θ (Θ): Tight bound — both an upper and a lower bound (not the same thing as "average case").

Big-O Formal Definition


Let f (n) and g(n) be functions. We say f (n) = O(g(n)) if there exist constants c > 0 and
n0 ≥ 1 such that:
f (n) ≤ c · g(n) for all n ≥ n0

Example
Show that f (n) = 3n + 2 = O(n).
Proof: Let g(n) = n. Choose c = 5, then:

3n + 2 ≤ 5n for all n ≥ 1

Hence, f (n) = O(n).

1.4 Growth Rate Comparison Table


n    O(1)  O(log n)  O(n)  O(n log n)  O(n²)  O(n³)   O(2ⁿ)
1    1     0         1     0           1      1       2
2    1     1         2     2           4      8       4
4    1     2         4     8           16     64      16
8    1     3         8     24          64     512     256
16   1     4         16    64          256    4096    65536
32   1     5         32    160         1024   32768   4.29E9

2 Iterative and Recursive Algorithms


2.1 Iterative Algorithms
An iterative algorithm uses loops to repeat operations until a condition is met. These are
generally easier to understand and implement.

Example 1: Simple Loop


for i in range(n):
print("Hello")
Time Complexity: O(n)

Example 2: Nested Loop


for i in range(n):
for j in range(n):
print("Hello")
Time Complexity: O(n²)

Example 3: Logarithmic Growth


a = 1
while a < n:
a = a * 2
Time Complexity: O(log n)

Example 4: Linear-Logarithmic
for i in range(n):
j = 1
while j < n:
j = j * 2
Time Complexity: O(n log n)

Example 5: Conditional with Constant Time
if a > n:
a = a + 1
else:
a = a - 1
Time Complexity: O(1)

Example 6: Conditional with Variable Complexity


if a < 15:
print("Hello")
else:
for i in range(n):
print(i)
Best Case: O(1)
Worst Case: O(n)

2.2 Recursive Algorithms


A recursive algorithm solves a problem by breaking it into smaller subproblems of the same
type, calling itself with a simpler input.

Example 1: Simple Recursion


def func1(n):
if n <= 1:
return
print("Hello")
func1(n // 2)
Time Complexity: O(log n)

Example 2: Linear Recursion


def countdown(n):
if n == 0:
return
print(n)
countdown(n - 1)
Time Complexity: O(n)

2.3 Comparison of Iteration vs Recursion


• Iteration uses loops; recursion uses function calls.
• Recursive algorithms may use more memory due to function call stack.
• Some problems are naturally recursive (e.g., tree traversals).
• Iteration is usually more efficient in terms of runtime and memory.

3 Hashing
3.1 Definition
Hashing is a technique used to map data (keys) to fixed-size indices in a hash table using a hash
function. The goal is to provide constant-time O(1) access for insertion, deletion, and retrieval.

3.2 Terminology
• Key: The unique identifier used to store/retrieve values.
• Slot Index: The location in the hash table where a key-value pair is stored.
• Hash Table: A data structure that stores key-value pairs.
• Hash Function: Converts a key into a valid slot index.
• Synonyms: Different keys that hash to the same index.
• Collision: Occurs when two keys map to the same slot.
• Probing: Process of handling collisions.

3.3 Ideal Hash Function Properties


• Should distribute keys uniformly.
• Should be fast to compute.
• Should minimize collisions.

3.4 Common Hashing Techniques


1. Direct Hashing:
h(k) = k

2. Arithmetic Methods:
• Addition: h(k) = (k + x)
• Subtraction: h(k) = |k − x|
• Multiplication: h(k) = k × x
• Division: h(k) = k ÷ x
When out of range: h(k) = h(k) mod N , where N is the table size.
3. Modulo Division:
h(k) = k mod N
Choose N as a prime number to reduce collisions.
4. Digit Extraction:
• Extract selected digits from key to form index.
• Example: Extract 1st, 3rd, and last digit.
5. Midsquare Method:
• Square the key, extract middle digits.
6. Folding Method:
• Split key into equal parts and sum them.
• Fold Boundary: Reverse first and last part before summing.

7. Rotation Method:
• Rotate digits in the key before applying a modulus.
8. Pseudorandom Generation:

h(k) = (a · k + c) mod N

Where a, c are constants.


9. Radix Transformation:
• Convert key from one base to another.
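A few of the techniques above can be sketched in Python. The table size N = 11 and the digit choices are illustrative assumptions, not values from the notes:

```python
N = 11  # table size; a prime reduces collisions for modulo division

def mod_hash(k):
    # Technique 3: modulo division
    return k % N

def midsquare_hash(k):
    # Technique 5: square the key, extract the middle digits
    sq = str(k * k)
    mid = len(sq) // 2
    return int(sq[max(mid - 1, 0):mid + 1]) % N

def fold_hash(k, part=2):
    # Technique 6: split the key into equal-width parts and sum them
    s = str(k)
    return sum(int(s[i:i + part]) for i in range(0, len(s), part)) % N

print(mod_hash(25), fold_hash(123456))  # 3 3
```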

3.5 Python-like Pseudocode for Hash Table


class HashTable:
    def __init__(self, size):
        self.size = size
        self.table = [None] * size

    def hash_func(self, key):
        return key % self.size

    def insert(self, key, value):
        # Note: no collision handling here; a second key mapping to
        # the same index overwrites the first (addressed in Section 4).
        index = self.hash_func(key)
        self.table[index] = value

    def get(self, key):
        index = self.hash_func(key)
        return self.table[index]

    def delete(self, key):
        index = self.hash_func(key)
        self.table[index] = None

4 Collision Handling
When two keys hash to the same slot index, we must resolve the collision to avoid data loss.
Two primary strategies are:

4.1 Separate Chaining

In Separate Chaining, each slot in the hash table maintains a chain (e.g., a linked list or
bucket array) of (key, value) pairs that hash to that slot.
Insertion:
1. Compute index: i = h(k).
2. Append (k, v) to chain at slot i.
Lookup:

1. Compute index: i = h(k).
2. Search linearly through the chain at slot i for key k.
Deletion:
1. Compute index: i = h(k).
2. Remove the entry with key k from chain at slot i.
Time Complexity:
• Average Case: O(1 + λ), where λ is load factor.
• Worst Case: O(n), if all keys collide into one chain.

Example Code
class ChainHashTable:
    def __init__(self, size):
        self.table = [[] for _ in range(size)]

    def hash(self, key):
        return key % len(self.table)

    def insert(self, key, value):
        i = self.hash(key)
        self.table[i].append((key, value))

    def get(self, key):
        i = self.hash(key)
        for k, v in self.table[i]:
            if k == key:
                return v
        return None

    def delete(self, key):
        i = self.hash(key)
        self.table[i] = [(k, v) for (k, v) in self.table[i] if k != key]

4.2 Open Addressing


Open Addressing keeps all entries in the table itself. On collision, it probes for another free
slot.

4.2.1 Linear Probing


Probe sequence:
hᵢ(k) = (h(k) + i) mod N,  i = 0, 1, . . .

Insertion:
1. Compute base index: j = h(k).
2. If slot j is occupied, try (j + 1) mod N , then (j + 2) mod N , etc.
Time Complexity:
• Average Case: O(1/(1 − λ)).
• Worst Case: O(n) if table is nearly full.

Code Snippet
def linear_probe_insert(table, key, value):
N = len(table)
i = key % N
for step in range(N):
idx = (i + step) % N
if table[idx] is None:
table[idx] = (key, value)
return
raise Exception("Hash table is full")
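The insert routine above stores (key, value) tuples; a matching lookup follows the same probe sequence and can stop at the first empty slot, since the key could never have been placed beyond it. This is a sketch under the same assumptions, and `linear_probe_get` is an assumed name, not from the notes:

```python
def linear_probe_get(table, key):
    # Follow the same probe sequence as insertion; an empty slot
    # means the key was never stored.
    N = len(table)
    i = key % N
    for step in range(N):
        idx = (i + step) % N
        if table[idx] is None:
            return None
        if table[idx][0] == key:
            return table[idx][1]
    return None  # table full and key absent

# Example: key 7 lives at index 7 % 5 == 2 in a 5-slot table.
table = [None] * 5
table[7 % 5] = (7, "seven")
print(linear_probe_get(table, 7))  # seven
```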

4.2.2 Quadratic Probing


Probe sequence:
hᵢ(k) = (h(k) + c₁i + c₂i²) mod N

4.2.3 Double Hashing


Probe sequence:
hᵢ(k) = (h₁(k) + i · h₂(k)) mod N
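The double-hashing probe sequence can be sketched as follows. The choices of h1, h2, and N = 13 are illustrative assumptions; the key constraint is that h2 must never return 0, or the probe would stall on one slot:

```python
N = 13  # table size

def h1(k):
    return k % N

def h2(k):
    # Ranges over 1..7, never 0, so every probe actually advances.
    return 7 - (k % 7)

def probe_sequence(k, length=5):
    # First `length` slots tried for key k on repeated collisions.
    return [(h1(k) + i * h2(k)) % N for i in range(length)]

print(probe_sequence(31))  # [5, 9, 0, 4, 8]
```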

4.3 Comparison of Methods


• Separate Chaining is simple and never fills up, but uses extra memory.
• Open Addressing is space-efficient but suffers clustering.
• Linear Probing may cause primary clustering.
• Quadratic and Double Hashing reduce clustering at the cost of more complex probes.

5 Recurrence Relations
5.1 Definition
A recurrence relation is an equation that expresses the value of a function in terms of its previous
values. It is commonly used to describe the time complexity of recursive algorithms.

5.2 Types of Recurrence Relations


• Linear Recurrences: Involve terms that are linearly dependent on previous terms.
• Divide and Conquer Recurrences: Involve dividing a problem into subproblems and
combining their results.

5.3 Solving Recurrences


There are multiple methods to solve recurrence relations and find their time complexity:
• Back-Substitution Method
• Recursion-Tree Method

• Master’s Theorem

5.3.1 Example 1: Simple Recursion


T (n) = T (n − 1) + 1, T (1) = 1
Solution: T (n) = O(n) using the back-substitution method.

5.3.2 Example 2: Divide and Conquer Recurrence


T (n) = 2T (n/2) + n, T (1) = 1
Solution: T (n) = O(n log n) using the master theorem.

5.3.3 Example 3: Exponential Recurrence


T (n) = 2T (n − 1) + 1, T (1) = 1
Solution: T (n) = O(2ⁿ) by solving the recurrence.
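The three closed forms can be sanity-checked by evaluating the recurrences directly — a small verification sketch, assuming n is a power of two for the divide-and-conquer case:

```python
def t1(n):  # T(n) = T(n-1) + 1, T(1) = 1  ->  exactly n
    return 1 if n == 1 else t1(n - 1) + 1

def t2(n):  # T(n) = 2T(n/2) + n, T(1) = 1  ->  n(log2(n) + 1)
    return 1 if n == 1 else 2 * t2(n // 2) + n

def t3(n):  # T(n) = 2T(n-1) + 1, T(1) = 1  ->  2^n - 1
    return 1 if n == 1 else 2 * t3(n - 1) + 1

print(t1(10), t2(8), t3(10))  # 10 32 1023
```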

6 Graphs and Shortest-Path Algorithms


6.1 Graphs
A graph is a collection of nodes (vertices) and edges (arcs) connecting pairs of nodes. Graphs
can be:
• Directed or Undirected
• Weighted or Unweighted

6.2 Shortest Path Algorithms


Shortest-path algorithms are used to find the shortest path between two vertices in a weighted
graph.

6.2.1 Dijkstra's Algorithm


Applicability: Works for graphs with non-negative edge weights. It can handle directed and
undirected graphs, with or without cycles.

d[v] = min (d[v], d[u] + w(u, v))

Time Complexity: O(V²) using a simple array, O((V + E) log V ) using a priority queue.

6.2.2 Example
In a graph with nodes {0, 1, 2, 3}, Dijkstra's algorithm can be used to find the shortest path
from node 0 to all other nodes.
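A sketch of Dijkstra's algorithm with a priority queue on a four-node graph; the adjacency list and edge weights below are illustrative assumptions, not from the notes:

```python
import heapq

def dijkstra(adj, source):
    dist = {v: float("inf") for v in adj}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry, already improved
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w  # relaxation: d[v] = min(d[v], d[u] + w)
                heapq.heappush(pq, (dist[v], v))
    return dist

adj = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
print(dijkstra(adj, 0))  # {0: 0, 1: 3, 2: 1, 3: 4}
```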

7 Trees and Binary Search Trees (BSTs)


7.1 Tree Definition
A tree is a hierarchical data structure consisting of nodes connected by edges. The topmost
node is called the root, and the nodes that are directly connected to another node are called
children. A tree with n nodes has n − 1 edges.

7.2 Binary Search Tree (BST)
A binary search tree is a binary tree with the following properties:
• Each node has at most two children (left and right).
• The left subtree of a node contains only nodes with values less than the node's value.
• The right subtree of a node contains only nodes with values greater than the node's value.

7.3 BST Operations


7.3.1 Insertion
To insert a node, compare its value with the current node's value and move left or right accordingly. Continue the process until an empty slot is found.

7.3.2 Deletion
There are three cases when deleting a node:
1. The node is a leaf.
2. The node has one child.
3. The node has two children.
For the third case, we can replace the node with either its inorder predecessor or successor.

7.4 Time Complexity of BST Operations


• Search, Insertion, Deletion: O(h), where h is the height of the tree.
• For a balanced tree, h = O(log n), and for an unbalanced tree, h = O(n).
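The insertion and search operations described above can be sketched as follows; this is an illustrative implementation, not code from the notes:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    # Walk left/right by comparison until an empty slot is found.
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    elif value > root.value:
        root.right = insert(root.right, value)
    return root  # duplicates are ignored

def search(root, value):
    if root is None:
        return False
    if value == root.value:
        return True
    return search(root.left, value) if value < root.value else search(root.right, value)

root = None
for v in [8, 3, 10, 1, 6]:
    root = insert(root, v)
print(search(root, 6), search(root, 7))  # True False
```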

8 Graph Representation
8.1 Graph Definition
A graph is a non-linear data structure consisting of nodes (vertices) and edges (arcs) that
connect pairs of nodes. A graph is represented as G(V, E), where:
• V is the set of vertices, e.g., {V1, V2, V3, . . . , Vn}
• E is the set of edges, e.g., {E1, E2, E3, . . . , En}
A graph can be either directed or undirected, and edges can be weighted or unweighted.

8.2 Types of Graph Representation


There are several ways to represent a graph in memory, each with its pros and cons. The most
common types are:

8.2.1 Edge List


The edge list representation consists of an unordered list of edges. Each edge is represented as
a pair (or triplet for weighted graphs) of vertices.
• In an undirected graph, an edge (u, v) is considered the same as (v, u).
• In a weighted graph, an edge might be represented as (u, v, w), where w is the weight of
the edge.

Example:
E = {(u, v), (v, u), (u, w), (w, u), (v, w), (w, v)}
For a weighted graph:
E = {(u, v, 5), (v, u, 5), (u, w, 3), (w, u, 3)}

Operations:
• Space complexity: O(n + m), where n is the number of vertices and m is the number of
edges.
• Time complexity for inserting or removing edges: O(1).
• Searching for an edge: O(m).

8.2.2 Adjacency List


The adjacency list representation stores a list of neighbors for each vertex. It is efficient in
terms of space when the graph has a large number of vertices but relatively few edges.
Example:

V [u] → [(v, 5), (w, 3)], V [v] → [(u, 5), (w, 2)], V [w] → [(u, 3), (v, 2)]

This representation allows easy traversal of each vertex’s neighbors.


Operations:
• Space complexity: O(n + m).
• Time complexity for searching for an edge: O(deg(v)), where deg(v) is the degree of vertex
v.
• Insertion and removal of vertices and edges: O(1).
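The weighted adjacency list of the example above can be built from an edge list; this is a sketch, and `to_adjacency_list` is an assumed helper name:

```python
from collections import defaultdict

def to_adjacency_list(edges):
    # Each (u, v, w) edge is recorded in both directions,
    # since the example graph is undirected.
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    return dict(adj)

edges = [("u", "v", 5), ("u", "w", 3), ("v", "w", 2)]
print(to_adjacency_list(edges))
```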

8.2.3 Adjacency Matrix


The adjacency matrix representation stores a n × n matrix, where n is the number of vertices.
Each element A[i, j] stores information about the edge between vertices i and j. For unweighted
graphs, the matrix can contain binary values (0 or 1), while for weighted graphs, the matrix
stores the weight of the edge.
Example: For the graph with vertices V = {u, v, w} (rows and columns in the order u, v, w)
and edges E = {(u, v, 3), (v, w, 5)}, the adjacency matrix looks like:

    [ 0 3 0 ]
A = [ 3 0 5 ]
    [ 0 5 0 ]

Operations:
• Space complexity: O(n²).
• Time complexity for accessing an edge: O(1).
• Time complexity for listing neighbors: O(n).

8.3 Choosing the Right Representation


• For dense graphs, an adjacency matrix is preferable because of constant-time edge lookups.
• For sparse graphs, an adjacency list is more space-efficient and allows quicker edge traver-
sal.
• For small graphs with few edges, edge lists might be useful for simplicity.

9 Solving 10 Questions Using Recurrence Relations
In this section, we will solve 10 problems using recurrence relations. These problems illustrate
various approaches to solving recurrence relations, including back-substitution, recursion tree,
and applying the master theorem.

9.1 Problem 1: Linear Recurrence


Given the recurrence relation:

T (n) = T (n − 1) + 1, T (1) = 1

Solution: We will solve this using back-substitution.

T (n) = T (n − 1) + 1
Substitute for T (n − 1):
T (n − 1) = T (n − 2) + 1
Substitute again:
T (n) = T (n − 2) + 1 + 1 = T (n − 2) + 2
Continuing this process:
T (n) = T (n − k) + k
When k = n − 1, we get:

T (n) = T (1) + (n − 1) = 1 + (n − 1) = n

Thus, T (n) = O(n).

9.2 Problem 2: Divide and Conquer Recurrence


Consider the recurrence relation:

T (n) = 2T (n/2) + n, T (1) = 1

Solution: This is a classic divide and conquer recurrence. We will apply the master theorem
for divide-and-conquer recurrences.
The recurrence matches the form:

T (n) = aT (n/b) + f (n)

where a = 2, b = 2, and f (n) = n. Now we calculate log_b a:

log_b a = log₂ 2 = 1

We compare f (n) with n^(log_b a) = n¹: f (n) = n, which is Θ(n^(log_b a)).

According to the master theorem, if f (n) = Θ(n^(log_b a)), then:

T (n) = Θ(n log n)

9.3 Problem 3: Exponential Recurrence
Given the recurrence relation:
T (n) = 2T (n − 1) + 1, T (1) = 1
Solution: Using back-substitution:

T (n) = 2T (n − 1) + 1
Substitute for T (n − 1):
T (n − 1) = 2T (n − 2) + 1
Substitute again:
T (n) = 2(2T (n − 2) + 1) + 1 = 2²T (n − 2) + 2 + 1 = 2²T (n − 2) + 3
Continuing this:
T (n) = 2^k T (n − k) + (2^k − 1)
When k = n − 1, we get:
T (n) = 2^(n−1) T (1) + (2^(n−1) − 1)
Since T (1) = 1:
T (n) = 2^(n−1) + (2^(n−1) − 1) = 2ⁿ − 1
Thus, T (n) = O(2ⁿ).

9.4 Problem 4: Nested Recurrence


Given:
T (n) = T (n/2) + T (n/2) + n, T (1) = 1
Solution: First, simplify the recurrence:
T (n) = 2T (n/2) + n
This matches the form of a divide and conquer recurrence. We can apply the master theorem.
Here a = 2, b = 2, and f (n) = n. Since log_b a = log₂ 2 = 1, and f (n) = Θ(n), the recurrence
falls into the second case of the master theorem:
T (n) = Θ(n log n)

9.5 Problem 5: Logarithmic Recurrence


Given:
T (n) = T (n/2) + log n, T (1) = 1
Solution: Solve using the recursion tree method. We expand the recurrence step by step:

T (n) = T (n/2) + log n


T (n/2) = T (n/4) + log(n/2)
T (n) = T (n/4) + log(n/2) + log n = T (n/4) + log n + log(n/2)
Expanding further:
T (n) = T (n/8) + log(n) + log(n/2) + log(n/4)
The total cost of the recursive calls is the sum of logarithms over all levels:

T (n) = Σ_{i=0}^{log n} log(n/2^i) = log n + log(n/2) + log(n/4) + · · · = O(log² n)

9.6 Problem 6: Polynomial Recurrence
Given:
T (n) = 3T (n/2) + n², T (1) = 1
Solution: Using the master theorem:
T (n) = aT (n/b) + f (n)
Here a = 3, b = 2, and f (n) = n². Calculate log_b a:
log_b a = log₂ 3 ≈ 1.585
Since f (n) = n² = Ω(n^(log₂ 3 + ε)) for some ε > 0, and the regularity condition holds, the
recurrence falls into the third case of the master theorem:
T (n) = Θ(n²)

9.7 Problem 7: Fibonacci Recurrence


Given:
T (n) = T (n − 1) + T (n − 2) + 1, T (1) = 1, T (2) = 1
Solution: The Fibonacci recurrence grows exponentially: each call spawns two smaller calls,
so the call tree roughly doubles at every level. This gives the upper bound:
T (n) = O(2ⁿ)
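The exponential blow-up, and how memoization tames it, can be illustrated with a sketch; this comparison is an addition, not part of the original notes:

```python
from functools import lru_cache

def t_naive(n):
    # T(n) = T(n-1) + T(n-2) + 1: two recursive calls per step,
    # so the call tree grows exponentially.
    if n <= 2:
        return 1
    return t_naive(n - 1) + t_naive(n - 2) + 1

@lru_cache(maxsize=None)
def t_memo(n):
    # Same recurrence, but each n is computed once: O(n) time.
    if n <= 2:
        return 1
    return t_memo(n - 1) + t_memo(n - 2) + 1

print(t_naive(20) == t_memo(20))  # True
```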

9.8 Problem 8: Log-Linear Recurrence


Given:
T (n) = T (n/2) + n log n, T (1) = 1
Solution: Using the recursion tree method, the per-level costs n log n, (n/2) log(n/2),
(n/4) log(n/4), . . . shrink geometrically, so the top level dominates and the total is:
T (n) = Θ(n log n)

9.9 Problem 9: Square Recurrence


Given:
T (n) = 4T (n/2) + n², T (1) = 1
Solution: Using the master theorem with a = 4, b = 2, and f (n) = n², we calculate:
log_b a = log₂ 4 = 2
Since f (n) = Θ(n²) = Θ(n^(log_b a)), the recurrence falls into the second case of the master theorem:
T (n) = Θ(n² log n)

9.10 Problem 10: Cubic Recurrence


Given:
T (n) = 2T (n/2) + n³, T (1) = 1
Solution: Using the master theorem:
T (n) = aT (n/b) + f (n)
Here a = 2, b = 2, and f (n) = n³. Since log_b a = 1 and f (n) = n³ = Ω(n^(1+ε)), the recurrence
falls into the third case of the master theorem:
T (n) = Θ(n³)

10 Solving 4 More Problems Using Back-Substitution
In this section, we will solve 4 additional problems using the back-substitution method. Each
problem will demonstrate the iterative process of substituting values and observing the growth
of the recurrence.

10.1 Problem 1: Simple Linear Recurrence


Given the recurrence:
T (n) = T (n − 1) + 1, T (1) = 1
Solution: Start by expanding the recurrence.

T (n) = T (n − 1) + 1
Substitute for T (n − 1):
T (n − 1) = T (n − 2) + 1
Substitute again:
T (n) = T (n − 2) + 1 + 1 = T (n − 2) + 2
Expanding further:
T (n) = T (n − 3) + 3
Continue this process until we reach T (1):
T (n) = T (1) + (n − 1)
Since T (1) = 1:
T (n) = 1 + (n − 1) = n
Thus, T (n) = O(n).

10.2 Problem 2: Decrease-and-Conquer Recurrence


Given the recurrence:
T (n) = T (n/2) + n, T (1) = 1
Solution: Start by expanding the recurrence.

T (n) = T (n/2) + n
Substitute for T (n/2):
T (n/2) = T (n/4) + n/2
Substitute again:
T (n) = T (n/4) + n/2 + n = T (n/4) + 3n/2
Expanding further:
T (n) = T (n/8) + n/4 + 3n/2 = T (n/8) + 7n/4
Continue until n = 1. After k = log n iterations, the recurrence becomes:

T (n) = T (1) + n · Σ_{i=0}^{log n − 1} 1/2^i

The sum is a geometric series bounded by its infinite limit:

Σ_{i=0}^{∞} 1/2^i = 2

Thus, the solution is:
T (n) ≤ 1 + 2n = O(n)

10.3 Problem 3: A Quadratic Recurrence
Given:
T (n) = T (n − 1) + n², T (1) = 1
Solution: Start by expanding the recurrence.

T (n) = T (n − 1) + n²
Substitute for T (n − 1):
T (n − 1) = T (n − 2) + (n − 1)²
Substitute again:
T (n) = T (n − 2) + (n − 1)² + n²
Expanding further:
T (n) = T (n − 3) + (n − 2)² + (n − 1)² + n²
Continuing down to T (1) = 1 = 1², the terms collect into a full sum of squares:

T (n) = Σ_{k=1}^{n} k²

The sum of squares is:

Σ_{k=1}^{n} k² = n(n + 1)(2n + 1)/6

Thus, the final solution is:

T (n) = n(n + 1)(2n + 1)/6 = O(n³)
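The closed form can be checked against a direct evaluation of the recurrence — a small verification sketch:

```python
def t(n):
    # T(n) = T(n-1) + n^2, T(1) = 1
    return 1 if n == 1 else t(n - 1) + n * n

def closed_form(n):
    # Sum-of-squares formula n(n+1)(2n+1)/6
    return n * (n + 1) * (2 * n + 1) // 6

print(t(10), closed_form(10))  # 385 385
```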

10.4 Problem 4: Exponential Recurrence


Given the recurrence:
T (n) = 2T (n − 1) + 1, T (1) = 1
Solution: Start by expanding the recurrence.

T (n) = 2T (n − 1) + 1
Substitute for T (n − 1):
T (n − 1) = 2T (n − 2) + 1
Substitute again:
T (n) = 2(2T (n − 2) + 1) + 1 = 2²T (n − 2) + 2 + 1
Expanding further:
T (n) = 2³T (n − 3) + 2² + 2 + 1
Continuing the pattern:

T (n) = 2^k T (n − k) + Σ_{i=0}^{k−1} 2^i

When k = n − 1, we get:
T (n) = 2^(n−1) T (1) + (2^(n−1) − 1)
Since T (1) = 1:
T (n) = 2^(n−1) + (2^(n−1) − 1) = 2ⁿ − 1
Thus, T (n) = O(2ⁿ).

11 Final Exam Style Questions with Solutions
This section presents a selection of representative questions sourced from Harvard University
and University of Waterloo final exam practice materials. These questions cover core topics in
data structures and algorithms and include detailed solutions.

11.1 Time Complexity


Q1: What is the time complexity of searching for an element in a balanced binary search tree?
A1: In a balanced BST, each comparison skips approximately half of the remaining tree.
Therefore, the time complexity is:
O(log n)

Q2: True or False: The runtime complexity of range query for kd-trees depends on the spread
factor of points.
A2: True. The spread factor (distribution of data) affects the tree balance, and hence the
query time.

11.2 Recurrence Relations


Q3: Solve the recurrence: T (n) = 2T (n/2) + n, T (1) = 1
A3: This is a divide-and-conquer recurrence. Using the Master Theorem:

a = 2, b = 2, f (n) = n, log_b a = log₂ 2 = 1

Since f (n) = Θ(n^(log_b a)), by Case 2:

T (n) = Θ(n log n)



Q4: Given T (n) = 4T (n/2) + n²√n, find the tightest bound.
A4: Use the Master Theorem:

a = 4, b = 2, f (n) = n^2.5, log_b a = 2

Since f (n) = Ω(n^(2+ε)) and the regularity condition holds:

T (n) = Θ(n^2.5)

11.3 Hash Tables


Q5: What is the average-case time complexity for search in a hash table using chaining?
A5: The average case is:

O(1 + α), where α = n/k is the load factor (n keys stored in k slots)

Q6: How does linear probing handle collisions?


A6: Linear probing searches sequentially for the next free slot:

Try h(k), h(k) + 1, h(k) + 2, . . . (mod m)

11.4 Graphs and Shortest-Path Algorithms
Q7: Which algorithm is used to find the shortest path in a graph with non-negative edge
weights?
A7: Dijkstra's Algorithm. It maintains a priority queue and updates distances as:

d[v] = min(d[v], d[u] + w(u, v))

Q8: True or False: Cuckoo Hashing may require rehashing even with a low load factor.
A8: True. Cuckoo Hashing can enter a cycle due to placement conflicts, requiring a full rehash.

11.5 Trees and Binary Search Trees (BSTs)


Q9: What is the maximum number of nodes in a binary tree of height h?
A9: A full binary tree of height h has:
2^(h+1) − 1 nodes

Q10: True or False: Inserting the same set of keys in different orders into a compressed trie
always gives the same structure.
A10: False. The structure of compressed tries depends on the insertion order.

12 Conclusion
You all need help... You actually read through this whole document? Why? Here is the suicide
prevention helpline number: 0311 7786264
Also you won... but at the cost of being awake at 2 in the morning staring at a blank screen?

