Dynamic Programming
Dynamic Programming (DP) – Mastering the Art of Optimization
Dynamic Programming is an algorithmic technique used to solve complex problems by breaking
them down into simpler subproblems, solving each subproblem once, and storing the result to
avoid redundant work.
🔁 Core Principles (The “DP Toolkit”)
1. Overlapping Subproblems
   - The problem can be broken into smaller, repeating subproblems.
   - Example: Fibonacci numbers.
2. Optimal Substructure
   - The optimal solution of the problem can be built from the optimal solutions of its subproblems.
   - Example: shortest paths, knapsack.
3. Memoization (Top-Down DP)
   - Use recursion plus a cache to store already computed results.
4. Tabulation (Bottom-Up DP)
   - Solve subproblems iteratively and build the answer from the ground up using a DP table.
🔢 Example 1: Fibonacci (Classic DP)
🌀 Top-Down (Memoization)
def fib(n, memo={}):                  # the default dict is created once, so the cache persists across calls
    if n <= 1:                        # base cases: fib(0) = 0, fib(1) = 1
        return n
    if n not in memo:
        memo[n] = fib(n-1, memo) + fib(n-2, memo)
    return memo[n]
📊 Bottom-Up (Tabulation)
def fib(n):
    if n <= 1:
        return n
    dp = [0] * (n+1)                  # dp[i] holds the i-th Fibonacci number
    dp[1] = 1
    for i in range(2, n+1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
🧳 Example 2: 0/1 Knapsack Problem
def knapsack(W, wt, val, n):
    # dp[i][w] = best value using the first i items with capacity w
    dp = [[0] * (W+1) for _ in range(n+1)]
    for i in range(n+1):
        for w in range(W+1):
            if i == 0 or w == 0:
                dp[i][w] = 0
            elif wt[i-1] <= w:
                # take item i-1 or skip it, whichever is better
                dp[i][w] = max(val[i-1] + dp[i-1][w - wt[i-1]], dp[i-1][w])
            else:
                dp[i][w] = dp[i-1][w]
    return dp[n][W]
🕒 Time and Space Complexity
Technique        Time Complexity          Space Complexity
Recursive        Exponential              O(n) recursion stack
Memoization      O(n) to O(n²) or higher  O(n) or O(n²) cache, plus stack
Tabulation       O(n) to O(n²) or higher  O(n) or O(n²) table
Space-optimized  Same as tabulation       Reduced, often O(1) or O(n)
🧰 Common DP Problems
Problem Type    Examples
Sequence-based  Fibonacci, LIS, LCS, Edit Distance
Subset-based    Knapsack, Subset Sum, Partition Equal Subset Sum
Grid-based      Unique Paths, Min Path Sum, Robot in a Maze
String-based    Palindrome Partitioning, Regex Matching
Game-based      Nim Game, Minimax with Memoization
Tree/Graph      DP on Trees, Bellman-Ford (shortest paths)
📌 Tips for Solving DP Problems
1. Identify subproblems – What’s the repeating structure?
2. Define DP state – Usually dp[i] or dp[i][j]
3. Write recurrence relation – Based on choices or transitions
4. Initialize base cases
5. Choose memoization or tabulation
6. Optimize space (if needed)
🧠 Example Question to Try
Problem: Given a staircase with n steps, where you can climb 1 or 2 steps at a time, how many distinct ways are there to reach the top?
Hint: Classic DP with dp[i] = dp[i-1] + dp[i-2]; a reference sketch follows.
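If you want to check your answer, here is a minimal bottom-up sketch of the hinted recurrence (the function name is just for illustration), using two variables instead of a full table:

def climb_stairs(n):
    if n <= 2:
        return n                       # 1 way for one step, 2 ways for two steps
    prev, curr = 1, 2                  # ways to reach steps 1 and 2
    for _ in range(3, n + 1):
        prev, curr = curr, prev + curr # dp[i] = dp[i-1] + dp[i-2]
    return curr

print(climb_stairs(5))  # Output: 8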
Top-down
Top-Down Dynamic Programming (Memoization)
Top-Down DP means you start solving the problem from the original big problem and recursively
break it down into smaller subproblems. You store the results of subproblems in a cache (memo) to
avoid recomputation.
🧩 How It Works
Write the problem recursively
Use a memo (dictionary or array) to save computed results
Before computing a subproblem, check if it’s already solved
If yes, reuse cached result
If no, compute it and save it in memo
🔢 Example: Fibonacci (Top-Down Memoization)
def fib(n, memo={}):
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib(n-1, memo) + fib(n-2, memo)
    return memo[n]

print(fib(10))  # Output: 55
🛠 Another Example: 0/1 Knapsack (Top-Down)
def knapsack(W, wt, val, n, memo):
    if n == 0 or W == 0:                       # no items left or no capacity
        return 0
    if (n, W) in memo:
        return memo[(n, W)]
    if wt[n-1] > W:                            # item n-1 cannot fit
        memo[(n, W)] = knapsack(W, wt, val, n-1, memo)
    else:
        memo[(n, W)] = max(
            val[n-1] + knapsack(W - wt[n-1], wt, val, n-1, memo),  # take it
            knapsack(W, wt, val, n-1, memo)                        # skip it
        )
    return memo[(n, W)]

# Usage
val = [60, 100, 120]
wt = [10, 20, 30]
W = 50
n = len(val)
memo = {}
print(knapsack(W, wt, val, n, memo))  # Output: 220
🔍 When to Use Top-Down?
Easy to implement when the recursive relation is natural
Good for problems where not all subproblems are needed (saves work)
Often easier to reason about and debug, since the code follows the natural recursion
🕒 Complexity
Time: Usually reduced from exponential to polynomial
Space: Memo table + recursion stack
Bottom-up
Bottom-Up Dynamic Programming (Tabulation)
Bottom-Up DP solves the problem by starting from the smallest subproblems first and builds up
solutions for bigger subproblems iteratively, usually using a table (array or matrix).
🧩 How It Works
Identify base cases and fill them in the DP table.
Use the recurrence relation to fill the table from smallest to largest subproblems.
The final answer is found at the bottom (or the last cell) of the table.
No recursion needed → usually more efficient in practice (no recursion overhead).
🔢 Example: Fibonacci (Bottom-Up)
def fib(n):
    if n <= 1:
        return n
    dp = [0] * (n+1)
    dp[1] = 1
    for i in range(2, n+1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

print(fib(10))  # Output: 55
🛠 Another Example: 0/1 Knapsack (Bottom-Up)
def knapsack(W, wt, val, n):
    dp = [[0] * (W+1) for _ in range(n+1)]    # row 0 and column 0 stay 0 (base cases)
    for i in range(1, n+1):
        for w in range(1, W+1):
            if wt[i-1] <= w:
                dp[i][w] = max(val[i-1] + dp[i-1][w - wt[i-1]], dp[i-1][w])
            else:
                dp[i][w] = dp[i-1][w]
    return dp[n][W]

val = [60, 100, 120]
wt = [10, 20, 30]
W = 50
n = len(val)
print(knapsack(W, wt, val, n))  # Output: 220
🔍 When to Use Bottom-Up?
You want to avoid recursion overhead
You want better control over the order of computation
You want to optimize space (e.g., using rolling arrays)
Ideal for problems where all subproblems are needed
🕒 Complexity
Time and space complexity are often the same as in top-down, but without the recursion-stack cost.
Space is easier to optimize by reusing states, e.g. with rolling arrays (see the sketch below).
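To make the rolling-array idea concrete, here is a sketch of the 0/1 knapsack with the 2D table collapsed to a single row; this works because row i only ever reads row i-1:

def knapsack_1d(W, wt, val, n):
    dp = [0] * (W + 1)                 # one row instead of an (n+1) x (W+1) table
    for i in range(n):
        # iterate capacities in reverse so each item is counted at most once
        for w in range(W, wt[i] - 1, -1):
            dp[w] = max(dp[w], val[i] + dp[w - wt[i]])
    return dp[W]

print(knapsack_1d(50, [10, 20, 30], [60, 100, 120], 3))  # Output: 220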
⚖️ Top-Down vs Bottom-Up Summary
Feature                Top-Down (Memoization)            Bottom-Up (Tabulation)
Implementation         Recursive                         Iterative
Ease of understanding  Intuitive for recursive problems  May require more careful planning
Overhead               Recursion call overhead           No recursion overhead
Space                  Recursion stack + memo            Usually just the DP table
Use case               When subproblems may be skipped   When all subproblems are needed
Recursion
Recursion — The Power of Self-Calling Functions
Recursion is a programming technique where a function calls itself to solve smaller instances of the
same problem until it reaches a base case.
🧠 How Recursion Works
Base Case: The simplest instance of the problem, solved directly without further recursion.
Prevents infinite recursion.
Recursive Case: The function calls itself with a smaller/simpler input.
🐣 Simple Example: Factorial of n (n!)
def factorial(n):
    if n == 0 or n == 1:              # Base case
        return 1
    else:
        return n * factorial(n - 1)   # Recursive case

print(factorial(5))  # Output: 120
🌳 Recursive Function Structure
def recursive_function(parameters):
    if base_condition:
        return base_value
    else:
        # break the problem into smaller subproblem(s)
        return recursive_function(smaller_parameters)
⚠️ Key Points
Every recursion must reach a base case to avoid infinite calls.
Recursive calls build up a call stack; recursion that is too deep may cause a stack overflow.
Recursion can be inefficient without memoization if overlapping subproblems exist.
🧩 Common Use Cases
Tree traversals (preorder, inorder, postorder; sketched after this list)
Divide and conquer algorithms (merge sort, quicksort)
Combinatorial problems (permutations, combinations)
Dynamic Programming (with memoization)
Backtracking (N-Queens, Sudoku)
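As one example from the list above, tree traversal is naturally recursive. A small self-contained sketch (the Node class here is defined just for the example):

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(node):
    if node is None:                   # base case: empty subtree
        return []
    return inorder(node.left) + [node.val] + inorder(node.right)

root = Node(2, Node(1), Node(3))
print(inorder(root))  # Output: [1, 2, 3]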
🕒 Time Complexity
Depends on the problem and the number of recursive calls.
Often exponential without memoization, but can be reduced to polynomial with DP.
Memoization
Memoization — The Smart Cache for Recursion
Memoization is an optimization technique used with recursion to store results of expensive function
calls and reuse them when the same inputs occur again, avoiding redundant computations.
🔍 Why Memoization?
Naive recursion often repeats the same calculations many times.
Memoization saves these results in a cache (dictionary or array).
It can transform an exponential-time recursive algorithm into a polynomial-time one.
🔢 Example: Fibonacci with Memoization
def fib(n, memo={}):
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib(n-1, memo) + fib(n-2, memo)
    return memo[n]

print(fib(10))  # Output: 55
🛠 How to Implement Memoization
1. Create a cache (usually a dictionary or array).
2. Before computing a subproblem, check if the result exists in the cache.
3. If yes, return the cached result.
4. If no, compute and store the result in the cache before returning it (automated in Python by functools.lru_cache, sketched below).
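In Python, the standard library can handle these steps for you: functools.lru_cache wraps a function with exactly this check-cache-then-compute behavior.

from functools import lru_cache

@lru_cache(maxsize=None)               # unbounded cache, keyed by the arguments
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # Output: 55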
📈 Benefits
Dramatically improves performance for overlapping subproblems.
Easy to implement on top of recursive solutions.
Keeps code intuitive and clean.
⚠️ When to Use Memoization?
Problem has overlapping subproblems.
Recursive solution is straightforward but slow.
You want to avoid writing complex iterative DP.
Problem Solving Techniques
🚀 Top Problem Solving Techniques
1. Brute Force (Exhaustive Search)
Try all possible solutions until you find the correct one.
Simple but usually inefficient (often exponential time).
Useful as a baseline or when the input size is very small (see the sketch below).
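A small illustrative example (names are just for the sketch): brute-force two-sum, which tries every pair.

def two_sum_brute(nums, target):
    n = len(nums)
    for i in range(n):
        for j in range(i + 1, n):      # all O(n²) pairs
            if nums[i] + nums[j] == target:
                return (i, j)
    return None

print(two_sum_brute([2, 7, 11, 15], 9))  # Output: (0, 1)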
2. Divide and Conquer
Break the problem into smaller independent subproblems.
Solve each subproblem recursively.
Combine their solutions for the final answer.
Examples: Merge Sort, Quick Sort, Binary Search (binary search is sketched below).
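For instance, binary search repeatedly discards half of the search range; a compact iterative sketch:

def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1               # target must be in the right half
        else:
            hi = mid - 1               # target must be in the left half
    return -1                          # not found

print(binary_search([1, 3, 5, 7, 9], 7))  # Output: 3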
3. Greedy Algorithms
Make a locally optimal choice at each step, hoping to reach a global optimum.
Works only when the problem has the greedy-choice property and optimal substructure.
Examples: Prim’s Algorithm, Kruskal’s Algorithm, Huffman Coding; a simpler greedy example is sketched below.
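Another classic greedy problem is activity selection: repeatedly pick the activity that finishes earliest. A minimal sketch:

def max_activities(intervals):
    count, last_end = 0, float('-inf')
    for start, end in sorted(intervals, key=lambda iv: iv[1]):  # sort by finish time
        if start >= last_end:          # compatible with what we already picked
            count += 1
            last_end = end
    return count

print(max_activities([(1, 2), (3, 4), (0, 6), (5, 7)]))  # Output: 3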
4. Dynamic Programming (DP)
Solve problems by breaking them into overlapping subproblems.
Store solutions to subproblems (memoization or tabulation) to avoid recomputation.
Used when problem exhibits optimal substructure and overlapping subproblems.
Examples: Fibonacci, Knapsack, Longest Common Subsequence.
5. Backtracking
Explore all potential candidates for a solution.
Abandon a path as soon as it’s clear it won’t lead to a valid solution (prune the search tree).
Useful for constraint satisfaction problems.
Examples: N-Queens, Sudoku Solver, Permutations (permutations are sketched below).
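A minimal backtracking sketch for permutations: extend a partial answer one element at a time, and undo each choice after exploring it.

def permutations(nums):
    result, path, used = [], [], [False] * len(nums)

    def backtrack():
        if len(path) == len(nums):
            result.append(path[:])     # record a complete permutation
            return
        for i in range(len(nums)):
            if used[i]:
                continue
            used[i] = True
            path.append(nums[i])
            backtrack()
            path.pop()                 # undo the choice (backtrack)
            used[i] = False

    backtrack()
    return result

print(permutations([1, 2, 3]))  # 6 permutations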
6. Branch and Bound
Similar to backtracking but uses bounds to prune search space more aggressively.
Often used in optimization problems.
7. Recursion
Solve a problem by solving smaller instances of the same problem.
Base case(s) stop recursion.
Often combined with memoization or backtracking.
8. Bit Manipulation
Use bitwise operators to solve problems efficiently.
Useful for subsets, bitmasks, and low-level operations (subset enumeration is sketched below).
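For example, every subset of n items can be encoded as an n-bit mask; a short sketch:

def all_subsets(nums):
    n = len(nums)
    subsets = []
    for mask in range(1 << n):         # 2**n masks, one per subset
        subsets.append([nums[i] for i in range(n) if mask & (1 << i)])
    return subsets

print(all_subsets([1, 2, 3]))  # 8 subsets, from [] to [1, 2, 3]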
9. Two Pointers / Sliding Window
Maintain pointers to represent a window or range and move them intelligently.
Often used in array/string problems for linear-time solutions.
Examples: Maximum Sum Subarray, Longest Substring Without Repeating Characters (the latter is sketched below).
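A sliding-window sketch for the second example: grow the window on the right, and jump the left edge forward only when a character repeats inside the window.

def longest_unique_substring(s):
    last_seen = {}                     # char -> index of its latest occurrence
    best = left = 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1   # move the window past the repeat
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best

print(longest_unique_substring("abcabcbb"))  # Output: 3 ("abc")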
10. Graph Algorithms
Specialized methods for graph problems.
Examples: DFS, BFS, Dijkstra, Bellman-Ford, Floyd-Warshall (a BFS sketch follows).
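As a small sketch, BFS on an adjacency-list graph visits nodes level by level using a queue:

from collections import deque

def bfs(graph, start):
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A'))  # Output: ['A', 'B', 'C', 'D']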
🧰 Tips for Effective Problem Solving
Understand the problem deeply — inputs, outputs, constraints.
Think about brute force first — what’s the naive solution?
Look for patterns — overlapping subproblems? Greedy choice?
Choose appropriate technique based on problem properties.
Optimize gradually — start from simple, then improve.
Practice common problems — build intuition and speed.