
Unit 5: Dynamic Programming (8 HRS) : Pyqs

The document outlines the concepts and strategies of Dynamic Programming (DP), comparing it with Greedy Algorithms and Recursion. It details key elements of DP, including optimal substructure, overlapping subproblems, and memoization, while providing examples such as the Matrix Chain Multiplication and the 0/1 Knapsack Problem. Additionally, it discusses the advantages of DP over other methods, emphasizing its efficiency in solving complex problems through systematic storage and reuse of solutions.


DAA - 5

Monday, April 28, 2025 9:11 AM

Unit 5: Dynamic Programming (8 Hrs)


5.1 Greedy Algorithms vs Dynamic Programming
PYQs:
• Discuss the advantages of dynamic programming over greedy strategy.

5.1 Recursion vs Dynamic Programming


PYQs:
• Distinguish the main idea for divide and conquer approach with dynamic programming approach.
• How does dynamic programming differ from recursion? (Asked inside other PYQs too.)

5.1 Elements of DP Strategy


PYQs:
• Write down the elements of dynamic programming.
• What do you mean by Dynamic Programming Strategy? Explain the elements of DP.
• What is the concept of dynamic programming?

5.2 Matrix Chain Multiplication


PYQs:
• Find the optimal parenthesization for the matrix chain product ABCD with sizes A(5×10), B(10×15), C(15×20), D(20×30).
• Write the dynamic programming algorithm for matrix chain multiplication.
• Trace matrix chain multiplication for the size array {5, 2, 3, 5, 4}.
• Write the algorithm for matrix chain multiplication and estimate its time complexity.
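The algorithm asked for above can be sketched as follows; this is a minimal Python version of the standard O(n³) DP recurrence (the function name is mine, not from the notes):

```python
def matrix_chain(p):
    # p is the dimension array; matrix i has size p[i-1] x p[i].
    # m[i][j] = minimum scalar multiplications to compute the chain i..j.
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):              # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # Try every split point k between i and j, keep the cheapest.
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]
```

For the PYQ size array {5, 2, 3, 5, 4} this returns 110 scalar multiplications; for the dimensions A(5×10), B(10×15), C(15×20), D(20×30), i.e. p = [5, 10, 15, 20, 30], it returns 5250. Three nested loops over at most n values each give the O(n³) time bound.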

5.2 String Editing (Edit Distance Problem)


PYQs:
• Find the edit distance between the strings “ARTIFICIAL” and “NATURAL” using dynamic programming.

5.2 Zero-One Knapsack Problem


PYQs:
• Explain the algorithm to solve the 0/1 knapsack problem using dynamic programming and explain its complexity.
• Discuss the 0/1 knapsack problem and how it can be solved.
• What do you mean by backtracking? Explain the backtracking algorithm for solving 0/1 knapsack problem.

5.2 Floyd Warshall Algorithm


PYQs:
• What is Floyd’s algorithm? Write the details of Floyd’s algorithm to find the shortest path in a graph.
• Explain and analyze the Floyd-Warshall algorithm for the all-pairs shortest path problem. Trace the algorithm for a given graph.

5.2 Travelling Salesman Problem (TSP)


PYQs:
(no specific PYQ asked for TSP in your list — just study briefly)

5.3 Memoization Strategy


PYQs:
• What do you mean by memoization strategy? Compare memoization with dynamic programming.

5.3 Dynamic Programming vs Memoization


PYQs:
• (Same as above.)
Dynamic Programming (DP)
• Dynamic Programming is a technique to solve problems by:
○ Breaking them down into smaller overlapping subproblems,
○ Solving each subproblem only once,
○ Storing their results to reuse later.
• It is especially useful when a problem has:
○ Optimal Substructure and
○ Overlapping Subproblems.
Example:
Computing Fibonacci numbers recursively leads to repeated calculations.
Using DP, we can store previously computed Fibonacci numbers and avoid re-computation.
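The idea of storing previously computed Fibonacci numbers can be sketched as a bottom-up table in Python (a minimal illustration; the function name is mine):

```python
def fib(n):
    # Bottom-up DP: compute each Fibonacci number once, store it,
    # and reuse the stored values instead of recomputing.
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]  # reuse stored results
    return table[n]
```

Each subproblem is solved exactly once, so the running time drops from exponential (naive recursion) to O(n).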

Elements of Dynamic Programming Strategy


1. Optimal Substructure
○ A problem exhibits optimal substructure if an optimal solution can be constructed from optimal
solutions of its subproblems.
Example:
In Shortest Path problems, the shortest path from node A to C via B is the shortest path from A to
B plus the shortest path from B to C.
2. Overlapping Subproblems
○ A problem has overlapping subproblems if the same subproblems are solved multiple times.
○ Unlike Divide & Conquer (where subproblems are distinct), DP reuses solutions.
Example:
In computing the Fibonacci sequence:
Fib(5) needs Fib(4) and Fib(3), and Fib(4) again needs Fib(3) and Fib(2), causing overlapping.
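The overlap described above can be made visible by counting how often naive recursion solves each subproblem (a small Python demonstration; names are mine):

```python
from collections import Counter

calls = Counter()

def naive_fib(n):
    calls[n] += 1          # record every time this subproblem is solved
    if n < 2:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

naive_fib(5)
# calls now shows Fib(3) was solved twice and Fib(2) three times,
# even though each has only one answer — exactly the overlap DP removes.
```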

Greedy Algorithms vs Dynamic Programming


Basic Concept
| Aspect | Greedy Algorithm | Dynamic Programming |
| Decision making | Makes a locally optimal choice at each step | Makes decisions based on the overall problem and future impact |
| Solution approach | Never looks back or revises earlier choices | May revise choices by considering multiple possibilities |
| Problem type | Works if the greedy choice property and optimal substructure exist | Works when overlapping subproblems and optimal substructure exist |
| Complexity | Lower, since no tables or arrays are needed to store previous choices | Higher, since a table is needed to store subproblem solutions |
| Guarantee | No guarantee of an optimal solution | Guarantees an optimal solution when its conditions hold |

Working Method
• Greedy:
○ Pick the best immediate option (locally best), hoping it leads to global best.
○ No backtracking.
• Dynamic Programming:
○ Solve and remember results of smaller subproblems.
○ Combine subproblem solutions to get overall solution.
○ May explore multiple choices before finalizing.
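The contrast in working methods can be illustrated with coin change, a classic case where the greedy method fails but DP succeeds (a minimal Python sketch; function names and the coin set are mine):

```python
def greedy_coins(coins, amount):
    # Greedy: always take the largest coin that fits; never backtrack.
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count

def dp_coins(coins, amount):
    # DP: best[a] = minimum coins for amount a, built bottom-up
    # from smaller amounts.
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        best[a] = min((best[a - c] + 1 for c in coins if c <= a),
                      default=INF)
    return best[amount]

# With coins {1, 3, 4} and amount 6, greedy picks 4+1+1 (3 coins),
# while DP finds 3+3 (2 coins).
```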

Conditions to Use
| Condition | Greedy | Dynamic Programming |
| Greedy choice property required? | Yes | No |
| Overlapping subproblems needed? | No | Yes |
| Optimal substructure required? | Yes | Yes |

Recursion (Divide and Conquer) vs Dynamic Programming


| Aspect | Recursion (Divide and Conquer) | Dynamic Programming |
| Definition | Breaks the problem into subproblems, solves them independently, then combines results | Solves subproblems and stores solutions (memoization/tabulation) to avoid redundant work |
| Subproblem structure | Subproblems are independent | Subproblems are overlapping |
| Optimality | May not guarantee an optimal solution efficiently due to redundant calculations | Guarantees an optimal solution by reusing solutions to subproblems |
| Examples | Merge Sort, Quick Sort, Binary Search | Fibonacci, 0/1 Knapsack, Matrix Chain Multiplication |
| Overlapping subproblems | Not applicable (subproblems don't overlap) | Key characteristic (subproblems are reused multiple times) |
| Efficiency | May recompute the same subproblem multiple times, less efficient | More efficient by storing results and reusing them |
| Time complexity | Higher due to recomputation of subproblems (e.g., O(2^n) for Fibonacci) | Lower due to storage of subproblems (e.g., O(n) or O(n^2) depending on the problem) |
| Space complexity | Low (recursive stack space) | Higher (requires space to store results of subproblems) |
| Storage | No extra storage needed for subproblem solutions | Requires storage to keep intermediate results |
| Problem type | Best for problems where subproblems are independent | Best for problems where subproblems overlap |

Main Elements of Dynamic Programming (DP)

1. Optimal Substructure
• Definition: A problem exhibits optimal substructure if the solution to the problem can be constructed
efficiently from the solutions to its subproblems.
• Example: In the Matrix Chain Multiplication problem, the optimal way to multiply matrices is
determined by the optimal ways to multiply smaller subsequences of matrices.

2. Table Structure
• Definition: A table (usually a 2D array) is used to store the solutions to subproblems in Dynamic
Programming. This is where the results of smaller subproblems are stored to avoid redundant
computations.
• Structure: The table is typically filled iteratively, and each entry in the table corresponds to a
subproblem's solution.
• Example: In the 0/1 Knapsack Problem, we use a table where dp[i][w] stores the maximum value that
can be achieved with the first i items and capacity w.

3. Bottom-Up Approach
• Definition: In the Bottom-Up approach, the problem is solved by first solving the smallest subproblems
and then using those solutions to build up to the solution for the larger problem.
• Process: You fill the DP table from the smallest subproblems (base cases) and iteratively compute the
larger problems until you reach the desired result.
• Example: In the Fibonacci Sequence, we compute the Fibonacci numbers starting from the base cases
(F(0) and F(1)) and then build up the results until we reach the desired Fibonacci number.
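The three elements above come together in the 0/1 Knapsack table dp[i][w] mentioned earlier: base cases first, then larger subproblems filled iteratively. A minimal Python sketch (names are mine, not from the notes):

```python
def knapsack(values, weights, capacity):
    # dp[i][w]: maximum value achievable with the first i items
    # and remaining capacity w (the table structure from element 2).
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):                   # bottom-up over items
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]             # option 1: skip item i
            if weights[i - 1] <= w:             # option 2: take item i
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]
```

For example, with values [60, 100, 120], weights [10, 20, 30], and capacity 50, the table yields 220 (items 2 and 3). Filling an (n+1) × (W+1) table gives the O(nW) time and space bounds.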
Memoization (Concept)
• Memoization = Remembering solutions of subproblems.
• Solve the problem top-down (start from the big problem and recursively solve subproblems).
• Whenever you solve a subproblem, store its result (in a table or dictionary).
• If you need the same subproblem again, reuse the stored result — don't recompute.
• Speeds up recursion by avoiding repeated calculations.
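The steps above can be sketched as a top-down memoized Fibonacci in Python (a minimal illustration; the function name and the dictionary cache are mine):

```python
def fib_memo(n, cache=None):
    # Top-down: start from the big problem and recurse into subproblems,
    # storing each result in a dictionary the first time it is solved.
    if cache is None:
        cache = {}
    if n in cache:
        return cache[n]        # reuse the stored result — don't recompute
    result = n if n < 2 else fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    cache[n] = result
    return result
```

Only the subproblems actually reached by the recursion are stored, which is the key difference from the bottom-up table in the comparison below.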

Memoization vs Dynamic Programming


| Feature | Memoization | Dynamic Programming (DP) |
| Approach | Top-down | Bottom-up |
| How it works | Solves bigger problems first, storing subresults as needed | Solves smaller problems first and builds up |
| Storage | Only the subproblems actually needed are stored | All subproblems are computed and stored |
| Recursion | Used | Usually not used (uses loops instead) |
| Function calls | Many recursive calls | Iterative filling of tables |
| Efficiency | May carry the overhead of recursive calls | Avoids recursion overhead and call-stack load |
| Example | Recursive Fibonacci with a cache | Tabular Fibonacci solution |
