Algo-CH-4 Dynamic Programming
Uploaded by sintebeta

Chapter 4: Dynamic Programming

By: Elsay M.

Subtopics in Dynamic Programming (6hr)
4.1. Introduction to Dynamic Programming

4.2. All pairs Shortest Path - Floyd-Warshall Algorithm

4.3. Shortest Path - Dijkstra Algorithm

4.4. 0/1 Knapsack

4.5. Depth First Search


Introduction
• Dynamic Programming (DP) is a problem-solving method where
complex problems are broken down into simpler overlapping
subproblems.
• It is used when the same subproblems occur multiple times; the
results of subproblems are stored to avoid redundant work.
4.1. Introduction to Dynamic Programming
• Dynamic programming is useful when a problem can be
divided into smaller overlapping subproblems that can be
solved independently, and their solutions can be
combined to solve the original problem. It usually
involves storing results of subproblems in a table to
avoid recomputation.
• There are two approaches to dynamic programming:
o Top-down (Memoization): Solve the problem recursively, but
store the results of subproblems to avoid recomputing them.
o Bottom-up (Tabulation): Build a solution iteratively from the
smallest subproblems and store their solutions in a table.
Example:

• Fibonacci Sequence: In Fibonacci, the nth number is the sum of
the (n-1)th and (n-2)th numbers. Using dynamic programming, we
store the results of previously calculated Fibonacci numbers to
avoid recalculating them.
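
The two approaches can be sketched in Python (a minimal illustration;
the function names are chosen here, not part of the slides):

```python
from functools import lru_cache

# Top-down (memoization): recurse, but cache each subproblem's result.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): fill a table from the smallest subproblems up.
def fib_tab(n):
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(10))  # 55
print(fib_tab(10))   # 55
```

Both run in O(n) time instead of the exponential time of naive recursion,
because each Fibonacci number is computed only once.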
4.2. All Pairs Shortest Path - Floyd-Warshall Algorithm
• The Floyd-Warshall Algorithm is a dynamic programming
algorithm used to find the shortest paths between all pairs of
nodes in a graph. It works for both positive and negative
weights but assumes no negative weight cycles (where you
can loop forever, reducing the total distance).
• How it works:
• The algorithm works by considering every possible pair of
nodes and tries to improve the shortest path between them
by using an intermediate node.
• It repeatedly updates the shortest distance between each pair
if going through an intermediate node offers a shorter path.
Example:
• For a graph with 4 nodes and the following distance
matrix:
0 5 ∞ 10
∞ 0 3 ∞
∞ ∞ 0 1
∞ ∞ ∞ 0
• ∞ means no direct connection between nodes. The
Floyd-Warshall algorithm updates this matrix by
checking if an intermediate node provides a shorter
path.
• Final Output (Shortest Paths):
0 5 8 9
∞ 0 3 4
∞ ∞ 0 1
∞ ∞ ∞ 0

• Now, the shortest distance between every pair of nodes is available.
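
The triple loop described above can be sketched as follows, run on the
4-node matrix from the example (a minimal sketch; INF stands for ∞):

```python
INF = float('inf')

def floyd_warshall(dist):
    """Update dist in place to the all-pairs shortest-path matrix."""
    n = len(dist)
    for k in range(n):            # intermediate node
        for i in range(n):        # source node
            for j in range(n):    # destination node
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# The 4-node distance matrix from the example (INF = no direct edge).
D = [[0,   5,   INF, 10],
     [INF, 0,   3,   INF],
     [INF, INF, 0,   1],
     [INF, INF, INF, 0]]
floyd_warshall(D)
print(D[0])  # [0, 5, 8, 9]
```

The three nested loops give O(n³) time for n nodes, regardless of how
many edges the graph has.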
4.3. Shortest Path - Dijkstra Algorithm
• Dijkstra’s Algorithm is another algorithm for finding the
shortest path, but it focuses on finding the shortest path from
a single source node to all other nodes in the graph. It
assumes all edge weights are non-negative.
How it works:
• Start with the source node and assign it a distance of 0. All
other nodes are set to infinity.
• Explore the neighboring nodes, and update their distances if a
shorter path is found through the current node.
• Continue this process for the next unvisited node with the
smallest distance until all nodes have been processed.
Example:

• For a graph with directed edges (weights in parentheses):
A → B (4), B → C (2), A → D (1), B → E (5), D → E (3)
• Using Dijkstra from node A, the shortest paths to all other
nodes are found by exploring each neighboring node and
updating their distances.
4.4. 0/1 Knapsack Problem
• The 0/1 Knapsack Problem is a classical dynamic programming
problem. You are given a set of items, each with a weight and
value, and a knapsack that can carry up to a certain weight. The
goal is to maximize the value of items in the knapsack without
exceeding its weight limit. The "0/1" means you either include an
item in the knapsack or exclude it (no partial items allowed).
How it works:
• Build a table where each cell dp[i][j] represents the maximum
value that can be achieved with the first i items and a knapsack of
weight j.
• For each item, you decide whether to include it (if it fits) or exclude
it.
• If you include the item, add its value to the total and reduce the
remaining capacity accordingly.
• Example: Let’s say we have 3 items:

Item  Weight  Value
 1      2       3
 2      3       4
 3      4       5
• Knapsack capacity = 5
• We build a table to calculate the maximum value:
Capacity
Items 0 1 2 3 4 5
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
Maximum value is 7, obtained by selecting items 1 and 2
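
The table above can be computed with a short Python sketch of the
include/exclude recurrence (function name chosen here for illustration):

```python
def knapsack_01(weights, values, capacity):
    """dp[i][j] = max value using the first i items with capacity j."""
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, v = weights[i - 1], values[i - 1]
        for j in range(capacity + 1):
            dp[i][j] = dp[i - 1][j]                        # exclude item i
            if w <= j:                                     # include item i if it fits
                dp[i][j] = max(dp[i][j], dp[i - 1][j - w] + v)
    return dp[n][capacity]

# The three items from the example: weights 2, 3, 4 and values 3, 4, 5.
print(knapsack_01([2, 3, 4], [3, 4, 5], 5))  # 7 (items 1 and 2)
```

The table has (n+1) × (capacity+1) cells and each is filled in constant
time, so the running time is O(n · W) for n items and capacity W.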
4.5. Depth First Search (DFS)
• Depth First Search (DFS) is an algorithm for traversing
or searching through graphs and trees. It explores as far
as possible along one branch before backtracking. DFS is
used for various purposes like pathfinding, cycle
detection, and solving puzzles (like mazes).
How it works:
• Start at the root or a starting node.
• Explore each branch or edge as far as possible before
moving to the next unexplored node (backtrack if
necessary).
• Mark nodes as visited to avoid cycles.
Example: For a graph:

A ---- B ---- C
| |
D ---- E
• Starting at node A:
• First visit A.
• Move to B, then to C (as far as possible).
• Backtrack to B, then visit E, and finally D.
• The DFS traversal would be: A → B → C → E → D.
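
The traversal above can be sketched recursively; the neighbor ordering in
the adjacency lists is chosen to reproduce the order shown:

```python
def dfs(graph, start, visited=None):
    """Recursive depth-first search; returns nodes in visit order."""
    if visited is None:
        visited = []
    visited.append(start)                # mark the node as visited
    for neighbor in graph[start]:
        if neighbor not in visited:      # skip visited nodes to avoid cycles
            dfs(graph, neighbor, visited)
    return visited

# Adjacency lists for the example graph: A-B, B-C, A-D, B-E, D-E.
graph = {
    'A': ['B', 'D'],
    'B': ['A', 'C', 'E'],
    'C': ['B'],
    'D': ['A', 'E'],
    'E': ['B', 'D'],
}
print(dfs(graph, 'A'))  # ['A', 'B', 'C', 'E', 'D']
```

DFS visits each node and edge once, so the traversal runs in O(V + E)
time for V nodes and E edges.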
Summary of Chapter 4:
• Dynamic Programming is about breaking problems into
overlapping subproblems and using stored solutions to avoid
redundant calculations.
• Floyd-Warshall solves the all-pairs shortest path problem for graphs
with positive and negative weights.
• Dijkstra’s Algorithm finds the shortest path from a single source
node, with non-negative weights.
• The 0/1 Knapsack Problem is a combinatorial optimization problem
where we maximize value without exceeding a weight limit.
• Depth First Search (DFS) is a graph traversal technique that
explores each path as deeply as possible before backtracking.
• These algorithms are key tools for solving optimization problems in
fields like networking, resource allocation, and decision-making.
