
DYNAMIC PROGRAMMING

BENITA GWANDURE M222432
COURTENY DZERE M225084
TANAKA EZEKIEL CHIBAYA M215075
THEODE CHIRINDO M22
TABLE OF CONTENTS
1. Introduction
2. Making Change
3. Principles of Optimality
4. The Knapsack Problem
5. Shortest Paths - Floyd's Algorithm
6. Chained Matrix Multiplication
1. INTRODUCTION
• Dynamic programming is a problem-solving technique used to solve optimization problems by
breaking them down into overlapping subproblems and efficiently solving each subproblem only
once.

• It is particularly useful for problems that exhibit the property of optimal substructure, which means
that an optimal solution to the problem can be constructed from optimal solutions to its subproblems.

• The main idea behind dynamic programming is to solve a problem by storing the solutions to its
subproblems in a table or an array, so that the solutions to overlapping subproblems can be reused
instead of recomputed. This approach can greatly reduce the time complexity of the overall solution.

• Dynamic programming is often used to solve problems such as finding the shortest path in a graph,
optimizing resource allocation, sequence alignment, knapsack problems, and many others. It is a
powerful technique that can significantly improve the efficiency of solving complex optimization
problems.
STEPS IN DYNAMIC PROGRAMMING
i. Characterize the structure of an optimal solution: Determine how an optimal solution to the
problem can be constructed from optimal solutions to its subproblems.
ii. Define the value of an optimal solution recursively: Express the value of an optimal solution
in terms of the values of smaller subproblems.
iii. Compute the value of an optimal solution in a bottom-up manner: Build a table or an array
and fill it in a bottom-up fashion, starting with the smallest subproblems and gradually
solving larger subproblems until the solution to the original problem is obtained.
iv. Construct an optimal solution: Once the table or array is filled, trace back through the table
to construct an optimal solution.
MAKING CHANGE
• Dynamic programming can be applied to solve the "Making Change" problem, which involves
finding the minimum number of coins needed to make change for a given amount of money.
• This approach ensures that each subproblem is solved only once, and the solutions to smaller
subproblems are reused to compute the solution for larger subproblems. As a result, the overall
time complexity is significantly reduced compared to a naive recursive approach.

• Note that the steps below outline the general approach to solving the "Making Change" problem
using dynamic programming. The specific implementation details may vary depending on the
programming language and the specific requirements of the problem.

• Here's a step-by-step explanation of how dynamic programming can be used to solve the
"Making Change" problem:
STEPS
Step 1
 Define the problem:
o Given a set of coin denominations and a target amount, the goal is to find the minimum number of
coins needed to make change for the target amount.
Step 2
 Identify the subproblem:
o Let's consider the subproblem of making change for smaller amounts, starting from 0 up to the
target amount. We'll build up the solution for larger amounts based on solutions for smaller amounts.
Step 3
 Define the recurrence relation:
o We can define a recurrence relation that expresses the minimum number of coins needed to make
change for a given amount as a function of the minimum number of coins needed for smaller amounts.
The recurrence relation is as follows:
minCoins(amount) = min(minCoins(amount - coin) + 1) for each coin in denominations
o This means that to compute the minimum number of coins needed to make change for the current
amount, we consider each coin in the denominations and compute the minimum of (minimum number
of coins needed for (amount - coin) + 1).
CONT
Step 4
 Build a table or array:
o Create an array or table to store the minimum number of coins needed for each amount from 0 up to
the target amount. Initialize the value for amount 0 as 0, and for all other amounts, initialize them to
infinity or a large number.
Step 5
 Fill in the table:
o Iterate over the amounts from 1 to the target amount. For each amount, iterate over the coin
denominations. Compute the minimum number of coins needed for the current amount using the
recurrence relation defined in step 3. Update the table with the computed minimum.
Step 6
 Compute the final solution:
o Once the table is filled, the minimum number of coins needed to make change for the target amount
is stored in the table at the target amount position.
Step 7
 Trace back to construct the solution:
o To find the actual coins used to make change, start from the target amount and repeatedly subtract
the selected coin denomination that leads to the minimum number of coins. Keep track of the selected
coins until the amount becomes 0.
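• As a concrete illustration, the Python sketch below is one possible implementation of steps 4 to 7;
the function name min_coins and the example denominations are illustrative, not taken from the
presentation.

```python
def min_coins(denominations, target):
    """Bottom-up DP for the making-change problem (steps 4-7 above)."""
    INF = float("inf")
    # Step 4: table of minimum coin counts; amount 0 needs 0 coins.
    dp = [0] + [INF] * target
    # Step 5: fill the table using minCoins(a) = min(minCoins(a - coin) + 1).
    for amount in range(1, target + 1):
        for coin in denominations:
            if coin <= amount and dp[amount - coin] + 1 < dp[amount]:
                dp[amount] = dp[amount - coin] + 1
    if dp[target] == INF:
        return None, []              # no combination of coins makes the target
    # Step 7: trace back which coins were used.
    coins, amount = [], target
    while amount > 0:
        for coin in denominations:
            if coin <= amount and dp[amount - coin] == dp[amount] - 1:
                coins.append(coin)
                amount -= coin
                break
    return dp[target], coins         # Step 6: dp[target] is the answer

print(min_coins([1, 5, 6, 8], 11))   # (2, [5, 6]) for these example denominations
```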
PRINCIPLES OF OPTIMALITY
• The principles of optimality are fundamental concepts in dynamic programming that
guide the design and formulation of optimal solutions. These principles are central to
the application of dynamic programming to various problems. There are two main
principles: the principle of optimality and the principle of overlapping subproblems.
• By applying the principles of optimality and overlapping subproblems, dynamic
programming allows us to efficiently solve complex problems by breaking them down
into simpler subproblems and reusing solutions. This approach can significantly reduce
the computational time and improve the overall efficiency of the solution.
• It's important to note that not all problems can be solved using dynamic programming,
as they may not exhibit the properties required for dynamic programming to be
applicable. However, for problems that satisfy the principles of optimality and
overlapping subproblems, dynamic programming can be a powerful technique to find
optimal solutions efficiently.
PRINCIPLE OF OPTIMALITY:
• The principle of optimality states that an optimal solution to a problem contains within it optimal
solutions to subproblems. In other words, if we have an optimal solution to a problem, then the
solution for any subproblem within that problem must also be optimal.

• This principle allows us to break down a complex problem into smaller subproblems and solve
them independently. By solving the subproblems optimally, we can combine their solutions to
obtain an optimal solution for the original problem.

• This property is particularly important in dynamic programming, where we solve subproblems
and store their solutions for later use.
PRINCIPLE OF OVERLAPPING SUBPROBLEMS:
• The principle of overlapping subproblems states that the set of subproblems in a
dynamic programming problem overlaps, meaning that the same subproblems are
solved multiple times.
• This overlapping occurs when a problem can be divided into smaller subproblems,
and the same subproblems are encountered multiple times during the computation.
• To avoid redundant work, dynamic programming exploits this property by solving
each subproblem only once and storing its solution for future reference.
• By storing the solutions in a table or array, we can retrieve and reuse them when
needed, rather than recomputing them.
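• As an illustration of this reuse, the short Python sketch below memoizes the recursive minCoins
relation from the making-change section in a plain dictionary; the function and argument names are
illustrative only.

```python
def min_coins_memo(denominations, amount, memo=None):
    """Top-down minCoins: each subproblem is computed once, then reused."""
    if memo is None:
        memo = {}
    if amount == 0:
        return 0
    if amount in memo:                 # overlapping subproblem: reuse the stored answer
        return memo[amount]
    best = float("inf")
    for coin in denominations:
        if coin <= amount:
            best = min(best, min_coins_memo(denominations, amount - coin, memo) + 1)
    memo[amount] = best                # store the solution for future reference
    return best
```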
THE KNAPSACK PROBLEM
• The knapsack problem is a classic optimization problem that can be effectively solved using
dynamic programming. It involves selecting a subset of items with maximum total value, given
a weight constraint.
• The dynamic programming approach to the knapsack problem ensures that each subproblem
is solved only once, and the solutions to smaller subproblems are stored and reused when
needed.
• This approach significantly reduces the time complexity compared to a naive recursive
solution. The time complexity of the dynamic programming solution to the knapsack problem
is O(nW), where 'n' is the number of items and 'W' is the knapsack capacity.
• By leveraging dynamic programming, the knapsack problem can be efficiently solved, providing
an optimal solution for selecting items that maximize the total value while respecting the
weight constraint.
• Here's how dynamic programming can be applied to solve the knapsack problem:
i. Define the problem: Given a set of items, each with a weight and a value, and a knapsack with a
maximum weight capacity, the goal is to determine the most valuable combination of items that can be
placed in the knapsack without exceeding its weight capacity.
ii. Formulate the subproblem: Define a subproblem as finding the maximum value that can be achieved
using a subset of items considering a limited weight capacity.
iii. Define the recurrence relation: Let's denote the maximum value that can be achieved using a subset of
the first 'i' items and a weight capacity of 'w' as 'dp[i][w]'. If item 'i' is too heavy to fit (weight[i] > w),
then dp[i][w] = dp[i-1][w]. Otherwise, dp[i][w] = max(dp[i-1][w], dp[i-1][w - weight[i]] + value[i]).
This means that to compute the maximum value using 'i' items and weight capacity 'w', we consider two
choices: either exclude item 'i' (in which case the value is given by dp[i-1][w]) or include item 'i' (in
which case the value is given by dp[i-1][w - weight[i]] + value[i]).
iv. Build a table or array: Create a two-dimensional table or array, 'dp', with dimensions (number of items +
1) x (knapsack capacity + 1). Initialize the values of the table to 0.
v. Fill in the table: Iterate over the items from 1 to the total number of items. For each item, iterate over
the weight capacities from 1 to the knapsack capacity. Apply the recurrence relation defined in step 3 to
compute the maximum value for each subproblem and store it in the table.
vi. Compute the final solution: Once the table is filled, the maximum value that can be achieved using all the
items and the full knapsack capacity is given by dp[number of items][knapsack capacity].
vii. Trace back to construct the solution: To determine which items were selected, start from the last item
and iteratively check if including the item contributes to the maximum value. If it does, include the item
in the solution and reduce the weight capacity accordingly. Repeat this process until all items have been
considered.
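• A minimal Python sketch of steps iv to vii is shown below, assuming the weights, values and knapsack
capacity are supplied as two lists and an integer; the function name and the example data are
illustrative.

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack via the dp[i][w] recurrence described above."""
    n = len(weights)
    # Step iv: (n + 1) x (capacity + 1) table initialised to 0.
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    # Step v: fill the table row by row.
    for i in range(1, n + 1):
        for w in range(1, capacity + 1):
            dp[i][w] = dp[i - 1][w]                        # exclude item i
            if weights[i - 1] <= w:                        # include item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    # Step vii: trace back which items were taken.
    chosen, w = [], capacity
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:                       # item i contributed to the value
            chosen.append(i - 1)                           # 0-based item index
            w -= weights[i - 1]
    return dp[n][capacity], list(reversed(chosen))         # Step vi: best achievable value

print(knapsack([10, 20, 30], [60, 100, 120], 50))          # (220, [1, 2])
```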
SHORTEST PATHS - FLOYD'S ALGORITHM
• Floyd's algorithm, also known as the Floyd-Warshall algorithm, is a dynamic programming
algorithm used to find the shortest paths between all pairs of vertices in a weighted directed
graph. It handles both positive and negative edge weights but does not work with graphs that
contain negative cycles.
• Floyd's algorithm considers all possible intermediate vertices in each iteration, gradually
building up the shortest path distances between pairs of vertices. By computing the shortest
paths for all pairs of vertices, it provides a comprehensive view of the shortest paths in the
graph.
• The time complexity of Floyd's algorithm is O(V^3), where V is the number of vertices in the
graph. The algorithm is efficient for small to medium-sized graphs but may become impractical
for large graphs due to its cubic time complexity.
• Floyd's algorithm is widely used in various applications, such as network routing protocols,
graph analysis, and distance matrix computations.
• Here's a step-by-step explanation of how Floyd's algorithm works:
STEPS
Step 1
 Initialize the distance matrix:
o Create a two-dimensional distance matrix, denoted as 'dist', with dimensions V x V, where V is the
number of vertices in the graph. Initialize the matrix with the edge weights of the graph. If there is no
edge between two vertices, set the corresponding entry in the matrix to infinity.
Step 2
 Compute shortest paths: Perform V iterations, one for each candidate intermediate vertex k, of the
following steps:
o For each pair of vertices (i, j), check if going through vertex k can yield a shorter path than the
current distance between i and j.
o If dist[i][k] + dist[k][j] < dist[i][j], update dist[i][j] with the new shorter distance.
CONT
Step 3
 Finalize the shortest paths:
o After V iterations, the distance matrix 'dist' will contain the shortest distances between all pairs of
vertices.
Step 4
 Trace back to find the shortest paths:
o To reconstruct the actual shortest paths between vertices, create another matrix, denoted as 'next',
of the same dimensions as 'dist'. Initialize next[i][j] to j wherever there is an edge from i to j, so that
it always stores the next vertex on the current shortest path from i to j. During the iterations in step
2, whenever the distance between i and j is improved by going through k, set next[i][j] to next[i][k].
o To find the shortest path from vertex i to vertex j, start at vertex i and repeatedly update i to
next[i][j] until i becomes j. Each intermediate vertex encountered during this process represents a
vertex on the shortest path from i to j.
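• A compact Python sketch of the algorithm and the path reconstruction described above follows; the
names floyd_warshall, next_v and the small example graph are illustrative and not taken from the
presentation.

```python
INF = float("inf")

def floyd_warshall(dist):
    """All-pairs shortest paths; 'dist' is a V x V matrix with INF where no edge exists."""
    V = len(dist)
    # next_v[i][j] is the vertex to move to after i on a shortest path to j.
    next_v = [[j if dist[i][j] != INF else None for j in range(V)] for i in range(V)]
    for k in range(V):                                  # allow vertex k as an intermediate
        for i in range(V):
            for j in range(V):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    next_v[i][j] = next_v[i][k]         # path to j now starts towards k
    return dist, next_v

def path(next_v, i, j):
    """Reconstruct one shortest path from i to j, or [] if j is unreachable."""
    if next_v[i][j] is None:
        return []
    p = [i]
    while i != j:
        i = next_v[i][j]
        p.append(i)
    return p

# Small example graph (0 on the diagonal, INF where there is no edge).
g = [[0, 3, INF, 7],
     [8, 0, 2, INF],
     [5, INF, 0, 1],
     [2, INF, INF, 0]]
d, nxt = floyd_warshall(g)
print(d[0][3], path(nxt, 0, 3))   # 6 and a path such as [0, 1, 2, 3]
```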
CHAINED MATRIX MULTIPLICATION
• Chained matrix multiplication is a classic problem that can be efficiently solved using dynamic
programming. The goal is to determine the optimal order of multiplying a series of matrices to
minimize the total number of scalar multiplications required.

• By applying dynamic programming to the chained matrix multiplication problem, we can efficiently
determine the optimal order of matrix multiplication, minimizing the number of scalar multiplications
required. The time complexity of the dynamic programming solution is O(n^3), where n is the
number of matrices.

• This approach has various applications in areas such as matrix computations, algorithm design, and
optimization problems involving sequences of operations.

• Here's how dynamic programming can be applied to solve the chained matrix multiplication problem:
i. Define the problem: Given a sequence of matrices A1, A2, ..., An, where the dimensions of matrix Ai are given
by d[i-1] x d[i] for i = 1 to n, the objective is to find the optimal order of multiplication that minimizes the total
number of scalar multiplications.
ii. Formulate the subproblem: Define a subproblem as finding the minimum number of scalar multiplications
required to multiply a subsequence of matrices from i to j, where i ≤ j ≤ n.
iii. Define the recurrence relation: Let's denote the minimum number of scalar multiplications required to multiply
matrices from i to j as 'dp[i][j]'. We can define the recurrence relation as dp[i][j] = min(dp[i][k] + dp[k+1][j]
+ d[i-1] * d[k] * d[j]), taken over all split points k with i ≤ k < j. This means that to compute the minimum
number of scalar multiplications for matrices from i to j, we consider all possible split points 'k' between i and j.
We calculate the number of scalar multiplications required to multiply the matrices from i to k and from k+1 to j,
and add the additional multiplications needed to multiply the resulting matrices of dimensions d[i-1] x d[k] and
d[k] x d[j].
iv. Build a table or array: Create a two-dimensional table or array, 'dp', with dimensions n x n. Initialize dp[i][i] to
0 for every i (a single matrix requires no multiplications) and all other entries to infinity.
v. Fill in the table: Iterate over the chain length 'l' from 2 to n (the number of matrices). For each chain length,
iterate over the starting index 'i' from 1 to n-l+1. Calculate the corresponding ending index 'j' as j = i + l - 1.
Apply the recurrence relation defined in step 3 to compute the minimum number of scalar multiplications for
each subproblem and store it in the table.
vi. Compute the final solution: The optimal number of scalar multiplications required to multiply all the matrices is
given by dp[1][n].

vii. Trace back to construct the optimal order: To determine the optimal order of matrix multiplication, we can
maintain an additional table or array 'split' of the same dimensions as 'dp'. During the iterations in step 5,
whenever the minimum number of scalar multiplications is updated, set split[i][j] to the value of 'k' that
achieved the minimum. This information helps us trace back the optimal order of matrix multiplication.

To construct the optimal order, recursively split the matrices at each 'k' value stored in 'split'. This splits the problem into two
smaller subchains, each of which is split in the same way until only single matrices remain.
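• The Python sketch below shows one way steps iv to vii and the trace-back could be implemented; the
function names matrix_chain_order and parenthesize, and the example matrix dimensions, are
illustrative.

```python
def matrix_chain_order(d):
    """d[0..n] are the dimensions: matrix Ai is d[i-1] x d[i]. Returns (min cost, split table)."""
    n = len(d) - 1
    INF = float("inf")
    # Step iv: dp[i][j] = minimum scalar multiplications for Ai..Aj (1-based), 0 on the diagonal.
    dp = [[0 if i == j else INF for j in range(n + 1)] for i in range(n + 1)]
    split = [[0] * (n + 1) for _ in range(n + 1)]
    # Step v: consider chains of increasing length l.
    for l in range(2, n + 1):
        for i in range(1, n - l + 2):
            j = i + l - 1
            for k in range(i, j):
                cost = dp[i][k] + dp[k + 1][j] + d[i - 1] * d[k] * d[j]
                if cost < dp[i][j]:
                    dp[i][j] = cost
                    split[i][j] = k          # Step vii: remember the best split point
    return dp[1][n], split                   # Step vi: dp[1][n] is the optimal cost

def parenthesize(split, i, j):
    """Recursively rebuild the optimal multiplication order from the 'split' table."""
    if i == j:
        return f"A{i}"
    k = split[i][j]
    return f"({parenthesize(split, i, k)} x {parenthesize(split, k + 1, j)})"

# Example: A1 is 10x30, A2 is 30x5, A3 is 5x60.
cost, split = matrix_chain_order([10, 30, 5, 60])
print(cost, parenthesize(split, 1, 3))       # 4500 ((A1 x A2) x A3)
```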
