Dynamic Programming Presentation
1. Dynamic Programming
2. Making Change
3. Principles Of Optimality
DYNAMIC PROGRAMMING
• Dynamic programming is a technique for solving complex problems by breaking them down into
simpler subproblems. It is particularly useful for problems that exhibit optimal substructure, meaning
that an optimal solution to the problem can be constructed from optimal solutions to its subproblems.
• The main idea behind dynamic programming is to solve a problem by storing the solutions to its
subproblems in a table or an array, so that the solutions to overlapping subproblems can be reused
instead of recomputed. This approach can greatly reduce the time complexity of the overall solution.
• Dynamic programming is often used to solve problems such as finding the shortest path in a graph,
optimizing resource allocation, sequence alignment, knapsack problems, and many others. It is a
powerful technique that can significantly improve the efficiency of solving complex optimization
problems.
STEPS IN DYNAMIC PROGRAMMING
i. Characterize the structure of an optimal solution: Determine how an optimal solution to the
problem can be constructed from optimal solutions to its subproblems.
ii. Define the value of an optimal solution recursively: Express the value of an optimal solution
in terms of the values of smaller subproblems.
iii. Compute the value of an optimal solution in a bottom-up manner: Build a table or an array
and fill it in a bottom-up fashion, starting with the smallest subproblems and gradually
solving larger subproblems until the solution to the original problem is obtained.
iv. Construct an optimal solution: Once the table or array is filled, trace back through the table
to construct an optimal solution.
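To make these four steps concrete, here is a minimal illustrative sketch in Python that computes Fibonacci numbers bottom-up (the Fibonacci example and the function name fib are illustrative choices, not part of the original slides):

    def fib(n):
        # Steps i-ii: fib(n) is defined in terms of the smaller subproblems
        # fib(n - 1) and fib(n - 2): fib(n) = fib(n - 1) + fib(n - 2).
        if n < 2:
            return n
        # Step iii: fill a table bottom-up, starting with the smallest subproblems.
        table = [0] * (n + 1)
        table[1] = 1
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        # Step iv: the entry for the original problem holds the answer. (Fibonacci
        # needs no trace-back, since the value itself is the solution.)
        return table[n]

    print(fib(10))  # prints 55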
MAKING CHANGE
• Dynamic programming can be applied to solve the "Making Change" problem, which involves
finding the minimum number of coins needed to make change for a given amount of money.
• This approach ensures that each subproblem is solved only once, and the solutions to smaller
subproblems are reused to compute the solution for larger subproblems. As a result, the overall
time complexity is significantly reduced compared to a naive recursive approach.
• Note that the steps below outline the general approach to solving the "Making Change" problem
using dynamic programming; the specific implementation details may vary with the programming
language and the exact requirements of the problem.
• Here's a step-by-step explanation of how dynamic programming can be used to solve the
"Making Change" problem:
STEPS
Step 1 – Define the problem:
o Given a set of coin denominations and a target amount, the goal is to find the minimum number of
coins needed to make change for the target amount.
Step 2 – Identify the subproblem:
o Consider the subproblem of making change for smaller amounts, starting from 0 up to the target
amount. We'll build up the solution for larger amounts based on solutions for smaller amounts.
Step 3 – Define the recurrence relation:
o We can define a recurrence relation that expresses the minimum number of coins needed to make
change for a given amount as a function of the minimum number of coins needed for smaller
amounts. The recurrence relation is as follows:
minCoins(amount) = min(minCoins(amount - coin) + 1) for each coin in denominations
o This means that to compute the minimum number of coins needed to make change for the current
amount, we consider each coin in the denominations and take the minimum of (minimum number of
coins needed for (amount - coin)) + 1.
STEPS (CONT.)
Step 4 – Build a table or array:
o Create an array or table to store the minimum number of coins needed for each amount from 0 up
to the target amount. Initialize the value for amount 0 as 0, and initialize all other amounts to
infinity or a large number.
Step 5 – Fill in the table:
o Iterate over the amounts from 1 to the target amount. For each amount, iterate over the coin
denominations and compute the minimum number of coins needed for the current amount using the
recurrence relation defined in Step 3. Update the table with the computed minimum.
Step 6 – Compute the final solution:
o Once the table is filled, the minimum number of coins needed to make change for the target
amount is stored in the table at the target amount position.
Step 7 – Trace back to construct the solution:
o To find the actual coins used to make change, start from the target amount and repeatedly
subtract the selected coin denomination that leads to the minimum number of coins. Keep track of
the selected coins until the amount becomes 0.
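The following Python sketch puts Steps 1-7 together (the function name min_coins, the sample denominations, and the sample amount are illustrative assumptions, not part of the original slides):

    def min_coins(denominations, target):
        # Step 4: table for amounts 0..target; 0 coins for amount 0, infinity elsewhere.
        INF = float("inf")
        dp = [0] + [INF] * target
        # Step 5: fill the table with minCoins(amount) = min(minCoins(amount - coin) + 1).
        for amount in range(1, target + 1):
            for coin in denominations:
                if coin <= amount and dp[amount - coin] + 1 < dp[amount]:
                    dp[amount] = dp[amount - coin] + 1
        # Step 6: the answer for the target amount (infinity means no combination works).
        if dp[target] == INF:
            return None, []
        # Step 7: trace back, repeatedly subtracting a coin that leads to the minimum.
        coins_used, amount = [], target
        while amount > 0:
            for coin in denominations:
                if coin <= amount and dp[amount - coin] == dp[amount] - 1:
                    coins_used.append(coin)
                    amount -= coin
                    break
        return dp[target], coins_used

    # Example: change for 11 with denominations {1, 2, 5} needs 3 coins (5 + 5 + 1).
    print(min_coins([1, 2, 5], 11))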
PRINCIPLES OF OPTIMALITY
• The principles of optimality are fundamental concepts in dynamic programming that
guide the design and formulation of optimal solutions. These principles are central to
the application of dynamic programming to various problems. Two main principles underpin
dynamic programming: the principle of optimality and the principle of overlapping
subproblems.
• By applying the principles of optimality and overlapping subproblems, dynamic
programming allows us to efficiently solve complex problems by breaking them down
into simpler subproblems and reusing solutions. This approach can significantly reduce
the computational time and improve the overall efficiency of the solution.
• It's important to note that not all problems can be solved using dynamic programming,
as they may not exhibit the properties required for dynamic programming to be
applicable. However, for problems that satisfy the principles of optimality and
overlapping subproblems, dynamic programming can be a powerful technique to find
optimal solutions efficiently.
PRINCIPLE OF OPTIMALITY
• The principle of optimality states that an optimal solution to a problem contains within it optimal
solutions to subproblems. In other words, if we have an optimal solution to a problem, then the
solution for any subproblem within that problem must also be optimal.
• This principle allows us to break down a complex problem into smaller subproblems and solve
them independently. By solving the subproblems optimally, we can combine their solutions to
obtain an optimal solution for the original problem.
CHAINED MATRIX MULTIPLICATION
• The principle of optimality is well illustrated by the chained matrix multiplication problem. By applying
dynamic programming to this problem, we can efficiently determine the optimal order of matrix
multiplication, minimizing the number of scalar multiplications required. The time complexity of the
dynamic programming solution is O(n^3), where n is the number of matrices.
• This approach has various applications in areas such as matrix computations, algorithm design, and
optimization problems involving sequences of operations.
• Here's how dynamic programming can be applied to solve the chained matrix multiplication problem:
i. Define the problem: Given a sequence of matrices A1, A2, ..., An, where the dimensions of matrix Ai are given
by d[i-1] x d[i] for i = 1 to n, the objective is to find the optimal order of multiplication that minimizes the total
number of scalar multiplications.
ii. Formulate the subproblem: Define a subproblem as finding the minimum number of scalar multiplications
required to multiply a subsequence of matrices from i to j, where i ≤ j ≤ n.
iii. Define the recurrence relation: Let 'dp[i][j]' denote the minimum number of scalar multiplications required
to multiply matrices from i to j. The recurrence relation is dp[i][j] = min over i ≤ k < j of
(dp[i][k] + dp[k+1][j] + d[i-1] * d[k] * d[j]), with the base case dp[i][i] = 0. This means that to compute
the minimum number of scalar multiplications for matrices from i to j, we consider all possible split points
'k' between i and j: the scalar multiplications required to multiply the matrices from i to k and from k+1
to j, plus the additional multiplications needed to multiply the two resulting matrices of dimensions
d[i-1] x d[k] and d[k] x d[j].
iv. Build a table or array: Create a two-dimensional table or array, 'dp', with dimensions n x n. Initialize the
diagonal entries dp[i][i] to 0 (a single matrix needs no multiplications) and all other entries to infinity.
v. Fill in the table: Iterate over the chain length 'l' from 2 to n (the number of matrices). For each chain length,
iterate over the starting index 'i' from 1 to n-l+1. Calculate the corresponding ending index 'j' as j = i + l - 1.
Apply the recurrence relation defined in step 3 to compute the minimum number of scalar multiplications for
each subproblem and store it in the table.
vi. Compute the final solution: The optimal number of scalar multiplications required to multiply all the matrices is
given by dp[1][n].
vii. Trace back to construct the optimal order: To determine the optimal order of matrix multiplication, we can
maintain an additional table or array 'split' of the same dimensions as 'dp'. During the iterations in step 5,
whenever the minimum number of scalar multiplications is updated, set split[i][j] to the value of 'k' that
achieved the minimum. This information helps us trace back the optimal order of matrix multiplication.
To construct the optimal order, recursively split the matrices at each 'k' value stored in 'split'. This splits the problem into two
smaller subproblems, which are split in the same way until only single matrices remain.
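A minimal Python sketch of steps i-vii, assuming the dimensions are supplied as a list d of length n+1 so that matrix Ai is d[i-1] x d[i] (the function names and the sample dimensions are illustrative, not from the original slides):

    def matrix_chain_order(d):
        n = len(d) - 1  # number of matrices
        INF = float("inf")
        # Step iv: dp[i][j] = minimum scalar multiplications for matrices i..j;
        # the diagonal dp[i][i] is 0, every other entry starts at infinity.
        dp = [[0 if i == j else INF for j in range(n + 1)] for i in range(n + 1)]
        split = [[0] * (n + 1) for _ in range(n + 1)]  # step vii: best split points
        # Step v: chain lengths l = 2..n, starting indices i = 1..n-l+1.
        for l in range(2, n + 1):
            for i in range(1, n - l + 2):
                j = i + l - 1
                # Step iii: try every split point k between i and j.
                for k in range(i, j):
                    cost = dp[i][k] + dp[k + 1][j] + d[i - 1] * d[k] * d[j]
                    if cost < dp[i][j]:
                        dp[i][j] = cost
                        split[i][j] = k
        return dp, split  # step vi: dp[1][n] holds the optimal cost

    def optimal_order(split, i, j):
        # Step vii: rebuild the parenthesization from the stored split points.
        if i == j:
            return "A%d" % i
        k = split[i][j]
        return "(" + optimal_order(split, i, k) + optimal_order(split, k + 1, j) + ")"

    # Example: A1 is 10x30, A2 is 30x5, A3 is 5x60.
    dp, split = matrix_chain_order([10, 30, 5, 60])
    print(dp[1][3], optimal_order(split, 1, 3))  # prints: 4500 ((A1A2)A3)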