Solutions
In order to solve a dynamic programming problem, you need to be able to identify the state of the
problem, which is essentially what you need to calculate and the important variables needed to
calculate it. After that, you need to identify the subproblems, i.e. how to break the original
problem into smaller problems, and build a recurrence relation so that you can compute the solution to
the original problem from the solutions of the subproblems. Sometimes the order of these steps may
differ from how I listed them here. But once you are able to identify the two important properties of
the problem, which are optimal substructure and overlapping subproblems, you should be able
to identify the states and the transitions between states (or, equivalently, the recurrence relation).
Problem 1. Problem A - Frog 1
Solution. In order to calculate the total cost after the frog has landed on the last stone, we only
care about which stone the frog lands on prior to jumping to the last stone, and there are only two
possibilities: the stones at distance 1 and 2 from the last stone. So, in order to compute the minimum
cost, the important thing to keep track of is where the frog lands. We will use this information to
identify the dynamic programming state for this problem.
Let dp[i] be the minimum cost to go from 0 to i (we will use 0-indexing here). Note that in order
to reach position i, the previous position of the frog must be either i − 1 or i − 2 (if they exist),
because in each step the frog can only jump to the stone that is either one or two stones ahead of
its current position. Hence, for any position i ≥ 2, we have the following recurrence relation:
dp[i] = min(dp[i − 1] + |h[i] − h[i − 1]|, dp[i − 2] + |h[i] − h[i − 2]|)
For the base cases, we have dp[0] = 0 because no cost is incurred by staying at 0, and dp[1] =
|h[0] − h[1]| because the only way to reach 1 is to jump from 0.
In conclusion, we have the following recursion:
dp[0] = 0
dp[1] = |h[0] − h[1]|
dp[i] = min(dp[i − 1] + |h[i] − h[i − 1]|, dp[i − 2] + |h[i] − h[i − 2]|) ∀i ≥ 2
The time complexity for this algorithm is O(n), where n is the number of stones.
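To make the recursion concrete, here is a minimal Python sketch of this solution (the function name frog1 and the argument h, the list of stone heights, are our own choices):

```python
def frog1(h):
    # h[i] is the height of stone i; returns the minimum total cost
    # to jump from stone 0 to stone len(h) - 1.
    n = len(h)
    dp = [0] * n                    # dp[i] = minimum cost to reach stone i
    if n > 1:
        dp[1] = abs(h[0] - h[1])    # only way to reach stone 1 is from stone 0
    for i in range(2, n):
        dp[i] = min(dp[i - 1] + abs(h[i] - h[i - 1]),
                    dp[i - 2] + abs(h[i] - h[i - 2]))
    return dp[n - 1]

print(frog1([10, 30, 40, 20]))      # 30, matching the recurrence above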
Problem 2. Problem B - Frog 2
Solution. The analysis for this problem is identical to the first one; however, this problem differs
from the first in that it has an additional variable k, which specifies how far ahead the frog
can jump from its current position. To reach position i, the frog can now jump from i − k, i − k +
1, . . . , i − 2, i − 1 (whichever of these exist), because in each step the frog can jump to a stone that is at most k
stones ahead of its current stone. Hence, using the same notation as in problem A, we have the following
recurrence relation:
dp[i] = min_{1 ≤ j ≤ min(k, i)} (dp[i − j] + |h[i] − h[i − j]|).
The time complexity for this algorithm is O(nk), where n is the number of stones and k is the maximum
number of stones the frog can jump ahead in a single jump.
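A sketch of the same idea with the extra inner loop over j, again with our own naming (frog2, h, k):

```python
def frog2(h, k):
    # dp[i] = minimum cost to reach stone i, looking back up to k stones
    n = len(h)
    INF = float("inf")
    dp = [INF] * n
    dp[0] = 0
    for i in range(1, n):
        for j in range(1, min(k, i) + 1):
            dp[i] = min(dp[i], dp[i - j] + abs(h[i] - h[i - j]))
    return dp[n - 1]

print(frog2([10, 30, 40, 50, 20], 3))  # 30
```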
Problem 3. Problem C - Vacation
Solution. In this problem, to calculate the happiness Taro gains on a given day, we need to know
which day it is and what Taro does on that day.
Let dp[i][j] denote the maximum total points of happiness that Taro gains if he does activity j (the
activities A, B, C are indexed by 0, 1, 2 respectively) on the ith day (here, we also use 0-indexing for
days). Then on day 0 the maximum happiness he can achieve by doing activity j is a[0][j] itself. On day
i (i > 0), since Taro doesn't do the same activity on two consecutive days, we have the following recursion:
dp[i][j] = a[i][j] + max_{k ≠ j} dp[i − 1][k],
and the final answer will be the maximum among dp[n − 1][0], dp[n − 1][1] and dp[n − 1][2].
The time complexity for this algorithm is O(n), where n is the number of days.
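A minimal sketch of this recursion in Python; since day i only depends on day i − 1, the sketch keeps just the previous day's row (the name vacation and the argument a, the happiness table, are ours):

```python
def vacation(a):
    # a[i][j] = happiness gained by doing activity j on day i
    n = len(a)
    dp = a[0][:]                    # day 0: dp[j] = a[0][j]
    for i in range(1, n):
        new = [0, 0, 0]
        for j in range(3):
            # best over activities k != j done on the previous day
            new[j] = a[i][j] + max(dp[k] for k in range(3) if k != j)
        dp = new
    return max(dp)

print(vacation([[10, 40, 70], [20, 50, 80], [30, 60, 90]]))  # 210
```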
Problem 4. Problem D - Knapsack 1
Solution. This is a classical dynamic programming problem. Let's assume that we are trying to fit
items into our knapsack gradually, by selecting among the first item, then the first two items, and so on,
up until we are selecting from the whole set. We also gradually increase the weight limit of the knapsack.
To calculate the maximum value we can carry within a given weight limit, we should ask the
following question: does this item fit into the knapsack? If it does, then the maximum value of the
knapsack can be achieved either with or without this item. If it can be achieved without the item, then we
can take the previous solution, because we have already tried the previous items. If the latter is true, then
we put the item in, which takes some weight off of our capacity, and the remaining capacity gets filled with a
previous solution. If the item doesn't fit into the knapsack, then the maximum can only be achieved without that
item, and thus we select the previous solution.
Now, we formalize this process using dynamic programming. We need to calculate the maximum
value we can achieve, given a weight limit and a set of items. So let's define the state like
this: dp[i][j] is the maximum value we can achieve if we select our items from the first i items, and
the weight limit is j. Then, we have these observations:
• If we don't select the ith item, then dp[i][j] is the maximum value we can achieve with weight limit j by
selecting among the first i − 1 items, i.e. dp[i][j] = dp[i − 1][j].
• If we do pick the ith item (provided that it fits into the capacity of the knapsack), then we have
dp[i][j] = dp[i − 1][j − w[i]] + v[i],
where v[i] is the value of the ith item and w[i] is the weight of the ith item. That is, dp[i][j]
is the maximum value we can achieve with the first i − 1 items and capacity j − w[i], plus the value
of the ith item.
So we have this recursion:
dp[0][j] = 0 ∀j
dp[i][j] = dp[i − 1][j] if j < w[i]
dp[i][j] = max(dp[i − 1][j], dp[i − 1][j − w[i]] + v[i]) otherwise
The time complexity for this algorithm is O(nW), where n is the number of items and W is the
capacity of the knapsack.
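A minimal Python sketch of this table (the name knapsack and the representation of items as (weight, value) pairs are our own choices):

```python
def knapsack(items, W):
    # items is a list of (weight, value) pairs; W is the capacity
    n = len(items)
    # dp[i][j] = max value using the first i items with capacity j
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, v = items[i - 1]
        for j in range(W + 1):
            if j < w:
                dp[i][j] = dp[i - 1][j]
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i - 1][j - w] + v)
    return dp[n][W]

print(knapsack([(3, 30), (4, 50), (5, 60)], 8))  # 90
```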
Problem 5. Problem E - Knapsack 2
Solution. In this problem, because the limit on the weight is so big, we can't use the algorithm from
problem D. But note that the limit on the value is small. So, we can define our state in this problem
as follows: let dp[i][j] be the minimum capacity needed so that, by selecting from the first i items, we can
have a total value that is exactly equal to j. Initially, all the states are set to +∞, with the exception
of dp[0][0] = 0. This marks the states that we can't reach (because, unlike
in the previous problem, the values of the items we select must add up to exactly j, not to a value
that does not exceed j).¹
So, we have this recursion:
dp[0][0] = 0, dp[0][j] = +∞ ∀j > 0
dp[i][j] = dp[i − 1][j] if j < v[i]
dp[i][j] = min(dp[i − 1][j], dp[i − 1][j − v[i]] + w[i]) otherwise
The final answer is the largest j such that dp[n][j] ≤ W.
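A minimal Python sketch, with our own naming (knapsack2, items as (weight, value) pairs); since row i only depends on row i − 1, the sketch collapses the table to one dimension and iterates j downwards so each item is used at most once:

```python
def knapsack2(items, W):
    # items is a list of (weight, value) pairs; W may be huge, but the
    # total value S of all items is small.
    INF = float("inf")
    S = sum(v for _, v in items)
    # dp[j] = minimum total weight needed to reach total value exactly j
    dp = [INF] * (S + 1)
    dp[0] = 0
    for w, v in items:
        for j in range(S, v - 1, -1):   # downwards: each item used once
            dp[j] = min(dp[j], dp[j - v] + w)
    return max(j for j in range(S + 1) if dp[j] <= W)

print(knapsack2([(3, 30), (4, 50), (5, 60)], 8))  # 90
```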
The time complexity for this algorithm is O(nS), where n is the number of items and S is the total
value of all the items.
¹This problem is actually very similar to the change-making problem.
Problem 6. Problem F - LCS
Solution. This is another classic dynamic programming problem. Let's analyze what we should do to
solve this problem. Imagine that you have the longest common subsequence of the two strings. If you cut off its last
character, the result is still a common subsequence, and moreover, it must be a longest common subsequence of the corresponding
prefixes of the two strings; otherwise the original subsequence would not be the longest, because we could take
the longer subsequence of the two prefixes and append the character we chopped off.
So, it is important to know, for any two prefixes of our strings, the length of their longest
common subsequence. This leads us to define our states as follows: dp[i][j] is the length of the longest
common subsequence of the prefix of length i of string s and the prefix of length j of string t. Then, we
have this recursion:
dp[i][j] = 0 if i = 0 or j = 0
dp[i][j] = dp[i − 1][j − 1] + 1 if s[i] = t[j]
dp[i][j] = max(dp[i − 1][j], dp[i][j − 1]) otherwise
This means that if the last characters of the prefix of length i of s and the prefix of length j of t match, then the longest
common subsequence for these prefixes is equal to the LCS of the prefixes of length i − 1 of s and length
j − 1 of t, appended with the character at position i of s. Otherwise, it is the longer of the
LCS of the prefixes of length i − 1 of s and j of t, and that of the prefixes of length i of s and j − 1 of t (since we don't take the character
at i).
Now that we have dp[i][j], the length of the LCS of the prefix of length i of s and the prefix of length j of t,
we need to complete another job: print the LCS. This is called tracing in DP problems. Some DP
problems ask you not just to find the optimal value, but also to find a solution that achieves that
value.
So, how do we recover the LCS from the information stored in dp? First, we know that the length of the
LCS is contained in the entry dp[m][n] of our array. So, we will traverse the array using two variables
i and j, starting from i = m and j = n. If the character corresponding to dp[i][j] is a match (s[i] = t[j]), then according to our
transition, dp[i][j] = dp[i − 1][j − 1] + 1, so we decrease both i and j by 1 and add this character to
our answer. If it doesn't match, then dp[i][j] = max(dp[i − 1][j], dp[i][j − 1]), so we compare: if dp[i − 1][j]
is greater than dp[i][j − 1] we decrement i by 1, otherwise we decrement j by 1. We stop traversing
when either i = 0 or j = 0, because when that happens we have exhausted one of the strings,
so we can't proceed any further. Note that the characters are collected in reverse order, so we reverse
them at the end.
The time complexity of this algorithm is O(mn), where m is the length of the first string and n is
the length of the second string.
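A minimal Python sketch of both the table and the tracing step (the function name lcs is ours; strings are 0-indexed in the code, so s[i − 1] in the code plays the role of s[i] above):

```python
def lcs(s, t):
    # dp[i][j] = length of the LCS of s[:i] and t[:j]
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # trace back from dp[m][n] to recover one LCS
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if s[i - 1] == t[j - 1]:
            out.append(s[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))   # characters were collected backwards

print(lcs("axyb", "abyxb"))         # "axb", one LCS of length 3
```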
Problem 7. Problem G - Longest Path
Solution. In this problem, we have a directed graph that contains no directed cycle, commonly
known as a directed acyclic graph, often abbreviated as DAG. Since this graph contains no cycle,
there can be no edge (or path) that goes from vertex u to vertex v if there is already an edge (or
path) from vertex v to vertex u. Thus, we can conveniently define a parent-child relation on this kind
of graph: if there is an edge u → v, then we say that u is a parent of v.
Now, we want to know the length of the longest path in this graph. If the graph has a single vertex,
the length of the longest path is 0. For a graph with more than one vertex, imagine this process: you
are standing at vertex u, and you append a vertex v on top of vertex u, meaning we create an edge
u → v. Then, the longest path in this new graph starting from u is the longest path
starting from v, plus 1. And now, if we append another branch starting from vertex t, i.e. create an edge
u → t, the length of the longest path starting from u is the maximum of the longest path starting from
v and the longest path starting from t, plus 1. From here, we can see how the optimal substructure
is formed: the longest path starting from a vertex can be calculated from the information about the
longest paths starting from its children.
Let dp[u] be the length of the longest path starting from u. We have the following transition:
dp[u] = 0 if u has no child
dp[u] = max_{v is a child of u} (dp[v] + 1) otherwise
Computing dp over all vertices with memoization (or in reverse topological order), the time complexity
is O(V + E), where V is the number of vertices and E is the number of edges.
This problem has a connection to another famous DP problem, the Longest increasing sub-
sequence problem. If you view the order between array elements as directed edges of a graph, that is,
you draw an arrow from element i of the array to element j whenever i < j and a[i] < a[j], then the LIS problem
becomes finding the longest directed path in that graph.
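A minimal Python sketch using memoized depth-first search (the names longest_path, children, and the edge-list input format are our own choices; for very deep graphs an iterative version or a topological-order loop would avoid the recursion limit):

```python
import sys
from functools import lru_cache

def longest_path(n, edges):
    # n vertices labelled 0..n-1; edges is a list of directed pairs (u, v)
    children = [[] for _ in range(n)]
    for u, v in edges:
        children[u].append(v)

    sys.setrecursionlimit(max(1000, n + 10))

    @lru_cache(maxsize=None)
    def dp(u):
        # length of the longest path starting from u
        return max((dp(v) + 1 for v in children[u]), default=0)

    return max(dp(u) for u in range(n))

print(longest_path(4, [(0, 1), (0, 2), (2, 1), (1, 3), (2, 3)]))
# 3 (e.g. the path 0 -> 2 -> 1 -> 3)
```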
Problem 8. Problem H - Grid 1
Solution. Assume that we are standing at cell (x, y). Then, we can have come from at most 2 cells: (x − 1, y)
or (x, y − 1), whichever of these is inside the grid and not occupied by a wall. If there are a ways to reach a
cell, and from that cell we can move to a next cell, then the number of ways to reach that next cell increases
by a. Using these two observations, we define our state as follows: dp[i][j] stands for the number of
ways to reach cell (i, j), and the transition is as follows:
dp[i][j] = dp[i − 1][j] + dp[i][j − 1] if (i − 1, j) and (i, j − 1) both exist and are not walls
dp[i][j] = dp[i − 1][j] if (i, j − 1) does not exist or is a wall
dp[i][j] = dp[i][j − 1] if (i − 1, j) does not exist or is a wall
dp[i][j] = 0 otherwise
The base case is dp[0][0] = 1, and dp[i][j] = 0 whenever cell (i, j) itself is a wall.
The time complexity is O(HW), where H is the height of the grid and W is the width of the
grid.
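A minimal Python sketch of this transition (the name grid1 and the input format, a list of strings with '.' for free cells and '#' for walls, are our own; the count is taken modulo 10^9 + 7, as in the original problem):

```python
def grid1(grid, mod=10**9 + 7):
    # grid is a list of strings: '.' is a free cell, '#' is a wall
    H, W = len(grid), len(grid[0])
    dp = [[0] * W for _ in range(H)]
    dp[0][0] = 1                    # base case: assumes (0, 0) is free
    for i in range(H):
        for j in range(W):
            if grid[i][j] == '#':
                dp[i][j] = 0        # a wall can never be reached
                continue
            if i > 0:
                dp[i][j] = (dp[i][j] + dp[i - 1][j]) % mod
            if j > 0:
                dp[i][j] = (dp[i][j] + dp[i][j - 1]) % mod
    return dp[H - 1][W - 1]

print(grid1([
    "...",
    ".#.",
    "...",
]))  # 2: go around the central wall on either side
```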
Problem 9. Problem I - Coins
Solution. In this problem, we need to calculate the probability that we will have at least ⌈n/2⌉ heads
after tossing n coins. So it is very natural to define our state like this: dp[i][j] is the probability of
having j heads after tossing the first i coins. Note that the order of the coin tosses doesn't matter to the
final probability, so we can just assume we toss the coins sequentially from the 1st coin to the nth coin.
Now, assume that we have tossed the ith coin and we want to have j heads. If the ith coin comes up
heads, then we must have had j − 1 heads from tossing the first i − 1 coins, and the probability of that is
dp[i − 1][j − 1] · p[i]. If the ith coin comes up tails, then we must have had j heads from tossing the first i − 1 coins, so
the probability of this is dp[i − 1][j] · (1 − p[i]). So, our transition will be:
dp[i][j] = 1 if i = 0 and j = 0
dp[i][j] = dp[i − 1][j] · (1 − p[i]) if j = 0 (and i > 0)
dp[i][j] = dp[i − 1][j − 1] · p[i] + dp[i − 1][j] · (1 − p[i]) otherwise
The final answer is the sum of dp[n][j] over all j ≥ ⌈n/2⌉.
The time complexity for this algorithm is O(n²), where n is the number of coins.
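A minimal Python sketch, rolling the table one row at a time (the name coins and the argument p, the list of heads probabilities, are ours):

```python
def coins(p):
    # p[i] = probability that the (i + 1)st coin comes up heads
    n = len(p)
    dp = [1.0] + [0.0] * n          # before any toss: surely 0 heads
    for i in range(n):
        new = [0.0] * (n + 1)
        for j in range(i + 2):      # after i + 1 tosses, at most i + 1 heads
            new[j] = dp[j] * (1 - p[i])         # ith coin is tails
            if j > 0:
                new[j] += dp[j - 1] * p[i]      # ith coin is heads
        dp = new
    # more heads than tails means at least ceil(n / 2) heads (n odd)
    return sum(dp[(n + 1) // 2:])

print(coins([0.30, 0.60, 0.80]))    # 0.612
```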
Problem 10. Problem K - Stones
Solution. If you currently have s stones, and by removing x ∈ A stones you can force your
opponent to lose the game (meaning that with the remaining number of stones, no matter how your
opponent removes stones, you will always be able to find a move that allows you to win the game), then
you will definitely win the game. This gives us the idea of defining the state as follows: let dp[i]
indicate whether it is possible for you to win the game with i stones remaining. The base case is when there are
0 stones remaining: then you will definitely lose, so dp[0] = False. Now, using the observation we have
above, we see that this problem has the optimal substructure property: we can derive the value of dp[i]
from some previous values (namely, dp[i − x] for x ∈ A). We have this recursion:
dp[i] = True if dp[i − x] is False for some x ∈ A with x ≤ i
dp[i] = False otherwise
The first player wins if and only if dp[k] is True, where k is the initial number of stones. The time
complexity of this algorithm is O(nk), where k is the number of stones and n is the size of the
set A.
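A minimal Python sketch (the names stones, A, and K are ours):

```python
def stones(A, K):
    # A: allowed move sizes, K: starting number of stones;
    # dp[i] = True iff the player to move wins with i stones left
    dp = [False] * (K + 1)          # dp[0] = False: no move, you lose
    for i in range(1, K + 1):
        # winning iff some move leaves the opponent in a losing state
        dp[i] = any(x <= i and not dp[i - x] for x in A)
    return dp[K]

print("First" if stones([2, 3], 4) else "Second")  # "First"
```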
Problem 11. Problem L - Deque
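Solution. Assuming the standard statement of this problem (two players alternately remove an element from either end of the sequence a; the first player maximizes and the second minimizes the difference X − Y of their total scores), a natural formulation is an interval DP: let dp[l][r] be the best value of X − Y the player to move can achieve on the remaining subarray a[l..r]; the mover takes either end, and the opponent then plays optimally on what is left. A minimal Python sketch under this assumption (the name deque_game is ours):

```python
def deque_game(a):
    # dp[l][r] = best achievable X - Y for the player to move on a[l..r]
    n = len(a)
    dp = [[0] * n for _ in range(n)]
    for l in range(n):
        dp[l][l] = a[l]                  # one element left: take it
    for length in range(2, n + 1):       # every interval length...
        for l in range(n - length + 1):  # ...and every starting point
            r = l + length - 1
            # take the left or the right end; the opponent then plays
            # optimally on the rest, so their best score is subtracted
            dp[l][r] = max(a[l] - dp[l + 1][r], a[r] - dp[l][r - 1])
    return dp[0][n - 1]

print(deque_game([10, 80, 90, 30]))  # 10
```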
The time complexity for this algorithm is O(n²), since we need to loop over all possible lengths l (from
1 to n) and all possible starting points (from 1 to n − l + 1).