Dynamic Programming (DP) on Arrays Tutorial
We know that Dynamic Programming is a way to reduce the time complexity of a problem using memoization or tabulation of the overlapping states. While applying DP on arrays, the array indices act as the DP states and transitions occur between indices.
How to Identify if an Array problem has a DP solution?
- Using Constraints: Generally, in Competitive Programming the intended solution performs around 10^7 to 10^8 operations, so if the array size is N = 1000 to 5000 we can expect an O(N^2) DP solution, and for N = 100 we can even afford an O(N^3) DP solution.
- Minimize/Maximize Values: Dynamic Programming can be applied if the problem is to maximize or minimize any value over all possible scenarios.
- Various functions in the problem: If the question defines operations through functions such as f(x), then we can try to think towards Dynamic Programming.
- Greedy fails: If you can think of a test case where the Greedy solution fails, then the problem has a high chance of having a DP solution.
- Recursive Sub-Problems: Determining whether the problem has a recursive solution with overlapping subproblems is a difficult task, but if you manage to do that, then DP can be a solution, provided the constraints allow it.
Quick Approximation of DP Time & Space Complexity for Arrays:
Before coding the solution, we need to verify whether the time complexity of our solution fits the given constraints. We can approximate it using the formulas below:
Time Complexity = Total Number of States * ( 1 + Average number of Transitions per State)
Space Complexity = Total Number of States (fewer if the transitions allow space optimization).
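For example, in the tile-jumping problem discussed below there are N states (one per tile) and each state has at most 2 incoming transitions, so Time Complexity ≈ N * (1 + 2) = O(N) and Space Complexity ≈ N = O(N).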
Why do classical DP problems differ from DP problems on Arrays?
The problems that we see on Competitive Programming platforms are mainly made up of two parts: a Standard part and an Adhoc part.
The Adhoc part of the problem is what makes each problem unique and also what decides its difficulty. One needs to make several observations to reduce the Adhoc part to a standard solution. Good DP problems often require Adhoc observations to figure out the properties that allow you to apply DP to them.
Let us see how an Adhoc part changes a standard DP problem:
Problem: You are given N tiles numbered 1 to N, and initially you are standing on tile 1. In one move you can make a jump of either 1 or 2 in the forward direction. Determine the total number of ways to reach tile N from tile 1.
Example: N=4
Output: 3
Explanation: The 3 possible ways are:
1->2->3->4
1->2->4
1->3->4
Adhoc Part: The Adhoc part of the problem frames the question as moving over tiles from 1 to N in different ways. The programmer will dry-run several test cases and make observations in order to reach the solution, and might even try to derive a mathematical formula for it.
Standard Part: If observed carefully, this question is nothing but printing the N'th Fibonacci number, one of the simplest DP problems that exists. Each tile T can be reached only via the previous two tiles, that is, T depends on T-1 and T-2. Using this, we can write the DP transition as:
- number_of_ways[i] = number_of_ways[i-1] + number_of_ways[i-2]
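A minimal bottom-up sketch of this recurrence in C++ (the function and array names are illustrative, not from the original statement):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Counts the number of ways to reach tile N from tile 1
// using jumps of length 1 or 2 (the tile-jumping problem above).
long long countWays(int n) {
    vector<long long> ways(n + 1, 0);
    ways[1] = 1;                               // we start on tile 1
    for (int i = 2; i <= n; i++) {
        ways[i] = ways[i - 1];                 // last jump of length 1
        if (i >= 3) ways[i] += ways[i - 2];    // last jump of length 2
    }
    return ways[n];
}

int main() {
    cout << countWays(4) << "\n";              // prints 3, matching the example
    return 0;
}
```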
Other simple recurrences can be tabulated over an array in the same way. For example, the recurrence relation for the factorial of a number n is given by:
- factorial[n] = factorial[n-1] * n
The recurrence relation for the inverse factorial (the modular inverse of the factorial) of a number n is given by:
- inverse_fact[n] = (n+1) * inverse_fact[n+1]
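These two recurrences are typically precomputed together for combinatorics queries. Below is a sketch assuming a prime modulus (1e9+7), so that the starting value inverse_fact[MAXN] can be obtained via Fermat's little theorem; the names and the limit MAXN are illustrative:

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;   // assumed prime modulus
const int MAXN = 100000;         // illustrative upper limit

long long fact[MAXN + 1], inv_fact[MAXN + 1];

// Fast modular exponentiation, used once to start the inverse_fact recurrence.
long long mod_pow(long long base, long long exp, long long mod) {
    long long result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = result * base % mod;
        base = base * base % mod;
        exp >>= 1;
    }
    return result;
}

void precompute() {
    fact[0] = 1;
    for (int n = 1; n <= MAXN; n++)
        fact[n] = fact[n - 1] * n % MOD;                 // factorial[n] = factorial[n-1] * n
    inv_fact[MAXN] = mod_pow(fact[MAXN], MOD - 2, MOD);  // Fermat's little theorem
    for (int n = MAXN - 1; n >= 0; n--)
        inv_fact[n] = inv_fact[n + 1] * (n + 1) % MOD;   // inverse_fact[n] = (n+1) * inverse_fact[n+1]
}
```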
Recognizing which type of Standard DP can be used to solve Array Problems:
When tackling DP problems during a contest, the most difficult task is to define the states and transitions. These are mostly variations of existing standard DP problems, so let us see which clues point towards a particular standard DP problem:
0/1 Knapsack pattern:
- There are Weight/Volume/Value constraints in the problem.
- It is a 0-1 decision problem, i.e. each item is either included in or excluded from the final result.
- The task is to minimize or maximize the profit.
- An O(N*W) solution (N items, capacity W) is permissible; a sketch follows this list.
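A minimal 0/1 Knapsack sketch matching these clues; the item weights, values and the capacity are hypothetical inputs and the function name is illustrative:

```cpp
#include <bits/stdc++.h>
using namespace std;

// 0/1 Knapsack: maximize total value with total weight <= capacity,
// each item taken at most once. States: (item index, remaining capacity).
int knapsack(const vector<int>& wt, const vector<int>& val, int capacity) {
    int n = wt.size();
    vector<vector<int>> dp(n + 1, vector<int>(capacity + 1, 0));
    for (int i = 1; i <= n; i++) {
        for (int w = 0; w <= capacity; w++) {
            dp[i][w] = dp[i - 1][w];                  // exclude item i
            if (wt[i - 1] <= w)                       // include item i if it fits
                dp[i][w] = max(dp[i][w], dp[i - 1][w - wt[i - 1]] + val[i - 1]);
        }
    }
    return dp[n][capacity];
}
```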
Coin Change pattern:
- Array values can be used as denominations to form a particular sum.
- The minimum number of array values required to achieve a target is asked.
- The total number of ways to form a target is required as a result.
- An O(N*target) solution is permissible; a sketch follows this list.
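A minimal Coin Change counting sketch matching these clues, assuming each denomination can be reused any number of times (names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Coin Change (counting): number of ways to form 'target'
// using the given denominations, each usable any number of times.
long long countCoinWays(const vector<int>& coins, int target) {
    vector<long long> dp(target + 1, 0);
    dp[0] = 1;                              // one way to form sum 0: pick nothing
    for (int c : coins)                     // iterate coins in the outer loop so that
        for (int s = c; s <= target; s++)   // combinations (not permutations) are counted
            dp[s] += dp[s - c];
    return dp[target];
}
```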
Fibonacci-style (linear DP) pattern:
- For a particular index i, dp[i] depends on a few previously calculated indices.
- An O(N) solution is required.
- State transition: F(n) = F(n-1) + F(n-2), as in the tile-jumping example above; a space-optimized sketch follows this list.
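Since dp[i] here depends only on the last two values, the O(N) table can be shrunk to O(1) extra space. A minimal sketch (function name illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Fibonacci-style linear DP where dp[i] depends only on dp[i-1] and dp[i-2];
// only the last two states are kept, so the extra space is O(1).
long long fib(int n) {
    if (n <= 1) return n;
    long long prev2 = 0, prev1 = 1;       // F(0), F(1)
    for (int i = 2; i <= n; i++) {
        long long cur = prev1 + prev2;    // F(i) = F(i-1) + F(i-2)
        prev2 = prev1;
        prev1 = cur;
    }
    return prev1;
}
```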
Matrix Chain Multiplication (MCM) pattern:
- If each pair of indices {i, j} has to be used as a state, we can try to think towards the MCM (interval DP) approach.
- The constraints allow a time complexity of O(N^3); a sketch follows this list.
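A minimal O(N^3) Matrix Chain Multiplication sketch, assuming matrix i has dimensions dims[i-1] x dims[i] (names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Matrix Chain Multiplication: dims has size n+1 and matrix i has
// dimensions dims[i-1] x dims[i]. dp[i][j] = minimum number of scalar
// multiplications needed to multiply matrices i..j. O(N^3) overall.
long long matrixChain(const vector<int>& dims) {
    int n = dims.size() - 1;
    vector<vector<long long>> dp(n + 1, vector<long long>(n + 1, 0));
    for (int len = 2; len <= n; len++) {            // interval length
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            dp[i][j] = LLONG_MAX;
            for (int k = i; k < j; k++)             // split point
                dp[i][j] = min(dp[i][j],
                               dp[i][k] + dp[k + 1][j] +
                               1LL * dims[i - 1] * dims[k] * dims[j]);
        }
    }
    return dp[1][n];
}
```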
Longest Increasing Subsequence (LIS) pattern:
- dp[i] represents the length of the LIS ending at index i. This idea extends to any context where we calculate the best answer ending at index i; a sketch follows.
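A minimal O(N^2) LIS sketch using exactly this state definition (function name illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Longest Increasing Subsequence in O(N^2):
// dp[i] = length of the LIS that ends exactly at index i.
int lengthOfLIS(const vector<int>& a) {
    int n = a.size();
    if (n == 0) return 0;
    vector<int> dp(n, 1);                 // every element alone is an LIS of length 1
    for (int i = 1; i < n; i++)
        for (int j = 0; j < i; j++)
            if (a[j] < a[i])              // a[i] can extend the LIS ending at j
                dp[i] = max(dp[i], dp[j] + 1);
    return *max_element(dp.begin(), dp.end());
}
```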
Edit Distance pattern:
- In Edit Distance, dp[i][j] represents the minimum number of operations to convert the first i characters of string1 to the first j characters of string2.
Transition:
- If characters s1[i] and s2[j] are the same: dp[i][j] = dp[i-1][j-1].
- Otherwise: dp[i][j] = min(dp[i-1][j], dp[i][j-1], dp[i-1][j-1]) + 1.
The transitions in Edit Distance are very commonly seen in questions where a particular dp value depends on previously calculated adjacent dp cells; a sketch follows.
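A minimal Edit Distance sketch using this state and transition (function name illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Edit Distance: dp[i][j] = minimum operations (insert, delete, replace)
// to convert the first i characters of s1 into the first j characters of s2.
int editDistance(const string& s1, const string& s2) {
    int n = s1.size(), m = s2.size();
    vector<vector<int>> dp(n + 1, vector<int>(m + 1, 0));
    for (int i = 0; i <= n; i++) dp[i][0] = i;   // delete all i characters
    for (int j = 0; j <= m; j++) dp[0][j] = j;   // insert all j characters
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++)
            if (s1[i - 1] == s2[j - 1])
                dp[i][j] = dp[i - 1][j - 1];
            else
                dp[i][j] = min({dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1]}) + 1;
    return dp[n][m];
}
```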
Longest Common Subsequence (LCS) pattern:
- In LCS, dp[i][j] represents the length of the LCS of the first i characters of string1 and the first j characters of string2.
Transition:
- If characters s1[i] and s2[j] are the same: dp[i][j] = dp[i-1][j-1] + 1.
- Otherwise: dp[i][j] = max(dp[i-1][j], dp[i][j-1]).
Similar to Edit Distance, LCS transitions are also frequently observed in various problems; a sketch follows.
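A minimal LCS sketch using this state and transition (function name illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Longest Common Subsequence: dp[i][j] = length of the LCS of the
// first i characters of s1 and the first j characters of s2.
int lcs(const string& s1, const string& s2) {
    int n = s1.size(), m = s2.size();
    vector<vector<int>> dp(n + 1, vector<int>(m + 1, 0));
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++)
            if (s1[i - 1] == s2[j - 1])
                dp[i][j] = dp[i - 1][j - 1] + 1;   // characters match, extend the LCS
            else
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1]);
    return dp[n][m];
}
```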
Practice/Solve DP on Array Problems: