A Survey of Dynamic Programming Algorithms
DOI: 10.54254/2755-2721/35/20230392
Yunong Zhang
Civil Engineering College, Xi’an University of Architecture and Technology, Xi’an,
Shaanxi Province, China, 710054
1. Introduction
The dynamic programming algorithm is an important and widely used algorithmic idea that offers great power and flexibility in solving optimization problems. As a bottom-up solution method, the dynamic programming algorithm effectively solves many complex problems in practical applications by decomposing them into smaller sub-problems and exploiting the optimal substructure property.
In the field of computer science, dynamic programming algorithms are widely used in areas such as algorithm design, optimization, sequence alignment, and path search. Their advantage is that many problems which appear to require exponential enumeration can be solved in polynomial time. By carefully designing a problem's state definition and state transition equations, a dynamic programming algorithm can find an optimal or near-optimal solution efficiently, thus providing important support for practical applications.
This article will comprehensively introduce the basic principles of dynamic programming algorithms, review their complexity together with classic case analyses, and discuss some extensions and applications of dynamic programming algorithms. By bringing together research results on dynamic programming algorithms for different problems and the current research frontiers and hotspots in the area, it provides researchers with a conceptual framework and methodology, while also offering directions and inspiration for subsequent research. As a powerful tool, dynamic programming algorithms will continue to play an important role in research and applications in computer science and other fields.
In some cases, space complexity can be reduced through optimization strategies, such as state compression techniques or retaining only the subproblem solutions that are still needed, thereby reducing memory usage.
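As a simple illustration of the second strategy, the following Python sketch (with illustrative names, not taken from this paper) solves the 0-1 knapsack problem while keeping only a single one-dimensional array that is updated in decreasing order of capacity, reducing the space requirement from O(nW) to O(W):

def knapsack_01_space_optimized(weights, values, capacity):
    # dp[j] holds the best value achievable with capacity j using the items seen so far
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # traverse capacities in reverse so that each item is used at most once
        for j in range(capacity, w - 1, -1):
            dp[j] = max(dp[j], dp[j - w] + v)
    return dp[capacity]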
Finally, according to the state transition equation, traverse and fill the dynamic programming table dp. Starting at the upper left corner, calculate row by row or column by column until the entire table is filled. The final optimal solution to the knapsack problem is stored in the cell dp[n][W] in the lower right corner, where n is the number of items and W is the capacity of the knapsack. By backtracking through the dynamic programming table from the lower right corner, according to the conditions of the state transition equation, the specific items placed in the knapsack and their total value can be determined.
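A minimal Python sketch of this filling and backtracking procedure is given below (the function and variable names are illustrative, not part of the original description):

def knapsack_01(weights, values, W):
    n = len(weights)
    # dp[i][j]: maximum value using the first i items with knapsack capacity j
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            dp[i][j] = dp[i - 1][j]                      # do not take item i
            if j >= weights[i - 1]:                      # take item i if it fits
                dp[i][j] = max(dp[i][j], dp[i - 1][j - weights[i - 1]] + values[i - 1])
    # backtrack from dp[n][W] to recover which items were chosen
    chosen, j = [], W
    for i in range(n, 0, -1):
        if dp[i][j] != dp[i - 1][j]:                     # item i was taken
            chosen.append(i)
            j -= weights[i - 1]
    return dp[n][W], chosen[::-1]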
Table 1. 0-1 knapsack problem dp table.

goods \ backpack capacity   0   1   2   3   4   5   6   7   8   9   10
0                           0   0   0   0   0   0   0   0   0   0   0
1                           0   0   6   6   6   6   6   6   6   6   6
2                           0   0   6   8   8   14  14  14  14  14  14
3                           0   0   6   8   10  14  16  18  18  24  24
4                           0   0   6   8   10  14  16  18  20  24  26
Therefore, the maximum total value in the final knapsack is 26. We can backtrack through the dynamic programming table from the lower right corner and determine the specific items and total value placed in the knapsack according to the conditions of the state transition equation. In this example, the items selected for the knapsack are items 1, 2, and 4, and their total value is 26.
(2) Multiple Knapsack Problem
Suppose there is a knapsack with capacity C = 6. The following items are available to choose from:
Item 1: weight w1=2, value v1=4, available quantity n1=2
Item 2: weight w2=3, value v2=5, available quantity n2=3
Item 3: weight w3=4, value v3=6, available quantity n3=1
The requirement of the problem is to select items to maximize the total value of the backpack without
exceeding its capacity.
Similar to the 0-1 knapsack problem, the dynamic programming algorithm creates a two-dimensional array dp, where dp[i][j] represents the maximum value when considering the first i items with knapsack capacity j.
First, initialize the dp array to 0; dp[0][j] = 0 represents the maximum value when no item has been selected.
Secondly, because each item in the multiple knapsack problem has its own available quantity, nested loops are needed to traverse the items and the knapsack capacity and to update the values in the dp array:
For each item i, traverse the knapsack capacity j (from 0 to C):
If item i is not placed in the knapsack, dp[i][j] is updated to dp[i-1][j], i.e. the maximum value obtained with the first i-1 items is kept.
If item i is placed in the knapsack, two steps are taken:
For item i, choose to place k copies of it, where 0 <= k <= min(n[i], j / w[i]).
For each k, compute dp[i-1][j - k*w[i]] + k*v[i], and take the largest of these values as dp[i][j].
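The following Python sketch (illustrative names; assuming weights, values, and available quantities are given as lists) implements the procedure described above; on the data of this example it returns 10, matching the last cell of Table 2:

def multiple_knapsack(weights, values, counts, C):
    n = len(weights)
    # dp[i][j]: maximum value considering the first i item types with knapsack capacity j
    dp = [[0] * (C + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, v, cnt = weights[i - 1], values[i - 1], counts[i - 1]
        for j in range(C + 1):
            best = dp[i - 1][j]                          # take zero copies of item i
            for k in range(1, min(cnt, j // w) + 1):     # take k copies of item i
                best = max(best, dp[i - 1][j - k * w] + k * v)
            dp[i][j] = best
    return dp[n][C]

# Data of this example: multiple_knapsack([2, 3, 4], [4, 5, 6], [2, 3, 1], 6) returns 10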
Table 2. Multiple knapsack problem dp table.

goods \ backpack capacity   0   1   2   3   4   5   6
0                           0   0   0   0   0   0   0
1                           0   0   4   4   8   8   8
2                           0   0   4   5   8   9   10
3                           0   0   4   5   8   9   10
Therefore, the maximum total value in the final knapsack is 10, the value stored in dp[3][6]. Backtracking through the dynamic programming table from the lower right corner, the specific items and total value placed in the knapsack can be determined according to the conditions of the state transition equation. In this example, one optimal selection is two copies of item 2 (or, equivalently, one copy of item 1 and one copy of item 3), with a total value of 10.
4.1. Application
Dynamic programming algorithms have a wide range of applications in many fields, such as stochastic
decision-making, natural language processing, and bioinformatics, for processing data such as text,
speech, DNA sequences, and protein sequences.
In natural language processing, dynamic programming algorithms are widely used in tasks such as speech recognition, syntactic analysis, machine translation, and text generation. By defining appropriate states and state transition equations, dynamic programming algorithms can process speech signals and text data and find the best language model, syntax tree, or translation scheme. For example, in speech recognition, dynamic programming algorithms can be applied to problems such as feature extraction from audio signals, acoustic model matching, and vocabulary probability calculation, so as to achieve high-precision speech-to-text conversion.
In the field of bioinformatics, dynamic programming algorithms play an important role in DNA
sequence alignment, protein structure prediction and genomics research. For example, in terms of
protein structure prediction, AlphaFold has achieved great success in recent years. It is mainly based on deep learning and neural network technology and predicts the three-dimensional structure of a protein by analyzing its amino acid sequence [5]. In protein structure prediction, problems such as sequence
alignment and the longest common subsequence can be solved by dynamic programming algorithms;
these problems can help identify conserved regions and similarities in protein sequences, thereby
inferring their structure and function. AlphaFold uses a similar idea to infer the three-dimensional
structure of the protein by analyzing and modeling the protein sequence. This has important implications
for understanding the mechanisms of biological systems, disease research, and drug design.
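As an illustration of the longest common subsequence problem mentioned above, a standard textbook dynamic programming formulation (not specific to AlphaFold or this paper) can be sketched in Python as follows:

def lcs_length(a, b):
    # dp[i][j]: length of the longest common subsequence of a[:i] and b[:j]
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

# e.g. lcs_length("ABCBDAB", "BDCABA") == 4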
Therefore, the application of dynamic programming algorithms in the fields of natural language
processing and bioinformatics provides researchers with powerful tools and methods to solve complex
problems. Through reasonable modeling of problems, state definition and design of state transition
equations, dynamic programming algorithms can efficiently process and analyze large-scale data,
bringing new progress and innovation to the research and application of these fields.
4.2. Optimization
The dynamic programming algorithm has certain limitations in terms of time and space complexity, but many optimization methods have already been developed for different problems. This study briefly introduces three important optimization techniques: memoized search, state compression dynamic programming, and multi-stage dynamic programming.
Memoized search is an optimization method for dynamic programming algorithms. By saving the solutions of sub-problems that have already been computed, it avoids repeated calculations and improves the efficiency of the algorithm [6]. Memoized search can be regarded as top-down dynamic programming: although the problem is still solved recursively, a cache stores the results already computed during the recursion. When a result is needed again, it is fetched directly from the cache, avoiding duplicate computation. Memoized search can significantly reduce the amount of computation and improve the efficiency of dynamic programming algorithms.
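A minimal Python sketch of memoized search, applied here to the same 0-1 knapsack recurrence and using a cache decorator as the store of already computed sub-problem solutions (names are illustrative), looks as follows:

from functools import lru_cache

def knapsack_memoized(weights, values, W):
    # top-down (memoized) version of the 0-1 knapsack recurrence;
    # lru_cache acts as the cache that stores already computed sub-problem solutions
    @lru_cache(maxsize=None)
    def best(i, cap):
        if i == 0 or cap == 0:
            return 0
        result = best(i - 1, cap)                        # skip item i
        if weights[i - 1] <= cap:                        # or take item i if it fits
            result = max(result, best(i - 1, cap - weights[i - 1]) + values[i - 1])
        return result
    return best(len(weights), W)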
State compression dynamic programming is an optimization technique for problems with
high-dimensional state spaces. In the traditional dynamic programming algorithm, the state is usually
stored in its complete form, which requires considerable space. State compression dynamic programming reduces the size of the state space, and hence the space complexity, by compressing a high-dimensional state into a one-dimensional or low-dimensional encoding. State compression dynamic
programming improves the efficiency of the algorithm by ingeniously designing the state compression
method and using bit operations or other techniques for state transfer and calculation while maintaining
the correctness of the algorithm.
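A classic illustration of state compression is the bitmask formulation of the travelling salesman problem; the following Python sketch (a standard textbook formulation, not taken from this paper) encodes the set of visited cities as the bits of a single integer:

def tsp_bitmask(dist):
    # dist[i][j]: distance from city i to city j; the set of visited cities is
    # compressed into the bits of the integer 'mask' (bit i set = city i visited)
    n = len(dist)
    INF = float("inf")
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0                                         # start the tour at city 0
    for mask in range(1 << n):
        for last in range(n):
            if dp[mask][last] == INF or not (mask >> last) & 1:
                continue
            for nxt in range(n):
                if (mask >> nxt) & 1:                    # city already visited
                    continue
                new_mask = mask | (1 << nxt)
                cost = dp[mask][last] + dist[last][nxt]
                if cost < dp[new_mask][nxt]:
                    dp[new_mask][nxt] = cost
    full = (1 << n) - 1
    return min(dp[full][last] + dist[last][0] for last in range(n))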
Multi-stage dynamic programming is a dynamic programming method applied to multi-stage
decision-making problems. It decomposes the multi-stage decision-making problem into a series of
sub-problems, and obtains the optimal solution by solving the sub-problems stage by stage. Multi-stage
dynamic programming is similar to traditional dynamic programming, but it pays more attention to the
characteristics and mutual influence of decision-making stages when solving problems. Through
multi-stage dynamic programming, a complex multi-stage decision-making problem can be
decomposed into a series of simple sub-problems, and the optimal solution can be obtained through state
transition and step-by-step decision-making.
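As a small illustration of this idea, the following Python sketch (with an assumed input format described in the comments) solves a multi-stage shortest-path problem by working backwards one stage at a time:

def multistage_min_cost(stage_costs):
    # stage_costs[t][i][j]: cost of moving from state i at stage t to state j at stage t + 1
    # work backwards: best[j] is the minimum remaining cost from state j of the current stage
    best = [0.0] * len(stage_costs[-1][0])               # no further cost at the final stage
    for costs in reversed(stage_costs):
        best = [min(c + best[j] for j, c in enumerate(row)) for row in costs]
    return min(best)                                     # best cost over all starting states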
In summary, memoized search, state compression dynamic programming, and multi-stage dynamic programming can all be regarded as improvements and extensions of the dynamic programming algorithm. By introducing new ideas and techniques, they improve the efficiency and broaden the application range of the algorithm.
5. Conclusion
The dynamic programming algorithm is a powerful and widely used algorithm idea that plays an
important role in solving optimization problems. By dividing the problem into subproblems and
exploiting the properties of optimal substructure and overlapping subproblems, dynamic programming
algorithms can efficiently solve many complex problems. In this research, the basic principles of the
dynamic programming algorithm are deeply discussed, including problem modeling, state definition,
derivation and solution of the state transition equation. Through the analysis of classic cases such as the knapsack problem, it demonstrates the application of dynamic programming algorithms to practical
problems. In addition, the paper also introduces some extensions and application areas of dynamic programming algorithms, such as memoized search, state compression dynamic programming, and multi-stage dynamic programming. These extensions and applications further broaden the application range of dynamic programming algorithms and provide more methods for solving complex problems.
Although the dynamic programming algorithm has certain limitations in terms of time and space
complexity, its effectiveness and flexibility in solving optimization problems make it an important tool
in research and practical applications. With the continuous improvement of computing power and continued advances in algorithm design, dynamic programming algorithms can be expected to play an even greater role in a wider range of fields and problems in the future.
To sum up, the dynamic programming algorithm is a powerful algorithmic idea that has a wide range
of applications in the solution of optimization problems. By deeply understanding and studying the
principles and applications of dynamic programming algorithms, we can provide efficient solutions to
practical problems and open up new possibilities for further research in computer science and other
fields.
References
[1] Cormen T H, Leiserson C E, Rivest R L, Stein C. Introduction to Algorithms[M]. MIT Press, 2001.
[2] Bertsekas D. Dynamic programming and optimal control: Volume I[M]. Athena scientific, 2012.
[3] Martello S, Toth P. Knapsack problems: algorithms and computer implementations[M]. John
Wiley & Sons, Inc., 1990.
[4] Kellerer H, Pferschy U, Pisinger D, et al. Multidimensional knapsack problems[M]. Springer
Berlin Heidelberg, 2004.
[5] Senior A W, Evans R, Jumper J, et al. Improved protein structure prediction using potentials from
deep learning[J]. Nature, 2020, 577(7792): 706-710.
[6] Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms[M].
Society for Industrial and Applied Mathematics, 2010.