
A Linear-Time Algorithm for Concave One-Dimensional Dynamic Programming*

Zvi Galil¹,² and Kunsoo Park¹

CUCS-469-89

¹ Department of Computer Science, Columbia University, New York, NY 10027
² Department of Computer Science, Tel-Aviv University, Tel-Aviv, Israel
Keywords: Dynamic programming, quadrangle inequality, total monotonicity

The one-dimensional dynamic programming problem is defined as follows: given a real-valued function w(i,j) for integers 0 ≤ i ≤ j ≤ n and E[0], compute

    E[j] = min{ D[i] + w(i,j) : 0 ≤ i < j },    for 1 ≤ j ≤ n,

where D[i] is computed from E[i] in constant time. The least weight subsequence problem [4] is
a special case of the problem where D[i] = E[i]. The modified edit distance problem [3], which
arises in molecular biology, geology, and speech recognition, can be decomposed into 2n copies of
the problem.
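
For reference, the recurrence can of course be evaluated directly in quadratic time. The sketch below (Python, ours rather than the paper's) does exactly that; w is the cost function, e0 is E[0], and d is the caller-supplied constant-time map from E[i] to D[i]:

    def naive_1d_dp(n, w, d, e0):
        """O(n^2) baseline: E[j] = min over 0 <= i < j of D[i] + w(i, j)."""
        E = [e0] + [None] * n          # E[0] is given
        D = [d(e0)] + [None] * n       # D[i] is computed from E[i] in O(1)
        for j in range(1, n + 1):
            E[j] = min(D[i] + w(i, j) for i in range(j))
            D[j] = d(E[j])
        return E

With d the identity (D[i] = E[i]) this is the least weight subsequence problem.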
Let A be an n × m matrix. A[i,j] denotes the element in the ith row and the jth column. A[i : i', j : j'] denotes the submatrix of A that is the intersection of rows i, i + 1, ..., i' and columns j, j + 1, ..., j'. We say that the cost function w is concave if it satisfies the quadrangle inequality [7]

    w(a,c) + w(b,d) ≤ w(b,c) + w(a,d),    for a ≤ b ≤ c ≤ d.

In the concave one-dimensional dynamic programming problem w is concave as defined above. A condition closely related to the quadrangle inequality was introduced by Aggarwal et al. [1]. An n × m matrix A is totally monotone if for all a < b and c < d,

    A[a,c] > A[b,c]  implies  A[a,d] > A[b,d].

Let r(j) be the smallest row index such that A[r(j), j] is the minimum value in column j. Then total monotonicity implies

    r(1) ≤ r(2) ≤ ··· ≤ r(m).    (1)

That is, the minimum row indices are nondecreasing. We say that an element A[i,j] is dead if i ≠ r(j). A submatrix of A is dead if all of its elements are dead. Note that the quadrangle inequality implies total monotonicity, but the converse is not true. Aggarwal et al. [1] show that the row maxima of a totally monotone n × m matrix A can be found in O(n + m) time if A[i,j] for any i, j can be computed in constant time. Their algorithm is easily adapted to find the column minima. We will refer to their algorithm as the SMAWK algorithm.
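
As a concrete illustration (ours, not the paper's), both the quadrangle inequality and relation (1) can be verified by brute force on a small instance; w(i,j) = (j - i)² is one cost function satisfying the inequality:

    def satisfies_qi(w, n):
        # Brute-force check of w(a,c) + w(b,d) <= w(b,c) + w(a,d), a <= b <= c <= d.
        return all(w(a, c) + w(b, d) <= w(b, c) + w(a, d)
                   for a in range(n + 1) for b in range(a, n + 1)
                   for c in range(b, n + 1) for d in range(c, n + 1))

    n = 8
    w = lambda i, j: (j - i) ** 2      # sample cost satisfying the inequality
    assert satisfies_qi(w, n)

    # Row indices of the column minima of A[i][j] = w(i, j) are nondecreasing,
    # as relation (1) predicts.
    A = [[w(i, j) for j in range(n + 1)] for i in range(n + 1)]
    r = [min(range(n + 1), key=lambda i: A[i][j]) for j in range(n + 1)]
    assert all(r[j] <= r[j + 1] for j in range(n))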
Let B[i,j] = D[i] + w(i,j) for 0 ≤ i < j ≤ n. We say that B[i,j] is available if D[i] is known and therefore B[i,j] can be computed in constant time. Then the problem is to find the column minima in the upper triangular matrix B with the restriction that B[i,j] is available only after the column minima for columns 1, 2, ..., i have been found. It is easy to see that when w satisfies the quadrangle inequality, B also satisfies the quadrangle inequality. For the concave problem Hirschberg and Larmore [4] and later Galil and Giancarlo [3] gave O(n log n) algorithms using queues. Wilber [6] proposed an O(n)-time algorithm for the case D[i] = E[i]. However, his algorithm does not work if the availability of matrix B must be obeyed, which happens when many copies of the problem proceed simultaneously (i.e., the computation is interleaved among many copies), as in the modified edit distance problem [3] and the mixed convex and concave cost problem [2]. Eppstein [2] extended Wilber's algorithm for interleaved computation. Our algorithm is more general than Eppstein's; it works for any totally monotone matrix B (we use only relation (1)), whereas Eppstein's algorithm works only when B[i,j] = D[i] + w(i,j). Our algorithm is also simpler than both Wilber's and Eppstein's. Recently, Larmore and Schieber [5] reported another linear-time algorithm, which is quite different from ours.

* Work supported in part by NSF Grants CCR-86-05353 and CCR-88-14977.

[Figure 1. Matrix B at a typical iteration]
The algorithm consists of a sequence of iterations. Figure 1 shows a typical iteration. We use N[j], 1 ≤ j ≤ n, to store interim column minima before row r; N[j] = B[i,j] for some i < r (the usage will be clear shortly). At the beginning of each iteration the following invariants hold:
(a) 0 ≤ r and r < c.
(b) E[j] for all 1 ≤ j < c have been found.
(c) E[j] for j ≥ c is min(N[j], min{ B[i,j] : i ≥ r }).
Invariant (b) means that D[i] for all 0 ≤ i < c are known, and therefore B[i,j] for 0 ≤ i < c and c ≤ j ≤ n is available. Initially, r = 0, c = 1, and all N[j] are +∞.
Let p = min(2c - r, n), and let G be the union of N[c : p] and B[r : c - 1, c : p], with N[c : p] as its first row and B[r : c - 1, c : p] as the other rows. G is a (c - r + 1) × (c - r + 1) matrix unless 2c - r > n. Let F[j], c ≤ j ≤ p, denote the column minima of G. Since matrix G is totally monotone, we use the SMAWK algorithm to find the column minima of G. Once F[c : p] are found, we compute E[j] for j = c, c + 1, ... as follows. Obviously, E[c] = F[c]. For c + 1 ≤ j ≤ p, assume inductively that B[c : j - 2, j : p] (β in Figure 1) is dead and B[j - 1, j : n] is available. It is trivially true when j = c + 1. By the assumption E[j] = min(F[j], B[j - 1, j]).
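
In code, the matrix G of one iteration might be assembled as follows (an illustrative sketch of ours; N is the array of interim minima and B is treated as a function, as above):

    def build_G(N, B, r, c, p):
        # First row: N[c..p]; remaining rows: B[r..c-1, c..p].
        # G has c - r + 1 rows and p - c + 1 columns (square unless 2c - r > n).
        G = [[N[j] for j in range(c, p + 1)]]
        for i in range(r, c):
            G.append([B(i, j) for j in range(c, p + 1)])
        return G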
(1) If B[j - 1, j] < F[j], then E[j] = B[j - 1, j], and by relation (1) B[r : j - 2, j : n] (α, β, γ, and the part of G above β in Figure 1) and N[j : n] are dead. We start a new iteration with c = j + 1 and r = j - 1.
(2) If F[j] ≤ B[j - 1, j], then E[j] = F[j]. We compare B[j - 1, p] with F[p].
(2.1) If B[j - 1, p] < F[p], then B[r : j - 2, p + 1 : n] (α and γ in Figure 1) is dead by relation (1), and B[c : j - 2, j : p] (β in Figure 1) is dead by the assumption. Thus only F[j + 1 : p] among B[0 : j - 2, j + 1 : n] may become column minima in the future computation. We store F[j + 1 : p] in N[j + 1 : p] and start a new iteration with c = j + 1 and r = j - 1.
(2.2) If F[p] ≤ B[j - 1, p], then B[j - 1, j : p] (δ in Figure 1) is dead by relation (1) in submatrix B[r : j - 1, j : p] (β, δ, and the part of G above β). Since B[j, j + 1 : n] is available from E[j], the assumption holds at j + 1. We go on to column j + 1.

procedure concave1D
    c ← 1;
    r ← 0;
    N[1 : n] ← +∞;
    while c ≤ n do
        p ← min(2c - r, n);
        use SMAWK to find column minima F[c : p] of G;
        E[c] ← F[c];
        for j ← c + 1 to p do
            if B[j - 1, j] < F[j] then
                E[j] ← B[j - 1, j];
                break
            else
                E[j] ← F[j];
                if B[j - 1, p] < F[p] then
                    N[j + 1 : p] ← F[j + 1 : p];
                    break
                end if
            end if
        end for
        if j ≤ p then
            c ← j + 1;
            r ← j - 1
        else
            c ← p + 1;
            r ← max(r, row of F[p])
        end if
    end while
end

Figure 2. The algorithm for concave 1D dynamic programming



If case (2.2) is repeated until j = p, we have found E[j] for c ≤ j ≤ p. We start a new iteration with c = p + 1. If the row of F[p] is greater than r, it becomes the new r (it may be smaller than r if it is the row of N[p]). Note that the three invariants hold at the beginning of new iterations. Figure 2 shows the algorithm, where the break statement causes the innermost enclosing loop to be exited immediately.
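
The following Python transcription of Figure 2 may help make the control flow concrete (a sketch under our own conventions, not the authors' code). To stay self-contained it finds the column minima of G by direct scan rather than by SMAWK, so this version is not linear time; replacing the scan with SMAWK restores the O(n) bound. The parameters w, d, and e0 are as in the problem statement:

    INF = float("inf")

    def concave_1d(n, w, d, e0):
        # E[j] = min over 0 <= i < j of D[i] + w(i, j), with D[i] = d(E[i]).
        E = [e0] + [None] * n
        D = [d(e0)] + [None] * n
        B = lambda i, j: D[i] + w(i, j)      # available once D[i] is known
        N = [INF] * (n + 1)                  # interim column minima (values)
        Nrow = [-1] * (n + 1)                # row of B that produced N[j]
        c, r = 1, 0
        while c <= n:
            p = min(2 * c - r, n)
            # Column minima F[c..p] of G (row N[c..p] stacked on B[r..c-1, c..p]).
            # Direct scan for simplicity; SMAWK would do this step in O(c - r) time.
            F, Frow = {}, {}
            for j in range(c, p + 1):
                F[j], Frow[j] = N[j], Nrow[j]
                for i in range(r, c):
                    if B(i, j) < F[j]:
                        F[j], Frow[j] = B(i, j), i
            E[c] = F[c]; D[c] = d(E[c])
            broke = False
            for j in range(c + 1, p + 1):
                if B(j - 1, j) < F[j]:               # case (1)
                    E[j] = B(j - 1, j); D[j] = d(E[j])
                    broke = True
                    break
                E[j] = F[j]; D[j] = d(E[j])          # case (2)
                if B(j - 1, p) < F[p]:               # case (2.1)
                    for k in range(j + 1, p + 1):
                        N[k], Nrow[k] = F[k], Frow[k]
                    broke = True
                    break
                # case (2.2): the assumption holds at j + 1; go on
            if broke:
                c, r = j + 1, j - 1                  # cases (1) and (2.1)
            else:
                c, r = p + 1, max(r, Frow[p])        # (2.2) repeated until j = p
        return E

For example, concave_1d(8, lambda i, j: (j - i) ** 2, lambda e: e, 0.0) computes the E values of a small least weight subsequence instance, and its output can be checked against the quadratic baseline given earlier.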
Each iteration takes time O(c - r). If either case (1) or case (2.1) happens, we charge the time to rows r, ..., c - 1, because r is increased by (j - 1) - r ≥ c - r. If case (2.2) is repeated until j = p, there are two cases. If p < n, we charge the time to columns c, ..., p, because c is increased by (p + 1) - c ≥ c - r + 1. If p = n, we have finished the whole computation, and rows r, ..., c - 1 (< n) have not been charged yet; we charge the time to the rows. Since c and r never decrease, only constant time is charged to each row or column. Thus the total time of the algorithm is linear in n.
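
In symbols (our restatement of the accounting, not from the paper): if iteration t runs with parameters c_t and r_t, then

\[
T \;=\; \sum_{t} O(c_t - r_t) \;\le\; O(\#\{\text{rows charged}\}) + O(\#\{\text{columns charged}\}) \;\le\; O(2n) \;=\; O(n),
\]

since each of the n rows and n columns receives only O(1) charged work over the whole run.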

References

[1] Aggarwal, A., Klawe, M. M., Moran, S., Shor, P., and Wilber, R. Geometric applications of a matrix-searching algorithm. Algorithmica 2 (1987), 195-208.
[2] Eppstein, D. Sequence comparison with mixed convex and concave costs. J. Algorithms, to appear.
[3] Galil, Z., and Giancarlo, R. Speeding up dynamic programming with applications to molecular biology. Theoretical Computer Science 64 (1989), 107-118.
[4] Hirschberg, D. S., and Larmore, L. L. The least weight subsequence problem. SIAM J. Comput. 16, 4 (1987), 628-638.
[5] Larmore, L. L., and Schieber, B. On-line dynamic programming with applications to the prediction of RNA secondary structure. To be presented at the First Annual ACM-SIAM Symposium on Discrete Algorithms.
[6] Wilber, R. The concave least-weight subsequence problem revisited. J. Algorithms 9 (1988), 418-425.
[7] Yao, F. F. Speed-up in dynamic programming. SIAM J. Alg. Disc. Meth. 3 (1982), 532-540.
