Algo10 DynamicProgramming

Algorithms

Pei-Yu Lin
pagelin@nkust.edu.tw
Q: Which algorithm solves the knapsack problem?

A. Greedy-by-value: repeatedly add the item with maximum value v_i
B. Greedy-by-weight: repeatedly add the item with minimum weight w_i
C. Greedy-by-ratio: repeatedly add the item with maximum ratio v_i / w_i
D. None of the above.
(Figure: can a greedy algorithm maximize the value here?)
The knapsack problem: what about brute force?

Enumerate the power set: with n items there are 2^n subsets, so the running time is O(2^n). For example, 3 items give 2^3 = 8 subsets to check. The 0/1 knapsack problem is NP-complete.
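As a sketch of this brute-force enumeration (the item data below is hypothetical, chosen only for illustration):

```python
from itertools import combinations

def knapsack_bruteforce(items, W):
    """Enumerate all 2^n subsets (the power set) and keep the best feasible one."""
    n = len(items)
    best = 0
    for r in range(n + 1):
        for subset in combinations(items, r):
            weight = sum(w for w, v in subset)
            value = sum(v for w, v in subset)
            if weight <= W:          # feasible subset: keep the best value seen
                best = max(best, value)
    return best

# n = 3 items -> 2^3 = 8 subsets to check; items are (weight, value) pairs
items = [(2, 3), (3, 4), (4, 5)]
print(knapsack_bruteforce(items, 5))  # prints 7 (items of weight 2 and 3)
```

The double loop visits every subset exactly once, which is why the cost grows as 2^n regardless of how the subsets are generated.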
Algorithmic paradigms

• Enumeration (brute force)
• Recursion
• Divide-and-Conquer
• Greedy
• Dynamic Programming
• K-Nearest Neighbor
Greedy

"Choose without hesitation; never look back after choosing."
• Process the input in some order, myopically making irrevocable decisions.
• Examples: shortest path (Dijkstra's algorithm), interval scheduling, minimum spanning tree.
• When does the locally optimal choice lead to a globally optimal solution?
Divide-and-Conquer

Examples: binary search, quick sort, merge sort.
• Recursively split a complex problem into two or more similar subproblems, until the subproblems are simple enough to solve directly.

1. Divide the problem into subproblems.
2. Conquer: recursively solve each subproblem.
3. Combine the solutions.
Dynamic Programming (compare with Divide-and-Conquer and Greedy)

• Optimization problems (e.g., weight & value)
• Break up a problem into a series of overlapping subproblems
• Store the results of subproblems
  − Iteration & memoization
  − A fancy name for caching intermediate results in a table for later reuse
Divide-and-Conquer

• Break up a problem into independent subproblems
• Solve each subproblem
• Combine solutions to subproblems to form the solution to the original problem
Dynamic Programming

• Break up a problem into a series of overlapping subproblems
• Store subproblem results, then solve
• Combine solutions to smaller subproblems to form the solution to a larger subproblem

(Figure: recursion trees of Dynamic Programming vs. Divide-and-Conquer — DP shares overlapping subtrees, Divide-and-Conquer does not.)
Dynamic Programming examples:
• The knapsack problem
• Weighted interval scheduling
• Weighted shortest paths
The 0/1 knapsack problem
• Input
  − A set S of n items and a knapsack of capacity W; each item is either taken entirely or not at all (x_i ∈ {0, 1})
  − w_i: weight of item i, w_i > 0, i = 1, 2, …, n
  − v_i: value of item i
  − W: capacity of the knapsack
• Output
  − Fill the knapsack so as to maximize the total value: max Σ_{i∈S} v_i
The 0/1 knapsack problem
• Which subproblems?
  A. OPT(w) = optimal value of the knapsack problem with weight limit w.
  B. OPT(i) = optimal value of the knapsack problem with items 1, …, i.
  C. OPT(i, w) = optimal value of the knapsack problem with items 1, …, i subject to weight limit w.
  D. Any of the above.

• Optimization problem formulation (depends on v_i, w_i, and W):
  max Σ_{i∈S} v_i
  s.t. Σ_{i∈S} w_i ≤ W, S ⊆ {1, 2, …, n}
The 0/1 knapsack problem
Def. OPT(i, w) = value of an optimal solution using items 1..i subject to weight limit w
Goal. OPT(n, W) = value of the final optimal solution O using items 1..n subject to weight limit W

• If item n ∉ O:  OPT(n, w) = OPT(n-1, w)
• If item n ∈ O:  OPT(n, w) = v_n + OPT(n-1, w - w_n)

Taking the better of the two cases: OPT(i, w) = max{ OPT(i-1, w), v_i + OPT(i-1, w - w_i) }
Worked example: three items, knapsack capacity W = 4.

item   weight   value
1      1        $1500
2      4        $3000
3      3        $2000

Iterating M[i, w] over weight limits w = 1..4:

items       w=1     w=2     w=3     w=4
1           $1500   $1500   $1500   $1500   ← max(1500, 0)
1, 2        $1500   $1500   $1500   $3000   ← max(1500, 3000)
1, 2, 3     $1500   $1500   $2000   $3500   ← max(3000, 2000 + 1500)

The optimum $3500 takes items 3 and 1 (weights 3 + 1 = 4).
Recurrence

• OPT(i, w) =
    0                                               , if i = 0
    OPT(i-1, w)                                     , if w_i > w
    max{ OPT(i-1, w), v_i + OPT(i-1, w - w_i) }     , otherwise

knapsack(n, w, v, W) {
    for w = 0 to W
        M[0, w] = 0
    for i = 1 to n                          // O(nW) table fill
        for w = 0 to W
            if (w_i > w)
                M[i, w] = M[i-1, w]
            else
                M[i, w] = max( M[i-1, w], v_i + M[i-1, w - w_i] )
    return M[n, W]
}
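The pseudocode above translates almost line-for-line into Python; the sketch below fills the same bottom-up table, using 0-indexed lists instead of the slide's 1-indexed arrays:

```python
def knapsack(weights, values, W):
    """Bottom-up 0/1 knapsack; M[i][w] = best value using items 1..i with limit w."""
    n = len(weights)
    M = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 is the i = 0 base case
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for w in range(W + 1):
            if wi > w:
                M[i][w] = M[i - 1][w]           # item i does not fit
            else:
                M[i][w] = max(M[i - 1][w],                  # skip item i
                              vi + M[i - 1][w - wi])        # take item i
    return M[n][W]
```

On the earlier worked example, knapsack([1, 4, 3], [1500, 3000, 2000], 4) returns 3500, matching the table.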
Time complexity
• O(nW): pseudo-polynomial
  − As W grows, so does O(nW); W is not polynomial in the input size (encoding W takes only about log W bits)
  − The 0/1 knapsack problem is NP-complete
Q: What is the difference between the 0/1 knapsack problem and the fractional knapsack problem?
Q: Fractional knapsack, greedy by value/weight ratio, W = 15

i         1    2    3    4    5    6    7
v_i       8    6    15   4    2    5    9
w_i       2    3    5    4    4    2    6
v_i/w_i   4    2    3    1    0.5  2.5  1.5
priority  1    4    2    6    7    3    5

Remaining capacity after each pick: 15 → 13 → 8 → 6 → 3 → 0 (taking 3/6 of item 7)

A: (E) value = 8 + 15 + 5 + 6 + 9×(3/6) = 38.5
Q: Fractional knapsack, greedy by value/weight ratio, W = 20

i         1     2    3    4    5    6
v_i       3     8    16   7    9    20
w_i       4     2    8    5    5    8
v_i/w_i   0.75  4    2    1.4  1.8  2.5
priority  6     1    3    5    4    2

Remaining capacity after each pick: 20 → 18 → 10 → 2 → 0 (taking 2/5 of item 5)

A: (A) value = 8 + 20 + 16 + 9×(2/5) = 47.6
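The greedy-by-ratio procedure used in these two exercises can be sketched as follows (valid for the fractional variant only; this sketch is mine, not code from the slides):

```python
def fractional_knapsack(items, W):
    """Greedy by value/weight ratio: take whole items in ratio order,
    then a fraction of the first item that no longer fits."""
    total = 0.0
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if W >= w:
            total += v          # take the whole item
            W -= w
        else:
            total += v * W / w  # take only the fraction that still fits
            break
    return total

# First exercise: (weight, value) pairs, W = 15 -> 38.5
print(fractional_knapsack([(2, 8), (3, 6), (5, 15), (4, 4), (4, 2), (2, 5), (6, 9)], 15))
```

The same call with the second exercise's items and W = 20 yields 47.6, matching answer (A).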
Q: Your task is to solve the 0/1 knapsack problem using a dynamic programming approach, under the following conditions:
• The number of items: n = 4
• Maximum weight to pack: W = 5
• Each item i has a weight w_i and a value v_i, given as (w_i, v_i): (2, 3), (3, 4), (4, 5), (5, 6)

i     1   2   3   4
w_i   2   3   4   5
v_i   3   4   5   6

(1) How do you solve the subproblems using the dynamic programming approach?
(2) What is the maximum value after solving the problem?
(3) Find the items that achieve the maximum value.
29
(2) what the maximum value is after solving the problem.
(3) find out the data items to make the maximum value.
i 1 2 3 4
wi 2 3 4 5
vi 3 4 5 6

W 1 2 3 4 5
item
1 0 i=1, v=3 i=1, v=3 i=1, v=3 i=1, v=3
1, 2 0 i=1, v=3 i=2, v=4 i=2, v=4 i=2, v=4
+[1, W=1]=0 + [1, W=2]
i=2&1, v=4+3=7
1, 2, 3 0 i=1, v=3 i=2, v=4 i=3, v=5 i=3, v=5
or + [1~2, 1]=0
[1~2, 4]=4 or
[1~2, 5]
i=2&1, v=4+3=7
1, 2, 3, 4 0 i=1, v=3 i=2, v=4 i=3, v=5 i=4, v=6
or
[1~3, 5]
30
i=2&1, v=4+3=7
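A sketch that checks this exercise with the same recurrence, plus a backtracking pass to recover the chosen items (the function name and backtracking structure are mine, not from the slides):

```python
def knapsack_with_items(weights, values, W):
    """0/1 knapsack table fill, then backtrack to find which items were taken."""
    n = len(weights)
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] > w:
                M[i][w] = M[i - 1][w]
            else:
                M[i][w] = max(M[i - 1][w],
                              values[i - 1] + M[i - 1][w - weights[i - 1]])
    # Backtrack: if the value changed at row i, item i was taken.
    chosen, w = [], W
    for i in range(n, 0, -1):
        if M[i][w] != M[i - 1][w]:
            chosen.append(i)
            w -= weights[i - 1]
    return M[n][W], sorted(chosen)

print(knapsack_with_items([2, 3, 4, 5], [3, 4, 5, 6], 5))  # (7, [1, 2])
```

The backtracking pass answers part (3) directly: the optimum 7 is reached by taking items 1 and 2.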
(Figure: intervals with values 7, 15, 30, 40, and 5 — what is the maximum total value of mutually compatible intervals?)
Weighted interval scheduling
• Input
  − A set of requests {1, 2, …, k, …, n}
  − each with start time s(k) and finish time f(k)
  − each with value v_k > 0
• Output
  − Find a maximum-value subset of mutually compatible requests (requests that don't overlap)
Weighted interval scheduling
• Assume requests are in ascending order of finish time f(·)
Def. p(i) = largest index j < i such that job j is compatible with job i

(Figure: jobs art (7), eng (15), math (30), cs (40), music (5), with p(art) = 0, p(eng) = 0, p(math) = art, p(cs) = 0, p(music) = math.)
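Once jobs are sorted by finish time, each p(i) can be found by binary search on the finish times; the start/finish values below are hypothetical, chosen only to reproduce the slide's p-values:

```python
import bisect

def compute_p(start, finish):
    """p[i] = largest j < i with finish[j] <= start[i] (jobs 1-indexed,
    sorted by finish time); p[i] = 0 means no compatible earlier job."""
    p = [0] * (len(start) + 1)
    for i in range(1, len(start) + 1):
        # number of jobs finishing no later than job i starts
        p[i] = bisect.bisect_right(finish, start[i - 1])
    return p

# Hypothetical times consistent with p(1)=0, p(2)=0, p(3)=1, p(4)=2, p(5)=3
start = [0, 1, 3, 4, 5]
finish = [3, 4, 5, 6, 8]
print(compute_p(start, finish))  # [0, 0, 0, 1, 2, 3]
```

Each lookup is O(log n), so all p-values cost O(n log n) — the same order as the initial sort.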
Weighted interval scheduling
Def. OPT(i) = max value of any subset of mutually compatible requests among 1..i

• If i ∉ OPT(i):  OPT(i) = OPT(i-1)
• If i ∈ OPT(i):  OPT(i) = v_i + OPT(p(i))

Taking the better case: OPT(i) = max{ OPT(i-1), v_i + OPT(p(i)) }

Recurrence

• OPT(i) =
    0                                    , if i = 0
    max{ OPT(i-1), v_i + OPT(p(i)) }     , if i > 0

• Implementation: top-down (memoization) or bottom-up (iteration)

Example: values (7, 15, 30, 40, 5) with p(1) = 0, p(2) = 0, p(3) = 1, p(4) = 2, p(5) = 3
Top-down, without memoization

OPT(i) {
    if (i = 0)
        return 0
    else
        return max{ OPT(i-1), v_i + OPT(p(i)) }
}

✗ Plain recursion recomputes the same subproblems over and over — exponential time.
Top-down with memoization — O(n)

OPT(i) {
    if (i = 0)
        return 0
    if (M[i] is not empty)
        return M[i]
    else
        return M[i] = max{ OPT(i-1), v_i + OPT(p(i)) }
}

Memo table for the example: M[1] = 7, M[2] = 15, M[3] = 37, M[4] = 55, M[5] = 55
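A minimal Python sketch of the memoized top-down version on the slide's example values (using `lru_cache` as the memo table M is my choice, not the slide's):

```python
from functools import lru_cache

v = [0, 7, 15, 30, 40, 5]   # v[1..5]: values from the example (v[0] unused)
p = [0, 0, 0, 1, 2, 3]      # p[1..5]: compatible-predecessor indices

@lru_cache(maxsize=None)
def OPT(i):
    """Top-down with memoization: each OPT(i) is computed once, so O(n) work."""
    if i == 0:
        return 0
    return max(OPT(i - 1), v[i] + OPT(p[i]))

print(OPT(5))  # 55
```

The cache plays the role of M[i]: the second and later calls to any OPT(i) return immediately.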
Bottom-up iteration — O(n)

OPT-iterative(n) {
    M[0] = 0
    for i = 1 to n
        M[i] = max{ M[i-1], v_i + M[p(i)] }
}

i:     0   1   2    3    4    5
M[i]:  0   7   15   37   55   55

e.g. M[5] = max{ M[4], v_5 + M[p(5)] } = max{ 55, 5 + 37 = 42 } = 55
Top-down (memoization) vs. bottom-up (iteration)
• Top-down: recursion; computes only the subproblems it needs
• Bottom-up: iteration; computes all subproblems
Let us consider Dijkstra's algorithm again…
• A graph G = (V, E), |V| = n
  − directed & weighted, edge lengths ≥ 0
  − len(e): length of edge e = (u, v) ∈ E
• Single-pair shortest path
  − find the shortest path from source s to each other node t ∈ V − {s}
  − cost of a path = sum of the edge lengths on the path
Weighted shortest paths: negative-weight edges

(Figure: Dijkstra's algorithm can fail with negative edges — it commits to a path of cost 6 through x, while a path through y that uses an edge of length −1 reaches t with cost 2.)
Idea 1: Reweighting?

Adding a constant to every edge length does NOT necessarily make Dijkstra's algorithm produce shortest paths.

Example: edges s→a (5), a→b (−7), s→b (0), b→t (35); add +7 to every edge. A path with more edges absorbs the added constant more times, so the ranking of paths can flip: s→a→b→t costs 33 originally but 33 + 3·7 = 54 after reweighting, while s→b→t costs 35 originally but only 35 + 2·7 = 49 after.
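A quick numeric check of why reweighting fails, using the example's edge lengths (the path structure is as I read the figure, so treat it as an assumption):

```python
# Each s~>t route from the example, as (original cost, number of edges):
# edges s->a (5), a->b (-7), s->b (0), b->t (35)
paths = {
    "s->a->b->t": (5 - 7 + 35, 3),   # 33, three edges
    "s->b->t": (0 + 35, 2),          # 35, two edges
}
c = 7  # constant added to every edge length

for name, (cost, edges) in paths.items():
    # reweighted cost = original cost + c * (number of edges on the path)
    print(name, "before:", cost, "after:", cost + c * edges)
```

The longer-by-edges path pays the constant three times instead of two, so the cheapest path changes — exactly what Dijkstra-with-reweighting gets wrong.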
Bellman–Ford algorithm

(1) If some v ↝ t path contains a negative cycle, then there does not exist a shortest v ↝ t path.
    − A negative cycle is one whose edge lengths sum to a negative number; traversing it repeatedly drives the path cost down without bound.
Bellman–Ford algorithm

(2) If G has no negative cycles, then there exists a shortest v ↝ t path that is simple (and has ≤ n − 1 edges).
    − If that path P contains a directed cycle W, we can remove the portion of P corresponding to W without increasing its length.
Bellman–Ford algorithm

∴ The goal is to find a simple shortest path v ↝ t, which uses at most n − 1 edges.
The approach: induction on the number of edges allowed.
Bellman–Ford algorithm — induction on the number of edges
• A graph G = (V, E), |V| = n
  − directed & weighted (lengths may be negative)
  − ℓ_vw: length of edge (v, w); no negative cycles
  − a distinguished destination node t
• Find a shortest v ↝ t path (≤ n − 1 edges) for every node v
Bellman–Ford algorithm — induction on the number of edges

Def. OPT(i, v) = length of the shortest v ↝ t path that uses ≤ i edges
Goal. OPT(n−1, s) = length of the shortest s ↝ t path that uses ≤ n − 1 edges

• If the best v ↝ t path uses ≤ i − 1 edges:
  − OPT(i, v) = OPT(i−1, v)
• If it uses exactly i edges, with first edge (v, w):
  − OPT(i, v) = ℓ_vw + OPT(i−1, w), where the remaining w ↝ t path uses ≤ i − 1 edges
Example (edges s→a: 5, a→b: −7, s→b: 0, b→t: 35). OPT(i, v) by number of edges i:

v \ i   0    1    2    3
t       0    0    0    0
s       ∞    ∞    35   33
a       ∞    ∞    28   28
b       ∞    35   35   35

OPT(2, s): best of OPT(1, s) = ∞, s↝a: ℓ_sa + OPT(1, a) = 5 + ∞ = ∞, s↝b: ℓ_sb + OPT(1, b) = 0 + 35 = 35 → 35
OPT(3, s): best of OPT(2, s) = 35, s↝a: ℓ_sa + OPT(2, a) = 5 + 28 = 33, s↝b: ℓ_sb + OPT(2, b) = 0 + 35 = 35 → 33
OPT(2, a): best of OPT(1, a) = ∞, a↝b: ℓ_ab + OPT(1, b) = −7 + 35 = 28 → 28
OPT(3, a): best of OPT(2, a) = 28, a↝b: ℓ_ab + OPT(2, b) = −7 + 35 = 28 → 28
Recurrence

• OPT(i, v) =
    0                                                                  , if i = 0 and v = t
    ∞                                                                  , if i = 0 and v ≠ t
    min{ OPT(i−1, v), min over edges (v, w) { ℓ_vw + OPT(i−1, w) } }   , otherwise

• The table has n rows (i = 0 … n−1) and |V| = n columns: O(n²) space.
• Filling a row checks each v and its neighbors, so the total time is O(nm), where m = |E| ≤ O(n²).
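A sketch of this induction-on-edges recurrence in Python, run on the section's four-node example (the dictionary-based graph representation is mine):

```python
def bellman_ford_to_t(nodes, edges, t):
    """OPT[v] after round i = shortest v->t path length using at most i edges.
    edges: dict mapping (v, w) -> edge length; assumes no negative cycles."""
    INF = float("inf")
    n = len(nodes)
    OPT = {v: 0 if v == t else INF for v in nodes}   # i = 0 base case
    for i in range(1, n):                            # n - 1 rounds suffice
        new = dict(OPT)                              # start from OPT(i-1, v)
        for (v, w), length in edges.items():
            if OPT[w] + length < new[v]:             # try first edge (v, w)
                new[v] = OPT[w] + length
        OPT = new
    return OPT

# The slide's example: s->a (5), s->b (0), a->b (-7), b->t (35)
edges = {("s", "a"): 5, ("s", "b"): 0, ("a", "b"): -7, ("b", "t"): 35}
print(bellman_ford_to_t(["s", "a", "b", "t"], edges, "t"))
# {'s': 33, 'a': 28, 'b': 35, 't': 0}
```

Only the previous row is kept, so this version uses O(n) space instead of the full O(n²) table.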
Q: Run the recurrence on the graph in the figure, nodes {s, a, b, c, d, t}. (The figure was garbled in extraction; the edge lengths used in the steps below are ℓ_sc = 5, ℓ_ad = −1, ℓ_cb = −2, ℓ_ba = −2, ℓ_dt = 3.)

v \ i   0    1    2    3    4    5
t       0    0    0    0    0    0
s       ∞    ∞    4    4    4    3
a       ∞    ∞    2    2    2    2
b       ∞    ∞    6    0    0    0
c       ∞    −1   −1   −1   −2   −2
d       ∞    3    3    3    3    3

Selected steps:
OPT(1, d) = ℓ_dt + OPT(0, t) = 3 + 0 = 3
OPT(2, a) = ℓ_ad + OPT(1, d) = −1 + 3 = 2
OPT(3, b) = ℓ_ba + OPT(2, a) = −2 + 2 = 0
OPT(4, c) = ℓ_cb + OPT(3, b) = −2 + 0 = −2
OPT(5, s) = ℓ_sc + OPT(4, c) = 5 − 2 = 3
The set-covering problem
• Example: covering 50 states — with n candidate sets there are 2^n subsets to try, O(2^n); NP-complete.
What if the time complexity is too high?

• Approximation algorithms
  − Not guaranteed to be optimal, but find a near-optimal solution within acceptable computation time.
  − e.g., a greedy approximation can cut an O(2^n) exact search down to O(n²).
P, NP, NP-Complete and NP-Hard Problems

Problem Classification
• In theoretical computer science, the classification of common problems by complexity has two major sets:

easy ← Problem → hard
P (polynomial-time)    NP (non-deterministic polynomial-time)
P (polynomial-time)
• Problems that can be solved in polynomial time on a DTM (deterministic Turing machine).
  − There exist constants a > 0 and b > 0 such that, for every input of size n, the algorithm performs T(n) = O(a·n^b) primitive computational steps.
  − e.g., O(n), O(n log n), O(n^k)
• We say that an algorithm is efficient if it has a polynomial running time.
P (polynomial-time)
Examples:
• All basic arithmetic operations: addition, subtraction, division, multiplication
• Testing for primality
• Hash-table lookup, string operations, sorting
• Shortest-path algorithms: Dijkstra, Bellman–Ford, Floyd–Warshall
• Linear and binary search over a given set of numbers

https://www.baeldung.com/cs/p-np-np-complete-np-hard
Common Big-O running times
• O(1): constant time
• O(log n): logarithmic time
• O(n): linear time
• O(n log n)
• O(n²): quadratic time
• O(n^k): polynomial time — these are the P (polynomial-time) running times
• O(2^n): exponential time
• O(n!): factorial time — an extremely slow algorithm
NP (non-deterministic polynomial-time)
• Problems not known to be solvable in polynomial time on a DTM; however, a proposed solution can be verified (certified) in polynomial time.
  − Known algorithms may take exponential time: there exist constants k > 1 and b > 0 such that T(n) = O(k^(bn)) = O(k^n) steps, e.g. O(2^n), O(n^n), O(2^(0.000001n)).
Examples:
• Integer factorization
• Graph isomorphism
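The "verify in polynomial time" idea can be made concrete with a certificate checker for the knapsack decision problem; this sketch (names mine, not from the slides) checks a proposed subset in O(n):

```python
def verify_knapsack(weights, values, W, V, subset):
    """Polynomial-time verifier: given a candidate subset of item indices
    (the certificate), check weight <= W and value >= V in O(n) time."""
    return (sum(weights[i] for i in subset) <= W and
            sum(values[i] for i in subset) >= V)

# Finding a good subset may take exponential time, but checking one is fast:
print(verify_knapsack([1, 4, 3], [1500, 3000, 2000], 4, 3500, [0, 2]))  # True
```

This asymmetry — slow to solve, fast to check — is exactly what puts the knapsack decision problem in NP.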
Does P = NP? Is P ≠ NP, or P = NP?

(Diagram: either P is a proper subset of NP, or P = NP.)

• P (polynomial-time)
  − the problem can be solved in polynomial time in the size of the input.
• NP (non-deterministic polynomial-time)
  − a solution to the problem can be verified in polynomial time in the size of the input.
Assuming P ≠ NP:

Problem classification — an easy-to-hard scale:

easy           medium                               hard           hardest
P              NP                                   NP-complete    NP-hard
(polynomial)   (non-deterministic polynomial)

(Diagram: P ⊂ NP; NP-complete = NP ∩ NP-hard.)
NP-complete
(Diagram: NP-complete sits inside NP at the "hardest" end, where NP meets NP-hard.)

• All of these problems belong to NP, but are among the hardest in the set.
• What makes them different from other NP problems is a useful distinction called completeness.
• For any NP-complete problem, there exists a polynomial-time algorithm that can transform (reduce) it into any other NP-complete problem.
• Examples: knapsack, graph coloring
NP-complete
(Diagram: problems X and Y inside NP-complete, with X reducing to Y.)

• NPC: X ≤_P Y — X (Cook-)reduces to Y in polynomial time.
• Related classes: co-NP & NP-hard


Reducibility: X ≤_P Y

x: 3a + 2 = 0              rewrite as    x: 0a² + 3a + 2 = 0
y: ra² + sa + t = 0                      y: ra² + sa + t = 0

Transform x into the form of equation y; if we know how to solve y, then by the same method we can also solve x.

X ≤_P Y: in polynomial time, transform (reduce) equation x into the form of equation y.
Complexity classes (easy → hardest)

• P problems are quick to solve.
• NP problems are quick to verify but slow to solve.
• NP-complete problems are also quick to verify and slow to solve, and any NP-complete problem can be reduced to any other NP-complete problem.
• NP-hard problems are slow to verify and slow to solve, and every NP problem can be reduced to them.
  − Ex: k-means clustering, traveling salesman problem
