Lecture 35 (Knapsack Problem)

Dynamic programming is an algorithm design technique for solving optimization problems that involve overlapping subproblems. It works by breaking a problem down into smaller subproblems, solving each subproblem only once, and storing the results in a table for future reference. The knapsack problem asks for the most valuable subset of items that fits into a knapsack of capacity W, given the weights and values of n items. Dynamic programming solves it by defining the optimal value V[i,j] of the instance formed by the first i items and capacity j, and computing these values from previously solved subinstances until the full instance V[n,W] is reached. The optimal subset can then be recovered by backtracing. Pseudocode for an O(nW) algorithm is given.


CSE408

Knapsack problem

Lecture # 35
Dynamic Programming
Dynamic Programming is a general algorithm design technique
for solving problems defined by or formulated as recurrences with
overlapping subinstances

• Invented by American mathematician Richard Bellman in the
  1950s to solve optimization problems; later adopted by computer science

• “Programming” here means “planning”

• Main idea:
- set up a recurrence relating a solution to a larger instance to
solutions of some smaller instances
- solve smaller instances once
- record solutions in a table
- extract solution to the initial instance from that table
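
As a concrete illustration of these four steps (a minimal sketch, not from the slides; the function name fib_dp and the Fibonacci recurrence are illustrative choices):

    # Recurrence: F(n) = F(n-1) + F(n-2), with F(0) = 0, F(1) = 1
    def fib_dp(n):
        table = [0] * (n + 1)        # table recording solutions to subinstances
        if n >= 1:
            table[1] = 1
        for i in range(2, n + 1):    # solve each smaller instance exactly once
            table[i] = table[i - 1] + table[i - 2]   # apply the recurrence
        return table[n]              # extract the solution to the initial instance

    print(fib_dp(10))                # -> 55

The same pattern, applied to the knapsack recurrence, gives the algorithm on the following slides.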
Knapsack Problem by DP
Given n items with
  integer weights: w1, w2, …, wn
  values: v1, v2, …, vn
and a knapsack of integer capacity W,
find the most valuable subset of the items that fits into the knapsack.

Consider the instance defined by the first i items and capacity j (j ≤ W).
Let V[i,j] be the optimal value of such an instance. Then

  V[i,j] = max { V[i-1,j], vi + V[i-1,j-wi] }   if j - wi ≥ 0
  V[i,j] = V[i-1,j]                             if j - wi < 0

Initial conditions: V[0,j] = 0 and V[i,0] = 0
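
This recurrence can be coded directly as a top-down, memoized function. The sketch below (in Python, assuming 0-indexed lists w and v where w[i-1] and v[i-1] describe item i; the names knapsack_value, V and memo are illustrative) is one possible transcription, not part of the lecture:

    def knapsack_value(w, v, W):
        memo = {}                              # table of already solved subinstances

        def V(i, j):                           # optimal value: first i items, capacity j
            if i == 0 or j == 0:               # initial conditions
                return 0
            if (i, j) not in memo:             # solve each subinstance only once
                if j - w[i - 1] >= 0:
                    memo[(i, j)] = max(V(i - 1, j),
                                       v[i - 1] + V(i - 1, j - w[i - 1]))
                else:
                    memo[(i, j)] = V(i - 1, j)
            return memo[(i, j)]

        return V(len(w), W)

For the instance on the next slide (weights 2, 1, 3, 2; values 12, 10, 20, 15; W = 5) this returns 37.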
Knapsack Problem by DP (example)
Example: knapsack of capacity W = 5

  item  weight  value
   1      2      $12
   2      1      $10
   3      3      $20
   4      2      $15

Table of V[i,j] (rows: first i items considered; columns: capacity j):

                       j=0  j=1  j=2  j=3  j=4  j=5
  i=0                   0    0    0    0    0    0
  i=1 (w1=2, v1=12)     0    0   12   12   12   12
  i=2 (w2=1, v2=10)     0   10   12   22   22   22
  i=3 (w3=3, v3=20)     0   10   12   22   30   32
  i=4 (w4=2, v4=15)     0   10   15   25   30   37

Backtracing from V[4,5] = $37 finds the actual optimal subset, i.e. the solution.
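
As a walkthrough of the backtracing (not on the original slide, but following directly from the table): V[4,5] = 37 ≠ V[3,5] = 32, so item 4 is taken and the remaining capacity is 5 − 2 = 3; V[3,3] = 22 = V[2,3], so item 3 is skipped; V[2,3] = 22 ≠ V[1,3] = 12, so item 2 is taken, leaving capacity 3 − 1 = 2; V[1,2] = 12 ≠ V[0,2] = 0, so item 1 is taken. The optimal subset is therefore {1, 2, 4} with total value $37.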
Knapsack Problem by DP (pseudocode)

Algorithm DPKnapsack(w[1..n], v[1..n], W)
  var V[0..n, 0..W], P[1..n, 1..W]: int
  for j := 0 to W do
      V[0,j] := 0
  for i := 0 to n do
      V[i,0] := 0
  for i := 1 to n do
      for j := 1 to W do
          if w[i] ≤ j and v[i] + V[i-1, j-w[i]] > V[i-1,j] then
              V[i,j] := v[i] + V[i-1, j-w[i]];  P[i,j] := j - w[i]
          else
              V[i,j] := V[i-1,j];  P[i,j] := j
  return V[n,W] and the optimal subset by backtracing

Running time and space: O(nW).
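
A runnable transcription of this pseudocode in Python is sketched below (my own sketch, not part of the lecture; the name dp_knapsack is illustrative). Instead of the P table, the backtracing here compares V[i][j] with V[i-1][j], which recovers the same optimal subset:

    def dp_knapsack(w, v, W):
        """Bottom-up knapsack DP; w[i-1], v[i-1] describe item i (i = 1..n)."""
        n = len(w)
        # V[i][j] = optimal value using the first i items with capacity j
        V = [[0] * (W + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, W + 1):
                if w[i - 1] <= j and v[i - 1] + V[i - 1][j - w[i - 1]] > V[i - 1][j]:
                    V[i][j] = v[i - 1] + V[i - 1][j - w[i - 1]]
                else:
                    V[i][j] = V[i - 1][j]
        # Backtracing: item i was taken exactly when V[i][j] differs from V[i-1][j]
        subset, j = [], W
        for i in range(n, 0, -1):
            if V[i][j] != V[i - 1][j]:
                subset.append(i)
                j -= w[i - 1]
        return V[n][W], sorted(subset)

    # Instance from the example slide:
    print(dp_knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))   # -> (37, [1, 2, 4])

Both the table-filling loop and the V table itself are Θ(nW), matching the O(nW) running time and space noted above.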
