Day4.3 Algorithms Part2
12/08/2022 1
Divide and Conquer
Selection Algorithms
Order Statistics
• The ith order statistic in a set of n elements is
the ith smallest element
• The minimum is thus the 1st order statistic
• The maximum is the nth order statistic
• The median is the n/2 order statistic
– If n is even, there are 2 medians
• How can we calculate order statistics?
• What is the running time?
Order Statistics
• How many comparisons are needed to find the
minimum element in a set? The maximum?
• Can we find the minimum and maximum with
less than twice the cost?
• Yes:
– Walk through elements by pairs
• Compare each element in pair to the other
• Compare the largest to maximum, smallest to minimum
– Total cost: 3 comparisons per 2 elements = 3n/2
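As a sketch in C++ (the `minMax` name and plain int-vector interface are my own choices, not from the slides), the pairwise scan looks like this:

```cpp
#include <utility>
#include <vector>

// Find both min and max with about 3 comparisons per 2 elements:
// compare the members of each pair to each other first, then the
// smaller one against the running minimum and the larger one
// against the running maximum.
std::pair<int, int> minMax(const std::vector<int>& a) {
    std::size_t i;
    int lo, hi;
    if (a.size() % 2 == 1) {          // odd length: seed with the first element
        lo = hi = a[0];
        i = 1;
    } else {                          // even length: seed with the first pair
        if (a[0] < a[1]) { lo = a[0]; hi = a[1]; }
        else             { lo = a[1]; hi = a[0]; }
        i = 2;
    }
    for (; i < a.size(); i += 2) {    // an even number of elements remains
        int small, big;
        if (a[i] < a[i + 1]) { small = a[i];     big = a[i + 1]; }  // 1 comparison
        else                 { small = a[i + 1]; big = a[i];     }
        if (small < lo) lo = small;   // 1 comparison
        if (big > hi)   hi = big;     // 1 comparison
    }
    return {lo, hi};
}
```

Compared with two separate scans (about 2n comparisons), this spends roughly 3n/2.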
Finding Order Statistics:
The Selection Problem
• A more interesting problem is selection:
finding the ith smallest element of a set
• We will show:
– A practical randomized algorithm with O(n)
expected running time
– A cool algorithm of theoretical interest only with
O(n) worst-case running time
Randomized Selection
• Key idea: use partition() from quicksort
– But, only need to examine one subarray
– This savings shows up in running time: O(n)
q = RandomizedPartition(A, p, r)
A[p..q−1] ≤ A[q] ≤ A[q+1..r]
Randomized Selection
RandomizedSelect(A, p, r, i)
if (p == r) then return A[p];
q = RandomizedPartition(A, p, r)
k = q - p + 1;
if (i == k) then return A[q];
if (i < k) then
return RandomizedSelect(A, p, q-1, i);
else
return RandomizedSelect(A, q+1, r, i-k);
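A runnable C++ version of this pseudocode might look as follows; the Lomuto-style partition and the helper names are my own choices, not fixed by the slides:

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Lomuto partition around a randomly chosen pivot; returns the
// pivot's final index q, with A[p..q-1] <= A[q] <= A[q+1..r].
int randomizedPartition(std::vector<int>& A, int p, int r) {
    int pick = p + std::rand() % (r - p + 1);
    std::swap(A[pick], A[r]);
    int pivot = A[r], i = p - 1;
    for (int j = p; j < r; ++j)
        if (A[j] <= pivot) std::swap(A[++i], A[j]);
    std::swap(A[i + 1], A[r]);
    return i + 1;
}

// RandomizedSelect from the slide: returns the i-th smallest
// element (1-based rank) of A[p..r]; expected O(n) time.
int randomizedSelect(std::vector<int>& A, int p, int r, int i) {
    if (p == r) return A[p];
    int q = randomizedPartition(A, p, r);
    int k = q - p + 1;                 // rank of the pivot within A[p..r]
    if (i == k) return A[q];
    if (i < k)  return randomizedSelect(A, p, q - 1, i);
    return randomizedSelect(A, q + 1, r, i - k);
}
```

Note that only one side of the partition is ever recursed into, which is where the expected O(n) bound comes from.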
k = q − p + 1 elements lie in A[p..q]; the partition again gives A[p..q−1] ≤ A[q] ≤ A[q+1..r]
Randomized Selection
• Analyzing RandomizedSelect()
– Worst case: partition always 0:n-1
T(n) = T(n−1) + O(n) = O(n²)   (arithmetic series)
• No better than sorting!
– “Best” case: suppose a 9:1 partition
T(n) = T(9n/10) + O(n) = O(n)   (Master Theorem, case 3)
• Better than sorting!
• What if this had been a 99:1 split?
Randomized Selection
• Average case
– For upper bound, assume ith element always falls
in larger side of partition:
T(n) ≤ (1/n) Σ_{k=0}^{n−1} T(max(k, n−k−1)) + Θ(n)
     ≤ (2/n) Σ_{k=⌊n/2⌋}^{n−1} T(k) + Θ(n)
(each term T(k) with k ≥ ⌊n/2⌋ appears at most twice in the first sum, so the sum over the larger half is at most doubled)
– Let’s show that T(n) = O(n) by substitution
Randomized Selection
• Assume T(k) ≤ ck for all k < n, with c sufficiently large:
T(n) ≤ (2/n) Σ_{k=⌊n/2⌋}^{n−1} T(k) + Θ(n)   (the recurrence we started with)
     ≤ (2/n) Σ_{k=⌊n/2⌋}^{n−1} ck + Θ(n)   (substitute ck for T(k))
     = (2c/n) [ Σ_{k=1}^{n−1} k − Σ_{k=1}^{⌊n/2⌋−1} k ] + Θ(n)   ("split" the sum)
     = (2c/n) [ (1/2)(n−1)n − (1/2)(n/2 − 1)(n/2) ] + Θ(n)   (expand the arithmetic series)
     = c(n−1) − (c/2)(n/2 − 1) + Θ(n)   (multiply it out)
Randomized Selection
• Continuing, for sufficiently large c:
T(n) ≤ c(n−1) − (c/2)(n/2 − 1) + Θ(n)   (the recurrence so far)
     = cn − c − cn/4 + c/2 + Θ(n)   (multiply it out)
     = cn − cn/4 − c/2 + Θ(n)   (combine the constant terms)
     = cn − (cn/4 + c/2 − Θ(n))   (rearrange the arithmetic)
     ≤ cn   if c is big enough   (what we set out to prove)
Greedy Algorithms
Customers at a Petrol Pump
• There are n customers waiting at a petrol
pump to be served
• Customer j requires c(j) units of time to be
served.
• Total waiting time for customer j before being
served
= Total serving time of customers served
before j
Example
• <c(1),c(2),c(3),c(4),c(5)> = <25,21,14,10,5>
• Consider the schedule: <5,14,10,25,21>
• Cumulative waiting times = <0, 5, 19, 29, 54>
• Sum of waiting times = 5 + 19 + 29 + 54 = 107
• Now consider: <5,10,14,25,21>
• Cumulative waiting times = <0, 5, 15, 29, 54>
• Sum of waiting times = 5 + 15 + 29 + 54 = 103
• If a costlier job is serviced earlier in the schedule,
it delays more jobs than if served later
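The waiting-time computation used in the example can be sketched in C++ (the `totalWait` name and the plain int service times are my own choices):

```cpp
#include <vector>

// Sum of waiting times for a given service order: each customer
// waits for the total service time of everyone served before them.
long totalWait(const std::vector<int>& order) {
    long total = 0;    // running sum of waiting times
    long elapsed = 0;  // total service time of customers already served
    for (int c : order) {
        total += elapsed;   // this customer's waiting time
        elapsed += c;       // this customer is now served for c units
    }
    return total;
}
```

With the slides' numbers, `totalWait({5,14,10,25,21})` gives 107 and `totalWait({5,10,14,25,21})` gives 103; the fully sorted order `{5,10,14,21,25}` does at least as well as either.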
Customers at a Petrol Pump
• Assume the customers are labelled so that c(1) ≥ c(2) ≥ … ≥ c(n)
• Goal: To minimize the sum total of the waiting time
of all customers
• Claim: The total waiting time is minimized if the
customers are processed in the order
• S = <s(1),s(2),…..s(n)> = <c(n), c(n-1),…..,c(1)>
• Hence s(1) ≤s(2)≤s(3) ….≤s(n)
Customers at a Petrol Pump
• Proof: Let the optimal schedule for processing the
customers be
• T = <t(1), t(2), ….t(n)>
• Let i be the smallest index such that t(i) ≠ s(i)
• s(1) = t(1), s(2) = t(2), …. s(i-1) = t(i-1)
Customers at a Petrol Pump
• Since t(i) ≠ s(i),
• t(i) = s(k) for some k > i and
• t(j) = s(i) for some j > i
• Since s(1) ≤s(2)≤s(3) ….≤s(n), we get t(j) ≤ t(i)
Customers at a Petrol Pump
• Let T’ be the schedule obtained from T by swapping
t(i) and t(j)
• Since t(j) moves j−i places earlier and t(i) moves j−i places later in the schedule:
• Cost(T’) = Cost(T) + t(j)·(j−i) − t(i)·(j−i)
• Cost(T’) = Cost(T) − (j−i)·(t(i) − t(j))
• Since j > i and t(j) ≤ t(i), Cost(T’) ≤ Cost(T)
Customers at a Petrol Pump
• S = <s(1),s(2),…..s(n)>
• T = <t(1), t(2), ….t(n)>
• s(1) = t(1), s(2) = t(2), …. s(i-1) = t(i-1), s(i) ≠ t(i)
• T’ = <t’(1), t’(2), ….t’(n)>
• s(1) = t’(1), s(2) = t’(2), …. s(i-1) = t’(i-1), s(i) = t’(i)
• Cost (T’) ≤ Cost(T)
Customers at a Petrol Pump
• S = <s(1),s(2),…..s(n)>
• T = <t(1), t(2), ….t(n)>
• T matches S up to position i−1
• T’ matches S up to position i
Activity Selection Problem
(aka Interval Scheduling)
The Activity Selection Problem
• Here is a set of start and finish times
Interval Representation
Early Finish Greedy
• Select the activity with the earliest finish
• Eliminate the activities that could not be
scheduled
• Repeat!
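A minimal C++ sketch of this earliest-finish rule; representing activities as `(start, finish)` pairs and returning their original indices are my own choices:

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Earliest-finish-first greedy: returns indices (into the input)
// of a maximum-size set of mutually compatible activities.
std::vector<std::size_t> selectActivities(std::vector<std::pair<int, int>> acts) {
    std::vector<std::size_t> idx(acts.size());
    for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = i;
    std::sort(idx.begin(), idx.end(), [&](std::size_t a, std::size_t b) {
        return acts[a].second < acts[b].second;   // sort by finish time
    });
    std::vector<std::size_t> chosen;
    int lastFinish = -1;                          // assumes times are >= 0
    for (std::size_t i : idx)
        if (acts[i].first >= lastFinish) {        // compatible with last chosen
            chosen.push_back(i);
            lastFinish = acts[i].second;          // eliminate overlapping ones
        }
    return chosen;
}
```

The sort dominates, so the whole selection runs in O(n log n).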
Assuming activities are sorted by
finish time
Why is it Greedy?
• Greedy in the sense that it leaves as much
opportunity as possible for the remaining
activities to be scheduled
• The greedy choice is the one that maximizes
the amount of unscheduled time remaining
Greedy-Choice Property
• Show there is an optimal solution that begins with a
greedy choice (with activity 1, which has the earliest finish
time)
• Suppose A ⊆ S is an optimal solution
– Order the activities in A by finish time. The first activity in A is k
• If k = 1, the schedule A begins with a greedy choice
• If k ≠ 1, show that there is an optimal solution B to S that begins with the
greedy choice, activity 1
– Let B = (A − {k}) ∪ {1}
• f1 ≤ fk → activities in B are disjoint (compatible)
• B has the same number of activities as A
• Thus, B is optimal
Knapsack Problem
• One wants to pack n items in a luggage
– The ith item is worth vi rupees and weighs wi kgs
– Maximize the value but cannot exceed W
– vi , wi, W are integers
• 0-1 knapsack → each item is taken or not taken
• Fractional knapsack → fractions of items can be
taken
Greedy Algorithm for Fractional
Knapsack problem
• Fractional knapsack can be solvable by the greedy
strategy
– Compute the value per weight vi/wi for each item
– Obeying a greedy strategy, take as much as possible of the
item with the greatest value per weight.
– If the supply of that item is exhausted and there is still more
room, take as much as possible of the item with the next
value per weight, and so forth until there is no more room
– O(n lg n) (we need to sort the items by value per weight)
– Greedy Algorithm?
– Correctness?
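A possible C++ sketch of this greedy strategy; the `Item` struct and function name are illustrative, not from the slides:

```cpp
#include <algorithm>
#include <vector>

struct Item { double value, weight; };

// Greedy fractional knapsack: take items in decreasing value/weight
// order, splitting the last item if it does not fit whole.
double fractionalKnapsack(std::vector<Item> items, double capacity) {
    std::sort(items.begin(), items.end(), [](const Item& a, const Item& b) {
        return a.value / a.weight > b.value / b.weight;  // best ratio first
    });
    double total = 0;
    for (const Item& it : items) {
        if (capacity <= 0) break;
        double take = std::min(it.weight, capacity);  // whole item or a fraction
        total += it.value * (take / it.weight);
        capacity -= take;
    }
    return total;
}
```

The sort gives the O(n lg n) bound mentioned above; the single pass afterwards is O(n).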
0-1 knapsack is harder!
• 0-1 knapsack cannot be solved by the greedy
strategy
– Unable to fill the knapsack to capacity, and the empty
space lowers the effective value per kg of the packing
– We must compare the solution to the sub-problem in
which the item is included with the solution to the sub-
problem in which the item is excluded before we can make
the choice
– Dynamic Programming
The Knapsack Problem:
Greedy Vs. Dynamic
• The fractional problem can be solved greedily
• The 0-1 problem cannot be solved with a
greedy approach
– However, it can be solved with dynamic
programming
Algorithms
Divide and Conquer Vs.
Dynamic Programming
Bottom-up technique
• Avoid calculating the same thing twice,
usually by keeping a table of known results,
which we fill up as subinstances are solved.
• Dynamic programming is a bottom-up
technique.
• Memoization is a variant of dynamic
programming that offers the efficiency of
dynamic programming (by avoiding solving
common subproblems more than once) but
maintains a top-down flow
Dynamic Programming: History
Fibonacci Numbers
• Fibonacci numbers:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …
Example: Fibonacci numbers
• Recall definition of Fibonacci numbers:
f(0) = 0
f(1) = 1
f(n) = f(n-1) + f(n-2) for n ≥ 2
• Computing the nth Fibonacci number recursively (top-down):
f(n) expands into f(n-1) + f(n-2), each of which expands again,
so the same subproblems are solved over and over
Fibonacci: Simple Recursive Vs. Memoized
int fib(int n) {
    if (n <= 1) return n;               // stopping conditions
    else return fib(n-1) + fib(n-2);    // recursive step
}

int fibDyn(int n, vector<int>& fibList) {   // fibList[0..n] initialized to -1
    int fibValue;
    if (fibList[n] >= 0)    // check for a previously computed result and return it
        return fibList[n];
    // otherwise execute the recursive algorithm to obtain the result
    if (n <= 1)             // stopping conditions
        fibValue = n;
    else                    // recursive step
        fibValue = fibDyn(n-1, fibList) + fibDyn(n-2, fibList);
    // store the result and return its value
    fibList[n] = fibValue;
    return fibValue;
}
Example: Fibonacci numbers
Computing the nth fibonacci number using bottom-up
iteration:
• f(0) = 0
• f(1) = 1
• f(2) = 0+1 = 1
• f(3) = 1+1 = 2
• f(4) = 1+2 = 3
• f(5) = 2+3 = 5
• …
• f(n-2) = f(n-3)+f(n-4)
• f(n-1) = f(n-2)+f(n-3)
• f(n) = f(n-1) + f(n-2)
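This bottom-up iteration can be sketched in C++ with O(1) space, since only the last two values are ever needed (the function name is mine):

```cpp
// Bottom-up Fibonacci: iterate from f(0), f(1) upward, keeping only
// the last two values; O(n) time, O(1) space.
long long fibIter(int n) {
    if (n <= 1) return n;               // f(0) = 0, f(1) = 1
    long long prev = 0, cur = 1;
    for (int i = 2; i <= n; ++i) {
        long long next = prev + cur;    // f(i) = f(i-1) + f(i-2)
        prev = cur;
        cur = next;
    }
    return cur;
}
```

Unlike the memoized version, this needs no table at all.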
Recursive calls for fib(5)
fib(5) calls fib(4) and fib(3); fib(4) calls fib(3) and fib(2), and so
on down to fib(1) and fib(0), so the same values are recomputed many
times (full recursion tree omitted)
fib(5) Using Dynamic Programming
With memoization, each of fib(0)…fib(5) is computed only once; later
calls, such as the second fib(3), are answered from the stored table
(call-order diagram omitted)
Statistics (function calls)
fib fibDyn
N = 20 21,891 39
N = 40 331,160,281 79
Top down vs. Bottom up
• Top down dynamic programming moves
through recursive process and stores results as
algorithm computes and looks up result when
already precomputed.
• Bottom up dynamic programming evaluates
by computing all function values in order,
starting at lowest and using previously
computed values.
Principle of Optimality
• Principle of optimality
– the optimal solution to any nontrivial instance of
a problem is a combination of optimal solutions
to some of its sub-instances.
Binomial Coefficient
• Binomial coefficient:
C(n, k) = n! / (k! (n−k)!)   for 0 ≤ k ≤ n
Binomial Using Divide & Conquer
• Binomial formula:
C(n, k) = C(n−1, k−1) + C(n−1, k)   for 0 < k < n
C(n, k) = 1   for k = 0 or k = n
Bottom-Up
• Recursive property:
C[i][j] = C[i−1][j−1] + C[i−1][j]   for 0 < j < i
C[i][j] = 1   for j = 0 or j = i
Pascal’s Triangle
0 1 2 3 4 … j k
0 1
1 1 1
2 1 2 1
3 1 3 3 1
4 1 4 6 4 1
…
Row i is filled from row i−1: C[i][j] = C[i−1][j−1] + C[i−1][j]
…
n
Binomial Coefficient
• Record the values in a table of n+1 rows and k+1 columns
0 1 2 3 … k-1 k
0 1
1 1 1
2 1 2 1
3 1 3 3 1
…
k     1 … 1
…
n−1   1 … C(n−1, k−1)  C(n−1, k)
n     1 …              C(n, k)
Binomial Coefficient
ALGORITHM Binomial(n, k)
//Computes C(n, k) by the dynamic programming algorithm
//Input: A pair of nonnegative integers n ≥ k ≥ 0
//Output: The value of C(n, k)
for i ← 0 to n do
    for j ← 0 to min(i, k) do
        if j = 0 or j = i
            C[i, j] ← 1
        else C[i, j] ← C[i−1, j−1] + C[i−1, j]
return C[n, k]
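A runnable C++ rendering of this table-filling algorithm might be (the function name and `long long` result type are my own choices):

```cpp
#include <algorithm>
#include <vector>

// Pascal's-triangle DP for C(n, k): row i is computed from row i-1,
// and only columns 0..min(i, k) are ever needed.
long long binomial(int n, int k) {
    std::vector<std::vector<long long>> C(n + 1,
        std::vector<long long>(k + 1, 0));
    for (int i = 0; i <= n; ++i)
        for (int j = 0; j <= std::min(i, k); ++j)
            if (j == 0 || j == i)
                C[i][j] = 1;                          // boundary of the triangle
            else
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j];
    return C[n][k];
}
```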
Basic-operation count (additions):
A(n, k) = Σ_{i=1}^{k} Σ_{j=1}^{i−1} 1 + Σ_{i=k+1}^{n} Σ_{j=1}^{k} 1
        = Σ_{i=1}^{k} (i−1) + Σ_{i=k+1}^{n} k
        = (k−1)k/2 + k(n−k) ∈ Θ(nk)
All pairs shortest paths
Shortest Path
• Optimization problem – more than one candidate for the
solution
• Solution is the candidate with optimal value
• Solution 1 – brute force
– Find all possible paths, compute minimum
– Efficiency?
• Solution 2 – dynamic programming
– Algorithm that determines only lengths of shortest paths
– Modify to produce shortest paths as well
Example
Weight matrix W:
     1  2  3  4  5
 1   0  1  ∞  1  5
 2   9  0  3  2  ∞
 3   ∞  ∞  0  4  ∞
 4   ∞  ∞  2  0  3
 5   3  ∞  ∞  ∞  0

Resulting shortest-distance matrix D:
     1  2  3  4  5
 1   0  1  3  1  4
 2   8  0  3  2  5
 3  10 11  0  4  7
 4   6  7  2  0  3
 5   3  4  6  4  0
Floyd’s Algorithm
(example 4-vertex weighted digraph on a, b, c, d omitted)
d_ij^(k) = min{ d_ij^(k−1), d_ik^(k−1) + d_kj^(k−1) }   for k ≥ 1,   d_ij^(0) = w_ij
Computing D
• D(0) = W
• Now compute D(1)
• Then D(2)
• Etc.
Floyd’s Algorithm
• Return W (updated in place to hold the shortest-path distances)
• Efficiency: Θ(n³) (three nested loops over the n vertices)
Example
Example: Apply Floyd’s algorithm to solve the all-pairs
shortest-path problem for the digraph defined by the following
weight matrix
0 2 ∞ 1 8
6 0 3 2 ∞
∞ ∞ 0 4 ∞
∞ ∞ 2 0 3
3 ∞ ∞ ∞ 0
Floyd's Algorithm using a single D
Floyd
1. D ← W // initialize D array to W[ ]
2. P ← 0 // initialize P array to [0]
3. for k ← 1 to n
4.     do for i ← 1 to n
5.         do for j ← 1 to n
6.             if (D[i, j] > D[i, k] + D[k, j])
7.                 then D[i, j] ← D[i, k] + D[k, j]
8.                      P[i, j] ← k
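A concrete C++ version of the distance-update part (0-based indices; the `INF` sentinel standing in for ∞ is my own choice, picked large enough that `INF + INF` does not overflow `int`):

```cpp
#include <vector>

const int INF = 1000000000;  // stand-in for "no edge" (∞)

// Floyd's algorithm, in place on a single distance matrix D: after
// iteration k, D[i][j] is the shortest i->j distance using only
// the first k vertices as intermediates.
void floyd(std::vector<std::vector<int>>& D) {
    int n = (int)D.size();
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (D[i][k] + D[k][j] < D[i][j])
                    D[i][j] = D[i][k] + D[k][j];
}
```

The predecessor matrix P from the pseudocode is omitted here; adding it means recording k whenever an update fires.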
Printing intermediate nodes on
shortest path from q to r
path(index q, r)
    if (P[q, r] != 0)
        path(q, P[q, r])
        println("v" + P[q, r])
        path(P[q, r], r)
        return
    else return    // no intermediate nodes
Matrix-chain Multiplication
• Suppose we have a sequence or chain A1, A2,
…, An of n matrices to be multiplied
– That is, we want to compute the product
A1A2…An
Algorithm to Multiply 2 Matrices
Input: Matrices Ap×q and Bq×r (with dimensions p×q and q×r)
Result: Matrix Cp×r resulting from the product A·B
MATRIX-MULTIPLY(Ap×q , Bq×r)
1. for i ← 1 to p
2. for j ← 1 to r
3. C[i, j] ← 0
4. for k ← 1 to q
5. C[i, j] ← C[i, j] + A[i, k] · B[k, j]
6. return C
Dynamic Programming Approach
• The structure of an optimal solution
– Let us use the notation Ai..j for the matrix that
results from the product Ai Ai+1 … Aj
– An optimal parenthesization of the product
A1A2…An splits the product between Ak and Ak+1
for some integer k, where 1 ≤ k < n
– First compute matrices A1..k and Ak+1..n ; then
multiply them to get the final matrix A1..n
Dynamic Programming Approach
…contd
m[i, j] = 0   if i = j
m[i, j] = min over i ≤ k < j of { m[i, k] + m[k+1, j] + p_{i−1} p_k p_j }   if i < j
Algorithm
int minmult (int n, const int d[], index S[ ] [ ])
{
    index i, j, k, diagonal;
    int M[1..n][1..n];
    for (i = 1; i <= n; i++)
        M[i][i] = 0;
    for (diagonal = 1; diagonal <= n-1; diagonal++)
        for (i = 1; i <= n-diagonal; i++) {
            j = i + diagonal;
            M[i][j] = INFINITY;
            for (k = i; k <= j-1; k++) {    // minimum over i <= k <= j-1
                int cost = M[i][k] + M[k+1][j] + d[i-1]*d[k]*d[j];
                if (cost < M[i][j]) {
                    M[i][j] = cost;
                    S[i][j] = k;            // the k that gave the minimum
                }
            }
        }
    return M[1][n];
}
Algorithm Visualization
M[i, j] = min over i ≤ k < j of { M[i, k] + M[k+1, j] + d_i d_{k+1} d_{j+1} }
• The bottom-up answer construction fills in the M array by diagonals
• M[i, j] gets values from previous entries in the i-th row and j-th column
• Filling in each entry in the M table takes O(n) time
• Total run time: O(n³)
• Getting the actual parenthesization can be done by remembering the best “k” for each M entry
Constructing The Optimal Solution
• Our algorithm computes the minimum-cost
table M and the split table S
• The optimal solution can be constructed
from the split table S
– Each entry s[i, j ]=k shows where to split the
product Ai Ai+1 … Aj for the minimum cost
Example
• Show how to multiply this Matrix Dimension
matrix chain optimally
A1 30×35
The 0/1 Knapsack Problem
• Given: A set S of n items, with each item i having
– bi - a positive benefit
– wi - a positive weight
• Goal: Choose items with maximum total benefit but with weight at
most W.
• If we are not allowed to take fractional amounts, then this is the 0/1
knapsack problem.
– In this case, we let T denote the set of items we take
– Objective: maximize Σ_{i∈T} b_i
– Constraint: Σ_{i∈T} w_i ≤ W
Example
• Given: A set S of n items, with each item i having
– bi - a positive benefit
– wi - a positive weight
• Goal: Choose items with maximum total benefit but with weight at
most W.
“knapsack” capacity: 9 in
Items:    1     2     3     4     5
Weight:   4 in  2 in  2 in  6 in  2 in
Benefit:  $20   $3    $6    $25   $80
Solution: item 5 (2 in) + item 3 (2 in) + item 1 (4 in)
First Attempt
• Sk: Set of items numbered 1 to k.
• Define B[k] = best selection from Sk.
• Problem: does not have subproblem optimality:
– Consider S={(3,2),(5,4),(8,5),(4,3),(10,9)} benefit-weight pairs
Second Attempt
• Sk: Set of items numbered 1 to k.
• Define B[k, w] = best selection from Sk with total
weight at most w
• Good news: this does have subproblem optimality:
B[k, w] = B[k−1, w]   if wk > w
B[k, w] = max{ B[k−1, w], B[k−1, w − wk] + bk }   otherwise
Dynamic Programming Algorithm
• Fill up a table of B[i,j] values
• In what order should the computation
proceed?
The 0/1 Knapsack Algorithm
• Recall definition of B[k,w]:
B[k, w] = B[k−1, w]   if wk > w
B[k, w] = max{ B[k−1, w], B[k−1, w − wk] + bk }   otherwise
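Filling the table row by row, k increasing, as the recurrence suggests, can be sketched in C++ (the interface names are mine; items are 0-indexed in the vectors but 1-indexed in the table):

```cpp
#include <algorithm>
#include <vector>

// Bottom-up 0/1 knapsack over the table from the recurrence:
// B[k][cap] = best benefit using items 1..k with weight limit cap.
int knapsack01(const std::vector<int>& b, const std::vector<int>& w, int W) {
    int n = (int)b.size();
    std::vector<std::vector<int>> B(n + 1, std::vector<int>(W + 1, 0));
    for (int k = 1; k <= n; ++k)
        for (int cap = 0; cap <= W; ++cap) {
            B[k][cap] = B[k - 1][cap];                    // skip item k
            if (w[k - 1] <= cap)                          // or take item k
                B[k][cap] = std::max(B[k][cap],
                                     B[k - 1][cap - w[k - 1]] + b[k - 1]);
        }
    return B[n][W];
}
```

On the earlier example (benefits $20, $3, $6, $25, $80; weights 4, 2, 2, 6, 2; capacity 9) this returns 106, matching items 5, 3, and 1.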
Analysis
• An optimal solution to the 0/1 Knapsack
problem can be found in O(n·W) time, since the
table has (n+1) × (W+1) entries, each filled in O(1)
Algorithms
Outline
• Problem space/ State space
• Exhaustive search
– Depth-first search
– Breadth-first search
• Backtracking
• Branch-and-bound
Problem Space
or State Space
Problem Space
Example: Shortest path
• Given a graph G=(V,E), and nodes u and v, find the
shortest path between u and v.
• Problem space
– The set of all possible paths between u and v:
– {(u, n1, n2, …, nk, v) | ni ∈ V, for 1 ≤ i ≤ k}
Example: 0/1 Knapsack
• Given a set S of n objects, where the object i has value vi
and weight wi , and a knapsack with weight capacity C,
find the maximum of value of objects in S which can be
put in the knapsack.
• Problem space
– Any subset of S.
Example: n Queens
• Given an n×n board, find n squares on the
board to put n queens so that no pair can attack.
• Problem space
– A set of any n positions on the board.
Exhaustive Search
Exhaustive Search
• Generate every possible answer
• Test each answer with the constraint to find the
correct answer
• Inefficient because the number of answers in the
problem space can be exponential.
• Examples:
– Shortest path
• n! paths to be considered, where n is the number of nodes.
– 0/1 Knapsack
• 2n selections, where n is the number of objects.
State-Space Tree
• Let (a1, a2, …, an) be a possible answer.
• Suppose each ai is either 0 or 1, for 1 ≤ i ≤ n.
(?, …, ?)
(0, ?, …, ?)   (1, ?, …, ?)
State-Space Tree: Shortest Path
(the example graph with its edge weights is omitted here)
(1)
(1,2)   (1,3)
(1,2,3) (1,2,4)   (1,3,4) (1,3,5)
…
(1,2,3,4,5)
Generating Possible Paths
• Let {1,2,3, …, n} be a set of nodes and E[i][j] is the weight
of the edge between node i and j.
path(p)
last = last node in the path p
for next = 1 to n
do np = p
if next is not in np and E[last][next] != 0
then np = np || next
path(np)
else return
State-Space Tree : 0/1 Knapsack
• Given a set of objects o1, …, o5.
(the tree grows from the empty set {} at the root down to {1,2,3,4,5},
branching on whether each object is chosen)
State-Space Tree : n Queen
(boards showing candidate placements of the n queens, one per row, are omitted here)
Depth-first Search
• Traverse the tree from root until a leaf is reached.
• Then, traverse back up to visited the next
unvisited node.
depthFirst(v)
visited[v] = 1
for each node k adjacent to v
do if not visited[k]
then depthFirst(k)
0/1 Knapsack: Depth-first Search
Global: maxV=0 maxSack={}
DFknapsack(sack, unchosen)
for each object p in unchosen
do unchosen=unchosen-{p}
sack=sack U {p}
val=evaluate(sack)
if unchosen is empty
► A leaf is reached.
then maxV=max(maxV, val)
if maxV=val then maxSack=sack
return
else DFknapsack(sack, unchosen)
return
Breadth-first Search
• Traverse the tree from root until the nodes of the same
depth are all visited.
• Then, visit the nodes in the next level.
breadthFirst(v)
Q = empty queue
enqueue(Q, v) visited[v] = 1
while not empty (Q)
do u = dequeue(Q)
for each node k adjacent to u
do if not visited[k]
then visited[k] = true
enqueue(Q, k)
0/1 Knapsack: Breadth-first Search
BFknapsack
Q = empty queue maxV=0
sack = { }
unchosen = set of all objects
enqueue(Q, <sack, unchosen>)
while not empty (Q)
do <sack, unchosen> = dequeue(Q)
if unchosen is not empty
then for each object p in unchosen
do enqueue(Q,<sackU{p}, unchosen-{p}>)
else maxV = max(maxV, evaluate(sack))
if maxV=evaluate(sack) then maxSack = sack
Backtracking
Backtracking
• Reduce the search by cutting down some
branches in the tree
0/1 Knapsack: Backtracking
(state-space tree: each node is labelled with its sack and its current
weight and value, starting from the empty sack {} with 0,0; capacity = 5)
0/1 Knapsack: Backtracking
BTknapsack(sack, unchosen)
for each object p in unchosen
do unchosen=unchosen-{p}
if p can be put in sack then sack = sack U {p}
► Backtracking occurs when p cannot be put in sack.
val=evaluate(sack)
if unchosen is empty
► A leaf is reached.
then maxV=max(maxV, val)
maxSack=sack
return
else BTknapsack(sack, unchosen)
return
Branch-and-Bound
Branch-and-bound
• Use for optimization problems
• An optimal solution is a feasible solution with
the optimal value of the objective function.
• Search in state-space trees can be pruned by
using bound.
State-Space Tree with Bound
(each node of the tree is labelled with its state and an estimated
bound; its children likewise)
Branch and Bound
From a node N:
• If the bound of N is not better than the current
overall bound, terminate the search from N.
• Otherwise,
– If N is a leaf node
• If its bound is better than the current overall bound, update
the overall bound and terminate the search from N.
• Otherwise, terminate the search from N.
– Otherwise, search each child of N.
0/1 Knapsack: Branch-and-Bound
Global: OvBound=0
BBknapsack(sack, unchosen)
if bound(sack, unchosen)>OvBound
► Estimated bound can be better than the overall bound.
then for each object p in unchosen
do unchosen=unchosen-{p}
if p can be put in sack then sack = sack U {p}
► Backtracking occurs when p cannot be put in sack.
val=evaluate(sack)
if unchosen is empty
► A leaf is reached.
then maxV = max(maxV, val)
maxSack = sack
OvBound = max(evaluate(sack), OvBound)
return
else ► A leaf is not reached.
BBknapsack(sack, unchosen)
return
0/1 Knapsack: Estimated bound
• Current value
• available space * best ratio value:space of
unchosen objects
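As a sketch, with an illustrative `Obj` struct and function name (not from the slides), such a bound function could be:

```cpp
#include <algorithm>
#include <vector>

struct Obj { double value, weight; };

// Optimistic upper bound for a branch-and-bound knapsack node:
// current value plus (remaining capacity) times the best
// value-to-space ratio among the unchosen objects. No real packing
// can beat this, since no unchosen object has a better ratio.
double bound(double currentValue, double freeSpace,
             const std::vector<Obj>& unchosen) {
    double bestRatio = 0;
    for (const Obj& o : unchosen)
        bestRatio = std::max(bestRatio, o.value / o.weight);
    return currentValue + freeSpace * bestRatio;
}
```

If this bound is not better than the overall bound found so far, the subtree rooted at the node can be pruned.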
Branch-and-bound
(root node: empty sack {} with estimated bound 20; capacity = 5)