
DESIGN AND ANALYSIS OF ALGORITHMS
Dynamic Programming

Reetinder Sidhu
Department of Computer Science and Engineering
DYNAMIC PROGRAMMING
UNIT 5: Limitations of Algorithmic Power and Coping with the Limitations

Dynamic Programming
I Computing a Binomial Coefficient
I The Knapsack Problem
I Memory Functions
I Warshall’s and Floyd’s Algorithms
Limitations of Algorithmic Power
I Lower-Bound Arguments
I Decision Trees
I P, NP, and NP-Complete, NP-Hard Problems
Coping with the Limitations
I Backtracking
I Branch-and-Bound

Concepts covered
Dynamic Programming
I Introduction
I Fibonacci numbers
I Binomial Coefficients
DYNAMIC PROGRAMMING
Introduction
Dynamic Programming is a general algorithm design technique for solving problems
defined by recurrences with overlapping subproblems
Invented by American mathematician Richard Bellman in the 1950s to solve
optimization problems and later assimilated by CS
“Programming” here means “planning”
Main idea:
I set up a recurrence relating a solution to a larger instance to solutions of some smaller instances
I solve smaller instances once
I record solutions in a table
I extract solution to the initial instance from that table
DYNAMIC PROGRAMMING
Example: Fibonacci Numbers
Recall the definition of Fibonacci numbers:

f(n) = f(n − 1) + f(n − 2)
f(0) = 0
f(1) = 1

Computing the nth Fibonacci number recursively (top-down):

f(n)
f(n − 1)    f(n − 2)
f(n − 2) f(n − 3)    f(n − 3) f(n − 4)
DYNAMIC PROGRAMMING
Example: Fibonacci Numbers
Computing the nth Fibonacci number using bottom-up iteration and recording results:

f(0) = 0
f(1) = 1
f(2) = 0 + 1 = 1
f(3) = 1 + 1 = 2
f(4) = 1 + 2 = 3
...

Efficiency:
time: Θ(n)
space: Θ(n) or Θ(1)
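As a complement, a minimal Python sketch of this bottom-up computation (not from the slides); keeping only the last two values brings the space down to Θ(1):

def fib(n: int) -> int:
    # Bottom-up iteration: Theta(n) time, Theta(1) space.
    if n < 2:
        return n
    prev, curr = 0, 1                   # f(0) and f(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr  # record only the last two values
    return curr

# fib(4) == 3, matching the table above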
DYNAMIC PROGRAMMING
Algorithm Examples

Computing a binomial coefficient


Warshall’s algorithm for transitive closure
Floyd’s algorithm for all-pairs shortest paths
Constructing an optimal binary search tree
Some instances of difficult discrete optimization problems:
I traveling salesman
I knapsack
DYNAMIC PROGRAMMING
Binomial Coefficient
Binomial coefficients are the coefficients of the binomial formula:

(a + b)^n = C(n, 0) a^n b^0 + . . . + C(n, k) a^(n−k) b^k + . . . + C(n, n) a^0 b^n

Recurrence:
C(n, k) = C(n−1, k) + C(n−1, k−1) for n > k > 0
C(n, 0) = 1, C(n, n) = 1 for n ≥ 0

Value of C(n, k) can be computed by filling a table:

      0   1   2   . . .   k−1          k
0     1
1     1   1
.
.
.
n−1                       C(n−1,k−1)   C(n−1,k)
n                                      C(n,k)
DYNAMIC PROGRAMMING
Binomial Coefficient Algorithm
Dynamic Programming Binomial Coefficient Algorithm
1: procedure BINOMIAL(n, k)
2: . Input: Integers n ≥ 0, k ≥ 0
3: . Output: C(n, k)
4: for i ← 0 to n do
5:   for j ← 0 to min(i, k) do
6:     if j = 0 or j = i then
7:       C[i, j] ← 1
8:     else C[i, j] ← C[i − 1, j] + C[i − 1, j − 1]
9: return C[n, k]

Time: Θ(nk)
Space: Θ(nk)
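The same algorithm as a short Python sketch (a rendering of the pseudocode above, using 0-indexed lists for the table):

def binomial(n: int, k: int) -> int:
    # Fill the table row by row; Theta(nk) time and space.
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1
            else:
                C[i][j] = C[i - 1][j] + C[i - 1][j - 1]
    return C[n][k]

# binomial(5, 2) == 10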
DYNAMIC PROGRAMMING
Think About It

What does dynamic programming have in common with divide-and-conquer? What is a principal difference between them?

The coin change problem does not have an optimal greedy solution in all cases (ex: coins 1, 20, 25 and amount 40, where greedy picks one 25 and fifteen 1s instead of two 20s). Is there a dynamic programming based algorithm that can solve all cases of the coin change problem? One possible answer is sketched below.
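A hedged Python sketch of bottom-up dynamic programming for coin change (an illustration, not the slides' own solution):

def min_coins(coins, amount):
    # F[j] = minimum number of coins that make amount j, or None if impossible.
    INF = float("inf")
    F = [0] + [INF] * amount
    for j in range(1, amount + 1):
        for c in coins:
            if c <= j and F[j - c] + 1 < F[j]:
                F[j] = F[j - c] + 1
    return None if F[amount] == INF else F[amount]

# min_coins([1, 20, 25], 40) == 2  (20 + 20), where greedy would use 16 coins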
DESIGN AND ANALYSIS OF ALGORITHMS
The Knapsack Problem

Reetinder Sidhu
Department of Computer Science and Engineering
THE KNAPSACK PROBLEM
UNIT 5: Limitations of Algorithmic Power and Coping with the Limitations

Dynamic Programming
I Computing a Binomial Coefficient
I The Knapsack Problem
I Memory Functions
I Warshall’s and Floyd’s Algorithms
Limitations of Algorithmic Power
I Lower-Bound Arguments
I Decision Trees
I P, NP, and NP-Complete, NP-Hard Problems
Coping with the Limitations
I Backtracking
I Branch-and-Bound

Concepts covered
The Knapsack Problem
I Introduction
I Recurrence
I Example
THE KNAPSACK PROBLEM
Problem Definition

Given
I n items of integer weights w1, w2, . . . , wn and values v1, v2, . . . , vn
I a knapsack of capacity W (integer W > 0)
Find the most valuable subset of the items such that the sum of their weights does not exceed W
THE KNAPSACK PROBLEM
Knapsack Recurrence
To design a dynamic programming algorithm, we need to derive a recurrence relation that expresses a solution to an instance of the knapsack problem in terms of solutions to its smaller subinstances
Consider the smaller knapsack problem where the number of items is i (i ≤ n) and the knapsack capacity is j (j ≤ W )
Then

F(i, j) = max(F(i − 1, j), vi + F(i − 1, j − wi))   if j − wi ≥ 0
F(i, j) = F(i − 1, j)                               if j − wi < 0

with initial conditions F(0, j) = 0 for j ≥ 0 and F(i, 0) = 0 for i ≥ 0
THE KNAPSACK PROBLEM
Example

F(i, j) = max(F(i − 1, j), vi + F(i − 1, j − wi))   if j − wi ≥ 0
F(i, j) = F(i − 1, j)                               if j − wi < 0

Dynamic Programming Example

item i   weight wi   value vi
1        2           12
2        1           10
3        3           20
4        2           15

What is the maximum value that can be stored in a knapsack of capacity 5?

         capacity j
i     1    2    3    4    5
1     0   12   12   12   12
2    10   12   22   22   22
3    10   12   22   30   32
4    10   15   25   30   37

Given the above 4 items, the maximum value that can be stored in a knapsack of capacity 5 is 37
THE KNAPSACK PROBLEM
Complexity

Space complexity: Θ(nW)
Time complexity: Θ(nW)
Time to find the composition of an optimal solution: O(n)
THE KNAPSACK PROBLEM
Think About It

Write pseudocode of the bottom-up dynamic programming algorithm for the knapsack problem (one possible answer is sketched below)
True or False:
1 A sequence of values in a row of the dynamic programming table for the knapsack problem is always nondecreasing?
2 A sequence of values in a column of the dynamic programming table for the knapsack problem is always nondecreasing?
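One possible rendering of that bottom-up algorithm as a Python sketch (a sketch, not the official solution; lists are 0-indexed):

def knapsack(weights, values, W):
    # F[i][j] = best value using the first i items with capacity j.
    # Theta(nW) time and space.
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            F[i][j] = F[i - 1][j]                     # skip item i
            if weights[i - 1] <= j:                   # or take item i
                F[i][j] = max(F[i][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
    return F[n][W]

# knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5) == 37, matching the example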
DESIGN AND ANALYSIS OF ALGORITHMS
Memory Function Knapsack

Reetinder Sidhu
Department of Computer Science and Engineering
MEMORY FUNCTION KNAPSACK
UNIT 5: Limitations of Algorithmic Power and Coping with the Limitations

Dynamic Programming
I Computing a Binomial Coefficient
I The Knapsack Problem
I Memory Functions
I Warshall’s and Floyd’s Algorithms
Limitations of Algorithmic Power
I Lower-Bound Arguments
I Decision Trees
I P, NP, and NP-Complete, NP-Hard Problems
Coping with the Limitations
I Backtracking
I Branch-and-Bound

Concepts covered
Memory Function Knapsack
I Motivation
I Algorithm
I Example
MEMORY FUNCTION KNAPSACK
Bottom Up Approach
Advantage of bottom up approach: each value is computed only once
Example computed bottom up:

         capacity j
i     1    2    3    4    5
1     0   12   12   12   12
2    10   12   22   22   22
3    10   12   22   30   32
4    10   15   25   30   37

Disadvantage of bottom up approach: values that are not required are also computed
MEMORY FUNCTION KNAPSACK
Top Down Approach

Disadvantage of top down approach: the same subproblem is solved multiple times
Example computed top down:

f(n)
f(n − 1)    f(n − 2)
f(n − 2) f(n − 3)    f(n − 3) f(n − 4)

Advantage of top down approach: only the required subproblems are solved
MEMORY FUNCTION KNAPSACK
Memory Function Dynamic Programming

Combine the advantages of bottom up and top down approaches:


I compute each subproblem only once
I compute only the required subproblems
MEMORY FUNCTION KNAPSACK
MF-DP Algorithm
Algorithm for Memory Function Dynamic Programming
1: procedure MFKNAPSACK(i, j)
2: . Inputs: i, indicating the number of items, and
3: . j, indicating the knapsack capacity
4: . Output: The value of an optimal feasible subset of the first i items
5: . Note: Uses global variables: input arrays Weights[1 . . . n], Values[1 . . . n],
6: . and table F [0 . . . n, 0 . . . W ] whose entries are initialized with −1’s except
7: . row 0 and column 0, which are initialized with 0’s
8: if F [i, j] < 0 then
9:   if j < Weights[i] then
10:    value ← MFKnapsack(i − 1, j)
11:  else value ← max(MFKnapsack(i − 1, j), Values[i] + MFKnapsack(i − 1, j − Weights[i]))
12:  F [i, j] ← value
13: return F [i, j]
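A Python sketch of the same memory-function scheme (an inner function replaces the slides' global variables; an illustration, not the slides' own code):

def mf_knapsack(weights, values, W):
    # Top-down recursion that stores each computed F(i, j),
    # so no subproblem is solved twice.
    n = len(weights)
    F = [[-1] * (W + 1) for _ in range(n + 1)]
    for j in range(W + 1):              # row 0 is trivial
        F[0][j] = 0
    for i in range(n + 1):              # column 0 is trivial
        F[i][0] = 0

    def solve(i, j):
        if F[i][j] < 0:                 # not computed yet
            if j < weights[i - 1]:
                F[i][j] = solve(i - 1, j)
            else:
                F[i][j] = max(solve(i - 1, j),
                              values[i - 1] + solve(i - 1, j - weights[i - 1]))
        return F[i][j]

    return solve(n, W)

# mf_knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5) == 37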
MEMORY FUNCTION KNAPSACK
Example

F(i, j) = max(F(i − 1, j), vi + F(i − 1, j − wi))   if j − wi ≥ 0
F(i, j) = F(i − 1, j)                               if j − wi < 0

Dynamic Programming Example

item i   weight wi   value vi
1        2           12
2        1           10
3        3           20
4        2           15

What is the maximum value that can be stored in a knapsack of capacity 5?

         capacity j
i     0    1    2    3    4    5
0     0    0    0    0    0    0
1     0    0   12   12   12   12
2     0    –   12   22    –   22
3     0    –    –   22    –   32
4     0    –    –    –    –   37

Knapsack problem solved by
computing 21 out of 30 possible subproblems
reusing subproblem entry (1, 2)
MEMORY FUNCTION KNAPSACK
Complexity

Constant factor improvement in efficiency
I Space complexity: Θ(nW)
I Time complexity: Θ(nW)
I Time to find the composition of an optimal solution: O(n)
Bigger gains are possible where computation of a subproblem takes more than constant time
MEMORY FUNCTION KNAPSACK
Think About It

Consider the use of the MF technique to compute the binomial coefficient using the recurrence

C(n, k) = C(n − 1, k − 1) + C(n − 1, k)

I How many table entries are filled?
I How many are reused?

A sketch to experiment with these questions follows.
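A small memoized Python sketch; the counters are hypothetical additions of this sketch (not in the slides) that report how many nontrivial entries are filled versus reused:

def mf_binomial(n, k):
    # Memory-function binomial coefficient; trivial entries
    # (j == 0 or j == i) are neither cached nor counted.
    C = {}
    stats = {"filled": 0, "reused": 0}

    def c(i, j):
        if j == 0 or j == i:
            return 1
        if (i, j) in C:
            stats["reused"] += 1
        else:
            stats["filled"] += 1
            C[(i, j)] = c(i - 1, j - 1) + c(i - 1, j)
        return C[(i, j)]

    return c(n, k), stats

# mf_binomial(6, 3)[0] == 20; inspect the stats to answer the questions above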
DESIGN AND ANALYSIS OF ALGORITHMS
Transitive Closure (Warshall’s Algorithm)

Reetinder Sidhu
Department of Computer Science and Engineering
TRANSITIVE CLOSURE (WARSHALL’S ALGORITHM)
UNIT 5: Limitations of Algorithmic Power and Coping with the Limitations

Dynamic Programming
I Computing a Binomial Coefficient
I The Knapsack Problem
I Memory Functions
I Warshall’s and Floyd’s Algorithms
Limitations of Algorithmic Power
I Lower-Bound Arguments
I Decision Trees
I P, NP, and NP-Complete, NP-Hard Problems
Coping with the Limitations
I Backtracking
I Branch-and-Bound

Concepts covered
Transitive Closure (Warshall’s Algorithm)
I Motivation
I Algorithm
I Example
TRANSITIVE CLOSURE (WARSHALL’S ALGORITHM)
Transitive Closure
Computes the transitive closure of a relation
Alternatively: existence of all nontrivial paths in a digraph (directed graph)
Example of transitive closure:

[digraph with vertices 1, 2, 3, 4 and edges 1→3, 2→1, 2→4, 4→2, shown beside its transitive closure]

Adjacency matrix A:
    1  2  3  4
1   0  0  1  0
2   1  0  0  1
3   0  0  0  0
4   0  1  0  0

Transitive closure T:
    1  2  3  4
1   0  0  1  0
2   1  1  1  1
3   0  0  0  0
4   1  1  1  1
TRANSITIVE CLOSURE (WARSHALL’S ALGORITHM)
Warshall’s Algorithm
Constructs the transitive closure T as the last matrix in the sequence of n × n matrices R(0), . . . , R(k), . . . , R(n), where R(k)[i, j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate vertices
I R(0) = A (adjacency matrix), R(n) = T (transitive closure)

On the kth iteration, the algorithm computes R(k):

R(k)[i, j] = 1 if there is a path from i to k and from k to j, i.e., R(k−1)[i, k] = R(k−1)[k, j] = 1
R(k)[i, j] = R(k−1)[i, j] otherwise

Equivalently:

R(k)[i, j] = R(k−1)[i, j] or (R(k−1)[i, k] and R(k−1)[k, j])
TRANSITIVE CLOSURE (WARSHALL’S ALGORITHM)
Algorithm

Transitive Closure (Warshall’s Algorithm)

1: procedure WARSHALL(A[1 . . . n, 1 . . . n])
2: . Input: The adjacency matrix A of a digraph with n vertices
3: . Output: The transitive closure of the digraph
4: R(0) ← A
5: for k ← 1 to n do
6:   for i ← 1 to n do
7:     for j ← 1 to n do
8:       R(k)[i, j] ← R(k−1)[i, j] or (R(k−1)[i, k] and R(k−1)[k, j])
9: return R(n)
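A direct Python rendering of Warshall's algorithm, given as a sketch; the matrix is updated in place, which is safe here because pass k never changes row k or column k:

def warshall(A):
    # R[i][j] becomes 1 iff there is a nontrivial directed path
    # from i to j. Theta(n^3) time.
    n = len(A)
    R = [row[:] for row in A]           # R(0) = adjacency matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

# For the example digraph (edges 1->3, 2->1, 2->4, 4->2), 0-indexed:
# A = [[0,0,1,0],[1,0,0,1],[0,0,0,0],[0,1,0,0]]
# warshall(A) == [[0,0,1,0],[1,1,1,1],[0,0,0,0],[1,1,1,1]]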
TRANSITIVE CLOSURE (WARSHALL’S ALGORITHM)
Warshall’s Algorithm

[digraph with edges 1→3, 2→1, 2→4, 4→2, redrawn at each iteration]

R(0) = A:
    1  2  3  4
1   0  0  1  0
2   1  0  0  1
3   0  0  0  0
4   0  1  0  0

R(1) (paths through vertex 1; entry [2, 3] is added):
    1  2  3  4
1   0  0  1  0
2   1  0  1  1
3   0  0  0  0
4   0  1  0  0

R(2) (paths through vertices 1, 2; row 4 fills in):
    1  2  3  4
1   0  0  1  0
2   1  0  1  1
3   0  0  0  0
4   1  1  1  1

R(3) (vertex 3 has no outgoing edges, so no change):
    1  2  3  4
1   0  0  1  0
2   1  0  1  1
3   0  0  0  0
4   1  1  1  1

R(4) = T (paths through vertex 4; row 2 fills in):
    1  2  3  4
1   0  0  1  0
2   1  1  1  1
3   0  0  0  0
4   1  1  1  1
TRANSITIVE CLOSURE (WARSHALL’S ALGORITHM)
Think About It

Is Warshall’s algorithm efficient for sparse graphs? Why / why not?


Can Warshall’s algorithm be used to determine if a graph is a DAG (Directed Acyclic
Graph)? If so, how?
DESIGN AND ANALYSIS OF ALGORITHMS
All Pairs Shortest Path (Floyd’s Algorithm)

Reetinder Sidhu
Department of Computer Science and Engineering
ALL PAIRS SHORTEST PATH (FLOYD’S ALGORITHM)
UNIT 5: Limitations of Algorithmic Power and Coping with the Limitations

Dynamic Programming
I Computing a Binomial Coefficient
I The Knapsack Problem
I Memory Functions
I Warshall’s and Floyd’s Algorithms
Limitations of Algorithmic Power
I Lower-Bound Arguments
I Decision Trees
I P, NP, and NP-Complete, NP-Hard Problems
Coping with the Limitations
I Backtracking
I Branch-and-Bound

Concepts covered
All Pairs Shortest Path (Floyd’s Algorithm)
I Definition
I Algorithm
I Example
ALL PAIRS SHORTEST PATH (FLOYD’S ALGORITHM)
Problem Definition

Given an undirected or directed graph with weighted edges, find the shortest path between every pair of vertices
I Dijkstra’s algorithm finds shortest paths from a given vertex to the remaining n − 1 vertices (Θ(n) paths)
I The current problem is to find the shortest path between every pair of vertices (Θ(n²) paths)
The solution approach is similar to the transitive closure approach: there we computed the transitive closure via a sequence of n × n matrices R(0), . . . , R(k), . . . , R(n), where R(k)[i, j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate vertices
Here we compute all pairs shortest paths via a sequence of n × n matrices D(0), . . . , D(k), . . . , D(n), where D(k)[i, j] is the length of the shortest path from i to j with only the first k vertices allowed as intermediate vertices
ALL PAIRS SHORTEST PATH (FLOYD’S ALGORITHM)
Example
Example of all pairs shortest paths:

[weighted digraph with edges 1→3 (3), 2→1 (2), 3→2 (7), 3→4 (1), 4→1 (6)]

Weight matrix W:
    1   2   3   4
1   0   ∞   3   ∞
2   2   0   ∞   ∞
3   ∞   7   0   1
4   6   ∞   ∞   0

Distance matrix D:
    1   2   3   4
1   0  10   3   4
2   2   0   5   6
3   7   7   0   1
4   6  16   9   0
ALL PAIRS SHORTEST PATH (FLOYD’S ALGORITHM)
Algorithm
All Pairs Shortest Path (Floyd’s Algorithm)
1: procedure FLOYD(W [1 . . . n, 1 . . . n])
2: . Input: Weight matrix W of a graph with no negative length cycles
3: . Output: Distance matrix of shortest paths
4: D ← W
5: for k ← 1 to n do
6:   for i ← 1 to n do
7:     for j ← 1 to n do
8:       D[i, j] ← min(D[i, j], D[i, k] + D[k, j])
9: return D

Complexity: Θ(n³)
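A Python sketch of Floyd's algorithm; the in-place update is safe because pass k never changes D[i, k] or D[k, j] (D[k, k] = 0 when there are no negative cycles):

INF = float("inf")

def floyd(W):
    # All-pairs shortest paths; assumes no negative-length cycles.
    # Theta(n^3) time.
    n = len(W)
    D = [row[:] for row in W]           # D(0) = weight matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# Weight matrix of the example, 0-indexed, with INF for missing edges:
# W = [[0, INF, 3, INF], [2, 0, INF, INF], [INF, 7, 0, 1], [6, INF, INF, 0]]
# floyd(W)[0] == [0, 10, 3, 4], matching row 1 of the distance matrix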
ALL PAIRS SHORTEST PATH (FLOYD’S ALGORITHM)
Example

D(0) = W:
    1   2   3   4
1   0   ∞   3   ∞
2   2   0   ∞   ∞
3   ∞   7   0   1
4   6   ∞   ∞   0

D(1) (paths through vertex 1; updates [2, 3] and [4, 3]):
    1   2   3   4
1   0   ∞   3   ∞
2   2   0   5   ∞
3   ∞   7   0   1
4   6   ∞   9   0

D(2) (paths through vertices 1, 2; updates [3, 1]):
    1   2   3   4
1   0   ∞   3   ∞
2   2   0   5   ∞
3   9   7   0   1
4   6   ∞   9   0

D(3) (paths through vertices 1–3; updates [1, 2], [1, 4], [2, 4], [4, 2]):
    1   2   3   4
1   0  10   3   4
2   2   0   5   6
3   9   7   0   1
4   6  16   9   0

D(4) = D (paths through all vertices; updates [3, 1]):
    1   2   3   4
1   0  10   3   4
2   2   0   5   6
3   7   7   0   1
4   6  16   9   0
ALL PAIRS SHORTEST PATH (FLOYD’S ALGORITHM)
Think About It

Give an example of a graph with negative weights for which Floyd’s algorithm does not yield the correct result
Enhance Floyd’s algorithm so that the shortest paths themselves, not just their lengths, can be found (one common enhancement is sketched below)
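One common enhancement, offered as a sketch rather than the slides' intended answer: maintain a successor matrix (called nxt here, an assumption of this sketch) alongside the distances.

def floyd_with_paths(W):
    # Floyd's algorithm extended with a successor matrix so the
    # shortest paths themselves can be recovered, not just lengths.
    n = len(W)
    D = [row[:] for row in W]
    nxt = [[j if W[i][j] != float("inf") else None for j in range(n)]
           for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    nxt[i][j] = nxt[i][k]       # go toward k first
    return D, nxt

def path(nxt, i, j):
    # Reconstruct the vertex sequence of a shortest path from i to j.
    if nxt[i][j] is None:
        return []
    p = [i]
    while i != j:
        i = nxt[i][j]
        p.append(i)
    return p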
DESIGN AND ANALYSIS OF ALGORITHMS
Lower-Bound Arguments

Reetinder Sidhu
Department of Computer Science and Engineering
LOWER-BOUND ARGUMENTS
UNIT 5: Limitations of Algorithmic Power and Coping with the Limitations

Dynamic Programming
I Computing a Binomial Coefficient
I The Knapsack Problem
I Memory Functions
I Warshall’s and Floyd’s Algorithms
I Optimal Binary Search Trees
Limitations of Algorithmic Power
I Lower-Bound Arguments
I Decision Trees
I P, NP, and NP-Complete, NP-Hard Problems
Coping with the Limitations
I Backtracking
I Branch-and-Bound

Concepts covered
Lower-Bound Arguments
I Trivial lower bounds
I Adversary arguments
I Problem reduction
LOWER-BOUND ARGUMENTS
Limitations of Algorithmic Power

There are no algorithms to solve some problems
I Ex: halting problem
Other problems can be solved algorithmically, but not in polynomial time
I Ex: traveling salesman problem
Even for problems solvable in polynomial time, there are lower bounds
LOWER-BOUND ARGUMENTS
Definition

Lower Bound
An estimate on a minimum amount of work needed to solve a given problem (estimate can be
less than the minimum amount of work but not greater)

Examples:
I number of comparisons needed to find the largest element in a set of n numbers
I number of comparisons needed to sort an array of size n
I number of comparisons necessary for searching in a sorted array
I number of multiplications needed to multiply two n × n matrices
LOWER-BOUND ARGUMENTS
Bound Tightness
A lower bound can be:
I an exact count
I an efficiency class (Ω)

Tight Lower Bound
There exists an algorithm with the same efficiency as the lower bound

Problem                              Lower Bound   Tightness
Sorting                              Ω(n log n)    yes
Searching a sorted array             Ω(log n)      yes
Element uniqueness                   Ω(n log n)    yes
Multiplication of n-digit integers   Ω(n)          unknown
Matrix multiplication (n × n)        Ω(n²)         unknown
LOWER-BOUND ARGUMENTS
Methods for Establishing Lower Bounds

Trivial lower bounds


Information-theoretic arguments (decision trees)
Adversary arguments
Problem reduction
LOWER-BOUND ARGUMENTS
Trivial Lower Bounds
Trivial Lower Bounds
Based on counting the number of items that must be processed in input and generated as output

Examples
I finding max element
I polynomial evaluation
I sorting
I element uniqueness
I Hamiltonian circuit existence
Conclusions
I may or may not be useful
I be careful in deciding how many elements must be processed
LOWER-BOUND ARGUMENTS
Adversary Arguments
Adversary Argument
A method of proving a lower bound by playing the role of an adversary that makes the algorithm work the hardest by adjusting the input

Example 1: “Guessing” a number between 1 and n with yes/no questions
I Adversary: Puts the number in the larger of the two subsets generated by the last question
Example 2: Merging two sorted lists of size n: a1 < a2 < . . . < an and b1 < b2 < . . . < bn
I Adversary: ai < bj iff i < j
The output b1 < a1 < b2 < a2 < . . . < bn < an requires 2n − 1 comparisons of adjacent elements
LOWER-BOUND ARGUMENTS
Problem Reduction

Basic idea: If problem P is at least as hard as problem Q, then a lower bound for Q is also a lower bound for P
Hence, find a problem Q with a known lower bound that can be reduced to the problem P in question
Example: P is finding the MST for n points in the Cartesian plane; Q is the element uniqueness problem (known to be in Ω(n log n))
LOWER-BOUND ARGUMENTS
Think About It

Prove that the classic recursive algorithm for the Tower of Hanoi puzzle makes the
minimum number of disk moves
Find a trivial lower-bound class and indicate if the bound is tight:
I finding the largest element in an array
I generating all the subsets of an n-element set
I determining whether n given real numbers are all distinct
DESIGN AND ANALYSIS OF ALGORITHMS
Decision Trees

Reetinder Sidhu
Department of Computer Science and Engineering
DECISION TREES
UNIT 5: Limitations of Algorithmic Power and Coping with the Limitations

Dynamic Programming
I Computing a Binomial Coefficient
I The Knapsack Problem
I Memory Functions
I Warshall’s and Floyd’s Algorithms
I Optimal Binary Search Trees
Limitations of Algorithmic Power
I Lower-Bound Arguments
I Decision Trees
I P, NP, and NP-Complete, NP-Hard Problems
Coping with the Limitations
I Backtracking
I Branch-and-Bound

Concepts covered
Decision Trees
I Smallest of three numbers
I Sorting
I Searching
DECISION TREES
Problem Types: Optimization and Decision

Optimization problem: find a solution that maximizes or minimizes some objective function
Decision problem: answer yes/no to a question
Many problems have decision and optimization versions
I Ex: traveling salesman problem
I optimization: find Hamiltonian cycle of minimum length
I decision: is there a Hamiltonian cycle of length ≤ m?
Decision problems are more convenient for formal investigation of their complexity
DECISION TREES
Introduction

Many important algorithms, especially those for sorting and searching, work by
comparing items of their inputs
We can study the performance of such algorithms with a device called the decision
tree
DECISION TREES
Example: Decision tree for minimum of three numbers
Decision tree for determining the minimum of three numbers:
DECISION TREES
Central Idea
The central idea behind this model lies in the observation that a tree with a given number of leaves, which is dictated by the number of possible outcomes, has to be tall enough to have that many leaves
Specifically, it is not difficult to prove that for any binary tree with l leaves and height h:

h ≥ ⌈log₂ l⌉

A binary tree of height h with the largest number of leaves has all its leaves on the last level
Hence, the largest number of leaves in such a tree is 2^h
In other words, 2^h ≥ l, which implies h ≥ ⌈log₂ l⌉
DECISION TREES
Decision Trees for Sorting Algorithms
Most sorting algorithms are comparison-based, i.e., they work by comparing elements in a list to be sorted
By studying properties of binary decision trees for comparison-based sorting algorithms, we can derive important lower bounds on time efficiencies of such algorithms
We can interpret an outcome of a sorting algorithm as finding a permutation of the element indices of an input list that puts the list’s elements in ascending order
For example, for the outcome a < c < b obtained by sorting a list a, b, c, the permutation in question is 1, 3, 2
The number of possible outcomes for sorting an arbitrary n-element list is equal to n!
DECISION TREES
Decision Trees for Sorting Algorithms
The height of a binary decision tree for any comparison-based sorting algorithm, and hence the worst-case number of comparisons made by such an algorithm, cannot be less than ⌈log₂ n!⌉:

Cworst(n) ≥ ⌈log₂ n!⌉

Using Stirling’s formula:

⌈log₂ n!⌉ ≈ log₂ √(2πn) (n/e)^n = n log₂ n − n log₂ e + (log₂ n)/2 + (log₂ 2π)/2 ≈ n log₂ n

About n log₂ n comparisons are necessary to sort an arbitrary n-element list by any comparison-based sorting algorithm
DECISION TREES
Decision Trees for Sorting Algorithms
Decision Tree for Three Element Selection Sort
DECISION TREES
Decision Trees for Sorting Algorithms
We can also use decision trees for analyzing the average-case behavior of a comparison-based sorting algorithm
We can compute the average number of comparisons for a particular algorithm as the average depth of its decision tree’s leaves, i.e., as the average path length from the root to the leaves
For example, for the three-element insertion sort this number is:

(2 + 3 + 3 + 2 + 3 + 3)/6 = 2 2/3
DECISION TREES
Decision Trees for Sorting Algorithms

Decision Tree for Three Element Insertion Sort


DECISION TREES
Decision Trees for Sorting Algorithms

Under the standard assumption that all n! outcomes of sorting are equally likely, the
following lower bound on the average number of comparisons Cavg made by any
comparison-based algorithm in sorting an n-element list has been proved

Cavg (n) ≥ log2 n!


DECISION TREES
Decision Trees for Searching Algorithms

Decision trees can be used for establishing lower bounds on the number of key comparisons in searching a sorted array of n keys: A[0] < A[1] < . . . < A[n − 1]
The number of comparisons made by binary search in the worst case:

Cworst^bs(n) = ⌊log₂ n⌋ + 1 = ⌈log₂(n + 1)⌉
DECISION TREES
Decision Trees for Searching Algorithms
Ternary Decision Tree for Four Element Array Binary Search
DECISION TREES
Decision Trees for Searching Algorithms
For an array of n elements, all such decision trees will have 2n + 1 leaves (n for successful searches and n + 1 for unsuccessful ones)
Since the minimum height h of a ternary tree with l leaves is ⌈log₃ l⌉, we get the following lower bound on the number of worst-case comparisons:

Cworst(n) ≥ ⌈log₃(2n + 1)⌉

This lower bound is smaller than ⌈log₂(n + 1)⌉, the number of worst-case comparisons for binary search
Can we prove a better lower bound, or is binary search far from being optimal?
DECISION TREES
Decision Trees for Searching Algorithms
Binary Decision Tree for Four Element Array Binary Search

The binary decision tree is simply the ternary decision tree with all the middle subtrees eliminated
With binary decision trees, the lower bound matches binary search:

Cworst(n) ≥ ⌈log₂(n + 1)⌉
DESIGN AND ANALYSIS OF ALGORITHMS
P, NP, and NP-Complete Problems

Reetinder Sidhu
Department of Computer Science and Engineering
P, NP, AND NP-COMPLETE PROBLEMS
UNIT 5: Limitations of Algorithmic Power and Coping with the Limitations

Dynamic Programming
I Computing a Binomial Coefficient
I The Knapsack Problem
I Memory Functions
I Warshall’s and Floyd’s Algorithms
I Optimal Binary Search Trees
Limitations of Algorithmic Power
I Lower-Bound Arguments
I Decision Trees
I P, NP, and NP-Complete, NP-Hard Problems
Coping with the Limitations
I Backtracking
I Branch-and-Bound

Concepts covered
P, NP, and NP-Complete Problems
P, NP, AND NP-COMPLETE PROBLEMS
Classifying Problem Complexity

Is the problem tractable, i.e., is there a polynomial-time (O(p(n))) algorithm that solves it?
Possible answers:
I yes
I no
F because it’s been proved that no algorithm exists at all (e.g., Turing’s halting problem)
F because it’s been proved that any algorithm takes exponential time
I unknown
P, NP, AND NP-COMPLETE PROBLEMS
Problem Types: Optimization and Decision

Optimization problem: find a solution that maximizes or minimizes some objective function
Decision problem: answer yes/no to a question

Many problems have decision and optimization versions


Example: traveling salesman problem
I optimization: find Hamiltonian cycle of minimum length
I decision: find Hamiltonian cycle of length ≤ m
Decision problems are more convenient for formal investigation of their complexity
P, NP, AND NP-COMPLETE PROBLEMS
Class P
Class P (Polynomial)
The class of decision problems that are solvable in O(p(n)) time, where p(n) is a polynomial of the problem’s input size n

Examples:
searching
element uniqueness
graph connectivity
graph acyclicity
primality testing
P, NP, AND NP-COMPLETE PROBLEMS
Class NP
Class NP (Nondeterministic Polynomial)
The class of decision problems whose proposed solutions can be verified in polynomial time = solvable by a nondeterministic polynomial algorithm

A nondeterministic polynomial algorithm is an abstract two-stage procedure that:
I generates a random string purported to solve the problem
I checks whether this solution is correct in polynomial time
By definition, it solves the problem if it’s capable of generating and verifying a solution on one of its tries
Why this definition?
I led to development of the rich theory called “computational complexity”
P, NP, AND NP-COMPLETE PROBLEMS
Example: CNF satisfiability
Boolean Satisfiability (CNF)
Is a Boolean expression in its conjunctive normal form (CNF) satisfiable, i.e., are there values of its variables that make it true?

This problem is in NP. Nondeterministic algorithm:
I Guess a truth assignment
I Substitute the values into the CNF formula to see if it evaluates to true

Example: Consider the Boolean expression in CNF form:

(a + b + c)(a + b)(a + b + c)

Can values false and true (or 0 and 1) be assigned to a, b and c such that the above expression evaluates to 1?
Yes: a = 1, b = 1, c = 0
Checking phase: Θ(n)
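A sketch of the polynomial-time checking phase in Python, under an assumed clause encoding (each literal is a (variable, negated) pair; this representation is this sketch's assumption, not the slides'):

def satisfies(clauses, assignment):
    # A clause is satisfied when at least one of its literals is true;
    # the whole CNF formula is satisfied when every clause is.
    # Runs in time linear in the size of the formula.
    return all(any(assignment[var] != negated for var, negated in clause)
               for clause in clauses)

# clauses = [[("a", False), ("b", False), ("c", False)],
#            [("a", False), ("b", False)],
#            [("a", False), ("b", False), ("c", False)]]
# satisfies(clauses, {"a": True, "b": True, "c": False})  -> True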
P, NP, AND NP-COMPLETE PROBLEMS
What problems are in NP?
Hamiltonian circuit existence
Partition problem: Is it possible to partition a set of n integers into two disjoint
subsets with the same sum?
Decision versions of TSP, knapsack problem, graph coloring, and many other
combinatorial optimization problems. (Few exceptions include: MST, shortest paths)
All the problems in P can also be solved in this manner (but no guessing is
necessary), so we have:
P ⊆ NP
Big question:
P = NP ?
P, NP, AND NP-COMPLETE PROBLEMS
NP-Complete Problems
A decision problem D is NP-complete if it’s as hard as any problem in NP, i.e.,
I D is in NP
I every problem in NP is polynomial-time reducible to D
Cook’s theorem (1971): CNF-sat is NP-complete
P, NP, AND NP-COMPLETE PROBLEMS
NP-Complete Problems
Other NP-complete problems are obtained through polynomial-time reductions from a known NP-complete problem

Examples: TSP, knapsack, partition, graph-coloring and hundreds of other problems of combinatorial nature
P, NP, AND NP-COMPLETE PROBLEMS
P = NP? Dilemma Revisited
P = NP would imply that every problem in NP, including all NP-complete problems, could be solved in polynomial time
If a polynomial-time algorithm for just one NP-complete problem is discovered, then every problem in NP can be solved in polynomial time, i.e., P = NP
Most but not all researchers believe that P ≠ NP
DESIGN AND ANALYSIS OF ALGORITHMS
Backtracking

Reetinder Sidhu
Department of Computer Science and Engineering
BACKTRACKING
UNIT 5: Limitations of Algorithmic Power and Coping with the Limitations

Dynamic Programming
I Computing a Binomial Coefficient
I The Knapsack Problem
I Memory Functions
I Warshall’s and Floyd’s Algorithms
I Optimal Binary Search Trees
Limitations of Algorithmic Power
I Lower-Bound Arguments
I Decision Trees
I P, NP, and NP-Complete, NP-Hard Problems
Coping with the Limitations
I Backtracking
I Branch-and-Bound

Concepts covered
Backtracking
I Introduction
I N Queens
I Hamiltonian Circuit
I Subset Sum
I Algorithm
BACKTRACKING
Tackling Difficult Combinatorial Problems

There are two principal approaches to tackling difficult combinatorial problems


(NP-hard problems):
I Use a strategy that guarantees solving the problem exactly but doesn’t guarantee to find a solution
in polynomial time
I Use an approximation algorithm that can find an approximate (sub-optimal) solution in polynomial
time
BACKTRACKING
Exact Solution Strategies

Exhaustive search (brute force)


I useful only for small instances
Dynamic programming
I applicable to some problems (e.g., the knapsack problem)
Backtracking
I eliminates some unnecessary cases from consideration
I yields solutions in reasonable time for many instances but worst case is still exponential
Branch-and-bound
I further refines the backtracking idea for optimization problems
BACKTRACKING
Introduction

Construct the state-space tree


I nodes: partial solutions
I edges: choices in extending partial solutions
Explore the state space tree using depth-first search
“Prune” nonpromising nodes
I DFS stops exploring subtrees rooted at nodes that cannot lead to a solution and backtracks to such
a node’s parent to continue the search
BACKTRACKING
Example: N-Queens Problem
Place N queens on an N × N chess board so that no two of them are in the same
row, column, or diagonal
BACKTRACKING
State-Space Tree of the 4-Queens Problem
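As a complement to the state-space tree, a minimal backtracking sketch for n-queens in Python (the board encoding is this sketch's assumption):

def queens(n, board=()):
    # board[i] is the column of the queen in row i; prune any
    # column or diagonal conflict before going deeper.
    if len(board) == n:
        yield board                       # a complete solution
        return
    r = len(board)
    for c in range(n):
        if all(c != bc and abs(c - bc) != r - br
               for br, bc in enumerate(board)):
            yield from queens(n, board + (c,))

# next(queens(4)) == (1, 3, 0, 2), one of the two 4-queens solutions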
BACKTRACKING
Example: Hamiltonian Circuit Problem
A Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph exactly once.

Example graph:

State-space tree for finding a Hamiltonian circuit (numbers above the nodes indicate the order in which the nodes are generated):
BACKTRACKING
Example: Subset Sum Problem
Subset Sum Problem
Given set A = {a1 , . . . , an } of n positive integers, find a subset whose sum is equal to a given
positive integer d

State space tree for A = {3, 5, 6, 7} and d = 15 (number in each node is the sum so
far):
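A hedged backtracking sketch for subset sum in Python, pruning nodes whose running sum already exceeds d (it assumes, as in the example, that the elements are positive and sorted):

def subset_sum(a, d, chosen=(), start=0, total=0):
    # Extend the current subset with elements from a[start:].
    if total == d:
        yield chosen                      # a solution node
        return
    for i in range(start, len(a)):
        if total + a[i] <= d:             # prune nonpromising branches
            yield from subset_sum(a, d, chosen + (a[i],), i + 1, total + a[i])

# list(subset_sum([3, 5, 6, 7], 15)) == [(3, 5, 7)]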
BACKTRACKING
Algorithm
Backtrack Algorithm
1: procedure BACKTRACK(X[1 . . . i])
2: . Input: X[1 . . . i] specifies the first i promising components of a solution
3: . Output: All the tuples representing the problem’s solutions
4: if X[1 . . . i] is a solution then
5:   write X[1 . . . i]
6: else
7:   for each element x ∈ Si+1 consistent with X[1 . . . i] and the constraints do
8:     X[i + 1] ← x
9:     Backtrack(X[1 . . . i + 1])

Output: n-tuples (x1, x2, . . . , xn)
Each xi ∈ Si, some finite linearly ordered set
BACKTRACKING
Think About It

Continue the backtracking search for a solution to the four-queens problem, to find
the second solution to the problem
Explain how the board’s symmetry can be used to find the second solution to the
four-queens problem
DESIGN AND ANALYSIS OF ALGORITHMS
Branch and Bound

Reetinder Sidhu
Department of Computer Science and Engineering
BRANCH AND BOUND
UNIT 5: Limitations of Algorithmic Power and Coping with the Limitations

Dynamic Programming
I Computing a Binomial Coefficient
I The Knapsack Problem
I Memory Functions
I Warshall’s and Floyd’s Algorithms
I Optimal Binary Search Trees
Limitations of Algorithmic Power
I Lower-Bound Arguments
I Decision Trees
I P, NP, and NP-Complete, NP-Hard Problems
Coping with the Limitations
I Backtracking
I Branch and Bound

Concepts covered
Branch and Bound
I General Approach
I Knapsack Problem
I Assignment Problem
I Travelling Salesman Problem
BRANCH AND BOUND
Introduction
An enhancement of backtracking
Applicable to optimization problems
For each node (partial solution) of a state-space tree, computes a bound on the
value of the objective function for all descendants of the node (extensions of the
partial solution)
Uses the bound for:
I ruling out certain nodes as “nonpromising” to prune the tree (if a node’s bound is not better than
the best solution seen so far)
I guiding the search through state-space
BRANCH AND BOUND
Example: Assignment Problem
Select one element in each row of the cost matrix C so that:
I no two selected elements are in the same column
I the sum is minimized
Example

           Job 1   Job 2   Job 3   Job 4
Person a     9       2       7       8
Person b     6       4       3       7
Person c     5       8       1       8
Person d     7       6       9       4

Lower bound (sum of smallest elements in each row): 2 + 3 + 1 + 4 = 10
Best-first branch-and-bound variation: generate all the children of the most promising node
BRANCH AND BOUND
Example: First two levels of the state-space tree
BRANCH AND BOUND
Example: First three levels of the state-space tree
BRANCH AND BOUND
Example: Complete state-space tree
BRANCH AND BOUND
Example: Knapsack Problem
BRANCH AND BOUND
Example: Traveling Salesman Problem
BRANCH AND BOUND
Think About It

What data structure would you use to keep track of live nodes in a best-first
branch-and-bound algorithm?
Solve the assignment problem by the best-first branch-and-bound algorithm with
the bounding function based on matrix columns rather than rows
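A common answer to the first question is a priority queue. A hedged Python sketch using heapq, with the row-minima bound from the assignment-problem example (the function and variable names are this sketch's assumptions):

import heapq, itertools

def assignment_bb(C):
    # Best-first branch and bound: live nodes sit in a priority queue
    # keyed by their lower bound.
    n = len(C)
    tie = itertools.count()                   # tiebreaker for the heap

    def bound(row, used, cost):
        # Cost of choices so far + smallest still-available element
        # of every not-yet-assigned row.
        return cost + sum(min(C[r][c] for c in range(n) if c not in used)
                          for r in range(row, n))

    heap = [(bound(0, set(), 0), next(tie), 0, 0, ())]  # (lb, tie, cost, row, picks)
    while heap:
        lb, _, cost, row, picks = heapq.heappop(heap)
        if row == n:                          # first complete node popped is optimal
            return cost, picks
        used = set(picks)
        for c in range(n):
            if c not in used:
                nc = cost + C[row][c]
                heapq.heappush(heap, (bound(row + 1, used | {c}, nc),
                                      next(tie), nc, row + 1, picks + (c,)))
    return None

# For the cost matrix above, assignment_bb returns (13, (1, 0, 2, 3)):
# a -> Job 2, b -> Job 1, c -> Job 3, d -> Job 4, total cost 13.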
