DAA Unit4 2025
DESIGN AND ANALYSIS OF ALGORITHMS
Dynamic Programming
Reetinder Sidhu
Department of Computer Science and Engineering
DYNAMIC PROGRAMMING
UNIT 5: Limitations of Algorithmic Power and Coping with the Limitations
Dynamic Programming
- Computing a Binomial Coefficient
- The Knapsack Problem
- Memory Functions
- Warshall's and Floyd's Algorithms
Limitations of Algorithmic Power
- Lower-Bound Arguments
- Decision Trees
- P, NP, and NP-Complete, NP-Hard Problems
Coping with the Limitations
- Backtracking
- Branch-and-Bound
Concepts covered: Dynamic Programming
- Introduction
- Fibonacci numbers
- Binomial Coefficients
DYNAMIC PROGRAMMING
Introduction
Dynamic Programming is a general algorithm design technique for solving problems defined by recurrences with overlapping subproblems.
Invented by American mathematician Richard Bellman in the 1950s to solve optimization problems; later adopted by computer science.
"Programming" here means "planning".
Main idea:
- set up a recurrence relating a solution to a larger instance to solutions of some smaller instances
- solve smaller instances once
- record solutions in a table
- extract the solution to the initial instance from that table
DYNAMIC PROGRAMMING
Example: Fibonacci Numbers
Recall the definition of Fibonacci numbers:
f(n) = f(n − 1) + f(n − 2)
f(0) = 0
f(1) = 1
The top-down recursion tree exposes overlapping subproblems: f(n) calls f(n − 1) and f(n − 2), f(n − 1) calls f(n − 2) and f(n − 3), f(n − 2) calls f(n − 3) and f(n − 4), and so on, so the same values are recomputed many times.
DYNAMIC PROGRAMMING
Example: Fibonacci Numbers
Computing the nth Fibonacci number using bottom-up iteration and recording results:
f(0) = 0
f(1) = 1
f(2) = 0 + 1 = 1
f(3) = 1 + 1 = 2
f(4) = 1 + 2 = 3
...
Efficiency: time Θ(n); space Θ(n), or Θ(1) if only the last two values are kept.
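The bottom-up scheme can be sketched in Python (an illustrative sketch; the function names are mine, not from the slides):

```python
def fib(n):
    """Bottom-up Fibonacci: record f(0..n) in a table, each value computed once."""
    if n < 2:
        return n
    f = [0] * (n + 1)
    f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]   # recurrence f(i) = f(i-1) + f(i-2)
    return f[n]                      # Theta(n) time, Theta(n) space

def fib_const_space(n):
    """Same iteration keeping only the last two values: Theta(1) space."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both versions do the same Θ(n) additions; only the space bookkeeping differs.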
DYNAMIC PROGRAMMING
Example: Binomial Coefficients
Recurrence:
C(n, k) = C(n − 1, k) + C(n − 1, k − 1) for n > k > 0
C(n, 0) = C(n, n) = 1
Time: Θ(nk)
Space: Θ(nk)
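A sketch of the table-filling algorithm implied by this recurrence (the function name and table layout are mine):

```python
def binomial(n, k):
    """Fill a DP table row by row using C(n,k) = C(n-1,k) + C(n-1,k-1)."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1          # base cases C(i,0) = C(i,i) = 1
            else:
                C[i][j] = C[i - 1][j] + C[i - 1][j - 1]
    return C[n][k]                   # Theta(nk) time and space
```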
THE KNAPSACK PROBLEM
Concepts covered: The Knapsack Problem
- Introduction
- Recurrence
- Example
THE KNAPSACK PROBLEM
Problem Definition
Given
- n items with integer weights w1, w2, . . . , wn and values v1, v2, . . . , vn
- a knapsack of capacity W (integer W > 0)
find the most valuable subset of the items such that the sum of their weights does not exceed W.
THE KNAPSACK PROBLEM
Knapsack Recurrence
To design a dynamic programming algorithm, we need to derive a recurrence relation that expresses a solution to an instance of the knapsack problem in terms of solutions to its smaller subinstances.
Consider the smaller knapsack problem where the number of items is i (i ≤ n) and the knapsack capacity is j (j ≤ W). Then
F(i, j) = max(F(i − 1, j), vi + F(i − 1, j − wi))   if j − wi ≥ 0
F(i, j) = F(i − 1, j)                               if j − wi < 0
with base cases F(0, j) = 0 and F(i, 0) = 0.
THE KNAPSACK PROBLEM
Example
F(i, j) = max(F(i − 1, j), vi + F(i − 1, j − wi)) if j − wi ≥ 0; otherwise F(i, j) = F(i − 1, j).
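The recurrence translates directly into a table-filling routine; the instance in the test (weights, values, capacity) is chosen for illustration:

```python
def knapsack(weights, values, W):
    """Bottom-up knapsack: F[i][j] = best value using the first i items, capacity j."""
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]    # F[0][j] = F[i][0] = 0
    for i in range(1, n + 1):
        w, v = weights[i - 1], values[i - 1]
        for j in range(1, W + 1):
            if j - w >= 0:
                F[i][j] = max(F[i - 1][j], v + F[i - 1][j - w])
            else:
                F[i][j] = F[i - 1][j]
    return F[n][W]
```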
MEMORY FUNCTION KNAPSACK
Concepts covered: Memory Function Knapsack
- Motivation
- Algorithm
- Example
MEMORY FUNCTION KNAPSACK
Bottom Up Approach
Advantage of the bottom-up approach: each value is computed only once.
Example computed bottom up (entries F(i, j)):
         capacity j
i      1   2   3   4   5
1      0  12  12  12  12
2     10  12  22  22  22
3     10  12  22  30  32
4     10  15  25  30  37
Disadvantage of the bottom-up approach: values that are not required are also computed.
MEMORY FUNCTION KNAPSACK
Top Down Approach
The top-down (recursive) approach computes only the values that are actually needed, but without recording results it recomputes overlapping subproblems, as the Fibonacci recursion tree shows: both f(n − 1) and f(n − 2) spawn calls to f(n − 2), f(n − 3), and so on.
The memory function (MF) technique combines the strengths of both: solve top down, but record each computed value in a table and look it up before recomputing.
Consider the use of the MF technique to compute a binomial coefficient using the recurrence
C(n, k) = C(n − 1, k − 1) + C(n − 1, k)
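A minimal sketch of the MF technique applied to this recurrence (using a dictionary as the table is my implementation choice):

```python
def binomial_mf(n, k):
    """Memory-function (top-down memoized) binomial coefficient."""
    table = {}                        # records values already computed

    def C(i, j):
        if j == 0 or j == i:
            return 1                  # base cases C(i,0) = C(i,i) = 1
        if (i, j) not in table:       # compute each value at most once
            table[(i, j)] = C(i - 1, j - 1) + C(i - 1, j)
        return table[(i, j)]

    return C(n, k)
```

Only the entries actually reached by the recursion are filled in, which is the point of the memory function over the bottom-up table.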
TRANSITIVE CLOSURE (WARSHALL'S ALGORITHM)
Concepts covered: Transitive Closure (Warshall's Algorithm)
- Motivation
- Algorithm
- Example
TRANSITIVE CLOSURE (WARSHALL'S ALGORITHM)
Transitive Closure
Computes the transitive closure of a relation.
Alternatively: existence of all nontrivial paths in a digraph (directed graph).
Example of transitive closure: a digraph with vertices 1, 2, 3, 4 and edges 1→3, 2→1, 2→4, 4→2.

Adjacency matrix A:
    1 2 3 4
1   0 0 1 0
2   1 0 0 1
3   0 0 0 0
4   0 1 0 0

Transitive closure T:
    1 2 3 4
1   0 0 1 0
2   1 1 1 1
3   0 0 0 0
4   1 1 1 1
TRANSITIVE CLOSURE (WARSHALL'S ALGORITHM)
Warshall's Algorithm
Constructs the transitive closure T as the last matrix in the sequence of n × n matrices R(0), . . . , R(k), . . . , R(n), where R(k)[i, j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate vertices.
- R(0) = A (adjacency matrix), R(n) = T (transitive closure)
- R(k)[i, j] = R(k−1)[i, j] or (R(k−1)[i, k] and R(k−1)[k, j])
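The recurrence can be sketched as follows (updating a single copied matrix in place is an implementation choice; it is safe here because pass k never turns a 1 back into a 0):

```python
def warshall(A):
    """Warshall's algorithm: R[i][j] |= R[i][k] and R[k][j] on pass k."""
    n = len(A)
    R = [row[:] for row in A]         # R(0) = adjacency matrix (input left intact)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R                          # R(n) = transitive closure T
```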
TRANSITIVE CLOSURE (WARSHALL'S ALGORITHM)
Warshall's Algorithm: Worked Example
Starting from R(0) = A for the digraph above (edges 1→3, 2→1, 2→4, 4→2), each pass k adds the paths that use vertex k as an intermediate:

R(0) (no intermediate vertices allowed):
    1 2 3 4
1   0 0 1 0
2   1 0 0 1
3   0 0 0 0
4   0 1 0 0

R(1) (paths through vertex 1: adds 2→3, since 2→1 and 1→3):
    1 2 3 4
1   0 0 1 0
2   1 0 1 1
3   0 0 0 0
4   0 1 0 0

R(2) (paths through vertex 2: adds 4→1, 4→3, 4→4, since 4→2):
    1 2 3 4
1   0 0 1 0
2   1 0 1 1
3   0 0 0 0
4   1 1 1 1

R(3) = R(2) (vertex 3 has no outgoing edges, so nothing is added)

R(4) = T (paths through vertex 4: adds 2→2, since 2→4 and 4→2):
    1 2 3 4
1   0 0 1 0
2   1 1 1 1
3   0 0 0 0
4   1 1 1 1
ALL PAIRS SHORTEST PATH (FLOYD'S ALGORITHM)
Concepts covered: All Pairs Shortest Path (Floyd's Algorithm)
- Definition
- Algorithm
- Example
ALL PAIRS SHORTEST PATH (FLOYD'S ALGORITHM)
Problem Definition
Given an undirected or directed graph with weighted edges, find the shortest path between every pair of vertices.
- Dijkstra's algorithm finds the shortest paths from a given vertex to the remaining n − 1 vertices (Θ(n) paths)
- The current problem is to find the shortest path between every pair of vertices (Θ(n²) paths)
The solution approach is similar to the transitive closure approach. Warshall computes the transitive closure via a sequence of n × n matrices R(0), . . . , R(k), . . . , R(n), where R(k)[i, j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate vertices. Floyd computes all pairs shortest paths via a sequence of n × n matrices D(0), . . . , D(k), . . . , D(n), where D(k)[i, j] is the length of the shortest path from i to j with only the first k vertices allowed as intermediate vertices.
ALL PAIRS SHORTEST PATH (FLOYD'S ALGORITHM)
Example
Example of all pairs shortest paths: a digraph with vertices 1, 2, 3, 4 and weighted edges 1→3 (3), 2→1 (2), 3→2 (7), 3→4 (1), 4→1 (6).

Weight matrix W:
    1 2 3 4
1   0 ∞ 3 ∞
2   2 0 ∞ ∞
3   ∞ 7 0 1
4   6 ∞ ∞ 0

Distance matrix D of the shortest path lengths:
    1  2 3 4
1   0 10 3 4
2   2  0 5 6
3   7  7 0 1
4   6 16 9 0
ALL PAIRS SHORTEST PATH (FLOYD'S ALGORITHM)
Algorithm
Floyd's Algorithm
1: procedure FLOYD(W[1 . . . n, 1 . . . n])
2:   Input: Weight matrix W of a graph with no negative-length cycles
3:   Output: Distance matrix of the lengths of the shortest paths
4:   D ← W
5:   for k ← 1 to n do
6:     for i ← 1 to n do
7:       for j ← 1 to n do
8:         D[i, j] ← min(D[i, j], D[i, k] + D[k, j])
9:   return D
Complexity: Θ(n³)
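A direct Python transcription of the pseudocode (representing ∞ as float("inf") is my choice):

```python
INF = float("inf")

def floyd(W):
    """Floyd's algorithm: D[i][j] <- min(D[i][j], D[i][k] + D[k][j]); Theta(n^3)."""
    n = len(W)
    D = [row[:] for row in W]         # D(0) = weight matrix (input left intact)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```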
ALL PAIRS SHORTEST PATH (FLOYD'S ALGORITHM)
Example: Step by Step
Starting from D(0) = W, pass k improves the paths that use vertex k as an intermediate:

D(0) = W:
    1 2 3 4
1   0 ∞ 3 ∞
2   2 0 ∞ ∞
3   ∞ 7 0 1
4   6 ∞ ∞ 0

D(1) (through vertex 1: 2→3 = 2 + 3 = 5, 4→3 = 6 + 3 = 9):
    1 2 3 4
1   0 ∞ 3 ∞
2   2 0 5 ∞
3   ∞ 7 0 1
4   6 ∞ 9 0

D(2) (through vertex 2: 3→1 = 7 + 2 = 9):
    1 2 3 4
1   0 ∞ 3 ∞
2   2 0 5 ∞
3   9 7 0 1
4   6 ∞ 9 0

D(3) (through vertex 3: 1→2 = 10, 1→4 = 4, 2→4 = 6, 4→2 = 16):
    1  2 3 4
1   0 10 3 4
2   2  0 5 6
3   9  7 0 1
4   6 16 9 0

D(4) = D (through vertex 4: 3→1 = 1 + 6 = 7):
    1  2 3 4
1   0 10 3 4
2   2  0 5 6
3   7  7 0 1
4   6 16 9 0
ALL PAIRS SHORTEST PATH (FLOYD'S ALGORITHM)
Think About It
Give an example of a graph with negative weights for which Floyd's algorithm does not yield the correct result.
Enhance Floyd's algorithm so that the shortest paths themselves, not just their lengths, can be found.
LOWER-BOUND ARGUMENTS
Concepts covered: Lower-Bound Arguments
- Trivial lower bounds
- Adversary arguments
- Problem reduction
LOWER-BOUND ARGUMENTS
Limitations of Algorithmic Power
Lower Bound: an estimate on the minimum amount of work needed to solve a given problem (the estimate can be less than the true minimum but not greater).
Examples:
- number of comparisons needed to find the largest element in a set of n numbers
- number of comparisons needed to sort an array of size n
- number of comparisons necessary for searching in a sorted array
- number of multiplications needed to multiply two n × n matrices
LOWER-BOUND ARGUMENTS
Bound Tightness
A lower bound can be:
- an exact count
- an efficiency class (Ω)
Tight Lower Bound: there exists an algorithm with the same efficiency as the lower bound.
Examples: finding the max element, polynomial evaluation, sorting, element uniqueness, Hamiltonian circuit existence.
Conclusions:
- a trivial lower bound may or may not be useful
- be careful in deciding how many input elements must be processed
LOWER-BOUND ARGUMENTS
Adversary Arguments and Problem Reduction
Adversary Argument: a method of proving a lower bound by playing the role of an adversary that makes the algorithm work the hardest by adjusting the input.
Problem Reduction: if problem P is at least as hard as problem Q, then a lower bound for Q is also a lower bound for P. Hence, find a problem Q with a known lower bound that can be reduced to the problem P in question.
Example: P is finding the MST for n points in the Cartesian plane; Q is the element uniqueness problem (known to be in Ω(n log n)).
LOWER-BOUND ARGUMENTS
Think About It
Prove that the classic recursive algorithm for the Tower of Hanoi puzzle makes the minimum number of disk moves.
Find a trivial lower-bound class and indicate if the bound is tight:
- finding the largest element in an array
- generating all the subsets of an n-element set
- determining whether n given real numbers are all distinct
DECISION TREES
Concepts covered: Decision Trees
- Smallest of three numbers
- Sorting
- Searching
DECISION TREES
Introduction
Many important algorithms, especially those for sorting and searching, work by comparing items of their inputs.
We can study the performance of such algorithms with a device called the decision tree.
DECISION TREES
Example: Minimum of Three Numbers
Decision tree for determining the minimum of three numbers (figure).
DECISION TREES
Central Idea
The central idea behind this model lies in the observation that a tree with a given number of leaves, which is dictated by the number of possible outcomes, has to be tall enough to have that many leaves.
Specifically, it is not difficult to prove that for any binary tree with l leaves and height h,
h ≥ ⌈log2 l⌉
A binary tree of height h with the largest number of leaves has all its leaves on the last level, so the largest number of leaves in such a tree is 2^h. In other words, 2^h ≥ l, which implies h ≥ ⌈log2 l⌉.
DECISION TREES
Decision Trees for Sorting Algorithms
Most sorting algorithms are comparison-based, i.e., they work by comparing elements in a list to be sorted.
By studying properties of binary decision trees for comparison-based sorting algorithms, we can derive important lower bounds on the time efficiencies of such algorithms.
We can interpret an outcome of a sorting algorithm as finding a permutation of the element indices of an input list that puts the list's elements in ascending order. For example, the outcome a < c < b obtained by sorting a list a, b, c corresponds to the permutation 1, 3, 2.
The number of possible outcomes for sorting an arbitrary n-element list is equal to n!.
DECISION TREES
Decision Trees for Sorting Algorithms
The height of a binary decision tree for any comparison-based sorting algorithm, and hence the worst-case number of comparisons made by such an algorithm, cannot be less than ⌈log2 n!⌉:
Cworst(n) ≥ ⌈log2 n!⌉
Using Stirling's formula:
⌈log2 n!⌉ ≈ log2 (√(2πn) (n/e)^n) = n log2 n − n log2 e + (log2 n)/2 + (log2 2π)/2 ≈ n log2 n
About n log2 n comparisons are necessary to sort an arbitrary n-element list by any comparison-based sorting algorithm.
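The bound ⌈log2 n!⌉ is easy to tabulate (a small sketch; the function name is mine). For n = 5 it gives ⌈log2 120⌉ = 7, which is in fact achievable:

```python
import math

def sort_lower_bound(n):
    """Minimum worst-case comparisons for comparison-based sorting: ceil(log2 n!)."""
    return math.ceil(math.log2(math.factorial(n)))
```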
DECISION TREES
Decision Trees for Sorting Algorithms
Decision tree for three-element selection sort (figure).
DECISION TREES
Decision Trees for Sorting Algorithms
We can also use decision trees for analyzing the average-case behavior of a comparison-based sorting algorithm.
We can compute the average number of comparisons for a particular algorithm as the average depth of its decision tree's leaves, i.e., as the average path length from the root to the leaves.
For example, for the three-element insertion sort this number is:
(2 + 3 + 3 + 2 + 3 + 3)/6 = 2 2/3
DECISION TREES
Decision Trees for Sorting and Searching Algorithms
Under the standard assumption that all n! outcomes of sorting are equally likely, the following lower bound on the average number of comparisons Cavg made by any comparison-based algorithm in sorting an n-element list has been proved:
Cavg(n) ≥ log2 n!
Decision trees can also be used for establishing lower bounds on the number of key comparisons in searching a sorted array of n keys: A[0] < A[1] < . . . < A[n − 1].
The number of comparisons made by binary search in the worst case:
Cworst_bs(n) = ⌊log2 n⌋ + 1 = ⌈log2 (n + 1)⌉
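The two closed forms of the binary search count can be checked numerically (a quick sketch; the function name is mine):

```python
import math

def binary_search_worst(n):
    """Worst-case comparisons of binary search on n sorted keys: floor(log2 n) + 1."""
    return math.floor(math.log2(n)) + 1

# The alternative form of the same count is ceil(log2 (n + 1)).
```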
DECISION TREES
Decision Trees for Searching Algorithms
For an array of n elements, all such decision trees will have 2n + 1 leaves (n for successful searches and n + 1 for unsuccessful ones).
Since the minimum height h of a ternary tree with l leaves is ⌈log3 l⌉, we get the following lower bound on the number of worst-case comparisons:
Cworst(n) ≥ ⌈log3 (2n + 1)⌉
This lower bound is smaller than ⌈log2 (n + 1)⌉, the number of worst-case comparisons for binary search.
Can we prove a better lower bound, or is binary search far from being optimal?
DECISION TREES
Decision Trees for Searching Algorithms
Binary decision tree for binary search in a four-element array (figure).
The binary decision tree is simply the ternary decision tree with all the middle subtrees eliminated.
Cworst(n) ≥ ⌈log2 (n + 1)⌉
P, NP, AND NP-COMPLETE PROBLEMS
Concepts covered: P, NP, and NP-Complete, NP-Hard Problems
P, NP, AND NP-COMPLETE PROBLEMS
Classifying Problem Complexity
Optimization problem: find a solution that maximizes or minimizes some objective function.
Decision problem: answer yes/no to a question.
Examples:
- searching
- element uniqueness
- graph connectivity
- graph acyclicity
- primality testing
P, NP, AND NP-COMPLETE PROBLEMS
Class NP
Class NP (Nondeterministic Polynomial): the class of decision problems whose proposed solutions can be verified in polynomial time = solvable by a nondeterministic polynomial algorithm.
Example (CNF satisfiability): can values false and true (or 0 and 1) be assigned to a, b and c so that the expression
(a + b + c)(a + b)(a + b + c)
evaluates to 1? A proposed solution such as a = 1, b = 1, c = 0 can be checked in Θ(n) time.
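The linear-time checking phase can be sketched generically; since the slide's expression may have lost negation bars in transcription, the sketch below uses an explicit literal representation of my own:

```python
def verify_cnf(clauses, assignment):
    """Check a proposed assignment against a CNF formula in linear time.

    Each clause is a list of literals; "a" means variable a, "~a" its
    negation. This representation is illustrative, not from the slides.
    """
    def literal_value(lit):
        if lit.startswith("~"):
            return not assignment[lit[1:]]
        return assignment[lit]

    # The formula holds iff every clause contains at least one true literal.
    return all(any(literal_value(lit) for lit in clause) for clause in clauses)
```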
P, NP, AND NP-COMPLETE PROBLEMS
What problems are in NP?
- Hamiltonian circuit existence
- Partition problem: is it possible to partition a set of n integers into two disjoint subsets with the same sum?
- Decision versions of TSP, the knapsack problem, graph coloring, and many other combinatorial optimization problems (few exceptions include MST and shortest paths)
All the problems in P can also be solved in this manner (but no guessing is necessary), so we have P ⊆ NP.
Big question: P = NP?
P, NP, AND NP-COMPLETE PROBLEMS
NP-Complete Problems
A decision problem D is NP-complete if it is as hard as any problem in NP, i.e.,
- D is in NP
- every problem in NP is polynomial-time reducible to D
Examples: TSP, knapsack, partition, graph coloring, and hundreds of other problems of a combinatorial nature.
P, NP, AND NP-COMPLETE PROBLEMS
P = NP ? Dilemma Revisited
P = NP would imply that every problem in NP, including all NP-complete problems, could be solved
in polynomial time
If a polynomial-time algorithm for just one NP-complete problem is discovered, then every problem
in NP can be solved in polynomial time, i.e., P = NP
BACKTRACKING
Concepts covered: Backtracking
- Introduction
- N Queens
- Hamiltonian Circuit
- Subset Sum
- Algorithm
BACKTRACKING
Tackling Difficult Combinatorial Problems
Example graph (figure).
BACKTRACKING
Example: Subset Sum Problem
Subset Sum Problem: given a set A = {a1, . . . , an} of n positive integers, find a subset whose sum is equal to a given positive integer d.
State-space tree for A = {3, 5, 6, 7} and d = 15 (the number in each node is the sum so far).
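A backtracking sketch for subset sum (the pruning tests are the standard "nonpromising node" conditions; the names are mine):

```python
def subset_sum(A, d):
    """Backtracking over the state-space tree: at depth i, include or skip A[i].

    A node is nonpromising when the running sum plus the smallest remaining
    element exceeds d, or when even adding all remaining elements cannot
    reach d. Sorting makes the first test valid.
    """
    A = sorted(A)
    solutions = []

    def backtrack(i, chosen, s, remaining):
        if s == d:
            solutions.append(list(chosen))
            return
        if i == len(A) or s + A[i] > d or s + remaining < d:
            return                      # nonpromising node: prune
        chosen.append(A[i])             # left branch: include A[i]
        backtrack(i + 1, chosen, s + A[i], remaining - A[i])
        chosen.pop()                    # right branch: exclude A[i]
        backtrack(i + 1, chosen, s, remaining - A[i])

    backtrack(0, [], 0, sum(A))
    return solutions
```

On the slide's instance A = {3, 5, 6, 7}, d = 15, the only solution found is {3, 5, 7}.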
BACKTRACKING
Algorithm
Backtrack Algorithm
1: procedure BACKTRACK(X[1 . . . i])
2:   Input: X[1 . . . i] specifies the first i promising components of a solution
3:   Output: All the tuples representing the problem's solutions
4:   if X[1 . . . i] is a solution then
5:     write X[1 . . . i]
6:   else
7:     for each element x ∈ S(i+1) consistent with X[1 . . . i] and the constraints do
8:       X[i + 1] ← x
9:       BACKTRACK(X[1 . . . i + 1])
BACKTRACKING
Think About It
Continue the backtracking search for a solution to the four-queens problem to find the second solution to the problem.
Explain how the board's symmetry can be used to find the second solution to the four-queens problem.
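For reference, a backtracking sketch for n-queens (my own naming). For n = 4 it yields two solutions that are mirror images of each other, which is the symmetry the exercise points at:

```python
def queens_solutions(n):
    """Backtracking for n-queens: place one queen per row.

    A column is promising if no earlier queen shares its column or a
    diagonal. Returns all solutions as tuples of column indices per row.
    """
    solutions = []

    def promising(cols, col):
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def backtrack(cols):
        if len(cols) == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):
            if promising(cols, col):
                backtrack(cols + [col])

    backtrack([])
    return solutions
```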
BRANCH AND BOUND
Concepts covered: Branch and Bound
- General Approach
- Knapsack Problem
- Assignment Problem
- Travelling Salesman Problem
BRANCH AND BOUND
Introduction
An enhancement of backtracking, applicable to optimization problems.
For each node (partial solution) of a state-space tree, it computes a bound on the value of the objective function for all descendants of the node (extensions of the partial solution).
Uses the bound for:
- ruling out certain nodes as "nonpromising" to prune the tree (if a node's bound is not better than the best solution seen so far)
- guiding the search through the state space
BRANCH AND BOUND
Example: Assignment Problem
Select one element in each row of the cost matrix C so that:
- no two selected elements are in the same column
- the sum is minimized
Example:
           Job 1  Job 2  Job 3  Job 4
Person a     9      2      7      8
Person b     6      4      3      7
Person c     5      8      1      8
Person d     7      6      9      4
Lower bound (sum of the smallest elements in each row): 2 + 3 + 1 + 4 = 10
Best-first branch-and-bound variation: generate all the children of the most promising node.
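The best-first variation can be sketched with a priority queue keyed on the bound (an illustrative sketch with my own naming; the bound is the row-minima bound just described):

```python
import heapq

def assignment_bb(C):
    """Best-first branch-and-bound for the assignment problem.

    A node fixes jobs for the first few persons; its bound is the cost so
    far plus the smallest unused-column entry in each remaining row.
    """
    n = len(C)

    def bound(fixed):
        b = sum(C[i][j] for i, j in enumerate(fixed))
        for i in range(len(fixed), n):
            b += min(C[i][j] for j in range(n) if j not in fixed)
        return b

    heap = [(bound([]), [])]            # live nodes ordered by bound
    best_cost, best = float("inf"), None
    while heap:
        b, fixed = heapq.heappop(heap)  # most promising node first
        if b >= best_cost:
            continue                    # nonpromising: prune
        if len(fixed) == n:
            best_cost, best = b, fixed  # bound of a complete node is its cost
            continue
        i = len(fixed)
        for j in range(n):              # generate all children of this node
            if j not in fixed:
                child = fixed + [j]
                cb = bound(child)
                if cb < best_cost:
                    heapq.heappush(heap, (cb, child))
    return best_cost, best
```

On the cost matrix above this finds the optimal assignment a→Job 2, b→Job 1, c→Job 3, d→Job 4 with cost 13.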
BRANCH AND BOUND
Example: First two levels of the state-space tree
BRANCH AND BOUND
Example: First three levels of the state-space tree
BRANCH AND BOUND
Example: Complete state-space tree
BRANCH AND BOUND
Example: Knapsack Problem
BRANCH AND BOUND
Example: Traveling Salesman Problem
BRANCH AND BOUND
Think About It
What data structure would you use to keep track of live nodes in a best-first
branch-and-bound algorithm?
Solve the assignment problem by the best-first branch-and-bound algorithm with
the bounding function based on matrix columns rather than rows