DAA - Module 2 - Part1
FUNDAMENTAL ALGORITHMIC
STRATEGIES
Contents:
Algorithm strategies
- Divide & Conquer
- Brute-Force
- Greedy
- Dynamic programming
- Branch-and-Bound
- Backtracking
Problem Solving
- Bin Picking
- Knapsack
- Travelling Salesperson Problem
Heuristics – characteristics & their application domains.
ALGORITHMIC STRATEGIES
Objectives:
At the end of this chapter you should be able to
Control Abstraction :
If the size of P is n and the sizes of the k subproblems are n1, n2, …, nk, respectively, then
the computing time of DAndC is described by the recurrence relation
T(n) = g(n), for n small
T(n) = T(n1) + T(n2) + … + T(nk) + f(n), otherwise
where f(n) is the time for dividing P and combining the subsolutions. When the subproblems
are all of the same size this becomes
T(n) = a · T(n/b) + f(n)
where a and b are constants. Assume that T(1) is known and n is a power of b
(i.e. n = b^k).
D&C : Binary Search
Binary Search (Iterative): Algorithm
The recurrence for the comparison count of binary search is
T(n) = a, if n = 1
T(n) = T(n/2) + c, if n > 1
where a and c are constants. Assume n = 2^k. Then
T(n) = T(n/2) + c = T(n/4) + 2c = … = T(1) + k·c, so
T(n) = O(log n)

Successful searches: O(1) best, O(log n) average, O(log n) worst.
Unsuccessful searches: O(log n) in the best, average and worst cases.
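As an illustration of the iterative binary search algorithm named above, a minimal Python sketch (not taken from the text):

```python
def binary_search(a, x):
    """Iterative binary search on a sorted list a.

    Returns the index of x, or -1 if x is not present.
    Each iteration halves the search interval, giving O(log n) time.
    """
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] == x:
            return mid          # successful search
        elif a[mid] < x:
            low = mid + 1       # x can only be in the right half
        else:
            high = mid - 1      # x can only be in the left half
    return -1                   # unsuccessful search
```

A successful search can finish in one probe (best case O(1)); an unsuccessful search always runs the interval down to empty, hence O(log n) in every case.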
Divide and Conquer
1) Straightforward (StraightMaxMin): 2(n − 1) element comparisons in all cases.
2) After modification: (n − 1) comparisons in case elements are in increasing order (best case)
& 2(n − 1) comparisons in case elements are in decreasing order (worst case).
D&C : Maximum & Minimum
Finding Maximum & Minimum (D&C):
Recurrence for the number of element comparisons T(n):
T(n) = 2·T(n/2) + 2, for n > 2
T(2) = 1, T(1) = 0
Solving for n a power of 2 gives T(n) = 3n/2 − 2.
Comparisons:
- StraightMaxMin: 3(n − 1) including for-loop index comparisons. (if n = 1000, 2997)
- MaxMin (only element comparisons considered): 3n/2 − 2 (1498)
- MaxMin (after modification, counting index comparisons as well): 5n/2 − 3 (2497)
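The recursive MaxMin procedure can be sketched in Python as follows (illustrative only; the textbook version returns max and min through reference parameters, here we return a tuple instead):

```python
def max_min(a, i, j):
    """Divide-and-conquer MaxMin on a[i..j]; returns (maximum, minimum).

    Base cases: one element needs no comparison, two elements need one.
    Otherwise split in half, solve each half, and combine with two
    comparisons, giving 3n/2 - 2 element comparisons when n = 2^k.
    """
    if i == j:                       # one element
        return a[i], a[i]
    if i == j - 1:                   # two elements: a single comparison
        return (a[i], a[j]) if a[i] > a[j] else (a[j], a[i])
    mid = (i + j) // 2
    max1, min1 = max_min(a, i, mid)          # left half
    max2, min2 = max_min(a, mid + 1, j)      # right half
    # combine: one comparison for the max, one for the min
    return (max1 if max1 > max2 else max2,
            min1 if min1 < min2 else min2)
```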
D&C : Merge Sort
Merge sort first splits the input into two halves, sorts each half recursively, and then merges the two sorted halves. That is why this sorting technique is also called Split and Merge.
i) Split ii) Merge
D&C : Merge Sort
Algorithm :
Space complexity
- The Merge() algorithm uses an additional (auxiliary) array of the input size.
- Merge sort is therefore not an in-place algorithm, so its space complexity is O(n).
Important points
- Merge sort uses the divide-and-conquer algorithm design strategy.
- Merge sort is a recursive sorting algorithm.
- It is best suited when the input size is very large.
- Merge sort attains the optimal comparison-based time complexity of
O(n log2 n) if we are not concerned with the auxiliary space used.
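A minimal Python sketch of merge sort as described above (using an auxiliary list during the merge, hence O(n) extra space):

```python
def merge_sort(a):
    """Recursive merge sort: split, sort each half, merge. O(n log n)."""
    if len(a) <= 1:
        return a                       # already sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])         # sort the left half
    right = merge_sort(a[mid:])        # sort the right half
    # merge the two sorted halves into an auxiliary list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:        # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])            # leftover elements, if any
    merged.extend(right[j:])
    return merged
```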
D&C : Matrix multiplication
Matrix Multiplication :
Let A and B be two n x n matrices. The product matrix C = AB is also an n x n
matrix, where
C(i, j) = Σ (1 ≤ k ≤ n) A(i, k) · B(k, j)   ……….(I)
Assume n is a power of 2 and partition each matrix into four (n/2) x (n/2) submatrices:
| A11 A12 |   | B11 B12 |   | C11 C12 |
| A21 A22 | x | B21 B22 | = | C21 C22 |   .………(II)
Then
C11 = A11·B11 + A12·B21    C12 = A11·B12 + A12·B22
C21 = A21·B11 + A22·B21    C22 = A21·B12 + A22·B22   …..(III)
This gives the recurrence T(n) = 8·T(n/2) + c·n², i.e. T(n) = O(n³)   …..(IV)
Strassen's method computes C with only 7 submatrix multiplications, so
T(n) = 7·T(n/2) + a·n² = O(n^log2 7) ≈ O(n^2.81)   .… (V)
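The divide-and-conquer matrix multiplication above can be improved with Strassen's seven-product scheme. A Python sketch using plain list-of-lists matrices (illustrative, not optimized; assumes n is a power of 2):

```python
def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Strassen multiplication of n x n matrices, n a power of 2."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # partition both operands into four (n/2) x (n/2) quadrants
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # the seven recursive products
    P = strassen(mat_add(A11, A22), mat_add(B11, B22))
    Q = strassen(mat_add(A21, A22), B11)
    R = strassen(A11, mat_sub(B12, B22))
    S = strassen(A22, mat_sub(B21, B11))
    T = strassen(mat_add(A11, A12), B22)
    U = strassen(mat_sub(A21, A11), mat_add(B11, B12))
    V = strassen(mat_sub(A12, A22), mat_add(B21, B22))
    # combine into the four quadrants of C
    C11 = mat_add(mat_sub(mat_add(P, S), T), V)
    C12 = mat_add(R, T)
    C21 = mat_add(Q, S)
    C22 = mat_add(mat_sub(mat_add(P, R), Q), U)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom
```

Only 7 recursive multiplications (instead of 8) are made per level, at the cost of extra additions, which is where the O(n^2.81) bound comes from.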
Knapsack problem:
We are given n objects and a knapsack or bag.
Object i has a weight wi and a profit pi, and the knapsack has a capacity m. If a
fraction xi, 0 ≤ xi ≤ 1, of object i is placed into the knapsack, a profit pi·xi is
earned. The objective is to fill the knapsack so that the total profit Σ pi·xi is
maximized, subject to the constraint Σ wi·xi ≤ m.
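The greedy strategy for the fractional knapsack problem is to consider objects in non-increasing order of profit density pi/wi, taking each whole object while it fits and a fraction of the first object that does not. A Python sketch (illustrative):

```python
def greedy_knapsack(profits, weights, m):
    """Fractional knapsack by the greedy profit-density rule.

    Returns (x, total_profit) where x[i] is the fraction of object i taken.
    This rule is optimal for the fractional problem (not for 0/1).
    """
    n = len(profits)
    # object indices sorted by non-increasing profit per unit weight
    order = sorted(range(n), key=lambda i: profits[i] / weights[i],
                   reverse=True)
    x = [0.0] * n
    remaining = m
    for i in order:
        if weights[i] <= remaining:      # whole object fits
            x[i] = 1.0
            remaining -= weights[i]
        else:                            # take a fraction and stop
            x[i] = remaining / weights[i]
            break
    return x, sum(p * xi for p, xi in zip(profits, x))
```

For example, with n = 3, m = 20, p = (25, 24, 15) and w = (18, 15, 10), the densities are (1.39, 1.6, 1.5), so the solution is x = (0, 1, 1/2) with profit 31.5.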
Greedy : Knapsack Problem
For any job i profit pi is earned iff the job is completed by its deadline. To complete a
job, one has to process the job for one unit of time. Only one machine is available for
processing the jobs.
Feasible solution – a subset J of jobs such that each job in this subset can be
completed before its deadline.
Example : Let n = 4, (p1, p2, p3, p4) = (100, 10, 15, 27) and (d1, d2, d3, d4) = (2, 1, 2, 1).
Find all feasible solutions and their values .
Sr. No.   Feasible solution   Processing sequence   Value (Profit)
1         (1, 2)              2, 1                  110
2         (1, 3)              1, 3 or 3, 1          115
3         (1, 4)              4, 1                  127
4         (2, 3)              2, 3                  25
5         (3, 4)              4, 3                  42
6         (1)                 1                     100
7         (2)                 2                     10
8         (3)                 3                     15
9         (4)                 4                     27
Greedy: Job Sequencing with Deadlines
Job Sequencing With Deadlines :
1. Algorithm GreedyJob(d, J, n)
// J is a set of jobs that can be completed by their deadlines.
2. {
3. J:= {1};
4. for i := 2 to n do
5. {
6. if (all jobs in J ∪ {i} can be completed by their deadlines ) then
7. J := J ∪ {i};
8. }
9. }
Algo. 5: Greedy algorithm for sequencing unit time jobs with dead-lines and profits
Greedy: Job Sequencing with Deadlines
Faster algorithm for JS :
Since there are only n jobs and each job takes one unit of time, it is necessary
only to consider the time slots [1, b], where b = min(n, max over i of di).
Greedy: Job Sequencing with Deadlines
Faster algorithm for JS :
Lines 18 & 19 and the for loop of line 23 require O(n) time.
Also each iteration of for loop of line 16 takes O(n) time.
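The slot-based idea can be sketched in Python as follows (illustrative; this simple version searches slots linearly instead of using the textbook's set-union trick, so it runs in O(n²)):

```python
def job_sequencing(profits, deadlines):
    """Greedy job sequencing with unit-time jobs and deadlines.

    Consider jobs in non-increasing profit order; schedule each job in
    the latest still-free slot t with 1 <= t <= its deadline, skipping
    the job if no such slot remains.
    Returns (scheduled_job_indices, total_profit).
    """
    n = len(profits)
    jobs = sorted(range(n), key=lambda i: profits[i], reverse=True)
    max_d = max(deadlines)
    slot = [None] * (max_d + 1)        # slot[t] = job run in time slot t
    for i in jobs:
        t = deadlines[i]
        while t >= 1 and slot[t] is not None:
            t -= 1                     # try an earlier free slot
        if t >= 1:
            slot[t] = i                # place job i in slot t
    scheduled = [j for j in slot[1:] if j is not None]
    return scheduled, sum(profits[j] for j in scheduled)
```

On the example instance p = (100, 10, 15, 27), d = (2, 1, 2, 1), this selects jobs 1 and 4 (profit 127), matching row 3 of the feasible-solutions table.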
In Kruskal's algorithm, the edges of the graph are considered in non-decreasing order of cost.
The interpretation is that the set t of edges selected so far for the spanning
tree must be such that it is possible to complete t into a tree; t may be a forest
at intermediate stages, but must never contain a cycle.
Greedy: Minimum Spanning Trees
Kruskal’s Algorithm:
Greedy: Minimum Spanning Trees
Kruskal’s Algorithm: Time Complexity
But because a min-heap is used to select the next edge with minimum cost, the overall time complexity is O(|E| log |E|).
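A Python sketch of Kruskal's algorithm (illustrative; it sorts the edge list instead of maintaining a min-heap, which gives the same O(|E| log |E|) bound, and uses a simple union-find to detect cycles):

```python
def kruskal(n, edges):
    """Kruskal's MST for vertices 0..n-1.

    edges is a list of (cost, u, v) tuples.
    Returns (mst_edges, total_cost).
    """
    parent = list(range(n))            # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for cost, u, v in sorted(edges):   # non-decreasing order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge joins two components: no cycle
            parent[ru] = rv
            mst.append((u, v))
            total += cost
        if len(mst) == n - 1:          # spanning tree complete
            break
    return mst, total
```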
This shortest path from s to t can be easily determined if we record the decision
made at each state (vertex). Let d(i, j) be the value of l (the next vertex) that minimizes
c(j, l) + cost(i + 1, l).
DP : Multistage Graphs
Problem 1: Find the minimum cost path from s to t for the five-stage graph shown
below using Forward approach.
Solution:
Since k = no. of stages = 5, start finding cost(i, j) from stage k − 2 = 5 − 2 = 3.
cost(2, 2) = min {c(2, 6) + cost(3, 6), c(2, 7) + cost(3, 7), c(2, 8) + cost(3, 8) }
= min{4 + 7, 2+5, 1 + 7} = 7 d(2, 2) = 7
cost(2, 3) = min {c(3, 6) + cost(3, 6), c(3, 7) + cost(3, 7) }
= min{ 2 + 7, 7 + 5} = 9 d(2, 3) = 6
cost(2, 4) = min {c(4, 8) + cost(3, 8) } = min{11 + 7} = 18 d(2, 4) = 8
cost(2, 5) = min {c(5, 7) + cost(3, 7), c(5, 8) + cost(3, 8) }
= min{ 11 + 5, 8 + 7 } = 15 d(2, 5) = 8
d(1, 1) = 2          OR   d(1, 1) = 3
d(2, 2) = 7               d(2, 3) = 6
d(3, 7) = 10              d(3, 6) = 10
d(4, 10) = 12             d(4, 10) = 12
Therefore, a minimum-cost path from s to t is
s = 1 → 2 → 7 → 10 → 12 = t   or   s = 1 → 3 → 6 → 10 → 12 = t
Cost = 16 in both cases.
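The forward-approach computation above can be sketched in Python. The edge costs below are reconstructed from the worked values in this example, so treat the graph data as an assumption rather than a verbatim copy of the figure:

```python
def multistage_forward(c, stages):
    """Forward approach for a multistage graph.

    c[j] maps vertex j to {successor: edge cost}; stages lists the
    vertices of each stage (source = stages[0][0], sink = stages[-1][0]).
    cost(j) = min over successors l of { c(j, l) + cost(l) }, and d(j)
    records the minimizing successor.
    """
    t = stages[-1][0]
    cost, d = {t: 0}, {}
    # process vertices from the next-to-last stage back to the first
    for stage in reversed(stages[:-1]):
        for j in stage:
            cost[j], d[j] = min((w + cost[l], l) for l, w in c[j].items())
    # follow the recorded decisions d to recover a minimum-cost path
    path = [stages[0][0]]
    while path[-1] != t:
        path.append(d[path[-1]])
    return cost[stages[0][0]], path

# Five-stage example graph (edge costs assumed from the worked values)
c = {1: {2: 9, 3: 7, 4: 3, 5: 2},
     2: {6: 4, 7: 2, 8: 1}, 3: {6: 2, 7: 7}, 4: {8: 11}, 5: {7: 11, 8: 8},
     6: {9: 6, 10: 5}, 7: {9: 4, 10: 3}, 8: {10: 5, 11: 6},
     9: {12: 4}, 10: {12: 2}, 11: {12: 5}}
stages = [[1], [2, 3, 4, 5], [6, 7, 8], [9, 10, 11], [12]]
total_cost, path = multistage_forward(c, stages)
```

This reproduces the minimum cost 16 and the path 1 → 2 → 7 → 10 → 12 (ties are broken toward the smaller vertex number, so the alternative path via 3 and 6 is equally good).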
DP : Multistage Graphs
Multistage Graphs : Forward Approach Algorithm
DP : Multistage Graphs
Multistage Graphs : Backward Approach
Solution:
bcost(3, 7) = min {bcost(2, 2) + c(2, 7), bcost(2, 3) + c(3, 7), bcost(2, 5) + c(5, 7) }
= min{9 + 2, 7 + 7, 2 + 11} = 11 d(3,7) = 2
DP : Multistage Graphs
bcost(3, 8) = min {bcost(2, 2) + c(2, 8), bcost(2, 4) + c(4, 8), bcost(2, 5) + c(5, 8) }
= min{ 9 + 1, 3 + 11, 2 + 8 } = 10 d(3,8) = 2 or 5
bcost(5, 12) = min {bcost(4, 9) + c(9, 12), bcost(4, 10) + c(10, 12),
bcost(4, 11) + c(11, 12)}
= min{15 + 4, 14 + 2, 16 + 5} = 16 d(5, 12) = 10
DP : Multistage Graphs
v      1  2  3  4  5  6  7   8       9       10      11  12
bcost  0  9  7  3  2  9  11  10      15      14      16  16
d      1  1  1  1  1  3  2   2 or 5  6 or 7  6 or 7  8   10

Therefore, a minimum-cost path from s to t is
s = 1 → 3 → 6 → 10 → 12 = t   or   s = 1 → 2 → 7 → 10 → 12 = t
Cost = 16 in both cases.
DP : Multistage Graphs
Multistage Graphs : Backward Approach Algorithm
DP : Multistage Graphs
Complexity analysis of Multistage graph:
Hence if G has |E| edges, then the time for the for loop of line 7 is Θ(|V| + |E|).
In addition to the space needed for the input, space is needed for cost[], d[] and
p[] .
DP : 0/1 Knapsack
0/1 Knapsack: Problem : Terminology and notations used for 0/1 Knapsack is
same as that of greedy Knapsack.
Let fj(y) be the value of an optimal solution to KNAP(1, j, y). Since the principle of
optimality holds, we obtain
fi(y) = max { fi-1(y), fi-1(y − wi) + pi }
We know f0(y) = 0 for all y ≥ 0, and fi(y) = −∞ for y < 0.
I) For i=1:
w1 = 2, p1 = 1, m = 6 (y = 0…6)
f1(1) = max { f0(1), f0(1 − 2) + p1 } = max{0, −∞ + 1} = 0
f1(2) = max { f0(2), f0(2 − 2) + p1 } = max{0, 0 + 1} = 1
f1(3) = max { f0(3), f0(3 − 2) + p1 } = max{0, 0 + 1} = 1
f1(4) = max { f0(4), f0(4 − 2) + p1 } = max{0, 0 + 1} = 1
f1(5) = max { f0(5), f0(5 − 2) + p1 } = max{0, 0 + 1} = 1
f1(6) = max { f0(6), f0(6 − 2) + p1 } = max{0, 0 + 1} = 1
DP : 0/1 Knapsack
II) For i = 2:
w2 = 3, p2 = 2, m = 6
f2(1) = max { f1(1), f1(1 − 3) + p2 } = max{0, −∞ + 2} = 0
f2(2) = max { f1(2), f1(2 − 3) + p2 } = max{1, −∞ + 2} = 1
f2(3) = max { f1(3), f1(3 − 3) + p2 } = max{1, 0 + 2} = 2
f2(4) = max { f1(4), f1(4 − 3) + p2 } = max{1, 0 + 2} = 2
f2(5) = max { f1(5), f1(5 − 3) + p2 } = max{1, 1 + 2} = 3
f2(6) = max { f1(6), f1(6 − 3) + p2 } = max{1, 1 + 2} = 3
III) For i = 3 (w3 = 4, p3 = 5), the same recurrence gives
f3(0…6) = 0, 0, 1, 2, 5, 5, 6
Traceback to find the xi values:
I) For i = 3:
f3(6) = 6 ≠ f2(6) = 3, so x3 = 1. Remaining profit = 6 − p3 = 6 − 5 = 1.
Remaining capacity m − w3 = 6 − 4 = 2. Since this is non-negative, x3 = 1 is feasible.
II) For i = 2:
f2(2) = 1 = f1(2), so x2 = 0. (Indeed, remaining capacity − w2 = 2 − 3 = −1 is
negative, so object 2 cannot fit.)
III) For i = 1:
f1(2) = 1 ≠ f0(2) = 0, so x1 = 1.
Optimal solution: (x1, x2, x3) = (1, 0, 1) with total profit 6.
                       y →  0  1  2  3  4  5  6    xi
i = 0                       0  0  0  0  0  0  0
i = 1 (p1 = 1, w1 = 2)      0  0  1  1  1  1  1    x1 = 1
i = 2 (p2 = 2, w2 = 3)      0  0  1  2  2  3  3    x2 = 0
i = 3 (p3 = 5, w3 = 4)      0  0  1  2  5  5  6    x3 = 1
The pair (6, 6) ∈ S³ but (6, 6) ∉ S², so we must set x3 = 1. The pair (6, 6) came from the
pair (6 − p3, 6 − w3) = (1, 2). Hence look for (1, 2) in S¹.
Since (1, 2) ∈ S¹ (it is not newly introduced by object 2), we must set x2 = 0.
Finally, (1, 2) ∉ S⁰, so x1 = 1.
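The fi(y) tabulation and its traceback can be sketched in Python (illustrative):

```python
def knapsack_01(profits, weights, m):
    """Tabular 0/1 knapsack using f_i(y) = max(f_{i-1}(y),
    f_{i-1}(y - w_i) + p_i); rows are items, columns are capacities.

    Returns (best_profit, x) where x[i] in {0, 1} marks chosen objects.
    """
    n = len(profits)
    f = [[0] * (m + 1) for _ in range(n + 1)]   # f[0][y] = 0 for all y
    for i in range(1, n + 1):
        w, p = weights[i - 1], profits[i - 1]
        for y in range(m + 1):
            f[i][y] = f[i - 1][y]               # object i left out
            if y >= w and f[i - 1][y - w] + p > f[i][y]:
                f[i][y] = f[i - 1][y - w] + p   # object i taken
    # traceback: x_i = 1 exactly when f_i(y) != f_{i-1}(y)
    x, y = [0] * n, m
    for i in range(n, 0, -1):
        if f[i][y] != f[i - 1][y]:
            x[i - 1] = 1
            y -= weights[i - 1]
    return f[n][m], x
```

On the instance above (p = (1, 2, 5), w = (2, 3, 4), m = 6) this yields profit 6 with (x1, x2, x3) = (1, 0, 1), matching the traceback.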
Let |V| = n and assume n > 1. Then, a tour of G is a directed simple cycle that
includes every vertex in V. The cost of a tour is the sum of the costs of the edges on
the tour. The travelling salesperson problem is to find a tour of minimum cost.
In the following discussion we shall regard a tour to be a simple path that starts and
ends at vertex 1. Every tour consists of an edge ⟨1, k⟩ for some k ∈ V − {1} and a path
from vertex k to vertex 1. The path from vertex k to vertex 1 goes through each
vertex in V − {1, k} exactly once.
Let g(i, S) be the length of a shortest path starting at vertex i, going through
all vertices in S, and terminating at vertex 1. The function g(1, V − {1}) is the length
of an optimal salesperson tour. From the principle of optimality,
g(i, S) = min over j ∈ S of { c(i, j) + g(j, S − {j}) }, with g(i, ϕ) = c(i, 1).
DP : Travelling Salesperson Problem
Travelling Salesperson Problem :
Solution:
Initially, g(2, ϕ) = c21 = 5, g(3, ϕ) = c31 = 6, g(4, ϕ) = c41 = 8.
I) For |S| = 1:
g(2, {3}) = c23 + g(3, ϕ) = 9 + 6 = 15, J(2,3) = 3
g(2, {4}) = c24 + g(4, ϕ) = 10 + 8 = 18, J(2,4) = 4
g(3, {2}) = c32 + g(2, ϕ) = 13 + 5 = 18, J(3,2) = 2
g(3, {4}) = c34 + g(4, ϕ) = 12 + 8 = 20, J(3,4) = 4
g(4, {2}) = c42 + g(2, ϕ) = 8 + 5 = 13, J(4,2) = 2
g(4, {3}) = c43 + g(3, ϕ) = 9 + 6 = 15, J(4,3) = 3
DP : Travelling Salesperson Problem
II) For |S| = 2:
g(2, {3, 4}) = min { c23 + g(3, {4}), c24 + g(4, {3})} = min {9+20, 10+15}= 25 J(2,{3,4})= 4
g(3, {2, 4}) = min { c32 + g(2, {4}), c34 + g(4, {2})} = min {13+18, 12+13}= 25 J(3,{2,4})= 4
g(4, {2, 3}) = min { c42 + g(2, {3}), c43 + g(3, {2})} = min {8+15, 9+18}= 23 J(4,{2, 3})= 2
III) For |S| = 3:
g(1, {2, 3, 4}) = min { c12 + g(2, {3, 4}), c13 + g(3, {2, 4}), c14 + g(4, {2, 3}) }
= min {10 + 25, 15 + 25, 20 + 23} = 35, J(1, {2, 3, 4}) = 2
Following the J values (J(1, {2,3,4}) = 2, J(2, {3,4}) = 4, J(4, 3) = 3), the optimal
tour is 1 → 2 → 4 → 3 → 1 with cost 35.
The time complexity is O(n²·2ⁿ), which is better than computing all (n − 1)! different
tours to find the optimal one.
But the most serious drawback of this dynamic programming solution is the space
needed, O(n·2ⁿ). This is too large even for modest values of n.
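The g(i, S) computation can be sketched in Python as a Held-Karp style implementation (illustrative; subsets are represented as frozensets rather than bitmasks for readability):

```python
from itertools import combinations

def tsp_dp(c):
    """Dynamic-programming TSP on cost matrix c (vertices 0..n-1).

    g(i, S) = min over j in S of { c[i][j] + g(j, S - {j}) },
    with g(i, {}) = c[i][0]; the tour starts and ends at vertex 0.
    Returns the length of an optimal tour.
    """
    n = len(c)
    others = range(1, n)
    # base case: g(i, empty set) = cost of going straight home to 0
    g = {(i, frozenset()): c[i][0] for i in others}
    # fill g for subsets of increasing size
    for size in range(1, n - 1):
        for S in combinations(others, size):
            fs = frozenset(S)
            for i in others:
                if i in fs:
                    continue
                g[(i, fs)] = min(c[i][j] + g[(j, fs - {j})] for j in S)
    full = frozenset(others)
    # close the tour: leave 0, visit everything, return to 0
    return min(c[0][j] + g[(j, full - {j})] for j in others)
```

On the 4-vertex example above this returns 35, at O(n²·2ⁿ) time and O(n·2ⁿ) space, exactly the trade-off noted in the text.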
That’s it.