DAA – Module 2 – Part 1

MODULE II

FUNDAMENTAL ALGORITHMIC STRATEGIES

Contents:
• Algorithm strategies
  - Divide & Conquer
  - Brute-Force
  - Greedy
  - Dynamic Programming
  - Branch-and-Bound
  - Backtracking
• Problem solving
  - Bin Packing
  - Knapsack
  - Travelling Salesperson Problem
• Heuristics: characteristics and their application domains
ALGORITHMIC STRATEGIES

Objectives:
At the end of this chapter you should be able to

• Understand all of these algorithm design strategies.
• Select an appropriate design strategy to solve the problem at hand.
• Devise an algorithm to solve the problem using the selected strategy.
• Synthesize/analyze the algorithm and write its recurrence relation.
• Find the algorithm's complexity by solving the recurrence relation.
Divide and Conquer

General Method: Given a function to compute on n inputs, the divide-and-conquer strategy suggests splitting the inputs into k distinct subsets, 1 < k ≤ n, yielding k subproblems. These subproblems must be solved, and then a method must be found to combine the subsolutions into a solution of the whole.

Ex: 1) Detecting a counterfeit coin. 2) Sorting of numbers.

Suppose problem P has input size n.


Divide and Conquer

Control Abstraction :

Algo 1. Control abstraction for divide and conquer


Divide and Conquer

If the size of P is n and the sizes of the k subproblems are n1, n2, ..., nk, respectively, then the computing time of DAndC is described by the recurrence relation

    T(n) = g(n)                                    n small
    T(n) = T(n1) + T(n2) + ... + T(nk) + f(n)      otherwise

where T(n) is the time for DAndC on any input of size n, g(n) is the time to compute the answer directly for small inputs, and f(n) is the time for dividing P and combining the subsolutions.

The complexity of many divide-and-conquer algorithms is given by recurrences of the form

    T(n) = T(1)               n = 1
    T(n) = a T(n/b) + f(n)    n > 1

where a and b are known constants. We assume that T(1) is known and that n is a power of b (i.e., n = b^k).
D&C : Binary Search
Binary Search (Iterative): Algorithm

Algo. 2: Iterative binary search
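The listing itself appears only as a figure in the slides; below is a minimal C sketch of the same iterative scheme, using 0-based indexing (the textbook version works on a 1-based a[1..n]).

#include <stdio.h>

/* Iterative binary search: a[] must be sorted in ascending order.
   Returns the index of x in a[0..n-1], or -1 for an unsuccessful search. */
int bin_search(const int a[], int n, int x)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* same as (low+high)/2, but overflow-safe */
        if (x < a[mid])
            high = mid - 1;                 /* continue in the left half */
        else if (x > a[mid])
            low = mid + 1;                  /* continue in the right half */
        else
            return mid;                     /* successful search */
    }
    return -1;                              /* unsuccessful search */
}

int main(void)
{
    int a[] = { -15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151 };
    printf("%d\n", bin_search(a, 14, 151));   /* 13 */
    printf("%d\n", bin_search(a, 14, -14));   /* -1 */
    return 0;
}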


D&C : Binary Search
Binary Search (Recursive): Algorithm

Algo. 3: Recursive binary search
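As above, the listing is a figure; a C sketch of the recursive version, under the same 0-based assumption, follows. Each call halves the range, which is what yields the T(n) = T(n/2) + c recurrence analyzed next. A call looks like bin_srch(a, 0, n - 1, x).

/* Recursive binary search over the sorted range a[low..high]. */
int bin_srch(const int a[], int low, int high, int x)
{
    if (low > high)
        return -1;                          /* empty range: unsuccessful */
    int mid = low + (high - low) / 2;
    if (x == a[mid])
        return mid;                         /* successful search */
    if (x < a[mid])
        return bin_srch(a, low, mid - 1, x);
    return bin_srch(a, mid + 1, high, x);
}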


D&C : Binary Search
Binary Search : Analysis (Tracing)
D&C : Binary Search
Binary Search (Recursive): Analysis According to the Master Theorem

Each call halves the problem size and does a constant amount of work outside the recursive call, so the recurrence relation is

    T(n) = T(n/2) + c,   T(1) = c

Here a = 1, b = 2 and f(n) = c = Θ(1) = Θ(n^(log_b a)), so case 2 of the Master Theorem applies:

    T(n) = Θ(log n)

Comparison counts:

                       best        average     worst
Successful search      O(1)        O(log n)    O(log n)
Unsuccessful search    O(log n)    O(log n)    O(log n)
Divide and Conquer

Binary Search (Iterative): Analysis

Fig. 1 Binary decision tree for binary search, n=14


Divide and Conquer

Finding Maximum & Minimum (Straightforward):

Algo. 4: Straightforward maximum and minimum (see the C sketch below)

1) Straightforward: 2(n − 1) element comparisons in the best, average and worst cases.
2) After modification (testing against min only when the max test fails):
   n − 1 element comparisons in case the elements are in increasing order (best case),
   and 2(n − 1) in case the elements are in decreasing order (worst case).
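A hedged C sketch of the straightforward algorithm; the comment marks the one-line modification referred to above.

/* StraightMaxMin: one pass, two element comparisons per iteration,
   2(n-1) element comparisons in total. Changing the second `if` into
   `else if` gives the modified version: n-1 comparisons at best
   (increasing input), 2(n-1) at worst (decreasing input). */
void straight_max_min(const int a[], int n, int *max, int *min)
{
    *max = *min = a[0];
    for (int i = 1; i < n; i++) {
        if (a[i] > *max) *max = a[i];   /* comparison 1 */
        if (a[i] < *min) *min = a[i];   /* comparison 2 (`else if` in the modified version) */
    }
}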
D&C : Maximum & Minimum
Finding Maximum & Minimum (D&C):

Algo. 5: Recursively finding the maximum and minimum
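The recursive listing is shown as a figure; a C sketch of the same divide-and-conquer scheme follows. It finds both the maximum and minimum of a[i..j] and, when n is a power of 2, performs 3n/2 − 2 element comparisons, as derived on the next slide.

/* Divide-and-conquer MaxMin over a[i..j]. */
void max_min(const int a[], int i, int j, int *max, int *min)
{
    if (i == j) {                               /* one element: no comparison */
        *max = *min = a[i];
    } else if (i == j - 1) {                    /* two elements: one comparison */
        if (a[i] < a[j]) { *max = a[j]; *min = a[i]; }
        else             { *max = a[i]; *min = a[j]; }
    } else {
        int max1, min1;
        int mid = (i + j) / 2;                  /* divide P into two halves */
        max_min(a, i, mid, max, min);           /* solve the subproblems */
        max_min(a, mid + 1, j, &max1, &min1);
        if (max1 > *max) *max = max1;           /* combine: two more comparisons */
        if (min1 < *min) *min = min1;
    }
}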


D&C : Maximum & Minimum
Finding Maximum & Minimum (D&C): Analysis

Fig. 4 Trees of recursive calls of MaxMin


D&C : Maximum & Minimum
Finding Maximum & Minimum (D&C): Analysis

Counting element comparisons, MaxMin satisfies the recurrence

    T(n) = 0                n = 1
    T(n) = 1                n = 2
    T(n) = 2T(n/2) + 2      n > 2

When n is a power of 2, n = 2^k for some positive integer k, solving by repeated substitution gives

    T(n) = 2T(n/2) + 2
         = 4T(n/4) + 4 + 2
         :
         = 2^(k-1) T(2) + (2 + 4 + ... + 2^(k-1))
         = 2^(k-1) + 2^k − 2 = 3n/2 − 2

Further modification possible?


D&C : Maximum & Minimum
Finding Maximum & Minimum (D&C): Analysis
Let us see what the count is when element comparisons have the same cost as comparisons between i and j. Let C(n) be this number. First we observe that lines 6 and 7 of the algorithm can be replaced with

    if (i ≥ j − 1) // Small(P)

which yields the recurrence

    C(n) = 2C(n/2) + 3      n > 2
    C(n) = 2                n = 2

Solving this equation by the substitution method, we obtain C(n) = 5n/2 − 3.

Comparisons (counts in parentheses are for n = 1000):
- StraightMaxMin: 3(n − 1), including the for-loop index comparisons (2997)
- MaxMin, element comparisons only: 3n/2 − 2 (1498)
- MaxMin, after modification, counting index comparisons as well: 5n/2 − 3 (2497)
D&C : Merge Sort

The operations of this sorting technique are related to the process of merging (one-way/two-way merging).

Two approaches: a) Recursive (simple) b) Iterative (more complex)

a) Recursive merge sort:

• Recursive merge sort uses the divide-and-conquer strategy of algorithm design.
• Given a sequence of n elements a[1], a[2], ..., a[n], the general idea is to imagine them split into two sets a[1], ..., a[n/2] and a[n/2+1], ..., a[n].
• Each set is individually sorted, and the resulting sorted sequences are merged to produce a single sorted sequence of n elements.

That is why this sorting technique is also called split and merge:
i) Split ii) Merge
D&C : Merge Sort

Algorithm :

Function MergeSort(low, high)
// a[low : high] is a global array to be sorted.
1. if (low < high) then          // there is more than one element
2.     mid := (low + high) / 2   // divide P into subproblems: find where to split
3.     MergeSort(low, mid)       // solve the subproblems
4.     MergeSort(mid + 1, high)
5.     Merge(low, mid, high)     // combine the solutions
6. end MergeSort
D&C : Merge Sort
Algorithm :
Function Merge(low, mid, high)
// a[low:high] is a global array containing two sorted subsets in a[low:mid] and a[mid+1:high]. The goal is to merge these
// two sets into a single set residing in a[low:high]. b[] is an auxiliary global array.

1. h := low; i := low; j := mid + 1;
2. Repeat through step 5 while ((h ≤ mid) and (j ≤ high))
3.     if (a[h] ≤ a[j]) then
           b[i] := a[h]; h := h + 1;
4.     else
           b[i] := a[j]; j := j + 1;
5.     i := i + 1
6. if (h > mid) then
7.     Repeat for k := j to high
           b[i] := a[k]; i := i + 1;
   else
8.     Repeat for k := h to mid
           b[i] := a[k]; i := i + 1;
9. Repeat for k := low to high
       a[k] := b[k]
10. Return
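For reference, a compact C rendering of the two routines above (0-based indexing; b[] plays the role of the auxiliary global array):

/* Merge the sorted runs a[low..mid] and a[mid+1..high] using buffer b[]. */
static void merge(int a[], int b[], int low, int mid, int high)
{
    int h = low, i = low, j = mid + 1, k;
    while (h <= mid && j <= high)               /* take the smaller head element */
        b[i++] = (a[h] <= a[j]) ? a[h++] : a[j++];
    while (h <= mid)  b[i++] = a[h++];          /* copy any leftover elements */
    while (j <= high) b[i++] = a[j++];
    for (k = low; k <= high; k++)               /* copy the merged run back into a[] */
        a[k] = b[k];
}

/* Recursive merge sort of a[low..high]; b[] must be at least as large as a[]. */
static void merge_sort(int a[], int b[], int low, int high)
{
    if (low < high) {
        int mid = (low + high) / 2;             /* divide: find where to split */
        merge_sort(a, b, low, mid);             /* conquer the left half */
        merge_sort(a, b, mid + 1, high);        /* conquer the right half */
        merge(a, b, low, mid, high);            /* combine the solutions */
    }
}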
D&C : Merge Sort

Fig. 1: Tree of calls of MergeSort(1, 10)


D&C : Merge Sort

Fig 2: Tree of calls of Merge


D&C : Merge Sort
Complexity Analysis:
• Time complexity
  - The recursion of MergeSort() has depth ⌈log2 n⌉, since the input is halved at each level.
  - Across each level of the recursion, Merge() does O(n) work in total.
  - This gives the recurrence T(n) = 2T(n/2) + cn, which solves to O(n log2 n).
  - Therefore the best, average and worst case time complexity is O(n log2 n).

• Space complexity
  - Merge() uses an additional (auxiliary) array of the input size.
  - Merge sort is not an in-place algorithm, so the space complexity is O(n).

• Important points
  - Merge sort uses the divide-and-conquer algorithm design strategy.
  - Merge sort is a recursive sorting algorithm.
  - It is best suited when the input size is very large.
  - Merge sort is the best sorting algorithm in terms of time complexity, O(n log2 n), if we are not concerned with the auxiliary space used.
D&C : Matrix multiplication
Matrix Multiplication:
Let A and B be two n × n matrices. The product matrix C = AB is also an n × n matrix, whose entries are

    C(i, j) = Σ (1 ≤ k ≤ n) A(i, k) · B(k, j)        ..... (I)

Using divide and conquer, A, B and C are each partitioned into four n/2 × n/2 submatrices:

    [ A11 A12 ]   [ B11 B12 ]   [ C11 C12 ]
    [ A21 A22 ] × [ B21 B22 ] = [ C21 C22 ]          ..... (II)

Then

    C11 = A11 B11 + A12 B21      C12 = A11 B12 + A12 B22
    C21 = A21 B11 + A22 B21      C22 = A21 B12 + A22 B22

so 8 multiplications of n/2 × n/2 matrices and
4 additions of n/2 × n/2 matrices are required.

D&C : Matrix multiplication
Matrix Multiplication :

#define N 4   /* matrix dimension; any fixed size works */

/* Straightforward O(n^3) matrix multiplication, C = A * B. */
void multiply(int A[][N], int B[][N], int C[][N])
{
    for (int i = 0; i < N; i++)
    {
        for (int j = 0; j < N; j++)
        {
            C[i][j] = 0;                      /* clear the accumulator */
            for (int k = 0; k < N; k++)
            {
                C[i][j] += A[i][k] * B[k][j]; /* row i of A times column j of B */
            }
        }
    }
}
D&C : Matrix multiplication
Matrix Multiplication: (Using the D&C strategy)

Algorithm MM(A, B, n)
// Returns the n × n product C = AB, dividing each matrix into four n/2 × n/2 blocks.
{
    if (n <= 2)
    {
        compute C directly using the 4 formulas of a 2 × 2 product;
    }
    else
    {
        C11 := MM(A11, B11, n/2) + MM(A12, B21, n/2);
        C12 := MM(A11, B12, n/2) + MM(A12, B22, n/2);
        C21 := MM(A21, B11, n/2) + MM(A22, B21, n/2);
        C22 := MM(A21, B12, n/2) + MM(A22, B22, n/2);
    }
}
The overall time T(n) of the resulting divide-and-conquer algorithm is given by the recurrence

    T(n) = b                    n ≤ 2
    T(n) = 8T(n/2) + cn²        n > 2

where b and c are constants; this solves to T(n) = O(n³), so simple blocking gives no improvement over the conventional method.
D&C : Strassen’s Matrix multiplication
STRASSEN'S Matrix Multiplication:
Volker Strassen discovered a new way to compute the Cij's, using only 7 multiplications and 18 additions/subtractions of n/2 × n/2 matrices:

    P = (A11 + A22)(B11 + B22)      T = (A11 + A12) B22
    Q = (A21 + A22) B11             U = (A21 − A11)(B11 + B12)
    R = A11 (B12 − B22)             V = (A12 − A22)(B21 + B22)
    S = A22 (B21 − B11)                                    ..... (IV)

    C11 = P + S − T + V             C12 = R + T
    C21 = Q + S                     C22 = P + R − Q + U    ..... (V)

Then the resulting recurrence relation is

    T(n) = b                    n ≤ 2
    T(n) = 7T(n/2) + an²        n > 2                      ..... (VI)

where a and b are constants; it solves to T(n) = O(n^(log2 7)) ≈ O(n^2.81).

Greedy Approach

• The general method
• Knapsack problem
• Job sequencing with deadlines
• Faster job sequencing with deadlines
• Minimum-cost spanning trees
  - Prim's algorithm
  - Kruskal's algorithm
Greedy Approach

The General Method:

The greedy method builds up a solution in stages, at each stage making the choice that looks best with respect to some measure.
- Feasible solution: any subset of the inputs that satisfies the problem's constraints.
- Objective function: assigns a value to each feasible solution; it is to be maximized or minimized.
- Optimal solution: a feasible solution that optimizes the objective function.
Ex: 1) Travelling (choosing a route) 2) Change making
• Subset paradigm
• Ordering paradigm

Greedy Method Control Abstraction:

Algo 1. Greedy method control abstraction for the subset paradigm


Greedy : Knapsack Problem

Knapsack problem:
We are given n objects and a knapsack or bag. Object i has a weight wi and a profit pi, and the knapsack has a capacity m. If a fraction xi, 0 ≤ xi ≤ 1, of object i is placed into the knapsack, a profit pi·xi is earned. The objective is to

    maximize Σ pi xi   subject to Σ wi xi ≤ m and 0 ≤ xi ≤ 1
Greedy : Knapsack Problem

Knapsack problem: Greedy solution

Object Profit Weight Profit / Sorted Greedy wi* xi pi * xi


i pi wi weight Sequence Solution
xi
1 25 18 1.3888 3 0 0 0
2 24 15 1.6 1 1 15 24
3 15 10 1.5 2 1/2 10/2 =5 7.5
Total 20 31.5

Greedy Solution : x[3] = (0, 1, 1/2)


Greedy : Knapsack Problem

Knapsack problem: Algorithm


Algorithm GreedyKnapsack(m, n)
// p[1:n] and w[1:n] contain the profits and weights respectively of the n objects ordered such that
// p[i]/w[i] >= p[i+1]/w[i+1]. m is the knapsack size and x[1:n] is the solution vector.
{
for i:= 1 to n do
x[i] := 0.0; //Initialize x.
U := m;
for i := 1 to n do
{
if ( w[i] > U) then break;
x[i] := 1.0; U := U – w[i];
}
if ( i <= n) then x[i] := U/w[i];
}
Algo. 2: Algorithm for greedy strategies for the knapsack problem
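For concreteness, here is a hedged C rendering of GreedyKnapsack; like the pseudocode, it assumes the objects are already sorted so that p[i]/w[i] ≥ p[i+1]/w[i+1].

/* Greedy fractional knapsack. Fills x[] with the fraction of each
   object taken and returns the total profit earned. */
double greedy_knapsack(double m, int n, const double p[], const double w[], double x[])
{
    double U = m, profit = 0.0;
    int i;
    for (i = 0; i < n; i++)
        x[i] = 0.0;                          /* initialize the solution vector */
    for (i = 0; i < n; i++) {
        if (w[i] > U) break;                 /* object i no longer fits whole */
        x[i] = 1.0;                          /* take all of object i */
        U -= w[i];
        profit += p[i];
    }
    if (i < n) {                             /* fill the residual capacity with a fraction */
        x[i] = U / w[i];
        profit += p[i] * x[i];
    }
    return profit;
}

On the slide instance above (after sorting by ratio: p = (24, 15, 25), w = (15, 10, 18), m = 20) it returns 31.5 with x = (1, 0.5, 0), matching the table.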
Greedy : Knapsack Problem

Knapsack Problem Analysis:

Time complexity: O(n), assuming the objects are already sorted by p[i]/w[i]; with the initial sort included it is O(n log n).


Greedy: Job Sequencing with Deadlines
Job Sequencing With Deadlines :
Problem : n jobs, deadline di >= 0 and profit pi >0

For any job i profit pi is earned iff the job is completed by its deadline. To complete a
job, one has to process the job for one unit of time. Only one machine is available for
processing the jobs.

Feasible solution – a subset J of jobs such that each job in this subset can be completed by its deadline.

Value of feasible solution J – the sum of the profits of the jobs in J.

Optimal solution – a feasible solution with maximum value (profit).

This is a subset-paradigm problem.


Greedy: Job Sequencing with Deadlines
Job Sequencing With Deadlines :

Example : Let n = 4, (p1, p2, p3, p4) = (100, 10, 15, 27) and (d1, d2, d3, d4) = (2, 1, 2, 1).
Find all feasible solutions and their values .
Sr. No.   Feasible solution   Processing sequence   Value (profit)
  1        (1, 2)              2, 1                  110
  2        (1, 3)              1, 3 or 3, 1          115
  3        (1, 4)              4, 1                  127
  4        (2, 3)              2, 3                  25
  5        (3, 4)              4, 3                  42
  6        (1)                 1                     100
  7        (2)                 2                     10
  8        (3)                 3                     15
  9        (4)                 4                     27

The optimal solution is (1, 4) with value 127.
Greedy: Job Sequencing with Deadlines
Job Sequencing With Deadlines :

1. Algorithm GreedyJob(d, J, n)
   // J is the set of jobs that can be completed by their deadlines.
2. {
3.     J := {1};
4.     for i := 2 to n do
5.     {
6.         if (all jobs in J ∪ {i} can be completed by their deadlines) then
7.             J := J ∪ {i};
8.     }
9. }

Algorithm 2. High-level description of job sequencing with deadlines algorithm.


Greedy Approach
Job Sequencing With Deadlines :

Algo. 5: Greedy algorithm for sequencing unit-time jobs with deadlines and profits
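The detailed listing is shown as a figure; the following C sketch implements the same greedy idea, assuming the jobs are pre-sorted by non-increasing profit: place each job into the latest free time slot at or before its deadline.

/* Greedy job sequencing with deadlines. profit[] and deadline[] are
   0-based and sorted by non-increasing profit; slot[1..n] receives the
   job (1-based index) run in each unit time slot, 0 meaning idle.
   Returns the total profit earned. */
int job_sequence(int n, const int profit[], const int deadline[], int slot[])
{
    int total = 0;
    for (int t = 1; t <= n; t++)
        slot[t] = 0;                         /* all time slots initially free */
    for (int i = 0; i < n; i++) {
        int d = deadline[i] < n ? deadline[i] : n;
        for (int t = d; t >= 1; t--) {       /* latest free slot <= deadline */
            if (slot[t] == 0) {
                slot[t] = i + 1;             /* schedule job i+1 in slot t */
                total += profit[i];
                break;                       /* job placed; move on */
            }
        }
    }
    return total;
}

For the example above (profits sorted as (100, 27, 15, 10) with deadlines (2, 1, 2, 1)) it schedules the jobs with profits 100 and 27 and returns 127, the optimal value.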
Greedy: Job Sequencing with Deadlines
Faster algorithm for JS :

Since there are only n jobs and each job takes one unit of time, it is necessary to consider only the time slots [i − 1, i], 1 ≤ i ≤ b, where b = min{n, max(di)}.
Greedy: Job Sequencing with Deadlines
Faster algorithm for JS :

Algo. 6: Faster algorithm for job sequencing


Greedy: Minimum Spanning Trees
Prim's Algorithm:
• A greedy method to obtain a minimum-cost spanning tree builds the tree edge by edge.
• The next edge to include is chosen according to some optimization criterion.
• The simplest such criterion is to choose an edge that results in a minimum increase in the sum of the costs of the edges so far included.

Figure 4.7 a. A graph and b. its minimum cost spanning tree


Greedy: Minimum Spanning Trees
Prim’s Algorithm:

Figure : Stages in Prim's algorithm


Greedy: Minimum Spanning Trees
Prim’s Algorithm:
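The listing and its trace appear as figures; as a sketch, the C version below implements the same O(n²) idea on a cost adjacency matrix. near[j] holds the tree vertex nearest to j. Unlike the textbook listing, which seeds the tree with a minimum-cost edge, this sketch simply starts from vertex 0; the line numbers in the complexity analysis that follows refer to the textbook listing.

#define NV 8                   /* number of vertices (example bound) */

/* cost[i][j] is the edge cost, or a huge value if edge <i,j> is absent.
   Returns the cost of a minimum spanning tree; t[0..n-2][2] receives its edges. */
int prim(int cost[NV][NV], int n, int t[][2])
{
    int near[NV], mincost = 0;
    near[0] = -1;                            /* -1 marks "already in the tree" */
    for (int i = 1; i < n; i++)
        near[i] = 0;                         /* vertex 0 starts in the tree */
    for (int i = 0; i < n - 1; i++) {        /* pick the remaining n-1 tree edges */
        int j = -1;
        for (int k = 1; k < n; k++)          /* find k minimizing cost[k][near[k]] */
            if (near[k] != -1 && (j == -1 || cost[k][near[k]] < cost[j][near[j]]))
                j = k;
        t[i][0] = j; t[i][1] = near[j];      /* add edge (j, near[j]) to the tree */
        mincost += cost[j][near[j]];
        near[j] = -1;                        /* j is now in the tree */
        for (int k = 1; k < n; k++)          /* update near[] for vertices still outside */
            if (near[k] != -1 && cost[k][j] < cost[k][near[k]])
                near[k] = j;
    }
    return mincost;
}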
Greedy: Minimum Spanning Trees
Prim’s Algorithm: Time Complexity

Line 9 of the algorithm takes O(|E|) time.

The for loop of line 12 takes Θ(n) time.

Lines 18 & 19 and the for loop of line 23 require O(n) time, and each iteration of the for loop of line 16 also takes O(n) time.

Hence the total time required by algorithm Prim is O(n²).


Greedy: Minimum Spanning Trees
Kruskal's Algorithm:
• Kruskal's algorithm suggests a second possible interpretation of the optimization criterion.
• Here, the edges of the graph are considered in non-decreasing order of cost.
• The interpretation is that the set t of edges selected so far for the spanning tree must be such that it is possible to complete t into a tree. (Thus t need not be a tree at all stages of the algorithm; in general it is a forest.)
Greedy: Minimum Spanning Trees
Kruskal’s Algorithm:
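The textbook listing and a worked trace are shown as figures; the C sketch below replaces the min-heap of the slides with an initial sort plus union-find, which gives the same θ(|E| log |E|) behaviour.

#include <stdlib.h>

#define MAXV 64                              /* maximum number of vertices (example bound) */

typedef struct { int u, v, cost; } Edge;

static int parent[MAXV];

static int find(int x)                       /* union-find root with path halving */
{
    while (parent[x] != x)
        x = parent[x] = parent[parent[x]];
    return x;
}

static int cmp_edge(const void *a, const void *b)
{
    return ((const Edge *)a)->cost - ((const Edge *)b)->cost;
}

/* Returns the cost of a minimum spanning tree of the graph with
   vertices 0..nvert-1 and edge list e[0..nedges-1]. */
int kruskal(Edge e[], int nedges, int nvert)
{
    int mincost = 0, taken = 0;
    for (int i = 0; i < nvert; i++)
        parent[i] = i;                       /* each vertex is its own tree */
    qsort(e, nedges, sizeof(Edge), cmp_edge);/* non-decreasing order of cost */
    for (int i = 0; i < nedges && taken < nvert - 1; i++) {
        int ru = find(e[i].u), rv = find(e[i].v);
        if (ru != rv) {                      /* edge joins two trees: accept it */
            parent[ru] = rv;
            mincost += e[i].cost;
            taken++;
        }                                    /* else it would create a cycle: reject it */
    }
    return mincost;
}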
Greedy: Minimum Spanning Trees
Kruskal's Algorithm: Time Complexity

Number of edges in the graph: |E|.
Number of edges in the spanning tree: |V| − 1.

A naive implementation that scans the remaining edges to pick each tree edge therefore takes θ(|V| · |E|) = θ(n · e) time.

But because a min-heap is used to select the next minimum-cost edge, the time complexity of Kruskal's algorithm reduces to

    θ(|E| log |E|)

and since |E| ≤ |V|² implies log |E| = O(log |V|), this is also θ(|E| log |V|).
DYNAMIC PROGRAMMING

• The general method
• Principle of optimality
• Multistage graphs
• Multistage graphs: forward and backward approaches
• 0/1 Knapsack
• Travelling salesperson problem
Dynamic Programming
Dynamic Programming: Dynamic programming is an algorithm design method that can be used when the solution to a problem can be viewed as the result of a sequence of decisions.
– Like divide and conquer, DP solves problems by combining solutions to subproblems.
– Unlike divide and conquer, the subproblems are not independent: they share subsubproblems, each of which DP solves only once, recording the answer in a table.

The General Method:
An optimal sequence of decisions is obtained by making explicit appeal to the principle of optimality (defined on the next slide); decision sequences containing suboptimal subsequences are never generated.
Dynamic Programming
Principle of Optimality : The principle of optimality states that an optimal
sequence of decisions has the property that whatever the initial state and
decision are, the remaining decisions must constitute an optimal decision
sequence with regard to the state resulting from the first decision.
Difference between the greedy method and DP:
- In the greedy method only one decision sequence is ever generated.
- In DP, many decision sequences may be generated.
DP : Multistage Graphs
A) Multistage Graphs: A multistage graph G = (V, E) is a directed weighted graph in which the vertices are partitioned into k ≥ 2 disjoint sets Vi, 1 ≤ i ≤ k. In addition, if <u, v> is an edge in E, then u ∈ Vi and v ∈ Vi+1 for some i, 1 ≤ i < k. The sets V1 and Vk are singletons: s, the source, is the vertex in V1 and t, the sink, is the vertex in Vk. The multistage graph problem is to find a minimum-cost path from s to t.

Fig. 1 Five stage Graph


DP : Multistage Graphs
Multistage Graphs :
Let p(i, j) be a minimum-cost path from vertex j in Vi to vertex t, and let cost(i, j) be the cost of this path. Then, using the forward approach,

    cost(i, j) = min { c(j, l) + cost(i+1, l) }   over l ∈ Vi+1 with <j, l> ∈ E

The shortest path from s to t can easily be determined if we record the decision made at each state (vertex). Let d(i, j) be the value of l that minimizes c(j, l) + cost(i+1, l).
DP : Multistage Graphs
Problem 1: Find the minimum cost path from s to t for the five-stage graph shown
below using Forward approach.

Solution:

Since k = number of stages = 5, start computing cost(i, j) from stage k − 2 = 5 − 2 = 3.

cost(5, 12) = 0. cost(4, 9) = min{c(9, 12) + cost(5, 12)} = 4 + 0 = 4


cost(4, 10) = min{c(10, 12) + cost(5, 12)} = 2 + 0 = 2
cost(4, 11) = min{c(11, 12) + cost(5, 12)} = 5 + 0 = 5

cost(3, 6) = min {c(6, 9) + cost(4, 9), c(6, 10) + cost(4, 10) }


= min{6+4, 5+2} = 7 d(3,6) = 10
cost(3, 7) = min {c(7, 9) + cost(4, 9), c(7, 10) + cost(4, 10) }
= min{4+4, 3+2} = 5 d(3,7) = 10
Dynamic Programming
cost(3, 8) = min {c(8, 10) + cost(4, 10), c(8, 11) + cost(4, 11) }
= min{ 5+2 , 6+5} = 7 d(3,8) = 10

cost(2, 2) = min {c(2, 6) + cost(3, 6), c(2, 7) + cost(3, 7), c(2, 8) + cost(3, 8) }
= min{4 + 7, 2+5, 1 + 7} = 7 d(2, 2) = 7
cost(2, 3) = min {c(3, 6) + cost(3, 6), c(3, 7) + cost(3, 7) }
= min{ 2 + 7, 7 + 5} = 9 d(2, 3) = 6
cost(2, 4) = min {c(4, 8) + cost(3, 8)} = min{11 + 7} = 18      d(2, 4) = 8
cost(2, 5) = min {c(5, 7) + cost(3, 7), c(5, 8) + cost(3, 8) }
= min{ 11 + 5, 8 + 7 } = 15 d(2, 5) = 8

cost(1, 1) = min {c(1, 2) + cost(2, 2), c(1, 3) + cost(2, 3),
                  c(1, 4) + cost(2, 4), c(1, 5) + cost(2, 5)}
           = min{9 + 7, 7 + 9, 3 + 18, 2 + 15} = 16      d(1, 1) = 2 or 3
DP : Multistage Graphs
v 1 2 3 4 5 6 7 8 9 10 11 12
cost 16 7 9 18 15 7 5 7 4 2 5 0
d 2 or 3 7 6 8 8 10 10 10 12 12 12 12

Let the minimum-cost path be s = 1, v2, v3, v4, t = 12. Reading off the d values:

    d(1, 1) = 2          OR    d(1, 1) = 3
    d(2, 2) = 7                d(2, 3) = 6
    d(3, 7) = 10               d(3, 6) = 10
    d(4, 10) = 12              d(4, 10) = 12

Therefore the minimum-cost paths from s to t are

    s = 1 → 2 → 7 → 10 → t = 12   and   s = 1 → 3 → 6 → 10 → t = 12

each with cost 16.
DP : Multistage Graphs
Multistage Graphs : Forward Approach Algorithm
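The listing (FGraph) appears as a figure; a C sketch over a cost adjacency matrix follows, with vertices numbered 1..n in stage order as in Fig. 1. d[j] records the decision made at vertex j, from which the minimum-cost path is recovered.

#define MAXVERT 12             /* enough vertices for the example graph */
#define INF 1000000            /* stands for "no edge" */

/* Forward-approach multistage graph: c[j][r] is the edge cost or INF.
   n = number of vertices, k = number of stages; the minimum-cost
   path from s = 1 to t = n is written into path[1..k]. */
void fgraph(int c[MAXVERT + 1][MAXVERT + 1], int n, int k, int path[])
{
    int cost[MAXVERT + 1], d[MAXVERT + 1];
    cost[n] = 0;                              /* cost at the sink t is 0 */
    for (int j = n - 1; j >= 1; j--) {        /* vertices in decreasing (reverse stage) order */
        cost[j] = INF;
        for (int r = j + 1; r <= n; r++)      /* choose r minimizing c[j][r] + cost[r] */
            if (c[j][r] != INF && c[j][r] + cost[r] < cost[j]) {
                cost[j] = c[j][r] + cost[r];
                d[j] = r;                     /* record the decision at j */
            }
    }
    path[1] = 1;                              /* recover the path from the d[] values */
    path[k] = n;
    for (int i = 2; i < k; i++)
        path[i] = d[path[i - 1]];
}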
DP : Multistage Graphs
Multistage Graphs : Backward Approach

The multistage problem can also be solved using the backward approach.

Let bp(i, j) be a minimum-cost path from vertex s to vertex j in Vi, and let bcost(i, j) be the cost of bp(i, j). Then, using the backward approach, we obtain

    bcost(i, j) = min { bcost(i−1, l) + c(l, j) }   over l ∈ Vi−1 with <l, j> ∈ E
DP : Multistage Graphs
Problem 1: Find the minimum cost path from s to t for the five-stage graph shown
below using backward approach.

Solution:

bcost(1, 1) = 0. bcost(2, 2) = min{bcost(1, 1) + c(1, 2)} = 0 + 9 = 9


bcost(2, 3) = min{bcost(1, 1) + c(1, 3)} = 0 + 7 = 7
bcost(2, 4) = min{bcost(1, 1) + c(1, 4)} = 0 + 3 = 3
bcost(2, 5) = min{bcost(1, 1) + c(1, 5)} = 0 + 2 = 2

bcost(3, 6) = min {bcost(2, 2) + c(2, 6), bcost(2, 3) + c(3, 6)}
            = min{9 + 4, 7 + 2} = 9      d(3, 6) = 3

bcost(3, 7) = min {bcost(2, 2) + c(2, 7), bcost(2, 3) + c(3, 7), bcost(2, 5) + c(5, 7)}
            = min{9 + 2, 7 + 7, 2 + 11} = 11      d(3, 7) = 2
DP : Multistage Graphs
bcost(3, 8) = min {bcost(2, 2) + c(2, 8), bcost(2, 4) + c(4, 8), bcost(2, 5) + c(5, 8) }
= min{ 9 + 1, 3 + 11, 2 + 8 } = 10 d(3,8) = 2 or 5

bcost(4, 9) = min {bcost(3, 6) + c(6, 9), bcost(3, 7) + c(7, 9)}


= min{9 + 6, 11 + 4} = 15      d(4, 9) = 6 or 7
bcost(4, 10) = min {bcost(3, 6) + c(6, 10), bcost(3, 7) + c(7, 10),
                    bcost(3, 8) + c(8, 10)}
             = min{9 + 5, 11 + 3, 10 + 5} = 14      d(4, 10) = 6 or 7

bcost(4, 11) = min {bcost(3, 8) + c(8, 11)}


= min{ 10+ 6 } = 16 d(4, 11) = 8

bcost(5, 12) = min {bcost(4, 9) + c(9, 12), bcost(4, 10) + c(10, 12),
bcost(4, 11) + c(11, 12)}
= min{15 + 4, 14 + 2, 16 + 5} = 16 d(5, 12) = 10
DP : Multistage Graphs
v 1 2 3 4 5 6 7 8 9 10 11 12
bcost 0 9 7 3 2 9 11 10 15 14 16 16
d 1 1 1 1 1 3 2 2 or 5 6 or 7 6 or 7 8 10

Let the minimum-cost path be s = 1, v2, v3, v4, t = 12. Tracing the d values backwards:

    d(5, 12) = 10        OR    d(5, 12) = 10
    d(4, 10) = 6               d(4, 10) = 7
    d(3, 6) = 3                d(3, 7) = 2
    d(2, 3) = 1                d(2, 2) = 1

Therefore the minimum-cost paths from s to t are

    s = 1 → 3 → 6 → 10 → t = 12   and   s = 1 → 2 → 7 → 10 → t = 12

each with cost 16.
DP : Multistage Graphs
Multistage Graphs : Backward Approach Algorithm
DP : Multistage Graphs
Complexity analysis of Multistage graph:

If G is represented by its adjacency lists, then r in line 9 of the algorithm can be found in time proportional to the degree of vertex j. Hence, if G has |E| edges, the time for the for loop of line 7 is Θ(|V| + |E|).

The time for the for loop of line 16 is Θ(k).

Therefore, the total time complexity of FGraph and BGraph is Θ(|V| + |E|).

In addition to the space needed for the input, space is needed for cost[], d[] and
p[] .
DP : 0/1 Knapsack
0/1 Knapsack: Problem: The terminology and notation used for the 0/1 knapsack are the same as for the greedy knapsack, except that each xi is restricted to the values 0 and 1.

• A solution to the 0/1 knapsack problem can be obtained by making a sequence of decisions on the variables x1, x2, ..., xn.
• A decision on variable xi involves determining which of the values 0 or 1 is to be assigned to it.
• Let us assume that decisions on the xi are made in the order xn, xn−1, ..., x1.
• Following the decision on xn, we are in one of two possible states:
  1) xn = 0: the capacity remaining is m and no profit has been earned.
  2) xn = 1: the capacity remaining is m − wn and a profit of pn has been earned.
• It is clear that the remaining decisions xn−1, ..., x1 must be optimal with respect to the problem state resulting from the decision on xn. Hence the principle of optimality holds.
DP : 0/1 Knapsack
0/1 Knapsack: Formulation:

Let fj(y) be the value of an optimal solution to KNAP(1, j, y). Since the principle of optimality holds, we obtain

    fn(m) = max { fn−1(m), fn−1(m − wn) + pn }            ..... (I)

For arbitrary fi(y), i > 0, equation I generalizes to

    fi(y) = max { fi−1(y), fi−1(y − wi) + pi }            ..... (II)

Equation II can be solved for fn(m) by beginning with the knowledge f0(y) = 0 for all non-negative y, and fi(y) = −∞ for y < 0. Then f1, f2, ..., fn can be successively computed using equation II.

When the wi are integer, fi(y) needs to be computed only for integer values of y, 0 ≤ y ≤ m; the values fi(y) = −∞ for y < 0 need not be computed explicitly.
DP : 0/1 Knapsack
0/1 Knapsack: Problem : Solve the 0/1 Knapsack problem using dynamic
programming approach for the instance : n =3, (w1 , w2 , w3) = (2, 3, 4) & (p1 , p2 ,
p3) = (1, 2, 5) , m= 6.

Solution: The value of an optimal solution, f3(6), can be computed using equation II.

We know f0(y) = 0 for all y ≥ 0 and fi(y) = −∞ for y < 0.

I) For i = 1:   w1 = 2, p1 = 1,   y = 0 … 6
f1(1) = max { f0(1), f0(1−2) + p1 } = max{0, −∞ + 1} = 0
f1(2) = max { f0(2), f0(2−2) + p1 } = max{0, 0 + 1} = 1
f1(3) = max { f0(3), f0(3−2) + p1 } = max{0, 0 + 1} = 1
f1(4) = max { f0(4), f0(4−2) + p1 } = max{0, 0 + 1} = 1
f1(5) = max { f0(5), f0(5−2) + p1 } = max{0, 0 + 1} = 1
f1(6) = max { f0(6), f0(6−2) + p1 } = max{0, 0 + 1} = 1
DP : 0/1 Knapsack
II) For i = 2:   w2 = 3, p2 = 2,   y = 0 … 6
f2(1) = max { f1(1), f1(1−3) + p2 } = max{0, −∞ + 2} = 0
f2(2) = max { f1(2), f1(2−3) + p2 } = max{1, −∞ + 2} = 1
f2(3) = max { f1(3), f1(3−3) + p2 } = max{1, 0 + 2} = 2
f2(4) = max { f1(4), f1(4−3) + p2 } = max{1, 0 + 2} = 2
f2(5) = max { f1(5), f1(5−3) + p2 } = max{1, 1 + 2} = 3
f2(6) = max { f1(6), f1(6−3) + p2 } = max{1, 1 + 2} = 3

III) For i = 3:   w3 = 4, p3 = 5,   y = 0 … 6
f3(1) = max { f2(1), f2(1−4) + p3 } = max{0, −∞ + 5} = 0
f3(2) = max { f2(2), f2(2−4) + p3 } = max{1, −∞ + 5} = 1
f3(3) = max { f2(3), f2(3−4) + p3 } = max{2, −∞ + 5} = 2
f3(4) = max { f2(4), f2(4−4) + p3 } = max{2, 0 + 5} = 5
f3(5) = max { f2(5), f2(5−4) + p3 } = max{3, 0 + 5} = 5
f3(6) = max { f2(6), f2(6−4) + p3 } = max{3, 1 + 5} = 6
DP : 0/1 Knapsack
Therefore, maximum profit = f3(6) = 6.
Now we have to find the values of x1, x2, x3 (each 0 or 1) by tracing back:

I) For i = 3:
f3(6) = 6 ≠ f2(6) = 3, so x3 = 1. Remaining profit = 6 − 5 = 1;
m − w3 = 6 − 4 = 2. Since this is non-negative, the choice is consistent.

II) For i = 2:
m − w2 = 2 − 3 = −1. Since this is negative, object 2 cannot fit, so x2 = 0.

III) For i = 1:
Remaining profit − p1 = 1 − 1 = 0.
m − w1 = 2 − 2 = 0. Since this is non-negative, x1 = 1.

(x1, x2, x3) = (1, 0, 1)    Max Profit = 6.


DP : 0/1 Knapsack
0/1 Knapsack: Tabular Method : Solve the 0/1 Knapsack problem using
dynamic programming approach for the instance : n =3, (w1 , w2 , w3) = (2,
3, 4) & (p1 , p2 , p3) = (1, 2, 5) , m= 6.

Solution: We know f0(y) = 0 for all y and fi(y) = −∞ for y < 0.

 pi   wi   i↓  y→   0   1   2   3   4   5   6      xi
            0       0   0   0   0   0   0   0
  1    2    1       0   0   1   1   1   1   1      x1 = 1  (1 − 1 = 0)
  2    3    2       0   0   1   2   2   3   3      x2 = 0  (1 − 0 = 1)
  5    4    3       0   0   1   2   5   5   6      x3 = 1  (6 − 5 = 1)

(x1, x2, x3) = (1, 0, 1)    Max Profit = 6.


DP : 0/1 Knapsack
0/1 Knapsack: Set Method :
• fi(y) is an ascending step function; i.e., there are a finite number of y's, y1 < y2 < ... < yk, such that fi(y1) < fi(y2) < ... < fi(yk); fi(y) = −∞ for y < y1; fi(y) = fi(yk) for y ≥ yk; and fi(y) = fi(yj) for yj ≤ y < yj+1.

• So we need to compute only fi(yj), 1 ≤ j ≤ k.

• We use the ordered set S^i = {(fi(yj), yj) | 1 ≤ j ≤ k} to represent fi(y).

• Each member of S^i is a pair (P, W), where P = fi(yj) and W = yj.

• Notice here that S^0 = {(0, 0)}. We can compute S^{i+1} from S^i by first computing

      S^i_1 = {(P + p_{i+1}, W + w_{i+1}) | (P, W) ∈ S^i}

• Then S^{i+1} can be computed by merging the pairs in S^i and S^i_1 together. While merging, the dominance rule is applied.
DP : 0/1 Knapsack
Dominance rule : If S^{i+1} contains two pairs (Pj, Wj) and (Pk, Wk) with the property that Pj ≤ Pk and Wj ≥ Wk, then the pair (Pj, Wj) can be discarded, because of equation II.
Discarding or purging rules such as this one are known as dominance rules.

That is, dominated tuples get purged.


DP : 0/1 Knapsack
0/1 Knapsack: Set Method : Solve the 0/1 Knapsack problem using
dynamic programming approach for the instance : n =3, (w1 , w2 , w3) = (2,
3, 4) & (p1 , p2 , p3) = (1, 2, 5) , m= 6.
Solution:

S^0 = {(0, 0)}
S^0_1 = {(1, 2)};              S^1 = {(0, 0), (1, 2)}
S^1_1 = {(2, 3), (3, 5)};      S^2 = {(0, 0), (1, 2), (2, 3), (3, 5)}
S^2_1 = {(5, 4), (6, 6), (7, 7), (8, 9)}
S^3 = {(0, 0), (1, 2), (2, 3), (5, 4), (6, 6)}
((3, 5) is purged because (5, 4) dominates it; pairs with W > m = 6 are dropped.)

• Tuple (6, 6) ∈ S^3 but (6, 6) ∉ S^2, so we must set x3 = 1. Pair (6, 6) came from pair (6 − p3, 6 − w3) = (1, 2). Hence we look for (1, 2) in S^2.
• Since (1, 2) ∈ S^1, we must set x2 = 0.
• Since (1, 2) ∉ S^0, we must set x1 = 1.
• Hence the optimal solution is (x1, x2, x3) = (1, 0, 1), Max Profit = 6.


DP : 0/1 Knapsack

Algo. 4: Informal Knapsack algorithm


DP : 0/1 Knapsack

Algo. 5: Knapsack algorithm
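The textbook listing appears as a figure; below is a C sketch of the tabular method (equation II with integer weights), including the trace-back that produces the xi. MAXN and MAXM are illustrative bounds, not part of the original.

#define MAXN 16
#define MAXM 128

/* Tabular 0/1 knapsack: f[i][y] = best profit using objects 1..i with
   capacity y. w[] and p[] are 1-based; x[1..n] receives the solution.
   Returns the maximum profit f[n][m]. */
int knapsack01(int n, int m, const int w[], const int p[], int x[])
{
    static int f[MAXN + 1][MAXM + 1];
    for (int y = 0; y <= m; y++)
        f[0][y] = 0;                                  /* f0(y) = 0 */
    for (int i = 1; i <= n; i++)
        for (int y = 0; y <= m; y++) {
            f[i][y] = f[i - 1][y];                    /* option xi = 0 */
            if (y >= w[i] && f[i - 1][y - w[i]] + p[i] > f[i][y])
                f[i][y] = f[i - 1][y - w[i]] + p[i];  /* option xi = 1 */
        }
    for (int i = n, y = m; i >= 1; i--) {             /* trace back the xi */
        if (f[i][y] == f[i - 1][y]) x[i] = 0;         /* row above unchanged: object skipped */
        else { x[i] = 1; y -= w[i]; }                 /* object i was taken */
    }
    return f[n][m];
}

For the instance above (w = (2, 3, 4), p = (1, 2, 5), m = 6) it returns 6 with (x1, x2, x3) = (1, 0, 1).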


DP : 0/1 Knapsack
Complexity analysis of 0/1 Knapsack :
1) Using the brute-force approach (recursion):

   Time complexity : O(2^n)

2) Using the dynamic programming approach:

   Time complexity : O(n·W), where n is the number of objects and W is the capacity of the knapsack.
DP : Travelling Salesperson Problem
Travelling Salesperson Problem :
Let G = (V, E) be a directed graph with edge costs cij. The variable cij is defined such that cij > 0 for all i and j, and cij = ∞ if <i, j> ∉ E.

Let |V| = n and assume n > 1. Then, a tour of G is a directed simple cycle that includes every vertex in V. The cost of a tour is the sum of the costs of the edges on the tour. The travelling salesperson problem is to find a tour of minimum cost.

In the following discussion we shall regard a tour as a simple path that starts and ends at vertex 1. Every tour consists of an edge <1, k> for some k ∈ V − {1} and a path from vertex k to vertex 1. The path from vertex k to vertex 1 goes through each vertex in V − {1, k} exactly once.

Let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1. The function g(1, V − {1}) is the length of an optimal salesperson tour.
DP : Travelling Salesperson Problem
Travelling Salesperson Problem :

From the principle of optimality it follows that

    g(1, V − {1}) = min { c1k + g(k, V − {1, k}) }   over 2 ≤ k ≤ n        ..... (I)

Generalizing equation I, we obtain (for i ∉ S)

    g(i, S) = min { cij + g(j, S − {j}) }   over j ∈ S                     ..... (II)

Clearly, g(i, ϕ) = ci1, 1 ≤ i ≤ n. Hence we can use equation II to obtain g(i, S) for all S of size 1, then for all S with |S| = 2, and so on.
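A C sketch of this recurrence using a bitmask for S is given below; vertex 0 stands for vertex 1 of the text, and g[i][S] is filled in order of increasing S, which guarantees that every g[j][S − {j}] is ready when needed. With the cost matrix of the worked example that follows, it returns 35.

#define N 4                    /* number of vertices; adjust for other instances */
#define INF 1000000

/* g[i][S] = length of a shortest path from i through all vertices of
   the bitmask S (over vertices 1..N-1), ending at vertex 0 (eq. II). */
int tsp(int c[N][N])
{
    static int g[N][1 << N];
    int full = (1 << N) - 2;                 /* the set V - {vertex 0} */
    for (int i = 1; i < N; i++)
        g[i][0] = c[i][0];                   /* g(i, empty set) = c_i1 */
    for (int S = 2; S <= full; S += 2)       /* even S only: vertex 0 never belongs to S */
        for (int i = 1; i < N; i++) {
            if (S & (1 << i)) continue;      /* i itself must not be in S */
            g[i][S] = INF;
            for (int j = 1; j < N; j++)      /* try each j in S as the next stop */
                if ((S & (1 << j)) && c[i][j] + g[j][S & ~(1 << j)] < g[i][S])
                    g[i][S] = c[i][j] + g[j][S & ~(1 << j)];
        }
    int best = INF;                          /* eq. I: open the tour with edge <1, k> */
    for (int k = 1; k < N; k++)
        if (c[0][k] + g[k][full & ~(1 << k)] < best)
            best = c[0][k] + g[k][full & ~(1 << k)];
    return best;
}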
DP : Travelling Salesperson Problem
Problem : Consider the directed weighted graph shown in fig 3.1(a). The edge
lengths are given by matrix c of fig. 3.1(b). Find the optimal tour for this graph with
its length.

Figure 3.1 a) Directed weighted graph b) Edge length matrix c

Solution:

Initially, g(i, ϕ) = ci1: g(2, ϕ) = c21 = 5, g(3, ϕ) = c31 = 6, g(4, ϕ) = c41 = 8.

I) |S| = 1:
g(2, {3}) = c23 + g(3, ϕ) = 9+6=15 J(2,3) = 3 g(2, {4}) = c24 + g(4, ϕ) = 10+8=18 J(2,4) = 4

g(3, {2}) = c32 + g(2, ϕ) = 13+5=18 J(3,2) = 2 g(3, {4}) = c34 + g(4, ϕ) = 12+8=20 J(3,4) = 4

g(4, {2}) = c42 + g(2, ϕ) = 8+5=13 J(4,2) = 2 g(4, {3}) = c43 + g(3, ϕ) = 9+6=15 J(4,3) = 3
DP : Travelling Salesperson Problem
II)

g(2, {3, 4}) = min { c23 + g(3, {4}), c24 + g(4, {3})} = min {9+20, 10+15}= 25 J(2,{3,4})= 4

g(3, {2, 4}) = min { c32 + g(2, {4}), c34 + g(4, {2})} = min {13+18, 12+13}= 25 J(3,{2,4})= 4

g(4, {2, 3}) = min { c42 + g(2, {3}), c43 + g(3, {2})} = min {8+15, 9+18}= 23 J(4,{2, 3})= 2

III)
g(1, {2, 3, 4}) = min { c12 + g(2, {3, 4}), c13 + g(3, {2, 4}), c14 + g(4, {2, 3}) }
                = min {10 + 25, 15 + 25, 20 + 23} = 35      J(1, {2, 3, 4}) = 2

So the minimum-cost tour, read off from the decision records

    J(1, {2, 3, 4}) = 2
    J(2, {3, 4}) = 4
    J(4, {3}) = 3

is 1 → 2 → 4 → 3 → 1, with cost 35.
DP : Travelling Salesperson Problem
Complexity analysis of TSP :

An algorithm that proceeds to solve the TSP (optimal tour) using equations I and II will require Θ(n² 2^n) time, since the computation of g(i, S) with |S| = k requires k − 1 comparisons when solving by equation II, and there are Θ(n 2^n) pairs (i, S) to evaluate.

This is better than computing all n! different tours to find the optimal one.

But the most serious drawback of this dynamic programming solution is the space needed, O(n 2^n). This is too large even for modest values of n.
That’s it.
