DAA Unit-II Lecture Notes
Design and Analysis of Algorithms
Unit-II: Divide and Conquer | Greedy | Dynamic Programming | Backtracking
Pandu Sowkuntla

Unit-II: Divide and Conquer
Divide and Conquer Approach
Merge Sort Algorithm
Method: divide the array into two halves, recursively sort each half, and combine (merge) the two sorted halves.
Example: divide the initial array [5, 2, 4, 7, 1, 3, 2, 6] repeatedly into halves down to single elements, then sort and combine (merge) the sub-arrays back up.
Merge Sort Algorithm Analysis
Each call spawns two sub-problems of size n/2 and does O(n) merge work: T(n) = 2T(n/2) + O(n).

Time Complexity:
Worst case performance: Θ(n log n)
Best case performance: Θ(n log n)
Average case performance: Θ(n log n)
Space complexity (worst case): Θ(n)
Merge Sort Algorithm Analysis (Substitution method)
T(n) = 2T(n/2) + n
     = 2(2T(n/4) + n/2) + n
     = 4T(n/4) + n + n
     = 4T(n/4) + 2n
     = 4(2T(n/8) + n/4) + 2n
     = 8T(n/8) + n + 2n
     = 2^3 T(n/2^3) + 3n
     :
     = 2^k T(n/2^k) + kn

If 2^k = n, then k = log n and T(n/2^k) = T(1) = 1, so
T(n) = n*1 + n log n
T(n) = O(n log n)

Time complexity = O(n log n)
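The recurrence above maps directly onto code. A minimal C sketch (function names are illustrative, not from the slides): the two recursive calls are the 2T(n/2) terms and merge() is the O(n) term.

```c
#include <string.h>

/* Merge two sorted halves a[lo..mid-1] and a[mid..hi-1] into a[lo..hi-1]. */
static void merge(int *a, int lo, int mid, int hi) {
    int tmp[hi - lo];                 /* scratch buffer (C99 VLA) */
    int i = lo, j = mid, k = 0;
    while (i < mid && j < hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp, (hi - lo) * sizeof(int));
}

/* Sort a[lo..hi-1]: divide in half, sort each half, then merge. */
void merge_sort(int *a, int lo, int hi) {
    if (hi - lo < 2) return;          /* base case: 0 or 1 element */
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, lo, mid);           /* T(n/2) */
    merge_sort(a, mid, hi);           /* T(n/2) */
    merge(a, lo, mid, hi);            /* O(n) */
}
```

Running it on the slide's example array [5, 2, 4, 7, 1, 3, 2, 6] yields the sorted array.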
Merge Sort Algorithm Analysis (Recursion tree method)
Height of the tree = log n + 1 (n = number of leaves = input size).
Each level does about cn work, so Time complexity = cn*(log n + 1) = cn log n + cn = O(n log n).
Master Theorem for recursive algorithms

For recurrences of the form T(n) = aT(n/b) + θ(n^k log^p n), with a ≥ 1 and b > 1:
Case 1: if a > b^k, then T(n) = θ(n^(log_b a))
Case 2: if a = b^k, then (for p > -1) T(n) = θ(n^k log^(p+1) n)
Case 3: if a < b^k, then (for p ≥ 0) T(n) = θ(n^k log^p n)

Merge sort analysis: T(n) = 2T(n/2) + θ(n), so a = 2, b = 2, k = 1, p = 0. Since a = b^k and p > -1, T(n) = θ(n log n).
Examples on Master Theorem
Solve the following recurrence relation using the Master theorem:
T(n) = 3T(n/2) + n²
We compare the given recurrence with T(n) = aT(n/b) + θ(n^k log^p n). Then we have
a = 3, b = 2, k = 2, p = 0.
Now, a = 3 and b^k = 2² = 4. Clearly, a < b^k, so we follow case 3.
Since p = 0 ≥ 0, we have
T(n) = θ(n^k log^p n) = θ(n² log⁰ n)
Thus, T(n) = θ(n²).
Examples on Master Theorem (continued)
For a recurrence with a = √2 and b = 2 (case a > b^k), we have
T(n) = θ(n^(log_b a)) = θ(n^(log_2 √2)) = θ(n^(1/2))
Quick Sort Algorithm
► Pivot
1. Pick an element of the array as the pivot.
► Partition
2. Rearrange the array elements such that all values lesser than the pivot come before the pivot and all values greater than the pivot come after it. At the end of the partition, the pivot element is placed at its sorted position.
► Recursive
3. Apply the above process recursively to the sub-arrays on either side of the pivot and sort the elements.
► Base Case
If the array has zero or one element, there is no need to call the partition method.
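The steps above can be sketched in C; this uses a Lomuto-style partition with the last element as pivot (the pivot choice is an assumption of this sketch, since the slides do not fix one):

```c
/* Lomuto partition: place the pivot (last element) at its sorted
   position and return that index (pIndex). */
int partition(int *a, int start, int end) {
    int pivot = a[end];
    int pIndex = start;
    for (int i = start; i < end; i++) {
        if (a[i] <= pivot) {               /* smaller values go before pIndex */
            int t = a[i]; a[i] = a[pIndex]; a[pIndex] = t;
            pIndex++;
        }
    }
    int t = a[end]; a[end] = a[pIndex]; a[pIndex] = t;
    return pIndex;
}

/* Base case: a sub-array of 0 or 1 elements is already sorted. */
void quick_sort(int *a, int start, int end) {
    if (start >= end) return;
    int p = partition(a, start, end);
    quick_sort(a, start, p - 1);           /* left of the pivot */
    quick_sort(a, p + 1, end);             /* right of the pivot */
}
```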
Quick Sort Algorithm Analysis
Partitioning costs O(n); the two recursive calls cost T(n/a) and T(n/b) for some split.

Time Complexity:
Best Case: each partition splits the array into halves, giving
T(n) = 2T(n/2) + Θ(n) = Θ(n log n) [using the divide-and-conquer master theorem]
Worst Case: each partition gives unbalanced splits, and we get
T(n) = T(n-1) + Θ(n)
T(n) = T(n-1) + n → (1)
From Eq (1): T(n-1) = T(n-2) + (n-1) → (2)
Expanding down to the base case T(0) = 1:
T(n) = 1 + 2 + ... + (n-3) + (n-2) + (n-1) + n
T(n) = n(n+1)/2 ⇒ O(n²)

Space Complexity: O(log n) (best case), O(n) (worst case)
Recursive calls: after splitting the array into two partitions, quicksort is called recursively on each sub-array (e.g., Recursive Call 1 with start = 0, end = 1, i = 0, pIndex = 0, pivot = 3).
Binary Search Trees (BST)
(Figure: example binary search trees over keys such as 8, 3, 10, 1, 6, 14, 4, 7, 13. In a BST, every key in a node's left subtree is smaller than the node's key, and every key in its right subtree is larger.)
► Empty BST is a single pointer with the value of NULL.
root = NULL;
► A node in BST can be declared as:
struct Node
{
struct Node *leftChild; //to store address of left child
int data;
struct Node *rightChild; //to store address of right child
};
► Create a node of BST as:
struct Node *newNode(int item)
{
struct Node *newnode = (struct Node*) malloc(sizeof(struct Node));
newnode->data = item;
newnode->leftChild = NULL;
newnode->rightChild = NULL;
return newnode;
}
Inserting data into BST
Elements: 45, 15, 79, 90, 10, 55, 12, 50
Step-1: 45 becomes the root.
Step-2: 15 < 45, so 15 becomes the left child of 45.
Step-3: 79 > 45, so 79 becomes the right child of 45.
Step-4: 90 > 45 and 90 > 79, so 90 becomes the right child of 79.
Step-5: 10 < 45 and 10 < 15, so 10 becomes the left child of 15.
Step-6: 55 > 45 and 55 < 79, so 55 becomes the left child of 79.
(12 and 50 are inserted the same way: 12 becomes the right child of 10, and 50 the left child of 55.)
Insertion algorithm:
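The insertion steps above can be written in C, reusing the struct Node declared earlier (returning the subtree root after insertion is a choice of this sketch; duplicates are ignored):

```c
#include <stdlib.h>

struct Node {
    struct Node *leftChild;    /* address of left child */
    int data;
    struct Node *rightChild;   /* address of right child */
};

/* Insert item into the BST rooted at root and return the (possibly new)
   root: smaller keys go left, larger keys go right. */
struct Node *insertBST(struct Node *root, int item) {
    if (root == NULL) {                        /* found the empty spot */
        struct Node *n = malloc(sizeof(struct Node));
        n->data = item;
        n->leftChild = n->rightChild = NULL;
        return n;
    }
    if (item < root->data)
        root->leftChild = insertBST(root->leftChild, item);
    else if (item > root->data)
        root->rightChild = insertBST(root->rightChild, item);
    return root;
}
```

Inserting 45, 15, 79, 90, 10, 55, 12, 50 in order reproduces the tree built step by step above.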
Inserting data into BST
Complexity Analysis
With "n" nodes in a binary search tree, what is the height (h) of the tree?

In a Perfect Binary Tree all the levels are filled: level 0 holds 1 node, level 1 holds 2 nodes, and in general level i holds 2^i nodes; the height equals the deepest level (i = h).

If BST == Perfect Binary Tree, the total number of nodes n with height h is
n = 2⁰ + 2¹ + ... + 2^h (a G.P. with (h+1) terms)
n = 2^(h+1) - 1
n + 1 = 2^(h+1) = 2·2^h
2^h = (n+1)/2
h = log₂((n+1)/2)
Searching data in BST
searchBST(root, item)
    if (root == NULL)
        return FALSE
    else if (root->data == item)
        return TRUE
    else if (item < root->data)
        return searchBST(root->leftChild, item)
    else
        return searchBST(root->rightChild, item)
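The same search in C (self-contained for completeness, so struct Node is repeated here). Only one root-to-leaf path is followed, which is where the O(h) cost comes from:

```c
#include <stdbool.h>
#include <stddef.h>

struct Node {
    struct Node *leftChild;
    int data;
    struct Node *rightChild;
};

/* Compare at each node and descend left or right: at most h steps. */
bool searchBST(const struct Node *root, int item) {
    if (root == NULL) return false;        /* fell off the tree: not found */
    if (root->data == item) return true;
    if (item < root->data)
        return searchBST(root->leftChild, item);
    return searchBST(root->rightChild, item);
}
```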
Find Minimum or Maximum data in BST
The minimum is the node with no left child; the maximum is the node with no right child.

void maxBST(root)
    if (root == NULL)
        print("Empty Tree");
        return -1;
    else if (root->right == NULL)
        return root->data;
    return maxBST(root->right);

void minBST(root)
    if (root == NULL)
        print("Empty Tree");
        return -1;
    else if (root->left == NULL)
        return root->data;
    return minBST(root->left);

Time complexity to find the minimum or maximum data in a BST is O(h).
Problem with Binary Search Trees (BST)
(Figure: a left-skewed BST and a right-skewed BST, in which every node has a single child so the tree degenerates into a chain of height n-1, versus a balanced BST on the same keys with height ≈ log n.)
Unbalanced (skewed) binary search trees make the height, and hence every BST operation, O(n) instead of O(log n).
Binary Tree Traversal
Traversal in a linked list: from head to NULL. Traversal in an array: from a[0] to a[n-1]. Both are linear traversals.

Tree traversal: the process of visiting (reading or processing the data of) each node in the tree exactly once, in some order.

(Figure: a tree with root A, children B and C, then D, E, F, then G, H, I, annotated with its Preorder, Inorder, and Postorder listings.)
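The three orders differ only in where the root is visited relative to its subtrees. A C sketch that records the visit order into an array (the out/n parameters are illustrative, used here instead of printing):

```c
#include <stddef.h>

struct Node {
    struct Node *leftChild;
    int data;
    struct Node *rightChild;
};

/* Preorder: root, left subtree, right subtree. */
void preorder(const struct Node *r, int *out, int *n) {
    if (!r) return;
    out[(*n)++] = r->data;
    preorder(r->leftChild, out, n);
    preorder(r->rightChild, out, n);
}

/* Inorder: left subtree, root, right subtree (sorted order for a BST). */
void inorder(const struct Node *r, int *out, int *n) {
    if (!r) return;
    inorder(r->leftChild, out, n);
    out[(*n)++] = r->data;
    inorder(r->rightChild, out, n);
}

/* Postorder: left subtree, right subtree, root. */
void postorder(const struct Node *r, int *out, int *n) {
    if (!r) return;
    postorder(r->leftChild, out, n);
    postorder(r->rightChild, out, n);
    out[(*n)++] = r->data;
}
```

On the BST from the earlier slides (45 with subtrees 15(10) and 79(55, 90)), the inorder walk emits the keys in sorted order.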
Greedy Approach
1) Greedy choice property
▪ It says that the globally optimal solution can be obtained by making locally optimal (Greedy) choices.
▪ The choice made by a Greedy algorithm may depend on earlier choices, but not on future ones.
▪ It iteratively makes one Greedy choice after another, reducing the given problem to a smaller one.
2) Optimal substructure
▪ An optimal solution to the problem contains optimal solutions to its subproblems. That means we can solve subproblems and build up the solutions to solve larger problems.
Knapsack Problem
• Given n items (xᵢ), each having some weight (wᵢ) and value (vᵢ), and a knapsack of capacity W.
• Goal: the value or profit obtained by putting the items into the knapsack is maximum.
► Main task: maximize Σᵢ₌₁ⁿ vᵢxᵢ (the summation of the amount of each item taken times its value)
such that Σᵢ₌₁ⁿ wᵢxᵢ ≤ W (the total weight of the items taken must not exceed the capacity).
Fractional Knapsack Problem:
• We can put a fraction of any item into the knapsack if taking the complete item is not possible. (We take the item with the maximum value/weight ratio as much as we can, then the item with the second-highest value/weight ratio, and so on until the maximum weight limit is reached.)
• It can be solved using the Greedy method.

0/1 Knapsack Problem:
• We cannot take a fraction of any item: we must either take an item completely or leave it completely. Hence, only two options are available for each item, pick it (1) or leave it (0), i.e. xᵢ ∈ {0, 1}.
• It can be solved using the dynamic programming approach.
Example: Find the optimal solution for the knapsack problem making use of the greedy approach. Consider:
n = 3, W = 20 kg, (w1, w2, w3) = (18, 15, 10), (v1, v2, v3) = (25, 24, 15)
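A C sketch of the greedy strategy for this example; instead of sorting first, it repeatedly selects the unused item with the best value/weight ratio (same greedy order, simpler code; the 64-item cap is an assumption of this sketch):

```c
/* Greedy fractional knapsack: repeatedly take the item with the best
   remaining value/weight ratio, taking a fraction of the last one. */
double fractional_knapsack(int n, double W, const double *w, const double *v) {
    int used[64] = {0};               /* assumes n <= 64 for this sketch */
    double total = 0.0;
    while (W > 0) {
        int best = -1;
        for (int i = 0; i < n; i++)   /* pick the unused item with max v/w */
            if (!used[i] && (best < 0 || v[i] / w[i] > v[best] / w[best]))
                best = i;
        if (best < 0) break;          /* no items left */
        used[best] = 1;
        double take = (w[best] <= W) ? w[best] : W;  /* fraction if needed */
        total += v[best] * (take / w[best]);
        W -= take;
    }
    return total;
}
```

For the example above (W = 20), the ratios are 25/18 ≈ 1.39, 24/15 = 1.6, 15/10 = 1.5, so it takes item 2 whole and half of item 3, for a value of 24 + 7.5 = 31.5.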
Fractional Knapsack Problem Using Greedy Method
Practice problems:
1. For a given set of items and knapsack capacity = 60 kg, find the optimal solution for the fractional knapsack problem making use of the greedy approach.
2. A thief enters a house to rob it. He can carry a maximal weight of 60 kg in his bag. There are 5 items in the house with given weights and values. Which items should the thief take if he can even take a fraction of any item with him?
• The main time-taking step is the sorting of all items in decreasing order of their value/weight ratio, which takes O(n log n) time.
• If the items are already arranged in the required order, then the while loop takes O(n) time.
Minimum Spanning Tree (MST)

An MST of a graph G is an acyclic subset T ⊆ E that connects all of the vertices and whose total weight W(T) = Σ_{(u,v)∈T} w(u,v) is minimized.
Prim’s Algorithm
➢ Tree vertices: Vertices that are a part of the minimum spanning tree T.
➢ Fringe vertices: Vertices that are currently not a part of T, but are adjacent to some
vertex of T.
➢ Unseen vertices: Vertices that are neither tree vertices nor fringe vertices fall under
this category.
Algorithm
Step 1: Select a starting vertex.
Step 2: Repeat Steps 3 and 4 while there are fringe vertices.
Step 3: Select an edge e of minimum weight connecting a tree vertex and a fringe vertex.
Step 4: Add the selected edge and the vertex to the minimum spanning tree T.
[END OF LOOP]
Step 5: EXIT
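The steps above can be sketched in C with an adjacency matrix; key[v] holds the cheapest edge from the tree to fringe vertex v. This is the O(V²) array version rather than the min-heap version analysed next, and it assumes a connected graph:

```c
#include <limits.h>

#define V 5   /* number of vertices in this sketch */

/* Grow the MST from vertex 0; g[u][w] = 0 means "no edge".
   Returns the total weight of the MST. */
int prim_mst(int g[V][V]) {
    int key[V], inMST[V];
    for (int i = 0; i < V; i++) { key[i] = INT_MAX; inMST[i] = 0; }
    key[0] = 0;                        /* starting vertex */
    int total = 0;
    for (int c = 0; c < V; c++) {
        int u = -1;                    /* fringe vertex with minimum key */
        for (int i = 0; i < V; i++)
            if (!inMST[i] && (u < 0 || key[i] < key[u])) u = i;
        inMST[u] = 1;                  /* u becomes a tree vertex */
        total += key[u];
        for (int w = 0; w < V; w++)    /* update keys of adjacent fringe vertices */
            if (g[u][w] && !inMST[w] && g[u][w] < key[w]) key[w] = g[u][w];
    }
    return total;
}
```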
// using a Min-Heap
Build heap: O(V); each Extract-Min: O(log V); updating the keys of the extracted vertex's adjacent vertices: up to O(V) updates of O(log V) each (adjacency-matrix representation).

Total Time Complexity:
T = O(V) + O(V) * (O(log V) + O(V) * O(log V))
  = O(V) + O(V log V) + O(V² log V)
  = O(E log V)
(because V² can be taken as the number of edges E (aggregate analysis))
Kruskal’s Algorithm
► The algorithm treats the graph as a forest, with every node it has as an individual tree. At each step it adds the cheapest edge that joins two different trees, never forming a cycle.
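That idea can be sketched in C with a simple union-find: sort the edges by weight, then add each edge whose endpoints lie in different trees (the selection sort and the array-based union-find are simplifications of this sketch):

```c
struct Edge { int u, v, w; };

/* Follow parent links to the root of x's tree. */
static int find_root(int *parent, int x) {
    while (parent[x] != x) x = parent[x];
    return x;
}

/* Returns the total weight of the MST over nVertices vertices. */
int kruskal_mst(struct Edge *e, int nEdges, int nVertices) {
    for (int i = 0; i < nEdges; i++)          /* simple sort by weight */
        for (int j = i + 1; j < nEdges; j++)
            if (e[j].w < e[i].w) { struct Edge t = e[i]; e[i] = e[j]; e[j] = t; }
    int parent[64];                           /* assumes nVertices <= 64 */
    for (int i = 0; i < nVertices; i++) parent[i] = i;  /* each node its own tree */
    int total = 0;
    for (int i = 0; i < nEdges; i++) {
        int ru = find_root(parent, e[i].u), rv = find_root(parent, e[i].v);
        if (ru != rv) {                       /* different trees: no cycle */
            parent[ru] = rv;                  /* merge the two trees */
            total += e[i].w;
        }
    }
    return total;
}
```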
Kruskal's Algorithm: Complexity Analysis
Make-Set for all vertices: O(V); sorting the edges: O(E log E); processing the edges with Find/Union: O(E).
Total = O(V) + O(E log E) + O(E)
      = O(E log E)
Since E ≤ V², this is O(E log V²) = O(E log V).
Single Source Shortest Paths: Dijkstra’s Algorithm
Step-1: Maintain a list of unvisited vertices.
Step-6: Mark the current node V1 as visited and remove it from the unvisited list.
Step-7: Select next vertex with smallest cost from the unvisited list and repeat from step 4.
Step-8: The algorithm finally ends when there are no unvisited nodes left.
Note: Dijkstra’s algorithm solves the single-source shortest-paths problem on a weighted,
directed graph G = (V, E) for the case in which all edge weights are non-negative.
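The steps above in C with an adjacency matrix (the O(V²) scan version; a min-heap gives the O(E log V) bound quoted later):

```c
#include <limits.h>

#define N 5   /* number of vertices in this sketch */

/* Single-source shortest paths from src; g[u][v] = 0 means "no edge".
   Final costs are written into dist[]. */
void dijkstra(int g[N][N], int src, int dist[N]) {
    int visited[N] = {0};
    for (int i = 0; i < N; i++) dist[i] = INT_MAX;   /* infinity */
    dist[src] = 0;                                   /* cost 0 to the source */
    for (int c = 0; c < N; c++) {
        int u = -1;                                  /* unvisited vertex of min cost */
        for (int i = 0; i < N; i++)
            if (!visited[i] && (u < 0 || dist[i] < dist[u])) u = i;
        if (dist[u] == INT_MAX) break;               /* rest are unreachable */
        visited[u] = 1;
        for (int v = 0; v < N; v++)                  /* relax edges out of u */
            if (g[u][v] && !visited[v] && dist[u] + g[u][v] < dist[v])
                dist[v] = dist[u] + g[u][v];
    }
}
```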
Example-1
Step-1: Assign a cost of 0 to the source vertex and ∞ (infinity) to all other vertices, and add all the vertices to the unvisited list.
Step-3: Select the next vertex with the smallest cost from the unvisited list (here, C).
58
Design and Analysis of Algorithms--> Unit-II: Dijkstra’s Algorithm
Dijkstra’s Algorithm
Step-9: Select the next vertex with the smallest cost from the unvisited list (here, E).
Time complexity of Dijkstra's algorithm (min-heap implementation, s = source vertex): O(E log V)
Single Source Shortest Paths: Bellman-Ford Algorithm
► A negative cycle is one in which the overall sum of the edge weights of the cycle is negative.
► If the graph G = (V, E) contains no negative-weight cycles reachable from the source s, then for all vertices v ∈ V the shortest-path weight δ(s, v) remains well defined, even if it has a negative value. If a negative cycle is reachable from s, shortest-path weights through it are not well defined (δ(s, v) = −∞).
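The algorithm itself is not spelled out on this slide; a standard C sketch: relax every edge |V|-1 times, then use one extra pass to detect a reachable negative cycle:

```c
#include <limits.h>

struct Edge { int u, v, w; };

/* Fills dist[] with shortest-path weights from src.
   Returns 0 on success, -1 if a reachable negative cycle is detected. */
int bellman_ford(const struct Edge *e, int nE, int nV, int src, int *dist) {
    for (int i = 0; i < nV; i++) dist[i] = INT_MAX;
    dist[src] = 0;
    for (int pass = 0; pass < nV - 1; pass++)   /* |V|-1 relaxation passes */
        for (int i = 0; i < nE; i++)
            if (dist[e[i].u] != INT_MAX && dist[e[i].u] + e[i].w < dist[e[i].v])
                dist[e[i].v] = dist[e[i].u] + e[i].w;
    for (int i = 0; i < nE; i++)                /* one more pass: any improvement
                                                   means a negative cycle */
        if (dist[e[i].u] != INT_MAX && dist[e[i].u] + e[i].w < dist[e[i].v])
            return -1;
    return 0;
}
```

Unlike Dijkstra, this handles negative edge weights, at a cost of O(V·E) time.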
Huffman Coding
► Huffman Coding is a technique of compressing data to reduce its size without losing any
of the details (developed by David Huffman).
► Huffman Coding is generally useful to compress the data in which there are frequently
occurring characters.
► Huffman coding first creates a tree using the frequencies of the characters and then generates a code for each character.
► Huffman Coding prevents any ambiguity in the decoding process using the concept of prefix codes.
► Prefix code: the code associated with a character must not be a prefix of the code associated with any other character.
Huffman code Algorithm
Huffman(C)
{
    n = |C|;
    Create a min-heap Q with C;            // O(n)
    for i = 1 to n-1                       // n-1 iterations
    {
        Allocate space for a new node z;
        z.left = x = Extract-Min(Q);       // O(log n)
        z.right = y = Extract-Min(Q);      // O(log n)
        z.freq = x.freq + y.freq;
        Insert(Q, z);                      // O(log n)
    }
    return(rootnode);                      // the one node remaining in Q
}
Time complexity: O(n) + (n-1)·O(log n) = O(n log n)
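One way to sanity-check the algorithm: the total encoded length Σ freq(c)·depth(c) equals the sum of the merged frequencies z.freq created by the loop. A heap-free C sketch that performs the same merges with linear scans (the scans replace Extract-Min/Insert and are an assumption of this sketch; assumes 2 ≤ n ≤ 64):

```c
/* Repeatedly merge the two smallest frequencies; the running sum of the
   merged values is the total encoded length of the Huffman code. */
int huffman_cost(const int *freq, int n) {
    int f[64], m = n, cost = 0;
    for (int i = 0; i < n; i++) f[i] = freq[i];
    while (m > 1) {
        int a = 0, b = 1;                  /* indices of the two smallest */
        if (f[b] < f[a]) { a = 1; b = 0; }
        for (int i = 2; i < m; i++) {
            if (f[i] < f[a]) { b = a; a = i; }
            else if (f[i] < f[b]) b = i;
        }
        int merged = f[a] + f[b];          /* the new internal node z.freq */
        cost += merged;
        f[a] = merged;                     /* keep the merged node... */
        f[b] = f[--m];                     /* ...and delete the other slot */
    }
    return cost;
}
```

For the classic frequency set {5, 9, 12, 13, 16, 45}, the merges are 14, 25, 30, 55, 100, giving a total encoded length of 224 bits.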
Limitations of Greedy Approach
In the game of Chess, every time we make a decision about a move, we have to also think
about the future consequences. Whereas, in the game of Tennis (or Volleyball), our
action is based on the immediate situation.
This means that in some cases making a decision that looks right at that moment gives
the best solution (Greedy), but in other cases it doesn’t.
Making locally optimal choices does not always work. Hence, Greedy algorithms will not
always give the best solutions.
Dynamic Programming
Why Dynamic Programming ?
Divide and Conquer:
• Used to find a solution; it does not aim for the optimal solution.
• Divides the problem into small sub-problems, each solved independently; the solutions of the smaller problems are combined to find the solution of the large problem.
• Sub-problems are independent, so divide and conquer might solve the same sub-problem multiple times.

Greedy:
• An optimization technique that tries to find an optimal solution from the set of feasible solutions.
• The optimal solution is obtained from a set of feasible (locally optimal) choices.
• Does not consider previously solved instances again, thus it avoids re-computation.

Dynamic Programming:
• An optimization technique that tries to find an optimal solution from the set of feasible solutions.
• Divides the problem into small overlapping sub-problems, which are interdependent and have the optimal substructure property.
• Sub-problems are interdependent; it remembers previously solved instances, thus it avoids re-computation.
► The basic idea of dynamic programming is to store the result of a sub problem after
solving it.
► An optimization technique that tries to find an optimal solution from the set of feasible solutions.
► Optimal substructure property: the optimal solution of the given problem can be obtained from the optimal solutions of its sub-problems.
► Overlapping sub-problems: the sub-problems are interdependent; remembering each previously solved instance avoids its re-computation.
► Divides the problem into small overlapping sub-problems, which are interdependent and have the optimal substructure property.
Fibonacci sequence without Dynamic Programming
Fibonacci(n)
if n==0
return 0
if n==1
return 1
return Fibonacci(n-1) + Fibonacci(n-2)
F(n) = F(n-1) + F(n-2), where F(0) = 0, F(1) = 1
In the recursion tree, Fib(3) occurs twice, Fib(1) occurs 4 times, etc.: the same subproblems are solved repeatedly.
Fibonacci sequence using Dynamic Programming
F = [] //new array
Fibonacci(n)
if F[n] == null
if n==0
F[n] = 0
else if n==1
F[n] = 1
else
F[n] = Fibonacci(n-1) + Fibonacci(n-2)
return F[n]
► Here, we first check whether the result is already present in the array (if F[n] == null).
► But are we sacrificing anything for the speed? Yes, memory: dynamic programming essentially trades memory for time.
► Thus, we should take care that an excessive amount of memory is not used while storing the solutions.
Top-Down Approach of Dynamic Programming
The memoized Fibonacci above is the top-down approach: start from the original problem, recurse downward, and cache each subproblem's result the first time it is computed.
Bottom-Up Approach of Dynamic Programming
F[0] = 0
F[1] = 1
for i in 2 to n
    F[i] = F[i-1] + F[i-2]
return F[n]
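In C, the bottom-up version becomes a single loop that fills the table from the base cases upward, computing each subproblem exactly once: O(n) time, O(n) space (the table size is chosen so the result fits in 64 bits, i.e. n ≤ 92):

```c
/* Bottom-up Fibonacci: F[i] depends only on the two entries below it. */
long long fib(int n) {
    long long F[94];              /* F[93] would overflow signed 64-bit */
    F[0] = 0;
    F[1] = 1;
    for (int i = 2; i <= n; i++)
        F[i] = F[i - 1] + F[i - 2];
    return F[n];
}
```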
0-1 Knapsack Problem
Recursive equation:
if (W == 0 || i == 0)
    return 0;
if (wm[i] > W)
    Knapsack(i, W) = Knapsack(i-1, W);
else
    Knapsack(i, W) = max{ Knapsack(i-1, W), pm[i] + Knapsack(i-1, W - wm[i]) };
0-1 Knapsack Problem: Tabular Method
KS(i, W)=max{KS(i−1,W),(KS(i−1,W−wi)+pi)}
KS(1,3)=max{KS(0,3),(KS(0,0)+8)}
=max{0,8}=8
Similarly,
KS(1,4)=max{KS(0,4),(KS(0,1)+8)}
=max{0,8}=8
KS(1,5)=max{KS(0,5),(KS(0,2)+8)}
=max{0,8}=8
For F(2,1): w2 > 1, so F(2,1) = F(1,1) = 0
F(2,2) = max{F(1,2), (F(1,0)+3)} = max{0,3} = 3
F(2,3) = max{F(1,3), (F(1,1)+3)} = max{8,3} = 8
0-1 Knapsack Problem: Algorithm
cost[n+1, W+1]
KNAPSACK-01(n, W, wm, pm)
    for w in 0 to W
        cost[0, w] = 0                                  // O(W)
    for i in 0 to n                                     // O(n)
        cost[i, 0] = 0
    for i in 1 to n                                     // O(n*W)
        for w in 1 to W
            if wm[i] > w
                cost[i, w] = cost[i-1, w]
            else
                if pm[i] + cost[i-1, w-wm[i]] > cost[i-1, w]
                    cost[i, w] = pm[i] + cost[i-1, w-wm[i]]
                else
                    cost[i, w] = cost[i-1, w]
    return cost[n, W]

Time Complexity = O(n*W)
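The same algorithm in C (the fixed-size table is a simplification of this sketch; a real implementation would allocate an (n+1)×(W+1) table):

```c
/* Bottom-up 0/1 knapsack: cost[i][w] = best profit using the first i
   items with capacity w. Either skip item i or take it and add pm[i-1]. */
int knapsack01(int n, int W, const int *wm, const int *pm) {
    int cost[16][64];                      /* assumes n < 16, W < 64 */
    for (int w = 0; w <= W; w++) cost[0][w] = 0;
    for (int i = 1; i <= n; i++) {
        cost[i][0] = 0;
        for (int w = 1; w <= W; w++) {
            cost[i][w] = cost[i - 1][w];            /* leave item i */
            if (wm[i - 1] <= w) {                   /* take item i if it fits */
                int take = pm[i - 1] + cost[i - 1][w - wm[i - 1]];
                if (take > cost[i][w]) cost[i][w] = take;
            }
        }
    }
    return cost[n][W];
}
```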
All-to-all shortest paths (Floyd Warshall Algorithm)
Recursive equation:
d_ij^(k) = min( d_ij^(k-1), d_ik^(k-1) + d_kj^(k-1) )
(a shortest path from i to j either avoids vertex k, or goes i --> k --> j, where each part uses only intermediate vertices numbered 1, ..., k-1)

d_ij^(k): the cell value of the matrix giving the distance between vertices i and j, where the path may pass only through intermediate vertices numbered 1, ..., k.
D^(k): the matrix whose values are the distances between every pair of vertices using only intermediate vertices numbered 1, ..., k.
Floyd Warshall Algorithm
Example-2. Input: Graph G = (V, E), where V = {1, 2, ..., n}, with edge-weight matrix W:

      1   2   3   4   5
  1   0   3   8   ∞  -4
  2   ∞   0   ∞   1   7
  3   ∞   4   0   ∞   ∞
  4   2   ∞  -5   0   ∞
  5   ∞   ∞   ∞   6   0

Time Complexity = O(n³)
Space Complexity = O(n³) (n matrices must be constructed, each of size O(n²))
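The D^(k) updates can be done in one matrix updated in place (the standard space optimization, reducing the O(n³) space above to O(n²)); a C sketch, with INF as a sentinel for ∞:

```c
#define INF 100000000   /* large sentinel for "no edge"; small enough
                           that INF + INF does not overflow int */

/* d is an n*n distance matrix stored row-major (d[i*n + j]).
   After iteration k, d[i][j] allows intermediate vertices 0..k. */
void floyd_warshall(int n, int *d) {
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (d[i*n + k] < INF && d[k*n + j] < INF
                        && d[i*n + k] + d[k*n + j] < d[i*n + j])
                    d[i*n + j] = d[i*n + k] + d[k*n + j];
}
```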