DAA Unit-II Lecture Notes

Design and Analysis

of
Algorithms

Dr. Pandu Sowkuntla


Asst. Professor,
Dept. of CSE, SRM University AP



GENERAL
PROBLEM-SOLVING
TECHNIQUES
UNIT-II
Problem-Solving Strategies (Algorithm Design Strategies)

Divide and Conquer

Greedy

Dynamic Programming

Backtracking

Branch and Bound


Pandu Sowkuntla
3
Design and Analysis of Algorithms--> Unit-II: Divide and Conquer
Divide and Conquer Approach
❑ Merge Sort Algorithm
❑ Quick Sort Algorithm
❑ Binary Search Tree Operations

Divide and Conquer Approach

► Divide the problem into a number of subproblems that are smaller instances of the same problem.

► Conquer the subproblems by solving them recursively.

► Combine the solutions to the subproblems into the solution for the original problem.

Merge Sort Algorithm
Merge sort
Method:

A "divide and conquer" algorithm

▪ Divides the array into two roughly equal parts: mergeSort(0, n/2-1) and mergeSort(n/2, n-1)

▪ Recursively divides each part in half, continuing until a part contains only one element

▪ Recursively sorts the two halves and merges the two parts into one sorted array: merge(0, n/2, n-1)

▪ Continues to merge parts as the recursion unfolds

Merge Sort Algorithm

Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each.

Conquer: Sort the two subsequences recursively using merge sort.

Combine: Merge the two sorted subsequences to produce the sorted sequence.
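The three steps can be turned into runnable C. The sketch below mirrors the mergeSort/merge calls named earlier; the fixed 64-element temporary buffers are an assumption to keep the example short, not part of the slides:

```c
#include <string.h>

/* Merge the two sorted halves arr[l..m] and arr[m+1..r] */
static void merge(int arr[], int l, int m, int r) {
    int n1 = m - l + 1, n2 = r - m;
    int L[64], R[64];              /* assumes small inputs for this sketch */
    memcpy(L, arr + l, n1 * sizeof(int));
    memcpy(R, arr + m + 1, n2 * sizeof(int));
    int i = 0, j = 0, k = l;
    while (i < n1 && j < n2)       /* repeatedly take the smaller head */
        arr[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < n1) arr[k++] = L[i++];
    while (j < n2) arr[k++] = R[j++];
}

void mergeSort(int arr[], int l, int r) {
    if (l >= r) return;            /* base case: at most one element */
    int m = l + (r - l) / 2;
    mergeSort(arr, l, m);          /* conquer the left half */
    mergeSort(arr, m + 1, r);      /* conquer the right half */
    merge(arr, l, m, r);           /* combine */
}
```

For example, sorting the slide's array {5, 2, 4, 7, 1, 3, 2, 6} with mergeSort(a, 0, 7) yields {1, 2, 2, 3, 4, 5, 6, 7}.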

Merge Sort Algorithm

Example: the initial array {5, 2, 4, 7, 1, 3, 2, 6} is repeatedly divided into halves down to single elements; the subarrays are then sorted and combined back up.
Merge Sort Algorithm Analysis

The recurrence is T(n) = 2T(n/2) + O(n).

Time Complexity:
Worst case performance: Θ(nlogn)
Best case performance: Θ(nlogn)
Average case performance: Θ(nlogn)

Space complexity (worst case): Θ(n)

Merge Sort Algorithm Analysis
(Substitution method)
T(n) = 2T(n/2) + n
     = 2(2T(n/4) + n/2) + n
     = 4T(n/4) + n + n
     = 4T(n/4) + 2n
     = 4(2T(n/8) + n/4) + 2n
     = 8T(n/8) + n + 2n
     = 2³T(n/2³) + 3n
     :
     = 2ᵏT(n/2ᵏ) + kn

If 2ᵏ = n, then k = logn:

T(n) = nT(n/n) + nlogn
T(n) = n·1 + nlogn
T(n) = O(nlogn)

Time complexity=𝑂(𝑛𝑙𝑜𝑔𝑛)
Merge Sort Algorithm Analysis (Recursion tree method)

Height of tree = logn + 1
(n = number of leaves = input size)

Time complexity = cn(logn + 1) = cnlogn + cn = O(nlogn)
Master Theorem for recursive algorithms

If T(n) = aT(n/b) + Θ(nᵏ logᵖn), where a ≥ 1, b > 1, k ≥ 0 and p is a real number, then:

Case 1: if a > bᵏ, then T(n) = Θ(n^(log_b a))
Case 2: if a = bᵏ, then
  if p > −1, T(n) = Θ(nᵏ logᵖ⁺¹n)
  if p = −1, T(n) = Θ(nᵏ loglogn)
  if p < −1, T(n) = Θ(nᵏ)
Case 3: if a < bᵏ, then
  if p ≥ 0, T(n) = Θ(nᵏ logᵖn)
  if p < 0, T(n) = O(nᵏ)

Merge sort analysis: T(n) = 2T(n/2) + Θ(n) gives a = 2, b = 2, k = 1, p = 0; here a = bᵏ and p > −1, so T(n) = Θ(nlogn).
Examples on Master Theorem
Solve the following recurrence relation using the Master theorem:
T(n) = 3T(n/2) + n²

We compare the given recurrence relation with T(n) = aT(n/b) + Θ(nᵏ logᵖn).
Then we have a = 3, b = 2, k = 2, p = 0.

Now, a = 3 and bᵏ = 2² = 4.
Clearly, a < bᵏ, so we follow Case 3.

Since p = 0, we have:
T(n) = Θ(nᵏ logᵖn) = Θ(n² log⁰n)

Thus, T(n) = Θ(n²).
Examples on Master Theorem

Solve the following recurrence relation using the Master theorem:
T(n) = √2·T(n/2) + logn

We compare the given recurrence relation with T(n) = aT(n/b) + Θ(nᵏ logᵖn).
Then we have a = √2, b = 2, k = 0, p = 1.

Now, a = √2 ≈ 1.414 and bᵏ = 2⁰ = 1.
Clearly, a > bᵏ, so we follow Case 1:

T(n) = Θ(n^(log_b a)) = Θ(n^(log₂√2)) = Θ(n^(1/2))

Thus, T(n) = Θ(√n).
Quick Sort Algorithm
► Quicksort is a divide and conquer algorithm.
It divides the large array into smaller sub-arrays,
then recursively sorts those sub-arrays.
► Pivot
1. Pick an element called the "pivot".

There are many ways to choose the pivot element:
i) the first, middle, or last element of the array;
ii) an element picked at random.

► Partition
2. Rearrange the array elements such that all values less than the pivot come before the pivot and all values greater than the pivot come after it.

At the end of the partition, the pivot element will be placed at its sorted position.

► Recursion
3. Apply the above process recursively to the sub-arrays to sort the elements.

► Base Case
If a sub-array has zero or one element, there is no need to call the partition method.
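A minimal C sketch of these steps, using the Lomuto partition scheme with the last element as pivot (one of the pivot choices listed above):

```c
/* Lomuto partition: place the pivot (last element) at its sorted position */
static int partition(int arr[], int low, int high) {
    int pivot = arr[high], i = low;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {              /* move smaller values left */
            int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
            i++;
        }
    }
    int t = arr[i]; arr[i] = arr[high]; arr[high] = t;
    return i;                              /* pivot's final index */
}

void quickSort(int arr[], int low, int high) {
    if (low >= high) return;               /* base case: 0 or 1 element */
    int p = partition(arr, low, high);
    quickSort(arr, low, p - 1);            /* sort values left of pivot */
    quickSort(arr, p + 1, high);           /* sort values right of pivot */
}
```

Running quickSort on the slides' example array {10, 25, 3, 50, 20} produces {3, 10, 20, 25, 50}.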
Quick Sort Algorithm Analysis

Best case recurrence: T(n) = 2T(n/2) + O(n)

Time Complexity:
Best Case: each partition splits the array into halves, giving
T(n) = 2T(n/2) + Θ(n) = Θ(nlogn)
[using the Divide and Conquer master theorem]

Space Complexity: O(logn) (best case), O(n) (worst case)

Worst case: each partition splits the array into T(1) and T(n − 1), giving
T(n) = T(n − 1) + Θ(n) = Θ(n²)
Quick Sort Algorithm Analysis

Worst Case: Each partition gives unbalanced splits, and we get T(n) = T(n − 1) + Θ(n)

T(n) = T(n − 1) + n → (1)
From Eq (1):
T(n − 1) = T(n − 1 − 1) + (n − 1)
         = T(n − 2) + (n − 1) → (2)

Substitute (2) in (1):
T(n) = T(n − 2) + (n − 1) + n → (3)

From Eq (3):
T(n − 2) = T(n − 2 − 2) + (n − 3) + (n − 2) → (4)

Substitute (4) in (3):
T(n) = T(n − 4) + (n − 3) + (n − 2) + (n − 1) + n → (5)

Continuing until the argument reaches 0, with T(0) = 1:

T(n) = 1 + 2 + … + (n − 3) + (n − 2) + (n − 1) + n
T(n) = n(n + 1)/2 ⇒ O(n²)

Average case complexity: O(nlogn)
Worst case space complexity: O(n) (depth of the recursion)

Quick Sort Algorithm Analysis

Example:

arr[5] = {10, 25, 3, 50, 20}
start = 0, end = 4, pIndex = 0, pivot = arr[4] = 20

Quick Sort Algorithm Analysis
Recursive calls:

After splitting the array into 2 partitions, the quicksort algorithm is called recursively on each sub-array.

Recursive Call 1: start = 0, end = 1, i = 0, pIndex = 0, pivot = 3
Quick Sort Algorithm Analysis

The recursion continues on each sub-array until every part contains at most one element.

Sorted array: {3, 10, 20, 25, 50}
Binary Search Trees (BST)

BST is a binary tree where each node n satisfy the following:


• Every node in the left subtree of n contains a value which is smaller than the
value in n.
• Every node in the right subtree of n contains a value which is larger than the
value in n.

(Figure: two example binary search trees, and one tree that violates the BST property.)

Binary Search Trees (BST)
► Empty BST is a single pointer with the value of NULL.
root = NULL;
► A node in BST can be declared as:
struct Node
{
struct Node *leftChild; //to store address of left child
int data;
struct Node *rightChild; //to store address of right child
};
► Create a node of BST as:
struct Node *newNode(int item)
{
    struct Node *newnode = (struct Node*) malloc(sizeof(struct Node));
    newnode->data = item;
    newnode->leftChild = NULL;
    newnode->rightChild = NULL;
    return newnode;
}
Inserting data into BST
Elements: 45, 15, 79, 90, 10, 55, 12, 50

Step-1: insert 45 as the root.
Step-2: 15 < 45, so 15 becomes the left child of 45.
Step-3: 79 > 45, so 79 becomes the right child of 45.
Step-4: 90 > 45 and 90 > 79, so 90 becomes the right child of 79.
Step-5: 10 < 45 and 10 < 15, so 10 becomes the left child of 15.
Step-6: 55 > 45 and 55 < 79, so 55 becomes the left child of 79.
Inserting data into BST
Algorithm

insert(root, data)
    if root == NULL
        return createNode(data)
    if data < root->data
        root->left = insert(root->left, data)
    else if data > root->data
        root->right = insert(root->right, data)
    return root

If h is the height of the BST with n nodes, then the time complexity to insert data into the tree is O(h).
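The pseudocode above can be completed into compilable C using the node declaration from the earlier slide; note that insert returns the (possibly new) subtree root rather than being void:

```c
#include <stdlib.h>

struct Node {
    struct Node *leftChild;
    int data;
    struct Node *rightChild;
};

struct Node *createNode(int item) {
    struct Node *n = (struct Node*) malloc(sizeof(struct Node));
    n->data = item;
    n->leftChild = n->rightChild = NULL;
    return n;
}

/* O(h) insertion: walk down one branch, attach a new leaf at NULL */
struct Node *insert(struct Node *root, int data) {
    if (root == NULL)
        return createNode(data);
    if (data < root->data)
        root->leftChild = insert(root->leftChild, data);
    else if (data > root->data)
        root->rightChild = insert(root->rightChild, data);
    return root;                   /* duplicates are ignored */
}
```

Inserting 45, 15, 79, 90, 10 in order reproduces the shape traced in the worked example: 45 at the root, 15 and 79 as its children, 90 under 79, and 10 under 15.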
Inserting data into BST
Complexity Analysis

With n nodes in a binary search tree, what is the height h of the tree?

In a perfect binary tree, all levels are filled. The number of nodes at level i is 2^i, and the deepest level corresponds to i = h.

If the BST is a perfect binary tree, the total number of nodes n with height h is:
n = 2⁰ + 2¹ + … + 2^h (a G.P. with h + 1 terms)
n = 2^(h+1) − 1

n + 1 = 2^(h+1) = 2·2^h
2^h = (n + 1)/2
h = log₂((n + 1)/2)
Searching data in BST

searchBST(root, item)
    if root == NULL
        return FALSE
    else if root->data == item
        return TRUE
    else if item < root->data
        return searchBST(root->leftChild, item)
    else
        return searchBST(root->rightChild, item)

Time complexity to search the data in BST is O(h).
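An iterative C variant of the recursive search above; it is still O(h) time but uses O(1) extra space. The bstInsert helper is only here to build a tree for demonstration and is not part of the slide:

```c
#include <stdlib.h>

struct Node { struct Node *leftChild; int data; struct Node *rightChild; };

/* helper mirroring the slides' insert, so the search can be exercised */
struct Node *bstInsert(struct Node *root, int data) {
    if (root == NULL) {
        struct Node *n = (struct Node*) malloc(sizeof(struct Node));
        n->data = data;
        n->leftChild = n->rightChild = NULL;
        return n;
    }
    if (data < root->data)      root->leftChild  = bstInsert(root->leftChild, data);
    else if (data > root->data) root->rightChild = bstInsert(root->rightChild, data);
    return root;
}

/* walk down one branch, choosing a side by comparison; 1 = found */
int searchBST(const struct Node *root, int item) {
    while (root != NULL) {
        if (item == root->data) return 1;
        root = (item < root->data) ? root->leftChild : root->rightChild;
    }
    return 0;                   /* fell off the tree: item is absent */
}
```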
Find Minimum or Maximum data in BST

The minimum is the leftmost node (no left child); the maximum is the rightmost node (no right child).

maxBST(root)
    if root == NULL
        print("Empty Tree")
        return -1
    else if root->right == NULL
        return root->data
    return maxBST(root->right)

minBST(root)
    if root == NULL
        print("Empty Tree")
        return -1
    else if root->left == NULL
        return root->data
    return minBST(root->left)

Time complexity to find the minimum or maximum data in BST is O(h).
Problem with Binary Search Trees (BST)

(Figures: a left-skewed BST and a right-skewed BST, both unbalanced, versus a balanced BST.)

Insertion, deletion, and search operations take O(n) time in unbalanced BSTs. The cost can be reduced to O(log n) with balanced BSTs (AVL Trees).
Binary Tree Traversal

Traversal in an array: from a[0] to a[n-1]. Traversal in a linked list: from head to NULL. Both are linear traversals.

Tree traversal: the process of visiting (reading or processing data in) each node in the tree exactly once, in some order.

Example tree (root F): level 0: F; level 1: D, J; level 2: B, E, G, K; level 3: A, C, I; level 4: H.

Breadth First Traversal (level-order traversal): F, D, J, B, E, G, K, A, C, I, H

Depth First Traversal:
1. Preorder <root, left, right> (print the value, traverse left subtree, traverse right subtree): F, D, B, A, C, E, J, G, I, H, K
2. Inorder <left, root, right>: A, B, C, D, E, F, G, H, I, J, K
3. Postorder <left, right, root>: A, C, B, E, D, H, I, G, K, J, F
Binary Tree Traversal
void preorder(struct Node *root)
{
    if(root == NULL)
        return;
    printf("%d ", root->data);    //visit the root
    preorder(root->left);         //traverse the left subtree
    preorder(root->right);        //traverse the right subtree
}
void inorder(struct Node *root)
{
    if(root == NULL)
        return;
    inorder(root->left);          //traverse the left subtree
    printf("%d ", root->data);    //visit the root
    inorder(root->right);         //traverse the right subtree
}
void postorder(struct Node *root)
{
    if(root == NULL)
        return;
    postorder(root->left);        //traverse the left subtree
    postorder(root->right);       //traverse the right subtree
    printf("%d ", root->data);    //visit the root
}
Binary Tree Traversal

Exercise tree: level 0: A; level 1: B, C; level 2: D, E, F; level 3: G, H, I.

Preorder: ______  Inorder: ______  Postorder: ______
Greedy Approach

Greedy Approach

1) Greedy choice property

▪ A globally optimal solution can be obtained by making locally optimal (greedy) choices.

▪ The choice made by a greedy algorithm may depend on earlier choices, but not on future ones.

▪ It iteratively makes one greedy choice after another, reducing the given problem to a smaller one.

2) Optimal substructure

▪ A problem exhibits optimal substructure if an optimal solution to the problem contains optimal solutions to its subproblems.

▪ That means we can solve the subproblems and build up their solutions to solve the larger problem.

Knapsack Problem

► You are given the following:

• A knapsack (a kind of shoulder bag) with limited weight capacity W

• Items xᵢ, each having some weight wᵢ and value vᵢ

► The problem states: which items should be placed into the knapsack such that,

• the value or profit obtained by putting the items into the knapsack is maximum, and

• the weight limit of the knapsack is not exceeded.

► The main task is to maximize Σᵢ₌₁ⁿ vᵢxᵢ (the sum over items of the amount taken times its value)

such that Σᵢ₌₁ⁿ wᵢxᵢ ≤ W (the total weight of the items must not exceed the capacity).
Knapsack Problem
Fractional Knapsack Problem
• Items are divisible: we can put a fraction of any item into the knapsack if taking the complete item is not possible.
• We take the item with the maximum value/weight ratio as much as we can, then the item with the second-highest ratio, and so on until the maximum weight limit is reached.
• It can be solved using the Greedy Method.

0/1 Knapsack Problem
• Items are indivisible: we cannot take a fraction of any item.
• We must either take an item completely or leave it completely, so only two options are available for each item: pick it (1) or leave it (0), i.e., xᵢ ∈ {0, 1}.
• It can be solved using the dynamic programming approach.

Example: Find the optimal solution for the knapsack problem making use of the greedy approach.
Consider:
n = 3, W = 20 kg, (w1, w2, w3) = (18, 15, 10), (v1, v2, v3) = (25, 24, 15)
Fractional Knapsack Problem Using Greedy Method
Problem: For the given set of items and knapsack capacity = 60 kg, find the optimal solution for the fractional knapsack problem making use of the greedy approach. (Equivalently: a thief enters a house for robbing it. He can carry a maximal weight of 60 kg in his bag; there are 5 items in the house with the following weights and values. What items should the thief take if he can even take a fraction of any item with him?)

Item  Weight  Value
1     5       30
2     10      40
3     15      45
4     22      77
5     25      90

OR: n = 5, W = 60 kg
(w1, w2, w3, w4, w5) = (5, 10, 15, 22, 25)
(v1, v2, v3, v4, v5) = (30, 40, 45, 77, 90)
Fractional Knapsack Problem Using Greedy Method
Step-1: Compute the value/weight ratio for each item.

Items  Weight  Value  Ratio
1      5       30     6
2      10      40     4
3      15      45     3
4      22      77     3.5
5      25      90     3.6

Step-2: Sort all the items in decreasing order of their value/weight ratio:
I1 (6), I2 (4), I5 (3.6), I4 (3.5), I3 (3)

Step-3: Start filling the knapsack by putting the items into it one by one.

Weight left  Items in Knapsack  Cost
60           Ø                  0
55           I1                 30
45           I1, I2             70
20           I1, I2, I5         160

Step-4: The knapsack weight left to be filled is 20 kg, but item-4 has a weight of 22 kg. Since in the fractional knapsack problem even a fraction of any item can be taken, the knapsack will contain: < I1, I2, I5, (20/22) I4 >

Total cost of the knapsack = 160 + (20/22) × 77 = 160 + 70 = 230 units

Important note: had the problem been a 0/1 knapsack problem, the knapsack would contain < I1, I2, I5, I3 > (and the bag would still have 5 kg of unused space).
Fractional Knapsack Problem Using Greedy Method
Step-01: For each item, compute its value/weight ratio.
Step-02: Arrange all the items in decreasing order of their value/weight ratio.
Step-03: Start putting the items into the knapsack beginning from the item with the highest ratio.

Fractional-Knapsack(w, p, m)
1. for each item i
2.     compute pᵢ/wᵢ
3. sort items in descending order of pᵢ/wᵢ    // O(nlogn)
4. for each item i from the sorted list        // O(n)
5.     if (m > 0 && wᵢ <= m)
6.         m = m − wᵢ
7.         P = P + pᵢ
8.     else
9.         break
10. if m > 0
11.     P = P + (m/wᵢ) ∗ pᵢ

Time Complexity
• The main time-taking step is the sorting of all items in order of their value/weight ratio.
• If the items are already arranged in the required order, the loop takes O(n) time.
• The average time complexity of Quick Sort is O(nlogn).
• Total time taken including the sort is O(nlogn) + O(n) = O(nlogn).
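The procedure can be sketched in C with qsort providing the O(nlogn) ordering. The Item struct and function names here are illustrative, not from the slides:

```c
#include <stdlib.h>

/* one item: weight and value */
struct Item { double w, v; };

/* comparator: descending value/weight ratio */
static int byRatioDesc(const void *a, const void *b) {
    double ra = ((const struct Item*)a)->v / ((const struct Item*)a)->w;
    double rb = ((const struct Item*)b)->v / ((const struct Item*)b)->w;
    return (ra < rb) - (ra > rb);
}

double fractionalKnapsack(struct Item items[], int n, double W) {
    qsort(items, n, sizeof(struct Item), byRatioDesc);   /* O(nlogn) */
    double profit = 0.0;
    for (int i = 0; i < n && W > 0; i++) {
        if (items[i].w <= W) {             /* take the whole item */
            W -= items[i].w;
            profit += items[i].v;
        } else {                           /* take the possible fraction */
            profit += items[i].v * (W / items[i].w);
            W = 0;
        }
    }
    return profit;
}
```

On the worked example (weights 5, 10, 15, 22, 25; values 30, 40, 45, 77, 90; W = 60) this returns 230, matching Step-4 above.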
Minimum Spanning tree (MST)
• A spanning tree of a connected, undirected graph G is a sub-graph of G which is a tree
that connects all the vertices together.
• A graph G can have many different spanning trees.

Graph G Spanning Trees of G


• Weight of a spanning tree is the sum of the weights of the edges in that spanning tree.

• A minimum spanning tree (MST or minimum weighted spanning tree) is defined as a


spanning tree with weight less than or equal to the weight of every other spanning
tree.
Let G = (V, E) be an undirected connected graph.

The MST of graph G is an acyclic subset T ⊆ E that connects all of the vertices and whose total weight W(T) = Σ₍u,v₎∈T w(u, v) is minimized.
Minimum Spanning tree (MST)

Minimum Spanning Tree (Weight = 9)

Prim’s Algorithm

➢ Tree vertices: Vertices that are a part of the minimum spanning tree T.

➢ Fringe vertices: Vertices that are currently not a part of T, but are adjacent to some
vertex of T.

➢ Unseen vertices: Vertices that are neither tree vertices nor fringe vertices fall under
this category.

Algorithm
Step 1: Select a starting vertex.

Step 2: Repeat Steps 3 and 4 until there are no fringe vertices.

Step 3: Select an edge e connecting the tree vertex and fringe vertex that has
minimum weight.

Step 4: Add the selected edge and the vertex to the minimum spanning tree T.

[END OF LOOP]

Step 5: EXIT
Prim’s Algorithm

(Figure: find the minimum spanning tree of the example graph.)
Prim’s Algorithm

// using a Min-Heap

Build heap: O(V)
Extract-Min: O(logV)
Decrease-Key: O(logV)

Total Time Complexity:
= O(V) + O(V)·(O(logV) + O(V)·O(logV))
= O(V) + O(VlogV) + O(V²logV)
= O(ElogV)
(Because V² can be considered as the number of edges E (aggregate analysis).)
Kruskal’s Algorithm
► Algorithm treats the graph as a forest and every
node it has as an individual tree.

► A tree connects to another only and only if, it


has the least cost among all available options
and does not violate MST properties.

Step 1 - Remove all loops and parallel edges.

Step 2 - Arrange all edges in increasing order of weight.
Kruskal’s Algorithm
Step 3 - Add the edge which has the least weightage
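These steps can be sketched in C with a minimal disjoint-set forest to reject cycle-forming edges; the fixed parent array size and the Edge struct are assumptions of this example:

```c
#include <stdlib.h>

struct Edge { int u, v, w; };

static int parent[64];                       /* disjoint-set forest */
static int find(int x) {                     /* root with path compression */
    return parent[x] == x ? x : (parent[x] = find(parent[x]));
}

static int byWeight(const void *a, const void *b) {
    return ((const struct Edge*)a)->w - ((const struct Edge*)b)->w;
}

/* Kruskal: sort edges, then add each edge that joins two different trees */
int kruskalMST(struct Edge edges[], int ne, int nv) {
    for (int i = 0; i < nv; i++) parent[i] = i;      /* MAKE-SET */
    qsort(edges, ne, sizeof(struct Edge), byWeight); /* Step 2 */
    int total = 0, used = 0;
    for (int i = 0; i < ne && used < nv - 1; i++) {  /* Step 3 */
        int ru = find(edges[i].u), rv = find(edges[i].v);
        if (ru != rv) {                      /* no cycle: accept the edge */
            parent[ru] = rv;                 /* union the two trees */
            total += edges[i].w;
            used++;
        }
    }
    return total;
}
```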

Kruskal’s Algorithm

MAKE-SET for V vertices: O(V)
Sorting the edges: O(ElogE)
Union/Find over the edges: O(E)

Total Time Complexity:
= O(V) + O(ElogE) + O(E)
= O(ElogE)
= O(ElogV²) = O(ElogV)
(Because V² can be considered as the number of edges E (aggregate analysis).)

Space Complexity: O(V) (to create V sets (MAKE-SET(v)))
Single Source Shortest Paths: Dijkstra’s Algorithm
Step-1: Maintain a list of unvisited vertices.

Step-2: Assign a vertex as “source” and also allocate a


maximum possible cost (infinity) to every other vertex.
(The cost of the source to itself will be zero)

Step-3: In every step, try to minimize the cost for each


vertex.

Step-4: For every unvisited neighbor (V2, V3) of the current vertex (V1), calculate the new cost from V1.

Step-5: The new cost of V2 is calculated as:
Minimum(existing cost of V2, cost of V1 + cost of the edge from V1 to V2)

Step-6: Mark the current node V1 as visited and remove it from the unvisited list.
Step-7: Select next vertex with smallest cost from the unvisited list and repeat from step 4.
Step-8: The algorithm finally ends when there are no unvisited nodes left.
Note: Dijkstra’s algorithm solves the single-source shortest-paths problem on a weighted,
directed graph G = (V, E) for the case in which all edge weights are non-negative.
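Steps 1–8 can be sketched as a compact O(V²) implementation in C (a heap-based variant achieves the O(ElogV) bound quoted on a later slide); the 6-vertex graph size is an assumption of this example:

```c
#include <limits.h>

#define DV 6   /* vertices for this hypothetical example */

/* Simple O(V^2) Dijkstra; dist[] receives shortest costs from src.
   0 in g[][] means "no edge"; all weights must be non-negative. */
void dijkstra(int g[DV][DV], int src, int dist[DV]) {
    int visited[DV] = {0};                 /* Step 1: all unvisited */
    for (int i = 0; i < DV; i++) dist[i] = INT_MAX;
    dist[src] = 0;                         /* Step 2 */
    for (int it = 0; it < DV; it++) {
        int u = -1;
        for (int v = 0; v < DV; v++)       /* Step 7: smallest-cost vertex */
            if (!visited[v] && (u == -1 || dist[v] < dist[u])) u = v;
        if (dist[u] == INT_MAX) break;     /* remaining vertices unreachable */
        visited[u] = 1;                    /* Step 6: mark visited */
        for (int v = 0; v < DV; v++)       /* Steps 4-5: relax neighbors */
            if (g[u][v] && !visited[v] && dist[u] + g[u][v] < dist[v])
                dist[v] = dist[u] + g[u][v];
    }
}
```

With an undirected 6-vertex graph laid out like the worked example (source with edges of cost 3, 1, 6 to A, C, D; C connected to A, D, E; A to F; D to E; E to F), the final costs come out as A = 3, C = 1, D = 5, E = 5, F = 7.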
Dijkstra’s Algorithm
Ex-1

Step-1: Assign a cost of 0 to the source vertex and ∞ (infinity) to all other vertices; add all the vertices to the unvisited list.

Step-2: From the source:
• For neighbor A: cost = Minimum(∞, 0+3) = 3
• For neighbor C: cost = Minimum(∞, 0+1) = 1
• For neighbor D: cost = Minimum(∞, 0+6) = 6
Dijkstra’s Algorithm
Step-3: Select the next vertex with the smallest cost from the unvisited list (C).

Step-4:
• For neighbor A: cost = Minimum(3, 1+2) = 3
• For neighbor D: cost = Minimum(6, 1+4) = 5
• For neighbor E: cost = Minimum(∞, 1+4) = 5

Step-5: Select the next vertex with the smallest cost from the unvisited list (A).

Step-6:
• For neighbor F: cost = Minimum(∞, 3+5) = 8
Dijkstra’s Algorithm

Step-7: Select the next vertex with the smallest cost from the unvisited list (D).

Step-8:
• For neighbor E: cost = Minimum(5, 3+5) = 5
Dijkstra’s Algorithm

Step-9: Select the next vertex with the smallest cost from the unvisited list (E).

Step-10:
• For neighbor F: cost = Minimum(8, 5+2) = 7
Dijkstra’s Algorithm
Example-2

Dijkstra’s Algorithm

In the above algorithm:
S = the set which contains the vertices of the shortest path,
s = source vertex

Time complexity = O(ElogV)
Single Source Shortest Paths: Bellman-Ford Algorithm

► A negative cycle is one in which the overall sum of the cycle's edge weights is negative.

► If the graph G = (V, E) contains no negative-weight cycles reachable from the source s, then for all vertices in V the shortest-path weight δ(s, v) remains well defined, even if it has a negative value.

► If the graph contains a negative-weight cycle reachable from s, however, shortest-path weights are not well defined.

(Figures: graphs with negative-weight cycles, and a graph without negative-weight cycles.)
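A C sketch of the algorithm: |V| − 1 relaxation passes over all edges, followed by one extra pass that performs the negative-cycle test described above. The BFEdge struct is an assumption of this example:

```c
#include <limits.h>

struct BFEdge { int u, v, w; };

/* Bellman-Ford: relax every edge |V|-1 times; a further improving
   relaxation means a negative-weight cycle is reachable from src.
   Returns 1 on success, 0 if such a cycle exists. */
int bellmanFord(struct BFEdge e[], int ne, int nv, int src, int dist[]) {
    for (int i = 0; i < nv; i++) dist[i] = INT_MAX;
    dist[src] = 0;
    for (int pass = 1; pass < nv; pass++)        /* |V|-1 passes */
        for (int j = 0; j < ne; j++)
            if (dist[e[j].u] != INT_MAX && dist[e[j].u] + e[j].w < dist[e[j].v])
                dist[e[j].v] = dist[e[j].u] + e[j].w;
    for (int j = 0; j < ne; j++)                 /* negative-cycle test */
        if (dist[e[j].u] != INT_MAX && dist[e[j].u] + e[j].w < dist[e[j].v])
            return 0;
    return 1;
}
```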
Bellman-Ford Algorithm
Example-1

Example-2

Do we have negative cycles in this graph?


Bellman-Ford Algorithm

The last pass of the relaxation loop tests for a negative-weight cycle.
Huffman Coding

► Huffman Coding is a technique of compressing data to reduce its size without losing any
of the details (developed by David Huffman).

► Huffman Coding is generally useful to compress the data in which there are frequently
occurring characters.

► Huffman coding first creates a tree using the frequencies of the character and then
generates code for each character.

► Once the data is encoded, it is decoded by using the same tree.

► Huffman Coding prevents any ambiguity in the decoding process using the concept of prefix codes.

► In a prefix code, the code associated with one character is never a prefix of the code of any other character.
Huffman Coding
Huffman code Algorithm

Huffman(C)
{
    n = |C|;
    Create a min-heap Q with C;              // O(n)
    for i = 1 to n-1                         // n−1 iterations
        Allocate space for a new node z;
        z.left = x = Extract-Min(Q);         // O(logn)
        z.right = y = Extract-Min(Q);        // O(logn)
        z.freq = x.freq + y.freq;
        Insert(Q, z);                        // O(logn)
    return root node;
}

Total time complexity = O(n + (n − 1)·3·logn) = O(nlogn)
Limitations of Greedy Approach

In the game of Chess, every time we make a decision about a move, we also have to think about the future consequences; whereas in the game of Tennis (or Volleyball), our action is based on the immediate situation.

This means that in some cases making the decision that looks right at that moment gives the best solution (greedy), but in other cases it doesn't.

Making locally optimal choices does not always work. Hence, greedy algorithms will not always give the best solutions.

Dynamic Programming

Dynamic Programming
Why Dynamic Programming ?
Divide and Conquer
• Used to find a solution; it does not aim for the optimal solution.
• Divides the problem into small subproblems; each is solved independently, and the solutions of the smaller problems are combined to find the solution of the large problem.
• Subproblems are independent, so D&C might solve the same subproblem multiple times.
• The approach is recursive in nature, so it is slower and less efficient.
• D&C algorithms mostly run in polynomial time.

Greedy
• An optimization technique that tries to find an optimal solution from the set of feasible solutions.
• The optimal solution is obtained from a set of feasible solutions.
• A greedy algorithm does not consider a previously solved instance again, thus it avoids re-computation.
• Greedy algorithms are iterative in nature and hence faster.
• Greedy algorithms also run in polynomial time, but take less time than Divide and Conquer.

Dynamic Programming
• An optimization technique that tries to find an optimal solution from the set of feasible solutions.
• Divides the problem into small overlapping subproblems; they are interdependent and have the optimal substructure property.
• Subproblems are interdependent; DP remembers previously solved instances, thus it avoids re-computation.
• The DP (tabulation) approach is non-recursive in nature, so it is faster and efficient.
• DP algorithms also run in polynomial time, but take less time than the D&C and greedy methods.
Dynamic Programming
Why Dynamic Programming ?

► Dynamic programming is a powerful technique that allows to solve a different types of


problems in polynomial time for which a naive approach would take exponential time.

“Richard Bellman invented the name ‘dynamic programming’.”

► The basic idea of dynamic programming is to store the result of a sub problem after
solving it.

Dynamic Programming

What is Dynamic Programming ?

► An optimization technique tries to find an optimal solution from the set of feasible
solutions.

► Dynamic Programming follows two properties:


▪ Optimal sub structure property
▪ Overlapping sub problems

► Optimal sub structure property: If the optimal solution of the given problem can be
obtained by finding the optimal solutions of all the sub-problems.

► Overlapping sub problems: Sub problems are interdependent, and remembers previously
solved instance, thus it avoids the re-computation.

► Divides the problem into small overlapping sub problems, each is interdependent and
have optimal sub structure property.

Fibonacci sequence without Dynamic Programming

Fibonacci(n)
if n==0
return 0
if n==1
return 1
return Fibonacci(n-1) + Fibonacci(n-2)

Fib(3) occurs twice, Fib(1) occurs 4 times, etc.

F(n) = F(n − 1) + F(n − 2), where F(0) = 0, F(1) = 1

Time complexity is O(2ⁿ) (exponential).
Fibonacci sequence using Dynamic Programming

F = [] //new array
Fibonacci(n)
if F[n] == null
if n==0
F[n] = 0
else if n==1
F[n] = 1
else
F[n] = Fibonacci(n-1) + Fibonacci(n-2)
return F[n]

► Here, we are first checking if the result is already present in the array or not
(if F[n] == null)

► But are we sacrificing anything for the speed? Yes, memory. Dynamic programming
basically trades time with memory.

► Thus, we should take care that not an excessive amount of memory is used while storing
the solutions.
Top-Down Approach of Dynamic Programming

► There are two approaches to dynamic programming: the top-down approach and the
bottom-up approach.

Top-down approach

► The solution of the Fibonacci sequence on the previous slide was the top-down
approach.

► We start solving the problem in a natural recursive manner and store the solutions
of the subproblems along the way.

► We use the term memoization for this process of solving the problem and storing the
results for future calculations.

Bottom-Up Approach of Dynamic Programming

► The other way of solving the Fibonacci problem is by starting from the bottom.

► Start by calculating the 2nd term, then the 3rd, and so on, finally computing the
higher terms at the top.

► We use the term tabulation for this process because it is like filling up a table
from the start.

F = [] // new array
Fibonacci-Bottom-up(n)
    F[0] = 0
    F[1] = 1
    for i in 2 to n
        F[i] = F[i-1] + F[i-2]
    return F[n]
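The tabulation above can be sketched in Python as follows (with an added guard for n = 0, which the pseudocode leaves implicit):

```python
def fib_bottom_up(n):
    # Tabulation: fill F[0..n] from the smallest subproblem upward.
    if n == 0:
        return 0
    F = [0] * (n + 1)
    F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]

print(fib_bottom_up(10))  # 55
```

Unlike the top-down version, there is no recursion at all: the loop visits each subproblem exactly once, in order of increasing size.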

0-1 Knapsack Problem

Example:

► The maximum weight the bag can hold is 3 units, i.e., W = 3.

► In the case of solving the problem using brute force, we have to check each
possibility.

► We have two options for each item: either we can take it or leave it.

0-1 Knapsack Problem

Example:

► Time complexity with brute force is O(2^n).
0-1 Knapsack Problem

Recurrence for Knapsack(i, W), where item i has weight wm[i] and profit pm[i]:

if (W == 0 || i == 0)
    return 0

if wm[i] > W
    Knapsack(i, W) = Knapsack(i-1, W)

if wm[i] <= W    // take the maximum of these two choices
    Knapsack(i, W) = max(Knapsack(i-1, W), Knapsack(i-1, W - wm[i]) + pm[i])
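The recurrence can be sketched as a memoized Python function. The item weights and profits below are assumed example values for illustration, not the data from the slides' figure:

```python
def knapsack(i, W, wm, pm, memo=None):
    # Top-down evaluation of the recurrence, with memoization.
    if memo is None:
        memo = {}
    if W == 0 or i == 0:
        return 0
    if (i, W) not in memo:
        if wm[i] > W:
            # Item i does not fit: skip it.
            memo[(i, W)] = knapsack(i - 1, W, wm, pm, memo)
        else:
            # Maximum of leaving item i vs. taking it.
            memo[(i, W)] = max(knapsack(i - 1, W, wm, pm, memo),
                               knapsack(i - 1, W - wm[i], wm, pm, memo) + pm[i])
    return memo[(i, W)]

wm = [0, 3, 2, 4, 5]   # dummy 0th entry so items are 1-indexed (assumed data)
pm = [0, 8, 3, 9, 6]
print(knapsack(4, 5, wm, pm))  # 11 -- take items 1 and 2 (weight 3+2, profit 8+3)
```

With memoization each (i, W) pair is evaluated once, matching the O(n·W) bound derived for the tabular algorithm later.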

0-1 Knapsack Problem: Tabular Method

► If the weight limit W is 0, then we can't pick any item, so the optimized value is 0
in every case. The same happens when i is 0: there is no item to pick, and thus the
optimized value is 0 for any weight.

0-1 Knapsack Problem: Tabular Method

Starting from KS(1,1):
    w1 = 3, W = 1  =>  w1 > W
    KS(i, W) = KS(i-1, W)
    KS(1,1) = KS(0,1) = 0
    Similarly, KS(1,2) = KS(0,2) = 0.

For KS(1,3), w1 = 3 and W = 3:
    KS(i, W) = max{ KS(i-1, W), KS(i-1, W - wi) + pi }
    KS(1,3) = max{ KS(0,3), KS(0,0) + 8 } = max{0, 8} = 8

Similarly,
    KS(1,4) = max{ KS(0,4), KS(0,1) + 8 } = max{0, 8} = 8
    KS(1,5) = max{ KS(0,5), KS(0,2) + 8 } = max{0, 8} = 8

0-1 Knapsack Problem: Tabular Method

For KS(2,1): w2 > W, so KS(2,1) = KS(1,1) = 0

KS(2,2) = max{ KS(1,2), KS(1,0) + 3 } = max{0, 3} = 3
KS(2,3) = max{ KS(1,3), KS(1,1) + 3 } = max{8, 3} = 8

► Continuing in the same way, we finally get the optimal value in cell (4,5),
which is 15.

0-1 Knapsack Problem: Algorithm

KNAPSACK-01(n, W, wm, pm)
    cost[n+1, W+1]                  // table of subproblem solutions
    for w in 0 to W                 // O(W)
        cost[0, w] = 0
    for i in 0 to n                 // O(n)
        cost[i, 0] = 0
    for i in 1 to n                 // O(n*W)
        for w in 1 to W
            if wm[i] > w
                cost[i, w] = cost[i-1, w]
            else
                if pm[i] + cost[i-1, w-wm[i]] > cost[i-1, w]
                    cost[i, w] = pm[i] + cost[i-1, w-wm[i]]
                else
                    cost[i, w] = cost[i-1, w]
    return cost[n, W]

Time Complexity = O(n*W)
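A runnable Python sketch of KNAPSACK-01. The sample weights and profits are assumed values for illustration, not the slides' example data:

```python
def knapsack_01(n, W, wm, pm):
    # cost[i][w]: best profit using items 1..i with capacity w.
    # Row 0 and column 0 are initialized to 0 (the base cases).
    cost = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if wm[i] > w:
                cost[i][w] = cost[i - 1][w]       # item i does not fit
            else:
                cost[i][w] = max(cost[i - 1][w],  # leave item i
                                 pm[i] + cost[i - 1][w - wm[i]])  # take it
    return cost[n][W]

wm = [0, 3, 2, 4, 5]   # dummy 0th entry so items are 1-indexed (assumed data)
pm = [0, 8, 3, 9, 6]
print(knapsack_01(4, 5, wm, pm))  # 11
```

The two nested loops fill an (n+1) x (W+1) table, giving the O(n·W) time and space stated above.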

All-to-all shortest paths (Floyd-Warshall Algorithm)

► Find the distance between every pair of vertices in a weighted graph G.

► We can make |V| calls to Dijkstra's algorithm (if there are no negative edges),
which takes O(VE log V) total time.

► We can achieve O(n^3) time using dynamic programming (the Floyd-Warshall algorithm),
where n is the number of vertices.

Recursive equation:

d_ij^(k) = min( d_ij^(k-1), d_ik^(k-1) + d_kj^(k-1) ), with d_ij^(0) = w_ij

where the subpaths i --> k and k --> j each use only intermediate vertices
numbered 1, ..., k-1.

d_ij^(k): the cell value of the matrix giving the shortest distance from vertex i to
vertex j using only intermediate vertices from {1, ..., k}, i.e., possibly routed
through vertex k via the subpaths i --> k and k --> j.

D^(k): the matrix whose entries are the shortest distances between every pair of
vertices using only intermediate vertices from {1, ..., k}.
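A Python sketch of Floyd-Warshall that updates a single distance matrix in place; after round k the matrix holds the values d_ij^(k). The small input graph is an assumed example, not the one from the slides:

```python
INF = float('inf')

def floyd_warshall(W):
    # W: n x n edge-weight matrix (INF where there is no edge, 0 on the diagonal).
    n = len(W)
    d = [row[:] for row in W]          # D^(0) = W; copied so W is not modified
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Assumed 3-vertex example (0-indexed vertices):
W = [[0,   3,   INF],
     [INF, 0,   1],
     [4,   INF, 0]]
D = floyd_warshall(W)
print(D[0][2])  # 4: shortest 0 -> 2 path is 0 -> 1 -> 2
```

Note that reusing one matrix in place is safe here and brings the space down to O(n^2), even though the derivation conceptually builds n separate matrices D^(1), ..., D^(n).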
Floyd-Warshall Algorithm

Example-2

Input: Graph G = (V, E), where V = {1, 2, ..., n}, with edge-weight matrix W.

Output: n x n matrix of shortest-path lengths δ(i, j) for all i, j ∈ V.

Edge-weight matrix W (rows 3 and 4 are not recoverable from the slide):

        1    2    3    4    5
   1    0    3    8    ∞   -4
   2    ∞    0    ∞    1    7
   3    .    .    .    .    .
   4    .    .    .    .    .
   5    ∞    ∞    ∞    6    0

Time Complexity = O(n^3)
Space Complexity = O(n^3) (n matrices are constructed, each of size O(n^2));
updating a single matrix in place reduces this to O(n^2).
