DAA Part 2

Design and Analysis of Algorithm

Advanced Data Structure


(Binary Search Tree)

Lecture 28 (Prerequisite)
(Self Reading)
Overview
• Data structures that support many dynamic-set
operations.
• Can be used as both a dictionary and as a priority
queue.
• Basic operations take time proportional to the
height of the tree.
• For a complete binary tree with 𝑛 nodes: worst
case Θ(log 𝑛).
• For a linear chain of n nodes: worst case Θ(𝑛).
Trees Terminology
• A tree is a data structure that represents data in a
hierarchical manner.
• It associates every object to a node in the tree and
maintains the parent/child relationships between
those nodes.
• Each tree has exactly one node, called the
root, from which all other nodes of the tree extend
(and which has no parent of its own).
• The other end of the tree – the last level down —
contains the leaf nodes of the tree.
Trees Terminology
• The number of edges you pass through when you travel from the root to a
particular node is the depth of that node in the tree (node G in the figure
has a depth of 2).
• The height of the tree is the maximum depth of any node in the tree (the
tree in the figure has a height of 3).
(Figure: a tree with root A; children B, C, D; grandchildren E, F, G, H, I, J; and
a single node K on the last level.)
Trees Terminology
• The number of children emanating from a given node is referred to as its
degree — for example, node A in the figure has a degree of 3 and node J has
a degree of 1.
Binary Search Tree (BST)
Binary search trees are an important data structure for dynamic sets.
• Accomplish many dynamic-set operations in O(h) time,
where h = height of tree.
• A binary tree is represented by a linked data structure in which each node is an object.
• root[T ] points to the root of tree T .
• Each node contains the fields
• key (and possibly other satellite data).
• left: points to left child.
• right: points to right child.
• p: points to parent. p[root[T ]] = NIL.
Binary Search Tree (BST)
• Stored keys must satisfy the binary-search-tree property.
• If y is in left subtree of x, then key[y] ≤ key[x].
• If y is in right subtree of x, then key[y] ≥ key[x].

(Figure: a BST on 8 nodes with height 3 — root x = 10 with children 6 and 16;
6 has children 3 and 9; 16 has children 14 and 19; 17 is the left child of 19.
Every node y in x's left subtree satisfies key[y] ≤ key[x] = 10, and every node y
in x's right subtree satisfies key[y] ≥ 10.)
Binary Search Tree (BST)
• Possible operations on BST
• Traversing
• Searching
• Inserting
• Deleting
Binary Search Tree (BST)
• Possible operations on BST
• Traversing
• Searching
• Inserting
• Deleting
Binary Search Tree (BST)
• Traversing
The binary-search-tree property allows us to print the keys of a binary
search tree in sorted order, recursively, using an algorithm called an
in-order tree walk. Elements are printed in monotonically increasing
order.
How INORDER-TREE-TRAVERSAL works:
• Check to make sure that x is not NIL.
• Recursively, print the keys of the nodes in 𝑥′𝑠 𝑙𝑒𝑓𝑡 subtree.
• Print 𝑥′𝑠 𝑘𝑒𝑦.
• Recursively, print the keys of the nodes in 𝑥′𝑠 𝑟𝑖𝑔ℎ𝑡 subtree.
Binary Search Tree (BST)
• Traversing
• A common BST traversal algorithm (the in-order tree walk) is given below
for easy understanding:

Inorder-Tree(x)
  if x ≠ NIL
    then Inorder-Tree(left[x])
         print key[x]
         Inorder-Tree(right[x])

(Figure: the example BST rooted at 10.)
Binary Search Tree (BST)
• Traversing
• Example: an in-order tree walk of the tree below produces the output
3 → 6 → 9 → 10 → 14 → 16 → 17 → 19.
• Correctness: follows by induction directly from the binary-search-tree
property.
• Time: intuitively, the walk takes Θ(𝑛) time for a tree with 𝑛 nodes,
because we visit and print each node exactly once.

(Figure: the BST on keys 10, 6, 16, 3, 9, 14, 19, 17.)
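To make the walk concrete, here is a minimal runnable Python sketch (not part of the original slides); the Node class and the hand-built tree are assumptions made for illustration, and the keys are the ones in the example above.

  # In-order tree walk sketch; None plays the role of NIL.
  class Node:
      def __init__(self, key, left=None, right=None):
          self.key, self.left, self.right = key, left, right

  def inorder_tree_walk(x):
      # Visit the left subtree, then the node itself, then the right subtree.
      if x is not None:
          inorder_tree_walk(x.left)
          print(x.key, end=" ")
          inorder_tree_walk(x.right)

  # The example BST from the slides: 10, 6, 16, 3, 9, 14, 19, 17.
  root = Node(10,
              Node(6, Node(3), Node(9)),
              Node(16, Node(14), Node(19, Node(17))))
  inorder_tree_walk(root)   # prints: 3 6 9 10 14 16 17 19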
Binary Search Tree (BST)
• Possible operations on BST
• Traversing
• Searching
• Inserting
• Deleting
Binary Search Tree (BST)
• Searching
– The search procedure returns a pointer to the node with
key k if such a node exists; otherwise it returns NIL.

Tree-Search(x, k)
  if x = NIL or k = key[x]
    then return x
  if k < key[x]
    then return Tree-Search(left[x], k)
    else return Tree-Search(right[x], k)

Initial call is Tree-Search(root[T], k).


Binary Search Tree (BST)
• Searching
– How to search key 13 on the following Tree.

(Figure: a BST with root 15; 6 and 18 are its children; 3, 7, 17, 20 below
them; 2 and 4 are children of 3; 13 is the right child of 7; 9 is the left
child of 13.)
Binary Search Tree (BST)
• Searching
  – Searching for key 13: Tree-Search follows the downward path
    15 → 6 → 7 → 13 and returns the node with key 13.

Time: The algorithm is recursive and visits nodes on a downward path from the
root. Thus, the running time is O(h), where h is the height of the tree.
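As a hedged Python sketch of Tree-Search (using the same illustrative Node class as before, which is my own convenience, not part of the slides), here are the recursive version from the pseudocode and an equivalent iterative loop:

  # Tree-Search sketch; None stands for NIL.
  class Node:
      def __init__(self, key, left=None, right=None):
          self.key, self.left, self.right = key, left, right

  def tree_search(x, k):
      # Recursive search: O(h) on a tree of height h.
      if x is None or k == x.key:
          return x
      if k < x.key:
          return tree_search(x.left, k)
      return tree_search(x.right, k)

  def iterative_tree_search(x, k):
      # The same idea without recursion.
      while x is not None and k != x.key:
          x = x.left if k < x.key else x.right
      return x

  # The search-example tree rooted at 15.
  root = Node(15,
              Node(6, Node(3, Node(2), Node(4)), Node(7, None, Node(13, Node(9)))),
              Node(18, Node(17), Node(20)))
  print(tree_search(root, 13).key)        # 13, found via the path 15 -> 6 -> 7 -> 13
  print(iterative_tree_search(root, 99))  # None: key not present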
Binary Search Tree (BST)

• Searching (Minimum and Maximum)


The binary-search-tree property guarantees that
• the minimum key of a binary search tree is
located at the leftmost node, and
• the maximum key of a binary search tree is
located at the rightmost node.
Traverse the appropriate pointers (left or right)
until NIL is reached.
Binary Search Tree (BST)
• Searching
– Find minimum and maximum node in BST.
The following procedure returns a pointer to the minimum element in
the subtree rooted at a given node x, which we assume to be not
NIL.
Tree-Minimum(x)
  while left[x] ≠ NIL
    do x ← left[x]
  return x

(Figure: the example BST rooted at 15.)
Binary Search Tree (BST)
• Searching
– Find minimum and maximum node in BST.
The following procedure returns a pointer to the maximum element
in the subtree rooted at a given node x, which we assume to be not
NIL.

Tree-Maximum(x)
  while right[x] ≠ NIL
    do x ← right[x]
  return x

Time: Both procedures visit nodes that form a downward path from the root
to a leaf, so both run in O(h) time, where h is the height of the tree.
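The two procedures translate almost line for line into Python; this is a small illustrative sketch using the same ad-hoc Node class as in the earlier sketches (an assumption, not the slides' code):

  # Minimum / maximum sketches; None again plays the role of NIL.
  class Node:
      def __init__(self, key, left=None, right=None):
          self.key, self.left, self.right = key, left, right

  def tree_minimum(x):
      # Keep following left pointers until NIL is reached.
      while x.left is not None:
          x = x.left
      return x

  def tree_maximum(x):
      # Keep following right pointers until NIL is reached.
      while x.right is not None:
          x = x.right
      return x

  root = Node(15,
              Node(6, Node(3, Node(2), Node(4)), Node(7, None, Node(13, Node(9)))),
              Node(18, Node(17), Node(20)))
  print(tree_minimum(root).key)   # 2  (leftmost node)
  print(tree_maximum(root).key)   # 20 (rightmost node)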
Binary Search Tree (BST)
• Searching (Successor and predecessor)
• Assuming that all keys are distinct, the successor of a 𝑛𝑜𝑑𝑒 𝑥 is
the 𝑛𝑜𝑑𝑒 𝑦 such that 𝑘𝑒𝑦[𝑦] is the 𝑠𝑚𝑎𝑙𝑙𝑒𝑠𝑡 𝑘𝑒𝑦 > 𝑘𝑒𝑦[𝑥].
• If x has the largest key in the binary search tree, then we say that
𝑥′𝑠 successor is NIL.
There are two cases:
1. If 𝑛𝑜𝑑𝑒 𝑥 has a non-empty right subtree, then 𝑥 ′ 𝑠 successor is the
minimum in 𝑥′𝑠 right subtree.
2. If 𝑛𝑜𝑑𝑒 𝑥 has an empty right subtree, notice that:
• As long as we move up the tree through right-child links, we are
visiting nodes with smaller keys.
• x's successor y is the lowest ancestor of x whose left subtree contains
x (that is, x is the maximum in y's left subtree).
Binary Search Tree (BST)
• Searching (Successor and predecessor)
– Find successor node in BST.
Let us find the successor() with the help of a tree example.
Successor: the next larger key after node x.
For example, the successor of node 15 is 17, and the successor of node 13 is 15.

(Figure: the example BST rooted at 15.)
Binary Search Tree (BST)
• Searching (Successor and predecessor)
– Find successor node in BST.
In BST, if all the keys are distinct, then the successor of a node x is the
node with the smallest key greater than key[x].
The following procedure returns the successor of a node x in BST.

Tree-Successor(x)
  if right[x] ≠ NIL
    then return Tree-Minimum(right[x])
  y ← p[x]
  while y ≠ NIL and x = right[y]
    do x ← y
       y ← p[y]
  return y

(Figure: the example BST rooted at 15.)
Binary Search Tree (BST)
• Searching (Successor and predecessor)
– Find predecessor node in BST.
In a BST, if all the keys are distinct, the in-order predecessor of a
node x is the node that precedes x in an in-order traversal.
Let us find the in-order predecessor with the help of a tree example.

For example:
  The in-order predecessor of 2 does not exist.
  The in-order predecessor of 15 is 13.
  The in-order predecessor of 18 is 17.

(Figure: the example BST rooted at 15.)
Binary Search Tree (BST)
• Searching (Successor and predecessor)
– Find predecessor node in BST.
Tree-Predecessor(x)
  if left[x] ≠ NIL
    then return Tree-Maximum(left[x])
  y ← p[x]
  while y ≠ NIL and x = left[y]
    do x ← y
       y ← p[y]
  return y
Time: Both Tree-Successor and Tree-Predecessor visit nodes on a path either down
or up the tree, so the running time is O(h), where h is the height of the tree.
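Because the two procedures climb the tree, the nodes need parent pointers; the sketch below is an illustration (the Node class and the attach helper are my own), and it reproduces the examples above:

  # Successor / predecessor sketch; nodes carry a parent pointer p.
  class Node:
      def __init__(self, key):
          self.key = key
          self.left = self.right = self.p = None

  def attach(parent, key, side):
      child = Node(key)
      setattr(parent, side, child)   # side is "left" or "right"
      child.p = parent
      return child

  def tree_minimum(x):
      while x.left is not None:
          x = x.left
      return x

  def tree_maximum(x):
      while x.right is not None:
          x = x.right
      return x

  def tree_successor(x):
      if x.right is not None:             # Case 1: minimum of the right subtree
          return tree_minimum(x.right)
      y = x.p                             # Case 2: climb until x is a left child
      while y is not None and x is y.right:
          x, y = y, y.p
      return y

  def tree_predecessor(x):
      if x.left is not None:              # mirror image of tree_successor
          return tree_maximum(x.left)
      y = x.p
      while y is not None and x is y.left:
          x, y = y, y.p
      return y

  # The example tree rooted at 15.
  root = Node(15)
  n6  = attach(root, 6, "left");  n18 = attach(root, 18, "right")
  n3  = attach(n6, 3, "left");    n7  = attach(n6, 7, "right")
  n17 = attach(n18, 17, "left");  n20 = attach(n18, 20, "right")
  n13 = attach(n7, 13, "right");  n9  = attach(n13, 9, "left")
  print(tree_successor(n13).key)      # 15
  print(tree_predecessor(root).key)   # 13
  print(tree_successor(root).key)     # 17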
Binary Search Tree (BST)
• Possible operations on BST
• Traversing
• Searching
• Inserting
• Deleting
Binary Search Tree (BST)
• Insertion
• To insert value 𝑣 into the binary search tree, the procedure is given
node 𝑧, with 𝑘𝑒𝑦[𝑧] = 𝑣, 𝑙𝑒𝑓𝑡[𝑧] = 𝑁𝐼𝐿, 𝑎𝑛𝑑 𝑟𝑖𝑔ℎ𝑡[𝑧] = 𝑁𝐼𝐿.
• Beginning at root of the tree, trace a downward path, maintaining
two pointers.
• Pointer x: traces the downward path.
• Pointer y: “trailing pointer” to keep track of parent of x.
• Traverse the tree downward by comparing the value of node at x
with v, and move to the left or right child accordingly.
• When x is NIL, it is at the correct position for node z.
• Compare 𝑧′𝑠 value with 𝑦′𝑠 value, and insert z at either 𝑦′𝑠 left or
right, appropriately.
Binary Search Tree (BST)
• Insertion
Example: insert 14. Starting at the root, 14 is compared with 15, 6, 7, and 13
in turn; since 13 has no right child, the new node 14 is attached as the right
child of 13.

(Figure: the example BST rooted at 15, before and after inserting 14.)
Binary Search Tree (BST)
• Insertion
Tree-Insert(T, z)
  y ← NIL
  x ← root[T]
  while x ≠ NIL
    do y ← x
       if key[z] < key[x]
         then x ← left[x]
         else x ← right[x]
  p[z] ← y
  if y = NIL
    then root[T] ← z          ⊳ Tree T was empty
  else if key[z] < key[y]
    then left[y] ← z
    else right[y] ← z
Binary Search Tree (BST)
• Insertion (Time complexity)
• Same as Tree-Search(): on a tree of height h, the procedure takes O(h) time.
• Tree-Insert() can be combined with Inorder-Tree() to sort a given set of
numbers: insert all the keys, then walk the tree in order (see the sketch below).

(Figure: the example tree after inserting 14.)
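As a sketch of that last point, the following Python (illustrative, with an invented Node class) inserts a list of keys one by one with Tree-Insert and then recovers them in sorted order with an in-order walk:

  # Tree-Insert sketch plus the "insert then in-order walk" way of sorting.
  class Node:
      def __init__(self, key):
          self.key = key
          self.left = self.right = self.p = None

  def tree_insert(root, key):
      z, y, x = Node(key), None, root
      while x is not None:                # walk down, keeping y as x's parent
          y = x
          x = x.left if key < x.key else x.right
      z.p = y
      if y is None:                       # tree was empty
          return z
      if key < y.key:
          y.left = z
      else:
          y.right = z
      return root

  def inorder(x, out):
      if x is not None:
          inorder(x.left, out)
          out.append(x.key)
          inorder(x.right, out)
      return out

  root = None
  for k in [15, 6, 18, 3, 7, 17, 20, 2, 4, 13, 9, 14]:
      root = tree_insert(root, k)
  print(inorder(root, []))   # [2, 3, 4, 6, 7, 9, 13, 14, 15, 17, 18, 20]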
Binary Search Tree (BST)
• Possible operations on BST
• Traversing
• Searching
• Inserting
• Deleting
Binary Search Tree (BST)
Deletion
TREE-DELETE is broken into three cases.
Case 1: node z has no children.
  – Delete node z by making the parent of z point to NIL instead of to z.
Case 2: node z has one child.
  – Delete z by making the parent of z point to z's child instead of to z.
Case 3: node z has two children.
  – Node z's successor y has either no children or one child (y is the
    minimum node, with no left child, in z's right subtree).
  – Delete y from the tree (via Case 1 or 2).
  – Replace z's key and satellite data with y's.
Binary Search Tree (BST)
Deletion
Case 1: node z has no children.
  – Delete node z by making the parent of z point to NIL instead of to z.
Example: delete 14. Node 14 is a leaf, so its parent 13 is simply made to
point to NIL; the deletion of node 14 is done successfully.

(Figure: the example tree before and after deleting 14.)
Binary Search Tree (BST)
Deletion
Case 2: node z has one child.
  – Delete z by making the parent of z point to z's child instead of to z.
Example: delete node 13. Node 13 has the single child 9, so its parent 7 is
made to point to 9; the deletion of node 13 is done successfully.

(Figure: the example tree before and after deleting 13.)
Binary Search Tree (BST)
Deletion
Case 3: node z has two children.
  – Find node z's successor y; y has either no children or one child (y is
    the minimum node, with no left child, in z's right subtree).
  – Delete y from the tree (via Case 1 or 2).
  – Replace z's key and satellite data with y's key and data.
Example: delete node 6. The in-order successor of node 6 is 7; node 7 is
spliced out and its key is copied into z, so 6 is replaced by 7 (with children
3 and 9); the deletion of node 6 is done successfully.

(Figure: the example tree before and after deleting 6.)
Binary Search Tree (BST)
Deletion:

Tree-Delete(T, z)
  // Determine which node y to splice out: either z or z's successor.
  if left[z] = NIL or right[z] = NIL
    then y ← z
    else y ← Tree-Successor(z)
  // x is set to a non-NIL child of y, or to NIL if y has no children.
  if left[y] ≠ NIL
    then x ← left[y]
    else x ← right[y]
  // y is removed from the tree by manipulating pointers of p[y] and x.
  if x ≠ NIL
    then p[x] ← p[y]
  if p[y] = NIL
    then root[T] ← x
  else if y = left[p[y]]
    then left[p[y]] ← x
    else right[p[y]] ← x
  // If it was z's successor that was spliced out, copy its data into z.
  if y ≠ z
    then key[z] ← key[y]
         copy y's satellite data into z
  return y
Time: Tree-Delete runs in O(h) time on a tree of height h.
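A runnable Python sketch of Tree-Delete in the splice-out style above (the Node class, insert helper, and hand-built example are my own conveniences, not the slides' code); when the successor is spliced out, its key is copied into z, exactly as in the pseudocode:

  class Node:
      def __init__(self, key):
          self.key = key
          self.left = self.right = self.p = None

  def tree_insert(root, key):
      z, y, x = Node(key), None, root
      while x is not None:
          y = x
          x = x.left if key < x.key else x.right
      z.p = y
      if y is None:
          return z
      if key < y.key:
          y.left = z
      else:
          y.right = z
      return root

  def tree_minimum(x):
      while x.left is not None:
          x = x.left
      return x

  def tree_successor(x):
      if x.right is not None:
          return tree_minimum(x.right)
      y = x.p
      while y is not None and x is y.right:
          x, y = y, y.p
      return y

  def tree_delete(root, z):
      # Choose the node y to splice out: z itself (<= 1 child) or z's successor.
      y = z if z.left is None or z.right is None else tree_successor(z)
      x = y.left if y.left is not None else y.right   # y's only child (or None)
      if x is not None:
          x.p = y.p
      if y.p is None:
          root = x
      elif y is y.p.left:
          y.p.left = x
      else:
          y.p.right = x
      if y is not z:              # successor spliced out: copy its key into z
          z.key = y.key
      return root

  def inorder(x, out):
      if x is not None:
          inorder(x.left, out); out.append(x.key); inorder(x.right, out)
      return out

  root = None
  for k in [15, 6, 18, 3, 7, 17, 20, 2, 4, 13, 9]:
      root = tree_insert(root, k)
  node6 = root.left               # delete 6 (two children), as in the slides
  root = tree_delete(root, node6)
  print(inorder(root, []))        # [2, 3, 4, 7, 9, 13, 15, 17, 18, 20]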
Design and Analysis of Algorithm

Advanced Data Structure


(Red Black Tree)
(Properties, Rotation and Insertion)

LECTURE 28 -31
Overview

• A variation of binary search trees.


• Balanced: height is O(lg n), where n is the
number of nodes.
• Operations will take O(lg n) time in the worst
case.
Red Black Tree
• A red-black tree is a binary search tree + 1 bit per
node: an attribute color, which is either red or
black.
• All leaves are empty (nil) and colored black.
• A single sentinel, nil[T], is used for all the leaves
of red-black tree T.
• The color of nil[T] (i.e. of every leaf) is black.
• The root's parent is also nil[T].
• All other attributes of binary search trees are
inherited by red-black trees (i.e. key, left, right, and
p as parent).
• We don′t care about the key in nil[T ].
Red Black Tree
• Properties:
1. Every node is either red or black.
2. The root is always black.
3. Every leaf (nil[T]) is black.
4. If a node is red, then both its children are black.
(Hence no two red nodes in a row are allowed on a
simple path from the root to a leaf.)
5. For each node, all paths from the node to
descendant leaves contain the same number of
black nodes.
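The five properties can be checked mechanically. The sketch below (not from the slides) counts black nodes on every root-to-leaf path and rejects any subtree that violates property 4 or 5; the RBNode class and the colors assigned in the example are assumptions chosen for illustration (one valid coloring of the keys used in the next slide's figure).

  # Red-black property checker sketch; None plays the role of the black nil[T].
  class RBNode:
      def __init__(self, key, color, left=None, right=None):
          self.key, self.color, self.left, self.right = key, color, left, right

  def black_count(x):
      # Returns the number of black nodes on every path from x down to a leaf
      # (counting x itself), or -1 if property 4 or 5 is violated below x.
      if x is None:
          return 0
      lb, rb = black_count(x.left), black_count(x.right)
      if lb < 0 or rb < 0 or lb != rb:                 # property 5 violated
          return -1
      if x.color == "RED":
          for c in (x.left, x.right):                  # property 4: red node,
              if c is not None and c.color == "RED":   # both children black
                  return -1
          return lb
      return lb + 1                                    # black node adds one

  def is_red_black(root):
      # Property 2 (black root) plus the recursive check; properties 1 and 3
      # hold by construction here (every node has a color, None is black).
      return (root is None or root.color == "BLACK") and black_count(root) >= 0

  # Keys 26, 17, 41, 30, 38, 47, 50 with one valid (assumed) color assignment.
  t = RBNode(26, "BLACK",
          RBNode(17, "BLACK"),
          RBNode(41, "RED",
              RBNode(30, "BLACK", None, RBNode(38, "RED")),
              RBNode(47, "BLACK", None, RBNode(50, "RED"))))
  print(is_red_black(t))   # True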
Red Black Tree
Height of a red-black tree
• The height of a node is the number of edges on a longest path from the
node down to a leaf.
• The black-height of a node x, bh(x), is the number of black nodes
(including nil[T]) on the path from x down to a leaf, not counting x itself.
By property 5, black-height is well defined.
• Since red nodes cannot have red children (property 4), at least half the
nodes on any path from the root to a leaf are black, so a red-black tree of
height h has black-height ≥ h/2.
• A subtree rooted at a node of black-height bh contains at least 2^bh − 1
internal nodes, so n ≥ 2^(h/2) − 1; hence the height of a red-black tree with
n nodes is h ≤ 2 log2(n + 1).

(Figure: an example red-black tree on keys 26, 17, 41, 30, 47, 38, 50, with
each node annotated by its height h and black-height bh; the root 26 has
h = 4 and bh = 2.)
Red Black Tree
Rotations
• The basic tree-restructuring operation.
• Needed to maintain red-black trees as balanced
binary search trees.
• Changes the local pointer structure. (Only pointers
are changed.)
• Won′t upset the binary-search-tree property.
• Have both left rotation and right rotation. They are
inverses of each other.
• A rotation takes a red-black-tree and a node within
the tree.
Red Black Tree
Rotations (Example)
• Left rotation at node x = 5 whose right child is y = 10 (the subtree holds
the keys 5, 2, 10, 8, 12, 6, 9): after LEFT-ROTATE, y = 10 becomes the root
of the subtree, x = 5 becomes y's left child, and y's old left subtree
(rooted at 8, with children 6 and 9) becomes x's new right subtree; 2 stays
as x's left child and 12 stays as y's right child.
• Keep this picture in mind: a rotation only rewires a constant number of
pointers and preserves the in-order ordering of the keys.
Red Black Tree
Rotations
LEFT-ROTATE(T, x)
  y ← right[x]                 // Set y.
  right[x] ← left[y]           // Turn y's left subtree into x's right subtree.
  if left[y] ≠ nil[T ]
    then p[left[y]] ← x
  p[y] ← p[x]                  // Link x's parent to y.
  if p[x] = nil[T ]
    then root[T ] ← y
  else if x = left[p[x]]
    then left[p[x]] ← y
    else right[p[x]] ← y
  left[y] ← x                  // Put x on y's left.
  p[x] ← y
Red Black Tree
Rotations
The pseudocode for LEFT-ROTATE assumes that
right[x] ≠ nil[T ], and
the root's parent is nil[T ].
Pseudocode for RIGHT-ROTATE is symmetric: exchange left and
right everywhere.
Red Black Tree
Rotations
RIGHT-ROTATE(T, x)
  y ← left[x]                  // Set y.
  left[x] ← right[y]           // Turn y's right subtree into x's left subtree.
  if right[y] ≠ nil[T ]
    then p[right[y]] ← x
  p[y] ← p[x]                  // Link x's parent to y.
  if p[x] = nil[T ]
    then root[T ] ← y
  else if x = right[p[x]]
    then right[p[x]] ← y
    else left[p[x]] ← y
  right[y] ← x                 // Put x on y's right.
  p[x] ← y
Red Black Tree
Rotations (Example)
Demonstration of a left rotation that maintains the in-order ordering of keys.
• Before rotation: keys of 𝑥′𝑠 left
subtree ≤ 11 ≤ keys of 𝑦′𝑠 left subtree
≤ 18 ≤ keys of 𝑦′𝑠 right subtree.
• Rotation makes 𝑦′𝑠 left subtree into
𝑥′𝑠 right subtree.
• After rotation: keys of 𝑥′𝑠 left subtree
≤ 11 ≤ keys of 𝑥′𝑠 right subtree ≤ 18 ≤
keys of 𝑦′𝑠 right subtree.
• Time complexity : O(1) for both LEFT-
ROTATE and RIGHT-ROTATE, since a
constant number of pointers are
modified.
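A runnable Python sketch of LEFT-ROTATE on a plain BST with parent pointers (the Node class and the tree built from the earlier 5/2/10/8/12/6/9 example are illustrative assumptions); RIGHT-ROTATE is the same code with left and right exchanged:

  # LEFT-ROTATE sketch; None stands for nil[T].
  class Node:
      def __init__(self, key, left=None, right=None):
          self.key, self.left, self.right, self.p = key, left, right, None
          for c in (left, right):        # wire up parent pointers on construction
              if c is not None:
                  c.p = self

  def left_rotate(root, x):
      y = x.right                    # set y
      x.right = y.left               # turn y's left subtree into x's right subtree
      if y.left is not None:
          y.left.p = x
      y.p = x.p                      # link x's parent to y
      if x.p is None:
          root = y
      elif x is x.p.left:
          x.p.left = y
      else:
          x.p.right = y
      y.left = x                     # put x on y's left
      x.p = y
      return root

  def inorder(x, out=None):
      out = [] if out is None else out
      if x is not None:
          inorder(x.left, out); out.append(x.key); inorder(x.right, out)
      return out

  # The rotation example: x = 5, y = 10.
  x = Node(5, Node(2), Node(10, Node(8, Node(6), Node(9)), Node(12)))
  before = inorder(x)
  root = left_rotate(x, x)           # x was the root, so y = 10 becomes the new root
  print(root.key, inorder(root) == before)   # 10 True  (in-order order preserved)

Only a constant number of pointers change, which is why both rotations run in O(1) time.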
Red Black Tree
Insertion:

Start by doing regular binary-search-tree insertion:


Insertion:
Red Black Tree
RB-INSERT(T, z)
y ← nil[T ]
x ← root[T ]
while x ≠ nil[T ]
do y ← x
if key[z] < key[x]
then x ← left[x]
else x ← right[x]
p[z] ← y
if y = nil[T ]
then root[T ] ← z
else if key[z] < key[y]
then left[y] ← z
else right[y] ← z
left[z] ← nil[T ]
right[z] ← nil[T ]
color[z] ← RED
RB-INSERT-FIXUP(T, z)
Red Black Tree
Insertion:
The lines of RB-INSERT up to and including the assignment of left[y] or
right[y] are simply the ordinary binary-search-tree insertion (Tree-Insert);
the last four lines set z's children to nil[T], color z red, and call
RB-INSERT-FIXUP.
Red Black Tree
Insertion:
• RB-INSERT ends by coloring the new node z red.
• Then it calls RB-INSERT-FIXUP to maintain the properties of a red-black
Tree.
Which property might be violated?
1. Every node is either red or black — OK.
2. The root is always black — if z is the root, then there is a violation;
   otherwise OK.
3. Every leaf (nil[T]) is black — OK.
4. If a node is red, then both its children are black — if p[z] is red, there
   is a violation: both z and p[z] are red.
5. For each node, all paths from the node to descendant leaves contain the
   same number of black nodes — OK.
Red Black Tree
Insertion (RB-INSERT-FIXUP CASE 1) (y is red and z is a right child)
[Nodes with bold outline indicate black nodes and light outline indicate red nodes]

if color[y] = RED
then color[p[z]] ← BLACK //Case 1
color[y] ← BLACK //Case 1
color[p[p[z]]] ← RED //Case 1
z ← p[p[z]] //Case 1
Red Black Tree
Insertion (RB-INSERT-FIXUP CASE 1) (y is red and z is a left child)
[Nodes with bold outline indicate black nodes and light outline indicate red nodes]

if color[y] = RED
then color[p[z]] ← BLACK //Case 1
color[y] ← BLACK //Case 1
color[p[p[z]]] ← RED //Case 1
z ← p[p[z]] //Case 1
Red Black Tree
Insertion (RB-INSERT-FIXUP CASE 2) (y is black, z is a right child)
[Nodes with bold outline indicate black nodes and light outline indicate red nodes]

if z = right[p[z]]
then z ← p[z] //Case 2
LEFT-ROTATE(T, z) //Case 2
Red Black Tree
Insertion (RB-INSERT-FIXUP CASE 3) (y is black, z is a left child)
[Nodes with bold outline indicate black nodes and light outline indicate red nodes]

color[p[z]] ← BLACK // Case 3


color[p[p[z]]] ← RED // Case 3
RIGHT-ROTATE(T, p[p[z]]) // Case 3
Red Black Tree
Insertion:
RB-INSERT-FIXUP(T, z)
while color[p[z]] = RED
do if p[z] = left[p[p[z]]]
then y ← right[p[p[z]]]
if color[y] = RED
then
Apply Case 1
else if z = right[p[z]]
then
Apply Case 2
Apply Case 3
else (same as then clause with "right" and "left" exchanged)
color[root[T ]] ← BLACK
Note: once Case 2 has been applied, Case 3 is applied immediately afterwards.
Red Black Tree
Insertion:
RB-INSERT-FIXUP(T, z)
  while color[p[z]] = RED
    do if p[z] = left[p[p[z]]]
         then y ← right[p[p[z]]]
              if color[y] = RED
                then color[p[z]] ← BLACK               //Case 1
                     color[y] ← BLACK                   //Case 1
                     color[p[p[z]]] ← RED               //Case 1
                     z ← p[p[z]]                        //Case 1
                else if z = right[p[z]]
                       then z ← p[z]                    //Case 2
                            LEFT-ROTATE(T, z)           //Case 2
                     color[p[z]] ← BLACK                //Case 3
                     color[p[p[z]]] ← RED               //Case 3
                     RIGHT-ROTATE(T, p[p[z]])           //Case 3
         else (same as then clause with "right" and "left" exchanged)
  color[root[T ]] ← BLACK
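Putting RB-INSERT and RB-INSERT-FIXUP together, here is a hedged, self-contained Python sketch; the Node/RBTree layout and names are my own, but the logic follows the pseudocode above, with a black sentinel node playing the role of nil[T].

  RED, BLACK = "RED", "BLACK"

  class Node:
      def __init__(self, key, color, nil=None):
          self.key, self.color = key, color
          self.left = self.right = self.p = nil

  class RBTree:
      def __init__(self):
          self.nil = Node(None, BLACK)               # the sentinel nil[T]
          self.nil.left = self.nil.right = self.nil.p = self.nil
          self.root = self.nil

      def left_rotate(self, x):
          y = x.right
          x.right = y.left
          if y.left is not self.nil:
              y.left.p = x
          y.p = x.p
          if x.p is self.nil:
              self.root = y
          elif x is x.p.left:
              x.p.left = y
          else:
              x.p.right = y
          y.left = x
          x.p = y

      def right_rotate(self, x):                     # mirror image of left_rotate
          y = x.left
          x.left = y.right
          if y.right is not self.nil:
              y.right.p = x
          y.p = x.p
          if x.p is self.nil:
              self.root = y
          elif x is x.p.right:
              x.p.right = y
          else:
              x.p.left = y
          y.right = x
          x.p = y

      def insert(self, key):
          z = Node(key, RED, nil=self.nil)           # new nodes start out red
          y, x = self.nil, self.root
          while x is not self.nil:                   # ordinary BST descent
              y = x
              x = x.left if key < x.key else x.right
          z.p = y
          if y is self.nil:
              self.root = z
          elif key < y.key:
              y.left = z
          else:
              y.right = z
          self.insert_fixup(z)

      def insert_fixup(self, z):
          while z.p.color == RED:
              if z.p is z.p.p.left:
                  y = z.p.p.right                    # the uncle
                  if y.color == RED:                 # Case 1: recolor and move up
                      z.p.color = y.color = BLACK
                      z.p.p.color = RED
                      z = z.p.p
                  else:
                      if z is z.p.right:             # Case 2: rotate into Case 3
                          z = z.p
                          self.left_rotate(z)
                      z.p.color = BLACK              # Case 3: recolor + rotate
                      z.p.p.color = RED
                      self.right_rotate(z.p.p)
              else:                                  # same with left/right exchanged
                  y = z.p.p.left
                  if y.color == RED:
                      z.p.color = y.color = BLACK
                      z.p.p.color = RED
                      z = z.p.p
                  else:
                      if z is z.p.left:
                          z = z.p
                          self.right_rotate(z)
                      z.p.color = BLACK
                      z.p.p.color = RED
                      self.left_rotate(z.p.p)
          self.root.color = BLACK

  # Example 2 from the slides: after these insertions the root is 40.
  t = RBTree()
  for k in [50, 40, 30, 45, 20, 5]:
      t.insert(k)
  print(t.root.key, t.root.color)    # 40 BLACK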
Red Black Tree
Insertion (Example 1):
Insert the following elements into an empty RB-Tree: [ 11, 2, 14, 1, 7, 15, 5, 8, 4 ]
• Insert 11: z is the root; the fix-up recolors it black.
• Insert 2: no fix-up required.
• Insert 14: no fix-up required.
• Insert 1: Case 1 fix-up (recoloring) is applied, and the root is then
  recolored black.
• Insert 7: no fix-up required.
• Insert 15: no fix-up required.
• Insert 5: Case 1 fix-up (recoloring) is applied.
• Insert 8: no fix-up required.
• Insert 4: the insertion and its fix-up are shown step by step in the
  original slides' figures, which are omitted here.
Red Black Tree
Insertion (Example 2):
Insert the following elements into an empty RB-Tree: [ 50, 40, 30, 45, 20, 5 ]
• Insert 50: z is the root; the fix-up recolors it black.
• Insert 40: no fix-up required.
• Insert 30: Case 3 fix-up is applied (recoloring and a right rotation),
  giving 40 as the root with children 30 and 50.
• Insert 45: Case 1 fix-up (recoloring) is applied, and the root is then
  recolored black.
• Insert 20: no fix-up required.
• Insert 5: Case 3 fix-up is applied; the final tree has root 40 with
  children 20 and 50, where 20 has children 5 and 30 and 50 has left child 45.
(The step-by-step figures are omitted here.)
Red Black Tree
Insertion (Example 3):
Insert the following elements into an empty RB-Tree: [ 41, 38, 31, 12, 19, 8 ]
• Insert 41: z is the root; the fix-up recolors it black.
• Insert 38: no fix-up required.
• Insert 31: Case 3 fix-up is applied; after the right rotation the subtree
  has root 38 with children 31 and 41.
• Insert 12: Case 1 fix-up (recoloring) is applied, and the root is then
  recolored black.
• Insert 19: Case 2 fix-up (a rotation) is applied, followed immediately by
  Case 3; the resulting tree has root 38 with children 19 and 41, where 19
  has children 12 and 31.
• Insert 8: Case 1 fix-up (recoloring) is applied.
(The step-by-step figures are omitted here.)
Red Black Tree
Insertion (Example 4):
Insert the following elements into an empty RB-Tree: [ 5, 10, 15, 25, 20, 30 ]
• Insert 5: z is the root; the fix-up recolors it black.
• Insert 10: no fix-up required.
• Insert 15: Case 3 fix-up is applied; after the rotation the subtree has
  root 10 with children 5 and 15.
• Insert 25: Case 1 fix-up (recoloring) is applied, and the root is then
  recolored black.
• Insert 20: Case 2 fix-up (a rotation) is applied, followed immediately by
  Case 3; the resulting tree has root 10 with children 5 and 20, where 20 has
  children 15 and 25.
• Insert 30: Case 1 fix-up (recoloring) is applied.
(The step-by-step figures are omitted here.)
Design and Analysis of Algorithm

Advanced Data Structure


(Red Black Tree)
(Deletion)

LECTURE 32 - 36
Overview

• A variation of binary search trees.


• Balanced: height is O(lg n), where n is the
number of nodes.
• Operations will take O(lg n) time in the worst
case.
Red Black Tree (Deletion)

Start by doing regular binary-search-


tree deletion:
Red Black Tree (Deletion)

(Figure: an example red-black tree rooted at 15, marking a node with two
children, a node with one child, and a node with no children — the three
situations the deletion routine must handle.)
Red Black Tree (Deletion)
RB-DELETE(T, z)
  if left[z] = nil[T] or right[z] = nil[T ]
    then y ← z
    else y ← TREE-SUCCESSOR(z)
  if left[y] ≠ nil[T ]
    then x ← left[y]
    else x ← right[y]
  p[x] ← p[y]
  if p[y] = nil[T ]
    then root[T ] ← x
  else if y = left[p[y]]
    then left[p[y]] ← x
    else right[p[y]] ← x
  if y ≠ z
    then key[z] ← key[y]
         copy y's satellite data into z
  if color[y] = BLACK
    then RB-DELETE-FIXUP(T, x)
  return y
Red Black Tree (Deletion)
• y is the node that is actually spliced out of the tree (z itself, or the
  successor of z when z has two children).
• x is either
  • y's sole non-sentinel child before y was deleted, or
  • the sentinel, if y had no children.
• In both cases, p[x] is now the node that was previously y's parent.
• Example: deleting a node z that has two children: y is z's successor and
  x is y's child (see the figures in the slides).
Red Black Tree (Deletion)
Now Check
• If y is black, we could have violations of red-black properties:
• Which property might be violated?
P1. Every node is either red or black ?
OK
P2. The root is always black ?
If y is the root and x is red, then the root has become red.
P3. Every leaf (nil[T]) is black ?
OK
P4. If a node is red, then both its children are black ?
Violation if p[y] and x are both red.
Red Black Tree (Deletion)
Now Check
P5. For each node, all paths from the node to descendant leaves contain
the same number of black nodes (i.e. bh(x))?
• Any path containing y now has 1 fewer black node.
• Correct this by giving x an "extra black", making the node
"doubly black".
• Add 1 to count of black nodes on paths containing x.
• Now property 5 is OK, but property 1 is not.
• x is either doubly black (if color[x] = BLACK) or red & black
(if color[x] = RED).
• The attribute color[x] is still either RED or BLACK. No new
values for color attribute.
• In other words, the extra blackness on a node is by virtue of x
pointing to the node.
Red Black Tree (Deletion)
Basic Idea for Deletion
Move the extra black up the tree until

• x points to a red-and-black node ⇒ turn it into a (singly) black node,

• x points to the root ⇒ simply remove the extra black, or

• suitable rotations (LL or RR) and recolorings can be performed,
finishing the deletion while maintaining the red-black tree properties.
Red Black Tree (Deletion)
The basic idea for deletion is carried out by the RB-DELETE-FIXUP function
(i.e. the violations are removed by calling RB-DELETE-FIXUP).

The fix-up loop runs as long as x ≠ root[T] and color[x] = BLACK.

First, find the sibling of x, called w; w cannot be nil[T].

There are eight cases in total:

• 4 of which are symmetric to the other 4 (as in RB-INSERT-FIXUP()).
• We look at the cases in which x is the left child of its parent.
Red Black Tree (Deletion)

Case 1: w (the sibling of x) is red and w = right[p[x]].

Case 2: w (the sibling of x) is black and both of w's children are black.

Case 3: w (the sibling of x) is black, w's left child is red, and w's right
child is black.

Case 4: w (the sibling of x) is black, w's left child is black, and w's right
child is red.
Red Black Tree (Deletion)
• Case 1: w (the sibling of x) is red and w = right[p[x]].
Action to be taken
• Change color[w] = BLACK.
• Change color[p[x]] = RED.
• LEFT-ROTATE(T, p[x]).
• Move w = right[p[x]].
• Go immediately to Case 2, 3, or 4.

(Case 1 figure. Nodes with bold outline indicate black nodes and light outline
indicate red nodes.)
Red Black Tree (Deletion)
• Case 2: w (the sibling of x) is black and both of w's children are black.
Action to be taken
• Change color[w] = RED.
• Move x = p[x].

(Case 2 figure. Nodes with bold outline indicate black nodes and light outline
indicate red nodes; a node marked 'c' has unknown color, i.e. may be red or black.)
Red Black Tree (Deletion)
• Case 3: w (the sibling of x) is black, w's left child is red, and w's right
  child is black.
Action to be taken
• color[left[w]] = BLACK
• color[w] = RED
• RIGHT-ROTATE(T, w)
• Move w = right[p[x]]

(Case 3 figure. Nodes with bold outline indicate black nodes and light outline
indicate red nodes; a node marked 'c' has unknown color, i.e. may be red or black.)
Red Black Tree (Deletion)
• Case 4: w (the sibling of x) is black, w's left child is black, and w's right
  child is red.
Action to be taken
• color[w] = color[p[x]]
• color[p[x]] = BLACK
• color[right[w]] = BLACK
• LEFT-ROTATE(T, p[x])
• x = root[T]

(Case 4 figure. Nodes with bold outline indicate black nodes and light outline
indicate red nodes; a node marked 'c' has unknown color, i.e. may be red or black.)
Red Black Tree (Deletion)
RB-DELETE-FIXUP(T, x)
while x ≠ root[T ] and color[x] = BLACK
do if x = left[p[x]]
then w ← right[p[x]]
if color[w] = RED
then Apply Case 1
if color[left[w]] = BLACK and color[right[w]] = BLACK
then Apply Case 2
else if color[right[w]] = BLACK
then Apply Case 3
Apply Case 4
else (same as then clause with “right” and “left” exchanged)

color[x] ← BLACK
Red Black Tree (Deletion)
RB-DELETE-FIXUP(T, x)
while x ≠ root[T ] and color[x] = BLACK
do if x = left[p[x]]
then w ← right[p[x]]
if color[w] = RED
then color[w] ← BLACK ✄ Case 1
color[p[x]] ← RED ✄ Case 1
LEFT-ROTATE(T, p[x]) ✄ Case 1
w ← right[p[x]] ✄ Case 1
if color[left[w]] = BLACK and color[right[w]] = BLACK
then color[w] ← RED ✄ Case 2
x ← p[x] ✄ Case 2
else if color[right[w]] = BLACK
then color[left[w]] ← BLACK ✄ Case 3
color[w] ← RED ✄ Case 3
RIGHT-ROTATE(T, w) ✄ Case 3
w ← right[p[x]] ✄ Case 3
color[w] ← color[p[x]] ✄ Case 4
color[p[x]] ← BLACK ✄ Case 4
color[right[w]] ← BLACK ✄ Case 4
LEFT-ROTATE(T, p[x]) ✄ Case 4
x ← root[T ] ✄ Case 4
else (same as then clause with “right” and “left” exchanged)
color[x] ← BLACK
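As a sketch only: the functions below transcribe RB-DELETE and RB-DELETE-FIXUP into Python as plain functions over the RBTree and Node classes from the insertion sketch earlier (so t.nil, t.root, t.left_rotate and t.right_rotate are assumed to exist exactly as defined there); the logic follows the pseudocode above.

  RED, BLACK = "RED", "BLACK"

  def tree_minimum(t, x):
      while x.left is not t.nil:
          x = x.left
      return x

  def tree_successor(t, x):
      if x.right is not t.nil:
          return tree_minimum(t, x.right)
      y = x.p
      while y is not t.nil and x is y.right:
          x, y = y, y.p
      return y

  def rb_delete(t, z):
      # Splice out either z itself or z's successor y, as in the slides.
      y = z if z.left is t.nil or z.right is t.nil else tree_successor(t, z)
      x = y.left if y.left is not t.nil else y.right
      x.p = y.p                            # the sentinel's parent may be set too
      if y.p is t.nil:
          t.root = x
      elif y is y.p.left:
          y.p.left = x
      else:
          y.p.right = x
      if y is not z:                       # copy the spliced successor's key into z
          z.key = y.key
      if y.color == BLACK:                 # removing a black node may break property 5
          rb_delete_fixup(t, x)
      return y

  def rb_delete_fixup(t, x):
      while x is not t.root and x.color == BLACK:
          if x is x.p.left:
              w = x.p.right                # sibling of x
              if w.color == RED:           # Case 1
                  w.color = BLACK
                  x.p.color = RED
                  t.left_rotate(x.p)
                  w = x.p.right
              if w.left.color == BLACK and w.right.color == BLACK:
                  w.color = RED            # Case 2: push the extra black up
                  x = x.p
              else:
                  if w.right.color == BLACK:   # Case 3: rotate into Case 4
                      w.left.color = BLACK
                      w.color = RED
                      t.right_rotate(w)
                      w = x.p.right
                  w.color = x.p.color      # Case 4: terminal rotation
                  x.p.color = BLACK
                  w.right.color = BLACK
                  t.left_rotate(x.p)
                  x = t.root
          else:                            # same as above with left/right exchanged
              w = x.p.left
              if w.color == RED:
                  w.color = BLACK
                  x.p.color = RED
                  t.right_rotate(x.p)
                  w = x.p.left
              if w.right.color == BLACK and w.left.color == BLACK:
                  w.color = RED
                  x = x.p
              else:
                  if w.left.color == BLACK:
                      w.right.color = BLACK
                      w.color = RED
                      t.left_rotate(w)
                      w = x.p.left
                  w.color = x.p.color
                  x.p.color = BLACK
                  w.left.color = BLACK
                  t.right_rotate(x.p)
                  x = t.root
      x.color = BLACK

  # Usage (with the earlier RBTree t): rb_delete(t, some_node) keeps t a valid
  # red-black tree and runs in O(lg n) time.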
Red Black Tree
Insertion (Example 1):
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Red Black Tree
Insertion (Example 1):
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -50

50 Aply Fix up
z
Red Black Tree
Insertion (Example 1):
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -50

50 Apply Fix up 50
z
Insertion (Example 1):
Red Black Tree No Fixup required

Insert the following elements into an empty RB-Tree.


[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -40

50

No Fixup required
40
z
Insertion (Example 1):
Red Black Tree No Fixup required

Insert the following elements into an empty RB-Tree.


[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -30

50

40 NIL

30
z
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -30 50
50
Case 3
Fixup Apply
40 NIL
y
40 NIL
y

30
30

40

z
30 50
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -45

40

30 50
y

z 45
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -45

40

30 50
y

z 45
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -45

40
Case 1 40
Fixup Apply

30 50
y 30
y 50

z 45
z 45
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -45

40
Case 1 40 40
Fixup Apply Apply Fix up

30 50
y 30 30 50
y 50 y

z 45
z 45 z 45
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -20

40

30 50

z 20
45
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -20

40

30 No Fixup required
50

z 20
45
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -5

40

30 50

20
45

z
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -5

40

30 50

20 NIL 45

y
5

z
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -5
40
40
Case 3
Fixup Apply
30 50
30 50

20 NIL 45
20 NIL 45

y
5
y
5

z
z
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -5

40
40
Case 3
Fixup Apply
30 50
30 50

20 NIL 45
20 NIL 45

y
5
y
5

z
z
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Insert -5 40
40
Case 3
Fixup Apply
30 50
30 50

20 NIL 45
20 NIL 45

y
5
y
5 40
z
z
20 50

5 30 45

z
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5

40

20 50

5 30 45

After the Insertion the tree finally looks as shown above.


Now we perform Deletion on RB Tree.
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -40
z 40

20 50

5 30 45
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -40
z 40
z 40
Find
successor y

20 50
20 50

5 30 45
5 30 45
y
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -40
z 40
z 40
Find
successor y

20 50
20 50

5 30 45
5 30 45
y
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -40
z 40
z 40
Find
successor y

20 50
20 50

5 30 45
5 30 45
y

If y has no child and then


simply exchange the key of y
with key of z and delete y
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5 z 40
Delete -40
z 40
Find
successor y 20 50

20 50
5 30 45

5 30 45 y
z
45
If y has no child then simply
exchange the key of y with
key of z and delete y
20 50

5 30
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -20

45
z

20 50

5 30
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -20

45 Find 45
z successor y z

20 50 20 50

5 30 5 30
y
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -20

45 Find 45
z successor y z

20 50 20 50

5 30 5 30
y
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -20

45 Find 45
z successor y z

20 50 20 50

5 30 5 30
y

If y has no child and then


simply exchange the key of y
with key of z and delete y
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
45
Delete -20 z
45 Find
z successor y
20 50

20 50
5 30
y
5 30
45
z

30 If y has no child and then


50
simply exchange the key of y
with key of z and delete y

5
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -30

45
z Find y

30 50

5
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -30

45
z Find y No successor,
Hence y=z
and x=left[y]
30 50

5
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -30

45 No successor,
z Find y
Hence y=z
and x=left[y] 45
z
30 50

30 50
5
x y
5
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -30

45 No successor,
z Find y
Hence y=z
and x=left[y] 45
z
30 50

30 50
5
x y
5

P[x]=p[y]
and
Left[p[y]]=x
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -30

45 No successor,
z Find y Hence y=z
and
x=right[y] 45
z
30 50

30 50
5
x y
45 5

x P[x]=p[y]
5 50
and
Left[p[y]]=x
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -30

45 No successor,
Find
z Hence y=z
successor y
and
x=right[y] 45
z
30 50

30 50
5
x y
45 5

x P[x]=p[y]
5 50
and
Left[p[y]]=x
Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -30

45 It was found that, we are


unable to enter inside the
while loop, so execute the
last line of RB-Delete-
x Fixup,(i.e. color[x]=Black)
5 50

while x ≠ root[T ] and color[x] = BLACK


Insertion (Example 1):
Red Black Tree
Insert the following elements into an empty RB-Tree.
[ 50, 40, 30, 45, 20, 5]
and then perform delete 40, 20, 30, 45, & 5
Delete -30

45 It was found that, we are


unable to enter inside the
while loop, so execute the
last line of RB-Delete-
x Fixup,(i.e. color[x]=Black)
5 50

45

x
5 50
Delete 45
• Mark z = 45 and find the successor y = 50. y has no child, so simply exchange the key of y with the key of z and delete y.
  (Tree: 50 at the root with left child 5; x = NIL takes y's old position on the right.)
• Find the sibling w (= 5) of x and apply RB-Delete-Fixup Case 2, after which x moves up to the root 50 and the loop terminates.
  (Tree after the fixup: 50 at the root with left child 5.)
Delete 5
• Mark z = 5 and find y. z has no child, so y = z and z is simply deleted.
  (Tree: the single node 50 remains.)
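The three deletions above all follow the same CLRS-style outline: pick z, choose y (z itself or its successor), splice y out through its only child x, and run RB-Delete-Fixup when a black node was removed. Below is a minimal Python sketch of that splice-out step; the node layout, the NIL sentinel, and the stubbed fixup are assumptions made for illustration, and the four fixup cases are omitted.

```python
# Minimal sketch of the splice-out logic the slides walk through (CLRS-style RB-DELETE).
RED, BLACK = "RED", "BLACK"

class _Nil:                      # sentinel: always black, stands in for every missing child
    color = BLACK
    key = None

NIL = _Nil()
NIL.left = NIL.right = NIL.p = NIL

class Node:
    def __init__(self, key, color=RED):
        self.key, self.color = key, color
        self.left = self.right = self.p = NIL

def tree_minimum(x):
    while x.left is not NIL:
        x = x.left
    return x

def rb_delete(T, z):             # T is a dict {"root": Node}
    # Step 1: choose y -- z itself if it has at most one child, else its successor.
    if z.left is NIL or z.right is NIL:
        y = z
    else:
        y = tree_minimum(z.right)
    # Step 2: x is y's only (possibly NIL) child; splice y out.
    x = y.left if y.left is not NIL else y.right
    x.p = y.p
    if y.p is NIL:
        T["root"] = x
    elif y is y.p.left:
        y.p.left = x
    else:
        y.p.right = x
    # Step 3: if y was the successor, copy its key into z ("exchange the keys").
    if y is not z:
        z.key = y.key
    # Step 4: removing a black node may break the black-height property.
    if y.color == BLACK:
        rb_delete_fixup(T, x)

def rb_delete_fixup(T, x):
    # Placeholder: the real routine runs
    #   while x != root[T] and color[x] == BLACK: ... (cases 1-4) ...
    # and always finishes with the line below, exactly as in the Delete-30 step.
    x.color = BLACK

# Tiny usage: a black node 30 with a red left child 5; splicing 30 out promotes 5
# and recolours it black, much like the Delete-30 step above.
T = {"root": None}
n30, n5 = Node(30, BLACK), Node(5, RED)
n30.left, n5.p = n5, n30
T["root"] = n30
rb_delete(T, n30)
print(T["root"].key, T["root"].color)   # -> 5 BLACK
```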
Deletion (Example 2):
Red Black Tree
With the help of the following Red-Black tree, perform the deletion operation on the given keys respectively.
Keys for deletions are <30, 25, 15 & 10>
(Starting tree: 25 at the root with children 15 and 30; 15 has children 10 and 20; 10 has left child 5.)
Delete 30
• Mark z = 30 and find y. z has no child, so y = z and z is simply deleted; x = NIL takes its place.
• The sibling of x is w = 15. Apply RB-Delete-Fixup Case 1, which rotates about the root.
  (Tree after the rotation: 15 at the root with children 10 and 25; 10 has left child 5; 25 has left child 20; x is still NIL and its new sibling is w = 20.)
• Apply Fixup Case 2, after which x moves up and the loop terminates.
  (Tree after the fixup: 15 at the root with children 10 and 25; 10 has left child 5; 25 has left child 20.)
Delete 25
• Mark z = 25 and find y. Here y = z (25 has only the one child 20), so after the deletion x = 20 takes its place.
• Apply RB-Delete-Fixup on x.
  (Tree after the fixup: 15 at the root with children 10 and 20; 10 has left child 5.)
Delete 15
• Mark z = 15 and find the successor y = 20. y has no child, so simply exchange the key of y with the key of z and delete y.
  (Tree: 20 at the root with left child 10 and a NIL right child; 10 has left child 5; x = NIL and its sibling is w = 10.)
• Apply RB-Delete-Fixup Case 3, which ends in a rotation.
  (Tree after the rotation: 10 at the root with children 5 and 20.)
Delete 10
• Mark z = 10 and find the successor y = 20. y has no child, so simply exchange the key of y with the key of z and delete y.
  (Tree: 20 at the root with left child 5; x = NIL takes y's old position on the right.)
• Apply RB-Delete-Fixup Case 2, after which x moves up to the root and the loop terminates.
  (Tree after the fixup: 20 at the root with left child 5.)
Deletion (Example 3):
Red Black Tree
With the help of the following Red-Black tree, perform the deletion operation on the given keys respectively.
Keys for deletions are <8, 12, 19, 31, 38 & 41>
(Starting tree: 38 at the root with children 19 and 41; 19 has children 12 and 31; 12 has left child 8.)
Delete 8
• Mark z = 8 and find y. z has no child, so y = z and z is simply deleted.
  (Tree: 38 at the root with children 19 and 41; 19 has children 12 and 31.)
Delete 12
• Mark z = 12 and find y. z has no child, so y = z and z is simply deleted; x = NIL takes its place and its sibling is w = 31.
• Apply RB-Delete-Fixup Case 2: w is recoloured, x moves up to its parent, and the fixup finishes at the root.
  (Tree after the fixup: 38 at the root with children 19 and 41; 19 has right child 31.)
Delete 19
• Mark z = 19 and find y = 31. y has no child, so exchange the key of y with the key of z and delete y.
• Apply RB-Delete-Fixup.
  (Tree after the fixup: 38 at the root with children 31 and 41.)
Delete 31
• Mark z = 31 and find y. z has no child, so y = z and z is simply deleted; x = NIL takes its place and its sibling is w = 41.
• Apply RB-Delete-Fixup Case 2, after which x moves up to the root 38 and the loop terminates.
  (Tree after the fixup: 38 at the root with right child 41.)
Delete 38
• Mark z = 38 and find y = 41. y has no child, so exchange the key of y with the key of z and delete y.
  (Tree: the single node 41 remains.)
Delete 41
• Mark z = 41. z has no child, so y = z and the ordinary BST deletion is applied.
• The tree is now empty.
Red Black Tree
Lemma
A red-black tree with n internal nodes has 𝒉𝒆𝒊𝒈𝒉𝒕 ≤
𝟐 lg 𝒏 + 𝟏 .
Proof:
The Proof is based on two number of claims:
• Claim 1: Any node with height h has black-height
≥ h/2.
• Claim 2: The subtree rooted at any node x
contains ≥ 2𝑏ℎ(𝑥) − 1 internal nodes.
Red Black Tree
• Claim 1: Any node with height h has black-height ≥ h/2.

According to property 4 (i.e. if a node is red, then both its children are black), at least half of the nodes on any simple path from a node down to a leaf, excluding that node itself, must be black.

Hence any node x with height h has bh(x) ≥ h/2 (i.e. at least h/2 black nodes on such a path).
(Figure: an example tree with each node annotated by its height h and black-height bh; the root 26 has h = 4, bh = 2, and the NIL leaves have h = 0, bh = 0.)
∎ proved
Red Black Tree
Claim 2:
The subtree rooted at any node x contains at least 2^{bh(x)} − 1 internal nodes (i.e. the nodes that actually carry keys).
Proof by induction on the height of x.
Base case:
When x is a leaf node, h(x) = 0 and bh(x) = 0.
Hence the number of internal nodes in the subtree rooted at x is 2^{bh(x)} − 1 = 2^0 − 1 = 1 − 1 = 0.
Red Black Tree
Induction hypothesis:
Consider a node x that has positive height and is an internal node with two children, left and right.
Each child has a black-height of either bh(x) or bh(x) − 1, depending on whether its colour is red or black respectively.
For example, let bh(x) = b. Then bh(y) = b if y is a red child of x, and bh(y) = b − 1 if y is a black child of x.
(Figure: the same annotated tree, with the root x = 26 (h = 4, bh = 2), a black child 17 and a red child 41, and NIL leaves with h = 0, bh = 0.)
Red Black Tree
Since h(child of x) < h(x), the inductive hypothesis applies to each child, so each child's subtree contains at least 2^{bh(x)−1} − 1 internal nodes. Thus the subtree rooted at x contains at least
(2^{bh(x)−1} − 1) + (2^{bh(x)−1} − 1) + 1 = 2^{bh(x)} − 1 internal nodes,
which proves Claim 2.

By Claim 1, bh(root) ≥ h/2. Hence
    n ≥ 2^{bh(root)} − 1
      ≥ 2^{h/2} − 1            (since bh(root) ≥ h/2, by Claim 1)
⟹  n + 1 ≥ 2^{h/2}
Taking logarithms on both sides:
    lg(n + 1) ≥ (h/2) · lg 2 = h/2
⟹  h ≤ 2 lg(n + 1)            ∎ Proved
Design and Analysis of Algorithm

Advanced Data Structure


(B Tree)
(Insertion and Deletion)

LECTURE 37 - 40
Overview
• B-trees are balanced search trees designed to work well on magnetic disks or other direct-access secondary storage devices.
• Time complexity of B Tree in big O notation
Algorithm Average Worst case
Space O(n) O(n)
Search O(log n) O(log n)
Insert O(log n) O(log n)
Delete O(log n) O(log n)
B Tree
A B-tree T is a rooted tree (with root root[T]) having the following
properties.
1. Every node 𝑥 has the following fields:
a. 𝑛[𝑥] , the number of keys currently stored in node x, For
example: 𝑛[𝑥] = 4
5 8 12 13
b. the 𝑛[𝑥] keys themselves, stored in nondecreasing order:
𝑘𝑒𝑦_1[𝑥] ≤ 𝑘𝑒𝑦_2[𝑥] ≤ ⋯ ≤ 𝑘𝑒𝑦_{𝑛[𝑥]}[𝑥], and
c. 𝑙𝑒𝑎𝑓 [𝑥], a boolean value that is TRUE if x is a leaf and FALSE if x
is an internal node.
2. If 𝑥 is an internal node, it also contains 𝑛[𝑥] + 1 pointers
   𝐶_1[𝑥], 𝐶_2[𝑥], . . . , 𝐶_{𝑛[𝑥]+1}[𝑥] to its children.
   (Figure: a node holding the keys 5 8 12 13 together with its five child pointers.)
   Leaf nodes have no children, so their 𝐶_i fields are undefined.
B Tree
3. The keys 𝑘𝑒𝑦_𝑖[𝑥] separate the ranges of keys stored in each subtree:
   if 𝑘_𝑖 is any key stored in the subtree with root 𝐶_𝑖[𝑥], then
   𝑘_1 ≤ 𝑘𝑒𝑦_1[𝑥] ≤ 𝑘_2 ≤ 𝑘𝑒𝑦_2[𝑥] ≤ ⋯ ≤ 𝑘𝑒𝑦_{𝑛[𝑥]}[𝑥] ≤ 𝑘_{𝑛[𝑥]+1}
   (Figure: the node 5 8 12 13 with its subtrees.)
4. All leaf nodes have the same depth, which is the tree's height h.
B Tree
5. There are lower and upper bounds on the number of keys a node can
   contain. These bounds can be expressed in terms of a fixed integer
   𝑡 ≥ 2 called the minimum degree of the B-tree:
   a. Every node other than the root must have at least t − 1 keys.
      Every internal node other than the root thus has at least t children.
      If the tree is nonempty, the root must have at least one key.
   b. Every node can contain at most 2t − 1 keys. Therefore, an
      internal node can have at most 2t children.
      (We say that a node is full if it contains exactly 2t − 1 keys.)
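To make the field names n[x], key_i[x], C_i[x], and leaf[x] concrete, here is a minimal Python sketch of such a node together with a check of the key-count bounds from property 5. The class and helper names are assumptions used for illustration, not part of the lecture's pseudocode.

```python
# Minimal sketch of a B-tree node, assuming the fields described above.
class BTreeNode:
    def __init__(self, t, leaf=True):
        self.t = t              # minimum degree of the tree this node belongs to
        self.keys = []          # key_1[x] <= key_2[x] <= ... <= key_n[x][x]
        self.children = []      # C_1[x], ..., C_{n[x]+1}[x]; empty when leaf[x] is TRUE
        self.leaf = leaf        # leaf[x]

    @property
    def n(self):                # n[x], the number of keys currently stored
        return len(self.keys)

def check_key_bounds(x, is_root=False):
    """Property 5: every node holds at most 2t-1 keys and, unless it is the
    root, at least t-1 keys; an internal node has n[x]+1 children."""
    assert x.n <= 2 * x.t - 1, "node is over-full"
    if not is_root:
        assert x.n >= x.t - 1, "non-root node is under-full"
    if not x.leaf:
        assert len(x.children) == x.n + 1, "internal node must have n[x]+1 children"

# Example: the node drawn above with keys 5, 8, 12, 13 (so n[x] = 4).
node = BTreeNode(t=3)
node.keys = [5, 8, 12, 13]
check_key_bounds(node, is_root=True)
```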
B Tree
The simplest B-tree occurs when t = 2. Every internal node then has either 2, 3, or 4 children, and we have a 2-3-4 tree.
• a 2-node has one data element and, if internal, two child nodes;
• a 3-node has two data elements and, if internal, three child nodes;
• a 4-node has three data elements and, if internal, four child nodes.
(Figure: an example 2-3-4 tree.)
In practice, however, much larger values of t are typically used.
B Tree (Insertion)
Example 1:
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
So, The required minimum key =t-1=3-1=2
The required maximum key = 2t-1=6-1=5
B Tree (Insertion)
Example 1:
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
So, The required minimum key =t-1=3-1=2
The required maximum key = 2t-1=6-1=5
Insert : 78
78
B Tree (Insertion)
Example 1:
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
So, The required minimum key =t-1=3-1=2
The required maximum key = 2t-1=6-1=5
Insert : 78
78

Insert : 56

56 78
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 52
52 56 78
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 52
52 56 78

Insert : 95

52 56 78 95
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 52
52 56 78

Insert : 95

52 56 78 95

Insert : 88

52 56 78 88 95
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>

52 56 78 88 95
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>

52 56 78 88 95

Now we cannot insert a new key into the existing leaf node, as the maximum key limit has been reached.
Hence we introduce a split operation, which splits the full node about its median key.
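A minimal sketch of that split step is shown below, reusing the BTreeNode class from the earlier sketch; split_child is an assumed helper name. It moves the median key of a full child up into the parent and divides the remaining keys between two nodes, which is exactly what the following slides do pictorially (e.g. 78 moving up out of [52 56 78 88 95]).

```python
# Sketch of splitting a full child y = x.children[i] of a non-full node x,
# assuming the BTreeNode class from the sketch above (median index = t - 1).
def split_child(x, i):
    t = x.t
    y = x.children[i]                 # full node: it holds 2t - 1 keys
    z = BTreeNode(t, leaf=y.leaf)     # new right sibling of y

    median = y.keys[t - 1]            # the median key moves up into x
    z.keys = y.keys[t:]               # upper t - 1 keys go to z
    y.keys = y.keys[:t - 1]           # lower t - 1 keys stay in y
    if not y.leaf:
        z.children = y.children[t:]   # upper t children go to z
        y.children = y.children[:t]

    x.keys.insert(i, median)
    x.children.insert(i + 1, z)

# Example: splitting the full leaf [52, 56, 78, 88, 95] (t = 3) sends 78 up.
root = BTreeNode(t=3, leaf=False)
full = BTreeNode(t=3)
full.keys = [52, 56, 78, 88, 95]
root.children = [full]
split_child(root, 0)
print(root.keys, root.children[0].keys, root.children[1].keys)
# -> [78] [52, 56] [88, 95]
```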
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
First split and then Insert : 105
78

52 56 88 95
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 105
78

52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 15
78

15 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 35
78

15 35 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 22
78

15 22 35 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 22
78

15 22 35 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 22
78

15 22 35 52 56 88 95 105

First Split then


insert
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
First split and then Insert :47
35 78

15 22 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
First split and then Insert : 47
35 78

15 22 47 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 43
35 78

15 22 43 47 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 50
35 78

15 22 43 47 50 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 19
35 78

15 19 22 43 47 50 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 31
35 78

15 19 22 31 43 47 50 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 40
35 78

15 19 22 31 43 47 50 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
First split and then Insert : 40
35 78

15 19 22 31 43 47 50 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
First split and then Insert : 40

35 50 78

15 19 22 31 43 47 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
First split and then Insert : 40

35 50 78

15 19 22 31 40 43 47 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 41
35 50 78

15 19 22 31 40 41 43 47 52 56 88 95 105
B Tree (Insertion) Min. Key=2
Example 1: Max. Key=5
Draw a B-Tree of minimum degree t=3 of the given sequence and
assume that B-Tree is initially empty.
<78, 56, 52, 95, 88, 105, 15, 35, 22, 47, 43, 50, 19, 31, 40, 41, 59>
Insert : 59
35 50 78

15 19 22 31 40 41 43 47 52 56 59 88 95 105
B Tree (Insertion)
Example 2:
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
B Tree (Insertion)
Example 2:
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
So, The required minimum key =t-1=3-1=2
The required maximum key = 2t-1=6-1=5
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : E
E
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : E
E
Insert : A
A E
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : E
E
Insert : A
A E

Insert : S
A E S
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : E
E
Insert : A
A E

Insert : S
A E S
Insert : Y

A E S Y
B Tree (Insertion) Min. Key=2
Max. Key=5
Example 2:
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : Q

A E Q S Y
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : Q

A E Q S Y

First Split then


insert
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
First split and then Insert : U
Q

A E S Y
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
First split and then Insert : U
Q

A E S U Y
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : E
Q

A E E S U Y
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : S
Q

A E E S S U Y
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : T
Q

A E E S S T U Y
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : I
Q

A E E I S S T U Y
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : O
Q

A E E I O S S T U Y
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : N
Q

A E E I O S S T U Y
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : N
Q

A E E I O S S T U Y

First Split then


insert
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : N
E Q

A E I N O S S T U Y
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : I
E Q

A E I I N O S S T U Y
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : N
E Q

A E I I N N O S S T U Y
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : C
E Q

A C E I I N N O S S T U Y
B Tree (Insertion) Min. Key=2
Example 2: Max. Key=5
Construct a B-Tree of degree t=3 on following data set and assume that
B-Tree is initially empty.
<E, A, S, Y, Q, U, E, S, T, I, O, N, I, N, C>
Insert : C
E Q

A C E I I N N O S S T U Y
B Tree (Insertion)
A B-Tree can be specified by its order as well as by its minimum degree.
The question is how to find the maximum and minimum number of keys per node in each case:

            Order (m)                    Degree (t)
Maximum Key = m − 1              Maximum Key = 2t − 1
Minimum Key = ⌈m/2⌉ − 1          Minimum Key = t − 1
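A one-line helper makes the two conventions easy to compare (key_bounds is an assumed name used only for illustration):

```python
import math

# Key-count bounds per node, following the table above.
def key_bounds(order=None, degree=None):
    if degree is not None:                       # minimum degree t
        return degree - 1, 2 * degree - 1
    return math.ceil(order / 2) - 1, order - 1   # order m

print(key_bounds(degree=3))   # -> (2, 5), as in Examples 1 and 2
print(key_bounds(order=5))    # -> (2, 4), as in Example 3
```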
B Tree (Insertion)
Example 3:
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
B Tree (Insertion)
Example 3:
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Soln.
Order (m) = 5
Maximum Key = m − 1 = 5 − 1 = 4
Minimum Key = ⌈m/2⌉ − 1 = 3 − 1 = 2
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: F
F
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: F
F
Insert: S
F S
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: F
F
Insert: S
F S
Insert: Q
F Q S
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: F
F
Insert: S
F S
Insert: Q
F Q S
Insert: K
F K Q S
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: F
F
Insert: S
F S
Insert: Q
F Q S
Insert: K
F K Q S
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: F
F
Insert: S
F S
Insert: Q
F Q S
Insert: K
F K Q S

First Split then


insert
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-Tree is
initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: K
F K Q S
First Split
then insert
Rules for Splitting:
When the number of keys is even and a split is needed, there are two middle keys (call them m and m+1). In that case the following rules decide where to split; a small sketch follows the list.
Rule 1: If the inserted item is < m, then split at m.
Rule 2: If the inserted item is > m+1, then split at m+1.
Rule 3: If the inserted item is > m and < m+1, then split on the inserted item itself.
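A minimal sketch of that choice, under the slide's convention that m and m+1 denote the two middle keys of the even-sized node (choose_separator is an assumed name):

```python
# Given the keys of a full node with an even key count and the item being
# inserted, return the key that moves up as the separator (rules 1-3 above).
def choose_separator(keys, inserted):
    mid = len(keys) // 2
    m, m_plus_1 = keys[mid - 1], keys[mid]   # the two middle keys
    if inserted < m:
        return m                             # Rule 1: split at m
    if inserted > m_plus_1:
        return m_plus_1                      # Rule 2: split at m+1
    return inserted                          # Rule 3: the inserted item itself

# Example from this sequence: [F, K, Q, S] is full (order 5, max 4 keys) and
# C is the next key to arrive; C < K, so rule 1 promotes K, as the slides show.
print(choose_separator(["F", "K", "Q", "S"], "C"))   # -> K
```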
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>

Insert: K K

F Q S
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>

Insert: K K

F Q S
Insert: C
K

C F Q S
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>

Insert: L K

C F L Q S
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>

Insert: L K

C F L Q S

Insert: H

C F H L Q S
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>

Insert: T K

C F H L Q S T
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
First Split
Insert: T K then insert

C F H L Q S T
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
First Split
K then insert
Insert: T

C F H L Q S T

K S

C F H L Q T
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>

Insert: V K S

C F H L Q T V
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>

Insert: V K S

C F H L Q T V
Insert: W
K S

C F H L Q T V W
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>

Insert: M K S

C F H L M Q T V W
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>

K S
Insert: M

C F H L M Q T V W

Insert: R
K S

C F H L M Q R T V W
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: N Insert and
K S
then Split

C F H L M Q R T V W

K N S

C F H L M Q R T V W
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: N
K N S

C F H L M Q R T V W

Insert: P
K N S

C F H L M P Q R T V W
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: A
K N S

A C F H L M P Q R T V W

Insert: B (First Insert B then Split)


C K N S

A B F H L M P Q R T V W
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: X
C K N S

A B F H L M P Q R T V W X
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: X
C K N S

A B F H L M P Q R T V W X

Insert: Y (Insert Y and then split)


C K N S

A B F H L M P Q R T V W X
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: Y (Insert Y and then split and again insert W on Root and then
again split)

C K S W

A B F H L M P Q R T V X Y
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: Z

C K S W

A B F H L M P Q R T V X Y Z
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: D

C K S W

A B D F H L M P Q R T V X Y Z
B Tree (Insertion) Min. Key=2
Example 3: Max. Key=4
Construct a B-Tree of order 5 on following data set and assume that B-
Tree is initially empty.
<F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, Z, D, E>
Insert: E

C K S W

A B D E F H L M P Q R T V X Y Z
B Tree (Deletion)
Deletion from a B-tree is analogous to insertion but a little more complicated.
Let us sketch the various cases of deleting keys from a B-tree.
Case 1: If the node containing the key to be deleted is a leaf and that leaf has more than (t − 1) keys, then the key can simply be removed without disturbing the tree.
Let the degree t = 3.
For example, deleting B:
  before:  root [D G] with leaves [A B C], [E F], [H Z]
  after:   root [D G] with leaves [A C], [E F], [H Z]
B Tree (Deletion)
Case 2(a): if the key k to be deleted is in an internal node x and the child that precedes k has at least t keys, then the largest key of that child (the predecessor of k) is moved up to replace k.
Let the degree t = 3.
For example, deleting U (its preceding child has t keys, so its largest key T moves up):
  before:  root [Q U] with children [O P], [R S T], [W X]
  after:   root [Q T] with children [O P], [R S], [W X]
B Tree (Deletion)
Case 2(b): if the key k to be deleted is in an internal node x and the child that follows k has at least t keys, then the smallest key of that child (the successor of k) is moved up to replace k.
Let the degree t = 3.
For example, deleting Q:
  before:  root [Q U] with children [O P], [R S T], [W X]
  after:   root [R U] with children [O P], [S T], [W X]
B Tree (Deletion)
Case 2(c): if the key k to be deleted is in an internal node x and neither adjacent child has at least t keys, then the two children around k are merged into one node (with k pushed down between them) and k is then removed.
Let the degree t = 3.
For example, deleting R:
  before:  root [R U X] with children [P Q], [S T], [V W], [Y Z]
  after:   root [U X] with children [P Q S T], [V W], [Y Z]
B Tree (Deletion)
Case 3:
If the key k is not present in internal node x, determine the root of the appropriate subtree that must contain k, if k is in the tree at all.
If that child has only t − 1 keys, execute step 3(a) or 3(b) as necessary to guarantee that we descend to a node containing at least t keys. Then finish by recursing on the appropriate child of x.
B Tree (Deletion)
Case 3(a): If the child x that must contain k has only (t − 1) keys but has an immediate sibling with at least t keys, give x an extra key by moving a key from p[x] down into x, moving a key from x's immediate left or right sibling up into p[x], and moving the appropriate child pointer from the sibling into x.
Let the degree t = 3.
For example, deleting B:
  before:  root [C L P T X] with children [A B], [E J K], [N O], [Q R S], [U V], [Y Z]
  after:   root [E L P T X] with children [A C], [J K], [N O], [Q R S], [U V], [Y Z]
B Tree (Deletion)
Case 3(b): if the child that must contain k and its immediate siblings all have only (t − 1) keys, then merge that child with one sibling, bringing a key of the parent down to become the median key of the merged node, and then continue the deletion in the merged node.
Let the degree t = 3.
For example, deleting D:
  before:  root [P] with children [C L] and [T X];
           leaves [A B], [D E J K], [N O], [Q R S], [U V], [Y Z]
  the children [C L] and [T X] each have only t − 1 keys, so they are merged with P into the new root [C L P T X]; D is then removed from the leaf [D E J K].
  after:   root [C L P T X] with leaves [A B], [E J K], [N O], [Q R S], [U V], [Y Z]
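The case analysis above can be summarised as a small dispatcher. The sketch below only classifies which case applies at the current node (it does not perform the rebalancing), reuses the BTreeNode sketch from earlier, and uses bisect from the standard library; classify_delete_case is an assumed name.

```python
from bisect import bisect_left

# Classify which deletion case applies at node x for key k (minimum degree x.t).
def classify_delete_case(x, k):
    t = x.t
    i = bisect_left(x.keys, k)
    if i < len(x.keys) and x.keys[i] == k:        # k lives in this node
        if x.leaf:
            return "case 1"                        # delete directly from the leaf
        if len(x.children[i].keys) >= t:
            return "case 2(a)"                     # borrow the predecessor
        if len(x.children[i + 1].keys) >= t:
            return "case 2(b)"                     # borrow the successor
        return "case 2(c)"                         # merge the two children around k
    if x.leaf:
        return "key not in tree"
    child, siblings = x.children[i], []
    if i > 0:
        siblings.append(x.children[i - 1])
    if i + 1 < len(x.children):
        siblings.append(x.children[i + 1])
    if len(child.keys) >= t:
        return "descend"                           # child is safe to recurse into
    if any(len(s.keys) >= t for s in siblings):
        return "case 3(a)"                         # borrow through the parent
    return "case 3(b)"                             # merge child with a sibling

# Example: the Case 1 figure above -- deleting B from the leaf [A, B, C].
root = BTreeNode(t=3, leaf=False)
root.keys = ["D", "G"]
for ks in (["A", "B", "C"], ["E", "F"], ["H", "Z"]):
    child = BTreeNode(t=3); child.keys = ks; root.children.append(child)
print(classify_delete_case(root, "B"))              # -> descend
print(classify_delete_case(root.children[0], "B"))  # -> case 1
```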
B Tree (Deletion)
Example 1:
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>

D R T

A A C C E G I N R S S T T U U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete: T key
D R T

A A C C E G I N R S S T T U U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete: T key
D R T

A A C C E G I N R S S T T U U U
Apply Case
2(b)
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete: T key
D R T

A A C C E G I N R S S T T U U U
Apply Case
2(b)

key
D R T

A A C C E G I N R S S T T U U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
The updated Tree is

D R T

A A C C E G I N R S S T U U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : S
D R T

key
A A C C E G I N R S S T U U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : S
D R T

key
A A C C E G I N R S S T U U U
Apply Case
3(a)
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : S
D R T

key
A A C C E G I N R S S T U U U
Apply Case
3(a)

D R T

key
A A C C E G I N R S S T U U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : S
After Apply Case
3(a)

D R T

A A C C E G I N R S T U U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : U
D R T

key
A A C C E G I N R S T U U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : U
D R T

key
A A C C E G I N R S T U U U
Apply Case
1
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : U
D R T

key
A A C C E G I N R S T U U U
Apply Case
1

D R T

A A C C E G I N R S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : T key
D R T

A A C C E G I N R S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : T key
D R T

A A C C E G I N R S T U U
Apply Case
2(C)
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : T key
D R T

A A C C E G I N R S T U U
Apply Case
2(C)

D R

A A C C E G I N R S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>key
Delete : D
D R

A A C C E G I N R S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>key
Delete : D
D R

A A C C E G I N R S T U U
Apply Case
2(a)
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>key
Delete : D
D R

A A C C E G I N R S T U U
Apply Case
2(a)

key

D R

A A C C E G I N R S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : D key

D R

A A C C E G I N R S T U U

C R

A A C E G I N R S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : C C R

key
A A C E G I N R S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : C C R

key
A A C E G I N R S T U U
Apply Case
1
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : C C R

key
A A C E G I N R S T U U
Apply Case
1

C R

A A E G I N R S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : I
C R

key
A A E G I N R S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : I
C R

key
A A E G I N R S T U U
Apply Case
1
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : I
C R

key
A A E G I N R S T U U
Apply Case
1

C R

A A E G N R S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : N
C R
key

A A E G N R S T U U
Apply Case
1
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete : N
C R
key

A A E G N R S T U U
Apply Case
1

C R

A A E G R S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete :R
C R

key

A A E G R S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete :R
C R

key

A A E G R S T U U
Apply Case
1
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete :R
C R

key

A A E G R S T U U
Apply Case
1

C R

A A E G S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete :C key

C R

A A E G S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete :C key

C R

A A E G S T U U
Apply Case
2(C)
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete :C key

C R

A A E G S T U U
Apply Case
2(C)

A A E G S T U U
Example 1:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of degree 3.
<T S U T D C I N R C>
Delete :C key

C R

A A E G S T U U
Apply Case
2(C)

Final Tree
R

A A E G S T U U
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
9 25 75

1 3 5 8 10 15 27 35 81 97 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
9 25 75

1 3 5 8 10 15 27 35 81 97 112

If order = 5, then
Maximum Key = m − 1 = 5 − 1 = 4
Minimum Key = ⌈m/2⌉ − 1 = 3 − 1 = 2
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
key
Delete :25 9 25 75

1 3 5 8 10 15 27 35 81 97 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
key
Delete :25 9 25 75

1 3 5 8 10 15 27 35 81 97 112

Apply Case
2(c)
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
key
Delete :25 9 25 75

1 3 5 8 10 15 27 35 81 97 112

Apply Case
2(c)

9 75

1 3 5 8 10 15 27 35 81 97 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :75 key
9 75

1 3 5 8 10 15 27 35 81 97 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :75 key
9 75

1 3 5 8 10 15 27 35 81 97 112

Apply Case
2(a)
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :75 key
9 75

1 3 5 8 10 15 27 35 81 97 112

Apply Case
2(a)

9 35

1 3 5 8 10 15 27 81 97 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :27
9 35

key
1 3 5 8 10 15 27 81 97 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :27
9 35

key
1 3 5 8 10 15 27 81 97 112
Apply Case
1

9 35

1 3 5 8 10 15 81 97 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :5
9 35

key
1 3 5 8 10 15 81 97 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :5
9 35

key
1 3 5 8 10 15 81 97 112
Apply Case
1
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :5
9 35

key
1 3 5 8 10 15 81 97 112
Apply Case
1

9 35

1 3 8 10 15 81 97 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :15
9 35

key
1 3 8 10 15 81 97 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :15
9 35

key
1 3 8 10 15 81 97 112

Apply Case
3(a)
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :15
9 35

key
1 3 8 10 15 81 97 112

Apply Case
3(a)

9 81

1 3 8 10 35 97 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :81 key
9 81

1 3 8 10 35 97 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :81 key
9 81

1 3 8 10 35 97 112
Apply Case
2(c)
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :81 key
9 81

1 3 8 10 35 97 112
Apply Case
2(c)

1 3 8 10 35 97 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :97 9

key

1 3 8 10 35 97 112

Apply
Case 1
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :97 9

key

1 3 8 10 35 97 112

Apply
Case 1

1 3 8 10 35 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :112
9

key
1 3 8 10 35 112
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :112
9

key
1 3 8 10 35 112

Apply Case
1
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :112
9

key
1 3 8 10 35 112

Apply Case
1

1 3 8 10 35
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :10
9

key
1 3 8 10 35
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :10
9

key
1 3 8 10 35
Apply Case
3(a)
Example 2:
B Tree (Deletion)
Perform the deletion operation with the following data sequentially on
the given B-Tree of order 5.
<25 75 27 5 15 81 97 112 10 >
Delete :10
9

key
1 3 8 10 35
Apply Case
2(a)

Final Tree 8

1 3 9 35
Theorem: B Tree (Deletion)
If n ≥ 1, then for any n-key B-Tree T of height h and minimum degree
t ≥ 2, we have h ≤ log_t((n + 1)/2).
Proof:
The root of a B-Tree contains at least one key, and every other node
contains at least t − 1 keys.
Thus T, whose height is h, has at least 2 nodes at depth 1, at least 2t nodes
at depth 2, at least 2t² nodes at depth 3, and so on, until at depth h it has
at least 2t^(h−1) nodes.

Figure A B-tree of height 3 containing a minimum possible number of keys. Shown inside each node x is n[x].
B Tree (Deletion)
Hence the total number of keys at each depth is:
1 key at depth 0, 2(t − 1) keys at depth 1,
2t(t − 1) keys at depth 2, 2t²(t − 1) keys at depth 3, and so on.
Hence,
n ≥ 1 + 2(t − 1) + 2t(t − 1) + 2t²(t − 1) + ⋯ + 2t^(h−1)(t − 1)

n ≥ 1 + (t − 1) · Σ_{i=1}^{h} 2t^(i−1)

n ≥ 1 + 2(t − 1) · (t^h − 1)/(t − 1)
n ≥ 1 + 2t^h − 2
n ≥ 2t^h − 1
n + 1 ≥ 2t^h  ⟹  t^h ≤ (n + 1)/2
Taking log (base t) on both sides:
h ≤ log_t((n + 1)/2)
proved.
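As a quick sanity check of this bound (not part of the lecture), the following Python sketch computes the minimum number of keys a B-tree of minimum degree t and height h must contain and verifies that h ≤ log_t((n + 1)/2):

```python
import math

def min_keys(t, h):
    # Minimum number of keys in a B-tree of minimum degree t and height h:
    # 1 key at the root, plus 2*t^(i-1) nodes of (t - 1) keys each at depth i = 1..h.
    return 1 + sum(2 * t**(i - 1) * (t - 1) for i in range(1, h + 1))

for t in (2, 3, 5):
    for h in (1, 2, 3, 4):
        n = min_keys(t, h)
        assert n == 2 * t**h - 1                      # matches the closed form above
        assert h <= math.log((n + 1) / 2, t) + 1e-9   # the claimed height bound
print("height bound h <= log_t((n+1)/2) holds on the sampled cases")
```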
Design and Analysis of Algorithm

Advanced Data Structure


(Binomial Heap)

LECTURE 41 - 44
Overview

• This section presents a data structure known as a
mergeable heap, which supports the following
seven operations.
• MAKE-BINOMIAL-HEAP
• BINOMIAL-HEAP-INSERT
• BINOMIAL-HEAP-MINIMUM
• BINOMIAL-HEAP-EXTRACT-MIN
• BINOMIAL-HEAP-UNION
• BINOMIAL-HEAP DECREASE-KEY
• BINOMIAL-HEAP-DELETE
Binomial Heap
• Binomial heap was introduced in 1978 by Jean
Vuillemin.
• Jean Vuillemin is a professor in mathematics and
computer science.
• The other name of Binomial Heap is Mergeable
heaps.
• A binomial heap is a collection of binomial trees.
– Lets learn What is Binomial Tree?
Binomial Tree
• Binomial tree 𝐵𝑘 is an ordered tree defined recursively.
• The binomial tree 𝐵0 has one node.
• The binomial tree 𝐵𝑘 consists of two binomial trees 𝐵𝑘−1 and they
are connected such that the root of one tree is the leftmost child of
the other.

B2
B1
B1 B0
B1

B0 B1 B2 B3
Binomial Tree
• Binomial tree 𝐵𝑘 is an ordered tree defined recursively.
• The binomial tree 𝐵0 has one node.
• The binomial tree 𝐵𝑘 consists of two binomial trees 𝐵𝑘−1 and they
are connected such that the root of one tree is the leftmost child of
the other.

B3
B2 B0

B1

B4
Binomial Tree
• Binomial tree 𝐵𝑘 is an ordered tree defined recursively.
• The binomial tree 𝐵0 has one node.
• The binomial tree 𝐵𝑘 consists of two binomial trees 𝐵𝑘−1 and they
are connected such that the root of one tree is the leftmost child of
the other.

B1 Bo
B2
Bk-2
Bk-1

Bk
(Fig: A General View of Binomial -Tree)
Binomial Tree (Property)
A Binomial tree satisfy the following properties:
• 𝐵𝑘 has 2𝑘 nodes
• 𝐵𝑘 has height 𝑘
• There are exactly C(k, i) = (k choose i) nodes at depth i, for i = 0, 1,
2, …, k.
For Example:
Let us check in B₄ at depth 2 (i.e. k = 4 and i = 2):
C(k, i) = k! / (i!·(k−i)!) = C(4, 2) = 4! / (2!·(4−2)!) = (4×3×2×1) / (2×1×2×1) = 6
Hence there are 6 nodes at depth 2 of B₄.
• The root has degree k, which is greater than the degree of any other
node in the tree. Each of the root's children is the root of a subtree Bᵢ,
for i = k−1, k−2, …, 0.
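These counting properties can be checked directly. The sketch below is illustrative only: it builds the depth profile of B_k from the recursive definition and compares it against the binomial coefficients.

```python
import math
from collections import Counter

def depths(k):
    """Depth profile of the binomial tree B_k: Counter mapping depth -> node count."""
    if k == 0:
        return Counter({0: 1})                     # B_0 is a single node
    # B_k = one B_{k-1} kept at the root, plus another B_{k-1} hung one level lower.
    kept = depths(k - 1)
    hung = Counter({d + 1: c for d, c in depths(k - 1).items()})
    return kept + hung

k = 4
profile = depths(k)
assert sum(profile.values()) == 2**k               # B_k has 2^k nodes
assert max(profile) == k                           # B_k has height k
for i in range(k + 1):
    assert profile[i] == math.comb(k, i)           # exactly C(k, i) nodes at depth i
print(sorted(profile.items()))                     # [(0, 1), (1, 4), (2, 6), (3, 4), (4, 1)]
```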
Binomial Heap (Property)
A Binomial Heap H is a set of binomial trees that satisfies the
following properties:
• P1. Each binomial tree in H obeys the min heap property.
(i.e. key of a node is greater or equal to the key of its parent.
Hence the root has the smallest key in the tree).
• P2. For any non-negative integer k, there is at most one
binomial tree whose root has degree k.
(This implies that an n-node binomial heap H consists of
at most ⌊lg n⌋ + 1 binomial trees; a figure is given on the
next page.)
• P3. The binomial trees in the binomial heap are arranged in
increasing order of degree.
Binomial Heap (Property)
• Example:

Head[H]
10 1 6

B 12 25 8 14 29

0
18 11 17 38
B
2
27
B
3
Here n = 13,
so ⌊lg n⌋ + 1 = ⌊lg 13⌋ + 1 = 3 + 1 = 4 (so at most 4 trees).

The above binomial heap consists of B₀, B₂ and B₃.


Binomial Heap (Representation)
• Each binomial tree within a binomial heap is stored
in the left-child, right-sibling representation
• Each node 𝑥 contains POINTERS
• 𝑝 𝑥 → 𝑝𝑎𝑟𝑒𝑛𝑡 to its parent
• 𝑘𝑒𝑦 𝑥 → 𝑘𝑒𝑦 to its key value
• 𝑐ℎ𝑖𝑙𝑑[𝑥] → 𝑐ℎ𝑖𝑙𝑑 to its leftmost child
• 𝑠𝑖𝑏𝑙𝑖𝑛𝑔[𝑥] → 𝑠𝑖𝑏𝑙𝑖𝑛𝑔 to its immediately right
sibling
• 𝑑𝑒𝑔𝑟𝑒𝑒[𝑥] → 𝑑𝑒𝑔𝑟𝑒𝑒 to its degree value (i.e.
denotes the number of children of 𝑥)
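As a minimal Python sketch of this layout (one possible encoding, assumed here for illustration; the field names mirror the slides):

```python
class BinomialNode:
    """One node of a binomial heap in left-child, right-sibling form."""
    def __init__(self, key):
        self.key = key        # key[x]
        self.p = None         # p[x]: parent
        self.child = None     # child[x]: leftmost child
        self.sibling = None   # sibling[x]: sibling immediately to the right
        self.degree = 0       # degree[x]: number of children

# head[H] is just a reference to the first root; roots are chained via sibling:
head = BinomialNode(10)
head.sibling = BinomialNode(12)
```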
Binomial Heap (Representation)
(Figure: a binomial heap containing the keys 10, 12 and 15, shown together with
its left-child, right-sibling pointer representation, i.e. the p, key, degree,
child and sibling fields of each node reached from head[H].)
Binomial Heap (Operations)
Binomial heaps support the following seven operations:
1. MAKE-HEAP() creates and returns a new heap
containing no elements.
2. MINIMUM(H) returns a pointer to the node in
heap H whose key is minimum.
3. UNION(H1, H2) creates and returns a new heap
that contains all the nodes of heaps H1 and H2.
Heaps H1 and H2 are "destroyed" by this
operation.
4. EXTRACT-MIN(H) deletes the node from heap H
whose key is minimum, returning a pointer to the
node.
Binomial Heap (Operations)
5. INSERT(H, x) inserts node x, whose key field has
already been filled in, into heap H.
6. DECREASE-KEY(H, x, k) assigns to node x within
heap H the new key value k, which is assumed to
be no greater than its current key value.
7. DELETE(H, x) deletes node x from heap H
Binomial Heap (Operations_1)
1. MAKE-HEAP() : creates and returns a new heap
containing no elements.

To make an empty binomial heap, the MAKE-


BINOMIAL-HEAP procedure simply allocates and
returns an object H , where head[H ] = NIL.

The running time is Θ(1).


Binomial Heap (Operations_2)
2. MINIMUM(H) : Since the binomial heap is a min-heap-order, the
minimum key of each binomial tree must be at the root. This
operation checks all the roots to find the minimum key.
Binomial Heap (Operations_2)
2. MINIMUM(H) : Since the binomial heap is a min-heap-order, the
minimum key of each binomial tree must be at the root. This
operation checks all the roots to find the minimum key.

Head[H]
20 15 8

35 10 28

12
Binomial Heap (Operations_2)
2. MINIMUM(H) : Since the binomial heap is a min-heap-order, the
minimum key of each binomial tree must be at the root. This
operation checks all the roots to find the minimum key.
Pseudocode: This implementation assumes that there are no keys
with value ∞.
BINOMIAL-HEAP-MINIMUM(H)
1  y ← NIL
2  x ← head[H]
3  min ← ∞
4  while x ≠ NIL
5      do if key[x] < min
6           then min ← key[x]
7                y ← x
8         x ← sibling[x]
9  return y
(Figure: the binomial heap H from the previous slide, with head[H] pointing to the root list 20 → 15 → 8.)
Binomial Heap (Operations_2)
2. MINIMUM(H) : Since the binomial heap is a min-heap-order, the
minimum key of each binomial tree must be at the root. This
operation checks all the roots to find the minimum key.
Pseudocode: This implementation assumes that there are no keys
with value ∞.
BINOMIAL-HEAP-MINIMUM(H) Head[H]
20 15 8
1 y ← NIL
2 x ← head[H]
35 10 28
3 min ← ∞
4 while x ≠ NIL
12
5 do if key[x] < min After Execution
6 then min ← key[x] Return y=8
7 y←x
8 x ← sibling[x]
9 return y
Binomial Heap (Operations_2)
Since a binomial heap is heap-ordered, the minimum key must reside in a
ROOT node. BINOMIAL-HEAP-MINIMUM(H) therefore checks all roots, which takes O(lg n) time,
because
the number of roots in a binomial heap is at most ⌊lg n⌋ + 1 (property P2).
Hence RUNNING-TIME = O(lg n)
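A direct Python rendering of this root-list scan, written as a self-contained fragment over a stripped-down node type (an assumption for illustration, not the lecture's reference code):

```python
class Node:
    def __init__(self, key, sibling=None):
        self.key, self.sibling = key, sibling

def binomial_heap_minimum(head):
    """Scan the root list starting at head[H]; return the node with the smallest key."""
    y, x, minimum = None, head, float("inf")
    while x is not None:
        if x.key < minimum:
            minimum, y = x.key, x
        x = x.sibling
    return y

# Root list 20 -> 15 -> 8 from the example above:
assert binomial_heap_minimum(Node(20, Node(15, Node(8)))).key == 8
```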
Binomial Heap (Operations_3)
3. UNION(H1, H2)
This operation consists of the following steps
• Merge two binomial heaps H1 and H2. The resulting
heap has the roots in increasing order of degree
• For each tree in the binomial heap H, if it has the same
order with another tree, link the two trees together
such that the resulting tree obeys min-heap-order.
For this there is an requirement of 3 pointers into the root list
x = points to the root currently being examined
prev-x = points to the root PRECEDING x on the root list
sibling [prev-x] = x
next-x = points to the root FOLLOWING x on the root list
sibling [x] = next-x
Binomial Heap (Operations_3)
This operation is performed with the help of four cases.
Case 1: if (degree[x] ≠ degree[next → x])
    prev → x ← x
    x ← next → x
Example:

Head[H] x next → 𝑥 Head[H] prev → 𝑥 x

Before Case 1 After Case 1


Binomial Heap (Operations_3)
This operation is performed with the help of four cases.
Case 2: if (degree[x] = degree[next → x] = degree[sibling[next → x]])
    prev → x ← x
    x ← next → x
Example:
Head[H] x next → 𝑥 sibling[next → 𝑥]

prev → 𝑥 𝑥 next → 𝑥

Head[H]

Before Case 2 After Case 2


Binomial Heap (Operations_3)
This operation is performed with the help of four cases.
Case 3: if (degree[x] = degree[next → x]) and (key[x] ≤ key[next → x])
    sibling[x] ← sibling[next → x]
    BINOMIAL-LINK(next → x, x)
Example:

x next → 𝑥
5 8 25 5 25
Head[H] Head[H]
75 80 8 75 80

100 100

Before Case 3 After Case 3


Binomial Heap (Operations_3)
This operation is performed with the help of four cases.
Case 4: if (degree[x] = degree[next → x]) and (key[x] > key[next → x])
    if prev → x = NIL
        head[H] ← next → x
    else
        sibling[prev → x] ← next → x
    BINOMIAL-LINK(x, next → x)
Binomial
prev → 𝑥 𝑥
Heap
next → 𝑥
(Operations_3)
12 7 3 15
Head[H] 18 25 37 28 33

41 prev → 𝑥 𝑥 next → 𝑥
12 3 15
Head[H] 18 7 37 28 33

25 41
Binomial
prev → 𝑥 𝑥
Heap
next → 𝑥
(Operations_3)
12 7 3 15
Head[H] 18 25 37 28 33

41 prev → 𝑥 𝑥 next → 𝑥
12 3 15
Head[H] 18 7 37 28 33

12 3
25 41
Head[H] 18 15 7 37

28 33
25

41
Binomial Heap (Operations_3)
BINOMIAL-LINK(y, z)
1 p[y] = z
2 sibling[y] = child[z]
3 child[z] = y
4 degree[z] = degree[z] + 1
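The same linking step in Python, again over a minimal node type assumed only for illustration:

```python
class Node:
    def __init__(self, key):
        self.key, self.p, self.child, self.sibling, self.degree = key, None, None, None, 0

def binomial_link(y, z):
    """Make node y the new leftmost child of node z (two B_(k-1) roots become one B_k)."""
    y.p = z
    y.sibling = z.child
    z.child = y
    z.degree += 1

# Linking two B_0 roots; the caller guarantees key[z] <= key[y] for min-heap order:
y, z = Node(8), Node(5)
binomial_link(y, z)
assert z.child is y and z.degree == 1
```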
Binomial Heap (Operations_3)
BINOMIAL-LINK(y, z)
1 p[y] = z
2 sibling[y] = child[z]
3 child[z] = y
4 degree[z] = degree[z] + 1

z 𝑦
5 8 25
Head[H]
75 80

100
Binomial Heap (Operations_3)
BINOMIAL-LINK(y, z)
1 p[y] = z
2 sibling[y] = child[z]
3 child[z] = y
4 degree[z] = degree[z] + 1

z 𝑦 z
5 8 25 5 25
Head[H] Head[H]
75 80 8 75 80
𝑦

100 100
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.

H1 12 7 15 H2 18 3 6

25 28 33 37 29 10 44

48 31
41 17

56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.

H1 12 7 15 H2 18 3 6

25 28 33 37 29 10 44

48 31
41 17

56

After Merging…….
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.

H 12 18 7 3 15 6

25 37 28 33 29 10 44

48 31
41 17

56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.

H 12 18 7 3 15 6

25 37 28 33 29 10 44

48 31
41 17

56
Fix Pointer
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.
x next → x
H 12 18 7 3 15 6

25 37 28 33 29 10 44

48 31
41 17

56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.
x next → x
H 12 18 7 3 15 6

25 37 28 33 29 10 44

48 31
41 17

56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.
x next → x
H 12 18 7 3 15 6

25 37 28 33 29 10 44

48 31
41 17

56
next → x
H 12 7 3 15 6

x 18 25 37 28 33 29 10 44

48 31
41 17

56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.
x next → x Sibling[next → x]
H 12 7 3 15 6

18 25 37 28 33 29 10 44

48 31
41 17

56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.
x next → x Sibling[next → x]
H 12 7 3 15 6

18 25 37 28 33 29 10 44

48 31
41 17

56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.
x next → x Sibling[next → x]
H 12 7 3 15 6

18 25 37 28 33 29 10 44

48 31
41 17

56
prev → x x next → x
H 12 7 3 15 6

18 25 37 28 33 29 10 44

48 31
41 17

56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.
prev → x x next → x
H 12 7 3 15 6

18 25 37 28 33 29 10 44

48 31
41 17

56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.
prev → x x next → x
H 12 7 3 15 6

18 25 37 28 33 29 10 44

48 31
41 17

56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.
prev → x x next → x
H 12 7 3 15 6

18 25 37 28 33 29 10 44

48 31
41 17

56
prev → x next → x
H 12 3 15 6
x
18 7 37 28 33 29 10 44

25 48 31
41 17

56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.

x next → x
H 12 3 15 6

18 7 37 28 33 29 10 44

25 48 31
41 17

56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.
x next → x
H 12 3 15 6

18 7 37 28 33 29 10 44

25 48 31
41 17

56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.
x next → x
H 12 3 15 6

18 7 37 28 33 29 10 44

25 48 31
41 17

56
H 12 3 6

18 15 7 37 29 10 44

28 33 25 48 31 17

41 56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.
x next → x
H 12 15 6

18 3 28 33 29 10 44

7 37 48 31
41 17

25 56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.
x next → x
H 12 3 6

18 15 7 37 29 10 44

28 33 25 48 31 17

41 56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.

H 12 3

18 6 15 7 37

29 10 44 28 33 25

48 31 41
17

56
Binomial Heap (Operations_3)
Example 1: Merge the following two Binomial heap H1 and H2.

H 12 3

18 6 15 7 37

29 10 44 28 33 25

48 31 41
17

56

Final
Binomial
Heap Tree
Binomial Heap (Operations_3)
BINOMIAL-HEAP-UNION(H1, H2)
1 H = MAKE-BINOMIAL-HEAP()
2 head[H] = BINOMIAL-HEAP-MERGE(H1, H2)
3 free the objects H1 and H2 but not the lists they point to
4 if head[H] = NIL
5 then return H
6 prev-x ← NIL
7 x ← head[H]
8 next-x ← sibling[x]
Binomial Heap (Operations_3)
9 while next-x ≠ NIL
10 do if (degree[x] ≠ degree[next-x]) or
(sibling[next-x] ≠ NIL and degree[sibling[next-x]] = degree[x])
11 then prev-x ← x ▹ Cases 1 and 2
12 x ← next-x ▹ Cases 1 and 2
13 else if key[x] ≤ key[next-x]
14 then sibling[x] ← sibling[next-x] ▹ Case 3
15 BINOMIAL-LINK(next-x, x) ▹ Case 3
16 else if prev-x = NIL ▹ Case 4
17 then head[H] ← next-x ▹ Case 4
18 else sibling[prev-x] ← next-x ▹ Case 4
19 BINOMIAL-LINK(x, next-x) ▹ Case 4
20 x ← next-x ▹ Case 4
21 next-x ← sibling[x]
22 return H
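Line 2 relies on BINOMIAL-HEAP-MERGE, which these slides do not spell out: it merges the two root lists into a single list sorted by root degree, exactly like the merge step of merge sort. A hedged Python sketch of that missing step (the helper names R and chain are hypothetical, introduced only for the demonstration):

```python
def merge_root_lists(h1, h2):
    """Merge two sibling-linked root lists into one list that is
    non-decreasing by degree (the contract BINOMIAL-HEAP-MERGE must satisfy)."""
    if h1 is None:
        return h2
    if h2 is None:
        return h1
    if h1.degree <= h2.degree:
        head, h1 = h1, h1.sibling
    else:
        head, h2 = h2, h2.sibling
    tail = head
    while h1 is not None and h2 is not None:
        if h1.degree <= h2.degree:
            tail.sibling, h1 = h1, h1.sibling
        else:
            tail.sibling, h2 = h2, h2.sibling
        tail = tail.sibling
    tail.sibling = h1 if h1 is not None else h2
    return head

class R:                       # hypothetical root stub, only for the demonstration
    def __init__(self, key, degree):
        self.key, self.degree, self.sibling = key, degree, None

def chain(*nodes):             # hypothetical helper: link nodes through sibling pointers
    for a, b in zip(nodes, nodes[1:]):
        a.sibling = b
    return nodes[0]

# Example 1 from the slides: H1 = 12(B0), 7(B1), 15(B2); H2 = 18(B0), 3(B1), 6(B2).
merged = merge_root_lists(chain(R(12, 0), R(7, 1), R(15, 2)),
                          chain(R(18, 0), R(3, 1), R(6, 2)))
order = []
p = merged
while p is not None:
    order.append(p.key)
    p = p.sibling
assert order == [12, 18, 7, 3, 15, 6]      # degrees 0, 0, 1, 1, 2, 2
```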
Binomial Heap (Operations_3)
9 while next-x ≠ NIL
10 do if (degree[x] ≠ degree[next-x]) or
(sibling[next-x] ≠ NIL and degree[sibling[next-x]] = degree[x])
11 then prev-x ← x ▹ Cases 1 and 2
12 x ← next-x ▹ Cases 1 and 2
13 else if key[x] ≤ key[next-x]
14 then sibling[x] ← sibling[next-x] ▹ Case 3
15 BINOMIAL-LINK(next-x, x) ▹ Case 3
16 else if prev-x = NIL ▹ Case 4
17 then head[H] ← next-x ▹ Case 4
18 else sibling[prev-x] ← next-x ▹ Case 4
19 BINOMIAL-LINK(x, next-x) ▹ Case 4
20 x ← next-x ▹ Case 4
21 next-x ← sibling[x]
22 return H
Binomial Heap (Operations_3)
Analysis of BINOMIAL-HEAP-UNION(H1, H2)
The running time of BINOMIAL-HEAP-UNION is O(lg n), where n is
the total number of nodes in binomial heaps H1 and H2.
We can see this as follows.
• Let H1 contain n1 nodes and H2 contain n2 nodes,
Hence, n = n1 + n2.
• Then H1 contains at most ⌊lg n1⌋+1 roots.
• and H2 contains at most ⌊lg n2⌋+1 roots,
• Hence H contains at most ⌊lg n1⌋+⌊lg n2⌋+2 ≤ 2⌊lg n⌋+2 = O(lg n)
roots immediately after the call of BINOMIAL-HEAP-MERGE.
• The time required to perform BINOMIAL-HEAP-MERGE is thus
O(lg n).
Binomial Heap (Operations_3)
Analysis of BINOMIAL-HEAP-UNION(H1, H2)
• Each iteration of the while loop takes O(1) time, and there are at
most ⌊lg n1⌋ + ⌊lgn2⌋ + 2 iterations.
(because each iteration either advances the pointers one position
down the root list of H or removes a root from the root list. )
• Hence the total time required to execute BINOMIAL-HEAP-UNION
is O(lg n).
Binomial Heap (Operations_4)
4. EXTRACT-MIN(H)
The following procedure extracts the node with the minimum
key from binomial heap H and returns a pointer to the extracted
node.
BINOMIAL-HEAP-EXTRACT-MIN(H)
1 find the root x with the minimum key in the root list of H,
and remove x from the root list of H
2 H′ ← call MAKE-BINOMIAL-HEAP()
3 reverse the order of the linked list of x's children, and set
head[H′] to point to the head of the resulting list
4 H ← call BINOMIAL-HEAP-UNION(H, H′)
5 return x
Binomial Heap (Operations_4)
Example:
Extract the node with minimum key from following Binomial
Heap.

H 37 10 1

41 27 13 6 16 12 25
28 8 14 29 26 23 18

11 17 38 42

27
Binomial Heap (Operations_4)
Example:
Here the minimum is 1 (i.e. x), so remove it

H 37 10 x 1

41 27 13 6 16 12 25
28 8 14 29 26 23 18

11 17 38 42

27
Binomial Heap (Operations_4)
Example:
Here the minimum is 1 (i.e. x), so remove it

H 37 10

41 27 13 6 16 12 25
28 8 14 29 26 23 18

11 17 38 42

27
Binomial Heap (Operations_4)
Example:
Here the minimum is 1 (i.e. x), so remove it

H 37 10

41 27 13 6 16 12 25
28 8 14 29 26 23 18

11 17 38 42

27
Binomial Heap (Operations_4)
Example:
After removing x, reverse the order of its child list and put it in H′

H 37 10 H′ 25 12 16 6

41 27 13 18 26 23 14
8 29
28 11 17
42 38

27
Binomial Heap (Operations_4)
Example:
Apply BINOMIAL-HEAP-UNION(H, H′) on the following two
Binomial Heap

H 37 10 H′ 25 12 16 6

41 27 13 18 26 23 14
8 29
28 11 17
42 38

27
Binomial Heap (Operations_4)
Example:
After merging of two binomial heap H and H′

H 25 37 12 10 16 6

41 18 27 13 26 14
23 8 29
28 11 17
42 38

27
Binomial Heap (Operations_4)
Example:
After merging of two binomial heap H and H′

H 25 37 12 10 16 6

41 18 27 13 26 14
23 8 29
28 11 17
42 38

Place x and 27
next → x
Binomial Heap (Operations_4)
Example:
After placing x and next → x

x next → x
H 25 37 12 10 16 6

41 18 27 13 26 14
23 8 29
28 11 17
42 38

27
Binomial Heap (Operations_4)
Example:
After placing x and next → x

x next → x
H 25 37 12 10 16 6

41 18 27 13 26 14
23 8 29
28 11 17
42 38

27
Apply case 1
Binomial Heap (Operations_4)
Example:
After applying case 1

prev → x x next → x
H 25 37 12 10 16 6

41 18 27 13 26 14
23 8 29
28 11 17
42 38

27
Binomial Heap (Operations_4)
Example:
After applying case 1

prev → x x next → x
H 25 37 12 10 16 6

41 18 27 13 26 14
23 8 29
28 11 17
42 38

27
Apply case 4
Binomial Heap (Operations_4)
Example:
After applying case 4

prev → x x next → x
H 25 12 10 16 6

18 27 13 26 14
37 23 8 29
28 11 17
41 42 38

27
Binomial Heap (Operations_4)
Example:
After applying case 4

prev → x x next → x
H 25 12 10 16 6

18 27 13 26 14
37 23 8 29
28 11 17
41 42 38

27
Apply case 2
Binomial Heap (Operations_4)
Example:
After applying case 2

prev → x x next → x
H 25 12 10 16 6

18 27 13 26 14
37 23 8 29
28 11 17
41 42 38

27
Binomial Heap (Operations_4)
Example:
After applying case 2

prev → x x next → x
H 25 12 10 16 6

18 27 13 26 14
37 23 8 29
28 11 17
41 42 38

27
Apply case 3
Binomial Heap (Operations_4)
Example:
After applying case 3

prev → x x next → x
H 25 12 10 6

18 16 27 13 14
37 8 29
26 23 28 11 17
41 38

42 27
Binomial Heap (Operations_4)
Example:
After applying case 3

prev → x x next → x
H 25 12 10 6

18 16 27 13 14
37 8 29
26 23 28 11 17
41 38

42 27
Apply case 4
Binomial Heap (Operations_4)
Example:
After applying case 4

H 25 12 6

37 18 10 8 14 29
16 27 13 11
41 17 38
26 23 28
27

42
Binomial Heap (Operations_5)
5. INSERT (H, x)
The BINOMIAL-HEAP-INSERT procedure inserts node x into
binomial heap H , assuming that x has already been allocated
and key[x] has already been filled in.
BINOMIAL-HEAP-INSERT(H, x)
1 H′ ← call MAKE-BINOMIAL-HEAP()
2 p[x] ← NIL
3 child[x] ← NIL
4 sibling[x] ← NIL
5 degree[x] ← 0
6 head[H′] ← x
7 H ← call BINOMIAL-HEAP-UNION(H, H′)
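Insertion therefore reduces to "wrap x as a one-node heap, then union". A compact sketch; the union parameter stands for a BINOMIAL-HEAP-UNION implementation and is assumed rather than shown:

```python
def binomial_heap_insert(head, x, union):
    """Insert node x into the heap whose root list starts at `head`.
    `union` is assumed to be a BINOMIAL-HEAP-UNION implementation taking two
    root-list heads and returning the merged head (hypothetical parameter)."""
    x.p = None
    x.child = None
    x.sibling = None
    x.degree = 0
    return union(head, x)      # x by itself is the root list of the one-node heap H'
```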
Binomial Heap (Operations_5)
Example:
inserts node x into binomial heap H

x
H 5 25 45 H’ 18

35 65 55

75
Binomial Heap (Operations_5)
Example:
inserts node x into binomial heap H

x
H 5 25 45 H’ 18

35 65 55

75

Apply Merge
Binomial Heap (Operations_5)
Example:
After Merging

H 5 18 25 45

35 65 55

75
Binomial Heap (Operations_5)
Example:
After Merging

H 5 18 25 45

35 65 55

Place x and 75
next → x
Binomial Heap (Operations_5)
Example:
After placing x and next → x

x next → x
H 5 18 25 45

35 65 55

75
Binomial Heap (Operations_5)
Example:
After placing x and next → x

x next → x
H 5 18 25 45

35 65 55

75
Apply Case 3
Binomial Heap (Operations_5)
Example:
After applying case 3
x next → x
H 5 25 45

18 35 55
65

75
Binomial Heap (Operations_5)
Example:
After applying case 3
x next → x
H 5 25 45

18 35 55
65

75

Apply Case 3
Binomial Heap (Operations_5)
Example:
After applying case 3
x next → x
H 5 45

25 18 65 55
35
75
Binomial Heap (Operations_5)
Example:
After applying case 3
x next → x
H 5 45

25 18 65 55
35
75

Apply Case 3
Binomial Heap (Operations_5)
Example:
After applying case 3

H 5
45 25 18
65 55 35

75
Binomial Heap (Operations_6)
6. DECREASE KEY (H, x, k)
The DECREASE KEY procedure decreases the key of a node x in a binomial
heap H to a new value k. It signals an error if k is greater than x's current key.
BINOMIAL-HEAP-DECREASE-KEY(H, x, k)
1 if k > key[x]
2 then error "new key is greater than current key"
3 key[x] ← k
4 y←x
5 z ← p[y]
6 while z ≠ NIL and key[y] < key[z]
7 do exchange key[y] ↔ key[z]
8 ▸ If y and z have satellite fields, exchange them, too.
9 y←z
10 z ← p[y]
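The key movement here is an ordinary bubble-up along parent pointers, exchanging keys rather than relinking nodes. A self-contained Python sketch (the class N is a hypothetical stand-in for a heap node):

```python
def binomial_heap_decrease_key(x, k):
    """Decrease the key stored in node x to k, then restore min-heap order by
    swapping keys upward along the parent chain."""
    if k > x.key:
        raise ValueError("new key is greater than current key")
    x.key = k
    y, z = x, x.p
    while z is not None and y.key < z.key:
        y.key, z.key = z.key, y.key     # exchange keys (and satellite data, if any)
        y, z = z, z.p

class N:                                # hypothetical node stub for the demonstration
    def __init__(self, key, p=None):
        self.key, self.p = key, p

root = N(8); mid = N(16, p=root); leaf = N(26, p=mid)
binomial_heap_decrease_key(leaf, 7)     # 7 bubbles all the way up to the root
assert (root.key, mid.key, leaf.key) == (7, 8, 16)
```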
Binomial Heap (Operations_6)
Example:
Decreases the key of a node x (i.e. 26) in a binomial heap H to a
new value k (i.e. 7).

H 25 12 6

37 18 10 8 14 29
16 27 13 11
41 17 38
x 26 23 28
27

42
Binomial Heap (Operations_6)
Example:
Decreases the key of a node x (i.e. 26) in a binomial heap H to a
new value k (i.e. 7).

H 25 12 6

37 18 10 8 14 29
16 27 13 11
41 17 38
x 26 23 28
27

42
Replace the
key[x] with k
Binomial Heap (Operations_6)
Example:
Decreases the key of a node x (i.e. 26) in a binomial heap H to a
new value k (i.e. 7).

H 25 12 6

37 18 10 8 14 29
16 27 13 11
41 17 38
x 7 23 28
27

42
Binomial Heap (Operations_6)
Example:
Decreases the key of a node x (i.e. 26) in a binomial heap H to a
new value k (i.e. 7).

H 25 12 6

37 18 10 8 14 29
16 27 13 11
41 17 38
x 7 23 28
27

42

Set the pointer


y and z
Binomial Heap (Operations_6)
Example:
Decreases the key of a node x (i.e. 26) in a binomial heap H to a
new value k (i.e. 7).

H 25 12 6

37 18 10 8 14 29
z 16 27 13 11
41 17 38
x 7 23 28
y 27

42
Binomial Heap (Operations_6)
Example:
Decreases the key of a node x (i.e. 26) in a binomial heap H to a
new value k (i.e. 7).

H 25 12 6

37 18 10 8 14 29
z 16 27 13 11
41 17 38
x 7 23 28
y 27

42
Exchange key[y] with
key[z] and change the
position of y and z
Binomial Heap (Operations_6)
Example:
Decreases the key of a node x (i.e. 26) in a binomial heap H to a
new value k (i.e. 7).

H 25 12 6
z
37 18 10 8 14 29
y
7 27 13 11
41 17 38
x 16 23 28
27

42
Binomial Heap (Operations_6)
Example:
Decreases the key of a node x (i.e. 26) in a binomial heap H to a
new value k (i.e. 7).

H 25 12 6
z
37 18 10 8 14 29
y
7 27 13 11
41 17 38
x 16 23 28
27

42
Exchange key[y] with
key[z] and change the
position of y and z
Binomial Heap (Operations_6)
Example:
Decreases the key of a node x (i.e. 26) in a binomial heap H to a
new value k (i.e. 7).
z
H 25 12 6
y
37 18 7 8 14 29
10 27 13 11
41 17 38
x 16 23 28
27

42
Binomial Heap (Operations_7)
7. DELETE(H, x)
The delete procedure deletes the node x, under the assumption that no
node currently in the binomial heap has a key of −∞.

BINOMIAL-HEAP-DELETE(H, x)
1 BINOMIAL-HEAP-DECREASE-KEY(H, x, -∞)
2 BINOMIAL-HEAP-EXTRACT-MIN(H)
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞.

H 25 12 16 6

18 26 23 8 14 29

11 x 17
42 38

27
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 25 12 16 6

18 26 23 8 14 29

11 17
42 x 38

27

Replace the
key[x] with k
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 25 12 16 6

18 26 23 8 14 29

-∞ 17
42 x 38

27
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 25 12 16 6

18 26 23 8 14 29

-∞ 17
42 x 38

27

Set the pointer


y and z
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 25 12 16 6

18 26 23 8 14 29
z
-∞ 17
42 y x 38

27
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 25 12 16 6

18 26 23 8 14 29
z
-∞ 17
42 y x 38

27
Exchange key[y] with
key[z] and change the
position of y and z
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 12 16
25 z 6

18 26 23 -∞ 14 29
y
8 17
42 x 38

27
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 12 16
25 z 6

18 26 23 -∞ 14 29
y
8 17
42 x 38

27 Exchange
key[y] with
key[z] and
change the
position of y
and z
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 25 12 16 -∞ y

18 26 23 6 14 29

8 17
42 x 38

27
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 25 12 16 -∞ y

18 26 23 6 14 29

8 17
42 x 38

27
Delete y
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 25 12 16

18 26 23 6 14 29

8 17
42 38

27

After removing x, reverse the order of its child list and put it in H′


Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 25 12 16 H’ 29 14 6

18 26 23 38 8 17

42
27
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 25 12 16 H’ 29 14 6

18 26 23 38 8 17

42
27

Merge the two binomial
heaps
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 25 29 12 14 16 6

38 26 23 8 17
18

42 27
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 25 29 12 14 16 6

38 26 23 8 17
18

42 27

Place x and
next → x
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).
x next → x
H 25 29 12 14 16 6

38 26 23 8 17
18

42 27
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).
x next → x
H 25 29 12 14 16 6

38 26 23 8 17
18

42 27

Apply case 3
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).
x next → x
H 25 12 14 16 6

29 38 26 23 8 17
18

42 27
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).
x next → x
H 25 12 14 16 6

29 38 26 23 8 17
18

42 27

Apply case 2
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).
prev → x x next → x
H 25 12 14 16 6

29 38 26 23 8 17
18

42 27
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).
prev → x x next → x
H 25 12 14 16 6

29 38 26 23 8 17
18

42 27

Apply case 3
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).
prev → x x next → x
H 25 12 16 6

29 14 18 26 23 8 17

38 42 27
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).
prev → x x next → x
H 25 12 16 6

29 14 18 26 23 8 17

38 42 27

Apply case 2
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).
prev → x x next → x
H 25 12 16 6

29 14 18 26 23 8 17

38 42 27
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).
prev → x x next → x
H 25 12 16 6

29 14 18 26 23 8 17

38 42 27

Apply case 4
Binomial Heap (Operations_7)
Example: Delete the key x with an assumption that no node
currently in the binomial heap has a key of -∞(i.e. k).

H 25 12 6

29 14 18 16 8 17

38 26 23 27

42
Binomial Heap
The running times of binomial-heap operations, compared with those of a
binary heap, are given below.

Procedure       Binary heap (worst-case)   Binomial heap (worst-case)
INSERT          Θ(lg n)                    O(lg n)
MINIMUM         Θ(1)                       O(lg n)
EXTRACT-MIN     Θ(lg n)                    Θ(lg n)
UNION           Θ(n)                       O(lg n)
DECREASE-KEY    Θ(lg n)                    Θ(lg n)
DELETE          Θ(lg n)                    Θ(lg n)
Design and Analysis of Algorithm

Advanced Data Structure


(Fibonacci Heap)

LECTURE 45 - 48
Overview

• This section presents a data structure known as a
mergeable heap, which supports the following
seven operations.
• MAKE-FIBO-HEAP()
• FIB-HEAP-INSERT()
• FIB-HEAP-MINIMUM()
• FIB-HEAP-EXTRACT-MIN()
• FIB-HEAP-UNION()
• FIB-HEAP DECREASE-KEY()
• FIB-HEAP-DELETE()
Fibonacci Heap
• The Fibonacci heap was designed and developed by
Fredman and Tarjan in the year 1986.
• It is an ingenious data structure with an ingenious analysis.
• Like a binomial heap, a Fibonacci heap is a
collection of trees, but with a looser structure.
• Fibonacci heaps are called lazy data structures
because they delay work as long as possible,
using the mark field (shown as a black node).
Fibonacci Heap
Properties:
• Unlike a binomial heap, a Fibonacci heap can have
many trees of the same degree, and a tree need not
have exactly 2^k nodes.
• Trees in a Fibonacci heap are rooted but unordered.
• Roots and siblings are kept in circular doubly linked lists.
• Each node has a degree: degree[x] is the number of
children of x.
• Each node has mark[x], either true (black node)
or false. A newly created node is always
unmarked.
Fibonacci Heap(Structure)
• Structure of Fibonacci heaps
• Like a binomial heap, a Fibonacci heap is a
collection of min-heap-ordered trees. The trees in
a Fibonacci heap are not constrained to be
binomial trees. The picture shows below is an
example of a Fibonacci heap.
H.min

23 7 3 17 24

18 52 38 30 26 46

39 41 35
Fibonacci Heap(Structure)
• Nodes within a Fibonacci heap can be removed
from their tree without restructuring them.
• So the order does not necessarily indicate the
maximum height of the tree or the number of nodes it
contains. Some examples of orders 0, 1, 2 and 3 are given
below for easy understanding.
23 17 17 24 24 24 3

30 30 26 46 26 46 26 46 18 52 41

42 35 35 55 39 44

Order0 Order1 Order2 Order3


Fibonacci Heap (Structure)
• Each node x contain the following fields.
• x.p → points to its parent
• x.child → points to any one of its children and
children of x are linked together in a circular
doubly linked list.
• x.left, x.right → points to its left and right
siblings.
• x.degree → number of children in the child list
of x
Fibonacci Heap(Structure)
• x.mark → indicates whether node x has lost a
child since the last time x was made the child
of another node
• H.min → points to the root of the tree
containing a minimum key
• H.n → number of nodes in H
Fibonacci Heap(Structure)
• An example of Fibonacci Heap with the help of
doubly circular link list is given below.
H.min

23 7 3 17 24

18 52 38 30 26 46

39 41 35
Fibonacci Heap
• The Fibonacci heap data structure serves a dual
purpose.
• First, it supports a set of operations that
constitutes what is known as a “mergeable
heap.”
• Second, several Fibonacci-heap operations run
in constant amortized time, which makes this
data structure well suited for applications that
invoke these operations frequently.
Fibonacci Heap
• A mergeable heap is any data structure that
supports the following seven operations, in which
each element has a key:
1. MAKE-HEAP() creates and returns a new heap containing no
elements.
2. MINIMUM(H) returns a pointer to the node in heap H whose
key is minimum.
3. UNION(H1, H2) creates and returns a new heap that contains
all the nodes of heaps H1 and H2. Heaps H1 and H2 are
"destroyed" by this operation.
4. EXTRACT-MIN(H) deletes the node from heap H whose key is
minimum, returning a pointer to the node.
Fibonacci Heap
5. INSERT(H, x) inserts node x, whose key field has already been
filled in, into heap H.
6. DECREASE-KEY(H, x, k) assigns to node x within heap H the
new key value k, which is assumed to be no greater than its
current key value.
7. DELETE(H, x) deletes node x from heap H
Fibonacci Heap
Heaps Analysis
Operation       Binary     Binomial    Fibonacci †
make-heap       1          1           1
insert          log N      log N       1
find-min        1          log N       1
delete-min      log N      log N       log N
union           N          log N       1
decrease-key    log N      log N       1
delete          log N      log N       log N
is-empty        1          1           1

Note: † amortized
Fibonacci Heap
• Amortized Analysis
• Analyze a sequence of operations on a data
structure.
• It shows that although some individual operations
may be expensive, on average the cost per
operation is small.
• Average in this context does not mean that we’re
averaging over a distribution of inputs.
• No probability is involved.
• We’re talking about average cost in the worst
case.
Fibonacci Heap
• The Three most common techniques used are
• Aggregate Analysis
• Accounting Method
• Potential Method
Fibonacci Heap
• The Three most common techniques used are
• Aggregate Analysis
• Accounting Method
• Potential Method

• The potential method is used to analyze the


performance of Fibonacci heap operations.
Fibonacci Heap
• For a given Fibonacci heap H:
  • t(H) = the number of trees in the root list of H.
  • m(H) = the number of marked nodes in H.
• The potential of the Fibonacci heap H is then defined by
  Φ(H) = t(H) + 2·m(H)
H.min
23 7 3 17 24

18 52 38 30 26 46

39 41 35

In the above example, t(H) = 5 and m(H) = 3. Hence
Φ(H) = 5 + 2·3 = 11.
Fibonacci Heap
• Fibonacci heap application begins with no heaps.
Hence
• The initial potential is 0(zero) and the potential
is nonnegative at all subsequent times.
• An upper bound on the total amortized cost is
thus an upper bound on the total actual cost for
the sequence of operations.
• Maximum degree
• The amortized analysis is performed with a
known upper bound D(n) on the maximum
degree of any node in an n-node Fibonacci heap
(i.e. D(n) ≤ ⌊lg n⌋).
Fibonacci Heap (Operations)
Creating a new Fibonacci Heap:
• To make an empty Fibonacci heap, the MAKE-
FIB-HEAP procedure allocates and returns the
Fibonacci heap object H ,
• where n[H] = 0 and min[H] = NIL; there are
no trees in H .
• Because t(H) = 0 and m(H) = 0 , the
potential of the empty Fibonacci heap is
• Φ(𝐻) = 0.
• The amortized cost of MAKE-FIB-HEAP is thus
equal to its O(1) actual cost.
Fibonacci Heap (Operations)
Inserting a node:
• To insert a node in Fibonacci heap the following
steps will be used.
Step1 :Create a new singleton tree.
Step2 : Add to left of min pointer.
Step3 : Update min pointer.
Fibonacci Heap (Operations)
Inserting a node:
• Example: Insert the node 21 in following
Fibonacci heap

H.min
23 7 3 17 24

18 52 38 30 26 46

39 41 35
Fibonacci Heap (Operations)
Inserting a node:
• Example: Insert the node 21 in following
Fibonacci heap
Insert
21

H.min
23 7 3 17 24

18 52 38 30 26 46

39 41 35
Fibonacci Heap (Operations)
Inserting a node:
• Example: Insert the node 21 in following
Fibonacci heap

23 7 21 3 17 24

18 52 38 30 26 46

39 41 35
Fibonacci Heap (Operations)
Inserting a node:
• The following procedure inserts node x into Fibonacci heap H , assuming
that the node has already been allocated and that key[x] has already
been filled in.
FIB-HEAP-INSERT(H, x)
1 degree[x] ← 0
2 p[x] ← NIL
3 child[x] ← NIL
4 left[x] ← x
5 right[x] ← x
6 mark[x] ← FALSE
7 concatenate the root list containing x with root list H
8 if min[H] = NIL or key[x] < key[min[H]]
9 then min[H] ← x
10 n[H] ← n[H] + 1
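Because the root list is a circular, doubly linked list, the insertion itself is just a splice next to min[H]. A minimal sketch under assumed FibNode/FibHeap types (field names follow the slides):

```python
class FibNode:
    def __init__(self, key):
        self.key = key
        self.degree = 0
        self.p = self.child = None
        self.mark = False
        self.left = self.right = self     # a new node is its own circular list

class FibHeap:
    def __init__(self):
        self.min = None                   # min[H]
        self.n = 0                        # n[H]

def fib_heap_insert(H, x):
    """Splice x into the circular root list next to min[H] and update the minimum."""
    if H.min is None:
        H.min = x
    else:
        x.right = H.min
        x.left = H.min.left
        H.min.left.right = x
        H.min.left = x
        if x.key < H.min.key:
            H.min = x
    H.n += 1

H = FibHeap()
for k in (23, 7, 21, 3, 17, 24):
    fib_heap_insert(H, FibNode(k))
assert H.min.key == 3 and H.n == 6
```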
Fibonacci Heap (Operations)
Inserting a node (Analysis)
To determine the amortized cost of FIB-HEAP-INSERT,
let
𝐻 be the input Fibonacci heap and
𝐻′ be the resulting Fibonacci heap.
Then,
𝑡(𝐻′) = 𝑡(𝐻) + 1 and
𝑚(𝐻′) = 𝑚(𝐻),
and the difference in potential is
((𝑡(𝐻) + 1) + 2 𝑚(𝐻)) − (𝑡(𝐻) + 2 𝑚(𝐻)) = 1.
Since the actual cost is 𝑂(1),
the amortized cost = Actual cost+ Difference in potential cost (Δ(Φ))
= 𝑂(1) + 1 = 𝑂(1).
Fibonacci Heap (Operations)
Finding the minimum node
• The minimum node of a Fibonacci heap H is given by the pointer
min[H],

• So the actual time for finding the minimum node is O(1). Because
the potential of H does not change, and the amortized cost of this
operation is equal to its O(1) actual cost.
Fibonacci Heap (Operations)
Uniting two Fibonacci heaps (Union)
This procedure unites Fibonacci heaps H1 and H2,
destroying H1 and H2 in the process. It simply
concatenates the root lists of H1 and H2 and then
determines the new minimum node.

Basic Idea:

Step 1: Concatenate two Fibonacci heaps.

Step 2: Root lists are circular, doubly linked lists.


Fibonacci Heap (Operations)
Uniting two Fibonacci heaps
FIB-HEAP-UNION(H1, H2)
1 H ← MAKE-FIB-HEAP()
2 min[H] ← min[H1]
3 Concatenate the root list of H2 with the root list of H
4 if (min[H1] = NIL) or (min[H2] ≠ NIL) and min[H2] < min[H1])
5 then min[H] ← min[H2]
6 n[H] ← n[H1] + n[H2]
7 free the objects H1 and H2
8 return H
Fibonacci Heap (Operations)
Uniting two Fibonacci heaps
Example : Apply FIB-HEAP-UNION(H1, H2) for uniting the following two
Fibonacci Heaps
𝑯𝟏 23 7 21 3 𝑯𝟐 17 24

18 52 38 30 26 46

39 41 35
Fibonacci Heap (Operations)
Uniting two Fibonacci heaps
Example : Apply FIB-HEAP-UNION(H1, H2) for uniting the following two
Fibonacci Heaps
𝑯𝟏 23 7 21 3 𝑯𝟐 17 24

18 52 38 30 26 46

39 41 35

𝑯 23 7 21 3 17 24

18 52 38 30 26 46

39 41 35
Fibonacci Heap (Operations)
Uniting two Fibonacci heaps (Analysis)
FIB-HEAP-UNION(H1, H2)
1 H ← MAKE-FIB-HEAP()
2 min[H] ← min[H1]
3 Concatenate the root list of H2 with the root list of H
4 if (min[H1] = NIL) or (min[H2] ≠ NIL and min[H2] < min[H1])
5 then min[H] ← min[H2]
6 n[H] ← n[H1] + n[H2]
7 free the objects H1 and H2
8 return H
Lines 1-3 concatenate the root lists of H1 and H2 into a new root list H.
Lines 2, 4, and 5 set the minimum node of H ,
and line 6 sets n[H] to the total number of nodes.
The Fibonacci heap objects H1 and H2 are freed in line 7, and line 8 returns
the resulting Fibonacci heap H.
Fibonacci Heap (Operations)
Uniting two Fibonacci heaps (Analysis)
As in the FIB-HEAP-INSERT procedure, no consolidation of trees
occurs. The change in potential is
= Φ(𝐻) − (Φ(𝐻1) + Φ(𝐻2))
= (𝑡(𝐻) + 2𝑚(𝐻)) − ((𝑡(𝐻1) + 2 𝑚(𝐻1)) + (𝑡(𝐻2) + 2 𝑚(𝐻2)))
= 0,
because 𝑡(𝐻) = 𝑡(𝐻1) + 𝑡(𝐻2) and 𝑚 𝐻 = 𝑚 𝐻1 + 𝑚 𝐻2 .
The amortized cost of FIB-HEAPUNION is therefore equal to its
𝑂(1) actual cost.
Fibonacci Heap (Operations)
Extracting the minimum node
The process of extracting the minimum node is the most
complicated of the operations presented in this section.
Basic Idea:
Step 1: Delete min and concatenate its children into root
list.
Step 2: Consolidate trees so that no two roots have same
degree.
Fibonacci Heap (Operations)
Extracting the minimum node
FIB-HEAP-EXTRACT-MIN(H)
1 z ← min[H]
2 if z ≠ NIL
3 then for each child x of z
4 do add x to the root list of H
5 p[x] ← NIL
6 remove z from the root list of H
7 if z = right[z]
8 then min[H] ← NIL
9 else min[H] ← right[z]
10 CONSOLIDATE(H) // merge the tree with same degree
11 n[H] ← n[H] - 1
12 return z.
Fibonacci Heap (Operations)
Extracting the minimum node
Example: Extract the minimum element from the following Fibonacci
heap.

H.min
23 7 21 3 17 24

18 52 38 30 26 46

39 41 35
Fibonacci Heap (Operations)
Extracting the minimum node
H.min
23 7 21 3 17 24

18 52 38 30 26 46

Delete 3
39 41 35
Fibonacci Heap (Operations)
Extracting the minimum node
H.min
23 7 21 3 17 24

18 52 38 30 26 46

After 39 41 35
Delete 3
H.min

23 7 21 18 52 38 17 24

39 41 30 26 46

35
Fibonacci Heap (Operations)
Extracting the minimum node
H.min

23 7 21 18 52 38 17 24
Apply
Consolid 39 41 30 26 46
ate()
35
Fibonacci Heap (Operations)
Extracting the minimum node
H.min

23 7 21 18 52 38 17 24
Apply
Consolid 39 41 30 26 46
ate()
0 1 2 3 35
A

w,x
23 7 21 18 52 38 17 24

39 41 30 26 46

35
Fibonacci Heap (Operations)
Extracting the minimum node
0 1 2 3
A

w, x
23 7 21 18 52 38 17 24

39 41 30 26 46

0 1 2 3 35
A
w, x
23 7 21 18 52 38 17 24

39 41 30 26 46

35
Fibonacci Heap (Operations)
Extracting the minimum node
0 1 2 3
A
w, x
23 7 21 18 52 38 17 24

39 41 30 26 46

35
0 1 2 3
A

w,x
23 7 21 18 52 38 17 24

39 41 30 26 46

35
Fibonacci Heap (Operations)
Extracting the minimum node
0 1 2 3
A

w,x
23 7 21 18 52 38 17 24

39 41 30 26 46

0 1 2 3 35
A

x
7 21 18 52 38 17 24

23 w
39 41 26
30 46

35
Fibonacci Heap (Operations)
Extracting the minimum node
0 1 2 3
A

x
7 21 18 52 38 17 24

23 w
39 41 26
30 46

0 1 2 3
A 35

x
7 21 18 52 38 24

w
23 17 39 41 26 46

30
35
Fibonacci Heap (Operations)
Extracting the minimum node
0 1 2 3
A

x
7 21 18 52 38 24

w
23 17 39 41 26 46

0 1 2 3
30
A 35

x
7 21 18 52 38

24 17 23 w 39 41

26 46 30

35
Fibonacci Heap (Operations)
Extracting the minimum node
0 1 2 3
A

x
7 21 18 52 38

24 17 23 w 39 41

26 46 30 0 1 2 3
A
35
w, x
7 21 18 52 38

24 17 23 39 41

26 46 30

35
Fibonacci Heap (Operations)
Extracting the minimum node 0 1 2 3
A

w, x
7 21 18 52 38

24 17 23 39 41

26 46 30
0 1 2 3
35 A

w, x
7 21 18 52 38

24 17 23 39 41

26 46 30

35
Fibonacci Heap (Operations)
Extracting the minimum node 0 1 2 3
A

w, x
7 21 18 52 38

24 17 23 39 41
0 1 2 3
26 46 30 A
35
w, x
7 18 38

24 17 23 21 39
41

26 46 30 52
35
Fibonacci Heap (Operations)
Extracting the minimum node 0 1 2 3
A

w, x
7 18 38

24 17 23 21 39
41

26 46 30 52 0 1 2 3
A
35

w, x
7 18 38

24 17 23 21 39
41

26 46 30 52
35
Fibonacci Heap (Operations)
Extracting the minimum node 0 1 2 3
A

w, x
7 18 38

24 17 23 21 39
41

26 46 30 52
35
H.min

7 18 38

24 17 23 21 39
41

26 46 30 52
35
Fibonacci Heap (Operations)
Extracting the minimum node
CONSOLIDATE(H)
1 for i ← 0 to D(n[H])
2 do A[i] ← NIL
3 for each node w in the root list of H
4 do x ← w
5 d ← degree[x]
6 while A[d] ≠ NIL
7 do y ← A[d] ▹ Another node with the same degree as x.
8 if key[x] > key[y]
9 then exchange x ↔ y
10 FIB-HEAP-LINK(H, y, x)
11 A[d] ← NIL
12 d←d+1
13 A[d] ← x
Fibonacci Heap (Operations)
Extracting the minimum node
14 min[H] ← NIL
15 for i ← 0 to D(n[H])
16 do if A[i] ≠ NIL
17 then add A[i] to the root list of H
18 if min[H] = NIL or key[A[i]] < key[min[H]]
19 then min[H] ← A[i]

FIB-HEAP-LINK(H, y, x)
1 remove y from the root list of H
2 make y a child of x, incrementing degree[x]
3 mark[y] ← FALSE
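The heart of CONSOLIDATE is the degree array A: keep linking roots of equal degree until every degree occurs at most once. The following simplified sketch works on a plain Python list of roots with children stored in lists, purely to illustrate the idea; it is not a faithful rendering of the circular-list bookkeeping above.

```python
class RNode:
    def __init__(self, key):
        self.key = key
        self.degree = 0
        self.children = []                 # simplified: children kept in a Python list

def consolidate(roots):
    """Link roots of equal degree until every degree appears at most once.
    `roots` is a plain list of RNode objects; returns the new root list."""
    A = {}                                 # degree -> the single root of that degree
    for w in roots:
        x, d = w, w.degree
        while d in A:
            y = A.pop(d)                   # another root with the same degree as x
            if x.key > y.key:
                x, y = y, x                # the smaller key stays on top
            x.children.append(y)           # FIB-HEAP-LINK: y becomes a child of x
            x.degree += 1
            d += 1
        A[d] = x
    return list(A.values())

# Eight degree-0 roots collapse into a single tree of degree 3 (8 = 2^3 nodes):
new_roots = consolidate([RNode(k) for k in (23, 7, 21, 18, 52, 38, 17, 24)])
assert len(new_roots) == 1 and new_roots[0].key == 7 and new_roots[0].degree == 3
```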
Fibonacci Heap (Operations)
Extracting the minimum node(Analysis)
Notation:
𝐷(𝑛) = max 𝑑𝑒𝑔𝑟𝑒𝑒 𝑜𝑓 𝑎𝑛𝑦 𝑛𝑜𝑑𝑒 𝑖𝑛 𝐹𝑖𝑏𝑜𝑛𝑎𝑐𝑐𝑖 ℎ𝑒𝑎𝑝 𝑤𝑖𝑡ℎ 𝑛 𝑛𝑜𝑑𝑒𝑠.
𝑡(𝐻) = 𝑁𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑡𝑟𝑒𝑒𝑠 𝑖𝑛 ℎ𝑒𝑎𝑝 𝐻.
𝑚(𝐻) = 𝑡ℎ𝑒 𝑛𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑚𝑎𝑟𝑘𝑒𝑑 𝑛𝑜𝑑𝑒𝑠 𝑖𝑛 𝐻.
Φ(𝐻) = 𝑡(𝐻) + 2𝑚(𝐻).
Actual cost :
𝑂 𝐷 𝑛 : for loop in Fib-Heap-Extract-Min
After extracting the minimum node, t(H) − 1 of the original trees remain, and
at most D(n) children of the extracted node join the root list.
Hence, the current size of the root list is at most D(n) + t(H) − 1.
Total actual cost: O(D(n) + t(H))
Fibonacci Heap (Operations)
Extracting the minimum node(Analysis)
Potential before extracting : 𝑡(𝐻) + 2𝑚(𝐻)
Potential after extracting : ≤ 𝐷(𝑛) + 1 + 2𝑚(𝐻)
At most D(n)+1 nodes remain on the list and no nodes
become marked.
Thus the amortized cost is at most:
= 𝑂(𝐷(𝑛)) + 𝑡(𝐻) + [(𝐷(𝑛) + 1 + 2𝑚(𝐻)) – (𝑡(𝐻) + 2𝑚(𝐻))]
= 𝑂(𝐷(𝑛) + 𝑡(𝐻) − 𝑡(𝐻))
= 𝑂(𝐷(𝑛))
= 𝑂(log 𝑛)
[Note: an n-node binomial heap H consists of at most
⌊lg n⌋ + 1 binomial trees.]
Fibonacci Heap (Operations)

Decreasing a key

The pseudocode for the operation FIB-HEAP-DECREASE-KEY
works under the assumption that removing a node from a linked
list does not change any of the structural fields in the removed
node.
Fibonacci Heap (Operations)
Decreasing a key
Basic Idea:
Case 0: min-heap property not violated.
• decrease key of x to k
• change heap min pointer if necessary
Example:
min
7 18 38
Decrease
46 to 45.
24 17 23 21 39 41

26 46 30 52
x
35
Fibonacci Heap (Operations)
Decreasing a key
Basic Idea:
Case 0: min-heap property not violated.
• decrease key of x to k
• change heap min pointer if necessary
Example:
min
7 18 38
Decrease
46 to 45.
24 17 23 21 39 41

26 46
45 30 52
x
35
Fibonacci Heap (Operations)
Decreasing a key
Basic Idea:
Case 1: parent of x is unmarked.
• decrease key of x to k
• cut off link between x and its parent
• mark parent
• add tree rooted at x to root list, updating heap min
pointer
Fibonacci Heap (Operations)
Decreasing a key and deleting node
Basic Idea:
Case 1: parent of x is unmarked.
• decrease key of x to k
• cut off link between x and its parent
• mark parent
• add tree rooted at x to root list, updating heap min pointer
min
7 18 38
Decrease
45 to 15.
24 17 23 21 39 41

26 46
45 30 52

x
35
Fibonacci Heap (Operations)
Decreasing a key
Basic Idea:
Case 1: parent of x is unmarked.
• decrease key of x to k
• cut off link between x and its parent
• mark parent
• add tree rooted at x to root list, updating heap min pointer
min
7 18 38
Decrease
45 to 15.
24 17 23 21 39 41

26 15 30 52

x
35
Fibonacci Heap (Operations)
Decreasing a key
Basic Idea:
Case 1: parent of x is unmarked.
• decrease key of x to k
• cut off link between x and its parent
• mark parent
• add tree rooted at x to root list, updating heap min pointer
min
7 18 38
Decrease
45 to 15.
24 17 23 21 39 41

26 15 30 52

x
35
Fibonacci Heap (Operations)
Decreasing a key
Basic Idea:
Case 1: parent of x is unmarked.
• decrease key of x to k
• cut off link between x and its parent
• mark parent
• add tree rooted at x to root list, updating heap min pointer
min
7 18 38
Decrease
45 to 15.
24 17 23 21 39 41

26 15 30 52

x
35
Fibonacci Heap (Operations)
Decreasing a key
Basic Idea:
Case 1: parent of x is unmarked.
• decrease key of x to k
• cut off link between x and its parent
• mark parent
• add tree rooted at x to root list, updating heap min pointer
min
15 7 18 38

x
24 17 23 21 39 41

26 30 52
Decrease
45 to 15. 35
Fibonacci Heap (Operations)
Decreasing a key
Basic Idea:
Case 2: parent of x is marked.
• decrease key of x to k
• cut off link between x and its parent p[x], and add x to
root list
• cut off link between p[x] and p[p[x]], add p[x] to root
list
> If p[p[x]] unmarked, then mark it.
> If p[p[x]] marked, cut off p[p[x]], unmark, and repeat.
Fibonacci Heap (Operations)
Decreasing a key
Basic Idea:
Case 2: parent of x is marked.
• decrease key of x to k
• cut off link between x and its parent p[x], and add x to root list
• cut off link between p[x] and p[p[x]], add p[x] to root list
> If p[p[x]] unmarked, then mark it.
> If p[p[x]] marked, cut off p[p[x]], unmark, and repeat.
min
15 7 18 38

24 17 23 21 39 41

Decrease 26 30 52
35 to 5.

35
x
Fibonacci Heap (Operations)
Decreasing a key
Basic Idea:
Case 2: parent of x is marked.
• decrease key of x to k
• cut off link between x and its parent p[x], and add x to root list
• cut off link between p[x] and p[p[x]], add p[x] to root list
> If p[p[x]] unmarked, then mark it.
> If p[p[x]] marked, cut off p[p[x]], unmark, and repeat.
min
15 7 18 38

24 17 23 21 39 41

Decrease 26 30 52
35 to 5.

5
x
Fibonacci Heap (Operations)
Decreasing a key
Basic Idea:
Case 2: parent of x is marked.
• decrease key of x to k
• cut off link between x and its parent p[x], and add x to root list
• cut off link between p[x] and p[p[x]], add p[x] to root list
> If p[p[x]] unmarked, then mark it.
> If p[p[x]] marked, cut off p[p[x]], unmark, and repeat.
min
15 5 7 18 38
x

24 17 23 21 39 41

Decrease 26 30 52
35 to 5.
Fibonacci Heap (Operations)
Decreasing a key
Basic Idea:
Case 2: parent of x is marked.
• decrease key of x to k
• cut off link between x and its parent p[x], and add x to root list
• cut off link between p[x] and p[p[x]], add p[x] to root list
> If p[p[x]] unmarked, then mark it.
> If p[p[x]] marked, cut off p[p[x]], unmark, and repeat.
min
15 5 26 7 18 38
x

24 17 23 21 39 41

Decrease 30 52
35 to 5.
Fibonacci Heap (Operations)
Decreasing a key
Basic Idea:
Case 2: parent of x is marked.
• decrease key of x to k
• cut off link between x and its parent p[x], and add x to root list
• cut off link between p[x] and p[p[x]], add p[x] to root list
> If p[p[x]] unmarked, then mark it.
> If p[p[x]] marked, cut off p[p[x]], unmark, and repeat.
min
15 5 26 24 7 18 38
x

17 23 21 39 41

Decrease 30 52
35 to 5.
Fibonacci Heap (Operations)
Decreasing a key
FIB-HEAP-DECREASE-KEY(H, x, k)
1 if k > key[x]
2 then error "new key is greater than current key"
3 key[x] ← k
4 y ← p[x]
5 if y ≠ NIL and key[x] < key[y]
6 then CUT(H, x, y)
7 CASCADING-CUT(H, y)
8 if key[x] < key[min[H]]
9 then min[H] ← x
Fibonacci Heap (Operations)
Decreasing a key
CUT(H, x, y)
1 remove x from the child list of y, decrementing degree[y]
2 add x to the root list of H
3 p[x] ← NIL
4 mark[x] ← FALSE

CASCADING-CUT(H, y)
1 z ← p[y]
2 if z ≠ NIL
3 then if mark[y] = FALSE
4 then mark[y] ← TRUE
5 else CUT(H, y, z)
6 CASCADING-CUT(H, z)
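A simplified sketch of the cut machinery, again over a list-of-children representation assumed only for illustration:

```python
class FNode:                               # hypothetical node stub for the demonstration
    def __init__(self, key, p=None):
        self.key, self.p = key, p
        self.children, self.degree, self.mark = [], 0, False

def cut(roots, x, y):
    """Remove x from the child list of y and make it an (unmarked) root."""
    y.children.remove(x)
    y.degree -= 1
    roots.append(x)
    x.p = None
    x.mark = False

def cascading_cut(roots, y):
    """Walk up from y: mark the first unmarked ancestor, and keep cutting while
    the ancestors are already marked (so no node silently loses two children)."""
    z = y.p
    if z is not None:
        if not y.mark:
            y.mark = True
        else:
            cut(roots, y, z)
            cascading_cut(roots, z)

roots = []
a = FNode(7)
b = FNode(24, p=a); a.children.append(b); a.degree = 1
c = FNode(26, p=b); b.children.append(c); b.degree = 1
b.mark = True                              # b has already lost one child earlier
cut(roots, c, b)                           # as in DECREASE-KEY: CUT, then ...
cascading_cut(roots, b)                    # ... CASCADING-CUT on the old parent
assert [n.key for n in roots] == [26, 24]  # both c and the marked b become roots
```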
Fibonacci Heap (Operations)
Decreasing a key (Analysis)
• The FIB-HEAP-DECREASE-KEY procedure takes O(1) time, plus the
time to perform the cascading cuts.

• Suppose that CASCADING-CUT is recursively called c times from a


given invocation of FIB-HEAP-DECREASE-KEY.

• Each call of CASCADING-CUT takes O(1) time exclusive of recursive


calls.

• Thus, the actual cost of FIB-HEAP-DECREASE-KEY, including all


recursive calls, is O(c)
Fibonacci Heap (Operations)
Decreasing a key (Analysis)
• Each recursive call of CASCADING-CUT except for the last one,
cuts a marked node and clears the mark bit.
[Note: Last call of CASCADING-CUT may have marked a node]
• After Decrease-key, there are at most 𝑡(𝐻) + 𝑐 trees, and at most
𝑚(𝐻) − 𝑐 + 2 marked nodes.
[Note: c-1 were unmarked by cascading cuts and the last call of CASCADING-
CUT may have marked a node]
• Thus the change in potential is at most:
  ((t(H) + c) + 2(m(H) − c + 2)) − (t(H) + 2m(H)) = 4 − c
• The amortized cost of FIB-HEAP-DECREASE-KEY is therefore at most:
  O(c) + 4 − c = O(1)
Fibonacci Heap (Operations)
Deleting a node
• It is easy to delete a node from an n-node Fibonacci heap in
O(D(n)) amortized time, as is
• done by the following pseudocode. We assume that there is no
key value of -∞ currently in the Fibonacci heap.

FIB-HEAP-DELETE(H, x)
1 FIB-HEAP-DECREASE-KEY(H, x, -∞)
2 FIB-HEAP-EXTRACT-MIN(H)
Fibonacci Heap (Operations)
Deleting a node
• It is easy to delete a node from an n-node Fibonacci heap in
O(D(n)) amortized time, as is
• done by the following pseudocode. We assume that there is no
key value of -∞ currently in the Fibonacci heap.

FIB-HEAP-DELETE(H, x)
1 FIB-HEAP-DECREASE-KEY(H, x, -∞)
2 FIB-HEAP-EXTRACT-MIN(H)

Deleting a node(Analysis)

The amortized time of FIB-HEAP-DELETE is O(lg n). [As D(n) = O(lg n)]


Design and Analysis of Algorithm

Advanced Data Structure


(Skip List and Tries)
LECTURE 49
Overview

• This section presents two advanced data structures
known as:
• skip list, and
• trie.
Skip List
• Skip list is a data structure used for maintaining a set of keys in
sorted order.
• Rules of Skip List
– It consists of several levels.
– In a skip list, all keys appear in level 1.
– Each level of the skip list is a sorted list.
– If a key x appears in level i, then it also appears in all
levels below i.

Top

Level 3 -∞ 21 37 ∞

Level 2 -∞ 7 21 37 71 ∞

Level 1 -∞ 7 14 21 32 37 71 85 117 ∞
Skip List
• More Rules
– An element in level i points (via down pointer) to the element with
the same key in the level below.
– In each level the keys -∞ and ∞ appear.
– Top points to the smallest element in the highest level.

Next
Pointer
Down
Top Pointer

Level 3 -∞ 21 37 ∞

Level 2 -∞ 7 21 37 71 ∞

Level 1 -∞ 7 14 21 32 37 71 85 117 ∞
Skip List
• Finding an element with key x
p = top
while (1) {
    while (p->next->key < x)
        p = p->next;
    if (p->down == NULL)
        return p->next;
    p = p->down;
}
(Example: Find 117)
Top

Level 3 -∞ 21 37 ∞

Level 2 -∞ 7 21 37 71 ∞

Level 1 -∞ 7 14 21 32 37 71 85 117 ∞
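The same search in Python over an explicit next/down pointer representation; build_level is a hypothetical helper used only to set up the example levels:

```python
import math
INF = math.inf

class SkipNode:
    def __init__(self, key):
        self.key = key
        self.next = None        # next node on the same level
        self.down = None        # node with the same key one level below

def skiplist_find(top, x):
    """Start at the top-left sentinel; return the level-1 node holding x,
    or its successor succ(x) if x is not present."""
    p = top
    while True:
        while p.next.key < x:
            p = p.next
        if p.down is None:
            return p.next
        p = p.down

def build_level(keys, below=None):
    """Hypothetical helper: build one sorted level (-inf, keys..., +inf) and
    wire each node's down pointer to the node with the same key in `below`."""
    head = SkipNode(-INF)
    node = head
    for k in list(keys) + [INF]:
        node.next = SkipNode(k)
        node = node.next
    p = head
    while below is not None and p is not None:
        q = below
        while q is not None and q.key != p.key:
            q = q.next
        p.down = q
        p = p.next
    return head

level1 = build_level([7, 14, 21, 32, 37, 71, 85, 117])
level2 = build_level([7, 21, 37, 71], below=level1)
top    = build_level([21, 37], below=level2)
assert skiplist_find(top, 117).key == 117    # search hit
assert skiplist_find(top, 118).key == INF    # miss: succ(118) is the +inf sentinel
```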
Skip List
• Finding an element with key x
p=top
While(1){
while (p->next->key < x )
p=p->next;
Find
If (p->down == NULL )
117
return p->next;
p=p->down ;
}
Top

Level 3 -∞ 21 37 ∞

Level 2 -∞ 7 21 37 71 ∞

Level 1 -∞ 7 14 21 32 37 71 85 117 ∞
Skip List
• Finding an element with key x
p=top
While(1){
while (p->next->key < x )
p=p->next;
If (p->down == NULL ) Find
return p->next;
118
p=p->down ;
}
Top

Level 3 -∞ 21 37 ∞

Level 2 -∞ 7 21 37 71 ∞

Level 1 -∞ 7 14 21 32 37 71 85 117 ∞
(Note: Observe that we return x, if exists, or succ(x) if x is not in the SkipList)
Skip List
• Finding an element with key x
p=top
While(1){
while (p->next->key < x )
p=p->next;
If (p->down == NULL ) Find
return p->next; 118
p=p->down ;
}
Top

Level 3 -∞ 21 37 ∞

Level 2 -∞ 7 21 37 71 ∞

Level 1 -∞ 7 14 21 32 37 71 85 117 ∞
(Note: Observe that we return x, if exists, or succ(x) if x is not in the Skip List)
Skip List
• Inserting new element X
Do find(x), and insert x at the appropriate place in each level from 1 up to the kth level
Example - inserting 119 at k=2
Insert 119
at level(k)=2

Top

Level 3 -∞ 21 37 ∞

Level 2 -∞ 7 21 37 71 ∞

Level 1 -∞ 7 14 21 32 37 71 85 117 ∞
Skip List
Insertion of 119
• Inserting new element X at level(k)=2 done
Do find(x), and insert x to the appropriate places in the kth level successfully
Example - inserting 119 at k=2

Top

Level 3 -∞ 21 37 ∞

Level 2 -∞ 7 21 37 71 119 ∞

Level 1 -∞ 7 14 21 32 37 71 85 117 119 ∞


Skip List
• Inserting new element X
Do find(x), and insert x to the appropriate places in the kth level
Example - inserting 121 at k=4
Insert 121 at
level(k)=4

Top

Level 3 -∞ 21 37 ∞

Level 2 -∞ 7 21 37 71 ∞

Level 1 -∞ 7 14 21 32 37 71 85 117 ∞
Skip List
Insertion of 121
at level(k)=4 done
• Inserting new element X successfully
Do find(x), and insert x to the appropriate places in the kth level
Example - inserting 121 at k=4

Top

Level 4 -∞ 121 ∞

Level 3 -∞ 21 37 121 ∞

Level 2 -∞ 7 21 37 71 121 ∞

Level 1 -∞ 7 14 21 32 37 71 85 117 121 ∞

[Note: If k is larger than the current number of levels, add new levels and update
the top pointer]
Skip List
• Deleting a key X
• Apply Find x in all the levels, and delete the key X by using the
standard 'delete from a linked list' method.
• If one or more of the upper levels are empty, remove them and
update the top pointer.
Example : Delete 71 from level 2

Delete 71 from
level(k)=2
Top

Level 3 -∞ 21 37 ∞

Level 2 -∞ 7 21 37 71 119 ∞

Level 1 -∞ 7 14 21 32 37 71 85 117 119 ∞


Skip List
• Deleting a key X
• Apply Find x in all the levels, and delete the key X by using the
standard 'delete from a linked list' method.
• If one or more of the upper levels are empty, remove them and
update the top pointer.
Example : Delete 71 from level 2

Deletion of 71 done from


level 2 successfully
Top

Level 3 -∞ 21 37 ∞

Level 2 -∞ 7 21 37 119 ∞

Level 1 -∞ 7 14 21 32 37 85 117 119 ∞


Trie
Trie
Definition:
• A data structure for representing a collection of strings.
• In computer science, a trie is also called a digital tree and sometimes a radix tree or prefix tree.
• The term trie comes from retrieval.
• The term was coined by Edward Fredkin, who pronounced it "tree", as in the word retrieval.
Trie
Properties:
• A multi-way tree.
• Each node has from 1 to n children.
• Each edge of the tree is labeled with a character.
• Each leaf node corresponds to a stored string, which is the concatenation of the characters on the path from the root to that node.
Trie
Types:
• Standard Tries
• Compressed/Compact Tries
• Suffix Tries
Trie
Standard Trie:
• The standard trie for a set of strings S is an ordered tree such that:
  • Each node but the root is labeled with a character.
  • The children of a node are alphabetically ordered.
  • The paths from the root to the external nodes yield the strings of S.
• Example: Strings = {an, and, any, at}; a special termination symbol "$" is appended to each string.
(Figure: standard trie for {an, and, any, at}, with '$' marking the end of each string.)
Trie
Standard Trie:
• Example: Standard trie for the set of strings
S = { bear, bell, bid, bull, buy, sell, stock, stop }

(Figure: standard trie for the set S above, with shared prefixes merged along the paths from the root.)
Trie
Standard Trie Searching
Search hit: Node where search ends has a $ symbol
Example : Search - sea
Root

a s

n t e

$ d y $ a t

$ $ $ $
Trie
Standard Trie Searching
Search hit: Node where search ends has a $ symbol
Example : Search - and
Root

a s

n t e

$ d y $ a t

$ $ $ $
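A minimal C++ sketch of a standard trie over lowercase words (not from the slides), using a 26-way child array and a boolean end-of-word flag that plays the role of the '$' terminator shown in the figures:

    #include <string>

    struct TrieNode {
        TrieNode *child[26] = {nullptr};
        bool isWord = false;               // plays the role of the '$' marker
    };

    // Insert a word character by character, creating nodes as needed.
    void trieInsert(TrieNode *root, const std::string &w) {
        TrieNode *p = root;
        for (char ch : w) {
            int i = ch - 'a';
            if (p->child[i] == nullptr) p->child[i] = new TrieNode();
            p = p->child[i];
        }
        p->isWord = true;
    }

    // Search hit only if the walk succeeds AND the final node is marked,
    // i.e. the node where the search ends carries the '$' marker.
    bool trieSearch(TrieNode *root, const std::string &w) {
        TrieNode *p = root;
        for (char ch : w) {
            int i = ch - 'a';
            if (p->child[i] == nullptr) return false;
            p = p->child[i];
        }
        return p->isWord;
    }

For the trie drawn above, trieSearch(root, "sea") and trieSearch(root, "and") are hits, while trieSearch(root, "se") fails because "se" exists only as a prefix.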
Trie
Standard Trie Deletion

• Three cases

Case 1: Word not found…!


Case 2: Word exists as a stand alone word.
Case 3: Word exists as a prefix of another word.
Trie
Standard Trie Deletion

• Three cases
Case 1: Word not found → return false.
Case 2: Word exists as a stand-alone word:
  (a) it is part of another word, or
  (b) it is not part of any other word.
Case 3: Word exists as a prefix of another word.
Trie
Standard Trie Deletion
• Three cases
Case 2: Word exists as a stand-alone word (either part of another word, or not part of any other word).
Delete Sea
Root

a s

n t e

$ d y $ a t

$ $ $ $
Trie
Standard Trie Deletion
• Three cases
Case 2: Word exists as a stand-alone word (either part of another word, or not part of any other word).
Delete Sea
Root

a s

n t e

$ d y $ a t

$ $ $
Trie
Standard Trie Deletion
• Three cases
Case 2: Word exists as a stand-alone word (either part of another word, or not part of any other word).
Delete set
Root

a s

n t e

$ d y $ a t

$ $ $
Trie
Standard Trie Deletion
• Three cases
Case 2: Word exists as a stand-alone word (either part of another word, or not part of any other word).
Delete set
Root

n t

$ d y $

$ $
Trie
Standard Trie Deletion
• Three cases
Case 3: Word exists as a prefix of any other word.
Delete - an

Delete an
Root

a s

n t e

$ d y $ a t

$ $ $ $
Trie
Standard Trie Deletion
• Three cases
Case 3: Word exists as a prefix of any other word.
Delete - an

Delete an
Root

a t

$ $
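A hedged sketch of recursive deletion covering the three cases above: unmark the node where the word ends, then free nodes bottom-up only while they end no word and have no remaining children, so that prefixes of other words (Case 3) are kept. The TrieNode layout repeats the assumption used in the earlier sketch:

    #include <string>

    struct TrieNode {
        TrieNode *child[26] = {nullptr};
        bool isWord = false;
    };

    static bool hasChildren(const TrieNode *p) {
        for (const TrieNode *c : p->child)
            if (c != nullptr) return true;
        return false;
    }

    // Returns true if 'node' has become useless and may be freed by its parent.
    bool trieErase(TrieNode *node, const std::string &w, size_t d = 0) {
        if (node == nullptr) return false;           // Case 1: word not found
        if (d == w.size()) {
            node->isWord = false;                    // unmark the terminating node
        } else {
            int i = w[d] - 'a';
            if (trieErase(node->child[i], w, d + 1)) {
                delete node->child[i];               // child is now useless: free it
                node->child[i] = nullptr;
            }
        }
        // Keep the node if it ends another word or still has children (Case 3).
        return d > 0 && !node->isWord && !hasChildren(node);
    }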
Trie
Compressed Trie
• A trie in which every internal node has degree at least 2.
• Obtained from a standard trie by compressing chains of redundant (single-child) nodes.
• Example: S = { bear, bell, bid, bull, buy, sell, stock, stop }

(Figure: compressed trie for the set S above; chains of single-child nodes are merged into single edges labeled with substrings such as "ar", "ll", "id", "uy", "ell", "to", "ck", "p".)
Trie
Suffix Trie
• A suffix trie is a compressed trie built over all the suffixes of a text.
• Suffix tries are a space-efficient data structure for storing a string that allows many kinds of queries to be answered quickly.

(Figure: suffix trie of the text "soon$", built from the suffixes soon$, oon$, on$, n$, and $.)
Design and Analysis of Algorithm

Backtracking
(Hamiltonian Cycle, Graph Coloring,
Sum of subset and N-Queen)

Lecture – 50-53
Overview

• Backtracking is a general algorithmic technique that searches through every possible combination of choices in order to solve an optimization problem.
• The word "backtrack" was first introduced by D. H. Lehmer in the 1950s. R. J. Walker gave the first algorithmic description in 1960; the method was later developed by S. Golomb and L. Baumert.
• It is based on depth-first search (DFS).
Backtracking
• What is Backtracking?
• Backtracking is a refinement of the brute-force approach: the technique systematically searches for a solution to a problem among all available options. It does so by assuming that solutions are represented by vectors (v1, ..., vn) of values and by traversing the domains of those vectors until a solution is found.
Backtracking
• Algorithmic Approach
1. Test whether a solution has been found.
2. If a solution is found, return it.
3. Else, for each choice that can be made:
   a) Make that choice
   b) Recur
   c) If the recursion returns a solution, return it.
4. If no choices remain, return failure.
The tree of choices explored this way is sometimes called a "search tree" (a generic sketch is given below).
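As a concrete toy instance of the recipe (not one of the four problems below), the sketch fills a vector x with values from {1, ..., m}, extends only prefixes accepted by a feasibility test, and undoes each choice before trying the next; the names backtrack and feasible are purely illustrative:

    #include <iostream>
    #include <vector>

    // Hypothetical problem-specific test: here, the newest value must differ
    // from all earlier ones (so complete vectors are permutations).
    bool feasible(const std::vector<int> &x) {
        for (size_t i = 0; i + 1 < x.size(); ++i)
            if (x[i] == x.back()) return false;
        return true;
    }

    void backtrack(std::vector<int> &x, int n, int m) {
        if ((int)x.size() == n) {                    // a complete solution: report it
            for (int v : x) std::cout << v << ' ';
            std::cout << '\n';
            return;
        }
        for (int v = 1; v <= m; ++v) {               // for each choice that can be made
            x.push_back(v);                          //   make that choice
            if (feasible(x)) backtrack(x, n, m);     //   recur only on promising prefixes
            x.pop_back();                            //   undo the choice (backtrack)
        }
    }

    int main() {
        std::vector<int> x;
        backtrack(x, 3, 3);                          // prints the 6 permutations of {1,2,3}
        return 0;
    }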
Backtracking
• Problems
1. Hamiltonian cycle
2. Graph Coloring
3. Sum of subset
4. N-Queen
Backtracking
• Problems
1. Hamiltonian cycle
2. Graph Coloring
3. Sum of subset
4. N-Queen
Backtracking
• Problem 1 : Hamiltonian cycle
• The Hamiltonian cycle problem is a graph-theory problem in which a cycle through the graph must visit each vertex exactly once. The puzzle was first devised by Sir William Rowan Hamilton, and the problem is named after him.
• Let G = (V, E) be a connected graph with n vertices. A Hamiltonian cycle is a round-trip path along n edges of G that visits every vertex once and returns to its starting position.
Backtracking
• Problem 1 : Hamiltonian cycle
• In other words, a Hamiltonian cycle begins at some vertex v1 ∈ G, the vertices of G are visited in the order v1, v2, ..., v(n+1), the edges (vi, vi+1) are in E for 1 ≤ i ≤ n, and the vi are distinct except for v1 and v(n+1), which are equal.
• Some Hamiltonian cycle with graph is given in
next slide.
Backtracking
• Problem 1 : Hamiltonian cycle

1 2 3 1234561
1265431
1625431
6 5 4

2 3
123541
5 124531
1 4
Backtracking
• Problem 1 : Hamiltonian cycle

(Figure: a 7-vertex graph in which no Hamiltonian cycle exists.)

(Figure: a 6-vertex graph in which no Hamiltonian cycle exists, because it contains an articulation-point vertex.)
Backtracking
• Problem 1 : Hamiltonian cycle

(Figure: a 6-vertex graph in which no Hamiltonian cycle exists, because it contains a pendant vertex.)

Let’s see how backtracking helps us find a Hamiltonian cycle.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.

1 2

5 4
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given graph by using backtracking. First draw the state space tree (try it yourself before reading on).
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1 0 1 1 0 1
1 2 2 1 0 1 1 1
3 1 1 0 1 0
3
4 0 1 1 0 1
5 4 5 1 1 0 1 0
Adjacent Matrix
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1 0 1 1 0 1
1 2
2 1 0 1 1 1
3
3 1 1 0 1 0
5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
1 2 3 4 5
x 0 0 0 0 0

State Space Tree


Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 1 0 1 1 1
3
3 1 1 0 1 0
5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
1 2 3 4 5
x 1 0 0 0 0

State Space Tree


Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 1 0 1 1 1
3
3 1 1 0 1 0
5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
1 2 3 4 5
x 1 1 0 0 0

Node 1 already selected so we


can’t select it again. So we go
State Space Tree for node 2.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
1 2 3 4 5
x 1 2 0 0 0

Node 1 already selected so we


can’t select it again. So we go
State Space Tree for node 2.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
1 2 3 4 5
x 1 2 0 0 0

Node 1 already selected so we


can’t select it again. So we go
State Space Tree for node 2.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
1 2 3 4 5
x 1 2 1 0 0

Node 1 already selected so we


can’t select it again. So we go
State Space Tree for node 2.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
1 2 3 4 5
x 1 2 2 0 0

Node 2 already selected so we


can’t select it again. So we go
State Space Tree for node 3.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
3 5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
1 2 3 4 5
x 1 2 3 0 0

Node 2 already selected so we


can’t select it again. So we go
State Space Tree for node 3.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
3 5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
1 2 3 4 5
x 1 2 3 0 0

Node 2 already selected so we


can’t select it again. So we go
State Space Tree for node 3.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
3 5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
1 2 3 4 5
x 1 2 3 1 0

Node 1 already selected so we


can’t select it again. So we go
State Space Tree for node 2.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
3 5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
1 2 3 4 5
x 1 2 3 2 0

Node 2 already selected so we


can’t select it again. So we go
State Space Tree for node 3.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
3 5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
1 2 3 4 5
x 1 2 3 3 0

Node 3 already selected so we


can’t select it again. So we go
State Space Tree for node 4.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
3 5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
4
1 2 3 4 5
x 1 2 3 4 0

Node 3 already selected so we


can’t select it again. So we go
State Space Tree for node 4.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
3 5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
4
1 2 3 4 5
x 1 2 3 4 1

Node 1 already selected so we


can’t select it again. So we go
State Space Tree for node 2.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
3 5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
4
1 2 3 4 5
x 1 2 3 4 2

Node 2 already selected so we


can’t select it again. So we go
State Space Tree for node 3.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
3 5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
4
1 2 3 4 5
x 1 2 3 4 3

Node 3 already selected so we


can’t select it again. So we go
State Space Tree for node 4.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
3 5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
4
1 2 3 4 5
x 1 2 3 4 5
5

Node 4 already selected so we


can’t select it again. So we go
State Space Tree for node 5.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
3 5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
4
1 2 3 4 5
x 1 2 3 4 5
5

Hamilton cycle: 1 → 2 → 3 → 4 → 5 → 1
State Space Tree
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
3 5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
4
1 2 3 4 5
x 1 2 3 4 5
5
State Space Tree Hamilton cycle: 1 → 2 → 3 → 4 → 5 → 1
The complete state space tree is shown in next slide.
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
3 4 5 5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
3 5 4
4
1 2 3 4 5
x 1 2 3 4 5
5 5
State Space Tree Hamilton cycle: 1 → 2 → 3 → 4 → 5 → 1
Similarly do it for starting node 2.(Self practice)
Backtracking
• Problem 1 : Hamiltonian cycle
• Example 1: Find the Hamiltonian cycle of the given
graph by using backtracking.
1 2 3 4 5
1
1 0 1 1 0 1
1 2
2 2 1 0 1 1 1
3
3 1 1 0 1 0
3 4 5 5 4 4 0 1 1 0 1
5 1 1 0 1 0
Adjacent Matrix
3 3 4
4
1 2 3 4 5
x 1 2 3 4 5
5 5
State Space Tree Hamilton cycle: 1 → 2 → 3 → 4 → 5 → 1
Similarly do it for starting node 2.(Self practice)
Backtracking
• Problem 1 : Hamiltonian cycle
Algorithm:
The main algorithm starts by
• initializing the adjacency matrix 𝐺[1: 𝑛, 1: 𝑛]
• setting 𝑥[2: 𝑛] = 0
• setting 𝑥[1] = 1 (As starting vertex)
• executing 𝐻𝑎𝑚𝑖𝑙𝑡𝑜𝑛𝑖𝑎𝑛(2)
Note: This algorithm(𝐻𝑎𝑚𝑖𝑙𝑡𝑜𝑛𝑖𝑎𝑛(2)) uses the recursive formulation
of backtracking to find all the Hamiltonian cycles of a graph. The
graph is stored as an adjacency matrix G[1:n][1:n]. All cycles begin
at node 1
Backtracking
• Problem 1 : Hamiltonian cycle
Algorithm:
Recursive algorithm that finds all Hamiltonian cycles
void Hamiltonian(int k)
{
    repeat
    {   // Generate values for x[k].
        NextValue(k);                   // Assign a legal next value to x[k].
        if (x[k] == 0) then return;
        if (k == n) then print(x[1:n]);
        else Hamiltonian(k+1);
    } until (false);
}
Backtracking
• The NextValue(int k) procedure works as follows:
• x[1], ..., x[k-1] is a path of k-1 distinct vertices.
• If x[k] == 0, no vertex has yet been assigned to x[k].
• After execution, x[k] is assigned the lowest-numbered vertex that does not already appear in x[1], x[2], ..., x[k-1] and that is connected by an edge to x[k-1]; otherwise x[k] == 0. If k == n, then in addition x[k] must be connected to x[1].
Backtracking
• Problem 1 : Hamiltonian cycle
Algorithm:
void NextValue(int k)
{
    repeat
    {
        x[k] = (x[k]+1) % (n+1);                  // Next vertex
        if (x[k] == 0) then return;
        if (G[x[k-1]][x[k]] != 0) then
        {   // Is there an edge?
            for (j = 1 to k-1) do
                if (x[j] == x[k]) then break;     // Check for distinctness.
            if (j == k) then                      // If true, then the vertex is distinct.
                if ((k < n) || ((k == n) && G[x[n]][x[1]] != 0))
                    then return;
        }
    } until (false);
}
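A compact runnable version of the same search (0-indexed, with the cycle fixed to start at vertex 0) is sketched below; it folds the NextValue tests directly into the recursion instead of cycling x[k] through 0..n. The adjacency matrix is the 5-vertex example graph used above:

    #include <iostream>
    #include <vector>

    // Prints every Hamiltonian cycle that starts (and ends) at vertex 0.
    // G is an n x n 0/1 adjacency matrix; x holds the partial path.
    void hamiltonian(const std::vector<std::vector<int>> &G,
                     std::vector<int> &x, std::vector<bool> &used, int k) {
        int n = G.size();
        if (k == n) {                                   // all vertices placed:
            if (G[x[n - 1]][x[0]]) {                    // need an edge back to the start
                for (int v : x) std::cout << v + 1 << " ";
                std::cout << x[0] + 1 << "\n";          // print 1-indexed, closing the cycle
            }
            return;
        }
        for (int v = 1; v < n; ++v) {                   // candidate vertex for position k
            if (!used[v] && G[x[k - 1]][v]) {           // distinct and adjacent to x[k-1]
                x[k] = v; used[v] = true;
                hamiltonian(G, x, used, k + 1);
                used[v] = false;                        // backtrack
            }
        }
    }

    int main() {
        std::vector<std::vector<int>> G = {             // the example graph (vertices 1..5 above)
            {0,1,1,0,1},
            {1,0,1,1,1},
            {1,1,0,1,0},
            {0,1,1,0,1},
            {1,1,0,1,0}};
        std::vector<int> x(5, 0);
        std::vector<bool> used(5, false);
        x[0] = 0; used[0] = true;
        hamiltonian(G, x, used, 1);                     // prints 1 2 3 4 5 1, among others
        return 0;
    }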
Backtracking
• Problems
1. Hamiltonian cycle
2. Graph Coloring
3. Sum of subset
4. N-Queen
Backtracking
• Problem 2 : Graph Coloring
• Given an undirected graph and a number m, determine whether the graph can be colored with at most m colors such that no two adjacent vertices of the graph are colored with the same color. (Here, coloring of a graph means an assignment of colors to all vertices.)
Backtracking
• Problem 2 : Graph Coloring
• Input:
1. A 2D array graph[v][v] where v is the number of
vertices in graph and graph[v][v] is adjacency
matrix representation of the graph.
graph[i][j] = 1 if there is a direct edge from i to j, and 0 otherwise.
2. An integer m which is maximum number of colors that
can be used.
Backtracking
• Problem 2 : Graph Coloring
• Output:
An array x[v] that should have numbers from 1 to m.
x[i] should represent the color assigned to the 𝑖 𝑡ℎ
vertex . The code should also return false if the
graph cannot be colored with m colors.
Backtracking
• Problem 2 : Graph Coloring
• Example: Colored the following graph G by
using three colour (Red, Green and Blue) and
draw the state space tree by using Brute force
method.

1 2

4 3
Backtracking
• Problem 2 : Graph Coloring
• Example: Colored the following graph G by using three colour
(Red, Green and Blue) and draw the state space tree by
using Brute force method.
Root
1 2
𝑥1 = 𝑅 𝐵
𝐺
𝑅 𝐺 𝐵 𝑅 𝐺 𝐵 𝑅 𝐺 𝐵 4 3
𝑥2 =
Graph G
𝑥3 = 𝑅 𝐺 𝐵 𝑅 𝐺 𝐵𝑅 𝐺 𝐵 𝑅𝐺 𝐵

𝑥4 = 𝑅 𝐺 𝐵 𝑅𝐺 𝐵

State space tree


(Brute force method)
Backtracking
• Problem 2 : Graph Coloring
• Example: Colored the following graph G by using three colour
(Red, Green and Blue) and draw the state space tree by
using Brute force method.
Root
1 2
𝑥1 = 𝑅 𝐵
𝐺
𝑅 𝐺 𝐵 𝑅 𝐺 𝐵 𝑅 𝐺 𝐵 4 3
𝑥2 =
Graph G
𝑥3 = 𝑅 𝐺 𝐵 𝑅 𝐺 𝐵𝑅 𝐺 𝐵 𝑅𝐺 𝐵

𝑥4 = 𝑅 𝐺 𝐵 𝑅𝐺 𝐵

State space tree


(Brute force method)
Backtracking
• Problem 2 : Graph Coloring
• Example: Colored the following graph G by using three colour
(Red, Green and Blue) and draw the state space tree by
using Backtracking method.
Root 1 2
𝑥1 = 𝑅 𝐵
𝐺
𝑅 𝐺 𝐵 𝑅 𝐺 𝐵 𝑅 𝐺 𝐵 4 3
𝑥2 =
Graph G

𝑥3 = 𝑅 𝐵 𝑅𝐺 𝐵 𝑅 𝐵
𝐺 𝐺
𝑅 𝐵 𝑅 𝐵
𝑥4 = 𝐺 𝑅 𝐵
𝐺 𝐺
𝑺𝟏 𝑺𝟐 𝑺𝟓
𝑺𝟑 𝑺𝟒
State space tree (Backtracking)
Backtracking
• Problem 2 : Graph Coloring
• Example: Colored the following graph G by using three colour
(Red, Green and Blue) and draw the state space tree by
using Backtracking method.
Root 1 2
𝑥1 = 𝑅 𝐵
𝐺
𝑅 𝐺 𝐵 𝑅 𝐺 𝐵 𝑅 𝐺 𝐵 4 3
𝑥2 =
Graph G

𝑥3 = 𝑅 𝐵 𝑅𝐺 𝐵 𝑅 𝐵 1 2 3 4
𝐺 𝐺 𝑆1 → R G R G
𝑆2 → R G R B
𝑅 𝐵 𝑅 𝐵 𝑆3 → R G B G
𝑥4 = 𝐺 𝑅 𝐵
𝐺 𝐺 𝑆4 → G B R G
𝑺𝟏 𝑺𝟐 𝑆5 → G B R B
𝑺𝟑 𝑺𝟒 𝑺𝟓
and so on……
State space tree (Backtracking)
(Possible Solutions)
Backtracking
• Problem 2 : Graph Coloring
• Example: Colored the following graph G by using three colour
(Red, Green and Blue) and draw the state space tree by
using Backtracking method.
Root 1 2
𝑥1 = 𝑅 𝐵
𝐺
𝑅 𝐺 𝐵 𝑅 𝐺 𝐵 𝑅 𝐺 𝐵 4 3
𝑥2 =
Graph G

𝑥3 = 𝑅 𝐵 𝑅𝐺 𝐵 𝑅 𝐵 1 2 3 4
𝐺 𝐺 𝑆1 → R G R G
𝑆2 → R G R B
𝑅 𝐵 𝑅 𝐵 𝑆3 → R G B G
𝑥4 = 𝐺 𝑅 𝐵
𝐺 𝐺 𝑆4 → G B R G
𝑺𝟏 𝑺𝟐 𝑆5 → G B R B
𝑺𝟑 𝑺𝟒 𝑺𝟓
and so on……
State space tree (Backtracking)
(Possible Solutions)
Backtracking
• Problem 2 : Graph Coloring
Algorithm:
This algorithm was formed using the recursive backtracking scheme. The graph is represented by its Boolean adjacency matrix G[1:n][1:n]. All assignments of 1, 2, ..., m to the vertices of the graph such that adjacent vertices are assigned distinct integers are printed. Here k is the index of the next vertex to color.
Backtracking
• Problem 2 : Graph Coloring

This algorithm mcoloring (k) was formed using the


recursive backtracking schema. The graph is
represented by its Boolean adjacency matrix G [1: n,
1: n]. All assignments of1, 2,.......... , m to the
vertices of the graph such that adjacent vertices are
assigned distinct integers are printed. k is the index
of the next vertex to color.
Backtracking
• Problem 2 : Graph Coloring
Algorithm mcoloring (k)
{
repeat
{ // Generate all legal assignments for x[k].
NextValue (k); // Assign to x [k] a legal color.
If (x [k] = 0) then return; // No new color possible
If (k = n) then // at most m colors have been used to
color the n vertices.
write (x [1: n]);
else
mcoloring (k+1);
} until (false);
}
Backtracking
• Problem 2 : Graph Coloring
The execution procedure of the function NextValue(int k):
• x[1], ..., x[k-1] have been assigned integer values in the range [1, m] such that adjacent vertices have distinct integers. A value for x[k] is determined in the range [0, m].
• x[k] is assigned the next highest-numbered color while maintaining distinctness from the colors of the vertices adjacent to vertex k.
• If no such color exists, then x[k] = 0.
Backtracking
• Problem 2 : Graph Coloring
void NextValue(int k)
{
    repeat
    {
        x[k] = (x[k]+1) % (m+1);                  // Next highest color
        if (x[k] == 0) then return;               // All colors have been tried
        for (j = 1 to n) do
        {
            // Check if this color is distinct from the colors of adjacent vertices:
            // if (k, j) is an edge and the adjacent vertices have the same color, stop.
            if ((G[k][j] != 0) && (x[k] == x[j])) then break;
        }
        if (j == n+1) then return;                // New color found
    } until (false);                              // Otherwise try to find another color
}
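A runnable 0-indexed sketch of the same m-coloring search; the adjacency test of NextValue is folded into a small ok() helper. The example graph is assumed to be the 4-cycle 1-2-3-4, which is consistent with the solutions listed in the earlier slides:

    #include <iostream>
    #include <vector>

    // True if vertex v may take colour c, given the colours assigned so far.
    bool ok(const std::vector<std::vector<int>> &G, const std::vector<int> &x,
            int v, int c) {
        for (int u = 0; u < (int)G.size(); ++u)
            if (G[v][u] && x[u] == c) return false;   // an adjacent vertex already uses c
        return true;
    }

    // Prints every assignment of colours 1..m to vertices 0..n-1 such that
    // adjacent vertices receive distinct colours.
    void mColoring(const std::vector<std::vector<int>> &G, std::vector<int> &x,
                   int v, int m) {
        int n = G.size();
        if (v == n) {                                 // all vertices coloured: report it
            for (int c : x) std::cout << c << " ";
            std::cout << "\n";
            return;
        }
        for (int c = 1; c <= m; ++c)
            if (ok(G, x, v, c)) {
                x[v] = c;
                mColoring(G, x, v + 1, m);
                x[v] = 0;                             // backtrack
            }
    }

    int main() {
        std::vector<std::vector<int>> G = {           // assumed: the cycle 1-2-3-4
            {0,1,0,1},
            {1,0,1,0},
            {0,1,0,1},
            {1,0,1,0}};
        std::vector<int> x(4, 0);
        mColoring(G, x, 0, 3);                        // colours 1=R, 2=G, 3=B
        return 0;
    }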
Backtracking
• Problems
1. Hamiltonian cycle
2. Graph Coloring
3. Sum of subset
4. N-Queen
Backtracking
• Problem 3 : Sum of Subset
• Given positive numbers 𝑤𝑖 , 1 ≤ 𝑖 ≤ 𝑛, 𝑎𝑛𝑑 𝑚, find
all subsets of 𝑤1 , 𝑤2 , … … , 𝑤𝑛 , whose sum is m.
Example:
1. If 𝑛 = 6, 𝑤1 , 𝑤2 , 𝑤3 , 𝑤4 , 𝑤5 , 𝑤6 = 5, 10, 12, 13, 15, 18
and 𝑚 = 30 , the desired solution sets are
(5, 10, 15), (5, 12, 13) 𝑎𝑛𝑑 (12, 18) and are represented
as (1,1,0,0,1,0), (1,0,1,1,0,0), 𝑎𝑛𝑑 (0,0,1,0,0,1).
2. If 𝑛 = 4, 𝑤1 , 𝑤2 , 𝑤3 , 𝑤4 = 7, 11, 13, 24 and 𝑚 = 31,
the desired solution sets are (7, 11, 13), 𝑎𝑛𝑑 (7, 24)
and are represented as (1,1,1,0), 𝑎𝑛𝑑 (1,0,0,1).
Backtracking
• Problem 3 : Sum of Subset
• Given positive numbers 𝑤𝑖 , 1 ≤ 𝑖 ≤ 𝑛, 𝑎𝑛𝑑 𝑚, find
all subsets of 𝑤1 , 𝑤2 , … … , 𝑤𝑛 , whose sum is m.

Example:

If 𝑛 = 4, 𝑤1 , 𝑤2 , 𝑤3 , 𝑤4 = 7, 11, 13, 24 and 𝑚 = 31,


the desired solution sets are (7, 11, 13), 𝑎𝑛𝑑 (7, 24).
And are represented as (1, 1,1,0), 𝑎𝑛𝑑 (1,0,0,1).
Let's execute the above example with the help of Backtracking
and draw the State Space Tree.
Backtracking
• Problem 3 : Sum of Subset
Example:
If 𝑛 = 4, 𝑤1 , 𝑤2 , 𝑤3 , 𝑤4 = 7, 11, 13, 24 and 𝑚 = 31, the desired solution sets are
(11, 13, 7), 𝑎𝑛𝑑 (24, 7).
S K R 1 2 3 4
0 1 55
𝑥1 → 1 0 7 11 13 24
S K R S K R
7 1 48 0 1 48
Backtracking
• Problem 3 : Sum of Subset
Example:
If 𝑛 = 4, 𝑤1 , 𝑤2 , 𝑤3 , 𝑤4 = 7, 11, 13, 24 and 𝑚 = 31 , the desired solution sets are
(11, 13, 7), 𝑎𝑛𝑑 (24, 7).
S K R 1 2 3 4
1 0 1 55
0 7 11 13 24
𝑥1 →
S K R S K R
7 1 48 0 1 48
𝑥2 → 1 0 1 0
S K R S K R S K R S K R
18 2 37 7 2 37 11 2 37 0 2 37
Backtracking
• Problem 3 : Sum of Subset
Example:
If 𝑛 = 4, 𝑤1 , 𝑤2 , 𝑤3 , 𝑤4 = 7, 11, 13, 24 and 𝑚 = 31 , the desired solution sets are
(11, 13, 7), 𝑎𝑛𝑑 (24, 7).
S K R 1 2 3 4
1 0 1 55
0 7 11 13 24
𝑥1 →
S K R S K R
7 1 48 0 1 48
𝑥2 → 1 0 1 0
S K R S K R S K R S K R
18 2 37 7 2 37 11 2 37 0 2 37
𝑥3 → 1 0 1 0 1 0 1 0
S K R S K R S K R S K R S K R S K R S K R S K R
31 3 24 18 3 24 20 3 24 7 3 24 24 3 24 11 3 24 13 3 24 0 3 24
Backtracking
• Problem 3 : Sum of Subset
Example:
If 𝑛 = 4, 𝑤1 , 𝑤2 , 𝑤3 , 𝑤4 = 7, 11, 13, 24 and 𝑚 = 31 , the desired solution sets are
(11, 13, 7), 𝑎𝑛𝑑 (24, 7).
S K R 1 2 3 4
1 0 1 55
0 7 11 13 24
𝑥1 →
S K R S K R
7 1 48 0 1 48
𝑥2 → 1 0 1 0
S K R S K R S K R S K R
18 2 37 7 2 37 11 2 37 0 2 37
𝑥3 → 1 0 1 0 1 0 1 0
0
S K R S K R S K R S K R S K R S K R S K R S K R
31 3 24 18 3 24 20 3 24 7 3 24 24 3 24 11 3 24 13 3 24 0 3 24
𝑥4 → 1 1
S K R S K R
55 4 0 42 4 0 0
S K R S K R
31 4 0 18 4 0
Backtracking
• Problem 3 : Sum of Subset
Example:
If 𝑛 = 4, 𝑤1 , 𝑤2 , 𝑤3 , 𝑤4 = 7, 11, 13, 24 and 𝑚 = 31 , the desired solution sets are
(11, 13, 7), 𝑎𝑛𝑑 (24, 7).
S K R 1 2 3 4
1 0 1 55
0 7 11 13 24
𝑥1 →
S K R S K R
7 1 48 0 1 48
𝑥2 → 1 0 1 0
S K R S K R S K R S K R
18 2 37 7 2 37 11 2 37 0 2 37
𝑥3 → 1 0 1 0 1 0 1 0
0
S K R S K R S K R S K R S K R S K R S K R S K R
31 3 24 18 3 24 20 3 24 7 3 24 24 3 24 11 3 24 13 3 24 0 3 24
𝑥4 → 1 0 1 0 1 0 1 0
S K R S K R S K R S K R
55 4 0 42 4 0 44 4 0 31 4 0
S K R S K R S K R S K R
31 4 0 18 4 0 20 4 0 7 4 0
Backtracking
• Problem 3 : Sum of Subset
Example:
If 𝑛 = 4, 𝑤1 , 𝑤2 , 𝑤3 , 𝑤4 = 7, 11, 13, 24 and 𝑚 = 31 , the desired solution sets are
(11, 13, 7), 𝑎𝑛𝑑 (24, 7).
S K R 1 2 3 4
1 0 1 55
0 7 11 13 24
𝑥1 →
S K R S K R
7 1 48 0 1 48
𝑥2 → 1 0 1 0
S K R S K R S K R S K R
18 2 37 7 2 37 11 2 37 0 2 37
𝑥3 → 1 0 1 0 1 0 1 0
0
S K R S K R S K R S K R S K R S K R S K R S K R
31 3 24 18 3 24 20 3 24 7 3 24 24 3 24 11 3 24 13 3 24 0 3 24
𝑥4 → 1 0 1 0 1 0 1 0 1 0 1 0
S K R S K R S K R S K R S K R S K R
55 4 0 42 4 0 44 4 0 31 4 0 48 4 0 35 4 0
S K R S K R S K R S K R S K R S K R
31 4 0 18 4 0 20 4 0 7 4 0 24 4 0 11 4 0
Backtracking
• Problem 3 : Sum of Subset
Example:
If 𝑛 = 4, 𝑤1 , 𝑤2 , 𝑤3 , 𝑤4 = 7, 11, 13, 24 and 𝑚 = 31 , the desired solution sets are
(11, 13, 7), 𝑎𝑛𝑑 (24, 7).
S K R 1 2 3 4
1 0 1 55
0 7 11 13 24
𝑥1 →
S K R S K R
7 1 48 0 1 48
𝑥2 → 1 0 1 0
S K R S K R S K R S K R
18 2 37 7 2 37 11 2 37 0 2 37
𝑥3 → 1 0 1 0 1 0 1 0
0
S K R S K R S K R S K R S K R S K R S K R S K R
31 3 24 18 3 24 20 3 24 7 3 24 24 3 24 11 3 24 13 3 24 0 3 24
𝑥4 → 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
S K R S K R S K R S K R S K R S K R S K R S K R
55 4 0 42 4 0 44 4 0 31 4 0 48 4 0 35 4 0 37 4 0 24 4 0
S K R S K R S K R S K R S K R S K R S K R
S K R
31 4 0 18 4 0 20 4 0 7 4 0 24 4 0 13 4 0 0 4 0
11 4 0
Backtracking
• Problem 3 : Sum of Subset
Example:
If n = 4, (w1, w2, w3, w4) = (7, 11, 13, 24) and m = 31, the desired solution sets are (11, 13, 7) and (24, 7).
It was observed that all possible solution vectors of this example lie between (0,0,0,0) and (1,1,1,1), so the state space tree contains O(2^4) leaf nodes.
(Figure: the complete state space tree with the values S, K and R recorded at each node; the full tree is repeated on the next slide.)
Backtracking
• Problem 3 : Sum of Subset
Example:
If 𝑛 = 4, 𝑤1 , 𝑤2 , 𝑤3 , 𝑤4 = 7, 11, 13, 24 and 𝑚 = 31 , the desired solution sets are
(11, 13, 7), 𝑎𝑛𝑑 (24, 7).
S K R 1 2 3 4
1 0 1 55
0 7 11 13 24
𝑥1 →
S K R S K R
7 1 48 0 1 48
𝑥2 → 1 0 1 0
S K R S K R S K R S K R
18 2 37 7 2 37 11 2 37 0 2 37
𝑥3 → 1 0 1 0 1 0 1 0
0
S K R S K R S K R S K R S K R S K R S K R S K R
31 3 24 18 3 24 20 3 24 7 3 24 24 3 24 11 3 24 13 3 24 0 3 24
𝑥4 → 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
S K R S K R S K R S K R S K R S K R S K R S K R
55 4 0 42 4 0 44 4 0 31 4 0 48 4 0 35 4 0 37 4 0 24 4 0
S K R S K R S K R S K R S K R S K R S K R
S K R
31 4 0 18 4 0 20 4 0 7 4 0 24 4 0 13 4 0 0 4 0
11 4 0
Backtracking
• Problem 3 : Sum of Subset
What to do before executing SumOfSubset():
• Find all subsets of 𝑤[1: 𝑛] that sum to 𝑚.
• The values of 𝑥 𝑗 , 1 ≤ 𝑗 ≤ 𝑘, have already been
determined.
• s = Σ_{j=1}^{k-1} w[j]·x[j]  and  r = Σ_{j=k}^{n} w[j]
• The w[j]'s are in nondecreasing order.
• It is assumed that w[1] ≤ m and Σ_{i=1}^{n} w[i] ≥ m.
Let's execute the example with the help of Backtracking and
draw the State Space Tree.
Backtracking
• Problem 3 : Sum of Subset
Algorithm:
SumOfSubset(s, k, r)
{
    // Generate left child. Note: s + w[k] ≤ m.
    x[k] = 1;
    if (s + w[k] == m) then print(x[1:k]);        // Subset found
    // There is no recursive call here as w[j] > 0, 1 ≤ j ≤ n.
    else if (s + w[k] + w[k+1] ≤ m)
        then SumOfSubset(s + w[k], k + 1, r - w[k]);
    // Generate right child.
    if ((s + r - w[k] ≥ m) and (s + w[k+1] ≤ m)) then
    {
        x[k] = 0;
        SumOfSubset(s, k + 1, r - w[k]);
    }
}
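A runnable 0-indexed version of the same procedure is sketched below for the example instance w = (7, 11, 13, 24), m = 31; the bounds checks on k+1 are added so the sketch is safe even at the last index:

    #include <iostream>
    #include <vector>

    // Prints every 0/1 vector x with sum(w[i] * x[i]) == m.
    // s = sum chosen so far, r = sum of the weights not yet considered.
    void sumOfSubsets(const std::vector<int> &w, std::vector<int> &x,
                      int k, int s, int r, int m) {
        int n = w.size();
        x[k] = 1;                                            // left child: take w[k]
        if (s + w[k] == m) {                                 // subset found
            for (int i = 0; i < n; ++i) std::cout << (i <= k ? x[i] : 0) << " ";
            std::cout << "\n";
        } else if (k + 1 < n && s + w[k] + w[k + 1] <= m) {
            sumOfSubsets(w, x, k + 1, s + w[k], r - w[k], m);
        }
        // Right child: skip w[k], but only if a solution is still possible.
        if (s + r - w[k] >= m && k + 1 < n && s + w[k + 1] <= m) {
            x[k] = 0;
            sumOfSubsets(w, x, k + 1, s, r - w[k], m);
        }
    }

    int main() {
        std::vector<int> w = {7, 11, 13, 24};                // nondecreasing order
        std::vector<int> x(w.size(), 0);
        int r = 7 + 11 + 13 + 24;                            // 55, the value of r at the root
        sumOfSubsets(w, x, 0, 0, r, 31);                     // prints 1 1 1 0 and 1 0 0 1
        return 0;
    }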
Backtracking
• Problems
1. Hamiltonian cycle
2. Graph Coloring
3. Sum of subset
4. N-Queen
Backtracking
• Problem 4 : N Queen
• N-Queen problem is based on chess games.
• The problem is based on arranging the queens on
chessboard in such a way that no two queens can
attack each other.
• The N-Queen problem states as consider a n x n
chessboard on which we have to place n queens
so that no two queens attack each other by being
in the same row or in the same column or on the
same diagonal.
Backtracking
• Problem 4 : N Queen
• The 2-Queens problem is not solvable because two queens cannot be placed on a 2 x 2 chessboard without attacking each other, as shown below.

Q Q Q Q
Q Q

Q Q
Q Q Q Q
Backtracking
• Problem 4 : N Queen
• How to solve N-Queen Problem.
1. Let us take the example of 4 – Queens and 4 x 4
chessboard.
2. Start with an empty chessboard.
3. Place queen 1 in the first possible position of its row
i.e. on 1st row and 1st column
Q
Backtracking
• Problem 4 : N Queen
• How to solve N-Queen Problem.
4. Then place queen 2 after trying unsuccessful place
(1, 2), (2, 1), (2, 2) at (2, 3) i.e. 2nd row and 3rd
column.

Q
Q
Backtracking
• Problem 4 : N Queen
• How to solve N-Queen Problem.
5. This is a dead end because a 3rd queen cannot be
placed in next column, as there is no acceptable
position for queen 3. Hence algorithm backtracks and
places the 2nd queen at (2, 4) position..

Q Q
Q Q
Backtracking
• Problem 4 : N Queen
• How to solve N-Queen Problem.
6. Then place the 3rd queen at (3, 2), but this is again a dead end, as the next queen (the 4th) cannot be placed at any permissible position.

Q
Q
Q
Backtracking
• Problem 4 : N Queen
• How to solve N-Queen Problem.
7. Hence we need to backtrack all the way up to queen 1 and move it to (1, 2). Then place queen 1 at (1, 2), queen 2 at (2, 4), queen 3 at (3, 1) and queen 4 at (4, 3).

Q Q Q Q
Q Q Q
Q Q
Q
8. Thus the solution is obtained (2, 4, 1, 3) in row wise
manner in x array.
Backtracking
• Problem 4 : N Queen
• How to solve N-Queen Problem.
9. A classic problem in combinatorics is to place 8-Queens
on an 8 by 8 chessboard so that no two can “attack” each
other (along a row, column, or diagonal).
10. Since each queen (1-8) must be on a different row, we
can assume queen 𝒊 is on row 𝒊.
11. All solutions to the 8-queens problem can be represented
as an 8-tuple (𝒙𝟏 , 𝒙𝟐 , ……., 𝒙𝒏 ) where queen 𝒊 is on
column 𝒙𝒊 .
12. The one of the solution to 8-Queen Problem is shown in
next slide.
Backtracking
• Problem 4 : N Queen
One of the solution to 8-Queen Problem

Q
Q
Q
Q
Q
Q
Q
Q
Backtracking
• Problem 4 : N Queen
• N Queens are placed on n by n chessboard so that no
two attack (no two queens are on the same row,
column, or diagonal).
• A tree bellow shows the state space tree for n=4.
1
x1 = 1 2 3 4

2 18 34 50
Backtracking
• Problem 4 : N Queen
• N Queens are placed on n by n chessboard so that no
two attack (no two queens are on the same row,
column, or diagonal).
• A tree bellow shows the state space tree for n=4.
1
x1 = 1 2 3 4

2 18 34 50
x2 = 2 3 4 1 3 4 1 2 4 1 2 3
3 8 13 19 24 29 35 40 45 51 56 61
Backtracking
• Problem 4 : N Queen
• N Queens are placed on n by n chessboard so that no
two attack (no two queens are on the same row,
column, or diagonal).
• A tree bellow shows the state space tree for n=4.
1
x1 = 1 2 3 4

2 18 34 50
x2 = 2 3 4 1 3 4 1 2 4 1 2 3
3 8 13 19 24 29 35 40 45 51 56 61
𝑥3 = 3 4 2 4 2 3 3 4 1 2 1 3 2 4 1 4 1 2 2 3 1 3 1 2
4 6 9 11 14 16 20 22 25 27 30 32 36 38 41 43 46 48 52 54 57 59 62 64
Backtracking
• Problem 4 : N Queen
• N Queens are placed on n by n chessboard so that no
two attack (no two queens are on the same row,
column, or diagonal).
• A tree bellow shows the state space tree for n=4.
1
x1 = 1 2 3 4

2 18 34 50
x2 = 2 3 4 1 3 4 1 2 4 1 2 3
3 8 13 19 24 29 35 40 45 51 56 61
𝑥3 = 3 4 2 4 2 3 3 4 1 2 1 3 2 4 1 4 1 2 2 3 1 3 1 2
4 6 9 11 14 16 20 22 25 27 30 32 36 38 41 43 46 48 52 54 57 59 62 64
𝑥4 = 4 3 4 2 3 2 4 3 2 1 3 1 4 2 4 1 2 1 3 2 3 1 2 1
5 7 10 12 15 17 21 23 26 28 31 33 37 39 42 44 47 49 53 55 58 60 63 65
Backtracking
• Problem 4 : N Queen
• N Queens are placed on n by n chessboard so that no
two attack (no two queens are on the same row,
column, or diagonal).
• A tree bellow shows the state space tree for n=4.
1
x1 = 1 2 3 4

2 18 34 50
x2 = 2 3 4 1 3 4 1 2 4 1 2 3
3 8 13 19 24 29 35 40 45 51 56 61
𝑥3 = 3 4 2 4 2 3 3 4 1 2 1 3 2 4 1 4 1 2 2 3 1 3 1 2
4 6 9 11 14 16 20 22 25 27 30 32 36 38 41 43 46 48 52 54 57 59 62 64
𝑥4 = 4 3 4 2 3 2 4 3 2 1 3 1 4 2 4 1 2 1 3 2 3 1 2 1
5 7 10 12 15 17 21 23 26 28 31 33 37 39 42 44 47 49 53 55 58 60 63 65
Backtracking
• Problem 4 : N Queen
• Logic behind the place() of N-Queen Problem:
1. Let (x1, x2, ..., xn) represent where the queens are placed: the i-th queen is placed in row i and column xi on an n × n chessboard.
2. It was observed that two queens on the same diagonal
that runs from “upper left” to “lower right” have the
same “row-column” value.
3. Also, two queens on the same diagonal from “upper-
right” to “lower left” have the same “row+column”
value. (illustrated with a 8 x 8 chessboard)
Backtracking
• Problem 4 : N Queen
Logic behind the place() of N-Queen Problem
1 2 3 4 5 6 7 8 4. The Queen is available in [i,j] =[4,2]
1 5. The squares that are diagonal to that
queen from upper left to lower right
2
are [3,1], [5,3], [6,4], [7,5], and
3 [8,6]. (Note : All these squares have
4 Q a “row-column” value of 2)
5 6. Similarly, the squares that are
diagonal to that queen from upper
6 right to lower left are [1,5], [2,4],
7 [3,3], and [5,1]. (Note : All these
8 squares have a “row+column” value
of 6)
Backtracking
• Problem 4 : N Queen
Logic behind the place() of N-Queen Problem:
7. Then two queens at (i, j) and (k, l) are on the same diagonal if and only if:
       i − j = k − l   or   i + j = k + l
   ⟺  i − k = j − l   or   j − l = k − i
   ⟺  |j − l| = |i − k|
8. Algorithm PLACE(k,i) returns true if the kth queen can be
placed in column i and runs in 𝑂(𝑘) time.
9. Using PLACE, the recursive version of the general
backtracking method can be used to give a precise solution
to the n-queens problem
10. Array x[ ] is global. Algorithm invoked by NQueens(1,n).
Backtracking
• Problem 4 : N Queen (Algorithm)
Recursive algorithm for N-Queen
void NQueens(int k, int n)
// Using backtracking, this procedure prints all possible placements of n queens
// on an n x n chessboard so that they are nonattacking.
{
for i=1 to n do
{
if (Place(k, i))
{
x[k] = i;
if (k==n)
print (x[1:n])
else
NQueens(k+1, n);
}
}
}
Backtracking
• Problem 4 : N Queen (Algorithm)
Recursive algorithm for N-Queen
bool Place(int k, int i)
// Returns true if a queen can be placed in the kth row and ith column. Otherwise it
// returns false. x[] is a global array whose first (k-1) values have been set. abs(r)
// returns the absolute value of r.
{
    for (j = 1 to k-1) do
        if ((x[j] == i)                          // Two in the same column
            || (abs(x[j]-i) == abs(j-k)))        // or in the same diagonal
            return (false);
    return (true);
}
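A self-contained runnable version of NQueens and Place (rows 0-indexed, columns 1..n) is sketched below; for n = 4 it prints the two solutions derived above:

    #include <cstdlib>
    #include <iostream>
    #include <vector>

    // True if a queen may be put in row k at column c, given the columns
    // already fixed for rows 0..k-1.
    bool place(const std::vector<int> &x, int k, int c) {
        for (int j = 0; j < k; ++j)
            if (x[j] == c || std::abs(x[j] - c) == std::abs(j - k))
                return false;                      // same column or same diagonal
        return true;
    }

    // Prints every solution vector (x[0], ..., x[n-1]), where x[i] is the
    // column (1..n) of the queen in row i.
    void nQueens(std::vector<int> &x, int k, int n) {
        for (int c = 1; c <= n; ++c)
            if (place(x, k, c)) {
                x[k] = c;
                if (k == n - 1) {
                    for (int v : x) std::cout << v << " ";
                    std::cout << "\n";
                } else {
                    nQueens(x, k + 1, n);
                }
            }
    }

    int main() {
        int n = 4;
        std::vector<int> x(n, 0);
        nQueens(x, 0, n);                          // prints 2 4 1 3 and 3 1 4 2
        return 0;
    }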
Design and Analysis of Algorithm

Branch and Bound


(Travelling salesman Problem)

Lecture – 54
Overview
• The method was first proposed by Ailsa
Land and Alison Doig whilst carrying
out research at the London School of
Economics sponsored by British
Petroleum in 1960 for discrete
programming, and has become the most
commonly used tool for solving NP-
hard optimization problems.
• The name "branch and bound" first
occurred in the work of Little et al. on
the traveling salesman problem.
Overview
• Branch and Bound, or B & B, is an
algorithm design paradigm that solves
combinatorial and discrete optimization
problems.
• Many optimization issues, such as crew
scheduling, network flow problems, and
production planning, cannot be solved in
polynomial time.
• Hence, B & B is a paradigm that is widely
used to solve such problems.
Overview
• Branch-and-bound algorithm consists of a
systematic enumeration of solutions by
means of state space search tree.
• The set of solutions is thought of as
forming a rooted tree with the full set at
the root.
• The algorithm explores branches of this
tree, which represent subsets of the
solution set.
Overview
• Branch-and-bound algorithms incorporate different search techniques to traverse the state space tree. The search techniques used in B&B are listed below:

• LC search
• BFS
• DFS
Overview
1. LC search (Least Cost Search):
• It uses a heuristic cost function to compute the bound values at each node. Nodes are added to the list of live nodes as soon as they are generated.
• The node with the least value of the cost function is selected as the next node to explore.
Overview
2.BFS(Breadth First Search):
• It is also known as FIFO search.
• It maintains the list of live nodes in first-in-first-out order, i.e., in a queue. The live nodes are explored in FIFO order.
Overview
3. DFS (Depth First Search):
• It is also known as LIFO search.
• It maintains the list of live nodes in last-in-first-out order, i.e., in a stack.
• The live nodes are explored in LIFO order.
Travelling Salesman Problem
• Problem Statement
Given a set of cities and distance between every pair of
cities, the problem is to find the shortest possible tour
that visits every city exactly once and returns to the
starting point.
The cost matrix is defined as:

C(i, j) = w(i, j) if there is a direct path from Ci to Cj, and ∞ otherwise.

Note: Use LC method to solve this problem.


Travelling Salesman Problem

Example:

1 2 1 2 3 4 5
1 ∞ 20 30 10 11
2 15 ∞ 16 4 2
4 3 3 3 5 ∞ 2 4
4 19 6 18 ∞ 3
5 16 4 7 16 ∞
5

Graph Cost Matrix of the Graph


Travelling Salesman Problem
Example:
Step 1: First reduce the cost of above matrix(i.e Reducing matrix)
1 2 3 4 5
1 ∞ 20 30 10 11
2 15 ∞ 16 4 2
3 3 5 ∞ 2 4
4 19 6 18 ∞ 3
5 16 4 7 16 ∞
Travelling Salesman Problem
Example:
Step 1: First reduce the cost of above matrix(i.e Reducing matrix)
1 2 3 4 5
1 ∞ 20 30 10 11 Find the minimum value of each row
2 15 ∞ 16 4 2 and then create the resultant matrix
3 3 5 ∞ 2 4 by subtracting the minimum value
4 19 6 18 ∞ 3 from each element of the same row
5 16 4 7 16 ∞ (i.e Reduced Row)
Travelling Salesman Problem
Example:
Step 1: First reduce the cost of above matrix(i.e Reducing matrix)
1 2 3 4 5
1 ∞ 20 30 10 11 10 Find the minimum value of each row
2 15 ∞ 16 4 2 2 and then create the resultant matrix
3 3 5 ∞ 2 4 2 by subtracting the minimum value
4 19 6 18 ∞ 3 3 from each element of the same row
5 16 4 7 16 ∞ 4 (i.e Reduced Row)
෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 21
Travelling Salesman Problem
Example:
Step 1: First reduce the cost of above matrix(i.e Reducing matrix)
1 2 3 4 5
1 ∞ 20 30 10 11 10 Find the minimum value of each row
2 15 ∞ 16 4 2 2 and then create the resultant matrix
3 3 5 ∞ 2 4 2 by subtracting the minimum value
4 19 6 18 ∞ 3 3 from each element of the same row
5 16 4 7 16 ∞ 4 (i.e Reduced Row)
෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 21

1 2 3 4 5
1 ∞ 10 20 0 1 10
2 13 ∞ 14 2 0 2
3 1 3 ∞ 0 2 2
4 16 3 15 ∞ 0 3
5 12 0 3 12 ∞ 4
Travelling Salesman Problem
Example:
Step 1: First reduce the cost of above matrix(i.e Reducing matrix)
1 2 3 4 5
1 ∞ 20 30 10 11 10 Find the minimum value of each row
2 15 ∞ 16 4 2 2 and then create the resultant matrix
3 3 5 ∞ 2 4 2 by subtracting the minimum value
4 19 6 18 ∞ 3 3 from each element of the same row
5 16 4 7 16 ∞ 4 (i.e Reduced Row)
෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 21

1 2 3 4 5
1 ∞ 10 20 0 1 Find the minimum value of each
10
2 13 ∞ 14 2 0 column and then create the
2
3 1 3 ∞ 0 2 resultant matrix by subtracting the
2
4 16 3 15 ∞ 0 3 minimum value from each element
5 12 0 3 12 ∞ 4 of the same row (i.e Reduced
column)
Travelling Salesman Problem
Example:
Step 1: First reduce the cost of above matrix(i.e Reducing matrix)
1 2 3 4 5
1 ∞ 20 30 10 11 10 Find the minimum value of each row
2 15 ∞ 16 4 2 2 and then create the resultant matrix
3 3 5 ∞ 2 4 2 by subtracting the minimum value
4 19 6 18 ∞ 3 3 from each element of the same row
5 16 4 7 16 ∞ 4 (i.e Reduced Row)
෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 21

1 2 3 4 5
1 ∞ 10 20 0 1 Find the minimum value of each
10
2 13 ∞ 14 2 0 column and then create the
2
3 1 3 ∞ 0 2 resultant matrix by subtracting the
2
4 16 3 15 ∞ 0 3 minimum value from each element
5 12 0 3 12 ∞ 4 of the same row (i.e Reduced
1 0 3 0 0 column)
Travelling Salesman Problem
Example:
Step 1: First reduce the cost of above matrix(i.e Reducing matrix)
1 2 3 4 5
1 ∞ 20 30 10 11 10 Find the minimum value of each row
2 15 ∞ 16 4 2 2 and then create the resultant matrix
3 3 5 ∞ 2 4 2 by subtracting the minimum value
4 19 6 18 ∞ 3 3 from each element of the same row
5 16 4 7 16 ∞ 4 (i.e Reduced Row)
෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 21

1 2 3 4 5
1 ∞ 10 17 0 1 Find the minimum value of each
10
2 12 ∞ 11 2 0 column and then create the
2
3 0 3 ∞ 0 2 resultant matrix by subtracting the
2
4 15 3 12 ∞ 0 3 minimum value from each element
5 11 0 0 12 ∞ 4 of the same row (i.e Reduced
1 0 3 0 0 column)
෍ 𝑐𝑜𝑙 𝑚𝑖𝑛 = 4
Travelling Salesman Problem
Example:
The final reduced matrix after step 1 is:
1 2 3 4 5
1 ∞ 10 17 0 1
2 12 ∞ 11 2 0
C1= 3 0 3 ∞ 0 2
4 15 3 12 ∞ 0
5 11 0 0 12 ∞

THE COST OF NODE 1


Total cost of reduction of all rows =σ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 21
Total cost of reduction of all columns =σ 𝑐𝑜𝑙 𝑚𝑖𝑛 = 4
So the reduced cost after step 1 =21+4=25
“Now the matrix is a reduced matrix. It means the matrix
contain that one zero in each row and one zero in each column.”
Travelling Salesman Problem
Example:
From the step 1, it was observed that the cost of 1st node is 25.
Hence the State space tree is
C=25
1

Now, we calculate the cost from node 1 to node 2, node 1 to node


3, node 1 to node 4, and node 1 to node 5.
And check, whether there is a minimum cost path from node 1 to
node 2 or node 1 to node 3, node 1 to node 4 or node 1 to node 5
is exists? And find which one is minimum and explore that node
again. And show the procedure through state space tree.
Let us do it one by one……..
Travelling Salesman Problem
Example:
Step 2 Find the cost from node 1 to node 2.
• Make all the value of row1 to ∞
• Make all the value of col 2 to ∞
• Make node 2 to 1 is also ∞
• And then apply reduction technique to reduced the matrix.
• Calculate the cost by using following formula.
𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
Cost from node i
to j in reduction Reduction cost
matrix of current
matrix
Reduced cost of node i .
(calculated previously)
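The row/column reduction that produces the reduction-cost term of this formula can be sketched as a small helper (not part of the slides): it subtracts the row minima and then the column minima in place and returns the total amount subtracted, with ∞ modelled by a large sentinel:

    #include <algorithm>
    #include <vector>

    const int INF = 1000000000;    // stands in for the infinity entries of the cost matrix

    // Reduce the matrix in place (rows first, then columns) and return the
    // total reduction cost, i.e. the r-hat used in  C(i,j) + r + r-hat.
    int reduceMatrix(std::vector<std::vector<int>> &m) {
        int n = m.size(), total = 0;
        for (int i = 0; i < n; ++i) {                       // row reduction
            int rowMin = INF;
            for (int j = 0; j < n; ++j) rowMin = std::min(rowMin, m[i][j]);
            if (rowMin == INF || rowMin == 0) continue;     // all-infinity row or already reduced
            total += rowMin;
            for (int j = 0; j < n; ++j)
                if (m[i][j] != INF) m[i][j] -= rowMin;
        }
        for (int j = 0; j < n; ++j) {                       // column reduction
            int colMin = INF;
            for (int i = 0; i < n; ++i) colMin = std::min(colMin, m[i][j]);
            if (colMin == INF || colMin == 0) continue;
            total += colMin;
            for (int i = 0; i < n; ++i)
                if (m[i][j] != INF) m[i][j] -= colMin;
        }
        return total;               // 21 + 4 = 25 for the root matrix of this example
    }

At a child node for the edge i → j, one would first set row i, column j and the entry leading back to the start vertex to INF in a copy of the parent's reduced matrix, call reduceMatrix on that copy, and add C(i, j) from the parent's reduced matrix plus the parent's cost r.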
Travelling Salesman Problem
Example:
Step 2 Find the cost from node 1 to node 2.
• Make all the value of row1 to ∞
• Make all the value of col 2 to ∞
• Make node 2 to 1 is also ∞
• And then apply reduction technique to reduced the matrix.
• Calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5
1 ∞ ∞ ∞ ∞ ∞
2 ∞ ∞ 11 2 0
3 0 ∞ ∞ 0 2
4 15 ∞ 12 ∞ 0
5 11 ∞ 0 12 ∞
Travelling Salesman Problem
Example:
Step 2 Find the cost from node 1 to node 2.
• Make all the value of row1 to ∞
• Make all the value of col 2 to ∞
• Make node 2 to 1 is also ∞
• And then apply reduction technique to reduced the matrix.
• Calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5
෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0
1 ∞ ∞ ∞ ∞ ∞ 0
2 ∞ ∞ 11 2 0 0
3 0 ∞ ∞ 0 2 0
4 15 ∞ 12 ∞ 0 0
5 11 ∞ 0 12 ∞ 0
Travelling Salesman Problem
Example:
Step 2 Find the cost from node 1 to node 2.
• Make all the value of row1 to ∞
• Make all the value of col 2 to ∞
• Make node 2 to 1 is also ∞
• And then apply reduction technique to reduced the matrix.
• Calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5
෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0
1 ∞ ∞ ∞ ∞ ∞ 0
2 ∞ ∞ 11 2 0 0
3 0 ∞ ∞ 0 2 0
4 15 ∞ 12 ∞ 0 0
5 11 ∞ 0 12 ∞ 0
0 0 0 0 0
Travelling Salesman Problem
Example:
Step 2 Find the cost from node 1 to node 2.
• Make all the value of row1 to ∞
• Make all the value of col 2 to ∞
• Make node 2 to 1 is also ∞
• And then apply reduction technique to reduced the matrix.
• Calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0
1 ∞ ∞ ∞ ∞ ∞ 0
2 ∞ ∞ 11 2 0 0 Cost from node 1 to node 2
3 0 ∞ ∞ 0 2 0 ⟹ 𝐶 1,2 + 𝑟 + 𝑟Ƹ
4 15 ∞ 12 ∞ 0 0 ⟹ 10 + 25 + 0 = 35
5 11 ∞ 0 12 ∞ 0
0 0 0 0 0
Travelling Salesman Problem
Example:
Step 3 Find the cost from node 1 to node 3.
• Make all the value of row1 to ∞
• Make all the value of col 3 to ∞
• Make node 3 to 1 is also ∞
• And then apply reduction technique to reduced the matrix.
• Calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5
1 ∞ ∞ ∞ ∞ ∞
2 12 ∞ ∞ 2 0
3 ∞ 3 ∞ 0 2
4 15 3 ∞ ∞ 0
5 11 0 ∞ 12 ∞
Travelling Salesman Problem
Example:
Step 3 Find the cost from node 1 to node 3.
• Make all the value of row1 to ∞
• Make all the value of col 3 to ∞
• Make node 3 to 1 is also ∞
• And then apply reduction technique to reduced the matrix.
• Calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5
෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0
1 ∞ ∞ ∞ ∞ ∞ 0
2 12 ∞ ∞ 2 0 0
3 ∞ 3 ∞ 0 2 0
4 15 3 ∞ ∞ 0 0
5 11 0 ∞ 12 ∞ 0
Travelling Salesman Problem
Example:
Step 3 Find the cost from node 1 to node 3.
• Make all the value of row1 to ∞
• Make all the value of col 3 to ∞
• Make node 3 to 1 is also ∞
• And then apply reduction technique to reduced the matrix.
• Calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5
෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 11
1 ∞ ∞ ∞ ∞ ∞ 0
2 12 ∞ ∞ 2 0 0
3 ∞ 3 ∞ 0 2 0
4 15 3 ∞ ∞ 0 0
5 11 0 ∞ 12 ∞ 0
11 0 0 0 0
Travelling Salesman Problem
Example:
Step 3 Find the cost from node 1 to node 3.
• Make all the value of row1 to ∞
• Make all the value of col 3 to ∞
• Make node 3 to 1 is also ∞
• And then apply reduction technique to reduced the matrix.
• Calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 11
1 ∞ ∞ ∞ ∞ ∞ 0
2 12 ∞ ∞ 2 0 0 Cost from node 1 to node 3
3 ∞ 3 ∞ 0 2 0 ⟹ 𝐶 1,3 + 𝑟 + 𝑟Ƹ
4 15 3 ∞ ∞ 0 0 ⟹ 17 + 25 + 11 = 53
5 11 0 ∞ 12 ∞ 0
11 0 0 0 0
Travelling Salesman Problem
Example:
Step 4 Similarly we can find the cost from node 1 to node 4.
• Make all the value of row1 to ∞
• Make all the value of col 4 to ∞
• Make node 4 to 1 is also ∞
• And then apply reduction technique to reduced the matrix.
• Calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0
1 ∞ ∞ ∞ ∞ ∞ 0
2 12 ∞ 11 ∞ 0 0 Cost from node 1 to node 4
3 0 3 ∞ ∞ 2 0 ⟹ 𝐶 1,4 + 𝑟 + 𝑟Ƹ
4 ∞ 3 12 ∞ 0 0 ⟹ 0 + 25 + 0 = 25
5 11 0 0 ∞ ∞ 0
0 0 0 0 0
Travelling Salesman Problem
Example:
Step 5 Similarly we can find the cost from node 1 to node 5.
• Make all the value of row1 to ∞
• Make all the value of col 5 to ∞
• Make node 5 to 1 is also ∞
• And then apply reduction technique to reduced the matrix.
• Calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 5 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0
1 ∞ ∞ ∞ ∞ ∞ 0
2 12 ∞ 11 2 ∞ 2 Cost from node 1 to node 5
3 0 3 ∞ 0 ∞ 0 ⟹ 𝐶 1,5 + 𝑟 + 𝑟Ƹ
4 15 3 12 ∞ ∞ 3 ⟹ 1 + 25 + 5 = 31
5 ∞ 0 0 12 ∞ 0
0 0 0 0 0
Travelling Salesman Problem
Example:
Hence
• the cost from node 1 to node 2 =35
• the cost from node 1 to node 3 =53
• the cost from node 1 to node 4 =25
• the cost from node 1 to node 5 =31
And the State space tree is given below:

C1=25
1

C2=35 C3=53 C4=25 C5=31


2 3 4 5
Travelling Salesman Problem
Example:
Hence
• the cost from node 1 to node 2 =35
• the cost from node 1 to node 3 =53
• the cost from node 1 to node 4 =25
• the cost from node 1 to node 5 =31
And the State space tree is given below:

C1=25
1

C2=35 C3=53 C4=25 C5=31


2 3 4 5

Find Minimum?
Travelling Salesman Problem
Example:
Hence
• the cost from node 1 to node 2 =35
• the cost from node 1 to node 3 =53
• the cost from node 1 to node 4 =25 (Minimum)
• the cost from node 1 to node 5 =31

C1=25
1

C2=35 C3=53 C4=25 C5=31


2 3 4 5
Travelling Salesman Problem
Example:
Hence
• the cost from node 1 to node 2 =35
• the cost from node 1 to node 3 =53
• the cost from node 1 to node 4 =25 (Minimum)
• the cost from node 1 to node 5 =31
1 2 3 4 5
C1=25
1 1 ∞ ∞ ∞ ∞ ∞
2 12 ∞ 11 ∞ 0
3 0 3 ∞ ∞ 2
C2=35 C3=53 C4=25 C5=31 ∞ 3 12 ∞ 0
2 3 4 5
4
5 11 0 0 ∞ ∞

“Hence the reduced matrix obtained from node 1 to node 4 will


be treated as reduced matrix for next level of the graph.”
Travelling Salesman Problem
Example:
Hence
• the cost from node 1 to node
=35 2
• the cost from node 1 to node
=53 3
• the cost from node 1 to node 4
=25 (Minimum)
• the cost from node 1 to node
=31 5 1 2 3 4 5
C1=25 ∞ ∞ ∞ ∞ ∞
1 1
2 12 ∞ 11 ∞ 0
3 0 3 ∞ ∞ 2
C2=35 C3=53 C4=25 C5=31 4 ∞ 3 12 ∞ 0
2 3 4 5
5 11 0 0 ∞ ∞
“Hence the reduced matrix obtained from node 1 to node 4 will be treated
as reduced matrix for next level of the graph”
Now further find who is the next vertex in next level?( i.e. node 2 or node 3
or node 5)
Hint: Apply the same methodology
Travelling Salesman Problem
Example: 1 2 3 4 5
C1=25 ∞ ∞ ∞ ∞ ∞
1 1
2 12 ∞ 11 ∞ 0
3 0 3 ∞ ∞ 2
C2=35 C3=53 C4=25 C5=31 4 ∞ 3 12 ∞ 0
2 3 4 5
5 11 0 0 ∞ ∞

Now, we calculate the cost from node 4 to node 2, node 4 to


node 3, and node 4 to node 5.
And check, whether there is a minimum cost path from node 4 to
node 2 or node 4 to node 3, or node 4 to node 5 is exists? And
find which one is minimum and explore that node again. And
show the procedure through state space tree.
Let us do it one by one……..
Travelling Salesman Problem
Example:
Step 6 Find the cost from node 4 to node 2.
• Make all the value of row 4 to ∞
• Make all the value of col 2 to ∞
• Make node 2 to 1 is also ∞ as the path is 1 → 4 → 2 → 1.
• Then apply reduction technique to reduced the matrix.
• And calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ

1 2 3 4 5
1 ∞ ∞ ∞ ∞ ∞
2 ∞ ∞ 11 ∞ 0
3 0 ∞ ∞ ∞ 2
4 ∞ ∞ ∞ ∞ ∞
5 11 ∞ 0 ∞ ∞
Travelling Salesman Problem
Example:
Step 6 Find the cost from node 4 to node 2.
• Make all the value of row 4 to ∞
• Make all the value of col 2 to ∞
• Make node 2 to 1 is also ∞ as the path is 1 → 4 → 2 → 1.
• Then apply reduction technique to reduced the matrix.
• And calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ

1 2 3 4 5
1 ∞ ∞ ∞ ∞ ∞ Apply reduction technique and
2 ∞ ∞ 11 ∞ 0 reduced the matrix
3 0 ∞ ∞ ∞ 2
4 ∞ ∞ ∞ ∞ ∞
5 11 ∞ 0 ∞ ∞
Travelling Salesman Problem
Example:
Step 6 Find the cost from node 4 to node 2.
• Make all the value of row 4 to ∞
• Make all the value of col 2 to ∞
• Make node 2 to 1 is also ∞ as the path is 1 → 4 → 2 → 1.
• Then apply reduction technique to reduced the matrix.
• And calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ

1 2 3 4 5 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0


1 ∞ ∞ ∞ ∞ ∞ 0
2 ∞ ∞ 11 ∞ 0 0 Cost from node 4 to node 2
0 ∞ ∞ ∞ 2 0 ⟹ 𝐶 4,2 + 𝑟 + 𝑟Ƹ
3 0
4 ∞ ∞ ∞ ∞ ∞
0 ⟹ 3 + 25 + 0 = 28
5 11 ∞ 0 ∞ ∞
0 0 0 0 0
Travelling Salesman Problem
Example:
Step 7 Find the cost from node 4 to node 3.
• Make all the value of row 4 to ∞
• Make all the value of col 3 to ∞
• Make node 3 to 1 is also ∞ as the path is 1 → 4 → 3 → 1.
• Then apply reduction technique to reduced the matrix.
• And calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5
෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 2 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 3
1 ∞ ∞ ∞ ∞ ∞ 0
2 12 ∞ ∞ ∞ 0 0 Cost from node 4 to node 3
3 ∞ 3 ∞ ∞ 2 2 ⟹ 𝐶 4,3 + 𝑟 + 𝑟Ƹ
4 ∞ ∞ ∞ ∞ ∞ 0
11 0 ∞ ∞ ∞ 0 ⟹ 12 + 25 + 13 = 50
5
11 0 0 0 0
Travelling Salesman Problem
Example:
Step 8 Find the cost from node 4 to node 5.
• Make all the value of row 4 to ∞
• Make all the value of col 5 to ∞
• Make node 5 to 1 is also ∞ as the path is 1 → 4 → 5 → 1.
• Then apply reduction technique to reduced the matrix.
• And calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5
∞ ∞ ∞ ∞ ∞ ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 11 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0
1 0
2 12 ∞ 11 ∞ ∞ 11
Cost from node 4 to node 5
3 0 3 ∞ ∞ ∞ 0
∞ ∞ ∞ ∞ ∞ 0 ⟹ 𝐶 4,5 + 𝑟 + 𝑟Ƹ
4
∞ 0 0 ∞ ∞ 0 ⟹ 0 + 25 + 11 = 36
5
0 0 0 0 0
Travelling Salesman Problem
Example:
C1=25
Hence the cost from 1
• node 4 to node 2 =28
• node 4 to node 3 =50 C2=35 C3=53 C4=25 C5=31
2 3 4 5
• node 4 to node 5 =36
And the State space tree is
grown as: 2
3 5
C2=28 C3=50 C5=36
Travelling Salesman Problem
Example:
Hence the cost from 1
C1=25

• node 4 to node 2 =28 (Minimum)


• node 4 to node 3 =50 C2=35 C3=53 C4=25 C5=31
2 3 4
• node 4 to node 5 =36 5

And the State space tree is grown as:


“Hence the reduced matrix obtained 2
3 5
from node 4 to node 2 will be treated as C2=28 C3=50 C5=36
reduced matrix for next level of the
graph” 1 2 3 4 5
1 ∞ ∞ ∞ ∞ ∞
Now further find who is the next ∞ ∞ 11 ∞ 0
2
vertex in next level?( i.e. node 3 or 0 ∞ ∞ ∞ 2
3
node 5) 4 ∞ ∞ ∞ ∞ ∞
Hint: Apply the same methodology 5 11 ∞ 0 ∞ ∞
Travelling Salesman Problem
Example:
C1=25 1 2 3 4 5
1
1 ∞ ∞ ∞ ∞ ∞
2 ∞ ∞ 11 ∞ 0
C2=35 C3=53 C4=25 C5=31 0 ∞ ∞ ∞ 2
2 3 4 5
3
4 ∞ ∞ ∞ ∞ ∞
5 11 ∞ 0 ∞ ∞

2 5
3
C2=28 C3=50 C5=36

Now, we calculate the cost from node 2 to node 3, and node 2 to node 5.
And check, whether there is a minimum cost path from node 2 to node 3 or
node 2 to node 5, is exists? And find which one is minimum and explore
that node again. And show the procedure through state space tree.
Let us do it one by one……..
Travelling Salesman Problem
Example:
Step 9 Find the cost from node 2 to node 3.
• Make all the value of row 2 to ∞
• Make all the value of col 3 to ∞
• Make node 3 to 1 is also ∞ as the path is 1 → 4 → 2 → 3 → 1.
• Then apply reduction technique to reduced the matrix.
• And calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ

1 2 3 4 5
1 ∞ ∞ ∞ ∞ ∞
2 ∞ ∞ ∞ ∞ ∞
3 ∞ ∞ ∞ ∞ 2
4 ∞ ∞ ∞ ∞ ∞
5 11 ∞ ∞ ∞ ∞
Travelling Salesman Problem
Example:
Step 9 Find the cost from node 2 to node 3.
• Make all the value of row 2 to ∞
• Make all the value of col 3 to ∞
• Make node 3 to 1 is also ∞ as the path is 1 → 4 → 2 → 3 → 1.
• Then apply reduction technique to reduced the matrix.
• And calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5
1 ∞ ∞ ∞ ∞ ∞ 0 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 13 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 13
2 ∞ ∞ ∞ ∞ ∞ 0
∞ ∞ ∞ ∞ 2 2 Cost from node 2 to node 3
3
4 ∞ ∞ ∞ ∞ ∞ 0 ⟹ 𝐶 2,3 + 𝑟 + 𝑟Ƹ
5 11 ∞ ∞ ∞ ∞ 11 ⟹ 11 + 28 + 26 = 65
11 0 0 0 2
Travelling Salesman Problem
Example:
Step 10 Find the cost from node 2 to node 5.
• Make all the value of row 2 to ∞
• Make all the value of col 5 to ∞
• Make node 5 to 1 is also ∞ as the path is 1 → 4 → 2 → 5 → 1.
• Then apply reduction technique to reduced the matrix.
• And calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5
1 ∞ ∞ ∞ ∞ ∞ 0 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0
2 ∞ ∞ ∞ ∞ ∞ 0
0 ∞ ∞ ∞ ∞ 0 Cost from node 2 to node 5
3
4 ∞ ∞ ∞ ∞ ∞ 0 ⟹ 𝐶 2,5 + 𝑟 + 𝑟Ƹ
5 ∞ ∞ 0 ∞ ∞ 0 ⟹ 0 + 28 + 0 = 28
0 0 0 0 0
Travelling Salesman Problem
Example: C1=25
1
Hence the cost from
• node 2 to node 3 =65 C2=35 C3=53 C4=25 C5=31
2 3 4 5
• node 2 to node 5 =28
And the State space tree is
grown as: C2=28 2
3 5
C3=50 C5=36

C3=65 3 5 C5=28
Travelling Salesman Problem
Example: 1
C1=25

Hence the cost from


• node 2 to node 3 =65 C2=35 C3=53 C4=25 C5=31
2 3 4 5
• node 2 to node 5 =28
(Minimum)
And the State space tree is C2=28 2 5
3
grown as:
C3=50 C5=36
“Hence the reduced matrix obtained
from node 2 to node 5 will be treated as C3=65 3 5 C5=28
reduced matrix for next level of the
graph” 1 2 3 4 5
1 ∞ ∞ ∞ ∞ ∞
Now further find who is the next 2 ∞ ∞ ∞ ∞ ∞
vertex in next level?( i.e. node 3) 3 0 ∞ ∞ ∞ ∞
Hint: Apply the same methodology 4 ∞ ∞ ∞ ∞ ∞
5 ∞ ∞ 0 ∞ ∞
Travelling Salesman Problem
Example:
Step 11 Find the cost from node 5 to node 3.
• Make all the value of row 5 to ∞
• Make all the value of col 3 to ∞
• Make node 3 to 1 is also ∞ as the path is 1 → 4 → 2 → 5 → 3 → 1.
• Then apply reduction technique to reduced the matrix.
• And calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5
1 ∞ ∞ ∞ ∞ ∞
2 ∞ ∞ ∞ ∞ ∞
3 ∞ ∞ ∞ ∞ ∞
4 ∞ ∞ ∞ ∞ ∞
5 ∞ ∞ ∞ ∞ ∞
Travelling Salesman Problem
Example:
Step 11 Find the cost from node 5 to node 3.
• Make all the value of row 5 to ∞
• Make all the value of col 3 to ∞
• Make node 3 to 1 is also ∞ as the path is 1 → 4 → 2 → 5 → 3 → 1.
• Then apply reduction technique to reduced the matrix.
• And calculate the cost by using → 𝐶 𝑖, 𝑗 + 𝑟 + 𝑟Ƹ
1 2 3 4 5
1 ∞ ∞ ∞ ∞ ∞ 0 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0 ෍ 𝑟𝑜𝑤 𝑚𝑖𝑛 = 0
2 ∞ ∞ ∞ ∞ ∞ 0
0 Cost from node 5 to node 3
3 ∞ ∞ ∞ ∞ ∞
∞ ∞ ∞ ∞ ∞ 0 ⟹ 𝐶 5,3 + 𝑟 + 𝑟Ƹ
4
∞ ∞ ∞ ∞ ∞ 0 ⟹ 0 + 28 + 0 = 28
5
0 0 0 0 0
Travelling Salesman Problem
Example: 1
C1=25

Hence the cost from


• node 5 to node 3 =28 C2=35 C3=53 C4=25 C5=31
2 3 4 5
And the final State space tree is
grown as:
C2=28 2 5
3
C3=50 C5=36

C3=65 3 5 C5=28

Hence the final Path is :


3
1→4→2→5→3 C3=28
(25) (25) (28) (28) (28) ← 𝐶𝑜𝑠𝑡 𝑎𝑡 𝑒𝑎𝑐ℎ 𝑛𝑜𝑑𝑒
Travelling Salesman Problem
Time Complexity:
A tour corresponds to one of the (n-1)! orderings of the remaining cities, so the running time of TSP by branch and bound is at least exponential (Ω(2^n)). In the worst case the bounding function prunes nothing and the algorithm may generate essentially all (n-1)! permutations of the remaining n-1 nodes; since each node of the state space tree also requires O(n^2) work for the matrix reduction, the worst-case time is on the order of (n-1)! · n^2.
Design and Analysis of Algorithm

Dynamic Programming
(0/1 knapsack problem, All pair
shortest path)

Lecture – 55-56
Overview

• Dynamic programming is a general algorithm design technique for solving problems defined by recurrences with overlapping subproblems.
• It was invented by the American mathematician Richard Bellman in the 1950s to solve optimization problems and was later assimilated by computer science.
• "Programming" here means "planning".
Dynamic Programming
• "Method of solving complex problems by breaking them down into smaller sub-problems, solving each of those sub-problems just once, and storing their solutions."
• At first glance the approach looks like divide and conquer, but it is not the same.
Dynamic Programming
Difference between Dynamic programming and
Divide and Conquer approach.
Dynamic Programming
Is a Four-step methods
1. Characterize the structure of an optimal
solution.
2. Recursively define the value of an optimal
solution.
3. Compute the value of an optimal solution,
typically in a bottom-up fashion.
4. Construct an optimal solution from computed
information.
Dynamic Programming
Problems:
1. 0/1 Knapsack Problem
2. Floyd-Warshall Algorithm
3. Matrix Chain Multiplication
4. Longest Common Sub-sequence
Dynamic Programming
Problem 1: 0/1 Knapsack Problem

• As the name suggests, items are indivisible here.


• We can not take the fraction of any item.
• We have to either take an item completely or
leave it completely.
• It is solved using dynamic programming
approach.
Lets solve with four step methods:
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Step 1: Characterize the structure of an optimal solution
Let there be n objects with profit values v_1, v_2, v_3, … , v_n and weights w_1, w_2, w_3, … , w_n. The maximum capacity of the knapsack is M.
The 0/1 knapsack problem can be stated as follows:
Maximize  Σ (i = 1..n) v_i · x_i   (i.e. the total profit should be maximized)
Subject to  Σ (i = 1..n) w_i · x_i ≤ M   (i.e. the total weight must not exceed the capacity of the bag)
Where  x_i ∈ {0, 1}
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Step 2: Recursively define the value of an optimal solution.
Let c[i, j] be a two-dimensional array, where
i = 0, 1, 2, … , n (i.e. the number of objects considered)
j = 0, 1, 2, … , M (i.e. the capacity of the knapsack)
Then
c[i, j] = 0                                          if i = 0 or j = 0
c[i, j] = c[i − 1, j]                                if i > 0 and w_i > j
c[i, j] = max( c[i − 1, j], c[i − 1, j − w_i] + v_i ) if i > 0 and j ≥ w_i
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Step 3: Compute the value of an optimal solution for the 0/1 knapsack problem.
𝟎/𝟏 𝑲𝒏𝒂𝒑𝒔𝒂𝒄𝒌 (𝑣, 𝑤, 𝑛, 𝑀)
1.  For j = 0 to M
2.      c[0, j] = 0
3.      keep[0, j] = 0
4.  For i = 1 to n
5.      For j = 0 to M
6.          if (j ≥ w[i]) and (c[i − 1, j − w[i]] + v[i] > c[i − 1, j])
7.              then c[i, j] = c[i − 1, j − w[i]] + v[i]
8.                   keep[i, j] = 1
9.              else c[i, j] = c[i − 1, j]
10.                  keep[i, j] = 0
11. Return c[n, M]
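A runnable Python sketch of the table-filling routine above. It keeps the pseudocode's 1-based item numbering by leaving index 0 of v and w unused; the function name knapsack_01 and the returned tuple are illustrative choices, not from the slides.

def knapsack_01(v, w, n, M):
    """Fill the value table c and the choice table keep for the
    0/1 knapsack problem; v[i], w[i] describe item i (1-indexed),
    M is the knapsack capacity.  Returns (best value, c, keep)."""
    c = [[0] * (M + 1) for _ in range(n + 1)]     # row 0 and column 0 stay 0
    keep = [[0] * (M + 1) for _ in range(n + 1)]  # keep[i][j] = 1 if item i is taken
    for i in range(1, n + 1):
        for j in range(M + 1):
            if j >= w[i] and c[i - 1][j - w[i]] + v[i] > c[i - 1][j]:
                c[i][j] = c[i - 1][j - w[i]] + v[i]
                keep[i][j] = 1
            else:
                c[i][j] = c[i - 1][j]             # item i is not taken at capacity j
    return c[n][M], c, keep

The worked example later in this lecture can be checked against this sketch.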
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
(Implementation)
Consider-
Knapsack weight capacity = w
Number of items, each having some weight and value, = n
The 0/1 knapsack problem is solved using dynamic programming in the following steps-
Step-01:
• Draw a table, say ‘c’, with (n+1) rows (items 0 to n) and (w+1) columns (capacities 0 to w).
• Fill all the boxes of the 0th row and the 0th column with zeroes, as shown in the figure.
(Figure: an (n+1) × (w+1) table whose first row and first column contain only zeroes.)
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Step-02:
• Start filling the table row wise top to bottom from left to
right by using the following formula-
𝑐 (𝑖 , 𝑗) = max{ 𝑐 𝑖 − 1 , 𝑗 , 𝑐 𝑖 − 1 , 𝑗 – 𝑤𝑖 + 𝑣𝑖 }
• Here, 𝑐(𝑖 , 𝑗) = maximum value of the selected items if we
can take items 1 to i and have weight restrictions of j.
• This step leads to completely filling the table.
• Then, value of the last box represents the maximum
possible value that can be put into the knapsack.
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Step-03:
• To identify the items that must be put into the knapsack to obtain that maximum profit, start at the entry c(n, w) in the bottom-right corner of the table.
• Scan the rows from bottom to top: at row i, compare c(i, j) with the entry c(i − 1, j) immediately above it.
• If the two values differ, item i is included: mark its row label and reduce the remaining capacity, j = j − w_i, before moving up.
• If the two values are equal, item i is not included; move up without changing j.
• After all the rows are scanned, the marked labels represent the items that must be put into the knapsack (a code sketch of this trace-back is given below).
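A short sketch of this trace-back using only the value table c (no keep array), assuming the tables produced by the knapsack_01 sketch above: whenever c[i][j] differs from the entry immediately above it, item i was taken and the remaining capacity shrinks by w[i].

def traceback_items(c, w, n, M):
    """Recover the chosen items from the value table alone."""
    items, j = [], M
    for i in range(n, 0, -1):          # scan the rows from bottom to top
        if c[i][j] != c[i - 1][j]:     # value changed => item i is in the knapsack
            items.append(i)
            j -= w[i]                  # continue with the reduced capacity
    return sorted(items)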
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

Item Weight Profit


1 2 1
2 3 2
3 4 5
4 5 6
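Before filling the table by hand, the sketches given earlier can be used to check the final answer for this instance (data exactly as in the table above; index 0 of v and w is unused):

v = [0, 1, 2, 5, 6]
w = [0, 2, 3, 4, 5]
best, c, keep = knapsack_01(v, w, n=4, M=8)
print(best)                              # 8  (the value of c[4][8] in the completed table)
print(traceback_items(c, w, n=4, M=8))   # [2, 4]  -> items 2 and 4 are packed (weight 8, profit 8)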
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0
1 2 1
2 3 2
5 4 3
6 5 4
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0
2 3 2 0
5 4 3 0
6 5 4 0
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0
2 3 2 0
5 4 3 0
6 5 4 0
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0
2 3 2 0
5 4 3 0
6 5 4 0
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0 1
2 3 2 0
5 4 3 0
6 5 4 0

As maximum weight available is 2


Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 1 1 1 1 1 1 1
2 3 2 0
5 4 3 0
6 5 4 0

As maximum weight available is 2


Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 1 1 1 1 1 1 1
2 3 2 0 2 2
5 4 3 0
6 5 4 0

As maximum weight available is 2


Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 1 1 1 1 1 1 1
2 3 2 0 2 2 3
5 4 3 0
6 5 4 0

As maximum weight available is 2


Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 1 1 1 1 1 1 1
2 3 2 0 0 1 2 2 3 3 3 3
5 4 3 0
6 5 4 0

As two possible weight are available (i.e. 3 and 5)


Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 1 1 1 1 1 1 1
2 3 2 0 0 1 2 2 3 3 3 3
5 4 3 0 5
6 5 4 0

As maximum weight is available is : 4


Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 1 1 1 1 1 1 1
2 3 2 0 0 1 2 2 3 3 3 3
5 4 3 0 5 5 6
6 5 4 0

As maximum weight is available is : 6


Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 1 1 1 1 1 1 1
2 3 2 0 0 1 2 2 3 3 3 3
5 4 3 0 5 5 6 7
6 5 4 0

As maximum weight is available is : 7


Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 1 1 1 1 1 1 1
2 3 2 0 0 1 2 2 3 3 3 3
5 4 3 0 0 1 2 5 5 6 7 7
6 5 4 0

As maximum weight is available is : 6


Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 1 1 1 1 1 1 1
2 3 2 0 0 1 2 2 3 3 3 3
5 4 3 0 0 1 2 5 5 6 7 7
6 5 4 0 0 1 2 5 6 6 7 8

As maximum weight is available is : 5


Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 1 1 1 1 1 1 1
2 3 2 0 0 1 2 2 3 3 3 3
5 4 3 0 0 1 2 5 5 6 7 7
6 5 4 0 0 1 2 5 6 6 7 8

As maximum weight is available is : 8


Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.
0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 1 1 1 1 1 1 1
2 3 2 0 0 1 2 2 3 3 3 3
5 4 3 0 0 1 2 5 5 6 7 7
6 5 4 0 0 1 2 5 6 6 7 8

Apply the following formula for calculating the C Table


𝑐 (𝑖 , 𝑗) = max{ 𝑐 𝑖 − 1 , 𝑗 , 𝑐 𝑖 − 1 , 𝑗 – 𝑤𝑖 + 𝑣𝑖 }
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Step 4: Construct / print the optimal solution of 0/1 knapsack
problem.
𝟎/𝟏 𝑲𝒏𝒂𝒑𝒔𝒂𝒄𝒌 𝒔𝒐𝒍𝒖𝒕𝒊𝒐𝒏(𝑛, 𝑀)
1.  k = M
2.  For i = n down to 1
3.      if (keep[i, k] == 1)
4.          then print i
5.               k = k − w[i]
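The same reconstruction, driven by the keep table exactly as in the pseudocode above (again assuming the knapsack_01 sketch shown earlier in this lecture):

def knapsack_solution(keep, w, n, M):
    """Return the items that achieve the optimal value, using keep[i][k]."""
    chosen, k = [], M
    for i in range(n, 0, -1):
        if keep[i][k] == 1:
            chosen.append(i)
            k -= w[i]
    return sorted(chosen)

print(knapsack_solution(keep, w, n=4, M=8))   # [2, 4] for Example 1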
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 1 1 1 1 1 1 1
2 3 2 0 0 0 1 1 1 1 1 1
5 4 3 0 0 0 0 1 1 1 1 1
6 5 4 0 0 0 0 0 1 0 0 1

Keep array
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
Example 1: For the given set of items and knapsack capacity
of 8 kg, find the optimal solution for the 0/1 knapsack
problem making use of dynamic programming approach.

0 1 2 3 4 5 6 7 8
𝑃𝑖 𝑤𝑖 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 1 1 1 1 1 1 1
2 3 2 0 0 0 1 1 1 1 1 1
5 4 3 0 0 0 0 1 1 1 1 1
6 5 4 0 0 0 0 0 1 0 0 1

Keep array
Dynamic Programming
Problem 1: 0/1 Knapsack Problem
(Complexity)

Time complexity of 0/1 Knapsack problem is 𝑂(𝑛𝑀) .


where, n is the number of items and M is the capacity of
knapsack.
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
• The all-pairs shortest path problem asks, for every pair of
vertices (nodes) in a graph, for a path between them whose
sum of constituent edge weights is minimized.
• The Floyd-Warshall Algorithm solves it and is an example of the
dynamic programming approach.
• The advantages of the Floyd-Warshall Algorithm are:
• Easy to implement and extremely simple.
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
(Requirements)
• Graph must be weighted directed graph.
• Edge weights can be positive or negative.
• There should be no negative cycle.
• (A negative cycle is a cycle whose edges sum give
a negative value)
• This algorithm is best suited for dense graphs,
because its complexity depends only on the number of
vertices in the given graph (not on the number of edges).
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Step 1: Characterize the structure of an optimal solution
• For a path p = ⟨v_1, v_2, v_3, … , v_l⟩, an intermediate vertex is any vertex of p other than v_1 or v_l.
• Let d_ij^(k) = the shortest-path weight of any path i ↝ j whose intermediate vertices all lie in {1, 2, 3, … , k}.
• Consider a shortest path i ↝ j with all intermediate vertices in {1, 2, 3, … , k}:
  • If k is not an intermediate vertex, then all intermediate vertices of p are in {1, 2, 3, … , k − 1}.
  • If k is an intermediate vertex, the path decomposes as i —p1→ k —p2→ j, where p1 and p2 each have all intermediate vertices in {1, 2, 3, … , k − 1}.
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Step 2: Recursively define the value of an optimal solution.

d_ij^(k) = w_ij                                             if k = 0
d_ij^(k) = min( d_ij^(k−1), d_ik^(k−1) + d_kj^(k−1) )       if k ≥ 1

Because for any path all intermediate vertices are in the set {1, 2, 3, … , n}, the matrix D^(n) = ( d_ij^(n) ) gives the final answer:
d_ij^(n) = δ(i, j) for all i, j ∈ V
(Decomposition used: i —p1→ k —p2→ j, with all intermediate vertices of p1 and p2 in {1, 2, 3, … , k − 1}.)
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Step 3: Compute the value of an optimal solution (the all-pairs shortest-path weights).
𝑭𝒍𝒐𝒚𝒅_𝒘𝒂𝒓𝒔𝒉𝒂𝒍𝒍(𝑤)
1.  n = w.rows
2.  D^(0) = w
3.  For k = 1 to n
4.      let D^(k) = ( d_ij^(k) ) be a new n x n matrix
5.      For i = 1 to n
6.          For j = 1 to n
7.              d_ij^(k) = min( d_ij^(k−1), d_ik^(k−1) + d_kj^(k−1) )
8.  Return D^(n)
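A direct Python translation of the routine above (distances only). It is a sketch under the assumption that the weight matrix W is a 0-based n × n list of lists with 0 on the diagonal and math.inf where an edge is missing.

import math

def floyd_warshall(W):
    """All-pairs shortest-path weights; W[i][i] == 0 and math.inf marks a missing edge."""
    n = len(W)
    D = [row[:] for row in W]                       # D^(0) = W
    for k in range(n):                              # allow vertex k as an intermediate vertex
        D = [[min(D[i][j], D[i][k] + D[k][j])       # d_ij^(k)
              for j in range(n)]
             for i in range(n)]
    return D                                        # D^(n)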
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Step 4: Construct / print the optimal solution (the actual shortest paths).
• Need to calculate the predecessor matrix Π alongside the distance matrices D^(k).
• Compute Π at the same time as D.
• Recursively calculate Π_ij^(k):

Π_ij^(0) = NIL   if i = j or w_ij = ∞
Π_ij^(0) = i     if i ≠ j and w_ij < ∞

Π_ij^(k) = Π_ij^(k−1)   if d_ij^(k−1) ≤ d_ik^(k−1) + d_kj^(k−1)
Π_ij^(k) = Π_kj^(k−1)   if d_ij^(k−1) > d_ik^(k−1) + d_kj^(k−1)
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Step 4: Construct / print the optimal solution (computing D and Π together).
𝑭𝒍𝒐𝒚𝒅_𝒘𝒂𝒓𝒔𝒉𝒂𝒍𝒍(𝑤)
1.  n = w.rows
2.  D^(0) = w
3.  Init_predecessors(Π^(0))
4.  For k = 1 to n
5.      For i = 1 to n
6.          For j = 1 to n
7.              if d_ij^(k−1) ≤ d_ik^(k−1) + d_kj^(k−1)
8.                  then d_ij^(k) = d_ij^(k−1)  and  Π_ij^(k) = Π_ij^(k−1)
9.              else d_ij^(k) = d_ik^(k−1) + d_kj^(k−1)  and  Π_ij^(k) = Π_kj^(k−1)
10. Return D^(n) and Π^(n)
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Step 4 (continued): Print the shortest path between a pair of vertices.
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝒊, 𝒋)
1.  If (i == j)
2.      then print i
3.  else if Π_ij == NIL
4.      then print "No path from i to j"
5.  else
6.      Print_all_pairs_shortest_path(Π, i, Π_ij)
7.      print j
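A sketch that combines the predecessor bookkeeping of the last two slides with the recursive printing routine, under the same assumptions as the floyd_warshall sketch above (0-based indices, math.inf for missing edges); Python's None plays the role of NIL. The matrices are updated in place, which is equivalent to keeping separate D^(k) and Π^(k) copies.

import math

def floyd_warshall_paths(W):
    """Return (D, P): shortest-path weights and the predecessor matrix."""
    n = len(W)
    D = [row[:] for row in W]
    # P^(0): predecessor of j on a one-edge path from i, None (NIL) otherwise.
    P = [[i if i != j and W[i][j] < math.inf else None for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = P[k][j]          # the path now goes through k
    return D, P

def print_path(P, i, j, out):
    """Append the vertices of one shortest path i -> j to the list out."""
    if i == j:
        out.append(i)
    elif P[i][j] is None:
        out.append("no path from %d to %d" % (i, j))
    else:
        print_path(P, i, P[i][j], out)
        out.append(j)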
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-

(Figure: a directed weighted graph on 5 vertices with edges
1→2 (weight 3), 1→3 (8), 1→5 (−4), 2→4 (1), 2→5 (7),
3→2 (4), 4→1 (2), 4→3 (−5), 5→4 (6).)

Using the Floyd-Warshall Algorithm, find the shortest path distance
between every pair of vertices.
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
Step-01:
Remove all the self loops and parallel edges (keeping the lowest
weight edge) from the graph.
In the given graph, there are neither self edges nor parallel edges.
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
Step-02:
• Write the initial distance matrix.
• It represents the distance between every pair of vertices in the
form of given weights.
• For diagonal elements (representing self-loops), distance value =
0.
• For vertices having a direct edge between them, distance value =
weight of that edge.
• For vertices having no direct edge between them, distance value
= ∞.
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
Step-02:
• Initial distance matrix for the given graph is-

          1    2    3    4    5
     1    0    3    8    ∞   −4
     2    ∞    0    ∞    1    7
D⁰ = 3    ∞    4    0    ∞    ∞
     4    2    ∞   −5    0    ∞
     5    ∞    ∞    ∞    6    0
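Using the sketches above on this initial matrix (indices shifted to 0-based, so vertex 1 of the slides becomes index 0):

import math

INF = math.inf
W = [[0,   3,   8,   INF, -4],
     [INF, 0,   INF, 1,    7],
     [INF, 4,   0,   INF, INF],
     [2,   INF, -5,  0,   INF],
     [INF, INF, INF, 6,    0]]

D, P = floyd_warshall_paths(W)
print(D[0])                    # [0, 1, -3, 2, -4]  -> row 1 of the final matrix D^(5) below

path = []
print_path(P, 0, 1, path)
print([v + 1 for v in path])   # [1, 5, 4, 3, 2]: one shortest path from 1 to 2 (total weight 1)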
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution: 3 2
4
8 3
1
2
1
-4 -5
7
1 2 3 4 5 5
6
4 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷0 = Π0 =
3 ∞ 4 0 ∞ ∞ 3 N 3 N N N
4 2 ∞ -5 0 ∞ 4 4 N 4 N N
5 ∞ ∞ ∞ 6 0 5 N N N 5 0
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
Step-03:
• Using Floyd-Warshall Algorithm generate the value of 𝐷1 , 𝐷2 𝐷3 ,
and 𝐷4 martixces.
• First Generate 𝐷1 from 𝐷0 and Π1 from Π 0
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷0 =
3 ∞ 4 0 ∞ ∞ Π0 = 3 N 3 N N N
3 2
4
4 2 ∞ -5 0 ∞ 4 4 N 4 N N
8
5 ∞ ∞ ∞ 6 0 1 3 5 N N N 5 0
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
4 5
1 0 3 8 ∞ -4 6 1 N 1 1 N 1
2 ∞ 2 N
𝐷1 =
3 ∞ Π1 = 3 N
4 2 4 4
5 ∞ 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷0 =
3 ∞ 4 0 ∞ ∞ Π0 = 3 N 3 N N N
3 2
4
4 2 ∞ -5 0 ∞ 4 4 N 4 N N
8
5 ∞ ∞ ∞ 6 0 1 3 5 N N N 5 0
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8 ∞ -4 6 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷1 =
3 ∞ 4 0 ∞ ∞ Π1 = 3 N 3 N N N
4 2 4 4
5 ∞ ∞ ∞ 6 0 5 N N N 5 0
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷0 =
3 ∞ 4 0 ∞ ∞ Π0 = 3 N 3 N N N
3 2
4
4 2 ∞ -5 0 ∞ 4 4 N 4 N N
8
5 ∞ ∞ ∞ 6 0 1 3 5 N N N 5 0
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8 ∞ -4 6 1 N 1 1N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷1 =
3 ∞ 4 0 ∞ ∞ Π1 = 3 N 3 N N N
4 2 0 4 4 N
5 ∞ ∞ ∞ 6 0 5 N N N 5 0
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷0 =
3 ∞ 4 0 ∞ ∞ Π0 = 3 N 3 N N N
3 2
4
4 2 ∞ -5 0 ∞ 4 4 N 4 N N
8
5 ∞ ∞ ∞ 6 0 1 3 5 N N N 5 0
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8∞ -4 6 1 N 1 1N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷1 =
3 ∞ 4 0 ∞ ∞ Π1 = 3 N 3 N N N
4 2 5 0 4 4 1 N
5 ∞ ∞ ∞ 6 0 5 N N N 5 0
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷0 =
3 ∞ 4 0 ∞ ∞ Π0 = 3 N 3 N N N
3 2
4
4 2 ∞ -5 0 ∞ 4 4 N 4 N N
8
5 ∞ ∞ ∞ 6 0 1 3 5 N N N 5 0
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8 ∞ -4 6 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷1 =
3 ∞ 4 0 ∞ ∞ Π1 = 3 N 3 N N N
4 2 5 -5 0 4 4 1 4 N
5 ∞ ∞ ∞ 6 0 5 N N N 5 0
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷0 =
3 ∞ 4 0 ∞ ∞ Π0 = 3 N 3 N N N
3 2
4
4 2 ∞ -5 0 ∞ 4 4 N 4 N N
8
5 ∞ ∞ ∞ 6 0 1 3 5 N N N 5 0
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8 ∞ -4 6 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷1 =
3 ∞ 4 0 ∞ ∞ Π1 = 3 N 3 N N N
4 2 5 -5 0 -2 4 4 1 4 N 1
5 ∞ ∞ ∞ 6 0 5 N N N 5 0
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷1 =
3 ∞ 4 0 ∞ ∞ 2
Π1 = 3 N 3 N N N
3 4
4 2 5 -5 0 -2 4 4 1 4 N 1
8 3
1
5 ∞ ∞ ∞ 6 0 2
5 N N N 5 0
1
-4 -5
1
2 3 4 5 7 1 2 3 4 5
5 4
1 3 6 1 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷2 =
3 4 Π2 = 3 3
4 5 4 1
5 ∞ 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷1 =
3 ∞ 4 0 ∞ ∞ 2
Π1 = 3 N 3 N N N
3 4
4 2 5 -5 0 -2 4 4 1 4 N 1
8 3
1
5 ∞ ∞ ∞ 6 0 2
5 N N N 5 0
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 6 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷2 =
3 4 0 Π2 = 3 3 N
4 5 0 4 1 N
5 ∞ 0 5 N N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷1 =
3 ∞ 4 0 ∞ ∞ 2
Π1 = 3 N 3 N N N
3 4
4 2 5 -5 0 -2 4 4 1 4 N 1
8 3
1
5 ∞ ∞ ∞ 6 0 2
5 N N N 5 0
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8 6 1 N 1 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷2 =
3 ∞ 4 0 Π2 = 3 N 3 N
4 2 5 -5 0 4 4 1 4 N
5 ∞ ∞ ∞ 6 0 5 N N N 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷1 =
3 ∞ 4 0 ∞ ∞ 2
Π1 = 3 N 3 N N N
3 4
4 2 5 -5 0 -2 4 4 1 4 N 1
8 3
1
5 ∞ ∞ ∞ 6 0 2
5 N N N 5 0
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8 4 6 1 N 1 1 2
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷2 =
3 ∞ 4 0 Π2 = 3 N 3 N
4 2 5 -5 0 4 4 1 4 N
5 ∞ ∞ ∞ 6 0 5 N N N 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷1 =
3 ∞ 4 0 ∞ ∞ 2
Π1 = 3 N 3 N N N
3 4
4 2 5 -5 0 -2 4 4 1 4 N 1
8 3
1
5 ∞ ∞ ∞ 6 0 2
5 N N N 5 0
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8 4 -4 6 1 N 1 1 2 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷2 =
3 ∞ 4 0 Π2 = 3 N 3 N
4 2 5 -5 0 4 4 1 4 N
5 ∞ ∞ ∞ 6 0 5 N N N 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷1 =
3 ∞ 4 0 ∞ ∞ 2
Π1 = 3 N 3 N N N
3 4
4 2 5 -5 0 -2 4 4 1 4 N 1
8 3
1
5 ∞ ∞ ∞ 6 0 2
5 N N N 5 0
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8 4 -4 6 1 N 1 1 2 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷2 =
3 ∞ 4 0 5 Π2 = 3 N 3 N 2
4 2 5 -5 0 4 4 1 4 N
5 ∞ ∞ ∞ 6 0 5 N N N 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷1 =
3 ∞ 4 0 ∞ ∞ 2
Π1 = 3 N 3 N N N
3 4
4 2 5 -5 0 -2 4 4 1 4 N 1
8 3
1
5 ∞ ∞ ∞ 6 0 2
5 N N N 5 0
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8 4 -4 6 1 N 1 1 2 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷2 =
3 ∞ 4 0 5 11 Π2 = 3 N 3 N 2 2
4 2 5 -5 0 4 4 1 4 N
5 ∞ ∞ ∞ 6 0 5 N N N 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 8 ∞ -4 1 N 1 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷1 =
3 ∞ 4 0 ∞ ∞ 2
Π1 = 3 N 3 N N N
3 4
4 2 5 -5 0 -2 4 4 1 4 N 1
8 3
1
5 ∞ ∞ ∞ 6 0 2
5 N N N 5 0
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8 4 -4 6 1 N 1 1 2 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷2 =
3 ∞ 4 0 5 11 Π2 = 3 N 3 N 2 2
4 2 5 -5 0 -2 4 4 1 4 N 1
5 ∞ ∞ ∞ 6 0 5 N N N 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
2 ∞ 0 ∞ 1 7
𝐷2 = 3 N 3 N 2 2
3 ∞ 4 0 5 11
2
Π2 =
3 4 4 4 1 4 N 1
4 2 5 -5 0 -2
8 3 5 N N N 5 N
5 ∞ ∞ ∞ 6 0 1
2
1
-4 -5
1 2
3 4 5 7 1 2 3 4 5
5 4
1 8 6 1 1
2 ∞ 2 N
𝐷3 =
3 ∞ 4 0 5 11 Π3 = 3 N 3 N 2 2
4 -5 4 4
5 ∞ 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
2 ∞ 0 ∞ 1 7
𝐷2 = 3 N 3 N 2 2
3 ∞ 4 0 5 11
2
Π2 =
3 4 4 4 1 4 N 1
4 2 5 -5 0 -2
8 3 5 N N N 5 N
5 ∞ ∞ ∞ 6 0 1
2
1
-4 -5
1 2
3 4 5 7 1 2 3 4 5
5 4
1 0 8 6 1 N 1
2 0 ∞ 2 N N
𝐷3 =
3 ∞ 4 0 5 11 Π3 = 3 N 3 N 2 2
4 -5 0 4 4 N
5 ∞ 0 5 N N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
2 ∞ 0 ∞ 1 7
𝐷2 = 3 N 3 N 2 2
3 ∞ 4 0 5 11
2
Π2 =
3 4 4 4 1 4 N 1
4 2 5 -5 0 -2
8 3 5 N N N 5 N
5 ∞ ∞ ∞ 6 0 1
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 8 6 1 N 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷3 =
3 ∞ 4 0 5 11 Π3 = 3 N 3 N 2 2
4 2 -5 0 4 4 4 N
5 ∞ ∞ ∞ 6 0 5 N N N 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
2 ∞ 0 ∞ 1 7
𝐷2 = 3 N 3 N 2 2
3 ∞ 4 0 5 11 2
Π2 =
3 4 4 4 1 4 N 1
4 2 5 -5 0 -2
1
8 3 5 N N N 5 N
5 ∞ ∞ ∞ 6 0
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
4 5
1 0 3 8 6 1 N 1 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷3 =
3 ∞ 4 0 5 11 Π3 = 3 N 3 N 2 2
4 2 -5 0 4 4 4 N
5 ∞ ∞ ∞ 6 0 5 N N N 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
2 ∞ 0 ∞ 1 7
𝐷2 = 3 N 3 N 2 2
3 ∞ 4 0 5 11
2
Π2 =
3 4 4 4 1 4 N 1
4 2 5 -5 0 -2
8 3 5 N N N 5 N
5 ∞ ∞ ∞ 6 0 1
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8 4 6 1 N 1 1 2
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷3 =
3 ∞ 4 0 5 11 Π3 = 3 N 3 N 2 2
4 2 -5 0 4 4 4 N
5 ∞ ∞ ∞ 6 0 5 N N N 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
2 ∞ 0 ∞ 1 7
𝐷2 = 3 N 3 N 2 2
3 ∞ 4 0 5 11
2
Π2 =
3 4 4 4 1 4 N 1
4 2 5 -5 0 -2
8 3 5 N N N 5 N
5 ∞ ∞ ∞ 6 0 1
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8 4 -4 6 1 N 1 1 2 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷3 =
3 ∞ 4 0 5 11 Π3 = 3 N 3 N 2 2
4 2 -5 0 4 4 4 N
5 ∞ ∞ ∞ 6 0 5 N N N 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
2 ∞ 0 ∞ 1 7
𝐷2 = 3 N 3 N 2 2
3 ∞ 4 0 5 11
2
Π2 =
3 4 4 4 1 4 N 1
4 2 5 -5 0 -2
8 3 5 N N N 5 N
5 ∞ ∞ ∞ 6 0 1
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8 4 -4 6 1 N 1 1 2 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷3 =
3 ∞ 4 0 5 11 Π3 = 3 N 3 N 2 2
4 2 -1 -5 0 4 4 3 4 N
5 ∞ ∞ ∞ 6 0 5 N N N 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
2 ∞ 0 ∞ 1 7
𝐷2 = 3 N 3 N 2 2
3 ∞ 4 0 5 11
2
Π2 =
3 4 4 4 1 4 N 1
4 2 5 -5 0 -2
8 3 5 N N N 5 N
5 ∞ ∞ ∞ 6 0 1
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 8 4 -4 6 1 N 1 1 2 1
2 ∞ 0 ∞ 1 7 2 N N N 2 2
𝐷3 =
3 ∞ 4 0 5 11 Π3 = 3 N 3 N 2 2
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 ∞ ∞ ∞ 6 0 5 N N N 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
𝐷3 = 2 ∞ 0 ∞ 1 7
3 N 3 N 2 2
3 ∞ 4 0 5 11 Π3 =
3 2
4 4 4 3 4 N 1
4 2 -1 -5 0 -2
5 N N N 5 N
8 3
1
5 ∞ ∞ ∞ 6 0 2
1
-4 -5
1 2 3
4 5 7 1 2 3 4 5
5 4
1 4 6 1 2
2 1 2 2
𝐷4 =
3 5 Π4 = 3 2
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 6 5 5
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
𝐷3 = 2 ∞ 0 ∞ 1 7
3 N 3 N 2 2
3 ∞ 4 0 5 11 Π3 =
3 2
4 4 4 3 4 N 1
4 2 -1 -5 0 -2
5 N N N 5 N
8 3
1
5 ∞ ∞ ∞ 6 0 2
1
-4 -5
1 2 3
4 5 7 1 2 3 4 5
5 4
1 0 4 6 1 N 2
2 0 1 2 N 2
𝐷4 =
3 0 5 Π4 = 3 N 2
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 6 0 5 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
𝐷3 = 2 ∞ 0 ∞ 1 7
3 N 3 N 2 2
3 ∞ 4 0 5 11 Π3 =
3 2
4 4 4 3 4 N 1
4 2 -1 -5 0 -2
5 N N N 5 N
8 3
1
5 ∞ ∞ ∞ 6 0 2
1
-4 -5
1 2 3
4 5 7 1 2 3 4 5
5 4
1 0 3 4 6 1 N 1 2
2 0 1 2 N 2
𝐷4 =
3 0 5 Π4 = 3 N 2
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 6 0 5 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
𝐷3 = 2 ∞ 0 ∞ 1 7
3 N 3 N 2 2
3 ∞ 4 0 5 11 Π3 =
3 2
4 4 4 3 4 N 1
4 2 -1 -5 0 -2
5 N N N 5 N
8 3
1
5 ∞ ∞ ∞ 6 0 2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 -1 4 6 1 N 1 4 2
2 0 1 2 N 2
𝐷4 =
3 0 5 Π4 = 3 N 2
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 6 0 5 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
𝐷3 = 2 ∞ 0 ∞ 1 7
3 N 3 N 2 2
3 ∞ 4 0 5 11 Π3 =
3 2
4 4 4 3 4 N 1
4 2 -1 -5 0 -2
5 N N N 5 N
8 3
1
5 ∞ ∞ ∞ 6 0 2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 -1 4 -4 6 1 N 1 4 2 1
2 0 1 2 N 2
𝐷4 =
3 0 5 Π4 = 3 N 2
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 6 0 5 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
𝐷3 = 2 ∞ 0 ∞ 1 7
3 N 3 N 2 2
3 ∞ 4 0 5 11 Π3 =
3 2
4 4 4 3 4 N 1
4 2 -1 -5 0 -2
5 N N N 5 N
8 3
1
5 ∞ ∞ ∞ 6 0 2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 -1 4 -4 6 1 N 1 4 2 1
2 3 0 1 2 4 N 2
𝐷4 =
3 0 5 Π4 = 3 N 2
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 6 0 5 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
𝐷3 = 2 ∞ 0 ∞ 1 7
3 N 3 N 2 2
3 ∞ 4 0 5 11 Π3 =
3 2
4 4 4 3 4 N 1
4 2 -1 -5 0 -2
5 N N N 5 N
8 3
1
5 ∞ ∞ ∞ 6 0 2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 -1 4 -4 6 1 N 1 4 2 1
2 3 0 -4 1 2 4 N 4 2
𝐷4 =
3 0 5 Π4 = 3 N 2
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 6 0 5 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
𝐷3 = 2 ∞ 0 ∞ 1 7
3 N 3 N 2 2
3 ∞ 4 0 5 11 Π3 =
3 2
4 4 4 3 4 N 1
4 2 -1 -5 0 -2
5 N N N 5 N
8 3
1
5 ∞ ∞ ∞ 6 0 2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 -1 4 -4 6 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 1
𝐷4 =
3 0 5 Π4 = 3 N 2
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 6 0 5 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
𝐷3 = 2 ∞ 0 ∞ 1 7
3 N 3 N 2 2
3 ∞ 4 0 5 11 Π3 =
3 2
4 4 4 3 4 N 1
4 2 -1 -5 0 -2
5 N N N 5 N
8 3
1
5 ∞ ∞ ∞ 6 0 2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 -1 4 -4 6 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 4
𝐷4 =
3 7 0 5 Π4 = 3 4 N 2
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 6 0 5 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
𝐷3 = 2 ∞ 0 ∞ 1 7
3 N 3 N 2 2
3 ∞ 4 0 5 11 Π3 =
3 2
4 4 4 3 4 N 1
4 2 -1 -5 0 -2
5 N N N 5 N
8 3
1
5 ∞ ∞ ∞ 6 0 2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 -1 4 -4 6 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 4
𝐷4 =
3 7 4 0 5 Π4 = 3 4 3 N 2
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 6 0 5 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
𝐷3 = 2 ∞ 0 ∞ 1 7
3 N 3 N 2 2
3 ∞ 4 0 5 11 Π3 =
3 2
4 4 4 3 4 N 1
4 2 -1 -5 0 -2
5 N N N 5 N
8 3
1
5 ∞ ∞ ∞ 6 0 2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 -1 4 -4 6 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 4
𝐷4 =
3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 6 0 5 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
𝐷3 = 2 ∞ 0 ∞ 1 7
3 N 3 N 2 2
3 ∞ 4 0 5 11 Π3 =
3 2
4 4 4 3 4 N 1
4 2 -1 -5 0 -2
5 N N N 5 N
8 3
1
5 ∞ ∞ ∞ 6 0 2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 -1 4 -4 6 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 4
𝐷4 =
3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 8 6 0 5 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
𝐷3 = 2 ∞ 0 ∞ 1 7
3 N 3 N 2 2
3 ∞ 4 0 5 11 Π3 =
3 2
4 4 4 3 4 N 1
4 2 -1 -5 0 -2
5 N N N 5 N
8 3
1
5 ∞ ∞ ∞ 6 0 2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 -1 4 -4 6 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 4
𝐷4 =
3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 8 5 6 0 5 4 3 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5
1 2 3 4 5
1 N 1 1 2 1
1 0 3 8 4 -4
2 N N N 2 2
𝐷3 = 2 ∞ 0 ∞ 1 7
3 N 3 N 2 2
3 ∞ 4 0 5 11 Π3 =
3 2
4 4 4 3 4 N 1
4 2 -1 -5 0 -2
5 N N N 5 N
8 3
1
5 ∞ ∞ ∞ 6 0 2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 3 -1 4 -4 6 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 1
𝐷4 =
3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 -1 4 -4 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 4
4
𝐷 = 3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
3 2
4 2 -1 -5 0 -2 4 4 4 3 4 N 1
5 8 5 1 6 0 1
8 3 5 4 3 4 5 N
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 -4 6 1 1
2 -1 2 4
𝐷5 =
3 3 Π5 = 3 1
4 -2 4 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 -1 4 -4 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 4
4
𝐷 = 3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
3 2
4 2 -1 -5 0 -2 4 4 4 3 4 N 1
5 8 5 1 6 0 1
8 3 5 4 3 4 5 N
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 -4 6 1 N 1
2 0 -1 2 N 4
𝐷5 =
3 0 3 Π5 = 3 N 1
4 0 -2 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 -1 4 -4 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 1
4
𝐷 = 3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
3 2
4 2 -1 -5 0 -2 4 4 4 3 4 N 1
5 8 5 1 6 0 1
8 3 5 4 3 4 5 N
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 1 -4 6 1 N 3 1
2 0 -1 2 N 1
𝐷5 =
3 0 3 Π5 = 3 N 1
4 0 -2 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 -1 4 -4 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 1
4
𝐷 = 3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
3 2
4 2 -1 -5 0 -2 4 4 4 3 4 N 1
5 8 5 1 6 0 1
8 3 5 4 3 4 5 N
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 1 -3 -4 6 1 N 3 4 1
2 0 -1 2 N 1
𝐷5 =
3 0 3 Π5 = 3 N 1
4 0 -2 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 -1 4 -4 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 1
4
𝐷 = 3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
3 2
4 2 -1 -5 0 -2 4 4 4 3 4 N 1
5 8 5 1 6 0 1
8 3 5 4 3 4 5 N
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 1 -3 2 -4 6 1 N 3 4 5 1
2 0 -1 2 N 1
𝐷5 =
3 0 3 Π5 = 3 N 1
4 0 -2 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 -1 4 -4 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 1
4
𝐷 = 3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
3 2
4 2 -1 -5 0 -2 4 4 4 3 4 N 1
5 8 5 1 6 0 1
8 3 5 4 3 4 5 N
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 1 -3 2 -4 6 1 N 3 4 5 1
2 3 0 -1 2 4 N 1
𝐷5 =
3 0 3 Π5 = 3 N 1
4 0 -2 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 -1 4 -4 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 1
4
𝐷 = 3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
3 2
4 2 -1 -5 0 -2 4 4 4 3 4 N 1
5 8 5 1 6 0 1
8 3 5 4 3 4 5 N
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 1 -3 2 -4 6 1 N 3 4 5 1
2 3 0 -4 -1 2 4 N 4 1
𝐷5 =
3 0 3 Π5 = 3 N 1
4 0 -2 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 -1 4 -4 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 1
4
𝐷 = 3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
3 2
4 2 -1 -5 0 -2 4 4 4 3 4 N 1
5 8 5 1 6 0 1
8 3 5 4 3 4 5 N
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 1 -3 2 -4 6 1 N 3 4 5 1
2 3 0 -4 1 -1 2 4 N 4 2 1
𝐷5 =
3 0 3 Π5 = 3 N 1
4 0 -2 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 -1 4 -4 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 1
4
𝐷 = 3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
3 2
4 2 -1 -5 0 -2 4 4 4 3 4 N 1
5 8 5 1 6 0 1
8 3 5 4 3 4 5 N
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 1 -3 2 -4 6 1 N 3 4 5 1
2 3 0 -4 1 -1 2 4 N 4 2 1
𝐷5 =
3 7 0 3 Π5 = 3 4 N 1
4 0 -2 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 -1 4 -4 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 1
4
𝐷 = 3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
3 2
4 2 -1 -5 0 -2 4 4 4 3 4 N 1
5 8 5 1 6 0 1
8 3 5 4 3 4 5 N
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 1 -3 2 -4 6 1 N 3 4 5 1
2 3 0 -4 1 -1 2 4 N 4 2 1
𝐷5 =
3 7 4 0 3 Π5 = 3 4 3 N 1
4 0 -2 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 -1 4 -4 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 1
4
𝐷 = 3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
3 2
4 2 -1 -5 0 -2 4 4 4 3 4 N 1
5 8 5 1 6 0 1
8 3 5 4 3 4 5 N
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 1 -3 2 -4 6 1 N 3 4 5 1
2 3 0 -4 1 -1 2 4 N 4 2 1
𝐷5 =
3 7 4 0 5 3 Π5 = 3 4 3 N 2 1
4 0 -2 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 -1 4 -4 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 1
4
𝐷 = 3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
3 2
4 2 -1 -5 0 -2 4 4 4 3 4 N 1
5 8 5 1 6 0 1
8 3 5 4 3 4 5 N
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 1 -3 2 -4 6 1 N 3 4 5 1
2 3 0 -4 1 -1 2 4 N 4 2 1
𝐷5 =
3 7 4 0 5 3 Π5 = 3 4 3 N 2 1
4 2 0 -2 4 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 -1 4 -4 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 1
4
𝐷 = 3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
3 2
4 2 -1 -5 0 -2 4 4 4 3 4 N 1
5 8 5 1 6 0 1
8 3 5 4 3 4 5 N
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 1 -3 2 -4 6 1 N 3 4 5 1
2 3 0 -4 1 -1 2 4 N 4 2 1
𝐷5 =
3 7 4 0 5 3 Π5 = 3 4 3 N 2 1
4 2 -1 0 -2 4 4 3 N 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
1 2 3 4 5 1 2 3 4 5
1 0 3 -1 4 -4 1 N 1 4 2 1
2 3 0 -4 1 -1 2 4 N 4 2 1
4
𝐷 = 3 7 4 0 5 3 Π4 = 3 4 3 N 2 1
3 2
4 2 -1 -5 0 -2 4 4 4 3 4 N 1
5 8 5 1 6 0 1
8 3 5 4 3 4 5 N
2
1
-4 -5
1 2 3 4 5 7 1 2 3 4 5
5 4
1 0 1 -3 2 -4 6 1 N 3 4 5 1
2 3 0 -4 1 -1 2 4 N 4 2 1
𝐷5 =
3 7 4 0 5 3 Π5 = 3 4 3 N 2 1
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
3 2
4
1 2 3 4 5 8
1 2 3 4 5
1 3
1 0 1 -3 2 -4 2 1 N 3 4 5 1
1
2 3 0 -4 1 -1 -4 -5 2 4 N 4 2 1
𝐷5 = 7
3 7 4 0 5 3 5 4 Π5 = 3 4 3 N 2 1
6
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
For printing Shortest path from 1 to 2 use
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝒊, 𝒋)
i.e. 𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟐)
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
3 2
4
1 2 3 4 5 8
1 2 3 4 5
1 3
1 0 1 -3 2 -4 2 1 N 3 4 5 1
1
2 3 0 -4 1 -1 -4 -5 2 4 N 4 2 1
𝐷5 = 7
3 7 4 0 5 3 5 4 Π5 = 3 4 3 N 2 1
6
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
For printing Shortest path from 1 to 2 use
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝒊, 𝒋)
i.e 𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟐)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟑)
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
3 2
4
1 2 3 4 5 8
1 2 3 4 5
1 3
1 0 1 -3 2 -4 2 1 N 3 4 5 1
1
2 3 0 -4 1 -1 -4 -5 2 4 N 4 2 1
𝐷5 = 7
3 7 4 0 5 3 5 4 Π5 = 3 4 3 N 2 1
6
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
For printing Shortest path from 1 to 2 use
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝒊, 𝒋)
i.e 𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟐)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟑)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟒)
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
3 2
4
1 2 3 4 5 8
1 2 3 4 5
1 3
1 0 1 -3 2 -4 2 1 N 3 4 5 1
1
2 3 0 -4 1 -1 -4 -5 2 4 N 4 2 1
𝐷5 = 7
3 7 4 0 5 3 5 4 Π5 = 3 4 3 N 2 1
6
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
For printing Shortest path from 1 to 2 use
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝒊, 𝒋)
i.e 𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟐)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟑)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟒)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟓)
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
3 2
4
1 2 3 4 5 8
1 2 3 4 5
1 3
1 0 1 -3 2 -4 2 1 N 3 4 5 1
1
2 3 0 -4 1 -1 -4 -5 2 4 N 4 2 1
𝐷5 = 7
3 7 4 0 5 3 5 4 Π5 = 3 4 3 N 2 1
6
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
For printing Shortest path from 1 to 2 use
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝒊, 𝒋)
i.e 𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟐)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟑)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟒)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟓)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟏)
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
3 2
4
1 2 3 4 5 8
1 2 3 4 5
1 3
1 0 1 -3 2 -4 2 1 N 3 4 5 1
1
2 3 0 -4 1 -1 -4 -5 2 4 N 4 2 1
𝐷5 = 7
3 7 4 0 5 3 5 4 Π5 = 3 4 3 N 2 1
6
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
For printing Shortest path from 1 to 2 use
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝒊, 𝒋)
i.e 𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟐)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟑)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟒)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟓)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟏)⟹ 𝟏
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
3 2
4
1 2 3 4 5 8
1 2 3 4 5
1 3
1 0 1 -3 2 -4 2 1 N 3 4 5 1
1
2 3 0 -4 1 -1 -4 -5 2 4 N 4 2 1
𝐷5 = 7
3 7 4 0 5 3 5 4 Π5 = 3 4 3 N 2 1
6
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
For printing Shortest path from 1 to 2 use
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝒊, 𝒋)
i.e 𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟐)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟑)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟒)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟓) ⟹ 𝟓
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟏)⟹ 𝟏
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
3 2
4
1 2 3 4 5 8
1 2 3 4 5
1 3
1 0 1 -3 2 -4 2 1 N 3 4 5 1
1
2 3 0 -4 1 -1 -4 -5 2 4 N 4 2 1
𝐷5 = 7
3 7 4 0 5 3 5 4 Π5 = 3 4 3 N 2 1
6
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
For printing Shortest path from 1 to 2 use
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝒊, 𝒋)
i.e 𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟐)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟑)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟒) ⟹ 𝟒
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟓) ⟹ 𝟓
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟏)⟹ 𝟏
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution:
3 2
4
1 2 3 4 5 8
1 2 3 4 5
1 3
1 0 1 -3 2 -4 2 1 N 3 4 5 1
1
2 3 0 -4 1 -1 -4 -5 2 4 N 4 2 1
𝐷5 = 7
3 7 4 0 5 3 5 4 Π5 = 3 4 3 N 2 1
6
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
For printing Shortest path from 1 to 2 use
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝒊, 𝒋)
i.e 𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟐)
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟑) ⟹ 𝟑
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟒) ⟹ 𝟒
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟓) ⟹ 𝟓
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟏)⟹ 𝟏
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
Example 1: Consider the following directed weighted graph-
Solution: 2
3 4
1 2 3 4 5 8 1 2 3 4 5
1 3
1 0 1 -3 2 -4 2 1 N 3 4 5 1
1
2 3 0 -4 1 -1 -4 -5 2 4 N 4 2 1
𝐷5 = 7
3 7 4 0 5 3 5 4 Π 5
= 3 4 3 N 2 1
6
4 2 -1 -5 0 -2 4 4 3 4 N 1
5 8 5 1 6 0 5 4 3 4 5 N
For printing Shortest path from 1 to 2 use
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝒊, 𝒋)
i.e 𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟐) ⟹ 𝟐
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟑) ⟹ 𝟑
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟒) ⟹ 𝟒
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟓) ⟹ 𝟓
𝑷𝒓𝒊𝒏𝒕_𝒂𝒍𝒍_𝒑𝒂𝒊𝒓𝒔_𝒔𝒉𝒐𝒓𝒕𝒆𝒔𝒕_𝒑𝒂𝒕𝒉(𝚷, 𝟏, 𝟏)⟹ 𝟏
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm
(Analysis)
1. Floyd-Warshall Algorithm consists of three loops over all
the nodes.
2. The inner most loop consists of only constant complexity
operations.
3. Hence, the asymptotic complexity of Floyd Warshall
algorithm is 𝑂(𝑛3 ).
4. Here, n is the number of nodes in the given graph.
Dynamic Programming
Problem 2: Floyd-Warshall Algorithm (Home Assignment)
(Figure: a directed weighted graph on 4 vertices with edge weights
8, 3, 7, 5, 2, 2, 4, 1; apply the Floyd-Warshall algorithm to compute
the shortest path distance between every pair of vertices.)
Design and Analysis of Algorithm

Dynamic Programming
(Longest Common Subsequence)

Lecture – 57
Overview

• Dynamic Programming is a general algorithm


design technique for solving problems defined by
recurrences with overlapping subproblems
• Invented by the American mathematician Richard
Bellman in the 1950s to solve optimization
problems and later assimilated by Computer
Science.
• “Programming” here means “planning”
Dynamic Programming
• “Method of solving complex problems by
breaking them down into smaller sub-problems,
solving each of those sub-problems just once,
and storing their solutions.”

• The problem-solving approach may look like the Divide
and Conquer approach, but it is not the same: here the
subproblems overlap and each one is solved only once.
Dynamic Programming
Difference between Dynamic programming and
Divide and Conquer approach.
Dynamic Programming
Is a Four-step methods
1. Characterize the structure of an optimal
solution.
2. Recursively define the value of an optimal
solution.
3. Compute the value of an optimal solution,
typically in a bottom-up fashion.
4. Construct an optimal solution from computed
information.
Dynamic Programming
Problems:
1. 0/1 Knapsack Problem
2. Floyd-Warshall Algorithm
3. Longest Common Sub-sequence
4. Matrix Chain Multiplication
Dynamic Programming
Problem 3: Longest Common Subsequences
(LCS)
Problem:
“Given two sequences 𝑋 = 𝑥1 , 𝑥2 , … … , 𝑥𝑚 and 𝑌 =
𝑦1 , 𝑦2 , … … , 𝑦𝑛 . Find a subsequence common to
both whose length is longest. A subsequence
doesn’t have to be consecutive, but it has to be in
order.”
Dynamic Programming
Problem 3: Longest Common Subsequences
(LCS)
Example:
Springtime horseback

pioneer snowflake
𝑳𝑪𝑺 ∶ 𝒑 𝒊 𝒏 𝒆 𝑳𝑪𝑺 ∶ 𝒐 𝒂 𝒌

maelstrom heroically

becalm scholarly
𝑳𝑪𝑺 ∶ 𝒆 𝒍 𝒎 𝑳𝑪𝑺 ∶ 𝒉 𝒐 𝒍 𝒍
Dynamic Programming
Problem 3: Longest Common Subsequence
• It is used, when the solution can be recursively
described in terms of solutions to subproblems
(optimal substructure)
• Algorithm finds solutions to subproblems and
stores them in memory for later use
• More efficient than “brute-force methods”, which
solve the same subproblems over and over again
Dynamic Programming
Problem 3: Longest Common Subsequence
• Application: comparison of two DNA strings
• Example: X = ⟨A B C B D A B⟩, Y = ⟨B D C A B A⟩
Longest Common Subsequence:
X = A B C B D A B
Y = B D C A B A
(one LCS is ⟨B C B A⟩, of length 4)
• A brute-force algorithm would compare each
subsequence of X with the symbols in Y
Dynamic Programming
Problem 3: Longest Common Subsequence
• If |X| = m and |Y| = n, then there are 2^m
subsequences of X; we must check each against
Y, and each check takes O(n) time.
• So the running time of the brute-force algorithm
is O(n · 2^m).
• Notice that the LCS problem has optimal
substructure: solutions of subproblems are parts
of the final solution.
• Subproblems: “find LCS of pairs of prefixes of X
and Y”
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 1: Characterize the structure of an optimal solution
• Define 𝑋𝑖 , 𝑌𝑗 to be the prefixes of 𝑋 and 𝑌 of length 𝑖
and 𝑗 respectively
• Define 𝑐[𝑖, 𝑗] to be the length of LCS of 𝑋𝑖 and 𝑌𝑗
• Then the length of LCS of 𝑋 and 𝑌 will be 𝑐[𝑚, 𝑛].
• We start with 𝑖 = 𝑗 = 0 (i.e empty substrings of x
and y)
• Since 𝑋0 and 𝑌0 are empty strings, their LCS is always
empty (i.e. 𝑐[0,0] = 0)
• LCS of empty string and any other string is empty, so
for every 𝑖 and j: c[0, j] = c[i,0] = 0
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 1: Characterize the structure of an optimal solution
• In the process of calculation of 𝑐[𝑖, 𝑗], there are two
cases:
• 𝐹𝑖𝑟𝑠𝑡 𝑐𝑎𝑠𝑒: 𝑥[𝑖] = 𝑦[𝑗]: one more symbol in strings 𝑋
and 𝑌 matches, so the length of LCS 𝑋𝑖 and 𝑌𝑗 equals
to the length of LCS of smaller strings 𝑋𝑖−1 and 𝑌𝑗−1 ,
plus 1.
• 𝑆𝑒𝑐𝑜𝑛𝑑 𝑐𝑎𝑠𝑒: 𝑥[𝑖] ! = 𝑦[𝑗]: As symbols don’t match,
our solution is not improved, and the length of
𝐿𝐶𝑆(𝑋𝑖 , 𝑌𝑗 ) is the same as before (i.e. maximum of
𝐿𝐶𝑆(𝑋𝑖 , 𝑌𝑗−1 ) and 𝐿𝐶𝑆(𝑋𝑖−1 , 𝑌𝑗 )
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 2: Recursively define the value of optimal solution.
• Define 𝑐[𝑖, 𝑗] to be the length of LCS of 𝑋𝑖 and 𝑌𝑗 . Then
the length of LCS of 𝑋 and 𝑌 will be calculated as
𝑐[𝑚, 𝑛].

c[i, j] = 0                                if i = 0 or j = 0
c[i, j] = c[i − 1, j − 1] + 1              if i, j > 0 and X_i = Y_j
c[i, j] = max( c[i − 1, j], c[i, j − 1] )  if i, j > 0 and X_i ≠ Y_j
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
LCS-Length(X, Y)
1.  m = length(X)           // get the number of symbols in X
2.  n = length(Y)           // get the number of symbols in Y
3.  for i = 0 to m
        c[i, 0] = 0         // special case: Y_0 (empty prefix of Y)
4.  for j = 0 to n
        c[0, j] = 0         // special case: X_0 (empty prefix of X)
5.  for i = 1 to m          // for all X_i
6.      for j = 1 to n      // for all Y_j
7.          if ( X_i == Y_j )
8.              c[i, j] = c[i − 1, j − 1] + 1   and   b[i, j] = “ ↘ ”
9.          else c[i, j] = max( c[i − 1, j], c[i, j − 1] )   and
                b[i, j] = “ ↓ ”   (if the max is c[i − 1, j])
                b[i, j] = “ → ”   (if the max is c[i, j − 1])
10. return c and b
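A runnable Python sketch of LCS-Length. Strings are 0-indexed in Python, so X[i − 1] plays the role of X_i; the arrow strings mirror the b table used in the worked example that follows.

def lcs_length(X, Y):
    """Return the DP table c and the direction table b for LCS(X, Y)."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]      # c[i][j]: LCS length of prefixes X_i, Y_j
    b = [[""] * (n + 1) for _ in range(m + 1)]     # arrows used to rebuild the LCS
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "↘"                      # match: extend the diagonal
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "↓"
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "→"
    return c, b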
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
𝑋 = 𝐴𝐵𝐶𝐵
𝑌 = 𝐵𝐷𝐶𝐴𝐵
What is the Longest Common Subsequence 𝐿𝐶𝑆 (𝑋, 𝑌)?

𝑋 = 𝐴𝑩𝑪𝑩
𝑌 = 𝑩𝐷𝑪𝐴𝑩
Hence,
𝐿𝐶𝑆 (𝑋, 𝑌) = 𝐵𝐶𝐵

Note: The demonstration of this problem is given in the next page.


Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼
1 A
2 B
3 C
4 B

X = ABCB; m = |X| = 4
Y = BDCAB; n = |Y| = 5
Allocate array c[5,6]
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0
2 B 0
3 C 0
4 B 0

for i = 0 to m c[i,0] = 0
for j = 1 to n c[0,j] = 0
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0
2 B 0
3 C 0
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0
2 B 0
3 C 0
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0
2 B 0
3 C 0
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1
2 B 0
3 C 0
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0
3 C 0
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1
3 C 0
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1
3 C 0
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1
3 C 0
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1
3 C 0
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 2
3 C 0
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution for cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 2
3 C 0 1
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 2
3 C 0 1 1
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 2
3 C 0 1 1 2
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 2
3 C 0 1 1 2 2
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 2
3 C 0 1 1 2 2 2
4 B 0
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 2
3 C 0 1 1 2 2 2
4 B 0 1
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 2
3 C 0 1 1 2 2 2
4 B 0 1 1
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 2
3 C 0 1 1 2 2 2
4 B 0 1 1 2
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 2
3 C 0 1 1 2 2 2
4 B 0 1 1 2 2
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 2
3 C 0 1 1 2 2 2
4 B 0 1 1 2 2 3
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 2
3 C 0 1 1 2 2 2
4 B 0 1 1 2 2 3
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 3: Compute optimal solution cost.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5 𝑋 = 𝐴𝐵𝐶𝐵
i 𝑌𝑗 B D C A B 𝑌 = 𝐵𝐷𝐶𝐴𝐵
0 𝑋𝐼 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 2
3 C 0 1 1 2 2 2
4 B 0 1 1 2 2 3
𝑖𝑓 ( 𝑋𝑖 == 𝑌𝑗 )
𝑐[𝑖, 𝑗] = 𝑐[𝑖 − 1, 𝑗 − 1] + 1 𝑎𝑛𝑑 𝑏[𝑖, 𝑗] = “ ↘ "
𝑒𝑙𝑠𝑒 𝑐[𝑖, 𝑗] = max( 𝑐[𝑖 − 1, 𝑗], 𝑐[𝑖, 𝑗 − 1] ) 𝑎𝑛𝑑
𝑏[𝑖, 𝑗] = “ ↓ “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖 − 1, 𝑗])
𝑏[𝑖, 𝑗] = “ → “ (𝑖𝑓 max 𝑖𝑠 𝑐[𝑖, 𝑗 − 1])
The running time = 𝑂(𝑚 ∗ 𝑛), since each 𝑐[𝑖, 𝑗] is calculated in constant time, and there are 𝑚 ∗ 𝑛 elements in the array.
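To experiment with this table-filling step outside the slides, the short Python sketch below computes the 𝑐 and 𝑏 tables exactly as described; the function name lcs_tables is an illustrative choice, not part of the lecture.

    # Bottom-up computation of the LCS tables described above.
    # c[i][j] = length of an LCS of X[:i] and Y[:j]; b[i][j] records the arrow
    # ("↘", "↓" or "→", as in the slides) used later to reconstruct the subsequence.
    def lcs_tables(X, Y):
        m, n = len(X), len(Y)
        c = [[0] * (n + 1) for _ in range(m + 1)]
        b = [[None] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if X[i - 1] == Y[j - 1]:
                    c[i][j] = c[i - 1][j - 1] + 1
                    b[i][j] = "↘"
                elif c[i - 1][j] >= c[i][j - 1]:
                    c[i][j] = c[i - 1][j]
                    b[i][j] = "↓"
                else:
                    c[i][j] = c[i][j - 1]
                    b[i][j] = "→"
        return c, b

    c, b = lcs_tables("ABCB", "BDCAB")
    print(c[4][5])   # 3, the value in the bottom-right corner of the table above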
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 4: Construct / print the optimal solution.
Example 1: What do ABCB and BDCAB have in common?

𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖, 𝑗)
𝑖𝑓 𝑖 == 0 𝑜𝑟 𝑗 == 0
𝑟𝑒𝑡𝑢𝑟𝑛
𝑖𝑓 𝑏 𝑖, 𝑗 == “ ↘ “
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆 𝑏, 𝑋, 𝑖 − 1, 𝑗 − 1
𝑃𝑟𝑖𝑛𝑡 𝑥𝑖
𝑒𝑙𝑠𝑒𝑖𝑓 𝑏 𝑖, 𝑗 == “ ↓ “
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖 − 1, 𝑗)
𝑒𝑙𝑠𝑒 𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖, 𝑗 − 1)
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 4: Construct / print the optimal solution.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖, 𝑗) i 𝑌𝑗 B D C A B
𝑖𝑓 𝑖 == 0 𝑜𝑟 𝑗 == 0
𝑟𝑒𝑡𝑢𝑟𝑛 0 𝑋𝐼 0 0 0 0 0 0
𝑖𝑓 𝑏 𝑖, 𝑗 == “ ↘ “ 1 A 0 0 0 0 1 1
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆 𝑏, 𝑋, 𝑖 − 1, 𝑗 − 1 2 B 0 1 1 1 1 2
𝑃𝑟𝑖𝑛𝑡 𝑥𝑖
𝑒𝑙𝑠𝑒𝑖𝑓 𝑏 𝑖, 𝑗 == “ ↓ “ 3 C 0 1 1 2 2 2
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖 − 1, 𝑗) 4 B 0 1 1 2 2 3
𝑒𝑙𝑠𝑒 𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖, 𝑗 − 1)

𝑩
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 4: Construct / print the optimal solution.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖, 𝑗) i 𝑌𝑗 B D C A B
𝑖𝑓 𝑖 == 0 𝑜𝑟 𝑗 == 0
𝑟𝑒𝑡𝑢𝑟𝑛 0 𝑋𝐼 0 0 0 0 0 0
𝑖𝑓 𝑏 𝑖, 𝑗 == “ ↘ “ 1 A 0 0 0 0 1 1
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆 𝑏, 𝑋, 𝑖 − 1, 𝑗 − 1 2 B 0 1 1 1 1 2
𝑃𝑟𝑖𝑛𝑡 𝑥𝑖
𝑒𝑙𝑠𝑒𝑖𝑓 𝑏 𝑖, 𝑗 == “ ↓ “ 3 C 0 1 1 2 2 2
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖 − 1, 𝑗) 4 B 0 1 1 2 2 3
𝑒𝑙𝑠𝑒 𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖, 𝑗 − 1)

𝑪𝑩
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 4: Construct / print the optimal solution.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖, 𝑗) i 𝑌𝑗 B D C A B
𝑖𝑓 𝑖 == 0 𝑜𝑟 𝑗 == 0
𝑟𝑒𝑡𝑢𝑟𝑛 0 𝑋𝐼 0 0 0 0 0 0
𝑖𝑓 𝑏 𝑖, 𝑗 == “ ↘ “ 1 A 0 0 0 0 1 1
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆 𝑏, 𝑋, 𝑖 − 1, 𝑗 − 1 2 B 0 1 1 1 1 2
𝑃𝑟𝑖𝑛𝑡 𝑥𝑖
𝑒𝑙𝑠𝑒𝑖𝑓 𝑏 𝑖, 𝑗 == “ ↓ “ 3 C 0 1 1 2 2 2
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖 − 1, 𝑗) 4 B 0 1 1 2 2 3
𝑒𝑙𝑠𝑒 𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖, 𝑗 − 1)

𝑩𝑪𝑩
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 4: Construct / print the optimal solution.
Example 1: What do ABCB and BDCAB have in common?
j 0 1 2 3 4 5
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖, 𝑗) i 𝑌𝑗 B D C A B
𝑖𝑓 𝑖 == 0 𝑜𝑟 𝑗 == 0
𝑟𝑒𝑡𝑢𝑟𝑛 0 𝑋𝐼 0 0 0 0 0 0
𝑖𝑓 𝑏 𝑖, 𝑗 == “ ↘ “ 1 A 0 0 0 0 1 1
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆 𝑏, 𝑋, 𝑖 − 1, 𝑗 − 1 2 B 0 1 1 1 1 2
𝑃𝑟𝑖𝑛𝑡 𝑥𝑖
𝑒𝑙𝑠𝑒𝑖𝑓 𝑏 𝑖, 𝑗 == “ ↓ “ 3 C 0 1 1 2 2 2
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖 − 1, 𝑗) 4 B 0 1 1 2 2 3
𝑒𝑙𝑠𝑒 𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖, 𝑗 − 1)

𝒑𝒓𝒊𝒏𝒕 𝒓𝒆𝒗𝒆𝒓𝒔𝒆(𝑩 𝑪 𝑩) = 𝑩 𝑪 𝑩
Dynamic Programming
Problem 3: Longest Common Subsequence
Step 4: Construct / print the optimal solution.
Example 1: What do ABCB and BDCAB have in common?
The initial call is 𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑋. 𝑙𝑒𝑛𝑔𝑡ℎ, 𝑌. 𝑙𝑒𝑛𝑔𝑡ℎ)
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖, 𝑗)
𝑖𝑓 𝑖 == 0 𝑜𝑟 𝑗 == 0
𝑟𝑒𝑡𝑢𝑟𝑛
𝑖𝑓 𝑏 𝑖, 𝑗 == “ ↘ “
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆 𝑏, 𝑋, 𝑖 − 1, 𝑗 − 1
𝑃𝑟𝑖𝑛𝑡 𝑥𝑖
𝑒𝑙𝑠𝑒𝑖𝑓 𝑏 𝑖, 𝑗 == “ ↓ “
𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖 − 1, 𝑗)
𝑒𝑙𝑠𝑒 𝑃𝑟𝑖𝑛𝑡 − 𝐿𝐶𝑆(𝑏, 𝑋, 𝑖, 𝑗 − 1)
This algorithm required
Θ(𝑚 + 𝑛) time for execution
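For completeness, a small Python sketch of the same backtracking procedure is given below; it reuses the hypothetical lcs_tables helper sketched earlier and, like the pseudocode, runs in Θ(𝑚 + 𝑛) time.

    # Walk the b table from (i, j) back towards the border, printing X[i]
    # whenever a diagonal "↘" move is taken.
    def print_lcs(b, X, i, j):
        if i == 0 or j == 0:
            return
        if b[i][j] == "↘":
            print_lcs(b, X, i - 1, j - 1)
            print(X[i - 1], end="")      # X is 0-indexed in Python
        elif b[i][j] == "↓":
            print_lcs(b, X, i - 1, j)
        else:
            print_lcs(b, X, i, j - 1)

    # Initial call for Example 1 (prints "BCB"):
    # c, b = lcs_tables("ABCB", "BDCAB"); print_lcs(b, "ABCB", 4, 5)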
Dynamic Programming
Problem 3: Longest Common Subsequence
Example 2: What do ABCBDAB and BDCABA have in common?
𝑋 = 𝐴𝐵𝐶𝐵𝐷𝐴𝐵
𝑌 = 𝐵𝐷𝐶𝐴𝐵𝐴
What is the Longest Common Subsequence 𝐿𝐶𝑆 (𝑋, 𝑌)?

Example 3: What do AGGTA and GXTYAY have in common?


𝑋 = 𝐴𝐺𝐺𝑇𝐴
𝑌 =𝐺𝑋𝑇𝑌𝐴𝑌
What is the Longest Common Subsequence 𝐿𝐶𝑆 (𝑋, 𝑌)?
Self practice
Design and Analysis of Algorithm

Dynamic Programming
(Matrix Chain Multiplication)

Lecture – 58
Overview

• Dynamic Programming is a general algorithm
design technique for solving problems defined by
recurrences with overlapping subproblems
• Invented by the American mathematician Richard
Bellman in the 1950s to solve optimization
problems, and later assimilated by Computer
Science.
• “Programming” here means “planning”
Dynamic Programming
• “Method of solving complex problems by
breaking them down into smaller sub-problems,
solving each of those sub-problems just once,
and storing their solutions.”
• The problem-solving approach looks like the divide-
and-conquer approach, although it is not actually the
same (the subproblems overlap, and each subproblem
is solved only once).
Dynamic Programming
Difference between Dynamic programming and
Divide and Conquer approach.
Dynamic Programming
It is a four-step method:
1. Characterize the structure of an optimal
solution.
2. Recursively define the value of an optimal
solution.
3. Compute the value of an optimal solution,
typically in a bottom-up fashion.
4. Construct an optimal solution from computed
information.
Dynamic Programming
Problems:
1. 0/1 Knapsack Problem
2. Floyd-Warshall Algorithm
3. Longest Common Sub-sequence
4. Matrix Chain Multiplication
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Problem:
“Given dimensions 𝑝0 , 𝑝1 , 𝑝2 , … … , 𝑝𝑛 corresponding
to matrix sequence 𝐴1 , 𝐴2 , … … , 𝐴𝑛 of 𝑛 matrices,
where for 𝑖 = 1, 2, … , 𝑛, matrix 𝐴𝑖 has dimension
𝑝𝑖−1 × 𝑝𝑖 , determine the “multiplication sequence”
that minimizes the number of scalar multiplications
in computing the product 𝐴1 𝐴2 ⋯ 𝐴𝑛 .”
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Problem:
That is determine how to parenthesize the
multiplication.
Example:
𝐴1 𝐴2 𝐴3 𝐴4 = (𝐴1 (𝐴2 (𝐴3 𝐴4 )))
(𝐴1 ((𝐴2 𝐴3 )𝐴4 ))
((𝐴1 𝐴2 )(𝐴3 𝐴4 ))
((𝐴1 (𝐴2 𝐴3 ))𝐴4 )
(((𝐴1 𝐴2 )𝐴3 )𝐴4 )
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Given a 𝑝 × 𝑞 matrix A, a 𝑞 × 𝑟 matrix B and a 𝑟 × 𝑠 matrix
C, then ABC can be computed in two ways (AB)C and A(BC):
The number of multiplications needed are:
mult[(AB)C] = pqr + prs,
mult[A(BC)] = qrs + pqs.
When p = 5, q = 4, r = 6 and s = 2, then
mult[(AB)C] = 180,
mult[A(BC)] = 88.
This is a big difference. Hence the implication is that the
multiplication “sequence” (parenthesization) is very
important.
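The two costs are easy to verify with a couple of lines of Python (a throwaway check, not part of the lecture):

    # Number of scalar multiplications for the two parenthesizations of ABC.
    p, q, r, s = 5, 4, 6, 2
    print(p * q * r + p * r * s)   # (AB)C -> 180
    print(q * r * s + p * q * s)   # A(BC) -> 88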
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 1: Characterize the structure of an optimal solution
• Decompose the problem into subproblems:
• For each pair 1 ≤ 𝑖 ≤ 𝑗 ≤ 𝑛 , determine the multiplication
sequence for 𝐴𝑖..𝑗 = 𝐴𝑖 , 𝐴𝑖+1 , … … , 𝐴𝑗 that minimize the number of
multiplications.
• Clearly, 𝐴𝑖..𝑗 is a 𝑝𝑖−1 × 𝑝𝑗 matrix.
• High-Level Parenthesization for 𝐴𝑖..𝑗
• For any optimal multiplication sequence, at the last step you are
multiplying two matrices 𝐴𝑖..𝑘 and 𝐴𝑘+1..𝑗 for some 𝑘. That is,
𝐴𝑖..𝑗 = (𝐴𝑖 … 𝐴𝑘 ) 𝐴𝑘+1 … 𝐴𝑗 = 𝐴𝑖..𝑘 𝐴𝑘+1..𝑗
• Example
𝐴3..6 = ((𝐴3 (𝐴4 𝐴5 ))𝐴6 ) = 𝐴3..5 𝐴6..6 . (Here 𝑘 = 5.)
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 1: Characterize the structure of an optimal solution
• Thus, the problem of determining the optimal
sequence of multiplications is divided into 2
questions:
• How do we decide where to split the chain (what is the
value of k)? (Search all possible values of k)
• How do we parenthesize the sub chains 𝐴𝑖..𝑘 and
𝐴𝑘+1..𝑗 ?
(Problem has optimal substructure property that 𝐴𝑖..𝑘 and
𝐴𝑘+1..𝑗 must be optimal so the same procedure can be
applied recursively)
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 1: Characterize the structure of an optimal solution
• What is Optimal Substructure Property?
• If final “optimal” solution of 𝐴𝑖..𝑗 involves splitting into
𝐴𝑖..𝑘 and 𝐴𝑘+1..𝑗 at final step then parenthesization of 𝐴𝑖..𝑘 and
𝐴𝑘+1..𝑗 in final optimal solution must also be optimal for the
subproblems “standing alone”:
• If the parenthesization of 𝐴𝑖..𝑘 were not optimal, we could replace it
by a better parenthesization and get a cheaper final solution,
leading to a contradiction.
• Similarly, if the parenthesization of 𝐴𝑘+1..𝑗 were not optimal, we
could replace it by a better parenthesization and get a
cheaper final solution, also leading to a contradiction.
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 2: Recursively define the value of optimal solution.
• For 1 ≤ 𝑖 ≤ 𝑗 ≤ 𝑛 , let 𝑚[𝑖, 𝑗] denote the minimum
number of multiplications needed to compute 𝐴𝑖..𝑗 . The
optimum cost can be described by the following recursive
definition.
𝑚[𝑖, 𝑗] = 0, if 𝑖 = 𝑗
𝑚[𝑖, 𝑗] = min { 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗 : 𝑖 ≤ 𝑘 < 𝑗 }, if 𝑖 < 𝑗
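Before moving to the bottom-up table of Step 3, note that this recurrence can also be evaluated directly with memoization. The sketch below does exactly that; the function name mcm_cost and the use of lru_cache are illustrative choices, not from the lecture.

    from functools import lru_cache

    # Memoized evaluation of the recurrence for m[i, j].
    # p is the dimension list p0, p1, ..., pn; matrices are 1-indexed as in the slides.
    def mcm_cost(p):
        n = len(p) - 1

        @lru_cache(maxsize=None)
        def m(i, j):
            if i == j:
                return 0
            return min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                       for k in range(i, j))

        return m(1, n)

    print(mcm_cost([30, 35, 15, 5, 10, 20, 25]))   # 15125, the value derived in Step 3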
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
𝑀𝐴𝑇𝑅𝐼𝑋 − 𝐶𝐻𝐴𝐼𝑁 − 𝑂𝑅𝐷𝐸𝑅(𝑝)
1 𝑛 ← 𝑙𝑒𝑛𝑔𝑡ℎ[𝑝] − 1
2 𝑙𝑒𝑡 𝑚[1. . 𝑛, 1. . 𝑛] 𝑎𝑛𝑑 𝑠[1. . 𝑛 − 1,2. . 𝑛] 𝑏𝑒 𝑛𝑒𝑤 𝑡𝑎𝑏𝑙𝑒𝑠.
3 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛
4 𝑚[𝑖, 𝑖] ← 0
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 ▹ 𝑙 𝑖𝑠 𝑡ℎ𝑒 𝑐ℎ𝑎𝑖𝑛 𝑙𝑒𝑛𝑔𝑡ℎ.
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
14 𝑟𝑒𝑡𝑢𝑟𝑛 𝑚 𝑎𝑛𝑑 𝑠
Let us illustrate the algorithm with the help of an example.
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain
product whose sequence of dimension is :
30, 35, 15, 5, 10, 20, 25
Solution:
Here
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔
30 35 15 5 10 20 25
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain
product whose sequence of dimension is :
30, 35, 15, 5, 10, 20, 25
Solution:
Here
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔
30 35 15 5 10 20 25

Matrix 𝐴1 𝐴2 𝐴3 𝐴4 𝐴5 𝐴6
Dimensions 30 x 35 35 x 15 15 x 5 5 x 10 10 x 20 20 x 25
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0
1
2 0
2
3 0
3
4 0
4
5 0
5
6 0
m matrix s matrix
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0
1
2 0
2
3 0
3
4 0
4
5 0
5
6 0
m matrix s matrix
1 𝑛 ← 𝑙𝑒𝑛𝑔𝑡ℎ[𝑝] − 1
2 𝑙𝑒𝑡 𝑚[1. . 𝑛, 1. . 𝑛] 𝑎𝑛𝑑 𝑠[1. . 𝑛 − 1,2. . 𝑛] 𝑏𝑒 𝑛𝑒𝑤 𝑡𝑎𝑏𝑙𝑒𝑠.
3 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛
4 𝑚[𝑖, 𝑖] ← 0
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 ∞
1
2 0
2
3 0
3
4 0
4
5 0
5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 2 , 𝑖 = 1, 𝑗 = 2, 𝑘 =1
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑞=
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750
1 1
2 0
2
3 0
3
4 0
4
5 0
5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
𝑙 = 2 , 𝑖 = 1, 𝑗 = 2, 𝑘 =1
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞ 𝑞 = 0 + 0 + 30 ∗ 35 ∗ 15 = 15750
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750
1 1
2 0 ∞
2
3 0
3
4 0
4
5 0
5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
𝑙 = 2 , 𝑖 = 2, 𝑗 = 3, 𝑘 =2
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞ 𝑞=
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750
1 1
2 0 2625
2 2
3 0
3
4 0
4
5 0
5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
𝑙 = 2 , 𝑖 = 2, 𝑗 = 3, 𝑘 =2
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞ 𝑞 = 0 + 0 + 35 ∗ 15 ∗ 5 = 2625
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750
1 1
2 0 2625
2 2
3 0 ∞
3
4 0
4
5 0
5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
𝑙 = 2 , 𝑖 = 3, 𝑗 = 4, 𝑘 =3
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞ 𝑞=
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750
1 1
2 0 2625
2 2
3 0 750
3 3
4 0
4
5 0
5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
𝑙 = 2 , 𝑖 = 3, 𝑗 = 4, 𝑘 =3
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞ 𝑞 = 0 + 0 + 15 ∗ 5 ∗ 10 = 750
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750
1 1
2 0 2625
2 2
3 0 750
3 3
4 0 ∞
4
5 0
5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
𝑙 = 2 , 𝑖 = 4, 𝑗 = 5, 𝑘 =4
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞ 𝑞=
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750
1 1
2 0 2625
2 2
3 0 750
3 3
4 0 1000
4 4
5 0
5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
𝑙 = 2 , 𝑖 = 4, 𝑗 = 5, 𝑘 =4
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞ 𝑞 = 0 + 0 + 5 ∗ 10 ∗ 20 = 1000
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750
1 1
2 0 2625
2 2
3 0 750
3 3
4 0 1000
4 4
5 0 ∞
5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
𝑙 = 2 , 𝑖 = 5, 𝑗 = 6, 𝑘 =5
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞ 𝑞=
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750
1 1
2 0 2625
2 2
3 0 750
3 3
4 0 1000
4 4
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
𝑙 = 2 , 𝑖 = 5, 𝑗 = 6, 𝑘 =5
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞ 𝑞 = 0 + 0 + 10 ∗ 20 ∗ 25 = 5000
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 ∞
1 1
2 0 2625
2 2
3 0 750
3 3
4 0 1000
4 4
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 3 , 𝑖 = 1, 𝑗 = 3, 𝑘 =1
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑞 = 0 + 2625 + 30 ∗ 35 ∗ 5 = 7875
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 ∞
1 1
2 0 2625
2 2
3 0 750
3 3
4 0 1000
4 4
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 3 , 𝑖 = 1, 𝑗 = 3, 𝑘 =1
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑞 = 0 + 2625 + 30 ∗ 35 ∗ 5 = 7875
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1 𝑙 = 3 , 𝑖 = 1, 𝑗 = 3, 𝑘 =2
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗 𝑞 = 15750 + 0 + 30 ∗ 15 ∗ 5 =18000
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875
1 1 1
2 0 2625
2 2
3 0 750
3 3
4 0 1000
4 4
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 min
𝑙 = 3 , 𝑖 = 1, 𝑗 = 3, 𝑘 =1
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑞 = 0 + 2625 + 30 ∗ 35 ∗ 5 = 7875
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1 𝑙 = 3 , 𝑖 = 1, 𝑗 = 3, 𝑘 =2
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗 𝑞 = 15750 + 0 + 30 ∗ 15 ∗ 5 =18000
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875
1 1 1
2 0 2625 ∞
2 2
3 0 750
3 3
4 0 1000
4 4
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 3 , 𝑖 = 2, 𝑗 = 4, 𝑘 =2
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑞 = 0 + 750 + 35 ∗ 15 ∗ 10 = 6000
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875
1 1 1
2 0 2625 ∞
2 2 3
3 0 750
3 3
4 0 1000
4 4
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 3 , 𝑖 = 2, 𝑗 = 4, 𝑘 =2
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑞 = 0 + 750 + 35 ∗ 15 ∗ 10 = 6000
8 𝑚[𝑖, 𝑗] ← ∞ 𝑙 = 3 , 𝑖 = 2, 𝑗 = 4, 𝑘 =3
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗 𝑞 = 2625 + 0 + 35 ∗ 5 ∗ 10 =4375
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875
1 1 1
2 0 2625 4375
2 2 3
3 0 750
3 3
4 0 1000
4 4
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 3 , 𝑖 = 2, 𝑗 = 4, 𝑘 =2
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑞 = 0 + 750 + 35 ∗ 15 ∗ 10 = 6000
8 𝑚[𝑖, 𝑗] ← ∞ min
𝑙 = 3 , 𝑖 = 2, 𝑗 = 4, 𝑘 =3
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗 𝑞 = 2625 + 0 + 35 ∗ 5 ∗ 10 =4375
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875
1 1 1
2 0 2625 4375
2 2 3
3 0 750 ∞
3 3
4 0 1000
4 4
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 3 , 𝑖 = 3, 𝑗 = 5, 𝑘 =3
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑞 = 0 + 1000 + 15 ∗ 5 ∗ 20 = 2500
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875
1 1 1
2 0 2625 4375
2 2 3
3 0 750 ∞
3 3
4 0 1000
4 4
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 3 , 𝑖 = 3, 𝑗 = 5, 𝑘 =3
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑞 = 0 + 1000 + 15 ∗ 5 ∗ 20 = 2500
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1 𝑙 = 3 , 𝑖 = 3, 𝑗 = 5, 𝑘 =4
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗 𝑞 = 750 + 0 + 15 ∗ 10 ∗ 20 = 3750
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875
1 1 1
2 0 2625 4375
2 2 3
3 0 750 2500
3 3 3
4 0 1000
4 4
5 0 5000
5 5
6 0
m matrix s matrix min
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 3 , 𝑖 = 3, 𝑗 = 5, 𝑘 =3
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑞 = 0 + 1000 + 15 ∗ 5 ∗ 20 = 2500
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1 𝑙 = 3 , 𝑖 = 3, 𝑗 = 5, 𝑘 =4
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗 𝑞 = 750 + 0 + 15 ∗ 10 ∗ 20 = 3750
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875
1 1 1
2 0 2625 4375
2 2 3
3 0 750 2500
3 3
4 0 1000 ∞
4 4
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 3 , 𝑖 = 4, 𝑗 = 6, 𝑘 =4
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑞 = 0 + 5000 + 5 ∗ 10 ∗ 25 = 6250
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875
1 1 1
2 0 2625 4375
2 2 3
3 0 750 2500
3 3
4 0 1000 ∞
4 4
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 3 , 𝑖 = 4, 𝑗 = 6, 𝑘 =4
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑞 = 0 + 5000 + 5 ∗ 10 ∗ 25 = 6250
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
𝑙 = 3 , 𝑖 = 4, 𝑗 = 6, 𝑘 =5
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗 𝑞 = 1000 + 0 + 5 ∗ 20 ∗ 25 = 3500
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875
1 1 1
2 0 2625 4375
2 2 3
3 0 750 2500
3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 3 , 𝑖 = 4, 𝑗 = 6, 𝑘 =4
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑞 = 0 + 5000 + 5 ∗ 10 ∗ 25 = 6250
8 𝑚[𝑖, 𝑗] ← ∞ min
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
𝑙 = 3 , 𝑖 = 4, 𝑗 = 6, 𝑘 =5
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗 𝑞 = 1000 + 0 + 5 ∗ 20 ∗ 25 = 3500
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 ∞
1 1 1
2 0 2625 4375
2 2 3
3 0 750 2500
3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 4 , 𝑖 = 1, 𝑗 = 4
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 4375 + 30 ∗ 35 ∗ 10 = 14875
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 ∞
1 1 1
2 0 2625 4375
2 2 3
3 0 750 2500
3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛
𝑙 = 4 , 𝑖 = 1, 𝑗 = 4
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 4375 + 30 ∗ 35 ∗ 10 = 14875
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 2, 𝑞 = 15750 + 750 + 30 ∗ 15 ∗ 10 =18000
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375
1 1 1 3
2 0 2625 4375
2 2 3
3 0 750 2500
3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛
𝑙 = 4 , 𝑖 = 1, 𝑗 = 4
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 4375 + 30 ∗ 35 ∗ 10 = 14875
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 2, 𝑞 = 15750 + 750 + 30 ∗ 15 ∗ 10 =18000
8 𝑚[𝑖, 𝑗] ← ∞ 𝑘 = 3, 𝑞 = 7875 + 0 + 30 ∗ 5 ∗ 10 = 9375
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375
1 1 1 3
2 0 2625 4375
2 2 3
3 0 750 2500
3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛
𝑙 = 4 , 𝑖 = 1, 𝑗 = 4
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 4375 + 30 ∗ 35 ∗ 10 = 14875
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 2, 𝑞 = 15750 + 750 + 30 ∗ 15 ∗ 10 =18000 min
8 𝑚[𝑖, 𝑗] ← ∞ 𝑘 = 3, 𝑞 = 7875 + 0 + 30 ∗ 5 ∗ 10 = 9375
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375
1 1 1 3
2 0 2625 4375 ∞
2 2 3
3 0 750 2500
3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 4 , 𝑖 = 2, 𝑗 = 5
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 2, 𝑞 = 0 + 4375 + 35 ∗ 15 ∗ 20 = 10500
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375
1 1 1 3
2 0 2625 4375 ∞
2 2 3
3 0 750 2500
3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 4 , 𝑖 = 2, 𝑗 = 5
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 2, 𝑞 = 0 + 4375 + 35 ∗ 15 ∗ 20 = 10500
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 3, 𝑞 = 2625 + 1000 + 35 ∗ 5 ∗ 20 = 7125
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375
1 1 1 3
2 0 2625 4375 ∞
2 2 3
3 0 750 2500
3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 4 , 𝑖 = 2, 𝑗 = 5
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 2, 𝑞 = 0 + 4375 + 35 ∗ 15 ∗ 20 = 10500
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 3, 𝑞 = 2625 + 1000 + 35 ∗ 5 ∗ 20 = 7125
8 𝑚[𝑖, 𝑗] ← ∞ 𝑘 = 4, 𝑞 = 4375 + 0 + 35 ∗ 10 ∗ 20 = 11375
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375
1 1 1 3
2 0 2625 4375 7125
2 2 3 3
3 0 750 2500
3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 4 , 𝑖 = 2, 𝑗 = 5
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 2, 𝑞 = 0 + 4375 + 35 ∗ 15 ∗ 20 = 10500 min
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 3, 𝑞 = 2625 + 1000 + 35 ∗ 5 ∗ 20 = 7125
8 𝑚[𝑖, 𝑗] ← ∞ 𝑘 = 4, 𝑞 = 4375 + 0 + 35 ∗ 10 ∗ 20 = 11375
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375
1 1 1 3
2 0 2625 4375 7125
2 2 3 3
3 0 750 2500 ∞
3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 4 , 𝑖 = 3, 𝑗 = 6
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 3, 𝑞 = 0 + 3500 + 15 ∗ 5 ∗ 25 = 5375
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375
1 1 1 3
2 0 2625 4375 7125
2 2 3 3
3 0 750 2500 ∞
3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 4 , 𝑖 = 3, 𝑗 = 6
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 3, 𝑞 = 0 + 3500 + 15 ∗ 5 ∗ 25 = 5375
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 4, 𝑞 = 750 + 5000 + 15 ∗ 10 ∗ 25 =9500
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375
1 1 1 3
2 0 2625 4375 7125
2 2 3 3
3 0 750 2500 ∞
3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 4 , 𝑖 = 3, 𝑗 = 6
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 3, 𝑞 = 0 + 3500 + 15 ∗ 5 ∗ 25 = 5375
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 4, 𝑞 = 750 + 5000 + 15 ∗ 10 ∗ 25 =9500
8 𝑚[𝑖, 𝑗] ← ∞ 𝑘 = 5, 𝑞 = 2500 + 0 + 15 ∗ 20 ∗ 25 = 10000
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375
1 1 1 3
2 0 2625 4375 7125
2 2 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 4 , 𝑖 = 3, 𝑗 = 6 min
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 3, 𝑞 = 0 + 3500 + 15 ∗ 5 ∗ 25 = 5375
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 4, 𝑞 = 750 + 5000 + 15 ∗ 10 ∗ 25 =9500
8 𝑚[𝑖, 𝑗] ← ∞ 𝑘 = 5, 𝑞 = 2500 + 0 + 15 ∗ 20 ∗ 25 = 10000
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 ∞
1 1 1 3
2 0 2625 4375 7125
2 2 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 5 , 𝑖 = 1, 𝑗 = 5
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 7125 + 30 ∗ 35 ∗ 20 = 28125
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 ∞
1 1 1 3
2 0 2625 4375 7125
2 2 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 5 , 𝑖 = 1, 𝑗 = 5
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 7125 + 30 ∗ 35 ∗ 20 = 28125
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 2, 𝑞 = 15750 + 2500 + 30 ∗ 15 ∗ 20 = 27250
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 ∞
1 1 1 3
2 0 2625 4375 7125
2 2 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 5 , 𝑖 = 1, 𝑗 = 5
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 7125 + 30 ∗ 35 ∗ 20 = 28125
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 2, 𝑞 = 15750 + 2500 + 30 ∗ 15 ∗ 20 = 27250
8 𝑚[𝑖, 𝑗] ← ∞ 𝑘 = 3, 𝑞 = 7875 + 1000 + 30 ∗ 5 ∗ 20 = 11875
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 ∞
1 1 1 3
2 0 2625 4375 7125
2 2 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 5 , 𝑖 = 1, 𝑗 = 5
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 7125 + 30 ∗ 35 ∗ 20 = 28125
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 2, 𝑞 = 15750 + 2500 + 30 ∗ 15 ∗ 20 = 27250
8 𝑚[𝑖, 𝑗] ← ∞ 𝑘 = 3, 𝑞 = 7875 + 1000 + 30 ∗ 5 ∗ 20 = 11875
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1 𝑘 = 4, 𝑞 = 9375 + 0 + 30 ∗ 10 ∗ 20 = 15375
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 11875
1 1 1 3 3
2 0 2625 4375 7125
2 2 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 5 , 𝑖 = 1, 𝑗 = 5
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 7125 + 30 ∗ 35 ∗ 20 = 28125
7 𝑗 ← 𝑖 + 𝑙 − 1 min
𝑘 = 2, 𝑞 = 15750 + 2500 + 30 ∗ 15 ∗ 20 = 27250
8 𝑚[𝑖, 𝑗] ← ∞ 𝑘 = 3, 𝑞 = 7875 + 1000 + 30 ∗ 5 ∗ 20 = 11875
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1 𝑘 = 4, 𝑞 = 9375 + 0 + 30 ∗ 10 ∗ 20 = 15375
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 11875
1 1 1 3 3
2 0 2625 4375 7125 ∞
2 2 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 5 , 𝑖 = 2, 𝑗 = 6
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 2, 𝑞 = 0 + 5375 + 35 ∗ 15 ∗ 25 = 15375
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔
30 35 15 5 10 20 25
Solution:
1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 11875
1 1 1 3 3
2 0 2625 4375 7125 ∞
2 2 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 5 , 𝑖 = 2, 𝑗 = 6
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 2, 𝑞 = 0 + 5375 + 35 ∗ 15 ∗ 25 = 15375
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 3, 𝑞 = 2625 + 3500 + 35 ∗ 5 ∗ 25 = 10500
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 11875
1 1 1 3 3
2 0 2625 4375 7125 ∞
2 2 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 5 , 𝑖 = 2, 𝑗 = 6
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 2, 𝑞 = 0 + 5375 + 35 ∗ 15 ∗ 25 = 15375
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 3, 𝑞 = 2625 + 3500 + 35 ∗ 5 ∗ 25 = 10500
8 𝑚[𝑖, 𝑗] ← ∞ 𝑘 = 4, 𝑞 = 4375 + 5000 + 35 ∗ 10 ∗ 25 = 18125
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 11875
1 1 1 3 3
2 0 2625 4375 7125 ∞
2 2 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 5 , 𝑖 = 2, 𝑗 = 6
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 2, 𝑞 = 0 + 5375 + 35 ∗ 15 ∗ 25 = 15375
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 3, 𝑞 = 2625 + 3500 + 35 ∗ 5 ∗ 25 = 10500
8 𝑚[𝑖, 𝑗] ← ∞ 𝑘 = 4, 𝑞 = 4375 + 5000 + 35 ∗ 10 ∗ 25 = 18125
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1 𝑘 = 5, 𝑞 = 7125 + 0 + 35 ∗ 20 ∗ 25 = 24625
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 11875
1 1 1 3 3
2 0 2625 4375 7125 10500
2 2 3 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 5 , 𝑖 = 2, 𝑗 = 6
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 2, 𝑞 = 0 + 5375 + 35 ∗ 15 ∗ 25 = 15375 min
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 3, 𝑞 = 2625 + 3500 + 35 ∗ 5 ∗ 25 = 10500
8 𝑚[𝑖, 𝑗] ← ∞ 𝑘 = 4, 𝑞 = 4375 + 5000 + 35 ∗ 10 ∗ 25 = 18125
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1 𝑘 = 5, 𝑞 = 7125 + 0 + 35 ∗ 20 ∗ 25 = 24625
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 11875 ∞
1 1 1 3 3
2 0 2625 4375 7125 10500
2 2 3 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 6 , 𝑖 = 1, 𝑗 = 6
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 10500 + 30 ∗ 35 ∗ 25 = 36750
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 11875 ∞
1 1 1 3 3
2 0 2625 4375 7125 10500
2 2 3 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 6 , 𝑖 = 1, 𝑗 = 6
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 10500 + 30 ∗ 35 ∗ 25 = 36750
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 2, 𝑞 = 15750 + 5375 + 30 ∗ 15 ∗ 25 = 32375
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 11875 ∞
1 1 1 3 3
2 0 2625 4375 7125 10500
2 2 3 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 6 , 𝑖 = 1, 𝑗 = 6
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 10500 + 30 ∗ 35 ∗ 25 = 36750
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 2, 𝑞 = 15750 + 5375 + 30 ∗ 15 ∗ 25 = 32375
8 𝑚[𝑖, 𝑗] ← ∞ 𝑘 = 3, 𝑞 = 7875 + 3500 + 30 ∗ 5 ∗ 25 = 15125
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 11875 ∞
1 1 1 3 3
2 0 2625 4375 7125 10500
2 2 3 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 6 , 𝑖 = 1, 𝑗 = 6
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 10500 + 30 ∗ 35 ∗ 25 = 36750
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 2, 𝑞 = 15750 + 5375 + 30 ∗ 15 ∗ 25 = 32375
8 𝑚[𝑖, 𝑗] ← ∞ 𝑘 = 3, 𝑞 = 7875 + 3500 + 30 ∗ 5 ∗ 25 = 15125
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1 𝑘 = 4, 𝑞 = 9375 + 5000 + 30 ∗ 10 ∗ 25 = 21875
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 11875 ∞
1 1 1 3 3
2 0 2625 4375 7125 10500
2 2 3 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 6 , 𝑖 = 1, 𝑗 = 6
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 10500 + 30 ∗ 35 ∗ 25 = 36750
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 2, 𝑞 = 15750 + 5375 + 30 ∗ 15 ∗ 25 = 32375
8 𝑚[𝑖, 𝑗] ← ∞
𝑘 = 3, 𝑞 = 7875 + 3500 + 30 ∗ 5 ∗ 25 = 15125
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗𝑘 = 4, 𝑞 = 9375 + 5000 + 30 ∗ 10 ∗ 25 = 21875
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗] 𝑘 = 5, 𝑞 = 11875 + 0 + 30 ∗ 20 ∗ 25 = 26875
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
Example 1: Find an optimal parenthesization of a matrix chain product whose sequence of
dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔

Solution: 30 35 15 5 10 20 25

1 2 3 4 5 6
2 3 4 5 6
1 0 15750 7875 9375 11875 15125
1 1 1 3 3 3
2 0 2625 4375 7125 10500
2 2 3 3 3
3 0 750 2500 5375
3 3 3 3
4 0 1000 3500
4 4 5
5 0 5000
5 5
6 0
m matrix s matrix
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 𝑙 = 6 , 𝑖 = 1, 𝑗 = 6
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1 𝑘 = 1, 𝑞 = 0 + 10500 + 30 ∗ 35 ∗ 25 = 36750
7 𝑗 ← 𝑖 + 𝑙 − 1 𝑘 = 2, 𝑞 = 15750 + 5375 + 30 ∗ 15 ∗ 25 = 32375 min
8 𝑚[𝑖, 𝑗] ← ∞
𝑘 = 3, 𝑞 = 7875 + 3500 + 30 ∗ 5 ∗ 25 = 15125
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗𝑘 = 4, 𝑞 = 9375 + 5000 + 30 ∗ 10 ∗ 25 = 21875
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗] 𝑘 = 5, 𝑞 = 11875 + 0 + 30 ∗ 20 ∗ 25 = 26875
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 3: Compute optimal cost.
𝑀𝐴𝑇𝑅𝐼𝑋 − 𝐶𝐻𝐴𝐼𝑁 − 𝑂𝑅𝐷𝐸𝑅(𝑝)
1 𝑛 ← 𝑙𝑒𝑛𝑔𝑡ℎ[𝑝] − 1
2 𝑙𝑒𝑡 𝑚[1. . 𝑛, 1. . 𝑛] 𝑎𝑛𝑑 𝑠[1. . 𝑛 − 1,2. . 𝑛] 𝑏𝑒 𝑛𝑒𝑤 𝑡𝑎𝑏𝑙𝑒𝑠.
3 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛
4 𝑚[𝑖, 𝑖] ← 0
5 𝑓𝑜𝑟 𝑙 ← 2 𝑡𝑜 𝑛 ▹ 𝑙 𝑖𝑠 𝑡ℎ𝑒 𝑐ℎ𝑎𝑖𝑛 𝑙𝑒𝑛𝑔𝑡ℎ.
6 𝑓𝑜𝑟 𝑖 ← 1 𝑡𝑜 𝑛 − 𝑙 + 1
7 𝑗 ← 𝑖 + 𝑙 − 1
8 𝑚[𝑖, 𝑗] ← ∞
9 𝑓𝑜𝑟 𝑘 ← 𝑖 𝑡𝑜 𝑗 − 1
10 𝑞 ← 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝𝑖−1 𝑝𝑘 𝑝𝑗
11 𝑖𝑓 𝑞 < 𝑚[𝑖, 𝑗]
12 𝑡ℎ𝑒𝑛 𝑚[𝑖, 𝑗] ← 𝑞
13 𝑠[𝑖, 𝑗] ← 𝑘
14 𝑟𝑒𝑡𝑢𝑟𝑛 𝑚 𝑎𝑛𝑑 𝑠
A simple inspection of the nested loop structure of MATRIX-CHAIN-ORDER yields a running time of 𝑂(𝑛3 ) for the algorithm. The loops are nested three deep, and each loop index (𝑙, 𝑖, 𝑎𝑛𝑑 𝑘) takes on at most 𝑛 − 1 values.
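For readers who want to reproduce the m and s tables of Example 1 mechanically, a direct Python transcription of the pseudocode is sketched below; matrix_chain_order is an illustrative name, and the lists are sized n+1 so the 1-based indices of the slides can be used unchanged.

    # Bottom-up MATRIX-CHAIN-ORDER: m[i][j] is the minimum number of scalar
    # multiplications for the chain Ai..Aj, and s[i][j] the split point k chosen.
    def matrix_chain_order(p):
        n = len(p) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]
        s = [[0] * (n + 1) for _ in range(n + 1)]
        for l in range(2, n + 1):                 # l is the chain length
            for i in range(1, n - l + 2):
                j = i + l - 1
                m[i][j] = float("inf")
                for k in range(i, j):
                    q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                    if q < m[i][j]:
                        m[i][j] = q
                        s[i][j] = k
        return m, s

    m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
    print(m[1][6], s[1][6])   # 15125 3, matching the completed tables above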
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 4: Construct / print the optimal solution.
Example 1: Find an optimal parenthesization of a matrix chain product whose
sequence of dimension is :
𝒑𝟎 𝒑𝟏 𝒑𝟐 𝒑𝟑 𝒑𝟒 𝒑𝟓 𝒑𝟔
30 35 15 5 10 20 25
𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑖, 𝑗)
1 𝑖𝑓 𝑖 = 𝑗
2 𝑡ℎ𝑒𝑛 𝑝𝑟𝑖𝑛𝑡 "𝐴𝑖 "
3 𝑒𝑙𝑠𝑒 𝑝𝑟𝑖𝑛𝑡 "("
4 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑖, 𝑠[𝑖, 𝑗])
5 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑠[𝑖, 𝑗] + 1, 𝑗)
6 𝑝𝑟𝑖𝑛𝑡 ")"
Let us see how, in the discussed example, the call 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 −
𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 1, 6) prints the parenthesization ((𝐴1 (𝐴2 𝐴3)) ((𝐴4 𝐴5)𝐴6)).
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 4: Construct / print the optimal solution.
Example 1: Find an optimal parenthesization of a matrix chain product whose
sequence of dimension is : 2 3 4 5 6
𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑖, 𝑗) 1 1 1 3 3 3
1 𝑖𝑓 𝑖 = 𝑗
2 2 3 3 3
2 𝑡ℎ𝑒𝑛 𝑝𝑟𝑖𝑛𝑡 "𝐴𝑖 "
3 𝑒𝑙𝑠𝑒 𝑝𝑟𝑖𝑛𝑡 "(" 3 3 3 3
4 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑖, 𝑠[𝑖, 𝑗]) 4 4 5
5 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑠[𝑖, 𝑗] + 1, 𝑗) 5 5
6 𝑝𝑟𝑖𝑛𝑡 ")" s matrix
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 4: Construct / print the optimal solution.
Example 1: Find an optimal parenthesization of a matrix chain product whose
sequence of dimension is : 2 3 4 5 6
𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑖, 𝑗) 1 1 1 3 3 3
1 𝑖𝑓 𝑖 = 𝑗
2 2 3 3 3
2 𝑡ℎ𝑒𝑛 𝑝𝑟𝑖𝑛𝑡 "𝐴𝑖 "
3 𝑒𝑙𝑠𝑒 𝑝𝑟𝑖𝑛𝑡 "(" 3 3 3 3
4 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑖, 𝑠[𝑖, 𝑗]) 4 4 5
5 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑠[𝑖, 𝑗] + 1, 𝑗) 5 5
6 𝑝𝑟𝑖𝑛𝑡 ")" s matrix
POP(S,1,6)
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 4: Construct / print the optimal solution.
Example 1: Find an optimal parenthesization of a matrix chain product whose
sequence of dimension is : 2 3 4 5 6
𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑖, 𝑗) 1 1 1 3 3 3
1 𝑖𝑓 𝑖 = 𝑗
2 2 3 3 3
2 𝑡ℎ𝑒𝑛 𝑝𝑟𝑖𝑛𝑡 "𝐴𝑖 "
3 𝑒𝑙𝑠𝑒 𝑝𝑟𝑖𝑛𝑡 "(" 3 3 3 3
4 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑖, 𝑠[𝑖, 𝑗]) 4 4 5
5 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑠[𝑖, 𝑗] + 1, 𝑗) 5 5
6 𝑝𝑟𝑖𝑛𝑡 ")" s matrix
POP(S,1,6)

POP(S,4,6)
POP(S,1,3)
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 4: Construct / print the optimal solution.
Example 1: Find an optimal parenthesization of a matrix chain product whose
sequence of dimension is : 2 3 4 5 6
𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑖, 𝑗) 1 1 1 3 3 3
1 𝑖𝑓 𝑖 = 𝑗
2 2 3 3 3
2 𝑡ℎ𝑒𝑛 𝑝𝑟𝑖𝑛𝑡 "𝐴𝑖 "
3 𝑒𝑙𝑠𝑒 𝑝𝑟𝑖𝑛𝑡 "(" 3 3 3 3
4 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑖, 𝑠[𝑖, 𝑗]) 4 4 5
5 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑠[𝑖, 𝑗] + 1, 𝑗) 5 5
6 𝑝𝑟𝑖𝑛𝑡 ")" s matrix
POP(S,1,6)

POP(S,4,6)
POP(S,1,3)

POP(S,4,5) POP(S,6,6)
POP(S,1,1) POP(S,2,3)
𝐴6
𝐴1
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 4: Construct / print the optimal solution.
Example 1: Find an optimal parenthesization of a matrix chain product whose
sequence of dimension is : 2 3 4 5 6
𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑖, 𝑗) 1 1 1 3 3 3
1 𝑖𝑓 𝑖 = 𝑗
2 2 3 3 3
2 𝑡ℎ𝑒𝑛 𝑝𝑟𝑖𝑛𝑡 "𝐴𝑖 "
3 𝑒𝑙𝑠𝑒 𝑝𝑟𝑖𝑛𝑡 "(" 3 3 3 3
4 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑖, 𝑠[𝑖, 𝑗]) 4 4 5
5 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑠[𝑖, 𝑗] + 1, 𝑗) 5 5
6 𝑝𝑟𝑖𝑛𝑡 ")" s matrix
POP(S,1,6)

POP(S,4,6)
POP(S,1,3)

POP(S,4,5) POP(S,6,6)
POP(S,1,1) POP(S,2,3)
𝐴6
𝐴1
POP(S,3,3) POP(S,4,4) POP(S,5,5)
POP(S,2,2)
𝐴2 𝐴3 𝐴4 𝐴5
((𝐴1 (𝐴2 𝐴3)) ((𝐴4 𝐴5) 𝐴6))
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Step 4: Construct / print the optimal solution.
Example 1: Find an optimal parenthesization of a matrix chain product whose
sequence of dimension is : 2 3 4 5 6
𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑖, 𝑗) 1 1 1 3 3 3
1 𝑖𝑓 𝑖 = 𝑗 2 2 3 3 3
2 𝑡ℎ𝑒𝑛 𝑝𝑟𝑖𝑛𝑡 "𝐴𝑖 " 3 3 3 3
3 𝑒𝑙𝑠𝑒 𝑝𝑟𝑖𝑛𝑡 "("
4 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑖, 𝑠[𝑖, 𝑗]) 4 4 5
5 𝑃𝑅𝐼𝑁𝑇 − 𝑂𝑃𝑇𝐼𝑀𝐴𝐿 − 𝑃𝐴𝑅𝐸𝑁𝑆(𝑠, 𝑠[𝑖, 𝑗] + 1, 𝑗) 5 5
6 𝑝𝑟𝑖𝑛𝑡 ")" s matrix
POP(s, 1, 6), s[1, 6] = 3, (A1A2A3)(A4A5A6)
POP(s, 1, 3), s[1, 3] = 1, ((A1)(A2A3))(A4A5A6)
POP(s, 4, 6), s[4, 6] = 5, ((A1)(A2A3))((A4A5)(A6))
POP(s, 2, 3), s[2, 3] = 2, ((A1)((A2)(A3)))((A4A5)(A6))
POP(s, 4, 5), s[4, 5] = 4, ((A1)((A2)(A3)))(((A4)(A5))(A6))
Hence the product is computed as follows
(A1(A2A3))((A4A5)A6).
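The same recursion is easy to script. The sketch below builds on the hypothetical matrix_chain_order helper sketched earlier and returns the parenthesization as a string instead of printing it piece by piece.

    # Recursive reconstruction of the optimal parenthesization from the s table.
    def print_optimal_parens(s, i, j):
        if i == j:
            return "A" + str(i)
        k = s[i][j]
        return "(" + print_optimal_parens(s, i, k) + print_optimal_parens(s, k + 1, j) + ")"

    # m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
    # print_optimal_parens(s, 1, 6) -> "((A1(A2A3))((A4A5)A6))"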
Dynamic Programming
Problem 4: Matrix Chain Multiplication
Example 2: Find an optimal parenthesization of a matrix-chain product
whose sequence of dimensions is 5, 10, 3, 12, 5, 50, 6

Example 3: Find an optimal parenthesization of a matrix-chain product
whose sequence of dimensions is 5, 4, 6, 2, 7
Self practice
Design and Analysis of Algorithm

Greedy Methods
(Activity Selection Algorithm, Task
Scheduling, Huffman Coding )

Lecture – 59-61
Overview

• A greedy algorithm always makes the choice that
looks best at the moment. (i.e. it makes a locally
optimal choice in the hope that this choice will
lead to a globally optimal solution).

• The objective of this section is to explore
optimization problems that are solvable by greedy
algorithms.
Greedy Algorithm
• In mathematics, computer science and
economics, an optimization problem is the
problem of finding the best solution from all
feasible solutions.
• Algorithms for optimization problems typically go
through a sequence of steps, with a set of
choices at each step.
• Many optimization problems can be solved using
a greedy approach.
• Greedy algorithms are simple and
straightforward.
Greedy Algorithm
• A greedy algorithm always makes the choice that
looks best at the moment.
• That is, it makes a locally optimal choice in the
hope that this choice will lead to a globally
optimal solution.
• Greedy algorithms do not always yield optimal
solutions, but for many problems they do.
• These algorithms are easy to invent, easy to
implement, and most of the time provide the best,
optimized solution.
Greedy Algorithm
• Application of Greedy Algorithm:
• A simple but nontrivial problem, the activity-
selection problem, for which a greedy algorithm
efficiently computes a solution.
• In combinatorics (a branch of mathematics), a
‘matroid’ is a structure that abstracts and
generalizes the notion of linear independence in
vector spaces. Greedy algorithm always produces
an optimal solution for such problems. Scheduling
unit-time tasks with deadlines and penalties is an
example of such problem.
Greedy Algorithm
• Application of Greedy Algorithm:
• An important application of greedy techniques is
the design of data-compression codes (i.e.
Huffman code) .
• The greedy method is quite powerful and works
well for a wide range of problems. They are:
• Minimum-spanning-tree algorithms
(Example: Prims and Kruskal algorithm)
• Single Source Shortest Path.
(Example: Dijkstra's and Bellman ford algorithm)
Greedy Algorithm
• Application of Greedy Algorithm:
• A problem exhibits optimal substructure if an
optimal solution to the problem contains within it
optimal solutions to subproblems.
• This property is a key ingredient of assessing the
applicability of dynamic programming as well as
greedy algorithms.
• The subtleties between the above two techniques
are illustrated with the help of two variants of a
classical optimization problem known as knapsack
problem. These variants are:
• 0-1 knapsack problem (Dynamic Programming)
• Fractional knapsack problem (Greedy Algorithm)
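To make the contrast concrete, here is a minimal greedy sketch for the fractional variant; the item data and the name fractional_knapsack are illustrative, not from the lecture. The 0-1 variant cannot be solved this way and needs the dynamic-programming table instead.

    # Greedy fractional knapsack: take items in decreasing value/weight ratio,
    # splitting the last item if it does not fit completely.
    def fractional_knapsack(items, capacity):
        total = 0.0
        for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
            if capacity <= 0:
                break
            take = min(weight, capacity)
            total += value * (take / weight)
            capacity -= take
        return total

    print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0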
Greedy Algorithm
• Problem 1: An activity-selection problem
• Is a problem of scheduling several competing
activities that require exclusive use of a common
resource, with a goal of selecting a maximum-size
set of mutually compatible activities.
• Let us take a set 𝑆 = {𝑎1 , 𝑎2 , 𝑎3 , …, 𝑎𝑛 } of n
proposed activities that wish to use a resource
(i.e. lecture hall) which can serve only one activity
at a time.
• Assume that the activities are sorted in
monotonically increasing order of finish time:
𝑓1 ≤ 𝑓2 ≤ 𝑓3 ≤ ⋯ ≤ 𝑓𝑛−1 ≤ 𝑓𝑛
Greedy Algorithm
• Problem 1: An activity-selection problem
• Each activity 𝑎𝑖 has a start time 𝑠𝑖 and a finish
time 𝑓𝑖 , where 0 ≤ 𝑠𝑖 < 𝑓𝑖 < ∞.
• Activities 𝑎𝑖 and 𝑎𝑗 are compatible if the intervals
[𝑠𝑖 , 𝑓𝑖 ] and [𝑠𝑗 , 𝑓𝑗 ] do not overlap. (i.e. [𝑠𝑗 ≥ 𝑓𝑖 ] ).
• In the activity-selection problem, the main goal is
to select a maximum-size subset of mutually
compatible activities.
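A compact code version of this greedy rule is sketched below (the name select_activities and the tuple format are illustrative): it sorts the activities by finish time and keeps every activity whose start time is not earlier than the finish time of the last activity kept, which is exactly what the worked examples that follow do by hand.

    # Greedy activity selection: sort by finish time, then scan once.
    def select_activities(activities):
        chosen = []
        last_finish = 0
        for name, start, finish in sorted(activities, key=lambda a: a[2]):
            if start >= last_finish:
                chosen.append(name)
                last_finish = finish
        return chosen

    # Applied to the data of Example 1 on the next slides, the rule selects
    # ['a1', 'a3', 'a4', 'a6', 'a7', 'a9', 'a10']:
    acts = [("a1", 1, 3), ("a2", 2, 5), ("a3", 3, 4), ("a4", 4, 7), ("a5", 7, 10),
            ("a6", 8, 9), ("a7", 9, 11), ("a8", 9, 13), ("a9", 11, 12), ("a10", 12, 14)]
    print(select_activities(acts))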
Greedy Algorithm
• Problem 1: An activity-selection problem
Example 1: Given 10 activities with their start and
finish time compute a schedule where the largest
number of activities take place.

Activity 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7 𝑎8 𝑎9 𝑎10
𝑠𝑖 1 2 3 4 7 8 9 9 11 12
𝑓𝑖 3 5 4 7 10 9 11 13 12 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
First, arrange the activities in increasing order of their finishing time:
Activity 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7 𝑎8 𝑎9 𝑎10
𝑠𝑖 1 2 3 4 7 8 9 9 11 12
𝑓𝑖 3 5 4 7 10 9 11 13 12 14

Activity 𝑎1 𝑎3 𝑎2 𝑎4 𝑎6 𝑎5 𝑎7 𝑎9 𝑎8 𝑎10
𝑠𝑖 1 3 2 4 8 7 9 11 9 12
𝑓𝑖 3 4 5 7 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎1 𝑎3 𝑎2 𝑎4 𝑎6 𝑎5 𝑎7 𝑎9 𝑎8 𝑎10
𝑠𝑖 1 3 2 4 8 7 9 11 9 12
𝑓𝑖 3 4 5 7 9 10 11 12 13 14

𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎1 𝑎3 𝑎2 𝑎4 𝑎6 𝑎5 𝑎7 𝑎9 𝑎8 𝑎10
𝑠𝑖 1 3 2 4 8 7 9 11 9 12
𝑓𝑖 3 4 5 7 9 10 11 12 13 14

𝑎3
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎1 𝑎3 𝑎2 𝑎4 𝑎6 𝑎5 𝑎7 𝑎9 𝑎8 𝑎10
𝑠𝑖 1 3 2 4 8 7 9 11 9 12
𝑓𝑖 3 4 5 7 9 10 11 12 13 14

𝑎2
𝑎3
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎1 𝑎3 𝑎2 𝑎4 𝑎6 𝑎5 𝑎7 𝑎9 𝑎8 𝑎10
𝑠𝑖 1 3 2 4 8 7 9 11 9 12
𝑓𝑖 3 4 5 7 9 10 11 12 13 14

𝑎4
𝑎2
𝑎3
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎1 𝑎3 𝑎2 𝑎4 𝑎6 𝑎5 𝑎7 𝑎9 𝑎8 𝑎10
𝑠𝑖 1 3 2 4 8 7 9 11 9 12
𝑓𝑖 3 4 5 7 9 10 11 12 13 14

𝑎6
𝑎4
𝑎2
𝑎3
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎1 𝑎3 𝑎2 𝑎4 𝑎6 𝑎5 𝑎7 𝑎9 𝑎8 𝑎10
𝑠𝑖 1 3 2 4 8 7 9 11 9 12
𝑓𝑖 3 4 5 7 9 10 11 12 13 14

𝑎5
𝑎6
𝑎4
𝑎2
𝑎3
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎1 𝑎3 𝑎2 𝑎4 𝑎6 𝑎5 𝑎7 𝑎9 𝑎8 𝑎10
𝑠𝑖 1 3 2 4 8 7 9 11 9 12
𝑓𝑖 3 4 5 7 9 10 11 12 13 14

𝑎7
𝑎5
𝑎6
𝑎4
𝑎2
𝑎3
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎1 𝑎3 𝑎2 𝑎4 𝑎6 𝑎5 𝑎7 𝑎9 𝑎8 𝑎10
𝑠𝑖 1 3 2 4 8 7 9 11 9 12
𝑓𝑖 3 4 5 7 9 10 11 12 13 14

𝑎9
𝑎7
𝑎5
𝑎6
𝑎4
𝑎2
𝑎3
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎1 𝑎3 𝑎2 𝑎4 𝑎6 𝑎5 𝑎7 𝑎9 𝑎8 𝑎10
𝑠𝑖 1 3 2 4 8 7 9 11 9 12
𝑓𝑖 3 4 5 7 9 10 11 12 13 14

𝑎8
𝑎9
𝑎7
𝑎5
𝑎6
𝑎4
𝑎2
𝑎3
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎1 𝑎3 𝑎2 𝑎4 𝑎6 𝑎5 𝑎7 𝑎9 𝑎8 𝑎10
𝑠𝑖 1 3 2 4 8 7 9 11 9 12
𝑓𝑖 3 4 5 7 9 10 11 12 13 14

𝑎10
𝑎8
𝑎9
𝑎7
𝑎5
𝑎6
𝑎4
𝑎2
𝑎3
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎1 𝑎3 𝑎2 𝑎4 𝑎6 𝑎5 𝑎7 𝑎9 𝑎8 𝑎10
𝑠𝑖 1 3 2 4 8 7 9 11 9 12
𝑓𝑖 3 4 5 7 9 10 11 12 13 14

𝑎10
𝑎8
𝑎9
𝑎7
𝑎5
𝑎6
𝑎4
𝑎2
𝑎3
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎1 𝑎3 𝑎2 𝑎4 𝑎6 𝑎5 𝑎7 𝑎9 𝑎8 𝑎10
𝑠𝑖 1 3 2 4 8 7 9 11 9 12
𝑓𝑖 3 4 5 7 9 10 11 12 13 14

𝑎10
𝑎8
𝑎9
𝑎7
𝑎5
𝑎6
𝑎4
𝑎2
𝑎3
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Example 2: Find the optimal set in the given activity
selection problem.
Activity 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7 𝑎8 𝑎9 𝑎10
𝑠𝑖 1 2 3 4 5 6 7 8 9 10
𝑓𝑖 5 3 4 6 7 8 11 10 12 13
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
First, arrange the activities in increasing
order of their finish times:
Activity 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7 𝑎8 𝑎9 𝑎10
𝑠𝑖 1 2 3 4 5 6 7 8 9 10
𝑓𝑖 5 3 4 6 7 8 11 10 12 13

Activity 𝑎2 𝑎3 𝑎1 𝑎4 𝑎5 𝑎6 𝑎8 𝑎7 𝑎9 𝑎10
𝑠𝑖 2 3 1 4 5 6 8 7 9 10
𝑓𝑖 3 4 5 6 7 8 10 11 12 13
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎2 𝑎3 𝑎1 𝑎4 𝑎5 𝑎6 𝑎8 𝑎7 𝑎9 𝑎10
𝑠𝑖 2 3 1 4 5 6 8 7 9 10
𝑓𝑖 3 4 5 6 7 8 10 11 12 13

𝑎2

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎2 𝑎3 𝑎1 𝑎4 𝑎5 𝑎6 𝑎8 𝑎7 𝑎9 𝑎10
𝑠𝑖 2 3 1 4 5 6 8 7 9 10
𝑓𝑖 3 4 5 6 7 8 10 11 12 13

𝑎3
𝑎2

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎2 𝑎3 𝑎1 𝑎4 𝑎5 𝑎6 𝑎8 𝑎7 𝑎9 𝑎10
𝑠𝑖 2 3 1 4 5 6 8 7 9 10
𝑓𝑖 3 4 5 6 7 8 10 11 12 13

𝑎1
𝑎3
𝑎2

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎2 𝑎3 𝑎1 𝑎4 𝑎5 𝑎6 𝑎8 𝑎7 𝑎9 𝑎10
𝑠𝑖 2 3 1 4 5 6 8 7 9 10
𝑓𝑖 3 4 5 6 7 8 10 11 12 13

𝑎4
𝑎1
𝑎3
𝑎2

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎2 𝑎3 𝑎1 𝑎4 𝑎5 𝑎6 𝑎8 𝑎7 𝑎9 𝑎10
𝑠𝑖 2 3 1 4 5 6 8 7 9 10
𝑓𝑖 3 4 5 6 7 8 10 11 12 13

𝑎5
𝑎4
𝑎1
𝑎3
𝑎2

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎2 𝑎3 𝑎1 𝑎4 𝑎5 𝑎6 𝑎8 𝑎7 𝑎9 𝑎10
𝑠𝑖 2 3 1 4 5 6 8 7 9 10
𝑓𝑖 3 4 5 6 7 8 10 11 12 13

𝑎6
𝑎5
𝑎4
𝑎1
𝑎3
𝑎2

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎2 𝑎3 𝑎1 𝑎4 𝑎5 𝑎6 𝑎8 𝑎7 𝑎9 𝑎10
𝑠𝑖 2 3 1 4 5 6 8 7 9 10
𝑓𝑖 3 4 5 6 7 8 10 11 12 13

𝑎8
𝑎6
𝑎5
𝑎4
𝑎1
𝑎3
𝑎2

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎2 𝑎3 𝑎1 𝑎4 𝑎5 𝑎6 𝑎8 𝑎7 𝑎9 𝑎10
𝑠𝑖 2 3 1 4 5 6 8 7 9 10
𝑓𝑖 3 4 5 6 7 8 10 11 12 13

𝑎7
𝑎8
𝑎6
𝑎5
𝑎4
𝑎1
𝑎3
𝑎2

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎2 𝑎3 𝑎1 𝑎4 𝑎5 𝑎6 𝑎8 𝑎7 𝑎9 𝑎10
𝑠𝑖 2 3 1 4 5 6 8 7 9 10
𝑓𝑖 3 4 5 6 7 8 10 11 12 13

𝑎9
𝑎7
𝑎8
𝑎6
𝑎5
𝑎4
𝑎1
𝑎3
𝑎2

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎2 𝑎3 𝑎1 𝑎4 𝑎5 𝑎6 𝑎8 𝑎7 𝑎9 𝑎10
𝑠𝑖 2 3 1 4 5 6 8 7 9 10
𝑓𝑖 3 4 5 6 7 8 10 11 12 13
𝑎10
𝑎9
𝑎7
𝑎8
𝑎6
𝑎5
𝑎4
𝑎1
𝑎3
𝑎2

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
Solution:
Activity 𝑎2 𝑎3 𝑎1 𝑎4 𝑎5 𝑎6 𝑎8 𝑎7 𝑎9 𝑎10
𝑠𝑖 2 3 1 4 5 6 8 7 9 10
𝑓𝑖 3 4 5 6 7 8 10 11 12 13
𝑎10
𝑎9
𝑎7
𝑎8
𝑎6
𝑎5
𝑎4
𝑎1
𝑎3
𝑎2

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 1: An activity-selection problem
A recursive greedy algorithm
The initial call is Recursive-Activity-Selector(s, f, 0, n), with a
fictitious activity 𝑎0 whose finish time is 𝑓[0] = 0.
Recursive-Activity-Selector(s, f, k, n)
1. m = k + 1
2. while 𝑚 ≤ 𝑛 and 𝑠[𝑚] < 𝑓[𝑘]    (skip activities that start before 𝑎𝑘 finishes)
3.     m = m + 1
4. if 𝑚 ≤ 𝑛
5.     return {𝑎𝑚 } ∪ Recursive-Activity-Selector(s, f, m, n)
6. else return ∅
Greedy Algorithm
• Problem 1: An activity-selection problem
A recursive greedy algorithm
Recursive-Activity-Selector(s, f, k, n)
1. m = k + 1
2. while 𝑚 ≤ 𝑛 and 𝑠[𝑚] < 𝑓[𝑘]    (skip activities that start before 𝑎𝑘 finishes)
3.     m = m + 1
4. if 𝑚 ≤ 𝑛
5.     return {𝑎𝑚 } ∪ Recursive-Activity-Selector(s, f, m, n)
6. else return ∅
Complexity when the activities are already sorted by finish time: Θ(𝑛).
Complexity including the sort: Ο(𝑛 lg 𝑛) + Θ(𝑛) = Ο(𝑛 lg 𝑛).
Greedy Algorithm
• Problem 1: An activity-selection problem
A non recursive greedy algorithm
The procedure GREEDY-ACTIVITY-SELECTOR is an iterative
version of the procedure RECURSIVE-ACTIVITY-SELECTOR.
Greedy-Activity-Selector (s, f)
1. n=s.length
2. A={𝑎1 }
3. k=1
4. for m=2 to n
5. If (𝑠[𝑚] ≥ 𝑓 𝑘 )
6. A=A∪ {𝑎𝑚 }
7. k=m
8. Return A
Greedy Algorithm
• Problem 1: An activity-selection problem
A non recursive greedy algorithm
The procedure GREEDY-ACTIVITY-SELECTOR is an iterative
version of the procedure RECURSIVE-ACTIVITY-SELECTOR.
Greedy-Activity-Selector (s, f)
1. n=s.length
2. A={𝑎1 }
3. k=1
4. for m=2 to n
5. If (𝑠[𝑚] ≥ 𝑓 𝑘 )
6. A=A∪ {𝑎𝑚 }
7. k=m
8. Return A
Like the recursive version, Greedy-Activity-Selector schedules
a set of n activities in Θ(𝑛) time, assuming the activities are
already sorted by their finish times.
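To make the procedure concrete, here is a small Python sketch of the iterative selector (an illustrative transcription, not part of the original slides; it uses 1-based arrays with a dummy entry at index 0 and assumes the activities are already sorted by finish time):

```python
def greedy_activity_selector(s, f):
    """Iterative greedy activity selection.
    s, f: 1-based start/finish times (index 0 unused), sorted so that
    f[1] <= f[2] <= ... <= f[n]. Returns the selected positions."""
    n = len(s) - 1
    selected = [1]            # a_1 (earliest finish) is always chosen
    k = 1                     # position of the last activity added
    for m in range(2, n + 1):
        if s[m] >= f[k]:      # a_m starts after a_k finishes, so it is compatible
            selected.append(m)
            k = m
    return selected

# Example 1 data, already sorted by finish time (positions 1..10 correspond
# to activities a1, a3, a2, a4, a6, a5, a7, a9, a8, a10 in that order)
s = [None, 1, 3, 2, 4, 8, 7, 9, 11, 9, 12]
f = [None, 3, 4, 5, 7, 9, 10, 11, 12, 13, 14]
print(greedy_activity_selector(s, f))
# [1, 2, 4, 5, 7, 8, 10]  ->  activities a1, a3, a4, a6, a7, a9, a10
```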
Greedy Algorithm
• Problem 2: Task Scheduling problem
• An interesting problem that can be solved using
matroids is that of optimally scheduling
unit-time tasks on a single processor, where each
task has a deadline, along with a penalty paid if
the task misses its deadline.
• Although this problem looks complicated, it can be
solved in a surprisingly simple manner by casting
it as a matroid and using a greedy algorithm.
Greedy Algorithm
• Problem 2: Task Scheduling problem
• A unit-time task is a job, such as a program to be
run on a computer, that requires exactly one unit
of time to complete.
• Given a finite set S of unit-time tasks, a schedule
for S is a permutation of S specifying the order in
which to perform these tasks.
• The first task in the schedule begins at time 0 and
finishes at time 1, the second task begins at time
1 and finishes at time 2, and so on..
Greedy Algorithm
• Problem 2: Task Scheduling problem
• The problem of scheduling unit-time tasks with
deadlines and penalties for a single processor has
the following inputs:
• A set 𝑆 = {𝑎1 , 𝑎2 , 𝑎3 ,…….., 𝑎𝑛 } of n unit-time tasks;
• A set of n integer deadlines 𝑑1 , 𝑑2 , 𝑑3 ,…….., 𝑑𝑛 such that
each 𝑑𝑖 satisfies 1 ≤ 𝑑𝑖 ≤ 𝑛 and task 𝑎𝑖 is supposed to
finish by time 𝑑𝑖 .
• A set of n nonnegative weights or penalties 𝑤1 , 𝑤2 ,
𝑤3 ,…….., 𝑤𝑛 , such that a penalty of 𝑤𝑖 is incurred if task
𝑎𝑖 is not finished by time 𝑑𝑖 , and no penalty is incurred if the
task finishes by its deadline.
Greedy Algorithm
• Problem 2: Task Scheduling problem
Example 1: Find an optimal schedule from the
following table, where the tasks with
penalties(weight) and deadlines are given.
Task 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7
d𝑖 4 2 4 3 1 4 6
p𝑖 70 60 50 40 30 20 10
Greedy Algorithm
• Problem 2: Task Scheduling problem
Example 1:
Solution: As per the greedy algorithm, first sort the
tasks in descending order of their penalties, so that
the total penalty incurred is minimized.
Tasks→ 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7
d𝑖 4 2 4 3 1 4 6
p𝑖 70 60 50 40 30 20 10

(Note: In this problem the tasks are already sorted)


Greedy Algorithm
• Problem 2: Task Scheduling problem
Solution:
Tasks→ 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7
d𝑖 4 2 4 3 1 4 6
p𝑖 70 60 50 40 30 20 10

𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 2: Task Scheduling problem
Solution:
Tasks→ 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7
d𝑖 4 2 4 3 1 4 6
p𝑖 70 60 50 40 30 20 10

𝑎2
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 2: Task Scheduling problem
Solution:
Tasks→ 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7
d𝑖 4 2 4 3 1 4 6
p𝑖 70 60 50 40 30 20 10

𝑎3
𝑎2
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 2: Task Scheduling problem
Solution:
Tasks→ 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7
d𝑖 4 2 4 3 1 4 6
p𝑖 70 60 50 40 30 20 10

Can’t place

𝑎4
𝑎3
𝑎2
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 2: Task Scheduling problem
Solution:
Tasks→ 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7
d𝑖 4 2 4 3 1 4 6
p𝑖 70 60 50 40 30 20 10

Can’t place

𝑎4
𝑎3
𝑎2
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 2: Task Scheduling problem
Solution:
Tasks→ 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7
d𝑖 4 2 4 3 1 4 6
p𝑖 70 60 50 40 30 20 10

𝑎7
𝑎4
𝑎3
𝑎2
𝑎1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Greedy Algorithm
• Problem 2: Task Scheduling problem
Solution:
Tasks→ 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7
d𝑖 4 2 4 3 1 4 6
p𝑖 70 60 50 40 30 20 10

𝑎7
𝑎4 Gantt Chart
𝑎3
𝑎4 𝑎2 𝑎3 𝑎1 𝑎7
𝑎2
𝑎1

0 1 2 3 4 5 6

The Maximum deadline limit =6


So total loss is p[𝑎5 ]+p[𝑎6 ] = 30 +20= 50
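A minimal Python sketch of this scheduling rule is given below (illustrative names, not from the slides); each task is placed in the latest still-free unit slot that ends no later than its deadline, and tasks that cannot be placed contribute their penalty to the loss:

```python
def schedule_tasks(tasks):
    """tasks: list of (name, deadline, penalty).
    Returns (slots, total_penalty) for the greedy schedule."""
    max_d = max(d for _, d, _ in tasks)
    slot = [None] * (max_d + 1)          # slot[t] holds the task run in interval (t-1, t]
    penalty = 0
    for name, d, p in sorted(tasks, key=lambda t: t[2], reverse=True):
        t = d                            # try the latest slot before the deadline first
        while t >= 1 and slot[t] is not None:
            t -= 1
        if t >= 1:
            slot[t] = name
        else:
            penalty += p                 # the task misses its deadline
    return slot[1:], penalty

# Example 1 data: (task, deadline, penalty)
tasks = [('a1', 4, 70), ('a2', 2, 60), ('a3', 4, 50), ('a4', 3, 40),
         ('a5', 1, 30), ('a6', 4, 20), ('a7', 6, 10)]
print(schedule_tasks(tasks))
# (['a4', 'a2', 'a3', 'a1', None, 'a7'], 50)  ->  total loss 50, as above
```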
Greedy Algorithm
• Problem 2: Task Scheduling problem
Example 2: Find an optimal schedule from the
following table, where the tasks with
penalties(weight) and deadlines are given.
Task 𝑎1 𝑎2 𝑎3 𝑎4
d𝑖 2 1 2 1
p𝑖 100 10 15 27
Greedy Algorithm
• Problem 2: Task Scheduling problem
Example 2:
Solution: As per the greedy algorithm, first sort the
tasks in descending order of their penalties, so that
the total penalty incurred is minimized.
Task 𝑎1 𝑎2 𝑎3 𝑎4
d𝑖 2 1 2 1
p𝑖 100 10 15 27
Task 𝑎1 𝑎4 𝑎3 𝑎2
d𝑖 2 1 2 1
After Sorting
p𝑖 100 27 15 10
Greedy Algorithm
• Problem 2: Task Scheduling problem
Solution: Task 𝑎1 𝑎4 𝑎3 𝑎2
d𝑖 2 1 2 1
p𝑖 100 27 15 10

𝑎4 Gantt Chart Can’t place

𝑎4 𝑎1
𝑎1

0 1 2 3 4 5 6

The Maximum deadline limit =2


So total loss is p[𝑎3 ]+p[𝑎2 ] = 15 +10= 25
Greedy Algorithm
• Problem 2: Task Scheduling problem
Example 3: Find an optimal schedule from the
following table, where the tasks with
penalties(weight) and deadlines are given.
Task 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7
d𝑖 1 3 3 4 1 2 1
p𝑖 3 5 18 20 6 1 38
Greedy Algorithm
• Problem 2: Task Scheduling problem
Example 3:
Solution: As per the greedy algorithm, first sort the
tasks in descending order of their penalties, so that
the total penalty incurred is minimized.
Task 𝑎1 𝑎2 𝑎3 𝑎4 𝑎5 𝑎6 𝑎7
d𝑖 1 3 3 4 1 2 1
p𝑖 3 5 18 20 6 1 38

Task 𝑎7 𝑎4 𝑎3 𝑎5 𝑎2 𝑎1 𝑎6
d𝑖 1 4 3 1 3 1 2
p𝑖 38 20 18 6 5 3 1
Greedy Algorithm
• Problem 2: Task Scheduling problem
Solution:
Task 𝑎7 𝑎4 𝑎3 𝑎5 𝑎2 𝑎1 𝑎6
d𝑖 1 4 3 1 3 1 2
p𝑖 38 20 18 6 5 3 1

Can’t place
𝑎7 Gantt Chart
𝑎3
𝑎7 𝑎2 𝑎3 𝑎4
𝑎2
𝑎4

0 1 2 3 4

The Maximum deadline limit =4,


So total loss is p[𝑎5 ]+p[𝑎1 ] +p[𝑎6 ] = 6 +3+1= 10
Greedy Algorithm
• Problem 3: Huffman Coding (A solution to
encoding problem)
Problem:
Suppose we have a 58,000 characters data file that we
wish to store compactly. The characters occurred with
frequencies on that file is given below:
Character a e i o u s t
Frequency (in thousand) 10 15 12 3 4 13 1
Fixed length codeword (3-bit) 000 001 010 011 100 101 110

It was observed that if we assigned 3-bit to each


character, we required 1,74,000 bit to encode the file.
Greedy Algorithm
• Problem 3: Huffman Coding
• Huffman Coding is a famous Greedy Algorithm.
• It is used for the lossless compression of data.
• It uses variable length encoding.
• It assigns variable length code to all the characters.
• The code length of a character depends on how
frequently it occurs in the given text.
• The character which occurs most frequently gets the
smallest code.
• The character which occurs least frequently gets the
largest code.
• It is also known as Huffman Encoding
Greedy Algorithm
• Problem 3: Huffman Coding
• Huffman Coding implements a rule known as a prefix
rule.
• This is to prevent the ambiguities while decoding.
• It ensures that the code assigned to any character is
not a prefix of the code assigned to any other
character.
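The value of the prefix rule is that decoding needs no delimiters: reading the encoded bit string from left to right, the first codeword that matches is always the right one. A small Python sketch of such a decoder is shown below (an illustrative function; the code table used here is the one derived for Example 1 later in this lecture, and the encoded string is just a sample):

```python
def decode(bits, code):
    """Decode a bit string produced with a prefix-free code table {char: codeword}."""
    inverse = {v: k for k, v in code.items()}
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in inverse:        # unambiguous: no codeword is a prefix of another
            out.append(inverse[buf])
            buf = ''
    assert buf == '', "leftover bits - not a valid encoding"
    return ''.join(out)

code = {'a': '111', 'e': '10', 'i': '00', 'o': '11001',
        'u': '1101', 's': '01', 't': '11000'}
print(decode('1100010111', code))   # 11000|10|111  ->  'tea'
```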
Greedy Algorithm
• Problem 3: Huffman Coding
• Major Steps in Huffman Coding-
• There are two major steps in Huffman Coding-
1. Building a Huffman Tree from the input characters.
2. Assigning code to the characters by traversing the
Huffman Tree.
Greedy Algorithm
• Problem 3: Huffman Coding
1. Building a Huffman Tree from the input characters.
The steps involved in the construction of Huffman Tree
are as follows-
Step-01:
• Create a leaf node for each character of the text.
• Leaf node of a character contains the occurring
frequency of that character.
Step-02:
• Arrange all the nodes in increasing order of their
frequency value.
Greedy Algorithm
• Problem 3: Huffman Coding
1. Building a Huffman Tree from the input characters.
Step-03:
• Considering the first two nodes having minimum
frequency,
• Create a new internal node.
• The frequency of this new node is the sum of
frequency of those two nodes.
• Make the first node as a left child and the other
node as a right child of the newly created node.
Step-04:
• Keep repeating Step-02 and Step-03 until all the
nodes form a single tree.
The tree finally obtained is the desired Huffman Tree.
Greedy Algorithm
• Problem 3: Huffman Coding
1. Building a Huffman Tree from the input characters.
Time Complexity-
The time complexity analysis of Huffman Coding is as follows-
• extractMin( ) is called 2 x (n-1) times if there are n
nodes.
• As extractMin( ) calls minHeapify( ), it takes O(logn)
time.

Thus, Overall time complexity of Huffman Coding becomes


O(nlogn).
[Note: Here, n is the number of unique characters in the given text.]
Greedy Algorithm
• Problem 3: Huffman Coding
2. Assigning code to the characters by traversing the
Huffman Tree.
• Assign weight to all the edges of the constructed Huffman
Tree.
• Let us assign weight ‘0’ to the left edges and weight ‘1’ to
the right edges.
Greedy Algorithm
• Problem 3: Huffman Coding
2. Assigning code to the characters by traversing the
Huffman Tree.
• Assign weight to all the edges of the constructed Huffman
Tree.
• Let us assign weight ‘0’ to the left edges and weight ‘1’ to
the right edges.
• Rule
• If you assign weight ‘0’ to the left edges, then assign weight ‘1’ to the
right edges.
• If you assign weight ‘1’ to the left edges, then assign weight ‘0’ to the
right edges.
• Any of the above two conventions may be followed.
• But follow the same convention at the time of decoding that is
adopted at the time of encoding.
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1-
A file contains the following characters with the frequencies as shown. If
Huffman Coding is used for data compression, determine-
• Huffman Code for each character
• Average code length
• Length of Huffman encoded message (in bits)

Characters Frequencies
a 10
e 15
i 12
o 3
u 4
s 13
t 1
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1-
Solution
• First let us construct the Huffman Tree.
• Huffman Tree is constructed in the following steps-

Step 1:

1 3 4 10 12 13 15
t o u a i s e
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1-
Solution
Step 2:

4 4 10 12 13 15
u a i s e

1 3
t o
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1-
Solution
Step 3:

8 10 12 13 15
a i s e

4 4
u

1 3
t o
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1-
Solution
Step 4:

18 12 13 15
i s e

8 10
a

4 4
u

1 3
t o
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1-
Solution
Step 6:

12 13 15 18
i s e

8 10
a

4 4
u

1 3
t o
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1-
Solution
Step 7:
25 15 18
e

12 13 8 10
i s a

4 4
u

1 3
t o
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1-
Solution
Step 8:
15 18 25
e

8 10 12 13
a i s
4 4
u

1 3
t o
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1-
Solution
Step 9: 33 25

15 18 12 13
e i s
8 10
a

4 4
u

1 3
t o
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1-
Solution
Step 10: 25 33

12 13 15 18
i s e

8 10
a

4 4
u

1 3
t o
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1-
Solution
58
Step 10:
25 33

12 15 18
13
i e
s
8 10
a
4 4
u
1 3
t o
Greedy Algorithm
• Problem 3: Huffman Coding
Now, as per
Example 1-
the rule assign
Solution weight ‘0’ to
58 the left edges
Step 10:
and weight ‘1’
25 33 to the right
edges.
12 15 18
13
i e
s
8 10
a
4 4
u
1 3
t o
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1-
Solution
0 58 1
Step 10:
25 33
0 1 0 1

12 15 18
13
i e 0 1
s
8 10
0 1 a
4 4
0 1 u
1 3
t o
Greedy Algorithm
• Problem 3: Huffman Coding        (Now, determine how many bits each character's code takes.)
Example 1-
Solution
58
Step 10: 0 1
25 33
0 1 0 1

12 15 18
13
i e 0 1
s
8 10
0 1 a
4 4
0 1 u
1 3
t o
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1- Question 1: Huffman Code for each character.
Solution Characters Frequencies Huffman Code
58 a 10 111
0 1

25 33 e 15 10
0 0 1
1 i 12 00
15 18
12 13
1
o 3 11001
i e 0
s
u 4 1101
8 10
0 1 a s 13 01

4 4 t 1 11000
0 1 u

1 3
t o

Huffman Tree
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1- Question 1: Huffman Code for each character.
Solution Characters Frequencies Huffman Code
58 a 10 111
0 1

25 33 e 15 10
0 0 1
1 i 12 00
15 18
12 13
1
o 3 11001
i e 0
s
u 4 1101
8 10
0 1 a s 13 01

4 4 t 1 11000
0 1 u
Observation:
1 3 • Characters occurring less frequently in the text are
t o assigned the larger code.
• Characters occurring more frequently in the text are
Huffman Tree assigned the smaller code.
Greedy Algorithm
• Problem 3: Huffman Coding
Example 1- Question 2: Average code length
.
Solution Characters Frequencies Huffman Code
58 a 10 111
0 1
e 15 10
25 33
0 1 i 12 00
0
1
o 3 11001
12 15 18
13 u 4 1101
e 0 1
i s
s 13 01
8 10
t 1 11000
0 1 a

4 4
Average code length
0 1 u = ∑ ( 𝒇𝒓𝒆𝒒𝒖𝒆𝒏𝒄𝒚𝒊 𝑿 𝒄𝒐𝒅𝒆 𝒍𝒆𝒏𝒈𝒕𝒉𝒊 ) / ∑ ( 𝒇𝒓𝒆𝒒𝒖𝒆𝒏𝒄𝒚𝒊 )
= { (10 𝑥 3) + (15 𝑥 2) + (12 𝑥 2) + (3 𝑥 5) +
1 3
t o
(4 𝑥 4) + (13 𝑥 2) + (1 𝑥 5) } / (10 + 15 + 12 +
3 + 4 + 13 + 1)
Huffman Tree = 2.52
Greedy Algorithm
• Problem 3: Huffman Coding
Question 3: Length of Huffman encoded
Example 1- message (in bits)
Solution Characters Frequencies Huffman Code
58
1
. a 10 111
0
e 15 10
25 33
0 1 i 12 00
0
1
o 3 11001
12 15 18
13 u 4 1101
e 0 1
i s
s 13 01
8 10
t 1 11000
0 1 a
4 4
0 1 u
1 3
t o
Total number of bits in Huffman encoded message
= Total number of characters in the message x Average code length per character
= 58 x 2.517… = 146 bits
(exactly: ∑ frequency𝑖 x code length𝑖 = 146 bits)
Huffman Tree
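The arithmetic in Questions 2 and 3 can be checked with a few lines of Python (frequencies and code lengths taken from the table above):

```python
freq = {'a': 10, 'e': 15, 'i': 12, 'o': 3, 'u': 4, 's': 13, 't': 1}
code_len = {'a': 3, 'e': 2, 'i': 2, 'o': 5, 'u': 4, 's': 2, 't': 5}

total_bits = sum(freq[c] * code_len[c] for c in freq)   # 146
avg_len = total_bits / sum(freq.values())               # 146 / 58
print(total_bits, round(avg_len, 2))                    # 146 2.52
```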
Greedy Algorithm
• Problem 3: Huffman Coding (Algorithm)
HUFFMAN(C)
1 n ← |C|
2 Q←C
3 for i ← 1 to n - 1
4 do allocate a new node z
5 left[z] ← x ← EXTRACT-MIN (Q)
6 right[z] ← y ← EXTRACT-MIN (Q)
7 f [z] ← f [x] + f [y]
8 INSERT(Q, z)
9 return EXTRACT-MIN(Q)
Greedy Algorithm
• Problem 3: Huffman Coding (Algorithm)
HUFFMAN(C)
1 n ← |C|
2 Q←C
3 for i ← 1 to n - 1
4 do allocate a new node z
5 left[z] ← x ← EXTRACT-MIN (Q)
6 right[z] ← y ← EXTRACT-MIN (Q)
7 f [z] ← f [x] + f [y]
8 INSERT(Q, z)
9 return EXTRACT-MIN(Q)

The total running time of HUFFMAN on a set of n characters is O (n lg n).
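A compact Python sketch of HUFFMAN using the standard heapq module is shown below (an illustrative implementation rather than the slides' exact data structures); left edges are labelled '0' and right edges '1', and for Example 2's frequencies it reproduces the codes derived below:

```python
import heapq
from itertools import count

def huffman_codes(freq):
    """freq: dict {character: frequency}. Returns dict {character: codeword}."""
    order = count()                              # tie-breaker so heap entries stay comparable
    heap = [(f, next(order), ch) for ch, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:                         # n - 1 merges, as in lines 3-8 of HUFFMAN
        f1, _, left = heapq.heappop(heap)        # EXTRACT-MIN twice
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(order), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):              # internal node: (left subtree, right subtree)
            walk(node[0], prefix + '0')
            walk(node[1], prefix + '1')
        else:                                    # leaf: a character
            codes[node] = prefix or '0'          # a lone character gets the code '0'
    walk(heap[0][2], '')
    return codes

freq = {'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}   # Example 2 data
print(huffman_codes(freq))
# {'a': '0', 'c': '100', 'b': '101', 'f': '1100', 'e': '1101', 'd': '111'}
```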


Greedy Algorithm
• Problem 3: Huffman Coding
Example 2-
A file contains the following characters with the frequencies as shown. If
Huffman Coding is used for data compression, determine-
• Huffman Code for each character
• Average code length
• Length of Huffman encoded message (in bits)

Characters Frequencies
a 45
b 13
c 12
d 16
e 9
f 5
Greedy Algorithm
• Problem 3: Huffman Coding
Example 2- Question 1: Huffman Code for each character.
Solution Characters Frequencies Huffman Code
100
1
a 45 0
0

55
b 13 101
45 0 1
a c 12 100
25 30
d 16 111
0 0 1
1
e 9 1101
12 13 14 16
c b 1 d f 5 1100
0
5 9
e
f

Huffman Tree
Greedy Algorithm
• Problem 3: Huffman Coding
Example 2- Question 1: Huffman Code for each character.
Solution Characters Frequencies Huffman Code
100
1
a 45 0
0

55
b 13 101
45 0 1
a c 12 100
25 30
d 16 111
0 0 1
1
e 9 1101
12 13 14 16
c b 1 d f 5 1100
0
5 9
e Observation:
f
• Characters occurring less frequently in the text are
Huffman Tree assigned the larger code.
• Characters occurring more frequently in the text are
assigned the smaller code.
Greedy Algorithm
• Problem 3: Huffman Coding
Example 2- Question 2: Average code length
.
Solution Characters Frequencies Huffman Code
100
1
a 45 0
0

55
b 13 101
45 0 1
a c 12 100
25 30
d 16 111
0 0 1
1
e 9 1101
12 13 14 16
c b 1 d f 5 1100
0
Average code length
5 9
e
= ∑ ( 𝒇𝒓𝒆𝒒𝒖𝒆𝒏𝒄𝒚𝒊 𝒙 𝒄𝒐𝒅𝒆 𝒍𝒆𝒏𝒈𝒕𝒉𝒊 ) / ∑ ( 𝒇𝒓𝒆𝒒𝒖𝒆𝒏𝒄𝒚𝒊 )
f
= { (45 𝑥 1) + (13 𝑥 3) + (12 𝑥 3) + (16 𝑥 3) +
Huffman Tree (9 𝑥 4) + (5 𝑥 4)} / (45 + 13 + 12 + 16 + 9 + 5)
= 2.24
Greedy Algorithm
• Problem 3: Huffman Coding
Question 3: Length of Huffman encoded
Example 2- message (in bits)
Solution Characters Frequencies Huffman Code
.
100
1
a 45 0
0

55
b 13 101
45 0 1
a c 12 100
25 30
d 16 111
0 0 1
1
e 9 1101
12 13 14 16
c b 1 d f 5 1100
0 Total number of bits in Huffman encoded message
5 9 = Total number of characters in the message x Average code
e
f length per character
= 100 x 2.24
Huffman Tree = 224 bits
Greedy Algorithm
• Problem 3: Huffman Coding
Example 3- (Practice yourself)
A file contains the following characters with the frequencies as shown. If
Huffman Coding is used for data compression, determine-
• Huffman Code for each character
• Average code length
• Length of Huffman encoded message (in bits)

Characters a b c d e f g h

Frequencies 1 1 2 3 5 8 13 21
Design and Analysis of Algorithm

Greedy Methods
(Single Source shortest path,
Knapsack problem )

Lecture – 62-64
Overview

• A greedy algorithm always makes the choice that


looks best at the moment. (i.e. it makes a locally
optimal choice in the hope that this choice will
lead to a globally optimal solution).

• The objective of this section is to explores


optimization problems that are solvable by greedy
algorithms.
Greedy Algorithm
• In mathematics, computer science and
economics, an optimization problem is the
problem of finding the best solution from all
feasible solutions.
• Algorithms for optimization problems typically go
through a sequence of steps, with a set of
choices at each step.
• Many optimization problems can be solved using
a greedy approach.
• Greedy algorithms are simple and
straightforward.
Greedy Algorithm
• A greedy algorithm always makes the choice that
looks best at the moment.
• That is, it makes a locally optimal choice in the
hope that this choice will lead to a globally
optimal solution.
• Greedy algorithms do not always yield optimal
solutions, but for many problems they do.
• This algorithms are easy to invent, easy to
implement and most of the time provides best
and optimized solution.
Greedy Algorithm
• Application of Greedy Algorithm:
• A simple but nontrivial problem, the activity-
selection problem, for which a greedy algorithm
efficiently computes a solution.
• In combinatorics,(a branch of mathematics), a
‘matroid’ is a structure that abstracts and
generalizes the notion of linear independence in
vector spaces. Greedy algorithm always produces
an optimal solution for such problems. Scheduling
unit-time tasks with deadlines and penalties is an
example of such problem.
Greedy Algorithm
• Application of Greedy Algorithm:
• An important application of greedy techniques is
the design of data-compression codes (i.e.
Huffman code) .
• The greedy method is quite powerful and works
well for a wide range of problems. They are:
• Minimum-spanning-tree algorithms
(Example: Prims and Kruskal algorithm)
• Single Source Shortest Path.
(Example: Dijkstra's and Bellman ford algorithm)
Greedy Algorithm
• Application of Greedy Algorithm:
• A problem exhibits optimal substructure if an
optimal solution to the problem contains within it
optimal solutions to subproblems.
• This property is a key ingredient of assessing the
applicability of dynamic programming as well as
greedy algorithms.
• The subtleties between the above two techniques
are illustrated with the help of two variants of a
classical optimization problem known as knapsack
problem. These variants are:
• 0-1 knapsack problem (Dynamic Programming)
• Fractional knapsack problem (Greedy Algorithm)
Greedy Algorithm
• Problem 5: Single source shortest path
• It is a shortest path problem where the shortest
path from a given source vertex to all other
remaining vertices is computed.

• Dijkstra’s Algorithm and Bellman Ford


Algorithm are the famous algorithms used for
solving single-source shortest path problem.
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
• Dijkstra Algorithm is a very famous greedy
algorithm.
• It is used for solving the single source
shortest path problem.
• It computes the shortest path from one
particular source node to all other remaining
nodes of the graph.
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm (Feasible Condition)
• Dijkstra's algorithm works
• for connected graphs,
• for graphs that do not contain any
negative-weight edge,
• for directed as well as undirected graphs,
• and it provides the value (cost) of the
shortest paths.
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm (Implementation)
The implementation of Dijkstra Algorithm is executed in the
following steps-
• Step-01:
• In the first step, two sets are defined-
• One set contains all those vertices which have been included
in the shortest path tree.
• In the beginning, this set is empty.
• Other set contains all those vertices which are still left to be
included in the shortest path tree.
• In the beginning, this set contains all the vertices of the
given graph.
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm (Implementation)
The implementation of Dijkstra Algorithm is executed in the
following steps-
• Step-02:
For each vertex of the given graph, two variables are defined
as-
• Π[v] which denotes the predecessor of vertex ‘v’
• d[v] which denotes the shortest path estimate of vertex ‘v’
from the source vertex.
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm (Implementation)
The implementation of Dijkstra Algorithm is executed in the
following steps-
• Step-02:
Initially, the value of these variables is set as-
• The value of variable ‘Π’ for each vertex is set to NIL i.e.
Π[v] = NIL
• The value of variable ‘d’ for source vertex is set to 0 i.e. d[S]
=0
• The value of variable ‘d’ for remaining vertices is set to ∞
i.e. d[v] = ∞
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm (Implementation)
The implementation of Dijkstra Algorithm is executed in the
following steps-
• Step-03:
The following procedure is repeated until all the vertices of the
graph are processed-
• Among unprocessed vertices, a vertex with minimum value
of variable ‘d’ is chosen.
• Its outgoing edges are relaxed.
• After relaxing the edges for that vertex, the sets created in
step-01 are updated.
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
t x
1
∞ ∞
10

3 2 9 4
s ∞ 6
7
5
∞ ∞
2
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
t x
1
∞ ∞
10

3 2 9 4
0 6
s 7
5
∞ ∞
2
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
t x
1
∞ ∞
10

3 2 9 4
s 0 6
7
5
∞ ∞
2
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
t x
1
10 ∞
10

3 2 9 4
s 0 6
7
5
5 ∞
2
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
t x
1
10 ∞
10

3 2 9 4
s 0 6
7
5
5 ∞
2
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
t x
1
10 ∞
10

3 2 9 4
s 0 6
7
5
5 ∞
2
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
t x
1
8 14
10

3 2 9 4
s 0 6
7
5
5 7
2
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
t x
1
8 14
10

3 2 9 4
s 0 6
7
5
5 7
2
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
t x
1
8 14
10

3 2 9 4
s 0 6
7
5
5 7
2
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
t x
1
8 13
10

3 2 9 4
s 0 6
7
5
5 7
2
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
t x
1
8 13
10

3 2 9 4
s 0 6
7
5
5 7
2
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
t x
1
8 9
10

3 2 9 4
s 0 6
7
5
5 7
2
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
t x
1
8 9
10

3 2 9 4
s 0 6
7
5
5 7
2
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
t x
1 Hence the shortest path to
8 9
10 all the vertex from s are:
3 2 9 𝑠→𝑡=8
4
s 0 6 𝑠→𝑥=9
7 𝑠→𝑦=5
5 𝑠→𝑧=7
5 7
2
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm (Algorithm)
DIJKSTRA(G, w, s)
1 INITIALIZE-SINGLE-SOURCE(G, s)
2S←Ø
3 Q ← V[G]
4 while Q ≠ Ø
5 do u ← EXTRACT-MIN(Q)
6 S ← S ∪ {u}
7 for each vertex 𝑣 ∈ 𝐴𝑑𝑗[𝑢]
8 do RELAX(u, v, w)
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm (Algorithm)
INITIALIZE-SINGLE-SOURCE(G, s)
1 for each vertex v ∈ V[G]
2 do d[v] ← ∞
3 π[v] ← NIL
4 d[s] ← 0
RELAX(u, v, w)
1 if d[v] > d[u] + w(u, v)
2 then d[v] ← d[u] + w(u, v)
3 π[v] ← u
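The pseudocode above translates almost line-for-line into Python when the priority queue is realised with the standard heapq module (an illustrative sketch; the graph is an adjacency dict {u: [(v, w), ...]} with the edge weights read off the figure of Example 1):

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, w), ...]} with non-negative edge weights.
    Returns (d, pi): shortest-path estimates and predecessors."""
    d = {v: float('inf') for v in graph}       # INITIALIZE-SINGLE-SOURCE
    pi = {v: None for v in graph}
    d[source] = 0
    pq = [(0, source)]                         # min-priority queue Q keyed on d
    done = set()                               # the set S of finished vertices
    while pq:
        du, u = heapq.heappop(pq)              # EXTRACT-MIN
        if u in done:
            continue                           # skip stale heap entries
        done.add(u)
        for v, w in graph[u]:                  # RELAX each edge leaving u
            if d[v] > du + w:
                d[v] = du + w
                pi[v] = u
                heapq.heappush(pq, (d[v], v))
    return d, pi

graph = {                                      # Example 1, source s
    's': [('t', 10), ('y', 5)],
    't': [('x', 1), ('y', 2)],
    'x': [('z', 4)],
    'y': [('t', 3), ('x', 9), ('z', 2)],
    'z': [('x', 6), ('s', 7)],
}
d, _ = dijkstra(graph, 's')
print(d)   # {'s': 0, 't': 8, 'x': 9, 'y': 5, 'z': 7}
```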
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm (Complexity)
CASE-01: (IN CASE OF COMPLETE GRAPH)
• 𝐴[𝑖, 𝑗] stores the information about edge (𝑖, 𝑗).
• Time taken for selecting 𝑖 with the smallest 𝑑𝑖𝑠𝑡 is 𝑂(𝑉).
• For each neighbor of i, time taken for updating 𝑑𝑖𝑠𝑡[𝑗] is 𝑂(1)
and there will be maximum V neighbors.
• Time taken for each iteration of the loop is O(V) and one
vertex is deleted from Q.
• Thus, total time complexity becomes O(𝑉 2 ).
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm (Complexity)
CASE-02:
• With adjacency list representation, all vertices of the graph
can be traversed using BFS in 𝑂(𝑉 + 𝐸) time.
• In min heap, operations like extract-min and decrease-key
value takes 𝑂(log 𝑉) time.
• So, overall time complexity becomes
𝑂(𝐸 + 𝑉) 𝑥 𝑂(log 𝑉) which is 𝑂((𝐸 + 𝑉) 𝑥 log 𝑉) = 𝑂(𝐸 log 𝑉)
• This time complexity is reduced to 𝑂(𝐸 + 𝑉 log 𝑉) using
Fibonacci heap.
Greedy Algorithm
• Problem 5: Single source shortest path
• Dijkstra’s Algorithm (Self Practice)
Example 2: Construct the Single source shortest path for the
given graph using Dijkstra’s Algorithm-
a 2 c
∞ ∞
1 1

∞ 2 1 ∞
s e
5
2
∞ ∞
b 2 d
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
• Bellman-Ford algorithm solves the single-source
shortest-path problem in the general case in which
edges of a given digraph can have negative weight
as long as G contains no negative cycles.
• Like Dijkstra's algorithm, this algorithm uses the
notion of edge relaxation, but without the greedy
choice of the next vertex. It maintains d[v] as an upper
bound on the weight of a shortest path from s to v.
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
• The algorithm progressively decreases an estimate
d[v] on the weight of the shortest path from the
source vertex s to each vertex v ∈ V until it achieves
the actual shortest-path weight.
• The algorithm returns Boolean TRUE if the given
digraph contains no negative cycles that are
reachable from source vertex s otherwise it returns
Boolean FALSE.
Greedy Algorithm
• Problem 5: Single source shortest path
Bellman Ford Algorithm (Negative Cycle
Detection)
(Figure: a cycle u → v → x → u with edge weights 5, −10 and 4; total weight −1.)
Assume that the relaxation condition holds for every edge of the cycle:
𝑑[𝑢] ≤ 𝑑[𝑥] + 4
𝑑[𝑣] ≤ 𝑑[𝑢] + 5
𝑑[𝑥] ≤ 𝑑[𝑣] − 10
Adding the three inequalities:
𝑑[𝑢] + 𝑑[𝑣] + 𝑑[𝑥] ≤ 𝑑[𝑥] + 𝑑[𝑢] + 𝑑[𝑣] − 1
Because it is a cycle, the vertices on the left are the same as those on the
right, and cancelling gives 0 ≤ −1, a contradiction.
So for at least one edge (𝑢, 𝑣) of the cycle we must have
𝑑[𝑣] > 𝑑[𝑢] + 𝑤(𝑢, 𝑣)
This is exactly what Bellman-Ford checks for.
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm (Implementation)
• Step 1: Start with the weighted graph.
• Step 2: Choose the starting vertex by making the path
value zero and assign infinity path values to all other
vertices.
• Step 3: Visit each edge and relax the path distances if
they are inaccurate.
• Step 4: Repeat step 3 |V| − 1 times, because in the worst case a
shortest path can contain up to |V| − 1 edges.
• Step 5: After all vertices have their path lengths, check if
a negative cycle is present or not.
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x
∞ ∞
6 -2
-3
8 7
s ∞ -4
2
7
∞ ∞
9
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x
∞ ∞ The edges are:
6 -2
1. (s,t) 6. (t,x)
-3 2. (s,y) 7. (t,z)
8 7
s ∞ -4 3. (y,z) 8. (t,y)
2 4. (z,x) 9. (y,x)
7 5. (x,t) 10.(z,s)
∞ ∞
9
y z
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t x t 5 x
5
∞ ∞ ∞ ∞
6 -2 6 -2
-3 -3
8 8 7
∞ 7 0
-4 -4
s s 2
2
7 7
∞ ∞ ∞ ∞
9 9
y z y z

Iteration - 1 s t x y z
0 ∞ ∞ ∞ ∞
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
∞ ∞ 6 ∞
6 -2 6 -2
-3 -3
8 8 7
0 7 0
-4 -4
s s 2
2
7 7
∞ ∞ ∞ ∞
9 9
y z y z

Iteration - 1 Edge no - 1
s t x y z
0 6 ∞ ∞ ∞
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
6 ∞ 6 ∞
6 -2 6 -2
-3 -3
8 8 7
0 7 0
-4 -4
s s 2
2
7 7
∞ ∞ 7 ∞
9 9
y z y z

Iteration - 1 Edge no - 2
s t x y z
0 6 ∞ 7 ∞
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
6 ∞ 6 ∞
6 -2 6 -2
-3 -3
8 8 7
0 7 0
-4 -4
s s 2
2
7 7
7 ∞ 7 16
9 9
y z y z

Iteration - 1 Edge no - 3
s t x y z
0 6 ∞ 7 16
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
6 ∞ 6 23
6 -2 6 -2
-3 -3
8 8 7
0 7 0
-4 -4
s s 2
2
7 7
7 16 7 16
9 9
y z y z

Iteration - 1 Edge no - 4
s t x y z
0 6 23 7 16
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
6 23 6 23
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 16 7 16
9 9
y z y z

Iteration - 1 Edge no - 5
s t x y z
0 6 23 7 16
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t x t 5 x
5
6 23 6 11
-2 6 -2
6
-3 -3
8 8 7
0 7 0 -4
-4 s
s 2 2
7 7
7 16 7 16
9 9
y y z
z
Iteration - 1 Edge no - 6
s t x y z
0 6 11 7 16
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t x t 5 x
5
6 11 6 11
6 -2 6 -2
-3 -3
8 8 7
0 7 0
-4 -4
s s 2
2
7 7
7 16 7 2
9 9
y z y z

Iteration - 1 Edge no - 7
s t x y z
0 6 11 7 2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
6 11 6 11
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 2 7 2
9 9
y z y z

Iteration - 1 Edge no - 8
s t x y z
0 6 11 7 2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t x t 5 x
5
6 11 6 4
-2 6 -2
6
-3 -3
8 8 7
0 7 0 -4
-4 s
s 2 2
7 7
7 2 7 2
9 9
y y z
z
Iteration - 1 Edge no - 9
s t x y z
0 6 4 7 2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t x t 5 x
5
6 4 6 4
-2 6 -2
6
-3 -3
8 8 7
0 7 0 -4
-4 s
s 2 2
7 7
7 2 7 2
9 9
y y z
z
Iteration - 1 Edge no – 10
s t x y z
0 6 4 7 2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t x t 5 x
5
6 4 6 4
-2 6 -2
6
-3 -3
8 8 7
0 7 0 -4
-4 s
s 2 2
7 7
7 2 7 2
9 9
y y z
z
Iteration - 1 Edge no – 10
s t x y z
0 6 4 7 2

End of Iteration 1
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
6 4 6 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 2 7 2
9 9
y z y z

Iteration - 2 Edge no - 1
s t x y z
0 6 4 7 2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
6 4 6 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 2 7 2
9 9
y z y z

Iteration - 2 Edge no - 2
s t x y z
0 6 4 7 2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t x t 5 x
5
6 4 6 4
-2 6 -2
6
-3 -3
8 8 7
0 7 0 -4
-4 s
s 2 2
7 7
7 2 7 2
9 9
y y z
z
Iteration - 2 Edge no - 3
s t x y z
0 6 4 7 2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
6 4 6 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 2 7 2
9 9
y z y z
Iteration - 2 Edge no - 4
s t x y z
0 6 4 7 2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t x
5
6 4 2 4
6 -2 -2
6
-3 -3
8 7 8
0 -4 0 7
s -4
2 s 2
7 7
7 2 7 2
9 9
y z y z
Iteration - 2 Edge no - 5
s t x y z
0 2 4 7 2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 2 7 2
9 9
y z y z
Iteration - 2 Edge no - 6
s t x y z
0 2 4 7 2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0
s -4
2 s 2
7 7
7 2 7 -2
9 9
y z y z
Iteration - 2 Edge no - 7
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t x
5
2 4 2 4
6 -2 -2
6
-3 -3
8 7 8
0 -4 0 7
s -4
2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 2 Edge no - 8
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 2 Edge no - 9
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 2 Edge no – 10
s t x y z
0 2 4 7 -2

End of Iteration 2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 3 Edge no - 1
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 3 Edge no - 2
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 3 Edge no - 3
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 3 Edge no - 4
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 3 Edge no - 5
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 3 Edge no - 6
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 3 Edge no - 7
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 3 Edge no - 8
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 3 Edge no - 9
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 3 Edge no – 10
s t x y z
0 2 4 7 -2

End of Iteration 3
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 4 Edge no - 1
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 4 Edge no - 2
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 4 Edge no - 3
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 4 Edge no - 4
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 4 Edge no - 5
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 4 Edge no - 6
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 4 Edge no - 7
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 4 Edge no - 8
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 4 Edge no - 9
s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8 7
0 -4 0 -4
s 2 s 2
7 7
7 -2 7 -2
9 9
y z y z
Iteration - 4 Edge no – 10
s t x y z
0 2 4 7 -2

End of Iteration 4
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the
given graph using Bellman ford Algorithm-
It was observed that no estimate changed during the 3rd
iteration, so the algorithm could safely have stopped there.
The final answer is:

s t x y z
0 2 4 7 -2
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm
Example 1: Construct the Single source shortest path for the given graph
using Bellman ford Algorithm- x
t 5 x
t 5
t 5 x
6 4 2 4
∞ ∞ -2 6 -2
6 -2 6
-3 -3
-3 8 8
8 7 0 7
0 7 0
-4 -4
s -4 s s
2 2 2
7 7 7
∞ ∞ 7 2 7 -2
9 9 9
y z y z y z
After Iteration 1 After Iteration 2
t 5 x t 5 x
2 4 2 4
6 -2 6 -2
-3 -3
8 7 8
0
-4 0 7
s s -4
2 2
7 7
7 -2 7 −2
9 9
y z y z
After Iteration 3 After Iteration 4
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm (Algorithm)
INITIALIZE-SINGLE-SOURCE(G, s)
1 for each vertex v ∈ V[G]
2 do d[v] ← ∞
3 π[v] ← NIL
4 d[s] ← 0
RELAX(u, v, w)
1 if d[v] > d[u] + w(u, v)
2 then d[v] ← d[u] + w(u, v)
3 π[v] ← u
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm (Analysis)
BELLMAN-FORD(G, w, s)
1 INITIALIZE-SINGLE-SOURCE(G, s) → Θ(𝑉)
2 for i ← 1 to |V[G]| - 1
3 do for each edge (u, v) ∈ E[G] Ο(𝑉𝐸)
4 do RELAX(u, v, w) Ο(𝑉𝐸)
5 for each edge (u, v) ∈ E[G]
6 do if d[v] > d[u] + w(u, v) Ο(𝐸)
7 then return FALSE
8 return TRUE
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm (Algorithm)
BELLMAN-FORD(G, w, s)
1 INITIALIZE-SINGLE-SOURCE(G, s)
2 for i ← 1 to |V[G]| - 1
3 do for each edge (u, v) ∈ E[G]
4 do RELAX(u, v, w)
5 for each edge (u, v) ∈ E[G]
6 do if d[v] > d[u] + w(u, v)
7 then return FALSE
8 return TRUE
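The same pseudocode can be sketched directly in Python (illustrative names; the graph is given as an edge list, with the weights read off the figure of Example 1):

```python
def bellman_ford(vertices, edges, source):
    """edges: list of (u, v, w). Returns (no_negative_cycle, d, pi)."""
    d = {v: float('inf') for v in vertices}     # INITIALIZE-SINGLE-SOURCE
    pi = {v: None for v in vertices}
    d[source] = 0
    for _ in range(len(vertices) - 1):          # |V| - 1 passes over every edge
        for u, v, w in edges:
            if d[u] + w < d[v]:                 # RELAX(u, v, w)
                d[v] = d[u] + w
                pi[v] = u
    for u, v, w in edges:                       # extra pass: negative-cycle check
        if d[u] + w < d[v]:
            return False, d, pi
    return True, d, pi

vertices = ['s', 't', 'x', 'y', 'z']            # Example 1, source s
edges = [('s', 't', 6), ('s', 'y', 7), ('t', 'x', 5), ('t', 'y', 8),
         ('t', 'z', -4), ('x', 't', -2), ('y', 'x', -3), ('y', 'z', 9),
         ('z', 'x', 7), ('z', 's', 2)]
ok, d, _ = bellman_ford(vertices, edges, 's')
print(ok, d)   # True {'s': 0, 't': 2, 'x': 4, 'y': 7, 'z': -2}
```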
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm(Self Practice)
Example 2: Construct the Single source shortest path for the
given graph using Bellman Ford Algorithm-
a 2 c
∞ ∞
1 1

∞ 1 2 ∞
2
s e
5
2
∞ ∞
b 2 d
Greedy Algorithm
• Problem 5: Single source shortest path
• Bellman Ford Algorithm(Self Practice)
Example 3: Construct the Single source shortest path for the
given graph using Bellman Ford Algorithm-
a 4 c
∞ ∞

5
5 -10

∞ ∞
b 3 d
Greedy Algorithm
• Problem 5: Knapsack Problem
Problem definition:
• The knapsack problem is a “combinatorial
optimization problem”, a topic in mathematics
and computer science about finding the optimal
object among a set of objects.
• Given a set of items, each with a weight and a
profit, determine the number of each item to
include in a collection so that the total weight is
less than or equal to a given limit and the total
profit is as large as possible.
Greedy Algorithm
• Problem 5: Knapsack Problem
Versions of Knapsack:
• Fractional Knapsack Problem:
Items are divisible; you can take any fraction of
an item. It is solved using a greedy algorithm.
• 0/1 Knapsack Problem:
Items are indivisible; you either take an item or
you do not. It is solved using dynamic
programming (DP).
Greedy Algorithm
• Problem 5: Knapsack Problem
• Fractional Knapsack Problem:
Given n objects and a knapsack with a capacity
“M” (weight)
• Each object 𝑖 has weight 𝑤𝑖 and profit 𝑝𝑖 .
• For each object 𝑖, suppose a fraction 𝑥𝑖 (0 ≤ 𝑥𝑖 ≤ 1) can be
placed in the knapsack; then the profit earned is 𝑝𝑖 · 𝑥𝑖 .
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem:
The objective is to maximize the profit subject to the capacity constraint, i.e.
maximize
∑_{i=1}^{n} 𝑝𝑖 𝑥𝑖 ------------------------------(1)
subject to
∑_{i=1}^{n} 𝑤𝑖 𝑥𝑖 ≤ 𝑀 ------------------------(2)
where 0 ≤ 𝑥𝑖 ≤ 1,
𝑝𝑖 > 0,
𝑤𝑖 > 0.
A feasible solution is any vector (𝑥1 , 𝑥2 , 𝑥3 , … , 𝑥𝑛 ) satisfying (2) and the bounds on 𝑥𝑖 .
An optimal solution is a feasible solution that maximizes (1).
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Implementation)
Fractional knapsack problem is solved using greedy method in the
following steps-
Step-01:
• For each item, compute its (profit / weight) ratio (i.e. 𝑝𝑖 /𝑤𝑖 ).
Step-02:
• Arrange all the items in decreasing order of their (profit /
weight) ratio.
Step-03:
• Start putting the items into the knapsack beginning from the
item with the highest ratio.
• Put as many items as you can into the knapsack.
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Implementation)
Example 1: For the given set of items and knapsack capacity = 60 kg,
find the optimal solution for the fractional knapsack problem making use
of greedy approach.

Item Weight Profit


1 5 30
2 10 40
3 15 45
4 22 77
5 25 90
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Implementation)
Example 1: For the given set of items and knapsack capacity = 60 kg,
find the optimal solution for the fractional knapsack problem making use
of greedy approach.
Step-01: Compute the (profit / weight) ratio for each item-
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Implementation)
Example 1: For the given set of items and knapsack capacity = 60 kg,
find the optimal solution for the fractional knapsack problem making use
of greedy approach.
Step-01: Compute the (profit / weight) ratio for each item(i.e. one unit
cost)-
Item Weight Profit Ratio
𝒑𝒊
(𝑖) (𝑤𝑖 ) (𝑝𝑖 ) Τ𝒘𝒊
1 5 30 6
2 10 40 4
3 15 45 3
4 22 77 3.5
5 25 90 3.6
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Implementation)
Example 1: For the given set of items and knapsack capacity = 60 kg,
find the optimal solution for the fractional knapsack problem making use
of greedy approach.
Step-02: Sort all the items in decreasing order of their value / weight
ratio-
Item Weight Profit Ratio
𝒑𝒊
(𝑖) (𝑤𝑖 ) (𝑝𝑖 ) Τ𝒘𝒊
1 5 30 6
2 10 40 4
5 25 90 3.6
4 22 77 3.5
3 15 45 3
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Implementation)
Example 1: For the given set of items and knapsack capacity = 60 kg,
find the optimal solution for the fractional knapsack problem making use
of greedy approach.
Step-03: Start filling the knapsack by putting the items into it one by
one.
Item Weight Profit Ratio Knapsack Items in
𝒑𝒊
(𝑖) (𝑤𝑖 ) (𝑝𝑖 ) Τ𝒘𝒊 weight Knapsack Cost
1 5 30 6 60 ∅ 0
2 10 40 4
5 25 90 3.6
4 22 77 3.5
3 15 45 3
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Implementation)
Example 1: For the given set of items and knapsack capacity = 60 kg,
find the optimal solution for the fractional knapsack problem making use
of greedy approach.
Step-03: Start filling the knapsack by putting the items into it one by
one.
Item Weight Profit Ratio Knapsack Items in
𝒑𝒊
(𝑖) (𝑤𝑖 ) (𝑝𝑖 ) Τ𝒘𝒊 weight Knapsack Cost
1 5 30 6 60 ∅ 0
2 10 40 4
5 25 90 3.6
4 22 77 3.5
3 15 45 3
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Implementation)
Example 1: For the given set of items and knapsack capacity = 60 kg,
find the optimal solution for the fractional knapsack problem making use
of greedy approach.
Step-03: Start filling the knapsack by putting the items into it one by
one.
Knapsack Items in
Item Weight Profit Ratio weight Knapsack Cost
1 5 30 6 60 ∅ 0
2 10 40 4 55 1 30
5 25 90 3.6
4 22 77 3.5
3 15 45 3
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Implementation)
Example 1: For the given set of items and knapsack capacity = 60 kg,
find the optimal solution for the fractional knapsack problem making use
of greedy approach.
Step-03: Start filling the knapsack by putting the items into it one by
one.
Item Weight Profit Ratio Knapsack Items in
𝒑𝒊
(𝑖) (𝑤𝑖 ) (𝑝𝑖 ) Τ𝒘𝒊 weight Knapsack Cost
1 5 30 6 60 ∅ 0
2 10 40 4 55 1 30
5 25 90 3.6
4 22 77 3.5
3 15 45 3
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Implementation)
Example 1: For the given set of items and knapsack capacity = 60 kg,
find the optimal solution for the fractional knapsack problem making use
of greedy approach.
Step-03: Start filling the knapsack by putting the items into it one by
one.
Item Weight Profit Ratio Knapsack Items in
𝒑𝒊
(𝑖) (𝑤𝑖 ) (𝑝𝑖 ) Τ𝒘𝒊 weight Knapsack Cost
1 5 30 6 60 ∅ 0
2 10 40 4 55 1 30
5 25 90 3.6 45 1, 2 70
4 22 77 3.5
3 15 45 3
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Implementation)
Example 1: For the given set of items and knapsack capacity = 60 kg,
find the optimal solution for the fractional knapsack problem making use
of greedy approach.
Step-03: Start filling the knapsack by putting the items into it one by
one.
Item Weight Profit Ratio Knapsack Items in
𝒑𝒊
(𝑖) (𝑤𝑖 ) (𝑝𝑖 ) Τ𝒘𝒊 weight Knapsack Cost
1 5 30 6 60 ∅ 0
2 10 40 4 55 1 30
5 25 90 3.6 45 1, 2 70
4 22 77 3.5
3 15 45 3
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Implementation)
Example 1: For the given set of items and knapsack capacity = 60 kg,
find the optimal solution for the fractional knapsack problem making use
of greedy approach.
Step-03: Start filling the knapsack by putting the items into it one by
one.
Item (i)   Weight (wi)   Profit (pi)   Ratio (pi/wi)   |   Knapsack weight   Items in Knapsack   Cost
1          5             30            6               |   60                ∅                   0
2          10            40            4               |   55                1                   30
5          25            90            3.6             |   45                1,2                 70
4          22            77            3.5             |   20                1,2,5               160
3          15            45            3
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Implementation)
Example 1:
Item (i)   Weight (wi)   Profit (pi)   Ratio (pi/wi)   |   Knapsack weight   Items in Knapsack   Cost
1          5             30            6               |   60                ∅                   0
2          10            40            4               |   55                1                   30
5          25            90            3.6             |   45                1,2                 70
4          22            77            3.5             |   20                1,2,5               160
3          15            45            3               |   0                 1,2,5, frac(4)      230
Now, the knapsack weight left to be filled is 20 kg, but item 4 weighs 22 kg.
Since in the fractional knapsack problem a fraction of an item may be taken, the knapsack will contain the fractional part of item 4 (20 kg out of 22 kg).
Total cost of the knapsack = 160 + (20/22) x 77 = 160 + 70 = 230 units
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Algorithm)
FRACTIONAL_KNAPSACK(v, w, M)
1. load = 0
2. i = 1
3. while load < M and i <= n
4.     if w[i] <= M - load
5.         take all of item i
6.     else take (M - load)/w[i] of item i
7.     add what was taken to load (load = load + w[i], or load = M if only a fraction was taken)
8.     i = i + 1
(Note: Assume that the items are already sorted in decreasing order of their profit/weight ratio)
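A short executable sketch of the same greedy procedure in Python (the function name and the (profit, weight) item format are illustrative choices, not from the slides):

def fractional_knapsack(items, M):
    # items: list of (profit, weight) pairs; M: knapsack capacity.
    # Step-02: sort by profit/weight ratio, highest first.
    order = sorted(items, key=lambda pw: pw[0] / pw[1], reverse=True)
    load, total_profit = 0, 0.0
    for profit, weight in order:
        if load >= M:
            break
        if weight <= M - load:               # take all of the item
            load += weight
            total_profit += profit
        else:                                # take only the fraction that fits
            total_profit += profit * (M - load) / weight
            load = M
    return total_profit

# Example 1 from the slides: capacity 60 kg, items given as (profit, weight).
items = [(30, 5), (40, 10), (45, 15), (77, 22), (90, 25)]
print(fractional_knapsack(items, 60))        # 230.0, matching the 230 units computed above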
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(Algorithm)
• The main time taking step is the sorting of all items in
decreasing order of their (profit / weight) ratio.
• If the items are already arranged in the required order,
then while loop takes 𝑂(𝑛) time.
• The average time complexity of Quick Sort is 𝑂(𝑛 log 𝑛).
• Therefore, total time taken including the sort is 𝑂(𝑛 log 𝑛).
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(self practice)
Example 2: For the given set of items and knapsack capacity = 60 kg,
find the optimal solution for the fractional knapsack problem making use
of greedy approach.

Item Weight Profit


A 1 5
B 3 9
C 2 4
D 2 8
Greedy Algorithm
• Problem 5: Knapsack Problem
Fractional Knapsack Problem(self practice)
Example 3: Find the optimal solution for the fractional knapsack
problem making use of greedy approach. Consider-

n=3
M = 20 kg
(w1, w2, w3) = (18, 15, 10)
(p1, p2, p3) = (25, 24, 15)
Design and Analysis of Algorithm

Greedy Methods
(Minimum Spanning Tree)

Lecture – 65-66
Overview
• A greedy algorithm always makes the choice that looks best at the moment (i.e., it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution).
• The objective of this section is to explore optimization problems that are solvable by greedy algorithms.
Greedy Algorithm
• In mathematics, computer science and
economics, an optimization problem is the
problem of finding the best solution from all
feasible solutions.
• Algorithms for optimization problems typically go
through a sequence of steps, with a set of
choices at each step.
• Many optimization problems can be solved using
a greedy approach.
• Greedy algorithms are simple and
straightforward.
Greedy Algorithm
• A greedy algorithm always makes the choice that
looks best at the moment.
• That is, it makes a locally optimal choice in the
hope that this choice will lead to a globally
optimal solution.
• Greedy algorithms do not always yield optimal
solutions, but for many problems they do.
• These algorithms are easy to invent, easy to implement, and in many cases provide optimal or near-optimal solutions.
Greedy Algorithm
• Application of Greedy Algorithm:
• A simple but nontrivial problem, the activity-
selection problem, for which a greedy algorithm
efficiently computes a solution.
• In combinatorics,(a branch of mathematics), a
‘matroid’ is a structure that abstracts and
generalizes the notion of linear independence in
vector spaces. Greedy algorithm always produces
an optimal solution for such problems. Scheduling
unit-time tasks with deadlines and penalties is an
example of such problem.
Greedy Algorithm
• Application of Greedy Algorithm:
• An important application of greedy techniques is
the design of data-compression codes (i.e.
Huffman code) .
• The greedy method is quite powerful and works
well for a wide range of problems. They are:
• Minimum-spanning-tree algorithms
(Example: Prims and Kruskal algorithm)
• Single Source Shortest Path.
(Example: Dijkstra's and Bellman ford algorithm)
Greedy Algorithm
• Application of Greedy Algorithm:
• A problem exhibits optimal substructure if an
optimal solution to the problem contains within it
optimal solutions to subproblems.
• This property is a key ingredient of assessing the
applicability of dynamic programming as well as
greedy algorithms.
• The subtleties between the above two techniques
are illustrated with the help of two variants of a
classical optimization problem known as knapsack
problem. These variants are:
• 0-1 knapsack problem (Dynamic Programming)
• Fractional knapsack problem (Greedy Algorithm)
Greedy Algorithm
• Problem 4: Minimum Spanning Tree
problem
• Spanning Tree
• A tree (i.e., connected, acyclic graph) which contains all
the vertices of the graph
• Minimum Spanning Tree
• Spanning tree with the minimum sum of weights
8 7 8 7
b c d b c d
4 9 4 9
2 2
a 11 i 14 e a i 4
e
4
7 6
8 10
h g f h g f
1 2 1 2
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Problem Definition
• A town has a set of houses and a set of roads.
• A road connects 2 and only 2 houses.
• A road connecting houses 𝑢 and 𝑣 has a repair cost
𝑤(𝑢, 𝑣)
• Goal:
• Repair enough roads such that
1. everyone stays connected: can reach every
house from all other houses, and
2. total repair cost is minimum.
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Graph model for MST
• Undirected graph 𝐺 = 𝑉, 𝐸
• Weight 𝑤(𝑢, 𝑣) on each edge (𝑢, 𝑣) ∈ 𝐸.
• Find T ⊆ E such that
1. T connects all vertices (T is a spanning tree), and
2. w(T) = Σ_{(u,v)∈T} w(u, v) is minimized.
• Properties of an MST:
• It has |V| − 1 edges.
• It has no cycles.
• It might not be unique
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Generic MST Algorithm
GENERIC-MST (𝐺, 𝑤)
𝐴 = ∅;
while A is not a spanning tree
find an edge (𝑢, 𝑣) that is safe for A
𝐴 = 𝐴 ∪ 𝑢, 𝑣
return A
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Generic MST Algorithm
GENERIC-MST (𝐺, 𝑤)
𝐴 = ∅;
while A is not a spanning tree
find an edge (𝑢, 𝑣) that is safe for A
𝐴 = 𝐴 ∪ 𝑢, 𝑣
return A
Use the loop invariant to show that this generic algorithm works.
• Initialization: The empty set trivially satisfies the loop invariant.
• Maintenance: Since we add only safe edges, A remains a subset of
some MST.
• Termination: All edges added to A are in an MST, so when we stop, A
is a spanning tree that is also an MST.
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem

• Kruskal’s Algorithm
• Concept and Examples

• Prim’s Algorithm
• Concept and Examples
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Kruskal’s Algorithm
• Kruskal’s Algorithm is a famous greedy
algorithm.
• It is used for finding the Minimum Spanning
Tree (MST) of a given graph.
• To apply Kruskal’s algorithm, the given graph
must be weighted, connected and undirected.
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Kruskal’s Algorithm
• Implementation of algorithm.
• The implementation of Kruskal’s Algorithm is explained in
the following steps-
• Step-01:
• Sort all the edges from low weight to high weight.
• Step-02:
• Take the edge with the lowest weight and use it to
connect the vertices of graph.
• If adding an edge creates a cycle, then reject that edge
and go for the next least weight edge.
• Step-03:
• Keep adding edges until all the vertices are connected
and a Minimum Spanning Tree (MST) is obtained.
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Kruskal’s Algorithm:
Example 1: Construct the minimum spanning tree (MST) for the given
graph using Kruskal’s Algorithm-

8 7
b c d
4 9
2
a 11 i 14 e
4
7 6
8 10
h g f
1 2
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Kruskal’s Algorithm:
Example 1:
Solution: Read the edges
8 7
b c d
4 9
2
a 11 i 14 e
4
7 6
8 10
h g f
1 2
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Kruskal’s Algorithm:
Example 1: Edge Weight
ab 4
Solution: Read the edges
bc 8
cd 7
8 7 de 9
b c d ef 10
4 9
2 df 14
11 cf 4
a i 4
14 e
ci 2
7 6
8 10 ih 7
h g f ig 6
1 2 gf 2
gh 1
ah 8
bh 11
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Kruskal’s Algorithm:
Example 1: Edge Weight
gh 1
Solution: Apply Step 1
ci 2
gf 2
8 7 ab 4
b c d cf 4
4 9
2 ig 6
11 cd 7
a i 4
14 e
ih 7
7 6
8 10 bc 8
h g f ah 8
1 2 de 9
ef 10
bh 11
df 14
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Kruskal’s Algorithm:
Example 1: Edge Weight
gh 1
Solution: Apply Step 2 and 3
ci 2
gf 2
8 7 ab 4
b c d cf 4
4 9
2 ig 6
11 cd 7
a i 4
14 e
ih 7
7 6
8 10 bc 8
h g f ah 8
1 2 de 9
ef 10
bh 11
df 14
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Kruskal’s Algorithm:
Example 1: Edge Weight
gh 1
Solution: Apply Step 2 and 3
ci 2
gf 2
8 7 ab 4
b c d cf 4
4 9
2 ig 6
11 cd 7
a i 4
14 e
ih 7
7 6
8 10 bc 8
h g f ah 8
1 2 de 9
ef 10
bh 11
Form a df 14
cycle
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Kruskal’s Algorithm
MST-KRUSKAL(G, w)
1  A ← Ø
2  for each vertex v ∈ V[G]                                        Ο(V)
3      do MAKE-SET(v)
4  sort the edges of E into nondecreasing order by weight w        Ο(E lg E)
5  for each edge (u, v) ∈ E, taken in nondecreasing order by weight
6      do if FIND-SET(u) ≠ FIND-SET(v)                             Ο(E)
7          then A ← A ∪ {(u, v)}
8               UNION(u, v)
9  return A

Hence the total time complexity is Ο(V) + Ο(E lg E) + Ο(E)
= Ο(V + E + E lg E) = Ο(E lg E)
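For reference, a compact Python sketch of MST-KRUSKAL using a simple disjoint-set (union-find); the edge list is the one read from Example 1, and the function and variable names are illustrative:

def kruskal(vertices, edges):
    # edges: list of (weight, u, v); returns (mst_edges, total_weight).
    parent = {v: v for v in vertices}

    def find(x):                              # FIND-SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):             # sort edges by nondecreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                          # reject edges that would form a cycle
            parent[ru] = rv                   # UNION
            mst.append((u, v, w))
            total += w
    return mst, total

# Edge list of Example 1 (vertices a..i).
E = [(4, 'a', 'b'), (8, 'a', 'h'), (8, 'b', 'c'), (11, 'b', 'h'), (7, 'c', 'd'),
     (4, 'c', 'f'), (2, 'c', 'i'), (9, 'd', 'e'), (14, 'd', 'f'), (10, 'e', 'f'),
     (2, 'f', 'g'), (6, 'g', 'i'), (1, 'g', 'h'), (7, 'h', 'i')]
tree, cost = kruskal('abcdefghi', E)
print(cost)                                   # 37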
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Kruskal’s Algorithm(Analysis)
• The running time of Kruskal's algorithm for a graph 𝐺 =
(𝑉, 𝐸) depends on the implementation of the disjoint-set
data structure.
• Initializing the set A in line 1 takes 𝑂(1) time, and the
time to sort the edges in line 4 is O(E lg E).
• The for loop of lines 5-8 performs O(E) FIND-SET and UNION operations on the
disjoint-set forest. Along with the |V| MAKE-SET operations, these take a total of
O(V + E) time.
Greedy Algorithm
• Problem 4: Minimum Spanning Tree
problem
• Kruskal’s Algorithm (Analysis)
• The total running time of Kruskal's algorithm is O(E lg E).
• Observing that |E| < |V|² (true for any simple graph),
• applying log on both sides ⟹ lg |E| = O(lg V).
• Hence the running time of Kruskal's algorithm can also be stated as O(E lg V).
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Kruskal’s Algorithm:
Example 2: Construct the minimum spanning tree (MST) for the given
graph using Kruskal’s Algorithm-

28 Self Practice
b c
10 14 16

a e d
24 18
25 12
f g
22
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem

• Kruskal’s Algorithm
• Concept and Examples

• Prim’s Algorithm
• Concept and Examples
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm
• Prim’s Algorithm is a famous greedy algorithm.
• It is used for finding the Minimum Spanning
Tree (MST) of a given graph.
• To apply Prim’s algorithm, the given graph must
be weighted, connected and undirected.
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm (Implementation)
• The implementation of Prim’s Algorithm is
explained in the following steps-
• Step-1:
• Randomly choose any vertex.
• The vertex connecting to the edge having least
weight is usually selected.
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm (Implementation)
• Step-2:
• Find all the edges that connect the tree to new
vertices.
• Find the least weight edge among those edges and
include it in the existing tree.
• If including that edge creates a cycle, then reject that
edge and look for the next least weight edge.
• Step-03:
• Keep repeating step-2 until all the vertices are
included and Minimum Spanning Tree (MST) is
obtained.
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm: (Algorithm)
MST-PRIM(G, w, r)
1 for each u ∈ V[G]
2 do key[u] ← ∞
3 𝜋[u] ← NIL
4 key[r] ← 0
5 Q ← V [G]
6 while Q ≠ Ø
7 do u ← EXTRACT-MIN(Q)
8 for each v ∈ Adj[u]
9 do if v ∈ Q and w(u, v) < key[v]
10 then π[v] ← u
11 key[v] ← w(u, v)
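A minimal Python sketch of MST-PRIM, using the standard-library binary heap in place of the Fibonacci heap assumed in the later analysis (the adjacency-list format and the names are illustrative):

import heapq

def prim(adj, r):
    # adj: dict vertex -> list of (weight, neighbour); r: root vertex.
    key = {u: float('inf') for u in adj}      # key[u] <- infinity
    pi = {u: None for u in adj}               # pi[u] <- NIL
    key[r] = 0
    in_mst = set()
    pq = [(0, r)]                             # min-priority queue of (key, vertex)
    while pq:
        k, u = heapq.heappop(pq)              # EXTRACT-MIN
        if u in in_mst:
            continue                          # stale queue entry; skip it
        in_mst.add(u)
        for w, v in adj[u]:
            if v not in in_mst and w < key[v]:
                key[v], pi[v] = w, u          # DECREASE-KEY, done lazily by pushing
                heapq.heappush(pq, (w, v))
    return sum(key.values()), pi

# Example 1 graph (undirected), built from its edge list; rooted at 'a'.
E = [(4, 'a', 'b'), (8, 'a', 'h'), (8, 'b', 'c'), (11, 'b', 'h'), (7, 'c', 'd'),
     (4, 'c', 'f'), (2, 'c', 'i'), (9, 'd', 'e'), (14, 'd', 'f'), (10, 'e', 'f'),
     (2, 'f', 'g'), (6, 'g', 'i'), (1, 'g', 'h'), (7, 'h', 'i')]
adj = {v: [] for v in 'abcdefghi'}
for w, u, v in E:
    adj[u].append((w, v))
    adj[v].append((w, u))
print(prim(adj, 'a')[0])                      # 37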
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm:
Example 1: Construct the minimum spanning tree (MST) for the given
graph using Prim’s Algorithm-

8 7
b c d
4 9
2
a 11 i 14 e
4
7 6
8 10
h g f
1 2
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm:
Example 1: Construct the minimum spanning tree (MST) for the given
graph using Prim’s Algorithm-

8 7
b c d
4 9
2
a 11 i 14 e
4
7 6
8 10
h g f
1 2

𝑽 a b c d e f g h i
key ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞
Π / / / / / / / / /
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm:
Example 1: Construct the minimum spanning tree (MST) for the given
graph using Prim’s Algorithm-

8 7
b c d Use step 1
4 9 Assume that ‘a’ is the root.
2
a 11 i 14 e
4
7 6
8 10
h g f Q a b c d e f g h i
1 2

𝑽 a b c d e f g h i
key 0 ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞
Π / / / / / / / / /
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm:
Example 1: Construct the minimum spanning tree (MST) for the given
graph using Prim’s Algorithm-

8 7
b c d Use step 1
4 9
2
a 11 i 14 e
4
7 6
8 10
h g f Q a b c d e f g h i
1 2

𝑽 a b c d e f g h i
key 0 4 ∞ ∞ ∞ ∞ ∞ 8 ∞
Π / a / / / / / a /
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm:
Example 1: Construct the minimum spanning tree (MST) for the given
graph using Prim’s Algorithm-

8 7
b c d Use step 2 and 3
4 9 until MST form
2
a 11 i 14 e
4
7 6
8 10
h g f Q a b c d e f g h i
1 2

𝑽 a b c d e f g h i
key 0 4 8 ∞ ∞ ∞ ∞ 8 ∞
Π / a b / / / / a /
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm:
Example 1: Construct the minimum spanning tree (MST) for the given
graph using Prim’s Algorithm-

8 7
b c d Use step 2 and 3
4 9 until MST form
2
a 11 i 14 e
4
7 6
8 10
h g f Q a b c d e f g h i
1 2

𝑽 a b c d e f g h i
key 0 4 8 7 ∞ 4 ∞ 8 2
Π / a b c / c / a c
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm:
Example 1: Construct the minimum spanning tree (MST) for the given
graph using Prim’s Algorithm-

8 7
b c d Use step 2 and 3
4 9 until MST form
2
a 11 i 14 e
4
7 6
8 10
h g f Q a b c d e f g h i
1 2

𝑽 a b c d e f g h i
key 0 4 8 7 ∞ 4 6 7 2
Π / a b c / c i i c
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm:
Example 1: Construct the minimum spanning tree (MST) for the given
graph using Prim’s Algorithm-

8 7
b c d Use step 2 and 3
4 9 until MST form
2
a 11 i 14 e
4
7 6
8 10
h g f Q a b c d e f g h i
1 2

𝑽 a b c d e f g h i
key 0 4 8 7 10 4 2 7 2
Π / a b c f c f i c
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm:
Example 1: Construct the minimum spanning tree (MST) for the given
graph using Prim’s Algorithm-

8 7
b c d Use step 2 and 3
4 9 until MST form
2
a 11 i 14 e
4
7 6
8 10
h g f Q a b c d e f g h i
1 2

𝑽 a b c d e f g h i
key 0 4 8 7 10 4 2 1 2
Π / a b c f c f g c
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm:
Example 1: Construct the minimum spanning tree (MST) for the given
graph using Prim’s Algorithm-

8 7
b c d Use step 2 and 3
4 9 until MST form
2
a 11 i 14 e
4
7 6
8 10
h g f Q a b c d e f g h i
1 2

𝑽 a b c d e f g h i
key 0 4 8 7 9 4 2 1 2
Π / a b c d c f g c
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm:
Example 1: Construct the minimum spanning tree (MST) for the given
graph using Prim’s Algorithm-
8 7
b c d
4 9 Use step 2 and 3
2 until MST form
a 11 i 14 e
4
7 6
8 10
h g f
1 2
Since all the vertices have been included in the MST, so stop the process.
Now, Cost of Minimum Spanning Tree
= Sum of all edge weights
= w[ab]+w[bc]+w[cd]+w[ci]+w[cf]+w[de]+w[gh]+w[fg]
= 4 + 8 + 7 + 2 + 4 + 9 + 2 + 1 = 37
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm: (Analysis)
MST-PRIM(G, w, r)
1 for each u ∈ V[G]
2 do key[u] ← ∞ Ο(𝑉)
3 𝜋[u] ← NIL
4 key[r] ← 0 Ο(1)
5 Q ← V [G] Ο(1) by the help of Fibonacci Heap
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm: (Analysis) Ο(𝑉 lg 𝑉)
MST-PRIM(G, w, r)
6 while Q ≠ Ø
7 do u ← EXTRACT-MIN(Q) Ο(lg 𝑉)
8 for each v ∈ Adj[u] Ο(𝑉)
9 do if v ∈ Q and w(u, v) < key[v]
10 then π[v] ← u Ο(𝐸)
11 key[v] ← w(u, v)
𝐷𝑒𝑐𝑟𝑒𝑎𝑠𝑒 𝐾𝑒𝑦𝑜𝑓 𝐹𝑖𝑏𝑜𝑛𝑎𝑐𝑐𝑖 𝐻𝑒𝑎𝑝 𝑤𝑖𝑙𝑙 𝑡𝑎𝑘𝑒 Ο(1)
Time Complexity of Prim's Algorithm
= Build-Min-Heap() + Extract-Min() + Decrease-Key()
= Ο(E) + Ο(V lg V) + Ο(E · 1)   {in the case of a sparse graph}
= Ο(E + V lg V)
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm: (Analysis) Ο(𝑉 lg 𝑉)
MST-PRIM(G, w, r)
6 while Q ≠ Ø
7 do u ← EXTRACT-MIN(Q) Ο(lg 𝑉)
8 for each v ∈ Adj[u] Ο(𝑉)
9 do if v ∈ Q and w(u, v) < key[v]
10 then π[v] ← u Ο(𝐸)
11 key[v] ← w(u, v)
𝐷𝑒𝑐𝑟𝑒𝑎𝑠𝑒 𝐾𝑒𝑦𝑜𝑓 𝐹𝑖𝑏𝑜𝑛𝑎𝑐𝑐𝑖 𝐻𝑒𝑎𝑝 𝑤𝑖𝑙𝑙 𝑡𝑎𝑘𝑒 Ο(1)
Time Complexity of Prim's Algorithm
= Build-Min-Heap() + Extract-Min() + Decrease-Key()
= Ο(V) + Ο(V lg V) + Ο(V² · 1)   {in the case of a dense graph (E = V²)}
= Ο(V² + V lg V) = Ο(V²)
Greedy Algorithm
• Problem 4: Minimum Spanning Tree problem
• Prim’s Algorithm:
Example 2 (Self Practice): Construct the minimum spanning tree (MST) for the given
graph using Prim’s Algorithm-
28
b c
10 16
14

a g d
24 18

25 12
f e
22
P, NP, NP-Hard & NP-complete
Problems

Lecture – 71-72

1
Objectives
• P, NP, NP-Hard and NP-Complete
• Solving 3-CNF Sat problem
• Discussion of Gate Questions

2
Types of Problems
• Tractable
• Intractable
• Decision
• Optimization
Tractable: problems that can be solved in a reasonable (polynomial) amount of time.
Intractable: problems that, as they grow large, we are unable to solve in a reasonable amount of time.
3
Tractability
• What constitutes reasonable time?
– Standard working definition: polynomial time; on an input of size n, the worst-case running time is O(n^k) for some constant k
– O(n^2), O(n^3), O(1), O(n lg n), O(2^n), O(n^n), O(n!)
– Polynomial time: O(n^2), O(n^3), O(1), O(n lg n)
– Not in polynomial time: O(2^n), O(n^n), O(n!)
• Are all problems solvable in polynomial
time?
– No(Turing’s “Halting Problem” is not
solvable by any computer, no matter how
much time is given.)
4
Optimization/Decision Problems
• Optimization Problems
– An optimization problem is one which asks,
“What is the optimal solution to problem X?”
– Examples:
• 0-1 Knapsack
• Fractional Knapsack
• Minimum Spanning Tree
• Decision Problems
– A decision problem is one with a yes/no answer
– Examples:
• Does a graph G have a MST of weight ≤ W?
5
Optimization/Decision Problems
• An optimization problem tries to find an
optimal solution
• A decision problem tries to answer a
yes/no question
• Many problems will have decision and
optimization versions
– Eg: Traveling salesman problem
• optimization: find hamiltonian
cycle of minimum weight
• decision: is there a hamiltonian cycle
of weight ≤ k
6
Till now, all the algorithms we have learned can be classified into the two categories below:
Polynomial-based Algorithms: Linear Search, Binary Search, Insertion Sort, Merge Sort, Matrix Multiplication, etc.
Exponential-based Algorithms: 0/1 Knapsack, Travelling Salesman, Sum of Subsets, N-Queen, Graph Coloring, Hamiltonian Cycle
P, NP, NP-Hard, NP-Complete
-Definitions

8
The Class P
P: the class of problems that have polynomial-time deterministic algorithms.
– That is, they are solvable in O(p(n)), where p(n) is a polynomial in n.
– A deterministic algorithm is (essentially) one that always computes the correct answer.
9
Sample Problems in P

• Fractional Knapsack
• MST (Minimum Spanning Tree)
• Sorting
• Others ?

10
The class NP
NP: the class of decision problems that are
solvable in polynomial time on a
nondeterministic machine (or with a
nondeterministic algorithm)
– (A deterministic computer is what we know)
– A nondeterministic computer is one that can
“guess” the right answer or solution
• Think of a nondeterministic computer as a
parallel machine that can freely spawn an
infinite number of processes
• Thus NP can also be thought of as the class of
problems “whose solutions can be
verified in polynomial time”
• Note that NP stands for
“Nondeterministic Polynomial-time”
11
Sample Problems in NP

• Fractional Knapsack
• MST
• Others?
– Traveling Salesman
– Graph Coloring
– Satisfiability (SAT)
• the problem of deciding whether a given
Boolean formula is satisfiable

12
P And NP Summary
• P = set of problems that can be solved
in polynomial time
– Examples: Fractional Knapsack, …
• NP = set of problems for which a solution can
be verified in polynomial time
– Examples: Fractional Knapsack,…, TSP,
CNF SAT, 3-CNF SAT
• Clearly P ⊆ NP
• Open question: Does P = NP? Or P ≠ NP ?
NP-hard
• What does NP-hard mean?
– A lot of times you can solve a problem by reducing it to
a different problem. I can reduce Problem B to
Problem A if, given a solution to Problem A, I can easily
construct a solution to Problem B. (In this case,
"easily" means "in polynomial time.“).
• A problem is NP-hard if all problems in NP
are polynomial time reducible to it, ...
• Ex:- Hamiltonian Cycle (HC)
Every problem in NP is reducible to HC in
polynomial time. Ex:- TSP is reducible to
HC.
(Figure: problem B is reduced to problem A)
NP-complete problems
• A problem is NP-complete if the problem is
both
– NP-hard, and
– NP.
Reduction

• A problem R can be reduced to another


problem Q if any instance of R can be
rephrased to an instance of Q, the
solution to which provides a solution to the
instance of R
– This rephrasing is called a transformation
• Intuitively: If R reduces in polynomial time
to Q, R is “no harder to solve” than Q
• Example: lcm(m, n) = (m * n) / gcd(m, n),
lcm(m,n) problem is reduced to gcd(m, n)
problem
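The lcm/gcd example written out as code, a trivial illustration of a reduction: an algorithm for gcd(m, n) immediately gives one for lcm(m, n).

from math import gcd

def lcm(m, n):
    # lcm is reduced to gcd: any solution to gcd(m, n) yields a solution to lcm(m, n).
    return (m * n) // gcd(m, n)

print(lcm(12, 18))   # 36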
NP-Hard and NP-Complete

• If R is polynomial-time reducible to Q,
we denote this R ≤𝑝 Q
• Definition of NP-Hard and NP-
Complete:
– If all problems R ∈ NP are polynomial-time
reducible to Q, then Q is NP-Hard
– We say Q is NP-Complete if Q is NP-Hard
and Q ∈ NP
• If R ≤𝑝 Q and R is NP-Hard, Q is
also NP-Hard
Relationship between P, NP,
NP-Hard and NP-Complete

18
Summary
• P is set of problems that can be solved by a
deterministic Turing machine in Polynomial
time.
• NP is the set of problems that can be solved by
a Non-deterministic Turing Machine in
Polynomial time. P is a subset of NP (any
problem that can be solved by a
deterministic machine in polynomial time
can also be solved by a non-deterministic
machine in polynomial time), but whether P = NP is an open question.

19
Summary
• Some problems can be translated into
one another in such a way that a fast
solution to one problem would
automatically give us a fast solution to
the other.
• There are some problems that every
single problem in NP can be translated
into, and a fast solution to such a
problem would automatically give us a
fast solution to every problem in NP.
This group of problems are known as
NP- Complete. Ex:- Clique Problem

20
Summary
• A problem is NP-hard if an algorithm
for solving it can be translated into one
for solving any problem in NP
(nondeterministic polynomial time).
NP-hard therefore means "at least as
hard as any NP problem", although it
might, in fact, be harder.

21
First NP-complete problem
(Circuit Satisfiability )
Problem definition
• Boolean combinational circuit
– Boolean combinational elements, wired
together
– Each element, inputs and outputs (binary)
– Limit the number of outputs to 1.
– Called logic gates: NOT gate, AND gate, OR
gate.
– truth table: gives the output for each setting of the inputs
– truth assignment: a set of boolean inputs
– satisfying assignment: a truth assignment causing the output to be 1
First NP-complete problem
(Circuit Satisfiability )
• Circuit satisfying problem: given a boolean
combinational circuit composed of AND,
OR, and NOT gates, is it satisfiable?
• CIRCUIT-SAT={<C>: C is a satisfiable
boolean circuit}
• Implication: in the area of computer-aided
hardware optimization, if a subcircuit
always produces 0, then the subcircuit can
be replaced by a simpler subcircuit that
omits all gates and just output a 0.
23
First NP-complete problem
(Circuit Satisfiability )
Two instances of circuit satisfiability problems

Figure: Two instances of the circuit-satisfiability problem


a) The assignment (𝑥1 = 1, 𝑥2 = 1, 𝑥3 = 0) to the input of this circuit
causes the output of the circuit to be 1. The circuit is therefore
satisfiable.
b) No assignment to the input of the circuit can cause the output of
the circuit to be 1. The circuit is therefore unsatisfiable.
Solving circuit-satisfiability
problem
• Intuitive solution:
– for each possible assignment, check
whether it generates 1.
– suppose the number of inputs is k; then the total number of possible assignments is 2^k.
So the running time is Ω(2^k). When the size of the problem is Θ(k), the running time is not polynomial.

25
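This brute-force check can be sketched as a search over all 2^k assignments; representing the circuit as a Python callable is an assumption made only for this illustration:

from itertools import product

def is_satisfiable(circuit, k):
    # circuit: a function of k boolean inputs returning 0 or 1.
    # Tries all 2**k assignments, so the running time is Omega(2**k).
    for assignment in product([0, 1], repeat=k):
        if circuit(*assignment):
            return True, assignment           # a satisfying assignment
    return False, None

# Circuit (a) of the figure is satisfied by (x1, x2, x3) = (1, 1, 0);
# here the interface is illustrated with a small stand-in formula.
print(is_satisfiable(lambda x1, x2, x3: x1 and x2 and (not x3), 3))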
Example of reduction of CIRCUIT-SAT to SAT

Φ = x10 ∧ (x10 ↔ (x7 ∧ x8 ∧ x9))
        ∧ (x9 ↔ (x6 ∨ x7))
        ∧ (x8 ↔ (x5 ∨ x6))
        ∧ (x7 ↔ (x1 ∧ x2 ∧ x4))
        ∧ (x6 ↔ ¬x4)
        ∧ (x5 ↔ (x1 ∨ x2))
        ∧ (x4 ↔ ¬x3)

REDUCTION: Φ = x10 = x7 ∧ x8 ∧ x9 = (x1 ∧ x2 ∧ x4) ∧ (x5 ∨ x6) ∧ (x6 ∨ x7)
           = (x1 ∧ x2 ∧ x4) ∧ ((x1 ∨ x2) ∨ ¬x4) ∧ (¬x4 ∨ (x1 ∧ x2 ∧ x4) … )
26
Conversion to 3-CNF
• The result is that in Φ′, each clause has at most three literals.
• Change each clause into conjunctive normal form as follows:
  – Construct a truth table (small: at most 8 rows by 4 columns).
  – Write the disjunctive normal form for all truth-table rows evaluating to 0.
  – Use DeMorgan's laws to change it into CNF.
• The resulting Φ″ is in CNF, but each clause has 3 or fewer literals.
• Change each 1- or 2-literal clause into a 3-literal clause as follows:
  – If a clause has one literal l, change it to (l ∨ p ∨ q) ∧ (l ∨ p ∨ ¬q) ∧ (l ∨ ¬p ∨ q) ∧ (l ∨ ¬p ∨ ¬q).
  – If a clause has two literals (l1 ∨ l2), change it to (l1 ∨ l2 ∨ p) ∧ (l1 ∨ l2 ∨ ¬p).
27
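A small sketch of the final padding step, assuming clauses are represented as tuples of literal strings with '~' marking negation (this representation and the helper name are illustrative):

def pad_to_three(clause, p='p', q='q'):
    # Expand a 1- or 2-literal clause into equivalent 3-literal clauses
    # using fresh auxiliary variables p and q.
    if len(clause) == 1:
        (l,) = clause
        return [(l, p, q), (l, p, '~' + q), (l, '~' + p, q), (l, '~' + p, '~' + q)]
    if len(clause) == 2:
        l1, l2 = clause
        return [(l1, l2, p), (l1, l2, '~' + p)]
    return [clause]                           # already has 3 literals

print(pad_to_three(('x1',)))
print(pad_to_three(('x1', '~x2')))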
Example of a polynomial-time reduction:

Now reduce the 3CNF-satisfiability problem


to the CLIQUE problem (i.e. 3-CNF-SAT ≤P CLIQUE)

28
3CNF formula:

clause
literal
(𝑥1 ∨ 𝑥2 ∨ 𝑥3 ) ∧ (𝑥3 ∨ 𝑥5 ∨ 𝑥6 ) ∧ (𝑥3 ∨ 𝑥6 ∨ 𝑥4 ) ∧ (𝑥4 ∨ 𝑥5 ∨ 𝑥6 )

A 3 CNF has k clauses, and each clause has three literals.

Language:

3CNF-SAT ={w : w is a satisfiable 3CNF formula}

29
What is a Clique?

A clique V’ in an undirected graph G = (V, E) is a subset of the vertices, i.e. V’ ⊆ V,
each pair of which is connected by an edge in E.
The size of a clique is the number of vertices it contains.

Definition of the clique problem: decide whether a given graph contains a clique of a
given size, i.e. a set of nodes in which each node is connected to every other node of that set.
30
Definition of Clique Problem : In a clique problem
each node is connected to each other nodes of that
graph.
Language:
CLIQUE = {< G, k > :graph G contains a k-clique}
A 5-clique in graph G

31
Transform formula to graph.
Example:
(x1 ∨ x2 ∨ x4) ∧ (x1 ∨ x2 ∨ x4) ∧ (x1 ∨ x2 ∨ x3) ∧ (x2 ∨ x3 ∨ x4)
Clause 2
Create Nodes: x1 x2 x4

x1 x1

Clause 3
Clause 1

x2 x2

x4 x3

x2 x3 x4

Clause 4 33
(x1 ∨ x2 ∨ x4) ∧ (x1 ∨ x2 ∨ x4) ∧ (x1 ∨ x2 ∨ x3) ∧ (x2 ∨ x3 ∨ x4)

x1 x2 x4

x1 x1

x2 x2

x4 x3

x2 x3 x4

Add a link from a literal ξ to a literal in every other clause, except the complement ¬ξ.
34
(x1 ∨ x2 ∨ x4) ∧ (x1 ∨ x2 ∨ x4) ∧ (x1 ∨ x2 ∨ x3) ∧ (x2 ∨ x3 ∨ x4)

x1 x2 x4

x1 x1

x2 x2

x4 x3

x2 x3 x4

Resulting Graph
35
(x1 ∨ x2 ∨ x4) ∧ (x1 ∨ x2 ∨ x4) ∧ (x1 ∨ x2 ∨ x3) ∧ (x2 ∨ x3 ∨ x4)

x1 = 1
x2 = 0 x1 x2 x4
x3 = 0
x1 x1
x4 = 1

x2 x2

x4 x3

x2 x3 x4

The formula is satisfiable if and only if the graph has a 4-clique.
The objective is to find a clique of size 4, where 4 is the number of clauses. (End of proof)
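The transformation just described can be sketched directly: one vertex per literal occurrence, and an edge between literals of different clauses unless one is the complement of the other. The '~' literal encoding and the small example formula are illustrative assumptions:

def cnf3_to_clique_graph(clauses):
    # clauses: list of 3-tuples of literal strings, e.g. ('x1', '~x2', 'x4').
    # Returns the vertices (clause_index, literal) and edge set of graph G;
    # the formula is satisfiable iff G has a clique of size len(clauses).
    def complement(l):
        return l[1:] if l.startswith('~') else '~' + l

    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = set()
    for i, li in vertices:
        for j, lj in vertices:
            if i < j and lj != complement(li):   # different clauses, consistent literals
                edges.add(((i, li), (j, lj)))
    return vertices, edges

# A small stand-in 4-clause formula.
phi = [('x1', '~x2', '~x4'), ('~x1', 'x2', '~x4'),
       ('x1', 'x2', 'x3'), ('~x2', '~x3', 'x4')]
V, Edg = cnf3_to_clique_graph(phi)
print(len(V), len(Edg))                          # 12 vertices and the edges of G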
The figure outlines the structure of the NP-completeness proofs obtained by using the reduction methodology:
CIRCUIT-SAT → SAT → 3-CNF-SAT
3-CNF-SAT → CLIQUE → VERTEX-COVER → HAM-CYCLE → TSP
3-CNF-SAT → SUBSET-SUM
Theorem:
If: a. Language A is NP-complete
b. Language B is in NP
c. A is polynomial time reducible to B
Then: B is NP-complete

38
Corollary: CLIQUE is NP-complete

Proof:
a. 3CNF-SAT is NP-complete
b. CLIQUE is in NP
c. 3CNF-SAT is polynomial reducible to CLIQUE
(shown earlier)

Apply previous theorem with


A=3CNF-SAT and B=CLIQUE

39
Previous Gate Questions

40
Q. No. 1
Q. No. 2
Q. No. 3
Q. No. 4
Q. No. 5
Q. No. 6
Q. No. 7
Q. No. 8
Q. No. 10

There exist: search problem
Explanation: The problem of deciding whether a Hamiltonian Cycle exists or not is both NP-Hard and NP-Complete.
Finding a Hamiltonian cycle in a graph G = (V, E) with |V| divisible by 3 is also NP-Hard.

49
Q. No. 11
Thank You

51
