
Foundation of Algorithms Assignment 4

Chinmay Kulkarni (ck1166)

2.a. Computing the out-degree of every vertex takes O(V + E) time, and computing the in-degree of every vertex also takes O(V + E) time, where V is the number of vertices and E is the number of edges.
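The two computations can be sketched as follows (a minimal sketch, assuming the graph is given as an adjacency dict mapping each vertex to a list of its out-neighbors): out-degrees come straight from the list lengths, and in-degrees need one pass over all edges, so both are O(V + E).

```python
from collections import defaultdict

def degrees(adj):
    """Return (out_degree, in_degree) dicts for a directed graph."""
    out_deg = {u: len(nbrs) for u, nbrs in adj.items()}  # O(V) lists, O(E) entries total
    in_deg = defaultdict(int)
    for u in adj:
        in_deg[u] += 0            # ensure every vertex appears, even with in-degree 0
        for v in adj[u]:          # one pass over all E edges
            in_deg[v] += 1
    return out_deg, dict(in_deg)

adj = {"a": ["b", "c"], "b": ["c"], "c": []}
out_deg, in_deg = degrees(adj)
# out_deg == {"a": 2, "b": 1, "c": 0}; in_deg == {"a": 0, "b": 1, "c": 2}
```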

2.b. With a hash table, the expected time to look up whether an edge exists is O(1). The disadvantage of the hash table is that it requires more space, i.e. memory, than a linked list. This can be addressed by using a binary search tree instead of a hash table, but the disadvantage of the binary search tree is that an edge lookup then takes O(log n) time.
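The hash-table variant can be sketched like this (a minimal sketch; the Graph class and its methods are hypothetical names, not from the assignment): each adjacency list is stored as a hash set, giving expected O(1) edge lookups at the cost of extra memory over a plain linked list of neighbors.

```python
class Graph:
    def __init__(self):
        self.adj = {}                       # vertex -> hash set of out-neighbors

    def add_edge(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set())       # make sure v exists as a vertex too

    def has_edge(self, u, v):
        return v in self.adj.get(u, set())  # expected O(1) hash lookup

g = Graph()
g.add_edge("u", "v")
# g.has_edge("u", "v") is True; g.has_edge("v", "u") is False
```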

3.a.

If we take vertex u as the source node and perform BFS, each vertex v is assigned d = δ(u, v), its distance from u, and π, its predecessor in the breadth-first tree; for the source itself, u.d = 0 and u.π = NIL.

3.b. In the code, the initial color of every node is WHITE. When a node is enqueued, its color is changed to GRAY to show that it is in the queue, and once all of its adjacent vertices have been visited, the node is dequeued and its color is changed to BLACK. However, the code never compares GRAY against BLACK; it only checks whether a node is WHITE. It is therefore redundant to have two colors marking a visited node: lines 5 and 14 can be removed, and a single "visited" color suffices to produce the same result.
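The point above can be illustrated with a minimal BFS sketch that uses a single "visited" marker in place of the GRAY/BLACK distinction (here the d dictionary itself doubles as that marker); it produces the same d values as the textbook BFS.

```python
from collections import deque

def bfs(adj, s):
    d = {s: 0}                      # presence in d marks a node as visited
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:          # the only test the algorithm really needs
                d[v] = d[u] + 1
                q.append(v)
    return d

adj = {"u": ["v", "w"], "v": ["x"], "w": [], "x": []}
# bfs(adj, "u") == {"u": 0, "v": 1, "w": 1, "x": 2}
```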

3.c. The value u.d assigned to each vertex u is independent of the order within the adjacency lists, because u.d = δ(s, u), the shortest-path distance from the source, and BFS computes this distance regardless of the order in which neighbors are examined. The breadth-first tree, however, can depend on the ordering. In Fig. 22.3, suppose that in the adjacency list of w, x appears ahead of t, and in the adjacency list of x, u appears ahead of y; then BFS reaches u through the (x, u) edge instead of the (t, u) edge, so u's predecessor changes and the breadth-first tree is different. Therefore the tree produced by BFS depends on the ordering within the adjacency lists, even though the d values do not.

4.
DFS_STACK(G, s)
    for each vertex u of G other than s
        u.color = WHITE                 // mark every vertex unvisited
        u.pred = NIL                    // no predecessor yet
    time = 0
    S = EMPTY                           // empty stack
    time = time + 1
    s.discovery_time = time
    s.color = GRAY
    PUSH(S, s)
    while S != EMPTY                    // loop until the stack is empty
        t = TOP(S)
        if some vertex v in t's adjacency list is WHITE   // an unvisited neighbor exists
            v.pred = t
            time = time + 1
            v.discovery_time = time
            v.color = GRAY
            PUSH(S, v)
        else                            // all of t's neighbors are visited
            t = POP(S)
            time = time + 1
            t.finish_time = time
            t.color = BLACK
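The pseudocode above can be sketched as a runnable function (a minimal sketch, assuming the graph is given as an adjacency dict); it records discovery and finish times exactly as the stack-based DFS does.

```python
def dfs_stack(adj, s):
    color = {u: "WHITE" for u in adj}
    disc, fin = {}, {}
    time = 1
    color[s] = "GRAY"
    disc[s] = time
    stack = [s]
    while stack:
        t = stack[-1]                       # TOP(S)
        white = next((v for v in adj[t] if color[v] == "WHITE"), None)
        if white is not None:               # descend into an unvisited neighbor
            time += 1
            color[white] = "GRAY"
            disc[white] = time
            stack.append(white)
        else:                               # all neighbors visited: finish t
            stack.pop()
            time += 1
            fin[t] = time
            color[t] = "BLACK"
    return disc, fin

adj = {"s": ["a", "b"], "a": ["b"], "b": []}
disc, fin = dfs_stack(adj, "s")
# disc == {"s": 1, "a": 2, "b": 3}; fin == {"b": 4, "a": 5, "s": 6}
```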

5.a. A1 = 5 x 10, A2 = 10 x 3, A3 = 3 x 12, A4 = 12 x 5, A5 = 5 x 50, A6 = 50 x 6

So the dimension sequence is p0 = 5, p1 = 10, p2 = 3, p3 = 12, p4 = 5, p5 = 50, p6 = 6.

Let m be a 6 x 6 table whose entry m[i,j] is the minimum number of scalar multiplications needed to compute Ai...Aj (only entries with i <= j are used), and assign the diagonal entries 0:

m[1,1] = m[2,2] = m[3,3] = m[4,4] = m[5,5] = m[6,6] = 0

m[1,2] = p0 x p1 x p2 = 5 x 10 x 3 = 150

m[2,3] = p1 x p2 x p3 = 10 x 3 x 12 = 360

m[3,4] = p2 x p3 x p4 = 3 x 12 x 5 = 180

m[4,5] = p3 x p4 x p5 = 12 x 5 x 50 = 3000

m[5,6] = p4 x p5 x p6 = 5 x 50 x 6 = 1500

After these length-2 chains, the table is:

        A1    A2    A3    A4    A5    A6
  A1     0   150
  A2           0   360
  A3                 0   180
  A4                       0  3000
  A5                             0  1500
  A6                                   0

The remaining entries are filled in with the recurrence

m[i,j] = min over i <= k < j of { m[i,k] + m[k+1,j] + p(i-1) x p(k) x p(j) },  if i < j

When we calculate all remaining m[i,j], the completed table is:

        A1    A2        A3        A4        A5        A6
  A1     0   150 (1)   330 (2)   405 (2)  1655 (4)  2010 (2)
  A2           0       360 (2)   330 (2)  2430 (2)  1950 (2)
  A3                     0       180 (3)   930 (4)  1770 (4)
  A4                               0      3000 (4)  1860 (4)
  A5                                         0      1500 (5)
  A6                                                   0

Entries in the table are in the form: number of multiplications (optimal value of k).

Therefore the minimum cost is m[1,6] = 2010 scalar multiplications, and the optimal parenthesization is ((A1 A2)((A3 A4)(A5 A6))).
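The table can be checked with the standard matrix-chain-order dynamic program (a minimal sketch for the dimension sequence p = (5, 10, 3, 12, 5, 50, 6); tables are 1-indexed to match the notation above).

```python
import sys

def matrix_chain_order(p):
    """CLRS-style DP: m[i][j] = min scalar multiplications, s[i][j] = split point."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = sys.maxsize
            for k in range(i, j):             # try every split point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s

def parenthesize(s, i, j):
    """Rebuild the optimal parenthesization from the split table."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + parenthesize(s, i, k) + " " + parenthesize(s, k + 1, j) + ")"

m, s = matrix_chain_order([5, 10, 3, 12, 5, 50, 6])
# m[1][6] == 2010; parenthesize(s, 1, 6) == "((A1 A2) ((A3 A4) (A5 A6)))"
```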

5.b. Claim: a full parenthesization of an n-element expression has exactly n - 1 pairs of parentheses.

Base case:
For n = 1, the number of pairs is (n - 1) = 0. This is true because a single element requires no parentheses.

Induction hypothesis:
Suppose that for all n <= k, a full parenthesization of an n-element expression has exactly n - 1 pairs of parentheses.

Inductive step: for n = k + 1 elements, i.e. A1, A2, A3, ..., A(k+1), suppose the outermost split is after the j-th element, so the two subproducts are:

(A1 A2 A3 ... Aj)(A(j+1) A(j+2) ... A(k+1))
       s1                   s2

s1 has j elements, so by the hypothesis it requires j - 1 pairs. s2 has (k + 1) - (j + 1) + 1 = k - j + 1 elements, so it requires k - j pairs. Together these give (j - 1) + (k - j) = k - 1 pairs, and one more pair is required for the outermost parenthesization, so the total number of pairs is (k - 1) + 1 = k.

Hence k + 1 elements require k pairs of parentheses, which completes the induction.
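A quick sanity check of the claim: build a full parenthesization of n elements with a random split at every level and count its '(' characters; the count should always be n - 1 regardless of where the splits fall.

```python
import random

def full_paren(i, j):
    """Return a full parenthesization of A_i ... A_j with a random split point."""
    if i == j:
        return f"A{i}"
    k = random.randint(i, j - 1)
    return "(" + full_paren(i, k) + full_paren(k + 1, j) + ")"

for n in range(1, 10):
    expr = full_paren(1, n)
    assert expr.count("(") == n - 1   # exactly n - 1 pairs, for every random split
```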

6.a. Recursive matrix-chain multiplication is more efficient than enumerating all possible parenthesizations and choosing the best one. Enumeration must examine every complete parenthesization, and the number of parenthesizations of n matrices grows as the Catalan numbers, i.e. Ω(4^n / n^(3/2)). The recursive algorithm, in contrast, at each split only combines optimal solutions to the two subchains rather than examining every complete parenthesization, so the number of cases it considers is smaller. Therefore, recursive matrix-chain multiplication has the edge over enumeration.
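To illustrate how fast enumeration blows up, the count of full parenthesizations P(n) satisfies P(1) = 1 and P(n) = sum over k of P(k) P(n - k), which are the Catalan numbers (a minimal memoized sketch):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n):
    """Number of full parenthesizations of a chain of n matrices."""
    if n == 1:
        return 1
    return sum(P(k) * P(n - k) for k in range(1, n))   # split after the k-th matrix

# P(2) == 1, P(3) == 2, P(4) == 5, P(10) == 4862
```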

6.b.

Memoization fails to speed up merge sort because merge sort has no overlapping subproblems: each recursive call sorts a distinct subarray, so no subproblem is ever solved twice and a memo table would have nothing to reuse.

6.c. The recurrence for the maximum number of scalar multiplications is the minimum-cost recurrence with min replaced by max:

m[i,j] = 0,  if i = j
m[i,j] = max over i <= k < j of { m[i,k] + m[k+1,j] + p(i-1) x p(k) x p(j) },  if i < j

A maximum-cost solution for the chain Ai...Aj is built from maximum-cost solutions to the subchains Ai...Ak and A(k+1)...Aj, so the problem exhibits optimal substructure and admits a dynamic-programming solution.
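The corresponding DP is the standard matrix-chain table with min replaced by max (a minimal sketch, 1-indexed, computing the costliest full parenthesization):

```python
def matrix_chain_max(p):
    """m[i][j] = maximum scalar multiplications for the chain A_i ... A_j."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = max(                    # max instead of min
                m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                for k in range(i, j)
            )
    return m

m = matrix_chain_max([5, 10, 3, 12, 5, 50, 6])
# m[1][6] is the maximum possible number of scalar multiplications for this chain
```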
