
HOMEWORK 2

1. According to the Master Theorem:

If T(n) = aT(n/b) + O(n^d) for some constants a > 0, b > 1 and d >= 0, then:

         { O(n^d)          if d > log_b a
  T(n) = { O(n^d log n)    if d = log_b a
         { O(n^(log_b a))  if d < log_b a

For Algorithm A:

The recurrence relation is T(n) = 5T(n/2) + O(n).

Here d = 1, a = 5, b = 2, and log_b a = log_2 5 ≈ 2.32.
As log_b a > d, the running time is O(n^(log_2 5)) = O(n^2.32).

For Algorithm B:
The recurrence relation is T(n) = 2T(n-1) + O(1). Unrolling it:
T(n) = 2T(n-1) + O(1)
     = 2(2T(n-2) + O(1)) + O(1) = 2^2 T(n-2) + 3·O(1)
     = 2^3 T(n-3) + 7·O(1)
     = ...
     = 2^k T(n-k) + (2^k - 1)·O(1)
Putting k = n-1:
T(n) = 2^(n-1) T(1) + (2^(n-1) - 1)·O(1),
which is O(2^n).
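As a quick sanity check, the unrolled sum can be verified numerically; this small sketch (my own, not part of the assignment) counts every O(1) term as one unit of work:

```python
# Count the units of O(1) work done by the recurrence
# T(n) = 2T(n-1) + O(1) with T(1) = 1, treating each O(1) term
# as one unit. The closed form from the unrolling is 2^n - 1.
def work(n):
    if n == 1:
        return 1                    # base case: one unit of work
    return 2 * work(n - 1) + 1      # two subproblems plus O(1) overhead

for n in range(1, 15):
    assert work(n) == 2 ** n - 1    # matches the unrolled sum
```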

For Algorithm C:

The recurrence relation is T(n) = 9T(n/3) + O(n^2).

Here d = 2, a = 9 and b = 3, so log_b a = log_3 9 = 2.
As d = log_b a, the running time is O(n^2 log n).

Thus, as Algorithm C has the smallest running time (O(n^2 log n) grows more slowly than O(n^2.32) and O(2^n)), we should pick Algorithm C.
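To make the comparison concrete, here is a rough numeric check (my own, ignoring constant factors) of the A and C bounds at a moderately large n; B's 2^n is astronomically larger, so only A and C are worth comparing:

```python
import math

# Compare the dominant terms of the bounds for Algorithms A and C
# at n = 10^5, ignoring constant factors.
n = 10 ** 5
t_a = n ** math.log2(5)          # Algorithm A: n^2.32...
t_c = n ** 2 * math.log2(n)      # Algorithm C: n^2 log n
assert t_c < t_a                 # C's bound is smaller
```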

2. Removing Duplicates:
We can modify the merge function in merge sort so that it does not add duplicates. This is the original merge function (taken from the textbook by Dasgupta):

function merge(x[1...k], y[1...l])
    if k = 0: return y[1...l]
    if l = 0: return x[1...k]
    if x[1] <= y[1]:
        return x[1] o merge(x[2...k], y[1...l])
    else:
        return y[1] o merge(x[1...k], y[2...l])

where o denotes concatenation.

To remove duplicates, we modify it so that duplicate entries are not copied:

function merge(x[1...k], y[1...l])
    if k = 0: return y[1...l]
    if l = 0: return x[1...k]
    if x[1] < y[1]:
        return x[1] o merge(x[2...k], y[1...l])
    else if x[1] > y[1]:
        return y[1] o merge(x[1...k], y[2...l])
    else:  [x[1] = y[1]: since both are equal we keep only one of them
            and advance the pointer by one in both x and y]
        return x[1] o merge(x[2...k], y[2...l])

The complexity is the same as merge sort, which is O(n log n).
Another solution is to merge sort the array first and then remove duplicates with one linear scan; this also takes O(n log n) overall.
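A runnable version of the modified merge, sketched here iteratively in Python (the function names are mine, not the textbook's); the three-way comparison matches the pseudocode above:

```python
# Merge two sorted lists while skipping duplicate values.
def merge_dedup(x, y):
    out, i, j = [], 0, 0
    while i < len(x) and j < len(y):
        if x[i] < y[j]:
            out.append(x[i]); i += 1
        elif x[i] > y[j]:
            out.append(y[j]); j += 1
        else:                       # equal: keep one copy, advance both
            out.append(x[i]); i += 1; j += 1
    out.extend(x[i:])               # leftovers are already duplicate-free
    out.extend(y[j:])
    return out

# Merge sort built on the deduplicating merge: recursion removes
# duplicates within each half, merge_dedup removes them across halves.
def mergesort_dedup(a):
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    return merge_dedup(mergesort_dedup(a[:mid]), mergesort_dedup(a[mid:]))
```

For example, mergesort_dedup([3, 1, 3, 2, 1]) yields the sorted, duplicate-free list [1, 2, 3].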

3. Let the days be split into two ranges R1 and R2, where

R1 = 1, 2, ..., n/2
R2 = n/2+1, ..., n

The problem is to find the maximum of p(j) - p(i) with i < j. The best pair (i, j) lies either entirely in R1, entirely in R2, or straddles the halves with i in R1 and j in R2. The three cases with their running times are:
- Both i and j in R1 – solved recursively in time T(n/2)
- Both in R2 – solved recursively in time T(n/2)
- i in R1 and j in R2. In this case, p(j) - p(i) is maximized simply by taking the maximum p(j) over j in R2 and the minimum p(i) over i in R1 – time = time for finding the min and max = O(n)
Hence the total running time satisfies
T(n) = 2T(n/2) + O(n).
By the Master Theorem, the running time is O(n log n).
(Ref: Jon Kleinberg, Éva Tardos, Algorithm Design)
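The divide-and-conquer above can be sketched in Python as follows (the function name is mine); it returns the maximum p(j) - p(i), or 0 if no positive gain exists:

```python
# Maximum p[j] - p[i] over i < j, via the three cases above.
def max_gain(p):
    if len(p) < 2:
        return 0
    mid = len(p) // 2
    left, right = p[:mid], p[mid:]
    # Case 3: take the minimum over R1 and the maximum over R2.
    cross = max(right) - min(left)
    # Cases 1 and 2: best pair entirely inside one half.
    return max(max_gain(left), max_gain(right), cross)

# e.g. prices [5, 1, 4, 2, 6]: buy at 1, sell at 6, gain 5
assert max_gain([5, 1, 4, 2, 6]) == 5
```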

4. As the query returns the kth smallest element, we can set k to n/2 to get the median element of each database:
k1 = k2 = n/2

We find the two medians, m1 and m2, of the two databases d1 and d2 respectively:
m1 = query(k1, d1)
m2 = query(k2, d2)

The combined median must lie between min(m1, m2) and max(m1, m2). We discard the rest of the elements and repeat this process, re-querying with updated positions each time. Since the size of the candidate range is divided by 2 on every iteration, the loop runs log2 n times:

for i = 1 to log2 n
    if m2 > m1:
        k1 = k1 + n/2^i
        k2 = k2 - n/2^i
    else if m1 > m2:
        k1 = k1 - n/2^i
        k2 = k2 + n/2^i
    m1 = query(k1, d1)
    m2 = query(k2, d2)
return min(m1, m2)

As the candidate range is halved every time, the recurrence relation is:
T(n) = T(n/2) + O(1)
By the Master Theorem with a = 1, b = 2 and d = 0, we have log_b a = log_2 1 = 0 = d, so the second case applies.

Hence the running time is O(log n).
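The loop above is loose about step sizes and termination, so here is a tested variant of the same halving idea (my own formulation, not the assignment's exact pseudocode). query(k, d) is simulated by indexing a sorted list, but the search only ever inspects O(log n) positions:

```python
# Median (n-th smallest of the 2n combined elements) of two equal-size
# databases, by binary-searching how many elements come from each one.
def median_of_two(d1, d2):
    a, b = sorted(d1), sorted(d2)
    n = len(a)
    lo, hi = 0, n
    while lo <= hi:
        k1 = (lo + hi) // 2              # take k1 elements from a ...
        k2 = n - k1                      # ... and k2 from b
        a_left = a[k1 - 1] if k1 > 0 else float("-inf")
        a_right = a[k1] if k1 < n else float("inf")
        b_left = b[k2 - 1] if k2 > 0 else float("-inf")
        b_right = b[k2] if k2 < n else float("inf")
        if a_left <= b_right and b_left <= a_right:
            return max(a_left, b_left)   # valid split: median found
        if a_left > b_right:
            hi = k1 - 1                  # took too many from a
        else:
            lo = k1 + 1                  # took too few from a
```

For example, with d1 = [1, 3, 5, 7] and d2 = [2, 4, 6, 8] the 4th smallest of the eight elements is 4.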

5. (a) Let A = [a b; c d]. Then

A^2 = [a b; c d][a b; c d] = [a^2 + bc    b(a+d)]
                             [c(a+d)     cb + d^2]

As we can see, only 5 multiplications are required to compute A squared: a^2, d^2, bc, b(a+d) and c(a+d).
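A quick numeric check (mine, not part of the assignment) that five multiplications suffice:

```python
# Square a 2x2 matrix [a b; c d] with exactly 5 multiplications and
# compare against the direct 8-multiplication computation.
def square_5mul(a, b, c, d):
    p1, p2, p3 = a * a, d * d, b * c   # 3 multiplications
    s = a + d
    p4, p5 = b * s, c * s              # 2 more -- 5 in total
    return (p1 + p3, p4, p5, p3 + p2)  # (a^2+bc, b(a+d), c(a+d), cb+d^2)

def square_direct(a, b, c, d):
    return (a*a + b*c, a*b + b*d, c*a + d*c, c*b + d*d)

assert square_5mul(1, 2, 3, 4) == square_direct(1, 2, 3, 4)
```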

(b) Now let X be a 2n x 2n matrix comprised of four n x n blocks A, B, C, D:

X = [A B; C D]

X^2 = [A B; C D][A B; C D] = [A^2 + BC    AB + BD]
                             [CA + DC    CB + D^2]

Here AB + BD ≠ B(A + D) and CA + DC ≠ C(A + D), since matrix multiplication does not commute; likewise BC ≠ CB in general.

So there are still 7 subproblems and not 5, and thus it will take the same time as Strassen's algorithm, which is O(n^(log_2 7)) and not O(n^(log_2 5)).

(c) Let A and B be two n x n matrices, and let X be the 2n x 2n matrix

X = [0 A; B 0]

X^2 = [0 A; B 0][0 A; B 0] = [AB   0]
                             [0   BA]

Given that an n x n matrix can be squared in time O(n^c), X can be squared in time O((2n)^c) = O(2^c · n^c) = O(n^c), since c is a constant.
Hence the product AB can be read off the top-left block of X^2, so it can be computed in time O(n^c). (Ref: L. Trevisan & P. Raghavendra DIS 02)
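A small numeric illustration of the reduction (my own sketch): embed A and B into X = [0 A; B 0], square X, and read AB off the top-left block:

```python
# Naive matrix product, used here both to "square" X and as the
# reference answer for AB.
def matmul(P, Q):
    n, m, r = len(P), len(Q), len(Q[0])
    return [[sum(P[i][k] * Q[k][j] for k in range(m)) for j in range(r)]
            for i in range(n)]

# Build the 2n x 2n matrix X = [0 A; B 0] from n x n matrices A and B.
def embed(A, B):
    n = len(A)
    X = [[0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            X[i][n + j] = A[i][j]      # top-right block = A
            X[n + i][j] = B[i][j]      # bottom-left block = B
    return X

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
X2 = matmul(embed(A, B), embed(A, B))  # square X
top_left = [row[:2] for row in X2[:2]]
assert top_left == matmul(A, B)        # AB appears in the top-left block
```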
