01 Sol
1. Here, f(n) = n log(n) + n log(log(n)) and g(n) = n log(n). We need to show that ∃ c1, c2, n0 such that, ∀n ≥ n0,
c1 · g(n) ≤ f(n) ≤ c2 · g(n).
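One possible choice of constants (a sketch, taking logarithms base 2 so that log(log(n)) ≥ 0 for n ≥ 2): since log(log(n)) ≤ log(n) for all n ≥ 2, we get
n log(n) ≤ n log(n) + n log(log(n)) ≤ 2 n log(n),
so c1 = 1, c2 = 2, and n0 = 2 witness f(n) = Θ(g(n)).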
It is easy to see that $\frac{1}{2^k} n$ decreases much faster than $\left(\frac{2}{3}\right)^k n$, meaning that the base case $y \leq 50$ will be reached first. Thus,
$$\frac{1}{2^k} n \leq 50 \implies k \geq \log\left(\frac{n}{50}\right)$$
Finally, we can write the general form as,
$$T(n, n) = \sum_{k=0}^{\log(n/50)} \left[ \left(\frac{2}{3}\right)^k n + \left(\frac{1}{2}\right)^k n \right] + \Theta(50)$$
$$= \sum_{k=0}^{\log(n/50)} \left(\frac{2}{3}\right)^k n + \sum_{k=0}^{\log(n/50)} \left(\frac{1}{2}\right)^k n + \Theta(50) \tag{1}$$
Now, $\sum_{k=0}^{\log(n/50)} \left(\frac{2}{3}\right)^k n$ is a GP series with $a = n$ and $r = 2/3$. Thus, the summation can be written as,
$$\sum_{k=0}^{\log(n/50)} \left(\frac{2}{3}\right)^k n = n \cdot \frac{1 - \left(\frac{2}{3}\right)^{\log(n/50)}}{1 - \frac{2}{3}} \approx 3n$$
Since $\left(\frac{1}{2}\right)^k < \left(\frac{2}{3}\right)^k$, the second summation in Equation (1) is dominated by the first (it is similarly bounded by $2n$), so we do not need to analyze it separately. Finally, $T(n, n) = \Theta(3n) = \Theta(n)$.
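As a quick numerical sanity check of Equation (1) (a sketch; the level count log(n/50) is taken base 2 here, matching the halving of the second argument), both geometric sums stay within a constant factor of n:

import math

def level_sums(n):
    # number of levels until the base case y <= 50 is reached
    K = int(math.log2(n / 50))
    s1 = sum((2 / 3) ** k * n for k in range(K + 1))   # first sum in (1)
    s2 = sum((1 / 2) ** k * n for k in range(K + 1))   # second sum in (1)
    return s1, s2

for n in (10**3, 10**5, 10**7):
    s1, s2 = level_sums(n)
    print(n, round(s1 / n, 2), round(s2 / n, 2))       # ratios approach 3 and 2

Both ratios are bounded by constants (3 and 2 respectively), consistent with T(n, n) = Θ(n).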
3. Every time that an array gets full, a new array is created and all elements are copied. Since the array size is doubled each time, we have $2^k = n \implies k = \log(n)$. Let the total element assignment cost be c = 0. When |A0| = 1, we can assign only x1, leading to c = 1. When x2 arrives, x1 needs to be copied to A1 and x2 is added to the array, leading to c = 1 + 2 = 3. Thus, the number of element assignments per array is simply equal to the size of the array. Therefore, we have,
$$\sum_{j=0}^{k} 2^j = 2^{k+1} - 1 = \Theta(n)$$
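A small simulation of this counting argument (a sketch; the variable names are illustrative) confirms that the total number of element assignments grows linearly in n:

def doubling_append_cost(n):
    # capacity starts at 1; every time the array fills up, its size doubles
    capacity, size, assignments = 1, 0, 0
    for _ in range(n):
        if size == capacity:         # array is full: allocate and copy
            capacity *= 2
            assignments += size      # one assignment per copied element
        size += 1
        assignments += 1             # write the newly appended element
    return assignments

for n in (10, 100, 1000, 10000):
    print(n, doubling_append_cost(n), round(doubling_append_cost(n) / n, 2))

The cost per element stays below a small constant, so the amortized cost of an append is Θ(1) and the total is Θ(n).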
Replacing Eqn. 3 in 2 and Eqn. 2 in 1, we get,
$$f(n) = a^3 f\left(\frac{n}{c^3}\right) + b n^x \left(1 + \frac{a}{c^x} + \frac{a^2}{c^{2x}}\right) \tag{4}$$
Based on the above pattern, we can write out the general form of the
recurrence as,
$$f(n) = a^k f\left(\frac{n}{c^k}\right) + b n^x \sum_{j=0}^{k-1} \left(\frac{a}{c^x}\right)^j$$
$$= a^{\log_c(n)} d + b n^x \sum_{j=0}^{\log_c(n)-1} \left(\frac{a}{c^x}\right)^j \tag{5}$$
If $a = c^x$, then every term of the sum equals 1, so the summation contributes $\log_c(n)$ and $f(n) = d\, n^x + b\, n^x \log_c(n) = \Theta(n^x \log(n))$.
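A quick numerical check of the unrolled form (5) against the recurrence f(n) = a·f(n/c) + b·n^x (a sketch; the parameter values and the base case f(1) = d are illustrative assumptions, not taken from the problem):

import math

a, b, c, x, d = 2, 3.0, 2, 1, 1.0   # here a = c^x, the case discussed above

def f_rec(n):
    # direct evaluation of the recurrence with base case f(1) = d
    if n <= 1:
        return d
    return a * f_rec(n // c) + b * n ** x

def f_closed(n):
    # the unrolled form (5), valid when n is a power of c
    k = round(math.log(n, c))
    return a ** k * d + b * n ** x * sum((a / c ** x) ** j for j in range(k))

for n in (2 ** 4, 2 ** 8, 2 ** 12):
    print(n, f_rec(n), f_closed(n))   # the two values agree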
5. To prove that max(f(n), g(n)) = Θ(f(n) + g(n)) for asymptotically non-negative functions f(n) and g(n), we need to show two things:
(a) max(f(n), g(n)) = O(f(n) + g(n))
(b) max(f(n), g(n)) = Ω(f(n) + g(n))
Once both are established, we can conclude that max(f(n), g(n)) = Θ(f(n) + g(n)).
1. Prove max(f(n), g(n)) = O(f(n) + g(n))
By definition, max(f(n), g(n)) is the larger of the two functions. Therefore, we have:
max(f(n), g(n)) ≤ f(n) + g(n)
To show max(f(n), g(n)) = O(f(n) + g(n)), we need to find constants c > 0 and n0 such that for all n ≥ n0:
max(f(n), g(n)) ≤ c(f(n) + g(n))
Since max(f(n), g(n)) ≤ f(n) + g(n), we can choose c = 1. Thus, we have:
max(f(n), g(n)) ≤ 1 · (f(n) + g(n))
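2. Prove max(f(n), g(n)) = Ω(f(n) + g(n)) (a brief sketch of part (b)): since max(f(n), g(n)) is at least as large as each of f(n) and g(n), we have 2 · max(f(n), g(n)) ≥ f(n) + g(n), i.e., max(f(n), g(n)) ≥ (1/2)(f(n) + g(n)). Choosing c = 1/2 satisfies the definition of Ω, and combining (a) and (b) gives max(f(n), g(n)) = Θ(f(n) + g(n)).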
6. We aim to prove that for any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
To prove this, we will show both directions:
1. If f(n) = Θ(g(n)), then f(n) = O(g(n)) and f(n) = Ω(g(n)):
By the definition of Θ(g(n)), there exist constants c1, c2 > 0 and n0 such that for all n ≥ n0,
c1 · g(n) ≤ f(n) ≤ c2 · g(n).
The left inequality, f(n) ≥ c1 · g(n), is exactly the definition of f(n) = Ω(g(n)), and the right inequality, f(n) ≤ c2 · g(n), is exactly the definition of f(n) = O(g(n)).
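2. If f(n) = O(g(n)) and f(n) = Ω(g(n)), then f(n) = Θ(g(n)) (a brief sketch of the converse): there exist constants c2 > 0 and n1 with f(n) ≤ c2 · g(n) for all n ≥ n1, and c1 > 0 and n2 with f(n) ≥ c1 · g(n) for all n ≥ n2. Taking n0 = max(n1, n2), both bounds hold for all n ≥ n0, giving c1 · g(n) ≤ f(n) ≤ c2 · g(n), which is precisely the definition of f(n) = Θ(g(n)).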
Assume, for the sake of contradiction, that f = Ω(g) and also f = o(g).
Definition of Ω(g)
By definition of Ω(g), there exist constants c > 0 and nΩ such that for all
n > nΩ ,
f (n) ≥ c · g(n). (7)
Definition of o(g)
On the other hand, by definition of o(g), for any positive constant c, there exists n_o such that for all n > n_o,
f(n) < c · g(n). (8)
However, inequalities (7) and (8) cannot both hold for the same n: applying (8) with the same constant c as in (7), both would have to hold for every n > max(nΩ, n_o), which is impossible. This is a contradiction.
Therefore, our initial assumption that f = Ω(g) and also f = o(g) must be false. Hence, if f = Ω(g), then f ∉ o(g).
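As a concrete illustration, take f(n) = g(n) = n. Then f = Ω(g) (with c = 1), and indeed f ∉ o(g): for any c ≤ 1, the inequality f(n) < c · g(n) fails for all n ≥ 1.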
8. • We can use the limit definitions of o(g(n)) and ω(g(n)) to draw the same conclusion:
$$f(n) \in o(g(n)) \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0 \qquad \text{and} \qquad f(n) \in \omega(g(n)) \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = \infty$$
The same ratio cannot tend to both 0 and ∞, so no function can lie in both sets.
Alternatively, using the constant-based definitions:
• Function in o(g(n)):
If f(n) ∈ o(g(n)), then for every positive constant c1 > 0 and for sufficiently large n, we have:
0 ≤ f(n) < c1 · g(n)
• Function in ω(g(n)):
If f(n) ∈ ω(g(n)), then for every positive constant c2 > 0 and for sufficiently large n, we have:
0 ≤ c2 · g(n) < f(n)
• Combine the Results:
Suppose there is a function f(n) that belongs to both o(g(n)) and ω(g(n)). Then, for every pair of constants c1 > 0 and c2 > 0 and all sufficiently large n, we would need both f(n) < c1 · g(n) and f(n) > c2 · g(n). Choosing c1 = c2 = c makes these requirements contradictory: f(n) < c · g(n) < f(n). Hence no such function exists, and
o(g(n)) ∩ ω(g(n)) = ∅
This completes the proof that the intersection of o(g(n)) and ω(g(n)) is
indeed the empty set.
9. (a) T(n) = T(n/2) + c
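A brief check of this recurrence (a sketch, assuming a constant-cost base case): unrolling gives T(n) = T(n/2^k) + k·c, and the base case is reached when 2^k = n, i.e., after k = log(n) halvings, so T(n) = Θ(log(n)).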
10. The value of j gets updated as j = 2, 4, 16, · · · . The general form of this series is $2^{2^p}$. Thus, the number of terms in the series is given by $2^{2^p} = n \implies p = \log(\log(n))$. The outer loop runs n times. Thus, the total time complexity can be written as Θ(n log(log(n))).
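A small simulation (a sketch; the loop structure, with j starting at 2 and being squared each iteration until it reaches n, is assumed from the description above) shows the inner count growing like log(log(n)):

import math

def inner_iterations(n):
    # count how many times j can be squared before reaching n
    j, count = 2, 0
    while j < n:
        j = j * j            # j takes the values 2, 4, 16, 256, ...
        count += 1
    return count

for n in (10**3, 10**6, 10**12):
    print(n, inner_iterations(n), round(math.log2(math.log2(n)), 2))

The count tracks log(log(n)) up to a small additive constant; multiplied by the n iterations of the outer loop, this gives Θ(n log(log(n))).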