
AMFAI Weekly Solution Set: Week 1

August 11, 2024

1. Here, f(n) = n log(n) + n log(log(n)) and g(n) = n log(n). We need to
show that ∃ c1, c2, n0 such that, ∀n ≥ n0,

c1 · n log(n) ≤ n log(n) + n log(log(n)) ≤ c2 · n log(n)

Consider the left-hand side,

c1 · n log(n) ≤ n log(n) + n log(log(n))

Dividing by n on both sides,

c1 · log(n) ≤ log(n) + log(log(n))

Dividing further by log(n),

∴ c1 ≤ 1 + log(log(n))/log(n)    (1)

Similarly, from the right-hand side,

c2 ≥ 1 + log(log(n))/log(n)    (2)

For all n ≥ 2, any 0 < c1 ≤ 1 and any c2 ≥ 2 satisfy both inequalities 1 and 2,
because the second term on the right-hand side is nonnegative for n ≥ 2, never
exceeds 1, and diminishes to 0 as n grows. For example, with c1 = 1/2, c2 = 2
and n = 2, we get 1/2 ≤ 1 ≤ 2.
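
As a quick numerical sanity check of these constants (a minimal sketch in Python, assuming base-2 logarithms; the names f and g are just illustrative):

import math

def f(n):
    return n * math.log2(n) + n * math.log2(math.log2(n))

def g(n):
    return n * math.log2(n)

# With c1 = 1/2 and c2 = 2 the sandwich holds at every tested n >= 2.
for n in [2, 4, 16, 1024, 10**6]:
    assert 0.5 * g(n) <= f(n) <= 2 * g(n)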
2. Write the first step,

T(n, n) = n + n + T(2n/3, n/2)

Expand the recurrence term again,

T(n, n) = n + n + 2n/3 + n/2 + T(4n/9, n/4)

Thus, the general form of the recursive term looks like,

T((2/3)^k · n, (1/2)^k · n)

It is easy to see that (1/2)^k · n decreases much faster than (2/3)^k · n, meaning
that the base case y ≤ 50 will be reached first. Thus,

(1/2^k) · n ≤ 50 =⇒ k ≥ log(n/50)

Finally, we can write the general form as,

T(n, n) = Σ_{k=0}^{log(n/50)} [ (2/3)^k · n + (1/2)^k · n ] + Θ(50)

        = Σ_{k=0}^{log(n/50)} (2/3)^k · n + Σ_{k=0}^{log(n/50)} (1/2)^k · n + Θ(50)    (1)

Now, Σ_{k=0}^{log(n/50)} (2/3)^k · n is a GP series with a = n and r = 2/3. Thus, the
summation can be written as,

Σ_{k=0}^{log(n/50)} (2/3)^k · n = n · (1 − (2/3)^{log(n/50)}) / (1 − 2/3) ≈ 3n

Since (1/2)^k < (2/3)^k, the second summation in equation 1 is bounded by the first
(it is at most 2n), so it does not affect the order of growth. Finally, T(n, n) = Θ(3n) = Θ(n).
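
The following small Python sketch checks this conclusion numerically; it assumes the recurrence being solved is T(x, y) = x + y + T(2x/3, y/2), with the recursion stopping once y ≤ 50, which is how the expansion above was read:

def T(x, y):
    if y <= 50:
        return 50                       # Theta(50) base-case cost
    return x + y + T(2 * x / 3, y / 2)  # per-call cost plus the recursive call

# The ratio T(n, n) / n settles near a constant (about 5), consistent with Theta(n).
for n in [10**3, 10**5, 10**7]:
    print(n, T(n, n) / n)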
3. Every time that an array gets full, a new array is created and all elements
are copied. Since the array size is doubled each time, we have 2^k = n =⇒
k = log(n). Let the total element-assignment cost be c = 0. When |A_0| = 1, we
can assign only x_1, leading to c = 1. When x_2 arrives, x_1 needs to be copied
to A_1 and x_2 is added to the array, leading to c = 1 + 2 = 3. Thus, the number
of element assignments per array is simply equal to the size of the array.
Therefore, we have,

Σ_{j=0}^{k} 2^j = 2^{k+1} − 1 = Θ(n)
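
A short simulation of the doubling scheme (the variable names below are hypothetical; only the double-and-copy policy is taken from the argument above) shows that the assignment count stays within a small constant multiple of n:

def doubling_cost(n):
    capacity, size, assignments = 1, 0, 0
    for _ in range(n):
        if size == capacity:      # array is full: allocate double the space
            assignments += size   # copy all existing elements
            capacity *= 2
        assignments += 1          # write the newly arrived element
        size += 1
    return assignments

# The ratio approaches 2, i.e. the total number of assignments is Theta(n).
for n in [2**10, 2**15, 2**20]:
    print(n, doubling_cost(n) / n)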

4. Given that n = c^k =⇒ k = log_c(n). We first expand the given recurrence.

f(n) = a · f(n/c) + b · n^x    (1)

f(n/c) = a · f(n/c^2) + b · (n/c)^x    (2)

f(n/c^2) = a · f(n/c^3) + b · (n/c^2)^x    (3)

Replacing Eqn. 3 in 2 and Eqn. 2 in 1, we get,

f(n) = a^3 · f(n/c^3) + b · n^x · (1 + a/c^x + a^2/c^{2x})    (4)

Based on the above pattern, we can write out the general form of the recurrence as,

f(n) = a^k · f(n/c^k) + b · n^x · Σ_{j=0}^{k−1} (a/c^x)^j

     = a^{log_c(n)} · d + b · n^x · Σ_{j=0}^{log_c(n)−1} (a/c^x)^j    (5)

where d = f(1) denotes the base-case value.

If a = c^x, then every term of the summation equals 1 and there are log_c(n) of them, so,

f(n) = a^{log_c(n)} · d + b · n^x · log_c(n)

If a ≠ c^x, then we first solve the summation term in Eqn. 5, which is a
GP series with the first term being 1, the common ratio being a/c^x and the
number of elements being k.

Σ_{j=0}^{k−1} (a/c^x)^j = (1 − (a/c^x)^k) / (1 − a/c^x) = (1 − a^k/(c^k)^x) / (1 − a/c^x) = −c^x · (1 − a^k/n^x) / (a − c^x)    (6)

Replacing the above equation in Eqn. 5, we get,

f(n) = a^k · d − b · c^x · n^x · (1 − a^k/n^x) / (a − c^x)

     = a^k · d − b · c^x · n^x / (a − c^x) + b · c^x · a^k / (a − c^x)

     = (d + b · c^x/(a − c^x)) · a^{log_c(n)} − (b · c^x/(a − c^x)) · n^x

     = (d + b · c^x/(a − c^x)) · n^{log_c(a)} − (b · c^x/(a − c^x)) · n^x
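
As a sanity check of the a ≠ c^x closed form, the sketch below compares it against direct recursion for one illustrative parameter choice (a = 4, c = 2, x = 1, b = 3, d = f(1) = 7; these particular values are my own and not part of the problem):

import math

A, C, X, B, D = 4, 2, 1, 3, 7

def f_rec(n):
    return D if n == 1 else A * f_rec(n // C) + B * n**X

def f_closed(n):
    k = round(math.log(n, C))
    coeff = B * C**X / (A - C**X)
    return (D + coeff) * A**k - coeff * n**X

# Both computations agree on powers of c, e.g. 616 at n = 8.
for n in [2**3, 2**6, 2**10]:
    print(n, f_rec(n), round(f_closed(n)))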

5. To prove that max(f (n), g(n)) = Θ(f (n) + g(n)) for asymptotically non-
negative functions f (n) and g(n), we need to show two things:
(a) max(f (n), g(n)) = O(f (n) + g(n))
(b) max(f (n), g(n)) = Ω(f (n) + g(n))
Once both are established, we can conclude that max(f (n), g(n)) = Θ(f (n)+
g(n)).
1. Prove max(f (n), g(n)) = O(f (n) + g(n))

By definition, max(f(n), g(n)) is the larger of the two functions. Therefore, we have:
max(f (n), g(n)) ≤ f (n) + g(n)
To show max(f (n), g(n)) = O(f (n) + g(n)), we need to find constants
c > 0 and n0 such that for all n ≥ n0 :
max(f (n), g(n)) ≤ c(f (n) + g(n))

Since max(f (n), g(n)) ≤ f (n) + g(n), we can choose c = 1. Thus, we have:
max(f (n), g(n)) ≤ 1 · (f (n) + g(n))

Therefore, max(f (n), g(n)) = O(f (n) + g(n)).

2. Prove max(f (n), g(n)) = Ω(f (n) + g(n))


To show max(f (n), g(n)) = Ω(f (n) + g(n)), we need to find constants
c > 0 and n0 such that for all n ≥ n0 :
max(f (n), g(n)) ≥ c(f (n) + g(n))

Let’s use the following facts:


f (n) ≤ max(f (n), g(n))
g(n) ≤ max(f (n), g(n))

By adding these two inequalities, we get:


f (n) + g(n) ≤ max(f (n), g(n)) + max(f (n), g(n))
f (n) + g(n) ≤ 2 max(f (n), g(n))
Dividing both sides by 2:

(1/2) · (f(n) + g(n)) ≤ max(f(n), g(n))

This shows that max(f(n), g(n)) ≥ (1/2) · (f(n) + g(n)), so we can choose c = 1/2.
Therefore:

max(f(n), g(n)) ≥ (1/2) · (f(n) + g(n))
Thus, max(f (n), g(n)) = Ω(f (n) + g(n)).
Since we have shown both:
max(f (n), g(n)) = O(f (n) + g(n))
and
max(f (n), g(n)) = Ω(f (n) + g(n))
we conclude that:
max(f (n), g(n)) = Θ(f (n) + g(n))
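
For intuition, a tiny numeric illustration of the sandwich (the pair f(n) = n² and g(n) = 10 · n · log n is an arbitrary choice of mine, not from the problem):

import math

# max(f, g) is sandwiched between (f + g)/2 and f + g at every tested point,
# matching the constants c = 1/2 and c = 1 used in the proof.
for n in [2, 10, 100, 10**4]:
    f, g = n**2, 10 * n * math.log2(n)
    assert 0.5 * (f + g) <= max(f, g) <= f + g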

6. We aim to prove that for any two functions f (n) and g(n), we have f (n) =
Θ(g(n)) if and only if f (n) = O(g(n)) and f (n) = Ω(g(n)).
To prove this, we will show both directions:
1. If f (n) = Θ(g(n)), then f (n) = O(g(n)) and f (n) = Ω(g(n)):

Assume f(n) = Θ(g(n)). By definition, this means there exist positive constants c1, c2, and n0 such that for all n ≥ n0:

c1 · g(n) ≤ f (n) ≤ c2 · g(n).

To show f(n) = O(g(n)):
- From the inequality f(n) ≤ c2 · g(n), it follows that f(n) does not grow faster than g(n) up to a constant factor.
- Therefore, f(n) = O(g(n)) with constant c = c2.

To show f(n) = Ω(g(n)):
- From the inequality f(n) ≥ c1 · g(n), it follows that f(n) does not grow slower than g(n) up to a constant factor.
- Therefore, f(n) = Ω(g(n)) with constant c = c1.
2. If f(n) = O(g(n)) and f(n) = Ω(g(n)), then f(n) = Θ(g(n)):
Assume f(n) = O(g(n)) and f(n) = Ω(g(n)). This means:
- There exist positive constants c1 and n0 such that for all n ≥ n0:

f(n) ≥ c1 · g(n).

- There exist positive constants c2 and n1 such that for all n ≥ n1:

f(n) ≤ c2 · g(n).

Let n′0 = max(n0 , n1 ). For n ≥ n′0 , we then have:

c1 · g(n) ≤ f (n) ≤ c2 · g(n).

Thus, f(n) is bounded both above and below by constant multiples of g(n), which means f(n) = Θ(g(n)) with constants c1 and c2.
In conclusion, f (n) = Θ(g(n)) if and only if f (n) = O(g(n)) and f (n) =
Ω(g(n)).
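
As a concrete illustration (this particular pair is my own example, not from the problem): take f(n) = 3n² + 2n and g(n) = n². For all n ≥ 1 we have 3n² ≤ 3n² + 2n ≤ 5n², so f(n) = Ω(g(n)) with c1 = 3 and f(n) = O(g(n)) with c2 = 5; by the equivalence just proved, f(n) = Θ(g(n)) with the same pair of constants.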
7. We aim to prove that if f = Ω(g) then f ∉ o(g).
Proof by Contradiction:

Assume, for the sake of contradiction, that f = Ω(g) and also f = o(g).
Definition of Ω(g)
By definition of Ω(g), there exist constants c > 0 and n_Ω such that for all n > n_Ω,

f(n) ≥ c · g(n).    (7)

Definition of o(g)
On the other hand, by definition of o(g), for any positive constant c, there exists n_o such that for all n > n_o,

f(n) < c · g(n).    (8)

Consider n which is greater than both n_Ω and n_o (e.g., n can be chosen as max(n_Ω, n_o) + 1).
Then, according to the definition of Ω(g), we have:
f (n) ≥ c · g(n). (9)

And according to the definition of o(g), we have:


f (n) < c · g(n). (10)

However, the two inequalities (9) and (10) cannot both hold for the same n. This is a contradiction.
Therefore, our initial assumption that f = Ω(g) and also f = o(g) must
be false. Hence, if f = Ω(g), then f ∉ o(g).
8. • We can use the limit definitions of o(g(n)) and ω(g(n)) to draw the same
conclusion.

f(n) ∈ o(g(n)) means lim_{n→∞} f(n)/g(n) = 0

and

f(n) ∈ ω(g(n)) means lim_{n→∞} f(n)/g(n) = ∞

The same limit cannot be both 0 and ∞ as n approaches ∞.

Hence, no such f(n) exists, i.e., the intersection is indeed the empty set.

Or

• Function in o(g(n)):
If f (n) ∈ o(g(n)), then for every positive constant c1 > 0 and for
sufficiently large n, we have:
0 ≤ f (n) < c1 · g(n)

• Function in ω(g(n)):
If f (n) ∈ ω(g(n)), then for every positive constant c2 > 0 and for
sufficiently large n, we have:
0 ≤ c2 · g(n) < f (n)

• Combine the Results:
Suppose there is a function f(n) that belongs to both o(g(n)) and ω(g(n)). Both definitions hold for every positive constant, so in particular we may take c1 = c2 = c. Then, for all sufficiently large n, we would need:

c · g(n) < f(n) < c · g(n)

This is a contradiction: f(n) cannot be simultaneously strictly greater than and strictly less than c · g(n) for the same n.
Therefore, no such function f (n) can exist, which implies:

o(g(n)) ∩ ω(g(n)) = ∅

This completes the proof that the intersection of o(g(n)) and ω(g(n)) is
indeed the empty set.

9. (a) T(n/2) + c
10. The value of j gets updated as j = 2, 4, 16, · · ·. The general form of this
series is 2^(2^p). Thus, the number of terms in the series is given by 2^(2^p) = n =⇒
p = log(log(n)). The outer loop runs n times. Thus, the total time
complexity can be written as Θ(n log(log(n))).
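
A minimal sketch of the loop pattern being analysed (the surrounding loop body is assumed; only the update j = j · j matters for the count):

import math

def inner_iterations(n):
    j, count = 2, 0
    while j < n:        # j is squared on every pass: 2, 4, 16, 256, ...
        j = j * j
        count += 1
    return count

# The inner count tracks log(log(n)); an outer loop of n passes then gives
# a total of Theta(n log(log(n))) work.
for n in [2**4, 2**16, 2**64]:
    print(n, inner_iterations(n), round(math.log2(math.log2(n))))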
