
TIME COMPLEXITY

Jorge Luis Castillo Orduz


Algorithms
November 11, 2021

1 Cormen, Leiserson, Rivest and Stein


Exercise 1.2-2. Suppose we are comparing implementations of insertion sort and merge
sort on the same machine. For inputs of size n, insertion sort runs in 8n^2 steps, while merge
sort runs in 64n lg n steps. For which values of n does insertion sort beat merge sort?

Answer. There are many ways to find the answer, but personally I prefer Excel. I
counted the number of steps of each algorithm for different values of n; the results are shown
in Figure 1.

Figure 1: Number of steps for each algorithm (insertion sort and merge sort).

I highlighted in green the cell with the lowest value in each row. As you can see, the
relation changes at n = 44, so insertion sort beats merge sort for 1 < n < 44.
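The spreadsheet comparison can also be reproduced with a short script. Here is a minimal Python sketch, using only the step counts stated in the exercise:

```python
import math

def insertion_steps(n):
    return 8 * n * n              # insertion sort: 8n^2 steps

def merge_steps(n):
    return 64 * n * math.log2(n)  # merge sort: 64n lg n steps

# values of n (up to 100) where insertion sort takes strictly fewer steps
wins = [n for n in range(2, 101) if insertion_steps(n) < merge_steps(n)]
print(min(wins), max(wins))  # 2 43
```

This confirms the table: insertion sort wins exactly for 2 ≤ n ≤ 43, i.e. for 1 < n < 44.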

Exercise 1.2-3. What is the smallest value of n such that an algorithm whose running
time is 100n^2 runs faster than an algorithm whose running time is 2^n on the same machine?

Answer. I used the same technique as in the previous exercise. To make things easier,
I label X = 100n^2 and Y = 2^n. We can see the results of the calculation in the figure
below:

Figure 2: Number of steps for each algorithm (X and Y).

We can now easily see that n = 15 is the smallest value for which algorithm X runs
faster than algorithm Y.
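The same crossover can be found programmatically; a small Python sketch:

```python
# find the smallest n for which 100n^2 < 2^n
n = 1
while 100 * n * n >= 2 ** n:
    n += 1
print(n)  # 15
```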

Problem 3-1. Asymptotic behavior of polynomials

Let

p(n) = Σ_{i=0}^{d} a_i n^i = a_d n^d + a_{d−1} n^{d−1} + · · · + a_1 n + a_0,

where a_d > 0, be a degree-d polynomial in n, and let k be a constant.

Use the definitions of the asymptotic notations to prove the following properties.

Answer:

• Asymptotic Upper Bound

To show that p(n) = O(n^k) when k ≥ d, we have to show there exist constants c, n0 > 0
such that 0 ≤ p(n) ≤ c · n^k for all n ≥ n0. For n ≥ 1,

p(n) = Σ_{i=0}^{d} a_i n^i = n^k · Σ_{i=0}^{d} a_i n^{i−k} ≤ n^k · Σ_{i=0}^{d} a_i.

The last step follows from the fact that, as k ≥ d, we have (i − k) ≤ 0 for all possible
values of i, i.e. every n^{i−k} is less than or equal to 1 for all n ≥ 1.

So we can say that 0 ≤ p(n) ≤ c · n^k, where c = Σ_{i=0}^{d} a_i and n0 = 1.

Hence, p(n) = O(n^k).
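As a numeric sanity check of this bound, here is a small Python sketch using a hypothetical example polynomial (p(n) = 3n^3 + 2n + 5, so d = 3, taking k = d = 3):

```python
def p(n):
    # hypothetical degree-3 polynomial: a_3 = 3, a_1 = 2, a_0 = 5
    return 3 * n ** 3 + 2 * n + 5

c = 3 + 2 + 5  # c = sum of the coefficients, as in the proof
k = 3
# the bound 0 <= p(n) <= c * n^k should hold for all n >= n0 = 1
assert all(0 <= p(n) <= c * n ** k for n in range(1, 1000))
print("bound verified for 1 <= n < 1000")
```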

• Asymptotic Lower Bound

To show that p(n) = Ω(nk ) when k ≤ d, we have to show there exists constants
c, n0 > 0 such that 0 ≤ c · nk ≤ p(n) for all n ≥ n0 .

Assuming the lower-order coefficients are nonnegative, as above, p(n) ≥ a_d n^d ≥ a_d n^k
for all n ≥ 1, since k ≤ d. So we can say that 0 ≤ c · n^k ≤ p(n), where c = a_d and n0 = 1.

Hence, p(n) = Ω(nk ).

• Asymptotic Bound

We have already proved that for k = d, p(n) = O(n^k) and p(n) = Ω(n^k).

Hence, p(n) = Θ(n^k).

• Asymptotic Tight Upper Bound

We can easily prove this by making the inequality strict in the proof of the upper
bound, now with k > d.

Other than that, we can use the limit definition of o-notation:

lim_{n→∞} p(n)/n^k = lim_{n→∞} Σ_{i=0}^{d} a_i n^{i−k} = 0 when k > d, since i − k < 0
for every i.

Hence, p(n) = o(n^k).

• Asymptotic Tight Lower Bound

We can easily prove this by making the inequality strict in the proof of the lower
bound, now with k < d.

Other than that, we can use the limit definition of ω-notation:

lim_{n→∞} p(n)/n^k = ∞ when k < d, since the leading term a_d n^{d−k} grows without
bound.

Hence, p(n) = ω(n^k).

2 Dasgupta, Papadimitriou and Vazirani
Exercise 0.1. In each of the following situations, indicate whether f = O(g), or f = Ω(g),
or both (in which case f = Θ(g)).

Answer:

a. f = Θ(g)
b. f = O(g)
c. f = Θ(g)
d. f = Θ(g)
e. f = Θ(g)
f. f = Θ(g)
g. f = Ω(g)
h. f = Ω(g)
i. f = Ω(g)
j. f = Ω(g)
k. f = Ω(g)
l. f = O(g)
m. f = O(g)
n. f = Θ(g)
o. f = Ω(g)
p. f = O(g)
q. f = Θ(g)

Exercise 0.2. Show that, if c is a positive real number, then g(n) = 1 + c + c^2 + · · · + c^n
is:

(a) Θ(1) if c < 1.
(b) Θ(n) if c = 1.
(c) Θ(c^n) if c > 1.

The moral: in big-Θ terms, the sum of a geometric series is simply the first term if the
series is strictly decreasing, the last term if the series is strictly increasing, or the number of
terms if the series is unchanging.

Answer:

a For c < 1, I am going to use the formula for the sum of a finite geometric series,
g(n) = (1 − c^{n+1})/(1 − c). When c < 1 the series converges: g(n) is always at least
1 (the first term) and at most the constant 1/(1 − c). That is why g(n) is Θ(1) in this
case.

b If c = 1, every single element of the sum will be 1, because 1^i = 1 for any nonnegative
integer i. So, as we are summing n + 1 ones (from c^0 up to c^n), the result will be
n + 1, which is Θ(n).

c If c > 1, then I can use the formula for the finite geometric series once again:
g(n) = (c^{n+1} − 1)/(c − 1). This is at least the last term c^n and at most
(c/(c − 1)) · c^n, a constant multiple of it. So in this case, the solution is Θ(c^n).
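The three regimes can be observed numerically. A short Python sketch evaluating g(n) directly for one sample c in each case:

```python
def g(n, c):
    # g(n) = 1 + c + c^2 + ... + c^n  (n + 1 terms)
    return sum(c ** i for i in range(n + 1))

print(g(100, 0.5))  # ~2.0: bounded by 1/(1 - c) = 2, so Theta(1)
print(g(100, 1))    # 101: n + 1 terms of 1, so Theta(n)
print(g(10, 2))     # 2047 = 2^11 - 1, a constant factor away from c^n, so Theta(c^n)
```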

3 Recursive Substitution
Solve T (n) = 2T (n − 2) + 2, with n = 2k and for T(0) = 0, and T(0)= 1 by Recursive
Substitution.

Answer.

We start with the original equation: T (n) = 2 · T (n − 2) + 2

Then we carry out the substitution by replacing T (n − 2) with its own expansion inside
the original expression:

T (n) = 2 · [2 · T (n − 2 − 2) + 2] + 2
T (n) = 4 · T (n − 4) + 4 + 2

The pattern starts to appear, but it is necessary to do one more iteration.

T (n) = 4 · [2 · T (n − 4 − 2) + 2] + 4 + 2
T (n) = 8 · T (n − 6) + 8 + 4 + 2

Now we can write the recursion after k iterations; the added constants form the geometric
sum 2 + 4 + · · · + 2^k = 2^{k+1} − 2, so:

T (n) = 2^k · T (n − 2k) + 2^{k+1} − 2

This is the point where we replace n = 2k and then we get:

T (n) = 2^k · T (2k − 2k) + 2^{k+1} − 2
T (n) = 2^k · T (0) + 2^{k+1} − 2

We are going to have two possible results, based on what value of T(0) we take:

• T (0) = 0

T (n) = 2^{k+1} − 2
T (n) = 2^{(n/2)+1} − 2 = 2 · 2^{n/2} − 2

So, the complexity for this case will be O(2^{n/2})

• T (0) = 1

T (n) = 2^k + 2^{k+1} − 2
T (n) = 2^{n/2} + 2^{(n/2)+1} − 2 = 3 · 2^{n/2} − 2

So, the complexity for this case will also be O(2^{n/2})

We have to remember that n = 2k, so k = n/2. Substituting k = n/2 is what turns each
closed form into an expression in terms of n.
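Both results can be cross-checked against the recurrence itself. The following Python sketch evaluates T(n) recursively for even n and compares it with the closed forms T(n) = 2^{k+1} − 2 (for T(0) = 0) and T(n) = 2^k + 2^{k+1} − 2 = 3 · 2^{n/2} − 2 (for T(0) = 1), where k = n/2:

```python
def T(n, t0):
    # direct evaluation of T(n) = 2*T(n - 2) + 2 with base case T(0) = t0
    return t0 if n == 0 else 2 * T(n - 2, t0) + 2

for n in range(0, 21, 2):
    k = n // 2
    assert T(n, 0) == 2 ** (k + 1) - 2           # case T(0) = 0
    assert T(n, 1) == 2 ** k + 2 ** (k + 1) - 2  # case T(0) = 1
print("closed forms match the recurrence")
```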

4 Complete the following table
