03.Asymptotic notation. Practice problems
Data Structures
Instructor: Prof. Ashwin Ganesan
1. Give a simple and tight asymptotic bound on the running time, as a function of x, of the procedure below.

def program1(x):
    total = 0
    for i in range(1000):
        total += i
    while x > 0:
        x -= 1
        total += x
    return total
2. Give a simple and tight asymptotic bound on the running time, as a function of x, of the procedure below.

def program2(x):
    total = 0
    for i in range(1000):
        total = i
    while x > 0:
        x = x//2
        total += x
    return total
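A quick empirical check of the two procedures above: the sketch below (an instrumented rewrite, not part of the problem) counts loop iterations so the growth rate as a function of x can be observed directly.

```python
# Instrumented versions of program1 and program2 that count how many
# times each loop body executes, instead of computing the totals.

def program1_steps(x):
    steps = 1000          # the for-loop always runs exactly 1000 times
    while x > 0:          # this loop runs x times
        x -= 1
        steps += 1
    return steps

def program2_steps(x):
    steps = 1000          # the for-loop always runs exactly 1000 times
    while x > 0:          # x is halved each pass: about log2(x) iterations
        x = x // 2
        steps += 1
    return steps

print(program1_steps(10**6))   # grows linearly in x
print(program2_steps(10**6))   # grows only logarithmically in x
```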
3. Provide simple and tight asymptotic bounds for each of the following.
Here, “simple” means roughly “no unnecessary terms or constants” and
“tight” means the smallest O(·) bound you can find.
(a) 7x^2 + 4x log x + 2x + 6 log x.
(b) The function of N defined by the double sum Σ_{i=0}^{N} Σ_{j=0}^{i} j.
(c) The running time, as a function of N, for the procedure foo(N) given below.

void foo(int N) {
    int x;
    for (x = 0; x < N; x += 1) {
        int y;
        for (y = 0; y < x; y += 1) {
            bar(x, y);   // bar runs in constant time
        }
    }
}
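To see where the bound for foo(N) comes from, one can count the calls to bar directly; a Python sketch (transcribing the C loops) compares the count with the closed form 0 + 1 + ... + (N-1) = N(N-1)/2.

```python
# Counts how many times bar(x, y) would be called by foo(N): the inner
# loop runs x times for each outer value x = 0, 1, ..., N-1.

def bar_calls(N):
    calls = 0
    for x in range(N):
        for y in range(x):
            calls += 1
    return calls

for N in (10, 100, 1000):
    print(N, bar_calls(N), N * (N - 1) // 2)   # the two counts agree
```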
4. Give, using big-oh notation, the worst case running time of the following
procedure as a function of n. [AHU, Ex. 1.12]
for i = 1 to n do
    for j = 1 to n do
        C[i,j] = 0;
        for k = 1 to n do
            C[i,j] = C[i,j] + A[i,k]*B[k,j];
        end for
    end for
end for
5. How can we modify almost any algorithm to have a good best-case running
time? Is the best-case running time a good measure of an algorithm?
[CLRS, Ex. 2.2-4]
6. Assume the parameter n in the procedure below is a positive power of 2, i.e., n = 2, 4, 8, 16, .... Give the formula that expresses the value of the
variable count in terms of the value of n when the procedure terminates.
[AHU, Ex. 1.17]
count := 0;
x := 2;
while x < n do
    x := 2*x;
    count := count + 1;
end while
print(count);
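The pseudocode above transcribes directly into Python, which makes it easy to tabulate count for n = 2, 4, 8, ... and guess the formula before proving it.

```python
# Direct Python transcription of the procedure in problem 6.

def count_for(n):
    count = 0
    x = 2
    while x < n:
        x = 2 * x
        count = count + 1
    return count

for k in range(1, 8):
    n = 2 ** k
    print(n, count_for(n))   # compare this table against your formula
```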
7. (Space complexity)
(a) Suppose a function A calls a function B, function A uses m units of its own local space, and function B uses n units of its own local space. What is the overall space complexity of function A? In other words, what is the maximum space usage at any point in the program?
(b) Now suppose that function A calls function B not once, but 10 times. Using the same worst-case space usage parameters as in (a), determine the overall space complexity of function A. Has the space complexity increased?
(c) If function A uses m units of its own space, and function A calls itself
recursively 10 times, then what is the overall space complexity of function
A?
8. (Complexity of matrix multiplication)
(a) Design an algorithm that takes as its input two n × n matrices A and
B and their size n, and that produces as its output the n × n matrix AB.
You may describe the algorithm in the form of pseudocode.
(b) Determine the space-complexity of your algorithm.
(c) Determine the time-complexity of your algorithm.
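For part (a), one possible answer is the standard triple-loop algorithm; the sketch below writes it in Python (any pseudocode notation would do), with the loop structure that parts (b) and (c) ask you to analyze.

```python
# Standard triple-loop matrix multiplication: C = A * B for n x n
# matrices represented as lists of lists.

def mat_mul(A, B, n):
    C = [[0] * n for _ in range(n)]          # the result needs n*n extra space
    for i in range(n):
        for j in range(n):
            for k in range(n):               # n multiply-adds per entry of C
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B, 2))   # [[19, 22], [43, 50]]
```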
9. (Complexity of matrix addition)
Repeat parts (a)-(c) of the previous exercise for the problem of adding
two m × n matrices.
10. (Big-Oh notation)
(a) Show that 100n + 6 = O(n).
(b) Is it true that 3n + 2 = O(1)?
(c) Is it true that 10n^2 + 4n + 2 = O(n)?
(d) Show that 7 = O(1).
(e) Is it true that n^2 = O(2^n)? Is it true that 2^n = O(n^2)?
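A numeric experiment can suggest (though never prove) the answers to parts like (c) and (e): if f(n) = O(g(n)) then f(n)/g(n) stays bounded as n grows, so a ratio that keeps climbing signals that the claim is false.

```python
# Ratio tables for a few of the claims above.  A bounded column of
# ratios is consistent with f = O(g); an unbounded one refutes it.

def ratio(f, g, ns):
    return [f(n) / g(n) for n in ns]

print(ratio(lambda n: 100 * n + 6, lambda n: n, [10, 100, 1000]))            # stays near 100
print(ratio(lambda n: 10 * n**2 + 4 * n + 2, lambda n: n, [10, 100, 1000]))  # keeps growing
print(ratio(lambda n: n**2, lambda n: 2**n, [10, 20, 30]))                   # shrinks toward 0
```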
11. Prove that
(a) 5n^2 − 6n = O(n^2).
(b) n! = O(n^n).
(c) 33n^3 + 4n^2 = O(n^3).
12. Prove the following statements are true:
(a) 17 is O(1).
(b) n(n − 1)/2 is O(n^2).
(c) max(n^3, 10n^2) is O(n^3).
13. Order the following functions by growth rate: n, √n, log n, log log n, log^2 n, n/log n, √n log^2 n, (1/3)^n, (3/2)^n, and 17.
[Hint: Using L'Hopital's rule, prove that √n < n/log n, that (log n)^2 < √n, and that √n log^2 n < n/log n.]
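Before reaching for L'Hopital's rule, the inequalities in the hint can be spot-checked numerically at one large value of n (a sanity check only, not a proof):

```python
import math

n = 2.0 ** 40                     # one large sample point
lg = lambda x: math.log(x, 2)

checks = [
    math.sqrt(n) < n / lg(n),               # sqrt(n) vs n/log n
    lg(n) ** 2 < math.sqrt(n),              # (log n)^2 vs sqrt(n)
    math.sqrt(n) * lg(n) ** 2 < n / lg(n),  # sqrt(n) log^2 n vs n/log n
]
print(checks)   # all three inequalities hold at this n
```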
14. Importance of time complexity and asymptotic growth rate. When we describe the running time (or time complexity) T(n) of an algorithm using asymptotic notation, we ignore the lower-order terms in T(n) and the coefficient of the highest-order term in T(n). This exercise illustrates the advantages of algorithms which have smaller asymptotic growth rates.
Consider the running times of four programs with different time complexities 100n, 5n^2, n^3/2, and 2^n, measured in seconds, for a particular compiler-machine combination. Suppose we can afford 1000 seconds to solve a given problem.

Running time   max. problem size   max. problem size   increase in max.
T(n)           for 10^3 sec        for 10^4 sec        problem size
100n           10
5n^2
n^3/2
2^n
(a) Determine the maximum problem size that can be solved in 1000
seconds by the four programs, i.e. fill in the second column of the
table. For example, 100n = 1000 implies n = 10, which is already
filled in the table.
(b) Now assume we buy a new machine that is ten times faster. Then for
the same cost, we can afford 104 seconds for the problem when we
spent 103 seconds before. Find the maximum problem size that can
be solved by the four programs for this faster computer (i.e. complete
the third column of the table).
(c) Determine the increase in maximum problem size due to the ten-fold speedup in computation time (the ratio of each third-column value to the corresponding second-column value, to be filled in the fourth column of the table).
This shows that the gains from a faster computer are greater when a more efficient algorithm is used, and that exponential-time algorithms can solve only problems of small size, no matter how fast the underlying computer. [AHU]
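Parts (a)-(c) are easy to automate. The sketch below (using the growth rates as stated, with n^3/2 taken as the third running time) finds the largest n whose running time fits a given budget, for both budgets.

```python
# Fills in the table of problem 14: for each running time T(n), find the
# largest integer n with T(n) <= the available number of seconds.

def max_size(T, budget):
    n = 1
    while T(n + 1) <= budget:
        n += 1
    return n

times = {
    "100n":  lambda n: 100 * n,
    "5n^2":  lambda n: 5 * n * n,
    "n^3/2": lambda n: n**3 / 2,
    "2^n":   lambda n: 2**n,
}
for name, T in times.items():
    s1, s2 = max_size(T, 10**3), max_size(T, 10**4)
    print(f"{name:6} {s1:7} {s2:7}   ratio {s2 / s1:.2f}")
```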
15. Suppose computer A executes 10^9 instructions per second while computer B executes only 10^7 instructions per second. Computer A runs insertion sort, coded by an expert programmer in machine language, taking 2n^2 instructions to sort n numbers; computer B runs merge sort, coded by an average programmer using a high-level language with an inefficient compiler, with the resulting code taking 50n log n instructions. Determine the time taken
by computer A and by computer B to sort
(i) one million numbers
(ii) ten million numbers.
[Solution: To sort one million numbers, computer A executes 2 · (10^6)^2 = 2 × 10^12 instructions, which takes 2000 seconds, while computer B takes about 100
seconds. By using an algorithm whose running time grows more slowly,
even with a poor compiler and less skilled programmer, computer B solves
the problem 20 times faster than computer A! The advantage of merge
sort is even more pronounced when we sort ten million numbers: while
insertion sort takes approximately 2.3 days, merge sort takes under 20
minutes. Observe that as the problem size increases, so does the relative
advantage of merge sort.] [CLRS]
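The arithmetic in the solution can be reproduced directly; the instruction rates below (10^9 per second for computer A, 10^7 for computer B) are the ones implied by the quoted figures.

```python
import math

# Sorting times implied by the solution: computer A runs insertion sort
# (2n^2 instructions at an assumed 10^9 instructions/second), computer B
# runs merge sort (50 n lg n instructions at an assumed 10^7 per second).

def time_A(n):
    return 2 * n**2 / 10**9                  # seconds

def time_B(n):
    return 50 * n * math.log2(n) / 10**7     # seconds

for n in (10**6, 10**7):
    print(f"n = {n:.0e}: A takes {time_A(n):,.0f} s, B takes {time_B(n):,.0f} s")
```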