
ATLAS SkillTech University

Data Structures
Instructor: Prof. Ashwin Ganesan

Practice Problems on Asymptotic Notation


1. Determine the best-case and worst-case number of steps it will take to
run the program below. Express your answer in terms of n, the size of the
input x.

def program1(x):
    total = 0
    for i in range(1000):
        total += i

    while x > 0:
        x -= 1
        total += x
    return total

2. Determine the best-case and worst-case number of steps it will take to
run the program below. Express your answer in terms of n, the size of the
input x.

def program2(x):
    total = 0
    for i in range(1000):
        total = i

    while x > 0:
        x = x//2
        total += x
    return total
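To sanity-check your hand counts for problems 1 and 2, one can instrument
the loops with a counter. The sketch below is ours, not part of the original
problems, and what counts as one "step" is a modeling choice:

def program1_steps(x):
    # counts loop-body executions in program1
    steps = 0
    for i in range(1000):   # always 1000 iterations, independent of x
        steps += 1
    while x > 0:            # one iteration per decrement of x
        x -= 1
        steps += 1
    return steps

def program2_steps(x):
    # counts loop-body executions in program2
    steps = 0
    for i in range(1000):   # always 1000 iterations
        steps += 1
    while x > 0:            # x is halved on each iteration
        x = x // 2
        steps += 1
    return steps

for n in (10, 100, 1000, 10**6):
    print(n, program1_steps(n), program2_steps(n))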

3. Provide simple and tight asymptotic bounds for each of the following.
Here, “simple” means roughly “no unnecessary terms or constants” and
“tight” means the smallest O(·) bound you can find.
(a) 7x^2 + 4x log x + 2x + 6 log x.
(b) The function of N defined by the sum \sum_{i=0}^{N} \sum_{j=0}^{i} j.
(c) The running time, as a function of N, for the procedure foo(N) given
below.
void foo(int N){
    int x;
    for (x = 0; x < N; x += 1) {
        int y;
        for (y = 0; y < x; y += 1) {
            bar(x, y);   // bar runs in constant time
        }
    }
}
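For part (c), a quick empirical check is to count how many times bar would
be called; the following Python sketch (our own translation of foo, not part
of the original problem) does this:

def bar_calls(N):
    # the inner loop body runs x times for each x in 0..N-1,
    # so the total is 0 + 1 + ... + (N-1) = N(N-1)/2
    calls = 0
    for x in range(N):
        for y in range(x):
            calls += 1
    return calls

for N in (10, 100, 1000):
    print(N, bar_calls(N), N * (N - 1) // 2)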

4. Give, using big-oh notation, the worst case running time of the following
procedure as a function of n. [AHU, Ex. 1.12]

procedure matmpy (int n)
{
    int i, j, k;

    for i = 1 to n do
        for j = 1 to n do
            C[i,j] = 0;
            for k = 1 to n do
                C[i,j] = C[i,j] + A[i,k]*B[k,j];
            end for
        end for
    end for
}
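If you want to experiment, the pseudocode translates directly to Python. The
sketch below assumes the matrices are given as lists of lists and uses
0-based indices (our assumptions, not stated in [AHU]):

def matmpy(A, B, n):
    # computes C = A*B for n x n matrices given as lists of lists
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmpy([[1, 2], [3, 4]], [[5, 6], [7, 8]], 2))  # [[19, 22], [43, 50]]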

5. How can we modify almost any algorithm to have a good best-case running
time? Is the best-case running time a good measure of an algorithm?
[CLRS, Ex. 2.2-4]
6. Assume the parameter n in the procedure below is a positive power of 2,
i.e., n = 2, 4, 8, 16, .... Give the formula that expresses the value of the
variable count in terms of the value of n when the procedure terminates.
[AHU, Ex. 1.17]

procedure mystery (int n)
{
    int x, count;

    count := 0;
    x := 2;
    while x < n do
        x := 2*x;
        count := count + 1;
    end while
    print(count);
}
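A direct Python translation (ours, for experimentation) makes it easy to
tabulate count against n before guessing the formula:

def mystery(n):
    # doubles x until it reaches n, counting the doublings
    count = 0
    x = 2
    while x < n:
        x = 2 * x
        count += 1
    return count

for n in (2, 4, 8, 16, 32, 64, 128):
    print(n, mystery(n))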

7. (Space complexity)
(a) Suppose a function A calls a function B, function A uses m units of
its own space, and function B uses n units of its own local space. What
is the overall space complexity of function A? In other words, what is the
maximum space usage at any point in the program?
(b) Now suppose that function A calls function B not once, but 10 times.
Using the same worst-case space usage parameters as in (a), determine
the overall space complexity of function A. Has the space complexity
increased?
(c) If function A uses m units of its own space, and function A calls itself
recursively 10 times, then what is the overall space complexity of function
A?
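The distinction between (b) and (c) can be seen concretely: sequential calls
release their space before the next call begins, while recursive calls keep
every active frame live at once. A minimal Python illustration (hypothetical
functions and sizes, ours):

def B():
    local = [0] * 100        # B's own space; freed when B returns
    return len(local)

def A_repeated():
    own = [0] * 50           # A's space stays live throughout
    for _ in range(10):      # each call to B reuses the same region:
        B()                  # peak usage is A's space plus ONE copy of B's
    return len(own)

def A_recursive(depth=10):
    own = [0] * 50           # one copy of A's space per active call:
    if depth > 1:            # ten nested calls keep ten copies live at once
        A_recursive(depth - 1)
    return len(own)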
8. (Complexity of matrix multiplication)
(a) Design an algorithm that takes as its input two n × n matrices A and
B and their size n, and that produces as its output the n × n matrix AB.
You may describe the algorithm in the form of pseudocode.
(b) Determine the space complexity of your algorithm.
(c) Determine the time complexity of your algorithm.
9. (Complexity of matrix addition)
Repeat parts (a)-(c) of the previous exercise for the problem of adding
two m × n matrices.
10. (Big-Oh notation)
(a) Show that 100n + 6 = O(n).
(b) Is it true that 3n + 2 = O(1)?
(c) Is it true that 10n^2 + 4n + 2 = O(n)?
(d) Show that 7 = O(1).
(e) Is it true that n^2 = O(2^n)? Is it true that 2^n = O(n^2)?
11. Prove that
(a) 5n^2 − 6n = O(n^2).
(b) n! = O(n^n).
(c) 33n^3 + 4n^2 = O(n^3).
12. Prove the following statements are true:
(a) 17 is O(1).
(b) n(n − 1)/2 is O(n^2).
(c) max(n^3, 10n^2) is O(n^3).
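As a model for the proofs in problems 10-12, here is 10(a) with the
definition unwound (the witnesses c and n_0 are our choice; many others
work): we must find constants c > 0 and n_0 such that 100n + 6 ≤ c·n for
all n ≥ n_0. Take c = 101 and n_0 = 6; then for every n ≥ 6 we have
6 ≤ n, so 100n + 6 ≤ 100n + n = 101n. Hence 100n + 6 = O(n).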

13. Order the following functions by growth rate: n, √n, log n, log log n,
log^2 n, n/log n, n log^2 n, (1/3)^n, (3/2)^n, and 17.
[Hint: Using L'Hôpital's rule, prove that √n < n/log n, that
(log n)^2 < √n, and that n/log n < n log^2 n.]
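For example, the middle inequality in the hint unwinds as follows (our
worked steps, using the hint's method):

lim_{n→∞} (log n)^2/√n = lim (2 log n / n) / (1/(2√n))   [L'Hôpital]
                       = lim 4 log n / √n
                       = lim (4/n) / (1/(2√n))           [L'Hôpital again]
                       = lim 8/√n = 0,

so (log n)^2 grows strictly more slowly than √n.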
14. Importance of time complexity and asymptotic growth rate. When
we describe the running time (or time complexity) T(n) of an algorithm
using asymptotic notation, we ignore the lower order terms in T(n) and
the coefficient of the highest order term in T(n). This exercise illustrates
the advantages of algorithms which have smaller asymptotic growth rates.
Consider the running times of four programs with different time
complexities 100n, 5n^2, n^3/2, and 2^n, measured in seconds, for a
particular compiler-machine combination. Suppose we can afford 1000
seconds to solve a given problem.

Running time T(n) | max. problem size for 10^3 sec | max. problem size for 10^4 sec | increase in max. problem size
100n              | 10                             |                                |
5n^2              |                                |                                |
n^3/2             |                                |                                |
2^n               |                                |                                |

(a) Determine the maximum problem size that can be solved in 1000
seconds by the four programs, i.e. fill in the second column of the
table. For example, 100n = 1000 implies n = 10, which is already
filled in the table.
(b) Now assume we buy a new machine that is ten times faster. Then for
the same cost, we can afford 10^4 seconds for the problem where we
spent 10^3 seconds before. Find the maximum problem size that can
be solved by the four programs on this faster computer (i.e. complete
the third column of the table).
(c) Determine the increase in maximum problem size due to the ten-fold
speedup in computation time (this ratio of third-column values to
second-column values is to be filled in the fourth column of the
table).
This shows that the gains from a faster computer are greater when a
more efficient algorithm is used, and that exponential-time algorithms
can solve only problems of small size no matter how fast the underlying
computer is. [AHU]
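To check your table entries, one can solve T(n) = budget numerically; the
sketch below (ours, not part of [AHU]) brackets the answer by doubling and
then binary-searches for the largest feasible n:

def max_size(T, budget):
    # largest integer n with T(n) <= budget
    lo, hi = 0, 1
    while T(hi) <= budget:      # grow an upper bracket
        hi *= 2
    while hi - lo > 1:          # binary search between lo and hi
        mid = (lo + hi) // 2
        if T(mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

times = {"100n": lambda n: 100 * n,
         "5n^2": lambda n: 5 * n * n,
         "n^3/2": lambda n: n ** 3 / 2,
         "2^n": lambda n: 2 ** n}

for name, T in times.items():
    print(name, max_size(T, 10 ** 3), max_size(T, 10 ** 4))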

15. Importance of time complexity and asymptotic growth rate. When
we describe the running time (or time complexity) T(n) of an algorithm
using asymptotic notation, we ignore the lower order terms in T(n) and
the coefficient of the highest order term in T(n). This exercise illustrates
the advantages of algorithms which have smaller asymptotic growth rates.
It can be shown that insertion sort is an O(n^2) algorithm, while merge
sort is an O(n log n) algorithm. Let us pit a faster computer (computer A)
running insertion sort against a slower computer (computer B) running
merge sort. They each must sort an array of one million numbers. Suppose
that computer A executes one billion instructions per second and computer
B executes only ten million instructions per second, so that computer A
is 100 times faster than computer B in raw computing power. To make
the difference even more dramatic, suppose that the world's craftiest
programmer codes insertion sort in machine language for computer A, and
the resulting code requires 2n^2 instructions to sort n numbers. Merge
sort, on the other hand, is programmed for computer B by an average
programmer using a high-level language with an inefficient compiler, with
the resulting code taking 50n log n instructions. Determine the time taken
by computer A and by computer B to sort
by computer A and by computer B to sort
(i) one million numbers
(ii) ten million numbers.
[Solution: To sort one million numbers, computer A executes 2·(10^6)^2
instructions, which takes 2000 seconds, while computer B takes about 100
seconds. By using an algorithm whose running time grows more slowly,
even with a poor compiler and less skilled programmer, computer B solves
the problem 20 times faster than computer A! The advantage of merge
sort is even more pronounced when we sort ten million numbers: while
insertion sort takes approximately 2.3 days, merge sort takes under 20
minutes. Observe that as the problem size increases, so does the relative
advantage of merge sort.] [CLRS]
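The arithmetic in the quoted solution is easy to verify; this sketch (ours)
assumes log means log base 2, as in [CLRS]:

import math

A_SPEED = 10 ** 9          # instructions per second, computer A
B_SPEED = 10 ** 7          # instructions per second, computer B

def insertion_time(n):     # 2n^2 instructions, run on computer A
    return 2 * n ** 2 / A_SPEED

def merge_time(n):         # 50 n lg n instructions, run on computer B
    return 50 * n * math.log2(n) / B_SPEED

for n in (10 ** 6, 10 ** 7):
    print(n, insertion_time(n), "s", merge_time(n), "s")
# n = 10^6: about 2000 s vs about 100 s
# n = 10^7: about 2.3 days vs under 20 minutes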

16. (Polynomial time versus exponential time algorithms.) Suppose algorithm
A takes n^2 operations to solve a problem for an input of size n, and
suppose algorithm B takes 2^n operations to solve the same problem. Both
these algorithms are executed on a computer which takes 10^-9 seconds for
each operation. Calculate the execution times of the two algorithms when
n = 10, 20, 30, 40, 50 and 100:
n   | n^2   | 2^n
10  | __ µs | __ µs
20  | __ µs | __ ms
30  | __ µs | __ sec
40  | __ µs | __ min
50  | __ µs | __ days
100 | __ µs | __ yrs
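To check your answers, the sketch below (ours) evaluates both running times
at 10^-9 seconds per operation and converts each to a readable unit:

SEC_PER_OP = 1e-9

def pretty(seconds):
    # rough conversion to the largest fitting unit, for comparison only
    for unit, size in (("yrs", 3.15e7), ("days", 86400.0), ("min", 60.0),
                       ("sec", 1.0), ("ms", 1e-3)):
        if seconds >= size:
            return "%.3g %s" % (seconds / size, unit)
    return "%.3g us" % (seconds / 1e-6)

for n in (10, 20, 30, 40, 50, 100):
    print(n, pretty(n ** 2 * SEC_PER_OP), pretty(2 ** n * SEC_PER_OP))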
