
Algorithm Design COMP3027/3927 Tutorial questions
The University of Sydney, School of IT, 2017 Semester 2, Tutorial 1

Pre-tutorial questions
Do you know the basic concepts of this week's lecture content? These questions are only for you to test yourself; they will not be explicitly discussed in the tutorial, and no solutions will be given for them.

1. Running times
(a) What is an algorithm’s worst case running time?
(b) What does it mean when we say that the algorithm runs in polynomial time?
(c) What does it mean for an algorithm to be efficient?
(d) Look at the table with the running times in the slides. Can you explain the reason for these
numbers?
2. Asymptotic analysis
(a) Can you explain the ideas of the O(·), Θ(·) and Ω(·) functions? Why do we use these functions?
(b) What does it mean that these functions are transitive and additive?
3. Basic data structures (revision)
(a) What are linked lists, queues, stacks and balanced binary trees? What sort of operations are
usually supported by these structures?
(b) What is the height of a balanced binary tree containing n elements?

Tutorial

Problem 1

Sort the following functions in increasing order of asymptotic growth


n,   n^3,   n log n,   n/log n,   2^√n,   √n,   n^3/log n

Solution: In increasing order of asymptotic growth:

√n,   n/log n,   n,   n log n,   n^3/log n,   n^3,   2^√n
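A quick numerical sanity check (not part of the original solution): the Python sketch below evaluates each function at two large values of n and sorts by value; for n this large the numerical order already agrees with the asymptotic order above.

import math

# Each entry maps a label to the corresponding function of n.
functions = {
    "sqrt(n)":    lambda n: math.sqrt(n),
    "n/log n":    lambda n: n / math.log(n),
    "n":          lambda n: float(n),
    "n log n":    lambda n: n * math.log(n),
    "n^3/log n":  lambda n: n**3 / math.log(n),
    "n^3":        lambda n: float(n**3),
    "2^sqrt(n)":  lambda n: 2.0 ** math.sqrt(n),
}

for n in (10**3, 10**6):
    # Sort the labels by the value their function takes at this n.
    order = sorted(functions, key=lambda name: functions[name](n))
    print(n, order)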

Problem 2

Sort the following functions in decreasing order of asymptotic growth:

n^1.5,   2^n,   n/2^n,   2^n/n^10,   n!,   1.5^n,   2^(log2 n)

Solution: In decreasing order of asymptotic growth:

n!,   2^n,   2^n/n^10,   1.5^n,   n^1.5,   2^(log2 n),   n/2^n

(Note that 2^(log2 n) = n, which is why it sits just above n/2^n, the only function here that tends to 0.)

Problem 3

Which of the following is the largest asymptotically?

log3 n, log2 n, log10 n


Solution: They are all asymptotically equal, i.e. each is Θ(log n). This follows from the change-of-base formula

log_b a = (log_d a) / (log_d b),

so, for example, log2 n = (log10 n)/(log10 2) ≈ 3.32 · log10 n: any two of the three differ only by a constant factor.

Problem 4
Use induction to prove that the sum S(n) of the first n natural numbers is n(n + 1)/2.
Solution: Proof is by induction on n.
Base case: If n = 1 then S(1) = 1 = 1 · 2/2, so the claim holds.
Induction hypothesis: Assume that the sum of the first n natural numbers is S(n) = n(n + 1)/2.
Induction step: We prove the statement for n + 1. From the definition of S(n) we know S(n + 1) = S(n) + (n + 1), and from the induction hypothesis S(n) = n(n + 1)/2; therefore S(n + 1) = n(n + 1)/2 + (n + 1) = (n + 1)(n + 2)/2, which is the claimed formula with n replaced by n + 1.
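A minimal empirical check of the closed form (it complements, but does not replace, the induction proof):

def S(n):
    # S(n) = 1 + 2 + ... + n, computed by direct summation
    return sum(range(1, n + 1))

# The closed form n(n+1)/2 agrees with the direct sum for these values of n.
assert all(S(n) == n * (n + 1) // 2 for n in range(1, 200))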

Problem 5
Imagine a program A with time complexity Θ(f(n)) that takes t seconds on an input of size m. What is your estimate of the execution time on an input of size 2m for each of the following choices of f: n, n log n, n^2 and n^3?
Solution:
f(n) = n: the time scales linearly, so roughly 2t seconds.
f(n) = n log n: 2m log(2m) = 2m(log m + log 2), i.e. roughly 2t(1 + log 2/log m) ≈ 2t seconds.
f(n) = n^2: (2m)^2 = 4m^2, so roughly 4t seconds.
f(n) = n^3: (2m)^3 = 8m^3, so roughly 8t seconds.
Note: we are assuming here that m is sufficiently large that the tight bound applies.
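The same estimates can be reproduced with a short sketch; the measured pair (m, t) below is a hypothetical example, and the ratio f(2m)/f(m) is the predicted slowdown factor.

import math

def predicted_time(f, m, t):
    # If the running time is roughly proportional to f, an input of size 2m
    # should take about t * f(2m) / f(m) seconds.
    return t * f(2 * m) / f(m)

m, t = 10**6, 1.0  # hypothetical: an input of size m took t seconds
for name, f in [("n",       lambda n: n),
                ("n log n", lambda n: n * math.log(n)),
                ("n^2",     lambda n: n**2),
                ("n^3",     lambda n: n**3)]:
    print(name, predicted_time(f, m, t))
# prints roughly 2.0, 2.1, 4.0 and 8.0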

Problem 6

1. For each of the following pseudo-code fragments, give an upper bound on its running time using big-Oh notation.
2. The upper bound captures the worst-case running time of an algorithm: some instances require the algorithm to perform more steps than others, while some instances let it terminate faster. A lower bound on the running time is a guarantee about the worst case: if an algorithm's running time is Ω(f(n)), then there exists an instance on which it takes at least (a constant times) f(n) steps.
For the second algorithm shown below, give a lower bound on its running time.

Algorithm 1 Stars
1: for i = 1, . . . , n do
2: print ”*” i times
3: end for

Solution: The first iteration prints 1 star, the second prints 2, the third prints 3, and so on. The total number of stars printed is

1 + 2 + · · · + n = Σ_{j=1}^{n} j = n(n + 1)/2 ∈ O(n^2),

so the running time is O(n^2).

Algorithm 2 Check Numbers


1: procedure CheckNumbers(A,B)
▷ A and B are two lists of integers with |A| = n ≥ |B| = m
2: count = 0
3: for i = 1, . . . , n do
4: for j = 1, . . . , m do
5: if A[i] ≥ B[j] then
6: count = count +1
7: break
8: end if
9: end for
10: end for
11: end procedure

Solution: An upper bound is O(n · m). The lower bound is Ω(n · m): ask the students to come up with a specific example input that forces this many steps (which is easy); one such instance is sketched below.
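One possible instance (a sketch, not necessarily the one intended by the tutors): if every element of A is smaller than every element of B, the break is never taken, so the inner loop always runs its full m iterations and the algorithm performs n · m comparisons.

def check_numbers(A, B):
    # Direct translation of Algorithm 2, with a counter for the comparisons made.
    count, comparisons = 0, 0
    for a in A:
        for b in B:
            comparisons += 1
            if a >= b:
                count += 1
                break
    return count, comparisons

# Every A[i] is smaller than every B[j], so no iteration breaks early.
n, m = 8, 5
A, B = [0] * n, [1] * m
count, comparisons = check_numbers(A, B)
print(count, comparisons)  # 0 and n * m = 40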

Problem 7
Given an array A consisting of n integers A[0], A[1], . . . , A[n − 1], we want to compute the upper triangle
matrix
A[i] + A[i + 1] + · · · + A[j]
C[i][j] =
j−i+1
for 0 ≤ i ≤ j < n. Consider the following algorithm for computing C:

Algorithm 3 summing-up
1: function summing-up(A)
2: for i = 0, . . . , n − 1 do
3: for j = i, . . . , n − 1 do
4: add up entries A[i] through A[j] and divide by j − i + 1
5: store result in C[i][j]
6: end for
7: end for
8: return C
9: end function

1. Using O-notation, give an upper bound on the running time of summing-up.

2. Using Ω-notation, give a lower bound on the running time of summing-up.

Solution:
1. The number of iterations of the two for loops is n + (n − 1) + · · · + 1 = n(n + 1)/2, which is bounded by n^2. In the iteration corresponding to indices (i, j) we need to scan j − i + 1 entries of A, which takes O(j − i + 1) = O(n) time. Thus, the overall time is O(n^3).

2. In an implementation of this algorithm, Line 4 would be computed with a for loop; when i < n/4 and j > 3n/4, this loop iterates at least n/2 times, which takes Ω(n) time. There are at least n^2/16 such pairs (i, j), which is Ω(n^2). Thus, the overall time is Ω(n^3).
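A direct Python translation of summing-up (a sketch): the innermost loop is exactly the Ω(n) work per pair (i, j) that the analysis above refers to.

def summing_up(A):
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            # add up entries A[i] through A[j]: Theta(j - i + 1) work
            total = 0
            for k in range(i, j + 1):
                total += A[k]
            # divide by j - i + 1 and store the result in C[i][j]
            C[i][j] = total / (j - i + 1)
    return C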

Problem 8
[COMP3927 only]
Come up with an implementation of the Gale-Shapley algorithm for finding a stable matching that keeps track of the number of proposals made during the execution.

1. Design a generic instance where the number of proposals is O(n).


2. Design a generic instance where the number of proposals is Ω(n^2).

In both cases, n is the number of men/women in the instance.

Solution:

1. We only sketch the instance. Consider the following preference lists, where man mi's list is the cyclic shift starting at wi and every woman has the same list:

   m1: w1 w2 · · · wn−1 wn        w1: m1 m2 · · · mn−1 mn
   m2: w2 w3 · · · wn w1          w2: m1 m2 · · · mn−1 mn
   ...                            ...
   mn: wn w1 · · · wn−2 wn−1      wn: m1 m2 · · · mn−1 mn

Since the men's first choices are all distinct, each man proposes to a free woman and is accepted immediately, so there are only n proposals.


2. We only sketch the instance. Consider the following preference lists, where all men share the same list and all women share the same list:

   m1: w1 w2 · · · wn−1 wn        w1: m1 m2 · · · mn−1 mn
   m2: w1 w2 · · · wn−1 wn        w2: m1 m2 · · · mn−1 mn
   ...                            ...
   mn: w1 w2 · · · wn−1 wn        wn: m1 m2 · · · mn−1 mn

Imagine mn proposes first, followed by mn−1, and so on up to m1. Each man proposes to w1, who progressively gets a better partner and ends up with m1. This involves n proposals.
In a second round, mn proposes to w2, followed by mn−1, and so on up to m2. Again, w2 progressively gets a better partner and ends up with m2. This involves another n − 1 proposals.
This keeps going until mn finally proposes to wn and the algorithm terminates, having made a total of n + (n − 1) + · · · + 1 = n(n + 1)/2 = Ω(n^2) proposals.
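A possible implementation sketch in Python (indices are 0-based, so man 0 plays the role of m1, and the function and variable names are our own, not prescribed by the tutorial): a men-proposing Gale-Shapley that counts proposals, tried on the two instance families described above.

from collections import deque

def gale_shapley(men_prefs, women_prefs):
    # men_prefs[i]: man i's list of women, best first; women_prefs[j]: woman j's list of men.
    n = len(men_prefs)
    rank = [{m: r for r, m in enumerate(prefs)} for prefs in women_prefs]
    next_choice = [0] * n                    # next woman each man will propose to
    partner = [None] * n                     # partner[w] = man currently engaged to woman w
    free_men = deque(range(n - 1, -1, -1))   # mn proposes first, as in the sketch above
    proposals = 0
    while free_men:
        m = free_men.popleft()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        proposals += 1
        if partner[w] is None:
            partner[w] = m
        elif rank[w][m] < rank[w][partner[w]]:
            free_men.append(partner[w])      # w trades up; her old partner becomes free
            partner[w] = m
        else:
            free_men.append(m)               # w rejects m
    return partner, proposals

n = 6
women = [list(range(n)) for _ in range(n)]   # every woman ranks m1, m2, ..., mn

# Instance 1: man i's list is the cyclic shift starting at woman i -> n proposals.
men1 = [[(i + k) % n for k in range(n)] for i in range(n)]
print(gale_shapley(men1, women)[1])          # prints n = 6

# Instance 2: every man has the same list w1, w2, ..., wn -> n(n+1)/2 proposals.
men2 = [list(range(n)) for _ in range(n)]
print(gale_shapley(men2, women)[1])          # prints n(n+1)/2 = 21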
