Algorithmic Complexity
Introduction
Algorithmic complexity is concerned with how fast or slow a particular algorithm
performs. We define complexity as a numerical function T(n) - time versus the input
size n. We want to define the time taken by an algorithm without depending on the
implementation details. But you will agree that T(n) does depend on the implementation!
A given algorithm will take different amounts of time on the same inputs depending
on such factors as processor speed, instruction set, disk speed, and brand of compiler.
The way around this is to estimate the efficiency of each algorithm asymptotically. We
will measure the time T(n) as the number of elementary "steps" (defined in any way),
provided each such step takes constant time.
Let us consider a classical example: addition of two integers. We will add the two
integers digit by digit (or bit by bit), and this will define a "step" in our computational
model. Therefore, we say that addition of two n-bit integers takes n steps.
Consequently, the total computational time is T(n) = c * n, where c is the time taken by
the addition of two bits. On different computers, the addition of two bits might take
different amounts of time, say c1 and c2; thus the addition of two n-bit integers takes
T(n) = c1 * n and T(n) = c2 * n respectively. This shows that different machines result
in different slopes, but the time T(n) grows linearly as the input size increases.
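To make the step counting concrete, here is a minimal sketch in Java. The representation (bit arrays with the least significant bit first) and the method name are illustrative assumptions, not part of the model:

    // Adds two n-bit integers given as bit arrays (least significant bit first).
    // Each loop iteration is one elementary "step", so the method performs
    // exactly n steps and T(n) = c * n for some constant c.
    static int[] add(int[] a, int[] b) {
        int n = a.length;                 // assume both inputs have n bits
        int[] sum = new int[n + 1];
        int carry = 0;
        for (int i = 0; i < n; i++) {     // n constant-time steps
            int s = a[i] + b[i] + carry;
            sum[i] = s % 2;
            carry = s / 2;
        }
        sum[n] = carry;
        return sum;
    }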
The process of abstracting away details and determining the rate of resource usage in
terms of the input size is one of the fundamental ideas in computer science.
Asymptotic Notations
The goal of computational complexity is to classify algorithms according to their
performance. We will represent the time function T(n) using the "big-O" notation to
express an algorithm's runtime complexity. For example, the following statement
T(n) = O(n²)
says that the algorithm has a quadratic time complexity.
For any monotonic functions f(n) and g(n) from the positive integers to the positive
integers, we say that f(n) = O(g(n)) when there exist constants c > 0 and n0 > 0 such
that
f(n) ≤ c * g(n), for all n ≥ n0
Intuitively, this means that the function f(n) does not grow faster than g(n), or that the
function g(n) is an upper bound for f(n), for all sufficiently large n.
Examples:
1 = O(n)
n = O(n²)
log(n) = O(n)
2n + 1 = O(n)
Exercise. Let us prove n² + 2n + 1 = O(n²). We must find c and n0 such that n² + 2n
+ 1 ≤ c * n². Let n0 = 1; then for n ≥ 1 we have 2n ≤ 2n² and 1 ≤ n², so
n² + 2n + 1 ≤ n² + 2n² + n² = 4 n²
Therefore, the constant c = 4 works.
An algorithm is said to run in constant time if it requires the same amount of time
regardless of the input size. Examples:
array: accessing any element
fixed-size stack: push and pop methods
fixed-size queue: enqueue and dequeue methods
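As a minimal sketch (the method name is an illustrative choice), accessing an array element by index takes the same time no matter how large the array is:

    // Constant time: a single array access, independent of a.length.
    static int getFirst(int[] a) {
        return a[0];
    }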
An algorithm is said to run in linear time if its time execution is directly proportional
to the input size, i.e. time grows linearly as input size increases. Examples:
array: linear search, traversing, finding the minimum
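For instance, finding the maximum element of an unsorted array must examine every element once; a minimal sketch (the method name is an illustrative choice):

    // Linear time: the loop body executes n - 1 times for an n-element array.
    static int findMax(int[] a) {
        int max = a[0];
        for (int i = 1; i < a.length; i++) {
            if (a[i] > max) {
                max = a[i];
            }
        }
        return max;
    }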
An algorithm is said to run in logarithmic time if its time execution is proportional to
the logarithm of the input size. Example:
binary search
Recall the "twenty questions" game - the task is to guess the value of a hidden number
in an interval. Each time you make a guess, you are told whether your guess is too
high or too low. The twenty questions game implies a strategy that uses your guess
to halve the interval size. This is an example of the general problem-solving
method known as binary search:
locate the element a in a sorted (in ascending order) array by first comparing a
with the middle element and then (if they are not equal) dividing the array into
two subarrays; if a is less than the middle element, you repeat the whole
procedure in the left subarray, otherwise in the right subarray. The procedure
repeats until a is found or the subarray has zero size.
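A minimal iterative sketch of this procedure in Java (assuming an int array sorted in ascending order; it returns the index of a, or -1 when the subarray shrinks to zero size):

    // Binary search: halves the subarray [lo, hi] on every iteration,
    // so at most O(log n) comparisons are performed.
    static int binarySearch(int[] data, int a) {
        int lo = 0, hi = data.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;          // avoids overflow of (lo + hi)
            if (data[mid] == a) return mid;        // found
            else if (data[mid] < a) lo = mid + 1;  // continue in right subarray
            else hi = mid - 1;                     // continue in left subarray
        }
        return -1;  // subarray has zero size: a is not present
    }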
Note that log(n) < n for all n ≥ 1. Algorithms that run in O(log n) time do not need
to examine the whole input.
The big-O notation bounds a function only from above. To bound a function from
below we use the big-Omega notation: we say that f(n) = Ω(g(n)) when there exist
constants c > 0 and n0 > 0 such that
f(n) ≥ c * g(n), for all n ≥ n0
Examples:
n = Ω(1)
n² = Ω(n)
n² = Ω(n log(n))
2n + 1 = Ω(n)
To measure the complexity of a particular algorithm means to find both the upper and
lower bounds. A new notation is used in this case. We say that f(n) = Θ(g(n)) if and
only if f(n) = O(g(n)) and f(n) = Ω(g(n)). Examples:
2n = Θ(n)
n² + 2n + 1 = Θ(n²)
Analysis of Algorithms
The term analysis of algorithms is used to describe approaches to the study of the
performance of algorithms. In this course we will perform the following types of
analysis:
the worst-case runtime complexity: the behavior of the algorithm on the worst possible input instance of size n
the best-case runtime complexity: the behavior of the algorithm on the optimal input instance of size n
the average case runtime complexity: the behavior of the algorithm averaged over all possible input instances of size n
the amortized runtime complexity: the worst-case cost of a sequence of operations, averaged over that sequence
Consider a dynamic array stack. In this model push() doubles the array size if
there is not enough space. Since copying an array cannot be performed in constant
time, we say that a single push() cannot always be done in constant time either. In
this section, we will show that push() takes amortized constant time.
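A minimal sketch of such a stack (the class and method names are illustrative choices): because the capacity doubles on every resize, a sequence of n pushes starting from capacity 1 copies fewer than 2n elements in total, so the cost of one push, averaged over the whole sequence, is constant.

    // Dynamic array stack: push() doubles the array when it is full.
    // A single push may copy the whole array (linear time), but across any
    // sequence of n pushes fewer than 2n elements are copied in total,
    // so push() runs in amortized constant time.
    public class DynArrayStack {
        private int[] data = new int[1];
        private int size = 0;

        public void push(int x) {
            if (size == data.length) {              // no room left:
                int[] bigger = new int[2 * data.length];
                for (int i = 0; i < size; i++)      // copy the old contents
                    bigger[i] = data[i];
                data = bigger;                      // capacity is doubled
            }
            data[size++] = x;                       // the cheap, constant-time part
        }

        public int pop() {
            return data[--size];                    // assumes the stack is not empty
        }
    }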