Design & Analysis of Algorithms - Topic 3 - Time Complexity Basics

The document discusses the design and analysis of algorithms, focusing on time complexity and asymptotic notations. It explains the RAM model as a computational model, the need for abstraction in performance analysis, and the use of Big O, Big Omega, and Theta notations for estimating algorithm efficiency. Additionally, it covers the classification of growth functions and input cases for analyzing algorithm performance.


DESIGN & ANALYSIS OF

ALGORITHMS

Topic 3
Asymptotic Notations,
Time Complexity BASICS

INSTRUCTOR: Sulaman Ahmad Naz


Time Complexity of Algorithms
■ When we talk about the time complexity of algorithms,
what do we actually mean?
■ It is not the actual running time, since that is not a
standard measure:
– It varies from machine to machine and compiler to compiler.
– It even varies on one machine due to the availability of
resources.
Need for Model
■ Design assumption
– Level of abstraction which meets our requirements
■ Low-level details will not be considered
■ Analysis independent of the variations in
– Machine
– Operating system
– Programming languages
– Compiler, etc.
Model of Computation
RAM Model:
■ Our model will be an abstraction of a standard
generic single-processor machine, called a
random access machine or RAM.
■ A RAM is assumed to be an idealized machine
– Infinitely large random-access memory
– Instructions execute sequentially
– No concurrent operations
Model of Computation
■ Every instruction is in fact a basic operation on two
values in the machine's memory, and takes unit time.
These values might be characters or integers.
■ Examples of basic operations include:
– Assigning a value to a variable
– Arithmetic operations (+, -, ×, /, modulus, floor, ceiling,
exponent)
– Performing any comparison, e.g. a < b
– Boolean operations
– Accessing an element of an array
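As an illustration of the unit-cost assumption, here is a minimal sketch (our own example, not from the slides; the function name `sum_with_op_count` is hypothetical) that charges one unit for each assignment, addition, and element access while summing a list:

```python
# Illustrative sketch (not from the slides): charge one unit of time for
# each basic operation while summing a list, as the RAM model assumes.

def sum_with_op_count(a):
    ops = 0
    total = 0          # one assignment
    ops += 1
    for x in a:        # per element: one array access + one loop assignment
        ops += 2
        total += x     # per element: one addition + one assignment
        ops += 2
    return total, ops

total, ops = sum_with_op_count([3, 1, 4, 1, 5])
# For n elements the count is 1 + 4n, i.e. linear in n.
```

Under this accounting, the exact constant (here 4 per element) depends on how finely we split operations, which is one reason we later keep only the growth rate.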
Drawbacks in RAM Model
■ First poor assumption
– We assumed that each basic operation takes constant time, i.e. the model
allows
■ Adding
■ Multiplying
■ Comparing
■ Exponentiating, etc.
two numbers of any length in constant time
■ Addition of two numbers takes unit time!
– Not good, because the numbers may be arbitrarily large
■ Addition and multiplication both take unit time!
– Again, a very bad assumption
Justification for the Model

■ But with all these weaknesses, our model is not so
bad, because we have to give
– A comparison, not the absolute analysis of any algorithm
– We have to deal with large inputs, not with small sizes
– The model seems to describe the computational
power of modern non-parallel machines well
– It usually gives excellent predictions of performance on actual
machines
Summary: Computational Model
■ Analysis will be performed with respect to this
computational model for comparison of algorithms
■ We will give an asymptotic analysis, not a detailed
comparison, i.e. we analyze behavior for large inputs
■ We will use generic uniprocessor random-access
machine (RAM) in analysis
– All memory equally expensive to access
– No concurrent operations
– All reasonable instructions take unit time, except, of course,
function calls
Conclusion
■ What, Why and Where Algorithms?
■ Designing Techniques
■ Problem solving Phases and Procedure
■ Model of computations
– Major assumptions at design and analysis level
– Merits and demerits, justification of assumptions taken
■ We argued that algorithms are a technology
■ Discussed the importance of algorithms
– In almost all areas of computer science and engineering
– Algorithms make the difference between users and developers
Can we do Exact Measure of Efficiency?

■ An exact, not asymptotic, measure of efficiency can
sometimes be computed, but it usually requires certain
assumptions concerning the implementation.
■ In theoretical analysis, computational complexity
– Estimated in asymptotic sense
– Estimating for large inputs
Can we do Exact Measure of Efficiency?
■ Big O, Big Omega, Theta, etc. notations are used to
express the complexity
■ Asymptotic notations are used because different
implementations of an algorithm may differ in efficiency
■ The efficiencies of two implementations are related
– By a constant multiplicative factor
– Called a hidden constant
Time Complexity of Algorithms
■ We actually calculate the estimation, not actual
running time.
■ Complexity can be viewed as the maximum number of
primitive operations that a program may execute.
■ Regular operations are:
– Addition
– Multiplication
– Assignment
– Accessing array element, etc.
■ We may leave some operations uncounted and
concentrate on those that are performed the largest
number of times
Time Complexity of Algorithms
■ We define complexity as a numerical function T(n):
time versus the input size n.
■ We want to define the time taken by an algorithm without
depending on the implementation details.
■ T_algorithm1(n) = n(n-1) = n² - n    // quadratic time
■ T_algorithm2(n) = 2(n-1) = 2n - 2    // linear time
Asymptotic Complexity
■ The resulting function gives only an approximate
measure of efficiency of the original function.
■ However, this approximation is sufficiently close
to the original.
■ This measure of efficiency is called asymptotic
complexity.
Elimination of Lower Order Terms
■ Any terms that do not substantially change the
function’s magnitude should be eliminated from the
function.

■ Consider the following quadratic function:

■ For small values of n, the last term, 1,000, is the
largest. But we calculate time complexity to see the
behavior on large input sets.
■ For large n, the highest-order term dominates, so the
function behaves approximately as n² (≈ n²).
Asymptotic Notations
■ Asymptotic notations are mathematical tools to
represent the complexity of algorithms for asymptotic
analysis.

1. Big-Oh (O) Notation
2. Big-Omega (Ω) Notation
3. (Big) Theta (Θ) Notation
4. Small-Oh (o) Notation
5. Small-Omega (ω) Notation
Big-Oh (O) Notation
■ The Big O notation defines an upper bound of an
algorithm.

■ It bounds a function only from above.

■ It is usually the lowest upper bound.


f(n)c.g(n)
Big-Oh (O) Notation for all nN0
& c>0, n1
Let f(n) is the function of
growth of time for some
input n.
Then there is a function
g(n) such that c.g(n) is an
upper bound of f(n).
In other words, after some
value N0, the value of
c.g(n) will always be
greater than or equal to
f(n).
f(n)c.g(
n)
Example
■ Let f(n) = 3n+2 and g(n) = n. Can we say that
f(n) = O(g(n))?
■ Take c = 4. Then f(n) ≤ 4·g(n) for all n ≥ N₀ = 2:
(c = 1, 2, 3 do not work, since 3n+2 > 3n for all n ≥ 1.)

n  | f(n) | 1·g(n) | 2·g(n) | 3·g(n) | 4·g(n) | 5·g(n)
1  |  5   |   1    |   2    |   3    |   4    |   5
2  |  8   |   2    |   4    |   6    |   8    |  10
3  | 11   |   3    |   6    |   9    |  12    |  15
4  | 14   |   4    |   8    |  12    |  16    |  20
5  | 17   |   5    |  10    |  15    |  20    |  25
6  | 20   |   6    |  12    |  18    |  24    |  30
7  | 23   |   7    |  14    |  21    |  28    |  35
8  | 26   |   8    |  16    |  24    |  32    |  40
9  | 29   |   9    |  18    |  27    |  36    |  45
10 | 32   |  10    |  20    |  30    |  40    |  50

■ Hence f(n) = O(g(n)) = O(n)
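The table's claim can also be checked numerically. This small sketch (our addition, not part of the slides) verifies that c = 4 works from N₀ = 2 onward, while c = 3 never works:

```python
# Numeric check of the slide's example: f(n) = 3n + 2, g(n) = n.

def f(n):
    return 3 * n + 2

# With c = 4 and N0 = 2, f(n) <= c*g(n) holds for every tested n.
c4_holds = all(f(n) <= 4 * n for n in range(2, 1001))

# With c = 3 the upper bound fails for every n, since 3n + 2 > 3n.
c3_fails = all(f(n) > 3 * n for n in range(1, 1001))
```

Note that c = 4 fails at n = 1 (f(1) = 5 > 4), which is exactly why the definition only requires the bound beyond some N₀.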
Big-Omega (Ω) Notation
■ The Big Ω notation defines a lower bound of an
algorithm.

■ It bounds a function only from below.

■ It is usually the greatest lower bound.


f(n)c.g(n)
Big-Omega (Ω) Notation for all nN0
& c>0, n1
Let f(n) is the function of
growth of time for some
input n.
Then there is a function
g(n) such that c.g(n) is a
lower bound of f(n).
In other words, after
some value N0, the value
of c.g(n) will always be
less than or equal to f(n).
f(n)c.g(n)
for all nN0
Example
■ Let f(n) = 3n+2 and g(n) = n. Can we say that
f(n) = Ω(g(n))?
■ Take c = 3. Then f(n) ≥ 3·g(n) for all n ≥ N₀ = 1:
(c = 4 and c = 5 fail as lower bounds once n > 2.)

n  | f(n) | 1·g(n) | 2·g(n) | 3·g(n) | 4·g(n) | 5·g(n)
1  |  5   |   1    |   2    |   3    |   4    |   5
2  |  8   |   2    |   4    |   6    |   8    |  10
3  | 11   |   3    |   6    |   9    |  12    |  15
4  | 14   |   4    |   8    |  12    |  16    |  20
5  | 17   |   5    |  10    |  15    |  20    |  25
6  | 20   |   6    |  12    |  18    |  24    |  30
7  | 23   |   7    |  14    |  21    |  28    |  35
8  | 26   |   8    |  16    |  24    |  32    |  40
9  | 29   |   9    |  18    |  27    |  36    |  45
10 | 32   |  10    |  20    |  30    |  40    |  50

■ Hence f(n) = Ω(g(n)) = Ω(n)


(Big)-Theta (Θ) Notation
■ The Theta (Θ) notation defines both upper and lower bounds
of an algorithm.
■ It bounds a function both from above and below.
■ It is also called the tight bound, and often called the exact
bound.
■ It is used when we have the same function as lower as well
as upper bound.

(Big)-Theta (Θ) Notation
c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ N₀, for some c₁, c₂ > 0, N₀ ≥ 1
Let f(n) be the function of growth of time for some input n.
Then there is a function g(n) such that c₁·g(n) is a lower
bound and c₂·g(n) is an upper bound of f(n).
In other words, after some value N₀, the value of c₁·g(n)
will always be less than or equal to f(n), and the value of
c₂·g(n) will always be greater than or equal to f(n).
Example
■ Let f(n) = 3n+2 and g(n) = n. Can we say that
f(n) = Θ(g(n))?
■ Take c₁ = 3 and c₂ = 4. Then 3·g(n) ≤ f(n) ≤ 4·g(n)
for all n ≥ N₀ = 2:

n  | f(n) | 1·g(n) | 2·g(n) | 3·g(n) | 4·g(n) | 5·g(n)
1  |  5   |   1    |   2    |   3    |   4    |   5
2  |  8   |   2    |   4    |   6    |   8    |  10
3  | 11   |   3    |   6    |   9    |  12    |  15
4  | 14   |   4    |   8    |  12    |  16    |  20
5  | 17   |   5    |  10    |  15    |  20    |  25
6  | 20   |   6    |  12    |  18    |  24    |  30
7  | 23   |   7    |  14    |  21    |  28    |  35
8  | 26   |   8    |  16    |  24    |  32    |  40
9  | 29   |   9    |  18    |  27    |  36    |  45
10 | 32   |  10    |  20    |  30    |  40    |  50

■ Hence f(n) = Θ(g(n)) = Θ(n)


Small-Oh (o) Notation
■ The small o notation also defines an upper bound of an
algorithm.
■ It bounds a function only from above.
■ Big-Oh can be any upper bound, while small-oh is an
upper bound that is not asymptotically tight.
Small-Omega (ω) Notation
■ The small ω notation also defines a lower bound of an
algorithm.
■ It bounds a function only from below.
■ Big-Omega can be any lower bound, while small-omega is a
lower bound that is not asymptotically tight.
Asymptotic Notations
[Figure: growth of time vs. n for the different asymptotic bounds, e.g. Ω(n) and ω(n)]
Growth of functions
Usually growth of functions
can be categorized into the
following:
– Constant time
– Logarithmic time
– Linear time
– Linearithmic (n log n) time
– Polynomial time
– Exponential time
– Factorial time
Remarks
■ Most of the time, we are concerned with the
upper bound of the worst-case complexity of
algorithms.
■ It means we want to place the algorithm in the
appropriate category, such that on any
(worst) input, we know the maximum time
required to transform input to output.
■ So, most of the time, we will use the Big-Oh notation.


Input Cases & Analysis Cases
■ There are three types of input cases.
– Best Case
– Worst Case
– Arbitrary/Random Case

■ There are three types of analysis cases.


– Best Case Analysis
– Worst Case Analysis
– Average Case Analysis
Example
■ Let's take the example of Linear (Sequential) Search.
■ It scans the array elements one by one, searching for
the required value.
■ If there are n elements in the array, and key is the value
to be searched, then the algorithm may be roughly
outlined as:
– found = false
– For i = 1 to n
–   If Array[i] = key
–     found = true & stop
–   End if
– End for
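The outline above can be made runnable; this sketch (Python used here for illustration, and the name `linear_search` is our own) also counts comparisons, so the best and worst cases can be observed directly:

```python
def linear_search(arr, key):
    """Scan arr left to right; return (index, comparisons), or (-1, n)."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == key:        # found = true & stop
            return i, comparisons
    return -1, comparisons      # key not found

nums = [7, 3, 9, 1, 5]
best = linear_search(nums, 7)   # best case: key is first -> 1 comparison
worst = linear_search(nums, 2)  # worst case: key absent -> n comparisons
```

Returning the comparison count alongside the index makes the case analysis on the next slide concrete: 1 comparison in the best case, n in the worst.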
Example
[Figure: an array of elements num_1, num_2, num_3, num_4, …, num_n]
■ There are three types of input cases.
– Best Case: key = num_1
– Worst Case: key = num_n, or key not found
– Arbitrary/Random Case: key = any number
■ There are three types of analysis cases.
– Best Case Analysis: 1 = Θ(1)
– Worst Case Analysis: n = Θ(n)
– Average Case Analysis: ≈ n/2 + 1 = Θ(n)
Time Complexity of Algorithms
■ We have already talked about how to get the time
complexity function of any algorithm.

■ We consider the basic operations (computational steps) only:
– Arithmetic operations
– Logical operations
– Relational operations
– Accessing the element of an array
– Assignment operation
■ We do not consider all the operations. We consider only the
most dominant ones, as we only need an approximation.
Time Complexity of Iterations
■ What if a set of instructions is repeated many
times in an algorithm?
■ Usually a set of code is repeated using either of
two techniques:
– Loops
  ■ For Loop
  ■ While Loop
– Recursion
Iterative vs. Recursive Time

Iterative:                Recursive:
A()                       A(n)
{                         {
  For j = 1 to n            If (-----)
    Print "Hello"             A(n/2)
}                         }
Time Complexity (Iterations)
For i=1 to n
  <computational instruction>
T(n) = n = O(n)
-----------------------------------------------------------
For i=1 to n
  For j=1 to n
    <computational instruction>
T(n) = n·n = n² = O(n²)

i | 1 | 2 | 3 | 4 | 5 | 6 | 7 | … | n
j | n | n | n | n | n | n | n | … | n
Time Complexity (Iterations)
For i=1 to n
  For j=1 to n
    For k=1 to n
      <computational instruction>
T(n) = n·n·n = n³ = O(n³)
----------------------------------------------------------------
For i=1 to n
  For j=1 to n
    For k=1 to n
      For l=1 to n
        <computational instruction>
T(n) = n·n·n·n = n⁴ = O(n⁴)
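The counts above can be confirmed by simulation. This sketch (our own addition; `nested_loop_count` is a hypothetical helper) counts executions of the innermost instruction for any number of nested loops of size n:

```python
def nested_loop_count(n, depth):
    # Count how many times the innermost instruction runs when `depth`
    # loops, each iterating 1..n, are nested inside one another.
    count = 0

    def run(level):
        nonlocal count
        if level == depth:
            count += 1          # <computational instruction>
            return
        for _ in range(n):
            run(level + 1)

    run(0)
    return count

# Matches the slides: n^2 for two loops, n^3 for three, n^4 for four.
```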
Time Complexity (Iterations)
i=1
s=1
While(s<=n)
  i=i+1
  s=s+i
  <computational instruction>
T(n) = k (the number of iterations)

s | 1 | 3 | 6 | 10 | 15 | 21 | 28 | … | n
i | 1 | 2 | 3 |  4 |  5 |  6 |  7 | … | k

Things to remember
Sum of the 1st m natural numbers = m(m+1)/2
Time Complexity (Iterations)
i=1
s=1
While(s<=n)
  i=i+1
  s=s+i
  <computational instruction>

s | 1 | 3 | 6 | 10 | 15 | 21 | 28 | … | n
i | 1 | 2 | 3 |  4 |  5 |  6 |  7 | … | k

After k iterations, s reaches n with n = k(k+1)/2, i.e. 2n = k² + k.
Solve for k by the quadratic formula: k ≈ √(2n),
so T(n) = k = O(√n).
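Simulating the loop confirms the √n behavior. In this sketch (our own; `triangular_loop_count` is a hypothetical name), s runs through the triangular numbers, and the iteration count k tracks √(2n):

```python
import math

def triangular_loop_count(n):
    # Simulate the slide's loop: s takes triangular values 1, 3, 6, 10, ...
    i, s, k = 1, 1, 0
    while s <= n:
        k += 1          # <computational instruction>
        i += 1
        s += i
    return k

# k is the largest integer with k(k+1)/2 <= n, so k ~ sqrt(2n) = O(sqrt(n)).
```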
Time Complexity (Iterations)
For i=1; i²<=n; i++
  <computational instruction>

The loop runs while i² ≤ n, i.e. while i ≤ √n,
so T(n) = O(√n).
Time Complexity (Iterations)
T(n)=100+200+…+n.10
For i=1 to n =100(1+2+…+n)
For j=1 to i =100.n(n+1)/2
For k=1 to 100 =O(n2)
<computational
instruction>
i 1 2 3 4 … n
j 1 times 2 times 3 times 4 times … n times
k 1x100 2x100 3x100 4x100 … nx100
times times times times times
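Direct simulation (our own sketch, with the hypothetical name `triple_loop_count`) reproduces the closed form 100·n(n+1)/2:

```python
def triple_loop_count(n):
    # Count executions of the innermost instruction in:
    #   for i = 1 to n: for j = 1 to i: for k = 1 to 100
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            for k in range(100):
                count += 1      # <computational instruction>
    return count

# Closed form from the slide: 100 * n(n+1)/2, i.e. O(n^2);
# the constant factor 100 disappears in the Big-Oh.
```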
Things to remember
Sum of squares of the 1st m natural numbers = m(m+1)(2m+1)/6

Time Complexity (Iterations)
For i=1 to n
  For j=1 to i²
    For k=1 to n/2
      <computational instruction>
T(n) = 1·(n/2) + 4·(n/2) + 9·(n/2) + … + n²·(n/2)
     = (n/2)·(1² + 2² + 3² + … + n²)
     = (n/2)·n(n+1)(2n+1)/6
     = O(n⁴)

i | 1          | 2          | 3          | 4           | … | n
j | 1 time     | 4 times    | 9 times    | 16 times    | … | n² times
k | n/2×1 times| n/2×4 times| n/2×9 times| n/2×16 times| … | n/2×n² times
Time Complexity (Iterations)
i | 1=2⁰ | 2=2¹ | 4=2² | 8=2³ | … | n=2^k

For i=1 to n & i=i*2
  <computational instruction>
The loop stops when i reaches n, i.e. n = 2^k:
  log₂n = log₂2^k = k·log₂2, so k = log₂n
T(n) = k+1 = log₂n + 1 = O(log₂n)
---------------------------
For i=1 to n & i=i*3
  <computational instruction>
T(n) = O(log₃n)
---------------------------
For i=1 to n & i=i*m
  <computational instruction>
T(n) = O(log_m n)
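A quick simulation (our own sketch; `multiplicative_loop_count` is a hypothetical name) confirms that multiplying the counter by m each step gives ⌊log_m n⌋ + 1 iterations:

```python
import math

def multiplicative_loop_count(n, m=2):
    # Simulate: for i = 1 to n, with i = i*m after each iteration.
    i, count = 1, 0
    while i <= n:
        count += 1      # <computational instruction>
        i *= m
    return count

# count = floor(log_m(n)) + 1, i.e. O(log n) for any fixed base m.
```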
Time Complexity (Iterations)
For i=n/2 to n
  For j=1 to n/2
    For k=1 to n & k=k*2
      <computational instruction>
T(n) = (n/2)·(n/2)·log₂n = O(n²·log₂n)
-----------------------------------------------------------
For i=n/2 to n
  For j=1 to n & j=2*j
    For k=1 to n & k=k*2
      <computational instruction>
T(n) = (n/2)·log₂n·log₂n = O(n·(log₂n)²)
Time Complexity (Iterations)

While(n>1)
  n=n/2
  <computational instruction>
T(n) = O(log₂n)

Note: "i = 1 to n with i=i*2" and "i = n to 1 with i=i/2"
take the same number of iterations.
Time Complexity (Iterations)
For i=1 to n
  For j=1 to n & j=j+i
    <computational instruction>
T(n) = n + n/2 + n/3 + … + n/n
     = n·(1 + 1/2 + 1/3 + … + 1/n)
     = n·Hₙ        (Hₙ ≈ ln n + γ, where γ = 0.5772156649…)
     ≈ n·ln n
     = O(n·ln n)

i | 1       | 2         | 3         | … | n
j | n times | n/2 times | n/3 times | … | n/n times
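The harmonic-sum count can also be verified by simulation. This sketch (our own; `harmonic_loop_count` is a hypothetical name) counts the inner executions and compares them with n·ln n:

```python
import math

def harmonic_loop_count(n):
    # Count executions of the inner instruction in:
    #   for i = 1 to n: for j = 1 to n, with j = j + i
    count = 0
    for i in range(1, n + 1):
        j = 1
        while j <= n:
            count += 1      # <computational instruction>
            j += i
    return count

# count = sum over i of ceil(n/i) ~ n * H_n ~ n * ln(n), i.e. O(n log n).
```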
End of Lecture

THANK YOU
