
Lecture 2 - Big O Notation

This document discusses algorithmic complexity and asymptotic analysis. It introduces big-O notation for describing how the running time of an algorithm grows relative to the size of the input. Common complexity classes described include constant, logarithmic, linear, and exponential time. An example algorithm is analyzed to determine its complexity class. The goal is to quantify algorithm efficiency to understand how it will scale with larger input sizes.


ALGORITHMS AND DATA STRUCTURES

LECTURE 2 – ASYMPTOTIC ANALYSIS AND “BIG-O” NOTATION
Askar Khaimuldin
[email protected]
CONTENT
1. Algorithm efficiency
2. Order of growth
3. Big-O notation
4. Analyzing the running time of a program (Example)
5. Complexity classes
6. Constant complexity
7. Logarithmic complexity
8. Linear complexity
9. Exponential complexity
ALGORITHM EFFICIENCY
Computers are getting faster
 Does efficiency still matter?
 What about a very large dataset?

A program can be implemented in several ways

How can we measure efficiency?
 Using a timer
 Counting the number of operations
 Order of growth
ALGORITHM EFFICIENCY: USING TIMER
Timers are inconsistent
 Time varies across inputs, but a timer alone cannot express a relationship between input size and running time
 Results can differ significantly from run to run (because of CPU, memory, GC, operating system, network, etc.)

Our first challenge is to determine how to make quantitative measurements of the running time of our program, using random numbers for input

Hint: Math.random()
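As a minimal sketch of such a measurement (the class name and input sizes are illustrative, not from the slides), one might time a simple summing pass over arrays filled with `Math.random()` values:

```java
public class TimerDemo {
    // The algorithm we want to time: sum all elements, one pass.
    static double sum(double[] a) {
        double total = 0;
        for (double x : a) total += x;
        return total;
    }

    public static void main(String[] args) {
        // Time the same algorithm on random inputs of growing size.
        for (int n = 1_000; n <= 1_000_000; n *= 10) {
            double[] a = new double[n];
            for (int i = 0; i < n; i++) a[i] = Math.random();

            long start = System.nanoTime();
            sum(a);
            long elapsed = System.nanoTime() - start;

            System.out.println("n = " + n + "  time = " + elapsed + " ns");
        }
    }
}
```

Repeated runs will print different numbers for the same `n`, which is exactly the inconsistency of timers described above.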
ALGORITHM EFFICIENCY : COUNT OPERATIONS
Assume that any of comparison, addition, value setting
and retrieving, etc. is just 1 operation

How many of those operations do I use in my


algorithm?
This could be used to give us a sense of efficiency

2 + 2 + (n+1) + n + n + 2n and 1 more for “return”


= 5n + 6 (worst case)
The worst-case scenario: all elements are zeros
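The program whose operations are tallied above is not reproduced in this text; as a hypothetical reconstruction consistent with the 5n + 6 count (all names are illustrative), consider a method that counts the zeros in an array:

```java
public class OpCount {
    // One possible tally, in the spirit of the slide's 5n + 6:
    //   int count = 0;      -> 2 operations (create, set)
    //   int i = 0;          -> 2 operations (create, set)
    //   i < a.length        -> n + 1 comparisons
    //   i++                 -> n increments
    //   a[i] == 0           -> n comparisons
    //   count = count + 1   -> up to 2n operations (add, set)
    //   return count        -> 1 operation
    // Worst case (all elements are zeros): about 5n + 6 operations.
    static int countZeros(int[] a) {
        int count = 0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] == 0) {
                count = count + 1;
            }
        }
        return count;
    }
}
```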
ORDER OF GROWTH
If f(n) ~ Cg(n) for some constant C > 0, then the order
of growth of f(n) is g(n)
 Ignores leading coefficient
 Ignores lower-order terms
 Focuses on dominant terms

Examples:
1) 2n^3 + 5n^2 + 0.15n + 7 ≈ n^3
2) 2n^3 + 3^n ≈ 3^n

Big-O notation is a mathematical notation that describes the limiting behavior of a function when the
argument tends towards a particular value or infinity.
BIG-O NOTATION
Law of addition: O(f(n)) + O(g(n)) = O(f(n) + g(n))
 Used with sequential statements
 Example: O(n) + O(n^2) = O(n + n^2) = O(n^2)

Law of multiplication: O(f(n)) * O(g(n)) = O(f(n) * g(n))
 Used with nested statements/loops or recursion
 Example: O(n) * O(n) = O(n^2)
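Both laws can be seen in a small sketch (illustrative code, not from the slides): sequential loops add their costs, nested loops multiply them.

```java
public class BigOLaws {
    // Law of addition: sequential statements.
    // O(n) loop followed by an O(n^2) loop -> O(n + n^2) = O(n^2).
    static long sequential(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) ops++;       // O(n)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) ops++;   // O(n^2)
        return ops;                              // exactly n + n^2
    }

    // Law of multiplication: nested statements.
    // A loop of n iterations inside a loop of n iterations -> O(n * n) = O(n^2).
    static long nested(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) ops++;
        return ops;                              // exactly n * n
    }
}
```

For n = 10, `sequential` performs 110 basic steps and `nested` performs 100; as n grows, the n^2 term dominates both, which is why the lower-order n is dropped.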
ANALYZING THE RUNNING TIME OF A PROGRAM
COMPLEXITY CLASSES
CONSTANT COMPLEXITY
Complexity is independent of the input
 It does not matter how many operations the algorithm performs, as long as that number does not depend on the input size
 Time complexity is constant

Can have loops or recursive calls, but only if the number of iterations or calls is independent of the input size
 It is still O(1) if it performs a constant number of operations
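A brief illustrative sketch of O(1) methods (the names are hypothetical):

```java
public class ConstantTime {
    // O(1): one array access; the work does not depend on a.length.
    static int middle(int[] a) {
        return a[a.length / 2];
    }

    // Still O(1): the loop always runs exactly 100 times,
    // regardless of the value of x or the size of any input.
    static int fixedLoop(int x) {
        int s = 0;
        for (int i = 0; i < 100; i++) s += x;
        return s;
    }
}
```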
LOGARITHMIC COMPLEXITY
Complexity grows as the log of the size of one of its inputs
 Occurs when the problem size is divided by some constant K > 1 at each iteration (halved, in most cases)

How about a loop whose counter is reduced by a constant instead of being divided?
 The problem size is not divided at each iteration, so it is not logarithmic
 O(n/2) = O(n)
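Binary search is the classic O(log n) example: it discards half of the remaining range at every step. A standard sketch, assuming the array is sorted in ascending order:

```java
public class LogTime {
    // O(log n): each iteration halves the remaining search range,
    // so a sorted array of n elements needs at most ~log2(n) + 1 steps.
    static int binarySearch(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi)
            if      (a[mid] < key) lo = mid + 1;  // key is in the upper half
            else if (a[mid] > key) hi = mid - 1;  // key is in the lower half
            else return mid;                      // found
        }
        return -1; // not found
    }
}
```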
LINEAR COMPLEXITY
Can take the form of iterative loops or recursion

 Iterative loops: complexity depends on the number of iterations
 Recursion: complexity depends on the number of recursive function calls
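Both linear forms can be sketched as follows (illustrative code): an iterative version with n loop iterations, and a recursive version with n recursive calls.

```java
public class LinearTime {
    // O(n) iteratively: one loop iteration per element.
    static int sumIterative(int[] a) {
        int s = 0;
        for (int x : a) s += x;
        return s;
    }

    // O(n) recursively: one recursive call per element.
    // Call as sumRecursive(a, 0).
    static int sumRecursive(int[] a, int i) {
        if (i == a.length) return 0;            // base case: past the end
        return a[i] + sumRecursive(a, i + 1);   // one call per element
    }
}
```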
EXPONENTIAL COMPLEXITY
Exponential time complexity denotes an algorithm whose running time roughly doubles (in most cases) with each addition to the input data set

A good example is the naive recursive solution for the Fibonacci numbers
 Complexity is O(2^n)
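The naive recursive Fibonacci mentioned above can be sketched as (illustrative):

```java
public class ExpTime {
    // Naive recursive Fibonacci: each call spawns two more calls,
    // so the call tree roughly doubles at every level of recursion,
    // giving the O(2^n) bound quoted above.
    static long fib(int n) {
        if (n <= 1) return n;                // base cases: fib(0)=0, fib(1)=1
        return fib(n - 1) + fib(n - 2);      // two recursive calls per step
    }
}
```

Adding one to n roughly doubles the work, which is exactly the "doubles with each addition to the input" behavior described above.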
LITERATURE
Algorithms, 4th Edition, by Robert Sedgewick and Kevin Wayne, Addison-Wesley
 Chapter 1.4
GOOD LUCK!
