Week 7 - Complexity Analysis

Agenda
1. Introduction
2. How to measure the efficiency of a program?
3. Time Complexity
○ What is Time Complexity
○ How do we measure Time Complexity?
○ Understanding Big O Notation
○ How can we compare two programs?
○ How to calculate Big O Notation?

4. Space Complexity
○ What is Space Complexity
○ How do we measure Space Complexity?

5. Examples of Complexity Analysis
○ Code Examples
○ Built-In Functions Complexity

6. Time/Memory Limit Exceeded

1. Introduction
When we try to solve a problem or develop an application, it is common to have
multiple solutions, each of which is correct and produces the desired output.
However, choosing among these solutions requires careful consideration, because
each one comes with its own cost. The cost of a solution is determined by how much
of the machine's resources it consumes: the more time and memory it uses, the
slower and less effective it becomes.

2. How to measure the efficiency of a program?


When we want to measure how fast a program works, we can use a timer to see
how long it takes to complete a task. However, there are factors that can affect the
time, such as the computer's hardware.

For instance, a program that is not well-written might appear to work fast on a
computer with a high-performance processor, but it might perform poorly on a
slower computer. We also need to take into account other aspects such as the size
of the task (input size) and the way the program was designed (implementation).
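
For illustration, here is a minimal sketch of the timer approach mentioned above
(the task being timed is only a hypothetical placeholder used for this example):

#include <chrono>
#include <iostream>
using namespace std;

// Hypothetical task used only for illustration: sum the first n integers.
long long doWork(int n) {
    long long sum = 0;
    for (int i = 1; i <= n; i++)
        sum += i;
    return sum;
}

int main() {
    int n = 10000000;
    auto start = chrono::steady_clock::now();
    long long result = doWork(n);
    auto finish = chrono::steady_clock::now();
    auto ms = chrono::duration_cast<chrono::milliseconds>(finish - start).count();
    cout << "result = " << result << ", time = " << ms << " ms" << endl;
}

The number printed at the end depends on the machine the code runs on, which is
exactly why timing alone is not a fair measure of efficiency.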

To get a better understanding of how efficient a program is, we can use complexity
analysis. Complexity analysis helps us analyze how much time and space a
program needs to complete a task, based on the input size and how the program is
designed.

By using complexity analysis, we can compare different programs more fairly and
see how well they perform for different input sizes. This helps us create more
efficient solutions that can handle larger tasks. So, complexity analysis is a
valuable tool for evaluating the efficiency of programs and improving their
performance.
3. Time Complexity
● What is Time Complexity?

When we solve problems with computer programs, we want to know how much time it
takes for our program to complete its task. However, just counting the number of
instructions in our program is not enough to measure how efficient it is. We need
to take into account the size of the input and any differences in how the program
is implemented. Time complexity is a way of measuring how the runtime of a program
changes as the size of the input changes. It helps us understand the long-term
behavior of our program, especially when the input gets very large.

● How do we measure Time Complexity?

We measure the time complexity of a program by counting the number of basic
operations it performs. The basic operations are usually comparisons, assignments,
and arithmetic operations. To express time complexity, we use a notation called
Big O notation.

● Understanding Big O Notation

Big O notation is a function that represents how much time our program takes to
complete a task, based on the size of the input. The input size is the input to
the function, and the output is the expected number of instructions that will be
executed.

For example, let's say we have two programs that solve the same problem: one
program (the red solution) seems to be faster for small inputs, but the other
program (the green solution) is faster for larger inputs.
● How can we compare two solutions?

Big O notation can help us here. We can use it to describe how the expected
runtime of each program changes as the size of the input grows. The
program with a lower Big O notation is generally more efficient, because it
takes less time to complete its task as the input gets larger. By describing the
long-term behavior of our program, we can compare different solutions and
choose the one that will be the most efficient in the long run.

● How to calculate Big O Notation?

Big O notation is not interested in calculating the exact number of instructions,
as that is almost impossible due to many factors. Instead, Big O notation is
interested in how the growth of the input affects the time consumed, or the number
of instructions executed. To calculate the Big O notation for any solution, the
first thing we need to do is estimate the expected number of instructions to be
executed depending on the input size. For example, if we have solved a problem and
the expected number of instructions is 2n² + 30n, the first step is to get rid of
the constants, leaving n² + n. Then we find which of the terms grows faster than
the others; this term represents our solution, so the complexity is O(n²).
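
As a small illustration (a hypothetical snippet used only to make the counting
concrete), the following code performs roughly 2n² + 30n basic operations, yet its
complexity is simply O(n²):

#include <iostream>
using namespace std;

int main() {
    int n;
    cin >> n;
    long long ops = 0;  // rough count of the "basic operations" performed
    // Nested loops: the inner body runs n * n times; counting about 2 units of
    // work each time models roughly 2 * n * n operations.
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            ops += 2;
    // A separate loop whose body we count as about 30 units of work,
    // giving roughly 30 * n operations.
    for (int i = 0; i < n; i++)
        ops += 30;
    // The total is about 2n^2 + 30n; after dropping constants the dominant
    // term is n^2, so the time complexity is O(n^2).
    cout << ops << endl;
}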

● Conclusion

The implementation differences are no longer considered, as we treat O(n), O(2n),
and O(3n) as all being O(n). Our focus is on how the function's complexity grows,
rather than the exact number of instructions it executes. The hardware is no longer
a problem either, and the issue of input size is resolved: we now have a function
that, for any value of n, tells us how the expected number of instructions grows.
4. Space Complexity
● What is Space Complexity?
Space complexity is almost the same as time complexity, except that it measures
how much memory a program uses instead of how much time it takes.

● How do we measure Space Complexity?


We measure the space complexity of a program by counting the amount of memory it
uses to complete its task. We consider the memory used by the program itself
(i.e., the code) as well as the memory used by any containers (arrays) and
variables created by the program. We can use Big O notation to describe the space
complexity of a program, just as we do for time complexity: get rid of the
constants and keep the term that grows faster than the others.
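
As a quick sketch (hypothetical code, assuming the input is a size n), a fixed
number of scalar variables costs O(1) memory, while a container whose size depends
on n costs O(n) memory:

#include <iostream>
#include <vector>
using namespace std;

int main() {
    int n;
    cin >> n;
    int a = 0, b = 0;            // a fixed number of variables -> O(1) space
    vector<int> values(n, 0);    // one container whose size grows with n -> O(n) space
    for (int i = 0; i < n; i++)
        values[i] = i + a + b;
    cout << values.size() << endl;   // overall space complexity is O(n)
}
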
5. Examples of Complexity Analysis

● Code Examples

Example: Given two numbers a and b, check if a is divisible by b.

Input: Output:
25 5 Yes
18 8 No

Solution:

#include <iostream>
using namespace std;

int main() {
    int a, b;
    cin >> a >> b;
    cout << ((a % b == 0) ? "Yes" : "No") << endl;
}

○ The Time Complexity of this solution is O(1). This is because, regardless of the
values of a and b, the algorithm executes a fixed number of operations. The growth
of the input does not affect the number of operations performed, resulting in a
constant time complexity of O(1).

○ The Space Complexity of this solution is O(1). This is because only two variables
of type int are allocated for the problem's requirements, so the memory used is
also constant.
Example: Given a positive integer n, find the lowest power of 2 that is
greater than n.

Input: Output:
14 16
20 32

Solution:

#include <iostream>
using namespace std;

int main() {
    int n;
    cin >> n;
    int power = 1;
    while (power <= n)
        power *= 2;
    cout << power << endl;
}

○ The Time Complexity of this solution is O(log n). This is because power doubles
on every iteration, so the loop keeps running until power exceeds n, which takes
about log₂ n iterations. For instance, for n = 14 the value of power goes
1, 2, 4, 8, 16, i.e., four doublings before the loop stops.

○ The Space Complexity of this solution is O(1). This is because only two variables
of type int (n and power) are allocated for the problem's requirements, so the
memory used is also constant.
Example: Given n elements, find the maximum element among them.

Input: Output:
5 50
10 20 13 50 7

Solution:

#include <iostream>
#include <climits>     // for INT_MIN
#include <algorithm>   // for max
using namespace std;

int main() {
    int n;
    cin >> n;
    int mx = INT_MIN;
    for (int i = 0; i < n; i++) {
        int cur;
        cin >> cur;
        mx = max(mx, cur);
    }
    cout << mx << endl;
}

○ The Time Complexity of this solution is O(n) because the loop iterates n
times, making it run in linear time.

○ The Space Complexity of this solution is O(1). This is because only four
variables of type int (n, mx, i, and cur) are allocated for the problem's
requirements, so the memory used is also constant.
Example: Given an array of n elements and an integer k, find if there are
two elements with sum divisible by k.

Input: Output:
5 10 “YES”
2 6 12 8 4

Solution:

#include <iostream>
using namespace std;

const int N = 1005;
int arr[N];

int main() {
    int n, k;
    cin >> n >> k;
    for (int i = 0; i < n; i++)
        cin >> arr[i];
    bool flag = false;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)   // j starts at i + 1 so the two elements are distinct
            if ((arr[i] + arr[j]) % k == 0)
                flag = true;
    cout << ((flag) ? "YES" : "NO") << endl;
}

○ The Time Complexity of this solution is O(n²). This is because the outer loop
iterates n times, and for each of its iterations the inner loop iterates up to n
times, giving on the order of n² pair checks in total.

○ The Space Complexity of this solution is O(n). This is because there is an array
of size N, which needs N * 4 bytes (an int is 4 bytes), and this is the dominant
term.
● Built-In Functions Complexity

Function                          Complexity
min(a,b)                          O(1)
max(a,b)                          O(1)
swap(a,b)                         O(1)
binary_search(arr, arr + n, x)    O(log n)
lower_bound(arr, arr + n, x)      O(log n)
upper_bound(arr, arr + n, x)      O(log n)
pow(a,n)                          O(n)
reverse(arr, arr + n)             O(n)
min_element(arr, arr + n)         O(n)
max_element(arr, arr + n)         O(n)
fill(arr, arr + n, x)             O(n)
count(arr, arr + n, x)            O(n)
find(arr, arr + n, x)             O(n)
sort(arr, arr + n)                O(n log n)
next_permutation(arr, arr + n)    O(n · n!) to generate all n! permutations (a single call is O(n))
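
As a small usage sketch (with hypothetical values; note that binary_search,
lower_bound, and upper_bound assume the range is already sorted), a few of these
functions combined look like this:

#include <iostream>
#include <algorithm>
using namespace std;

int main() {
    int arr[] = {5, 2, 9, 1, 7};
    int n = 5;
    sort(arr, arr + n);                              // O(n log n): arr becomes {1, 2, 5, 7, 9}
    cout << *max_element(arr, arr + n) << endl;      // O(n): prints 9
    cout << binary_search(arr, arr + n, 7) << endl;  // O(log n): prints 1 (7 is found)
}
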
6. Time/Memory Limit Exceeded
There are two types of verdicts that may appear to most of us while solving a
problem, namely "Time Limit Exceeded" (TLE) and "Memory Limit Exceeded" (MLE).
These verdicts mean that our solution may be 100% correct, but it needs some
optimization to be accepted. In such situations, we have to think of a better way
to solve the problem, or optimize the time or memory it needs.
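
For instance (a sketch that assumes the same input format as the earlier pair-sum
example), the O(n²) nested-loop solution above can be reworked to count remainders
modulo k, which runs in O(n + k) time and uses O(k) extra memory:

#include <iostream>
#include <vector>
using namespace std;

int main() {
    int n, k;
    cin >> n >> k;
    vector<long long> cnt(k, 0);   // cnt[r] = how many elements leave remainder r when divided by k
    for (int i = 0; i < n; i++) {
        int x;
        cin >> x;
        cnt[x % k]++;
    }
    bool found = (cnt[0] >= 2);    // two elements that are both divisible by k
    for (int r = 1; r <= k / 2; r++) {
        if (r == k - r)            // k is even and r = k/2: need two elements in the same bucket
            found = found || (cnt[r] >= 2);
        else                       // remainders r and k - r pair up to a multiple of k
            found = found || (cnt[r] >= 1 && cnt[k - r] >= 1);
    }
    cout << (found ? "YES" : "NO") << endl;
}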

Theoretically, a common computer can perform about 10⁹ operations in one second.
However, in practice our code should not take more than about 10⁸ operations, due
to several factors such as the constant factors ignored in Big O notation and
online judge limitations.

For example, if we are given n (1 <= n <= 100000) and our complexity is
O(n log n), which is about 10⁶ operations, that is fine. But if our complexity is
O(n²), which is about 10¹⁰ operations, we will get a Time Limit Exceeded error.
