Time Complexity

By: Manoj
Time Complexity
• Defined as the amount of time taken by an algorithm to run, as a function of the
length of the input.

• It measures the time taken to execute each statement of code in an algorithm.

• The idea behind time complexity is to describe an algorithm's execution time in a way
that depends only on the algorithm and its input, not on the machine it runs on.
Common notations used to express time complexity are:
• Big-oh (O) Notation: Denotes an upper bound on an algorithm's running time; commonly used for the worst case.

• Big-omega (Ω) Notation: Denotes a lower bound on running time; commonly used for the best case.

• Big-Theta (Θ) Notation: Denotes a tight bound (both upper and lower); often quoted for the average case.


• Best Case − Minimum time required for program execution.

• Average Case − Average time required for program execution.

• Worst Case − Maximum time required for program execution.
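
As an illustrative sketch (not from the original slides), a linear search makes these
cases concrete: the best case finds the target at the first position, while the worst
case scans the entire array.

#include <iostream>
using namespace std;

// Linear search: best case O(1) (target at index 0),
// worst case O(n) (target at the end or absent),
// average case about n/2 comparisons, which is still O(n).
int linearSearch(const int arr[], int size, int target) {
    for (int i = 0; i < size; ++i) {
        if (arr[i] == target) return i;
    }
    return -1;
}

int main() {
    int a[] = {7, 3, 9, 1, 5};
    cout << linearSearch(a, 5, 7) << endl;  // best case: found immediately (index 0)
    cout << linearSearch(a, 5, 4) << endl;  // worst case: scans all 5 elements, prints -1
    return 0;
}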


Common time complexities and their descriptions
Constant time – O(1)
• An algorithm is said to have constant time, with order O(1), when its running time is
not dependent on the input size n.

• Irrespective of the input size n, the runtime will always be the same.

• Example: It's as quick as grabbing one ingredient from the kitchen, no matter how many
ingredients we have.
Example

#include <iostream>
using namespace std;

int main() {
    int x = 42;
    // Printing a single value takes the same amount of time
    // regardless of any input size: O(1).
    cout << "The value of x is: " << x << endl;
    return 0;
}
Example 2
#include <iostream>

// A function that returns the square of a number
int square(int n) {
    return n * n;  // a single multiplication: O(1)
}

int main() {
    int num = 5;
    int result = square(num);
    std::cout << "The square of " << num << " is: " << result << std::endl;
    return 0;
}
• In this example, the square function calculates the square of an integer n.

• The time it takes to execute this function is constant, regardless of the value of
n.

• Whether n is 5 or 1,000, the function performs a single multiplication operation,
which takes a constant amount of time.

• Therefore, the time complexity of the square function is O(1).


Linear time – O(n)
• An algorithm is said to have a linear time complexity when the running time
increases linearly with the length of the input.

• When an algorithm must examine every value in the input data once, it has this order,
O(n).

• Example: This is like making a sandwich for each person at a picnic. If you have
10 people, you make 10 sandwiches. If you have 100 people, you make 100
sandwiches. The time it takes grows directly with the number of people.
Example
#include <iostream>
using namespace std;

int main() {
    int n;
    cout << "Enter a positive integer n: ";
    cin >> n;

    int sum = 0;
    // The loop body runs n times, so the running time grows linearly: O(n).
    for (int i = 1; i <= n; ++i) {
        sum += i;
    }

    cout << "The sum of numbers from 1 to " << n << " is: " << sum << endl;
    return 0;
}
• In this code, the program calculates the sum of numbers from 1 to the input value n
using a loop.

• The time it takes to execute the loop is directly proportional to the value of n, so the
time complexity is O(n).
Logarithmic time – O(log n)
• Logarithmic time complexity means the number of operations grows very slowly as the
input size increases; doubling the input adds only a constant number of extra operations.

• Example: Binary Search

• Example: Imagine we have a phone book with a lot of names, and we're trying to find a
name. We can quickly narrow down our search by looking in the middle of the book first,
and then in the middle of the remaining half, and so on. It's faster than looking at every
page one by one.
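
A minimal sketch of binary search in C++ (the example array values are chosen here for
illustration): because each step halves the remaining search range, roughly log2(n)
comparisons are needed.

#include <iostream>
using namespace std;

// Returns the index of target in the sorted array arr, or -1 if absent.
// Each iteration halves the search range, so the loop runs O(log n) times.
int binarySearch(const int arr[], int size, int target) {
    int low = 0, high = size - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  // avoids overflow for large indices
        if (arr[mid] == target) return mid;
        if (arr[mid] < target) low = mid + 1;   // search the right half
        else                   high = mid - 1;  // search the left half
    }
    return -1;
}

int main() {
    int sorted[] = {3, 8, 12, 19, 24, 31, 45};
    cout << "Index of 24: " << binarySearch(sorted, 7, 24) << endl;  // prints 4
    return 0;
}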
Quadratic time – O(n^2)
• The execution time grows with the square of the input size.

• Commonly seen in nested loops.

• Example: Imagine comparing each item in a list to every other item. If you have 10
items, it's like doing 10x10 = 100 comparisons.
Example 1
#include <iostream>
using namespace std;

int findMax(int arr[], int size) {
    int maxElement = arr[0];
    for (int i = 0; i < size; ++i) {
        for (int j = i + 1; j < size; ++j) {
            if (arr[j] > maxElement) {
                maxElement = arr[j];
            }
        }
    }
    return maxElement;
}

int main() {
    int myArray[] = {12, 5, 21, 8, 17, 6};
    int arraySize = 6;
    int max = findMax(myArray, arraySize);
    cout << "The maximum element in the array is: " << max << endl;
    return 0;
}
• In this code, the findMax function uses two nested loops to compare every
element in the array with every other element to find the maximum.

• Since there are two nested loops, the time complexity of this algorithm is O(n^2).
Exponential Time - O(2^n)
• The execution time grows exponentially with the input size.

• This is highly inefficient and should be avoided whenever possible.
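
As an illustrative sketch (not from the original slides), the classic naive recursive
Fibonacci is a standard example of O(2^n) behavior: each call spawns two further calls.

#include <iostream>
using namespace std;

// Naive recursive Fibonacci: each call makes two further calls,
// so the number of calls roughly doubles as n grows: O(2^n).
long long fib(int n) {
    if (n <= 1) return n;            // base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2);  // two recursive calls per invocation
}

int main() {
    cout << "fib(10) = " << fib(10) << endl;  // prints 55; already slow for n near 40-50
    return 0;
}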


Linearithmic Time Complexity (O(n log n)):
• Algorithms whose execution time grows in a manner that is roughly
proportional to the product of the input size (n) and the logarithm of the input
size (log n).

• As the size of the input data (n) increases, the number of operations performed
by the algorithm increases at a rate proportional to n log n, faster than linear
but much slower than quadratic growth.

• Example: MergeSort
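
A minimal merge sort sketch, assuming an int vector: the range is halved log n times,
and each level of recursion does O(n) merging work, giving O(n log n) overall.

#include <iostream>
#include <vector>
using namespace std;

// Merge two sorted halves a[lo..mid] and a[mid+1..hi] into sorted order.
void merge(vector<int>& a, int lo, int mid, int hi) {
    vector<int> tmp;
    int i = lo, j = mid + 1;
    while (i <= mid && j <= hi)
        tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i <= mid) tmp.push_back(a[i++]);
    while (j <= hi)  tmp.push_back(a[j++]);
    for (int k = 0; k < (int)tmp.size(); ++k) a[lo + k] = tmp[k];
}

// Recursively split the range in half (log n levels), then merge (O(n) per level).
void mergeSort(vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;  // base case: a single element is already sorted
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, lo, mid);
    mergeSort(a, mid + 1, hi);
    merge(a, lo, mid, hi);
}

int main() {
    vector<int> data = {12, 5, 21, 8, 17, 6};
    mergeSort(data, 0, (int)data.size() - 1);
    for (int x : data) cout << x << " ";  // prints 5 6 8 12 17 21
    cout << endl;
    return 0;
}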
Mathematical Analysis of Recursive Algorithms
• Mathematical analysis of recursive algorithms involves determining their time
complexity using mathematical notation and techniques, typically by setting up and
solving a recurrence relation.

• The most common notation for expressing the result is Big O notation (O).
Example:(The algorithm calculates the sum of elements in an
array using recursion)

int sumArray(int arr[], int n) {
    if (n == 0) {
        return 0;  // Base case: when the array is empty, the sum is 0
    } else {
        return arr[n - 1] + sumArray(arr, n - 1);  // Recursive case
    }
}
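
A worked analysis of this function, writing c for the constant work per call (a label
introduced here for illustration): each call does a constant amount of work (one
comparison and one addition) and then recurses on an input smaller by one, which gives
the recurrence

T(n) = T(n - 1) + c,   T(0) = c0

Unrolling it:

T(n) = T(n - 2) + 2c = T(n - 3) + 3c = ... = T(0) + n*c

So T(n) grows linearly with n, and the time complexity of sumArray is O(n).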
