
Time Complexity vs. Space Complexity

You now understand the fundamentals of time and space complexity and how to calculate them for an algorithm or program. This section summarizes the previous discussions and lists the key differences in a table.

Time Complexity                                  Space Complexity

Calculates the time required                     Estimates the memory space required

Time is counted for all statements               Memory space is counted for all
                                                 variables, inputs, and outputs

Primarily determined by the size of              Primarily determined by the size of
the input data                                   the auxiliary variables

More crucial in terms of solution                More essential in terms of solution
optimization                                     optimization
Time Complexity in Data Structure
Introduction:
Time complexity is a critical concept in computer science and plays a vital role in the
design and analysis of efficient algorithms and data structures. It allows us to
measure the amount of time an algorithm or data structure takes to
execute, which is crucial for understanding its efficiency and scalability.

What is Time Complexity:


Time complexity measures the number of operations an algorithm performs relative to the size of its input. It helps us analyze how the algorithm's performance scales as the input grows. Big O notation (O()) is the notation most frequently used to express time complexity: it gives an upper bound on how quickly an algorithm's running time grows.

Best, Worst, and Average Case Complexity:


In analyzing algorithms, we consider three types of time complexity:

1. Best-case complexity: the minimum time required for an algorithm to
   complete, given the most favorable input. It describes the algorithm
   operating at peak efficiency under ideal circumstances.
2. Worst-case complexity: the maximum time an algorithm can take to finish
   for any input of a given size. It represents the scenario where the
   algorithm encounters the most unfavorable input.
3. Average-case complexity: the typical running time of an algorithm when
   averaged over all possible inputs. It provides a more realistic
   evaluation of an algorithm's performance.
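To make the three cases concrete, here is a small illustrative sketch (the function name is ours, not from the text above): a linear search hits its best case when the target is the first element, its worst case when the target is absent, and on average examines about half the array.

```cpp
// Illustrative sketch: linear search exhibits all three cases.
// Best case: target at index 0 (1 comparison).
// Worst case: target absent (n comparisons).
// Average case: roughly n/2 comparisons, which is still O(n).
int linearSearch(const int arr[], int size, int target) {
    for (int i = 0; i < size; i++) {
        if (arr[i] == target)
            return i;  // found: return the index
    }
    return -1;  // not found: every element was examined
}
```

Note that all three cases are described with the same O(n) bound here; they differ in the constant amount of work actually performed, not in the asymptotic class.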

Big O Notation:
Time complexity is frequently expressed using Big O notation, which gives an upper bound on an algorithm's running time as a function of the input size. Let's go through some crucial notations:

a) O(1) - Constant Time Complexity:


If an algorithm takes the same amount of time to execute no matter how big the input is, it is said to have constant time complexity. This is the best-case scenario, as the cost does not grow with the input at all. Examples of constant-time operations include accessing an element of an array or performing simple arithmetic calculations.

int constantTimeExample(int arr[], int size) {
    return arr[0];
}

As only one operation is required to return the first element of the array, the constantTimeExample function has O(1) time complexity.

b) O(log n) - Logarithmic Time Complexity:


With logarithmic time complexity, the execution time grows logarithmically as the input size increases. Algorithms with this complexity typically halve the problem at each step, as in efficient searching. A well-known example of an algorithm with logarithmic time complexity is binary search.

int binarySearch(int arr[], int size, int target) {
    int low = 0;
    int high = size - 1;

    while (low <= high) {
        int mid = low + (high - low) / 2;  // avoids integer overflow of (low + high)

        if (arr[mid] == target)
            return mid;
        else if (arr[mid] < target)
            low = mid + 1;
        else
            high = mid - 1;
    }

    return -1; // Not found
}
The binarySearch function has a time complexity of O(log n) as it continuously halves
the search space until it finds the target element or determines its absence.

c) O(n) - Linear Time Complexity:


With linear time complexity, the running time grows linearly with the size of the input. It is frequently seen when traversing data structures or performing an action on each item, for example, scanning an array or linked list to find a specific element.

int linearTimeExample(int arr[], int size) {
    int sum = 0;

    for (int i = 0; i < size; i++) {
        sum += arr[i];
    }

    return sum;
}

The linearTimeExample function has a time complexity of O(n) as it visits each element of the array exactly once to compute the sum.


d) O(n^2) - Quadratic Time Complexity:


O(n^2) denotes quadratic time complexity, in which an algorithm's execution time scales quadratically with the size of the input. This type of time complexity is often observed in algorithms that involve nested iterations or comparisons between pairs of elements.

void printPairs(int arr[], int size) {
    for (int i = 0; i < size; i++) {
        for (int j = 0; j < size; j++) {
            cout << "(" << arr[i] << ", " << arr[j] << ") ";
        }
    }
}
The printPairs function has a time complexity of O(n^2) as it performs nested iterations
over the array, resulting in a quadratic relationship between the execution time and the
input size.

e) O(2^n) - Exponential Time Complexity:


Exponential time complexity means the algorithm's execution time roughly doubles with each additional element in the input, making it highly inefficient for larger input sizes. This type of time complexity is often observed in algorithms that perform an exhaustive search or generate all possible combinations.

int fibonacci(int n) {
if (n <= 1)
return n;
return fibonacci(n - 1) + fibonacci(n - 2);
}
The fibonacci function has a time complexity of O(2^n): each call makes two further recursive calls, so the number of calls grows exponentially as the input size increases.
f) O(n!) - Factorial Time Complexity:
An algorithm whose runtime grows factorially with the size of the input. This kind of time complexity is frequently seen in algorithms that generate every combination or permutation of a set of elements.

void permute(string str, int l, int r) {
    if (l == r) {
        cout << str << " ";
        return;
    }
    for (int i = l; i <= r; i++) {
        swap(str[l], str[i]);    // fix the character at position l
        permute(str, l + 1, r);  // permute the remaining characters
        swap(str[l], str[i]);    // backtrack
    }
}
The permute function has a time complexity of O(n!) as it generates all possible
permutations of a given string, resulting in a factorial increase in the execution time as
the input size increases.

Time Complexity of Different Data Structures:

Here are the time complexities associated with common data structures:

Arrays:
Access: O(1)
Search: O(n)
Insertion (at the end): O(1)
Insertion (at the beginning or middle): O(n)
Deletion (from the end): O(1)
Deletion (from the beginning or middle): O(n)

Linked Lists:
Access: O(n)
Search: O(n)
Insertion (at the beginning): O(1)
Insertion (at the end, with a tail pointer): O(1)
Insertion (at the end, without a tail pointer): O(n)
Insertion (in the middle): O(n)
Deletion (from the beginning): O(1)
Deletion (from the end, with a tail pointer): O(1)
Deletion (from the end, without a tail pointer): O(n)
Deletion (from the middle): O(n)
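As a sketch of why head insertion is O(1) (the node struct and function name are our own, not from the list above): inserting at the front only rewires a constant number of pointers, no matter how long the list already is.

```cpp
// Minimal singly linked list node (illustrative sketch).
struct Node {
    int data;
    Node* next;
};

// Insertion at the beginning is O(1): a constant number of pointer
// updates, independent of how many nodes the list already holds.
Node* insertAtHead(Node* head, int value) {
    Node* node = new Node{value, head};  // new node points at the old head
    return node;                         // the new node becomes the head
}
```

By contrast, insertion at the end without a tail pointer must first walk the whole list to find the last node, which is where the O(n) entry above comes from.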

Doubly Linked List:
Accessing an element by index: O(n)
Searching for an element: O(n)
Insertion (at the beginning): O(1)
Insertion (at the end, with a tail pointer): O(1)
Insertion (at the end, without a tail pointer): O(n)
Insertion (in the middle): O(n)
Deletion (from the beginning): O(1)
Deletion (from the end, with a tail pointer): O(1)
Deletion (from the end, without a tail pointer): O(n)
Deletion (from the middle): O(n)

Stacks:
Push: O(1)
Pop: O(1)
Peek: O(1)

Queues:
Enqueue: O(1)
Dequeue: O(1)
Peek: O(1)
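A quick sketch using the C++ standard library containers (our own example, assuming std::stack and std::queue): every operation listed above is constant time, because each touches only one end of the container.

```cpp
#include <stack>
#include <queue>

// std::stack: push, pop, and top (peek) are all O(1).
// std::queue: push (enqueue), pop (dequeue), and front (peek) are all O(1).
int stackAndQueueDemo() {
    std::stack<int> s;
    s.push(1);           // O(1) push
    s.push(2);           // O(1) push
    int top = s.top();   // O(1) peek -> 2 (last in, first out)
    s.pop();             // O(1) pop

    std::queue<int> q;
    q.push(1);               // O(1) enqueue
    q.push(2);               // O(1) enqueue
    int front = q.front();   // O(1) peek -> 1 (first in, first out)
    q.pop();                 // O(1) dequeue

    return top + front;  // 2 + 1 = 3
}
```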

Hash Tables:
Search: O(1) - on average, assuming a good hash function and minimal collisions
Insertion: O(1) - on average, assuming a good hash function and minimal
collisions
Deletion: O(1) - on average, assuming a good hash function and minimal
collisions
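In C++, std::unordered_map is a hash table; a minimal sketch of the average-case O(1) operations (the example and names are ours):

```cpp
#include <unordered_map>
#include <string>

// Average-case O(1) insertion, search, and deletion, assuming a good
// hash function keeps each bucket sparsely populated. With heavy
// collisions these operations degrade toward O(n).
int hashTableDemo() {
    std::unordered_map<std::string, int> ages;
    ages["alice"] = 30;               // insertion: O(1) on average
    ages["bob"] = 25;                 // insertion: O(1) on average

    int found = ages.count("alice");  // search: O(1) on average -> 1
    ages.erase("bob");                // deletion: O(1) on average

    return found + static_cast<int>(ages.size());  // 1 + 1 = 2
}
```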

Binary Search Trees (BSTs):
Search: O(log n) - on average for balanced BST, O(n) worst case for unbalanced
BST
Insertion: O(log n) - on average for balanced BST, O(n) worst case for unbalanced
BST
Deletion: O(log n) - on average for balanced BST, O(n) worst case for unbalanced
BST
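A sketch of BST search (node layout and names are ours): each comparison discards one entire subtree, so a balanced tree of n nodes is searched in O(log n) steps, while a degenerate, list-shaped tree falls back to O(n).

```cpp
// Minimal BST node (illustrative sketch).
struct TreeNode {
    int key;
    TreeNode* left;
    TreeNode* right;
};

// BST search: O(log n) on a balanced tree, O(n) on a degenerate one,
// because each step descends into exactly one subtree.
bool bstSearch(const TreeNode* root, int target) {
    while (root != nullptr) {
        if (target == root->key)
            return true;  // found the key
        // Go left for smaller targets, right for larger ones.
        root = (target < root->key) ? root->left : root->right;
    }
    return false;  // reached a null link: key is absent
}
```

The self-balancing trees listed below (AVL, B-Tree, Red-Black) exist precisely to keep the tree height at O(log n) so the worst case never degrades to O(n).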
AVL Tree:
Searching for an element: O(log n)
Insertion of an element: O(log n)
Deletion of an element: O(log n)

B-Tree:
Searching for an element: O(log n)
Insertion of an element: O(log n)
Deletion of an element: O(log n)

Red-Black Tree:
Searching for an element: O(log n)
Insertion of an element: O(log n)
Deletion of an element: O(log n)

Space complexity

Now let’s understand, with an example, how to calculate the space complexity of an algorithm.

Example 1: Addition of Numbers


int addition(int x, int y, int z)
{
    int a = x + y + z;
    return (a);
}

In the above example there are four integer variables: a, x, y, and z. Each takes 4 bytes of space (as shown in the data-type table later in this section), and an extra 4 bytes are added to the total for the return value a.
Hence, the total space complexity = 4*4 + 4 = 20 bytes

This requirement is fixed for this example and does not change with the input values, so such space complexities are considered constant, also called O(1) space complexity.

Example 2: Factorial of a number(Recursive)


int factorial(int N) {
    if (N <= 1) {
        return 1;
    } else {
        return (N * factorial(N - 1));
    }
}

Here we have an algorithm that finds the factorial of a number using recursion. Now,

1. "N" is an integer variable that stores the value whose factorial we have to
   find; no matter what its value is, it takes just "4 bytes" of space.
2. The function call, the "if" condition, the "else" branch, and the return
   statement all count as auxiliary space. Let's assume these take a combined
   "4 bytes" per call; the key point is that the function is called recursively
   "N" times, so the stack frames require "4*N bytes" of auxiliary space, where
   N is the number whose factorial is to be found.

Hence, Total Space Complexity = (4 + 4*N) bytes. The constant terms (the standalone 4, and the factor 4 in 4*N) are dropped in asymptotic notation, so we can finally say that this algorithm has a space complexity of O(N).
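For contrast, here is a sketch of an iterative factorial (our own variant, not from the text above): it uses a fixed number of variables and no recursion stack, so its auxiliary space is O(1) rather than O(N), while the running time stays O(N).

```cpp
// Iterative factorial: same O(N) time as the recursive version, but only
// a constant number of local variables and no recursion stack, so the
// auxiliary space is O(1).
long long factorialIterative(int N) {
    long long result = 1;
    for (int i = 2; i <= N; i++) {
        result *= i;  // accumulate the product in place
    }
    return result;
}
```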

How to Calculate Space Complexity?

Evaluating the space complexity of an algorithm involves determining the amount of memory used by various elements such as variables of different data types, program instructions, constant values, and in some cases, function calls and the recursion stack. The exact amount of memory used by different data types may vary with the operating system and architecture, but the method of calculating the space complexity remains the same: account for all of these factors and add up the memory used by each element to get an overall measure of the algorithm’s memory usage. For example, here is a table summarizing the memory space taken by various data types in the C programming language:
Data Type    Memory Space (in bytes)
int          4
float        4
double       8
char         1
short int    2
long int     4



Note: The above table is based on common memory configurations and may vary
depending on the specific implementation and architecture of the system being used.
Consider the following example:
Example 1:
int main()
{
int a = 10;
float b = 20.5;
char c = 'A';
int d[10];

return 0;
}
To calculate the space complexity of this program, we need to determine the amount of memory used by each of the variables. In this case:
● a is an integer, which takes up 4 bytes of memory.
● b is a float, which takes up 4 bytes of memory.
● c is a character, which takes up 1 byte of memory.
● d is an array of 10 integers, which takes up 40 bytes of memory (10 x 4).

So, the total amount of memory used by this algorithm is 4 + 4 + 1 + 40 = 49 bytes.
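Since these sizes are platform-dependent (see the note above the table), you can verify them with the sizeof operator; a small sketch (our own, with a hypothetical helper name):

```cpp
#include <cstddef>

// sizeof reports the size of a type or object in bytes on the current
// platform. Only sizeof(char) == 1 is guaranteed by the C++ standard;
// the other sizes in the table above are typical but implementation-defined.
std::size_t arrayBytes() {
    int d[10];
    return sizeof(d);  // 10 * sizeof(int), e.g. 40 when int is 4 bytes
}
```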


These are just simple examples; in real-world scenarios the analysis can involve many more variables, data structures, and functions. However, the process of calculating the space complexity remains the same: add up the memory used by each element to get an overall measure of the algorithm’s memory usage.
Let’s consider one more example:
Example 2:
int factorial(int n)
{
if (n == 0)
return 1;
else
return n * factorial(n-1);
}
To calculate the space complexity of this algorithm, we need to determine the amount of memory used by the variables and function calls. In this case:
● n is an integer input parameter, which takes up 4 bytes of memory.
● The function call factorial takes up some memory for the function call stack,
which is implementation-dependent.

In this case, the function factorial is recursive, so it makes multiple function calls and
uses memory on the function call stack. The complexity of this algorithm is proportional
to the number of function calls, which is directly proportional to the value of n. The
more calls, the more memory will be used on the function call stack.
In the worst-case scenario, where n is very large, this algorithm can use a significant
amount of memory on the function call stack, leading to a high space-complexity.

Space Complexity of Data Structures

Array: O(n). Space is proportional to the number of elements stored in the array.
Linked List: O(n). Space is proportional to the number of nodes, each node holding data and a pointer.
Stack: O(n). Space is proportional to the number of elements in the stack.
Queue: O(n). Space is proportional to the number of elements in the queue.
Hash Table: O(n). Space is proportional to the number of elements stored, including the underlying array and the linked lists (or other collision-handling structures).
Binary Tree: O(n). Space is proportional to the number of nodes in the tree.
Binary Search Tree: O(n). Space is proportional to the number of nodes in the tree.
Balanced Trees (e.g., AVL, Red-Black Tree): O(n). Space is proportional to the number of nodes in the tree.
Heap (Binary Heap): O(n). Space is proportional to the number of elements in the heap.
Trie: O(n * m). Space is proportional to the number of words (n) times the average word length (m).
Graph (Adjacency Matrix): O(V^2). Space is proportional to the square of the number of vertices (V).
Graph (Adjacency List): O(V + E). Space is proportional to the number of vertices (V) plus the number of edges (E).

Explanation of Key Terms

● n: The number of elements or nodes.
● m: The average length of the words stored in a Trie.
● V: The number of vertices in a graph.
● E: The number of edges in a graph.
