Module 1

The document discusses asymptotic analysis and different asymptotic notations used to analyze algorithms. It explains concepts like best case, worst case and average case time complexities. It also covers Big-O, Omega and Theta notations and their use cases. Finally, it discusses analysis of time and space complexities of algorithms.


Asymptotic Analysis:

 The efficiency of an algorithm depends on the amount of time, storage, and other resources required to execute the algorithm.
 Efficiency is measured with the help of asymptotic notations.
 An algorithm may not have the same performance for different types of input. As the input size increases, the performance changes.
 Asymptotic Notations

 Asymptotic notations are the mathematical notations used to describe the running time of an algorithm.
 For example, in bubble sort, when the input array is already sorted, the time taken by the algorithm is linear, i.e. the best case.
 But when the input array is in reverse order, the algorithm takes the maximum (quadratic) time to sort the elements, i.e. the worst case.
 When the input array is neither sorted nor in reverse order, it takes average time.
 These durations are denoted using asymptotic notations.
There are mainly three asymptotic notations:
•Big-O notation
•Omega notation
•Theta notation
Big-O Notation (O-notation)
Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the worst-case complexity of an algorithm.

O(g(n)) = { f(n): there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n₀ }

o The above expression can be read as: a function f(n) belongs to the set O(g(n)) if there exists a positive constant c such that f(n) lies between 0 and cg(n) for sufficiently large n.
• For any value of n, the running time of the algorithm does not exceed the time given by O(g(n)).
Since it gives the worst-case running time of an algorithm, Big-O is widely used to analyze algorithms, as we are usually interested in the worst-case scenario.
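As a short worked example of the definition (the function f(n) = 3n + 2 is an assumption chosen for illustration):

```latex
% Show that f(n) = 3n + 2 belongs to O(n).
% Choose c = 4 and n_0 = 2. Then for all n >= n_0:
\[
0 \le 3n + 2 \le 4n \quad \text{for all } n \ge 2,
\]
% since 3n + 2 <= 4n  <=>  2 <= n.
% Hence f(n) \in O(n) with witnesses c = 4, n_0 = 2.
```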
Omega Notation (Ω-notation)
Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best-case complexity of an algorithm.

Ω(g(n)) = { f(n): there exist positive constants c and n₀ such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n₀ }

The above expression can be read as: a function f(n) belongs to the set Ω(g(n)) if there exists a positive constant c such that f(n) lies above cg(n) for sufficiently large n.
For any value of n, the minimum time required by the algorithm is given by Ω(g(n)).
Theta Notation (Θ-notation)
Theta notation bounds the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.

Θ(g(n)) = { f(n): there exist positive constants c₁, c₂ and n₀ such that 0 ≤ c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ n₀ }

The above expression can be read as: a function f(n) belongs to the set Θ(g(n)) if there exist positive constants c₁ and c₂ such that f(n) can be sandwiched between c₁g(n) and c₂g(n) for sufficiently large n.
If a function f(n) lies anywhere between c₁g(n) and c₂g(n) for all n ≥ n₀, then g(n) is said to be an asymptotically tight bound for f(n).


Algorithm Complexity

• Suppose X is an algorithm and N is the size of the input data. The time and space used by the algorithm X are the two main factors that determine the efficiency of X.

• Time Factor − Time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.

• Space Factor − Space is measured by counting the maximum memory space required by the algorithm.
The complexity of an algorithm, f(N), gives the running time and/or storage space needed by the algorithm with respect to N, the size of the input data.
Space Complexity
The space complexity of an algorithm represents the amount of memory space needed by the algorithm over its life cycle.
The space needed by an algorithm is equal to the sum of the following two components:
 A fixed part: the space required to store certain data and variables (i.e. simple variables and constants, program size, etc.) that is independent of the size of the problem.

 A variable part: the space required by variables whose size depends entirely on the size of the problem, for example, recursion stack space, dynamically allocated memory, etc.

 The space complexity S(p) of an algorithm p is S(p) = A + Sp(I), where A is the fixed part and Sp(I) is the variable part, which depends on the instance characteristic I. The following simple example illustrates the concept:
SUM(P, Q)
Step 1 − START
Step 2 − R ← P + Q + 10
Step 3 − STOP

Here we have three variables (P, Q, and R) and one constant, hence S(p) = 3 + 1. The actual space depends on the data types of the given constants and variables and is multiplied accordingly.
Time Complexity
 The time complexity of an algorithm represents the amount of time required by the algorithm to run to completion.
 Time requirements can be defined as a numerical function t(N), where t(N) is measured as the number of steps, provided each step takes constant time.
 For example, adding two N-bit integers takes N steps. Consequently, the total computational time is t(N) = c*N, where c is the time consumed by the addition of two bits. Here, we observe that t(N) grows linearly as the input size increases.
There are five types of Time complexity Cases:
1.Constant Time Complexity - O(1)
2.Logarithmic Time Complexity - O(log n)
3.Linear Time Complexity - O(n)
4.O(n log n) Time Complexity
5.Quadratic Time Complexity - O(n2)
Arrays
•An array is a data structure used to process multiple elements of the same data type when the number of such elements is known.

•It provides a powerful feature and can be used as-is or to build more complex data structures such as stacks and queues.

•An array can be defined as a finite collection of homogeneous (similar-type) elements.

•Arrays are always stored in consecutive memory locations.

Types of Arrays
There are two types of arrays:
•One-Dimensional Arrays
•Two-Dimensional Arrays

Multidimensional Arrays
A multidimensional array associates each element in the array with multiple indexes.
The most commonly used multidimensional array is the two-dimensional array, also known as a table or matrix. A two-dimensional array associates each of its elements with two indexes.

int arr[3][2][2] = {0,1,2,3,4,5,6,7,8,9,3,2};

In this declaration we have an array of type integer whose block size is 3, row size is 2, and column size is 2, and the values are given inside the curly braces. The values are stored one by one in the array cells:

block(1): 0 1    block(2): 4 5    block(3): 8 9
          2 3              6 7              3 2
(each block is 2x2)
Dynamic Memory Allocation

 C is a structured language and has fixed rules for programming; one of them concerns changing the size of an array. An array is a collection of items stored at contiguous memory locations.
Dynamic memory allocation can be defined as a procedure in which the size of a data structure (such as an array) is changed during runtime.

C provides four library functions, defined in the <stdlib.h> header file, to facilitate dynamic memory allocation:

1.malloc()
2.calloc()
3.free()
4.realloc()
 The “malloc” or “memory allocation” function in C is used to dynamically allocate a single large block of memory of the specified size.
 It returns a pointer of type void* which can be cast into a pointer of any form. It does not initialize the memory, so each block initially contains a garbage value.
Syntax:
ptr = (cast-type*) malloc(byte-size);

For example:
ptr = (int*) malloc(100 * sizeof(int));

 Assuming the size of int is 4 bytes, this statement allocates 400 bytes of memory.
 The pointer ptr holds the address of the first byte of the allocated memory.
C calloc() method
1.The “calloc” or “contiguous allocation” function in C is used to dynamically allocate the specified number of blocks of memory of the specified type. It is very similar to malloc() but differs in two ways:
2.It initializes each block with a default value of 0.
3.It takes two parameters or arguments, compared to malloc()'s one.
Syntax:
ptr = (cast-type*) calloc(n, element-size);
Here, n is the number of elements and element-size is the size of each element.
For example:

ptr = (float*) calloc(25, sizeof(float));

This statement allocates contiguous space in memory for 25 elements, each the size of a float.
C free() method
The “free” function in C is used to dynamically de-allocate memory. Memory allocated with malloc() and calloc() is not de-allocated on its own, so free() should be used whenever dynamically allocated memory is no longer needed. It helps reduce memory wastage by releasing it.

Syntax:
free(ptr);
C realloc() method
• The “realloc” or “re-allocation” function in C is used to dynamically change the size of previously allocated memory.
• In other words, realloc can be used to dynamically re-allocate memory. Re-allocation preserves the values already present, while newly added blocks are initialized with garbage values.
Syntax:
ptr = realloc(ptr, newSize);
Here, ptr is reallocated with the new size newSize.
Basic Operations:
Following are the basic operations supported by an array:
•Traverse − Prints all the array elements one by one.
•Insertion − Adds an element at the given index.
•Deletion − Deletes the element at the given index.
•Search − Searches for an element by the given index or by value.
•Update − Updates the element at the given index.
