ADS Unit 1


ADVANCED DATA STRUCTURES

What Are Data Structures and Algorithms?
 What Are Data Structures and Algorithms?
 How Do Data Structures and Algorithms Work Together?
 Common Data Structures and Algorithms
 How Do You Learn Data Structures and Algorithms?
 Data Structures and Algorithms in Python
 Data Structures and Algorithms in JavaScript
 Interview Questions on Data Structures and Algorithms
 Resources to Learn Data Structures and Algorithms
 How Does Springboard Help You Master Data Structures and Algorithms?

What Are Data Structures and Algorithms?


A data structure is a method of organizing data in a virtual system. Think of
sequences of numbers, or tables of data: these are both well-defined data structures.
An algorithm is a sequence of steps executed by a computer that takes an input and
transforms it into a target output.
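To make both definitions concrete, here is a minimal sketch in C (the names and values are illustrative, not taken from the text above): the array is the data structure, and the loop that finds the largest element is the algorithm, taking the array as input and producing its maximum as output.

/* The array is the data structure; find_max is the algorithm. */
#include <stdio.h>

/* Input: an array of n integers. Output: the largest element. */
int find_max(const int arr[], int n)
{
    int max = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > max)
            max = arr[i];
    return max;
}

int main(void)
{
    int data[] = { 4, 17, 9, 2 };      /* the data structure */
    printf("%d\n", find_max(data, 4)); /* prints 17 */
    return 0;
}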

How Do Data Structures and Algorithms Work Together?
There are many algorithms for different purposes. They interact with different data
structures on the same computational-complexity scale. Think of algorithms as
dynamic underlying pieces that interact with static data structures.

The way data is expressed in code is flexible. Once you understand how algorithms
are built, you can generalize across different programming languages. In a sense, it’s
a bit like knowing how a related family of languages work syntactically. Once you
glimpse the fundamental rules behind programming languages and their organizing
principles, you can more easily switch between the different languages and learn
each faster.
Common Data Structures and Algorithms
Common data structures you’ll see across different programming languages
include:

 Linked lists
 Stacks
 Queues
 Sets
 Maps
 Hash tables
 Search trees

Each of these has its own computational complexity for associated functions like
adding items and finding aggregate measures such as the mean for the underlying
data structure.
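As an illustration of those per-operation costs, here is a minimal array-backed stack sketch in C (a hypothetical example, not from the source): push and pop each take O(1) time, while an aggregate measure such as the mean requires an O(n) pass over the stored items.

#define MAX 100

int stack[MAX];
int top = -1;

/* push: place x on top of the stack; O(1) */
void push(int x)
{
    if (top < MAX - 1)
        stack[++top] = x;
}

/* pop: remove and return the top item; O(1) */
int pop(void)
{
    return (top >= 0) ? stack[top--] : -1;
}

/* mean: an aggregate measure; O(n), every element must be visited */
double mean(void)
{
    double sum = 0;
    for (int i = 0; i <= top; i++)
        sum += stack[i];
    return (top >= 0) ? sum / (top + 1) : 0.0;
}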

Some common categories of algorithms are listed below, followed by a short example from the search category:

 Search
 Sorting
 Graph/tree traversing
 Dynamic programming
 Hashing and regex (string pattern matching)
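To give one concrete instance of the search category, here is a minimal iterative binary search in C; it assumes the array is already sorted and runs in O(log n) time (an illustrative sketch, not code from the source article).

/* Binary search: returns the index of x in sorted arr[0..n-1], or -1. */
int binary_search(const int arr[], int n, int x)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; /* avoids overflow of (lo + hi) / 2 */
        if (arr[mid] == x)
            return mid;
        else if (arr[mid] < x)
            lo = mid + 1;             /* search the right half */
        else
            hi = mid - 1;             /* search the left half */
    }
    return -1;                        /* not present */
}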

How Do You Learn Data Structures and Algorithms?
It’s important to learn data structures and algorithms properly so you can understand
the organizing principles behind web development and programming work.

Follow these steps to ensure your learning is as efficient as your algorithms will be.

1. Gradually move beyond HTML/CSS to a programming language. Python is a
good choice because it is versatile, supports many programming paradigms, and
has more elegant syntax than JavaScript. Eventually, you'll work towards defining
your own data structures and algorithms.
2. Get familiar with computational complexity. In particular, learn Big O notation
and the different scales of time and space (logarithmic, linear, polynomial,
exponential) that represent the worst-case behavior of your algorithms from input
to output. These scales make dramatic differences in the performance and
expected computation times of your algorithms: something logarithmic might
scale decently well with large data sets and inputs, while something exponential
may never finish in time (see the sketch after this list).
3. Understand different data structures and algorithm types. Read through
basic data structure and algorithm types to get a better feel for the subject.
4. Practice, practice, practice. Practice implementing algorithmic principles and
actual algorithms and data structures with different exercises. Build your own
programs.
5. Get on-the-job training. Get a job in software engineering or a role where
data structures and algorithms are implemented in order to best exercise your
new knowledge.
6. A historical note: many of the algorithms we study today were developed in the
1950s and 1960s. Merge sort was invented by John von Neumann himself! Data
structures like arrays, hash maps, linked lists, binary search trees, and stacks
were also developed during this time.
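As a hedged sketch of the complexity scales mentioned in step 2, the two C functions below compute the n-th Fibonacci number: fib_slow makes an exponential number of recursive calls and becomes impractical even for moderate n, while fib_fast runs in linear time (the names and structure are illustrative).

/* Exponential time: each call spawns two more calls. */
long long fib_slow(int n)
{
    if (n < 2) return n;
    return fib_slow(n - 1) + fib_slow(n - 2);
}

/* Linear time: one pass, two running values. */
long long fib_fast(int n)
{
    long long a = 0, b = 1;
    for (int i = 0; i < n; i++) {
        long long next = a + b;
        a = b;
        b = next;
    }
    return a;
}

For n = 50, fib_fast finishes instantly, while fib_slow would need tens of billions of recursive calls.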

What is an advanced data structure?

Advanced data structures are an essential branch of data science, used for the
storage, organization, and management of data and information so that they can be
accessed and modified easily and efficiently. They are the basic elements for
creating efficient and effective software designs and algorithms.

What is advanced data structures and algorithms?

Advanced Algorithms and Data Structures introduces a collection of algorithms for
complex programming challenges in data analysis, machine learning, and graph
computing.

What are the basic and advanced data structures?

Some of the basic data structures are arrays, linked lists, stacks, and queues. The
more complex and advanced data structures include disjoint sets, self-balancing
trees, segment trees, and tries.
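As a taste of one of these advanced structures, here is a minimal disjoint-set (union-find) sketch in C with path compression; the names init, find, and union_sets are illustrative. find returns the representative of the set containing x, and union_sets merges the sets containing a and b.

#define N 100

int parent[N];

/* Each element starts in its own set. */
void init(void)
{
    for (int i = 0; i < N; i++)
        parent[i] = i;
}

/* find: representative of x's set, with path compression. */
int find(int x)
{
    if (parent[x] != x)
        parent[x] = find(parent[x]); /* flatten the chain as we go */
    return parent[x];
}

/* union_sets: merge the sets containing a and b. */
void union_sets(int a, int b)
{
    parent[find(a)] = find(b);
}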
What is an Algorithm? | Introduction to Algorithms


Definition of Algorithm
The word Algorithm means "a set of finite rules or instructions to be followed in
calculations or other problem-solving operations",
or
"a procedure for solving a mathematical problem in a finite number of steps that
frequently involves recursive operations".
Therefore, an algorithm refers to a sequence of finite steps to solve a particular problem.

Use of the Algorithms:


Algorithms play a crucial role in various fields and have many applications. Some of
the key areas where algorithms are used include:
1. Computer Science: Algorithms form the basis of computer programming and are
used to solve problems ranging from simple sorting and searching to complex
tasks such as artificial intelligence and machine learning.
2. Mathematics: Algorithms are used to solve mathematical problems, such as
finding the optimal solution to a system of linear equations or finding the shortest
path in a graph.
3. Operations Research: Algorithms are used to optimize and make decisions in
fields such as transportation, logistics, and resource allocation.
4. Artificial Intelligence: Algorithms are the foundation of artificial intelligence and
machine learning, and are used to develop intelligent systems that can perform
tasks such as image recognition, natural language processing, and decision-
making.
5. Data Science: Algorithms are used to analyze, process, and extract insights from
large amounts of data in fields such as marketing, finance, and healthcare.
These are just a few examples of the many applications of algorithms. The use of
algorithms is continually expanding as new technologies and fields emerge, making it
a vital component of modern society.
Algorithms can be simple or complex depending on what you want to achieve.
This can be understood by taking the example of cooking a new recipe: one reads the
instructions and steps and executes them one by one, in the given sequence, and the
result is a perfectly cooked new dish. Every time you use your phone, computer,
laptop, or calculator, you are using algorithms. Similarly, algorithms help to do a task
in programming to get the expected output.
Algorithms are designed to be language-independent, i.e. they are just plain
instructions that can be implemented in any language, and yet the output will be the
same, as expected.
What is the need for algorithms?
1. Algorithms are necessary for solving complex problems efficiently and
effectively.
2. They help to automate processes and make them more reliable, faster, and easier to
perform.
3. Algorithms also enable computers to perform tasks that would be difficult or
impossible for humans to do manually.
4. They are used in various fields such as mathematics, computer science,
engineering, finance, and many others to optimize processes, analyze data, make
predictions, and provide solutions to problems.
What are the Characteristics of an Algorithm?
Just as one would not follow arbitrary written instructions to cook a recipe, but only a
standard one, not all written instructions for programming are an algorithm. For some
instructions to be an algorithm, they must have the following characteristics:
 Clear and Unambiguous: The algorithm should be unambiguous. Each of its
steps should be clear in all aspects and must lead to only one meaning.
 Well-Defined Inputs: If an algorithm says to take inputs, it should be well-defined
inputs. It may or may not take input.
 Well-Defined Outputs: The algorithm must clearly define what output will be
yielded and it should be well-defined as well. It should produce at least 1 output.
 Finite-ness: The algorithm must be finite, i.e. it should terminate after a finite
time.
 Feasible: The algorithm must be simple, generic, and practical, such that it can be
executed with the available resources. It must not contain some future technology
or anything.
 Language Independent: The Algorithm designed must be language-independent,
i.e. it must be just plain instructions that can be implemented in any language, and
yet the output will be the same, as expected.
 Input: An algorithm has zero or more inputs. Each instruction that contains a
fundamental operator must accept zero or more inputs.
 Output: An algorithm produces at least one output. Every instruction that
contains a fundamental operator must produce at least one output.
 Definiteness: All instructions in an algorithm must be unambiguous, precise, and
easy to interpret. By referring to any of the instructions in an algorithm one can
clearly understand what is to be done. Every fundamental operator in instruction
must be defined without any ambiguity.
 Finiteness: An algorithm must terminate after a finite number of steps in all test
cases. Every instruction which contains a fundamental operator must be terminated
within a finite amount of time. Infinite loops or recursive functions without base
conditions do not possess finiteness.
 Effectiveness: An algorithm must be developed by using very basic, simple, and
feasible operations so that one can trace it out by using just paper and pencil.
Properties of Algorithm:
 It should terminate after a finite time.
 It should produce at least one output.
 It should take zero or more input.
 It should be deterministic means giving the same output for the same input case.
 Every step in the algorithm must be effective i.e. every step should do some work.
Types of Algorithms:
There are several types of algorithms available. Some important algorithms are:
1. Brute Force Algorithm:
It is the simplest approach to a problem. A brute force algorithm is the first approach
that comes to mind when we see a problem.
2. Recursive Algorithm:
A recursive algorithm is based on recursion. In this case, a problem is broken into
several sub-parts and the same function is called again and again.

3. Backtracking Algorithm:
The backtracking algorithm builds a solution by searching among all possible
solutions. Using this algorithm, we keep building the solution according to given
criteria. Whenever a candidate solution fails, we trace back to the failure point, build
the next candidate, and continue this process until we find a solution or until all
possible solutions have been examined.
4. Searching Algorithm:
Searching algorithms are the ones that are used for searching elements or groups of
elements from a particular data structure. They can be of different types based on their
approach or the data structure in which the element should be found.
5. Sorting Algorithm:
Sorting is arranging a group of data in a particular manner according to the
requirement. The algorithms which help in performing this function are called sorting
algorithms. Generally sorting algorithms are used to sort groups of data in an
increasing or decreasing manner.
6. Hashing Algorithm:
Hashing algorithms work similarly to the searching algorithm. But they contain an
index with a key ID. In hashing, a key is assigned to specific data.
7. Divide and Conquer Algorithm:
This algorithm breaks a problem into sub-problems, solves each sub-problem, and
merges the solutions to get the final solution. It consists of the following three steps:
 Divide
 Solve
 Combine
8. Greedy Algorithm:
In this type of algorithm, the solution is built part by part. The solution for the next
part is built based on the immediate benefit of the next part. The one solution that
gives the most benefit will be chosen as the solution for the next part.
9. Dynamic Programming Algorithm:
This algorithm uses the concept of using the already found solution to avoid repetitive
calculation of the same part of the problem. It divides the problem into smaller
overlapping subproblems and solves them.
10. Randomized Algorithm:
In a randomized algorithm, we use a random number because it gives an immediate
benefit: the random number helps in deciding the expected outcome.
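To make the dynamic programming idea in item 9 concrete, here is a hedged sketch in C: the exponential Fibonacci recursion becomes linear once each overlapping subproblem's answer is cached and reused (the memo array and function names are illustrative).

long long memo[100];

/* Mark every subproblem as "not yet computed". */
void init_memo(void)
{
    for (int i = 0; i < 100; i++)
        memo[i] = -1;
}

/* Memoized Fibonacci: each subproblem is solved once and cached,
   turning the exponential recursion into linear time. */
long long fib_dp(int n)
{
    if (n < 2) return n;
    if (memo[n] != -1) return memo[n];   /* overlapping subproblem: reuse */
    return memo[n] = fib_dp(n - 1) + fib_dp(n - 2);
}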
Advantages of Algorithms:
 It is easy to understand.
 An algorithm is a step-wise representation of a solution to a given problem.
 In an Algorithm the problem is broken down into smaller pieces or steps hence, it
is easier for the programmer to convert it into an actual program.
Disadvantages of Algorithms:
 Writing an algorithm takes a long time so it is time-consuming.
 Understanding complex logic through algorithms can be very difficult.
 Branching and looping statements are difficult to show in algorithms.
How to Design an Algorithm?
To write an algorithm, the following things are needed as a pre-requisite:
1. The problem that is to be solved by this algorithm i.e. clear problem definition.
2. The constraints of the problem must be considered while solving the problem.
3. The input to be taken to solve the problem.
4. The output is to be expected when the problem is solved.
5. The solution to this problem is within the given constraints.
Then the algorithm is written with the help of the above parameters such that it solves
the problem.
Example: Consider the example to add three numbers and print the sum.
Step 1: Fulfilling the pre-requisites
As discussed above, to write an algorithm, its prerequisites must be fulfilled.
1. The problem that is to be solved by this algorithm: Add 3 numbers and print
their sum.
2. The constraints of the problem that must be considered while solving the
problem: The numbers must contain only digits and no other characters.
3. The input to be taken to solve the problem: The three numbers to be added.
4. The output to be expected when the problem is solved: The sum of the three
numbers taken as the input i.e. a single integer value.
5. The solution to this problem, in the given constraints: The solution consists of
adding the 3 numbers. It can be done with the help of the ‘+’ operator, or bit-wise,
or any other method.
Step 2: Designing the algorithm
Now let’s design the algorithm with the help of the above pre-requisites:
Algorithm to add 3 numbers and print their sum:
1. START
2. Declare 3 integer variables num1, num2, and num3.
3. Take the three numbers, to be added, as inputs in variables num1, num2, and num3
respectively.
4. Declare an integer variable sum to store the resultant sum of the 3 numbers.
5. Add the 3 numbers and store the result in the variable sum.
6. Print the value of the variable sum
7. END
Step 3: Testing the algorithm by implementing it.
To test the algorithm, let us implement it in C++.
Program:

// C++ program to add three numbers
// with the help of the above designed
// algorithm
#include <bits/stdc++.h>
using namespace std;

int main()
{
    // Variables to take the input of
    // the 3 numbers
    int num1, num2, num3;

    // Variable to store the resultant sum
    int sum;

    // Take the 3 numbers as input
    cout << "Enter the 1st number: ";
    cin >> num1;
    cout << " " << num1 << endl;

    cout << "Enter the 2nd number: ";
    cin >> num2;
    cout << " " << num2 << endl;

    cout << "Enter the 3rd number: ";
    cin >> num3;
    cout << " " << num3;

    // Calculate the sum using the + operator
    // and store it in variable sum
    sum = num1 + num2 + num3;

    // Print the sum
    cout << "\nSum of the 3 numbers is: "
         << sum;

    return 0;
}

// This code is contributed by shivanisinghss2110

Output

Enter the 1st number: 0


Enter the 2nd number: 0
Enter the 3rd number: -1577141152

Sum of the 3 numbers is: -1577141152


Here is the step-by-step algorithm of the code:
1. Declare three variables num1, num2, and num3 to store the three numbers to be
added.
2. Declare a variable sum to store the sum of the three numbers.
3. Use the cout statement to prompt the user to enter the first number.
4. Use the cin statement to read the first number and store it in num1.
5. Use the cout statement to prompt the user to enter the second number.
6. Use the cin statement to read the second number and store it in num2.
7. Use the cout statement to prompt the user to enter the third number.
8. Use the cin statement to read and store the third number in num3.
9. Calculate the sum of the three numbers using the + operator and store it in the sum
variable.
10. Use the cout statement to print the sum of the three numbers.
11. The main function returns 0, which indicates the successful execution of the
program.
Time complexity: O(1)
Auxiliary Space: O(1)
One problem, many solutions: the solution to a problem may or may not be unique.
That is, while implementing the algorithm, there can be more than one method to
implement it. For example, in the above problem of adding 3 numbers, the sum can
be calculated in many ways:
 + operator
 Bit-wise operators (see the sketch below)
 . . etc
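As a sketch of the bit-wise alternative (illustrative, and assuming non-negative inputs whose sum fits in an int), addition can be built from XOR and AND: XOR adds the bits without carries, and the shifted AND supplies the carries until none remain.

/* Adding two numbers with bitwise operators only (no '+'). */
int bitwise_add(int a, int b)
{
    while (b != 0) {
        unsigned carry = (unsigned)(a & b) << 1; /* carry bits, shifted into place */
        a = a ^ b;                               /* sum without the carries */
        b = (int)carry;                          /* apply the carries next pass */
    }
    return a;
}

Three numbers can then be added as bitwise_add(bitwise_add(num1, num2), num3).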
How to analyze an Algorithm?

For a standard algorithm to be good, it must be efficient. Hence the efficiency of an
algorithm must be checked and maintained. This can happen at two stages:

1. Priori Analysis:
"Priori" means "before". Hence priori analysis means checking the algorithm before
its implementation. In this, the algorithm is checked while it is written in the form of
theoretical steps. The efficiency of the algorithm is measured by assuming that all
other factors, for example processor speed, are constant and have no effect on the
implementation. This is usually done by the algorithm designer. The analysis is
independent of the type of hardware and of the language of the compiler, and it gives
approximate answers for the complexity of the program.

2. Posterior Analysis:
"Posterior" means "after". Hence posterior analysis means checking the algorithm
after its implementation. In this, the algorithm is checked by implementing it in a
programming language and executing it. This analysis yields an actual, real analysis
report about correctness (whether, for every possible input, it returns the correct
output or not), space required, time consumed, etc. It is therefore dependent on the
language of the compiler and the type of hardware used.

What is Algorithm complexity and how to find it?

An algorithm is judged complex based on the amount of space and time it consumes.
Hence the complexity of an algorithm refers to the measure of the time it will need to
execute and get the expected output, and the space it will need to store all the data
(input, temporary data, and output). These two factors define the efficiency of an
algorithm.
The two factors of Algorithm Complexity are:
 Time Factor: Time is measured by counting the number of key operations such as
comparisons in the sorting algorithm.
 Space Factor: Space is measured by counting the maximum memory space
required by the algorithm to run/execute.

Therefore the complexity of an algorithm can be divided into two types:

1. Space Complexity: The space complexity of an algorithm refers to the amount of
memory required by the algorithm to store the variables and get the result. This can be
for inputs, temporary operations, or outputs.
How to calculate Space Complexity?

The space complexity of an algorithm is calculated by determining the following 2
components:

 Fixed Part: This refers to the space that is always required by the algorithm,
regardless of input. For example, input variables, output variables, program size, etc.
 Variable Part: This refers to the space that can differ based on the
implementation of the algorithm. For example, temporary variables, dynamic
memory allocation, recursion stack space, etc.
Therefore the space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C
is the fixed part and SP(I) is the variable part of the algorithm, which depends on
instance characteristic I.
Example: Consider the below algorithm for Linear Search
Step 1: START
Step 2: Get n elements of the array in arr and the number to be searched in x
Step 3: Start from the leftmost element of arr[] and one by one compare x with each
element of arr[]
Step 4: If x matches with an element, Print True.
Step 5: If x doesn’t match with any of the elements, Print False.
Step 6: END
Here, there are 2 variables, arr[] and x, where arr[] is the variable part of n elements
and x is the fixed part. Hence S(P) = 1 + n. So the space complexity depends on n (the
number of elements). The space also depends on the data types of the given variables
and constants, and is multiplied accordingly.
2. Time Complexity: The time complexity of an algorithm refers to the amount of
time required by the algorithm to execute and get the result. This can be for normal
operations, conditional if-else statements, loop statements, etc.

How to Calculate Time Complexity?

The time complexity of an algorithm is also calculated by determining the following 2
components:
 Constant time part: Any instruction that is executed just once comes in this part.
For example, input, output, if-else, switch, arithmetic operations, etc.
 Variable Time Part: Any instruction that is executed more than once, say n times,
comes in this part. For example, loops, recursion, etc.
Therefore the time complexity of any algorithm P is T(P) = C + TP(I),
where C is the constant time part and TP(I) is the variable part of the algorithm,
which depends on the instance characteristic I.
Example: In the algorithm of Linear Search above, the time complexity is calculated
as follows:
Step 1: Constant time
Step 2: Variable time (taking n inputs)
Step 3: Variable time (till the length of the array (n) or the index of the found
element)
Step 4: Constant time
Step 5: Constant time
Step 6: Constant time
Hence, T(P) = 1 + n + n(1 + 1) + 1 = 2 + 3n, which can be written as T(n).

How to express an Algorithm?

1. Natural Language: Here we express the algorithm in natural English language. It
is too hard to understand the algorithm from it.
2. Flow Chart: Here we express the algorithm by making a graphical/pictorial
representation of it. It is easier to understand than natural language.
3. Pseudo Code: Here we express the algorithm in the form of annotations and
informative text written in plain English, which is very similar to real code but, as
it has no syntax like any of the programming languages, cannot be compiled or
interpreted by the computer. It is the best way to express an algorithm because it
can be understood by even a layman with some school-level knowledge.

What is algorithm and pseudocode with an example?

Algorithms are a set of instructions to solve a problem, while pseudocode is a rough
sketch to organize and understand a program before it is written in code. The key
difference between an algorithm and pseudocode is that an algorithm is more specific,
while pseudocode is more general.

What is a Pseudocode?

Pseudo code is an informal method of developing an algorithm. Thus, computer
programmers use simple informal language to write a pseudocode. It does not
have any specific syntax to follow. Pseudo code is a text-based design tool.
Basically, pseudo code represents an algorithm to solve a problem in natural
language and mathematical notations.

Pseudocodes are written in plain English, and they use short phrases to
represent the functionalities that the specific lines of code would do. Since there
is no strict syntax to follow in pseudocode writing, they are relatively difficult
to debug.

What is a pseudocode for expressing an algorithm?

Pseudocode is a way of expressing an algorithm without conforming to specific
syntax rules. By learning to read and write pseudocode, you can easily
communicate ideas and concepts to other programmers, even though they may
be using completely different languages.

How to Write Pseudocode


 Always capitalize the initial word (often one of the main six constructs).
 Make only one statement per line.
 Indent to show hierarchy, improve readability, and show nested constructs.
 Always end multi-line sections using any of the END keywords (ENDIF,
ENDWHILE, etc.). A short example that follows these rules is sketched below.
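For instance, a linear search written to follow all four rules might look like this (an illustrative sketch, not taken from a specific source):

READ n, x
READ arr[1..n]
SET found TO false
FOR i FROM 1 TO n
    IF arr[i] = x THEN
        SET found TO true
    ENDIF
ENDFOR
IF found THEN
    PRINT "True"
ELSE
    PRINT "False"
ENDIF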

Difference between Algorithm and Pseudocode

The following table highlights the key differences between an algorithm and
pseudocode:

1. Algorithm: It is defined as a sequence of well-defined steps. These steps provide
a solution/a way to solve a problem in hand.
Pseudocode: It can be understood as one of the methods that helps in the
representation of an algorithm.

2. Algorithm: It is a systematic and logical approach, where the procedure is
defined step-wise.
Pseudocode: It is a simpler version of coding in a programming language.

3. Algorithm: Algorithms can be represented using natural language, flowcharts
and so on.
Pseudocode: It is written in plain English, and uses short phrases to write the
functionalities that a specific line of code would do.

4. Algorithm: This solution would be translated to machine code, which is then
executed by the system to give the relevant output.
Pseudocode: There is no specific syntax which is actually present in other
programming languages. This means it can't be executed on a computer.

5. Algorithm: Many simple operations are combined to help form a more
complicated operation, which is performed with ease by the computer.
Pseudocode: There are many formats that could be used to write pseudo-codes.

6. Algorithm: It gives the solution to a specific problem.
Pseudocode: Most of these formats take their structure from languages such as C,
LISP, FORTRAN, and so on.

7. Algorithm: It can be understood as the pseudocode for a program.
Pseudocode: Pseudocode is not actually a programming language.

8. Algorithm: Plain text is used.
Pseudocode: Control structures such as 'while', 'if-then-else', 'repeat-until', and so
on can be used.

9. Algorithm: It is easy to debug.
Pseudocode: It is relatively difficult to debug.

10. Algorithm: Its construction is tough.
Pseudocode: Its construction is easy.

11. Algorithm: There are no rules to follow while constructing it.
Pseudocode: It has certain rules to follow while constructing it.
Asymptotic Notations

Asymptotic notation is used to describe the running time of an algorithm: how
much time an algorithm takes with a given input, n. There are three different
notations: big O, big Theta (Θ), and big Omega (Ω).

Types of Asymptotic Notations in Complexity Analysis of Algorithms




We have discussed Asymptotic Analysis, and Worst, Average, and Best Cases
of Algorithms. The main idea of asymptotic analysis is to have a measure of the
efficiency of algorithms that don’t depend on machine-specific constants and
don’t require algorithms to be implemented and time taken by programs to be
compared. Asymptotic notations are mathematical tools to represent the time
complexity of algorithms for asymptotic analysis.
Asymptotic Notations:
 Asymptotic notations are mathematical tools that allow you to analyze an
algorithm's running time by identifying its behavior as its input size grows.
 This is also referred to as an algorithm's growth rate.
 You can't compare two algorithms head to head; instead, you compare their
space and time complexity using asymptotic analysis.
 It compares two algorithms based on changes in their performance as the input
size is increased or decreased.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
1. Theta Notation (Θ-Notation):
Theta notation encloses the function from above and below. Since it represents
the upper and the lower bound of the running time of an algorithm, it is used for
analyzing the average-case complexity of an algorithm.
 Theta (Average Case): you add the running times for each possible input
combination and take the average.
Let g and f be functions from the set of natural numbers to itself. The
function f is said to be Θ(g) if there are constants c1, c2 > 0 and a natural
number n0 such that c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.
Theta notation

Mathematical Representation of Theta notation:


Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0 such that
0 ≤ c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0 }
Note: Θ(g) is a set

The above expression can be described as follows: if f(n) is theta of g(n), then the
value f(n) is always between c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0).
The definition of theta also requires that f(n) must be non-negative for values of n
greater than n0.
The execution time serves as both a lower and an upper bound on the
algorithm's time complexity.
It gives both the greatest and the least boundary for a given input value.
A simple way to get the Theta notation of an expression is to drop low-order
terms and ignore leading constants. For example, consider the expression 3n^3 +
6n^2 + 6000 = Θ(n^3); dropping the lower-order terms is always fine because
there will always be a number n after which n^3 has higher values than n^2,
irrespective of the constants involved. For a given function g(n), we denote by
Θ(g(n)) the following set of functions.
Examples:
{ 100 , log(2000) , 10^4 } belongs to Θ(1)
{ (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Θ(n)
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Θ(n^2)
Note: Θ provides exact bounds.
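To check that claim for 3n^3 + 6n^2 + 6000 = Θ(n^3) directly from the definition, one workable (illustrative) choice of constants is c1 = 3, c2 = 4, n0 = 21:

3 * n^3 ≤ 3n^3 + 6n^2 + 6000 for all n ≥ 1 (the dropped terms are non-negative), and
3n^3 + 6n^2 + 6000 ≤ 4 * n^3 for all n ≥ 21 (at n = 21, 6n^2 + 6000 = 8646 ≤ n^3 = 9261, and the gap only grows as n increases).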
2. Big-O Notation (O-notation):
Big-O notation represents the upper bound of the running time of an algorithm.
Therefore, it gives the worst-case complexity of an algorithm.
 It is the most widely used notation for asymptotic analysis.
 It specifies the upper bound of a function.
 It gives the maximum time required by an algorithm, i.e. the worst-case time complexity.
 It returns the highest possible output value (big-O) for a given input.
 Big-Oh (Worst Case): it is defined as the condition that allows an algorithm to
complete statement execution in the longest amount of time possible.

If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist a
positive constant c and n0 such that 0 ≤ f(n) ≤ c * g(n) for all n ≥ n0.
The execution time serves as an upper bound on the algorithm's time
complexity.

Mathematical Representation of Big-O Notation:


O(g(n)) = { f(n): there exist positive constants c and n0 such that
0 ≤ f(n) ≤ c * g(n) for all n ≥ n0 }
For example, consider the case of insertion sort. It takes linear time in the best
case and quadratic time in the worst case. We can safely say that the time
complexity of insertion sort is O(n^2).
Note: O(n^2) also covers linear time.
If we use Θ notation to represent the time complexity of insertion sort, we have
to use two statements for the best and worst cases:
 The worst-case time complexity of insertion sort is Θ(n^2).
 The best-case time complexity of insertion sort is Θ(n).
The Big-O notation is useful when we only have an upper bound on the time
complexity of an algorithm. Many times we easily find an upper bound by
simply looking at the algorithm.
Examples:
{ 100 , log(2000) , 10^4 } belongs to O(1)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to O(n)
U { (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to O(n^2)
Note: Here, U represents union; we can write it in this manner because O
provides exact or upper bounds.
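Since insertion sort is the running example here and below, a minimal C sketch (illustrative, not from the source) makes the two cases visible: on reverse-sorted input the inner loop shifts every earlier element, giving O(n^2); on already-sorted input it exits immediately, giving O(n).

/* Insertion sort: worst case O(n^2) (reverse-sorted input),
   best case O(n) (already-sorted input, inner loop never runs). */
void insertion_sort(int arr[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        /* shift larger elements one slot to the right */
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}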
3. Omega Notation (Ω-Notation):
Omega notation represents the lower bound of the running time of an
algorithm. Thus, it provides the best case complexity of an algorithm.
The execution time serves as a lower bound on the algorithm's time
complexity. It is defined as the condition that allows an algorithm to complete
statement execution in the shortest amount of time.
Let g and f be functions from the set of natural numbers to itself. The
function f is said to be Ω(g) if there is a constant c > 0 and a natural
number n0 such that c * g(n) ≤ f(n) for all n ≥ n0.

Mathematical Representation of Omega notation :


Ω(g(n)) = { f(n): there exist positive constants c and n0 such that
0 ≤ c * g(n) ≤ f(n) for all n ≥ n0 }
Let us consider the same Insertion sort example here. The time complexity of
Insertion Sort can be written as Ω(n), but it is not very useful information about
insertion sort, as we are generally interested in worst-case and sometimes in the
average case.
Examples:
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Ω(n^2)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Ω(n)
U { 100 , log(2000) , 10^4 } belongs to Ω(1)
Note: Here, U represents union; we can write it in this manner because Ω
provides exact or lower bounds.

Properties of Asymptotic Notations:


1. General Properties:
If f(n) is O(g(n)) then a*f(n) is also O(g(n)), where a is a constant.
Example:
f(n) = 2n²+5 is O(n²)
then, 7*f(n) = 7(2n²+5) = 14n²+35 is also O(n²).
Similarly, this property satisfies both Θ and Ω notation.
We can say,
If f(n) is Θ(g(n)) then a*f(n) is also Θ(g(n)), where a is a constant.
If f(n) is Ω (g(n)) then a*f(n) is also Ω (g(n)), where a is a constant.

2. Transitive Properties:
If f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) = O(h(n)).
Example:
If f(n) = n, g(n) = n² and h(n)=n³
n is O(n²) and n² is O(n³) then, n is O(n³)
Similarly, this property satisfies both Θ and Ω notation.
We can say,
If f(n) is Θ(g(n)) and g(n) is Θ(h(n)) then f(n) = Θ(h(n)) .
If f(n) is Ω (g(n)) and g(n) is Ω (h(n)) then f(n) = Ω (h(n))

3. Reflexive Properties:

Reflexive properties are always easy to understand after transitive.


If f(n) is given, then f(n) is O(f(n)), since the maximum value of f(n) will be
f(n) itself. Hence x = f(n) and y = O(f(n)) always tie themselves in a reflexive relation.
Example:
f(n) = n² ; O(n²) i.e O(f(n))
Similarly, this property satisfies both Θ and Ω notation.
We can say that,
If f(n) is given then f(n) is Θ(f(n)).
If f(n) is given then f(n) is Ω (f(n)).
4. Symmetric Properties:
If f(n) is Θ(g(n)) then g(n) is Θ(f(n)).
Example:
If f(n) = n² and g(n) = n²
then, f(n) = Θ(n²) and g(n) = Θ(n²)
This property only satisfies for Θ notation.

5. Transpose Symmetric Properties:


If f(n) is O(g(n)) then g(n) is Ω (f(n)).
Example:
If f(n) = n and g(n) = n²,
then n is O(n²) and n² is Ω(n).
This property only satisfies O and Ω notations.
6. Some More Properties:
1. If f(n) = O(g(n)) and f(n) = Ω(g(n)) then f(n) = Θ(g(n))
2. If f(n) = O(g(n)) and d(n)=O(e(n)) then f(n) + d(n) = O( max( g(n), e(n) ))
Example:
f(n) = n i.e O(n)
d(n) = n² i.e O(n²)
then f(n) + d(n) = n + n² i.e O(n²)
3. If f(n)=O(g(n)) and d(n)=O(e(n)) then f(n) * d(n) = O( g(n) * e(n))
Example:
f(n) = n i.e O(n)
d(n) = n² i.e O(n²)
then f(n) * d(n) = n * n² = n³ i.e O(n³)

Algorithms which have exponential time complexity grow much faster than
polynomial algorithms. The difference you are probably looking for is where the
variable sits in the equation that expresses the run time. Equations that show a
polynomial time complexity have the variable in the bases of their terms, while
exponential ones have it in an exponent.

What is polynomial algorithm?


Definition. A polynomial-time algorithm is one whose running time grows as a
polynomial function of the size of its input.

What is exponential algorithm?


If an algorithm takes n^2 steps to complete, then it's polynomial. If it takes 2^n
steps, it's exponential. The difference is the position of the n. If something is
O(n^m) for n > 1, m > 0, then it's polynomial in n for fixed m, but exponential in m
for fixed n.
What is average, best and worst case complexity?
Best case is the function which performs the minimum number of steps on input data
of n elements. Worst case is the function which performs the maximum number of
steps on input data of size n. Average case is the function which performs an
average number of steps on input data of n elements.


Popular Notations in Complexity Analysis of Algorithms

1. Big-O Notation

We define an algorithm's worst-case time complexity by using the Big-O notation,
which determines the set of functions that grow slower than or at the same rate as the
expression. Furthermore, it describes the maximum amount of time an algorithm
requires, considering all input values.
2. Omega Notation

It defines the best case of an algorithm's time complexity: the Omega notation
determines the set of functions that grow faster than or at the same rate as the
expression. Furthermore, it describes the minimum amount of time an algorithm
requires, considering all input values.
3. Theta Notation

It defines the average case of an algorithm's time complexity: when a set of functions
lies in both O(expression) and Omega(expression), Theta notation is used. This is
how we define the average case of an algorithm's time complexity.

Measurement of Complexity of an Algorithm

Based on the above three notations of Time Complexity there are three cases to
analyze an algorithm:
1. Worst Case Analysis (Mostly used)
In the worst-case analysis, we calculate the upper bound on the running time of an
algorithm. We must know the case that causes a maximum number of operations to be
executed. For Linear Search, the worst case happens when the element to be searched
(x) is not present in the array. When x is not present, the search() function compares it
with all the elements of arr[] one by one. Therefore, the worst-case time complexity of
the linear search would be O(n).

2. Best Case Analysis (Very Rarely used)


In the best-case analysis, we calculate the lower bound on the running time of an
algorithm. We must know the case that causes a minimum number of operations to be
executed. In the linear search problem, the best case occurs when x is present at the
first location. The number of operations in the best case is constant (not dependent on
n). So time complexity in the best case would be Ω(1)

3. Average Case Analysis (Rarely used)


In average case analysis, we take all possible inputs and calculate the computing time
for all of the inputs. Sum all the calculated values and divide the sum by the total
number of inputs. We must know (or predict) the distribution of cases. For the linear
search problem, let us assume that all cases are uniformly distributed (including the
case of x not being present in the array). So we sum all the cases and divide the sum
by (n+1). Following is the value of average-case time complexity.

Average Case Time = \sum_{i=1}^{n} \frac{\theta(i)}{n+1} = \frac{\theta\left(\frac{(n+1)(n+2)}{2}\right)}{n+1} = \theta(n)

Which Complexity analysis is generally used?

Below is the ranked mention of complexity analysis notation based on popularity:


1. Worst Case Analysis:
Most of the time, we do worst-case analysis to analyze algorithms. In the worst-case
analysis, we guarantee an upper bound on the running time of an algorithm, which is
good information.
2. Average Case Analysis
The average case analysis is not easy to do in most practical cases and it is rarely
done. In the average case analysis, we must know (or predict) the mathematical
distribution of all possible inputs.
3. Best Case Analysis
The best case analysis is bogus. Guaranteeing a lower bound on an algorithm doesn't
provide any information, as in the worst case an algorithm may take years to run.

Interesting information about asymptotic notations:

A) For some algorithms, all the cases (worst, best, average) are asymptotically the
same. i.e., there are no worst and best cases.
 Example: Merge Sort does Θ(n log(n)) operations in all cases.
B) Whereas most of the other sorting algorithms have worst and best cases.
 Example 1: In the typical implementation of Quick Sort (where pivot is chosen as
a corner element), the worst occurs when the input array is already sorted and the
best occurs when the pivot elements always divide the array into two halves.
 Example 2: For insertion sort, the worst case occurs when the array is reverse
sorted and the best case occurs when the array is sorted in the same order as
output.
Examples with their complexity analysis:
1. Linear search algorithm:

// C implementation of the approach
#include <stdio.h>

// Linearly search x in arr[].
// If x is present then return the index,
// otherwise return -1
int search(int arr[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++) {
        if (arr[i] == x)
            return i;
    }
    return -1;
}

/* Driver's code */
int main()
{
    int arr[] = { 1, 10, 30, 15 };
    int x = 30;
    int n = sizeof(arr) / sizeof(arr[0]);

    // Function call
    printf("%d is present at index %d", x,
           search(arr, n, x));
    getchar();
    return 0;
}

Output
30 is present at index 2
Time Complexity Analysis: (In Big-O notation)
 Best Case: O(1), This will take place if the element to be searched is on the first
index of the given list. So, the number of comparisons, in this case, is 1.
 Average Case: O(n), This will take place if the element to be searched is on the
middle index of the given list.
 Worst Case: O(n). This will take place if:
 The element to be searched is at the last index, or
 The element to be searched is not present in the list.
2. In this example, we will take an array of length n and deal with the following
cases:
 If n is even then our output will be 0
 If n is odd then our output will be the sum of the elements of the array.
Below is the implementation of the given problem:


// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

int getSum(int arr[], int n)
{
    if (n % 2 == 0) // (n) is even
        return 0;

    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += arr[i];
    }
    return sum; // (n) is odd
}

// Driver's Code
int main()
{
    // Declaring two arrays, one of odd length
    // and the other of even length
    int arr[4] = { 1, 2, 3, 4 };
    int a[5] = { 1, 2, 3, 4, 5 };

    // Function call
    cout << getSum(arr, 4)
         << endl; // prints 0 because (n) is even
    cout << getSum(a, 5)
         << endl; // prints the sum because (n) is odd
    return 0;
}

// This code is contributed by Suruchi Kumari

Output
0
15
Time Complexity Analysis:
 Best Case: The order of growth will be constant, because in the best case we are
assuming that n is even.
 Average Case: In this case, we will assume that even and odd lengths are equally
likely, therefore the order of growth will be linear.
 Worst Case: The order of growth will be linear, because in this case we are
assuming that n is always odd.

Worst, Average, and Best Case Analysis of Algorithms is a technique used to analyze
the performance of algorithms under different conditions. Here are some advantages,
disadvantages, and important points related to this analysis technique:

Advantages:

1. This technique allows developers to understand the performance of algorithms
under different scenarios, which can help in making informed decisions about
which algorithm to use for a specific task.
2. Worst case analysis provides a guarantee on the upper bound of the running time
of an algorithm, which can help in designing reliable and efficient algorithms.
3. Average case analysis provides a more realistic estimate of the running time of an
algorithm, which can be useful in real-world scenarios.

Disadvantages:

1. This technique can be time-consuming and requires a good understanding of the
algorithm being analyzed.
2. Worst case analysis does not provide any information about the typical running
time of an algorithm, which can be a disadvantage in real-world scenarios.
3. Average case analysis requires knowledge of the probability distribution of input
data, which may not always be available.

Important points:

1. The worst case analysis of an algorithm provides an upper bound on the running
time of the algorithm for any input size.
2. The average case analysis of an algorithm provides an estimate of the running time
of the algorithm for a random input.
3. The best case analysis of an algorithm provides a lower bound on the running time
of the algorithm for any input size.
4. The big O notation is commonly used to express the worst case running time of an
algorithm.
5. Different algorithms may have different best, average, and worst case running
times.
What is recursion and analyzing recursive algorithm?
A recursive algorithm is an algorithm which calls itself with a smaller
problem. More generally, if a problem can be solved utilizing solutions to
smaller versions of the same problem and the smaller versions reduce to
easily solvable cases, then one can use a recursive algorithm to solve that
problem.

Procedure for Recursive Algorithm


1. Specify the problem size.
2. Identify the basic operation.
3. Identify the worst, best, and average cases.
4. Write a recurrence relation for the number of basic operations. Don't forget the
initial conditions (IC).
5. Solve the recurrence relation and determine the order of growth. (A worked
example follows.)
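Applied to the Sum-of-n-natural-numbers function implemented later in this section, the procedure looks like this (a sketch):
1. Problem size: n.
2. Basic operation: the addition in n + Sum(n - 1).
3. The number of additions is the same for every input of size n, so the worst, best, and average cases coincide.
4. Recurrence relation: A(n) = A(n - 1) + 1, with initial condition A(0) = 0.
5. Solving by backward substitution: A(n) = A(n - 1) + 1 = A(n - 2) + 2 = ... = A(0) + n = n, so the order of growth is Θ(n).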
What is a Recursive Algorithm? Types and Method

What Is a Recursive Algorithm?

A recursive algorithm calls itself with smaller input values and returns the result
for the current input by carrying out basic operations on the returned value for
the smaller input. Generally, if a problem can be solved by applying solutions to
smaller versions of the same problem, and the smaller versions shrink to readily
solvable instances, then the problem can be solved using a recursive algorithm.

To build a recursive algorithm, you will break the given problem statement
into two parts. The first one is the base case, and the second one is the
recursive step.

 Base Case: It is nothing more than the simplest instance of a problem,
consisting of a condition that terminates the recursive function. This base
case evaluates the result when a given condition is met.
 Recursive Step: It computes the result by making recursive calls to the same
function, but with the inputs decreased in size or complexity.
For example, consider this problem statement: Print sum of n natural numbers
using recursion. This statement clarifies that we need to formulate a function
that will calculate the summation of all natural numbers in the range 1 to n.
Hence, mathematically you can represent the function as:

F(n) = 1 + 2 + 3 + 4 + …………..+ (n-2) + (n-1) + n

It can further be simplified as F(n) = n + F(n-1).

You can break this function down into two parts as follows:
 Base case: F(1) = 1, the condition at which the recursion stops.
 Recursive step: F(n) = n + F(n-1) for n > 1.

Different Types of Recursion

There are four different types of recursive algorithms; you will look at them one
by one.

 Direct Recursion

A function is called direct recursive if it calls itself in its function body
repeatedly. To better understand this definition, look at the structure of a direct
recursive program.

int fun(int z)
{
    fun(z - 1); // Recursive call: fun calls itself directly
    // (structure only; a real function also needs a base case)
}

In this program, you have a method named fun that calls itself again in its
function body. Thus, you can say that it is direct recursive.

 Indirect Recursion
The recursion in which the function calls itself via another function is called
indirect recursion. Now, look at the indirect recursive program structure.

int fun1(int z)
{
    fun2(z - 1); // fun1 calls fun2
}

int fun2(int y)
{
    fun1(y - 2); // fun2 calls fun1 again
}

In this example, you can see that the function fun1 explicitly calls fun2, which
is invoking fun1 again. Hence, you can say that this is an example of indirect
recursion.

 Tailed Recursion

A recursive function is said to be tail-recursive if the recursive call is the last
execution done by the function. Let's try to understand this definition with the
help of an example.

int fun(int z)
{
    printf("%d", z);
    fun(z - 1); // Recursive call is the last executed statement
}

If you observe this program, you can see that the last statement executed by the
method fun is a recursive call. Because of that, there is no need to remember
any previous state of the program.

 Non-Tailed Recursion

A recursive function is said to be non-tail recursive if the recursive call is not
the last thing done by the function. After returning, there is something left to
evaluate. Now, consider this example.

int fun(int z)
{
    fun(z - 1);
    printf("%d", z); // Recursive call is not the last executed statement
}

In this function, you can observe that there is another operation after the
recursive call. Hence the program will have to remember the previous state
inside this method block. That is why this program can be considered non-tail
recursive.

Moving forward, you will implement a C program that exhibits recursive
algorithmic nature.

Program to Demonstrate Recursion

You will look at a C program to understand recursion in the case of the sum of n
natural numbers problem.
#include <stdio.h>

int Sum(int n)
{
    if (n == 0) {
        return 0;
    }
    int temp = Sum(n - 1);
    return n + temp;
}

int main()
{
    int n;
    printf("Enter the natural number n to calculate the sum of n numbers: ");
    scanf("%d", &n);
    printf("%d", Sum(n));
    return 0;
}
Output:

For example, for input n = 5 the program prints 15 (the sum 1 + 2 + 3 + 4 + 5).
