The way data is expressed in code is flexible. Once you understand how algorithms
are built, you can generalize across different programming languages. In a sense, it's
a bit like knowing how a related family of languages works syntactically: once you
glimpse the fundamental rules and organizing principles behind programming
languages, you can switch between them more easily and learn each one faster.
Common Data Structures and Algorithms
Common data structures you’ll see across different programming languages
include:
Linked lists
Stacks
Queues
Sets
Maps
Hash tables
Search trees
Each of these has its own computational complexity for its associated operations,
such as adding an item or computing an aggregate measure (for example, the mean)
over the underlying data.
Common algorithm families you will encounter include:
Search
Sorting
Graph/tree traversing
Dynamic programming
Hashing and regex (string pattern matching)
Some of the basic data structures are arrays, linked lists, stacks, and queues. More
complex and advanced data structures include disjoint sets, self-balancing trees,
segment trees, and tries.
What is an Algorithm? | Introduction to Algorithms
Definition of Algorithm
The word algorithm means "a set of finite rules or instructions to be followed in
calculations or other problem-solving operations," or "a procedure for solving a
mathematical problem in a finite number of steps that frequently involves recursive
operations."
Therefore, an algorithm refers to a finite sequence of steps to solve a particular problem.
3. Backtracking Algorithm:
A backtracking algorithm builds a solution incrementally by searching among all
possible candidates. We keep extending the current partial solution according to the
problem's criteria; whenever a candidate fails, we trace back to the point of failure,
try the next candidate, and continue until we find a solution or have examined all
possible solutions.
4. Searching Algorithm:
Searching algorithms are used to find an element, or a group of elements, within a
particular data structure. They come in different types depending on their approach
and on the data structure in which the element is to be found.
5. Sorting Algorithm:
Sorting means arranging a group of data in a particular order according to the
requirement; the algorithms that perform this task are called sorting algorithms.
Generally, sorting algorithms arrange data in increasing or decreasing order.
6. Hashing Algorithm:
Hashing algorithms work much like searching algorithms, but they maintain an
index keyed by an ID: in hashing, a key is assigned to specific data, so lookups can
jump straight to the right place instead of scanning.
7. Divide and Conquer Algorithm:
This algorithm breaks a problem into sub-problems, solves each sub-problem, and
merges the solutions to get the final solution. It consists of the following three steps:
Divide
Solve
Combine
8. Greedy Algorithm:
In this type of algorithm, the solution is built part by part: at each step, the choice
that gives the most immediate benefit is selected for the next part, without
reconsidering earlier choices.
9. Dynamic Programming Algorithm:
This algorithm reuses already-found solutions to avoid repeatedly recomputing the
same part of the problem: it divides the problem into smaller overlapping
subproblems, solves each one once, and stores the results.
10. Randomized Algorithm:
A randomized algorithm uses random numbers in the course of its computation; the
random choices help in deciding the next step and determine the expected outcome,
for example the expected running time.
Advantages of Algorithms:
It is easy to understand.
An algorithm is a step-wise representation of a solution to a given problem.
In an algorithm, the problem is broken down into smaller pieces or steps, so it is
easier for the programmer to convert it into an actual program.
Disadvantages of Algorithms:
Writing an algorithm for a large problem takes a long time, so it is time-consuming.
Understanding complex logic through algorithms can be very difficult.
Branching and looping statements are difficult to show in algorithms.
How to Design an Algorithm?
To write an algorithm, the following things are needed as a pre-requisite:
1. The problem that is to be solved by this algorithm, i.e. a clear problem definition.
2. The constraints of the problem that must be considered while solving it.
3. The input to be taken to solve the problem.
4. The output expected when the problem is solved.
5. The solution to the problem, within the given constraints.
Then the algorithm is written with the help of the above parameters such that it solves
the problem.
Example: Consider the example to add three numbers and print the sum.
Step 1: Fulfilling the pre-requisites
As discussed above, to write an algorithm, its prerequisites must be fulfilled.
1. The problem that is to be solved by this algorithm: Add 3 numbers and print
their sum.
2. The constraints of the problem that must be considered while solving the
problem: The numbers must contain only digits and no other characters.
3. The input to be taken to solve the problem: The three numbers to be added.
4. The output to be expected when the problem is solved: The sum of the three
numbers taken as the input i.e. a single integer value.
5. The solution to this problem, in the given constraints: The solution consists of
adding the 3 numbers. It can be done with the help of the ‘+’ operator, or bit-wise,
or any other method.
Step 2: Designing the algorithm
Now let’s design the algorithm with the help of the above pre-requisites:
Algorithm to add 3 numbers and print their sum:
1. START
2. Declare 3 integer variables num1, num2, and num3.
3. Take the three numbers, to be added, as inputs in variables num1, num2, and num3
respectively.
4. Declare an integer variable sum to store the resultant sum of the 3 numbers.
5. Add the 3 numbers and store the result in the variable sum.
6. Print the value of the variable sum
7. END
Step 3: Testing the algorithm by implementing it.
To test the algorithm, let’s implement it in C language.
Program:
// C program implementing the algorithm above
#include <stdio.h>

int main(void)
{
    // Declare 3 integer variables num1, num2, and num3
    int num1, num2, num3;

    // Take the three numbers to be added as inputs
    scanf("%d %d %d", &num1, &num2, &num3);

    // Add the 3 numbers and store the result in the variable sum
    int sum = num1 + num2 + num3;

    // Print the value of the variable sum
    printf("%d\n", sum);
    return 0;
}
Output
The program prints the sum of the three numbers read from input.
1. Priori Analysis:
"Priori" means "before"; hence, a priori analysis means checking the algorithm before
its implementation, while it is still written as theoretical steps. The efficiency of the
algorithm is measured by assuming that all other factors, for example processor
speed, are constant and have no effect on the implementation. This analysis is usually
done by the algorithm designer; it is independent of the hardware and of the
compiler's language, and it gives approximate answers for the complexity of the
program.
2. Posterior Analysis:
"Posterior" means "after"; hence, posterior analysis means checking the algorithm
after its implementation: the algorithm is implemented in a programming language
and executed. This analysis yields an actual, measured report on correctness
(whether the program returns the correct output for every possible input), the space
required, the time consumed, and so on. It therefore depends on the compiler's
language and the hardware used.
1. Space Complexity: The space complexity of an algorithm is the amount of
memory it needs, and it has two parts:
Fixed Part: This refers to the space that is always required by the algorithm. For
example, input variables, output variables, program size, etc.
Variable Part: This refers to the space that can differ based on the implementation
of the algorithm. For example, temporary variables, dynamic memory allocation,
recursion stack space, etc.
Therefore, the space complexity S(P) of any algorithm P is S(P) = C + SP(I), where
C is the fixed part and SP(I) is the variable part of the algorithm, which depends on
an instance characteristic I.
Example: Consider the below algorithm for Linear Search
Step 1: START
Step 2: Get n elements of the array in arr and the number to be searched in x
Step 3: Start from the leftmost element of arr[] and one by one compare x with each
element of arr[]
Step 4: If x matches with an element, Print True.
Step 5: If x doesn’t match with any of the elements, Print False.
Step 6: END
Here, there are two variables, arr[] and x: arr[] is the variable part, holding n
elements, and x is the fixed part. Hence S(P) = 1 + n, so the space complexity
depends on n (the number of elements). In practice, the space also depends on the
data types of the variables and constants involved, and is multiplied accordingly.
2. Time Complexity: The time complexity of an algorithm refers to the amount of
time required by the algorithm to execute and get the result. This can be for normal
operations, conditional if-else statements, loop statements, etc.
What is a Pseudocode?
Pseudocode is written in plain English, using short phrases to represent the
functionality that the corresponding lines of code would perform. Since there is no
strict syntax to follow when writing pseudocode, it is relatively difficult to debug.
The following points highlight the key differences between an algorithm and
pseudocode:
Algorithm: Many simple operations are combined to form a more complicated
operation, which the computer performs with ease. There are no rules to follow
while constructing it.
Pseudocode: There are many formats that could be used to write pseudocode. It has
certain rules to follow while constructing it.
Asymptotic Notations
We have discussed Asymptotic Analysis, and Worst, Average, and Best Cases
of Algorithms. The main idea of asymptotic analysis is to have a measure of the
efficiency of algorithms that don’t depend on machine-specific constants and
don’t require algorithms to be implemented and time taken by programs to be
compared. Asymptotic notations are mathematical tools to represent the time
complexity of algorithms for asymptotic analysis.
Asymptotic Notations:
Asymptotic notations are mathematical tools that let you analyze an algorithm's
running time by identifying its behavior as the input size grows.
This is also referred to as an algorithm's growth rate.
You can't usefully compare two algorithms head to head on raw timings.
Instead, you compare their space and time complexity using asymptotic analysis.
It compares two algorithms based on how their performance changes as the input
size is increased or decreased.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
1. Theta Notation (Θ-Notation):
Theta notation encloses the function from above and below. Since it represents
the upper and the lower bound of the running time of an algorithm, it is used for
analyzing the average-case complexity of an algorithm.
Theta (average case): you add the running times for each possible input
combination and take the average.
Let g and f be functions from the set of natural numbers to itself. The function f is
said to be Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such that
c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.
(Figure: Theta notation)
The above expression can be read as: if f(n) is theta of g(n), then the value of f(n) is
always between c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0). The definition
of theta also requires that f(n) be non-negative for values of n greater than n0.
The execution time serves as both a lower and upper bound on the
algorithm’s time complexity.
It captures both the greatest and the least bound for a given input value.
A simple way to get the Theta notation of an expression is to drop the low-order
terms and ignore the leading constants. For example, consider the expression
3n^3 + 6n^2 + 6000 = Θ(n^3); dropping the lower-order terms is always fine
because there will always be a number n0 after which n^3 has higher values than
n^2, irrespective of the constants involved. For a given function g(n), we denote by
Θ(g(n)) the following set of functions.
Examples:
{ 100, log(2000), 10^4 } belongs to Θ(1)
{ (n/4), (2n+3), (n/100 + log(n)) } belongs to Θ(n)
{ (n^2+n), (2n^2), (n^2+log(n)) } belongs to Θ(n^2)
Note: Θ provides exact bounds.
2. Big-O Notation (O-notation):
Big-O notation represents the upper bound of the running time of an algorithm.
Therefore, it gives the worst-case complexity of an algorithm.
It is the most widely used notation for asymptotic analysis.
It specifies the upper bound of a function.
It gives the maximum time required by an algorithm, i.e. the worst-case time
complexity.
It returns the highest possible output value (big-O) for a given input.
Big-O (worst case): it is defined as the condition that allows an algorithm to
complete statement execution in the longest amount of time possible.
If f(n) describes the running time of an algorithm, then f(n) is O(g(n)) if there exist
a positive constant c and an n0 such that 0 ≤ f(n) ≤ c * g(n) for all n ≥ n0.
The execution time serves as an upper bound on the algorithm’s time
complexity.
2. Transitive Properties:
If f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) = O(h(n)).
Example:
If f(n) = n, g(n) = n² and h(n)=n³
n is O(n²) and n² is O(n³) then, n is O(n³)
Similarly, this property satisfies both Θ and Ω notation.
We can say,
If f(n) is Θ(g(n)) and g(n) is Θ(h(n)) then f(n) = Θ(h(n)) .
If f(n) is Ω (g(n)) and g(n) is Ω (h(n)) then f(n) = Ω (h(n))
3. Reflexive Properties:
If f(n) is given, then f(n) is O(f(n)), since the maximum value of f(n) is f(n) itself.
Similarly, f(n) = Θ(f(n)) and f(n) = Ω(f(n)).
Note also that algorithms with exponential time complexity grow much faster than
polynomial algorithms. The difference lies in where the variable sits in the
expression for the running time: polynomial running times have the variable in the
bases of their terms (e.g. n^2), while exponential ones have it in the exponent
(e.g. 2^n).
2. Omega Notation
It defines the best case of an algorithm's time complexity: the Omega notation
describes whether the set of functions grows faster than, or at the same rate as, the
given expression. It thus expresses the minimum amount of time an algorithm
requires over all input values.
3. Theta Notation
It defines the average case of an algorithm's time complexity: Theta notation is used
when the set of functions lies in both O(expression) and Omega(expression). This is
how we define the average-case time complexity of an algorithm.
Based on the above three notations of Time Complexity there are three cases to
analyze an algorithm:
1. Worst Case Analysis (Mostly used)
In the worst-case analysis, we calculate the upper bound on the running time of an
algorithm. We must know the case that causes a maximum number of operations to be
executed. For Linear Search, the worst case happens when the element to be searched
(x) is not present in the array. When x is not present, the search() function compares it
with all the elements of arr[] one by one. Therefore, the worst-case time complexity of
the linear search would be O(n).
A) For some algorithms, all the cases (worst, best, average) are asymptotically the
same. i.e., there are no worst and best cases.
Example: Merge Sort does Θ(n log(n)) operations in all cases.
B) Whereas most of the other sorting algorithms have distinct worst and best cases.
Example 1: In the typical implementation of Quick Sort (where pivot is chosen as
a corner element), the worst occurs when the input array is already sorted and the
best occurs when the pivot elements always divide the array into two halves.
Example 2: For insertion sort, the worst case occurs when the array is reverse
sorted and the best case occurs when the array is sorted in the same order as
output.
Examples with their complexity analysis:
1. Linear search algorithm:
#include <stdio.h>

// Returns the index of x in arr[] if present, otherwise returns -1
int search(int arr[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++)
        if (arr[i] == x)
            return i;
    return -1;
}

/* Driver's code */
int main(void)
{
    // Sample data chosen to match the output shown below
    int arr[] = { 10, 20, 30, 40 };
    int n = sizeof(arr) / sizeof(arr[0]);
    int x = 30;

    // Function call
    int index = search(arr, n, x);
    if (index == -1)
        printf("%d is not present", x);
    else
        printf("%d is present at index %d", x, index);
    getchar();
    return 0;
}
Output
30 is present at index 2
Time Complexity Analysis (in Big-O notation):
Best Case: O(1). This occurs when the element to be searched for is at the first
index of the given list, so the number of comparisons is 1.
Average Case: O(n). On average, for example when the element sits around the
middle of the list, about n/2 comparisons are made.
Worst Case: O(n). This occurs when:
The element to be searched for is at the last index, or
The element is not present in the list.
2. In this example, we take an array of length n and deal with the following cases:
If n is even, the output will be 0.
If n is odd, the output will be the sum of the elements of the array.
Below is the implementation of the given problem:
#include <stdio.h>

// Returns 0 if n is even, otherwise the sum of the array elements
int getSum(int arr[], int n)
{
    if (n % 2 == 0) // (n) is even
        return 0;

    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += arr[i];
    return sum;
}

// Driver's Code
int main(void)
{
    // length even
    int arr[4] = { 1, 2, 3, 4 };
    // length odd
    int a[5] = { 1, 2, 3, 4, 5 };

    // Function call
    printf("%d\n", getSum(arr, 4));
    printf("%d\n", getSum(a, 5));
    return 0;
}
Output
0
15
Time Complexity Analysis:
Best Case: The order of growth is constant, because in the best case we assume
that n is even and the function returns immediately.
Average Case: Assuming even and odd lengths are equally likely, the order of
growth is linear.
Worst Case: The order of growth is linear, because in this case we assume that n is
always odd and the whole array is summed.
Worst, average, and best case analysis of algorithms is a technique used to analyze
the performance of algorithms under different conditions. Some important points
about this technique:
1. The worst case analysis of an algorithm provides an upper bound on the running
time of the algorithm for any input size.
2. The average case analysis of an algorithm provides an estimate of the running time
of the algorithm for a random input.
3. The best case analysis of an algorithm provides a lower bound on the running time
of the algorithm for any input size.
4. The big O notation is commonly used to express the worst case running time of an
algorithm.
5. Different algorithms may have different best, average, and worst case running
times.
What is recursion and how do we analyze a recursive algorithm?
A recursive algorithm is an algorithm that calls itself with smaller input values and
returns the result for the current input by carrying out basic operations on the
returned value for the smaller input. Generally, if a problem can be solved by
applying solutions to smaller versions of the same problem, and the smaller versions
shrink to readily solvable instances, then the problem can be solved using a
recursive algorithm.
To build a recursive algorithm, you break the given problem statement into two
parts: the first is the base case, and the second is the recursive step.
There are four different types of recursive algorithms; you will look at them one by
one.
Direct Recursion
A function that calls itself again in its own function body is directly recursive.
Indirect Recursion
The recursion in which a function calls itself via another function is called indirect
recursion. Now, look at the structure of an indirect recursive program:

void fun1(int y)
{
    if (y > 0)
        fun2(y - 1);
}

void fun2(int z)
{
    if (z > 0)
        fun1(z - 2);
}

In this example, you can see that the function fun1 explicitly calls fun2, which in
turn invokes fun1 again. Hence, this is an example of indirect recursion.
Tail Recursion

int fun(int z)
{
    if (z > 0) {
        printf("%d ", z);
        fun(z - 1); // the recursive call is the last statement
    }
}

If you observe this program, you can see that the last statement the method fun
executes is the recursive call. Because of that, there is no need to remember any
previous state of the program.
Non-Tail Recursion

int fun(int z)
{
    if (z > 0) {
        fun(z - 1);
        printf("%d ", z); // another operation after the recursive call
    }
}

In this function, you can observe that there is another operation after the recursive
call. Hence the call stack has to remember the previous state inside this method
block, which is why this program is considered non-tail recursive.
You will look at a C program to understand recursion in the case of the sum of n
natural numbers problem.
#include <stdio.h>

// Returns the sum of the first n natural numbers recursively
int Sum(int n)
{
    if (n == 0) {
        return 0;          // base case
    }
    int temp = Sum(n - 1); // recursive step on the smaller problem
    return n + temp;
}

int main(void)
{
    int n;
    scanf("%d", &n);
    printf("%d", Sum(n));
    return 0;
}
Output:
For example, for the input n = 5, the program prints 15 (1 + 2 + 3 + 4 + 5).