DS - Unit - 2
UNIT – 2
DATA STRUCTURES
Definition:
Data structures are the fundamental building blocks of computer programming. They
define how data is organized, stored, and manipulated within a program.
Understanding data structures is very important for developing efficient and effective
algorithms.
A data structure is a storage that is used to store and organize data. It is a way of
arranging data on a computer so that it can be accessed and updated efficiently.
A data structure is not only used for organizing the data. It is also used for processing,
retrieving, and storing data. There are different basic and advanced types of data
structures that are used in almost every program or software system that has been
developed.
1. Linear Data Structure: A data structure in which data elements are arranged
sequentially or linearly, where each element is attached to its previous and next
adjacent elements, is called a linear data structure.
2. Static Data Structure: A static data structure has a fixed memory size, which makes
its elements easier to access.
Example: array.
3. Dynamic Data Structure: In a dynamic data structure, the size is not fixed. It can
be updated during runtime, which may be more efficient with respect to the memory
(space) complexity of the code.
4. Non-Linear Data Structure: Data structures in which data elements are not placed
sequentially or linearly are called non-linear data structures. In a non-linear data
structure, we cannot traverse all the elements in a single pass.
Recursive Algorithms
Recursive problems are common in competitive programming, and you will usually
develop recursive logic for them before attempting to solve them with other
programming paradigms. Recursive thinking is a crucial part of programming: it helps
you divide complicated tasks into simpler ones. As a result, recursion is supported in
practically all programming languages.
What is Recursion?
Recursion is the action of a function calling itself either directly or indirectly, and the
function involved is known as a recursive function. Some problems can be tackled with
relative ease using a recursive method: Towers of Hanoi (TOH),
inorder/preorder/postorder tree traversals, DFS of a graph, etc., are a few examples. A
recursive function solves a problem by calling a copy of itself to solve smaller
subproblems of the original problem. More recursive calls can be produced as and when
necessary. It is crucial to provide a specific condition that stops this recursion process;
otherwise it never ends. Each time, the function calls itself on a simplified version of
the initial problem.
A recursive algorithm calls itself with smaller input values and, after performing basic
operations on the value returned for the smaller input, produces the result for the current input.
A problem can be solved recursively if it can be expressed in terms of smaller versions of
itself, and those smaller versions eventually reduce to easily solvable cases.
To construct a recursive algorithm, you split the given problem statement into two
pieces: the base case is the first, and the recursive step is the second.
Base Case: This is the simplest possible instance of the problem, consisting only of a
condition that ends the recursive function. When this condition is satisfied, the
result is evaluated directly.
Recursive Step: It computes the result by making repeated calls to the same
function, but with smaller or simpler inputs.
Recursion is a fantastic approach that allows us to shorten our code and make it simpler
to understand and write. Compared with the iteration approach, it offers a few benefits
that will be covered later. Recursion is one of the finest ways to complete a task that can
be described by related subtasks, for instance, computing the factorial of a number.
Characteristics of Recursion
Steps in an Algorithm
Step 1: Establish a base case. Choose the simplest situation for which the answer is
obvious or trivial. This is the recursion's halting condition, which stops the function
from calling itself indefinitely.
Step 2: Define a recursive case. Describe the problem in terms of smaller versions of
itself; calling the function recursively solves each of these smaller subproblems.
Step 3: Verify that the recursion ends. Make sure the recursive code does not go into
an infinite loop and ultimately reaches the base case.
Step 4: Combine the solutions. To answer the main problem, combine the solutions
to the subproblems.
An Interpretation in Mathematics
Let's look at a situation where a programmer has to compute the sum of the first n
natural numbers. There are a few ways of achieving this, but the easiest is to add the
integers from 1 to n. The function can therefore be represented mathematically as follows:

Approach (1), simply adding one term at a time:
f(n) = 1 + 2 + 3 + ... + n

There is another way to describe this. Approach (2), recursive addition:
f(n) = 1            for n = 1
f(n) = n + f(n-1)   for n > 1

The only difference between approaches (1) and (2) is that in approach (2), the function
f() is called within the function itself. This phenomenon is known as recursion, and a
function that contains such a call is known as a recursive function. In the end, this is an
excellent tool for programmers to code some problems more simply and effectively.
Recursion consumes additional memory because each call of the recursive function
pushes a new frame onto the call stack, and the frames stay there until their calls
conclude. Like the stack data structure, the recursive function follows a LIFO
(last in, first out) discipline. The recursive program states the solution for the base
case directly, and the solution of the larger problem is expressed in terms of smaller
problems.
int fact(int n)
{
    if (n <= 1) // base case
        return 1;
    else
        return n * fact(n - 1);
}
The base case n <= 1 is given in the example above, and a larger value of n is handled
by reducing it step by step until the base case is reached.
The goal is to break a bigger problem down into smaller ones, then add a base
condition (or conditions) to stop the recursion. For instance, we can compute the
factorial of n if we know the factorial of (n-1). The base case for factorial would be
n = 0: if n equals 0, we return 1.
The stack overflow issue may occur if the base case is not reached or defined. To further
grasp this, let's look at an example.
int fact(int n)
{
    // wrong base case (it may cause stack overflow)
    if (n == 100)
        return 1;
    else
        return n * fact(n - 1);
}
If fact(10) is called, it will call fact(9), fact(8), fact(7), and so on, but n will never
reach 100. The base case is never achieved, so these calls keep piling up; when the
functions on the stack fill all available memory, a stack overflow fault occurs.
If a function fun calls itself, it is said to be direct recursive. If a function fun calls
another function, say fun_new, and fun_new in turn calls fun either directly or
indirectly, then fun is said to be indirect recursive.
// An example of direct recursion
void directRecFun()
{
    // Some code...
    directRecFun();
    // Some code...
}

// An example of indirect recursion
void indirectRecFun1()
{
    // Some code...
    indirectRecFun2();
    // Some code...
}

void indirectRecFun2()
{
    // Some code...
    indirectRecFun1();
    // Some code...
}
When a recursive call is the function's final action, it is said to be tail recursive. For more
information, see the article on tail recursion.
Any function called from main() is given memory on the stack. Every time a recursive
function calls itself, a new copy of its local variables is made, and memory for the
called function is allocated on top of the memory allocated for the calling function.
When a call reaches the base case, its memory is released and the process continues
upward, delivering each return value to the function that made the call.
A fresh stack frame is created for the function with each recursive call, and the frame
is removed from the stack once the call returns its value. Because all arguments
and other variables declared inside functions are kept on the stack,
Jayavardhanarao Sahukaru @ aitam 8
AITAM Data Structures MCA
each recursive call maintains a separate stack frame, which is removed after the
function returns its value.
Resolving and tracking the values at each recursive call can be reasonably tricky, so
you must keep track of the stack's contents and the variables' values. Examine the
following example to learn more about how recursive functions allocate memory.
Now consider a recursive algorithm for n = 5 that prints the value of n in each call
until n equals zero. Stack frames accumulate, one per call, while the matching value
of n is printed; once the termination condition is met, 0 is returned to the calling
frame and the frames are eliminated one at a time.
Types of Recursions in C
This section discusses the different types of recursion in the C programming
language. Recursion is the process in which a function calls itself, up to n times. If a
program allows a function to be called recursively inside the same function, the
procedure is called a recursive call of the function. Furthermore, a recursive function
can call itself directly or indirectly in the same program.
void recursion()
{
    recursion(); // the recursive function calls itself inside the same function
}

int main()
{
    recursion(); // function call
}
In the above syntax, the main() function calls the recursion function only once. After that,
the recursion function calls itself up to the defined condition; if the user does not define
a stopping condition, it calls itself infinitely.
The following are the types of recursion in the C programming language:
Direct Recursion
Indirect Recursion
Tail Recursion
Non-Tail / Head Recursion
Linear Recursion
Tree Recursion
Direct Recursion
When a function repeatedly calls itself within its own body, it is called direct
recursion.
fun()
{
    // write some code
    fun();
    // some code
}
In the above structure of direct recursion, fun() recursively calls itself, and this type
of recursion is called direct recursion.
#include <stdio.h>

int fibo_num(int i)
{
    if (i == 0)
    {
        return 0;
    }
    if (i == 1)
    {
        return 1;
    }
    return fibo_num(i - 1) + fibo_num(i - 2);
}

int main()
{
    int i;
    // use a for loop to print the first 10 Fibonacci numbers
    for (i = 0; i < 10; i++)
    {
        printf(" %d \t ", fibo_num(i));
    }
    return 0;
}
Output
0 1 1 2 3 5 8 13 21 34
Indirect Recursion
When functions call one another mutually in a circular manner, the functions are said
to use indirect recursion.
fun1()
{
    // write some code
    fun2();
}

fun2()
{
    // write some code
    fun3();
    // write some code
}

fun3()
{
    // write some code
    fun1();
}

In this structure there are three functions: fun1(), fun2(), and fun3(). When fun1() is
executed, it calls fun2(). Then fun2() starts its execution and calls fun3(), and fun3()
calls fun1() again. In this way, each function leads to another, making the execution
circular, and this type of approach is called indirect recursion.
Output
2 1 4 3 6 5 8 7 10 9
Tail Recursion
A recursive function is called tail-recursive if the recursive call to itself is the last
statement executed by the function. After that call, no function or statement is left to
execute.
Output
Number is: 7
Number is: 6
Number is: 5
Number is: 4
Number is: 3
Number is: 2
Number is: 1
A function is called non-tail (or head) recursive if the recursive call to itself is the
first statement in the function, meaning no statement or operation is executed before
the recursive call. The head-recursive function performs no operation at the time of
the recursive call; instead, all operations are done at return time.
int main ()
{
int a = 5;
printf (" Use of Non-Tail/Head Recursive function \n");
head_fun (a); // function calling
return 0;
}
Linear Recursion
A function is called linear recursive if it makes a single call to itself each time it
runs, so the recursion depth grows linearly in proportion to the size of the problem.
Tree Recursion
A function is said to use tree recursion when it makes more than one call to itself
within the recursive function.
Preliminaries of algorithms
What is an Algorithm?
An algorithm is a finite sequence of well-defined, unambiguous steps for solving a
problem or performing a computation.
Characteristics of an Algorithm
Input: An algorithm has some input values. We can pass 0 or some input value to
an algorithm.
Output: We will get 1 or more output at the end of an algorithm.
Unambiguity: An algorithm should be unambiguous which means that the
instructions in an algorithm should be clear and simple.
Finiteness: An algorithm should have finiteness; here, finiteness means that the
algorithm must terminate after a finite number of steps.
Robustness: Robustness means how clearly an algorithm can define our problem.
User-friendly: If the algorithm is not user-friendly, then the designer will not be able
to explain it to the programmer.
o Dynamic Programming Algorithm: This approach solves a complex problem by
breaking it down into simpler subproblems.
o It stores the results of the subproblems, which is known as memoization.
o It reuses those results so they are not recomputed for the same subproblems.
o Finally, it computes the result of the complex problem.
o Branch and Bound Algorithm: The branch and bound algorithm can be applied
to only integer programming problems. This approach divides all the sets of
feasible solutions into smaller subsets. These subsets are further evaluated to
find the best solution.
o Randomized Algorithm: In a regular algorithm, we have predefined input and
required output. Algorithms that have a defined set of inputs and required output,
and follow described steps, are known as deterministic algorithms. What happens
when a random variable is introduced? In a randomized algorithm, some random
bits are introduced by the algorithm and added to the input to produce the output,
which is random in nature. Randomized algorithms are often simpler and more
efficient than deterministic algorithms.
o Backtracking: Backtracking is an algorithmic technique that solves the problem
recursively and removes the solution if it does not satisfy the constraints of a
problem.
The major categories of algorithms are given below:
o Sort: Algorithm developed for sorting the items in a certain order.
o Search: Algorithm developed for searching the items inside a data structure.
o Delete: Algorithm developed for deleting the existing element from the data
structure.
o Insert: Algorithm developed for inserting an item inside a data structure.
o Update: Algorithm developed for updating the existing element inside a data
structure.
Algorithm Analysis
The algorithm can be analyzed at two levels: before creating the algorithm and after
creating it. The following are the two analyses of an algorithm:
o A priori Analysis: This is the theoretical analysis of an algorithm, done before
implementing it. Factors such as processor speed, which have no effect on this
analysis, are ignored.
o A posteriori Analysis: This is the practical analysis of an algorithm, achieved by
implementing the algorithm in a programming language. This analysis evaluates
how much running time and space the algorithm takes.
Algorithm Complexity
The performance of the algorithm can be measured in two factors:
o Time complexity: The time complexity of an algorithm is the amount of time
required to complete its execution. It is denoted using big O notation, the
asymptotic notation for representing time complexity. Time complexity is mainly
calculated by counting the number of steps needed to finish the execution. Let's
understand time complexity through a simple loop.
In the above code, the time complexity of the loop statement is at least n, and as the
value of n increases, the time complexity increases as well. The complexity of the
return statement, however, is constant: its cost does not depend on the value of n and
it completes in one step. We generally consider the worst-case time complexity, as it
is the maximum time taken for any given input size.
o Space complexity: An algorithm's space complexity is the amount of space
required to solve a problem and produce an output. Similar to the time
complexity, space complexity is also expressed in big O notation.
For an algorithm, the space is required for the following purposes:
1. To store program instructions
2. To store constant values
3. To store variable values
4. To track the function calls, jumping statements, etc.
Auxiliary space: The extra space required by the algorithm, excluding the input, is
known as auxiliary space. Space complexity considers both: the auxiliary space and
the space used by the input.
So,
Space complexity = Auxiliary space + Input size.
Types of Algorithms
The following are the types of algorithms:
o Search Algorithm
o Sort Algorithm
Search Algorithm
Every day we search for something in our day-to-day life. Similarly, a huge amount of
data is stored in a computer, and whenever the user asks for some data, the computer
searches for it in memory and provides it to the user. There are mainly two techniques
available to search for data in an array:
o Linear search
o Binary search
Linear Search
Linear search is a very simple algorithm that searches for an element or a value from
the beginning of an array until the required element is found. It compares the element
being searched for with every element in the array; if a match is found, it returns the
index of the element, otherwise it returns -1. This algorithm can be applied to an
unsorted list.
Binary Search
Binary search is a technique that finds an element very quickly, and it is used to
search for an element in a sorted list. The elements must be stored in sequential
(sorted) order to implement binary search; it cannot be applied if the elements are
stored in a random manner. At each step, it compares the target with the middle
element of the remaining list.
Sorting Algorithms
Sorting algorithms are used to rearrange the elements in an array or a given data
structure either in an ascending or descending order. The comparison operator decides
the new order of the elements.
Why do we need a sorting algorithm?
o An efficient sorting algorithm is required for optimizing the efficiency of other
algorithms, such as binary search, which requires an array sorted in a particular
order, mainly ascending.
o It produces information in a sorted order, which is a human-readable format.
o Searching a particular element in a sorted list is faster than the unsorted list.
Asymptotic Analysis
As we know, a data structure is a way of organizing data efficiently, and that efficiency is
measured in terms of either time or space. So, the ideal data structure is one that takes
the least possible time for all its operations and uses the least memory space. Our focus
will be on time complexity rather than space complexity, and by finding the time
complexity we can decide which data structure is best for an algorithm.
The main question that arises is: on what basis should we compare the time
complexity of data structures? Time complexity can be compared based on the
operations performed on them. Let's consider a simple example.
Suppose we have an array of 100 elements, and we want to insert a new element at
the beginning of the array. This becomes a tedious task, as we first need to shift the
existing elements towards the right before adding the new element at the start of the
array.
Suppose we instead use a linked list to add the element at the beginning. A linked list
node contains two parts, i.e., the data and the address of the next node. We simply
store the address of the current first node in the new node, and the head pointer will
then point to the newly added node. Therefore, we conclude that adding data at the
beginning of a linked list is faster than with arrays. In this way, we can compare data
structures and select the best possible data structure for performing the operations.
How to find the Time Complexity or running time for performing the operations?
The measuring of the actual running time is not practical at all. The running time to
perform any operation depends on the size of the input. Let's understand this
statement through a simple example.
Suppose we have an array of five elements, and we want to add a new element at the
beginning of the array. To achieve this, we need to shift each element towards right,
and suppose each element takes one unit of time. There are five elements, so five units
of time would be taken. Suppose there are 1000 elements in an array, then it takes 1000
units of time to shift. It concludes that time complexity depends upon the input size.
Therefore, if the input size is n, then f(n) is a function of n that denotes the time
complexity, and data structures can be compared by comparing their f(n) values. We
will look at the growth rate of f(n), because one data structure might be better than
another for a smaller input size but not for larger sizes. Now, how do we find f(n)?
Let's look at a simple example.
f(n) = 5n² + 6n + 12
where n is the number of instructions executed, and it depends on the size of the
input.

n       5n²         6n      12
1       5           6       12
10      500         60      12
100     50,000      600     12
1000    5,000,000   6,000   12

As we can observe in the above table, as the value of n increases, the 5n² term grows
the fastest, while the contribution of 6n and 12 becomes insignificant. For larger
values of n, the squared term consumes almost 99% of the time. As the n² term is
contributing most of the time, we can eliminate the other two terms.
Therefore,
f(n) ≈ 5n²
Here we are getting an approximate time complexity whose result is very close to the
actual result. This approximate measure of time complexity is known as asymptotic
complexity. We are not calculating the exact running time: we eliminate the
insignificant terms and consider only the term that takes most of the time.
In mathematical analysis, asymptotic analysis of an algorithm is a method of defining
the mathematical bound of its run-time performance. Using asymptotic analysis, we
can conclude the best-case, average-case, and worst-case running time of an
algorithm.
Example: The running time of one operation is computed as f(n), and for another
operation it is computed as f(n²). This means the running time of the first operation
will increase linearly with an increase in n, while the running time of the second
operation will increase quadratically. Similarly, the running times of both operations
will be nearly the same if n is significantly small.
Usually, the time required by an algorithm comes in three types:
Worst case: It defines the input for which the algorithm takes the longest time.
Average case: It is the average time taken for program execution.
Best case: It defines the input for which the algorithm takes the least time.
Asymptotic Notations
The commonly used asymptotic notations for calculating the running time complexity
of an algorithm are given below:
o Big Oh Notation (O)
o Omega Notation (Ω)
o Theta Notation (θ)
Big Oh Notation (O)
o Big O notation is an asymptotic notation that measures the performance of an
algorithm by simply providing the order of growth of the function.
o This notation provides an upper bound on a function which ensures that the
function never grows faster than the upper bound. So, it gives the least upper
bound on a function so that the function never grows faster than this upper
bound.
It is the formal way to express the upper bound of an algorithm's running time. It
measures the worst-case time complexity, i.e., the longest amount of time the
algorithm can take to complete its operation.
For example:
If f(n) and g(n) are two functions defined for positive integers, then f(n) = O(g(n))
(read as "f(n) is big oh of g(n)" or "f(n) is of the order of g(n)") if there exist
constants c and n0 such that:
f(n) <= c.g(n) for all n >= n0 and c > 0
This implies that f(n) does not grow faster than g(n), or that g(n) is an upper bound on
the function f(n). In this case, we are calculating the growth rate of the function,
which eventually gives the worst-case time complexity, i.e., how badly an algorithm
can perform.
Let's understand through examples
Example 1: f(n) = 2n+3, g(n) = n
Now we have to find: is f(n) = O(g(n))?
To check f(n) = O(g(n)), the following condition must be satisfied:
f(n) <= c.g(n)
First, we replace f(n) by 2n+3 and g(n) by n:
2n+3 <= c.n
Let's assume c = 5 and n = 1. Then:
2*1+3 <= 5*1
5 <= 5
For n = 1, the condition is true. For n = 2:
2*2+3 <= 5*2
7 <= 10
For n = 2, the condition is also true.
For any value of n starting from 1, with c = 5 the condition 2n+3 <= c.n is satisfied.
Therefore, we can say that for some constant c and some constant n0, the condition
2n+3 <= c.n always holds. Since the condition is satisfied, f(n) is big oh of g(n), i.e.,
f(n) grows at most linearly.
Therefore, it concludes that c.g(n) is an upper bound of f(n).
The idea of big O notation is to give an upper bound on a particular function, which
eventually leads to a worst-case time complexity. It provides assurance that the
function does not suddenly behave in a quadratic or cubic fashion; in this example it
behaves in at most a linear manner in the worst case.
Omega Notation (Ω)
o It basically describes the best-case scenario which is opposite to the big o notation.
o It is the formal way to represent the lower bound of an algorithm's running time. It
measures the best amount of time an algorithm can possibly take to complete or
the best-case time complexity.
o It determines what is the fastest time that an algorithm can run.
If we want to state that an algorithm takes at least a certain amount of time, without
giving an upper bound, we use big-Ω notation, i.e., the Greek letter "omega". It is
used to bound the growth of the running time from below for large input sizes.
If f(n) and g(n) are two functions defined for positive integers, then f(n) = Ω(g(n))
(read as "f(n) is omega of g(n)") if there exist constants c and n0 such that:
f(n) >= c.g(n) for all n >= n0 and c > 0
Let's consider a simple example. If f(n) = 2n+3 and g(n) = n, is f(n) = Ω(g(n))?
It must satisfy the condition:
f(n) >= c.g(n)
To check this condition, we first replace f(n) by 2n+3 and g(n) by n:
2n+3 >= c*n
Suppose c = 1:
2n+3 >= n (this inequality holds for any value of n starting from 1).
Therefore, it is proved that f(n) is big omega of g(n).
Here, the g(n) function is the lower bound of the f(n) function when the value of c
equals 1. This notation therefore describes the fastest running time. But we are
usually less interested in the fastest running time; we care more about the worst-case
scenarios, because we want to know, for large inputs, the worst time the algorithm
can take, so that we can make decisions in the further process.
Theta Notation (θ)
o The theta notation mainly describes the average case scenarios.
o It represents the realistic time complexity of an algorithm. Every time, an
algorithm does not perform worst or best, in real-world problems, algorithms
mainly fluctuate between the worst- case and best-case, and this gives us the
average case of the algorithm.
o Big theta is mainly used when the value of worst-case and the best-case is same.
o It is the formal way to express both the upper bound and lower bound of an
algorithm running time.
Let's understand the big theta notation mathematically:
Let f(n) and g(n) be functions of n, where n is the number of steps required to execute
the program. Then:
f(n) = θ(g(n))
The above condition is satisfied only when
c1.g(n) <= f(n) <= c2.g(n)
for some constants c1, c2 > 0 and all n >= n0; that is, the function is bounded by two
limits, an upper and a lower one, and f(n) lies in between. The condition f(n) = θ(g(n))
is true if and only if c1.g(n) is less than or equal to f(n) and c2.g(n) is greater than or
equal to f(n).
constant - θ(1)
logarithmic - θ(log n)
linear - θ(n)
quadratic - θ(n²)
cubic - θ(n³)
polynomial - n^O(1)
exponential - 2^O(n)
Linear Search
Searching is the process of finding some particular element in the list. If the element is
present in the list, then the process is called successful, and the process returns the
location of that element; otherwise, the search is called unsuccessful.
Two popular search methods are Linear Search and Binary Search. So, here we will
discuss the popular searching technique, i.e., Linear Search Algorithm.
It is widely used to search an element from the unordered list, i.e., the list in which items are
not sorted. The worst-case time complexity of linear search is O(n).
The steps used in the implementation of Linear Search are listed as follows -
o In each iteration of the for loop, compare the search element with the current array
element:
o If the elements match, return the index of the corresponding array element.
o If they do not match, move to the next element.
o If there is no match, i.e., the search element is not present in the given array,
return -1.
Algorithm
Linear_Search(a, n, val) // 'a' is the given array, 'n' is the size of the given array, 'val' is the value to search
Step 1: set pos = -1
Step 2: set i = 1
Step 3: repeat step 4 while i <= n
Step 4: if a[i] = val
            set pos = i
            print pos
            go to step 6
        [end of if]
        set i = i + 1
        [end of loop]
Step 5: if pos = -1
            print "value is not present in the array"
        [end of if]
Step 6: exit
Now, start from the first element and compare K with each element of the array.
The value of K, i.e., 41, is not matched with the first element of the array. So,
move to the next element. And follow the same process until the respective
element is found.
Now the element to be searched is found, so the algorithm will return the index of the
matched element.
o Best Case Complexity - In Linear search, best case occurs when the element
we are finding is at the first position of the array. The best-case time complexity
of linear search is O(1).
o Average Case Complexity - The average case time complexity of linear search is
O(n).
o Worst Case Complexity - In linear search, the worst case occurs when the element
we are looking for is present at the end of the array, or when the target element is
not present in the given array at all and we have to traverse the entire array. The
worst-case time complexity of linear search is O(n).
The time complexity of linear search is O(n) because every element in the array is
compared only once.
Binary Search
Linear Search and Binary Search are the two popular searching techniques. Here we
will discuss the Binary Search Algorithm.
Binary search is the search technique that works efficiently on sorted lists. Hence, to
search an element into some list using the binary search technique, we must ensure
that the list is sorted.
Binary search follows the divide and conquer approach in which the list is divided into
two halves, and the item is compared with the middle element of the list. If the match is
found then, the location of the middle element is returned. Otherwise, we search into
either of the halves depending upon the result produced through the match.
NOTE: Binary search can be implemented on sorted array elements. If the list
elements are not arranged in a sorted manner, we have first to sort them.
Algorithm
Binary_Search(a, lower_bound, upper_bound, val) // 'a' is the given array, 'lower_bound' is
the index of the first array element, 'upper_bound' is the index of the last array element,
'val' is the value to search
To understand the working of the Binary search algorithm, let's take a sorted array. It will
be easy to understand the working of Binary search with an example.
There are two methods to implement the binary search algorithm -
o Iterative method
o Recursive method
The recursive method of binary search follows the divide and conquer approach.
We have to use the below formula to calculate the mid of the array -
1. mid = (beg + end) / 2
In the given array, beg = 0 and end = 8.
Now, the element to search is found. So the algorithm will return the index of the element matched.
o Best Case Complexity - In Binary search, best case occurs when the element to
search is found in first comparison, i.e., when the first middle element itself is the
element to be searched. The best-case time complexity of Binary search is O(1).
o Average Case Complexity - The average case time complexity of Binary search
is O(logn).
o Worst Case Complexity - In Binary search, the worst case occurs when we
have to keep reducing the search space till it has only one element. The worst-case
time complexity of Binary search is O(logn).
int main() {
int a[] = {11, 14, 25, 30, 40, 41, 52, 57, 70}; // given array
Output:
Linked List
What is Linked List?
A linked list is a linear data structure which can store a collection of "nodes" connected
via links i.e. pointers. Linked lists nodes are not stored at a contiguous location, rather
they are linked using pointers to the different memory locations. A node consists of the
data value and a pointer to the address of the next node within the linked list.
A linked list is a dynamic linear data structure whose memory size can be allocated or de-
allocated at run time based on the operation insertion or deletion, this helps in using
system memory efficiently. Linked lists can be used to implement various data
structures like a stack, queue, graph, hash maps, etc.
A linked list starts with a head node which points to the first node. Every node consists
of data which holds the actual data (value) associated with the node and a next
pointer which holds the memory address of the next node in the linked list. The last
node is called the tail node in the list which points to null indicating the end of the list.
Singly linked lists contain two "buckets" in one node; one bucket holds the data, and
the other bucket holds the address of the next node of the list. Traversals can be done
in one direction only as there is only a single link between two nodes of the same list.
Doubly Linked Lists contain three "buckets" in one node; one bucket holds the data,
and the other buckets hold the addresses of the previous and next nodes in the list. The
list can be traversed in both directions, as the nodes in the list are connected to each
other from both sides.
Circular linked lists can exist in both singly linked list and doubly linked list.
Since the last node and the first node of the circular linked list are connected, the
traversal in this linked list will go on forever until it is broken.
The basic operations in the linked lists are insertion, deletion, searching, display, and
deleting an element at a given key. These operations are performed on Singly Linked
Lists as given below –
Adding a new node in the linked list is a more than one step activity. First, create a node
using the same structure and find the location where it has to be inserted.
Now, the next node at the left should point to the new node.
This will put the new node in the middle of the two. The new list should look like this –
Insertion in linked list can be done in three different ways. They are explained as follows −
Insertion at Beginning
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
struct node {
   int data;
   struct node *next;
};
struct node *head = NULL;
struct node *current = NULL;
//display the list
void printList(){
   struct node *p = head;
   printf("\n[");
   //start from the beginning
   while(p != NULL) {
      printf(" %d ",p->data);
      p = p->next;
   }
   printf("]");
}
//insertion at the beginning
void insertatbegin(int data){
   //create a link
   struct node *lk = (struct node*) malloc(sizeof(struct node));
   lk->data = data;
   // point it to old first node
   lk->next = head;
   //point first to new first node
   head = lk;
}
void main(){
   insertatbegin(22);
   insertatbegin(30);
   insertatbegin(44);
   insertatbegin(50);
   printf("Linked List: ");
   // print list
   printList();
}
Output:
Insertion at Ending
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
struct node {
   int data;
   struct node *next;
};
struct node *head = NULL;
struct node *current = NULL;
//display the list
void printList(){
   struct node *p = head;
   printf("\n[");
   while(p != NULL) {
      printf(" %d ",p->data);
      p = p->next;
   }
   printf("]");
}
//insertion at the beginning
void insertatbegin(int data){
   //create a link
   struct node *lk = (struct node*) malloc(sizeof(struct node));
   lk->data = data;
   lk->next = head;
   head = lk;
}
//insertion at the end
void insertatend(int data){
   //create a link
   struct node *lk = (struct node*) malloc(sizeof(struct node));
   lk->data = data;
   lk->next = NULL;
   struct node *linkedlist = head;
   //walk to the current last node
   while(linkedlist->next != NULL)
      linkedlist = linkedlist->next;
   //point the last node to the new node
   linkedlist->next = lk;
}
void main(){
   insertatbegin(12);
   insertatbegin(22);
   insertatend(30);
   insertatend(44);
   printf("Linked List: ");
   // print list
   printList();
}
Output:
In this operation, we are adding an element at any position within the list.
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
struct node {
   int data;
   struct node *next;
};
struct node *head = NULL;
struct node *current = NULL;
//display the list
void printList(){
   struct node *p = head;
   printf("\n[");
   while(p != NULL) {
      printf(" %d ",p->data);
      p = p->next;
   }
   printf("]");
}
//insertion at the beginning
void insertatbegin(int data){
   //create a link
   struct node *lk = (struct node*) malloc(sizeof(struct node));
   lk->data = data;
   lk->next = head;
   head = lk;
}
//insertion after the given node
void insertafternode(struct node *list, int data){
   //create a link
   struct node *lk = (struct node*) malloc(sizeof(struct node));
   lk->data = data;
   //new node points to the old successor, predecessor points to the new node
   lk->next = list->next;
   list->next = lk;
}
void main(){
   int k=0;
   insertatbegin(12);
   insertatbegin(22);
   insertafternode(head->next, 30);
   printf("Linked List: ");
   // print list
   printList();
}
Output:
Deletion is also a more than one step process. We shall learn with pictorial
representation. First, locate the target node to be removed by using searching
algorithms.
The left (previous) node of the target node now should point to the next node of the target
node –
This will remove the link that was pointing to the target node. Now, using the following
code, we will remove what the target node is pointing at.
If we need to use the deleted node, we can keep it in memory; otherwise, we can simply
deallocate the memory and wipe off the target node completely.
Similar steps should be taken if the node is being inserted at the beginning of the list. While
inserting it at the end, the second last node of the list should point to the new node and
the new node will point to NULL.
Deletion in linked lists is also performed in three different ways. They are as follows −
Deletion at Beginning
In this deletion operation of the linked list, we are deleting an element from the beginning of
the list. For this, we point the head to the second node.
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
struct node {
   int data;
   struct node *next;
};
struct node *head = NULL;
struct node *current = NULL;
      p = p->next;
   }
   printf("]");
}
//insertion at the beginning
void insertatbegin(int data){
   //create a link
   struct node *lk = (struct node*) malloc(sizeof(struct node));
   lk->data = data;
   // print list
   printList();
   deleteatbegin();
   printf("\nLinked List after deletion: ");
   // print list
   printList();
}
Output:
Deletion at Ending
In this deletion operation of the linked list, we are deleting an element from the end of the
list.
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
struct node {
   int data;
   struct node *next;
};
struct node *head = NULL;
struct node *current = NULL;
//insertion at the beginning
void insertatbegin(int data){
   //create a link
   struct node *lk = (struct node*) malloc(sizeof(struct node));
   lk->data = data;
   // print list
   printList();
   deleteatend();
   printf("\nLinked List after deletion: ");
   // print list
   printList();
}
Output:
In this deletion operation of the linked list, we are deleting an element at any position of the
list.
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
struct node {
   int data;
   struct node *next;
};
struct node *head = NULL;
struct node *current = NULL;
//insertion at the beginning
void insertatbegin(int data){
   //create a link
   struct node *lk = (struct node*) malloc(sizeof(struct node));
   lk->data = data;
   // print list
   printList();
   deletenode(30);
   printf("\nLinked List after deletion: ");
   // print list
   printList();
}
Output:
This operation is a thorough one. We need to make the last node to be pointed by the
head node and reverse the whole linked list.
First, we traverse to the end of the list. It should be pointing to NULL. Now, we shall make
it point to its previous node −
We have to make sure that the last node is not lost. So we'll have some
temp node, which looks like the head node pointing to the last node. Now, we shall make
all left side nodes point to their previous nodes one by one.
Except the node (first node) pointed by the head node, all nodes should point to their
predecessor, making them their new successor. The first node will point to NULL.
We'll make the head node point to the new first node by using the temp node.
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
struct node {
   int data;
   struct node *next;
};
struct node *head = NULL;
struct node *current = NULL;
//insertion at the beginning
void insertatbegin(int data){
   //create a link
   struct node *lk = (struct node*) malloc(sizeof(struct node));
   lk->data = data;
   lk->next = head;
   head = lk;
}
//reversing the list
void reverseList(struct node **head){
   struct node *prev = NULL, *cur = *head, *tmp;
   while(cur != NULL) {
      tmp = cur->next;
      cur->next = prev;
      prev = cur;
      cur = tmp;
   }
   *head = prev;
}
void main(){
   int k=0;
   insertatbegin(12);
   insertatbegin(22);
   insertatbegin(30);
   insertatbegin(40);
   insertatbegin(55);
   printf("Linked List: ");
   // print list
   printList();
   reverseList(&head);
   printf("\nReversed Linked List: ");
   printList();
}
Output:
Searching for an element in the list is done using a key element. This operation is done in the
same way as array search: comparing every element in the list with the given key
element.
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
struct node {
   int data;
   struct node *next;
};
struct node *head = NULL;
struct node *current = NULL;
   struct node *p = head;
   printf("\n[");
//insertion at the beginning
void insertatbegin(int data){
   //create a link
   struct node *lk = (struct node*) malloc(sizeof(struct node));
   lk->data = data;
   printf("Linked List: ");
   // print list
   printList();
   int ele = 30;
   printf("\nElement to be searched is: %d", ele);
   k = searchlist(30);
   if (k == 1)
      printf("\nElement is found");
   else
      printf("\nElement is not found in the list");
}
Output:
The traversal operation walks through all the elements of the list in an order and
displays the elements in that order.
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
struct node {
   int data;
   struct node *next;
};
struct node *head = NULL;
struct node *current = NULL;
   printf("]");
}
//create a link
struct node *lk = (struct node*) malloc(sizeof(struct node));
lk->data = data;
   // print list
   printList();
}
Output:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
struct node {
   int data;
   struct node *next;
};
struct node *head = NULL;
struct node *current = NULL;
void printList(){
   struct node *p = head;
   printf("\n[");
   //start from the beginning
   while(p != NULL) {
      printf(" %d ",p->data);
      p = p->next;
   }
   printf("]");
}
//insertion at the beginning
void insertatbegin(int data){
   //create a link
   struct node *lk = (struct node*) malloc(sizeof(struct node));
   lk->data = data;
   // point it to old first node
   lk->next = head;
   //point first to new first node
   head = lk;
}
void insertatend(int data){
   //create a link
   struct node *lk = (struct node*) malloc(sizeof(struct node));
   lk->data = data;
   struct node *linkedlist = head;
      prev = temp;
      temp = temp->next;
   }
   // print list
   printList();
   deleteatbegin();
   deleteatend();
   deletenode(12);
   printf("\nLinked List after deletion: ");
   // print list
   printList();
   insertatbegin(4);
   insertatbegin(16);
   printf("\nUpdated Linked List: ");
   printList();
   k = searchlist(16);
   if (k == 1)
      printf("\nElement is found");
   else
      printf("\nElement is not present in the list");
}
Output:
Doubly Linked List is a variation of the linked list in which navigation is possible in both
directions, forward as well as backward, unlike a singly linked list. The
following are the important terms to understand the concept of doubly linked list.
Link − Each link of a linked list can store a data item called an element.
Next − Each link of a linked list contains a link to the next link called Next.
Prev − Each link of a linked list contains a link to the previous link called Prev.
Linked List − A Linked List contains the connection link to the first link called
First and to the last link called Last.
As per the above illustration, the following are the important points to be considered.
Doubly Linked List contains a link element called first and last.
Each link carries a data field(s) and two link fields called next and prev.
Each link is linked with its next link using its next pointer.
Each link is linked with its previous link using its prev pointer.
The last link carries a link as null to mark the end of the list.
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdbool.h>
struct node {
   int data;
   int key;
   struct node *next;
   struct node *prev;
};
void printList(){
   struct node *ptr = head;
   while(ptr != NULL) {
      printf("(%d,%d) ",ptr->key,ptr->data);
      ptr = ptr->next;
   }
}
//create a link
struct node *link = (struct node*) malloc(sizeof(struct node));
link->key = key;
link->data = data;
if(isEmpty()) {
Output:
Doubly Linked List: (6,56) (5,40) (4,1) (3,30) (2,20) (1,10)
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdbool.h>
struct node {
   int data;
   int key;
   struct node *next;
   struct node *prev;
};
void printList(){
   struct node *ptr = head;
   while(ptr != NULL) {
      printf("(%d,%d) ",ptr->key,ptr->data);
      ptr = ptr->next;
   }
}
//create a link
struct node *link = (struct node*) malloc(sizeof(struct node));
link->key = key;
link->data = data;
if(isEmpty()) {
   insertFirst(1,10);
   insertFirst(2,20);
   insertFirst(3,30);
   insertFirst(4,1);
   insertLast(5,40);
   insertLast(6,56);
   printf("Doubly Linked List: ");
   printList();
Output:
Doubly Linked List: (4,1) (3,30) (2,20) (1,10) (5,40) (6,56)
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdbool.h>
struct node {
   int data;
   int key;
   struct node *next;
   struct node *prev;
};
//create a link
struct node *link = (struct node*) malloc(sizeof(struct node));
link->key = key;
link->data = data;
if(isEmpty()) {
   head->next->prev = NULL;
}
head = head->next;
Output:
Doubly Linked List: (6,56) (5,40) (4,1) (3,30) (2,20) (1,10)
List after deleting first record: (5,40) (4,1) (3,30) (2,20) (1,10)
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdbool.h>
struct node {
   int data;
   int key;
   struct node *next;
   struct node *prev;
};
bool isEmpty(){
   return head == NULL;
}
//print data
printf("(%d,%d) ",ptr->key,ptr->data);
//create a link
struct node *link = (struct node*) malloc(sizeof(struct node));
link->key = key;
link->data = data;
if(isEmpty()) {
   link;
}
//create a link
struct node *link = (struct node*) malloc(sizeof(struct node));
link->key = key;
link->data = data;
if(isEmpty()) {
   return tempLink;
}
//create a link
struct node *newLink = (struct node*) malloc(sizeof(struct node));
newLink->key = key;
newLink->data = data;
if(current == last) {
   newLink->next = NULL;
   last = newLink;
} else {
   newLink->next = current->next;
   current->next->prev = newLink;
}
newLink->prev = current;
current->next = newLink;
return true;
}
int main(){
insertFirst(1,10);
insertFirst(2,20);
insertFirst(3,30);
insertFirst(4,1);
insertFirst(5,40);
insertFirst(6,56);
printf("\nList (First to Last): ");
displayForward();
printf("\n");
printf("\nList (Last to first): ");
displayBackward();
printf("\nList , after deleting first record: ");
deleteFirst();
displayForward();
printf("\nList , after deleting last record: ");
deleteLast();
displayForward();
printf("\nList , insert after key(4) : ");
insertAfter(4,7, 13);
displayForward();
printf("\nList , after delete key(4) : ");
delete(4);
displayForward();
}
Output:
Circular Linked List is a variation of Linked list in which the first element points to the last
element and the last element points to the first element. Both Singly Linked List and
Doubly Linked List can be made into a circular linked list.
In a doubly linked list, the next pointer of the last node points to the first node and the
previous pointer of the first node points to the last node, making it circular in both
directions.
As per the above illustration, the following are the important points to be considered.
The last link's next points to the first link of the list in both cases of singly as
well as doubly linked list.
The first link's previous points to the last of the list in case of doubly linked list.
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdbool.h>
struct node {
   int data;
   int key;
   struct node *next;
};
struct node *head = NULL;
struct node *current = NULL;
bool isEmpty(){
   return head == NULL;
}
//create a link
struct node *link = (struct node*) malloc(sizeof(struct node));
link->key = key;
link->data = data;
if (isEmpty()) {
   head = link;
   head->next = head;
} else {
   printf("\n[ ");
   printf(" ]");
}
void main(){
insertFirst(1,10);
insertFirst(2,20);
insertFirst(3,30);
insertFirst(4,1);
insertFirst(5,40);
insertFirst(6,56);
printf("Circular Linked List: ");
//print list
printList();
}
Output:
Circular Linked List:
[ (6,56) (5,40) (4,1) (3,30) (2,20) ]
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdbool.h>
struct node {
   int data;
   int key;
   struct node *next;
};
struct node *head = NULL;
struct node *current = NULL;
bool isEmpty(){
   return head == NULL;
}
//create a link
struct node *link = (struct node*) malloc(sizeof(struct node));
link->key = key;
link->data = data;
if (isEmpty()) {
   head = link;
   head->next = head;
} else {
//print list
printList();
deleteFirst();
printf("\nList after deleting the first item: ");
printList();
}
Output:
Circular Linked List: (6,56) (5,40) (4,1) (3,30) (2,20)
List after deleting the first item: (5,40) (4,1) (3,30) (2,20)
Algorithm
Example
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdbool.h>
struct node {
   int data;
   int key;
   struct node *next;
};
struct node *head = NULL;
struct node *current = NULL;
bool isEmpty(){
   return head == NULL;
}
//create a link
struct node *link = (struct node*) malloc(sizeof(struct node));
link->key = key;
link->data = data;
if (isEmpty()) {
   head = link;
   head->next = head;
} else {
   printf("\n[ ");
   printf(" ]");
}
void main(){
insertFirst(1,10);
insertFirst(2,20);
insertFirst(3,30);
insertFirst(4,1);
insertFirst(5,40);
insertFirst(6,56);
printf("Circular Linked List: ");
//print list
printList();
}
Output:
Circular Linked List:
[ (6,56) (5,40) (4,1) (3,30) (2,20) ]
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdbool.h>
struct node {
   int data;
   int key;
   struct node *next;
};
struct node *head = NULL;
struct node *current = NULL;
bool isEmpty(){
   return head == NULL;
}
int length(){
   int length = 0;
   //if list is empty
   if(head == NULL) {
      return 0;
   }
   current = head->next;
   while(current != head) {
      length++;
      current = current->next;
   }
   return length;
}
int main(){
   insertFirst(1,10);
   insertFirst(2,20);
   insertFirst(3,30);
   insertFirst(4,1);
   insertFirst(5,40);
   insertFirst(6,56);
   printf("Original List: ");
   //print list
   printList();
   while(!isEmpty()) {
      struct node *temp = deleteFirst();
      printf("\nDeleted value:");
      printf("(%d,%d) ",temp->key,temp->data);
   }
   printf("\nList after deleting all items: ");
   printList();
}
Output:
Original List:
[ (6,56) (5,40) (4,1) (3,30) (2,20) ]
Deleted value:(6,56)
Deleted value:(5,40)
Deleted value:(4,1)
Deleted value:(3,30)
Deleted value:(2,20)
Deleted value:(1,10)
List after deleting all items: [ ]