DS Unit-1
Introduction
A data structure is a way of organising and storing data elements in a computer so that
they can be used efficiently. Some examples of data structures are arrays, linked lists,
stacks, and queues. Data structures are widely used in almost every area of computer
science: operating systems, compiler design, artificial intelligence, graphics, and many more.
Data structures are a central part of many computer science algorithms, as they enable
programmers to handle data in an efficient way. They play a vital role in the performance of
a program, since the main job of most software is to store and retrieve the user's data as
fast as possible.
Basic Terminology
Data structures are the building blocks of any program or software. Choosing the
appropriate data structure for a program is one of the most difficult tasks for a programmer.
The following terminology is used when discussing data structures.
Data: Data can be defined as an elementary value or a collection of values; for example, a
student's name and ID are data about the student.
Group Items: Data items which have subordinate data items are called group items; for
example, the name of a student can consist of a first name and a last name.
Record: A record is a collection of related data items; for example, for the student entity, the
name, address, course, and marks can be grouped together to form the record for a student.
File: A file is a collection of records of one type of entity; for example, if there are 60
employees, then the related file contains 60 records, one for each employee.
Attribute and Entity: An entity represents a class of objects and has various attributes.
Each attribute represents a particular property of that entity.
Field: A field is a single elementary unit of information representing an attribute of an entity.
As applications get more complex and the amount of data grows day by day, the following
problems may arise:
Processor speed: Handling very large amounts of data requires high-speed processing,
but as data grows to billions of records per entity, the processor may fail to cope with that
much data.
Data Search: Consider an inventory of 10^6 (one million) items in a store. If our application
needs to search for a particular item, it has to traverse up to 10^6 items every time, which
slows down the search process.
Multiple requests: If thousands of users search the data simultaneously on a web server,
then there is a chance that even a very large server fails under the load.
Data structures are used to solve the above problems. Data is organised into a data
structure in such a way that not all items need to be examined, and the required data can be
found almost instantly.
Advantages of Data Structures
Efficiency: The efficiency of a program depends upon the choice of data structures. For
example, suppose we have some data and we need to search for a particular record. If we
organise our data in an array, we will have to search sequentially, element by element;
hence, using an array may not be very efficient here. There are better data structures that
make the search process efficient, such as an ordered array, a binary search tree, or a
hash table.
Reusability: Data structures are reusable, i.e. once we have implemented a particular data
structure, we can use it anywhere else. Implementations of data structures can be compiled
into libraries which can be used by different clients.
Abstraction: A data structure is specified by an abstract data type (ADT), which provides a
level of abstraction. The client program uses the data structure through its interface only,
without getting into the implementation details.
Linear Data Structures
Arrays: An array is a collection of data items of the same type, and each data item is called
an element of the array. The element's data type may be any valid type such as char, int,
float, or double.
The elements of an array share the same variable name, but each one carries a different
index number known as a subscript. An array can be one-dimensional, two-dimensional, or
multidimensional.
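For illustration, a minimal C sketch (the array name and values here are just examples):

#include <stdio.h>

int main(void)
{
    int marks[5] = {90, 85, 70, 65, 95};   /* one-dimensional array of 5 ints */

    /* each element shares the name "marks" but has its own subscript */
    for (int i = 0; i < 5; i++)
        printf("marks[%d] = %d\n", i, marks[i]);

    return 0;
}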
Linked List: A linked list is a linear data structure used to maintain a list in memory. It can
be seen as a collection of nodes stored at non-contiguous memory locations, where each
node contains a pointer to its adjacent node.
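A minimal C sketch of such a node (the names are illustrative, not from a particular library):

#include <stdio.h>
#include <stdlib.h>

struct node {
    int data;
    struct node *next;      /* pointer to the adjacent node */
};

int main(void)
{
    /* two nodes, possibly at non-contiguous memory locations */
    struct node *second = malloc(sizeof(struct node));
    second->data = 20;
    second->next = NULL;

    struct node *first = malloc(sizeof(struct node));
    first->data = 10;
    first->next = second;   /* the first node points to the second */

    for (struct node *p = first; p != NULL; p = p->next)
        printf("%d\n", p->data);

    free(first);
    free(second);
    return 0;
}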
Stack: A stack is a linear list in which insertions and deletions are allowed only at one end,
called the top.
A stack is an abstract data type (ADT) that can be implemented in most programming
languages. It is named a stack because it behaves like a real-world stack, for example a
pile of plates or a deck of cards.
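A minimal array-based sketch in C (MAX, push, and pop are illustrative names, and this is
one of many possible implementations):

#include <stdio.h>
#define MAX 100

int stack[MAX];
int top = -1;                    /* -1 means the stack is empty */

void push(int x) {               /* insertion, allowed only at the top */
    if (top == MAX - 1) { printf("overflow\n"); return; }
    stack[++top] = x;
}

int pop(void) {                  /* deletion, allowed only at the top */
    if (top == -1) { printf("underflow\n"); return -1; }
    return stack[top--];
}

int main(void) {
    push(1); push(2); push(3);
    printf("%d\n", pop());       /* prints 3: last in, first out */
    return 0;
}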
Queue: A queue is a linear list in which elements can be inserted only at one end, called the
rear, and deleted only at the other end, called the front.
It is an abstract data type, similar to a stack. Because a queue is open at both ends, it
follows the First-In-First-Out (FIFO) method for storing and retrieving data items.
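A minimal array-based sketch in C (again with illustrative names; circular queues and other
refinements are ignored here):

#include <stdio.h>
#define MAX 100

int queue[MAX];
int front = 0, rear = 0;         /* the queue is empty when front == rear */

void enqueue(int x) {            /* insertion, only at the rear */
    if (rear == MAX) { printf("overflow\n"); return; }
    queue[rear++] = x;
}

int dequeue(void) {              /* deletion, only at the front */
    if (front == rear) { printf("underflow\n"); return -1; }
    return queue[front++];
}

int main(void) {
    enqueue(1); enqueue(2); enqueue(3);
    printf("%d\n", dequeue());   /* prints 1: first in, first out */
    return 0;
}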
Non-Linear Data Structures: These data structures do not form a sequence, i.e. each item
or element may be connected with two or more other items in a non-linear arrangement.
The data elements are not arranged in a sequential structure.
Trees: Trees are multilevel data structures with a hierarchical relationship among their
elements, known as nodes. The bottommost nodes in the hierarchy are called leaf nodes,
while the topmost node is called the root node. Each node contains pointers to its adjacent
nodes.
The tree data structure is based on a parent-child relationship among the nodes. Each node
except the leaf nodes can have one or more children, whereas each node except the root
has exactly one parent. Trees can be classified into many categories, which will be
discussed later in this tutorial.
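As a small sketch, here is a binary tree in C, the common special case in which each node
has at most two children (the names are illustrative):

#include <stdio.h>
#include <stdlib.h>

struct tnode {
    int data;
    struct tnode *left, *right;   /* pointers to the child nodes */
};

struct tnode *newnode(int data) {
    struct tnode *n = malloc(sizeof(struct tnode));
    n->data = data;
    n->left = n->right = NULL;
    return n;
}

int main(void) {
    struct tnode *root = newnode(1);   /* topmost node: the root */
    root->left = newnode(2);           /* children of the root; */
    root->right = newnode(3);          /* here 2 and 3 are leaf nodes */
    printf("root=%d left=%d right=%d\n",
           root->data, root->left->data, root->right->data);
    return 0;
}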
Graphs: A graph can be defined as a pictorial representation of a set of elements
(represented by vertices) connected by links known as edges. A graph differs from a tree in
that a graph can contain a cycle, while a tree cannot.
Operations on Data Structures
1) Traversing: Every data structure contains a set of data elements. Traversing the data
structure means visiting each element of the data structure in order to perform some
specific operation, such as searching or sorting.
2) Insertion: Insertion can be defined as the process of adding an element to the data
structure at any location.
If the data structure already holds as many elements as its size n allows, inserting another
element causes overflow.
3) Deletion: The process of removing an element from the data structure is called deletion.
We can delete an element from the data structure at any location.
If we try to delete an element from an empty data structure, underflow occurs.
4) Searching: The process of finding the location of an element within the data structure is
called searching. There are two standard algorithms for searching: linear search and binary
search. We will discuss each of them later in this tutorial.
5) Sorting: The process of arranging the data structure in a specific order is known as
Sorting. There are many algorithms that can be used to perform sorting, for example,
insertion sort, selection sort, bubble sort, etc.
6) Merging: When two lists, List A of size M and List B of size N, containing similar types of
elements, are joined to produce a third list, List C of size (M+N), the process is called
merging.
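A minimal C sketch of merging, assuming both input lists are already sorted (as in merge
sort); the function name is illustrative:

#include <stdio.h>

/* merge sorted a[0..m-1] and b[0..n-1] into c[0..m+n-1] */
void merge(int a[], int m, int b[], int n, int c[]) {
    int i = 0, j = 0, k = 0;
    while (i < m && j < n)
        c[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
    while (i < m) c[k++] = a[i++];   /* copy any leftovers of List A */
    while (j < n) c[k++] = b[j++];   /* copy any leftovers of List B */
}

int main(void) {
    int a[] = {1, 3, 5}, b[] = {2, 4, 6}, c[6];
    merge(a, 3, b, 3, c);
    for (int k = 0; k < 6; k++) printf("%d ", c[k]);   /* 1 2 3 4 5 6 */
    printf("\n");
    return 0;
}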
What is an Algorithm?
An algorithm is a process or a set of rules required to perform calculations or some other
problem-solving operations, especially by a computer. Formally, an algorithm is a finite set
of instructions carried out in a specific order to perform a specific task. It is not the complete
program or code; it is just the solution (logic) of a problem, which can be represented either
as an informal description or by using a flowchart or pseudocode.
Characteristics of an Algorithm
● Input: An algorithm has some input values; we can pass zero or more inputs to an
algorithm.
● Output: An algorithm produces at least one output at the end.
● Unambiguity: An algorithm should be unambiguous which means that the
instructions in an algorithm should be clear and simple.
● Finiteness: An algorithm should have finiteness. Here, finiteness means that the
algorithm should contain a limited number of instructions, i.e., the instructions should
be countable.
● Effectiveness: An algorithm should be effective as each instruction in an algorithm
affects the overall process.
● Language independent: An algorithm must be language-independent so that the
instructions in an algorithm can be implemented in any of the languages with the
same output.
Dataflow of an Algorithm
● Problem: A problem can be a real-world problem or any instance from the real-world
problem for which we need to create a program or the set of instructions. The set of
instructions is known as an algorithm.
● Algorithm: An algorithm, a step-by-step procedure, is designed for the problem.
● Input: After designing an algorithm, the required and the desired inputs are provided
to the algorithm.
● Processing unit: The input will be given to the processing unit, and the processing
unit will produce the desired output.
● Output: The output is the outcome or the result of the program.
Factors of an Algorithm
The following are the factors that we need to consider for designing an algorithm:
● Modularity: If a given problem can be broken down into small modules or small
steps, which is the basic definition of an algorithm, then the algorithm exhibits
modularity.
● Correctness: An algorithm is correct when the given inputs produce the desired
output, which means that the algorithm has been designed correctly.
● Maintainability: Here, maintainability means that the algorithm should be designed
in a simple, structured way, so that when we redefine the algorithm, no major
changes are needed.
● Functionality: It considers various logical steps to solve the real-world problem.
● Robustness: Robustness means how clearly an algorithm defines and handles our
problem.
● User-friendly: If the algorithm is not user-friendly, then the designer will not be able
to explain it to the programmer.
● Simplicity: If the algorithm is simple then it is easy to understand.
● Extensibility: If any other algorithm designer or programmer wants to use your
algorithm then it should be extensible.
Approaches to Designing an Algorithm
The following approaches are used, considering both the theoretical and practical
importance of designing an algorithm:
● Brute force algorithm: A general logic structure is applied to design an algorithm. It
is also known as an exhaustive search algorithm, since it searches all the
possibilities to provide the required solution. Such algorithms are of two types:
1. Optimizing: Find all the solutions of a problem and then take the best one;
if the value of the best solution is known, the search terminates as soon as
a solution with that value is found.
2. Sacrificing: The search stops as soon as a good enough solution is found.
● Divide and conquer: It allows you to design an algorithm in a step-by-step manner:
the problem is broken down into smaller subproblems, each subproblem is solved,
and the valid outputs of the subproblems are combined to produce the solution to
the whole problem.
● Greedy algorithm: It is an algorithm paradigm that makes the locally optimal choice
at each iteration in the hope of getting the best overall solution. It is easy to
implement and has a fast execution time, but it provides the globally optimal
solution only in some cases.
● Dynamic programming: It makes the algorithm more efficient by storing
intermediate results (a sketch follows this list). It uses five steps to find the optimal
solution for a problem:
1. Break the problem down into subproblems.
2. Find the optimal solutions of these subproblems.
3. Store the results of the subproblems; this is known as memoization.
4. Reuse the stored results so that the same subproblem is not recomputed.
5. Finally, compute the result of the complex problem.
● Branch and Bound Algorithm: The branch and bound algorithm can be applied to
only integer programming problems. This approach divides all the sets of feasible
solutions into smaller subsets. These subsets are further evaluated to find the best
solution.
● Randomized Algorithm: A regular algorithm has predefined input and required
output. Algorithms that have a defined set of inputs and a required output, and
follow some described steps, are known as deterministic algorithms. What happens
when a random variable is introduced? In a randomized algorithm, some random
bits are introduced by the algorithm and added to the input to produce the output,
which is random in nature. Randomized algorithms are often simpler and more
efficient than their deterministic counterparts.
● Backtracking: Backtracking is an algorithmic technique that solves the problem
recursively and removes the solution if it does not satisfy the constraints of a
problem.
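As promised above, a minimal C sketch of the dynamic-programming idea, using memoized
Fibonacci numbers (the function and array names are illustrative):

#include <stdio.h>

long long memo[91];   /* 0 marks "not computed yet"; fib(90) still fits in long long */

long long fib(int n) {
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];     /* reuse the stored result */
    memo[n] = fib(n - 1) + fib(n - 2);    /* solve each subproblem only once */
    return memo[n];
}

int main(void) {
    printf("%lld\n", fib(50));   /* 12586269025, computed in O(n) calls */
    return 0;
}

Without the memo array, the same recursion would take exponential time, because identical
subproblems would be recomputed over and over.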
Algorithm Complexity
● Time complexity: The time complexity of an algorithm is the amount of time
required to complete its execution. It is denoted by big-O notation, the asymptotic
notation for representing time complexity. The time complexity is mainly calculated
by counting the number of steps needed to finish the execution. Let's understand
time complexity through an example.
sum = 0;                 // executed once
// suppose we have to calculate the sum of n numbers
for (i = 1; i <= n; i++)
    sum = sum + i;       // executed n times
// when the loop ends, sum holds the sum of the n numbers
return sum;              // executed once
In the above code, the time complexity of the loop statement is proportional to n; if the value
of n increases, the time taken increases too. The complexity of the return statement,
however, is constant, as its cost does not depend on the value of n: it provides the result in
one step only. We generally consider the worst-case time complexity, as it is the maximum
time taken for any given input size.
Auxiliary space: The extra space required by the algorithm, excluding the input, is known
as auxiliary space. Space complexity considers both of these, i.e. the auxiliary space and
the space used by the input.
So, Space complexity = Auxiliary space + Input space.
Types of Algorithms
The following are the types of algorithm:
● Search Algorithm
● Sort Algorithm
Search Algorithm
Every day, we search for something in our day-to-day life. Similarly, a computer stores a
huge amount of data, and whenever the user asks for any data, the computer searches for
that data in its memory and provides it to the user. There are mainly two techniques
available to search for data in an array:
● Linear search
● Binary search
Linear Search
Linear search is a very simple algorithm that starts searching for an element or a value from
the beginning of an array and continues until the required element is found or the end of the
array is reached. It compares the element to be searched with the elements of the array one
by one; if a match is found, it returns the index of the element, otherwise it returns -1. This
algorithm can be applied to an unsorted list.
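A minimal C sketch of linear search (illustrative names):

#include <stdio.h>

/* returns the index of key in arr[0..n-1], or -1 if it is absent */
int linear_search(int arr[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (arr[i] == key)          /* compare key with each element in turn */
            return i;
    return -1;                      /* reached the end without a match */
}

int main(void) {
    int arr[] = {7, 3, 9, 1, 5};                /* works on an unsorted list */
    printf("%d\n", linear_search(arr, 5, 9));   /* prints 2 */
    return 0;
}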
Binary Search
Binary search is an algorithm that finds an element very quickly. It is used to search for an
element in a sorted list: the elements must be stored in sorted order to apply binary search,
and it cannot be used if the elements are stored in a random manner. It works by repeatedly
comparing the searched element with the middle element of the remaining part of the list.
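A minimal C sketch of binary search (illustrative names; the array must be sorted):

#include <stdio.h>

int binary_search(int arr[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* middle element of the current range */
        if (arr[mid] == key) return mid;
        if (arr[mid] < key) low = mid + 1;  /* key can only be in the right half */
        else high = mid - 1;                /* key can only be in the left half */
    }
    return -1;
}

int main(void) {
    int arr[] = {1, 3, 5, 7, 9};               /* sorted, as binary search requires */
    printf("%d\n", binary_search(arr, 5, 7));  /* prints 3 */
    return 0;
}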
Sorting Algorithms
Sorting algorithms are used to rearrange the elements of an array or a given data structure
in either ascending or descending order. The comparison operator decides the new order of
the elements.
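As one concrete example, a minimal C sketch of bubble sort (illustrative names):

#include <stdio.h>

/* bubble sort: repeatedly swap adjacent elements that are out of order */
void bubble_sort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (arr[j] > arr[j + 1]) {   /* the comparison decides the order */
                int tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
            }
}

int main(void) {
    int arr[] = {5, 1, 4, 2, 8};
    bubble_sort(arr, 5);
    for (int i = 0; i < 5; i++) printf("%d ", arr[i]);   /* 1 2 4 5 8 */
    printf("\n");
    return 0;
}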
Asymptotic Analysis
As we know, a data structure is a way of organising data efficiently, and that efficiency is
measured either in terms of time or space. The ideal data structure is one that takes the
least possible time for all its operations and occupies the least memory space. Our focus
will be on finding the time complexity rather than the space complexity; by finding the time
complexity, we can decide which data structure is best for an algorithm.
The main question is: on what basis should we compare the time complexity of data
structures? The time complexity can be compared based on the operations performed on
them. Let's consider a simple example.
Suppose we have an array of 100 elements, and we want to insert a new element at the
beginning of the array. This is a tedious task, as we first need to shift every element one
position towards the right, and only then can we add the new element at the start of the array.
Suppose instead we use a linked list to add the element at the beginning. A linked list node
contains two parts, i.e., the data and the address of the next node. We simply store the
address of the current first node in the new node, and the head pointer then points to the
newly added node. Therefore, we conclude that adding data at the beginning of a linked list
is faster than in an array. In this way, we can compare data structures and select the best
possible data structure for the operations we intend to perform.
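A minimal C sketch of this constant-time insertion at the head (illustrative names):

#include <stdio.h>
#include <stdlib.h>

struct node { int data; struct node *next; };

/* insert at the beginning: O(1), no elements are shifted */
struct node *insert_front(struct node *head, int value) {
    struct node *n = malloc(sizeof(struct node));
    n->data = value;
    n->next = head;   /* new node stores the address of the old first node */
    return n;         /* the head pointer now points to the new node */
}

int main(void) {
    struct node *head = NULL;
    head = insert_front(head, 2);
    head = insert_front(head, 1);
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d ", p->data);   /* prints 1 2 */
    printf("\n");
    return 0;
}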
How to find the Time Complexity or running time for performing the
operations?
Measuring the actual running time is not practical. The running time of any operation
depends on the size of the input. Let's understand this statement through a simple example.
Suppose we have an array of five elements, and we want to add a new element at the
beginning of the array. To achieve this, we need to shift each element one position towards
the right, and suppose each shift takes one unit of time. There are five elements, so five
units of time are needed. If there are 1000 elements in the array, it takes 1000 units of time
to shift them. This shows that the time complexity depends upon the input size.
Therefore, if the input size is n, then f(n) is a function of n that denotes the time complexity.
Calculating the value of f(n) for smaller programs is easy, but for bigger programs it is not
so easy. We can compare data structures by comparing their f(n) values. We will look at the
growth rate of f(n), because one data structure might be better than another for a smaller
input size but not for larger sizes. Now, how do we find f(n)? Suppose
f(n) = 5n^2 + 6n + 12
where n is the number of instructions executed, which depends on the size of the input.
When n = 1:
% of running time due to 5n^2 = (5 / (5 + 6 + 12)) * 100 = 21.74%
% of running time due to 6n = (6 / 23) * 100 = 26.09%
% of running time due to 12 = (12 / 23) * 100 = 52.17%
From the above calculation, it is observed that for n = 1 most of the time is taken by the
constant term 12. But since we have to find the growth rate of f(n), we cannot conclude that
the maximum time is always taken by 12. Let's take different values of n to find the growth
rate of f(n).
n        5n^2         6n       12
1        5            6        12
10       500          60       12
100      50,000       600      12
1000     5,000,000    6,000    12
As we can observe in the above table, with the increase in the value of n the running time
due to 5n^2 grows rapidly, while the relative contributions of 6n and 12 shrink. It is therefore
observed that for larger values of n, the squared term consumes almost 99% of the time. As
the n^2 term is contributing most of the time, we can eliminate the other two terms.
Therefore,
f(n) ≈ 5n^2
Here, we get an approximate time complexity whose result is very close to the actual result.
This approximate measure of time complexity is known as asymptotic complexity. We are
not calculating the exact running time: we eliminate the insignificant terms and consider
only the term which takes most of the time.
Asymptotic notation is used to mathematically describe the running time of any operation
inside an algorithm.
Example: The running time of one operation is computed as x(n) and of another operation
as f(n^2). This means the running time of the first operation will increase linearly with an
increase in n, while the running time of the second operation will increase quadratically.
Similarly, the running times of both operations will be nearly the same if n is small.
Best case: It defines the input for which the algorithm takes the least time.
Worst case: It defines the input for which the algorithm takes the longest time.
Average case: It defines the expected time over all possible inputs of a given size.
Asymptotic Notations
The commonly used asymptotic notations for calculating the running time complexity of an
algorithm are given below.
Big-O Notation (O): It is the formal way to express the upper bound of an algorithm's
running time. It measures the worst-case time complexity, i.e. the longest amount of time
the algorithm can take to complete its operation.
For example:
If f(n) and g(n) are two functions defined for positive integers, then f(n) = O(g(n)) (read
"f(n) is big-oh of g(n)" or "f(n) is on the order of g(n)") if there exist constants c and n0
such that:
f(n) <= c.g(n) for all n >= n0
Let f(n) = 2n+3 and g(n) = n, and take c = 5.
If n = 1:
2*1+3 <= 5*1
5 <= 5
If n = 2:
2*2+3 <= 5*2
7 <= 10
We know that for c = 5, any value of n starting from 1 satisfies the condition 2n+3 <= c.n.
Therefore, for the constants c = 5 and n0 = 1, the condition always holds. Since the
condition is satisfied, f(n) is big-oh of g(n); in other words, f(n) grows linearly. It follows that
c.g(n) is an upper bound of f(n).
The idea of using big-O notation is to give an upper bound on a particular function, which
eventually leads to the worst-case time complexity. It provides an assurance that the
function does not suddenly behave in a quadratic or cubic fashion; it behaves in a linear
manner in the worst case.
Omega Notation (Ω):
● It basically describes the best-case scenario, which is the opposite of big-O notation.
● It is the formal way to represent the lower bound of an algorithm's running time. It
measures the best amount of time an algorithm can possibly take to complete, i.e.
the best-case time complexity.
● It determines the fastest time in which an algorithm can run.
If we want to state that an algorithm takes at least a certain amount of time, without giving
an upper bound, we use big-Ω notation, i.e. the Greek letter "omega". It is used to bound the
growth of the running time from below for large input sizes.
If f(n) and g(n) are two functions defined for positive integers, then f(n) = Ω(g(n)) (read
"f(n) is omega of g(n)") if there exist constants c and n0 such that:
f(n) >= c.g(n) for all n >= n0
To check the above condition, we again replace f(n) by 2n+3 and g(n) by n:
2n+3 >= c*n
Suppose c = 1:
2n+3 >= n (this inequality holds for any value of n starting from 1).
With c = 1, the function g(n) is a lower bound of the function f(n). Therefore, this notation
gives the fastest running time. But we are usually less interested in the fastest running time;
we are more interested in the worst-case scenario, because for larger inputs we want to
know the worst time the algorithm can take, so that we can make decisions in the further
process.
Theta Notation (θ): Let f(n) and g(n) be functions of n, where n is the number of steps
required to execute the program. Then
f(n) = θ(g(n))
if and only if there exist constants c1 and c2 such that
c1.g(n) <= f(n) <= c2.g(n)
i.e. the function is bounded by two limits, an upper and a lower one, and f(n) lies between
them.
For example, let
f(n) = 2n+3
g(n) = n
Since c1.g(n) should be less than or equal to f(n), c1 can be taken as 1, whereas c2.g(n)
should be greater than or equal to f(n), so c2 can be taken as 5. Then c1.g(n) is the lower
limit of f(n) while c2.g(n) is its upper limit:
c1.g(n) <= f(n) <= c2.g(n)
c1.n <= 2n+3 <= c2.n
If n = 2:
1*2 <= 2*2+3 <= 5*2, i.e. 2 <= 7 <= 10
Therefore, for any value of n the condition c1.g(n) <= f(n) <= c2.g(n) is satisfied, and hence
f(n) is big-theta of g(n). This is the average-case notation, which provides a realistic time
complexity.
As we know, big omega is used for the best case, big-oh for the worst case, and big theta
for the average case. Now we will find the average, worst, and best case of the linear
search algorithm.
Suppose we have an array of n numbers, and we want to find a particular element in the
array using linear search. In linear search, every element is compared with the searched
element on each iteration. If the match is found in the first iteration, the best case is Ω(1); if
the element matches only the last, i.e. the nth, element of the array, the worst case is O(n).
The average case is the middle of the best and the worst case, i.e. θ(n/2); since constant
factors can be ignored in time complexity, the average case becomes θ(n).
So, the three different analyses provide proper bounds on the actual function. Here,
bounding means that we have an upper as well as a lower limit which assures that the
algorithm will behave between these limits only, i.e. it will not go beyond them.
Some common growth rates are:
logarithmic - Θ(log n)
quadratic - Θ(n^2)
cubic - Θ(n^3)
polynomial - n^O(1)
exponential - 2^Θ(n)
Introduction to Algorithms
Algorithms are designed to be language-independent, i.e. they are just plain
instructions that can be implemented in any language, and yet the output will
be the same, as expected.
Advantages of Algorithms:
● An algorithm is easy to understand.
● An algorithm is a step-wise representation of a solution to a given
problem.
● In an algorithm the problem is broken down into smaller pieces or steps;
hence, it is easier for the programmer to convert it into an actual
program.
Disadvantages of Algorithms:
● Writing an algorithm takes a long time, so it is time-consuming.
● Branching and looping statements are difficult to show in an algorithm.
The algorithm is then written so that it solves the given problem.
Example: Consider the example to add three numbers and print the sum.
Program (in C):

#include <stdio.h>

int main()
{
    int a, b, c;
    printf("Enter the 1st number: "); scanf("%d", &a);
    printf("Enter the 2nd number: "); scanf("%d", &b);
    printf("Enter the 3rd number: "); scanf("%d", &c);
    printf("Sum of the 3 numbers is: %d\n", a + b + c);
    return 0;
}
Output
Enter the 1st number: 0
Enter the 2nd number: 0
Enter the 3rd number: -1577141152
The sum can be calculated in more than one way; for example, using:
● the + operator
● bit-wise operators
● . . etc
An algorithm can be analyzed in two ways:
1. Priori Analysis: “Priori” means “before”. Hence priori analysis
means checking the algorithm before its implementation, when the
algorithm is written in the form of theoretical steps. The efficiency of
the algorithm is measured by assuming that all other factors, for
example processor speed, are constant and have no effect on the
implementation. This is usually done by the algorithm designer. It is
in this method that the algorithm complexity is determined.
2. Posterior Analysis: “Posterior” means “after”. Hence Posterior
analysis means checking the algorithm after its implementation. In
this, the algorithm is checked by implementing it in any programming
language and executing it. This analysis helps to get the actual and
real analysis report about correctness, space required, time
consumed etc.
The complexity of an algorithm is computed with respect to two factors:
● Time Factor: Time is measured by counting the number of key
operations, such as comparisons in a sorting algorithm.
● Space Factor: Space is measured by counting the maximum
memory space required by the algorithm.
1. Space Complexity: Space complexity of an algorithm refers to the
amount of memory that this algorithm requires to execute and get
the result. This can be for inputs, temporary operations, or outputs.
How to calculate Space Complexity?
The space complexity of an algorithm is calculated by determining
following 2 components:
○ Fixed Part: This refers to the space that is definitely
required by the algorithm. For example, input variables,
output variables, program size, etc.
○ Variable Part: This refers to the space that can differ
based on the implementation of the algorithm, for example
temporary variables, dynamic memory allocation, recursion
stack space, etc. (a sketch follows at the end of this list).
2. Time Complexity: Time complexity of an algorithm refers to the
amount of time that this algorithm requires to execute and get the
result. This can be for normal operations, conditional if-else
statements, loop statements, etc.
How to calculate Time Complexity?
The time complexity of an algorithm is also calculated by determining
the following 2 components:
○ Constant time part: Any instruction that is executed just
once comes in this part. For example, input, output, if-else,
switch, etc.
○ Variable Time Part: Any instruction that is executed more
than once, say n times, comes in this part. For example,
loops, recursion, etc.
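Two small C illustrations of these parts (sketches with arbitrary example values). First, the
constant and variable time parts:

#include <stdio.h>

int main(void) {
    int n = 1000;
    long long sum = 0;               /* executed once: constant time part */

    for (int i = 1; i <= n; i++)     /* executed n times: variable time part */
        sum += i;

    printf("%lld\n", sum);           /* executed once: constant time part */
    return 0;                        /* total running time grows as O(n) */
}

And the fixed and variable space parts, where the recursion stack is the variable part:

#include <stdio.h>

/* the parameter n is part of the fixed space per call; each recursive call
   adds one stack frame, so the recursion stack grows with n: an O(n)
   variable part */
unsigned long long factorial(unsigned int n) {
    if (n <= 1) return 1;            /* base case stops the recursion */
    return n * factorial(n - 1);
}

int main(void) {
    printf("%llu\n", factorial(10)); /* prints 3628800 */
    return 0;
}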