Data Structures and Algorithms
A data structure is a way of collecting and organising data so that we can perform operations on it effectively. Data structures are about arranging data elements in terms of some relationship, for better organisation and storage. For example, suppose we have some data: a player's name "Virat" and age 26. Here "Virat" is of the String data type and 26 is of the integer data type.
We can organize this data as a record like Player record, which will have both player's
name and age in it. Now we can collect and store player's records in a file or database
as a data structure. For example: "Dhoni" 30, "Gambhir" 31, "Sehwag" 33
If you are aware of Object Oriented Programming concepts, then a class also does the same thing: it collects different types of data under one single entity. The only difference is that data structures provide techniques to access and manipulate that data efficiently.
In simple language, data structures are structures programmed to store ordered data, so that various operations can be performed on them easily. A data structure represents how data is to be organised in memory. It should be designed and implemented in such a way that it reduces complexity and increases efficiency. Some commonly used data structures are:
Linked List
Tree
Graph
Stack, Queue etc.
All these data structures allow us to perform different operations on data. We select a data structure based on which type of operation is required. We will look into these data structures in more detail in later lessons.
Data structures can also be classified on the basis of the following characteristics:
Linear: In linear data structures, the data items are arranged in a linear sequence. Example: Array
Non-Homogeneous: In non-homogeneous data structures, the elements may or may not be of the same type. Example: Structures
Static: Static data structures are those whose size and associated memory locations are fixed at compile time. Example: Array
Dynamic: Dynamic data structures are those which expand or shrink as the program needs during execution, and whose associated memory locations can change. Example: Linked list created using pointers
What is an Algorithm?
An algorithm is a finite set of instructions or logic, written in order, to accomplish a certain predefined task. An algorithm is not the complete code or program; it is just the core logic (solution) of a problem, which can be expressed either as an informal high-level description, as pseudocode, or using a flowchart.
Every algorithm must satisfy a few basic properties: it takes zero or more inputs, produces at least one output, each step is unambiguous (definiteness), it terminates after a finite number of steps (finiteness), and every step is basic enough to be carried out (effectiveness).
An algorithm is said to be efficient and fast if it takes less time to execute and consumes less memory space. The performance of an algorithm is measured on the basis of the following properties:
1. Time Complexity
2. Space Complexity
Space Complexity
It is the amount of memory space required by the algorithm during the course of its execution. Space complexity must be taken seriously for multi-user systems and in situations where limited memory is available.
An algorithm generally requires space for the following components:
Instruction Space: the space required to store the executable version of the program. This space is fixed for a given program, but varies from program to program with the number of lines of code.
Data Space: the space required to store the values of all constants and variables (including temporary variables).
Environment Space: the space required to store the environment information needed to resume a suspended function.
To learn about Space Complexity in detail, jump to the Space Complexity tutorial.
Time Complexity
Time complexity is a way to represent the amount of time required by the program to run to completion. It's generally good practice to keep the time required to a minimum, so that our algorithm completes its execution in the least time possible. We will study time complexity in detail in later sections.
NOTE: Before going deep into data structures, you should have a good knowledge of programming in C, C++, Java, Python, or a similar language.
Asymptotic Notations
When it comes to analysing the complexity of any algorithm in terms of time and space,
we can never provide an exact number to define the time required and the space
required by the algorithm, instead we express it using some standard notations, also
known as Asymptotic Notations.
When we analyse any algorithm, we generally get a formula representing the amount of time required for execution: the time the computer takes to run the lines of code of the algorithm, the number of memory accesses, the number of comparisons, the temporary variables occupying memory space, and so on. This formula often contains unimportant details that don't really tell us anything about the running time.
Let us take an example: suppose some algorithm has a time complexity of T(n) = n² + 3n + 4, which is a quadratic equation. For large values of n, the 3n + 4 part becomes insignificant compared to the n² part.
For n = 1000, n² is 1,000,000 while 3n + 4 is only 3004.
Also, when we compare the execution times of two algorithms, the constant coefficients of the higher-order terms are neglected.
An algorithm that takes 200n² time will be faster than some other algorithm that takes n³ time, for any value of n larger than 200. Since we're only interested in the asymptotic behaviour of the growth of the function, the constant factor can be ignored too.
Space complexity is the amount of memory used by the algorithm (including the input
values to the algorithm) to execute and produce the result.
Sometimes auxiliary space is confused with space complexity, but auxiliary space is only the extra or temporary space used by the algorithm during its execution.
Space Complexity = Auxiliary Space + Input Space
An algorithm needs space for:
1. Instruction Space: the amount of memory used to save the compiled version of the instructions.
2. Environmental Stack: the memory used to store information about suspended (partially executed) functions, for example during recursive calls.
3. Data Space: the memory used to store the values of variables and constants; each data type takes a fixed number of bytes.
Now let's learn how to compute space complexity by taking a few examples:
int sum(int a, int b, int c)
{
    int z = a + b + c;
    return z;
}
In the above function, the variables a, b, c and z are all of integer type, so each takes up 4 bytes; the total memory requirement is (4(4) + 4) = 20 bytes, where the additional 4 bytes is for the return value. Because this space requirement is fixed regardless of the input, it is called Constant Space Complexity.
Let's take another example, this time a bit more complex:
// n is the length of array a[]
int sum(int a[], int n)
{
    int x = 0;                 // 4 bytes for x
    for(int i = 0; i < n; i++) // 4 bytes for i
    {
        x = x + a[i];
    }
    return x;
}
In the above code, 4*n bytes of space are required for the elements of the array a[], plus 4 bytes each for x, n, i and the return value.
Hence the total memory requirement is (4n + 12), which increases linearly with the input value n, so this is called Linear Space Complexity.
Similarly, we can have quadratic and other complex space complexity as well, as the
complexity of an algorithm increases.
But we should always focus on writing algorithm code in such a way that we keep the
space complexity minimum.
A single problem can have many solutions. For example, to compute the sum of the first n natural numbers, one solution uses a loop which executes n times, while a second solution uses the formula n*(n+1)/2 and the mathematical operator * to return the result in one line. So which one is the better approach? Of course, the second one.
At the other extreme, a single statement has constant time complexity: its running time does not change in relation to n.
Introduction to Sorting
Sorting is nothing but arranging data in ascending or descending order. The term sorting came into the picture as humans realised the importance of searching quickly.
There are so many things in real life that we need to search for: a particular record in a database, a roll number in a merit list, a particular telephone number in a telephone directory, a particular page in a book, and so on. All this would be a mess if the data were kept unordered and unsorted, but fortunately the concept of sorting makes it easy to arrange data in order, and hence much easier to search.
Sorting arranges data in a sequence which makes searching easier.
Sorting Efficiency
If you ask me how I would arrange a shuffled deck of cards in order, I would say I would check every card, building up the sorted deck as I go. It could take me hours, but that's how I would do it.
Well, thank goodness computers don't work like this.
Since the beginning of the programming age, computer scientists have worked on the problem of sorting, coming up with many different algorithms. The two main criteria for judging one algorithm against another are the time and the memory it takes. Some of the most common sorting algorithms are:
1. Bubble Sort
2. Insertion Sort
3. Selection Sort
4. Quick Sort
5. Merge Sort
6. Heap Sort
Although these sorting techniques are easy to understand, we still suggest you first learn about space complexity, time complexity and the searching algorithms, to warm up your brain for the sorting algorithms.
1. Starting with the first element (index = 0), compare the current element with the next element of the array.
2. If the current element is greater than the next element, swap them.
3. If the current element is less than the next element, move to the next element.
4. Repeat steps 1-3 until the array is sorted.
#include <stdio.h>

// basic bubble sort: repeatedly swap adjacent
// elements that are out of order
void bubbleSort(int arr[], int n)
{
    for(int step = 0; step < n - 1; step++)
        for(int i = 0; i < n - step - 1; i++)
            if(arr[i] > arr[i + 1])
            {
                int temp = arr[i];
                arr[i] = arr[i + 1];
                arr[i + 1] = temp;
            }
}

int main()
{
    int arr[100], i, n;
    // ask user for number of elements to be sorted
    printf("Enter the number of elements to be sorted: ");
    scanf("%d", &n);
    // input elements of the array
    for(i = 0; i < n; i++)
    {
        printf("Enter element no. %d: ", i+1);
        scanf("%d", &arr[i]);
    }
    // call the function bubbleSort
    bubbleSort(arr, n);
    return 0;
}
Although the above logic will sort an unsorted array, the algorithm is not efficient, because the outer for loop keeps executing for all n-1 passes even if the array becomes sorted after the first couple of passes.
So, we can clearly optimise our algorithm.
#include <stdio.h>

// optimised bubble sort: stop early if a full pass
// makes no swaps, i.e. the array is already sorted
void bubbleSort(int arr[], int n)
{
    for(int step = 0; step < n - 1; step++)
    {
        int flag = 0;
        for(int j = 0; j < n - step - 1; j++)
            if(arr[j] > arr[j + 1])
            {
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                flag = 1;
            }
        if(flag == 0)   // no swap in this pass: array is sorted
            break;
    }
}

int main()
{
    int arr[100], i, n;
    // ask user for number of elements to be sorted
    printf("Enter the number of elements to be sorted: ");
    scanf("%d", &n);
    // input elements of the array
    for(i = 0; i < n; i++)
    {
        printf("Enter element no. %d: ", i+1);
        scanf("%d", &arr[i]);
    }
    // call the function bubbleSort
    bubbleSort(arr, n);
    return 0;
}
In the above code, in the function bubbleSort, if during a single complete cycle of the inner for loop (the j iterations) no swapping takes place, then flag remains 0 and we break out of the outer for loop, because the array has already been sorted.
Complexity Analysis of Bubble Sort
In bubble sort, n-1 comparisons are done in the 1st pass, n-2 in the 2nd pass, n-3 in the 3rd pass, and so on. So the total number of comparisons is
Sum = (n-1) + (n-2) + ... + 1 = n(n-1)/2
i.e. O(n²)
1. Starting from the first position, we search for the smallest element in the array and swap it with the element in the first position.
2. We then move on to the second position and look for the smallest element in the subarray starting from index 1 through the last index.
3. We swap the element at the second position of the original array (the first position of the subarray) with the second-smallest element.
4. This is repeated until the array is completely sorted.
For example, in the array {46, 52, 21, 22, 11}, the smallest element in the first pass is 11, so it is swapped into the first position. Then, leaving the first element aside, the next smallest element, 21, is searched for among the remaining elements and placed at the second position. Then, leaving 11 and 21 aside (because they are in their correct positions), we search for the next smallest element among the rest, put it at the third position, and keep doing this until the array is sorted.
int main()
{
    int arr[] = {46, 52, 21, 22, 11};
    int n = sizeof(arr)/sizeof(arr[0]);
    selectionSort(arr, n);
    printf("Sorted array: \n");
    printArray(arr, n);
    return 0;
}
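The selectionSort and printArray functions called in the program above are not shown in the original; a minimal sketch of what they might look like in C:

```c
#include <stdio.h>

// repeatedly find the smallest remaining element
// and swap it into the next position from the left
void selectionSort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[min])
                min = j;
        int temp = arr[i];
        arr[i] = arr[min];
        arr[min] = temp;
    }
}

// print all elements of the array on one line
void printArray(int arr[], int n)
{
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
}
```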
Note: Selection sort is an unstable sort, i.e. it might change the relative order of two equal elements in the list while sorting. It can, however, be made stable, for example when implemented on a linked list.
1. It is efficient for smaller data sets, but very inefficient for larger lists.
2. Insertion sort is adaptive: it reduces its total number of steps when given a partially sorted array as input, which makes it more efficient in that case.
3. It is better than the selection sort and bubble sort algorithms in practice.
4. Its space complexity is low. Like bubble sort, insertion sort requires only a single additional memory location.
5. It is a stable sorting technique, as it does not change the relative order of equal elements.
1. We start by making the second element of the given array, i.e. the element at index 1, the key. The key element here is like a new card that we need to add to an existing sorted hand of cards.
2. We compare the key element with the element(s) before it, in this case the element at index 0:
o If the key element is less than the first element, we insert the key element before the first element.
o If the key element is greater than the first element, we leave it after the first element.
3. Then, we make the third element of the array the key, compare it with the elements to its left, and insert it at the right position.
4. And we go on repeating this until the array is sorted.
#include <stdio.h>

// main function
int main()
{
    int array[6] = {5, 1, 6, 2, 4, 3};
    // calling insertion sort function to sort the array
    insertionSort(array, 6);
    return 0;
}
Sorted Array: 1 2 3 4 5 6
Now let's try to understand the above simple insertion sort algorithm.
We take an array with 6 integers, and a variable key, into which we put each element of the array in turn during each pass, starting from the second element, a[1].
Then, using a while loop, we iterate leftwards until j becomes equal to zero or we find an element that is not greater than the key, shifting the larger elements one place to the right as we go, and then we insert the key at that position. The current key is now at the right position.
We then make the next element the key and repeat the same process.
In the above array, first we pick 1 as the key and compare it with 5 (the element before it); 1 is smaller than 5, so we insert 1 before 5. Then we pick 6 as the key and compare it with 5; since 6 is greater, there is no shifting this time. Then 2 becomes the key and is compared with 6, 5 and 1, and 2 is inserted after 1. And this goes on until the complete array gets sorted.
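The key-and-shift process walked through above can be sketched as the following insertionSort function (a minimal version of ours, since the original function body is not shown):

```c
// insert each element into its correct position
// within the already-sorted prefix a[0..i-1]
void insertionSort(int a[], int n)
{
    for (int i = 1; i < n; i++)
    {
        int key = a[i];
        int j = i - 1;
        // shift elements greater than key one place right
        while (j >= 0 && a[j] > key)
        {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}
```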
1. We take a variable p and store the starting index of our array in this. And we take
another variable r and store the last index of array in it.
2. Then we find the middle of the array using the formula (p + r)/2 and mark the
middle index as q, and break the array into two subarrays, from p to q and from q +
1 to r index.
3. Then we divide these 2 subarrays again, just like we divided our main array and this
continues.
4. Once we have divided the main array into subarrays with single elements, then we
start merging the subarrays.
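The divide-and-merge steps above might be implemented as the following pair of helper functions (a minimal sketch of ours; merge uses a fixed-size temporary buffer for simplicity):

```c
// merge the sorted halves a[p..q] and a[q+1..r]
void merge(int a[], int p, int q, int r)
{
    int b[100];  // temporary buffer (assumes at most 100 elements)
    int i = p, j = q + 1, k = p;
    // take the smaller front element of either half each time
    while (i <= q && j <= r)
        b[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    // copy any elements remaining in either half
    while (i <= q)
        b[k++] = a[i++];
    while (j <= r)
        b[k++] = a[j++];
    // copy the merged result back into a[]
    for (i = p; i <= r; i++)
        a[i] = b[i];
}

// recursively split a[p..r] in half, sort each
// half, then merge the two sorted halves
void mergeSort(int a[], int p, int r)
{
    if (p < r)
    {
        int q = (p + r) / 2;
        mergeSort(a, p, q);
        mergeSort(a, q + 1, r);
        merge(a, p, q, r);
    }
}
```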
#include <stdio.h>

/*
fragment from inside the merge step: after comparing
elements of the two halves, copy any elements remaining
in the left half (a[i..q]) or the right half (a[j..r])
into the buffer b[]
*/
while(i <= q)
{
    b[k++] = a[i++];
}
while(j <= r)
{
    b[k++] = a[j++];
}

int main()
{
    int arr[] = {32, 45, 67, 2, 7};
    int len = sizeof(arr)/sizeof(arr[0]);
    // sort the whole array, indices 0 to len-1
    mergeSort(arr, 0, len - 1);
    return 0;
}
Given array:
32 45 67 2 7
Sorted array:
2 7 32 45 67
The time complexity of merge sort is O(n log n) in all 3 cases (worst, average and best), as merge sort always divides the array into two halves and takes linear time to merge the two halves.
However, it requires additional space equal in size to the unsorted array, so it is not recommended where memory is scarce.
It is considered the best sorting technique for sorting linked lists.
Pivot element can be any element from the array, it can be the first element, the last
element or any random element. In this tutorial, we will take the rightmost element or the
last element as pivot.
For example: In the array {52, 37, 63, 14, 17, 8, 6, 25}, we take 25 as pivot. So
after the first pass, the list will be changed like this.
{6 8 17 14 25 63 37 52}
Hence after the first pass, the pivot is set at its final position, with all the elements smaller than it on its left and all the elements larger than it on its right. Now {6 8 17 14} and {63 37 52} are considered as two separate subarrays, the same recursive logic is applied on them, and we keep doing this until the complete array is sorted.
1. After selecting an element as pivot, which is the last index of the array in our case,
we divide the array for the first time.
2. In quick sort, we call this partitioning. It is not a simple breaking down of the array into 2 subarrays: in partitioning, the array elements are positioned so that all the elements smaller than the pivot end up on its left side, and all the elements greater than the pivot on its right side.
3. And the pivot element will be at its final sorted position.
4. The elements to the left and right, may not be sorted.
5. Then we pick subarrays, elements on the left of pivot and elements on the right
of pivot, and we perform partitioning on them by choosing a pivot in the subarrays.
Let's consider an array with values {9, 7, 5, 11, 12, 2, 14, 3, 10, 6}, and trace how quick sort will sort it.
In step 1, we select the last element as the pivot, which is 6 in this case, and call for partitioning, re-arranging the array in such a way that 6 is placed in its final position, with all the elements less than it to its left and all the elements greater than it to its right.
Then we pick the subarray on the left and the subarray on the right and select a pivot for each; for example, 3 as the pivot for the left subarray and 11 as the pivot for the right subarray.
And we again call for partitioning.
Implementing Quick Sort Algorithm
Below we have a simple C program implementing the Quick sort algorithm:
// simple C program for Quick Sort
#include <stdio.h>

/*
a[] is the array, p is the starting index (0),
and r is the last index of the array
*/

// place the last element (the pivot) at its final
// sorted position and return that position
int partition(int a[], int p, int r)
{
    int pivot = a[r];
    int i = p - 1;
    for(int j = p; j < r; j++)
    {
        if(a[j] < pivot)
        {
            i++;
            int temp = a[i]; a[i] = a[j]; a[j] = temp;
        }
    }
    int temp = a[i + 1]; a[i + 1] = a[r]; a[r] = temp;
    return i + 1;
}

void quicksort(int a[], int p, int r)
{
    if(p < r)
    {
        int q = partition(a, p, r);
        quicksort(a, p, q - 1);
        quicksort(a, q + 1, r);
    }
}

int main()
{
    int arr[] = {9, 7, 5, 11, 12, 2, 14, 3, 10, 6};
    int n = sizeof(arr)/sizeof(arr[0]);
    quicksort(arr, 0, n - 1);
    for(int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    return 0;
}
Quick sort requires very little extra space: only O(log n) additional space for the recursion stack.
Quick sort is not a stable sorting technique, so it might change the relative order of two equal elements in the list while sorting.
What is a Heap?
A heap is a special tree-based data structure that satisfies the following heap properties:
1. Shape Property: a heap is always a complete binary tree, which means all levels of the tree are fully filled, except possibly the last, which is filled from left to right.
2. Heap Property: every node is either greater than or equal to, or less than or equal to, each of its children. If parent nodes are greater than their child nodes, the heap is called a Max-Heap, and if parent nodes are smaller than their child nodes, it is called a Min-Heap.
How Heap Sort Works?
The heap sort algorithm is divided into two basic parts. On receiving an unsorted list, the first step is to build a heap data structure (Max-Heap or Min-Heap). Once the heap is built, the first element of the heap is either the largest or the smallest (depending on Max-Heap or Min-Heap), so we move the first element of the heap into our sorted output. Then we re-heapify the remaining elements, again pick the first element of the heap, and move it into the output. We keep doing this repeatedly until we have the complete sorted list.
In the below code, heapSort() is called first, and it calls heapify() to build the heap.
#include <stdio.h>

int main()
{
    int arr[] = {121, 10, 130, 57, 36, 17};
    int n = sizeof(arr)/sizeof(arr[0]);
    heapSort(arr, n);
    for(int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    return 0;
}
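The heapSort and heapify routines invoked above are not defined in the original; a minimal max-heap version might look like this:

```c
// sift the element at index i down until the subtree rooted
// there satisfies the max-heap property (n = current heap size)
void heapify(int arr[], int n, int i)
{
    int largest = i;
    int left = 2 * i + 1;
    int right = 2 * i + 2;
    if (left < n && arr[left] > arr[largest])
        largest = left;
    if (right < n && arr[right] > arr[largest])
        largest = right;
    if (largest != i)
    {
        int temp = arr[i];
        arr[i] = arr[largest];
        arr[largest] = temp;
        heapify(arr, n, largest);
    }
}

void heapSort(int arr[], int n)
{
    // build the max-heap
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);
    // repeatedly move the largest element to the end
    // and re-heapify the shrunken heap
    for (int i = n - 1; i > 0; i--)
    {
        int temp = arr[0];
        arr[0] = arr[i];
        arr[i] = temp;
        heapify(arr, i, 0);
    }
}
```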
Heap sort is not a stable sort, and it requires only constant additional space for sorting a list. It is very fast and is widely used for sorting.
Introduction to Searching
Algorithms
Not a single day passes when we do not have to search for something in our day-to-day life: car keys, books, pens, the mobile charger and what not. The same is true for a computer: there is so much data stored in it that whenever a user asks for some data, the computer has to search its memory to find the data and make it available. The computer has its own techniques to search through its memory fast, which you can learn more about in our Operating System tutorial series.
What if you have to write a program to search a given number in an array? How will you
do it?
Well, to search an element in a given array, there are two popular algorithms available:
1. Linear Search
2. Binary Search
Linear Search
Linear search is a very basic and simple search algorithm. In linear search, we search for an element or value in a given array by traversing the array from the start until the desired element or value is found.
It compares the element to be searched with the elements of the array one by one; when the element is matched successfully, it returns the index of the element in the array, else it returns -1.
Linear search is applied on unsorted or unordered lists, or when there are only a few elements in a list.
Binary Search
Binary Search is used with sorted array or list. In binary search, we follow the following
steps:
1. We start by comparing the element to be searched with the element in the middle of
the list/array.
2. If we get a match, we return the index of the middle element.
3. If we do not get a match, we check whether the element to be searched is less than or greater than the middle element in value.
4. If the element/number to be searched is greater in value than the middle number, then we pick the elements on the right side of the middle element (as the list/array is sorted, the right side holds all the numbers greater than the middle number), and start again from step 1.
5. If the element/number to be searched is lesser in value than the middle number, then we pick the elements on the left side of the middle element, and start again from step 1.
Binary search is useful when there is a large number of elements in an array and they are sorted.
A necessary condition for binary search to work is that the list/array should be sorted.
To search for a target value, linear search goes step by step, in sequential order, starting from the first element of the given array.
/*
below we have implemented a simple function
for linear search in C
*/
int linearSearch(int values[], int len, int target)
{
    for(int i = 0; i < len; i++)
    {
        if(values[i] == target)
            return i;   // found: return the index
    }
    return -1;          // not found
}
For example, in an array where 77 is stored at index 4:
target = 77
Output: 4
target = 200 (not present in the array)
Output: -1
Final Thoughts
We know you like linear search because it is so simple to implement, but it is rarely used in practice, because binary search is a lot faster. So let's head to the next tutorial, where we will learn more about binary search.
/*
function for carrying out binary search on given array
- values[] => given sorted array
- len => length of the array
- target => value to be searched
*/
#include <stdio.h>

int binarySearch(int values[], int len, int target)
{
    int max = (len - 1);
    int min = 0;
    int guess;      // index currently being checked
    int step = 0;   // number of guesses made so far
    while(min <= max)
    {
        guess = (min + max) / 2;
        step++;
        if(values[guess] == target)
        {
            printf("Number of steps required for search: %d \n", step);
            return guess;
        }
        else if(values[guess] > target)
        {
            // target would be in the left half
            max = (guess - 1);
        }
        else
        {
            // target would be in the right half
            min = (guess + 1);
        }
    }
    // target is not present in the array
    return -1;
}
int main(void)
{
int values[] = {13, 21, 54, 81, 90};
int n = sizeof(values) / sizeof(values[0]);
int target = 81;
int result = binarySearch(values, n, target);
if(result == -1)
{
printf("Element is not present in the given array.");
}
else
{
printf("Element is present at index: %d", result);
}
return 0;
}
We hope the above code is clear, if you have any confusion, post your question in our Q
& A Forum.
Now let's try to understand, why is the time complexity of binary search O(log n) and
how can we calculate the number of steps required to search an element from a given
array using binary search without doing any calculations. It's super easy! Are you ready?
Time Complexity of Binary Search: O(log n)
When we say the time complexity is log n, we actually mean log2 n. Although the base of the log doesn't matter in asymptotic notation, to understand this better we generally consider a base of 2.
Let's first understand what log2(n) means.
Expression: log2(n)
- - - - - - - - - - - - - - -
For n = 2:
log2(2^1) = 1
Output = 1
- - - - - - - - - - - - - - -
For n = 4:
log2(2^2) = 2
Output = 2
- - - - - - - - - - - - - - -
For n = 8:
log2(2^3) = 3
Output = 3
- - - - - - - - - - - - - - -
For n = 256:
log2(2^8) = 8
Output = 8
- - - - - - - - - - - - - - -
For n = 2048:
log2(2^11) = 11
Output = 11
Now that we know how log2(n) works with different values of n, it will be easier for us to
relate it with the time complexity of the binary search algorithm and also to understand
how we can find out the number of steps required to search any number using binary
search for any value of n.
Counting the Number of Steps
As we have already seen, with every incorrect guess binary search cuts the list of elements in half. So if we start with 32 elements, after the first unsuccessful guess we are left with 16.
Now consider an array with 8 elements: after the first unsuccessful guess, binary search cuts the list down to 4 elements, then 2 elements after the second unsuccessful guess, until finally only 1 element is left, which either is the target or is not; checking that takes one more step. So in all, binary search needs at most 4 guesses to find the target in an array of 8 elements.
If the size of the list had been 16, then after the first unsuccessful guess we would be left with 8 elements, and after that, as we know, we need at most 4 guesses; adding the 1 guess needed to cut the list from 16 to 8 brings us to 5 guesses.
So we can say that each time the number of elements doubles, the number of guesses required to find the target increases by 1.
Seeing the pattern, right?
Generalising this, for an array with n elements, the maximum number of guesses binary search needs is the number of times we can repeatedly halve n until we reach 1, plus one.
And in mathematics, the function log2 n means exactly the same thing. We have already seen how the log function works above; did you notice something there?
For n = 8, the output of log2 n is 3, which means the array can be halved at most 3 times, hence the maximum number of steps to find the target value is (3 + 1) = 4.
Question for you: What will be the maximum number of guesses required by Binary
Search, to search a number in a list of 2,097,152 elements?
Applications of Stack
The simplest application of a stack is to reverse a word: push the word onto the stack letter by letter, then pop the letters back off.
Other uses include:
1. Parsing
2. Expression Conversion(Infix to Postfix, Postfix to Prefix etc)
Implementation of Stack Data Structure
A stack can be easily implemented using an array or a linked list. Arrays are quick, but limited in size; a linked list requires overhead to allocate, link, unlink and deallocate, but is not limited in size. Here we will implement the stack using an array.
Below we have a simple C++ program implementing stack data structure while following
the object oriented programming concepts.
/* Below program is written in C++ language */
#include <iostream>

class Stack
{
    int top;
    public:
    int a[10];  // Maximum size of Stack
    Stack()
    {
        top = -1;
    }
    // add an element on top of the stack
    void push(int x)
    {
        if(top >= 9)
            std::cout << "Stack Overflow \n";
        else
            a[++top] = x;
    }
    // remove and return the top element
    int pop()
    {
        if(top < 0)
        {
            std::cout << "Stack Underflow \n";
            return 0;
        }
        return a[top--];
    }
};

// main function
int main() {
    Stack s1;
    s1.push(10);
    s1.push(100);
    /*
    perform whatever operation you want on the stack
    */
    return 0;
}
The time complexities of the push() and pop() functions are O(1), because we always insert or remove the data at the top of the stack, which is a one-step process.
Applications of Queue
A queue, as the name suggests, is used whenever we need to manage a group of objects such that the first one coming in also gets out first, while the others wait for their turn, as in the following scenarios:
1. Serving requests on a single shared resource, like a printer, or CPU task scheduling.
2. In real life, call centre phone systems use queues to hold callers in order until a service representative is free.
3. Handling of interrupts in real-time systems: the interrupts are handled in the same order as they arrive, i.e. first come, first served.
#include<iostream>
#define SIZE 10

using namespace std;

class Queue
{
    int a[SIZE];
    int rear;   //same as tail
    int front;  //same as head
public:
    Queue()
    {
        rear = front = -1;
    }
    // add an element at the rear of the queue
    void enqueue(int x)
    {
        if(rear == SIZE - 1)
            cout << "Queue is full" << endl;
        else
            a[++rear] = x;
    }
    // print the elements currently in the queue
    void display()
    {
        for(int i = front + 1; i <= rear; i++)
            cout << a[i] << " ";
        cout << endl;
    }
};

int main()
{
    Queue q;
    q.enqueue(10);
    q.display();
    return 0;
}
To implement approach [A], you simply need to change the dequeue method to include
a for loop that shifts all the remaining elements forward by one position.
When we dequeue any element to remove it from the queue, we are actually moving
the front of the queue forward, thereby reducing the overall size of the queue. And we
cannot insert new elements, because the rear pointer is still at the end of the queue.
The only way is to reset the linear queue, for a fresh start.
Circular Queue is also a linear data structure, which follows the principle of FIFO(First
In First Out), but instead of ending the queue at the last position, it again starts from the
first position after the last, hence making the queue behave like a circular data structure.
3. New data is always added to the location pointed by the tail pointer, and once the
data is added, tail pointer is incremented to point to the next available location.
4. In a circular queue, data is not actually removed from the queue. Only
the head pointer is incremented by one position when dequeue is executed. As the
queue data is only the data between head and tail, hence the data left outside is not
a part of the queue anymore, hence removed.
5. The head and the tail pointer will get reinitialised to 0 every time they reach the end
of the queue.
6. Also, the head and the tail pointers can cross each other. In other
words, head pointer can be greater than the tail. Sounds odd? This will happen
when we dequeue the queue a couple of times and the tail pointer gets reinitialised
upon reaching the end of the queue.
Going Round and Round
Another very important point is keeping the value of the tail and the head pointer within
the maximum queue size.
In the diagrams above the queue has a size of 8, hence, the value
of tail and head pointers will always be between 0 and 7.
This can be controlled either by checking every time whether tail or head has reached
maxSize and then resetting it to 0 or, we have a better way, which is, for a
value x, if we divide it by 8, the remainder will never be greater than 7, it will always be
between 0 and 7, which is exactly what we want.
So the formula to increment the head and tail pointers to make them go round and
round, over and over again, will be: head = (head+1) % maxSize or tail = (tail+1) % maxSize
#include<iostream>
#define SIZE 10

using namespace std;

class CircularQueue
{
    int a[SIZE];
    int rear;   //same as tail
    int front;  //same as head
public:
    CircularQueue()
    {
        rear = front = -1;
    }
    bool isEmpty()
    {
        return (front == -1 && rear == -1);
    }
    int size()
    {
        if(isEmpty())
            return 0;
        return (rear - front + SIZE) % SIZE + 1;
    }
    void enqueue(int x)
    {
        if((rear + 1) % SIZE == front)
        {
            cout << "Queue is full" << endl;
            return;
        }
        if(isEmpty())
            front = 0;
        rear = (rear + 1) % SIZE;
        a[rear] = x;
        cout << "Inserted " << x << endl;
    }
    int dequeue()
    {
        int y = 0;
        if(isEmpty())
        {
            cout << "Queue is empty" << endl;
        }
        else
        {
            y = a[front];
            if(front == rear)
            {
                // only one element in queue, reset queue after removal
                front = -1;
                rear = -1;
            }
            else
            {
                front = (front+1) % SIZE;
            }
        }
        return y;
    }
    void display()
    {
        cout << "Front -> " << front << endl;
        cout << "Rear -> " << rear << endl;
    }
};

int main()
{
    CircularQueue cq;
    cq.enqueue(10);
    cq.enqueue(100);
    cq.enqueue(1000);
    cout << "Size of queue: " << cq.size() << endl;
    cout << "Removed element: " << cq.dequeue() << endl;
    cq.display();
    return 0;
}
Output of the above program:
Inserted 10
Inserted 100
Inserted 1000
Size of queue: 3
Removed element: 10
Front -> 1
Rear -> 2
To implement a queue using stacks, we simply define a class Queue, with two variables S1 and S2 of
type Stack.
We know that, Stack is a data structure, in which data can be added
using push() method and data can be removed using pop() method.
You can find the code for Stack class in the Stack data structure tutorial.
To implement a queue, we can follow two approaches:
1. If the queue is empty (meaning S1 is empty), directly push the first element onto the
stack S1.
2. If the queue is not empty, move all the elements present in the first stack(S1) to the
second stack(S2), one by one. Then add the new element to the first stack, then
move back all the elements from the second stack back to the first stack.
3. Doing so will always maintain the right order of the elements in the stack, with the 1st
data element always staying at the top, the 2nd data element right below it, and the
new data element added at the bottom.
This makes removing an element from the queue very simple: all we have to do is call
the pop() method for stack S1.
Now that we know the implementation of enqueue() and dequeue() operations, let's write
a complete program to implement a queue using stacks.
Implementation in C++ (OOP)
We will not follow the traditional approach of using pointers, instead we will define proper
classes, just like we did in the Stack tutorial.
/* Below program is written in C++ language */
# include<iostream>

using namespace std;

// minimal Stack class; the full version is in the Stack tutorial
class Stack
{
    int a[10];
    int top;
public:
    Stack() { top = -1; }
    void push(int x) { a[++top] = x; }
    int pop() { return a[top--]; }
    bool isEmpty() { return top == -1; }
};

class Queue
{
    Stack S1, S2;
public:
    void enqueue(int x);
    int dequeue();
};

// enqueue function
void Queue :: enqueue(int x)
{
    S1.push(x);
    cout << "Element Inserted into Queue\n";
}

// dequeue function
int Queue :: dequeue()
{
    int x, y;
    while(!S1.isEmpty())
    {
        // take an element out of first stack
        x = S1.pop();
        // insert it into the second stack
        S2.push(x);
    }
    // removing the element
    y = S2.pop();
    // move the remaining elements back to the first stack
    while(!S2.isEmpty())
    {
        S1.push(S2.pop());
    }
    return y;
}

// main function
int main()
{
    Queue q;
    q.enqueue(10);
    q.enqueue(100);
    q.enqueue(1000);
    cout << "Removing element from queue: " << q.dequeue();
    return 0;
}
And that's it, we did it.
Let's learn more about them and how they are different from each other.
We will learn about all the 3 types of linked list, one by one, in the next tutorials.
Array vs Linked List:

1. Array supports Random Access, which means elements can be accessed directly
using their index, like arr[0] for the 1st element, arr[6] for the 7th element etc.
Hence, accessing elements in an array is fast, with a constant time complexity of O(1).
Linked List supports Sequential Access, which means that to access any element/node
in a linked list, we have to sequentially traverse the list up to that element.
To access the nth element of a linked list, the time complexity is O(n).

2. In an array, elements are stored in contiguous memory locations, in a consecutive
manner in memory.
In a linked list, new elements can be stored anywhere in memory. The address of the
memory location allocated to the new element is stored in the previous node of the
linked list, hence forming a link between the two nodes/elements.

3. In an array, Insertion and Deletion operations take more time, as the memory
locations are consecutive and fixed.
In a linked list, a new element is stored at the first free and available memory
location, with only a single overhead step of storing the address of that memory
location in the previous node. Hence, Insertion and Deletion operations are fast in
a linked list.

4. For an array, memory is allocated as soon as the array is declared, at compile
time. This is also known as Static Memory Allocation.
For a linked list, memory is allocated at runtime, as and when a new node is added.
This is also known as Dynamic Memory Allocation.

5. In an array, each element is independent and can be accessed using its index value.
In a linked list, each node/element points to the next, previous, or maybe both nodes.

6. An array gets memory allocated in the Stack section.
A linked list gets memory allocated in the Heap section.
Hence while writing the code for Linked List we will include methods to insert or add new
data elements to the linked list, both, at the beginning of the list and at the end of the list.
We will also be adding some other useful methods like:
Before learning how we insert data and create a linked list, we must understand the
components forming a linked list, and the main component is the Node.
What is a Node?
A Node in a linked list holds the data value and the pointer which points to the location of
the next node in the linked list.
In the picture above we have a linked list, containing 4 nodes, each node has some
data(A, B, C and D) and a pointer which stores the location of the next node.
You must be wondering why we need to store the location of the next node. Well,
because the memory locations allocated to these nodes are not contiguous hence each
node should know where the next node is stored.
As the node is a combination of multiple pieces of information, we will define a class
for Node, with one variable to store the data and another to store the pointer. (In
the C language, we would create such a structure using the struct keyword; in C++,
we can use a class.)
class Node
{
public:
    // our linked list will only hold int data
    int data;
    //pointer to the next node
    Node* next;
    // default constructor
    Node()
    {
        data = 0;
        next = NULL;
    }
    // parameterised constructor
    Node(int x)
    {
        data = x;
        next = NULL;
    }
};
We can also make the Node class properties data and next as private, in that case we
will need to add the getter and setter methods to access them(don't know what getter
and setter methods are: Inline Functions in C++ ). You can add the getter and setter
functions to the Node class like this:
class Node
{
    // our linked list will only hold int data
    int data;
    //pointer to the next node
    Node* next;
public:
    int getData() { return data; }
    void setData(int x) { data = x; }
    Node* getNext() { return next; }
    void setNext(Node* n) { next = n; }
};
class LinkedList
{
public:
    Node *head;
    //declaring the functions
    LinkedList()
    {
        head = NULL;
    }
};
1. If the Linked List is empty then we simply, add the new Node as the Head of the
Linked List.
2. If the Linked List is not empty, then we find the last node and make its next point
to the new Node, hence making the new node the last Node.
If the Node to be deleted is the first node, then simply set the Next pointer of the
Head to point to the next element from the Node to be deleted.
If the Node is in the middle somewhere, then find the Node before it, and make the
Node before it point to the Node next to it.
Now you know a lot about how to handle a List: how to traverse it and how to search
for an element. You can try writing new methods around the List yourself.
If you are still figuring out how to call all these methods, then below is how
your main() method will look. As we have followed OOP standards, we will create
an object of the LinkedList class to initialize our List, and then we will create
objects of the Node class whenever we want to add a new node to the List.
int main() {
LinkedList L;
//We will ask value from user, read the value and add the value to our Node
int x;
cout << "Please enter an integer value : ";
cin >> x;
Node *n1;
//Creating a new node with data as x
n1 = new Node(x);
//Adding the node to the list
L.addAtFront(n1);
}
Similarly, you can call any of the functions of the LinkedList class, and add as many
Nodes as you want to your List.
The real life application where the circular linked list is used is our Personal
Computers, where multiple applications are running. All the running applications are
kept in a circular linked list and the OS gives a fixed time slot to all for running. The
Operating System keeps on iterating over the linked list until all the applications are
completed.
Another example can be Multiplayer games. All the Players are kept in a Circular
Linked List and the pointer keeps on moving forward as a player's chance ends.
Circular Linked List can also be used to create Circular Queue. In a Queue we have
to keep two pointers, FRONT and REAR in memory all the time, where as in Circular
Linked List, only one pointer is required.
class Node
{
public:
    int data;
    Node* next;
    Node()
    {
        data = 0;
        next = NULL;
    }
    Node(int x)
    {
        data = x;
        next = NULL;
    }
};
class CircularLinkedList
{
public:
    Node* head;
    CircularLinkedList()
    {
        head = NULL;
    }
};
Insertion at the Beginning
Steps to insert a Node at beginning :
1. If the Linked List is empty then we simply, add the new Node as the Head of the
Linked List.
2. If the Linked List is not empty, then we find the last node, make its next point to
the new Node, and make the next of the newly added Node point to the Head of the List.
If the Node to be deleted is the first node, then simply set the Next pointer of the
Head to point to the next element from the Node to be deleted. And update the next
pointer of the Last Node as well.
If the Node is in the middle somewhere, then find the Node before it, and make the
Node before it point to the Node next to it.
If the Node is at the end, then remove it and make the new last node point to the
head.