Dsa Live PDF
SUBMITTED BY:-
Name of student: Jay Shekhar
Registration Number: 11911774
Student Declaration
To whomsoever it may concern,
I hereby declare that I have completed my eight weeks of summer training on the online
learning website “GeeksforGeeks” from 22-09-2022 to 09-12-2022. I declare
that I worked with full dedication during these eight weeks of
training and that my learning outcomes fulfil the requirements of training for the award
of the degree of BTech CSE, Lovely Professional University, Phagwara.
Jay Shekhar
11911774
Signature
ACKNOWLEDGEMENT
I would also like to thank my own college Lovely Professional University for offering
such a course which not only helped me improve my programming skill but also
taught me its practical applications.
JAY SHEKHAR
11911774
1. INTRODUCTION
2. DECLARATION
3. ACKNOWLEDGEMENT
5. CONCLUSION
7. LEARNING OUTCOMES
8. BIBLIOGRAPHY
1. Introduction
Analysis of Algorithm
a) Background analysis through a Program and its functions.
Order of Growth
a. A mathematical explanation of the growth analysis through limits and functions.
Asymptotic Notations
Best, Average and Worst case explanation through a program.
In the worst case analysis, we calculate an upper bound on the running time of an algorithm. We must know the
case that causes the maximum number of operations to be executed. For Linear Search, the worst case happens
when the element to be searched (say x) is not present in the array.
In average case analysis, we take all possible inputs and calculate computing time for all of the inputs. Sum all
the calculated values and divide the sum by the total number of inputs. We must know (or predict) the
distribution of cases. For the linear search problem, let us assume that all cases are uniformly distributed
(including the case of x not being present in the array).
In the best case analysis, we calculate a lower bound on the running time of an algorithm. We must know the case
that causes the minimum number of operations to be executed.
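The three cases above can be made concrete with a linear search. The sketch below (function and parameter names are illustrative) returns after one comparison in the best case and scans all n elements in the worst case:

```cpp
#include <vector>

// Returns the index of x in arr, or -1 if x is not present.
// Best case: x is at index 0 (one comparison).
// Worst case: x is absent, so all n elements are compared.
int linearSearch(const std::vector<int>& arr, int x) {
    for (int i = 0; i < (int)arr.size(); ++i) {
        if (arr[i] == x)
            return i;   // found: stop early
    }
    return -1;          // not found: n comparisons were made
}
```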
Big O Notation
Graphical and mathematical explanation.
The Big O notation defines an upper bound of an algorithm; it bounds a function only from above. For
example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst
case. We can safely say that the time complexity of Insertion Sort is O(n^2). Note that O(n^2) also covers
linear time.
Calculation.
Ω (Omega) Notation
Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an
asymptotic lower bound.
Ω notation can be useful when we have a lower bound on the time complexity of an algorithm. The Omega
notation is the least used notation among all three.
Theta Notation
Graphical and mathematical explanation.
Calculation.
The theta notation bounds functions from above and below, so it defines exact asymptotic behavior.
A simple way to get the Theta notation of an expression is to drop low order terms and ignore leading
constants.
Analysis of Recursion
Various calculations through Recursion Tree method
Space Complexity
Basic Programs
Palindrome Numbers
Factorial of Numbers
GCD of Two Numbers
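Minimal sketches of the three basic programs listed above (the function names are illustrative); the factorial and GCD versions use the simple recursive formulations:

```cpp
// Check whether a number reads the same forwards and backwards.
bool isPalindrome(int n) {
    int rev = 0;
    for (int t = n; t > 0; t /= 10)
        rev = rev * 10 + t % 10;    // build the reversed number
    return rev == n;
}

// Factorial via simple recursion: n! = n * (n-1)!, with 0! = 1.
long long factorial(int n) {
    return (n <= 1) ? 1 : n * factorial(n - 1);
}

// Euclid's algorithm: gcd(a, b) = gcd(b, a % b), and gcd(a, 0) = a.
int gcd(int a, int b) {
    return (b == 0) ? a : gcd(b, a % b);
}
```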
1. Bit Magic
Bitwise Operators in C++
Operation of AND, OR, XOR operators
Operation of Left Shift, Right Shift and Bitwise Not
Bitwise Operators in Java
Operation of AND, OR
& (bitwise AND) Takes two numbers as operands and does AND on every bit of two numbers.
The result of AND is 1 only if both bits are 1. Suppose A = 5 and B = 3, therefore A & B = 1.
| (bitwise OR) Takes two numbers as operands and does OR on every bit of two numbers. The
result of OR is 1 if any of the two bits is 1. Suppose A = 5 and B = 3, therefore A | B = 7.
^ (bitwise XOR) Takes two numbers as operands and does XOR on every bit of two numbers. The
result of XOR is 1 if the two bits are different. Suppose A = 5 and B = 3, therefore A ^ B = 6.
<< (left shift) Takes two numbers, left shifts the bits of the first operand, the second operand
decides the number of places to shift.
>> (right shift) Takes two numbers, right shifts the bits of the first operand, the second operand
decides the number of places to shift.
~ (bitwise NOT) Takes one number and inverts all bits of it. Suppose A = 5, therefore ~A = -6.
The left shift and right shift operators should not be used with negative numbers; in C++, left-shifting a negative number is undefined behavior.
The bitwise XOR operator is the most useful operator from technical interview perspective. We
will see some very useful applications of the XOR operator later in the course.
The bitwise operators should not be used in place of logical operators.
The left-shift and right-shift operators are equivalent to multiplication and division by a power of 2,
respectively (for non-negative numbers): x << k equals x * 2^k, and x >> k equals x / 2^k.
The & operator can be used to quickly check if a number is odd or even. The value of expression
(x & 1) would be non-zero only if x is odd, otherwise the value would be zero.
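The operator examples above (with A = 5 and B = 3) and the odd/even check can be written out as a short sketch; isOdd is an illustrative helper name:

```cpp
// (x & 1) inspects the lowest bit: non-zero for odd numbers, zero for even.
bool isOdd(int x) {
    return (x & 1) != 0;
}

// With A = 5 (binary 101) and B = 3 (binary 011):
//   A & B  = 1   (001)  AND is 1 only where both bits are 1
//   A | B  = 7   (111)  OR is 1 where either bit is 1
//   A ^ B  = 6   (110)  XOR is 1 where the bits differ
//   ~A     = -6         NOT inverts every bit (two's complement)
//   A << 1 = 10         left shift by 1 multiplies by 2
//   A >> 1 = 2          right shift by 1 divides by 2
```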
Let's look at some useful tactics of the bitwise operators which can be helpful in
solving a lot of problems really easily and quickly.
1. How to set a bit in the number 'num': If we want to set a bit at the nth position in the number 'num', it can
be done using the 'OR' operator ( | ).
First we left shift '1' to the nth position via (1 << n).
Then, we use the 'OR' operator to set the bit at that position. 'OR' is used because it will set the bit
even if the bit was previously unset in the binary representation of 'num'.
2. How to unset/clear a bit at the nth position in the number 'num':
Suppose we want to unset a bit at the nth position in the number 'num'; we do this with the help of the
'AND' (&) operator.
First we left shift '1' to the nth position via (1 << n), then we use the bitwise NOT operator '~' to invert this
shifted '1'.
Now, ANDing (&) the number 'num' with this mask clears the bit at the nth position.
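The two recipes above can be written as one-line helpers (the names setBit and clearBit are illustrative; positions are 0-indexed from the least significant bit):

```cpp
// Set the bit at position n (0-indexed from the least significant bit).
int setBit(int num, int n) {
    return num | (1 << n);      // OR forces the bit to 1
}

// Clear the bit at position n.
int clearBit(int num, int n) {
    return num & ~(1 << n);     // AND with the inverted mask forces the bit to 0
}
```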
When we do an arithmetic right shift, every bit is shifted to the right and the blank position is filled with the sign
bit of the number: 0 in case of a positive number and 1 in case of a negative number. Since every bit position is a power of 2, with
each shift we reduce the value of each bit by a factor of 2, which is equivalent to dividing x by 2.
1. Recursion
Introduction to Recursion
The process in which a function calls itself directly or indirectly is called recursion, and the corresponding
function is called a recursive function. Using a recursive algorithm, certain problems can be solved quite
easily.
Applications of Recursion
A recursive function is said to be following Tail Recursion if it invokes itself at the end of the function.
That is, if all of the statements are executed before the function invokes itself then it is said to be following
Tail Recursion.
The tail-recursive functions are considered better than non-tail recursive functions as tail-recursion can
be optimized by the compiler. The idea used by compilers to optimize tail-recursive functions is simple,
since the recursive call is the last statement, there is nothing left to do in the current function, so saving
the current function’s stack frame is of no use.
In a recursive program, the solution to the base case is provided and the solution of bigger problem is
expressed in terms of smaller problems.
int fact(int n)
{
    if (n == 0)
        return 1;
    else
        return n * fact(n - 1);
}
The idea is to represent a problem in terms of one or more smaller problems, and add one or more base
conditions that stop recursion. For example, we compute factorial n if we know the factorial of (n-1). The
base case for factorial would be n = 0. We return 1 when n = 0.
Why does a Stack Overflow error occur in recursion?
If the base case is not reached or not defined, then the stack overflow problem may arise.
int fact(int n)
{
    // Base case that may never be reached (e.g. fact(5) keeps recursing forever)
    if (n == 100)
        return 1;
    else
        return n * fact(n - 1);
}
When any function is called from main(), memory is allocated to it on the stack. When a recursive function calls
itself, the memory for the called function is allocated on top of the memory allocated to the calling function,
and a different copy of the local variables is created for each function call. When the base case is reached, the
function returns its value to the function that called it, its memory is deallocated, and the process
continues. Let us take an example of how recursion works with a simple function:
Consider the following function to calculate factorial of N. Although it looks like Tail Recursive at first
look, it is a non-tail-recursive function. If we take a closer look, we can see that the value returned by
fact(N-1) is used in fact(N), so the call to fact(N-1) is not the last thing done by fact(N).
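The contrast can be sketched as follows: the first version must multiply after the recursive call returns, while the second carries the running product in an accumulator (factTail and acc are illustrative names), so the recursive call really is the last action:

```cpp
// Non-tail-recursive: the multiplication happens AFTER fact(n-1) returns,
// so the caller's stack frame must be kept alive.
long long fact(int n) {
    if (n <= 1) return 1;
    return n * fact(n - 1);
}

// Tail-recursive version: the recursive call is genuinely the last action.
// The running product is carried in the accumulator 'acc'.
long long factTail(int n, long long acc = 1) {
    if (n <= 1) return acc;
    return factTail(n - 1, acc * n);
}
```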
1. Arrays
Introduction and Advantages
An array is a collection of items of the same data type stored at contiguous memory locations. This makes
it easier to calculate the position of each element by simply adding an offset to a base value, i.e., the
memory location of the first element of the array (generally denoted by the name of the array).
Defining an Array: Array definition is similar to defining any other variable. Two things need to be
kept in mind: the data type of the array elements and the size of the array. The size of the array is fixed and the memory for an array needs to be
allocated before use; the size of an array cannot be increased or decreased dynamically.
Accessing array elements: Arrays allow access to elements randomly. Elements in an array can be
accessed using indexes. Suppose an array named arr stores N elements.
Indexes in an array are in the range 0 to N-1, where the first element is present at the 0th index and
consecutive elements are placed at consecutive indexes. The element present at the ith index in the array arr[]
can be accessed as arr[i].
Vector in C++ STL is a class that represents a dynamic array. The advantages of vector over normal arrays
are,
We do not need to pass size as an extra parameter when we pass a vector.
Vectors have many in-built functions for erasing an element, inserting an element etc.
Vectors support dynamic sizes, we do not have to initially specify the size of a vector. We can
also resize a vector.
Vectors provide many other functionalities.
Vectors are the same as dynamic arrays, with the ability to resize themselves automatically when an element is
inserted or deleted, their storage being handled automatically by the container. Vector elements are
placed in contiguous storage so that they can be accessed and traversed using iterators. In vectors, data is
inserted at the end. Inserting at the end takes amortized constant time, as sometimes there may be a need to
extend the underlying array. Removing the last element takes only constant time because no resizing happens.
Inserting and erasing at the beginning or in the middle is linear in time.
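A minimal sketch of these operations (the function name is illustrative); note that no size is specified up front, and insertion at the beginning is the expensive operation:

```cpp
#include <vector>

// Returns the vector after a few typical operations.
std::vector<int> demoVector() {
    std::vector<int> v;          // no size needs to be specified up front
    v.push_back(10);             // insertion at the end (amortized O(1))
    v.push_back(20);
    v.push_back(30);
    v.pop_back();                // removing the last element is O(1)
    v.insert(v.begin(), 5);      // inserting at the beginning is O(n)
    return v;                    // v now holds {5, 10, 20}
}
```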
Types of Arrays
Fixed-sized array
Dynamic-sized array
Operations on Arrays
Searching
Insertions
Given an array of a given size, the task is to insert a new element into this array. There are two possible
ways of inserting elements in an array:
To delete a given element from an array, we will have to first search the element in the array. If the
element is present in the array then delete operation is performed for the element otherwise the user is
notified that the array does not contain the given element.
Consider the given array is arr[] and the initial size of the array is N, that is the array can contain a
maximum of N elements and the length of the array is len. That is, there are len number of elements
already present in this array.
Arrays vs other DS
The worst-case time complexity of this insertion operation is linear, i.e. O(N), as we might have to shift
all of the elements by one place to the right to make room for the new element.
Given an array arr[] of size N, the task is to generate the prefix sum array of the given array.
Prefix Sum Array: The prefix sum array of any array, arr[] is defined as an array of same size say,
prefixSum[] such that the value at any index i in prefixSum[] is sum of all elements from indexes 0 to i in
arr[].
Finding sum in a Range: We can easily calculate the sum within a range [i, j] in an array using the prefix
sum array, since prefixSum[k] stores the sum of all elements up to index k. Therefore, prefixSum[j] -
prefixSum[i-1] (or simply prefixSum[j] when i = 0) gives:
sum of elements up to the j-th index - sum of elements up to the (i-1)-th index
An array declaration has two components: the type and the name. The type declares the element type of the
array. The element type determines the data type of each element that comprises the array. Like an array
of type int, we can also create an array of other primitive data types such as char, float, double, etc., or a
user-defined data type (objects of a class). Thus, the element type for the array determines what type of
data the array will hold.
Instantiating an Array: When an array is declared, only a reference to the array is created. To actually
create or give memory to the array, you create an array with the new keyword, for example: int arr[] = new int[10];
The technique can be best understood with the window pane in a bus: consider a window
of length n and a pane of length k fixed in it. Initially, the pane is at the extreme left, i.e., at
0 units from the left. Now, co-relate the window with the array arr[] of size n and the pane with the current
subarray of k elements. If we apply force on the window so that it moves a unit distance ahead, the pane
will cover the next k consecutive elements.
1. We compute the sum of first k elements out of n terms using a linear loop and store the sum in
variable window_sum.
2. Then we slide the window linearly over the array till it reaches the end, simultaneously keeping track of
the maximum sum.
3. To get the current sum of the block of k elements, just subtract the first element of the previous block
and add the last element of the current block.
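The three steps above can be sketched as follows (maxWindowSum is an illustrative name; 1 <= k <= arr.size() is assumed):

```cpp
#include <vector>
#include <algorithm>

// Maximum sum of any contiguous block of k elements via a sliding window.
long long maxWindowSum(const std::vector<int>& arr, int k) {
    long long windowSum = 0;
    for (int i = 0; i < k; ++i)              // step 1: sum of the first k elements
        windowSum += arr[i];
    long long best = windowSum;
    for (size_t i = k; i < arr.size(); ++i) {
        // step 3: drop the element leaving the window, add the one entering
        windowSum += arr[i] - arr[i - k];
        best = std::max(best, windowSum);    // step 2: track the maximum
    }
    return best;
}
```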
1. Searching
Binary Search Iterative and Recursive
Linear Search means to sequentially traverse a given list or array and check if an element is present in
the respective array or list. The idea is to start traversing the array and compare elements of the array one
by one starting from the first element with the given element until a match is found or end of the array is
reached.
Binary Search is a searching algorithm for searching an element in a sorted list or array. Binary Search is
more efficient than Linear Search algorithm and performs the search operation in logarithmic time
complexity for sorted arrays or lists.
Binary Search performs the search operation by repeatedly dividing the search interval in half. The idea is
to begin with an interval covering the whole array. If the value of the search key is less than the item in the
middle of the interval, narrow the interval to the
lower half. Otherwise narrow it to the upper half. Repeatedly check until the value is found or the interval
is empty.
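The interval-halving procedure above can be sketched iteratively (function and variable names are illustrative; the array is assumed sorted):

```cpp
#include <vector>

// Iterative binary search on a sorted vector; returns the index of key
// or -1 if it is absent.
int binarySearch(const std::vector<int>& arr, int key) {
    int lo = 0, hi = (int)arr.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi)
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            lo = mid + 1;               // key lies in the upper half
        else
            hi = mid - 1;               // key lies in the lower half
    }
    return -1;                          // interval became empty
}
```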
Ternary Search is a Divide and Conquer Algorithm used to perform search operation in a sorted array.
This algorithm is similar to the Binary Search algorithm but rather than dividing the array into two parts,
it divides the array into three equal parts.
In this algorithm, the given array is divided into three parts and the key (element to be searched) is
compared to find the part in which it lies and that part is further divided into three parts.
We can divide the array into three parts by taking mid1 and mid2 which can be calculated as shown
below. Initially, l and r will be equal to 0 and N-1 respectively, where N is the length of the array.
So far, we have discussed the Binary Search algorithm and its implementation by writing a function. The
C++ standard template library has built-in functions (such as std::binary_search, std::lower_bound and
std::upper_bound) that can be used to perform the Binary Search operation directly on a sequential container.
Java's Arrays.binarySearch() method is overloaded for arrays of the
types byte, char, double, int, float, short, long and Object as well.
Description: This method searches the specified array of the given data type for the specified value using
the binary search algorithm. The array must be sorted prior to making this call. If it is not sorted, the
results are undefined. If the array contains
multiple elements with the specified value, there is no guarantee which one will be found.
Parameters:
Return Value: It returns the index of the search key, if it is contained in the array; otherwise, (-(insertion
point) - 1). The insertion point is defined as the point at which the key would be inserted into the array:
the index of the first element greater than the key, or a.length if all elements in the array are less than the
specified key. Note that this guarantees that the return value will be >= 0 if and only if the key is found.
Implementation: The Ternary Search Algorithm can be implemented in both recursive and iterative manner.
Below is the implementation of both recursive and iterative function to perform Ternary Search on an array arr[] of
size N to search an element key.
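A recursive sketch of the cut points and the three-way narrowing described above (names mid1 and mid2 follow the text; the array is assumed sorted, and the initial call uses l = 0, r = N - 1):

```cpp
#include <vector>

// Recursive ternary search: returns the index of key in arr[l..r], or -1.
int ternarySearch(const std::vector<int>& arr, int l, int r, int key) {
    if (l > r) return -1;
    int mid1 = l + (r - l) / 3;         // first cut point
    int mid2 = r - (r - l) / 3;         // second cut point
    if (arr[mid1] == key) return mid1;
    if (arr[mid2] == key) return mid2;
    if (key < arr[mid1])
        return ternarySearch(arr, l, mid1 - 1, key);        // first third
    else if (key > arr[mid2])
        return ternarySearch(arr, mid2 + 1, r, key);        // last third
    else
        return ternarySearch(arr, mid1 + 1, mid2 - 1, key); // middle third
}
```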
Important Points:
Sorting in Java
Arrays.sort() in Java
Collection.sort() in Java
Stability in Sorting Algorithms
Examples of Stable and Unstable Algos
QUICK Sort
QuickSort is a Divide and Conquer algorithm. It picks an element as pivot and partitions the given array
around the picked pivot. There are many different versions of quickSort that pick pivot in different ways.
The key process in quickSort is partition(). Target of partitions is, given an array and an element x of array
as pivot, put x at its correct position in sorted array and put all smaller elements (smaller than x) before x,
and put all greater elements (greater than x) after x. All this should be done in linear time.
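One common version of partition() is the Lomuto scheme, which always picks the last element as the pivot; this sketch (helper names are illustrative) shows it together with the recursive quickSort:

```cpp
#include <vector>
#include <algorithm>

// Lomuto partition: places the pivot (last element) at its correct sorted
// position; smaller elements end up before it, larger ones after it, in O(n).
int partition(std::vector<int>& arr, int low, int high) {
    int pivot = arr[high];
    int i = low - 1;                    // boundary of the "smaller" region
    for (int j = low; j < high; ++j) {
        if (arr[j] < pivot)
            std::swap(arr[++i], arr[j]);
    }
    std::swap(arr[i + 1], arr[high]);   // put the pivot in place
    return i + 1;
}

void quickSort(std::vector<int>& arr, int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quickSort(arr, low, p - 1);     // sort elements before the pivot
        quickSort(arr, p + 1, high);    // sort elements after the pivot
    }
}

// Convenience wrapper used in the examples.
std::vector<int> quickSorted(std::vector<int> arr) {
    if (!arr.empty()) quickSort(arr, 0, (int)arr.size() - 1);
    return arr;
}
```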
Bubble Sort
Bubble Sort is also an in-place sorting algorithm. This is the simplest sorting algorithm and it works on
the principle that:
In one iteration if we swap all adjacent elements of an array such that after swap the first element is less
than the second element then at the end of the iteration, the first element of the array will be the
minimum element.
Selection Sort
The selection sort algorithm sorts an array by repeatedly finding the minimum element (considering
ascending order) from unsorted part and putting it at the beginning. The algorithm maintains two
subarrays in a given array.
Insertion Sort
Insertion Sort is an In-Place sorting algorithm. This algorithm works in a similar way of sorting a deck of
playing cards.
The idea is to start iterating from the second element of array till last element and for every element
insert at its correct position in the subarray before it.
Merge Sort
Merge Sort is a Divide and Conquer algorithm. It divides the input array in two halves, calls itself for the
two halves and then merges the two sorted halves. The merge() function is used for merging two halves.
The merge(arr, l, m, r) is a key process that assumes that arr[l..m] and arr[m+1..r] are sorted and merges
the two sorted sub-arrays into one in a sorted manner. See the following implementation for details.
MergeSort(arr[], l, r)
If r > l
1. Find the middle point to divide the array into two halves: middle m = l + (r - l)/2
2. Call mergeSort for the first half: Call mergeSort(arr, l, m)
3. Call mergeSort for the second half: Call mergeSort(arr, m+1, r)
4. Merge the two halves sorted in steps 2 and 3: Call merge(arr, l, m, r)
Counting sort is a sorting technique based on keys within a specific range. It works by counting the
number of objects having distinct key values (a kind of hashing), then doing some arithmetic to calculate
the position of each object in the output sequence.
1. Matrix
Introduction to Matrix in C++ and Java
Multidimensional Matrix
Pass Matrix as Argument
Printing matrix in a snake pattern
Transposing a matrix
Rotating a Matrix
Check if the element is present in a row and column-wise sorted matrix.
Boundary Traversal
Spiral Traversal
Matrix Multiplication
Search in row-wise and column-wise Sorted Matrix
2. Hashing
Introduction and Time complexity analysis
Hashing is a method of storing and retrieving data from a database efficiently. Suppose that we want to
design a system for storing employee records keyed using phone numbers, and we want the following
queries to be performed efficiently:
Hash Function: A function that converts a given big phone number to a small practical integer value. The
mapped integer value is used as an index in the hash table. In simple terms, a hash function maps a big
number or string to a small integer that can be used as an index in the hash table. A good hash function
should have following properties:
Important Operations:
Insert(k): Keep probing until an empty slot is found. Once an empty slot is found, insert k.
Search(k): Keep probing until the slot's key becomes equal to k or an empty slot is
reached.
Delete(k): Delete operation is interesting. If we simply delete a key, then the search may fail.
So slots of the deleted keys are marked specially as "deleted".
Double Hashing: We use another hash function hash2(x) and look for the (i * hash2(x))-th slot in the
i-th iteration.
Let hash(x) be the slot index computed using the hash function. If slot hash(x) % S is full, then we
try (hash(x) + 1*hash2(x)) % S. If (hash(x) + 1*hash2(x)) % S is also
full, then we try (hash(x) + 2*hash2(x)) % S. If (hash(x) + 2*hash2(x)) % S is also full, then we try (hash(x)
+ 3*hash2(x)) % S, and so on.
Linear probing has the best cache performance but it suffers from clustering. One more
advantage of linear probing is that it is easy to compute.
Quadratic probing lies between the two in terms of cache performance and clustering.
Double hashing has poor cache performance but no clustering. Double hashing requires more
computation time as two hash functions need to be computed.
In the above syntax, str_name is any name given to the string variable and size is used to define the length
of the string, i.e. the number of characters the string will store. Please keep in mind that there is an extra
terminating character, the Null character ('\0'), used to indicate the end of the string, which
distinguishes strings from normal character arrays.
Eg:
Printing a string: Unlike arrays, we do not need to print a string character by character. The C
language does not provide an inbuilt data type for strings, but it has the format specifier "%s" which can be used
to directly print and read strings.
A character array is simply an array of characters terminated by a null character. A string
is a class which defines objects that can be represented as a stream of characters.
The size of a character array has to be allocated statically; more memory cannot be allocated at run
time if required, and unused allocated memory is wasted. In the case of
strings, memory is allocated dynamically, so more memory can be allocated at run time as needed.
Implementation of a character array is faster than std::string; strings are slower than
character arrays in terms of implementation.
1. Linked List
Introduction
Linked Lists are linear or sequential data structures in which elements are stored at non-contiguous
memory locations and are linked to each other using pointers. Like arrays, linked lists are also linear
data structures, but in linked lists elements are not stored at contiguous memory locations. They can be
stored anywhere in memory, but for sequential access the nodes are linked to each other using
pointers.
Advantages of Linked Lists over Arrays: Arrays can be used to store linear data of similar types, but
arrays have the following limitations:
1. The size of the arrays is fixed, so we must know the upper limit on the number of elements in
advance. Also, generally, the allocated memory is equal to the upper limit irrespective of the usage.
On the other hand, linked lists are dynamic and the size of the linked list can be incremented or
decremented during runtime.
2. Inserting a new element in an array of elements is expensive because room has to be created for
the new element, and to create room, existing elements have to be shifted. For example, suppose we
maintain a sorted list of IDs in an array id[].
Implementation in CPP
Implementation in Java
There can be many different situations that may arise while inserting a node in a linked list. The three most
frequent situations are inserting at the front, after a given node, and at the end.
Each node contains two pointers, one pointing to the next node and the other pointing to the
previous node.
The prev of Head node is NULL, as there is no previous node of Head.
The next of last node is NULL, as there is no node after the last node
Every node of DLL requires extra space for a previous pointer.
All operations require an extra previous pointer to be maintained. For example, in insertion,
we need to modify the previous pointers together with the next pointers.
Circular Linked List
A circular linked list is a linked list where all nodes are connected to form a circle. There is no NULL at the
end. A circular linked list can be a singly circular linked list or doubly circular linked list.
Loop Problems
Detecting Loops
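One standard way to detect a loop is Floyd's cycle detection ("tortoise and hare"); the sketch below (Node layout and helper names are illustrative) advances one pointer by one step and another by two, and the two meet only if a loop exists:

```cpp
// Floyd's cycle detection: a slow pointer moves one step at a time, a fast
// pointer two steps; if there is a loop, they eventually meet inside it.
struct Node {
    int data;
    Node* next;
    Node(int d) : data(d), next(nullptr) {}
};

bool hasLoop(Node* head) {
    Node* slow = head;
    Node* fast = head;
    while (fast != nullptr && fast->next != nullptr) {
        slow = slow->next;          // advance by 1
        fast = fast->next->next;    // advance by 2
        if (slow == fast)
            return true;            // the two pointers met inside a loop
    }
    return false;                   // fast reached the end: no loop
}

// Demonstration helper: build the list 1->2->3 and optionally link the
// tail back to the head to create a loop.
bool demoHasLoop(bool makeLoop) {
    Node a(1), b(2), c(3);
    a.next = &b; b.next = &c;
    if (makeLoop) c.next = &a;
    return hasLoop(&a);
}
```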
1. Queue
Introduction and Application
Like Stack data structure, Queue is also a linear data structure that follows a particular order in which the
operations are performed. The order is First In First Out (FIFO), which means that the element that is
inserted first in the queue will be the first one to be removed from the queue. A good example of queue is
any queue of consumers for a resource where the consumer who came first is served first.
The difference between stacks and queues is in removing. In a stack, we remove the most recently added
item; whereas, in a queue, we remove the least recently added item.
Operations on Queue: Mainly the following four basic operations are performed on queue:
Enqueue: Adds an item to the queue. If the queue is full, then it is said to be an Overflow
condition.
Dequeue: Removes an item from the queue. The items are popped in the same order in which
they are pushed. If the queue is empty, then it is said to be an Underflow condition.
Front: Get the front item from the queue.
Rear: Get the last item from the queue.
Array implementation of Queue: For implementing a queue, we need to keep track of two indices, front
and rear. We enqueue an item at the rear and dequeue an item from the front. If we simply increment the
front and rear indices, then there may be problems: the front may reach the end of the array. The solution
to this problem is to increase front and rear in a circular manner.
Consider that an array of size N is taken to implement a queue. Initially, the size of the queue will be
zero (0). The total capacity of the queue will be the size of the array, i.e. N. Initially, the index front will
be equal to 0, and rear will be equal to N-1. Every time an item is inserted, the index rear is
incremented as: rear = (rear + 1) % N, and every time an item is removed, the
front index shifts right by one place: front = (front + 1) % N.
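A minimal circular-array queue following this index scheme (the class and helper names are illustrative); rear starts at N-1 so that the first enqueue lands at index 0:

```cpp
#include <vector>

// Circular-array queue: front and rear wrap around modulo the capacity.
class CircularQueue {
    std::vector<int> data;
    int front = 0, rear, count = 0;
public:
    explicit CircularQueue(int n) : data(n), rear(n - 1) {}
    bool enqueue(int x) {
        if (count == (int)data.size()) return false;  // overflow
        rear = (rear + 1) % (int)data.size();
        data[rear] = x;
        ++count;
        return true;
    }
    bool dequeue(int& out) {
        if (count == 0) return false;                 // underflow
        out = data[front];
        front = (front + 1) % (int)data.size();
        --count;
        return true;
    }
    int size() const { return count; }
};

// Demonstration: enqueue 1, 2, 3 into a queue of capacity 3, dequeue once.
int demoQueueFront() {
    CircularQueue q(3);
    q.enqueue(1); q.enqueue(2); q.enqueue(3);
    int x = 0;
    q.dequeue(x);
    return x;       // FIFO: the first element inserted comes out first
}
```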
1. Stack
Understanding the Stack data structure
Applications of Stack
The Stack is a linear data structure which follows a particular order in which the operations are
performed. The order may be LIFO (Last In First Out) or FILO (First In Last Out); the two describe the
same behavior.
The LIFO order says that the element which is inserted last into the Stack will be the first
one to be removed. Both insertion and deletion take place at the same end, called the top of the
stack.
The FILO order says that the element which is inserted first into the Stack will be the last
one to be removed; this is equivalent to the LIFO order.
There are many real-life examples of a stack. Consider the simple example of plates stacked over one
another in a canteen. The plate that is at the top is the first one to be removed, i.e. the plate that has been
placed at the bottommost position remains in the stack for the longest period of time. So, it can be simply
seen to follow LIFO/FILO order.
Time Complexities of operations on stack: The operations push(), pop(), isEmpty() and peek() all take
O(1) time. We do not run any loop in any of these operations.
14) Deque
Deque or Double Ended Queue is a generalized version of Queue data structure that allows insert
and delete at both ends.
Applications of Deque: Since a Deque supports both stack and queue operations, it can be used as both. The
Deque data structure supports clockwise and anticlockwise rotations in O(1) time, which can be useful in
certain applications. Also, problems where elements need to be removed or added at both ends
can be efficiently solved using a
Deque. For example, see the Maximum of all subarrays of size k problem, 0-1 BFS, and Find the first
circular tour that visits all petrol pumps. See the wiki page for another example, the A-Steal job
scheduling algorithm, where a Deque is used as the deletion operation is required at both ends.
1. Tree
Introduction
A Tree is a non-linear data structure where each node is connected to a number of nodes with the help of
pointers or references.
Root: The root of a tree is the first node of the tree. In the above image, the root node is the
node 30.
Edge: An edge is a link connecting any two nodes in the tree. For example, in the above image
there is an edge between node 11 and 6.
Siblings: The children nodes of same parent are called siblings. That is, the nodes with same
parent are called siblings. In the above tree, nodes 5, 11, and 63 are siblings.
Leaf Node: A node is said to be the leaf node if it has no children. In the above tree, node 15 is
one of the leaf nodes.
Height of a Tree: The height of a tree is the length of the path from the root node to the node
present at the last level, i.e., the number of levels minus one. The above tree is of
height 2.
Tree
Application
Binary Tree
Unlike linear data structures (Array, Linked List, Queues, Stacks, etc.), which have only one logical way to
traverse them, trees can be traversed in different ways. Following are the generally used ways for
traversing trees:
Tree Traversal
Implementation of:
Inorder Traversal
In Inorder traversal, a node is processed after processing all the nodes in its left subtree. The right
subtree of the node is processed after processing the node itself.
Algorithm Inorder(tree):
1. Traverse the left subtree, i.e., call Inorder(left-subtree)
2. Visit the root
3. Traverse the right subtree, i.e., call Inorder(right-subtree)
In preorder traversal, a node is processed before processing any of the nodes in its subtrees.
Algorithm Preorder(tree):
1. Visit the root
2. Traverse the left subtree, i.e., call Preorder(left-subtree)
3. Traverse the right subtree, i.e., call Preorder(right-subtree)
In postorder traversal, a node is processed after processing all the nodes in its subtrees.
Algorithm Postorder(tree):
1. Traverse the left subtree, i.e., call Postorder(left-subtree)
2. Traverse the right subtree, i.e., call Postorder(right-subtree)
3. Visit the root
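The three traversals can be sketched recursively; for easy inspection, each function here returns the visit order as a space-separated string (the node layout and demo helper are illustrative):

```cpp
#include <string>

// A small binary-tree node and the three traversals.
struct TreeNode {
    int key;
    TreeNode *left = nullptr, *right = nullptr;
    TreeNode(int k) : key(k) {}
};

std::string inorder(TreeNode* t) {       // left, root, right
    if (!t) return "";
    return inorder(t->left) + std::to_string(t->key) + " " + inorder(t->right);
}

std::string preorder(TreeNode* t) {      // root, left, right
    if (!t) return "";
    return std::to_string(t->key) + " " + preorder(t->left) + preorder(t->right);
}

std::string postorder(TreeNode* t) {     // left, right, root
    if (!t) return "";
    return postorder(t->left) + postorder(t->right) + std::to_string(t->key) + " ";
}

// Demo tree:   1
//             / \
//            2   3
std::string demoTraversal(int which) {
    TreeNode a(1), b(2), c(3);
    a.left = &b; a.right = &c;
    if (which == 0) return inorder(&a);
    if (which == 1) return preorder(&a);
    return postorder(&a);
}
```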
Binary Search Tree is a node-based binary tree data structure which has the following properties:
The left subtree of a node contains only nodes with keys less than the node's key.
The right subtree of a node contains only nodes with keys greater than the node's key.
The left and right subtree each must also be a binary search tree. There must be no duplicate
nodes.
Using the property of the Binary Search Tree, we can search for an element in O(h) time complexity, where h
is the height of the given BST.
To search a given key in Binary Search Tree, first compare it with root, if the key is present at root, return
root. If the key is greater than the root's key, we recur for the right subtree of the root node. Otherwise,
we recur for the left subtree.
Inserting a new node in the Binary Search Tree is always done at the leaf nodes to maintain the order of
nodes in the Tree. The idea is to start searching the given node to be inserted from the root node till we
hit a leaf node. Once a leaf node is found, the new node is added as a child of the leaf node.
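The search and insert operations described above can be sketched like this (class and field names are assumptions for illustration):

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def search(root, key):
    # O(h): compare with the root, then recur right for greater keys,
    # left otherwise, exactly as described above.
    if root is None or root.key == key:
        return root
    if key > root.key:
        return search(root.right, key)
    return search(root.left, key)

def insert(root, key):
    # Walk down as in search; the new node is always added as a leaf child.
    if root is None:
        return BSTNode(key)
    if key > root.key:
        root.right = insert(root.right, key)
    else:
        root.left = insert(root.left, key)
    return root
```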
AVL tree is a self-balancing Binary Search Tree (BST) where the difference between heights of left and
right subtrees cannot be more than one for all nodes.
Most of the BST operations (e.g., search, max, min, insert, delete, etc.) take O(h) time where h is the height
of the BST. The cost of these operations may become O(n) for a skewed binary tree. If we make sure that
the height of the tree remains O(Logn) after every insertion and deletion, then we can guarantee an upper
bound of O(Logn) for all these operations. The height of an AVL tree is always O(Logn) where n is the
number of nodes in the tree.
To make sure that the given tree remains AVL after every insertion, we must augment the standard BST
insert operation to perform some re-balancing. Following are two basic operations that can be performed
to re-balance a BST without violating the BST property (keys(left) < key(root) < keys(right)).
1. Left Rotation
2. Right Rotation
Time Complexity: The rotation operations (left and right rotate) take constant time as only a few
pointers are being changed there. Updating the height and getting the balance factor also takes constant
time. So the time complexity of AVL insert remains same as BST insert which is O(h) where h is the height
of the tree. Since the AVL tree is balanced, the height is O(Logn). So time complexity of AVL insert is
O(Logn).
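The two rotations can be sketched as below. This is only a minimal sketch of the rotation step; the surrounding AVL insert and balance-factor logic is omitted, and the names are illustrative:

```python
class AVLNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1

def height(n):
    return n.height if n else 0

def update(n):
    n.height = 1 + max(height(n.left), height(n.right))

def right_rotate(y):
    # y's left child x becomes the new subtree root;
    # only a few pointers change, so this is O(1).
    x = y.left
    y.left = x.right
    x.right = y
    update(y)
    update(x)
    return x

def left_rotate(x):
    # Mirror image of right_rotate.
    y = x.right
    x.right = y.left
    y.left = x
    update(x)
    update(y)
    return y
```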
1. Heap
1. A Heap is a complete tree (all levels are completely filled except possibly the last level, and the
last level has all keys as far left as possible).
2. A Heap is either a Min-Heap or a Max-Heap. In a Min-Heap, the key at the root must be the minimum
among all keys present in the Binary Heap, and the same property must be recursively true for all
nodes in the tree. A Max-Heap is defined similarly, with the maximum key at the root.
Binary Heap: A Binary Heap is a heap where each node can have at most two children. In other words, a
Binary Heap is a complete Binary Tree satisfying the above-mentioned properties.
Getting Maximum Element: In a Max-Heap, the maximum element is always present at the root node,
which is the first element of the array used to represent the Heap. So the maximum element of a max-heap
can be obtained by simply returning the root node as Arr[0] in O(1) time complexity.
Getting Minimum Element: Likewise, in a Min-Heap the minimum element is always present at the
root node, so the minimum element of a min-heap can be obtained by returning the root node as Arr[0]
in O(1) time complexity.
Binary Heap
Insertion
Given a Binary Heap and an element present in the given Heap. The task is to delete an element from this
Heap.
The standard deletion operation on Heap is to delete the element present at the root node of the Heap.
That is if it is a Max Heap, the standard deletion operation will delete the maximum element and if it is a
Min heap, it will delete the minimum element.
Process of Root Deletion (Or Extract-Min in a Min-Heap): Since deleting an element at an
intermediate position in the heap can be costly, we simply replace the element to be deleted with the
last element, delete the last element, and then heapify to restore the heap property.
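The root-deletion process just described can be sketched on an array-based Min-Heap (helper names are illustrative):

```python
def extract_min(heap):
    # heap is a list representing a Min-Heap; heap[0] is the minimum.
    if not heap:
        return None
    mn = heap[0]
    last = heap.pop()          # remove the last element
    if heap:
        heap[0] = last         # move it to the root...
        _sift_down(heap, 0)    # ...and heapify to restore the heap property
    return mn

def _sift_down(heap, i):
    # Repeatedly swap node i with its smaller child until the
    # min-heap property holds again.
    n = len(heap)
    while True:
        left, right, smallest = 2 * i + 1, 2 * i + 2, i
        if left < n and heap[left] < heap[smallest]:
            smallest = left
        if right < n and heap[right] < heap[smallest]:
            smallest = right
        if smallest == i:
            return
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest
```

Since the sift-down walks one root-to-leaf path, deletion costs O(log n).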
Heap Sort
1. Graph
Introduction to Graph
Graphs are used to represent networks. The networks may include paths in a city or
telephone network or circuit network. For example Google GPS
Graphs are also used in social networks like linkedIn, Facebook. For example, in Facebook,
each person is represented with a vertex(or node). Each node is a structure and contains
information like person id, name, gender and locale.
Graph Representation
Adjacency Matrix
Breadth-First Search
The Breadth First Traversal or BFS traversal of a graph is similar to the Level
Order Traversal of Trees.
The BFS traversal of Graphs also traverses the graph in levels. It starts the traversal with a given vertex,
visits all of the vertices adjacent to the initially given vertex and pushes them all to a queue in order of
visiting. Then it pops an element from the front of the queue, visits all of its neighbours and pushes the
neighbours which are not already visited
into the queue and repeats the process until the queue is empty or all of the vertices are visited.
Applications
Complete Algorithm:
1. Create a boolean array say visited[] of size V+1 where V is the number of vertices in the graph.
2. Create a Queue, mark the source vertex visited as visited[s] = true and push it into the queue.
3. While the Queue is non-empty, repeat the steps below:
Pop an element from the queue and print the popped element.
Traverse all of the vertices adjacent to the vertex popped from the queue.
If any adjacent vertex is not already visited, mark it visited and push it into the queue.
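The complete algorithm above can be sketched as follows; here an adjacency-list dictionary and a visited set stand in for the adjacency structure and the visited[] array:

```python
from collections import deque

def bfs(adj, s):
    # adj: adjacency list as a dict {vertex: [neighbours]}; s: source vertex.
    visited = {s}                    # mark the source as visited
    queue = deque([s])               # and push it into the queue
    order = []
    while queue:
        v = queue.popleft()          # pop an element from the front
        order.append(v)
        for w in adj.get(v, []):     # traverse all adjacent vertices
            if w not in visited:     # push only unvisited neighbours
                visited.add(w)
                queue.append(w)
    return order
```

Every vertex and edge is processed once, giving O(V + E) time.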
1. Greedy
Introduction
Greedy is an algorithmic paradigm that builds up a solution piece by piece, always choosing the next
piece that offers the most obvious and immediate benefit. So problems where choosing the locally optimal
option also leads to the globally optimal solution are the best fit for Greedy.
For example, consider the Fractional Knapsack Problem. The problem states that:
Given a list of elements with specific values and weights associated with them, the task is to fill a Knapsack of
capacity W using these elements such that the value of the knapsack is the maximum possible.
Note: You are allowed to take a fraction of an element also in order to maximize the value.
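A minimal greedy sketch of the Fractional Knapsack, sorting by value/weight ratio (the function name is illustrative):

```python
def fractional_knapsack(items, W):
    # items: list of (value, weight) pairs; W: knapsack capacity.
    # Greedy choice: always take the highest value/weight ratio first.
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if W <= 0:
            break
        take = min(weight, W)            # take the whole item, or a fraction
        total += value * (take / weight)
        W -= take
    return total
```

For example, with items (value, weight) = (60, 10), (100, 20), (120, 30) and W = 50, the greedy choice takes the first two items whole and two-thirds of the third.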
Activity Selection Problem
Fractional Knapsack
Huffman Coding
1. Backtracking
Concepts of Backtracking
The Backtracking algorithm is a problem-solving technique that uses recursion at its core. It involves
trying to build a solution incrementally, piece by piece, and removing partial solutions that don't satisfy
the problem's conditions during the course of program execution. In essence it is a refined brute-force
approach: all possible combinations are tried, and those solutions that don't fulfill the criteria are
removed or rejected.
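As one concrete sketch of this idea, a backtracking search for all subsets that sum to a given target (assuming positive integers so a negative remaining sum can be rejected early):

```python
def subset_sum(nums, target):
    # Backtracking: extend a partial solution one element at a time and
    # undo ("backtrack") each choice after exploring it.
    result = []

    def backtrack(i, chosen, remaining):
        if remaining == 0:
            result.append(list(chosen))   # a complete valid solution
            return
        if i == len(nums) or remaining < 0:
            return                        # reject this partial solution
        chosen.append(nums[i])            # choose nums[i]
        backtrack(i + 1, chosen, remaining - nums[i])
        chosen.pop()                      # backtrack: un-choose nums[i]
        backtrack(i + 1, chosen, remaining)

    backtrack(0, [], target)
    return result
```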
Introduction
Dynamic Programming is an algorithmic approach for solving some complex problems efficiently, saving
time and computation by storing the results of past computations. The basic idea of dynamic
programming is to store the results of previous calculations and reuse them in the future instead of
recalculating them.
There are two main properties that identify a problem as solvable using the dynamic programming
approach:
1. Overlapping Subproblem Property
2. Optimal Substructure Property
Overlapping Subproblems: Like Divide and Conquer, Dynamic Programming combines solutions to sub-
problems. Dynamic Programming is mainly used when solutions of the same subproblems are needed
again and again. In dynamic programming, computed solutions to subproblems are stored in a table so
that these don’t have to be recomputed. So Dynamic Programming is not useful when there are no
common (overlapping) subproblems because there is no point storing the solutions if
they are not needed again. For example, Binary Search doesn’t have common subproblems. If we take an
example of following the recursive program for Fibonacci Numbers, there are many subproblems which
are solved again and again.
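A sketch of the memoized version of that Fibonacci program, where each subproblem is solved only once and its result stored in a table:

```python
def fib_memo(n, memo=None):
    # memo maps n -> fib(n); each subproblem is computed at most once,
    # turning the exponential recursion into O(n).
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
```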
Optimal Substructure: A given problem has Optimal Substructure Property if an optimal solution of the
given problem can be obtained by using optimal solutions of its subproblems. For example, the Shortest
Path problem has the following optimal substructure property: If a node x lies in the shortest path from a
source node u to destination node v then the shortest path from u to v is combination of shortest path
from u to x and shortest path from x to v. The standard All Pair Shortest Path algorithms like Floyd–
Warshall and Bellman-Ford are typical examples of Dynamic Programming. On the other hand, the
Longest Path problem doesn’t have the Optimal
Substructure property. Here, by Longest Path we mean longest simple path (path without cycle) between
any two nodes. Consider the following unweighted graph given in the CLRS book. There are two longest
paths from q to t: q->r->t and q->s->t. Unlike shortest paths, these longest paths do not have the optimal
substructure property. For example, the longest path q->r->t is not a combination of the longest path
from q to r and longest path from r to t, because the longest path from q to r is q->s->t->r and the longest
path from r to t is r->q->s->t.
Dynamic Programming
Memoization
Tabulation
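For contrast with the top-down (memoized) approach, a tabulation sketch of the same Fibonacci computation fills a table bottom-up from the smallest subproblems:

```python
def fib_tab(n):
    # Bottom-up tabulation: table[i] holds fib(i), filled from 0 upwards.
    if n <= 1:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Tabulation avoids recursion entirely; only the iteration order has to respect subproblem dependencies.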
22. Trie
Introduction
The Trie data structure is an efficient information re-trie-val data structure. The Trie data structure is
used to efficiently search for a particular string key among a list of such keys. Using the trie, the lookup
operation can be performed in O(key_length) time complexity.
Representation
Search
Insert
Inserting a key to Trie is a simple approach. Every character of the input key is inserted as an individual
Trie node. Note that the children are an array of pointers (or references) to next level trie nodes. The key
character acts as an index into the array of children. If the input key is new or an extension of the existing
key, we need to construct non-existing nodes of the key, and mark end of the word for the last node. If the
input key is a prefix of the existing key in Trie, we simply mark the last node of the key as the end of a
word. The key length determines Trie depth.
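A minimal trie sketch following this description; here a dictionary of children stands in for the fixed array of child pointers:

```python
class TrieNode:
    def __init__(self):
        self.children = {}     # character -> child node (dict instead of a fixed array)
        self.is_end = False    # marks the end of a stored word

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, key):
        # Each character of the key indexes into the children of the
        # current node; missing nodes are created along the way, and the
        # last node is marked as the end of the word.
        node = self.root
        for ch in key:
            node = node.children.setdefault(ch, TrieNode())
        node.is_end = True

    def search(self, key):
        # O(key_length): follow one child pointer per character.
        node = self.root
        for ch in key:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_end
```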
Delete
For example, if the dictionary stores the words {“abc”, “abcd”, “aa”, “abbbaba”} and the user
types in “ab”, then {“abc”, “abcd”, “abbbaba”} must be shown as results, as all of them have the prefix “ab”.
Introduction
A Segment Tree is a binary tree used to store intervals or segments; that is, each node in a
segment tree basically stores a segment of an array. Segment Trees are generally used in problems
where we need to solve queries on a range of elements of an array.
Can we optimize the time complexity of the first operation in the above solution?
Yes, we can optimize the first operation to O(1) time complexity by storing prefix sums. We can
keep an auxiliary array, say sum[], in which the i-th element stores the sum of the first i elements of the
original array. Then, whenever we need the sum of a range of elements, we can simply calculate it as
(sum[r] - sum[l-1]). But in this solution, the complexity of the second operation, updating an element,
increases from O(1) to O(N).
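That prefix-sum idea can be sketched as follows, using the 1-indexed convention so that sum[r] - sum[l-1] applies directly:

```python
def build_prefix_sums(arr):
    # sums[i] holds the sum of the first i elements of arr (sums[0] = 0).
    sums = [0] * (len(arr) + 1)
    for i, x in enumerate(arr, start=1):
        sums[i] = sums[i - 1] + x
    return sums

def range_sum(sums, l, r):
    # Sum of elements l..r (1-indexed) in O(1).
    return sums[r] - sums[l - 1]
```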
Construction
Range Query
The Range Minimum Query is another popular problem which can be solved using Segment Trees. The
problem states that, given an array and a list of queries containing ranges, the task is to find the minimum
element in the range for every query.
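A compact iterative, array-based segment tree for this Range Minimum Query problem (one sketch among several common variants):

```python
import math

def build_min_tree(arr):
    # Leaves (indices n..2n-1) hold the array; each internal node holds
    # the minimum of its two children.
    n = len(arr)
    tree = [math.inf] * (2 * n)
    tree[n:] = arr
    for i in range(n - 1, 0, -1):
        tree[i] = min(tree[2 * i], tree[2 * i + 1])
    return tree

def range_min(tree, l, r):
    # Minimum of arr[l..r] inclusive (0-indexed), in O(log n):
    # walk the two range boundaries up the tree, taking the minimum of
    # every node that lies fully inside the range.
    n = len(tree) // 2
    l += n
    r += n + 1
    res = math.inf
    while l < r:
        if l & 1:
            res = min(res, tree[l]); l += 1
        if r & 1:
            r -= 1; res = min(res, tree[r])
        l //= 2
        r //= 2
    return res
```

Construction is O(n); each query touches O(log n) nodes.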
Update Query
Introduction
A disjoint-set data structure is a data structure that keeps track of a set of elements partitioned into a
number of disjoint (non-overlapping) subsets.
Implementing Union Operation: It takes as input two elements and finds the representatives of their
sets using the find operation, and then finally puts either one of the trees (representing the set) under the
root node of the other tree, effectively merging the trees and the sets.
Application: There are a lot of applications of Disjoint-Set data structure. Consider the problem of
detecting a cycle in a Graph. It can be easily solved using the Disjoint Set and Union-Find algorithm. This
method assumes that the graph does not contain any self-loop.
Union by Rank
Path Compression
Earlier, we introduced the Union-Find algorithm and used it to detect a cycle in a graph. We used
the following union() and find() operations for subsets:
int xset = find(parent, x);
int yset = find(parent, y);
parent[xset] = yset;
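Adding Union by Rank and Path Compression to those operations might look like this (a Python sketch of the optimized versions):

```python
def find(parent, x):
    # Path Compression: point every node on the path directly at the root.
    if parent[x] != x:
        parent[x] = find(parent, parent[x])
    return parent[x]

def union(parent, rank, x, y):
    # Union by Rank: attach the shorter tree under the taller tree's root,
    # keeping the trees shallow.
    xroot, yroot = find(parent, x), find(parent, y)
    if xroot == yroot:
        return
    if rank[xroot] < rank[yroot]:
        xroot, yroot = yroot, xroot
    parent[yroot] = xroot
    if rank[xroot] == rank[yroot]:
        rank[xroot] += 1
```

Together, these two optimizations make a sequence of operations run in nearly constant amortized time per operation.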
# Tic-Tac-Toe GUI using tkinter, repaired from the garbled fragments;
# widget layout and some names are reconstructed.
import tkinter as tk
from tkinter import messagebox

root = tk.Tk()
root.title("Tic-Tac-Toe")
Players, turn, count = ["X", "O"], "1st", 0
Buttons = []
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def reset_board():
    global count, turn
    count, turn = 0, "1st"
    for b in Buttons:
        b["text"] = " "

def checkifwon():
    for a, b, c in WINS:               # every winning line of the board
        t = Buttons[a]["text"]
        if t != " " and t == Buttons[b]["text"] == Buttons[c]["text"]:
            messagebox.showinfo("Tic-Tac-Toe", t + " Wins!")
            if messagebox.askquestion("Tic-Tac-Toe", "Want to Play Another Round?") == "yes":
                reset_board()
            else:
                root.destroy()
            return
    if count == 9:                     # board full, no winner
        messagebox.showinfo("Tic-Tac-Toe", "No one wins")
        reset_board()

def clicked(b):
    global count, turn
    if b["text"] == " ":
        b["text"] = Players[0] if turn == "1st" else Players[1]
        turn = "2nd" if turn == "1st" else "1st"
        count += 1
        checkifwon()

for i in range(9):
    btn = tk.Button(root, text=" ", width=6, height=3)
    btn.config(command=lambda b=btn: clicked(b))
    btn.grid(row=i // 3, column=i % 3)
    Buttons.append(btn)

root.mainloop()
CONCLUSION
Given a connected and undirected graph, a spanning tree of that graph is a subgraph that is a tree and
connects all the vertices together. A single graph can have many different spanning trees. A minimum
spanning tree (MST) or minimum weight spanning tree for a weighted, connected, and undirected graph
is a spanning tree with a weight less than or equal to the weight of every other spanning tree. The weight
of a spanning tree is the sum of the weights given to each edge of the spanning tree.
Algorithms are a vast topic, and the subject is all about making programs more efficient. Algorithms are
what make our experience with software smoother. Take the Google search engine, for example: how fast
it provides the best search results within a few seconds. That is the power of algorithms, and that is why
many companies ask algorithm-related questions during interviews.
Through this course I have learnt a vast number of interesting topics like Trees, Graphs, Linked Lists and
many more, along with their implementation in practical problems and their underlying concepts.
While working in the IT sector we need to solve problems and write programs, often tons of code, and
writing a program requires designing algorithms; many algorithms combine to make a program. Now,
algorithms are written in some language, but they are not dependent on it: one needs to make a plan and
design the algorithm first, and then write it in any language, whether it is C++, Java, C, or any other
programming language. An algorithm is built on data structures, their implementation, and their
behaviour. So, basically, one needs a good grip on DSA to work in the programming sector.
Reason for choosing this technology
With advancement and innovation in technology, programming is becoming a highly in-demand skill for
Software Developers. Everything you see around you, from Smart TVs, ACs, and lights to traffic signals, uses
some kind of programming to execute user commands.
Data Structures and Algorithms are the identity of a good Software Developer. Interviews for technical
roles at tech giants like Google, Facebook, Amazon, and Flipkart are focused more on measuring the
candidates' knowledge of Data Structures and Algorithms. The main reason behind this is that Data
Structures and Algorithms improve the problem-solving ability of a candidate to a great extent.
1. This course has video lectures on all the topics, from which one can learn easily. I prefer learning from videos
rather than books and notes. Books, notes, and theses have their own significance, but video lectures or
face-to-face lectures make things faster to understand, as we are involved practically.
2. It has 200+ algorithmic coding problems with video explained solutions.
3. It has track based learning and weekly assessment to test my skills.
4. It was a great opportunity for me to invest my time in learning instead of wasting it here and there during
my summer break in this Covid-19 pandemic.
5. This was a lifetime-accessible course, which I can use to learn from even after my training, whenever I want
to revise.
Learning Outcomes
Programming is all about data structures and algorithms. Data structures are used to hold data while
algorithms are used to solve the problem using that data.
A Data Structures and Algorithms (DSA) course goes through solutions to standard problems in detail and
gives you insight into how efficient each of them is. It also teaches you the science of evaluating the
efficiency of an algorithm, which enables you to choose the best among various options.
For example, suppose you want to search for your roll number in a 30,000-page document; you have choices
like Linear Search, Binary Search, etc. Binary Search will be the more efficient way to search for something
in a huge amount of data. So, if you know DSA, you can solve any problem efficiently.
Time is precious
Memory is expensive
BIBLIOGRAPHY
GeeksforGeeks website
YouTube