SYBSC Computer Science SEM 4 Fundamental of Algorithms
(Computer Science)
SEMESTER - IV (CBCS)
FUNDAMENTALS OF
ALGORITHMS
Prof.(Dr.) D. T. Shirke
Offg. Vice-Chancellor,
University of Mumbai,
1. Introduction to Algorithm 01
2. Master Theorem 14
3. Tree Algorithms 26
4. Graph Algorithms 87
5. Selection Algorithms 159
6. Algorithm and Design Techniques 185
7. Greedy Algorithms 190
8. Divide and Conquer Algorithms 201
9. Dynamic Programming 212
SEMESTER IV
THEORY
Unit II 15L
Tree algorithms: What is a Tree? Glossary, Binary Trees, Types of Binary Trees, Properties of Binary Trees, Binary Tree Traversals, Generic Trees (N-ary Trees), Threaded Binary Tree Traversals, Expression Trees, Binary Search Trees (BSTs), Balanced Binary Search Trees, AVL (Adelson-Velskii and Landis) Trees
Graph Algorithms: Introduction, Glossary, Applications of Graphs, Graph Representation, Graph Traversals, Topological Sort, Shortest Path Algorithms, Minimal Spanning Tree
Unit III 15L
Selection Algorithms: What are Selection Algorithms? Selection by Sorting, Partition-based Selection Algorithm, Linear Selection Algorithm - Median of Medians Algorithm, Finding the K Smallest Elements in Sorted Order
Algorithms Design Techniques: Introduction, Classification, Classification by Implementation Method, Classification by Design Method
Greedy Algorithms: Introduction, Greedy Strategy, Elements of Greedy Algorithms, Advantages and Disadvantages of Greedy Method, Greedy Applications, Understanding Greedy Technique
Divide and Conquer Algorithms: Introduction, What is Divide and Conquer Strategy? Divide and Conquer Visualization, Understanding Divide and Conquer, Advantages of Divide and Conquer, Disadvantages of Divide and Conquer, Master Theorem, Divide and Conquer Applications
Dynamic Programming: Introduction, What is Dynamic Programming Strategy? Properties of Dynamic Programming Strategy, Problems which can be solved using Dynamic Programming, Dynamic Programming Approaches, Examples of Dynamic Programming Algorithms, Understanding Dynamic Programming, Longest Common Subsequence
Textbook(s):
1. Data Structure and Algorithmic Thinking with Python, Narasimha Karumanchi, CareerMonk Publications, 2016
2. Introduction to Algorithms, Thomas H. Cormen, PHI
Additional References(s):
1. Data Structures and Algorithms in Python, Michael T. Goodrich, Roberto Tamassia, Michael H. Goldwasser, Wiley, 2016
2. Fundamentals of Computer Algorithms, Ellis Horowitz, Sartaj Sahni and Sanguthevar Rajasekaran, Universities Press
1
INTRODUCTION TO ALGORITHM
Unit Structure:
1.1 Objectives
1.2 Introduction to Algorithm
1.3 Why to Analyse Algorithms
1.4 Running time analysis
1.5 How to Compare Algorithms
1.6 Rate of Growth
1.7 Commonly Used Rates of Growth
1.8 Types of Analysis
1.9 Asymptotic Notation
1. Big-O Notation
2. Omega-Ω Notation
3. Theta-Θ Notation
1.10 Properties of Notations
1.11 Summary
1.12 Questions
1.13 Reference for further reading
1.1 OBJECTIVES
After going through this unit, you will be able to:
Understand the fundamental concepts of algorithm design and why
algorithm analysis is necessary. How to Compare Algorithms, Rates of
Growth and Their Varieties, Analysis Types, and Asymptotic Notation
statement. In order to achieve that input/output connection, the algorithm specifies a particular computing method. For instance, it may be necessary to organise a list of integers in non-decreasing order. Because this challenge regularly occurs in real-world settings, numerous common design strategies and analytical tools may be applied to it.
Characteristics of an algorithm:
1. Input: External input should be given in amounts of zero or more.
2. Output: Must generate at least one outcome.
3. Clearness: The actions to complete the task should be explicit and clear.
4. Finiteness: Must come to an end after a set number of steps.
5. Effectiveness: Must yield the intended outcome.
Example 1
Algorithm for adding two integers entered by the user.
Step 1: Start
Step 2: Declare three variables n1, n2 and res.
Step 3: Read the value of variables n1 and n2.
Step 4: Add the value of variables n1 and n2 and assign the result to res.
res=n1+n2
Step 5: Display res
Step 6: Stop
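The steps above translate directly into a short Python sketch (the function name is illustrative, not part of the algorithm):

```python
# Step-by-step translation of the addition algorithm above.
def add_two_integers(n1: int, n2: int) -> int:
    res = n1 + n2   # Step 4: add and assign the result to res
    return res      # Step 5: display/return res

print(add_two_integers(5, 7))  # → 12
```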
Example 2
Algorithm for subtracting two integers entered by the user.
Step 1: Start
Step 2: Declare three variables n1, n2 and res.
Step 3: Read the value of variables n1 and n2.
Step 4: Subtract the value of variables n1 and n2 and assign the result to
res.
res=n1-n2
Step 5: Display res
Step 6: Stop
Example 3
Algorithm for calculating the factorial of an integer entered by the user.
Step 1: Start
Step 2: Declare two variables n and fact.
Step 3: Read the value of variable n.
Step 4: Variable fact is set as 1
Step 5: fact = fact * n
Step 6: Decrease n
Step 7: Verify if n is equal to 0
Step 8: If n is equal to zero, goto step 10 (break out of loop)
Step 9: Else goto step 5
Step 10: Display fact
Step 11: Stop
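Steps 4 to 9 above form a loop; a minimal Python sketch of the same procedure (illustrative name):

```python
# Iterative factorial following the steps of the algorithm above.
def factorial(n: int) -> int:
    fact = 1            # Step 4: fact is set to 1
    while n > 0:        # Steps 5-9: multiply, decrease n, check for 0
        fact = fact * n
        n = n - 1
    return fact         # Step 10: display/return fact

print(factorial(5))  # → 120
```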
Example 4
Algorithm for calculating the square of a number entered by the user.
Step 1: Start
Step 2: Declare two variables n and res.
Step 3: Read the value of variable n.
Step 4: res=n*n
Step 5: Display res
Step 6: Stop
1.3 WHY ANALYSE ALGORITHMS
An essential component of computational complexity theory is algorithm
analysis, which offers a theoretical estimate of the resources needed by an
algorithm to solve a particular computing issue. Calculating how much
time and space are needed to execute an algorithm is called algorithm
analysis. As a result, the analysis can only be considered approximate. In
addition, by examining many algorithms, we may contrast them and
choose the most effective one for our needs. The technique of determining
an algorithm's computational complexity is known as algorithm analysis.
The time, space, and any other resources required to execute (run) the
algorithm are all referred to as the method's computational complexity.
Comparing several algorithms that are applied to the same issue is the aim
of algorithm analysis. This is performed to measure which approach uses
less memory and resolves a certain problem more rapidly.
Algorithm Analysis Types:
1. Best case
2. Worst case
3. Average case
Best case: Specify the input for which the algorithm requires the minimum amount of time; this calculates an algorithm's lower bound. The lower bound on the algorithm's running time for any given input is provided by the best case analysis of algorithms. Simply stated, it says that the program will require at least (more than or equal to) that amount of time to do its task.
Example: The best case occurs in a linear search when the search key is present at the first position of the data.
Worst Case: Specify the input for which the algorithm takes the maximum amount of time. Calculate an algorithm's upper bound as the worst case scenario. The
upper bound on the algorithm's running time for any given input is
provided by the worst-case analysis of algorithms. In other words, it says
that any programme will run in a maximum amount of time (less than or
equal to).
Example: The worst case occurs in a linear search when the search key is not present in the data at all.
Average case: Consider all random inputs and figure out how long it takes to process each one; the sum is then divided by the total number of inputs. The average case analysis of algorithms, as the name implies, adds up the running times on all potential inputs and then takes the average. The resulting time lies between the algorithm's lower and upper bounds; in other words, it gives us the algorithm's typical running time.
Example:
Find the ace of spades from a deck of 52 cards.
One card at a time is flipped until an ace of spades is revealed.
Best case scenario: Everything goes as smoothly as it possibly can, we
complete the task speedily, and we call it a day.
You turned over an ace of spades as your opening card.
Worst case scenario: Everything is getting completely out of control, so
you must put in the most effort to complete your mission.
The final card you flipped was an ace of spades.
Average case scenario: Things go neither very poorly nor really well.
The ace of spades is neither the first nor the last card that you flip.
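The card example maps directly onto linear search; a minimal sketch (illustrative names) in which the number of comparisons shows the three cases:

```python
# Linear search: the number of comparisons illustrates the three cases.
def linear_search(data, target):
    for i, value in enumerate(data):
        if value == target:
            return i          # found after i + 1 comparisons
    return -1                 # target absent: all n elements compared

deck = list(range(52))
# Best case: target at the first position -> 1 comparison.
assert linear_search(deck, 0) == 0
# Worst case: target at the last position, or absent -> 52 comparisons.
assert linear_search(deck, 51) == 51
assert linear_search(deck, 99) == -1
```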
Polynomial O(n^c): Running time increases faster than the previous rates based on n.
Exponential O(c^n): Running time becomes even faster than the polynomial approach based on n.
Factorial O(n!): Running time increases the fastest and quickly becomes unsuitable for even tiny values of n.
1.6 RATE OF GROWTH
An algorithm's growth rate is the rate at which the cost of the algorithm
increases as the size of its input increases. The term "Rate of Growth"
describes how quickly running time grows in response to input.
Let's say you visited a store to purchase a car and a bicycle. If a friend sees you there and inquires about your purchase, you would typically reply that you are purchasing a car, because the price of the car is excessively high compared to the price of the bicycle (we approximate the combined cost to the cost of the car).
Overall Cost = Car Cost + Bicycle Cost
Overall Cost ≈ Car Cost (approximation)
For the abovementioned example, we can describe the cost of the car and the cost of the bicycle in terms of functions, ignoring the low-order terms that are generally irrelevant (for large values of input size, n).
As an example, in the case below, n^3, 2n^2 and 10n are the individual costs of some function, and we approximate the total to n^3, since n^3 has the highest rate of growth.
n^3 + 2n^2 + 10n ≈ n^3
Understanding growth rates is the key to algorithm analysis. How much
more resources will my algorithm need as the volume of data increases, in
other words? Usually, we use a function to express the rate of resource
expansion of a piece of code. This section will look at graphs for various growth rates, ranging from the most efficient to the least efficient, to help comprehend the implications.
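The approximation n^3 + 2n^2 + 10n ≈ n^3 can be checked numerically; a small sketch (the helper name is illustrative):

```python
# The lower-order terms of n^3 + 2n^2 + 10n become irrelevant as n grows.
def cost(n):
    return n**3 + 2 * n**2 + 10 * n

for n in [10, 100, 1000]:
    ratio = cost(n) / n**3
    print(n, round(ratio, 4))
# The ratio approaches 1, so cost(n) ≈ n^3 for large n.
```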
Log Linear
A line with a small curve represents a log linear growth rate. Lower numbers have a more prominent curve than higher ones.
8
Introduction to
Algorithm
9
Examples
Sequential Statements
If our statements contain only fundamental operations like comparisons, assignments, and variable reads, we can assume they each take the same constant amount of time, O(1).
Example where we calculate the sum of the squares of two values:
n1 = a * a
n2 = b * b
res = n1 + n2
Constant time, O(1), is spent on each line.
fastest possible runtime for an algorithm. The best-case situation is depicted in this asymptotic notation, which is the opposite of Big-O notation. The lower bound of an algorithm's execution time is presented
notation. The lower bound of an algorithm's execution time is presented
formally in this manner. This indicates that this is the shortest amount of
time needed to execute an algorithm. It represents the speed at which an
algorithm can operate.
Assume that the most significant term in the function f(n), g(n), represents
the time complexity of an algorithm.
If f(n) >= C g(n) for all n >= n0, C > 0 and n0 >= 1.
Then we can represent f(n) as Ω(g(n)).
f(n) = Ω(g(n))
Example
The least amount of time needed to sort 1000 numbers might be "10 seconds". With omega notation this means these 1000 numbers cannot be sorted in less than 10 seconds: 9 or 8 seconds are not possible, but 11 or 12 seconds are OK.
2. Big-O Notation:
The formal notation for expressing an algorithm's running time upper bound is Big-O. The worst-case time complexity or the longest time
an algorithm might possibly take to finish is measured. By indicating the
function's order of increase, this asymptotic notation measures an
algorithm's efficiency. It gives a function an upper bound, assuring that it
will never grow faster than that of the upper bound.
It measures the algorithm's worst-case complexity. Calculates the
execution time that took the most time. As a result, it provides a formal
means of expressing the maximum allowable execution time for an
algorithm.
Assume that the most significant term in the function f(n), g(n), represents
the time complexity of an algorithm.
If f(n) <= C g(n) for all n >= n0, C > 0 and n0>= 1.
Then we can represent f(n) as O(g(n)).
f(n) = O(g(n))
Example:
The maximum amount of time needed to sort 1000 numbers might be "50 seconds". With Big-O notation this means these 1000 numbers can be sorted in at most 50 seconds: 48 or 46 seconds are acceptable, but 51 or 52 seconds are not.
3. Theta (Θ) Notation:
In order to express precise asymptotic behaviour, the theta notation bounds a function from above and below. The execution time of a given
bounds a functions from above and below. The execution time of a given
algorithm will, "on average," be the same for any given input. The typical
case scenario is depicted in this asymptotic notation. When solving a real-
world problem, an algorithm cannot perform worst or best. Theta notation
(Θ), which denotes the average scenario, shows how the temporal
complexity varies between the best case and worst situation. When the
value of the worst-case and best-case scenarios are equal, theta notation is
commonly utilized. It represents both the top and lower bounds of an
algorithm's running time.
Assume that the most significant term in the function f(n), g(n), represents
the time complexity of an algorithm.
If C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, C1 > 0, C2 > 0 and n0 >= 1.
Then we can represent f(n) as Θ(g(n)).
f(n) = Θ(g(n))
Example:
Let's use the example of sorting 1000 numbers once more. The algorithm
takes 10 seconds the first time, 11 seconds the second time, and 7 seconds
the third time.
1.11 SUMMARY
An algorithm is a set of clear, step-by-step directions for solving a
particular issue.
We may choose the algorithm that uses the least amount of time and
space by performing an algorithm analysis.
The objective of an algorithm analysis is to evaluate different
algorithms, mostly in terms of running time but also in terms of other
elements.
The Best Case, Worst Case, and Average Case are used to evaluate an
algorithm's complexity.
Big-O notation gives an upper bounding function.
Omega-Ω notation gives a lower bounding function.
Theta-Θ notation gives an order (tight bound) function.
1.12 QUESTIONS
1. What are the essential properties of an algorithm? Explain.
2. Define algorithm. State its essential characteristics.
3. Write an algorithm for adding two integers entered by the user.
4. Write an algorithm for calculating the factorial of an integer entered by
the user.
5. Explain why analysis of algorithms is important.
6. Explain the running time of an algorithm.
7. Explain how to compare algorithms. Give example.
8. Explain the terms in Rate of growth.
9. Write a note on Upper and Lower bound of an algorithm.
10. Briefly describe Big-O and Omega-Ω in algorithm analysis.
2
MASTER THEOREM
Unit Structure :
2.1 Objectives
2.2 Commonly used Logarithms and Summations
2.2.1 Guidelines for Asymptotic Analysis
2.3 Performance characteristics of algorithms
2.4 Master Theorem for Divide and Conquer
2.5 Divide and Conquer Master Theorem: Problems & Solutions
2.6 Master Theorem for Subtract and Conquer Recurrences
2.7 Method of Guessing and Confirming
2.8 Summary
2.9 Questions
2.10 Reference for further reading
2.1 OBJECTIVES
After going through this unit, you will be able to understand:
Basic algorithm properties, performance characteristics of algorithms, the Master Theorem for Divide and Conquer with problems & solutions, the Master Theorem for Subtract and Conquer Recurrences, and the Method of Guessing and Confirming.
2.2.1 Guidelines for Asymptotic Analysis
There are some rules to help us determine the running time of an algorithm, described below with examples using Python code.
1. Loops: The running time of a loop is, at most, the running time of the
statements inside the loop (including tests) multiplied by the number
of iterations.
2. Nested loops: Analyse from the inside out. Total running time is the
product of the sizes of all the loops.
Source:
https://d2vlcm61l7u1fs.cloudfront.net/media%2F53d%2F53d266b3-56c1-
4cea-99a5-4eaf6ecae629%2FphpjB5pBJ.png
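The example code referenced above is reproduced from an image; as a stand-in, here is a minimal Python sketch of the two rules (function names are illustrative):

```python
# Rule 1 (loops): the body runs n times -> O(n).
def sum_to(n):
    total = 0
    for i in range(n):   # loop executes n times
        total += i       # O(1) work per iteration
    return total

# Rule 2 (nested loops): n iterations of an n-iteration loop -> O(n * n) = O(n^2).
def count_pairs(n):
    count = 0
    for i in range(n):          # outer loop: n times
        for j in range(n):      # inner loop: n times per outer iteration
            count += 1          # total executions: n * n
    return count

assert count_pairs(10) == 100   # 10 * 10 basic operations
```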
algorithm to solve a problem from a variety of algorithms with the use of performance analysis.
When there are several potential solutions to a problem, we evaluate them
and choose the one that best meets our needs. The official explanation is
as follows:
Making an evaluation of an algorithm is the process of measuring its
performance.
Example 2: Use Master's theorem to solve the recurrence relation shown below-
T(n) = 8T(n/4) − n^2 log n
Solution:
T(n) = aT(n/b) + f(n)
Here
The provided recurrence relation does not match Master's theorem's
general form.
Therefore, the Master's theorem cannot be used to solve it.
Example 3: Use Master's theorem to solve the recurrence relation shown
below-
T(n)=2T(n/2)+n
Solution:
T(n) = aT(n/b) + f(n)
Here
a = 2
b = 2
f(n) = n
log_b a = log_2 2 = 1
This means f(n) = Θ(n^(log_b a)).
Here, Case 2 suggests-
T(n) = Θ(n^(log_b a) · log n) = Θ(n log n)
Example 4: Use Master's theorem to solve the recurrence relation shown
below-
T(n) = 3T (n/2)+n3
Solution:
T(n) = aT(n/b) + f(n)
Here
a = 3
b = 2
f(n) = n^3
log_b a = log_2 3 ≈ 1.58
This means f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0.
Here, Case 3 suggests-
T(n) = Θ(f(n)) = Θ(n^3)
Example 5: Use Master's theorem to solve the recurrence relation shown
below-
T(n) = 4T (n/2)+n²
Solution:
T(n) = aT(n/b) + f(n)
Here
a = 4
b = 2
f(n) = n^2
log_b a = log_2 4 = 2
This means f(n) = Θ(n^(log_b a)).
Here, Case 2 suggests-
T(n) = Θ(n^(log_b a) · log n) = Θ(n^2 log n)
Example 6: Use Master's theorem to solve the recurrence relation shown
below-
T(n) = 2^n T(n/2) + n^n
Solution:
T(n) = aT(n/b) + f(n)
Here a = 2^n, which is not a constant, so the provided recurrence relation does not match Master's theorem's general form.
Therefore, the Master's theorem cannot be used to solve it.
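The case analysis used in the examples above can be sketched as a small helper, restricted to the common form f(n) = n^k (`master_case` is an illustrative name, not a library function; the full theorem covers more general f(n)):

```python
import math

# Simplified Master Theorem checker for T(n) = a*T(n/b) + n^k.
# Compares log_b(a) with k to pick the case.
def master_case(a, b, k):
    p = math.log(a, b)                    # log_b(a)
    if math.isclose(p, k):
        return f"Theta(n^{k} * log n)"    # Case 2: f(n) = Theta(n^(log_b a))
    elif p > k:
        return f"Theta(n^{round(p, 2)})"  # Case 1: recursion dominates
    else:
        return f"Theta(n^{k})"            # Case 3 (assuming regularity holds)

print(master_case(2, 2, 1))  # T(n) = 2T(n/2) + n    -> Theta(n^1 * log n)
print(master_case(3, 2, 3))  # T(n) = 3T(n/2) + n^3  -> Theta(n^3)
print(master_case(4, 2, 2))  # T(n) = 4T(n/2) + n^2  -> Theta(n^2 * log n)
```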
Source: https://dotnettutorials.net/lesson/master-theorem/
2.6 MASTER THEOREM FOR SUBTRACT AND CONQUER RECURRENCES
The Master theorem is used to determine a Big-O upper bound on functions which possess recurrence, i.e. which can be broken into subproblems.
Let T(n) be a function defined on positive n as shown below:
T(n) = a·T(n − b) + f(n), if n > 1
T(n) = c, if n ≤ 1
for some constants c, a > 0, b > 0, k >= 0 and function f(n). If f(n) is O(n^k), then
1. If a < 1 then T(n) = O(n^k)
2. If a = 1 then T(n) = O(n^(k+1))
3. If a > 1 then T(n) = O(n^k · a^(n/b))
Example 1: Use Master's theorem for Subtract and Conquer Recurrences to solve the below-
T(n) = 3T(n − 1) + O(1) when n > 0
= 1 otherwise
Solution:
Here
a = 3
b = 1
k = 0
Hence T(n) = O(n^0 · 3^(n/1))
Since a > 1, Case 3 applies.
Therefore T(n) = O(3^n).
Example 2: Use Master's theorem for Subtract and Conquer Recurrences to solve the below-
T(n) = 5T(n − 3) + O(n^2) when n > 0
= 1 otherwise
Solution:
Here
a = 5
b = 3
k = 2
Hence T(n) = O(n^2 · 5^(n/3))
Since a > 1, Case 3 applies.
Therefore T(n) = O(n^2 · 5^(n/3)).
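The first subtract-and-conquer result (a = 3, b = 1, giving O(3^n)) can be checked numerically, assuming the concrete recurrence T(n) = 3T(n − 1) + 1 with T(0) = 1 (the constant-work version of O(1)):

```python
# Numerically checking T(n) = 3T(n-1) + 1, T(0) = 1.
# The closed form is (3^(n+1) - 1) / 2, which grows like 3^n,
# matching the bound T(n) = O(3^n).
def T(n):
    if n <= 0:
        return 1
    return 3 * T(n - 1) + 1

for n in [5, 10, 15]:
    print(n, T(n) / 3**n)
# The ratio settles near 1.5, i.e. T(n) = Theta(3^n).
```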
Only 1 ≤ (1/2)·c·log N is assumed in the final inequality. This is true if N is large enough, for any constant c, regardless of how small it is.
We can see from the previous proof that we were right about the upper bound. Let's now try to establish the lower bound for Θ(N·log N):
T(N) = √N·T(√N) + N
T(N) ≥ √N·k·√N·log √N + N
T(N) = N·k·log √N + N
T(N) = N·(1/2)·k·log N + N
T(N) ≥ N·k·log N
The final inequality assumes that 1 ≥ (1/2)·k·log N. If N is large enough, this is false for any constant k.
We can see from the previous proof that our assumption regarding the lower bound is incorrect.
It is clear from the reasoning above that Θ(N·log N) is too large. How about Θ(N) instead? The direct proof of the lower bound is simple:
T(N) = √N·T(√N) + N ≥ N
Now, let us try to prove the upper bound for Θ(N):
T(N) = √N·T(√N) + N ≤ √N·c·√N + N = c·N + N = (c + 1)·N ≰ c·N
It is clear from the previous induction that Θ(N) is too small and Θ(N·log N) is too large. So what do we need that is greater than N and less than N·log N? Consider N·√(log N).
Proving the upper bound for N·√(log N):
T(N) = √N·T(√N) + N
T(N) ≤ √N·c·√N·√(log √N) + N
T(N) = N·(1/√2)·c·√(log N) + N
T(N) ≤ N·c·√(log N)
Proving the lower bound for N·√(log N):
T(N) = √N·T(√N) + N
T(N) ≥ √N·k·√N·√(log √N) + N
T(N) = N·(1/√2)·k·√(log N) + N
T(N) ≱ N·k·√(log N)
The last step doesn't work: the inequality would require 1 ≥ (1 − 1/√2)·k·√(log N), which fails for large N. So Θ(N·√(log N)) doesn't work. What else is between N and N·log N? How about N·log(log N)?
Proving the upper bound for N·log(log N):
T(N) = √N·T(√N) + N
T(N) ≤ √N·c·√N·log(log √N) + N
T(N) = N·c·log(log N) − c·N + N
T(N) ≤ N·c·log(log N), if c ≥ 1
Proving the lower bound for N·log(log N):
T(N) = √N·T(√N) + N
T(N) ≥ √N·k·√N·log(log √N) + N
T(N) = N·k·log(log N) − k·N + N
T(N) ≥ N·k·log(log N), if k ≤ 1
From the above proofs, we can see that T(N) ≤ N·c·log(log N) if c ≥ 1, and T(N) ≥ N·k·log(log N) if k ≤ 1. Therefore T(N) = Θ(N·log(log N)).
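The conclusion that T(N) = √N·T(√N) + N grows like N·log(log N) can also be checked numerically (a sketch assuming the base case T(N) = N for N ≤ 2, which the original leaves unspecified):

```python
import math

# Numerically checking T(N) = sqrt(N) * T(sqrt(N)) + N against N * log2(log2(N)).
def T(N):
    if N <= 2:
        return N          # assumed base case
    r = math.sqrt(N)
    return r * T(r) + N

for e in [16, 32, 64]:    # N = 2^16, 2^32, 2^64
    N = 2.0 ** e
    ratio = T(N) / (N * math.log2(math.log2(N)))
    print(e, round(ratio, 3))
# The ratios stay bounded, consistent with T(N) = Theta(N * log log N).
```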
2.8 SUMMARY
There are numerous uses for the common logarithm in science and
engineering
When analysing an algorithm, we solely take into account the time and
space needed for that specific method, ignoring all other factors.
The basic idea behind this method is to guess the answer, and then
prove it correct by induction. This method can be used to solve any
recurrence.
2.9 QUESTIONS

2.10 REFERENCE FOR FURTHER READING
https://bournetocode.com/projects/AQA_A_Theory/pages/4-3-Algorithms.html
3
TREE ALGORITHMS
Unit Structure
3.0 Objectives
3.1 Introduction
3.2 An Overview
3.2.1 What is a Tree?
3.2.2 Tree Terminology
3.2.3 Characteristics of Trees
3.2.4 Advantages of BST
3.2.5 Types of Trees
3.3 Arithmetic Tree
3.3.1 Expression Binary Tree
3.3.2 Arithmetic vs Expression Binary Tree
3.3.3 Design of a Tree
3.3.4 Representation of a Tree
3.3.5 Representation of a Linked List
3.3.6 Singly vs Doubly Linked List
3.4 Balanced Tree
3.4.1 From the Programmer's Point of View
3.4.2 From the User's Point of View
3.5 What is a Binary Search Tree?
3.5.1 What is a BST?
3.5.2 What is a Balanced Binary Tree?
3.5.3 What is an AVL Tree?
3.6 Let us Sum Up
3.7 List of References
3.8 Unit End Exercise
3.0 OBJECTIVES
3.1 INTRODUCTION
A tree is a non-linear abstract data type with a hierarchy-based structure. It
consists of nodes (where the data is stored) that are connected via links.
The tree data structure stems from a single node called a root node and has
subtrees connected to the root.
Important Terms
Following are the important terms with respect to tree.
Path − Path refers to the sequence of nodes along the edges of a tree.
Root − The node at the top of the tree is called root. There is only one root per tree and one path from the root node to any node.
Parent − Any node except the root node has one edge upward to a node called parent.
Child − The node below a given node connected by its edge
downward is called its child node.
Leaf − The node which does not have any child node is called the leaf
node.
Subtree − Subtree represents the descendants of a node.
Visiting − Visiting refers to checking the value of a node when control
is on the node.
Traversing − Traversing means passing through nodes in a specific
order.
Levels − Level of a node represents the generation of a node. If the
root node is at level 0, then its next child node is at level 1, its
grandchild is at level 2, and so on.
Keys − Key represents a value of a node based on which a search
operation is to be carried out for a node.
Types of Trees
There are three types of trees −
General Trees
Binary Trees
Binary Search Trees
General Trees
General trees are unordered tree data structures where the root node can have a minimum of 0 and a maximum of 'n' subtrees.
The General trees have no constraint placed on their hierarchy. The root
node thus acts like the superset of all the other subtrees.
Binary Trees
Binary Trees are general trees in which the root node can only hold up to
maximum 2 subtrees: left subtree and right subtree. Based on the number
of children, binary trees are divided into three types.
Advantages of BST
Binary Search Trees are more efficient than Binary Trees since time
complexity for performing various operations reduces.
Since the order of keys is based on just the parent node, searching
operation becomes simpler.
The alignment of BST also favors Range Queries, which are executed
to find values existing between two keys. This helps in the Database
Management System.
Disadvantages of BST
The main disadvantage of Binary Search Trees is that if all elements in
nodes are either greater than or lesser than the root node, the tree
becomes skewed. Simply put, the tree becomes slanted to one side
completely.
This skewness will make the tree a linked list rather than a BST, since the
worst case time complexity for searching operation becomes O(n).
To overcome this issue of skewness in the Binary Search Trees, the
concept of Balanced Binary Search Trees was introduced.
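The efficiency and skewness points above can be seen in a minimal BST sketch (illustrative names; search time follows the tree height, so a balanced tree gives O(log n) while a skewed one degrades to O(n)):

```python
# A minimal Binary Search Tree: left subtree < key <= right subtree.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
assert search(root, 6) and not search(root, 7)

# Skewed case: inserting sorted keys produces a linked-list-shaped tree,
# so search degrades to O(n).
skewed = None
for k in [1, 2, 3, 4, 5]:
    skewed = insert(skewed, k)   # every node gets only a right child
```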
There are various types of self-balancing binary search trees −
AVL Trees
Red Black Trees
B Trees
B+ Trees
Splay Trees
Priority Search Trees
WHAT IS A TREE?
Trees: Non-Linear data structure
A data structure is said to be linear if its elements form a sequence or a linear list. The linear data structures that we have studied previously, such as arrays, stacks, queues and linked lists, organize data in linear order. A data structure is said to be non-linear if its elements form a hierarchical classification where data items appear at various levels.
Trees and Graphs are widely used non-linear data structures. Tree and
graph structures represent hierarchical relationship between individual
data elements. Graphs are nothing but trees with certain restrictions
removed.
Trees represent a special case of more general structures known as graphs. In a graph, there are no restrictions on the number of links that can enter or leave a node, and cycles may be present in the graph. The figure 5.3.1 shows a tree and a non-tree.
Advantages of trees
Trees are so useful and frequently used, because they have some very
serious advantages:
Trees reflect structural relationships in the data
Trees are used to represent hierarchies
Trees provide an efficient insertion and searching
Trees are very flexible, allowing subtrees to be moved around with minimum effort
3.2 AN OVERVIEW
In a Tree, Every individual element is called as Node. Node in a tree data
structure, stores the actual data of that particular element and link to next
element in hierarchical structure. Example
1. Root
In a tree data structure, the first node is called as Root Node. Every tree
must have root node. We can say that root node is the origin of tree data
structure. In any tree, there must be only one root node. We never have
multiple root nodes in a tree. In above tree, A is a Root node
2. Edge
In a tree data structure, the connecting link between any two nodes is
called as EDGE. In a tree with 'N' number of nodes there will be a
maximum of 'N-1' number of edges.
3. Parent
In a tree data structure, the node which is predecessor of any node is called
as PARENT NODE. In simple words, the node which has a branch from it to
any other node is called as parent node. Parent node can also be defined as
"The node which has child / children". e.g., Parent (A,B,C,D).
4. Child
In a tree data structure, the node which is descendant of any node is called
as CHILD Node. In simple words, the node which has a link from its
parent node is called as child node. In a tree, any parent node can have any
number of child nodes. In a tree, all the nodes except root are child nodes.
e.g., Children of D are (H, I,J).
5. Siblings
In a tree data structure, nodes which belong to same Parent are called as
SIBLINGS. In simple words, the nodes with same parent are called as
Sibling nodes. Ex: Siblings (B,C, D)
6. Leaf
In a tree data structure, the node which does not have a child (or) a node with degree zero is called a LEAF Node. In simple words, a leaf is a node with no child. In a tree data structure, the leaf nodes are also called External Nodes. An external node is also a node with no child. In a tree, a leaf node is also called a 'Terminal' node. Ex: (K, L, F, G, M, I, J)
7. Internal Nodes
In a tree data structure, the node which has at least one child is called an INTERNAL Node. In simple words, an internal node is a node with at least one child.
In a tree data structure, nodes other than leaf nodes are called as Internal
Nodes. The root node is also said to be Internal Node if the tree has more
than one node. Internal nodes are also called as 'Non-Terminal' nodes.
Ex:B,C,D,E,H
8. Degree
In a tree data structure, the total number of children of a node (or)number
of subtrees of a node is called as DEGREE of that Node. In simple words,
the Degree of a node is total number of children it has. The highest degree
of a node among all the nodes in a tree is called as 'Degree of Tree'
9. Level
In a tree data structure, the root node is said to be at Level 0 and the
children of root node are at Level 1 and the children of the nodes which
are at Level 1 will be at Level 2 and so on... In simple words, in a tree each step from top to bottom is called a Level and the Level count starts with '0' and is incremented by one at each level (step). Some authors start the root level with 1.
10. Height
In a tree data structure, the total number of edges from leaf node to a
particular node in the longest path is called as HEIGHT of that Node. In a
tree, height of the root node is said to be height of thetree. In a tree, height
of all leaf nodes is '0'.
11. Depth
In a tree data structure, the total number of edges from root node to a
particular node is called as DEPTH of that Node. In a tree, the total
number of edges from root node to a leaf node in the
longest path is said to be Depth of the tree. In simple words, the highest
depth of any leaf node in a tree is said to be depth of that tree. In a tree,
depth of the root node is '0'.
12. Path
In a tree data structure, the sequence of Nodes and Edges from one node to
another node is called the PATH between those two Nodes. The length of a Path is the total number of nodes in that path. In the below example the path A - B - E - J has length 4.
Tree Representations
A tree data structure can be represented in two methods. Those methods
are as follows...
1. List Representation
2. Left Child - Right Sibling Representation
Consider the following tree...
1. List Representation
In this representation, we use two types of nodes one for representing the
node with data and another for representing only references. We start with
a node with data from root node in the tree. Then it is linked to an internal
node through a reference node and is linked to any other node directly. This process repeats for all the nodes in the tree.
The above tree example can be represented using List representation as
follows...
2. Left Child - Right Sibling Representation
In this representation, every node's data field stores the actual value of that node. If that node has a left child, then the left reference field stores the address of that left child node; otherwise that field stores NULL. If that node has a right sibling, then the right reference field stores the address of the right sibling node; otherwise that field stores NULL.
The above tree example can be represented using Left Child - Right Sibling representation as follows...
Binary Trees
Introduction
In a normal tree, every node can have any number of children. A binary tree
is a special type of tree data structure in which every node can have a
maximum of 2 children. One is known as the left child and the other is known
as the right child.
A tree in which every node can have a maximum of two children is called
a Binary Tree.
In a binary tree, every node can have either 0 children or 1 child or 2
children but not more than 2 children. Example
There are different types of binary trees and they are...
Strictly Binary Tree
In a binary tree, every node can have a maximum of two children. But in a
strictly binary tree, every node should have exactly two children or none.
That means every internal node must have exactly two children. A strictly
binary tree can be defined as follows...
A binary tree in which every node has either two or zero children is
called a Strictly Binary Tree. A strictly binary tree is also called a
Full Binary Tree, Proper Binary Tree or 2-Tree.
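The "exactly two or zero children" condition can be checked with a short recursive function. This Python sketch uses an illustrative Node class of our own, not one from the text:

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def is_strict(root):
    # every node must have either two children or none
    if root is None:
        return True
    if (root.left is None) != (root.right is None):
        return False   # exactly one child: not a strictly binary tree
    return is_strict(root.left) and is_strict(root.right)

full = Node(1, Node(2), Node(3))               # every node has 0 or 2 children
not_full = Node(1, Node(2, Node(4)), Node(3))  # node 2 has only one child
print(is_strict(full), is_strict(not_full))    # True False
```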
Binary Tree Traversals
There are three types of binary tree traversals:
1) In-Order Traversal ( leftChild - root - rightChild ): I - D - J - B - F - A - G - K - C - H
2) Pre-Order Traversal ( root - leftChild - rightChild ): A - B - D - I - J - F - C - G - K - H
3) Post-Order Traversal ( leftChild - rightChild - root ): I - J - D - F - B - K - G - H - C - A
1. Pre - Order Traversal ( root - leftChild - rightChild )
In Pre-Order traversal, the root node is visited before left child and right
child nodes. In this traversal, the root node is visited first, then its left
child and later its right child. This pre-order traversal is applicable for
every root node of all subtrees in the tree.
In the above example of binary tree, first we visit root node 'A' then visit
its left child 'B' which is a root for D and F. So we visit B's left child 'D'
and again D is a root for I and J. So we visit D's left child 'I' which is the
leftmost child. So next we go for visiting D's right child 'J'. With this we
have completed the root, left and right parts of node D and the root and left
parts of node B. Next visit B's right child 'F'. With this we have completed
the root and left parts of node A. So we go for A's right child 'C' which is a
root node for G and H. After visiting C, we go for its left child 'G' which is a
root for node K. So next we visit the left of G, but it does not have a left child,
so we go for G's right child 'K'. With this we have completed node C's root
and left parts. Next visit C's right child 'H' which is the rightmost child
in the tree. So we stop the process. That means here we have visited
in the order of A-B-D-I-J-F-C-G-K-H using Pre-Order Traversal.
Algorithm
Until all nodes are traversed −
Step 1 − Visit root node.
Step 2 − Recursively traverse left subtree.
Step 3 − Recursively traverse right subtree.
void preorder(tree_pointer ptr)   /* recursive preorder tree traversal */
{
    if (ptr) {
        printf("%d", ptr->data);
        preorder(ptr->left_child);
        preorder(ptr->right_child);
    }
}
[Figure: arithmetic expression tree for A / B * C * D + E, with root '+']
Trace Operations of Inorder Traversal
void levelorder(tree_pointer ptr)   /* level order tree traversal */
{
    int front = 0, rear = 0;
    tree_pointer queue[MAX_QUEUE_SIZE];
    if (!ptr) return;                /* empty tree */
    addq(front, &rear, ptr);
    for (;;) {
        ptr = deleteq(&front, rear);
        if (ptr) {
            printf("%d", ptr->data);
            if (ptr->left_child)
                addq(front, &rear, ptr->left_child);
            if (ptr->right_child)
                addq(front, &rear, ptr->right_child);
        }
        else break;
    }
}
Level order traversal is implemented with a circular queue. In this order, we visit the root
first, then the root’s left child followed by the root’s right child. We continue in this manner,
visiting the nodes at each new level from the leftmost to the rightmost node.
We begin by adding the root to the queue. The function operates by deleting the node at the
front of the queue, printing out the node’s data field, and adding the node’s left and
right children to the queue. The level order traversal for the above arithmetic expression
is + * E * D / C A B.
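The same traversal can be sketched in Python with a FIFO queue. The Node class below is illustrative, and the tree built is the arithmetic expression tree discussed above:

```python
from collections import deque

class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def level_order(root):
    # visit nodes level by level, left to right, using a FIFO queue
    if root is None:
        return []
    out, q = [], deque([root])
    while q:
        node = q.popleft()
        out.append(node.data)
        if node.left:
            q.append(node.left)
        if node.right:
            q.append(node.right)
    return out

# the expression tree for A / B * C * D + E
expr = Node('+',
            Node('*', Node('*', Node('/', Node('A'), Node('B')), Node('C')),
                 Node('D')),
            Node('E'))
print(' '.join(level_order(expr)))  # + * E * D / C A B
```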
We're going to implement the tree using node objects and connecting them
through references.
Definition: A binary search tree (BST) is a binary tree. It may be empty. If
it is not empty, then all nodes satisfy the below mentioned properties −
The keys in a nonempty left subtree are smaller than the key in the root of that
subtree.
The keys in a nonempty right subtree are larger than the key in the root of that
subtree.
The left and right subtrees are also binary search trees.
In other words, for every node with a left sub-tree and right sub-tree:
left_subtree (keys) ≤ node (key) ≤ right_subtree (keys)
Algorithm
Create newnode
If root is NULL
    then create root node
    return
If root exists then
    compare the data with node.data
    while insertion position is not located
        If data is greater than node.data
            go to right subtree
        else
            go to left subtree
    endwhile
    insert newnode
end If
Implementation
The implementation of the insert function should look like this –

void insert(int data) {
    struct node *tempNode = (struct node*) malloc(sizeof(struct node));
    struct node *current;
    struct node *parent;
    tempNode->data = data;
    tempNode->leftChild = NULL;
    tempNode->rightChild = NULL;
    if (root == NULL) {            /* tree is empty: new node becomes root */
        root = tempNode;
    } else {
        current = root;
        parent = NULL;
        while (1) {
            parent = current;
            if (data < parent->data) {
                /* go to left of the tree */
                current = current->leftChild;
                /* insert to the left */
                if (current == NULL) {
                    parent->leftChild = tempNode;
                    return;
                }
            } else {
                /* go to right of the tree */
                current = current->rightChild;
                /* insert to the right */
                if (current == NULL) {
                    parent->rightChild = tempNode;
                    return;
                }
            }
        }
    }
}
Deleting a node
Remove operation on a binary search tree is more complicated than insert
and search. Basically, it can be divided into two stages: first, search for the
node to be removed; then, if it is found, remove it. Removal itself has three
cases:
1. Node to be removed is a leaf. In this case, the node is simply cut from
the tree.
2. Node to be removed has one child. In this case, the node is cut from the
tree and the algorithm links the single child (with its subtree) directly to the
parent of the removed node.
3. Node to be removed has two children. This is the most complex case.
The deleted node can be replaced by either the largest key in its left
subtree or the smallest in its right subtree; preferably, the replacement is
the one that itself has at most one child.
} else if (!(*node)->right->left) {
    /*
     * deleting a node whose right child
     * is the smallest node in the right
     * subtree for the node to be deleted.
     */
    tmpNode = *node;
    (*node)->right->left = (*node)->left;
    (*parent)->left = (*node)->right;
    free(tmpNode);
    *node = (*parent)->left;
} else {
    /*
     * Deleting a node with two children.
     * First, find the smallest node in
     * the right subtree. Replace the
     * smallest node with the node to be
     * deleted. Then, do proper connections
     * for the children of the replaced node.
     */
    tmpNode = (*node)->right;
    while (tmpNode->left) {
        tmpParent = tmpNode;
        tmpNode = tmpNode->left;
    }
    tmpParent->left = tmpNode->right;
    tmpNode->left = (*node)->left;
    tmpNode->right = (*node)->right;
    free(*node);
    *node = tmpNode;
}
} else if (data < (*node)->data) {
    /* traverse towards left subtree */
    deletion(&(*node)->left, node, data);
} else if (data > (*node)->data) {
    /* traverse towards right subtree */
    deletion(&(*node)->right, node, data);
}
}
Example
Construct a Binary Search Tree by inserting the following sequence of
numbers: 10, 12, 5, 4, 20, 8, 7, 15 and 13
Above elements are inserted into a Binary Search Tree as follows...
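The same construction can be checked with a short Python sketch; the insert function mirrors the algorithm above, and the names are illustrative. An inorder traversal of the finished tree yields the keys in sorted order, which is a quick sanity check:

```python
class Node:
    def __init__(self, data):
        self.data, self.left, self.right = data, None, None

def insert(root, data):
    # descend until a free position is located, as in the algorithm above
    if root is None:
        return Node(data)
    if data < root.data:
        root.left = insert(root.left, data)
    else:
        root.right = insert(root.right, data)
    return root

root = None
for x in [10, 12, 5, 4, 20, 8, 7, 15, 13]:
    root = insert(root, x)

def inorder(root):
    return inorder(root.left) + [root.data] + inorder(root.right) if root else []

print(inorder(root))  # [4, 5, 7, 8, 10, 12, 13, 15, 20]
```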
Example: [Figure: a generic tree in which the node with the maximum number of children has 6 children]
Generic Tree
To represent the above tree, we have to consider the worst case, that is
the node with maximum children (in above example, 6 children) and
allocate that many pointers for each node.
The node representation based on this method can be written as:
struct Node{
int data;
Node *firstchild;
Node *secondchild;
Node *thirdchild;
Node *fourthchild;
Node *fifthchild;
Node *sixthchild;
};
1. In a linked list, we cannot randomly access any child’s address, so it
will be expensive.
2. In an array, we can randomly access the address of any child, but we can
store only a fixed number of children’s addresses in it.
Better Approach:
We can use dynamic arrays for storing the addresses of the children.
We can randomly access any child’s address, and the size of the vector is
also not fixed.
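As a sketch, a generic tree node with a dynamic array of children might look like this in Python, where a Python list plays the role of the vector (the class and field names are illustrative):

```python
class TreeNode:
    def __init__(self, data):
        self.data = data
        self.children = []   # dynamic array: grows as children are added

root = TreeNode('A')
for c in ['B', 'C', 'D']:
    root.children.append(TreeNode(c))

print(len(root.children))     # 3
print(root.children[1].data)  # random access to any child: C
```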
Inorder traversal of a binary tree can be done either using recursion
or with the use of an auxiliary stack. The idea of threaded binary trees is
to make inorder traversal faster and do it without stack and without
recursion. A binary tree is made threaded by making all right child
pointers that would normally be NULL point to the inorder successor of
the node (if it exists).
A threaded binary tree is a type of binary tree data structure where the
empty left and right child pointers in a binary tree are replaced with
threads that link nodes directly to their in-order predecessor or successor,
thereby providing a way to traverse the tree without using recursion or a
stack.
Threaded binary trees can be useful when space is a concern, as they can
eliminate the need for a stack during traversal. However, they can be
more complex to implement than standard binary trees.
There are two types of threaded binary trees.
Single Threaded: Where a NULL right pointer is made to point to the
inorder successor (if the successor exists)
Double Threaded: Where both left and right NULL pointers are made
to point to inorder predecessor and inorder successor respectively. The
predecessor threads are useful for reverse inorder traversal and postorder
traversal.
The threads are also useful for fast accessing ancestors of a node.
Following diagram shows an example Single Threaded Binary Tree. The
dotted lines represent threads.
struct Node
{
    int data;
    struct Node *left, *right;
    bool rightThread;
};
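A minimal Python sketch of single-threaded inorder traversal, using an illustrative TNode class that mirrors the struct above (names are ours):

```python
class TNode:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None
        self.right_thread = False  # True: .right is a thread to the inorder successor

def leftmost(n):
    while n is not None and n.left is not None:
        n = n.left
    return n

def threaded_inorder(root):
    # no stack and no recursion: threads lead straight to each successor
    out, cur = [], leftmost(root)
    while cur is not None:
        out.append(cur.data)
        if cur.right_thread:
            cur = cur.right            # follow the thread
        else:
            cur = leftmost(cur.right)  # descend into the right subtree
    return out

# single-threaded tree: 2 is the root, 1 and 3 its children;
# 1's empty right pointer is threaded to its inorder successor 2
root, a, b = TNode(2), TNode(1), TNode(3)
root.left, root.right = a, b
a.right, a.right_thread = root, True
print(threaded_inorder(root))  # [1, 2, 3]
```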
Expression Tree
The expression tree is a binary tree in which each internal node
corresponds to an operator and each leaf node corresponds to an
operand. For example, the expression tree for 3 + ((5+9)*2) would be:
Inorder traversal of an expression tree produces the infix version of the given
postfix expression (likewise, postorder traversal gives the postfix
expression).
Evaluating the expression represented by an expression tree:
Let t be the expression tree
If t is not null then
    If t.value is an operand then
        Return t.value
    A = solve(t.left)
    B = solve(t.right)
    // calculate applies operator 't.value'
    // on A and B, and returns value
    Return calculate(A, B, t.value)
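The pseudocode above can be sketched in Python. The ExprNode class and the operator table are illustrative, not from the text:

```python
import operator

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

class ExprNode:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def solve(t):
    # a leaf is an operand; an internal node applies its operator
    if t.left is None and t.right is None:
        return float(t.value)
    a = solve(t.left)
    b = solve(t.right)
    return OPS[t.value](a, b)

# the expression tree for 3 + ((5 + 9) * 2)
tree = ExprNode('+', ExprNode('3'),
                ExprNode('*',
                         ExprNode('+', ExprNode('5'), ExprNode('9')),
                         ExprNode('2')))
print(solve(tree))  # 31.0
```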
Construction of Expression Tree:
For constructing an expression tree we use a stack. We loop
through the input expression and do the following for every character:
1. If the character is an operand, push it onto the stack.
2. If the character is an operator, pop two values from the stack, make them
its children, and push the current node again.
In the end, the only element of the stack will be the root of an expression
tree.
Examples:
Input: A B C*+ D/
Output: A + B * C / D
The first three symbols are operands, so create tree nodes and push
pointers to them onto a stack as shown below.
In the next step, an operator ‘*’ is read, so two pointers to trees
are popped, a new tree is formed, and a pointer to it is pushed onto the
stack.
In the next step, an operator ‘+’ is read, so two pointers to trees are
popped, a new tree is formed, and a pointer to it is pushed onto the stack.
Below is the implementation of the above approach:
import java.util.Stack;
class Node{
char data;
Node left,right;
public Node(char data){
this.data = data;
left = right = null;
}
}
public static Node expressionTree(String postfix){
Stack<Node> st = new Stack<Node>();
Node t1,t2,temp;
for(int i=0;i<postfix.length();i++){
if(!isOperator(postfix.charAt(i))){
temp = new Node(postfix.charAt(i));
st.push(temp);
}
else{
temp = new Node(postfix.charAt(i));
t1 = st.pop();
t2 = st.pop();
temp.left = t2;
temp.right = t1;
st.push(temp);
}
}
temp = st.pop();
return temp;
}
public static void inorder(Node root){
if(root==null) return;
inorder(root.left);
System.out.print(root.data);
inorder(root.right);
}
public static boolean isOperator(char c){
    return c == '+' || c == '-' || c == '*' || c == '/' || c == '^';
}
public static void main(String[] args){
    String postfix = "ABC*+D/";
    System.out.print("The Inorder Traversal of Expression Tree: ");
    Node r = expressionTree(postfix);
    inorder(r);
}
}
Output
The Inorder Traversal of Expression Tree: A + B * C / D
Time complexity: O(n)
Auxiliary space: O(n)
Balanced Binary Search Tree
A balanced binary tree is also known as a height-balanced tree. It is defined
as a binary tree in which the difference between the height of the left subtree
and right subtree is not more than k, where k is usually equal to 1. The
height of a tree is the number of edges on the longest path between the
root node and a leaf node.
The above tree is a binary search tree. A binary search tree is a tree in
which each node on the left side has a lower value than its parent node,
and the node on the right side has a higher value than its parent node. In
the above tree, n1 is the root node, and n4, n6, n7 are the leaf nodes. The n7
node is the farthest node from the root node: the n4 and n6 nodes are 2
edges away, and there exist three edges between the root node and the n7 node.
Since n7 is the farthest from the root node, the height of the
above tree is 3.
Now we will see whether the above tree is balanced or not. The left
subtree contains the nodes n2, n4, n5, and n7, while the right subtree
contains the nodes n3 and n6. The left subtree has two leaf nodes, i.e., n4
and n7. There is only one edge between the node n2 and n4 and two edges
between the nodes n7 and n2; therefore, node n7 is the farthest from the
root node. The height of the left subtree is 2. The right subtree contains
only one leaf node, i.e., n6, and has only one edge; therefore, the height of
the right subtree is 1. The difference between the heights of the left subtree
and right subtree is 1. Since we got the value 1, we can say that the
above tree is a height-balanced tree. This process of calculating the
difference between the heights should be performed for each node like n2,
n3, n4, n5, n6 and n7. When we process each node, then we will find that
the value of k is not more than 1, so we can say that the above tree is a
balanced binary tree.
In the above tree, n6, n4, and n3 are the leaf nodes, where n6 is the farthest
node from the root node. Three edges exist between the root node and the
leaf node; therefore, the height of the above tree is 3. When we consider
n1 as the root node, then the left subtree contains the nodes n2, n4, n5, and
n6, while the right subtree contains the node n3. In the left subtree, n2 is a root
node, and n4 and n6 are leaf nodes. Among n4 and n6 nodes, n6 is the
farthest node from its root node, and n6 has two edges; therefore, the
height of the left subtree is 2. The right subtree does not have any child on its
left and right; therefore, the height of the right subtree is 0. Since the
height of the left subtree is 2 and the right subtree is 0, the difference
between the height of the left subtree and right subtree is 2. According to
the definition, the difference between the height of the left subtree and the
right subtree must not be greater than 1. In this case, the difference comes
out to be 2, which is greater than 1; therefore, the above binary tree is an
unbalanced binary search tree.
The above tree is a binary search tree because all the left subtree nodes are
smaller than its parent node and all the right subtree nodes are greater than
its parent node. Suppose we want to find the value 79 in the above
tree. First, we compare the value of node n1 with 79; since the value of 79
is not equal to 35 and it is greater than 35 so we move to the node n3, i.e.,
48. Since the value 79 is not equal to 48 and 79 is greater than 48, we
move to the right child of 48. The value of the right child of node 48 is 79,
which is equal to the value to be searched. The number of hops required to
search an element 79 is 2 and the maximum number of hops required to
search any element is 2. The average case to search an element is O(logn).
The above tree is also a binary search tree because all the left subtree
nodes are smaller than its parent node and all the right subtree nodes are
greater than its parent node. Suppose we want to find the value 79
in the above tree. First, we compare the value 79 with a node n4, i.e., 13.
Since the value 79 is greater than 13 so we move to the right child of node
13, i.e., n2 (21). The value of the node n2 is 21 which is smaller than 79,
so we again move to the right of node 21. The value of the right child of node
21 is 29. Since the value 79 is greater than 29 so we move to the right
child of node 29. The value of right child of node 29 is 35 which is smaller
than 79 so we move to the right child of node 35, i.e., 48. The value 79 is
greater than 48, so we move to the right child of node 48. The value of
right child node of 48 is 79 which is equal to the value to be searched. In
this case, the number of hops required to search an element is 5. In this
case, the worst case is O(n).
If the number of nodes increases, the structure used in tree diagram 1 is
more efficient than the structure used in tree diagram 2. Suppose the
number of nodes available in both of the above trees is 100,000. To search for any
element in tree diagram 2, the time taken is 100,000 µs, whereas the time
taken to search an element in tree diagram 1 is log(100,000), which is equal
to 16.6 µs. We can observe the enormous difference in time between the above
two trees. Therefore, we conclude that the balanced binary tree provides
much faster searching than a linear (degenerate) tree structure.
What is an AVL Tree?
An AVL tree is a type of binary search tree. Named after its inventors
Adelson-Velskii and Landis, AVL trees have the property of dynamic
self-balancing in addition to all the other properties exhibited by binary
search trees.
A BST is a data structure composed of nodes. It has the following
guarantees:
1. Each tree has a root node (at the top)
2. The root node has zero, one, or two child nodes
3. Each child node has zero, one, or two child nodes
4. Each node has up to two children
5. For each node, its left descendants are less than the current node,
which is less than the right descendants
AVL trees have an additional guarantee:
1. The difference between the depth of right and left sub-trees cannot be
more than one. This difference is called the balance factor.
AVL Tree Rotations
In AVL trees, after each operation like insertion and deletion, the balance
factor of every node needs to be checked. If every node satisfies the
balance factor condition, then the operation can be concluded. Otherwise,
the tree needs to be rebalanced using rotation operations.
There are four rotations (LL, RR, LR, and RL) and they are classified into
two types: single rotations (LL and RR) and double rotations (LR and RL).
Define the class and initialize the nodes
class avl_Node(object):
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None
        self.height = 1
    self.preOrder(root.left)
    self.preOrder(root.right)
    else:
        root.right = self.insert_node(root.right, value)
    root.height = 1 + max(self.avl_Height(root.left),
                          self.avl_Height(root.right))
    # Update the balance factor and balance the tree
    balanceFactor = self.avl_BalanceFactor(root)
    if balanceFactor > 1:
        if value < root.left.value:
            return self.rightRotate(root)
        else:
            root.left = self.leftRotate(root.left)
            return self.rightRotate(root)
    if balanceFactor < -1:
        if value > root.right.value:
            return self.leftRotate(root)
        else:
            root.right = self.rightRotate(root.right)
            return self.leftRotate(root)
    return root
Define a function for deleting a node from the AVL tree in Python
def delete_node(self, root, value):
    # Find the node to be deleted and remove it
    if not root:
        return root
    elif value < root.value:
        root.left = self.delete_node(root.left, value)
    elif value > root.value:
        root.right = self.delete_node(root.right, value)
    else:
        if root.left is None:
            temp = root.right
            root = None
            return temp
        elif root.right is None:
            temp = root.left
            root = None
            return temp
        temp = self.avl_MinValue(root.right)
        root.value = temp.value
        root.right = self.delete_node(root.right, temp.value)
    if root is None:
        return root
    # Update the balance factor of nodes
    root.height = 1 + max(self.avl_Height(root.left),
                          self.avl_Height(root.right))
    balanceFactor = self.avl_BalanceFactor(root)
    # Balance the tree
    if balanceFactor > 1:
        if self.avl_BalanceFactor(root.left) >= 0:
            return self.rightRotate(root)
        else:
            root.left = self.leftRotate(root.left)
            return self.rightRotate(root)
    if balanceFactor < -1:
        if self.avl_BalanceFactor(root.right) <= 0:
            return self.leftRotate(root)
        else:
            root.right = self.rightRotate(root.right)
            return self.leftRotate(root)
    return root
Complete Code for AVL Tree in Python
class avl_Node(object):
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None
        self.height = 1

class AVLTree(object):
    def insert_node(self, root, value):
        if not root:
            return avl_Node(value)
        elif value < root.value:
            root.left = self.insert_node(root.left, value)
        else:
            root.right = self.insert_node(root.right, value)
        root.height = 1 + max(self.avl_Height(root.left),
                              self.avl_Height(root.right))
        # Update the balance factor and balance the tree
        balanceFactor = self.avl_BalanceFactor(root)
        if balanceFactor > 1:
            if value < root.left.value:
                return self.rightRotate(root)
            else:
                root.left = self.leftRotate(root.left)
                return self.rightRotate(root)
        if balanceFactor < -1:
            if value > root.right.value:
                return self.leftRotate(root)
            else:
                root.right = self.rightRotate(root.right)
                return self.leftRotate(root)
        return root

    def avl_Height(self, root):
        if not root:
            return 0
        return root.height

    # Get balance factor of the node
    def avl_BalanceFactor(self, root):
        if not root:
            return 0
        return self.avl_Height(root.left) - self.avl_Height(root.right)

    def avl_MinValue(self, root):
        if root is None or root.left is None:
            return root
        return self.avl_MinValue(root.left)

    def preOrder(self, root):
        if not root:
            return
        print("{0} ".format(root.value), end=" ")
        self.preOrder(root.left)
        self.preOrder(root.right)

    def leftRotate(self, b):
        a = b.right
        T2 = a.left
        a.left = b
        b.right = T2
        b.height = 1 + max(self.avl_Height(b.left),
                           self.avl_Height(b.right))
        a.height = 1 + max(self.avl_Height(a.left),
                           self.avl_Height(a.right))
        return a

    def rightRotate(self, b):
        a = b.left
        T3 = a.right
        a.right = b
        b.left = T3
        b.height = 1 + max(self.avl_Height(b.left),
                           self.avl_Height(b.right))
        a.height = 1 + max(self.avl_Height(a.left),
                           self.avl_Height(a.right))
        return a

    def delete_node(self, root, value):
        # Find the node to be deleted and remove it
        if not root:
            return root
        elif value < root.value:
            root.left = self.delete_node(root.left, value)
        elif value > root.value:
            root.right = self.delete_node(root.right, value)
        else:
            if root.left is None:
                temp = root.right
                root = None
                return temp
            elif root.right is None:
                temp = root.left
                root = None
                return temp
            temp = self.avl_MinValue(root.right)
            root.value = temp.value
            root.right = self.delete_node(root.right, temp.value)
        if root is None:
            return root
        # Update the balance factor of nodes
        root.height = 1 + max(self.avl_Height(root.left),
                              self.avl_Height(root.right))
        balanceFactor = self.avl_BalanceFactor(root)
        # Balance the tree
        if balanceFactor > 1:
            if self.avl_BalanceFactor(root.left) >= 0:
                return self.rightRotate(root)
            else:
                root.left = self.leftRotate(root.left)
                return self.rightRotate(root)
        if balanceFactor < -1:
            if self.avl_BalanceFactor(root.right) <= 0:
                return self.leftRotate(root)
            else:
                root.right = self.rightRotate(root.right)
                return self.leftRotate(root)
        return root

Tree = AVLTree()
root = None
root = Tree.insert_node(root, 40)
root = Tree.insert_node(root, 60)
root = Tree.insert_node(root, 50)
root = Tree.insert_node(root, 70)
print("PREORDER")
Tree.preOrder(root)
Output:
PREORDER
50 40 60 70
// Driver Code
public static void main(String args[])
{
Node root = newNode(1);
root.left = newNode(2);
root.right = newNode(3);
root.left.left = newNode(4);
root.left.right = newNode(5);
root.right.left = newNode(6);
root.right.right = newNode(7);
root.right.left.right = newNode(8);
}
Output
Sum of all the elements is: 36
Time Complexity: O(N)
Auxiliary Space: O(1), but if we consider space due to the
recursion call stack then it would be O(h), where h is the height of
the Tree.
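Method 1 (the recursive sum this driver exercises) can be sketched in Python; the Node class and function name here are illustrative:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def sum_bt(root):
    # add the current node's data to the sums of both subtrees
    if root is None:
        return 0
    return root.data + sum_bt(root.left) + sum_bt(root.right)

# the same tree as in the driver above
root = Node(1)
root.left = Node(2)
root.right = Node(3)
root.left.left = Node(4)
root.left.right = Node(5)
root.right.left = Node(6)
root.right.right = Node(7)
root.right.left.right = Node(8)
print(sum_bt(root))  # 36
```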
Method 2 – Another way to solve this problem is by using Level
Order Traversal. Every time a node is deleted from the
queue, add its value to the sum variable.
Implementation:
# Python3 Program to print sum of all
# the elements of a binary tree
class newNode:
    def __init__(self, data):
        self.data = data
        self.left = self.right = None

def sumBT(root):
    # level order traversal: dequeue a node, add its data to sum,
    # then enqueue its children
    if root is None:
        return 0
    sum = 0
    q = []
    q.append(root)
    while len(q) > 0:
        temp = q.pop(0)
        sum += temp.data
        if temp.left != None:
            q.append(temp.left)
        if temp.right != None:
            q.append(temp.right)
    return sum

# Driver Code
if __name__ == '__main__':
    root = newNode(1)
    root.left = newNode(2)
    root.right = newNode(3)
    root.left.left = newNode(4)
    root.left.right = newNode(5)
    root.right.left = newNode(6)
    root.right.right = newNode(7)
    root.right.left.right = newNode(8)
    print("Sum of all elements in the binary tree is:", sumBT(root))
Output
Sum of all elements in the binary tree is: 36
Time Complexity: O(n)
Auxiliary Space: O(n)
Method 3: Using Morris traversal:
The Morris Traversal algorithm is an in-order tree
traversal algorithm that does not require the use of
a stack or recursion, and uses only constant extra
memory. The basic idea behind the Morris Traversal
algorithm is to use the unused right pointers of the
rightmost nodes of left subtrees to create temporary
links back to a node's inorder successor, so that we can
traverse the tree without using any extra memory.
Follow the steps below to implement the above idea:
1. Initialize a variable sum to 0 to keep track of the sum of
all nodes in the binary tree.
2. Initialize a pointer root to the root of the binary tree.
3. While root is not null, perform the following steps:
If the left child of root is null, add the value of root to
sum, and move to the right child of root.
If the left child of root is not null, find the rightmost node
of the left subtree of root. If its right pointer is null,
create a temporary link from it back to root and move to the
left child of root. If its right pointer already points to
root, remove the link, add the value of root to sum, and move
to the right child of root.
4. After the traversal is complete, return sum.
Below is the implementation of the above approach:
C++
// C++ code to implement the Morris traversal approach
#include <iostream>
using namespace std;

struct Node {
    int val;
    Node *left, *right;
    Node(int v) : val(v), left(nullptr), right(nullptr) {}
};

// Sum all node values using Morris (threaded) inorder traversal
int sumOfNodes(Node* root)
{
    int sum = 0;
    while (root != nullptr) {
        if (root->left == nullptr) {
            // No left child: add the value and move to the right child
            sum += root->val;
            root = root->right;
        }
        else {
            // Find the rightmost node of the left subtree
            Node* prev = root->left;
            while (prev->right != nullptr && prev->right != root)
                prev = prev->right;
            if (prev->right == nullptr) {
                // If the right child of the rightmost node is null, set
                // it to the current node and move to the left child
                prev->right = root;
                root = root->left;
            }
            else {
                // If the right child of the rightmost node is the current
                // node, set it to null, add the value of the current node
                // to the sum and move to the right child
                prev->right = nullptr;
                sum += root->val;
                root = root->right;
            }
        }
    }
    return sum;
}

// Driver code
int main()
{
    // Example binary tree:  1
    //                      / \
    //                     2   3
    //                    / \
    //                   4   5
    Node* root = new Node(1);
    root->left = new Node(2);
    root->right = new Node(3);
    root->left->left = new Node(4);
    root->left->right = new Node(5);
    cout << "Sum of all nodes in the binary tree is "
         << sumOfNodes(root) << endl;
    return 0;
}
Output
Sum of all nodes in the binary tree is 15
Time Complexity: O(n), because all the nodes are traversed
only once.
Auxiliary Space: O(1)
46. Determine if a binary tree satisfies the height-balanced property of a
red–black tree?
47. Construct an ancestor matrix from a binary tree?
48. Find all possible binary trees having the same inorder traversal?
49. Perform boundary traversal on a binary tree ?
50. Check if each node of a binary tree has exactly one child?
51. Evaluate a Binary Expression Tree?
52. Construction of an expression tree?
53. Fix children-sum property in a binary tree?
54. Maximum path sum in a binary tree?
55. Create a mirror of an m–ary tree?
56. Print a two-dimensional view of a binary tree?
57. Construct a binary tree from an ancestor matrix?
58. Determine whether a given binary tree is a BST or not?
59. Find inorder successor for the given key in a BST?
60. Fix a binary tree that is only one swap away from becoming a BST?
61. Find the size of the largest BST in a binary tree?
62. Print binary tree structure with its contents in C++?
63. Maximum Independent Set Problem? Huffman Coding Compression
Algorithm?
64. Construct a Cartesian tree from an inorder traversal?
65. Calculate the height of a binary tree with leaf nodes forming a
circular doubly linked list?
66. Link nodes present in each level of a binary tree in the form of a
linked list?
67. Convert a ternary tree to a doubly-linked list?
68. Extract leaves of a binary tree into a doubly-linked list?
69. Find the vertical sum of a binary tree?
70. In-place convert a binary tree to a doubly-linked list?
71. Check whether the leaf traversal of given binary trees is the same or
not?
72. Efficiently print all nodes between two given levels in a binary tree?
73. Calculate the height of a binary tree?
74. Delete a binary tree?
75. Level order traversal of a binary tree?
76. Spiral order traversal of a binary tree
77. Reverse level order traversal of a binary tree?
78. Print all nodes of a perfect binary tree in a specific order?
79. Print left view of a binary tree?
80. Find the next node at the same level as the given node in a binary
tree?
81. Check if a binary tree is a complete binary tree or not?
82. Print diagonal traversal of a binary tree?
83. Print corner nodes of every level in a binary tree?
84. Invert Binary Tree?
85. Convert a binary tree into a doubly-linked list in spiral order?
86. Check if a binary tree is a min-heap or not?
87. Invert alternate levels of a perfect binary tree?
88. Perform vertical traversal of a binary tree?
89. Compute the maximum number of nodes at any level in a binary tree?
90. Print right view of a binary tree?
91. Find the minimum depth of a binary tree?
92. Depth-First Search (DFS) vs Breadth-First Search (BFS)?
93. Print nodes of a binary tree in vertical order?
4
GRAPH ALGORITHMS
Unit Structure :
4.0 Objectives
4.1 Introduction to Graph
4.2 An Overview
4.2.1 What is a graph algorithm?
4.2.2 Types of graphs
4.2.3 Application of graphs
4.2.4 Depth first search
4.2.5 Breadth first search
4.2.6 Dijkstra’s path algorithm
4.3 Applications of Graphs
4.4 Graph Representation
4.5 Graph traversals
4.6 Topological Sort
4.7 Shortest Path Algorithms
4.8 Minimal Spanning Tree
4.9 Conclusions
4.10 Unit End Exercises
4.0 OBJECTIVES
To learn what a graph is and how it is used.
sequence of classes you must take to complete a major in computer
science. We will see in this chapter that once we have a good
representation for a problem, we can use some standard graph algorithms
to solve what otherwise might seem to be a very difficult problem.
While it is relatively easy for humans to look at a road map and
understand the relationships between different places, a computer has no
such knowledge. However, we can also think of a road map as a graph.
When we do so we can have our computer do interesting things for us. If
you have ever used one of the Internet map sites, you know that a
computer can find the shortest, quickest, or easiest path from one place to
another.
As a student of computer science you may wonder about the courses you
must take in order to get a major. A graph is a good way to represent the
prerequisites and other interdependencies among courses. Figure 1 shows
another graph. This one represents the courses and the order in which they
must be taken to complete a major in computer science at Luther College.
The course is designed to develop skills to design and analyze simple
linear and non-linear data structures.
Order: Order defines the total number of vertices present in the graph.
Size: Size defines the number of edges present in the graph.
Self-loop: An edge that connects a vertex to itself.
Isolated vertex: A vertex that is not connected to any other vertex
in the graph.
Vertex degree: The number of edges incident to a vertex in
a graph.
Weighted graph: A graph whose edges are assigned values or weights.
Unweighted graph: A graph whose edges carry no values or weights.
Directed graph: A graph in which each edge has a direction.
Undirected graph: A graph where no directions are defined on the edges.
Let's now carry forward the main discussion and learn about different
types of graph algorithms.
Algorithm
1. Start by putting any one vertex of the graph at the back of the queue.
2. Take the item at the front of the queue and add it to the list of visited
nodes.
3. Next, add the adjacent vertices of that node which have not yet been
visited to the queue.
4. Keep repeating steps two and three until the queue is
empty.
Pseudocode
1. Set all nodes to "not visited";
2. q = new Queue();
3. q.enqueue(initial node);
4. while ( q is not empty ) do
5. {
6. x = q.dequeue();
7. if ( x has not been visited )
8. {
9. visited[x] = true; // Visit node x !
10.
11. for ( every edge (x, y) /* we are using all edges ! */ )
12. if ( y has not been visited )
13. q.enqueue(y); // Use the edge (x,y) !!!
14. }
15. }
Complexity: O(V+E), where V is the number of vertices and E is the number of edges.
Applications
The BFS algorithm has various applications. For example, it is used to
determine shortest paths and minimum spanning trees. It is also used in
web crawlers to create web page indexes, it powers search features on
social media networks, and it helps to discover peers in peer-to-peer
networks such as BitTorrent.
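The numbered steps and pseudocode above can be sketched as a short runnable routine. The dictionary-of-lists graph format and the vertex labels below are illustrative assumptions, not part of the pseudocode.

```python
from collections import deque

def bfs(graph, start):
    """Return the vertices of `graph` in breadth-first order from `start`.

    `graph` is a dict mapping each vertex to a list of adjacent vertices.
    """
    visited = set()
    order = []
    q = deque([start])
    while q:                      # while the queue is not empty
        x = q.popleft()
        if x not in visited:
            visited.add(x)        # visit node x
            order.append(x)
            for y in graph[x]:    # examine every edge (x, y)
                if y not in visited:
                    q.append(y)
    return order

# Example: BFS from vertex 'A'
g = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
print(bfs(g, 'A'))  # ['A', 'B', 'C', 'D']
```

Note that a vertex may be enqueued more than once before it is visited; the `visited` check on dequeue keeps the traversal O(V+E).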
Depth First Search (DFS)
Algorithm
1. Start by putting any one vertex of the graph on the stack's top.
2. Pop the top item of the stack and add it to the visited vertex list.
3. Create a list of all the adjacent nodes of that vertex and push those
that have not been visited onto the top of the stack.
4. Keep repeating steps 2 and 3 until the stack becomes empty.
Pseudocode
1. DFS(G,v) ( v is the vertex where the search starts )
2. Stack S := {}; ( start with an empty stack )
3. for each vertex u, set visited[u] := false;
4. push S, v;
5. while (S is not empty) do
6. u := pop S;
7. if (not visited[u]) then
8. visited[u] := true;
9. for each unvisited neighbour w of u
10. push S, w;
11. end if
12. end while
13. END DFS()
Applications
DFS finds its application when it comes to finding paths between two
vertices and detecting cycles. Also, topological sorting can be done using
the DFS algorithm easily. DFS is also used for one-solution puzzles.
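The stack-based steps above translate directly into code. As with BFS, the adjacency-list dictionary used here is an assumed input format, not something fixed by the pseudocode.

```python
def dfs(graph, start):
    """Return the vertices reachable from `start` in depth-first order.

    `graph` is a dict mapping each vertex to a list of adjacent vertices.
    """
    visited = set()
    order = []
    stack = [start]               # push the start vertex
    while stack:                  # while the stack is not empty
        u = stack.pop()
        if u not in visited:
            visited.add(u)
            order.append(u)
            for w in graph[u]:    # push every unvisited neighbour of u
                if w not in visited:
                    stack.append(w)
    return order

g = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
print(dfs(g, 'A'))  # ['A', 'C', 'D', 'B']
```

Because the stack reverses the order in which neighbours are pushed, the traversal explores the last-listed neighbour first.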
Dijkstra's shortest path algorithm
Dijkstra's shortest path algorithm finds the shortest path from one
vertex to another, i.e. the path whose sum of traversed edge weights is
minimum. The shortest path algorithm is a carefully designed algorithm
that aims for as much efficiency as possible. Consider the below diagram.
Algorithm
1. Set the distance of all vertices to infinity, except the source vertex,
whose distance is 0.
2. Push the source in the form (distance, vertex) into a min-priority
queue.
3. Pop the vertex with the minimum distance from the source out of the
priority queue.
4. After popping out the minimum-distance vertex, relax each of its edges:
if (popped vertex's distance + edge weight < neighbouring vertex's
distance), update the neighbour's distance.
5. If you find that an already visited vertex is popped, move ahead
without using it.
6. Repeat these steps until the priority queue is empty.
Pseudocode
1. function dijkstra(G, S)
2. for each vertex V in G
3. distance[V] <- infinite
4. previous[V] <- NULL
5. If V != S, add V to Priority Queue Q
6. distance[S] <- 0
7. while Q IS NOT EMPTY
8. U <- Extract MIN from Q
9. for each unvisited neighbour V of U
10. tempDistance <- distance[U] + edge_weight(U, V)
11. if tempDistance < distance[V]
12. distance[V] <- tempDistance
13. previous[V] <- U
14. return distance[], previous[]
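The pseudocode above can be made runnable with a binary heap standing in for the min-priority queue; the (neighbour, weight) adjacency-list format and the sample graph are assumptions for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from `source` in a non-negatively weighted graph.

    `graph` maps each vertex to a list of (neighbour, edge_weight) pairs.
    """
    distance = {v: float('inf') for v in graph}
    distance[source] = 0
    pq = [(0, source)]            # min-priority queue of (distance, vertex)
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)  # extract the minimum-distance vertex
        if u in visited:
            continue              # stale entry for an already visited vertex
        visited.add(u)
        for v, w in graph[u]:
            if d + w < distance[v]:   # relax the edge (u, v)
                distance[v] = d + w
                heapq.heappush(pq, (distance[v], v))
    return distance

g = {
    'A': [('B', 2), ('C', 6)],
    'B': [('D', 5)],
    'C': [('D', 8)],
    'D': [],
}
print(dijkstra(g, 'A'))  # {'A': 0, 'B': 2, 'C': 6, 'D': 7}
```

Instead of decreasing keys in place, this sketch pushes duplicate heap entries and skips the stale ones on extraction, which is the usual idiom with `heapq`.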
Applications
Dijkstra's shortest path algorithm is used to find the travel distance
from one location to another, as in Google Maps or Apple Maps. In
addition, it is widely used in networking to solve minimum-delay path
problems, and in abstract machines to evaluate which choices lead to a
specific goal, such as the moves needed to win a match.
Cycle detection
A cycle is a path in a graph whose first and last vertices are the same.
For example, if you start from a vertex and travel along some path, you
might eventually reach the exact point where you started. Cycle detection
is concerned with finding this kind of cycle in a graph. Consider the
below image.
Pseudocode
# Brent's Cycle Algorithm Example
def brent(f, x0):
    # main phase: search successive powers of two
    power = lam = 1
    tortoise = x0
    hare = f(x0)  # f(x0) is the element/node next to x0.
    while tortoise != hare:
        if power == lam:  # time to start a new power of two?
            tortoise = hare
            power *= 2
            lam = 0
        hare = f(hare)
        lam += 1
    # Find the position of the first repetition of length lam
    tortoise = hare = x0
    for i in range(lam):
        # range(lam) produces a list with the values 0, 1, ..., lam-1
        hare = f(hare)
    # The distance between the hare and tortoise is now lam.
    # Next, the hare and tortoise move at the same speed until they agree
    mu = 0
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(hare)
        mu += 1
    return lam, mu
Applications
Cycle detection algorithms are used in message-based distributed systems
and large-scale cluster processing systems. They are also used to detect
deadlocks in concurrent systems, and in various cryptographic applications
where keys are used to manage messages with encrypted values.
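As a quick, self-contained sanity check, here is the routine again in runnable form with a small driver. The successor function f in the driver is a made-up example whose sequence is a pure cycle of length 5 starting at position 0.

```python
def brent(f, x0):
    """Return (lam, mu): cycle length and start index of the sequence
    x0, f(x0), f(f(x0)), ... using Brent's algorithm."""
    # main phase: search successive powers of two
    power = lam = 1
    tortoise = x0
    hare = f(x0)
    while tortoise != hare:
        if power == lam:          # time to start a new power of two?
            tortoise = hare
            power *= 2
            lam = 0
        hare = f(hare)
        lam += 1
    # find the position mu of the first repetition of length lam
    tortoise = hare = x0
    for _ in range(lam):
        hare = f(hare)
    mu = 0
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(hare)
        mu += 1
    return lam, mu

# 0 -> 3 -> 1 -> 4 -> 2 -> 0 -> ... : a cycle of length 5, starting at index 0
print(brent(lambda x: (x + 3) % 5, 0))  # (5, 0)
```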
Minimum spanning tree
Pseudocode
Prim's Algorithm Example
ReachSet = {0};
UnReachSet = {1, 2, ..., N-1};
SpanningTree = {};
while ( UnReachSet ≠ empty )
{
    Find edge e = (x, y) such that:
        1. x ∈ ReachSet
        2. y ∈ UnReachSet
        3. e has the smallest cost
    SpanningTree = SpanningTree ∪ {e};
    ReachSet = ReachSet ∪ {y};
    UnReachSet = UnReachSet - {y};
}
Applications
Minimum spanning trees find application in network design and are
popularly used in approximations to the traveling salesman problem. They
can also be used to find the minimum-cost weighted perfect matching and
to solve multi-terminal minimum cut problems. MST also finds application
in the fields of image and handwriting recognition and cluster analysis.
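The Prim pseudocode above repeatedly takes the cheapest edge leaving ReachSet; a binary heap of candidate edges gives the same effect. This sketch assumes a connected, undirected graph given as (neighbour, cost) adjacency lists.

```python
import heapq

def prim(graph, start):
    """Return the edges (x, y, cost) of a minimum spanning tree.

    `graph` maps each vertex to a list of (neighbour, cost) pairs;
    each undirected edge must appear in both endpoints' lists.
    """
    reach = {start}               # ReachSet
    tree = []                     # SpanningTree
    # candidate edges (cost, x, y) leaving the reach set
    pq = [(w, start, y) for y, w in graph[start]]
    heapq.heapify(pq)
    while len(reach) < len(graph):
        w, x, y = heapq.heappop(pq)   # smallest-cost candidate edge
        if y in reach:
            continue                  # both ends already reached: skip
        reach.add(y)
        tree.append((x, y, w))
        for z, wz in graph[y]:
            if z not in reach:
                heapq.heappush(pq, (wz, y, z))
    return tree

g = {
    0: [(1, 4), (2, 1)],
    1: [(0, 4), (2, 2)],
    2: [(0, 1), (1, 2)],
}
print(prim(g, 0))  # [(0, 2, 1), (2, 1, 2)]
```

Stale heap entries whose endpoint is already in ReachSet are simply skipped on extraction, mirroring the "y ∈ UnReachSet" condition.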
2. Graph theory is useful in biology and conservation efforts where a
vertex can represent regions where certain species exist and the edges
represent migration paths, or movement between the regions. This
information is important when looking at breeding patterns or tracking the
spread of disease.
3. Different activities of a project can be represented using a graph. This
graph can be useful in project scheduling.
Graph Representation
In this section, we will discuss the ways to represent a graph. By graph
representation, we simply mean the technique used to store a graph in the
computer's memory.
A graph is a data structure that consists of a set of vertices (called
nodes) and a set of edges. There are two ways to store a graph in the
computer's memory:
o Sequential representation (or, Adjacency matrix representation)
o Linked list representation (or, Adjacency list representation)
In sequential representation, an adjacency matrix is used to store the
graph. Whereas in linked list representation, there is a use of an adjacency
list to store the graph.
In this section, we will discuss each one of them in detail.
Now, let's start discussing the ways of representing a graph in the data
structure.
Sequential representation
In sequential representation, there is a use of an adjacency matrix to
represent the mapping between vertices and edges of the graph. We can
use an adjacency matrix to represent the undirected graph, directed graph,
weighted directed graph, and weighted undirected graph.
If adj[i][j] = w, it means that an edge exists from vertex i to vertex j
with weight w.
An entry Aij in the adjacency matrix representation of an undirected graph
G will be 1 if an edge exists between Vi and Vj. If an undirected graph G
consists of n vertices, then the adjacency matrix for that graph is n x n,
and the matrix A = [aij] can be defined as -
aij = 1 {if there is an edge from Vi to Vj}
aij = 0 {otherwise}
It means that, in an adjacency matrix, 0 represents that no edge exists
between two vertices, whereas 1 represents that an edge exists between
them.
If there is no self-loop present in the graph, it means that the diagonal
entries of the adjacency matrix will be 0.
Now, let's see the adjacency matrix representation of an undirected graph.
In the above figure, an image shows the mapping among the vertices (A,
B, C, D, E), and this mapping is represented by using the adjacency
matrix.
There exist different adjacency matrices for the directed and undirected
graph. In a directed graph, an entry Aij will be 1 only when there is an
edge directed from Vi to Vj.
Adjacency matrix for a directed graph
In a directed graph, edges represent a specific path from one vertex to
another vertex. Suppose a path exists from vertex A to another vertex B; it
means that node A is the initial node, while node B is the terminal node.
Consider the below directed graph and try to construct the adjacency
matrix of it.
In the above graph, we can see there is no self-loop, so the diagonal
entries of the adjacency matrix are 0.
In the above image, we can see that the adjacency matrix representation of
the weighted directed graph is different from other representations. It is
because, in this representation, the non-zero values are replaced by the
actual weight assigned to the edges.
An adjacency matrix is easy to implement and follow, and it can be used
when the graph is dense and the number of edges is large. Though it is
advantageous to use an adjacency matrix, it consumes more space: even if
the graph is sparse, the matrix still consumes the same amount of space.
Linked list representation
An adjacency list is used in the linked representation to store the graph
in the computer's memory. It is efficient in terms of storage, as we only
have to store the values for the edges.
Let's see the adjacency list representation of an undirected graph.
In the above figure, we can see that there is a linked list or adjacency
list for every node of the graph. From vertex A, there are paths to vertex
B and vertex D. These nodes are linked to node A in the given adjacency
list.
An adjacency list is maintained for each node present in the graph, which
stores the node value and a pointer to the next adjacent node to the
respective node. If all the adjacent nodes are traversed, then store the
NULL in the pointer field of the last node of the list.
The sum of the lengths of adjacency lists is equal to twice the number of
edges present in an undirected graph.
Now, consider the directed graph, and let's see the adjacency list
representation of that graph.
For a directed graph, the sum of the lengths of adjacency lists is equal to
the number of edges present in the graph.
Now, consider the weighted directed graph, and let's see the adjacency list
representation of that graph.
In the case of a weighted directed graph, each node contains an extra
field that stores the weight of the corresponding edge.
In an adjacency list, it is easy to add a vertex. Because of using the linked
list, it also saves space.
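To make the space trade-off between the two representations concrete, here is a small sketch that builds both for the same undirected graph; the vertex count and edge list are arbitrary illustrations.

```python
# Undirected graph with vertices 0..3 and the edges listed below
V = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# Adjacency matrix: always V x V entries, however few edges exist
matrix = [[0] * V for _ in range(V)]
for i, j in edges:
    matrix[i][j] = matrix[j][i] = 1   # symmetric: the graph is undirected

# Adjacency list: one entry per edge endpoint, i.e. 2*E entries in total
adj = {v: [] for v in range(V)}
for i, j in edges:
    adj[i].append(j)
    adj[j].append(i)

print(matrix[0][1])                       # 1: edge (0, 1) exists
print(sum(len(l) for l in adj.values()))  # 8 = 2 * number of edges
```

The matrix always occupies V² cells, while the list grows only with the number of edges, which is why the list is preferred for sparse graphs.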
Implementation of adjacency matrix representation of Graph
Now, let's see the implementation of adjacency matrix representation of
graph in C.
In this program, there is an adjacency matrix representation of an
undirected graph. It means that if an edge exists from vertex A to
vertex B, an edge also exists from vertex B to vertex A.
Here, there are four vertices and five edges in the graph, all of them
non-directed.
1. /* Adjacency Matrix representation of an undirected graph in C */
2.
3. #include <stdio.h>
4. #define V 4 /* number of vertices in the graph */
5.
6. /* function to initialize the matrix to zero */
7. void init(int arr[][V]) {
8. int i, j;
9. for (i = 0; i < V; i++)
10. for (j = 0; j < V; j++)
11. arr[i][j] = 0;
12. }
13.
14. /* function to add edges to the graph */
15. void insertEdge(int arr[][V], int i, int j) {
16. arr[i][j] = 1;
17. arr[j][i] = 1;
18. }
19.
20. /* function to print the matrix elements */
21. void printAdjMatrix(int arr[][V]) {
22. int i, j;
23. for (i = 0; i < V; i++) {
24. printf("%d: ", i);
25. for (j = 0; j < V; j++) {
26. printf("%d ", arr[i][j]);
27. }
28. printf("\n");
29. }
30. }
31.
32. int main() {
33. int adjMatrix[V][V];
34.
35. init(adjMatrix);
36. insertEdge(adjMatrix, 0, 1);
37. insertEdge(adjMatrix, 0, 2);
38. insertEdge(adjMatrix, 1, 2);
39. insertEdge(adjMatrix, 1, 3);
40. insertEdge(adjMatrix, 2, 3);
41.
42. printAdjMatrix(adjMatrix);
43.
44. return 0;
45. }
Output:
After the execution of the above code, the output will be -
47. /* Add an edge from src to dest. The node is added at the beginning */
48. struct AdjNode* check = NULL;
49. struct AdjNode* newNode = newAdjNode(dest);
50.
51. if (graph->array[src].head == NULL) {
52. newNode->next = graph->array[src].head;
53. graph->array[src].head = newNode;
54. }
55. else {
56.
57. check = graph->array[src].head;
58. while (check->next != NULL) {
59. check = check->next;
60. }
61. // graph->array[src].head = newNode;
62. check->next = newNode;
63. }
64.
65. /* Since graph is undirected, add an edge from dest to src also */
66. newNode = newAdjNode(src);
67. if (graph->array[dest].head == NULL) {
68. newNode->next = graph->array[dest].head;
69. graph->array[dest].head = newNode;
70. }
71. else {
72. check = graph->array[dest].head;
73. while (check->next != NULL) {
74. check = check->next;
75. }
76. check->next = newNode;
77. }
78. }
79. /* function to print the adjacency list representation of graph*/
80. void print(struct Graph* graph)
81. {
82. int v;
83. for (v = 0; v < graph->V; ++v) {
84. struct AdjNode* pCrawl = graph->array[v].head;
85. printf("\n The Adjacency list of vertex %d is: \n head ", v);
86. while (pCrawl) {
87. printf("-> %d", pCrawl->dest);
88. pCrawl = pCrawl->next;
89. }
90. printf("\n");
91. }
92. }
93.
94. int main()
95. {
96.
97. int V = 5;
98. struct Graph* g = createGraph(V);
99. addEdge(g, 0, 1);
100. addEdge(g, 0, 3);
101. addEdge(g, 1, 2);
102. addEdge(g, 1, 3);
103. addEdge(g, 2, 4);
104. addEdge(g, 2, 3);
105. addEdge(g, 3, 4);
106. print(g);
107. return 0;
108. }
Output:
In the output, we will see the adjacency list representation of all the
vertices of the graph. After the execution of the above code, the output
will be -
Conclusion
Here, we have seen how a graph can be represented using the adjacency
matrix and the adjacency list, and we have seen their implementations in
the C programming language.
Graph traversal
In this section we will see what a graph data structure is, and its
traversal algorithms.
A graph is a non-linear data structure that consists of some nodes and
their connecting edges. The edges may be directed or undirected. Such a
graph can be represented as G(V, E). The following graph can be
represented as G({A, B, C, D, E}, {(A, B), (B, D), (D, E), (B, C), (C, A)})
The graph has two types of traversal algorithms. These are called the
Breadth First Search and Depth First Search.
Breadth First Search (BFS)
The Breadth First Search (BFS) traversal is an algorithm, which is used to
visit all of the nodes of a given graph. In this traversal algorithm one node
is selected, and then all of its adjacent nodes are visited one by one.
After completing all of the adjacent vertices, it moves on to another
vertex and checks its adjacent vertices again.
Algorithm
bfs(vertices, start)
Input: The list of vertices, and the start vertex.
Output: Traverse all of the nodes, if the graph is connected.
Begin
define an empty queue que
at first mark all nodes status as unvisited
add the start vertex into the que
while que is not empty, do
delete item from que and set to u
display the vertex u
for all vertices i adjacent with u, do
if vertices[i] is unvisited, then
mark vertices[i] as temporarily visited
add vertices[i] into the queue
done
mark u as completely visited
done
End
Depth First Search (DFS)
Algorithm
dfs(vertices, start)
Input: The list of all vertices, and the start node.
Output: Traverse all nodes in the graph.
Begin
initially make the state to unvisited for all nodes
push start into the stack
while stack is not empty, do
pop element from stack and set to u
display the node u
if u is not visited, then
mark u as visited
for all nodes i connected to u, do
if ith vertex is unvisited, then
push ith vertex into the stack
mark ith vertex as visited
done
done
End
Topological Sort
Can a graph have more than one valid topological ordering? Yep! In the
example above, [1, 3, 2, 4, 5] works too.
Cyclic Graphs
Look at this directed graph with a cycle:
The cycle creates an impossible set of constraints—B has to be
before and after D in the ordering.
As a rule, cyclic graphs don't have valid topological orderings.
The Algorithm
How can we produce a topological ordering for this directed graph?
Well, let's focus on the first node in the topological ordering. That node
can't have any incoming directed edges; it must have an indegree of
zero.
Why?
Because if it had incoming directed edges, then the nodes pointing to it
would have to come first.
So, we'll find a node with an indegree of zero and add it to the topological
ordering.
That covers the first node in our topological ordering. What about the next
one?
Once a node is added to the topological ordering, we can take the node,
and its outgoing edges, out of the graph.
Then, we can repeat our earlier approach: look for any node with an
indegree of zero and add it to the ordering.
This is a common algorithm design pattern:
1. Figure out how to get the first thing.
2. Remove the first thing from the problem.
3. Repeat.
Note: this isn't the only way to produce a topological ordering.
Implementation
We'll use the strategy we outlined above:
1. Identify a node with no incoming edges.
2. Add that node to the ordering.
3. Remove it from the graph.
4. Repeat.
We'll keep looping until there aren't any more nodes with indegree zero.
This could happen for two reasons:
There are no nodes left. We've taken all of them out of the graph and
added them to the topological ordering.
There are some nodes left, but they all have incoming edges. This
means the graph has a cycle, and no topological ordering exists.
One small tweak. Instead of actually removing the nodes from the graph
(and destroying our input!), we'll use a hash map to track each node's
indegree. When we add a node to the topological ordering, we'll
decrement the indegree of that node's neighbors, representing that those
nodes have one fewer incoming edge.
Let's code it up!
def topological_sort(digraph):
    # digraph is a dictionary:
    #   key: a node
    #   value: a set of adjacent neighboring nodes

    # construct a dictionary mapping nodes to their indegrees
    indegrees = {node: 0 for node in digraph}
    for node in digraph:
        for neighbor in digraph[node]:
            indegrees[neighbor] += 1

    # track nodes with no incoming edges
    nodes_with_no_incoming_edges = []
    for node in digraph:
        if indegrees[node] == 0:
            nodes_with_no_incoming_edges.append(node)

    # initially, no nodes in our ordering
    topological_ordering = []

    # as long as there are nodes with no incoming edges
    # that can be added to the ordering
    while len(nodes_with_no_incoming_edges) > 0:

        # add one of those nodes to the ordering
        node = nodes_with_no_incoming_edges.pop()
        topological_ordering.append(node)

        # decrement the indegree of that node's neighbors
        for neighbor in digraph[node]:
            indegrees[neighbor] -= 1
            if indegrees[neighbor] == 0:
                nodes_with_no_incoming_edges.append(neighbor)

    # we've run out of nodes with no incoming edges
    # did we add all the nodes or find a cycle?
    if len(topological_ordering) == len(digraph):
        return topological_ordering  # got them all
    else:
        raise Exception("Graph has a cycle! No topological ordering exists.")
Uses
The most common use for topological sort is ordering the steps of a
process where some of the steps depend on each other.
As an example, when making chocolate bundt cake,
The ingredients have to be mixed before going in the bundt pan.
The bundt pan has to be greased and floured before the batter can be
poured in.
The oven has to be preheated before the cake can bake.
The cake has to be baked before it cools.
The cake has to cool before it can be iced.
This process can be represented as a directed graph. Each step is a
node, and we'll add directed edges between nodes to represent that one
step has to be done before another.
Topological sorting of a graph orders the vertices linearly so that, for
every directed edge (u, v), vertex u comes before vertex v in the
ordering. Users can understand it more accurately by looking at the sample
image given below.
In the above example, you can visualize the ordering of the unsorted graph
and the topologically sorted graph. The topologically sorted graph ensures
that every vertex appears before the vertices it points to.
Pseudocode
1. topological_sort(N, adj[N][N])
2. T = []
3. visited = []
4. in_degree = []
5. for i = 0 to N
6. in_degree[i] = visited[i] = 0
7. for i = 0 to N
8. for j = 0 to N
9. if adj[i][j] is TRUE
10. in_degree[j] = in_degree[j] + 1
11. for i = 0 to N
12. if in_degree[i] is 0
13. enqueue(Queue, i)
14. visited[i] = TRUE
15. while Queue is not Empty
16. vertex = get_front(Queue)
17. dequeue(Queue)
18. T.append(vertex)
19. for j = 0 to N
20. if adj[vertex][j] is TRUE and visited[j] is FALSE
21. in_degree[j] = in_degree[j] - 1
22. if in_degree[j] is 0
23. enqueue(Queue, j)
24. visited[j] = TRUE
25. return T
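The pseudocode above translates almost line for line into a runnable routine; the adjacency-matrix input format follows the pseudocode, while the redundant visited array is dropped, since the in-degree check already prevents a vertex from being enqueued twice.

```python
from collections import deque

def topological_sort_matrix(N, adj):
    """Kahn's algorithm on an N x N boolean adjacency matrix `adj`."""
    # compute the in-degree of every vertex
    in_degree = [0] * N
    for i in range(N):
        for j in range(N):
            if adj[i][j]:
                in_degree[j] += 1
    # start with all vertices of in-degree zero
    queue = deque(i for i in range(N) if in_degree[i] == 0)
    T = []
    while queue:
        vertex = queue.popleft()
        T.append(vertex)
        # removing `vertex` lowers the in-degree of its successors
        for j in range(N):
            if adj[vertex][j]:
                in_degree[j] -= 1
                if in_degree[j] == 0:
                    queue.append(j)
    return T

# 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3
A = [[0, 1, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
print(topological_sort_matrix(4, A))  # [0, 1, 2, 3]
```

If the returned list is shorter than N, some vertices never reached in-degree zero, which means the graph contains a cycle.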
Application
Topological sorting can be implemented with Kahn's algorithm or with the
DFS algorithm. In real-life applications, topological sorting is used in
scheduling instructions and in the serialization of data. It is also
popularly used to determine the order in which tasks are to be compiled
and to resolve dependencies in linkers.
Graph coloring
Graph coloring algorithms assign colors to the elements of a graph
subject to certain conditions; the conditions depend on the technique or
algorithm used. Vertex coloring is the most commonly used technique: we
try to color the vertices with k colors, ensuring that no two adjacent
vertices receive the same color. Other methods include face coloring and
edge coloring, which likewise ensure that no two adjacent faces or edges
receive the same color. The coloring of a graph is characterized by its
chromatic number, which is the smallest number of colors needed. Consider
the below image to understand how it works.
Pseudocode
1. #include <iostream>
2. #include <list>
3. using namespace std;
4. // A class that represents an undirected graph
5. class Graph
6. {
7. int V; // No. of vertices
8. list<int> *adj; // A dynamic array of adjacency lists
9. public:
10. // Constructor and destructor
11. Graph(int V) { this->V = V; adj = new list<int>[V]; }
12. ~Graph() { delete [] adj; }
13. // function to add an edge to graph
14. void addEdge(int v, int w);
15. // Prints greedy coloring of the vertices
16. void greedyColoring();
17. };
18. void Graph::addEdge(int v, int w)
19. {
20. adj[v].push_back(w);
21. adj[w].push_back(v); // Note: the graph is undirected
22. }
23. // Assigns colors (starting from 0) to all vertices and prints
24. // the assignment of colors
25. void Graph::greedyColoring()
26. {
27. int result[V];
28. // Assign the first color to first vertex
29. result[0] = 0;
30. // Initialize remaining V-1 vertices as unassigned
31. for (int u = 1; u < V; u++)
32. result[u] = -1; // no color is assigned to u
33. // A temporary array to store the available colors. True
34. // value of available[cr] would mean that the color cr is
35. // assigned to one of its adjacent vertices
36. bool available[V];
37. for (int cr = 0; cr < V; cr++)
38. available[cr] = false;
39. // Assign colors to remaining V-1 vertices
40. for (int u = 1; u < V; u++)
41. {
42. // Process all adjacent vertices and flag their colors
43. // as unavailable
44. list<int>::iterator i;
45. for (i = adj[u].begin(); i != adj[u].end(); ++i)
46. if (result[*i] != -1)
47. available[result[*i]] = true;
48. // Find the first available color
49. int cr;
50. for (cr = 0; cr < V; cr++)
51. if (available[cr] == false)
52. break;
53. result[u] = cr; // Assign the found color
54. // Reset the values back to false for the next iteration
55. for (i = adj[u].begin(); i != adj[u].end(); ++i)
56. if (result[*i] != -1)
57. available[result[*i]] = false;
58. }
59. // print the result
60. for (int u = 0; u < V; u++)
61. cout << "Vertex " << u << " ---> Color "
62. << result[u] << endl;
63. }
64. // Driver program to test above function
65. int main()
66. {
67. Graph g1(5);
68. g1.addEdge(0, 1);
69. g1.addEdge(0, 2);
70. g1.addEdge(1, 2);
71. g1.addEdge(1, 3);
72. g1.addEdge(2, 3);
73. g1.addEdge(3, 4);
74. cout << "Coloring of graph 1 \n";
75. g1.greedyColoring();
76. Graph g2(5);
77. g2.addEdge(0, 1);
78. g2.addEdge(0, 2);
79. g2.addEdge(1, 2);
80. g2.addEdge(1, 4);
81. g2.addEdge(2, 4);
82. g2.addEdge(4, 3);
83. cout << "\nColoring of graph 2 \n";
84. g2.greedyColoring();
85. return 0;
86. }
Application
Graph coloring has vast applications in data structures as well as in
solving real-life problems. For example, it is used in timetable
scheduling and in assigning radio frequencies for mobile networks. It is
also used in Sudoku and to check whether a given graph is bipartite.
Graph coloring can also be used in geographical maps to mark countries
and states in different colors.
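One of the applications mentioned, checking whether a graph is bipartite, amounts to attempting a 2-coloring with BFS. This sketch assumes an adjacency-list dictionary as input; the sample graphs are illustrative.

```python
from collections import deque

def is_bipartite(graph):
    """True if `graph` (dict: vertex -> list of neighbours) is 2-colorable."""
    color = {}
    for start in graph:           # handle disconnected components too
        if start in color:
            continue
        color[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in graph[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # give v the opposite color
                    q.append(v)
                elif color[v] == color[u]:    # adjacent vertices share a color
                    return False
    return True

square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # even cycle
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}            # odd cycle
print(is_bipartite(square))    # True
print(is_bipartite(triangle))  # False
```

A graph is bipartite exactly when it has no odd cycle, which is why the triangle fails.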
Maximum flow
The maximum flow algorithm is usually treated as a problem-solving
algorithm where the graph is modeled like a network flow infrastructure.
The maximum flow is determined by repeatedly finding augmenting paths,
paths along which additional flow can still be pushed; at the end, the
total flow out of the source node equals the total flow into the sink
node. Below is the illustration for the same.
1. function: DinicMaxFlow(Graph G,Node S,Node T):
2. Initialize flow in all edges to 0, F = 0
3. Construct level graph
4. while (there exists an augmenting path in level graph):
5. find blocking flow f in level graph
6. F = F + f
7. Update level graph
8. return F
Applications
The maximum flow problem is solved by popular algorithms such as the
Ford-Fulkerson algorithm, the Edmonds-Karp algorithm, and Dinic's
algorithm, as in the pseudocode given above. In real life, it finds
applications in scheduling airline crews and in image segmentation,
separating foreground from background. It is also used in sports
elimination problems, for example deciding whether a basketball team can
still catch the current division leader.
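As a sketch of one of the algorithms named above, here is a minimal Edmonds-Karp implementation (the BFS-based form of Ford-Fulkerson); the capacity-matrix input format and the sample network are assumptions for illustration.

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """Maximum s-t flow; `capacity` is an n x n matrix of edge capacities."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    max_flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return max_flow           # no augmenting path left
        # find the bottleneck residual capacity along the path
        bottleneck = float('inf')
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # push the bottleneck along the path (reverse flows go negative)
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        max_flow += bottleneck

# edges: 0->1 (3), 0->2 (2), 1->2 (1), 1->3 (2), 2->3 (3)
C = [[0, 3, 2, 0],
     [0, 0, 1, 2],
     [0, 0, 0, 3],
     [0, 0, 0, 0]]
print(edmonds_karp(C, 0, 3))  # 5
```

Using BFS guarantees that augmenting paths are shortest first, which bounds the number of augmentations at O(V·E).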
Matching
A matching in a graph is defined as a set of edges that share no common
vertices at all. A matching is called a maximum matching if it contains
the largest possible number of edges, matching as many vertices as
possible. It follows a specific approach for determining full matches, as
shown in the below image.
Applications
Matchings are computed by algorithms such as the Hopcroft-Karp algorithm
and the Blossom algorithm, and weighted matching problems can be solved
with the Hungarian algorithm. In real-life examples, matching can be used
for resource allocation and travel optimization, and for problems such as
stable marriage and vertex cover.
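A minimal sketch of maximum bipartite matching via augmenting paths, the simple approach that Hopcroft-Karp refines; the input format (each left vertex mapped to its candidate right vertices) is an assumption for illustration.

```python
def max_bipartite_matching(graph, right_vertices):
    """Size of a maximum matching in a bipartite graph.

    `graph` maps each left vertex to an iterable of right vertices.
    """
    match = {r: None for r in right_vertices}  # right vertex -> matched left

    def try_augment(u, seen):
        # try to match left vertex u, re-matching partners if needed
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be moved elsewhere
                if match[v] is None or try_augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    size = 0
    for u in graph:
        if try_augment(u, set()):
            size += 1
    return size

g = {'a': [1, 2], 'b': [1], 'c': [2, 3]}
print(max_bipartite_matching(g, [1, 2, 3]))  # 3
```

Each left vertex triggers at most one search over the edges, so this runs in O(V·E); Hopcroft-Karp improves on it by augmenting along many shortest paths at once.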
Conclusion
In this section, you came across plenty of graph algorithms and
techniques that find day-to-day applications in many instances of real
life. You learned how to implement them according to the situation, and
the pseudocode helped you process the information strategically and
efficiently. Graph algorithms are an essential topic, confined not only
to solving problems with data structures but also appearing in everyday
tools such as Google Maps and Apple Maps. A beginner might find graph
algorithms hard to implement because of their complex nature, so it is
worth working through this material from the beginning.
Shortest Path Algorithms
Dijkstra's Algorithm finds the shortest path between a given node (which
is called the "source node") and all other nodes in a graph. This algorithm
uses the weights of the edges to find the path that minimizes the total
distance (weight) between the source node and all other nodes.
How does Dijkstra's Algorithm work?
Let's see how Dijkstra's Algorithm works with an example given below:
Dijkstra’s Algorithm will generate the shortest path from Node 0 to all
other Nodes in the graph.
Dijkstra’s Algorithm
The algorithm will generate the shortest path from node 0 to all the other
nodes in the graph.
For this graph, we will assume that the weight of the edges represents
the distance between two nodes.
As we can see, we have the shortest paths from Node 0 to Node 1, from
Node 0 to Node 2, from Node 0 to Node 3, from Node 0 to Node 4, and from
Node 0 to Node 6.
Initially we have a set of resources given below :
The Distance from the source node to itself is 0. In this example the
source node is 0.
The distance from the source node to all other node is unknown so we
mark all of them as infinity.
Example: 0 -> 0, 1-> ∞,2-> ∞,3-> ∞,4-> ∞,5-> ∞,6-> ∞.
we’ll also have an array of unvisited elements that will keep track of
unvisited or unmarked Nodes.
The algorithm will be complete when all the nodes are marked as visited
and their distances added to the path. Unvisited Nodes: 0 1 2 3 4 5 6.
Step 1: Start from Node 0 and mark it as visited; as you can check in the
below image, the visited node is marked red.
Dijkstra’s Algorithm
Step 2: Check the adjacent nodes. Now we have two choices (either choose
Node 1 with distance 2 or choose Node 2 with distance 6), and we choose
the node with the minimum distance. In this step Node 1 is the
minimum-distance adjacent node, so mark it as visited and add up the
distance.
Distance: Node 0 -> Node 1 = 2
Dijkstra’s Algorithm
Step 3: Then move forward and check the adjacent node, which is Node 3;
mark it as visited and add up the distance. Now the distance will be:
Distance: Node 0 -> Node 1 -> Node 3 = 2 + 5 = 7
Dijkstra’s Algorithm
Step 4: Again we have two choices for the adjacent nodes (either Node 4
with distance 10 or Node 5 with distance 15), so choose the node with the
minimum distance. In this step Node 4 is the minimum-distance adjacent
node, so mark it as visited and add up the distance.
Dijkstra’s Algorithm
Step 5: Again, move forward and check the adjacent node, which is Node 6;
mark it as visited and add up the distance. Now the distance will be:
Distance: Node 0 -> Node 1 -> Node 3 -> Node 4 -> Node 6 = 2 + 5 + 10
+ 2 = 19
Dijkstra’s Algorithm
So, the shortest distance from the source vertex is 19, which is the
optimal one.
Pseudo Code for Dijkstra’s Algorithm
function Dijkstra(Graph, source):
// Initialize distances to all nodes as infinity, except for the source node.
distances = map infinity to all nodes
distances[source] = 0
// Initialize an empty set of visited nodes and a priority queue to keep
track of the nodes to visit.
visited = empty set
queue = new PriorityQueue()
queue.enqueue(source, 0)
// Loop until all nodes have been visited.
while queue is not empty:
// Dequeue the node with the smallest distance from the priority
queue.
current = queue.dequeue()
// If the node has already been visited, skip it.
if current in visited:
continue
// Mark the node as visited.
visited.add(current)
// Check all neighboring nodes to see if their distances need to be
updated.
for neighbor in Graph.neighbors(current):
// Calculate the tentative distance to the neighbor through the
current node.
tentative_distance = distances[current] +
Graph.distance(current, neighbor)
129
// If the tentative distance is smaller than the current distance to the
Fundamental of
neighbor, update the distance.
Algorithms
if tentative_distance< distances[neighbor]:
distances[neighbor] = tentative_distance
// Enqueue the neighbor with its new distance to be considered
for visitation in the future.
queue.enqueue(neighbor, distances[neighbor])
// Return the calculated distances from the source to all other nodes in
the graph.
return distances
Example:
Input: Source = 0
Output:
Vertex   Distance from Source
0 -> 0
1 -> 2
2 -> 6
3 -> 7
4 -> 17
5 -> 22
6 -> 19
Below is the algorithm based on the above idea:
1. Initialize the distances of all vertices as infinite.
2. Create an empty priority queue pq. Every item of pq is a pair (weight, vertex). Weight (or distance) is used as the first item of the pair because the first item is by default used to compare two pairs.
3. Insert the source vertex into pq and make its distance 0.
4. While pq doesn't become empty:
   a. Extract the minimum-distance vertex from pq. Let the extracted vertex be u.
   b. Loop through all vertices adjacent to u, and for every adjacent vertex v:
   c. If there is a shorter path to v through u, update the distance of v and insert v into pq.
// C++ program for Dijkstra's shortest path algorithm
// using the STL priority queue as a min-heap
#include <bits/stdc++.h>
using namespace std;
#define INF 0x3f3f3f3f

typedef pair<int, int> iPair; // shorthand for a (int, int) pair

class Graph {
    int V; // No. of vertices
    // In a weighted graph, we need to store vertex
    // and weight pair for every edge
    list<pair<int, int>>* adj;

public:
    Graph(int V); // Constructor
    // function to add an edge to graph
    void addEdge(int u, int v, int w);
    // prints shortest path from s
    void shortestPath(int s);
};

// Allocates memory for adjacency list
Graph::Graph(int V)
{
    this->V = V;
    adj = new list<iPair>[V];
}

void Graph::addEdge(int u, int v, int w)
{
    adj[u].push_back(make_pair(v, w));
    adj[v].push_back(make_pair(u, w));
}

// Prints shortest paths from src to all other vertices
void Graph::shortestPath(int src)
{
    // Create a priority queue to store vertices that
    // are being preprocessed. This is weird syntax in C++.
    // Refer to the link below for details of this syntax:
    // https://www.geeksforgeeks.org/implement-min-heap-using-stl/
    priority_queue<iPair, vector<iPair>, greater<iPair>> pq;

    // Create a vector for distances and initialize all
    // distances as infinite (INF)
    vector<int> dist(V, INF);

    // Insert source itself in priority queue and initialize
    // its distance as 0.
    pq.push(make_pair(0, src));
    dist[src] = 0;

    /* Looping till priority queue becomes empty (or all
       distances are finalized) */
    while (!pq.empty()) {
        // The first item in the pair is the minimum distance;
        // extract that vertex from the priority queue.
        // The vertex label is stored in the second item (it
        // has to be done this way to keep the vertices sorted
        // by distance; distance must be the first item in the pair).
        int u = pq.top().second;
        pq.pop();

        // 'i' is used to get all adjacent vertices of a vertex
        list<pair<int, int>>::iterator i;
        for (i = adj[u].begin(); i != adj[u].end(); ++i) {
            // Get vertex label and weight of current adjacent of u.
            int v = (*i).first;
            int weight = (*i).second;

            // If there is a shorter path to v through u.
            if (dist[v] > dist[u] + weight) {
                // Updating distance of v
                dist[v] = dist[u] + weight;
                pq.push(make_pair(dist[v], v));
            }
        }
    }

    // Print shortest distances stored in dist[]
    printf("Vertex Distance from Source\n");
    for (int i = 0; i < V; ++i)
        printf("%d \t\t %d\n", i, dist[i]);
}

// Driver program to test methods of graph class
int main()
{
    // create the graph given in the above figure
    int V = 7;
    Graph g(V);

    g.addEdge(0, 1, 2);
    g.addEdge(0, 2, 6);
    g.addEdge(1, 3, 5);
    g.addEdge(2, 3, 8);
    g.addEdge(3, 4, 10);
    g.addEdge(3, 5, 15);
    g.addEdge(4, 6, 2);
    g.addEdge(5, 6, 6);

    g.shortestPath(0);

    return 0;
}
Output
Vertex Distance from Source
0 0
1 2
2 6
3 7
4 17
5 22
6 19
Complexity Analysis of Dijkstra's Algorithm using Priority Queue:
Time Complexity: O(E log V)
Space Complexity: O(V^2), where V is the number of vertices.
2. Dijkstra's shortest path algorithm using Prim's Algorithm
Given a graph and a source vertex in the graph, find the shortest paths from the source to all vertices in the given graph.
Example:
Example
Output: 0 2 6 7 17 22 19
Follow the steps below to implement Dijkstra's Algorithm using Prim's technique:
Create a set sptSet (shortest-path-tree set) that keeps track of the vertices included in the shortest-path tree, i.e., those whose minimum distance from the source has been calculated and finalized. Initially, this set is empty.
Assign a distance value to all vertices in the input graph. Initialize all distance values as INFINITE. Assign the distance value 0 to the source vertex so that it is picked first.
While sptSet doesn't include all vertices:
Pick a vertex u which is not in sptSet and has a minimum distance value.
Include u in sptSet.
Then update the distance values of all adjacent vertices of u. To update the distance values, iterate through all adjacent vertices. For every adjacent vertex v, if the sum of the distance value of u (from the source) and the weight of edge u-v is less than the distance value of v, then update the distance value of v.
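The steps above can be sketched in Python. This is a minimal array-based version (function and variable names are our own); the adjacency matrix is built to match the worked example, whose output is 0 2 6 7 17 22 19:

```python
INF = float('inf')

def dijkstra(graph, src):
    """Array-based Dijkstra: graph is an adjacency matrix, 0 = no edge."""
    V = len(graph)
    dist = [INF] * V
    dist[src] = 0
    spt_set = [False] * V          # vertices whose distance is finalized

    for _ in range(V):
        # Pick the unfinalized vertex u with the minimum distance value.
        u = min((v for v in range(V) if not spt_set[v]), key=lambda v: dist[v])
        spt_set[u] = True
        # Relax every edge (u, v) out of u.
        for v in range(V):
            if graph[u][v] and not spt_set[v] and dist[u] + graph[u][v] < dist[v]:
                dist[v] = dist[u] + graph[u][v]
    return dist

# Graph from the worked example (7 vertices).
g = [[0] * 7 for _ in range(7)]
for u, v, w in [(0, 1, 2), (0, 2, 6), (1, 3, 5), (2, 3, 8),
                (3, 4, 10), (3, 5, 15), (4, 6, 2), (5, 6, 6)]:
    g[u][v] = g[v][u] = w

print(dijkstra(g, 0))  # [0, 2, 6, 7, 17, 22, 19]
```

Each pass finalizes one vertex, so the loop runs V times and each pass scans all vertices, giving the O(V^2) behaviour of the array-based method.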
3. Dijkstra's shortest path with minimum edges
In this implementation we are given an adjacency matrix representing paths between the nodes of the graph. The task is to find the shortest path with the minimum number of edges, i.e., if there are multiple shortest paths with the same cost, choose the one with the minimum number of edges.
Consider the graph given below:
Example
There are two paths from vertex 0 to vertex 3 with a weight of 12:
0 -> 1 -> 2 -> 3
0 -> 4 -> 3
Since Dijkstra's algorithm is a greedy algorithm that seeks the minimum-weight vertex on every iteration, the original Dijkstra's algorithm will output the first path, but the result should be the second path, as it contains the minimum number of edges.
Examples:
Input: graph[][] = { {0, 1, INFINITY, INFINITY, 10},
{1, 0, 4, INFINITY, INFINITY},
{INFINITY, 4, 0, 7, INFINITY},
{INFINITY, INFINITY, 7, 0, 2},
{10, INFINITY, INFINITY, 2, 0} };
Output: 0->4->3
INFINITY here shows that u and v are not neighbors
Input: graph[][] = { {0, 5, INFINITY, INFINITY},
{5, 0, 5, 10},
{INFINITY, 5, 0, 5},
{INFINITY, 10, 5, 0} };
Output: 0->1->3
Approach: The idea is to use the original Dijkstra's algorithm, but also to keep track of the lengths of the paths with an array that stores, for each vertex, the number of edges on its path from the source; if we find a path with the same weight but fewer edges, we take it.
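A hedged sketch of this approach in Python (the names edges[] and parent[] are our own; the graph is the first example above, where both 0 -> 1 -> 2 -> 3 and 0 -> 4 -> 3 cost 12):

```python
INF = float('inf')

def dijkstra_min_edges(graph, src):
    """Dijkstra that, among equal-cost paths, prefers the one with fewer edges.
    graph is an adjacency matrix with INF meaning 'not neighbors'."""
    V = len(graph)
    dist = [INF] * V
    edges = [0] * V              # number of edges on the chosen path
    parent = list(range(V))
    done = [False] * V
    dist[src] = 0

    for _ in range(V):
        u = min((v for v in range(V) if not done[v]), key=lambda v: dist[v])
        done[u] = True
        for v in range(V):
            w = graph[u][v]
            if w == INF or done[v]:
                continue
            better = dist[u] + w < dist[v]
            tie_fewer = dist[u] + w == dist[v] and edges[u] + 1 < edges[v]
            if better or tie_fewer:              # same weight, fewer edges wins
                dist[v] = dist[u] + w
                edges[v] = edges[u] + 1
                parent[v] = u
    return dist, parent

def path_to(parent, v):
    """Walk parent[] back to the source and reverse."""
    out = [v]
    while parent[v] != v:
        v = parent[v]
        out.append(v)
    return out[::-1]

graph = [[0, 1, INF, INF, 10],
         [1, 0, 4, INF, INF],
         [INF, 4, 0, 7, INF],
         [INF, INF, 7, 0, 2],
         [10, INF, INF, 2, 0]]
dist, parent = dijkstra_min_edges(graph, 0)
print("->".join(map(str, path_to(parent, 3))))  # 0->4->3
```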
Complexity Analysis:
Time Complexity: O(V^2), where V is the number of vertices.
Auxiliary Space: O(V + E), where E is the number of edges.
4. Dijkstra’s shortest path algorithm using Set
Given a graph and a source vertex in the graph, find the shortest paths from the source to all vertices in the given graph.
Example:
Example
Output: 0 2 6 7 17 22 19
Below is the algorithm based on the set data structure.
1. Initialize the distances of all vertices as infinite.
2. Create an empty set. Every item of the set is a pair (weight, vertex). Weight (or distance) is used as the first item of the pair because the first item is by default used to compare two pairs.
3. Insert the source vertex into the set and make its distance 0.
4. While the set doesn't become empty, do the following:
   a) Extract the minimum-distance vertex from the set. Let the extracted vertex be u.
   b) Loop through all vertices adjacent to u and do the following for every adjacent vertex v:
      // If there is a shorter path to v through u,
      if dist[v] > dist[u] + weight(u, v):
      (i) Update the distance of v, i.e., dist[v] = dist[u] + weight(u, v)
      (ii) If v is in the set, update its distance in the set by removing it first, then inserting it with the new distance
      (iii) If v is not in the set, insert it into the set with the new distance
5. Print the distance array dist[] to print all shortest paths.
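The steps above can be sketched in Python. Python has no built-in balanced-BST set, so this sketch uses a plain set of (distance, vertex) pairs with min() standing in for extract-min (an assumption of the sketch; a tree-based set is what gives the stated O(E log V)):

```python
INF = float('inf')

def dijkstra_set(adj, src, V):
    """adj: dict mapping u -> list of (v, weight) pairs."""
    dist = [INF] * V
    dist[src] = 0
    active = {(0, src)}                 # the set of (distance, vertex) pairs

    while active:
        d, u = min(active)              # step 4a: extract minimum-distance vertex
        active.remove((d, u))
        for v, w in adj.get(u, []):     # step 4b: loop over neighbors of u
            if dist[u] + w < dist[v]:
                # remove the old entry for v (if any), then insert the new one
                active.discard((dist[v], v))
                dist[v] = dist[u] + w
                active.add((dist[v], v))
    return dist

adj = {}
for u, v, w in [(0, 1, 2), (0, 2, 6), (1, 3, 5), (2, 3, 8),
                (3, 4, 10), (3, 5, 15), (4, 6, 2), (5, 6, 6)]:
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))

print(dijkstra_set(adj, 0, 7))  # [0, 2, 6, 7, 17, 22, 19]
```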
Complexity Analysis:
Time Complexity: O(E log V), where V is the number of vertices and E is the number of edges.
Auxiliary Space: O(V^2)
Feature: Dijkstra's vs Bellman-Ford

Optimization: Dijkstra's algorithm is optimized for finding the shortest path between a single source node and all other nodes in a graph with non-negative edge weights. The Bellman-Ford algorithm is optimized for finding the shortest path between a single source node and all other nodes in a graph that may contain negative edge weights.

Relaxation: Dijkstra's algorithm uses a greedy approach, where it chooses the node with the smallest distance and updates its neighbors. The Bellman-Ford algorithm relaxes all edges in each iteration, updating the distance of each node by considering all possible paths to that node.
Feature: Dijkstra's vs Floyd-Warshall

Technique: Dijkstra's algorithm is a single-source shortest path algorithm that uses a greedy approach and calculates the shortest path from the source node to all other nodes in the graph. The Floyd-Warshall algorithm, on the other hand, is an all-pairs shortest path algorithm that uses dynamic programming to calculate the shortest path between all pairs of nodes in the graph.

Negative Weights: Dijkstra's algorithm does not work with graphs that have negative edge weights, as it assumes that all edge weights are non-negative. The Floyd-Warshall algorithm can handle negative edge weights, provided the graph contains no negative-weight cycles.
Feature: Dijkstra's vs A* Algorithm

Technique: Dijkstra's algorithm finds the shortest path between a single source node and all other nodes in a graph with non-negative edge weights, while the A* algorithm uses a heuristic function to guide the search towards the goal node.

Application: Dijkstra's algorithm is used in many applications such as routing algorithms, GPS navigation systems, and network analysis, while the A* algorithm is commonly used in pathfinding and graph traversal problems, such as video games, robotics, and planning algorithms.
The above graph can be represented as G(V, E), where 'V' is the set of vertices and 'E' is the set of edges. The spanning tree of the above graph would be represented as G'(V', E'). In this case, V' = V means that the number of vertices in the spanning tree is the same as the number of vertices in the graph, but the number of edges is different. The edges of the spanning tree are a subset of the edges of the original graph. Therefore, we can write:
E' ⊆ E
The number of edges in a spanning tree is:
E' = |V| - 1
The following conditions hold for a spanning tree:
o The number of vertices in the spanning tree is the same as the number of vertices in the original graph:
V' = V
o The number of edges in the spanning tree equals the number of vertices minus 1:
E' = |V| - 1
o The spanning tree should not contain any cycle.
o The spanning tree should not be disconnected.
The following are the spanning trees that we can make from the above
graph.
o The first spanning tree is a tree in which we have removed the edge
between the vertices 1 and 5 shown as below:
The sum of the edges of the above tree is (1 + 4 + 5 + 2): 12
o The second spanning tree is a tree in which we have removed the edge
between the vertices 1 and 2 shown as below:
The sum of the edges of the above tree is (3 + 2 + 5 + 4) : 14
o The third spanning tree is a tree in which we have removed the edge
between the vertices 2 and 3 shown as below:
The sum of the edges of the above tree is (1 + 3 + 2 + 5) : 11
o The fourth spanning tree is a tree in which we have removed the edge
between the vertices 3 and 4 shown as below:
The sum of the edges of the above tree is (1 + 3 + 2 + 4) : 10. The edge
cost 10 is minimum so it is a minimum spanning tree.
General properties of minimum spanning tree:
o If we remove any edge from the spanning tree, then it becomes
disconnected. Therefore, we cannot remove any edge from the
spanning tree.
o If we add an edge to the spanning tree then it creates a loop. Therefore,
we cannot add any edge to the spanning tree.
o If each edge in a graph has a distinct weight, then there exists only a single, unique minimum spanning tree. If the edge weights are not distinct, then there can be more than one minimum spanning tree.
o A complete undirected graph can have n^(n-2) spanning trees.
o Every connected and undirected graph contains at least one spanning tree.
o A disconnected graph does not have any spanning tree.
o In a complete graph, we can remove a maximum of (e - n + 1) edges to construct a spanning tree.
The number of spanning trees that can be made from the above complete graph equals n^(n-2) = 4^(4-2) = 16.
Therefore, 16 spanning trees can be created from the above graph.
The maximum number of edges that can be removed to construct a
spanning tree equals to e-n+1 = 6 - 4 + 1 = 3.
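Both counts for the 4-vertex complete graph can be checked directly (Cayley's formula n^(n-2), and e - n + 1 removable edges):

```python
n = 4                            # vertices in the complete graph K4
e = 6                            # edges in K4: n * (n - 1) / 2

spanning_trees = n ** (n - 2)    # Cayley's formula for complete graphs
removable_edges = e - n + 1      # edges removed to reach a spanning tree

print(spanning_trees, removable_edges)  # 16 3
```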
Application of Minimum Spanning Tree
1. Consider n stations are to be linked using a communication network &
laying of communication links between any two stations involves a
cost.
The ideal solution would be to extract a subgraph termed as minimum
cost spanning tree.
2. Suppose you want to construct highways or railroads spanning several
cities then we can use the concept of minimum spanning trees.
3. Designing Local Area Networks.
4. Laying pipelines connecting offshore drilling sites, refineries and
consumer markets.
5. Suppose you want to supply a set of houses with:
o Electric Power
o Water
o Telephone lines
o Sewage lines
To reduce cost, you can connect houses with minimum cost spanning
trees.
Kruskal's Algorithm:
An algorithm to construct a Minimum Spanning Tree for a connected weighted graph. It is a Greedy Algorithm. The greedy choice is to pick the smallest-weight edge that does not cause a cycle in the MST constructed so far.
Solution: First we initialize the set A to the empty set and create |V| trees, one containing each vertex, with the MAKE-SET procedure. Then sort the edges in E into non-decreasing order by weight.
There are 9 vertices and 12 edges, so the MST formed will have (9 - 1) = 8 edges.
Now, check for each edge (u, v) whether the endpoints u and v belong to the same tree. If they do, then the edge (u, v) cannot be added. Otherwise, the two vertices belong to different trees, the edge (u, v) is added to A, and the vertices of the two trees are merged by the UNION procedure.
Step 1: So, first take the (h, g) edge.
Step 3: Then the (a, b) and (i, g) edges are considered, and the forest becomes:
Step 4: Now, edge (h, i). Both vertices h and i are in the same set, so it creates a cycle and this edge is discarded.
Then the edges (c, d), (b, c), (a, h), (d, e), (e, f) are considered, and the forest becomes:
Step 5: For the (e, f) edge, both endpoints e and f exist in the same tree, so this edge is discarded. Then the (b, h) edge; it also creates a cycle.
Step 6: After that edge (d, f) and the final spanning tree is shown as in
dark lines.
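The greedy loop and the MAKE-SET/UNION bookkeeping above can be sketched with a small union-find; the five-edge graph at the bottom is invented for illustration (the weights of the figure's a..i graph are not reproduced here):

```python
def kruskal(n_vertices, edge_list):
    """edge_list: list of (weight, u, v). Returns (mst_edges, total_weight)."""
    parent = list(range(n_vertices))       # MAKE-SET: each vertex is its own tree

    def find(x):
        # Find the set representative, halving paths as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edge_list):      # edges in non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                       # different trees: the edge causes no cycle
            parent[ru] = rv                # UNION the two trees
            mst.append((u, v, w))
            total += w
    return mst, total

# Small illustrative graph on vertices 0..3 (weights assumed for this sketch).
edge_list = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst, total = kruskal(4, edge_list)
print(mst, total)  # 3 edges (= 4 - 1), total weight 6
```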
Prim's Algorithm:
At every step, it considers all the edges and picks the minimum-weight edge. After picking the edge, it moves the other endpoint of the edge to the set containing the MST.
Steps for finding MST using Prim's Algorithm:
1. Create an MST set that keeps track of vertices already included in the MST.
2. Assign key values to all vertices in the input graph. Initialize all key values as INFINITE (∞). Assign key value 0 to the first vertex so that it is picked first.
3. While the MST set doesn't include all vertices:
a. Pick a vertex u which is not in the MST set and has the minimum key value. Include 'u' in the MST set.
b. Update the key values of all adjacent vertices of u. To update, iterate through all adjacent vertices. For every adjacent vertex v, if the weight of edge u-v is less than the previous key value of v, update the key value to the weight of u-v.
MST-PRIM (G, w, r)
1. for each u ∈ V[G]
2.     do key[u] ← ∞
3.        π[u] ← NIL
4. key[r] ← 0
5. Q ← V[G]
6. while Q ≠ ∅
7.     do u ← EXTRACT-MIN(Q)
8.        for each v ∈ Adj[u]
9.            do if v ∈ Q and w(u, v) < key[v]
10.               then π[v] ← u
11.                    key[v] ← w(u, v)
Now remove 4 because key[4] = 25, which is minimum, so u = 4.
1. Adj [4] = {6, 3}
2. Key [3] = ∞ key [6] = ∞
3. w (4,3) = 22 w (4,6) = 24
4. w (u, v) < key [v] w (u, v) < key [v]
5. w (4,3) < key [3] w (4,6) < key [6]
Update key value of key [3] as 22 and key [6] as 24.
And the parent of 3, 6 as 4.
1. π[3]= 4 π[6]= 4
1. u = EXTRACT_MIN (2, 6)
2. u = 2 [key [2] < key [6]]
3. 12 < 18
4. Now the root is 2
5. Adj [2] = {3, 1}
6. 3 is already in a heap
7. Taking 1, key [1] = 28
8. w (2,1) = 16
9. w (2,1) < key [1]
So update key value of key [1] as 16 and its parent as 2.
1. π[1]= 2
Now all the vertices have been spanned. Using the above table, we get the Minimum Spanning Tree.
1. 0 → 5 → 4 → 3 → 2 → 1 → 6
2. [Because Π [5] = 0, Π [4] = 5, Π [3] = 4, Π [2] = 3, Π [1] =2, Π [6] =1]
Total Cost = 10 + 25 + 22 + 12 + 16 + 14 = 99
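The table-driven walkthrough above can be sketched in Python with a min-heap. The edge list below is inferred from the key and π values quoted in the walkthrough (w(0,5)=10, w(5,4)=25, w(4,3)=22, w(3,2)=12, w(2,1)=16, w(1,6)=14, plus the rejected edges w(0,1)=28, w(3,6)=18, w(4,6)=24), so treat it as an assumption:

```python
import heapq

def prim(adj, V, root=0):
    """adj: dict u -> list of (v, weight). Returns (parent, total_cost)."""
    key = [float('inf')] * V
    parent = [None] * V                    # the π[] array of MST-PRIM
    in_mst = [False] * V
    key[root] = 0
    pq = [(0, root)]                       # (key, vertex) min-heap

    while pq:
        _, u = heapq.heappop(pq)           # EXTRACT-MIN
        if in_mst[u]:
            continue                       # stale heap entry, skip
        in_mst[u] = True
        for v, w in adj[u]:
            if not in_mst[v] and w < key[v]:
                key[v] = w                 # cheapest edge connecting v to the tree
                parent[v] = u
                heapq.heappush(pq, (w, v))
    return parent, sum(key)

edges = [(0, 1, 28), (0, 5, 10), (1, 2, 16), (1, 6, 14), (2, 3, 12),
         (3, 4, 22), (3, 6, 18), (4, 5, 25), (4, 6, 24)]
adj = {u: [] for u in range(7)}
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))

parent, total = prim(adj, 7)
print(total)  # 99, matching the walkthrough's total cost
```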
6. Applications, Advantages and Disadvantages of Graph
7. Check whether the structure of given Graph is same as the game of Rock-Paper-Scissor
8. Maximum number of edges that N-vertex graph can have such that
graph is Triangle free | Mantel's Theorem
9. Python Program to Find Independent Sets in a Graph using Graph
Coloring
10. What are graphs? Enlist the types of graphs.
11. Explain Shortest Path First Algorithms?
12. What is BFS & DFS?
13. Explain minimal spanning tree with examples?
14. Explain Graph Representation & Graph Traversal?
15. Enlist Applications of Graphs?
16. Explain Topological Sort in detail?
17. Explain Graph Algorithms in details?
18. Write a short note on Weighted Graphs & Directed graphs?
19. Explain: What is Dijkstra's Algorithm? Introduction to Dijkstra's Shortest Path Algorithm.
20. What is Graph Colouring?
4.10 CONCLUSION
In this chapter, you came across plenty of graph algorithms and techniques that find day-to-day applications in many instances of real life. You learned how to implement them according to the situation, and the pseudo code helped you process the information strategically and efficiently. Graph algorithms are considered essential not only for solving problems with data structures but also in everyday tools like Google Maps and Apple Maps. However, a beginner might find graph algorithms hard to implement because of their complex nature; hence it is highly recommended to go through this chapter, since it covers everything from scratch.
5
SELECTION ALGORITHMS
Unit Structure
5.0 Objectives
5.1 Introduction
5.2 What are Selection Algorithms?
5.3 Selection by Sorting
5.4 Linear Selection Algorithm
5.5 Median of Medians Algorithm
5.6 Finding the K Smallest Elements in Sorted Order
5.7 List of References
5.8 Bibliography
5.0 OBJECTIVES
A sorting algorithm is a method for reorganizing a large number of items
into a specific order, such as alphabetical, highest-to-lowest value or
shortest-to-longest distance. Sorting algorithms take lists of items as input
data, perform specific operations on those lists and deliver ordered arrays
as output.
5.2 WHAT ARE SELECTION ALGORITHMS?
1. Selection by Sorting
Sorting the list or array and then selecting the required element makes selection easy. This method is inefficient for selecting a single element, but it is efficient when many selections need to be made from the same array, which then only needs to be sorted once. For a linked list, selection is O(n) even if the list is sorted, due to the lack of random access.
Instead of sorting the entire list or array, we can use partial sorting to select the kth smallest (or largest) element. The kth smallest (or largest) is then the largest (or smallest) element of the partially sorted list. This takes O(1) to access in an array and O(k) to access in a list.
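In Python, this partial-selection idea is available directly as heapq.nsmallest; a quick sketch comparing it with full sorting:

```python
import heapq

data = [7, 10, 4, 3, 20, 15]
k = 3

# kth smallest via full sorting: O(n log n)
print(sorted(data)[k - 1])             # 7

# Same answer via partial selection: keep only the k smallest,
# then the kth smallest is the largest of that partial result.
smallest_k = heapq.nsmallest(k, data)  # [3, 4, 7]
print(smallest_k[-1])                  # 7
```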
2. Partition-based Selection:
For partition-based selection, the Quickselect algorithm is used. It is a variant of the quicksort algorithm. In both, we choose a pivot element and, using the partition step from quicksort, arrange all the elements smaller than the pivot on its left and the elements greater than it on its right. But while Quicksort recurses on both sides of the partition, Quickselect recurses on only one side: the side on which the desired kth element is present. The partition-based algorithms work in place, which results in partially sorting the data. They can be done out of place, leaving the original data unchanged, at the cost of O(n) auxiliary space.
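A minimal Quickselect sketch (this is the out-of-place variant just mentioned; taking the last element as pivot is an assumption of the sketch):

```python
def quickselect(arr, k):
    """Return the kth smallest element (1-based) of arr."""
    if len(arr) == 1:
        return arr[0]
    pivot = arr[-1]                                  # pivot choice: last element
    left = [x for x in arr[:-1] if x <= pivot]       # partition step
    right = [x for x in arr[:-1] if x > pivot]
    if k <= len(left):
        return quickselect(left, k)                  # answer lies left of the pivot
    if k == len(left) + 1:
        return pivot                                 # pivot itself is the kth smallest
    return quickselect(right, k - len(left) - 1)     # answer lies right of the pivot

print(quickselect([7, 10, 4, 3, 20, 15], 3))  # 7
```

Note how each call recurses into only one of the two partitions, which is exactly what separates Quickselect from Quicksort.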
3. Median selected as pivot:
A median-selection algorithm can be used to build a selection or sorting algorithm, by selecting the median of the array as the pivot element in the Quickselect or Quicksort algorithm. In practice the overhead of pivot computation is significant, so these algorithms are generally not used, but the technique is of theoretical interest in relating selection and sorting algorithms. The median of an array is the best pivot for sorting because it evenly divides the data into two parts, and thus guarantees optimal sorting, assuming the selection algorithm is optimal.
4. Selection sort in Python:
In this tutorial, we will implement the selection sort algorithm in Python. It is a straightforward algorithm that performs few swaps.
In this algorithm, we select the smallest element from the unsorted part of the array in each pass and swap it with the beginning of the unsorted part. This process continues until all the elements are placed in the right position. It is a simple, in-place comparison sorting algorithm.
Step 3: Now compare the minimum with the second element. If the second element is smaller than the minimum, we assign it as the new minimum.
Again we compare the second element to the third, and if the third element is smaller than the second, assign it as the minimum. This process goes on until we reach the last element.
Step 4: After each iteration, the minimum element is swapped to the front of the unsorted part of the array.
Step 5: Steps 2 to 4 are repeated until we get the sorted array.
Algorithm
1. selection_sort(array)
2. repeat (0, length - 1) times
3. set the first unsorted element as the minimum
4. for each of the unsorted elements
5. if element < currentMinimum
6. set element as new minimum
7. swap minimum with first unsorted position
8. end selection_sort
Code -

def selection_sort(array):
    length = len(array)

    for i in range(length - 1):
        minIndex = i

        for j in range(i + 1, length):
            if array[j] < array[minIndex]:
                minIndex = j

        array[i], array[minIndex] = array[minIndex], array[i]

    return array

array = [21, 6, 9, 33, 3]

print("The sorted array is: ", selection_sort(array))
Output:
The sorted array is: [3, 6, 9, 21, 33]
Explanation -
Let's understand the above code:
o First, we define the selection_sort() function, which takes the array as an argument.
o In the function, we get the length of the array, which is used to determine the number of passes to be made while comparing values.
o As we can see, we use two loops: an outer and an inner loop. The outer loop iterates through the values of the list, from 0 to (length - 1), so it is performed (5 - 1) or 4 times. In each iteration, the value of the variable i is assigned to minIndex.
o The inner loop compares each element to the right of position i with the current minimum element. It starts its iteration from i + 1, so it only picks the values that are still unsorted.
o Find the minimum element in the unsorted part and update the minIndex position.
o Place that value at the beginning of the unsorted part of the array.
o Once the iterations are completed, the sorted array is returned.
o At last, we create an unsorted array and pass it to selection_sort(), which prints the sorted array.
What Is Searching?
Searching is the process of fetching a specific element in a collection of
elements. The collection can be an array or a linked list. If you find the
element in the list, the process is considered successful, and it returns the
location of that element.
And in contrast, if you do not find the element, it deems the search
unsuccessful.
Two prominent search strategies are extensively used to find a specific
item on a list. However, the algorithm chosen is determined by the list's
organization.
1. Linear Search
2. Binary Search
Moving ahead in this tutorial, you will understand what exactly a linear
search algorithm is.
If the match is not found, you move on to the next element and compare again.
Step 2: Now, search element 39 is compared to the second element of an
array, 9.
As both are not matching, you continue the search.
Step 3: Now, search element 39 is compared with the third element,
which is 21.
Again, both the elements are not matching, so you move on to the next element.
Step 4: Next, search element 39 is compared with the fourth element, which is 15.
A perfect match is found; you stop comparing any further elements, terminate the Linear Search Algorithm, and display the element found at location 4.
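The walkthrough condenses into a short Python routine; the array below is assumed for illustration (the figure's values are not fully recoverable), with the target at location 4:

```python
def linear_search(arr, target):
    """Compare target with each element in turn; return the 1-based
    location on success, or -1 if the search is unsuccessful."""
    for i, value in enumerate(arr):
        if value == target:
            return i + 1          # match found: report its location
    return -1                     # no element matched

arr = [44, 9, 21, 39, 15]         # assumed example array
print(linear_search(arr, 39))     # 4
```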
Following the practical implementation, you will move on to the complexity of the linear search algorithm.
6. Use the quickSelect subroutine to find the true median of array M. The median obtained is the viable pivot.
7. Terminate the algorithm once the base case is hit, that is, when the sublist becomes small enough. Use the brute-force Select subroutine to find its median.
Note: We used chunks of size 5 because selecting a median from a list
whose size is an odd number is easier. Even numbers require additional
computation. We could also select 7 or any other odd number as we shall
see in the proofs below.
Here is the procedure pictorially;
medianOfMedians(arr[1...n])
    if (n < 5) return Select(arr[1...n], n/2)
    Let M be an empty list
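The procedure can also be written as a short Python sketch (the brute-force Select for small lists and the recursive call standing in for the quickSelect subroutine are simplifications of this sketch):

```python
def median_of_medians(arr):
    """Return a good pivot: a value greater than and less than
    at least about 3n/10 elements of arr."""
    if len(arr) < 5:
        return sorted(arr)[len(arr) // 2]      # base case: brute-force median
    # Split into chunks of 5 and take each chunk's median to build list M.
    medians = [sorted(chunk)[len(chunk) // 2]
               for chunk in (arr[i:i + 5] for i in range(0, len(arr), 5))]
    # Recursively find the true median of M (in place of quickSelect).
    return median_of_medians(medians)

data = list(range(100, 0, -1))      # 100, 99, ..., 1
pivot = median_of_medians(data)
rank = sorted(data).index(pivot)    # 0-based rank of the chosen pivot
print(pivot, rank)                  # the pivot lands between the 30th and 70th percentiles
```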
Time and Space Complexity of the Median of Medians Algorithm
This algorithm runs in O(n) linear time; we traverse the list once to find the medians of the sublists and again to find the true median to be used as a pivot.
The space complexity is O(log n); the memory used is proportional to the size of the lists.
Proof:
Array M consists of the n/5 medians of sublists of size 5; each element of M is greater than and less than at least two elements of its sublist. QuickSelect returns a true median of M, which is greater than and less than (n/5)/2 elements of M, and since each element of M is greater than and less than at least two other elements of its sublist, the true median is greater than and less than at least 3n/10 elements (the 30th percentile) of the whole list.
2. Lemma:
With this, we guarantee that quickSelect finds a good pivot in linear time O(n), which in turn guarantees that quickSort's worst case is O(n log n).
Proof:
Recurrence relation:
T(n) = c, if n ≤ 1
T(n) = T(n/5) + T(7n/10) + dn, if n > 1
Theorem:
1. T(n) ≤ k · n for n ≥ n'
2. T(n) = T(n/5) + T(7n/10) + dn
Base Case:
n = 1: T(1) = c ≤ k · 1, therefore k ≥ c
Induction Hypothesis:
Assume T(j) ≤ k · j for all j < n
Induction Step:
T(n) = T(n/5) + T(7n/10) + dn
T(n) ≤ k · n/5 + k · 7n/10 + dn = (9/10)kn + dn
By induction:
T(n) ≤ k · n for n ≥ 1
The above proof works because 1/5 + 7/10 < 1. We split the original list into chunks of 5, assuming its length is divisible by 5; we could also use another odd chunk size, provided the corresponding fractions sum to less than 1, and the algorithm would still run in O(n) linear time. Therefore QuickSelect runs in Θ(n), which proves that using this heuristic improves the worst case of QuickSelect to O(n) and of QuickSort to O(n log n).
5.6 Find the Kth smallest element in the sorted generated array
Given an array arr[] of N elements and an integer K, the task is to generate an array B[] with the following rules:
1. Copy elements arr[1…N], N times to array B[].
2. Copy elements arr[1…N/2], 2*N times to array B[].
3. Copy elements arr[1…N/4], 3*N times to array B[].
4. Continue similarly until no element is left to be copied to array B[].
Finally print the Kth smallest element from the array B[]. If K is out of
bounds of B[] then return -1.
Examples:
Input: arr[] = {1, 2, 3}, K = 4
Output: 1
{1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 1, 1, 1, 1, 1} is the required array B[],
{1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3} in sorted form, where
1 is the 4th smallest element.
Input: arr[] = {2, 4, 5, 1}, K = 13
Output: 2
Approach:
Maintain a count array storing the number of times every element occurs in array B[]. For a whole range of elements, this can be done by adding the count at the start index and subtracting the same count at (end index + 1).
1. Take the cumulative sum of the count array.
2. Maintain all elements of arr[] along with their counts in B[] and sort them by element value.
3. Traverse through the vector and see which element occupies the Kth position in B[] according to the individual counts.
4. If K is out of bounds of B[], return -1.
Below is the implementation of the above approach:
# Python3 implementation of the approach

# Function to return the Kth element in B[]
def solve(Array, N, K):

    # Initialize the count array
    count_Arr = [0] * (N + 2)
    factor = 1
    size = N

    # Reduce N repeatedly to half its value
    while (size):
        start = 1
        end = size

        # Add count to start
        count_Arr[1] += factor * N

        # Subtract same count after end index
        count_Arr[end + 1] -= factor * N

        factor += 1
        size //= 2

    for i in range(2, N + 1):
        count_Arr[i] += count_Arr[i - 1]

    # Store each element of Array[] with its count
    element = []
    for i in range(N):
        element.append((Array[i], count_Arr[i + 1]))

    # Sort the elements by value
    element.sort()

    start = 1
    for i in range(N):
        end = start + element[i][1] - 1

        # If the Kth element is in range of element[i],
        # return element[i]
        if (K >= start and K <= end):
            return element[i][0]

        start += element[i][1]

    # If K is out of bounds
    return -1

# Driver code
if __name__ == "__main__":

    arr = [2, 4, 5, 1]
    N = len(arr)
    K = 13

    print(solve(arr, N, K))

# This code is contributed by AnkitRai01
Output:
2
Time Complexity: O(N * log N)
Auxiliary Space: O(N)
Find the kth element in the series generated by the given N ranges
Given N non-overlapping ranges L[] and R[], where every range starts after the previous range ends, i.e. L[i] > R[i - 1] for all valid i. The
task is to find the Kth element in the series which is formed after sorting
all the elements in all the given ranges in ascending order.
Examples:
Input: L[] = {1, 8, 21}, R[] = {4, 10, 23}, K = 6
Output: 9
The generated series will be 1, 2, 3, 4, 8, 9, 10, 21, 22, 23
And the 6th element is 9
Input: L[] = {2, 11, 31}, R[] = {7, 15, 43}, K = 13
Output: 32
Approach: The idea is to use binary search. An array total[] stores the number of integers present up to the ith index; with the help of this array, find the index of the range in which the kth integer lies. Suppose that index is j; now compute the position of the kth smallest integer within the interval L[j] to R[j] and find it using binary search, where low will be L[j] and high will be R[j]. Below is the implementation of the above approach:
# Python3 implementation of the approach

# Function to return the kth element
# of the required series
def getKthElement(n, k, L, R):
    l = 1
    h = n

    # To store the number of integers that lie
    # up to the ith index
    total = [0 for i in range(n + 1)]
    total[0] = 0

    # Compute the number of integers
    for i in range(n):
        total[i + 1] = total[i] + (R[i] - L[i]) + 1

    # Stores the index, lying from 1 to n
    index = -1

    # Using binary search, find the index
    # of the range in which the kth element lies
    while (l <= h):
        m = (l + h) // 2
        if (total[m] > k):
            index = m
            h = m - 1
        elif (total[m] < k):
            l = m + 1
        else:
            index = m
            break

    l = L[index - 1]
    h = R[index - 1]

    # Find the position of the kth element
    # in the interval in which it lies
    x = k - total[index - 1]
    while (l <= h):
        m = (l + h) // 2
        if ((m - L[index - 1]) + 1 == x):
            return m
        elif ((m - L[index - 1]) + 1 > x):
            h = m - 1
        else:
            l = m + 1

# Driver code
L = [1, 8, 21]
R = [4, 10, 23]
n = len(L)
k = 6
print(getKthElement(n, k, L, R))
Output:
9
Time Complexity: O(N)
Auxiliary Space: O(N)
177
Algorithm
Fundamental of
Algorithms The following steps are involved in finding thekth smallest element using
sorting.
1. Sort the given array in ascending order using a sorting algorithm
like merge sort, bubble sort, or heap sort.
2. Return the element at index k−1 in the sorted array.
The illustration below further explains this concept:
Implementation
The following code snippet implements the above algorithm in C++:
#include <iostream>
#include <algorithm>
using namespace std;

int main() {
    int arr[] = {10, 1, 0, 8, 7, 2};
    int size = sizeof(arr) / sizeof(arr[0]);

    // Sorting the array
    sort(arr, arr + size);

    // Finding the third smallest element in the array
    cout << "The third smallest element in the array is: " << arr[2] << endl;

    return 0;
}
Exercises
1. What is meant by sorting?
2. What are the two main classifications of sorting based on the source
of data?
3. What is meant by external and internal sorting?
4. What is the purpose of quick sort?
5. What is the advantage of quick sort?
6. What is the purpose of insertion sort?
7. Define merge sort.
8. What are the advantages of merge sort?
9. What is linear search?
10. What is binary search?
11. Differentiate linear search and binary search.
12. Differentiate quick sort and merge
13. Give the advantage of merge sort
14. Distinguish quick sort and insertion sort.
15. Define sorting.
16. Narrate insertion sort with example
17. List examples for various sorting
18. Give the advantage of Merge sort
19. List linear search and binary search with example
20. Narrate insertion sort with example
1. Write and trace the following algorithms with suitable example.
I. Breadth first traversal II. Depth first traversal
2. Sort the sequence 3,1,4,1,5,9,2,6,5 using insertion sort.
3. Give short notes on: (i) Merge sort with suitable example.
4. Write an algorithm to sort 'n' numbers using quicksort.
5. Show how the following numbers are sorted using quicksort: 42, 28, 90, 2, 56, 39, 12, 87. Sort the sequence 13, 11, 74, 37, 85, 39, 22, 56, 25 using insertion sort.
6. Explain the operation and implementation of merge sort.
7. Explain the operation and implementation of external sorting.
8. Write quick sort algorithm
9. Trace the quick sort algorithm for the following list of numbers.
90,77,60,99,55,88,66
10. Write down the merge sort algorithm and give its worst case, best
case and average case analysis.
11. Explain linear search algorithm with an example.
12. Explain linear search & binary search algorithm in detail.
13. Briefly differentiate linear search algorithm with binary search
algorithm.
5.8 BIBLIOGRAPHY
1 In computer science, selection sort is an in-place comparison sorting algorithm. It has an O(n^2) time complexity, which makes it inefficient on large lists, and it generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity and has performance advantages over more complicated algorithms in certain situations, particularly where auxiliary memory is limited.
2 The algorithm divides the input list into two parts: a sorted sublist of
items which is built up from left to right at the front (left) of the list and
a sublist of the remaining unsorted items that occupy the rest of the list.
Initially, the sorted sublist is empty and the unsorted sublist is the entire
input list. The algorithm proceeds by finding the smallest (or largest,
depending on sorting order) element in the unsorted sublist, exchanging
(swapping) it with the leftmost unsorted element (putting it in sorted
order), and moving the sublist boundaries one element to the right.
3 The time efficiency of selection sort is quadratic, so there are a number of sorting techniques which have better time complexity than selection sort.
4 Example
Here is an example of this sort algorithm sorting five elements:
arr[] = 64 25 12 22 11
// Find the minimum element in arr[0...4]
// and place it at beginning
11 25 12 22 64
// Find the minimum element in arr[1...4]
// and place it at beginning of arr[1...4]
11 12 25 22 64
// Find the minimum element in arr[2...4]
// and place it at beginning of arr[2...4]
11 12 22 25 64
// Find the minimum element in arr[3...4]
// and place it at beginning of arr[3...4]
11 12 22 25 64
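The trace above can be sketched as a short Python routine (an illustrative implementation added here, not a listing from the original text):

```python
def selection_sort(arr):
    # Repeatedly find the minimum of the unsorted suffix
    # and swap it to the front of that suffix.
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```

Each pass of the outer loop corresponds to one line of the trace above.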
Conclusion of selection sort

Selection sort is a sorting algorithm known for its simplicity. Unfortunately, it is inefficient on huge lists of items, and it does not stop early even when the list is already sorted: it always performs n - 1 passes, where n is the number of elements.
6
ALGORITHM AND DESIGN
TECHNIQUES
Unit Structure :
6.0 Objectives
6.1 Introduction
6.2 Algorithm & its classification
6.2.1 Classification by implementation method
6.2.2 Classification by design method
6.2.3 Classification by design approaches
6.2.4 Other classification
6.3 Summary
6.4 Questions
6.5 References
6.0 OBJECTIVES
After this chapter, you will be able to explain what an algorithm is and to classify algorithms by implementation method, design method, design approach, and other criteria.
6.2 ALGORITHM & ITS CLASSIFICATION
The word Algorithm means "a set of finite rules or instructions to be followed in calculations or other problem-solving operations", or "a procedure for solving a mathematical problem in a finite number of steps that frequently involves recursive operations". Therefore, an algorithm is a procedure for solving a particular problem in a finite number of steps for a finite-sized input. Algorithms can be classified in various ways. They are:
1. Implementation Method
2. Design Method
3. Design Approaches
4. Other Classifications
The different algorithms in each classification method are discussed
below.
Recursion or Iteration:
A recursive algorithm is an algorithm which calls itself again and again until a base condition is reached, whereas iterative algorithms use loops and/or data structures like stacks and queues to solve a problem. Every recursive solution can be implemented as an iterative solution and vice versa.
Example: The Tower of Hanoi is implemented in a recursive fashion, while the Stock Span problem is implemented iteratively.
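As a small illustration of the recursion-versus-iteration distinction, here is the same computation (factorial, a hypothetical example not taken from the text) written both ways:

```python
def factorial_recursive(n):
    # The function calls itself until the base condition is reached.
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # The same computation expressed with a loop instead of self-calls.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial_recursive(5), factorial_iterative(5))  # 120 120
```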
Exact or Approximate:
Algorithms that are capable of finding an optimal solution for any problem are known as exact algorithms. For all those problems where it is not possible to find the most optimized solution, an approximation algorithm is used; approximation algorithms find a result that is close to the optimal one by combining the outcomes of sub-problems. Example: For NP-Hard problems, approximation algorithms are used, while sorting algorithms are exact algorithms.
Linear Programming:
In Linear Programming, there are inequalities in terms of inputs and
maximizing or minimizing some linear functions of inputs.
Example: Maximum flow of Directed Graph.
Reduction (Transform and Conquer):
In this method, we solve a difficult problem by transforming it into a
known problem for which we have an optimal solution. Basically, the
goal is to find a reducing algorithm whose complexity is not
dominated by the resulting reduced algorithms. Example: Selection
algorithm for finding the median in a list involves first sorting the list
and then finding out the middle element in the sorted list. These
techniques are also called transform and conquer.
Backtracking:
This technique is very useful in solving combinatorial problems that have a single unique solution, where we have to find the correct combination of steps that leads to fulfillment of the task. Such problems have multiple stages and there are multiple options at each stage. This approach is based on exploring each available option at every stage one by one. While exploring an option, if a point is reached that doesn't seem to lead to the solution, the program control backtracks one step and starts exploring the next option. In this way, the program explores all possible courses of action and finds the route that leads to the solution. Example: N-queen problem, maze problem.
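The N-queen example mentioned above can be sketched with a simple backtracking routine; this is an illustrative sketch (function name and representation are assumptions), not the book's implementation:

```python
def n_queens(n, cols=()):
    # cols[r] is the column of the queen already placed in row r.
    row = len(cols)
    if row == n:
        return [cols]            # all rows filled: one complete solution
    solutions = []
    for c in range(n):
        # Check column and both diagonals against earlier rows;
        # on conflict the option is skipped (the "backtrack").
        if all(c != pc and abs(c - pc) != row - pr
               for pr, pc in enumerate(cols)):
            solutions += n_queens(n, cols + (c,))
    return solutions

print(len(n_queens(4)))  # 2
```

The 4-queens board has exactly two solutions, found by exploring and abandoning partial placements.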
Branch and Bound:
This technique is very useful in solving combinatorial optimization problems that have multiple solutions, where we are interested in finding the most optimal one. In this approach, the entire solution space is represented in the form of a state space tree. As the program progresses, each state combination is explored, and the previously best solution is replaced by a new one if the new solution is better than the current one.
Example: Job sequencing, Travelling Salesman Problem.
6.2.3 Classification By Design Approaches
There are two approaches for designing an algorithm these approaches
include
1. Top-Down Approach :
2. Bottom-up approach
Top-Down Approach: In the top-down approach, a large problem is divided into small sub-problems, and the process of decomposing problems is repeated until the complex problem is solved.
Bottom-up Approach: The bottom-up approach is the reverse of the top-down approach. In this approach, the different parts of a complex program are solved first, using a programming language, and are then combined into a complete program.
6.2.4 Other Classifications
Apart from classifying the algorithms into the above broad categories,
the algorithm can be classified into other broad categories like:
Randomized Algorithms: Algorithms that make random choices for faster solutions are known as randomized algorithms.
6.3 SUMMARY
In this chapter we first gave an introduction to algorithms. There are many ways of classifying algorithms, and we have discussed a few of them in brief. We classified algorithms by implementation method, design method, design approach, and a few other types of classification.
6.4 QUESTIONS
1. Explain Classification by implementation method.
2. Explain classification by design approaches.
3. Explain classification by design method.
4. Explain Other Classification types.
6.5 REFERENCES
1. Data Structure and Algorithmic Thinking with Python, Narasimha
Karumanchi , CareerMonk Publications, 2016.
2. Introduction to Algorithm, Thomas H Cormen, PHI
7
GREEDY ALGORITHMS
Unit Structure :
7.0 Objectives
7.1 Introduction
7.2 What is greedy algorithm?
7.3 Greedy strategy
7.3.1 Elements of greedy algorithms
7.3.2 Does greedy always work?
7.3.3 Why to use the greedy algorithm?
7.4 Advantages & disadvantages of greedy algorithm
7.5 Greedy applications
7.6 Understanding greedy technique
7.6.1 Huffman coding
7.6.2 Fractional knapsack problem
7.6.3 Minimum coin change problem
7.7 Summary
7.8 Questions
7.9 References
7.0 OBJECTIVES
In algorithm design there is no single 'silver bullet' that cures all computational problems. Different problems require the use of different kinds of techniques.
7.1 INTRODUCTION
Let us start our discussion with simple theory that will give us an
understanding of the Greedy technique. In the game of Chess, every
time we make a decision about a move, we have to also think about the
future consequences. Whereas, in the game of Tennis (or Volleyball),
our action is based on the immediate situation. This means that in some
cases making a decision that looks right at that moment gives the best
solution (Greedy), but in other cases it doesn’t. The Greedy technique is
best suited for looking at the immediate situation.
The goal of the greedy algorithm is to find the optimal solution; there can be only one optimal solution. For example, consider the container loading problem. In the container loading problem, a large ship is loaded with cargo. The cargo has containers of equal size but different weights, and the cargo capacity of the ship is cp. We need to load the containers into the ship in such a way that the ship carries the maximum number of containers. For example, say there are a total of 4 containers with weights w1 = 20, w2 = 75, w3 = 40, w4 = 20, and let cp = 80. Then, to load the maximum number of containers, we load containers 1, 3 and 4.
[Figure: the ship's cargo loaded with the maximum number of containers, C1, C4 and C3]
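The container loading example above can be sketched as follows, greedily loading the lightest containers first; the function name and structure are illustrative assumptions, not the book's code:

```python
def load_containers(weights, cp):
    # Greedy choice: always load the lightest remaining container
    # while it still fits in the remaining capacity cp.
    loaded = []
    for i, w in sorted(enumerate(weights, start=1), key=lambda t: t[1]):
        if w <= cp:
            loaded.append(i)
            cp -= w
    return sorted(loaded)

print(load_containers([20, 75, 40, 20], 80))  # [1, 3, 4]
```

With the weights from the example, containers 1, 4 and 3 are loaded, matching the figure.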
7.3 GREEDY STRATEGY
Greedy algorithms work in stages. In each stage, a decision is made
that is good at that point, without bothering about the future. This
means that some local best is chosen. It assumes that a local good
selection makes for a global optimal solution.
The choice made by the greedy approach does not consider future
data and choices. In some cases making a decision that looks right at
that moment gives the best solution (Greedy), but in other cases, it
doesn’t.
The greedy technique is used for optimization problems (where we
have to find the maximum or minimum of something). The Greedy
technique is best suited for looking at the immediate situation. Let us
study Greedy Strategy with an example:
1. Let's start with the root node 20. The weight of the right child is 3 and the weight of the left child is 2.
2. Our problem is to find the largest path. The locally optimal choice at the moment is 3, so the greedy algorithm will choose 3.
3. Finally, the only child of 3 has weight 1. This gives us the final result 20 + 3 + 1 = 24.
4. However, this is not the optimal solution: there is another path that carries more weight (20 + 2 + 10 = 32), as shown in the figure below.
        20
       /  \
      2    3
     / \    \
    7   10   1
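The tree example above can be reproduced in a few lines; in this sketch the tree is hard-coded as a dictionary (an assumption for illustration), and the greedy choice yields 24 rather than the optimal 32:

```python
# Children of each node in the example tree above.
tree = {20: [2, 3], 2: [7, 10], 3: [1]}

def greedy_path_sum(node):
    # At each step take the larger child, ignoring the future.
    total = node
    while tree.get(node):
        node = max(tree[node])
        total += node
    return total

print(greedy_path_sum(20))  # 24, not the optimal 32
```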
Greedy algorithms can be used for optimization purposes, or for finding close-to-optimal solutions in the case of hard problems.
The difficult part is that for greedy algorithms you have to work
much harder to understand correctness issues. Even with the correct
algorithm, it is hard to prove why it is correct.
The local optimal solution may not always be globally optimal.
Not applicable for many problems
7.5 GREEDY APPLICATIONS
Sorting: Selection sort, Topological sort.
Priority Queues: Heap sort.
Huffman coding compression algorithm.
Prim’s and Kruskal’s algorithms.
Shortest path in Weighted Graph [Dijkstra’s].
Coin change problem.
Fractional Knapsack problem.
Disjoint sets-UNION by size and UNION by height (or rank).
Job scheduling algorithm.
Greedy techniques can be used as an approximation algorithm for
complex problems.
7.6 UNDERSTANDING GREEDY TECHNIQUE
7.6.1 Huffman Coding
Huffman coding assigns shorter codes to frequently occurring characters, and longer codes to less frequent characters and groups of characters. Also, the character coding is constructed in such a way that no two character codes are prefixes of each other.
Given this, create a binary tree for each character that also stores
the frequency with which it occurs (as shown below):-
The algorithm works as follows: In the list, find the two binary trees
that store minimum frequencies at their nodes. Connect these two
nodes at a newly created common node that will store no character
but will store the sum of the frequencies of all the nodes connected
below it. So our picture looks like this:
Once the tree is built, each leaf node corresponds to a letter with a
code. To determine the code for a particular node, traverse from the
root to the leaf node. For each move to the left, append a 0 to the
code, and for each move to the right, append a 1. As a result, for the
above generated tree, we get the following codes:
Now, let us see how many bits the Huffman coding algorithm saves. All we need to do for this calculation is see how many bits
are originally used to store the data and subtract from that the
number of bits that are used to store the data using the Huffman
code. In the above example, since we have six characters, let’s
assume each character is stored with a three bit code.
Since there are 133 characters in total, the number of bits originally used is 3 * 133 = 399. Using the Huffman coding frequencies, we can calculate the new total number of bits used:
Thus, we saved 399 – 238 = 161 bits, or nearly 40% of the storage
space.
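Huffman tree construction can be sketched with a heap, following the pairing of minimum frequencies described above. The frequencies below are illustrative, not the book's table, and the helper name is an assumption:

```python
import heapq

def huffman_codes(freq):
    # Each heap entry: (frequency, tiebreak id, {char: code-so-far}).
    heap = [(f, i, {c: ""}) for i, (c, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Pop the two trees with minimum frequency and merge them;
        # left branch prepends '0', right branch prepends '1'.
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in c1.items()}
        merged.update({ch: "1" + code for ch, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
print(codes)
```

The resulting codes are prefix-free by construction: no character's code is the start of another's.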
OBJECT      1    2    3    4    5    6    7
PROFIT(p)   5   10   15    7    8    9    4
WEIGHT(w)   1    3    5    4    1    3    2

Knapsack capacity W = 15, number of items n = 7
First approach: The first approach is to select the items based on the maximum profit.

Object   Profit             Weight   Remaining weight
3        15                 5        15 - 5 = 10
2        10                 3        10 - 3 = 7
6        9                  3        7 - 3 = 4
5        8                  1        4 - 1 = 3
4        7 * 3/4 = 5.25     3        3 - 3 = 0

(The last item, object 4, is taken fractionally: only 3 of its 4 weight units fit.)
Second approach: The second approach is to select the items based on the minimum weight.

Object   Profit             Weight   Remaining weight
1        5                  1        15 - 1 = 14
5        7                  1        14 - 1 = 13
7        4                  2        13 - 2 = 11
2        10                 3        11 - 3 = 8
6        9                  3        8 - 3 = 5
4        7                  4        5 - 4 = 1
3        15 * 1/5 = 3       1        1 - 1 = 0
Third approach: The third approach is to select the items based on the maximum profit/weight ratio.

Object   Profit             Weight   Remaining weight
5        8                  1        15 - 1 = 14
1        5                  1        14 - 1 = 13
2        10                 3        13 - 3 = 10
3        15                 5        10 - 5 = 5
6        9                  3        5 - 3 = 2
7        4                  2        2 - 2 = 0
As we can observe in the above table, the remaining weight is zero, which means that the knapsack is full. We cannot add more objects to the knapsack. Therefore, the total profit is (8 + 5 + 10 + 15 + 9 + 4), i.e., 51.
In the first approach, the maximum profit is 47.25. The maximum
profit in the second approach is 45. The maximum profit in the third
approach is 51. Therefore, we can say that the third approach, i.e.,
maximum profit/weight ratio is the best approach among all the
approaches.
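The third (profit/weight ratio) approach can be sketched as follows, using the same profits, weights and W = 15 as the tables above; the function name is an illustrative assumption:

```python
def fractional_knapsack(profits, weights, W):
    # Greedy choice: take items in decreasing profit/weight ratio,
    # taking a fraction of the last item if it does not fully fit.
    items = sorted(zip(profits, weights),
                   key=lambda pw: pw[0] / pw[1], reverse=True)
    total = 0.0
    for p, w in items:
        if W <= 0:
            break
        take = min(w, W)
        total += p * (take / w)
        W -= take
    return total

print(fractional_knapsack([5, 10, 15, 7, 8, 9, 4], [1, 3, 5, 4, 1, 3, 2], 15))
```

With these inputs the greedy ratio rule reproduces the profit of 51 computed above.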
7.7 SUMMARY
In this chapter you learned what the greedy programming paradigm is and discovered different techniques and steps to build a greedy solution. The chapter also discussed applications and mentioned the advantages and disadvantages of the greedy algorithm. Towards the end, the chapter introduced different examples to understand greedy techniques.
7.8 QUESTIONS
1. Explain Greedy Strategy with an example.
2. What are the elements of Greedy Strategy?
3. Give advantages and disadvantages of greedy method.
4. Explain in brief Huffman coding.
5. Write a note on Fractional KnapSack Problem.
7.9 REFERENCES
1. Data Structure and Algorithmic Thinking with Python, Narasimha
Karumanchi, CareerMonk Publications, 2017.
2. Introduction to Algorithm, Thomas H Cormen, PHI.
3. https://www.javatpoint.com/fractional-knapsack-problem.
4. https://www.geeksforgeeks.org/
5. https://www.codechef.com/wiki/
8
DIVIDE AND CONQUER ALGORITHMS
Unit Structure :
8.0 Objectives
8.1 Introduction
8.2 What is divide and conquer strategy?
8.3 Does divide and conquer always work?
8.4 Divide and conquer visualization
8.5 Understanding divide and conquer
8.6 Advantages & disadvantages of divide & conquer
8.6.1 Advantages of divide & conquer (D&C)
8.6.2 Disadvantages of divide & conquer (D&C)
8.7 Master theorem
8.8 Divide and conquer: problems & solutions
8.9 Summary
8.10 Questions
8.11 References
8.0 OBJECTIVE
After this chapter, learners will understand divide and conquer, a powerful algorithm design technique.
8.1 INTRODUCTION
In the Greedy chapter, we have seen that for many problems the Greedy
strategy failed to provide optimal solutions. Among those problems,
there are some that can be easily solved by using the Divide and
Conquer (D & C) technique. Divide and Conquer is an important
algorithm design technique based on recursion.
The D & C algorithm works by recursively breaking down a problem into two or more sub-problems of the same type, until they become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.
Here's how to view one step, assuming that each divide step creates two
subproblems (though some divide-and-conquer algorithms create more
than two):
202
Divide and Conquer
Algorithms
Source:
https://www.khanacademy.org
Source:
https://www.khanacademy.org
203
The moral for problem solvers is different. If we can't solve the problem, divide it into parts, and solve one part at a time. Therefore a divide and conquer algorithm is a strategy of solving a large problem by breaking the problem into smaller sub-problems, solving the sub-problems, and combining them to get the desired output. To use the divide and conquer algorithm, recursion is used.
Source https://mobisoftinfotech.com/resources/blog/
1. Consider the unsorted array: 6 5 1 4 3 2
2. Break the array into two segments. Recursively split each subpart again into two parts until you have separate elements.
3. Merge the individual parts in an ordered manner. Here, the steps to conquer and merge go hand in hand.
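The steps above can be sketched as a minimal merge sort in Python (an illustrative implementation, not the book's listing):

```python
def merge_sort(arr):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(arr) <= 1:
        return arr
    # Divide: split the list in half and sort each half recursively.
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves in order.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([6, 5, 1, 4, 3, 2]))  # [1, 2, 3, 4, 5, 6]
```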
2. Parallelism: Since D & C allows us to solve the subproblems independently, this allows for execution on multiprocessor machines, especially shared-memory systems where the communication of data between processors does not need to be planned in advance, because different subproblems can be executed on different processors.
3. Memory access: D & C algorithms naturally tend to make efficient use
of memory caches. This is because once a subproblem is small, all its
subproblems can be solved within the cache, without accessing the
slower main memory.
f(n) = cost of the work done outside the recursive calls, such as dividing into subproblems and combining their solutions.

If the recurrence is of the form T(n) = a T(n/b) + Θ(n^k log^p n), where a ≥ 1, b > 1, k ≥ 0 and p is a real number, then the complexity can be directly given as:

1. If a > b^k, then T(n) = Θ(n^(log_b a))
2. If a = b^k, then:
   a. If p > -1, T(n) = Θ(n^(log_b a) * log^(p+1) n)
   b. If p = -1, T(n) = Θ(n^(log_b a) * log log n)
   c. If p < -1, T(n) = Θ(n^(log_b a))
3. If a < b^k, then:
   a. If p ≥ 0, T(n) = Θ(n^k log^p n)
   b. If p < 0, T(n) = O(n^k)

1. Example-1: Binary Search – T(n) = T(n/2) + O(1)
   a = 1, b = 2, k = 0, p = 0, so a = b^k and p > -1
   T(n) = Θ(log n)
2. Example-2: Merge Sort – T(n) = 2T(n/2) + O(n)
   a = 2, b = 2, k = 1, p = 0, so a = b^k and p > -1
   T(n) = Θ(n log n)
3. Example-3: T(n) = 3T(n/2) + n^2
   a = 3, b = 2, k = 2, p = 0, so a < b^k and p ≥ 0
   T(n) = Θ(n^2)
solved individually. Finally, the sub-problems are combined to form the final solution. Partitioning each matrix into four blocks, the product C = A * B is computed as:

C11 = A11 * B11 + A12 * B21
C12 = A11 * B12 + A12 * B22
C21 = A21 * B11 + A22 * B21
C22 = A21 * B12 + A22 * B22

This approach is very costly, as it requires 8 multiplications and 4 additions.
Time Complexity: O((m + n)/2).

Problem 4: Can we improve the time complexity of Problem-2 to O(log n)?
Solution: Yes, using the D & C approach. Let us assume that the given
two lists are L1 and L2.
Algorithm:
1. Find the medians of the given sorted input arrays L1[] and L2[]. Assume that those medians are m1 and m2.
2. If m1 and m2 are equal, then return m1 (or m2).
3. If m1 is greater than m2, then the final median lies in one of the two sub-arrays below:
   a. From the first element of L1 to m1.
   b. From m2 to the last element of L2.
4. If m2 is greater than m1, then the median lies in one of the two sub-arrays below:
   a. From m1 to the last element of L1.
   b. From the first element of L2 to m2.
5. Repeat the above process until the size of both sub-arrays becomes 2.
6. If the size of the two arrays is 2, then use the formula below to get the median:
   Median = (max(L1[0], L2[0]) + min(L1[1], L2[1])) / 2
Problem 5:- We are testing “unbreakable” laptops and our goal is to find
out how unbreakable they really are. In particular, we work in an n-story
building and want to find out the lowest floor from which we can drop the
laptop without breaking it (call this “the ceiling”). Suppose we are given
two laptops and want to find the highest ceiling possible. Give an
algorithm that minimizes the number of tries we need to make f(n)
(hopefully, f(n) is sub-linear, as a linear f(n) yields a trivial solution).
Solution: For the given problem, we cannot use binary search as we
cannot divide the problem and solve it recursively. Let us take an example
for understanding the scenario. Let us say 14 is the answer; that means we need 14 drops to find the answer. First we drop from height 14, and if it breaks we try all floors from 1 to 13. If it doesn't break, then we are left with 13 drops, so we will drop it from floor 14 + 13 = 27. The reason being, if it breaks at the 27th floor we can try all the floors from 15 to 26 in 12 drops (a total of 14 drops). If it did not break, then we are left with 12 drops and we can continue in the same fashion. From the above example, it can be seen that we first tried with a gap of 14 floors, followed by 13 floors, then 12, and so on.
followed by 13 floors, then 12 and so on. So if the answer is k then we are
trying the intervals at k, k – 1, k – 2 ....1. Given that the number of floors
is n, we have to relate these two. Since the maximum floor from which we
can try is n, the total skips should be less than n.
This gives:
k + (k - 1) + (k - 2) + ... + 1 ≤ n
k(k + 1)/2 ≤ n
k ≤ (-1 + sqrt(1 + 8n)) / 2
8.10 SUMMARY
In this chapter we studied the Divide and Conquer (D&C) strategy, which works in three stages as follows:
Divide: This involves dividing the problem into smaller sub-problems.
Conquer: Solve sub-problems by calling recursively until solved.
Combine: Combine the sub-problems to get the final solution of the
whole problem.
At the end, some problems and their solutions using the D&C technique were presented.
8.11 QUESTIONS
1. Write a short note on Divide and Conquer Strategy.
2. Explain advantages and disadvantages of divide and conquer.
3. Explain disadvantages of divide and conquer.
4. Explain any 5 applications of divide and conquer.
5. Describe master theorem in detail.
8.12 REFERENCES
1. Data Structure and Algorithmic Thinking with Python, Narasimha
Karumanchi, CareerMonk Publications, 2016.
2. Introduction to Algorithm, Thomas H Cormen, PHI.
3. https://www.javatpoint.com/fractional-knapsack-problem.
4. https://www.geeksforgeeks.org/
5. https://www.codechef.com/wiki/
9
DYNAMIC PROGRAMMING
Unit Structure
9.0 Objectives
9.1 Introduction
9.2 What is Dynamic Programming Strategy?
9.2.1 What is Recursion?
9.2.2 Difference between dynamic programming & recursion?
9.3 Properties of dynamic programming strategy
9.3.1 Can dynamic programming solve all problems?
9.4 Dynamic programming approaches
9.4.1 Bottom-up versus top-down programming
9.4.2 Which one is better?
9.5 Examples of dynamic programming algorithms
9.6 Understanding dynamic programming
9.7 Longest common subsequence
9.8 summary
9.9 Questions
9.10 References
9.0 OBJECTIVE
To gain an understanding of the technique of dynamic programming.
To be able to adapt it to new problems. To be able to use dynamic
programming to develop new logic.
Towards the end of this chapter, you will have a better understanding of
the recursion and dynamic programming approach
Learners will also be able to analyze and solve various application
using dynamic strategy.
Learners will also be able to analyze the problem and will be able to
recognize the order in which the sub-problems are solved and will start
solving from the trivial subproblem, up towards the given problem.
Learners will be able to reason about the correctness and complexity of different dynamic programming algorithms.
9.1 INTRODUCTION
In this chapter we will try to solve the problems for which we failed to get
the optimal solutions
using other techniques (say, Divide & Conquer and Greedy methods).
Dynamic Programming (DP) is a simple technique but it can be difficult to
master. One easy way to identify and solve DP problems is by solving as
many problems as possible. The term Programming is not related to
coding but it is from literature, and means filling tables (similar to Linear
Programming).
o Analyze the problem and see in what order the subproblems are solved, and work your way up from the trivial subproblem to the given problem. This process ensures that the subproblems are solved before the main problem.
The result Fib(43) then applies to the computation of the next values in the series:

Fib(n) = Fib(n - 1) + Fib(n - 2)
Fib(44) = Fib(43) + Fib(42)
Fib(45) = Fib(44) + Fib(43)

You can continue using the stored solution of each subproblem to calculate the next values within the Fibonacci sequence, giving you the ability to build up to the initial problem of Fib(n) = Fib(n - 1) + Fib(n - 2), when n > 1.
2. Top-Down Approach: Understanding that the sum of the two preceding values gives the next number in the series can help you solve the entire problem when you want to calculate the nth Fibonacci number. Because you know the pattern for computing the future values in the series, you can apply the recurrence directly and memoize the subproblem results rather than recomputing them. Using the recurrence when n > 1, you can compute a solution with the top-down method when solving for an nth value of 13:

Fib(n) = Fib(n - 1) + Fib(n - 2)
Fib(13) = Fib(12) + Fib(11) = 144 + 89 = 233

Using the top-down approach and applying memoization, you can cache the result 233 so that later calls can reuse it for additional computations.
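The top-down approach with memoization can be sketched as follows; the use of functools.lru_cache as the cache is an illustrative choice, not the book's code:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each Fib(i) is computed once and cached; later calls
    # with the same argument return the stored result.
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(13))  # 233
```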
9.4.2 Which one is better?
The top-down approach might also run into stack overflow conditions
in the case of a very deep recursion tree.
9.5 EXAMPLES OF DYNAMIC PROGRAMMING ALGORITHMS
1. Many String algorithms including longest common subsequence,
longest increasing subsequence, longest common substring, edit distance.
o Given two strings ‘X’ and ‘Y’, find the length of the longest
common substring.
Input : X = “zxabcdezy”, y = “yzabcdezx”
Output : 6
Explanation:
The longest common substring is “abcdez” and is of length 6.
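The longest common substring example above can be solved with the following DP sketch (an illustrative implementation, not from the text); dp[i][j] holds the length of the common suffix of X[:i] and Y[:j]:

```python
def longest_common_substring(X, Y):
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    best = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                # Extend the common suffix ending at (i-1, j-1).
                dp[i][j] = dp[i - 1][j - 1] + 1
                best = max(best, dp[i][j])
    return best

print(longest_common_substring("zxabcdezy", "yzabcdezx"))  # 6
```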
EDIT DISTANCE
o Levenshtein (1966) introduced the edit distance between two strings as the minimum number of elementary operations (insertions, deletions, and substitutions) needed to transform one string into the other.
o The edit distance problem measures how different two strings are from one another by counting the minimum number of operations required to convert one string into the other. The edit distance problem can be solved by many different approaches, but the most efficient approach is dynamic programming, which takes O(N * M) time, where N and M are the sizes of the strings.
o Edit distance has different definitions which use different sets of string operations. The Levenshtein distance operations are the basic set of operations used in the edit distance problem.
o Operation allowed are:
Delete any character from the string.
Replace any character with any other.
Add any character into any part of the string.
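Using the three operations listed above, the Levenshtein edit distance DP can be sketched as follows (an illustrative implementation; the test strings are assumptions):

```python
def edit_distance(a, b):
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i                 # delete all of a[:i]
    for j in range(m + 1):
        dp[0][j] = j                 # insert all of b[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],      # delete
                                   dp[i][j - 1],      # insert
                                   dp[i - 1][j - 1])  # replace
    return dp[n][m]

print(edit_distance("kitten", "sitting"))  # 3
```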
COMMON SUBSEQUENCE
o Given two sequences v = v1, v2, ..., vm and w = w1, w2, ..., wn, a common subsequence of v and w is a sequence of positions in v, 1 ≤ i1 < i2 < ... < ik ≤ m, and a sequence of positions in w, 1 ≤ j1 < j2 < ... < jk ≤ n, such that the it-th letter of v is equal to the jt-th letter of w.
o Example: v = ATGCCAT, w = TCGGGCTATC. Then take:
  i1 = 2, i2 = 3, i3 = 6, i4 = 7
  j1 = 1, j2 = 3, j3 = 8, j4 = 9
o This gives us that the common subsequence is TGAT.
2. Bellman-Ford algorithm:
o The Bellman-Ford algorithm determines the shortest route from a particular source vertex to every other vertex of a weighted digraph. Unlike Dijkstra's algorithm, the Bellman-Ford algorithm can handle graphs in which some of the edge weights are negative numbers and still produce a correct answer; however, it is much slower than Dijkstra's algorithm.
o The Bellman-Ford algorithm works by relaxation; that is, it gives
approximate distances that better ones continuously replace until a
solution is reached. The approximate distances are usually
overestimated compared to the distance between the vertices. The
replacement values reflect the minimum old value and the length of a
newly found path.
o This algorithm terminates upon finding a negative cycle and thus can
be applied to cycle-canceling techniques in network flow analysis.
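The relaxation scheme described above can be sketched as follows (a minimal Python version; the edge-list representation is our choice, not the text's):

```python
def bellman_ford(n, edges, source):
    """Shortest distances from source in a weighted digraph with n vertices.

    edges is a list of (u, v, w) tuples.  Returns (distances, has_negative_cycle).
    """
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    # Relax every edge n-1 times; after pass k, all shortest paths
    # using at most k edges are final.
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement means a reachable negative cycle.
    has_negative_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_negative_cycle
```

Each pass replaces an overestimated distance with the minimum of the old value and the length of a newly found path, exactly as described above.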
3. Floyd-Warshall algorithm
o The all-pairs shortest path algorithm, also known as the Floyd-Warshall
algorithm, finds the shortest paths between all pairs of vertices in a
given weighted graph. The algorithm generates a matrix that represents
the minimum distance from any node to every other node in the graph.
o The Floyd-Warshall method uses dynamic programming to locate the
shortest paths. It determines the shortest route across all pairs of
vertices in a weighted graph, and works on both directed and
undirected weighted graphs.
o The algorithm considers, for each pair of vertices, the potential routes
through every other vertex of the graph. It gradually improves an
estimate of the shortest path between two vertices until the true
shortest distance is found. With simple modifications, the paths
themselves can also be reconstructed.
o Two practical aspects of the method are worth noting:
Behavior with negative cycles: The Floyd-Warshall algorithm can be
used to find negative cycles by inspecting the diagonal of the distance
matrix; a negative number there indicates that the graph contains a
negative cycle. In a negative cycle, the sum of the edge weights is
negative, so there is no shortest path between the vertices on the cycle:
repeating the cycle makes a path arbitrarily short, and exponentially
large (negative) numbers are generated if such a cycle is present during
algorithm execution.
Time complexity: The Floyd-Warshall algorithm has three nested
loops, each iterating over all n vertices, with constant work per
iteration. As a result, its time complexity is O(n3), where n represents
the number of network nodes.
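The three nested loops can be sketched directly (a minimal Python version; the in-place matrix update is our choice of representation):

```python
def floyd_warshall(dist):
    """All-pairs shortest paths.

    dist is an n x n matrix with dist[i][i] == 0 and float('inf') where
    there is no edge; it is updated in place and returned.
    """
    n = len(dist)
    for k in range(n):            # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # A negative value on the diagonal would signal a negative cycle.
    return dist
```

After the k-th outer iteration, dist[i][j] holds the shortest path from i to j using only intermediate vertices from {0, …, k}.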
4. Matrix Chain Multiplication
For example, if A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is
a 5 × 60 matrix, then computing (AB)C needs (10×30×5) +
(10×5×60) = 1500 + 3000 = 4500 operations, while computing
A(BC) needs (30×5×60) + (10×30×60) = 9000 + 18000 = 27000
operations. Clearly, the first method is more efficient.
o Chain Matrix Multiplication Problem: Given a sequence of matrices
A1, A2,...,An and dimensions p0, p1,...,pn where Ai is of dimension
pi−1 × pi, determine the order of multiplication (represented, say, as a
binary tree) that minimizes the number of operations.
o Important Note: This algorithm does not perform the multiplications,
it just determines the best order in which to perform the multiplications.
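The standard DP over chain lengths might be sketched as follows (Python; names are ours). As the note says, it computes only the minimum operation count, not the products themselves:

```python
def matrix_chain_order(p):
    """Minimum scalar multiplications for A1*...*An, where Ai is p[i-1] x p[i]."""
    n = len(p) - 1                     # number of matrices
    # m[i][j] = cheapest cost of multiplying the chain Ai..Aj (1-based)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):     # chain length being solved
        for i in range(1, n - length + 2):
            j = i + length - 1
            # Try every split point k: (Ai..Ak)(Ak+1..Aj)
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]
```

On the example above, matrix_chain_order([10, 30, 5, 60]) returns 4500, confirming that (AB)C is the cheapest order.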
5. Subset Sum
o A subset is a set whose elements are all contained in a previously
defined set. For example, {a, b} is a subset of {a, b, c, e, f}. In this
problem, you have to find a subset of the numbers in the given array
that adds up to the given value in the input.
o Given a set of N non-negative integers and a target sum S, determine
whether there exists a subset of the given set such that the sum of the
elements of the subset equals S. Return 1 if such a subset exists;
otherwise, return 0.
o In other words, the subset sum problem is: given a set A of n non-
negative integers and a value sum, find whether or not there exists any
subset of the given set whose elements add up to the given value sum.
o Let's look at the problem statement: "You are given an array of non-
negative numbers and a value 'sum'. You have to find out whether a
subset of the given array is present whose sum is equal to the given
value."
10 0 5 8 6 2 4
subset[i][j] = true if there is a subset of the first i elements with sum
equal to j.
The recurrence considers the i-th element (E1 = array[i-1]) either
excluded or included:
subset[i][j] = subset[i-1][j] OR subset[i-1][j - E1]
where the second term applies only when E1 <= j.
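This recurrence fills a boolean table row by row; a minimal Python sketch (names are ours):

```python
def subset_sum(arr, s):
    """Return True if some subset of the non-negative integers in arr sums to s."""
    n = len(arr)
    # dp[i][j] = True if a subset of the first i elements sums to j
    dp = [[False] * (s + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = True                # the empty subset always sums to 0
    for i in range(1, n + 1):
        for j in range(1, s + 1):
            dp[i][j] = dp[i - 1][j]    # exclude arr[i-1]
            if arr[i - 1] <= j:
                # or include arr[i-1], if it fits
                dp[i][j] = dp[i][j] or dp[i - 1][j - arr[i - 1]]
    return dp[n][s]
```

For instance, subset_sum([0, 5, 8, 6, 2, 4], 10) is True because 8 + 2 = 10.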
6. 0/1 Knapsack:-
o A knapsack (a kind of shoulder bag) with limited weight capacity.
o A few items, each having some weight and value.
o The knapsack problem states: which items should be placed into the
knapsack such that the value or profit obtained by putting the items
into the knapsack is maximum, and the weight limit of the knapsack
is not exceeded?
o The 0/1 knapsack problem is a variant of the knapsack problem
where, as the name suggests, items are indivisible: we cannot take a
fraction of any item. We have to either take an item completely or
leave it completely. It is solved using the dynamic programming
approach.
o Step-01: Draw a table, say ‘T’, with (n+1) rows and (W+1) columns.
Fill all the boxes of the 0th row and 0th column with zeroes, as shown:
        0   1   2   3   …   W
   0    0   0   0   0   …   0
   1    0
   2    0
   …    …
   n    0
T-Table
o Step-02: Start filling the table row-wise, top to bottom, from left to
right. We start with solving the “leaf” level problems and then move on
to their “parent” problems, and so on. We save the results as we solve
sub-problems for future reference, thereby avoiding working on the
same sub-problem if it is encountered again.
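The table-filling described in Step-01 and Step-02 can be sketched in Python as follows (names are illustrative):

```python
def knapsack_01(weights, values, W):
    """Maximum value achievable with capacity W; each item used at most once."""
    n = len(weights)
    # T[i][w] = best value using the first i items with capacity w;
    # row 0 and column 0 stay 0, as in Step-01.
    T = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):          # row-wise, top to bottom (Step-02)
        for w in range(1, W + 1):      # left to right
            T[i][w] = T[i - 1][w]      # leave item i out
            if weights[i - 1] <= w:
                # or take item i completely (no fractions allowed)
                T[i][w] = max(T[i][w],
                              T[i - 1][w - weights[i - 1]] + values[i - 1])
    return T[n][W]
```

For example, with weights [1, 3, 4, 5], values [1, 4, 5, 7], and capacity 7, the answer is 9 (take the items of weight 3 and 4).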
Dynamic programming and recursion are very similar. Both recursion
and dynamic programming start with a base case from which the
solution is built up. The main difference is that, with plain recursion, we
do not store any intermediate values, whereas dynamic programming
does. Let’s get a bit deeper with the Fibonacci number.
1. FIBONACCI NUMBER
The Fibonacci numbers are the numbers in the following integer
sequence 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ……..
In mathematical terms, the sequence Fn of Fibonacci numbers is
defined by the recurrence relation Fn = Fn-1 + Fn-2 for n > 1, with
seed values F0 = 0 and F1 = 1.
Analyzing the time complexity of both solutions, it was clear that the
recursive solution was getting significantly slower for each additional
number in the sequence due to exponential time growth. There is,
however, a smart way of improving this solution, and that is using
memoization.
For our recursive solution we want to store the arguments of each
function call as well as the function’s output, and reuse them later if
the function is called with the same arguments. Our recursive solution
looked like this:
function fib(n) {
if (n < 2) {
return n;
}
return fib(n - 1) + fib(n - 2);
}
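The memoized version the text describes (caching each call's arguments and output, and reusing them on repeated calls) might look like the following. It is sketched here in Python rather than JavaScript, using the standard lru_cache decorator as the cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci with memoization: each fib(k) is computed only once,
    so the exponential recursion becomes linear in n."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

With memoization, fib(50) returns instantly, whereas the plain recursive version would make billions of redundant calls.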
Hence, while solving problems using DP, try to figure out the
following:
See how the problem can be defined in terms of subproblems
recursively.
See if we can use some table [memoization] to avoid the repeated
calculations.
2. Factorial
int *factorialDP(int n)
{
    // Table of factorials 0! .. n!, heap-allocated so it can be
    // returned to the caller (requires <stdlib.h> for malloc).
    int *fact = malloc((n + 1) * sizeof(int));
    fact[0] = 1;
    for (int i = 1; i <= n; i++)
        fact[i] = i * fact[i - 1];
    return fact;
}
The above implementation clearly reduces the complexity as compared
to the recursive implementation. This is because if fact(n) is already
in the table, we do not recalculate the value again. As we fill in these
newly computed values, subsequent calls reduce the complexity
further.
3. Longest Common Subsequence
Let X and Y be two given sequences.
Initialize a table LCS of dimension (X.length+1) * (Y.length+1)
LCS[0][j] = 0 for every column j
LCS[i][0] = 0 for every row i
Start from LCS[1][1]
Compare X[i] and Y[j]
If X[i] = Y[j]
LCS[i][j] = 1 + LCS[i-1][j-1]
Point an arrow to LCS[i-1][j-1]
Else
LCS[i][j] = max(LCS[i-1][j], LCS[i][j-1])
Point an arrow to the larger of LCS[i-1][j] and LCS[i][j-1]
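The pseudocode above translates directly into Python. This sketch (the function name is ours) computes only the length, omitting the arrows used to trace back the subsequence itself:

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of x and y."""
    m, n = len(x), len(y)
    # LCS[i][j] = LCS length of x[:i] and y[:j]; row 0 and column 0 are 0
    LCS = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                LCS[i][j] = 1 + LCS[i - 1][j - 1]
            else:
                LCS[i][j] = max(LCS[i - 1][j], LCS[i][j - 1])
    return LCS[m][n]
```

For example, lcs_length("ABCBDAB", "BDCABA") returns 4 (one LCS is "BCBA").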
9.8 SUMMARY
Recursion is calling a function within itself. Sub-problems may have
to be solved multiple times when a problem is solved using recursion.
Dynamic programming, in contrast, is a technique in which you store
the result of a previous calculation to avoid calculating the same value
once again.
9.9 QUESTIONS
1. Write a short note on Dynamic Programming Strategy.
2. Differentiate between a dynamic programming algorithm and
recursion.
3. Explain properties of dynamic programming.
4. Compare bottom-up versus top-down dynamic programming.
5. Describe Longest common subsequence.
6. Explain Fibonacci number using the dynamic programming approach.
9.10 REFERENCES
1. Data Structure and Algorithmic Thinking with Python, Narasimha
Karumanchi, CareerMonk Publications, 2016.
2. Introduction to Algorithm, Thomas H Cormen, PHI.
3. https://www.knowledgehut.com/blog/programming/dynamic-programming
4. https://www.spiceworks.com/tech/devops/articles/what-is-dynamic-programming/
5. https://en.wikipedia.org/wiki/Dynamic_programming
6. https://www.geeksforgeeks.org/
7. https://www.thelearningpoint.net/computer-science/dynamic-programming