Algorithm Complexity Handwritten Notes
Consider the array: 10 20 30 40
Suppose you are asked to search for the element 50 in this array. You need to compare 50 with every element of the array from the beginning, and even after all the comparisons the element is not found. This is the worst case of the algorithm.
Counting Sort:
▪ This is a simple algorithm for sorting an array.
▪ Counting sort assumes that each of the elements is an integer in the range 1 to
k, where k = O(n); under this assumption counting sort runs in O(n) time.
▪ The basic idea of counting sort is to determine, for each input element x, the
number of elements less than x. This information can be used to place x directly
into its correct position.
▪ Consider the first element of the array. If the number of elements less than the
first element in the complete array is 5, then the first element must be located
at the 6th position of the sorted array.
▪ For counting sort we are given an array A[1..n] of length n. We require two more
arrays: the array B[1..n] holds the sorted output and the array C[1..k] provides
temporary working storage.
Algorithm: CountingSort(A, n, k)
// A[1..n] is an array of n integers to be sorted.
// 'k' is the range of the integer values.
{
    for i = 1 to k do
        C[i] = 0;
    for i = 1 to n do
        C[A[i]] = C[A[i]] + 1;
    for i = 2 to k do
        C[i] = C[i] + C[i-1];
    // C[i] now contains the number of elements <= i
    for j = n down to 1 do
    {
        B[C[A[j]]] = A[j];       // put the element into its correct position
        C[A[j]] = C[A[j]] - 1;   // decrement the count by 1
    }
}
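The pseudocode above can be sketched in Python (a 0-based translation of the 1-based notation; `k` is assumed to be the largest value appearing in `A`):

```python
def counting_sort(A, k):
    """Sort a list of integers in the range 1..k, following the notes' steps."""
    n = len(A)
    C = [0] * (k + 1)          # C[v] will count occurrences of value v
    for v in A:                # step 1: count each value
        C[v] += 1
    for v in range(2, k + 1):  # step 2: prefix sums -> C[v] = number of elements <= v
        C[v] += C[v - 1]
    B = [0] * n                # step 3: place elements, scanning right to left
    for j in range(n - 1, -1, -1):
        B[C[A[j]] - 1] = A[j]  # -1 converts the 1-based position to a 0-based index
        C[A[j]] -= 1
    return B

print(counting_sort([4, 1, 3, 4, 2], 4))  # -> [1, 2, 3, 4, 4]
```

Scanning right to left in the last loop is what makes counting sort stable.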
❖ Searching Algorithm:
Searching Algorithms are designed to check or search for an element or
retrieve an element from any data structure where it is stored.
The search is said to be successful if the required element is found; otherwise it
is unsuccessful.
Based on the type of search operation, these algorithms are generally classified
into two categories:
❖ Linear Search
In computer science, a linear search or sequential search is a method for
finding an element within a list. It sequentially checks each element of the list
until the match is found or the whole list has been searched.
Linear search is a very basic and simple search algorithm. In linear search, we
search for an element or value in a given array by traversing the array from the
start until the desired element or value is found.
It compares the element to be searched with all the elements present in the
array, and the search stops as soon as the element is matched successfully.
Best Case Complexity = O(1)
Worst Case Complexity and Average Case Complexity = O(n)
It is used for small unsorted and unordered lists of elements.
It has a time complexity of O(n), which means the time is linearly dependent
on the number of elements.
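A minimal Python sketch of the sequential check described above:

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if the whole list is searched without a match."""
    for i, value in enumerate(arr):
        if value == target:
            return i          # best case: target at index 0 -> O(1)
    return -1                 # worst case: n comparisons -> O(n)

print(linear_search([10, 20, 30, 40], 30))  # -> 2
print(linear_search([10, 20, 30, 40], 50))  # -> -1 (unsuccessful search)
```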
❖ Binary Search:
▪ The sequential search hits its worst case when the element is at the end of
the list.
▪ To eliminate this problem we have a more efficient searching technique called
binary search.
▪ The condition for binary search is that all the data must be in a sorted array.
▪ The search element is first compared with the middle element of the array. If it
is less than the middle element, we search for it in the left portion of the
array; if it is greater than the middle element, the search continues in the right
portion of the array. We then take that portion only for the search and
compare with the middle element of that portion. This process iterates
until we find the element or the remaining portion has no left or right part to
search.
❖ Non-Recursive Binary Search Algorithm:
Algorithm: BinSearch(a, n, x)
// Given an array a[1..n] of elements in non-decreasing order, n >= 0,
// determine whether x is present; if so return j such that x = a[j],
// else return 0.
{
    low = 1;
    high = n;
    while low <= high do
    {
        mid = (low + high) / 2;   // integer division
        if a[mid] = x then
            return mid;
        else if a[mid] > x then
            high = mid - 1;
        else
            low = mid + 1;
    }
    return 0;
}
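The same algorithm in Python with 0-based indexing, returning -1 instead of 0 for "not found" (since 0 is a valid index):

```python
def bin_search(a, x):
    """Iterative binary search on a sorted list; returns an index of x, or -1."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] == x:
            return mid
        elif a[mid] > x:
            high = mid - 1     # continue in the left portion
        else:
            low = mid + 1      # continue in the right portion
    return -1

print(bin_search([10, 20, 30, 40, 50], 40))  # -> 3
```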
❖ Asymptotic Notations:
• We have discussed the space and time complexity of programs where we can
analyze exact counts rather than estimates. Asymptotic notation is used
to make statements about program performance when the instance
characteristics are large.
• Mathematical Way of representing the time complexity.
• The Big-oh notation is the most popular asymptotic notation used in the
performance analysis of programs; the omega and theta notations are also
in common use.
• A problem may have numerous algorithmic solutions. In order to choose
the best algorithm for a particular task you need to be able to judge how long
a particular solution will take to run, or more accurately, you need to be able
to judge how long two solutions will take to run and choose the better of the two.
• You don’t need to know about how many minutes and seconds they will
take, but you need some way to compare algorithms against one another.
• Asymptotic notations are mathematical tools to represent time complexity of
algorithms for asymptotic analysis. The following 3 asymptotic notations are
mostly used to represent time complexity of algorithms.
1. Big O Notation:
▪ It defines upper bound of an algorithm.
▪ Big-Oh (O) notation gives an upper bound for a function f(n) to within a
constant factor.
▪ It characterizes the worst-case complexity of an algorithm.
▪ It is a notation used to express the efficiency of an algorithm.
▪ It gives the algorithm complexity in terms of the input size n.
Definition :- The function f(n) = O(g(n)) [read as f of n is big-oh of g of n]
if and only if there exist positive constants C and n0 such that
f(n) <= C*g(n) for all n >= n0, where C > 0 and n0 >= 1.
2. Omega Notation: Ω
○ It defines the lower bound of an algorithm.
○ In simple words, when we represent a time complexity for any algorithm in
the form of big-Ω, we mean that the algorithm will take at least this much
time to complete it's execution.
Definition :- The function f(n) = Ω(g(n)) [read as f of n is omega of g of n]
if and only if there exist positive constants C and n0 such that
f(n) >= C*g(n) for all n >= n0, where C > 0 and n0 >= 1.
Tuesday, January 26, 2021 11:40 AM
Theta Notation: Θ
• The theta notation bounds the function from above and below, so it defines
exact asymptotic behavior.
• This notation denotes both upper and lower bounds of f(n) and written as
f(n)=Θ (g(n))
• It is a notation used to express the efficiency of an algorithm.
• It represents the average-case time complexity.
• It is also called the tight bound of an algorithm, since it shows both the
upper and the lower bound.
Definition :- The function f(n) = Θ(g(n)) [read as f of n is theta of g of n] if and
only if there exist positive constants C1, C2 and n0 such that
C1*g(n) <= f(n) <= C2*g(n)
for all n >= n0, where C1, C2 > 0 and n0 >= 1.
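As a quick numeric sanity check of these definitions, candidate constants can be tested over a range of n. Here f(n) = 3n + 2 with the hypothetical witnesses C1 = 3, C2 = 4, n0 = 2 for f(n) = Θ(n) (evidence, not a proof):

```python
def is_theta_witness(f, g, c1, c2, n0, n_max=10_000):
    """Check c1*g(n) <= f(n) <= c2*g(n) for n0 <= n <= n_max."""
    return all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, n_max + 1))

f = lambda n: 3 * n + 2
g = lambda n: n
print(is_theta_witness(f, g, c1=3, c2=4, n0=2))  # -> True, since 3n <= 3n+2 <= 4n for n >= 2
```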
Sorting algorithm:
▪ Sorting refers to arranging data in a particular format.
▪ Sorting algorithm specifies the way to arrange data in a particular order.
▪ Sorting algorithms may require some extra space for comparison and
temporary storage of a few data elements.
▪ Algorithms that do not require any extra space are said to sort in-place,
for example within the array itself.
▪ This is called in-place sorting. Bubble sort is an example of in-place sorting.
❖ Bubble Sort:
▪ Bubble sort is a simple sorting algorithm. It is a comparison-based
algorithm in which each pair of adjacent elements is compared and the
elements are swapped if they are not in order (if the first element is greater
than the second element, a swap is required).
▪ Bubble sort compares all the elements one by one and sorts them based on
their values.
▪ If the given array has to be sorted in ascending order, then bubble sort will
start by comparing the first element of the array with the second element, if
the first element is greater than the second element, it will swap both the
elements, and then move on to compare second and third element, and so
on.
▪ If we have total n elements, then we need to repeat this process
for n-1 times.
▪ It is known as bubble sort, because with every complete iteration the largest
element in the given array, bubbles up towards the last place or the highest
index.
▪ Sorting takes place by stepping through all the elements one-by-one and
comparing it with the adjacent element and swapping them if required.
▪ Implementation:
Following are the steps involved in bubble sort(for sorting a given array in
ascending order):
1. Starting with the first element(index = 0), compare the current element
with the next element of the array.
2. If the current element is greater than the next element of the array, swap
them.
3. If the current element is less than the next element, move to the next
element. Repeat Step 1.
Algorithm
▪ We assume list is an array of n elements. We further assume that swap
function swaps the values of the given array elements.
Bubble_Sort(a, n)
{
    for (i = 0; i < n-1; i++)
    {
        for (j = 0; j < n-1-i; j++)
        {
            if (a[j] > a[j+1])
            {
                // swap a[j] and a[j+1]
                temp = a[j];
                a[j] = a[j+1];
                a[j+1] = temp;
            }
        }
    }
}
This algorithm is not suitable for large data sets, as its average and worst
case complexity are O(n^2), where n is the number of items.
Worst Case Complexity = O(n^2)
Average Case Complexity = O(n^2)
Best Case Complexity = O(n) (using a swapped flag to stop early when a pass makes no swaps)
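A runnable version of the algorithm above, including the `swapped` flag that the O(n) best case assumes (the flag is an addition; the pseudocode above always runs n-1 passes):

```python
def bubble_sort(a):
    """In-place bubble sort with the early-exit flag that gives the O(n) best case."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):          # the largest element bubbles up to index n-1-i
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                     # already sorted: stop after one clean pass
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # -> [1, 2, 4, 5, 8]
```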
Tuesday, January 26, 2021 2:13 PM
❖ Insertion Sort:
Algorithm:
Insertion_Sort(A)
{
    for j = 2 to A.length
    {
        key = A[j];
        // Insert A[j] into the sorted sequence A[1..j-1]
        i = j - 1;
        while (i > 0 and A[i] > key)
        {
            A[i+1] = A[i];
            i = i - 1;
        }
        A[i+1] = key;
    }
}
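The pseudocode translates to Python as follows (0-based indices):

```python
def insertion_sort(A):
    """Insert each A[j] into the already-sorted prefix A[0..j-1]."""
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:   # shift larger elements one place right
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key                 # drop the key into the gap
    return A

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # -> [1, 2, 3, 4, 5, 6]
```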
❖ Analysis:
In insertion sort we insert each element into its proper place among the elements
already considered, starting the comparisons from the first element. Since the
first element has no elements before it, it does not require any comparison.
The second element requires 1 comparison, the third element requires 2 comparisons,
the fourth requires 3 comparisons, and so on.
The last element requires n-1 comparisons. So total number of comparisons
will be
= 1 + 2 + 3 + … + (n-1)
= n(n-1) / 2
= O(n^2)
❖ Stable Sort:
▪ A sorting algorithm is said to be stable if two objects with equal keys appear
in the same order in sorted output as they appear in the input unsorted array.
▪ Some sorting algorithms are stable by nature, e.g. insertion sort and bubble sort.
❖ In place sorting:
▪ Insertion sort is in-place because the algorithm does not require extra memory space.
❖ Divide and Conquer: General Method
The computing time of a divide-and-conquer algorithm on a problem P is described by the recurrence
T(n) = g(n) for n small
T(n) = T(n1) + T(n2) + … + T(nk) + f(n) otherwise,
where T(n) is the time for DAndC on any input of size n and g(n) is the time
to compute the answer directly for small inputs.
The function f(n) is the time for dividing P and combining the solutions to the
subproblems. Assume that the problem P is of size n.
❖ Merge Sort
▪ It is another example of divide and conquer.
▪ We assume the elements are to be sorted in non-decreasing order.
▪ Given a sequence of n elements a[1], …, a[n], the general idea is to split the
array into two sets a[1], …, a[n/2] and a[n/2 + 1], …, a[n].
▪ Each set is individually sorted and the resulting sorted sequences are merged
to produce single sorted sequence of n elements.
▪ Thus we are splitting array into two equal sized sets and the combining
operation is the merging of two sorted sets into one.
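A Python sketch of the split-sort-merge idea described above:

```python
def merge_sort(a):
    """Divide the array at its midpoint, sort each half, then merge the sorted halves."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # combine: merge two sorted sets into one
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append whichever half has leftovers

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # -> [3, 9, 10, 27, 38, 43, 82]
```

Using `<=` in the comparison keeps equal elements in their original order, so this merge sort is stable.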
Quick Sort:
▪ The divide and conquer approach can be used to arrive at an efficient sorting
method different from merge sort.
▪ In merge sort the file a[1..n] was divided at its midpoint into subarrays which are
independently sorted and later merged.
▪ In quick sort, the division into two subarrays is made so that the sorted
subarrays do not need to be merged later.
Algorithm Partition(a, m, p)
// a is the array.
// m is the low index; the pivot is V = a[m].
// p is the high index.
{
    V = a[m]; i = m; j = p;
    repeat
    {
        repeat
            i = i + 1;
        until (a[i] >= V);
        repeat
            j = j - 1;
        until (a[j] <= V);
        if (i < j) then
            interchange a[i] and a[j];
    } until (i >= j);
    a[m] = a[j]; a[j] = V;   // place the pivot in its final position
    return j;
}
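A 0-based Python rendering of this partition scheme together with the recursive sort (the index conventions are adapted from the 1-based pseudocode; `p` is one past the last element):

```python
def partition(a, m, p):
    """Partition a[m..p-1] around the pivot a[m]; return the pivot's final index.

    i scans right for elements >= pivot, j scans left for elements <= pivot;
    out-of-order pairs are swapped until the scans cross.
    """
    v, i, j = a[m], m, p
    while True:
        i += 1
        while i < p and a[i] < v:
            i += 1
        j -= 1
        while a[j] > v:
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[m], a[j] = a[j], a[m]      # place the pivot in its final position
    return j

def quick_sort(a, m=0, p=None):
    if p is None:
        p = len(a)
    if p - m > 1:
        q = partition(a, m, p)
        quick_sort(a, m, q)      # the sorted subarrays need no later merge
        quick_sort(a, q + 1, p)
    return a

print(quick_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # -> [1, 1, 2, 3, 4, 5, 6, 9]
```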
Worst case (the pivot always lands at one end):
T(n) = a                  if n = 1
T(n) = T(n-1) + cn        if n > 1, which gives T(n) = O(n^2).

Best/average case (the array is split in half each time):
T(n) = a                  if n = 1
T(n) = 2T(n/2) + cn       if n > 1
     = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn
     = 2^k T(n/2^k) + kcn
     = n·T(1) + cn·log2(n)     (taking 2^k = n, i.e. k = log2 n)
     = n·a + c·n·log2(n)
     = n(a + c·log2 n) = O(n log n)
Wednesday, February 24, 2021 11:12 AM
❖ Strassen's Matrix Multiplication:
P = (A11 + A22)(B11 + B22)
Q = (A21 + A22) B11
R = A11 (B12 - B22)
S = A22 (B21 - B11)
T = (A11 + A12) B22
U = (A21 - A11)(B11 + B12)
V = (A12 - A22)(B21 + B22)

C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U

(Since matrix multiplication is not commutative, the order of the factors in Q, T and V matters.)
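For 2×2 matrices the block letters reduce to scalars, so the seven products can be checked numerically against the ordinary product (a quick sanity check of the formulas above):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using the seven Strassen products P..V."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    P = (a11 + a22) * (b11 + b22)
    Q = (a21 + a22) * b11
    R = a11 * (b12 - b22)
    S = a22 * (b21 - b11)
    T = (a11 + a12) * b22
    U = (a21 - a11) * (b11 + b12)
    V = (a12 - a22) * (b21 + b22)
    return [[P + S - T + V, R + T],
            [Q + S, P + R - Q + U]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # -> [[19, 22], [43, 50]]
```

Seven multiplications replace the usual eight, which is what drives the O(n^log2 7) ≈ O(n^2.81) complexity when applied recursively to matrix blocks.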
❖ General Method:
▪ The greedy method is the simplest possible design strategy.
▪ It is used to solve optimization problems: problems that require either a
minimum or a maximum result.
▪ It deals with problems that have n inputs and require us to obtain a subset
that satisfies some constraints.
▪ Any subset that satisfies these constraints is called a feasible solution.
▪ We need to find a feasible solution that either maximizes or minimizes a
given objective function. A feasible solution that does this is called an
optimal solution.
▪ The greedy method suggests that one can devise an algorithm that works in
stages, considering one input at a time.
▪ At each stage a decision is made regarding whether a particular input is in an
optimal solution. This is done by considering the inputs one at a time, in an
order determined by some selection procedure.
▪ If the inclusion of the next input into the partially constructed optimal solution
would result in an infeasible solution, then this input is not added to the
partial solution; otherwise it is added.
▪ The selection procedure itself is based on some optimization measure, which
may be the objective function.
❖ Knapsack Problem:
▪ Suppose a student is going on a trip. He has a single knapsack that can carry
items of total weight at most m.
▪ In addition, he is allowed to break items into fractions arbitrarily. Each item i
has some utility value, say pi. He wants to fill his knapsack with the items most
useful to him, with total weight at most m.
▪ Let us try to apply the greedy method to solve the knapsack problem.
We are given n objects and a knapsack or bag.
Object i has a weight wi and the knapsack has capacity m. If a fraction xi
of object i is placed into the knapsack, then a profit of pixi is earned.
▪ The objective is to obtain a filling of the knapsack that maximizes the total
profit earned. Since the knapsack capacity is m, we require the total weight
of all chosen objects to be at most m.
1. Greedy by Profit:
At each step select from the remaining items the one with the highest
profit.(provided the capacity of the knapsack is not exceeded). This
approach tries to maximize the profit by choosing the most profitable
items first.
2. Greedy by Weight:
At each step select from the remaining items the one with the least
weight. (the capacity of knapsack not exceeded). This approach tries to
maximize the profit by putting as many items into the knapsack as
possible.
3. Greedy by Profit Density:
At each step select from the remaining items the one with the largest
profit density pi/wi (provided the capacity of the knapsack is not exceeded).
This approach tries to maximize the profit by choosing items with the
largest profit per unit of weight.
Algorithm: Greedy_Knapsack(m, n)
// p[1..n] and w[1..n] contain the profits and weights respectively of the
// n objects, ordered such that p[i]/w[i] >= p[i+1]/w[i+1].
// m is the knapsack size and x[1..n] is the solution vector.
{
    for i = 1 to n do
        x[i] = 0.0;               // initialize x
    U = m;
    for i = 1 to n do
    {
        if (w[i] > U) then break;
        x[i] = 1.0;
        U = U - w[i];             // decrease the available capacity
                                  // after putting the object into the knapsack
    }
    if (i <= n) then x[i] = U / w[i];   // take a fraction of the next object
}
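A Python sketch of the algorithm, with the sorting by p/w done inside the function. The instance used below (m=20, p=(25,24,15), w=(18,15,10)) is a common textbook instance, chosen for illustration since the n=7 instance's data is not given in the text:

```python
def greedy_knapsack(m, profits, weights):
    """Fractional knapsack: take items in decreasing p/w order, then a fraction of the next."""
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(profits)
    U = m                             # remaining capacity
    for i in order:
        if weights[i] > U:
            x[i] = U / weights[i]     # fill the leftover capacity with a fraction
            break
        x[i] = 1.0
        U -= weights[i]
    return x, sum(p * xi for p, xi in zip(profits, x))

x, profit = greedy_knapsack(20, [25, 24, 15], [18, 15, 10])
print(x, profit)  # -> [0.0, 1.0, 0.5] 31.5
```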
Algorithm for greedy strategies for knapsack problem
▪ Greedy by Profit:
Find an optimal solution to the knapsack instance n=7, m=23
Example:
▪ Which is the optimal sequence in case of a Job Sequencing problem with 4 jobs
where P=(20,10,30,50) and D=(2,3,1,3)?
▪ Which is the optimal sequence in case of a Job Sequencing problem with 5 jobs
where P=(8,6,5,4,3) and D=(2,3,4,4,1)?
All the trees below are spanning trees. The number of edges in each is 3,
which is one less than the number of nodes.
❖ Example:
If the nodes of G represents cities and the edges represent possible
communication links connecting two cities, then the minimum number
of links needed to connect the n cities is n-1. The spanning tree of G
represents all feasible solutions.
❖ Kruskal Algorithm:
▪ Kruskal's Algorithm uses greedy method.
▪ Kruskal's Algorithm is used to find the minimum spanning tree for a
connected weighted graph.
▪ The main target of the algorithm is to find the subset of edges by using
which, we can traverse every vertex of the graph.
▪ Kruskal's algorithm follows greedy approach which finds an optimum
solution at every stage instead of focusing on a global optimum.
❖ Kruskal Algorithm Steps:
1. Arrange all the edges in increasing order of weight or cost:
construct a heap out of the edge costs using Heapify, the procedure which
creates the heap of edges such that the edge with minimum cost is at the root.
2. Adjust: Adjust is the procedure which rebuilds the heap so that the edge
with minimum cost among the remaining edges becomes the root.
3. Find: Find is the procedure which finds the root of the tree to which the
parameter node belongs.
4. Choose the edge with the lowest cost, delete it from the edge group and add
it into the spanning tree. If the edge creates a cycle, it is rejected.
5. This procedure is applied until the heap becomes empty.
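The steps above can be sketched in Python. For brevity this sketch uses sorting plus a union-find in place of the heap/Adjust machinery (an implementation substitution; the union-find plays the role of Find and of cycle detection):

```python
def kruskal(n, edges):
    """Kruskal's MST. edges = [(cost, u, v)] over vertices 0..n-1; returns (mst_edges, total)."""
    parent = list(range(n))

    def find(x):                          # Find: root of the tree x belongs to
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for cost, u, v in sorted(edges):      # edges in increasing order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                      # different trees: the edge creates no cycle
            parent[ru] = rv
            mst.append((u, v))
            total += cost
    return mst, total

mst, total = kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)])
print(mst, total)  # -> [(0, 1), (1, 2), (2, 3)] 6
```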
❖ Prims Algorithm:
▪ Prim's algorithm is a greedy method.
▪ In Prim's algorithm the minimum-weight edges associated with particular
vertices are considered one by one.
▪ The algorithm starts with a tree that contains only a minimum-cost
edge of G.
▪ Then edges are added to this tree one by one.
▪ The next edge (i, j) to be added is such that i is a vertex already
included in the tree, j is a vertex not yet included, and the cost of (i, j) is
minimum among all such edges.
▪ To determine this edge (i, j) efficiently, we associate with each vertex j
not yet included in the tree the cheapest edge connecting it to the tree, and
choose the j for which this edge has minimum weight.
We start from one vertex and keep adding edges with the lowest weight until
we reach our goal.
The steps for implementing Prim's algorithm are as follows:
Step 1: Select the minimum-weight edge of the graph; its two vertices become visited.
Step 2: Select an unvisited vertex which is adjacent to the visited vertices
through an edge of minimum weight.
Step 3: Repeat step 2 until all vertices are visited and we will get the
spanning tree.
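A heap-based Python sketch of Prim's algorithm (the adjacency list below is an illustrative example graph, the same one used for Kruskal above):

```python
import heapq

def prim(n, adj):
    """Prim's MST cost for vertices 0..n-1; adj[u] = list of (cost, v). Starts at vertex 0."""
    visited = [False] * n
    heap = [(0, 0)]                 # (edge cost, vertex); the start vertex costs 0
    total, picked = 0, 0
    while heap and picked < n:
        cost, u = heapq.heappop(heap)
        if visited[u]:
            continue                # stale entry for an already-included vertex
        visited[u] = True           # add the cheapest edge reaching an unvisited vertex
        total += cost
        picked += 1
        for c, v in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (c, v))
    return total

adj = [[(1, 1), (5, 2), (4, 3)],    # vertex 0: edges to 1 (cost 1), 2 (cost 5), 3 (cost 4)
       [(1, 0), (2, 2)],
       [(2, 1), (5, 0), (3, 3)],
       [(3, 2), (4, 0)]]
print(prim(4, adj))  # -> 6
```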
❖ Optimal Merge Pattern:
Alternatively, we could first merge x1 and x2 getting y1, then merge x3 and x4
getting y2, and finally merge y1 and y2 to get the desired sorted file.
Given n sorted files, there are many ways to pairwise merge them into a
single sorted file.
Different pairings require different amounts of computing time.
The problem we address now is that of determining an optimal way
to pairwise merge n sorted files. In an optimal merge pattern the number of
record moves is minimized.
Example:
The files x1, x2 and x3 are three sorted files of lengths 30, 20 and 10 records
respectively. Merging x1 and x2 requires 50 record moves; merging the result with
x3 requires another 60 moves. The total number of record moves required to merge
the three files this way is 110.
If instead we first merge x2 and x3 (taking 30 moves) and then merge with x1
(taking 60 moves), the total number of record moves is only 90. Hence the second
merge pattern is faster than the first.
Find the optimal merge pattern for merging the files of sizes 10, 5 , 7, 20, 12
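The optimal pattern always merges the two smallest files first (a Huffman-style greedy choice), which answers the exercise above:

```python
import heapq

def optimal_merge_cost(sizes):
    """Repeatedly merge the two smallest files; return the total record moves."""
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b               # merging files of sizes a and b costs a + b moves
        heapq.heappush(heap, a + b)
    return total

print(optimal_merge_cost([30, 20, 10]))        # -> 90, matching the example above
print(optimal_merge_cost([10, 5, 7, 20, 12]))  # -> 120
```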
❖ Optimal Storage on Tapes:
The programs can all be stored on the tape only if the tape length l satisfies l >= Σ li.
The tape is a sequential device, so every program on the tape is stored one after
the other.
If a program pi is to be retrieved from the tape, we first have to retrieve the
programs p1, p2, …, pi-1.
Therefore the total time needed to retrieve pi = time needed to retrieve p1 +
time needed to retrieve p2 + … + time needed to retrieve pi-1 + time needed to
read pi itself.
❖ Single-Source Shortest Paths (Dijkstra's Algorithm):
In the problem we consider, we are given a directed graph G=(V,E) and a source
vertex v0. The problem is to determine the shortest paths from v0 to all the
remaining vertices of G.
It is assumed that all the weights are positive.
The numbers on the edges are the weights. If node 1 is the source vertex, then the
shortest path from 1 to 2 is 1, 4, 5, 2.
The length of this path is 10 + 15 + 20 = 45.
Even though there are three edges on this path, it is shorter than the direct
edge 1->2, which has length 50. There is no path from 1 to 6.
The following table shows the shortest paths from vertex 1:

  Path         Length
  1) 1,4         10
  2) 1,4,5       25
  3) 1,4,5,2     45
  4) 1,3         45

These are the shortest paths from 1 to nodes 4, 5, 2 and 3.
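A Python sketch of Dijkstra's algorithm on a graph reconstructed from the numbers in this example (edges 1->4 of weight 10, 4->5 of 15, 5->2 of 20, 1->2 of 50, 1->3 of 45; vertex 6 isolated):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; graph[u] = {v: weight}, all weights positive."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                       # stale heap entry
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w            # relax the edge (u, v)
                heapq.heappush(heap, (d + w, v))
    return dist

g = {1: {4: 10, 2: 50, 3: 45}, 4: {5: 15}, 5: {2: 20}}
print(dijkstra(g, 1))  # -> {1: 0, 4: 10, 2: 45, 3: 45, 5: 25}; vertex 6 is unreachable
```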
Here, d(pj) = fj · (l1 + l2 + … + lj)
and the expected retrieval time ERT = ( Σ j=1..n d(pj) ) / N, where N = Σ fi.
Let n = 5. The programs p1 to p5 have lengths l=(2,3,3,6,10) and the frequencies
of their retrieval are f=(6,3,4,11,5).
1. Greedy by Length:
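The greedy-by-length strategy can be sketched in Python for the instance above. Note that with unequal frequencies this is only one candidate ordering, and need not be the optimal one:

```python
def greedy_by_length(lengths, freqs):
    """Order programs by non-decreasing length; return (sum of d(pj), expected retrieval time).

    d(pj) = fj * (sum of lengths stored up to and including pj).
    """
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    prefix, total = 0, 0
    for i in order:
        prefix += lengths[i]          # tape position after storing program i
        total += freqs[i] * prefix    # weighted retrieval time d(p_i)
    return total, total / sum(freqs)

total, ert = greedy_by_length([2, 3, 3, 6, 10], [6, 3, 4, 11, 5])
print(total)  # -> 333
```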
Huffman Code:
Another application of a binary tree with minimal weighted external path length
is to obtain an optimal set of codes for the messages M1, …, Mn+1. Each code is a
binary string that is used for transmission of the corresponding message. At the
receiving end the code is decoded using a decode tree.
▪ Huffman coding is a lossless data compression algorithm.
▪ It is a compression technique for reducing the size of data.
▪ It is used to store data in compressed form.
▪ When data is sent over the network, it is compressed and then
transmitted to reduce the cost of transmission.
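A minimal Huffman-code builder; the frequency table used below is an illustrative example, not data from the notes:

```python
import heapq
from itertools import count

def huffman_codes(freq):
    """Build a Huffman code for {symbol: frequency}; return {symbol: bitstring}."""
    tick = count()                           # tie-breaker so heapq never compares trees
    heap = [(f, next(tick), sym) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # merge the two least-frequent subtrees
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), (left, right)))
    codes = {}

    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node: recurse into children
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"      # single-symbol edge case

    walk(heap[0][2], "")
    return codes

codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
print(codes["a"])  # the most frequent symbol gets a 1-bit code
```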
Given a chain M1, M2, …, Mn of n matrices, where for i = 1, 2, …, n matrix Mi has
r(i-1) rows and r(i) columns:
▪ We want to parenthesize the product M = M1 M2 M3 … Mn in such a way that it
minimizes the number of scalar multiplications.
▪ Trying all possible parenthesizations takes exponential time; however, dynamic
programming provides an algorithm with time complexity O(n^3).
Recurrence Relation:
m[i, j] = 0 if i = j
m[i, j] = min over i <= k < j of { m[i, k] + m[k+1, j] + r(i-1)·r(k)·r(j) } if j > i
Equivalently, with the dimensions stored in an array mat:
dp[i, j] = 0 if i = j
dp[i, j] = min over i <= k < j of { dp[i, k] + dp[k+1, j] + mat[i-1]·mat[k]·mat[j] }
Let M1, M2, M3, M4 are four matrices with dimensions 2X3, 3X4,4X2, 2X5.
Find the number of minimum scalar multiplications required to multiply
matrices M1M2M3M4
1. (M1 (M2 (M3 M4)))
2. (M1 ((M2 M3) M4))
3. ((M1 M2) (M3 M4))
4. ((M1 (M2 M3)) M4)
5. (((M1 M2) M3) M4)
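The recurrence can be checked on this instance with a short DP table computation:

```python
def matrix_chain(dims):
    """Minimum scalar multiplications for M1..Mn, where Mi is dims[i-1] x dims[i]."""
    n = len(dims) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                          for k in range(i, j))
    return m[1][n]

print(matrix_chain([2, 3, 4, 2, 5]))  # -> 56, via parenthesization 4: ((M1 (M2 M3)) M4)
```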
Friday, March 26, 2021 10:41 AM
Example: Find an optimal solution for the 0/1 knapsack problem by using the merge
and purge method: n=3, m=6, P=(1,2,5) and W=(2,3,4).
❖ 0/1 Knapsack:
▪ 0/1 Knapsack is based on the dynamic programming method.
▪ 0/1 knapsack problem is solved using function method and merge and purge
method.
A thief robbing a store finds n items; the ith item is worth vi rupees and weighs
wi, where the vi and wi are integers. He wants to take as valuable a load as
possible, but he can carry at most m kg in his knapsack (a bag) for some integer m.
Which items should he take?
Here the thief can either select an item or leave it behind. He cannot take a
fractional amount of an item, nor take an item more than once.
This is the 0/1 knapsack problem.
We have already defined the greedy knapsack.
▪ The only variation is that xi=0 or xi=1 and not a fraction.
▪ It is a maximization problem.
▪ An optimal solution is a feasible solution for which Σ (i = 1 to n) pi·xi is maximum.
❖ 0/1 knapsack with function method:
▪ In 0/1 knapsack problem items are indivisible and in fractional knapsack items
are divisible.
f_n(m) = max { f_(n-1)(m), f_(n-1)(m - w_n) + p_n }
f_0(m) = 0 for all m >= 0
f_1(m) = p1 if w1 <= m, and 0 if 0 <= m < w1
f_i(m) = -∞ for all m < 0
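The recurrence can be implemented with a one-dimensional table (a space-saving variant of the fi formulation; iterating capacities backwards ensures each item is used at most once):

```python
def knapsack_01(m, profits, weights):
    """0/1 knapsack via f_i(y) = max(f_{i-1}(y), f_{i-1}(y - w_i) + p_i)."""
    f = [0] * (m + 1)                  # f[y] = best profit achievable with capacity y
    for p, w in zip(profits, weights):
        for y in range(m, w - 1, -1):  # backwards: each item chosen at most once
            f[y] = max(f[y], f[y - w] + p)
    return f[m]

print(knapsack_01(6, [1, 2, 5], [2, 3, 4]))  # -> 6 (items 1 and 3: weight 2+4, profit 1+5)
```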
Q. Which algorithm is used to find all-pair shortest distances using dynamic
programming?
A. Floyd–Warshall's algorithm is used for solving the all-pair shortest distances
problem.
Definition:
Given two sequences X = <x1, x2, …, xm> and Y = <y1, y2, …, yn>, a sequence
Z = <z1, z2, …, zk> is a common subsequence of X and Y if Z is a subsequence
of both X and Y.
a. If xi = yj then
C[i, j] = C[i-1, j-1] + 1
i.e. the upper-left diagonal element + 1, marked with a "cross (diagonal) arrow".
b. If xi ≠ yj then
check the upper and the left neighbours of the element to be computed.
If the upper element is greater than or equal to the left element, the new element
is the upper element with arrow "↑"; otherwise the new element is the left element
with arrow "←".
X= <A,B,C,B,D,A,B>
Y= <B,D,C,A,B,A>
  j :      0    1    2    3    4    5    6
  i        yj   B    D    C    A    B    A
  0  xi    0    0    0    0    0    0    0
  1  A     0
  2  B     0
  3  C     0
  4  B     0
  5  D     0
  6  A     0
  7  B     0

(row 0 and column 0 are initialized to 0; the remaining entries are filled using rules a and b)
Algorithm printLCS(b, x, i, j)
{
    if i = 0 or j = 0 then
        return;
    if b[i, j] = "cross arrow" then
    {
        printLCS(b, x, i-1, j-1);
        print x[i];
    }
    else if b[i, j] = "↑" then
        printLCS(b, x, i-1, j);
    else
        printLCS(b, x, i, j-1);
}
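The C-table construction and the trace-back done by printLCS can be sketched together in Python. As an implementation choice (not in the notes), the arrow matrix b is replaced by re-deriving the direction from C during the walk back:

```python
def lcs(X, Y):
    """Return (length of an LCS of X and Y, one such LCS)."""
    m, n = len(X), len(Y)
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                C[i][j] = C[i - 1][j - 1] + 1          # rule a: "cross arrow"
            else:
                C[i][j] = max(C[i - 1][j], C[i][j - 1])  # rule b: up or left
    out, i, j = [], m, n                 # walk back through the table
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif C[i - 1][j] >= C[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return C[m][n], "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # the LCS length is 4; e.g. "BCBA" or "BDAB"
```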
❖ String Editing:
▪ We are given two strings X= x1,x2,……xn and Y= y1,y2….ym where xi,
1<=i<=n and yj, 1<=j<=m are members of a finite set of symbols known as the
alphabet.
▪ We want to transform X into Y using a sequence of edit operations on X.
▪ The permissible edit operations are insert, delete and change, and there is a
cost associated with performing each.
The cost of sequence of operations is the sum of the costs of the individual
operations in the sequence.
▪ The problem of string editing is to identify minimum cost sequence of edit
operations that will transform X into Y.
▪ Let D (xi) be the cost of deleting the symbol xi from X,
▪ I (Yj) be the cost of inserting the symbol Yj into X,
▪ C (xi, yj) be the cost of changing the symbol Xi of X into Yj
  j :       0    1    2    3    4
  i    yj        b    a    b    b
  0  xi    0
  1  a
  2  a
  3  b
  4  a
  5  b
▪ The minimum cost edit sequence is to be found for:
X = x1 x2 x3 x4 x5 = a, a, b, a, b
Y = y1 y2 y3 y4 = b, a, b, b
▪ String Editing is used to identify minimum cost sequence of edit operations that
will transform X into Y.
▪ Find number of edit operations for transforming string SSAB to AAB. The cost
of each insertion and deletion operation is 1 and cost of changing any symbol to
any other symbol is 2.
X= SSAB
Y= AAB
  j :       0    1    2    3
  i    yj        A    A    B
  0  xi    0
  1  S
  2  S
  3  A
  4  B
X = abc
Y = abc
  j :       0    1    2    3
  i    yj        a    b    c
  0  xi    0
  1  a
  2  b
  3  c
The edit distance between two strings is zero when the two strings are equal.
When the lengths of the two strings are unequal, the edit distance cannot be zero.
Both dynamic programming and recursion can be used to solve the edit
distance problem.
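A Python sketch of the cost table using the costs from the exercise above (insert = delete = 1, change = 2):

```python
def edit_distance(X, Y, ci=1, cd=1, cc=2):
    """Minimum cost to transform X into Y with insert cost ci, delete cd, change cc."""
    m, n = len(X), len(Y)
    cost = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        cost[i][0] = i * cd                  # delete all of X[0..i-1]
    for j in range(1, n + 1):
        cost[0][j] = j * ci                  # insert all of Y[0..j-1]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            change = cost[i - 1][j - 1] + (0 if X[i - 1] == Y[j - 1] else cc)
            cost[i][j] = min(change,
                             cost[i - 1][j] + cd,   # delete X[i-1]
                             cost[i][j - 1] + ci)   # insert Y[j-1]
    return cost[m][n]

print(edit_distance("SSAB", "AAB"))  # -> 3
print(edit_distance("abc", "abc"))   # -> 0
```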
A, B, C, D are four cities connected by different roads. The distances along the
roads are written (in km) in the figure. A salesman starts his tour at A and
travels along the roads in such a way that he visits every other city exactly once
and comes back to A. The distance travelled during his tour should be minimum.
We observe different tours, like
A->C->D->B->A and A->B->C->D->A,
of which the first covers a distance of 16 km and the second 11 km, the minimum.
Naturally the salesperson prefers the tour with the minimum distance travelled.
▪ This is a minimization problem: the problem of finding the minimum-length tour.
▪ The solution is the shortest tour traversed by the salesperson, and the tour
includes every vertex. The shortest tour is always a cycle.
▪ The traveling salesman problem involves visiting each city exactly once.
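The brute-force idea can be sketched in Python; the distance matrix below is hypothetical, since the figure's actual distances are not in the text:

```python
from itertools import permutations

def tsp_brute_force(dist, start=0):
    """Try every tour that starts and ends at `start`, visiting each other city once."""
    n = len(dist)
    best_len, best_tour = float('inf'), None
    for perm in permutations(range(n)):
        if perm[0] != start:
            continue                       # fix the starting city to skip rotations
        length = (sum(dist[perm[k]][perm[k + 1]] for k in range(n - 1))
                  + dist[perm[-1]][start])  # close the cycle back to the start
        if length < best_len:
            best_len, best_tour = length, perm + (start,)
    return best_len, best_tour

# Hypothetical symmetric distances between cities A(0), B(1), C(2), D(3):
d = [[0, 2, 5, 3],
     [2, 0, 4, 6],
     [5, 4, 0, 1],
     [3, 6, 1, 0]]
print(tsp_brute_force(d))  # -> (10, (0, 1, 2, 3, 0)), i.e. the tour A->B->C->D->A
```

Brute force examines (n-1)! tours; the dynamic-programming (Held–Karp) formulation reduces this to O(n^2 · 2^n).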
Saturday, March 27, 2021 9:02 AM
❖ Graph Traversal:
Graph traversal is a systematic procedure for exploring a graph by visiting all of
its vertices. To solve many problems dealing with graphs we need to search
and visit the vertices of the graph in a systematic fashion.
1. Depth First Search:
This is a traversal technique. The Depth First Search (DFS) algorithm traverses a
graph in a depthward motion and uses a stack to remember where to obtain the next
vertex when a search path ends.
It employs the following rules.
Rule 1 − Visit an adjacent unvisited vertex. Mark it as visited. Display it. Push it
on a stack.
Rule 2 − If no adjacent unvisited vertex is found, pop a vertex from the stack.
(This will pop all the vertices from the stack which have no unvisited adjacent
vertices.)
Rule 3 − Repeat Rule 1 and Rule 2 until the stack is empty.
▪ DFS uses the LIFO technique. In DFS, how many times is a node visited? Once.
▪ The data structure used in the standard implementation of depth first search is
the stack; that is, we use a stack while traversing the graph.
▪ A vertex is pushed on the stack when it is visited for the first time and popped
from the stack when it is explored.
▪ A vertex which is itself visited and all of whose adjacent vertices are also
visited is said to be explored.
▪ Note:
▪ The depth first search traversal of a graph results in a tree.
▪ Depth first search is equivalent to pre-order traversal in binary trees.
▪ The maximum number of edges in the tree generated by DFS from an
undirected graph with n vertices is n-1.
▪ DFS uses backtracking and visits all the vertices in the graph.
▪ Time Complexity of DFS is? (V – number of vertices, E – number of edges)
O(V+E)
▪ The output generated by BFT or DFT of a directed graph can be either tree or
forest. (Forest is a collection of disjoint trees.)
◊ Applications Of DFS
1. Topological Sorting
2. Finding Strongly Connected components
3. Finding Articulation Point and Bridge edge
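The stack-based rules above can be sketched in Python (the small example graph is illustrative):

```python
def dfs(graph, start):
    """Iterative DFS with an explicit stack; each vertex is visited once."""
    visited, order, stack = set(), [], [start]
    while stack:
        u = stack.pop()                       # LIFO: most recently pushed vertex first
        if u in visited:
            continue
        visited.add(u)
        order.append(u)
        for v in reversed(graph.get(u, [])):  # reversed so neighbours are tried in listed order
            if v not in visited:
                stack.append(v)
    return order

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(dfs(g, 'A'))  # -> ['A', 'B', 'D', 'C']
```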
Sunday, April 4, 2021 1:26 PM
Monday, March 29, 2021 9:41 AM
❖ Topological Sorting:
▪ A topological sort or topological ordering of a directed acyclic graph G=(V,E)
is a linear ordering of its vertices such that for every edge <u, v>, u comes
before v in the linear ordering.
▪ Graph should be directed acyclic graph.(DAG)
▪ For instance the vertices of the graph may represent tasks to be performed and
the edges may represents constraints that one task must be performed before
another.
▪ The topological sorting is a valid sequence for the tasks.
▪ A topological ordering is possible if and only if the graph has no directed
cycles, that is, if it is a directed acyclic graph (DAG).
Topological sort can be implemented using both depth first and breadth first
search.
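A DFS-based sketch of topological sorting (the task graph below is an illustrative assumption):

```python
def topological_sort(graph):
    """Topological ordering of a DAG; graph[u] = list of successors of u."""
    visited, order = set(), []

    def visit(u):
        visited.add(u)
        for v in graph.get(u, []):
            if v not in visited:
                visit(v)
        order.append(u)          # u is appended only after all its successors

    for u in graph:
        if u not in visited:
            visit(u)
    return order[::-1]           # reverse post-order: u comes before v for every edge <u, v>

# Tasks with "must come before" edges: shirt -> tie -> jacket, trousers -> shoes
g = {'shirt': ['tie'], 'tie': ['jacket'], 'trousers': ['shoes'],
     'jacket': [], 'shoes': []}
order = topological_sort(g)
print(order)
```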
▪ Bridge Edge:
An edge of a graph G is a bridge edge if its deletion disconnects the graph into 2
or more non-empty components.
Chapter 6 Backtracking
❖ Introduction:
▪ Backtracking is design strategy/ approach used to solve puzzles or
problems include such as eight queens puzzle, four queens puzzle, Sudoku.
▪ Backtracking can be defined as a general algorithmic technique that considers
searching every possible combination in order to solve a computational problem.
▪ It removes partial solutions that cannot give rise to a solution of the problem,
based on the constraints given to solve the problem.
▪ Backtracking is not used for optimization.
▪ It is used for finding multiple solutions.
▪ It is a useful technique for pruning the search under some constraints.
▪ Express the desired solution as an n-tuple (x1, . . . , xn) where each xi ∈ Si , Si being
a finite set .
▪ The solution is based on finding one or more vectors that maximize, minimize, or
satisfy a criterion function P(x1, . . . , xn)
▪ Many of the problems we solve using backtracking require that all the solutions
satisfy the set of constraints.
▪ Backtracking approach
▪ Typically requires far fewer than m = m1 · m2 · · · mn trials (the size of the
solution space) to determine the solution.
▪ Form a solution (partial vector) one component at a time, and check at every step
whether it has any chance of success.
▪ If the partial solution at any point seems non-promising, ignore it.
▪ If the partial vector (x1, x2, . . . , xi) cannot lead to a solution, the
m(i+1) · · · mn possible test vectors extending it are ignored without even
being examined.
For any problem these constraints can be divided into two categories.
1. Explicit Constraint
These are the rules which restrict each xi to take values only from a given set, e.g.:
▪ Si = {1, 2, 3, 4, 5, 6, 7, 8}, 1 ≤ i ≤ 8
▪ xi = 0 or xi = 1, i.e. Si = {0, 1}
▪ xi ≥ 0, i.e. Si = {all nonnegative real numbers}
2. Implicit Constraint
These are rules that determine which of the tuples in the solution space of an
instance I satisfy the criterion function.
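The component-at-a-time construction described above can be sketched as a generic skeleton (a minimal sketch; the function names and the tiny two-component example are my own, not from the notes):

```python
def backtrack(x, i, n, S, promising, solutions):
    """Generic backtracking: build the n-tuple (x1, ..., xn) one
    component at a time. 'S' holds the explicit constraints (xi ∈ Si);
    'promising' encodes the implicit constraints and prunes partial
    vectors that cannot lead to a solution."""
    if i == n:
        solutions.append(tuple(x))
        return
    for v in S[i]:                 # explicit constraint: xi ∈ Si
        x.append(v)
        if promising(x):           # implicit constraint check
            backtrack(x, i + 1, n, S, promising, solutions)
        x.pop()                    # undo the choice and backtrack

# Illustration: all 2-tuples over Si = {0, 1} with distinct components.
sols = []
backtrack([], 0, 2, [[0, 1], [0, 1]], lambda x: len(set(x)) == len(x), sols)
print(sols)  # [(0, 1), (1, 0)]
```

The n-queens, graph-colouring, and Hamiltonian-cycle problems in the sections below are all instances of this skeleton with different `promising` checks.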
❖ n-queen Problem:
In the n-queens problem there is an n × n chessboard and we have to place n queens
on it so that no two queens attack each other; that is, no two queens may share the
same row, the same column, or the same diagonal (these are the three directions in
which queens attack). This is the generalized problem; to understand it we first
consider the 4-queens problem.
❖ 4 - queen Problem:
Given a 4 × 4 chessboard, we have to arrange 4 queens in such a way that no two
queens attack each other: no two queens may be on the same row, the same column,
or the same diagonal.
▪ Explicit Constraint:
These are the rules which restrict each xi to take value from a given set.
▪ Si = {1, 2, 3, 4}, 1 ≤ i ≤ 4
▪ Implicit Constraint:
a. No two queens should be on the same row.
b. No two queens should be on the same column.
c. No two queens should be on the same diagonal.
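With the tuple formulation (xk = column of the queen on row k), the constraints above can be sketched directly in code (a minimal sketch; the function names are my own):

```python
def place(x, k, i):
    """Can queen k go in column i? Rows are distinct by construction,
    so check only columns and diagonals against queens 0..k-1."""
    for j in range(k):
        if x[j] == i or abs(x[j] - i) == abs(j - k):
            return False
    return True

def n_queens(n, k=0, x=None, solutions=None):
    """Backtracking over the tuple (x1, ..., xn); x[k] is the column
    of the queen placed on row k."""
    if x is None:
        x, solutions = [0] * n, []
    if k == n:
        solutions.append(tuple(x))
        return solutions
    for i in range(1, n + 1):          # explicit constraint: Si = {1..n}
        if place(x, k, i):             # implicit constraints
            x[k] = i
            n_queens(n, k + 1, x, solutions)
    return solutions

print(n_queens(4))  # [(2, 4, 1, 3), (3, 1, 4, 2)]
```

For n = 4 exactly two solutions exist, and each is the mirror image of the other; the same code enumerates all 92 solutions of the 8-queens problem.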
❖ 8 queen Problem:
You are given an 8x8 chessboard, find a way to place 8 queens such that no queen can attack
any other queen on the chessboard. A queen can only be attacked if it lies on the same row, or
same column, or the same diagonal of any other queen. Print all the possible configurations.
▪ Explicit constraints for the 8-queens problem:
All solutions to the 8-queens problem can be represented as an 8-tuple
(x1, x2, . . . , x8) where xi is the column on which queen i is placed. The explicit
constraints restrict each xi to the set {1, 2, . . . , 8}, so the solution space
consists of 8^8 8-tuples.
The implicit constraints are that no two xi's can be the same (as queens must be on
different columns) and no two queens can be on the same diagonal.
The first of these constraints implies that all solutions are permutations of the
8-tuple (1, 2, 3, 4, 5, 6, 7, 8).
This realization reduces the size of the solution space to 8! tuples.
❖ Graph Colouring:
▪ (In the example graph, the colour of each node is indicated next to it.) It can
also be seen that three colours are needed to colour this graph; hence the graph's
chromatic number is 3.
▪ The minimum number of unique colours required for vertex colouring of a graph is
called its chromatic number.
▪ For the following graph, find all possible solutions with m = 3.
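A backtracking m-colouring sketch (minimal; the triangle graph and the function name are my own example, since the original figure is not reproduced here):

```python
def m_coloring(adj, m, k=0, colors=None, solutions=None):
    """Backtracking vertex colouring: try each of the m colours for
    vertex k, keeping only assignments where no edge joins two
    vertices of the same colour."""
    n = len(adj)
    if colors is None:
        colors, solutions = [0] * n, []
    if k == n:
        solutions.append(tuple(colors))
        return solutions
    for c in range(1, m + 1):
        if all(colors[v] != c for v in adj[k]):  # no neighbour already uses c
            colors[k] = c
            m_coloring(adj, m, k + 1, colors, solutions)
            colors[k] = 0                        # undo and backtrack
    return solutions

# A triangle has chromatic number 3: with m = 3 there are 3! = 6 valid
# colourings, while with m = 2 there are none.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(len(m_coloring(triangle, 3)))  # 6
print(len(m_coloring(triangle, 2)))  # 0
```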
❖ Hamiltonian Cycles:
▪ Hamiltonian circuit problem is solved by using backtracking.
▪ Let G = (V, E) be a connected graph with n vertices. A Hamiltonian cycle
(suggested by Sir William Hamilton) is a round-trip path along n edges of G
that visits every vertex once and returns to its starting position.
▪ A Hamiltonian cycle is a closed loop on a graph where every node (vertex) is
visited exactly once: a path travelling from a point back to itself, visiting
every node en route.
▪ Hamiltonian Path in an undirected graph is a path that visits each vertex exactly
once. A Hamiltonian cycle (or Hamiltonian circuit) is a Hamiltonian Path
such that there is an edge (in the graph) from the last vertex to the first
vertex of the Hamiltonian Path. Determine whether a given graph contains
Hamiltonian Cycle or not.
▪ It is used in various fields such as Computer Graphics, electronic circuit design.
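The backtracking search for a Hamiltonian cycle can be sketched as follows (a minimal sketch; the graph and function name are my own illustration):

```python
def hamiltonian_cycle(adj, path=None):
    """Backtracking search: extend the path one vertex at a time,
    visiting each vertex exactly once, and accept only if the last
    vertex has an edge back to the first (closing the cycle)."""
    vertices = list(adj)
    if path is None:
        path = [vertices[0]]                  # fix a starting vertex
    if len(path) == len(vertices):
        return path if path[0] in adj[path[-1]] else None
    for v in adj[path[-1]]:
        if v not in path:                     # visit each vertex once
            result = hamiltonian_cycle(adj, path + [v])
            if result:
                return result
    return None                               # dead end: backtrack

# A 4-cycle 0-1-2-3-0 plus a chord 1-3; a Hamiltonian cycle exists.
g = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
print(hamiltonian_cycle(g))  # [0, 1, 2, 3]
```

The returned path lists each vertex once; the closing edge back to the start vertex is implicit.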
▪ A live node is a generated node for which not all of the children have been
generated yet.
A solution node is said to be obtained if x = m (whatever the values of y and z
may be).
Example:
Solve the following instance of 0/1 knapsack using FIFOBB with variable tuple
size formulation method.
Example:
Solve the following instance of 0/1 knapsack using LCBB with variable tuple
size formulation method.
Steps:
▪ Let A be the reduced cost matrix for node R.
1. Perform row reduction.
2. Perform column reduction.
3. Make all the entries in the ith row and jth column of matrix A equal to ∞.
4. Set A(j, 1) = ∞.
▪ If r is the total amount subtracted, then
ĉ(S) = ĉ(R) + A(i, j) + r
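The row/column reduction step can be sketched in Python (a minimal sketch; the helper name `reduce_matrix` is my own, and the matrix used is the 5-city instance listed just below):

```python
INF = float('inf')

def reduce_matrix(A):
    """Row reduction then column reduction: subtract each row's
    minimum from the row, then each column's minimum from the column.
    Returns the reduced matrix and r, the total amount subtracted
    (the lower-bound contribution)."""
    n, r = len(A), 0
    A = [row[:] for row in A]                    # work on a copy
    for i in range(n):                           # row reduction
        m = min(A[i])
        if 0 < m < INF:
            r += m
            A[i] = [a - m if a < INF else a for a in A[i]]
    for j in range(n):                           # column reduction
        m = min(A[i][j] for i in range(n))
        if 0 < m < INF:
            r += m
            for i in range(n):
                if A[i][j] < INF:
                    A[i][j] -= m
    return A, r

A = [[INF, 20, 30, 10, 11],
     [15, INF, 16, 4, 2],
     [3, 5, INF, 2, 4],
     [19, 6, 18, INF, 3],
     [16, 4, 7, 16, INF]]
_, r = reduce_matrix(A)
print(r)  # 25 -> the lower bound at the root node
```

Row minima (10, 2, 2, 3, 4) contribute 21 and the remaining column minima contribute 4, so the root's lower bound is 25.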
Solve the given instance of TSP using branch and bound technique.
∞ 20 30 10 11
15 ∞ 16 4 2
3 5 ∞ 2 4
19 6 18 ∞ 3
16 4 7 16 ∞
Chapter 8 Problem Classification
1. Deterministic Algorithm:
Algorithms in which the result of every operation is uniquely defined are called
deterministic algorithms.
The group of deterministic polynomial time algorithm is denoted by P.
Thus P is the set of all decision problems solvable by deterministic algorithms in
polynomial time.
e.g. determining whether a number is odd or even, sorting, searching, finding a
minimum spanning tree (MST).
▪ A deterministic algorithm is (essentially) one that always computes the correct answer
2. Nondeterministic algorithm: (non-deterministic Polynomial)
A non-deterministic algorithm contains operations whose outcomes are not uniquely
defined but are limited to certain specific sets of possibilities. The machine executing such
operations is allowed to choose any one of these outcomes subject to a termination condition.
NP: the class of decision problems that are solvable in polynomial time on a
nondeterministic machine (or with a nondeterministic algorithm). A deterministic
computer is what we know; a nondeterministic computer is one that can "guess" the
right answer or solution.
Example: Satisfiability (SAT), the problem of deciding whether a given Boolean
formula is satisfiable.
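The "guess and verify" view of NP can be made concrete with SAT (a minimal sketch; the CNF encoding, with negative integers for negated variables, and both function names are my own):

```python
from itertools import product

def evaluate(clauses, assignment):
    """A CNF formula as a list of clauses; each literal is a variable
    index, negative when negated. Checking a guessed assignment is
    polynomial -- this is the 'verify' half of NP."""
    return all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in clauses)

def satisfiable(clauses, n):
    """A deterministic search must, in the worst case, try all 2^n
    assignments; the nondeterministic machine simply 'guesses' one."""
    for bits in product([False, True], repeat=n):
        assignment = dict(enumerate(bits, start=1))
        if evaluate(clauses, assignment):
            return assignment
    return None

# (x1 ∨ x2) ∧ (¬x1 ∨ x3): satisfied, e.g. by x1 = False, x2 = True
print(satisfiable([[1, 2], [-1, 3]], 3))
```

The gap between the cheap `evaluate` check and the exponential `satisfiable` loop is exactly the P-versus-NP gap for this problem.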
❖ Decidable Problem:
A problem is said to be Decidable if we can always construct a corresponding algorithm
that can answer the problem correctly. Suppose we are asked to compute all the prime
numbers in the range of 1000 to 2000. To find the solution of this problem, we can easily
devise an algorithm that can enumerate all the prime numbers in this range.
❖ The Halting Problem:
The halting problem is an example of an undecidable problem.
Will a given program/algorithm ever halt or not? Can we design a machine or
algorithm which tells whether a given program will always halt or not on a
particular input?
Halting means that the program on a certain input will accept it and halt, or
reject it and halt, and never go into an infinite loop.
Basically, halting means terminating. So can we have an algorithm that will tell
whether a given program will halt or not?
In terms of a Turing machine: will it terminate when run on some machine with some
particular given input string?
The answer is no: we cannot design a generalized algorithm which can appropriately
say whether a given program will ever halt or not.
The only way is to run the program and check whether it halts or not.
We can rephrase the halting problem question in such a way also:
Given a program written in some programming language (C/C++/Java), will it ever get
into an infinite loop (a loop that never stops), or will it always terminate (halt)?
This is an undecidable problem, because we cannot have an algorithm which will tell
us, in a generalized way, whether a given program will halt or not.
▪ In general we can't always know; that is why we cannot have a general algorithm.
▪ The best possible way is to run the program and see whether it halts or not.
▪ But in this way we can only observe that a program has halted: if it has not
halted yet, we cannot conclude that it never will.
▪ The problems that cannot be solved by any algorithm are called undecidable problems.
In computability theory, an undecidable problem is a type of computational problem that
requires a yes/no answer, but where there cannot possibly be any computer program that
always gives the correct answer; that is, any possible program would sometimes give the
wrong answer or run forever without giving any answer.
The Halting Problem is not NP-complete.
The halting problem is not an NP-class problem: an NP problem can be solved in a
finite amount of time, though this time period may exceed the age of the universe
for sufficiently large inputs.
The halting problem is to determine whether a given piece of code will stop
executing at some point or keep running indefinitely.
❖ Nondeterministic Polynomial:
▪ Nondeterministic polynomial (NP) problems are further classified into NP-hard
and NP-complete.
▪ NP is the class of decision problems that can be solved by non-deterministic
polynomial-time algorithms.
Explanation: NP problems are called non-deterministic polynomial problems. They are
a class of decision problems that can be solved using NP algorithms.
Problems which can be solved using polynomial-time algorithms are called tractable
(easy).
Problems that can only be solved using super-polynomial-time algorithms are called
intractable (hard).
1. NP Hard:
Non-deterministic polynomial-time hard, in computational theory, is a class of
problems that are, informally, at least as hard as the hardest problems in NP.
An example of an NP-hard problem is the decision version of the sum-of-subsets
problem.
Cook's Theorem:
Cook's Theorem states that the satisfiability problem is NP-complete. The
satisfiability problem is to determine for what values of the variables xi a given
Boolean formula is true.
Examples of such formulas are (x1 ∧ x2) ∨ (x3 ∧ x4) and (x3 ∨ x4) ∧ (x1 ∨ x2).