Algorithms Unit 4
4. Backtracking
Backtracking can be defined as a general algorithmic technique that considers
searching every possible combination in order to solve a computational problem.
Suppose another path exists from node A to node C. So, we move from node A to node C. It is also
a dead-end, so again backtrack from node C to node A. We move from node A to the starting
node.
Now we will check whether any other path exists from the starting node. So, we move from the start node to node D. Since it is not a feasible solution, we move from node D to node E. Node E is also not a feasible solution; it is a dead end, so we backtrack from node E to node D.
Suppose another path exists from node D to node F. So, we move from node D to node F. Since it
is not a feasible solution and it's a dead-end, we check for another path from node F.
Example 2
Problem: You want to find all the possible ways of arranging 2 boys and 1 girl on 3 benches.
Constraint: Girl should not be on the middle bench.
Solution: There are a total of 3! = 6 possibilities. We will try all the possibilities and get the
possible solutions. We recursively try all the possibilities.
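The recursive trial of all possibilities in Example 2 can be sketched in Python. The labels 'B1', 'B2', 'G' and treating the middle bench as index 1 are illustrative assumptions:

```python
def arrangements(people, benches=3):
    """Recursively try every arrangement of people on the benches,
    pruning arrangements that put the girl ('G') on the middle bench."""
    results = []

    def place(arr, remaining):
        if len(arr) == benches:
            results.append(tuple(arr))
            return
        for p in remaining:
            if p == 'G' and len(arr) == 1:
                continue                    # prune: girl on the middle bench
            arr.append(p)
            place(arr, [q for q in remaining if q != p])
            arr.pop()                       # backtrack

    place([], people)
    return results

print(arrangements(['B1', 'B2', 'G']))
# 4 of the 3! = 6 arrangements satisfy the constraint
```

The constraint is checked before recursing, so the two invalid arrangements are never fully generated.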
Backtracking vs. Recursion
Backtracking is an algorithmic technique that finds all the possible solutions and selects, from that set, the desired solution(s) satisfying the given constraints.
Recursion is a technique in which a function calls itself again and again until it reaches the base case.
In a brute-force approach, we evaluate all possible solutions and select one of them as the optimal solution. With the backtracking technique we obtain the same optimal solution in a smaller number of steps, because partial solutions that cannot satisfy the constraints are abandoned early; this is why we use backtracking, and why it can solve such problems more efficiently than exhaustive enumeration. Backtracking uses bounding functions (criterion functions) together with implicit and explicit constraints. While explaining the general method of the backtracking technique, we will see these implicit and explicit constraints.
The major advantage of the backtracking method is that if a partial solution (x1, x2, x3, …, xi) cannot lead to an optimal solution, then all of its extensions (xi+1, …, xn) may be ignored entirely.
In the n-queens problem, the n queens can be placed on an n×n chessboard in nⁿ ways (before applying any constraints).
Explicit constraints: rules that restrict each xi to take values only from a given set.
Implicit constraints: rules that determine which of the tuples in the solution space actually satisfy the criterion function.
3) Problem state: each node in the tree organization defines a problem state. So, A,B ,C are
problem states.
4) Solution states: These are those problem states S for which the path from the root to S
define a tuple in the solution space.
Here square nodes indicate solutions. For the above solution space there exist 3 solution states, represented in the form of tuples as (1, 2, 4), (1, 3, 6) and (1, 3, 7).
5) State space tree: if we represent the solution space in the form of a tree, then the tree is referred to as the state space tree. For example, given below is the state space tree of the 4-queen problem. Initially x1 = 1, 2, 3 or 4, i.e. we can place the first queen in any of columns 1 to 4. If x1 = 1, then x2 can be placed in the 2nd, 3rd or 4th column. If x2 = 2, then x3 can be placed in the 3rd or 4th column. If x3 = 3, then x4 = 4. So the node sequence 1-2-3-4-5 is one solution in the solution space; it may or may not be a feasible solution. Similarly, we can observe the remaining solutions in the figure.
7) Live node: A node which has been generated and all of whose children have not yet been
generated is live node. In the fig (a) node A is called live node since the children of node A
have not yet been generated.
In fig (b), node A is not a live node but B and C are live nodes. In fig (c), nodes A and B are not live, while C, D and E are live nodes.
8) E-node: the live node whose children are currently being generated is called the E-node (the node being expanded).
In figure (b), assuming node B can generate one more node, nodes A, D and C are dead nodes (a dead node is a generated node that is not to be expanded further).
N-Queens problem
Consider an n×n chessboard and n queens. The n queens are to be placed on the n×n chessboard so that no two queens are on the same row, same column or same diagonal.
The n-queens problem is a generalization of the 8-queens problem: n queens are to be placed on an n×n chessboard so that no two attack, that is, no two queens are on the same row, column or diagonal. The solution space consists of all n! permutations of (1, 2, …, n).
The recursive procedure nqueens(k, n) tries each column for queen k, using place(k, i) to test whether queen k can safely be placed in column i. For example, nqueens(1, 4) calls place(1, 1), which returns true, so x[1] = 1.
Hence we get the solution (2, 4, 1, 3); this is one possible solution for the 4-queens problem. For another solution, we have to backtrack through all possible partial solutions. The other possible solution for the 4-queens problem is (3, 4, 1, 2).
Example: Consider a graph G = (V, E) shown in fig. we have to find a Hamiltonian circuit using
Backtracking method.
Solution:
Firstly, we start our search with vertex 'a'; this vertex 'a' becomes the root of our implicit tree. Next, we choose vertex 'b' adjacent to 'a', as it comes first in lexicographical order (b, c, d). Next, we select 'c' adjacent to 'b', and then 'd' adjacent to 'c'.
Hamiltonian cycle
A Hamiltonian cycle is a cycle in a graph that visits every vertex exactly once and terminates at the starting vertex. It need not include all the edges.
The input to the problem is an undirected, connected graph. For the graph shown, the path A – B – E – D – C – A forms a Hamiltonian cycle: it visits all the vertices exactly once, but does not include the edge <B, D>.
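A backtracking search for a Hamiltonian cycle can be sketched in Python. The adjacency list below is an assumption, chosen so that A – B – E – D – C – A is a Hamiltonian cycle and the edge <B, D> exists but goes unused, consistent with the example above:

```python
def hamiltonian_cycle(graph, start):
    """Return one Hamiltonian cycle as a vertex list, or None."""
    n = len(graph)
    path = [start]

    def solve():
        if len(path) == n:
            # all vertices used: valid only if we can return to the start
            return start in graph[path[-1]]
        for v in graph[path[-1]]:
            if v not in path:            # each vertex may be used only once
                path.append(v)
                if solve():
                    return True
                path.pop()               # dead end: backtrack
        return False

    return path + [start] if solve() else None

# Assumed adjacency for the example (edges AB, AC, BD, BE, CD, DE)
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'D'],
    'D': ['B', 'C', 'E'],
    'E': ['B', 'D'],
}
print(hamiltonian_cycle(graph, 'A'))  # ['A', 'B', 'E', 'D', 'C', 'A']
```

The search first tries A-B-D-… in lexicographic-style order, hits dead ends, backtracks, and settles on A-B-E-D-C-A.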
Example 1
Example 2
Graph coloring
Introduction
Graph coloring can be described as the process of assigning colors to the vertices of a graph such that the same color is not used for two adjacent vertices. Graph coloring is also called vertex coloring. In graph coloring, we have to ensure that the graph does not contain any edge whose end vertices are colored with the same color; such a graph is known as a properly colored graph.
In this graph, we are showing the properly colored graph, which is described as follows:
The least possible value of ‘m’ required to color the graph successfully is known as the
chromatic number of the given graph.
Chromatic Number
Definition
Chromatic Number is the minimum number of colors required to properly color any graph
(or)Chromatic Number is the minimum number of colors required to color any graph, such
that no two adjacent vertices of it are assigned the same color.
Note:
All the above cycle graphs are also planar graphs.
2. Complete Graph
In a complete graph, each vertex is connected with every other vertex.
Chromatic Number of any Complete Graph= Number of vertices in that Complete Graph
The minimum number of colors required for vertex coloring of a graph G is called the chromatic number of G, denoted by χ(G).
χ(G) = 1 if and only if 'G' is a null graph. If 'G' is not a null graph, then χ(G) ≥ 2.
Example
Take a look at the following graph. The regions ‘aeb’ and ‘befc’ are adjacent, as there is a
common edge ‘be’ between those two regions.
Similarly, the other regions are also coloured based on the adjacency. This graph is coloured
as follows –
Here backtracking means stopping further recursive calls on adjacent vertices by returning false. In this algorithm, Step 1.2 (Continue) and Step 2 (Backtrack) cause the program to try different color options.
Continue – try a different color for current vertex.
Backtrack – try a different color for last colored vertex.
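The Continue/Backtrack behaviour described above can be sketched as an m-coloring backtracking routine in Python. The 4-cycle used as the example graph is an assumption, not the graph in the figure:

```python
def graph_coloring(adj, m):
    """Backtracking m-coloring: return a list of colors (1..m) per vertex,
    or None if the graph cannot be properly colored with m colors."""
    n = len(adj)
    colors = [0] * n                     # 0 = not yet colored

    def safe(v, c):
        # no adjacent vertex may already carry color c
        return all(colors[u] != c for u in adj[v])

    def solve(v):
        if v == n:
            return True                  # every vertex properly colored
        for c in range(1, m + 1):        # Continue: try each color in turn
            if safe(v, c):
                colors[v] = c
                if solve(v + 1):
                    return True
                colors[v] = 0            # Backtrack: undo and try another
        return False                     # no color works: caller backtracks

    return colors if solve(0) else None

# Assumed example: a 4-cycle 0-1-2-3-0, whose chromatic number is 2
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(graph_coloring(adj, 2))   # [1, 2, 1, 2]
print(graph_coloring(adj, 1))   # None
```

Returning False from solve() is exactly the "stop further recursive calls on adjacent vertices" step: the previous vertex then tries its next color.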
15-Puzzle problem
A legal move is one that moves a tile adjacent to the empty space (ES) into the ES. Each move creates a new arrangement of the tiles, called a state of the puzzle. The initial and final arrangements are called the initial state and the goal state respectively.
There are 16! = 20,922,789,888,000 ≈ 20.9 × 10¹² different arrangements of the tiles on the frame. Only half of them are reachable from any given initial state.
If we number positions 1-16, position(i) is the frame position containing the tile
number i in the goal arrangement
Check whether the goal state is achieved from the initial state given below
Initial State
less(1) = 0 less(2) = 0 less(3) = 0 less(4) = 0 less(5) = 0 less(6) = 0 less(7) = 0 less(8) = 1
less(9) = 1 less(10) = 1 less(11) = 0 less(12) = 0 less(13) = 1 less(14) = 1 less(15) = 1
r=2
∑ = 0+0+0+0+0+0+0+1+1+1+0+0+1+1+1+2 = 8
Goal State
less(1) = 0 less(2) = 0 less(3) = 0 less(4) = 0 less(5) = 0 less(6) = 0 less(7) = 0 less(8) = 0
less(9) = 0 less(10) = 0 less(11) = 0 less(12) = 0 less(13) = 0 less(14) = 0 less(15) = 0
r=4
∑ = 15*0 + 4 = 4
Since the initial state and the goal state both have even parity, the goal state is reachable from the initial state.
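The parity check can be sketched in Python. The concrete initial arrangement below is an assumption chosen to match the less() values and r = 2 computed above (blank in row 2, tiles 7 and 12 displaced); the actual figure may differ:

```python
def less_sum_and_r(state):
    """state: the 16 frame positions row by row, with 16 for the blank.
    Returns (sum of less(i) for tiles 1..15, row r of the blank), where
    less(i) counts tiles j < i whose position comes after tile i."""
    pos = {tile: p for p, tile in enumerate(state, start=1)}
    total = sum(1
                for i in range(1, 16) for j in range(1, i)
                if pos[j] > pos[i])
    return total, (pos[16] - 1) // 4 + 1

def reachable(initial, goal):
    # goal reachable iff sum(less(i)) + r has the same parity in both states
    s1, r1 = less_sum_and_r(initial)
    s2, r2 = less_sum_and_r(goal)
    return (s1 + r1) % 2 == (s2 + r2) % 2

# Assumed initial arrangement consistent with the less() values above
initial = [1, 2, 3, 4,
           5, 6, 16, 8,
           9, 10, 7, 11,
           13, 14, 15, 12]
goal = list(range(1, 17))

print(less_sum_and_r(initial))   # (6, 2)  -> 6 + 2 = 8, even
print(less_sum_and_r(goal))      # (0, 4)  -> 0 + 4 = 4, even
print(reachable(initial, goal))  # True
```

Both sums are even, reproducing the conclusion above that the goal state is reachable.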
In the state space tree, node 1 dies after leaving behind nodes 2, 3, 4 and 5. Calculating ĉ(x) for these nodes, we have:
ĉ(2) = 1 + 4 = 5
ĉ(3) = 1 + 4 = 5
ĉ(4) = 1 + 2 = 3
ĉ(5) = 1 + 4 = 5
Since ĉ(4) is the lowest value, the next E-node is node 4.
The selection rule for the next node in BFS and DFS is "blind", i.e. it does not give any preference to a node that has a very good chance of leading the search to an answer node quickly. The search for an optimal solution can often be sped up by using an "intelligent" ranking function, also called an approximate cost function, to avoid searching sub-trees that do not contain an optimal solution. Least-cost search is similar to BFS-like search but with one major optimization: instead of following FIFO order, we choose the live node with the least cost. Following the node with the least promising cost does not by itself guarantee an optimal solution, but it gives a very good chance of reaching an answer node quickly.
Let us take the example below and try to calculate the promising cost when Job 2 is assigned to worker A. Since Job 2 is assigned to worker A (marked in green), the cost becomes 2, and Job 2 and worker A become unavailable (marked in red). Now we assign Job 3 to worker B, as it has the minimum cost among the unassigned jobs; the cost becomes 2 + 3 = 5, and Job 3 and worker B also become unavailable. Finally, Job 1 gets assigned to worker C, as it has the minimum cost among the unassigned jobs, and Job 4 gets assigned to worker D, as it is the only job left. The total cost becomes 2 + 3 + 5 + 4 = 14.
Below diagram shows complete search space diagram showing optimal solution path in
green.
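A least-cost branch and bound for the job assignment problem can be sketched in Python. The cost matrix below is an assumption consistent with the walkthrough above (Job 2 to A costs 2, Job 3 to B costs 3, Job 1 to C costs 5, Job 4 to D costs 4, greedy total 14); the search itself finds the true optimum, 13:

```python
import heapq

def assignment_lc(cost):
    """Least-cost branch and bound: workers are assigned level by level,
    and the live node with the smallest promising cost is expanded next
    instead of following FIFO order."""
    n = len(cost)

    def bound(level, assigned, g):
        # promising cost = cost so far + a lower estimate for the rest:
        # each remaining worker takes its cheapest still-unassigned job
        # (conflicts are ignored, so this never overestimates)
        return g + sum(min(cost[w][j] for j in range(n) if j not in assigned)
                       for w in range(level, n))

    tie = 0                                   # heap tiebreaker
    heap = [(bound(0, frozenset(), 0), 0, 0, 0, frozenset(), ())]
    while heap:
        b, _, g, level, assigned, jobs = heapq.heappop(heap)
        if level == n:                        # first complete node popped is
            return g, list(jobs)              # optimal (the bound is admissible)
        for j in range(n):
            if j not in assigned:
                tie += 1
                g2 = g + cost[level][j]
                heapq.heappush(heap, (bound(level + 1, assigned | {j}, g2),
                                      tie, g2, level + 1,
                                      assigned | {j}, jobs + (j,)))

# Assumed cost matrix: rows = workers A-D, columns = jobs 1-4
cost = [[9, 2, 7, 8],
        [6, 4, 3, 7],
        [5, 8, 1, 8],
        [7, 6, 9, 4]]
print(assignment_lc(cost))  # (13, [1, 0, 2, 3])
```

The result assigns worker A to Job 2, B to Job 1, C to Job 3 and D to Job 4 for a total cost of 13, improving on the greedy promising-cost path of 14.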
Definition
Suppose a person has to fill up his knapsack by selecting various objects which will
give him maximum profit.
Mathematical representation of the knapsack problem
Number the objects from 1 to n and introduce a vector of binary variables xj (j = 1, …, n), where xj = 1 if object j is selected and xj = 0 otherwise. If pj is the profit given by object j, wj its weight, and c the size (capacity) of the knapsack, the problem is to maximize ∑ pj xj subject to ∑ wj xj ≤ c.
Steps
Calculate the cost function and the Upper bound for the two children of each node.
Here, the (i + 1)th level indicates whether the ith object is to be included or not.
If the cost function for a given node is greater than the upper bound, then the node
need not be explored further. Hence, we can kill this node. Otherwise, calculate the
upper bound for this node. If this value is less than U, then replace the value of U with
this value. Then, kill all unexplored nodes which have cost function greater than this
value.
The next node to be checked after reaching all nodes in a particular level will be the
one with the least cost function value among the unexplored nodes.
While including an object, one needs to check whether adding the object crosses the threshold. If it does, one has reached the terminal point in that branch, and all the succeeding objects will not be included.
Solution
In this problem we will calculate lower bound and upper bound for each node. Place first item
in knapsack. Remaining weight of knapsack is 15 – 2 = 13. Place next item w2 in knapsack and
the remaining weight of knapsack is 13 – 4 = 9. Place next item w3 in knapsack then the
remaining weight of knapsack is 9 – 6 = 3. No fractions are allowed in calculation of upper
bound so w4 cannot be placed in knapsack.
Profit = P1 + P2 + P3 = 10 + 10 +12
So, Upper bound = 32
To calculate lower bound we can place w4 in knapsack since fractions are allowed in
calculation of lower bound.
The knapsack problem is a maximization problem, but this branch and bound technique is formulated for minimization problems. To convert the maximization problem into a minimization problem, we take the negative of the upper bound and the lower bound.
Therefore, Upper bound (U) = -32 & Lower bound (L) = -38
We choose the path, which has minimum difference of upper bound and lower bound. If the
difference is equal then we choose the path by comparing upper bounds and we discard node
with maximum upper bound.
For node 3, x1 = 0, means we should not place first item in the knapsack.
U = 10 + 12 = 22, make it as -22
Next, we will calculate difference of upper bound and lower bound for nodes 2, 3
For node 2, U – L = -32 + 38 = 6
For node 3, U – L = -22 + 32 = 10
Choose node 2, since it has minimum difference value of 6.
Now we will calculate lower bound and upper bound of node 4 and 5. Calculate difference of
lower and upper bound of nodes 4 and 5.
For node 4, U – L = -32 + 38 = 6
For node 5, U – L = -22 + 36 = 14
Choose node 4, since it has minimum difference value of 6
Now we will calculate the lower bound and upper bound of nodes 6 and 7, and the difference of the lower and upper bounds of nodes 6 and 7.
For node 6, U – L = -32 + 38 =6
For node 7, U – L = -38 + 38 =0
Choose node 7, since it has the minimum difference value of 0.
Now we will calculate the lower bound and upper bound of nodes 8 and 9, and the difference of the lower and upper bounds of nodes 8 and 9.
For node 8, U – L = -38 + 38 =0
For node 9, U – L = -20 + 20 =0
Here the difference is the same, so we compare the upper bounds of nodes 8 and 9 and discard the node with the maximum upper bound. Since node 9 has the maximum upper bound, we discard node 9 and choose node 8.
Consider the path 1 → 2 → 4 → 7 → 8.
X1 =1 ; X2 =1 ; X3 =0 ; X4 = 1
The solution for 0/1 Knapsack problem is (x1, x2, x3, x4) = (1, 1, 0, 1)
Maximum Profit = ∑ Pi Xi = 10×1 + 10×1 + 12×0 + 18×1 = 38
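The worked example can be sketched as a branch and bound in Python, using the data inferred from the calculations above (profits 10, 10, 12, 18; weights 2, 4, 6, 9; capacity 15). Instead of negating the bounds for minimization as in the notes, this sketch keeps the problem as a maximization and prunes with a fractional (optimistic) bound; that bound is valid here because the items are already sorted by profit/weight ratio:

```python
def knapsack_bb(profits, weights, capacity):
    """DFS branch and bound for 0/1 knapsack. A node is pruned when its
    optimistic (fractional) bound cannot beat the best profit found.
    Assumes items are sorted by decreasing profit/weight ratio."""
    n = len(profits)
    best_profit, best_x = 0, [0] * n
    x = [0] * n

    def bound(j, profit, room):
        # optimistic bound: fill remaining room greedily, allowing a
        # fraction of the first item that does not fit
        for k in range(j, n):
            if weights[k] <= room:
                room -= weights[k]
                profit += profits[k]
            else:
                return profit + profits[k] * room / weights[k]
        return profit

    def solve(j, profit, room):
        nonlocal best_profit, best_x
        if profit > best_profit:
            best_profit, best_x = profit, x[:]
        if j == n or bound(j, profit, room) <= best_profit:
            return                      # prune: cannot beat the incumbent
        if weights[j] <= room:          # branch: include object j
            x[j] = 1
            solve(j + 1, profit + profits[j], room - weights[j])
            x[j] = 0
        solve(j + 1, profit, room)      # branch: exclude object j

    solve(0, 0, capacity)
    return best_profit, best_x

# Data from the worked example above
print(knapsack_bb([10, 10, 12, 18], [2, 4, 6, 9], 15))  # (38, [1, 1, 0, 1])
```

This reproduces the solution (x1, x2, x3, x4) = (1, 1, 0, 1) with maximum profit 38; the fractional bound of 38 at the root matches the lower-bound magnitude computed above.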
Least Cost Branch & Bound using Static State Space Tree for Travelling Salesman
Problem
Branch and bound is an effective way to find a good solution quickly by pruning unnecessary branches of the search tree.
Working
Consider a directed weighted graph G = (V, E, W), where nodes represent cities and weighted directed edges represent the direction of and distance between two cities.
Cost = L + Cost(i, j) + r, where L is the cost of the parent node, Cost(i, j) is the entry for edge (i, j) in the parent's reduced matrix, and r is the reduction cost of the newly created matrix.
vii. Compute the cost of the newly created reduced matrix using the formula above.
Problem:
Find the solution of following travelling salesman problem using branch and bound method.
Consider the following distance matrix:
Solution:
Row reduction: find the minimum element of each row and subtract it from each cell of that row. The row-reduced matrix is shown in the figure.
Column reduction: from the row-reduced matrix, find the minimum element of each column and subtract it from each cell of that column, giving the column-reduced matrix MColRed.
Each row and column of MColRed has at least one zero entry, so this matrix is reduced matrix.
Column reduction cost (M) = 1 + 0 + 3 + 0 + 0 = 4
The state space tree for the 5-city problem is depicted as follows. The number within a circle indicates the order in which the node is generated, and the number on an edge indicates the city being visited.
M2 is already reduced.
Cost of node 2 :
C(2) = C(1) + Reduction cost + M1 [1] [2]
= 25 + 0 + 10 = 35
Select edge 1-4: set M1[1][ ] = M1[ ][4] = ∞ and M1[4][1] = ∞, then reduce the resultant matrix if required.
Select edge 1-5: set M1[1][ ] = M1[ ][5] = ∞ and M1[5][1] = ∞, then reduce the resultant matrix if required.
Node 4 has minimum cost for path 1-4. We can go to vertex 2, 3 or 5. Let’s explore all three
nodes.
Select path 1-4-2 (add edge 4-2): set M4[1][ ] = M4[4][ ] = M4[ ][2] = ∞ and M4[2][1] = ∞, then reduce the resultant matrix if required.
Select edge 4-3 (path 1-4-3): set M4[1][ ] = M4[4][ ] = M4[ ][3] = ∞ and M4[3][1] = ∞, then reduce the resultant matrix if required.
Cost of node 7:
C(7) = C(4) + Reduction cost + M4 [4] [3]
= 25 + 2 + 11 + 12 = 50
Select edge 4-5 (Path 1-4-5):
Matrix M8 is reduced.
Cost of node 8:
C(8) = C(4) + Reduction cost + M4 [4][5]
= 25 + 11 + 0 = 36
Cost of node 9:
C(9) = C(6) + Reduction cost + M6 [2][3]
= 28 + 11 + 2 + 11 = 52
Add edge 2-5 (Path 1-4-2-5):
Set M6 [1][ ] = M6 [4][ ] = M6 [2][ ] = M6 [ ][5] = ∞
Set M6 [5][1] = ∞
Reduce resultant matrix if required.
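The reduction and child-cost steps used throughout this example can be sketched in Python. The 4-city matrix below is an illustrative assumption, not the 5-city matrix of the figure:

```python
import math

INF = math.inf

def reduce_matrix(m):
    """Row-reduce, then column-reduce, a cost matrix in place.
    Returns the total reduction cost (the node's lower-bound contribution)."""
    n = len(m)
    cost = 0
    for i in range(n):                       # row reduction
        low = min(m[i])
        if 0 < low < INF:
            cost += low
            m[i] = [x - low for x in m[i]]
    for j in range(n):                       # column reduction
        low = min(m[i][j] for i in range(n))
        if 0 < low < INF:
            cost += low
            for i in range(n):
                m[i][j] -= low
    return cost

def child_cost(parent_m, parent_cost, i, j, start=0):
    """Cost of the child node reached via edge (i, j),
    per Cost = L + Cost(i, j) + r from the working steps above."""
    m = [row[:] for row in parent_m]
    edge = m[i][j]
    for k in range(len(m)):
        m[i][k] = INF                        # row i can no longer be left
        m[k][j] = INF                        # column j can no longer be entered
    m[j][start] = INF                        # forbid returning to start early
    return parent_cost + edge + reduce_matrix(m), m

# Assumed 4-city distance matrix (INF on the diagonal)
m = [[INF, 10, 15, 20],
     [10, INF, 35, 25],
     [15, 35, INF, 30],
     [20, 25, 30, INF]]
root = reduce_matrix(m)
print(root)                                  # 70: cost of the root node
print(child_cost(m, root, 0, 1)[0])          # 80: cost after taking edge 1-2
```

After reduction every row and column of the matrix contains at least one zero, exactly the condition checked for MColRed above, and each child's cost adds the parent's bound, the chosen edge entry, and the child's own reduction cost.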