AI (Unit 2)
INTRODUCTION
Problem solving is a method of deriving solution steps, beginning from an initial description of the problem
and ending at the desired solution. In AI, problems are frequently modeled as state space problems, where the
state space is the set of all possible states from the start state to the goal state.
The 2 types of problem-solving methods that are generally followed are general purpose & special
purpose methods. A general purpose method is applicable to a wide variety of problems, whereas a special
purpose method is tailor-made for a particular problem. The most general approach for solving a
problem is to generate a solution & test it. To generate a new state in the search space, an
action/operator/rule is applied, and the resulting state is tested to check whether it is the goal state. If it is not the goal
state, the procedure is repeated. The order of application of rules to the current state is called the control strategy.
Production System:
Production System (PS) is one of the formalisms that help AI programs carry out the search process more
conveniently in state-space problems. This system consists of the start (initial) state(s) & goal (final) state(s) of the
problem, along with one or more databases consisting of suitable & necessary information for the particular task.
A Production System consists of a number of production rules.
Example 1: Water Jug Problem
Problem Statement: There are two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring marker on
it. There is a pump that can be used to fill the jugs with water. How can we get exactly 2 gallons of water into the
4-gallon jug?
The state space for this problem can be described as the set of ordered pairs of integers (x, y) such that x
= 0, 1, 2, 3 or 4 and y = 0, 1, 2 or 3. x represents the number of gallons of water in the 4-gallon jug, and y
represents the number of gallons of water in the 3-gallon jug. The start state is (0, 0). The goal state is
(2, n) for any value of n.
1. (x, y) if x < 4 → (4, y) Fill the 4-gallon jug
2. (x, y) if y < 3 → (x, 3) Fill the 3-gallon jug
3. (x, y) if x > 0 → (x - d, y) Pour some water out of the 4-gallon jug
4. (x, y) if y > 0 → (x, y - d) Pour some water out of the 3-gallon jug
5. (x, y) if x > 0 → (0, y) Empty the 4-gallon jug on the ground
6. (x, y) if y > 0 → (x, 0) Empty the 3-gallon jug on the ground
7. (x, y) if x + y ≥ 4 & y > 0 → (4, y - (4 - x)) Pour water from the 3-gallon jug into the 4-gallon jug
until the 4-gallon jug is full
8. (x, y) if x + y ≥ 3 & x > 0 → (x - (3 - y), 3) Pour water from the 4-gallon jug into the 3-gallon jug
until the 3-gallon jug is full
9. (x, y) if x + y ≤ 4 & y > 0 → (x + y, 0) Pour all the water from the 3-gallon jug into the 4-gallon jug
10. (x, y) if x + y ≤ 3 & x > 0 → (0, x + y) Pour all the water from the 4-gallon jug into the 3-gallon jug
11. (0, 2) → (2, 0) Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug
12. (2, y) → (0, y) Empty the 2 gallons in the 4-gallon jug on the ground
Gallons in the 4-Gallon Jug    Gallons in the 3-Gallon Jug    Rule Applied
0                              0                              -
0                              3                              2
3                              0                              9
3                              3                              2
4                              2                              7
0                              2                              5 (or) 12
2                              0                              9 (or) 11
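The trace above can be reproduced with a minimal Python sketch, assuming BFS as the control strategy. The pour rules 7-12 are condensed into two "pour until one jug is full or the other is empty" successors, and rules 3-4 (pouring an arbitrary amount d out) are omitted, since they never shorten a solution; the function name is illustrative.

```python
from collections import deque

def water_jug_bfs(cap4=4, cap3=3, target=2):
    """Breadth-first search over states (x, y): x = gallons in the 4-gallon jug,
    y = gallons in the 3-gallon jug."""
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == target:                      # goal: exactly `target` gallons in the 4-gallon jug
            path = [(x, y)]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
        pour34 = min(x + y, cap4)            # pour 3-gallon into 4-gallon (rules 7 & 9)
        pour43 = min(x + y, cap3)            # pour 4-gallon into 3-gallon (rules 8 & 10)
        for nxt in [(cap4, y),               # rule 1: fill the 4-gallon jug
                    (x, cap3),               # rule 2: fill the 3-gallon jug
                    (0, y),                  # rule 5: empty the 4-gallon jug
                    (x, 0),                  # rule 6: empty the 3-gallon jug
                    (pour34, y - (pour34 - x)),
                    (x - (pour43 - y), pour43)]:
            if nxt not in parent:            # each state is visited only once
                parent[nxt] = (x, y)
                queue.append(nxt)
    return None

print(water_jug_bfs())  # one shortest sequence of states from (0, 0) to 2 gallons in the 4-gallon jug
```

Because BFS explores states level by level, the returned path uses the minimum number of rule applications (6 moves, 7 states), the same length as the trace in the table.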
Example 2: Water Jug Problem
Problem Statement: There are two jugs, a 5-gallon one and a 3-gallon one. Neither has any measuring marker on
it. There is a pump that can be used to fill the jugs with water. How can we get exactly 4 gallons of water into the
5-gallon jug?
The state space for this problem can be described as the set of ordered pairs of integers (x, y) such that x
= 0, 1, 2, 3, 4 or 5 and y = 0, 1, 2 or 3. x represents the number of gallons of water in the 5-gallon jug, and y
represents the number of gallons of water in the 3-gallon jug. The start state is (0, 0). The goal state is
(4, n) for any value of n.
Example 3: Missionaries & Cannibals Problem
Problem Statement: Three missionaries & three cannibals want to cross a river. There is a boat on their side of
the river that can be used by either 1 or 2 persons. How should they use this boat to cross the river in such a
way that cannibals never outnumber missionaries on either side of the river? If the cannibals ever outnumber the
missionaries (on either bank), then the missionaries will be eaten. How can they cross over without being eaten?
Consider Missionaries as ‘M’, Cannibals as ‘C’ & Boat as ‘B’ which are on the same side of the river.
Initial State: ([3M, 3C, 1B], [0M, 0C, 0B]) Goal State: ([0M, 0C, 0B], [3M, 3C, 1B])
Rule 1: (0, M): One Missionary sailing the boat from Bank-1 to Bank-2.
Rule 2: (M, 0): One Missionary sailing the boat from Bank-2 to Bank-1.
Rule 3: (M, M): Two Missionaries sailing the boat from Bank-1 to Bank-2.
Rule 4: (M, M): Two Missionaries sailing the boat from Bank-2 to Bank-1.
Rule 5: (M, C): One Missionary & One Cannibal sailing the boat from Bank-1 to Bank-2.
Rule 6: (C, M): One Cannibal & One Missionary sailing the boat from Bank-2 to Bank-1.
Rule 7: (C, C): Two Cannibals sailing the boat from Bank-1 to Bank-2.
Rule 8: (C, C): Two Cannibals sailing the boat from Bank-2 to Bank-1.
Rule 9: (0, C): One Cannibal sailing the boat from Bank-1 to Bank-2.
Rule 10: (C, 0): One Cannibal sailing the boat from Bank-2 to Bank-1.
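Rules 1-10 can be applied exhaustively with BFS; the sketch below is a minimal Python version. A state (m, c, b) records the missionaries, cannibals and boat on Bank-1; the five boat loads correspond to the rule pairs above, with the direction decided by the boat's position. The function names are illustrative.

```python
from collections import deque

def safe(m, c):
    """A bank is safe if it has no missionaries, or at least as many missionaries
    as cannibals; (m, c) counts Bank-1, so Bank-2 holds (3 - m, 3 - c)."""
    return (m == 0 or m >= c) and ((3 - m) == 0 or (3 - m) >= (3 - c))

def missionaries_bfs():
    """BFS over states (m, c, b); goal: everyone (and the boat) on Bank-2."""
    start, goal = (3, 3, 1), (0, 0, 0)
    parent = {start: None}
    queue = deque([start])
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # (missionaries, cannibals) in the boat
    while queue:
        state = queue.popleft()
        if state == goal:
            path = [state]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
        m, c, b = state
        for dm, dc in moves:
            # subtract from Bank-1 when the boat leaves it, add when it returns
            nm, nc = (m - dm, c - dc) if b == 1 else (m + dm, c + dc)
            nxt = (nm, nc, 1 - b)
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None

solution = missionaries_bfs()
print(len(solution) - 1)  # number of one-way crossings in a shortest solution
```

Since BFS finds a shortest solution, the sketch confirms the classic result that 11 crossings are needed.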
State Space Search:
A State Space Search is another method of problem representation that facilitates easy search. Using this
method, we can also find a path from the start state to the goal state while solving a problem. A state space basically
consists of 4 components:
• A set of nodes N, each representing a state of the problem
• A set of arcs A, each corresponding to an operator that transforms one state into another
• A nonempty subset S of N that contains the start state(s) of the problem
• A nonempty subset G of N that contains the goal state(s) of the problem
A solution path is a path through the graph from a node in S to a node in G. The main objective of a search
algorithm is to determine a solution path in the graph. There may be more than one solution path, as there may be
more than one way of solving the problem.
Problem Statement: The eight-puzzle problem has a 3 X 3 grid with 8 randomly numbered (1 to 8) tiles arranged
on it, with one empty cell. At any point, a tile adjacent to the empty cell can move into it, creating a new empty cell.
Solving this problem involves arranging the tiles such that we get the goal state from the start state.
A state for this problem should keep track of the position of all tiles on the game board, with 0 representing the
blank (empty cell) position on the board. The start & goal states may be represented as follows, with each list
representing the corresponding row:
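A minimal Python sketch of the successor function for this state representation (here the board is flattened into a tuple of 9 numbers read row by row; the function name is illustrative):

```python
def successors(state):
    """Generate all states reachable by sliding one adjacent tile into the
    blank (0). `state` is a tuple of 9 numbers, read row by row on the 3x3 board."""
    moves = []
    i = state.index(0)                      # position of the blank
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:       # the neighbour must stay on the board
            j = r * 3 + c
            board = list(state)
            board[i], board[j] = board[j], board[i]     # slide the tile
            moves.append(tuple(board))
    return moves

# With the blank in the centre, four moves are possible.
print(len(successors((1, 2, 3, 4, 0, 5, 6, 7, 8))))  # 4
```

Any of the search strategies discussed below can be plugged on top of this successor function to build the search tree.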
Solution: Following is a Partial Search Tree for Eight Puzzle Problem
Example: Chess Game (One Legal Chess Move)
Chess is basically a competitive 2-player game played on a chequered board with 64 squares arranged in
an 8 X 8 grid. Each player is given 16 pieces of the same colour (black or white). These include 1 King, 1
Queen, 2 Rooks, 2 Knights, 2 Bishops & 8 Pawns. Each of these pieces moves in a unique manner. The player
who chooses the white pieces gets the first turn to play. The players get alternate chances in which they can
move one piece at a time. The objective of this game is to capture the opponent's King: the
opponent's King has to be placed in such a situation where it is under immediate attack & there is no way
to save it from the attack. This is known as Checkmate.
For the problem of playing chess, the starting position can be described as an 8 X 8 array where each position
contains a symbol standing for the appropriate piece in the official chess opening position. We can define our goal
as any board position in which the opponent does not have a legal move & his/her king is under attack. The
legal moves provide the way of getting from the initial state to the goal state. They can be described easily as a set of
rules consisting of 2 parts:
• A left side that serves as a pattern to be matched against the current board position
• A right side that describes the change to be made to the board position to reflect the move
Control strategy is one of the most important components of problem solving; it describes the order of
application of the rules to the current state. A control strategy should be such that it causes motion towards a
solution. The second requirement of a control strategy is that it should explore the solution space in a systematic
manner. Depth-First & Breadth-First are systematic control strategies. There are 2 directions in which a search
could proceed:
• Data-Driven Search, called Forward Chaining, from the Start State
• Goal-Driven Search, called Backward Chaining, from the Goal State
Forward Chaining: The process of forward chaining begins with known facts & works towards a solution. For
example, in the 8-puzzle problem, we start from the start state & work forward to the goal state. In this case, we
begin building a tree of move sequences with the root of the tree as the start state. The states at the next level of the
tree are generated by finding all rules whose left sides match the root & using their right sides to create the new
states. This process continues until a configuration that matches the goal state is generated.
Backward Chaining: It is a goal-directed strategy that begins with the goal state & continues working backward,
generating more sub-goals that must also be satisfied to satisfy the main goal, until we reach the start state. The Prolog
(Programming in Logic) language uses this strategy. In this case, we begin building a tree of move sequences
with the goal state as the root of the tree. The states at the next level of the tree are generated by finding all
rules whose right sides match the goal state & using their left sides to create the new states. This process
continues until a configuration that matches the start state is generated.
Note: We can use both Data-Driven & Goal-Driven strategies for problem solving, depending on the nature of
the problem.
CHARACTERISTICS OF PROBLEM
Recoverable: These are the problems where solution steps can be undone. For example, in the Water Jug Problem,
if we have filled up a jug, we can empty it again; any state can be reached again by undoing the steps. These
problems are generally puzzles played by a single player. Such problems can be solved by backtracking, so the
control strategy can be implemented using a push-down stack.
Irrecoverable: These are the problems where solution steps cannot be undone. For example, any 2-player game such as
chess, playing cards, snakes & ladders etc. is an example of this category. Such problems can be solved by a planning
process.
2. Decomposability of a Problem: Divide the problem into a set of independent smaller sub-problems,
solve them, and combine the solutions to get the final solution. The process of dividing into sub-problems continues
till we get a set of the smallest sub-problems for which a small collection of specific rules can be used. Divide-
And-Conquer is the technique commonly used for solving such problems. It is an important & useful
characteristic, as each sub-problem is simpler to solve & can be handed over to a different processor. Thus, such
problems can be solved in a parallel processing environment.
3. Role of Knowledge: Knowledge plays an important role in solving any problem. Knowledge could be in
the form of rules & facts, which help in generating the search space for finding the solution.
4. Consistency of Knowledge Base used in Solving Problem: Make sure that knowledge base used to solve
problem is consistent. Inconsistent knowledge base will lead to wrong solutions. For example, if we have
knowledge in the form of rules & facts as follows:
If it is humid, it will rain. If it is sunny, then it is day time. It is a sunny day. It is night time.
This knowledge is not consistent, as there is a contradiction: 'it is day time' can be deduced from the
knowledge, & both 'it is night time' and 'it is day time' cannot hold at the same time. If the knowledge
base has such an inconsistency, then some methods may be used to avoid such conflicts.
5. Requirement of Solution: We should analyze the problem to determine whether the solution required is absolute or
relative. We call a solution absolute if we have to find the exact solution, whereas it is relative if we have a
reasonably good, approximate solution. For example, in the Water Jug Problem, if there is more than one way
to solve the problem, then we follow one path successfully. There is no need to go back & find a better solution.
In this case, the solution is absolute. In the Travelling Salesman Problem, our goal is to find the shortest route;
unless all routes are known, it is difficult to know the shortest route. This is a Best-Path problem, whereas
Water Jug is an Any-Path problem. An Any-Path problem is generally solved in a reasonable amount of time by using
heuristics that suggest good paths to explore. Best-Path problems are computationally harder than
Any-Path problems.
EXHAUSTIVE SEARCHES (OR) UNINFORMED SEARCHES
• Breadth-First Search
• Depth-First Search
• Depth-First Iterative Deepening
• Bidirectional Search
1. Breadth-First Search:
Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state.
2. Loop until the goal state is found or NODE-LIST is empty.
a. Remove the first element, say E, from the NODE-LIST. If NODE-LIST was empty then quit.
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state.
ii. If the new state is the goal state, quit and return this state.
iii. Otherwise add this state to the end of NODE-LIST
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, then BFS will provide the minimal solution
which requires the least number of steps.
Disadvantages:
• BFS requires lots of memory since each level of the tree must be saved into memory to expand the next
level.
• BFS needs lots of time if the solution is far away from the root node.
Example 1: S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K.
Example 2: a---> b---> c---> d---> e---> f---> g---> h---> i---> j---> k.
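The NODE-LIST procedure above can be sketched in Python. The adjacency list below is a hypothetical graph, chosen so that the traversal reproduces the level order of Example 1; the names are illustrative.

```python
from collections import deque

# Hypothetical state space consistent with the visit order of Example 1.
GRAPH = {
    'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['G', 'H'],
    'C': ['E', 'F'], 'D': [], 'G': ['I'], 'H': [],
    'E': [], 'F': [], 'I': ['K'], 'K': [],
}

def bfs(start, goal, graph):
    """Breadth-first search: NODE-LIST is a FIFO queue; returns the visit order."""
    node_list = deque([start])          # step 1: NODE-LIST holds the initial state
    visited = []
    while node_list:                    # step 2: loop until goal found or NODE-LIST empty
        e = node_list.popleft()         # step 2a: remove the first element E
        visited.append(e)
        if e == goal:
            return visited
        for succ in graph.get(e, []):   # step 2b: apply every rule that matches E
            if succ not in visited and succ not in node_list:
                node_list.append(succ)  # step 2b(iii): add to the end of NODE-LIST
    return visited

print(bfs('S', 'K', GRAPH))  # visits S, A, B, C, D, G, H, E, F, I, K
```

Note the small deviation from the algorithm text: this sketch tests for the goal when a node is removed from NODE-LIST rather than when it is generated; the visit order is the same.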
Example 4: 8-Puzzle Problem
Example 5:
2. Depth-First Search:
Algorithm:
1. If the initial state is a goal state, quit and return success.
2. Otherwise, loop until success or failure is signaled.
a) Generate a state, say E, and let it be a successor of the initial state. If there are no more successors,
signal failure.
b) Call Depth-First Search with E as the initial state.
c) If success is returned, signal success. Otherwise continue in this loop.
Advantages:
• DFS requires much less memory, as it only needs to store the stack of nodes on the path from the root node
to the current node.
• It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).
Disadvantages:
• There is a possibility that many states keep re-occurring, and there is no guarantee of finding the
solution.
• The DFS algorithm goes deep down in its search and may sometimes get stuck in an infinite loop.
Example 1:
Note: It will start searching from root node S and traverse A, then B, then D and E. After traversing E, it will
backtrack the tree, as E has no other successor and the goal node has not yet been found. After backtracking, it will
traverse node C and then G, where it will terminate, as it has found the goal node.
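The note above can be sketched as a recursive DFS in Python; the tree below is a hypothetical reconstruction of the one described in the note.

```python
def dfs(node, goal, graph, visited=None):
    """Recursive depth-first search; returns the order in which nodes are visited."""
    if visited is None:
        visited = []
    visited.append(node)
    if node == goal:
        return visited
    for succ in graph.get(node, []):       # go deep into the first unexplored successor
        if succ not in visited:
            result = dfs(succ, goal, graph, visited)
            if result and result[-1] == goal:
                return result              # goal found somewhere below: stop
    return visited                          # dead end: backtrack

# Hypothetical tree matching the note: S's children A and C, A's child B, etc.
TREE = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G']}
print(dfs('S', 'G', TREE))  # visits S, A, B, D, E, C, G
```

Backtracking happens implicitly when a recursive call returns without reaching the goal, exactly as described in the note.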
Example 2:
Example 5: 8- Puzzle Problem
3. Depth-First Iterative Deepening:
Example:
Iteration 1: A
Iteration 2: A, B, C
Iteration 3: A, B, D, E, C, F, G
Iteration 4: A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
Advantages:
• It combines the benefits of BFS and DFS search algorithm in terms of fast search and memory efficiency.
Disadvantages:
• Repeats all the work of the previous phase.
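The iterations above can be sketched in Python as a depth-limited DFS run with growing limits; the tree below is a hypothetical one consistent with the listed iterations (goal K at depth 3).

```python
def dls(node, goal, graph, limit, order):
    """Depth-limited DFS: explore only down to the given depth limit."""
    order.append(node)
    if node == goal:
        return True
    if limit == 0:
        return False
    return any(dls(s, goal, graph, limit - 1, order) for s in graph.get(node, []))

def iterative_deepening(start, goal, graph, max_depth=10):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        order = []                       # each iteration repeats earlier work from scratch
        if dls(start, goal, graph, limit, order):
            return limit, order
    return None

# Hypothetical tree matching the iterations shown above.
TREE = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'F': ['K']}
depth, order = iterative_deepening('A', 'K', TREE)
print(depth, order)  # goal found at depth 3, in the fourth iteration
```

As the disadvantage notes, each iteration regenerates every node of the previous one before going one level deeper.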
4. Bidirectional Search:
Bidirectional search is a graph search algorithm that runs 2 simultaneous searches. One search moves
forward from the start state & the other moves backward from the goal state, and the process stops when the two meet in the
middle. It is useful for those problems which have a single start state & a single goal state.
Advantages:
• Bidirectional search is fast.
• Bidirectional search requires less memory
Disadvantages:
• Implementation of the bidirectional search tree is difficult.
• In bidirectional search, one should know the goal state in advance.
Example:
If a match is found, then the path can be traced from the start state to the matched state & from the matched state to the goal
state. It should be noted that each node has a link to its successors as well as to its parent. These links help in
generating the complete path from the start to the goal state.
The trace of finding the path from node 1 to node 16 using Bidirectional Search is given below. The path
obtained is 1, 2, 6, 11, 14, 16.
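A minimal Python sketch of the idea. The original graph on nodes 1 to 16 is not reproduced here, so the edge list below is hypothetical, chosen only so that the shortest route from 1 to 16 is 1, 2, 6, 11, 14, 16 as in the trace.

```python
from collections import deque

# Hypothetical undirected graph on some of the nodes 1..16.
EDGES = [(1, 2), (1, 3), (2, 6), (2, 5), (3, 7), (6, 11), (6, 10),
         (7, 12), (11, 14), (11, 13), (12, 15), (14, 16), (15, 16)]
GRAPH = {}
for a, b in EDGES:
    GRAPH.setdefault(a, []).append(b)
    GRAPH.setdefault(b, []).append(a)

def bidirectional_search(start, goal, graph):
    """One BFS grows from the start, another from the goal; they stop at a common node."""
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])
    while frontier_f and frontier_b:
        # alternate: expand one node of each frontier per round
        for frontier, parents, other in ((frontier_f, parents_f, parents_b),
                                         (frontier_b, parents_b, parents_f)):
            if not frontier:
                continue
            node = frontier.popleft()
            for succ in graph[node]:
                if succ not in parents:
                    parents[succ] = node
                    frontier.append(succ)
                if succ in other:                    # the two searches have met
                    path, n = [], succ
                    while n is not None:             # trace back to the start
                        path.append(n)
                        n = parents_f[n]
                    path.reverse()
                    n = parents_b[succ]
                    while n is not None:             # trace forward to the goal
                        path.append(n)
                        n = parents_b[n]
                    return path
    return None

print(bidirectional_search(1, 16, GRAPH))  # [1, 2, 6, 11, 14, 16]
```

The parent links kept by both searches are exactly the successor/parent links mentioned above: joining the two halves at the meeting node yields the complete path.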
5. Analysis of Search Methods:
Travelling Salesman Problem (TSP):
Statement: In the travelling salesman problem (TSP), one is required to find the shortest route that visits all the
cities once & returns to the starting point. Assume that there are 'n' cities & the distance between each pair of
cities is given.
The problem seems to be simple, but it is deceptive. All the possible paths of the search tree are explored &
the shortest path is returned. This requires (n-1)! paths to be examined for 'n' cities.
A better strategy is the following Branch & Bound approach:
• Start generating complete paths, keeping track of the shortest path found so far.
• Stop exploring any path as soon as its partial length becomes greater than the shortest path length found
so far.
In this case (n = 5), there will be 4! = 24 possible paths. In the performance comparison below, we can notice that
out of the 13 paths shown, 5 paths are only partially evaluated.
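The two steps above can be sketched in Python. The 5-city distance matrix is hypothetical (the original table is not reproduced here), and the function name is illustrative.

```python
# Hypothetical symmetric distance matrix for 5 cities.
DIST = [[0, 7, 6, 8, 4],
        [7, 0, 8, 5, 6],
        [6, 8, 0, 9, 7],
        [8, 5, 9, 0, 8],
        [4, 6, 7, 8, 0]]

def tsp_branch_and_bound(dist):
    """Generate complete tours from city 0, pruning any partial tour whose
    length already exceeds the best complete tour found so far."""
    n = len(dist)
    best = {'cost': float('inf'), 'tour': None}

    def extend(tour, cost):
        if cost >= best['cost']:          # bound: abandon this partial path
            return
        if len(tour) == n:                # complete tour: close the loop
            total = cost + dist[tour[-1]][0]
            if total < best['cost']:
                best['cost'], best['tour'] = total, tour + [0]
            return
        for city in range(1, n):          # branch on each unvisited city
            if city not in tour:
                extend(tour + [city], cost + dist[tour[-1]][city])

    extend([0], 0)
    return best['cost'], best['tour']

cost, tour = tsp_branch_and_bound(DIST)
print(cost, tour)  # shortest tour length and the route, starting and ending at city 0
```

The pruning test at the top of `extend` is what cuts off the partially evaluated paths; without it, all (n-1)! complete tours would be generated.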
Table: Performance Comparison
HEURISTIC SEARCH TECHNIQUES
Heuristic: A heuristic is a technique that helps improve the efficiency of the search process. Some heuristic search techniques are:
• Generate & Test
• Branch & Bound Search (Uniform Cost Search)
• Hill Climbing
• Beam Search
• Best-First Search (A* Algorithm)
Generate & Test:
Algorithm:
Start
• Generate a possible solution
• Test if it is a goal
• If not go to start else quit
End
Advantage:
• Guarantees finding a solution if a solution really exists.
Disadvantage:
• Not suitable for larger problems
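The Start/Generate/Test loop above can be sketched in Python; the candidate problem below is a toy example of my own, and the function name is illustrative.

```python
import itertools

def generate_and_test(candidates, is_goal):
    """Generate a possible solution, test it, and repeat until a goal is found."""
    for candidate in candidates:      # Start: generate a possible solution
        if is_goal(candidate):        # Test if it is a goal
            return candidate          # goal found: quit
    return None                       # every candidate exhausted

# Toy use: find the first 3-digit tuple whose digits are strictly
# increasing and sum to 24, generated in lexicographic order.
combos = itertools.product(range(10), repeat=3)
print(generate_and_test(combos, lambda c: sum(c) == 24 and c[0] < c[1] < c[2]))  # (7, 8, 9)
```

The disadvantage is visible here: the generator blindly enumerates the whole space, which is why the method does not scale to larger problems.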
Branch & Bound Search (Uniform Cost Search):
Advantage:
• Uniform cost search is optimal because at every state the path with the least cost is chosen.
Disadvantage:
• It does not care about the number of steps involved in the search and is only concerned with path cost. Due
to this, the algorithm may get stuck in an infinite loop.
Hill Climbing:
• Simple Hill Climbing
• Steepest-Ascent Hill Climbing (Gradient Search)
Simple Hill Climbing:
Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates only one
neighbouring node state at a time and selects the first one that improves on the current cost, making it the current state. It
checks only one successor state at a time and, if that successor is better than the current state, moves to it; else it stays in the same
state.
Algorithm:
Step 1: Evaluate the initial state, if it is goal state then return success and Stop.
Step 2: Loop Until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check new state:
• If it is goal state, then return success and quit.
• Else if it is better than the current state, then assign the new state as the current state.
• Else if it is not better than the current state, then go back to Step 2.
Step 5: Exit.
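The steps above can be sketched in Python; the objective function and move operators below are a toy example of my own.

```python
def simple_hill_climbing(state, neighbors, value, max_steps=1000):
    """Move to the FIRST neighbour that improves on the current state;
    stop when no operator yields an improvement (Step 2's exit condition)."""
    for _ in range(max_steps):
        for nxt in neighbors(state):        # Step 3: apply an operator
            if value(nxt) > value(state):   # Step 4: first better successor wins
                state = nxt
                break
        else:
            return state                    # no better neighbour: goal or local maximum
    return state

# Toy objective: climb integer x toward the peak of f(x) = -(x - 7)**2.
f = lambda x: -(x - 7) ** 2
moves = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(0, moves, f))  # reaches x = 7
```

Note that the loop takes the first improving move it sees, not the best one; that is the key difference from the steepest-ascent variant below.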
22
Steepest-Ascent Hill Climbing (Gradient Search):
The Steepest-Ascent algorithm is a variation of the simple hill climbing algorithm. This algorithm examines
all the neighbouring nodes of the current state and selects the neighbour node which is closest to the goal state.
This algorithm consumes more time, as it searches for multiple neighbours.
Algorithm:
Step 1: Evaluate the initial state, if it is goal state then return success and stop, else make current state as initial
state.
Step 2: Loop until a solution is found or the current state does not change.
a) Let SUCC be a state such that any successor of the current state will be better than it.
b) For each operator that applies to the current state:
• Apply the new operator and generate a new state.
• Evaluate the new state.
• If it is goal state, then return it and quit, else compare it to the SUCC.
• If it is better than SUCC, then set new state as SUCC.
• If the SUCC is better than the current state, then set current state to SUCC.
Step 3: Exit.
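The SUCC bookkeeping above can be sketched in Python; the same toy objective as before is used, with more operators so that the single best move matters.

```python
def steepest_ascent(state, neighbors, value):
    """Examine ALL neighbours and move to the best one (SUCC);
    stop when no neighbour beats the current state."""
    while True:
        succ = max(neighbors(state), key=value, default=state)  # best successor
        if value(succ) <= value(state):
            return state                # current state is at least a local maximum
        state = succ

# Toy objective: climb integer x toward the peak of f(x) = -(x - 7)**2.
f = lambda x: -(x - 7) ** 2
moves = lambda x: [x - 2, x - 1, x + 1, x + 2]
print(steepest_ascent(0, moves, f))  # reaches x = 7
```

Each step evaluates every operator before moving, which is why this variant consumes more time per step than simple hill climbing.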
1. Local Maximum: It is a state that is better than all its neighbours but not better than some other states which
are farther away. From this state, all moves look worse. In such a situation, backtrack to some earlier state &
try going in a different direction to find a solution.
2. Plateau: It is a flat area of the search space where all neighbouring states have the same value. It is not
possible to determine the best direction. In such a situation, make a big jump in some direction & try to get to a
new section of the search space.
23
3. Ridge: It is an area of the search space that is higher than the surrounding areas but that cannot be traversed by
single moves in any one direction. It is a special kind of Local Maximum.
Beam Search:
Beam Search is a heuristic search algorithm in which the W best nodes at each level are always
expanded. It progresses level by level & moves downward only from the best W nodes at each level. Beam
Search uses BFS to build its search tree. At each level of the tree, it generates all successors of the states at the
current level and sorts them in increasing order of their heuristic values. However, it only considers W states
at each level; all other nodes are ignored.
Here, W - Width of the Beam Search
B - Branching Factor
There will only be W * B nodes under consideration at any depth, of which only W nodes will be selected.
Algorithm:
1. Node = Root_Node; Found = false
2. If Node is the goal node, then Found = true; else find the SUCCs of Node, if any, with their estimated costs and store
them in OPEN list;
3. While (Found = false and able to proceed further) do
{
• Sort OPEN list;
• Select the top W elements from OPEN list, put them in W_OPEN list and empty OPEN list;
• For each NODE from W_OPEN list
{
• If NODE = Goal state then FOUND = true; else find the SUCCs of NODE, if any, with their estimated
costs and store them in OPEN list;
}
}
4. If FOUND = true then return Yes, otherwise return No;
5. Stop
Example:
Below is the search tree generated using the Beam Search Algorithm. Assume W = 2 & B = 3. Here, the black
nodes are selected, based on their heuristic values, for further expansion.
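The level-by-level pruning can be sketched in Python. The tree and heuristic values below are hypothetical (the original figure is not reproduced here), with branching factor B = 3 as assumed above.

```python
def beam_search(start, goal, graph, h, W=2):
    """Keep only the W best nodes (by heuristic value h) at each level."""
    level = [start]
    while level:
        if goal in level:
            return True
        # generate all successors of the current level, then keep the best W
        successors = [s for node in level for s in graph.get(node, [])]
        successors.sort(key=lambda n: h[n])     # lower h = more promising
        level = successors[:W]                  # all other nodes are ignored
    return False

# Hypothetical tree (B = 3) with assumed heuristic values.
GRAPH = {'A': ['B', 'C', 'D'], 'B': ['E', 'F', 'G'], 'C': ['H', 'I', 'J'],
         'E': ['K'], 'H': []}
H = {'A': 6, 'B': 2, 'C': 3, 'D': 5, 'E': 1, 'F': 4, 'G': 5,
     'H': 2, 'I': 4, 'J': 6, 'K': 0}
print(beam_search('A', 'K', GRAPH, H, W=2))  # True: K survives inside the beam
```

Note that beam search is not complete: with W = 2 the node J is pruned at the second level, so searching for J on this graph fails even though J is reachable.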
Best-First Search:
It is a way of combining the advantages of both Depth-First and Breadth-First Search into a single
method. At each step of the best-first search process, we select the most promising of the nodes we have
generated so far. This is done by applying an appropriate heuristic function to each of them.
In some cases, we have many options available to solve a problem, but only one of them needs to be solved. In AI, this can
be represented as an OR graph: among all the available sub-problems, solving any one of them is sufficient. Hence the
name OR graph.
To implement such a graph-search procedure, we will need to use two lists of nodes.
OPEN: This list contains all the nodes that have been generated and have had the heuristic function applied to
them, but which have not yet been examined. OPEN is actually a priority queue in which the elements with the
highest priority are those with the most promising value of the heuristic function.
CLOSED: This list contains all the nodes that have already been examined. We need to keep these nodes in
memory if we want to search a graph rather than a tree.
25
Algorithm:
1. Start with OPEN containing just the initial state
2. Until a goal is found or there are no nodes left on OPEN do:
a) Pick the best node on OPEN
b) Generate its successors
c) For each successor do:
i. If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
ii. If it has been generated before, change the parent if this new path is better than the
previous one. In that case, update the cost of getting to this node and to any successors
that this node may already have.
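The OPEN/CLOSED bookkeeping can be sketched in Python with a priority queue. The graph and heuristic values below follow the worked example (B, C, D with costs 3, 5, 1, and so on); choosing J as the goal node is my own assumption for illustration.

```python
import heapq

def best_first_search(start, goal, graph, h):
    """Always expand the node on OPEN with the most promising heuristic value."""
    open_list = [(h[start], start)]         # OPEN: priority queue ordered by h
    closed = set()                          # CLOSED: already-examined nodes
    order = []
    while open_list:
        _, node = heapq.heappop(open_list)  # step 2a: pick the best node on OPEN
        if node in closed:
            continue
        closed.add(node)
        order.append(node)
        if node == goal:
            return order
        for succ in graph.get(node, []):    # step 2b: generate its successors
            if succ not in closed:
                heapq.heappush(open_list, (h[succ], succ))
    return None

# Graph and heuristic costs from the worked example below; goal J is assumed.
GRAPH = {'A': ['B', 'C', 'D'], 'D': ['E', 'F'], 'B': ['G', 'H'], 'E': ['I', 'J']}
H = {'A': 0, 'B': 3, 'C': 5, 'D': 1, 'E': 4, 'F': 6, 'G': 6, 'H': 5, 'I': 2, 'J': 1}
print(best_first_search('A', 'J', GRAPH, H))  # expands A, D, B, E, then reaches J
```

The expansion order A, D, B, E matches the worked example: D (cost 1) beats B and C, then B (3) beats C and E, and so on.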
Example:
Step 1: At this level we have only one node, i.e., the initial node A.
Step 2: Now we generate the successors of A; three new nodes are generated, namely B, C, and D, with costs of
3, 5 and 1 respectively. These nodes are added to the OPEN list, and A can be shifted to the CLOSED list since it
has been processed.
Among these three nodes, D has the least cost and hence is selected for expansion. So this node is shifted to the
CLOSED list.
Step 3: At this stage the node D is expanded generating the new nodes E and F with the costs 4 and 6
respectively. The newly generated nodes will be added to the OPEN list. And node D will be added to CLOSED
list.
Step 4: At this stage node B is expanded generating the new nodes G & H with costs 6 and 5 respectively. The
newly generated nodes will be added to the OPEN list. And node B will be added to CLOSED list.
Step 5: At this stage, node E is expanded, generating the new nodes I & J with costs 2 and 1 respectively. The newly
generated nodes will be added to the OPEN list. And node E will be added to the CLOSED list.
A* Algorithm:
A* is a Best-First Search algorithm that finds the least-cost path from the initial node to one goal node (out of one or
more possible goals).
f'(x) = g(x) + h'(x)
Where, f' = estimate of the cost from the initial state to a goal state, along the path that generated the current node;
g = measure of the cost of getting from the initial state to the current node;
h' = estimate of the additional cost of getting from the current node to a goal state.
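A minimal Python sketch of A* ordering the frontier by g(x) + h'(x); the weighted graph and (admissible) heuristic values below are hypothetical.

```python
import heapq

def a_star(start, goal, graph, h):
    """A*: always expand the node with the lowest f = g + h."""
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path                         # with admissible h, this is optimal
        for succ, cost in graph.get(node, []):
            g2 = g + cost                          # cost from start to the successor
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2
                heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None

# Hypothetical weighted graph with an underestimating (admissible) heuristic.
GRAPH = {'A': [('B', 1), ('C', 4)], 'B': [('D', 3), ('E', 1)],
         'C': [('G', 2)], 'E': [('G', 2)]}
H = {'A': 3, 'B': 2, 'C': 2, 'D': 5, 'E': 1, 'G': 0}
print(a_star('A', 'G', GRAPH, H))  # least-cost path A, B, E, G with cost 4
```

The `best_g` map plays the role of step 2c(ii) of the Best-First algorithm: a node is re-queued only when a cheaper path to it is found.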
Example 1:
Optimal Solution by A* Algorithm:
• Underestimation
• Overestimation
Underestimation: In the figure below, the start node A is expanded to B, C & D with f values of 4, 5 & 6 respectively.
Here, we are assuming that the cost of all arcs is 1 for the sake of simplicity. Note that node B has the minimum f
value, so we expand this node to E, which has an f value of 5. Since the f value of C is also 5, we resolve in favour of E,
the path we are currently expanding. Now node E is expanded to node F with an f value of 6. Clearly, expansion of
node F is stopped, as the f value of C (5) is now the smallest. Thus, we see that by underestimating the heuristic value, we
have wasted some effort, eventually discovering that B was farther away than we thought. Now we go back &
try another path & will find the optimal path.
Overestimation: Here we overestimate the heuristic value of each node in the graph/tree. We expand B to E, E
to F & F to G for a solution path of length 4. But assume that there is a direct path from D to a solution, giving a
path of length 2; since the h value of D is also overestimated, D appears unpromising and we will never expand it. We
may thus find some other, worse solution without ever expanding D. So, by overestimating h, we cannot be
guaranteed to find the shortest path.
Admissibility of A*:
A search algorithm is admissible if, for any graph, it always terminates with an optimal path from the start state to a
goal state, if such a path exists. We have seen earlier that if the heuristic function 'h' underestimates the actual cost from the
current state to the goal state, then it is bound to give an optimal solution & hence is called an admissible function. So,
we can say that A* always terminates with the optimal path in case h is an admissible heuristic function.
ITERATIVE-DEEPENING A* (IDA*)
IDA* is a combination of the DFID & A* algorithms. Here the successive iterations correspond to
increasing values of the total cost of a path, rather than increasing depth of the search. The algorithm works as
follows:
• For each iteration, perform a DFS, pruning off a branch when its total cost (g + h) exceeds a given
threshold.
• The initial threshold starts at the estimated cost of the start state & increases for each iteration of the
algorithm.
• The threshold used for the next iteration is the minimum of all the costs that exceeded the current
threshold.
• These steps are repeated till we find a goal state.
Let us consider an example to illustrate the working of the IDA* algorithm, as shown below. Initially, the
threshold value is the estimated cost of the start node. In the first iteration, threshold = 5. Now we generate all the
successors of the start node & compute their estimated values as 6, 8, 4, 8 & 9. The successors having values
greater than 5 are pruned. For the next iteration, we take the threshold to be the minimum of the
pruned nodes' values, that is, threshold = 6, & the node with value 6, along with the node with value 4, is retained for
further expansion.
Advantages:
• It is simpler to implement than A*, as it does not use OPEN & CLOSED lists
• It finds a solution of least cost (an optimal solution)
• It uses less space than A*
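The threshold loop above can be sketched in Python; the weighted graph and heuristic estimates below are hypothetical, chosen so that the goal is found on the third iteration.

```python
def ida_star(start, goal, graph, h):
    """Iterative-deepening A*: DFS bounded by a growing f = g + h threshold."""
    threshold = h[start]                 # initial threshold: estimated cost of the start
    while True:
        next_threshold = float('inf')

        def dfs(node, g, path):
            nonlocal next_threshold
            f = g + h[node]
            if f > threshold:            # prune this branch, remember its cost
                next_threshold = min(next_threshold, f)
                return None
            if node == goal:
                return path
            for succ, cost in graph.get(node, []):
                if succ not in path:     # avoid cycles on the current path
                    found = dfs(succ, g + cost, path + [succ])
                    if found:
                        return found
            return None

        result = dfs(start, 0, [start])
        if result:
            return result
        if next_threshold == float('inf'):
            return None                  # no goal reachable at any cost
        threshold = next_threshold       # minimum cost that exceeded the old threshold

# Hypothetical weighted graph with heuristic estimates.
GRAPH = {'A': [('B', 2), ('C', 1)], 'B': [('D', 3)], 'C': [('D', 2), ('E', 4)],
         'D': [('G', 2)], 'E': [('G', 1)]}
H = {'A': 3, 'B': 4, 'C': 3, 'D': 2, 'E': 1, 'G': 0}
print(ida_star('A', 'G', GRAPH, H))  # least-cost path A, C, D, G
```

Only the current DFS path is stored, which is why IDA* uses less space than A* while, with an admissible h, still returning a least-cost path.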
Fig: Working of IDA*
CONSTRAINT SATISFACTION
Many problems in AI can be viewed as problems of constraint satisfaction, in which the goal is to
discover some problem state that satisfies a given set of constraints, instead of finding an optimal path to the
solution. Such problems are called Constraint Satisfaction (CS) problems. Constraint satisfaction is a two-step
process.
• First, constraints are discovered and propagated as far as possible throughout the system. Then, if there
is still not a solution then the search begins. A guess about something is made and added as a new
constraint
• Propagation then occurs with this new constraint.
Algorithm:
1. Propagate available constraints. To do this, first set OPEN to the set of all objects that must have values
assigned to them in a complete solution. Then do until an inconsistency is detected or until OPEN is empty:
a) Select an object OB from OPEN. Strengthen as much as possible the set of constraints that apply to OB.
b) If this set is different from the set that was assigned the last time OB was examined or if this is the first
time OB has been examined, then add to OPEN all objects that share any constraints with OB
c) Remove OB from OPEN.
2. If the union of the constraints discovered above defines a solution, then quit and report the solution.
3. If the union of the constraints discovered above defines a contradiction, then return failure.
4. If neither of the above occurs, then it is necessary to make a guess at something in order to proceed. To do this, loop
until a solution is found or all possible solutions have been eliminated:
a) Select an object whose value is not yet determined and select a way of strengthening the constraints on
that object.
b) Recursively invoke constraint satisfaction with the current set of constraints, augmented by the
strengthening constraint just selected.
Example: SEND + MORE = MONEY (assign a distinct digit to each letter so that the addition holds).
Let us take M = 1, because by adding any 2 single-digit numbers (even with a carry), we get a carry of at most 1.
Now S must be 8 or 9, since the S + M column has to produce a carry: S + M = 10 + O or S + M + C3 = 10 + O.
If S = 9, S + M = 9 + 1 = 10 (with no carry C3).
If S = 8, S + M + C3 = 8 + 1 + 1 = 10 (with carry C3 = 1). In either case we get O = 0, and taking S = 9:
Therefore, M = 1, S = 9 & O = 0.
So, here E + O = N, i.e., E + 0 = N. Then E = N, which is not possible because no 2 letters may have the same value.
Hence there must be a carry C2 from the previous column:
E + O + C2 = N
E + 0 + 1 = N, which gives E + 1 = N
Estimate the E value from the remaining possible digits, i.e., 2, 3, 4, 5, 6, 7, 8. From our estimation, the E &
N values are satisfied at E = 5. So, E + 1 = 5 + 1 = 6, i.e., N = 6 (E + 1 = N).
Therefore, M = 1, S = 9, O = 0, E = 5 & N = 6.
So, here N + R + C1 = E + 10. We already know that E = 5 and N = 6, so the column is satisfied by taking R = 8:
6 + 8 + 1 = 15.
Therefore, M = 1, S = 9, O = 0, E = 5, N = 6 & R = 8
Here, D + E = Y + 10, since this column has to produce the carry C1 = 1; this gives D = 7.
Then, 7 + 5 = 12. So, Y = 2.
Final Values are M = 1, S = 9, O = 0, E = 5, N = 6, R = 8, D = 7 & Y = 2.
By substituting all the above values, we get 9567 + 1085 = 10652.
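The deduction above can be cross-checked with a brute-force generate-and-test sketch in Python that tries every distinct-digit assignment (the function name is illustrative):

```python
from itertools import permutations

def solve_send_more_money():
    """Try digit assignments for S, E, N, D, M, O, R, Y (all distinct, no leading
    zeros) until SEND + MORE = MONEY holds."""
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:              # leading digits cannot be zero
            continue
        send = 1000*s + 100*e + 10*n + d
        more = 1000*m + 100*o + 10*r + e
        money = 10000*m + 1000*o + 100*n + 10*e + y
        if send + more == money:
            return {'S': s, 'E': e, 'N': n, 'D': d, 'M': m, 'O': o, 'R': r, 'Y': y}

print(solve_send_more_money())  # the unique solution: the values derived above
```

Unlike the constraint-propagation deduction, this sketch blindly enumerates up to 10P8 assignments; propagation is what makes constraint satisfaction practical for larger puzzles.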