Unit-2
Heuristic Search
Presented By:
P.SINDHU
ASSISTANT
PROFESSOR
Contents
• Introduction to search
• Uninformed search
– Breadth first search
– Depth first search
– Depth-limited search
– Iterative deepening DFS
– Bidirectional Search
– Uniform Cost Search
Contents
• Informed Search
– Best First Search
– A* Search
– Generate and Test
– Hill Climbing
– Problem Reduction
– Constraint Satisfaction Problem
– Means-Ends Analysis
Introduction to search
• Search Algorithm Terminologies:
• Search: Searching is a step-by-step procedure for solving a search problem in a given search space. A search problem can have three main factors:
– Search Space: The search space represents the set of possible solutions which a system may have.
– Start State: The state from which the agent begins the search.
– Goal test: A function which observes the current state and returns whether the goal state has been achieved.
Contd…
• Search tree: A tree representation of the search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
• Actions: A description of all the actions available to the agent.
• Transition model: A description of what each action does, which can be represented as a transition model.
• Path Cost: A function which assigns a numeric cost to each path.
• Solution: An action sequence which leads from the start node to the goal node.
• Optimal Solution: A solution which has the lowest cost among all solutions.
Contd…
Properties of Search Algorithms:
• Following are the four essential properties of search algorithms, used to compare their efficiency:
• Completeness: A search algorithm is said to be complete if it is guaranteed to return a solution whenever at least one solution exists for the given input.
• Optimality: If the solution found by an algorithm is guaranteed to be the best (lowest path cost) among all solutions, it is said to be an optimal solution.
• Time Complexity: Time complexity is a measure of the time an algorithm takes to complete its task.
• Space Complexity: Space complexity is the maximum storage space required at any point during the search, expressed in terms of the complexity of the problem.
Types of search algorithms
• Based on the search problem, we can classify search algorithms into uninformed search (blind search) and informed search (heuristic search) algorithms.
Uninformed/Blind Search
• The uninformed search does not use any domain knowledge, such as closeness or the location of the goal. It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes. Uninformed search explores the search tree without any information about the search space beyond the initial state, the operators, and the goal test, so it is also called blind search. It examines the nodes of the tree until it reaches the goal node.
• It can be divided into five main types:
• Breadth-first search
• Uniform cost search
• Depth-first search
• Iterative deepening depth-first search
• Bidirectional Search
Informed Search
• Informed search algorithms use domain knowledge. In an informed search, problem information is available which can guide the search. Informed search strategies can find a solution more efficiently than an uninformed search strategy. Informed search is also called heuristic search.
• A heuristic is a technique which is not guaranteed to find the best solution, but is designed to find a good solution in reasonable time.
• Informed search can solve complex problems which could not be solved in any other way.
Breadth-first Search:
• Breadth-first search is the most common search strategy for traversing a tree or graph.
This algorithm searches breadthwise in a tree or graph, so it is called breadth-first
search.
• The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
• The breadth-first search algorithm is an example of a general-graph search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.
• Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one which requires the least number of steps.
• Disadvantages:
• It requires lots of memory since each level of the tree must be saved into memory to
expand the next level.
• BFS needs lots of time if the solution is far away from the root node.
Example:
• In the tree structure below, we show the traversal of the tree using the BFS algorithm from root node S to goal node K. BFS traverses in layers, so it follows the path shown by the dotted arrow, and the traversed path will be:
• S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
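Since the slide's figure is not reproduced here, the level-by-level expansion can be sketched as follows; the tree below is an assumed stand-in, not the slide's actual graph.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand all nodes at one level before the next."""
    frontier = deque([[start]])      # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])
    return None

# hypothetical tree standing in for the slide's figure
tree = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['G', 'H'],
        'D': ['E', 'F'], 'H': ['I'], 'F': ['K']}
print(bfs(tree, 'S', 'K'))   # ['S', 'A', 'D', 'F', 'K']
```

Because the frontier is a FIFO queue, the first path that reaches the goal is the shallowest one.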
Depth-first Search
• Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
• It is called the depth-first search because it starts from the root node and follows each path
to its greatest depth node before moving to the next path.
• DFS uses a stack data structure for its implementation.
• The process of the DFS algorithm is similar to the BFS algorithm.
• Note: Backtracking is an algorithm technique for finding all possible solutions using recursion.
Advantage:
• DFS requires less memory, as it only needs to store the stack of nodes on the path from the root node to the current node.
• It takes less time to reach the goal node than the BFS algorithm (if it traverses along the right path).
Disadvantage:
• There is a possibility that many states keep re-occurring, and there is no guarantee of finding a solution.
• The DFS algorithm goes deep into the search tree and may sometimes descend into an infinite loop.
Example:
• In the search tree below, we show the flow of depth-first search; it will follow the order:
• Root node ---> left node ----> right node.
• It will start searching from root node S and traverse A, then B, then D and E. After traversing E it will backtrack, as E has no other successor and the goal node has not yet been found. After backtracking it will traverse node C and then G, where it terminates because it has found the goal node.
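The traversal above can be sketched with an explicit stack; the graph below is an assumed reconstruction chosen so the visit order matches S, A, B, D, E, C, G.

```python
def dfs(graph, start, goal):
    """Depth-first search with an explicit stack; returns the visit order up to the goal."""
    stack, order, visited = [start], [], set()
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        if node == goal:
            return order
        # push children in reverse so the left-most child is expanded first
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                stack.append(child)
    return order

graph = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G']}
print(dfs(graph, 'S', 'G'))   # ['S', 'A', 'B', 'D', 'E', 'C', 'G']
```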
Depth-Limited Search Algorithm
• A depth-limited search algorithm is similar to depth-first search with a predetermined limit. Depth-limited search can overcome the drawback of infinite paths in depth-first search. In this algorithm, a node at the depth limit is treated as if it has no further successor nodes.
• Depth-limited search can terminate with two conditions of failure:
• Standard failure value: it indicates that the problem does not have any solution.
• Cutoff failure value: it indicates that there is no solution for the problem within the given depth limit.
Advantages:
• Depth-limited search is Memory efficient.
Disadvantages:
• Depth-limited search also has the disadvantage of incompleteness: a solution deeper than the limit will be missed.
• It may not be optimal if the problem has more than one solution.
• Example:
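In place of the slide's example figure, here is a minimal sketch of depth-limited search; the graph and depth limit are assumed for illustration, and the two failure values above appear as `None` (standard failure) and the string `'cutoff'`.

```python
def dls(graph, node, goal, limit):
    """Depth-limited DFS: returns a path to the goal, 'cutoff', or None (failure)."""
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'          # depth limit reached: node treated as a leaf
    cutoff = False
    for child in graph.get(node, []):
        result = dls(graph, child, goal, limit - 1)
        if result == 'cutoff':
            cutoff = True
        elif result is not None:
            return [node] + result
    return 'cutoff' if cutoff else None

graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E']}
print(dls(graph, 'S', 'D', 2))   # ['S', 'A', 'D']
print(dls(graph, 'S', 'D', 1))   # 'cutoff' — the goal lies beyond the depth limit
```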
Uniform-cost Search Algorithm
• Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost. Uniform-cost search expands nodes according to their path costs from the root node. It can be used to solve any graph/tree where an optimal-cost path is required. A uniform-cost search algorithm is implemented with a priority queue, which gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of every edge is the same.
Advantages:
• Uniform cost search is optimal because at every state the path with the least cost is
chosen.
Disadvantages:
• It does not care about the number of steps involved in the search and is only concerned with path cost, due to which this algorithm may get stuck in an infinite loop.
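A sketch of uniform-cost search with a priority queue; the weighted graph below is assumed for illustration.

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the frontier node with the lowest path cost."""
    frontier = [(0, start, [start])]     # min-heap of (cumulative cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for child, step in graph.get(node, {}).items():
            if child not in explored:
                heapq.heappush(frontier, (cost + step, child, path + [child]))
    return None

graph = {'S': {'A': 1, 'B': 5}, 'A': {'B': 2, 'G': 9}, 'B': {'G': 2}}
print(ucs(graph, 'S', 'G'))   # (5, ['S', 'A', 'B', 'G'])
```

Note that the direct-looking route S→A→G (cost 10) is skipped in favour of the cheaper S→A→B→G, which is exactly the point of expanding by cumulative cost.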
Iterative deepening depth-first Search
• The iterative deepening algorithm is a combination of DFS and BFS algorithms.
This search algorithm finds out the best depth limit and does it by gradually
increasing the limit until a goal is found.
• This algorithm performs depth-first search up to a certain "depth limit", and it
keeps increasing the depth limit after each iteration until the goal node is found.
• This Search algorithm combines the benefits of Breadth-first search's fast search
and depth-first search's memory efficiency.
• The iterative deepening algorithm is a useful uninformed search strategy when the search space is large and the depth of the goal node is unknown.
Advantages:
• It combines the benefits of BFS and DFS search algorithm in terms of fast search
and memory efficiency.
Disadvantages:
• The main drawback of IDDFS is that it repeats all the work of the previous phase.
Example:
• The following tree structure shows the iterative deepening depth-first search. The IDDFS algorithm performs several iterations until it finds the goal node. The iterations performed by the algorithm are:
• 1'st Iteration-----> A
2'nd Iteration----> A, B, C
3'rd Iteration------>A, B, D, E, C, F, G
4'th Iteration------>A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find
the goal node.
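The iterations listed above can be reproduced with this sketch; the tree is reconstructed from the listed visit orders, and the goal is assumed to be K.

```python
def dls_order(graph, node, limit, order):
    """Record the depth-first visit order down to the given depth limit."""
    order.append(node)
    if limit > 0:
        for child in graph.get(node, []):
            dls_order(graph, child, limit - 1, order)

def iddfs(graph, root, goal, max_depth=10):
    """Iterative deepening: repeat depth-limited DFS with limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        order = []
        dls_order(graph, root, limit, order)
        print(f"limit {limit}: {order}")
        if goal in order:
            return order
    return None

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'F': ['K']}
iddfs(tree, 'A', 'K')
# limit 3 visits: ['A', 'B', 'D', 'H', 'I', 'E', 'C', 'F', 'K', 'G']
```

Each deeper iteration repeats the shallower ones (the drawback noted above), but the repeated work is dominated by the final iteration.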
Bidirectional Search Algorithm
• The bidirectional search algorithm runs two simultaneous searches, one from the initial state, called forward search, and the other from the goal node, called backward search, in order to find the goal. Bidirectional search replaces one single search graph with two smaller subgraphs, one starting the search from the initial vertex and the other from the goal vertex. The search stops when the two graphs intersect.
• Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Advantages:
• Bidirectional search is fast.
• Bidirectional search requires less memory
Disadvantages:
• Implementation of the bidirectional search tree is difficult.
• In bidirectional search, one should know the goal state in advance.
Example:
• In the search tree below, the bidirectional search algorithm is applied. The algorithm divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the forward direction and from goal node 16 in the backward direction.
• The algorithm terminates at node 9, where the two searches meet.
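A bidirectional BFS sketch: two frontiers grow toward each other and the path is stitched together where they meet. The chain of edges below is an assumed stand-in for the slide's graph (nodes 1 through 16).

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Run BFS from both ends; stop when the two frontiers intersect."""
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}   # for path reconstruction
    fwd_q, bwd_q = deque([start]), deque([goal])
    while fwd_q and bwd_q:
        for q, parent, other in ((fwd_q, fwd_parent, bwd_parent),
                                 (bwd_q, bwd_parent, fwd_parent)):
            node = q.popleft()
            for nbr in graph.get(node, []):
                if nbr not in parent:
                    parent[nbr] = node
                    q.append(nbr)
                    if nbr in other:                       # frontiers intersect
                        return _join(nbr, fwd_parent, bwd_parent)
    return None

def _join(meet, fwd_parent, bwd_parent):
    """Splice the forward and backward half-paths at the meeting node."""
    path, n = [], meet
    while n is not None:
        path.append(n)
        n = fwd_parent[n]
    path.reverse()
    n = bwd_parent[meet]
    while n is not None:
        path.append(n)
        n = bwd_parent[n]
    return path

# undirected graph: add every edge in both directions
edges = [(1, 2), (2, 3), (3, 4), (4, 9), (9, 10), (10, 16)]
g = {}
for a, b in edges:
    g.setdefault(a, []).append(b)
    g.setdefault(b, []).append(a)
print(bidirectional_search(g, 1, 16))   # [1, 2, 3, 4, 9, 10, 16]
```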
Informed Search Algorithms
Heuristics function:
• Heuristic is a function which is used in Informed Search, and it
finds the most promising path.
• It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal. The heuristic method might not always give the best solution, but it is designed to find a good solution in reasonable time.
• A heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path between the given state and the goal.
• The value of the heuristic function is always positive.
• Admissibility of the heuristic function is given as:
h(n) <= h*(n)
• Here h(n) is the heuristic (estimated) cost and h*(n) is the actual cost of an optimal path to the goal. Hence, for the heuristic to be admissible, the heuristic cost must be less than or equal to the actual cost.
Pure Heuristic Search:
• Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n). It maintains two lists, OPEN and CLOSED. On the CLOSED list it places nodes which have already been expanded, and on the OPEN list, nodes which have not yet been expanded.
• On each iteration, the node n with the lowest heuristic value is expanded, all its successors are generated, and n is placed on the CLOSED list. The algorithm continues until a goal state is found.
In the informed search we will discuss two main algorithms which are given below:
• Best First Search Algorithm(Greedy search)
• A* Search Algorithm
Best-first Search Algorithm
• The greedy best-first search algorithm always selects the path which appears best at the moment. It combines aspects of depth-first search and breadth-first search, using a heuristic function to guide the search, and so takes advantage of both algorithms. With the help of best-first search, at each step we can choose the most promising node. In the greedy best-first search algorithm, we expand the node which appears closest to the goal node, where closeness is estimated by the heuristic function, i.e.
• f(n) = h(n)
• where h(n) = estimated cost from node n to the goal.
• The greedy best-first algorithm is implemented with a priority queue.
Advantages:
• Best-first search can switch between BFS-like and DFS-like behaviour, gaining the advantages of both algorithms.
• This algorithm can be more efficient than the BFS and DFS algorithms.
Disadvantages:
• It can behave as an unguided depth-first search in the worst case
scenario.
• It can get stuck in a loop, like DFS.
• This algorithm is not optimal.
Example:
• Consider the search problem below; we will traverse it using greedy best-first search. At each iteration, nodes are expanded using the evaluation function f(n) = h(n), whose values are given in the table below.
• Best first search algorithm:
• Step 1: Place the starting node on the OPEN list.
• Step 2: If the OPEN list is empty, stop and return failure.
• Step 3: Remove from the OPEN list the node n which has the lowest value of h(n), and place it on the CLOSED list.
• Step 4: Expand node n and generate its successors.
• Step 5: Check each successor of node n to see whether it is a goal node. If any successor is a goal node, return success and terminate the search; otherwise proceed to Step 6.
• Step 6: For each successor node, the algorithm computes the evaluation function f(n) and checks whether the node is already on the OPEN or CLOSED list. If it is on neither list, add it to the OPEN list.
• Step 7: Return to Step 2.
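The steps above can be sketched with a priority queue ordered by h(n). The graph and heuristic table are assumptions chosen to reproduce the slide's S → B → F → G walk-through.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the node with the smallest heuristic value, i.e. f(n) = h(n)."""
    open_list = [(h[start], start, [start])]   # priority queue ordered by h
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for child in graph.get(node, []):
            if child not in closed:
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None

# heuristic values and edges assumed to match the slide's example
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G']}
print(greedy_best_first(graph, h, 'S', 'G'))   # ['S', 'B', 'F', 'G']
```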
• In this search example, we use two lists, OPEN and CLOSED. Following are the iterations for traversing the above example.
• Expand the nodes of S and put in the CLOSED list
• Initialization: Open [A, B], Closed [S]
• Iteration 1: Open [A], Closed [S, B]
• Iteration 2: Open [E, F, A], Closed [S, B]
: Open [E, A], Closed [S, B, F]
• Iteration 3: Open [I, G, E, A], Closed [S, B, F]
: Open [I, E, A], Closed [S, B, F, G]
• Hence the final solution path will be: S----> B----->F----> G
• Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
• Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.
• Complete: Greedy best-first search is also incomplete, even if the given state space
is finite.
• Optimal: Greedy best first search algorithm is not optimal.
A* search
• A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) together with g(n), the cost to reach node n from the start state.
• It combines features of UCS and greedy best-first search, which lets it solve problems efficiently.
• The A* search algorithm finds the shortest path through the search space using the heuristic function.
• This search algorithm expands a smaller search tree and provides an optimal result faster.
• The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).
• In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we combine both costs as f(n) = g(n) + h(n); this sum is called the fitness number.
Algorithm of A* search:
• Step 1: Place the starting node on the OPEN list.
• Step 2: Check whether the OPEN list is empty; if it is, return failure and stop.
• Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, return success and stop; otherwise go to Step 4.
• Step 4: Expand node n, generate all of its successors, and put n on the CLOSED list. For each successor n', check whether n' is already on the OPEN or CLOSED list; if not, compute its evaluation function and place it on the OPEN list.
• Step 5: Else, if node n' is already on the OPEN or CLOSED list, attach it to the back-pointer which reflects the lowest g(n') value.
• Step 6: Return to Step 2.
Advantages:
• The A* search algorithm performs better than the other search algorithms discussed here.
• A* search algorithm is optimal and complete.
• This algorithm can solve very complex problems.
Disadvantages:
• It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
• The A* search algorithm has some complexity issues.
• The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for various large-scale problems.
Example:
• In this example, we will traverse the given graph using the A* algorithm. The heuristic value of all states is given in the table below, so we will calculate the f(n) of each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach the node from the start state.
• Here we will use OPEN and CLOSED list.
• Initialization: {(S, 5)}
• Iteration1: {(S--> A, 4), (S-->G,
10)}
• Iteration2: {(S--> A-->C, 4),
(S--> A-->B, 7), (S-->G, 10)}
• Iteration3: {(S--> A-->C--->G,
6), (S--> A-->C--->D, 11), (S-->
A-->B, 7), (S-->G, 10)}
• Iteration 4 will give the final
result, as S--->A--->C--->G it
provides the optimal path
with cost 6.
• Points to remember:
• A* algorithm returns the path which occurred first, and it does not search
for all remaining paths.
• The efficiency of A* algorithm depends on the quality of heuristic.
• The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.
• Complete: A* algorithm is complete as long as:
• Branching factor is finite.
• Cost at every action is fixed.
• Optimal: A* search algorithm is optimal if it follows below two
conditions:
• Admissible: the first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
• Consistency: the second required condition is consistency, which applies to A* graph search only.
• If the heuristic function is admissible, then A* tree search will
always find the least cost path.
• Time Complexity: The time complexity of A* search algorithm
depends on heuristic function, and the number of nodes
expanded is exponential to the depth of solution d. So the
time complexity is O(b^d), where b is the branching factor.
• Space Complexity: The space complexity of A* search
algorithm is O(b^d)
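The A* procedure above can be sketched as follows. The edge costs and heuristic values are assumptions chosen to match the slide's example, whose optimal path is S → A → C → G with cost 6.

```python
import heapq

def astar(graph, h, start, goal):
    """A*: expand the node with the smallest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]   # min-heap of (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for child, step in graph.get(node, {}).items():
            new_g = g + step
            # only keep a successor if we found a cheaper way to reach it
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(open_list,
                               (new_g + h[child], new_g, child, path + [child]))
    return None

# edge costs and heuristic table assumed to match the slide's example
graph = {'S': {'A': 1, 'G': 10}, 'A': {'B': 2, 'C': 1}, 'C': {'D': 3, 'G': 4}}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(astar(graph, h, 'S', 'G'))   # (6, ['S', 'A', 'C', 'G'])
```

Note how the direct edge S→G (cost 10, f = 10) stays on the OPEN list while the cheaper route through A and C (f = 6) is expanded first.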
Generate-and-Test
• The generate-and-test strategy is the simplest of all the
approaches. It consists of the following steps:
• Algorithm: Generate-and-Test
• 1. Generate a possible solution. For some problems, this means generating a particular point in the problem space. For others, it means generating a path from a start state.
• 2. Test to see if this is actually a solution by comparing the chosen
point or the endpoint of the chosen path to the set of acceptable
goal states.
• 3. If a solution has been found, quit. Otherwise, return to step 1.
• The following diagram shows the Generate and Test Heuristic
Search Algorithm
• Generate-and-test, like depth-first search, requires
that complete solutions be generated for testing.
• In its most systematic form, it is only an exhaustive
search of the problem space.
• Solutions can also be generated randomly, but then finding a solution is not guaranteed.
• This approach is what is known as the British
Museum algorithm: finding an object in the British
Museum by wandering randomly.
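The three steps above can be sketched as a simple loop; the toy number puzzle below is an assumed example, not from the slides.

```python
import itertools

def generate_and_test(candidates, is_goal):
    """Systematic generate-and-test: enumerate candidates, return the first solution."""
    for candidate in candidates:     # step 1: generate a possible solution
        if is_goal(candidate):       # step 2: test it against the goal criterion
            return candidate         # step 3: solution found, quit
    return None                      # candidates exhausted without a solution

# toy problem: find digits (a, b) with a + b == 10 and a * b == 21
candidates = itertools.product(range(10), repeat=2)
print(generate_and_test(candidates,
                        lambda p: p[0] + p[1] == 10 and p[0] * p[1] == 21))
# (3, 7)
```

In its systematic form this is exhaustive search; replacing the enumerator with a random generator gives the "British Museum" variant, which loses the guarantee of termination.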
Example: coloured blocks
• “Arrange four 6-sided cubes in a row, with each side of each cube painted one of four colors, such that on all four sides of the row one block face of each color is showing.”
• Heuristic: if there are more red faces than faces of other colors then, when placing a block with several red faces, use as few of them as possible as outside faces.
Example – Traveling Salesman Problem (TSP)
• A salesman has a list of cities, each of which he must
visit exactly once. There are direct roads between each
pair of cities on the list. Find the route the salesman
should follow for the shortest possible round trip that
both starts and finishes at any one of the cities.
• Traveler needs to visit n cities.
• Know the distance between each pair of cities.
• Want to know the shortest route that visits all the
cities once.
• Finally, select the path whose length is the least.
Hill Climbing
• The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain, i.e., the best solution to the problem. It terminates when it reaches a peak where no neighbor has a higher value.
• Hill climbing algorithm is a technique which is used for optimizing the
mathematical problems. One of the widely discussed examples of Hill climbing
algorithm is Traveling-salesman Problem in which we need to minimize the
distance traveled by the salesman.
• It is also called greedy local search, as it looks only at its immediate neighbor states and not beyond them.
• A node of hill climbing algorithm has two components which are state and
value.
• Hill Climbing is mostly used when a good heuristic is available.
• In this algorithm, we don't need to maintain and handle the search tree or
graph as it only keeps a single current state.
Features of Hill Climbing:
• Following are some main features of Hill Climbing
Algorithm:
• Generate and Test variant: Hill climbing is a variant of the generate-and-test method. The generate-and-test method produces feedback which helps to decide which direction to move in the search space.
• Greedy approach: Hill-climbing algorithm search moves in
the direction which optimizes the cost.
• No backtracking: It does not backtrack the search space, as
it does not remember the previous states.
State-space Diagram for Hill Climbing:
• The state-space landscape is a graphical representation of
the hill-climbing algorithm which is showing a graph between
various states of algorithm and Objective function/Cost.
• On the Y-axis we take the function, which can be an objective function or a cost function, and the state space on the X-axis. If the function on the Y-axis is cost, then the goal of the search is to find the global minimum. If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum.
Different regions in the state space landscape:
• Local Maximum: Local maximum is a state which is better than its
neighbor states, but there is also another state which is higher than
it.
• Global Maximum: Global maximum is the best possible state of
state space landscape. It has the highest value of objective function.
• Current state: It is a state in a landscape diagram where an agent is
currently present.
• Flat local maximum: It is a flat space in the landscape where all the
neighbor states of current states have the same value.
• Shoulder: It is a plateau region which has an uphill edge.
Types of Hill Climbing Algorithm:
• Simple hill Climbing:
• Steepest-Ascent hill-climbing:
• Stochastic hill Climbing:
1. Simple Hill Climbing:
• Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates one neighbor node state at a time, selects the first one which improves the current cost, and sets it as the current state. It checks only one successor state at a time, and if that state is better than the current state, it moves there; otherwise it stays in the same state.
This algorithm has the following features:
• It is less time-consuming.
• It finds less optimal solutions, and a solution is not guaranteed.
Algorithm for Simple Hill Climbing:
• Step 1: Evaluate the initial state, if it is goal state then return
success and Stop.
• Step 2: Loop Until a solution is found or there is no new
operator left to apply.
• Step 3: Select and apply an operator to the current state.
• Step 4: Check new state:
– If it is goal state, then return success and quit.
– Else if it is better than the current state then assign new state as a
current state.
– Else, if it is not better than the current state, then return to Step 2.
• Step 5: Exit.
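The loop above can be sketched as follows; the one-dimensional objective with a single peak at x = 5 is an assumed toy example.

```python
def simple_hill_climbing(start, neighbors, value):
    """Move to the FIRST neighbor that improves the current state; stop when none does."""
    current = start
    while True:
        for n in neighbors(current):
            if value(n) > value(current):   # first improving operator is applied
                current = n
                break
        else:
            return current                  # no neighbor improves: local maximum

# assumed toy objective: single peak at x = 5
value = lambda x: -(x - 5) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(0, neighbors, value))   # 5
```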
2. Steepest-Ascent hill climbing:
• The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. This algorithm examines all the neighboring nodes of the current state and selects the neighbor node which is closest to the goal state. It consumes more time because it searches multiple neighbors.
• Algorithm for Steepest-Ascent hill climbing:
• Step 1: Evaluate the initial state, if it is goal state then
return success and stop, else make current state as initial
state.
• Step 2: Loop until a solution is found or the current state does not
change.
– Let SUCC be a state such that any successor of the current state will
be better than it.
– For each operator that applies to the current state:
• Apply the new operator and generate a new state.
• Evaluate the new state.
• If it is goal state, then return it and quit, else compare it to the
SUCC.
• If it is better than SUCC, then set new state as SUCC.
• If the SUCC is better than the current state, then set current state
to SUCC.
• Step 3: Exit.
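The SUCC bookkeeping above reduces to taking the best of all neighbors; a sketch, again on an assumed toy objective with its peak at x = 5:

```python
def steepest_ascent(start, neighbors, value):
    """Examine ALL neighbors and move to the best one (SUCC), if it beats the current state."""
    current = start
    while True:
        succ = max(neighbors(current), key=value)   # best successor among all neighbors
        if value(succ) <= value(current):
            return current                          # no uphill move remains
        current = succ

value = lambda x: -(x - 5) ** 2
neighbors = lambda x: [x - 2, x - 1, x + 1, x + 2]
print(steepest_ascent(0, neighbors, value))   # 5
```

Compared with simple hill climbing, every neighbor is evaluated per step, which costs more time but takes the steepest available uphill move.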
3. Stochastic hill climbing:
• Stochastic hill climbing does not examine all its neighbors before moving. Rather, this search algorithm selects one neighbor node at random and decides whether to move to it or to examine another state.
Problems in Hill Climbing Algorithm
• 1. Local Maximum: A local maximum is a peak state in
the landscape which is better than each of its
neighboring states, but there is another state also
present which is higher than the local maximum.
• Solution: The backtracking technique can be a solution to the local maximum problem in the state-space landscape. Keep a list of promising paths so that the algorithm can backtrack in the search space and explore other paths as well.
• 2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state contain the same value, so the algorithm cannot determine a best direction in which to move. A hill-climbing search might get lost in the plateau area.
• Solution: The solution to a plateau is to take big steps (or very small steps) while searching. Randomly select a state far away from the current state, so that the algorithm may find a non-plateau region.
• 3. Ridges: A ridge is a special form of the local maximum. It
has an area which is higher than its surrounding areas, but
itself has a slope, and cannot be reached in a single move.
• Solution: With the use of bidirectional search, or by moving in different directions, we can mitigate this problem.
Simulated Annealing:
• A hill-climbing algorithm which never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum. And if the algorithm applies a random walk by moving to a random successor, it may be complete but not efficient. Simulated annealing is an algorithm which yields both efficiency and completeness.
• In metallurgy, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state. The same idea is used in simulated annealing, in which the algorithm picks a random move instead of the best move. If the random move improves the state, it is accepted. Otherwise, the algorithm accepts the downhill move only with a probability less than 1, which decreases as the "temperature" falls; if the move is rejected, it chooses another path.
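A sketch of the acceptance rule: uphill moves are always taken, downhill moves are taken with probability e^(Δ/T), and the temperature T decays each step. The objective, neighbor function, and cooling schedule are assumptions for illustration.

```python
import math
import random

def simulated_annealing(start, neighbor, value, temp=10.0, cooling=0.95, steps=2000):
    """Accept a random downhill move with probability e^(delta/T); T decays each step."""
    current = start
    for _ in range(steps):
        candidate = neighbor(current)
        delta = value(candidate) - value(current)
        # always accept uphill moves; accept downhill ones with prob. e^(delta/T)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
        temp = max(temp * cooling, 1e-6)   # geometric cooling schedule
    return current

random.seed(0)
value = lambda x: -(x - 5) ** 2            # assumed objective: global maximum at x = 5
neighbor = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(0, neighbor, value))
```

Early on (high T) the walk explores freely, which is what lets it escape local maxima; as T shrinks the acceptance rule degenerates into plain hill climbing.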
Problem Reduction
• We already know the divide-and-conquer strategy: a solution to a problem can be obtained by decomposing it into smaller sub-problems.
• Each of these sub-problems can then be solved to obtain a sub-solution.
• These sub-solutions can then be recombined to obtain a solution as a whole. This is called problem reduction.
• This method generates arcs which are called AND arcs.
• One AND arc may point to any number of successor nodes, all of which must be solved for the arc to lead to a solution.
Problem Reduction algorithm:
• 1. Initialize the graph to the starting node.
• 2. Loop until the starting node is labelled SOLVED or until its cost goes
above FUTILITY:
• (i) Traverse the graph, starting at the initial node and following the current best
path and accumulate the set of nodes that are on that path and have not yet
been expanded.
• (ii) Pick one of these unexpanded nodes and expand it. If there are no successors, assign FUTILITY as the value of this node. Otherwise, add its successors to the graph and for each of them compute f'(n). If f'(n) of any node is 0, mark that node as SOLVED.
• (iii) Change the f'(n) estimate of the newly expanded node to reflect the new
information provided by its successors. Propagate this change backwards
through the graph. If any node contains a successor arc whose descendants are
all solved, label the node itself as SOLVED.
Constraint Satisfaction Problem
• We have seen several techniques, such as local search and adversarial search, for solving different problems. The objective of every problem-solving technique is the same: to find a solution that reaches the goal. However, in adversarial search and local search there were no constraints on the agents while solving the problems and reaching their solutions.
• In this section, we will discuss another type of problem-
solving technique known as Constraint satisfaction
technique. By the name, it is understood that constraint
satisfaction means solving a problem under certain
constraints or rules.
• Constraint satisfaction is a technique where a problem is solved when its values
satisfy certain constraints or rules of the problem. Such type of technique leads
to a deeper understanding of the problem structure as well as its complexity.
• Constraint satisfaction depends on three components, namely:
• X: It is a set of variables.
• D: It is a set of domains where the variables reside. There is a specific domain
for each variable.
• C: It is a set of constraints which are followed by the set of variables.
• In constraint satisfaction, domains are the spaces where the variables reside,
following the problem specific constraints. These are the three main elements
of a constraint satisfaction technique. The constraint value consists of a pair
of {scope, rel}. The scope is a tuple of variables which participate in the
constraint and rel is a relation which includes a list of values which the variables
can take to satisfy the constraints of the problem.
Solving Constraint Satisfaction Problems
• The requirements for solving a constraint satisfaction problem (CSP) are:
• A state-space
• The notion of the solution.
• A state in state-space is defined by assigning values to some or all variables such as
• {X1=v1, X2=v2, and so on…}.
An assignment of values to a variable can be done in three ways:
• Consistent or Legal Assignment: An assignment which does not violate any
constraint or rule is called Consistent or legal assignment.
• Complete Assignment: An assignment where every variable is assigned with a
value, and the solution to the CSP remains consistent. Such assignment is known as
Complete assignment.
• Partial Assignment: An assignment which assigns values to only some of the variables. Such assignments are called partial assignments.
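The assignment types above can be illustrated with a backtracking search over a toy map-colouring CSP; the three pairwise-adjacent regions and their domains are assumed for the sketch.

```python
def backtracking_search(variables, domains, constraints, assignment=None):
    """Backtracking for a CSP: extend a consistent partial assignment one variable at a time."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                        # complete, consistent assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value                  # try a value (partial assignment)
        if all(check(assignment) for check in constraints):   # legal so far?
            result = backtracking_search(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                      # backtrack
    return None

# toy map colouring: three pairwise-adjacent regions (assumed example)
variables = ['WA', 'NT', 'SA']
domains = {v: ['red', 'green', 'blue'] for v in variables}
def different(a, b):
    # binary constraint: a and b must not share a colour (vacuously true if unassigned)
    return lambda asg: a not in asg or b not in asg or asg[a] != asg[b]
constraints = [different('WA', 'NT'), different('WA', 'SA'), different('NT', 'SA')]
print(backtracking_search(variables, domains, constraints))
# {'WA': 'red', 'NT': 'green', 'SA': 'blue'}
```

Here X is `variables`, D is `domains`, and C is `constraints`; every intermediate dictionary is a consistent partial assignment, and the returned dictionary is a complete one.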
Types of Domains in CSP
• There are following two types of domains which are
used by the variables :
• Discrete Domain: a domain whose values are discrete; it may be infinite (for example, the set of all integers), so a variable can take any one of infinitely many values.
• Finite Domain: a domain which has a finite number of values for each variable (for example, a fixed set of colors). A domain whose values range over a continuum is instead called a continuous domain.
Constraint Types in CSP
• With respect to the variables, there are basically the following types of constraints:
• Unary Constraints: The simplest type of constraint; it restricts the value of a
single variable.
• Binary Constraints: Constraints that relate two variables. For example, the
requirement that X2 lie between X1 and X3 can be expressed as the two binary
constraints X1 < X2 and X2 < X3.
• Global Constraints: Constraints that involve an arbitrary number of
variables (for example, an "all-different" constraint over many variables).
• Some special types of solution algorithms are used to solve the following types of
constraints:
• Linear Constraints: These types of constraints are commonly used in linear programming,
where each integer-valued variable appears in linear form only.
• Non-linear Constraints: These types of constraints are used in non-linear programming,
where variables may appear in non-linear form.
• Note: A special kind of constraint that describes which solutions are preferred in the
real world, rather than strictly required, is known as a preference constraint.
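The different constraint arities can be sketched as predicates over a scope, in the same {scope, rel} style used earlier. All names below are illustrative:

```python
# Illustrative encodings of the three constraint arities.
unary  = (("X1",), lambda x: x > 0)                 # restricts a single variable
binary = (("X1", "X2"), lambda a, b: a != b)        # relates two variables
global_c = (("X1", "X2", "X3"),                     # involves many variables:
            lambda *vals: len(set(vals)) == len(vals))  # "all-different"

def check(constraint, values):
    """Apply a (scope, rel) constraint to a tuple of candidate values."""
    scope, rel = constraint
    return rel(*values)

print(check(unary, (5,)))           # True: 5 > 0
print(check(global_c, (1, 2, 2)))   # False: values are not all different
```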
Constraint Propagation
• In ordinary state-space search, there is only one option: to search for a
solution. But in a CSP, we have two choices; either:
• We can search for a solution, or
• We can perform a special type of inference called constraint
propagation.
• Constraint propagation is a special type of inference which
helps in reducing the number of legal values for the
variables. The idea behind constraint propagation is local
consistency.
• In local consistency, variables are treated as nodes, and each
binary constraint is treated as an arc in the given problem.
The following local consistencies are discussed below:
• Node Consistency: A single variable is node-consistent if
all the values in its domain satisfy the unary constraints
on that variable.
• Arc Consistency: A variable is arc-consistent if every value in its
domain satisfies the variable's binary constraints; that is, for each of
its values there exists some value of the other variable that satisfies
the constraint between them.
• Path Consistency: A two-variable set {Xi, Xj} is path-consistent with
respect to a third variable Xm if, for every consistent assignment to
{Xi, Xj}, there is a value of Xm satisfying the binary constraints
between Xm and each of Xi and Xj. It generalizes arc consistency
from pairs to triples of variables.
• k-consistency: This type of consistency is used to define stronger
forms of propagation: a CSP is k-consistent if, for any consistent
assignment to k−1 variables, a consistent value can always be
assigned to any k-th variable.
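Arc consistency is commonly enforced with the AC-3 algorithm. A compact sketch, assuming binary constraints are given as a dict mapping directed arcs to predicates (that representation is invented for this example):

```python
from collections import deque

def revise(domains, xi, xj, constraint):
    """Remove values of xi that have no supporting value in xj's domain."""
    removed = False
    for a in set(domains[xi]):
        if not any(constraint(a, b) for b in domains[xj]):
            domains[xi].discard(a)
            removed = True
    return removed

def ac3(domains, binary_constraints):
    """binary_constraints: dict mapping arcs (xi, xj) -> predicate over (vi, vj)."""
    queue = deque(binary_constraints)
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, binary_constraints[(xi, xj)]):
            if not domains[xi]:
                return False      # a domain was wiped out: no solution exists
            for (a, b) in binary_constraints:   # re-check arcs pointing at xi
                if b == xi and a != xj:
                    queue.append((a, b))
    return True

# Example: enforce A < B with both domains {1, 2, 3}.
doms = {"A": {1, 2, 3}, "B": {1, 2, 3}}
arcs = {("A", "B"): lambda a, b: a < b, ("B", "A"): lambda b, a: b > a}
ac3(doms, arcs)
print(doms)   # A loses 3 (no B above it); B loses 1 (no A below it)
```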
CSP Problems
• Constraint satisfaction covers problems that impose constraints on their
solutions. CSPs include the following problems:
• Graph Coloring: The problem where the constraint is that no two adjacent
regions (or vertices) can have the same color.
• Sudoku Playing: The gameplay where the constraint is that no number from 1-9
can be repeated in the same row, column, or 3×3 block.
• n-queen problem: In the n-queen problem, the constraint is that no two queens
attack each other, i.e., no two queens share the same row, column, or diagonal.
• Note: The n-queen problem is already discussed in Problem-solving in AI section.
• Crossword: In the crossword problem, the constraint is that the words must be
formed correctly and must be meaningful.
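A problem such as graph coloring can be solved with a simple backtracking search over assignments. A sketch (the graph and color set are invented for illustration):

```python
def color_graph(adjacency, colors, assignment=None):
    """Backtracking map coloring: no two adjacent nodes may share a color."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(adjacency):
        return assignment                      # complete, consistent: a solution
    node = next(n for n in adjacency if n not in assignment)
    for c in colors:
        # Consistency check: no already-colored neighbor uses color c.
        if all(assignment.get(nb) != c for nb in adjacency[node]):
            assignment[node] = c
            result = color_graph(adjacency, colors, assignment)
            if result:
                return result
            del assignment[node]               # backtrack
    return None

# A triangle (3 mutually adjacent nodes) needs 3 colors.
triangle = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(color_graph(triangle, ["red", "green"]))           # None: 2 colors fail
print(color_graph(triangle, ["red", "green", "blue"]))   # a valid 3-coloring
```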
• Crypt-arithmetic Problem: a type of constraint satisfaction problem in which the
game is about digits and their unique replacement by letters or other
symbols. In a crypt-arithmetic problem, the digits (0-9) are substituted by
letters or symbols.
• The task in a crypt-arithmetic problem is to substitute each letter with a digit so
that the result is arithmetically correct.
• We can perform all the arithmetic operations on a given crypt-arithmetic problem.
The rules or constraints of a crypt-arithmetic problem are as follows:
• Each letter must stand for a unique digit, and no two letters may stand for the same digit.
• The result should satisfy the ordinary rules of arithmetic, i.e., 2+2=4, nothing else.
• Digits should be from 0-9 only.
• While performing the addition, the carry out of any column can be at most one.
• The problem can be solved from either side, i.e., the left-hand side (L.H.S.) or the
right-hand side (R.H.S.).
• Let us take the classic example SEND + MORE = MONEY:

      S E N D
    + M O R E
    ---------
    M O N E Y

These letters are then replaced by numbers such that all the constraints are
satisfied. So initially we have all blank spaces.
We first look at the most significant letter of the last word, which is 'M' in 'MONEY' here.
It is a letter generated purely by a carry, and a carry can be at most one.
So, we have M=1.
Now, in the next column from the left side we have S+M=O (plus a possible carry).
Here M=1, so S+1 must generate a carry, and the only digit that does so is 9.
Therefore, we have S=9 and O=0.
• Now, in the next column from the same side we have E+O=N. Here
we have O=0, which would give E+0=N, i.e., E=N, which is not
possible (each letter must be a unique digit). This means a carry was
generated by the lower place digits. So we have:
• 1+E=N ----------(i)
• The next column gives us N+R=E (with carries) -------(ii)
• Solving equations (i) and (ii) together, we get E=5 and N=6.
• Now, R would need to be 9, but 9 is already assigned to S; so R=8, with
a carry of 1 coming from the lower place digits.
• Now, we have D+5=Y, and this should generate a carry.
Therefore, D should be greater than 4. As 5, 6, 8 and 9 are already
assigned, we have D=7 and therefore Y=2.
• Therefore, the solution to the given crypt-arithmetic problem is:

      9 5 6 7
    + 1 0 8 5
    ---------
    1 0 6 5 2

• S=9; E=5; N=6; D=7; M=1; O=0; R=8; Y=2
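The derived solution can be verified mechanically against the cryptarithmetic constraints. A sketch (representing an assignment as a letter-to-digit dictionary is our own choice):

```python
def is_valid(assignment):
    """Check the constraints of SEND + MORE = MONEY for a letter->digit dict."""
    a = assignment
    if len(set(a.values())) != len(a):       # each letter a unique digit
        return False
    if a["S"] == 0 or a["M"] == 0:           # no leading zeros
        return False
    send  = 1000*a["S"] + 100*a["E"] + 10*a["N"] + a["D"]
    more  = 1000*a["M"] + 100*a["O"] + 10*a["R"] + a["E"]
    money = 10000*a["M"] + 1000*a["O"] + 100*a["N"] + 10*a["E"] + a["Y"]
    return send + more == money              # arithmetic must hold exactly

solution = {"S": 9, "E": 5, "N": 6, "D": 7, "M": 1, "O": 0, "R": 8, "Y": 2}
print(is_valid(solution))   # True: 9567 + 1085 = 10652
```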
Means-Ends Analysis
• We have studied strategies that reason either forward or
backward, but a mixture of the two directions is appropriate for solving
complex and large problems. Such a mixed strategy makes it possible to
first solve the major parts of a problem and then go back and solve the
smaller problems that arise while combining the big parts of the problem. This
technique is called Means-Ends Analysis.
• Means-Ends Analysis (MEA) is a problem-solving technique used in Artificial
Intelligence to limit search in AI programs.
• It is a mixture of backward and forward search techniques.
• The MEA technique was first introduced in 1961 by Allen Newell and
Herbert A. Simon in their problem-solving computer program, which was
named the General Problem Solver (GPS).
• The MEA process is centered on evaluating the difference
between the current state and the goal state.
How Means-Ends Analysis Works:
• The means-ends analysis process can be applied recursively
to a problem. It is a strategy to control search in problem-
solving. The following are the main steps that describe the
working of the MEA technique for solving a problem:
• First, evaluate the difference between the initial state and the final
state.
• Select the various operators that can be applied to each
difference.
• Apply the operator at each difference, which reduces the
difference between the current state and the goal state.
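The first step above, evaluating the difference between states, can be sketched if we model a state as a set of facts. This is a deliberate simplification, and the fact strings are invented for illustration:

```python
def differences(current, goal):
    """Step 1 of MEA: compute what separates the current state from the goal.
    States are modeled as plain sets of facts (an illustrative simplification)."""
    return current - goal, goal - current   # (facts to remove, facts to add)

current = {"dot", "square outside circle"}
goal = {"square inside circle"}
to_remove, to_add = differences(current, goal)
print(sorted(to_remove))   # ['dot', 'square outside circle']
print(sorted(to_add))      # ['square inside circle']
```

Each difference found this way is then matched against the operators that can reduce it.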
Operator Subgoaling
• In the MEA process, we detect the differences between the
current state and the goal state.
• Once these differences are found, we can apply an operator to
reduce them.
• But sometimes an operator cannot be applied
to the current state.
• So we create a subproblem of the current state in which the
operator can be applied. This kind of backward chaining, in
which operators are selected and subgoals are then set up to
establish the preconditions of the operator, is called Operator
Subgoaling.
Algorithm for Means-Ends Analysis:
• Let us take the current state as CURRENT and the goal state as GOAL; the following are
the steps of the MEA algorithm.
• Step 1: Compare CURRENT to GOAL; if there are no differences between them, then
return Success and exit.
• Step 2: Otherwise, select the most significant difference and reduce it by doing the
following steps until success or failure occurs:
– Select a new operator O which is applicable to the current difference; if there is no
such operator, then signal failure.
– Attempt to apply operator O to CURRENT. Make a description of two states:
i) O-Start, a state in which O's preconditions are satisfied.
ii) O-Result, the state that would result if O were applied in O-Start.
– If
(FIRST-PART <------ MEA(CURRENT, O-Start))
and
(LAST-PART <----- MEA(O-Result, GOAL))
are successful, then signal Success and return the
result of combining FIRST-PART, O, and LAST-PART.
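The algorithm above can be sketched as a recursive function, assuming states are sets of facts and operators are (name, preconditions, additions, deletions) tuples; all names are invented, and this toy version omits difference ranking and operator subgoaling, so it is a simplification of true MEA:

```python
def means_ends_analysis(current, goal, operators):
    """Recursive MEA sketch over states represented as frozensets of facts.
    Tries each applicable operator in turn; a real MEA would instead pick the
    operator that reduces the most significant difference."""
    if current == goal:
        return []                        # no differences: success
    for name, pre, add, delete in operators:
        if pre <= current:               # preconditions satisfied (O-Start)
            result = (current - delete) | add   # O-Result
            if result != current:        # avoid no-op loops
                rest = means_ends_analysis(result, goal, operators)
                if rest is not None:
                    return [name] + rest
    return None                          # failure: no operator helps

# Toy problem mirroring the slides' Delete/Move/Expand example.
start = frozenset({"dot", "square-outside"})
goal = frozenset({"square-inside", "expanded"})
ops = [
    ("Delete", frozenset({"dot"}), frozenset(), frozenset({"dot"})),
    ("Move",   frozenset({"square-outside"}), frozenset({"square-inside"}),
               frozenset({"square-outside"})),
    ("Expand", frozenset({"square-inside"}), frozenset({"expanded"}), frozenset()),
]
print(means_ends_analysis(start, goal, ops))   # ['Delete', 'Move', 'Expand']
```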
Example of Mean-Ends Analysis:
Let's take an example where we know the initial state and goal state as given below. In this problem, we
need to get the goal state by finding differences between the initial state and goal state and applying
operators.
Solution:
To solve the above problem, we will first find the differences between the initial state and the goal
state, and for each difference we will generate a new state and apply an operator. The operators
we have for this problem are:
• Move
• Delete
• Expand
1. Evaluating the initial state: In the first step, we evaluate the initial state and compare
the initial and goal states to find the differences between them.
2. Applying Delete operator: The first difference is that the goal state has no
dot symbol, while the initial state does; so first we apply the Delete operator to
remove this dot.
• 3. Applying Move Operator: After applying the Delete operator, a new state occurs, which
we again compare with the goal state. After comparing these states, there is another
difference: the square is outside the circle, so we will apply the Move Operator.
• 4. Applying Expand Operator: Now a new state is generated in the third step, and we
compare this state with the goal state. After comparing the states there is still one difference,
which is the size of the square; so we apply the Expand operator, and finally it
generates the goal state.
THANK YOU….