Chapter 3
3. Search Techniques
Breadth-first Search (BFS): Disadvantages
§ It requires lots of memory since each level of the tree must be saved into memory to expand the
next level.
§ BFS needs lots of time if the solution is far away from the root node.
Breadth-first Search (BFS):
Example
§ The BFS algorithm traverses the tree in layers, so it will follow the path shown by the
dotted arrow, and the traversed path will be:
S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
§ Time Complexity: The time complexity of BFS can be obtained from the number of nodes
traversed until the shallowest goal node:
T(b) = O(b^d)
where d is the depth of the shallowest solution and b is the branching factor (the maximum
number of successors of any node).
§ Space Complexity: The space complexity of BFS is given by the memory size of the frontier, which
is O(b^d).
§ Completeness:
§ BFS is complete, which means if the shallowest goal node is at some finite depth, then BFS will find a solution.
§ Does it always find a solution if one exists?
§ YES, provided the shallowest goal node is at some finite depth d and the branching factor b is finite.
§ Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.
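The layer-by-layer expansion above can be sketched with a FIFO queue. The graph and node names below are made-up stand-ins, not the slide's figure:

```python
from collections import deque

def bfs(start, goal, successors):
    """Breadth-first search; returns a shallowest path to goal, or None.

    `successors` maps a state to its neighbouring states (a hypothetical
    graph supplied by the caller).
    """
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in successors.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "C": [], "D": ["G"]}
print(bfs("S", "G", graph))   # ['S', 'B', 'D', 'G']
```

The `visited` set is what makes the memory cost grow with the whole explored layer, which is exactly the O(b^d) space problem noted above.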
Two lessons:
1. Memory requirements are a bigger problem for BFS than its execution time.
2. Exponential complexity search problems cannot be solved by uninformed search
methods for any but the smallest instances.
Depth-first Search (DFS):
§ It is a recursive algorithm for traversing a tree or graph data structure.
§ It is called the depth-first search because it starts from the root node and follows each path
to its greatest depth node before moving to the next path.
§ DFS uses a stack data structure for its implementation.
§ The process of the DFS algorithm is similar to the BFS algorithm.
Advantage:
§ DFS requires much less memory, as it only needs to store the stack of nodes on the path from the root
node to the current node.
§ It can take less time to reach the goal node than BFS (if it happens to traverse the right path).
Disadvantage:
§ There is a possibility that many states keep re-occurring, and there is no guarantee of finding a
solution.
§ DFS goes deep down into the search space, and it may sometimes descend into an infinite loop.
Example:
§ Root node ---> Left node ---> Right node.
§ Completeness:
§ Does it always find a solution if one exists?
§ NO
§ If the search space is infinite or contains loops, then DFS may not find a solution.
§ DFS search algorithm is complete within finite state space as it will expand every node within a limited search
tree.
§ Time Complexity:
§ The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm.
§ It is given by:
T(n) = O(b^m)
where m is the maximum depth of any node; this can be much larger than d (the depth of the shallowest solution).
§ Space Complexity:
§ DFS needs to store only a single path from the root node (plus the unexpanded siblings along it), so the space complexity of DFS is equivalent
to the size of the fringe set, O(bm).
§ Optimal:
§ DFS is non-optimal, as it may take a large number of steps or incur a high cost to reach the goal
node.
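The stack-based traversal described above can be sketched as follows; the graph is a hypothetical example, not the slide's figure:

```python
def dfs(start, goal, successors):
    """Iterative depth-first search using an explicit stack (LIFO)."""
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push successors; the one pushed last is explored first
        for nxt in successors.get(node, []):
            if nxt not in visited:
                stack.append(path + [nxt])
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "C": [], "D": ["G"]}
print(dfs("S", "G", graph))   # ['S', 'B', 'D', 'G']
```

Note that only the stack of paths is kept, which is why the memory cost stays proportional to the current path rather than a whole layer.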
Depth-Limited Search:
§ A depth-limited search (DLS) algorithm is similar to depth-first search with a predetermined depth limit ℓ.
§ Depth-limited search can solve the drawback of the infinite path in the Depth-first search.
§ In this algorithm, the node at the depth limit is treated as if it has no further successor nodes.
§ Depth-limited search can be terminated with two Conditions of failure:
§ Standard failure value: It indicates that problem does not have any solution.
§ Cut-off failure value: It defines no solution for the problem within a given depth limit.
Advantages:
§ Depth-limited search is Memory efficient.
Disadvantages:
§ Depth-limited search also has a disadvantage of incompleteness.
§ It may not be optimal if the problem has more than one solution.
Example:
§ Completeness:
§ The DLS algorithm is complete if the solution lies within the depth limit.
§ Time Complexity:
§ Time complexity of the DLS algorithm is O(b^ℓ).
§ Space Complexity:
§ Space complexity of DLS algorithm is O(b×ℓ).
§ Optimal:
§ Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even when ℓ > d.
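The two failure conditions above can be made concrete in a recursive sketch; the graph is a hypothetical example:

```python
CUTOFF, FAILURE = "cutoff", "failure"

def dls(node, goal, successors, limit):
    """Recursive depth-limited search distinguishing the two failure modes."""
    if node == goal:
        return [node]
    if limit == 0:
        return CUTOFF            # limit reached: a solution may exist deeper
    cutoff_occurred = False
    for nxt in successors.get(node, []):
        result = dls(nxt, goal, successors, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result != FAILURE:
            return [node] + result
    return CUTOFF if cutoff_occurred else FAILURE

graph = {"S": ["A", "B"], "A": [], "B": ["G"]}
print(dls("S", "G", graph, 1))   # 'cutoff'  (G is at depth 2)
print(dls("S", "G", graph, 2))   # ['S', 'B', 'G']
```

`FAILURE` corresponds to the standard failure value (no solution at all), while `CUTOFF` only says no solution was found within the given limit.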
Informed/Heuristic Search techniques:
§ More efficient than an uninformed search because, along with the current state
information, some additional information is also present, which makes it easier to
reach the goal state.
§ Contains an array of knowledge such as how far we are from the goal, path cost,
how to reach to goal node, etc.
§ This knowledge helps the agent explore less of the search space and find the
goal node more efficiently.
§ Informed search algorithms are more useful for large search spaces.
§ Informed search algorithm uses the idea of heuristic, so it is also called Heuristic
search.
§ Heuristic Search Uses domain-dependent (heuristic) information in order to
search the space more efficiently.
Heuristic Function
§ Heuristic is a function which is used in Informed Search, and it finds the most promising path.
§ It takes the current state of the agent as its input and produces an estimate of how close the agent is
to the goal.
§ The heuristic method might not always give the best solution, but it is guaranteed to find a
good solution in reasonable time.
§ The heuristic function estimates how close a state is to the goal.
§ It is represented by h(n), and it estimates the cost of an optimal path between the pair of states.
§ The value of the heuristic function is always non-negative.
Ways of using heuristic information:
§ Deciding which node to expand next, instead of doing the expansion in a strictly breadth-first or
depth-first order;
§ In the course of expanding a node, deciding which successor or successors to generate, instead of
blindly generating all possible successors at one time
§ Deciding that certain nodes should be discarded, or pruned, from the search space.
Informed Search uses domain specific information to improve the search pattern
§ Define a heuristic function, h(n), that estimates the "goodness" of a node n.
§ Specifically, h(n) = estimated cost (or distance) of minimal cost path from n to a goal state.
§ The heuristic function is an estimate, based on domain-specific information that is computable
from the current state description, of how close we are to a goal.
Best-first Search Algorithm (Greedy Search):
§ A best-first search is a general approach of informed search.
§ It is the combination of depth-first search and breadth-first search algorithms.
§ It uses a heuristic function to guide the search.
§ Best-first search allows us to take the advantages of both algorithms.
§ With the help of best-first search, at each step, we can choose the most
promising node.
§ In the best first search algorithm, we expand the node which is
closest to the goal node and the closest cost is estimated by
heuristic function,
h(n)= estimated cost of the cheapest path from the current node n to the goal node.
Note: If the current node n is a goal node, the value of h(n) will be 0.
§ Best-first search is known as a greedy search because it always
tries to explore the node which is nearest to the goal node and
selects that path, which gives a quick solution.
§ Thus, it evaluates nodes with the help of the heuristic function, i.e.,
f(n)=h(n).
§ The greedy best-first algorithm is implemented using a priority
queue.
§ Maintain an OPEN list and a CLOSED list, where the OPEN list contains visited but
unexpanded nodes and the CLOSED list contains visited as well as expanded
nodes.
§ Initially, traverse the root node and visit its next successor nodes and place
them in the OPEN list in ascending order of their heuristic value.
§ Select the first successor node from the OPEN list with the lowest heuristic
value and expand further.
§ Now, rearrange all the remaining unexpanded nodes in the OPEN list and
repeat the above two steps.
§ If the goal node is reached, terminate the search, else expand further.
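The OPEN/CLOSED procedure above can be sketched with a priority queue keyed on h(n). The graph and heuristic values below follow the slide's worked example, but the exact numbers for A, E and I are assumptions:

```python
import heapq

def greedy_best_first(start, goal, successors, h):
    """Greedy best-first search: always expand the node with smallest h(n).

    OPEN is a priority queue ordered by the heuristic; CLOSED holds
    already-expanded nodes. `h` is a table of heuristic estimates.
    """
    open_list = [(h[start], [start])]
    closed = set()
    while open_list:
        _, path = heapq.heappop(open_list)
        node = path[-1]
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nxt in successors.get(node, []):
            if nxt not in closed:
                heapq.heappush(open_list, (h[nxt], path + [nxt]))
    return None

# S expands to A and B; B to E and F; F to I and G (as in the example)
graph = {"S": ["A", "B"], "B": ["E", "F"], "F": ["I", "G"]}
h = {"S": 13, "A": 12, "B": 4, "E": 8, "F": 2, "I": 9, "G": 0}
print(greedy_best_first("S", "G", graph, h))   # ['S', 'B', 'F', 'G']
```

This reproduces the iterations on the next slide: B (h=4) is expanded before A, F (h=2) before E, and G (h=0) terminates the search.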
Advantages:
§ Best first search can switch between BFS and DFS by gaining the advantages of both the
algorithms.
§ This algorithm is more efficient than BFS and DFS algorithms.
Disadvantages:
§ It can behave like an unguided depth-first search in the worst-case scenario.
§ It can get stuck in a loop, as DFS can.
§ This algorithm is not optimal.
§ Best-first search does not guarantee to reach the goal state.
§ Since best-first search is a greedy approach, it does not give an optimized solution.
§ It may cover a long distance in some cases.
Example:
Consider the below search problem, which we will traverse using greedy best-first search. At each iteration,
the node with the lowest value of the evaluation function f(n) = h(n) is expanded; the heuristic values are given below.
Expand the nodes of S and put them in the CLOSED list:
Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]
           : Open [E, A], Closed [S, B, F]
Iteration 3: Open [I, G, E, A], Closed [S, B, F]
           : Open [I, E, A], Closed [S, B, F, G]
Hence the final solution path will be:
S ---> B ---> F ---> G
Time Complexity:
§ The worst-case time complexity of greedy best-first search is O(b^m).
Space Complexity:
§ The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth
of the search space.
Complete:
§ Greedy best-first search is also incomplete, even if the given state space is finite.
Optimal:
§ Greedy best first search algorithm is not optimal.
A* Search
§ A* search is the most widely used informed search algorithm where a node n is evaluated by
combining values of the functions g(n)and h(n).
§ The function g(n) is the path cost from the start/initial node to a node n and h(n) is the estimated
cost of the cheapest path from node n to the goal node.
§ Therefore, we have f(n)=g(n)+h(n)
§ This sum is called the fitness number.
§ where f(n) is the estimated cost of the cheapest solution through n.
§ So, in order to find the cheapest solution, try to find the lowest values of f(n).
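The evaluation f(n) = g(n) + h(n) can be sketched as follows; the weighted graph and heuristic table are hypothetical, and the heuristic is assumed admissible:

```python
import heapq

def astar(start, goal, neighbours, h):
    """A* search: expand the node with the smallest f(n) = g(n) + h(n).

    `neighbours` maps a node to (successor, step_cost) pairs; `h` is an
    (assumed admissible) heuristic table.
    """
    open_list = [(h[start], 0, [start])]      # entries are (f, g, path)
    best_g = {start: 0}
    while open_list:
        f, g, path = heapq.heappop(open_list)
        node = path[-1]
        if node == goal:
            return path, g
        for nxt, cost in neighbours.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_list, (g2 + h[nxt], g2, path + [nxt]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}
print(astar("S", "G", graph, h))   # (['S', 'B', 'G'], 5)
```

Unlike greedy best-first search, the g(n) term makes A* prefer the cheaper S→B→G route (cost 5) over S→A→G (cost 6), even though A initially looks closer.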
Algorithm of A* search:
Disadvantages:
1. It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
2. A* search algorithm has some complexity issues.
3. The main drawback of A* is memory requirement as it keeps all generated nodes in the memory,
so it is not practical for various large-scale problems.
Completeness:
§ A* search is complete: it is guaranteed to reach the goal node if a solution exists.
Optimality:
§ A* is optimal if the heuristic never overestimates the true cost; an underestimating (admissible) heuristic always gives an optimal solution.
Space and time complexity:
§ A* search has O(b^d) space and time complexity in the worst case.
Hill climbing
§ It is a local search algorithm which continuously moves in the direction of increasing
elevation to find the peak of the mountain or best solution to the problem.
§ It terminates when it reaches a peak value where no neighbour has a higher value.
§ A technique which is used for optimizing the mathematical problems.
§ One of the widely discussed examples of Hill climbing algorithm is Traveling-salesman
Problem
§ It is also called greedy local search as it only looks to its good immediate neighbour state and
not beyond that.
§ A node of hill climbing algorithm has two components which are state and value.
§ Hill Climbing is mostly used when a good heuristic is available.
§ In this algorithm, we don't need to maintain and handle the search tree or graph as it only
keeps a single current state.
Features of Hill climbing
§ Generate and Test variant:
§ Hill Climbing is the variant of Generate and Test method.
§ The Generate and Test method produce feedback which helps to decide which direction to move in the search
space.
§ Greedy approach:
§ Hill-climbing algorithm search moves in the direction which optimizes the cost.
§ No backtracking:
§ It does not backtrack the search space, as it does not remember the previous states.
State-space Diagram for Hill Climbing:
§ A graphical representation of the hill-climbing algorithm showing a graph between the various
states of the algorithm and the objective function/cost.
§ The Y-axis shows the objective function or cost function, and the X-axis shows the state space.
§ If the function on the Y-axis is cost, then the goal of the search is to find the global
minimum.
§ If the function of Y-axis is Objective function, then the goal of the search is to find the global maximum
and local maximum.
Different regions in the state space landscape:
§ Local Maximum:
§ A state which is better than its neighbour states, but there is also another state which is higher than it.
§ Global Maximum:
§ The best possible state of state space landscape. It has the highest value of objective function.
§ Current state:
§ It is a state in a landscape diagram where an agent is currently present.
§ Flat local maximum:
§ It is a flat space in the landscape where all the neighbour states of current states have the same value.
§ Shoulder:
§ It is a plateau region which has an uphill edge.
Example of Hill Climbing Algorithm:
§ Consider that the most promising successor of a node is the one that has the shortest straight-line
distance to the goal node G.
§ In figure below, the straight line distances between each city and goal G is indicated in square brackets,
i.e. the heuristic.
Problems in Hill Climbing Algorithm:
§ The search gets stuck at local maxima: when we reach a position where there are no better neighbours, there is no
guarantee that we have found the best solution.
§ A ridge is a sequence of local maxima, which is difficult for a hill climber to navigate.
§ Another type of problem we may find with hill-climbing searches is a plateau.
§ This is an area where the search space is flat, so that all neighbours return the same evaluation.
Simulated Annealing:
§ A hill-climbing algorithm which never makes a move towards a lower value is guaranteed to be incomplete,
because it can get stuck on a local maximum.
§ If the algorithm instead applies a random walk, moving to a random successor, it may be complete but not efficient.
§ Simulated Annealing is an algorithm which yields both efficiency and completeness.
§ It is motivated by the physical annealing process in which material is heated and slowly cooled into a
uniform structure.
§ The same process is used in simulated annealing in which the algorithm picks a random move, instead of
picking the best move.
§ If the random move improves the state, then it follows the same path.
§ Otherwise, the algorithm accepts the downhill move only with a probability less than 1, or it
chooses another path.
§ This search technique was first used in 1980 to solve VLSI layout problems.
§ It is also applied for factory scheduling and other large optimization tasks.
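The accept-with-probability rule can be sketched as follows, reusing the toy landscape from the hill-climbing example; the geometric cooling schedule and its parameters are assumptions, not part of the slides:

```python
import math, random

def simulated_annealing(start, value, neighbours, t0=10.0, cooling=0.95,
                        steps=1000):
    """Simulated annealing: always accept uphill moves; accept downhill
    moves with probability exp(delta / T), where the temperature T falls."""
    current, t = start, t0
    for _ in range(steps):
        nxt = random.choice(neighbours(current))
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling                 # assumed geometric cooling schedule
        if t < 1e-6:
            break
    return current

random.seed(0)
value = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
result = simulated_annealing(0, value, neighbours)
print(result)
```

At high temperature the walk is nearly random (so it can escape local maxima); as T shrinks, exp(delta/T) vanishes for downhill moves and the behaviour degrades gracefully into hill climbing.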
Genetic Algorithm:
§ Genetic Algorithm is one of the heuristic algorithms that mimics the process of
natural evolution.
§ They are used to solve optimization problems.
§ They are inspired by Darwin’s Theory of Evolution.
§ GA generate solutions to optimization problems using techniques inspired by natural
evolution, such as inheritance, mutation, selection, and crossover.
§ They are an intelligent exploitation of a random search.
§ This heuristic is routinely used to generate useful solutions to optimization and search
problems.
§ Concisely stated, a GA is a programming technique that mimics biological evolution
as a problem-solving strategy.
§ Given a specific problem to solve,
§ the input to the GA is a set of potential solutions to that problem,
§ Encoded in some fashion,
§ A metric called a fitness function that allows each candidate to be quantitatively evaluated.
§ These candidates may be solutions already known to work, with the aim of the GA
being to improve them, but more often they are generated at random.
§ The GA then evaluates each candidate according to the fitness function.
Genetic Algorithm works in the following steps-
Step-01:
§ Randomly generate a set of possible solutions to a problem.
§ Represent each solution as a fixed length character string.
Step-02:
§ Using a fitness function, test each possible solution against the problem to evaluate them.
Step-03:
§ Keep the best solutions.
§ Use best solutions to generate new possible solutions.
Step-04:
Repeat the previous two steps until-
§ Either an acceptable solution is found
§ Or until the algorithm has completed its iterations through a given number of cycles /
generations.
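The four steps above can be sketched on a classic toy task: evolving a fixed-length string towards a target. The target word, alphabet, and operator parameters are all illustrative assumptions:

```python
import random

random.seed(1)
TARGET = "HELLO"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):                      # Step 2: number of matching characters
    return sum(a == b for a, b in zip(s, TARGET))

def crossover(p1, p2):               # single-point crossover
    cut = random.randrange(1, len(TARGET))
    return p1[:cut] + p2[cut:]

def mutate(s, rate=0.1):             # random character replacement
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def genetic_algorithm(pop_size=100, generations=200):
    # Step 1: random fixed-length strings
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):     # Step 4: repeat until done
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        parents = pop[: pop_size // 2]        # Step 3: keep the best half
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = genetic_algorithm()
print(best)
```

Selection here is simple truncation (keep the best half); the tournament and rank-based schemes mentioned later are drop-in replacements for that line.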
Basic Operators:
The basic operators of GA are:
1. Selection (Reproduction):
§ It is the first operator applied on the population.
§ It selects the chromosomes from the population of parents to cross over and produce
offspring.
§ It is based on evolution theory of “Survival of the fittest” given by Darwin.
There are many techniques for reproduction or selection operator such as-
§ Tournament selection
§ Ranked position selection
§ Steady state selection etc.
2. Cross Over:
§ The population gets enriched with better individuals after the reproduction phase.
§ Reproduction, however, only makes clones of good strings; it does not create new ones.
§ The crossover operator is therefore applied to the mating pool to create new, better strings.
§ By recombining good individuals, the process is likely to create even better individuals.
3. Mutation-
§ Mutation is a background operator.
§ Mutation of a bit includes flipping it by changing 0 to 1 and vice-versa.
§ After crossover, the mutation operator subjects the strings to mutation.
§ It facilitates a sudden change in a gene within a chromosome.
§ Thus, it allows the algorithm to search for solutions far away from the current ones.
§ It helps ensure that the search algorithm is not trapped in a local optimum.
§ Its purpose is to prevent premature convergence and maintain diversity within the
population.
Flowchart:
Difference between Genetic Algorithms and Traditional Algorithms
§ In the traditional algorithm, only one set of solutions is maintained, whereas,
in a GA, several sets of solutions in search space can be used.
§ Traditional algorithms need more information in order to perform a search,
whereas GA need only one objective function to calculate the fitness of an
individual.
§ Traditional Algorithms cannot work parallelly, whereas GA can work
parallelly (calculating the fitness of the individualities are independent).
§ One of the big differences is that a GA does not operate directly on candidate
solutions but on an encoded representation of them.
§ Traditional Algorithms can only generate one result in the end,
whereas Genetic Algorithms can generate multiple optimal results
from different generations.
§ Neither is guaranteed to find the optimum: a traditional algorithm may
fail to generate optimal results, and Genetic Algorithms likewise do not
guarantee globally optimal results.
§ Traditional algorithms are deterministic in nature, whereas Genetic
algorithms are probabilistic and stochastic in nature.
Adversarial Search:
§ Adversarial search is a search, where we examine the problem
which arises when we try to plan ahead of the world and other agents
are planning against us
§ Previous search strategies are only associated with a single agent that
aims to find the solution which often expressed in the form of a
sequence of actions.
§ But, there might be some situations where more than one agent is
searching for the solution in the same search space, and this situation
usually occurs in game playing.
§ An environment with more than one agent is termed a multi-agent
environment, in which each agent is an opponent of the other agents and
plays against them.
§ Each agent needs to consider the action of other agent and effect of
that action on their performance.
§ So, Searches in which two or more players with conflicting goals are
trying to explore the same search space for the solution, are called
adversarial searches, often known as Games.
Introduction to Game Playing
§ Game Playing is an important domain of artificial intelligence.
§ Game playing was one of the first tasks undertaken in Artificial Intelligence.
§ Game theory has its history from 1950, almost from the days when computers
became programmable.
§ The very first game tackled in AI was chess.
§ Initiators in the field of game theory in AI were
§ Konrad Zuse (the inventor of the first programmable computer and the first
programming language),
§ Claude Shannon (the inventor of information theory),
§ Norbert Wiener (the creator of modern control theory),
§ Alan Turing.
§ Since then, there has been a steady progress in the standard of play, to the
point that machines have defeated human champions (although not every
time) in chess, and are competitive in many other games
§ Types of Game
§ Perfect Information Game:
§ In which player knows all the possible moves of himself and
opponent and their results.
§ E.g. Chess.
§ Imperfect Information Game:
§ In which player does not know all the possible moves of the
opponent.
§ E.g. Bridge since all the cards are not visible to player.
Definition
Game playing is a search problem defined by following components:
§ Initial state:
§ This defines initial configuration of the game and identifies first player to move.
§ Successor function:
§ This identifies which are the possible states that can be achieved from the current
state.
§ This function returns a list of (move, state) pairs, each indicating a legal move and
the resulting state.
§ Goal test:
§ Which checks whether a given state is a goal state or not.
§ States where the game ends are called as terminal states.
§ Path cost / utility / payoff function:
§ Which gives a numeric value for the terminal states.
§ In chess, the outcome is win, loss or draw, with values +1, -1,or 0.
§ Some games have wider range of possible outcomes.
Characteristics of game playing
§ Unpredictable Opponent:
§ Generally we cannot predict the behavior of the opponent.
§ Thus we need to find a solution which is a strategy specifying a
move for every possible opponent move or every possible
state.
§ Time Constraints:
§ Every game has a time constraint.
§ Thus it may be infeasible to find the best move within this time.
HOW TO PLAY A GAME IN AI?
Typical structure of the game in the AI is:
1. 2- person game
2. Players alternate moves
3. Zero-sum game: one player’s loss is the other’s gain
4. Perfect information:
§ both players have access to complete information about
the state of the game.
§ No information is hidden from either player.
5. No chance (e. g. using dice) involved.
E.g. Tic- Tac- Toe, Chess
Mini-Max Algorithm:
§ Max is considered as the first player in the game and Min as the
second player
§ This algorithm computes the minimax decision from the current state
§ It uses a recursive computation of minimax values of each successor
state directly implementing some defined function
§ The recursion proceeds from the initial node to all the leaf nodes
§ Then the minimax values are backed up through the tree as the
recursion unwinds
§ It performs the depth first exploration of a game tree in a complete
way
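The recursive backing-up of minimax values can be sketched on a tiny hypothetical game tree (MAX at the root, MIN one ply below, utilities at the leaves):

```python
def minimax(node, maximizing, tree, values):
    """Minimax over a game tree given as a dict of children lists;
    leaves carry utility values (a made-up example tree)."""
    if node in values:                        # terminal state
        return values[node]
    children = [minimax(c, not maximizing, tree, values) for c in tree[node]]
    return max(children) if maximizing else min(children)

# MAX moves at the root A; MIN replies at B and C
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
values = {"D": 3, "E": 5, "F": 2, "G": 9}
print(minimax("A", True, tree, values))   # 3: MIN yields 3 at B and 2 at C
```

The recursion reaches every leaf (depth-first, as stated above), then the values 3 and 2 are backed up to B and C, and MAX picks the larger.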
Properties of Mini-Max algorithm:
Complete:
§ Min-Max algorithm is Complete. It will definitely find a solution (if
exist), in the finite search tree.
Optimal
§ Min-Max algorithm is optimal if both opponents are playing
optimally.
Time and Space complexity:
§ The mini-max algorithm performs a depth-first traversal, so, like DFS, its
time complexity is O(b^m) and its space complexity is O(bm).
Limitation of the minimax Algorithm:
§ The main drawback of the minimax algorithm is that it gets really
slow for complex games such as Chess.
§ This type of games has a huge branching factor, and the player has
lots of choices to decide.
§ This limitation of the minimax algorithm can be improved from
alpha-beta pruning which we will discuss in the next topic.
Alpha-Beta Pruning:
§ A modified version of the minimax algorithm.
§ It is an optimization technique for the minimax algorithm.
§ We cannot eliminate the exponent, but we can effectively cut it in half.
§ Hence there is a technique by which without checking each node of
the game tree we can compute the correct minimax decision, and this
technique is called pruning.
§ This involves two threshold parameter Alpha and beta for future
expansion, so it is called alpha-beta pruning.
§ It is also called as Alpha-Beta Algorithm.
§ Alpha-beta pruning can be applied at any depth of a tree, and
sometimes it prunes not only leaves but entire sub-trees.
§ The two parameters can be defined as:
§ Alpha: The best (highest-value) choice we have found so far at any point
along the path of Maximizer.
§ The initial value of alpha is -∞.
§ Beta: The best (lowest-value) choice we have found so far at any point along
the path of Minimizer.
§ The initial value of beta is +∞.
§ Alpha-beta pruning applied to a standard minimax tree returns the same
move as the standard algorithm does,
§ but it removes all the nodes which do not really affect the final decision
and only make the algorithm slow.
§ Hence, by pruning these nodes, it makes the algorithm fast.
Condition for Alpha-Beta Pruning:
§ The main condition which required for alpha-beta pruning is:
α ≥ β
Key points about alpha-beta pruning:
§ The Max player will only update the value of alpha.
§ The Min player will only update the value of beta.
§ While backtracking the tree, the node values will be passed to upper
nodes instead of values of alpha and beta.
§ We will only pass the alpha, beta values to the child nodes.
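The key points above can be sketched by extending minimax with the α ≥ β cut-off; the small game tree is a hypothetical example:

```python
def alphabeta(node, maximizing, tree, values, alpha=float("-inf"),
              beta=float("inf")):
    """Minimax with alpha-beta pruning: stop expanding a node's
    remaining children as soon as alpha >= beta."""
    if node in values:                        # terminal state
        return values[node]
    if maximizing:
        best = float("-inf")
        for child in tree[node]:
            best = max(best, alphabeta(child, False, tree, values,
                                       alpha, beta))
            alpha = max(alpha, best)          # MAX only updates alpha
            if alpha >= beta:
                break                         # prune remaining children
        return best
    best = float("inf")
    for child in tree[node]:
        best = min(best, alphabeta(child, True, tree, values, alpha, beta))
        beta = min(beta, best)                # MIN only updates beta
        if alpha >= beta:
            break
    return best

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
values = {"D": 3, "E": 5, "F": 2, "G": 9}
print(alphabeta("A", True, tree, values))   # 3, with leaf G never examined
```

After B returns 3, alpha = 3 is passed down into C; the first leaf F gives beta = 2, so alpha ≥ beta holds and G is pruned, yet the root value matches plain minimax.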
Working of Alpha-Beta Pruning:
Move Ordering in Alpha-Beta pruning:
§ The effectiveness of alpha-beta pruning is highly dependent on the order in which each node is examined.
§ Move order is an important aspect of alpha-beta pruning.
It can be of two types:
Worst ordering:
§ In the worst case, the alpha-beta pruning algorithm prunes none of the leaves of the tree and works exactly
like the minimax algorithm.
§ It then also consumes more time because of the alpha-beta bookkeeping; such an ordering is called worst
ordering.
§ In this case, the best move occurs on the right side of the tree.
§ The time complexity for such an order is O(b^m).
Ideal ordering:
§ The ideal ordering for alpha-beta pruning occurs when lots of pruning happens in the tree, and best moves
occur at the left side of the tree.
§ Since we apply DFS, the search examines the left side of the tree first and can go twice as deep as the
minimax algorithm in the same amount of time.
§ The complexity with ideal ordering is O(b^(m/2)).
Constraint satisfaction problem
§ A constraint satisfaction problem (CSP) is a problem that requires its
solution within some limitations or conditions also known as
constraints.
§ Constraint satisfaction is a technique where a problem is solved
when its values satisfy certain constraints or rules of the problem.
§ Such type of technique leads to a deeper understanding of the
problem structure as well as its complexity.
§ It consists of the following:
§ A finite set of variables which stores the solution (V = {V1, V2, V3,....., Vn})
§ A set of discrete values known as domain from which the solution is picked
(D = {D1, D2, D3,.....,Dn})
§ A finite set of constraints (C = {C1, C2, C3,......, Cn})
§ In constraint satisfaction, domains are the spaces where the
variables reside, following the problem specific constraints.
§ These are the three main elements of a constraint
satisfaction technique.
§ The constraint value consists of a pair of {scope, rel}.
§ The scope is a tuple of variables which participate in the
constraint and rel is a relation which includes a list of
values which the variables can take to satisfy the
constraints of the problem.
§ Please note that the elements in the domain can be both
continuous and discrete, but in AI we generally deal
only with discrete values.
§ Also, note that all these sets should be finite except for the
domain set.
§ Each variable in the variable set can have different
domains.
Popular Problems with CSP
§ The following problems are some of the popular problems
that can be solved using CSP:
§ CryptArithmetic (Coding alphabets to numbers.)
§ n-Queen (in an n-queen problem, n queens should be placed
on an n×n board such that no two queens share the same row,
column or diagonal.)
§ Map Coloring (coloring different regions of map, ensuring no
adjacent regions have the same color)
§ Crossword (everyday puzzles appearing in newspapers)
§ Sudoku (a number grid)
Popular Problems with CSP: How to solve CryptArithmetic
An assignment of values to a variable can be done in three ways:
§ Consistent or Legal Assignment:
§ An assignment which does not violate any constraint or rule is
called Consistent or legal assignment.
§ Complete Assignment:
§ An assignment where every variable is assigned with a value,
and the solution to the CSP remains consistent.
§ Such assignment is known as Complete assignment.
§ Partial Assignment:
§ An assignment which assigns values to some of the
variables only.
§ Such type of assignments are called Partial assignments.
Backtracking Algorithm
• The backtracking algorithm is a depth-first search method used to
systematically explore possible solutions in CSPs.
• It operates by assigning values to variables and backtracks if any
assignment violates a constraint.
How it works:
• The algorithm selects a variable and assigns it a value.
• It recursively assigns values to subsequent variables.
• If a conflict arises (i.e., a variable cannot be assigned a valid value),
the algorithm backtracks to the previous variable and tries a different
value.
• The process continues until either a valid solution is found or all
possibilities have been exhausted.
• This method is widely used due to its simplicity but can be inefficient
for large problems with many variables.
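The assign-recurse-undo cycle described above can be sketched on the map-colouring CSP mentioned earlier; the four regions and their (assumed) adjacency are illustrative:

```python
def backtracking(assignment, variables, domains, neighbours):
    """Plain backtracking for map colouring: assign variables one at a
    time, undoing an assignment when it leads to a dead end."""
    if len(assignment) == len(variables):
        return assignment                     # complete, consistent assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # consistency check: no neighbour already has this colour
        if all(assignment.get(n) != value for n in neighbours[var]):
            assignment[var] = value
            result = backtracking(assignment, variables, domains, neighbours)
            if result:
                return result
            del assignment[var]               # conflict downstream: backtrack
    return None

# 3-colour a small map (hypothetical adjacency between four regions)
variables = ["WA", "NT", "SA", "Q"]
neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
              "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}
domains = {v: ["red", "green", "blue"] for v in variables}
solution = backtracking({}, variables, domains, neighbours)
print(solution)   # {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red'}
```

Each recursive call extends a partial assignment (in the earlier terminology) toward a complete, consistent one; `del assignment[var]` is the backtracking step itself.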
Constraint Propagation Algorithms
• Constraint propagation algorithms further reduce the search
space by enforcing local consistency across all variables.
How it works:
• Constraints are propagated between related variables.
• Inconsistent values are eliminated from variable domains by
leveraging information gained from other variables.
• These algorithms refine the search space by making inferences,
removing values that would lead to conflicts.
• Constraint propagation is commonly used in conjunction with
other CSP algorithms, such as backtracking, to increase efficiency
by narrowing down the solution space early in the search
process.
Min-conflicts algorithm
• It is a search algorithm used in computer science to
solve constraint satisfaction problems.
• The procedure randomly chooses a variable from the
collection of conflicted variables, i.e. variables breaking one or
more constraints of the constraint satisfaction problem.
• It then assigns that variable the value which produces the
minimal number of conflicts; if there are multiple values with a
minimal number of conflicts, it selects one at random.
• This random variable selection and min-conflict value
assignment are repeated until a solution is found or the
predetermined maximum number of iterations has been
reached.
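The loop above can be sketched on the 8-queens CSP mentioned earlier: one queen per column, with the row of each queen as the variable. The board size and step limit are assumptions:

```python
import random

def min_conflicts(n=8, max_steps=10000):
    """Min-conflicts for n-queens: repeatedly move a randomly chosen
    conflicted queen to the row that minimises its conflicts."""
    def conflicts(rows, col, row):
        # queens attack on the same row or on a shared diagonal
        return sum(1 for c in range(n) if c != col and
                   (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

    rows = [random.randrange(n) for _ in range(n)]    # random complete start
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
        if not conflicted:
            return rows                       # no constraint violated
        col = random.choice(conflicted)       # random conflicted variable
        counts = [conflicts(rows, col, r) for r in range(n)]
        best = min(counts)
        # tie-break at random among the minimum-conflict rows
        rows[col] = random.choice([r for r in range(n) if counts[r] == best])
    return None                               # step limit exhausted

random.seed(0)
solution = min_conflicts()
print(solution)
```

Note that, unlike backtracking, the search always works with a complete (but possibly inconsistent) assignment and only repairs it locally.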
Questions ?
Thank you !