
Unit 2
Constraint Satisfaction Problems
Introduction
• We have seen that problems can be solved by searching in a space
of states.
• These states can be evaluated by domain-specific heuristics and
tested to see whether they are goal states.
• In this method we are going to use a factored representation for
each state: a set of variables, each of which has a value.
• A problem is solved when each variable has a value that satisfies
all the constraints on the variable.
• A problem described this way is called a constraint satisfaction
problem, or CSP.
Example problem: Map coloring

• Consider a map of Australia showing each of its states and
territories.
• We are given the task of coloring each region either red, green, or
blue in such a way that no neighboring regions have the same
color.
• To formulate this as a CSP, we define the variables to be the
regions
• X = {WA, NT, Q, NSW, V, SA, T}.
• The domain of each variable is the set Di = {red, green, blue}.
• The constraints require neighboring regions to have distinct
colors.
Example problem: Map coloring

• Since there are nine places where regions border, there are nine
constraints :
• C = {SA ≠ WA, SA ≠ NT, SA ≠ Q, SA ≠ NSW, SA ≠ V, WA ≠ NT, NT ≠ Q,
Q ≠ NSW, NSW ≠ V}.
• There are many possible solutions to this problem, such as
• {WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = red}.
• It can be visualized as a constraint graph.
• The nodes of the graph correspond to variables of the problem, and
a link connects any two variables that participate in a constraint.
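• As a rough sketch (not from the original slides), the same CSP can be written down directly as Python data structures; the names variables, domains, and neighbors below are illustrative choices:

variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}

# Each pair is one of the nine binary "not equal" constraints; the list
# also doubles as the edge list of the constraint graph.
neighbors = [("SA", "WA"), ("SA", "NT"), ("SA", "Q"), ("SA", "NSW"),
             ("SA", "V"), ("WA", "NT"), ("NT", "Q"), ("Q", "NSW"),
             ("NSW", "V")]

def consistent(assignment):
    """True if no two neighboring regions in the (partial) assignment share a color."""
    return all(assignment[a] != assignment[b]
               for a, b in neighbors
               if a in assignment and b in assignment)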
Cryptarithmetic Puzzles
• A constraint involving an arbitrary number of variables is called a
global constraint.
• One of the most common global constraints is Alldiff , which says
that all of the variables involved in the constraint must have
different values.
• Each letter in a cryptarithmetic puzzle represents a different digit;
for example, consider the puzzle TWO + TWO = FOUR.
• This can be represented as the global constraint
Alldiff(F, T, U, W, R, O).
• These constraints can be represented in a constraint hypergraph.
• A hypergraph consists of ordinary nodes (the circles) and
hypernodes (the squares), which represent n-ary constraints.
Cryptarithmetic Puzzles
The addition constraints on the four columns of the puzzle
can be written as the following n-ary constraints:

O + O = R + 10 · C10
C10 + W + W = U + 10 · C100
C100 + T + T = O + 10 · C1000
C1000 = F

where C10, C100, and C1000 are auxiliary variables
representing the digit carried over into the tens, hundreds,
or thousands column.
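• The column constraints are easy to test mechanically. Below is a minimal brute-force sketch (an illustration, not the algorithm from the slides), assuming the puzzle is TWO + TWO = FOUR:

from itertools import permutations

def columns_ok(F, T, U, W, R, O, C10, C100, C1000):
    """The four column constraints with carry variables C10, C100, C1000."""
    return (O + O == R + 10 * C10
            and C10 + W + W == U + 10 * C100
            and C100 + T + T == O + 10 * C1000
            and C1000 == F)

def solve_two_two_four():
    # permutations enforces the Alldiff constraint on the six letters.
    for F, T, U, W, R, O in permutations(range(10), 6):
        if F == 0 or T == 0:                      # no leading zeros
            continue
        for C10 in (0, 1):
            for C100 in (0, 1):
                for C1000 in (0, 1):
                    if columns_ok(F, T, U, W, R, O, C10, C100, C1000):
                        return dict(F=F, T=T, U=U, W=W, R=R, O=O)

print(solve_two_two_four())                       # e.g. 734 + 734 = 1468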
Constraint Propagation: Inference in CSPs
• In CSPs an algorithm can search (choose a new variable
assignment from several possibilities) or do a specific type of
inference called constraint propagation.
• This reduces the number of legal values for a variable using
constraints, which in turn can reduce the legal values for another
variable, and so on.
• Constraint propagation may be intertwined with search, or it may
be done as a preprocessing step, before search starts.
Constraint Propagation: Inference in CSPs
• The key idea is local consistency, in which each variable is treated
as a node in a graph, and each binary constraint as an arc.
• If local consistency is enforced in each part of the graph, we can
eliminate the inconsistent values throughout the graph.
• There are different types of local consistency
Constraint Propagation: Inference in CSPs
• 1. Node consistency
• A single variable (corresponding to a node in the CSP network) is
node-consistent if all the values in the variable’s domain satisfy
the variable’s unary constraints.
• For example, in the variant of the Australia map-coloring problem
where South Australians dislike green, the variable SA starts with
domain {red , green, blue}, and we can make it node consistent by
eliminating green, leaving SA with the reduced domain {red ,
blue}.
• A network is node-consistent if every variable in the network is
node-consistent.
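• A minimal sketch of this filtering step (illustrative Python, not from the slides):

def make_node_consistent(domain, unary_ok):
    """Keep only the values that satisfy the variable's unary constraint."""
    return {value for value in domain if unary_ok(value)}

sa_domain = make_node_consistent({"red", "green", "blue"},
                                 lambda color: color != "green")
# sa_domain is now {"red", "blue"}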
Constraint Propagation: Inference in CSPs
• 2. Arc consistency
• A variable in a CSP is arc-consistent if every value in its domain
satisfies the variable’s binary constraints.
• More formally, Xi is arc-consistent with respect to another
variable Xj if for every value in the current domain Di there is
some value in the domain Dj that satisfies the binary constraint
on the arc (Xi,Xj).
Constraint Propagation: Inference in CSPs
Arc Consistency
• A network is arc-consistent if every variable is arc consistent with
every other variable.
• For example, consider the constraint Y = X², where the domain of
both X and Y is the set of decimal digits.
• We can write this constraint explicitly as
• ⟨(X, Y), {(0, 0), (1, 1), (2, 4), (3, 9)}⟩.
• To make X arc-consistent with respect to Y , we reduce X’s domain
to {0, 1, 2, 3}.
• If we also make Y arc-consistent with respect to X, then Y ’s
domain becomes {0, 1, 4, 9} and the whole CSP is arc-consistent.
Constraint Propagation: Inference in CSPs
Arc Consistency
• The most popular algorithm for arc consistency is called AC-3.
• To make every variable arc-consistent, the AC-3 algorithm
maintains a queue of arcs to consider.
• Initially, the queue contains all the arcs in the CSP. AC-3 then pops
off an arbitrary arc (Xi,Xj) from the queue and makes Xi arc-
consistent with respect to Xj .
• If this leaves Di unchanged, the algorithm just moves on to the
next arc.
• But if this revises Di (makes the domain smaller), then we add to
the queue all arcs (Xk,Xi) where Xk is a neighbor of Xi.
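• A compact sketch of AC-3 in Python. It assumes a csp object exposing domains (a dict of sets), arcs(), neighbors(), and constraint_ok(); these names are illustrative, not a fixed API:

from collections import deque

def revise(csp, Xi, Xj):
    """Delete values of Xi with no supporting value in Dj; report any change."""
    revised = False
    for x in set(csp.domains[Xi]):
        if not any(csp.constraint_ok(Xi, x, Xj, y) for y in csp.domains[Xj]):
            csp.domains[Xi].discard(x)
            revised = True
    return revised

def ac3(csp):
    """Enforce arc consistency; return False if some domain becomes empty."""
    queue = deque(csp.arcs())                 # initially, all arcs in the CSP
    while queue:
        Xi, Xj = queue.popleft()
        if revise(csp, Xi, Xj):
            if not csp.domains[Xi]:
                return False                  # inconsistency detected
            for Xk in csp.neighbors(Xi) - {Xj}:
                queue.append((Xk, Xi))        # Di shrank: re-check arcs into Xi
    return True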
Constraint Propagation: Inference in CSPs
3. Path consistency
• Arc consistency tightens the domains (unary constraints) using the
arcs (binary constraints).
• Path consistency tightens the binary constraints by using implicit
constraints that are inferred by looking at triples of variables.
• A two-variable set {Xi,Xj} is path-consistent with respect to a third
variable Xm if, for every assignment {Xi = a,Xj = b} consistent with
the constraints on {Xi,Xj}, there is an assignment to Xm that
satisfies the constraints on {Xi,Xm} and {Xm,Xj}.
• This is called path consistency because one can think of it as
looking at a path from Xi to Xj with Xm in the middle.
4. K-consistency
• A CSP is k-consistent if, for any set of k − 1 variables and for any
consistent assignment to those variables, a consistent value can
always be assigned to any kth variable.
• 1-consistency means that, given the empty set, we can make any set
of one variable consistent.
• This is also called node consistency.
• 2-consistency is the same as arc consistency.
• For binary constraint networks, 3-consistency is the same as path
consistency.
• A CSP is strongly k-consistent if it is k-consistent and is also (k − 1)-
consistent, (k − 2)-consistent, and so on down to 1-consistent.
Global Constraints
• A global constraint is one involving an arbitrary number of
variables.
• Global constraints occur frequently in real problems and can be
handled by special-purpose algorithms that are more efficient
than the general-purpose methods.
• For example, the Alldiff constraint says that all the variables
involved must have distinct values.
• One simple form of inconsistency detection for Alldiff constraints
works as follows:
• If m variables are involved in the constraint, and if they have n
possible distinct values altogether, and m > n, then the
constraint cannot be satisfied.
Global Constraints
• This leads to the following simple algorithm:
• First, remove any variable in the constraint that has a singleton
domain, and delete that variable’s value from the domains of the
remaining variables.
• Repeat as long as there are singleton variables.
• If at any point an empty domain is produced or there are more
variables than domain values left, then an inconsistency has been
detected.
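• The singleton-propagation test above translates almost line for line into code; a sketch (illustrative names, not from the slides):

def alldiff_feasible(variables, domains):
    """Repeatedly strip singleton values; fail on an empty domain or
    when the remaining variables outnumber the remaining values."""
    vars_left = list(variables)
    doms = {v: set(domains[v]) for v in vars_left}
    changed = True
    while changed:
        changed = False
        for v in list(vars_left):
            if len(doms[v]) == 1:
                (val,) = doms[v]
                vars_left.remove(v)            # v is decided
                for w in vars_left:
                    if val in doms[w]:
                        doms[w].discard(val)   # value is taken
                        changed = True
                        if not doms[w]:
                            return False       # empty domain: inconsistency
    remaining = set().union(*(doms[v] for v in vars_left)) if vars_left else set()
    return len(vars_left) <= len(remaining)    # m > n means unsatisfiable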
Sudoku example
• A Sudoku board consists of 81 squares, some of which are initially
filled with digits from 1 to 9.
• The puzzle is to fill in all the remaining squares such that no digit
appears twice in any row, column, or 3×3 box.
• A row, column, or box is called a unit.
• A Sudoku puzzle can be considered a CSP with 81 variables, one
for each square.
• The variable names are A1 through A9 for the top row (left to
right), down to I1 through I9 for the bottom row.
• The empty squares have the domain {1, 2, 3, 4, 5, 6, 7, 8, 9} and
the prefilled squares have a domain consisting of a single value.
Sudoku example
• In addition, there are 27 different Alldiff constraints: one for each
row, column, and box of 9 squares (a sketch enumerating these units
follows the list below).
• Alldiff (A1,A2,A3,A4,A5,A6, A7, A8, A9)
• Alldiff (B1,B2,B3,B4,B5,B6,B7,B8,B9)
• ・・・
• Alldiff (A1,B1,C1,D1,E1, F1,G1,H1, I1)
• Alldiff (A2,B2,C2,D2,E2, F2,G2,H2, I2)
• ・・・
• Alldiff (A1,A2,A3,B1,B2,B3,C1,C2,C3)
• Alldiff (A4,A5,A6,B4,B5,B6,C4,C5,C6)
• ・・・
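• A short sketch enumerating the 27 units in Python (illustrative, not from the slides):

rows, cols = "ABCDEFGHI", "123456789"
row_units = [[r + c for c in cols] for r in rows]
col_units = [[r + c for r in rows] for c in cols]
box_units = [[r + c for r in rs for c in cs]
             for rs in ("ABC", "DEF", "GHI")
             for cs in ("123", "456", "789")]
units = row_units + col_units + box_units      # one Alldiff per unit
assert len(units) == 27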
Sudoku example
• Consider variable E6, the empty square between the 2 and the 8 in the
middle box.
• From the constraints in the box, we can remove not only 2 and 8 but
also 1 and 7 from E6’s domain. From the constraints in its column, we
can eliminate 5, 6, 2, 8, 9, and 3. That leaves E6 with a domain of {4}; in
other words, we know the answer for E6.
• Now consider variable I6, the square in the bottom middle box
surrounded by 1, 3, and 3. Applying arc consistency in its column, we
eliminate 5, 6, 2, 4 (since we now know E6 must be 4), 8, 9, and 3.
• We eliminate 1 by arc consistency with I5 , and we are left with only the
value 7 in the domain of I6 .
• Now there are 8 known values in column 6, so arc consistency can infer
that A6 must be 1.
• Inference continues along these lines, and eventually, AC-3 can solve the
entire puzzle—all the variables have their domains reduced to a single
value.
Backtracking Search for CSPs
• Backtracking search, a form of depth-first search, is commonly
used for solving CSPs.
• Inference can be linked with search.
• Commutativity: CSPs are all commutative. A problem is
commutative if the order of application of any given set of actions
has no effect on the outcome.
• Backtracking search: A depth-first search that chooses values for
one variable at a time and backtracks when a variable has no legal
values left to assign.
Backtracking Search for CSPs
• Backtracking algorithm repeatedly chooses an unassigned
variable, and then tries all values in the domain of that variable in
turn, trying to find a solution.
• If an inconsistency is detected, then BACKTRACK returns failure,
causing the previous call to try another value.
• There is no need to supply BACKTRACKING-SEARCH with a
domain-specific initial state, action function, transition model, or
goal test.
• BACKTRACKING-SEARCH keeps only a single representation of a
state and alters that representation rather than creating new
ones.
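• A sketch of the algorithm in Python; select_unassigned_variable, order_domain_values, and csp.consistent are assumed helpers mirroring the book's pseudocode, not a particular library:

def backtracking_search(csp):
    return backtrack({}, csp)

def backtrack(assignment, csp):
    if len(assignment) == len(csp.variables):
        return assignment                      # every variable has a value
    var = select_unassigned_variable(assignment, csp)
    for value in order_domain_values(var, assignment, csp):
        if csp.consistent(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment, csp)
            if result is not None:
                return result
            del assignment[var]                # undo and try the next value
    return None                                # failure: trigger backtracking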
Backtracking Search for CSPs
• To solve CSPs efficiently without domain-specific knowledge,
address following questions:
• 1) function SELECT-UNASSIGNED-VARIABLE: which variable should be
assigned next? function ORDER-DOMAIN-VALUES: in what order should
its values be tried?
• 2) function INFERENCE: what inferences should be performed at each
step in the search?
• 3) When the search arrives at an assignment that violates a
constraint, can the search avoid repeating this failure?
Variable and value ordering
• The simplest strategy for SELECT-UNASSIGNED-VARIABLE is to
choose the next unassigned variable in order, {X1,X2, . . .}.
• This static variable ordering seldom results in the most efficient
search.
• The idea of choosing the variable with the fewest legal values is
called the minimum-remaining-values (MRV) heuristic.
• It is also called the “most constrained variable” or “fail-first”
heuristic.
Variable and value ordering
• It picks a variable that is most likely to cause a failure soon,
thereby pruning the search tree.
• If some variable X has no legal values left, the MRV heuristic
will select X and failure will be detected immediately—
avoiding pointless searches through other variables.
• E.g. After the assignment for WA=red and NT=green, there is
only one possible value for SA, so it makes sense to assign
SA=blue next rather than assigning Q.
• This limits the choices for Q,NSW and V after SA is assigned.
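• A one-function sketch of MRV, one possible implementation of the select_unassigned_variable helper assumed earlier (it presumes domains have already been pruned by inference, so their sizes reflect the remaining legal values):

def select_unassigned_variable(assignment, csp):
    """MRV: pick the unassigned variable with the fewest legal values left."""
    unassigned = [v for v in csp.variables if v not in assignment]
    return min(unassigned, key=lambda v: len(csp.domains[v]))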
Variable and value ordering
• Degree heuristic: The degree heuristic attempts to reduce the
branching factor on future choices by selecting the variable that is
involved in the largest number of constraints on other unassigned
variables. [useful tie-breaker]
• e.g. SA is the variable with highest degree 5; the other variables
have degree 2 or 3; T has degree 0.
Variable and value ordering
• Once a variable has been selected, the algorithm must decide on
the order in which to examine its values.
• For this, the least-constraining-value heuristic can be effective.
• It prefers the value that rules out the fewest choices for the
neighboring variables in the constraint graph.
• For a wide variety of problems, a variable ordering that chooses a
variable with the minimum number of remaining values helps
minimize the number of nodes in the search tree by pruning larger
parts of the tree earlier.
• For value ordering, it is better to try the most likely values first.
Interleaving Search and Inference
• INFERENCE
• Forward checking:
• This is one of the simplest forms of inference.
• Whenever a variable X is assigned, the forward-checking process
establishes arc consistency for it: for each unassigned variable Y
that is connected to X by a constraint, delete from Y’s domain any
value that is inconsistent with the value chosen for X.
• Advantage: For many problems the search will be more effective if
we combine the MRV heuristic with forward checking.
• Disadvantage: Forward checking only makes the current variable
arc-consistent, but doesn’t look ahead and make all the other
variables arc-consistent.
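• A sketch of forward checking, using the same illustrative csp interface as the AC-3 sketch; it returns its prunings so the caller can restore them on backtracking:

def forward_check(csp, X, x, assignment):
    """After assigning X = x, prune values of unassigned neighbors Y
    that are inconsistent with x."""
    pruned = []
    for Y in csp.neighbors(X):
        if Y not in assignment:
            for y in set(csp.domains[Y]):
                if not csp.constraint_ok(X, x, Y, y):
                    csp.domains[Y].discard(y)
                    pruned.append((Y, y))
            if not csp.domains[Y]:
                return pruned, False           # dead end detected early
    return pruned, True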
Interleaving Search and Inference
• MAC (Maintaining Arc Consistency) algorithm: This is more
powerful than forward checking, to detect the inconsistencies.
• After a variable Xi is assigned a value, the INFERENCE procedure
calls AC-3, but instead of a queue of all arcs in the CSP, we start
with only the arcs( Xj, Xi) for all Xj that are unassigned variables
and are neighbors of Xi.
• From there, AC-3 does constraint propagation in the usual way,
and if any variable has its domain reduced to the empty set, the
call to AC-3 fails and we need to backtrack immediately.
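• MAC is just AC-3 with a differently seeded queue; a sketch reusing revise() and deque from the AC-3 sketch above:

def mac(csp, Xi, assignment):
    """AC-3 seeded with only the arcs (Xj, Xi) for unassigned neighbors Xj."""
    queue = deque((Xj, Xi) for Xj in csp.neighbors(Xi) if Xj not in assignment)
    while queue:
        Xk, Xl = queue.popleft()
        if revise(csp, Xk, Xl):
            if not csp.domains[Xk]:
                return False                   # empty domain: backtrack now
            for Xm in csp.neighbors(Xk) - {Xl}:
                queue.append((Xm, Xk))
    return True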
Intelligent Backtracking
• The BACKTRACKING-SEARCH algorithm has a very simple policy
for what to do when a branch of the search fails: back up to the
preceding variable and try a different value for it.
• This is called chronological backtracking because the most recent
decision point is revisited.
Intelligent Backtracking
• For example,
• Suppose we have generated the partial assignment {Q=red,
NSW=green, V=blue, T=red}.
• When we try the next variable SA, we see every value violates a
constraint.
• We back up to T and try a new color; this cannot resolve the
problem.
Intelligent Backtracking
• A more intelligent approach to backtracking is to backtrack to a
variable that might fix the problem.
• Intelligent backtracking: Backtrack to a variable that was
responsible for making one of the possible values of the next
variable impossible (e.g. SA).
• Conflict set for a variable: A set of assignments that are in conflict
with some value for that variable.
• (e.g. The set {Q=red, NSW=green, V=blue} is the conflict set for
SA.)
• Backjumping method: Backtracks to the most recent assignment
in the conflict set.
• (e.g. backjumping would jump over T and try a new value for V.)
Intelligent Backtracking
• Forward checking can supply the conflict set with no extra work,
whenever forward checking based on an assignment X =x deletes
a value from Y ’s domain, it should add X =x to Y ’s conflict set.
• If the last value is deleted from Y ’s domain, then the assignments
in the conflict set of Y are added to the conflict set of X.
• Then, when we get to Y, we know immediately where to
backtrack if needed.
Intelligent Backtracking
• Backjumping occurs when every value in a domain is in conflict
with the current assignment.
• But forward checking detects this event and prevents the search
from reaching such a node.
• In fact, it can be shown that every branch pruned by backjumping
is also pruned by forward checking.
• Hence, simple backjumping is redundant in a forward-checking
search.
Intelligent Backtracking
• The concept of backjumping is a good one, but backjumping
notices failure when a variable’s domain becomes empty.
• e.g.
• consider the partial assignment which is proved to be
inconsistent: {WA=red, NSW=red}.
• We try T=red next and then assign NT, Q, V, SA; no assignment can
work for these last four variables.
• Eventually we run out of values to try at NT, but
simple backjumping cannot help, because NT doesn’t have a
complete conflict set of preceding variables that caused it to fail.
Intelligent Backtracking
• The set {WA, NSW} is a conflict set for NT: it causes NT, together
with any subsequent variables, to have no consistent solution.
• So the algorithm should backtrack to NSW and skip over T.
• A backjumping algorithm that uses conflict sets defined in this
way is called conflict-directed backjumping.
Intelligent Backtracking
• How to Compute:
• When a variable’s domain becomes empty, a terminal failure
occurs; that variable has a standard conflict set.
• Let Xj be the current variable and let conf(Xj) be its conflict set. If
every possible value for Xj fails, backjump to the most recent
variable Xi in conf(Xj), and set
• conf(Xi) ← conf(Xi) ∪ conf(Xj) − {Xi}.
• The conflict set for a variable means there is no solution from
that variable onwards, given the preceding assignment to the
conflict set.
Intelligent Backtracking
• For example:
• Assign WA, NSW, T, NT, Q, V, SA.
• SA fails, and its conflict set is {WA, NT, Q}. (standard conflict set)
• Backjump to Q; its conflict set becomes
{NT, NSW} ∪ {WA, NT, Q} − {Q} = {WA, NT, NSW}.
• Backtrack to NT; its conflict set becomes
{WA} ∪ {WA, NT, NSW} − {NT} = {WA, NSW}.
• Hence the algorithm backjumps to NSW (skipping over T).
Intelligent Backtracking
• After backjumping from a contradiction, we can use constraint
learning to avoid running into the same problem again.
• Constraint learning finds a minimum set of variables from the
conflict set that causes the problem.
• This set of variables, along with their corresponding values, is
called a no-good.
• We then record the no-good, either by adding a new constraint to
the CSP or by keeping a separate cache of no-goods.
Local Search for CSPs
• Local search algorithms turn out to be effective in solving many
CSPs.
• They use a complete-state formulation: the initial state assigns a
value to every variable, and the search changes the value of one
variable at a time.
• For example, in the 8-queens problem, the initial state might be a
random configuration of 8 queens in 8 columns, and each step
moves a single queen to a new position in its column.
• The initial guess violates several constraints.
• The point of local search is to eliminate the violated (disputed)
constraints.
• In choosing a new value for a variable, select the value that results
in the minimum number of conflicts with other variables—the
min-conflicts heuristic.
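• A sketch of min-conflicts local search; csp.num_conflicts is an assumed helper counting the constraints violated by var = val given the rest of the assignment:

import random

def min_conflicts(csp, max_steps=10000):
    """Start from a random complete assignment; repeatedly move one
    conflicted variable to its minimum-conflict value."""
    current = {v: random.choice(sorted(csp.domains[v])) for v in csp.variables}
    for _ in range(max_steps):
        conflicted = [v for v in csp.variables
                      if csp.num_conflicts(v, current[v], current) > 0]
        if not conflicted:
            return current                     # no violated constraints: done
        var = random.choice(conflicted)
        current[var] = min(csp.domains[var],
                           key=lambda val: csp.num_conflicts(var, val, current))
    return None                                # give up after max_steps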
Local Search for CSPs
• Another technique, called constraint weighting, can help
concentrate the search on the important constraints.
• Each constraint is given a numeric weight, Wi, initially all 1.
• At each step of the search, the algorithm chooses a variable/value
pair to change that will result in the lowest total weight of all
violated constraints.
• The weights are then adjusted by incrementing the weight of each
constraint that is violated by the current assignment.
Local Search for CSPs
• Another advantage of local search is that it can be used in an
online setting when the problem changes.
• This is particularly important in scheduling problems.
• A week’s airline schedule may involve thousands of flights and
tens of thousands of personnel assignments, but bad weather at
one airport can make the schedule infeasible.
• We would like to repair the schedule with a minimum number of
changes.
• This can be easily done with a local search.
The Structure of Problem
• The structure of the problem as represented by the constraint
graph can be used to find solution quickly.
• The only way to deal with the real world is to decompose it into
many subproblems.
• e.g. The problem can be decomposed into two independent
subproblems: Coloring T and coloring the mainland.
• Independence can be established by finding connected
components of the constraint graph.
• Each component corresponds to a subproblem CSPi.
• If assignment Si is a solution of CSPi, then ∪i Si is a solution of
∪i CSPi.
The Structure of Problem
• Constraint graph is a tree when any two variables are connected
by only one path.
• A CSP is defined to be directed arc-consistent under an ordering of
variables X1, X2, …, Xn if and only if every Xi is arc-consistent with
each Xj for j > i. This property is called directed arc consistency (DAC).
The Structure of Problem
• How to solve a tree-structure CSP:
• Pick any variable to be the root of the tree.
• Choose an ordering of the variables such that each variable appears after
its parent in the tree; such an ordering is called a topological sort.
• Any tree with n nodes has n − 1 arcs, so we can make this graph directed
arc-consistent in O(n) steps, each of which must compare up to d
possible domain values for two variables, for a total time of O(nd^2).
• Once we have a directed arc-consistent graph, we can just check the list
of variables and choose any remaining value.
• Since each link from a parent to its child is arc consistent, we won’t have
to backtrack, and can move linearly through the variables.
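• A sketch of the whole procedure; topological_sort and parent are assumed helpers, and revise is the one from the AC-3 sketch:

def tree_csp_solver(csp, root):
    """O(nd^2) solver for tree-structured CSPs: enforce directed arc
    consistency from the leaves up, then assign from the root down."""
    order = csp.topological_sort(root)         # each variable after its parent
    for Xj in reversed(order[1:]):             # children before parents
        revise(csp, csp.parent(Xj), Xj)        # make parent consistent w.r.t. child
        if not csp.domains[csp.parent(Xj)]:
            return None                        # empty domain: no solution
    assignment = {}
    for Xi in order:                           # now assign without backtracking
        p = csp.parent(Xi)                     # None for the root
        legal = [x for x in csp.domains[Xi]
                 if p is None or csp.constraint_ok(p, assignment[p], Xi, x)]
        if not legal:
            return None
        assignment[Xi] = legal[0]              # any remaining value works
    return assignment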
The Structure of Problem
• There are two primary ways to reduce more general constraint graphs to
trees:
• 1. Based on removing nodes:
• e.g., we can delete SA from the graph by fixing a value for SA and
deleting from the domains of the other variables any values that are
inconsistent with the value chosen for SA.
The Structure of Problem
• The general algorithm:
• Choose a subset S of the CSP’s variables such that the constraint
graph becomes a tree after removal of S. S is called a cycle cutset.
• For each possible assignment to the variables in S that satisfies all
constraints on S,
• (a) remove from the domain of the remaining variables any values
that are inconsistent with the assignment for S, and
• (b) If the remaining CSP has a solution, return it together with the
assignment for S.
• Time complexity: O(d^c · (n − c)d^2), where c is the size of the cycle cutset.
• Cutset conditioning: the name for this overall algorithmic approach.
Finding the smallest cycle cutset is NP-hard, but efficient
approximation algorithms are known.
The Structure of Problem
• 2. Based on collapsing nodes together
• Tree decomposition: construct a tree decomposition of the constraint
graph into a set of connected subproblems, each subproblem is solved
independently, and the resulting solutions are then combined.
The Structure of Problem
• A tree decomposition must satisfy three requirements:
• Every variable in the original problem appears in at least one of
the subproblems.
• If two variables are connected by a constraint in the original
problem, they must appear together (along with the constraint) in
at least one of the subproblems.
• If a variable appears in two subproblems in the tree, it must
appear in every subproblem along the path connecting those
subproblems.
The Structure of Problem
• Each subproblem can be solved independently.
• If any one has no solution, the entire problem has no solution.
• If all the subproblems can be solved, then a global solution can be
constructed as follows. First, each subproblem can be viewed as a
mega-variable whose domain is the set of all solutions for the
subproblem.
• A given constraint graph admits many tree decompositions; in
choosing a decomposition, the aim is to make the subproblems as
small as possible.
• The tree width of a tree decomposition of a graph is one less than
the size of the largest subproblem.
• The tree width of the graph itself is the minimum tree width
among all its tree decompositions.
• Time complexity: O(nd^(w+1)), where w is the tree width of the graph.
