AI Unit-II Chapter-II Constraint Satisfaction Problems
UNIT - II
We have seen many techniques, such as local search and adversarial search, to solve different problems.
The objective of every problem-solving technique is to find a solution that reaches the goal. In
adversarial search and local search, however, there were no constraints on the agents while solving the
problems and reaching their solutions.
This section focuses on another type of problem-solving technique known as the constraint satisfaction
technique. As the name suggests, constraint satisfaction means solving a problem under
certain constraints or rules.
Constraint satisfaction is a technique where a problem is solved when its values satisfy certain
constraints or rules of the problem. This type of technique leads to a deeper understanding of the
problem structure as well as its complexity.
A constraint satisfaction problem (CSP) is a problem whose solution must satisfy certain
limitations or conditions, also known as constraints.
CSPs represent a state with a set of variable/value pairs and represent the conditions for a
solution by a set of constraints on the variables. Many important real-world problems
can be described as CSPs.
CSP (constraint satisfaction problem): CSPs use a factored representation for each state (a set of
variables, each of which has a value). A problem is a CSP when it is solved by giving each variable
a value that satisfies all the constraints on that variable.
It consists of the following:
o X: A finite set of variables which stores the solution (X = {X1, X2, X3, ....., Xn}).
o D: A set of domains from which the values of the variables are picked (D = {D1, D2,
D3, ....., Dn}). Each domain Di consists of a set of allowable values {v1, ..., vk} for variable Xi.
o C: A set of constraints that specify the allowable combinations of values for the variables.
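As a concrete illustration, the following is a minimal Python sketch of this X/D/C representation, using the Australia map-colouring problem discussed later in this chapter (the variable names and data layout are illustrative assumptions, not a fixed API):

# A minimal CSP representation: variables X, domains D and binary constraints C,
# illustrated with the Australia map-colouring problem.
variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]

# Each region may take any of the three colours.
domains = {v: {"red", "green", "blue"} for v in variables}

# Binary constraints: neighbouring regions must receive different colours.
neighbours = [
    ("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
    ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V"),
]

def satisfies(assignment):
    """Check that every binary constraint holds for a complete assignment."""
    return all(assignment[a] != assignment[b] for a, b in neighbours)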
Some special types of solution algorithms are used to solve the following types of constraints:
Linear Constraints: These types of constraints are commonly used in linear programming where
each variable containing an integer value exists in linear form only.
Non-linear Constraints: These types of constraints are used in non-linear programming where each
variable (an integer value) exists in a non-linear form.
Note: A special kind of constraint that often appears in real-world problems is the preference constraint, which indicates which solutions are preferred rather than strictly required.
CONSTRAINT PROPAGATION
In regular state-space search there is only one choice, i.e., to search for a solution. But in a CSP, we have two
choices, either:
We can search for a solution or
We can perform a special type of inference called constraint propagation.
A number of inference techniques use the constraints to infer which variable/value pairs are
consistent and which are not. These include node, arc, path, and k-consistency.
constraint propagation: Using the constraints to reduce the number of legal values for a variable,
which in turn can reduce the legal values for another variable, and so on.
local consistency: If we treat each variable as a node in a graph and each binary constraint as an arc,
then the process of enforcing local consistency in each part of the graph causes inconsistent values to
be eliminated throughout the graph.
There are different types of local consistency:
Node Consistency: A single variable is said to be node consistent if all the values in the variable’s
domain satisfy the unary constraints. A network is node-consistent if every variable in the network is
node-consistent.
Arc Consistency: A variable is arc consistent if every value in its domain satisfies the binary
constraints. Xi is arc-consistent with respect to another variable Xj if for every value in the current
domain Di there is some value in the domain Dj that satisfies the binary constraint on the arc (Xi,
Xj). A network is arc-consistent if every variable is arc-consistent with every other variable.
Path Consistency: A two-variable set {Xi, Xj} is path-consistent with respect to a third variable Xm
if, for every assignment {Xi = a, Xj = b} consistent with the constraint on {Xi, Xj}, there is an
assignment to Xm that satisfies the constraints on {Xi, Xm} and {Xm, Xj}.
k-consistency: A CSP is k-consistent if, for any set of k-1 variables and for any consistent
assignment to those variables, a consistent value can always be assigned to any kth variable.
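To make arc consistency concrete, here is a minimal sketch of an AC-3 style algorithm in Python; the data structures (a domains dict of sets and a constraints dict mapping each directed arc to its set of allowed value pairs) are assumptions chosen for illustration:

from collections import deque

def revise(domains, constraints, xi, xj):
    """Remove values from Di that have no supporting value in Dj."""
    revised = False
    allowed = constraints[(xi, xj)]          # set of allowed (vi, vj) pairs
    for vi in set(domains[xi]):
        if not any((vi, vj) in allowed for vj in domains[xj]):
            domains[xi].discard(vi)
            revised = True
    return revised

def ac3(domains, constraints):
    """Enforce arc consistency; return False if some domain becomes empty.
    constraints is assumed to contain both directions of every arc."""
    queue = deque(constraints.keys())        # all arcs (xi, xj)
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraints, xi, xj):
            if not domains[xi]:
                return False
            # Re-examine arcs pointing at xi, since its domain just shrank.
            for (xk, xl) in constraints:
                if xl == xi and xk != xj:
                    queue.append((xk, xi))
    return True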
GLOBAL CONSTRAINTS
A global constraint is one involving an arbitrary number of variables (but not necessarily all
variables). Global constraints can be handled by special-purpose algorithms that are more efficient
than general-purpose methods.
1) Inconsistency detection for Alldiff constraints
A simple algorithm: First remove any variable in the constraint that has a singleton domain, and
delete that variable’s value from the domains of the remaining variables. Repeat as long as there
are singleton variables. If at any point an empty domain is produced or there are more variables
than domain values left, then an inconsistency has been detected.
A simple consistency procedure for a higher-order constraint is sometimes more effective than
applying arc consistency to an equivalent set of binary constraints.
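As a rough illustration, here is a minimal Python sketch of this singleton-propagation check for an Alldiff constraint (the function name and data layout are assumptions made for this example):

def alldiff_inconsistent(domains):
    """Detect inconsistency in an Alldiff constraint by singleton propagation.
    domains maps each variable in the constraint to a set of values."""
    domains = {v: set(d) for v, d in domains.items()}    # work on a copy
    while True:
        singletons = [v for v, d in domains.items() if len(d) == 1]
        if not singletons:
            break
        var = singletons[0]
        value = next(iter(domains.pop(var)))             # remove singleton variable
        for dom in domains.values():                     # delete its value elsewhere
            dom.discard(value)
        if any(len(d) == 0 for d in domains.values()):
            return True                                  # empty domain: inconsistent
    # Pigeonhole check: more variables than remaining distinct values.
    remaining_values = set().union(*domains.values()) if domains else set()
    return len(domains) > len(remaining_values)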
2) Inconsistency detection for resource constraint (the atmost constraint)
We can detect an inconsistency simply by checking the sum of the minimum values of the current
domains;
e.g. Atmost(10, P1, P2, P3, P4): no more than 10 personnel are assigned in total.
If each variable has the domain {3, 4, 5, 6}, the Atmost constraint cannot be satisfied.
We can enforce consistency by deleting the maximum value of any domain if it is not consistent
with the minimum values of the other domains.
e.g. If each variable in the example has the domain {2, 3, 4, 5, 6}, the values 5 and 6 can be
deleted from each domain.
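A small Python sketch of this check and the corresponding pruning for an Atmost resource constraint (the function names are illustrative assumptions):

def atmost_inconsistent(limit, domains):
    """Inconsistent if even the minimum possible total exceeds the limit."""
    return sum(min(d) for d in domains.values()) > limit

def atmost_prune(limit, domains):
    """Delete any value that cannot be consistent with the other domains' minimums."""
    mins = {v: min(d) for v, d in domains.items()}
    for var, dom in domains.items():
        others_min = sum(m for v, m in mins.items() if v != var)
        domains[var] = {val for val in dom if val + others_min <= limit}
    return domains

# Example from the text: Atmost(10, P1, P2, P3, P4)
doms = {f"P{i}": {2, 3, 4, 5, 6} for i in range(1, 5)}
print(atmost_inconsistent(10, doms))   # False: 2+2+2+2 = 8 <= 10
print(atmost_prune(10, doms))          # 5 and 6 removed: the other minimums sum to 6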
3) Bounds propagation for bounds consistency
For large resource-limited problems with integer values, domains are represented by upper and
lower bounds and are managed by bounds propagation.
e.g. suppose there are two flights F1 and F2 in an airline-scheduling problem, for which the
planes have capacities 165 and 385, respectively. The initial domains for the numbers of
passengers on each flight are D1 = [0, 165] and D2 = [0, 385].
Now suppose we have the additional constraint that the two flights together must carry 420
people: F1 + F2 = 420. Propagating the bounds constraints, we reduce the domains to D1 = [35, 165]
and D2 = [255, 385].
A CSP is bounds consistent if for every variable X, and for both the lower-bound and upper-
bound values of X, there exists some value of Y that satisfies the constraint between X and Y for
every variable Y.
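A minimal sketch of this bounds propagation for a constraint of the form X + Y = total, applied to the flight example (the helper function is an illustrative assumption):

def propagate_sum_bounds(d1, d2, total):
    """Tighten the [lo, hi] bounds of two variables constrained by X + Y = total."""
    lo1, hi1 = d1
    lo2, hi2 = d2
    # X = total - Y, so X's bounds are limited by Y's bounds, and vice versa.
    new_d1 = (max(lo1, total - hi2), min(hi1, total - lo2))
    new_d2 = (max(lo2, total - hi1), min(hi2, total - lo1))
    return new_d1, new_d2

# Example from the text: capacities 165 and 385, F1 + F2 = 420.
print(propagate_sum_bounds((0, 165), (0, 385), 420))
# -> ((35, 165), (255, 385))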
SELECT-UNASSIGNED-VARIABLE
Variable selection—fail-first
Minimum-remaining-values (MRV) heuristic: The idea of choosing the variable with the fewest
“legal” values. Also known as the “most constrained variable” or “fail-first” heuristic, it picks a variable that is
most likely to cause a failure soon, thereby pruning the search tree. If some variable X has no legal
values left, the MRV heuristic will select X and failure will be detected immediately—avoiding
pointless searches through other variables.
e.g. After the assignment for WA=red and NT=green, there is only one possible value for SA, so it
makes sense to assign SA=blue next rather than assigning Q.
Degree heuristic: The degree heuristic attempts to reduce the branching factor on future choices by
selecting the variable that is involved in the largest number of constraints on other unassigned
variables. [useful tie-breaker]
e.g. SA is the variable with highest degree 5; the other variables have degree 2 or 3; T has degree 0.
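A minimal Python sketch of variable selection using MRV with the degree heuristic as a tie-breaker (the data layout, a domains dict and a neighbours dict, is an illustrative assumption):

def select_unassigned_variable(assignment, domains, neighbours):
    """Pick the unassigned variable with the fewest legal values (MRV);
    break ties with the degree heuristic (most constraints on unassigned variables)."""
    unassigned = [v for v in domains if v not in assignment]
    return min(
        unassigned,
        key=lambda v: (
            len(domains[v]),                                        # MRV
            -sum(1 for n in neighbours[v] if n not in assignment),  # degree tie-break
        ),
    )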
ORDER-DOMAIN-VALUES
Value selection—fail-last
If we are trying to find all the solutions to a problem (not just the first one), then the ordering does not
matter.
Least-constraining-value heuristic: prefers the value that rules out the fewest choices for the
neighboring variables in the constraint graph. (Try to leave the maximum flexibility for
subsequent variable assignments.)
e.g. We have generated the partial assignment with WA=red and NT=green and our next choice
is for Q. Blue would be a bad choice because it eliminates the last legal value left for Q’s neighbor,
SA; the least-constraining-value heuristic therefore prefers red to blue.
The minimum-remaining-values and degree heuristics are domain-independent methods for
deciding which variable to choose next in a backtracking search. The least-constraining-
value heuristic helps in deciding which value to try first for a given variable.
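A minimal sketch of least-constraining-value ordering, assuming not-equal constraints between neighbours as in map colouring (the helper name is an illustrative assumption):

def order_domain_values(var, assignment, domains, neighbours):
    """Order var's values so that the value ruling out the fewest choices
    for neighbouring unassigned variables is tried first (LCV)."""
    def conflicts(value):
        # Count how many neighbouring choices this value would rule out.
        return sum(
            1
            for n in neighbours[var]
            if n not in assignment and value in domains[n]
        )
    return sorted(domains[var], key=conflicts)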
INFERENCE
forward checking: [One of the simplest forms of inference.] Whenever a variable X is assigned, the
forward-checking process establishes arc consistency for it: for each unassigned variable Y that is
connected to X by a constraint, delete from Y’s domain any value that is inconsistent with the value
chosen for X.
There is no reason to do forward checking if we have already done arc consistency as a pre-
processing step.
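A minimal sketch of forward checking after an assignment var = value, again assuming not-equal constraints between neighbouring variables:

def forward_check(var, value, assignment, domains, neighbours):
    """After assigning var = value, delete inconsistent values from the
    domains of unassigned neighbours (assuming not-equal constraints).
    Returns the list of pruned (variable, value) pairs, or None if some
    domain becomes empty."""
    pruned = []
    for n in neighbours[var]:
        if n not in assignment and value in domains[n]:
            domains[n].discard(value)
            pruned.append((n, value))
            if not domains[n]:
                return None          # dead end: the caller restores `pruned` and backtracks
    return pruned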
3. Intelligent backtracking
Forward checking can supply the conflict set with no extra work.
Whenever forward checking based on an assignment X=x deletes a value from Y’s domain, add X=x
to Y’s conflict set;
If the last value is deleted from Y’s domain, the assignments in the conflict set of Y are added to the
conflict set of X.
In fact, every branch pruned by back-jumping is also pruned by forward checking. Hence simple
back-jumping is redundant in a forward-checking search or in a search that uses stronger consistency
checking (such as MAC).
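A minimal sketch of how the conflict sets could be maintained during forward checking (the data structures and names are illustrative assumptions):

def forward_check_with_conflicts(var, value, assignment, domains,
                                 neighbours, conflict_set):
    """Forward check var = value while recording conflict sets.
    conflict_set maps each variable to the set of assigned variables whose
    values removed something from its domain (each entry starts as an empty set)."""
    for n in neighbours[var]:
        if n not in assignment and value in domains[n]:
            domains[n].discard(value)
            conflict_set[n].add(var)              # var's assignment pruned n's domain
            if not domains[n]:
                # n has no values left: absorb its conflict set into var's, so
                # back-jumping knows which earlier assignments to revisit.
                conflict_set[var] |= conflict_set[n] - {var}
                return False
    return True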
e.g. We can delete SA from the graph by fixing a value for SA and deleting from the
domains of other variables any values that are inconsistent with the value chosen for SA.
The general algorithm:
Choose a subset S of the CSP’s variables such that the constraint graph becomes a tree
after removal of S. S is called a cycle cutset.
For each possible assignment to the variables in S that satisfies all constraints on S,
(a) remove from the domains of the remaining variables any values that are inconsistent with the
assignment for S, and (b) if the remaining (tree-structured) CSP has a solution, return it together
with the assignment for S.
The complexity of solving a CSP is strongly related to the structure of its constraint graph. Tree-
structured problems can be solved in linear time. Cutset conditioning can reduce a general CSP to a
tree-structured one and is quite efficient if a small cutset can be found. Tree
decomposition techniques transform the CSP into a tree of subproblems and are efficient if the tree
width of the constraint graph is small.
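To illustrate the general algorithm above, here is a minimal sketch of cutset conditioning; a plain backtracking search stands in for the linear-time tree-CSP solver, so the sketch shows the conditioning structure rather than the efficiency gain, and all names are illustrative assumptions:

from itertools import product

def conflicts(assignment, neighbours):
    """True if two assigned neighbouring variables share a value."""
    return any(
        a in assignment and b in assignment and assignment[a] == assignment[b]
        for a, b in neighbours
    )

def cutset_conditioning(cutset, variables, domains, neighbours):
    """Try every consistent assignment to the cutset and, conditioned on it,
    solve the residual problem (tree-structured once the cutset is removed)."""
    rest = [v for v in variables if v not in cutset]

    def solve_rest(i, assignment):
        # Simple backtracking over the residual variables; a real implementation
        # would exploit the tree structure for a linear-time solve.
        if i == len(rest):
            return dict(assignment)
        var = rest[i]
        for value in domains[var]:
            assignment[var] = value
            if not conflicts(assignment, neighbours):
                result = solve_rest(i + 1, assignment)
                if result is not None:
                    return result
            del assignment[var]
        return None

    for values in product(*(domains[v] for v in cutset)):
        assignment = dict(zip(cutset, values))
        if conflicts(assignment, neighbours):
            continue                      # cutset assignment violates a constraint
        result = solve_rest(0, assignment)
        if result is not None:
            return result
    return None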
2. The structure in the values of variables
By introducing a symmetry-breaking constraint, we can break the value symmetry and reduce the
search space by a factor of n!.
e.g. Consider the map-colouring problem with n colours: for every consistent solution, there is
actually a set of n! solutions formed by permuting the colour names. (value symmetry)
On the Australia map, WA, NT and SA must all have different colours, so there are 3! = 6 ways to
assign the three colours to them.
We can impose an arbitrary ordering constraint NT < SA < WA that requires the 3 values to be in
alphabetical order. This constraint ensures that only one of the n! solutions is possible: {NT=blue,
SA=green, WA=red}. (symmetry-breaking constraint)
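A tiny sketch that counts the solutions for this three-variable example with and without the symmetry-breaking ordering constraint (the function and variable names are illustrative):

from itertools import product

colours = ["blue", "green", "red"]
regions = ["NT", "SA", "WA"]

def count_solutions(symmetry_breaking):
    count = 0
    for assignment in product(colours, repeat=3):
        values = dict(zip(regions, assignment))
        if len(set(values.values())) != 3:        # WA, NT and SA must all differ
            continue
        if symmetry_breaking and not (values["NT"] < values["SA"] < values["WA"]):
            continue                              # enforce NT < SA < WA alphabetically
        count += 1
    return count

print(count_solutions(False))   # 3! = 6 symmetric solutions
print(count_solutions(True))    # 1 solution: NT=blue, SA=green, WA=red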