
Constraint Satisfaction Problems

Constraint satisfaction problems (CSPs)

• Standard search problem: state is a "black box" – any data
structure that supports a successor function and a goal test

• CSP:
– state is defined by variables Xi with values from domain Di
– goal test is a set of constraints specifying allowable combinations
of values for subsets of variables

• Simple example of a formal representation language

• Allows useful general-purpose algorithms with more power than
standard search algorithms
Example: Map-Coloring

• Variables: WA, NT, Q, NSW, V, SA, T
• Domains: Di = {red, green, blue}
• Constraints: adjacent regions must have different colors
– e.g., WA ≠ NT, or (WA,NT) in {(red,green), (red,blue),
(green,red), (green,blue), (blue,red), (blue,green)}
Example: Map-Coloring

• Solutions are complete and consistent assignments
• e.g., WA = red, NT = green, Q = red, NSW = green,
V = red, SA = blue, T = green

Constraint graph
• Binary CSP: each constraint relates two variables
• Constraint graph: nodes are variables, arcs are constraints
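
As a concrete illustration, the map-coloring CSP and its constraint graph can be written out directly in code. The Python sketch below is our own (not from the slides); the neighbor lists double as the constraint graph, and the example solution above is checked for consistency.

# Map-coloring CSP for the Australia example (illustrative sketch).
variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}

# Constraint graph: one arc per pair of adjacent regions.
neighbors = {
    "WA":  ["NT", "SA"],
    "NT":  ["WA", "SA", "Q"],
    "Q":   ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"],
    "V":   ["SA", "NSW"],
    "SA":  ["WA", "NT", "Q", "NSW", "V"],
    "T":   [],
}

def consistent(assignment):
    """True iff no two adjacent, assigned regions share a color."""
    return all(assignment[x] != assignment[y]
               for x in assignment for y in neighbors[x] if y in assignment)

solution = {"WA": "red", "NT": "green", "Q": "red",
            "NSW": "green", "V": "red", "SA": "blue", "T": "green"}
assert consistent(solution) and len(solution) == len(variables)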
Varieties of CSPs
• Discrete variables
– finite domains:
• n variables, domain size d → O(d^n) complete assignments
• e.g., Boolean CSPs, incl. Boolean satisfiability (NP-complete)
– infinite domains:
• integers, strings, etc.
• e.g., job scheduling, variables are start/end days for each job
• need a constraint language, e.g., StartJob1 + 5 ≤ StartJob3

• Continuous variables
– e.g., start/end times for Hubble Space Telescope observations
– linear constraints solvable in polynomial time by LP
Varieties of constraints
• Unary constraints involve a single variable,
– e.g., SA ≠ green

• Binary constraints involve pairs of variables,
– e.g., SA ≠ WA

• Higher-order constraints involve 3 or more variables,
– e.g., cryptarithmetic column constraints
Backtracking search
• Variable assignments are commutative, i.e.,
[ WA = red then NT = green ] same as [ NT = green then WA = red ]

• => Only need to consider assignments to a single variable at
each node

• Depth-first search for CSPs with single-variable assignments
is called backtracking search

• Can solve n-queens for n ≈ 25
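
A minimal backtracking search over the representation sketched earlier might look as follows (our own illustrative code): one variable is assigned per node, in a fixed order, and the search backtracks as soon as the partial assignment violates a constraint.

def backtracking_search(variables, domains, consistent, assignment=None):
    """Depth-first search with single-variable assignments (backtracking).
    Returns a complete consistent assignment, or None if none exists."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)  # fixed order
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtracking_search(variables, domains, consistent, assignment)
            if result is not None:
                return result
        del assignment[var]  # undo and try the next value
    return None

For the map-coloring sketch above, backtracking_search(variables, domains, consistent) returns one valid coloring.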



Backtracking example
Improving backtracking efficiency

• General-purpose methods can give huge gains in speed:
– Which variable should be assigned next?
– In what order should its values be tried?
– Can we detect inevitable failure early?
Most constrained variable
• Most constrained variable:
choose the variable with the fewest legal values

• a.k.a. minimum remaining values (MRV) heuristic

Most constraining variable
• Most constraining variable:
– choose the variable with the most constraints on
remaining variables
• A good idea is to use it as a tie-breaker
among most constrained variables

Least constraining value
• Given a variable to assign, choose the least
constraining value:
– the one that rules out the fewest values in the
remaining variables

• Combining these heuristics makes 1000 queens feasible
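
These three heuristics can be layered onto the backtracking sketch above; the helper names below are our own, and the code assumes the domains/neighbors dictionaries from the map-coloring example.

def legal_values(var, assignment, domains, neighbors):
    """Values of var that do not conflict with already-assigned neighbors."""
    return [v for v in domains[var]
            if all(assignment.get(n) != v for n in neighbors[var])]

def select_unassigned_variable(variables, assignment, domains, neighbors):
    """Most constrained variable (MRV), with the most constraining variable
    (degree on remaining unassigned variables) as the tie-breaker."""
    unassigned = [v for v in variables if v not in assignment]
    return min(unassigned, key=lambda v: (
        len(legal_values(v, assignment, domains, neighbors)),
        -sum(1 for n in neighbors[v] if n not in assignment)))

def order_domain_values(var, assignment, domains, neighbors):
    """Least constraining value: prefer values that rule out the fewest
    options for unassigned neighboring variables."""
    def ruled_out(value):
        return sum(value in domains[n]
                   for n in neighbors[var] if n not in assignment)
    return sorted(legal_values(var, assignment, domains, neighbors), key=ruled_out)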

Forward checking
• Idea:
– Keep track of remaining legal values for unassigned variables
– Terminate search when any variable has no legal values

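
The slide figures showing the domains shrinking on the map are not reproduced here; below is a hedged sketch of the bookkeeping (illustrative names), which prunes the assigned value from each unassigned neighbor's domain and signals failure as soon as a domain empties.

import copy

def forward_check(var, value, domains, neighbors, assignment):
    """Prune `value` from the domains of unassigned neighbors of `var`.
    Returns the reduced domains, or None if some domain becomes empty."""
    new_domains = copy.deepcopy(domains)
    new_domains[var] = {value}
    for n in neighbors[var]:
        if n not in assignment:
            new_domains[n].discard(value)
            if not new_domains[n]:
                return None  # dead end detected early
    return new_domains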

Constraint propagation
• Forward checking propagates information from assigned to
unassigned variables, but doesn't provide early detection for all
failures:

• NT and SA cannot both be blue!

• Constraint propagation algorithms repeatedly enforce constraints
locally…

Arc consistency
• Simplest form of propagation makes each arc consistent
• X → Y is consistent iff
for every value x of X there is some allowed y

• If X loses a value, neighbors of X need to be rechecked

• Arc consistency detects failure earlier than forward checking
• Can be run as a preprocessor or after each assignment


Arc consistency algorithm AC-3

• Time complexity: O(#constraints · |domain|^3)
– Checking consistency of an arc is O(|domain|^2)
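
A compact sketch of AC-3 over the binary-CSP representation used earlier (our own code; constraint(x, vx, y, vy) is assumed to return True when the value pair is allowed, e.g. lambda x, vx, y, vy: vx != vy for map coloring):

from collections import deque

def ac3(variables, domains, neighbors, constraint):
    """Enforce arc consistency. Mutates `domains` in place; returns False
    if some domain is wiped out, True otherwise."""
    queue = deque((x, y) for x in variables for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y, constraint):
            if not domains[x]:
                return False              # inconsistency detected
            for z in neighbors[x]:        # neighbors of x must be rechecked
                if z != y:
                    queue.append((z, x))
    return True

def revise(domains, x, y, constraint):
    """Remove values of x that have no supporting value in y's domain."""
    removed = False
    for vx in set(domains[x]):
        if not any(constraint(x, vx, y, vy) for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed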
k-consistency
• A CSP is k-consistent if, for any set of k-1 variables, and for any consistent
assignment to those variables, a consistent value can always be assigned to
any kth variable
• 1-consistency is node consistency
• 2-consistency is arc consistency
• For binary constraint networks, 3-consistency is the same as path consistency
• Getting k-consistency requires time and space exponential in k
• Strong k-consistency means k’-consistency for all k’ from 1 to k
– Once strong k-consistency for k=#variables has been obtained, solution
can be constructed trivially
• Tradeoff between propagation and branching
• Practitioners usually use 2-consistency and less commonly 3-consistency
Other techniques for CSPs
• Global constraints
– E.g., Alldiff
– E.g., Atmost(10,P1,P2,P3), i.e., sum of the 3 vars ≤ 10
– Special propagation algorithms
• Bounds propagation (see the sketch below)
– E.g., number of people on two flights: D1 = [0, 165] and D2 = [0, 385]
– Constraint that the total number of people has to be at least 420
– Propagating bounds constraints yields D1 = [35, 165] and D2 = [255, 385]
• …

• Symmetry breaking
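
As referenced above, a minimal sketch of the bounds-propagation step for the flight example (illustrative code, not a general solver): for the constraint D1 + D2 ≥ 420, each lower bound is raised to 420 minus the other variable's upper bound.

def propagate_sum_at_least(bounds, total):
    """Tighten lower bounds for the constraint sum(vars) >= total,
    with interval domains given as (lo, hi) pairs."""
    new_bounds = {}
    for var, (lo, hi) in bounds.items():
        others_max = sum(h for v, (_, h) in bounds.items() if v != var)
        new_bounds[var] = (max(lo, total - others_max), hi)
    return new_bounds

print(propagate_sum_at_least({"D1": (0, 165), "D2": (0, 385)}, 420))
# -> {'D1': (35, 165), 'D2': (255, 385)}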
Structured CSPs
Tree-structured CSPs
Algorithm for tree-structured CSPs
Nearly tree-structured CSPs

(Finding the minimum cutset is NP-complete.)
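
The figures for the tree-structured slides above are not reproduced here. As a hedged sketch of the standard tree-CSP algorithm (pick a root, make each parent → child arc consistent from the leaves upward, then assign values from the root downward), assuming the domains/neighbors/constraint conventions used earlier and a constraint graph that really is a connected tree:

def solve_tree_csp(domains, neighbors, constraint, root):
    """Solve a tree-structured binary CSP. Mutates `domains`; returns an
    assignment or None. Assumes the constraint graph is a connected tree."""
    # Order variables so that each parent appears before its children (DFS).
    order, parent, seen, stack = [], {root: None}, {root}, [root]
    while stack:
        x = stack.pop()
        order.append(x)
        for y in neighbors[x]:
            if y not in seen:
                seen.add(y)
                parent[y] = x
                stack.append(y)
    # Bottom-up pass: make each arc parent(x) -> x consistent.
    for x in reversed(order[1:]):
        p = parent[x]
        domains[p] = {vp for vp in domains[p]
                      if any(constraint(p, vp, x, vx) for vx in domains[x])}
        if not domains[p]:
            return None
    # Top-down pass: pick any value consistent with the assigned parent.
    assignment = {}
    for x in order:
        if parent[x] is None:
            choices = domains[x]
        else:
            vp = assignment[parent[x]]
            choices = {vx for vx in domains[x] if constraint(parent[x], vp, x, vx)}
        if not choices:
            return None
        assignment[x] = next(iter(choices))
    return assignment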


Tree decomposition
• Every variable in original
problem must appear in at least
one subproblem
• If two variables are connected
in the original problem, they
must appear together (along
with the constraint) in at least
one subproblem
• If a variable occurs in two
subproblems in the tree, it must
appear in every subproblem on
the path that connects the two

• Algorithm: solve for all solutions of each subproblem. Then, use the tree-
structured algorithm, treating the subproblem solutions as variables for those
subproblems.
• O(n · d^(w+1)) where w is the treewidth (= one less than the size of the largest subproblem)
• Finding a tree decomposition of smallest treewidth is NP-complete, but good
heuristic methods exist
