6 Constraint Satisfaction Problems


11/15/2024

BLG 435E: Artificial Intelligence
Instructor: Professor Mehmet Keskinöz
ITU Artificial Intelligence Research and Development Center (ITUAI)
Faculty of Computer and Informatics
Computer Engineering Department
Istanbul Technical University, Istanbul, Turkey
Email: [email protected]

Constraint satisfaction problems (CSPs)

 CSP:
 state is defined by variables Xi with values from domain Di
 goal test is a set of constraints specifying allowable combinations of values for subsets of variables

 Allows useful general-purpose algorithms with more power than standard search algorithms

Example: Map-Coloring

 Variables: WA, NT, Q, NSW, V, SA, T
 Domains: Di = {red, green, blue}
 Constraints: adjacent regions must have different colors
 e.g., WA ≠ NT

 Solutions are complete and consistent assignments,
 e.g., WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = green
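The map-coloring CSP above translates directly into plain data plus a goal test. A minimal Python sketch (not from the slides; all names are illustrative):

```python
# Australia map-coloring CSP as plain data: variables, domains, and one
# binary "different colors" constraint per pair of adjacent regions.
VARIABLES = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
DOMAINS = {v: {"red", "green", "blue"} for v in VARIABLES}
NEIGHBORS = {
    "WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"}, "Q": {"NT", "SA", "NSW"},
    "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"},
    "SA": {"WA", "NT", "Q", "NSW", "V"}, "T": set(),
}

def consistent(assignment):
    """Goal test: no two adjacent assigned regions share a color."""
    return all(assignment[x] != assignment[y]
               for x in assignment for y in NEIGHBORS[x] if y in assignment)

# The solution given on the slide passes the goal test.
slide_solution = {"WA": "red", "NT": "green", "Q": "red",
                  "NSW": "green", "V": "red", "SA": "blue", "T": "green"}
```

The same three structures (variables, domains, neighbors) are all the later algorithms need.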


Constraint graph

 Binary CSP: each constraint relates two variables
 Constraint graph: nodes are variables, arcs are constraints

Varieties of CSPs

 Discrete variables
 finite domains:
 n variables, domain size d  O(d^n) complete assignments
 e.g., 3-SAT (NP-complete)
 infinite domains:
 integers, strings, etc.
 e.g., job scheduling, variables are start/end days for each job: StartJob1 + 5 ≤ StartJob3

 Continuous variables
 e.g., start/end times for Hubble Space Telescope observations
 linear constraints solvable in polynomial time by linear programming

Varieties of constraints

 Unary constraints involve a single variable,
 e.g., SA ≠ green
 Binary constraints involve pairs of variables,
 e.g., SA ≠ WA
 Higher-order constraints involve 3 or more variables,
 e.g., SA ≠ WA ≠ NT

Example: Cryptarithmetic (TWO + TWO = FOUR)

 Variables: F T U W R O X1 X2 X3
 Domains: {0,1,2,3,4,5,6,7,8,9} for the letters, {0,1} for the carries X1, X2, X3
 Constraints: Alldiff(F, T, U, W, R, O)
 O + O = R + 10 · X1
 X1 + W + W = U + 10 · X2
 X2 + T + T = O + 10 · X3
 X3 = F, T ≠ 0, F ≠ 0
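The column constraints above encode the puzzle TWO + TWO = FOUR, so a brute-force solver is easy to sketch (helper name is illustrative; Alldiff is enforced by drawing the six letter digits as a permutation, and the carries X1..X3 are left implicit):

```python
from itertools import permutations

def solve_two_plus_two():
    """Brute-force search over distinct digits for F, T, U, W, R, O."""
    for F, T, U, W, R, O in permutations(range(10), 6):
        if T == 0 or F == 0:          # leading digits must be nonzero
            continue
        # TWO + TWO = FOUR as one arithmetic equation (carries are implicit).
        if 2 * (100 * T + 10 * W + O) == 1000 * F + 100 * O + 10 * U + R:
            return {"F": F, "T": T, "U": U, "W": W, "R": R, "O": O}
    return None

sol = solve_two_plus_two()
```

Only 10·9·8·7·6·5 = 151,200 candidates need checking, so this runs in well under a second.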


Real-world CSPs

 Assignment problems
 e.g., who teaches what class
 Timetabling problems
 e.g., which class is offered when and where?
 Transportation scheduling
 Factory scheduling

 Notice that many real-world problems involve real-valued variables

Standard search formulation

Let’s try the standard search formulation. We need:
• Initial state: none of the variables has a value (color)
• Successor function: one of the variables without a value gets some value
• Goal: all variables have a value and none of the constraints is violated

(Figure: the search tree for map coloring. The first layer has N × D branches, the next layer [N × D] × [(N−1) × D] nodes, and so on, down N layers.)
There are N! × D^N nodes in the tree but only D^N distinct complete states!

Backtracking (depth-first) search

• Special property of CSPs: they are commutative:
 the order in which we assign variables does not matter,
 e.g., [WA = red then NT = green] reaches the same state as [NT = green then WA = red].

• Better search tree: first order the variables, then assign them values one by one:
 D branches at the first level, D^2 nodes at the second level, and D^N leaves in total.

Backtracking example

(Figures: partial colorings of the Australia map, one variable assigned per level of the tree.)
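The commutative, one-variable-per-level search tree above is exactly recursive backtracking. A minimal sketch (static variable ordering, binary ≠ constraints assumed; names are illustrative):

```python
def backtrack(assignment, variables, domains, neighbors):
    """Depth-first search that assigns one variable per level and undoes the
    assignment (backtracks) when no value is consistent."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)  # fixed ordering
    for value in domains[var]:
        # Consistent if no already-assigned neighbor has the same value.
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbors)
            if result is not None:
                return result
            del assignment[var]          # undo and try the next value
    return None

# Tiny example: a 3-cycle of "not equal" constraints needs all three colors.
tri = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
solution = backtrack({}, ["A", "B", "C"],
                     {v: ["red", "green", "blue"] for v in tri}, tri)
```

This explores at most D^N leaves instead of the N! × D^N of the naive formulation.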



Improving backtracking efficiency

 General-purpose methods can give huge gains in speed:
 Which variable should be assigned next?
 In what order should its values be tried?
 Can we detect inevitable failure early?

 We’ll discuss heuristics for all these questions in the following.


Which variable should be assigned next?
 minimum remaining values heuristic

 Most constrained variable:
 choose the variable with the fewest legal values
 a.k.a. minimum remaining values (MRV) heuristic
 Picks a variable which will cause failure as soon as possible, allowing the tree to be pruned.

Which variable should be assigned next?
 degree heuristic

 Most constraining variable: a tie-breaker among most constrained variables
 choose the variable with the most constraints on remaining variables (most edges in the graph)

In what order should its values be tried?
 least constraining value heuristic

 Given a variable, choose the least constraining value:
 the one that rules out the fewest values in the remaining variables
 Leaves maximal flexibility for a solution.

Rationale for MRV, DH, LCV

 In all cases we want to enter the most promising branch, but we also want to detect inevitable failure as soon as possible.
 MRV+DH: the variable that is most likely to cause failure in a branch is assigned first. E.g., X1-X2-X3 with values {0,1}, where neighbors cannot take the same value.
 LCV: tries to avoid failure by assigning values that leave maximal flexibility for the remaining variables.
 Combining these heuristics makes the 1000-queens problem feasible.
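The three heuristics can be sketched as ordering functions over the same constraint-graph data (a sketch assuming binary ≠ constraints; names are illustrative):

```python
def legal_values(var, assignment, domains, neighbors):
    """Values of var not already used by an assigned neighbor."""
    used = {assignment[n] for n in neighbors[var] if n in assignment}
    return [v for v in domains[var] if v not in used]

def select_variable(assignment, variables, domains, neighbors):
    """MRV (fewest legal values), with the degree heuristic as tie-breaker
    (most constraints on remaining unassigned variables)."""
    unassigned = [v for v in variables if v not in assignment]
    return min(unassigned, key=lambda v: (
        len(legal_values(v, assignment, domains, neighbors)),
        -sum(1 for n in neighbors[v] if n not in assignment)))

def order_values(var, assignment, domains, neighbors):
    """LCV: try first the value that rules out the fewest neighbor choices."""
    def ruled_out(value):
        return sum(1 for n in neighbors[var]
                   if n not in assignment and value in domains[n])
    return sorted(legal_values(var, assignment, domains, neighbors), key=ruled_out)

# Australia: with WA=red and Q=green, SA is down to one legal value and has
# the highest degree among the tied variables, so MRV+DH picks it next.
neighbors = {"WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"}, "Q": {"NT", "SA", "NSW"},
             "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"},
             "SA": {"WA", "NT", "Q", "NSW", "V"}, "T": set()}
domains = {v: ["red", "green", "blue"] for v in neighbors}
assignment = {"WA": "red", "Q": "green"}
```

Plugging `select_variable` and `order_values` into a backtracking loop gives the heuristic search the slides describe.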


Can we detect inevitable failure early?
 forward checking

 Idea:
 Keep track of remaining legal values for unassigned variables that are connected to the current variable.
 Terminate search when any variable has no legal values.
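A sketch of forward checking as a domain-pruning step (assumes binary ≠ constraints; not the slides' exact code):

```python
import copy

def forward_check(var, value, domains, neighbors, assignment):
    """After assigning var = value, remove value from each unassigned
    neighbor's domain; return None if any domain becomes empty."""
    pruned = copy.deepcopy(domains)
    pruned[var] = {value}
    for n in neighbors[var]:
        if n not in assignment:
            pruned[n] -= {value}
            if not pruned[n]:
                return None          # dead end detected before searching deeper
    return pruned

neighbors = {"WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"}, "Q": {"NT", "SA", "NSW"},
             "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"},
             "SA": {"WA", "NT", "Q", "NSW", "V"}, "T": set()}
domains = {v: {"red", "green", "blue"} for v in neighbors}
after_wa = forward_check("WA", "red", domains, neighbors, {})  # NT, SA lose red
```

Used inside backtracking, the pruned domains replace the full domains on the recursive call, and a `None` result triggers an immediate backtrack.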

6
11/15/2024

Constraint propagation

 Forward checking only looks at variables connected to the current variable in the constraint graph.
 It misses, e.g., that NT and SA cannot both be blue!
 Constraint propagation repeatedly enforces constraints locally.

Arc consistency

 Simplest form of propagation makes each arc consistent
 X → Y is consistent iff for every value x of X there is some allowed y
 Constraint propagation propagates arc consistency on the graph.

Arc consistency

 X → Y is consistent iff for every value x of X there is some allowed y
 When an arc just became inconsistent, remove the offending value from the source domain (e.g., remove blue from the source) to make it consistent again.
 If X loses a value, neighbors of X need to be rechecked:
 i.e., incoming arcs can become inconsistent again (outgoing arcs will stay consistent).


Arc consistency

 This is a propagation algorithm. It’s like sending messages to neighbors on the graph! How do we schedule these messages?
 Every time a domain changes, all incoming messages need to be re-sent. Repeat until convergence: no message will change any domains.
 Since we only remove values from domains when they can never be part of a solution, an empty domain means no solution is possible at all  back out of that branch.
 Forward checking is simply sending messages into a variable that just got its value assigned: the first step of arc consistency.
 Arc consistency detects failure earlier than forward checking
 Can be run as a preprocessor or after each assignment
 Time complexity: O(n^2 d^3)

Try it yourself

(Figure: a constraint graph with one variable fixed to [R] and four variables with domains [R,B,G].)

Use all heuristics including arc propagation to solve this problem.

• n: number of variables in the CSP.
• d: maximum domain size of the variables.


Why Tree-Structured CSPs Are Easier

1. Tree structure: in a tree, any two variables are connected by a unique path. This structure avoids cycles, which simplifies dependency management between variables.
2. Efficient propagation: in a tree, a constraint can be propagated from the leaves up to the root (or vice versa) to reduce the domains of each variable in a single pass, guaranteeing consistency throughout the tree.
3. Algorithm:
 1. Select a root variable arbitrarily and order the variables from the root to the leaves.
 2. Perform arc consistency in a single backward pass from the leaves to the root. This removes any inconsistent values from Parent(Xj); it applies arc consistency moving backwards.
 3. Then assign values in a forward pass from the root to the leaves.

(Figure: a tree-structured constraint graph with a priori constrained nodes, and its domains after the backward pass.)
Note: after the backward pass, there is guaranteed to be a legal choice for a child node for any of its leftover values.
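The three-step algorithm above can be sketched directly (a sketch for binary constraints given as an `allowed` predicate; names are illustrative):

```python
def solve_tree_csp(order, parent, domains, allowed):
    """order: variables listed root-first, every parent before its children.
    parent[x]: x's parent in the tree (None for the root).
    allowed(x, vx, y, vy): the binary constraint test between x and y."""
    doms = {v: set(domains[v]) for v in order}
    for x in reversed(order[1:]):              # backward pass: leaves -> root
        p = parent[x]
        doms[p] = {vp for vp in doms[p]
                   if any(allowed(p, vp, x, vx) for vx in doms[x])}
        if not doms[p]:
            return None                        # no solution exists
    assignment = {}
    for x in order:                            # forward pass: root -> leaves
        p = parent[x]
        for vx in doms[x]:
            if p is None or allowed(p, assignment[p], x, vx):
                assignment[x] = vx             # a legal value always remains
                break
    return assignment

# Chain A - B - C with "not equal" constraints.
ne = lambda x, vx, y, vy: vx != vy
order, parent = ["A", "B", "C"], {"A": None, "B": "A", "C": "B"}
sol = solve_tree_csp(order, parent,
                     {"A": {"r", "g"}, "B": {"r", "g"}, "C": {"g"}}, ne)
```

Each of the n-1 arcs is revised once, with at most d² constraint checks per revision, matching the O(nd²) bound quoted later in the slides.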

In Constraint Satisfaction Problems (CSPs), a nearly tree-structured CSP is one that is almost a tree but has a small number of cycles. While trees can be solved efficiently using arc consistency methods, nearly tree-structured CSPs require special handling due to the presence of cycles.

Cutset conditioning is a technique used to transform a nearly tree-structured CSP into a tree-structured one. Here’s how it works:
1. Identify a cycle cutset:
 • A cycle cutset is a set of variables that, when removed from the graph, breaks all cycles, transforming the CSP into a tree structure.
 • The goal is to select the smallest possible cutset to minimize complexity.
2. Condition on the cutset variables:
 • For each possible assignment of values to the cutset variables, solve the remaining tree-structured CSP.
 • Since the graph is now a tree, you can solve it efficiently with methods like arc consistency.
3. Combine solutions:
 • After solving the CSP for each assignment of the cutset variables, combine these solutions to find one that satisfies the original constraints.


Example of Cutset Conditioning
Consider a CSP with variables A, B, C, D, and E whose constraint graph contains a cycle through D:
• Step 1: Identify a cutset. Removing D breaks the cycle. So, the cutset is {D}.
• Step 2: Assign values to D. For each possible value of D, solve the tree-structured subproblem that remains (involving A, B, C, and E).
• Step 3: Combine solutions. Once each subproblem is solved, combine them to obtain a solution for the original CSP.

Benefits and Complexity
• Efficiency: by removing cycles, the CSP becomes solvable in polynomial time.
• Trade-off: the method can become costly if the cutset is large, as we need to consider all possible assignments for the cutset, leading to exponential complexity in the size of the cutset.

Cutset conditioning is especially useful for CSPs that are "almost" tree-like, where a small cutset can simplify the problem significantly.
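On the Australia map (Tasmania omitted), SA is a cycle cutset: removing it leaves the chain WA-NT-Q-NSW-V. A sketch of cutset conditioning under the assumption of ≠ constraints only (all names are illustrative):

```python
def solve_chain(order, domains):
    """Tree algorithm specialized to a chain of 'not equal' constraints:
    backward consistency pass, then a forward assignment pass."""
    doms = {v: set(domains[v]) for v in order}
    for i in range(len(order) - 1, 0, -1):             # child -> parent
        child, par = order[i], order[i - 1]
        doms[par] = {vp for vp in doms[par] if doms[child] - {vp}}
        if not doms[par]:
            return None
    assignment, prev = {}, None
    for v in order:                                    # parent -> child
        assignment[v] = min(doms[v] - ({assignment[prev]} if prev else set()))
        prev = v
    return assignment

def cutset_condition(cut_var, order, domains, neighbors):
    """Try each value of the cutset variable; prune it from the neighbors'
    domains and solve the remaining chain."""
    for value in sorted(domains[cut_var]):
        reduced = {v: domains[v] - ({value} if cut_var in neighbors[v] else set())
                   for v in order}
        rest = solve_chain(order, reduced)
        if rest is not None:
            return {cut_var: value, **rest}
    return None

neighbors = {"WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"}, "Q": {"NT", "SA", "NSW"},
             "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"},
             "SA": {"WA", "NT", "Q", "NSW", "V"}}
chain = ["WA", "NT", "Q", "NSW", "V"]      # tree left after removing SA
domains = {v: {"red", "green", "blue"} for v in neighbors}
solution = cutset_condition("SA", chain, domains, neighbors)
```

The cost is (at most) d tree solves: exponential only in the cutset size, as the trade-off above describes.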

Junction Tree Decompositions

Junction tree decomposition is a technique to organize the constraints of a Constraint Satisfaction Problem (CSP) into a tree structure that allows for efficient propagation and solution finding.

1. Objective of Junction Tree Decomposition

The goal is to decompose a CSP’s constraint graph (or primal graph) into clusters of variables, known as cliques, and then arrange these cliques into a tree structure (a junction tree). This tree structure enables efficient message passing, reducing the complexity of enforcing consistency.

2. Triangulation and Cliques

In CSPs, cycles with four or more variables can complicate constraint propagation. Triangulation involves identifying clusters of mutually constrained variables (cliques) without adding artificial constraints.
For example, in a map-coloring CSP, where nodes represent regions and edges represent shared borders (constraints that neighboring regions must have different colors), we:
• look at regions that are all mutually adjacent and form maximal cliques.


Problem Setup

Let’s go through a numerical example of junction tree decomposition using a simple map-coloring problem with specific regions (variables) and constraints.
Suppose we have a map with four regions A, B, C, and D that need to be colored. Each region should have a different color from its neighboring regions. Here’s the adjacency graph (where each edge represents a constraint):
• Region A is adjacent to regions B and C.
• Region B is adjacent to regions A, C, and D.
• Region C is adjacent to regions A, B, and D.
• Region D is adjacent to regions B and C.
We have three possible colors: Red, Green, and Blue.

Step 1: Formulating Constraints

The constraint graph has nodes A, B, C, D, in which edges represent constraints that neighboring regions must have different colors.

Step 2: Identify Maximal Cliques

For junction tree decomposition, we need to find maximal cliques (sets of nodes where each pair of nodes is connected) in the graph.
In this case, the maximal cliques are:
• Clique 1: {A,B,C}
• Clique 2: {B,C,D}

Step 3: Construct the Junction Tree

To construct the junction tree, we need to connect these cliques so that any shared variables are connected along the tree path. Here’s how we can link them:
1. Connect Clique 1 {A,B,C} and Clique 2 {B,C,D} via their shared nodes B and C.

Step 4: Assign Initial Colors (Example Solution Propagation)

Let’s assign colors to each region based on the constraints. Starting with an arbitrary color assignment and propagating constraints through the junction tree, we can find solutions that satisfy the constraints.
1. Start with Clique 1 {A,B,C}:
 1. Assign Red to A.
 2. Assign Green to B (since B must differ from A).
 3. Assign Blue to C (since C must differ from both A and B).
2. Propagate to Clique 2 {B,C,D}:
 1. We already have B = Green and C = Blue.
 2. Assign Red to D (since D must differ from both B and C).


Final Color Assignment

Thus, one possible solution is:
• A: Red
• B: Green
• C: Blue
• D: Red

Benefits of the Junction Tree

In this example:
• Constraint propagation becomes simpler, as constraints are checked within each clique and propagated through the tree.
• Consistency is maintained across cliques without the need to recheck each constraint individually in the whole graph.
This decomposition enables efficient solution propagation and constraint satisfaction for larger and more complex CSPs.

Local search for CSPs

 Note: the path to the solution is unimportant, so we can apply local search!
 To apply to CSPs:
 allow states with unsatisfied constraints
 operators reassign variable values
 Variable selection: randomly select any conflicted variable
 Value selection by the min-conflicts heuristic:
 choose the value that violates the fewest constraints
 i.e., hill-climb with h(n) = total number of violated constraints

Example: 4-Queens

 States: 4 queens in 4 columns (4^4 = 256 states)
 Actions: move a queen within its column
 Goal test: no attacks
 Evaluation: h(n) = number of attacks


Summary
 CSPs are a special kind of problem:
 states defined by values of a fixed set of variables
 goal test defined by constraints on variable values

 Backtracking = depth-first search with one variable assigned per node

 Variable ordering and value selection heuristics help significantly

 Forward checking prevents assignments that guarantee later failure

 Constraint propagation (e.g., arc consistency) does additional work to constrain values
and detect inconsistencies

 Iterative min-conflicts is usually effective in practice


Outline

 What is a CSP?
 Backtracking for CSP
 Local search for CSPs
 Problem structure and decomposition

Constraint satisfaction problems (CSPs)

CSP:
– state is defined by variables Xi with values from domain Di
– goal test is a set of constraints specifying allowable combinations of values for subsets of variables

Allows useful general-purpose algorithms with more power than standard search algorithms

Example: Map-Coloring

Variables: WA, NT, Q, NSW, V, SA, T
Domains: Di = {red, green, blue}
Constraints: adjacent regions must have different colors
 e.g., WA ≠ NT

Solutions are complete and consistent assignments,
e.g., WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = green

Constraint graph

Binary CSP: each constraint relates two variables
Constraint graph: nodes are variables, arcs are constraints

Varieties of CSPs

Discrete variables
– finite domains:
 – n variables, domain size d  O(d^n) complete assignments
 – e.g., 3-SAT (NP-complete)
– infinite domains:
 – integers, strings, etc.
 – e.g., job scheduling, variables are start/end days for each job: StartJob1 + 5 ≤ StartJob3

Continuous variables
– e.g., start/end times for Hubble Space Telescope observations
– linear constraints solvable in polynomial time by linear programming


Varieties of constraints

Unary constraints involve a single variable,
– e.g., SA ≠ green
Binary constraints involve pairs of variables,
– e.g., SA ≠ WA
Higher-order constraints involve 3 or more variables,
– e.g., SA ≠ WA ≠ NT

Example: Cryptarithmetic

Variables: F T U W R O X1 X2 X3
Domains: {0,1,2,3,4,5,6,7,8,9} for the letters, {0,1} for the carries
Constraints: Alldiff(F, T, U, W, R, O)
– O + O = R + 10 · X1
– X1 + W + W = U + 10 · X2
– X2 + T + T = O + 10 · X3
– X3 = F, T ≠ 0, F ≠ 0

Real-world CSPs

Assignment problems
– e.g., who teaches what class
Timetabling problems
– e.g., which class is offered when and where?
Transportation scheduling
Factory scheduling

Notice that many real-world problems involve real-valued variables.

Constraint satisfaction problems

What is a CSP?
– Finite set of variables V1, V2, …, Vn
– Finite set of constraints C1, C2, …, Cm
– Nonempty domain of possible values for each variable: DV1, DV2, …, DVn
– Each constraint Ci limits the values that variables can take, e.g., V1 ≠ V2
A state is defined as an assignment of values to some or all variables.
Consistent assignment: an assignment that does not violate the constraints.


Constraint satisfaction problems

An assignment is complete when every variable is mentioned.
A solution to a CSP is a complete assignment that satisfies all constraints.
Some CSPs require a solution that maximizes an objective function.
Applications: scheduling the time of observations on the Hubble Space Telescope, floor planning, map coloring, cryptography.

CSP example: map coloring

Variables: WA, NT, Q, NSW, V, SA, T
Domains: Di = {red, green, blue}
Constraints: adjacent regions must have different colors.
– E.g. WA ≠ NT (if the language allows this)
– E.g. (WA,NT) ∈ {(red,green), (red,blue), (green,red), …}

CSP example: map coloring

Solutions are assignments satisfying all constraints, e.g.
{WA=red, NT=green, Q=red, NSW=green, V=red, SA=blue, T=green}

CSP benefits
– Standard representation pattern
– Generic goal and successor functions
– Generic heuristics (no domain-specific expertise)

Constraint graph

Constraint graph: nodes are variables, edges show constraints.
The graph can be used to simplify search,
e.g., Tasmania is an independent subproblem.


Varieties of CSPs

Discrete variables
– Finite domains; size d  O(d^n) complete assignments.
– E.g. Boolean CSPs, incl. Boolean satisfiability (NP-complete).
– Infinite domains (integers, strings, etc.)
 – E.g. job scheduling, variables are start/end days for each job
 – Need a constraint language, e.g., StartJob1 + 5 ≤ StartJob3.
 – Linear constraints solvable, nonlinear undecidable.
Continuous variables
– e.g. start/end times for Hubble Telescope observations.
– Linear constraints solvable in poly time by LP methods.

Varieties of constraints

Unary constraints involve a single variable.
– e.g. SA ≠ green
Binary constraints involve pairs of variables.
– e.g. SA ≠ WA
Higher-order constraints involve 3 or more variables.
– e.g. cryptarithmetic column constraints.
Preferences (soft constraints), e.g., red is better than green, are often representable by a cost for each variable assignment  constrained optimization problems.

Example: cryptarithmetic

(Figure: the TWO + TWO = FOUR column-addition diagram.)

CSP as a standard search problem

A CSP can easily be expressed as a standard search problem.
Incremental formulation
– Initial state: the empty assignment {}.
– Successor function: assign a value to an unassigned variable provided there is no conflict.
– Goal test: the current assignment is complete.
– Path cost: a constant cost for every step.


CSP as a standard search problem

This is the same for all CSPs!
Solution is found at depth n (if there are n variables).
– Hence depth-first search can be used.
Path is irrelevant, so a complete-state representation can also be used.
Branching factor b at the top level is nd; b = (n − l)d at depth l, hence n!·d^n leaves (but only d^n distinct complete assignments).

Commutativity

CSPs are commutative.
– The order of any given set of actions has no effect on the outcome.
– Example: choose colors for Australian territories one at a time.
– [WA=red then NT=green] same as [NT=green then WA=red]
– All CSP search algorithms consider a single variable assignment at a time  there are d^n leaves.

Backtracking search

Cf. depth-first search.
Chooses values for one variable at a time and backtracks when a variable has no legal values left to assign.
Uninformed algorithm
– No good general performance

function BACKTRACKING-SEARCH(csp) returns a solution or failure
  return RECURSIVE-BACKTRACKING({}, csp)

function RECURSIVE-BACKTRACKING(assignment, csp) returns a solution or failure
  if assignment is complete then return assignment
  var  SELECT-UNASSIGNED-VARIABLE(VARIABLES[csp], assignment, csp)
  for each value in ORDER-DOMAIN-VALUES(var, assignment, csp) do
    if value is consistent with assignment according to CONSTRAINTS[csp] then
      add {var = value} to assignment
      result  RECURSIVE-BACKTRACKING(assignment, csp)
      if result ≠ failure then return result
      remove {var = value} from assignment
  return failure


Backtracking example

(Figures: four successive steps of backtracking search on the Australia map.)


Improving backtracking efficiency

Previous improvements  introduce heuristics
General-purpose methods can give huge gains in speed:
– Which variable should be assigned next?
– In what order should its values be tried?
– Can we detect inevitable failure early?
– Can we take advantage of problem structure?

Minimum remaining values

Which variable shall we try first?
var  SELECT-UNASSIGNED-VARIABLE(VARIABLES[csp], assignment, csp)
A.k.a. the most constrained variable heuristic.
Rule: choose the variable with the fewest legal values.

Degree heuristic

Use the degree heuristic.
Rule: select the variable that is involved in the largest number of constraints on other unassigned variables.
The degree heuristic is very useful as a tie-breaker.

Least constraining value

In what order should its values be tried?
Least constraining value heuristic
Rule: given a variable, choose the least constraining value, i.e., the one that leaves the maximum flexibility for subsequent variable assignments.


Forward checking

Can we detect inevitable failure early?
– And avoid it later?
Forward checking idea: keep track of remaining legal values for unassigned variables.
Terminate search when any variable has no legal values.

Forward checking

Assign {WA = red}
Effects on other variables connected by constraints with WA:
– NT can no longer be red
– SA can no longer be red

Forward checking

Assign {Q = green}
Effects on other variables connected by constraints with Q:
– NT can no longer be green
– NSW can no longer be green
– SA can no longer be green
The MRV heuristic will automatically select NT and SA next. Why?

Forward checking

If V is assigned blue
Effects on other variables connected by constraints with V:
– SA is empty
– NSW can no longer be blue
FC has detected that the partial assignment is inconsistent with the constraints, and backtracking can occur.


Example: 4-Queens Problem (forward-checking trace)

Queens are placed one per column; variable Xi is the row of the queen in column i, and every domain starts as {1,2,3,4}.

(Figures: 4×4 boards with the queens placed so far, and the remaining domains after each forward-checking step.)

• Place X1 = 1: forward checking prunes the attacked rows, leaving
 X2 ∈ {3,4}, X3 ∈ {2,4}, X4 ∈ {2,3}.
• Place X2 = 3: X3’s domain becomes empty { , , , }, so backtrack.
• Restart with X1 = 2: after pruning, X2 ∈ {4}, X3 ∈ {1,3}, X4 ∈ {1,3,4}.
• Place X2 = 4: X3 ∈ {1}, X4 ∈ {1,3}.
• Place X3 = 1: X4 ∈ {3}.
• Place X4 = 3: a solution, (X1,X2,X3,X4) = (2,4,1,3), with no attacks.
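The domain prunings in the trace above can be reproduced mechanically (a sketch; the attack test covers rows and diagonals, since columns are distinct by construction):

```python
def attacks(col_a, row_a, col_b, row_b):
    """Queens attack along rows and diagonals (columns are distinct by setup)."""
    return row_a == row_b or abs(row_a - row_b) == abs(col_a - col_b)

def prune(domains, col, row):
    """Forward checking: fix column `col` to `row` and delete every attacked
    row from the other columns' domains."""
    out = {c: {r for r in rs if c == col or not attacks(col, row, c, r)}
           for c, rs in domains.items()}
    out[col] = {row}
    return out

start = {c: {1, 2, 3, 4} for c in (1, 2, 3, 4)}
after_x1 = prune(start, 1, 1)     # X1 = 1: X2 -> {3,4}, X3 -> {2,4}, X4 -> {2,3}
dead_end = prune(after_x1, 2, 3)  # X2 = 3 empties X3's domain: backtrack
```

Running the same prunings from X1 = 2 reproduces the rest of the trace and the solution (2,4,1,3).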


Constraint propagation

Solving CSPs with a combination of heuristics plus forward checking is more efficient than either approach alone.
Forward checking propagates information from assigned to unassigned variables but does not detect all failures.
– NT and SA cannot both be blue!
Constraint propagation repeatedly enforces constraints locally.

Arc consistency

X → Y is consistent iff for every value x of X there is some allowed y.
SA → NSW is consistent iff SA=blue and NSW=red.

Arc consistency

X → Y is consistent iff for every value x of X there is some allowed y.
NSW → SA is consistent iff
 NSW=red and SA=blue
 NSW=blue and SA=??? (no value remains)
The arc can be made consistent by removing blue from NSW.

Arc consistency

The arc can be made consistent by removing blue from NSW.
RECHECK neighbours!
– Remove red from V


Arc consistency

Arc consistency detects failure earlier than FC.
Can be run as a preprocessor or after each assignment.
– Repeated until no inconsistency remains.

Arc consistency algorithm

function AC-3(csp) returns the CSP, possibly with reduced domains
  inputs: csp, a binary CSP with variables {X1, X2, …, Xn}
  local variables: queue, a queue of arcs, initially all the arcs in csp
  while queue is not empty do
    (Xi, Xj)  REMOVE-FIRST(queue)
    if REMOVE-INCONSISTENT-VALUES(Xi, Xj) then
      for each Xk in NEIGHBORS[Xi] do
        add (Xk, Xi) to queue

function REMOVE-INCONSISTENT-VALUES(Xi, Xj) returns true iff we remove a value
  removed  false
  for each x in DOMAIN[Xi] do
    if no value y in DOMAIN[Xj] allows (x,y) to satisfy the constraints between Xi and Xj
      then delete x from DOMAIN[Xi]; removed  true
  return removed
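A Python transliteration of AC-3, specialized to the map-coloring ≠ constraint (a sketch; for ≠, revising an arc removes x exactly when Dj = {x}):

```python
from collections import deque

def ac3(domains, neighbors):
    """AC-3 for binary 'not equal' constraints. Returns the reduced domains,
    or None as soon as some domain becomes empty."""
    doms = {v: set(d) for v, d in domains.items()}
    queue = deque((xi, xj) for xi in doms for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        # Revise: x survives only if some y != x remains in Dj.
        removed = {x for x in doms[xi] if not (doms[xj] - {x})}
        if removed:
            doms[xi] -= removed
            if not doms[xi]:
                return None
            for xk in neighbors[xi] - {xj}:   # incoming arcs may break again
                queue.append((xk, xi))
    return doms

neighbors = {"WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"}, "Q": {"NT", "SA", "NSW"},
             "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"},
             "SA": {"WA", "NT", "Q", "NSW", "V"}, "T": set()}
full = {"red", "green", "blue"}
# WA=red alone: AC-3 just trims red from NT and SA.
ok = ac3({**{v: full for v in neighbors}, "WA": {"red"}}, neighbors)
# WA=red and Q=green force NT and SA both to blue; AC-3 detects the failure.
fail = ac3({**{v: full for v in neighbors}, "WA": {"red"}, "Q": {"green"}}, neighbors)
```

The second call reproduces the "NT and SA cannot both be blue" failure from the slides, which forward checking alone misses.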

K-consistency

Arc consistency does not detect all inconsistencies:
– The partial assignment {WA=red, NSW=red} is inconsistent.
Stronger forms of propagation can be defined using the notion of k-consistency.
A CSP is k-consistent if, for any set of k-1 variables and for any consistent assignment to those variables, a consistent value can always be assigned to any kth variable.
– E.g. 1-consistency or node-consistency
– E.g. 2-consistency or arc-consistency
– E.g. 3-consistency or path-consistency

A graph is strongly k-consistent if
– it is k-consistent and
– also (k-1)-consistent, (k-2)-consistent, … all the way down to 1-consistent.

This is ideal since a solution can then be found in time O(nd) instead of O(n^2 d^3).
YET no free lunch: any algorithm for establishing n-consistency must take time exponential in n in the worst case.


Further improvements

Checking special constraints
– Checking the Alldiff(…) constraint
 – E.g. {WA=red, NSW=red}
– Checking the Atmost(…) constraint
– Bounds propagation for larger value domains
Intelligent backtracking
– Standard form is chronological backtracking, i.e., try a different value for the preceding variable.
– More intelligent: backtrack to the conflict set.
 – Set of variables that caused the failure, or set of previously assigned variables that are connected to X by constraints.
– Backjumping moves back to the most recent element of the conflict set.
– Forward checking can be used to determine the conflict set.

Local search for CSP

Use a complete-state representation.
For CSPs:
– allow states with unsatisfied constraints
– operators reassign variable values
Variable selection: randomly select any conflicted variable.
Value selection: min-conflicts heuristic
– Select the new value that results in a minimum number of conflicts with the other variables.

Local search for CSP

function MIN-CONFLICTS(csp, max_steps) returns a solution or failure
  inputs: csp, a constraint satisfaction problem
          max_steps, the number of steps allowed before giving up
  current  an initial complete assignment for csp
  for i = 1 to max_steps do
    if current is a solution for csp then return current
    var  a randomly chosen, conflicted variable from VARIABLES[csp]
    value  the value v for var that minimizes CONFLICTS(var, v, current, csp)
    set var = value in current
  return failure

Min-conflicts example 1

(Figures: three 8-queens boards with h = 5, h = 3, and h = 1 attacks.)
Use of the min-conflicts heuristic in hill-climbing.
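The pseudocode above, sketched in Python for map coloring (assumptions: binary ≠ constraints, and a seeded RNG with deterministic tie-breaking so the run is reproducible, where the slides break ties randomly):

```python
import random

def conflicts(var, value, assignment, neighbors):
    """Number of neighbors whose current value clashes with var = value."""
    return sum(1 for n in neighbors[var] if assignment.get(n) == value)

def min_conflicts(variables, domains, neighbors, max_steps=10000, seed=0):
    rng = random.Random(seed)                        # reproducible run
    current = {v: rng.choice(sorted(domains[v])) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(v, current[v], current, neighbors) > 0]
        if not conflicted:
            return current                           # no violated constraints
        var = rng.choice(conflicted)                 # random conflicted variable
        current[var] = min(sorted(domains[var]),     # min-conflicts value
                           key=lambda val: conflicts(var, val, current, neighbors))
    return None

neighbors = {"WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"}, "Q": {"NT", "SA", "NSW"},
             "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"},
             "SA": {"WA", "NT", "Q", "NSW", "V"}, "T": set()}
domains = {v: {"red", "green", "blue"} for v in neighbors}
coloring = min_conflicts(list(neighbors), domains, neighbors)
```

Note there is no backtracking: the state is always a complete assignment, and only the number of violated constraints goes down.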


Min-conflicts example 2

(Figure: a two-step solution for an 8-queens problem using the min-conflicts heuristic.)
At each stage a queen is chosen for reassignment in its column. The algorithm moves the queen to the min-conflicts square, breaking ties randomly.

Advantages of local search

The runtime of min-conflicts is roughly independent of problem size.
– Solving the million-queens problem in roughly 50 steps.
Local search can be used in an online setting.
– Backtracking search requires more time.

Problem structure

How can the problem structure help to find a solution quickly?
Subproblem identification is important:
– Coloring Tasmania and the mainland are independent subproblems
– Identifiable as connected components of the constraint graph
Improves performance.

Suppose each subproblem has c variables out of a total of n.
Worst-case solution cost is O((n/c) · d^c), i.e., linear in n
– instead of O(d^n), exponential in n.
E.g. n = 80, c = 20, d = 2, at 10 million nodes/sec:
– 2^80  about 4 billion years.
– 4 · 2^20  about 0.4 seconds.
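The size of the two search spaces can be checked in a couple of lines (the ratio, not any particular node rate, is what matters):

```python
n, c, d = 80, 20, 2                  # total variables, subproblem size, domain size
nodes_whole = d ** n                 # one monolithic search space: 2^80 assignments
nodes_split = (n // c) * d ** c      # four independent subproblems: 4 * 2^20
speedup = nodes_whole // nodes_split # decomposition shrinks the work by 2^58
```

A factor of 2^58 (about 3 × 10^17) is the difference between billions of years and a fraction of a second.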


Tree-structured CSPs

Theorem: if the constraint graph has no loops, then the CSP can be solved in O(n d^2) time.
Compare with general CSPs, where the worst case is O(d^n).

In most cases subproblems of a CSP are connected as a tree.
Any tree-structured CSP can be solved in time linear in the number of variables.
– Choose a variable as root; order the variables from the root to the leaves such that every node’s parent precedes it in the ordering (label the variables X1 to Xn).
– For j from n down to 2, apply REMOVE-INCONSISTENT-VALUES(Parent(Xj), Xj).
– For j from 1 to n, assign Xj consistently with Parent(Xj).

Nearly tree-structured CSPs

Can more general constraint graphs be reduced to trees?
Two approaches:
– Remove certain nodes
– Collapse certain nodes

Idea: assign values to some variables so that the remaining variables form a tree.
Assume that we assign {SA=x}  SA is a cycle cutset.
– And remove any values from the other variables that are inconsistent with x.
– The selected value for SA could be the wrong one, so we have to try all of them.


Nearly tree-structured CSPs

This approach is worthwhile if the cycle cutset is small.
Finding the smallest cycle cutset is NP-hard.
– Approximation algorithms exist.
This approach is called cutset conditioning.

Tree decomposition of the constraint graph into a set of connected subproblems:
– Each subproblem is solved independently.
– The resulting solutions are combined.
Necessary requirements:
– Every variable appears in at least one of the subproblems.
– If two variables are connected in the original problem, they must appear together in at least one subproblem.
– If a variable appears in two subproblems, it must appear in each node on the path between them.

Summary

CSPs are a special kind of problem: states defined by values of a fixed set of variables, goal test defined by constraints on variable values.
Backtracking = depth-first search with one variable assigned per node.
Variable ordering and value selection heuristics help significantly.
Forward checking prevents assignments that lead to failure.
Constraint propagation does additional work to constrain values and detect inconsistencies.
The CSP representation allows analysis of problem structure.
Tree-structured CSPs can be solved in linear time.
Iterative min-conflicts is usually effective in practice.
