cs188 Su24 Lec03

The document discusses Constraint Satisfaction Problems (CSPs) in the context of artificial intelligence, focusing on their definitions, examples, and solving methods. It highlights key concepts such as backtracking search, filtering techniques like forward checking and arc consistency, and the importance of heuristics in optimizing search processes. The presentation also covers real-world applications of CSPs and the implications of various constraint types in problem-solving.

CS 188: Artificial Intelligence

Constraint Satisfaction Problems

Summer 2024: Eve Fleisig & Evgeny Pobachienko


University of California, Berkeley
[These slides adapted from Nicholas Tomlin, Dan Klein, Pieter Abbeel, and Anca Dragan]
Introduction: Eve (she/her)
o Rising 4th year PhD student
o Advised by Dan Klein
o Natural language processing (NLP) + AI ethics
o Ethics & societal impacts of generative language models like ChatGPT
o Always happy to chat if you’re curious about getting started with research
Last time…

[figure: example graph with edges S→A (cost 1), A→G (cost 3), and S→G (cost 5)]
Last time…
g: backward cost (S -> current node)
h: forward cost (heuristic for current node -> goal)

[figure: same graph, with h = ?? marked at S, A, and G]
Last time…
g: backward cost (S -> current node)
h: forward cost (heuristic for current node -> goal)
Q: Where do heuristics come from?
A: We have to create them!

[figure: same graph with h(S) = 7, h(A) = 6, h(G) = 0]
f(A) = g + h = 1 + 6 = 7
f(G) = g + h = 5 + 0 = 5

Not the best heuristic…
Last time…
g: backward cost (S -> current node)
h: forward cost (heuristic for current node -> goal)
Q: Where do heuristics come from?
A: We have to create them!

[figure: same graph with h(S) = 1, h(A) = 3, h(G) = 0]
f(A) = g + h = 1 + 3 = 4
f(G) = g + h = 5 + 0 = 5

What’s a better heuristic? Admissible = underestimates cost from any node to the goal
Last time…
o Failure to detect repeated states can cause exponentially more work.

State Graph Search Tree


Last time…
o Idea: never expand a state twice

o How to implement:
o Tree search + set of expanded states (“closed set”)
o Expand the search tree node-by-node, but…
o Before expanding a node, check to make sure its state has never been expanded before
o If the state is not new, skip it; if it is new, add it to the closed set
Last time…

[figure: example graph with edges S→A (cost 1), S→B (cost 1), A→C (cost 1), B→C (cost 2), and C→G (cost 3)]
Last time…
This heuristic isn’t consistent

[figure: same graph with h(S) = 2, h(A) = 4, h(B) = 1, h(C) = 1, h(G) = 0]
f(A) = g + h = 1 + 4 = 5
f(B) = g + h = 1 + 1 = 2
f(C) = g + h = 3 + 1 = 4

Consistency: “Triangle inequality”
h(u) ≤ d(u,v) + h(v)

Q: Is h(A) ≤ d(A,C) + h(C)?
A: No: 4 ≰ 1 + 1
Summary of A*
o Tree search:
o A* is optimal if heuristic is admissible
o UCS is a special case (h = 0)

o Graph search:
o A* optimal if heuristic is consistent
o UCS optimal (h = 0 is consistent)

o Consistency implies admissibility

o In general, most natural admissible heuristics
tend to be consistent, especially if they come
from relaxed problems
Bonus: Optimality of A* Graph Search

o Consider what A* does:


o Expands nodes in increasing total f value (f-contours)
Reminder: f(n) = g(n) + h(n) = cost to n + heuristic
o Proof idea: the optimal goal(s) have the lowest f value, so
they must be expanded first

[figure: expanding f-contours: f ≤ 1, f ≤ 2, f ≤ 3]
Bonus: Optimality of A* Graph Search

Proof by contradiction:
o New possible problem: some n on the path to G*
isn’t in the queue when we need it, because some
worse n’ for the same state was dequeued and
expanded first (disaster!)
o Take the highest such n in tree
o Let p be the ancestor of n that was on the
queue when n’ was popped
o f(p) < f(n) because of consistency
o f(n) < f(n’) because n’ is suboptimal
o p would have been expanded before n’
o Contradiction!
Beyond Pathfinding
A* can be used in a variety of domains
besides path planning

Even has applications to LLMs!


Constraint Satisfaction Problems

N variables
domain D
constraints

states: partial assignments
goal test: assignment is complete and satisfies all constraints
successor function: assign a value to an unassigned variable
What is Search For?
o Assumptions about the world: a single agent, deterministic actions, fully observed
state, discrete state space

o Planning: sequences of actions


o The path to the goal is the important thing
o Paths have various costs, depths
o Heuristics give problem-specific guidance

o Identification: assignments to variables


o The goal itself is important, not the path
o All paths at the same depth (for some formulations)
o CSPs are specialized for identification problems
Constraint Satisfaction Problems

o Standard search problems:


o State is a “black box”: arbitrary data structure
o Goal test can be any function over states
o Successor function can also be anything

o Constraint satisfaction problems (CSPs):


o A special subset of search problems
o State is defined by variables Xi with values from a
domain D (sometimes D depends on i)
o Goal test is a set of constraints specifying allowable
combinations of values for subsets of variables

o Allows useful general-purpose algorithms with more power than standard search algorithms
CSP Examples
Example: Map Coloring
o Variables: WA, NT, Q, NSW, V, SA, T

o Domains: D = {red, green, blue}

o Constraints: adjacent regions must have different colors
Implicit: WA ≠ NT
Explicit: (WA, NT) ∈ {(red, green), (red, blue), …}

o Solutions are assignments satisfying all constraints, e.g.:
{WA=red, NT=green, Q=red, NSW=green, V=red, SA=blue, T=green}
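To make the formulation concrete, here is a minimal Python sketch of the map-coloring CSP above (variables, domains, and pairwise inequality constraints), with a brute-force goal test used purely for illustration; it is not the course's project code.

```python
# A minimal sketch (not the course's codebase): the Australia map-coloring CSP
# as plain Python data, with a naive goal test over complete assignments.
from itertools import product

variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
# Binary "different color" constraints between adjacent regions
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
            ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]

def satisfies_all(assignment):
    """Goal test: every adjacent pair gets different colors."""
    return all(assignment[a] != assignment[b] for a, b in adjacent)

# Naive enumeration of all 3^7 complete assignments (fine at this tiny size,
# but exactly what backtracking and filtering will improve on)
solutions = [dict(zip(variables, colors))
             for colors in product(["red", "green", "blue"], repeat=len(variables))
             if satisfies_all(dict(zip(variables, colors)))]
print(len(solutions), solutions[0])
```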
Constraint Graphs

o Binary CSP: each constraint relates (at most) two variables

o Binary constraint graph: nodes are variables, arcs show constraints

o General-purpose CSP algorithms use the graph structure to speed up search. E.g., Tasmania is an independent subproblem!
Example: N-Queens
o Formulation 1:
o Variables:
o Domains:
o Constraints
Example: N-Queens
o Formulation 2:
o Variables: Qk, the row of the queen in column k

o Domains: {1, 2, …, N}

o Constraints:
Implicit: non-threatening(Qi, Qj) for all i ≠ j

Explicit: (Q1, Q2) ∈ {(1, 3), (1, 4), …}
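A small sketch of Formulation 2's implicit constraint, assuming one variable per column that holds the queen's row (the helper names are illustrative):

```python
# Minimal sketch of the "one variable per column" N-Queens formulation.
# rows[k] is the row of the queen in column k.
def non_threatening(row_i, row_j, i, j):
    """Implicit pairwise constraint: different rows and different diagonals."""
    return row_i != row_j and abs(row_i - row_j) != abs(i - j)

def is_solution(rows):
    n = len(rows)
    return all(non_threatening(rows[i], rows[j], i, j)
               for i in range(n) for j in range(i + 1, n))

print(is_solution([1, 3, 0, 2]))  # a valid 4-queens assignment -> True
```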
Example: Cryptarithmetic
X1

o Variables:

o Domains:

o Constraints:
Example: Sudoku
§ Variables:
§ Each (open) square
§ Domains:
§ {1,2,…,9}
§ Constraints:
§ 9-way alldiff for each column
§ 9-way alldiff for each row
§ 9-way alldiff for each region
§ (or can have a bunch of pairwise inequality constraints)
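For example, one 9-way alldiff can be rewritten as the equivalent set of pairwise inequality constraints; a minimal sketch:

```python
# Minimal sketch: expand one 9-way alldiff (e.g., a Sudoku row) into the
# equivalent collection of pairwise "not equal" constraints.
from itertools import combinations

def alldiff_as_pairwise(cells):
    """Return all unordered pairs (a, b), each read as value(a) != value(b)."""
    return list(combinations(cells, 2))

row1 = [("r1", col) for col in range(1, 10)]   # the nine squares of row 1
print(len(alldiff_as_pairwise(row1)))          # 36 pairwise constraints
```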
Varieties of Constraints
o Varieties of Constraints
o Unary constraints involve a single variable (equivalent to
reducing domains), e.g.:

o Binary constraints involve pairs of variables, e.g.:

o Higher-order constraints involve 3 or more variables:


e.g., cryptarithmetic column constraints

o Preferences (soft constraints):


o E.g., red is better than green
o Often representable by a cost for each variable assignment
o Gives constrained optimization problems
o (We’ll ignore these until we get to Bayes’ nets)
Real-World CSPs
o Assignment problems: e.g., who teaches what class
o Timetabling problems: e.g., which class is offered when and where?
o Hardware configuration
o Transportation scheduling
o Factory scheduling
o Circuit layout
o Fault diagnosis
o … lots more!

o Many real-world problems involve real-valued variables…


Solving CSPs
Standard Search Formulation
o Standard search formulation of CSPs

o States defined by the values assigned so far (partial assignments)
o Initial state: the empty assignment, {}
o Successor function: assign a value to an unassigned variable
o Goal test: the current assignment is complete and satisfies all constraints

o We’ll start with the straightforward, naïve approach, then improve it
Search Methods
o What would BFS do?

{}
{WA=g} {WA=r} … {NT=g} …

{WA=g, NT=r} {WA=g, NT=g} {WA=r, NT=g}

[Demo: coloring -- dfs]


Search Methods
o What would BFS do?

o What would DFS do?


o let’s see!

o What problems does naïve search have?

[Demo: coloring -- dfs]


Video of Demo Coloring -- DFS
Backtracking Search
Backtracking Search
o Backtracking search is the basic uninformed algorithm for solving CSPs
o Idea 1: One variable at a time
o Variable assignments are commutative, so fix ordering -> better branching factor!
o I.e., [WA = red then NT = green] same as [NT = green then WA = red]
o Only need to consider assignments to a single variable at each step

o Idea 2: Check constraints as you go


o I.e. consider only values which do not conflict with previous assignments
o Might have to do some computation to check the constraints
o “Incremental goal test”

o Depth-first search with these two improvements is called backtracking search (not the best name)
o Can solve n-queens for n ≈ 25 (see the sketch below)
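A minimal recursive sketch of backtracking search; the CSP is assumed to be given as `variables`, `domains`, and a `consistent(var, value, assignment)` check (illustrative names, not the project API):

```python
# Minimal sketch of backtracking search: assign one variable at a time,
# check constraints as you go, and undo ("backtrack") on failure.
def backtracking_search(variables, domains, consistent, assignment=None):
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(variables):                       # complete assignment
        return assignment
    var = next(v for v in variables if v not in assignment)     # fixed variable ordering
    for value in domains[var]:
        if consistent(var, value, assignment):                  # incremental goal test
            assignment[var] = value
            result = backtracking_search(variables, domains, consistent, assignment)
            if result is not None:
                return result
            del assignment[var]                                  # undo and try the next value
    return None                                                  # every value failed: backtrack
```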
Backtracking Example

[Demo: coloring -- backtracking]


Video of Demo Coloring – Backtracking
Backtracking Search

o Backtracking = DFS + variable-ordering + fail-on-violation


o What are the choice points?
Improving Backtracking

o General-purpose ideas give huge gains in speed

o Ordering:
o Which variable should be assigned next?
o In what order should its values be tried?

o Filtering: Can we detect inevitable failure early?


Filtering

Keep track of domains for unassigned variables and cross off bad options
Filtering: Forward Checking
o Filtering: Keep track of domains for unassigned variables and cross off bad options
o Forward checking: Cross off values that violate a constraint when added to the existing
assignment

[figure: Australia map-coloring example, with the domains of WA, NT, SA, Q, NSW, V shrinking as values are crossed off]

[Demo: coloring -- forward checking]


Video of Demo Coloring – Backtracking with Forward Checking
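A minimal sketch of forward checking layered on the backtracking skeleton above; `neighbors` maps each variable to the variables it shares a constraint with, and `conflicts(var, value, other, other_value)` is an assumed binary-constraint check:

```python
# Minimal sketch of forward checking: after assigning var = value, cross off
# values in unassigned neighbors' domains that would violate a constraint.
def forward_check(var, value, domains, assignment, neighbors, conflicts):
    pruned = {v: set(vals) for v, vals in domains.items()}      # work on a copy
    for other in neighbors[var]:
        if other in assignment:
            continue
        pruned[other] = {val for val in pruned[other]
                         if not conflicts(var, value, other, val)}
        if not pruned[other]:            # empty domain: inevitable failure, detect it early
            return None
    return pruned
```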
Filtering: Constraint Propagation
o Forward checking propagates information from assigned to unassigned variables, but
doesn't provide early detection for all failures:


o NT and SA cannot both be blue!


o Why didn’t we detect this yet?
o Constraint propagation: reason from constraint to constraint
Consistency of A Single Arc
o An arc X → Y is consistent iff for every x in the tail there is some y in the head which
could be assigned without violating a constraint


Delete from the tail!


Forward checking?
Enforcing consistency of arcs pointing to each new assignment
Arc Consistency of an Entire CSP
o A simple form of propagation makes sure all arcs are consistent:


o Important: If X loses a value, neighbors of X need to be rechecked!


o Arc consistency detects failure earlier than forward checking
o Can be run as a preprocessor or after each assignment
o What’s the downside of enforcing arc consistency?
Remember: Delete from the tail!
Enforcing Arc Consistency in a CSP

o Runtime: O(n^2 d^3), can be reduced to O(n^2 d^2)


o … but detecting all possible future problems is NP-hard – why?
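A minimal AC-3-style sketch of enforcing arc consistency over the whole CSP (illustrative names; `conflicts` checks one binary constraint, and domains are mutable collections):

```python
# Minimal AC-3-style sketch: repeatedly make every arc X -> Y consistent by
# deleting unsupported values from the tail X, re-queueing arcs into X whenever
# X's domain shrinks.
from collections import deque

def revise(domains, x, y, conflicts):
    """Delete values of x that have no consistent partner in y's domain."""
    removed = False
    for vx in list(domains[x]):
        if all(conflicts(x, vx, y, vy) for vy in domains[y]):
            domains[x].remove(vx)
            removed = True
    return removed

def ac3(domains, neighbors, conflicts):
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y, conflicts):
            if not domains[x]:                   # empty domain: failure detected
                return False
            for z in neighbors[x]:
                if z != y:
                    queue.append((z, x))         # recheck arcs pointing at x
    return True
```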
Limitations of Arc Consistency

o After enforcing arc consistency:


o Can have one solution left
o Can have multiple solutions left
o Can have no solutions left (and
not know it)

o Arc consistency still runs inside


a backtracking search!

[Demo: coloring -- forward checking]


[Demo: coloring -- arc consistency]
Video of Demo Coloring – Backtracking with Forward Checking – Complex Graph
Video of Demo Coloring – Backtracking with Arc Consistency – Complex Graph
K-Consistency
o Increasing degrees of consistency
o 1-Consistency (Node Consistency): Each single node’s domain has a
value which meets that node’s unary constraints

o 2-Consistency (Arc Consistency): For each pair of nodes, any


consistent assignment to one can be extended to the other

o K-Consistency: For each k nodes, any consistent assignment to k-1


can be extended to the kth node.

o Higher k is more expensive to compute

o (You need to know the k=2 case: arc consistency)


Strong K-Consistency
o Strong k-consistency: also k-1, k-2, … 1 consistent
o Claim: strong n-consistency means we can solve without backtracking!
o Why?
o Choose any assignment to any variable
o Choose a new variable
o By 2-consistency, there is a choice consistent with the first
o Choose a new variable
o By 3-consistency, there is a choice consistent with the first 2
o …

o Lots of middle ground between arc consistency and n-consistency! (e.g. k=3, called
path consistency)
Ordering
Ordering: Minimum Remaining Values
o Variable Ordering: Minimum remaining values (MRV):
o Choose the variable with the fewest legal values left in its domain

o Why min rather than max?


o Also called “most constrained variable”
o “Fail-fast” ordering
Ordering: Least Constraining Value
o Value Ordering: Least Constraining Value
o Given a choice of variable, choose the least
constraining value
o I.e., the one that rules out the fewest values in
the remaining variables
o Note that it may take some computation to
determine this! (E.g., rerunning filtering)

o Why least rather than most?

o Combining these ordering ideas makes 1000 queens feasible (see the sketch below)
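Minimal sketches of the two ordering heuristics; `legal_values(var, assignment, domains)` is an assumed helper that returns the values of `var` consistent with the current partial assignment:

```python
# Minimal sketch of the two ordering heuristics used inside backtracking search.
def select_mrv_variable(variables, assignment, domains, legal_values):
    """Minimum remaining values: pick the unassigned variable with the fewest legal values."""
    unassigned = [v for v in variables if v not in assignment]
    return min(unassigned, key=lambda v: len(legal_values(v, assignment, domains)))

def order_lcv_values(var, assignment, domains, neighbors, legal_values):
    """Least constraining value: try values that rule out the fewest options for neighbors."""
    def ruled_out(value):
        trial = dict(assignment)
        trial[var] = value
        return sum(len(legal_values(n, assignment, domains)) -
                   len(legal_values(n, trial, domains))
                   for n in neighbors[var] if n not in assignment)
    return sorted(legal_values(var, assignment, domains), key=ruled_out)
```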
Demo: Coloring -- Backtracking + Forward Checking + Ordering
Summary
o Work with your rubber duck to write down:
o How we order variables and why
o How we order values and why
Iterative Improvement
Iterative Algorithms for CSPs
o Local search methods typically work with “complete” states, i.e., all variables assigned

o To apply to CSPs:
o Take an assignment with unsatisfied constraints
o Operators reassign variable values
o No fringe! Live on the edge.

o Algorithm: While not solved,


o Variable selection: randomly select any conflicted variable
o Value selection: min-conflicts heuristic:
o Choose a value that violates the fewest constraints
o I.e., hill climb with h(x) = total number of violated constraints
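A minimal sketch of the min-conflicts loop described above; `violated(var, value, assignment)` is an assumed helper that counts the constraints broken by setting var = value against the other variables' current values:

```python
# Minimal sketch of min-conflicts: start from a complete (possibly bad) assignment
# and repeatedly repair one conflicted variable at a time.
import random

def min_conflicts(variables, domains, violated, max_steps=100_000):
    assignment = {v: random.choice(domains[v]) for v in variables}   # random complete start
    for _ in range(max_steps):
        conflicted = [v for v in variables if violated(v, assignment[v], assignment) > 0]
        if not conflicted:                        # no violated constraints: solved
            return assignment
        var = random.choice(conflicted)           # variable selection: random conflicted var
        assignment[var] = min(domains[var],       # value selection: min-conflicts heuristic
                              key=lambda val: violated(var, val, assignment))
    return None                                   # give up after max_steps
```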
Example: 4-Queens

o States: 4 queens in 4 columns (4^4 = 256 states)


o Operators: move queen in column
o Goal test: no attacks
o Evaluation: c(n) = number of attacks
Iterative Improvement – n Queens
Iterative Improvement – Coloring
Performance of Min-Conflicts
o Given random initial state, can solve n-queens in almost constant time for arbitrary
n with high probability (e.g., n = 10,000,000)!

o The same appears to be true for any randomly-generated CSP except in a narrow
range of the ratio of constraints to variables
Summary: CSPs
o CSPs are a special kind of search problem:
o States are partial assignments
o Goal test defined by constraints
o Basic solution: backtracking search

o Speed-ups:
o Ordering
o Filtering
o Structure – turns out trees are easy!

o Iterative min-conflicts is often effective in practice


Local Search
Local Search
o Tree search keeps unexplored alternatives on the fringe (ensures completeness)

o Local search: improve a single option until you can’t make it better (no fringe!)

o New successor function: local changes

o Generally much faster and more memory efficient (but incomplete and suboptimal)
Hill Climbing
o Simple, general idea:
o Start wherever
o Repeat: move to the best neighboring state
o If no neighbors better than current, quit

o What’s bad about this approach?

o What’s good about it?


Hill Climbing Diagram
Hill Climbing Quiz

Starting from X, where do you end up?

Starting from Y, where do you end up?

Starting from Z, where do you end up?


Simulated Annealing
o Idea: Escape local maxima by allowing downhill moves
o But make them rarer as time goes on

Simulated Annealing
o Theoretical guarantee:
o Stationary distribution: p(x) ∝ e^(E(x)/kT)
o If T is decreased slowly enough, this will converge to the optimal state!

o Is this an interesting guarantee?

o Sounds like magic, but reality is reality:


o The more downhill steps you need to escape a local
optimum, the less likely you are to ever make them all in a
row
o People think hard about ridge operators which let you
jump around the space in better ways
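A minimal sketch of simulated annealing with the usual exponential acceptance rule; `random_neighbor`, `value`, and the cooling `schedule` are assumed to be provided by the problem:

```python
# Minimal sketch of simulated annealing: always take uphill moves, and take
# downhill moves with probability exp(delta / T), which shrinks as T cools.
import math
import random

def simulated_annealing(start, random_neighbor, value, schedule, max_steps=10_000):
    current = start
    for t in range(1, max_steps + 1):
        T = schedule(t)
        if T <= 0:
            return current
        candidate = random_neighbor(current)
        delta = value(candidate) - value(current)     # > 0 means an uphill (better) move
        if delta > 0 or random.random() < math.exp(delta / T):
            current = candidate                       # accept; downhill moves get rarer as T drops
    return current
```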
Genetic Algorithms

o Genetic algorithms use a natural selection metaphor


o Keep best N hypotheses at each step (selection) based on a fitness function
o Also have pairwise crossover operators, with optional mutation to give variety

o Possibly the most misunderstood, misapplied (and even maligned) technique around
Example: N-Queens

o Why does crossover make sense here?


o When wouldn’t it make sense?
o What would mutation be?
o What would a good fitness function be?
Bonus (time permitting): Structure
Problem Structure

o Extreme case: independent subproblems
o Example: Tasmania and mainland do not interact

o Independent subproblems are identifiable as connected components of the constraint graph

o Suppose a graph of n variables can be broken into subproblems of only c variables:
o Worst-case solution cost is O((n/c)(d^c)), linear in n
o E.g., n = 80, d = 2, c = 20
o 2^80 = 4 billion years at 10 million nodes/sec
o (4)(2^20) = 0.4 seconds at 10 million nodes/sec
Tree-Structured CSPs

o Theorem: if the constraint graph has no loops, the CSP can be solved in O(n d^2) time
o Compare to general CSPs, where worst-case time is O(d^n)

o This property also applies to probabilistic reasoning (later): an example of the relation
between syntactic restrictions and the complexity of reasoning
Tree-Structured CSPs
o Algorithm for tree-structured CSPs:
o Order: Choose a root variable, order variables so that parents precede children

o Remove backward: For i = n : 2, apply RemoveInconsistent(Parent(Xi),Xi)


o Assign forward: For i = 1 : n, assign Xi consistently with Parent(Xi)
o Runtime: O(n d^2) (why?)
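A minimal sketch of this algorithm; `order` is a parent-before-child variable ordering, `parent` maps each non-root variable to its parent, and `conflicts` is an assumed binary-constraint check:

```python
# Minimal sketch of the tree-structured CSP algorithm: a backward pass that makes
# every parent -> child arc consistent, then a forward assignment pass that never
# needs to backtrack.
def solve_tree_csp(order, parent, domains, conflicts):
    # Backward pass: for i = n..2, apply RemoveInconsistent(Parent(Xi), Xi),
    # i.e. delete parent values with no consistent child value.
    for x in reversed(order[1:]):
        p = parent[x]
        domains[p] = [vp for vp in domains[p]
                      if any(not conflicts(p, vp, x, vx) for vx in domains[x])]
        if not domains[p]:
            return None                                   # failure detected early
    # Forward pass: for i = 1..n, assign Xi consistently with Parent(Xi).
    assignment = {order[0]: domains[order[0]][0]}
    for x in order[1:]:
        p = parent[x]
        assignment[x] = next(v for v in domains[x]
                             if not conflicts(p, assignment[p], x, v))
    return assignment
```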
Tree-Structured CSPs
o Claim 1: After backward pass, all root-to-leaf arcs are consistent
o Proof: Each X→Y was made consistent at one point and Y’s domain could not have
been reduced thereafter (because Y’s children were processed before Y)

o Claim 2: If root-to-leaf arcs are consistent, forward assignment will not backtrack
o Proof: Induction on position

o Why doesn’t this algorithm work with cycles in the constraint graph?
o Note: we’ll see this basic idea again with Bayes’ nets
Improving Structure
Nearly Tree-Structured CSPs

o Conditioning: instantiate a variable, prune its neighbors' domains


o Cutset conditioning: instantiate (in all ways) a set of variables such that
the remaining constraint graph is a tree
o Cutset size c gives runtime O(d^c (n-c) d^2), very fast for small c
Cutset Conditioning

Choose a cutset (e.g., SA)

Instantiate the cutset (all possible ways)

Compute the residual CSP for each assignment

Solve the residual CSPs (tree structured)
Cutset Quiz
o Find the smallest cutset for the graph below.
Tree Decomposition*
§ Idea: create a tree-structured graph of mega-variables
§ Each mega-variable encodes part of the original CSP
§ Subproblems overlap to ensure consistent solutions

[figure: a chain of mega-variables M1, M2, M3, M4, e.g. M1 = {WA, NT, SA}, M2 = {NT, Q, SA}, M3 = {Q, NSW, SA}, M4 = {NSW, V, SA},
with ≠ constraints inside each mega-variable, and neighboring mega-variables must agree on shared vars]

M1 domain: {(WA=r,SA=g,NT=b), (WA=b,SA=r,NT=g), …}
M2 domain: {(NT=r,SA=g,Q=b), (NT=b,SA=g,Q=r), …}
Agree: (M1,M2) ∈ {((WA=g,SA=g,NT=g), (NT=g,SA=g,Q=g)), …}
