Lecture 5

Breadth-First Search (BFS)
• Type: Uninformed
• Strategy: Explores all nodes level by level.
• Advantages:
• Complete: always finds a solution if one exists.
• Guarantees the shortest path if all actions have the same cost.
• Disadvantages:
• Explores all possibilities, which can be inefficient in large spaces.
• High memory consumption.
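As a sketch of the level-by-level strategy (the graph and node names below are invented for illustration), BFS can be implemented with a FIFO queue:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: returns the path with the fewest edges, or None."""
    frontier = deque([[start]])  # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

The `visited` set is what drives the high memory consumption noted above: every generated node is kept for the whole search.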
Depth-First Search (DFS)
• Type: Uninformed
• Strategy: Explores as far down one branch as possible before backtracking.
• Advantages:
• Low memory consumption.
• Can be useful for problems where solutions are deep in the search space.
• Disadvantages:
• Not guaranteed to find the shortest path.
• Can get stuck in deep, irrelevant paths.
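A minimal recursive DFS sketch on a hypothetical graph; note that it returns the first path found, which need not be the shortest:

```python
def dfs(graph, start, goal, path=None, visited=None):
    """Depth-first search: returns *a* path to goal (not necessarily shortest), or None."""
    if path is None:
        path, visited = [start], {start}
    if start == goal:
        return path
    for neighbor in graph.get(start, []):
        if neighbor not in visited:
            visited.add(neighbor)
            result = dfs(graph, neighbor, goal, path + [neighbor], visited)
            if result is not None:
                return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["E"]}
print(dfs(graph, "A", "E"))  # ['A', 'B', 'D', 'E'] — longer than the direct A -> C -> E path
```

Only the current branch (plus the visited set) is kept, which is why the memory footprint is low.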
Uniform-Cost Search (UCS)
• Type: Uninformed
• Strategy: Expands the node with the lowest path cost, ensuring the cheapest
path to the goal.
• Advantages:
• Guaranteed to find the optimal solution.
• Complete.
• Disadvantages:
• Can be slow if all actions have similar costs.
• Can explore in irrelevant directions.
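UCS is BFS with the FIFO queue swapped for a priority queue ordered by path cost. A sketch with an invented weighted graph:

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expands the cheapest frontier node first."""
    frontier = [(0, start, [start])]  # (path cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return None

graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": [("G", 1)]}
print(ucs(graph, "A", "G"))  # (3, ['A', 'B', 'C', 'G'])
```

Note the direct edge A→C (cost 5) is never taken: the detour through B is cheaper, which is exactly what ordering by path cost guarantees.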
Informed Search Overview
Greedy Best-First Search
• Type: Informed.
• Strategy: Expands the node that appears to be closest to the goal according to
the heuristic.
• Heuristic: Uses only h(n) (estimate of cost to reach the goal).
• Advantages:
• Often faster than uninformed search methods.
• Useful when a good heuristic is available.
• Disadvantages:
• Not complete; may get stuck in loops or local minima.
• Not guaranteed to find the optimal solution.
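A sketch that orders the frontier by h(n) alone (the graph and heuristic values are invented for illustration):

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first: expand the node with the smallest h(n); fast but not optimal."""
    frontier = [(h[start], start, [start])]  # priority = heuristic only
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 3, "A": 1, "B": 2, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))  # ['S', 'A', 'G']
```

Because g(n) is ignored, a misleading heuristic can send the search down an expensive branch, which is why no optimality guarantee holds.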
A* Search
• Type: Informed.
• Strategy: Expands the node with the lowest combined cost f(n) = g(n) + h(n),
where:
• g(n): Cost to reach node n from the start.
• h(n): Estimated cost from node n to the goal.
• Advantages:
• Complete and optimal if h(n) is admissible (never overestimates).
• Balances path cost and estimated cost to the goal.
• Disadvantages:
• Memory-intensive due to the need to store all explored nodes.
• Can be slow in large search spaces if h(n) is not efficient.
Introduction
Agenda
1. Introduction
2. Constraint Networks
3. Consistency and Solutions
4. Naïve Backtracking
5. Variable and Value Ordering
6. Filtering
7. Arc Consistency
8. Conclusion and References
Search Algorithms
Constraint Satisfaction Problem Example
• Variables: A, B, C, D, E, F, G
• Domains: D = {yellow, blue, green}
• Constraints: adjacent nodes must have different colors
• Implicit: A ≠ B
• Explicit: (A, B) ∈ {(yellow, blue), (yellow, green), . . .}
• Solutions: assignments that meet all constraints (e.g., ones respecting Color of A ≠ blue and A ≠ B)
• Constraint Networks:
• Defined by a set of variables, each with a finite domain
• Constraints specify allowed values for variable pairs
• A CSP consists of finding values for all variables satisfying all constraints

Sudoku as CSP
Example: Coloring Australia
Standard Search Approach for CSPs
function NaïveBacktracking(assignment):
    if assignment is complete:
        return assignment
    variable <- select an unassigned variable
    for each value in domain of variable:
        if value is consistent with assignment:
            result <- NaïveBacktracking(assignment + {variable = value})
            if result != failure:
                return result
    return failure
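One way to make the pseudocode concrete — a sketch, assuming a binary "neighbors must differ" CSP such as map coloring (the variable and color names are illustrative):

```python
def backtracking(assignment, variables, domains, neighbors):
    """Naive backtracking for a binary CSP where adjacent variables must differ."""
    if len(assignment) == len(variables):
        return assignment  # complete assignment found
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Consistency check: no assigned neighbor already has this value.
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = backtracking({**assignment, var: value},
                                  variables, domains, neighbors)
            if result is not None:
                return result
    return None  # failure

variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
solution = backtracking({}, variables, domains, neighbors)
print(solution)  # {'WA': 'red', 'NT': 'green', 'SA': 'blue'}
```

`None` plays the role of `failure`; the recursion tries each value in turn and undoes a choice simply by returning.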
CSP Map Coloring
Backtracking
Backtracking Scenario
Step 1: Australian States and Territories Structure
[Figure: constraint graph of the Australian states WA, NT, SA, QLD, NSW, VIC]
Step 2: Start with Western Australia (WA)
[Figure: candidate assignments after coloring WA; panels show the NT = Blue and NT = Green alternatives]
Step 5: Coloring Queensland (QLD)
[Figure: partial coloring of the map with QLD assigned]
Step 6: Coloring New South Wales (NSW) and Victoria (VIC)

[Figure: completed coloring of the map including NSW and VIC]
Step 1: Coloring choices for vertex A
[Figure: graph with vertices A–E; three color choices shown for A]
Step 2: Options for B (after A = Red)
[Figure: remaining options B = Blue and B = Green after A = Red]
Step 3: Options for D (after A = Red, B = Blue)
[Figure: remaining options D = Blue and D = Green after A = Red, B = Blue]
Step 4: Options for C (after A = Red, B = Blue, D = Blue)

[Figure: remaining color options for C]
Step 5: Final Vertex E (after A = Red, B = Blue, D = Blue, C = Green)

[Figure: final coloring with E assigned]
Course Scheduling Problem
Constraints:
- Each course needs one time slot (9AM, 10AM, 11AM, or 2PM)
- Same professor can’t teach at same time
- Group A takes CS101 and MATH101
- Group B takes CS102 and PHYS101
Step 1: Initial Time Slot Options for CS101 (Group A)
[Figure: four candidate time slots for CS101: 9 AM, 10 AM, 11 AM, 2 PM]
Step 2: Options for MATH101 (Group A, after CS101 = 9 AM)

[Figure: CS101 fixed at 9 AM; remaining options for MATH101 are 10 AM, 11 AM, and 2 PM]

Step 3: Options for CS102 (Group B, after MATH101 = 10 AM)

[Figure: Option 1 and Option 2 place CS102 at 11 AM or 2 PM]

CS102 can't be at:
- 9 AM (Prof. Smith teaches CS101)
- 10 AM (conflict with MATH101)
Step 4: Final Assignment - PHYS101 (Group B)
Final schedule:
- CS101 (Group A): 9 AM
- MATH101 (Group A): 10 AM
- CS102 (Group B): 11 AM
- PHYS101 (Group B): 2 PM
Improving Backtracking

• Ordering
• Filtering
Variable Ordering
• Most Constrained Variable: Choose the variable with the fewest legal values
• Most Constraining Variable: Choose the variable involved in the most constraints
• Least Constraining Value: Choose the value that rules out the fewest choices
for other variables
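The most-constrained-variable heuristic can be sketched in a few lines (the domains below are illustrative):

```python
def mrv(variables, assignment, domains):
    """Most-constrained-variable (MRV): pick the unassigned variable
    with the fewest remaining legal values."""
    unassigned = [v for v in variables if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

domains = {"WA": ["red"], "NT": ["red", "green"], "SA": ["red", "green", "blue"]}
print(mrv(["WA", "NT", "SA"], {}, domains))  # 'WA' — only one value left
```

Choosing the tightest variable first means failures are discovered near the top of the search tree, where backtracking is cheap.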
Forward Checking
• Forward Checking:
• Forward checking propagates information from assigned to unassigned variables,
helping to reduce the domain of possible values.
• After assigning a value to a variable, forward checking eliminates values in the
domains of unassigned neighboring variables that would violate constraints.
• Limitation of Forward Checking:
• Forward checking may not detect all potential conflicts early because it only looks
one step ahead.
• For example, if assigning colors to states on a map, it may not catch conflicts
between variables that are not directly connected (e.g., NT and SA cannot both be
blue, but this conflict may not be detected immediately).
• This technique does not fully propagate constraints across the entire network,
leading to possible missed conflicts until later in the search.
• Solution: Constraint Propagation
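A sketch of the pruning step (the coloring data is illustrative): after WA = red, both neighbors lose red, but forward checking alone does not notice that NT and SA now compete for the single remaining color:

```python
def forward_check(domains, neighbors, var, value):
    """After assigning var = value, prune that value from the domains of
    var's unassigned neighbors. Returns new domains, or None on a wipe-out."""
    pruned = {v: list(vals) for v, vals in domains.items()}  # copy, don't mutate
    pruned[var] = [value]
    for n in neighbors[var]:
        if value in pruned[n]:
            pruned[n] = [x for x in pruned[n] if x != value]
            if not pruned[n]:
                return None  # dead end detected one step ahead
    return pruned

domains = {"WA": ["red"], "NT": ["red", "green"], "SA": ["red", "green"]}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
print(forward_check(domains, neighbors, "WA", "red"))
# NT and SA each keep only 'green' — a conflict between them that
# forward checking, looking one step ahead, does not detect.
```

This is exactly the limitation described above: the NT–SA arc is never examined, so the inconsistency surfaces only later in the search.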
Arc Consistency
Arc Consistency Example
Arc Consistency: Initial Setup and First Assignment
• Process NT:
• NT has {green, blue}; ensure it doesn't conflict with its neighbors.
• Remove "green" from Q's domain if NT is assigned "green".
• Process Q:
• Q has {red, blue}; check consistency with its neighbors.
• Remove "blue" from NT's domain if Q is assigned "blue".
• Repeat arc consistency for all pairs until no more values can be
removed.
• Final domains show the colors that satisfy all adjacency
constraints.
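The repeat-until-no-removals procedure above corresponds to the AC-3 algorithm. A sketch for the "neighbors must differ" constraint (the domains are illustrative):

```python
from collections import deque

def ac3(domains, neighbors):
    """AC-3: repeatedly delete values that have no consistent support
    at a neighboring variable. Returns False if a domain empties."""
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        # Revise x w.r.t. y: keep values of x that differ from some value of y.
        revised = [v for v in domains[x] if any(v != w for w in domains[y])]
        if len(revised) < len(domains[x]):
            domains[x] = revised
            if not revised:
                return False  # inconsistency detected
            for z in neighbors[x]:
                if z != y:
                    queue.append((z, x))  # re-check arcs into x
    return True

domains = {"WA": ["red"], "NT": ["red", "green"], "SA": ["red", "green", "blue"]}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
ac3(domains, neighbors)
print(domains)  # {'WA': ['red'], 'NT': ['green'], 'SA': ['blue']}
```

Unlike forward checking, a deletion at one variable re-enqueues the arcs pointing into it, so the effect propagates across the whole network until a fixpoint is reached.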
Comparison of Techniques

• Backtracking:
• Simple approach, one variable at a time which may explore invalid paths without
early conflict detection.
• Forward Checking:
• Checks future assignments by eliminating values that lead to conflicts, reducing
the search space.
• Less thorough than arc consistency; only partially enforces constraints.
• Arc Consistency:
• Enforces consistency for all variable pairs, reducing search by eliminating
conflicting values early.
• Stronger but more computationally intensive than forward checking.
• Summary:
• Backtracking is a base technique.
• Forward checking is a more efficient improvement.
References
Some slides and ideas have been adapted from the following sources:
• Russell, S. and Norvig, P. (2010). Artificial Intelligence: A Modern Approach. 3rd edition, Prentice Hall.
• Tutorial: Artificial Intelligence Tutorial (Week 3)
Thank you for your attention