Lecture 2

This lecture covers uninformed search strategies in artificial intelligence, including classical search problems and various algorithms such as Breadth-First Search (BFS), Depth-First Search (DFS), and Uniform-Cost Search (UCS). It outlines key components of problem formulation, types of problems, and the structure of search trees. The document emphasizes the importance of search strategies in finding optimal solutions in complex environments.

Artificial Intelligence and Expert Systems

Lecture 2 Uninformed Search

October 5, 2024
Table of Contents

1. Classical Search Problems

2. Search Strategies
2.1 Breadth-First Search (BFS)
2.2 Depth First Search (DFS)
2.3 Uniform-Cost Search
2.4 Iterative Deepening Search

3. Conclusion

Classical Search Problems

• Route Planning: Finding the shortest path between two points.
• Puzzle Solving: E.g., the 8-puzzle, where tiles must be moved to reach a goal configuration.
• Game Playing: Determining optimal moves in a game like chess or checkers.

Outline

• Problem-solving agents
• Problem types
• Problem formulation
• Example problems
• Basic search algorithms

Overview

• Reflex agents cannot operate well in complex environments


• Problem-solving agents use atomic representations
• Two general classes of search:
• Uninformed search algorithms
• Informed search algorithms
• Goals help organize behavior by limiting objectives
• We consider environments that are known, observable, discrete, and
deterministic
• Search: process of looking for a sequence of actions that reaches the goal

Well-defined Problems and Solutions

Key Components of a Problem:


1. Initial State: The state the agent starts in, e.g., In(Arad).
2. Actions: The set of actions applicable in a state, e.g., {Go(Sibiu)} from In(Arad).
3. Transition Model: Returns the state that results from performing an action in a state.
4. Goal Test: Determines whether a given state is a goal state, e.g., In(Bucharest).
5. Path Cost: The sum of the costs of the individual actions along a path.
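These five components can be bundled into one problem object. The sketch below is illustrative (the class and field names are not from the lecture); the Romania fragment is shortened to two actions and assumes a constant step cost:

```python
class Problem:
    """Bundles the five components of a well-defined search problem."""

    def __init__(self, initial, actions, result, goal_states, step_cost=1):
        self.initial = initial          # 1. initial state
        self._actions = actions         # 2. state -> applicable actions
        self._result = result           # 3. (state, action) -> new state
        self.goal_states = goal_states  # 4. states that pass the goal test
        self.step_cost = step_cost      # 5. per-action cost (constant here)

    def actions(self, state):
        return self._actions.get(state, [])

    def result(self, state, action):
        return self._result[(state, action)]

    def goal_test(self, state):
        return state in self.goal_states

# A tiny, simplified fragment of the Romania route-planning problem:
romania = Problem(
    initial="Arad",
    actions={"Arad": ["Go(Sibiu)"], "Sibiu": ["Go(Bucharest)"]},
    result={("Arad", "Go(Sibiu)"): "Sibiu",
            ("Sibiu", "Go(Bucharest)"): "Bucharest"},
    goal_states={"Bucharest"})
print(romania.goal_test("Bucharest"))   # → True
```

The real map routes Sibiu to Bucharest through Fagaras; the direct hop above is a simplification for illustration only.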

Problem-solving agents

• Goal formulation and problem formulation.
• Execute action sequences toward solving the problem.
• Continue until the goal is achieved.
Problem Formulation:
• Abstraction: Ignores irrelevant details for simplicity.
• State Description: Focuses on crucial elements.
• Actions: Considers only key actions.

Example: Romania

• Goal: Be in Bucharest
• Problem formulation:
• States: Various cities
• Actions: Drive between cities
• Solution: Sequence of cities (e.g., Arad,
Sibiu, Fagaras, Bucharest)
Figure: Simplified map of Romania (image source: Russell and Norvig, 2010)

Problem types

• Deterministic, fully observable → single-state problem


• Non-observable → sensorless problem (conformant problem)
• Nondeterministic and/or partially observable → contingency problem
• Unknown state space → exploration problem

Selecting a state space

• The real world is complex, so the state space must be abstracted
• Abstract state = set of real states
• Abstract action = complex combination of real actions
• Abstract solution = set of real paths that are solutions in the real world

8-Puzzle Problem

States: Each state represents a unique configuration of the 8 tiles on a 3x3 board.
Initial State: Any configuration of tiles, e.g., the configuration shown in the figure.
Actions: Slide a tile horizontally or vertically into the empty space.
Transition Model: Given a state and an action, returns the new state with the tile moved into the empty space.
Goal Test: A state where the tiles are arranged in numerical order with the empty space at the bottom-right corner.
Path Cost: Each move costs one step; the path cost is the number of moves taken.

Figure: 8-puzzle configuration (Russell and Norvig, 2010)
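The Actions and Transition Model above can be sketched as a successor function. The board encoding (a tuple of 9 integers with 0 as the blank) and the function name are assumptions for illustration:

```python
def neighbors(board):
    """Yield (action, new_board) pairs for a 3x3 8-puzzle state.

    `board` is a tuple of 9 ints with 0 marking the blank; this encoding
    is an illustrative assumption, not taken from the lecture."""
    i = board.index(0)                       # position of the blank
    row, col = divmod(i, 3)
    # "Up" means the tile below slides up into the blank, and so on.
    moves = {"Up": (row + 1, col), "Down": (row - 1, col),
             "Left": (row, col + 1), "Right": (row, col - 1)}
    for action, (r, c) in moves.items():
        if 0 <= r < 3 and 0 <= c < 3:        # stay on the board
            j = 3 * r + c
            new = list(board)
            new[i], new[j] = new[j], new[i]  # slide the tile into the blank
            yield action, tuple(new)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)         # blank in the center
print(len(list(neighbors(start))))          # → 4 legal moves
```

With the blank in a corner only two moves apply, with it on an edge three, and in the center four.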
8 Queens Problem

States: Each state represents a unique arrangement of up to 8 queens on an 8x8 chessboard.
Initial State: The empty board, with no queens placed.
Actions: Add a queen to any empty square such that no two queens attack each other.
Transition Model: Given a state and an action, returns the new state with the queen added to the specified square.
Goal Test: A state with 8 queens on the board, none attacking another.
Path Cost: Typically 1 per move; however, the cost is of little interest here, since only the final state matters.

Figure: 8 queens on a chessboard (Russell and Norvig, 2010)
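The goal test above amounts to checking that 8 queens are placed with no pair attacking. A minimal sketch, assuming queens are encoded as (row, column) pairs (an encoding chosen for illustration, not given in the lecture):

```python
def attacks(q1, q2):
    """True if queens at (row, col) positions q1 and q2 attack each other:
    same row, same column, or same diagonal."""
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def is_goal(queens):
    """Goal test: exactly 8 queens with no attacking pair."""
    return len(queens) == 8 and all(
        not attacks(a, b)
        for i, a in enumerate(queens) for b in queens[i + 1:])

# One known solution (column per row): 0, 4, 7, 5, 2, 6, 1, 3
solution = [(0, 0), (1, 4), (2, 7), (3, 5), (4, 2), (5, 6), (6, 1), (7, 3)]
print(is_goal(solution))                 # → True
```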
Searching for Solutions

Understanding Search:
• Solutions are sequences of actions
considered by search algorithms.
• These sequences form a search tree,
starting from the initial state as the
root.
• Expanding a state generates new
states, adding branches to the tree.
• We add child nodes to parent nodes
through these expansions.

Figure: Illustration of a search tree


Searching for Solutions - Cont’d

Expanding the Frontier:


• Nodes without children are leaves; all
leaves form the frontier.
• Expansion continues until a solution
is found or all states are explored.
• We focus next on the general
TREE-SEARCH algorithm.
• Search algorithms differ by their
state expansion strategy.
• TREE-SEARCH explores all paths; GRAPH-SEARCH avoids redundant paths.

Figure: Illustration of a search tree
Searching for Solutions

• Redundant paths can make problems intractable.
• Augmenting TREE-SEARCH with an explored set yields GRAPH-SEARCH.
• The explored set prevents redundant exploration of repeated states.

Infrastructure and Implementation

• Search requires structured data.


• Each node has four components.
• States differ from nodes.
• Nodes stored in specific queues.
• Queue types affect node retrieval.
Node structure:
• n.STATE: state in the state space
• n.PARENT: node in the search tree that generated this node
• n.ACTION: action applied to the parent
• n.PATH-COST: cost of the path from initial state to the node
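A minimal sketch of this node structure in Python (the class and its `path` helper are illustrative, not part of the lecture):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """Search-tree node with the four components listed above."""
    state: Any                       # n.STATE: state in the state space
    parent: Optional["Node"] = None  # n.PARENT: node that generated this node
    action: Any = None               # n.ACTION: action applied to the parent
    path_cost: float = 0.0           # n.PATH-COST: cost from the initial state

    def path(self):
        """Recover the action sequence by following parent links to the root."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

root = Node("S")
child = Node("A", parent=root, action="Go(A)", path_cost=3)
print(child.path())                  # → ['Go(A)']
```

Note how the same state could appear in several distinct `Node` objects, each recording a different path, which is exactly the state-versus-node distinction drawn on the next slide.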

States vs. Nodes

State:
• A representation of a physical configuration
• Part of the problem domain
• Independent of the search process

Node:
• A data structure used in the search algorithm
• Contains a state plus other information:
• Parent node
• Action taken to reach this node
• Path cost (g(n))
• Depth in the search tree
• Created by the Expand function during search

Figure: Node and State (image source: Russell and Norvig, 2010)

Key Difference: Multiple nodes can contain the same state but represent different paths to reach that state.
Search strategies

Evaluation criteria:
• Completeness
• Time complexity
• Space complexity
• Optimality
Measures:
• b: maximum branching factor
• d: depth of the shallowest solution
• m: maximum depth of state space

Uninformed search strategies

• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search

Search Strategies
Breadth First Search

Breadth-First Search (BFS) is a layer-by-layer traversal method that explores nodes in order of their distance from the starting point. It uses a FIFO queue to keep track of the next node to visit.
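The description above can be sketched as follows; the `successors` mapping encodes the example tree used in the later visualization slides (the encoding itself is an assumption):

```python
from collections import deque

def bfs(start, goal, successors):
    """Breadth-first search returning a list of states from start to goal.

    `successors` maps a state to its children.  Uses a FIFO queue of
    paths and an explored set to avoid redundant paths."""
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()        # shallowest path first
        state = path[-1]
        if state == goal:
            return path
        for nxt in successors.get(state, []):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append(path + [nxt])
    return None                          # no solution exists

graph = {"S": ["A", "B", "D"], "A": ["E", "F"], "B": ["G", "C"],
         "D": ["K", "M"], "E": ["H", "I"], "G": ["J"], "M": ["N", "L"]}
print(bfs("S", "N", graph))              # → ['S', 'D', 'M', 'N']
```

Because BFS expands nodes level by level, the first path it returns to any node is a shallowest one.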

Breadth First Search (BFS)

Figure: Breadth First Search (image source: Russell and Norvig, 2010)
Breadth-First Search (BFS)

Queue Operations (FIFO):

Node  Frontier
S     A, B, D
A     B, D, E, F
B     D, E, F, G, C
D     E, F, G, C, K, M
E     F, G, C, K, M, H, I
F     G, C, K, M, H, I
G     C, K, M, H, I, J
C     K, M, H, I, J
K     M, H, I, J
M     H, I, J, N, L
H     I, J, N, L
I     J, N, L
J     N, L
N     Goal Reached!

Figure: search tree rooted at S (S → A, B, D; A → E, F; B → G, C; D → K, M; E → H, I; G → J; M → N, L)
Breadth-First Search (BFS)

Queue Operations (FIFO):

Node  Frontier
S     A, B, D
A     B, D, E, F
B     D, E, F, G, C
D     E, F, G, C, K, M
E     F, G, C, K, M, H, I
F     G, C, K, M, H, I
G     C, K, M, H, I, J
C     K, M, H, I, J
K     M, H, I, J
M     H, I, J, N, L
H     I, J, N, L
I     Goal Reached!

Figure: the same search tree, with goal state I
Complexity and Properties

• Time Complexity: O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
• Space Complexity: O(b^d), since the frontier can hold every node at depth d.
• Completeness: BFS is complete; it will find a solution if one exists (for finite b).
• Optimality: BFS is optimal when all step costs are equal.

Depth-First Search (DFS)

Algorithm:

1. Initialize an empty stack
2. Push the starting node onto the stack
3. While the stack is not empty:
   a. Pop the top node
   b. If it is the goal, return success
   c. Otherwise, push its unexplored adjacent nodes onto the stack

Example: Generate all possible configurations of a puzzle game.
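The four steps above can be sketched with an explicit stack; children are pushed right-to-left so the leftmost child is explored first, matching the slides' traversal (the graph encoding is an assumption):

```python
def dfs(start, goal, successors):
    """Iterative depth-first search using an explicit LIFO stack.

    `successors` maps a state to its children; an explored set guards
    against re-expanding states on graphs with repeated states."""
    stack = [[start]]
    explored = set()
    while stack:
        path = stack.pop()               # deepest path pushed most recently
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in reversed(successors.get(state, [])):
            if nxt not in explored:      # push right-to-left
                stack.append(path + [nxt])
    return None

graph = {"S": ["A", "B", "D"], "A": ["E", "F"], "B": ["G", "C"],
         "D": ["K", "M"], "E": ["H", "I"], "G": ["J"], "M": ["N", "L"]}
# Exploration order: S, A, E, H, I, F, B, G, J, C, D, K, M, N
print(dfs("S", "N", graph))              # → ['S', 'D', 'M', 'N']
```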

Depth-First Search (DFS) Visualization - Path to Goal State N

Node  Frontier
S     D, B, A
A     D, B, F, E
E     D, B, F, I, H
H     D, B, F, I
I     D, B, F
F     D, B
B     D, C, G
G     D, C, J
J     D, C
C     D
D     M, K
K     M
M     L, N
N     "Reached Goal State"

Figure: search tree rooted at S (S → A, B, D; A → E, F; B → G, C; D → K, M; E → H, I; G → J; M → N, L)
How DFS Solves the Maze to Reach N

DFS explores nodes by expanding the deepest node in the current frontier until it
reaches the goal state N or can expand no further.
Key Steps:
• Start at S, push all children onto the stack starting from right to left.
• Continue deep-first exploration.
• M is the parent of N, discovered after popping K.
• N is found and identified as the goal state.
• Exploration order: S → A → E → H → I → F → B → G → J → C → D → K → M → N (highlighted in yellow); the resulting solution path is S → D → M → N.
DFS is used when space is a concern, and it can be faster than BFS by finding a
solution without exploring all level nodes.

Uniform Cost Search

Figure: Uniform Cost Search (image source: Russell and Norvig, 2010)
Uniform Cost Search (UCS) Visualization

UCS Path Finding Details:

Frontier List                                       Expand  Explored List
{(S, 0)}                                            -       -
{(D, 1), (B, 2), (A, 3)}                            S       {S}
{(B, 2), (A, 3), (M, 3), (K, 5)}                    D       {S, D}
{(A, 3), (M, 3), (G, 5), (K, 5), (C, 6)}            B       {S, D, B}
{(M, 3), (G, 5), (K, 5), (C, 6), (A, 7)}            M       {S, D, B, M}
{(G, 5), (K, 5), (C, 6), (A, 7), (N, 4), (L, 6)}    N       {S, D, B, M, N}
{(G, 5), (K, 5), (C, 6), (A, 7), (L, 6)}            G       {S, D, B, M, N, G}

Figure: weighted search tree rooted at S (edge costs consistent with the table: S-A 3, S-B 2, S-D 1; B-G 3, B-C 4; D-K 4, D-M 2; M-N 1, M-L 3)
How UCS Solves the Map to Reach N

UCS explores paths in order of increasing cost:


• Start at node S.
• Continue by expanding the least costly node.
• Nodes are expanded until the goal node N is reached.
• Path to N: S → D → M → N.
• Total Cost: 4
UCS guarantees the lowest cost path in weighted graphs.
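UCS can be sketched with a priority queue ordered by path cost g(n). The edge costs below are the ones recoverable from the slide's cost table; the A subtree is omitted, and the graph encoding is an assumption:

```python
import heapq

def ucs(start, goal, edges):
    """Uniform-cost search: expand the cheapest frontier path first.

    `edges` maps a state to a list of (neighbor, step_cost) pairs.
    Returns (path, total_cost), or (None, inf) if the goal is unreachable."""
    frontier = [(0, start, [start])]     # min-heap keyed on path cost
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, cost
        if state in explored:
            continue                     # a cheaper path got here first
        explored.add(state)
        for nxt, step in edges.get(state, []):
            if nxt not in explored:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None, float("inf")

edges = {"S": [("A", 3), ("B", 2), ("D", 1)],
         "B": [("G", 3), ("C", 4)],
         "D": [("K", 4), ("M", 2)],
         "M": [("N", 1), ("L", 3)]}
print(ucs("S", "N", edges))              # → (['S', 'D', 'M', 'N'], 4)
```

Note that N is reached for cost 4 even though cheaper-looking nodes were generated earlier; expanding strictly in g(n) order is what guarantees the lowest-cost path.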

Uniform Cost Search (UCS) Visualization - Path to Goal State I

UCS Path Finding Details:

Frontier List                                       Expand  Explored List
{(S, 0)}                                            -       -
{(D, 1), (B, 2), (A, 3)}                            S       {S}
{(B, 2), (A, 3), (M, 3), (K, 5)}                    D       {S, D}
{(A, 3), (M, 3), (G, 5), (K, 5), (C, 6)}            B       {S, D, B}
{(M, 3), (G, 5), (K, 5), (C, 6), (A, 7)}            M       {S, D, B, M}
{(G, 5), (K, 5), (C, 6), (A, 7), (N, 4), (L, 6)}    N       {S, D, B, M, N}
{(G, 5), (K, 5), (C, 6), (A, 7), (L, 6)}            G       {S, D, B, M, N, G}
{(K, 5), (C, 6), (A, 7), (L, 6), (F, 8)}            C       {S, D, B, M, N, G, C}
{(K, 5), (L, 6), (A, 7), (F, 8), (E, 10)}           K       {S, D, B, M, N, G, C, K}
{(L, 6), (A, 7), (F, 8), (E, 10)}                   A       {S, D, B, M, N, G, C, K, A}
{(L, 6), (F, 8), (E, 10), (H, 11), (I, 14)}         E       {S, D, B, M, N, G, C, K, A, E}
{(L, 6), (F, 8), (H, 11), (I, 14)}                  I       {S, D, B, M, N, G, C, K, A, E, I}

Figure: the same weighted search tree as on the previous slide
Iterative Deepening Search

IDS Process Explained:


• Starts at node S and iteratively deepens the search.
• Explores all nodes within the current depth limit before increasing the limit.
• Efficiently combines the space efficiency of DFS and the depth control of BFS.
• Ensures that no nodes are missed and finds the shortest path in terms of depth.
Benefits:
• IDS finds the shortest path with minimal memory usage compared to BFS.
• Effective in large search spaces where the depth of the solution is not known in
advance.

IDS Algorithm and Its Relation to DFS and BFS

Algorithm Details:
• Combines DFS depth-first search with BFS layer-by-layer expansion.
• DFS is applied with increasing depth limits, simulating BFS.
• Explores fully at each depth before proceeding deeper.
Execution Steps:
1. Begin with depth 0, perform DFS (only depth 0 is explored).
2. If the goal isn’t found, increase depth to 1.
3. Repeat DFS for depth 1 and higher until the goal is found.
4. This method simulates BFS by exploring all nodes at each depth before moving deeper.
Key Benefits:
• Uses minimal memory (like DFS), storing only the current path.
• Ensures completeness (like BFS) by exploring all nodes at each level.
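The execution steps above can be sketched as a depth-limited DFS wrapped in a loop over increasing limits (the graph encoding repeats the slides' example tree and is an assumption):

```python
def depth_limited(path, goal, limit, successors):
    """DFS allowed to descend at most `limit` levels below the current node."""
    state = path[-1]
    if state == goal:
        return path
    if limit == 0:
        return None                      # depth limit reached, cut off
    for nxt in successors.get(state, []):
        if nxt not in path:              # avoid cycles along the current path
            found = depth_limited(path + [nxt], goal, limit - 1, successors)
            if found:
                return found
    return None

def ids(start, goal, successors, max_depth=20):
    """Iterative deepening: run depth-limited DFS with limits 0, 1, 2, ...
    Memory stays proportional to the current path, as with plain DFS."""
    for limit in range(max_depth + 1):
        found = depth_limited([start], goal, limit, successors)
        if found:
            return found
    return None

graph = {"S": ["A", "B", "D"], "A": ["E", "F"], "B": ["G", "C"],
         "D": ["K", "M"], "E": ["H", "I"], "G": ["J"], "M": ["N", "L"]}
print(ids("S", "N", graph))              # → ['S', 'D', 'M', 'N']
```

Like BFS, this returns a shallowest path (N and I both sit at depth 3, found on the limit-3 iteration); unlike BFS, only the current path is ever stored.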
Iterative Deepening Search

Figure: Iterative Deepening Search (image source: Russell and Norvig, 2010)

Iterative Deepening Depth-First Search (IDDFS) - Path to Goal State N

IDDFS Process:

Depth  Expanded  Frontier
0      S         S
1      S         A, B, D
2      A         E, F, B, D
       B         E, F, G, C, D
       D         E, F, G, C, K, M
3      E         H, I, F, G, C, K, M
       F         H, I, G, C, K, M
       G         H, I, J, C, K, M
       C         H, I, J, K, M
       K         H, I, J, M
       M         H, I, J, N, L
4      N         H, I, J, L

Path to N: S → D → M → N

Figure: the search tree with depth contours D0-D4 marked
Iterative Deepening Depth-First Search (IDDFS) - Path to Goal State I

IDDFS Process:

Depth  Expanded  Frontier
0      S         S
1      S         A, B, D
2      A         E, F, B, D
       B         E, F, G, C, D
       D         E, F, G, C, K, M
3      E         H, I, F, G, C, K, M
       F         H, I, G, C, K, M
       G         H, I, J, C, K, M
       I         H, I, J, C, K, M

Path to I: S → A → E → I

Figure: the search tree with depth contours D0-D4 marked
Comparing Uninformed Search Strategies

Figure: Comparing uninformed search methods (image source: Russell and Norvig, 2010)
Conclusion

• Recap the significance and applications of classical search.
• Encourage further exploration and optimization of these algorithms.
• See Chapter 3 of Artificial Intelligence: A Modern Approach (Russell and Norvig, 2010) for reference.

Reference

Some slides and ideas are adapted from the following sources:
• Artificial Intelligence: A Modern Approach Russell and Norvig (2010)

Thank you for your attention

October 5, 2024
Russell, S. and Norvig, P. (2010). Artificial Intelligence: A Modern Approach. Prentice Hall, 3rd edition.
