
Solving Problems by

Searching – Part#1

Department of Artificial Intelligence


Artificial Intelligence: A Modern Approach
Fourth Edition, Global Edition

Chapter 3

Solving Problems By Searching

Chapter 3 © 2022 Pearson Education Ltd.


Outline

♦ Problem-solving agents
♦ Example problems
♦ Problem formulation
♦ Search algorithms
♦ Uninformed search strategies
♦ Informed (heuristic) search strategies
♦ Heuristic functions



Problem Solving As Search

(1) Define the problem through:

a) Goal formulation: the first step in problem solving, based on the
current situation and the agent's performance measure.
b) Problem formulation: the process of deciding what actions and
states to consider, given a goal.

(2) Solve the problem as a 2-stage process:

a) Search: the exploration of several possibilities
b) Execute the solution found
Problem Formulation

 Initial state: the state in which the agent starts.


 States: all states reachable from the initial state by any sequence of
actions (the state space).
 Actions: the possible actions available to the agent. Given a state s,
ACTIONS(s) returns the set of actions that can be executed in s (the action space).
 Transition model: a description of what each action does. RESULT(s, a)
returns the state that results from doing action a in state s.
 Goal test: determines whether a given state is a goal state.
 Path cost: a function that assigns a numeric cost to each path (reflecting
the agent's performance measure).
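The components above can be sketched as a small Python class. The names (RouteProblem, the Romania fragment) are illustrative, not from the slides, and only a few of the map's roads are included:

```python
# A minimal sketch of the problem-formulation interface described above.

class RouteProblem:
    def __init__(self, initial, goal, graph):
        self.initial = initial        # initial state
        self.goal = goal              # used by the goal test
        self.graph = graph            # {city: {neighbor: step cost}}

    def actions(self, s):
        """ACTIONS(s): actions executable in state s (here: drive to a neighbor)."""
        return list(self.graph[s])

    def result(self, s, a):
        """Transition model RESULT(s, a): driving to city a puts us in city a."""
        return a

    def goal_test(self, s):
        return s == self.goal

    def step_cost(self, s, a, s2):
        """c(s, a, s2): road distance between adjacent cities."""
        return self.graph[s][a]

# Tiny fragment of the chapter's Romania map.
romania = {
    "Arad": {"Sibiu": 140, "Zerind": 75, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Bucharest": {"Fagaras": 211},
}
problem = RouteProblem("Arad", "Bucharest", romania)
```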
Problem-Solving Agents

Restricted form of general agent:


function Simple-Problem-Solving-Agent(percept) returns an action
    static: seq, an action sequence, initially empty
            state, some description of the current world state
            goal, a goal, initially null
            problem, a problem formulation
    state ← Update-State(state, percept)
    if seq is empty then
        goal ← Formulate-Goal(state)
        problem ← Formulate-Problem(state, goal)
        seq ← Search(problem)
    action ← First(seq); seq ← Rest(seq)
    return action

Note: this is offline problem solving; solution executed “eyes closed.”


Online problem solving involves acting without complete knowledge.
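The pseudocode above can be transcribed into Python almost line for line. This is a sketch, not the book's code: formulate_goal, formulate_problem, and search are assumed to be supplied by the caller, and a closure stands in for the `static` variables.

```python
# A direct transcription of Simple-Problem-Solving-Agent.

def make_agent(formulate_goal, formulate_problem, search):
    seq = []                  # action sequence, initially empty
    state = None              # description of the current world state

    def agent(percept):
        nonlocal seq, state
        state = percept       # Update-State(state, percept), simplified
        if not seq:           # plan only when the old plan is exhausted
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = search(problem) or []
        if not seq:
            return None       # search failed: no action to take
        return seq.pop(0)     # First(seq); seq <- Rest(seq)

    return agent
```

With trivial stand-ins for the three helpers, the agent executes its plan "eyes closed" and replans only once the plan runs out.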
Example: Romania

[Road map of Romania with driving distances on each road, e.g., Arad-Zerind 75,
Arad-Sibiu 140, Arad-Timisoara 118, Sibiu-Fagaras 99, Sibiu-Rimnicu Vilcea 80,
Fagaras-Bucharest 211.]
Example: Romania

On holiday in Romania; currently in Arad.

Formulate goal: be in Bucharest

Formulate problem:
    states: various cities
    actions: drive between cities

Find solution: a sequence of cities,
e.g., Arad, Sibiu, Fagaras, Bucharest
Problem types

Deterministic, fully observable =⇒ single-state problem
    Agent knows exactly which state it will be in; solution is a sequence

Non-observable =⇒ conformant problem
    Agent may have no idea where it is; solution (if any) is a sequence

Nondeterministic and/or partially observable =⇒ contingency problem
    Percepts provide new information about the current state;
    solution is a contingent plan or a policy; often interleaves search and execution

Unknown state space =⇒ exploration problem ("online")
Single-state problem formulation

A problem is defined by four items:


initial state, e.g., "at Arad"

successor function S(x) = set of action-state pairs,
    e.g., S(Arad) = {(Arad → Zerind, Zerind), . . .}

goal test, which can be
    explicit, e.g., x = "at Bucharest", or implicit, e.g., NoDirt(x)

path cost, e.g., sum of distances, number of actions executed, etc.;
    c(x, a, y) is the step cost, assumed to be ≥ 0

A solution is a sequence of actions
leading from the initial state to a goal state
Optimal Solution

 A solution to a problem is:


 An action sequence that leads from the initial state to a goal state
 Solution quality is measured by the path cost function

 An optimal solution has the lowest path cost among all solutions
Abstraction when formulating problems

 There are so many details in the actual world!


 Actual world state = the travelling companions, the condition of the road,
the weather, etc.
 Abstract mathematical state = In(Arad)
 We leave all other considerations out of the state description because they
are irrelevant to the problem of finding a route to Bucharest

Is the weather really irrelevant? Why?

 This process of removing details from a representation is called
abstraction.
 Choosing the level of abstraction:

• How many details do we want to consider? Do we want to model
everything, such as stopping at red lights or looking out of the window?
Example: vacuum world state space

[State-space diagram: the eight states of the two-square vacuum world,
connected by Left (L), Right (R), and Suck (S) transitions.]

states??: integer dirt and robot locations (ignore dirt amounts, etc.)
actions??: Left, Right, Suck, NoOp
goal test??: no dirt
path cost??: 1 per action (0 for NoOp)
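The vacuum formulation above can be sketched directly. Representing a state as (robot location, set of dirty squares) over squares {0, 1} gives the 2 × 4 = 8 states of the diagram; all names here are illustrative.

```python
# A sketch of the two-square vacuum world's transition model and goal test.

def vacuum_result(state, action):
    """Transition model RESULT(state, action)."""
    loc, dirt = state
    if action == "Left":
        return (0, dirt)
    if action == "Right":
        return (1, dirt)
    if action == "Suck":
        return (loc, dirt - {loc})   # clean the current square
    return state                     # NoOp

def goal_test(state):
    return not state[1]              # goal: no dirt anywhere

start = (0, frozenset({0, 1}))       # robot on the left, both squares dirty
s = vacuum_result(start, "Suck")
s = vacuum_result(s, "Right")
s = vacuum_result(s, "Suck")         # Suck, Right, Suck reaches the goal
```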
Example: The 8-puzzle

Start State      Goal State
 7 2 4            _ 1 2
 5 _ 6            3 4 5
 8 3 1            6 7 8

states??: integer locations of tiles (ignore intermediate positions)
actions??: move blank left, right, up, down (ignore unjamming etc.)
goal test??: = goal state (given)
path cost??: 1 per move
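The 8-puzzle formulation above fits in a few lines. A state is a tuple of the 9 cell contents read row by row (0 = the blank), and actions move the blank; the helper names are illustrative.

```python
# A sketch of the 8-puzzle's action set and transition model.

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def actions(state):
    """Blank moves that stay on the 3x3 board (no wrap-around)."""
    i = state.index(0)
    acts = []
    if i >= 3: acts.append("Up")
    if i <= 5: acts.append("Down")
    if i % 3 != 0: acts.append("Left")
    if i % 3 != 2: acts.append("Right")
    return acts

def result(state, action):
    """Swap the blank with the tile it moves onto."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # the start state shown above
goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)    # the goal state shown above
```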
State space vs. Search space

 State space: the set of physical configurations

 Search space: an abstract configuration represented by a search tree or
graph of possible solutions
 Search tree: models the sequences of actions
    Root: initial state
    Branches: actions
    Nodes: the results of actions. A node has: parent, children, depth,
    path cost, and an associated state in the state space.
 Expand: a function that, given a node, creates all of its children nodes
Search Space Regions

 The search space is divided into three regions:

(1) Explored (a.k.a. closed list, visited set)
(2) Frontier (a.k.a. open list, the fringe)
(3) Unexplored

 The essence of search is moving nodes from region (3) to (2) to (1), and
the essence of a search strategy is deciding the order of such moves.
Tree search algorithms

Basic idea:
    offline, simulated exploration of state space
    by generating successors of already-explored states (a.k.a. expanding states)

function Tree-Search(problem, strategy) returns a solution, or failure
    initialize the search tree using the initial state of problem
    loop do
        if there are no candidates for expansion then return failure
        choose a leaf node for expansion according to strategy
        if the node contains a goal state then return the corresponding solution
        else expand the node and add the resulting nodes to the search tree
    end
Tree search example

[Search tree for the Romania problem rooted at Arad. Expanding Arad yields
Sibiu, Timisoara, and Zerind; their children are (Arad, Fagaras, Oradea,
Rimnicu Vilcea), (Arad, Lugoj), and (Arad, Oradea) respectively.]


Implementation: States vs. Nodes

A state is a (representation of) a physical configuration.
A node is a data structure constituting part of a search tree;
it includes parent, children, depth, and path cost g(x).
States do not have parents, children, depth, or path cost!

[Figure: an 8-puzzle state alongside the search-tree node that refers to it;
the node records parent, action, depth = 6, and g = 6.]

The Expand function creates new nodes, filling in the various fields
and using the SuccessorFn of the problem to create the
corresponding states.
Implementation: General Tree Search

function Tree-Search(problem, fringe) returns a solution, or failure
    fringe ← Insert(Make-Node(Initial-State[problem]), fringe)
    loop do
        if fringe is empty then return failure
        node ← Remove-Front(fringe)
        if Goal-Test(problem, State[node]) then return Solution(node)
        fringe ← InsertAll(Expand(node, problem), fringe)

function Expand(node, problem) returns a set of nodes
    successors ← the empty set; state ← State[node]
    for each action, result in Successor-Fn(problem, state) do
        s ← a new Node
        Parent-Node[s] ← node; Action[s] ← action; State[s] ← result
        Path-Cost[s] ← Path-Cost[node] + Step-Cost(state, action, result)
        Depth[s] ← Depth[node] + 1
        add s to successors
    return successors
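The Tree-Search / Expand pair above can be sketched in Python. The fringe is a plain list, and the `pop` argument decides which node leaves it next; that choice alone defines the search strategy. The TinyProblem graph is illustrative.

```python
# A sketch of Tree-Search and Expand; the strategy is the `pop` function.

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0, depth=0):
        self.state, self.parent, self.action = state, parent, action
        self.path_cost, self.depth = path_cost, depth

def expand(node, problem):
    """Create the children of `node`, filling in every field."""
    children = []
    for action in problem.actions(node.state):
        result = problem.result(node.state, action)
        cost = node.path_cost + problem.step_cost(node.state, action, result)
        children.append(Node(result, node, action, cost, node.depth + 1))
    return children

def solution(node):
    """Follow parent pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return actions[::-1]

def tree_search(problem, pop):
    fringe = [Node(problem.initial)]
    while fringe:
        node = pop(fringe)            # strategy = choice of node to expand
        if problem.goal_test(node.state):
            return solution(node)
        fringe.extend(expand(node, problem))
    return None

class TinyProblem:                    # acyclic toy graph: S -> {A, B} -> G
    initial = "S"
    graph = {"S": {"A": 1, "B": 5}, "A": {"G": 9}, "B": {"G": 1}, "G": {}}
    def actions(self, s): return list(self.graph[s])
    def result(self, s, a): return a  # action "a" = "go to node a"
    def goal_test(self, s): return s == "G"
    def step_cost(self, s, a, r): return self.graph[s][a]

bfs_like = tree_search(TinyProblem(), lambda f: f.pop(0))  # FIFO fringe
dfs_like = tree_search(TinyProblem(), lambda f: f.pop())   # LIFO fringe
```

Popping from the front gives breadth-first behavior; popping from the back gives depth-first behavior, previewing the strategies that follow.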
State Space Graphs

▪ State space graph: a mathematical representation of a search problem
▪ Nodes are (abstracted) world configurations
▪ Arcs represent successors (action results)
▪ The goal test is a set of goal nodes (maybe only one)
▪ In a state space graph, each state occurs only once!
▪ We can rarely build this full graph in memory (it's too big), but it's a useful idea

[Figure: a tiny search graph for a tiny search problem, with start node S,
goal node G, and intermediate nodes a-h and p-r.]
Search strategies

A strategy is defined by picking the order of node expansion.

Strategies are evaluated along the following dimensions:
    completeness: does it always find a solution if one exists?
    time complexity: number of nodes generated/expanded
    space complexity: maximum number of nodes in memory
    optimality: does it always find a least-cost solution?

Time and space complexity are measured in terms of:
    b = maximum branching factor of the search tree
    d = depth of the least-cost solution
    C* = path cost of the least-cost solution
    m = maximum depth of the state space (may be ∞)
Uninformed Search Strategies

Uninformed strategies use only the information available in the
problem definition:

 Breadth-first search (BFS): expand the shallowest node
 Depth-first search (DFS): expand the deepest node
 Depth-limited search (DLS): depth-first with a depth limit
 Iterative deepening search (IDS): DLS with an increasing limit
 Uniform-cost search (UCS): expand the least-cost node
Breadth-first Search

Expand the shallowest unexpanded node.

Implementation:
    fringe is a FIFO queue, i.e., new successors go at the end

[Figure: BFS on a binary tree with root A, children B and C, and
grandchildren D-G; nodes are expanded level by level in the order
A, B, C, D, E, F, G.]
Breadth-First Search (BFS)

generalSearch(problem, queue)

[Weighted graph from the slides: start S, goal G; edges include S-A:5,
S-B:2, S-C:4, A-D:9, A-E:4, B-G:6, C-F:2, E-G:6, F-G:1, and D-H.]

# of nodes tested: 7, expanded: 6

expnd. node    Frontier list
               {S}
S (not goal)   {A,B,C}
A (not goal)   {B,C,D,E}
B (not goal)   {C,D,E,G}
C (not goal)   {D,E,G,F}
D (not goal)   {E,G,F,H}
E (not goal)   {G,F,H,G}
G (goal)       {F,H,G} (no expand)

path: S,B,G    cost: 8
Properties of breadth-first search

Complete?? Yes (if b is finite)

Time?? 1 + b + b^2 + b^3 + ... + b^d + b(b^d - 1) = O(b^(d+1)), i.e., exponential in d
Space?? O(b^(d+1)) (keeps every node in memory)
Optimal?? Yes if all step costs are equal (e.g., cost = 1 per step); not optimal in general
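These properties follow from the FIFO fringe. A runnable sketch on the traced example graph (GraphProblem and the graph dictionary are illustrative stand-ins; an explored set is added so states are not re-expanded):

```python
# Breadth-first search: the fringe is a FIFO queue of paths.

from collections import deque

class GraphProblem:
    def __init__(self, graph, initial, goal):
        self.graph, self.initial, self.goal = graph, initial, goal
    def actions(self, s): return list(self.graph[s])
    def result(self, s, a): return a    # action "a" = "go to node a"
    def goal_test(self, s): return s == self.goal

def breadth_first_search(problem):
    frontier = deque([[problem.initial]])   # FIFO queue of paths
    explored = set()
    while frontier:
        path = frontier.popleft()           # shallowest unexpanded node
        state = path[-1]
        if problem.goal_test(state):        # goal test on expansion, as in the trace
            return path
        explored.add(state)
        for a in problem.actions(state):
            child = problem.result(state, a)
            if child not in explored:
                frontier.append(path + [child])
    return None

graph = {"S": ["A", "B", "C"], "A": ["D", "E"], "B": ["G"], "C": ["F"],
         "D": ["H"], "E": ["G"], "F": ["G"], "G": [], "H": []}
found = breadth_first_search(GraphProblem(graph, "S", "G"))  # ["S", "B", "G"]
```

On the traced example this returns the shallowest goal path S, B, G, matching the trace.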
Depth-first Search

Expand the deepest unexpanded node.

Implementation:
    fringe = LIFO queue, i.e., put successors at the front

[Figure: DFS on a binary tree with root A, children B and C, grandchildren
D-G, and leaves H-O. The search dives down the leftmost branch (A, B, D, H),
then backtracks, keeping only the current path and unexpanded siblings.]
Depth-First Search (DFS)

generalSearch(problem, stack)

[Same weighted graph as in the BFS trace: start S, goal G.]

# of nodes tested: 6, expanded: 5

expnd. node    Frontier
               {S}
S (not goal)   {A,B,C}
A (not goal)   {D,E,B,C}
D (not goal)   {H,E,B,C}
H (not goal)   {E,B,C}
E (not goal)   {G,B,C}
G (goal)       {B,C} (no expand)

path: S,A,E,G    cost: 15
Properties of Depth-first Search

Complete?? No: fails in infinite-depth spaces and in spaces with loops.
    Modify to avoid repeated states along the path
    ⇒ complete in finite spaces
Time?? O(b^m): terrible if m is much larger than d,
    but if solutions are dense, may be much faster than breadth-first
Space?? O(bm), i.e., linear space!
Optimal?? No
Implementation: the fringe is a LIFO stack
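A sketch of DFS with the LIFO stack, including the fix mentioned above of skipping states already on the current path; passing a depth `limit` turns it into depth-limited search (DLS). GraphProblem and the graph are illustrative, mirroring the traced example.

```python
# Depth-first search: the fringe is a LIFO stack of paths.

class GraphProblem:
    def __init__(self, graph, initial, goal):
        self.graph, self.initial, self.goal = graph, initial, goal
    def actions(self, s): return list(self.graph[s])
    def result(self, s, a): return a
    def goal_test(self, s): return s == self.goal

def depth_first_search(problem, limit=None):
    frontier = [[problem.initial]]            # LIFO stack of paths
    while frontier:
        path = frontier.pop()                 # deepest unexpanded node
        state = path[-1]
        if problem.goal_test(state):
            return path
        if limit is not None and len(path) > limit:
            continue                          # cut off below the depth limit
        for a in reversed(problem.actions(state)):  # keep left-to-right order
            child = problem.result(state, a)
            if child not in path:             # avoid repeated states on the path
                frontier.append(path + [child])
    return None

graph = {"S": ["A", "B", "C"], "A": ["D", "E"], "B": ["G"], "C": ["F"],
         "D": ["H"], "E": ["G"], "F": ["G"], "G": [], "H": []}
found = depth_first_search(GraphProblem(graph, "S", "G"))  # ["S","A","E","G"]
```

On the traced example this dives S, A, D, H before backtracking and returns the deeper (costlier) path S, A, E, G, matching the trace.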
Iterative Deepening Search

 Combines the benefits of BFS and DFS.

 Idea: iteratively increase the depth limit until the depth of the
shallowest solution, d, is reached.

 Applies DLS with increasing limits.

 The algorithm stops if a solution is found or if DLS returns
failure (no solution).

 Because most of the nodes are at the bottom of the search tree, it
is not a big waste to iteratively re-generate the top.

 Let's take an example with a depth limit between 0 and 3.

Iterative Deepening Search

[Figure: four iterations of IDS on a binary tree rooted at A. Limit = 0
visits only A; limit = 1 regenerates A and expands B and C; limit = 2
continues to D, E, F, G; limit = 3 continues to H through O. Each iteration
regenerates the tree searched by the previous one before going one level
deeper.]
Iterative Deepening Search

function Iterative-Deepening-Search(problem) returns a solution
    inputs: problem, a problem
    for depth ← 0 to ∞ do
        result ← Depth-Limited-Search(problem, depth)
        if result ≠ cutoff then return result
    end
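A recursive sketch of the pseudocode above. Depth-limited search returns "cutoff" when the limit stopped it (so a deeper try may help) and None on outright failure; GraphProblem and the graph are illustrative, mirroring the traced example.

```python
# Iterative deepening: run DLS with limits 0, 1, 2, ...

class GraphProblem:
    def __init__(self, graph, initial, goal):
        self.graph, self.initial, self.goal = graph, initial, goal
    def actions(self, s): return list(self.graph[s])
    def result(self, s, a): return a
    def goal_test(self, s): return s == self.goal

def depth_limited_search(problem, limit, state=None, path=None):
    state = problem.initial if state is None else state
    path = [state] if path is None else path
    if problem.goal_test(state):
        return path
    if limit == 0:
        return "cutoff"
    cutoff = False
    for a in problem.actions(state):
        child = problem.result(state, a)
        if child in path:                     # avoid loops along the path
            continue
        res = depth_limited_search(problem, limit - 1, child, path + [child])
        if res == "cutoff":
            cutoff = True
        elif res is not None:
            return res
    return "cutoff" if cutoff else None

def iterative_deepening_search(problem):
    depth = 0
    while True:                               # for depth <- 0 to infinity
        res = depth_limited_search(problem, depth)
        if res != "cutoff":
            return res                        # a solution, or None (failure)
        depth += 1

graph = {"S": ["A", "B", "C"], "A": ["D", "E"], "B": ["G"], "C": ["F"],
         "D": ["H"], "E": ["G"], "F": ["G"], "G": [], "H": []}
found = iterative_deepening_search(GraphProblem(graph, "S", "G"))
```

Unlike plain DFS, which found S, A, E, G on this graph, IDS returns the shallowest goal path S, B, G.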
Properties of Iterative Deepening Search

Complete?? Yes
Time?? (d + 1)b^0 + d b^1 + (d - 1)b^2 + ... + b^d = O(b^d)

Space?? O(bd)

Optimal?? Yes, if step cost = 1
    Can be modified to explore a uniform-cost tree

Numerical comparison for b = 10 and d = 5, with the solution at the far-right leaf:

N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450
N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100

IDS does better because the other nodes at depth d are not expanded.
BFS can be modified to apply the goal test when a node is generated.
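The two totals above can be reproduced in a few lines (b = 10, d = 5; the b^0 root terms are ignored, as in the sums shown):

```python
# Node counts for IDS vs. BFS at b = 10, d = 5.
b, d = 10, 5

# IDS: a node at depth i is generated once per iteration i, i+1, ..., d,
# i.e. (d + 1 - i) times.
n_ids = sum((d + 1 - i) * b**i for i in range(1, d + 1))

# BFS (goal test on expansion): every node down to depth d, plus the
# children generated for the depth-d nodes expanded before the goal
# (b**(d + 1) - b of them).
n_bfs = sum(b**i for i in range(1, d + 1)) + (b**(d + 1) - b)

print(n_ids, n_bfs)   # 123450 1111100
```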
Summary of Uninformed Search algorithms

Collecting the properties derived above (b = branching factor, d = depth of
the shallowest solution, m = maximum depth of the state space):

Criterion    BFS          DFS      IDS
Complete?    Yes*         No       Yes
Time         O(b^(d+1))   O(b^m)   O(b^d)
Space        O(b^(d+1))   O(bm)    O(bd)
Optimal?     Yes**        No       Yes**

*  if b is finite
** if all step costs are equal
Uniform-cost search

• BFS will find the shortest path (fewest actions), which may be costly.
• We want the cheapest, not the shallowest, solution.
• Modify BFS: prioritize by cost, not depth → expand the node n
with the lowest path cost g(n).
• Explores paths in order of increasing cost.
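This modification is a one-line change conceptually: the BFS queue becomes a priority queue keyed on g(n). In the sketch below (illustrative names; edge costs mirror the traced example where shown, and the D-H cost, which the trace does not reveal, is set arbitrarily), Python's heapq serves as the min-priority queue.

```python
# Uniform-cost search: pop the frontier entry with the lowest g(n).

import heapq
from itertools import count

class GraphProblem:
    def __init__(self, graph, initial, goal):
        self.graph, self.initial, self.goal = graph, initial, goal
    def actions(self, s): return list(self.graph[s])
    def result(self, s, a): return a
    def goal_test(self, s): return s == self.goal
    def step_cost(self, s, a, r): return self.graph[s][a]

def uniform_cost_search(problem):
    tie = count()                             # breaks ties between equal costs
    frontier = [(0, next(tie), [problem.initial])]
    expanded = set()
    while frontier:
        g, _, path = heapq.heappop(frontier)  # node n with lowest g(n)
        state = path[-1]
        if problem.goal_test(state):
            return path, g
        if state in expanded:
            continue                          # stale duplicate queue entry
        expanded.add(state)
        for a in problem.actions(state):
            child = problem.result(state, a)
            if child not in expanded:
                g2 = g + problem.step_cost(state, a, child)
                heapq.heappush(frontier, (g2, next(tie), path + [child]))
    return None

graph = {"S": {"A": 5, "B": 2, "C": 4}, "A": {"D": 9, "E": 4}, "B": {"G": 6},
         "C": {"F": 2}, "D": {"H": 7}, "E": {"G": 6}, "F": {"G": 1},
         "G": {}, "H": {}}
found, cost = uniform_cost_search(GraphProblem(graph, "S", "G"))
```

On the traced example this returns the cheapest path S, C, F, G (cost 7), not the shallowest one S, B, G (cost 8) that BFS found.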
Uniform-Cost Search (UCS)

generalSearch(problem, priorityQueue)

[Same weighted graph as in the BFS trace: start S, goal G; frontier entries
show the accumulated path cost g(n).]

# of nodes tested: 6, expanded: 5

expnd. node    Frontier list
               {S:0}
S (not goal)   {B:2, C:4, A:5}
B (not goal)   {C:4, A:5, G:2+6}
C (not goal)   {A:5, F:4+2, G:8}
A (not goal)   {F:6, G:8, E:5+4, D:5+9}
F (not goal)   {G:4+2+1, G:8, E:9, D:14}
G (goal)       {G:8, E:9, D:14} (no expand)

path: S,C,F,G    cost: 7
Summary

 Problem formulation usually requires abstracting away real-world
details to define a state space that can feasibly be explored
 There is a variety of uninformed search strategies
 Iterative deepening search uses only linear space,
and not much more time than other uninformed algorithms
 Graph search can be exponentially more efficient than tree
search
Credit

• Artificial Intelligence: A Modern Approach. Stuart Russell and
Peter Norvig. Third Edition. Pearson Education.
http://aima.cs.berkeley.edu/

• (Slides adapted from Dr. Stuart Russell)
