Lecture3 - Problem Solving by Search

Artificial Intelligence (AI), a term coined by emeritus Stanford Professor John McCarthy in 1955, was defined by him as “the science and engineering of making intelligent machines”.

Introduction to Problem Solving by Search
LECTURE 3
Outline

⚫ Problem formulation: representing sequential problems.
⚫ Example problems.
⚫ Planning for solving sequential problems without uncertainty.
⚫ Basic search algorithms.


Environment Type Discussed in this Lecture

[Figure: a decision tree over environment properties (Fully Observable? Static? Deterministic? Sequential? Discrete?). Its branches lead to planning / heuristic search, control / cybernetics, vector search / constraint satisfaction, and continuous function optimization. This lecture takes the "yes" branch at every node: a fully observable, static, deterministic, sequential, discrete environment, handled by planning and heuristic search.]
Choice in a Deterministic, Known Environment

• Without uncertainty, choice is trivial in principle: choose what you know to be the best option.
• Trivial if the problem is represented in a look-up table.

Option      Value
Chocolate   10
Coffee      20
Book        15

This is the standard problem representation in decision theory (economics).
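With an explicit look-up table, choosing under certainty reduces to taking the argmax over the table. A minimal sketch, using the values from the table above:

```python
# Choice under certainty with an explicit look-up table reduces to argmax.
values = {"Chocolate": 10, "Coffee": 20, "Book": 15}

best = max(values, key=values.get)
print(best)  # Coffee
```

The hard part, as the next slide argues, begins when no such table is given.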
Computational Choice Under Certainty

⚫ But choice can be computationally hard if the problem information is represented differently.
⚫ Options may be structured, and the best option may need to be constructed.
  E.g., an option may consist of a path, a sequence of actions, a plan, or a strategy.
⚫ The value of options may be given implicitly rather than explicitly.
  E.g., the cost of a path needs to be computed from the map.
Problem Types

⚫ Deterministic, fully observable -> single-state problem
  The agent knows exactly which state it will be in; the solution is a sequence of actions.
⚫ Non-observable -> conformant problem
  The agent may have no idea where it is; the solution (if any) is a sequence.
⚫ Nondeterministic and/or partially observable -> contingency problem
  Percepts provide new information about the current state; the solution is a contingent plan or a policy; search and execution are often interleaved.
⚫ Unknown state space -> exploration problem (“online”)
Sequential Action Example

⚫ Deterministic, fully observable: single-state problem
  The agent knows exactly which state it will be in; the solution is a sequence.
  Vacuum world: everything is observed.
  Romania: the full map is observed.
⚫ Single-state: start in state #5. Solution? [Right, Suck]
Example: Romania

⚫ On holiday in Romania; currently in Arad.
⚫ Formulate goal:
  be in Bucharest
⚫ Formulate problem:
  states: various cities
  actions: drive between cities
⚫ Find solution:
  sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
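The cost of a candidate solution is computed from the map rather than read from a table. A small sketch, using the standard AIMA road distances (in km) for the three edges of the solution above:

```python
# Step costs (km) for the edges used by the example solution,
# taken from the standard AIMA Romania road map.
road = {
    ("Arad", "Sibiu"): 140,
    ("Sibiu", "Fagaras"): 99,
    ("Fagaras", "Bucharest"): 211,
}

def path_cost(path):
    """Additive path cost: sum of step costs over consecutive cities."""
    return sum(road[(a, b)] for a, b in zip(path, path[1:]))

print(path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # 450
```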
Example: Romania

Abstraction: the process of removing details from a representation. Is the map a good representation of the problem? What would be a good replacement?
Single-state problem formulation

A problem is defined by 4 items:

⚫ Initial state, e.g., “at Arad”
⚫ Successor function S(x) = set of action–state pairs
⚫ Goal test, which can be
  explicit, e.g., x = “at Bucharest”, or “checkmate” in chess
  implicit, e.g., NoDirt(x)
⚫ Path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be ≥ 0

A solution is a sequence of actions leading from the initial state to a goal state.
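The four items can be bundled into a single problem object. A minimal sketch (the field names are illustrative, not from the slides):

```python
from dataclasses import dataclass
from typing import Any, Callable, List, Tuple

@dataclass
class Problem:
    """The four items that define a single-state search problem."""
    initial: Any                                       # initial state, e.g. "Arad"
    successors: Callable[[Any], List[Tuple[Any, Any]]] # S(x) -> set of (action, state) pairs
    goal_test: Callable[[Any], bool]                   # explicit or implicit goal test
    step_cost: Callable[[Any, Any, Any], float]        # c(x, a, y), assumed >= 0
```

Any search algorithm that takes such an object needs nothing else about the domain.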
The successor function

⚫ Successor function: for a given state, returns a set of action/new-state pairs.
⚫ Vacuum-cleaner world: (A, dirty, clean) → (’Left’, (A, dirty, clean)), (’Right’, (B, dirty, clean)), (’Suck’, (A, clean, clean)), (’NoOp’, (A, dirty, clean))
⚫ Romania: In(Arad) → (Go(Timisoara), In(Timisoara)), (Go(Sibiu), In(Sibiu)), (Go(Zerind), In(Zerind))
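The vacuum-world successor function can be written directly, assuming states are (location, status of A, status of B) tuples and that Suck cleans the current square:

```python
# Successor function for the two-square vacuum world. States are
# (location, status_of_A, status_of_B) tuples, e.g. ("A", "dirty", "clean").
def successors(state):
    loc, a, b = state
    result = [("NoOp", state)]
    result.append(("Left", ("A", a, b)))   # moving left from A is a no-op
    result.append(("Right", ("B", a, b)))  # moving right from B is a no-op
    if loc == "A":
        result.append(("Suck", ("A", "clean", b)))
    else:
        result.append(("Suck", ("B", a, "clean")))
    return result

print(successors(("A", "dirty", "clean")))
```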
Size of space

⚫ 8-puzzle: 9!/2 = 181,440 states (easy)
⚫ 15-puzzle: ∼ 1.3 trillion states (pretty easy)
⚫ 24-puzzle: ∼ 10^25 states (hard)
⚫ TSP, 20 cities: 20! ≈ 2.43 × 10^18 tours (hard)
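A quick check of these counts with Python's standard library:

```python
import math

# Only half of the 9! tile permutations are reachable (a parity argument),
# hence 9!/2 states for the 8-puzzle.
print(math.factorial(9) // 2)  # 181440

# Tour orderings for a 20-city TSP: 20! ~ 2.43e18.
print(math.factorial(20))
```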
Selecting a state space

⚫ The real world is complex
  the state space must be abstracted for problem solving
⚫ (Abstract) state = set of real states
⚫ (Abstract) action = complex combination of real actions
  e.g., "Arad -> Zerind" represents a complex set of possible routes, detours, rest stops, etc.
⚫ (Abstract) solution = set of real paths that are solutions in the real world
⚫ Each abstract action should be "easier" than the original problem
Vacuum world state space graph

⚫ states?
⚫ actions?
⚫ goal test?
⚫ path cost?

Vacuum world state space graph

⚫ states? integer dirt and robot locations
⚫ actions? Left, Right, Suck
⚫ goal test? no dirt at all locations
⚫ path cost? 1 per action
Example: The 8-puzzle

⚫ states?
⚫ actions?
⚫ goal test?
⚫ path cost?

Example: The 8-puzzle

⚫ states? locations of tiles
⚫ actions? move blank left, right, up, down
⚫ goal test? = goal state (given)
⚫ path cost? 1 per move
[Note: finding an optimal solution for the n-puzzle family is NP-hard]
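The four actions translate into a small successor routine, assuming states are 9-tuples read row by row on the 3×3 board with 0 for the blank:

```python
# Actions for the 8-puzzle: move the blank (0) left/right/up/down.
# States are 9-tuples read row by row, e.g. (1, 2, 3, 4, 0, 5, 6, 7, 8).
def moves(state):
    i = state.index(0)                 # position of the blank
    r, c = divmod(i, 3)
    result = []
    for action, (dr, dc) in [("Left", (0, -1)), ("Right", (0, 1)),
                             ("Up", (-1, 0)), ("Down", (1, 0))]:
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:  # stay on the board
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]      # slide the adjacent tile into the blank
            result.append((action, tuple(s)))
    return result
```

A corner blank has two legal moves; an edge blank three; the center four.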
Example: robotic assembly

⚫ states? real-valued coordinates of robot joint angles and of the parts of the object to be assembled
⚫ actions? continuous motions of robot joints
⚫ goal test? complete assembly
⚫ path cost? time to execute
Problem-solving agents

Note: this is offline problem solving; the solution is executed “eyes closed.”
Tree search algorithms

⚫ Basic idea:
  offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states)
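The basic idea can be sketched as a generic loop: repeatedly pick a node from the frontier, test it, and otherwise expand it. This is an illustrative sketch, assuming a problem object with `initial`, `successors(state)`, and `goal_test(state)` (names are assumptions, not from the slides):

```python
from collections import deque

def tree_search(problem):
    """Generic tree search: simulated, offline exploration of the state space."""
    frontier = deque([(problem.initial, [])])  # (state, actions taken so far)
    while frontier:
        state, plan = frontier.popleft()       # pick a node to expand
        if problem.goal_test(state):
            return plan                        # sequence of actions to the goal
        for action, succ in problem.successors(state):
            frontier.append((succ, plan + [action]))
    return None                                # frontier exhausted: no solution
```

Swapping the frontier data structure (FIFO queue, stack, priority queue) yields the different search strategies discussed below.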
Tree search example

[Figures on three slides: successive expansions of the search tree, one level at a time.]
Search Graph vs. State Graph

⚫ Be careful to distinguish:
  Search tree: nodes are sequences of actions.
  State graph: nodes are states of the environment.
  We will also consider search graphs soon.
Search strategies

⚫ A search strategy is defined by picking the order of node expansion
⚫ Strategies are evaluated along the following dimensions:
  completeness: does it always find a solution if one exists?
  time complexity: number of nodes generated
  space complexity: maximum number of nodes in memory
  optimality: does it always find a least-cost solution?
⚫ Time and space complexity are measured in terms of
  b: maximum branching factor of the search tree
  d: depth of the least-cost solution
  m: maximum depth of the state space (may be ∞)
Search Strategies

⚫ Uninformed (blind) search
⚫ Informed search
⚫ Adversarial search (game theory)
Uninformed search strategies

⚫ Uninformed search strategies use only the information available in the problem definition
⚫ Uninformed (blind) search:
  Breadth-first search
  Depth-first search
  Depth-limited search
  Iterative deepening search
Breadth-first search

⚫ Expand the shallowest unexpanded node
⚫ Implementation:
  the frontier is a FIFO queue, i.e., new successors go at the end
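A minimal sketch of breadth-first search on an explicit state graph given as an adjacency dict (the graph representation is an assumption for illustration):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: the frontier is a FIFO queue, so the
    shallowest unexpanded node is always expanded first."""
    frontier = deque([[start]])          # queue of paths from the start
    explored = {start}
    while frontier:
        path = frontier.popleft()        # shallowest unexpanded node
        if path[-1] == goal:
            return path
        for succ in graph.get(path[-1], []):
            if succ not in explored:     # avoid re-generating states
                explored.add(succ)
                frontier.append(path + [succ])
    return None

romania = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
           "Sibiu": ["Fagaras", "Rimnicu Vilcea"],
           "Fagaras": ["Bucharest"],
           "Rimnicu Vilcea": ["Pitesti"],
           "Pitesti": ["Bucharest"]}
print(bfs(romania, "Arad", "Bucharest"))  # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Because new successors go at the end of the queue, BFS finds a path with the fewest edges; it is optimal only when all step costs are equal.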
Properties of breadth-first search

⚫ Complete? Time? Space? Optimal?

⚫ Complete? Yes (if b is finite)
⚫ Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
⚫ Space? O(b^(d+1)) (keeps every node in memory)
⚫ Optimal? Yes (if cost = 1 per step)
⚫ Space is the bigger problem (more than time)
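The node count behind the O(b^(d+1)) bound can be evaluated directly; even modest values of b and d already generate about a million nodes:

```python
# Worst-case nodes generated by BFS: every node on levels 0..d, plus the
# successors of all level-d nodes except the goal: b(b^d - 1) more.
def bfs_nodes(b, d):
    return sum(b**i for i in range(d + 1)) + b * (b**d - 1)

print(bfs_nodes(10, 5))  # roughly 1.1 million nodes for b = 10, d = 5
```

Since BFS keeps every generated node in memory, the same figure bounds its space use, which is why space, not time, is the binding constraint in practice.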


Thanks
