
Vietnam National University of HCMC

International University
School of Computer Science and Engineering

Introduction to Artificial Intelligence


(IT097IU)

Lecture 02: States and Searching -


Graph Searching Techniques

Instructor: Nguyen Trung Ky


Review
Agent, agent function, agent program
Rational agent and its performance measure
PEAS
Five major agent program skeletons
Simple Reflex Agents
Model-based Reflex Agents
Goal-based Agents
Utility-based Agents
Learning Agents

Today's class: solving problems by searching


Chapter 3:
Solving Problems by Searching
Problem Solving Agent
Problem-solving agent: a type of goal-based
agent
Decide what to do by finding sequences of
actions that lead to desirable states
Goal formulation: based on current situation
and agent’s performance measure
Problem formulation: deciding what actions
and states to consider, given a goal
The process of looking for such a sequence
of actions that reach the goal is called
search
Example: Romania Touring
On holiday in Romania; currently in Arad
Non-refundable ticket to fly out of Bucharest
tomorrow
Formulate goal (perf. evaluation):
be in Bucharest before the flight
Formulate problem:
states: various cities
actions: drive between cities
Search:
sequence of cities
Road Map of Romania
Problem-Solving Agents
Aspects of the Simple Problem Solver
Where does it fit into the agents and
environments discussion?
Static environment
Observable
Discrete
Deterministic
Open-loop system: percepts are ignored, breaking the
loop between agent and environment
Well-Defined Problems
A problem can be defined formally by five
components:
Initial state
Actions
Transition model: description of what each action
does (successor)
Goal test
Path cost
Problem Formulation – 5 Components
Initial state: In(Arad)
Actions: if the current state is In(Arad), actions = {Go(Sibiu),
Go(Timisoara), Go(Zerind)}
Transition model:
e.g., Results(In(Arad), Go(Sibiu)) = In(Sibiu)
Goal test determines whether a given state is a goal state
explicit, e.g. In(Bucharest)
implicit, e.g. checkmate
Path cost function that assigns a numeric cost to each path
e.g., distance traveled
step cost: c(x, a, y)

Solution: a path from the initial state to a goal state


Optimal solution: the path that has the lowest path cost
among all solutions; measured by the path cost
function
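The five components above can be sketched as an abstract Python class. This is an illustrative sketch, not code from the textbook; the names (SearchProblem, RouteProblem, step_cost) and the toy road map are assumptions made for the example.

```python
class SearchProblem:
    """Formal problem: initial state, actions, transition model,
    goal test, and path cost (given as per-step costs c(x, a, y))."""

    def __init__(self, initial):
        self.initial = initial

    def actions(self, state):
        raise NotImplementedError

    def result(self, state, action):                 # transition model
        raise NotImplementedError

    def goal_test(self, state):
        raise NotImplementedError

    def step_cost(self, state, action, next_state):  # c(x, a, y)
        return 1                                     # default: uniform cost


class RouteProblem(SearchProblem):
    """Tiny instance: drive between cities on a toy road map."""

    def __init__(self, initial, goal, roads):
        super().__init__(initial)
        self.goal = goal
        self.roads = roads                           # {city: {neighbor: distance}}

    def actions(self, state):
        return list(self.roads[state])               # action = destination city

    def result(self, state, action):
        return action

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, next_state):
        return self.roads[state][next_state]
```

A concrete problem then only has to fill in the five slots; every search algorithm later in the lecture can run against this interface unchanged.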
Problem Abstraction
The real world is complex and full of detail
Irrelevant details should be removed from the state
space and actions; this process is called abstraction
What’s the appropriate level of abstraction?
the abstraction is valid, if we can expand it into a solution
in the more detailed world
the abstraction is useful, if carrying out the actions in the
solution is easier than the original problem
remove as much detail as possible while retaining validity
and usefulness
Example: Vacuum-Cleaner
States
8 states
Initial state
any state
Actions
Left, Right, and Suck
Transition model
complete state space, see next page
Goal test
whether both squares are clean
Path cost
each step costs 1
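The 8 states and the transition model above can be written out directly. A minimal sketch, assuming squares are named "A" and "B" and a state is (agent location, dirt in A, dirt in B); the helper names are illustrative.

```python
from itertools import product

# 2 agent positions x 2 dirt flags x 2 dirt flags = 8 states
STATES = [(loc, dl, dr) for loc, dl, dr in product("AB", [0, 1], [0, 1])]

def result(state, action):
    """Transition model for the vacuum world."""
    loc, dl, dr = state
    if action == "Left":
        return ("A", dl, dr)
    if action == "Right":
        return ("B", dl, dr)
    if action == "Suck":                  # clean the current square
        return (loc, 0, dr) if loc == "A" else (loc, dl, 0)

def goal_test(state):
    _, dl, dr = state
    return dl == 0 and dr == 0            # both squares clean
```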
Complete State Space
Example: 8-puzzle

(finding a shortest solution is NP-complete)

States:
location of each tile and the blank
Initial state: any
Actions:
blank moves Left, Right, Up or Down
Transition model:
Given a state and action, returns the resulting state
Goal test: Goal configuration
Path cost: Each step costs 1
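The transition model above (the blank swapping with a neighbour) can be sketched as follows; representing a state as a tuple of 9 tiles in row-major order with 0 for the blank is an assumption of this example.

```python
def successors(state):
    """All states reachable by one blank move (Up, Down, Left, Right)."""
    i = state.index(0)                    # position of the blank
    r, c = divmod(i, 3)
    result = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        r2, c2 = r + dr, c + dc
        if 0 <= r2 < 3 and 0 <= c2 < 3:   # stay on the board
            j = r2 * 3 + c2
            s = list(state)
            s[i], s[j] = s[j], s[i]       # slide the tile into the blank
            result.append(tuple(s))
    return result
```

A corner blank yields 2 successors, an edge blank 3, and a centre blank 4, so the branching factor averages just under 3.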
Example: Robotic Assembly

States
real-valued coordinates of robot joint angles; parts of the
object to be assembled
Actions
continuous motions of robot joints
Transition model
States of robot joints after each action
Goal test
complete assembly
Path cost: time to execute
Missionaries & Cannibals
Both missionaries and cannibals must cross the river safely.
The boat can carry at most two people.
If the number of cannibals is more than the number of
missionaries anywhere, missionaries will be eaten.
Check on this link:
https://javalab.org/en/boat_puzzle_en/
Problem Formulation
States:
<m, c, b> representing the # of missionaries and the # of
cannibals, and the position of the boat
Initial state:
<3, 3, 1>
Actions:
take 1 missionary, 1 cannibal, 2 missionaries, 2 cannibals,
or 1 missionary and 1 cannibal across the river
Transition model:
state after an action
Goal test:
<0, 0, 0>
Path cost:
number of crossings
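The formulation above can be solved by a brute-force breadth-first search over the <m, c, b> states. This is a hedged sketch, assuming a boat capacity of two; the helper names (safe, successors, bfs) are chosen for illustration.

```python
from collections import deque

START, GOAL = (3, 3, 1), (0, 0, 0)
# (missionaries, cannibals) in the boat for each action
MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]

def safe(m, c):
    """Missionaries are never outnumbered on either bank (or absent)."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    m, c, b = state
    sign = -1 if b == 1 else 1            # boat leaves the bank it is on
    for dm, dc in MOVES:
        m2, c2 = m + sign * dm, c + sign * dc
        if 0 <= m2 <= 3 and 0 <= c2 <= 3 and safe(m2, c2):
            yield (m2, c2, 1 - b)

def bfs(start, goal):
    """Shortest sequence of states from start to goal."""
    frontier, explored = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for s in successors(path[-1]):
            if s not in explored:
                explored.add(s)
                frontier.append(path + [s])
    return None
```

Since BFS minimises the number of crossings (the path cost here), it recovers the classic 11-crossing solution.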
Real-World Problems
Touring problem: visit every city at least
once, starting in Arad and ending in
Bucharest
Traveling salesman problem: visit every city exactly once
Robot navigation
Internet searching: software robots
Searching for Solutions

Search tree: generated by initial state and
possible actions

Basic idea:
offline, simulated exploration of state space by
generating successors of already-explored states
(expanding states)
the choice of which state to expand is determined
by search strategy
Tree Search Example

Terminologies
Frontier: set of all leaf nodes available for
expansion at any given point
Repeated state
Loopy path: Arad to Sibiu to Arad
Redundant path: more than one way to get from
one state to another

Sometimes, redundant paths are


unavoidable
Sliding-block puzzle
General Tree Search Algorithm
Avoiding Repeated States
Failure to detect repeated states can turn a linear
problem into an exponential one!
Algorithms that forget their history are doomed to
repeat it

state space size: d + 1 => search tree leaves: 2^d
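The blow-up above can be demonstrated on a chain of d + 1 states where each state reaches the next by two redundant actions: tree search enumerates every path (2^d leaves), while keeping an explored set expands each state once (d + 1 expansions). A small illustrative simulation:

```python
def tree_search_leaves(d):
    """Leaves generated by tree search on the redundant chain 0..d."""
    frontier, leaves = [0], 0
    while frontier:
        s = frontier.pop()
        if s == d:
            leaves += 1
        else:
            frontier += [s + 1, s + 1]    # two redundant successors
    return leaves

def graph_search_expansions(d):
    """States expanded by graph search on the same chain."""
    frontier, explored = [0], set()
    while frontier:
        s = frontier.pop()
        if s in explored:
            continue                      # repeated state detected
        explored.add(s)
        if s < d:
            frontier += [s + 1, s + 1]
    return len(explored)
```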


General Graph Search Algorithm

We augment Tree-Search with an explored set, which remembers
every expanded node
Graph Search Examples
Implementation: States vs. Nodes
A state is a representation of a physical configuration

A node is a data structure constituting part of a search tree


includes state, parent node, action, path cost g(n), depth

A solution path can be easily extracted


CHILD-NODE Function
The CHILD-NODE function takes a parent
node and an action and returns the resulting
child node
function CHILD-NODE(problem, parent, action) returns a node
  return a node with
    STATE = problem.RESULT(parent.STATE, action)
    PARENT = parent
    ACTION = action
    PATH-COST = parent.PATH-COST + problem.STEP-COST(parent.STATE, action)
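The pseudocode above translates almost line for line into Python. A sketch: the Node dataclass and the assumption that `problem` exposes result() and step_cost() are choices of this example, not a fixed API.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any
    parent: Optional["Node"] = None
    action: Any = None
    path_cost: float = 0.0

def child_node(problem, parent, action):
    """Build the child node reached by applying `action` to `parent`."""
    state = problem.result(parent.state, action)
    return Node(state=state,
                parent=parent,
                action=action,
                path_cost=parent.path_cost
                          + problem.step_cost(parent.state, action, state))

def solution(node):
    """Extract the action sequence by following parent pointers back."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```

The parent pointers are what make extracting a solution path easy: walk from the goal node back to the root and reverse.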
Frontier and Explored Set
Goal of frontier: the next node to expand
can be easily located in the frontier
Possible data structures?

Goal of explored set: efficient checking for


repeated states
Possible data structures?

Search Strategies
A search strategy is defined by picking the order of node
expansion

Strategies are evaluated along the following dimensions:


completeness: does it always find a solution if one exists?
optimality: does it always find a least-cost solution?
time complexity: number of nodes generated
space complexity: maximum number of nodes in memory

Time and space complexity are measured in terms of


b: maximum branching factor of the search tree
d: depth of the least-cost solution
m: maximum depth of the state space (may be ∞)

Search cost (time), total cost (time+space)


Uninformed Search Strategies
Uninformed search (blind search) strategies use
only the information available in the problem
definition

Strategies that know whether one non-goal state is


better than another are called informed search or
heuristic search

General uninformed search strategies:


Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening search
Breadth-First Search
Expand shallowest unexpanded node
Implementation:
Frontier is a FIFO queue, i.e., new successors go
at end
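A minimal sketch of breadth-first graph search with the FIFO frontier described above. The goal test is applied when a node is generated (a standard BFS optimization); `successors` is an assumed helper mapping a state to its neighbour states.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """Return a shallowest path from start to a goal state, or None."""
    if goal_test(start):
        return [start]
    frontier = deque([start])             # FIFO queue
    parent = {start: None}                # doubles as the explored set
    while frontier:
        state = frontier.popleft()        # shallowest node first
        for s in successors(state):
            if s not in parent:
                parent[s] = state
                if goal_test(s):          # test at generation time
                    path = [s]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return list(reversed(path))
                frontier.append(s)        # new successors go at the end
    return None
```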
BFS on a Graph
Analysis of Breadth-First Search
Complete?
Yes (if b is finite), the shallowest solution is returned
Time?
b + b^2 + b^3 + … + b^d = O(b^d)
Space?
O(b^d) (keeps every node in memory)
Optimal?
Yes if step costs are all identical or path cost is a
nondecreasing function of the depth of the node

Space is the bigger problem (more than time)


Time requirement is still a major factor
How Bad is BFS?
With b = 10; 1 million nodes/sec; 1k bytes/node
It takes 13 days for the solution to a problem with
search depth 12 (10^12 nodes)
350 years at depth 16

Memory is more of a problem than time


Requires 103 GB when d = 8
Exponential-complexity search problems cannot be
solved by uninformed methods for any but the
smallest instances
Uniform-Cost Search
Expand least-cost unexpanded node
Implementation:
Frontier = priority queue ordered by path cost g(n)
Example, shown on board, from Sibiu to Bucharest
breadth-first = uniform-cost search when?
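The Sibiu-to-Bucharest example mentioned above can be run through a heapq-based priority queue ordered by g(n). A hedged sketch: `successors(state)` is assumed to yield (next_state, step_cost) pairs, and the distances below are the standard textbook road-map values.

```python
import heapq

def uniform_cost_search(start, goal_test, successors):
    """Return (cost, path) of a least-cost path, or None."""
    frontier = [(0, start, [start])]      # entries: (g, state, path)
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if goal_test(state):              # test at expansion => optimal
            return g, path
        if g > best_g.get(state, float("inf")):
            continue                      # stale frontier entry, skip
        for s, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(s, float("inf")):
                best_g[s] = g2            # found a cheaper path to s
                heapq.heappush(frontier, (g2, s, path + [s]))
    return None
```

Note the difference from BFS: the goal test waits until the node is popped, because a goal may first be generated along a suboptimal path (e.g. Sibiu-Fagaras-Bucharest at 310 vs. the 278 route through Rimnicu Vilcea and Pitesti).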

Uniform-Cost Search
Analysis
Complete?
Yes, if step cost ≥ ε

Time?
O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
# of nodes with g ≤ cost of optimal solution

Space?
O(b^⌈C*/ε⌉)
# of nodes with g ≤ cost of optimal solution

Optimal?
Yes – nodes expanded in increasing order of g(n)
Uniform-Cost Search is Optimal
Uniform-cost search expands nodes in order
of their optimal path cost
Hence, the first goal node selected for
expansion must be the optimal solution
Depth-First Search
Expand deepest unexpanded node
Implementation:
frontier = LIFO queue, i.e., put successors at front
Or a recursive function
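The recursive variant mentioned above can be sketched in a few lines. This version checks for repeated states along the current path only, which avoids loopy paths while keeping linear space; the helper names are illustrative.

```python
def depth_first_search(state, goal_test, successors, path=None):
    """Return a path from `state` to a goal state, or None."""
    path = (path or []) + [state]
    if goal_test(state):
        return path
    for s in successors(state):
        if s not in path:                 # avoid loopy paths only
            found = depth_first_search(s, goal_test, successors, path)
            if found:
                return found              # first (deepest-first) hit wins
    return None
```

Unlike BFS, the path returned is whatever the expansion order happens upon first, so it is not guaranteed to be shortest.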
Properties of DFS
Properties of DFS depend strongly on
whether the graph-search or tree-search
version is used
Analysis of DFS + Tree Search
Complete?
No: fails in infinite-depth spaces, or spaces with loops
Modify to avoid repeated states along path
=> complete in finite spaces
Time?
O(b^m): terrible if m is much larger than d
but if solutions are dense, may be much faster than
breadth-first
Space?
O(b·m), i.e., linear space!
Optimal?
No
Analysis of DFS + Graph Search
Complete?
No: also fails in infinite-depth spaces
Yes: for finite state spaces
Time?
O(b^m): terrible if m is much larger than d
but if solutions are dense, may be much faster than
breadth-first
Space?
Not linear any more, because of explored set
Optimal?
No
Backtracking Search
Backtracking search is a variant of DFS
Only one successor is generated at a time rather
than all successors
Each partially expanded node remembers which
successor to generate next

Memory requirement: O(m) vs. O(b·m)


Depth-Limited Search
The same as depth-first search, but with a depth limit l: nodes
at depth l are treated as if they have no successors

Complete? Time? Space? Optimal?


Iterative Deepening DF-Search
Gradually increase the depth limit until a goal is
found
Combines the benefits of depth-first and breadth-first search
Depth Limit = 0
Depth Limit = 1

Depth Limit = 2
Depth Limit = 3
Analysis of Iterative Deepening Search
Number of nodes generated in a depth-limited search to depth
d with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d

Number of nodes generated in an iterative deepening search


to depth d with branching factor b:
N_IDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2) + 2b^(d-1) + b^d

For b = 10, d = 5,
N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456

Overhead = (123,456 - 111,111)/111,111 = 11%

IDS is the preferred uninformed search method when search


space is large and depth of solution is unknown
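The procedure described above (repeated depth-limited DFS with an increasing limit) can be sketched as follows; `successors` and the default `max_depth` cap are assumptions of this example.

```python
def depth_limited(state, goal_test, successors, limit):
    """DFS that treats nodes at depth `limit` as having no successors."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return None                       # cutoff reached
    for s in successors(state):
        found = depth_limited(s, goal_test, successors, limit - 1)
        if found:
            return [state] + found
    return None

def iterative_deepening(start, goal_test, successors, max_depth=50):
    """Raise the depth limit until a goal is found (or max_depth hit)."""
    for limit in range(max_depth + 1):
        found = depth_limited(start, goal_test, successors, limit)
        if found:
            return found
    return None
```

Because the limit grows one level at a time, the first solution found is a shallowest one, which is why IDS inherits BFS-like optimality under identical step costs while using only DFS-like memory.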
Analysis, Continue
Complete?
Yes
Time?
(d+1)b^0 + d·b^1 + (d-1)b^2 + … + b^d = O(b^d)
Space?
O(b·d) (tree search version)
Optimal?
Yes, if step costs are identical or path cost is a
nondecreasing function of the depth of the node
Summary of Uninformed Tree
Search Strategies

Complete and optimal under certain conditions


Discussion on bidirectional search
Analysis of Graph Search
Much more efficient than Tree-Search
Time and space are proportional to the size of the
state space

Optimality:
uniform-cost search or breadth-first search with identical
step costs are still optimal even if it returns the first path
found
iterative deepening is optimal if step costs are identical or
path cost is a nondecreasing function of node depth

Tradeoff: the graph-search versions of depth-first and
iterative deepening search no longer use linear space
Bidirectional Search
Runs two simultaneous searches
Forward from initial state
Backward from goal state
Summary
We have covered methods for selecting actions in
environments that are deterministic, observable,
static, and completely known

Problem formulation requires abstraction

Uninformed search strategies
