PMSCS 656 Lec2 Searching I

The document discusses intelligent agents and problem solving agents. It defines intelligent agents as autonomous entities that observe their environment, act on it to achieve goals, and may learn or use knowledge. A problem solving agent formulates goals and problems as states and actions, then finds solutions. Search problems are defined by their state space, initial state, successor function, goal test, and path cost. Basic search concepts like search trees, nodes, and strategies are explained. Blind search strategies like breadth-first, depth-first, and uniform cost are described along with their properties.

PMSCS 656
Artificial Intelligence and Expert System

Searching-1
Lecture-2

Md. Rafsan Jani


Intelligent Agent
• In artificial intelligence, an intelligent agent
(IA) is an autonomous entity that observes its
environment through sensors, acts upon it
using actuators, and directs its activity
toward achieving goals.
• Intelligent agents may also learn or use
knowledge to achieve their goals.
• They may be very simple or very complex.
• A reflex machine, such as a thermostat, is
considered an example of an intelligent
agent.
Problem-Solving Agent
[Diagram: the agent perceives the environment through sensors, selects an action (?), and acts on the environment through actuators]
• Formulate Goal
• Formulate Problem
– States
– Actions
• Find Solution
Example Problem
[Diagram: map from the start street to a street with parking]
Looking for Parking
• Going home:
– need to find street parking
• Formulate Goal:
– Car is parked
• Formulate Problem:
– States: street with parking and car at that street
– Actions: drive between street segments
• Find Solution:
– a sequence of street segments, ending at a
street with parking
Problem Formulation
[Diagram: search produces a path from the start street to a street with parking]
Search Problem
• State space
– each state is an abstract representation of the
environment
– the state space is discrete
• Initial state
• Successor function
• Goal test
• Path cost
Search Problem

• State space
• Initial state:
– usually the current state
– sometimes one or several hypothetical
states (“what if …”)
• Successor function
• Goal test
• Path cost
Search Problem
• State space
• Initial state
• Successor function:
– state → subset of states
– an abstract representation of the possible actions
• Goal test
• Path cost
Search Problem
• State space
• Initial state
• Successor function
• Goal test:
– usually a condition
– sometimes the description of a state
• Path cost
Search Problem
• State space
• Initial state
• Successor function
• Goal test
• Path cost:
– path → positive number
– usually, path cost = sum of step costs
– e.g., number of moves of the empty tile
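The five components above can be bundled into one object. The sketch below is a minimal Python illustration (class and field names are my own, not from any library); states are plain strings and the toy state space is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Tuple

@dataclass
class SearchProblem:
    # Initial state: usually the current state
    initial_state: str
    # Successor function: state -> iterable of (next_state, step_cost)
    successors: Callable[[str], Iterable[Tuple[str, float]]]
    # Goal test: usually a condition on a state
    is_goal: Callable[[str], bool]
    # Path cost = sum of step costs returned by `successors`

# Toy state space: A -> B -> C, each step with cost 1
edges = {"A": [("B", 1.0)], "B": [("C", 1.0)], "C": []}
problem = SearchProblem("A", lambda s: edges[s], lambda s: s == "C")
```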
Assumptions in Basic Search
• The environment is static
• The environment is discretizable
• The environment is observable
• The actions are deterministic
Search Space Size
• Unlike search in elementary data structures, AI
search often encounters search spaces that are
too large to enumerate
• AI search typically does not materialize the entire
search graph or state space
• Examples
– Scheduling CS classes such that every student in
every program of study can take every class they
wish
– Search for shortest path that covers all streets
(Travelling Salesman Problem)
Search Space Size
• Scheduling CS classes such that every
student in every program of study can
take every class they wish

• States = ?
• State Space Size = ?
• Search Time = ?
Search Space Size
• Search for shortest path that covers all
streets (Travelling Salesman Problem)

• State = ?
• State Space Size = ?
• Search Time = ?
Simple Agent Algorithm
Problem-Solving-Agent
1. initial-state ← sense/read state
2. goal ← select/read goal
3. successor ← select/read action models
4. problem ← (initial-state, goal, successor)
5. solution ← search(problem)
6. perform(solution)
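The six steps above can be sketched as a Python function. Everything here is illustrative scaffolding: the sensing, search, and execution routines are passed in as hypothetical callables, and the toy `search` simply walks a successor table.

```python
def problem_solving_agent(sense_state, select_goal, select_successors,
                          search, perform):
    # Steps 1-3: read the initial state, the goal, and the action models
    initial_state = sense_state()
    goal = select_goal()
    successor = select_successors()
    # Step 4: assemble the problem
    problem = (initial_state, goal, successor)
    # Steps 5-6: search for a solution, then execute it
    solution = search(problem)
    return perform(solution)

# Toy wiring: "search" just follows the successor table to the goal.
def toy_search(problem):
    state, goal, successor = problem
    path = [state]
    while state != goal:
        state = successor[state]
        path.append(state)
    return path

result = problem_solving_agent(
    lambda: "A", lambda: "C", lambda: {"A": "B", "B": "C"},
    toy_search, lambda solution: solution)
```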
Basic Search Concepts
• Search tree
• Search node
• Node expansion
• Search strategy: At each stage it determines
which node to expand
Node Data Structure
• STATE
• PARENT
• ACTION
• COST
• DEPTH

Note: if a state description is large, it may be
preferable to represent only the initial state and
(re-)generate the other states when needed
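A minimal sketch of this node structure in Python (field names follow the slide; the `path` helper is my own addition). Following PARENT links back to the root is how the solution path is reconstructed.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    state: object
    parent: Optional["Node"] = None   # PARENT link, None for the root
    action: Optional[str] = None      # ACTION that produced this node
    cost: float = 0.0                 # COST of the path so far
    depth: int = 0                    # DEPTH in the search tree

def path(node):
    # Reconstruct the path root -> node by following PARENT links.
    states = []
    while node is not None:
        states.append(node.state)
        node = node.parent
    return list(reversed(states))

root = Node("A")
child = Node("B", parent=root, action="go-B", cost=1.0, depth=1)
```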
Fringe
• Set of search nodes that have not been
expanded yet
• Implemented as a queue FRINGE
– INSERT(node,FRINGE)
– REMOVE(FRINGE)
• The ordering of the nodes in FRINGE
defines the search strategy
Search Algorithm
1. If GOAL?(initial-state) then return initial-state
2. INSERT(initial-node,FRINGE)
3. Repeat:
   If FRINGE is empty then return failure
   n ← REMOVE(FRINGE)
   s ← STATE(n)
   For every state s’ in SUCCESSORS(s)
   • Create a node n’
   • If GOAL?(s’) then return path or goal state
   • INSERT(n’,FRINGE)
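The algorithm above can be sketched in Python. This is an illustrative simplification: the fringe stores whole paths instead of node objects so a solution path can be returned, and the INSERT/REMOVE ordering is reduced to a FIFO/LIFO switch (FIFO gives breadth-first, LIFO depth-first). As in the slide, the goal test is applied when a state is generated.

```python
from collections import deque

def tree_search(initial_state, successors, is_goal, lifo=False):
    if is_goal(initial_state):
        return [initial_state]
    fringe = deque([[initial_state]])
    while fringe:
        # REMOVE(FRINGE): the ordering here defines the strategy
        path = fringe.pop() if lifo else fringe.popleft()
        for s2, _cost in successors(path[-1]):
            if is_goal(s2):
                return path + [s2]
            fringe.append(path + [s2])   # INSERT(n', FRINGE)
    return None  # failure: fringe is empty

# Hypothetical toy state space: A branches to B and C, B leads to D.
edges = {"A": [("B", 1), ("C", 1)], "B": [("D", 1)], "C": [], "D": []}
```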
Search Strategies
• A strategy is defined by picking the order of node
expansion
• Performance Measures:
– Completeness – does it always find a solution if one exists?
– Time complexity – number of nodes generated/expanded
– Space complexity – maximum number of nodes in memory
– Optimality – does it always find a least-cost solution?
• Time and space complexity are measured in terms of
– b – maximum branching factor of the search tree
– d – depth of the least-cost solution
– m – maximum depth of the state space (may be ∞)
Remark
• Some problems formulated as search
problems are NP-hard problems. We
cannot expect to solve such a problem
in less than exponential time in the
worst case
• But we can nevertheless strive to solve
as many instances of the problem as
possible
Blind vs. Heuristic Strategies
• Blind (or uninformed) strategies do not
exploit any of the information contained in
a state

• Heuristic (or informed) strategies exploit
such information to assess that one node
is “more promising” than another
Blind Strategies
• Breadth-first (step cost = 1)
– Bidirectional
• Depth-first
– Depth-limited
– Iterative deepening
• Uniform-Cost (step cost = c(action) > 0)
Breadth-First Strategy
New nodes are inserted at the end of FRINGE
[Tree: root 1 with children 2 and 3; nodes 2 and 3 have children 4, 5 and 6, 7]
FRINGE = (1)
FRINGE = (2, 3)
FRINGE = (3, 4, 5)
FRINGE = (4, 5, 6, 7)
Evaluation

• b: branching factor
• d: depth of shallowest goal node
• Complete
• Optimal if step cost is 1
• Number of nodes generated:
1 + b + b² + … + b^d + b(b^d − 1) = O(b^(d+1))
• Time and space complexity is O(b^(d+1))
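A quick sanity check of the formula above (helper name is mine): the b(b^d − 1) term counts the children generated while expanding the depth-d nodes other than the goal.

```python
def bfs_nodes_generated(b, d):
    # Nodes on levels 0..d of the tree ...
    levels = sum(b**i for i in range(d + 1))
    # ... plus children generated by expanding all depth-d nodes
    # except the goal: the b(b^d - 1) term.
    return levels + b * (b**d - 1)
```

For b = 10, d = 2 this gives 111 + 990 = 1101 nodes, consistent with O(b^(d+1)).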
Time and Memory Requirements

d     #Nodes    Time       Memory
2     100       .01 msec   11 Kbytes
4     10,000    1 msec     1 Mbyte
6     ~10^6     1 sec      100 Mb
8     ~10^8     100 sec    10 Gbytes
10    ~10^10    2.8 hours  1 Tbyte
12    ~10^12    11.6 days  100 Tbytes
14    ~10^14    3.2 years  10,000 Tb

Assumptions: b = 10; 1,000,000 nodes/sec; 100 bytes/node
Bidirectional Strategy

2 fringe queues: FRINGE1 and FRINGE2

Time and space complexity = O(b^(d/2)) << O(b^d)


Depth-First Strategy
New nodes are inserted at the front of FRINGE
[Tree: root 1 with children 2 and 3; node 2 has children 4 and 5]
FRINGE = (1)
FRINGE = (2, 3)
FRINGE = (4, 5, 3)
[The remaining slides step through expanding 4 and 5, backtracking, and expanding the subtree of 3]
Evaluation
• b: branching factor
• d: depth of shallowest goal node
• m: maximal depth of a leaf node
• Complete only for finite search tree
• Not optimal
• Number of nodes generated:
1 + b + b² + … + b^m = O(b^m)
• Time complexity is O(b^m)
• Space complexity is O(bm)
Depth-Limited Strategy
• Depth-first with depth cutoff k (maximal
depth below which nodes are not
expanded)

• Three possible outcomes:


– Solution
– Failure (no solution)
– Cutoff (no solution within cutoff)
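The three outcomes can be made explicit in a recursive sketch (function and sentinel names are illustrative). The key point is that CUTOFF and FAILURE are distinct: FAILURE means no solution exists in the explored tree, while CUTOFF means a solution may still exist below depth k.

```python
CUTOFF, FAILURE = "cutoff", "failure"

def depth_limited(state, successors, is_goal, k):
    # Returns a solution path, FAILURE, or CUTOFF.
    if is_goal(state):
        return [state]
    if k == 0:
        return CUTOFF  # depth limit reached before the goal test failed deeper
    cutoff_seen = False
    for s2, _cost in successors(state):
        result = depth_limited(s2, successors, is_goal, k - 1)
        if result == CUTOFF:
            cutoff_seen = True
        elif result != FAILURE:
            return [state] + result
    return CUTOFF if cutoff_seen else FAILURE

# Hypothetical chain A -> B -> C
edges = {"A": [("B", 1)], "B": [("C", 1)], "C": []}
```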
Iterative Deepening Strategy
Repeat for k = 0, 1, 2, …:
Perform depth-first with depth cutoff k

• Complete
• Optimal if step cost = 1
• Time complexity is:
(d+1)(1) + db + (d−1)b² + … + (1)b^d = O(b^d)
• Space complexity is O(bd)
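The repeat-for-k loop above can be sketched compactly; for simplicity the inner depth-limited search here returns only found/not-found rather than distinguishing cutoff from failure, and `max_k` is an illustrative safety bound.

```python
def iterative_deepening(state, successors, is_goal, max_k=50):
    def dls(s, k):
        # Depth-first search with depth cutoff k
        if is_goal(s):
            return [s]
        if k == 0:
            return None
        for s2, _cost in successors(s):
            r = dls(s2, k - 1)
            if r is not None:
                return [s] + r
        return None

    for k in range(max_k + 1):   # k = 0, 1, 2, ...
        r = dls(state, k)
        if r is not None:
            return r
    return None

# Hypothetical chain A -> B -> C
edges = {"A": [("B", 1)], "B": [("C", 1)], "C": []}
```

Re-expanding the shallow levels at every iteration costs little, because the deepest level dominates the b^d total.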
Comparison of Strategies
• Breadth-first is complete and optimal, but
has high space complexity
• Depth-first is space efficient, but neither
complete nor optimal
• Iterative deepening is asymptotically optimal
Repeated States
• No (search tree is finite): 8-queens
• Few: assembly planning
• Many (search tree is infinite): 8-puzzle and robot navigation
[Diagrams: example boards for each case, including an 8-puzzle configuration]
Avoiding Repeated States
• Requires comparing state descriptions
• Breadth-first strategy:
– Keep track of all generated states
– If the state of a new node already exists, then
discard the node
Avoiding Repeated States
• Depth-first strategy:
– Solution 1:
• Keep track of all states associated with nodes in current tree
• If the state of a new node already exists, then discard the
node
⇒ Outcome: avoids loops

– Solution 2:
• Keep track of all states generated so far
• If the state of a new node has already been generated, then
discard the node
⇒ Outcome: space complexity of breadth-first
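The breadth-first bookkeeping described here (and Solution 2 above) amounts to keeping a set of generated states. A minimal sketch, with illustrative names and a hypothetical toy graph containing a loop:

```python
from collections import deque

def breadth_first_graph_search(initial, successors, is_goal):
    if is_goal(initial):
        return [initial]
    seen = {initial}              # all generated states so far
    fringe = deque([[initial]])   # paths, FIFO order
    while fringe:
        path = fringe.popleft()
        for s2, _cost in successors(path[-1]):
            if s2 in seen:
                continue          # repeated state: discard the node
            seen.add(s2)
            if is_goal(s2):
                return path + [s2]
            fringe.append(path + [s2])
    return None

# A and B point back at each other, so tree search would loop forever.
graph = {"A": [("B", 1), ("A", 1)], "B": [("A", 1), ("C", 1)], "C": []}
```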
Detecting Identical States
• Use explicit representation of state space
• Use hash-code or similar representation
Uniform-Cost Strategy
• Each step has some cost ε > 0.
• The cost of the path to each fringe node N is
g(N) = Σ of the step costs.
• The goal is to generate a solution path of minimal cost.
• The queue FRINGE is sorted in increasing cost.
[Example graph: S→A cost 1, S→B cost 5, S→C cost 15; A→G cost 10, B→G cost 5, C→G cost 5; path costs to G: 11 via A, 10 via B]
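A cost-sorted FRINGE is naturally a priority queue. The sketch below uses Python's `heapq` and applies the goal test on removal, as in the modified algorithm that follows; it is run on the example graph above (edge costs from the slide).

```python
import heapq

def uniform_cost(initial, successors, is_goal):
    # FRINGE entries: (g(N), state, path), popped in increasing g(N)
    fringe = [(0.0, initial, [initial])]
    best = {}  # cheapest cost found so far per state
    while fringe:
        g, s, path = heapq.heappop(fringe)
        if is_goal(s):
            return path, g   # goal test on removal => minimal cost
        if s in best and best[s] <= g:
            continue         # a cheaper path to s was already expanded
        best[s] = g
        for s2, c in successors(s):
            heapq.heappush(fringe, (g + c, s2, path + [s2]))
    return None, float("inf")

# The slide's example graph
graph = {"S": [("A", 1), ("B", 5), ("C", 15)],
         "A": [("G", 10)], "B": [("G", 5)], "C": [("G", 5)], "G": []}
```

Note that S→A→G (cost 11) is generated first, but S→B→G (cost 10) reaches the front of the queue before any goal node is removed, so the cheaper path wins.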
Modified Search Algorithm
1. INSERT(initial-node,FRINGE)
2. Repeat:
   If FRINGE is empty then return failure
   n ← REMOVE(FRINGE)
   s ← STATE(n)
   If GOAL?(s) then return path or goal state
   For every state s’ in SUCCESSORS(s)
   • Create a node n’
   • INSERT(n’,FRINGE)
Branch and Bound
• If search involves optimization (e.g.
shortest path for covering all streets)
– Can record the best path so far, and bound
the search when the current path becomes
longer than current best path.
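The bounding idea can be sketched as a depth-first enumeration that records the best complete path so far and prunes partial paths that already cost at least as much (names and the toy graph are illustrative; positive step costs are assumed so extending a path never reduces its cost).

```python
def branch_and_bound(initial, successors, is_goal):
    best_path, best_cost = None, float("inf")
    stack = [(initial, [initial], 0.0)]
    while stack:
        s, path, g = stack.pop()
        if g >= best_cost:
            continue  # bound: this partial path cannot beat the best solution
        if is_goal(s):
            best_path, best_cost = path, g  # record the best path so far
            continue
        for s2, c in successors(s):
            if s2 not in path:  # avoid revisiting states on this path
                stack.append((s2, path + [s2], g + c))
    return best_path, best_cost

# Toy graph with a cheap and an expensive route to G
graph = {"S": [("A", 1), ("B", 5), ("C", 15)],
         "A": [("G", 10)], "B": [("G", 5)], "C": [("G", 5)], "G": []}
```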
Summary
• Search tree vs. state space
• Search strategies: breadth-first, depth-
first, and variants
• Evaluation of strategies: completeness,
optimality, time and space complexity
• Avoiding repeated states
• Optimal search with variable step costs
