
Lecture 3 – Problem-Solving (Search) Agents

Instructor: Hafiz Mueez Ameen


 Problem-solving agents
 Problem types
 Problem formulation
 Example problems
 Basic search algorithms

 Suppose an agent can execute several actions immediately in a given state
 It doesn't know the utility of these actions
 Then, for each action, it can execute a sequence of actions until it reaches the goal
 The immediate action that leads to the best sequence (according to the performance measure) is then the solution
 Finding this sequence of actions is called search, and the agent which does this is called the problem-solver
 NB: It's possible that some sequences might fail, e.g., by getting stuck in an infinite loop, or by never finding the goal at all
 You can begin to visualize the concept of a graph
 Searching proceeds along different paths of the graph until you reach the solution
 The nodes correspond to the states
 The whole graph is the state space
 The links correspond to the actions

 On holiday in Romania; currently in Arad.
 Flight leaves tomorrow from Bucharest
 Formulate goal: Be in Bucharest
 Formulate problem:
 States: various cities
 Actions: drive between cities
 Find solution:
 Sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
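As an aside (not part of the slides), the road map above can be written down directly as a weighted graph in Python. The few distances shown are the commonly quoted values from the Russell & Norvig Romania map and are included only for illustration.

```python
# A small fragment of the Romania road map as a weighted adjacency dict:
# nodes are states (cities), edges are actions (drives), weights are step costs (km).
romania_roads = {
    "Arad":      {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu":     {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras":   {"Sibiu": 99, "Bucharest": 211},
    "Pitesti":   {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

# Cost of the example solution Arad -> Sibiu -> Fagaras -> Bucharest:
path = ["Arad", "Sibiu", "Fagaras", "Bucharest"]
print(sum(romania_roads[a][b] for a, b in zip(path, path[1:])))  # 140 + 99 + 211 = 450
```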

 Static: The configuration of the graph (the city map) is unlikely to change during search
 Observable: The agent knows the state (node) completely, e.g., which city it is currently in
 Discrete: There is a discrete number of cities and routes between them
 Deterministic: Taking one route from a city (node) can lead to only one possible city
 Single-Agent: We assume only one agent searches at a time, but multiple agents can also be used
 A problem is defined by five items:
1. An initial state, e.g., "In Arad"
2. The possible actions available: ACTIONS(s) returns the set of actions that can be executed in state s
3. A successor function S(x) = the set of all possible {Action–State} pairs from some state, e.g., Succ(Arad) = {<Arad → Zerind, In Zerind>, …}
4. A goal test, which can be
 explicit, e.g., x = "In Bucharest"
 implicit, e.g., Checkmate(x)
5. A path cost (additive)
 e.g., sum of distances, number of actions executed, etc.
 c(x, a, y) is the step cost of taking action a in state x to reach y, assumed to be ≥ 0
 A solution is a sequence of actions leading from the initial state to a goal state (a minimal Python sketch of these components follows below)
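As promised above, here is a minimal sketch (not from the lecture; the class and method names are illustrative, not a prescribed API) showing how the five components could be encoded for the Romania route-finding problem.

```python
from dataclasses import dataclass

@dataclass
class RouteProblem:
    """Route finding on a road map, mirroring the five components above."""
    initial: str   # 1. initial state, e.g. "Arad"
    goal: str      # used by the explicit goal test
    roads: dict    # city -> {neighbouring city: distance}

    def actions(self, state):
        # 2. ACTIONS(s): the drives available from this city
        return [f"Go({city})" for city in self.roads.get(state, {})]

    def successors(self, state):
        # 3. successor function: all {action, resulting state} pairs
        return {(f"Go({city})", city) for city in self.roads.get(state, {})}

    def goal_test(self, state):
        # 4. explicit goal test, e.g. "is the agent in Bucharest?"
        return state == self.goal

    def step_cost(self, state, action, next_state):
        # 5. step cost c(x, a, y), assumed to be >= 0
        return self.roads[state][next_state]

# Tiny usage example with a three-road map:
roads = {"Arad": {"Sibiu": 140}, "Sibiu": {"Fagaras": 99}, "Fagaras": {"Bucharest": 211}}
p = RouteProblem(initial="Arad", goal="Bucharest", roads=roads)
print(p.successors("Arad"))      # {('Go(Sibiu)', 'Sibiu')}
print(p.goal_test("Bucharest"))  # True
```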
Example: the vacuum world (figure not reproduced)
 States? Actions?
 Goal test? Path cost?
 States? dirt and robot location
 Actions? Left, Right, Pick
 Goal test? no dirt at all locations
 Path cost? 1 per action
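As an illustration of this formulation (a sketch under the usual two-cell assumption, not from the slides), a state can be encoded as the robot's location together with the set of dirty locations.

```python
# State: (robot_location, frozenset of dirty locations) over the two cells "A" and "B".
def vacuum_actions(state):
    return ["Left", "Right", "Pick"]          # the same actions are available everywhere

def vacuum_result(state, action):
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)
    if action == "Right":
        return ("B", dirt)
    if action == "Pick":                      # pick up the dirt in the current cell
        return (loc, dirt - {loc})
    raise ValueError(f"unknown action {action}")

def vacuum_goal_test(state):
    return not state[1]                       # goal: no dirt at any location

start = ("A", frozenset({"A", "B"}))
s = vacuum_result(start, "Pick")                          # clean A
s = vacuum_result(vacuum_result(s, "Right"), "Pick")      # go right, clean B
print(vacuum_goal_test(s))   # True, reached with a path cost of 3 (1 per action)
```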
Example: the 8-puzzle (figure not reproduced)
 States?
 Actions?
 Goal test?
 Path cost?
 States? locations of tiles
 Actions? move blank left, right, up, down
 Goal test? = goal state (given)
 Path cost? 1 per move
[Note: optimal solution of n-Puzzle family is NP-hard]
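A minimal sketch (not from the slides) of the move-the-blank formulation, assuming a state is stored as a tuple of nine tiles in row-major order with 0 standing for the blank.

```python
MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}   # index offsets on a 3x3 board

def puzzle_actions(state):
    """Blank moves that stay on the board and do not wrap around a row."""
    i = state.index(0)                 # position of the blank
    acts = []
    if i % 3 > 0: acts.append("Left")
    if i % 3 < 2: acts.append("Right")
    if i >= 3:    acts.append("Up")
    if i <= 5:    acts.append("Down")
    return acts

def puzzle_result(state, action):
    """Swap the blank with its neighbour; path cost is 1 per move."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

s = (1, 0, 2, 3, 4, 5, 6, 7, 8)
print(puzzle_actions(s))         # ['Left', 'Right', 'Down']
print(puzzle_result(s, "Left"))  # (0, 1, 2, ..., 8) — a commonly used goal state
```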

Example: robotic assembly
 States? real-valued coordinates of robot joint angles, parts of the object to be assembled, and the current assembly
 Actions? continuous motions of robot joints
 Goal test? complete assembly
 Path cost? time to execute
 Basic idea of tree search:
 Offline (not dynamic), simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding the states)
 The expansion strategy defines the different search algorithms (a minimal sketch follows below)
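The sketch below (mine, not the lecture's own pseudocode) shows this idea in Python: keep a fringe of generated-but-unexpanded entries, repeatedly choose one, test it, and expand it. As noted earlier, a pure tree search can loop forever on state spaces with cycles.

```python
from collections import deque

def tree_search(initial_state, successors, goal_test, use_fifo=True):
    """Generic offline tree search over a simulated state space.

    `successors(state)` yields (action, next_state) pairs.
    The fringe order (FIFO vs LIFO here) is what distinguishes strategies.
    Returns the action sequence to a goal, or None if the fringe empties.
    """
    fringe = deque([(initial_state, [])])          # (state, actions taken so far)
    while fringe:
        state, path = fringe.popleft() if use_fifo else fringe.pop()
        if goal_test(state):
            return path
        for action, next_state in successors(state):   # expand the chosen entry
            fringe.append((next_state, path + [action]))
    return None
```

With a FIFO fringe this expands the shallowest unexpanded entries first; with a LIFO fringe, the deepest.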

 Fringe: the collection of nodes that have been generated but not yet expanded
 Each element of the fringe is a leaf node, with (currently) no successors in the tree
 The search strategy defines which element to choose from the fringe (see the sketch below)
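To make "which element to choose" concrete, here is a tiny aside (not from the slides) showing how the same fringe yields different expansions depending on the removal rule.

```python
from collections import deque

fringe = deque(["A", "B", "C"])   # leaves generated in this order, none expanded yet

print(fringe[0])    # 'A': a FIFO rule expands the oldest leaf first (shallower nodes)
print(fringe[-1])   # 'C': a LIFO rule expands the newest leaf first (deeper nodes)
# A priority rule would instead expand the leaf minimising some measure, e.g. path cost g(x).
```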

 A state is a representation of a physical configuration
 A node is a data structure constituting part of a search tree; it includes a state, parent node, action, path cost g(x), and depth
 The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states (a sketch follows below)
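As flagged above, here is a minimal sketch of such a node structure and an Expand helper. The field names follow the slide; the class itself and the helper's signature are illustrative.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # the state this node represents
    parent: Optional["Node"] = None  # the node this one was generated from
    action: Optional[str] = None     # the action applied to the parent
    path_cost: float = 0.0           # g(x): cost of the path from the root
    depth: int = 0                   # number of actions from the root

def expand(node, successor_fn, step_cost):
    """Create a child node for each (action, next_state) pair returned by the
    problem's successor function, filling in parent, action, path cost and depth."""
    return [
        Node(state=next_state,
             parent=node,
             action=action,
             path_cost=node.path_cost + step_cost(node.state, action, next_state),
             depth=node.depth + 1)
        for action, next_state in successor_fn(node.state)
    ]

root = Node("Arad")
child = expand(root, lambda s: [("Go(Sibiu)", "Sibiu")], lambda s, a, t: 140)[0]
print(child.state, child.path_cost, child.depth)   # Sibiu 140.0 1
```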

 A search strategy is defined by picking the order of node expansion
 Strategies are evaluated along the following dimensions:
 Completeness: does it always find a solution if one exists?
 Time complexity: number of nodes generated
 Space complexity: maximum number of nodes held in memory
 Optimality: does it always find a least-cost solution?
 Time and space complexity are measured in terms of
 b: maximum number of successors (branching factor) of any node
 d: depth of the shallowest goal node
 m: maximum length of any path in the state space
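As a worked illustration of how b and d are used (this anticipates the analysis of specific strategies rather than coming from this slide): a strategy that generates every node down to the depth d of the shallowest goal produces at most

```latex
1 + b + b^{2} + \cdots + b^{d} \;=\; \frac{b^{d+1} - 1}{b - 1} \;=\; O(b^{d}) \qquad (b > 1)
```

nodes, so its time complexity is O(b^d); if it must also hold all of those nodes in memory, its space complexity is O(b^d) as well.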
