
Artificial Intelligence
Search Agents

Dr. Samir Rustamov

Image from https://www.growingwiththeweb.com


Environment types
• Fully observable (vs. partially observable)
• Deterministic (vs. stochastic)
• Episodic (vs. sequential)
• Static (vs. dynamic)
• Discrete (vs. continuous)
• Single agent (vs. multi-agent)
• Known (vs. unknown)
Agent types
• Four basic types in order of increasing generality:
◦ Simple reflex agents
◦ Model-based reflex agents
◦ Goal-based agents
◦ Utility-based agents

• All these can be turned into learning agents that can improve their performance and
generate better actions.
Goal-based agents
• Agents that work towards a goal.
• Agents consider the impact of actions on future states.
• Agent’s job is to identify the action or series of actions that
lead to the goal.
• Formalized as a search through possible solutions.
Explore!
The 8-queen problem: on a chess board, place 8 queens so that no queen is attacking any other horizontally, vertically or diagonally.
Number of possible sequences to investigate: 64 ∗ 63 ∗ 62 ∗ ... ∗ 57 ≈ 1.8 × 10^14
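As a quick sanity check on that count (a hypothetical snippet, not from the slides), multiplying the number of free squares available for each of the 8 successive placements gives:

```python
# Ordered ways to place 8 queens on distinct squares of a 64-square board:
# 64 * 63 * ... * 57 (8 factors).
count = 1
for k in range(8):
    count *= 64 - k
print(count)             # 178462987637760
print(f"{count:.1e}")    # 1.8e+14
```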
Problem formulation
• Initial state: the state in which the agent starts
• States: All states reachable from the initial state by any sequence of actions
(State space)
• Actions: possible actions available to the agent. At a state s, Actions(s) returns
the set of actions that can be executed in state s. (Action space)
• Transition model: A description of what each action does Results(s, a)
• Goal test: determines if a given state is a goal state
• Path cost: function that assigns a numeric cost to a path w.r.t. performance
measure
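This formulation maps directly onto a small programming interface. Below is a minimal sketch; the names (SearchProblem, actions, result, is_goal, step_cost) are illustrative assumptions, not from the slides:

```python
class SearchProblem:
    """Abstract formulation: initial state, actions, transition model, goal test, path cost."""

    def __init__(self, initial_state):
        self.initial_state = initial_state

    def actions(self, state):
        """Return the actions applicable in `state` (the action space)."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state that results from doing `action` in `state`."""
        raise NotImplementedError

    def is_goal(self, state):
        """Goal test: is `state` a goal state?"""
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        """Cost of one step; the path cost is the sum over all steps of a path."""
        return 1
```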
8-queens formulation:
• States: all arrangements of 0 to 8 queens on the board
• Initial state: no queen on the board
• Actions: add a queen to any empty square
• Transition model: the updated board
• Goal test: 8 queens on the board with none attacked
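Continuing the interface sketched above, the 8-queens formulation could look like the following (a sketch under the same assumptions; the state encoding as a tuple of occupied squares is my choice, not the slides'):

```python
class EightQueens(SearchProblem):
    """States are tuples of occupied (row, col) squares, built up one queen at a time."""

    def __init__(self):
        super().__init__(initial_state=())      # no queen on the board

    def actions(self, state):
        occupied = set(state)
        return [(r, c) for r in range(8) for c in range(8) if (r, c) not in occupied]

    def result(self, state, action):
        return state + (action,)                # the updated board

    def is_goal(self, state):
        # 8 queens placed, no two sharing a row, column, or diagonal
        return len(state) == 8 and not any(
            r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)
            for i, (r1, c1) in enumerate(state)
            for (r2, c2) in state[i + 1:]
        )
```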
8 Puzzle
https://kartikkukreja.files.wordpress.com/2015/06/8puzzle1.jpg
• States: Location of each of
the 8 tiles in the 3x3 grid
• Initial state: Any state
• Actions: Move Left, Right,
Up or Down
• Transition model: Given a
state and an action, returns
resulting state
• Goal test: does the state match
the goal state?
• Path cost: total moves,
each move costs 1.
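A minimal sketch of the 8-puzzle actions and transition model (here the action moves the blank square; the slides' Left/Right/Up/Down can equivalently be read as sliding a tile, and the encoding below is an assumption):

```python
# State: a 9-tuple listing the tile in each cell of the 3x3 grid, 0 for the blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def actions(state):
    blank = state.index(0)
    row, col = divmod(blank, 3)
    acts = []
    if row > 0: acts.append("Up")
    if row < 2: acts.append("Down")
    if col > 0: acts.append("Left")
    if col < 2: acts.append("Right")
    return acts

def result(state, action):
    """Return the state reached by sliding the blank in the given direction."""
    blank = state.index(0)
    target = blank + MOVES[action]
    s = list(state)
    s[blank], s[target] = s[target], s[blank]
    return tuple(s)
```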
Example of search agents
• States: In City where
City ∈ {Baku, Sumqayit, Shusha,...}
• Initial state: In Goranboy
• Actions: Go Shamakhi, etc.
• Transition model:
Results (In (Ismayilli), Go (Gebele)) =
In(Gebele)
• Goal test: In(Sheki)
• Path cost: path length in kilometers

https://upload.wikimedia.org/wikipedia/commons/thumb/a/a1/Azerbaijan_roads.jpg/1200px-Azerbaijan_roads.jpg
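The same route-finding formulation can be written as data plus three small functions. This is only a sketch; the road connections and kilometre values below are placeholders, not real Azerbaijani road data:

```python
# Weighted adjacency map: roads[city][neighbour] = distance in km (illustrative values).
roads = {
    "Goranboy":  {"Shamakhi": 210},
    "Shamakhi":  {"Ismayilli": 40, "Baku": 120},
    "Ismayilli": {"Gebele": 50},
    "Gebele":    {"Sheki": 80},
}

def results(state, action):
    return action                      # Results(In(Ismayilli), Go(Gebele)) = In(Gebele)

def is_goal(state):
    return state == "Sheki"            # goal test: In(Sheki)

def step_cost(state, action):
    return roads[state][action]        # path cost: path length in kilometres
```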
Real-world examples: Pathfinding
Pathfinding or pathing is the plotting, by a computer application, of the shortest route between two points. It is a more practical variant of solving mazes.
The two primary problems of pathfinding are (1) finding a path between two nodes in a graph (BFS and DFS), i.e. route finding; and (2) the shortest path problem, i.e. finding the optimal (shortest) path (A*).
https://driving-tests.org/wp-content/uploads/2012/07/reading-a-map.jpg
https://cdn.macrumors.com/article-new/2017/02/apple-maps-transit-new-orleans-1.jpg
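For problem (1), a breadth-first search over an explicit graph is enough. The sketch below assumes the graph is given as a dict mapping each node to its neighbours (names and representation are my assumptions, not from the slides):

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: returns one shortest path (in number of edges) or None."""
    frontier = deque([start])
    parent = {start: None}            # also serves as the visited set
    while frontier:
        node = frontier.popleft()
        if node == goal:              # goal found: reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for neighbour in graph[node]:
            if neighbour not in parent:
                parent[neighbour] = node
                frontier.append(neighbour)
    return None                       # no path exists
```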
Travelling salesman problem
The travelling salesman
problem (TSP) asks the
following question: "Given a
list of cities and the
distances between each pair
of cities, what is the shortest
possible route that visits
each city and returns to the
origin city?"

https://upload.wikimedia.org/wikipedia/commons/thumb/1/11/GLPK_solution_of_a_travelling_salesman_problem.svg/1200px-GLPK_solution_of_a_travelling_salesman_problem.svg.png
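A brute-force answer to that question simply tries every tour; it is a sketch that works for a handful of cities but is hopeless for large instances. The `dist` nested mapping of pairwise distances is an assumed input, not from the slides:

```python
from itertools import permutations

def tsp_brute_force(cities, dist):
    """Try every ordering of the remaining cities after the first; O(n!) time."""
    start, *rest = cities
    best_tour, best_cost = None, float("inf")
    for perm in permutations(rest):
        tour = (start, *perm, start)                       # return to the origin city
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost
```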
VLSI – Very-large-scale integration
VLSI is the process of creating an integrated circuit by combining hundreds of thousands of transistors or devices into a single chip. The microprocessor is a VLSI device.
• VLSI layout: position millions of components and connections on a chip to minimize area and shorten delays.
• Aim: place circuit components on a chip so that they do not overlap and there is room left for wiring, which is a complex problem.

https://3.imimg.com/data3/AD/HT/MY-9548710/advanced-vlsi-design-vlsi-2-500x500.jpg
Robot navigation
A special case of route finding for robots with no specific routes or connections. The robot navigates in 2D or 3D space (or higher-dimensional spaces), where the state space and action space are potentially infinite.

https://icdn2.digitaltrends.com/image/dji-spark-drone-review-11-1500x1000.jpg?ver=1
Automatic assembly sequencing
Find an order in which to assemble the parts of an object; this is in general a difficult and expensive geometric search.

http://img.directindustry.com/images_di/photo-g/22084-9140315.jpg
State space vs. search space
• State space: a physical configuration
• Search space: an abstract configuration represented by a search tree or
graph of possible solutions.
• Search tree: models the sequence of actions
◦ Root: initial state
◦ Branches: actions
◦ Nodes: results from actions. A node has: parent, children, depth, path cost
◦ Expand: A function that given a node, creates all children nodes
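A search-tree node can be represented directly from that description. This is a minimal sketch reusing the SearchProblem interface assumed earlier; the field and function names are illustrative:

```python
class Node:
    """A node in the search tree: a state plus bookkeeping for the path that reached it."""

    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent                                   # None for the root
        self.action = action                                   # the branch taken to get here
        self.depth = 0 if parent is None else parent.depth + 1
        self.path_cost = step_cost if parent is None else parent.path_cost + step_cost

def expand(problem, node):
    """Create all children of `node` using the problem's actions and transition model."""
    for action in problem.actions(node.state):
        child_state = problem.result(node.state, action)
        yield Node(child_state, parent=node, action=action,
                   step_cost=problem.step_cost(node.state, action, child_state))
```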
Search space regions
• The search space is divided into three regions:
◦ 1. Explored (a.k.a. Closed List, Visited Set)
◦ 2. Frontier (a.k.a. Open List, the Fringe)
◦ 3. Unexplored.
Tree search
Example of search agents
[Figure: a search tree rooted at Baku, with Sumqayit and Xirdalan on the first level, Xizi, Quba and Qusar on the next, and Shamaxi, Goychay and Zaqatala below. Searching from Baku to Shamaxi, the goal Shamaxi is found and Success(Shamaxi) is returned.]
State Space Graphs vs. Search Trees
[Figure: a state space graph with states S, a, b, c, d, e, f, h, p, q, r and goal G, shown side by side with the corresponding search tree rooted at S.]
[These slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]
Quiz: State Space Graphs vs. Search Trees
Consider this 4-state graph: how big is its search tree (from S)?
[Figure: a 4-state graph with start S and goal G.]
Important: lots of repeated structure in the search tree!


[These slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]
How to handle repeated states?
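One standard answer, sketched below under the Node/expand assumptions introduced earlier, is graph search: keep a record of states already seen (the closed list plus the frontier) and only add unseen states to the fringe, so repeated states are expanded at most once.

```python
from collections import deque

def graph_search_bfs(problem):
    """Breadth-first graph search: repeated states are filtered via the `reached` set."""
    root = Node(problem.initial_state)
    if problem.is_goal(root.state):
        return root
    frontier = deque([root])          # the fringe / open list
    reached = {root.state}            # states already on the frontier or explored
    while frontier:
        node = frontier.popleft()
        for child in expand(problem, node):
            if problem.is_goal(child.state):
                return child
            if child.state not in reached:
                reached.add(child.state)
                frontier.append(child)
    return None                       # failure: no goal state is reachable
```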
Search strategies
• A strategy is defined by picking the order of node expansion
• Strategies are evaluated along the following dimensions:
- Completeness
◦ Does it always find a solution if one exists?

- Time complexity
◦ Number of nodes generated/expanded

- Space complexity
◦ Maximum number of nodes in memory

- Optimality
◦ Does it always find a least-cost solution?
Search strategies
• Time and space complexity are measured in terms of:
◦ b: maximum branching factor of the search tree (actions per
state).
◦ d: depth of the solution
◦ m: maximum depth of the state space (may be ∞) (also sometimes
denoted D).
• Two kinds of search: Uninformed and Informed.
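As a rough feel for these quantities (a hypothetical back-of-the-envelope example, not from the slides): with branching factor b = 10 and solution depth d = 5, a search that generates every node down to depth d touches on the order of 1 + b + b² + ... + b^d nodes.

```python
b, d = 10, 5
nodes = sum(b**i for i in range(d + 1))
print(nodes)   # 111111, i.e. roughly b**d = 100000 nodes
```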
