CS361 Artificial Intelligence (SEP) Lecture 3 (Problem Solving As Search - State Space Search) Fall 2020
Software Engineering Programme
Computer Science Dept., Helwan University

Lecture 3
Problem Solving as Search & State Space Search
3.1 Solving Problems by Searching
- Types of Agents (Reflex vs. Planning) - State Space Search - Example: Tic-Tac-Toe Game - Example: Mechanical Fault Diagnosing - How do human beings think? - Heuristic Search

3.2 State Space Search Graph & Strategies
- Search problem components - State Space & State Space Graph - Examples: Vacuum World, The 8-Puzzle, Romania, etc. - State Space Search Strategies - Selecting a Search Strategy

3.3 Basic Idea of Search & the Backtracking Search Algorithm
- Search: Basic idea - Search Tree - Tree Search Algorithm Outline & Example - Handling Repeated States - Backtracking Search Algorithm & Data Structures

Faculty of Computers & Artificial Intelligence
Where are we now ..?!

Course Introduction & Plan:
1: Course Introduction
2: An Introduction to Artificial Intelligence [AI]
3: Intelligent Agents, & AI-Related Disciplines
4: Solving Problems by Searching, and State Space Search Strategies & Structures
5: Knowledge Representation via Propositional & Predicate Calculi
6: Problem Solving as Search (Blind / Uninformed vs. Heuristic / Informed Strategies)
7: Problem Solving as Search (More on Heuristic Search + Adversarial Search)
8: Beyond Classical Search (Evolutionary / Genetic Algorithms)
9: Heuristic Optimization, Hill Climbing, Gradient Descent, & Simulated Annealing
10: Supervised Machine Learning: Decision Trees via the ID3 Algorithm + Business Intelligence app. via Weka
11: The Learning Problem, the Perceptron, & the Perceptron Learning Algorithm [PLA]
12: Multilayer Perceptron [MLP], & an introduction to Artificial Neural Networks [ANNs]
13: Loss Functions, Weights Optimization, & Support Vector Machines [SVMs]

Course & Lectures by Dr. Amr S. Ghoneim are based on their counterparts in the following:
o S. Russell & P. Norvig, "Artificial Intelligence: A Modern Approach," 3rd Ed.
o G. Luger, "Artificial Intelligence: Structures & Strategies for Complex Problem Solving," 5th Ed.
o W. Ertel, "Introduction to Artificial Intelligence," 2nd Ed.
o Artificial Intelligence, University of California, Berkeley
o Intelligent Systems, University of British Columbia (Dept. of CS)
Lecture 3: Solving Problems by Searching .. & .. State Space Search Strategies and Structures

3.1 Solving Problems by Searching
- Solving Problems by Searching
- Types of Agents (Reflex vs. Planning)
- State Space Search
- Example: Tic-Tac-Toe Game
- Example: Mechanical Fault Diagnosing
- How do human beings think?
- Heuristic Search

3.2 State Space Search Graph & Strategies
- Search problem components
- Example: Romania
- Example: Tic-Tac-Toe
- Example: Traveling Salesperson
- State Space Search Strategies
- Selecting a Search Strategy

3.3 Basic Idea of Search & the Backtracking Search Algorithm
- Search: Basic idea
- Search Tree
- Tree Search Algorithm Outline
- Tree Search Example
- Handling repeated states
- Backtracking Search
- Backtracking Algorithm Data Structures
Resources for this
lecture
• This lecture covers the following chapters:
• Chapter 3 (Solving Problems by Searching; only sections 3.1,
3.2, & 3.3) from Stuart J. Russell and Peter Norvig, "Artificial
Intelligence: A Modern Approach," Third Edition (2010), by
Pearson Education Inc.
.. AND ..
• Part II (pages 41 to 45) and Chapter 3 (Structures and
Strategies for State Space Search; only sections 3.0, 3.1, and
3.2) from George F. Luger, "Artificial Intelligence: Structures
and Strategies for Complex Problem Solving," Fifth Edition
(2005), Pearson Education Limited.
Solving Problems by
SEARCHING
Types of Agents
A Reflex Agent vs. a Planning Agent:
Source: D. Klein, P.
State Space
Search
Problems are solved by searching among alternative choices.
o Humans consider several alternative strategies on their way
to solving a problem.
[Figure: tic-tac-toe boards showing alternative move sequences considered during search.]
Example: Mechanical Fault Diagnosing

Start: ask: What is the problem?
- Engine trouble → ask: ……
- Transmission → ask: ……
- Brakes → ask: ……

Engine trouble: ask: Does the car start?
- Yes → engine starts → ask: ……
- No → engine won't start → ask: Will the engine turn over?
  - Yes → turns over → ask: ……
  - No → won't turn over → ask: Do the lights come on?
    - Yes → battery ok
    - No → battery dead
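The question-and-answer search above can be sketched as a nested decision tree; the exact questions and diagnoses below are illustrative, filling the slide's "……" placeholders with hypothetical content:

```python
# A fault-diagnosis dialogue as a nested decision tree: internal nodes hold a
# question plus Yes/No branches, leaves hold a diagnosis. (Hypothetical
# questions/diagnoses where the slide leaves them elided.)
tree = {
    "question": "Does the car start?",
    "Yes": {
        "question": "Does the engine run smoothly?",
        "Yes": "No engine fault detected",
        "No": "Engine trouble: check fuel and ignition",
    },
    "No": {
        "question": "Will the engine turn over?",
        "Yes": "Turns over: check fuel supply",
        "No": {
            "question": "Do the lights come on?",
            "Yes": "Battery ok: check starter motor",
            "No": "Battery dead",
        },
    },
}

def diagnose(node, answers):
    """Follow a sequence of Yes/No answers down the tree to a diagnosis."""
    for ans in answers:
        if isinstance(node, str):   # reached a leaf (a diagnosis)
            break
        node = node[ans]
    return node

print(diagnose(tree, ["No", "No", "No"]))  # -> Battery dead
```

Each answered question prunes away whole subtrees, which is exactly why this kind of search visits far fewer states than exhaustive enumeration.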
How Do Human Beings Think ..?
• Human beings do not search the entire state space
(exhaustive search).
• Only alternatives that experience has shown to be effective
are explored.
• Human problem solving is based on judgmental rules that
limit the exploration of search space to those portions of
state space that seem somehow promising.
• These judgmental rules are known as “heuristics”.
Heuristic
Search
• A heuristic is a strategy for selectively exploring the search
space.
• It guides the search along lines that have a high probability of
success.
• It employs knowledge about the nature of a problem to find a
solution.
• It does not guarantee an optimal solution to the problem but
can come close most of the time.
• Human beings use many heuristics in problem solving.
Exercises
What have we
learned?
Define the following terms:
State Space Graph,
Exhaustive Search, and ..
Heuristics.
Resources
Searching:
https://www.youtube.com/watch?v=t0yCDZe6Tnk
State Space Problems:
https://www.youtube.com/watch?v=ngkXAcjeNWE
3.2 State Space Search Graph & Strategies
Search

We will consider the problem of designing goal-based agents in fully observable, deterministic, discrete, known environments.
Start State
Goal State
Search problem components (the Romania route-finding example):
Initial state
o Arad
Actions
o Go from one city to another
Transition Model
o If you go from city A to
city B, you end up in city B
Goal State
o Bucharest
Path Cost
o Sum of edge costs (total distance traveled)
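These five components can be written down directly as data and functions. A minimal sketch for the Romania problem (the road distances shown are an illustrative excerpt of the textbook map, not a complete listing):

```python
# Search problem components for Romania route finding.
# ROADS encodes both the actions (which cities you can go to) and the
# transition model (going to city B puts you in city B).
ROADS = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Craiova": 138, "Bucharest": 101},
}

INITIAL_STATE = "Arad"
GOAL_STATE = "Bucharest"

def actions(state):
    """Actions available in a state: go to any directly connected city."""
    return list(ROADS.get(state, {}))

def path_cost(path):
    """Path cost: sum of the edge costs along the path."""
    return sum(ROADS[a][b] for a, b in zip(path, path[1:]))

print(path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # 140+99+211 = 450
```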
State Space

The initial state, actions, and transition model define the state space of the problem:
• The set of all states reachable from initial state
by any sequence of actions.
• Can be represented as a directed graph
where the nodes are states and links
between nodes are actions.
[Figure: a state-space graph for a river-crossing problem, with nodes for Riverbank 1, Island 1, Island 2, and Riverbank 2, and arcs for the crossings between them.]
State Space

A state space is represented by a four-tuple [N, A, S, GD].
o N is the set of nodes or states of the graph. These correspond to the states in the problem-solving process.
o A is the set of arcs between nodes. These correspond to the steps in a problem-solving process.
o S, a nonempty subset of N, contains the start state(s) of the problem.
o GD, a nonempty subset of N, contains the goal state(s) of the problem. The states in GD are described using either:
  o A measurable property of the states encountered in the search, or
  o A property of the solution path developed in the search.
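A minimal sketch of the four-tuple on a tiny toy graph (the graph itself is hypothetical, chosen only for illustration):

```python
# The four-tuple [N, A, S, GD] as plain Python sets.
N = {"a", "b", "c", "d"}                              # nodes / states
A = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}  # directed arcs
S = {"a"}                                             # start state(s)
GD = {"d"}                                            # goal state(s)

def successors(n):
    """States reachable from n in one step: the arcs leaving n."""
    return {y for (x, y) in A if x == n}

def is_goal(n):
    # Here goal membership is a measurable property of the state itself.
    return n in GD

print(successors("a"))  # {'b', 'c'}
print(is_goal("d"))     # True
```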
Example: Vacuum World
o States:
o Agent location and dirt location
o How many possible states?
o What if there are n possible locations?
o The size of the state space grows
exponentially with the “size” of the
world!
o Actions:
o Left, right, suck.
o Transition Model .. ?
Example: Vacuum World State Space Graph
o Transition Model:
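A sketch of the two-location transition model; the state encoding (agent location, dirt at A, dirt at B) is an assumption for illustration:

```python
# Two-location vacuum world: a state is (agent_location, dirt_at_A, dirt_at_B)
# with locations "A" and "B". With 2 locations: 2 * 2 * 2 = 8 states.
def transition(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":                      # clean the current location
        if loc == "A":
            return (loc, False, dirt_b)
        return (loc, dirt_a, False)
    raise ValueError(f"unknown action: {action}")

s = ("A", True, True)        # agent at A, both squares dirty
s = transition(s, "Suck")    # ("A", False, True)
s = transition(s, "Right")   # ("B", False, True)
s = transition(s, "Suck")    # ("B", False, False) -- world is clean
print(s)
```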
Example: The 8-Puzzle
o States
o Locations of tiles
o 8-puzzle: 181,440 states
(9!/2)
o 15-puzzle: ~10 trillion states
o 24-puzzle: ~10^25 states
o Actions
o Move blank left, right, up,
down
o Path Cost
o 1 per move
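The state counts above can be checked directly: a board with k tile positions has k! arrangements, only half of which are reachable from a given start. A small sketch (the 0-based index encoding of the blank's position is an assumption for illustration):

```python
import math

def reachable_states(side):
    """Reachable states of a (side x side) sliding-tile puzzle: (side^2)!/2."""
    return math.factorial(side * side) // 2

print(reachable_states(3))  # 8-puzzle:  181440 (= 9!/2)
print(reachable_states(4))  # 15-puzzle: 10461394944000 (~10 trillion)

def blank_moves(blank_index, side=3):
    """Legal moves of the blank (Left/Right/Up/Down) from a 0-based index."""
    r, c = divmod(blank_index, side)
    moves = []
    if c > 0:
        moves.append("Left")
    if c < side - 1:
        moves.append("Right")
    if r > 0:
        moves.append("Up")
    if r < side - 1:
        moves.append("Down")
    return moves

print(blank_moves(4))  # blank in the center: ['Left', 'Right', 'Up', 'Down']
```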
Example: Robot Motion
Planning
o States
o Real-valued joint parameters (angles,
displacements).
o Actions
o Continuous motions of robot joints.
o Goal State
o Configuration in which object is grasped.
o Path Cost
o Time to execute, smoothness of path, etc.
Example: Tic-Tac-Toe
• Nodes (N): all the different configurations of Xs and Os that the game can have.
• Arcs (A): generated by legal moves, placing an X or an O in an unused location.
• Start state (S): an empty board.
• Goal states (GD): a board state having three Xs in a row, column, or diagonal.
• The arcs are directed, so there are no cycles in the state space: it is a directed acyclic graph (DAG).
• Complexity: 9! different paths can be generated.
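A minimal sketch of these components; the tuple-of-9-cells board encoding is an assumption for illustration:

```python
from math import factorial

START = (None,) * 9   # S: the empty board; cells hold "X", "O", or None

def moves(board, player):
    """A: the arcs out of a state -- place `player` in any unused location."""
    return [board[:i] + (player,) + board[i + 1:]
            for i, cell in enumerate(board) if cell is None]

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def is_goal(board):
    """GD: three Xs in a row, column, or diagonal."""
    return any(all(board[i] == "X" for i in line) for line in LINES)

print(len(moves(START, "X")))  # 9 arcs out of the empty board
print(factorial(9))            # 362880 = 9! distinct move sequences (paths)
```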
Example: Traveling Salesperson

A salesperson has five cities to visit and then must return home.
• Nodes (N): represent the 5 cities.
• Arcs (A): labeled with a weight indicating the cost of traveling between connected cities.
• Start state (S): the home city.
• Goal states (GD): an entire path containing a complete circuit with minimum cost.
• Complexity: (n-1)! different cost-weighted paths can be generated.
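The (n-1)! complexity comes from fixing the home city and ordering the rest. A brute-force sketch on a small hypothetical distance table (chosen only for illustration):

```python
from itertools import permutations

# Symmetric edge costs between the home city and three others (hypothetical).
DIST = {
    ("home", "a"): 2, ("home", "b"): 9, ("home", "c"): 3,
    ("a", "b"): 6, ("a", "c"): 4, ("b", "c"): 8,
}

def d(x, y):
    """Look up a symmetric edge cost."""
    return DIST.get((x, y)) or DIST[(y, x)]

def tour_cost(order):
    """Cost of the circuit: home -> cities in `order` -> home."""
    route = ("home",) + order + ("home",)
    return sum(d(a, b) for a, b in zip(route, route[1:]))

others = ("a", "b", "c")
tours = list(permutations(others))   # (4-1)! = 6 candidate circuits
best = min(tours, key=tour_cost)
print(len(tours), best, tour_cost(best))
```

Enumerating all (n-1)! circuits like this is only feasible for tiny n, which is why the later lectures turn to heuristic strategies.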
State Space Search Strategies

There are two distinct ways of searching a state space graph:
• Data-driven search (forward chaining): start from the given start state and apply the arcs, working forward toward a goal.
• Goal-driven search (backward chaining): start from the goal and work backward toward the start state.
Resources
Backward & Forward Chaining
https://www.youtube.com/watch?v=ZhTt-GG7PiQ
3.3 Basic Idea of Search & the Backtracking Search Algorithm
Search ..?

Given:
• Initial state
• Actions
• Transition model
• Goal state
• Path cost
Search: Basic
idea
Search Tree (the "What-if" tree)

A tree of sequences of actions and outcomes; i.e., when we are searching, we are not acting. Each node is a state (the starting state, then successor states), each edge is an action, and the set of leaves available for expansion is the frontier.
Tree Search
Example
Start: Arad
Goal: Bucharest
Tree Search Algorithm Outline

- Initialize the frontier using the starting state.
- While the frontier is not empty:
  - Choose a frontier node according to the search strategy and take it off the frontier.
  - If the node contains the goal state, return the solution; else expand the node and add its children to the frontier.
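The loop above can be made runnable; this sketch specializes the "search strategy" to a FIFO frontier (breadth-first) and uses a small hypothetical graph, with each frontier entry carrying the path taken so far:

```python
from collections import deque

# A toy state-space graph: node -> list of children (hypothetical).
GRAPH = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": ["G"], "G": []}

def tree_search(start, goal):
    frontier = deque([[start]])        # initialize frontier with the start
    while frontier:                    # while the frontier is not empty
        path = frontier.popleft()      # choose a node per the strategy (FIFO)
        node = path[-1]
        if node == goal:               # goal test -> return the solution path
            return path
        for child in GRAPH[node]:      # else expand the node and
            frontier.append(path + [child])  # add its children to the frontier
    return None                        # frontier exhausted: no solution

print(tree_search("S", "G"))  # ['S', 'B', 'G']
```

Swapping the `deque` for a LIFO stack or a priority queue changes the strategy (depth-first, uniform-cost, ...) without touching the rest of the loop; note that, as written, the tree search may revisit states reachable by multiple paths, which is exactly the repeated-states issue the outline flags next.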
Thank you!