Lect03 Problem Solving by Searching
2024
Contents
1. Problem-solving Agents
2. Example Problems
3. Searching for Solutions
Problem-solving Agents
• A problem-solving agent is one kind of goal-based agent
• It uses atomic representations of the state space
Example: Romania travelling
• You are at Arad
• Your friend’s wedding is tomorrow in Bucharest
[Figure: map of Romania with road distances (in km) between cities, including Arad, Zerind, Oradea, Sibiu, Fagaras, Timisoara, Lugoj, Mehadia, Drobeta, Craiova, Rimnicu Vilcea, Pitesti, Bucharest, Giurgiu, Urziceni, Hirsova, Eforie, Vaslui, Iasi, and Neamt.]
Properties of the environment
• Observable
• Each city has a sign indicating its presence for arriving drivers.
• The agent always knows the current state.
• Discrete
• Each city is connected to a small number of other cities.
• There are only finitely many actions to choose from any given state.
• Known
• The agent knows which states are reached by each action.
• Deterministic
• Each action has exactly one outcome.
• Under these assumptions, the solution to any problem is a fixed sequence of
actions
Solving problems by searching
• Search: the process of looking for a sequence of actions that reaches the goal
• A search algorithm takes a problem as input and returns a solution in the
form of an action sequence.
• Execution phase: once a solution is found, the recommended actions are
carried out.
• While executing the solution, the agent ignores its percepts when
choosing an action → open-loop system
Solving problems by searching (cont.)
function Simple-Problem-Solving-Agent(percept) returns an action
    persistent: seq, an action sequence, initially empty
                state, some description of the current world state
                goal, a goal, initially null
                problem, a problem formulation

    state ← Update-State(state, percept)            // state transition rule
    if seq is empty then
        goal ← Formulate-Goal(state)                // describe the goal
        problem ← Formulate-Problem(state, goal)
        seq ← Search(problem)                       // the most important step
        if seq = failure then return a null action
    action ← First(seq)
    seq ← Rest(seq)
    return action
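As a rough illustration (not part of the original slides), the same formulate–search–execute loop can be sketched in Python. The names update_state, formulate_goal, formulate_problem, and search are hypothetical stand-ins for the components named in the pseudocode above.

```python
class SimpleProblemSolvingAgent:
    """Sketch of the formulate-search-execute loop from the pseudocode above."""

    def __init__(self, update_state, formulate_goal, formulate_problem, search):
        # The four components are supplied by the caller; all are assumed callables.
        self.update_state = update_state
        self.formulate_goal = formulate_goal
        self.formulate_problem = formulate_problem
        self.search = search
        self.state = None
        self.seq = []                              # remaining action sequence, initially empty

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.seq:                           # no plan left: formulate and search
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.seq = self.search(problem) or []  # a list of actions, or None on failure
            if not self.seq:
                return None                        # the "null action" on failure
        action, self.seq = self.seq[0], self.seq[1:]
        return action
```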
Well-defined problems and solutions
A problem can be defined formally by five components:
1. The initial state that the agent starts in
• For example, the initial state for our agent in Romania might be
described as In(Arad)
2. A description of the possible actions available to the agent
• For example, Actions(In(Arad)) = {Go(Sibiu), Go(Timisoara), Go(Zerind)},
i.e., the actions available in Arad are to go to Sibiu, Timisoara, or Zerind
3. The transition model, which is a description of what each action does
• For example, Result(In(Arad), Go(Zerind)) = In(Zerind); the outcome of an
action involves no randomness
• The term successor refers to any state reachable from a given state by
a single action
• The initial state, actions, and transition model implicitly define the state
space of the problem
Well-defined problems and solutions (cont.)
• The state space forms a directed network or graph in which the nodes
are states and the links between nodes are actions
• A path in the state space is a sequence of states connected by a
sequence of actions
4. The goal test, which determines whether a given state is a goal state
• For example, the agent’s goal in Romania is the singleton set
{In(Bucharest)}
5. A path cost function that assigns a numeric cost to each path
• The step cost of taking action a in state s to reach state s’ is denoted
by c(s, a, s’); costs let us search for an optimal solution
A solution to a problem is an action sequence that leads from the initial state to
a goal state
• An optimal solution has the lowest path cost among all solutions
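To make the five components concrete, here is a minimal sketch (not from the slides) of a problem class for a small fragment of the Romania map. The class name RouteProblem and the cut-down road map are illustrative choices; the distances are those shown on the map.

```python
# roads[s][s'] = step cost c(s, Go(s'), s') for a fragment of the Romania map.
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Timisoara": {"Arad": 118},
    "Zerind": {"Arad": 75},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

class RouteProblem:
    """The five components: initial state, actions, transition model, goal test, path cost."""

    def __init__(self, initial, goal, roads=ROADS):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, state):                         # possible actions in a state
        return [f"Go({city})" for city in self.roads[state]]

    def result(self, state, action):                  # transition model
        return action[len("Go("):-1]

    def goal_test(self, state):                       # is this a goal state?
        return state == self.goal

    def step_cost(self, state, action, next_state):   # c(s, a, s')
        return self.roads[state][next_state]

# Example: the Arad-to-Bucharest problem of the earlier slide.
problem = RouteProblem("Arad", "Bucharest")
```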
Formulating problems by abstraction
• The process of removing detail from a representation: abstract the state
description and the actions
• Abstraction is critical for automated problem solving
• The real world is too detailed to model exactly
• Create an approximate, simplified model of the world for the computer
to deal with
• The choice of a good abstraction thus involves
• Removing as much detail as possible while
• Retaining validity and ensuring that the abstract actions are easy to
carry out
Directed Graphs
• A directed graph G = (V, E) consists of a set V of nodes and a set E of
ordered pairs of nodes, called arcs.
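As an illustrative aside (not part of the slides), such a graph is easy to hold in Python either directly as the pair (V, E) or as an adjacency mapping, which is the form most search code uses; the names V, E, and adj are arbitrary.

```python
# A tiny directed graph G = (V, E): a set of nodes and a set of ordered pairs (arcs).
V = {"A", "B", "C"}
E = {("A", "B"), ("B", "C"), ("A", "C")}

# Equivalent adjacency-list view, convenient for search algorithms.
adj = {u: [v for (x, v) in E if x == u] for u in V}
print(adj)   # e.g. {'A': ['B', 'C'], 'B': ['C'], 'C': []}
```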
Example Problems
Toy problems vs. Real-world problems
Toy problems:
• Illustrate or exercise various problem-solving methods
• Have a concise, exact description
• Can be used to compare the performance of algorithms
• E.g., 8-puzzle, 8-queens problem, Cryptarithmetic, Vacuum world,
Missionaries and cannibals, simple route finding

Real-world problems:
• More difficult
• No single, agreed-upon description
• E.g., Route finding, Touring and traveling salesperson problems, VLSI layout,
Robot navigation, Assembly sequencing
The vacuum world
The vacuum world (cont.)
• Transition model: state space
[Figure: the complete state space of the two-square vacuum world; arcs denote the actions L (move Left), R (move Right), and S (Suck).]
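A hedged sketch of that transition model in Python (the state encoding, agent location plus the set of dirty squares, is an assumption made for illustration):

```python
# State = (agent_location, dirt), with agent_location in {"A", "B"} and
# dirt a frozenset of dirty squares, e.g. ("A", frozenset({"A", "B"})).
def result(state, action):
    """Transition model of the two-square vacuum world."""
    loc, dirt = state
    if action == "L":
        return ("A", dirt)
    if action == "R":
        return ("B", dirt)
    if action == "S":                    # Suck: the current square becomes clean
        return (loc, dirt - {loc})
    raise ValueError(f"unknown action {action!r}")

# Example: suck in A, then move right.
s0 = ("A", frozenset({"A", "B"}))
s1 = result(s0, "S")                     # ("A", frozenset({"B"}))
s2 = result(s1, "R")                     # ("B", frozenset({"B"}))
```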
The 8-puzzle
• States: A state description specifies the location of each of the eight tiles and the
blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state
• Actions:
• Left, Right, Up, or Down.
• Different subsets of these are possible depending on where the blank is.
• Transition model: Given a state and action, this returns the resulting state
• Goal test: This checks whether the state matches the goal configuration
• Path cost: Each step costs 1
[Figure: a typical 8-puzzle instance (start state, left) and the goal state (right).]
• Note: there are 9! arrangements in total, since each of the nine squares holds
one of the eight tiles or the blank; sliding a tile into the blank square is
equivalent to moving the blank itself left, right, up, or down.
The 8-puzzle (cont.)
• It is a member of the family of sliding-block puzzles, which is known to be
NP-complete
• 8-puzzle: 9!/2 = 181,440 reachable states (the 9! arrangements split into two
connected components of equal size, only one of which is reachable) → easily solved
• 15-puzzle (on a 4 × 4 board): about 1.3 trillion states → optimally solved in a few
milliseconds
• 24-puzzle (on a 5 × 5 board): around 10^25 states → optimally solved in several
hours
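As an illustrative sketch (not from the slides), the 8-puzzle formulation above fits in a few lines of Python; a breadth-first enumeration from any start state visits exactly 9!/2 = 181,440 states.

```python
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)      # 0 denotes the blank; board stored row by row

def actions(state):
    """Legal moves of the blank, expressed as index offsets: left/right/up/down."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append(-1)         # blank moves left
    if col < 2: moves.append(+1)         # blank moves right
    if row > 0: moves.append(-3)         # blank moves up
    if row < 2: moves.append(+3)         # blank moves down
    return moves

def result(state, move):
    """Transition model: swap the blank with the tile it slides onto."""
    i = state.index(0)
    j = i + move
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def count_reachable(start=GOAL):
    """Breadth-first enumeration of every reachable state (should give 181,440)."""
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for m in actions(s):
            t = result(s, m)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return len(seen)

# count_reachable() == 181_440  (i.e. 9!/2)
```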
The 8-queens
There are two main kinds of formulation:
• An incremental formulation involves operators
that augment the state description, starting with
an empty state; for the 8-queens problem, this
means that each action adds a queen to the
state.
• A complete-state formulation starts with all 8
queens on the board and moves them around
The 8-queens (cont.)
Incremental formulation
• States: Any arrangement of 0 to 8 queens on the board is a state
• This gives 64 × 63 × ⋯ × 57 ≈ 1.8 × 10^14 possible sequences to investigate
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the specified
square.
• Goal test: 8 queens are on the board, none attacked.
• Path cost: of no interest, since only the final state counts
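A small sketch (illustrative, not from the slides) of this incremental formulation: states are tuples of queen positions, actions add a queen to an empty square, and the goal test checks that no queen attacks another.

```python
from itertools import combinations

N = 8
SQUARES = [(r, c) for r in range(N) for c in range(N)]

def actions(state):
    """Add a queen to any square not already occupied."""
    return [sq for sq in SQUARES if sq not in state]

def result(state, square):
    """Transition model: the board with one more queen."""
    return state + (square,)

def attacks(a, b):
    """Two queens attack each other on the same row, column, or diagonal."""
    return a[0] == b[0] or a[1] == b[1] or abs(a[0] - b[0]) == abs(a[1] - b[1])

def goal_test(state):
    """8 queens are on the board, none attacked."""
    return len(state) == N and not any(attacks(p, q) for p, q in combinations(state, 2))

initial = ()   # initial state: no queens on the board
```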
Knuth’s 4 problem
Devised by Donald Knuth (1964)
• Illustration of how infinite state spaces can arise
• Knuth’s conjecture: starting with the number 4, a sequence of factorial (!),
square root (√·), and floor (⌊·⌋) operations will reach any desired positive
integer. For example, one can reach 5 from 4:
⌊√√√√√(4!)!⌋ = 5
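A quick check of that example in Python (illustrative only); math.isqrt keeps everything in exact integer arithmetic, and taking the integer square root at each step yields the same value as flooring once after five real square roots.

```python
import math

x = math.factorial(math.factorial(4))   # (4!)! = 24!
for _ in range(5):                      # five nested square roots
    x = math.isqrt(x)                   # exact floor of the square root
print(x)                                # -> 5
```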
Example: airline travel
Consider the airline travel problems solved by a travel-planning Web site.
• States: Each state obviously includes a location (e.g., an airport) and the current
time.
• Initial state: This is specified by the user’s query.
• Actions: Take any flight from the current location, in any seat class, leaving
after the current time, leaving enough time for within-airport transfer if
needed.
• Transition model: The state resulting from taking a flight will have the
flight’s destination as the current location and the flight’s arrival time as the
current time.
• Goal test: Are we at the final destination specified by the user?
• Path cost: This depends on monetary cost, waiting time, flight time,
customs and immigration procedures, seat quality, time of day, type of
airplane, frequent-flyer mileage awards, and so on.
Example: robotic assembly
[Figure: robotic assembly example.]
Searching for Solutions
Search tree
A solution is an action sequence, so search algorithms work by considering
various possible action sequences
• Search tree: the possible action sequences starting at the initial state
• The branches are actions and the nodes correspond to states in the
state space of the problem
• The root node of the tree corresponds to the initial state
• Taking actions by expanding the current state (parent node), thereby
generating a new set of states (child nodes)
• Frontier: the set of all leaf nodes available for expansion at any given
point
Search algorithms all share this basic structure; they vary primarily according to
how they choose which state to expand next – the so-called search strategy
Search tree (cont.)
[Figure: partial search trees for finding a route from Arad to Bucharest: (a) the initial state; (b) after expanding Arad; (c) after expanding Sibiu. Nodes that have been expanded are shaded; nodes that have been generated but not yet expanded are outlined in bold; nodes that have not yet been generated are shown in faint dashed lines.]
Search tree (cont.)
function Tree-Search(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    loop do                          // the frontier can be a queue, stack, or priority queue
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        expand the chosen node, adding the resulting nodes to the frontier
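A hedged Python sketch of Tree-Search (illustrative; it assumes a problem object with the initial, actions, result, and goal_test members sketched earlier). Using a FIFO queue for the frontier gives breadth-first behaviour; swapping in a stack or a priority queue changes the search strategy.

```python
from collections import deque

def tree_search(problem):
    """Generic tree search: returns a list of actions, or None on failure."""
    frontier = deque([(problem.initial, [])])        # (state, actions taken so far)
    while frontier:
        state, path = frontier.popleft()             # FIFO queue -> breadth-first
        if problem.goal_test(state):
            return path                              # the corresponding solution
        for action in problem.actions(state):        # expand the chosen node
            child = problem.result(state, action)
            frontier.append((child, path + [action]))
    return None                                      # frontier empty -> failure
```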
Redundant paths
• Loopy paths cause repeated states in the search tree
[Figure: repeated states A, B, C appearing many times in the search tree.]
Redundant paths (cont.)
• Algorithms that forget their history are doomed to repeat it
• Use a data structure called the explored set, which remembers every
expanded node (as sketched below)
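A minimal sketch (illustrative, not the slides' own code) of how the explored set turns the tree search above into a graph search; states must therefore be hashable.

```python
from collections import deque

def graph_search(problem):
    """Tree search plus an explored set: each state is expanded at most once."""
    frontier = deque([(problem.initial, [])])
    explored = set()                              # the explored set
    while frontier:
        state, path = frontier.popleft()
        if problem.goal_test(state):
            return path
        explored.add(state)                       # remember every expanded node
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored:             # skip states already expanded
                frontier.append((child, path + [action]))
    return None
```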
Redundant paths (cont.)
• The algorithm has another nice property: the frontier separates the
state-space graph into the explored region and the unexplored region, so that
every path from the initial state to an unexplored state has to pass through a
state in the frontier
Figure 1: The frontier (white nodes) always separates the explored region of the state
space (black nodes) from the unexplored region (gray nodes)
Infrastructure for search algorithms
Search algorithms require a data structure to keep track of the search tree that is
being constructed. For each node n of the tree, we have a structure that contains
four components:
• n.State: the state in the state space to which the node corresponds;
• n.Parent: the node in the search tree that generated this node;
• n.Action: the action that was applied to the parent to generate the node;
• n.Path-Cost: the cost, traditionally denoted by g(n), of the path from the
initial state to the node, as indicated by the parent pointers.
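An illustrative sketch of that node structure in Python (the dataclass and the helper functions are assumptions, not from the slides; the problem interface is the RouteProblem-style one sketched earlier):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                         # n.State: the state this node corresponds to
    parent: Optional["Node"] = None    # n.Parent: the node that generated this one
    action: Any = None                 # n.Action: action applied to the parent
    path_cost: float = 0.0             # n.Path-Cost: g(n), cost from the initial state

def child_node(problem, node, action):
    """Build the child reached from node by action, accumulating the path cost."""
    s = problem.result(node.state, action)
    return Node(s, node, action,
                node.path_cost + problem.step_cost(node.state, action, s))

def solution(node):
    """Follow the parent pointers back to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```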
Infrastructure for search algorithms (cont.)
[Figure: nodes of a search tree; each node holds a State, Action, and Path-Cost and points to its Parent node.]
Measuring problem-solving performance
We can evaluate an algorithm’s performance in four ways:
• Completeness: Is the algorithm guaranteed to find a solution when there is one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?
Time and space complexity are measured in terms of:
• b: maximum branching factor of the search tree
• d: depth of the least-cost/shallowest solution
• m: maximum depth of the state space (may be ∞)
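For instance (an illustrative calculation, not from the slides), a search tree with branching factor b = 10 and a solution at depth d = 5 contains up to 1 + b + b² + ⋯ + b^d = (b^(d+1) − 1)/(b − 1) = 111,111 nodes, which bounds both the running time and the memory of a search that keeps all generated nodes.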
References