
Chapter 3

Problem Solving (Goal-Based) Agents

Chapter 3: Solving Problems by Searching 1


Outline
• Problem solving by searching
• Problem formulation
• Search strategies
– Uninformed search strategies
– Informed search strategies
• Avoiding repeated states



Introduction
 A problem is an issue that arises in a system; whenever there is
a problem, the agent needs to find a solution to it.
 There are four essentially different types of problems:
 Single-state problems: the agent's sensors give it
enough information to tell exactly which state it is in,
 it knows exactly what each of its actions does, and
 it can therefore calculate exactly which state it will be in after
any sequence of actions, and choose a sequence that reaches a goal state.




Multiple-state problems: the agent knows all the effects of its actions,
but has limited access to the world state.
 The agent must reason about sets of states that it might be in, rather than
single states.
Contingency problems: suppose, for example, that the environment
is nondeterministic, so the outcome of an action cannot be predicted exactly.
 Solving such a problem requires sensing during the execution phase.
 Notice that the agent must now calculate a whole tree of actions, rather
than a single action sequence.
 In general, each branch of the tree deals with a possible contingency that
might arise.
 Many problems in the real, physical world are contingency problems,
because exact prediction is impossible.




Exploration problems: Finally, consider the difficulty of an agent
that has no information about the effects of its actions.
 This is somewhat equivalent to being lost in a strange country with
no map at all, and is the hardest task faced by an intelligent agent.
 The agent must experiment, gradually discovering what its actions
do and what sorts of states exist.
 This is a kind of search, but a search in the real world rather than in
a model thereof.
 Taking a step in the real world, rather than in a model, may involve
significant danger for an ignorant agent.
 If it survives, the agent learns a "map" of the environment, which it
can then use to solve subsequent problems.



Problem Solving Agents
• A problem-solving agent
– is a kind of "goal-based" agent: goal-driven, it focuses on satisfying the goal
– finds sequences of actions that lead to desirable states
• The algorithms in this part of the chapter are uninformed
– they use no information about the problem beyond its definition
– no extra information, no heuristics (rules of thumb)


Steps performed by a problem-solving agent

Goal formulation: the first and simplest step in problem-solving.
 It settles on one goal out of multiple goals, together with the
actions needed to achieve that goal.
 It is based on the current situation and the agent's performance
measure.
 The agent's task is then to find out which sequence of actions will
get it to a goal state.
Problem formulation: the most important step of
problem-solving; it decides what states to consider and what
actions should be taken to achieve the formulated goal.




Search: finds possible sequences of
actions that reach the goal state from the current state.
 A search algorithm takes a problem as input and returns
a solution in the form of an action sequence.
Solution: among the solutions the search finds, the agent
selects the best one, ideally an optimal solution.
Execution: the agent carries out the chosen solution to get
from the current state to the goal state.
Problem Formulation
 A problem is simply a collection of information that the agent will use to
decide what to do.
 A solution to a problem is a path from the initial state to the goal state.
 The state space of the problem is the set of all states reachable from the
initial state by any sequence of actions.
 A path in the state space is simply any sequence of actions leading from
one state to another.
 The term operator is used to denote the description of an action in terms of
which state will be reached by carrying out the action in a particular state.
 The real world is absurdly complex, so the state space must be abstracted for
problem solving.
 First, we will look at the different amounts of knowledge that an agent can
have concerning its actions and the state that it is in.




• Problem formulation involves the following five components:
 Initial state: the starting state of the agent towards its goal;
it is the state the agent knows itself to be in.
 Actions: a description of the possible actions available to the agent.
 Transition model: a description of what each action does.
 Goal test: determines whether a given state is a goal state.
 Path cost: assigns a numeric cost to each path.
• The problem-solving agent selects a cost function, which reflects its
performance measure.
• Remember, an optimal solution has the lowest path cost among all the
solutions.
• Note: Initial state, actions, and transition model together define
the state-space of the problem implicitly.



Goal Based Agents
• Assumes the problem environment is:
– Static
• The plan remains the same
– Observable
• Agent knows the initial state
– Discrete
• Agent can enumerate the choices
– Deterministic
• Agent can plan a sequence of actions such that each will lead to an
intermediate state

• The agent carries out its plans with its eyes closed
– Certain of what’s going on
– Open loop system



Well Defined Problems and
Solutions
• A problem
– Initial state
– Actions and Successor Function
– Goal test
– Path cost



Example: Water Pouring
• Given a 4-gallon bucket and a 3-gallon
bucket, how can we measure exactly 2
gallons into one bucket?
– There are no markings on the buckets
– You must fill each bucket completely



Example: Water Pouring
• Initial state:
– Both buckets are empty
– Represented by the tuple (0 0): the first number is the
4-gallon bucket, the second the 3-gallon bucket

• Goal state:
– One of the buckets has exactly two gallons of water in it
– Represented by either (x 2) or (2 x)

• Path cost:
– 1 per step



Example: Water Pouring
• Actions and Successor Function
– Fill a bucket
• (x y) -> (4 y)
• (x y) -> (x 3)
– Empty a bucket
• (x y) -> (0 y)
• (x y) -> (x 0)
– Pour the contents of one bucket into the other
• (x y) -> (0 x+y) if x+y <= 3, else (x+y-3 3)
• (x y) -> (x+y 0) if x+y <= 4, else (4 x+y-4)
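This formulation is small enough to solve directly by searching. The sketch below runs a breadth-first search over bucket states; the function names are illustrative, not from the slides:

```python
from collections import deque

def successors(state):
    """All states reachable in one action from (x, y), where x is the
    4-gallon bucket and y is the 3-gallon bucket."""
    x, y = state
    results = {
        (4, y), (x, 3),                      # fill either bucket
        (0, y), (x, 0),                      # empty either bucket
        (max(0, x + y - 3), min(3, x + y)),  # pour 4-gal into 3-gal
        (min(4, x + y), max(0, x + y - 4)),  # pour 3-gal into 4-gal
    }
    results.discard(state)  # ignore actions that change nothing
    return results

def solve(start=(0, 0)):
    """Breadth-first search for a state with 2 gallons in either bucket."""
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()
        if 2 in path[-1]:
            return path
        for nxt in successors(path[-1]):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append(path + [nxt])
    return None
```

`solve()` returns the shortest path (0,0) -> (0,3) -> (3,0) -> (3,3) -> (4,2), a four-step solution.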



Example: Water Pouring
• The state-space graph (figure not reproduced) reaches the states
(0,0), (4,0), (0,3), (1,3), (4,3), (3,0), (1,0), (0,1), (3,3), (4,2), (4,1), (2,3), (2,0), (0,2)
• One shortest solution path is (0,0) -> (0,3) -> (3,0) -> (3,3) -> (4,2)



Example: Eight Puzzle
• States:
– Description of the eight tiles and the location of the blank tile
• Successor function:
– Generates the legal states that result from trying the four
actions {Left, Right, Up, Down}
• Goal test:
– Checks whether the state matches the goal configuration
• Path cost:
– Each step costs 1

Start state:   Goal state:
 7 2 4          1 2 3
 5 _ 6          4 5 6
 8 3 1          7 8 _
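One way to encode this formulation is to store the board as a tuple of nine numbers in row-major order, with 0 standing for the blank. The successor function can then be sketched as follows (names and encoding are illustrative):

```python
def successors(state):
    """Legal successors of an 8-puzzle state: `state` is a tuple of 9
    numbers in row-major order, with 0 standing for the blank."""
    i = state.index(0)              # position of the blank
    row, col = divmod(i, 3)
    result = []
    for dr, dc, action in ((-1, 0, "Up"), (1, 0, "Down"),
                           (0, -1, "Left"), (0, 1, "Right")):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]  # slide the neighboring tile into the blank
            result.append((action, tuple(s)))
    return result
```

A state with the blank in the center has four successors; a corner state has only two.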



Example: Eight Puzzle
• The eight puzzle is from a family of "sliding-block puzzles"
– Solving them optimally is NP-complete
– The 8-puzzle has 9!/2 = 181,440 reachable states
– The 15-puzzle has approx. 1.3 × 10^12 states
– The 24-puzzle has approx. 10^25 states



Example: Eight Queens
• Place eight queens on a chess board such that no queen
can attack another queen (board diagram not reproduced)
• No path cost, because only the final state counts!
• Two kinds of formulation:
– Incremental formulations
– Complete-state formulations
Example: Eight Queens
• States:
– Any arrangement of 0 to 8 queens on the board
• Initial state:
– No queens on the board
• Successor function:
– Add a queen to an empty square
• Goal test:
– 8 queens on the board and none are attacked
• 64 × 63 × … × 57 ≈ 1.8 × 10^14 possible sequences
– Ouch!
Example: Eight Queens
• States:
– Arrangements of n queens, one per column in the
leftmost n columns, with no queen attacking another
• Successor function:
– Add a queen to any square in the leftmost empty column
such that it is not attacked by any other queen
• Only 2057 sequences to investigate
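The figure of 2057 can be checked by generating the search tree of this improved formulation. The sketch below counts the tree nodes with a simple backtracking enumeration (function and variable names are illustrative):

```python
def queens_counts(n=8):
    """Count search nodes under the improved incremental formulation:
    place queens left to right, one per column, only on squares not
    attacked by any queen already on the board."""
    def attacked(rows, col, row):
        # rows[c] is the row of the queen already placed in column c
        return any(r == row or abs(r - row) == col - c
                   for c, r in enumerate(rows))

    nodes = 0       # partial placements generated (tree nodes below the root)
    solutions = 0   # complete non-attacking placements

    def place(rows):
        nonlocal nodes, solutions
        if len(rows) == n:
            solutions += 1
            return
        for row in range(n):
            if not attacked(rows, len(rows), row):
                nodes += 1
                place(rows + [row])

    place([])
    return nodes, solutions
```

Running it yields 92 complete solutions and 2056 partial placements; counting the empty-board root as well gives the 2057 states quoted above.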
Problem Solving by Searching
• Searching means looking for the information one needs.
• Searching is the most commonly used technique of problem solving in
artificial intelligence.
• A searching algorithm helps us find a solution to a particular problem.
• When the agent is confronted with a problem, its first move is to seek a
solution within its knowledge system.
• This is known as searching the knowledge base for the solution.
• Alternatively, the agent can search for a solution by moving through different states.
• The search stops when the agent reaches a goal state.
• There are many approaches for searching for a particular goal state among all the
states the agent can be in.



Types of search algorithms

(Taxonomy figure not reproduced: search strategies divide into uninformed, or blind, search and informed, or heuristic, search.)


Uninformed Search Strategies
• Uninformed strategies use only the information
available in the problem definition
– Also known as blind searching

• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search



Comparing Uninformed
Search Strategies
• Completeness
– Will a solution always be found if one exists?
• Time
– How long does it take to find the solution?
– Often represented as the number of nodes searched
• Space
– How much memory is needed to perform the search?
– Often represented as the maximum number of nodes stored at
once
• Optimal
– Will the optimal (least cost) solution be found?



Comparing Uninformed
Search Strategies
• Time and space complexity are measured in
– b – maximum branching factor of the search
tree
– m – maximum depth of the state space
– d – depth of the least cost solution



Breadth-First Search
• is an algorithm for traversing or searching tree or graph
data structures.
• starts at the tree root (or some arbitrary node of a graph)
and explores all of the neighbor nodes at the present depth prior to
moving on to the nodes at the next depth level.
• expands the tree level by level until a solution is found.
• finds a shallowest goal, which with unit step costs is a
shortest path to the solution.
• Expand the shallowest unexpanded node
• Place all new successors at the end of a FIFO queue


Properties of Breadth-First
Search
• Complete
– Yes, if b (max branching factor) is finite
• Time
– 1 + b + b^2 + … + b^d + b(b^d − 1) = O(b^(d+1))
– exponential in d
• Space
– O(b^(d+1))
– Keeps every node in memory
– This is the big problem; an agent that generates nodes at 10
MB/sec will produce about 860 GB in 24 hours
• Optimal
– Yes (if cost is 1 per step); not optimal in general



Lessons From Breadth-First
Search
• The memory requirements are a bigger
problem for breadth-first search than is
execution time

• Exponential-complexity search problems
cannot be solved by uninformed methods
for any but the smallest instances



Uniform-Cost Search
• Same idea as the algorithm for breadth-first
search…but…
– Expand the least-cost unexpanded node
– The queue is a priority queue, ordered by path cost
– Equivalent to regular breadth-first search if all
step costs are equal
• The cost of a node is defined as:
 cost(node) = cumulative cost of the path from the root to the node
 cost(root) = 0
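The priority-queue variant above can be sketched in Python with the standard `heapq` module; the graph and names below are illustrative, and `successors(state)` is assumed to yield `(next_state, step_cost)` pairs:

```python
import heapq

def uniform_cost_search(start, goal_test, successors):
    """UCS sketch: the frontier is a priority queue ordered by the
    cumulative path cost g; the least-cost node is expanded first."""
    frontier = [(0, start, [start])]        # (cost so far, state, path)
    best = {start: 0}                       # cheapest known cost per state
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return cost, path
        if cost > best.get(state, float("inf")):
            continue                        # stale queue entry, skip it
        for nxt, step in successors(state):
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None
```

On a toy graph where S-A-B-G costs 1+1+2 while S-B-G costs 4+2, UCS correctly returns the cost-4 route rather than the one with fewer edges.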



Uniform-Cost Search
• Complete
– Yes, if every step cost exceeds some small positive constant ε
– step cost ≥ ε
• Time
– Complexity cannot be characterized easily in terms of b or d
– Let C* be the cost of the optimal solution
– O(b^⌈C*/ε⌉)
• Space
– O(b^⌈C*/ε⌉)
• Optimal
– Yes; nodes are expanded in increasing order of path cost


Depth-First Search
• is an algorithm for traversing or searching
tree or graph data structures.
• starts at the root node (selecting some
arbitrary node as the root node in the case of
a graph) and explores as far as possible
along each branch before backtracking.
• Expand the deepest unexpanded node
• Unexplored successors are placed on a stack
until fully explored
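The stack-based behavior described above can be sketched iteratively; the graph and names are illustrative:

```python
def depth_first_search(start, goal_test, successors):
    """Iterative DFS sketch: expand the deepest unexpanded node by
    keeping unexplored successors on a stack (LIFO)."""
    stack = [(start, [start])]
    explored = set()
    while stack:
        state, path = stack.pop()          # deepest node comes off first
        if goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in successors(state):
            if nxt not in explored:
                stack.append((nxt, path + [nxt]))
    return None
```

Because the stack pops the most recently pushed node, the search runs down one branch as far as possible before backtracking.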
Depth-First Search
• Complete?
– No: fails in infinite-depth spaces and in spaces with loops
• Can be modified to avoid repeated states along the path
– Yes: in finite spaces
• Time
– O(b^m)
– Not great if m is much larger than d
– But if the solutions are dense, this may be faster than breadth-
first search
• Space
– O(bm) … linear space
• Optimal
– No



Depth-Limited Search
• A variation of depth-first search that uses
a depth limit
– Alleviates the problem of unbounded trees
– Search to a predetermined depth l ("ell")
– Nodes at depth l have no successors

• Same as depth-first search if l = ∞

• Can terminate with failure or with cutoff



Depth-Limited Search
• Complete
– Yes, if l ≥ d; it fails if the shallowest goal lies below the limit (l < d)
• Time
– O(b^l)
• Space
– O(bl)
• Optimal
– No, if l > d



Iterative Deepening Search
• Iterative deepening depth-first search
– Uses depth-first search
– Finds the best depth limit
• Gradually increases the depth limit; 0, 1, 2, … until
a goal is found
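The scheme above can be sketched as a depth-limited search wrapped in a loop over increasing limits; the graph and names are illustrative:

```python
def depth_limited_search(state, goal_test, successors, limit, path=None):
    """Recursive depth-limited search; returns a path or None."""
    path = path or [state]
    if goal_test(state):
        return path
    if limit == 0:
        return None                         # cutoff reached
    for nxt in successors(state):
        if nxt not in path:                 # avoid cycles along this path
            result = depth_limited_search(nxt, goal_test, successors,
                                          limit - 1, path + [nxt])
            if result:
                return result
    return None

def iterative_deepening_search(start, goal_test, successors, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ... until a goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal_test, successors, limit)
        if result:
            return result
    return None
```

Each iteration repeats the shallow levels, but because the tree grows exponentially, the repeated work is a small fraction of the total.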


Iterative Deepening Search
• Complete
– Yes
• Time
– O(b^d)
• Space
– O(bd)
• Optimal
– Yes, if step cost = 1
– Can be modified to explore a uniform-cost tree



Lessons From Iterative
Deepening Search
• Can be faster than BFS even though IDS
generates repeated states
– BFS generates nodes up to level d+1
– IDS only generates nodes up to level d

• In general, iterative deepening search is
the preferred uninformed search method
when there is a large search space and the
depth of the solution is not known


A* search
• Take into account the cost of getting to the node as well as our estimate of the cost of getting to the goal from the
node.
• Define an evaluation function f(n):
• f(n) = g(n) + h(n)
• g(n) is the cost of the path represented by node n
• h(n) is the heuristic estimate of the cost of achieving the goal from n.
• Always expand the node with the lowest f-value on OPEN.
• The f-value f(n) is an estimate of the cost of getting to the goal via the node (path) n.
• I.e., we first follow the path to n, then we try to get to the goal; f(n) estimates the total cost of such a solution.
A* Search: example

• (The state-space graph figure is not reproduced here.) Each state carries a heuristic value (S: 5, A: 7, B: 3, C: 4, D: 6, E: 5, F: 6); G1, G2, and G3 are goal states, and the edge labels give the step costs.
• Solution, listing the nodes visited with their f-values: S(5), A(12), B(11), D(12), C(12), E(13), then GOAL!
A* Search cont’d
• The choice of an appropriate heuristic evaluation function, h(n), is still crucial to the
behavior of this algorithm.
• In general, we want to choose a heuristic evaluation function h(n) which is as close as
possible to the actual cost of getting to a goal state.
• If we can choose a function h(n) which never overestimates the actual cost of getting to
the goal state, then we have a very useful property: such an h(n) is said to be admissible.
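The f = g + h rule with an admissible h can be sketched as follows; the graph, heuristic, and names are illustrative:

```python
import heapq

def a_star(start, goal_test, successors, h):
    """A* sketch: always expand the OPEN node with the lowest
    f = g + h. `successors(state)` yields (next_state, step_cost)
    pairs; `h` is assumed to be an admissible heuristic."""
    open_list = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}                           # cheapest known g per state
    while open_list:
        f, g, state, path = heapq.heappop(open_list)
        if goal_test(state):
            return g, path
        for nxt, step in successors(state):
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_list, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```

With an admissible h, the first goal popped off OPEN is guaranteed to lie on an optimal path.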
A* Search cont’d
•Perhaps surprisingly, breadth-first search (where each step has the same cost) is an example of
Algorithm A*, since the function it uses to sort the agenda is simply:
• f(n) = g(n) + 0

•Breadth-first search takes no account of the distance to the goal, and because a zero estimate cannot
possibly be an overestimate of that distance it has to be admissible. This means that BFS can be seen
as a basic example of Algorithm A*.
•However, despite using an admissible estimate, breadth-first search isn't a very intelligent search strategy, as it
doesn't direct the search towards the goal state. The search is still blind.
•Informedness :

• We say that a search strategy which searches less of the state space in order to find a goal state is
more informed. Ideally, we'd like a search strategy which is both admissible (so it will find us an
optimal path to the goal state), and informed (so it will find the optimal path quickly.)
Hill Climbing (Greedy Search)

• Alternatively, we might sort the agenda by the estimated cost of getting to the goal from each state. This
is known as greedy search.
• An obvious problem with greedy search is that it doesn't take account of the cost so far, so it
isn't optimal, and it can wander into dead-ends, like depth-first search.
• QueueingFn is sort-by-h
• Hill climbing keeps only the lowest-h state on the open list
• Hill climbing is irrevocable

• Features
– Much faster
– Less memory
– Dependent upon h(n)
– If h(n) is bad, it may prune away all goals
– Not complete
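A minimal sketch of hill climbing, assuming a successor function and a heuristic h to minimize (names are illustrative). Keeping only the best neighbor and stopping when nothing improves is exactly what makes the method irrevocable and incomplete:

```python
def hill_climbing(start, successors, h):
    """Keep only the best (lowest-h) successor; stop when no neighbor
    improves on the current state. May halt at a local optimum."""
    current = start
    while True:
        neighbors = successors(current)
        if not neighbors:
            return current
        best = min(neighbors, key=h)
        if h(best) >= h(current):
            return current        # no improvement: (local) minimum reached
        current = best
```

For example, minimizing h(x) = x² over the integers with neighbors x-1 and x+1 walks straight downhill to 0.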
Hill Climbing issues

• Also referred to as gradient descent

• Foothill problem: local maxima / local minima
– Can be mitigated with a random walk or by allowing more steps

• Other problems: ridges, plateaus

• (Figure not reproduced: a curve of heuristic values over states, illustrating local and global optima.)
Recap of search strategies

(Comparison table not reproduced.)
Avoiding Repeated States
• Complication: wasting time by expanding states
that have already been encountered and expanded
before
– Failure to detect repeated states can turn a linear problem
into an exponential one

• Sometimes, repeated states are unavoidable
– Problems where the actions are reversible
• Route finding
• Sliding-block puzzles
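The standard remedy is to keep an explored (closed) set of states already seen, so each state is expanded at most once. A sketch on top of breadth-first search (graph and names illustrative):

```python
from collections import deque

def graph_search(start, goal_test, successors):
    """BFS with an explored set: each state is expanded at most once,
    so reversible actions cannot send the search in circles."""
    frontier = deque([(start, [start])])
    explored = {start}                 # closed list of states already seen
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in explored:    # skip repeated states
                explored.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None
```

On a graph with the reversible edge A <-> B, a tree search without the explored set would loop forever; with it, the search terminates.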



Avoiding Repeated States

(Figure not reproduced: a small state space and the much larger search tree generated from it when repeated states are not detected.)



Chapter 3: Solving Problems by Searching (19/07/2023)
