4 Solving Problems by Searching
• What is a Problem?
• Solving a problem
• Well-defined problems
What is a Problem?
• It is a gap between what actually is and what is desired.
– A problem exists when an individual becomes aware of an obstacle that makes it difficult to achieve a desired goal or objective.
• A number of problems are addressed in designing intelligent agents:
– Toy problems:
• are problems that are useful to test and demonstrate methodologies, and can be used by researchers to compare the performance of different algorithms
• may require little or no ingenuity; good for game design
• e.g. 8-puzzle, n-queens, vacuum cleaner world, towers of Hanoi, river crossing, …
– Real-life problems:
• are problems that have much greater commercial/economic impact if solved.
• Such problems are more difficult and complex to solve, and there is no single agreed-upon description
• e.g. route finding, traveling salesperson, etc.
• 8-Puzzle example problem
– Given an initial configuration of 8 numbered tiles on a 3 x 3 board, move the tiles so as to produce a desired goal configuration of the tiles.
Road map of Romania
Solving a problem
Formalize the problem: Identify the collection of information that
the agent will use to decide what to do.
• Define states
– States describe distinguishable stages during the problem-solving process
– Example- What are the various states in route finding problem?
• The various places including the location of the agent
• Define operators/rules
– Identify the available operators for getting from one state to the
next
– Operators cause transitions from one state to another when applied to the current state
• Construct state space
– Suggest a suitable representation (such as graph, table,… or a combination
of them) to construct the state space
State Space of the Problem
• The state space defines the set of all relevant states reachable from the initial state by any sequence of actions, until the goal state is reached.
• State space (also called search space/problem space) of the
problem includes the various states
– Initial state
• defines where the agent starts or begins its task
– Goal state
• defines the situation the agent attempts to achieve
– Transition states
• other states in between initial and goal states
(Diagram: Initial state → Actions → Goal state)
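For the route-finding example, the state space can be sketched in Python as a graph mapping each state (a city) to the neighbouring states reachable by one action (driving along a road). The cities and distances below are the familiar fragment of the Romania road map; the `path_exists` helper is illustrative, not something defined on the slides.

```python
from collections import deque

# State space as a graph: city -> {neighbouring city: road distance in km}
ROMANIA = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind": {"Arad": 75, "Oradea": 71},
    "Oradea": {"Zerind": 71, "Sibiu": 151},
    "Timisoara": {"Arad": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

def path_exists(graph, initial, goal):
    """Breadth-first check that the goal state is reachable from the initial one."""
    frontier, visited = deque([initial]), {initial}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            return True
        for neighbour in graph.get(state, {}):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(neighbour)
    return False
```

A dictionary of dictionaries is only one possible representation; a table or adjacency matrix would construct the same state space.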
Example: The 8 puzzle problem
• This is the problem of arranging the tiles so that all the tiles are in the correct positions. You do this by moving a tile (or, equivalently, the blank space) up, down, left, or right, so long as the following conditions are met:
– a) there is no other tile blocking you in the direction of the movement; and
– b) you are not trying to move outside of the boundaries/edges.
Initial state    Goal state
1 2 3            1 2 3
8 4 5            8 _ 4
7 6 _            7 6 5
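The movement rules above can be sketched in Python. The board is encoded as a tuple of nine entries read row by row, with 0 marking the blank space; the function names are my own, not from the slides.

```python
# 8-puzzle operators: moves of the blank that stay on the 3x3 board.
def legal_moves(state):
    """Directions in which the blank can move without leaving the board."""
    row, col = divmod(state.index(0), 3)
    moves = []
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    return moves

def apply_move(state, move):
    """Slide the neighbouring tile into the blank and return the new state."""
    i = state.index(0)
    j = i + {"Up": -3, "Down": 3, "Left": -1, "Right": 1}[move]
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)
```

Checking `legal_moves` before `apply_move` enforces both conditions a) and b): the blank can only swap with an adjacent tile inside the boundaries.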
Example : River Crossing Puzzles
Goat, Wolf and Cabbage problem
• A farmer returns from the market, where he bought a goat, a
cabbage and a wolf. On the way home he must cross a river. His
boat is small and unable to transport more than one of his
purchases. He cannot leave the goat alone with the cabbage
(because the goat would eat it), nor can he leave the goat alone
with the wolf (because the goat would be eaten). How can the
farmer get everything safely to the other side?
1. Identify the set of possible states and operators
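One possible answer to the exercise can be sketched in Python (the encoding, the `safe`/`successors`/`solve` names and the use of breadth-first search are my own illustrative choices): a state records which bank, 'L' or 'R', each of (farmer, wolf, goat, cabbage) is on, and the operators are the farmer's crossings.

```python
from collections import deque

def safe(state):
    """A state is safe if nothing gets eaten while the farmer is away."""
    farmer, wolf, goat, cabbage = state
    if goat == wolf and farmer != goat:      # wolf eats goat
        return False
    if goat == cabbage and farmer != goat:   # goat eats cabbage
        return False
    return True

def successors(state):
    """Operators: the farmer crosses alone, or with one item from his bank."""
    farmer = state[0]
    other = 'R' if farmer == 'L' else 'L'
    for i in range(4):                # i == 0: the farmer crosses alone
        if state[i] != farmer:
            continue                  # can only take items on his own bank
        nxt = list(state)
        nxt[0], nxt[i] = other, other
        nxt = tuple(nxt)
        if safe(nxt):
            yield nxt

def solve(start=('L', 'L', 'L', 'L'), goal=('R', 'R', 'R', 'R')):
    """Breadth-first search over the safe states; returns a shortest path."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
```

The well-known shortest solution takes 7 crossings (take the goat over first, and ferry it back in the middle of the plan).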
Example: Vacuum world problem
To simplify the problem (rather
than the full version), let:
• The world has only two
locations
– Each location may or may not
contain dirt
– The agent may be in one location
or the other
• Eight possible world states
• Three possible actions (Left,
Right, Suck)
– Suck operator cleans the dirt
– Left and Right operators move
the agent from location to location
• Goal: to clean up all the dirt
Vacuum Cleaner State Space
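The eight world states and three actions above can be modelled in a few lines of Python. The (agent_location, dirt_in_A, dirt_in_B) encoding is my own, so the states are not numbered 1-8 as in the slide figure.

```python
from itertools import product

# Two locations x dirt-in-A x dirt-in-B = 2 * 2 * 2 = 8 world states
STATES = [(loc, a, b) for loc, a, b in product("AB", (True, False), (True, False))]

def result(state, action):
    """Deterministic transition model for the Left, Right and Suck actions."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":                 # Suck cleans the current square
        return (loc, False, dirt_b) if loc == "A" else (loc, dirt_a, False)
    return state

def is_goal(state):
    """Goal: all the dirt is cleaned up, wherever the agent ends up."""
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b
```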
Single state problem
• Fully observable: The world is accessible to the agent
– It can determine its exact state through its sensors
– The agent’s sensors tell it exactly which state it is in
• Deterministic: The agent knows exactly the effect of its actions
– It can then calculate exactly which state it will be in after any
sequence of actions
• Action sequence is completely planned
Example - Vacuum cleaner world
– Can the agent achieve its goal? Let us say the agent is initially at
state 4.
– The agent easily calculates & formulates a sequence of actions
([Right, Suck]) and knows that it will get to a goal state
• Right {6}
• Suck {8}
Multiple state problems
• Partially observable: The agent has limited access to the world state
– It might not have sensors to get full access to the environment states or as
an extreme, it can have no sensors at all (due to lack of percepts)
• Deterministic: The agent knows exactly what each of its actions do
– It can then calculate which state it will be in after any sequence of actions
Example - Vacuum cleaner world
– Can the agent achieve its goal?
• If the agent has full knowledge of how its actions change the world, but does
not know the state of the world, it can still solve the task
– The agent’s initial state is one of the 8 states: {1,2,3,4,5,6,7,8}
– Action sequence: [Right, Suck, Left, Suck]
– Because the agent knows what its actions do, it can discover and reach
the goal state.
Right {2,4,6,8} Suck {4,8}
Left {3,7} Suck {7}
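This belief-state reasoning can be sketched in Python. Since the slides' 1-8 numbering is not defined here, states are encoded as (agent_location, dirt_in_A, dirt_in_B) triples, and `predict` is my own name for the belief-update step.

```python
# Belief-state tracking for the sensorless vacuum agent: a belief state is
# the set of world states the agent might be in, and each action maps the
# whole set through the transition model.
def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        return (loc, False, dirt_b) if loc == "A" else (loc, dirt_a, False)
    return state

def predict(belief, action):
    """Map every state the agent might be in through the transition model."""
    return {result(s, action) for s in belief}

# Initially the agent could be in any of the 8 states.
belief = {(loc, a, b) for loc in "AB" for a in (True, False) for b in (True, False)}
for action in ["Right", "Suck", "Left", "Suck"]:
    belief = predict(belief, action)
# The belief shrinks to a single state: in A with both squares clean.
```

Each action narrows the set (8 → 4 → 2 → 2 → 1 states), mirroring the trace on the slide.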
Contingency Problems
• Partially observable: The agent has limited access to the world
state
• Non-deterministic: The agent is ignorant of the effect of its
actions
– Sometimes ignorance prevents the agent from finding a
guaranteed solution sequence.
• Suppose the agent is in Murphy’s law world
– The agent has to sense during the execution phase, since things
might have changed while it was carrying out an action. This
implies that
• the agent has to compute a tree of actions, rather than a linear
sequence of action
– Example - Vacuum cleaner world:
• the action ‘Suck’ deposits dirt on the carpet, but only if there is no
dirt already. Depositing dirt rather than sucking results from the
agent’s ignorance about the effects of its actions
Cont’d
• Example - Vacuum cleaner world
– What will happen given initial belief state {1,2,3,4,5,6,7,8} and action
sequence [Suck, Right, Suck]?
{5,4,7,2,1,8,3,6} → {2,4,6,8} → {4,2,8,6} (failure)
• Is there a way to solve this problem?
– Solving this problem requires local sensing, i.e. sensing during the
execution phase,
– Start from one of the states {1,2,3,4,5,6,7,8}, and take the
improved action sequence [Suck, Right, Suck (only if there is
dirt there)]
• Many problems in the real world are contingency
problems (exact prediction is impossible)
– For this reason many people keep their eyes open while
walking around or driving.
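The improved, sensing-based plan can be sketched in Python. Murphy's law is modelled here as a toggle (Suck cleans a dirty square but deposits dirt on a clean one), which is my reading of the slide; function names are illustrative.

```python
# A contingency plan in a Murphy's-law vacuum world: instead of a fixed
# action sequence, the agent senses its square and Sucks only if dirty.
def murphy_result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        if loc == "A":
            return (loc, not dirt_a, dirt_b)  # cleans if dirty, else deposits
        return (loc, dirt_a, not dirt_b)
    return state

def dirty_here(state):
    """Local sensing during execution: is the current square dirty?"""
    loc, dirt_a, dirt_b = state
    return dirt_a if loc == "A" else dirt_b

def run_contingent_plan(state):
    """[Suck (only if dirty), Right, Suck (only if dirty)]."""
    if dirty_here(state):
        state = murphy_result(state, "Suck")
    state = murphy_result(state, "Right")
    if dirty_here(state):
        state = murphy_result(state, "Suck")
    return state
```

The conditional tests make this a tree of actions rather than a linear sequence: the same plan succeeds whatever the initial dirt configuration, where the unconditional [Suck, Right, Suck] fails.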
Exploration problem
• The agent has no knowledge of the environment
– Partially observable world: no knowledge of states
(environment)
• Unknown state space (no map, no sensor)
– Non-deterministic: no knowledge of the effects of its actions
– A problem faced by (intelligent) agents (like newborn babies)
• This is a kind of problem in the real world rather than in a
model, which may involve significant danger for an ignorant
agent. If the agent survives, it learns about the environment
• The agent must experiment, learn, and gradually build a model of
the environment through the results of its actions, discovering
– what sorts of states exist and what its actions do
– Then it can use these to solve subsequent (future) problems
• Example: in solving the vacuum cleaner world problem, the agent
learns the state space and the effects of its action sequences, say:
[Suck, Right]
Steps in problem solving
• Goal formulation
– is a step that specifies exactly what the agent is trying to achieve
– This step narrows down the scope that the agent has to look at
• Problem formulation
– is a step that determines the actions and states that the agent has to
consider given a goal (avoiding any redundant states), like:
• the initial state
• the allowable actions, etc.
• Search
– is the process of looking for the various sequences of actions that
lead to a goal state, evaluating them, and choosing the optimal
sequence.
• Execute
– is the final step, in which the agent executes the chosen sequence of
actions to reach the solution/goal
Problem Solving Agent Algorithm
function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
  static: seq, an action sequence, initially empty
          state, some description of the current world state
          goal, a goal, initially null
          problem, a problem formulation

  state ← UPDATE-STATE(state, percept)
  if seq is empty then do
      goal ← FORMULATE-GOAL(state)
      problem ← FORMULATE-PROBLEM(state, goal)
      seq ← SEARCH(problem)
  action ← FIRST(seq)
  seq ← REST(seq)
  return action
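The pseudocode above can be transcribed into Python. The class layout and the callback parameters are my own illustrative scaffolding; UPDATE-STATE, FORMULATE-GOAL, FORMULATE-PROBLEM and SEARCH are stand-ins supplied by the caller.

```python
# A Python transcription of SIMPLE-PROBLEM-SOLVING-AGENT.
class SimpleProblemSolvingAgent:
    def __init__(self, formulate_goal, formulate_problem, search):
        self.seq = []      # seq: an action sequence, initially empty
        self.state = None  # state: description of the current world state
        self.formulate_goal = formulate_goal
        self.formulate_problem = formulate_problem
        self.search = search

    def __call__(self, percept):
        # UPDATE-STATE: here we simply take the percept as the new state
        self.state = percept
        if not self.seq:   # replan only when the old plan is used up
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.seq = self.search(problem)
        action = self.seq[0]     # FIRST(seq)
        self.seq = self.seq[1:]  # seq <- REST(seq)
        return action
```

For example, with a toy search that always returns the plan ["Right", "Suck"], successive calls hand out one action at a time and replan once the sequence is exhausted, exactly as in the pseudocode.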
End!!!