CH 3 Problem Solving and Search Algorithms Part I
Dr. Udaya Raj Dhungana
Assist. Professor
Pokhara University, Nepal
Guest Faculty
Hochschule Darmstadt University of Applied Sciences, Germany
E-mail: [email protected] and [email protected]
Overview
3. Problem Solving and Search Algorithms (10 hrs)
3.1. Problem Solving
3.1.1. Problem Solving Agents
3.1.2. Problem Solving Process
3.1.3. Production System
3.1.4. Well-defined and Ill-defined Problems
3.1.5. Problem Formulation
• When the correct action to take is not immediately clear, an agent may need to
plan ahead to find a sequence of actions that form a path to a goal state. Such
an agent is called a problem-solving agent, and the computational process it
undertakes is called search.
• A set of possible states that the environment can be in, called the state space of the
problem.
• An initial state that the agent starts in. For example, the empty board in the tic-tac-toe game.
• A set of one or more goal states.
• A set of actions available to the agent. For example, the vacuum-cleaner world has four
actions: Right, Left, Suck and NoOp.
• A transition model, which describes what each action does.
• A path is a sequence of states; a solution is a path from the initial state to a goal state.
• An action cost, which gives the numeric cost of applying an action. The total cost of a solution
is the sum of the costs of the actions in the path that leads to the goal
state. An optimal solution has the lowest path cost among all solutions.
• Puzzle Solving: Arranging tiles in a specific order. For example: the 8-Puzzle problem, the
8-Queens problem, the Water-Jug problem
• Game Playing: Making strategic moves in a competitive environment. For example: tic-
tac-toe, chess
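One way to capture the components listed above (states, actions, transition model, costs) in code, as a minimal sketch; the class and function names here are illustrative, not from the slides:

```python
# A minimal sketch of a search problem, assuming the components above:
# state space, initial state, goal states, actions, transition model,
# and action costs. All names here are illustrative.

class SearchProblem:
    def __init__(self, initial, goals):
        self.initial = initial      # the state the agent starts in
        self.goals = set(goals)     # a set of one or more goal states

    def actions(self, state):
        """The actions available to the agent in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state produced by applying `action`."""
        raise NotImplementedError

    def action_cost(self, state, action, next_state):
        """Numeric cost of applying `action`; 1 unless overridden."""
        return 1

    def is_goal(self, state):
        return state in self.goals


def total_cost(problem, states, actions):
    """Total cost of a solution: the sum of the individual action costs."""
    return sum(problem.action_cost(s, a, s2)
               for s, a, s2 in zip(states, actions, states[1:]))
```

A concrete problem then only has to fill in `actions` and `result` (and override `action_cost` if actions have different costs).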
Solving Problem by Searching
• A wide range of problems can be formulated as searches
– i.e., as the process of searching for a sequence of actions that takes
you from an initial state to a goal state
• Our assumption: our agents always have access to information about the world,
such as the map in the previous slide.
• The agent can then follow this four-phase problem-solving process to solve the
problem:
1. Goal Formulation
2. Problem Formulation
3. Search
4. Execution
1. Goal Formulation
• The agent defines the desired goal to achieve. For instance, the agent adopts the goal of
reaching Bucharest in the problem of travelling from Arad to Bucharest.
2. Problem Formulation
• The agent defines the problem in terms of the initial state, the goal state, and the actions
necessary to reach the goal.
3. Search
• The agent simulates sequences of actions in its model, searching until it finds a sequence of
actions that reaches the goal (a solution).
• The agent might have to simulate multiple sequences that do not reach the goal, but eventually it
will find a solution (such as going from Arad to Sibiu to Fagaras to Bucharest), or it will find that no
solution is possible.
4. Execution
• The agent can now execute the actions in the solution, one at a time.
• It works as follows:
1. It first formulates a goal and a problem,
2. searches for a sequence of actions that solves the problem (the solution), and
3. executes the actions of the solution.
A simple problem-solving agent.
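The four phases can be strung together in a short sketch; `formulate_goal`, `formulate_problem` and `search` are passed in as parameters because the slides do not fix their implementations:

```python
# Sketch of a simple problem-solving agent: formulate a goal and a
# problem, search for a solution, then return the actions to execute.
# All function names here are illustrative placeholders.

def simple_problem_solving_agent(state, formulate_goal, formulate_problem, search):
    goal = formulate_goal(state)                # 1. goal formulation
    problem = formulate_problem(state, goal)    # 2. problem formulation
    solution = search(problem)                  # 3. search
    if solution is None:                        #    no solution is possible
        return []
    return solution                             # 4. actions to execute, one at a time
```

For the Romania example, `search` would return a sequence of moves such as Arad to Sibiu to Fagaras to Bucharest.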
• Observable
– Agent knows the initial state
• Discrete
– Agent can enumerate the choices
• Deterministic
– Agent can plan a sequence of actions such that each will lead to
an intermediate state
• This involves identifying the state space, initial state, actions, transition model, goal
test, and path cost function. A problem can be formulated by the following
components:
1. State Space: the set of all possible states that the problem can have.
2. Initial State: the state the agent starts in.
3. Actions: all actions that the agent can apply to move from one state to another.
4. Transition Model: a description of what each action does.
5. Goal Test: a check that determines whether a given state is a goal state.
6. Path Cost function: assigns a numeric cost to the path that leads to the goal
state.
– States:
• In a two-square version, the agent can be in either of the two squares, and
each square can either contain dirt or not, so there are 2*2*2 = 8 states.
– Initial state:
• Any state can be designated as the initial state.
– Actions:
• Left, Right, Suck and NoOp
– Transition model:
• Left: move the vacuum cleaner to the left
• Right: move the vacuum cleaner to the right
• Suck: suck up the dirt
• NoOp: no operation
– Goal test:
• This checks whether all the squares are clean.
– Path cost:
• Each action costs 1, so the path cost is the number of actions in the path.
The state-space graph for the two-square vacuum world. There are 8 states and three actions for each
state: L = Left, R = Right, S = Suck. The links denote actions.
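The eight states and the transition model above are small enough to enumerate and search directly. A sketch, where the state encoding `(location, dirt_A, dirt_B)` is an assumption for illustration, and NoOp is omitted since it never shortens a plan:

```python
from itertools import product
from collections import deque

ACTIONS = ["Left", "Right", "Suck"]   # NoOp omitted: it never helps

def result(state, action):
    """Transition model for the two-square vacuum world."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        return (loc,
                False if loc == "A" else dirt_a,
                False if loc == "B" else dirt_b)
    return state  # NoOp

def is_goal(state):
    return not state[1] and not state[2]   # all squares clean

# 2 locations x 2 x 2 dirt combinations = 8 states
states = list(product("AB", [True, False], [True, False]))

def bfs(start):
    """Breadth-first search for a shortest cleaning plan."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        s, plan = frontier.popleft()
        if is_goal(s):
            return plan
        for a in ACTIONS:
            s2 = result(s, a)
            if s2 not in seen:
                seen.add(s2)
                frontier.append((s2, plan + [a]))

print(len(states))                 # 8
print(bfs(("A", True, True)))      # a shortest plan, e.g. Suck, Right, Suck
```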
• 8-Puzzle Problem:
– The best-known variant of the sliding-tile puzzle is the 8-puzzle problem,
shown in the figure below.
– In the figure, our task is to convert the start state into the goal state.
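A hedged sketch of an 8-puzzle formulation; the tuple encoding, with 0 for the blank, and the goal layout shown are assumptions for illustration:

```python
# A state is a tuple of 9 tiles read row by row, 0 marking the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def actions(state):
    """Directions in which the blank square can move."""
    row, col = divmod(state.index(0), 3)
    moves = []
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    return moves

def result(state, action):
    """Transition model: swap the blank with the neighbouring tile."""
    i = state.index(0)
    j = i + {"Up": -3, "Down": 3, "Left": -1, "Right": 1}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)
```

From `GOAL`, for example, `actions` allows only Up and Left, since the blank sits in the bottom-right corner.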
• A well-defined problem
• is one that has a clear goal or solution, and for which problem-solving
strategies are easily developed.
• Examples:
• Logical puzzles
• Geometry proofs
• Some well-defined problems can appear ill-defined because of the size of the
problem space (for example, chess).
• An ill-defined problem
• is one that is unclear, abstract, or confusing, and that does not have a
clear problem-solving strategy.
• Examples:
• Design a fuel efficient car
• Design software
• Write a paper
• Finding a perfect mate
• A solution is:
• a sequence of world states in which the final state satisfies the goal, or
• a sequence of actions in which the last action results in the goal
state.
• Each action changes one state to the next state of the world.
• Actions: Add a queen to any empty square so that no two queens on
the board attack each other.
Description of the Water Jug Problem: You are given two jugs, a 4-liter one and a
3-liter one. Neither has any measuring mark on it. There is a tap with an unlimited water
supply that can be used to fill the jugs. Each jug can be filled to its full
capacity. Each jug can be emptied completely. Either jug can be poured into the other jug
until the first jug is empty or the second jug is full. No other intermediate
measurement is allowed.
Problem: How can you get exactly 2 liters of water using these two jugs?
1. Start with both jugs empty: ( 0 , 0 )
2. Fill jug 1: ( 4 , 0 )
3. Pour from jug 1 to jug 2: ( 1 , 3 ). Jug 2 is now full and Jug 1 contains exactly 1 liter.
4. Empty jug 2: ( 1 , 0 ). Jug 2 is now empty and Jug 1 still contains exactly 1 liter.
5. Pour from jug 1 to jug 2: ( 0 , 1 ). Now Jug 1 is empty and Jug 2 contains exactly 1
liter.
6. Fill jug 1: ( 4 , 1 ). Now Jug 1 contains 4 liters and Jug 2 contains 1 liter.
7. Pour from jug 1 to jug 2: ( 2 , 3 ). Now Jug 1 contains exactly 2 liters and Jug 2
contains exactly 3 liters.
8. The goal state is reached: you have exactly 2 liters of water in Jug 1.
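This state space can also be searched automatically. A sketch using breadth-first search over states (x, y) = litres in Jug 1 and Jug 2; note that when the 2 litres may end up in either jug, BFS finds a four-step plan, shorter than the six-step trace above, which targets Jug 1:

```python
from collections import deque

CAP = (4, 3)   # capacities of Jug 1 and Jug 2

def successors(state):
    """All states reachable in one fill, empty, or pour."""
    x, y = state
    yield (CAP[0], y)                 # fill jug 1
    yield (x, CAP[1])                 # fill jug 2
    yield (0, y)                      # empty jug 1
    yield (x, 0)                      # empty jug 2
    p = min(x, CAP[1] - y)            # pour jug 1 -> jug 2
    yield (x - p, y + p)
    p = min(y, CAP[0] - x)            # pour jug 2 -> jug 1
    yield (x + p, y - p)

def solve(start=(0, 0), target=2):
    """Shortest sequence of states until either jug holds `target` litres."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if target in state:
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

print(solve())   # -> [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2)]
```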
Comparing the two search trees (Solution of A and Solution of B):
since the goal node is at the bottom of the
tree, the depth-first search reaches it
faster. The breadth-first search will have to
expand more nodes before it reaches the
goal node.
• Initial State
– e.g. “At Arad”
• Successor Function
– A set of action state pairs
– S(Arad) = {(Arad->Zerind, Zerind), …}
• Goal Test
– e.g. x = “at Bucharest”
• Path Cost
– sum of the distances traveled
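A sketch of this formulation as data plus two helper functions. The four roads and their distances (in km) follow the standard Romania map used in this example; the function names are illustrative:

```python
# Road distances in km between the cities mentioned above.
ROADS = {
    ("Arad", "Zerind"): 75,
    ("Arad", "Sibiu"): 140,
    ("Sibiu", "Fagaras"): 99,
    ("Fagaras", "Bucharest"): 211,
}

def successors(city):
    """Successor function: action-state pairs ('A->B', B) out of `city`."""
    pairs = []
    for a, b in ROADS:
        if a == city:
            pairs.append((f"{a}->{b}", b))
        elif b == city:
            pairs.append((f"{b}->{a}", a))
    return pairs

def route_cost(path):
    """Path cost: the sum of the distances travelled."""
    return sum(ROADS.get((a, b)) or ROADS[(b, a)]
               for a, b in zip(path, path[1:]))

print(successors("Arad"))
print(route_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # 140 + 99 + 211 = 450
```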
• A production system consists of the following components:
1. A set of rules, each consisting of a left side that determines the applicability of
the rule and a right side that describes the operation to be performed if the
rule is applied.
2. One or more databases that contain whatever information is appropriate for
the particular task.
3. A control strategy that specifies the order in which the rules will be compared
with facts in the database and also specifies how to resolve conflicts
when several rules (or several facts) can be selected.
4. A rule applier, which is the computational system that implements the control
strategy and applies the rules.
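These components can be sketched in a few lines, with rules as (condition, action) pairs over a set of facts. The conflict-resolution strategy here is simply "first applicable rule wins", one assumption among many possible:

```python
def run(rules, facts, max_steps=100):
    """Rule applier: fire the first rule whose left side matches,
    until no rule changes the database of facts."""
    for _ in range(max_steps):
        for condition, action in rules:        # control strategy: rule order
            if condition(facts):
                new = action(facts)
                if new != facts:
                    facts = new
                    break
        else:
            break                              # no rule changed anything: stop
    return facts

# Example: the left side tests the facts, the right side adds a new fact.
rules = [
    (lambda f: "wet" in f and "sunny" in f, lambda f: f | {"drying"}),
    (lambda f: "raining" in f,              lambda f: f | {"wet"}),
]
print(sorted(run(rules, frozenset({"raining", "sunny"}))))
# -> ['drying', 'raining', 'sunny', 'wet']
```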
• A search control strategy is defined by picking the order in which the nodes expand.
• Search strategies are evaluated along four dimensions:
• Completeness,
• Time complexity,
• Space complexity,
• Optimality
End of Chapter