Lecture 3 Problem Solving
Artificial Intelligence
Harris Chikunya
Problem Solving (Using Search)
• When the correct action to be taken is not immediately obvious, an
agent may need to plan ahead: to consider a sequence of actions
that form a path to a goal state.
• Such an agent is called a problem-solving agent, and the
computational process it undertakes is called search.
Problem Solving (using Search)
• Search Terminology
Problem Space: The environment in which the search takes place.
Problem Instance: Initial state + goal state.
Problem Space Graph: Represents the problem space. States are shown as nodes and operators as edges.
Depth of a problem: Length of the shortest path (shortest sequence of operators) from the initial state to a goal state.
Branching Factor: The average number of child nodes in the problem space graph.
The Importance of Search in AI
• It has already become clear that many of the tasks underlying AI can
be phrased in terms of a search for the solution to the problem at
hand.
• Many goal-based agents are essentially problem-solving agents, which
must decide what to do by searching for a sequence of actions that
leads to their goals.
• For production systems, we have seen the need to search for a
sequence of rule applications that lead to the required fact or
action.
• For neural network systems, we need to search for the set of
connection weights that will result in the required input to output
mapping.
Problem Types
1. Single-state problem: deterministic, accessible
• The agent knows exactly which state it is in; a solution is a fixed
sequence of actions
Problem Types
2. Multiple-state problem: deterministic, inaccessible
• Agent does not know the exact state (could be in any of the
possible states)
• May not have sensors at all
Problem Types
3. Contingency problem: nondeterministic, inaccessible
• Percepts provide new information during execution; a solution is a
contingency plan (a tree of actions) rather than a fixed sequence
Problem Types
• Exploration problem: unknown state space
• E.g., Maze
Problem-solving Agents
• A simplified road map of part of Romania
Problem-Solving Agents
• GOAL FORMULATION: Goals organize behavior by limiting the
objectives and hence the actions to be considered. The agent adopts
the goal of reaching Bucharest.
• PROBLEM FORMULATION: The agent devises a description of the
states and actions necessary to reach the goal, an abstract model of
the relevant part of the world. For our agent, one good model is to
consider the actions of traveling from one city to an adjacent city.
• SEARCH: Before taking any action in the real world, the agent
simulates sequences of actions in its model, searching until it finds a
sequence that reaches the goal. Such a sequence is called a solution
(for example, going from Arad to Sibiu to Fagaras to Bucharest). The
search may instead find that no solution is possible.
• EXECUTION: The agent can now execute the actions in the solution,
one at a time.
Search Problems and Solutions
A search problem can be defined formally as follows:
A set of possible states that the environment can be in. We call this the state
space.
The initial state that the agent starts in. For example: Arad.
A set of one or more goal states. Sometimes there is one goal state (e.g.,
Bucharest), sometimes there is a small set of alternative goal states.
The actions available to the agent. Given a state s, ACTIONS(s) returns a
finite set of actions that can be executed in s. For example:
ACTIONS(Arad) = {ToSibiu, ToTimisoara, ToZerind}.
A transition model, which describes what each action does. RESULT(s,a)
returns the state that results from doing action a in state s. For example,
RESULT(Arad, ToZerind) = Zerind.
An action cost function, denoted by ACTION-COST(s, a, s’), that gives the numeric
cost of applying action a in state s to reach state s’
A sequence of actions forms a path, and a solution is a path from the initial state to
a goal state.
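As a concrete sketch, the components above can be written as plain functions over a small fragment of the Romania map. The distances are from the standard map; the dictionary encoding and function names are illustrative assumptions, not a fixed API.

```python
# A fragment of the Romania road map: state -> {neighbor: step cost}.
ROMANIA = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Timisoara": {"Arad": 118},
    "Zerind": {"Arad": 75},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

def actions(s):
    """ACTIONS(s): the moves executable in state s (here: 'go to neighbor')."""
    return set(ROMANIA[s])

def result(s, a):
    """RESULT(s, a): the transition model; moving toward city a lands in a."""
    assert a in ROMANIA[s]
    return a

def action_cost(s, a, s2):
    """ACTION-COST(s, a, s'): road distance of the step."""
    return ROMANIA[s][a]

def is_goal(s):
    return s == "Bucharest"

print(actions("Arad"))                        # e.g. {'Sibiu', 'Timisoara', 'Zerind'}
print(result("Arad", "Zerind"))               # Zerind
print(action_cost("Arad", "Sibiu", "Sibiu"))  # 140
```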
Example Problems
• Problems can be either standardized problems or real-world
problems.
• STATES: a description of every possible configuration the environment can be in
• INITIAL STATE: the state the agent starts in
• ACTIONS: the actions available to the agent in each state
• GOAL STATE: the state(s) the agent is trying to reach
• ACTION COST: the numeric cost of applying each action
Standardized Problem Example 1: 8-puzzle
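The standard 8-puzzle formulation can be sketched as follows: a state is a tuple of the nine tiles read row by row (0 for the blank), and an action moves the blank Up, Down, Left, or Right. The tuple encoding and names here are illustrative assumptions.

```python
# 8-puzzle sketch: a state is a tuple of 9 tiles read row by row, 0 = blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}  # index offsets on a 3x3 board

def actions(state):
    """Moves of the blank that stay on the 3x3 board."""
    i = state.index(0)
    acts = []
    if i >= 3: acts.append("Up")
    if i < 6: acts.append("Down")
    if i % 3 != 0: acts.append("Left")
    if i % 3 != 2: acts.append("Right")
    return acts

def result(state, action):
    """Swap the blank with the tile it moves onto."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (1, 0, 2, 3, 4, 5, 6, 7, 8)
print(actions(start))         # ['Down', 'Left', 'Right']
print(result(start, "Left"))  # (0, 1, 2, 3, 4, 5, 6, 7, 8)
```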
Standardized Problem Example 2: Vacuum World
• STATES:
• INITIAL STATE:
• ACTIONS:
• GOAL STATE:
• ACTION COST:
Standardized Problem Example 2: Vacuum World
• STATES: which cells contain dirt and which cell the agent is in; with 2 cells there are 2 × 2² = 8 states
• INITIAL STATE: any state can be the initial state
• ACTIONS: Suck, Move Left, Move Right
• GOAL STATE: every cell is clean
• ACTION COST: each action costs 1
Search Algorithms
• A search algorithm takes a search problem as input and returns a
solution, or an indication of failure.
• These algorithms use a Search tree to form the various paths from
the initial state, trying to find a path that reaches a goal state.
• Each node in the search tree corresponds to a state in the state
space and the edges in the search tree corresponds to actions.
• The root of the tree corresponds to the initial state of the problem.
• The search tree may have multiple paths to (and thus multiple nodes
for) any given state, but each node in the tree has a unique path
back to the root (as in all trees).
Search tree Example
Evaluation of Search strategies
• We can evaluate an algorithm’s performance in four ways:
COMPLETENESS: Is the algorithm guaranteed to find a solution when
there is one, and to correctly report failure when there is not?
COST OPTIMALITY: Does it find a solution with the lowest path cost of
all solutions?
TIME COMPLEXITY: How long does it take to find a solution? This can
be measured in seconds, or more abstractly by the number of states
and actions considered.
SPACE COMPLEXITY: How much memory is needed to perform the
search?
Uninformed Search Strategies
• An uninformed search algorithm is given no clue about how close a
state is to the goal(s).
• In contrast, an informed agent who knows the location of each city
knows that Sibiu is much closer to Bucharest and thus more likely to
be on the shortest path.
• Use only information available in the problem formulation
Breadth-first
Uniform-cost
Depth-first
Depth-limited
Iterative deepening
Breadth-First Search
• Expand the shallowest unexpanded node
• Moves down level by level until a goal is reached
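A minimal sketch of breadth-first search, assuming the graph is encoded as an adjacency dictionary over a fragment of the Romania map (the encoding is an assumption, not part of the slides):

```python
from collections import deque

# Unweighted adjacency fragment of the Romania map.
GRAPH = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Timisoara": ["Arad"],
    "Zerind": ["Arad"],
    "Bucharest": [],
}

def bfs(start, goal):
    """Expand the shallowest frontier node first (FIFO queue)."""
    frontier = deque([[start]])   # queue of paths
    reached = {start}             # states already seen
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in reached:
                reached.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs("Arad", "Bucharest"))  # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Note that BFS returns the path with the fewest edges (via Fagaras), which is not the cheapest route in kilometers.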
Example: Traveling from Arad to Bucharest
BFS Performance
• Completeness: Yes, provided the branching factor b is finite and some solution exists at a finite depth
• Time complexity: O(b^d), where d is the depth of the shallowest solution
• Space complexity: O(b^d), since every generated node is kept in memory
• Optimality: Cost-optimal only when all actions have the same cost (BFS finds the shallowest solution)
Dijkstra’s Algorithm or Uniform-Cost Search
Romania with step costs in km
UCS Performance
• Completeness: Yes, provided every action cost is at least some ε > 0
• Time complexity: number of nodes with g ≤ cost of the optimal solution, O(b^(1 + ⌊C*/ε⌋))
• Space complexity: number of nodes with g ≤ cost of the optimal solution, O(b^(1 + ⌊C*/ε⌋))
• Optimality: Yes, cost-optimal (here C* is the optimal solution cost and ε the minimum action cost)
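Uniform-cost search can be sketched with a priority queue ordered by path cost g(n). The weighted dictionary below is an assumed encoding of a fragment of the Romania map (distances from the standard map):

```python
import heapq

# Weighted fragment of the Romania map: state -> {neighbor: step cost}.
GRAPH = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Timisoara": {"Arad": 118},
    "Zerind": {"Arad": 75},
    "Bucharest": {},
}

def ucs(start, goal):
    """Expand the frontier node with the lowest path cost g(n) first."""
    frontier = [(0, start, [start])]   # priority queue ordered by g
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:              # goal test on expansion => cost-optimal
            return g, path
        for nxt, step in GRAPH[state].items():
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2, nxt, path + [nxt]))
    return None

print(ucs("Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Unlike BFS, UCS finds the 418 km route via Pitesti rather than the fewer-edge 450 km route via Fagaras.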
Depth-first search
• Always expands the deepest node in the frontier first.
• The search proceeds immediately to the deepest level of the search
tree, where the nodes have no successors.
• The search then backs up to the deepest node that still has
unexpanded successors.
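A minimal DFS sketch over a small abstract graph (the graph and names are illustrative assumptions): replacing BFS's FIFO queue with a LIFO stack makes the deepest path come off first.

```python
# Depth-first search on a finite graph: a LIFO stack, so the deepest
# frontier node is always expanded first. Not cost-optimal.
GRAPH = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": ["G"], "F": [], "G": [],
}

def dfs(start, goal):
    frontier = [[start]]         # stack of paths
    while frontier:
        path = frontier.pop()    # LIFO: deepest path comes off first
        if path[-1] == goal:
            return path
        # reversed() so children are explored in left-to-right order
        for nxt in reversed(GRAPH[path[-1]]):
            if nxt not in path:  # avoid cycles along the current path
                frontier.append(path + [nxt])
    return None

print(dfs("A", "G"))  # ['A', 'B', 'E', 'G']
```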
DFS Example
Performance
• Completeness: No, fails in infinite state spaces (yes if the state space
is finite)
• Time complexity: O(b^m)
• Space complexity: O(bm)
• Optimality: No
Remember:
b = branching factor
m = maximum depth of the search tree
Depth-limited search
• Designed to keep DFS from wandering down an infinite path
• A version of DFS in which we supply a depth limit, ℓ, and treat all
nodes at depth ℓ as if they had no successors
• The time complexity is O(b^ℓ)
• The space complexity is O(bℓ)
• Unfortunately, if we make a poor choice for ℓ the algorithm will fail
to reach the solution, making it incomplete again.
• Sometimes a good depth limit can be chosen based on knowledge
of the problem.
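Depth-limited search can be sketched as a recursive DFS that distinguishes "no solution below this node" from "the limit cut the search off". The graph and the "cutoff" sentinel value are illustrative assumptions.

```python
# Depth-limited DFS: nodes at depth limit l are treated as having no
# successors. Returns a path, None (no solution), or "cutoff".
GRAPH = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": [],
    "D": ["G"],
    "G": [],
}

def dls(state, goal, limit, path=None):
    path = path or [state]
    if state == goal:
        return path
    if limit == 0:
        return "cutoff"          # the limit, not the graph, stopped us here
    cutoff = False
    for nxt in GRAPH[state]:
        out = dls(nxt, goal, limit - 1, path + [nxt])
        if out == "cutoff":
            cutoff = True
        elif out is not None:
            return out
    return "cutoff" if cutoff else None

print(dls("A", "G", 2))  # 'cutoff'  (G lies at depth 3)
print(dls("A", "G", 3))  # ['A', 'B', 'D', 'G']
```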
Iterative deepening search
• Combines the best of BFS and DFS strategies
• IDS solves the problem of picking a good value for ℓ by
trying all values: first 0, then 1, then 2, and so on, until
either a solution is found, or the depth-limited search
returns the failure value rather than the cutoff value
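The loop described above can be sketched directly on top of a depth-limited search (repeated here so the example is self-contained; graph and sentinel value are illustrative assumptions):

```python
# Iterative deepening: run depth-limited DFS with limits 0, 1, 2, ...
# until a solution is found or the search stops returning "cutoff".
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": [], "D": ["G"], "G": []}

def dls(state, goal, limit, path=None):
    path = path or [state]
    if state == goal:
        return path
    if limit == 0:
        return "cutoff"
    cutoff = False
    for nxt in GRAPH[state]:
        out = dls(nxt, goal, limit - 1, path + [nxt])
        if out == "cutoff":
            cutoff = True
        elif out is not None:
            return out
    return "cutoff" if cutoff else None

def ids(start, goal):
    limit = 0
    while True:
        out = dls(start, goal, limit)
        if out != "cutoff":
            return out    # a path, or None if the space was exhausted
        limit += 1

print(ids("A", "G"))  # ['A', 'B', 'D', 'G']
```

IDS re-expands shallow nodes on every iteration, but since most nodes of a tree sit at the deepest level, the overhead is modest; it keeps DFS's O(bd) memory with BFS's completeness.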
Informed (Heuristic) Search Strategies
• Use domain-specific hints about the location of goals
• Can find solutions more efficiently than an uninformed strategy
• The hints come in the form of a heuristic function, denoted h(n)
• h(n) = estimated cost of the cheapest path from the state at node n to
a goal state.
• For example, in route-finding problems, we can estimate the
distance from the current state to a goal by computing the straight-
line distance on the map between the two points.
Greedy Best first
A*
Heuristics
Hill-climbing
Simulated annealing
Greedy best-first search
• Evaluation function: f(n) = h(n), the estimated cost of the cheapest
path from n to a goal
• For example:
hSLD(n) = straight-line distance from n to Bucharest
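A sketch of greedy best-first search on a fragment of the Romania map. The hSLD values are the standard straight-line distances to Bucharest; the dictionary encoding and function name are illustrative assumptions.

```python
import heapq

GRAPH = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Timisoara": ["Arad"], "Zerind": ["Arad"], "Bucharest": [],
}

# hSLD: straight-line distance to Bucharest (standard map values).
H = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def greedy_best_first(start, goal):
    """Always expand the node that *looks* closest to the goal: order by h(n)."""
    frontier = [(H[start], [start])]
    reached = {start}
    while frontier:
        _, path = heapq.heappop(frontier)
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in reached:
                reached.add(nxt)
                heapq.heappush(frontier, (H[nxt], path + [nxt]))
    return None

print(greedy_best_first("Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

It heads straight for Bucharest via Fagaras (450 km), expanding very few nodes, but misses the cheaper 418 km route via Pitesti: greedy best-first is fast but not cost-optimal.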
A* search
• Uses the evaluation function f(n) = g(n) + h(n), where:
g(n) = path cost from the initial state to node n
h(n) = estimated cost of the cheapest path from n to a goal
• We are trying to find the cheapest solution, i.e. the node with the
lowest value of g(n) + h(n)
• A* search is complete, and it is cost-optimal provided the heuristic
h(n) is admissible (never overestimates the true cost)
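A sketch of A* on the same map fragment, ordering the frontier by f(n) = g(n) + h(n); distances and hSLD values are the standard map figures, while the encoding is an illustrative assumption.

```python
import heapq

GRAPH = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Timisoara": {"Arad": 118}, "Zerind": {"Arad": 75}, "Bucharest": {},
}
H = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def astar(start, goal):
    """Order the frontier by f(n) = g(n) + h(n)."""
    frontier = [(H[start], 0, [start])]   # (f, g, path)
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return g, path
        for nxt, step in GRAPH[state].items():
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + H[nxt], g2, path + [nxt]))
    return None

print(astar("Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Because hSLD never overestimates road distance, A* here recovers the same 418 km optimum as uniform-cost search while using the heuristic to guide expansion toward Bucharest.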
End