Lecture 3 Problem Solving

The document discusses problem solving using search techniques in artificial intelligence. It covers key concepts like problem space, states, actions, and search algorithms. Standardized problems like the 8-puzzle and vacuum world are presented as examples to illustrate search problems and solutions. Real-world problems involving route finding are also discussed. Different search strategies like breadth-first, depth-first, and uniform cost are introduced.


IT406

Artificial Intelligence

Problem Solving (Using Search)

Harris Chikunya

1
Problem Solving (Using Search)
• When the correct action to be taken is not immediately obvious, an
agent may need to plan ahead: to consider a sequence of actions
that form a path to a goal state.
• Such an agent is called a problem-solving agent, and the
computational process it undertakes is called search.

2
Problem Solving (using Search)
• Search Terminology
 Problem Space: the environment in which the search takes place.
 Problem Instance: initial state + goal state.
 Problem Space Graph: represents the problem space; states are shown as
nodes and operators as edges.
 Depth of a problem: the length of the shortest path (shortest sequence of
operators) from the initial state to a goal state.
 Branching Factor: the average number of child nodes in the problem
space graph.
3
The Importance of Search in AI
• It has already become clear that many of the tasks underlying AI can
be phrased in terms of a search for the solution to the problem at
hand.
• Many goal-based agents are essentially problem-solving agents
which must decide what to do by searching for a sequence of
actions that leads to their goals.
• For production systems, we have seen the need to search for a
sequence of rule applications that lead to the required fact or
action.
• For neural network systems, we need to search for the set of
connection weights that will result in the required input to output
mapping.

4
Problem Types
1. Single-State problem: deterministic, accessible

 The agent knows everything about the world (the exact state).
 It can calculate the optimal action sequence to reach the goal state.
 E.g., playing chess: any action results in an exact, known state.
5
Problem Types
2. Multiple-state problem: deterministic, inaccessible

• Agent does not know the exact state (could be in any of the
possible states)
• May have no sensors at all
• Makes assumptions about the state while working towards the goal state.

• E.g., walking in a dark room
• If you are at the door, going straight leads you to the kitchen
• If you are in the kitchen, turning left leads you to the bedroom
• …

6
Problem Types
3. Contingency problem: nondeterministic, inaccessible

• Must use sensors during execution
• Solution is a tree or policy
• Often interleaves search and execution

• E.g., a new skater in an arena
• Sliding problem
• Many skaters around

7
Problem Types
4. Exploration problem: unknown state space

• Discover and learn about the environment while taking
actions.

• E.g., a maze

8
Problem-solving Agents
• A simplified road map of part of Romania

9
Problem-Solving Agents
• GOAL FORMULATION: Goals organize behavior by limiting the
objectives and hence the actions to be considered. The agent adopts
the goal of reaching Bucharest.
• PROBLEM FORMULATION: The agent devises a description of the
states and actions necessary to reach the goal—an abstract model of
the relevant part of the world. For our agent, one good model is to
consider the actions of traveling from one city to an adjacent city
• SEARCH: Before taking any action in the real world, the agent
searches until it finds a sequence of actions that reaches the goal.
Such a sequence is called a solution (e.g., going from Arad to Sibiu
to Fagaras to Bucharest). Otherwise, it finds that no solution is
possible.
• EXECUTION: The agent can now execute the actions in the solution,
one at a time.

10
Search Problems and Solutions
A search problem can be defined formally as follows:
 A set of possible states that the environment can be in. We call this the state
space.
 The initial state that the agent starts in. For example: Arad.
 A set of one or more goal states. Sometimes there is one goal state (e.g.,
Bucharest), sometimes there is a small set of alternative goal states.
 The actions available to the agent. Given a state s, ACTIONS(s) returns a
finite set of actions that can be executed in s. For example:
ACTIONS(Arad) = {ToSibiu, ToTimisoara, ToZerind}.
 A transition model, which describes what each action does. RESULT(s,a)
returns the state that results from doing action a in state s. For example,
RESULT(Arad, ToZerind) = Zerind.
 An action cost function, denoted by ACTION-COST(s, a, s′), that gives the numeric
cost of applying action a in state s to reach state s′.
A sequence of actions forms a path, and a solution is a path from the initial state to
a goal state.
11
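As an illustration, the formal components above can be sketched in Python for a small fragment of the Romania map. This is a minimal sketch, not from the slides: the function names mirror ACTIONS, RESULT, and ACTION-COST, and the road distances are assumed textbook values.

```python
# A minimal sketch of the search-problem interface from the slide,
# over a small fragment of the Romania map (distances assumed).
ROADS = {
    ("Arad", "Sibiu"): 140,
    ("Arad", "Timisoara"): 118,
    ("Arad", "Zerind"): 75,
}

def actions(s):
    """ACTIONS(s): the cities reachable from state s."""
    return ({b for (a, b) in ROADS if a == s}
            | {a for (a, b) in ROADS if b == s})

def result(s, a):
    """RESULT(s, a): driving from s towards city a puts the agent in a."""
    return a

def action_cost(s, a, s2):
    """ACTION-COST(s, a, s'): the road distance between s and s'."""
    return ROADS.get((s, s2), ROADS.get((s2, s)))

print(sorted(actions("Arad")))                  # ['Sibiu', 'Timisoara', 'Zerind']
print(action_cost("Arad", "Zerind", "Zerind"))  # 75
```

A state space this small fits in a dict; the same interface scales to implicit state spaces where `actions` and `result` are computed rather than stored.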
Example Problems
• Problems can be either standardized problems or real-world
problems.

• A standardized problem is intended to illustrate or exercise various
problem-solving methods.
• It can be given a concise, exact description and hence is suitable as a
benchmark for researchers to compare the performance of
algorithms.

• A real-world problem, such as robot navigation, is one whose
solutions people actually use, and whose formulation is not
standardized, because, for example, each robot has different sensors
that produce different data.
12
Standardized Problem Example 1: 8-puzzle

• STATES:
• INITIAL STATE:
• ACTIONS:
• GOAL STATE:
• ACTION COST:

13
Standardized Problem Example 1: 8-puzzle

• STATES: the location of each of the tiles
• INITIAL STATE: any state
• ACTIONS: move the blank Left, Right, Up, Down
• GOAL STATE: does the state match the goal state?
• ACTION COST: each action costs 1

14
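This formulation can be encoded directly in Python. The sketch below assumes (not from the slides) that a state is a 9-tuple read row by row, with 0 standing for the blank.

```python
# 8-puzzle transition model: states are 9-tuples read row by row,
# with 0 for the blank; moves slide the blank Left/Right/Up/Down.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)  # one common goal layout (assumed)

def actions(state):
    i = state.index(0)               # position of the blank
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    return moves

def result(state, move):
    i = state.index(0)
    delta = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}[move]
    j = i + delta
    s = list(state)
    s[i], s[j] = s[j], s[i]          # swap the blank with the neighbouring tile
    return tuple(s)

print(actions(GOAL))                 # ['Right', 'Down']
print(result(GOAL, "Right"))         # (1, 0, 2, 3, 4, 5, 6, 7, 8)
```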
Standardized Problem Example 1: 8-puzzle

Why search algorithms?

 8-puzzle has 362,880 states
 15-puzzle has ~10^13 states
 24-puzzle has ~10^25 states

So, we need a principled way to look for a solution in these huge
search spaces…

15
Standardized Problem Example 2: Vacuum World

• STATES:
• INITIAL STATE:
• ACTIONS:
• GOAL STATE:
• ACTION COST:
16
Standardized Problem Example 2: Vacuum World

• STATES: robot location and dirt locations
• INITIAL STATE: any state
• ACTIONS: Left, Right, Suck
• GOAL STATE: no dirt
• ACTION COST: each action costs 1
17
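A sketch of the two-cell version of this world in Python (an illustration with assumed names: a state is a pair of the robot's location and the set of dirty cells):

```python
# Two-cell vacuum world: a state is (robot_location, dirty_cells);
# actions are Left, Right, Suck; the goal is no dirt anywhere.
def result(state, action):
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)
    if action == "Right":
        return ("B", dirt)
    if action == "Suck":
        return (loc, dirt - {loc})   # remove dirt from the current cell
    return state

def is_goal(state):
    return not state[1]              # goal test: the dirt set is empty

s = ("A", frozenset({"A", "B"}))     # robot in A, both cells dirty
s = result(result(result(s, "Suck"), "Right"), "Suck")
print(is_goal(s))                    # True
```

Using a frozenset for the dirt makes states hashable, so they can go straight into the `reached` sets used by the search algorithms later in the lecture.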
Real-World Problems Example 1: Route
Finding problem
• Consider the airline travel problems that must be solved by a travel-
planning Web site:

• STATES: ??
• INITIAL STATE: ??
• ACTIONS: ??
• GOAL STATE: ??
• ACTION COST: ??

18
Search Algorithms
• A search algorithm takes a search problem as input and returns a
solution, or an indication of failure.
• These algorithms use a Search tree to form the various paths from
the initial state, trying to find a path that reaches a goal state.
• Each node in the search tree corresponds to a state in the state
space and the edges in the search tree corresponds to actions.
• The root of the tree corresponds to the initial state of the problem.
• The search tree may have multiple paths to (and thus multiple nodes
for) any given state, but each node in the tree has a unique path
back to the root (as in all trees).

19
Search tree Example

20
Evaluation of Search strategies
• We can evaluate an algorithm’s performance in four ways:
 COMPLETENESS: Is the algorithm guaranteed to find a solution when
there is one, and to correctly report failure when there is not?
 COST OPTIMALITY: Does it find a solution with the lowest path cost of
all solutions?
 TIME COMPLEXITY: How long does it take to find a solution? This can
be measured in seconds, or more abstractly by the number of states
and actions considered.
 SPACE COMPLEXITY: How much memory is needed to perform the
search?

21
Uninformed Search Strategies
• An uninformed search algorithm is given no clue about how close a
state is to the goal(s).
• In contrast, an informed agent that knows the location of each city
knows that Sibiu is much closer to Bucharest and thus more likely to
be on the shortest path.
• Use only information available in the problem formulation

 Breadth-first
 Uniform-cost
 Depth-first
 Depth-limited
 Iterative deepening

22
Breadth-First Search
• Expand the shallowest unexpanded node
• Moves down level by level until a goal is reached

23
Example: Traveling from Arad to Bucharest

24
Breadth-first search

25
Breadth-first search

26
Breadth-first search

27
BFS Performance
• Search algorithms are commonly evaluated according to the following four criteria:
 Completeness: does it always find a solution if one exists?
 Time complexity: how long does it take as function of num. of nodes?
 Space complexity: how much memory does it require?
 Optimality: does it guarantee the least-cost solution?

• Time and space complexity are measured in terms of:

 b – max branching factor of the search tree
 d – depth of the least-cost solution
 m – max depth of the search tree (may be infinite)

 BFS performance measures

• Completeness: yes, if b is finite
• Time complexity: 1 + b + b² + … + b^d = O(b^d), i.e., exponential in d
• Space complexity: O(b^d), keeps every node in memory
• Optimality: yes (assuming cost = 1 per step)

28
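The level-by-level expansion can be sketched in a few lines of Python. This is a minimal illustration, not the slides' own code; the map fragment is assumed.

```python
from collections import deque

# Breadth-first search over an adjacency dict: the frontier is a FIFO
# queue, so the shallowest unexpanded node is always taken first.
def bfs(graph, start, goal):
    frontier = deque([[start]])
    reached = {start}
    while frontier:
        path = frontier.popleft()          # shallowest path first
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in reached:         # avoid re-adding visited states
                reached.add(nxt)
                frontier.append(path + [nxt])
    return None                            # failure: no path exists

graph = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Fagaras", "Rimnicu"],
         "Fagaras": ["Bucharest"]}
print(bfs(graph, "Arad", "Bucharest"))     # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

The `reached` set is what keeps the space cost at O(b^d): every generated node is remembered, which is exactly the memory burden noted above.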
Dijkstra’s Algorithm or Uniform-Cost Search

• It explores paths in increasing order of cost
• It always expands the least-cost node
• It is identical to BFS if each transition has the same cost
• The path cost is usually taken to be the sum of the step costs

29
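A uniform-cost search sketch using a priority queue ordered by path cost g(n). The code and the road distances are illustrative assumptions, not taken from the slides.

```python
import heapq

# Uniform-cost (Dijkstra-style) search: the frontier is a min-heap
# keyed on g(n), so the least-cost node is always expanded first.
def ucs(graph, start, goal):
    frontier = [(0, start, [start])]
    best = {start: 0}                      # cheapest known g per state
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in graph.get(node, []):
            g2 = g + cost
            if nxt not in best or g2 < best[nxt]:
                best[nxt] = g2
                heapq.heappush(frontier, (g2, nxt, path + [nxt]))
    return None

graph = {"Arad": [("Sibiu", 140), ("Zerind", 75)],
         "Sibiu": [("Fagaras", 99), ("Rimnicu", 80)],
         "Fagaras": [("Bucharest", 211)],
         "Rimnicu": [("Pitesti", 97)],
         "Pitesti": [("Bucharest", 101)]}
print(ucs(graph, "Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu', 'Pitesti', 'Bucharest'])
```

Note that it returns the 418-km route via Rimnicu and Pitesti, not the shorter-looking route via Fagaras (450 km): expanding by cumulative cost is what makes the result cost-optimal.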
Romania with step costs in Km

30
Uniform-cost search

31
Uniform-cost search

32
Uniform-cost search

33
UCS Performance
• Completeness: yes
• Time complexity: # nodes with g ≤ cost of optimal solution, O(b^d)
• Space complexity: # nodes with g ≤ cost of optimal solution, O(b^d)
• Optimality: yes, cost-optimal

g(n) is the path cost to node n


Remember:
b = branching factor
d = depth of least-cost solution

34
Depth-first search
• Always expands the deepest node in the frontier first.
• The search proceeds immediately to the deepest level of the search,
where the nodes have no successors.
• The search then goes to the deepest node that still has unexpanded
successors.

35
DFS Example

36
Performance
• Completeness: no, fails in infinite state spaces (yes if the state
space is finite)
• Time complexity: O(b^m)
• Space complexity: O(bm)
• Optimality: No

Remember:
b = branching factor
m = max depth of search tree

37
Depth-limited search
• Designed to keep DFS from wandering down an infinite path
• A version of DFS in which we supply a depth limit, ℓ, and treat all
nodes at depth ℓ as if they had no successors
• The time complexity is O(b^ℓ)
• The space complexity is O(bℓ)
• Unfortunately, if we make a poor choice for ℓ the algorithm will fail
to reach the solution, making it incomplete again.
• Sometimes a good depth limit can be chosen based on knowledge
of the problem.

38
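A recursive sketch of depth-limited search (names assumed). The important detail is the three-valued result: a path, a plain failure, or a "cutoff" marker meaning the limit ℓ, not the state space, stopped the search.

```python
# Depth-limited DFS: nodes at depth limit 0 are treated as having no
# successors; "cutoff" signals that the limit may have hidden a solution.
def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"
    cutoff = False
    for nxt in graph.get(node, []):
        res = dls(graph, nxt, goal, limit - 1)
        if res == "cutoff":
            cutoff = True                  # the limit blocked this branch
        elif res is not None:
            return [node] + res            # found a path below nxt
    return "cutoff" if cutoff else None    # None = genuine failure

graph = {"A": ["B"], "B": ["C"], "C": []}
print(dls(graph, "A", "C", 1))             # cutoff
print(dls(graph, "A", "C", 2))             # ['A', 'B', 'C']
```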
Depth-limited search

39
Iterative deepening search
• Combines the best of BFS and DFS strategies
• IDS solves the problem of picking a good value for ℓ by
trying all values: first 0, then 1, then 2, and so on—until
either a solution is found, or the depth limited search
returns the failure value rather than the cutoff value

• NB: In general, iterative deepening is the preferred
uninformed search method when the search state space is
larger than can fit in memory and the depth of the solution
is not known.

40
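The "try all limits" loop is a few lines on top of depth-limited search. A small depth-limited helper is repeated here (with assumed names) so the sketch stands alone:

```python
# Depth-limited DFS helper; returns a path, None (failure), or "cutoff".
def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"
    cutoff = False
    for nxt in graph.get(node, []):
        res = dls(graph, nxt, goal, limit - 1)
        if res == "cutoff":
            cutoff = True
        elif res is not None:
            return [node] + res
    return "cutoff" if cutoff else None

# Iterative deepening: retry with limits 0, 1, 2, ... until a solution
# is found or the search fails for a reason other than the cutoff.
def ids(graph, start, goal):
    limit = 0
    while True:
        res = dls(graph, start, goal, limit)
        if res != "cutoff":
            return res                     # a solution, or None = failure
        limit += 1

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(ids(graph, "A", "D"))                # ['A', 'B', 'D']
```

Re-expanding the shallow levels on every iteration looks wasteful, but since most nodes of a b-ary tree sit at the deepest level, the total work stays O(b^d) while memory stays O(bd).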
Iterative Deepening Search

41
Informed (Heuristic) Search Strategies
• Use domain-specific hints about the location of goals
• Can find solutions more efficiently than an uninformed strategy
• The hints come in the form of a heuristic function, denoted h(n)
• h(n) = estimated cost of the cheapest path from the state at node n to
a goal state.
• For example, in route-finding problems, we can estimate the
distance from the current state to a goal by computing the straight-
line distance on the map between the two points.
 Greedy Best first
 A*
 Heuristics
 Hill-climbing
 Simulated annealing
42
Greedy best-first search
• Estimation function:
h(n) = estimate of cost from n to goal (heuristic)

• For example:
hSLD(n) = straight-line distance from n to Bucharest

• Greedy search expands first the node that appears to be closest to
the goal, according to h(n).
• This is on the grounds that it is likely to lead to a solution quickly.

43
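One way this could look in Python (a sketch with assumed names; the hSLD values are the usual textbook straight-line distances):

```python
import heapq

# Greedy best-first search: the frontier is ordered purely by h(n),
# the straight-line distance to Bucharest (values assumed).
H = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu": 193,
     "Pitesti": 100, "Bucharest": 0}

def greedy(graph, start, goal):
    frontier = [(H[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)   # node that looks closest
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (H[nxt], nxt, path + [nxt]))
    return None

graph = {"Arad": ["Sibiu"],
         "Sibiu": ["Fagaras", "Rimnicu"],
         "Fagaras": ["Bucharest"],
         "Rimnicu": ["Pitesti"],
         "Pitesti": ["Bucharest"]}
print(greedy(graph, "Arad", "Bucharest"))  # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

It heads straight for Fagaras because hSLD(Fagaras) = 176 < hSLD(Rimnicu) = 193, ending up on the 450-km route even though the Rimnicu route is 418 km: quick, but not optimal.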
Greedy best-first search

Values of hSLD – straight-line distances to Bucharest, e.g. hSLD(Arad) = 366.

44
Greedy best-first search

45
A* search
• Uses the evaluation function
f(n) = g(n) + h(n), where:

 g(n) – cost so far to reach n
 h(n) – estimated cost to goal from n
 f(n) – estimated total cost of path through n to goal

• We are trying to find the cheapest solution i.e. the node with the
lowest value of g(n) + h(n)
• A* search is both complete and optimal

46
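A* differs from uniform-cost search only in how the frontier is ordered: by f(n) = g(n) + h(n) instead of g(n) alone. A sketch under the same assumptions as before (textbook road distances and straight-line heuristic values):

```python
import heapq

# A* search: the frontier is a min-heap keyed on f(n) = g(n) + h(n),
# with h given by assumed straight-line distances to Bucharest.
H = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu": 193,
     "Pitesti": 100, "Bucharest": 0}

def astar(graph, start, goal):
    frontier = [(H[start], 0, start, [start])]
    best = {start: 0}                      # cheapest known g per state
    while frontier:
        f, g, node, path = heapq.heappop(frontier)   # lowest f first
        if node == goal:
            return g, path
        for nxt, cost in graph.get(node, []):
            g2 = g + cost
            if nxt not in best or g2 < best[nxt]:
                best[nxt] = g2
                heapq.heappush(frontier, (g2 + H[nxt], g2, nxt, path + [nxt]))
    return None

graph = {"Arad": [("Sibiu", 140)],
         "Sibiu": [("Fagaras", 99), ("Rimnicu", 80)],
         "Fagaras": [("Bucharest", 211)],
         "Rimnicu": [("Pitesti", 97)],
         "Pitesti": [("Bucharest", 101)]}
print(astar(graph, "Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu', 'Pitesti', 'Bucharest'])
```

Unlike greedy best-first search, A* recovers the cheaper 418-km route: the g(n) term stops it from committing to the Fagaras branch once the cumulative cost grows too large.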
A* search

47
A* search

48
End

49
