Chapter - 3 Searching and Planning

Chapter 3
Solving Problems by Searching and Constraint Satisfaction Problem
Topics we will cover
Solving Problems by Searching
o Problem Solving Agents
o Problem Formulation
o Search Strategies
  1. Uninformed search strategies
  2. Informed search strategies
o Avoiding Repeated States
Constraint Satisfaction Problem
Games as Search Problems
Problem-solving agents
In this chapter we investigate one type of goal-based agent called a problem-solving agent.
These agents are supposed to act in such a way that the environment goes through a sequence of states that maximizes the performance measure.
Unfortunately, this specification is difficult to translate into a successful agent design.
o The task is simplified if the agent can adopt a goal and aim to satisfy it.
Example: Suppose the agent is in Jimma and wishes to get to Adama. There are a number of factors to consider, e.g. cost, distance, speed and comfort of the journey.
Problem-solving agents
List and explain the phases of a problem-solving agent.
Give two examples each of informed and uninformed search strategies.
Cont…
Goals such as this help to organize behavior by limiting the objectives that the agent is trying to achieve.
Goal formulation, based on the current situation, is the first step in problem solving.
In addition to formulating a goal, the agent may wish to decide on some other factors (cost, comfort, speed, etc.) that affect the desirability of different ways of achieving the goal.
We will consider a goal to be a set of states - just those states in which the goal is satisfied.
Actions can be viewed as causing transitions between states.
How can the agent decide on what types of actions to consider?
Cont…
Problem formulation is the process of deciding what actions and states to consider.
For now let us assume that the agent will consider actions at the level of driving from one city to another.
The states will then correspond to being in particular towns along the way to Adama.
The agent has now adopted the goal of getting to Adama, so unless it is already there, it must transform the current state into the desired one.
Suppose that there are three roads leaving Jimma but that none of them lead directly to Adama. What should the agent do?
If it does not know the geography, it can do no better than to pick one of the roads at random.
Cont…
However, suppose the agent has a map of the area.
The purpose of a map is to provide the agent with information about the states it might get itself into and the actions it can take.
The agent can use the map to consider subsequent steps of a hypothetical journey that will eventually lead to the goal state.
In general, an agent with several intermediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value and then choosing the best one.
This process is called search.
A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
Cont…
Once a solution is found, the actions it recommends can be carried out. This is called the execution phase.
Hence, the basic algorithm for problem-solving agents consists of three phases:
 Formulate the problem,
 Search for a solution, and
 Execute the solution.
In solving problems, it is important to understand the concept of a state space.
o A state space is the set of all states reachable from the initial state.
The aim of the problem-solving agent is to perform a sequence of actions that change the environment so that it ends up in one of the goal states.
Example: Romania (1)
For example, let's say that an agent is in the town of Arad and has the goal of getting to Bucharest.
What sequence of actions will lead to the agent achieving its goal?

Figure - A simplified road map of part of Romania.
Example: Romania (2)
Formulate goal:
 Be in Bucharest
Formulate problem:
 States: various cities
 Actions: drive between cities
Find solution:
 A sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Problem Formulation
A problem can be defined formally by four components:
 The initial state that the agent starts in.
 A description of the possible actions available to the agent.
 The goal test, which determines whether a given state is a goal state.
 A path cost function that assigns a numeric cost to each path.
A solution to a problem is a path from the initial state to a goal state.
How will the solution quality or performance of a problem-solving agent be measured?
Measuring problem-solving performance
The effectiveness of a search can be measured in at least three ways:
 Does it find a solution?
 Is it a good solution (low cost)?
 What are the time and memory required to find a solution (search cost)?
Single-state problem formulation
For example, let's say that an agent is in the town of Arad and has the goal of getting to Bucharest.
A problem is defined by four items:
1. Initial state, e.g., "at Arad".
2. Actions or successor function S(x) = set of action-state pairs,
   e.g. S(Arad) = {<Arad→Zerind, Zerind>, <Arad→Sibiu, Sibiu>, <Arad→Timisoara, Timisoara>}
3. Goal test, which can be, e.g., x = "at Bucharest"?
4. Path cost (additive),
   e.g., sum of distances, number of actions executed, etc.
   c(x,a,y) is the step cost, assumed to be ≥ 0;
   e.g. c(Arad, Arad→Zerind, Zerind) = 75.
A solution is a sequence of actions leading from the initial state to a goal state.
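The four components above can be sketched in code. This is a minimal illustration, not from the slides: the class name `RouteProblem` and the partial road map are assumptions chosen to match the Romania example.

```python
# Partial Romania road map (city -> {neighbor: road distance}); distances
# for the listed roads follow the standard Romania example.
ROADS = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Bucharest": 211},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Pitesti": {"Bucharest": 101},
}

class RouteProblem:
    """Hypothetical wrapper for the four components of a search problem."""

    def __init__(self, initial, goal):
        self.initial = initial           # 1. initial state
        self.goal = goal

    def actions(self, state):            # 2. actions available in a state
        return list(ROADS.get(state, {}))

    def goal_test(self, state):          # 3. goal test
        return state == self.goal

    def step_cost(self, state, action):  # 4. path cost, built from step costs
        return ROADS[state][action]

problem = RouteProblem("Arad", "Bucharest")
```

A path's total cost is then the sum of its step costs, e.g. `step_cost("Arad", "Sibiu")` contributes 140.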
Example: Vacuum world state space graph
Aim: understanding the state space for the vacuum-cleaner world.

Note: R - Right, L - Left, S - Suck

The agent is in one of two locations, each of which may contain dirt.
states? => 2 × 2² = 8 possible states
actions? Left, Right, Suck
goal test? No dirt at any location
path cost? 1 per action

Such a set of all the possible states for a problem is called the state space.
Example: Vacuum world state space graph
• The number of possible world states in a vacuum-cleaner problem is given by n × 2ⁿ, where n refers to the number of rooms.
• How many states would we have if the number of rooms were 4 rather than 2?
  4 × 2⁴ = 64
• Such a set of all the possible states for a problem is called the state space.
• On the right side, the state space for the Romania problem is given.
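The state count above can be checked with a one-line function (an illustrative sketch; the name `vacuum_states` is an assumption):

```python
def vacuum_states(n):
    """Number of world states for an n-room vacuum world: the agent can be
    in any of n rooms, and each room is independently clean or dirty,
    giving n * 2**n states."""
    return n * 2 ** n
```

This reproduces the slide's figures: 2 rooms give 8 states, 4 rooms give 64.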
Example: The 8-puzzle
states? Locations of the tiles and the blank space => 9! different states
actions? Move the blank left, right, up, or down
goal test? State = goal state (given)
path cost? 1 per move
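The "move the blank" actions can be sketched as a successor function. This is an illustrative sketch, not from the slides; the state encoding (a 9-tuple with 0 as the blank) is an assumption.

```python
def successors(state):
    """Successor states of an 8-puzzle configuration.

    `state` is a tuple of 9 entries in row-major order, with 0 marking
    the blank. Each action slides the blank up, down, left, or right.
    """
    i = state.index(0)
    row, col = divmod(i, 3)
    result = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:          # stay on the board
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]            # swap blank with adjacent tile
            result.append(tuple(s))
    return result
```

A corner blank has 2 successors, an edge blank 3, and a center blank 4.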
Tree search algorithms
Search algorithms are one of the most important areas of Artificial Intelligence.
They are important in solving search problems.
A search problem consists of a search space, a start state, and a goal state.
o Search space: represents the set of possible solutions a system may have.
o Start state: the state from which the agent begins the search.
o Goal test/state: a function which observes the current state and returns whether the goal state has been achieved.
Search algorithms help AI agents attain the goal state through the assessment of scenarios and alternatives.
Cont…
• If the node represents a goal state, we stop searching.
• Otherwise, we expand the selected node (generate its possible successors using the successor function) and add the successors as child nodes of the selected node.
• The following are four essential properties of search algorithms, used to compare their efficiency:
o Completeness: an algorithm is complete if it guarantees to return a solution whenever at least one solution exists for the given input.
o Optimality: optimal if the solution found is the best (lowest path cost) among all the solutions identified.
o Time complexity: the amount of time the algorithm takes to complete its task; less time means a more efficient algorithm.
o Space complexity: the maximum storage or memory taken by the algorithm.
Cont…
Time and space are measured in terms of:
o b: maximum branching factor (maximum number of successors of any node) of the search tree; e.g. the branching factor from Arad is 3.
o d: depth of the least-cost solution.
o m: maximum depth of the state space (may be ∞).
Tree search example
Partial search trees for finding a route from Arad to Bucharest:
o Nodes that have been expanded are shaded;
o Nodes that have been generated but not yet expanded are outlined in bold;
o Nodes that have not yet been generated are shown in faint dashed lines.
Another Example: Route finding Problem
A search tree is a representation in which nodes denote paths and branches connect paths.
o The node with no parent is the root node.
o The nodes with no children are called leaf nodes.
Partial search tree for route finding from Saris to Main Campus:
(a) The initial state: Saris.
(b) After expanding Saris (choosing one option, generating new states): Mercato, JiT Campus, Gabriel.
(c) After expanding Mercato (goal test): Main Campus, Frustale, Gabriel, Agp, ...
Another Example: Route finding Problem
Partial search tree for route finding from Sidist Kilo to Stadium:
(a) The initial state: Sidist Kilo.
(b) After expanding Sidist Kilo (choosing one option, generating new states): Arat Kilo, Giorgis, Shiro Meda.
(c) After expanding Arat Kilo (goal test): Piassa, Megenagna, Meskel Square (Stadium).
Searching strategies
• A search strategy gives the order in which the search space is examined.
1. Uninformed (= blind) search
o Does not need domain knowledge to guide it towards the goal.
o Has no information about the number of steps or the path cost from the current state to the goal.
o Important for problems for which there is no additional information to consider.
2. Informed (= heuristic) search
o Has problem-specific knowledge (knowledge that is true from experience).
o Has knowledge about how far the various states are from the goal.
o Can find solutions more efficiently than uninformed search.
Search method types:
Uninformed search
o Breadth-first search
o Depth-first search
o Uniform-cost search
o Depth-limited search
o Iterative deepening search
o etc.
Informed search
o Greedy search
o A* search
o Iterative improvement
o Constraint satisfaction
o etc.
Uninformed search strategies
The simplest type of tree search algorithm is called uninformed, or blind, tree search.
Uninformed search strategies use only the information available in the problem definition.
They have no additional information about the distance from the current state to the goal.
o Breadth-first search
o Depth-first search
o Uniform-cost search
o Depth-limited search
o Iterative deepening search
Breadth-first search
In breadth-first search we always select the minimum-depth node for expansion.
Expand the shallowest (least-depth) unexpanded node.
Implementation:
 It uses a queue data structure (FIFO approach), i.e., new successors go at the end.
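The FIFO-queue idea above can be sketched in Python. This is a minimal illustration, not the slides' own code; the function name `bfs` and the small example graph are assumptions.

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: expand the shallowest unexpanded node by
    keeping the frontier in a FIFO queue. Returns a path or None."""
    frontier = deque([[start]])            # queue of paths; popleft = FIFO
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for succ in neighbors.get(node, []):
            if succ not in visited:        # avoid repeated states
                visited.add(succ)
                frontier.append(path + [succ])   # new successors go at end
    return None

# Hypothetical graph for illustration.
GRAPH = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": ["G"]}
```

Because the frontier is processed in FIFO order, BFS finds a shallowest goal first: here it returns S-B-G rather than the deeper S-A-C-G.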
Exercise
Apply BFS to find an optimal path from the start node to the goal node, where S is the start node and G is the goal node.
Depth-first search
Expand the deepest unexpanded node.
Implementation:
o It uses a stack data structure (LIFO approach), i.e., put successors at the front (fringe = LIFO stack).
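The LIFO-stack idea can be sketched the same way as the BFS example, swapping the queue for a stack. This is an illustrative sketch; the name `dfs` and the example graph are assumptions.

```python
def dfs(start, goal, neighbors):
    """Depth-first search: expand the deepest unexpanded node by
    keeping the frontier in a LIFO stack. Returns a path or None."""
    frontier = [[start]]                   # stack of paths; pop() = LIFO
    visited = set()
    while frontier:
        path = frontier.pop()              # deepest (most recent) path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push in reverse so the first-listed successor is expanded first
        for succ in reversed(neighbors.get(node, [])):
            if succ not in path:           # avoid cycles along this path
                frontier.append(path + [succ])
    return None

# Hypothetical graph for illustration.
GRAPH = {"S": ["A", "B"], "A": ["D", "E"], "B": ["G"], "D": [], "E": ["G"]}
```

Note how DFS dives down the leftmost branch first, returning S-A-E-G here even though the shallower S-B-G also reaches the goal.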
Depth-first search vs. Breadth-first search
Uniform-cost search
 Finds the shortest path to the goal in terms of cost.
o It modifies BFS by expanding the least-cost unexpanded node first.
o It is used for traversing a weighted tree or graph.
 Implementation:
o fringe = queue ordered by path cost

(Figure: a small weighted graph showing UCS expanding nodes in order of path cost.)

Properties:
 Equivalent to breadth-first search if all step costs are equal.
 This strategy finds the cheapest solution.
 It does not care about the number of steps involved in searching, only about path cost; because of this, the algorithm may get stuck in an infinite loop (e.g. along a path of zero-cost actions).
Uniform-cost search
 Look at the following example (2) of how UCS works (S = start node, G = goal node).
• From node S we look for a node to expand; we have nodes A and G, but since this is uniform-cost search, it expands the node with the lowest step cost.
• So node A becomes the successor rather than our required goal node G.
• From A we look at its child nodes B and C. Since C has the lowest step cost, we traverse through node C, and then we look at the successors of C, i.e. D and G. Since the cost to D is low, we also expand node D.
• D has only one child, G, which is our required goal state, with a path cost of 6; but the cheapest route is through C → G, with a total cost of 4, leading to the goal state G, i.e. S → A → C → G.
• If we traverse this way, our total path cost from S to G is just 4, even after traversing through many nodes, rather than going to G directly, where the cost is 12, and 4 << 12 (in terms of step cost).
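The walkthrough above can be sketched with a priority queue. This is an illustrative sketch, not the slides' code; the edge weights below are assumptions chosen so that, as in the narrative, the direct edge S→G costs 12 while the cheapest route S→A→C→G costs 4.

```python
import heapq

def ucs(start, goal, graph):
    """Uniform-cost search: always expand the frontier node with the
    lowest path cost g(n), using a priority queue ordered by cost."""
    frontier = [(0, start, [start])]       # (path cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for succ, step in graph.get(node, {}).items():
            heapq.heappush(frontier, (cost + step, succ, path + [succ]))
    return None

# Hypothetical weighted graph mirroring the slide's narrative.
GRAPH = {"S": {"A": 1, "G": 12}, "A": {"B": 3, "C": 1},
         "C": {"D": 1, "G": 2}, "D": {"G": 3}}
```

The goal test is applied when a node is popped, not when it is generated; that is what lets UCS reject the early but expensive S→G edge in favor of the cost-4 route.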
Depth-limited search
• Depth-first search is not complete for unbounded trees.
• Depth-limited search overcomes this by performing depth-first search with a depth limit l, i.e., it never expands nodes at depth l.
• Unfortunately, this introduces a new source of incompleteness:
o What if the solution is at a depth greater than l?
 Also, depth-limited search is non-optimal if l > d, i.e. if the solution is above the cut-off depth l.
Properties of DLS
 E.g. depth-first search with depth limit l = 2, i.e., never expand nodes at depth l = 2.
Properties:
o Incomplete if the solution is below the cut-off depth l.
o Not optimal if the solution is above the cut-off depth l.
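Depth-limited search can be sketched recursively: it is depth-first search that refuses to expand once the limit reaches 0. An illustrative sketch; the name `dls` and the example graph are assumptions.

```python
def dls(node, goal, neighbors, limit):
    """Depth-limited search: depth-first search that never expands
    nodes at depth `limit`. Returns a path, or None on cut-off/failure."""
    if node == goal:
        return [node]
    if limit == 0:                         # cut-off: do not expand further
        return None
    for succ in neighbors.get(node, []):
        result = dls(succ, goal, neighbors, limit - 1)
        if result is not None:
            return [node] + result
    return None

# Hypothetical chain graph: the goal G sits at depth 3.
GRAPH = {"S": ["A"], "A": ["B"], "B": ["G"]}
```

With l = 3 the goal at depth 3 is found; with l = 2 the search is cut off below it and fails, illustrating the incompleteness noted above.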
Iterative deepening search
 Iteratively run depth-limited search.
 Gradually increase the depth cut-off.
 It finds the best depth limit.
 It does this by gradually increasing the limit, first 0, then 1, then 2, and so on, until a goal is found.
Iterative deepening search, l = 0
At l = 0, the start node is goal-tested but no nodes are expanded.
This is so that you can solve trick problems like, "Starting in Arad, go to Arad."

1st iteration: A
Iterative deepening search, l = 1
At l = 1, the start node is expanded. Its children are goal-tested, but not expanded. Recall that to expand a node means to generate its children.

2nd iteration: A, B, C
Iterative deepening search, l = 2
At l = 2, the start node and its children are expanded. Its grandchildren are goal-tested, but not expanded.

3rd iteration: A, B, D, E, C, F, G
Iterative deepening search, l = 3
At l = 3, the start node, its children, and its grandchildren are expanded. Its great-grandchildren are goal-tested, but not expanded.

4th iteration: A, B, D, H, I, E, J, K, C, F, L, M, G
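The iterations above amount to wrapping depth-limited search in a loop over increasing limits. An illustrative sketch; the names `dls`/`ids` and the example binary tree (matching the slides' A, B, C, ... labels at depths 0-2) are assumptions.

```python
def dls(node, goal, neighbors, limit):
    """Depth-limited search used as the inner loop of IDS."""
    if node == goal:
        return [node]
    if limit == 0:                         # cut-off reached
        return None
    for succ in neighbors.get(node, []):
        result = dls(succ, goal, neighbors, limit - 1)
        if result is not None:
            return [node] + result
    return None

def ids(start, goal, neighbors, max_depth=50):
    """Iterative deepening search: run DLS with limit 0, 1, 2, ...
    until a goal is found (or max_depth is exceeded)."""
    for limit in range(max_depth + 1):
        result = dls(start, goal, neighbors, limit)
        if result is not None:
            return result
    return None

# Hypothetical tree matching the slides' node labels at depths 0-2.
GRAPH = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
```

At limit 0 only A is goal-tested; at limit 1 the search reaches B and C; at limit 2 it finally reaches G, returning the path A-C-G.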
Bidirectional search
• Sometimes it is possible to reduce complexity by searching in two directions at once.
• It runs two simultaneous searches:
o Forward search - from the initial state
o Backward search - from the goal state
• Usually done with breadth-first searches.
• Before expanding a node, check if it is in the fringe of the other search.
• Can be useful, but:
o Bad space complexity (two searches in memory)
o How do we generate predecessors?
Bidirectional search
 The search stops when the two searches intersect each other, i.e. when the forward and backward searches meet.
o Each search only needs to go to half the solution depth.
o It can enormously reduce time complexity, but it is not always applicable.
Note that if a heuristic function is inaccurate, the two searches might miss one another.
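A sketch of the idea, assuming an undirected graph so that predecessors equal successors (sidestepping the predecessor-generation problem the slide raises). The names and example graph are illustrative, and this simple alternation scheme is a demonstration of the meeting-frontiers idea rather than a guaranteed-optimal implementation.

```python
from collections import deque

def bidirectional(start, goal, adj):
    """Run two breadth-first searches, forward from start and backward
    from goal, stopping when a newly generated node is known to both."""
    if start == goal:
        return [start]
    pf, pb = {start: None}, {goal: None}   # parent maps, each direction
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        for q, parents, other in ((qf, pf, pb), (qb, pb, pf)):
            node = q.popleft()
            for succ in adj.get(node, []):
                if succ not in parents:
                    parents[succ] = node
                    if succ in other:      # the two searches meet here
                        return _stitch(succ, pf, pb)
                    q.append(succ)
    return None

def _stitch(meet, pf, pb):
    """Join the two half-paths at the meeting node."""
    path = []
    n = meet
    while n is not None:                   # walk back to the start
        path.append(n)
        n = pf[n]
    path.reverse()                         # start ... meet
    n = pb[meet]
    while n is not None:                   # walk forward to the goal
        path.append(n)
        n = pb[n]
    return path

# Hypothetical undirected graph for illustration.
GRAPH = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
         "D": ["B", "C", "G"], "G": ["D"]}
```

Each frontier here only reaches depth 2 before they meet at D, illustrating the "half depth" saving.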
Exercise: Uniform-Cost Search
Assume that node 3 is the initial state and node 4 is the goal state.
Exercise: Uninformed Search Strategies
Assume that S is the start node and G is the goal node.
(Figure: a weighted tree rooted at S, with children A, B, C and deeper nodes D, E, G.)
BFS: S-A-B-C-D-E-G    DFS: S-A-D-E-G
UCS: S-B-G    DLS: S-A-D-E-G (l = 2; same as DFS because l = 2 equals the maximum depth of the tree).
Informed (heuristic) search strategies
Informed search is another technique that has additional information about the estimated distance from the current state to the goal.
It equips the AI with guidance regarding how and where it can find the problem's solution.
o Greedy search
o A* search
o Constraint satisfaction
o Iterative improvement
o etc.
Greedy Search
 A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment.
 It doesn't worry about whether the current best result will bring the overall optimal result.
 For example, let us see how this works for route-finding problems in Romania. What information can we use to estimate the actual road distance from a city to Bucharest?
 One possible answer is to use the straight-line distance (SLD) from each city to Bucharest. Table 1 shows a list of all these distances.
 Each city has a heuristic value hSLD(n).
Greedy Search...
Figure 1: The state space of the Romania problem.
Table 1 – Values of hSLD(n), the straight-line distance to Bucharest

City        hSLD(n)     City             hSLD(n)
Arad        366         Mehadia          241
Bucharest   0           Neamt            234
Craiova     160         Oradea           380
Drobeta     242         Pitesti          100
Eforie      161         Rimnicu Vilcea   193
Fagaras     176         Sibiu            253
Giurgiu     77          Timisoara        329
Hirsova     151         Urziceni         80
Iasi        226         Vaslui           199
Lugoj       244         Zerind           374
Greedy Search...
 Using this information (the lowest value of hSLD(n)), the greedy best-first search algorithm selects a node for expansion.
 Let us step through the greedy best-first algorithm when applied to the problem of finding a path from Arad to Bucharest:
Step 1:
 Fringe = [Arad]
 Lowest value of heuristic function: hSLD(Arad) = 366
 Action: expand Arad
Greedy Search...
Step 2:
 Fringe = [Sibiu, Timisoara, Zerind]
 Lowest value of heuristic function: hSLD(Sibiu) = 253
 Action: expand Sibiu
Step 3:
 Fringe = [Timisoara, Zerind, Arad, Fagaras, Oradea, Rimnicu Vilcea]
 Lowest value of heuristic function: hSLD(Fagaras) = 176
 Action: expand Fagaras
Step 4:
 Fringe = [Timisoara, Zerind, Arad, Oradea, Rimnicu Vilcea, Sibiu, Bucharest]
 Lowest value of heuristic function: hSLD(Bucharest) = 0
 Action: goal found at Bucharest!
 The greedy search finds a solution without ever expanding a node that is not on the solution path; hence, its search cost is minimal!
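The steps above can be sketched with a priority queue ordered by h(n) alone. This is an illustrative sketch, not the slides' code; the heuristic values come from Table 1, while the partial road map below is an assumption restricted to the roads the walkthrough touches.

```python
import heapq

# Straight-line distances to Bucharest (subset of Table 1).
H = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
     "Fagaras": 176, "Oradea": 380, "Rimnicu Vilcea": 193, "Bucharest": 0}

# Partial (hypothetical) adjacency restricted to the cities used above.
ROADS = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Arad", "Fagaras", "Oradea", "Rimnicu Vilcea"],
         "Fagaras": ["Sibiu", "Bucharest"]}

def greedy(start, goal):
    """Greedy best-first search: always expand the frontier node with
    the lowest heuristic value h(n), ignoring path cost so far."""
    frontier = [(H[start], [start])]       # priority queue keyed on h(n)
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for succ in ROADS.get(node, []):
            if succ not in visited:
                heapq.heappush(frontier, (H[succ], path + [succ]))
    return None
```

Running it reproduces the walkthrough: Arad (366), then Sibiu (253), then Fagaras (176), then the goal Bucharest (0).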
A* Search
 A* (pronounced "A star") search is similar to greedy best-first search, except that it also takes into account the actual path cost taken so far to reach each node:
f(n) = g(n) + h(n)
where g(n) = total actual path cost to get to node n, and
h(n) = estimated path cost to get from node n to the goal.
 Example: execution steps of A* search for reaching Bucharest from Arad.
Step 1:
 Fringe = [Arad]
 Lowest value of evaluation function: f(Arad) = 0 + 366 = 366
 Action: expand Arad
A* Search...
Step 2:
 Fringe = [Sibiu, Timisoara, Zerind]
 Lowest value of evaluation function: f(Sibiu) = 140 + 253 = 393
 Action: expand Sibiu
Step 3:
 Fringe = [Timisoara, Zerind, Arad, Fagaras, Oradea, Rimnicu Vilcea]
 Lowest value of evaluation function: f(Rimnicu Vilcea) = 220 + 193 = 413
 Action: expand Rimnicu Vilcea
Step 4:
 Fringe = [Timisoara, Zerind, Arad, Fagaras, Oradea, Craiova, Pitesti, Sibiu]
 Lowest value of evaluation function: f(Fagaras) = 239 + 176 = 415
 Action: expand Fagaras
Step 5:
 Fringe = [Timisoara, Zerind, Arad, Oradea, Craiova, Pitesti, Sibiu, Sibiu, Bucharest]
 Lowest value of evaluation function: f(Pitesti) = 317 + 100 = 417
 Action: expand Pitesti
Step 6:
 Fringe = [Timisoara, Zerind, Arad, Oradea, Craiova, Sibiu, Bucharest, Craiova, Rimnicu Vilcea]
 Lowest value of evaluation function: f(Bucharest) = 418 + 0 = 418
 Action: goal found at Bucharest
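The execution trace above can be sketched with a priority queue ordered by f(n) = g(n) + h(n). An illustrative sketch, not the slides' code; the road distances below are the standard Romania values consistent with the g-values in the trace (140, 220, 239, 317, 418).

```python
import heapq

# Straight-line distances to Bucharest (from Table 1).
H = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
     "Oradea": 380, "Fagaras": 176, "Rimnicu Vilcea": 193,
     "Pitesti": 100, "Craiova": 160, "Bucharest": 0}

# Road distances consistent with the g-values in the worked example.
ROADS = {"Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
         "Sibiu": {"Arad": 140, "Fagaras": 99, "Oradea": 151,
                   "Rimnicu Vilcea": 80},
         "Fagaras": {"Sibiu": 99, "Bucharest": 211},
         "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
         "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101,
                     "Craiova": 138}}

def astar(start, goal):
    """A* search: expand the frontier node with the lowest
    f(n) = g(n) + h(n), where g is the actual path cost so far."""
    frontier = [(H[start], 0, [start])]    # (f, g, path)
    best_g = {}
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return g, path
        if g >= best_g.get(node, float("inf")):
            continue                       # already reached more cheaply
        best_g[node] = g
        for succ, step in ROADS.get(node, {}).items():
            g2 = g + step
            heapq.heappush(frontier, (g2 + H[succ], g2, path + [succ]))
    return None
```

As in the trace, A* expands Rimnicu Vilcea (f = 413) before Fagaras (f = 415) and returns the cost-418 route through Pitesti, whereas greedy search had settled for the costlier route through Fagaras.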
Constraint Satisfaction Problems (CSPs)
A constraint satisfaction problem consists of three components, X, D, and C:
 X is a set of variables, {X1, ..., Xn}.
 D is a set of domains, {D1, ..., Dn}, one for each variable.
 C is a set of constraints that specify allowable combinations of values.
In constraint satisfaction problems (CSPs):
 The state is defined by variables Xi with values from domain Di.
 The goal test is a set of constraints specifying allowable combinations of values for subsets of variables.
Example: Map-Coloring (1)
As an example we will consider the map-colouring problem:
o No two adjacent regions have the same colour.

Figure: A map showing the regions of Australia.

 Variables: WA, NT, Q, NSW, V, SA, T
 Domains: Di = {red, green, blue}
 Constraints: adjacent regions must have different colors, e.g. WA ≠ NT, or (WA, NT) in {(red, green), (red, blue), (green, red), (green, blue), (blue, red), (blue, green)}
Example: Map-Coloring (2)
Here we introduce two pieces of important terminology:
 Complete: a complete state is one in which all variables have been assigned a value.
 Consistent: a consistent state is one that does not violate any of the specified constraints.
Solutions are complete and consistent assignments, e.g., WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = green.
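A complete, consistent assignment for this CSP can be found by simple backtracking search. This sketch is illustrative (backtracking itself is introduced here as the standard CSP technique, not taken from the slides); the adjacency data encodes the Australia map above.

```python
# Adjacency for the Australia map-colouring CSP (T has no neighbors).
NEIGHBORS = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW", "V"],
             "Q": ["NT", "SA", "NSW"], "NSW": ["Q", "SA", "V"],
             "V": ["SA", "NSW"], "T": []}
COLORS = ["red", "green", "blue"]

def backtrack(assignment=None):
    """Backtracking search: assign variables one at a time, checking
    consistency with the neighbors' colors as we go."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(NEIGHBORS):      # complete assignment
        return assignment
    var = next(v for v in NEIGHBORS if v not in assignment)
    for color in COLORS:
        # consistent: no already-assigned neighbor uses this color
        if all(assignment.get(n) != color for n in NEIGHBORS[var]):
            assignment[var] = color
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]                # undo and backtrack
    return None

solution = backtrack()
```

The returned assignment is complete (all seven variables assigned) and consistent (no constraint violated), which is exactly the definition of a CSP solution given above.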
Example: Map Coloring
 Color a map with three colors ({red, green, blue}) so that adjacent countries have different colors.
 Assume that initially region B is colored red and region C blue.
(Figure: a map with regions A, B, C, D, E, F, G; constraint: "no neighboring regions have the same color".)
Exercise: Coloring Problem
Use at most three colors (red, green and blue) to color the given map, map (A).
(Figures: Map A and its CSP constraint graph.)
82

Another way of visualising CSPs is by using a constraint graph.


o Constraint graph: nodes are variables, arcs are constraints.
 The nodes of the graph correspond to variables of the
problem, and a link connects any two variables that participate
is a constraint.
o Binary CSP: each constraint relates(links) only two variables.
Varieties of CSPs
• It is possible to categorise CSPs based on the type of variable they use and on the size of their domains:
1. Discrete variables (discrete CSP)
• In a discrete CSP, the variables take on discrete values.
• There are two types of discrete CSP:
o Finite domains: variables can take on only a limited number of discrete values.
 For example, the {red, green, blue} or {yes, no} domain.
o Infinite domains: some discrete variables can have infinite domains.
 For example, integers, natural numbers, strings, etc.
2. Continuous variables (continuous CSP)
In a continuous CSP, variables take values from a continuous range.
o For example, the real numbers are continuous.
Varieties of CSPs
• We can also categorise CSPs based on the type of their constraints.
• There are three types of constraint:
1. Unary constraints involve a single variable.
o A unary constraint restricts the value of a single variable,
 e.g., SA ≠ green, i.e. South Australians dislike the color green.
2. Binary constraints involve pairs of variables,
 e.g., SA ≠ WA (see the map-coloring example).
3. Higher-order constraints involve 3 or more variables.
o The more variables that are involved in a constraint, the harder the problem is to solve.
o Problems with higher-order constraints are the hardest class of problems.
The End!