AI-searching-Lecture 3
SOLVING PROBLEMS BY SEARCHING
Outline:
• A problem-solving agent and search
• Background
• Problem Solving Paradigm
• Graph Search as Tree Search
• Terminology
• Classes of Search
• Simple Search Algorithm
• Implementing The Search Strategies
• Testing for the Goal
• Depth-First Search
• Breadth-First Search
• Best-First Search
A problem-solving agent and search
• When the correct action to take is not immediately obvious, an agent may need to plan ahead:
• to consider a sequence of actions that forms a path to a goal state
• Search
• plays a key role in many parts of AI
• The links between nodes are referred to as edges (the more common term when talking about graphs)
Background
• Nodes
• A tree has a root node (where the tree "starts")
• Every node except the root has a single parent (aka direct ancestor)
• Each node (except the terminal (aka leaf) nodes) has one or more children (aka direct descendants)
Background
• A graph
• is a set of nodes connected by links but
• where loops are allowed and
• a node can have multiple parents
• A tree
• is made up of nodes and links (circles and lines) connected
• so that there are no loops (cycles)
• Every node except the root has a single parent (aka direct ancestor)
Background
• Examples of Graphs
• road networks or
• airline routes or
• computer networks
Problem Solving Paradigm
• The idea is to reduce the problem to be solved to one of searching a graph
• Specify what the states, the actions and the goal test are
• A state
• is supposed to be complete,
• that is, to represent all (and preferably only) the relevant aspects of the problem to be solved
• So, for example,
• when we are planning the cheapest round-the-world flight plan,
• we don't need to know the address of the airports;
• knowing the identity of the airport is enough
Problem Solving Paradigm
• Actions
• We are assuming that the actions are deterministic,
• that is, we know exactly the state after the action is performed
• We also assume that the actions are discrete,
• so we don't have to represent what happens while the action is happening
• For example,
• we assume that a flight gets us to the scheduled destination and
• that what happens during the flight does not matter
• A test for the goal
• we need a test that tells us whether a given state is a goal state
Graph Search as Tree Search
• Graphs may have cycles, but we can search a graph by unrolling it into a tree of non-looping paths starting at the start state S
• Note that such a tree has a leaf node for every non-looping path in the graph starting at S
Graph Search as Tree Search
• An example of converting from a graph to a tree
• Also note, however, that even though we avoided loops,
• some nodes (the colored ones) are duplicated in the tree, that is,
• they were reached along different non-looping paths
• This means that a complete search of this tree might do extra work
A state and a search node
• A state
• is an arrangement of the real world (or at least our model of it)
• A search node
• refers to a vertex of the search tree which is being generated by the search algorithm
• Note that a node also corresponds to a path from the start state to the state associated with the node
• This follows from the fact that the search algorithm is generating a tree
Classes of Search
• The uninformed, optimal algorithms
• These methods guarantee finding the "best" path
• (as measured by the sum of weights on the graph edges) but
• do not use any information beyond what is in the graph definition
• An example is the uniform cost search algorithm, sketched below
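As a concrete illustration, here is a minimal sketch of uniform cost search in Python. The interface, a weighted_neighbors(state) function yielding (next_state, edge_weight) pairs, is an assumption of this sketch, not something fixed by the lecture:

```python
import heapq

def uniform_cost_search(start, weighted_neighbors, is_goal):
    # Q holds (total path cost, partial path); heapq keeps the cheapest first
    q = [(0, [start])]
    expanded = set()
    while q:
        cost, path = heapq.heappop(q)
        state = path[-1]
        if state in expanded:          # skip stale, more expensive entries
            continue
        expanded.add(state)
        if is_goal(state):             # testing on expansion preserves optimality
            return cost, path
        for nxt, w in weighted_neighbors(state):
            if nxt not in expanded:
                heapq.heappush(q, (cost + w, path + [nxt]))
    return None                        # no path to a goal
```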
Simple Search Algorithm
• This is a common search algorithm
• The search strategies we will look at are all instances of it
• The basic idea
• is to keep a list (Q) of nodes (that is, partial paths),
• then to pick one such node from Q,
• see if it reaches the goal and otherwise
• extend that path to its neighbours and
• add them back to Q
• Note that
• we are keeping track of the states we have reached (visited) and
• not entering them in Q more than once
• This will certainly keep us from ever looping,
• since we can only ever reach a state once,
• no matter how the underlying graph is connected
• A minimal sketch of this loop in code is shown below
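A minimal sketch of the simple search algorithm, assuming states are hashable and that neighbors(state) returns a list of successor states (both assumptions of this sketch). The strategy flag anticipates the depth-first and breadth-first variants discussed next:

```python
def simple_search(start, neighbors, is_goal, strategy="dfs"):
    q = [[start]]                 # Q: a list of nodes, i.e. partial paths
    visited = {start}             # states already entered into Q
    while q:
        path = q.pop(0)           # pick the first node from Q
        state = path[-1]
        if is_goal(state):        # goal test when a path is pulled off Q
            return path
        # extend the path to its unvisited neighbours
        extensions = [path + [s] for s in neighbors(state) if s not in visited]
        visited.update(p[-1] for p in extensions)
        if strategy == "dfs":
            q = extensions + q    # add to the FRONT of Q: depth-first
        else:
            q = q + extensions    # add to the BACK of Q: breadth-first
    return None                   # Q is empty: no path reaches a goal
```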
Implementing The Search Strategies
• For example,
• depth-first search
• always looks at the deepest node in the search tree first
• We can get that behaviour by:
• picking the first element of Q as the node to test and extend
• adding the new (extended) paths to the FRONT of Q,
• so that the next path to be examined will be one of the extensions of the current path to one of the descendants of that node's state
• One good thing about depth-first search
• is that Q never gets very big
• The size of Q depends on the depth of the search tree and not on its breadth
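For instance, running the sketch above in depth-first mode on the example graph used in the walkthrough later in this lecture (the adjacency lists below are reconstructed from that walkthrough):

```python
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["D", "G"],
         "C": [], "D": ["C", "G"], "G": []}

path = simple_search("S", lambda s: graph[s], lambda s: s == "G", strategy="dfs")
print(path)   # ['S', 'A', 'D', 'G'], the path found in the walkthrough
```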
Implementing The Search Strategies
• For example,
• breadth-first search
• traverses the graph breadthwise as follows:
• first move horizontally and visit all the nodes of the current layer
• then move to the next layer
• The basic approach
• is to once again pick the first element of Q to examine BUT now
• we place the extended paths at the back of Q
• This means that the next path pulled off of Q will typically
• not be a descendant of the current one,
• but rather one at the same level in the tree
• Note that in breadth-first search,
• Q gets very big
• because we postpone looking at longer paths (that go to the next level)
• until we have finished looking at all the paths at one level
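Using the same sketch and example graph as above, but placing extensions at the back of Q:

```python
path = simple_search("S", lambda s: graph[s], lambda s: s == "G", strategy="bfs")
print(path)   # ['S', 'B', 'G'], a path with the minimum number of states
```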
Implementation Issues
• Testing for the Goal
• One subtle point
• is where in the algorithm one tests for success (that is, the goal test)
• There are two acceptable points:
• one is when a path is extended and it reaches a goal,
• testing on extension is correct and
• will save some work for any-path searches
• the other is when a path is pulled off of Q
• This is testing in step 3 of the algorithm
• it will generalize more readily to optimal searches
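The sketch above tests for the goal when a path is pulled off Q (step 3). A variant that tests on extension instead might look like this (same assumed interface as before):

```python
def simple_search_test_on_extension(start, neighbors, is_goal, strategy="dfs"):
    if is_goal(start):                     # handle the trivial case up front
        return [start]
    q, visited = [[start]], {start}
    while q:
        path = q.pop(0)
        extensions = []
        for s in neighbors(path[-1]):
            if s not in visited:
                new_path = path + [s]
                if is_goal(s):             # goal test on extension
                    return new_path        # saves work for any-path searches
                visited.add(s)
                extensions.append(new_path)
        q = extensions + q if strategy == "dfs" else q + extensions
    return None
```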
Visited vs Expanded States
• We say a state is visited
• when a path that reaches that state (that is, a node that refers to that state) gets added to Q
• So, if the state is anywhere in any node in Q, it has been visited
• A state M is expanded
• when a path to that state is pulled off of Q
• At that point, the descendants of M are visited and
• the paths to those descendants are added to Q
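In terms of the earlier sketch, the two events can be made visible by printing them as they happen (a purely illustrative variant, breadth-first here):

```python
def traced_search(start, neighbors, is_goal):
    q, visited = [[start]], {start}        # the start state is visited at initialization
    while q:
        path = q.pop(0)
        state = path[-1]
        print("expanded:", state)          # a state is EXPANDED when its path is pulled off Q
        if is_goal(state):
            return path
        for s in neighbors(state):
            if s not in visited:
                visited.add(s)
                print("visited:", s)       # a state is VISITED when its path is added to Q
                q.append(path + [s])
    return None
```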
Depth-First Search
• The table in the center
• shows the contents of Q and of the Visited list
• at each time through the loop of the search algorithm
• The nodes in Q
• are indicated by reversed paths,
• blue is used to indicate newly added nodes (paths)
• On the right
• is the graph we are searching and
• we will label the state of the node that is being extended at each step
Depth-First Search
• The first step is to initialize
• Q with a single node corresponding to the start state (S in this case) and
• the Visited list with the start state
Depth-First Search
• Next step:
• We pick the first element of Q,
• which is that initial node,
• remove it from Q,
• extend its path to its descendant states (if they have not been Visited) and
• add the resulting nodes to the front of Q
• We also add the states corresponding to these new nodes to the Visited list
• So, we get the situation on line 2
Depth-First Search
• Next step:
• We then pick the first node on Q, whose state is A, and
• repeat the process, extending to paths that end at C and D and
• placing them at the front of Q
Depth-First Search
• Next step:
• We pick the first node, whose state is C, and
• note that there are no descendants of C and
• so no new nodes to add
Depth-First Search
• Next step:
• We pick the first node of Q, whose state is D, and
• consider extending to states C and G,
• but C is on the Visited list
• so we do not add that extension
• We do add the path to G to the front of Q
Depth-First Search
• Next step:
• We pick the first node of Q, whose state is G,
• the intended goal state,
• so we stop and return the path
• The final path returned goes from S to A, then to D and then to G
Breadth-First Search
• We start, as before, with the initial node corresponding to S
• The difference from depth-first search
• is that new paths are added to the back of Q
Breadth-First Search
• Next step
• We pick that node and add paths to A and B, as before
Breadth-First Search
• Next step
• We pick the first node, whose state is A, and
• extend the path to C and D and
• add them to Q (at the back) and
• here we see the difference from depth-first
Breadth-First Search
• Next step
• Now, the first node in Q is the path to B
• so we pick that and consider its extensions to D and G
• Since D is already Visited,
• we ignore that and
• add the path to G to the end (back) of Q
Breadth-First Search
• Next step
• At this point, having generated a path to G, we would be justified in stopping
• But, as we mentioned earlier,
• we proceed until the path to the goal becomes the first path in Q
Breadth-First Search
• Next step
• We now pull out the node corresponding to C from Q but
• it does not generate any extensions since C has no descendants
Breadth-First Search
• Next step
• So we pull out the path to D
• Its potential extensions are to previously visited states and
• so we get nothing added to Q
Breadth-First Search
• Next step
• Finally, we get the path to G and we stop
Breadth-First Search
• Note that we found a path with fewer states than we did with depth-first search: from S to B to G
• In general, breadth-first search guarantees finding a path to the goal with the minimum number of states
Time and space worst-case complexity
• We describe the worst-case complexity of breadth-first and depth-first search in terms of:
• d, the distance of the goal from the start node (measured in number of edge traversals)
• b, the "branching factor" of the graph (the maximum number of successors of a node)
• BFS takes
• O(b^(d+1)) time
• O(b^(d+1)) memory
• DFS takes
• O(b^d) time
• O(b·d) memory
• For example, with b = 2 and d = 10, BFS may need on the order of 2^11 = 2048 nodes in memory, while DFS keeps only about 2·10 = 20 nodes at a time
Completeness
• A search method is described as being complete
• if it is guaranteed to find a goal state if one exists
Terminology
• Heuristic:
• generally refers to a "rule of thumb",
• something that's helpful but not guaranteed to work
• A heuristic function
• is a function that estimates the cost to reach the goal from a given state,
• helping to make informed decisions that optimize the search process
• Best-first search uses the value of a heuristic function defined on the states to guide the search
Implementing The Search Strategies
• Best-first search can be implemented as follows:
• pick the "best" path (as measured by the heuristic value of the node's state) from all of Q and
• add the extensions somewhere on Q
• So, at any step, we are always examining the pending node with the best heuristic value
• A sketch of this strategy in code is shown below
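A minimal best-first sketch, assuming a heuristic function h(state) that returns the estimated cost to the goal. It keeps Q as a priority queue instead of scanning all of Q for the best node; the behaviour is the same:

```python
import heapq

def best_first_search(start, neighbors, is_goal, h):
    q = [(h(start), [start])]          # Q holds (heuristic value, path) pairs
    visited = {start}
    while q:
        _, path = heapq.heappop(q)     # the pending node with the best value
        state = path[-1]
        if is_goal(state):
            return path
        for s in neighbors(state):
            if s not in visited:
                visited.add(s)
                heapq.heappush(q, (h(s), path + [s]))
    return None
```

On the walkthrough graph, with h(A) = 2 taken from the slides and the remaining values chosen here purely for illustration:

```python
h = {"S": 5, "A": 2, "B": 3, "C": 1, "D": 4, "G": 0}
path = best_first_search("S", lambda s: graph[s], lambda s: s == "G", h.__getitem__)
print(path)   # ['S', 'B', 'G']
```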
Implementation Issues: Finding the best node
• Note
• that best-first search requires finding the best node in Q,
• which can be done by scanning all of Q or, more efficiently, by keeping Q ordered as a priority queue
Best-First Search
• We look at the whole Q to find the best node (by heuristic value)
• We start with the Start node as before,
• but now we're showing the heuristic value of each path (which is the value of its state) in Q,
• so we can easily see which one to extract next
Best-First Search
• We pick the first node and extend to A and B
Best-First Search
• We pick the node corresponding to A,
• since it has the best value (= 2), and extend to C and D
Best-First Search
• The node corresponding to C has the lowest value,
• so we pick that one
• That goes nowhere, since C has no descendants
Best-First Search
• Then, we pick the node corresponding to B,
• which has a lower value than the path to D, and
• extend to G (not D, because of the previous Visit)
Best-First Search
• We pick the node corresponding to G and rejoice
Best-First Search
• We found the path to the goal:
• from S to B to G
Completeness
• If the state space is infinite, in general
• best-first search is not complete