AI-searching-Lecture 3
Uploaded by Kaleab Legese

Chapter 3
SOLVING PROBLEMS BY SEARCHING

Outline:
• A problem-solving agent and search
• Background
• Problem Solving Paradigm
• Graph Search as Tree Search
• Terminology
• Classes of Search
• Simple Search Algorithm
• Implementing The Search Strategies
• Testing for the Goal
• Depth-First Search
• Breadth-First Search
• Best-First Search
A problem-solving agent and search
• When the correct action to take is not immediately obvious, an agent may need to plan ahead:
• to consider a sequence of actions that form a path to a goal state
• An agent that looks ahead to find a sequence of actions that will eventually achieve its goal is called a problem-solving agent, and
• the computational process it undertakes is called search
Informed vs uninformed search algorithms
• We distinguish between informed algorithms,
• in which the agent can estimate how far it is from the goal, and
• uninformed algorithms, where no such estimate is available
• Search
• plays a key role in many parts of AI
• These algorithms provide the conceptual backbone of almost every approach to the systematic exploration of alternatives
Four classes of search algorithms
• uninformed (also known as blind) search
• is a search algorithm that explores a problem space
• without any specific knowledge or information about the problem
• other than the initial state and the possible actions to take
• It lacks domain-specific heuristics or prior knowledge about the problem
• informed (also known as heuristic) searches
• have access to task-specific information that can be
• used to make the search process more efficient
• any path searches
• will just settle for finding some solution
• optimal searches
• are looking for the best possible path
Background
• Tree vs Graphs
• A tree
• is made up of nodes and links (circles and lines) connected
• so that there are no loops (cycles)

• The search methods are defined on trees and graphs

• Nodes are sometimes referred to as vertices and

• links are referred to as edges (this is more common in talking about graphs)
Background
• Nodes
• A tree has a root node (where the tree "starts")

• Every node except the root has a single parent (aka direct ancestor)

• More generally, an ancestor node is a node that can be reached by


repeatedly going to a parent node

• Each node (except the terminal (aka leaf) nodes) has one or more children
(aka direct descendants)

• More generally, a descendant node is a node that can be reached by


repeatedly going to a child node
Background
• A graph
• is a set of nodes connected by links but
• where loops are allowed and
• a node can have multiple parents
• A tree
• is made up of nodes and links (circles and lines) connected
• so that there are no loops (cycles)
• Every node except the root has a single parent (aka direct ancestor)

• We have two kinds of graphs to deal with:


• directed graphs,
• where the links have direction (akin to one-way streets) and,
• undirected graphs where the links go both ways
• You can think of an undirected graph as shorthand for a graph with directed links going each
way between connected nodes
Background
• Examples of Graphs
• road networks or
• airline routes or

• computer networks

Problem Solving Paradigm
• Reduce the problem to be solved to one of searching a graph
• Specify the states, the actions and the goal test

• A state
• is supposed to be complete,
• that is, to represent all (and preferably only) the relevant aspects of the
problem to be solved
• So, for example,
• when we are planning the cheapest round-the world flight plan,
• we don't need to know the address of the airports;
• knowing the identity of the airport is enough
Problem Solving Paradigm
• Actions
• We are assuming that the actions are deterministic,
• that is, we know exactly the state after the action is performed
• We also assume that the actions are discrete,
• so we don't have to represent what happens while the action is happening
• For example,
• we assume that a flight gets us to the scheduled destination and
• that what happens during the flight does not matter
• A test for the goal
• we need a test for the goal;
• sometimes there is not just one specific goal state

• So, for example,
• we might be interested in any city in Germany rather than specifically, Frankfurt
Problem Solving Paradigm
• Examples (figure)
Graph Search as Tree Search
• Graphs have cycles

• Trees don't have cycles

• Cycles are bad for searching,


• since, obviously, you don't want to go round and round getting nowhere

• When asked to search a graph,


• we can construct an equivalent problem of searching a tree by doing
two things:
• turning undirected links into two directed links; and, more importantly,
• making sure we never consider a path with a loop or,
• even better, by never visiting the same node twice
Graph Search as Tree Search
• An example of converting from a graph to
a tree
• we assume that S is the start of our search and

• we are trying to find a path to G,

• then we can walk through the graph and

• make connections from every node to every connected node


that would not create a cycle and

• stop whenever we hit G

• Note that such a tree has a leaf node for every non-looping path
in the graph starting at S
Graph Search as Tree Search
• An example of converting from
a graph to a tree
• Also note, however, that even though we
avoided loops,
• some nodes (the colored ones) are duplicated
in the tree, that is,
• they were reached along different non-
looping paths
• This means that a complete search of this tree
might do extra work

A state and a search node
• State:
• is an arrangement of the real world (or at least our model of it)

• used to refer to the vertices of the underlying graph that is being


searched, that is,

• states in the problem domain,


• for example,
• a city,
• an arrangement of blocks or
• the arrangement of parts in a puzzle

• We assume that you can arrive at the same real-world state by


multiple routes, that is, by different sequences of actions
A state and a search node
• A search node, on the other hand,
• is a data structure in the search algorithm,

• which constructs an explicit tree of nodes while searching

• refers to the vertices of the search tree which is being generated by the search
algorithm

• Each node refers to some state, but not uniquely

• many nodes may refer to the same state

• Note that a node also corresponds to a path from the start state to the state
associated with the node

• This follows from the fact that the search algorithm is generating a tree

• So, if we return a node, we're returning a path


Classes of Search
• The uninformed, any-path algorithms
• These algorithms basically look at all the nodes in the search tree in a specific
order (independent of the goal) and
• stop when they find the first path to a goal state
• depth-first and breadth-first search algorithms

• The informed, any-path algorithms


• These algorithms exploit a task specific measure of goodness to try to either
• reach the goal more quickly or
• find a more desirable goal state
• Best First algorithm

Classes of Search
• The uninformed, optimal algorithms
• These methods guarantee finding the "best" path
• (as measured by the sum of weights on the graph edges) but
• do not use any information beyond what is in the graph definition
• Uniform cost search algorithm

• The informed, optimal algorithms


• also guarantee finding the best path but
• which exploit heuristic ("rule of thumb") information
• to find the path faster than the uninformed methods
• A* (A star) (uniform-cost search using an admissible heuristic)

Simple Search Algorithm
• This is a common search algorithm
• The search strategies we will look at are all instances of it
• The basic idea
• is to keep a list (Q) of nodes (that is, partial paths),
• then to pick one such node from Q,
• see if it reaches the goal and otherwise
• extend that path to its neighbours and
• add them back to Q
• Note that
• we are keeping track of the states we have reached (visited) and
• not entering them in Q more than once
• This will certainly keep us from ever looping,
• since we can only ever reach a state once,
• no matter how the underlying graph is connected
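The basic idea above can be sketched in Python. The graph at the bottom is a hypothetical example (not from the slides), and the `add_to_front` flag is one way to plug in the different strategies discussed in the next section:

```python
def simple_search(graph, start, goal, add_to_front):
    """Keep a list Q of nodes (partial paths); repeatedly pick the first one,
    test it against the goal, and otherwise extend it to its neighbours."""
    Q = [[start]]            # each element is a path, most recent state first
    visited = {start}        # states already reached; never entered in Q again
    while Q:
        path = Q.pop(0)      # pick one node from Q
        if path[0] == goal:  # goal test
            return list(reversed(path))
        # extend the path to all not-yet-visited neighbour states
        extensions = [[s] + path for s in graph.get(path[0], [])
                      if s not in visited]
        visited.update(p[0] for p in extensions)
        # where the extensions go back on Q determines the search strategy
        Q = extensions + Q if add_to_front else Q + extensions
    return None              # Q emptied without reaching the goal

# hypothetical directed graph for illustration
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['D', 'G'], 'D': ['C', 'G']}
dfs_path = simple_search(graph, 'S', 'G', add_to_front=True)   # depth-first
bfs_path = simple_search(graph, 'S', 'G', add_to_front=False)  # breadth-first
```

Because states are marked visited as soon as a path to them is added to Q, no state is ever reached twice, so the search cannot loop regardless of how the graph is connected.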
Implementing The Search Strategies
• For example,
• depth-first search
• always looks at the deepest node in the search tree first
• We can get that behaviour by:
• picking the first element of Q as the node to test and extend
• adding the new (extended) paths to the FRONT of Q,
• so that the next path to be examined will be one of the extensions of the
current path to one of the descendants of that node's state
• One good thing about depth-first search
• is that Q never gets very big
• The size of the Q depends on
• the depth of the search tree and not on its breadth
Implementing The Search Strategies
• For example,
• Breadth-first search
• traverses the graph breadthwise as follows:
• First move horizontally and
• visit all the nodes of the current layer
• Move to the next layer
• The basic approach
• is to once again pick the first element of Q to examine BUT now
• we place the extended paths at the back of Q
• This means that the next path pulled off of Q will typically
• not be a descendant of the current one,
• but rather one at the same level in tree
• Note that in breadth-first search,
• Q gets very big
• because we postpone looking at longer paths (that go to the next level)
• until we have finished looking at all the paths at one level
Implementation Issues
• Testing for the Goal
• One subtle point
• is where in the algorithm one tests for success (that is, the goal test)
• There are two acceptable points:
• one is when a path is extended and it reaches a goal,
• testing on extension is correct and
• will save some work for any-path searches
• the other is when a path is pulled off of Q
• This is testing in step 3 of the algorithm
• it will generalize more readily to optimal searches

Visited vs Expanded States
• We say a state is visited
• when a path that reaches that state (that is, a node that refers to that
state) gets added to Q
• So, if the state is anywhere in any node in Q, it has been visited

• A state M is Expanded
• when a path to that state is pulled off of Q
• At that point, the descendants of M are visited and
• the paths to those descendants added to the Q

Depth-First Search
• The table in the center
• shows the contents of
• Q and of the Visited list
• at each time through the loop of the
search algorithm
• The nodes in Q
• are indicated by reversed paths,
• blue
• is used to indicate newly added
nodes (paths)
• On the right
• is the graph we are searching and
• we will label
• the state of the node that is being
extended at each step
Depth-First Search
• The first step
• is to initialize
•Q
• with a single node corresponding to
the start state (S in this case) and
• the Visited list
• with the start state

Depth-First Search
• Next step:
• We pick the first element of Q,
• which is that initial node,
• remove it from Q,
• extend its path to its descendant
states (if they have not been
Visited) and
• add the resulting nodes to the
front of Q
• We also add the states
corresponding to these new nodes
to the Visited list
• So, we get the situation on line 2
Depth-First Search
• Next step:
• We then pick the first node on Q,
whose state is A, and
• repeat the process, extending to
paths that end at C and D and
• placing them at the front of Q

Depth-First Search
• Next step:
• We pick the first node, whose
state is C, and
• note that there are no descendants
of C and
• so no new nodes to add

Depth-First Search
• Next step:
• We pick the first node of Q,
whose state is D, and
• consider extending to states C
and G,
• but C is on the Visited list
• so we do not add that extension
• We do add the path to G to the
front of Q

37
Depth-First Search
• Next step:
• We pick the first node of Q,
whose state is G,
• the intended goal state,
• so we stop and return the path
• The final path returned goes
from S to A, then to D and then
to G

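The depth-first walkthrough above can be reproduced in code. The graph's edges (S→A,B; A→C,D; B→D,G; D→C,G) are inferred from the steps of the walkthrough, since the figure itself is not reproduced here, so treat them as an assumption:

```python
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['D', 'G'], 'D': ['C', 'G']}

def depth_first(graph, start, goal):
    Q = [[start]]              # paths, most recent state first
    visited = {start}
    while Q:
        path = Q.pop(0)        # pick the first node on Q
        if path[0] == goal:
            return list(reversed(path))
        extensions = [[s] + path for s in graph.get(path[0], [])
                      if s not in visited]
        visited.update(p[0] for p in extensions)
        Q = extensions + Q     # new paths go on the FRONT of Q
    return None

print(depth_first(graph, 'S', 'G'))   # expected: ['S', 'A', 'D', 'G']
```

Running this reproduces the steps of the table: S is expanded to A and B, A to C and D, C goes nowhere, D adds G, and the returned path is S, A, D, G.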
Breadth-First Search
• We start as with the initial node
corresponding to S
• The difference from depth-first
search
• is that new paths are added to the
back of Q

Breadth-First Search
• Next step
• We pick it and add paths to A and B,
as before

Breadth-First Search
• Next step
• We pick the first node, whose state is
A, and
• extend the path to C and D and
• add them to Q (at the back) and
• here we see the difference from depth-
first

Breadth-First Search
• Next step
• Now, the first node in Q is the path to
B
• so we pick that and consider its
extensions to D and G
• Since D is already Visited,
• we ignore that and
• add the path to G to the end (back)
of Q

Breadth-First Search
• Next step
• At this point, having generated a
path to G, we would be justified in
stopping
• But, as we mentioned earlier,
• we proceed until the path to the goal
becomes the first path in Q

Breadth-First Search
• Next step
• We now pull out the node
corresponding to C from Q but
• it does not generate any extensions
since C has no descendants

Breadth-First Search
• Next step
• So we pull out the path to D
• Its potential extensions are to
previously visited states and
• so we get nothing added to Q

Breadth-First Search
• Next step
• Finally, we get the path to G and we
stop

Breadth-First Search
• Note that we found a path with
fewer states than we did with
depth-first search, from S to B to
G
• In general, breadth-first search
guarantees finding a path to the
goal with the minimum number
of states

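The breadth-first walkthrough can be reproduced the same way, again assuming the example graph inferred from the steps (S→A,B; A→C,D; B→D,G; D→C,G); the only change from depth-first is that extensions go on the back of Q:

```python
from collections import deque

graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['D', 'G'], 'D': ['C', 'G']}

def breadth_first(graph, start, goal):
    Q = deque([[start]])           # paths, most recent state first
    visited = {start}
    while Q:
        path = Q.popleft()         # pick the first node on Q
        if path[0] == goal:
            return list(reversed(path))
        for s in graph.get(path[0], []):
            if s not in visited:
                visited.add(s)
                Q.append([s] + path)   # new paths go on the BACK of Q
    return None

print(breadth_first(graph, 'S', 'G'))  # expected: ['S', 'B', 'G']
```

As in the walkthrough, the path to G is generated when B is expanded but is only returned once it reaches the front of Q, and the result (S, B, G) has fewer states than the depth-first answer.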
Time and space worst-case complexity
• To describe the worst-case complexity of breadth-first and depth-first search, we use:
• d - the distance of the goal from the start node (measured in number of edge traversals)
• b - the "branching factor" of the graph
• BFS takes
• O(b^(d+1)) time
• O(b^(d+1)) memory
• DFS takes
• O(b^d) time
• O(b·d) memory
Completeness
• A search method is described as being complete
• if it is guaranteed to find a goal state if one exists

• Breadth-first search is complete:
• breadth-first search will eventually find the goal state
• Depth-first search is not complete:
• depth-first search may get lost in parts of the graph that have no goal state and never return

Terminology
• Heuristic:
• generally refers to a "rule of thumb",
• something that's helpful but not guaranteed to work

• A heuristic function
• Is a function that estimates the cost to reach the goal from a given state,
• helping to make informed decisions that optimize the search process

• Estimated distance to a goal:


• If we can get some estimate of the "distance" to a goal from the current node and
• we introduce a preference for nodes closer to the goal, then
• there is a good chance that the search will terminate more quickly
• This type of heuristic function depends on the state and the goal
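A heuristic function of this kind can be as simple as straight-line distance to the goal; the state coordinates below are made up for illustration:

```python
import math

# hypothetical (x, y) coordinates of each state, for illustration only
coords = {'S': (0, 0), 'A': (2, 1), 'B': (1, 2), 'G': (4, 4)}

def h(state, goal='G'):
    """Estimated distance to the goal: straight-line (Euclidean) distance.
    It depends on the state and the goal, not on the path taken so far."""
    (x1, y1), (x2, y2) = coords[state], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)

# states closer to the goal get smaller (better) heuristic values
assert h('B') < h('S')
```

Such an estimate is helpful but not guaranteed to be exact, which is exactly the "rule of thumb" character described above.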
Implementing The Search Strategies
• Best-first (also known as "greedy") search
• is a heuristic (informed) search that

• uses the value of a heuristic function defined on the states to guide the search

• This will not guarantee finding a "best" path,


• for example, the shortest path to a goal

• The heuristic is used in the hope that it will steer us


• to a quick completion of the search or
• to a relatively good goal state

Implementing The Search Strategies
• Best-first search can be implemented as follows:
• pick the "best" path (as measured by heuristic value of the node's state) from all of Q and
• add the extensions somewhere on Q
• So, at any step, we are always examining the pending node with the best heuristic value

• Note that, in the worst case,


• this search will examine all the same paths that depth or breadth first would examine,
• but the order of examination may be different and
• therefore the resulting path will generally be different
• Best-first has a kind of breadth-first flavor and
• we expect that Q will tend to grow more than in depth-first search

Implementation Issues: Finding the best node
• Note
• that best-first search requires finding the best node in Q

• One simple method


• is simply to scan the Q completely, keeping track of the best element found
• Surprisingly, this simple strategy turns out to be the right thing to do in some
circumstances

• A more sophisticated strategy,


• such as keeping a data structure called a "priority queue",
• is more often the correct approach

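The two strategies for finding the best node can be sketched side by side; the (heuristic value, state) pairs below are hypothetical:

```python
import heapq

# hypothetical (heuristic value, state) pairs pending on Q
pending = [(3, 'B'), (2, 'A'), (5, 'D'), (1, 'C')]

# Strategy 1: scan all of Q, keeping track of the best element found
best = min(pending, key=lambda node: node[0])

# Strategy 2: keep Q as a priority queue (here a binary heap),
# so the best element can be removed in O(log n) time
heap = list(pending)
heapq.heapify(heap)
best_from_heap = heapq.heappop(heap)

# both strategies find the same best node
assert best == best_from_heap == (1, 'C')
```

The linear scan is O(n) per step but trivially simple, which is why it is sometimes the right choice for small Q; the heap pays off when Q grows large.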
Best-First Search
• We look at the whole Q to find the best node (by heuristic value)
• We start with the Start node as before,
• but now we're showing the heuristic value of each path
• (which is the heuristic value of the node's state) in the Q,
• so we can easily see which one to extract next
Best-First Search
• We pick the first node and extend to
A and B

Best-First Search
• We pick the node corresponding to A,
• since it has the best value (= 2) and
extend to C and D

Best-First Search
• The node corresponding to C has the
lowest value
• so we pick that one
• That goes nowhere

Best-First Search
• Then, we pick the node
corresponding to B
• which has lower value than the path
to D and
• extend to G (not D because of
previous Visit)

Best-First Search
• We pick the node corresponding to G
and rejoice

Best-First Search
• We found the path to the goal
• from S to B to G

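The best-first walkthrough can be reproduced with the same example graph. The slides only state that h(A) = 2 and the relative ordering h(C) < h(B) < h(D), so the concrete heuristic values below are assumptions chosen to match the choices made in the walkthrough:

```python
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['D', 'G'], 'D': ['C', 'G']}
h = {'S': 4, 'A': 2, 'B': 3, 'C': 1, 'D': 5, 'G': 0}   # assumed values

def best_first(graph, h, start, goal):
    Q = [[start]]                  # paths, most recent state first
    visited = {start}
    while Q:
        # scan the whole Q for the path whose state has the best value
        path = min(Q, key=lambda p: h[p[0]])
        Q.remove(path)
        if path[0] == goal:
            return list(reversed(path))
        for s in graph.get(path[0], []):
            if s not in visited:
                visited.add(s)
                Q.append([s] + path)
    return None

print(best_first(graph, h, 'S', 'G'))  # expected: ['S', 'B', 'G']
```

The trace matches the slides: A (value 2) is expanded before B, C (value 1) is tried and goes nowhere, B is expanded before D, and the path S, B, G is returned.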
Completeness
• If the state space is infinite, in general
• the Best First search is not complete

• If the state space is finite and


• we do not discard nodes that revisit states, in general
• the Best First search is not complete

• If the state space is finite and


• we discard nodes that revisit states,
• the Best First search is complete, but in general
• is not optimal
Uniform Cost (UC) Search

