AI Unit 2

The document summarizes several uninformed and informed search strategies used in artificial intelligence problem solving. It describes breadth-first search, uniform-cost search, depth-first search, depth-limited search, iterative deepening search, greedy best-first search, and A* search. Breadth-first search expands the shallowest nodes first using a FIFO queue, while uniform-cost search uses a priority queue ordered by path cost g(n). Greedy best-first search uses only the heuristic function h(n) to order nodes, while A* search combines h(n) and g(n) to minimize total estimated solution cost.

UNINFORMED SEARCH STRATEGIES:

• These strategies have no additional information about
states beyond that provided in the problem definition.
• They can only generate successors and distinguish a goal
state from a non-goal state.
• All search strategies are distinguished by the order in
which nodes are expanded.
• Breadth-first search, Uniform-cost search, Depth-first
search, Depth-limited search, Iterative deepening
search
Breadth-first search:
• A simple strategy in which the root node is expanded
first, then all the successors of the root node are
expanded next, then their successors, and so on
• The shallowest unexpanded node is chosen for
expansion
• Uses a FIFO queue for the frontier
• The new nodes (which are always deeper than their
parents) go to the back of the queue, and old nodes,
which are shallower than the new nodes, get expanded
first.
• The goal test is applied to each node when it is
generated rather than when it is selected for
expansion.
• Discards any new path to a state already in the frontier
or explored set

Dr. A. H. Shanthakumara 1 Dept. of CSE


Implementation:
• fringe is a FIFO queue, i.e., new successors go at end
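The implementation above can be sketched in Python. The graph, state names, and goal below are illustrative assumptions, not part of the notes:

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """BFS: FIFO frontier, goal test applied when a node is generated."""
    if start == goal:
        return [start]
    frontier = deque([start])          # FIFO queue: new nodes go to the back
    parents = {start: None}            # doubles as the explored/frontier set
    while frontier:
        node = frontier.popleft()      # shallowest unexpanded node
        for child in successors(node):
            if child not in parents:   # discard repeated states
                parents[child] = node
                if child == goal:      # goal test at generation time
                    path = [child]
                    while parents[path[-1]] is not None:
                        path.append(parents[path[-1]])
                    return path[::-1]
                frontier.append(child)
    return None

# Hypothetical graph used only for illustration
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': [], 'E': []}
print(breadth_first_search('A', 'E', lambda n: graph[n]))  # ['A', 'C', 'E']
```

Because the goal test runs at generation rather than at expansion, the search stops as soon as the goal is first generated.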



Properties of breadth-first search:
Complete??
• Yes, if the shallowest goal node is at some finite depth
d, breadth-first search will eventually find it after
generating all shallower nodes
Time??
• Imagine searching a uniform tree where every state has
b successors.
• The root of the search tree generates b nodes at the
first level, each of which generates b more nodes
• Now suppose that the solution is at depth d, the total
number of nodes generated is
• 1 + b + b^2 + b^3 + ⋯ + b^d = O(b^d), i.e., exponential in d
Space??
• stores every expanded node in the explored set, the
space complexity is always within a factor of b of the
time complexity
• O(b^d) (keeps every node in memory)
Optimal??
• optimal if the path cost is a nondecreasing function of
the depth of the node. The most common such scenario
is that all actions have the same cost
• Space is the big problem; can easily generate nodes at
100MB/sec
• so 24hrs = 8640GB.



Uniform-cost search:
• When all step costs are equal, breadth-first search is
optimal
• Instead of expanding the shallowest node, uniform-cost
search expands the node n with the lowest path cost
g(n). This is done by storing the frontier as a priority
queue ordered by g.

• Two other significant differences from breadth-first
search:
o The goal test is applied to a node when it is selected
for expansion rather than when it is first generated
o A test is added in case a better path is found to a
node currently on the frontier.
• Uniform-cost search does not care about the number of
steps a path has, but only about their total cost
• Therefore, it will get stuck in an infinite loop if there is a
path with an infinite sequence of zero-cost actions
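A minimal sketch of uniform-cost search, with the frontier stored as a priority queue ordered by g; the weighted graph below is a hypothetical example:

```python
import heapq

def uniform_cost_search(start, goal, neighbors):
    """UCS: priority queue ordered by path cost g(n); goal test at expansion."""
    frontier = [(0, start, [start])]           # entries are (g, state, path)
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:                      # goal test when selected for expansion
            return g, path
        if g > best_g.get(state, float('inf')):
            continue                           # stale entry; a better path was found
        for nxt, cost in neighbors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g             # better path to a frontier node
                heapq.heappush(frontier, (new_g, nxt, path + [nxt]))
    return None

# Hypothetical weighted graph for illustration
edges = {'S': [('A', 1), ('B', 5)], 'A': [('B', 1)], 'B': [('G', 1)], 'G': []}
print(uniform_cost_search('S', 'G', lambda s: edges[s]))  # (3, ['S', 'A', 'B', 'G'])
```

Note that the cheaper three-step path S-A-B-G beats the two-step path through the direct S-B edge: UCS cares about total cost, not the number of steps.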



Properties of Uniform-cost search:
Complete??
• Completeness is guaranteed provided the cost of every
step exceeds some small positive constant ϵ
Time and space??
• Uniform-cost search is guided by path costs rather than
depths, so its complexity is not easily characterized in
terms of b and d. Instead, let C∗ be the cost of the
optimal solution
• With the assumption that every action costs at least ϵ, the
worst-case time and space complexity is O(b^(1+⌊C*/ϵ⌋)),
which can be much greater than b^d.
Optimal??
• Yes, nodes expanded in increasing order of g(n)
• When all step costs are the same, uniform-cost search
is similar to breadth-first search
Next class: DFS, depth-limited search, iterative deepening depth-first search, bidirectional
search, comparing uninformed search strategies



Depth-first search:
• Always expands the deepest node in the current
frontier of the search tree.
• The search proceeds immediately to the deepest level
of the search tree, where the nodes have no successors
• Uses a LIFO Stack- the most recently generated
(deepest unexpanded) node is chosen for expansion
Implementation:
• A recursive function that calls itself on each of its
children in turn.
• fringe = LIFO Stack, i.e., put successors at front
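The recursive implementation described above might look like the following sketch; the graph and goal are illustrative assumptions:

```python
def depth_first_search(node, goal, successors, path=None):
    """Recursive DFS: expands the deepest node first, avoiding cycles on the path."""
    path = (path or []) + [node]
    if node == goal:
        return path
    for child in successors(node):
        if child not in path:              # avoid repeated states along the path
            result = depth_first_search(child, goal, successors, path)
            if result is not None:
                return result
    return None

# Hypothetical graph for illustration
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': [], 'F': []}
print(depth_first_search('A', 'F', lambda n: graph[n]))  # ['A', 'C', 'F']
```

The recursion stack plays the role of the LIFO fringe: the most recently generated node is always expanded next.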



Properties of Depth-first search:
Complete??
• No: it fails in infinite-depth spaces and in spaces with loops
• Modified to avoid repeated states along the path, it is
complete in finite state spaces
Time??
• It is bounded by the size of the state space



• O(b^m): where m is the maximum depth of any node; this
is terrible if m is much larger than d
• but if solutions are dense, may be much faster than
breadth-first
Space??
• O(bm), i.e., linear space!
Optimal??
• Depth first search will explore the entire left subtree
even if node C is a goal node, which would be a better
solution; hence, depth-first search is not optimal
Depth-limited search:
• It is a depth-first search with a predetermined depth
limit l that solves the infinite-path problem
• Nodes at depth l are treated as if they have no
successors.



• Unfortunately,
o It introduces an additional source of
incompleteness if we choose l < d, that is, the
shallowest goal is beyond the depth limit.
o It is nonoptimal if we choose l > d. Its time
complexity is O(b^l) and its space complexity is
O(bl).
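A sketch of depth-limited search, returning a distinct 'cutoff' result so a caller can tell "limit reached" apart from "no solution exists"; the tree is a hypothetical example:

```python
def depth_limited_search(node, goal, successors, limit):
    """DFS with depth limit l; returns 'cutoff' if the limit was reached."""
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'                    # node treated as if it had no successors
    cutoff_occurred = False
    for child in successors(node):
        result = depth_limited_search(child, goal, successors, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return 'cutoff' if cutoff_occurred else None

# Hypothetical tree for illustration
tree = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
print(depth_limited_search('A', 'D', lambda n: tree[n], limit=1))  # 'cutoff' (l < d)
print(depth_limited_search('A', 'D', lambda n: tree[n], limit=2))  # ['A', 'B', 'D']
```

With l = 1 the shallowest goal (at depth 2) lies beyond the limit, illustrating the incompleteness described above.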
Iterative deepening depth-first search:
• A general strategy, used with depth-first tree search,
that finds the best depth limit.
• It does this by gradually increasing the limit—first 0,
then 1, then 2, and so on—until a goal is found.
• Iterative deepening combines the benefits of depth-
first and breadth-first search.

Four iterations of iterative deepening search on a binary
tree:



Properties of iterative deepening search:
• Complete?? It is complete when the branching factor is
finite and optimal when the path cost is a
nondecreasing function of the depth of the node
• Time?? (d + 1)b^0 + d·b^1 + (d − 1)b^2 + ⋯ + b^d = O(b^d)
• Space?? O(bd)
• Optimal?? Yes, if step cost = 1
• Can be modified to explore uniform-cost tree
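Iterative deepening can be sketched as repeated depth-limited searches with growing limits; the tree below is a hypothetical example:

```python
def iterative_deepening_search(start, goal, successors):
    """IDS: repeated depth-limited DFS with limits 0, 1, 2, ..."""
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return 'cutoff'
        cutoff = False
        for child in successors(node):
            result = dls(child, limit - 1)
            if result == 'cutoff':
                cutoff = True
            elif result is not None:
                return [node] + result
        return 'cutoff' if cutoff else None

    limit = 0
    while True:
        result = dls(start, limit)
        if result != 'cutoff':
            return result      # a solution, or None if the finite tree is exhausted
        limit += 1

# Hypothetical tree for illustration
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': [], 'F': []}
print(iterative_deepening_search('A', 'F', lambda n: tree[n]))  # ['A', 'C', 'F']
```

The shallow levels are regenerated on every iteration, but since most nodes of a tree live at the deepest level this overhead stays within a constant factor.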



Bidirectional search:
• The idea behind bidirectional search is to run two
simultaneous searches—one forward from the initial
state and the other backward from the goal—hoping
that the two searches meet in the middle

• Replace the goal test with a check to see whether the
frontiers of the two searches intersect; if they do, a
solution has been found.
Comparing uninformed search strategies:

Next class: Informed search strategies: Greedy best-first search, A* search,



INFORMED (HEURISTIC) SEARCH STRATEGIES:
• Uses problem-specific knowledge beyond the definition
of the problem itself
• Can find solutions more efficiently than an uninformed
strategy
• Uses an evaluation function f with a heuristic component
h(n), instead of g (as in uniform-cost search), to order
the priority queue:
• h(n) = estimated cost of the cheapest path from the
state at node n to a goal state.
• Heuristics are arbitrary, nonnegative, problem-specific
functions, with one constraint: if n is a goal node, then h(n) = 0.
Greedy best-first search:
• Tries to expand the node that is closest to the goal, on
the grounds that this is likely to lead to a solution
quickly.
• It evaluates nodes by using just the heuristic function;
that is, f(n) = h(n).
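A minimal sketch of greedy best-first search, assuming a toy graph and heuristic table (both hypothetical, not the Romania or Tumkur data):

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Greedy best-first search: always expands the node with the smallest h(n)."""
    frontier = [(h(start), start, [start])]    # priority queue ordered by h only
    explored = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt, _cost in neighbors(state):    # step costs are ignored entirely
            if nxt not in explored:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# Hypothetical graph and heuristic for illustration
edges = {'S': [('A', 2), ('B', 2)], 'A': [('G', 3)], 'B': [('G', 4)], 'G': []}
h = {'S': 4, 'A': 3, 'B': 2, 'G': 0}.get
print(greedy_best_first('S', 'G', lambda s: edges[s], h))  # ['S', 'B', 'G']
```

In this toy instance greedy search follows the smaller heuristic through B (total cost 6) even though the path through A costs only 5, illustrating why it is not optimal.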
Example: route-finding problems in Romania (to find a
path from Arad to Bucharest)
• We use the straight line distance heuristic, which we
will call hSLD
• We need to know the straight-line distances to
Bucharest



• The first node to be expanded from Arad will be Sibiu
because it is closer to Bucharest than either Zerind or
Timisoara.
• The next node to be expanded will be Fagaras because
it is closest.
• Fagaras in turn generates Bucharest, which is the goal.
• Finds a solution without ever expanding a node that is
not on the solution path; hence, its search cost is
minimal
• But not optimal, the path via Sibiu and Fagaras to
Bucharest is 32 kilometers longer than the path through
Rimnicu Vilcea and Pitesti.
• It is also incomplete even in a finite state space (the
problem of getting from Iasi to Fagaras)



• The worst-case time and space complexity for the tree
version is O(b^m), where m is the maximum depth of the
search space.
Find the route from Tumkur to Shivamogga using Greedy
best first search:
Node hSLD Node hSLD
Tumkur 220 Hosadurga 105
KB Cross 155 Hiriyur 145
Sira 185 Arasikere 110
Tiptur 135 Holalkere 75
Chitradurga 105 Shivamogga 0

A* search: Minimizing the total estimated solution cost:
• It evaluates nodes by combining g(n), the cost to reach
the node, and h(n), the cost to get from the node to the
goal:



• f(n) = g(n) + h(n).
• Since g(n) gives the path cost from the start node to
node n, and h(n) is the estimated cost of the cheapest
path from n to the goal, we have
• f(n) = estimated cost of the cheapest solution through
n.
• The algorithm is identical to UNIFORM-COST-SEARCH
except that A∗ uses g + h instead of g.
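That identity can be shown directly: a uniform-cost-style loop where the priority is f = g + h instead of g. The toy graph and (admissible) heuristic below are illustrative assumptions:

```python
import heapq

def a_star_search(start, goal, neighbors, h):
    """A*: identical to uniform-cost search but ordered by f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]     # entries are (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:                          # goal test at expansion
            return g, path
        if g > best_g.get(state, float('inf')):
            continue                               # stale entry
        for nxt, cost in neighbors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None

# Hypothetical graph and admissible heuristic for illustration
edges = {'S': [('A', 2), ('B', 2)], 'A': [('G', 3)], 'B': [('G', 4)], 'G': []}
h = {'S': 4, 'A': 3, 'B': 2, 'G': 0}.get
print(a_star_search('S', 'G', lambda s: edges[s], h))  # (5, ['S', 'A', 'G'])
```

Because h never overestimates the true cost to the goal here, A* returns the cheapest path (cost 5 through A), where a purely heuristic search could be misled.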



• The tree-search version of A∗ is optimal if h(n) is
admissible, while the graph-search version is optimal if
h(n) is consistent.
• That A∗ search is complete, optimal, and optimally
efficient among all such algorithms is rather satisfying.
• Unfortunately, it does not mean that A∗ is the answer
to all our searching needs.
Find the route from Tumkur to Shivamogga using A∗
search:
Node hSLD Node hSLD
Tumkur 220 Hosadurga 105
KB Cross 155 Hiriyur 145
Sira 185 Arasikere 110
Tiptur 135 Holalkere 75
Chitradurga 105 Shivamogga 0



Next class: Memory-bounded heuristic search, learning to search better



Memory-bounded heuristic search:
Iterative-deepening A* (IDA*):
• Reduce the memory requirements for A∗ by adapting
the idea of iterative deepening to the heuristic search
context
• The cutoff used is the f-cost (g+h) rather than the depth
• The cutoff value is the smallest f-cost of any node that
exceeded the cutoff on the previous iteration
• It suffers from difficulties with real-valued costs
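The bullets above can be sketched as follows: a depth-first search cut off by f-cost, with the next cutoff taken as the smallest f that exceeded the previous one. The graph and heuristic are hypothetical:

```python
def ida_star(start, goal, neighbors, h):
    """IDA*: depth-first contour search; the cutoff is the f-cost, not the depth."""
    def search(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                  # smallest f that exceeded the cutoff
        if node == goal:
            return path
        minimum = float('inf')
        for nxt, cost in neighbors(node):
            if nxt not in path:       # avoid cycles on the current path
                result = search(path + [nxt], g + cost, bound)
                if isinstance(result, list):
                    return result
                minimum = min(minimum, result)
        return minimum

    bound = h(start)
    while True:
        result = search([start], 0, bound)
        if isinstance(result, list):
            return result
        if result == float('inf'):
            return None               # no solution
        bound = result                # next cutoff: smallest f over the old one

# Hypothetical graph and heuristic for illustration
edges = {'S': [('A', 2), ('B', 2)], 'A': [('G', 3)], 'B': [('G', 4)], 'G': []}
h = {'S': 4, 'A': 3, 'B': 2, 'G': 0}.get
print(ida_star('S', 'G', lambda s: edges[s], h))  # ['S', 'A', 'G']
```

With real-valued costs every node can have a distinct f-value, so each iteration may raise the bound just enough to admit a single new node, which is the difficulty mentioned above.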
Recursive best-first search (RBFS):
• Simple recursive algorithm that attempts to mimic the
operation of standard best-first search
• Its structure is similar to that of a recursive depth-first
search, but rather than continuing indefinitely down
the current path
• It uses the f_limit variable to keep track of the f-value of
the best alternative path available from any ancestor of
the current node.




• RBFS is somewhat more efficient than IDA∗, but still
suffers from excessive node regeneration.
• Each change of mind could require many re-expansions of
forgotten nodes to recreate the best path and extend it
one more node
• RBFS is an optimal algorithm if the heuristic function
h(n) is admissible.
• Space complexity is linear in the depth of the deepest
optimal solution



• Time complexity is rather difficult to characterize: it
depends both on the accuracy of the heuristic function
and on how often the best path changes as nodes are
expanded.
Learning to search better:
• Could an agent learn how to search better?
• The answer is yes, and the method rests on an
important concept called the metalevel state space.
• Each state in a metalevel state space captures the
internal (computational) state of a program that is
searching in an object-level state space
• Each action in the metalevel state space is a
computation step that alters the internal state
• Metalevel learning algorithm can learn from the
experiences to avoid exploring unpromising subtrees
• The goal of learning is to minimize the total cost of
problem solving, trading off computational expense and
path cost



CONSTRAINT SATISFACTION PROBLEMS:
• So far:
o Problems can be solved by searching in a space of
states.
o These states can be evaluated by domain-specific
heuristics and tested to see whether they are goal
states
o Each state is atomic, or indivisible—a black box
with no internal structure.
• Now we describe:
o a way to solve a wide variety of problems more
efficiently
o We use a factored representation for each state: a
set of variables
o A problem is solved when each variable has a value
that satisfies all the constraints on the variable.
o A problem described this way is called a constraint
satisfaction problem, or CSP.
DEFINING CONSTRAINT SATISFACTION PROBLEMS:
• A constraint satisfaction problem consists of three
components, X, D, and C:
• X is a set of variables, {X1, . . . ,Xn}.
• D is a set of domains, {D1, . . . ,Dn}, one for each
variable.
• C is a set of constraints that specify allowable
combinations of values.
• state is defined by variables Xi with values from domain
Di



• goal test is a set of constraints specifying allowable
combinations of values for subsets of variables
• For example, if X1 and X2 both have the domain {A,B},
then the constraint saying the two variables must have
different values can be written as <(X1,X2), [(A,B),
(B,A)]> or as <(X1,X2), X1 != X2>.
Example problem: Map coloring:

• Variables: {WA, NT, Q, NSW, V , SA, T}


• Domains: Di = {red, green, blue}
• Constraints: adjacent regions must have different colors
• e.g., WA ≠ NT (if the language allows this), or
• (WA, NT) ∈ {(red, green), (red, blue), (green, red), (green,
blue), …}
• Constraint graph: The nodes of the graph correspond
to variables of the problem, and a link connects any two
variables that participate in a constraint.



• Solutions are assignments satisfying all constraints,
• e.g., {WA=red; NT=green; Q=red; NSW=green; V=red;
SA=blue; T =green}
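Checking such an assignment against the binary "different color" constraints is a one-liner; the sketch below uses the Australia adjacencies from the map-coloring example:

```python
# Binary "different color" constraints for the Australia map-coloring CSP
constraints = [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA'), ('NT', 'Q'),
               ('SA', 'Q'), ('SA', 'NSW'), ('SA', 'V'), ('Q', 'NSW'), ('NSW', 'V')]

def is_solution(assignment):
    """A complete assignment is a solution if every pair of adjacent regions differs."""
    return all(assignment[x] != assignment[y] for x, y in constraints)

solution = {'WA': 'red', 'NT': 'green', 'Q': 'red', 'NSW': 'green',
            'V': 'red', 'SA': 'blue', 'T': 'green'}
print(is_solution(solution))  # True
```

Tasmania (T) appears in no constraint, so any color works for it, which is why the constraint graph shows it as an isolated node.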

• With CSPs, once we find that a partial assignment violates
a constraint, we can immediately discard further
refinements of it
• Many problems that are intractable for regular state-
space search can be solved quickly when formulated as
a CSP
Variations on the CSP formalism:
1. Discrete variables
• Finite domains
• Example: Map coloring problems and the 8-queens
problem, where the variables Q1, . . . ,Q8 are the
positions of each queen in columns 1, . . . , 8 and each
variable has the domain Di = {1, 2, 3, 4, 5, 6, 7, 8}.
• A discrete domain can be infinite, such as the set of
integers or strings.
• it is no longer possible to describe constraints by
enumerating all allowed combinations of values
• Need a constraint language,



• e.g., StartJob1 + 5 < StartJob3
• Linear constraints are solvable; nonlinear constraints are undecidable
2. Continuous variables:
• Continuous domains are common in the real world and
are widely studied in the field of operations research
Varieties of constraints:
• Unary constraints involve a single variable,
o e.g., SA != green
• Binary constraints involve pairs of variables,
o e.g., SA != WA
• Higher-order constraints involve 3 or more variables,
o e.g., cryptarithmetic column constraints

o
• Preferences (soft constraints), e.g., red is better than
green
• often representable by a cost for each variable
assignment (constrained optimization problems)
INFERENCE IN CSPs:
• In CSPs, an algorithm can search (choose a new variable
assignment from several possibilities) or do a specific
type of inference called constraint propagation



• Using the constraints to reduce the number of legal
values for a variable, which in turn can reduce the legal
values for another variable, and so on.
Local consistency:
• If we treat each variable as a node in a graph and each
binary constraint as an arc,
• then enforcing local consistency in each part of the graph
causes inconsistent values to be eliminated throughout
the graph.
• There are different types of local consistency
Node consistency: A single variable is node-consistent if
all the values in the variable’s domain satisfy the
variable’s unary constraints.
Arc consistency:
• A variable is arc-consistent if every value in its domain
satisfies the variable's binary constraints.
• Xi is arc-consistent with respect to another variable Xj if
for every value in the current domain Di there is some
value in the domain Dj that satisfies the binary
constraint on the arc (Xi,Xj).
• A network is arc-consistent if every variable is arc
consistent with every other variable.
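Arc consistency is usually enforced with the AC-3 algorithm. The sketch below hard-codes the map-coloring ≠ constraint for simplicity, and the three-region fragment (WA already fixed to red) is an illustrative assumption:

```python
from collections import deque

def ac3(domains, neighbors):
    """AC-3: repeatedly revise arcs (Xi, Xj) until every arc is consistent."""
    queue = deque((xi, xj) for xi in domains for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        revised = False
        for x in list(domains[xi]):
            # remove x if no value in Dj satisfies the constraint Xi != Xj
            if not any(x != y for y in domains[xj]):
                domains[xi].remove(x)
                revised = True
        if revised:
            if not domains[xi]:
                return False            # a domain was wiped out: inconsistent
            for xk in neighbors[xi]:
                if xk != xj:
                    queue.append((xk, xi))   # re-check arcs into the revised variable
    return True

# Illustrative fragment: WA is fixed to red, NT and SA still open
domains = {'WA': {'red'}, 'NT': {'red', 'green', 'blue'}, 'SA': {'red', 'green', 'blue'}}
neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA'], 'SA': ['WA', 'NT']}
print(ac3(domains, neighbors), sorted(domains['NT']))  # True ['blue', 'green']
```

Removing red from NT's domain can shrink SA's options in turn, which is exactly the propagation effect described above.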
Path consistency:
• Arc consistency tightens down the domains (unary
constraints) using the arcs (binary constraints).
• To make progress on problems, we need a stronger
notion of consistency.



• Path consistency tightens the binary constraints by
using implicit constraints that are inferred by looking at
triples of variables.
• A two-variable set {Xi, Xj} is path-consistent with
respect to a third variable Xm if, for every assignment
{Xi = a, Xj = b} consistent with the constraints on {Xi,
Xj}, there is an assignment to Xm that satisfies the
constraints on {Xi, Xm} and {Xm, Xj}. This is called path
consistency because one can think of it as looking at a
path from Xi to Xj with Xm in the middle.
K-consistency:
• A CSP is k-consistent if, for any set of k − 1 variables and
for any consistent assignment to those variables, a
consistent value can always be assigned to any kth
variable.
• A CSP is strongly k-consistent if it is k-consistent and is
also (k − 1)-consistent, (k − 2)-consistent, . . . all the way
down to 1-consistent.
• We can then solve the problem as follows: First, we
choose a consistent value for X1. We are then
guaranteed to be able to choose a value for X2 because
the graph is 2-consistent, for X3 because it is 3-
consistent, and so on.
BACKTRACKING SEARCH FOR CSPs:
• Variable assignments are commutative,
o i.e., [WA=red then NT =green] same as [NT =green then
WA=red]
• Only need to consider assignments to a single variable
at each node
o b = d and there are d^n leaves



• Depth First search for CSPs with single-variable
assignments is called backtracking search

• The function INFERENCE can optionally be used to
impose arc-, path-, or k-consistency, as desired
• If a value choice leads to failure (noticed either by
INFERENCE or by BACKTRACK), then value assignments
(including those made by INFERENCE) are removed
from the current assignment and a new value is tried.
• The term backtracking search is used for a depth-first
search that chooses values for one variable at a time
and backtracks when a variable has no legal values left
to assign.
• When a branch of the search fails: back up to the
preceding variable and try a different value for it
(chronological backtracking)



• BACKTRACKING-SEARCH keeps only a single
representation of a state and alters that representation
rather than creating new ones.
• The simplest strategy for SELECT-UNASSIGNED-
VARIABLE is to choose the next unassigned variable in
order, {X1, X2, . . .}.
• Illustration:
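As an illustrative sketch (not the figure from the notes), chronological backtracking on the Australia map-coloring CSP might look like this:

```python
def backtracking_search(variables, domains, constraints):
    """Chronological backtracking: assign one variable at a time, undo on failure."""
    def consistent(var, value, assignment):
        for x, y in constraints:
            if x == var and assignment.get(y) == value:
                return False
            if y == var and assignment.get(x) == value:
                return False
        return True

    def backtrack(assignment):
        if len(assignment) == len(variables):
            return assignment
        var = next(v for v in variables if v not in assignment)  # first unassigned
        for value in domains[var]:
            if consistent(var, value, assignment):
                assignment[var] = value
                result = backtrack(assignment)
                if result is not None:
                    return result
                del assignment[var]          # undo and try the next value
        return None                          # no legal value left: backtrack

    return backtrack({})

# Australia map-coloring instance
variables = ['WA', 'NT', 'SA', 'Q', 'NSW', 'V', 'T']
domains = {v: ['red', 'green', 'blue'] for v in variables}
constraints = [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA'), ('NT', 'Q'),
               ('SA', 'Q'), ('SA', 'NSW'), ('SA', 'V'), ('Q', 'NSW'), ('NSW', 'V')]
print(backtracking_search(variables, domains, constraints))  # a valid coloring
```

SELECT-UNASSIGNED-VARIABLE here is the simplest strategy from the notes (take variables in order); failure at any depth undoes the last assignment and tries the next value, i.e., chronological backtracking.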

Next Class: Knowledge-based agents; The wumpus world as an example world

