
Artificial Intelligence

DSE 313 – Lecture 9

Searching for Solutions: Informed Search

Vaibhav
Assistant Professor, Data Science and Engineering
IISERB

Based on slides/lectures by Stuart Russell, Henry Kautz, Subbarao Kambhampati, Amit Sethi, Avijit Maji, B Ravindran, and Mausam
Review

• A search strategy is defined by picking the order of node expansion
• Which nodes do we check first?
• An abstract problem is modeled as a (finite or infinite) decision tree.
• Weak methods do not use any knowledge of the problem
• Generally applicable
• Usually die from combinatorial explosion when exposed to “real life”
Knowledge and Heuristics

• Simon and Newell, Human Problem Solving, 1972.
• Thinking out loud: experts have strong opinions like “this looks promising” or “no way this is going to work”.
• S&N: intelligence comes from heuristics that help find promising states fast.
Why? What? How? (Heuristics)
• A heuristic is a technique for solving a problem quickly.
• With uninformed search, the 8-puzzle explores roughly 3^20 states in the worst case, the 15-puzzle roughly 10^15, and chess roughly 35^80.
• Time complexity is exponential.
• In informed search we apply heuristics (a guess, i.e., knowledge about the goal) that can make otherwise intractable problems feasible in practice (informally, “NP → P”).
• How do we compute a heuristic? With a function, e.g., Euclidean distance.
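Concretely, a heuristic is just a function from a state to an estimated cost-to-go. A minimal sketch (my own illustration, not from the slides, assuming states are (x, y) points):

```python
import math

def h_euclidean(node, goal):
    """Straight-line (Euclidean) distance between two (x, y) points.
    Admissible whenever the true path cost can never be less than the
    straight-line distance (e.g., road distances)."""
    return math.dist(node, goal)

# e.g., h_euclidean((0, 0), (3, 4)) == 5.0
```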
Informed vs. Uninformed

Uninformed (optimal):
• Only information is the state and the goal (blind search)
• Time consuming
• f(n) = g(n)

Informed (good solution):
• We have information beyond the goal and state (information ~ knowledge ~ heuristics)
• Less time consuming (depends!)
• f(n) = g(n) + h(n)

where g(n) = cost from the start node to node n, and h(n) = estimated cost from node n to the goal.
What will the values of g(n) and h(n) be at the start and goal states? (At the start state g(n) = 0; at the goal state h(n) = 0.)
Environment Type (coming lectures)

[Figure: a decision diagram over environment properties (fully observable? static? deterministic? sequential? discrete?), mapping each branch to an approach: planning / heuristic search, control / cybernetics, constraint satisfaction (vector search), and continuous function optimization.]
Outline

• Best-first search
• A* search
• Local search algorithms
• Hill-climbing search
• Simulated annealing search
• Local beam search
• Some other variations…
Best-first search

• Idea: use an evaluation function f(n) for each node, an estimate of its “desirability”
→ expand the most desirable unexpanded node
• Implementation: order the nodes in the fringe in decreasing order of desirability, i.e., a priority queue keyed on f (a sketch follows the list below)
• Special cases:
• greedy best-first search
• A* search
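Here is a minimal, generic sketch (my own illustration, not the lecture's pseudocode) of best-first search with a priority queue; passing different f functions specializes it to the algorithms below:

```python
import heapq
from itertools import count

def best_first_search(start, is_goal, successors, f):
    """Generic best-first search (tree search, no closed set): always pop the
    frontier node with the lowest f(n). f = h gives greedy best-first search;
    f = g + h gives A* (nodes must then carry their path cost g)."""
    tie = count()  # tie-breaker so heapq never has to compare node objects
    frontier = [(f(start), next(tie), start)]
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node
        for child in successors(node):
            heapq.heappush(frontier, (f(child), next(tie), child))
    return None
```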
Romania with step costs in km
Greedy best-first search
• Evaluation function f(n) = h(n) (heuristic) = estimate of the cost from n to the goal
• e.g., h_SLD(n) = straight-line distance from n to Bucharest
• Greedy best-first search expands the node that appears to be closest to the goal
Greedy best-first search example
Properties of greedy best-first search
• Complete? No with tree search (can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …); Yes with graph search
• Time? Exponential, but a good heuristic can give dramatic improvement
• Space? Exponential: keeps all nodes in memory
• Optimal? No
A* search
• Idea: avoid expanding paths that are already
expensive
• Evaluation function f(n) = g(n) + h(n)
• g(n) = cost so far to reach n
• h(n) = estimated cost from n to goal
• f(n) = estimated total cost of path through n to goal
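Building on the best_first_search sketch above, A* is just best-first search with f(n) = g(n) + h(n). A hedged sketch, where expand(state) yielding (successor_state, step_cost) pairs is an assumed interface:

```python
# Each search node carries its path cost g and its path: (state, g, path).
def astar(start, goal_test, expand, h):
    def successors(node):
        state, g, path = node
        return [(s, g + c, path + [s]) for s, c in expand(state)]
    def f(node):
        state, g, _ = node
        return g + h(state)  # f(n) = g(n) + h(n)
    result = best_first_search((start, 0, [start]),
                               lambda node: goal_test(node[0]),
                               successors, f)
    return result and (result[2], result[1])  # (path, cost), or None
```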
A* search example
Conditions for optimality (admissibility and consistency): Admissible heuristics
• A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
• An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic.
• Example: h_SLD(n) (never overestimates the actual road distance)
• Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal.
(Proof) Admissible

[Figure: a small graph with edges S→A (cost 4), S→B (cost 2), B→A (cost 1), and A→G (cost 4), annotated with h(S) = 7, h(A) = 1, h(B) = 5, h(G) = 0.]

Shortest path we are looking for: S→B→A→G, with cost 2 + 1 + 4 = 7.
A to G heuristic: 1 ≤ 4; B to G: 5 ≤ 5; from S (via S→B→A→G): 7 ≤ 7. That means we can verify that h is admissible.

Let's construct the search tree and the search graph for this example to see if being admissible is sufficient for graph search.
Artificial Intelligence
DSE 313 – Lecture 10

Searching for Solutions: Informed Search

Vaibhav
Assistant Professor, Data Science and Engineering
IISERB

Based on slides/lectures by Stuart Russell, Henry Kautz, Subbarao Kambhampati, Amit Sethi, Avijit Maji, B Ravindran, and Mausam
Till Now
• Need guidance for better searching (heuristics): f(n) = g(n) + h(n)
• Greedy best-first search
• A* search
• The case for optimality
(Proof) Admissible (recap from Lecture 9)

[Figure: the same graph as before, with edges S→A (cost 4), S→B (cost 2), B→A (cost 1), and A→G (cost 4), annotated with h(S) = 7, h(A) = 1, h(B) = 5, h(G) = 0.]

Shortest path we are looking for: S→B→A→G, with cost 7. A to G heuristic: 1 ≤ 4; B to G: 5 ≤ 5; from S (via S→B→A→G): 7 ≤ 7. That means we can verify that h is admissible.

Let's construct the search tree and the search graph for this example to see if being admissible is sufficient for graph search.
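To answer that question concretely, here is a minimal A* GRAPH-SEARCH sketch (my own illustration, not from the slides) run on the graph above. Because h here is admissible but not consistent, graph search closes A via the expensive path first and then refuses to reopen it, returning the suboptimal path S→A→G of cost 8 instead of the optimal cost 7:

```python
import heapq

def astar_graph_search(start, goal, edges, h):
    """A* with a closed set (GRAPH-SEARCH): each state is expanded at most once."""
    frontier = [(h[start], 0, start, [start])]  # (f, g, state, path)
    closed = set()
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if state in closed:
            continue  # already expanded once; a cheaper rediscovery is discarded
        closed.add(state)
        for succ, cost in edges.get(state, []):
            if succ not in closed:
                heapq.heappush(frontier,
                               (g + cost + h[succ], g + cost, succ, path + [succ]))
    return None, float("inf")

edges = {"S": [("A", 4), ("B", 2)], "B": [("A", 1)], "A": [("G", 4)]}
h = {"S": 7, "A": 1, "B": 5, "G": 0}  # admissible, but NOT consistent
print(astar_graph_search("S", "G", edges, h))  # (['S', 'A', 'G'], 8): suboptimal!
```

So admissibility alone is not sufficient for graph search; the stronger condition introduced below is.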
Optimality of A* (tree) (proof)
• Suppose h is admissible.
• Suppose some suboptimal goal G2 has been generated and is in the frontier. Let n be an unexpanded node in the frontier such that n is on a shortest path to an optimal goal G.

Focus on G2:
• f(G2) = g(G2), since h(G2) = 0
• g(G2) > g(G), since G2 is suboptimal and the cost of reaching G is less
Focus on G:
• f(G) = g(G), since h(G) = 0
• Therefore f(G2) > f(G) from the above
Optimality of A* (tree) (proof)
f(G2) = g(G2), since h(G2) = 0 (h is admissible)
g(G2) > g(G), since G2 is suboptimal and the cost of reaching G is less
f(G) = g(G), since h(G) = 0
f(G2) > f(G) from the above … (1)

Now let's focus on n, the node in the fringe we chose not to expand:
• h(n) ≤ h*(n), since h is admissible
• g(n) + h(n) ≤ g(n) + h*(n) (adding g(n) to both sides)
• Since n is on an optimal path to G, g(n) + h*(n) = g(G) = f(G), so f(n) ≤ f(G) … (2)
From (1) and (2), f(G2) > f(n), so A* will never select G2 for expansion.
Consistent heuristics
• A heuristic is consistent if for every node n and every successor n' of n generated by any action a,
  h(n) ≤ c(n, a, n') + h(n')
• Every consistent heuristic is also admissible.
• Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal.
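On a small explicit graph, consistency can be checked directly. A sketch reusing the `edges` and `h` dictionaries from the A* example above:

```python
def is_consistent(edges, h):
    """Check h(n) <= c(n, a, n') + h(n') for every edge (n, n')."""
    return all(h[n] <= cost + h[succ]
               for n, succs in edges.items()
               for succ, cost in succs)

print(is_consistent(edges, h))  # False: h(B) = 5 > c(B, A) + h(A) = 1 + 1 = 2
```

This pinpoints why graph search failed on that example: the heuristic drops by more than the step cost along the edge B→A.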
Properties of A*

• Complete? Yes, unless there are infinitely many nodes with f ≤ f(G)
• Time? Exponential in the worst case (with a sufficiently accurate heuristic the growth can become polynomial)
• Space? Keeps all nodes in memory
• Optimal? Yes
The effect of heuristic accuracy on performance
e.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance (i.e., number of squares each tile is from its desired location)

• h1(S) = ?
• h2(S) = ?
e.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance (i.e., number of squares each tile is from its desired location)

• h1(S) = 8
• h2(S) = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18
(The true solution cost is 26.)
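Both heuristics are easy to implement. A minimal sketch, assuming the board is a 9-tuple read row by row with 0 for the blank, using the standard AIMA start state that yields the numbers above:

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of each tile from its goal square (3x3 board)."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}  # tile -> (row, col)
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)
            gr, gc = pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # the AIMA example state
goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(h1(start, goal), h2(start, goal))  # 8 18
```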
Dominance
• One way to characterize the quality of a heuristic is the effective branching factor.
• If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1.
• h2 is better for search.
• Typical search costs (average number of nodes expanded):
  • d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
  • d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes
Generating Admissible Heuristics from Relaxed Problems
• Is it possible for a computer to invent such heuristics (like h1 and h2) mechanically?
• We need a general principle for developing heuristics.
• Solution: the idea of a relaxed problem
  e.g., relaxing an integer-valued variable to be real-valued, as in the linear-programming relaxation of an integer program
• Relaxed problems are important in computing admissible heuristics.
• A problem with fewer restrictions on the actions is called a relaxed problem.
• The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
• Any optimal solution of the original problem is, by definition, also a solution of the relaxed problem.
• The relaxed problem may have better solutions if the added edges provide shortcuts.

The state-space graph of the relaxed problem is a supergraph of the original state space, because removing restrictions adds edges to the graph.

One problem with generating new heuristic functions is that one often fails to get a single “clearly best” heuristic.

For example, if the 8-puzzle actions are described as
  A tile can move from square A to square B if A is horizontally or vertically adjacent to B and B is blank,
we can generate three relaxed problems by removing one or both of the conditions:
(a) A tile can move from square A to square B if A is adjacent to B.
(b) A tile can move from square A to square B if B is blank.
(c) A tile can move from square A to square B.

From (a) we can derive h2 (Manhattan distance), and from (c) we can derive h1 (misplaced tiles).
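When no single heuristic dominates, a standard remedy (from AIMA, sketched here) is to combine them with a pointwise maximum, which is still admissible and dominates each component:

```python
def h_max(*heuristics):
    """Combine admissible heuristics h1, ..., hm into h(n) = max_i hi(n).
    The max never overestimates (each hi is admissible), so it is admissible too."""
    return lambda n: max(h(n) for h in heuristics)

# e.g., h = h_max(h1_misplaced, h2_manhattan)  (hypothetical heuristic names)
```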


Generating admissible heuristics from
subproblems: Pattern database
• Admissible heuristics can also be derived from the solution cost of a subproblem of a given problem.
• The cost of the optimal solution of this subproblem is a lower bound on the cost of the complete problem.
• This can be better than Manhattan distance.
• Store these exact solution costs for every possible subproblem instance in a pattern database, and use them as heuristics.
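One common way to build such a database (a sketch under assumptions: unit-cost moves, and a hypothetical `moves(pattern)` function enumerating a pattern's neighbors) is a backward breadth-first search from the goal pattern:

```python
from collections import deque

def build_pattern_db(goal_pattern, moves):
    """Backward breadth-first search from the goal pattern (unit-cost moves).
    db[p] = exact cost to solve pattern p, usable as an admissible heuristic."""
    db = {goal_pattern: 0}
    queue = deque([goal_pattern])
    while queue:
        p = queue.popleft()
        for succ in moves(p):
            if succ not in db:
                db[succ] = db[p] + 1
                queue.append(succ)
    return db
```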
Learning heuristics from experience
• Another solution is to learn from experience. “Experience” here means solving lots of 8-puzzles.
• Each optimal solution to an 8-puzzle instance provides examples from which h(n) can be learned.
• From these examples, a learning algorithm can construct a function h(n).
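As a minimal sketch (my own illustration; AIMA suggests linear combinations of features such as misplaced tiles and Manhattan distance), h can be fit by least squares on features of solved instances. Note that a learned h is not guaranteed to be admissible:

```python
import numpy as np

def fit_linear_h(examples):
    """examples: list of (feature_vector, true_solution_cost) pairs collected
    from optimally solved puzzles. Returns h(features) = features . coef."""
    X = np.array([f for f, _ in examples], dtype=float)
    y = np.array([c for _, c in examples], dtype=float)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
    return lambda features: float(np.asarray(features, dtype=float) @ coef)

# e.g., features could be (h1(n), h2(n)); the learned h may overestimate,
# so A* using it is not guaranteed to be optimal.
```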
Search formulation of the queens puzzle
• Successors: all valid ways of placing an additional queen on the board; goal: eight queens placed so that no queen attacks another.

Incremental formulation

[Figure: board diagrams illustrating the incremental formulation, with queens added one column at a time.]
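A minimal sketch of this incremental formulation (my own illustration, representing a state as a tuple where entry i is the row of the queen in column i):

```python
def successors(queens):
    """Successors place one more queen in the next column, in any row not
    attacked (same row or same diagonal) by the queens already placed."""
    col = len(queens)
    return [queens + (row,) for row in range(8)
            if all(queens[c] != row and abs(queens[c] - row) != col - c
                   for c in range(col))]

def is_goal(queens):
    return len(queens) == 8

# successors(()) yields 8 one-queen states; a depth-first search over this
# space reaches all 92 solutions of the eight-queens puzzle.
```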
Pose Estimation Example (Do you have a goal state?)
• Given a geometric model of a 3D object and a 2D image of the object,
• determine the position and orientation of the object w.r.t. the camera that snapped the image.
• Large (infinite) continuous search spaces (is systematic search a good technique?)

[Figure: the 2D image alongside the 3D object model.]

State: (x, y, z, x-angle, y-angle, z-angle)
Two Major Pointers
• We may not need a systematic path.
• There may be no explicit goal state to be satisfied.
