Lecture06 Informed Search (Part 2)

The document discusses informed search strategies. It explains that informed searches use knowledge or heuristics to search the most promising areas of the problem space first. This allows informed searches to often find solutions more quickly than uninformed searches within limited time. The document provides examples of how to formulate search problems, define heuristic functions, and properties of admissible and consistent heuristics that can guide searches to optimal solutions.


TAI2151 – ARTIFICIAL INTELLIGENCE

FUNDAMENTALS
LECTURE 6 – SOLVING
PROBLEMS BY SEARCHING
(INFORMED SEARCH)
PART 2



SEARCH STRATEGIES

• Strategy: defines the order of node expansion


• Important properties of strategies:
• Completeness: always finds a solution if one exists (does not get stuck in a loop)
• Time complexity: number of nodes generated/expanded
• Space complexity: maximum number of nodes in memory
• Optimality: always finds a least-cost solution
• Time and space complexity measured in terms of:
• b: maximum branching factor of the search tree
• d: depth of the shallowest solution (minimal distance from the root)
• m: maximum depth of the state space (may be infinity)



UNINFORMED VS INFORMED SEARCH STRATEGIES

• Uninformed search: blind search
• Searches without domain knowledge
• If the search space is huge, it can take too much time
• If Google returned a search result after one day, would you use it?
• Informed search:
• Searches the most promising parts of the space first
• Finds a solution more quickly, within limited time
• Often finds the best solution



TO FORMULATE A SEARCH PROBLEM

• Which properties matter, and how should they be represented?
• Identify the initial state, goal state, and possible intermediate states
• Which actions are possible, and how are they represented?
• Operator set: actions and transition model
• Which action should be taken next?
• Decided through the path cost function
• How do we know a solution has been reached?
• Goal test
• In summary, we define a problem through the initial state, goal state,
actions, transition model, path cost, and goal test (a minimal sketch follows below)
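As a rough illustration (not part of the original slides), these six components can be collected into one small Python skeleton; the class name SearchProblem and its method names are illustrative assumptions rather than a prescribed interface:

class SearchProblem:
    """Illustrative container for the six components of a search problem."""

    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state   # where the agent starts
        self.goal_state = goal_state         # the desired configuration

    def actions(self, state):
        """Operator set: the actions applicable in this state."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by applying the action."""
        raise NotImplementedError

    def goal_test(self, state):
        """Goal test: is this state a goal state?"""
        return state == self.goal_state

    def path_cost(self, cost_so_far, state, action, next_state):
        """Path cost: by default each step adds 1."""
        return cost_so_far + 1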



CHALLENGE: SELECT A STATE SPACE
• Real world is complicated
• State space must be abstracted
• Abstract state: set of real world states
• Abstract operators/actions: complex combinations of
real actions
• e.g., Arad → Bucharest: all the possible routes
• Abstract solution
• Set of real paths that are solutions in the real world



TERMINOLOGY
• Agent: the AI or entity that tries to solve the problem
• States: a configuration of the agent in its environment
• Initial state: the state the agent starts in
• Goal state: a state that constitutes a solution
• Operators/Actions: the moves or choices that can be
made within a state
• Goal test: a way to determine whether a given state is a
goal state
• The machine needs some way to check whether the state
it is currently in is the goal state
• Path Cost: numerical cost associated with a given path



EXAMPLE: VACUUM WORLD
• Given all the possible states, i.e.
the state space of a vacuum
cleaner that sucks up dirt.
• Assume that the current task
environment is observable, and
the observed state is state 5.
• How should the vacuum react?

Answer: [RIGHT, SUCK] (a small simulation is sketched below)
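As a small sketch (not from the slides), the answer can be checked by simulating the transition model; the state encoding below, with state 5 read as "agent in the Left square, Left clean, Right dirty", follows the usual textbook numbering and is an assumption here:

# A state is (agent_location, left_square_status, right_square_status).
STATE_5 = ("Left", "Clean", "Dirty")          # assumed reading of state 5
GOAL_STATES = {("Left", "Clean", "Clean"), ("Right", "Clean", "Clean")}

def result(state, action):
    """Transition model of the two-square vacuum world."""
    loc, left, right = state
    if action == "Left":
        return ("Left", left, right)
    if action == "Right":
        return ("Right", left, right)
    if action == "Suck":
        return (loc, "Clean", right) if loc == "Left" else (loc, left, "Clean")
    return state

state = STATE_5
for action in ["Right", "Suck"]:              # the plan from the slide
    state = result(state, action)
print(state in GOAL_STATES)                   # True: all dirt removed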



EXAMPLE: VACUUM WORLD
• However, the states may be
unobservable, meaning the agent
does not know which state it is
in (as in uninformed search), so
the belief state is:
• {1,2,3,4,5,6,7,8}
• If the vacuum is at the RIGHT,
then the possible states are 2, 4,
6, 8.
• If the vacuum is at the LEFT, then
the possible states are 1, 3, 5, 7
• The possible actions are:
• [RIGHT, SUCK, LEFT, SUCK]



EXAMPLE: VACUUM WORLD
• If the agent has a sensor to detect
dirt, knowledge is now available
• What if the area is clean?
• [RIGHT, if DIRT then SUCK] or
• [LEFT, if DIRT then SUCK]
• Goal: no dirt at all in all the places



[Figure: partial search tree expanded from state 5, with actions such as SUCK and RIGHT; some actions lead back to previously visited states (loops).]

Assume that we are in state 5.



THE 8-PUZZLE PROBLEM: 3X3 BOARD WITH 8 TILES & 1 BLANK

Operators/actions: Move the tiles up, down, left or right.


Average solution cost is about 22 steps; branching factor is 3.
A good heuristic function can reduce the search effort.



THE 8-PUZZLE PROBLEM CONTINUED ...
• Agent: puzzle
• States: a state description specifies the location of
each of the eight tiles in one of the nine squares.
• Operators/actions: blank moves left, right, up, or
down.
• Goal test: state matches the goal configuration
shown on the right.
• Path Cost: each step costs 1, so the path cost
is just the length of the path (see the sketch below).
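A minimal sketch of this formulation (the tuple encoding, with 0 standing for the blank, is an illustrative choice rather than something fixed by the slides); the goal board matches the configuration used later in this lecture:

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # goal configuration, read row by row

def actions(state):
    """The blank moves left, right, up, or down (when the move stays on the board)."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    return moves

def result(state, action):
    """Transition model: swap the blank with the neighbouring tile."""
    i = state.index(0)
    offsets = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}
    j = i + offsets[action]
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)

def goal_test(state):
    return state == GOAL

# Each step costs 1, so the path cost is simply the number of moves made.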



HEURISTIC FUNCTION

• A node is expanded based on an evaluation function that
incorporates a heuristic function
• The heuristic estimates the cost to reach the goal
• To formulate a heuristic function:
• Design it based on external knowledge, experience or
intuition
• Heuristic functions have a long history in AI and other research domains




HEURISTIC FUNCTION
• Heuristics depend on the domain
• Choosing an appropriate function greatly affects the effectiveness of the
state-space search: we are telling the system how to direct its search
• A good heuristic
• Closely approximates the actual cost to reach a goal state
• Tells clearly which state in the state space to expand next
• Reaches the solution quickly (reduces the search space)

• In Lecture 5, because the evaluation used by greedy best-first search is not good
enough, A* was introduced: its evaluation function considers both the estimated
cost from each node to the goal and the cost from the starting node to that node.



Inventing Heuristic functions
• How to invent a good heuristic function?
• Any function is possible, but will it help the search algorithm?
• Based on properties:
– Admissible (an admissible heuristic guarantees an
optimal solution)
– Complete (will not get stuck in a loop)



WHAT IS ADMISSIBLE?
• Assume that:
• h(n) = heuristic estimate of the cost from n to the goal
• h*(n) = optimal (true) cost from n to the goal
• h(n) is admissible if and only if, for all nodes n: h(n) <= h*(n)
• A search algorithm using an admissible heuristic is guaranteed to find the optimal solution
• How do we know that a designed heuristic function is admissible? Through a proof



EXAMPLE OF PROOF
Suppose some suboptimal goal G2 has been generated and is in
the queue. Let n be an unexpanded node in the queue such that
n is on a shortest path to an optimal goal G.

• Assume the optimal cost is C*
• f(G2) = g(G2) + h(G2)
• f(G2) = g(G2), since h(G2) = 0 (G2 is a goal node)
• g(G2) > C*, since G2 is suboptimal
• f(G) = g(G) = C*, since h(G) = 0 and G is an optimal goal
• From the above, f(G2) > C* = f(G), since G2 is suboptimal
• Because h is admissible and n lies on an optimal path, h(n) <= h*(n), so
• f(n) = g(n) + h(n) <= g(n) + h*(n) = C*
• From the above, we have f(n) <= C* < f(G2)
• Thus, A* will never select G2 for expansion before n.
CONSISTENT HEURISTICS

• Consistent (or monotone)


• Must fulfil the rule:
• For each node n and each successor
n' of n:
• h(n) <= c(n, n') + h(n')
• For each goal node G: h(G) = 0
• Every consistent heuristic function
is admissible (a small check is sketched below)
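As a small illustration (the graph and heuristic values below are made up for the example, not taken from the slides), the consistency condition can be checked edge by edge:

# edges[n] is a list of (successor, step_cost) pairs
edges = {
    "A": [("B", 1), ("C", 4)],
    "B": [("G", 3)],
    "C": [("G", 1)],
    "G": [],
}
h = {"A": 3, "B": 2, "C": 1, "G": 0}   # h(G) = 0 at the goal

def is_consistent(edges, h):
    """Check h(n) <= c(n, n') + h(n') on every edge of the graph."""
    return all(h[n] <= cost + h[succ]
               for n, succs in edges.items()
               for succ, cost in succs)

print(is_consistent(edges, h))  # True for these example values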



Admissible Heuristics
• A* search uses an admissible heuristic (one that never overestimates,
and therefore leads us to the optimal solution), in which
h(n) <= h*(n)
where h*(n) is the TRUE cost from n.

• h(n) is always an underestimate of the true cost

• For example, hSLD(n) (straight-line distance) never overestimates the actual road
distance.

Remember triangle inequality:


two sides of a triangle cannot add up to less than the third side.



PROPERTIES OF A*

• A* expands every node along a path for which f(n) < C*
• A* will never expand any node for which f(n) > C*
• If h is monotone (consistent), A* will expand every node such that f(n) < C*
• Therefore, A* expands all the nodes for which f(n) < C* and a subset of the
nodes for which f(n) = C*
• Therefore, if h1(n) < h2(n), the set of nodes expanded using h2(n) is smaller



Inventing Heuristic functions

• Examples of Heuristic Functions for A*


• the 8-puzzle problem
• h1(n) = the number of tiles in the wrong position
• is this admissible?
• h2(n) = the sum of distances of the tiles from their goal positions,
where distance is counted as the sum of vertical and horizontal tile
displacements (“Manhattan distance”)
• is this admissible?

[Figure: two example positions, one requiring 3 tile moves and one requiring 2.]



Breadth-first Search

[Figure: breadth-first search tree showing all the possible states
from the initial state to the goal state.]



ADMISSIBLE HEURISTICS: THE 8-PUZZLE

Start:  5 4 _          Goal:  1 2 3
        6 1 8                 8 _ 4
        7 3 2                 7 6 5
• Path cost: total number of vertical or horizontal moves
• Two different heuristics:
• h1(n) = number of misplaced tiles (wrong position).
• h2(n) = total Manhattan distance
(number of squares to desired location for each tile)

So, what are h1(start) and h2(start)?


h1(s) = 7
h2(s) = 2 + 3 + 3 + 2 + 4 + 2 + 0 + 2 = 18 (distances for tiles 1 through 8; a computation sketch follows below)
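A short sketch of how these two values can be computed for the boards above (the tuple encoding with 0 for the blank is an illustrative assumption):

START = (5, 4, 0, 6, 1, 8, 7, 3, 2)
GOAL  = (1, 2, 3, 8, 0, 4, 7, 6, 5)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

print(h1(START), h2(START))  # 7 18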
Best-first (Greedy) search: h(n) = number of misplaced tiles
[Search tree figure: greedy best-first search expands from the initial state, each node
labelled with h(n) = number of misplaced tiles (values such as 3 and 5). The goal state
is 1 2 3 / 8 _ 4 / 7 6 5.]



Best-first (Greedy) search: h(n) = number of misplaced tiles

[Search tree figure, continued: the search reaches the goal state 1 2 3 / 8 _ 4 / 7 6 5.]



A* on 8-puzzle with h(n) = no. of misplaced tiles

[Search tree figure: A* expands nodes in order of f(n) = g(n) + h(n) and reaches the
goal state 1 2 3 / 8 _ 4 / 7 6 5. A self-contained A* sketch follows below.]
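The slides show the resulting search tree as a figure. As a rough, self-contained sketch (not the slides' own code), A* with the misplaced-tiles heuristic can be written with a priority queue ordered by f(n) = g(n) + h(n); the start state below is only an illustrative example:

import heapq

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)

def misplaced(state):
    """h(n) = number of misplaced tiles (blank not counted)."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def successors(state):
    """States reachable by sliding the blank up, down, left or right."""
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            board = list(state)
            board[i], board[j] = board[j], board[i]
            yield tuple(board)

def a_star(start, h=misplaced):
    """Expand the frontier node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path                      # list of states from start to goal
        for nxt in successors(state):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

start = (2, 8, 3, 1, 6, 4, 7, 0, 5)          # illustrative start state
print(len(a_star(start)) - 1)                # number of moves in the solution found (5)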



DOMINANCE AND PRUNING POWER OF HEURISTICS
• Definition:
• A heuristic function h2 dominates h1 (is more informed than h1) if both are
admissible and, for every node n, h2(n) >= h1(n).
• h2 is better for search:
• A* using h2 will never expand more nodes than A* using h1
• The search using h2 takes no more steps than the search using h1

• An A* search with a dominating heuristic function h2 has the property that any
node it expands is also expanded by A* with h1 (Hart, Nilsson & Raphael, 1968).



EFFECTIVE BRANCHING FACTOR
• The effective branching factor is used to measure the effectiveness of a heuristic
• Let n be the total number of nodes generated by A* for a particular problem
and d the depth of the solution; let b* be the effective branching factor
• The effective branching factor b* is defined by n = 1 + b* + (b*)^2 + ... + (b*)^d
• b* is approximately the d-th root of n, i.e. b* ≈ n^(1/d)
• A good value of b* is close to 1
• If b* is 100, it means that roughly 100 children have to be considered for each node

Worked example: if A* generates n = 6 nodes (successors from the initial state)
and the solution depth is d = 2, then b* = 1.79, since 6 ≈ 1 + 1.79 + (1.79)^2.
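A small numerical sketch (the bisection helper below is illustrative, not from the slides) that recovers b* from n and d:

def effective_branching_factor(n, d, tol=1e-4):
    """Solve n = 1 + b + b^2 + ... + b^d for b by simple bisection."""
    def total(b):
        return sum(b ** i for i in range(d + 1))
    lo, hi = 1.0, float(n)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(effective_branching_factor(6, 2), 2))  # about 1.79, as in the worked example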



Comparison Between Search Algorithms
• Given two admissible heuristics h1(n) and h2(n), which is better?
• If h2(n) >= h1(n) for all n, then
– h2 is said to dominate h1
– h2 is better for search
• For our 8-puzzle heuristic, does h2 dominate h1?

Each data point corresponds to 100 instances of the 8-puzzle problem in which
the solution depth varies.
COMPLETENESS AND OPTIMALITY OF A*

• Proof:
• A* expands only nodes whose f-values are <= C*, where C* is the cost of an
optimal path
• The evaluation function of a goal node along an optimal path equals C*
• Lemma:
• At any time before A* terminates, there exists an OPEN node n' on an
optimal path with f(n') <= C*
• C* = cost of the optimal path



INVENTING HEURISTIC FUNCTIONS

• How can we invent admissible heuristics in general?


• Look at a "relaxed" problem where constraints are removed
• Relaxed problem: a problem with fewer restrictions on the
actions (obtained by removing constraints)
• e.g., we can move in straight lines between cities
• e.g., we can move tiles independently of each other



Inventing Heuristic Functions Continued ...
• Hypothetical answer:
– Heuristics are generated from relaxed problems
– Hypothesis: relaxed problems are easier to solve
• In relaxed models the search space has more operators, or, equivalently, more
directed arcs
• Example: 8 puzzle:
– A tile can be moved from A to B if A is adjacent to B and B is clear
– We can generate relaxed problems by removing one or more of the
conditions
• A tile can be moved from A to B if A is adjacent to B
• ...if B is blank
• A tile can be moved from A to B.



Inventing Heuristic Functions Continued ...
• A powerful heuristic may be harder to compute,
but it is more effective: fewer nodes are expanded
• Not every relaxed problem is easy
– An easy problem is one that can be solved optimally by a
greedy algorithm
• If a heuristic function leads to exhaustive search, it is
impractical except on small problems



Improving Heuristics
• If we have several heuristics, none of which dominates the
others:
– Select the maximum value, as given below:
If a collection of admissible heuristics h1 ... hm is available
for a problem, and none dominates any of the others, we can get the
best of all of them by defining
h(n) = max(h1(n), ..., hm(n)) (see the sketch after this list).
• Sometimes the first function invented is not good enough
– With more experience
– With newly discovered knowledge
– The function can be improved (that is how research works)
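A tiny sketch of this rule (the helper name combine_max is hypothetical, and h1, h2 stand for any admissible heuristics, such as those defined in the earlier sketches):

def combine_max(*heuristics):
    """h(n) = max(h1(n), ..., hm(n)); the max of admissible heuristics is still admissible."""
    return lambda state: max(h(state) for h in heuristics)

# Hypothetical usage with the 8-puzzle heuristics from the earlier sketches:
# h = combine_max(h1, h2)
# solution = a_star(start, h=h)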


