Teach 02 - 2020 Chapter-3
SOLVING PROBLEMS BY
SEARCHING
PROBLEM SOLVING AGENT
• A goal-based agent of this kind is called a problem-solving agent.
• Problem-solving agents use atomic representations; that is,
states of the world are considered as wholes, with no internal
structure visible to the problem-solving algorithms.
• Problem-solving agents decide what to do by finding sequences
of actions that lead to desirable states.
• We start by defining the elements that constitute a “Problem”
and its “Solution”.
• Uninformed Search algorithms- algorithms that are given no
information about the problem other than its definition.
• Informed Search Algorithms- can do quite well given some
guidance on where to look for solutions.
• Intelligent agents are supposed to maximize
their performance measure.
• Goals help organize behavior by limiting the
objectives that the agent is trying to achieve.
• Goal formulation, based on the current
situation and the agent’s performance measure, is the
first step in problem solving.
• Problem formulation is the process of deciding
what sorts of actions and states to consider,
given a goal.
• An agent with several immediate options of unknown value
can decide what to do by first examining different possible
sequences of actions that lead to states of known value,
and then choosing the best sequence.
1. Toy Problem: VACUUM WORLD
• States: the state is determined by both the agent location and the
dirt locations. The agent is in one of two locations, each of which
might or might not contain dirt. Thus there are 2 × 2^2 = 8 possible
states.
• Initial state: any state can be designated as the initial state
• Actions: In this simple environment, each state has just three
actions: Left, Right, and Suck.
• Transition model: Actions have their expected effects, except that
moving LEFT in the leftmost square, moving RIGHT in the rightmost
square, and SUCKING in a clean square have no effect.
• Goal test: This checks whether all squares are clean
• Path cost: Each step costs 1, so the path cost is the number of steps
in the path.
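The formulation above can be sketched directly in code; the square names 'A' (left) and 'B' (right) are illustrative labels, not from the slides.

```python
from itertools import product

# State: (agent location, dirt in A, dirt in B) -> 2 x 2^2 = 8 states.
STATES = list(product('AB', [True, False], [True, False]))

def result(state, action):
    """Transition model: actions have their expected effects, except at
    the edges of the world or when sucking an already-clean square."""
    loc, dirt_a, dirt_b = state
    if action == 'Left':
        return ('A', dirt_a, dirt_b)      # no effect if already leftmost
    if action == 'Right':
        return ('B', dirt_a, dirt_b)      # no effect if already rightmost
    if action == 'Suck':
        return (loc, False, dirt_b) if loc == 'A' else (loc, dirt_a, False)
    raise ValueError(action)

def goal_test(state):
    """Checks whether all squares are clean."""
    return not state[1] and not state[2]
```

With unit step costs, the path cost of a solution is simply the number of `result` applications along it.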
2. Toy Problem: THE 8-PUZZLE
• The 8-puzzle belongs to the family of sliding-
block puzzles, which are often used as test
problems for new search algorithms in AI.
EXAMPLE: THE 8-PUZZLE
• The 8-puzzle, an instance of which is shown in Figure ,
consists of a 3 x 3 board with eight numbered tiles
and a blank space.
• States: A state description specifies the location of each of the eight
tiles and the blank in one of the nine squares.
• Transition model: Given a state and action, this returns the resulting
state; for example, if we apply Left to the start state in the figure above,
the resulting state has the 5 and the blank switched.
• Goal test: Checks whether the state matches the goal configuration as
shown in the figure.
• Path cost: Each step costs 1, so the path cost is the number of steps
in the path.
3. Toy Problem: 8-QUEENS PROBLEM
• The goal of the 8-
queens problem is to
place eight queens on a
chessboard such that
no queen attacks any
other. (A queen attacks
any piece in the same
row, column or
diagonal)
EXAMPLE: 8-QUEENS PROBLEM
• There are two main kinds of formulation.
• An incremental formulation involves
operators that augment the state description,
starting with an empty state; for the 8-queens
problem, this means that each action adds a
queen to the state.
• A complete-state formulation starts with all 8
queens on the board and moves them around.
EXAMPLE: 8-QUEENS PROBLEM
• States: Any arrangement of 0 to 8 queens on
the board is a state.
• Initial State: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a
queen added to the specified square.
• Goal test: 8 queens are on the board, none
attacked.
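A minimal sketch of this incremental formulation; representing a state as a tuple of column indices, one per filled row, is an implementation choice for the sketch, not part of the slides.

```python
def attacks(r1, c1, r2, c2):
    """A queen attacks any piece in the same row, column, or diagonal."""
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def result(state, col):
    """Transition model: add a queen in the given column of the next row."""
    return state + (col,)

def goal_test(state, n=8):
    """8 queens are on the board, none attacked."""
    return len(state) == n and not any(
        attacks(r1, state[r1], r2, state[r2])
        for r1 in range(n) for r2 in range(r1 + 1, n))
```

Placing one queen per row already rules out row attacks, so `goal_test` effectively checks columns and diagonals.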
REAL WORLD PROBLEMS
• The route-finding problem is defined in terms
of specified locations and transitions along links
between them.
• Route-finding algorithms are used in a variety
of applications, such as routing in computer
networks, military operations planning, and
airline travel planning systems.
• These problems are typically complex to specify.
Example: Airline Travel Problem
1. REAL WORLD PROBLEM: Airline Travel Problem
• States: each state is represented by a location (e.g., an airport) and
the current time.
• Initial state: This is specified by the user’s query.
• Actions: Take any flight from the current location, in any seat
class, leaving after the current time, leaving enough time for
within-airport transfer if needed.
• Transition model: The state resulting from taking a flight will
have the flight’s destination as the current location and the
flight’s arrival time as the current time.
• Goal test: are we at the final destination specified by the user?
• Path cost: monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of
airplane, frequent-flyer mileage awards, etc.
REAL WORLD PROBLEMS: OTHER EXAMPLE
PROBLEMS
2. Touring problems: closely related to the route-finding
problem. Consider, for example, “Visit every city at least
once, starting and ending in Bucharest.”
3. Traveling Salesperson problem (TSP) is a touring
problem in which each city must be visited exactly
once. The aim is to find the shortest tour.
4. VLSI Layout problem requires positioning millions of
components and connections on a chip to minimize
area, minimize circuit delays, minimize stray
capacitances, and maximize manufacturing yield.
REAL WORLD PROBLEMS: OTHER EXAMPLE
PROBLEMS
5. Robot navigation is a generalization of the route-
finding problem described earlier. Rather than
following a discrete set of routes, a robot can move
in a continuous space with (in principle) an infinite
set of possible actions and states.
6. Automatic assembly sequencing - the aim is to find
an order in which to assemble the parts of some
object. If the wrong order is chosen, there will be no
way to add some part later in the sequence without
undoing some of the work already done.
SEARCHING FOR SOLUTIONS
Having formulated some problems, we now need to solve
them.
• The figure shows some of the expansions in the
search tree for finding a route from Arad to
Bucharest.
• The root of the search tree is a search node
corresponding to the initial state (Arad).
• The first step is to test whether this is a goal
state or not.
• If it is not a goal state, we expand the current
state; that is, we apply the successor function to
the current state, thereby generating a new set
of states.
• Continue choosing, testing, and expanding
until either a solution is found or there are
no more states to be expanded.
• The set of all leaf nodes available for
expansion at any given point is called
the frontier.
• The choice of which state to expand is
determined by the search strategy.
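The choose-test-expand loop above can be sketched as follows. The adjacency dict is a small one-way fragment of the Romania route map (an assumption to keep the sketch finite), and popping the frontier FIFO makes the strategy breadth-first.

```python
def tree_search(graph, start, goal):
    """Grow a search tree: choose a node, test it, expand it."""
    frontier = [[start]]                 # each frontier entry is a path
    while frontier:
        path = frontier.pop(0)           # FIFO choice -> breadth-first
        state = path[-1]
        if state == goal:                # test before expanding
            return path
        for succ in graph.get(state, []):    # expand: generate successors
            frontier.append(path + [succ])
    return None                          # no more states to expand

# Illustrative one-way fragment of the Romania map (no distances here).
ROMANIA = {'Arad': ['Sibiu', 'Timisoara', 'Zerind'],
           'Sibiu': ['Fagaras', 'Rimnicu Vilcea'],
           'Fagaras': ['Bucharest'],
           'Rimnicu Vilcea': ['Pitesti'],
           'Pitesti': ['Bucharest']}
```

Swapping the frontier discipline (FIFO, LIFO, priority queue) is exactly what changes the search strategy.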
INFRASTRUCTURE FOR SEARCH ALGORITHMS
DEPTH-FIRST SEARCH
• Depth-first search always expands the deepest
node in the current fringe of the search tree.
• The search proceeds immediately to the deepest
level of the search tree, where the nodes have no
successors.
• This strategy can be implemented by Tree-Search
with a Last-in-First-out (LIFO) queue, also known as
a stack.
• Depth-first search has very modest memory
requirements.
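A minimal sketch of depth-first tree search with an explicit LIFO stack, assuming a finite tree given as an adjacency dict (the tree itself is illustrative):

```python
def depth_first_search(tree, start, goal):
    """Always expand the deepest node, via a LIFO stack of paths."""
    stack = [[start]]
    while stack:
        path = stack.pop()               # LIFO: take the deepest node
        state = path[-1]
        if state == goal:
            return path
        # Push successors in reverse so the leftmost child is expanded first.
        for succ in reversed(tree.get(state, [])):
            stack.append(path + [succ])
    return None

# An illustrative finite tree.
TREE = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F']}
```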
BIDIRECTIONAL SEARCH
• One forward from
the initial state, and
the other backward
from the goal,
stopping when two
searches meet in the
middle.
• The motivation is
that b^(d/2) + b^(d/2) is much less than b^d.
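A quick numeric check of this motivation, with illustrative values b = 10 and d = 6:

```python
b, d = 10, 6                                         # illustrative values
two_half_searches = b ** (d // 2) + b ** (d // 2)    # forward + backward
one_full_search = b ** d                             # single search to depth d
```

Here the two half-depth searches generate 2,000 nodes against 1,000,000 for a single search to depth d.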
COMPARING UNINFORMED SEARCH STRATEGIES
INFORMED SEARCH
BEST-FIRST SEARCH
• The general approach we consider is called best-first search:
an instance of the general TREE-SEARCH or GRAPH-SEARCH
algorithm in which a node is selected for expansion based
on an evaluation function, f(n). The choice of f(n)
determines the search strategy.
HEURISTIC FUNCTION
• A key component of these algorithms is a
heuristic function, denoted h(n).
GREEDY BEST-FIRST SEARCH
A* SEARCH: MINIMIZING THE TOTAL
ESTIMATED SOLUTION COST
• The most widely known form of best-first search is called
A* search.
• It evaluates nodes by combining g(n), the cost to reach the
node, and h(n), the estimated cost to get from the node to the goal:
f(n) = g(n) + h(n)
• Since g(n) gives the path cost from the start node to node
n, and h(n) is the estimated cost of the cheapest path from
n to the goal, we have
f(n) = estimated cost of the cheapest solution through n
• Provided h(n) is an admissible heuristic, A* search is both
complete and optimal.
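A sketch of A* under illustrative assumptions: a small 4-connected grid with unit step costs, using Manhattan distance as the admissible h(n).

```python
import heapq

def astar(start, goal, walls, size=5):
    """A* on a size x size grid; states are (x, y) cells, steps cost 1."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan h(n)
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)  # lowest f = g + h first
        if state == goal:
            return g, path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (state[0] + dx, state[1] + dy)
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in walls:
                continue
            g2 = g + 1
            if g2 < best_g.get(nxt, float('inf')):   # keep the cheapest g
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```

Because Manhattan distance never overestimates on a 4-connected grid, the first time the goal is popped its g value is optimal.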
HEURISTIC FUNCTIONS
• The 8-puzzle was one of the earliest heuristic search
problems.
• The object of the puzzle is to slide the tiles
horizontally or vertically into the empty space until
the configuration matches the goal configuration.
The average solution cost is about 22 steps.
HEURISTIC FUNCTIONS
• h1(n) = the number of misplaced tiles.
• h2(n) = the sum of the distances of the tiles from
their goal positions.
• Because tiles cannot move along diagonals, the
distance we will count is the sum of the
horizontal and vertical distances. This is
sometimes called the city block distance or
Manhattan distance.
• A problem with fewer restrictions on the
actions is called a relaxed problem.
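The two heuristics can be computed as follows; the state encoding (a row-by-row tuple with 0 for the blank) and the goal configuration are assumptions made for the sketch.

```python
# Illustrative goal: blank first, then tiles 1-8 row by row.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for tile, goal in zip(state, GOAL) if tile and tile != goal)

def h2(state):
    """Sum of Manhattan (city block) distances of tiles to their goals."""
    total = 0
    for i, tile in enumerate(state):
        if tile:                                  # skip the blank
            g = GOAL.index(tile)
            total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total
```

For any state, h2 is at least as large as h1 (a misplaced tile is at least one move away), which is why h2 is the stronger heuristic.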
LEARNING HEURISTICS FROM EXPERIENCE
Thank You!!!
BREADTH-FIRST SEARCH
LOCAL SEARCH ALGORITHMS AND
OPTIMIZATION PROBLEMS
• Local search algorithms operate using a single
current state (rather than multiple paths) and
move only to neighbors of that state.
They are not systematic.
• Two key advantages:
1) They use very little memory
2) They can find solutions in large or infinite (continuous)
state spaces
HILL-CLIMBING SEARCH
• It is simply a loop that continually moves in the
direction of increasing value, that is, uphill. It
terminates when it reaches a “peak” where no
neighbor has a higher value.
• The algorithm does not maintain a search tree, so the
current node data structure need only record the state
and its objective function value.
• Hill-climbing does not look ahead beyond the
immediate neighbors of the current state.
• Hill climbing is sometimes called greedy local search.
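A minimal sketch of the loop, assuming integer states whose only neighbors are state-1 and state+1, with an illustrative single-peak objective:

```python
def hill_climb(state, value):
    """Move uphill until no neighbor has a higher value (a peak)."""
    while True:
        best = max([state - 1, state + 1], key=value)  # best neighbor only
        if value(best) <= value(state):   # no higher neighbor: stop
            return state
        state = best

# An illustrative single-peak objective with its maximum at x = 3.
f = lambda x: -(x - 3) ** 2
```

With a single peak the loop reaches the global maximum; with several peaks it would stop at whichever local maximum it climbs first.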
GENETIC ALGORITHMS
• A genetic algorithm (GA) is a variant of
stochastic beam search in which successor
states are generated by combining two parent
states, rather than by modifying a single state.
• GAs begin with a set of k randomly generated
states, called the population.
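A toy sketch of these ideas; the bit-string states, the mutation rate, and the fitness function (count of 1 bits) are all illustrative choices, not from the slides.

```python
import random

def crossover(p1, p2):
    """Generate a successor by combining two parent states."""
    cut = random.randrange(1, len(p1))    # random crossover point
    return p1[:cut] + p2[cut:]

def mutate(state, rate=0.1):
    """Flip each bit with a small probability."""
    return [b ^ 1 if random.random() < rate else b for b in state]

def genetic_algorithm(fitness, k=20, length=12, generations=100):
    # Begin with a population of k randomly generated states.
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(k)]
    for _ in range(generations):
        weights = [fitness(s) + 1 for s in population]   # selection pressure
        population = [mutate(crossover(*random.choices(population, weights, k=2)))
                      for _ in range(k)]
    return max(population, key=fitness)

random.seed(0)
best = genetic_algorithm(sum)   # fitness = number of 1 bits
```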
ONLINE SEARCH AGENTS
• An online search agent operates by interleaving
computation and action: first it takes an action, then it
observes the environment and computes the next
action.
• Online search is a necessary idea for an exploration
problem, where the states and actions are unknown to
the agent.
• An agent in this state of ignorance must use its actions
as experiments to determine what to do next, and
hence must interleave computation and action.
• Example: a robot that is placed in a new building and
must explore it to build a map that it can use for getting
from A to B.
ONLINE LOCAL SEARCH
• Using a random walk to explore the environment.
• A random walk simply selects at random one of the
available actions from the current state; preference
can be given to actions that have not yet been tried.
• A random walk will eventually find a goal or complete
its exploration, provided that the space is finite.
• The process can be very slow.
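A sketch of a random walk on a small finite state space given as an adjacency dict; the preference for untried actions is omitted for brevity, and the corridor environment is illustrative.

```python
import random

def random_walk(graph, start, goal, max_steps=10_000):
    """Repeatedly select one of the available actions at random."""
    state, steps = start, 0
    while state != goal and steps < max_steps:
        state = random.choice(graph[state])   # pick any available action
        steps += 1
    return state, steps

# A small corridor of four states; the goal is state 3.
LINE = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
random.seed(0)
state, steps = random_walk(LINE, 0, 3)
```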
• The figure shows an environment in which a random walk
will take exponentially many steps to find the goal because
at each step backward progress is twice as likely as forward
progress.
• An agent implementing this scheme is called a learning
real-time A* (LRTA*) agent.
• Optimism under uncertainty encourages the agent to
explore new, possibly promising paths.
• An LRTA* agent is guaranteed to find a goal in any
finite, safely explorable environment.
UNIFORM-COST SEARCH
• Uniform-cost search expands the node
n with the lowest path cost.
• Uniform-cost search does not care
about the number of steps a path has,
but only about its total cost.
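A sketch with a priority queue ordered by path cost g(n). The weighted graph is illustrative, chosen so that the cheapest path has more steps than the two-step alternative.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the node with the lowest path cost g(n) first."""
    frontier = [(0, start, [start])]      # (path cost, state, path)
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in explored:
            continue
        explored.add(state)
        for succ, step_cost in graph.get(state, []):
            heapq.heappush(frontier, (cost + step_cost, succ, path + [succ]))
    return None

# Illustrative weighted graph: S-A-B-G costs 4; the shorter S-B-G costs 6.
GRAPH = {'S': [('A', 1), ('B', 5)],
         'A': [('B', 2)],
         'B': [('G', 1)]}
```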
DEPTH-LIMITED SEARCH
• The problem of unbounded trees can be alleviated
with a predetermined depth limit l.
• That is, nodes at depth l are treated as if they have
no successors. This approach is called depth-limited
search.
• The depth limit solves the infinite-path problem.
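A recursive sketch, assuming a small illustrative tree; nodes at the depth limit are treated as if they had no successors.

```python
def depth_limited_search(tree, state, goal, limit):
    """Depth-first search that cuts off below the given depth limit."""
    if state == goal:
        return [state]
    if limit == 0:
        return None       # cut off: treat this node as having no successors
    for succ in tree.get(state, []):
        path = depth_limited_search(tree, succ, goal, limit - 1)
        if path is not None:
            return [state] + path
    return None

# An illustrative tree; 'E' sits at depth 2.
TREE = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E']}
```

A limit that is too small makes the goal unreachable, which is why iterative deepening retries with successively larger limits.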