Teach 02 - 2020 Chapter-3

This document discusses problem solving using search algorithms. It defines the key elements of a problem including the initial state, actions, transition model, goal test, and path cost. It provides examples of toy problems like vacuum world, 8-puzzle, and 8-queens to illustrate search concepts. Real-world problems discussed include airline travel planning, touring and visiting multiple cities, the traveling salesperson problem, robot navigation, and assembly sequencing.


CHAPTER-3

SOLVING PROBLEMS BY
SEARCHING

PROBLEM-SOLVING AGENT
• A problem-solving agent is a kind of goal-based agent.
• Problem-solving agents use atomic representations; that is, states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms.
• Problem-solving agents decide what to do by finding sequences of actions that lead to desirable states.
• We start by defining the elements that constitute a “problem” and its “solution”.
• Uninformed search algorithms are given no information about the problem other than its definition.
• Informed search algorithms can do quite well given some guidance on where to look for solutions.

• Intelligent agents are supposed to maximize their performance measure.
• Goals help organize behavior by limiting the objectives that the agent is trying to achieve.
• Goal formulation, based on the current situation and the agent’s performance measure, is the first step in problem solving.
• Problem formulation is the process of deciding what sorts of actions and states to consider, given a goal.

• An agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best sequence.
• The process of looking for a sequence of actions that reaches the goal is called search.
• A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
• Once a solution is found, the actions it recommends can be carried out; this is called the execution phase.
WELL-DEFINED PROBLEMS AND SOLUTIONS
A problem can be defined formally by five components:

1. Initial state: the state that the agent starts in, e.g., In(Arad).
2. Actions: a description of the possible actions available to the agent.
3. Transition model: a description of what each action does. We also use the term successor to refer to any state reachable from a given state by a single action.
• The state space of the problem is the set of all states reachable from the initial state by any sequence of actions.
• The state space forms a directed network or graph in which the nodes are states and the arcs between nodes are actions.
• A path in the state space is a sequence of states connected by a sequence of actions.
4. Goal test: determines whether a given state is a goal state.
5. Path cost: a function that assigns a numeric cost to each path.
– The step cost of taking action a to go from state x to state y is denoted by c(x, a, y).

• A solution to a problem is a path from the initial state to a goal state.
• Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions.
• The process of removing detail from a representation is called abstraction.
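The five components above map naturally onto code. The following is a minimal, illustrative Python sketch (the class and method names are my own, not from the text), using a fragment of the Romania road map that appears in the next example:

```python
# Illustrative sketch of the five problem components; names are mine.
ROMANIA = {  # a fragment of the Romania road map, with road distances
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
    "Timisoara": {"Arad": 118},
    "Zerind": {"Arad": 75},
}

class RouteProblem:
    def __init__(self, initial, goal, graph):
        self.initial, self.goal, self.graph = initial, goal, graph

    def actions(self, state):           # roads out of the current city
        return list(self.graph[state])

    def result(self, state, action):    # transition model: drive to that city
        return action

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, x, action, y):  # c(x, a, y): road distance
        return self.graph[x][y]

problem = RouteProblem("Arad", "Bucharest", ROMANIA)
```

A solution would then be any action sequence whose successive `result` states lead from `problem.initial` to a state passing `goal_test`.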
Example: Romania
• On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest
• Formulate goal:
– be in Bucharest
• Formulate problem:
– states: various cities
– actions: drive between cities
• Find solution:
– sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
EXAMPLE PROBLEMS
• The problem-solving approach has been applied to a vast array of task environments, which divide into toy and real-world problems.
• A toy problem is intended to illustrate or exercise various problem-solving methods. It can be given a concise, exact description and hence is usable by different researchers to compare the performance of algorithms.
• A real-world problem is one whose solutions people actually care about. Such problems tend not to have a single agreed-upon description.
EXAMPLE PROBLEMS
TOY PROBLEMS
(VACUUM WORLD STATE SPACE GRAPH)

1. Toy Problem: VACUUM WORLD
• States: a state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt. Thus there are 2 × 2² = 8 possible states.
• Initial state: any state can be designated as the initial state.
• Actions: Left, Right, and Suck.
• Transition model: the actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect.
• Goal test: checks whether all squares are clean.
• Path cost: each step costs 1, so the path cost is the number of steps in the path.
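The vacuum-world transition model can be sketched directly. In this hedged sketch, a state is a pair (agent location, set of dirty squares) with locations "A" (left) and "B" (right); the encoding is my own:

```python
# Sketch of the vacuum-world transition model; the state encoding
# (location, frozenset of dirty squares) is my own choice.
def result(state, action):
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)           # no effect if already in the leftmost square
    if action == "Right":
        return ("B", dirt)           # no effect if already in the rightmost square
    if action == "Suck":
        return (loc, dirt - {loc})   # no effect if the square is already clean
    raise ValueError(f"unknown action: {action}")

def goal_test(state):
    return not state[1]              # goal: no dirty squares remain
```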
2. Toy Problem: THE 8-PUZZLE
• The 8-puzzle belongs to the family of sliding-
block puzzles, which are often used as test
problems for new search algorithms in AI.

EXAMPLE: THE 8-PUZZLE
• The 8-puzzle, an instance of which is shown in the figure, consists of a 3 × 3 board with eight numbered tiles and a blank space.
• A tile adjacent to the blank space can slide into the space.
• The object is to reach a specified goal state, such as the one shown on the right of the figure.
• States: a state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
• Initial state: any state can be designated as the initial state.
• Actions: movements of the blank space: Left, Right, Up, or Down.
• Transition model: given a state and an action, this returns the resulting state; for example, if we apply Left to the start state in the figure above, the resulting state has the 5 and the blank switched.
• Goal test: checks whether the state matches the goal configuration shown in the figure.
• Path cost: each step costs 1, so the path cost is the number of steps in the path.
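The formulation above can be sketched as follows; here a state is a 9-tuple read row by row with 0 standing for the blank (an encoding of my own), and illegal moves simply leave the state unchanged:

```python
# Sketch of the 8-puzzle transition model; the 9-tuple state encoding is mine.
MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}  # index offsets

def result(state, action):
    b = state.index(0)                  # position of the blank
    row, col = divmod(b, 3)
    if (action == "Left" and col == 0) or (action == "Right" and col == 2) \
            or (action == "Up" and row == 0) or (action == "Down" and row == 2):
        return state                    # move off the board: no effect
    t = b + MOVES[action]               # square whose tile slides into the blank
    s = list(state)
    s[b], s[t] = s[t], s[b]
    return tuple(s)
```

Applying Left to the start state (7, 2, 4, 5, 0, 6, 8, 3, 1) swaps the 5 and the blank, as in the text's example.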
3. Toy Problem: 8-QUEENS PROBLEM
• The goal of the 8-
queens problem is to
place eight queens on a
chessboard such that
no queen attacks any
other. (A queen attacks
any piece in the same
row, column or
diagonal)
EXAMPLE: 8-QUEENS PROBLEM
• There are two main kinds of formulation.
• An incremental formulation involves
operators that augment the state description,
starting with an empty state; for the 8-queens
problem, this means that each action adds a queen to the state.
• A complete-state formulation starts with all 8
queens on the board and moves them around.

EXAMPLE: 8-QUEENS PROBLEM
• States: Any arrangement of 0 to 8 queens on
the board is a state.
• Initial State: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a
queen added to the specified square.
• Goal test: 8 queens are on the board, none
attacked.
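The incremental formulation can be sketched as below; a state records the column of the queen in each already-filled row (an encoding of my own), so each action adds one queen to the next row:

```python
# Sketch of the incremental 8-queens formulation; the row-by-row
# column-tuple state encoding is my own choice.
def attacked(state, col):
    """Would a queen placed in the next row, at column `col`, be attacked?"""
    row = len(state)
    return any(c == col or abs(c - col) == abs(r - row)
               for r, c in enumerate(state))

def actions(state):                     # legal squares in the next row
    return [c for c in range(8) if not attacked(state, c)]

def goal_test(state):                   # 8 queens placed, none attacking
    return len(state) == 8
```

Restricting `actions` to non-attacked squares already prunes the state space dramatically compared with allowing a queen on any empty square.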
REAL-WORLD PROBLEMS
• The route-finding problem is defined in terms
of specified locations and transitions along links
between them.
• Route-finding algorithms are used in a variety
of applications, such as routing in computer
networks, military operations planning, and
airline travel planning systems.
• These problems are typically complex to specify.
Example: the airline travel problem.
1. REAL WORLD PROBLEM: Airline Travel Problem
• States: each is represented by a location (e.g. An airport) and
the current time
• Initial state: This is specified by the user’s query.
• Actions: Take any flight from the current location, in any seat
class, leaving after the current time, leaving enough time for
within-airport transfer if needed.
• Transition model: The state resulting from taking a flight will
have the flight’s destination as the current location and the
flight’s arrival time as the current time.
• Goal test: are we at the final destination specified by the user?
• Path cost: monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, etc.
REAL WORLD PROBLEMS: OTHER EXAMPLE
PROBLEMS
2. Touring problems are closely related to the route-finding problem. Consider, for example, “Visit every city at least once, starting and ending in Bucharest.”
3. The Traveling Salesperson Problem (TSP) is a touring problem in which each city must be visited exactly once. The aim is to find the shortest tour.
4. The VLSI layout problem requires positioning millions of components and connections on a chip to minimize area, minimize circuit delays, minimize stray capacitances, and maximize manufacturing yield.
5. Robot navigation is a generalization of the route-finding problem described earlier. Rather than following a discrete set of routes, a robot can move in a continuous space with (in principle) an infinite set of possible actions and states.
6. Automatic assembly sequencing: the aim is to find an order in which to assemble the parts of some object. If the wrong order is chosen, there will be no way to add some part later in the sequence without undoing some of the work already done.

SEARCHING FOR SOLUTIONS
Having formulated some problems, we now need to solve
them.
• The figure shows some of the expansions in the search tree for finding a route from Arad to Bucharest.
• The root of the search tree is a search node corresponding to the initial state (Arad).
• The first step is to test whether this is a goal state or not.
• If it is not a goal state, we expand the current state; that is, we apply the successor function to the current state, thereby generating a new set of states.
• Continue choosing, testing, and expanding until either a solution is found or there are no more states to be expanded.
• The set of all leaf nodes available for
expansion at any given point is called
the frontier.
• The choice of which state to expand is
determined by the search strategy.
INFRASTRUCTURE FOR SEARCH ALGORITHMS
• For each node n of the tree, we have a structure that contains four components:
– STATE: the state in the state space to which the node
corresponds;
– PARENT-NODE: the node in the search tree that
generated this node;
– ACTION: the action that was applied to the parent to
generate the node;
– PATH-COST: the cost, traditionally denoted by g(n),
of the path from the initial state to the node, as
indicated by the parent pointers;
IMPLEMENTATION: STATES VS. NODES
• The collection of nodes that have been generated but not yet expanded is called the fringe.
• Each element of the fringe is a leaf
node, that is, a node with no
successors in the tree.
TREE SEARCH ALGORITHMS
function TREE-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    expand the chosen node, adding the resulting nodes to the frontier
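The pseudocode above translates almost line for line into Python. This is a hedged sketch, not the book's implementation: `problem` is assumed to expose `initial`, `actions`, `result`, and `goal_test`, and the frontier is kept as a FIFO queue of (state, action path) pairs.

```python
from collections import deque

def tree_search(problem):
    frontier = deque([(problem.initial, [])])    # initialize with the initial state
    while frontier:                              # empty frontier => failure
        state, path = frontier.popleft()         # choose a leaf node and remove it
        if problem.goal_test(state):
            return path                          # the corresponding solution
        for action in problem.actions(state):    # expand the chosen node
            child = problem.result(state, action)
            frontier.append((child, path + [action]))
    return None                                  # failure

class CountProblem:
    """Tiny demo problem: count from 0 up to 3 by adding 1 or 2."""
    initial = 0
    def actions(self, state): return ["+1", "+2"]
    def result(self, state, action): return state + int(action)
    def goal_test(self, state): return state == 3
```

With a FIFO frontier this behaves as breadth-first search; swapping in a different frontier discipline yields the other strategies discussed below.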
OPERATIONS ON THE QUEUE
• EMPTY?(queue) returns true only if there are no more elements in the queue.
• POP(queue) removes the first element of the queue and returns it.
• INSERT(element, queue) inserts an element and returns the resulting queue.
MEASURING PROBLEM-SOLVING
PERFORMANCE
The output of a problem-solving algorithm is either failure or a solution.
Evaluate an algorithm’s performance in four ways
• Completeness: Is the algorithm guaranteed to find a
solution when there is one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a
solution?
• Space complexity: How much memory is needed to
perform the search?
UNINFORMED SEARCH STRATEGIES
• Uninformed search (blind search) consists of several strategies:
 Breadth-first search
 Uniform-cost search
 Depth-first search
 Depth-limited search
 Iterative deepening search
 Bidirectional search
• These strategies have no additional information about states beyond what is provided in the problem definition.
• An uninformed search strategy can only generate successors and distinguish a goal state from a non-goal state.
BREADTH-FIRST SEARCH
• The root node is expanded first, then all the
successors of the root node, and their successors,
and so on .
• All the nodes are expanded at a given depth in the
search tree before any nodes at the next level are
expanded.
• Implemented by calling TREE-SEARCH with an empty frontier that is a first-in-first-out (FIFO) queue, ensuring that the nodes that are visited first will be expanded first.
• The FIFO queue puts all newly generated successors at the end of the queue, which means that shallow nodes are expanded before deeper nodes.
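Breadth-first search can be sketched with a FIFO queue of paths, exactly as described; I add a visited set (making this the graph-search variant) so the demo on a Romania fragment stays finite:

```python
from collections import deque

GRAPH = {  # an unweighted fragment of the Romania map
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad"], "Timisoara": ["Arad"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Bucharest": ["Fagaras", "Pitesti"],
}

def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # shallowest node first
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:       # don't generate a state twice
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```

BFS returns the path with the fewest steps, Arad to Bucharest via Sibiu and Fagaras, even though a cheaper route by road distance exists.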
Breadth-first search on a simple binary tree. At each stage, the node to be expanded next is indicated by a marker.
DEPTH-FIRST SEARCH
• Depth-first search always expands the deepest
node in the current fringe of the search tree.
• The search proceeds immediately to the deepest
level of the search tree, where the nodes have no
successors.
• This strategy can be implemented by Tree-Search
with a Last-in-First-out (LIFO) queue, also known as
a stack.
• Depth-first search has very modest memory
requirements.
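Depth-first search is the same sketch with the FIFO queue replaced by a LIFO stack (a plain Python list); states already on the current path are skipped so the demo terminates:

```python
GRAPH = {  # an unweighted fragment of the Romania map
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad"], "Timisoara": ["Arad"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Bucharest": ["Fagaras", "Pitesti"],
}

def depth_first_search(graph, start, goal):
    frontier = [[start]]                 # a list used as a LIFO stack
    while frontier:
        path = frontier.pop()            # deepest node first
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in path:          # avoid cycles along the current path
                frontier.append(path + [nxt])
    return None
```

With this neighbor ordering it dives through Rimnicu Vilcea and Pitesti, returning a deeper path than breadth-first search would.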
• Depth-first search on a binary tree. Nodes that have been expanded and have no descendants in the fringe can be removed from memory; these are shown in black. Nodes at depth 3 are assumed to have no successors, and M is the only goal node.
• A variant of depth-first search called backtracking search uses still less memory.
ITERATIVE DEEPENING DEPTH-FIRST SEARCH
• Iterative deepening search (or iterative deepening depth-first search) is a general strategy, used in combination with depth-first search, that finds the best depth limit.
• It does this by gradually increasing the limit: first 0, then 1, then 2, and so on, until a goal is found.
• This will occur when the depth limit reaches d, the depth of the shallowest goal node.
• Iterative deepening combines the benefits of depth-first and breadth-first search.
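Iterative deepening can be sketched as a depth-limited depth-first search called with limit 0, then 1, then 2, and so on (the graph and helper names are mine):

```python
GRAPH = {  # an unweighted fragment of the Romania map
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad"], "Timisoara": ["Arad"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Bucharest": ["Fagaras", "Pitesti"],
}

def depth_limited(graph, node, goal, limit, path=None):
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None                      # cutoff: treat node as having no successors
    for nxt in graph[node]:
        if nxt not in path:
            found = depth_limited(graph, nxt, goal, limit - 1, path)
            if found:
                return found
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):   # limit 0, then 1, then 2, ...
        found = depth_limited(graph, start, goal, limit)
        if found:
            return found
    return None
```

Like breadth-first search it finds the shallowest goal (here at depth 3), while using only depth-first amounts of memory.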
BIDIRECTIONAL SEARCH
• Run two simultaneous searches: one forward from the initial state, and the other backward from the goal, stopping when the two searches meet in the middle.
• The motivation is that b^(d/2) + b^(d/2) is much less than b^d.
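A quick calculation shows why two half-depth searches are so much cheaper. With branching factor b = 10 and solution depth d = 6, one full search touches on the order of b^d nodes, while two searches of depth d/2 touch only about 2·b^(d/2):

```python
b, d = 10, 6
full = b ** d                 # one search all the way to depth d
halves = 2 * b ** (d // 2)    # two searches meeting in the middle
print(full, halves)           # 1000000 2000
```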
COMPARING UNINFORMED SEARCH STRATEGIES
• b: the branching factor, i.e., the maximum number of successors of any node.
• d: the depth of the shallowest goal node (i.e., the number of steps along the path from the root).
• m: the maximum length of any path in the state space.
• C*: the cost of the optimal solution, assuming that every action costs at least ε.
INFORMED (HEURISTIC) SEARCH
Informed search strategies use problem-specific knowledge beyond the definition of the problem itself, and can find solutions more efficiently than an uninformed strategy.
• Greedy best-first search
• A* search
• Heuristics

INFORMED SEARCH
BEST –FIRST SEARCH
• The general approach we consider is called best-first search.
• It is an instance of the general TREE-SEARCH or GRAPH-SEARCH algorithm in which a node is selected for expansion based on an evaluation function, f(n). The choice of f(n) determines the search strategy.
• The node with the lowest evaluation is selected for expansion.

HEURISTIC FUNCTION
• A key component of these algorithms is a heuristic function, denoted h(n):
h(n) = estimated cost of the cheapest path from node n to a goal node.
GREEDY BEST-FIRST SEARCH

• Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the heuristic function: f(n) = h(n).
• It resembles depth-first search in that it prefers to follow a single path all the way to the goal, but it will back up when it hits a dead end.
• It is not optimal, and it is incomplete.
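A sketch of greedy best-first search on the Romania fragment, using straight-line distance to Bucharest as h(n) (the h values are the standard ones for this map); nodes are ordered by h alone:

```python
import heapq

ROMANIA = {  # road distances on a fragment of the Romania map
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {}, "Timisoara": {}, "Zerind": {},
}
SLD = {  # straight-line distance to Bucharest, used as h(n)
    "Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
    "Pitesti": 100, "Bucharest": 0, "Timisoara": 329, "Zerind": 374,
}

def greedy_best_first(graph, h, start, goal):
    frontier = [(h[start], [start])]         # ordered by f(n) = h(n) only
    while frontier:
        _, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph[state]:
            if nxt not in path:
                heapq.heappush(frontier, (h[nxt], path + [nxt]))
    return None
```

It heads straight toward Bucharest through Sibiu and Fagaras, a quick but suboptimal route (cost 450 versus the optimal 418).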

A* SEARCH: MINIMIZING THE TOTAL
ESTIMATED SOLUTION COST
• The most widely known form of best-first search is called A* search.
• It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the estimated cost to get from the node to the goal:
f(n) = g(n) + h(n)
• Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of the cheapest path from n to the goal,
f(n) = estimated cost of the cheapest solution through n.
• A* search is both complete and optimal, provided that h(n) is an admissible heuristic.
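The same Romania fragment with f(n) = g(n) + h(n). This hedged sketch keeps (f, g, path) triples on a priority queue; the straight-line-distance h values are the standard ones for this map:

```python
import heapq

ROMANIA = {  # road distances on a fragment of the Romania map
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {}, "Timisoara": {}, "Zerind": {},
}
SLD = {  # straight-line distance to Bucharest, used as h(n)
    "Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
    "Pitesti": 100, "Bucharest": 0, "Timisoara": 329, "Zerind": 374,
}

def astar(graph, h, start, goal):
    frontier = [(h[start], 0, [start])]          # (f = g + h, g, path)
    while frontier:
        f, g, path = heapq.heappop(frontier)     # lowest f(n) first
        state = path[-1]
        if state == goal:
            return path, g
        for nxt, cost in graph[state].items():
            if nxt not in path:
                g2 = g + cost
                heapq.heappush(frontier, (g2 + h[nxt], g2, path + [nxt]))
    return None, float("inf")
```

Unlike greedy best-first search, A* finds the optimal route through Rimnicu Vilcea and Pitesti, at cost 418.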
HEURISTIC FUNCTIONS
• The 8-puzzle was one of the earliest heuristic search
problems.
• The object of the puzzle is to slide the tiles
horizontally or vertically into the empty space until
the configuration matches the goal configuration.
The average solution cost is about 22 steps.

HEURISTIC FUNCTIONS
• h1 = the number of misplaced tiles.
• h2 = the sum of the distances of the tiles from
their goal positions.
• Because tiles cannot move along diagonals, the
distance we will count is the sum of the
horizontal and vertical distances. This is
sometimes called the city block distance or
Manhattan distance.
• A problem with fewer restrictions on the
actions is called a relaxed problem.
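The two heuristics can be sketched directly; a state is again a 9-tuple with 0 for the blank, and I take the goal to be the blank in the top-left corner with tiles 1 through 8 in order (this goal choice is mine):

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # blank first, tiles in order

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    """Sum of Manhattan (city-block) distances of tiles to goal squares."""
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        g = GOAL.index(t)            # goal square of tile t
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total
```

For the start state (7, 2, 4, 5, 0, 6, 8, 3, 1), h1 = 8 and h2 = 18; since h2(n) ≥ h1(n) for every state and both are admissible, h2 dominates h1 and is the better heuristic.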
LEARNING HEURISTICS FROM EXPERIENCE
• A heuristic function h(n) is supposed to estimate the cost of a solution beginning from the state at node n.
• One way to obtain such a function is to learn it from experience.
• “Experience” here means solving lots of 8-puzzles, for instance.
LOCAL SEARCH ALGORITHMS AND
OPTIMIZATION PROBLEMS
• Local search algorithms operate using a single current state (rather than multiple paths) and move only to neighbors of that state. They are not systematic.
• Two key advantages:
1) They use very little memory.
2) They can find solutions in large or infinite (continuous) state spaces.

HILL-CLIMBING SEARCH
• It is simply a loop that continually moves in the direction of increasing value, that is, uphill. It terminates when it reaches a “peak” where no neighbor has a higher value.
• The algorithm does not maintain a search tree, so the
current node data structure need only record the state
and its objective function value.
• Hill-climbing does not look ahead beyond the
immediate neighbors of the current state.
• Hill climbing is sometimes called greedy local search
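The loop can be sketched generically: evaluate the neighbors of the current state and move to the best one for as long as it improves the objective (the function names and the one-dimensional demo are mine):

```python
def hill_climb(state, value, neighbors):
    """Move to the highest-valued neighbor until no neighbor is higher."""
    while True:
        best = max(neighbors(state), key=value)
        if value(best) <= value(state):
            return state             # a peak: no neighbor is higher
        state = best

# Demo: climb toward the peak of f(x) = -(x - 7)^2 in unit steps.
peak = hill_climb(0, lambda x: -(x - 7) ** 2, lambda x: [x - 1, x + 1])
```

On this smooth objective the loop reaches the global peak at x = 7; on objectives with local maxima, plateaus, or ridges it can stop short, which is the standard failure mode of hill climbing.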
GENETIC ALGORITHMS
• A genetic algorithm (GA) is a variant of stochastic beam search in which successor states are generated by combining two parent states, rather than by modifying a single state.
• GAs begin with a set of k randomly generated states, called the population.

ONLINE SEARCH AGENTS
• An online search agent operates by interleaving
computation and action: first it takes an action, then it
observes the environment and computes the next
action.
• Online search is a necessary idea for an exploration
problem, where the states and actions are unknown to
the agent.
• An agent in this state of ignorance must use its actions
as experiments to determine what to do next, and
hence must interleave computation and action.
• Example: a robot that is placed in a new building and must explore it to build a map that it can use for getting from A to B.
ONLINE LOCAL SEARCH
• Using a random walk to explore the environment.
• A random walk simply selects at random one of the
available actions from the current state; preference
can be given to actions that have not yet been tried.
• A random walk will eventually find a goal or complete its exploration, provided that the space is finite.
• The process can, however, be very slow.
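A random-walk explorer can be sketched as below; states are graph nodes, there is one action per neighbor, and untried actions are preferred as the text suggests (helper names and the corridor demo are mine):

```python
import random

def random_walk(graph, start, goal, rng, max_steps=100_000):
    """Walk randomly until the goal is reached; returns the step count."""
    state, tried = start, {}
    for step in range(max_steps):
        if state == goal:
            return step
        options = graph[state]
        untried = [a for a in options if a not in tried.setdefault(state, set())]
        action = rng.choice(untried or options)  # prefer untried actions
        tried[state].add(action)
        state = action                           # the action is "move there"
    return None

# Demo: a short corridor of squares 0-1-2-3.
LINE = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
steps = random_walk(LINE, 0, 3, random.Random(42))
```

On this tiny corridor the walk succeeds quickly, but the expected number of steps grows rapidly with corridor length, which is exactly the slowness the text warns about.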

• The figure shows an environment in which a random walk will take exponentially many steps to find the goal, because at each step backward progress is twice as likely as forward progress.
• An agent that instead stores and updates a current best estimate of the cost to reach the goal from each visited state implements a scheme called learning real-time A* (LRTA*).
• Optimism under uncertainty encourages the agent to explore new, possibly promising paths.
• An LRTA* agent is guaranteed to find a goal in any finite, safely explorable environment.
UNIFORM COST SEARCH
• Uniform-cost search expands the node n with the lowest path cost g(n).
• Uniform-cost search does not care about the number of steps a path has, but only about their total cost.
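Uniform-cost search is best-first search ordered by g(n) alone. A hedged sketch on the weighted Romania fragment, with an explored set so each city is expanded at most once:

```python
import heapq

ROMANIA = {  # road distances on a fragment of the Romania map
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {}, "Timisoara": {}, "Zerind": {},
}

def uniform_cost_search(graph, start, goal):
    frontier = [(0, [start])]                # priority queue ordered by g(n)
    explored = set()
    while frontier:
        g, path = heapq.heappop(frontier)    # cheapest path so far
        state = path[-1]
        if state == goal:
            return path, g
        if state in explored:
            continue                         # a cheaper path reached this state first
        explored.add(state)
        for nxt, cost in graph[state].items():
            if nxt not in explored:
                heapq.heappush(frontier, (g + cost, path + [nxt]))
    return None, float("inf")
```

It returns the cheapest route, Arad to Bucharest via Rimnicu Vilcea and Pitesti at cost 418, even though that route has more steps than the one through Fagaras.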

DEPTH-LIMITED SEARCH
• The problem of unbounded trees can be alleviated by supplying a predetermined depth limit ℓ.
• That is, nodes at depth ℓ are treated as if they have no successors. This approach is called depth-limited search.
• The depth limit solves the infinite-path problem.
