
UNIT 2

PROBLEM
SOLVING

1
Problem-Solving Agents
• Intelligent agents are supposed to maximize their performance measure.
• This is achieved if the agent can adopt a goal and aim to satisfy it.
• Goals help to organize an agent's behavior by limiting the objectives it is trying to achieve.
• Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving.
• Problem formulation is the process of deciding what actions and states to consider in order to achieve a given goal.

2
Problem-Solving Agents
• The process of looking for a sequence of actions that reaches the
goal is called Search.
• A search algorithm takes a problem as input and returns a
solution in the form of an action sequence.
• Once a solution is found, the actions it recommends can be
carried out.
• This is called the execution phase.
• Thus it forms a simple Formulate-Search-Execute design for the
agent

3
Problem-Solving Agents
• A simple problem-solving
agent.
• It first formulates a goal and a
problem, then searches for a
sequence of actions that
would solve the problem, and
then executes the actions one
at a time.
• When this is complete, it
formulates another goal and
starts over.

4
Problem-Solving Agents
• After formulating a goal and a problem to solve, the agent
calls a search procedure to solve it.
• It then uses the solution to guide its actions, and does
whatever the solution recommends as the next thing to do.
• Typically, the agent carries out the first action of the sequence and then removes that step from the sequence.
• Once the solution has been executed, the agent will
formulate a new goal.

5
Problem-Solving Agents
• When the agent is executing the solution sequence it ignores its
percepts while choosing an action because it knows in advance
what they will be.
• An agent that carries out its plans with its eyes closed must be quite certain of what is going on.
• This is called an open-loop system, because ignoring the percepts breaks the loop between agent and environment.

6
Well-defined problems and solutions
• A problem can be defined formally by five components:
• Initial state: the state that the agent starts in; it is the starting point from which the agent begins working towards the specified goal.
• Actions: a description of the actions available to the agent; for a given state, this component specifies the set of actions that can be executed in that state.

7
Well-defined problems and solutions
• Transition model: a description of what each action does; given a state and an action, it returns the state that results from performing that action in that state.
• Goal test: determines whether a given state is a goal state; once the goal is reached, the search stops and the cost of achieving it can be determined.
• Path cost: a numeric cost assigned to each path from the initial state to a goal.
• It reflects the cost of the resources (hardware, software, and human effort) required to achieve the goal.
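• To make the five components concrete, the sketch below (an illustration only; the class and method names are not taken from any particular library) shows how they map onto a problem definition in Python:

```python
# Minimal sketch of the five components of a well-defined problem.
class Problem:
    def __init__(self, initial, goal=None):
        self.initial = initial      # initial state
        self.goal = goal            # goal state (or goal description)

    def actions(self, state):
        """Actions applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state that results from `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if `state` is a goal state."""
        return state == self.goal

    def path_cost(self, cost_so_far, state, action, next_state):
        """Cost of a path reaching `next_state`; here each step costs 1."""
        return cost_so_far + 1
```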

8
Example Problems
• The problem-solving approach has been applied to a vast array of task environments.
• One major approach is to distinguish between toy problems and real-world problems.
• A toy problem is intended to illustrate various problem-solving methods.
• It can be given a concise, exact description and hence is usable by different researchers to compare the performance of algorithms.
• A real-world problem is one whose solutions people actually care about.
• Such problems do not have a single agreed-upon description.

9
Example Problems

10
Toy problems

11
Toy problems
• The vacuum world problem is formulated as follows:
• States: The state is determined by both the agent location and the dirt locations.
• The agent is in one of two locations, each of which might or might not contain dirt.
• Thus, there are 2 × 2² = 8 possible world states.
• A larger environment with n locations has n · 2ⁿ states.
• Initial state: Any state can be designated as the initial state
• Actions: In this simple environment, each state has just three
actions: Left, Right, and Suck.
• Larger environments might also include Up and Down

12
Toy problems
• Transition model: The actions have their expected effects,
except that moving Left in the leftmost square, moving Right
in the rightmost square, and Sucking in a clean square have no
effect.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number
of steps in the path.
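• As a rough illustration of this formulation (hypothetical function names; a state is written as the agent's location plus a dirt flag per square):

```python
from itertools import product

# Two-location vacuum world: a state is (agent_location, dirt_status),
# where dirt_status is a pair of booleans (dirt in A, dirt in B).
LOCATIONS = ('A', 'B')

def all_states():
    # 2 agent locations x 2^2 dirt configurations = 8 world states
    return [(loc, dirt) for loc in LOCATIONS
            for dirt in product((False, True), repeat=2)]

def actions(state):
    return ('Left', 'Right', 'Suck')

def result(state, action):
    """Transition model: Left in A, Right in B, and Suck on a clean
    square have no effect."""
    loc, dirt = state
    dirt = list(dirt)
    if action == 'Left':
        loc = 'A'
    elif action == 'Right':
        loc = 'B'
    else:  # Suck
        dirt[LOCATIONS.index(loc)] = False
    return (loc, tuple(dirt))

def goal_test(state):
    return not any(state[1])       # all squares clean

print(len(all_states()))           # -> 8
```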

13
8-puzzle problem
• The 8-puzzle, consists of a 3×3 board with eight numbered tiles
and a blank space.
• A tile adjacent to the blank space can slide into the space.
• The object is to reach a specified goal state.

14
8-puzzle problem
• The standard formulation is as follows:
• States: A state description specifies the location of each of the
eight tiles and the blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given goal can be reached from exactly half of the possible initial states.
• Actions: The simplest formulation defines the actions as
movements of the blank space Left, Right, Up, or Down.

15
8-puzzle problem
• Transition model: Given a state and action, this returns the
resulting state.
• Goal test: This checks whether the state matches the goal
configuration
• Path cost: Each step costs 1, so the path cost is the number
of steps in the path.
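• A minimal sketch of this formulation (illustrative; a state is written here as a 9-tuple listing the squares row by row, with 0 standing for the blank):

```python
# 8-puzzle sketch: state is a 9-tuple, indices 0..8 read row by row, 0 = blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)    # one possible goal configuration

def actions(state):
    """Legal movements of the blank in the given state."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append('Left')
    if col < 2: moves.append('Right')
    if row > 0: moves.append('Up')
    if row < 2: moves.append('Down')
    return moves

def result(state, action):
    """Transition model: swap the blank with the adjacent tile."""
    i = state.index(0)
    delta = {'Left': -1, 'Right': 1, 'Up': -3, 'Down': 3}[action]
    j = i + delta
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL
```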

16
8-queens problem
• The goal of the 8-queens problem is to place eight queens on a
chessboard such that no queen attacks any other.
• A queen attacks any piece in the same row, column or diagonal.

17
8-queens problem
• Two main kinds of formulation exist for this problem.
• An incremental formulation involves operators that augment the state description, starting with an empty state.
• For the 8-queens problem, this means that each action adds a single queen to the state.
• A complete-state formulation starts with all 8 queens on the board
and moves them around.
• In either case, the path cost is of no interest because only the final
state counts.

18
8-queens problem
• The first incremental formulation is as follows:
• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the
specified square.
• Goal test: 8 queens are on the board, none attacked.

19
8-queens problem
• A better version of this formulation prohibits placing a queen in any square that is already attacked:
• States: All possible arrangements of n queens (0 ≤ n ≤ 8), one per
column in the leftmost n columns, with no queen attacking
another.
• Actions: Add a queen to any square in the leftmost empty column
such that it is not attacked by any other queen.
• This formulation reduces the 8-queens state space from 1.8 × 10¹⁴ to 2,057, and solutions are easy to find.
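• As a sketch of this improved incremental formulation (illustrative code; a state is written as a tuple of row numbers, one per filled column), with a simple enumeration confirming that solutions are easy to find:

```python
# 8-queens, improved incremental formulation:
# state = tuple of row indices; state[c] is the queen's row in column c.
N = 8

def attacked(state, row):
    """Would a queen placed at (next column, row) be attacked?"""
    col = len(state)
    for c, r in enumerate(state):
        if r == row or abs(r - row) == abs(c - col):   # same row or diagonal
            return True
    return False

def actions(state):
    """Non-attacked rows in the leftmost empty column."""
    return [row for row in range(N) if not attacked(state, row)]

def result(state, row):
    return state + (row,)

def goal_test(state):
    return len(state) == N

def solutions(state=()):
    """Simple depth-first enumeration of complete, non-attacking placements."""
    if goal_test(state):
        yield state
    else:
        for row in actions(state):
            yield from solutions(result(state, row))

print(sum(1 for _ in solutions()))   # -> 92 solutions
```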

20
Toy problems
• This toy problem was devised by Donald Knuth (1964) and
illustrates how infinite state spaces can arise.
• Knuth conjectured that, starting with the number 4, a sequence of factorial, square root, and floor operations can reach any desired positive integer.
• For example, we can reach 5 from 4 as follows:
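• One such expression applies the factorial once, then the square root five times, then the floor: ⌊√√√√√(4!)!⌋ = 5, since (4!)! = 24! = 620,448,401,733,239,439,360,000.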

21
Toy problems
• The problem definition is very simple:
• States: Positive numbers.
• Initial state: 4.
• Actions: Apply factorial, square root, or floor operation (factorial
for integers only).
• Transition model: As given by the mathematical definitions of the
operations.
• Goal test: State is the desired positive integer.

22
Toy problems
• There is no bound on how large a number might be
constructed in the process of reaching a given target—
• for example, the number 620,448,401,733,239,439,360,000
is generated in the expression for 5
• Thus the state space for this problem is infinite.
• Such state spaces arise frequently in tasks involving the
generation of mathematical expressions, circuits, proofs,
programs, and other recursively defined objects.

23
Real-world problems
• 1. The airline travel problem, of the kind solved by a travel-planning Web site:
• States: Each state obviously includes a location (e.g., an airport) and the current time.
• Because the cost of an action (a flight segment) may depend on previous segments, their fare bases, and their status as domestic or international, the state must record extra information about these “historical” aspects.
• Initial state: This is specified by the user’s query.
• Actions: Take any flight from the current location, in any seat
class, leaving after the current time, leaving enough time for
within-airport transfer if needed

24
Real-world problems
• Transition model: The state resulting from taking a flight will have
the flight’s destination as the current location and the flight’s
arrival time as the current time.
• Goal test: Are we at the final destination specified by the user?
• Path cost: This depends on monetary cost, waiting time, flight
time, customs and immigration procedures, seat quality, time of
day, type of airplane, frequent-flyer mileage awards, and so on.

25
Real-world problems
• 2. The traveling salesperson problem (TSP) is a touring problem in
which each city must be visited exactly once. The aim is to find the
shortest tour.
• The problem is known to be NP-hard, but an enormous amount of
effort has been expended to improve the capabilities of TSP
algorithms.
• In addition to planning trips for traveling salespersons, these
algorithms have been used for tasks such as planning movements
of automatic circuit-board drills and of stocking machines on shop
floors.

26
Real-world problems
• 3. Robot Navigation is a generalization of the route-finding
problem.
• A robot can move in a continuous space with an infinite set of
possible actions and states.
• For a circular robot moving on a flat surface, the space is
essentially two-dimensional.
• When the robot has arms and legs or wheels that must also be
controlled, the search space becomes many-dimensional.
• In addition to the complexity of the problem, real robots must
also deal with errors in their sensor readings and motor controls

27
Real-world problems
• 4. Automatic assembly sequencing of complex objects by a robot:-
• It was first demonstrated by FREDDY (Michie, 1972).
• It deals with the assembly of intricate objects such as electric
motors.
• In assembly problems, the aim is to find an order in which to
assemble the parts of some object.
• If the wrong order is chosen, there will be no way to add some
part later in the sequence without undoing some of the work
already done.

28
Real-world problems
• Checking a step in the sequence is a difficult search problem and
closely related to robot navigation.
• Thus, the generation of legal actions is the expensive part of
assembly sequencing.
• Another important assembly problem is protein design, in which
the goal is to find a sequence of amino acids that will fold into a
three-dimensional protein with the right properties to cure some
disease.

29
Real-world problems
• 5. A VLSI layout problem:- It requires positioning millions of
components and connections on a chip to minimize area,
minimize circuit delays, minimize stray capacitances, and
maximize manufacturing yield.
• The layout problem comes after the logical design phase and is
usually split into two parts:
• Cell layout and Channel routing.
• In cell layout, the primitive components of the circuit are grouped
into cells, each of which performs some recognized function.

30
Real-world problems
• Each cell has a fixed footprint (size and shape) and requires a
certain number of connections to each of the other cells.
• The aim is to place the cells on the chip so that they do not overlap and so that there is room for the connecting wires to be placed between the cells.
• Channel routing finds a specific route for each wire through the
gaps between the cells.

31
Searching for Solutions
• A solution is an action sequence, so search algorithms work by
considering various possible action sequences.
• The possible action sequences starting at the initial state form a
search tree with the initial state at the root.
• The branches are actions and the nodes correspond to states in
the state space of the problem

32
Searching for Solutions
• The root node of the tree corresponds to the initial state.
• The first step is to apply the goal test to the root node.
• The search then considers taking various actions.
• This is done by expanding the current state, that is, applying each legal action to the current state, thereby generating a new set of states.
• The search then picks one of these new states, expands it in the same way, and repeats the procedure for the remaining states, until a solution is found.
• A node in the tree with no children is called a leaf node.
• The set of all leaf nodes available for expansion at any given point is called the frontier.

33
Searching for Solutions
• The process of expanding nodes on the frontier continues until either a solution is found or there are no more states to expand.
• In general, all search algorithms share this basic structure.
• They vary primarily according to how they choose which state to expand next; this choice is called the search strategy.
• In a tree, if a path leads from a state back to that same state again, the state is called a repeated state and the path is called a loopy path.
• Loopy paths make the search tree infinite, because there is no limit to how often one can traverse a loop.
• Loopy paths are a special case of the more general concept of redundant paths, which exist whenever there is more than one way to get from one state to another.
34
Searching for Solutions

Informal description of general tree-search and graph-search algorithms
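• Since the figure itself is not reproduced here, the sketch below (Python-style, with a hypothetical expand helper, and returning a goal state rather than a full action sequence for brevity) mirrors that informal description; the only difference between the two versions is the explored set used by graph search:

```python
def expand(problem, state):
    """Successor states of `state` under the problem's transition model."""
    return [problem.result(state, a) for a in problem.actions(state)]

def tree_search(problem, frontier):
    """The frontier may contain repeated states; the pop() order is the strategy."""
    frontier.append(problem.initial)
    while frontier:
        state = frontier.pop()               # choose a leaf node for expansion
        if problem.goal_test(state):
            return state
        frontier.extend(expand(problem, state))
    return None                              # failure

def graph_search(problem, frontier):
    """Identical, except expanded states go into an explored set and are
    never expanded (or re-added to the frontier) again."""
    frontier.append(problem.initial)
    explored = set()
    while frontier:
        state = frontier.pop()
        if problem.goal_test(state):
            return state
        if state not in explored:
            explored.add(state)
            frontier.extend(s for s in expand(problem, state)
                            if s not in explored)
    return None
```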


35
Searching for Solutions
• In some cases, redundant paths are unavoidable.
• This includes all problems where the actions are reversible, such
as route-finding problems and sliding-block puzzles.
• Route finding on a rectangular grid is a particularly important
example in computer games.
• In such a grid, each state has four successors, so a search tree of depth d that includes repeated states has 4ᵈ leaves.
• Thus, following redundant paths can cause a tractable problem to
become intractable.

36
Searching for Solutions
• Algorithms that forget their history are doomed to repeat it.
• The way to avoid exploring redundant paths is to remember
where one has been.
• To do this, a special data structure called the explored set (or
closed list) is used, which remembers every expanded node.
• Newly generated nodes that match previously generated nodes, that is, nodes already in the explored set or the frontier, can be discarded instead of being added to the frontier.
• The frontier then separates the state-space graph into the
explored region and the unexplored region, so that every path
from the initial state to an unexplored state has to pass through a
state in the frontier.

37
Searching for Solutions

The frontier (white nodes) always separates the explored region of the state space
(black nodes) from the unexplored region (gray nodes).
(a) Only the root has been expanded. (b) One leaf node has been expanded. (c) The remaining successors of the root have been expanded in clockwise order.
38
Infrastructure for search algorithms
• Search algorithms require a data structure to keep track of the
search tree that is being constructed.
• Each node n of the tree contains four components:
• n.STATE: the state in the state space to which the node
corresponds.
• n.PARENT: the node in the search tree that generated this node.
• n.ACTION: the action that was applied to the parent to generate
the node.
• n.PATH-COST: the cost of the path from the initial state to the
node.
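• A direct rendering of this node structure as code (a sketch; the child_node helper and the solution method are illustrative additions):

```python
class Node:
    """A node in the search tree."""
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # n.STATE
        self.parent = parent        # n.PARENT
        self.action = action        # n.ACTION
        self.path_cost = path_cost  # n.PATH-COST

    def solution(self):
        """The sequence of actions from the root to this node."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

def child_node(problem, parent, action):
    """Build the child node reached from `parent` by `action`."""
    state = problem.result(parent.state, action)
    cost = problem.path_cost(parent.path_cost, parent.state, action, state)
    return Node(state, parent, action, cost)
```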

39
Infrastructure for search algorithms
• The frontier needs to be stored in such a way that the search
algorithm can easily choose the next node to expand according to
its preferred strategy.
• The data structure for this is a queue.
• The operations on a queue are as follows:
• EMPTY?(queue) returns true only if there are no more elements in
the queue.
• POP(queue) removes the first element of the queue and returns
it.
• INSERT(element, queue) inserts an element and returns the
resulting queue.
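• For illustration, the three queue disciplines used by the strategies later in this unit can be realized with Python's standard collections.deque, a plain list, and heapq (an implementation choice, not something the text prescribes):

```python
from collections import deque
import heapq

# FIFO queue (breadth-first search): pop the oldest element.
fifo = deque()
fifo.append('a'); fifo.append('b')
print(fifo.popleft())        # -> 'a'

# LIFO queue / stack (depth-first search): pop the newest element.
lifo = []
lifo.append('a'); lifo.append('b')
print(lifo.pop())            # -> 'b'

# Priority queue (uniform-cost search): pop the element with lowest cost.
pq = []
heapq.heappush(pq, (5, 'a')); heapq.heappush(pq, (2, 'b'))
print(heapq.heappop(pq))     # -> (2, 'b')
```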

40
Measuring problem-solving performance
• An algorithm’s performance can be evaluated in four ways:
• Completeness: is the algorithm guaranteed to find a solution when one exists?
• Optimality: does the strategy find the optimal solution?
• Time complexity: how long does it take to find a solution?
• Space complexity: how much memory is needed to perform the search?

41
Measuring problem-solving performance
• Time and space complexity are always considered with respect to
some measure of the problem difficulty.
• In AI, the graph is represented implicitly by the initial state,
actions, and transition model and is frequently infinite.
• Complexity is expressed in terms of three quantities:
• b:- the branching factor or maximum number of successors of any
node;
• d:- the depth of the deepest goal (i.e., the number of steps along
the path from the root);
• m:- the maximum length of any path in the state space.
• Time is measured in terms of the number of nodes generated
during the search, and space in terms of the maximum number of
nodes stored in memory.

42
Measuring problem-solving performance
• To assess the effectiveness of a search algorithm, the search cost is considered, which typically depends on the time complexity but can also include memory usage.
• Alternatively, the total cost can be used, which combines the search cost and the path cost of the solution found.

43
Uninformed Search Strategies
• There are two types of search strategies:
• 1. Uninformed search (blind search): these strategies have no additional information about states beyond that provided in the problem definition.
• They can generate successors and distinguish a goal state from a non-goal state.
• All search strategies are distinguished by the order in which nodes are expanded.
• 2. Informed search (heuristic search strategies): these strategies know whether one non-goal state is more promising than another.

44
Uninformed Search Strategies
• Breadth-first search:-
• It is a simple strategy in which the root node is expanded first, all
the successors of the root node are expanded next, then their
successors, and so on.
• In general, all the nodes are expanded at a given depth in the
search tree before any nodes at the next level are expanded.
• Breadth-first search is an instance of the general graph-search
algorithm in which the shallowest unexpanded node is chosen for
expansion.
• This is achieved by using a FIFO queue for the frontier.
• Thus, new nodes go to the back of the queue, and old nodes,
which are shallower than the new nodes, get expanded first.

45
Breadth-first search
• The goal test is applied to each node when it is generated rather
than when it is selected for expansion.
• The algorithm, doing the graph search, discards any new path to a
state already in the frontier or explored set.
• It is easy to see that any such path must be at least as deep as the
one already found.
• Thus, breadth-first search always has the shallowest path to every
node on the frontier.
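• A compact sketch of breadth-first graph search along these lines (illustrative; it assumes a problem object with initial, actions, result, and goal_test, applies the goal test when a node is generated, and returns the goal state rather than the full action sequence):

```python
from collections import deque

def breadth_first_search(problem):
    state = problem.initial
    if problem.goal_test(state):
        return state
    frontier = deque([state])          # FIFO queue
    explored = set()
    while frontier:
        state = frontier.popleft()     # shallowest unexpanded node
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored and child not in frontier:
                if problem.goal_test(child):   # goal test at generation time
                    return child
                frontier.append(child)
    return None                        # failure
```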

46
Breadth-first search

47
Breadth-first search

48
Breadth-first search
• The performance evaluation of BFS w.r.t. the four major components is as follows:
• 1. Completeness: BFS is complete - if the shallowest goal node is at some finite depth d, breadth-first search will eventually find it after generating all shallower nodes.
• As soon as a goal node is generated, it is known to be the shallowest goal node, because all shallower nodes must have been generated already and failed the goal test.
• 2. Optimality: the shallowest goal node is not necessarily the optimal one.
• BFS is optimal if the path cost is a nondecreasing function of the depth of the node; the most common such case is when all actions have the same cost.

49
Breadth-first search
• 3. Time Complexity: if the goal test is applied to each node when it is generated, the total number of nodes generated is b + b² + ... + bᵈ = O(bᵈ).
• If the algorithm were instead to apply the goal test to nodes when selected for expansion, the whole layer of nodes at depth d would be expanded before the goal was detected, and the time complexity would be O(bᵈ⁺¹).
• 4. Space Complexity: for any kind of graph search, which stores every expanded node in the explored set, the space complexity is always within a factor of b of the time complexity.
• For breadth-first graph search in particular, every node generated remains in memory.
• There will be O(bᵈ⁻¹) nodes in the explored set and O(bᵈ) nodes in the frontier, so the space complexity is O(bᵈ), i.e., it is dominated by the size of the frontier.

50
Breadth-first search
• Disadvantages of BFS:
• 1. The memory requirements are a bigger problem for breadth-
first search than is the execution time.
• 2. Exponential-complexity search problems cannot be solved by
uninformed methods for any but the smallest instances

51
Uniform-Cost search
• When all step costs are equal, breadth-first search is optimal
because it always expands the shallowest unexpanded node.
• By a simple extension, we can find an algorithm that is optimal
with any step-cost function.
• Instead of expanding the shallowest node, uniform-cost search
expands the node n with the lowest path cost g(n).
• This is done by storing the frontier as a priority queue ordered
by g.

52
Uniform-Cost search
• There are two other significant differences from breadth-first
search.
• 1. The goal test is applied to a node when it is selected for
expansion rather than when it is first generated.
• The reason is that the first goal node that is generated may be
on a suboptimal path.
• 2. A test is added in case a better path is found to a node
currently on the frontier
• Uniform-cost search expands nodes in order of their optimal
path cost.
• Hence, the first goal node selected for expansion must be the
optimal solution.
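• A sketch of uniform-cost search using a heapq-based priority queue ordered by g (illustrative; the step_cost method name is an assumption, and the counter is only a tiebreaker so that states themselves never need to be comparable):

```python
import heapq
from itertools import count

def uniform_cost_search(problem):
    tie = count()                                  # tiebreaker for equal costs
    frontier = [(0, next(tie), problem.initial)]   # priority queue ordered by g(n)
    best_g = {problem.initial: 0}                  # best known path cost per state
    explored = set()
    while frontier:
        g, _, state = heapq.heappop(frontier)
        if g > best_g.get(state, float('inf')):
            continue                               # stale entry; a better path was found
        if problem.goal_test(state):               # goal test at expansion time
            return state, g
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            new_g = g + problem.step_cost(state, action, child)
            if child not in explored and new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g              # better path to `child`: update
                heapq.heappush(frontier, (new_g, next(tie), child))
    return None                                    # failure
```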

53
Uniform-Cost search

54
Depth-first search
• Depth-first search always expands the deepest node in the
current frontier of the search tree.
• The search proceeds immediately to the deepest level of the
search tree, where the nodes have no successors.
• As those nodes are expanded, they are dropped from the
frontier, so then the search backs up to the next deepest
node that still has unexplored successors.

55
Depth-first search
• The depth-first search algorithm is an instance of the graph-search algorithm; whereas breadth-first search uses a FIFO queue, depth-first search uses a LIFO queue.
• A LIFO queue means that the most recently generated node is chosen for expansion.
• This must be the deepest unexpanded node, because it is one deeper than its parent, which, in turn, was the deepest unexpanded node when it was selected.
• As an alternative to the graph-search style of implementation, depth-first search can be written as a recursive function that calls itself on each of its children in turn.
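• A minimal sketch of that recursive alternative (illustrative; it also keeps the set of states on the current path, a modification discussed on the next slide, so that it cannot loop in finite state spaces):

```python
def depth_first_search(problem, state=None, path=None):
    """Recursive depth-first tree search; `path` holds the states on the
    current branch, so a child repeating one of them is skipped."""
    if state is None:
        state = problem.initial
        path = set()
    if problem.goal_test(state):
        return state
    path = path | {state}
    for action in problem.actions(state):
        child = problem.result(state, action)
        if child not in path:                   # avoid cycles along this path
            found = depth_first_search(problem, child, path)
            if found is not None:
                return found
    return None                                 # failure on this branch
```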

56
Depth-first search

57
Depth-first search
• The performance evaluation of DFS w.r.t. the four major components is as follows:
• 1. Completeness: the properties of DFS depend strongly on whether the graph-search or tree-search version is used.
• The graph-search version avoids repeated states and redundant paths, and it is complete in finite state spaces because it will eventually expand every node.
• The tree-search version, on the other hand, is not complete.
• Depth-first tree search can be modified at no extra memory cost so that it checks new states against those on the path from the root to the current node; this avoids infinite loops in finite state spaces but does not avoid the creation of redundant paths.
• 2. Optimality: DFS is not optimal.
58
Depth-first search
• 3. Time Complexity: the time complexity of depth-first graph search is bounded by the size of the state space.
• A depth-first tree search, however, may generate all of the O(bᵐ) nodes in the search tree, where m is the maximum depth of any node.
• This can be much greater than the size of the state space.

59
Depth-first search
• 4. Space Complexity: for a graph search, there is no advantage, but a depth-first tree search needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path.
• Once a node has been expanded, it can be removed from memory as soon as all its descendants have been fully explored.
• For a state space with branching factor b and maximum depth m, depth-first tree search requires storage of only O(bm) nodes.
• Because depth-first tree search requires far less space than breadth-first graph search, it is used as a basic workhorse in many areas of AI.

60
Depth-limited search
• Due to the failure of depth-first search in infinite state spaces, it
can be improved by supplying depth-first search with a
predetermined depth limit.
• That is, nodes at the depth limit l are treated as if they have no successors.
• This approach is called depth-limited search.
• The depth limit solves the infinite-path problem.

61
Depth-limited search
• It also introduces an additional source of incompleteness if l < d, that is, if the shallowest goal is beyond the depth limit.
• Depth-limited search will also be nonoptimal if l > d.
• Its time complexity is O(bˡ) and its space complexity is O(bl).
• Depth-first search can be viewed as a special case of depth-limited search with l = ∞.

62
Iterative deepening depth-first search
• Iterative deepening search is a general strategy, used in
combination with depth-first tree search, that finds the best depth
limit.
• It does this by gradually increasing the limit, first 0, then 1, then 2,
and so on—until a goal is found.
• This will occur when the depth limit reaches d, the depth of the
shallowest goal node.
• Iterative deepening combines the benefits of depth-first and
breadth-first search.
• Like DFS, its memory requirements are modest: O(bd), to be precise.
• Like BFS, it is complete when the branching factor is finite and
optimal when the path cost is a non decreasing function of the
depth of the node
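• A sketch combining the two ideas, recursive depth-limited search plus the iterative-deepening driver (illustrative code; the 'cutoff' string simply distinguishes hitting the depth limit from genuine failure):

```python
def depth_limited_search(problem, state, limit):
    """Return a goal state, None (failure), or 'cutoff' if the limit was hit."""
    if problem.goal_test(state):
        return state
    if limit == 0:
        return 'cutoff'                      # nodes at depth l treated as leaves
    cutoff_occurred = False
    for action in problem.actions(state):
        child = problem.result(state, action)
        result = depth_limited_search(problem, child, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None

def iterative_deepening_search(problem, max_depth=50):
    """Increase the depth limit 0, 1, 2, ... until a goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(problem, problem.initial, limit)
        if result != 'cutoff':
            return result                    # a goal state, or None = failure
    return None
```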
63
Iterative deepening depth-first search

64
Iterative deepening depth-first search

65
Iterative deepening depth-first search
• In general, iterative deepening is the preferred uninformed
search method when the search space is large and the depth
of the solution is not known.
• Iterative deepening search is analogous to breadth-first
search in that it explores a complete layer of new nodes at
each iteration before going on to the next layer.

66
Bidirectional search
• In this method, two searches are run simultaneously: one forward from the initial state and the other backward from the goal, in the hope that the two searches meet in the middle.
• Bidirectional search is implemented by replacing the goal test with a check to see whether the frontiers of the two searches intersect.
• If they intersect, a solution has been found.
• The check can be done when each node is generated or selected for expansion and, with a hash table, it takes constant time.
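• A rough sketch of the idea (illustrative; it assumes a single known goal state and reversible actions so the backward search can use the same successor function, and it returns the meeting state rather than the joined path):

```python
from collections import deque

def bidirectional_search(problem):
    """Two breadth-first searches, forward from the initial state and backward
    from the goal; stop as soon as the frontiers intersect."""
    if problem.initial == problem.goal:
        return problem.initial
    front_f, front_b = deque([problem.initial]), deque([problem.goal])
    seen_f, seen_b = {problem.initial}, {problem.goal}   # hash tables of reached states
    while front_f and front_b:
        for frontier, seen, other_seen in ((front_f, seen_f, seen_b),
                                           (front_b, seen_b, seen_f)):
            state = frontier.popleft()
            for action in problem.actions(state):
                child = problem.result(state, action)
                if child in other_seen:        # frontiers intersect: solution found
                    return child
                if child not in seen:
                    seen.add(child)
                    frontier.append(child)
    return None                                # failure
```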

67
Bidirectional search
• The time complexity of bidirectional search using breadth-first searches in both directions is O(b^(d/2)).
• Thus, the time complexity is reduced in bidirectional search.
• The space complexity is also O(b^(d/2)).
• To perform the intersection check, at least one of the two frontiers must be kept in memory.
• This space requirement is the most significant weakness of bidirectional search.

68
Bidirectional search

69
Listing of Various Searching Algorithms

70
