AI-2nd Unit
SEARCHING
problem-solving agent
• This chapter discusses one kind of goal-based agent, called a problem-solving agent.
• Problem-solving agents use atomic representations.
• The discussion of problem solving begins with precise definitions of problems and their
solutions, and gives several examples to illustrate these definitions.
• It then describes several general-purpose search algorithms that can be used to solve these problems.
• Later, we will see several uninformed search algorithms: algorithms that are given no information
about the problem other than its definition. Although some of these algorithms can solve any
solvable problem, none of them can do so efficiently.
• Informed search algorithms, on the other hand, can do quite well given some guidance on where
to look for solutions.
PROBLEM-SOLVING AGENTS
• Intelligent agents are supposed to maximize their performance measure. Achieving this is
sometimes simplified if the agent can adopt a goal and aim at satisfying it.
• GOAL FORMULATION
• Problem Formulation
• Search
• Solution
• Execution
• Open-loop
GOAL FORMULATION
• Goals help organize behavior by limiting the objectives that the agent is trying to achieve and
hence the actions it needs to consider.
• Goal formulation, based on the current situation and the agent’s performance measure, is the first
step in problem solving.
PROBLEM FORMULATION
• A goal can be viewed as a set of world states: exactly those states in which the goal is satisfied. The agent’s
task is to find out how to act, now and in the future, so that it reaches a goal state.
• Problem formulation is the process of deciding what actions and states to consider, given a goal.
Search, Solution, and Execution
• The process of looking for a sequence of actions that reaches the goal is called search.
• A search algorithm takes a problem as input and returns a solution in the form of an action
sequence.
• Once a solution is found, the actions it recommends can be carried out. This is called the
execution phase.
• After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. It
then uses the solution to guide its actions, doing whatever the solution recommends as the next
thing to do—typically, the first action of the sequence—and then removing that step from the
sequence. Once the solution has been executed, the agent will formulate a new goal.
A simple problem-solving agent. It first formulates a goal
and a problem, searches for a sequence of actions that
would solve the problem, and then executes the actions
one at a time. When this is complete, it formulates
another goal and starts over.
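The formulate, search, and execute phases just described can be sketched in Python. Everything below is illustrative: the road map is a small fragment of the Romania example, and bfs_plan is a hypothetical helper standing in for the search procedure (a simple breadth-first sweep).

```python
from collections import deque

# Illustrative fragment of the Romania road map (city names from the text,
# connectivity simplified).
ROADS = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu", "Bucharest"],
    "Timisoara": ["Arad"],
    "Zerind": ["Arad"],
    "Bucharest": [],
}

def bfs_plan(start, goal):
    """Search phase: return a sequence of cities to drive to (breadth-first)."""
    frontier = deque([(start, [])])
    explored = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for nxt in ROADS[state]:
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

def agent_run(start, goal):
    """Open-loop execution: carry out the whole plan, ignoring percepts."""
    plan = bfs_plan(start, goal)   # search phase
    state = start
    for action in plan:            # execution phase, one action at a time
        state = action             # driving to a city makes it the new state
    return state

print(agent_run("Arad", "Bucharest"))  # 'Bucharest'
```

Because the agent executes the whole plan without consulting its percepts, this is the open-loop behavior mentioned above.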
Well-defined problems and
solutions
A problem can be defined formally by five components:
• Initial state: The initial state that the agent starts in.
• A description of the possible actions available to the agent. Given a particular state s,
ACTIONS(s) returns the set of actions that can be executed in s. We say that each of these actions
is applicable in s. For example, from the state In(Arad), the applicable actions are {Go(Sibiu),
Go(Timisoara), Go(Zerind)}.
• A description of what each action does; the formal name for this is the transition model, specified
by a function RESULT(s, a) that returns the state that results from doing action a in state s. We
also use the term successor to refer to any state reachable from a given state by a single action.
For example, we have RESULT(In(Arad), Go(Zerind)) = In(Zerind).
• Together, the initial state, actions, and transition model implicitly define the state space of the
problem—the set of all states reachable from the initial state by any sequence of actions.
• The state space forms a directed network or graph in which the nodes are states and the links
between nodes are actions.
• (The map of Romania shown in Figure 3.2 can be interpreted as a state-space graph if we view
each road as standing for two driving actions, one in each direction.) A path in the state space is a
sequence of states connected by a sequence of actions.
• The goal test, which determines whether a given state is a goal state. Sometimes there is an
explicit set of possible goal states, and the test simply checks whether the given state is one of
them.
• A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses
a cost function that reflects its own performance measure.
• The step cost of taking action a in state s to reach state s′ is denoted by c(s, a, s′).
• A solution to a problem is an action sequence that leads from the initial state to a goal state.
Solution quality is measured by the path cost function, and an optimal solution has the lowest
path cost among all solutions.
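A minimal sketch of the five components as a Python class, using the Romania route-finding example from the text; the class name, method names, and the partial road map (only the roads out of Arad) are illustrative.

```python
class RomaniaProblem:
    """Illustrative five-component problem definition."""

    def __init__(self, initial, goal, roads, costs):
        self.initial = initial          # 1. initial state
        self.goal = goal
        self.roads = roads
        self.costs = costs

    def actions(self, s):               # 2. ACTIONS(s): applicable actions
        return [f"Go({city})" for city in self.roads[s]]

    def result(self, s, a):             # 3. RESULT(s, a): transition model
        return a[3:-1]                  # Go(X) leads to state X

    def goal_test(self, s):             # 4. goal test
        return s == self.goal

    def step_cost(self, s, a, s2):      # 5. step cost c(s, a, s'), summed
        return self.costs[(s, s2)]      #    along a path to give path cost

roads = {"Arad": ["Sibiu", "Timisoara", "Zerind"]}
costs = {("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118,
         ("Arad", "Zerind"): 75}
p = RomaniaProblem("Arad", "Bucharest", roads, costs)
print(p.actions("Arad"))               # ['Go(Sibiu)', 'Go(Timisoara)', 'Go(Zerind)']
print(p.result("Arad", "Go(Zerind)"))  # 'Zerind'
```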
A simplified road map of part of Romania.
EXAMPLE PROBLEMS
• Toy problems
• Vacuum world
• 8-puzzle
• 8-queens problem
• Real world problems
• Route finding problem
• TSP
• VLSI Layout
• Robot navigation
• Automatic assembly sequencing
• Protein design
Real-world problems - TSP
• The traveling salesperson problem (TSP) is a touring problem in which each city must be visited
exactly once. The aim is to find the shortest tour.
• The problem is known to be NP-hard, but an enormous amount of effort has been expended to
improve the capabilities of TSP algorithms.
• In addition to planning trips for traveling salespersons, these algorithms have been used for tasks
such as planning movements of automatic circuit-board drills and of stocking machines on shop
floors.
VLSI layout
• VLSI – Very Large Scale Integration. It is a technology that involves packing millions of
transistors onto a single silicon chip.
• The VLSI layout problem requires positioning millions of components and connections on a chip to minimize
area, minimize circuit delays, minimize stray capacitances, and maximize manufacturing yield.
• The layout problem comes after the logical design phase and is usually split into two parts: cell
layout and channel routing.
• In cell layout, the primitive components of the circuit are grouped into cells, each of which
performs some recognized function.
• Each cell has a fixed footprint (size and shape) and requires a certain number of connections to
each of the other cells. The aim is to place the cells on the chip so that they do not overlap and so
that there is room for the connecting wires to be placed between the cells.
• Channel routing finds a specific route for each wire through the gaps between the cells. These
search problems are extremely complex, but definitely worth solving.
ROBOT NAVIGATION
• Robot navigation is a generalization of the route-finding problem.
• Rather than following a discrete set of routes, a robot can move in a continuous space with (in
principle) an infinite set of possible actions and states.
• For a circular robot moving on a flat surface, the space is essentially two-dimensional. When the
robot has arms and legs or wheels that must also be controlled, the search space becomes many-
dimensional. Advanced techniques are required just to make the search space finite.
AUTOMATIC ASSEMBLY SEQUENCING
• Automatic assembly sequencing of complex objects by a robot was first demonstrated by
FREDDY.
• In assembly problems, the aim is to find an order in which to assemble the parts of some
object. If the wrong order is chosen, there will be no way to add some part later in the
sequence without undoing some of the work already done.
• Checking a step in the sequence for feasibility is a difficult geometrical search problem closely
related to robot navigation.
• Thus, the generation of legal actions is the expensive part of assembly sequencing. Any practical
algorithm must avoid exploring all but a tiny fraction of the state space.
SEARCHING FOR SOLUTIONS
• A solution is an action sequence, so search algorithms work by considering various possible
action sequences.
• The possible action sequences starting at the initial state form a search tree with the initial state at
the root; the branches are actions and the nodes correspond to states in the state space of the
problem.
• The set of all leaf nodes available for expansion at any given point is called the frontier.
Partial search trees for finding a route from Arad to Bucharest. Nodes that have been
expanded are shaded; nodes that have been generated but not yet expanded are outlined
in bold; nodes that have not yet been generated are shown in faint dashed lines.
An informal description of the general tree-search algorithms.
An informal description of the general graph-search
algorithms.
A sequence of search trees generated by a graph search on the Romania problem of Figure
3.2. At each stage, we have extended each path by one step. Notice that at the third
stage, the northernmost city (Oradea) has become a dead end: both of its successors are
already explored via other paths.
The separation property of GRAPH-SEARCH, illustrated on a rectangular-grid problem.
The frontier (white nodes) always separates the explored region of the state space
(black nodes) from the unexplored region (gray nodes).
In (a), just the root has been expanded.
In (b), one leaf node has been expanded.
In (c), the remaining successors of the root have been expanded in clockwise order.
Infrastructure for search
algorithms
• Search algorithms require a data structure to keep track of the search tree that is being constructed.
• For each node n of the tree, we have a structure that contains four components:
• n.STATE: the state in the state space to which the node corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate the node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to the node, as
indicated by the parent pointers.
• EMPTY?(queue) returns true only if there are no more elements in the queue.
• POP(queue) removes the first element of the queue and returns it.
• INSERT(element, queue) inserts an element and returns the resulting queue.
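A minimal sketch of the node structure and the three queue operations above, using Python's deque as the FIFO queue; the Node class and its solution helper are illustrative.

```python
from collections import deque

class Node:
    """Illustrative search-tree node with the four components above."""

    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state           # n.STATE
        self.parent = parent         # n.PARENT
        self.action = action         # n.ACTION
        self.path_cost = path_cost   # n.PATH-COST, g(n)

    def solution(self):
        """Recover the action sequence by following parent pointers."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

root = Node("Arad")
child = Node("Sibiu", parent=root, action="Go(Sibiu)", path_cost=140)
print(child.solution())              # ['Go(Sibiu)']

# The three queue operations, with a FIFO queue as the example:
queue = deque()                      # EMPTY?(queue) is `not queue`
queue.append("a")                    # INSERT(element, queue)
first = queue.popleft()              # POP(queue) removes and returns the first
```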
Measuring problem-solving
performance
Evaluation of an algorithm’s performance will be done in four ways: completeness (does it find a solution when one exists?), optimality (does it find the lowest-cost solution?), time complexity, and space complexity.
Breadth-first search on a simple binary tree. At each stage, the node to be expanded
next is indicated by a marker.
Pseudocode
• It is complete—if the shallowest goal node is at some finite depth d, breadth-first search will
eventually find it after generating all shallower nodes (provided the branching factor b is finite).
• Breadth-first search is optimal if the path cost is a nondecreasing function of the depth of the
node (for example, when all actions have the same cost).
• Its time and space requirements are poor: both are O(b^d), exponential in the depth d of the shallowest solution.
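A sketch of breadth-first search under these assumptions: the graph is an explicit dict, and the goal test is applied when a node is generated rather than when it is expanded; names and the toy tree are illustrative.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return the shallowest path from start to goal, or None on failure."""
    if start == goal:
        return [start]
    frontier = deque([[start]])        # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        for succ in graph[path[-1]]:
            if succ in explored:
                continue
            if succ == goal:           # goal test applied on generation
                return path + [succ]
            explored.add(succ)
            frontier.append(path + [succ])
    return None                        # failure: no path exists

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": []}
print(breadth_first_search(tree, "A", "F"))  # ['A', 'C', 'F']
```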
Uniform-cost search
• Instead of expanding the shallowest node, uniform-cost search expands the node n with the
lowest path cost g(n). This is done by storing the frontier as a priority queue ordered by g.
• There are two other significant differences from breadth-first search:
• The first is that the goal test is applied to a node when it is selected for expansion rather than
when it is first generated.
• The second difference is that a test is added in case a better path is found to a node currently on the
frontier.
• The successors of Sibiu are Rimnicu Vilcea and Fagaras, with costs
80 and 99, respectively. The least-cost node, Rimnicu Vilcea, is
expanded next, adding Pitesti with cost 80 + 97 = 177. The least-cost
node is now Fagaras, so it is expanded, adding Bucharest with cost
99 + 211 = 310.
• Now a goal node has been generated, but uniform-cost search keeps
going, choosing Pitesti for expansion and adding a second path to
Bucharest with cost 80 + 97 + 101 = 278. Now the algorithm checks to
see if this new path is better than the old one; it is, so the old one is
discarded. Bucharest, now with g-cost 278, is selected for
expansion and the solution is returned.
• Completeness is guaranteed provided the cost of every step exceeds some small positive constant ε.
• It is optimal - uniform-cost search expands nodes in order of their optimal path cost. Hence, the
first goal node selected for expansion must be the optimal solution.
• Let C∗ be the cost of the optimal solution, and assume that every action costs at least ε. Then the
algorithm’s worst-case time and space complexity is O(b^(1 + ⌊C∗/ε⌋)).
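A sketch of uniform-cost search on the Sibiu-to-Bucharest fragment described above, using Python's heapq as the priority queue ordered by g(n); function and variable names are illustrative.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Return (cost, path) for the cheapest path, or None on failure."""
    frontier = [(0, start, [start])]   # (g, state, path), ordered by g
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:              # goal test applied on expansion
            return g, path
        if g > best_g.get(state, float("inf")):
            continue                   # stale entry: a cheaper path was found
        for succ, cost in graph[state]:
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2      # keep only the better path to succ
                heapq.heappush(frontier, (g2, succ, path + [succ]))
    return None

romania = {
    "Sibiu": [("Rimnicu Vilcea", 80), ("Fagaras", 99)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Fagaras": [("Bucharest", 211)],
    "Pitesti": [("Bucharest", 101)],
    "Bucharest": [],
}
print(uniform_cost_search(romania, "Sibiu", "Bucharest"))
# (278, ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Note that Bucharest is first generated via Fagaras with cost 310, but the search keeps going and returns the cheaper 278 path, exactly as in the worked example above.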
Depth-first search
• Depth-first search always expands the deepest node in the current frontier of the search tree.
• The search proceeds immediately to the deepest level of the search tree, where the nodes have no
successors. As those nodes are expanded, they are dropped from the frontier, so the search then
“backs up” to the next deepest node that still has unexplored successors.
• Whereas breadth-first search uses a FIFO queue, depth-first search uses a LIFO queue (a stack). A LIFO queue
means that the most recently generated node is chosen for expansion. This must be the deepest
unexpanded node because it is one deeper than its parent, which, in turn, was the deepest
unexpanded node when it was selected.
Depth-first search on a binary
tree. The unexplored region is
shown in light gray. Explored
nodes with no descendants in the
frontier are removed from
memory. Nodes at depth 3 have
no successors and M is the only
goal node.
• The graph-search version, which avoids repeated states and redundant paths, is complete in finite
state spaces; the tree-search version is not complete.
• Depth-first search is not optimal.
• The time complexity of depth-first graph search is bounded by the size of the state space. A depth-
first tree search, on the other hand, may generate all of the O(b^m) nodes in the search tree, where
m is the maximum depth of any node.
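A sketch of depth-first search using an explicit LIFO stack, so the most recently generated node is always expanded next; the toy binary tree and all names are illustrative.

```python
def depth_first_search(graph, start, goal):
    """Return some path from start to goal (not necessarily the shortest)."""
    frontier = [[start]]               # LIFO stack of paths
    while frontier:
        path = frontier.pop()          # deepest (most recent) node first
        state = path[-1]
        if state == goal:
            return path
        # Push successors in reverse so the leftmost one is expanded first.
        for succ in reversed(graph[state]):
            if succ not in path:       # avoid cycles along the current path
                frontier.append(path + [succ])
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": []}
print(depth_first_search(tree, "A", "G"))  # ['A', 'C', 'G']
```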
Depth-limited search
• Depth-limited search (DLS) is a variant of the depth-first search (DFS) algorithm for solving
search problems.
• In DLS, the search process explores the nodes of a tree or graph up to a specified maximum depth
limit, preventing it from exploring deeper levels.
• This method is particularly useful in avoiding infinite loops in spaces where a DFS might explore
indefinitely.
How Depth-Limited Search Works
• Depth limit: DLS sets a depth limit L, meaning the search will only explore up to depth L
in the tree or graph. Any nodes beyond this limit are not expanded.
• Recursive Depth-First Search: Similar to DFS, DLS uses a recursive approach but includes a
condition to stop when the depth limit is reached.
• Uninformed DLS: Like DFS, DLS is uninformed, meaning it doesn't use any knowledge of the
solution space (like heuristics) but simply explores based on the structure of the problem.
• Completeness: DLS is not complete if the solution lies beyond the depth limit.
• Optimality: It is not optimal unless the depth limit exactly matches the depth of the solution.
• Time complexity: The time complexity is O(b^L), where b is the branching factor and L is the
depth limit.
• Space complexity: The space complexity is O(bL), as it only stores nodes along the current path.
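The recursive procedure above can be sketched as follows; a "cutoff" sentinel distinguishes hitting the depth limit from outright failure, and the toy tree and names are illustrative.

```python
def depth_limited_search(graph, state, goal, limit, path=None):
    """Return a path, 'cutoff' if the limit was hit, or None for failure."""
    path = (path or []) + [state]
    if state == goal:
        return path
    if limit == 0:
        return "cutoff"                # depth limit reached
    cutoff_occurred = False
    for succ in graph[state]:
        result = depth_limited_search(graph, succ, goal, limit - 1, path)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result
    return "cutoff" if cutoff_occurred else None

tree = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(depth_limited_search(tree, "A", "D", 1))  # 'cutoff': D lies at depth 2
print(depth_limited_search(tree, "A", "D", 2))  # ['A', 'B', 'D']
```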
Iterative deepening depth-first
search
• Iterative deepening depth-first search (IDDFS) combines the benefits of both depth-first search (DFS) and breadth-first search (BFS). It
does this by performing a series of depth-limited searches, each time increasing the depth limit by
one, until the goal is found.
How IDDFS Works
• The key idea behind IDDFS is that it performs DFS multiple times with increasing depth limits,
starting from 0 and going up incrementally.
• At each iteration, it performs a depth-limited search (as in Depth-Limited Search, or DLS) to a
specific depth.
• The search proceeds deeper at each iteration, gradually exploring the entire search space while
avoiding the memory consumption of BFS and the potential pitfalls of unbounded DFS.
Steps of IDDFS:
1. Start with a depth limit of 0.
2. Perform a Depth-First Search (DFS) up to this depth.
3. If the goal is found, return the solution.
4. If the goal is not found, increase the depth limit by 1 and repeat the process.
5. Continue iterating until the goal is found.
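The five steps can be sketched as follows. As a simplification, this version does not distinguish a cutoff from failure, so a hypothetical max_depth bound prevents looping forever when no solution exists; the toy tree and names are illustrative.

```python
def depth_limited(graph, state, goal, limit, path=()):
    """Plain depth-limited DFS: return a path or None."""
    path = path + (state,)
    if state == goal:
        return list(path)
    if limit == 0:
        return None
    for succ in graph[state]:
        result = depth_limited(graph, succ, goal, limit - 1, path)
        if result is not None:
            return result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Run depth-limited searches with limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit)
        if result is not None:
            return result              # shallowest solution found first
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": []}
print(iterative_deepening_search(tree, "A", "E"))  # ['A', 'B', 'E']
```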
The iterative deepening search algorithm, which repeatedly applies depth-limited search with
increasing limits. It terminates when a solution is found or if the depth-limited search returns
failure, meaning that no solution exists.
Four iterations of iterative
deepening search on a binary
tree.
• Completeness: IDDFS is complete, meaning that if a solution exists, it will eventually be found.
This is because it systematically explores all nodes in increasing depths, similar to BFS.
• Optimality: IDDFS is optimal if all actions have the same cost (as in BFS). It finds the shallowest
(least costly) solution.
• Time Complexity: In the worst case, the time complexity is O(b^d), where b is the branching
factor and d is the depth of the solution.
• Space Complexity: IDDFS has a low space complexity of O(bd), which is the same as DFS. This
is because it only stores the current path in memory.
Bidirectional search
• It is a graph-based search algorithm used in Artificial Intelligence (AI) to find the shortest path
between an initial state and a goal state.
• It runs two simultaneous searches—one forward from the start node and the other backward from
the goal node—until the two searches meet.
• This approach reduces the search space significantly compared to unidirectional algorithms like
Breadth-First Search (BFS) or Depth-First Search (DFS).
How Bidirectional Search Works
• Two Searches: The algorithm performs two BFSs, one starting from the initial state and the other
from the goal state. Each search progresses level by level.
• Meeting Point: The two searches continue until they meet at some node. At this point, the
algorithm has found a path from the start to the goal.
• Final Path: The final path is constructed by combining the path from the start node to the meeting
node (from the forward search) and the path from the goal node to the meeting node (from the
backward search).
• Completeness: It is complete, meaning it will find a solution if one exists, provided that both the
forward and backward searches are exhaustive.
• Optimality: It is optimal when BFS is used for both directions, as it guarantees finding the
shortest path if all step costs are uniform.
• Time complexity: The time complexity of bidirectional search is O(b^{d/2}), where b is the
branching factor and d is the depth of the solution. This is a significant improvement over the
O(b^d) complexity of unidirectional BFS.
• Space complexity: Like BFS, it requires storing all nodes at each level, but because each search
explores only half the depth, the space complexity is also O(b^{d/2}), which is much lower than
O(b^d).
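A sketch of bidirectional breadth-first search, assuming an undirected graph so the backward search can follow edges in reverse; as a simplification it expands one node per direction per round rather than strictly level by level, and all names and the toy chain graph are illustrative.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Grow two BFS frontiers until they meet, then join the half-paths."""
    if start == goal:
        return [start]
    fwd = {start: [start]}   # forward paths from start, keyed by endpoint
    bwd = {goal: [goal]}     # backward paths from goal, keyed by endpoint
    fq, bq = deque([start]), deque([goal])
    while fq and bq:
        state = fq.popleft()            # expand the forward frontier
        for succ in graph[state]:
            if succ in bwd:             # frontiers meet: join the paths
                return fwd[state] + list(reversed(bwd[succ]))
            if succ not in fwd:
                fwd[succ] = fwd[state] + [succ]
                fq.append(succ)
        state = bq.popleft()            # expand the backward frontier
        for succ in graph[state]:
            if succ in fwd:
                return fwd[succ] + list(reversed(bwd[state]))
            if succ not in bwd:
                bwd[succ] = bwd[state] + [succ]
                bq.append(succ)
    return None

chain = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
         "D": ["C", "E"], "E": ["D"]}
print(bidirectional_search(chain, "A", "E"))  # ['A', 'B', 'C', 'D', 'E']
```

On this 4-step chain, each frontier only reaches two steps deep before they meet, illustrating the O(b^{d/2}) behavior.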
Comparing uninformed search
strategies