Problem Solving by Searching
Search algorithms
• Uninformed:
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth Limited Search
• Iterative Deepening Search
• Bi-directional Search
• Heuristic-based:
• Greedy best-first search
• A* Search
• Hill Climbing Search Algorithm
Uninformed search strategies
• Also called 'blind search'.
• These strategies have no additional information about states beyond that provided in the problem definition.
• All they can do is generate successors and distinguish a goal state from a non-goal state.
• All search strategies are distinguished by the order in which nodes are expanded.
Breadth-first search
• Breadth-first search is a simple strategy in which the root node is expanded first.
• Then all the successors of the root node are expanded, then their successors, and so on.
• In general, all the nodes at a given depth in the search tree are expanded before any nodes at the next level.
• This is achieved very simply by using a FIFO queue for the frontier.
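The FIFO-queue frontier described above can be sketched as follows; this is a minimal graph-search version, and the example graph and goal are illustrative assumptions, not from the slides:

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Return a shortest path from start to goal, expanding shallowest nodes first."""
    if start == goal:
        return [start]
    frontier = deque([start])          # FIFO queue: shallowest node leaves first
    parent = {start: None}             # also serves as the explored/seen set
    while frontier:
        node = frontier.popleft()
        for child in successors(node):
            if child not in parent:    # skip repeated states
                parent[child] = node
                if child == goal:      # goal test on generation, not expansion
                    path = [child]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(child)
    return None

# Illustrative graph (an assumption for the demo)
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'],
         'D': ['F'], 'E': ['F'], 'F': []}
print(breadth_first_search('A', 'F', lambda n: graph[n]))  # ['A', 'B', 'D', 'F']
```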
Space and time complexity of BFS
• Imagine searching a uniform tree where every state has b successors.
• The root of the search tree generates b nodes at the first level.
• Each of these generates b more nodes, for a total of b^2 at the second level.
• Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on.
• Now suppose that the solution is at depth d. In the worst case, it is the last node generated at that level. Then the total number of nodes generated is
b + b^2 + b^3 + … + b^d = O(b^d)
• There will be O(b^(d-1)) nodes in the explored set and O(b^d) nodes in the frontier, so the space complexity is O(b^d), i.e. it is dominated by the size of the frontier.
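The geometric sum above can be checked numerically; the values b = 10 and d = 5 below are illustrative assumptions:

```python
def bfs_nodes(b, d):
    # Worst-case nodes generated: b + b^2 + ... + b^d
    return sum(b ** k for k in range(1, d + 1))

print(bfs_nodes(10, 5))  # 111110: over 10^5 nodes already at depth 5
```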
Breadth-First Search Algorithm
• To solve the 8-puzzle problem, the tiles in the puzzle are moved in different directions. Instead of moving the tiles into the empty space, the user can visualize moving the empty space in place of the tiles, basically swapping a tile with the empty space. The empty space cannot move diagonally and can make a single move at a time. The user can move the empty space in four different directions:
• Up
• Down
• Right
• Left
• Generally, search techniques are classified into two types:
• Uninformed search
• In artificial intelligence, uninformed search techniques use search algorithms such as linear search, depth-first search (DFS), binary search, and breadth-first search (BFS). These algorithms are called uninformed because they do not know anything about what they are searching for or where they need to search for it. Uninformed search therefore consumes more time.
• Informed search
• In artificial intelligence, informed search is the exact opposite of uninformed search. In this search, the algorithm is aware of where the best chance of finding the solution lies and what the next move should be. Informed search techniques include best-first search, hill climbing, A*, and the AO* algorithm. Heuristic search is an informed search technique.
• The heuristic function is what makes a search informed: it tells the search the direction of the goal. It provides an estimate of which neighboring node is most likely to lead to the goal state. The heuristic function computes a heuristic value, which informs the algorithm which path is likely to reach a solution earlier. Depending on the search problem, different heuristic values are generated from the heuristic function; these values are used to optimize the search.
• A* algorithm
• The A* algorithm is an informed search technique. It is widely used in pathfinding and graph traversal, the process of plotting an efficiently traversable path between multiple points, called nodes. A key feature of the algorithm is that it keeps track of each visited node, which helps it ignore nodes that have already been visited. A* maintains a list that holds all the nodes that are left to be explored, and it chooses the most promising node from this list, thus saving time by not exploring unnecessary (less promising) nodes.
• States: A state description specifies the location of each of the eight
tiles and the blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note
that any given goal can be reached from exactly half of the possible
initial states.
• Actions: The simplest formulation defines the actions as movements
of the blank space Left, Right, Up, or Down. Different subsets of these
are possible depending on where the blank is.
• Transition model: Given a state and action, this returns the resulting
state; for example, if we apply Left to the start state in Figure 3.4, the
resulting state has the 5 and the blank switched.
• Goal test: This checks whether the state matches the goal
configuration shown in Figure 3.4. (Other goal configurations are
possible.)
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
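The Actions and Transition model components above can be sketched for the 8-puzzle; the state encoding (a 9-tuple with 0 as the blank) and the sample state are assumptions for illustration:

```python
def blank_moves(state):
    """Actions: legal moves of the blank (0) in a 3x3 puzzle given as a 9-tuple."""
    row, col = divmod(state.index(0), 3)
    moves = []
    if row > 0: moves.append('Up')
    if row < 2: moves.append('Down')
    if col > 0: moves.append('Left')
    if col < 2: moves.append('Right')
    return moves

def apply_move(state, move):
    """Transition model: swap the blank with the adjacent tile."""
    i = state.index(0)
    j = i + {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}[move]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)   # blank in the centre: all four moves legal
print(blank_moves(start))              # ['Up', 'Down', 'Left', 'Right']
print(apply_move(start, 'Left'))       # (1, 2, 3, 0, 4, 5, 6, 7, 8)
```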
Toy problems
• The first example we examine is the vacuum world, first introduced in Chapter 2 (see Figure 2.2). It can be formulated as a problem as follows:
• States: The state is determined by both the agent location and the dirt
locations. The agent is in one of two locations, each of which might or might
not contain dirt. Thus, there are 2 × 2^2 = 8 possible world states. A larger
environment with n locations has n · 2^n states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions:
Left, Right, and Suck. Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that
moving Left in the leftmost square, moving Right in the rightmost square,
and Sucking in a clean square have no effect. The complete state space is
shown in Figure 3.3.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
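The vacuum-world formulation above can be sketched directly; the square names 'A' and 'B' and the tuple state encoding are assumptions for illustration:

```python
from itertools import product

# States: (agent_location, dirt_status); two squares 'A' and 'B' give
# 2 locations x 2^2 dirt configurations = 8 world states
states = [(loc, dirt) for loc in ('A', 'B')
          for dirt in product((True, False), repeat=2)]

def result(state, action):
    """Transition model: Left/Right move the agent (no effect at the ends),
    Suck cleans the current square."""
    loc, (dirt_a, dirt_b) = state
    if action == 'Left':
        return ('A', (dirt_a, dirt_b))
    if action == 'Right':
        return ('B', (dirt_a, dirt_b))
    if action == 'Suck':
        return (loc, (False, dirt_b) if loc == 'A' else (dirt_a, False))

def goal_test(state):
    return state[1] == (False, False)      # all squares clean

print(len(states))                         # 8
s = ('A', (True, True))                    # start: agent at A, both squares dirty
for a in ('Suck', 'Right', 'Suck'):
    s = result(s, a)
print(goal_test(s))                        # True
```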
8-QUEENS PROBLEM
• The goal of the 8-queens problem is to place eight queens on a
chessboard such that no queen attacks any other. (A queen attacks
any piece in the same row, column or diagonal.) Figure 3.5 shows an
attempted solution that fails: the queen in the rightmost column is
attacked by the queen at the top left
• Although efficient special-purpose algorithms exist for this problem and for the whole n-queens family, it remains a useful test problem for search algorithms. There are two main kinds of formulation. An incremental formulation involves operators that augment the state description, starting with an empty state; for the 8-queens problem, this means that each action adds a queen to the state. A complete-state formulation starts with all 8 queens on the board and moves them around. In either case, the path cost is of no interest because only the final state counts. The first incremental formulation one might try is the following:
• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the specified square.
• Goal test: 8 queens are on the board, none attacked. In this formulation, we have 64 · 63 ··· 57 ≈ 1.8 × 10^14 possible sequences to investigate. A better formulation would prohibit placing a queen in any square that is already attacked:
• States: All possible arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost
n columns, with no queen attacking another.
• Actions: Add a queen to any square in the leftmost empty column such that it is not
attacked by any other queen.
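The better incremental formulation above (one queen per column, only non-attacked squares as actions) can be sketched as a simple recursive search; the helper names are assumptions:

```python
def attacks(placed, col, row):
    """True if a queen at (col, row) is attacked by any queen in `placed`,
    where placed[c] is the row of the queen in column c."""
    return any(r == row or abs(r - row) == abs(c - col)
               for c, r in enumerate(placed))

def count_solutions(placed=()):
    """Fill columns left to right, trying only non-attacked squares (Actions)."""
    col = len(placed)
    if col == 8:
        return 1                        # goal test: 8 non-attacking queens
    return sum(count_solutions(placed + (row,))
               for row in range(8)
               if not attacks(placed, col, row))

print(count_solutions())  # 92: the number of 8-queens solutions
```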
Informed (heuristic) Search Strategies
• Heuristic functions are the most common form in which additional knowledge of
the problem is imparted to the search algorithm.
• A node is selected for expansion based on an evaluation function f(n).
• The evaluation function is constructed as a cost estimate, so the node with the lowest evaluation is expanded first.
• Greedy best-first search can start down an infinite path and never return to try other possibilities, so it is incomplete.
• Because of its greediness, the search makes choices that can lead to a dead end; then one backs up in the search tree to the deepest unexpanded node.
• Greedy best-first search resembles depth-first search in the way it prefers to follow a single path all the way to the goal, but it will back up when it hits a dead end.
• The quality of the heuristic function determines the practical usability of greedy search.
Greedy best-first search
• Greedy search is not optimal
• Greedy search is incomplete without systematic checking of repeated
states.
• In the worst case, the time and space complexity of greedy search are both O(b^m), where b is the branching factor and m is the maximum path length.
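Greedy best-first search, which expands the node with the lowest heuristic value h(n), can be sketched with a priority queue; the example graph and heuristic values are assumptions for the demo:

```python
import heapq

def greedy_best_first(start, goal, successors, h):
    """Expand the node with the lowest h(n) first (evaluation f(n) = h(n))."""
    frontier = [(h(start), start)]         # priority queue ordered by h
    parent = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = [node]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for child in successors(node):
            if child not in parent:        # repeated-state check
                parent[child] = node
                heapq.heappush(frontier, (h(child), child))
    return None

# Illustrative graph and heuristic values (assumptions)
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 3, 'A': 1, 'B': 2, 'G': 0}
print(greedy_best_first('S', 'G', lambda n: graph[n], h.get))  # ['S', 'A', 'G']
```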
A* (Star) Search in Artificial Intelligence
f(n) = g(n) + h(n)
where
g(n) = cost of the cheapest path from the initial state to node n
h(n) = estimated cost of the cheapest path from node n to a goal state
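The evaluation function f(n) = g(n) + h(n) can be sketched as a priority-queue search; the weighted graph and (admissible) heuristic values below are assumptions for the demo:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n)."""
    g = {start: 0}                         # cheapest known cost to each node
    parent = {start: None}
    frontier = [(h(start), start)]         # priority queue ordered by f(n)
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = [node]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1], g[goal]
        for child, cost in neighbors(node):
            new_g = g[node] + cost
            if child not in g or new_g < g[child]:   # found a cheaper path
                g[child] = new_g
                parent[child] = node
                heapq.heappush(frontier, (new_g + h(child), child))
    return None, float('inf')

# Illustrative weighted graph and admissible heuristic (assumptions)
edges = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 5)],
         'B': [('G', 1)], 'G': []}
h = {'S': 4, 'A': 3, 'B': 1, 'G': 0}
print(a_star('S', 'G', lambda n: edges[n], h.get))  # (['S', 'A', 'B', 'G'], 4)
```

Note that A* finds the cost-4 path S-A-B-G rather than the greedy-looking direct edge A-G, because g(n) keeps track of actual path cost so far.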
Types of State Space Search
• Heuristic search algorithms: Hill Climbing, A* algorithm
Hill Climbing Search Algorithm
• It keeps track of one current state and on each iteration moves to the neighboring state with the highest value; that is, it heads in the direction that provides the steepest ascent.
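Steepest-ascent hill climbing, which repeatedly moves to the best neighbor until no neighbor improves on the current state, can be sketched as follows; the objective function and integer-step neighborhood are assumptions for the demo:

```python
def hill_climbing(state, neighbors, value):
    """Steepest-ascent hill climbing: move to the highest-valued neighbor
    until none improves on the current state (may stop at a local maximum)."""
    while True:
        best = max(neighbors(state), key=value, default=state)
        if value(best) <= value(state):
            return state                   # local (here also global) maximum
        state = best

# Illustrative objective: maximise f(x) = -(x - 3)^2 over integer steps
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(0, neighbors, value))  # 3
```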
• Uniform-cost search:
• This means that an exhaustive tree search to depth 22 would look at about 3^22 ≈ 3.1 × 10^10 states.
• This is reduced to 181,440 distinct states (which is still very large).
• The corresponding number for the 15-puzzle is roughly 10^13.
• We need a heuristic function that never overestimates the number of steps to the goal.
Assignment
• Explain State Space Search in Artificial Intelligence with example