problem solving by searching ppt

solving problems by searching: uninformed and informed

Uploaded by

Surya Basnet

Searching

Search algorithms
• Uninformed:
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth Limited Search
• Iterative Deepening Search
• Bi-directional Search
• Heuristic-based:
• Greedy best-first search
• A* Search
• Hill Climbing Search Algorithm
Uninformed search strategies
• Also called ‘blind search’.
• The strategies have no additional information about states beyond that provided in the problem definition.
• All they can do is generate successors and distinguish a goal state from a non-goal state.
• All search strategies are distinguished by the order in which nodes are expanded.
Breadth-first search
• Breadth-first search is a simple strategy in which the root node is expanded first.
• Then all the successors of the root node are expanded, then their successors, and so on.
• In general, all the nodes at a given depth in the search tree are expanded before any nodes at the next level are expanded.
• This is achieved very simply by using a FIFO queue for the frontier.
Space and time complexity of BFS
• Imagine searching a uniform tree where every state has b successors.
• The root of the search tree generates b nodes at the first level.
• Each of these generates b more nodes, for a total of b^2 at the second level.
• Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on.
• Now suppose that the solution is at depth d. In the worst case, it is the last node generated at that level. Then the total number of nodes generated is
b + b^2 + b^3 + … + b^d = O(b^d)
• There will be O(b^(d-1)) nodes in the explored set and O(b^d) nodes in the frontier, so the space complexity is O(b^d), i.e. it is dominated by the size of the frontier.
Breadth-First Search Algorithm
• 1. Create a variable called NODE-LIST and set it to the initial state.
• 2. Until a goal state is found or NODE-LIST is empty:
a) Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty, then quit.
b) For element E, do the following:
i. Apply each rule to generate a new state.
ii. If the new state is a goal state, quit and return this state.
iii. Otherwise, add the new state to the end of NODE-LIST.
• Step 1: Initially NODE-LIST contains only the node corresponding to the source state A.
• NODE-LIST: A
• Step 2: A is removed from NODE-LIST. The node is expanded, and its children B and C are generated. They are placed at the back of NODE-LIST.
• NODE-LIST: B C
• Step 3: Node B is removed from NODE-LIST and is expanded. Its children D, E are generated and put at the back of NODE-LIST.
• NODE-LIST: C D E
• Step 4: Node C is removed from NODE-LIST and is expanded. Its children D and G are added to the back of NODE-LIST.
• NODE-LIST: D E D G
• Step 5: Node D is removed from NODE-LIST. Its children C and F are generated and added to the back of NODE-LIST.
• NODE-LIST: E D G C F
• Step 6: Node E is removed from NODE-LIST. It has no children.
• NODE-LIST: D G C F
• Step 7: D is expanded; B and F are put at the back of NODE-LIST.
• NODE-LIST: G C F B F
• Step 8: G is selected for expansion. It is found to be a goal node. Hence the algorithm returns the path A-C-G by following the parent pointers of the node corresponding to G.
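The trace above can be sketched as a short Python function. The graph dictionary below is a hypothetical reconstruction of the example graph (successors of A through D); parent pointers are used to recover the path, as in Step 8.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: FIFO frontier, returns a shallowest path or None."""
    if start == goal:
        return [start]
    frontier = deque([start])          # NODE-LIST as a FIFO queue
    parent = {start: None}             # doubles as the set of generated states
    while frontier:
        node = frontier.popleft()
        for child in graph.get(node, []):
            if child not in parent:    # skip already-generated states
                parent[child] = node
                if child == goal:      # goal test on generation
                    path = [child]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(child)
    return None

# Hypothetical graph matching the trace above (A is the source, G the goal)
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["D", "G"], "D": ["C", "F"]}
print(bfs(graph, "A", "G"))  # ['A', 'C', 'G']
```

The returned path A-C-G matches Step 8 of the trace.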
Uniform-cost search
• The search begins at the root node. The search continues by visiting the next node which has the least total cost from the root node. Nodes are visited in this manner until a goal is reached.
• Once a goal node has been generated, uniform-cost search keeps going, choosing for expansion any node whose total cost from the root is less than the previously obtained goal path cost, possibly finding a second path to the goal.
• The algorithm then checks whether this new path is better than the old one; if so, the old one is discarded, the new one is selected for expansion, and the solution is returned.
Uniform cost search example 1 (find a path from A to E)
• Expand A to B, C, D.
• The path to B is the cheapest one, with path cost 2.
• Expand B to E – total path cost = 2 + 9 = 11.
• This might not be the optimal solution, since the path A-C has path cost 4 (less than 11).
• Expand C to E – total path cost = 4 + 5 = 9.
• The path cost from A to D is 10 (greater than the path cost 9).
• Hence the optimal path is A-C-E.
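A minimal sketch of uniform-cost search using a priority queue, run on the example above. The step costs (A-B = 2, A-C = 4, A-D = 10, B-E = 9, C-E = 5) are taken from the example; the goal test is applied on expansion, which is what makes UCS optimal.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """UCS: always pop the frontier node with the least total path cost g(n)."""
    frontier = [(0, start, [start])]       # (g, state, path)
    best_g = {}                            # cheapest cost found so far per state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:                  # goal test on *expansion*
            return path, g
        if state in best_g and best_g[state] <= g:
            continue                       # a cheaper path already expanded
        best_g[state] = g
        for nxt, step in graph.get(state, []):
            heapq.heappush(frontier, (g + step, nxt, path + [nxt]))
    return None, float("inf")

# Example 1 above: step costs A-B=2, A-C=4, A-D=10, B-E=9, C-E=5
graph = {"A": [("B", 2), ("C", 4), ("D", 10)], "B": [("E", 9)], "C": [("E", 5)]}
print(uniform_cost_search(graph, "A", "E"))  # (['A', 'C', 'E'], 9)
```

Note that E is first generated via B with cost 11, but the cheaper path through C (cost 9) is popped first, reproducing the optimal path A-C-E.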
Home work: Uniform cost search
• The graph below shows the step costs for different paths going from the start (S) to the goal (G).
• Use uniform cost search to find the optimal path to the goal.
Depth-first search
Advantage of DFS over BFS: space complexity
Depth-First Search Algorithm

• 1. If the initial state is a goal state, quit and return success.
• 2. Otherwise, do the following until success or failure is signaled:
a) Generate a successor, E, of the initial state. If there are no more successors, signal failure.
b) Call Depth-First Search with E as the initial state.
c) If success is returned, signal success. Otherwise, continue in this loop.
• Step 1: Initially NODE-LIST contains only one node corresponding to the source state A.
NODE-LIST: A
• Step 2: A is removed from NODE-LIST. A is expanded and its children B and C are put in front of NODE-LIST.
NODE-LIST: B C
• Step 3: Node B is removed from NODE-LIST, and its children D and E are pushed in front of NODE-LIST.
NODE-LIST: D E C
• Step 4: Node D is removed from NODE-LIST. C and F are pushed in front of NODE-LIST.
NODE-LIST: C F E C
• Step 5: Node C is removed from NODE-LIST. Its child G is pushed in front of NODE-LIST.
NODE-LIST: G F E C
• Step 6: Node G is expanded and found to be a goal node.
• The solution path A-B-D-C-G is returned and the algorithm terminates.
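The recursive algorithm above can be sketched directly in Python. The graph dictionary is a hypothetical reconstruction of the traced example; the path accumulator doubles as a check against cycling back along the current branch.

```python
def dfs(graph, state, goal, path=None):
    """Recursive depth-first search following the algorithm above."""
    if path is None:
        path = [state]
    if state == goal:                      # step 1: state is a goal state
        return path
    for child in graph.get(state, []):     # step 2a: generate successors
        if child not in path:              # avoid revisiting the current path
            result = dfs(graph, child, goal, path + [child])  # step 2b
            if result is not None:         # step 2c: propagate success
                return result
    return None                            # no more successors: failure

# Hypothetical graph matching the trace above
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["G"], "D": ["C", "F"]}
print(dfs(graph, "A", "G"))  # ['A', 'B', 'D', 'C', 'G']
```

The result reproduces the traced solution path A-B-D-C-G.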


Iterative Deepening Search (IDS)
• Iterative Deepening Depth-First Search is simply called iterative deepening search (IDS).
• It combines the benefits of depth-first and breadth-first search to find the best solution.
• It gradually increases the depth limit from 0, 1, 2 and so on until the goal is reached.
• It terminates with a success message when the depth limit reaches d, the depth of the shallowest goal node.
Iterative Deepening Search (IDS)
• May seem wasteful because states are generated multiple times, but it is actually not very costly, because nodes at the bottom level are generated only once.
• The overhead of these multiple expansions is small, because most of the nodes are towards the leaves (bottom) of the search tree.
• Thus the nodes that are evaluated several times (towards the top of the tree) are relatively few in number.
• Iterative deepening is the preferred uninformed search method when the search space is large and the depth of the solution is unknown.
Performance measure of IDS
• Combines the best of breadth-first and depth-first search strategies
• Completeness: Yes
• Time Complexity: O(b^d)
• Space Complexity: O(bd)
• Optimality: Yes, if step cost = 1
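A minimal sketch of IDS built on a depth-limited helper, under the assumption of a small hypothetical graph. Each call to the helper treats nodes at the limit as if they had no successors, and the outer loop raises the limit 0, 1, 2, … as described above.

```python
def depth_limited(graph, state, goal, limit):
    """Depth-limited DFS: recursion is cut off at depth `limit`."""
    if state == goal:
        return [state]
    if limit == 0:                         # treat nodes at the limit as leaves
        return None
    for child in graph.get(state, []):
        sub = depth_limited(graph, child, goal, limit - 1)
        if sub is not None:
            return [state] + sub
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    """IDS: run depth-limited search with limit 0, 1, 2, ... until success."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit)
        if result is not None:
            return result
    return None

# Hypothetical tree: the goal G sits at depth 2
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(iterative_deepening(graph, "A", "G"))  # ['A', 'C', 'G']
```

Note that the shallow levels are regenerated on every iteration, which is exactly the "small overhead" the slides describe.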
Bi-directional Search
• In bi-directional search, there are two simultaneous searches: one forward from the start state toward the goal, and another backward from the goal state toward the start.
• The algorithm stops when the two searches meet in the middle.
• This can (sometimes) lead to finding a solution more quickly.
• Motivation: b^(d/2) + b^(d/2) is much less than b^d.
• Can be implemented using BFS or iterative deepening (but at least one frontier needs to be kept in memory).
• Works well only when there are unique start and goal states.
• Complete: Yes (if there is a single goal; otherwise difficult)
• Time Complexity: O(b^(d/2)) (better when compared to other algorithms)
• Space Complexity: O(b^(d/2)); the space requirement is the main drawback
• Optimal: Yes, but not always optimal, even if both searches are BFS; it depends on checking when each node is expanded or selected for expansion
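A meet-in-the-middle sketch using two BFS frontiers over a hypothetical undirected graph. Each direction keeps its own parent map, and the two half-paths are stitched together at the meeting node.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """BFS from both ends of an undirected graph; stop where frontiers meet."""
    if start == goal:
        return [start]
    pf, pb = {start: None}, {goal: None}       # parent maps per direction
    qf, qb = deque([start]), deque([goal])

    def expand(queue, parents, others):
        node = queue.popleft()
        for nb in graph.get(node, []):
            if nb not in parents:
                parents[nb] = node
                if nb in others:               # the two frontiers meet here
                    return nb
                queue.append(nb)
        return None

    while qf and qb:
        meet = expand(qf, pf, pb) or expand(qb, pb, qf)
        if meet:
            front, n = [], meet                # walk back toward the start
            while n is not None:
                front.append(n)
                n = pf[n]
            back, n = [], pb[meet]             # walk forward toward the goal
            while n is not None:
                back.append(n)
                n = pb[n]
            return front[::-1] + back
    return None

# Hypothetical undirected chain A-B-C-D-E
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}
print(bidirectional_search(graph, "A", "E"))  # ['A', 'B', 'C', 'D', 'E']
```

Each frontier explores roughly half the depth, which is the b^(d/2) + b^(d/2) saving noted above.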
Depth Limited Search (DLS)
• The failure of depth-first search in an infinite state space can be overcome by supplying depth-first search with a predetermined depth limit.
• The depth limit l, i.e. the cut-off value, is the maximum depth level, introduced to overcome this disadvantage of depth-first search.
• Nodes at depth l are treated as if they have no successors.
• This approach is called Depth-Limited Search.
• The depth limit solves the infinite-path problem.
• Notice that depth-limited search can terminate with two kinds of failure:
• The standard failure value indicates no solution.
• The cutoff value indicates no solution within the depth limit.
Performance measure of DLS
• The DLS solves the infinite-path problem, but it introduces other problems of its own.
• Completeness: it is incomplete if l < d.
• Optimality: it is non-optimal if l < d, because it is not guaranteed to find the shortest solution.
• Time and Space Complexity
• Time: O(b^l)
• Space: O(bl)
What is N-puzzle problem?

• The N-puzzle problem contains N tiles (N+1 including the empty tile), where the value of N can be 8, 15, 24, etc. The puzzle is divided into the square root of (N+1) rows and the square root of (N+1) columns. These problems have an initial state or configuration and a goal state or configuration. The puzzle can be solved by shifting the tiles one by one into the empty space. The objective is to place the numbered tiles to match the final state using the empty space. The other name for the N-puzzle problem is the sliding puzzle.
• Example: N = 8, then the square root of (8+1) = 3 rows and 3 columns.
What is 8 puzzle problem?

• The 8-puzzle problem was invented and popularized by Noyes Palmer Chapman in the 1870s. It is played on a 3-by-3 grid with 8 square blocks/tiles labeled 1 through 8 and a blank square. The goal of the 8-puzzle problem is to rearrange the given blocks in the correct order. The tiles can be shifted vertically or horizontally using the empty square.
Rules to solve 8 puzzle problem

• To solve the 8-puzzle problem, the tiles in the puzzle are moved in different directions. Instead of moving the tiles into the empty space, the user can visualize moving the empty space in place of the tiles, basically swapping a tile with the empty space. The empty space cannot move diagonally and can make a single move at a time. The user can move the empty space in four different directions:
• Up
• Down
• Right
• Left
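The four blank moves can be sketched as a successor-generating function; states are represented here, as an assumption, by 9-tuples in row-major order with 0 for the empty square.

```python
def successors(state):
    """Generate 8-puzzle successor states by sliding the blank (0).

    `state` is a tuple of 9 tiles in row-major order; 0 marks the empty square.
    """
    moves = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}
    blank = state.index(0)
    row, col = divmod(blank, 3)
    result = []
    for name, delta in moves.items():
        # reject moves that would slide the blank off the grid
        if name == "Up" and row == 0:
            continue
        if name == "Down" and row == 2:
            continue
        if name == "Left" and col == 0:
            continue
        if name == "Right" and col == 2:
            continue
        swap = blank + delta
        s = list(state)
        s[blank], s[swap] = s[swap], s[blank]   # swap the tile with the blank
        result.append((name, tuple(s)))
    return result

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)   # blank in the centre: four legal moves
print([name for name, _ in successors(start)])  # ['Up', 'Down', 'Left', 'Right']
```

With the blank in a corner only two moves survive, and along an edge three, matching the branching factors discussed later.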
• Generally, search techniques are classified into two types:
• Uninformed search
• In artificial intelligence, uninformed search techniques use search algorithms such as linear search, depth-first search (DFS), binary search, and breadth-first search (BFS). These algorithms are called uninformed because they do not know anything about what they are searching for or where they need to search for it. Uninformed search therefore consumes more time.
• Informed search
• In artificial intelligence, informed search is the exact opposite of uninformed search. In this search, the algorithm is aware of where the best chance of finding the solution lies and what the next move should be. The informed search techniques are best-first search, hill climbing, A*, and the AO* algorithm. Heuristic search, guided by a heuristic function, is an informed search technique.
• The heuristic function is an informed search method which informs the search about the goal direction. It provides data to estimate which neighboring node should lead to the goal state. A heuristic function is used to calculate a heuristic value, which tells the algorithm which path will give a solution earlier. Depending on the search problem, various heuristic values are generated from the heuristic function. Thus, to optimize the search, the heuristic value from the heuristic search technique is used.
• A* algorithm
• The A* algorithm is an informed search technique. It is widely used in pathfinding and graph traversal, the process of plotting an efficiently traversable path between multiple points, called nodes. A key feature of the algorithm is that it keeps track of each visited node, which helps in ignoring nodes that have already been visited. The A* algorithm keeps a list that holds all the nodes that are left to be explored, and it chooses the most promising node from this list, thus saving time by not exploring unnecessary (less promising) nodes.
• States: A state description specifies the location of each of the eight
tiles and the blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note
that any given goal can be reached from exactly half of the possible
initial states.
• Actions: The simplest formulation defines the actions as movements
of the blank space Left, Right, Up, or Down. Different subsets of these
are possible depending on where the blank is.
• Transition model: Given a state and action, this returns the resulting
state; for example, if we apply Left to the start state in Figure 3.4, the
resulting state has the 5 and the blank switched.
• Goal test: This checks whether the state matches the goal configuration shown in Figure 3.4. (Other goal configurations are possible.)
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
Toy problems
• The first example we examine is the vacuum world, first introduced in Chapter 2 (see Figure 2.2). It can be formulated as a problem as follows:
• States: The state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt. Thus, there are 2 × 2^2 = 8 possible world states. A larger environment with n locations has n · 2^n states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions:
Left, Right, and Suck. Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that
moving Left in the leftmost square, moving Right in the rightmost square,
and Sucking in a clean square have no effect. The complete state space is
shown in Figure 3.3.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the
path
8-QUEENS PROBLEM
• The goal of the 8-queens problem is to place eight queens on a
chessboard such that no queen attacks any other. (A queen attacks
any piece in the same row, column or diagonal.) Figure 3.5 shows an
attempted solution that fails: the queen in the rightmost column is
attacked by the queen at the top left.
• Although efficient special-purpose algorithms exist for this problem and for the whole n-queens family, it remains a useful test problem for search algorithms. There are two main kinds of formulation. An incremental formulation involves operators that augment the state description, starting with an empty state; for the 8-queens problem, this means that each action adds a queen to the state. A complete-state formulation starts with all 8 queens on the board and moves them around. In either case, the path cost is of no interest because only the final state counts. The first incremental formulation one might try is the following:
• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the specified square.
• Goal test: 8 queens are on the board, none attacked. In this formulation, we have 64 · 63 · … · 57 ≈ 1.8 × 10^14 possible sequences to investigate. A better formulation would prohibit placing a queen in any square that is already attacked:
• States: All possible arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost
n columns, with no queen attacking another.
• Actions: Add a queen to any square in the leftmost empty column such that it is not
attacked by any other queen.
Informed (heuristic) Search Strategies
• Heuristic functions are the most common form in which additional knowledge of the problem is imparted to the search algorithm.
• A node is selected for expansion based on an evaluation function, f(n).
• The evaluation function is constructed as a cost estimate, so the node with the lowest evaluation is expanded first.
• The implementation is identical to that for uniform-cost search, except for the use of f instead of g to order the priority queue. The choice of f determines the search strategy.
• Most best-first algorithms include as a component of f a heuristic function h(n):
• h(n) = estimated cost of the cheapest path from the state at node n to a goal state.
• We can consider h(n) to be an arbitrary, nonnegative, problem-specific function, with one constraint: if n is a goal node, then h(n) = 0.
Informed Searching Strategies
• Greedy Best First Search
Greedy best-first search
• The greedy best-first search algorithm always selects the path which appears best at that moment.
• It is the combination of depth-first search and breadth-first search algorithms.
• It uses the heuristic function to guide the search.
• With the help of best-first search, at each step we can choose the most promising node.
• In the best-first search algorithm, we expand the node which is closest to the goal node, where the minimum cost is estimated by a heuristic function.
Greedy best-first search
• The evaluation function is f(n) = h(n)
• where h(n) = estimated cost from node n to the goal.
• Greedy search ignores the cost of the path that has already been traversed to reach n.
• Therefore the solution found is not necessarily optimal.
Greedy best-first search

• Greedy best-first search can start down an infinite path and never return to try other possibilities, so it is incomplete.
• Because of its greediness, the search makes choices that can lead to a dead end; then one backs up in the search tree to the deepest unexpanded node.
• Greedy best-first search resembles depth-first search in the way it prefers to follow a single path all the way to the goal, but it will back up when it hits a dead end.
• The quality of the heuristic function determines the practical usability of greedy search.
Greedy best-first search
• Greedy search is not optimal.
• Greedy search is incomplete without systematic checking of repeated states.
• In the worst case, the time and space complexity of greedy search are both O(b^m), where b is the branching factor and m the maximum path length.
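A minimal sketch of greedy best-first search: the frontier is ordered by h(n) alone, with a repeated-state check. The graph and heuristic values below are hypothetical, chosen only to illustrate the ordering.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first: always expand the frontier node with the lowest h(n)."""
    frontier = [(h[start], start, [start])]
    explored = set()                       # systematic repeated-state check
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in graph.get(state, []):
            if nxt not in explored:
                heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None

# Hypothetical graph and heuristic estimates (h, not step costs)
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 6, "A": 2, "B": 4, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))  # ['S', 'A', 'G']
```

Since path costs g(n) are ignored entirely, the returned path minimizes h at each step but is not guaranteed to be the cheapest.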
A* (Star) Search in Artificial Intelligence
Algorithm A* (Hart et al., 1968)
f(n) = g(n) + h(n)
g(n) = cost of the cheapest path from the initial state to node n.
h(n) = cost of the cheapest path from node n to a goal state.
Types of State space search
• Heuristic Search Algorithms
Hill climbing, A* algorithm
• Uninformed Search Algorithms
Breadth-first search, depth-first search, depth-limited search, iterative deepening depth-first search, uniform cost search
Hill Climbing Algorithm (Local search Algorithm)

• The hill climbing algorithm is a heuristic search algorithm which continuously moves in the direction of increasing value to find the peak of the mountain, or the best solution to the problem.
• It keeps track of one current state and on each iteration moves to the neighboring state with the highest value; that is, it heads in the direction that provides the steepest ascent.
Hill Climbing Search Algorithm
• When the algorithm reaches a peak value where no neighbor has a higher value, it terminates.
• It is also called greedy local search, as it only searches its good immediate neighbor states and not beyond that.
• Hill climbing is mostly used when a good heuristic is available.
Hill Climbing Search Algorithm

• Different regions in the state space landscape:
• Local maximum is a state which is better than its neighbor states, but there is also another state which is higher than it.
• Global maximum is the best possible state of the state space landscape. It has the highest value of the objective function.
• Current state is a state in a landscape diagram where an agent is currently present.
• Flat local maximum is a flat space in the landscape where all the neighbor states of the current state have the same value.
• Shoulder is a plateau region which has an uphill edge.
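Steepest-ascent hill climbing can be sketched in a few lines: keep one current state and move to the best neighbor until no neighbor improves the objective value. The 1-D landscape below is a toy assumption used only to exercise the loop.

```python
def hill_climbing(initial, neighbors, value):
    """Steepest-ascent hill climbing: move to the best neighbor until stuck."""
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current                 # a peak (possibly only local) reached
        current = best

# Toy 1-D landscape: maximize f(x) = -(x - 5)^2 over integer states
value = lambda x: -(x - 5) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(0, neighbors, value))  # 5
```

On this unimodal landscape the global maximum is found; on a landscape with local maxima or plateaus, the same loop would stop at whichever peak it reaches first.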
Greedy best-first search
• Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly.
• Route-finding problems in Romania:
• Heuristic = straight-line distance heuristic (hSLD).
• If the goal is Bucharest, we need to know the straight-line distance to Bucharest.
• The values of hSLD cannot be computed from the problem description itself.
• It takes a certain amount of experience to know that hSLD is correlated with actual road distances and is, therefore, a useful heuristic.
Uniform-cost vs Greedy best-first
• Example: Given node A, having child nodes B, C, D with associated costs of (10, 5, 7). After expanding C, we see nodes E, F, G with costs of (40, 50, 60). Which node will be chosen by UCS and GBS? Why?
• Uniform-cost search:
• Greedy best-first search:
A* search
• The most widely known form of best-first search.
• It evaluates nodes by combining:
• g(n), the cost to reach the node, and
• h(n), the cost to get from the node to the goal:
• f(n) = g(n) + h(n)
• Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of the cheapest path from n to the goal,
f(n) = estimated cost of the cheapest solution through n.
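A minimal A* sketch: identical in shape to uniform-cost search, but the priority queue is ordered by f(n) = g(n) + h(n). The graph, step costs, and heuristic values below are hypothetical, with h chosen to be admissible.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: order the frontier by f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}                          # cheapest g found per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, step in graph.get(state, []):
            g2 = g + step
            if nxt not in best_g or g2 < best_g[nxt]:
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical graph with step costs, plus an admissible heuristic h
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)], "B": [("G", 3)]}
h = {"S": 7, "A": 6, "B": 2, "G": 0}
print(a_star(graph, h, "S", "G"))  # (['S', 'A', 'B', 'G'], 6)
```

Note how B is first reached with g = 4 via S-B, then re-queued with the cheaper g = 3 via S-A-B; with an admissible h, the first goal expansion is the optimal solution.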
Another example of a heuristic function (8-puzzle)
• The average solution cost for a randomly generated 8-puzzle instance is about 22 steps.
• The branching factor is about 3:
• When the empty tile is in the middle, four moves are possible; when it is in a corner, two; and when it is along an edge, three.
• This means that an exhaustive tree search to depth 22 would look at about 3^22 ≈ 3.1 × 10^10 states.
• This is reduced to 181,440 distinct states (which is still very large).
• The corresponding number for the 15-puzzle is roughly 10^13.
• We need heuristic functions that never overestimate the number of steps to the goal.
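Two standard never-overestimating heuristics for the 8-puzzle are sketched below: the number of misplaced tiles and the total Manhattan distance. States are assumed to be 9-tuples in row-major order with 0 for the blank.

```python
def misplaced_tiles(state, goal):
    """h1: the number of tiles out of place (the blank, tile 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal):
    """h2: total city-block distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue                       # the blank contributes nothing
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)       # two tiles one slide from home
print(misplaced_tiles(state, goal), manhattan_distance(state, goal))  # 2 2
```

Both are admissible because each out-of-place tile needs at least one move, and each tile must travel at least its Manhattan distance; h2 dominates h1 and usually guides A* better.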
Assignment
• Explain State Space Search in Artificial Intelligence with example
