Module 2

9 Hours
Module No. 2 Problem Solving methods

Problem graphs, Matching, Indexing and Heuristic functions - Hill Climbing - Depth first and Breadth first, Constraint satisfaction - Related algorithms, Measure of performance and analysis of search algorithms.

Dr Reeja S R
Professor- SCOPE
Problem-solving agents:
 Search: Searching is a step-by-step procedure for solving a search problem in a given
search space. A search problem has three main factors:
 Search Space: Search space represents a set of possible solutions, which a system may
have.
 Start State: It is a state from where agent begins the search.
 Goal test: It is a function which observes the current state and returns whether the goal
state has been reached.
 Search tree: A tree representation of the search problem is called a search tree. The
root of the search tree is the root node, which corresponds to the initial state.
 Actions: It gives the description of all the available actions to the agent.
 Transition model: A description of what each action does; it can be represented as a
transition model.
 Path Cost: It is a function which assigns a numeric cost to each path.
 Solution: It is an action sequence which leads from the start node to the goal
node.
 Optimal Solution: A solution that has the lowest cost among all solutions.
Properties of Search Algorithms:

 Completeness: A search algorithm is said to be complete if it is guaranteed to
return a solution whenever at least one solution exists for the given input.
 Optimality: If the solution found by an algorithm is guaranteed to be the best
solution (lowest path cost) among all solutions, then it is said to be an optimal
solution, and the algorithm is said to be optimal.
 Time Complexity: Time complexity is a measure of time for an algorithm to
complete its task.
 Space Complexity: It is the maximum storage space required at any point
during the search, expressed as a function of the complexity of the problem.
Types of search algorithms
Breadth-first Search:
 Breadth-first search is the most common search strategy for traversing a tree or graph.
This algorithm searches breadthwise in a tree or graph, so it is called breadth-first
search.
 BFS algorithm starts searching from the root node of the tree and expands all successor
nodes at the current level before moving to nodes of the next level.
 The breadth-first search algorithm is an example of a general-graph search algorithm.
 Breadth-first search is implemented using a FIFO queue data structure.
 Advantages:
 BFS will provide a solution if any solution exists.
 If there is more than one solution for a given problem, then BFS will find the
minimal solution, i.e., the one requiring the fewest steps.
 Disadvantages:
 It requires lots of memory since each level of the tree must be saved into memory to
expand the next level.
 BFS needs lots of time if the solution is far away from the root node.
Example:
 Time Complexity: The time complexity of BFS is determined by the number of nodes
traversed until the shallowest goal node, where d = depth of the shallowest solution
and b = branching factor (the number of successors of each node).
 T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)
 Space Complexity: The space complexity of BFS is given by the memory size of the
frontier, which is O(b^d).
 Completeness: BFS is complete, which means if the shallowest goal node is at
some finite depth, then BFS will find a solution.
 Optimality: BFS is optimal if path cost is a non-decreasing function of the
depth of the node.
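To make the BFS procedure above concrete, here is a minimal Python sketch. The adjacency-list graph and the start/goal labels are illustrative assumptions, not taken from the slides.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand nodes level by level using a FIFO queue."""
    frontier = deque([[start]])            # queue of paths, shallowest first
    explored = {start}
    while frontier:
        path = frontier.popleft()          # FIFO: oldest (shallowest) path first
        node = path[-1]
        if node == goal:                   # goal found at the shallowest depth
            return path
        for successor in graph.get(node, []):
            if successor not in explored:
                explored.add(successor)
                frontier.append(path + [successor])
    return None                            # no solution exists

# Hypothetical graph:
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': ['G'], 'D': ['G']}
print(bfs(graph, 'S', 'G'))                # ['S', 'A', 'C', 'G']
```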
2. Depth-first Search
 Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
 It is called the depth-first search because it starts from the root node and follows each
path to its greatest depth node before moving to the next path.
 DFS uses a stack data structure for its implementation.
 The process of the DFS algorithm is similar to the BFS algorithm.
 Advantage:
 DFS requires much less memory, as it only needs to store a stack of the nodes on the path
from the root node to the current node.
 It can reach the goal node faster than BFS (if it happens to traverse the right path first).
 Disadvantage:
 There is the possibility that many states keep re-occurring, and there is no guarantee of
finding a solution.
 DFS searches deep down a path and may descend into an infinite path and never return.
Example:
 Completeness: DFS search algorithm is complete within finite state space as
it will expand every node within a limited search tree.
 Time Complexity: The time complexity of DFS is proportional to the number of nodes
traversed by the algorithm. It is given by:
 T(b) = 1 + b + b^2 + b^3 + ... + b^m = O(b^m)
 Where m = the maximum depth of any node (which can be much larger than d, the depth of
the shallowest solution) and b = branching factor.
 Space Complexity: DFS needs to store only a single path from the root node (plus the
unexpanded siblings along it), hence the space complexity of DFS is equivalent to the
size of the fringe set, which is O(bm), i.e., linear in the maximum depth.
 Optimal: DFS is non-optimal, as it may reach the goal via a path with a large
number of steps or a high cost.
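A matching Python sketch of DFS using an explicit stack (the same assumed adjacency-list representation as in the BFS sketch above):

```python
def dfs(graph, start, goal):
    """Depth-first search: follow one path as deep as possible before backtracking."""
    stack = [[start]]                       # LIFO stack of paths
    while stack:
        path = stack.pop()                  # most recently generated path first
        node = path[-1]
        if node == goal:
            return path
        for successor in graph.get(node, []):
            if successor not in path:       # avoid cycles along the current path
                stack.append(path + [successor])
    return None

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': ['G'], 'D': ['G']}
print(dfs(graph, 'S', 'G'))                 # ['S', 'B', 'D', 'G']
```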
3. Depth-Limited Search Algorithm:

 Depth-limited search can be terminated with two Conditions of failure:


 Standard failure value: It indicates that the problem has no solution at all.
 Cutoff failure value: It indicates that no solution was found within the given
depth limit (a solution may still exist at a greater depth).
 Advantages:
 Depth-limited search is Memory efficient.
 Disadvantages:
 Depth-limited search also has a disadvantage of incompleteness.
 It may not be optimal if the problem has more than one solution.
Example:
 Completeness: DLS is complete if a solution exists within the depth limit ℓ.
 Time Complexity: The time complexity of DLS is O(b^ℓ).
 Space Complexity: The space complexity of DLS is O(b×ℓ).
 Optimal: Depth-limited search can be viewed as a special case of DFS, and it
is also not optimal even if ℓ>d.
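A minimal recursive sketch of depth-limited search distinguishing the two failure values described above (the graph is the same assumed adjacency-list dictionary as in the earlier sketches):

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """DFS that gives up below a fixed depth limit.
    Returns a path, the string 'cutoff' (cutoff failure), or None (standard failure)."""
    path = [node] if path is None else path
    if node == goal:
        return path
    if limit == 0:
        return 'cutoff'                        # cutoff failure: depth limit reached
    cutoff_occurred = False
    for successor in graph.get(node, []):
        result = depth_limited_search(graph, successor, goal, limit - 1,
                                      path + [successor])
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None   # standard failure: no solution at all

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': ['G'], 'D': ['G']}
print(depth_limited_search(graph, 'S', 'G', limit=3))   # ['S', 'A', 'C', 'G']
print(depth_limited_search(graph, 'S', 'G', limit=1))   # 'cutoff'
```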
4. Uniform-cost Search Algorithm:
 Uniform-cost search is a searching algorithm used for traversing a weighted tree or
graph. This algorithm comes into play when a different cost is available for each
edge. The primary goal of the uniform-cost search is to find a path to the goal
node which has the lowest cumulative cost. Uniform-cost search expands nodes in order
of their path cost from the root node. It can be used on any graph/tree where the
optimal cost is required. Uniform-cost search is implemented with a priority queue,
which gives highest priority to the lowest cumulative path cost. Uniform-cost search is
equivalent to BFS if the path cost of all edges is the same.
 Advantages:
 Uniform cost search is optimal because at every state the path with the least cost
is chosen.
 Disadvantages:
 It does not care about the number of steps involved in the search, only about path cost.
Because of this, the algorithm may get stuck in an infinite loop (e.g., if there is a cycle
of zero-cost edges).
Example:
 Completeness:
 Uniform-cost search is complete: if there is a solution, UCS will find it.
 Time Complexity:
 Let C* be the cost of the optimal solution and ε the minimum cost of a single step toward
the goal. Then the number of steps on the optimal path is at most C*/ε + 1 (the +1 because
we also count the start state).
 Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
 Space Complexity:
 By the same reasoning, the worst-case space complexity of uniform-cost search is
O(b^(1 + ⌊C*/ε⌋)).
 Optimal:
 Uniform-cost search is always optimal, as it always expands the path with the lowest path cost
(provided every step cost is positive).
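A minimal sketch of uniform-cost search with a priority queue, as described above. Here `graph` maps each node to (successor, edge_cost) pairs; the example edge costs are hypothetical.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Always expand the frontier node with the lowest cumulative path cost g(n)."""
    frontier = [(0, start, [start])]              # priority queue ordered by path cost
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                          # goal test on expansion => optimal
            return cost, path
        for successor, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < best_cost.get(successor, float('inf')):
                best_cost[successor] = new_cost
                heapq.heappush(frontier, (new_cost, successor, path + [successor]))
    return None

# Hypothetical weighted graph:
graph = {'S': [('A', 1), ('B', 5)], 'A': [('B', 2)], 'B': [('G', 1)], 'G': []}
print(uniform_cost_search(graph, 'S', 'G'))       # (4, ['S', 'A', 'B', 'G'])
```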
5. Iterative deepening depth-first Search:
 The iterative deepening algorithm is a combination of DFS and BFS algorithms.
This search algorithm finds out the best depth limit and does it by gradually
increasing the limit until a goal is found.
 This algorithm performs depth-first search up to a certain "depth limit", and it
keeps increasing the depth limit after each iteration until the goal node is found.
 This Search algorithm combines the benefits of Breadth-first search's fast search
and depth-first search's memory efficiency.
 Iterative deepening is a useful uninformed search strategy when the search space is
large and the depth of the goal node is unknown.
 Advantages:
 It combines the benefits of the BFS and DFS search algorithms in terms of fast search
and memory efficiency.
 Disadvantages:
 The main drawback of IDDFS is that it repeats all the work of the previous phase.
Example:
 Completeness:
 This algorithm is complete if the branching factor is finite.
 Time Complexity:
 If b is the branching factor and d is the depth of the shallowest goal, then the worst-case
time complexity is O(b^d).
 Space Complexity:
 The space complexity of IDDFS is O(bd), i.e., linear in the depth.
 Optimal:
 IDDFS algorithm is optimal if path cost is a non- decreasing function of the
depth of the node.
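A minimal sketch of iterative deepening, wrapping a depth-limited DFS and raising the limit one level at a time (the `max_depth` cap and the adjacency-list graph are illustrative assumptions):

```python
def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None
        for successor in graph.get(node, []):
            if successor not in path:                  # avoid cycles on this path
                result = dls(successor, limit - 1, path + [successor])
                if result is not None:
                    return result
        return None

    for depth in range(max_depth + 1):                 # gradually increase the limit
        result = dls(start, depth, [start])
        if result is not None:
            return result
    return None

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': ['G'], 'D': ['G']}
print(iterative_deepening_search(graph, 'S', 'G'))     # ['S', 'A', 'C', 'G']
```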
6. Bidirectional Search Algorithm:
 Bidirectional search runs two simultaneous searches, one forward from the initial
state (forward search) and the other backward from the goal node (backward search),
to find the goal. It replaces one large search graph with two smaller subgraphs, one
growing from the initial vertex and the other from the goal vertex. The search stops
when the two frontiers intersect.
 Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
 Advantages:
 Bidirectional search is fast.
 Bidirectional search requires less memory
 Disadvantages:
 Implementation of the bidirectional search tree is difficult.
 In bidirectional search, one should know the goal state in advance.
Example:
 Completeness: Bidirectional Search is complete if we use BFS in both
searches.
 Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)),
since each search only needs to reach about half the solution depth.
 Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).
 Optimal: Bidirectional search is Optimal.
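A minimal sketch of bidirectional search using two BFS frontiers, assuming an undirected adjacency-list graph (so the backward search can follow the same edges); node labels are illustrative:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Grow one BFS frontier from the start and one from the goal; stop on contact."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}   # discovered nodes per side
    frontier_f, frontier_b = deque([start]), deque([goal])

    def build_path(meet):
        # walk back to the start, then forward from the meeting node to the goal
        left, n = [], meet
        while n is not None:
            left.append(n)
            n = parents_f[n]
        left.reverse()
        n = parents_b[meet]
        while n is not None:
            left.append(n)
            n = parents_b[n]
        return left

    while frontier_f and frontier_b:
        for frontier, parents, other in ((frontier_f, parents_f, parents_b),
                                         (frontier_b, parents_b, parents_f)):
            node = frontier.popleft()
            for succ in graph.get(node, []):
                if succ not in parents:
                    parents[succ] = node
                    if succ in other:               # the two frontiers intersect
                        return build_path(succ)
                    frontier.append(succ)
    return None

graph = {'S': ['A', 'B'], 'A': ['S', 'C'], 'B': ['S'], 'C': ['A', 'G'], 'G': ['C']}
print(bidirectional_search(graph, 'S', 'G'))        # ['S', 'A', 'C', 'G']
```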
Informed Search Algorithms
 Heuristics function: Heuristic is a function which is used in Informed Search, and it finds the
most promising path.
 It takes the current state of the agent as its input and produces an estimate of how close
the agent is to the goal.
 The heuristic method, however, might not always give the best solution, but it is guaranteed
to find a good solution in reasonable time.
 Heuristic function estimates how close a state is to the goal.
 It is represented by h(n), and it estimates the cost of an optimal path from the given state
to the goal state. The value of the heuristic function is always positive.
 h(n) <= h*(n)
 Here h(n) is the heuristic (estimated) cost and h*(n) is the true cost of an optimal path
from n to the goal. The heuristic cost should therefore be less than or equal to the true
cost; a heuristic with this property is called admissible.
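As a small illustration of this admissibility condition (a sketch with hypothetical city coordinates, not from the slides): in route finding, the straight-line distance from a state to the goal can never exceed the true road distance, so it satisfies h(n) <= h*(n).

```python
import math

# Hypothetical coordinates; the straight-line distance to the goal is admissible
# because no road between two points can be shorter than the straight line.
coords = {'S': (0, 0), 'A': (2, 1), 'B': (1, 3), 'G': (4, 3)}

def h(node, goal='G'):
    (x1, y1), (x2, y2) = coords[node], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)    # Euclidean distance, h(n) <= h*(n)

print(round(h('A'), 2))                    # estimated cost from A to the goal
```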
Pure Heuristic Search:

 Pure heuristic search is the simplest form of heuristic search algorithm. It expands
nodes based on their heuristic value h(n). It maintains two lists, OPEN and CLOSED.
The CLOSED list holds the nodes that have already been expanded, and the OPEN list
holds the nodes that have not yet been expanded.
 On each iteration, the node n with the lowest heuristic value is expanded, all its
successors are generated, and n is moved to the CLOSED list. The algorithm continues
until a goal state is found.
 In the informed search we will discuss two main algorithms which are given
below:
 Best First Search Algorithm(Greedy search)
 A* Search Algorithm
1.) Best-first Search Algorithm (Greedy Search):
 f(n) = h(n), where h(n) = estimated cost from node n to the goal. Greedy best-first search expands the node that appears closest to the goal according to the heuristic alone.
 The greedy best first algorithm is implemented by the priority queue.
 Best first search algorithm:
 Step 1: Place the starting node into the OPEN list.
 Step 2: If the OPEN list is empty, Stop and return failure.
 Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the
CLOSED list.
 Step 4: Expand the node n, and generate the successors of node n.
 Step 5: Check each successor of node n, and find whether any node is a goal node or not. If any
successor node is goal node, then return success and terminate the search, else proceed to Step 6.
 Step 6: For each successor node, the algorithm computes the evaluation function f(n) and then checks
whether the node is already in the OPEN or CLOSED list. If it is in neither list, add it to the
OPEN list.
 Step 7: Return to Step 2.
 Advantages:
 Best-first search can switch between BFS-like and DFS-like behaviour, gaining the advantages of both algorithms.
 This algorithm is more efficient than the BFS and DFS algorithms.
 Disadvantages:
 It can behave as an unguided depth-first search in the worst case scenario.
 It can get stuck in a loop as DFS.
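A minimal sketch of the procedure above. It simplifies Step 6 by checking only the CLOSED list, and the graph and heuristic values are hypothetical (chosen so the result mirrors the S -> B -> F -> G path of the worked example that follows):

```python
import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Best-first search ordered purely by the heuristic: f(n) = h(n)."""
    open_list = [(h[start], start, [start])]      # priority queue keyed on h(n)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)  # node with the lowest h(n)
        if node == goal:
            return path
        closed.add(node)
        for successor in graph.get(node, []):
            if successor not in closed:
                heapq.heappush(open_list,
                               (h[successor], successor, path + [successor]))
    return None

# Hypothetical graph and heuristic values:
graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G'],
         'A': [], 'E': [], 'I': [], 'G': []}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
print(greedy_best_first_search(graph, h, 'S', 'G'))   # ['S', 'B', 'F', 'G']
```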
Example:

 Path Cost(S----> B----->F----> G) =(4+2+0)=6


 Distance(S----> B----->F----> G) =(2+1+3)= 6
Expand the nodes of S and put in the CLOSED list

 Initialization: Open [A, B], Closed [S]


 Iteration 1: Open [A], Closed [S, B]
 Iteration 2: Open [E, F, A], Closed [S, B]
                  : Open [E, A], Closed [S, B, F]
 Iteration 3: Open [I, G, E, A], Closed [S, B, F]
                  : Open [I, E, A], Closed [S, B, F, G]
 Hence the final solution path will be: S----> B----->F----> G
 Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
 Space Complexity: The worst-case space complexity of greedy best-first search is also
O(b^m), where m is the maximum depth of the search space.
 Complete: Greedy best-first search is incomplete in general, even if the given state
space is finite.
 Optimal: The greedy best-first search algorithm is not optimal.
Find the Path cost and Distance of a
graph using Greedy Search

 shortest h(n) is E
 shortest h(n) is F
2.) A* Search Algorithm:
 A* search is the most commonly known form of best-first search. It uses the
heuristic function h(n) and the cost to reach node n from the start state, g(n).
It combines features of UCS and greedy best-first search, which lets it solve
problems efficiently. A* finds the shortest path through the search space using
the heuristic function; it expands a smaller search tree and provides an optimal
result faster. A* is similar to UCS except that it orders nodes by g(n) + h(n)
instead of g(n).
 In the A* search algorithm we use the search heuristic as well as the cost to
reach the node. Hence we can combine both costs as follows; this sum,
f(n) = g(n) + h(n), is called the fitness number (evaluation function).
Algorithm of A* search:
 Step1: Place the starting node in the OPEN list.
 Step 2: Check if the OPEN list is empty or not, if the list is empty then return failure and stops.
 Step 3: Select the node from the OPEN list which has the smallest value of evaluation function (g+h), if
node n is goal node then return success and stop, otherwise
 Step 4: Expand node n and generate all of its successors, and put n into the closed list. For each
successor n', check whether n' is already in the OPEN or CLOSED list, if not then compute evaluation
function for n' and place into Open list.
 Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back pointer
which reflects the lowest g(n') value.
 Step 6: Return to Step 2.
 Advantages:
 A* performs better than other search algorithms.
 A* search algorithm is optimal and complete.
 This algorithm can solve very complex problems.
 Disadvantages:
 It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
 A* search algorithm has some complexity issues.
 The main drawback of A* is memory requirement as it keeps all generated nodes in the memory, so it is
not practical for various large-scale problems.
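A minimal sketch of A*. The edge costs and heuristic values below are assumptions reconstructed to be consistent with the iteration costs in the worked example that follows; they are not read from the slides' figure.

```python
import heapq

def a_star_search(graph, h, start, goal):
    """A*: order the open list by f(n) = g(n) + h(n), where g(n) is the cost so far.
    `graph` maps a node to a list of (successor, edge_cost) pairs."""
    open_list = [(h[start], 0, start, [start])]        # entries are (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)    # lowest f(n) first
        if node == goal:
            return g, path
        for successor, edge_cost in graph.get(node, []):
            new_g = g + edge_cost
            if new_g < best_g.get(successor, float('inf')):
                best_g[successor] = new_g
                heapq.heappush(open_list, (new_g + h[successor], new_g,
                                           successor, path + [successor]))
    return None

# Assumed graph and admissible heuristic values:
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)],
         'B': [], 'C': [('D', 3), ('G', 4)], 'D': [], 'G': []}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(a_star_search(graph, h, 'S', 'G'))               # (6, ['S', 'A', 'C', 'G'])
```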
Example:
Initialization: {(S, 5)}
Iteration1: {(S--> A, 4), (S-->G, 10)}
Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}
Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}
Iteration 4 will give the final result, as S--->A--->C--->G it provides the optimal path
with cost 6.
Points to remember:
•A* algorithm returns the path which occurred first, and it does not search for all
remaining paths.
•The efficiency of A* algorithm depends on the quality of heuristic.
•A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.
 Complete: A* algorithm is complete as long as:
 Branching factor is finite.
 Cost at every action is fixed.
 Optimal: A* search algorithm is optimal if it follows below two conditions:
 Admissible: the first condition required for optimality is that h(n) be an
admissible heuristic for A* tree search. An admissible heuristic is optimistic
in nature (it never overestimates the true cost).
 Consistency: Second required condition is consistency for only A* graph-
search.
 If the heuristic function is admissible, then A* tree search will always find the
least cost path.
 Time Complexity: The time complexity of A* search algorithm depends on
heuristic function, and the number of nodes expanded is exponential to the
depth of solution d. So the time complexity is O(b^d), where b is the
branching factor.
 Space Complexity: The space complexity of A* search algorithm is O(b^d)
Hill Climbing Algorithm in Artificial Intelligence

 Hill climbing algorithm is a local search algorithm which continuously moves in the
direction of increasing elevation/value to find the peak of the mountain or best
solution to the problem. It terminates when it reaches a peak value where no
neighbor has a higher value.
 Hill climbing algorithm is a technique which is used for optimizing the
mathematical problems. One of the widely discussed examples of Hill climbing
algorithm is Traveling-salesman Problem in which we need to minimize the
distance traveled by the salesman.
 It is also called greedy local search, as it only looks at its immediate neighbour
states and not beyond them.
 A node of hill climbing algorithm has two components which are state and value.
 Hill Climbing is mostly used when a good heuristic is available.
 In this algorithm, we don't need to maintain and handle the search tree or graph as
it only keeps a single current state.
Features of Hill Climbing:

 Following are some main features of Hill Climbing Algorithm:


 Generate and Test variant: Hill climbing is a variant of the Generate and Test
method. The Generate and Test method produces feedback which helps to
decide which direction to move in the search space.
 Greedy approach: Hill-climbing algorithm search moves in the direction
which optimizes the cost.
 No backtracking: It does not backtrack the search space, as it does not
remember the previous states.
State-space Diagram for Hill Climbing:
Different regions in the state space landscape:

 Local Maximum: Local maximum is a state which is better than its neighbor


states, but there is also another state which is higher than it.
 Global Maximum: Global maximum is the best possible state of state space
landscape. It has the highest value of objective function.
 Current state: It is a state in a landscape diagram where an agent is currently
present.
 Flat local maximum: It is a flat area of the landscape where all the neighbour
states of the current state have the same value.
 Shoulder: It is a plateau region which has an uphill edge.
Types of Hill Climbing Algorithm:

 Simple hill Climbing:


 Steepest-Ascent hill-climbing:
 Stochastic hill Climbing:
1. Simple Hill Climbing:
 It evaluates one neighbouring node state at a time and selects the first one
that improves the current cost, setting it as the current state.
 This algorithm has the following features:
 Less time consuming
 Less optimal solution, and the solution is not guaranteed
 Algorithm for Simple Hill Climbing:
 Step 1: Evaluate the initial state, if it is goal state then return success and Stop.
 Step 2: Loop Until a solution is found or there is no new operator left to apply.
 Step 3: Select and apply an operator to the current state.
 Step 4: Check new state:
 If it is goal state, then return success and quit.
 Else if it is better than the current state then assign new state as a current state.
 Else if not better than the current state, then return to step2.
 Step 5: Exit.
2. Steepest-Ascent hill climbing:
 The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. It
examines all the neighbouring nodes of the current state and selects the neighbour
node which is closest to the goal state. This algorithm consumes more time because it
examines multiple neighbours.
 Algorithm for Steepest-Ascent hill climbing:
 Step 1: Evaluate the initial state, if it is goal state then return success and stop, else
make current state as initial state.
 Step 2: Loop until a solution is found or the current state does not change.
 Let SUCC be a state such that any successor of the current state will be better than it.
 For each operator that applies to the current state:
 Apply the new operator and generate a new state.
 Evaluate the new state.
 If it is goal state, then return it and quit, else compare it to the SUCC.
 If it is better than SUCC, then set new state as SUCC.
 If the SUCC is better than the current state, then set current state to SUCC.
 Step 3: Exit.
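A minimal sketch of the steepest-ascent loop above. The `neighbours` and `value` functions, and the toy objective being maximised, are hypothetical placeholders:

```python
import random

def steepest_ascent_hill_climbing(initial_state, neighbours, value):
    """Repeatedly move to the best neighbour; stop when no neighbour is better.
    `neighbours(state)` yields candidate successors, `value(state)` is the objective."""
    current = initial_state
    while True:
        best_succ = max(neighbours(current), key=value, default=current)
        if value(best_succ) <= value(current):   # no uphill move left: local maximum
            return current
        current = best_succ

# Toy usage: maximise f(x) = -(x - 7)^2 over integer states.
value = lambda x: -(x - 7) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(steepest_ascent_hill_climbing(random.randint(0, 20), neighbours, value))   # 7
```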
3. Stochastic hill climbing:

 Stochastic hill climbing does not examine all of its neighbours before moving.
Instead, this search algorithm selects one neighbour node at random and decides
whether to make it the current state or to examine another state.
Problems in Hill Climbing Algorithm:

 1. Local Maximum: A local maximum is a peak state in the landscape which is


better than each of its neighboring states, but there is another state also
present which is higher than the local maximum.
 Solution: Backtracking can be a solution to the local maximum problem in the
state space landscape. Maintain a list of promising paths so that the algorithm
can backtrack and explore other paths as well.
 2. Plateau: A plateau is a flat area of the search space in which all the
neighbour states of the current state have the same value, so the algorithm
cannot find a best direction in which to move. A hill-climbing search may get
lost in the plateau area.
 Solution: The solution for a plateau is to take big steps (or very small steps)
while searching. For example, randomly select a state far away from the current
state, so that the algorithm may land in a non-plateau region.
 3. Ridges: A ridge is a special form of the local maximum. It has an area
which is higher than its surrounding areas, but itself has a slope, and cannot
be reached in a single move.
 Solution: With the use of bidirectional search, or by moving in different
directions, we can improve this problem.
Simulated Annealing:

 A hill-climbing algorithm which never makes a move towards a lower value is
guaranteed to be incomplete, because it can get stuck on a local maximum. If,
instead, the algorithm performs a pure random walk by moving to randomly chosen
successors, it may be complete but is very inefficient. Simulated annealing is an
algorithm which yields both efficiency and completeness.
 In metallurgy, annealing is the process of tempering or hardening metals and glass
by heating them to a high temperature and then cooling them gradually, allowing the
material to reach a low-energy crystalline state. The same idea is used in simulated
annealing: the algorithm picks a random move instead of the best move. If the random
move improves the state, it is accepted. Otherwise, the algorithm accepts the move
with some probability less than 1; this probability decreases with how much the move
worsens the state and with the temperature, so downhill moves become rarer as the
search proceeds.
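A minimal sketch of simulated annealing under common assumptions the slides do not fix: an exponential acceptance probability exp(delta/T) and a geometric cooling schedule. The toy objective and move generator are hypothetical.

```python
import math
import random

def simulated_annealing(initial_state, neighbour, value,
                        temperature=100.0, cooling=0.95, steps=1000):
    """Pick a random move; always accept an improving move, otherwise accept it
    with probability exp(delta / T), which shrinks as T is gradually lowered."""
    current = initial_state
    t = temperature
    for _ in range(steps):
        candidate = neighbour(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate            # accept uphill always, downhill sometimes
        t = max(t * cooling, 1e-9)         # cooling schedule
    return current

# Toy usage: maximise f(x) = -(x - 7)^2 with random +/-1 moves.
value = lambda x: -(x - 7) ** 2
neighbour = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(0, neighbour, value))   # usually 7 or close to it
```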
Heuristic Search Technique

 Uninformed search algorithm


 Informed search algorithms
Breadth-first search

 Algorithm
 1. Create a variable called LIST and set it to contain just the starting state.
 2. Loop until a goal state is found or LIST is empty, Do
 a. Remove the first element from LIST and call it E. If LIST is empty, quit.
 b. For each rule that can match the state E, Do
 (i) Apply the rule to generate a new state.
 (ii) If the new state is a goal state, quit and return this state.
 (iii) Otherwise, add the new state to the end of LIST.
 Advantages
 1. Guaranteed to find an optimal solution (in terms of shortest number of
steps to reach the goal).
 2. Can always find a goal node if one exists (complete).
 Disadvantages
 1. High storage requirement: exponential with tree depth.
Depth-first search
 A search strategy that extends the current path as far as possible before backtracking to the
last choice point and trying the next alternative path is called Depth-first search (DFS).
 • This strategy does not guarantee that the optimal solution has been found.
 • In this strategy, search reaches a satisfactory solution more rapidly than breadth first, an
advantage when the search space is large.
 Algorithm
 Depth-first search applies operators to each newly generated state, trying to drive directly
toward the goal.
 1. If the starting state is a goal state, quit and return success.
 2. Otherwise, do the following until success or failure is signalled:
 a. Generate a successor E to the starting state. If there are no more successors, then signal
failure.
 b. Call Depth-first Search with E as the starting state.
 c. If success is returned signal success; otherwise, continue in the loop.
 Advantages
 1. Low storage requirement: linear with tree depth.
 2. Easily programmed: function call stack does most of the work of
maintaining state of the search.
 Disadvantages
 1. May find a sub-optimal solution (one that is deeper or more costly than the
best solution).
 2. Incomplete: without a depth bound, may not find a solution even if one
exists.
Bounded depth-first search

 Depth-first search can spend much time (perhaps infinite time) exploring a
very deep path that does not contain a solution, when a shallow solution
exists. An easy way to solve this problem is to put a maximum depth bound on
the search. Beyond the depth bound, a failure is generated automatically
without exploring any deeper.
 Problems:
 1. It’s hard to guess how deep the solution lies.
 2. If the estimated depth is too deep (even by 1), the computer time used increases
dramatically, by a factor of b for each extra level (b^extra in total).
 3. If the estimated depth is too shallow, the search fails to find a solution; all
that computer time is wasted.
Heuristics

 A heuristic is a method that improves the efficiency of the search process


 Heuristic search: To find a solution in proper time rather than a complete
solution in unlimited time we use heuristics.
 ‘A heuristic function is a function that maps from problem state descriptions
to measures of desirability, usually represented as numbers’. Heuristic search
methods use knowledge about the problem domain and choose promising
operators first.
 For finding a solution, by using the heuristic technique, one should carry out
the following steps:
 1. Add domain-specific information to help select the best path along which to
continue searching.
 2. Define a heuristic function h(n) that estimates the ‘goodness’ of a node n.
Specifically, h(n) = estimated cost(or distance) of minimal cost path from n to
a goal state.
 3. The term, heuristic means ‘serving to aid discovery’ and is an estimate,
based on domain specific information that is computable from the current
state description of how close we are to a goal.
 1. State: The current city in which the traveller is located.
 2. Operators: Roads linking the current city to other cities.
 3. Cost Metric: The cost of taking a given road between cities.
 4. Heuristic information: The search could be guided by the direction of the
goal city from the current city, or we could use airline distance as an
estimate of the distance to the goal.

 Heuristic search techniques – Thumb rule


Characteristics of heuristic search

 Heuristics are knowledge about domain, which help search and reasoning in its domain.
 • Heuristic search incorporates domain knowledge to improve efficiency over blind
search.
 • Heuristic is a function that, when applied to a state, returns value as estimated merit
of state, with respect to goal.
 Heuristics might (for various reasons) underestimate or overestimate the merit of a state with
respect to the goal.
 Heuristics that underestimate are desirable and are called admissible.
 • Heuristic evaluation function estimates likelihood of given state leading to goal state.
 • Heuristic search function estimates cost from current state to goal, presuming function
is efficient.
Heuristic search compared with other
search
Generate and Test Strategy

 Algorithm: Generate-And-Test
 1. Generate a possible solution.
 2. Test to see if this is the expected solution.
 3. If the solution has been found, quit; else go to step 1.
 Systematic Generate-And-Test
 Generate-And-Test And Planning
Generate-And-Test And Planning
 Algorithm:
 Analysis :
 The worst-case time complexity of Best First Search is O(n log n), where n is the number of
nodes. In the worst case, we may have to visit all nodes before we reach the goal.
 Note that priority queue is implemented using Min(or Max) Heap, and insert
and remove operations take O(log n) time.
  Performance of the algorithm depends on how well the cost or evaluation
function is designed.
A* Search Algorithm

 Some terminology
 A node is a state that the problem's world can be in.
 State space search
 Heuristics and Algorithms
 Cost
 Path finding
Implementing A*(Pseudocode)
 8-puzzle problem
 Advantages:
 It is complete and optimal.
 It is the best one from other techniques. It is used to solve very complex
problems.
 It is optimally efficient, i.e. there is no other optimal algorithm guaranteed to
expand fewer nodes than A*.
 Disadvantages:
 It is complete and optimal only if the branching factor is finite and every action
has a fixed cost.
AO* Search: (And-Or) Graph

 OPEN:
 It contains the nodes that have been traversed but have not yet been marked
solvable or unsolvable.
 CLOSE:
 It contains the nodes that have already been processed.
Implementation:
 Advantages:
 It is an optimal algorithm.
 It traverses according to the ordering of nodes and can be used for both OR and AND graphs.
 Disadvantages:
 Sometimes, for unsolvable nodes, it cannot find the optimal path. Its complexity is higher
than that of other algorithms.
How AO* works
 Let's try to understand it with the following diagram

 The algorithm always moves towards a lower cost value.


 Basically, we calculate the cost function F(n) = G(n) + H(n), where H is the
heuristic (estimated) value of a node and G is the actual cost or edge value
(here a unit value).
 Here every edge value is taken as 1, meaning we effectively focus on the
heuristic values.
 The Purple color values are edge values (here all are same that is one).
 The Red color values are Heuristic values for nodes.
 The Green color values are New Heuristic values for nodes.
 Procedure:
 In the above diagram there are two ways from A: to D, or to B-C (an AND
condition). We calculate the cost to select a path:
 F(A-D) = 1 + 10 = 11 and F(A-BC) = 1 + 1 + 6 + 12 = 20
 Since F(A-D) is less than F(A-BC), the algorithm chooses the path through D.
 From D we have one choice, the AND arc F-E.
 F(A-D-FE) = 1 + 1 + 4 + 4 = 10
 Basically, 10 is the cost of reaching F-E from D, and the heuristic value of node
D should reflect the cost of reaching F-E from it. So the new heuristic value of D
is 10.
 The cost from A-D then remains the same, that is 11.
 Suppose we search this path and reach the goal state; then we would never explore
the other path. (This is what AO* prescribes, but here we explore the other path as
well, just to see what happens.)
 Let's explore the other path:
 In the above diagram we have two ways from A: to D, or to B-C (because of the AND
condition). Calculate the costs to select a path:
 F(A-D) = 1 + 10 = 11 and F(A-BC) = 1 + 1 + 6 + 12 = 20
 We know F(A-BC) costs more, but let's take a look anyway.
 From B we have two paths, to G and to H; let's calculate the costs:
 F(B-G) = 5 + 1 = 6 and F(B-H) = 7 + 1 = 8
 The cost F(B-H) is more than F(B-G), so we take the path B-G.
 The heuristic value of I is 1, but let's calculate the cost from G through I:
 F(G-I) = 1 + 1 = 2, which is less than G's heuristic value of 5. So the new heuristic value
of G is 2.
 Since G has a new value, the cost from B through G must also change. The new cost from B to G is:
 F(B-G) = 1 + 2 = 3, meaning the new heuristic value of B is 3.
 But A is associated with both B and C (an AND arc).
 As we can see from the diagram, C has only one choice, one node to explore, namely J. The
heuristic value of C is 12.
 The cost from C to J is F(C-J) = 1 + 1 = 2, which is less than C's heuristic value.
 So the new heuristic value of C is 2.
 The new cost from A through B-C is F(A-BC) = 1 + 1 + 2 + 3 = 7, which is less than F(A-D) = 11.
 In this case choosing the path A-BC would be more cost-effective than A-D.
 But this only becomes visible if the algorithm explores that path as well; according to the
algorithm, it will not explore this path once it has committed to A-D (here we did it just to
show how the other path can also turn out to be better).
 So it is not the case that AO* always finds the optimal solution; it does in some cases, but
not in all.
PROBLEM REDUCTION

 Problem Reduction with AO* Algorithm.


 When a problem can be divided into a set of sub problems, where each sub problem can
be solved separately and a combination of these will be a solution, AND-OR graphs or
AND – OR trees are used for representing the solution. The decomposition of the problem
or problem reduction generates AND arcs. One AND arc may point to any number of
successor nodes.
 All of these must be solved in order for the arc to point to a solution. Just as in an
OR graph, several arcs may emerge from a single node, indicating a variety of ways in
which the original problem might be solved.
 Hence the graph is known as an AND-OR graph rather than simply an AND graph. The figure
shows an AND-OR graph.
1. Traverse the graph, starting at the initial node and following the current best
path, and accumulate the set of nodes that are on that path and have not yet been
expanded.
2. Pick one of these unexpanded nodes and expand it. Add its successors to the graph
and compute f' (the cost of the remaining distance) for each of them.
3. Change the f' estimate of the newly expanded node to reflect the new information
produced by its successors. Propagate this change backward through the graph, and at
each node decide which of its successor arcs is the most promising and mark it as part
of the current best path.
The working of AO* algorithm is
illustrated in figure as follows
AO* Search Procedure.
Constraint satisfaction

 Until a complete solution is found or until all paths have led to dead ends, do
 1. select an unexpanded node of the search graph.
 2. Apply the constraint inference rules to the selected node to generate all
possible new
 constraints.
 3. If the set of constraints contains a contradiction, then report that this path is
a dead end.
 4. If the set of constraints describes a complete solution then report success.
 5. If neither a contradiction nor a complete solution has been found, then apply the
rules to generate new partial solutions. Insert these partial solutions into the
search graph.
Example

 CONSTRAINTS:
 1. No two digits can be assigned to the same letter.
 2. Only a single-digit number can be assigned to a letter.
 3. No two letters can be assigned the same digit.
 4. Assumptions can be made at various levels, such that they do not contradict each
other.
 5. The problem can be decomposed into secured constraints; a constraint
satisfaction approach may be used.
 6. Any of the search techniques may be used.
 7. Backtracking may be performed as applicable to the applied search technique.
 8. Rules of arithmetic may be followed.
Solution Process:
 We are following the depth-first method to solve the problem.
 1. Initial guess m = 1, because the sum of two single digits can generate at most a carry of '1'.
 2. When m = 1, o = 0 or 1, because the largest single digit added to m = 1 can generate a sum
digit of either 0 or 1, depending on the carry received from the previous column. We conclude
that o = 0, because m is already 1 and we cannot assign the same digit to another letter
(constraint above).
 3. With m = 1 and o = 0, to get o = 0 we must have s = 8 or 9, again depending on the carry
received from the earlier column.
 The same process can be repeated further. The problem has to be decomposed into various
constraints, and each constraint is satisfied by guessing the possible digits for the letters
(given that the initial guesses have already been made). The rest of the process is shown in
the form of a tree, using depth-first search, for a clearer understanding of the solution
process.
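The worked example above appears to be the classic SEND + MORE = MONEY puzzle (hence m = 1, o = 0, s = 8 or 9). Assuming that, the following brute-force sketch simply tests assignments against the constraints; it is much less clever than the depth-first, constraint-propagating process the slides describe, but it shows the constraints in executable form.

```python
from itertools import permutations

def solve_send_more_money():
    """Find digits for SEND + MORE = MONEY: all letters distinct, no leading zeros."""
    letters = 'SENDMORY'                       # the 8 distinct letters of the puzzle
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a['S'] == 0 or a['M'] == 0:         # leading letters cannot be zero
            continue
        send  = a['S'] * 1000 + a['E'] * 100 + a['N'] * 10 + a['D']
        more  = a['M'] * 1000 + a['O'] * 100 + a['R'] * 10 + a['E']
        money = (a['M'] * 10000 + a['O'] * 1000 + a['N'] * 100
                 + a['E'] * 10 + a['Y'])
        if send + more == money:               # the arithmetic constraint
            return a
    return None

# Slow but simple: confirms m = 1, o = 0, s = 9 (9567 + 1085 = 10652).
print(solve_send_more_money())
```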
MEANS - ENDS ANALYSIS:-

 A problem is solved using means - ends analysis by


 1. Comparing the current state s1 to the goal state s2 and computing their
difference D12.
 2. Selecting an operator OP that is recommended for reducing the difference D12, and
satisfying its preconditions.
 3. Applying the operator OP if possible. If not, the current state is saved, a subgoal
is created, and means-ends analysis is applied recursively to reduce the subgoal.
 4. If the subgoal is solved, the saved state is restored and work resumes on the original
problem.
 (The first AI program to use means-ends analysis was GPS, the General Problem
Solver.)
 Means-ends analysis is useful for many human planning activities.
 Consider the example of planning for an office worker. Suppose we have a
difference table of three rules:
 1. If in our current state we are hungry, and in our goal state we are not
hungry, then either the "visit hotel" or the "visit canteen" operator is
recommended.
 2. If in our current state we do not have money, and in our goal state we
have money, then the "visit our bank" operator or the "visit secretary"
operator is recommended.
 3. If in our current state we do not know where something is, and in our goal
state we do know, then either the "visit office enquiry", "visit secretary" or
"visit co-worker" operator is recommended.
Apply A* algorithm

Node H(n)
A 10.4
B 6.7
C 4.0
D 8.9
E 6.9
F 3.0
G 0
