
DATA DRIVEN

ARTIFICIAL
INTELLIGENT SYSTEMS
SESSION NO: 4
TOPIC: HEURISTIC SEARCH TECHNIQUES: GREEDY, BFS, A*, HILL CLIMBING
HEURISTIC SEARCH TECHNIQUES

• All of the search methods in the preceding section are uninformed: they take no account of how promising a state is, i.e., of the cost likely to be incurred in reaching the goal from it.
• They use no information about where they are trying to get to unless they happen to stumble on a goal.
• A heuristic is a method that improves the efficiency of the search process.
• We use heuristics to find a good solution in reasonable time rather than a complete solution in unlimited time.
• Heuristic search methods use heuristic functions to evaluate how promising the next state is with respect to the goal state.
• A heuristic function h(n) takes a node n and returns a non-negative real number that is an estimate of the cost of the least-cost path from node n to a goal node.
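As a small illustration of a heuristic function (the coordinates below are invented for this sketch, not taken from the slides), straight-line distance to the goal is a common choice for route-finding problems:

```python
import math

# Hypothetical coordinates for three states; 'G' is the goal node.
coords = {'S': (0, 0), 'A': (2, 1), 'G': (4, 0)}

def h_sld(n, goal='G'):
    """Straight-line (Euclidean) distance from node n to the goal:
    a non-negative estimate of the least-cost path from n to the goal."""
    (x1, y1), (x2, y2) = coords[n], coords[goal]
    return math.hypot(x1 - x2, y1 - y2)

print(h_sld('S'))   # 4.0: S is estimated to be 4 units from the goal
```

Note that h(goal) = 0 and h never returns a negative number, as the definition above requires.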
INFORMED SEARCH STRATEGY

• An informed search strategy uses problem-specific knowledge beyond the definition of the problem itself, and can find solutions more efficiently than an uninformed strategy.

• We study the following two informed search strategies:

• Greedy Best First Search

• A* search

GREEDY BEST-FIRST SEARCH

• Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the heuristic function; that is, f(n) = h(n).

• The algorithm is called "greedy" because at each step it tries to get as close to the goal as it can.

• On the Romania route-finding example, greedy best-first search using hSLD (straight-line distance) finds a solution without ever expanding a node that is not on the solution path; hence, its search cost is minimal.

ALGORITHM:

1. Start with OPEN containing just the initial state.

2. Until a goal is found or there are no nodes left on OPEN, do:

• Pick the best node on OPEN.
• Generate its successors.
• For each successor:
  • If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
  • If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may already have.

Example: expand the nodes starting from S, moving each expanded node to the CLOSED list.
Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]
: Open [E, A], Closed [S, B, F]
Iteration 3: Open [I, G, E, A], Closed [S, B, F]
: Open [I, E, A], Closed [S, B, F, G]
Hence the final solution path will be: S → B → F → G

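The trace above can be reproduced with a short sketch. Since the original graph figure is not shown here, the adjacency list and heuristic values below are assumptions chosen to be consistent with the S → B → F → G trace, and `greedy_best_first_search` is a hypothetical helper name:

```python
import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Greedy best-first search: always expand the OPEN node with the
    smallest heuristic value h(n); the evaluation function is f(n) = h(n)."""
    open_list = [(h[start], start, [start])]   # priority queue ordered by h(n)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)                       # move the expanded node to CLOSED
        for neighbor in graph.get(node, []):
            if neighbor not in closed:
                heapq.heappush(open_list, (h[neighbor], neighbor, path + [neighbor]))
    return None

# Graph and heuristic values assumed to be consistent with the trace above.
graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G']}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
print(greedy_best_first_search(graph, h, 'S', 'G'))   # ['S', 'B', 'F', 'G']
```

At each step the node with the smallest h is popped first (B before A, F before E, G before I), which is exactly the expansion order shown in the iterations above.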
ADVANTAGES

• Simple and Easy to Implement: Greedy Best-First Search is a relatively straightforward algorithm, making it
easy to implement.
• Fast and Efficient: Greedy Best-First Search is a very fast algorithm, making it ideal for applications where
speed is essential.
• Low Memory Requirements: Greedy Best-First Search requires only a small amount of memory, making it
suitable for applications with limited memory.
• Flexible: Greedy Best-First Search can be adapted to different types of problems and can be easily extended to
more complex problems.
• Efficiency: if the heuristic function gives a good estimate of how close a node is to the solution, Greedy Best-First Search can be very efficient and find a solution quickly, even in large search spaces.

DISADVANTAGES

• Inaccurate Results: Greedy Best-First Search is not always guaranteed to find the optimal solution, as it is only
concerned with finding the most promising path.
• Local Optima: Greedy Best-First Search can get stuck in local optima, meaning that the path chosen may not be
the best possible path.
• Heuristic Function: Greedy Best-First Search requires a heuristic function in order to work, which adds
complexity to the algorithm.
• Lack of Completeness: Greedy Best-First Search is not a complete algorithm, meaning it may not always find a solution even if one exists.

• Time Complexity: The worst-case time complexity of Greedy best-first search is O(b^m), where b is the branching factor and m is the maximum depth of the search space.
• Space Complexity: The worst-case space complexity of Greedy best-first search is likewise O(b^m).
• Complete: Greedy best-first search is incomplete, even if the given state space is finite.
• Optimal: Greedy best-first search is not optimal.

A* SEARCH

• A* search is used to minimize the total estimated solution cost.

• It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the estimated cost to get from the node to the goal: f(n) = g(n) + h(n).

• Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of the cheapest path from n to the goal, we have f(n) = estimated cost of the cheapest solution through n.

A* SEARCH

• If we are trying to find the cheapest solution, a reasonable thing to try first is the node with the lowest value of f(n) = g(n) + h(n).

• The algorithm is identical to greedy best-first search except that A* uses g + h instead of h.

ALGORITHM

1. Start with OPEN containing only the initial node. Set that node's g value to 0, its h value to whatever it is, and its f value to h + 0, i.e. h. Set CLOSED to the empty list.
2. Until a goal node is found, repeat the following procedure:
1. If there are no nodes on OPEN, report failure.
2. Otherwise, pick the node on OPEN with the lowest f value. Call it BESTNODE.
3. Remove it from OPEN. Place it on CLOSED.
4. See if BESTNODE is a goal state. If so, exit and report a solution.
5. Otherwise, generate the successors of BESTNODE, but do not set BESTNODE to point to them yet.

3. For each SUCCESSOR, do the following:
a. Set SUCCESSOR to point back to BESTNODE.
(These backwards links will make it possible to recover the path once a solution is found.)

b. Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR

c. See if SUCCESSOR is the same as any node on OPEN. If so call the node OLD. See whether it is cheaper
to get to OLD via its current parent or SUCCESSOR via BESTNODE by comparing their g values. If
SUCCESSOR is cheaper, then reset OLD’s parent to point to BESTNODE, record the new cheaper path
in g(OLD) and update f’(OLD).

d. See if SUCCESSOR is the same as any node on CLOSED. If so, call that node OLD and compare g values as in step c. If the new path is cheaper, reset OLD's parent to point to BESTNODE and propagate the new cost downward: do a depth-first traversal of the tree starting at OLD, changing each node's g value (and thus also its f' value), terminating each branch when you reach either a node with no successors or a node to which an equivalent or better path has already been found.
e. If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN and add it
to the list of BESTNODE’s successors.
f. Compute f’(SUCCESSOR) = g(SUCCESSOR) + h’(SUCCESSOR)

• Initialization: {(S, 5)}
• Iteration 1: {(S→A, 4), (S→G, 10)}
• Iteration 2: {(S→A→C, 4), (S→A→B, 7), (S→G, 10)}
• Iteration 3: {(S→A→C→G, 6), (S→A→C→D, 11), (S→A→B, 7), (S→G, 10)}
• Iteration 4 gives the final result: S→A→C→G, the optimal path, with cost 6.

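The same search can be sketched in code. The edge costs and heuristic values below are assumptions reverse-engineered from the f-values in the trace above (the slide gives only the combined f-values), so treat them as illustrative:

```python
import heapq

def a_star_search(graph, h, start, goal):
    """A* search: always expand the OPEN node with the smallest
    f(n) = g(n) + h(n), where g is the cost accumulated so far."""
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}                           # cheapest known g per node
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if g > best_g.get(node, float('inf')):
            continue                              # a cheaper path was found later
        for neighbor, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = g2             # record the new cheaper path
                heapq.heappush(open_list,
                               (g2 + h[neighbor], g2, neighbor, path + [neighbor]))
    return None, float('inf')

# Edge costs and h values assumed to reproduce the f-values in the trace:
# e.g. f(S→A) = 1 + h(A) = 4, f(S→A→C→G) = 1 + 1 + 4 + 0 = 6.
graph = {'S': [('A', 1), ('G', 10)], 'A': [('C', 1), ('B', 2)],
         'C': [('G', 4), ('D', 3)]}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(a_star_search(graph, h, 'S', 'G'))   # (['S', 'A', 'C', 'G'], 6)
```

Note that the direct edge S→G (f = 10) stays on OPEN but is never expanded, because the route through A and C reaches the goal with the lower f-value of 6.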
ADVANTAGES & DISADVANTAGES

Advantages:

A* search is complete, optimal, and optimally efficient among all such algorithms.

Disadvantages:

A* search algorithm has some complexity issues.


The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for many large-scale problems.

Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function; in the worst case, the number of nodes expanded is exponential in the depth of the solution d. So the time complexity is O(b^d), where b is the branching factor.
Space Complexity: The space complexity of the A* search algorithm is likewise O(b^d).
Complete: The A* algorithm is complete as long as:
the branching factor is finite, and
every action has a fixed (positive) cost.
Optimal: The A* search algorithm is optimal if the following two conditions hold:
Admissibility: the first condition required for optimality is that h(n) be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature, never overestimating the true cost.
Consistency: the second required condition, consistency, applies only to A* graph search.

SOME IMPORTANT FACTS ABOUT A* SEARCH

 f(n) = g(n) + h(n)

 If g = 0, then f(n) = h(n): the node that seems closest to the goal will be chosen (A* becomes GBFS).

 If g = 1 per step and h = 0, then the path involving the fewest steps will be chosen.

 If h = 0, then f(n) = g(n): the search will be controlled by g alone (A* becomes uniform-cost search).

 If h = 0 and g = 0, then the search strategy will be random.

• Look at the graph and try to implement both the Greedy BFS and A* algorithms step by step using the two lists, OPEN and CLOSED.

INTRODUCTION TO HILL CLIMBING

 The search algorithms that we have seen so far are designed to explore search spaces systematically.
 This systematicity is achieved by keeping one or more paths in memory and by recording which alternatives have been explored at each point along the path.
 When a goal is found, the path to that goal also constitutes a solution to the problem.
 In many problems, however, the path to the goal is irrelevant. For example, in the 8-queens problem, what matters is the final configuration of queens, not the order in which they are added.
 The same general property holds for many important applications, such as integrated-circuit design, factory-floor layout, job-shop scheduling, automatic programming, telecommunications network optimization, vehicle routing, and portfolio management.

LOCAL SEARCH ALGORITHMS

 If the path to the goal does not matter, a different class of algorithms, local search algorithms, can be considered.
 Local search algorithms operate using a single current node (rather than multiple paths)
and generally move only to neighbors of that node.
 Typically, the paths followed by the search are not retained.
 Although local search algorithms are not systematic, they have two key advantages:
 They use very little memory—usually a constant amount; and
 They can often find reasonable solutions in large or infinite(continuous) state spaces
for which systematic algorithms are unsuitable.

 In addition to finding goals, local search algorithms are useful for solving pure
optimization problems, in which the aim is to find the best state according to an objective
function.

 A complete local search algorithm always finds a goal if one exists.

 An optimal algorithm always finds a global minimum/maximum.

HILL CLIMBING

 Hill climbing is a simple local optimization method that "climbs" up the hill until a local optimum is found (assuming a maximization goal).
 The method works by iteratively searching for new solutions within the neighborhood of the current solution, adopting a new solution if it is better.
 There are several hill climbing variants:
 Simple hill climbing: selects the first state in the neighborhood that improves on the current state and adopts it.
 Steepest-ascent hill climbing: examines up to N solutions in the neighborhood of the current state S and adopts the best one. It is more time-consuming but generally reaches a better local optimum.

ALGORITHM: SIMPLE HILL CLIMBING

1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until there are no new operators left to be applied in
the current state:
a. Select an operator that has not yet been applied to the current state and apply it to
produce a new state
b. Evaluate the new state

i. If it is the goal state, then return it and quit.


ii. If it is not a goal state but it is better than the current state, then make it the
current state.
iii. If it is not better than the current state, then continue in the loop.
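The loop above can be sketched as follows. The objective function and the integer neighborhood are invented for illustration; any state space with a successor generator and an evaluation function would do:

```python
def simple_hill_climbing(initial, successors, value):
    """Simple hill climbing: adopt the FIRST successor that is better than
    the current state; stop when no operator yields an improvement."""
    current = initial
    while True:
        improved = False
        for nxt in successors(current):       # apply operators one at a time
            if value(nxt) > value(current):
                current = nxt                 # better than current: adopt it
                improved = True
                break
        if not improved:
            return current                    # no better neighbor: local optimum

# Illustrative problem (an assumption, not from the slides): maximize
# value(x) = -(x - 3)**2 over the integers, moving by +/- 1.
value = lambda x: -(x - 3) ** 2
successors = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(0, successors, value))   # 3
```

Starting at 0, the search adopts 1, 2, and then 3; at 3 neither neighbor is better, so the loop terminates at the (here also global) maximum.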
HEURISTIC OR INFORMED SEARCH

• Heuristic or Informed Search means searching with information.


• Some information about the problem space (a heuristic) is used to compute a preference among the children for exploration and expansion.
• Examples: Best First Search, Hill Climbing, Constraint Satisfaction etc.
• Heuristic function: it maps each state to a numerical value that depicts the goodness of a node.
• h(n) = value, where h is the heuristic function and n is the current state.
HILL CLIMBING(CONTINUED…) – BLOCKS
WORLD PROBLEM

STEEPEST-ASCENT HILL CLIMBING

This is a variation of simple hill climbing that considers all the moves from the current state and selects the best one as the next state. It is also known as gradient search.
Algorithm: Steepest-Ascent Hill Climbing
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with
the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no change to current state:
a. Let SUCC be a state such that any possible successor of the current state will be better than
SUCC
b. For each operator that applies to the current state do:
i. Apply the operator and generate a new state
ii. Evaluate the new state. If it is a goal state, then return it and quit. If not, compare it to SUCC. If it is better, then set SUCC to this state; if it is not better, leave SUCC alone.
c. If the SUCC is better than the current state, then set current state to SUCC.
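A minimal sketch of steepest ascent, using the same kind of invented objective function as before (the variable `succ` corresponds to SUCC, the best successor found in step 2b):

```python
def steepest_ascent_hill_climbing(initial, successors, value):
    """Steepest-ascent hill climbing: evaluate ALL successors of the current
    state and move only to the best one, stopping when it is no improvement."""
    current = initial
    while True:
        succ = max(successors(current), key=value)   # best of all successors
        if value(succ) <= value(current):
            return current                           # no improvement: stop
        current = succ

value = lambda x: -(x - 3) ** 2          # illustrative objective (assumed)
successors = lambda x: [x - 1, x + 1]    # neighborhood: move by +/- 1
print(steepest_ascent_hill_climbing(10, successors, value))   # 3
```

Unlike simple hill climbing, every successor is evaluated before a move is made, which is slower per step but never chooses a merely adequate neighbor over a better one.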
HILL-CLIMBING (CONTINUED…) - LIMITATIONS

Both simple hill climbing and steepest-ascent hill climbing may have the
following limitations:
1. Local Maxima: the search becomes stuck at a local maximum rather than the global maximum.
Way Out: Backtrack to some earlier node and try going in a
different direction.
2. Plateaus: An area of the search space where the evaluation function is flat,
thus requiring a random walk.
Way out: Make a big jump to try to get into a new section of the space.
3. Ridges: Where there are steep slopes but the search direction is not
towards the top, only towards the sides.
Way out: Apply two or more rules before doing the test.
SUMMARY

 In the case of the greedy BFS algorithm, the evaluation function is f(n) = h(n); that is, greedy BFS first expands the node whose estimated distance to the goal is the smallest. So, greedy BFS does not use the "past knowledge" g(n), hence the name "greedy". In general, the greedy BFS algorithm is not complete; that is, there is always the risk of taking a path that does not bring us to the goal.
 In the case of the A* algorithm, the evaluation function is f(n) = g(n) + h(n), where h is an admissible heuristic function. The "star", often denoted by an asterisk, *, refers to the fact that A* uses an admissible heuristic function, which essentially means that A* is optimal; that is, it always finds the optimal path between the starting node and the goal node.

SELF ASSESSMENT QUESTIONS
1. Which of the following heuristic functions is used in Greedy Best First search?

(a) Farthest neighbor heuristic


(b) f(n)=g(n)
(c) f(n)=g(n)+h(n)
(d) f(n)=h(n)

2. Which of the following search strategies uses direct heuristic techniques?

(a) A*
(b) TSP
(c) GBFS
(d) DFS
TERMINAL QUESTIONS

1. Apply Best First Search to the following graph, where A is the start node and M is the goal node.

2. Discuss the advantages and disadvantages of Best First Search.

• Reference Books:

• 1. Russell and Norvig, 'Artificial Intelligence: A Modern Approach', Third Edition, Pearson Education, PHI (2015)

• 2. Elaine Rich & Kevin Knight, 'Artificial Intelligence', Third Edition, Tata McGraw-Hill, Reprint (2008)

• Sites and Web links:

1. https://www.virtusa.com/digital-themes/heuristic-search-techniques

2. https://towardsdatascience.com/a-star-a-search-algorithm-eb495fb156bb

3. https://www.tutorialandexample.com/local-search-algorithms-and-optimization-problem/

4. http://cgi.di.uoa.gr/~ys02/siteAI2008/local-search-2spp.pdf

5. http://aima.eecs.berkeley.edu/slides-pdf/chapter04b.pdf

