
SRI RAJA RAAJAN COLLEGE OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

II-YEAR-AI&DS -2021 R
SUB CODE: AL3391
SUB NAME: Artificial Intelligence

UNIT II
PROBLEM SOLVING
Heuristic search strategies – heuristic functions. Local search and optimization
problems – local search in continuous space – search with non-deterministic
actions – search in partially observable environments – online search agents and
unknown environments

PREPARED BY VERIFIED BY HOD APPROVED BY DEAN


A.JEYA CHITHRA
(AP/CSE)

UNIT II

2.1.Heuristic Search Strategies:

What is a Heuristic?

 A heuristic is a technique used to solve a problem faster than classic methods. Heuristic techniques are used to find an approximate solution when classical methods fail to find an exact one in a reasonable time.

 Heuristics are problem-solving techniques that result in practical and quick solutions.
Why do we need heuristics?

 Heuristics are used in situations that require a quick, short-term solution.
 When facing complex situations with limited resources and time, heuristics help companies make quick decisions through shortcuts and approximate calculations.

 Most heuristic methods involve mental shortcuts that base decisions on past experience.
 A heuristic method may not always provide the best solution, but it reliably helps us find a good solution in a reasonable time.

Heuristic search techniques in AI (Artificial Intelligence)

Heuristic search techniques can be divided into two categories:

1. Direct Heuristic Search techniques in AI

2. Weak Heuristic Search techniques in AI

Direct Heuristic Search techniques in AI

This category includes blind search, uninformed search, and blind control strategies. These search techniques are not always practical, as they require a great deal of memory and time. They search the complete state space for a solution and apply operations in an arbitrary order.

Examples of direct heuristic search techniques include Breadth-First Search (BFS) and Depth-First Search (DFS).

Weak Heuristic Search techniques in AI

This category includes informed search, heuristic search, and heuristic control strategies. These techniques are helpful when applied properly to the right types of tasks, and they usually require domain-specific information.

Examples of weak heuristic search techniques include Best-First Search and A*.

Before describing certain heuristic techniques in detail, let's first list the techniques covered:

o Bidirectional Search
o A* search
o Simulated Annealing
o Hill Climbing
o Best First search
o Beam search

First, let's talk about hill climbing in Artificial Intelligence.

Hill Climbing Algorithm

It is a technique for optimizing mathematical problems. Hill climbing is widely used when a good heuristic is available.

It is a local search algorithm that continuously moves in the direction of increasing elevation/value in order to find the mountain's peak, i.e. the best solution to the problem.

It terminates when it reaches a peak where no neighbor has a higher value. The Travelling Salesman Problem is one of the most widely discussed examples of the hill climbing algorithm; there we need to minimize the distance travelled by the salesman.

It is also called greedy local search, as it only looks at its good immediate neighbor state and not beyond that. The steps of a simple hill-climbing algorithm are listed below:

Step 1: Evaluate the initial state. If it is the goal state, then return success and stop.

Step 2: Loop until a solution is found or there is no new operator left to apply.

Step 3: Select and apply an operator to the current state.

Step 4: Check the new state:

If it is a goal state, then return success and quit.

Else, if it is better than the current state, then make it the new current state.

Else, if it is not better than the current state, then return to Step 2.

Step 5: Exit.
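
The loop above can be sketched in Python as follows. This is only an illustrative sketch for a maximisation problem: the neighbours(state) and value(state) functions are assumed helpers (supplied by the specific problem, not defined in these notes) that generate the successor states and score a state.

import random

def simple_hill_climbing(initial_state, neighbours, value):
    # Simple hill climbing: accept the FIRST neighbour that improves the
    # current state; stop when no operator yields a better state.
    current = initial_state
    while True:
        improved = False
        for candidate in neighbours(current):
            if value(candidate) > value(current):   # better state found
                current = candidate                 # make it the current state
                improved = True
                break                               # look no further (greedy)
        if not improved:                            # no new operator helps
            return current                          # exit

# Tiny usage example: maximise f(x) = -(x - 7)^2 over the integers.
f = lambda x: -(x - 7) ** 2
steps = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(random.randint(0, 20), steps, f))   # prints 7

The small example at the end climbs the one-dimensional function f(x) = -(x - 7)^2, so from any starting point it ends at the peak x = 7.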

Best-First Search

This algorithm always chooses the path that appears best at that moment. It is a combination of depth-first search and breadth-first search, letting us take advantage of both algorithms. It uses a heuristic function to guide the search: with best-first search, at each step we can choose the most promising node.

Best first search algorithm:

Step 1: Place the starting node into the OPEN list.

Step 2: If the OPEN list is empty, Stop and return failure.

Step 3: Remove from the OPEN list the node n that has the lowest value of h(n), and place it in the CLOSED list.

Step 4: Expand the node n, and generate the successors of node n.

Step 5: Check each successor of node n to find whether any of them is a goal node. If any successor is the goal node, then return success and stop the search; else continue to the next step.

Step 6: For each successor node, the algorithm computes the evaluation function f(n) and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.

Step 7: Return to Step 2.
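
A minimal Python sketch of this greedy best-first search is shown below. The successors(state) and h(state) functions are assumed to be supplied by the problem; as a slight simplification of Step 5, the goal test here is applied when a node is removed from the OPEN list.

import heapq
import itertools

def best_first_search(start, goal, successors, h):
    # OPEN is a priority queue ordered by the heuristic value h(n);
    # CLOSED is the set of already-expanded nodes. A counter is used as a
    # tie-breaker so the states themselves never need to be comparable.
    counter = itertools.count()
    open_list = [(h(start), next(counter), start)]
    closed = set()
    parent = {start: None}
    while open_list:                                  # Step 2: OPEN empty -> failure
        _, _, n = heapq.heappop(open_list)            # Step 3: lowest h(n)
        if n == goal:                                 # goal test
            path = []
            while n is not None:                      # rebuild path via parents
                path.append(n)
                n = parent[n]
            return list(reversed(path))
        closed.add(n)
        for s in successors(n):                       # Step 4: expand n
            if s not in closed and s not in parent:   # Step 6: not in OPEN/CLOSED
                parent[s] = n
                heapq.heappush(open_list, (h(s), next(counter), s))
    return None                                       # failure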

A* Search Algorithm

A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) together with g(n), the cost to reach node n from the start state. It combines features of uniform-cost search (UCS) and greedy best-first search, which lets it solve problems efficiently.
It finds the shortest path through the search space using the evaluation function f(n) = g(n) + h(n). This search algorithm expands fewer nodes of the search tree and gives optimal results faster.

Algorithm of A* search:

Step 1: Place the starting node in the OPEN list.

Step 2: Check whether the OPEN list is empty or not. If the list is empty, then return failure and stop.

Step 3: Select the node n from the OPEN list that has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise continue.

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list. If not, then compute the evaluation function for n' and place it into the OPEN list.

Step 5: Otherwise, if node n' is already in the OPEN or CLOSED list, attach it to the back pointer that reflects the lowest g(n') value.

Step 6: Return to Step 2.
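
The same steps can be sketched in Python as shown below. Here successors(state) is assumed to yield (next_state, step_cost) pairs and h(state) is assumed to be an admissible heuristic; both are caller-supplied assumptions, not part of these notes.

import heapq
import itertools

def a_star_search(start, goal, successors, h):
    counter = itertools.count()                       # tie-breaker for the queue
    g = {start: 0}                                    # cost so far to reach each node
    parent = {start: None}
    open_list = [(h(start), next(counter), start)]    # ordered by f = g + h
    closed = set()
    while open_list:                                  # Step 2
        _, _, n = heapq.heappop(open_list)            # Step 3: smallest g + h
        if n == goal:
            path = []
            while n is not None:                      # rebuild path via back pointers
                path.append(n)
                n = parent[n]
            return list(reversed(path)), g[goal]
        closed.add(n)                                 # Step 4: move n to CLOSED
        for s, cost in successors(n):
            new_g = g[n] + cost
            if s not in g or new_g < g[s]:            # Step 5: keep the lowest g(n')
                g[s] = new_g
                parent[s] = n
                heapq.heappush(open_list, (new_g + h(s), next(counter), s))
    return None, float("inf")                         # OPEN exhausted: failure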

Examples of heuristics in everyday life

Here are some real-life examples of heuristics that people use to solve problems:

o Common sense: It is a heuristic that is used to solve a problem based on the observation
of an individual.
o Rule of thumb: A rule of thumb is a heuristic that allows an individual to make an approximation without doing an exhaustive search.
o Working backward: It lets an individual solve a problem by assuming that the problem has already been solved and then working backward in their mind to see how that solution could have been reached.
o Availability heuristic: It allows a person to judge a situation based on the examples of
similar situations that come to mind.

o Familiarity heuristic: It allows a person to approach a problem based on the fact that they are familiar with the situation, so they act as they acted in a similar situation before.
o Educated guess: It allows a person to reach a conclusion without doing an exhaustive search. Using it, a person considers what they have observed in the past and applies that experience to a situation where no definite answer has been decided yet.

Types of heuristics

There are various types of heuristics, including the availability heuristic, the affect heuristic, and the representative heuristic. Each type plays a role in decision-making; let's discuss each of them.

Availability heuristic

The availability heuristic is the judgment people make about the likelihood of an event based on the information that comes to mind most quickly.
When making decisions, people typically rely on past knowledge or experience of the event. It allows a person to judge a situation based on the examples of similar situations that come to mind.

Representative heuristic

It occurs when we evaluate an event's probability on the basis of its similarity with another event.

Example: We can understand the representative heuristic through product packaging, as consumers tend to associate a product's quality with its external packaging.

If a company's packaging reminds you of a high-quality, well-known product, consumers will assume the product has the same quality as the branded one.

So, instead of evaluating the product on its actual quality, customers infer its quality from the similarity in packaging.

Affect heuristic

It is based on the negative and positive feelings that are linked with a certain stimulus. It involves quick feelings that are based on past beliefs.

The theory is that one's emotional response to a stimulus can affect the decisions an individual takes.

When people take too little time to evaluate a situation carefully, they tend to base their decisions on their emotional response.

Example: The affect heuristic can be understood through advertisements. Advertisements can influence the emotions of consumers, so they affect a consumer's purchasing decisions. The most common examples are fast-food ads. When fast-food companies run advertisements, they hope to trigger a positive emotional response that pushes you to view their products positively.

If someone carefully analyzes the benefits and risks of consuming fast food, they might decide
that fast food is unhealthy.

But people rarely take time to evaluate everything they see and generally make decisions based
on their automatic emotional response.

So, fast-food companies present advertisements that rely on the affect heuristic to generate a positive emotional response, which results in sales.

Limitation of heuristics

Along with the benefits, heuristic also has some limitations.

o Although heuristics speed up our decision-making process and help us solve problems, they can also introduce errors: just because something has worked well in the past does not mean it will work again.
o It becomes hard to find alternative solutions or ideas if we always rely on existing solutions or heuristics.

1. Heuristic Functions in Artificial Intelligence:

Heuristic Functions in AI:


 As we have already seen, an informed search makes use of heuristic functions in order to reach the goal node in a more directed way.
 There are usually several paths in a search tree from the current node to the goal node, so the selection of a good heuristic function certainly matters.
 A good heuristic function is judged by its efficiency.
 The more information the heuristic has about the problem, the more processing time it needs.
Some toy problems, such as the 8-puzzle, 8-queens, and tic-tac-toe, can be solved more efficiently with the help of a heuristic function. Let's see how.
Consider the following 8-puzzle problem, where we have a start state and a goal state.
Our task is to slide the tiles of the current/start state so that they end up in the order given by the goal state.
There are four possible moves: left, right, up, or down. There can be several ways to convert the current/start state into the goal state, but we can use a heuristic function h(n) to solve the problem more efficiently.

A heuristic function for the 8-puzzle problem is defined below:

h(n) = number of tiles out of position.

For the start state considered here, there are a total of three tiles out of position, namely 6, 5 and 4 (the empty tile present in the goal state is not counted), so h(n) = 3. We now need to reduce the value of h(n) to 0.
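
This heuristic can be written directly in Python. The flat 9-tuple board encoding (with 0 standing for the blank) and the example states below are illustrative assumptions; they are not the exact boards drawn in the figure.

def misplaced_tiles(state, goal):
    # h(n) = number of tiles out of position; the blank (0) is not counted.
    return sum(1 for tile, target in zip(state, goal)
               if tile != 0 and tile != target)

# Illustrative boards (flat 9-tuples, 0 = blank); tiles 6, 5 and 4 are out
# of position in this made-up start state, so h(n) = 3.
goal_state  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start_state = (1, 2, 3, 5, 6, 4, 7, 8, 0)
print(misplaced_tiles(start_state, goal_state))   # prints 3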
We can construct a state-space tree to minimize the h(n) value to 0, as shown below:

 It can be seen from the state-space tree that the heuristic value is reduced from h(n) = 3 at the start state to h(n) = 0 at the goal state.
 However, we can create and use several heuristic functions as per the requirement. It is also clear from the above example that a heuristic function h(n) can be defined as the information required to solve a given problem more efficiently.
 The information can be related to the nature of the state, the cost of transforming from one state to another, the characteristics of the goal node, etc., and it is expressed as a heuristic function.

2. Local Search Algorithms and Optimization Problem:
Informed and uninformed search expand nodes systematically in two ways:

 keeping different paths in memory, and
 selecting the best suitable path,

which leads to the solution state required to reach the goal node. But beyond these "classical search algorithms", we have some "local search algorithms" in which the path cost does not matter; the focus is only on the solution state needed to reach the goal node.

 A local search algorithm completes its task by working on a single current node rather than on multiple paths, and it generally moves only to the neighbors of that node.
Although local search algorithms are not systematic, they have the following two advantages:

 Local search algorithms use very little (often a constant amount of) memory, as they operate only on a single path.
 Most often, they find a reasonable solution in large or infinite state spaces where classical or systematic algorithms do not work.
Working of a Local search algorithm
Consider the below state-space landscape having both:

 Location: It is defined by the state.


 Elevation: It is defined by the value of the objective function or heuristic cost function.

The local search algorithm explores the above landscape by finding the following two points:

 Global minimum: If elevation corresponds to cost, then the task is to find the lowest valley, which is known as the global minimum.
 Global maximum: If elevation corresponds to an objective function, then the task is to find the highest peak, which is called the global maximum. It is the highest point in the landscape.
We will understand the working of these points better in Hill-climbing search.

Below are some different types of local searches:

 Hill-climbing Search
 Simulated Annealing
 Local Beam Search

Hill Climbing Algorithm in AI

Hill Climbing Algorithm:


 Hill climbing search is a local search technique. The purpose of hill climbing search is to climb a hill and reach the topmost peak of that hill.
 It is based on the heuristic search idea that the person climbing the hill estimates which direction will lead to the highest peak.
State-space Landscape of Hill climbing algorithm
To understand the concept of hill climbing algorithm, consider the below landscape representing
the goal state/peak and the current state of the climber. The topographical regions shown in
the figure can be defined as:

 Global maximum: It is the highest point on the hill, which is the goal state.
 Local maximum: It is a peak that is higher than its neighboring states but lower than the global maximum.
 Flat local maximum: It is a flat area of the hill where all neighboring states have the same value, so there is no uphill or downhill move. It is a saturated point of the hill.
 Shoulder: It is also a flat area, but one from which an uphill move towards the summit is still possible.
 Current state: It is the current position of the climber.

Types of Hill climbing search algorithm
The following are the types of hill-climbing search:

Simple hill climbing search


 Simple hill climbing is the simplest technique for climbing a hill. The task is to reach the highest peak of the mountain.
 Here, the climber moves one step at a time. If the next step is better than the current one, the climber moves on; otherwise, the climber remains in the same state.
This search considers only the previous and the next step.

Simple hill climbing Algorithm

1. Create a CURRENT node, a NEIGHBOUR node, and a GOAL node.
2. If the CURRENT node = GOAL node, return GOAL and terminate the search.
3. Else, if the NEIGHBOUR node is better than the CURRENT node, make the NEIGHBOUR node the new CURRENT node and move ahead.
4. Loop until the goal is reached or no better point can be found.

Steepest-ascent hill climbing


Steepest-ascent hill climbing is different from simple hill climbing search. Unlike simple hill climbing, it considers all the successor nodes, compares them, and chooses the node that is closest to the solution.
Steepest-ascent hill climbing is similar to best-first search in that it looks at every successor instead of just one.
Note: Both simple and steepest-ascent hill climbing search fail when there is no closer (better) node.
Steepest-ascent hill climbing algorithm

1. Create a CURRENT node and a GOAL node.
2. If the CURRENT node = GOAL node, return GOAL and terminate the search.
3. Loop until no better node can be found to move towards the solution.
4. If a better successor node is present, make it the new CURRENT node and expand it.
5. When the GOAL is attained, return GOAL and terminate.
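
A short Python sketch of steepest-ascent hill climbing is given below; as in the earlier sketch, neighbours(state) and value(state) are assumed problem-specific helpers, not functions defined in these notes.

def steepest_ascent_hill_climbing(initial_state, neighbours, value):
    # Examine ALL successors of the current node and move to the best one;
    # stop when even the best successor is no better than the current node.
    current = initial_state
    while True:
        candidates = list(neighbours(current))
        if not candidates:
            return current
        best = max(candidates, key=value)      # compare all successor nodes
        if value(best) <= value(current):      # no better node exists
            return current                     # current node is the peak
        current = best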

Stochastic hill climbing


Stochastic hill climbing does not examine all the neighboring nodes. It selects one neighbor at random and decides whether to move to it or to examine another one.
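
A possible sketch of stochastic hill climbing, under the same assumptions about the caller-supplied neighbours(state) and value(state) helpers, is:

import random

def stochastic_hill_climbing(initial_state, neighbours, value, max_steps=1000):
    # Pick ONE successor at random; move to it only if it is better than the
    # current state, otherwise stay and try another random successor.
    current = initial_state
    for _ in range(max_steps):
        candidates = list(neighbours(current))
        if not candidates:
            break
        candidate = random.choice(candidates)
        if value(candidate) > value(current):
            current = candidate
    return current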

Random-restart hill climbing
The random-restart algorithm is based on a try-and-try-again strategy. It conducts a series of hill-climbing searches from randomly generated initial states, keeping the best result, until a goal is found.
Its success depends mostly on the shape of the state-space landscape: if there are few plateaus, local maxima, and ridges, it becomes easy to reach the destination.
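
A minimal sketch of random-restart hill climbing is shown below, assuming a caller-supplied random_state() generator of random initial states in addition to the neighbours(state) and value(state) helpers used above.

def random_restart_hill_climbing(random_state, neighbours, value, restarts=10):
    # Run a hill-climbing search from several random initial states and
    # keep the best peak found over all restarts.
    def climb(state):
        while True:
            best = max(neighbours(state), key=value, default=state)
            if value(best) <= value(state):
                return state
            state = best

    best_peak = None
    for _ in range(restarts):
        peak = climb(random_state())
        if best_peak is None or value(peak) > value(best_peak):
            best_peak = peak
    return best_peak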

Limitations of Hill climbing algorithm


The hill climbing algorithm is a fast and aggressive approach. It finds a solution state rapidly because it is usually easy to improve a bad state. However, this search has the following limitations:

 Local maxima: A local maximum is a peak that is higher than all of its neighboring states but lower than the global maximum. It is not the goal peak because there is another peak higher than it.

 Plateau: A plateau is a flat area where no uphill move exists. It becomes difficult for the climber to decide in which direction to move to reach the goal, and sometimes the climber gets lost in the flat area.

 Ridges: A ridge is a challenging problem in which the climber finds two or more local maxima of roughly the same height close together. It becomes difficult to navigate to the right point, and the climber tends to get stuck.

Simulated Annealing

 Simulated annealing is similar to the hill climbing algorithm. It works on the current state, but it picks a random move instead of the best move.
 If the move improves the current state, it is always accepted as a step towards the solution state; otherwise, the move is accepted with a probability less than 1, a probability that decreases as the search "cools", as sketched below.
 This search technique was first used in the early 1980s to solve VLSI layout problems.
 It is also applied to factory scheduling and other large optimization tasks.
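
A minimal simulated-annealing sketch (for maximisation) is given below. The neighbours(state) and value(state) helpers are assumed as before, and the starting temperature, cooling rate, and stopping temperature are illustrative assumptions, not values taken from these notes.

import math
import random

def simulated_annealing(initial_state, neighbours, value,
                        t_start=10.0, cooling=0.95, t_min=1e-3):
    # Pick a RANDOM move; always accept it if it improves the value, and
    # accept a worse move with probability exp(delta / T), which shrinks
    # as the temperature T is cooled.
    current = initial_state
    t = t_start
    while t > t_min:
        candidates = list(neighbours(current))
        if not candidates:
            break
        nxt = random.choice(candidates)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt                      # move (possibly downhill)
        t *= cooling                           # cooling schedule
    return current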
Local Beam Search
o Local beam search is quite different from random-restart search.
o It keeps track of k states instead of just one.
o It starts with k randomly generated states and expands all of them at each step.
o If any state is a goal state, the search stops with success.
o Otherwise, it selects the best k successors from the complete list and repeats the process, as sketched below.
o In random-restart search each search process runs independently, but in local beam search useful information is shared between the k parallel search processes.
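
A minimal sketch of local beam search is given below; random_state(), neighbours(state), value(state) and is_goal(state) are assumed problem-specific helpers supplied by the caller.

def local_beam_search(k, random_state, neighbours, value, is_goal, steps=100):
    # Keep track of k states at once: pool the successors of all k states
    # and retain only the best k of them at every step.
    states = [random_state() for _ in range(k)]        # k random initial states
    for _ in range(steps):
        for s in states:
            if is_goal(s):                             # any goal state -> success
                return s
        pool = [n for s in states for n in neighbours(s)]
        if not pool:
            break
        states = sorted(pool, key=value, reverse=True)[:k]   # best k successors
    return max(states, key=value)                      # best state found so far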
Disadvantages of Local Beam search

 This search can suffer from a lack of diversity among the k states.
 It is an expensive version of hill climbing search.

3. Local search in continuous space:

4. Search with non-deterministic actions:

5. Search in partially observable environments:

6. Online Search Agents and Unknown Environments:
6.1. Offline search and online search
6.2. Online search problem
6.3. Online search agent
6.4. Online local search

An online depth-first exploration agent:
 This agent stores its map in a table, result[s, a], that records the state resulting from executing action a in state s.
 While exploring the map, difficulties arise when the agent has tried all the actions available in a state s.
 To avoid dead ends, the algorithm keeps another table that lists, for each state, the predecessor states to which the agent has not yet backtracked.
 If the agent has run out of states to which it can backtrack, then its search is complete.
 The agent keeps just one current state in memory, and it can also do a random walk to explore the environment.
 A RANDOM WALK simply selects one of the available actions from the current state at random.
 This process continues until the agent finds a goal or completes its exploration.
Advantage
 A random walk will eventually find a goal or complete its exploration, provided the space is finite and safely explorable.
Disadvantage
 The process can be very slow.
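
A minimal Python sketch of such a random-walk exploration is shown below. The actions(state), result(state, action) and is_goal(state) functions are assumed interfaces to the environment and are not defined in these notes.

import random

def random_walk_agent(start, actions, result, is_goal, max_steps=10000):
    # Keep only the current state in memory and repeatedly take a random
    # legal action until a goal is found or the step budget runs out.
    state = start
    for _ in range(max_steps):
        if is_goal(state):
            return state
        action = random.choice(actions(state))   # pick an available action at random
        state = result(state, action)            # observe the resulting state
    return None                                  # exploration budget exhausted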

An environment in which a random walk will take exponentially many steps to find the goal.

Online local search:
Like depth-first search, hill-climbing search has the property of locality in its node expansions.

