
UNIT 2:

Problem Solving by Search-I: Problem-Solving Agents, Searching for Solutions,


Uninformed Search Strategies: Breadth-first search, Uniform cost search, Depth-first search, Iterative
deepening Depth-first search, Bidirectional search, Informed (Heuristic) Search Strategies: Greedy
best-first search, A* search, Heuristic Functions, Beyond Classical Search: Hill-climbing search

2.1 PROBLEM SOLVING AGENTS

2.1.1 PROBLEM-SOLVING APPROACH IN ARTIFICIAL INTELLIGENCE PROBLEMS

Reflex agents are known as the simplest agents because they directly map states into actions.
Unfortunately, these agents fail to operate in environments where the mapping is too large to store and
learn. A goal-based agent, on the other hand, considers future actions and the desired outcomes.

Here, we will discuss one type of goal-based agent known as a problem-solving agent, which uses
atomic representation with no internal states visible to the problem-solving algorithms.

Problem-solving agent

The problem-solving agent performs precisely by defining problems and their several solutions.

 According to psychology, “problem-solving refers to a state where we wish to reach a definite
goal from a present state or condition.”
 According to computer science, problem-solving is a part of artificial intelligence
which encompasses a number of techniques, such as algorithms and heuristics, to solve a
problem.
Therefore, a problem-solving agent is a goal-driven agent and focuses on satisfying the goal.

2.1.2 PROBLEM DEFINITION


To build a system to solve a particular problem, we need to do four things:
(i) Define the problem precisely. This definition must include a specification of the initial
situations and also the final situations which constitute an acceptable solution to the problem.
(ii) Analyze the problem, i.e., identify the important features which have an immense (huge) impact
on the appropriateness of various techniques for solving the problem.
(iii) Isolate and represent the knowledge needed to solve the problem.
(iv) Choose the best problem-solving technique and apply it to the particular problem.

2.1.3 Well-Defined Problems and the Problem-Solving Agent

 Goal Formulation: It is the first and simplest step in problem-solving. It organizes the
steps/sequence required to formulate one goal out of multiple goals as well as actions to achieve that
goal. Goal formulation is based on the current situation and the agent’s performance measure
(discussed below).
 Problem Formulation: It is the most important step of problem-solving, which decides what
actions should be taken to achieve the formulated goal. The following five components are involved
in problem formulation:
 Initial State: It is the starting state or initial step of the agent towards its goal.
 Actions: It is the description of the possible actions available to the agent.
 Transition Model: It describes what each action does.
 Goal Test: It determines if the given state is a goal state.
 Path cost: It assigns a numeric cost to each path. The problem-solving
agent selects a cost function which reflects its performance measure. Remember, an optimal solution
has the lowest path cost among all the solutions.
Note: Initial state, actions, and transition model together define the state-space of the problem implicitly.
State-space of a problem is a set of all states which can be reached from the initial state followed by any
sequence of actions. The state-space forms a directed map or graph where nodes are the states, links between
the nodes are actions, and the path is a sequence of states connected by the sequence of actions.

 Search: It identifies the best possible sequence of actions to reach the goal state from the current
state. It takes a problem as an input and returns a solution as its output.
 Solution: It finds the best algorithm out of various algorithms, which may be proven as
the best optimal solution.
 Execution: It executes the best optimal solution from the searching algorithms to reach the goal
state from the current state.
Example Problems

Basically, there are two types of problem approaches:

 Toy Problem: It is a concise and exact description of the problem which is used by the researchers to
compare the performance of algorithms.
 Real-world Problem: These are real-world based problems which require solutions. Unlike a toy
problem, they do not depend on exact descriptions, but we can have a general formulation of the problem.

2.1.4 Some Toy Problems

 8 Puzzle Problem: Here, we have a 3×3 matrix with movable tiles numbered from 1 to 8 and a
blank space. A tile adjacent to the blank space can slide into that space. The objective is to reach the
specified goal state, as shown in the figure below.
 In the figure, our task is to convert the current (start) state into the goal state by sliding digits into the
blank space.

The problem formulation is as follows:

 States: It describes the location of each numbered tile and the blank tile.
 Initial State: We can start from any state as the initial state.

 Actions: Here, the actions of the blank space are defined, i.e., move left, right, up or down.
 Transition Model: It returns the resulting state as per the given state and actions.
 Goal test: It identifies whether we have reached the correct goal state.
 Path cost: The path cost is the number of steps in the path, where the cost of each step is 1.

Note: The 8-puzzle problem is a type of sliding-block problem, which is used for testing new search
algorithms in artificial intelligence.

 8-queens problem: The aim of this problem is to place eight queens on a chessboard so that
no queen may attack another. A queen can attack other queens in the same row, in the same column,
or on the same diagonal.
From the following figure, we can understand the problem as well as its correct solution.

It is noticed from the above figure that each queen is set on the chessboard in a position where no other
queen is placed on the same diagonal, in the same row, or in the same column. Therefore, it is one correct
solution to the 8-queens problem.

For this problem, there are two main kinds of formulation:

1. Incremental formulation: It starts from an empty state, and the operator adds a queen at each
step.
Following steps are involved in this formulation:

 States: Arrangement of any 0 to 8 queens on the chessboard.

 Initial State: An empty chessboard


 Actions: Add a queen to any empty box.
 Transition model: Returns the chessboard with the queen added in a box.
 Goal test: Checks whether 8-queens are placed on the chessboard without any attack.
 Path cost: There is no need for path cost because only final states are counted.

In this formulation, there are approximately 1.8 × 10^14 possible sequences to investigate.

2. Complete-state formulation: It starts with all 8 queens on the chessboard and moves
them around, escaping the attacks.

Following steps are involved in this formulation

 States: Arrangement of all the 8 queens, one per column, with no queen attacking another queen.
 Actions: Move a queen to a location where it is safe from attack.

This formulation is better than the incremental formulation, as it reduces the state space from 1.8 × 10^14 to
2,057, and it is easy to find the solutions.

2.1.5 Some Real-world problems

 Traveling salesperson problem (TSP): It is a touring problem in which the salesman can visit
each city only once. The objective is to find the shortest tour while selling the goods in each city.
 VLSI Layout problem: In this problem, millions of components and connections are
positioned on a chip so as to minimize the area, circuit delays and stray capacitances, and to maximize
the manufacturing yield.
The layout problem is split into two parts:

 Cell layout: Here, the primitive components of the circuit are grouped into cells, each
performing its specific function. Each cell has a fixed shape and size. The task is to place the cells on the
chip without overlapping each other.
 Channel routing: It finds a specific route for each wire through the gaps between the cells.

 Protein Design: The objective is to find a sequence of amino acids which will fold into a
3D protein having a property to cure some disease.
2.1.6 Searching for solutions

We have seen many problems. Now, there is a need to search for solutions to solve them.
In this section, we will understand how searching can be used by the agent to solve a problem.

For solving different kinds of problems, an agent makes use of different strategies to reach the goal by
searching for the best possible sequence of actions. This process of searching is known as a search strategy.
2.1.7 General problem solving, Water-jug problem, 8-puzzle problem

General Problem Solver:

The General Problem Solver (GPS) was the first useful AI program, written by Simon, Shaw,
and Newell in 1959. As the name implies, it was intended to solve nearly any problem.

Newell and Simon defined each problem as a space. At one end of the space is the starting point;
on the other side is the goal. The problem-solving procedure itself is conceived as a set of
operations to cross that space, to get from the starting point to the goal state, one step at a time.

In the General Problem Solver, the program tests various actions (which Newell and Simon called
operators) to see which will take it closer to the goal state. An operator is any activity that
changes the state of the system. The General Problem Solver always chooses the operation that
appears to bring it closer to its goal.

Example: Water Jug Problem

Consider the following problem:

A Water Jug Problem: You are given two jugs, a 4-gallon one and a 3-gallon
one, a pump which has unlimited water which you can use to fill the jug, and the
ground on which water may be poured. Neither jug has any measuring markings on
it. How can you get exactly 2 gallons of water in the 4-gallon jug?

State Representation and Initial State:


We will represent a state of the problem as a tuple (x, y), where x represents the amount
of water in the 4-gallon jug and y represents the amount of water in the 3-gallon jug. Note
0 ≤ x ≤ 4 and 0 ≤ y ≤ 3. Our initial state: (0, 0).

Goal Predicate: state = (2, y) where 0 ≤ y ≤ 3.

Operators - we must define a set of operators that will take us from one state to another:

1. Fill 4-gal jug:                    (x, y) → (4, y)             if x < 4

2. Fill 3-gal jug:                    (x, y) → (x, 3)             if y < 3

3. Empty 4-gal jug on ground:         (x, y) → (0, y)             if x > 0

4. Empty 3-gal jug on ground:         (x, y) → (x, 0)             if y > 0

5. Pour water from 3-gal jug          (x, y) → (4, y − (4 − x))   if x + y ≥ 4 and y > 0
   to fill 4-gal jug

6. Pour water from 4-gal jug          (x, y) → (x − (3 − y), 3)   if x + y ≥ 3 and x > 0
   to fill 3-gal jug

7. Pour all of water from 3-gal jug   (x, y) → (x + y, 0)         if x + y ≤ 4 and y > 0
   into 4-gal jug

8. Pour all of water from 4-gal jug   (x, y) → (0, x + y)         if x + y ≤ 3 and x > 0
   into 3-gal jug

Through graph search, the following solution is found:

Gals in 4-gal jug    Gals in 3-gal jug    Rule applied
0                    0                    (start)
4                    0                    1. Fill 4
1                    3                    6. Pour 4 into 3 to fill it
1                    0                    4. Empty 3
0                    1                    8. Pour all of 4 into 3
4                    1                    1. Fill 4
2                    3                    6. Pour 4 into 3 to fill it

Second Solution (filling the 3-gal jug first):

Gals in 4-gal jug    Gals in 3-gal jug    Rule applied
0                    0                    (start)
0                    3                    2. Fill 3
3                    0                    7. Pour all of 3 into 4
3                    3                    2. Fill 3
4                    2                    5. Pour 3 into 4 to fill it
0                    2                    3. Empty 4
2                    0                    7. Pour all of 3 into 4
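Both traces can be reproduced mechanically. Below is a minimal Python sketch (an illustration, not the original GPS program) that encodes the eight operators and runs a breadth-first search over the (x, y) state space:

from collections import deque

def successors(state):
    """Apply the eight operators above to a state (x, y)."""
    x, y = state
    d43 = min(y, 4 - x)                 # amount pourable from 3-gal into 4-gal
    d34 = min(x, 3 - y)                 # amount pourable from 4-gal into 3-gal
    candidates = [(4, y), (x, 3),       # rules 1-2: fill a jug
                  (0, y), (x, 0),       # rules 3-4: empty a jug on the ground
                  (x + d43, y - d43),   # rules 5/7: pour 3-gal into 4-gal
                  (x - d34, y + d34)]   # rules 6/8: pour 4-gal into 3-gal
    return [s for s in candidates if s != state]

def solve(start=(0, 0)):
    """Breadth-first search for any state (2, y)."""
    fringe, parent = deque([start]), {start: None}
    while fringe:
        s = fringe.popleft()
        if s[0] == 2:                   # goal predicate: state = (2, y)
            path = [s]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
        for n in successors(s):
            if n not in parent:
                parent[n] = s
                fringe.append(n)

print(solve())
# [(0, 0), (4, 0), (1, 3), (1, 0), (0, 1), (4, 1), (2, 3)], i.e. the first trace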
2.2 Uninformed Search Strategies

Uninformed search

Also called blind, exhaustive or brute-force search, uses no information about the problem to
guide the search and therefore may not be very efficient.

Informed Search:

Also called heuristic or intelligent search, it uses information about the problem to guide the
search, usually by estimating the distance to a goal state, and is therefore more efficient; however,
such guidance may not always be available.

2.2.1 Uninformed Search Methods:


Breadth-First Search:
Consider the state space of a problem that takes the form of a tree. Now, if we search for the goal
level by level, across each breadth of the tree, starting from the root and continuing up to the largest
depth, we call it breadth-first search.

• Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state.
2. Until a goal state is found or NODE-LIST is empty, do:
   a. Remove the first element from NODE-LIST and call it E. If
      NODE-LIST was empty, quit.
   b. For each way that each rule can match the state described in E, do:
      i. Apply the rule to generate a new state.
      ii. If the new state is a goal state, quit and return this state.
      iii. Otherwise, add the new state to the end of NODE-LIST.
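A minimal Python rendering of this procedure, with NODE-LIST as a FIFO queue. The example graph is an assumption reconstructed from the illustration below (already-generated states are skipped rather than re-added, which does not change the path found):

from collections import deque

def bfs(start, goal_test, successors):
    """Breadth-first search: NODE-LIST behaves as a FIFO queue."""
    fringe = deque([start])            # step 1: NODE-LIST holds the initial state
    parent = {start: None}             # remembers how each state was first reached
    while fringe:                      # step 2: until NODE-LIST is empty
        e = fringe.popleft()           # step 2a: remove the first element E
        for new_state in successors(e):      # step 2b: generate new states
            if new_state in parent:          # already generated: skip it
                continue
            parent[new_state] = e
            if goal_test(new_state):         # step 2b(ii): goal found
                path = [new_state]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return list(reversed(path))
            fringe.append(new_state)         # step 2b(iii): add to the END
    return None                              # NODE-LIST empty: quit

# Graph reconstructed from the trace below (an assumption).
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['D', 'G'], 'D': ['C', 'F']}
print(bfs('A', lambda s: s == 'G', lambda s: graph.get(s, [])))   # ['A', 'C', 'G']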
BFS illustrated:

Step 1: Initially fringe contains only one node corresponding to the source state A.

Figure 1
FRINGE: A

Step 2: A is removed from fringe. The node is expanded, and its children B and C are generated.
They are placed at the back of fringe.

Figure 2
FRINGE: B C

Step 3: Node B is removed from fringe and is expanded. Its children D, E are generated and put
at the back of fringe.
Figure 3
FRINGE: C D E

Step 4: Node C is removed from fringe and is expanded. Its children D and G are added to the
back of fringe.

Figure 4
FRINGE: D E D G

Step 5: Node D is removed from fringe. Its children C and F are generated and added to the
back of fringe.

Figure 5
FRINGE: E D G C F

Step 6: Node E is removed from fringe. It has no children.

Figure 6
FRINGE: D G C F

Step 7: D is expanded; B and F are put at the back of fringe.


Figure 7
FRINGE: G C F B F
Step 8: G is selected for expansion. It is found to be a goal node, so the algorithm returns the
path A-C-G by following the parent pointers of the node corresponding to G. The algorithm
terminates.

Breadth-first search is:

 One of the simplest search strategies.
 Complete: if there is a solution, BFS is guaranteed to find it.
 If there are multiple solutions, then a minimal solution (one with the fewest steps) will be found.
 Optimal (i.e., admissible) if all operators have the same cost.
   Otherwise, breadth-first search finds a solution with the shortest path length.
 Time complexity: O(b^d)
 Space complexity: O(b^d)
 Optimality: Yes

where b is the branching factor (the maximum number of successors of any node), d is the depth of the
shallowest goal node, and m is the maximum length of any path in the search space.

Advantages: finds the path of minimal length to the goal.
Disadvantages:
 Requires the generation and storage of a tree whose size is exponential in the depth
of the shallowest goal node.
 The breadth-first search algorithm cannot be effectively used unless the search space is
quite small.

2.2.2 Uniform Cost Search

Uniform cost search expands the unexpanded node with the lowest path cost g(n), keeping the fringe as a
priority queue ordered by g. It is complete and optimal when every step cost is positive, and it reduces to
breadth-first search when all step costs are equal.
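A minimal sketch of uniform cost search using Python's heapq (the weighted graph below is illustrative):

import heapq

def uniform_cost_search(start, goal, graph):
    """Expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]       # (g, state, path), ordered by g
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:                  # goal test at expansion keeps UCS optimal
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nbr, step_cost in graph.get(state, []):
            if nbr not in explored:
                heapq.heappush(frontier, (g + step_cost, nbr, path + [nbr]))
    return None

# Illustrative weighted graph: the cheapest A-to-G path is A-B-D-G with cost 6.
graph = {'A': [('B', 1), ('C', 4)], 'B': [('D', 2)],
         'C': [('G', 5)], 'D': [('G', 3)]}
print(uniform_cost_search('A', 'G', graph))   # (6, ['A', 'B', 'D', 'G'])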
Depth-First Search:
We may sometimes search the goal along the largest depth of the tree, and move up only when
further traversal along the depth is not possible. We then attempt to find alternative offspring of
the parent of the node (state) last visited. If we visit the nodes of a tree using the above principles
to search the goal, the traversal made is called depth first traversal and consequently the search
strategy is called depth first search.

• Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state.
2. Until a goal state is found or NODE-LIST is empty, do:
   a. Remove the first element from NODE-LIST and call it E. If
      NODE-LIST was empty, quit.
   b. For each way that each rule can match the state described in E, do:
      i. Apply the rule to generate a new state.
      ii. If the new state is a goal state, quit and return this state.
      iii. Otherwise, add the new state to the front of NODE-LIST.
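A recursive Python sketch of the same procedure (equivalent to keeping NODE-LIST as a stack; the example graph is an assumption matching the illustration below):

def dfs(start, goal_test, successors, visited=None):
    """Depth-first search: new states go to the FRONT of NODE-LIST (a stack)."""
    if visited is None:
        visited = set()
    visited.add(start)
    if goal_test(start):
        return [start]
    for new_state in successors(start):      # try each applicable rule in turn
        if new_state not in visited:
            path = dfs(new_state, goal_test, successors, visited)
            if path is not None:             # solution found along this branch
                return [start] + path
    return None                              # dead end: back up to the parent

# Graph reconstructed from the trace below (an assumption).
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['G'], 'D': ['C', 'F']}
print(dfs('A', lambda s: s == 'G', lambda s: graph.get(s, [])))
# ['A', 'B', 'D', 'C', 'G']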
DFS illustrated:

A State Space Graph

Step 1: Initially fringe contains only the node for A.

Figure 1
FRINGE: A
Step 2: A is removed from fringe. A is expanded and its children B and C are put in front of
fringe.

Figure 2
FRINGE: B C
Step 3: Node B is removed from fringe, and its children D and E are pushed in front of fringe.

Figure 3
FRINGE: D E C

Step 4: Node D is removed from fringe. C and F are pushed in front of fringe.

Figure 4
FRINGE: C F E C

Step 5: Node C is removed from fringe. Its child G is pushed in front of fringe.
Figure 5
FRINGE: G F E C
Step 6: Node G is expanded and found to be a goal node.

Figure 6
FRINGE: G F E C

The solution path A-B-D-C-G is returned and the algorithm terminates.

Depth-first search is:

1. The algorithm takes exponential time.
2. If N is the maximum depth of a node in the search space, in the worst case the algorithm will
take time O(b^N).
3. The space taken is linear in the depth of the search tree, O(bN).

Note that the time taken by the algorithm is related to the maximum depth of the search tree. If
the search tree has infinite depth, the algorithm may not terminate. This can happen if the search
space is infinite. It can also happen if the search space contains cycles. The latter case can be
handled by checking for cycles in the algorithm. Thus Depth First Search is not complete.

2.2.3 Depth limited search

What is depth limited search?

The depth limited search is a variation of the well-known depth-first search (DFS) traversal algorithm. It takes
care of an edge-case problem with DFS by implementing a depth limit.

The DFS algorithm

To implement DFS with a stack, we use these steps:

1. We push the root node into the stack.


2. We pop the root node before pushing all its child nodes into the stack.
3. We pop a child node before pushing all of its child nodes into the stack.
4. We repeat this process until we either find our result or traverse the whole tree.

The problem with DFS

Let’s say we are working with a tree whose height is either very large or infinite. In such a case, our DFS
algorithm would also go on infinitely, as there would always be more child nodes to push back into the stack.

This is what the depth limited search aims to address with a level limit variable.

Example

Let’s look at the following tree with six nodes. Our target node is 5. For this example, we’ll use a level limit of
two as we traverse the tree.

We use a visited array to mark off nodes we have already traversed through. This keeps us from visiting the
same node multiple times.
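A recursive Python sketch of this traversal. The tree dictionary below encodes the six-node example (an assumption, since the original figure is not reproduced):

def depth_limited_search(node, target, tree, limit, level=0, visited=None):
    """DFS that refuses to expand nodes beyond the level limit."""
    if visited is None:
        visited = set()
    visited.add(node)                      # mark off nodes already traversed
    if node == target:
        return node
    if level == limit:                     # cutoff: do not push children deeper
        return None
    for child in tree.get(node, []):
        if child not in visited:
            found = depth_limited_search(child, target, tree, limit,
                                         level + 1, visited)
            if found is not None:
                return found
    return None

# The six-node tree described above: node 6 sits below the level limit of 2.
tree = {1: [2, 3], 2: [4, 5], 4: [6]}
print(depth_limited_search(1, 5, tree, limit=2))   # 5 (found at level 2)
print(depth_limited_search(1, 6, tree, limit=2))   # None (cutoff failure)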

Code explanation

 We follow normal DFS and start with the root node 1 at level 0.
 We mark it visited and add its children nodes, 2 and 3, to the stack. We increment our level by 1.
 Node 2 is at the top of our stack. We add its children, 4 and 5, to our stack. We increment our level by
1. Our level counter is now 2.
 Node 4 is at the top of our stack. We can add its child node 6 to our stack. However, doing so
would exceed our level counter, so we ignore Node 6.
 Node 5 is at the top of our stack. It has no children to append to our stack.
 Node 5 is our desired result, so the algorithm stops.

When depth limited search fails

As we can see, we ignore Node 6 as it was below our level limit. So, what would happen if our desired node
was 6 all along?

This is known as a cutoff failure. Our target exists, but it is too deep for us to traverse.

Let’s suppose our level limit was 3 instead, and our desired node was 7. We could traverse the whole tree
without finding our result. That would be a standard failure.

2.2.4 Iterative Deepening DFS

Description:

 It is a search strategy resulting from combining BFS and DFS, thus gaining the
advantages of each strategy: the completeness and optimality of BFS and the
modest memory requirements of DFS.

 IDS works by looking for the best search depth d: it starts with depth limit 0 and
performs a depth-limited DFS, and if the search fails it increases the depth limit by 1 and
tries again with depth 1, and so on, first d = 0, then 1, then 2, until a depth d is reached
where a goal is found.
Algorithm:

procedure IDDFS(root)
    for depth from 0 to ∞
        found ← DLS(root, depth)
        if found ≠ null
            return found

procedure DLS(node, depth)
    if depth = 0 and node is a goal
        return node
    else if depth > 0
        foreach child of node
            found ← DLS(child, depth − 1)
            if found ≠ null
                return found
    return null
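The same procedure in runnable Python. The binary-tree successor function and the max_depth cap are illustrative additions; the cap simply guards against looping forever when no goal exists:

def dls(node, depth, goal_test, successors):
    """Depth-limited DFS used as the inner step of IDDFS."""
    if depth == 0:
        return node if goal_test(node) else None
    for child in successors(node):
        found = dls(child, depth - 1, goal_test, successors)
        if found is not None:
            return found
    return None

def iddfs(root, goal_test, successors, max_depth=50):
    """Try depth limits 0, 1, 2, ... until the goal is found."""
    for depth in range(max_depth + 1):
        found = dls(root, depth, goal_test, successors)
        if found is not None:
            return found
    return None

# Illustrative infinite binary tree: node n has children 2n and 2n + 1.
successors = lambda n: [2 * n, 2 * n + 1]
print(iddfs(1, lambda n: n == 13, successors))   # 13 (found with depth limit 3)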

Performance Measure:
o Completeness: IDS, like BFS, is complete when the branching factor b is finite.

o Optimality: IDS, like BFS, is optimal when all steps are of the same cost.

 Time Complexity:

o One may find it wasteful to generate nodes multiple times, but actually it is
not that costly compared to BFS, because most of the generated nodes are
always in the deepest level reached. Consider searching a binary tree
with a depth limit of 4: the nodes generated in the last level = 2^4 = 16, while the
nodes generated in all the levels before the last = 2^0 + 2^1 + 2^2 + 2^3 = 15.

o Imagine this scenario: we are performing IDS and the depth limit reached depth
d. If you recall the way IDS expands nodes, you can see that nodes at
depth d are generated once, nodes at depth d−1 are generated 2 times, nodes at
depth d−2 are generated 3 times, and so on, until you reach depth 1, which is
generated d times. We can view the total number of generated nodes in the worst
case as:
 N(IDS) = (d)b + (d − 1)b^2 + (d − 2)b^3 + … + (2)b^(d−1) + (1)b^d = O(b^d)
o If this search were to be done with BFS, the total number of generated nodes in
the worst case would be:
 N(BFS) = b + b^2 + b^3 + b^4 + … + b^d + (b^(d+1) − b) = O(b^(d+1))
o If we consider realistic numbers, and use b = 10 and d = 5, then the numbers of
generated nodes in BFS and IDS will be:
 N(IDS) = 50 + 400 + 3000 + 20000 + 100000 = 123450

 N(BFS) = 10 + 100 + 1000 + 10000 + 100000 + 999990 = 1111100

 BFS generates about 9 times as many nodes as IDS.

 Space Complexity:

o IDS is like DFS in its space complexity, taking O(bd) memory.

Weblinks:

i. https://www.youtube.com/watch?v=7QcoJjSVT38

ii. https://mhesham.wordpress.com/tag/iterative-deepening-depth-first-search

Conclusion:

 We can conclude that IDS is a hybrid search strategy between BFS and DFS
inheriting their advantages.
 IDS is faster than BFS and DFS.
 It is said that “IDS is the preferred uninformed search method when there is a large
search space and the depth of the solution is not known”.

2.2.5 Bidirectional Search


Searching a graph is quite a famous problem and has a lot of practical uses. We have already discussed here how
to search for a goal vertex starting from a source vertex using BFS. In normal graph search using BFS/DFS we
begin our search in one direction, usually from the source vertex toward the goal vertex, but what if we start the
search from both directions simultaneously?
Bidirectional search is a graph search algorithm that finds the smallest path from source to goal vertex. It runs two
simultaneous searches:
1. Forward search from source/initial vertex toward goal vertex
2. Backward search from goal/target vertex toward source vertex
Bidirectional search replaces the single search graph (which is likely to grow exponentially) with two smaller
subgraphs: one starting from the initial vertex and the other starting from the goal vertex. The search terminates
when the two graphs intersect.
Just like the A* algorithm, bidirectional search can be guided by a heuristic estimate of the remaining distance
from source to goal (and vice versa) for finding the shortest path possible.
Consider the following simple example:
Suppose we want to find whether there exists a path from vertex 0 to vertex 14. Here we can execute two searches,
one from vertex 0 and the other from vertex 14. When the forward and backward searches meet at vertex 7, we
know that we have found a path from node 0 to 14 and the search can be terminated. We can clearly see that
we have successfully avoided unnecessary exploration.
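A Python sketch of bidirectional BFS that reports whether the two frontiers meet. The edge list mirrors the 0-to-14 example, but the exact graph is an assumption since the original figure is not reproduced:

def bidirectional_search(graph, source, goal):
    """Alternate one BFS layer from each end until the frontiers intersect."""
    if source == goal:
        return True
    front, back = {source}, {goal}          # current frontiers of the two searches
    seen_f, seen_b = {source}, {goal}       # everything reached from each end
    while front and back:
        nxt = set()
        for u in front:
            for v in graph.get(u, []):
                if v in seen_b:             # the two searches have met
                    return True
                if v not in seen_f:
                    seen_f.add(v)
                    nxt.add(v)
        front = nxt
        # swap roles so the forward and backward searches advance alternately
        front, back = back, front
        seen_f, seen_b = seen_b, seen_f
    return False

# Illustrative undirected graph on the 0-to-14 path; the searches meet near 7.
edges = [(0, 4), (4, 6), (6, 7), (7, 8), (8, 10), (10, 14)]
graph = {}
for a, b in edges:
    graph.setdefault(a, []).append(b)
    graph.setdefault(b, []).append(a)
print(bidirectional_search(graph, 0, 14))   # True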

2.3 Informed (Heuristic) Search Strategies


2.3.1 Greedy Best First Search

Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that
this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the heuristic
function: f(n) = h(n).

Taking the example of Route-finding problems in Romania, the goal is to reach Bucharest
starting from the city Arad. We need to know the straight-line distances to Bucharest from
various cities as shown in Figure 8.1. For example, the initial state is In(Arad), and the straight-line
distance heuristic h_SLD(In(Arad)) is found to be 366. Using the straight-line distance
heuristic h_SLD, the goal state can be reached faster.
Arad        366    Mehadia           241    Hirsova     151
Bucharest     0    Neamt             234    Urziceni     80
Craiova     160    Oradea            380    Iasi        226
Drobeta     242    Pitesti           100    Vaslui      199
Eforie      161    Rimnicu Vilcea    193    Lugoj       244
Fagaras     176    Sibiu             253    Zerind      374
Giurgiu      77    Timisoara         329

Figure 8.1: Values of h_SLD (straight-line distances to Bucharest).
Figure 8.2: Stages in a greedy best-first search for Bucharest using the straight-line distance heuristic
h_SLD (panels: the initial state; after expanding Arad; after expanding Sibiu; after expanding Fagaras).
Nodes are labeled with their h-values.

Figure 8.2 shows the progress of greedy best-first search using h_SLD to find a path from Arad to
Bucharest. The first node to be expanded from Arad will be Sibiu, because it is closer to
Bucharest than either Zerind or Timisoara. The next node to be expanded will be Fagaras,
because it is closest.
Fagaras in turn generates Bucharest, which is the goal.

Evaluation Criterion of Greedy Search

 Complete: No [it can get stuck in loops; it is complete in a finite space with
repeated-state checking]
 Time Complexity: O(b^m) [but a good heuristic can give dramatic improvement]
 Space Complexity: O(b^m) [keeps all nodes in memory]
 Optimal: No

Greedy best-first search is not optimal, and it is incomplete. The worst-case time and space
complexity is O(b^m), where m is the maximum depth of the search space.
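A minimal Python sketch of greedy best-first search on a fragment of the Romania map (only the adjacencies needed for this trace are included; the h values are the h_SLD figures from Figure 8.1):

import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Always expand the frontier node with the smallest heuristic value h(n)."""
    frontier = [(h[start], start, [start])]
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for nbr in neighbors.get(node, []):
            if nbr not in explored:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

# Fragment of the Romania map; h is h_SLD from Figure 8.1.
neighbors = {'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
             'Sibiu': ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
             'Fagaras': ['Sibiu', 'Bucharest']}
h = {'Arad': 366, 'Zerind': 374, 'Sibiu': 253, 'Timisoara': 329,
     'Oradea': 380, 'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Bucharest': 0}
print(greedy_best_first('Arad', 'Bucharest', neighbors, h))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']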

2.3.2 A* Search
What is A* Search Algorithm?
The A* search algorithm is one of the best and most popular techniques used in path-finding and graph
traversals.

Why A* Search Algorithm?


Informally speaking, the A* search algorithm, unlike other traversal techniques, has “brains”.
What this means is that it is really a smart algorithm, which separates it from the other conventional
algorithms. This is explained in detail in the sections below.
It is also worth mentioning that many games and web-based maps use this algorithm to find
the shortest path very efficiently (by approximation).
Motivation
To approximate the shortest path in real-life situations, like in maps or games, where there can be
many hindrances.
We can consider a 2D Grid having several obstacles and we start from a source cell (colored red
below) to reach towards a goal cell (colored green below)

Explanation
Consider a square grid having many obstacles and we are given a starting cell and a target cell. We want
to reach the target cell (if possible) from the starting cell as quickly as possible. Here A* Search
Algorithm comes to the rescue.
What the A* search algorithm does is that at each step it picks the node according to a value ‘f’, which is a
parameter equal to the sum of two other parameters, ‘g’ and ‘h’. At each step it picks the node/cell
having the lowest ‘f’, and processes that node/cell.
We define ‘g’ and ‘h’ as simply as possible below
g = the movement cost to move from the starting point to a given square on the grid, following the path
generated to get there.
h = the estimated movement cost to move from that given square on the grid to the final destination.
This is often referred to as the heuristic, which is nothing but a kind of smart guess. We really don’t
know the actual distance until we find the path, because all sorts of things can be in the way (walls,
water, etc.). There can be many ways to calculate this ‘h’ which are discussed in the later sections.

2.3.3 AO* algorithm

Best-first search is what the AO* algorithm does. The AO* method divides any given difficult problem
into a smaller group of problems that are then resolved using the AND-OR graph concept. AND-OR
graphs are specialized graphs that are used for problems that can be divided into smaller problems. The
AND side of the graph represents a set of tasks that must all be completed to achieve the main goal, while
the OR side of the graph represents alternative methods for accomplishing the same main goal.

In the above figure, which is an example of a simple AND-OR graph, buying a car may be broken down
into smaller problems or tasks that can be accomplished to achieve the main goal. The tasks are to either
steal a car, which will help us accomplish the main goal, or to use your own money to purchase a car,
which will also accomplish the main goal. The AND symbol is used to indicate the AND part of the
graph, which refers to the requirement that all subproblems joined by the AND be resolved before the
preceding node or issue may be finished.
The start state and the target state are already known in the knowledge-based
search strategy known as the AO* algorithm, and the best path is identified by heuristics. The informed
search technique considerably reduces the algorithm’s time complexity. The AO* algorithm is far more
effective in searching AND-OR trees than the A* algorithm.
Working of AO* algorithm:
The evaluation function in AO* looks like this:
f(n) = g(n) + h(n)
f(n) = actual cost + estimated cost
where:
f(n) = the estimated total cost of node n,
g(n) = the actual cost from the initial node to the current node,
h(n) = the estimated cost from the current node to the goal state.
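To make the AND-OR idea concrete, here is a tiny illustrative sketch (node names, unit edge costs, and heuristic values are assumptions, not from the text). It evaluates the cheapest solution of an AND-OR tree: OR means choosing the cheapest option, AND means summing over all subproblems in an option:

def cheapest_cost(node, options, h):
    """f(node): h at leaves, else the best OR-option over its AND-bundle."""
    if node not in options:                 # leaf: only the heuristic estimate
        return h[node]
    best = float('inf')
    for bundle in options[node]:            # OR: pick one way of solving node
        # AND: every subproblem in the bundle must be solved (unit edge costs)
        cost = sum(1 + cheapest_cost(child, options, h) for child in bundle)
        best = min(best, cost)
    return best

# "Buy a car" is solved by stealing one, OR by earning money AND purchasing one.
options = {'buy_car': [['steal_car'], ['earn_money', 'purchase_car']]}
h = {'steal_car': 6, 'earn_money': 2, 'purchase_car': 1}
print(cheapest_cost('buy_car', options, h))   # min(1 + 6, (1 + 2) + (1 + 1)) = 5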

Difference between the A* Algorithm and AO* algorithm


The A* algorithm and the AO* algorithm both work on best-first search.
They are both informed searches and work on given heuristic values.
A* always gives the optimal solution, but AO* does not guarantee an optimal solution.
Once AO* finds a solution, it does not explore all possible paths, but A* explores all paths.
When compared to the A* algorithm, the AO* algorithm uses less memory.
Unlike the A* algorithm, the AO* algorithm cannot go into an endless loop.
2.4 Heuristic Function
What is Heuristic Function in AI? Heuristic functions and search algorithms are essential concepts in the field
of artificial intelligence, particularly in the context of problem-solving and optimization. They are often used in
various AI applications, including game playing, route planning, and decision-making.
Heuristic Functions:
 A heuristic function in artificial intelligence, also known simply as a heuristic, is an
evaluation function used to estimate the cost or potential of reaching a goal state from a given state in a
problem-solving domain.
 Heuristics are typically rules of thumb or approximate strategies that guide the search for a solution.
They provide a way to assess the desirability of different options without exhaustively exploring every
possibility.
 Heuristics are used to make informed decisions in situations where it's computationally expensive to
search through all possible states or actions. They help prioritize the exploration of more promising
paths.
Search Algorithms:
 In AI, search algorithms are methods for systematically exploring the state space of a problem to find a
solution. The state space represents all possible states that the system can be in, and the search
algorithm tries to navigate this space to reach a goal state.
 There are various search algorithms, such as depth-first search, breadth-first search, A* search, and
others, which determine how to traverse the state space efficiently and effectively.
 Heuristic functions are often used in combination with search algorithms to guide the search process.
When a heuristic function is applied to estimate the potential of different states, it can significantly
improve the efficiency and effectiveness of the search.

2.4.1 Heuristic Search Strategies:

A heuristic technique helps in solving problems, even though there is no guarantee that it will
never lead in the wrong direction. There are heuristics of general applicability as well as
domain-specific ones. The strategies above are general-purpose heuristics. In order to use them in a specific
domain they are coupled with some domain-specific heuristics. There are two major ways in
which domain-specific heuristic information can be incorporated into a rule-based search
procedure.

A heuristic function is a function that maps from problem state descriptions to measures of
desirability, usually represented as numeric weights. The value of a heuristic function at a given
node in the search process gives a good estimate of whether that node is on the desired path to the
solution.
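For example, two standard heuristic functions for the 8-puzzle can be written as follows (a sketch; the sample state is the common textbook example, for which the misplaced-tiles count is 8 and the Manhattan distance is 18):

def misplaced_tiles(state, goal):
    """h1: number of tiles out of place (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """h2: total city-block distance of each tile from its goal square."""
    total = 0
    for tile in range(1, 9):
        i, j = divmod(state.index(tile), 3)     # tile's current row, column
        gi, gj = divmod(goal.index(tile), 3)    # tile's goal row, column
        total += abs(i - gi) + abs(j - gj)
    return total

# Flattened 3x3 boards; 0 is the blank.
state = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(misplaced_tiles(state, goal))   # 8
print(manhattan(state, goal))         # 18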

2.4.1.1 HILL CLIMBING PROCEDURE:

Hill Climbing Algorithm

We will assume we are trying to maximize a function. That is, we are trying to find a point in the
search space that is better than all the others. And by "better" we mean that the evaluation is
higher. We might also say that the solution is of better quality than all the others.

The idea behind hill climbing is as follows.

1. Pick a random point in the search space.
2. Consider all the neighbors of the current state.
3. Choose the neighbor with the best quality and move to that state.
4. Repeat steps 2 and 3 until all the neighboring states are of lower quality.
5. Return the current state as the solution state.
We can also present this algorithm as follows (it is taken from the AIMA book (Russell, 1995)
and follows the conventions we have been using on this course when looking at blind and
heuristic searches).
Algorithm:
Function HILL-CLIMBING(Problem) returns a solution state
    Inputs: Problem, a problem
    Local variables: Current, a node
                     Next, a node
    Current = MAKE-NODE(INITIAL-STATE[Problem])
    Loop do
        Next = a highest-valued successor of Current
        If VALUE[Next] < VALUE[Current] then return Current
        Current = Next
    End

Also, if two neighbors have the same evaluation and they are both the best quality, then the
algorithm will choose between them at random.
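A minimal Python sketch of this loop (the 1-D landscape below is illustrative; random tie-breaking between equally good neighbors is omitted for brevity):

import random

def hill_climbing(initial, neighbors, value):
    """Move to the best neighbor until no neighbor is better (local maximum)."""
    current = initial
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):   # no uphill move left: stop
            return current
        current = best

# Illustrative 1-D landscape: maximize f(x) = -(x - 3)^2 over the integers.
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(random.randint(-10, 10), neighbors, value))   # always 3 here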

2.4.1.2 Problems with Hill Climbing

The main problem with hill climbing (which is also sometimes called gradient descent) is that
we are not guaranteed to find the best solution. In fact, we are not offered any guarantees about
the solution. It could be abysmally bad.

You can see that we may eventually reach a state that has no better neighbours even though there are better
solutions elsewhere in the search space. The problem we have just described is called a local
maximum.

Simulated annealing search


A hill-climbing algorithm that never makes “downhill” moves towards states with lower value
(or higher cost) is guaranteed to be incomplete, because it can get stuck on a local maximum. In
contrast, a purely random walk (that is, moving to a successor chosen uniformly at random from
the set of successors) is complete, but extremely inefficient. Simulated annealing is an
algorithm that combines hill-climbing with a random walk in a way that yields both
efficiency and completeness.
Figure 10.7 shows the simulated annealing algorithm. It is quite similar to hill climbing. Instead of
picking the best move, however, it picks a random move. If the move improves the situation, it
is always accepted. Otherwise, the algorithm accepts the move with some probability less than 1.
The probability decreases exponentially with the “badness” of the move, i.e., the amount ΔE by
which the evaluation is worsened. The probability also decreases as the “temperature” T goes down:
bad moves are more likely to be allowed at the start, when the temperature is high, and they become
more unlikely as T decreases. One can prove that if the schedule lowers T slowly enough, the
algorithm will find a global optimum with probability approaching 1.
Simulated annealing was first used extensively to solve VLSI layout problems. It has been
applied widely to factory scheduling and other large-scale optimization tasks.
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
    inputs: problem, a problem
            schedule, a mapping from time to "temperature"
    local variables: current, a node
                     next, a node
                     T, a "temperature" controlling the probability of downward steps
    current ← MAKE-NODE(INITIAL-STATE[problem])
    for t ← 1 to ∞ do
        T ← schedule[t]
        if T = 0 then return current
        next ← a randomly selected successor of current
        ΔE ← VALUE[next] − VALUE[current]
        if ΔE > 0 then current ← next
        else current ← next only with probability e^(ΔE/T)
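A runnable Python sketch of this procedure (the landscape, neighbor function, and cooling schedule below are illustrative assumptions):

import math
import random

def simulated_annealing(initial, neighbors, value, schedule, max_t=100_000):
    """Pick random moves; accept bad ones with probability e^(dE/T)."""
    current = initial
    for t in range(1, max_t):
        T = schedule(t)
        if T == 0:                       # frozen: return the current state
            return current
        nxt = random.choice(neighbors(current))
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt                # always accept uphill, sometimes downhill
    return current

# Illustrative run: maximize f(x) = -(x - 3)^2 with geometric cooling.
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
schedule = lambda t: 0 if t > 2000 else 100 * (0.95 ** t)
print(simulated_annealing(0, neighbors, value, schedule))   # usually 3, or near it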
