
Unit-2

Hill Climbing Algorithm in Artificial Intelligence
o The hill climbing algorithm is a local search algorithm that continuously moves in the direction of increasing elevation/value to find the peak of the mountain, i.e., the best solution to the problem. It terminates when it reaches a peak where no neighbor has a higher value.
o Hill climbing is a technique used for optimizing mathematical problems. One widely discussed example of the hill climbing algorithm is the Travelling Salesman Problem, in which we need to minimize the distance travelled by the salesman.
o It is also called greedy local search, as it looks only at its immediate neighbor states and not beyond them.
o A node of the hill climbing algorithm has two components: state and value.
o Hill climbing is mostly used when a good heuristic is available.
o In this algorithm, we do not need to maintain and handle a search tree or graph, as it keeps only a single current state.

Features of Hill Climbing:

Following are some main features of the hill climbing algorithm:

o Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback that helps to decide which direction to move in the search space.
o Greedy approach: The hill climbing search moves in the direction that optimizes the cost.
o No backtracking: It does not backtrack through the search space, as it does not remember previous states.
(CSEA10)AI Unit-2 by Prof. Sanchita Chourawar
Medi-caps University, Indore (M.P.)
State-space Diagram for Hill Climbing:
The state-space landscape is a graphical representation of the hill climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.

On the Y-axis we take the function, which can be an objective function or a cost function, and the state space on the X-axis. If the function on the Y-axis is cost, the goal of the search is to find the global minimum without getting stuck at a local minimum. If the function on the Y-axis is an objective function, the goal of the search is to find the global maximum without getting stuck at a local maximum.

Different regions in the state-space landscape:

Local maximum: A local maximum is a state that is better than its neighbor states, but there is another state in the landscape that is higher than it.

Global maximum: The global maximum is the best possible state in the state-space landscape. It has the highest value of the objective function.

Current state: The state in the landscape diagram where an agent is currently present.

Flat local maximum: A flat region of the landscape where all the neighbor states of the current state have the same value.

Shoulder: A plateau region that has an uphill edge.

Types of Hill Climbing Algorithm:

o Simple hill climbing
o Steepest-ascent hill climbing
o Stochastic hill climbing

1. Simple Hill Climbing:

Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates only one neighbor node state at a time and selects the first one that improves the current cost, setting it as the current state. It checks only one successor state at a time and, if that successor is better than the current state, moves to it; otherwise it stays in the same state. This algorithm has the following features:

o Less time consuming
o Less optimal solution, and the solution is not guaranteed

Algorithm for Simple Hill Climbing:

o Step 1: Evaluate the initial state. If it is the goal state, then return success and stop.
o Step 2: Loop until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check the new state:
a. If it is the goal state, then return success and quit.
b. Else, if it is better than the current state, then assign the new state as the current state.
c. Else, if it is not better than the current state, then return to Step 2.
o Step 5: Exit.
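The steps above can be sketched in Python. This is a minimal illustration, not code from the notes: it assumes a one-dimensional integer state space with objective f(x) = -(x - 3)^2 and neighbors x ± 1, and it moves to the first neighbor that improves on the current state.

```python
def simple_hill_climbing(f, start, max_steps=1000):
    """Move to the FIRST neighbor that improves the objective f (Steps 2-4)."""
    current = start
    for _ in range(max_steps):
        moved = False
        for neighbor in (current - 1, current + 1):  # apply one operator at a time
            if f(neighbor) > f(current):             # first improving neighbor wins
                current = neighbor
                moved = True
                break
        if not moved:       # no neighbor is better: we are at a (possibly local) peak
            return current
    return current

# Example: a single-peak objective whose maximum is at x = 3.
f = lambda x: -(x - 3) ** 2
print(simple_hill_climbing(f, start=0))  # climbs 0 -> 1 -> 2 -> 3, prints 3
```

Note that because it stops at the first state with no better neighbor, it can return a local maximum rather than the global one.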

2. Steepest-Ascent Hill Climbing:

The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. It examines all the neighboring nodes of the current state and selects the one neighbor node that is closest to the goal state. This algorithm consumes more time, as it searches multiple neighbors.

Algorithm for Steepest-Ascent Hill Climbing:

o Step 1: Evaluate the initial state. If it is the goal state, then return success and stop; else make the initial state the current state.
o Step 2: Loop until a solution is found or the current state does not change.
a. Let SUCC be a state such that any successor of the current state will be better than it.
b. For each operator that applies to the current state:
i. Apply the operator and generate a new state.
ii. Evaluate the new state.
iii. If it is the goal state, then return it and quit; else compare it to SUCC.
iv. If it is better than SUCC, then set the new state as SUCC.
c. If SUCC is better than the current state, then set the current state to SUCC.
o Step 3: Exit.
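As an illustration (again on an assumed one-dimensional integer landscape, not an example from the notes), steepest-ascent hill climbing evaluates every neighbor, keeps the best one as SUCC, and moves only if SUCC beats the current state:

```python
def steepest_ascent(f, start, max_steps=1000):
    """Examine ALL neighbors; move to the best one only if it improves on current."""
    current = start
    for _ in range(max_steps):
        neighbors = [current - 1, current + 1]  # all successors of the current state
        succ = max(neighbors, key=f)            # SUCC: the best successor found
        if f(succ) <= f(current):               # SUCC no better than current: stop
            return current
        current = succ                          # current state becomes SUCC
    return current

# A landscape with a single peak at x = 5.
g = lambda x: -abs(x - 5)
print(steepest_ascent(g, start=0))  # prints 5
```

The only change from simple hill climbing is that all neighbors are scored before any move is made, which is why this variant costs more time per step.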

3. Stochastic Hill Climbing:

Stochastic hill climbing does not examine all its neighbors before moving. Instead, this search algorithm selects one neighbor node at random and decides whether to move to it as the current state or to examine another state.
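A minimal sketch of this idea (using the same assumed one-dimensional landscape as above, not an example from the notes): one random neighbor is drawn per iteration, and the move is accepted only if it improves the objective.

```python
import random

def stochastic_hill_climbing(f, start, max_iters=1000, seed=0):
    """Pick ONE neighbor at random per iteration; move only if it improves f."""
    rng = random.Random(seed)   # fixed seed so the run is reproducible
    current = start
    for _ in range(max_iters):
        candidate = current + rng.choice([-1, 1])  # one random neighbor
        if f(candidate) > f(current):              # accept only improvements
            current = candidate
    return current

f = lambda x: -(x - 3) ** 2
print(stochastic_hill_climbing(f, start=0))  # reaches the peak at x = 3
```

Because downhill moves are rejected, this variant can still get stuck at a local maximum; it simply spends less effort per step than steepest ascent.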

Problems in Hill Climbing Algorithm:

1. Local maximum: A local maximum is a peak state in the landscape that is better than each of its neighboring states, but there is another state present that is higher than the local maximum.

Solution: The backtracking technique can be a solution to the local maximum problem in the state-space landscape. Maintain a list of promising paths so that the algorithm can backtrack through the search space and explore other paths as well.

2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state contain the same value; because of this, the algorithm cannot find any best direction to move. A hill-climbing search might get lost in the plateau area.

Solution: The solution for a plateau is to take big steps (or very small steps) while searching. For example, randomly select a state far away from the current state, so that the algorithm may land in a non-plateau region.

3. Ridges: A ridge is a special form of local maximum. It is an area higher than its surrounding areas, but it itself has a slope, and it cannot be reached in a single move.

Solution: Using bidirectional search, or moving in several different directions at once, can mitigate this problem.
Simulated Annealing:
A hill-climbing algorithm that never makes a move toward a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum. If the algorithm instead performs a random walk, moving to a random successor, it may be complete but not efficient. Simulated annealing is an algorithm that yields both efficiency and completeness.

In metallurgy, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state. The same idea is used in simulated annealing, in which the algorithm picks a random move instead of the best move. If the random move improves the state, the algorithm follows that path. Otherwise, the algorithm accepts the move with a probability less than 1 (i.e., it sometimes moves downhill), and otherwise it chooses another move.
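The scheme can be sketched as follows. This is a generic illustration, not code from the notes: it maximizes the same assumed 1-D objective used earlier, always accepts improving moves, accepts worsening moves with probability e^(ΔE/T), and lowers the temperature T gradually.

```python
import math
import random

def simulated_annealing(f, start, temp=10.0, cooling=0.99, iters=2000, seed=0):
    """Maximize f: accept random moves; downhill moves pass with prob e^(dE/T)."""
    rng = random.Random(seed)   # fixed seed for a reproducible run
    current = start
    best = start
    for _ in range(iters):
        candidate = current + rng.choice([-1, 1])   # pick a random move
        delta = f(candidate) - f(current)           # dE: change in value
        # Improving moves are always taken; worse moves with probability e^(dE/T).
        if delta > 0 or rng.random() < math.exp(delta / temp):
            current = candidate
        if f(current) > f(best):                    # remember the best state seen
            best = current
        temp = max(temp * cooling, 1e-9)            # cool gradually
    return best

f = lambda x: -(x - 3) ** 2
print(simulated_annealing(f, start=0))  # finds the peak at x = 3
```

Early on, when T is high, downhill moves are accepted often, letting the search escape local maxima; as T falls, the behavior converges toward plain hill climbing.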


BFS algorithm
In this article, we will discuss the BFS algorithm in the data structure. Breadth-first search is a graph traversal algorithm that starts traversing the graph from the root node and explores all the neighboring nodes. Then it selects the nearest node and explores all the unexplored nodes. When using BFS for traversal, any node in the graph can be considered the root node.

There are many ways to traverse a graph, but among them BFS is the most commonly used approach. It is an iterative algorithm, typically implemented with a queue, for searching all the vertices of a tree or graph data structure. BFS puts every vertex of the graph into one of two categories: visited and non-visited. It selects a single node in the graph and, after that, visits all the nodes adjacent to the selected node, level by level.

Applications of BFS algorithm

The applications of the breadth-first search algorithm are given as follows:

o BFS can be used to find neighboring locations from a given source location.
o In a peer-to-peer network, the BFS algorithm can be used as a traversal method to find all the neighboring nodes. Most torrent clients, such as BitTorrent and uTorrent, employ this process to find "seeds" and "peers" in the network.
o BFS can be used in web crawlers to create web page indexes. It is one of the main algorithms that can be used to index web pages. It starts traversing from the source page and follows the links associated with that page. Here, every web page is considered a node in the graph.
o BFS is used to determine shortest paths (in unweighted graphs) and minimum spanning trees.
o BFS is also used in Cheney's algorithm for copying garbage collection.
o It can be used in the Ford-Fulkerson method to compute the maximum flow in a flow network.

Algorithm
The steps involved in the BFS algorithm to explore a graph are given as follows:

Step 1: SET STATUS = 1 (ready state) for each node in G

Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)

Step 3: Repeat Steps 4 and 5 until QUEUE is empty

Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state)

Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state)

[END OF LOOP]

Step 6: EXIT

Example of BFS algorithm

Now, let's understand the working of the BFS algorithm using an example. In the example given below, there is a directed graph having 7 vertices (A through G).

In this graph, a minimum path 'P' can be found using BFS, starting from node A and ending at node E. The algorithm uses two queues, namely QUEUE1 and QUEUE2. QUEUE1 holds all the nodes that are to be processed, while QUEUE2 holds all the nodes that have been processed and deleted from QUEUE1.
Now, let's start examining the graph, starting from node A.

Step 1 - First, add A to QUEUE1 and NULL to QUEUE2.

1. QUEUE1 = {A}
2. QUEUE2 = {NULL}

Step 2 - Now, delete node A from QUEUE1 and add it into QUEUE2. Insert all neighbors of node A into QUEUE1.

1. QUEUE1 = {B, D}
2. QUEUE2 = {A}

Step 3 - Now, delete node B from QUEUE1 and add it into QUEUE2. Insert all neighbors of node B into QUEUE1.

1. QUEUE1 = {D, C, F}
2. QUEUE2 = {A, B}

Step 4 - Now, delete node D from QUEUE1 and add it into QUEUE2. The only neighbor of node D is F; since it has already been inserted, it will not be inserted again.

1. QUEUE1 = {C, F}
2. QUEUE2 = {A, B, D}

Step 5 - Delete node C from QUEUE1 and add it into QUEUE2. Insert all neighbors of node C into QUEUE1.

1. QUEUE1 = {F, E, G}
2. QUEUE2 = {A, B, D, C}

Step 6 - Delete node F from QUEUE1 and add it into QUEUE2. Since all the neighbors of node F are already present, they will not be inserted again.

1. QUEUE1 = {E, G}
2. QUEUE2 = {A, B, D, C, F}

Step 7 - Delete node E from QUEUE1 and add it into QUEUE2. Since all of its neighbors have already been added, they will not be inserted again. Now the target node E has been encountered and placed into QUEUE2, so the search stops.

1. QUEUE1 = {G}
2. QUEUE2 = {A, B, D, C, F, E}

Complexity of BFS algorithm

The time complexity of BFS depends upon the data structure used to represent the graph. With an adjacency-list representation, the time complexity of the BFS algorithm is O(V + E), since in the worst case BFS explores every vertex and every edge; here V is the number of vertices and E is the number of edges in the graph.

The space complexity of BFS can be expressed as O(V), where V is the number of vertices.

Implementation of BFS algorithm

Now, let's see the implementation of the BFS algorithm in Java.

In this code, we use an adjacency list to represent the graph. Implementing the breadth-first search algorithm with an adjacency list makes it much easier, since we only have to traverse the list of nodes attached to each node once that node is dequeued from the head (or start) of the queue.

In this example, the graph that we are using to demonstrate the code is given as follows -
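The Java listing itself is not reproduced in these notes; as a stand-in, here is an equivalent adjacency-list BFS sketch in Python. The graph below is an assumption reconstructed from the step-by-step example above (A's neighbors are B and D, and so on).

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal over an adjacency-list graph; returns visit order."""
    visited = {start}                  # nodes already enqueued or processed
    queue = deque([start])             # plays the role of QUEUE1 (to be processed)
    order = []                         # plays the role of QUEUE2 (processed order)
    while queue:
        node = queue.popleft()         # dequeue node N and process it
        order.append(node)
        for neighbour in graph[node]:  # enqueue only ready-state neighbours
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

# Adjacency list assumed from the walkthrough above.
graph = {
    'A': ['B', 'D'], 'B': ['C', 'F'], 'C': ['E', 'G'],
    'D': ['F'], 'E': [], 'F': [], 'G': [],
}
print(bfs(graph, 'A'))  # ['A', 'B', 'D', 'C', 'F', 'E', 'G']
```

The printed order matches the order in which nodes entered QUEUE2 in the worked example, with G processed last.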

DFS (Depth First Search) algorithm
In this article, we will discuss the DFS algorithm in the data structure. It is a recursive algorithm for searching all the vertices of a tree or graph data structure. The depth-first search (DFS) algorithm starts with the initial node of graph G and goes deeper and deeper until it finds the goal node or a node with no children.

Because of its recursive nature, a stack data structure can be used to implement the DFS algorithm. The process of implementing DFS is similar to the BFS algorithm.

The step-by-step process to implement the DFS traversal is given as follows:

1. First, create a stack with the total number of vertices in the graph.
2. Now, choose any vertex as the starting point of the traversal, and push that vertex onto the stack.
3. After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) onto the top of the stack.
4. Repeat step 3 until no vertices are left to visit from the vertex on the stack's top.
5. If no vertex is left, go back and pop a vertex from the stack.
6. Repeat steps 3, 4, and 5 until the stack is empty.

Applications of DFS algorithm

The applications of the DFS algorithm are given as follows:

o The DFS algorithm can be used to implement topological sorting.
o It can be used to find paths between two vertices.
o It can also be used to detect cycles in a graph.
o The DFS algorithm is also used for one-solution puzzles (such as mazes).
o DFS can be used to determine whether a graph is bipartite or not.

Algorithm
Step 1: SET STATUS = 1 (ready state) for each node in G

Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)

Step 3: Repeat Steps 4 and 5 until STACK is empty

Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)

Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state)

[END OF LOOP]

Step 6: EXIT

Pseudocode
DFS(G, v)    // v is the vertex where the search starts
  Stack S := {};    // start with an empty stack
  for each vertex u, set visited[u] := false;
  push S, v;
  while (S is not empty) do
    u := pop S;
    if (not visited[u]) then
      visited[u] := true;
      for each unvisited neighbour w of u
        push S, w;
    end if
  end while
END DFS()

Example of DFS algorithm

Now, let's understand the working of the DFS algorithm using an example. In the example given below, there is a directed graph having 8 vertices (H and A through G).

Now, let's start examining the graph, starting from node H.

Step 1 - First, push H onto the stack.

1. STACK: H

Step 2 - POP the top element from the stack, i.e., H, and print it. Now, PUSH all the neighbors of H onto the stack that are in the ready state.

1. Print: H
2. STACK: A

Step 3 - POP the top element from the stack, i.e., A, and print it. Now, PUSH all the neighbors of A onto the stack that are in the ready state.

1. Print: A
2. STACK: B, D

Step 4 - POP the top element from the stack, i.e., D, and print it. Now, PUSH all the neighbors of D onto the stack that are in the ready state.

1. Print: D
2. STACK: B, F

Step 5 - POP the top element from the stack, i.e., F, and print it. Now, PUSH all the neighbors of F onto the stack that are in the ready state.

1. Print: F
2. STACK: B

Step 6 - POP the top element from the stack, i.e., B, and print it. Now, PUSH all the neighbors of B onto the stack that are in the ready state.

1. Print: B
2. STACK: C

Step 7 - POP the top element from the stack, i.e., C, and print it. Now, PUSH all the neighbors of C onto the stack that are in the ready state.

1. Print: C
2. STACK: E, G

Step 8 - POP the top element from the stack, i.e., G, and print it. Now, PUSH all the neighbors of G onto the stack that are in the ready state.

1. Print: G
2. STACK: E

Step 9 - POP the top element from the stack, i.e., E, and print it. Now, PUSH all the neighbors of E onto the stack that are in the ready state.

1. Print: E
2. STACK: (empty)

Now, all the graph nodes have been traversed, and the stack is empty.

Complexity of Depth-first search algorithm


The time complexity of the DFS algorithm is O(V+E), where V is the number of
vertices and E is the number of edges in the graph.

The space complexity of the DFS algorithm is O(V).

Implementation of DFS algorithm


Now, let's see the implementation of DFS algorithm in Java.

In this example, the graph that we are using to demonstrate the code is given as
follows -
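The Java listing is likewise not reproduced here; the pseudocode above can be sketched in Python as follows, with an adjacency list assumed to match the step-by-step example (starting at H).

```python
def dfs(graph, start):
    """Iterative depth-first traversal using an explicit stack; returns visit order."""
    visited = set()
    stack = [start]
    order = []
    while stack:
        node = stack.pop()                  # pop the top node N
        if node not in visited:
            visited.add(node)               # process N (mark visited)
            order.append(node)
            for neighbour in graph[node]:   # push N's unvisited neighbours
                if neighbour not in visited:
                    stack.append(neighbour)
    return order

# Adjacency list assumed from the walkthrough above.
graph = {
    'H': ['A'], 'A': ['B', 'D'], 'B': ['C'], 'C': ['E', 'G'],
    'D': ['F'], 'E': [], 'F': [], 'G': [],
}
print(dfs(graph, 'H'))  # ['H', 'A', 'D', 'F', 'B', 'C', 'G', 'E']
```

Because the stack is last-in first-out, the most recently pushed neighbor is explored first, which is what produces the H, A, D, F, ... order seen in the worked example.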

Informed Search Algorithms
So far we have talked about uninformed search algorithms, which look through the search space for all possible solutions to the problem without having any additional knowledge about the search space. An informed search algorithm, by contrast, uses knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc. This knowledge helps agents explore less of the search space and find the goal node more efficiently.

Informed search algorithms are more useful for large search spaces. Since an informed search algorithm uses the idea of a heuristic, it is also called heuristic search.

Heuristic function: A heuristic is a function used in informed search to find the most promising path. It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal. The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time. The heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path between the given state and the goal state. The value of the heuristic function is always positive.

Admissibility of the heuristic function is given as:

h(n) <= h*(n)

Here h(n) is the heuristic (estimated) cost, and h*(n) is the actual cost of an optimal path from n to the goal. Hence the heuristic cost should be less than or equal to the actual cost.

Pure Heuristic Search:

Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n). It maintains two lists, OPEN and CLOSED. In the CLOSED list it places those nodes which have already been expanded, and in the OPEN list it places nodes which have not yet been expanded.

On each iteration, the node n with the lowest heuristic value is expanded, all its successors are generated, and n is placed in the CLOSED list. The algorithm continues until a goal state is found.

In informed search we will discuss two main algorithms, which are given below:

o Best-first search algorithm (greedy search)
o A* search algorithm

A* Search Algorithm:
A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) together with the cost to reach node n from the start state, g(n). It combines features of UCS and greedy best-first search, by which it solves the problem efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function; it expands a smaller search tree and provides an optimal result faster. The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).

In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can combine both costs as follows, and this sum is called the fitness number:

f(n) = g(n) + h(n)

At each point in the search space, only the node with the lowest value of f(n) is expanded, and the algorithm terminates when the goal node is found.

Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.

Step 2: Check if the OPEN list is empty or not; if the list is empty, then return failure and stop.

Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise, continue.

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the evaluation function for n' and place it into the OPEN list.

Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.
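The steps above can be sketched with a priority queue ordered by f(n) = g(n) + h(n). The edge costs and heuristic values used here are assumptions chosen to be consistent with the worked example that follows (the original figure and heuristic table are not given explicitly in these notes):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: repeatedly expand the OPEN node with the smallest f = g + h."""
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)  # smallest f first (Step 3)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)                             # Step 4: move n to CLOSED
        for succ, cost in graph[node]:
            if succ not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None, float('inf')

# Assumed edge costs and heuristic values, consistent with the iterations below.
graph = {
    'S': [('A', 1), ('G', 10)],
    'A': [('B', 2), ('C', 1)],
    'B': [('D', 5)],
    'C': [('D', 3), ('G', 4)],
    'D': [], 'G': [],
}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
path, cost = a_star(graph, h, 'S', 'G')
print(path, cost)  # ['S', 'A', 'C', 'G'] 6
```

With these values f(A) = 1 + 3 = 4, f(C) = 2 + 2 = 4, and f(G via C) = 6 + 0 = 6, matching the iteration trace in the example.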

Advantages:
o The A* search algorithm performs better than other search algorithms.
o The A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.

Disadvantages:
o It does not always produce the shortest path when the heuristic is inaccurate, as it relies on heuristics and approximation.
o The A* search algorithm has some complexity issues.
o The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for various large-scale problems.

Example:
In this example, we will traverse the given graph using the A* algorithm. The heuristic value of each state is given in the table below, so we will calculate f(n) for each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach that node from the start state.
Here we will use OPEN and CLOSED lists.

Solution:

Initialization: {(S, 5)}

Iteration1: {(S--> A, 4), (S-->G, 10)}

Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}

Iteration 3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}

Iteration 4 gives the final result: S--->A--->C--->G provides the optimal path with cost 6.

Points to remember:

o The A* algorithm returns the path which occurred first, and it does not search all remaining paths.
o The efficiency of the A* algorithm depends on the quality of the heuristic.
o The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.

Complete: The A* algorithm is complete as long as:

o The branching factor is finite.
o Every action has a fixed (positive) cost.

Optimal: The A* search algorithm is optimal if it satisfies the two conditions below:

o Admissibility: The first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature, i.e., it never overestimates the true cost.
o Consistency: The second required condition, consistency, applies only to A* graph search.

If the heuristic function is admissible, then A* tree search will always find the least-cost path.

Time Complexity: The time complexity of the A* search algorithm depends on the
heuristic function; in the worst case the number of nodes expanded is
exponential in the depth of the solution d. So the time complexity is O(b^d),
where b is the branching factor.

Space Complexity: The space complexity of A* search algorithm is O(b^d)

AO* algorithm – Artificial intelligence



The AO* algorithm performs a best-first search. It divides a given difficult
problem into a smaller set of subproblems, which are then resolved using the
AND-OR graph concept. AND-OR graphs are specialized graphs used for problems
that can be divided into smaller subproblems. An AND arc of the graph
represents a set of tasks that must all be completed to achieve the main goal,
while OR arcs represent alternative methods of accomplishing the same main
goal.

AND-OR Graph

The figure above, an example of a simple AND-OR graph, breaks the goal of
buying a car into smaller problems or tasks. One option is to steal a car,
which by itself accomplishes the main goal; the other is to use your own money
to purchase a car. The AND symbol marks the AND part of the graph, indicating
that all subproblems joined by an AND must be resolved before the parent node
or issue is finished.
The start state and the target state are already known in
the knowledge-based search strategy known as the AO* algorithm, and the best
path is identified by heuristics. The informed search technique considerably

reduces the algorithm’s time complexity. The AO* algorithm is far more
effective in searching AND-OR trees than the A* algorithm.
Working of AO* algorithm:
The evaluation function in AO* looks like this:
f(n) = g(n) + h(n)
f(n) = actual cost so far + estimated cost to go
where,
f(n) = the estimated total cost of a solution through node n,
g(n) = the cost from the initial node to the current node,
h(n) = the estimated cost from the current node to the goal state.

Difference between the A* algorithm and the AO* algorithm

 The A* algorithm and the AO* algorithm both work on best-first search.
 Both are informed searches and work on given heuristic values.
 A* always gives the optimal solution, but AO* does not guarantee an
optimal solution.
 Once AO* finds a solution, it does not explore all remaining paths,
whereas A* explores all paths.
 Compared to the A* algorithm, the AO* algorithm uses less memory.
 Unlike the A* algorithm, the AO* algorithm cannot go into an
endless loop.
Example:

AO* Algorithm – Question tree

In the example above, the value written below each node is its heuristic value,
i.e. h(n). Every edge length is taken as 1.
Step 1

AO* Algorithm (Step-1)

Using the evaluation function f(n) = g(n) + h(n), start from node A.

f(A⇢B) = g(B) + h(B)
       = 1 + 5      ……here g(n) = 1 is taken as the default path cost
       = 6

f(A⇢C+D) = g(C) + h(C) + g(D) + h(D)
         = 1 + 2 + 1 + 4      ……here C and D are added because they are in AND
         = 8

So the path A⇢B is chosen, because it has the minimum cost, i.e. f(A⇢B).
Step 2

AO* Algorithm (Step-2)

Following the result of step 1, explore node B.

The values of E and F are calculated as follows:

f(B⇢E) = g(E) + h(E)
       = 1 + 7
       = 8

f(B⇢F) = g(F) + h(F)
       = 1 + 9
       = 10

So the path B⇢E is chosen, because it has the minimum cost, i.e. f(B⇢E).
Because B's computed cost differs from its original heuristic value, the
heuristic of B is updated to the minimum cost found, which in our case is 8.

Therefore the heuristic for A must also be updated, due to the change in B's
heuristic.
So we need to calculate it again.

f(A⇢B) = g(B) + updated h(B)
       = 1 + 8
       = 9

All values in the tree above have now been updated.

Step 3

AO* Algorithm (Step-3)

Comparing f(A⇢B) and f(A⇢C+D), f(A⇢C+D) is smaller: 8 < 9.

So now explore f(A⇢C+D); the current node is C.

f(C⇢G) = g(G) + h(G)
       = 1 + 3
       = 4

f(C⇢H+I) = g(H) + h(H) + g(I) + h(I)
f(C⇢H+I) = 1 + 0 + 1 + 0      ……here H and I are added because they are in AND
         = 2

f(C⇢H+I) is selected as the lowest-cost path, and the heuristic of C is left
unchanged because it matches the computed cost. Paths H and I are solved
because their heuristics are 0, but path A⇢D still needs to be calculated
because D is part of an AND arc.

f(D⇢J) = g(J) + h(J)
       = 1 + 0
       = 1

So the heuristic of node D is updated to 1.

f(A⇢C+D) = g(C) + h(C) + g(D) + h(D)
         = 1 + 2 + 1 + 1
         = 5

As we can see, the path f(A⇢C+D) is now solved, and the tree has become a
solved tree.

In simple words, the main flow of this algorithm is: first compute the
heuristic values at level 1, then at level 2, and then propagate the updated
values upward toward the root node. In the tree diagram above, all values have
been updated.
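The bottom-up cost revision traced in steps 1–3 can be sketched in a few lines. This is a simplified, hypothetical encoding of the example tree only: a full AO* implementation interleaves top-down expansion with this revision, whereas here the whole tree and its leaf heuristics are assumed known, with every edge costing 1 as in the example:

```python
# Simplified bottom-up cost revision for the AND-OR tree of the example.
# Each node maps to its arc groups: an OR branch is a one-element group,
# an AND branch groups the children that must be solved together.
h = {'E': 7, 'F': 9, 'G': 3, 'H': 0, 'I': 0, 'J': 0}  # leaf heuristics
arcs = {
    'A': [['B'], ['C', 'D']],  # A is solved via B, or via C AND D
    'B': [['E'], ['F']],
    'C': [['G'], ['H', 'I']],
    'D': [['J']],
}

def revised_cost(node):
    if node not in arcs:  # leaf: its heuristic value is final
        return h[node]
    # Cost of an arc group = sum over its children of (edge cost 1 + child
    # cost); the node takes the cheapest group, as in steps 1-3 above.
    return min(sum(1 + revised_cost(child) for child in group)
               for group in arcs[node])

print(revised_cost('A'))  # 5, matching f(A⇢C+D) after all updates
```

The sketch reproduces the updated values from the example: B becomes 8, C stays 2, D becomes 1, and the root A is solved at cost 5 via the AND arc C+D.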

Constraint Satisfaction Problems in Artificial
Intelligence
We have seen a wide variety of methods, including adversarial search and local
search, for addressing different problems. Every such method has a single
purpose in mind: to find a solution that achieves the goal. However, in
adversarial search and local search there were no restrictions on how the
agent could solve the problem or arrive at an answer.

This section examines constraint satisfaction, another kind of problem-solving
method. As its name implies, constraint satisfaction means that a problem must
be solved while adhering to a set of restrictions or guidelines.

Whenever a problem's variables comply with stringent conditions or principles,
it is said to have been addressed using the constraint satisfaction method.
Such a method leads to a deeper understanding of both the structure and the
complexity of the problem.

A constraint satisfaction problem is defined by three components:

o X: a set of variables.
o D: a set of domains, one for each variable. Every variable has its own
domain.
o C: a set of constraints that the variables must satisfy.

In constraint satisfaction, domains are the sets from which the variables take
their values, subject to the restrictions specific to the task. These three
components make up a constraint satisfaction problem in its entirety. Each
constraint is a pair ⟨scope, rel⟩: the scope is a tuple of the variables that
participate in the constraint, and rel is a relation listing the combinations
of values those variables may assume in order to satisfy the constraint.

Solving a Constraint Satisfaction Problem

A constraint satisfaction problem (CSP) requires the following:

o a state space, and
o a notion of what constitutes a solution.

A state in the state space is defined by assigning values to some or all of
the variables, for example

X1 = v1, X2 = v2, etc.

There are three kinds of value assignments to the variables:

1. Consistent or Legal Assignment: an assignment is called consistent or legal
if it does not violate any constraint.
2. Complete Assignment: an assignment in which every variable is given a value
and which is also consistent is a solution of the CSP. Such an assignment is
called a complete assignment.
3. Partial Assignment: an assignment that gives values to only some of the
variables. Assignments of this kind are called partial or incomplete
assignments.

Domain Categories within CSP


The variables use one of the two types of domains listed below:

o Discrete Domain: an infinite domain that can have one state for multiple
variables. For instance, each variable may be allocated an infinite number of
possible starting states.
o Finite Domain: a finite domain whose states describe a single domain for one
particular variable. It is also called a continuous domain.

Types of Constraints in CSP


Basically, there are three categories of constraints with respect to the
variables:

o Unary constraints: the simplest kind of constraint; they restrict the value
of a single variable.
o Binary constraints: these constraints relate two variables; for example, a
variable x2 may be required to take a value between x1 and x3.
o Global constraints: this kind of constraint involves an arbitrary number of
variables.

Certain categories of constraints are handled by particular resolution
methodologies:

o Linear constraints: frequently used in linear programming, where every
variable (carrying an integer value) appears only in linear form.
o Non-linear constraints: used in non-linear programming, where each variable
(an integer value) appears in a non-linear form.

Note: Preference constraints are a special kind of constraint that occur in real-world problems.

Think of a Sudoku puzzle where some of the squares are initially filled with
certain digits.

You must fill the empty squares with numbers between 1 and 9, making sure that
no row, column, or block contains a repeated digit. This is a fairly
elementary constraint satisfaction problem: a problem that must be solved
while taking certain constraints into consideration.

The integer range (1–9) that can occupy the empty squares is the domain, and
the empty squares themselves are the variables. The values of the variables
are drawn from the domain, and the constraints are the rules that determine
which values a variable may take.
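A CSP in the ⟨X, D, C⟩ form described above can be solved with a plain backtracking search. The sketch below uses a small hypothetical instance (three-region map colouring, not from these notes) rather than full Sudoku, to keep the variables, domains, and constraints visible at a glance; it illustrates legal, partial, and complete assignments directly:

```python
# A minimal CSP in the <X, D, C> form, solved by plain backtracking.
# The instance (three-region map colouring) is illustrative only.
X = ['WA', 'NT', 'SA']                           # variables
D = {v: ['red', 'green', 'blue'] for v in X}     # domains
C = [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA')]   # adjacent regions must differ

def consistent(var, value, assignment):
    # A value is legal if it violates no constraint with assigned variables.
    for a, b in C:
        other = b if a == var else (a if b == var else None)
        if other is not None and assignment.get(other) == value:
            return False
    return True

def backtrack(assignment):
    if len(assignment) == len(X):                # complete assignment: solution
        return assignment
    var = next(v for v in X if v not in assignment)
    for value in D[var]:
        if consistent(var, value, assignment):   # keep the partial assignment legal
            result = backtrack({**assignment, var: value})
            if result is not None:
                return result
    return None                                  # dead end: backtrack

print(backtrack({}))  # e.g. {'WA': 'red', 'NT': 'green', 'SA': 'blue'}
```

At every step the search extends a partial assignment only with values that keep it consistent, and it returns as soon as the assignment is complete; the same scheme, with 81 variables and row/column/block constraints, solves the Sudoku formulation above.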

