
UNIT-2 Notes

State Space Search in Artificial Intelligence


State space search is a problem-solving technique used in Artificial
Intelligence (AI) to find the solution path from the initial state to the
goal state by exploring the various states. The state space search
approach searches through all possible states of a problem to find a
solution. It is an essential part of Artificial Intelligence and is used in
various applications, from game-playing algorithms to natural
language processing. A state space is a way to mathematically
represent a problem by defining all the possible states in which the
problem can exist. It is used in search algorithms to represent the
initial state, goal state, and current state of the problem. Each state
in the state space is represented by a set of variables.
The efficiency of the search algorithm greatly depends on the size of
the state space, and it is important to choose an appropriate
representation and search strategy to search the state space
efficiently.
Features of State Space Search
State space search has several features that make it an effective
problem-solving technique in Artificial Intelligence. These features
include:
• Exhaustiveness:
State space search explores all possible states of a problem to
find a solution.
• Completeness:
If a solution exists, state space search will find it.
• Optimality:
Depending on the search strategy used, searching through a
state space can yield an optimal (least-cost) solution.
• Uninformed and Informed Search:
State space search in artificial intelligence is classified as
uninformed if it uses no additional information about the
problem beyond its definition.
In contrast, informed search uses additional information, such as
heuristics, to guide the search process.
Steps in State Space Search
The steps involved in state space search are as follows:
• To begin the search process, we set the current state to the
initial state.
• We then check if the current state is the goal state. If it is, we
terminate the algorithm and return the result.
• If the current state is not the goal state, we generate the set of
possible successor states that can be reached from the current
state.
• For each successor state, we check if it has already been visited.
If it has, we skip it, else we add it to the queue of states to be
visited.
• Next, we set the next state in the queue as the current state
and check if it's the goal state. If it is, we return the result. If
not, we repeat the previous step until we find the goal state or
explore all the states.
• If all possible states have been explored and the goal state has
still not been found, we return with no solution.
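A minimal sketch of these steps in Python, assuming the problem is given as a dictionary that maps each state to its successor states (the state names are illustrative):

```
from collections import deque

def state_space_search(initial_state, goal_state, successors):
    """Generic state space search following the steps above (breadth-first order)."""
    frontier = deque([initial_state])        # queue of states to be visited
    visited = {initial_state}
    while frontier:
        current = frontier.popleft()         # next state becomes the current state
        if current == goal_state:            # goal test
            return current
        for nxt in successors.get(current, []):
            if nxt not in visited:           # skip already-visited states
                visited.add(nxt)
                frontier.append(nxt)
    return None                              # all states explored, no solution

# Illustrative problem: states 'A'..'E' with hand-written transitions.
successors = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(state_space_search("A", "E", successors))   # -> 'E'
```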

State Space Representation


State space representation involves defining an INITIAL STATE and
a GOAL STATE and then determining a sequence of actions that leads
from the initial state, through intermediate states, to the goal state.
• State:
A state can be an Initial State, a Goal State, or any other
possible state generated by applying the problem's rules.
• Space:
In an AI problem, space refers to the exhaustive collection of all
conceivable states.
• Search:
This technique moves from the beginning state to the desired
state by applying good rules while traversing the space of all
possible states.
• Search Tree:
To visualize the search issue, a search tree is used, which is a
tree-like structure that represents the problem. The initial state
is represented by the root node of the search tree, which is the
starting point of the tree.
• Transition Model:
This describes what each action does, while Path Cost assigns a
cost value to each path, an activity sequence that connects the
beginning node to the end node. The optimal option has the
lowest cost among all alternatives.
Applications of State Space Search
• State space search algorithms are used in various fields, such as
robotics, game playing, computer networks, operations
research, bioinformatics, cryptography, and supply chain
management. In artificial intelligence, state space search
algorithms can solve problems like pathfinding, planning,
and scheduling.
• They are also useful in planning robot motion and finding the
best sequence of actions to achieve a goal. In games, state
space search algorithms can help determine the best move for
a player given a particular game state.
• State space search algorithms can optimize routing and
resource allocation in computer networks and operations
research.
• In Bioinformatics, state space search algorithms can help find
patterns in biological data and predict protein structures.
• In Cryptography, state space search algorithms are used to
break codes and find cryptographic keys.

Breadth First Search (BFS) Algorithm


Breadth-first search (BFS) is an algorithm used for traversing or
searching graph and tree data structures; the full form of BFS is
Breadth-First Search.
The algorithm efficiently visits and marks all the key nodes in a graph
in an accurate breadthwise fashion. This algorithm selects a single
node (initial or source point) in a graph and then visits all the nodes
adjacent to the selected node. Remember, BFS accesses these nodes
one by one.
Once the algorithm visits and marks the starting node, it moves on
to the nearest unvisited nodes and analyses them, marking each node
as visited once it has been processed. These iterations continue until
all the nodes of the graph have been successfully visited and marked.
The architecture of BFS algorithm

1. In the various levels of the data, you can mark any node as the
starting or initial node to begin traversing. The BFS will visit the
node and mark it as visited and places it in the queue.
2. Now the BFS will visit the nearest and un-visited nodes and
marks them. These values are also added to the queue. The
queue works on the FIFO model.
3. In a similar manner, the remaining nearest and unvisited nodes
of the graph are analyzed, marked, and added to the queue.
Nodes are removed from the queue in the same order they were
added (FIFO) and reported as the traversal result.
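A short sketch of this queue-based traversal, assuming the graph is stored as an adjacency list (the node labels below are made up for illustration):

```
from collections import deque

def bfs(graph, start):
    """Visit nodes level by level, marking each as visited when it enters the queue."""
    visited = [start]
    queue = deque([start])                 # FIFO queue
    while queue:
        node = queue.popleft()             # process nodes in the order they were added
        for neighbour in graph.get(node, []):
            if neighbour not in visited:   # only unvisited neighbours are enqueued
                visited.append(neighbour)
                queue.append(neighbour)
    return visited                         # nodes in breadth-first order

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": ["F"], "F": []}
print(bfs(graph, "A"))   # ['A', 'B', 'C', 'D', 'E', 'F']
```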
Use Of BFS Algorithm
There are numerous reasons to use the BFS algorithm when
searching your dataset. Some of the most vital aspects that make
this algorithm a first choice are:
• BFS is useful for analyzing the nodes in a graph and constructing
the shortest path for traversing through them.
• BFS can traverse through a graph in the smallest number of
iterations.
• The architecture of the BFS algorithm is simple and robust.
• The result of the BFS algorithm is reliable: because every
reachable node is examined, BFS will find the target whenever
it exists.
• BFS iterations are seamless, and there is no possibility of this
algorithm getting caught up in an infinite loop problem.
Rules of BFS Algorithm
Here, are important rules for using BFS algorithm:
• A queue (FIFO-First in First Out) data structure is used by BFS.
• You mark any node in the graph as root and start traversing the
data from it.
• BFS traverses all the nodes in the graph and marks each of
them as completed once it has been processed.
• BFS visits an adjacent unvisited node, marks it as visited, and
inserts it into a queue.
• If no unvisited adjacent vertex is found, BFS removes the
previous vertex from the queue.
• BFS algorithm iterates until all the vertices in the graph are
successfully traversed and marked as completed.
• There are no loops caused by BFS during the traversing of data
from any node.
Applications of BFS Algorithm
Let’s take a look at some of the real-life applications where a BFS
algorithm implementation can be highly effective.
• Un-weighted Graphs: In an unweighted graph, BFS finds the
shortest path from the source to every vertex and can also be
used to build a spanning tree that visits all the vertices of the
graph.
• P2P Networks: BFS can be implemented to locate all the
nearest or neighboring nodes in a peer to peer network. This
will find the required data faster.
• Web Crawlers: Search engines or web crawlers can easily build
multiple levels of indexes by employing BFS. BFS
implementation starts from the source, which is the web page,
and then it visits all the links from that source.
• Navigation Systems: BFS can help find all the neighboring
locations from the main or source location.
• Network Broadcasting: A broadcasted packet is guided by the
BFS algorithm to find and reach all the nodes it has the address
for.

DFS (Depth First Search) Algorithm


Depth-first search (DFS) algorithm in artificial intelligence is like an
explorer. It is a graph traversal algorithm that begins at a starting
point, checks nearby spots first, and keeps going deeper before
moving to new places. It repeats this pattern to explore the whole
graph. When the Depth First Search (DFS) algorithm reaches a point
where there are no more unexplored paths in the current iteration, it
goes back to the previous node and tries a different path. It does this
by using a data structure called a stack to keep track of which node to
visit next in its search.
Depth First Search Algorithm
A standard DFS implementation puts each vertex of the graph into
one of two categories:
1. Visited
2. Not Visited
The purpose of the algorithm is to mark each vertex as visited while
avoiding cycles.
The DFS algorithm works as follows:
1. Start by putting any one of the graph's vertices on top of a
stack.
2. Take the top item of the stack and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add the ones
which aren't in the visited list to the top of the stack.
4. Keep repeating steps 2 and 3 until the stack is empty.
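A minimal stack-based sketch of these four steps; the adjacency list used here is an assumed example:

```
def dfs(graph, start):
    """Iterative DFS: pop a vertex, mark it visited, push its unvisited neighbours."""
    visited = []
    stack = [start]                      # step 1: put a starting vertex on the stack
    while stack:                         # step 4: repeat until the stack is empty
        vertex = stack.pop()             # step 2: take the top item
        if vertex in visited:
            continue
        visited.append(vertex)
        for neighbour in reversed(graph.get(vertex, [])):   # step 3: push unvisited neighbours
            if neighbour not in visited:
                stack.append(neighbour)
    return visited

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(dfs(graph, "A"))   # ['A', 'B', 'D', 'C', 'E']
```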

Applications of DFS
DFS is a versatile algorithm that can be used to solve a variety of
problems in AI, including:
1. Finding a Path between Two Nodes in a Graph: DFS can be used
to enumerate paths between two nodes and remember the shortest
one found. Note, however, that plain DFS does not guarantee the
shortest path; BFS is usually preferred for unweighted shortest-path
problems such as finding the quickest way between two places on a
map or finishing a set of tasks faster.
2. Finding all of the possible Solutions to a Problem: DFS can find
all possible solutions by exploring all branches of the problem
space. This is useful for problems such as finding all of the
possible ways to arrange a set of objects or finding all of the
possible ways to win a game.
3. Detecting Cycles in a Graph: DFS can be used to detect cycles in
a graph by keeping track of the nodes that have already been
visited. If DFS ever visits a node that has already been visited,
then there is a cycle in the graph. This is helpful for solving
issues like finding loops in a program or detecting deadlocks in
a computer system.
4. Topological sorting of a Graph: DFS can be used to topologically
sort a directed acyclic graph by recording vertices in reverse order of
their finishing times. You can use this for scheduling tasks in order or
finding the order in which to compile modules.
5. Finding all of the strongly Connected Components in a
Graph: DFS can be used to find all of the strongly connected
components in a graph by exploring the graph twice. This is
helpful for problems like finding communities in a social
network or connected circuit parts.

Depth Limited Search Algorithm


Depth limited search is an uninformed search algorithm which is
similar to Depth First Search(DFS). It can be considered equivalent to
DFS with a predetermined depth limit 'l'. Nodes at depth l are
considered to be nodes without any successors.
Depth limited search may be thought of as a solution to DFS's infinite
path problem; in the Depth limited search algorithm, DFS is run for a
finite depth 'l', where 'l' is the depth limit.
Before moving on to the next path, a Depth First Search starts at the
root node and follows each branch to its deepest node. The problem
with DFS is that this could lead to an infinite loop. By incorporating a
specified limit termed as the depth limit, the Depth Limited Search
Algorithm eliminates the issue of the DFS algorithm's infinite path
problem; In a graph, the depth limit is the point beyond which no
nodes are explored.
Depth Limited Search Algorithm
We are given a graph G and a depth limit 'l'. Depth Limited Search is
carried out in the following way:
1. Set STATUS=1(ready) for each of the given nodes in graph G.
2. Push the Source node or the Starting node onto the stack and
set its STATUS=2(waiting).
3. Repeat steps 4 to 5 until the stack is empty or the goal node has
been reached.
4. Pop the top node T of the stack and set its STATUS=3(visited).
5. Push all the neighbours of node T onto the stack in the ready
state (STATUS=1) and with a depth less than or equal to depth
limit 'l' and set their STATUS=2(waiting).
(END OF LOOP)
6. END
When one of the following instances are satisfied, a Depth Limited
Search can be terminated.
• When we get to the target node.
• Once all of the nodes within the specified depth limit have been
visited.
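A compact recursive sketch of depth limited search, where the graph is an adjacency list and 'l' is passed as the limit argument; the graph and limits below are illustrative assumptions:

```
def depth_limited_search(graph, node, goal, limit, depth=0):
    """Return True if `goal` is reachable from `node` within `limit` levels."""
    if node == goal:                       # target node reached
        return True
    if depth == limit:                     # nodes at depth l are treated as having no successors
        return False
    for child in graph.get(node, []):
        if depth_limited_search(graph, child, goal, limit, depth + 1):
            return True
    return False

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": ["E"], "E": []}
print(depth_limited_search(graph, "A", "E", limit=2))   # False: E lies at depth 3
print(depth_limited_search(graph, "A", "E", limit=3))   # True
```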
Advantages of Depth Limited Search
1. Depth limited search is more efficient than DFS, using less
time and memory.
2. If a solution exists within the depth limit, depth limited search
guarantees that it will be found in a finite amount of time.
3. To address the drawbacks of DFS, we set a depth limit and run
our search technique repeatedly through the search tree.
4. DLS has applications in graph theory that are highly
comparable to those of DFS.
Disadvantages of Depth Limited Search
1. For this method to work, a depth limit must be specified.
2. If the target node lies beyond the chosen depth limit, the user
will be forced to iterate again with a larger limit, increasing
execution time.
3. If the goal node does not exist within the specified limit, it will
not be discovered.

Iterative Deepening Depth-First Search (IDDFS) Algorithm


The iterative deepening depth-first search (IDDFS) algorithm also
traverses a graph vertex by vertex in depth-first order, but its search
depth is initially limited and is increased with each consecutive
iteration. Contrary to the plain depth-first search algorithm, iterative
deepening depth-first search does guarantee the shortest path (in
number of edges) between any two reachable vertices in a graph, so
it is widely used in many applications. Some of those are:
• finding connected components,
• performing topological sorting,
• finding the bridges of a graph,
• determining the closeness of any two vertices in a graph or a tree,
• solving puzzles with a unique solution, such as labyrinths.

The iterative deepening depth-first search algorithm begins by
denoting the start vertex as visited and placing it onto the stack of
visited nodes.
The algorithm will check if the vertex corresponds to the entity being
searched for. If the entity being searched for is found, the algorithm
will stop executing and it will return the corresponding vertex.
Otherwise, the algorithm will loop through its neighboring vertices
and recursively descend into each one of them, one step deeper in
each iteration.
This way, the algorithm will:
• a) eventually find the target entity along the downward path;
• b) reach the last (leaf) vertex in the branch, backtrack through
the graph (implementation-wise: it will return to the previous
caller in the function call stack) and repeat the descent along
the next neighboring vertex;
• c) exhaust the graph by marking all the vertices as visited
without finding the target entity;
• d) finish in case of reaching the depth search limit.
The iterative deepening depth-first search algorithm is slightly
less efficient than plain depth-first search when traversing an entire
graph, because shallow vertices are re-explored on every iteration,
but it remains simple and quite appropriate in practice.
However, it might take a significantly smaller amount of time to
find the solution in a deep graph, because the search depth is
increased round by round, contrary to the original depth-first search
algorithm, where the search depth is virtually unlimited. The
next path of the graph can be explored much sooner, as soon as
the depth limit is reached.
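A brief sketch that implements this idea by re-running a depth limited DFS with an increasing limit; the graph and labels are assumed for illustration:

```
def depth_limited(graph, node, goal, limit):
    if node == goal:
        return True
    if limit == 0:
        return False
    return any(depth_limited(graph, child, goal, limit - 1)
               for child in graph.get(node, []))

def iddfs(graph, start, goal, max_depth):
    """Run DFS with depth limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        if depth_limited(graph, start, goal, limit):
            return limit                 # depth (number of edges) of the shallowest path
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["F"], "F": []}
print(iddfs(graph, "A", "F", max_depth=5))   # 3: A -> C -> E -> F
```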

Heuristic Search Techniques


Heuristics is a method of problem-solving where the goal is to come
up with a workable solution in a feasible amount of time. Heuristic
techniques strive for a rapid solution that stays within an appropriate
accuracy range rather than a perfect solution.
When it seems impossible to tackle a specific problem with a step-by-
step approach, heuristics are utilized in AI (artificial
intelligence) and ML (machine learning). Heuristic functions in AI
prioritize speed above accuracy; hence they are frequently paired
with optimization techniques to provide better outcomes.
What is the Heuristic Function?
If there are no specific answers to a problem or the time required to
find one is too great, a heuristic function is used to solve the
problem. The aim is to find a quicker or more approximate answer,
even if it is not ideal. Put another way, utilizing a heuristic means
trading accuracy for speed.
A heuristic is a function that determines how near a state is to the
desired state. Heuristics functions vary depending on the problem
and must be tailored to match that particular challenge. The majority
of AI problems revolve around a large amount of information, data,
and constraints, and the task is to find a way to reach the goal state.
The heuristics function in this situation informs us of the proximity to
the desired condition. The distance formula is an excellent option if
one needed a heuristic function to assess how close a location in a
two-dimensional space was to the objective point.

Properties of a Heuristic Search Algorithm


Heuristic search algorithms have the following properties:
• Admissible Condition: An algorithm is considered admissible if it
is guaranteed to produce an optimal result.
• Completeness: An algorithm is considered complete if it is
guaranteed to end with a solution whenever one exists.
• Dominance Property: If A1 and A2 are two heuristic algorithms
with heuristic functions h1 and h2 respectively, then A1 is said
to dominate A2 if h1 is better than (at least as large as) h2 for
all possible values of node n.
• Optimality Property: If an algorithm is complete, admissible, and
dominates the other algorithms, it is the optimal one and will
unquestionably produce an optimal result.
Hill Climbing Algorithm

• The hill climbing algorithm is a local search algorithm that
continuously advances in the direction of increasing elevation
or value in order to find the peak of the mountain, or the best
solution to the problem. It terminates when it reaches a peak
where none of its neighbors has a greater value.
• The hill climbing algorithm is a method for solving
mathematical optimization problems. The Traveling Salesman
Problem, where we need to minimize the distance travelled by
the salesman, is one of the most widely cited examples.
• Because it only searches among its immediate neighboring
states and not further afield, it is also known as greedy local
search.
• State and value make up the two components of a hill-climbing
algorithm node.
• Large computational problems can be solved memory-
effectively by using the hill climbing algorithm. It considers both
the current state and the state immediately nearby. When we
wish to optimize or decrease a certain function dependent on
the input it is receiving, the hill climbing problem in artificial
intelligence is extremely helpful.
• The "Traveling Salesman" Problem, in which we must reduce
the salesman's journey distance, is the most popular hill
climbing algorithm example in AI. Hill Climbing Algorithm is
adept at efficiently locating local minima/maxima but may not
discover the global optimal (best possible) solution.
• Hill climbing is, in other words, a heuristic strategy: an informed
search technique that assigns numeric weights to distinct nodes,
branches, and destinations in a path. The search can then be
improved using these values and the heuristic established in the
hill climbing search in the AI model. The hill-climbing algorithm's
key characteristics are its efficiency and its use of a heuristic to
guide the search.
• How Does Hill Climbing Algorithm Work?
• The following steps are used by this algorithm to determine the
best answer:
• It tries to characterize the present situation as the starting point
or initial state.
• It searches for an ideal solution while generalizing the solution
to the existing condition. The chosen answer might not be the
ideal one.
• It evaluates the generated solution in relation to the goal state,
also known as the final state.
• It will determine if the desired state has been attained or not. If
this goal is not met, it will look for an alternative approach.
Features of Hill Climbing
Generate and Test variant: The Generate and Test method has an
extension called Hill Climbing. Feedback from the Generate and Test
approach aids in choosing which way to move through the search
space.
Greedy approach: The hill climbing in artificial intelligence in state
space advances in the direction that best optimizes the output taken
out in the solution-focused direction. It moves to the end to arrive at
the solution while optimizing the cost of function.
No backtracking: Backtracking to the prior state is not feasible since
it cannot remember the system's previous state.
Feedback mechanism: The program contains a feedback system that
aids in choosing the movement's direction. The generate-and-test
technique improves the feedback system.
Incremental change: The algorithm makes small adjustments to the
current solution.
Types of Hill Climbing
Following are the types of hill climbing in artificial intelligence:
1. Simple Hill Climbing
One of the simplest approaches is simple hill climbing. It
evaluates each neighboring node's state one at a time, compares it
with the cost of the current state, and moves to the neighbor if it is
better than the current state; otherwise, it remains in place.
Although it is advantageous because it takes less time, it is affected
by local optima and therefore cannot always guarantee the best
optimal solution.
Algorithm for Simple Hill Climbing:
1. Evaluate the initial state. If it is a goal state, stop and return
success. Otherwise, make the initial state the current state.
2. Loop until a solution is found or no new operators remain to be
applied to the current state:
a. Select an operator that has not yet been applied to the
current state and apply it to produce a new state.
b. Evaluate the new state:
- If it is a goal state, stop and return success.
- If it is better than the current state, make it the current
state and continue.
- If it is not better, continue in the loop.
3. Exit.
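A minimal sketch of simple hill climbing on a one-dimensional objective, moving to the first neighbouring state that improves the score; the objective function and neighbourhood used here are illustrative assumptions:

```
def simple_hill_climbing(start, objective, neighbours):
    """Move to the first neighbour that improves the objective; stop at a local maximum."""
    current = start
    while True:
        improved = False
        for candidate in neighbours(current):
            if objective(candidate) > objective(current):   # better than the current state
                current = candidate                          # accept it and keep climbing
                improved = True
                break
        if not improved:                                     # no better neighbour exists
            return current

# Illustrative one-dimensional objective with a single peak at x = 3.
objective = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(0, objective, neighbours))   # 3
```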
2. Steepest-Ascent Hill Climbing
A variant of the straightforward hill-climbing algorithm is the
steepest-Ascent algorithm. This method looks at every node that
borders the current state and chooses the one that is most near the
goal state. This algorithm takes longer since it looks for more
neighbors.
Algorithm for Steepest-Ascent Hill Climbing:
1. Evaluate the initial state. If it is a goal state, stop and return
success. Otherwise, make the initial state the current state.
2. Loop until a solution is found or a complete iteration produces
no change to the current state:
a. Let BEST be a state initially equal to the current state.
b. Apply every operator not yet applied to the current state,
generating all of its successors, and evaluate each new state:
- If it is a goal state, stop and return success.
- If it is better than BEST, make it the new BEST.
c. If BEST is better than the current state, set the current
state to BEST.
3. Exit.
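The steepest-ascent variant differs only in that it scores every neighbour and moves to the best one; a short sketch, reusable with the illustrative objective and neighbourhood from the previous example:

```
def steepest_ascent_hill_climbing(start, objective, neighbours):
    """Score every neighbour and move to the best one, as long as it improves."""
    current = start
    while True:
        best = max(neighbours(current), key=objective)   # best-scoring neighbour
        if objective(best) <= objective(current):        # no neighbour improves: stop
            return current
        current = best

# With objective(x) = -(x - 3)**2 and neighbours x-1, x+1 this also climbs to 3.
print(steepest_ascent_hill_climbing(0, lambda x: -(x - 3) ** 2, lambda x: [x - 1, x + 1]))
```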
3. Stochastic hill climbing
It is the exact opposite of the methods that were previously
explained. With this method, the agent doesn't look up the values of
nearby nodes. The agent chooses a neighboring node entirely at
random, moves to that node, and then determines whether to
continue this path based on the heuristic of that node.
State-space Diagram for Hill Climbing and Analysis
The optimization function and states are graphically represented in a
state-space diagram. The local maximum and global maximum are
what we seek to establish if the y-axis is the objective function.
The local minimum and the global minimum are what we seek to
determine if the y-axis instead represents a cost function. In a
straightforward state-space diagram, the state space is represented
by the x-axis, and the objective function is plotted on the y-axis.
Different Regions in the State Space Diagram
Local Maximum: This is a state that is better than its neighboring
states but is not the highest state in the state space (that is, not the
global maximum).
Global maximum: Its cost function value is at its highest, and it is the
highest state in the state space.
Current State: This is the condition in which an active agent is
present.
Flat local maximum: This occurs when all the neighboring states
have the same value, so the region appears as a flat space in the
state-space diagram.
Shoulder region: A region with an upward edge, it is also one of the
issues with algorithms for climbing hills.
Advantage of Hill Climbing Algorithm in Artificial Intelligence
Hill climbing in AI can be applied to a wide range of problems. It is
advantageous for routing-related issues such as portfolio
management, chip design, and task scheduling.
When you have a limited amount of computational capacity, hill
climbing technique in AI is a useful solution for optimizing the
difficulties.
Compared to other search algorithms, this one is more effective.
In terms of vehicle routing, automatic programming, circuit
construction, etc., hill-climbing artificial intelligence processes are
useful.
It can address concerns with pure advancement, where the aim is to
identify the most suitable state.
Problems in Different Regions in Hill climbing
1. Local maximum
At a local maximum, all nearby states have a value that is worse
than the present state. Since hill climbing search employs a greedy
strategy, it will not move to a worse state and will terminate there,
even though a better solution might exist further on.
To get around the local maximum issue: Use the backtracking
strategy. Keep track of the states you've visited. The search can go
back to its initial setup and try a different route if it reaches an
unpleasant condition.
2. Plateau
All neighbors have the same value on the plateau. Therefore,
choosing the ideal course is impossible.
To overcome plateaus: Break through plateaus by taking a huge leap.
Choose a state that is far from the one you are in at random.
3. Ridge
Any point on a ridge can appear as a peak since all directions of
movement are downhill. As a result, the algorithm terminates in this
condition.
To get over a ridge: apply two or more rules (operators) before
testing a state, which amounts to moving in several directions at
once.
Applications of Hill Climbing Algorithm
1. Marketing
A marketing manager can create effective marketing strategies
with the use of a hill-climbing algorithm. It is frequently employed to
resolve Traveling Salesman-style problems: it can reduce the distance
travelled and improve travel times for sales team members by
efficiently finding a good local minimum of the route length.
2. Robotics
The efficient operation of robotics benefits from hill climbing. It
improves how well various robot systems and parts work together.
3. Job Scheduling
Job scheduling has also used the hill climbing algorithm. This is the
method by which resources on a computer system are distributed
among various tasks. The migration of jobs from one node to a
neighboring node allows for job scheduling. The appropriate
migratory route is established using a hill-climbing technique.

Simulated Annealing Algorithm


Simulated Annealing is an optimization algorithm used to find the
global optimum in a large search space. It is inspired by the annealing
process in metallurgy, which involves heating and controlled cooling
of a material. It is a type of optimization algorithm falling under the
optimization category of machine learning methods. The algorithm
uses a random search strategy that accepts new solutions, even
those worse than the current solution, based on a probability that
decreases as the metaphorical 'temperature' decreases. This ability
to accept worse solutions occasionally can help the algorithm escape
local minima and move towards finding a global minimum. Simulated
Annealing has been used in a variety of applications, including neural
network optimization, VLSI design, and job scheduling, among
others.
Simulated Annealing is a powerful optimization algorithm that can be
used in a variety of applications where finding the global minimum is
a necessity. Its ability to escape local minima and its versatility make
it a popular choice among machine learning practitioners.
Unlike some optimization algorithms that can become trapped in a
local minimum, Simulated Annealing allows for exploration of the
search space in a controlled manner, which can aid in finding the
global minimum. This algorithm is a valuable tool in the field of
machine learning and optimization.
To get started with Simulated Annealing, you will need to follow
these steps:
1. Define the problem you want to solve and the objective
function that you want to optimize.
2. Choose an initial solution to the problem.
3. Set the initial temperature and cooling schedule parameters.
4. Iteratively generate new candidate solutions by perturbing the
current solution and accepting or rejecting them based on the
probability function.
5. Stop the algorithm when the stopping criteria are met (e.g.,
maximum number of iterations or convergence to a satisfactory
solution).
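A minimal sketch of these five steps for minimising a simple one-dimensional function; the objective, the perturbation step, and the cooling schedule are all illustrative assumptions:

```
import math
import random

def simulated_annealing(objective, start, temp=10.0, cooling=0.95, iterations=500):
    """Accept worse moves with probability exp(-delta / T); T decays each iteration."""
    current = start
    best = current
    for _ in range(iterations):
        candidate = current + random.uniform(-1.0, 1.0)      # perturb the current solution
        delta = objective(candidate) - objective(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate                               # accept (possibly worse) solution
        if objective(current) < objective(best):
            best = current
        temp *= cooling                                       # cooling schedule
    return best

# Illustrative objective with a global minimum at x = 2.
result = simulated_annealing(lambda x: (x - 2) ** 2, start=-5.0)
print(round(result, 2))   # close to 2.0
```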
Simulated Annealing has been used in a wide range of applications,
including optimization problems in engineering, economics, and
physics, as well as in machine learning and data science.

Different Categories of Heuristic Search Techniques in AI
We can categorize the Heuristic Search techniques into two types:
Direct Heuristic Search Techniques
Direct heuristic search techniques may also be called blind control
strategy, blind search, and uninformed search.
They utilize an arbitrary sequencing of operations and look for a
solution throughout the entire state space. These include Depth First
Search (DFS) and Breadth First Search (BFS).
BFS scans a graph or tree structure level by level using a first-in,
first-out queue, whereas DFS is based on a last-in, first-out discipline;
a LIFO stack (used explicitly or via recursion) is used to complete the
process.
Weak Heuristic Techniques
Weak heuristic techniques are also known as heuristic control
strategies, informed search, and heuristic search. These are
successful when used effectively on the appropriate tasks and
typically require domain-specific knowledge. To explore and expand,
users require additional information to compute preferences across
child nodes, so a heuristic function is attached to each node. Let's
first look at some of the strategies we frequently see before detailing
specific ones. Here are a few examples.
• A* Search
• Best-first search
• Tabu search
• Bidirectional search
• Constraint satisfaction problems
• Hill climbing
Examples of Heuristic Functions in AI

A variety of issues can be solved using a heuristic function in AI.


Let's talk about some of the more popular ones.

Traveling Salesman Problem

What is the quickest path between each city and its starting point,
given a list of cities and the distances between each pair of them?
This problem could be brute-forced for a small number of cities. But
as the number of cities grows, finding a solution becomes more
challenging.
This issue is well served by the nearest-neighbor heuristic, which
directs the computer to always choose the closest unexplored city as
the next stop on the path. While the nearest-neighbor heuristic does
not always find the optimal tour, it is frequently close enough that
the difference is insignificant for practical purposes. This approach
decreases TSP's complexity from O(n!) to O(n^2).
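A short sketch of the nearest-neighbour heuristic, assuming cities are 2-D points with Euclidean distances (the coordinates are made up):

```
import math

def nearest_neighbour_tour(cities, start=0):
    """Greedy TSP heuristic: always visit the closest unvisited city next."""
    dist = lambda a, b: math.dist(cities[a], cities[b])
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))   # closest unexplored city
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)                                          # return to the starting city
    return tour

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]
print(nearest_neighbour_tour(cities))   # [0, 1, 2, 4, 3, 0]
```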

Search Engine

People have been interested in SEO for as long as there have been
search engines. Users want to quickly discover the information
they need when utilizing a search engine. Search engines use
heuristics to speed up the search process because such a staggering
amount of data is available. A heuristic could initially attempt each
alternative at each stage. Still, as the search progresses, it can quit at
any point if the present possibility is inferior to the best solution
already found. The search engine's accuracy and speed can be
improved in this way.
A* Algorithm
Pathfinding and the A* algorithm are fundamental topics in the realm
of artificial intelligence and computer science. In this topic, we will
embark on a journey to understand how the A* algorithm works and
why it plays a crucial role in various applications. Pathfinding is the
process of finding the most efficient route or path from a starting
point to a destination in a given environment. While this may sound
straightforward, it's a problem with profound implications and
applications across many domains. Let's explore why pathfinding is of
paramount importance:
1. Robotics: In the field of robotics, autonomous machines need to
navigate their surroundings efficiently. Robots ranging from
automated vacuum cleaners to self-driving cars rely on pathfinding
algorithms to avoid obstacles and reach their goals safely.
2. Game Development: In video game development, creating
intelligent non-player characters (NPCs) or game agents requires
robust pathfinding. It's what makes game characters move
realistically in virtual worlds, whether they are exploring dungeons,
following you in a role-playing game, or competing in a sports
simulation.
3. GPS Navigation: When you use a GPS navigation app to find the
quickest route to your destination, it employs pathfinding algorithms
behind the scenes. These algorithms consider real-time traffic data
and road conditions to suggest optimal routes for you.
4. Network Routing: Beyond physical navigation, pathfinding also
plays a pivotal role in data communication. In the world of computer
networks, routing algorithms determine the most efficient paths for
data packets to travel from source to destination.
5. Supply Chain Management: In logistics and supply chain
management, efficient route planning is critical. Trucks, drones, and
delivery services optimize their delivery routes to save time, fuel, and
resources.
6. Urban Planning: In urban planning, pathfinding helps design
efficient transportation networks, traffic management systems, and
emergency response strategies.
Heuristics in Pathfinding: Heuristics are informed guesses or
estimates that help us make intelligent decisions. In pathfinding, a
heuristic function provides an estimate of the cost or distance from a
specific node to the goal node. Heuristics guide the search process by
helping us prioritize nodes that seem promising based on these
estimates.
The A* algorithm or A star algorithm in AI is a powerful pathfinding
algorithm that efficiently finds the shortest path in a graph while
considering both the actual cost incurred so far and an estimate of
the remaining cost.
A* is known for its efficiency in finding the shortest path in a graph,
and its success lies in its systematic approach to exploration and
optimization. Here are the key steps involved in A*:
1. Initialization:
• Begin by initializing two sets: the open set and the closed set.
• The open set initially contains only the starting node, while the
closed set is empty.
• Set the cost of reaching the starting node (g-score) to zero and
calculate the heuristic cost estimate to the goal (h-score) for the
starting node.
2. Main Loop:
• The main loop continues until one of two conditions is met:
• The goal node is reached, and the optimal path is found.
• The open set is empty, indicating that no path exists to the
goal.
3. Selecting the Node for Evaluation:
• At each iteration of the loop, select the node from the open set
with the lowest f-score (f = g + h).
• This node is the most promising candidate for evaluation, as it
balances the actual cost incurred (g) and the estimated
remaining cost (h).
4. Evaluating Neighbors:
• For the selected node, consider its neighboring nodes (also
known as successors).
• Calculate the actual cost to reach each neighbor from the
current node (g-score).
• Calculate the heuristic cost estimate from each neighbor to the
goal (h-score).
5. Updating Costs:
• For each neighbor, calculate the total estimated cost (f-score)
by summing the actual cost (g-score) and the heuristic estimate
(h-score).
• If a neighbor is not in the open set, add it to the open set.
• If a neighbor is already in the open set and the newly computed
f-score is lower than its previously recorded f-score, update the
neighbor's f-score and set its parent to the current node. This
means a shorter path to the neighbor has been discovered.
6. Moving to the Next Node:
• After evaluating the neighbors of the current node, move the
current node to the closed set, indicating that it has been fully
evaluated.
• Return to the main loop and select the next node for evaluation
based on its f-score.
7. Goal Reached or No Path Found:
• If the goal node is reached, the algorithm terminates, and the
optimal path can be reconstructed by backtracking from the
goal node to the starting node using the parent pointers.
• If the open set becomes empty without reaching the goal, the
algorithm terminates with the conclusion that no path exists.
8. Path Reconstruction (Optional):
• Once the goal is reached, you can reconstruct the optimal path
by following the parent pointers from the goal node back to the
starting node. This path represents the shortest route.
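A compact sketch of these steps on a small weighted graph, using a priority queue for the open set; the graph, edge costs, and heuristic values are illustrative assumptions:

```
import heapq

def a_star(graph, h, start, goal):
    """graph[node] -> list of (neighbour, edge_cost); h[node] -> heuristic estimate to goal."""
    open_set = [(h[start], start)]              # priority queue ordered by f = g + h
    g = {start: 0}                              # cheapest known cost from the start
    parent = {start: None}
    closed = set()
    while open_set:
        _, current = heapq.heappop(open_set)    # node with the lowest f-score
        if current == goal:                     # goal reached: reconstruct the path
            path = []
            while current is not None:
                path.append(current)
                current = parent[current]
            return path[::-1]
        if current in closed:
            continue
        closed.add(current)                     # node is fully evaluated
        for neighbour, cost in graph.get(current, []):
            tentative_g = g[current] + cost
            if neighbour not in g or tentative_g < g[neighbour]:   # shorter path found
                g[neighbour] = tentative_g
                parent[neighbour] = current
                heapq.heappush(open_set, (tentative_g + h[neighbour], neighbour))
    return None                                 # open set empty: no path exists

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 2)], "G": []}
h = {"S": 4, "A": 3, "B": 2, "G": 0}
print(a_star(graph, h, "S", "G"))   # ['S', 'A', 'B', 'G']
```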
The A* algorithm's efficiency lies in its ability to intelligently explore
the graph by prioritizing nodes with lower estimated total costs (f-
scores). This allows it to converge quickly toward the optimal path
while avoiding unnecessary exploration. In practice, A* is a versatile
tool for solving pathfinding problems in AI projects, and its
effectiveness has made it a go-to choice for applications ranging from
robotics to video games and more.
Now that we've covered the basics of the A* algorithm, it's time to
explore a crucial concept: heuristics. Heuristics are key to the success
of the A* search algorithm in AI, and they play a pivotal role in
guiding its search process efficiently.
1. The Role of Heuristics:
• In pathfinding algorithms like A*, heuristics are informed
guesses or estimates of how close a given node is to the goal
node.
• Heuristics provide a way for the algorithm to prioritize which
nodes to explore next. Instead of exhaustively searching all
possible paths, A* uses heuristics to focus on the most
promising routes.
2. The Heuristic Function (h(n)):
• The heuristic function, often denoted as "h(n)," calculates the
estimated cost from a specific node (n) to the goal node.
• It should satisfy two important criteria:
• Admissibility: The heuristic should never overestimate the
true cost to reach the goal. In other words, h(n) ≤ true
cost.
• Consistency (or the Triangle Inequality): The heuristic
should satisfy the triangle inequality: h(n) ≤ c(n, n') + h(n'),
where c(n, n') is the actual cost of moving from node n to
its neighbor n'.
3. Common Heuristics in A*:
• A* can use a variety of heuristics, and the choice of heuristics
can significantly impact the algorithm's performance.
4. Impact of Heuristic Choice on Performance:
• The choice of heuristic can significantly affect the A*
algorithm's performance. Different heuristics may lead to
different paths and exploration patterns.
• An admissible heuristic (one that never overestimates the true
cost) ensures that A* will always find an optimal path. However,
more informed heuristics tend to guide the algorithm toward
the optimal path more efficiently.
• Example: In a grid-based pathfinding scenario where diagonal
moves are allowed, Manhattan distance can overestimate the
true cost because it assumes only horizontal and vertical moves,
so it is no longer admissible there. Euclidean distance remains
admissible in that setting, although, being a smaller estimate, it
may lead A* to explore more nodes before reaching the goal.
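A tiny sketch of the two grid heuristics discussed in the example; the coordinates are illustrative:

```
import math

def manhattan(a, b):
    """Admissible on 4-connected grids; can overestimate when diagonal moves are allowed."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    """Straight-line distance; it never overestimates, so it stays admissible either way."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

node, goal = (0, 0), (3, 4)
print(manhattan(node, goal))   # 7
print(euclidean(node, goal))   # 5.0
```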
5. Choosing the Right Heuristic:
• Selecting the most appropriate heuristic depends on the
specific problem and the characteristics of the environment.
• It's often beneficial to experiment with different heuristics to
find the one that strikes the right balance between
informativeness and computational efficiency.
Applications Of A* Algorithm
1. GPS Navigation:
• Role: A* is the backbone of many GPS navigation systems. It
helps users find the shortest or fastest route from their current
location to their desired destination.
• Importance: A* considers real-time traffic data, road
conditions, and various routes, providing users with up-to-date
and optimal navigation instructions. This application has
revolutionized how people navigate cities and regions.
2. Video Games:
• Role: In video game development, A* is often used to create
intelligent non-player characters (NPCs) and game agents that
navigate virtual worlds.
• Importance: A* enables NPCs to move realistically, avoid
obstacles, and chase or evade players. It contributes to the
immersive and interactive nature of video games, from puzzle-
solving adventures to open-world exploration.
3. Robotics:
• Role: In robotics, A* is employed for path planning and obstacle
avoidance. Robots, from automated vacuum cleaners to self-
driving cars, use A* to navigate their environments safely and
efficiently.
• Importance: Path planning with A* ensures that robots can
accomplish tasks and reach destinations while avoiding
collisions with obstacles. It's a cornerstone of autonomous
robotics and industrial automation.
4. Network Routing:
• Role: In computer networks and the internet, routing
algorithms based on A* principles help direct data packets from
their source to their destination through the most efficient
path.
• Importance: Efficient routing is crucial for data transmission,
ensuring that data packets reach their intended recipients
quickly and without unnecessary delays. It's essential for
maintaining a well-functioning internet infrastructure.
5. Supply Chain Management:
• Role: In logistics and supply chain management, A* is used to
optimize delivery routes for trucks, drones, and delivery
services.
• Importance: Optimized routes reduce transportation costs, fuel
consumption, and delivery times. This, in turn, enhances the
efficiency of supply chain operations and improves customer
satisfaction.
6. Urban Planning:
• Role: Urban planners use A* algorithms to design efficient
transportation networks, traffic management systems, and
emergency response strategies in cities and metropolitan areas.
• Importance: Well-designed transportation systems are essential
for reducing congestion, minimizing commute times, and
enhancing the quality of life for urban residents.

AO* Search (And-Or) Graph Algorithm


The AO* algorithm performs a best-first search over AND-OR graphs.
The AO* method divides a given difficult problem into a smaller set
of subproblems that are then resolved using the AND-OR graph concept.
AND OR graphs are specialized graphs that are used in problems that
can be divided into smaller problems. The AND side of the graph
represents a set of tasks that must be completed to achieve the main
goal, while the OR side of the graph represents different methods for
accomplishing the same main goal.

AND-OR Graph
As a simple example of an AND-OR graph, the goal of acquiring a car
can be broken down into smaller problems or tasks that can be
accomplished to achieve that main goal. One alternative is to steal a
car; the other is to use your own money to purchase a car. The AND
arc in the graph indicates that all the subproblems it connects must
be resolved before the preceding node (the main goal) can be
considered finished.
The start state and the target state are already known in the
knowledge-based search strategy known as the AO* algorithm, and
the best path is identified by heuristics. The informed search
technique considerably reduces the algorithm’s time complexity. The
AO* algorithm is far more effective in searching AND-OR
trees than the A* algorithm.
Working of AO* algorithm:
The evaluation function in AO* looks like this:
f(n) = g(n) + h(n)
f(n) = Actual cost + Estimated cost
here,
f(n) = the estimated total cost of a solution path through node n.
g(n) = the actual cost from the initial node to the current node.
h(n) = the estimated cost from the current node to the goal state.

Difference between the A* Algorithm and AO* algorithm


• The A* algorithm and the AO* algorithm both work on the best-
first search principle.
• Both are informed search algorithms and work with given
heuristic values.
• A* always gives the optimal solution, but AO* does not
guarantee an optimal solution.
• Once AO* finds a solution, it does not explore all the remaining
paths, whereas A* may keep exploring alternative paths.
• When compared to the A* algorithm, the AO* algorithm
uses less memory.
• Unlike the A* algorithm, the AO* algorithm cannot go into
an endless loop.

AO* Search Algorithm


Step 1: Place the starting node into OPEN.
Step 2: Compute the most promising solution tree say T0.
Step 3: Select a node n that is both on OPEN and a member of T0.
Remove it from OPEN and place it in CLOSE
Step 4: If n is the terminal goal node, then label n as solved and
label all the ancestors of n as solved. If the starting node is marked
as solved, then return success and exit.
Step 5: If n is not a solvable node, then mark n as unsolvable. If the
starting node is marked as unsolvable, then return failure and exit.
Step 6: Expand n. Find all its successors, compute their h(n) values,
and push them into OPEN.
Step 7: Return to Step 2.
Step 8: Exit.
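The full AO* bookkeeping (marking nodes solved or unsolvable and re-propagating costs) is more involved, but the core cost revision over AND-OR arcs can be sketched as follows; the graph, edge costs, and heuristic values are illustrative assumptions:

```
def revise_cost(node, successors, h, edge_cost=1):
    """Return the cheapest estimated cost of solving `node` in an AND-OR graph."""
    if not successors.get(node):                    # leaf node: fall back to its heuristic value
        return h[node]
    best = float("inf")
    for and_group in successors[node]:              # OR choice between alternative AND groups
        group_cost = sum(edge_cost + revise_cost(child, successors, h)
                         for child in and_group)    # AND: every child in the group must be solved
        best = min(best, group_cost)
    return best

# Illustrative AND-OR graph: goal A is solved either via B alone (OR branch)
# or by solving both C and D together (AND arc).
successors = {"A": [["B"], ["C", "D"]], "B": [], "C": [], "D": []}
h = {"A": 6, "B": 5, "C": 2, "D": 3}
print(revise_cost("A", successors, h))   # min(1 + 5, (1 + 2) + (1 + 3)) = 6
```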
Advantages of AO*
• It is an optimal algorithm.
• It traverses nodes according to an ordering based on their
estimated costs.
• It can be used for both OR and AND graphs.
Disadvantages of AO*
• Sometimes, for unsolvable nodes, it cannot find the optimal path.
• Its complexity is higher than that of other algorithms.
