Ai Unit-1

Informed search algorithms, or heuristic search, utilize specific knowledge to enhance the efficiency of the search process, leading to faster and more optimal solutions compared to uninformed methods. Key characteristics include the use of a heuristic function to estimate costs, improved efficiency in large search spaces, and potential for optimality and completeness. Types of informed search include Greedy Best-First Search and A* Search, both of which apply heuristic functions to guide the search towards the goal, with applications in pathfinding, optimization, and AI tasks.

INFORMED SEARCH:

 Informed search algorithms are also known as heuristic search algorithms.


 These algorithms use specific knowledge to improve the efficiency of the
search process, leading to faster and more optimal solutions compared to
uninformed search methods.
 This additional information, typically in the form of a heuristic function, helps
estimate the cost or distance from a given node in the search space to the goal
node. The use of heuristics distinguishes informed search algorithms from
uninformed search algorithms, which do not use any domain-specific
knowledge.

CHARACTERISTICS OF INFORMED SEARCH ALGORITHMS


1. Heuristic Function: Informed search algorithms use a heuristic function
h(n) that provides an estimate of the minimal cost from node n to the goal.
This function helps the algorithm to prioritize which nodes to explore first
based on their potential to lead to an optimal solution.
Example: A player can begin the game of tic tac toe from a variety of
positions, and each position has a different chance of winning. The first
player, however, has the best chance of succeeding if they begin from the
middle of the board. As a result, winning chances can be used as a heuristic.
2. Efficiency: By focusing on more promising paths, informed search
algorithms often find solutions more quickly than uninformed methods,
especially in large or complex search spaces.
3. Optimality and Completeness: Depending on the heuristic used,
informed search algorithms can be both optimal and complete. An algorithm is
complete if it is guaranteed to find a solution if one exists, and it is optimal if it
always finds the best solution.

Types of Informed Search


Greedy Best-First Search
Greedy Best-First Search is a graph traversal algorithm that is used in artificial
intelligence for finding the shortest path between two points or solving problems
with multiple possible solutions.
It is classified as a heuristic search algorithm since it relies on an evaluation
function to determine the next step, focusing on getting as close to the goal as
possible, as quickly as possible.
Working:
 The algorithm starts at the initial node and evaluates its neighbours.
 It chooses the neighbour with the lowest heuristic value h(n) and continues
the process.
 Greedy Best-First Search does not guarantee finding the optimal path, as it
can get trapped in local optima by always choosing the path that looks best
at the moment.
Steps in working:
1. Initialize: Start from an initial node (often the root or starting point). Add
this node to a priority queue.
2. Expand Nodes: Evaluate all neighbouring nodes of the current node.
Assign each node a value based on a heuristic function, typically
representing the estimated distance to the goal.
3. Select Best Node: From the priority queue, select the node with the
lowest heuristic value (the node that appears closest to the goal).
4. Goal Check: If the selected node is the goal, terminate the search.
5. Continue: If not, repeat steps 2 to 4 for the next node until the goal is
reached or the queue is empty.

 Greedy Best-First Search works by evaluating the estimated cost of each
possible path and then expanding the path that appears cheapest. This
process is repeated until the goal is reached.
 The algorithm uses a heuristic function to determine which path is the most
promising.
 Unlike A*, only the heuristic estimate h(n) guides the choice: the cost
already spent on the current path is not taken into account, only the
estimated cost of what remains.
 At each step, the path whose frontier node has the lowest heuristic value
is chosen. This process is repeated until the goal is reached.
An example of the greedy best-first search algorithm is the graph below, where
we have to find a path from A to G.
The values in green represent the heuristic value of reaching the goal node G
from the current node.

1) We are starting from A. From A there are direct paths to node B (with a
heuristic value of 32), from A to C (with a heuristic value of 25), and from A to D
(with a heuristic value of 35).
2) As per the greedy best-first search algorithm, we choose the path with the
lowest heuristic value; currently C has the lowest value among these nodes, so
we will go from A to C.

3) Now from C we have direct paths C to F (with a heuristic value of 17) and
C to E (with a heuristic value of 19), so we will go from C to F.
4) Now from F we have a direct path to the goal node G (with a heuristic
value of 0), so we will go from F to G.

5) The goal node G has now been reached, and the path we follow is
A->C->F->G.
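The traversal above can be sketched in Python. This is a minimal illustration, not from the text: the adjacency list is reconstructed from the direct paths the example names (A-B, A-C, A-D, C-E, C-F, F-G), and the heuristic values are the green numbers quoted in the steps.

```python
import heapq

# Heuristic values h(n) quoted in the example (estimated cost to reach G).
h = {'B': 32, 'C': 25, 'D': 35, 'E': 19, 'F': 17, 'G': 0}

# Direct paths described in the example, reconstructed as an adjacency list.
graph = {
    'A': ['B', 'C', 'D'],
    'B': [],
    'C': ['E', 'F'],
    'D': [],
    'E': [],
    'F': ['G'],
    'G': [],
}

def greedy_best_first(start, goal):
    """Always expand the frontier node with the lowest heuristic value h(n)."""
    frontier = [(0, start, [start])]  # start's priority is irrelevant; it pops first
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph[node]:
            heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None  # queue exhausted without reaching the goal

print(greedy_best_first('A', 'G'))  # ['A', 'C', 'F', 'G']
```

Note that only h(n) orders the queue; the distance already travelled plays no role, which is exactly why the method is fast but not guaranteed optimal.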
Advantages of Greedy Best-First Search:
 Simple and Easy to Implement: Greedy Best-First Search is a relatively
straightforward algorithm, making it easy to implement.
 Fast and Efficient: Greedy Best-First Search is a very fast algorithm,
making it ideal for applications where speed is essential.
 Low Memory Requirements: Greedy Best-First Search requires only a
small amount of memory, making it suitable for applications with limited
memory.
 Flexible: Greedy Best-First Search can be adapted to different types of
problems and can be easily extended to more complex problems.
 Efficiency: If the heuristic function used in Greedy Best-First Search gives
a good estimate of how close a node is to the solution, the algorithm can
be very efficient and find a solution quickly, even in large search
spaces.
Disadvantages of Greedy Best-First Search:
 Inaccurate Results: Greedy Best-First Search is not always guaranteed to
find the optimal solution, as it is only concerned with finding the most
promising path.
 Local Optima: Greedy Best-First Search can get stuck in local optima,
meaning that the path chosen may not be the best possible path.
 Heuristic Function: Greedy Best-First Search requires a heuristic function
in order to work, which adds complexity to the algorithm.
 Lack of Completeness: Greedy Best-First Search is not a complete
algorithm, meaning it may not always find a solution even if one exists. This
can happen if the algorithm gets stuck in a cycle or if the search space is
too complex.
Applications of Greedy Best-First Search:
 Pathfinding: Greedy Best-First Search is used to find the shortest path
between two points in a graph. It is used in many applications such as
video games, robotics, and navigation systems.
 Machine Learning: Greedy Best-First Search can be used in machine
learning algorithms to find the most promising path through a search
space.
 Optimization: Greedy Best-First Search can be used to optimize the
parameters of a system in order to achieve the desired result.
 Game AI: Greedy Best-First Search can be used in game AI to evaluate
potential moves and choose the best one.
 Navigation: Greedy Best-First Search can be used in navigation to find the
shortest path between two locations.
 Natural Language Processing: Greedy Best-First Search can be used in
natural language processing tasks such as language translation or speech
recognition to generate the most likely sequence of words.
 Image Processing: Greedy Best-First Search can be used in image
processing to segment an image into regions of interest.

A* Search
A* search is the most commonly known form of best-first search. It uses the
heuristic function h(n) together with g(n), the cost of the path from the initial
state to node n. By combining the features of UCS and greedy best-first search,
it solves the problem effectively. Using the heuristic function, the A* search
method finds the shortest route through the search space, producing optimal
results quickly while expanding the search tree comparatively little.
In contrast to UCS, which uses g(n) alone, the A* algorithm uses g(n)+h(n).
Since the A* search method uses both the search heuristic and the cost to reach
the node, we add the two costs as follows; this total is known as the
fitness number.
f(n)=g(n)+h(n)

Steps to perform in this algorithm:


 Step 1: Place the beginning node in the OPEN list as the first step.
 Step 2: Verify whether or not the OPEN list is empty; if it is, return
failure and stop.
 Step 3: Choose the node from the OPEN list that has the evaluation
function (g+h) with the least value. Return success and stop if node n is
the destination node; otherwise, continue.
 Step 4: Expand node n, generating all of its successors, and add n to
the CLOSED list. For each successor n', check whether it is already in the
OPEN or CLOSED list before computing its evaluation function and
adding it to the OPEN list.
 Step 5: If node n' is not already in OPEN or CLOSED, add it to OPEN
and attach a back pointer to n, recording the path with the lowest
g(n') value.
 Step 6: Return to Step 2.

Example: Given the heuristic values and distances between nodes, let’s use the
A* algorithm to find the optimal path from node S to node G.
Here’s the table representing the nodes, their heuristic values, and the distances
between nodes:
Node Heuristic Value
S 5
A 3
B 4
C 2
D 6
G 0
Initialization:
Start with the initial state, which is node S.
Create an open list and add S to it with a cost of 0 (initial cost) and a heuristic
value of 5 (estimated cost to reach G).
Node Expansion:
The algorithm selects the node from the open list with the lowest cost +
heuristic value. In this case, it’s node S with a cost of 0 + 5 = 5.
Expand node S and generate its successor nodes A and G.
Calculate the cost to reach A and G and add them to the open list.
Continued Expansion:
The algorithm selects node A from the open list with a cost of 1 (cost to reach
A) + 3 (heuristic value of A) = 4.
Expand node A and generate its successor node C.
Calculate the cost to reach C and add it to the open list.
Goal Reached:
The algorithm terminates upon reaching the goal state, which is node G with a
cost of 10 (cost to reach G) + 0 (heuristic value of G).

Result:
The A* algorithm finds the optimal path from node S to node G: S -> A -> C ->
G.
This path has the lowest cost among all possible paths, considering both the
actual distance and the heuristic values.
In this example, the A* algorithm efficiently finds the optimal solution, and the
optimal path is indeed S -> A -> C -> G with a cost of 10.
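The example can be sketched in Python. The heuristic table is from the text, but the edge costs below are assumptions chosen only to be consistent with the values the example states (the cost to reach A is 1, and the final path S->A->C->G costs 10); the original distance table is not reproduced here.

```python
import heapq

# Heuristic values h(n) from the example's table.
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}

# Edge costs are illustrative assumptions consistent with the example
# (g(A) = 1, and the optimal path S->A->C->G has total cost 10).
graph = {
    'S': {'A': 1, 'G': 12},
    'A': {'B': 5, 'C': 4},
    'B': {},
    'C': {'D': 3, 'G': 5},
    'D': {},
    'G': {},
}

def a_star(start, goal):
    """Always expand the open-list node with the lowest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]  # entries are (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in graph[node].items():
            g2 = g + cost
            heapq.heappush(open_list, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float('inf')

path, cost = a_star('S', 'G')
print(path, cost)  # ['S', 'A', 'C', 'G'] 10
```

The direct edge S->G is never chosen because its f-value (12) stays above the f-values along S->A->C->G, mirroring the expansion order described in the example.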

HEURISTIC FUNCTIONS
Heuristic functions are strategies or methods that guide the search process in AI
algorithms by providing estimates of the most promising path to a solution.
They are often used in scenarios where finding an exact solution is
computationally infeasible.
Instead, heuristics provide a practical approach by narrowing down the search
space, leading to faster and more efficient problem-solving.
Heuristic functions transform complex problems into more manageable
subproblems by providing estimates that guide the search process.
This approach is particularly effective in AI planning, where the goal is to
sequence actions that lead to a desired outcome.
Role of Heuristic Functions in AI
Heuristic functions are essential in AI for several reasons:
 Efficiency: They reduce the search space, leading to faster solution times.
 Guidance: They provide a sense of direction in large problem spaces,
avoiding unnecessary exploration.
 Practicality: They offer practical solutions in situations where exact
methods are computationally prohibitive.
Common Problem Types for Heuristic Functions
Heuristic functions are particularly useful in various problem types, including:
1. Pathfinding Problems: Pathfinding problems, such as navigating a maze
or finding the shortest route on a map, benefit greatly from heuristic
functions that estimate the distance to the goal.
2. Constraint Satisfaction Problems: In constraint satisfaction problems,
such as scheduling and puzzle-solving, heuristics help in selecting the most
promising variables and values to explore.
3. Optimization Problems: Optimization problems, like the traveling
salesman problem, use heuristics to find near-optimal solutions within a
reasonable time frame.
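As a concrete instance of the pathfinding case, a common heuristic on a grid is the Manhattan distance to the goal; this short sketch is illustrative and not from the text.

```python
def manhattan(cell, goal):
    """Manhattan-distance heuristic: suitable when grid moves are
    restricted to up/down/left/right (it never overestimates then)."""
    (x1, y1), (x2, y2) = cell, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((0, 0), (3, 4)))  # 7: three horizontal steps plus four vertical
```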
Different Categories of Heuristic Search Techniques in AI

We can categorize the Heuristic Search techniques into two types:

Direct Heuristic Search Techniques

Direct heuristic search techniques may also be called blind control strategy, blind
search, and uninformed search.

Weak Heuristic Techniques

Weak heuristic techniques are also known as heuristic control strategies,
informed search, and heuristic search. These are successful when used
effectively on the appropriate tasks and typically require domain-specific
knowledge.

To explore and expand, users require additional information with which to
compute preferences across child nodes.
Examples of Heuristic Functions in AI

Traveling Salesman Problem

What is the quickest path between each city and its starting point, given a list of
cities and the distances between each pair of them?

This problem can be brute-forced for a small number of cities, but as the number
of cities grows, finding a solution becomes much harder.

This problem is well handled by the nearest-neighbour (NN) heuristic, which
directs the computer to always choose the closest unexplored city as the next
stop on the path. While NN does not always find the optimum solution, it is
frequently close enough that the difference is insignificant for the salesman's
purposes. This approach reduces TSP's complexity from O(n!) to O(n^2).
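The nearest-neighbour heuristic described above can be sketched as follows; the city coordinates are hypothetical.

```python
import math

def nearest_neighbour_tour(cities, start=0):
    """Greedy TSP heuristic: always visit the closest unexplored city,
    then return to the origin. Runs in O(n^2) rather than O(n!)."""
    unvisited = set(range(len(cities))) - {start}
    tour, current = [start], start
    while unvisited:
        # Pick the closest unvisited city to the current one.
        nxt = min(unvisited, key=lambda c: math.dist(cities[current], cities[c]))
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt
    tour.append(start)  # close the loop back to the starting city
    return tour

cities = [(0, 0), (1, 0), (4, 0), (2, 3)]  # hypothetical coordinates
print(nearest_neighbour_tour(cities))  # [0, 1, 2, 3, 0]
```

The inner `min` over unvisited cities inside the outer loop is where the O(n^2) cost comes from: n iterations, each scanning up to n candidates.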

Search Engine

People have been interested in SEO for as long as there have been search engines.
Users want to quickly discover the information they need when utilizing a search
engine. Search engines use heuristics to speed up the search process because such
a staggering amount of data is available. A heuristic could initially attempt each
alternative at each stage. Still, as the search progresses, it can quit at any point if
the present possibility is inferior to the best solution already found. The search
engine's accuracy and speed can be improved in this way.

Applications of Heuristic Functions in AI


Heuristic functions find applications in various AI domains. Here are three
notable examples:
1. Game AI: In games like chess and tic-tac-toe, heuristic functions evaluate
the board’s state, guiding the AI to make strategic moves that maximize its
chances of winning.
2. Robotics: Robotic path planning uses heuristics to navigate environments
efficiently, avoiding obstacles and reaching target locations.
3. Natural Language Processing (NLP): In NLP, heuristics help in parsing
sentences, understanding context, and generating coherent text responses.

Local search:

A local search algorithm is an optimization technique in AI that iteratively


improves a solution by exploring its neighbouring solutions.
Classical search:

Classical search is a search algorithm that uses problem specific knowledge to


find a solution.

Hill Climbing Search:

Hill climbing is a simple optimization algorithm used in Artificial Intelligence


(AI) to find the best possible solution for a given problem. It belongs to the
family of local search algorithms and is often used in optimization problems
where the goal is to find the best solution from a set of possible solutions.

o Hill climbing algorithm is a local search algorithm which continuously


moves in the direction of increasing elevation/value to find the peak of
the mountain or best solution to the problem. It terminates when it
reaches a peak value where no neighbour has a higher value.
o Hill climbing algorithm is a technique which is used for optimizing the
mathematical problems. One of the widely discussed examples of Hill
climbing algorithm is Traveling-salesman Problem in which we need to
minimize the distance travelled by the salesman.
o It is also called greedy local search as it only looks to its good immediate
neighbour state and not beyond that.
o A node of hill climbing algorithm has two components which are state
and value.
o Hill Climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain and handle the search tree or
graph as it only keeps a single current state.

 In Hill Climbing, the algorithm starts with an initial solution and then
iteratively makes small changes to it in order to improve the solution.
These changes are based on a heuristic function that evaluates the quality
of the solution. The algorithm continues to make these small changes
until it reaches a local maximum, meaning that no further improvement
can be made with the current set of moves.
 There are several variations of Hill Climbing, including steepest ascent
Hill Climbing, first-choice Hill Climbing, and simulated annealing. In
steepest ascent Hill Climbing, the algorithm evaluates all the possible
moves from the current solution and selects the one that leads to the best
improvement. In first-choice Hill Climbing, the algorithm randomly
selects a move and accepts it if it leads to an improvement, regardless of
whether it is the best move. Simulated annealing is a probabilistic
variation of Hill Climbing that allows the algorithm to occasionally
accept worse moves in order to avoid getting stuck in local maxima.
Hill Climbing can be useful in a variety of optimization problems, such as
scheduling, route planning, and resource allocation. However, it has some
limitations, such as the tendency to get stuck in local maxima and the lack
of diversity in the search space. Therefore, it is often combined with other
optimization techniques, such as genetic algorithms or simulated
annealing, to overcome these limitations and improve the search results.

Advantages of Hill Climbing algorithm:

1. Hill Climbing is a simple and intuitive algorithm that is easy to


understand and implement.
2. It can be used in a wide variety of optimization problems, including those
with a large search space and complex constraints.
3. Hill Climbing is often very efficient in finding local optima, making it a
good choice for problems where a good solution is needed quickly.
4. The algorithm can be easily modified and extended to include additional
heuristics or constraints.

Disadvantages of Hill Climbing algorithm:

1. Hill Climbing can get stuck in local optima, meaning that it may not find
the global optimum of the problem.
2. The algorithm is sensitive to the choice of initial solution, and a poor
initial solution may result in a poor final solution.
3. Hill Climbing does not explore the search space very thoroughly, which
can limit its ability to find better solutions.
4. It may be less effective than other optimization algorithms, such as
genetic algorithms or simulated annealing, for certain types of problems.

Features of Hill Climbing:

Following are some main features of Hill Climbing Algorithm:

o Generate and Test variant: Hill Climbing is a variant of the Generate and
Test method. The Generate and Test method produces feedback which
helps to decide which direction to move in the search space.
o Greedy approach: Hill-climbing search moves in the direction
which optimizes the cost.
o No backtracking: It does not backtrack through the search space, as it does
not remember previous states.
o Deterministic Nature: Hill Climbing is a deterministic optimization
algorithm, which means that given the same initial conditions and the same
problem, it will always produce the same result. There is no randomness or
uncertainty in its operation.
o Local Neighbourhood: Hill Climbing operates within a small area around
the current solution. It explores solutions that are closely related to the
current state by making small, gradual changes. This approach allows it
to find a solution that is better than the current one, although it may not be
the global optimum.

State-space Diagram for Hill Climbing:


 The state-space landscape is a graphical representation of the hill-
climbing algorithm, showing a graph between the various states of
the algorithm and the objective function/cost.
 On the Y-axis we take the function, which can be an objective
function or a cost function, with the state space on the X-axis. If the
function on the Y-axis is cost, then the goal of the search is to find the
global and local minima. If the function on the Y-axis is an objective
function, then the goal of the search is to find the global and local
maxima.

Different regions in the state space landscape:


Local Maximum: A local maximum is a state which is better than its
neighbour states, but there also exists another state which is higher than it.

Global Maximum: Global maximum is the best possible state of state


space landscape. It has the highest value of objective function.
Current state: It is a state in a landscape diagram where an agent is
currently present.

Flat local maximum: It is a flat space in the landscape where all the
neighbour states of the current state have the same value.

Shoulder: It is a plateau region which has an uphill edge.

Types of Hill Climbing Algorithm:

1.Simple hill Climbing:

Simple hill climbing is the simplest way to implement a hill climbing
algorithm. It evaluates one neighbour node state at a time, selects the
first one that improves the current cost, and sets it as the current state. It
checks only one successor state; if that successor is better than the current
state it moves there, otherwise it stays in the same state.

This algorithm has the following features:

o Less time consuming

o Less optimal; the solution is not guaranteed

Algorithm for Simple Hill Climbing:

Step 1: Evaluate the initial state, if it is goal state then return success and
stop.

Step 2: Loop Until a solution is found or there is no new operator left to


apply.

Step 3: Select and apply an operator to the current state.

Step 4: Check new state:

If it is goal state, then return success and quit.

Else if it is better than the current state then assign new state as a current
state.

Else, if it is not better than the current state, then return to step 2.

Step 5: Exit.
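The steps above can be sketched as follows. The objective function and the small ±step neighbourhood are hypothetical stand-ins; a real problem supplies its own states and operators.

```python
def simple_hill_climbing(f, start, step=0.1, max_iters=10_000):
    """Move to the first neighbour that improves the objective f;
    stop when no neighbour does (a local maximum)."""
    current = start
    for _ in range(max_iters):
        for neighbour in (current - step, current + step):
            if f(neighbour) > f(current):   # accept the first improving neighbour
                current = neighbour
                break
        else:
            return current                  # no neighbour improves: local maximum
    return current

# Maximize f(x) = -(x - 2)^2, whose single peak is at x = 2.
peak = simple_hill_climbing(lambda x: -(x - 2) ** 2, start=0.0)
print(round(peak, 1))  # 2.0
```

Because only the first improving successor is checked, each iteration is cheap, but the result is a local optimum of whatever slope the start happens to sit on.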
2. Steepest-Ascent hill climbing:

The steepest-ascent algorithm is a variation of the simple hill climbing
algorithm. This algorithm examines all the neighbouring nodes of the
current state and selects the one neighbour node which is closest to the goal
state. This algorithm consumes more time, as it searches multiple
neighbours.

Algorithm for Steepest-Ascent hill climbing:

o Step 1: Evaluate the initial state, if it is goal state then return success and
stop, else make current state as initial state.
o Step 2: Loop until a solution is found or the current state does not
change.
o Let SUCC be a state such that any successor of the current state
will be better than it.
o For each operator that applies to the current state:
o Apply the new operator and generate a new state.
o Evaluate the new state.
o If it is goal state, then return it and quit, else compare it to
the SUCC.
o If it is better than SUCC, then set new state as SUCC.
o If the SUCC is better than the current state, then set current
state to SUCC.
o Step 3: Exit.

3. Stochastic hill climbing:


Stochastic hill climbing does not examine all of its neighbours before moving. Instead, this search
algorithm selects one neighbour node at random and decides whether to make it the current state
or examine another state.



Algorithm for Stochastic hill climbing:

o Evaluate the initial state. If it is a goal state then stop and return
success. Otherwise, make the initial state the current state.
o Repeat these steps until a solution is found or the current state does
not change.
o Apply the successor function to the current state and generate all the
neighbour states.
o Among the generated neighbour states which are better than the
current state, choose a state randomly (or based on some probability
function).
o If the chosen state is the goal state, then return success; else make it
the current state and repeat.
o Exit from the function.
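A minimal sketch of the stochastic variant: instead of comparing all neighbours, an improving neighbour is chosen at random. The numeric objective and step size are illustrative assumptions.

```python
import random

def stochastic_hill_climbing(f, start, step=0.1, max_iters=10_000, seed=0):
    """Move to a randomly chosen improving neighbour rather than
    exhaustively picking the single best one."""
    rng = random.Random(seed)
    current = start
    for _ in range(max_iters):
        neighbours = [current - step, current + step]
        better = [n for n in neighbours if f(n) > f(current)]
        if not better:
            return current               # no improving neighbour: stop
        current = rng.choice(better)     # random choice among better neighbours
    return current

# Maximize f(x) = -(x - 3)^2, whose single peak is at x = 3.
peak = stochastic_hill_climbing(lambda x: -(x - 3) ** 2, start=0.0)
print(round(peak, 1))  # 3.0
```

On this one-dimensional toy the random choice rarely matters; the variant pays off in large neighbourhoods, where evaluating every successor per step is expensive.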

SIMULATED ANNEALING SEARCH

Simulated Annealing is a probabilistic technique used for solving both


combinatorial and continuous optimization problems.

Simulated Annealing is an optimization algorithm designed to search for an


optimal or near-optimal solution in a large solution space. The name and concept
are derived from the process of annealing in metallurgy, where a material is
heated and then slowly cooled to remove defects and achieve a stable crystalline
structure. In Simulated Annealing, the “heat” corresponds to the degree of
randomness in the search process, which decreases over time (cooling schedule)
to refine the solution. The method is widely used in combinatorial optimization,
where problems often have numerous local optima that standard techniques like
gradient descent might get stuck in. Simulated Annealing excels in escaping these
local minima by introducing controlled randomness in its search, allowing for a
more thorough exploration of the solution space.

The algorithm starts with an initial solution and a high “temperature,” which
gradually decreases over time. Here’s a step-by-step breakdown of how the
algorithm works:

 Initialization: Begin with an initial solution S0 and an initial

temperature T0. The temperature controls how likely the algorithm
is to accept worse solutions as it explores the search space.

 Neighbourhood Search: At each step, a new solution S’ is generated by


making a small change (or perturbation) to the current solution S.

 Objective Function Evaluation: The new solution S’ is evaluated using


the objective function. If S’ provides a better solution than S, it is accepted
as the new solution.

 Acceptance Probability: If S’ is worse than S, it may still be accepted


with a probability based on the temperature and the difference in objective
function values. The acceptance probability is given by:

P(accept) = e^(-ΔE / T), where ΔE is the difference in objective values and T is the current temperature.

 Cooling Schedule: After each iteration, the temperature is decreased


according to a predefined cooling schedule, which determines how quickly
the algorithm converges. Common cooling schedules include linear,
exponential, or logarithmic cooling.

 Termination: The algorithm continues until the system reaches a low


temperature (i.e., no more significant improvements are found), or a
predetermined number of iterations is reached.
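The steps above can be sketched as follows. The objective, starting point, and cooling parameters are illustrative assumptions; the acceptance rule is the e^(-ΔE/T) probability described in the text.

```python
import math
import random

def simulated_annealing(f, start, t0=10.0, cooling=0.95, step=0.5,
                        min_temp=1e-3, seed=0):
    """Minimize f: perturb the current solution, accept improvements always,
    accept worse moves with probability exp(-delta_E / T), then cool T."""
    rng = random.Random(seed)
    current = best = start
    temp = t0
    while temp > min_temp:                               # termination: low temperature
        neighbour = current + rng.uniform(-step, step)   # neighbourhood search
        delta = f(neighbour) - f(current)                # objective evaluation
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current = neighbour                          # acceptance rule
        if f(current) < f(best):
            best = current                               # track best-so-far
        temp *= cooling                                  # exponential cooling schedule
    return best

# Hypothetical objective: minimize (x - 4)^2 starting from x = 0.
best = simulated_annealing(lambda x: (x - 4) ** 2, start=0.0)
print(best)
```

Early on, when T is large, exp(-ΔE/T) is close to 1 and almost any move is accepted (exploration); as T shrinks, the rule degenerates to plain hill climbing (exploitation).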

Cooling Schedule and Its Importance:

The cooling schedule plays a crucial role in the performance of Simulated


Annealing. If the temperature decreases too quickly, the algorithm might
converge prematurely to a suboptimal solution (local optimum). On the
other hand, if the cooling is too slow, the algorithm may take an excessively
long time to find the optimal solution. Hence, finding the right balance
between exploration (high temperature) and exploitation (low temperature)
is essential.

Advantages of Simulated Annealing

 Ability to Escape Local Minima: One of the most significant advantages


of Simulated Annealing is its ability to escape local minima. The
probabilistic acceptance of worse solutions allows the algorithm to explore
a broader solution space.

 Simple Implementation: The algorithm is relatively easy to implement


and can be adapted to a wide range of optimization problems.

 Global Optimization: Simulated Annealing can approach a global


optimum, especially when paired with a well-designed cooling schedule.

 Flexibility: The algorithm is flexible and can be applied to both continuous


and discrete optimization problems.

Limitations of Simulated Annealing

 Parameter Sensitivity: The performance of Simulated Annealing is


highly dependent on the choice of parameters, particularly the initial
temperature and cooling schedule.

 Computational Time: Since Simulated Annealing requires many


iterations, it can be computationally expensive, especially for large
problems.

 Slow Convergence: The convergence rate is generally slower than more


deterministic methods like gradient-based optimization.

Applications of Simulated Annealing

 Simulated Annealing has found widespread use in various fields due to its
versatility and effectiveness in solving complex optimization problems.
Some notable applications include:

 Traveling Salesman Problem (TSP): In combinatorial optimization, SA is


often used to find near-optimal solutions for the TSP, where a salesman
must visit a set of cities and return to the origin, minimizing the total travel
distance.

 VLSI Design: SA is used in the physical design of integrated circuits,


optimizing the layout of components on a chip to minimize area and delay.

 Machine Learning: In machine learning, SA can be used for


hyperparameter tuning, where the search space for hyperparameters is
large and non-convex.

 Scheduling Problems: SA has been applied to job scheduling, minimizing


delays and optimizing resource allocation.

 Protein Folding: In computational biology, SA has been used to predict


protein folding by optimizing the conformation of molecules to achieve the
lowest energy state.

LOCAL SEARCH IN CONTINUOUS SPACES

Continuous space:

A continuous space (or continuous state space) is an environment whose
successor function, in many situations, returns an infinite number of states. This
problem can be handled by two methods.

i) Discretization
ii) Gradient function

Discretization is the process of converting something continuous into
discrete counterparts. The gradient function is a method through which the
magnitude and direction of the steepest slope can be determined.

The optimal solution in a continuous space can be obtained by applying
either of these methods along with a local search technique.

Example:

Suppose there is a requirement to place three airports anywhere in Tamilnadu
such that the sum of the squared distances from each city to its nearest airport is
minimized. For this purpose, the coordinates of the three airports need to be
determined. Let the coordinates be (x1, y1), (x2, y2) and (x3, y3). The objective
function for this continuous space is f(x1, y1, x2, y2, x3, y3).

If discretization is followed, the neighbourhood of each state is restricted
so that only one airport moves at a time, in only one direction, i.e. either x or y.
This results in 12 successors for every state (6 variables x 2 directions).

In the case of the gradient method, the vector ∇f gives the magnitude and
direction of the steepest slope. For the objective function f(x1, y1, x2, y2, x3, y3):

∇f = (∂f/∂x1, ∂f/∂y1, ∂f/∂x2, ∂f/∂y2, ∂f/∂x3, ∂f/∂y3)

Applying local search algorithms in continuous state spaces may run into problems like local
maxima, ridges and plateaus.

In such cases, random restarts and simulated annealing are more helpful than plain local search.

Methods like stochastic hill climbing and simulated annealing can be applied directly to
continuous spaces.
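The airport example can be sketched with a simple gradient step: each airport is repeatedly moved against the gradient of the sum of squared distances to the cities it currently serves. The city coordinates and airport starting positions below are hypothetical; the text leaves them unspecified.

```python
# Hypothetical city coordinates (the text does not give any).
cities = [(0, 0), (1, 2), (8, 8), (9, 7), (4, 9), (5, 1)]

def objective(airports):
    """f(x1, y1, x2, y2, x3, y3): sum of squared distances from each city
    to its nearest airport."""
    return sum(min((cx - ax) ** 2 + (cy - ay) ** 2 for ax, ay in airports)
               for cx, cy in cities)

def gradient_step(airports, lr=0.05):
    """One step of gradient descent. For the cities nearest to airport j,
    the partial derivative of sum ||c - a_j||^2 w.r.t. a_j is 2 * (a_j - c)."""
    grads = [[0.0, 0.0] for _ in airports]
    for cx, cy in cities:
        j = min(range(len(airports)),
                key=lambda k: (cx - airports[k][0]) ** 2 + (cy - airports[k][1]) ** 2)
        grads[j][0] += 2 * (airports[j][0] - cx)
        grads[j][1] += 2 * (airports[j][1] - cy)
    return [(ax - lr * gx, ay - lr * gy)
            for (ax, ay), (gx, gy) in zip(airports, grads)]

airports = [(2.0, 2.0), (6.0, 6.0), (4.0, 4.0)]  # assumed starting positions
for _ in range(200):
    airports = gradient_step(airports)
print(round(objective(airports), 2))
```

Each step can only lower the objective (the step size is small enough to be stable), so the loop settles into a local minimum: each airport drifts toward the centroid of the cities assigned to it, illustrating why plain gradient descent here finds a local rather than guaranteed global optimum.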
