
Local Search and Optimization Problems

• The search algorithms we have seen so far mostly concentrate on the path through which the goal is reached. But if the problem does not demand the path to the solution and expects only the final configuration of the solution, then we have a different type of problem to solve.

• Following are problems where only the solution state configuration is important, and not the path by which the solution was reached.

1) 8-queens (where only the solution state configuration is expected).

2) Integrated circuit design.

3) Factory-floor layout.

4) Job-shop scheduling.

5) Automatic programming.

6) Telecommunication network optimization.

7) Vehicle routing.

8) Portfolio management.

For such problems we can use another class of algorithms, algorithms that do not worry about paths at all. These are local search algorithms.

Local Search Algorithms


- They operate using a single current state (rather than multiple paths).

- They generally move only to neighbours of the current state.

- There is no requirement to maintain paths in memory.

- They are not "systematic" search procedures.

- The main advantages of local search algorithms are

1) They use a very little and constant amount of memory.

2) They have the ability to find reasonable solutions in large or infinite state spaces (for which systematic algorithms are unsuitable).

Local search algorithms are useful for solving pure optimization problems, in which the main aim is to find the best state according to a required objective function.

Local search algorithms make use of a concept called the state-space landscape. This landscape has two features -

1) Location (defined by the state)

2) Elevation (defined by the value of the heuristic cost function or objective function).

• If elevation corresponds to cost, then the aim is to find the lowest valley (a global minimum).

• If elevation corresponds to the objective function, then the aim is to find the highest peak (a global maximum).

• Local search algorithms explore the landscape.

Performance measurement

- A local search algorithm is complete if it surely finds a goal whenever one exists.

- A local search algorithm is optimal if it always finds a global minimum or maximum.

Hill Climbing Algorithm in Artificial Intelligence
 Hill climbing is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain, or the best solution to the problem. It terminates when it reaches a peak where no neighbor has a higher value.
 Hill climbing is a technique used for optimizing mathematical problems. One of the widely discussed examples is the Traveling Salesman Problem, in which we need to minimize the distance traveled by the salesman.
 It is also called greedy local search, as it only looks to its good immediate neighbor state and not beyond that.
 A node of the hill climbing algorithm has two components: state and value.
 Hill climbing is mostly used when a good heuristic is available.
 In this algorithm, we don't need to maintain and handle a search tree or graph, as it only keeps a single current state.

Features of Hill Climbing:


Following are some main features of Hill Climbing Algorithm:
 Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
 Greedy approach: Hill-climbing algorithm search moves in the direction which optimizes
the cost.
 No backtracking: It does not backtrack the search space, as it does not remember the
previous states.

State-space Diagram for Hill Climbing:


The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.
On the Y-axis we take the function, which can be an objective function or a cost function, and the state space on the X-axis. If the function on the Y-axis is cost, then the goal of the search is to find the global minimum; if it is an objective function, then the goal is to find the global maximum.

Different regions in the state space landscape:


Local Maximum: A local maximum is a state which is better than its neighbor states, but there is another state elsewhere in the landscape which is higher than it.
Global Maximum: Global maximum is the best possible state of state space landscape. It has the
highest value of objective function.
Current state: It is a state in a landscape diagram where an agent is currently present.
Flat local maximum: It is a flat region in the landscape where all the neighbor states of the current state have the same value.
Shoulder: It is a plateau region which has an uphill edge.

Types of Hill Climbing Algorithm:


 Simple hill Climbing:
 Steepest-Ascent hill-climbing:
 Stochastic hill Climbing:

1. Simple Hill Climbing:


Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates only one neighbor node state at a time and selects the first one which improves the current cost, setting it as the current state. It checks only one successor state and, if that successor is better than the current state, moves to it; otherwise it stays in the same state. This algorithm has the following features:
 Less time consuming
 Less optimal solution and the solution is not guaranteed

Algorithm for Simple Hill Climbing:


 Step 1: Evaluate the initial state, if it is goal state then return success and Stop.
 Step 2: Loop Until a solution is found or there is no new operator left to apply.
 Step 3: Select and apply an operator to the current state.
 Step 4: Check new state:
1. If it is goal state, then return success and quit.
2. Else if it is better than the current state then assign new state as a current state.
3. Else, if it is not better than the current state, then return to Step 2.
 Step 5: Exit.
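The steps above can be sketched as a short Python function. This is an illustrative sketch only: `neighbors`, `value`, and `is_goal` are assumed problem-specific helpers, not part of the original algorithm statement.

```python
def simple_hill_climbing(start, neighbors, value, is_goal):
    """Accept the first neighbor that improves on the current state."""
    current = start
    while True:
        if is_goal(current):
            return current
        improved = False
        for candidate in neighbors(current):
            if value(candidate) > value(current):
                current = candidate      # take the first better neighbor
                improved = True
                break
        if not improved:                 # no operator yields a better state
            return current
```

The key point is the early `break`: the first improving successor is taken, without checking whether an even better neighbor exists.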

2. Steepest-Ascent hill climbing:


The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. It examines all the neighboring nodes of the current state and selects the neighbor node which is closest to the goal state (the steepest uphill move). This algorithm consumes more time, as it searches multiple neighbors.

Algorithm for Steepest-Ascent hill climbing:


 Step 1: Evaluate the initial state, if it is goal state then return success and stop, else make
current state as initial state.
 Step 2: Loop until a solution is found or the current state does not change.
1. Let SUCC be a state such that any successor of the current state will be better than it.
2. For each operator that applies to the current state:
I. Apply the new operator and generate a new state.
II. Evaluate the new state.
III. If it is goal state, then return it and quit, else compare it to the SUCC.
IV. If it is better than SUCC, then set the new state as SUCC.
V. If SUCC is better than the current state, then set the current state to SUCC.
 Step 3: Exit.
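A minimal Python sketch of the loop above (again assuming problem-specific `neighbors` and `value` helpers); `succ` plays the role of SUCC:

```python
def steepest_ascent(start, neighbors, value):
    """Move to the best neighbor until no neighbor improves the current state."""
    current = start
    while True:
        # SUCC: the best successor among all neighbors of the current state
        succ = max(neighbors(current), key=value, default=current)
        if value(succ) <= value(current):   # SUCC is no better: stop
            return current
        current = succ
```

Unlike simple hill climbing, every neighbor is evaluated before a move is made, which is why this variant is slower per step.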

3. Stochastic hill climbing:


Stochastic hill climbing does not examine all its neighbors before moving. Rather, this search algorithm selects one neighbor node at random and decides whether to choose it as the current state or to examine another state.
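One simple way to realize this in Python (a sketch; the acceptance rule here just takes any uphill random neighbor, which is the most basic variant):

```python
import random

def stochastic_hill_climbing(start, neighbors, value, max_steps=1000):
    """Pick a random neighbor each step; accept it only if it is uphill."""
    current = start
    for _ in range(max_steps):
        candidate = random.choice(neighbors(current))
        if value(candidate) > value(current):
            current = candidate
    return current
```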

Problems in Hill Climbing Algorithm:


1. Local Maximum: A local maximum is a peak state in the landscape which is better than each of its neighboring states, but another, higher state exists elsewhere in the landscape.
Solution: A backtracking technique can be a solution to the local maximum problem. Keep a list of promising paths so that the algorithm can backtrack through the search space and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state have the same value; because of this, the algorithm cannot find any best direction to move. A hill-climbing search might get lost in the plateau area.
Solution: Take bigger steps (or very small steps) while searching. For example, randomly select a state far from the current state, so that the algorithm may land in a non-plateau region.

3. Ridges: A ridge is a special form of local maximum. It is an area higher than its surrounding areas, but it has a slope of its own and cannot be climbed in a single move.
Solution: With the use of bidirectional search, or by moving in several directions at once, we can mitigate this problem.
Example for Local Search
Consider the 8-queens problem:

A complete-state formulation is used for local search algorithms. In the 8-queens problem, each state has 8 queens on the board, one per column. There are two functions associated with 8-queens.

1) The successor function: It returns all possible states which are generated by moving a single queen to another cell in the same column. Each state therefore has 8 × 7 = 56 successors.

2) The heuristic cost function: It is a function h which counts the number of pairs of queens attacking each other, either directly or indirectly. Its value is zero at the global minimum of the function, which occurs only at perfect solutions.
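The heuristic cost h can be computed directly from a column-list representation of the board (an illustrative sketch; here `state[i]` is assumed to be the row, 1-8, of the queen in column i):

```python
def attacking_pairs(state):
    """h for 8-queens: number of pairs of queens attacking each other.

    With one queen per column, only same-row and same-diagonal
    attacks are possible."""
    h = 0
    n = len(state)
    for i in range(n):
        for j in range(i + 1, n):
            same_row = state[i] == state[j]
            same_diag = abs(state[i] - state[j]) == j - i
            if same_row or same_diag:
                h += 1
    return h
```

A perfect solution has h = 0; the worst board, with all queens in one row, has h = 28 (every one of the C(8,2) = 28 pairs attacks).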

Simulated Annealing:
A hill-climbing algorithm which never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum. If the algorithm instead applies a pure random walk, moving to a randomly chosen successor, it may be complete but is not efficient. Simulated annealing is an algorithm which yields both efficiency and completeness.
In metallurgy, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state. The same idea is used in simulated annealing: the algorithm picks a random move instead of the best move. If the random move improves the state, it is always accepted. Otherwise, the algorithm accepts the move with a probability less than 1, so it sometimes moves downhill.

Simulated Annealing
Simulated annealing is a probabilistic local search algorithm inspired by the annealing process in
metallurgy. It allows the algorithm to accept worse solutions with a certain probability, which
decreases over time. This randomness introduces exploration into the search process, helping the
algorithm escape local optima and potentially find global optima.
 Initialization: Start with an initial solution.
 Evaluation: Evaluate the quality of the initial solution.
 Neighbor Generation: Generate neighboring solutions.
 Selection: Choose a neighboring solution based on the improvement in the objective
function and the probability of acceptance.
 Termination: Continue iterating until a termination condition is met.
The key to simulated annealing's success is the "temperature" parameter, which controls the
likelihood of accepting worse solutions. Initially, the temperature is high, allowing for more
exploration. As the algorithm progresses, the temperature decreases, reducing the acceptance
probability and allowing the search to converge towards a better solution.
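The steps above can be sketched as follows. This is a minimal illustration, not a tuned implementation; the initial temperature, cooling factor, and stopping threshold are assumed values:

```python
import math
import random

def simulated_annealing(start, neighbor, cost, t0=10.0, cooling=0.995,
                        t_min=1e-3):
    """Minimize `cost`, accepting worse moves with probability exp(-delta / T)."""
    current, t = start, t0
    while t > t_min:
        candidate = neighbor(current)            # neighbor generation
        delta = cost(candidate) - cost(current)  # evaluation
        # selection: always accept improvements; accept worse moves
        # with probability exp(-delta / T), which shrinks as T cools
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
        t *= cooling                             # temperature schedule
    return current
```

At high temperature almost any move is accepted (exploration); as `t` decreases, the algorithm behaves more and more like plain hill climbing (exploitation).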

Travelling Salesman Problem


The Travelling Salesman Problem (TSP) is a classic optimization problem in which a salesman is
tasked with finding the shortest possible route that visits a set of cities exactly once and returns to
the starting city. TSP is NP-hard, meaning that finding an exact solution for large instances becomes
computationally infeasible.
Local search algorithms, including hill climbing and simulated annealing, are often used to find
approximate solutions to the TSP. In this context, the cities and their connections form the solution
space, and the objective function is to minimize the total distance traveled.
These algorithms iteratively explore different routes, making incremental changes to improve the
tour's length. While they may not guarantee the absolute optimal solution, they often find high-
quality solutions in a reasonable amount of time, making them practical for solving TSP instances.
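As an illustration, one common incremental change for TSP tours is to reverse a segment of the tour (a "2-opt" move; this particular neighborhood is one possible choice, not something the text above prescribes). The sketch below hill-climbs with such moves:

```python
import itertools

def tour_length(tour, dist):
    """Total length of a closed tour; dist[a][b] is the city-to-city distance."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt(tour, dist):
    """Hill climbing on tours: reverse a segment whenever it shortens the tour."""
    best = list(tour)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(best)), 2):
            candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
            if tour_length(candidate, dist) < tour_length(best, dist):
                best, improved = candidate, True
    return best
```

Like any hill climber, this stops at a locally optimal tour; simulated annealing can be layered on top by occasionally accepting lengthening moves.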

Genetic Algorithms
1) A genetic algorithm is a learning algorithm that proceeds at an evolutionary pace. A higher degree of efficiency can be achieved with this paradigm of AI, called genetic algorithms.

2) A genetic algorithm is a variant of stochastic beam search.

3) In a genetic algorithm, two parent states are combined to generate a good successor state.

4) The analogy to natural selection is the same as in stochastic beam search, except
now we are dealing with sexual rather than asexual reproduction.

1. Term used in Genetic Algorithm


1) Population: The population is a set of states which are generated randomly.

2) Individual: An individual is a state, represented as a string over a finite alphabet.

Example - A string of 0s and 1s.

For example, in the 8-queens problem each state is specified by the positions of the 8 queens, so the memory required is 8 × log2 8 = 24 bits; each state is represented with 8 digits.

3) Fitness function: It is an evaluation function, on the basis of which each state is rated. A fitness function should return higher values for better states. The probability of a state being chosen for reproduction is directly proportional to its fitness score.

In the 8-queens problem the fitness function counts the number of non-attacking pairs of queens, so a perfect solution has value 28.

4) Crossover: Selection of states depends on the fitness function: a state is selected only if its fitness value is above a threshold, otherwise it is discarded. For each pair of selected states, the strings are split at a division point called the crossover point, which is chosen at random from the positions in the string.

5) Mutation: Mutation is one of the genetic operators. It works by random selection and change. For example, mutation may select and change a single bit of a pattern, switching 0 to 1 or 1 to 0.

6) Schema: A schema is a substring in which the value of some positions can be left unspecified.

2. Working of a Genetic Algorithm


Input: 1) State population (a set of individuals)

2) Fitness function (that rates individuals).

Steps

1) Create an individual 'X' (parent) by random selection, weighted by the fitness of 'X'.

2) Create an individual 'Y' (parent) by random selection, weighted by the fitness of 'Y'.

3) A child is created by combining X and Y.

4) With small probability, apply the mutate operator to the child.

5) Add the child to the new population.

6) The above process is repeated until an individual is fit enough, as specified by the fitness function.
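The steps above can be sketched in Python for string-like individuals. This is one possible realization under assumed details: fitness-proportional selection via `random.choices`, a single random crossover point, and a caller-supplied `mutate` helper:

```python
import random

def genetic_algorithm(population, fitness, mutate, generations=100,
                      mutation_rate=0.1, target=None):
    """Evolve a population of equal-length sequences toward higher fitness."""
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        new_population = []
        for _ in range(len(population)):
            # Steps 1-2: pick two parents, weighted by fitness
            x, y = random.choices(population, weights=weights, k=2)
            # Step 3: crossover at a random point
            point = random.randrange(1, len(x))
            child = x[:point] + y[point:]
            # Step 4: mutate with small probability
            if random.random() < mutation_rate:
                child = mutate(child)
            # Step 5: add the child to the new population
            new_population.append(child)
        population = new_population
        # Step 6: stop once an individual is fit enough
        best = max(population, key=fitness)
        if target is not None and fitness(best) >= target:
            break
    return max(population, key=fitness)
```

A classic toy use is "OneMax" (maximize the number of 1s in a bit string), where the fitness function is simply the sum of the bits.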

Genetic algorithm example

Example: 8-queens problem states.

States:

Assume each queen has its own column; represent a state by listing the row where the queen is in each column (digits 1 to 8).

For example, one such state can be represented as 16257483.

3. Example: 8 Queens Problem Fitness


Fitness function: Instead of h as before, use the number of non-attacking pairs of queens. There are 28 pairs of distinct queens (taking the smaller column first), so solutions have fitness 28. (Basically, the fitness function is 28 - h.)

For example, the fitness of the state 16257483 is 27 (the queens in columns 4 and 7 attack each other).
Example: 8 queens problem crossover.

Choose pairs for reproduction (so that those with higher fitness are more likely to be
chosen, perhaps multiple times).

For each pair, choose a random crossover point between 1 and 8, say 3.

Produce offspring by taking positions 1-3 from the first parent and 4-8 from the second (and vice versa). Apply mutation (with small probability) to the offspring.
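Concretely, with crossover point 3 (the second parent string below is an assumed example, not from the original text):

```python
import random

def crossover(x, y, point):
    """Swap the tails of two parent strings at the crossover point."""
    return x[:point] + y[point:], y[:point] + x[point:]

def mutate(child):
    """Replace one randomly chosen digit with a random row number (1-8)."""
    i = random.randrange(len(child))
    return child[:i] + str(random.randint(1, 8)) + child[i + 1:]

a, b = crossover("16257483", "32752411", point=3)
# a == "16252411" (positions 1-3 from the first parent, 4-8 from the second)
# b == "32757483" (and vice versa)
```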

Importance of representation

The parts we swap in crossover should result in a well-formed solution (and should preferably be meaningful).

Consider what would happen with a binary representation (where each position requires 3 bits).
Also, the chosen representation reduces the search space considerably (compared to representing each square, for example).

4. Optimization using Genetic Algorithm


1) A genetic algorithm is a biological model of intelligent evolution. It generates a population of competing children.

2) Poor candidate solutions vanish, as the genetic algorithm keeps the best children.

Thus, in practice a genetic algorithm is a survival-and-reproduction technique which constructs new optimized solutions; it is an optimization-oriented technique.

Application of genetic algorithm on optimization.

1) Circuit layout. 2) Job-shop scheduling.
