
Unit II

INTRODUCTION TO META-HEURISTIC & EVOLUTIONARY ALGORITHMS


Meta-heuristic: a class of algorithms that can find solutions to problems that are difficult to solve using traditional methods.

Searching the Decision Space for Optimal Solutions:

• The goal of solving an optimization problem is finding a solution in the decision space whose value of the objective function is the best among all possible solutions.

The methods that apply trial-and-error search include (1) the sampling grid, (2) random sampling, and (3) targeted sampling.

• The goal of a sampling grid is to evaluate all possible solutions and choose the best one.
• If the problem is discrete, the sampling grid evaluates all possible solutions against the constraints.

• The solution that satisfies all the constraints and has the best objective function
value among all feasible solutions is chosen as the optimum.
Therefore, the sampling grid method is practical for relatively small problems only.
• Suppose there are S possible solutions, among which r = 1 is the optimal one, and K possible solutions are chosen randomly among the S possible ones to be evaluated.
• First, let us consider that the random selection is done without replacement, and let Z denote the number of optimal solutions found in the randomly chosen sample of K possible solutions.
• Z can only take the value 0 or 1 in this instance.
• The probability that the optimal one is found follows from the hypergeometric distribution: P(Z = 1) = K/S. If there are S = 10^6 possible solutions and K = 10^5 possible solutions are randomly chosen, P(Z = 1) = 10^5/10^6 = 0.1, or 10%.
• If instead the sampling is done with replacement, the probability of finding the optimum at least once is P(Z ≥ 1) = 1 - ((S - 1)/S)^K = 1 - (999999/10^6)^(10^5) ≈ 0.095, or 9.5%.
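
A quick numerical check of these two probabilities, sketched in Python using only the values of S and K quoted above:

    # Numerical check of the sampling probabilities discussed above.
    S = 10**6   # number of possible solutions
    K = 10**5   # number of randomly sampled solutions

    # Without replacement: P(Z = 1) follows from the hypergeometric distribution.
    p_without = K / S
    print(f"P(Z = 1), without replacement: {p_without:.3f}")   # 0.100

    # With replacement: one draw misses the optimum with probability (S - 1)/S,
    # so the chance of hitting it at least once in K draws is the complement.
    p_with = 1 - ((S - 1) / S) ** K
    print(f"P(Z >= 1), with replacement:   {p_with:.3f}")      # ~0.095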

• The sampling grid and random sampling are not efficient or practical methods for solving real-world engineering problems.
• The shortcoming of these sampling methods is that they require the entire decision space to be searched.

Targeted Sampling

• It searches the decision space, taking into account the knowledge gained from previously tested possible solutions.
• It selects the next sample of solutions based on the results from previously tested solutions.
• Targeted sampling is the basis of all meta-heuristic and evolutionary algorithms that rely on a systematic search to find an optimum.
• Meta-heuristic and evolutionary algorithms are typically applied to calculate near-optimal solutions of problems that cannot be solved easily, or at all, using other techniques.
Definition of Terms of Meta‐Heuristic and Evolutionary Algorithms

• Problem-independent techniques that can be applied to a wide range of problems.
• They start from an initial state and initial data.
• Their goal is finding appropriate values for the decision variables of an optimization problem so that the objective function is optimized.


Initial State
• Each meta‐heuristic and evolutionary algorithm starts from an initial state
of variables.
• This initial state can be predefined, randomly generated, or deterministically
calculated from formulas.
Iterations
• An iteration ends when a new possible solution is generated.
Final State
Termination criteria include:
(1) the number of iterations,
(2) the improvement in the value of the solution between consecutive iterations falling below a threshold, and
(3) the run time of the optimization algorithm.
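
A minimal sketch of how these three criteria might be combined; the parameter names and default limits are illustrative assumptions, not taken from the text:

    def should_terminate(iteration, improvement, elapsed_seconds,
                         max_iterations=1000, min_improvement=1e-6, max_seconds=60.0):
        # Stop as soon as any one of the three termination criteria is met.
        return (iteration >= max_iterations          # (1) iteration budget reached
                or improvement < min_improvement     # (2) improvement below threshold
                or elapsed_seconds >= max_seconds)   # (3) run-time limit reached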
Initial Data (Information)
(1) data about the optimization problem, which are required for simulation
(2) Parameters of the algorithm, which are required for its execution and may have to be
calibrated.
Decision Variables
• Decision variables are those whose values are calculated by execution of the algorithm.
• Their values are reported as the solution of an optimization problem upon reaching the stopping criterion.
State Variables
• The state variables are related to the decision variables.
• In fact, the values of the state variables change as the decision variables change.
Objective Function
• The objective function determines the optimality of solutions.
• An objective function value is assigned to each solution of an optimization
problem.
Simulation Model
• A simulation model is a single function or a set of mathematical operations that
evaluate the values of the state variables in response to the values of the decision
variables.
• The simulation model is a mathematical representation of a real problem or system that
forms part of an optimization problem.
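
As a purely hypothetical illustration, the sketch below treats per-period inflows to a storage tank as the decision variables and the resulting storage levels as the state variables that the simulation model evaluates:

    # Hypothetical simulation model (illustrative scenario, not from the text):
    # decision variables = inflow per period; state variables = storage levels.
    def simulate(decisions, initial_storage=100.0, demand=10.0):
        storage = initial_storage
        levels = []                       # state variables
        for inflow in decisions:          # decision variables
            storage = storage + inflow - demand
            levels.append(storage)
        return levels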

Constraints
• Constraints delimit the feasible space of solutions of an optimization problem.

Fitness Function
• The value of the objective function is not always the chosen measure of the desirability of a solution.
• In such cases a fitness function, such as the penalized objective function described later, measures desirability instead.
Principles of Meta‐Heuristic and Evolutionary Algorithms

Figure: Relation between the simulation model and the optimization algorithm in an optimization problem.
Classification of Meta‐Heuristic and Evolutionary Algorithms

Nature‐Inspired and Non‐Nature‐Inspired Algorithms

• Some algorithms are inspired by natural processes; examples include the genetic algorithm (GA), ant colony optimization (ACO), and honey-bee mating optimization (HBMO).
• There are other types of algorithms, such as Tabu Search (TS), whose origins are unrelated to natural processes.
• Although TS is classified as a non-nature-inspired algorithm, it takes advantage of artificial-intelligence concepts such as memory.

Population‐Based and Single‐Point Search Algorithms


• Single-point algorithms iteratively calculate one possible solution to an optimization problem.
• These algorithms generate a single solution and attempt to improve it in each iteration.
• They are also called trajectory methods and encompass local search-based meta-heuristics.
• Population-based algorithms perform search processes that describe the evolution of a set of solutions in the search space; the GA is a good example.
Memory‐Based and Memory‐Less Algorithms
• Memory-based algorithms resort to the search history to guide the future search for an optimal solution.
• Memory-less algorithms apply a Markov process to guide the search for a solution, as the only information they rely upon to determine the next action is the current state of the search process.
Meta-Heuristic and Evolutionary Algorithms in Discrete or Continuous Domains

In meta-heuristic and evolutionary algorithms, each solution of an optimization problem is defined as an array of decision variables:

X = (x1, x2, ..., xN)

where X = a solution of the optimization problem, xi = the ith decision variable of the solution array X, and N = the number of decision variables.
Decision variables may take binary, discrete, or continuous values, and are typically subject to constraints such as 2x1 + x2 + 5x3 ≤ 20.
Discrete values are used for problems with a discrete decision space, in which the decision variables are chosen from predefined sets of values. For example, given V1 = {1.1, 4.5, 9.0, 10.25, 50.1} and V2 = {1, 7, 80, 100, 250}, x1 and x2 are chosen from V1 and V2, respectively.
Generating Random Values of the Decision Variables

• Some algorithms generate initial solutions deterministically.
• During the search for an optimum, they generate random values of the decision variables.
• Binary decision variables are randomly assigned a value of 0 or 1.
• Continuous decision variables are assigned values randomly between their lower and upper boundaries.
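
A minimal sketch of how such random values might be generated for binary, discrete, and continuous decision variables; the function names are illustrative, and the discrete example reuses the sets V1 and V2 from the previous section:

    import random

    def random_binary(n):
        # Each binary variable is randomly assigned 0 or 1.
        return [random.randint(0, 1) for _ in range(n)]

    def random_discrete(value_sets):
        # Each discrete variable is drawn from its own predefined set of values.
        return [random.choice(values) for values in value_sets]

    def random_continuous(lower, upper):
        # Each continuous variable is drawn uniformly between its boundaries.
        return [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]

    # Example with the discrete sets V1 and V2:
    x = random_discrete([[1.1, 4.5, 9.0, 10.25, 50.1], [1, 7, 80, 100, 250]])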
Dealing with Constraints
Removal Method
• The removal method eliminates each possible solution that does not satisfy the
constraints
Disadvantages
• This method does not distinguish between solutions with small and large constraint violations.
• Even infeasible solutions may yield clues about the optimal solution.
Refinement Method
• This method does not delete any of the infeasible solutions from the search
process.
• The refinement method refines infeasible solutions to render them feasible
solutions.
Penalty Functions
• The application of penalty functions to avoid infeasible solutions overcomes
the shortcomings of the removal and refinement methods.
• This method adds (or subtracts) a penalty function to the objective function of a
minimization (or maximization) problem.
For a minimization problem, the penalized objective function takes the form

F(X) = f(X) + [Φ(G(X)) + θ(H(X)) + Ψ(Z(X))]

where X = solution of the optimization problem; f(X) = value of the objective function of solution X; G(X) = a constraint whose value must exceed δ1; H(X) = a constraint whose value must be less than δ2; Z(X) = a constraint whose value must equal δ3; and Φ, θ, Ψ = penalty functions associated with G(X), H(X), and Z(X), respectively. In a maximization problem the penalty is subtracted instead: F(X) = f(X) - Penalty.


Fitness Function
• The penalized objective function is called the fitness function:

F(X) = f(X) ± Penalty

• where X = solution of the optimization problem, f(X) = objective function value of solution X, and F(X) = fitness function (penalized objective function) of solution X.
• The penalty is added in a minimization problem and subtracted in a maximization problem.
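
A minimal sketch of a penalized fitness function for a minimization problem, matching the three constraint forms above; the quadratic penalty terms and the weight w are assumptions for illustration, not prescribed by the text:

    def fitness(x, f, G, H, Z, d1, d2, d3, w=1000.0):
        # Quadratic penalty for each violated constraint (d1, d2, d3 = deltas).
        penalty = 0.0
        penalty += w * max(0.0, d1 - G(x)) ** 2   # G(x) must exceed d1
        penalty += w * max(0.0, H(x) - d2) ** 2   # H(x) must stay below d2
        penalty += w * (Z(x) - d3) ** 2           # Z(x) must equal d3
        return f(x) + penalty                     # added, since f is minimized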


Selection of Solutions in Each Iteration

• Selection refers to choosing solutions from a set of solutions during the algorithmic calculations.
• The selection operators bypass many of the current solutions.
• Selection can be random or deterministic.
• Solutions of relatively high merit are preferentially selected.
• Selection is based on the fitness values of the solutions.
• Through the iterations the algorithm evolves, and new solutions emerge from the selected ones.
• Examples include deterministic selection and selection with a uniform distribution.
• Common selection methods are Boltzmann selection, the roulette wheel, and tournament selection; the roulette wheel is sketched below.
• A selection method with high selective pressure most likely selects the best solutions and eliminates the worst ones at every step of the search.
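
A minimal sketch of roulette-wheel selection, assuming positive fitness values with higher values indicating better solutions:

    import random

    def roulette_wheel(population, fitnesses, k):
        # Each solution is picked with probability proportional to its fitness.
        total = sum(fitnesses)
        selected = []
        for _ in range(k):
            r = random.uniform(0.0, total)   # spin the wheel
            cumulative = 0.0
            for solution, fit in zip(population, fitnesses):
                cumulative += fit            # walk the wheel's sectors
                if cumulative >= r:
                    selected.append(solution)
                    break
        return selected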
General Algorithm
• All meta-heuristic and evolutionary algorithms begin by generating initial (possible or tentative) solution(s), which are then improved iteratively from the old solutions.
• The general iterative procedure is as follows (a code skeleton appears after the steps):
Step 0: Read input data.
Step 1: Generate initial possible or tentative solutions randomly or deterministically.
Step 2: Evaluate the fitness values of all current solutions.
Step 3: Rename the current solutions as old solutions.
Step 4: Rank all the old solutions and identify the best among them, those with
relatively high fitness values.
Step 5: Select a subset of the old solutions with relatively high fitness values.
Step 6: Generate new solutions.
Step 7: Evaluate the fitness value of the newly generated solutions.
Step 8: If termination criteria are not satisfied, go to step 3; otherwise go to step 9.
Step 9: Report all the most recently calculated solutions or the best solution
achieved at the time when the algorithm terminates execution.
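
A skeleton of the steps above in Python; the operator functions are placeholders to be supplied by a specific algorithm (GA, ACO, TS, and so on), and ranking assumes that higher fitness is better:

    def general_algorithm(read_input, generate_initial, evaluate, select,
                          generate_new, terminated):
        data = read_input()                               # Step 0
        solutions = generate_initial(data)                # Step 1
        fitness = [evaluate(s) for s in solutions]        # Step 2
        while True:
            old = list(zip(solutions, fitness))           # Step 3: old solutions
            old.sort(key=lambda sf: sf[1], reverse=True)  # Step 4: rank, best first
            parents = select(old)                         # Step 5: high-fitness subset
            solutions = generate_new(parents)             # Step 6: new solutions
            fitness = [evaluate(s) for s in solutions]    # Step 7: evaluate them
            if terminated():                              # Step 8
                return max(zip(solutions, fitness),
                           key=lambda sf: sf[1])          # Step 9: report the best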
Performance Evaluation of Meta‐Heuristic and Evolutionary Algorithms

• An evolutionary or meta-heuristic algorithm starts with initial solutions and attempts to improve them.
• Its progress is a gradual convergence toward a near-optimum of a hypothetical minimization problem.

Number of Functional Evaluations (NFE)
• The NFE is the number of times the objective (fitness) function is evaluated, and it is a common measure of an algorithm's computational effort.
Figure: Convergence of different runs of an optimization algorithm toward near-optimal solutions of a minimization problem.

• Run No. 1 converges fast, but to a solution that is clearly non-optimal.
• Run No. 2 reaches an acceptable solution that is close enough to the global optimum, but
its convergence speed is relatively low.
• Run No. 3 achieves a near‐optimal solution with the fastest convergence rate.
The following factors determine the quality of an algorithm's performance:

(1) its capacity to reach near-optimal solutions consistently, that is, across several runs solving a given problem; and

(2) the speed with which it reaches near-optimal solutions.

Algorithmic reliability is defined based on the variance of the fitness-function values of the final solutions reported by different runs of the algorithm.
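
A minimal sketch of measuring reliability this way; run_algorithm stands in for any meta-heuristic that returns the final fitness value of a single run:

    from statistics import mean, variance

    def reliability(run_algorithm, n_runs=30):
        # Final fitness values reported by several independent runs.
        finals = [run_algorithm() for _ in range(n_runs)]
        # A lower variance across runs implies a more reliable algorithm.
        return mean(finals), variance(finals)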
