03 Operations Research

The document discusses using operations research techniques like simulation and optimization to help solve environmental management problems. It describes using a simulation model over time to represent the system and optimize a performance criterion. Optimization algorithms like heuristic, metaheuristic and exact methods are explored to search for the best control policies. Ant colony optimization and genetic algorithms are presented as examples of metaheuristics that could be applied.

The role of Operations Research

Solving Decisional Models for Environmental Management

The search process


[Flow diagram, adapted from Seppelt, 2003: control variables u(t) feed the simulation model x(t+1) = f(x(t), u(t)), which produces simulation results (maps, time series); a performance criterion J = J(x, u) is evaluated on these results; a search algorithm checks whether J is maximal and, if not, modifies the control variables and repeats the simulation; once J is maximal, the process stops.]

What is search

To search means to explore possible values for the control variable u, to obtain the value of the state x caused by u, and to evaluate the performance J

The policy

$$P = \{\, M_t(\cdot);\ t = 0, 1, \ldots \,\}$$

The policy defines the value of u to be chosen, given the value of x, at each time step t and in each spatial location z

Finding the policy

The problem is hard:

effects are uncertain
decisions are dynamic and recursive

The methodology

Scenario Analysis
Optimisation

Scenario analysis

Scenario definition requires knowledge of the modelled system, which might not be available due to complexity (Jakeman and Letcher, 2003)

In some (most) cases scenarios are too limited
Too much weight on past experience

Optimisation

Given the goal, the computer automatically generates scenarios, by combining parameters, controls, and exogenous inputs

A numerical optimisation algorithm is in charge of evaluating the performance and directing the exploration in the most promising direction

Simulation explained

The simulation model

$$x_{t+\Delta t}(z) = M_{\Delta t}\big(x_t(z),\ u_t(z),\ \varepsilon(z),\ z\big)$$

It is a constraint for the optimisation problem

all model variables may vary in space
boundary and initial conditions are given

The performance criterion

state x and control u depend on time and location:

$$J = J\big(x_t(z),\ u_t(z)\big)$$

the performance criterion maps a trajectory from state and policy space to a real number

The problem

Model:

$$x_{t+\Delta t}(z) = M_{\Delta t}\big(x_t(z),\ u_t(z),\ \varepsilon(z),\ z\big)$$

Performance criterion:

$$J\big(x_t(z),\ u_t(z)\big)$$

Problem: find u* so that

$$x^*(t) = M_{\Delta t}(x^*, u^*, \varepsilon)$$
$$J(x^*, u^*) \geq J(x, u)$$

for all t in [0, T] and u in U
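As a concrete illustration of this simulate-and-evaluate loop, here is a minimal Python sketch; the toy model map f, the quadratic criterion, and the naive random search are illustrative assumptions, not part of the original slides.

import random

# Toy stand-ins for the model map M and the criterion J (illustrative assumptions)
def f(x, u):
    return 0.9 * x + u                        # simple linear state update

def performance(xs, us):
    return -sum((x - 1.0) ** 2 for x in xs)   # reward staying near x = 1

def simulate(us, x0):
    xs = [x0]
    for u in us:
        xs.append(f(xs[-1], u))               # x(t+1) = f(x(t), u(t))
    return xs

def random_search(x0=0.0, horizon=10, n_iter=2000):
    best_u, best_j = None, float("-inf")
    for _ in range(n_iter):
        us = [random.uniform(0.0, 0.5) for _ in range(horizon)]
        j = performance(simulate(us, x0), us)
        if j > best_j:                        # keep the best J found so far
            best_u, best_j = us, j
    return best_u, best_j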

Solving the problem

Exact methods
Approximation algorithms
Heuristic and metaheuristic algorithms

Exact methods

Exact algorithms by definition always find a solution, and an optimal one.

For many real-world problems, they may take too long to find an optimal solution, even on the fastest computers available.

Exact methods

The algorithm depends on the problem formulation:

Linear programming: simplex (see the sketch below)
Integer programming: branch & bound
Dynamic programming
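A minimal sketch of solving a linear program exactly with scipy.optimize.linprog; the tiny two-variable problem is an illustrative assumption.

from scipy.optimize import linprog

# maximise 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0
# (linprog minimises, so the objective is negated)
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal (x, y) and optimal objective value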

Approximation algorithms

Find a sub-optimal solution

It is possible to say how far it is from the optimum (e.g. within 95%), and this is formally proven

Often very specific to the problem at hand

E.g.: neuro-dynamic programming

Heuristics

Apply rules-of-thumb to the solution of the problem

No guarantee to find an optimum
No guarantee on the quality of the solution
Yet, they are fast and generic

Metaheuristics

A set of concepts that can be used to define heuristic methods applicable to a wide set of different problems:

general algorithmic frameworks which can be applied to different optimization problems with relatively few modifications to adapt them to a specific problem

Metaheuristics are...

Simulated annealing
Evolutionary computation
Tabu search
Ant Colony Systems

Ant Colony Systems

Ants

Leaf cutters, fungi growers
Breeding other insects
Weaver ants

Insects, social insects, and ants

10^18 living insects (estimate)
~2% of all insects are social

Examples of social insects:
All ants
All termites
Some bees
Some wasps

50% of all social insects are ants
Average body weight of an ant: 1-5 mg
Total mass of ants ~ total mass of humans
Ants have colonized the earth for 100 million years; Homo sapiens sapiens for ~50,000 years

Ant colony society

Size of a colony: from a few (30) to millions of worker ants

Labour division:
Queen: reproduction
Soldiers: defense
Specialised workers: food gathering, offspring tending, nest grooming, nest building and maintenance

How do ants coordinate their activities?


Basic principle: stigmergy

Stigmergy is a kind of indirect communication, used by social insects to coordinate their activities

Stigmergy: stimulation of working ants by the results they have reached (Grassé, 1959)

What are ant-based algorithms?

Ant-based algorithms are multi-agent systems using artificial stigmergy to coordinate artificial ants to solve computational problems

Artificial Stigmergy

Indirect communication via changes to the environment, accessible only locally to the communicating agents (Dorigo and Di Caro, 1999)

Features of artificial stigmergy:
Indirect communication
Local accessibility

The bridge experiment

[Figure: double-bridge experiment. Goss et al., 1989; Deneubourg et al., 1990]

Pheromone trail

Ants and termites follow pheromone trails

Asymmetric bridge experiment

[Figure: asymmetric bridge experiment. Goss et al., 1989]

Ant System for TSP

Travelling Salesman Problem
Dorigo, Maniezzo, Colorni, 1991
Gambardella & Dorigo, 1995

Probabilistic rule to choose the path

An ant in city i uses pheromone, heuristics, and its memory of visited cities to choose the next city to visit:

$$p^k_{ij}(t) = f\big(\tau_{ij}(t),\ \eta_{ij}(t)\big)$$

[Diagram: an ant at city i deposits a pheromone trail and chooses among candidate links to cities j, k, r, each labelled with its pheromone value τ and heuristic value η; the ant's memory stores the cities already visited.]

Pheromone trail deposition

$$\tau_{ij}(t+1) \leftarrow (1 - \rho)\,\tau_{ij}(t) + \Delta\tau^k_{ij}(t)$$

where (i,j) are the links visited by ant k, and

$$\Delta\tau^k_{ij}(t) = \mathrm{quality}_k$$

where quality_k is inversely proportional to the length of the solution found by ant k

The algorithm

Ants depart from the depot, choosing the next visit from the list of customers
Ants follow a probabilistic route as a function of: (i) artificial pheromone values and (ii) local heuristic values
Ants memorise the current tour and the current travel time, taking into account the problem constraints (e.g. capacity, time windows)
Once they have completed a tour, they update the global pheromone trail, in order to distribute the information gathered on the new solution
Ant System is distributed and not synchronised

A runnable sketch of this scheme follows.
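A compact Python sketch of the Ant System scheme above, applied to the TSP; the parameter values (alpha, beta, rho, q) and helper names are illustrative assumptions, not the original authors' code.

import random

def tour_length(tour, dist):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def ant_system(dist, n_ants=10, n_iter=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]                 # pheromone on links
    eta = [[0.0 if i == j else 1.0 / dist[i][j] for j in range(n)]
           for i in range(n)]                           # heuristic: 1 / distance
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:                        # memory excludes visited cities
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha * eta[i][j] ** beta for j in cand]
                tour.append(random.choices(cand, weights=w)[0])
            tours.append((tour, tour_length(tour, dist)))
        for i in range(n):                              # evaporation: (1 - rho) * tau
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for tour, length in tours:                      # deposition: quality = q / length
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += q / length
                tau[j][i] += q / length
            if length < best_len:
                best_tour, best_len = tour, length
    return best_tour, best_len

# usage: ant_system([[0, 2, 9], [2, 0, 6], [9, 6, 0]]) on a tiny 3-city instance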

Why does it work?

Three main components:

TIME: a short route gets pheromone more quickly
QUALITY: a short route gets more pheromone
COMBINATORICS: a short path gets pheromone more frequently, since it (usually) has a smaller number of decision points

[Diagram: alternative routes of different length d from origin i to destination j.]

Some results on the Travelling Salesman Problem

[Benchmark results figure not reproduced.]

A constructive heuristic with local search

The best strategy to approximate the solution of a combinatorial optimisation problem is to couple a constructive heuristic with local search

It is hard to find the best coupling: Ant System (and similar algorithms) appear to have found it

The ACO Metaheuristic

Ant System has been extended to apply to any problem that can be formulated as a shortest-path problem

The extension has been named the Ant Colony Optimisation (ACO) metaheuristic

The main application: NP-hard combinatorial optimisation problems

The ACO-metaheuristic procedure

procedure ACO-metaheuristic()
  while (not termination-criterion)
    schedule activities
      generate-and-manage-ants()
      evaporate-pheromone()
      execute-daemon-actions()   { optional, e.g. local search }
    end schedule activities
  end while
end procedure

ACO: Applications

Sequential ordering in a production line
Vehicle routing for trucks and goods distribution
Job-shop scheduling
Project scheduling
Water distribution problems

Genetic Algorithms

Evolutionary computing

EC (Evolutionary Computing) =
GA (Genetic Algorithms; Holland, 1975)
ES (Evolution Strategies; Rechenberg, 1973)
EP (Evolutionary Programming; Fogel, Owens, Walsh, 1966)
GP (Genetic Programming; Koza, 1992)

Biological basis

Evolution operates on chromosomes, which encode the structure of a living being

Natural selection favours the reproduction of the most efficient chromosomes

Mutations (new chromosomes) and recombination (mixing of chromosomes) occur during reproduction

Use

GA are very well suited when the structure of the search space is not well known

In other words, if the model of our system is rough, we can use GA to find the best policy

How it works
A genetic algorithm maintains a population
of candidate solutions for the problem at hand, and makes it evolve by iteratively applying a set of stochastic operators

The skeleton of the algorithm

Generate initial population P(0)
t := 0
while not converging do
  evaluate P(t)
  P'(t) := select best individuals from P(t)
  P''(t) := apply reproduction on P'(t)
  P(t+1) := replace individuals of P(t) with P''(t)
  t := t + 1
end while
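A minimal runnable instance of this skeleton in Python; the bit-string encoding, OneMax fitness, tournament selection, one-point crossover, and parameter values are illustrative assumptions.

import random

# Toy GA maximising the number of 1-bits (OneMax); all operator and
# parameter choices here are illustrative assumptions.
def fitness(chrom):
    return sum(chrom)

def tournament(pop, k=2):
    return max(random.sample(pop, k), key=fitness)   # local tournament selection

def crossover(a, b):
    cut = random.randrange(1, len(a))                # one-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, p=0.01):
    return [bit ^ 1 if random.random() < p else bit for bit in chrom]

def ga(n_bits=50, pop_size=40, n_gen=100):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]                 # P(0): random initialization
    for _ in range(n_gen):                           # while not converging
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]             # select, reproduce, replace
    return max(pop, key=fitness)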

Components of a GA

Encoding principles (gene, chromosome)
Initialization procedure (creation)
Selection of parents (reproduction)
Genetic operators (mutation, recombination)
Evaluation function (environment)
Termination condition

Representation (encoding)

Possible individual encodings:

Bit strings: (0101 ... 1100)
Real numbers: (43.2 -33.1 ... 0.0 89.2)
Permutations of elements: (E11 E3 E7 ... E1 E15)
Lists of rules: (R1 R2 R3 ... R22 R23)
Program elements (genetic programming)
... any data structure ...

Choosing the encoding

Use a data structure as close as possible to the natural representation

Write appropriate genetic operators as needed

If possible, ensure that all genotypes correspond to feasible solutions

If possible, ensure that genetic operators preserve feasibility (see the sketch below)
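As an illustration of a feasibility-preserving operator, a swap mutation on a permutation encoding always yields another valid permutation; the function name is an illustrative assumption.

import random

# Swap mutation on a permutation-encoded individual: exchanging two
# positions always yields another valid permutation, so feasibility
# (each element appears exactly once) is preserved by construction.
def swap_mutation(perm):
    i, j = random.sample(range(len(perm)), 2)
    child = list(perm)
    child[i], child[j] = child[j], child[i]
    return child

# usage: swap_mutation([0, 3, 1, 4, 2]) -> e.g. [0, 4, 1, 3, 2]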

Initialization

Start with:

a random population
a previously saved population
a set of solutions provided by a human expert
a set of solutions provided by another heuristic

Selection

Purpose: to focus the search in promising regions of the space

Inspiration: Darwin's "survival of the fittest"

Trade-off between exploration and exploitation of the search space

Linear Ranking Selection

Based on sorting the individuals by decreasing fitness. The probability of extracting the i-th individual in the ranking is defined as a linearly decreasing function of the rank i, where the parameter b can be interpreted as the expected sampling rate of the best individual
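A standard form of this rule (e.g. Baker's linear ranking), consistent with the description of b above, is

$$p_i = \frac{1}{n}\left(b - 2\,(b - 1)\,\frac{i - 1}{n - 1}\right), \qquad 1 \le b \le 2,$$

where n is the population size and i = 1 is the best-ranked individual.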

Local Tournament Selection

Extracts k individuals from the population with uniform probability (without reinsertion) and makes them play a tournament, where the probability for an individual to win is generally proportional to its fitness

Selection pressure is directly proportional to the number k of participants

Recombination (Crossover)

Enables the evolutionary process to move toward promising regions of the search space

Combines good parents' sub-solutions to construct better offspring

Mutation

Purpose: to simulate the effect of errors that happen with low probability during duplication

Results:
movement in the search space
restoration of lost information to the population

Evaluation (fitness function)

A solution is only as good as the evaluation function; choosing a good one is often the hardest part

Similarly-encoded solutions should have similar fitness

Termination condition

Examples:

a pre-determined number of generations or amount of time has elapsed
a satisfactory solution has been achieved
no improvement in solution quality has taken place for a pre-determined number of generations

Acknowledgements

Part of the material is extracted from "Introduction to Genetic Algorithms" by Assaf Zaritsky, Ben Gurion University

Simulated Annealing

The origin

It is the oldest metaheuristic

Originated in statistical mechanics (Metropolis Monte Carlo algorithm)

First presented to solve combinatorial problems by Kirkpatrick et al., 1983

The idea

It also searches in directions which result in solutions of worse quality than the current solution

This allows it to escape local minima

The probability of exploring these unpromising directions is decreased during the search

The algorithm

s := GenerateInitialSolution()
T := InitialTemp
while not converging do
  s' := PickAtRandomFromNeighbor(s)
  if J(s') < J(s) then
    s := s'
  else
    accept s' as the new solution with probability p(T, s, s')
  Update(T)
end while

Boltzmann probability

If s' is worse than s, then s' might still be chosen as the new solution, depending on the temperature T

The probability depends on d = f(s') - f(s):
the higher d, the lower the probability
the higher T, the higher the probability

Boltzmann probability

It determines the equilibrium distribution of a system over its energy states at a given temperature:

$$P(s \to s') = e^{-\frac{f(s') - f(s)}{T}}$$

The temperature T decreases during the search process (by analogy with the annealing process of metal and glass)
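A minimal runnable sketch combining the loop above with this Boltzmann acceptance rule; the toy objective, the neighbourhood move, and the geometric cooling schedule are illustrative assumptions.

import math, random

# Toy simulated annealing minimising f(s) = s^2 over the reals; the
# objective, neighbour move and cooling schedule are illustrative.
def f(s):
    return s * s

def simulated_annealing(s=10.0, temp=100.0, cooling=0.99, n_iter=10000):
    for _ in range(n_iter):
        s_new = s + random.uniform(-1.0, 1.0)        # pick a random neighbour
        d = f(s_new) - f(s)
        # always accept improvements; accept worse moves with
        # Boltzmann probability exp(-d / T)
        if d < 0 or random.random() < math.exp(-d / temp):
            s = s_new
        temp *= cooling                               # slow (geometric) cooling
    return s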

Pros and Cons

Pros:
proven to converge to the optimum
easy to implement

Cons:
converges in infinite time
the cooling process must be slow

End of Part II
