Reasons For Adopting a Stochastic Optimization Method: (Differential Evolution)
AN OVERVIEW OF DE
In computer science, Differential Evolution (DE) is a method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics, as they make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as DE do not guarantee that an optimal solution is ever found. DE is used for multidimensional real-valued functions but does not use the gradient of the problem being optimized, which means DE does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. DE can therefore also be applied to optimization problems that are not even continuous, are noisy, or change over time. DE optimizes a problem by maintaining a population of candidate solutions, creating new candidates by combining existing ones according to simple formulae, and keeping whichever candidate has the best score (fitness) on the optimization problem at hand. In this way the optimization problem is treated as a black box that merely provides a measure of quality for a given candidate solution, so the gradient is never needed. DE is originally due to Storn and Price. Books have been published on the theoretical and practical aspects of using DE in parallel computing, multiobjective optimization, and constrained optimization; these books also contain surveys of application areas.
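Because DE treats the problem as a black box, the only interface it needs is a function that returns a quality score for a candidate vector. The following sketch (the function and values are illustrative assumptions, not from the text) shows an objective that is neither smooth nor differentiable, yet perfectly usable by DE:

```python
# DE never inspects gradients: any callable that scores a candidate
# vector will do, even a discontinuous one like this illustrative example.
def objective(x):
    return abs(x[0] - 3) + (x[1] % 2)

print(objective([3.0, 4.0]))   # 0.0
```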
Evolutionary Algorithms
DE is an Evolutionary Algorithm. This class also includes Genetic Algorithms, Evolution Strategies and Evolutionary Programming.
DE proceeds in four stages:
Initialisation
Mutation
Recombination
Selection
Notation
Suppose we want to optimise a function with D real parameters. We must select the size of the population, N (it must be at least 4). The parameter vectors have the form
xi,G = [x1,i,G, x2,i,G, . . . , xD,i,G],   i = 1, 2, . . . , N,
where G is the generation number.
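In code, generation G can be held as an N-by-D array, one row per parameter vector. A minimal sketch (the sizes and random values here are my own illustrative assumptions):

```python
import numpy as np

N, D = 10, 5                 # N candidate vectors, D parameters each (assumed sizes)
rng = np.random.default_rng(0)

# population[i] plays the role of x_{i,G}: one row per candidate solution
population = rng.random((N, D))
print(population.shape)      # (10, 5)
```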
Initialisation
Specify upper and lower bounds for each parameter, xLj ≤ xj,i,1 ≤ xUj, and randomly select the initial parameter values uniformly on the intervals [xLj, xUj].
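This uniform initialisation can be sketched as follows (the bounds, population size and seed are illustrative assumptions):

```python
import numpy as np

N, D = 10, 2                      # population size and parameter count (assumed)
lower = np.array([-5.0, -5.0])    # x_j^L for each parameter j (assumed)
upper = np.array([5.0, 5.0])      # x_j^U for each parameter j (assumed)
rng = np.random.default_rng(1)

# draw each initial value x_{j,i,1} uniformly on [x_j^L, x_j^U]
population = lower + rng.random((N, D)) * (upper - lower)
print(population.min() >= -5.0 and population.max() <= 5.0)   # True
```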
Mutation
Each of the N parameter vectors undergoes mutation, recombination and selection. Mutation expands the search space. For a given parameter vector xi,G, randomly select three other vectors xr1,G, xr2,G and xr3,G such that the indices i, r1, r2 and r3 are distinct. Add the weighted difference of two of these vectors to the third to obtain the donor vector:
vi,G+1 = xr1,G + F(xr2,G − xr3,G),
where the mutation factor F is a constant chosen from [0, 2].
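A sketch of this mutation step for one target vector (the population values, F and seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
population = rng.random((10, 5))   # a toy generation G (assumed values)
F = 0.8                            # mutation factor, a constant from [0, 2]
i = 0                              # index of the target vector x_{i,G}

# choose r1, r2, r3 distinct from each other and from i
r1, r2, r3 = rng.choice([k for k in range(len(population)) if k != i],
                        size=3, replace=False)

# donor vector: v_{i,G+1} = x_{r1,G} + F * (x_{r2,G} - x_{r3,G})
donor = population[r1] + F * (population[r2] - population[r3])
print(donor.shape)   # (5,)
```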
Recombination
Recombination incorporates successful solutions from the previous generation. The trial vector ui,G+1 is developed from the elements of the target vector, xi,G, and the elements of the donor vector, vi,G+1. Elements of the donor vector enter the trial vector with probability CR; otherwise the element is copied from the target vector. By the usual convention, a randomly chosen index jrand guarantees that at least one element comes from the donor, so the trial vector never simply duplicates the target.
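A sketch of this binomial recombination (the jrand safeguard follows the common DE convention; all values and the seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
D = 5
CR = 0.8                     # crossover probability, a constant from [0, 1]
target = rng.random(D)       # stands in for x_{i,G}
donor = rng.random(D)        # stands in for v_{i,G+1}

# each element comes from the donor with probability CR;
# index j_rand guarantees at least one donor element survives
j_rand = rng.integers(D)
mask = rng.random(D) < CR
mask[j_rand] = True
trial = np.where(mask, donor, target)    # u_{i,G+1}
```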
Selection
The target vector xi,G is compared with the trial vector ui,G+1, and the one with the lower function value is admitted to the next generation:
xi,G+1 = ui,G+1 if f(ui,G+1) ≤ f(xi,G), otherwise xi,G+1 = xi,G.
Mutation, recombination and selection continue until some stopping criterion is reached.
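The four stages above can be combined into a minimal DE/rand/1/bin loop. Everything here (the sphere objective, population size, bounds, seed and iteration budget) is an illustrative sketch, not the report's actual programme:

```python
import numpy as np

def sphere(x):
    # toy black-box objective: f(x) = sum_j x_j^2, minimised at the origin
    return float(np.sum(x * x))

rng = np.random.default_rng(4)
N, D, F, CR, itermax = 20, 5, 0.8, 0.8, 200
lower, upper = -5.0, 5.0

# initialisation: N vectors drawn uniformly within the bounds
pop = lower + rng.random((N, D)) * (upper - lower)
fit = np.array([sphere(x) for x in pop])

for G in range(itermax):
    for i in range(N):
        # mutation: donor built from three distinct vectors
        r1, r2, r3 = rng.choice([k for k in range(N) if k != i], 3, replace=False)
        donor = pop[r1] + F * (pop[r2] - pop[r3])
        # recombination: binomial crossover with probability CR
        mask = rng.random(D) < CR
        mask[rng.integers(D)] = True
        trial = np.where(mask, donor, pop[i])
        # selection: keep whichever of target and trial scores better
        f_trial = sphere(trial)
        if f_trial <= fit[i]:
            pop[i], fit[i] = trial, f_trial

best = fit.min()
print(best)
```

Because selection never replaces a vector with a worse one, the best fitness is non-increasing from one generation to the next.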
Simulation programme
clear; clc;
nlns = 41;                  % number of transmission lines
global busdata linedata Pdt
tic;
basemva = 100;

% IEEE 30-BUS TEST SYSTEM (American Electric Power)
%          Bus Bus  Voltage Angle  ---Load----  -----Generator-----  Static Mvar
%          No  code Mag.    Degree MW     Mvar  MW   Mvar Qmin Qmax  Qc/-Ql
busdata = [ 1  1  1.06   0.0   0.0  0.0   0.0  0.0    0   0   0
            2  2  1.043  0.0  21.7 12.7  40.0  0.0  -40  50   0
            3  0  1.0    0.0   2.4  1.2   0.0  0.0    0   0   0
            4  0  1.06   0.0   7.6  1.6   0.0  0.0    0   0   0
            5  2  1.01   0.0  94.2 19.0   0.0  0.0  -40  40   0
            6  0  1.0    0.0   0.0  0.0   0.0  0.0    0   0   0
            7  0  1.0    0.0  22.8 10.9   0.0  0.0    0   0   0
            8  2  1.01   0.0  30.0 30.0   0.0  0.0  -10  60   0
            9  0  1.0    0.0   0.0  0.0   0.0  0.0    0   0   0
           10  0  1.0    0.0   5.8  2.0   0.0  0.0    0   0  19
           11  2  1.082  0.0   0.0  0.0   0.0  0.0   -6  24   0
           12  0  1.0    0.0  11.2  7.5   0    0      0   0   0
           13  2  1.071  0.0   0.0  0.0   0    0     -6  24   0
           14  0  1.0    0.0   6.2  1.6   0    0      0   0   0
           15  0  1.0    0.0   8.2  2.5   0    0      0   0   0
           16  0  1.0    0.0   3.5  1.8   0    0      0   0   0
           17  0  1.0    0.0   9.0  5.8   0    0      0   0   0
           18  0  1.0    0.0   3.2  0.9   0    0      0   0   0
           19  0  1.0    0.0   9.5  3.4   0    0      0   0   0
           20  0  1.0    0.0   2.2  0.7   0    0      0   0   0
           21  0  1.0    0.0  17.5 11.2   0    0      0   0   0
           22  0  1.0    0.0   0    0.0   0    0      0   0   0
           23  0  1.0    0.0   3.2  1.6   0    0      0   0   0
           24  0  1.0    0.0   8.7  6.7   0    0      0   0   4.3
           25  0  1.0    0.0   0    0.0   0    0      0   0   0
           26  0  1.0    0.0   3.5  2.3   0    0      0   0   0
           27  0  1.0    0.0   0    0.0   0    0      0   0   0
           28  0  1.0    0.0   0    0.0   0    0      0   0   0
           29  0  1.0    0.0   2.4  0.9   0    0      0   0   0
           30  0  1.0    0.0  10.6  1.9   0    0      0   0   0];

%            Bus Bus   R       X       1/2 B    Line code: = 1 for lines,
%            nl  nr    p.u.    p.u.    p.u.     > 1 or < 1 for tr. tap at bus nl
linedata = [ 1   2   0.0192  0.0575  0.02640  1
             1   3   0.0452  0.1852  0.02040  1
             2   4   0.0570  0.1737  0.01840  1
             3   4   0.0132  0.0379  0.00420  1
             2   5   0.0472  0.1983  0.02090  1
             2   6   0.0581  0.1763  0.01870  1
             4   6   0.0119  0.0414  0.00450  1
             5   7   0.0460  0.1160  0.01020  1
             6   7   0.0267  0.0820  0.00850  1
             6   8   0.0120  0.0420  0.00450  1
             6   9   0       0.2080  0        0.978
             6  10   0       0.5560  0        0.969
             9  11   0       0.2080  0        1
             9  10   0       0.1100  0        1
             4  12   0       0.2560  0        0.932
            12  13   0       0.1400  0        1
            12  14   0.1231  0.2559  0        1
            12  15   0.0662  0.1304  0        1
            12  16   0.0945  0.1987  0        1
            14  15   0.2210  0.1997  0        1
            16  17   0.0824  0.1923  0        1
            15  18   0.1073  0.2185  0        1
            18  19   0.0639  0.1292  0        1
            19  20   0.0340  0.0680  0        1
            10  20   0.0936  0.2090  0        1
            10  17   0.0324  0.0845  0        1
            10  21   0.0348  0.0749  0        1
            10  22   0.0727  0.1499  0        1
            21  22   0.0116  0.0236  0        1
            15  23   0.1000  0.2020  0        1
            22  24   0.1150  0.1790  0        1
            23  24   0.1320  0.2700  0        1
            24  25   0.1885  0.3292  0        1
            25  26   0.2544  0.3800  0        1
            25  27   0.1093  0.2087  0        1
            28  27   0       0.3960  0        0.968
            27  29   0.2198  0.4153  0        1
            27  30   0.3202  0.6027  0        1
            29  30   0.2399  0.4533  0        1
             8  28   0.0636  0.2000  0.0214   1
             6  28   0.0169  0.0599  0.065    1];

% Generator cost/limit data; columns 5 and 6 are used below as the
% lower and upper generation limits of units 2-6
gencost = [ 1  10  200  100  50  200
            2  10  150  120  20   80
            5  20  180   40  15   50
            8  10  100   60  10   35
           11  20  180   40  10   30
           13  10  150  100  12   40];
Pdt = 283.4;    % total system demand (MW)
n = length(gencost(:,1));

% Initialization and run of differential evolution optimizer.
% A simpler version with fewer explicit parameters is in run0.m
%
% Here for Rosenbrock's function
% Change relevant entries to adapt to your personal applications
%
% The file ofunc.m must also be changed
% to return the objective function
%
% VTR       "Value To Reach" (stop when ofunc < VTR)
VTR = 1.e-6;
% D         number of parameters of the objective function
D = n - 1;
% XVmin,XVmax  vectors of lower and upper bounds of initial population
%           the algorithm seems to work well only if [XVmin,XVmax]
%           covers the region where the global minimum is expected
%           *** note: these are no bound constraints!! ***
XVmin = gencost(2:6,5)';
XVmax = gencost(2:6,6)';
% NP        number of population members
% itermax   maximum number of iterations (generations)
itermax = 100;
% F         DE-stepsize F ex [0, 2]
F = 0.8;
% CR        crossover probability constant ex [0, 1]
CR = 0.8;
% strategy  1 --> DE/best/1/exp           6 --> DE/best/1/bin
%           2 --> DE/rand/1/exp           7 --> DE/rand/1/bin
%           3 --> DE/rand-to-best/1/exp   8 --> DE/rand-to-best/1/bin
%           4 --> DE/best/2/exp           9 --> DE/best/2/bin
%           5 --> DE/rand/2/exp           else  DE/rand/2/bin
strategy = 1;
% refresh   intermediate output will be produced after "refresh"
%           iterations. No intermediate output will be produced
%           if refresh is < 1
refresh = 10;
SIMULATION RESULTS
[Figure: simulation results comparing the Normal case with the line-outage cases BUS 1-2, BUS 1-3 and BUS 2-4 for the THERMAL and SOLAR scenarios; vertical axis spans 0.984 to 1.006.]
[Table: COST of generation in different buses, comparing the NORMAL, THERMAL (2.2037e+005) and SOLAR cases.]
[Table: MAX. LOAD FLOW of generation in different buses, comparing the NORMAL, THERMAL and SOLAR cases.]