Project On Economic Load Dispatch Using Genetic Algorithm and Artificial Neural Network Optimization Techniques
Prepared By:
Submitted to:
Girmaw T. (PhD)
June, 2023
Economic Load Dispatch Using Genetic Algorithm Based Optimization
1.1 Introduction
The principal objective of economic dispatch in a power system is to schedule the generating units so as to serve the load demand at minimum operating cost while meeting unit and system constraints. In an electrical power system, a continuous balance must be maintained between generation and the varying load demand, while the system frequency and voltage levels are kept constant and system security is maintained. Furthermore, it is desirable that the cost of such generation be minimal. Numerous classical techniques, such as Lagrange-based methods and the lambda iteration method, have been used, as have gradient methods, Newton's method, and linear and quadratic programming. Artificial-intelligence optimization techniques such as the genetic algorithm, artificial neural networks, particle swarm optimization, and simulated annealing are now used for optimization of power flow and hence for economic load dispatch.
Genetic Algorithms (GAs) are numerical optimization algorithms inspired by the genetic and evolutionary mechanisms observed in natural systems and populations of living beings. They are search algorithms based on the mechanics of natural selection and natural genetics, combining survival of the fittest among string structures with structured yet randomized information exchange to form a search algorithm with some of the innovative flair of human search. In every generation, a new set of artificial creatures (strings) is created using bits and pieces of the fittest of the old, and an occasional new part is tried for good measure; the approach is essentially derived from a simple model of population genetics. The three prime operators associated with the genetic algorithm are reproduction, crossover, and mutation.
Generally speaking, the GA for the economic dispatch problem (EDP) starts by coding the variables, randomly selecting several initial values, calculating the resultant objective function by solving the EDP based on the decision variables, selecting a subset of the initially selected variables based on the highest savings, cross-mating the coded locations, and mutating the resultant code to arrive at a better solution. In the genetic algorithm view, adaptation is intelligence.
Reproduction or Selection
Reproduction is the first operator applied to the population. Chromosomes are selected from the population as parents for crossover, in such a way that the best should survive and create offspring; these offspring form the basis for the next generation, so it is desirable that the mating pool consist of good individuals. A selection strategy in a GA is simply a process that favors the selection of better individuals of the population for the mating pool. The selection operator chooses two or more parents from the population for crossing, and its purpose is to emphasize fitter individuals in the hope that their offspring will have higher fitness. Selection determines which solutions are preserved and allowed to reproduce and which ones die out. The primary objective of the selection operator is thus to emphasize the good solutions and eliminate the bad solutions in a population while keeping the population size constant.
Crossover Operator
The crossover operator plays a vital and central role in GA operation; in fact, it may be considered one of the algorithm's defining characteristics. It provides a mechanism for sharing information between chromosomes: with some crossover probability, two parent chromosomes are combined to produce offspring, with the possibility that good chromosomes may generate better ones. There are three types of crossover operators:
(1) Single-point crossover;
(2) Multi-point crossover;
(3) Uniform crossover.
In single-point and multi-point crossover, segments of bits are exchanged between cross sites, whereas uniform crossover exchanges individual bits of a string rather than segments: at each string position, the bits are probabilistically exchanged with some fixed probability.
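As an illustration of single-point crossover, the following MATLAB sketch swaps the tails of two parent strings at a random cross site (the parent values are taken from the initial population used later; the variable names are illustrative):

% Single-point crossover of two binary parent strings (illustrative sketch)
parent1=[0 0 1 1 0 1 1 1 0];
parent2=[0 1 0 1 1 0 1 0 0];
n_bits=length(parent1);
cross_site=randi([2 n_bits]);                        % random cross site
child1=[parent1(1:cross_site-1) parent2(cross_site:n_bits)];
child2=[parent2(1:cross_site-1) parent1(cross_site:n_bits)];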
Mutation
The mutation operator changes a 1 to 0 and vice versa with a small probability. It injects new genetic material into the population, preventing premature convergence of the GA to suboptimal solutions.
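A minimal MATLAB sketch of bit-flip mutation, assuming an illustrative mutation probability Pm:

% Bit-flip mutation: each bit is flipped (0->1, 1->0) with probability Pm
Pm=0.01;                                   % mutation probability (assumed)
chrom=[0 1 0 0 1 0 1 1 0];
mask=rand(size(chrom))<Pm;                 % bits selected for mutation
chrom(mask)=1-chrom(mask);                 % flip the selected bits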
Fitness Function
The genetic algorithm is based on Darwin's principle that "the candidates which can survive will live; the others die out." This principle is used to find the fitness value of the process when solving maximization problems. Minimization problems are usually transformed into maximization problems using a suitable transformation. The fitness value $f(x)$ is derived from the objective function and is used in the successive genetic operations. For a maximization problem, the fitness function can be taken as the objective function $F(X)$ itself [1]:

$f(x) = F(X)$

Such a transformation leaves the location of the optimum point unchanged. The following fitness function is often used for minimization problems:

$f(x) = \dfrac{1}{1 + F(X)}$
Among the selection strategies, roulette wheel selection is the most common: parents are selected with a probability proportional to their fitness, so better chromosomes have a higher chance of being selected.
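A minimal MATLAB sketch of roulette-wheel selection, assuming a column vector of fitness values (the values shown are illustrative):

% Roulette-wheel selection: pick one parent index with probability
% proportional to fitness (illustrative sketch)
fitness=[1.395; 1.389; 1.174; 1.169];      % example fitness values
prob=fitness/sum(fitness);                 % selection probabilities
cum_prob=cumsum(prob);                     % cumulative wheel
r=rand();                                  % spin the wheel
parent_idx=find(cum_prob>=r,1,'first');    % selected parent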
The overall GA loop can be summarized as:
Initialize population
Calculate fitness
while not converged
o Select parents
o Apply crossover
o Apply mutation
o Calculate fitness
end
Population size: how many chromosomes are in one population. If the population size is too large, the GA becomes extremely sluggish; if it is too small, there are few possibilities for mating and only a part of the search space is explored.
Crossover frequency: if there is mating all the time (100%), all offspring are made by crossover; if it is 0%, the parents are simply copied. It is reasonable to copy some chromosomes unchanged into the new generation.
Mutation frequency: with 0% mutation there is no change in the copies (offspring), so mutation should occur with some small frequency. If mutation is too frequent (say 50%), it produces so much variability that convergence is prevented, whereas a small rate (around 1%) still provides useful change in the copies.
The probability of mutation is commonly defined as

$P_m = \dfrac{\text{number of mutated bits}}{\text{total number of bits in the population}}$
The GA dynamically steers the search process through the probabilities of crossover and mutation until it reaches an optimal solution. A GA can modify the encoded genes, evaluate multiple individuals in parallel, and produce multiple optimal solutions.
1.3 Economic Load Dispatch of a Power System Using Genetic Algorithm
The economic load dispatch problem is a general constrained minimization problem and can be written as

$\text{Minimize } f(X) = \sum_{i=1}^{N}\left(a_i P_i^2 + b_i P_i + c_i\right)$

$\text{Subject to } P_{i,\min} \le P_i \le P_{i,\max}$

$\sum_{i=1}^{N} P_i - P_D - P_L = 0$

where $P_i$ is the generated power of unit $i$, $P_D$ is the load demand, and $P_L$ is the transmission loss.
The GA procedure for the ED problem is:
I. Read the input data: cost coefficients $a_i$, $b_i$, $c_i$, and $P_{\min}$, $P_{\max}$, and $P_D$.
II. Compute $\lambda_{\min}$ and $\lambda_{\max}$.
III. Randomly initialize the population and decode each chromosome to a value of $\lambda$, from which the unit outputs are found by solving the ED problem.
IV. Check the power balance: $\left|\sum_i P_i - P_D\right| < \varepsilon$.
V. Normalize the error: $N_E = \dfrac{\sum_i P_i - P_D}{P_D}$.
The GA for economic load dispatch can be presented as a flow chart: start; generate the initial population; apply selection, crossover, and mutation; calculate the cost; and check convergence: if not converged, repeat the genetic operations; if converged, stop.
1.3.1 Economic Dispatch Optimization Problem Using Genetic Algorithm
$100\,\text{MW} \le P_1 \le 500\,\text{MW}$

$100\,\text{MW} \le P_2 \le 500\,\text{MW}$

$100\,\text{MW} \le P_3 \le 500\,\text{MW}$

$P_D = 800\,\text{MW}$

For the GA application to this ELD problem, assume a string length L of 9 bits and a population size of 6 in order to minimize the generation cost of the economic dispatch problem developed above.
The incremental (marginal) cost of each generator cost function is written as follows:

$\dfrac{dL(\lambda, P_1)}{dP_1} = 0.003124\,P_1 + 7.92 = \lambda$

$\dfrac{dL(\lambda, P_2)}{dP_2} = 0.00388\,P_2 + 7.85 = \lambda$

$\dfrac{dL(\lambda, P_3)}{dP_3} = 0.00400\,P_3 + 7.90 = \lambda$
Using the incremental cost equations, the minimum and maximum incremental costs ($\lambda_{\min}$ and $\lambda_{\max}$) of each unit are computed by substituting its minimum and maximum power into the corresponding equation. The results are summarized in matrix form as

$\begin{bmatrix} \lambda_{1,\min} & \lambda_{1,\max}\\ \lambda_{2,\min} & \lambda_{2,\max}\\ \lambda_{3,\min} & \lambda_{3,\max} \end{bmatrix} = \begin{bmatrix} 8.2324 & 9.482\\ 8.238 & 9.79\\ 8.30 & 9.90 \end{bmatrix}$

The search space is found by comparing the minimum and maximum lambda values and taking the smallest minimum and the largest maximum:

$\lambda_{\min} = 8.2324, \qquad \lambda_{\max} = 9.90$
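The search space can be verified numerically; the following MATLAB sketch recomputes the lambda limits from the cost coefficients above:

% Verify the lambda search space from the cost coefficients
a=[0.001562 0.00194 0.002]; b=[7.92 7.85 7.90];
Pmin=100; Pmax=500;
l_min=2*a*Pmin+b;    % [8.2324 8.2380 8.3000], matching the matrix above
l_max=2*a*Pmax+b;    % [9.4820 9.7900 9.9000]
lam_min=min(l_min)   % 8.2324
lam_max=max(l_max)   % 9.9000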
Each 9-bit chromosome is decoded to its decimal value (DV), which is mapped into the search space by

$\lambda = \lambda_{\min} + \dfrac{DV\,(\lambda_{\max}-\lambda_{\min})}{2^{L}-1}$

The normalized error, fitness, and expected count of each chromosome are then

$N_E = \dfrac{\sum_i P_i - P_D}{P_D}$

$\text{Fitness} = \dfrac{1}{1+N_E}$

$\text{Expected Count} = \dfrac{\text{Fitness}}{\sum \text{Fitness}} \times \text{population size}$
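As a worked check, the following MATLAB sketch reproduces one table row (DV = 108) using these formulas; note that the tables use the signed normalized error:

% Decode DV=108 and evaluate it (sketch)
a=[0.001562 0.00194 0.002]; b=[7.92 7.85 7.90];
lam_min=8.2324; lam_max=9.90; n_bits=9; Pd=800;
DV=108;
lambda=lam_min+DV*(lam_max-lam_min)/(2^n_bits-1)   % 8.5848
P=(lambda-b)./(2*a)                                % [212.82 189.39 171.21] MW
NE=(sum(P)-Pd)/Pd                                  % -0.2832
fitness=1/(1+NE)                                   % 1.3951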
Initial Population and Selection Operation

Table 1.1. Generated power and GA parameters for the initial population
Initial Population | DV | λ Value | P1 (MW) | P2 (MW) | P3 (MW) | NE | Fitness | Expected Count | Actual Count

For the initial population taken, the generated powers are within the limits of the power inequality constraint, and hence Table 1.1 is carried forward as-is to the crossover operation.
Single-point crossover is used, with the cross site taken after the first two bits of each chromosome. The bits beyond the cross site are exchanged between the paired chromosomes (shown by color in the original tables), and the result is presented in Table 1.2.

Table 1.2. Crossover result
Offspring | DV | λ Value | P1 (MW) | P2 (MW) | P3 (MW) | NE | Fitness | Expected Count | Actual Count
Mutation
The third bits (genes) of the third, fourth, and fifth chromosomes are randomly selected for mutation, and the value of the corresponding bit is changed to 1 if it was 0 and vice versa. The new values of $P_1$, $P_2$, $P_3$, and $\lambda$ are then calculated, and the mutation result is presented in Table 1.3.

Table 1.3. Mutation result
Offspring | DV | λ Value | P1 (MW) | P2 (MW) | P3 (MW) | NE | Fitness | Expected Count | Actual Count
Ordering the offspring of the crossover and mutation operators by their fitness values is the last phase of the first iteration; based on those values, the top six offspring are chosen.

Table 1.4. Crossover plus mutation result
Crossover + Mutation | DV | λ Value | P1 (MW) | P2 (MW) | P3 (MW) | NE | Fitness | Expected Count | Actual Count
Table 1.5. The best six offspring after the genetic operators

The first iteration is completed in Table 1.5. Roulette wheel selection is used to select new parents from the previous offspring, obtained from the combination of crossover and mutation. If the actual count of an offspring is two, it replicates itself, and the iteration continues until the power balance error is less than the tolerance. Since the actual count in Table 1.5 is one for all offspring, the whole population is carried over as the new parents for the second iteration.
Table 1.6. New parents for the second iteration (second-iteration selection)
New Parent | DV | λ Value | P1 (MW) | P2 (MW) | P3 (MW) | NE | Fitness | Expected Count | Actual Count
The same procedure as in iteration one is followed for the crossover, mutation, and other operations of the second iteration.

Crossover | DV | λ Value | P1 (MW) | P2 (MW) | P3 (MW) | NE | Fitness | Expected Count | Actual Count

Since the last two bits in each chromosome pair are the same, no change is observed in the two rightmost bits after crossover.
Table 1.7. Mutation result
Mutation | DV | λ Value | P1 (MW) | P2 (MW) | P3 (MW) | NE | Fitness | Expected Count | Actual Count
The combination of crossover with mutation and the selection of the best six offspring are then presented in Tables 1.8 and 1.9, respectively.
Chromosome | DV | λ Value | P1 (MW) | P2 (MW) | P3 (MW) | NE | Fitness | Expected Count | Actual Count
0 0 1 1 0 1 1 0 0 | 108 | 8.5848 | 212.8194 | 189.3937 | 171.2119 | -0.28 | 1.395 | 1.35859 | 1
0 0 1 1 0 1 1 0 0 | 108 | 8.5848 | 212.8194 | 189.3937 | 171.2119 | -0.28 | 1.395 | 1.16297 | 1
0 0 1 1 0 1 1 0 1 | 109 | 8.5881 | 213.864 | 190.2348 | 172.0278 | -0.28 | 1.389 | 1.35222 | 1
0 1 0 0 1 0 1 0 0 | 148 | 8.7154 | 254.6043 | 223.0371 | 203.846 | -0.15 | 1.174 | 1.14316 | 1
0 1 0 0 1 0 1 0 1 | 149 | 8.7186 | 255.649 | 223.8782 | 204.6618 | -0.14 | 1.169 | 0.9747 | 1
This shows how the genetic algorithm iterates toward the fittest value; the iteration repeats until the fittest value is obtained, i.e. until $\left|\sum_i P_i - P_D\right| < \varepsilon$ is achieved. The MATLAB implementation used for this problem is listed below.
clc;
clear all;
% Cost coefficients of F_i(P)=a_i*P^2+b_i*P+c_i (the constants c_i are
% omitted since they do not affect the dispatch); values follow the
% worked example above.
a=[0.001562 0.00194 0.002];
b=[7.92 7.85 7.90];
Pmin=100;            % lower generation limit of each unit (MW)
Pmax=500;            % upper generation limit of each unit (MW)
Pd=800;              % load demand (MW)
laminc=0.0005;
epsilon=0.005;       % convergence tolerance (fraction of Pd)
delp=Pd;
% Incremental-cost limits of each unit: lambda = 2*a*P + b
for i=1:length(a)
l_min(i)=2*a(1,i)*Pmin+b(1,i);
l_max(i)=2*a(1,i)*Pmax+b(1,i);
end
%=========================GA parameters=========================%
itermax=100;         % Maximum number of iterations
n_pop=6;             % Population size (six chromosomes, as assumed above)
n_bits=9;            % Chromosome (string) length
n_var=1;
chrom_length=n_var*n_bits;
lam_min=min(l_min);  % lower bound of the lambda search space
lam_max=max(l_max);  % upper bound of the lambda search space
iter=1;
%===================Initialization of population=================%
% pop_bin=round(rand(n_pop,chrom_length));   % random initialization (alternative)
pop_bin=[0 0 1 1 0 1 1 1 0
0 1 0 1 1 0 1 0 0
1 0 0 0 0 0 0 0 1
1 0 1 0 1 0 1 0 0
1 1 0 0 1 0 0 0 1
0 1 0 0 1 0 1 1 0];
% Decode each chromosome to its decimal value. Note: bi2de treats the
% leftmost bit as the least significant bit by default.
for p=1:n_pop
string=pop_bin(p,:);
dec_value(p,:)=bi2de(string);
end
dec_value
for p=1:n_pop
lambda(p,:)=lam_min+((lam_max-lam_min)/((2^n_bits)-1))*dec_value(p,:);
if lambda(p,:)<=lam_min;
lambda(p,:)=lam_min;
elseif lambda(p,:)>=lam_max
lambda(p,:)=lam_max;
end
end
lambda;
pp=[];
for p=1:n_pop
for i=1:length(a)
pp(p,i)=(lambda(p,:)-b(1,i))/(2*a(1,i));
end
pg1(p,:)=pp(p,:);
for i=1:length(a)
if pg1(p,i)<=Pmin
pg1(p,i)=Pmin;
elseif pg1(p,i)>=Pmax
pg1(p,i)=Pmax;
end
F(p,i)=a(1,i)*pg1(p,i)^2+b(1,i)*pg1(p,i);
end
delp(p)=abs(sum(pg1(p,:))-Pd);
error(p)=abs(delp(p));
NE(p)=error(p)/Pd;
fitness(p,:)=1/(1+NE(p));
FT(p,:)=sum(F(p,:));
end
fitness;
pg1
% Iterate until the power mismatch is within tolerance
while (max(delp(1,:))>epsilon*Pd)
% for iter=1%:50
% iter
%====================Genetic operators====================%
fit_par=fitness;                    % keep parents' fitness for elitist sorting
FT_par=FT;                          % keep parents' total cost as well
sum_fit=sum(fitness);
prob=fitness./sum_fit;              % selection probability of each chromosome
expe_count=prob.*n_pop;             % expected count
act_count=round(expe_count);        % actual count
ct=0;
for i=1:n_pop
for j=1:act_count(i)
parent(ct+j,:)=pop_bin(i,1:n_bits);
end
ct=ct+act_count(i);
end
parent;
off_spring=[];
for i=1:n_pop/2
parent1=parent(2*i-1,:);
parent2=parent(2*i,:);
cross_site=round(1+(6-1)*rand());
child1=[parent1(1,1:cross_site-1) parent2(1,cross_site:n_bits)];
child2=[parent2(1,1:cross_site-1) parent1(1,cross_site:n_bits)];
off_spring=[off_spring;child1;child2];
end
off_spring;
for p=1:n_pop
string=off_spring(p,:);
dec_value(p,:)=bi2de(string);
end
dec_value;
for p=1:n_pop
lambda(p,:)=lam_min+((lam_max-lam_min)/((2^n_bits)-1))*dec_value(p,:);
if lambda(p,:)<=lam_min
lambda(p,:)=lam_min;
elseif lambda(p,:)>=lam_max
lambda(p,:)=lam_max;
end
end
lambda;
pp=[];
for p=1:n_pop
for i=1:length(a)
pp(p,i)=(lambda(p,:)-b(1,i))/(2*a(1,i));
end
delp(p)=abs(sum(pp(p,:))-Pd);
error(p)=abs(delp(p));
NE(p)=error(p)/Pd;
fitness(p,:)=1/(1+NE(p));
pg2(p,:)=pp(p,:);
for i=1:length(a)
if pg2(p,i)<=Pmin
pg2(p,i)=Pmin;
elseif pg2(p,i)>=Pmax
pg2(p,i)=Pmax;
end
F(p,i)=a(1,i)*pg2(p,i)^2+b(1,i)*pg2(p,i);
end
FT(p,:)=sum(F(p,:));
end
lambda
error;
NE;
fitness;
pg2
%====================Mutation====================%
% Flip two randomly chosen bits in randomly chosen chromosomes
for i=1:2
r1=round(1+(n_pop-1)*rand(1));
b1=round(1+(n_bits-1)*rand(1));
if off_spring(r1,b1)==0
off_spring(r1,b1)=1;
else
off_spring(r1,b1)=0;
end
end
for p=1:n_pop
string=off_spring(p,:);
dec_value(p,:)=bi2de(string);
end
dec_value;
for p=1:n_pop
lambda(p,:)=lam_min+((lam_max-lam_min)/((2^n_bits)-1))*dec_value(p,:);
if lambda(p,:)<=lam_min
lambda(p,:)=lam_min;
elseif lambda(p,:)>=lam_max
lambda(p,:)=lam_max;
end
end
lambda;
pp=[];
for p=1:n_pop
for i=1:length(a)
pp(p,i)=(lambda(p,:)-b(1,i))/(2*a(1,i));
end
delp(p)=abs(sum(pp(p,:))-Pd);
error(p)=abs(delp(p));
NE(p)=error(p)/Pd;
fitness(p,:)=1/(1+NE(p));
pg3(p,:)=pp(p,:);
for i=1:length(a)
if pg3(p,i)<=Pmin
pg3(p,i)=Pmin;
elseif pg3(p,i)>=Pmax
pg3(p,i)=Pmax;
end
F(p,i)=a(1,i)*pg3(p,i)^2+b(1,i)*pg3(p,i);
end
FT(p,:)=sum(F(p,:));
end
error
NE
fitness
FT
pg3
%==========Elitism: keep the best n_pop of parents + offspring==========%
int_pop=[pop_bin;off_spring];           % candidate chromosomes
int_fit=[fit_par;fitness];              % fitness of parents and offspring
int_FT=[FT_par;FT];                     % total cost of parents and offspring
[m temp]=sort(int_fit,'descend');       % sort by fitness, best first
pop_bin=[];
for i=1:n_pop
pop_bin(i,1:n_bits)=int_pop(temp(i),1:n_bits);
fitness(i,:)=int_fit(temp(i));
FT(i,:)=int_FT(temp(i));
end
% pop_bin=off_spring;
iter=iter+1;
end
pop_bin
MATLAB Result (recovered excerpts of the printed output)

dec_value =
236
90
257
85
275
210

pg2 =
374.2818 310.3754 294.8141

fitness =
0.6639
0.8936
0.7209
0.8168
0.9483
0.7883

FT =
1.0e+04 *
1.1264
0.8033
0.4160
0.8889
0.7517
0.9253

pg3 =
374.2818 310.3754 294.8141

fitness =
0.9514
0.9999
0.4977
0.8936
0.8168
0.8259

FT =
1.0e+04 *
0.7490
0.7088
Economic Load Dispatch Using Artificial Neural Network
2.1 Introduction
Artificial Neural Networks (ANNs) are machine learning models inspired by the structure and function of the human brain. The concept of the ANN was first introduced in 1943 by Warren McCulloch and Walter Pitts, who proposed a simplified mathematical model of how neurons in the brain work together to process information.
In the following decades, researchers continued to develop and refine the concept, but progress was slow due to limited computational power and data availability. In the 1980s and 1990s, however, advances in computer technology and the availability of large datasets allowed significant progress in the field.
The human brain consists of a very large number of neural cells, more than a billion, that process information. Each cell works like a simple processor, and only the massive interaction between all cells and their parallel processing makes the brain's abilities possible.
In summary, a biological neuron is a specialized cell found in the nervous system of animals that is responsible for transmitting information through electrical and chemical signals. It consists of a cell body, dendrites, and an axon. The dendrites receive signals from other neurons, while the axon sends signals to other neurons or muscles. Communication between neurons is facilitated by the release of neurotransmitters from the axon terminals, which bind to receptors on the dendrites of the receiving neuron. This process allows the transmission of information throughout the nervous system, which is essential for various bodily functions and behaviors. Definitions of the dendrites, soma, neurotransmitters, and other components are presented below.
Dendrites are branching fibers that extend from the cell body or soma. The soma, or cell body, of a neuron contains the nucleus and other structures and supports the chemical processing and production of
neurotransmitters. Axon is a singular fiber carries information away from the soma to the synaptic
sites of other neurons (dendrites and somas), muscles, or glands. Axon hillock is the site of
summation for incoming information. At any moment, the collective influence of all neurons that
conduct impulses to a given neuron will determine whether or not an action potential will be
initiated at the axon hillock and propagated along the axon.
The myelin sheath consists of fat-containing cells that insulate the axon from electrical activity; this insulation increases the rate of transmission of signals. A gap exists between each myelin sheath cell along the axon, and since fat inhibits the propagation of electricity, the signals jump from one gap to the next. The nodes of Ranvier are these gaps (about 1 μm) between myelin sheath cells; because fat serves as a good insulator, the myelin sheaths speed the transmission of an electrical impulse along the axon. A synapse is the point of connection between two neurons, or between a neuron and a muscle or a gland; electrochemical communication between neurons takes place at these junctions. The terminal buttons of a neuron are the small knobs at the end of an axon that release the chemicals called neurotransmitters. The information flow in a neural cell (its input/output and the propagation of information) was illustrated in the original figure; in the artificial counterpart, the weighted inputs of a node are summed and passed through an activation function (i.e., a squashing/transfer/threshold function), and an output line transmits the result to other neurons.
An Artificial Neural Network (ANN) is an efficient computing system whose central theme is borrowed from the analogy of biological neural networks. The neuron is the basic working unit of the brain, a specialized cell designed to transmit information to other nerve cells, muscle, or gland cells. Neural networks are designed to work much as the human brain does; for example, in the case of facial recognition, the brain might start with "is it female or male? Is it black or white? Is it old or young? Is there a scar?" and so forth.
An ANN comprises a large collection of units that are interconnected in some pattern to allow communication between them. These units, also referred to as nodes or neurons, are simple processors which operate in parallel. ANNs learn (or are trained) through experience with appropriate learning exemplars, just as people do, and they gather their knowledge by detecting patterns and relationships in data.
Generally, an ANN is a machine learning model inspired by the structure and function of biological neurons in the brain. It consists of interconnected nodes, also known as artificial neurons, that process information and learn from data through a process called training. The nodes are organized into layers, with input nodes receiving data and output nodes producing results. During training, the weights between nodes are adjusted to optimize the model's performance on a specific task. ANNs have been successfully applied in various fields, such as image recognition, natural language processing, and predictive analytics.
2.2.1 Basic Elements of ANN
A neuron consists of three basic components: weights, thresholds (bias), and a single activation function; the input layer, hidden layer, and output layer are also taken as elements of an ANN. An artificial neural network model based on the biological neural system is shown in Figure 2.
Figure 2. Artificial Neural Network model sample
Input layer: This layer receives input data and passes it to the hidden layer.
Hidden layer: This layer processes the input data and applies weights to each input. It then
passes the result to the output layer.
Weights: These are the strengths of the connections between neurons. They are adjusted during the learning process to improve the accuracy of the output; W1, W2, ..., Wn are the weights applied to the inputs.
Bias: This is an additional input to each neuron in the hidden layer that helps to adjust the
output.
Activation function: This function determines the output of each neuron in the hidden layer
based on the input and weights applied.
Learning rate: This is a parameter that controls how quickly the ANN adjusts the weights
during the learning process.
Output layer: This layer produces the final output based on the input and weights applied in the
hidden layer.
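As an illustrative sketch of how these elements combine, a single artificial neuron with assumed weights, a bias, and a sigmoid activation can be written in MATLAB as:

% A single artificial neuron: weighted sum of inputs plus bias, passed
% through a sigmoid activation (illustrative values)
x=[0.5; 0.2; 0.9];                  % inputs
w=[0.4; -0.7; 0.1];                 % weights W1..Wn (assumed)
bias=0.05;                          % bias term
net_in=w'*x+bias;                   % weighted sum
y=1/(1+exp(-net_in));               % sigmoid output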
The correspondence between the biological and artificial neuron is:

Biological neuron | Artificial neuron
Cell | Neuron/node
Dendrites/synapse | Weights
Axon | Output
Sigmoid: takes a real-valued input and squashes it to the range (0, 1): $\sigma(x) = \dfrac{1}{1+e^{-x}}$.
Figure 4. Sigmoid activation function
tanh: takes a real-valued input and squashes it to the range [-1, 1]: $\tanh(x) = \dfrac{e^{x}-e^{-x}}{e^{x}+e^{-x}}$.
ReLU: ReLU stands for Rectified Linear Unit. It is a piecewise linear function that outputs zero if its input is negative and outputs the input directly otherwise: $f(x) = \max(0, x)$.
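The three activation functions can be sketched in MATLAB as follows (the plotting code is illustrative):

% The three activation functions described above
x=-5:0.1:5;
sig =1./(1+exp(-x));                % sigmoid: squashes to (0, 1)
th  =tanh(x);                       % tanh: squashes to [-1, 1]
relu=max(0,x);                      % ReLU: zero for negative inputs
plot(x,sig,x,th,x,relu); legend('sigmoid','tanh','ReLU');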
Feed-Forward Neural Network: it contains multiple neurons (nodes) arranged in layers, and nodes from adjacent layers have connections (edges) between them. A feedforward neural network consists of three types of nodes: input-layer, hidden-layer, and output-layer nodes.
Such architectures differ in the transfer function used in the individual artificial neurons or in the number of connections between individual artificial neurons, and are suited to solving linear problems. In the Hopfield network, the units are fully interconnected, where $W_{ij}$ is the strength of the connection weight from unit j to unit i. Hence, the Hopfield ANN is used in solving economic load dispatch problems.
Figure: (a) Elman ANN; (b) Jordan ANN
The Elman ANN is a simple three-layer artificial neural network that has a feedback loop from the hidden layer to the input layer. This type of network has a memory that allows it to both detect and generate time-varying patterns. The Elman network typically has sigmoid artificial neurons in its hidden layer and linear artificial neurons in its output layer, and it can approximate any function with arbitrary accuracy provided there are enough artificial neurons in the hidden layer. The Jordan network is similar to the Elman network; the only difference is that its context units are fed from the output layer instead of the hidden layer.
In addition to those listed above, the long short-term memory (LSTM) network is also a recurrent ANN. The bidirectional ANN, self-organizing map, stochastic ANN, and physical ANN are further ANN architectures.
2.4 Learning
Supervised learning involves training an ANN with labeled data, where the inputs
and corresponding outputs are known. The ANN learns to map inputs to outputs by adjusting
its internal parameters to minimize the error between predicted and actual outputs.
Unsupervised learning involves training an ANN with unlabeled data, where the
inputs are known but the corresponding outputs are not. The ANN learns to identify patterns
and relationships in the data by adjusting its internal parameters to optimize certain criteria,
such as maximizing the similarity between inputs or minimizing the variance within clusters.
Reinforcement learning involves training an ANN to interact with an environment
and learn from feedback in the form of rewards or punishments. The ANN learns to take
actions that maximize the expected cumulative reward over time by adjusting its internal
parameters to approximate an optimal policy.
To solve the economic load dispatch problem using an ANN, the objective function used for the GA above is used here as well. The general steps to compute the economic load dispatch problem using an ANN are:
2. Collect data: gather information on the three generating units' demand, cost of production, and other pertinent factors.
3. Train the ANN: use MATLAB to train the ANN on the collected data; the ANN learns to predict the total cost of generating power from the input variables.
4. Test the ANN: test the trained ANN on new data to see how accurately it predicts the total cost of generating power.
5. Optimize: use the trained ANN to find the optimal power output of each generating unit that minimizes the total cost of generating power while meeting the demand.
7. Validate the solution: validate the solution by comparing it with other methods or by testing it with real-world data.
2.5.1 Hopfield Artificial Neural Network for Economic Load Dispatch
The augmented Lagrange Hopfield network (ALHN) is used for solving the ED problem including the power loss expressed by Kron's formula. ALHN is a continuous Hopfield network whose energy function is based on the augmented Lagrange function: the energy function combines Hopfield terms from a Hopfield network with penalty terms from the augmented Lagrange function to damp out the oscillation of the Hopfield network during the convergence process. ALHN can thus overcome the drawbacks of the conventional Hopfield network; it retains its simplicity while getting closer to the optimal solution and converging faster.
The augmented Lagrange function of the ED problem is

$L = \sum_{i=1}^{N}\left(a_i + b_i P_i + c_i P_i^2\right) + \lambda\left(P_D + P_L - \sum_{i=1}^{N} P_i\right) + \frac{\beta}{2}\left(P_D + P_L - \sum_{i=1}^{N} P_i\right)^2$

where
λ is the Lagrange multiplier, and
β is the penalty factor.
To represent the power outputs in ALHN, N continuous neurons and one multiplier neuron are required. The energy function of the problem is formulated from the augmented Lagrange function as

$E = \sum_{i=1}^{N}\left(a_i + b_i V_i + c_i V_i^2\right) + V_\lambda\left(P_D + P_L - \sum_{i=1}^{N} V_i\right) + \frac{\beta}{2}\left(P_D + P_L - \sum_{i=1}^{N} V_i\right)^2 + \sum_{i=1}^{N}\int_0^{V_i} g^{-1}(V)\,dV + \int_0^{V_\lambda} g_\lambda^{-1}(V)\,dV$
where E is the energy function of the ALHN, $V_i$ is the output of continuous neuron i representing the power output of unit i, and $V_\lambda$ is the output of the multiplier neuron.
The sum of the integral terms is the Hopfield term, whose global effect is a displacement of the solutions toward the interior of the state space.
The dynamics of the neuron inputs are defined as the derivative of the energy function with respect to the outputs of the neurons, derived as follows:

$\frac{dU_i}{dt} = -\frac{\partial E}{\partial V_i} = -\left[\left(b_i + 2c_i V_i\right) + V_\lambda\left(\frac{\partial P_L}{\partial V_i} - 1\right) + \beta\left(P_D + P_L - \sum_{j=1}^{N} V_j\right)\left(\frac{\partial P_L}{\partial V_i} - 1\right) + U_i\right]$

$\frac{dU_\lambda}{dt} = +\frac{\partial E}{\partial V_\lambda} = P_D + P_L - \sum_{i=1}^{N} V_i$

where $\dfrac{\partial P_L}{\partial V_i} = 2\sum_{j=1}^{N} B_{ij} V_j + B_{0i}$.
The neuron inputs at iteration n are updated from iteration n−1 as follows:

$U_i^{(n)} = U_i^{(n-1)} + \alpha_i \frac{dU_i}{dt} = U_i^{(n-1)} - \alpha_i \frac{\partial E}{\partial V_i}$

$U_\lambda^{(n)} = U_\lambda^{(n-1)} + \alpha_\lambda \frac{dU_\lambda}{dt} = U_\lambda^{(n-1)} + \alpha_\lambda \frac{\partial E}{\partial V_\lambda}$

The outputs of the continuous neurons, representing the unit power outputs, are calculated via a sigmoid function:

$V_i = g(U_i) = \frac{P_{i,\max}}{1 + e^{-\sigma U_i}}$

where σ is the slope of the sigmoid function, which determines its shape.
The output of the multiplier neuron is defined by a linear transfer function:

$V_\lambda = g_\lambda(U_\lambda) = U_\lambda$

For the selection of the network parameters, the slope of the sigmoid function σ and the penalty factor β are fixed at 100 and 0.01, respectively; the updating step sizes $\alpha_i$ and $\alpha_\lambda$, which are smaller than one, are tuned depending on the problem.
The algorithm requires initial conditions for all neurons. The outputs of the continuous neurons representing the unit power outputs are initiated by mean distribution; that is, the initial output of a generating unit is set proportional to its maximum power output:

$V_i^{(0)} = P_D \times \frac{P_{i,\max}}{\sum_{j=1}^{N} P_{j,\max}}$

where $V_i^{(0)}$ is the initial output of continuous neuron i. The inputs of the continuous neurons are then calculated via the inverse of the sigmoid function:

$U_i^{(0)} = \frac{1}{\sigma}\ln\left(\frac{V_i^{(0)}}{P_{i,\max} - V_i^{(0)}}\right)$

where $U_i^{(0)}$ is the initial input of continuous neuron i.
For the multiplier neuron associated with the power balance constraint, the output is initialized with the mean value obtained by solving $dE/dV_i = 0$ while neglecting the penalty factor and the neuron inputs:

$V_\lambda^{(0)} = \frac{1}{N}\sum_{i=1}^{N}\left(b_i + 2c_i V_i^{(0)}\right)$

where $V_\lambda^{(0)}$ is the initial output of the multiplier neuron.
The initial input of the multiplier neuron is set equal to its output value. In the ALHN model, the errors are calculated from the constraint errors and the neural iterative errors. The power balance constraint error at iteration n is

$\Delta P^{(n)} = P_D + P_L - \sum_{i=1}^{N} V_i^{(n)}$

where $\Delta P^{(n)}$ is the power balance constraint error at iteration n. The iterative errors of the neurons at iteration n are defined as

$\Delta V_i^{(n)} = \left|V_i^{(n)} - V_i^{(n-1)}\right|$

$\Delta V_\lambda^{(n)} = \left|V_\lambda^{(n)} - V_\lambda^{(n-1)}\right|$

where $\Delta V_i^{(n)}$ and $\Delta V_\lambda^{(n)}$ are the iterative errors of the continuous and multiplier neurons at iteration n, respectively. The maximum error of the model at iteration n is determined by the combination of the power balance and iterative errors:

$Err_{\max}^{(n)} = \max\left\{\left|\Delta P^{(n)}\right|,\ \Delta V_i^{(n)},\ \Delta V_\lambda^{(n)}\right\}$

where $Err_{\max}^{(n)}$ is the maximum error of the ALHN at iteration n. The algorithm terminates when either the maximum error falls below a pre-specified tolerance or the maximum number of iterations is reached.
2.5.2 The ALHN algorithm for solving the ED problem
Step 1: Read the problem parameters, including cost coefficients, maximum power outputs, load demand, and loss coefficients.
Step 2: Select the ALHN parameters, including the slope of the sigmoid function, the penalty factor, and the updating step sizes.
Step 3: Set the maximum number of iterations $N_{\max}$ and the threshold ε for the maximum error of the ALHN.
Step 4: Initiate the outputs of all neurons and calculate their corresponding inputs.
Here, $N_{\max}$ is the maximum number of iterations and ε is the threshold of the maximum error of the ALHN.
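A minimal MATLAB sketch of the ALHN iteration for the three-unit example above is given below. The loss coefficients B and B0, the step sizes alpha and alpha_lam, and the stopping settings are illustrative assumptions; the cost form (a_i + b_i P + c_i P^2) and the parameter values σ = 100 and β = 0.01 follow the text.

% ALHN sketch for the three-unit example (the constant cost terms a_i are
% omitted since they do not affect the dispatch)
b=[7.92 7.85 7.90]';                 % linear cost coefficients
c=[0.001562 0.00194 0.002]';         % quadratic cost coefficients
Pmax=[500 500 500]'; Pd=800;         % unit limits (MW) and load demand (MW)
B=1e-5*eye(3); B0=zeros(3,1);        % ASSUMED Kron loss coefficients
sigma=100; beta=0.01;                % sigmoid slope and penalty factor (per text)
alpha=0.002; alpha_lam=0.002;        % ASSUMED updating step sizes
V=Pd*Pmax/sum(Pmax);                 % initial outputs by mean distribution
U=log(V./(Pmax-V))/sigma;            % inputs from the inverse sigmoid
Vlam=mean(b+2*c.*V); Ulam=Vlam;      % multiplier neuron initialization
for n=1:20000
PL=V'*B*V+B0'*V;                     % Kron's loss formula
dPL=2*B*V+B0;                        % dPL/dVi
dP=Pd+PL-sum(V);                     % power balance error
dEdV=(b+2*c.*V)+Vlam*(dPL-1)+beta*dP*(dPL-1)+U;
U=U-alpha*dEdV;                      % update continuous neuron inputs
Ulam=Ulam+alpha_lam*dP;              % update multiplier neuron input
Vold=V; Vlamold=Vlam;
V=Pmax./(1+exp(-sigma*U));           % sigmoid outputs = unit powers
Vlam=Ulam;                           % linear transfer for the multiplier
Err=max([abs(dP);abs(V-Vold);abs(Vlam-Vlamold)]);
if Err<1e-3, break; end              % stop on maximum-error tolerance
end
P=V'                                 % dispatch (MW)
lambda=Vlam                          % incremental cost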
The MATLAB snippet below trains a feedforward network on the dispatch data; the original listing omitted the network creation and the target vector Y, so the lines marked as assumed are reconstructions.

clc;
clear all;
% Each row of Data is one dispatch pattern [P1 P2 P3] in MW
Data=[105.4 320 179; 111 375.4 298.9;100 150 200;250 150 129;321 150 200;100 150 200;
119 211 296.7;100 150 200;100 150 200;100 150 200;100 150 200;100 150 200;
100 150 200;100 150 200;100 150 200;100 150 200;100 150 200;100 150 200;
100 150 200; 100 150 200; 240 150 127;100 150 200;195 159 495;201 158 240;
260 260 260;100 150 200;100 150 200;100 150 200;100 150 200];
% ASSUMED target: total fuel cost of each pattern, computed from the GA
% example's cost coefficients (the original target Y was not shown)
a=[0.001562 0.00194 0.002]; b=[7.92 7.85 7.90];
Y=(Data.^2)*a'+Data*b';
net=feedforwardnet(10);        % ASSUMED: one hidden layer of 10 neurons
net.performFcn = 'mse';        % Mean squared error performance function
net.divideFcn = 'dividerand';  % Randomly divide data into training, validation, and testing sets
X=Data;
[net,tr] = train(net,X',Y')    % train expects one column per sample
References
[1] Arunpreet Kaur, Harinder Pal Singh and Abhishek Bhardwaj, "Analysis of Economic Load Dispatch Using Genetic Algorithm," International Journal of Application or Innovation in Engineering & Management (IJAIEM), vol. 3, no. 3, March 2014.
[2] S. Rajasekaran and G. A. V. Pai, Neural Networks, Fuzzy Logic and Genetic Algorithms, Prentice Hall of India, New Delhi, 2004.