
BAHIR DAR UNIVERSITY

BAHIR DAR INSTITUTE OF TECHNOLOGY

FACULTY OF ELECTRICAL AND COMPUTER ENGINEERING

POST GRADUATE PROGRAM IN POWER SYSTEMS ENGINEERING

COURSE: Optimization and Artificial Intelligence Applications in Power System

Project on Economic Load Dispatch Using Genetic Algorithm and Artificial Neural Network Optimization Techniques

Prepared By:

Asmamaw Worku / BDU1401882


Fekadu Gebey /BDU1401884
Siraw Gedamu / BDU1401878

Submitted to:

Girmaw T. (PhD)

June, 2023

Bahir Dar, Ethiopia


Table of Contents

Economic Load Dispatch Using Genetic Algorithm Based Optimization

1.1 Introduction

1.2 Genetic Algorithm

1.2.1 Genetic Algorithm Operators

1.2.2 Simple Genetic Algorithm

1.3 Economic Load Dispatch of a Power System Using Genetic Algorithm

1.3.1 Economic Dispatch Optimization Problem Using Genetic Algorithm

Economic Load Dispatch Using Genetic Algorithm MATLAB Code

Economic Load Dispatch Using Artificial Neural Network

2.1 Introduction

2.2 Biological Neuron

2.3 Artificial Neural Network

2.3.1 Basic Elements of ANN

2.3.2 Types of Activation Functions

2.4 Neural Network Architectures

2.5 Learning

2.6 Economic Load Dispatch of a Power System Using ANN

2.6.1 Hopfield Artificial Neural Network for Economic Load Dispatch

2.6.2 The Algorithm of ALHN for Solving the ED Problem

MATLAB Code-ELD using ANN

References
Economic Load Dispatch Using Genetic Algorithm Based Optimization

1.1 Introduction

The principal objective of the economic dispatch of a power system is to schedule the generating units so as to serve the load demand at minimum operating cost while meeting unit and system constraints. In an electrical power system, a continuous balance must be maintained between electrical generation and the varying load demand, while the system frequency, voltage levels, and security must also be maintained. Furthermore, it is desirable that the cost of such generation be minimal. Numerous classical techniques, such as Lagrange-based methods and the lambda iteration method, have been used; many other methods, such as gradient methods, Newton's method, and linear and quadratic programming, have also been used. Artificial intelligence optimization techniques such as the genetic algorithm, artificial neural networks, particle swarm optimization, and simulated annealing are now used for the optimization of power flow and hence of economic load dispatch.

Genetic Algorithms (GAs) are numerical optimization algorithms inspired by the genetic and evolution mechanisms observed in natural systems and populations of living beings. Genetic algorithms are search algorithms based on the mechanics of natural selection and natural genetics. They combine survival of the fittest among string structures with structured yet randomized information exchange to form a robust search algorithm. In every generation, a new set of artificial creatures (strings) is created using bits and pieces from the fittest of the old; an occasional new part is tried for good measure. The approach is essentially derived from a simple model of population genetics. The three prime operators associated with the genetic algorithm are reproduction, crossover, and mutation.

1.2 Genetic Algorithm

Generally speaking, the GA for the economic dispatch problem (EDP) starts by coding the variables, randomly selecting several initial values, calculating the resultant objective function by solving the EDP based on the decision variables, selecting a subset of the initially selected variables based on highest savings, cross-mating the coded locations, and mutating the resultant code to arrive at a better solution. In the idea underlying the genetic algorithm, adaptation is intelligence: a species that adapts itself to its circumstances is an intelligent species.

1.2.1 Genetic Algorithm Operators

The GA operators may be classified as:

 Reproduction or Selection
 Crossover
 Mutation

Reproduction or Selection

Reproduction is the first operator applied to the population. Chromosomes are selected from the population as parents for crossover and produce offspring in such a way that the best should survive and create offspring. These offspring are the basis for the next generation, so it is desirable that the mating pool consist of good individuals. A selection strategy in a GA is simply a process that favors the selection of better individuals in the population for the mating pool. The selection operator thus selects two or more parents from the population for crossing; its purpose is to emphasize fitter individuals in the population in the hope that their offspring will have higher fitness. Selection is the process that determines which solutions are to be preserved and allowed to reproduce and which ones deserve to die out. The primary objective of the selection operator is to emphasize the good solutions and eliminate the bad solutions in a population while keeping the population size constant.
Crossover Operator

The crossover operator plays a vital and central role in GA operation; in fact, it may be considered one of the algorithm's defining characteristics. This operator provides a mechanism for sharing information between chromosomes, with some crossover probability, by combining two parent chromosomes to produce offspring, with the possibility that good chromosomes may generate better ones. There are three types of crossover operators:

(1) one-point crossover,

(2) multipoint crossover, and

(3) uniform crossover.

In single-point and multipoint crossover, segments of bits are exchanged between cross sites, whereas uniform crossover exchanges individual bits of a string rather than segments: at each string position, the bits are probabilistically exchanged with some fixed probability. A minimal sketch of single-point crossover is given below.
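A minimal MATLAB sketch of single-point crossover on two 9-bit parents; the variable names and bit strings are illustrative.

n_bits = 9;
parent1 = [0 0 1 1 0 1 1 1 0];
parent2 = [0 1 0 1 1 0 1 0 0];
site = randi(n_bits-1);                         % random crossover point
child1 = [parent1(1:site) parent2(site+1:end)]; % swap tail segments
child2 = [parent2(1:site) parent1(site+1:end)];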

Mutation

The mutation operator changes a 1 to a 0 and vice versa with a small probability. The operator injects new genetic material into the population to prevent premature convergence of the GA to suboptimal solutions.

Fitness Function

The genetic algorithm is based on Darwin's principle that "the candidates which can survive will live; the others die". This principle is used to find the fitness value of the process for solving maximization problems. Minimization problems are usually transformed into maximization problems using suitable transformations. The fitness value f(x) is derived from the objective function and is used in successive genetic operations. For a maximization problem, the fitness function can be taken to be the same as the objective function F(X) [1]:

f(x) = F(X)

For minimization problems, the fitness function is an equivalent maximization problem chosen such that the optimum point remains unchanged. The following fitness function is often used in minimization problems:

f(x) = 1/(1 + F(X))

Here f(x) is the fitness function and F(X) is the objective function.

The various methods for selection are [2]:

1. Roulette wheel selection (RWS)
2. Stochastic remainder selection (SRS)
3. Tournament selection (TS)
4. Rank selection (RS)
5. Boltzmann selection

Among these, roulette wheel selection is the most common. Parents are selected according to their fitness values: the better chromosomes have a higher chance of being selected, as the sketch below illustrates.
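A minimal MATLAB sketch of roulette wheel selection, assuming a precomputed fitness vector; the names and values are illustrative.

fitness = [1.43 1.04 0.82 0.67 0.59 1.16];  % example fitness values
n_pop = numel(fitness);
prob = fitness/sum(fitness);     % selection probability per chromosome
cumP = cumsum(prob);             % cumulative probabilities form the "wheel"
parents = zeros(1,n_pop);
for k = 1:n_pop
    parents(k) = find(cumP >= rand(), 1, 'first');  % spin the wheel
end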

1.2.2 Simple Genetic Algorithm

The simple genetic algorithm proceeds as follows:

 Initialize the population

 Evaluate the fitness of the population

 While the stopping criteria are not satisfied:

o Select parents

o Perform crossover to produce offspring

o Apply mutation

o Calculate fitness

 End

Population size: the number of chromosomes in one population. If the population size is too large, the GA becomes extremely sluggish; if it is too small, there are few possibilities for mating and only a part of the search space will be sampled.

Crossover frequency: if mating happens all the time (100%), all offspring are made by crossover; if it is 0%, the parents are simply copied. It is reasonable to copy some chromosomes unchanged into the next generation, so crossover rates of roughly 60% to 85-95% are used in practice.

Mutation frequency: with 0% mutation there is no change in the copies (offspring), so mutation should occur with a certain small frequency. If mutation is too frequent (say 50%), it produces huge variability that prevents convergence, while even a 1% mutation rate still provides change in the copies. The probability of mutation (P_m) is therefore kept small, typically on the order of 1% per bit.

1.3 Economic Load Dispatch of a power system using Genetic Algorithm

The GA dynamically changes the search process through the probabilities of crossover and mutation and thereby reaches an optimal solution. The GA can modify the encoded genes, and it can evaluate multiple individuals and produce multiple optimal solutions.

Generally, the GA for the EDP starts by coding the variables, randomly selecting several initial values, calculating the resultant objective function by solving the EDP based on the decision variables, selecting a subset of the initially selected variables based on highest savings, cross-mating the coded locations, and mutating the resultant code to arrive at a better solution.

The economic load dispatch problem is considered a general minimization problem with constraints and can be written as

Minimize f(X) = Σ_i (a_i P_Gi^2 + b_i P_Gi + c_i)

Subject to P_Gi,min ≤ P_Gi ≤ P_Gi,max

Σ_i P_Gi − P_D − P_L = 0

where f(X) is the cost function, P_Gi is the generated power of unit i, P_L is the transmission line loss, and P_D is the total demand power.

The fitness function in the genetic algorithm is equivalent to the objective function of economic dispatch.

Steps of the genetic algorithm for solving economic dispatch:

I. Read the input data: the coefficients a_i, b_i, c_i and P_D, P_Gi,min, P_Gi,max.

II. Compute λ_min and λ_max.

III. Randomly initialize the population: create an initial population by randomly generating a set of feasible solutions (chromosomes).

IV. Decode each binary chromosome into its decimal value.

V. Obtain the λ value corresponding to the decimal value within the specified search space:

λ = λ_min + ((λ_max − λ_min)/(2^L − 1)) × DV

where DV is the decimal value and L is the length of the string (number of bits).

VI. Solve for P_Gi = (λ − b_i)/(2 a_i), i.e., evaluate each chromosome by solving the ED problem, and enforce the generation limits:

if P_Gi < P_Gi,min, then P_Gi = P_Gi,min

if P_Gi > P_Gi,max, then P_Gi = P_Gi,max

VII. Calculate the power balance error ΔP and check |Σ P_Gi − P_D| < ε. Normalize the error as N_E = (Σ P_Gi − P_D)/P_D and calculate the fitness as f = 1/(1 + N_E).

VIII. While ΔP > ε, apply the genetic operators.

IX. Discard the initial population, update it with the offspring population, and return to step II until the error is eliminated. A minimal MATLAB sketch of steps IV-VII is given below.
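A minimal MATLAB sketch of steps IV-VII for one 9-bit chromosome, using the incremental-cost slopes and limits of the three-unit example in section 1.3.1 below; the chromosome and variable names are illustrative.

k = [0.003124 0.00388 0.00400];  % incremental cost slopes dF_i/dP_i
b = [7.92 7.85 7.90];            % linear cost coefficients
Pd = 800; Pmin = 100; Pmax = 500; L = 9;
lam_min = 8.2324; lam_max = 9.90;           % search-space limits (step II)
chrom = [0 0 1 1 0 1 1 1 0];                % one binary chromosome
DV = bin2dec(num2str(chrom, '%d'));         % step IV: decode (MSB first)
lam = lam_min + (lam_max - lam_min)/(2^L - 1)*DV;  % step V
P = (lam - b)./k;                           % step VI: unit outputs
P = min(max(P, Pmin), Pmax);                % enforce generation limits
NE = (sum(P) - Pd)/Pd;                      % step VII: normalized error
fit = 1/(1 + NE);                           % fitness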

The GA for economic load dispatch can be presented as a flow chart. The chart proceeds: START; define the cost function and variables; select the GA parameters; generate the initial population; decode the chromosomes; select mates for reproduction; perform the crossover operation; apply mutation; calculate the cost; check convergence (if not converged, return to mate selection; if converged, STOP).

Figure 1: Flow chart of the genetic algorithm for ELD optimization

1.3.1 Economic Dispatch Optimization Problem Using Genetic Algorithm

To illustrate the genetic algorithm optimization technique on economic load dispatch, the following fuel cost functions with three generating units are considered. The three generating units are intended to feed a total load demand (P_D) of 800 MW.

F1(P1) = 0.001562 P1^2 + 7.92 P1 + 561 ($/h)

F2(P2) = 0.00194 P2^2 + 7.85 P2 + 310 ($/h)

F3(P3) = 0.00400 P3^2 + 7.90 P3 + 78 ($/h)

subject to the inequality constraints

100 MW ≤ P1 ≤ 500 MW

100 MW ≤ P2 ≤ 500 MW

100 MW ≤ P3 ≤ 500 MW

and the equality constraint

P1 + P2 + P3 = P_D = 800 MW

For the GA application to ELD, assume the length of the string L is 9 and the population size is 6, in order to minimize the cost of the economic dispatch problem developed above.

Computation of λ_min and λ_max

The incremental (marginal) cost of each generator cost function is written as follows:

dL(λ, P1)/dP1 = 0.003124 P1 + 7.92 = λ

dL(λ, P2)/dP2 = 0.00388 P2 + 7.85 = λ

dL(λ, P3)/dP3 = 0.00400 P3 + 7.90 = λ

Using the incremental cost equations, the minimum and maximum incremental cost values (λ_min and λ_max) are computed by substituting the minimum and maximum power of each generator into the corresponding equation:

0.003124 P1 + 7.92 = λ1,min with P1 = 100 MW: λ1,min = 0.003124 × 100 + 7.92 = 8.2324

0.003124 P1 + 7.92 = λ1,max with P1 = 500 MW: λ1,max = 0.003124 × 500 + 7.92 = 9.482

0.00388 P2 + 7.85 = λ2,min with P2 = 100 MW: λ2,min = 0.00388 × 100 + 7.85 = 8.238

0.00388 P2 + 7.85 = λ2,max with P2 = 500 MW: λ2,max = 0.00388 × 500 + 7.85 = 9.79

0.00400 P3 + 7.90 = λ3,min with P3 = 100 MW: λ3,min = 0.00400 × 100 + 7.90 = 8.30

0.00400 P3 + 7.90 = λ3,max with P3 = 500 MW: λ3,max = 0.00400 × 500 + 7.90 = 9.90

The minimum and maximum lambda values are summarized in matrix form as follows:

[λ1,min  λ1,max]   [8.2324  9.482]
[λ2,min  λ2,max] = [8.238   9.79 ]
[λ3,min  λ3,max]   [8.30    9.90 ]

The search space is found by comparing the lambda minimum and maximum values and taking the smallest minimum and the largest maximum:

λ_min = 8.2324; λ_max = 9.90

Initialization of population: assume the population size to be six. For each chromosome,

N_E = (Σ P_Gi − P_D)/P_D

Fitness = 1/(1 + N_E)

Expected count = (fitness_i / Σ fitness) × population size

Actual count ≈ round(expected count)

Initial population

Initial population selection is done by randomly selecting decimal values that lie between the minimum and maximum limits of the power constraints (100 and 500 MW). This initial population is then used as the main input to the GA optimization of economic load dispatch; the first row of table 1.1 is verified by the sketch below.
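A minimal MATLAB sketch reproducing the first row of table 1.1 (DV = 103) from the quantities derived above.

lam_min = 8.2324; lam_max = 9.90; L = 9;
k = [0.003124 0.00388 0.00400]; b = [7.92 7.85 7.90]; Pd = 800;
DV = 103;
lam = lam_min + (lam_max - lam_min)/(2^L - 1)*DV;  % 8.5685
P = (lam - b)./k;       % [207.5963 185.1883 167.1327] MW
NE = (sum(P) - Pd)/Pd;  % -0.30
fit = 1/(1 + NE);       % 1.4288, as in table 1.1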

Selection Operation

Table 1.1. result of generated power and GA parameters for initial population

Initial Population    DV    λ Value    P1(MW)    P2(MW)    P3(MW)    NE    Fitness    Expected Count    Actual Count

001101110 103 8.5685 207.5963 185.1883 167.1327 -0.3 1.4288 1.50199 2


010110100 180 8.8198 288.0323 249.9518 229.9532 -0.04 1.0418 1.09513 1
100000001 257 9.0711 368.4683 314.7152 292.7738 0.22 0.8197 0.86171 1
101010100 340 9.3420 455.1721 384.5252 360.4894 0.5 0.6666 0.70072 1
110010001 401 9.5410 518.8942 435.8313 410.2564 0.706 0.5861 0.61612 1
010010110 150 8.7219 256.6936 224.7193 205.4777 -0.14 1.1647 1.22434 1

For the initial population taken, the results of generated power are within the limits of the power inequality constraints, and hence table 1.1 above is used as it stands for the crossover operation.

Cross Over Operation

Single-point crossover is used on each pair of chromosomes, with the tail bits after the crossover site exchanged between the two parents; the result is presented in table 1.2.

Table 1.2. result of the cross over operation

Offspring    DV    λ Value    P1(MW)    P2(MW)    P3(MW)    NE    Fitness    Expected Count    Actual Count

0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.3951 1.4761 1


0 1 0 1 1 0 1 1 0 182 8.8263 290.1216 251.634 231.5849 -0.03 1.0345 1.09452 1
1 0 0 0 0 0 0 0 0 256 9.0678 367.4237 313.8742 291.9579 0.217 0.8220 0.86969 1
1 0 1 0 1 0 1 0 1 341 9.3452 456.2168 385.3663 361.3053 0.504 0.6651 0.70367 1
1 1 0 0 1 0 0 1 0 402 9.5443 519.9388 436.6724 411.0722 0.71 0.5849 0.61888 1
0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.1693 1.23714 1
As the results of the crossover operation show, all values of P1, P2, P3, and λ are within their minimum and maximum ranges; no value violates the constraints. Therefore, it is possible to proceed to the mutation operation.

Mutation

Randomly the third bits /genes/ of the third, fourth and fifth chromosomes are selected for mutation
and the value of the corresponding bit is changed to 1 if it was 0 and vice versal. The new values of
𝑃 , 𝑃 , 𝑃 and λ is then calculated and the mutation result presented in table 1.3.

Table 1.3. result of Mutation operation

Offspring    DV    λ Value    P1(MW)    P2(MW)    P3(MW)    NE    Fitness    Expected Count    Actual Count

0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.3951 1.47808 1


0 1 0 1 1 0 1 1 0 182 8.8263 290.1216 251.634 231.5849 -0.03 1.0345 1.09598 1
1 0 0 0 0 0 1 0 0 260 9.0809 371.6022 317.2385 295.2213 0.23 0.8130 0.8613 1
1 0 1 0 1 0 0 0 1 337 9.3322 452.0383 382.0019 358.0419 0.49 0.6711 0.711 1
1 1 0 0 1 0 1 1 0 406 9.5573 524.1173 440.0367 414.3356 0.723 0.5803 0.61485 1
0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.1693 1.23879 1

Ordering the offspring of the crossover and mutation operators by their fitness values is the last phase of the first iteration: based on their fitness values, the top six offspring are chosen.

Table 1.4 crossover plus mutation result

Crossover + mutation    DV    λ Value    P1(MW)    P2(MW)    P3(MW)    NE    Fitness    Expected Count    Actual Count

0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.3951 1.16297 1


0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.3951 1.26359 1
0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.1693 1.23879 1
0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.1693 1.23714 1
0 1 0 1 1 0 1 1 0 182 8.8263 290.1216 251.634 231.5849 -0.03 1.0345 1.11951 1
0 1 0 1 1 0 1 1 0 182 8.8263 290.1216 251.634 231.5849 -0.03 1.0345 1.2315 1
1 0 0 0 0 0 0 0 0 256 9.0678 367.4237 313.8742 291.9579 0.217 0.8220 0.97854 1
1 0 0 0 0 0 1 0 0 260 9.0809 371.6022 317.2385 295.2213 0.23 0.8130 1.06257 1
1 0 1 0 1 0 0 0 1 337 9.3322 452.0383 382.0019 358.0419 0.49 0.6711 0.87715 1
1 0 1 0 1 0 1 0 1 341 9.3452 456.2168 385.3663 361.3053 0.504 0.6651 0.96471 1
1 1 0 0 1 0 0 1 0 402 9.5443 519.9388 436.6724 411.0722 0.71 0.5849 0.84847 1
1 1 0 0 1 0 1 1 0 406 9.5573 524.1173 440.0367 414.3356 0.723 0.5803 1.05059 1

Table 1.5 The best six offspring after the genetic operators

Crossover + mutation (new offspring)    DV    λ Value    P1(MW)    P2(MW)    P3(MW)    NE    Fitness    Expected Count    Actual Count
0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.3951 1.16297 1
0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.3951 1.26359 1
0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.1693 1.23879 1
0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.1693 1.23714 1
0 1 0 1 1 0 1 1 0 182 8.8263 290.1216 251.634 231.5849 -0.03 1.0345 1.11951 1
0 1 0 1 1 0 1 1 0 182 8.8263 290.1216 251.634 231.5849 -0.03 1.0345 1.2315 1

The work of the first iteration is completed in table 1.5. Roulette wheel selection is used to select new parents from the previous offspring obtained from the combination of crossover and mutation. If the actual count of an offspring is two, it replicates itself, and the iteration continues until the power balance error is less than the tolerance. Since the actual count in table 1.5 is one for every offspring, the whole population is taken as the new parent set for the second iteration.

Table 1.6 new parents for the second iteration (second-iteration selection)

New Parent    DV    λ Value    P1(MW)    P2(MW)    P3(MW)    NE    Fitness    Expected Count    Actual Count

0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.3951 1.16297 1


0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.3951 1.26359 1
0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.1693 1.23879 1
0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.1693 1.23714 1
0 1 0 1 1 0 1 1 0 182 8.8263 290.1216 251.634 231.5849 -0.03 1.0345 1.11951 1
0 1 0 1 1 0 1 1 0 182 8.8263 290.1216 251.634 231.5849 -0.03 1.0345 1.2315 1

The same procedure as in iteration one is followed for the crossover, mutation, and other operations of the second iteration.

Table 1.7 crossover of the second iteration

Crossover    DV    λ Value    P1(MW)    P2(MW)    P3(MW)    NE    Fitness    Expected Count    Actual Count

0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.3951 1.16297 1


0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.3951 1.16297 1
0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.1693 0.9747 1
0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.1693 0.9747 1
0 1 0 1 1 0 1 1 0 182 8.8263 290.1216 251.634 231.5849 -0.03 1.0345 0.86233 1
0 1 0 1 1 0 1 1 0 182 8.8263 290.1216 251.634 231.5849 -0.03 1.0345 0.86233 1

Since the last two digits in each mated chromosome pair are the same, no change is observed in the two right-most bits.

Table 1.8 result of the mutation operation (second iteration)

Mutation    DV    λ Value    P1(MW)    P2(MW)    P3(MW)    NE    Fitness    Expected Count    Actual Count

0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.3951 1.35859 1

0 0 1 1 0 1 1 0 1 109 8.5881 213.864 190.2348 172.0278 -0.28 1.3886 1.35222 1

0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.1693 1.13865 1

0 1 0 0 1 0 1 0 0 148 8.7154 254.6043 223.0371 203.846 -0.15 1.1739 1.14316 1

0 1 0 1 1 0 1 1 0 182 8.8263 290.1216 251.634 231.5849 -0.03 1.0345 1.00738 1

The combination of crossover with mutation and the selection of the best six offspring are then presented in table 1.9 and table 1.10, respectively.

Table 1.9 crossover with mutation

Crossover + mutation    DV    λ Value    P1(MW)    P2(MW)    P3(MW)    NE    Fitness    Expected Count    Actual Count

0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.3951 1.16297 1


0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.3951 1.35859 1
0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.3951 1.16297 1
0 0 1 1 0 1 1 0 1 109 8.5881 213.864 190.2348 172.0278 -0.28 1.3886 1.35222 1
0 1 0 0 1 0 1 0 0 148 8.7154 254.6043 223.0371 203.846 -0.15 1.1739 1.14316 1
0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.1693 0.9747 1
0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.1693 1.13865 1
0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.1693 0.9747 1
0 1 0 1 1 0 1 1 0 182 8.8263 290.1216 251.634 231.5849 -0.03 1.0345 0.86233 1
0 1 0 1 1 0 1 1 0 182 8.8263 290.1216 251.634 231.5849 -0.03 1.0345 0.86233 1
0 1 0 1 1 0 1 1 0 182 8.8263 290.1216 251.634 231.5849 -0.03 1.0345 1.00738 1

Table 1.10 best six offspring of the second iteration

Crossover + mutation    DV    λ Value    P1(MW)    P2(MW)    P3(MW)    NE    Fitness    Expected Count    Actual Count

0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.395 1.16297 1
0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.395 1.35859 1
0 0 1 1 0 1 1 0 0 108 8.5848 212.8194 189.3937 171.2119 -0.28 1.395 1.16297 1
0 0 1 1 0 1 1 0 1 109 8.5881 213.864 190.2348 172.0278 -0.28 1.389 1.35222 1
0 1 0 0 1 0 1 0 0 148 8.7154 254.6043 223.0371 203.846 -0.15 1.174 1.14316 1
0 1 0 0 1 0 1 0 1 149 8.7186 255.649 223.8782 204.6618 -0.14 1.169 0.9747 1

This shows how the genetic algorithm computes its way toward the fittest value; the iteration repeats until the fittest value is obtained, i.e., until |Σ P_Gi − P_D| < ε is achieved.

Economic Load Dispatch Using Genetic Algorithm MATLAB Code

clc;

clear all;

%================= data and inputs given for ELD ========================%

a=[0.003124 0.00388 0.004];

b=[7.92 7.85 7.9];

Pd=800;

Pmin=100; % minimum value is the same for all generators========%

Pmax=500; % maximum value is the same for all generators========%

laminc=0.0005;

epsilon=0.005;

delp=Pd;

for i=1:length(a)

l_min(i)=2*a(1,i)*Pmin+b(1,i);

l_max(i)=2*a(1,i)*Pmax+b(1,i);

end

%GA parameters================================================

itermax=100; % Maximum Number of Iterations

n_pop=6; % Population Size

n_bits=9;

n_var=1;

chrom_length=n_var*n_bits;

lam_min=min(l_min);

lam_max=max(l_max);

iter=1;

%===================Initialization of population=================%

% pop_bin=round(rand(n_pop,chrom_length));

pop_bin=[0 0 1 1 0 1 1 1 0

0 1 0 1 1 0 1 0 0

1 0 0 0 0 0 0 0 1

1 0 1 0 1 0 1 0 0

1 1 0 0 1 0 0 0 1

0 1 0 0 1 0 1 1 0];

%=========== decoded value=======================================%

%change binary input to decimal equivalent

pop_bin;

for p=1:n_pop

p;

string=pop_bin(p,:);
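% bi2de (Communications Toolbox) treats the left-most bit as the least significant bit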

dec_value(p,:)=bi2de(string);

end

dec_value

for p=1:n_pop

lambda(p,:)=lam_min+((lam_max-lam_min)/((2^n_bits)-1))*dec_value(p,:);

if lambda(p,:)<=lam_min;

lambda(p,:)=lam_min;

elseif lambda(p,:)>=lam_max

lambda(p,:)=lam_max;

end

end

lambda;

pp=[];

for p=1:n_pop

for i=1:length(a)

pp(p,i)=(lambda(p,:)-b(1,i))/(2*a(1,i));

end

pg1(p,:)=pp(p,:);

for i=1:length(a)

if pg1(p,i)<=Pmin

pg1(p,i)=Pmin;

elseif pg1(p,i)>=Pmax

pg1(p,i)=Pmax;

end

F(p,i)=a(1,i)*pg1(p,i)^2+b(1,i)*pg1(p,i);

end

delp(p)=abs(sum(pg1(p,:))-Pd);

error(p)=abs(delp(p));

NE(p)=error(p)/Pd;

fitness(p,:)=1/(1+NE(p));

FT(p,:)=sum(F(p,:));

end

fitness;

pg1

[pop_bin]=[pop_bin fitness pg1 FT];

%=========================iterative loop of GA========================

while (max(delp(1,:))>epsilon*Pd)

% for iter=1%:50

% iter

%====================Genetic operators====================================%

%====================== 1. Roulette wheel selection ======================%

sum_fit=sum(fitness);

prob=fitness./sum_fit;

expe_count=prob.*n_pop;

act_count=round(expe_count);

ct=0;

for i=1:n_pop

for j=1:act_count(i)

parent(ct+j,:)=pop_bin(i,1:n_bits);

end

ct=ct+act_count(i);

end

parent;

% %======================== 2.Crossover process ============================%

off_spring=[];

for i=1:n_pop/2

parent1=parent(2*i-1,:);

parent2=parent(2*i,:);

cross_site=round(1+(6-1)*rand());

child1=[parent1(1,1:cross_site-1) parent2(1,cross_site:n_bits)];

child2=[parent2(1,1:cross_site-1) parent1(1,cross_site:n_bits)];

off_spring=[off_spring;child1;child2];

end

off_spring;

for p=1:n_pop

string=off_spring(p,:);

dec_value(p,:)=bi2de(string);

end

dec_value;

for p=1:n_pop

lambda(p,:)=lam_min+((lam_max-lam_min)/((2^n_bits)-1))*dec_value(p,:);

if lambda(p,:)<=lam_min

lambda(p,:)=lam_min;

elseif lambda(p,:)>=lam_max

lambda(p,:)=lam_max;

end

end

lambda;

pp=[];

for p=1:n_pop

for i=1:length(a)

pp(p,i)=(lambda(p,:)-b(1,i))/(2*a(1,i));

end

delp(p)=abs(sum(pp(p,:))-Pd);

error(p)=abs(delp(p));

NE(p)=error(p)/Pd;

fitness(p,:)=1/(1+NE(p));

pg2(p,:)=pp(p,:);

for i=1:length(a)

if pg2(p,i)<=Pmin

pg2(p,i)=Pmin;

elseif pg2(p,i)>=Pmax

pg2(p,i)=Pmax;

end

F(p,i)=a(1,i)*pg2(p,i)^2+b(1,i)*pg2(p,i);

end

FT(p,:)=sum(F(p,:));

end

lambda

error;

NE;

fitness;

pg2

%====================mutation=========================================%

for i=1:2

r1=round(1+(n_pop-1)*rand(1));

b1=round(1+(n_bits-1)*rand(1));

if off_spring(r1,b1)==0

off_spring(r1,b1)=1;

else

off_spring(r1,b1)=0;

end

end

for p=1:n_pop

string=off_spring(p,:);

dec_value(p,:)=bi2de(string);

end

dec_value;

for p=1:n_pop
lambda(p,:)=lam_min+((lam_max-lam_min)/((2^n_bits)-1))*dec_value(p,:);

if lambda(p,:)<=lam_min

lambda(p,:)=lam_min;

elseif lambda(p,:)>=lam_max

lambda(p,:)=lam_max;

end

end

lambda;

pp=[];

for p=1:n_pop

for i=1:length(a)

pp(p,i)=(lambda(p,:)-b(1,i))/(2*a(1,i));

end

delp(p)=abs(sum(pp(p,:))-Pd);

error(p)=abs(delp(p));

NE(p)=error(p)/Pd;

fitness(p,:)=1/(1+NE(p));

pg3(p,:)=pp(p,:);

for i=1:length(a)

if pg3(p,i)<=Pmin

pg3(p,i)=Pmin;

elseif pg3(p,i)>=Pmax

pg3(p,i)=Pmax;

end

F(p,i)=a(1,i)*pg3(p,i)^2+b(1,i)*pg3(p,i);

end

FT(p,:)=sum(F(p,:));

end

error

NE

fitness

FT

pg3

off_spring=[off_spring fitness pg3 FT];

%===================================%

% [m temp]=sort(off_spring(:,n_bits+1),'descend');

%===================================%

int_pop=[pop_bin;off_spring];

[m temp]=sort(int_pop(:,n_bits+1),'descend');

pop_bin=[];

for i=1:n_pop

pop_bin(i,1:n_bits)=int_pop(temp(i),1:n_bits);

fitness(i,:)=int_pop(temp(i),n_bits+1);

p_gen(i,:)=int_pop(temp(i),n_bits+2:n_bits+4); % columns 11:13 hold the three unit outputs

FT(i,:)=int_pop(temp(i),n_bits+5); % column 14 holds the total cost

end

pop_bin=[pop_bin fitness p_gen FT];

% pop_bin=off_spring;

% best_solution(iter,:)=int_pop(temp(1),n_bits+1);

% best_cost(iter,1)=int_pop(temp(1),n_bits+4);

iter=iter+1;

end

pop_bin

MATLAB Result

dec_value =

236

90

257

85

275

210

pg1 =

348.0096 289.2222 274.2955

194.5799 165.6876 154.4669

370.0783 306.9908 291.5311

189.3255 161.4569 150.3632

388.9942 322.2211 306.3045

320.6865 267.2228 252.9562

pg2 =

192.4782 163.9953 152.8254

350.1114 290.9144 275.9370

185.1219 158.0724 147.0802

374.2818 310.3754 294.8141

321.7374 268.0690 253.7769

387.9434 321.3750 305.4838

error = 405.0453 95.2207 309.7254 179.4712 43.5833 214.8021

NE = 0.5063 0.1190 0.3872 0.2243 0.0545 0.2685

fitness =

0.6639

0.8936

0.7209

0.8168

0.9483

0.7883

FT =

1.0e+04 *

1.1264

0.8033

0.4160

0.8889

0.7517

0.9253

pg3 =

461.5055 380.6039 362.9358

341.7043 284.1454 269.3710

185.1219 158.0724 147.0802

374.2818 310.3754 294.8141

321.7374 268.0690 253.7769

387.9434 321.3750 305.4838

pg2 =

320.6865 267.2228 252.9562

321.7374 268.0690 253.7769

348.0096 289.2222 274.2955

341.7043 284.1454 269.3710

374.2818 310.3754 294.8141

370.0783 306.9908 291.5311

error = 40.8655 0.0991 807.2736 95.2207 179.4712 168.6002

NE =

0.0511 0.0001 1.0091 0.1190 0.2243 0.2108

fitness =

0.9514

0.9999

0.4977

0.8936

0.8168

0.8259

FT =

1.0e+04 *

0.7490

0.7088

Economic Load Dispatch Using Artificial Neural Network

2.1 Introduction

Artificial Neural Networks (ANNs) are a type of machine learning model inspired by the structure and function of the human brain. The concept of the ANN was first introduced in the 1940s by Warren McCulloch and Walter Pitts, who proposed a simplified mathematical model of how neurons in the brain work together to process information.

In the following decades, researchers continued to develop and refine the concept of ANN,
but progress was slow due to limited computational power and data availability. However, in the
1980s and 1990s, advancements in computer technology and the availability of large datasets
allowed for significant progress in the field of ANN.

Today, ANNs are widely used in a variety of applications, including image processing, computer vision, natural language processing, and speech recognition. ANNs have also been instrumental in the development of deep learning, a subset of machine learning that uses multi-layered neural networks to extract features from data and make predictions.

2.2 Biological Neuron

The human brain consists of a very large number, more than a billion, of neural cells that process information. Each cell works like a simple processor. Only the massive interaction between all cells and their parallel processing makes the brain's abilities possible.

In summarized form, the biological neuron is a specialized cell found in the nervous system of animals that is responsible for transmitting information through electrical and chemical signals. It consists of a cell body, dendrites, and an axon. The dendrites receive signals from other neurons, while the axon sends signals to other neurons or muscles. The communication between neurons is facilitated by the release of neurotransmitters from the axon terminals, which bind to receptors on the dendrites of the receiving neuron. This process allows for the transmission of information throughout the nervous system, which is essential for various bodily functions and behaviors. The definitions of dendrites, soma, neurotransmitters, and other components are presented below.

Dendrites are branching fibers that extend from the cell body or soma. The soma, or cell body, of a neuron contains the nucleus and other structures and supports chemical processing and the production of neurotransmitters. The axon is a single fiber that carries information away from the soma to the synaptic sites of other neurons (dendrites and somas), muscles, or glands. The axon hillock is the site of summation for incoming information. At any moment, the collective influence of all neurons that conduct impulses to a given neuron determines whether or not an action potential will be initiated at the axon hillock and propagated along the axon.

Figure 1. The biological Neuron

The myelin sheath consists of fat-containing cells that insulate the axon from electrical activity. This insulation increases the rate of transmission of signals. A gap exists between each myelin sheath cell along the axon; since fat inhibits the propagation of electricity, the signals jump from one gap to the next. The nodes of Ranvier are the gaps (about 1 μm) between myelin sheath cells. Since fat serves as a good insulator, the myelin sheaths speed the rate of transmission of an electrical impulse along the axon. A synapse is the point of connection between two neurons, or between a neuron and a muscle or a gland; electrochemical communication between neurons takes place at these junctions. Terminal buttons of a neuron are the small knobs at the end of an axon that release chemicals called neurotransmitters. Information thus flows in a neural cell from the dendrites, through the soma, and along the axon to the synapses.

Referring to the biological neuron, an artificial neuron is a mathematical function conceived as a simple model of a real (biological) neuron. The McCulloch-Pitts neuron is such a simplified model of real neurons, known as a threshold logic unit. A set of input connections brings in activations from other neurons. A processing unit sums the inputs and then applies a non-linear activation function (i.e., a squashing/transfer/threshold function). An output line transmits the result to other neurons.

2.3 Artificial Neural Network

Artificial Neural Network (ANN) is an efficient computing system whose central theme is
borrowed from the analogy of biological neural networks. The neuron is the basic working unit of
the brain, a specialized cell designed to transmit information to other nerve cells, muscle, or gland
cells. Neural networks are designed to work just like the human brain does. For example, in the
case of facial recognition, the brain might start with “is it female or male? Is it black or white? Is
it old or young? Is there a scar?” and so forth.

An ANN comprises a large collection of units that are interconnected in some pattern to allow communication between them. These units, also referred to as nodes or neurons, are simple processors which operate in parallel. ANNs learn (or are trained) through experience with appropriate learning exemplars, just as people do. Neural networks gather their knowledge by detecting the patterns and relationships in data.

Generally, an ANN is a machine learning model inspired by the structure and function of biological neurons in the brain. An ANN consists of interconnected nodes, also known as artificial neurons, that process information and learn from data through a process called training. The nodes are organized into layers, with input nodes receiving data and output nodes producing results. During training, the weights between nodes are adjusted to optimize the model's performance on a specific task. ANNs have been successfully applied in various fields, such as image recognition, natural language processing, and predictive analytics.
2.3.1 Basic Elements of ANN

A neuron consists of three basic components: weights, a threshold (bias), and an activation function. The input layer, hidden layer, and output layer are also taken as elements of an ANN. An artificial neural network (ANN) model based on the biological neural system is shown in Figure 2.

Figure 2. Artificial Neural Network model sample

Input layer: This layer receives input data and passes it to the hidden layer.

Hidden layer: This layer processes the input data and applies weights to each input. It then
passes the result to the output layer.

Weights: These are the strengths of the connections between neurons. They are adjusted during the learning process to improve the accuracy of the output. W1, W2, ..., Wn are the weights applied to the inputs.

Bias: This is an additional input to each neuron in the hidden layer that helps to adjust the
output.

Activation function: This function determines the output of each neuron in the hidden layer
based on the input and weights applied.

Learning rate: This is a parameter that controls how quickly the ANN adjusts the weights
during the learning process.

Output layer: This layer produces the final output based on the input and weights applied in the
hidden layer.
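A minimal MATLAB sketch of a single artificial neuron combining the elements above: a weighted sum of the inputs plus a bias, passed through a sigmoid activation. All values are illustrative.

x = [0.5; 0.2; 0.9];       % inputs
w = [0.4 -0.7 0.1];        % weights W1..W3
bias = 0.05;               % bias input
net_in = w*x + bias;       % net input (weighted sum plus bias)
y = 1/(1 + exp(-net_in));  % sigmoid activation output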

Correlation (analogy) between the biological neuron and the artificial neuron:

Biological neuron      Artificial neuron
Cell                   Neuron/node
Dendrites/synapse      Weights
Soma                   Net input
Axon                   Output

2.3.2 Types of Activation Functions


An activation function is a function used in artificial neural networks that outputs a small value for small inputs and a larger value if its inputs exceed a threshold. In other words, an activation function is like a gate that checks whether an incoming value is greater than a critical number. The purpose of the activation function is to introduce non-linearity into the output of a neuron. This is important because most real-world data are nonlinear, and we want neurons to learn these nonlinear representations. Common activation functions used in ANNs are listed below.

Identity function: f(x) is simply the identity, usually used in simple networks. It collects the input and produces an output proportional to the given input. This is better than a step function because it gives multiple output levels.

Figure 3. Identity function

Sigmoid: takes a real-valued input and squashes it to range between 0 and 1.

Figure 4. sigmoid activation function

F(x) = 1/(1 + e^(-ax))

where x is the input, F(x) is the output, and a is a constant.

tanh: takes a real-valued input and squashes it to the range [-1, 1].

Figure 5. tanh activation function

F(x) = (1 - e^(-ax))/(1 + e^(-ax))

where x is the input and F(x) is the output.

ReLU: ReLU stands for Rectified Linear Unit. It is a piecewise linear function that outputs zero if its input is negative and outputs the input directly otherwise: f(x) = max(0, x).

Figure 6. ReLU activation function
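A minimal MATLAB sketch evaluating the activation functions above over a range of inputs (the constant a is taken as 1 in the sigmoid and tanh forms).

x = -5:0.1:5;
identity = x;                            % identity function
sigmoid = 1./(1 + exp(-x));              % squashes to (0, 1)
tanh_f = (1 - exp(-x))./(1 + exp(-x));   % squashes to (-1, 1)
relu = max(0, x);                        % rectified linear unit
plot(x, [identity; sigmoid; tanh_f; relu]);
legend('identity', 'sigmoid', 'tanh', 'ReLU');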

2.4 Neural Network Architectures

The following are the main neural network architectures.

Feed-Forward Neural Network: It contains multiple neurons (nodes) arranged in layers, with nodes of adjacent layers connected by edges. A feed-forward neural network consists of three types of layers: an input layer, hidden layers, and an output layer.

Figure 7. Feed-forward neural network architecture

An artificial neural network with feed-forward topology is called a feed-forward artificial neural network and has only one condition: information must flow from input to output in only one direction, with no back-loops. There are no limitations on the number of layers, the type of transfer function used in an individual artificial neuron, or the number of connections between individual artificial neurons, and such networks are suited to static problems with no temporal dynamics.

Recurrent Networks: Recurrent networks differ from the feed-forward architecture in that a recurrent network has at least one feedback loop. There can be neurons with self-feedback links, i.e., the output of a neuron is fed back into itself as input. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior.

Hopfield Artificial Neural Network: A Hopfield artificial neural network is a type of recurrent artificial neural network that is used to store one or more stable target vectors. These stable vectors can be viewed as memories that the network recalls when provided with similar vectors that act as a cue to the network memory. Its binary units take only two different values for their states, determined by whether or not the units' input exceeds their threshold. Binary units can take either the values 1 and -1 or the values 1 and 0. Consequently, the binary unit activation a_i is defined as

a_i = 1 if Σ_j W_ij s_j > θ_i, and a_i = 0 (or -1) otherwise

where W_ij is the strength of the connection weight from unit j to unit i, s_j is the state of unit j, and θ_i is the threshold of unit i.

Hence, this Hopfield ANN can be used for solving economic load dispatch problems, as illustrated by the small sketch below.
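A minimal MATLAB sketch of the binary Hopfield state update rule above for a small illustrative network; the weights and thresholds are made-up values.

W = [0 1 -1; 1 0 1; -1 1 0];  % symmetric weights, zero diagonal
theta = [0.5 0 -0.5];         % unit thresholds
s = [1 0 1];                  % initial binary (0/1) states
for i = 1:3                   % asynchronous update, one unit at a time
    s(i) = double(W(i,:)*s' > theta(i));
end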

Elman and Jordan Artificial Neural Networks

(a) Elman ANN (b) Jordan ANN

Figure 8. Elman and Jordan artificial neural networks

The Elman ANN is a simple three-layer artificial neural network that has a back-loop from the hidden layer to the input layer. This type of artificial neural network has memory, allowing it to both detect and generate time-varying patterns. The Elman artificial neural network typically has sigmoid artificial neurons in its hidden layer and linear artificial neurons in its output layer, and it can approximate any function with arbitrary accuracy provided there are enough artificial neurons in the hidden layer. The Jordan network is similar to the Elman network; the only difference is that the context units are fed from the output layer instead of the hidden layer.

In addition to those listed above, long short-term memory networks are also recurrent ANNs. Bidirectional ANNs, self-organizing map ANNs, stochastic ANNs, and physical ANNs are further ANN architectures.

2.5 Learning

There are three major learning paradigms: supervised learning, unsupervised learning, and reinforcement learning. Usually, any of them can be employed with any given type of artificial neural network architecture, and each learning paradigm has many training algorithms.

Supervised learning involves training an ANN with labeled data, where the inputs
and corresponding outputs are known. The ANN learns to map inputs to outputs by adjusting
its internal parameters to minimize the error between predicted and actual outputs.

Unsupervised learning involves training an ANN with unlabeled data, where the
inputs are known but the corresponding outputs are not. The ANN learns to identify patterns
and relationships in the data by adjusting its internal parameters to optimize certain criteria,
such as maximizing the similarity between inputs or minimizing the variance within clusters.

Reinforcement learning involves training an ANN to interact with an environment
and learn from feedback in the form of rewards or punishments. The ANN learns to take
actions that maximize the expected cumulative reward over time by adjusting its internal
parameters to approximate an optimal policy.

2.6 Economic Load Dispatch of a Power System Using ANN

To solve the economic load dispatch problem using an ANN, the objective function used for the GA above is also used here.

Here are the general steps that have to be followed to compute economic load dispatch
problem using ANN:

1. Define the problem:

Identify the objective function of the economic load dispatch problem.

2. Collect data:

Gather information on the three generating units' demand, cost of production, and other pertinent
factors.

3. Define the input and output variables:

Define the input variables and the output variable.

4. Train the ANN:

Use MATLAB to train the ANN using the collected data. The ANN will learn to predict the total
cost of generating power based on the input variables.

5. Test the ANN:

Test the trained ANN using new data to see how accurate it is at predicting the total cost of
generating power.

6. Optimize the solution:

Use the trained ANN to find the optimal power output for each generating unit that will
minimize the total cost of generating power while meeting the demand.

7. Validate the solution:

Validate the solution by comparing it with other methods or by testing it with real-world data.


2.6.1 Hopfield Artificial Neural Network for Economic Load Dispatch

The augmented Lagrange Hopfield network (ALHN) is used for solving the ED problem including the power loss expressed by Kron's formula. ALHN is a continuous Hopfield network whose energy function is based on the augmented Lagrange function. In ALHN, the energy function is augmented by Hopfield terms from a Hopfield network and penalty factors from the augmented Lagrange function to damp out the oscillations of the Hopfield network during the convergence process; thus ALHN can overcome the drawbacks of the conventional Hopfield network while retaining its simplicity, getting closer to the optimal solution, and converging faster.

The augmented Lagrangian function of the problem is formulated as follows:

L = Σ_i (a_i + b_i P_i + c_i P_i^2) + λ (P_D + P_L − Σ_i P_i) + (β/2) (P_D + P_L − Σ_i P_i)^2

where

L is the augmented Lagrange function,

a_i, b_i, c_i are the cost coefficients of generating unit i,

P_D is the total power load demand,

P_L is the total power loss,

λ is the Lagrange multiplier, and

β is the penalty factor.

To represent power output in ALHN, NG continuous neurons and one multiplier neuron are
required. The energy function of the problem is formulated based on the augmented LaGrangian
function as follows:

E = Σ_i (a_i + b_i V_i + c_i V_i^2) + V_λ (P_D + P_L − Σ_i V_i) + (β/2) (P_D + P_L − Σ_i V_i)^2 + Σ_i ∫0^{V_i} g^{-1}(V) dV + ∫0^{V_λ} g_λ^{-1}(V) dV

where E is the energy function of ALHN, V_i is the output of continuous neuron i representing the output power P_i, and V_λ is the output of the multiplier neuron representing the Lagrange multiplier.

The sum of integral terms comprises the Hopfield terms, whose global effect is a displacement of the solutions towards the interior of the state space.

The dynamics of the neuron inputs are defined as the derivatives of the energy function with respect to the neuron outputs, which are derived as follows:

dU_i/dt = −∂E/∂V_i = −[(b_i + 2 c_i V_i) + (V_λ + β (P_D + P_L − Σ_j V_j)) (∂P_L/∂V_i − 1) + U_i]

dU_λ/dt = +∂E/∂V_λ = P_D + P_L − Σ_i V_i

where ∂P_L/∂V_i = 2 Σ_j B_ij V_j + B_0i, U_i is the input of continuous neuron i, and U_λ is the input of the multiplier neuron.

The inputs of the neurons at iteration n are updated from iteration n−1 as follows:

U_i^(n) = U_i^(n−1) + α_i (dU_i/dt) = U_i^(n−1) − α_i (dE/dV_i)

U_λ^(n) = U_λ^(n−1) + α_λ (dU_λ/dt) = U_λ^(n−1) + α_λ (dE/dV_λ)

where α_i and α_λ are the updating step sizes for the neurons.

The outputs of the continuous neurons representing the unit power outputs are calculated via a sigmoid function:

V_i = g(U_i) = P_i,min + ((P_i,max − P_i,min)/2) (1 + tanh(σ U_i))

where σ is the slope of the sigmoid function, determining its shape.

The output of the multiplier neuron is defined by a transfer function as follows:

V_λ = g_λ(U_λ) = U_λ

For the selection of the network parameters, the slope of the sigmoid function σ and the penalty factor β are fixed at 100 and 0.01, respectively; the updating step sizes α_i and α_λ, which are smaller than one, are tuned depending on the problem.

The algorithm requires initial conditions for all neurons. For the continuous neurons representing the unit power outputs, the outputs are initialized by mean distribution; that is, the initial output of a generating unit is given in proportion to its maximum power output:

V_i^(0) = P_D × P_i,max / Σ_j P_j,max

where V_i^(0) is the initial value of the output of continuous neuron i.

The inputs of the continuous neurons are calculated via the inverse of the sigmoid function:

U_i^(0) = (1/(2σ)) ln( (V_i^(0) − P_i,min) / (P_i,max − V_i^(0)) )

where U_i^(0) is the initial value of the input of continuous neuron i.

For the multiplier neuron associated with the power balance constraint, the output is initialized by the mean value obtained by solving dE/dV_i = 0 in the above formula while neglecting the penalty factor and the inputs of the neurons:

V_λ^(0) = (1/NG) Σ_i (b_i + 2 c_i V_i^(0))

where V_λ^(0) is the initial value of the output of the multiplier neuron.

The initial value of the input of the multiplier neuron is set equal to its output value. In the ALHN model, the errors are calculated from the constraint errors and the neural iterative errors. The power balance constraint error at iteration n is determined as:

ΔP^(n) = P_D + P_L^(n) − Σ_i V_i^(n)

where ΔP^(n) is the power balance constraint error at iteration n. The iterative errors of the neurons at iteration n are defined as:

ΔV_i^(n) = V_i^(n) − V_i^(n−1)

ΔV_λ^(n) = V_λ^(n) − V_λ^(n−1)

where ΔV_i^(n) and ΔV_λ^(n) are the iterative errors of the continuous and multiplier neurons at iteration n, respectively. The maximum error of the model at iteration n is determined by the combination of the power balance and iterative errors:

Err_max^(n) = max( |ΔP^(n)|, |ΔV_i^(n)|, |ΔV_λ^(n)| )

where Err_max^(n) is the maximum error of ALHN at iteration n. The algorithm terminates when either the maximum error is lower than a pre-specified tolerance or the maximum number of iterations is reached.
2.6.2 The Algorithm of ALHN for Solving the ED Problem

Here are the steps.

Step 1: Read parameters for the problem including cost coefficients, maximum power outputs,
load demand, and loss coefficients.

Step 2: Select parameters for ALHN including slope of sigmoid function, penalty factor, and
updating step sizes.

Step 3: Set the maximum number of iterations and the threshold for maximum error of ALHN.

Step 4: Initiate outputs of all neurons and calculate their corresponding inputs

Step 5: Set the iteration n = 1.

Step 6: Calculate dynamics of neurons

Step 7: Update inputs of neurons

Step 8: Calculate outputs of neurons

Step 9: Calculate the maximum error.


Step 10: If n < N_max and Err_max^(n) > ε, set n = n + 1 and return to Step 6; otherwise continue.

Step 11: Calculate the total cost using the objective function.

Here N_max is the maximum number of iterations and ε is the threshold for the maximum error of ALHN. Finally, if the error is small, take the result as the final solution. A minimal MATLAB sketch of this algorithm for the three-unit example is given below.
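The following is a minimal sketch of the ALHN iteration for the three-unit example, neglecting transmission losses (P_L = 0); the step sizes, tolerance, and iteration limit are illustrative assumptions and may need tuning.

c2 = [0.001562 0.00194 0.002];  % quadratic cost coefficients (as in the code below)
c1 = [7.92 7.85 7.90];          % linear cost coefficients
Pmin = [100 100 100]; Pmax = [500 500 500];
Pd = 800;                       % load demand (MW)
sigma = 100; beta = 0.01;       % slope and penalty factor from the text
aI = 1e-3; aL = 1e-3;           % assumed updating step sizes
tol = 1e-4; Nmax = 50000;       % assumed tolerance and iteration limit
V = Pd*Pmax/sum(Pmax);          % initial outputs by mean distribution
U = (1/(2*sigma))*log((V - Pmin)./(Pmax - V));  % inverse sigmoid inputs
Vl = mean(c1 + 2*c2.*V); Ul = Vl;               % initial multiplier neuron
for n = 1:Nmax
    dP = Pd - sum(V);                            % power balance error
    dEdV = (c1 + 2*c2.*V) - (Vl + beta*dP) + U;  % dE/dV_i with dPL/dV_i = 0
    U = U - aI*dEdV;                             % update continuous neurons
    Ul = Ul + aL*dP; Vl = Ul;                    % update multiplier neuron
    Vnew = Pmin + (Pmax - Pmin)/2.*(1 + tanh(sigma*U));
    err = max([abs(dP) abs(Vnew - V)]);
    V = Vnew;
    if err < tol, break; end
end
P = V % dispatched unit outputs (MW)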

MATLAB Code-ELD using ANN

clc;

clear all;

% Sample generator output data used as training inputs
% (the sampled generator outputs here are taken randomly)

Data=[105.4 320 179; 111 375.4 298.9;100 150 200;250 150 129;321 150 200;100 150 200;

119 211 296.7;100 150 200;100 150 200;100 150 200;100 150 200;100 150 200;

100 150 200;100 150 200;100 150 200;100 150 200;100 150 200;100 150 200;

100 150 200; 100 150 200; 240 150 127;100 150 200;195 159 495;201 158 240;

260 260 260;100 150 200;100 150 200;100 150 200;100 150 200;]

% Define the problem data

Pd = 300; % Total power demand

a = [0.001562 0.00194 0.002]; % Quadratic cost coefficients

b = [7.92 7.85 7.90]; % Linear cost coefficients

c = [561 310 78]; % Constant cost coefficients /bias/

Pmin = [100 100 100]; % Minimum power output limits

Pmax = [500 500 500]; % Maximum power output limits

% Define the neural network
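% (feedforwardnet and train require MATLAB's Deep Learning Toolbox)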

net = feedforwardnet([5 3]); % Two hidden layers with 5 and 3 neurons

net.trainFcn = 'trainlm'; % Levenberg-Marquardt backpropagation

net.performFcn = 'mse'; % Mean squared error performance function

net.divideFcn = 'dividerand'; % Randomly divide data into training, validation, and testing sets

% Generate training data

X = Data'; % transpose so each column is one sample of the three unit outputs

Y = a*(X.^2) + b*X + sum(c); % total cost of each sample (1 x N row vector)

% Train the neural network

[net,tr] = train(net,X,Y)

% Test the neural network

Xtest = [100 120 130]'; % Test power outputs

Ftest = a*(Xtest.^2) + b*Xtest + sum(c); % True total cost

Fpred = net(Xtest); % Predicted total cost

% Display the results

disp(['True total cost: ' num2str(Ftest)]);

disp(['Predicted total cost: ' num2str(Fpred)]);

disp(['Percentage error: ' num2str(100*abs(Ftest-Fpred)/Ftest) '%']);

References

[1] A. Kaur, H. P. Singh and A. Bhardwaj, "Analysis of Economic Load Dispatch Using Genetic Algorithm," International Journal of Application or Innovation in Engineering & Management (IJAIEM), vol. 3, no. 3, March 2014.

[2] S. Rajasekaran and G. A. V. Pai, Neural Networks, Fuzzy Logic and Genetic Algorithms, Prentice Hall of India, New Delhi, 2004.
