
CHAPTER – 3

OPTIMAL POWER FLOW PROBLEM & SOLUTION METHODOLOGIES

3.0 INTRODUCTION

This chapter covers existing methodologies for the solution of the Optimal Power Flow (OPF) problem. They include the formulation of the OPF problem, the objective function, constraints, applications and in-depth coverage of various popular OPF methods.

The OPF methods are broadly grouped as conventional and intelligent. The conventional methodologies include well-known techniques such as the Gradient method, Newton method, Quadratic Programming method, Linear Programming method and Interior Point method. Intelligent methodologies include recently developed and popular methods such as the Genetic Algorithm and Particle Swarm Optimization. Solution methodologies for the optimal power flow problem are covered extensively in this chapter.

3.1 OPTIMAL POWER FLOW PROBLEM

In an OPF, the values of some or all of the control variables need to be found so as to optimise (minimise or maximise) a predefined objective. It is also important that a proper problem definition with clearly stated objectives be given at the outset. The quality of the solution depends on the accuracy of the model studied. Objectives must be modeled realistically, with practically attainable solutions.

The objective function takes various forms such as fuel cost, transmission losses and reactive source allocation. Usually the objective function of interest is the minimisation of the total production cost of the scheduled generating units. This is the most widely used objective, as it reflects current economic dispatch practice and, importantly, cost-related aspects are always ranked high among operational requirements in power systems.

OPF aims to optimise a certain objective, subject to the network

power flow equations and system and equipment operating limits. The

optimal condition is attained by adjusting the available controls to

minimise an objective function subject to specified operating and

security requirements.

Some well-known objectives can be identified as below:

Active power objectives
1. Economic dispatch (minimum cost, losses, MW generation or transmission losses)
2. Environmental dispatch
3. Maximum power transfer

Reactive power objectives
1. MW and MVAr loss minimization

General goals
1. Minimum deviation from a target schedule
2. Minimum control shifts to alleviate violations
3. Least absolute shift approximation of control shift

Among the above, the following objectives are most commonly used:

(a) Fuel or active power cost optimisation
(b) Active power loss minimisation
(c) VAr planning to minimise the cost of reactive power support

The mathematical description of the OPF problem is presented below.

3.1.1 OPF Objective Function for Fuel Cost Minimization

The OPF problem can be formulated as an optimization problem [2, 5, 6, 18] as follows.

The total generation cost function is expressed as:

$F(P_G) = \sum_{i=1}^{NG} \left( \alpha_i + \beta_i P_{Gi} + \gamma_i P_{Gi}^2 \right)$   (3.1)

The objective function is expressed as:

$\min F(P_G) = f(x, u)$   (3.2)

subject to satisfaction of the nonlinear equality constraints:

$g(x, u) = 0$   (3.3)

and the nonlinear inequality constraints:

$h(x, u) \le 0$   (3.4)

$u_{\min} \le u \le u_{\max}$   (3.5)

$x_{\min} \le x \le x_{\max}$   (3.6)

$F(P_G)$ is the total cost function, $f(x, u)$ is the scalar objective, $g(x, u)$ represents the nonlinear equality constraints (power flow equations), and $h(x, u)$ is the nonlinear inequality constraint of the vector arguments x, u.

The vector x contains dependent variables consisting of:
- Bus voltage magnitudes and phase angles
- MVAr output of generators designated for bus voltage control
- Fixed parameters such as the reference bus angle
- Non-controlled generator MW and MVAr outputs
- Non-controlled MW and MVAr loads
- Fixed bus voltages, line parameters

The vector u consists of control variables including:
- Real and reactive power generation
- Phase-shifter angles
- Net interchange
- Load MW and MVAr (load shedding)
- DC transmission line flows
- Control voltage settings
- LTC transformer tap settings

The equality and inequality constraints are:
- Limits on all control variables
- Power flow equations
- Generation / load balance
- Branch flow limits (MW, MVAr, MVA)
- Bus voltage limits
- Active / reactive reserve limits
- Generator MVAr limits
- Corridor (transmission interface) limits
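Purely as an illustration of how the quadratic cost model of Eq. (3.1) can be evaluated for a candidate dispatch, a minimal Python sketch is given below; the coefficient values and function names are hypothetical and are not taken from this chapter.

```python
# Illustrative only: quadratic fuel-cost objective of Eq. (3.1).
# alpha_i, beta_i, gamma_i are assumed (placeholder) cost coefficients.
alpha = [100.0, 120.0, 90.0]    # fixed cost terms (alpha_i)
beta  = [2.00, 1.80, 2.20]      # linear cost coefficients (beta_i)
gamma = [0.010, 0.012, 0.008]   # quadratic cost coefficients (gamma_i)

def total_cost(p_gen):
    """F(P_G) = sum_i (alpha_i + beta_i*P_Gi + gamma_i*P_Gi^2), Eq. (3.1)."""
    return sum(a + b * p + c * p * p
               for a, b, c, p in zip(alpha, beta, gamma, p_gen))

print(total_cost([150.0, 200.0, 120.0]))  # cost of one candidate dispatch in MW
```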

3.1.2 Constraints for Objective Function of Fuel Cost Minimization

Consider Fig. 3.1, representing the standard IEEE 14-bus single line diagram. Five generators are connected to five buses. For a given system load, the total system generation cost should be minimum.

Fig: 3.1 IEEE 14 – Bus Test System

The network equality constraints are represented by the load flow equations [18]:

$P_i(V, \delta) - P_{Gi} + P_{Di} = 0$   (3.7)

$Q_i(V, \delta) - Q_{Gi} + Q_{Di} = 0$   (3.8)

where:

$P_i(V, \delta) = |V_i| \sum_{j=1}^{N} |V_j||Y_{ij}| \cos(\delta_i - \delta_j - \theta_{ij})$   (3.9)

$Q_i(V, \delta) = |V_i| \sum_{j=1}^{N} |V_j||Y_{ij}| \sin(\delta_i - \delta_j - \theta_{ij})$   (3.10)

$Y_{ij} = |Y_{ij}| \angle \theta_{ij}$   (3.11)

and the load balance equation:

$\sum_{i=1}^{NG} P_{Gi} - P_D - P_L = 0$   (3.12)

The inequality constraints, representing the limits on all variables and the line flow constraints, are:

$V_i^{\min} \le V_i \le V_i^{\max}, \quad i = 1, \dots, N$   (3.13)

$P_{Gi}^{\min} \le P_{Gi} \le P_{Gi}^{\max}, \quad i = 1, \dots, NG$   (3.14)

$Q_{Gi}^{\min} \le Q_{Gi} \le Q_{Gi}^{\max}, \quad i = 1, \dots, NG_q$   (3.15)

$-k_{vi} I_l^{\max} \le V_i - V_j \le k_{vj} I_l^{\max}, \quad l = 1, \dots, N_l$   (3.16)

where i, j are the nodes of line l,

$-k_i I_l^{\max} \le \delta_i - \delta_j \le k_j I_l^{\max}, \quad l = 1, \dots, N_l$   (3.17)

where i, j are the nodes of line l,

$S_{li} \le S_{li}^{\max}, \quad i = 1, \dots, N_l$   (3.18)

$T_k^{\min} \le T_k \le T_k^{\max}, \quad k = 1, \dots, N_T$   (3.19)

$\phi_i^{\min} \le \phi_i \le \phi_i^{\max}$   (3.20)
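The equality constraints (3.7)–(3.10) can be evaluated numerically from the bus admittance matrix. The following is a minimal sketch, assuming NumPy arrays for Ybus, the voltage magnitudes, angles and scheduled injections; the function names are illustrative only and not part of the chapter's formulation.

```python
import numpy as np

def injections(Ybus, V, delta):
    """P_i, Q_i per Eqs. (3.9)-(3.10), via S = Vc * conj(Ybus @ Vc)."""
    Vc = V * np.exp(1j * delta)          # complex bus voltages
    S = Vc * np.conj(Ybus @ Vc)          # complex power injections
    return S.real, S.imag

def mismatches(Ybus, V, delta, Pg, Pd, Qg, Qd):
    """Equality constraints (3.7)-(3.8): injection minus scheduled power."""
    P, Q = injections(Ybus, V, delta)
    return P - (Pg - Pd), Q - (Qg - Qd)  # both should be ~0 at a feasible point
```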

3.1.3 OPF Objective Function for Power Loss Minimization

The objective function to be minimized is given by the sum of the line losses:

$P_L = \sum_{k=1}^{N_l} P_{l_k}$   (3.21)

The individual line loss $P_{l_k}$ can be expressed in terms of voltages and phase angles as:

$P_{l_k} = g_k \left( V_i^2 + V_j^2 - 2 V_i V_j \cos(\delta_i - \delta_j) \right)$   (3.22)

The objective function can now be written as:

$\min P_L = \sum_{k=1}^{N_l} g_k \left( V_i^2 + V_j^2 - 2 V_i V_j \cos(\delta_i - \delta_j) \right)$   (3.23)

This is a quadratic form and is suitable for implementation using the quadratic interior point method.

The constraints are equivalent to those specified in Section 3.1.1 for cost minimization, with voltage and phase angle expressed in rectangular form.
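As a small illustration of Eqs. (3.21)–(3.23), the sketch below sums the individual line losses over a list of lines; the data layout (tuples of end buses and series conductance) is an assumption made here for clarity, not a structure defined in the thesis.

```python
import math

def total_loss(lines, V, delta):
    """Total active power loss, Eq. (3.23); lines = [(i, j, g_k), ...]."""
    loss = 0.0
    for i, j, g_k in lines:
        # P_lk = g_k (V_i^2 + V_j^2 - 2 V_i V_j cos(delta_i - delta_j)), Eq. (3.22)
        loss += g_k * (V[i] ** 2 + V[j] ** 2
                       - 2.0 * V[i] * V[j] * math.cos(delta[i] - delta[j]))
    return loss
```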

3.1.4 Constraints for Objective Function of Power Loss Minimization

The controllable system quantities are generator MW, controlled voltage magnitude, reactive power injection from reactive power sources and transformer tapping. The objective used herein is to minimize the power transmission loss function by optimizing the control variables within their limits, such that no violation of other quantities (e.g. MVA flow of transmission lines, load bus voltage magnitude, generator MVAr) occurs in normal system operating conditions. These system constraints are formed as equality and inequality constraints as shown below.

The Equality constraints are given by Eqns. (3.7) – (3.12)

The Inequality constraints are given by Eqns. (3.13) – (3.20)

3.1.5 Objectives of Optimal Power Flow

Present commercial OPF programs can solve very large and complex power system optimization problems in relatively little time. Many different solution methods have been suggested to solve OPF problems.

In a conventional power flow, the values of the control variables are predetermined. In an OPF, the values of some or all of the control variables need to be found so as to optimize (minimize or maximize) a predefined objective. The OPF calculation has many applications in power systems: real-time control, operational planning, and planning [19–24]. OPF is used in many modern energy management systems (EMSs).

OPF continues to be significant due to growth in power system size and complex interconnections [25–29]. For example, OPF should support deregulation transactions or furnish information on what reinforcement is required. OPF studies can decide the tradeoffs between reinforcements and control options, clarifying when a control option enhances utilization of an existing asset (e.g., generation or transmission), or when a control option is an inexpensive alternative to installing new facilities. Issues of priority of transmission access, and VAr pricing or auxiliary-service costing to support pricing and purchases, can also be addressed by OPF [2, 3, 28].

The main goal of a generic OPF is to minimize the cost of meeting the load demand for a power system while maintaining the security of the system. From the viewpoint of an OPF, maintaining system security requires keeping each device in the power system within its desired operating range at steady state. This includes maximum and minimum outputs for generators, maximum MVA flows on transmission lines and transformers, as well as keeping system bus voltages within specified ranges.

The secondary goal of an OPF is the determination of system marginal cost data. This marginal cost data can aid in the pricing of MW transactions as well as the pricing of auxiliary services such as voltage support through MVAr support. The OPF is capable of performing all of the control functions necessary for the power system. While the economic dispatch of a power system does control generator MW output, the OPF controls transformer tap ratios and phase shift angles as well. The OPF is also able to monitor system security issues including line overloads and low or high voltage problems. If any security problems occur, the OPF will modify its controls to fix them, i.e., remove a transmission line overload.

The quality of the solution depends on the accuracy of the model used. It is essential to define the problem properly, with clearly stated objectives given at the outset. No two power system utilities have the same types of devices and operating requirements. The model form presented here allows an OPF developer to easily customize its solution to the different cases under study [32–38].

OPF, to a large extent, depends on static optimization methods for minimizing a scalar objective function (e.g., cost). It was first introduced in the 1960s by Tinney and Dommel [29], and employs a first-order gradient algorithm for minimizing the objective function subject to equality and inequality constraints. Early solution methods were not popular, as they are more computationally intensive than a traditional power flow. The need for the next generation of OPF has grown, as power system operation and planning need to know transfer limits, the cost of power, the incentive for adding units, and the cost of building transmission systems to serve a particular load entity.



3.1.6 Optimal Power Flow Challenges

The demand for an OPF tool has been increasing, to assess the state of the system and recommend control actions both for offline and online studies, ever since the first OPF paper was presented in the 1960s. The thrust for OPF to solve problems of today's deregulated industry, as well as problems left unsolved in the vertically integrated industry, has posed further challenges, requiring evaluation of the capabilities of existing OPF in terms of its potential and abilities [30].

Many challenges before OPF remain to be answered. They can be listed as given below.

1. Because of the consideration of a large number and variety of constraints, and due to the nonlinearity of the mathematical models, OPF poses a big challenge for mathematicians as well as for engineers in obtaining optimum solutions.

2. The deregulated electricity market seeks answers from OPF to address a variety of different types of market participants, data model requirements, real-time processing, and selection of appropriate costing for each unbundled service evaluation.

3. Coping with response-time requirements, modeling of externalities (loop flow, environmental and simultaneous transfers), and practicality and sensitivity for online use.

4. How well the future OPF can provide local or global control measures to counter the impact of critical contingencies that threaten system voltage and angle stability.


5. Future OPF has to address the gamut of operation and planning environments in providing new generation facilities, unbundled transmission services and other resource allocations.

Finally, it has to be simple to use, portable and fast enough.

After this brief overview of the applications of Optimal Power Flow as mentioned above, a detailed explanation of the most common applications is given below.

3.2 OPF SOLUTION METHODOLOGIES

A first comprehensive survey regarding optimal power dispatch was given by H. H. Happ [31], and subsequently an IEEE working group [32] presented a bibliographic survey of major economic-security functions in 1981. Thereafter, in 1985, J. Carpentier presented a survey [33] and classified the OPF algorithms based on their solution methodology. In 1990, B. H. Chowdhury et al. [34] did a survey on economic dispatch methods. In 1999, J. A. Momoh et al. [3] presented a review of some selected OPF techniques.

The solution methodologies can be broadly grouped into two categories, namely:

1. Conventional (classical) methods
2. Intelligent methods.

The further sub-classification of each methodology is given below as per the tree diagram.
OPF Solution Methodologies

Conventional Methods:
- Gradient methods (Generalised Reduced Gradient, Reduced Gradient, Conjugate Gradient, Hessian-based)
- Newton-based
- Linear Programming
- Quadratic Programming
- Interior Point

Intelligent Methods:
- Artificial Neural Networks
- Fuzzy Logic
- Evolutionary Programming
- Ant Colony
- Particle Swarm Optimisation

Fig: 3.2 Tree diagram indicating OPF Methodologies

3.2.1 Conventional Methods

Traditionally, conventional methods are used to effectively solve the OPF problem. The application of these methods has been an area of active research in the recent past. The conventional methods are based on mathematical programming approaches and are used to solve OPF problems of different sizes. To meet the requirements of different objective functions, types of application and nature of constraints, the popular conventional methods are further subdivided into the following [2, 3]:

(a) Gradient Method [2, 5, 6, 29]
(b) Newton Method [35]
(c) Linear Programming Method [2, 5, 6, 36]
(d) Quadratic Programming Method [5]
(e) Interior Point Method [2, 5, 6, 37]

Even though excellent advancements have been made in classical methods, they suffer from the following disadvantages. In most cases, mathematical formulations have to be simplified to get solutions, because of the extremely limited capability to solve real-world large-scale power system problems. They are weak in handling qualitative constraints. They have poor convergence, may get stuck at a local optimum, can find only a single optimized solution in a single simulation run, become too slow if the number of variables is large, and are computationally expensive for the solution of a large system.

3.2.2 Intelligent Methods

To overcome the limitations and deficiencies of the analytical methods, intelligent methods based on Artificial Intelligence (AI) techniques have been developed in the recent past. These methods can be classified into the following:

a) Artificial Neural Networks (ANN) [38]
b) Genetic Algorithms (GA) [5, 6, 8, 14]
c) Particle Swarm Optimization (PSO) [5, 6, 11, 15, 16]
d) Ant Colony Algorithm [39]

The major advantage of the intelligent methods is that they are relatively versatile in handling various qualitative constraints. These methods can find multiple optimal solutions in a single simulation run, so they are quite suitable for solving multi-objective optimization problems. In most cases, they can find the global optimum solution. The main advantages of intelligent methods are that they possess learning ability, are fast, and are appropriate for nonlinear modeling, whereas large dimensionality and the choice of training methodology are some of their disadvantages.



Detailed description on important aspects like Problem

formulation, Solution algorithm, Merits & Demerits and Researchers’

contribution on each of the methodology as referred above is

presented in the coming sections.

The contribution by Researchers in each of the methodology has

been covered with a lucid presentation in Tabular form. This helps the

reader to quickly get to know the significant contributions and salient

features of the contribution made by Researchers as per the Ref. No.

mentioned in the list of References.

3.3 CONVENTIONAL METHODOLOGIES

The list of OPF methodologies is presented in the tree diagram of Fig. 3.2. It starts with the Gradient method.

3.3.1 Gradient Method

The Generalised Reduced Gradient method is applied to the OPF problem [29], with the main motivation being the existence of the concept of state and control variables, with the load flow equations providing a nodal basis for the elimination of the state variables. With the availability of good load flow packages, the sensitivity information needed is provided. This in turn helps in obtaining a reduced problem in the space of the control variables, with the load flow equations and the associated state variables eliminated.



3.3.1.1 OPF Problem Formulation

The objective function considered is the total cost of generation. The objective function to be minimized is

$F(P_G) = \sum_{\text{all gen}} F_i(P_{Gi})$   (3.24)

where the sum extends over all generation on the power system, including the generator at the reference bus.

The unknown or state vector x is defined as

$x = \begin{bmatrix} \delta_i, |V_i| & \text{on each PQ bus} \\ \delta_i & \text{on each PV bus} \end{bmatrix}$   (3.25)

Another vector of independent variables, y, is defined as [2]:

$y = \begin{bmatrix} \delta_k, |V_k| & \text{on the slack / reference bus} \\ P_k^{net}, Q_k^{net} & \text{on each PQ bus} \\ P_k^{net}, |V_k|^{sch} & \text{on each PV bus} \end{bmatrix}$   (3.26)

The vector y represents all the known parameters that must be specified. Some of these parameters can be adjusted (for example, the generator output $P_k^{net}$ and the generator bus voltage), while some of the parameters are fixed, such as P and Q at each load bus, as far as the OPF calculation is concerned. This can be understood by dividing the vector y into two parts, u and p:

$y = \begin{bmatrix} u \\ p \end{bmatrix}$   (3.27)

where u represents the vector of control or adjustable variables and p represents the fixed or constant variables.

With this, we can define a set of m equations that govern the power flow [2]:

$g(x, y) = \begin{cases} P_i(|V|, \delta) - P_i^{net},\ \ Q_i(|V|, \delta) - Q_i^{net} & \text{for each PQ (load) bus} \\ P_k(|V|, \delta) - P_k^{net} & \text{for each PV (generator) bus, not including the reference bus} \end{cases}$   (3.28)

These equations are the bus equations usually referred to in the Newton power flow. It may be noted that the reference bus power generation is not an independent variable; the reference bus generation always changes to balance the power flow and cannot be specified at the beginning of the calculation.

The cost function / objective function can be expressed as a function of the control variables and state variables. For this, the cost function is divided as follows:

$F(P_G) = \sum_{\text{gen}} F_i(P_{Gi}) + F_{ref}(P_{G\,ref})$   (3.29)

where $F_{ref}$ is the cost function of the reference bus, and the first summation does not include the reference bus. The $P_{Gi}$ are all independent, controlled variables, whereas $P_{G\,ref}$ is a function of the network voltages and angles, i.e.

$P_{G\,ref} = P_{ref}(|V|, \delta)$   (3.30)

The cost function then becomes

$\sum_{\text{gen}} F_i(P_{Gi}) + F_{ref}(P_{ref}(|V|, \delta)) = f(x, u)$   (3.31)

To solve the optimization problem, we define the Lagrangian function as

$\mathcal{L}(x, u, p) = f(x, u) + \lambda^T g(x, u, p)$   (3.32)

This can be further written as

$\mathcal{L}(x, u, p) = \sum_{\text{gen}} F_i(P_{Gi}) + F_{ref}[P_{ref}(|V|, \delta)] + [\lambda_1, \lambda_2, \dots, \lambda_N] \begin{bmatrix} P_i(V, \delta) - P_i^{net} \\ Q_i(V, \delta) - Q_i^{net} \end{bmatrix}$   (3.33)

Thus we have a Lagrange function with a single objective function and N Lagrange multipliers, one for each of the N power flow equations.

3.3.1.2 Solution Algorithm

To minimize the cost function subject to the constraints, the gradient of the Lagrange function is set to zero [2]:

$\nabla \mathcal{L} = 0$   (3.34)

To do this, the gradient vector is separated into three parts corresponding to the variables x, u and $\lambda$:

$\nabla \mathcal{L}_x = \frac{\partial \mathcal{L}}{\partial x} = \frac{\partial f}{\partial x} + \left[ \frac{\partial g}{\partial x} \right]^T \lambda = 0$   (3.35)

$\nabla \mathcal{L}_u = \frac{\partial \mathcal{L}}{\partial u} = \frac{\partial f}{\partial u} + \left[ \frac{\partial g}{\partial u} \right]^T \lambda = 0$   (3.36)

$\nabla \mathcal{L}_\lambda = \frac{\partial \mathcal{L}}{\partial \lambda} = g(x, u, p) = 0$   (3.37)

Eq. (3.35) contains a vector of derivatives of the objective function with respect to the state variables x. Since the objective function itself is not a function of the state variables except for the reference bus, this becomes:

$\frac{\partial f}{\partial x} = \begin{bmatrix} \dfrac{d F_{ref}(P_{ref})}{d P_{ref}} \dfrac{\partial P_{ref}}{\partial \delta_1} \\[2mm] \dfrac{d F_{ref}(P_{ref})}{d P_{ref}} \dfrac{\partial P_{ref}}{\partial |V_1|} \\[2mm] \vdots \end{bmatrix}$   (3.38)

The term $\dfrac{\partial g}{\partial x}$ in Eq. (3.35) is actually the Jacobian matrix of the Newton power flow, which is already known. That is:

$\frac{\partial g}{\partial x} = \begin{bmatrix} \dfrac{\partial P_1}{\partial \delta_1} & \dfrac{\partial P_1}{\partial |V_1|} & \dfrac{\partial P_1}{\partial \delta_2} & \dfrac{\partial P_1}{\partial |V_2|} & \cdots \\[2mm] \dfrac{\partial Q_1}{\partial \delta_1} & \dfrac{\partial Q_1}{\partial |V_1|} & \dfrac{\partial Q_1}{\partial \delta_2} & \dfrac{\partial Q_1}{\partial |V_2|} & \cdots \\[2mm] \dfrac{\partial P_2}{\partial \delta_1} & \dfrac{\partial P_2}{\partial |V_1|} & \cdots & & \\[2mm] \dfrac{\partial Q_2}{\partial \delta_1} & \dfrac{\partial Q_2}{\partial |V_1|} & \cdots & & \\[2mm] \vdots & & & & \end{bmatrix}$   (3.39)

This matrix has to be transposed for use in Eq. (3.35). Eq. (3.36) is the gradient of the Lagrange function with respect to the control variables. Here the vector $\dfrac{\partial f}{\partial u}$ is the vector of derivatives of the objective function with respect to the control variables:

$\frac{\partial f}{\partial u} = \begin{bmatrix} \dfrac{\partial}{\partial P_1} F_1(P_1) \\[2mm] \dfrac{\partial}{\partial P_2} F_2(P_2) \\[2mm] \vdots \end{bmatrix}$   (3.40)

The other term in Eq. (3.36), $\dfrac{\partial g}{\partial u}$, actually consists of a matrix of all zeroes with some $-1$ terms on the diagonals, corresponding to the equations in $g(x, u, p)$ in which a control variable is present. Finally, Eq. (3.37) consists of the power flow equations themselves.

Algorithm for Gradient Method

The solution steps of the gradient method of OPF are as follows.

Step 1: Given a set of fixed parameters p, assume a starting set of control variables u.

Step 2: Solve a power flow. This guarantees that Eq. (3.37) is satisfied.

Step 3: Solve Eq. (3.35) for $\lambda$:

$\lambda = -\left[ \left( \frac{\partial g}{\partial x} \right)^T \right]^{-1} \frac{\partial f}{\partial x}$   (3.41)

Step 4: Substitute $\lambda$ from Eq. (3.41) into Eq. (3.36) and compute the gradient:

$\nabla \mathcal{L} = \frac{\partial \mathcal{L}}{\partial u} = \frac{\partial f}{\partial u} + \left[ \frac{\partial g}{\partial u} \right]^T \lambda$   (3.42)

Step 5: If $\nabla \mathcal{L}$ equals zero within the prescribed tolerance, the minimum has been reached; otherwise:

Step 6: Find a new set of control variables:

$u^{new} = u^{old} + \Delta u$   (3.43)

where $\Delta u = -\alpha \nabla \mathcal{L}$.

Here $\Delta u$ is a step in the negative direction of the gradient. The step size is adjusted by the positive scalar $\alpha$.

In this algorithm, the choice of $\alpha$ is very critical. Too small a value of $\alpha$ guarantees convergence, but slows down the process; too high a value of $\alpha$ causes oscillations around the minimum. Several methods are available for the optimum choice of step size.
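A minimal sketch of one possible coding of Steps 1–6 is shown below, assuming user-supplied routines for the load flow solution and the required sensitivities; solve_power_flow, df_dx, df_du, dg_dx and dg_du are placeholders introduced here for illustration, not routines defined in this chapter.

```python
import numpy as np

def reduced_gradient_opf(u, p, solve_power_flow, df_dx, df_du, dg_dx, dg_du,
                         alpha=0.05, tol=1e-4, max_iter=200):
    """Sketch of the reduced-gradient loop of Section 3.3.1.2."""
    for _ in range(max_iter):
        x = solve_power_flow(u, p)                           # Step 2: enforce g(x,u,p) = 0
        lam = -np.linalg.solve(dg_dx(x, u).T, df_dx(x, u))   # Step 3: Eq. (3.41)
        grad = df_du(x, u) + dg_du(x, u).T @ lam             # Step 4: Eq. (3.42)
        if np.linalg.norm(grad) < tol:                       # Step 5: gradient ~ 0, optimum reached
            break
        u = u - alpha * grad                                 # Step 6: Eq. (3.43), Delta u = -alpha * grad
    return x, u
```

The fixed step length alpha plays the role of the positive scalar discussed above; a line search or quadratic fit could replace it.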

3.3.1.3 OPF Solution by Gradient Method — Researchers' Contribution

The significant contributions / salient features of the presentations made by researchers are furnished below:

1. Dommel H. W. and Tinney W. F. [29], "Optimal power flow solutions," IEEE Transactions on Power Apparatus and Systems, vol. PAS-87, pp. 1866–1876, October 1968.
- Using a penalty function optimization approach, developed a nonlinear programming (NLP) method for minimization of fuel cost and active power losses.
- Verification of the boundary, using the Lagrange multiplier approach, is achieved.
- Capable of solving large power system problems of up to 500 buses.
- Its drawback is in the modeling of components such as transformer taps, which are accounted for in the load flow but not in the optimization routine.

2. C. M. Shen and M. A. Laughton [40], "Determination of Optimum Power System Operating Conditions," Proceedings of the IEE, vol. 116, no. 2, pp. 225-239, 1969.
- Provided solutions for power system problems by an iterative indirect approach based on the Lagrange-Kuhn-Tucker conditions of optimality.
- A sample 135 kV British system of 270 buses was validated by this method, applied to solve the economic dispatch objective function with constraints.
- Constraints include voltage levels, generator loading, reactive-source loading, transformer-tap limits and transmission-line loading.
- This method showed less computation time, with a tolerance of 0.001, when compared to other penalty function techniques.

3. O. Alsac and B. Stott [41], "Optimal Load Flow with Steady-State Security," IEEE Transactions on Power Apparatus and Systems, vol. PAS-93, pp. 745–754, 1974.
- Developed a nonlinear programming approach based on the reduced gradient method, utilizing the Lagrange multiplier and penalty-function technique.
- The method minimises the cost of total active power generation.
- Steady-state security constraints are incorporated to make the optimal power flow calculation a powerful and practical tool for system operation and design.
- Validated on the 30-bus IEEE test system and solved in 14.3 seconds.
- The correct choice of gradient step sizes is crucial to the success of the algorithm.

3.3.1.4 Merits and Demerits of Gradient Method

The Merits and Demerits of Gradient Method are summarized

and given below.

Merits

1) With the Gradient method, the Optimal Power Flow solution

usually requires 10 to 20 computations of the Jacobian matrix

formed in the Newton method.

2) The Gradient procedure is used to find the optimal power flow

solution that is feasible with respect to all relevant inequality

constraints. It handles functional inequality constraints by

making use of penalty functions.

3) Gradient methods are better fitted to highly constrained problems.

4) Gradient methods can accommodate non linearities easily

compared to Quadratic method.

5) Compact explicit gradient methods are very efficient, reliable,

accurate and fast.

This is true when the optimal step in the gradient direction is

computed automatically through quadratic developments.

Demerits

1) The higher the dimension of the gradient, the higher the accuracy

of the OPF solution. However consideration of equality and

inequality constraints and penalty factors make the relevant

matrices less sparse and hence it complicates the procedure and

increases computational time.



2) Gradient method suffers from the difficulty of handling all the

inequality constraints usually encountered in optimum power

flow.

3) During the problem solving process, the direction of the gradient has to be changed often, and this leads to very slow convergence. This is predominant especially during the enforcement of penalty functions; the selection of the degree of penalty has a bearing on the convergence.

4) Gradient methods basically exhibit slow convergence

characteristics near the optimal solution.

5) These methods are difficult to solve in the presence of inequality

constraints.

3.3.2 Newton Method

In the area of power systems, Newton's method is well known for the solution of the power flow. It has been the standard solution algorithm for the power flow problem for a long time. The Newton approach [42] is a flexible formulation that can be adopted to develop different OPF algorithms suited to the requirements of different applications. Although the Newton approach exists as a concept entirely apart from any specific method of implementation, it would not be possible to develop practical OPF programs without employing special sparsity techniques. The concept and the techniques together comprise the given approach. Other Newton-based approaches are possible.

Newton’s method [2, 35] is a very powerful solution algorithm

because of its rapid convergence near the solution. This property is

especially useful for power system applications because an initial

guess near the solution is easily attained. System voltages will be

near rated system values, generator outputs can be estimated from

historical data, and transformer tap ratios will be near 1.0 p.u.

3.3.2.1 OPF Problem Formulation

Eqns. (3.1) – (3.6) describe the OPF Problem and constraints.

3.3.2.2 Solution Algorithm

The solution of the optimal power flow by Newton's method requires the creation of the Lagrangian as shown below [35, 42]:

$L(z) = f(x) + \lambda^T h(x) + \mu^T g(x)$   (3.44)

where $z = [x\ \ \lambda\ \ \mu]^T$, $\lambda$ and $\mu$ are vectors of the Lagrange multipliers, and $g(x)$ includes only the active (or binding) inequality constraints.

The gradient and Hessian of the Lagrangian are then defined as

Gradient $= \nabla L(z) = \left[ \dfrac{\partial L(z)}{\partial z_i} \right]$, a vector of the first partial derivatives of the Lagrangian   (3.45)

Hessian $= \nabla^2 L(z) = [H] = \left[ \dfrac{\partial^2 L(z)}{\partial z_i \partial z_j} \right] = \begin{bmatrix} \dfrac{\partial^2 L(z)}{\partial x_i \partial x_j} & \dfrac{\partial^2 L(z)}{\partial x_i \partial \lambda_j} & \dfrac{\partial^2 L(z)}{\partial x_i \partial \mu_j} \\[2mm] \dfrac{\partial^2 L(z)}{\partial \lambda_i \partial x_j} & 0 & 0 \\[2mm] \dfrac{\partial^2 L(z)}{\partial \mu_i \partial x_j} & 0 & 0 \end{bmatrix}$, a matrix of the second partial derivatives of the Lagrangian   (3.46)

It can be observed that the structure of the Hessian matrix shown above is extremely sparse. This sparsity is exploited in the solution algorithm.

According to optimization theory, the Kuhn-Tucker necessary conditions of optimality can be stated as follows. Let $z^* = [x^*, \lambda^*, \mu^*]$ be the optimal solution. Then:

$\nabla_x L(z^*) = \nabla_x L([x^*, \lambda^*, \mu^*]) = 0$   (3.47)

$\nabla_\lambda L(z^*) = \nabla_\lambda L([x^*, \lambda^*, \mu^*]) = 0$   (3.48)

$\nabla_\mu L(z^*) = \nabla_\mu L([x^*, \lambda^*, \mu^*]) = 0$   (3.49)

$\mu_i^* \ge 0$ if the corresponding inequality constraint is active (binding)   (3.50)

$\mu_i^* = 0$ if the corresponding inequality constraint is not active   (3.51)

$\lambda_i^*$ unrestricted in sign (real)   (3.52)

By solving the equation $\nabla_z L(z^*) = 0$, the solution of the optimal problem can be obtained.

It may be noted that special attention must be paid to the

inequality constraints of this problem. As noted, the Lagrangian only

includes those inequalities that are being enforced. For example, if a

bus voltage is within the desired operating range, then there is no

need to activate the inequality constraint associated with that bus

voltage. For this Newton’s method formulation, the inequality

constraints have to be handled by separating them into two sets:

active and inactive. For efficient algorithms, the determination of

those inequality constraints that are active is of utmost importance.

While an inequality constraint is being enforced, the sign of its



associated Lagrange multiplier at solution determines whether

continued enforcement of the constraint is necessary. Essentially the

Lagrange multiplier is the negative of the derivative of the function

that is being minimized with respect to the enforced constraint.

Therefore, if the multiplier is positive, continued enforcement will

result in a decrease of the function, and enforcement is thus

maintained. If it is negative, then enforcement will result in an

increase of the function, and enforcement is thus stopped. The outer

loop of the flow chart in Fig. 3.2 performs this search for the binding

or active constraints.

Considering the issues discussed above, the solution of the

minimization problem can be found by applying Newton’s method.

Algorithm for Newton method

Once an understanding of the calculation of the Hessian and Gradient

is attained, the solution of the OPF can be achieved by using the

Newton’s method algorithm.

Step 1: Initialize the OPF solution.


a) Initial guess at which inequalities are violated.
b) Initial guess z vector (bus voltages and angles, generator
output power, transformer tap ratios and phase shifts, all
Lagrange multipliers).
Step 2: Evaluate those inequalities that have to be added or removed
using the information from Lagrange multipliers for hard
constraints and direct evaluation for soft constraints.
Step 3: Determine viability of the OPF solution. Presently this ensures
that at least one generator is not at a limit.
Step 4: Calculate the Gradient (Eq. (3.51)) and Hessian (Eq. (3.52)) of
the Lagrangian.
Step 5: Solve the Eq. [ H ] z  L( z ) .
64

Step 6: Update solution znew  zold  z .

Step 7: Check whether || z ||  . If not, go to Step 4, otherwise


continue.
Step 8: Check whether correct inequalities have been enforced. If not
go to Step 2. If so, problem solved.
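A minimal sketch of the core Newton iteration (Steps 4–7) for a fixed set of enforced inequalities is given below; gradient_L and hessian_L are assumed user-supplied callables returning the gradient and Hessian of the Lagrangian (Eqs. (3.45)–(3.46)) and are placeholders, not part of the original formulation. The outer active-set search of Steps 2 and 8 is omitted.

```python
import numpy as np

def newton_opf(z0, gradient_L, hessian_L, eps=1e-6, max_iter=50):
    """Newton iteration of Section 3.3.2.2 for a fixed active set (sketch)."""
    z = z0.copy()
    for _ in range(max_iter):
        dz = np.linalg.solve(hessian_L(z), -gradient_L(z))  # Step 5: [H] dz = -grad L(z)
        z = z + dz                                          # Step 6: update z
        if np.linalg.norm(dz) <= eps:                       # Step 7: convergence test
            break
    return z
```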

3.3.2.3 OPF Solution by Newton Method — Researchers' Contribution

The significant contributions / salient features of researchers are furnished below:

1. A. M. H. Rashed and D. H. Kelly [43], "Optimal Load Flow Solution Using Lagrangian Multipliers and the Hessian Matrix," IEEE Transactions on Power Apparatus and Systems, vol. PAS-93, pp. 1292-1297, 1974.
- While using the Lagrange multiplier and Newton's method, the method also introduced an acceleration factor to compute the update of the controls.
- As an extension of Tinney's work, it employs a nonlinear programming methodology based on the homotopy continuation algorithm for minimizing loss and cost objective functions.
- Validation of voltage magnitude was done on a 179-bus system and the results are comparable to augmented MINOS schemes.

2. H. H. Happ [44], "Optimal Power Dispatch," IEEE Transactions on Power Apparatus and Systems, vol. PAS-93, no. 3, pp. 820-830, May/June 1974.
- The application of Lagrange multipliers to an economic dispatch objective function was presented.
- Obtained a solution for incremental losses using the Jacobian matrix attained from the Newton-Raphson load flow.
- Results obtained on a 118-bus test system are good for both on-line and off-line operations.
- Comparable with the B-matrix method in terms of optimum production cost for total generation, losses, and load.

3. David I. Sun, Bruce Ashley, Brian Brewer, Art Hughes and William F. Tinney [35], "Optimal Power Flow by Newton Approach," IEEE Transactions on Power Apparatus and Systems, vol. PAS-103, no. 10, pp. 2864-2879, Oct. 1984.
- Network sparsity techniques and the Lagrange multiplier approach were used.
- A solution for reactive power optimization based on the Newton method was presented.
- A quadratic approximation of the Lagrangian was solved at each iteration, and the method was validated on an actual 912-bus system.
- The approach is suitable for practical large systems due to its super-linear convergence to the Kuhn-Tucker conditions.

4. Maria, G. A. and Findlay, J. A. [45], "A Newton optimal power flow program for Ontario Hydro EMS," IEEE Transactions on Power Systems, vol. PWRS-2, pp. 576-584, Aug. 1987.
- Initially, the augmented Lagrangian is formed.
- A set of nonlinear equations, the first partial derivatives of the augmented objective with respect to the control variables, is obtained.
- All the nonlinear equations are solved simultaneously by the NR method, unlike the Dommel and Tinney method, where only part of these equations is solved by the NR method.

5. M. V. F. Pereira, L. M. V. G. Pinto, S. Granville and A. Monticelli [46], "A Decomposition Approach to Security Constrained Optimal Power Flow with Post Contingency Corrective Rescheduling," 9th PSCC Conference, pp. 585-591, 1987.
- A solution for the economic dispatch problem with security constraints using Bender's decomposition approach is obtained.
- In addition, solutions are also obtained for dispatch problems such as the pure economic dispatch problem, the security-constrained dispatch problem and the security-constrained dispatch with rescheduling problem.
- The method linearises AC/DC power flows and performs sensitivity analysis of load variations.
- Practical testing of the method has shown encouraging results.

6. C. W. Sanders and C. A. Monroe [47], "An Algorithm for Real-Time Security Constrained Economic Dispatch," IEEE Transactions on Power Systems, vol. PWRS-2, no. 4, pp. 175-182, November 1987.
- Provided an algorithm for security constrained dispatch calculations.
- The method was validated on a 1200-bus, 1500-line practical power system.
- Designed a constrained economic dispatch calculation (CEDC) in order to achieve the following goals: a) establish economic base points for load frequency control (LFC); b) enhance dependability of service by considering network transmission limitations; c) furnish constrained participation factors; d) be adaptable to current control computer systems.
- CEDC is efficient compared to the benchmark OPF algorithm and adapts the basic Lagrange multiplier technique for OPF. The cost model is assumed to be in the standard cubic polynomial form.
- The objective of CEDC was optimized subject to area constraints, line constraints, and line-group constraints. Computation of constraint-sensitivity factors was done to linearise the security constraints.
- The sensitivity factors can be decided from telemetry-based values of the fractional system load internal to the bounded area.
- The load flow was adapted to simulate the periodic incremental system losses, but not as a constraint.

7. A. Monticelli, M. V. F. Pereira, and S. Granville [48], "Security Constrained Dispatch," IEEE Transactions on Power Systems, vol. PWRS-2, no. 4, pp. 175-182, November 1987.
- An algorithm based on mathematical programming decomposition for solving an economic dispatch problem with security constraints is presented.
- Separate contingency analysis with generation rescheduling can be done to estimate constraint violations.
- Preventive control actions are built in, and an automatic way of adjusting the controls is included.
- Using Monticelli's method, the specific dispatch problem with rescheduling was tested on the IEEE 118-bus test system.
- Detection of infeasibility is also included in this method.

8. Monticelli and Wen-Hsiung E. Liu [49], "Adaptive Movement Penalty Method for the Newton Optimal Power Flow," IEEE Transactions on Power Systems, vol. 7, no. 1, pp. 334-342, 1992.
- The method introduces adaptive movement penalties to ensure positive definiteness, and convergence is attained without any negative effect.
- Handling of the penalties is automatic and tuning is not required.
- Results are encouraging when tested on a critical 1650-bus system.

9. S. D. Chen and J. F. Chen [50], "A new algorithm based on the Newton-Raphson approach for real-time emission dispatch," Electric Power Systems Research, vol. 40, pp. 137-141, 1997.
- An algorithm based on the Newton-Raphson (NR) method covering sensitivity factors to solve emission dispatch in real time is proposed.
- Development of the Jacobian matrix and the B-coefficients is done in terms of the generalized generation shift distribution factor.
- Computation of penalty factors and incremental losses is simplified, with fast execution time.

10. K. L. Lo and Z. J. Meng [51], "Newton-like method for line outage simulation," IEE Proceedings - Generation, Transmission and Distribution, vol. 151, no. 2, pp. 225-231, March 2004.
- A fixed Newton method and the modification of the right-hand-side vector method are presented for simulation of line outages.
- The above methods have better convergence characteristics than the fast decoupled load flow method and the Newton-based full AC load flow method.

11. X. Tong and M. Lin [52], "Semismooth Newton-type algorithms for solving optimal power flow problems," Proceedings of the IEEE/PES Transmission and Distribution Conference, Dalian, China, pp. 1-7, 2005.
- A semismooth Newton-type algorithm is presented wherein general inequality constraints and bound constraints are tackled separately.
- The KKT system of the OPF is altered to a system of nonsmooth bound-constrained equations with the inclusion of a diagonal matrix and the nonlinear complementarity function.
- The number of variables is smaller, with low computing cost.

3.4.2 Particle Swarm Optimisation Method

Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the social behavior of bird flocking or fish schooling [15, 16, 17].

In PSO, the search for an optimal solution is conducted using a population of particles, each of which represents a candidate solution to the optimization problem. Particles change their positions by flying around a multidimensional space, following the current optimal particles, until a relatively unchanged position has been achieved or until computational limits are exceeded. Each particle adjusts its trajectory towards its own previous best position and towards the global best position attained so far. PSO is easy to implement, provides fast convergence for many optimization problems, and has recently gained a lot of attention in power system applications.

The system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. Each particle makes its decision using its own experience together with its neighbors' experience.

3.4.2.1 OPF Problem Formulation

The OPF problem is to optimize the steady-state performance of a power system in terms of an objective function while satisfying several equality and inequality constraints. Mathematically, the OPF problem can be represented by Eqs. (3.1) – (3.6):

$\min F(P_G) = f(x, u) = J(x, u)$

Objective Function

The objective function is given by Eq. (3.1) and is reproduced below with the addition of the function J:

$J = F_T = F(P_G) = \sum_{i=1}^{NG} F_i(P_{Gi}) = \sum_{i=1}^{NG} \left( \alpha_i + \beta_i P_{Gi} + \gamma_i P_{Gi}^2 \right)$

subject to:  $g(x, u) = 0$,  $h(x, u) \le 0$

$x^T = [P_{G1}, V_{L1} \dots V_{LN_D}, Q_{G1} \dots Q_{GN_G}, S_{l1} \dots S_{lN_l}]$   (3.108)

where x is the vector of dependent variables consisting of the slack bus power $P_{G1}$, load bus voltages $V_L$, generator reactive power outputs $Q_G$, and transmission line loadings $S_l$; hence x is represented as above. $N_D$, NG and $N_l$ are the number of load buses, the number of generators, and the number of transmission lines, respectively.

u is the vector of independent variables consisting of generator voltages $V_G$, generator real power outputs $P_G$ except at the slack bus $P_{G1}$, transformer tap settings T, and shunt VAr compensations $Q_c$. Hence, u can be expressed as

$u^T = [V_{G1} \dots V_{GN_G}, P_{G2} \dots P_{GN_G}, T_1 \dots T_{NT}, Q_{c1} \dots Q_{cNC}]$   (3.109)

where NT and NC are the numbers of regulating transformers and shunt compensators, respectively. J is the objective function to be minimized, g is the set of equality constraints representing the typical load flow equations, and h is the set of inequality constraints representing the system constraints, as given below.

(a) Generation constraints: generator voltages, real power outputs, and reactive power outputs are restricted by their lower and upper limits, as represented by Eqs. (3.12) – (3.20).

(b) Shunt VAr constraints: shunt VAr compensations are restricted by their limits as follows:

$Q_{ci}^{\min} \le Q_{ci} \le Q_{ci}^{\max}, \quad i = 1, \dots, NC$   (3.110)

(c) Security constraints: these include the constraints on voltages at load buses and transmission line loadings as follows:

$V_{Li}^{\min} \le V_{Li} \le V_{Li}^{\max}, \quad i = 1, \dots, N_D$   (3.111)

$S_{li} \le S_{li}^{\max}, \quad i = 1, \dots, N_l$   (3.112)

It is worth mentioning that the control variables are self-constrained. The hard inequalities on $P_{G1}$, $V_L$, $Q_G$ and $S_l$ can be incorporated in the objective function as quadratic penalty terms. Therefore, the objective function can be augmented as follows:

$J_{aug} = J + \lambda_P (P_{G1} - P_{G1}^{lim})^2 + \lambda_V \sum_{i=1}^{N_D} (V_{Li} - V_{Li}^{lim})^2 + \lambda_Q \sum_{i=1}^{NG} (Q_{Gi} - Q_{Gi}^{lim})^2 + \lambda_S \sum_{i=1}^{N_l} (S_{li} - S_{li}^{\max})^2$   (3.113)

where $\lambda_P$, $\lambda_V$, $\lambda_Q$ and $\lambda_S$ are penalty factors and $x^{lim}$ is the limit value of the dependent variable x, given as

$x^{lim} = \begin{cases} x^{\max} & \text{if } x > x^{\max} \\ x^{\min} & \text{if } x < x^{\min} \end{cases}$   (3.114)

3.4.2.2 Solution Algorithm

A description of the basic elements required for the development of the solution algorithm is given below.

- Particle, X(t): a candidate solution represented by an m-dimensional vector, where m is the number of optimized parameters. At time t, the jth particle $X_j(t)$ can be described as $X_j(t) = [x_{j,1}(t), \dots, x_{j,m}(t)]$, where the $x$'s are the optimized parameters and $x_{j,k}(t)$ is the position of the jth particle with respect to the kth dimension, i.e. the value of the kth optimized parameter in the jth candidate solution.

- Population, pop(t): a set of n particles at time t, i.e. $pop(t) = [X_1(t), \dots, X_n(t)]^T$.

- Swarm: an apparently disorganized population of moving particles that tend to cluster together, while each particle seems to be moving in a random direction.

- Particle velocity, V(t): the velocity of the moving particles, represented by an m-dimensional vector. At time t, the jth particle velocity $V_j(t)$ can be described as $V_j(t) = [v_{j,1}(t), \dots, v_{j,m}(t)]$, where $v_{j,k}(t)$ is the velocity component of the jth particle with respect to the kth dimension.

- Inertia weight, w(t): a control parameter used to control the impact of the previous velocities on the current velocity. It therefore influences the trade-off between the global and local exploration abilities of the particles. A large inertia weight, to enhance global exploration, is recommended at the initial stages, whereas in the final stages the inertia weight is reduced for better local exploration.

- Individual best, X*(t): during the search process, each particle compares its fitness value at the current position to the best fitness value it has ever attained at any time up to the current time. The best position associated with the best fitness encountered so far is called the individual best, X*(t). In this way, the best position X*(t) for each particle in the swarm can be determined and updated during the search. For example, in a minimisation problem with objective function J, the individual best of the jth particle, $X_j^*(t)$, is determined such that $J(X_j^*(t)) \le J(X_j(\tau))$ for all $\tau \le t$. For simplicity it is assumed that $J_j^* = J(X_j^*(t))$. For the jth particle, the individual best can be expressed as $X_j^*(t) = [x_{j,1}^*(t), \dots, x_{j,m}^*(t)]$.

- Global best, X**(t): the best position among all individual best positions (i.e. the best of all) achieved so far. Therefore, the global best is determined such that $J(X^{**}(t)) \le J(X_j^*(t))$, $j = 1, \dots, n$. For simplicity, assume that $J^{**} = J(X^{**}(t))$.

- Stopping criteria: the conditions under which the search process will terminate. In the present case, the search will terminate if one of the following conditions is met: a) the number of iterations since the last change of the best solution is greater than a prespecified number; or b) the number of iterations reaches the maximum allowable number.

With the basic elements described above, the solution algorithm is developed as given below.

- In order to perform a uniform search in the initial stages and a very local search in the later stages, an annealing procedure is followed. A decrement function for decreasing the inertia weight, given as $w(t) = \alpha\, w(t-1)$, where $\alpha$ is a decrement constant smaller than but close to 1, is considered here.

- Feasibility checks are imposed on the particle positions after the position update, to prevent the particles from flying outside the feasible search space.

- The particle velocity in the kth dimension is limited by some maximum value, $v_k^{\max}$. With this limit, enhancement of local exploration of the space is achieved, and it realistically simulates the incremental changes of human learning. In order to ensure uniform velocity through all dimensions, the maximum velocity in the kth dimension is given as:

$v_k^{\max} = (x_k^{\max} - x_k^{\min}) / N$   (3.115)

In the PSO algorithm, the population has n particles and each particle is an m-dimensional vector, where m is the number of optimized parameters. Incorporating the above modifications, the computational flow of the PSO technique can be described in the following steps.

Step 1 (Initialization)

- Set the time counter t = 0 and randomly generate n particles, $[X_j(0),\ j = 1, \dots, n]$, where $X_j(0) = [x_{j,1}(0), \dots, x_{j,m}(0)]$. Each $x_{j,k}(0)$ is generated by randomly selecting a value with uniform probability over the kth optimized parameter search space $[x_k^{\min}, x_k^{\max}]$.

- Similarly, randomly generate the initial velocities of all particles, $[V_j(0),\ j = 1, \dots, n]$, where $V_j(0) = [v_{j,1}(0), \dots, v_{j,m}(0)]$. Each $v_{j,k}(0)$ is generated by randomly selecting a value with uniform probability over the kth dimension $[-v_k^{\max}, v_k^{\max}]$.

- Each particle in the initial population is evaluated using the objective function J. For each particle, set $X_j^*(0) = X_j(0)$ and $J_j^* = J_j$, $j = 1, \dots, n$. Search for the best value of the objective function, $J_{best}$.

- Set the particle associated with $J_{best}$ as the global best, $X^{**}(0)$, with an objective function value of $J^{**}$.

- Set the initial value of the inertia weight, $w(0)$.

Step 2 (Time updating)

Update the time counter: t = t + 1.

Step 3 (Weight updating)

Update the inertia weight: $w(t) = \alpha\, w(t-1)$.

Step 4 (Velocity updating)

Using the global best and the individual best of each particle, the jth particle velocity in the kth dimension is updated according to the following equation:

$v_{j,k}(t) = w(t)\, v_{j,k}(t-1) + c_1 r_1 \left( x_{j,k}^*(t-1) - x_{j,k}(t-1) \right) + c_2 r_2 \left( x_k^{**}(t-1) - x_{j,k}(t-1) \right)$   (3.116)

where $c_1$ and $c_2$ are positive constants and $r_1$ and $r_2$ are uniformly distributed random numbers in [0, 1]. It is worth mentioning that the second term represents the cognitive part of PSO, where the particle changes its velocity based on its own thinking and memory. The third term represents the social part of PSO, where the particle changes its velocity based on the social-psychological adaptation of knowledge. If a particle violates the velocity limits, its velocity is set equal to the limit.

Step 5 (Position updating)

Based on the updated velocities, each particle changes its position according to the following equation:

$x_{j,k}(t) = v_{j,k}(t) + x_{j,k}(t-1)$   (3.117)

If a particle violates its position limits in any dimension, its position is set at the proper limit.

Step 6 (Individual best updating)

Each particle is evaluated according to its updated position. If $J_j < J_j^*$, $j = 1, \dots, n$, then update the individual best as $X_j^*(t) = X_j(t)$ and $J_j^* = J_j$, and go to step 7; else go directly to step 7.

Step 7 (Global best updating)

Search for the minimum value $J_{\min}$ among the $J_j^*$, where min is the index of the particle with the minimum objective function value, i.e. $\min \in \{ j;\ j = 1, \dots, n \}$. If $J_{\min} < J^{**}$, then update the global best as $X^{**}(t) = X_{\min}(t)$ and $J^{**} = J_{\min}$, and go to step 8; else go directly to step 8.

Step 8 (Stopping criteria)

If one of the stopping criteria is satisfied, then stop; else go to step 2.
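A compact sketch of the complete PSO loop described in Steps 1–8 is given below, assuming NumPy arrays x_min and x_max for the control-variable bounds and an arbitrary callable objective J; the swarm size, coefficients and the choice N = 10 in Eq. (3.115) are illustrative, not values prescribed by the text, and the iteration-count stopping rule stands in for the criteria listed in Step 8.

```python
import numpy as np

def pso(objective, x_min, x_max, n=20, iters=200,
        w=0.9, alpha=0.99, c1=2.0, c2=2.0):
    """Sketch of the PSO loop of Section 3.4.2.2."""
    m = len(x_min)
    rng = np.random.default_rng(0)
    v_max = (x_max - x_min) / 10.0                       # Eq. (3.115) with N = 10
    X = rng.uniform(x_min, x_max, (n, m))                # Step 1: random positions
    V = rng.uniform(-v_max, v_max, (n, m))               #         random velocities
    Xbest = X.copy()                                     # individual bests X*_j
    Jbest = np.array([objective(x) for x in X])          # J*_j
    g = Xbest[np.argmin(Jbest)].copy()                   # global best X**
    for _ in range(iters):                               # Steps 2-8
        w *= alpha                                       # Step 3: w(t) = alpha * w(t-1)
        r1, r2 = rng.random((n, m)), rng.random((n, m))
        V = np.clip(w * V + c1 * r1 * (Xbest - X) + c2 * r2 * (g - X),
                    -v_max, v_max)                       # Step 4: Eq. (3.116) with limit
        X = np.clip(X + V, x_min, x_max)                 # Step 5: Eq. (3.117) with limit
        J = np.array([objective(x) for x in X])
        better = J < Jbest                               # Step 6: individual best update
        Xbest[better], Jbest[better] = X[better], J[better]
        g = Xbest[np.argmin(Jbest)].copy()               # Step 7: global best update
    return g, Jbest.min()
```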

3.4.2.3 PSO Method — Researchers' Contribution

The significant contributions / salient features of researchers are furnished below:

1. Hirotaka Yoshida, Kenichi Kawata and Yoshikazu Fukuyama [16], "A Particle Swarm Optimization for Reactive Power and Voltage Control Considering Voltage Security Assessment," IEEE Transactions on Power Systems, vol. 15, no. 4, pp. 1232-1239, Nov. 2000.
- Reactive power and voltage control (VVC) is handled by particle swarm optimisation, while taking into account voltage security assessment (VSA).
- The method treats VVC as a mixed-integer nonlinear optimization problem (MINLP) and determines a control strategy with continuous and discrete control variables such as AVR operating values, OLTC tap positions, and the number of reactive power compensation devices.
- Voltage security is taken care of by adapting a continuation power flow (CPFLOW) and a voltage contingency analysis method.
- The viability of the proposed method for VVC is confirmed on practical power systems with encouraging results.

2. M. A. Abido [17], "Optimal Power Flow using Particle Swarm Optimization," Electrical Power and Energy Systems, vol. 24, pp. 563-571, 2002.
- Provided a capable and dependable evolutionary-based method, particle swarm optimization (PSO), to solve the optimal power flow problem.
- The PSO algorithm is used to find the optimal settings of the OPF problem control variables.
- Assumptions imposed on the optimized objective functions are considerably relaxed by this optimisation technique in solving the OPF problem.
- Validation was done for various objective functions such as fuel cost minimisation, enhancement of the voltage profile and voltage stability.
- Observations show that this method is better than the conventional methods and Genetic Algorithms in respect of efficacy and robustness.

3. Cui-Ru Wang, He-Jin Yuan, Zhi-Qiang Huang, Jiang-Wei Zhang and Chen-Jun Sun [18], "A Modified Particle Swarm Optimization Algorithm and its Application to OPF Problem," Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Guangzhou, pp. 2885-2889, Aug. 2005.
- Solved the OPF problem in a power system by employing a modified particle swarm optimization (MPSO) algorithm.
- MPSO, using swarm intelligence, provides a new way of thinking for the solution of nonlinear, non-differentiable and multi-modal problems.
- In this algorithm, each particle learns from itself and from the best particle, as well as from other particles.
- The possibility of discovering the global optimum is improved, and the effect of the starting positions of the particles is reduced by the enriched knowledge.
