Lecture 22 Linear Programming and Optimal Power Flow

This document provides an overview of linear programming and optimal power flow techniques. It discusses two example problems to illustrate how to set up linear programming problems and introduces concepts like slack variables to convert inequality constraints to equality constraints. The document also briefly covers related topics like matrix singular value decomposition and pseudoinverses.


ECEN 615
Methods of Electric Power Systems Analysis
Lecture 22: Linear Programming and
Optimal Power Flow

Prof. Tom Overbye
Dept. of Electrical and Computer Engineering
Texas A&M University
[email protected]
Announcements
• Read Chapters 3 and 8 from the book
• Homework 5 is due on Thursday November 14
• Second exam is in class on Nov 21
– Same format as with the first exam

Aside: Matrix Singular Value
Decomposition (SVD)
• The SVD is a factorization of a matrix that
generalizes the eigendecomposition to any m by n
matrix Y, producing
Y = U Σ V^T
where Σ is a diagonal matrix of the singular values
– The original concept is more than 100 years old,
but has found lots of recent applications
• The singular values are non-negative real numbers
that can be used to indicate the major components
of a matrix (the gist is they provide a way to
decrease the rank of a matrix)
• A key application is image compression
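The factorization can be sketched in a few lines of NumPy (the matrix used here happens to be the one from the later least squares example; the code itself is an illustration, not from the slides):

```python
import numpy as np

# Economy SVD of a small matrix: Y = U * diag(s) * V^T
Y = np.array([[1.0, 1.0],
              [2.0, 1.0],
              [6.0, 1.0]])
U, s, Vt = np.linalg.svd(Y, full_matrices=False)

# The singular values are non-negative and sorted in decreasing order
print(s)                            # roughly [6.559, 0.988]

# The factors reproduce the original matrix
Y_rebuilt = U @ np.diag(s) @ Vt
print(np.allclose(Y, Y_rebuilt))    # True
```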
Aside: SVD Image Compression
Example
• Images can be represented with matrices. When
an SVD is applied and only the largest singular
values are retained, the image is compressed.
• Computationally the SVD is order m^2·n
(with n ≤ m)

Image Source: www.math.utah.edu/~goller/F15_M2270/BradyMathews_SVDImage.pdf
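A rank-k compression along these lines can be sketched as follows (the helper function and stand-in matrix are illustrative assumptions):

```python
import numpy as np

# Keep only the k largest singular values to get a rank-k approximation
def truncate_svd(A, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]

A = np.arange(12.0).reshape(4, 3)   # stand-in for an image matrix
A1 = truncate_svd(A, 1)
print(np.linalg.matrix_rank(A1))    # 1
```

Storage is the point: the rank-k factors need only k(m + n + 1) numbers instead of the full m·n entries.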


Aside: Pseudoinverse of a Matrix
• The pseudoinverse of a matrix generalizes the concept of a
matrix inverse to an m by n matrix, in which m >= n
– Specifically this is a Moore-Penrose Matrix Inverse
• Notation for the pseudoinverse of A is A+
• Satisfies AA+A = A
• If A is a square, nonsingular matrix, then A+ = A^-1
• Quite useful for solving the least squares problem since
the least squares solution of Ax = b is x = A+ b
• Can be calculated using an SVD: if A = U Σ V^T
then A+ = V Σ^+ U^T
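A quick check of these properties with NumPy (a sketch; the matrix is the one used in the least squares example that follows):

```python
import numpy as np

# Pseudoinverse from the SVD: A = U S V^T gives A+ = V S+ U^T
A = np.array([[1.0, 1.0],
              [2.0, 1.0],
              [6.0, 1.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_plus = Vt.T @ np.diag(1.0 / s) @ U.T   # all singular values nonzero here

print(np.allclose(A_plus, np.linalg.pinv(A)))   # True
print(np.allclose(A @ A_plus @ A, A))           # True: A A+ A = A
```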
Least Squares Matrix
Pseudoinverse Example
• Assume we wish to fit a line (mx + b = y) to three
data points: (1,1), (2,4), (6,4)
• Two unknowns, m and b; hence x = [m b]^T
• Setup in form of Ax = b

[1 1] [m]   [1]          [1 1]
[2 1] [b] = [4]   so A = [2 1]
[6 1]       [4]          [6 1]
Least Squares Matrix
Pseudoinverse Example, cont.
• Doing an economy SVD

            [0.182  0.765]
A = U Σ V^T = [0.331  0.543] [6.559  0    ] [ 0.976  0.219]
            [0.926 -0.345] [0      0.988] [-0.219  0.976]

• Computing the pseudoinverse

               [0.976 -0.219] [0.152  0    ] [0.182  0.331  0.926]
A+ = V Σ^+ U^T = [0.219  0.976] [0      1.012] [0.765  0.543 -0.345]

               [-0.143  -0.071   0.214]
A+ = V Σ^+ U^T = [ 0.762   0.548  -0.310]

In an economy SVD the Σ matrix has dimensions of
m by m if m < n or n by n if n < m
Least Squares Matrix
Pseudoinverse Example, cont.
• Computing x = [m b]^T gives

         [-0.143  -0.071   0.214] [1]   [0.429]
x = A+ b = [ 0.762   0.548  -0.310] [4] = [1.71 ]
                                  [4]

• With the pseudoinverse approach we immediately see
the sensitivity of the elements of x to the elements of b
– New values of m and b can be readily calculated if y changes
• Computationally the SVD is order m^2·n (with n ≤ m)
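The worked example can be verified directly (a sketch; `np.linalg.pinv` performs the SVD internally):

```python
import numpy as np

# Fit y = m*x + b to the points (1,1), (2,4), (6,4)
A = np.array([[1.0, 1.0],
              [2.0, 1.0],
              [6.0, 1.0]])
y = np.array([1.0, 4.0, 4.0])

m, b = np.linalg.pinv(A) @ y
print(round(m, 3), round(b, 3))   # 0.429 1.714
```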
Quick Coverage of Linear Programming
• LP is probably the most widely used mathematical
programming technique
• It is used to solve linear, constrained minimization
(or maximization) problems in which the objective
function and the constraints can be written as linear
functions

Example Problem 1
• Assume that you operate a lumber mill which
makes both construction-grade and finish-grade
boards from the logs it receives. Suppose it takes 2
hours to rough-saw and 3 hours to plane each 1000
board feet of construction-grade boards. Finish-grade
boards take 2 hours to rough-saw and 5
hours to plane for each 1000 board feet. Assume
that the saw is available 8 hours per day, while the
plane is available 15 hours per day. If the profit
per 1000 board feet is $100 for construction-grade
and $120 for finish-grade, how many board feet of
each should you make per day to maximize your
profit?
Problem 1 Setup
Let x1 = amount of cg, x2 = amount of fg
Maximize 100x1 + 120x2
s.t. 2x1 + 2x2 ≤ 8
     3x1 + 5x2 ≤ 15
     x1, x2 ≥ 0
Notice that all of the equations are linear, but
they are inequality, as opposed to equality, constraints;
we are seeking to determine the values of x1 and x2
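This setup can be handed to an off-the-shelf LP solver; a sketch with SciPy's `linprog` (which minimizes, so the profit is negated):

```python
from scipy.optimize import linprog

# Lumber mill problem: maximize 100*x1 + 120*x2
c = [-100, -120]                 # negate to turn maximization into minimization
A_ub = [[2, 2],                  # saw hours:   2*x1 + 2*x2 <= 8
        [3, 5]]                  # plane hours: 3*x1 + 5*x2 <= 15
b_ub = [8, 15]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)           # about [2.5, 1.5], profit 430
```

This agrees with the solution given later for this example (x1 = 2.5, x2 = 1.5).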
Example Problem 2
• A nutritionist is planning a meal with 2 foods: A
and B. Each ounce of A costs $0.20, and has 2
units of fat, 1 of carbohydrate, and 4 of protein.
Each ounce of B costs $0.25, and has 3 units of fat,
3 of carbohydrate, and 3 of protein. Provide the
least cost meal which has no more than 20 units of
fat, but with at least 12 units of carbohydrates and
24 units of protein.

Problem 2 Setup
Let x1 = ounces of A, x2 = ounces of B
Minimize 0.20x1 + 0.25x2
s.t. 2x1 + 3x2 ≤ 20
     x1 + 3x2 ≥ 12
     4x1 + 3x2 ≥ 24
     x1, x2 ≥ 0
Again all of the equations are linear, but
they are inequality, as opposed to equality, constraints;
we are again seeking to determine the values of x1 and x2;
notice there are also more constraints than solution
variables
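Again a `linprog` sketch; the ≥ rows are negated to fit the solver's A_ub·x ≤ b_ub form:

```python
from scipy.optimize import linprog

c = [0.20, 0.25]
A_ub = [[ 2,  3],    # fat:           2*x1 + 3*x2 <= 20
        [-1, -3],    # carbohydrate:  x1 + 3*x2 >= 12
        [-4, -3]]    # protein:       4*x1 + 3*x2 >= 24
b_ub = [20, -12, -24]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, res.fun)   # about [4.0, 2.667] at cost 1.467
```

This matches the basic solution found later for this problem (x1 = 4, x2 = 2.67).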
Three Bus Case Formulation
• For the earlier three bus system, given the initial
condition of an overloaded transmission line,
minimize the cost of generation such that the
change in generation is zero, and the flow
on the line between buses 1 and 3 is not
violating its limit
• Can be set up considering the change in
generation, (ΔPG1, ΔPG2, ΔPG3)

[One-line diagram: the line between buses 1 and 3 is loaded
to 120%; all three buses are at 10.00 $/MWh and the
total cost is 1800 $/hr]
Three Bus Case Problem Setup
Let x1 = ΔPG1, x2 = ΔPG2, x3 = ΔPG3
Minimize 10x1 + 12x2 + 20x3
s.t. (2/3)x1 + (1/3)x2 ≤ -20   Line flow constraint
     x1 + x2 + x3 = 0          Power balance constraint
enforcing limits on x1, x2, x3
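A sketch of this redispatch LP with SciPy; the generation limits are illustrative assumptions, since only the flow and balance constraints are given above:

```python
from scipy.optimize import linprog

c = [10, 12, 20]                 # $/MWh costs of the generation changes
A_ub = [[2/3, 1/3, 0]]           # line 1-3 flow sensitivities
b_ub = [-20]                     # flow must drop by at least 20 MW
A_eq = [[1, 1, 1]]               # changes must sum to zero
b_eq = [0]
bounds = [(-180, 0), (0, None), (0, None)]   # assumed generation limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)                     # about [-60, 60, 0]
```

Shifting 60 MW from the bus 1 generator to bus 2 reduces the line flow by (2/3 − 1/3)·60 = 20 MW at the least cost.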
LP Standard Form
The standard form of the LP problem is
Minimize cx
s.t.     Ax = b
         x ≥ 0
where x = n-dimensional column vector
      c = n-dimensional row vector
      b = m-dimensional column vector
      A = m×n matrix
Maximum problems can be treated as minimizing the negative
For the LP problem usually n >> m
The previous examples were not in this form!
Replacing Inequality Constraints
with Equality Constraints
• The LP standard form does not allow inequality
constraints
• Inequality constraints can be replaced with equality
constraints through the introduction of slack variables,
each of which must be greater than or equal to zero
(expression) ≤ bi  →  (expression) + yi = bi with yi ≥ 0
(expression) ≥ bi  →  (expression) - yi = bi with yi ≥ 0
• Slack variables have no cost associated with them;
they merely tell how far a constraint is from being
binding, which will occur when its slack variable is
zero
Lumber Mill Example with Slack
Variables
• Let the slack variables be x3 and x4, so
Minimize -(100x1 + 120x2)      Minimize the negative
s.t. 2x1 + 2x2 + x3 = 8
     3x1 + 5x2 + x4 = 15
     x1, x2, x3, x4 ≥ 0
LP Definitions
A vector x is said to be basic if
1. Ax = b
2. At most m components of x are non-zero; these
are called the basic variables; the rest are non-basic
variables; if there are fewer than m non-zeros then
x is called degenerate
Define x = [xB; xN] (with xB basic) and A = [AB AN];
AB is called the basis matrix
With [AB AN][xB; xN] = b, so xB = AB^-1 (b - AN xN)
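These definitions can be made concrete with the lumber mill system in standard form (a sketch; the choice of basis is ours):

```python
import numpy as np

# Standard-form lumber mill system: columns are x1, x2, x3, x4
A = np.array([[2.0, 2.0, 1.0, 0.0],
              [3.0, 5.0, 0.0, 1.0]])
b = np.array([8.0, 15.0])

basic = [0, 1]                    # pick x1 and x2 as the basic variables
A_B = A[:, basic]                 # the basis matrix
x_B = np.linalg.solve(A_B, b)     # with x_N = 0, x_B = inv(A_B) @ b
print(x_B)                        # [2.5 1.5] -- a basic feasible solution
```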
Fundamental LP Theorem
• Given an LP in standard form with A of rank m
then
– If there is a feasible solution, there is a basic feasible
solution
– If there is an optimal, feasible solution, then there is an
optimal, basic feasible solution
• Note, there could be a LARGE number of basic,
feasible solutions
– Simplex algorithm determines the optimal,
basic feasible solution usually very quickly

LP Graphical Interpretation
• The LP constraints define a polyhedron in the
solution space
– This is a polytope if the polyhedron is bounded and
nonempty
– The basic, feasible solutions are vertices of this
polyhedron
– With the linear cost function the solution will be at
one of the vertices

Image: Figure 3.26 from course text
Simplex Algorithm
• The key is to move intelligently from one basic
feasible solution (i.e., a vertex) to another, with the
goal of continually decreasing the cost function
• The algorithm does this by determining the “best”
variable to bring into the basis; this requires that
another variable exit the basis, while always
retaining a basic, feasible solution
• This is called pivoting

Determination of Variable to
Enter the Basis
• To determine which non-basic variable should
enter the basis, look at how the cost function
changes w.r.t. a change in a non-basic variable
(i.e., one that is currently zero)
Define z = cx = [cB cN][xB; xN]
With xB = AB^-1 (b - AN xN)
Then z = cB AB^-1 b + (cN - cB AB^-1 AN) xN
Elements of xN are all zero, but we are looking
to change one to decrease the cost
Determination of Variable to
Enter the Basis, cont.
• Define the reduced (or relative) cost coefficients as
r = cN - cB AB^-1 AN
r is an n-m dimensional row vector
• Elements of this vector tell how the cost function
will change for a change in a currently non-basic
variable
• The variable to enter the basis is usually the one
with the most negative relative cost
• If all the relative costs are nonnegative then we are
at an optimal solution
Determination of Variable
to Exit Basis
• The new variable entering the basis, say at position j,
causes the values of all the other basic variables to
change. In order to retain a basic, feasible solution, we
need to ensure no basic variables become negative.
The change in the basic variables is given by
x'B = xB - ε AB^-1 aj
where ε is the value of the variable entering the
basis, and aj is its associated column in A
Determination of Variable
to Exit Basis, cont.
We find the largest value ε such that
x'B = xB - ε AB^-1 aj ≥ 0
If no such ε exists then the problem is unbounded;
otherwise at least one component of x'B equals zero.
The associated variable exits the basis.
Canonical Form
• The Simplex Method works by having the problem in
what is known as canonical form
• Canonical form is defined as having the m basic
variables with the property that each appears in only
one equation, its coefficient in that equation is unity,
and none of the other basic variables appear in the
same equation
• Sometimes canonical form is readily apparent
Minimize -(100x1 + 120x2)      Note that with x3 and
s.t. 2x1 + 2x2 + x3 = 8        x4 as basic variables
     3x1 + 5x2 + x4 = 15       AB is the identity
     x1, x2, x3, x4 ≥ 0        matrix
Canonical Form, cont.
• Other times canonical form is achieved by initially
adding artificial variables to get an initial solution
• Example of the nutrition problem in canonical form
with slack and artificial variables (denoted as y) used
to get an initial basic feasible solution
Let x1 = ounces of A, x2 = ounces of B
Minimize y1 + y2 + y3          Note that with y1, y2,
s.t. 2x1 + 3x2 + x3 + y1 = 20  and y3 as basic
     x1 + 3x2 - x4 + y2 = 12   variables AB is the
     4x1 + 3x2 - x5 + y3 = 24  identity matrix
     x1, x2, x3, x4, x5, y1, y2, y3 ≥ 0
LP Tableau
• With the system in canonical form, the Simplex
solution process can be illustrated by forming what is
known as the LP tableau
– Initially this corresponds to the A matrix, with a column
appended to include the b vector, and a row added to give the
relative cost coefficients; the last element is the negative of the
cost function value
– Define the tableau as Y, with elements Yij
– In canonical form the last column of the tableau gives the
values of the basic variables
• During the solution the tableau is updated by pivoting

LP Tableau for the Nutrition
Problem with Artificial Variables
• When in canonical form the relative costs vector is
r = cN - cB AB^-1 AN = cN - cB AN (here AB = I)

                          [2  3  1   0   0]
r = [0 0 0 0 0] - [1 1 1] [1  3  0  -1   0] = [-7  -9  -1  1  1]
                          [4  3  0   0  -1]

• The initial tableau for the artificial problem is then

x1  x2  x3  x4  x5  y1  y2  y3 |  b
 2   3   1   0   0   1   0   0 |  20    Note the last column
 1   3   0  -1   0   0   1   0 |  12    gives the values of
 4   3   0   0  -1   0   0   1 |  24    the basic variables
-7  -9  -1   1   1   0   0   0 | -56
LP Tableau Pivoting
• Pivoting is used to move from one basic feasible
solution to another
– Select the pivot column (i.e., the variable coming into the
basis, say q) as the one with the most negative relative cost
– Select the pivot row (i.e., the variable going out of the basis) as
the one with the smallest ratio of xi/Yiq for Yiq > 0; define this as
row p (xi is given in the last column)
That is, we find the largest value ε such that
x'B = xB - ε AB^-1 aq ≥ 0
If no such ε exists then the problem is unbounded;
otherwise at least one component of x'B equals zero.
The associated variable exits the basis.
LP Tableau Pivoting for Nutrition
Problem
• Starting at

x1  x2  x3  x4  x5  y1  y2  y3 |  b
 2   3   1   0   0   1   0   0 |  20
 1   3   0  -1   0   0   1   0 |  12
 4   3   0   0  -1   0   0   1 |  24
-7  -9  -1   1   1   0   0   0 | -56

• Pivot on column q = 2; for the row get the minimum of
{20/3, 12/3, 24/3}, which is row p = 2
LP Tableau Pivoting
• Pivoting on element Ypq is done by
– First dividing row p by Ypq to change the pivot element to unity
– Then subtracting from the kth row Ykq/Ypq times the pth row for
all rows with Ykq ≠ 0
(Only two digits of the fractions are shown)

x1     x2  x3  x4     x5  y1  y2     y3 |  b
 2      3   1   0      0   1   0      0 |  20
 1      3   0  -1      0   0   1      0 |  12
 4      3   0   0     -1   0   0      1 |  24
-7     -9  -1   1      1   0   0      0 | -56

Pivoting gives

x1     x2  x3  x4     x5  y1  y2     y3 |  b
 1      0   1   1      0   1  -1      0 |   8
 0.33   1   0  -0.33   0   0   0.33   0 |   4
 3      0   0   1     -1   0  -1      1 |  12
-4      0  -1  -2      1   0   3      0 | -20
LP Tableau Pivoting, Example, cont.
• Next pivot on column 1, row 3

x1     x2  x3  x4     x5  y1  y2     y3 |  b
 1      0   1   1      0   1  -1      0 |   8
 0.33   1   0  -0.33   0   0   0.33   0 |   4
 3      0   0   1     -1   0  -1      1 |  12
-4      0  -1  -2      1   0   3      0 | -20

• Which gives

x1  x2  x3   x4     x5    y1   y2     y3   |  b
 0   0   1   0.67   0.33   1  -0.67  -0.33 |  4
 0   1   0  -0.44   0.11   0   0.44  -0.11 |  2.67
 1   0   0   0.33  -0.33   0  -0.33   0.33 |  4
 0   0  -1  -0.67  -0.33   0   1.67   1.33 | -4
LP Tableau Pivoting, Example, cont.
• Next pivot on column 3, row 1

x1  x2  x3   x4     x5    y1   y2     y3   |  b
 0   0   1   0.67   0.33   1  -0.67  -0.33 |  4
 0   1   0  -0.44   0.11   0   0.44  -0.11 |  2.67
 1   0   0   0.33  -0.33   0  -0.33   0.33 |  4
 0   0  -1  -0.67  -0.33   0   1.67   1.33 | -4

• Which gives

x1  x2  x3   x4     x5    y1   y2     y3   |  b
 0   0   1   0.67   0.33   1  -0.67  -0.33 |  4
 0   1   0  -0.44   0.11   0   0.44  -0.11 |  2.67
 1   0   0   0.33  -0.33   0  -0.33   0.33 |  4
 0   0   0   0      0      1   1      1    |  0

Since there are no negative relative costs we are done with
getting a starting solution
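The pivot sequence above can be replayed in NumPy (a sketch; note the indices in the code are zero-based):

```python
import numpy as np

def pivot(T, p, q):
    """Gauss-Jordan pivot of tableau T on row p, column q."""
    T = T.copy()
    T[p] /= T[p, q]
    for k in range(T.shape[0]):
        if k != p:
            T[k] -= T[k, q] * T[p]
    return T

# columns: x1 x2 x3 x4 x5 y1 y2 y3 | b
T = np.array([[ 2,  3,  1,  0,  0, 1, 0, 0,  20],
              [ 1,  3,  0, -1,  0, 0, 1, 0,  12],
              [ 4,  3,  0,  0, -1, 0, 0, 1,  24],
              [-7, -9, -1,  1,  1, 0, 0, 0, -56]], dtype=float)

T = pivot(T, 1, 1)   # x2 enters: pivot row 2, column 2 (1-based)
T = pivot(T, 2, 0)   # x1 enters: pivot row 3, column 1
T = pivot(T, 0, 2)   # x3 enters: pivot row 1, column 3

print(np.round(T[:3, -1], 2))   # basic variable values: 4, 2.67, 4
print(T[-1])                    # no negative relative costs remain
```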
LP Tableau Full Problem
• The tableau from the end of the artificial problem is
used as the starting point for the actual solution
– Remove the artificial variables
– Update the relative costs with the costs from the original
problem and update the bottom right-hand corner value

c = [0.2  0.25  0  0  0]

                                         [ 0.67   0.33]
r = cN - cB AB^-1 AN = [0 0] - [0 0.25 0.2] [-0.44   0.11] = [0.04  0.04]
                                         [ 0.33  -0.33]

• Since none of the relative costs are negative we are
at the optimal solution
Marginal Costs of Constraint
Enforcement
If we would like to determine how the cost function
will change for changes in b, assuming the set
of basic variables does not change,
then we need to calculate
∂z/∂b = ∂(cB xB)/∂b = ∂(cB AB^-1 b)/∂b = cB AB^-1 = λ
So the values of λ tell the marginal cost of enforcing
each constraint.
The marginal costs will be used to determine the OPF
locational marginal prices (LMPs)
Nutrition Problem Marginal Costs
• In this problem we had basic variables 1, 2, 3;
nonbasic variables of 4 and 5

                       [2 3 1]^-1 [20]   [4   ]
xB = AB^-1 (b - AN xN) = [1 3 0]    [12] = [2.67]
                       [4 3 0]    [24]   [4   ]

λ = cB AB^-1 = [0.2 0.25 0] [2 3 1; 1 3 0; 4 3 0]^-1 = [0  0.044  0.039]

There is no marginal cost with the first constraint since it is not
binding; values tell how cost changes if the b values were changed
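These numbers are easy to double-check numerically (a sketch):

```python
import numpy as np

# Basis matrix for basic variables x1, x2, x3 of the nutrition problem
A_B = np.array([[2.0, 3.0, 1.0],
                [1.0, 3.0, 0.0],
                [4.0, 3.0, 0.0]])
c_B = np.array([0.20, 0.25, 0.0])
b = np.array([20.0, 12.0, 24.0])

x_B = np.linalg.solve(A_B, b)     # basic variable values
lam = c_B @ np.linalg.inv(A_B)    # marginal costs (duals)
print(np.round(x_B, 2))           # 4, 2.67, 4
print(np.round(lam, 3))           # 0, 0.044, 0.039
```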
Lumber Mill Example Solution
Minimize -(100x1 + 120x2)
s.t. 2x1 + 2x2 + x3 = 8
     3x1 + 5x2 + x4 = 15        An initial basic feasible solution
     x1, x2, x3, x4 ≥ 0         is x1 = 0, x2 = 0, x3 = 8, x4 = 15
The solution is x1 = 2.5, x2 = 1.5, x3 = 0, x4 = 0

Then λ = [-100  -120] [2 2; 3 5]^-1 = [-35  -10]

Economic interpretation of λ is the profit is increased by
35 for every hour we increase the first constraint (the saw) and
by 10 for every hour we increase the second constraint (the plane)
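The same check applies to the lumber mill duals:

```python
import numpy as np

# Basis matrix for basic variables x1, x2; costs are the negated profits
A_B = np.array([[2.0, 2.0],
                [3.0, 5.0]])
c_B = np.array([-100.0, -120.0])

lam = c_B @ np.linalg.inv(A_B)
print(lam)   # [-35. -10.]
```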
Complications
• Often variables are not limited to being ≥ 0
– Variables with just a single limit can be handled by
substitution; for example if x ≥ 5 then x - 5 = z ≥ 0
– Bounded variables, 0 ≤ x ≤ high, can be handled with a slack
variable so x + y = high, and x, y ≥ 0
• Unbounded conditions need to be detected (i.e., unable
to pivot); also the solution set could be null
Minimize -x1 - x2  s.t. x1 - x2 ≤ 8
x1 - x2 + y1 = 8;  x1 = x2 = 0, y1 = 8 is a basic feasible solution
x1  x2  y1 | b
 1  -1   1 | 8
 0  -2   1 | 8   (relative cost row after x1 enters the basis)
The x2 column has a negative relative cost but no positive
pivot element, so the problem is unbounded
Complications, cont.
• Degenerate Solutions
– Occur when there are fewer than m basic variables > 0
– When this occurs the variable entering the basis could also
have a value of zero; it is possible to cycle; anti-cycling
techniques can be used
• Nonlinear cost functions
– Nonlinear cost functions can be approximated by assuming
a piecewise linear cost function
• Integer variables
– Sometimes some variables must be integers; known as integer
programming; we'll discuss after some power examples
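The piecewise linear idea can be sketched for a single generator (the breakpoints, slopes, and demand value here are illustrative assumptions, not from the lecture):

```python
from scipy.optimize import linprog

# Approximate a convex cost C(P) = 0.01*P^2 over [0, 100] MW with two
# segments, P = s1 + s2; each slope is the marginal cost at the segment midpoint.
demand = 70.0
c = [0.5, 1.5]                   # $/MWh slopes for segments [0,50] and [50,100]
A_eq = [[1.0, 1.0]]              # s1 + s2 must meet the demand
b_eq = [demand]
bounds = [(0, 50), (0, 50)]      # segment widths

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)                     # cheaper segment fills first: 50 then 20
```

Because the approximated cost is convex, the segments fill in merit order automatically and no integer variables are needed.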
LP Optimal Power Flow
• LP OPF was introduced in
– B. Stott, E. Hobson, "Power System Security Control
Calculations using Linear Programming," (Parts 1 and 2)
IEEE Trans. Power Apparatus and Systems, Sept/Oct 1978
– O. Alsac, J. Bright, M. Prais, B. Stott, "Further Developments
in LP-based Optimal Power Flow," IEEE Trans. Power
Systems, August 1990
• It is a widely used technique, particularly for real power
optimization; it is the technique used in PowerWorld
