03a Optimization

2/26/2024

INTRODUCTION TO
OPTIMIZATION
Dr. Naeem Iqbal

Economic dispatch problem

[Figure: generating units A, B, C serving load L]

 Several generating units serving the load
 What share of the load should each generating unit produce?
 Consider the limits of the generating units
 Ignore the limits of the network


Objective
 Most engineering activities have an
objective:
◦ Achieve the best possible design
◦ Achieve the most economical operating
conditions
 This objective is usually quantifiable
 Examples:
◦ minimize cost of building a transformer
◦ minimize cost of supplying power
◦ minimize losses in a power system
◦ maximize profit from a bidding strategy


Decision Variables
 The value of the objective is a function of some decision variables:

F = f(x1, x2, x3, ..., xn)

 Examples of decision variables:
◦ Dimensions of the transformer
◦ Output of generating units, position of taps
◦ Parameters of bids for selling electrical energy



Optimization Problem
 What values should the decision variables take so that F = f(x1, x2, x3, ..., xn) is a minimum or a maximum?


Example: function of one variable


[Figure: f(x) reaching its maximum value f(x*) at x = x*]

f(x) is maximum for x = x*



Minimization and Maximization


[Figure: f(x) with maximum f(x*) at x = x*, and its reflection -f(x) with minimum -f(x*) at the same point]

If x = x* maximizes f(x) then it minimizes - f(x)



Minimization and Maximization


 Maximizing f(x) is thus the same thing as minimizing g(x) = -f(x)
 Minimization and maximization problems are thus interchangeable
 Depending on the problem, the optimum is either a maximum or a minimum



Necessary Condition for Optimality


[Figure: f(x) with maximum at x*; df/dx > 0 to the left of x*, df/dx < 0 to the right]

If x = x* maximizes f(x), then:
f(x) < f(x*) for x < x*  ⇒  df/dx > 0 for x < x*
f(x) < f(x*) for x > x*  ⇒  df/dx < 0 for x > x*

Necessary Condition for Optimality


[Figure: f(x) with df/dx = 0 at the maximum x*]

If x = x* maximizes f(x), then df/dx = 0 at x = x*.



Example
[Figure: a function f(x) with several stationary points]

For what values of x is df/dx = 0?
In other words, for what values of x is the necessary condition for optimality satisfied?


Example
[Figure: f(x) with stationary points A, B, C, D]

 A, B, C, D are stationary points
 A and D are maxima
 B is a minimum
 C is an inflexion point



How can we distinguish minima and maxima?

[Figure: f(x) with stationary points A, B, C, D]

For x = A and x = D, we have d²f/dx² < 0.
The objective function is concave around a maximum.

How can we distinguish minima and maxima?

[Figure: f(x) with stationary points A, B, C, D]

For x = B, we have d²f/dx² > 0.
The objective function is convex around a minimum.


How can we distinguish minima and maxima?

[Figure: f(x) with stationary points A, B, C, D]

For x = C, we have d²f/dx² = 0.
The objective function is flat around an inflexion point.

Necessary and Sufficient Conditions of Optimality

 Necessary condition: df/dx = 0
 Sufficient condition:
◦ For a maximum: d²f/dx² < 0
◦ For a minimum: d²f/dx² > 0

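These two conditions are easy to apply numerically. A minimal sketch, using a hypothetical example f(x) = x³ - 3x (not one of the slide examples), whose derivative f'(x) = 3x² - 3 vanishes at x = ±1:

```python
# Classify the stationary points of the hypothetical example f(x) = x**3 - 3*x.
def d2f(x):
    return 6.0 * x  # second derivative of x**3 - 3*x

def classify(x_star):
    # Apply the sufficient condition at a stationary point x_star
    curvature = d2f(x_star)
    if curvature > 0:
        return "minimum"      # convex here
    if curvature < 0:
        return "maximum"      # concave here
    return "inconclusive"     # flat: possibly an inflexion point

print(classify(1.0))   # x = +1 -> "minimum"
print(classify(-1.0))  # x = -1 -> "maximum"
```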


Isn’t all this obvious?


 Can’t we tell all this by looking at the
objective function?
◦ Yes, for a simple, one-dimensional case when we
know the shape of the objective function
◦ For complex, multi-dimensional cases (i.e. with
many decision variables) we can’t visualize the
shape of the objective function
◦ We must then rely on mathematical techniques


Feasible Set
 The values that the decision variables can
take are usually limited
 Examples:
◦ Physical dimensions of a transformer must be
positive
◦ Active power output of a generator may be
limited to a certain range (e.g. 200 MW to
500 MW)
◦ Reactive power output of a generator may be
limited to a certain range (e.g. -100 MVAr to
150 MVAr)


Feasible Set
[Figure: f(x) over the feasible set xMIN ≤ x ≤ xMAX, containing the maxima A and D]

The values of the objective function outside the feasible set do not matter.

Interior and Boundary Solutions

[Figure: f(x) on xMIN ≤ x ≤ xMAX with interior points A, B, D, E]

 A and D are interior maxima
 B and E are interior minima
 xMIN is a boundary minimum
 xMAX is a boundary maximum
Boundary extrema do not satisfy the optimality conditions!


Two-Dimensional Case
[Figure: surface f(x1, x2) with minimum at (x1*, x2*)]

f(x1, x2) is minimum for x1 = x1*, x2 = x2*



Necessary Conditions for Optimality

[Figure: surface f(x1, x2) with minimum at (x1*, x2*)]

∂f(x1, x2)/∂x1 = 0 at (x1*, x2*)
∂f(x1, x2)/∂x2 = 0 at (x1*, x2*)


Multi-Dimensional Case
At a maximum or minimum value of f(x1, x2, x3, ..., xn) we must have:

∂f/∂x1 = 0
∂f/∂x2 = 0
...
∂f/∂xn = 0

A point where these conditions are satisfied is called a stationary point.


Sufficient Conditions for Optimality

[Figure: surfaces f(x1, x2) showing a minimum and a maximum]


Sufficient Conditions for Optimality

[Figure: surface f(x1, x2) with a saddle point]

Sufficient Conditions for Optimality

Calculate the Hessian matrix at the stationary point:

$$H = \begin{pmatrix}
\dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \partial x_n} \\
\dfrac{\partial^2 f}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2 \partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial^2 f}{\partial x_n \partial x_1} & \dfrac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2}
\end{pmatrix}$$



Sufficient Conditions for Optimality


 Calculate the eigenvalues of the Hessian matrix at the stationary point
 If all the eigenvalues are strictly positive:
◦ The matrix is positive definite
◦ The stationary point is a minimum
 If all the eigenvalues are strictly negative:
◦ The matrix is negative definite
◦ The stationary point is a maximum
 If some of the eigenvalues are positive and others are negative:
◦ The stationary point is a saddle point
 If some eigenvalues are zero (the matrix is only semi-definite), the test is inconclusive

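For two decision variables this eigenvalue test fits in a few lines, using the closed-form eigenvalues of a symmetric 2×2 matrix (the function name here is ours, not from the slides):

```python
import math

def classify_stationary(H):
    # H = [[a, b], [b, c]]: symmetric 2x2 Hessian evaluated at the stationary point
    a, b, c = H[0][0], H[0][1], H[1][1]
    # Closed-form eigenvalues of a symmetric 2x2 matrix
    disc = math.sqrt((a - c) ** 2 + 4.0 * b * b)
    lam1 = (a + c + disc) / 2.0
    lam2 = (a + c - disc) / 2.0
    if lam1 > 0 and lam2 > 0:
        return "minimum"       # positive definite
    if lam1 < 0 and lam2 < 0:
        return "maximum"       # negative definite
    if lam1 * lam2 < 0:
        return "saddle point"  # eigenvalues of opposite sign
    return "inconclusive"      # at least one zero eigenvalue

print(classify_stationary([[2, -2], [-2, 8]]))  # Hessian of Example 1 below
print(classify_stationary([[-2, 2], [2, 6]]))   # Hessian of Example 2 below
```

The two calls reproduce the conclusions of Examples 1 and 2 later in the lecture: a minimum and a saddle point respectively.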

Contours

[Figure: surface f(x1, x2) cut at levels F1 and F2, with the corresponding contours in the (x1, x2) plane]


Contours

A contour is the locus of all the points that give the same value to the objective function.

[Figure: nested contours around a minimum or maximum in the (x1, x2) plane]

Example 1

Minimise C = x1² + 4x2² - 2x1x2

Necessary conditions for optimality:

∂C/∂x1 = 2x1 - 2x2 = 0
∂C/∂x2 = -2x1 + 8x2 = 0
⇒ x1 = 0, x2 = 0 is a stationary point



Example 1

Sufficient conditions for optimality: the Hessian matrix

$$\begin{pmatrix} \dfrac{\partial^2 C}{\partial x_1^2} & \dfrac{\partial^2 C}{\partial x_1 \partial x_2} \\ \dfrac{\partial^2 C}{\partial x_2 \partial x_1} & \dfrac{\partial^2 C}{\partial x_2^2} \end{pmatrix} = \begin{pmatrix} 2 & -2 \\ -2 & 8 \end{pmatrix}$$

must be positive definite (i.e. all eigenvalues must be positive):

$$\begin{vmatrix} \lambda - 2 & 2 \\ 2 & \lambda - 8 \end{vmatrix} = 0 \;\Rightarrow\; \lambda^2 - 10\lambda + 12 = 0 \;\Rightarrow\; \lambda = \frac{10 \pm \sqrt{52}}{2} > 0$$

Both eigenvalues are positive, so the stationary point is a minimum.

Example 1

[Figure: elliptical contours C = 1, 4, 9 around the minimum at the origin, where C = 0]


Example 2

Minimize C = -x1² + 3x2² + 2x1x2

Necessary conditions for optimality:

∂C/∂x1 = -2x1 + 2x2 = 0
∂C/∂x2 = 2x1 + 6x2 = 0
⇒ x1 = 0, x2 = 0 is a stationary point


Example 2

Sufficient conditions for optimality: the Hessian matrix is

$$\begin{pmatrix} \dfrac{\partial^2 C}{\partial x_1^2} & \dfrac{\partial^2 C}{\partial x_1 \partial x_2} \\ \dfrac{\partial^2 C}{\partial x_2 \partial x_1} & \dfrac{\partial^2 C}{\partial x_2^2} \end{pmatrix} = \begin{pmatrix} -2 & 2 \\ 2 & 6 \end{pmatrix}$$

$$\begin{vmatrix} \lambda + 2 & -2 \\ -2 & \lambda - 6 \end{vmatrix} = 0 \;\Rightarrow\; \lambda^2 - 4\lambda - 16 = 0 \;\Rightarrow\; \lambda = \frac{4 + \sqrt{80}}{2} > 0 \;\text{ or }\; \lambda = \frac{4 - \sqrt{80}}{2} < 0$$

One eigenvalue is positive and one is negative, so the stationary point is a saddle point.


Example 2

[Figure: contours C = 0, ±1, ±4, ±9 around the origin; the stationary point at the origin is a saddle point]

Optimization with Constraints



Optimization with Equality Constraints

 There are usually restrictions on the values that the decision variables can take

Minimise f(x1, x2, ..., xn)   (objective function)
subject to:
ω1(x1, x2, ..., xn) = 0
...
ωm(x1, x2, ..., xn) = 0   (equality constraints)


Number of Constraints
 N decision variables
 M equality constraints
 If M > N, the problem is over-constrained
◦ There is usually no solution
 If M = N, the problem is determined
◦ There may be a solution
 If M < N, the problem is under-constrained
◦ There is usually room for optimization



Example 1

Minimise f(x1, x2) = 0.25x1² + x2²
Subject to ω(x1, x2) ≡ 5 - x1 - x2 = 0

[Figure: elliptical contours of f(x1, x2) = 0.25x1² + x2² with the constraint line 5 - x1 - x2 = 0; the minimum lies where the line touches a contour]

Example 2: Economic Dispatch

[Figure: units G1 and G2 with outputs x1 and x2 supplying load L]

C1 = a1 + b1x1²   (cost of running unit 1)
C2 = a2 + b2x2²   (cost of running unit 2)
C = C1 + C2 = a1 + a2 + b1x1² + b2x2²   (total cost)

Optimization problem:
Minimise C = a1 + a2 + b1x1² + b2x2²
Subject to: x1 + x2 = L


Solution by substitution

Minimise C = a1 + a2 + b1x1² + b2x2²
Subject to: x1 + x2 = L
⇒ x2 = L - x1
⇒ C = a1 + a2 + b1x1² + b2(L - x1)²

Unconstrained minimization:

dC/dx1 = 2b1x1 - 2b2(L - x1) = 0
⇒ x1 = b2L / (b1 + b2)   (and x2 = b1L / (b1 + b2))
d²C/dx1² = 2b1 + 2b2 > 0 ⇒ minimum
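The closed-form result above is easy to check numerically. A minimal sketch with illustrative coefficient values (not from the slides):

```python
# Economic dispatch of two units by substitution: minimize
# C = a1 + a2 + b1*x1**2 + b2*x2**2 subject to x1 + x2 = L.
def dispatch(b1, b2, L):
    x1 = b2 * L / (b1 + b2)  # optimal share of unit 1
    x2 = b1 * L / (b1 + b2)  # optimal share of unit 2
    return x1, x2

# Illustrative coefficients: unit 1 is cheaper at the margin (smaller b)
x1, x2 = dispatch(b1=0.01, b2=0.02, L=300.0)
print(x1, x2)  # the cheaper unit carries the larger share of the load
assert abs(x1 + x2 - 300.0) < 1e-9  # the load balance constraint holds
```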

Solution by substitution

 Difficult
 Usually impossible when the constraints are non-linear
 Provides little or no insight into the solution

Alternative: solution using Lagrange multipliers



Gradient

Consider a function f(x1, x2, ..., xn).
The gradient of f is the vector

$$\nabla f = \begin{pmatrix} \dfrac{\partial f}{\partial x_1} \\ \dfrac{\partial f}{\partial x_2} \\ \vdots \\ \dfrac{\partial f}{\partial x_n} \end{pmatrix}$$


Properties of the Gradient


 Each component of the gradient vector
indicates the rate of change of the function
in that direction
 The gradient indicates the direction in which
a function of several variables increases most
rapidly
 The magnitude and direction of the gradient
usually depend on the point considered
 At each point, the gradient is perpendicular
to the contour of the function



Example 3

f(x, y) = ax² + by²

$$\nabla f = \begin{pmatrix} \partial f/\partial x \\ \partial f/\partial y \end{pmatrix} = \begin{pmatrix} 2ax \\ 2by \end{pmatrix}$$

[Figure: elliptical contours of f with gradient vectors drawn at points A, C and D, each perpendicular to the contour through that point]
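The analytic gradient of Example 3 can be checked against a central finite-difference estimate (the values of a, b and the test point below are arbitrary):

```python
def grad(a, b, x, y):
    # Analytic gradient of f(x, y) = a*x**2 + b*y**2, as on the slide
    return (2.0 * a * x, 2.0 * b * y)

def fd_grad(f, x, y, h=1e-6):
    # Central finite-difference estimate of the gradient of f at (x, y)
    return ((f(x + h, y) - f(x - h, y)) / (2.0 * h),
            (f(x, y + h) - f(x, y - h)) / (2.0 * h))

a, b = 1.0, 3.0                       # arbitrary coefficients
f = lambda x, y: a * x * x + b * y * y
print(grad(a, b, 2.0, 1.0))           # (4.0, 6.0)
print(fd_grad(f, 2.0, 1.0))           # agrees to within rounding error
```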

Example 4

f(x, y) = ax + by

$$\nabla f = \begin{pmatrix} \partial f/\partial x \\ \partial f/\partial y \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix}$$

[Figure: parallel straight contours f = f1, f2, f3, with the constant gradient ∇f perpendicular to all of them]


Lagrange multipliers

Minimise f(x1, x2) = 0.25x1² + x2² subject to ω(x1, x2) ≡ 5 - x1 - x2 = 0

[Figure: contours f = 0.25x1² + x2² = 5 and f = 6, with the constraint line ω(x1, x2) = 5 - x1 - x2 = 0]

Lagrange multipliers

[Figure: contours f(x1, x2) = 5 and f(x1, x2) = 6, with the gradient ∇f = (∂f/∂x1, ∂f/∂x2) drawn at several points]


Lagrange multipliers

[Figure: the constraint ω(x1, x2) with its gradient ∇ω = (∂ω/∂x1, ∂ω/∂x2) drawn at several points, shown against the contours f(x1, x2) = 5 and f(x1, x2) = 6]

Lagrange multipliers

 The solution must be on the constraint
 To reduce the value of f, we must move in a direction opposite to the gradient

[Figure: starting from point A on the constraint ω(x1, x2), moving against ∇f along the constraint reduces f from 6 towards 5]


Lagrange multipliers

• We stop when the gradient of the function is perpendicular to the constraint, because moving further would increase the value of the function

[Figure: points A, B and C on the constraint ω(x1, x2); at the optimum C, ∇f is parallel to ∇ω]

At the optimum, the gradient of the function is parallel to the gradient of the constraint.

Lagrange multipliers

At the optimum, we must have: ∇f ∥ ∇ω
Which can be expressed as: ∇f + λ∇ω = 0
In terms of the coordinates:
∂f/∂x1 + λ ∂ω/∂x1 = 0
∂f/∂x2 + λ ∂ω/∂x2 = 0
The constraint must also be satisfied: ω(x1, x2) = 0

λ is called the Lagrange multiplier



Lagrangian function

To simplify the writing of the conditions for optimality, it is useful to define the Lagrangian function:

ℒ(x1, x2, λ) = f(x1, x2) + λω(x1, x2)

The necessary conditions for optimality are then given by the partial derivatives of the Lagrangian:

∂ℒ/∂x1 = ∂f/∂x1 + λ ∂ω/∂x1 = 0
∂ℒ/∂x2 = ∂f/∂x2 + λ ∂ω/∂x2 = 0
∂ℒ/∂λ = ω(x1, x2) = 0

Example

Minimise f(x1, x2) = 0.25x1² + x2² subject to ω(x1, x2) ≡ 5 - x1 - x2 = 0

ℒ(x1, x2, λ) = 0.25x1² + x2² + λ(5 - x1 - x2)

∂ℒ/∂x1 = 0.5x1 - λ = 0
∂ℒ/∂x2 = 2x2 - λ = 0
∂ℒ/∂λ = 5 - x1 - x2 = 0


Example

0.5x1 - λ = 0 ⇒ x1 = 2λ
2x2 - λ = 0 ⇒ x2 = λ/2
5 - x1 - x2 = 0 ⇒ 5 - 2λ - λ/2 = 0

⇒ λ = 2, x1 = 4, x2 = 1
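The same answer falls out of a few lines of code that follow the slide's substitutions exactly:

```python
# Solve the optimality conditions of the example:
#   0.5*x1 - lam = 0,  2*x2 - lam = 0,  5 - x1 - x2 = 0
lam = 5.0 / (2.0 + 0.5)   # from 5 - 2*lam - lam/2 = 0
x1 = 2.0 * lam            # from the first condition
x2 = lam / 2.0            # from the second condition
print(lam, x1, x2)        # 2.0 4.0 1.0
```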

Example

Minimise f(x1, x2) = 0.25x1² + x2²
Subject to ω(x1, x2) ≡ 5 - x1 - x2 = 0

[Figure: the contour f(x1, x2) = 5 touches the constraint line at the minimum (x1, x2) = (4, 1)]


Important Note!

If the constraint is of the form: ax1 + bx2 = L
It must be included in the Lagrangian as follows:

ℒ = f(x1, ..., xn) + λ(L - ax1 - bx2)

And not as follows:

ℒ = f(x1, ..., xn) + λ(ax1 + bx2)


Application to Economic Dispatch

[Figure: units G1 and G2 with outputs x1 and x2 supplying load L]

minimise f(x1, x2) = C1(x1) + C2(x2)
s.t. ω(x1, x2) ≡ L - x1 - x2 = 0

ℒ(x1, x2, λ) = C1(x1) + C2(x2) + λ(L - x1 - x2)

∂ℒ/∂x1 = dC1/dx1 - λ = 0
∂ℒ/∂x2 = dC2/dx2 - λ = 0
∂ℒ/∂λ = L - x1 - x2 = 0

⇒ dC1/dx1 = dC2/dx2 = λ   (equal incremental cost solution)


Equal incremental cost solution

[Figure: cost curves C1(x1) and C2(x2), and the corresponding incremental cost curves dC1/dx1 and dC2/dx2]

Interpretation of this solution

[Figure: for a given λ, the outputs x1* and x2* are read off the incremental cost curves dC1/dx1 and dC2/dx2, and their sum is compared with the load L]

If L - x1* - x2* < 0, reduce λ; if L - x1* - x2* > 0, increase λ.
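This feedback rule is the classical lambda iteration. A minimal sketch, assuming quadratic costs Ci = ai + bi·xi² (so each unit's response to a price is xi(λ) = λ/(2bi)), illustrative coefficients, and bisection standing in for the "increase/reduce λ" loop:

```python
def lambda_iteration(b1, b2, L, tol=1e-9):
    # Bisection on lambda: each unit's output is read off its incremental
    # cost curve dCi/dxi = 2*bi*xi, i.e. xi = lam / (2*bi).
    lo, hi = 0.0, 1e6  # assumed bracket for lambda
    while hi - lo > tol:
        lam = (lo + hi) / 2.0
        x1, x2 = lam / (2.0 * b1), lam / (2.0 * b2)
        if L - x1 - x2 > 0:
            lo = lam   # mismatch > 0: increase lambda
        else:
            hi = lam   # mismatch < 0: reduce lambda
    return lam, x1, x2

# Illustrative coefficients (not from the slides)
lam, x1, x2 = lambda_iteration(b1=0.01, b2=0.02, L=300.0)
print(round(lam, 4), round(x1, 2), round(x2, 2))
```

For quadratic costs the answer could of course be written in closed form; the iteration is shown because it also works when the incremental cost curves are only available as lookup tables.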


Physical interpretation

[Figure: cost curve C(x) and its slope dC(x)/dx at output x]

dC/dx = lim(Δx→0) ΔC/Δx

For Δx sufficiently small: ΔC ≈ (dC/dx)·Δx
If Δx = 1 MW: ΔC ≈ dC/dx

The incremental cost is the cost of one additional MW for one hour. This cost depends on the output of the generator.

Physical interpretation

dC1/dx1: cost of one more MW from unit 1
dC2/dx2: cost of one more MW from unit 2

Suppose that dC1/dx1 > dC2/dx2:
Decrease the output of unit 1 by 1 MW ⇒ decrease in cost = dC1/dx1
Increase the output of unit 2 by 1 MW ⇒ increase in cost = dC2/dx2
Net change in cost = dC2/dx2 - dC1/dx1 < 0


Physical interpretation

It pays to increase the output of unit 2 and decrease the output of unit 1 until we have:

dC1/dx1 = dC2/dx2 = λ

The Lagrange multiplier λ is thus the cost of one more MW at the optimal solution.

This is a very important result with many applications in economics.

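This reading of λ as a marginal cost can be verified numerically for the quadratic-cost dispatch problem used earlier (coefficients are illustrative; the fixed costs a1, a2 are dropped since they do not affect the margin):

```python
def optimal_cost(b1, b2, L):
    # Optimal total cost C*(L) = b1*x1**2 + b2*x2**2, using the closed-form
    # dispatch x1 = b2*L/(b1 + b2), x2 = L - x1 from the substitution slide
    x1 = b2 * L / (b1 + b2)
    x2 = L - x1
    return b1 * x1 ** 2 + b2 * x2 ** 2

b1, b2, L = 0.01, 0.02, 300.0
lam = 2.0 * b1 * (b2 * L / (b1 + b2))  # dC1/dx1 evaluated at the optimum
# Cost of one additional MW: central difference of C* over a 1 MW window
marginal = optimal_cost(b1, b2, L + 0.5) - optimal_cost(b1, b2, L - 0.5)
print(lam, marginal)  # the two values agree
```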

Generalization

Minimise f(x1, x2, ..., xn)
subject to:
ω1(x1, x2, ..., xn) = 0
...
ωm(x1, x2, ..., xn) = 0

Lagrangian:

ℒ = f(x1, ..., xn) + λ1ω1(x1, ..., xn) + ... + λmωm(x1, ..., xn)

• One Lagrange multiplier for each constraint
• n + m variables: x1, ..., xn and λ1, ..., λm



Optimality conditions

ℒ = f(x1, ..., xn) + λ1ω1(x1, ..., xn) + ... + λmωm(x1, ..., xn)

∂ℒ/∂x1 = ∂f/∂x1 + λ1 ∂ω1/∂x1 + ... + λm ∂ωm/∂x1 = 0
...
∂ℒ/∂xn = ∂f/∂xn + λ1 ∂ω1/∂xn + ... + λm ∂ωm/∂xn = 0
(n equations)

∂ℒ/∂λ1 = ω1(x1, ..., xn) = 0
...
∂ℒ/∂λm = ωm(x1, ..., xn) = 0
(m equations)

n + m equations in n + m variables.

