Chapter 2 Power System Operation

Unit Two

Nonlinear Optimization Techniques


Introduction
• Power system operation problems are inherently nonlinear. Nonlinear
programming (NLP) based techniques can therefore handle power system
operation problems, such as the optimal power flow (OPF) problem, whose
objective and constraint functions are nonlinear.
Nonlinear programming

• Nonlinear optimization is the branch of mathematical
optimization that deals with problems in which the
objective function or the constraints are not linear. Unlike
linear optimization, it involves functions that may exhibit
curvature, irregular shapes, or other nonlinear relationships.
Nonlinear programming

• When the objective function or a constraint is nonlinear
• Analytical methods
– Applicable if the objective function is differentiable
– and the constraints are equalities
• Numerical methods
– Used when the objective function is not differentiable
Nonlinear programming

• Solved by numerical methods
• Needed for nondifferentiable or analytically unsolvable
objective functions
Example 2.1

f(x) = 0.65 - \frac{0.75}{1 + x^2} - 0.65\, x \tan^{-1}\frac{1}{x}

This example illustrates a case where the objective function
is complicated enough that the classical methods of
optimization are difficult to apply.
General outline of NLP

The basic philosophy of most numerical methods of
optimization is to produce a sequence of improved
approximations to the optimum according to the
following scheme:
1. Start with an initial trial point X1
2. Find a suitable direction Si that points in the direction of
the optimum
3. Find an appropriate step length \lambda_i for movement along the
direction Si
4. Obtain the new approximation Xi+1 as

X_{i+1} = X_i + \lambda_i S_i

5. Test whether Xi+1 is optimum; if not, set i = i + 1 and go
to step 2, else stop. (A minimal sketch of this scheme follows.)
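As an illustration, here is a minimal MATLAB sketch of the scheme,
instantiated as steepest descent with a fixed step length on the assumed
example f(x) = x(x - 1.5), whose minimizer is x* = 0.75 (the direction
rule and the tolerance are illustrative choices, not prescribed by the slides):

df = @(x) 2*x - 1.5;              % gradient of f(x) = x(x - 1.5)
x = 0.0;                          % step 1: initial trial point X1
lambda = 0.1;                     % step 3: step length (fixed here)
for i = 1:100
    S = -df(x);                   % step 2: direction pointing downhill
    x_new = x + lambda*S;         % step 4: X(i+1) = X(i) + lambda*S(i)
    if abs(x_new - x) < 1e-8      % step 5: optimality (convergence) test
        x = x_new;  break
    end
    x = x_new;                    % not yet optimal: iterate again
end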
Methods available for one dimensional case
One dimensional minimization methods

• Analytical methods (differential calculus methods)
• Numerical methods
– Elimination methods:
· Unrestricted search
· Exhaustive search
· Dichotomous search
· Fibonacci method
One dimensional minimization methods

Differential calculus methods:
• Analytical methods
• Applicable to continuous, twice-differentiable functions
• Calculation of the numerical value of the objective function is
virtually the last step of the process
• The optimal value of the objective function is calculated after
determining the optimal values of the decision variables
One dimensional minimization methods

Numerical methods:
• The values of the objective function are first found at various
combinations of the decision variables
• Conclusions are then drawn regarding the optimal solution
• Elimination methods can be used for the minimization of even
discontinuous functions
Unimodal function

• A unimodal function is one that has only one peak
(maximum) or valley (minimum) in a given interval
• Thus a function of one variable is said to be unimodal if, given
that two values of the variable are on the same side of the
optimum, the one nearer the optimum gives the better
functional value (i.e., the smaller value in the case of a
minimization problem). This can be stated mathematically as
follows:
A function f(x) is unimodal if
• x1 < x2 < x* implies that f(x2) < f(x1), and
• x2 > x1 > x* implies that f(x1) < f(x2), where x* is the minimum point
Unimodal function
Some examples of unimodal functions are shown in Fig below
Thus a unimodal function can be a nondifferentiable or even a
discontinuous function. If a function is known to be unimodal
in a given range, the interval in which the minimum lies can be
narrowed down provided that the function values are known at
two different points in the range.
Elimination methods

In most practical problems, the optimum solution is known to lie within
restricted ranges of the design variables. In some cases this range is
not known, and hence the search has to be made with no restrictions on
the values of the variables.

Unrestricted search:
• Search with fixed step size
• Search with accelerated step size

Unrestricted Search
Search with fixed step size
• The most elementary approach for such a problem is to use a
fixed step size and move from an initial guess point in a
favorable direction (positive or negative).
• The step size used must be small in relation to the final
accuracy desired.
• Simple to implement
• Not efficient in many cases

Elimination

• Unrestricted search – with fixed step size


Unrestricted Search
Search with accelerated step size
• Although the search with a fixed step size appears to be very
simple, its major limitation is the unrestricted nature of the
region in which the minimum can lie.
• For example, if the minimum point for a particular function
happens to be xopt = 50,000 and, in the absence of knowledge
about the location of the minimum, x1 and s are chosen as
0.0 and 0.1, respectively, we would have to evaluate the function
about 500,001 times to reach the minimum point. This involves a
large amount of computational work.
Unrestricted Search

Search with accelerated step size (cont’d)
• An obvious improvement can be achieved by increasing the
step size gradually until the minimum point is bracketed.
• A simple method consists of doubling the step size as long
as each move results in an improvement of the objective
function.
• One possibility is to reduce the step length after bracketing
the optimum in (xi-1, xi). Starting from either xi-1 or xi, the
basic procedure can be applied with a reduced step size.
This procedure is repeated until the bracketed interval
becomes sufficiently small. (A minimal sketch follows.)
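A minimal MATLAB sketch of the step-doubling idea, under the assumption
that the search starts at x = 0 with initial step 0.05 on the example
objective f(x) = x(x - 1.5) (both values illustrative):

f = @(x) x.*(x - 1.5);        % example objective
x = 0.0;  s = 0.05;           % initial point and initial step
while f(x + s) < f(x)         % keep moving while f improves
    x = x + s;                % accept the move
    s = 2*s;                  % accelerate: double the step
end
% the minimum is now bracketed roughly in (x - s/2, x + s);
% restart inside this interval with a smaller step to refine.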
Example: Find the minimum of f = x(x - 1.5) starting with
x0 = 0.0 and with a fixed step size of 0.05

• f0 = f(0) = 0
• x1 = 0.0 + 0.05 = 0.05; f1 = 0.05(0.05 - 1.5) = -0.0725; f1 < f0, continue
• x2 = 0.05 + 0.05 = 0.10; f2 = 0.1(0.1 - 1.5) = -0.14; f2 < f1, continue
• x3 = 0.15; f3 = 0.15(0.15 - 1.5) = -0.2025; f3 < f2, continue
• x4 = 0.20; f4 = 0.2(0.2 - 1.5) = -0.26; f4 < f3, continue
• and so on, until the objective stops improving (see the sketch below)
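A minimal fixed-step sketch of this example in MATLAB (the stopping
rule — stop as soon as the objective stops decreasing — is assumed):

f = @(x) x.*(x - 1.5);
x = 0.0;  s = 0.05;                 % x0 and the fixed step size
while f(x + s) < f(x)               % continue while f improves
    x = x + s;
end
fprintf('search stops at x = %.2f, f = %.4f\n', x, f(x));
% the iterates pass f = -0.0725, -0.14, -0.2025, -0.26, ... and the
% loop exits at x = 0.75, which happens to be the true minimizer here.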
Exhaustive search

• The exhaustive search method can be used to solve
problems where the interval in which the optimum is known
to lie is finite.
• Let xs and xf denote, respectively, the starting and final
points of the interval of uncertainty.
• The exhaustive search method consists of evaluating the
objective function at a predetermined number of equally
spaced points in the interval (xs, xf) and reducing the
interval of uncertainty using the assumption of unimodality.
Exhaustive search

• Suppose that a function is defined on the interval (xs, xf) and
is evaluated at eight equally spaced interior points x1 to x8,
with the smallest of the eight function values occurring at x6.
• Then, according to the assumption of unimodality, the
minimum must lie between points x5 and x7, and the interval
(x5, x7) can be taken as the final interval of uncertainty.
Exhaustive search
• In general, if the function is evaluated at n equally spaced points in
the original interval of uncertainty of length L0 = xf - xs, and if the
optimum value of the function (among the n function values) turns
out to be at point xj, the final interval of uncertainty is given by:

L_n = x_{j+1} - x_{j-1} = \frac{2}{n+1} L_0

• The final interval of uncertainty obtainable for different numbers of
trials in the exhaustive search method is given below:

Number of trials   2     3     4     5     6     …     n
Ln/L0              2/3   2/4   2/5   2/6   2/7   …     2/(n+1)
Example

Find the minimum of f = x(x - 1.5) in the interval (0.0, 1.0)
to within 10% of the exact value.

Solution: If the middle point of the final interval of
uncertainty is taken as the approximate optimum point,
the maximum deviation could be 1/(n+1) times the initial
interval of uncertainty. Thus, to find the optimum within
10% of the exact value, we should have

\frac{1}{n+1} \le \frac{1}{10} \quad\text{or}\quad n \ge 9
Example

By taking n = 9, the following function values can be calculated:

i      1      2      3      4      5      6      7      8      9
xi     0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9
f(xi) -0.14  -0.26  -0.36  -0.44  -0.50  -0.54  -0.56  -0.56  -0.54

Since f7 = f8, the assumption of unimodality gives the final interval of
uncertainty as L9 = (0.7, 0.8). By taking the middle point of L9 (i.e.,
0.75) as an approximation to the optimum point, we find that it is, in
fact, the true optimum point. (A MATLAB sketch follows.)
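A minimal MATLAB sketch of exhaustive search for this example (n = 9
interior points in (0, 1); the midpoint estimate and the boundary guards
are illustrative choices):

f  = @(x) x.*(x - 1.5);
xs = 0.0;  xf = 1.0;  n = 9;
xi = xs + (1:n)*(xf - xs)/(n + 1);     % interior points 0.1 ... 0.9
fi = f(xi);
[~, j] = min(fi);                      % index of the best trial
xlow = xi(max(j - 1, 1));              % x(j-1), guarded at the boundary
xupp = xi(min(j + 1, n));              % x(j+1), guarded at the boundary
xopt = (xlow + xupp)/2;                % midpoint of (x(j-1), x(j+1))
% with an exact tie between neighbours, as in the table above, the
% interval can be tightened to the two tied points themselves.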
Dichotomous search

• The exhaustive search method is a simultaneous search
method, in which all the experiments are conducted before
any judgement is made regarding the location of the
optimum point.
• In the dichotomous search, two experiments are placed as
close as possible to the center of the interval of uncertainty.
• Based on the relative values of the objective function at the
two points, almost half of the interval of uncertainty is
eliminated.
Dichotomous search

• Let the positions of the two experiments be given by:

x_1 = \frac{L_0}{2} - \frac{\delta}{2}
x_2 = \frac{L_0}{2} + \frac{\delta}{2}

where \delta is a small positive number chosen such that the two
experiments give significantly different results.
Dichotomous Search

• Then the new interval of uncertainty is given by (L_0/2 + \delta/2).
• The building block of dichotomous search consists of
conducting a pair of experiments at the center of the current
interval of uncertainty.
• The next pair of experiments is, therefore, conducted at the
center of the remaining interval of uncertainty.
• This reduces the interval of uncertainty by nearly a factor of
two with each pair.
Dichotomous Search

• The intervals of uncertainty at the ends of different pairs of
experiments are given in the following table (simplified forms):

Number of experiments       2                  4                   6
Final interval of
uncertainty                 L_0/2 + \delta/2   L_0/4 + 3\delta/4   L_0/8 + 7\delta/8

• In general, the final interval of uncertainty after conducting n
experiments (n even) is given by:

L_n = \frac{L_0}{2^{n/2}} + \delta\left(1 - \frac{1}{2^{n/2}}\right)
Dichotomous Search

Example: Find the minimum of f = x(x - 1.5) in the interval (0.0, 1.0) to within
10% of the exact value.

Solution: The ratio of final to initial intervals of uncertainty is given by:

\frac{L_n}{L_0} = \frac{1}{2^{n/2}} + \frac{\delta}{L_0}\left(1 - \frac{1}{2^{n/2}}\right)

where \delta is a small quantity, say 0.001, and n is the number of experiments.
If the middle point of the final interval is taken as the optimum point, the
requirement can be stated as:

\frac{1}{2}\,\frac{L_n}{L_0} \le \frac{1}{10}

i.e.

\frac{1}{2^{n/2}} + \frac{\delta}{L_0}\left(1 - \frac{1}{2^{n/2}}\right) \le \frac{1}{5}
Dichotomous Search

Since \delta = 0.001 and L_0 = 1.0, we have

\frac{1}{2^{n/2}} + \frac{1}{1000}\left(1 - \frac{1}{2^{n/2}}\right) \le \frac{1}{5}

i.e.

\frac{999}{1000}\,\frac{1}{2^{n/2}} \le \frac{995}{5000} \quad\text{or}\quad 2^{n/2} \ge \frac{999}{199} \approx 5.0

Since n has to be even, this inequality gives the minimum admissible value
of n as 6. The search is made as follows: the first two experiments are
made at:

x_1 = \frac{L_0}{2} - \frac{\delta}{2} = 0.5 - 0.0005 = 0.4995
x_2 = \frac{L_0}{2} + \frac{\delta}{2} = 0.5 + 0.0005 = 0.5005
Dichotomous Search

with the function values given by:

f_1 = f(x_1) = 0.4995(-1.0005) = -0.49975
f_2 = f(x_2) = 0.5005(-0.9995) = -0.50025

Since f2 < f1, the new interval of uncertainty will be (0.4995, 1.0). The
second pair of experiments is conducted at:

x_3 = 0.4995 + \frac{1.0 - 0.4995}{2} - 0.0005 = 0.74925
x_4 = 0.4995 + \frac{1.0 - 0.4995}{2} + 0.0005 = 0.75025

which gives the function values:

f_3 = f(x_3) = 0.74925(-0.75075) = -0.5624994375
f_4 = f(x_4) = 0.75025(-0.74975) = -0.5624999375
Dichotomous Search

Since f3 > f4, we delete (0.4995, x3) and obtain the new interval of
uncertainty as:
(x3, 1.0) = (0.74925, 1.0)
The final pair of experiments is conducted at:

x_5 = 0.74925 + \frac{1.0 - 0.74925}{2} - 0.0005 = 0.874125
x_6 = 0.74925 + \frac{1.0 - 0.74925}{2} + 0.0005 = 0.875125

which gives the function values:

f_5 = f(x_5) = 0.874125(-0.625875) = -0.5470929844
f_6 = f(x_6) = 0.875125(-0.624875) = -0.5468437342
Dichotomous Search

Since f5 < f6, the new interval of uncertainty is given by (x3, x6) =
(0.74925, 0.875125). The middle point of this interval can be taken as
the optimum, and hence:

x_{opt} \approx 0.8121875
f_{opt} \approx -0.5586327148
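A minimal MATLAB sketch that reproduces this example (three pairs of
experiments, delta = 0.001; both values come from the example itself):

f = @(x) x.*(x - 1.5);
a = 0.0;  b = 1.0;  delta = 0.001;
for pair = 1:3                      % n = 6 experiments = 3 pairs
    x1 = (a + b)/2 - delta/2;       % two experiments straddling
    x2 = (a + b)/2 + delta/2;       % the centre of (a, b)
    if f(x1) < f(x2)
        b = x2;                     % minimum lies in (a, x2)
    else
        a = x1;                     % minimum lies in (x1, b)
    end
end
xopt = (a + b)/2;                   % 0.8121875, f(xopt) = -0.5586...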
Fibonacci method

As stated earlier, the Fibonacci method can be used to find
the minimum of a function of one variable even if the
function is not continuous. The limitations of the method
are:
• The initial interval of uncertainty, in which the optimum lies,
has to be known.
• The function being optimized has to be unimodal in the
initial interval of uncertainty.
Fibonacci method

The limitations of the method (cont’d):
• The exact optimum cannot be located by this method; only an
interval, known as the final interval of uncertainty, will be
known. The final interval of uncertainty can be made as
small as desired by using more computations.
• The number of function evaluations to be used in the search,
or the resolution required, has to be specified beforehand.
Fibonacci method

This method makes use of the sequence of Fibonacci
numbers, {Fn}, for placing the experiments. These numbers
are defined as:

F_0 = F_1 = 1
F_n = F_{n-1} + F_{n-2}, \quad n = 2, 3, 4, \ldots

which yield the sequence 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
Fibonacci method

Procedure:
Let L0 be the initial interval of uncertainty defined by a \le x \le b,
and n the total number of experiments to be conducted. Define

L_2^* = \frac{F_{n-2}}{F_n} L_0

and place the first two experiments at points x1 and x2,
which are located at a distance of L2* from each end of L0.
Fibonacci method

Procedure (cont’d):
This gives

x_1 = a + L_2^* = a + \frac{F_{n-2}}{F_n} L_0
x_2 = b - L_2^* = a + \frac{F_{n-1}}{F_n} L_0

Discard part of the interval by using the unimodality
assumption. There then remains a smaller interval of
uncertainty L2 given by:

L_2 = L_0 - L_2^* = L_0\left(1 - \frac{F_{n-2}}{F_n}\right) = \frac{F_{n-1}}{F_n} L_0
Fibonacci method

Procedure (cont’d):
The only experiment left in L2 will be at a distance of

L_2^* = \frac{F_{n-2}}{F_n} L_0 = \frac{F_{n-2}}{F_{n-1}} L_2

from one end and

L_2 - L_2^* = \frac{F_{n-3}}{F_n} L_0 = \frac{F_{n-3}}{F_{n-1}} L_2

from the other end. Now place the third experiment in the interval
L2 so that the current two experiments are located at a distance of

L_3^* = \frac{F_{n-3}}{F_n} L_0 = \frac{F_{n-3}}{F_{n-1}} L_2

from each end of L2.
Fibonacci method

Procedure (cont’d):
• This process of discarding a certain interval and placing a new experiment
in the remaining interval can be continued, so that the location of the jth
experiment and the interval of uncertainty at the end of j experiments are,
respectively, given by:

L_j^* = \frac{F_{n-j}}{F_{n-(j-2)}} L_{j-1}

L_j = \frac{F_{n-(j-1)}}{F_n} L_0
Fibonacci method

Procedure (cont’d):
• The ratio of the interval of uncertainty remaining after conducting
j of the n predetermined experiments to the initial interval of
uncertainty becomes:

\frac{L_j}{L_0} = \frac{F_{n-(j-1)}}{F_n}

and for j = n we obtain

\frac{L_n}{L_0} = \frac{F_1}{F_n} = \frac{1}{F_n}
Fibonacci method

• The ratio Ln/L0 permits us to determine n, the number of
experiments required to achieve any desired accuracy in locating the
optimum point. A table of this reduction ratio for different numbers
of experiments follows directly from the Fibonacci sequence.
Fibonacci method

Position of the final experiment:
• In this method, the last experiment has to be placed with some
care. The equation

L_j^* = \frac{F_{n-j}}{F_{n-(j-2)}} L_{j-1}

gives

\frac{L_n^*}{L_{n-1}} = \frac{F_0}{F_2} = \frac{1}{2} \quad\text{for all } n

• Thus, after conducting n - 1 experiments and discarding the
appropriate interval in each step, the remaining interval will
contain one experiment precisely at its middle point.
Fibonacci method

Position of the final experiment (cont’d):
• However, the final experiment, namely the nth experiment,
is also to be placed at the center of the present interval
of uncertainty.
• That is, the position of the nth experiment will be the
same as that of the (n-1)th experiment, and this is true
for whatever value we choose for n.
• Since no new information can be gained by placing the
nth experiment exactly at the same location as the
(n-1)th experiment, we place the nth experiment
very close to the remaining valid experiment, as in the
dichotomous search method.
Fibonacci method

Example:
Minimize

f(x) = 0.65 - \frac{0.75}{1 + x^2} - 0.65\, x \tan^{-1}\frac{1}{x}

in the interval [0, 3] by the Fibonacci method using n = 6.
Solution: Here n = 6 and L0 = 3.0, which yield:

L_2^* = \frac{F_{n-2}}{F_n} L_0 = \frac{5}{13}(3.0) = 1.153846

Thus, the positions of the first two experiments are given by
x1 = 1.153846 and x2 = 3.0 - 1.153846 = 1.846154, with f1 = f(x1) =
-0.207270 and f2 = f(x2) = -0.115843. Since f1 < f2, we can delete
the interval [x2, 3] by using the unimodality assumption.
Fibonacci method
Solution (cont’d):
The third experiment is placed at x3 = 0 + (x2 - x1) = 1.846154 -
1.153846 = 0.692308, with the corresponding function value
f3 = -0.291364. Since f1 > f3, we can delete the interval [x1, x2].
Fibonacci method
Solution (cont’d):
The next experiment is located at x4 = 0 + (x1 - x3) = 1.153846 -
0.692308 = 0.461538, with f4 = -0.309811. Noting that f4 < f3, we
can delete the interval [x3, x1].
Fibonacci method
Solution (cont’d):
The location of the next experiment can be obtained as x5 = 0 + (x3 -
x4) = 0.692308 - 0.461538 = 0.230770, with the corresponding objective
function value f5 = -0.263678. Since f5 > f4, we can delete the
interval [0, x5].
Fibonacci method
Solution (cont’d):
The final experiment is positioned at x6 = x5 + (x3 - x4) = 0.230770 +
(0.692308 - 0.461538) = 0.461540, with f6 = -0.309810. (Note that,
theoretically, the value of x6 should be the same as that of x4; it
differs slightly because of round-off error.) Since f6 > f4, we delete
the interval [x6, x3] and obtain the final interval of uncertainty as
L6 = [x5, x6] = [0.230770, 0.461540].
Fibonacci method
Solution (cont’d):
The ratio of the final to the initial interval of uncertainty is

\frac{L_6}{L_0} = \frac{0.461540 - 0.230770}{3.0} = 0.076923

This value can be compared with

\frac{L_n}{L_0} = \frac{F_1}{F_n} = \frac{1}{F_n}

which states that if n = 6 experiments are planned, a resolution no
finer than 1/Fn = 1/F6 = 1/13 = 0.076923 can be expected from
the method.
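The whole procedure can be condensed into a short MATLAB sketch (the
symmetric "mirror" placement of each new point and the midpoint estimate
are the standard mechanics of the method; variable names are illustrative):

f = @(x) 0.65 - 0.75./(1 + x.^2) - 0.65*x.*atan(1./x);
a = 0.0;  b = 3.0;  n = 6;
F = ones(1, n + 1);                       % F(k) stores F_{k-1}: F0, F1, ...
for k = 3:n + 1, F(k) = F(k-1) + F(k-2); end
x1 = a + F(n-1)/F(n+1)*(b - a);           % a + (F_{n-2}/F_n) L0 = 1.153846
x2 = a + F(n)/F(n+1)*(b - a);             % a + (F_{n-1}/F_n) L0 = 1.846154
f1 = f(x1);  f2 = f(x2);
for j = 1:n - 2                           % place experiments 3 ... n
    if f1 < f2                            % minimum lies in [a, x2]
        b = x2;  x2 = x1;  f2 = f1;
        x1 = a + b - x2;  f1 = f(x1);     % mirror the surviving point
    else                                  % minimum lies in [x1, b]
        a = x1;  x1 = x2;  f1 = f2;
        x2 = a + b - x1;  f2 = f(x2);
    end
end
if f1 < f2, b = x2; else a = x1; end      % final interval reduction
xopt = (a + b)/2;                         % midpoint of [0.230770, 0.461540]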
Using MATLAB

• MATLAB has a built-in function named fminunc
• It solves unconstrained minimization problems
• Steps
– Write the objective function as a user-defined function in an
m-file
– Call fminunc using the objective function and initial values
as arguments
• Example: minimize the (Rosenbrock) function
f(x) = 100(x2 - x1^2)^2 + (1 - x1)^2
• with initial point x0 = (-1.2, 1)
• Solution
• 1. Write the objective function as an m-file:
function y = objc(x)
y = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;   % Rosenbrock function
• 2. Call fminunc with the objective function as an
argument:
xo = [-1.2; 1];                 % initial point
[x, fx] = fminunc(@objc, xo);   % x: minimizer, fx: minimum value
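For this function the exact minimum is known: f is a sum of squares that
vanishes only at x = (1, 1), so fminunc should return x close to (1, 1)
and fx close to 0.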
Reading assignment

• Steepest descent
• Quasi-Newton Method
Constrained NLP
• Problem statement (general form):
Minimize f(X)
subject to g_j(X) \le 0, \; j = 1, \ldots, m and h_k(X) = 0, \; k = 1, \ldots, p
Sequential quadratic programming
• Is among the most recent and most effective methods
• Converts the constrained NLP into a sequence of quadratic
subproblems using
– Gradients of the objective and constraint functions
– Lagrange multipliers
• Derivation
– The Lagrangian of the above NLP is
L(X, \lambda, \mu) = f(X) + \sum_j \lambda_j g_j(X) + \sum_k \mu_k h_k(X)
• Converted problem: a quadratic program in the search direction S
(a standard form is sketched below)
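The converted problem is not reproduced on the slide; as a sketch, the
standard SQP quadratic subproblem that this conversion yields is:

\begin{aligned}
\min_{S}\;\; & Q(S) = \nabla f(X)^{T} S + \tfrac{1}{2}\, S^{T} H S \\
\text{subject to}\;\; & \nabla g_j(X)^{T} S + g_j(X) \le 0, \quad j = 1, \ldots, m \\
& \nabla h_k(X)^{T} S + h_k(X) = 0, \quad k = 1, \ldots, p
\end{aligned}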
Solution

• The solution procedure has the following steps
– Step 1: start with an initial guess X
– Step 2: update X as X_{new} = X + \lambda S
– where S is the solution of the auxiliary (quadratic programming)
optimization problem sketched above
– where H is a positive definite matrix, initially taken as the
identity and updated so as to converge to the Hessian of the
Lagrangian (the update formula is not reproduced here)
– and the step length \lambda is found from the solution of a
one-dimensional minimization problem

• MATLAB solution
– Steps:
• Write the objective function as an m-file and save it
• Write the constraint function as a separate m-file and save it
• Prepare the upper and lower bounds as vectors
• Call the built-in function fmincon() using the objective function,
constraint functions, and upper and lower bounds as arguments
Example: solve the following minimization problem using
MATLAB

• Minimize f(x) = 0.1 x1 + 0.05773 x2
subject to 0.6/x1 + 0.3464/x2 \le 0.1, x1 \ge 6, x2 \ge 7
• Use X = [11.8756, 7] as the starting point
• Solution: there are no equality constraints and no separate
lower- and upper-bound vectors; all constraints are passed
through the nonlinear constraint function below
Programs

• function y = objc2(x)
y = 0.1*x(1) + 0.05773*x(2);
end
Save this as objc2.m in a separate m-file

• function [c, ceq] = constr(x)
ceq = [];                                  % no equality constraints
c = [0.6/x(1) + 0.3464/x(2) - 0.1;         % nonlinear inequality
     6 - x(1);                             % x1 >= 6
     7 - x(2)];                            % x2 >= 7
end
Save this also in a separate m-file named constr.m
• Write the main calling script:
xo = [11.8756 7];
[x, fx] = fmincon(@objc2, xo, [], [], [], [], [], [], @constr);
The answer will be

x=

9.4639 9.4642

fx =

1.4928
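As a quick check, the nonlinear constraint is active at the returned
point: 0.6/9.4639 + 0.3464/9.4642 ≈ 0.0634 + 0.0366 = 0.1000, and the
objective evaluates to 0.1(9.4639) + 0.05773(9.4642) ≈ 1.4928, matching fx.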
Modern optimization techniques
• Drawbacks of classical optimization techniques
– Require derivative information
– Search in a single direction
– Can get stuck at a local extremum
• AI-based techniques
– Fuzzy logic systems
– Neural networks
– Evolutionary algorithms
· Genetic algorithm
· Simulated annealing
· Particle swarm optimization
