Chapter 2 Power System Operation
• Is a numerical method
• Used for non-differentiable or analytically unsolvable objective functions
Example 2.1
f(x) = 0.65 - [0.75/(1 + x^2)] - 0.65 x tan^-1(1/x)
Analytical method
Numerical methods:
The values of the objective function are first found at various combinations
of the decision variables
UNRESTRICTED SEARCH
Simple to implement
Example: minimize f(x) = x(x - 1.5) starting from x1 = 0.0 with a fixed step of 0.05:
• F0(0) = 0
• x2 = 0.0 + 0.05 = 0.05
• F1(0.05) = 0.05(0.05 - 1.5) = -0.0725
• F1 < F0, continue
• x3 = 0.05 + 0.05 = 0.1
• F2(0.1) = 0.1(0.1 - 1.5) = -0.14
• F2 < F1, continue
• x4 = 0.15
• F3(0.15) = 0.15(0.15 - 1.5) = -0.2025
• F3 < F2, continue
• x5 = 0.2
• F4(0.2) = 0.2(0.2 - 1.5) = -0.26
• F4 < F3, continue
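The fixed-step search above can be sketched in Python. This is a minimal sketch: f(x) = x(x - 1.5), the start point 0.0 and the step 0.05 are taken from the example; the helper name and the stopping rule (stop as soon as f stops decreasing) are illustrative.

```python
def unrestricted_search(f, x0, step, max_iter=1000):
    """Fixed-step unrestricted search: keep stepping while f decreases."""
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        x_new = x + step
        f_new = f(x_new)
        if f_new >= fx:        # f stopped decreasing: previous point is the best
            break
        x, fx = x_new, f_new
    return x, fx

f = lambda x: x * (x - 1.5)
x_best, f_best = unrestricted_search(f, 0.0, 0.05)
# x_best ≈ 0.75, f_best ≈ -0.5625
```

Continuing the tabulated steps, the search advances until x = 0.75, where F(0.8) = -0.56 exceeds F(0.75) = -0.5625 and the search stops.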
Exhaustive search
Ln = x_{j+1} - x_{j-1} = (2/(n+1)) L0
• The final interval of uncertainty obtainable for different numbers of trials in the exhaustive search method is given below:

Number of trials | 2   | 3   | 4   | 5   | 6   | ... | n
Ln/L0            | 2/3 | 2/4 | 2/5 | 2/6 | 2/7 | ... | 2/(n+1)
Example: to locate the optimum to within 10% of L0 (taking the middle point of the final interval as the approximate optimum, so that Ln/2 <= L0/10):
1/(n+1) <= 1/10, or n >= 9
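An exhaustive search with n = 9 experiments can be sketched as follows. The interval (0, 1) and the objective f(x) = x(x - 1.5) are assumed for illustration (they are the values used in the other worked examples); the function name is illustrative.

```python
def exhaustive_search(f, a, b, n):
    """Evaluate f at n equally spaced interior points; the optimum lies
    between the two points adjacent to the best experiment."""
    xs = [a + (b - a) * j / (n + 1) for j in range(n + 2)]  # endpoints included
    fs = [f(x) for x in xs]
    j = min(range(1, n + 1), key=lambda k: fs[k])           # best interior point
    return xs[j - 1], xs[j + 1]                             # (x_{j-1}, x_{j+1})

f = lambda x: x * (x - 1.5)
lo, hi = exhaustive_search(f, 0.0, 1.0, 9)
# hi - lo = 2*L0/(n+1) = 0.2
```

The returned bracket has length 2L0/(n+1) = L0/5, consistent with the table above.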
Example (continued): the n = 9 experiments i = 1, 2, ..., 9 are spaced equally across the initial interval, giving a final interval of uncertainty of 2L0/10 = L0/5.
In the dichotomous search, each pair of experiments (placed δ apart about the middle of the current interval) reduces the interval of uncertainty to roughly half of its previous value:

Number of experiments         | 2          | 4                       | 6
Final interval of uncertainty | (L0 + δ)/2 | (1/2)[(L0 + δ)/2] + δ/2 | (1/4)[(L0 + δ)/2] + 3δ/4

In general, after n experiments (n even):
Ln = L0/2^(n/2) + δ(1 - 1/2^(n/2))
Dichotomous Search
Example: minimize f(x) = x(x - 1.5) in (0.0, 1.0) with δ = 0.001, so that the middle point of the final interval of uncertainty lies within 10% of the exact optimum:
(1/2)(Ln/L0) <= 1/10
i.e.
1/2^(n/2) + (δ/L0)(1 - 1/2^(n/2)) <= 1/5
Since δ/L0 = 0.001, this gives
1/2^(n/2) + (1/1000)(1 - 1/2^(n/2)) <= 1/5
i.e.
(999/1000)(1/2^(n/2)) <= 1/5 - 1/1000 = 995/5000
or 2^(n/2) >= 999/199 ≈ 5.0
Since n has to be even, this inequality gives the minimum admissible value of n as 6. The search is made as follows: the first two experiments are made at
x1 = L0/2 - δ/2 = 0.5 - 0.0005 = 0.4995
x2 = L0/2 + δ/2 = 0.5 + 0.0005 = 0.5005
with f1 = f(x1) = -0.49975 and f2 = f(x2) = -0.50025.
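The choice n = 6 can be confirmed numerically from the interval formula, with L0 = 1.0 and δ = 0.001 as in the example:

```python
# Smallest even n with Ln = L0/2^(n/2) + δ(1 - 1/2^(n/2)) <= L0/5
L0, delta = 1.0, 0.001
n = 2
while L0 / 2 ** (n / 2) + delta * (1 - 1 / 2 ** (n / 2)) > L0 / 5:
    n += 2
# n = 6 (L6 = 0.125875 <= 0.2, while L4 = 0.25075 > 0.2)
```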
Since f2 < f1, the new interval of uncertainty will be (0.4995, 1.0). The second pair of experiments is conducted at
x3 = 0.4995 + (1.0 - 0.4995)/2 - 0.0005 = 0.74925
x4 = 0.4995 + (1.0 - 0.4995)/2 + 0.0005 = 0.75025
which gives the function values
f3 = f(x3) = 0.74925(-0.75075) = -0.5624994375
f4 = f(x4) = 0.75025(-0.74975) = -0.5624999375
Since f4 < f3, the new interval of uncertainty is (x3, 1.0) = (0.74925, 1.0). The third pair of experiments is conducted at x5 = 0.874125 and x6 = 0.875125, giving
f5 = f(x5) = 0.874125(-0.625875) = -0.5470929844
f6 = f(x6) = 0.875125(-0.624875) = -0.5468437342
Since f5 < f6, the final interval of uncertainty is (0.74925, x6) = (0.74925, 0.875125), with length 0.125875 < L0/5 as required.
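The whole dichotomous search can be sketched in Python, with f(x) = x(x - 1.5), the interval (0, 1), δ = 0.001 and n = 6 as in the example; the function name is illustrative.

```python
def dichotomous_search(f, a, b, delta, n):
    """Place n/2 pairs of experiments, each pair delta apart about the
    middle of the current interval of uncertainty."""
    for _ in range(n // 2):
        mid = (a + b) / 2
        x1, x2 = mid - delta / 2, mid + delta / 2
        if f(x1) < f(x2):
            b = x2            # minimum lies in (a, x2)
        else:
            a = x1            # minimum lies in (x1, b)
    return a, b

f = lambda x: x * (x - 1.5)
a, b = dichotomous_search(f, 0.0, 1.0, 0.001, 6)
# (a, b) ≈ (0.74925, 0.875125), matching the worked example
```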
Fibonacci Method
• Uses the sequence of Fibonacci numbers, defined by
F0 = F1 = 1
Fn = Fn-1 + Fn-2,  n = 2, 3, 4, ...
Procedure:
Let L0 be the initial interval of uncertainty defined by a <= x <= b and n be the total number of experiments to be conducted. Define
L2* = (Fn-2/Fn) L0
Place the first two experiments at points x1 and x2, located at a distance of L2* from each end of L0. This gives
x1 = a + L2* = a + (Fn-2/Fn) L0
x2 = b - L2* = b - (Fn-2/Fn) L0 = a + (Fn-1/Fn) L0
Discard part of the interval by using the unimodality assumption. Then there remains a smaller interval of uncertainty L2 given by
L2 = L0 - L2* = L0 (1 - Fn-2/Fn) = (Fn-1/Fn) L0
The only experiment left in L2 will be at a distance of
L2* = (Fn-2/Fn) L0 = (Fn-2/Fn-1) L2
from one end and
L2 - L2* = (Fn-3/Fn) L0 = (Fn-3/Fn-1) L2
from the other end. Now place the third experiment in the interval L2 so that the current two experiments are located at a distance of
L3* = (Fn-3/Fn) L0 = (Fn-3/Fn-1) L2
from each end of L2.
This process of discarding a certain interval and placing a new experiment in the remaining interval can be continued, so that the location of the jth experiment and the interval of uncertainty at the end of j experiments are, respectively, given by
Lj* = (Fn-j / Fn-(j-2)) L(j-1)
Lj = (Fn-(j-1) / Fn) L0
• The ratio of the interval of uncertainty remaining after conducting j of the n predetermined experiments to the initial interval of uncertainty becomes
Lj/L0 = Fn-(j-1)/Fn
and for j = n, we obtain
Ln/L0 = F1/Fn = 1/Fn
Example:
Minimize f(x) = 0.65 - [0.75/(1+x^2)] - 0.65 x tan^-1(1/x) in the interval [0,3] by the Fibonacci method using n = 6.
Solution: Here n = 6 and L0 = 3.0, which yield
L2* = (Fn-2/Fn) L0 = (5/13)(3.0) = 1.153846
Thus, the positions of the first two experiments are given by
x1=1.153846 and x2=3.0-1.153846=1.846154 with f1=f(x1)=-
0.207270 and f2=f(x2)=-0.115843. Since f1 is less than f2, we
can delete the interval [x2,3] by using the unimodality
assumption.
The third experiment is placed at x3=0+ (x2-x1)=1.846154-
1.153846=0.692308, with the corresponding function value of f3=-
0.291364. Since f1 is greater than f3, we can delete the interval [x1,x2]
The next experiment is located at x4=0+ (x1-x3)=1.153846-
0.692308=0.461538, with f4=-0.309811. Noting that f4 is less than f3, we
can delete the interval [x3,x1]
The location of the next experiment can be obtained as x5 = 0 + (x3 - x4) = 0.692308 - 0.461538 = 0.230770, with the corresponding objective function value of f5 = -0.263678. Since f5 is greater than f4, we can delete the interval [0, x5].
The final experiment is positioned at x6 = x5 + (x3 - x4) = 0.230770 + (0.692308 - 0.461538) = 0.461540 with f6 = -0.309810. (Note that, theoretically, the value of x6 should be the same as that of x4; however, it is slightly different from x4 due to round-off error.) Since f6 > f4, we delete the interval [x6, x3] and obtain the final interval of uncertainty as L6 = [x5, x6] = [0.230770, 0.461540].
The ratio of the final to the initial interval of uncertainty is
L6/L0 = (0.461540 - 0.230770)/3.0 = 0.076923
which agrees with the theoretical value
Ln/L0 = F1/Fn = 1/F6 = 1/13 = 0.076923
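The whole Fibonacci search can be sketched in Python, with the same objective, interval [0, 3] and n = 6 as in the example. Each new experiment is placed symmetrically in the remaining interval, which reproduces the points x3, ..., x6 computed above; the function names are illustrative.

```python
import math

def fib(k):
    """Fibonacci numbers with F0 = F1 = 1, Fk = F(k-1) + F(k-2)."""
    a, b = 1, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def fibonacci_search(f, a, b, n):
    """Fibonacci method with n experiments; final interval length (b - a)/Fn."""
    L0 = b - a
    x1 = a + fib(n - 2) / fib(n) * L0
    x2 = a + fib(n - 1) / fib(n) * L0
    f1, f2 = f(x1), f(x2)
    for _ in range(n - 2):
        if f1 < f2:             # minimum in (a, x2): discard (x2, b)
            b, x2, f2 = x2, x1, f1
            x1 = a + b - x2     # symmetric placement of the new experiment
            f1 = f(x1)
        else:                   # minimum in (x1, b): discard (a, x1)
            a, x1, f1 = x1, x2, f2
            x2 = a + b - x1
            f2 = f(x2)
    return (a, x2) if f1 < f2 else (x1, b)

f = lambda x: 0.65 - 0.75 / (1 + x**2) - 0.65 * x * math.atan(1 / x)
lo, hi = fibonacci_search(f, 0.0, 3.0, 6)
# hi - lo = L0/F6 = 3/13 ≈ 0.230769
```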
Other unconstrained minimization methods:
• Steepest descent
• Quasi-Newton method
Constrained NLP
• Problem statement
Sequential quadratic programming (SQP)
• Among the most widely used and effective methods for constrained NLP
• Converts the constrained NLP into a sequence of quadratic subproblems using
– The gradient of the objective function
– Lagrange multipliers
• Derivation
– The Lagrangian of the NLP, L(x, λ) = f(x) + λᵀg(x), where g(x) collects the constraints
• Converted problem
Solution
• MATLAB solution
– Steps:
• Write the objective function as an m-file and save it
• Write the constraint function as a separate m-file and save it
• Prepare the upper and lower bounds as vectors
• Call the built-in function fmincon() with the objective function, constraint function, and upper and lower bounds as arguments
Example: solve the following minimization problem using MATLAB
• Minimize f(x) = 0.1 x(1) + 0.05773 x(2)
subject to 0.6/x(1) + 0.3464/x(2) <= 0.1, x(1) >= 6, x(2) >= 7
• function y = objc2(x)
y = 0.1*x(1) + 0.05773*x(2);
end
Save this as objc2.m in a separate m-file
• function [c, ceq] = constr(x)
ceq = [];
c = [(0.6/x(1)) + (0.3464/x(2)) - 0.1; 6 - x(1); 7 - x(2)];
end
Save this also in a separate m-file named constr.m
• Write the main calling script:
xo = [11.8756 7];
[x, fx] = fmincon(@objc2, xo, [], [], [], [], [], [], @constr);
The answer will be
x=
9.4639 9.4642
fx =
1.4928
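The reported answer can be checked by direct substitution in plain Python, with the objective and constraint copied from objc2.m and constr.m above:

```python
x = (9.4639, 9.4642)                   # optimum reported by fmincon
fx = 0.1 * x[0] + 0.05773 * x[1]       # objective value, ≈ 1.4928
g = 0.6 / x[0] + 0.3464 / x[1] - 0.1   # nonlinear constraint; feasible when g <= 0
# g ≈ 0: the nonlinear constraint is active at the optimum, and x1 >= 6, x2 >= 7 hold
```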
Modern optimization techniques
• Drawbacks of classical optimization techniques
– Require derivative information
– Search in a single direction
– Can get stuck at a local extremum
• AI-based techniques
– Fuzzy logic systems
– Neural networks
– Evolutionary algorithms
• Genetic algorithm
• Simulated annealing
• Particle swarm optimization