
ECEG-6311

Power System Optimization


and AI
Lecture 4:
Linear and Non Linear Programming

Yoseph Mekonnen (Ph.D.)

Page 1
Outline
Introduction to Linear Programming
Simplex Method
Introduction to Non-Linear Programming
Elimination Method
Interpolation Method
Direct Search Methods
Indirect Search (Descent) Methods

Page 2
Introduction to Linear Programming
Linear programming is an optimization method applicable for
the solution of problems in which the objective function and
the constraints appear as linear functions of the decision
variables.
The constraint equations in a linear programming problem
may be in the form of equalities or inequalities.

George B. Dantzig, who was a member of the Air Force group, formulated the
general linear programming problem and devised the simplex method of
solution in 1947.
Linear programming is considered a revolutionary
development that permits us to make optimal decisions in
complex situations.

Page 3
..Contd..
Although several other methods have been developed over
the years for solving LP problems, the simplex method
continues to be the most efficient and popular method for
solving general LP problems.

Page 4
..Contd..
Application of Linear Programming
Petroleum Refineries
Manufacturing Firms
Food-processing Industry
Iron and Steel Industry
Metalworking Industries
Paper Mills
Routing of Messages in a Communication Network
Routing of Aircraft and Ships
Engineering Design Problems
etc.

Page 5
..Contd..
Standard Form Of A Linear Programming Problem
Scalar Form
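In standard notation, the scalar form (shown as an image on the slide) reads:

Minimize f(x1, x2, . . . , xn) = c1x1 + c2x2 + · · · + cnxn
subject to
a11x1 + a12x2 + · · · + a1nxn = b1
a21x1 + a22x2 + · · · + a2nxn = b2
. . .
am1x1 + am2x2 + · · · + amnxn = bm
x1 ≥ 0, x2 ≥ 0, . . . , xn ≥ 0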

where cj, bj, and aij (i = 1, 2, . . . , m; j = 1, 2, . . . , n) are known
constants, and xj are the decision variables.

Page 6
..Contd..
Matrix Form

where
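X = [x1, x2, . . . , xn]T is the vector of decision variables,
b = [b1, b2, . . . , bm]T, c = [c1, c2, . . . , cn]T, and a = [aij] is the
m × n coefficient matrix, so that (in a standard rendering of the slide
equation, consistent with the scalar form above) the problem reads:

Minimize f(X) = cT X
subject to aX = b and X ≥ 0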

Page 7
..Contd..
The characteristics of a linear programming problem,
stated in standard form, are
1. The objective function is of the minimization type.
2. All the constraints are of the equality type.
3. All the decision variables are nonnegative.

Page 8
..Contd..
Any linear programming problem can be expressed in
standard form by using the following transformations.
1. The maximization of a function f (x1, x2, . . . , xn) is
equivalent to the minimization of the negative of the
same function.

2. An unrestricted variable (which can take a positive, negative, or zero
value) can be written as the difference of two nonnegative variables.

Page 9
..Contd..
Thus if xj is unrestricted in sign, it can be written as xj = x′j − x′′j,
where x′j ≥ 0 and x′′j ≥ 0. It can be seen that xj will be negative, zero,
or positive, depending on whether x′′j is greater than, equal to, or less
than x′j.
3. If a constraint appears in the form of a “less than or equal to” type of
inequality as:

It can be converted into the equality form by adding a nonnegative slack
variable xn+1 as follows:
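ai1x1 + ai2x2 + · · · + ainxn ≤ bi becomes
ai1x1 + ai2x2 + · · · + ainxn + xn+1 = bi, with xn+1 ≥ 0.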

Page 10
..Contd..
Similarly, if the constraint is in the form of a “greater than
or equal to” type of inequality as:

It can be converted into the equality form by subtracting a variable as:
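ai1x1 + ai2x2 + · · · + ainxn − xn+1 = bi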

where xn+1 is a nonnegative variable known as a surplus variable.

Page 11
..Contd..

Page 12
Non Linear Programming Techniques
 If any of the functions among the objective and constraint functions is
nonlinear, the problem is called a nonlinear programming (NLP) problem.
 This is the most general programming problem and all
other problems can be considered as special cases of the
NLP problem.

Power system problems are nonlinear.

Page 13
..Contd..
UNIMODAL FUNCTION
A unimodal function is one that has only one peak (maximum)
or valley (minimum) in a given interval.
Thus a function of one variable is said to be unimodal if,
given that two values of the variable are on the same side
of the optimum, the one nearer the optimum gives the
better functional value (i.e., the smaller value in the case of
a minimization problem).
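Equivalently, for minimization with minimizer x∗: if x1 < x2 ≤ x∗ then
f(x1) > f(x2), and if x∗ ≤ x1 < x2 then f(x1) < f(x2).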

Page 14
..Contd..

For minimization (figure on slide):
(a) discard [x2, 1]
(b) discard [0, x1]
(c) discard both [0, x1] and [x2, 1]

Page 15
..Contd..

Page 16
..Contd..
One-dimensional Minimization Methods

Page 17
..Contd..
The elimination methods can be used for the minimization
of even discontinuous functions.
The quadratic and cubic interpolation methods involve
polynomial approximations to the given function.
The direct root methods are root finding methods that can
be considered to be equivalent to quadratic interpolation.
The assumption of unimodality is made in all the elimination
techniques.
If a function is known to be multimodal (i.e., having several
valleys or peaks), the range of the function can be
subdivided into several parts and the function treated as a
unimodal function in each part.

Page 18
Elimination Methods
UNRESTRICTED SEARCH(Search with Fixed Step Size)
The optimum solution is known to lie within restricted
ranges of the design variables.

In some cases this range is not known, and hence the search
has to be made with no restrictions on the values of the
variables.

The step size must be small in relation to the final accuracy desired.
Although this method is very simple to implement, it is not
efficient in many cases.

Page 19
..Contd..
This method is described in the following steps:
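The step listing appears only as an image on the slide; a minimal Python
sketch of the fixed-step procedure just described (an illustration, not the
slide's exact listing) is:

def fixed_step_search(f, x1, s, max_steps=1000000):
    """Unrestricted search with a fixed step size s, starting from x1.

    The search direction is chosen from an initial probe; equal steps are
    then taken until the function value increases, which brackets the
    minimum."""
    f1 = f(x1)
    if f(x1 + s) <= f1:
        direction = 1.0
    elif f(x1 - s) <= f1:
        direction = -1.0
    else:
        return x1 - s, x1 + s        # f rises on both sides: already bracketed
    x_prev, f_prev = x1, f1
    for _ in range(max_steps):
        x_new = x_prev + direction * s
        f_new = f(x_new)
        if f_new > f_prev:           # value increased: minimum is bracketed
            return x_prev - direction * s, x_new
        x_prev, f_prev = x_new, f_new
    raise RuntimeError("minimum not bracketed within max_steps")

# With f = x(x - 1.5), x1 = 0.0 and s = 0.05 this brackets the minimum
# (at x = 0.75) in the interval (0.70, 0.80).
print(fixed_step_search(lambda x: x * (x - 1.5), 0.0, 0.05))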

Page 20
..Contd..
Search with Accelerated Step Size
Although the search with a fixed step size appears to be very simple, its
major limitation is the unrestricted nature of the region in which the
minimum can lie.
If, for example, the minimum point for a particular function happens to be
xopt = 50,000 and, in the absence of knowledge about the location of the
minimum, x1 and s are chosen as 0.0 and 0.1, respectively, we have to
evaluate the function 5,000,001 times to find the minimum point.
This involves a large amount of computational work.
An obvious improvement can be achieved by increasing the
step size gradually until the minimum point is bracketed.

Page 21
..Contd..
A simple method consists of doubling the step size as long
as the move results in an improvement of the objective
function.
The step length is reduced after the optimum has been bracketed in
(xi−1, xi): by starting either from xi−1 or xi, the basic procedure can be
applied with a reduced step size.
This procedure can be repeated until the bracketed interval
becomes sufficiently small.
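A minimal Python sketch of this idea, assuming the variant in which the
trial point is x1 + s and s is doubled after every improving move (the
scheme used in the example that follows):

def accelerated_step_search(f, x1, s0, max_doublings=100):
    """Search with accelerated step size: the step s is doubled as long as
    each move improves the objective; the first worsening move brackets the
    minimum between the second-to-last improving point and the failed point."""
    f1 = f(x1)
    s = s0 if f(x1 + s0) <= f1 else -s0   # pick the descending direction
    x_prev2, x_prev, f_prev = x1, x1 + s, f(x1 + s)
    for _ in range(max_doublings):
        s *= 2.0
        x_new, f_new = x1 + s, f(x1 + s)
        if f_new > f_prev:                # objective worsened: bracketed
            return min(x_prev2, x_new), max(x_prev2, x_new)
        x_prev2, x_prev, f_prev = x_prev, x_new, f_new
    raise RuntimeError("minimum not bracketed")

# Lecture example: f = x(x - 1.5), x1 = 0.0, s0 = 0.05
# -> trial points 0.05, 0.1, 0.2, 0.4, 0.8, 1.6; bracket (0.4, 1.6).
print(accelerated_step_search(lambda x: x * (x - 1.5), 0.0, 0.05))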

Page 22
..Contd..
Example
Find the minimum of f = x(x−1.5) by starting from 0.0 with
an initial step size of 0.05.
The function value at x1 is f1 = 0.0.
If we try to start moving in the negative x direction, we
find that x−2 = −0.05 and f−2 = 0.0775.
Since f−2 > f1, the assumption of unimodality indicates
that the minimum cannot lie toward the left of x−2.
Thus we start moving in the positive x direction and obtain
the following results:

Page 23
..Contd..
[Tables on slide: search results with a single (fixed) step size and with a
doubling step size.]

Page 24
..Contd..
From these results, the optimum point can be seen to be
xopt ≈ x6 = 0.8.
In this case, the points x6 and x7 do not really bracket
the minimum point but provide information about it.
If a better approximation to the minimum is desired, the
procedure can be restarted from x5 with a smaller step
size.

Page 25
EXHAUSTIVE SEARCH
The exhaustive search method can be used to solve
problems where the interval in which the optimum is known
to lie is finite.
Let xs and xf denote, respectively, the starting and final
points of the interval of uncertainty.
The exhaustive search method consists of evaluating the
objective function at a predetermined number of equally
spaced points in the interval (xs, xf ), and reducing the
interval of uncertainty using the assumption of unimodality.

Page 26
..Contd..
In general, if the function is evaluated at n equally spaced
points in the original interval of uncertainty of length L0 =
xf − xs , and if the optimum value of the function (among
the n function values) turns out to be at point xj , the final
interval of uncertainty is given by:
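In the standard statement of the method, Ln = (2/(n + 1)) L0, corresponding
to the interval (xj−1, xj+1).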

The final interval of uncertainty obtainable for different numbers of trials
in the exhaustive search method is given below:

Page 27
..Contd..
Example Find the minimum of f = x(x − 1.5) in the interval (0.0, 1.00) to
within 10% of the exact value.
If the middle point of the final interval of uncertainty is taken as the
approximate optimum point, the maximum deviation could be 1/(n + 1)
times the initial interval of uncertainty. Thus to find the optimum within
10% of the exact value, we should have :
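1/(n + 1) ≤ 0.1, i.e., n ≥ 9; the nine function evaluations are then made at
xi = 0.1i, i = 1, 2, . . . , 9.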

Since f(x7) = f(x8), the assumption of unimodality gives the final interval
of uncertainty as L9 = (0.7, 0.8).
By taking the middle point (i.e., 0.75) as an approximation to the
optimum point, we find that it is, in fact, the optimum point.
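A minimal Python sketch of the exhaustive search, applied to the same test
function (the sketch uses a simple argmin rule; the slide narrows the
interval further by exploiting the tie f(0.7) = f(0.8)):

def exhaustive_search(f, xs, xf, n):
    """Evaluate f at n equally spaced interior points of (xs, xf) and return
    the interval (x_(j-1), x_(j+1)) around the best interior point x_j."""
    step = (xf - xs) / (n + 1)
    grid = [xs + i * step for i in range(n + 2)]       # includes the end points
    values = [f(x) for x in grid]
    j = min(range(1, n + 1), key=lambda i: values[i])  # best interior index
    return grid[j - 1], grid[j + 1]

# Lecture example: f = x(x - 1.5) on (0, 1) with n = 9 points (spacing 0.1);
# the best interior point is x = 0.7 (tied with 0.8), giving (0.6, 0.8).
print(exhaustive_search(lambda x: x * (x - 1.5), 0.0, 1.0, 9))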

Page 28
..Contd..
DICHOTOMOUS SEARCH
The exhaustive search method is a simultaneous search
method in which all the experiments are conducted before
any judgment is made regarding the location of the optimum
point.
The dichotomous search method, as well as the Fibonacci
and the golden section methods discussed in subsequent
sections, are sequential search methods in which the result
of any experiment influences the location of the subsequent
experiment.
In the dichotomous search, two experiments are placed as
close as possible at the center of the interval of
uncertainty.

Page 29
..Contd..
Based on the relative values of the objective function at
the two points, almost half of the interval of uncertainty is
eliminated. Let the positions of the two experiments be
given by:
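x1 = L0/2 − δ/2 and x2 = L0/2 + δ/2 (measured from the start of the current
interval of uncertainty),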

where δ is a small positive number chosen so that the two experiments give
significantly different results.

Page 30
..Contd..
The next pair of experiments is, therefore, conducted at
the center of the remaining interval of uncertainty.
This results in the reduction of the interval of uncertainty
by nearly a factor of 2.
The intervals of uncertainty at the end of different pairs
of experiments are given in the following table:

In general, the final interval of uncertainty after conducting n experiments
(n even) is given by:
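Ln = L0/2^(n/2) + δ (1 − 1/2^(n/2))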

Page 31
..Contd..
Example
Find the minimum of f = x(x − 1.5) in the interval (0.0, 1.00)
to within 10% of the exact value.
The ratio of final to initial intervals of uncertainty is given
by:
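Ln/L0 = 1/2^(n/2) + (δ/L0)(1 − 1/2^(n/2))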

where δ is a small quantity, say 0.001, and n is the number of experiments.
If the middle point of the final interval is taken as the optimum point, the
requirement can be stated as:
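(1/2)[1/2^(n/2) + (δ/L0)(1 − 1/2^(n/2))] ≤ 1/10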

Page 32
..Contd..
Since δ = 0.001 and L0 = 1.0, we have
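1/2^(n/2) + 0.001 (1 − 1/2^(n/2)) ≤ 0.2, i.e., 2^(n/2) ≥ 0.999/0.199 ≈ 5.02.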

Since n has to be even, this inequality gives the minimum admissible value
of n as 6. The first two experiments are made at:
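x1 = L0/2 − δ/2 = 0.4995 and x2 = L0/2 + δ/2 = 0.5005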

Functional Value
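f1 = f(0.4995) = −0.49975 and f2 = f(0.5005) = −0.50025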

Page 33
..Contd..
Since f2<f1, the new interval of uncertainty will be (0.4995,
1.0). The second pair of experiments is conducted at:
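x3 = 0.74925 and x4 = 0.75025, with f3 ≈ −0.562499 and f4 ≈ −0.562500.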

Since f3 > f4, we delete (0.4995, x3) and obtain the new
interval of uncertainty as:
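(x3, 1.0) = (0.74925, 1.0)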

Page 34
..Contd..
The final set of experiments will be conducted at
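x5 = 0.874125 and x6 = 0.875125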

The corresponding function values are
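f5 = f(x5) ≈ −0.547093 and f6 = f(x6) ≈ −0.546844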

Since f5 < f6, the new interval of uncertainty is given by
(x3, x6) = (0.74925, 0.875125). The middle point of this interval can be
taken as the optimum, and hence xopt ≈ (0.74925 + 0.875125)/2 ≈ 0.8122.
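A minimal Python sketch of the dichotomous search; with the same f, interval,
n and δ it reproduces the steps of this example:

def dichotomous_search(f, a, b, n, delta=0.001):
    """Dichotomous search: n experiments (n even) are performed in pairs,
    each pair placed delta apart about the centre of the current interval of
    uncertainty; roughly half of the interval is discarded per pair."""
    for _ in range(n // 2):
        centre = 0.5 * (a + b)
        x1, x2 = centre - 0.5 * delta, centre + 0.5 * delta
        if f(x1) < f(x2):
            b = x2        # the minimum cannot lie to the right of x2
        else:
            a = x1        # the minimum cannot lie to the left of x1
    return a, b

# Lecture example: f = x(x - 1.5) on (0, 1), n = 6, delta = 0.001
a, b = dichotomous_search(lambda x: x * (x - 1.5), 0.0, 1.0, 6)
print(a, b)               # (0.74925, 0.875125), midpoint about 0.8122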

Page 35
..Contd..
INTERVAL HALVING METHOD
In the interval halving method, exactly one-half of the
current interval of uncertainty is deleted in every stage.
It requires three experiments in the first stage and two
experiments in each subsequent stage.
The procedure can be described by the following steps:

Page 36
1. Divide the initial interval of uncertainty L0 = [a, b] into four equal
parts and label the middle point x0 and the quarter-interval points x1
and x2.
2. Evaluate the function f(x) at the three interior points to obtain
f1 = f(x1), f0 = f(x0), and f2 = f(x2).
3. (a) If f2 > f0 > f1 as shown in Fig. a, delete the interval (x0, b),
label x1 and x0 as the new x0 and b, respectively, and go to step 4.
(b) If f2 < f0 < f1 as shown in Fig. b, delete the interval (a, x0),
label x2 and x0 as the new x0 and a, respectively, and go to step 4.
(c) If f1 > f0 and f2 > f0 as shown in Fig. c, delete both the intervals
(a, x1) and (x2, b), label x1 and x2 as the new a and b, respectively,
and go to step 4.
4. Test whether the new interval of uncertainty, L = b − a, satisfies the
convergence criterion L ≤ ε, where ε is a small quantity. If the
convergence criterion is satisfied, stop the procedure.
Otherwise, set the new L0 = L and go to step 1.
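A minimal Python sketch of these steps (for simplicity the midpoint value f0
is re-evaluated at every stage, although, as noted on the next slide, it can
be reused):

def interval_halving(f, a, b, eps):
    """Interval halving: evaluate f at the midpoint x0 and the quarter points
    x1, x2 of [a, b] and delete half of the interval at every stage, using
    the unimodality assumption, until b - a <= eps."""
    while (b - a) > eps:
        L = b - a
        x0 = a + 0.5 * L
        x1 = a + 0.25 * L
        x2 = a + 0.75 * L
        f0, f1, f2 = f(x0), f(x1), f(x2)
        if f1 < f0 < f2:      # case (a): delete (x0, b)
            b = x0
        elif f2 < f0 < f1:    # case (b): delete (a, x0)
            a = x0
        else:                 # case (c): delete (a, x1) and (x2, b)
            a, b = x1, x2
    return a, b

# Lecture example: f = x(x - 1.5) on (0, 1); requiring b - a <= 0.2 (so that
# the midpoint is within 0.1 of the optimum) gives (0.6875, 0.8125).
print(interval_halving(lambda x: x * (x - 1.5), 0.0, 1.0, 0.2))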

Page 37
..Contd..
NB
1. In the interval halving method, the function value at the middle point of
the interval of uncertainty, f0, will be available in all the stages except
the first stage.
2. The interval of uncertainty remaining at the end of n experiments
(n ≥ 3 and odd) is given by:
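Ln = (1/2)^((n−1)/2) L0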

Page 38
..Contd..
Example
Find the minimum of f = x(x − 1.5) in the interval (0.0, 1.0)
to within 10% of the exact value.

If the middle point of the final interval of uncertainty is taken as the
optimum point, the specified accuracy can be achieved if:
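(1/2)(1/2)^((n−1)/2) L0 ≤ 0.1 L0, i.e., (1/2)^((n−1)/2) ≤ 0.2    (E2)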

Page 39
..Contd..
Since n has to be odd, inequality (E2) gives the minimum
permissible value of n as 7.
With this value of n = 7, the search is conducted as follows.
The first three experiments are placed at one-fourth
points of the interval L0 = [a = 0, b = 1] as:

Since f1 > f0 > f2, we delete the interval (a, x0)=(0.0, 0.5),
label x2 and x0 as the new x0 and a so that a=0.5, x0=0.75,
and b=1.0. By dividing the new interval of uncertainty,
L3=(0.5, 1.0) into four equal parts, we obtain

Page 40
..Contd..

Since f1 > f0 and f2 > f0, we delete both the intervals (a,
x1) and (x2, b), and label x1, x0, and x2 as the new a, x0,
and b, respectively. Thus the new interval of uncertainty
will be L5 = (0.625, 0.875). Next, this interval is divided
into four equal parts to obtain

Page 41
..Contd..
Again we note that f1 > f0 and f2 > f0 and hence we delete
both the intervals (a, x1) and (x2, b) to obtain the new
interval of uncertainty as L7 = (0.6875, 0.8125).

By taking the middle point of this interval (L7) as the optimum, we obtain:
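xopt ≈ (0.6875 + 0.8125)/2 = 0.75, with f(xopt) = −0.5625.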

This solution happens to be the exact solution in this case.

Page 42
..Contd..
FIBONACCI METHOD
The Fibonacci method can be used to find the minimum of a
function of one variable even if the function is not continuous.
This method has the following limitations:
1. The initial interval of uncertainty, in which the optimum lies, has to
be known.
2. The function being optimized has to be unimodal in the initial interval
of uncertainty.
3. The exact optimum cannot be located by this method; only an interval
known as the final interval of uncertainty will be known. The final interval
of uncertainty can be made as small as desired by using more computations.
4. The number of function evaluations to be used in the search or the
resolution required has to be specified beforehand.

Page 43
..Contd..
This method makes use of the sequence of Fibonacci
numbers, {Fn}, for placing the experiments. These numbers
are defined as:
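F0 = F1 = 1 and Fn = Fn−1 + Fn−2 for n = 2, 3, 4, . . . ,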

which yields the sequence 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, . . .

Let L0 be the initial interval of uncertainty defined by a ≤ x ≤ b and n be
the total number of experiments to be conducted. Define:
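L∗2 = (Fn−2/Fn) L0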

Page 44
..Contd..
Now place the first two experiments at points x1 and x2,
which are located at a distance of L∗2 from each end of L0.
This gives:
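x1 = a + L∗2 and x2 = b − L∗2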

Discard part of the interval by using the unimodality assumption. Then there
remains a smaller interval of uncertainty L2 given by:
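L2 = L0 − L∗2 = (Fn−1/Fn) L0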

Page 45
..Contd..
This process of discarding a certain interval and placing a
new experiment in the remaining interval can be continued,
so that the location of the j th experiment and the interval
of uncertainty at the end of j experiments are,
respectively, given by:
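In the standard development of the method, the jth experiment is placed at a
distance L∗j = (Fn−j/Fn−j+2) Lj−1 from one end of the current interval, and
Lj = (Fn−j+1/Fn) L0.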

The ratio of the interval of uncertainty remaining after conducting j of the
n predetermined experiments to the initial interval of uncertainty, and its
value for j = n, are then:
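Lj/L0 = Fn−j+1/Fn and, for j = n, Ln/L0 = F1/Fn = 1/Fn.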

Page 46
..Contd..
The ratio Ln/L0 will permit us to determine n, the required
number of experiments, to achieve any desired accuracy in
locating the optimum point.

Page 47
..Contd..
Example
Minimize f (x) in the interval [0,3] by the Fibonacci method
using n = 6.

Here n = 6 and L0 = 3.0, which yield
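L∗2 = (F4/F6) L0 = (5/13)(3.0) = 1.153846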

Page 48
..Contd..
The positions of the first two experiments are given by x1 = 1.153846 and
x2 = 3.0 − 1.153846 = 1.846154, with f1 = f(x1) = −0.207270 and
f2 = f(x2) = −0.115843.
Since f1 < f2, delete the interval [x2, 3.0].
The third experiment is placed at x3 = 0 + (x2 − x1) = 1.846154 − 1.153846
= 0.692308, with f3 = −0.291364. Since f1 > f3, we delete [x1, x2].
The next experiment is located at x4 = 0 + (x1 − x3) = 1.153846 − 0.692308
= 0.461538, with f4 = −0.309811. Since f4 < f3, delete [x3, x1] (Fig. c).
The location of the next experiment can be obtained as
x5 = 0 + (x3 − x4) = 0.692308 − 0.461538 = 0.230770, with f5 = −0.263678.
Since f5 > f4, delete [0, x5] (Fig. d).
The final experiment is positioned at x6 = x5 + (x3 − x4) =
0.230770 + (0.692308 − 0.461538) = 0.461540, with f6 = −0.309810.
Since f6 > f4, delete [x6, x3].
This gives the final interval of uncertainty as L6 = [x5, x6] =
[0.230770, 0.461540] (Fig. e). The ratio of the final to the initial
interval of uncertainty is 0.076923 = 1/F6 = 1/13.
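A minimal Python sketch of the Fibonacci method (applied here, for
illustration, to the quadratic f = x(x − 1.5) used in the earlier examples,
since the example function above appears only on the slide image):

def fibonacci_search(f, a, b, n, eps=1e-6):
    """Fibonacci search with a budget of n function evaluations."""
    F = [1, 1]                                   # F0 = F1 = 1
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    # First two experiments, a distance L2* = (F[n-2]/F[n])*L0 from each end.
    d = (F[n - 2] / F[n]) * (b - a)
    x1, x2 = a + d, b - d
    f1, f2 = f(x1), f(x2)
    for k in range(n - 2):
        # In exact arithmetic the nth point coincides with the (n-1)th, so the
        # last point is offset by a small eps to make its comparison meaningful.
        shift = eps if k == n - 3 else 0.0
        if f1 > f2:                      # discard [a, x1]
            a, x1, f1 = x1, x2, f2
            x2 = b - (x1 - a) + shift    # new point, symmetric to x1 in [a, b]
            f2 = f(x2)
        else:                            # discard [x2, b]
            b, x2, f2 = x2, x1, f1
            x1 = a + (b - x2) - shift    # new point, symmetric to x2 in [a, b]
            f1 = f(x1)
    return (x1, b) if f1 > f2 else (a, x2)

# Illustration (assumed test function): f = x(x - 1.5) on [0, 1] with n = 6
# gives a final interval of width about 1/F6 = 1/13 containing x = 0.75.
print(fibonacci_search(lambda x: x * (x - 1.5), 0.0, 1.0, 6))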

Page 49
Reading Assignment
Golden Section Method
Interpolation Method (Quadratic and Cubic)
Indirect Search (Descent) Methods

Page 50
DIRECT ROOT METHODS
Newton Method
Consider the quadratic approximation of the function f(λ)
at λ = λi using the Taylor’s series expansion:
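f(λ) ≈ f(λi) + f′(λi)(λ − λi) + (1/2) f′′(λi)(λ − λi)²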

By setting the derivative of this approximation equal to zero for the
minimum of f(λ), we obtain:
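f′(λ) ≈ f′(λi) + f′′(λi)(λ − λi) = 0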

If λi denotes an approximation to the minimum of f(λ), this equation can be
rearranged to obtain an improved approximation as:
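λi+1 = λi − f′(λi)/f′′(λi),  i = 1, 2, . . .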

Page 51
..Contd..
Convergence criteria
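|f′(λi+1)| ≤ ε,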
where ε is a small quantity
Remarks
1. The Newton method was originally developed by Newton for solving
nonlinear equations and was later refined by Raphson; hence it is also
called the Newton–Raphson method.
2. The method requires both the first- and second-order derivatives of f(λ).
3. If f′′(λi) ≠ 0, the Newton iterative method has a powerful (fastest)
convergence property, known as quadratic convergence.
4. If the starting point for the iterative process is not close to the true
solution λ∗, the process might diverge.
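A minimal Python sketch of the iteration, shown on a hypothetical quadratic
test function (the lecture's example function appears only on the slide
image):

def newton_minimize(fprime, fsecond, lam, eps=0.01, max_iter=50):
    """Newton (Newton-Raphson) method for one-dimensional minimization:
    iterate lam <- lam - f'(lam)/f''(lam) until |f'(lam)| < eps."""
    for _ in range(max_iter):
        if abs(fprime(lam)) < eps:
            return lam
        lam -= fprime(lam) / fsecond(lam)
    raise RuntimeError("Newton iteration did not converge")

# Hypothetical demo: f(lam) = lam*(lam - 1.5), so f'(lam) = 2*lam - 1.5 and
# f''(lam) = 2; starting from lam = 0.1 the method reaches the exact
# minimizer 0.75 in a single step (as expected for a quadratic).
print(newton_minimize(lambda lam: 2.0 * lam - 1.5, lambda lam: 2.0, 0.1))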

Page 52
..Contd..
Example
Find the minimum of the function
Starting point λ1 = 0.1. Use ε = 0.01.
The first and second derivatives of the function f (λ) are given by:

Iteration 1

Page 53
..Contd..

Convergence check: |f′(λ4)| = |−0.0005033| < ε. Since the process has
converged, the optimum solution is taken as λ∗ ≈ λ4 = 0.480409.

Page 54
Reading Assignment
Quasi-Newton Method
Secant Method

Page 55
Thank You!

Page 56
