Lecture No- 33
Interior and Exterior Penalty Function Method
Penalty function methods are used for solving the general non-linear programming problem. What a penalty function method does is reduce the constrained non-linear problem to a sequence of unconstrained optimization problems.
Thus, let us consider a general non-linear programming problem in this fashion: find X = (x_1, x_2, ..., x_n), a vector of n decision variables, which minimises the objective function f(X), non-linear in nature. Since we are considering a general non-linear programming problem, let the constraints be a set of non-linear equality constraints, say p of them, h_k(X) = 0 for k = 1, 2, ..., p, together with another set of m inequality constraints, g_j(X) <= 0 for j = 1, 2, ..., m.
Now, what does the penalty function method do? It reduces this non-linear programming problem to a sequence of unconstrained optimization problems. For this reason the penalty function method is also named the sequential unconstrained minimization technique (SUMT). Today I will tell you how this sequence of unconstrained minimization problems is handled in the penalty function method. There are two types of penalty function methods: the exterior penalty function method and the interior penalty function method. The exterior penalty function method is an iterative process that generates a sequence of infeasible solutions, one after another; once it reaches the feasible space, the iteration stops. The interior penalty function method, which I will explain with an example as well as graphically, generates a sequence of feasible points, and this sequence converges to the optimal solution of the original problem. Now, let me elaborate these ideas in mathematical form.
Now, let us first see how the penalty term for the equality constraints arises in the penalty function method. Suppose the iterative process produces a solution X_1 which is infeasible. Then at the point X_1 at least one of the equality constraints is not satisfied; if all of them were satisfied, X_1 would be a feasible point, otherwise it is an infeasible point. Thus, if at some iteration X_1 is the guess point of the optimal solution and is not feasible, then h_k(X_1) is not equal to 0 for at least one k.
From here the penalty term is formed: it is said that if X_1 is infeasible, then we incur a penalty measured by h_k(X_1). Now, since h_k(X_1) is sometimes positive and sometimes negative, the penalty is taken as the square term h_k²(X_1); thus, multiplying by a parameter μ_k, the penalty we incur for the equality constraints, for not having achieved a feasible solution, is Σ_k μ_k h_k²(X).
Now, let us see what happens for the inequality constraints, which are of the type g_j(X) <= 0. Again, if X_1 is the guess point for the optimal solution at some iteration and X_1 is not feasible, then g_j(X_1) > 0 for at least one j; because if g_j(X_1) <= 0 held for all j, then X_1 would be a feasible point. Since X_1 is not feasible, g_j(X_1) > 0 for at least one j; that is the logic here. And here we incur a penalty whose amount is g_j(X_1); that is why we can generalise this with the penalty function max(0, g_j(X)). It means that if X is within the feasible space the penalty is 0, and if X is outside the feasible space we incur the penalty value g_j(X_1). Again we can multiply by a penalty parameter μ_j'.
Now, what the penalty function method does is convert the constrained non-linear programming problem into an unconstrained minimization problem in the following fashion:

minimize φ(X) = f(X) + Σ_{k=1}^{p} μ_k h_k²(X) + Σ_{j=1}^{m} μ_j' max(0, g_j(X)).

See how we turn the constrained non-linear problem into an unconstrained one: we take the objective function and we append, we augment, the penalty terms. For the equality constraints the first sum is the penalty for any infeasible selection, and for the inequality constraints the second sum is the penalty for any infeasible selection. Thus, we are trying to minimise the total penalty together with the function f(X); that is why everything appears in additive form, and this unconstrained problem is a representative of the given original non-linear programming problem. That is the basic philosophy of the penalty function method.
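As a minimal illustration, here is this construction in Python. The names f, h_list and g_list are placeholders of my own, and for simplicity a single parameter μ is used for every constraint (the lecture allows a separate μ_k and μ_j' per constraint):

```python
# Sketch of the penalized objective phi described above; illustrative only.
def penalized_objective(f, h_list, g_list, mu):
    """phi(X) = f(X) + mu*sum(h_k(X)^2) + mu*sum(max(0, g_j(X)))."""
    def phi(x):
        eq_pen = sum(h(x) ** 2 for h in h_list)         # equality violations
        ineq_pen = sum(max(0.0, g(x)) for g in g_list)  # inequality violations
        return f(x) + mu * (eq_pen + ineq_pen)
    return phi
```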
But let me detail the interior penalty method and the exterior penalty method specifically; there are a few standard functions we consider for the interior and exterior penalty function methods.
Now, these are the popularly known functions. Let me first consider the interior penalty function method; the only difference between the two methods lies in the selection of the penalty terms. The very well-known choice is the inverse barrier function (for this reason the interior penalty function method is also named the barrier method): the term is -1/g_j(X). The interior penalty function method works only with less-than-or-equal type constraints. As I said, in the interior penalty function method the function is taken in such a way that in every iteration we move through the feasible space, and we converge to the optimal solution of the original problem; how the inverse barrier function is used I will show next. There is another very popular barrier function, the logarithmic barrier function, whose term is -log(-g_j(X)), the natural log with base e; why we consider this kind of function I will show next with an example. Similarly, for the exterior penalty function method the popularly known functions are max(0, g_j(X)) or [max(0, g_j(X))]^p, where p is an integer. Next I will show how both functions are used in both cases.
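As small hedged sketches of the three terms just named (each g is assumed to be a callable with g(x) <= 0 on the feasible set):

```python
import math

def inverse_barrier(g, x):
    return -1.0 / g(x)          # blows up as g(x) -> 0- (the boundary)

def log_barrier(g, x):
    return -math.log(-g(x))     # defined only for strictly feasible x

def exterior_term(g, x, p=2):
    return max(0.0, g(x)) ** p  # zero everywhere inside the feasible region
```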
Let me consider one simple example: minimize x subject to 5 - x <= 0. If we draw it, x = 5 marks the boundary and x >= 5 is the feasible space for us; since we minimize x, the solution certainly comes at x = 5. It is a simple problem we have considered. Now we use the interior penalty function method and the exterior penalty function method separately. As I said, for the interior penalty function method the popularly known function is, let me consider the first one, the inverse barrier function; we could consider the log function as well.
Now, in the interior method we convert this problem into an unconstrained problem in this manner: φ(x, μ) = f(x) + μ(-1/g(x)) = x - μ/(5 - x), that is, x + μ/(x - 5); this is the unconstrained problem for us. Let me draw the picture of how the method is implemented. This is an iterative process: if I consider, say, μ = 100, then for x > 5 the function x + μ/(x - 5) rises asymptotically near the boundary x = 5. This is an unconstrained problem, so apply any unconstrained optimization technique.
And we can achieve the minimum there; analytically it sits at x = 5 + sqrt(μ). Now let us reduce the value of μ: for μ = 10 the function shifts and the minimum comes closer to x = 5. If we reduce μ further and further in this way, the unconstrained problem generates a sequence of optimal solutions, one for each μ, and these optima of the individual unconstrained problems approach the optimal solution. This means that for a complicated non-linear programming problem, where the objective function is very complicated and even the constraints are very complicated, we convert the non-linear problem into unconstrained problems, and for decreasing values of μ we solve the series of unconstrained optimization problems. That sequence of problems generates a sequence of optimal solutions, which approaches the optimal solution X* of the original problem.
This is the interior penalty function method. Thus, we can summarize that the function has been taken in such a way that the inverse barrier term, -1/g_j(X) = -1/(5 - x) here, with the constraint written in the g_j(X) <= 0 form, automatically keeps the iterates moving through the feasible region while converging to the optimal solution; that is the interior method. As μ tends to 0, the sequence of optimal solutions converges to X*.
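A small sketch of this sweep, assuming SciPy is available; the bracket (5, 1000) is an arbitrary safeguard of mine, not part of the method:

```python
# Interior (barrier) sweep for: minimize x subject to 5 - x <= 0.
# phi(x, mu) = x + mu/(x - 5); analytically the minimizer is x = 5 + sqrt(mu).
from scipy.optimize import minimize_scalar

for mu in [100.0, 10.0, 1.0, 0.01, 1e-6]:
    res = minimize_scalar(lambda x: x + mu / (x - 5.0),
                          bounds=(5.0 + 1e-9, 1e3), method='bounded')
    print(mu, res.x)  # res.x approaches 5 from the feasible side as mu -> 0
```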
Now, let us consider the exterior penalty function method for the same problem. The only difference lies in the formation of the unconstrained problem.
Now, here the formation would be φ(x, μ) = x + μ[max(0, g(x))]², that is, x + μ[max(0, 5 - x)]². Let me consider the square; as I said, p could be any integer: I can consider square, cube, even a linear function, but generally we avoid the linear one. Since we are solving unconstrained optimization problems, we need to apply the necessary and sufficient conditions as we did in the classical optimization technique, and unless the penalty term is of at least second degree it is difficult to take the second-order derivative. That is why we take the power of the penalty term as at least the square; we could consider cube, fourth power, anything.
Now see what happens with this form of the unconstrained problem for the exterior penalty function method. Again x = 5 is the boundary, and for x >= 5 the penalized function is simply y = x. For a very small μ, say μ = 0.1, the optimum of the penalized function lies deep inside the infeasible region (analytically at x = 5 - 1/(2μ)). If we move to μ = 10, we get another function whose optimum lies closer to x = 5; for a still larger μ, say μ = 100, the optimum moves closer again. As μ tends to infinity, this sequence of optimal solutions approaches x = 5. That is the beauty of the penalty function in the exterior penalty function method, and that is why the penalty terms have been taken in this fashion. The interior function automatically guides us to move through the feasible space and generates a sequence of optimal solutions which, as μ tends to 0, approaches the optimal solution of the original problem. And for the exterior penalty function method, this kind of function automatically guides us to move through the infeasible space, so that the sequence of optimal solutions approaches the optimal solution of the original problem from outside. That is the basic philosophy of the interior and exterior penalty function methods.
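A matching sketch of the exterior sweep on the same toy problem, again with an arbitrary bracket of mine:

```python
# Exterior sweep for the same problem:
# phi(x, mu) = x + mu*max(0, 5 - x)**2; analytically x*(mu) = 5 - 1/(2*mu),
# which approaches x = 5 from the infeasible side as mu -> infinity.
from scipy.optimize import minimize_scalar

for mu in [0.1, 1.0, 10.0, 100.0, 1e4]:
    res = minimize_scalar(lambda x: x + mu * max(0.0, 5.0 - x) ** 2,
                          bounds=(-1e3, 1e3), method='bounded')
    print(mu, res.x)  # 0.0, 4.5, 4.95, 4.995, ...
```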
(Refer Slide Time: 18:56)
Let me write down the algorithm for both methods. Step 1: start with an initial solution, say X_1. We need an initial solution because, since we are solving unconstrained optimization problems, any technique we have learnt can be applied here, but most methodologies need a starting point, which is then updated in the subsequent iterations so that the objective function value decreases further (for a minimization problem). That is why the selection of the initial solution is very important for the penalty function method. For the interior penalty function method we consider the initial solution to be a feasible point, and for the exterior penalty function method we consider the starting point to be an infeasible point, because there we move from the infeasible space towards the feasible space.
Now, for the interior method: start with an initial feasible solution X_1 such that g_j(X_1) < 0 for all j; only then is X_1 strictly feasible. Select some μ_1; since μ will be driven towards 0, μ_1 can initially be a very high value. Select μ_1 and solve the unconstrained optimization problem, as I have just explained, by augmenting either the inverse barrier function or the logarithmic barrier function; once we solve this unconstrained minimization problem, we get the next optimal solution, say X_2. Since this is an iterative process, set K = 1, so that in the next step we can move to K = 2. Then μ_K is updated with a new value μ_{K+1} that is smaller than μ_K; generally we take μ_{K+1} = c·μ_K, where 0 < c < 1. Because we want to reduce the value of μ below μ_K, we select μ_{K+1} = c·μ_K, and again we solve the unconstrained minimization problem with the augmented penalty term.
And we get another optimal solution X_3; in this way we repeat the process, doing the iterations one after another until the series of optimal solutions converges to a point. Thus, it is said that for the sequence of optimal solutions we obtain through the interior penalty function method, the limit point of that sequence is the optimal solution of the original problem.
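Before the example, a minimal sketch of this interior loop in Python, assuming SciPy's Nelder-Mead as the inner unconstrained solver; all names and the stopping rule are illustrative, and a careful implementation must keep the inner iterates strictly feasible, since the barrier -1/g changes sign across the boundary:

```python
import numpy as np
from scipy.optimize import minimize

def interior_sumt(f, g_list, x0, mu=10.0, c=0.1, tol=1e-6, max_iters=50):
    """Interior SUMT sketch: x0 must be strictly feasible (g(x0) < 0 for all g)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        def phi(z, mu=mu):
            # inverse-barrier augmented objective for the current mu
            return f(z) + mu * sum(-1.0 / g(z) for g in g_list)
        x_new = minimize(phi, x, method='Nelder-Mead').x
        if np.linalg.norm(x_new - x) < tol:  # successive minima agree: stop
            break
        x, mu = x_new, c * mu                # shrink mu, 0 < c < 1
    return x
```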
Now, let us consider one non-linear programming problem and solve it: minimize f(X) = x_1 - 2x_2 subject to 1 + x_1 - x_2² >= 0 and x_2 >= 0. As I have discussed, all constraints should be of the less-than type, so let us convert both inequality constraints: -(1 + x_1 - x_2²) <= 0 and -x_2 <= 0. We apply the interior penalty function method; let me consider the logarithmic barrier function here. The unconstrained problem is therefore to minimize

φ(x_1, x_2, μ) = x_1 - 2x_2 - μ log(1 + x_1 - x_2²) - μ log(x_2),

a function of x_1, x_2 and μ.
Now, we have to solve this by supplying different values of μ: we start from a high value and let μ tend to 0. But before that, let me solve it with the classical optimization technique. The necessary conditions ∂φ/∂x_1 = 0 and ∂φ/∂x_2 = 0 give

1 - μ/(1 + x_1 - x_2²) = 0, and
-2 + 2μx_2/(1 + x_1 - x_2²) - μ/x_2 = 0.

These are the necessary conditions; we have to find the values of x_1, x_2 which satisfy both equations.
Those are the stationary points, from which we select the point (x_1, x_2) that minimises the function; that is the idea of the unconstrained technique. From the first equation we get 1 + x_1 - x_2² = μ; if we substitute this value into the second equation, we get -2 + 2x_2 - μ/x_2 = 0, that is, x_2² - x_2 - μ/2 = 0, and from here x_2 = [1 ± sqrt(1 + 2μ)]/2. Only the plus sign gives a feasible solution (x_2 > 0), so we discard the minus sign.
And from 1 + x_1 - x_2² = μ this gives x_1 = μ - 1 + x_2², while x_2* = lim_{μ→0} [1 + sqrt(1 + 2μ)]/2 = 1, which in turn gives x_1* = 0. So we get the solution of the original problem as x_1* = 0, x_2* = 1, and the minimum value of f is then -2. That is through the classical optimization technique, using the interior penalty function method with the idea that μ tends to 0 in the barrier term.
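The same computation as a quick numerical check of the closed forms just derived:

```python
# x2(mu) = (1 + sqrt(1 + 2*mu))/2 and x1(mu) = mu - 1 + x2(mu)**2,
# from the stationarity conditions of the log-barrier function above.
import math

for mu in [10.0, 1.0, 0.1, 1e-4]:
    x2 = (1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0
    x1 = mu - 1.0 + x2 ** 2
    print(mu, x1, x2, x1 - 2.0 * x2)  # -> (0, 1) and f -> -2 as mu -> 0
```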
We get the same thing if we apply the iterative process for different values of μ. If we start with μ = 10, see how the values progress: for μ = 10 we get x_1*, x_2*, the original f and φ (note that the tabulated value is φ, not f); similarly for μ = 1, and so on. As μ approaches 0, that is, as we reduce the value of μ further and further, x_1* approaches 0 and x_2* approaches 1, exactly the values x_1* = 0, x_2* = 1 we found above, and the original f approaches -2. That is the beauty of this method: it tells you that instead of applying the classical technique, we can solve an unconstrained optimization problem at every iteration and still approach the optimal solution; that is what this demonstrates.
And since this is so, from here we can state the result on which the entire interior penalty function method is based. If the function φ(X, μ) = f(X) - μ Σ_{j=1}^{m} 1/g_j(X), where there are m inequality constraints and -1/g_j(X) is the augmented penalty term (we could take a different μ for each constraint, but for simplicity we consider a single μ), is minimized for a decreasing sequence of values of μ, then the unconstrained minima, one for every iteration K starting from K = 1, converge to the optimal solution of the original problem as μ tends to 0. Thus, we can say that if {X_K*} is the sequence of optimal solutions of the unconstrained problems, then the limit point of this sequence is the optimal solution of the original problem. But there is one disadvantage of this method: if an iterate X lies on the boundary of the feasible region, the augmented penalty term goes to an infinite value, because some g_j(X) equals 0. That is the only disadvantage of this method, but with the extrapolation technique, which I will mention at the end, we can remove this disadvantage.
Now, similarly, we can explain the exterior penalty function method; let me write down the algorithm. Step 1: start with an infeasible point, since this is the reverse of the interior method, and consider a suitable μ_1 as well. Then solve the equivalent unconstrained problem: minimize φ(X, μ) = f(X) + μ Σ_j [max(0, g_j(X))]^p. For different values of μ we solve this unconstrained optimization problem; first we start with a suitable μ_1, and the starting μ should be a very small value, because here we approach the solution as μ tends to infinity.
Then step 2 would be: get the optimum of this problem, say X_2*, and test whether X_2* is feasible or not. How to check? We check whether g_j(X_2*) <= 0 for all j. If we see that X_2* is feasible, we stop the process; if not, we go to step 3. How do we go to step 3? If we started with K = 1, we update K to K + 1 and set μ_{K+1} = c·μ_K; since we are approaching a high value of μ, here c must be greater than 1 (as we have seen, for the interior method it was less than 1, here it is greater than 1). Then we get another unconstrained problem with the new μ.
And we solve it, get the optimum, and check whether it is feasible; once it comes out feasible, we stop the iteration. That is the idea; otherwise we keep running the methodology and reach the optimal solution at the end. Again we generate a sequence {X_K*}, and the limit point of this sequence as μ_K tends to infinity is the solution of the original problem.
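A minimal sketch of this exterior loop, under the same assumptions as before (SciPy inner solver, illustrative names, p = 2):

```python
import numpy as np
from scipy.optimize import minimize

def exterior_sumt(f, g_list, x0, mu=0.1, c=10.0, tol=1e-6, max_iters=50):
    """Exterior SUMT sketch: start anywhere (typically an infeasible point)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        def phi(z, mu=mu):
            # squared max(0, g) penalty terms, i.e. p = 2
            return f(z) + mu * sum(max(0.0, g(z)) ** 2 for g in g_list)
        x = minimize(phi, x, method='Nelder-Mead').x
        if all(g(x) <= tol for g in g_list):  # step 2 feasibility test
            break
        mu *= c                               # step 3: grow mu, c > 1
    return x
```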
Let us consider another example to explain this methodology (reading the problem off the stationarity conditions that follow): minimize f(X) = -x_1 x_2 subject to x_1 + 2x_2 - 4 <= 0. Here the objective function is non-linear; we could consider a non-linear constraint as well, but we take a linear constraint of less-than type to illustrate the methodology. The unconstrained problem is the objective function plus μ[max(0, x_1 + 2x_2 - 4)]²; we consider the square because we want to apply the classical technique to get the optimal solution of the original problem. This is the unconstrained problem we want to minimise; call it φ.
So, then again the same conditions ∂φ/∂x_1 = 0 and ∂φ/∂x_2 = 0. At infeasible points the first equation gives -x_2 + 2μ(x_1 + 2x_2 - 4) = 0. At feasible points the max term is 0 and the penalty contributes nothing, leaving only -x_2 = 0 and -x_1 = 0, whose only solution is x_1 = 0, x_2 = 0. So, considering the infeasible points, where the penalty term gives some positive value, the second equation gives -x_1 + 4μ(x_1 + 2x_2 - 4) = 0.
Now, from both equations we get x_1 = 2x_2, hence x_2* = 8μ/(8μ - 1) = 1/(1 - 1/(8μ)) and x_1* = 2/(1 - 1/(8μ)). And as μ tends to infinity, the value of x_1* approaches 2 and the value of x_2* approaches 1, and (2, 1) is the optimal solution of the original problem. Here also, instead of applying this classical technique, we can apply the iterative process: for several values of μ we generate a series of values of x_1* and x_2*, and we will see that as μ tends to infinity that series converges to these values.
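A quick check of these formulas (valid for μ > 1/8, where the denominator is positive; the problem statement here is my reconstruction from the stationarity conditions above):

```python
# x2(mu) = 8*mu/(8*mu - 1), x1(mu) = 2*x2(mu)  ->  (2, 1) as mu -> infinity.
for mu in [1.0, 10.0, 100.0, 1e4]:
    x2 = 8.0 * mu / (8.0 * mu - 1.0)
    print(mu, 2.0 * x2, x2)
```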
Thus, that is all about the interior and exterior penalty function methods, but there are a few things still to be discussed. One is that, as you have seen, for both methods, if the optimal solution lies at the boundary of the feasible region, the process fails (the barrier term blows up there). That is why there is a method called the extrapolation technique, through which we can guess the true minimum of the original problem very nicely. We approximate the trajectory of the minima by a linear function of μ, X*(μ) ≈ A_0 + A_1 μ; interpolating through K points gives K equations in K unknowns. With two consecutive points, say X_{K-1}* and X_K*, obtained at μ_{K-1} and μ_K = c·μ_{K-1}, we find

A_0 = (X_K* - c X_{K-1}*)/(1 - c), and
A_1 = (X_{K-1}* - X_K*)/(μ_{K-1}(1 - c)).

Then, if we substitute μ = 0, we get the true minimum, which we were not getting through the iterative process, as X* ≈ A_0. That is the beauty of this method, and that is how we resolve the disadvantage of the interior penalty function method: by doing the extrapolation technique we can guess the true minimum.
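A sketch of this two-point linear extrapolation; the formulas are the ones just derived, with c the ratio μ_K/μ_{K-1} (0 < c < 1 in the interior method):

```python
import numpy as np

def extrapolate(x_prev, x_curr, mu_prev, c):
    """Fit X*(mu) ~= a0 + a1*mu through (mu_prev, x_prev) and (c*mu_prev, x_curr)."""
    x_prev, x_curr = np.asarray(x_prev, float), np.asarray(x_curr, float)
    a1 = (x_prev - x_curr) / (mu_prev * (1.0 - c))
    a0 = (x_curr - c * x_prev) / (1.0 - c)  # estimate of X* at mu = 0
    return a0, a1
```

Evaluating the fitted line at μ = 0 gives A_0 directly, which serves as the estimate of the true minimum.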
Another difficulty in solving problems with the interior and exterior penalty function methods is that the starting point is very important: in the interior method we consider the starting point to be a feasible point, and in the exterior penalty function method we consider the starting point to be an infeasible point.
(Refer Slide Time: 45:15)
But from where to start, which value to take as the starting point? That is very important for us, so let me give some idea of how to choose the starting value with this small example: minimize (x_1 - 6)² + (x_2 - 7)² subject to four constraints. Here we have taken the constraints as linear; we could consider non-linear constraints as well. Now, if we draw the graph, we can mark out the feasible space, and the contours of the objective function are circles centred at (6, 7); certainly the minimum lies at the point of the feasible region nearest that centre. Now, if we apply the exterior penalty function method, we have to construct the unconstrained problem in this fashion; just see.
(Refer Slide Time: 46:49)
Minimize f(X); that means φ = (x_1 - 6)² + (x_2 - 7)² + μ[max(0, -3x_1 - 2x_2 + 6)]² + μ[max(0, (2/3)x_1 - x_2 - 4/3)]² + ..., with one such term for each of the four constraints (note the square sits outside the max, not inside). This is the exterior penalty function method, with the function built by augmenting the penalty terms. It looks so big and very difficult to handle: if we apply the technique directly, we have to differentiate with respect to x_1 and x_2 and equate to 0, which is very hard here. But we can simplify it so easily. What we do is start from a chosen point; let me consider (6, 7), which is an infeasible point. If we look at the feasible space, (6, 7) lies well outside it, so very nicely we can take the point (6, 7): if we substitute (6, 7) into every constraint, we see that all the constraints are satisfied (giving negative values, hence max value 0) except one, x_1 + x_2 - 7 <= 0, which gives a positive value at (6, 7).
That is why, if we start from this point, this unconstrained problem reduces to another unconstrained problem which is very easy to handle, because all the satisfied terms cancel to 0 and we are left with φ = (x_1 - 6)² + (x_2 - 7)² + μ(x_1 + x_2 - 7)². We then consider ∂φ/∂x_1 = 0 and ∂φ/∂x_2 = 0, which give x_1* = 6(1 + μ)/(1 + 2μ) and x_2* = 7 - 6μ/(1 + 2μ). That is why, as I said, the starting point selection is very important: if we intelligently select the starting point, it may happen that only a few constraints are critically violated while the rest are satisfied with strict inequality, and then we can reduce the size of the unconstrained problem.
And as μ tends to infinity (not to 0; here, in the exterior method, we are considering μ tending to infinity), we get x_1* = 3 and x_2* = 4; that is the optimal solution of the original problem. So nicely, so easily, we get it with the exterior penalty function method. Instead of applying this classical technique, let us apply the iterative process and see what happens, starting from the point (6, 7) with μ = 0.5.
Because, as I say again and again, some techniques for solving the unconstrained problem need an initial guess, and its selection is very important for us: if you select a starting solution near the optimum, the number of iterations will be less. If instead of (6, 7) you consider, say, (60, 70) or (63, 73), then it may happen that we have to do many more iterations.
Now, for μ = 0.5 the optimal solution is (4.5, 5.5); for μ = 1 the optimal solution is (4, 5); and as μ grows towards infinity, just see, x_1 approaches 3 and x_2 approaches 4, the same solution we just obtained with the classical technique.
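A quick check of the sequence against the closed forms derived above:

```python
# x1*(mu) = 6*(1 + mu)/(1 + 2*mu), x2*(mu) = 7 - 6*mu/(1 + 2*mu);
# both approach (3, 4) as mu grows.
for mu in [0.5, 1.0, 10.0, 100.0, 1e4]:
    x1 = 6.0 * (1.0 + mu) / (1.0 + 2.0 * mu)
    x2 = 7.0 - 6.0 * mu / (1.0 + 2.0 * mu)
    print(mu, x1, x2)  # mu=0.5 gives (4.5, 5.5); mu=1 gives (4, 5)
```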
That is all about the interior and exterior penalty function methods. The idea is that with these methods we can solve the non-linear programming problem very nicely.
And though they have certain disadvantages (the selection of the starting point is very important, and the choice of the μ values is very important for us), the methods guide us in such a way that we reach the optimal solution of the original problem. And we never solve the constrained problem itself; we solve a sequence of unconstrained problems, which is why this is also named the sequential unconstrained minimization technique. That is all for today.
Thank you very much.