Optimization Techniques, Chapter - 1

Introduction to Optimization
Such problems are called unconstrained optimization problems.
12. Selection of machining conditions in metal cutting processes for minimum production cost
13. Design of pumps, turbines and heat transfer equipment for maximum efficiency
14. Shortest route taken by a salesperson visiting various cities during one tour
15. Allocation of resources or services among several activities to maximize the benefit
16. Planning the best strategy to obtain maximum profit in the presence of a competitor
17. Optimal production planning, controlling and scheduling
18. Optimum design of chemical processing equipment and plants
19. Controlling the waiting and idle times and queueing in production lines to reduce the costs
20. Selection of a site for an industry
21. Analysis of statistical data and building empirical models from experimental results to obtain the most accurate representation of the physical phenomenon
22. Design of optimum pipeline networks for process industries
23. Planning of maintenance and replacement of equipment to reduce operating costs
24. Optimum design of electrical networks
25. Solving for optimality in several mathematical-economics and military problems

Anyway, we are going to discuss outlines of the theory and application of mathematical programming techniques suitable for the solution of engineering optimization problems. Then it is the right time to consider:

Remark: How can one solve a maximization problem as a minimization problem?

Fig: 1 Minimum of f(x) = maximum of -f(x)

From the above figure, if a point x* corresponds to the minimum value of the function f(x), the same point also corresponds to the maximum value of the negative of the function, -f(x). Thus, without loss of generality, a maximization problem can be solved indirectly as a minimization problem. Optimization can then be taken to mean minimization, since the maximum of a function can be found by seeking the minimum of the negative of the same function.
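The equivalence above is easy to check numerically. The following is a minimal sketch (the quadratic used is made up for illustration, not taken from the text):

```python
# Minimizing f(x) and maximizing -f(x) yield the same x*.
# Hypothetical example function: f(x) = (x - 3)^2 + 1.

def f(x):
    return (x - 3.0) ** 2 + 1.0

# A crude grid search is enough to illustrate the equivalence.
xs = [i / 100.0 for i in range(-1000, 1001)]
x_min_f = min(xs, key=f)                    # argmin of f
x_max_negf = max(xs, key=lambda x: -f(x))   # argmax of -f

print(x_min_f, x_max_negf)  # both 3.0
```

Both searches return the same point, confirming that min f(x) and max -f(x) share the same optimizer.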
1.4 Statement of an optimization problem
An optimization or a mathematical programming problem can be stated as follows:

Find X = (x_1, x_2, ..., x_n)^T which minimizes f(X)
subject to the constraints
g_j(X) <= 0, j = 1, 2, ..., m
l_j(X) = 0, j = 1, 2, ..., p

Remark: The number of variables n and the number of constraints m and/or p need not be related in any way.

The points lying in the region where g_j(X) < 0 are feasible or acceptable. The collection of all the constraint surfaces g_j(X) = 0, j = 1, 2, ..., m, which separates the acceptable region, is called the composite constraint surface.

The following Fig. 2 shows the constraint surfaces in a hypothetical two-dimensional design space, where the infeasible region is indicated by hatched lines.
If an n-dimensional Cartesian space with each coordinate axis representing a design variable x_i (i = 1, 2, ..., n) is considered, the space is called the design variable space or simply design space. Each point in the n-dimensional design space is called a design point and represents either a possible or an impossible solution to the design problem.

The choice of the important design variables in an optimization problem largely depends on the user and his experience. However, it is important to understand that the efficiency and speed of an optimization technique depend, to a large extent, on the number of chosen design variables. Thus, by selectively choosing the design variables, the efficiency of the optimization technique can be increased. The first thumb rule of the formulation of an optimization problem is to choose as few design variables as possible.
Design Constraints

The selected design variables have to satisfy certain specified functional and other requirements, known as constraints (restrictions). These constraints that must be satisfied to produce an acceptable design are collectively called design constraints.

Constraints that represent limitations on the behaviour or performance of the system are termed behaviour or functional constraints. Constraints that represent physical limitations on design variables, such as availability, fabricability and transportability, are known as geometric or side constraints.

Mathematically, there are usually two types of constraints: equality or inequality constraints. Inequality constraints state that the relationships among design variables are either greater than, smaller than or equal to a resource value. Equality constraints state that the relationships should exactly match a resource value. Equality constraints are usually more difficult to handle and, therefore, need to be avoided wherever possible. Thus, the second thumb rule in the formulation of an optimization problem is that the number of complex equality constraints should be kept as low as possible.

Fig: 2 Constraint surfaces in a hypothetical two-dimensional design space (showing the infeasible region, a behaviour constraint g_1 = 0, a side constraint g_2 = 0, and a free unacceptable point)

A design point that lies on one or more than one constraint surface is called a bound point, and the associated constraint is called an active constraint. Design points that do not lie on any constraint surface are known as free points. Depending on whether a particular design point belongs to the acceptable or unacceptable region, it can be identified or classified as one of the following four types:

i. Free and acceptable point
ii. Free and unacceptable point
iii. Bound and acceptable point
iv. Bound and unacceptable point

All the above four types are shown in the above figure.
Constraint surface

Consider an optimization problem with only inequality constraints g_j(X) <= 0. The set of values of X that satisfy the equation g_j(X) = 0 forms a hypersurface in the design space and is called a constraint surface. This constraint surface divides the design space into two regions: one with g_j(X) < 0 and the other with g_j(X) > 0. Thus the points lying on the hypersurface satisfy the constraint g_j(X) critically, whereas the points lying in the region where g_j(X) > 0 are infeasible or unacceptable, and the points lying in the region where g_j(X) < 0 are feasible or acceptable.

Objective Function

The third task in the formulation of an optimization problem is to find the objective in terms of the design variables and other problem parameters. The objective with respect to which the design is to be optimized, when expressed as a function of the design variables, is known as the objective function. The choice of the objective function is governed by the nature of the problem; thus the selection of the objective function can be one of the most important decisions in the whole optimum design process. In some situations, there may be more than one objective function to be satisfied simultaneously. An optimization problem involving multiple objective functions is known as a multiobjective programming problem.
Fig: 3 and Fig: 4 Flowcharts of the optimum design procedure (choose design variables, formulate objective function, obtain solution)
The first step is to realize the need for using optimization in a specific design problem. Then the designer needs to choose the important design variables associated with the problem. The formulation of the optimization problem involves other considerations, such as the constraints and the objective function.

Once the objective function surfaces are drawn along with the constraint surfaces, the optimum point can be determined without much difficulty. But the main problem is that, whenever the number of design variables exceeds two or three, the constraint and objective function surfaces become too complex even for visualization, and the problem has to be solved purely as a mathematical problem.

A typical optimization problem's mathematical model is as follows:
Maximize or minimize (objective function)
subject to (constraints)

(OR)

Optimize (objective function)
subject to (constraints)
1.5 Classification of optimization problems

Optimization problems can be classified in many ways, as explained below:

i. Classification based on the existence of constraints: As discussed earlier, any optimization problem can be classified as constrained or unconstrained, depending on whether or not constraints exist in the problem.

ii. Classification based on the nature of the design variables: Based on the nature of the design variables, optimization problems can be classified into two categories.

Parameter (or) static optimization problems: The problem is to find values of a set of design parameters that make some prescribed function of these parameters minimum subject to certain constraints.

Trajectory (or) dynamic optimization problems: The problem is to find a set of design parameters, which are all continuous functions of some other parameter, that minimizes an objective function subject to a set of constraints.

Linear programming problem: If the objective function and all the constraints in Eq. (1) are linear functions of the design variables, the mathematical programming problem is called a linear programming (LP) problem. An LP problem is often stated in the following standard form:

Find X = (x_1, x_2, ..., x_n)^T which minimizes f(X) = c_1 x_1 + c_2 x_2 + ... + c_n x_n

subject to the constraints

a_1j x_1 + a_2j x_2 + ... + a_nj x_n = b_j, j = 1, 2, ..., m
x_i >= 0, i = 1, 2, ..., n

where the c_i, a_ij and b_j are constants.

Geometric programming problem (GMP): A GMP is one in which the objective function and constraints are expressed as posynomials in X. A function h(X) is called a posynomial if h can be expressed as the sum of power terms, each of the form

c_i x_1^(a_i1) x_2^(a_i2) ... x_n^(a_in)

where the c_i and a_ij are constants with c_i > 0 and x_j > 0.

Quadratic programming problem: A quadratic programming problem is a nonlinear programming problem with a quadratic objective function and linear constraints.
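A key fact about the standard form above is that an LP optimum lies at a basic feasible solution: m variables are "basic" and the remaining n - m are zero. For a tiny problem this can be checked by brute-force enumeration. The following sketch uses made-up data (not from the text):

```python
# Enumerate all basic feasible solutions of a small standard-form LP:
#   minimize  f = 2x1 + 3x2 + x3 + 4x4
#   subject to x1 + x2 + x3 + x4 = 10
#              x1 - x2 + 2x3     = 4,   all x_i >= 0
from itertools import combinations

c = [2.0, 3.0, 1.0, 4.0]
A = [[1.0, 1.0, 1.0, 1.0],
     [1.0, -1.0, 2.0, 0.0]]
b = [10.0, 4.0]
n, m = 4, 2

def solve2x2(M, r):
    # Cramer's rule for a 2x2 system; returns None if singular.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    if abs(det) < 1e-12:
        return None
    return ((r[0] * M[1][1] - r[1] * M[0][1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det)

best = None
for basis in combinations(range(n), m):
    M = [[A[i][j] for j in basis] for i in range(m)]
    sol = solve2x2(M, b)
    if sol is None or min(sol) < 0:      # singular or infeasible basis
        continue
    x = [0.0] * n
    for j, v in zip(basis, sol):
        x[j] = v
    f = sum(ci * xi for ci, xi in zip(c, x))
    if best is None or f < best[0]:
        best = (f, x)

print(best)  # minimum 62/3 at x = (0, 16/3, 14/3, 0)
```

Of the six candidate bases, two are infeasible and the cheapest of the rest gives the LP optimum.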
A single-variable optimization problem is one in which the value of x = x* is to be found in the interval [a, b] such that x* minimizes f(x), where f(x) is the objective function and x is a real variable. The purpose of an optimization technique is to find a solution x* for which the function f(x) is minimum. Even though the optimization methods described here are for minimization problems, they can also be used to solve a maximization problem by adopting the equivalent dual problem (-f(x)) and considering it to be minimized.
Classical Optimization

We first present the basic terminology and the necessary and sufficient conditions for optimality (relative minimum of a function).

Fig: (a function f(x) with several local optima, showing a relative or local maximum at x = x_6 and a relative minimum that is also the global minimum)

A function of one variable f(x) is said to have a relative or local maximum at x = x* if f(x*) >= f(x* ± h) for all sufficiently small values of h close to zero.

(OR)

A function f(x) has a local or relative maximum at x* if there exists a (small) interval centered at x* (a neighbourhood of x*) such that f(x*) >= f(x) for all x in this interval.
2. The theorem does not say what happens if a minimum or maximum occurs at a point x* where the derivative fails to exist.

3. The theorem does not say what happens if a minimum or maximum occurs at an end point of the interval of definition of the function.

4. The theorem does not say that the function necessarily will have a minimum or maximum at every point where the derivative is zero. For example, see Fig. 7 below.

Since f^(n)(x*) ≠ 0, there exists an interval around x*, for every point x of which the nth derivative f^(n)(x) has the same sign, namely that of f^(n)(x*). Thus for every point x* + h of this interval, f^(n)(x* + θh) has the sign of f^(n)(x*).

If n is even, then h^n/n! is positive irrespective of the sign of h. Hence f(x* + h) - f(x*) and f^(n)(x*) will have the same sign. Thus if f^(n)(x*) >= 0 then f(x* + h) - f(x*) >= 0, i.e. f(x* + h) >= f(x*) [from (2)]; therefore x* is a relative minimum of f if f^(n)(x*) >= 0. (Or) if f^(n)(x*) <= 0 then f(x* + h) - f(x*) <= 0, i.e. f(x* + h) <= f(x*) [from (2)]; therefore x* is a relative maximum of f if f^(n)(x*) <= 0.

If n is odd, then h^n/n! changes sign with the change in the sign of h. Hence x* is neither a maximum nor a minimum. Hence the theorem.

Note: The point x* that is neither a maximum nor a minimum is called a point of inflection.
Fig: 7 (a function whose derivative vanishes at x = 0 although x = 0 is neither a minimum nor a maximum)

For the function in the above figure, the derivative f'(x) = 0 at x = 0. However, this point is neither a minimum nor a maximum. In general, a point x* at which f'(x*) = 0 is called a stationary point.

SOLVED PROBLEMS

Problem 1: Determine the maximum and minimum values of the function f(x) = 12x^5 - 45x^4 + 40x^3 + 5.

Solution: Since

f'(x) = 60x^4 - 180x^3 + 120x^2 = 60(x^4 - 3x^3 + 2x^2) = 60x^2 (x - 1)(x - 2),

f'(x) = 0 at x = 0, x = 1 and x = 2. Also f''(x) = 60(4x^3 - 9x^2 + 4x).

At x = 1, f''(x) = -60 and hence x = 1 is a relative maximum, since f''(x) < 0 and n = 2 is even, with f_max = f(1) = 12.

At x = 2, f''(x) = 240 and hence x = 2 is a relative minimum, since f''(x) > 0 and n = 2 is even, with f_min = f(2) = -11.
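The derivative tests in Problem 1 can be checked directly; this is a small sketch (not part of the text) using the exact polynomial derivatives:

```python
# Problem 1: f(x) = 12x^5 - 45x^4 + 40x^3 + 5 and its first three derivatives.
def f(x):  return 12 * x**5 - 45 * x**4 + 40 * x**3 + 5
def f1(x): return 60 * x**4 - 180 * x**3 + 120 * x**2   # f'
def f2(x): return 240 * x**3 - 540 * x**2 + 240 * x     # f''
def f3(x): return 720 * x**2 - 1080 * x + 240           # f'''

for x in (0, 1, 2):
    assert f1(x) == 0                  # all three are stationary points
print(f2(1), f(1))   # -60 12  -> relative maximum, f_max = 12
print(f2(2), f(2))   # 240 -11 -> relative minimum, f_min = -11
print(f2(0), f3(0))  # 0 240   -> first nonzero derivative has odd order n = 3
```

The output reproduces the classification in the text, including the inflection behaviour at x = 0 discussed next.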
At x = 0, f''(x) = 0 and hence we must go to the next derivative:

f'''(x) = 60(12x^2 - 18x + 4) = 240 at x = 0.

Since f'''(x) ≠ 0 at x = 0 and n = 3 is odd, x = 0 is neither a maximum nor a minimum; it is an inflection point.

Problem 2: Find the maxima and minima, if any, of the function f(x) = 4x^3 - 18x^2 + 27x - 7.

Solution: f'(x) = 12x^2 - 36x + 27 = 3(4x^2 - 12x + 9) = 3(2x - 3)^2 = 0 at the stationary point x = 3/2, that is, where f'(x) = 0.

f''(x) = f''(3/2) = 3(8 × 3/2 - 12) = 0, so we must go to the next derivative: f'''(x) = 24 ≠ 0. Since n = 3 is odd, x = 3/2 is an inflection point.

For the adjacent solved problem, with f(x) = (x + 1)^2 (x - 2)^3:

At x = -1, f''(x) = 2(x - 2)^3 + 12(x + 1)(x - 2)^2 + 6(x + 1)^2 (x - 2) gives f''(-1) = 2(-3)^3 = -54 ≠ 0; since n = 2 is even and f''(-1) < 0, x = -1 is a relative maximum.

At x = 2, f''(2) = 0, so we go to f'''(x) = 18(x - 2)^2 + 36(x + 1)(x - 2) + 6(x + 1)^2, giving f'''(2) = 6(3)^2 = 54 ≠ 0; since n = 3 is odd, x = 2 is neither a maximum nor a minimum but a point of inflection.
Here A_i denotes the i-th leading principal minor of A, for example

A_2 = | a11  a12 |
      | a21  a22 |

4. A square matrix A is negative semi definite ⟺ the leading principal minors of A are alternately <= 0 and >= 0; in other words, the i-th leading principal minor A_i is either zero or has the sign of (-1)^i, i = 1, 2, ..., n.

5. A square matrix A is indefinite ⟺ it is none of the above four types.

Examples

3. For the given matrix A, A_2 = 8 > 0 and A_3 = |A| = 0; every leading principal minor is either zero or positive. ∴ The given matrix A is positive semi definite.

4. Consider the given matrix A. Here each A_i is either zero or has the sign of (-1)^i, i = 1, 2. ∴ The given matrix A is negative semi definite.

Alternative method to find the nature of a square matrix

A square matrix A is said to be
1. positive definite if all the eigen values of A > 0
2. negative definite if all the eigen values of A < 0
3. positive semi definite if all the eigen values of A >= 0 and at least one eigen value = 0
4. negative semi definite if all the eigen values of A <= 0 and at least one eigen value = 0
5. indefinite if some of the eigen values of A are +ve and others -ve.
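The leading-principal-minor test can be sketched in code. This is a minimal illustration (not from the text); note that this simple version only separates the two definite cases, since semi definiteness needs checks beyond the leading minors alone:

```python
# Classify a small square matrix from the signs of its leading principal
# minors A_1, A_2, ..., A_n.

def det(M):
    # Laplace expansion along the first row; fine for small matrices.
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def classify(A):
    n = len(A)
    minors = [det([row[:k] for row in A[:k]]) for k in range(1, n + 1)]
    if all(m > 0 for m in minors):
        return "positive definite"
    if all(m * (-1) ** (k + 1) > 0 for k, m in enumerate(minors)):
        # A_k has the sign of (-1)^k for every k
        return "negative definite"
    return "indefinite or semi definite (further checks needed)"

print(classify([[2, 0], [0, 2]]))      # positive definite
print(classify([[-2, 0], [0, -2]]))    # negative definite
print(classify([[4, -3], [-3, 0]]))    # indefinite or semi definite ...
```

The eigenvalue criterion listed above gives the same answers and resolves the ambiguous third case.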
Example: For the given 3 × 3 matrix A,

A_1 = 4 > 0,  A_2 = | 4  -3 | = -9 < 0,  A_3 = |A| = -82.
                    | -3  0 |

A is not positive definite, since some A_i are negative.
A is not negative definite, since A_1 does not have the sign of (-1)^i, i = 1, 2, 3.
A is not positive semi definite, since some A_i are negative and all are non zero.
A is not negative semi definite, since the A_i are all non zero, i.e. they are not alternately <= 0 and >= 0.
∴ The matrix A is indefinite.

When f is a function of several variables X = (x_1, x_2, ..., x_n)^T, the Hessian matrix [J] is the n × n matrix of second partial derivatives:

J = [ ∂²f/∂x_1²      ∂²f/∂x_1∂x_2   ...  ∂²f/∂x_1∂x_n ]
    [ ∂²f/∂x_2∂x_1   ∂²f/∂x_2²      ...  ∂²f/∂x_2∂x_n ]
    [ ...                                             ]
    [ ∂²f/∂x_n∂x_1   ∂²f/∂x_n∂x_2   ...  ∂²f/∂x_n²    ]

Example: For a given f(x_1, x_2, x_3), the Hessian matrix [J] evaluated at the point (1, 2, 3)^T has J_2 = -684 < 0, and the leading principal minors fit none of the definite or semi definite sign patterns. In particular, J is not positive definite, not negative definite, nor even semi definite at (1, 2, 3)^T. Clearly J is indefinite there.
Necessary and sufficient conditions for the optimization of a multivariable objective function without constraints

Working Rule: Given: find the minimum or maximum of the function f(X), where X = (x_1, x_2, ..., x_n)^T.

Step 1: Necessary condition: A necessary condition for a continuous function f(X) with continuous first and second partial derivatives to have an extreme point at X* is that each first partial derivative of f(X), evaluated at X*, vanishes; that is,

∂f/∂x_1 = 0, ∂f/∂x_2 = 0, ..., ∂f/∂x_n = 0    (1)

Step 2: Getting stationary points by solving (1): It is also important to note that the above necessary conditions (1) are also satisfied for cases other than strictly extreme points; the condition includes, for example, inflection and saddle points. Consequently, the stationary points obtained from (1) must be further classified by testing the nature of the Hessian matrix at each of them.
Here ∂f/∂x_1 = 0, ∂f/∂x_2 = 0, ∂f/∂x_3 = 0, and the Hessian matrix is

J = [ 2  0  0 ]
    [ 0  2  0 ]
    [ 0  0  2 ]

Now J_1 = |2| = 2 > 0, J_2 = 4 > 0 and J_3 = |J| = 8 > 0. Clearly J is positive definite.

∴ The point (2, 4, 6) is a relative minimum point of f(x_1, x_2, x_3).

Problem 4: Examine f(X), a quadratic function of x_1, x_2 and x_3 containing cross terms such as 4x_1x_2 and 16x_1x_3, for relative extrema.

Solution: Setting ∂f/∂x_1 = 0, ∂f/∂x_2 = 0 and ∂f/∂x_3 = 4x_3 + 16x_1 + 8x_2 = 0, the only solution is the point (0, 0, 0). Now consider the Hessian matrix [J] at (0, 0, 0):

J_1 = 2 > 0,  J_2 = 0  and  J_3 = -128 < 0.

The leading principal minors are neither all positive nor alternating in sign; J is indefinite.

∴ (0, 0, 0) is a saddle point of f(X).
Problem 3: Find the extreme points of the function f(x_1, x_2, x_3) = x_1 + 2x_3 + x_2 x_3 - x_1^2 - x_2^2 - x_3^2.

Solution: The necessary conditions for the existence of an extreme point are

∂f/∂x_1 = 0, ∂f/∂x_2 = 0, ∂f/∂x_3 = 0

⇒ 1 - 2x_1 = 0,  x_3 - 2x_2 = 0,  2 + x_2 - 2x_3 = 0.

The solution of these simultaneous equations is the point (1/2, 2/3, 4/3).

To find the nature of this stationary point, consider the Hessian matrix [J] evaluated at (1/2, 2/3, 4/3):

J = [ -2   0   0 ]
    [  0  -2   1 ]
    [  0   1  -2 ]

J_1 = -2 < 0, J_2 = 4 > 0 and J_3 = |J| = -6 < 0. The leading principal minors alternate in sign, J_i having the sign of (-1)^i; hence J is negative definite and (1/2, 2/3, 4/3) is a relative maximum point of f.

Problem 5: Examine f(x_1, x_2) = 1/2 [k_2 x_1^2 + k_1 (x_2 - x_1)^2 + k_3 x_2^2] - T x_2 for maximum and minimum (k_1, k_2 and k_3 are positive).

Solution: The necessary conditions to have an optimum are

∂f/∂x_1 = k_2 x_1 - k_1 (x_2 - x_1) = 0    (1)
∂f/∂x_2 = k_1 (x_2 - x_1) + k_3 x_2 - T = 0    (2)

From (1) and (2):

(x_1*, x_2*) = (T k_1 / Δ, T (k_1 + k_2) / Δ), where Δ = k_1 k_2 + k_1 k_3 + k_2 k_3.

The nature of this stationary point is found by testing the Hessian matrix [J]:

J = [ k_1 + k_2   -k_1      ]
    [ -k_1        k_1 + k_3 ]
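Problem 3 above can be cross-checked with exact arithmetic; a small sketch (not part of the text):

```python
# f(x1, x2, x3) = x1 + 2x3 + x2*x3 - x1^2 - x2^2 - x3^2.
# The gradient equations are linear, so the stationary point is exact,
# and the Hessian is constant.
from fractions import Fraction as F

# grad f = 0  =>  2x1 = 1,  2x2 - x3 = 0,  2x3 - x2 = 2
x1 = F(1, 2)
x2 = F(2, 3)
x3 = F(4, 3)
assert 2 * x2 - x3 == 0 and 2 + x2 - 2 * x3 == 0

# Hessian and its leading principal minors
H = [[-2, 0, 0], [0, -2, 1], [0, 1, -2]]
J1 = H[0][0]
J2 = H[0][0] * H[1][1] - H[0][1] * H[1][0]
J3 = -2 * (4 - 1)      # expanding |H| along the first row
print(J1, J2, J3)      # -2 4 -6: signs alternate => negative definite
```

The alternating minors confirm the relative maximum at (1/2, 2/3, 4/3).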
Then J_1 = k_1 + k_2 > 0 and J_2 = |J| = (k_1 + k_2)(k_1 + k_3) - k_1^2 = k_1 k_2 + k_1 k_3 + k_2 k_3 = Δ > 0.

J is positive definite, and (x_1*, x_2*) corresponds to the minimum of f(x_1, x_2).

2.4 Multivariable optimization with equality constraints

In this section, we consider the optimization of a continuous multivariable objective function f(X) subjected to equality constraints. In general, the problem is:

optimize f(X)
subject to g_j(X) = 0, j = 1, 2, ..., m    (1)

where X = (x_1, x_2, ..., x_n)^T.

Here m <= n; otherwise, if m > n, the problem becomes over-defined and, in general, there will be no solution. There are several methods to solve this type of problem, out of which we consider solution by direct substitution and solution by the method of Lagrange multipliers.

Problem 2: Minimize f(X) = f(x_1, x_2, x_3) = 1/2 (x_1^2 + x_2^2 + x_3^2)    (2)

subject to the given equality constraints. Substituting the constraints, the design variables can be written as (x_1, x_2, x_3) = (x_1, x_1, 1 - 2x_1); hence the equality constraints can be used to eliminate the remaining design variables from the objective function. Clearly, the problem is transformed into a single-variable optimization problem:

Minimize f(x_1) = 1/2 (6x_1^2 - 4x_1 + 1)    (3)

We can solve it by the unconstrained optimization method for a single-variable objective function:

f'(x_1) = 6x_1 - 2 = 0 ⇒ x_1 = 1/3
f''(x_1) = 6 > 0 at x_1 = 1/3

Then x_1 = 1/3 is a relative minimum of f(x_1), and f_min = f(1/3) = 1/6.
f(x_2, x_3) = 8 x_2 x_3 √(1 - x_2^2 - x_3^2)    (4)

which is maximized as an unconstrained function in the two variables x_2 and x_3. The necessary conditions for the maximum of f give, after simplifying,

1 - 2x_2^2 - x_3^2 = 0
1 - x_2^2 - 2x_3^2 = 0

Solving, x_2* = x_3* = 1/√3, and hence x_1* = 1/√3.

To find whether the solution found corresponds to a maximum or a minimum, we apply the sufficiency conditions, considering the Hessian matrix [J] at (x_2*, x_3*) = (1/√3, 1/√3):

J = [ -32/√3   -16/√3 ]
    [ -16/√3   -32/√3 ]

J_1 = -32/√3 < 0 and J_2 = |J| = (32^2 - 16^2)/3 = 256 > 0.

The Hessian matrix of f(x_2, x_3) is negative definite at (x_2*, x_3*). Hence the point corresponds to the maximum of f.

Here λ = (λ_1, λ_2, ..., λ_m)^T are unknown constants called Lagrange multipliers, one multiplier λ_j for each constraint g_j(X). Clearly L(X, λ) is a function of the n + m unknowns x_1, x_2, ..., x_n, λ_1, λ_2, ..., λ_m. Now, the necessary conditions for the extremum of L, which also correspond to the solution of the original problem, are

∂L/∂x_i = 0, i = 1, 2, ..., n
∂L/∂λ_j = -g_j(X) = 0, j = 1, 2, ..., m

Clearly, the above gives the following necessary conditions:

∂L/∂x_1 = 0, ∂L/∂x_2 = 0, ..., ∂L/∂x_n = 0
g_1(X) = 0, g_2(X) = 0, ..., g_m(X) = 0

These conditions, when solved as simultaneous equations in the x_i's and λ_j's, determine the stationary points of the Lagrangian function; so the optimization of f(X) subject to g_j(X) = 0 is equivalent to the optimization of L(X, λ).

Now, the sufficient conditions (for determining the nature of the stationary points) are given by the following theorem:

Theorem (sufficient condition): Let f(X) and g_j(X), j = 1, 2, ..., m, be twice continuously differentiable functions of X, and let there exist a point (X*, λ*) satisfying the necessary conditions. Then:
Let L = [L_ij] = [∂²L/∂x_i∂x_j], for all i and j, be the matrix of second-order derivatives of L(X, λ) with respect to X, and let g = [g_ij] be the m × n matrix of constraint derivatives, where

L_ij = ∂²L/∂x_i∂x_j,  g_ij = ∂g_i/∂x_j.

Define the bordered Hessian matrix

J_B = [ 0    g ]
      [ g^T  L ]

evaluated at the stationary point (X*, λ*), where 0 is the m × m null matrix. Then:

i. X* is a maximum point if, starting with the principal minor of order (2m + 1), the last (n - m) principal minors of J_B form an alternating sign pattern starting with (-1)^(m+1).

ii. X* is a minimum point if, starting with the principal minor of order (2m + 1), the last (n - m) principal minors of J_B have the sign of (-1)^m.

Note: Other conditions exist that are both necessary and sufficient for identifying extreme points.

Method 2: Consider, instead of J_B, the matrix

Δ = [ 0     g      ]
    [ g^T   L - μI ]

evaluated at the stationary point (X*, λ*), where μ is an unknown parameter. Setting the determinant |Δ| = 0, clearly we can have an equation of order (n - m) in μ. Then each of the (n - m) real roots of the equation obtained from |Δ| = 0 must be

i. negative if X* is a maximum point
ii. positive if X* is a minimum point.

Also, if some roots are +ve and some are -ve, then X* is not an extreme point.

Necessary and sufficient conditions for the optimization of a multivariable objective function with equality constraints (Lagrange multipliers)

Working Rule: Given the problem:

Optimize f(X)
subject to g_j(X) = 0 for all j = 1, ..., m

Step 1: Form the Lagrangean function

L(x_1, x_2, ..., x_n, λ_1, λ_2, ..., λ_m) = f(x_1, x_2, ..., x_n) - λ_1 g_1(X) - λ_2 g_2(X) - ... - λ_m g_m(X)
From the necessary conditions of the Lagrangean function, ∂L/∂x_1 = 2x_1 - 4λ = 0, ∂L/∂x_2 = 2x_2 - 2λx_2 = 0 and ∂L/∂x_3 = 2x_3 - 2λ = 0, we get x_1 = 2λ, x_3 = λ and x_2 (1 - λ) = 0, so that either λ = 1 or x_2 = 0.

Consider the case λ = 1: then x_1 = 2, x_3 = 1, and from the constraint 4x_1 + x_2^2 + 2x_3 - 14 = 0,

8 + x_2^2 + 2 - 14 = 0 ⇒ x_2^2 - 4 = 0 ⇒ x_2 = ±2.

Consider the case x_2 = 0: then 4(2λ) + 2λ - 14 = 0 ⇒ λ = 1.4, giving (x_1, x_2, x_3, λ) = (2.8, 0, 1.4, 1.4).

Hence the stationary points are

(x_1, x_2, x_3, λ) = (2, 2, 1, 1), (2, -2, 1, 1) and (2.8, 0, 1.4, 1.4).

To find the nature of these stationary points, we have to consider the sufficiency conditions and the bordered Hessian matrix J_B. Here m = 1 and n = 3, so for a stationary point to be a minimum, starting with the principal minor of order 2m + 1 = 3, the last (n - m) = 2 principal minors of J_B must have the sign of (-1)^1, i.e. both must be negative.

At (2, 2, 1, 1):

J_B = [ 0  4  4  2 ]
      [ 4  2  0  0 ]
      [ 4  0  0  0 ]
      [ 2  0  0  2 ]

The principal minors of orders 3 and 4 are -32 and -64; both are negative, having the sign of (-1)^m. Hence (2, 2, 1) is a minimum point and, by symmetry, so is (2, -2, 1), with f_min = f(2, ±2, 1) = 9.

At (2.8, 0, 1.4, 1.4): the principal minors of orders 3 and 4 of J_B are 12.8 and 32; they are neither both negative (minimization) nor alternating in sign starting with (-1)^(m+1) (maximization). ∴ (2.8, 0, 1.4, 1.4) is not an extreme point.
Problem 3: Minimize f(X) = x_1^2 + x_2^2 + x_3^2

subject to g_1(X) = x_1 + x_2 + 3x_3 - 2 = 0
           g_2(X) = 5x_1 + 2x_2 + x_3 - 5 = 0

Solution: The Lagrangian function is

L(x_1, x_2, x_3, λ_1, λ_2) = f(x_1, x_2, x_3) - λ_1 g_1(X) - λ_2 g_2(X)
L = x_1^2 + x_2^2 + x_3^2 - λ_1 (x_1 + x_2 + 3x_3 - 2) - λ_2 (5x_1 + 2x_2 + x_3 - 5)

The necessary conditions to have extrema:

∂L/∂x_1 = 2x_1 - λ_1 - 5λ_2 = 0    (1)
∂L/∂x_2 = 2x_2 - λ_1 - 2λ_2 = 0    (2)
∂L/∂x_3 = 2x_3 - 3λ_1 - λ_2 = 0    (3)
∂L/∂λ_1 = 0, i.e. x_1 + x_2 + 3x_3 - 2 = 0    (4)
∂L/∂λ_2 = 0, i.e. 5x_1 + 2x_2 + x_3 - 5 = 0    (5)

From (1), (2) and (3):

x_1 = (λ_1 + 5λ_2)/2,  x_2 = (λ_1 + 2λ_2)/2,  x_3 = (3λ_1 + λ_2)/2

Substituting in (4) and (5):

11λ_1 + 10λ_2 = 4    (6)
10λ_1 + 30λ_2 = 10    (7)

From (6) and (7): λ_1 = 2/23 and λ_2 = 7/23, so the stationary point is

X* = (x_1, x_2, x_3) = (37/46, 8/23, 13/46) ≈ (0.81, 0.35, 0.28).

Here n = 3, m = 2, n - m = 1 and 2m + 1 = 5; thus we need to check only the determinant of the 5 × 5 bordered Hessian J_B, which for a minimum should have the sign of (-1)^m = (-1)^2:

J_B = [ 0  0  1  1  3 ]
      [ 0  0  5  2  1 ]
      [ 1  5  2  0  0 ]
      [ 1  2  0  2  0 ]
      [ 3  1  0  0  2 ]

and |J_B| = 460 > 0. ∴ X* ≈ (0.81, 0.35, 0.28) is a minimization point of f(X).
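The necessary conditions (1)-(5) of Problem 3 form a linear system in (x_1, x_2, x_3, λ_1, λ_2), so they can be solved exactly; this cross-check is a sketch of mine, not part of the text:

```python
# Solve the 5x5 system of Problem 3 with exact rational arithmetic.
from fractions import Fraction as F

# Rows: 2x1 - l1 - 5l2 = 0; 2x2 - l1 - 2l2 = 0; 2x3 - 3l1 - l2 = 0;
#       x1 + x2 + 3x3 = 2;  5x1 + 2x2 + x3 = 5
A = [[2, 0, 0, -1, -5],
     [0, 2, 0, -1, -2],
     [0, 0, 2, -3, -1],
     [1, 1, 3, 0, 0],
     [5, 2, 1, 0, 0]]
b = [0, 0, 0, 2, 5]

# Gauss-Jordan elimination over the rationals.
M = [[F(v) for v in row] + [F(rhs)] for row, rhs in zip(A, b)]
n = 5
for c in range(n):
    p = next(r for r in range(c, n) if M[r][c] != 0)   # pivot row
    M[c], M[p] = M[p], M[c]
    M[c] = [v / M[c][c] for v in M[c]]                 # normalize pivot
    for r in range(n):
        if r != c and M[r][c] != 0:
            M[r] = [vr - M[r][c] * vc for vr, vc in zip(M[r], M[c])]

x1, x2, x3, l1, l2 = (M[r][n] for r in range(n))
print(x1, x2, x3, l1, l2)  # 37/46 8/23 13/46 2/23 7/23
```

The exact fractions match the decimal point (0.81, 0.35, 0.28) quoted above.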
For this stationary point (x*, y*, λ*), the bordered Hessian matrix will have the required sign of (-1)^2 = +1 (check; here a and k are constants), and (x*, y*) is a minimization point.

Problem 6: Maximize f(x_1, x_2) = π x_1^2 x_2

subject to 2π x_1^2 + 2π x_1 x_2 = A = 24π.

Solution: The Lagrange function is

L(x_1, x_2, λ) = π x_1^2 x_2 - λ (2π x_1^2 + 2π x_1 x_2 - A)

The necessary conditions are

∂L/∂x_1 = 2π x_1 x_2 - λ (4π x_1 + 2π x_2) = 0    (1)
∂L/∂x_2 = π x_1^2 - 2π x_1 λ = 0    (2)
∂L/∂λ = -(2π x_1^2 + 2π x_1 x_2 - A) = 0    (3)

From (2): λ = x_1/2. Substituting in (1): 2x_1 x_2 = (x_1/2)(4x_1 + 2x_2) ⇒ x_2 = 2x_1.

Using (3) with A = 24π: 2π x_1^2 + 4π x_1^2 = 24π ⇒ x_1 = 2, x_2 = 4, λ = 1.

Hence the stationary point is (x_1*, x_2*, λ*) = (2, 4, 1), with f(X*) = 16π.

To see that this solution actually maximizes f, we apply the sufficiency condition (Method 2): here n = 2 and m = 1, so |Δ| = 0 gives a single root μ, which turns out to be negative; hence (2, 4) is a maximization point.
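Problem 6 can also be checked by eliminating x_2 through the constraint, a direct-substitution sketch (mine, not the text's):

```python
# Constraint 2*pi*x1^2 + 2*pi*x1*x2 = 24*pi  =>  x2 = (12 - x1^2)/x1,
# so the objective reduces to f(x1) = pi*x1^2*x2 = pi*(12*x1 - x1^3).
import math

def volume(x1):
    x2 = (12.0 - x1 * x1) / x1
    return math.pi * x1 * x1 * x2

xs = [i / 1000.0 for i in range(1, int(math.sqrt(12) * 1000))]
x1_best = max(xs, key=volume)
print(x1_best, volume(x1_best) / math.pi)  # 2.0 and 16.0
```

The grid maximum lands exactly on x_1 = 2 with f = 16π, agreeing with the Lagrange-multiplier result.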
J_2 = |J| = 16 > 0. Hence J is a positive definite matrix. Thus f(X) is a (strictly) convex function.

ii. f(x) = -8x^2

Solution: J = [d²f/dx²] = (-16); |J_1| = -16 < 0. Thus f(x) is a (strictly) concave function.

Note: Any local minimum of a convex function f(X) is a global minimum. Thus there can't exist more than one minimum for a convex function. Similarly, any local maximum of a concave function f(X) is a global maximum. Thus there can't exist more than one maximum for a concave function.

Multivariable optimization with inequality constraints

So far we studied multivariable optimization with equality constraints; if the constraints are of inequality form, we can still solve by the method of Lagrangean multipliers, and this procedure of solving yields the Kuhn - Tucker necessary conditions for identifying stationary points of a nonlinear constrained problem subject to inequality constraints. These conditions are also sufficient under certain rules that will be stated or discussed later.

Consider the problem:

Maximize f(X)
subject to g_j(X) <= 0, j = 1 to m
and X >= 0

The inequality constraints may be converted to equations by using non-negative slack variables. Let s_j^2 (>= 0) be the slack quantity added to the j-th constraint g_j(X) <= 0, and let S = (s_1, s_2, ..., s_m)^T and S^2 = (s_1^2, s_2^2, ..., s_m^2), where m is the total number of inequality constraints.

Thus, the Lagrangean function is given by

L(x_1, x_2, ..., x_n, s_1, s_2, ..., s_m, λ_1, λ_2, ..., λ_m) = L(X, S, λ) = f(X) - Σ_j λ_j [g_j(X) + s_j^2], j = 1 to m

Now, taking the partial derivatives of L with respect to X, S and λ and analyzing, we will have the following Kuhn - Tucker necessary conditions for the given maximization problem:

∂f/∂x_i - Σ_j λ_j ∂g_j/∂x_i = 0, i = 1 to n
λ_j g_j(X) = 0, j = 1 to m
g_j(X) <= 0, j = 1 to m
λ_j >= 0, j = 1 to m

For the above maximization problem with g_j(X) <= 0, these restrictions λ_j >= 0 must hold as part of the Kuhn - Tucker necessary conditions.

Note: The above conditions can be applied to the minimization case as well, with the exception that λ must be non-positive. (But, in both maximization and minimization, the Lagrange multipliers corresponding to equality constraints must be unrestricted in sign.)

If L is concave in the case of a maximization problem and convex in the case of a minimization problem, the different possibilities for λ_j are as follows:

Table 1

Type of optimization    Type of constraint g_j(X)    Lagrange multiplier λ_j
Maximization            <=                           >= 0
Maximization            >=                           <= 0
Maximization            =                            Unrestricted
Minimization            <=                           <= 0
Minimization            >=                           >= 0
Minimization            =                            Unrestricted

Sufficiency of the Kuhn - Tucker conditions: The Kuhn - Tucker necessary conditions are also sufficient if the objective function f(X) and the solution space satisfy certain conditions regarding convexity and concavity. These conditions are summarized as follows:

Table 2

Type of optimization    Required condition on f(X)    Required condition on the solution space
Maximization            Concave                       Convex set
Minimization            Convex                        Convex set

It is easier to verify that a function is convex or concave than it is to prove that a solution space is a convex set. For this reason, we provide a list of conditions that are easier to apply in practice, in the sense that the convexity of the solution space can be established by checking directly the convexity or concavity of the constraint functions.
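The four Kuhn - Tucker conditions above are mechanical to check at a candidate point. The following sketch uses a made-up concave problem (not from the text), with candidate point and multiplier chosen by hand:

```python
# Check the K-T conditions for: maximize f = -(x1-2)^2 - (x2-1)^2
# subject to g = x1 + x2 - 2 <= 0.  Candidate: (1.5, 0.5) with lam = 1.

x1, x2, lam = 1.5, 0.5, 1.0

df = (-2 * (x1 - 2), -2 * (x2 - 1))   # gradient of f
dg = (1.0, 1.0)                        # gradient of g
g = x1 + x2 - 2

stationarity = all(abs(dfi - lam * dgi) < 1e-9 for dfi, dgi in zip(df, dg))
complementary = abs(lam * g) < 1e-9    # lam * g(X) = 0
feasible = g <= 1e-9                   # g(X) <= 0
sign_ok = lam >= 0                     # lam >= 0 (maximization, <= constraint)

print(stationarity, complementary, feasible, sign_ok)  # all True
```

Since f is concave and g is linear (so the solution space is a convex set), Table 2 makes these conditions sufficient, and (1.5, 0.5) is the global maximum of this toy problem.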
T ~ e- these conditions, we define the generalized nonlinear problem as / subject to
and
Optimize f(X)
subject to g/X) s 0,
giX) ~ 0,
g/X) = 0,
r
j = I, 2, ... r
J = r + 1, ... p
j=p+l, ... m
p m
'
f
f.
f
tt
SOLVED PROBLEMS

Problem 1: Maximize f(x₁, x₂) = 3.6x₁ − 0.4x₁² + 1.6x₂ − 0.2x₂²
subject to g(x₁, x₂) = 2x₁ + x₂ − 10 ≤ 0, x₁ ≥ 0, x₂ ≥ 0.

Solution: The Lagrangean function is given by

    L(x₁, x₂, λ) = 3.6x₁ − 0.4x₁² + 1.6x₂ − 0.2x₂² − λ(2x₁ + x₂ − 10 + s²)

Then the Kuhn - Tucker conditions are:

    ∂f/∂x₁ = λ ∂g/∂x₁  ⇒  3.6 − 0.8x₁ = 2λ        (1)
    ∂f/∂x₂ = λ ∂g/∂x₂  ⇒  1.6 − 0.4x₂ = λ         (2)
    λ g(x) = 0         ⇒  λ(2x₁ + x₂ − 10) = 0    (3)
    g(x) ≤ 0           ⇒  2x₁ + x₂ − 10 ≤ 0       (4)
    λ ≥ 0                                          (5)
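Condition (3) splits the search into two cases, each of which is a small linear system. A numerical sketch (my own, not from the text) that works both cases of Problem 1:

```python
import numpy as np

# Case λ = 0: stationary point of f itself.
# 3.6 - 0.8*x1 = 0 and 1.6 - 0.4*x2 = 0  ->  (4.5, 4.0)
x1, x2 = 3.6 / 0.8, 1.6 / 0.4
unconstrained_feasible = (2 * x1 + x2 - 10) <= 0   # 13 - 10 > 0 -> infeasible

# Case λ != 0: constraint active.  Solve the linear Kuhn-Tucker system
#   0.8*x1          + 2*λ = 3.6     (condition (1))
#            0.4*x2 +   λ = 1.6     (condition (2))
#   2*x1  +  x2            = 10     (active constraint)
A = np.array([[0.8, 0.0, 2.0],
              [0.0, 0.4, 1.0],
              [2.0, 1.0, 0.0]])
b = np.array([3.6, 1.6, 10.0])
x1s, x2s, lam = np.linalg.solve(A, b)

f_max = 3.6 * x1s - 0.4 * x1s**2 + 1.6 * x2s - 0.2 * x2s**2
```

The λ = 0 case gives (4.5, 4), which violates (4); the active case gives x₁ = 3.5, x₂ = 3 with λ = 0.4 ≥ 0, so all the K - T conditions hold and f_max = 10.7.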
Problem: Maximize f(x₁, x₂) = 3x₁² + 14x₁x₂ − 8x₂²
subject to g(x₁, x₂) = 3x₁ + 6x₂ − 72 ≤ 0.

Solution: The Lagrangian function is given by

    L(x₁, x₂, λ) = 3x₁² + 14x₁x₂ − 8x₂² − λ(3x₁ + 6x₂ − 72 + s²)

Then the Kuhn - Tucker conditions are:
    ∂f/∂x₁ = λ ∂g/∂x₁  ⇒  6x₁ + 14x₂ = 3λ          (1)
    ∂f/∂x₂ = λ ∂g/∂x₂  ⇒  14x₁ − 16x₂ = 6λ         (2)
    λ g(x) = 0         ⇒  λ(3x₁ + 6x₂ − 72) = 0    (3)
    g(x) ≤ 0           ⇒  3x₁ + 6x₂ − 72 ≤ 0       (4)
    λ ≥ 0                                           (5)

From (3): either λ = 0 or 3x₁ + 6x₂ − 72 = 0.

Case λ = 0: from (1) and (2), x₁ = 0 and x₂ = 0. But f(0, 0) = 0 can be improved upon (for example, (1, 0) is feasible and f(1, 0) = 3 > 0).
∴ (0, 0) is not an optimal point.

∴ consider the case λ ≠ 0, that is,

    3x₁ + 6x₂ − 72 = 0    (6)

From (1) and (2), eliminating λ: 2(6x₁ + 14x₂) = 14x₁ − 16x₂ ⇒ x₁ = 22x₂. Then from (6): 66x₂ + 6x₂ = 72 ⇒ x₂ = 1 ⇒ x₁ = 22, and λ = 146/3 ≈ 48.67 ≥ 0. The point (22, 1) satisfies all the K - T conditions, and hence it is an optimal point, with

    Max f = f(22, 1) = 1752.
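Since f = 3x₁² + 14x₁x₂ − 8x₂² is not concave, Table 2 does not certify the point (22, 1). A quick numerical cross-check (my own, not from the text) confirms that both stationarity equations give the same λ and that (22, 1) is the best point on the active boundary 3x₁ + 6x₂ = 72:

```python
import numpy as np

def f(x1, x2):
    return 3 * x1**2 + 14 * x1 * x2 - 8 * x2**2

# Stationarity at (22, 1): conditions (1) and (2) must give the same λ.
lam1 = (6 * 22 + 14 * 1) / 3       # from 6*x1 + 14*x2 = 3*λ
lam2 = (14 * 22 - 16 * 1) / 6      # from 14*x1 - 16*x2 = 6*λ

# On the active constraint 3*x1 + 6*x2 = 72, substitute x1 = 24 - 2*x2:
# f reduces to 1728 + 48*x2 - 24*x2**2, a downward parabola with vertex x2 = 1.
x2_grid = np.linspace(0.0, 12.0, 4801)
f_on_boundary = f(24 - 2 * x2_grid, x2_grid)
best_x2 = x2_grid[np.argmax(f_on_boundary)]
```

The grid maximum sits at x₂ = 1 (hence x₁ = 22), in agreement with the analytical solution.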
Problem: Minimize f(X) = x₁² + x₂² + x₃²
subject to g₁ = 2x₁ + x₂ − 5 ≤ 0
           g₂ = x₁ + x₃ − 2 ≤ 0
           g₃ = 1 − x₁ ≤ 0
           g₄ = 2 − x₂ ≤ 0
           g₅ = −x₃ ≤ 0

Solution: The Lagrangian function is given by

    L(x₁, x₂, x₃, λ₁, λ₂, λ₃, λ₄, λ₅)
        = x₁² + x₂² + x₃² − λ₁(2x₁ + x₂ − 5 + s₁²) − λ₂(x₁ + x₃ − 2 + s₂²)
          − λ₃(1 − x₁ + s₃²) − λ₄(2 − x₂ + s₄²) − λ₅(−x₃ + s₅²)

Then the Kuhn - Tucker conditions, ∂f/∂x_i = Σ_{j=1}^{5} λ_j ∂g_j/∂x_i (i = 1 to 3), λ_j g_j = 0 and g_j ≤ 0 (j = 1 to 5), become:

    2x₁ − 2λ₁ − λ₂ + λ₃ = 0    (1)
    2x₂ − λ₁ + λ₄ = 0          (2)
    2x₃ − λ₂ + λ₅ = 0          (3)
    λ₁(2x₁ + x₂ − 5) = 0       (4)
    λ₂(x₁ + x₃ − 2) = 0        (5)
    λ₃(1 − x₁) = 0             (6)
    λ₄(2 − x₂) = 0             (7)
    λ₅(−x₃) = 0                (8)
    2x₁ + x₂ − 5 ≤ 0           (9)
    x₁ + x₃ − 2 ≤ 0            (10)
    1 − x₁ ≤ 0                 (11)
    2 − x₂ ≤ 0                 (12)
    −x₃ ≤ 0                    (13)

with λ_j ≤ 0 (j = 1 to 5), since the problem is the minimization problem.

By trial and error, consider the possibility that g₃ and g₄ are active, i.e. x₁ = 1, x₂ = 2, with λ₁ = λ₂ = λ₅ = 0. Then (3) gives x₃ = 0, while (1) and (2) give λ₃ = −2 and λ₄ = −4. All the remaining conditions are satisfied, and λ₃, λ₄ ≤ 0 as required for a minimization problem; hence

    X* = (1, 2, 0) and f_min = 5.
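The "trial and error" over possibilities can be organized as an enumeration of active constraint sets: for each candidate set, solve a linear system and keep only solutions that are feasible with correctly signed multipliers. A sketch (all names are mine; it uses the book's sign convention λ_j ≤ 0 for minimization):

```python
from itertools import combinations
import numpy as np

# minimize f = x1^2 + x2^2 + x3^2  subject to  g_j(x) = G[j] @ x + c[j] <= 0
G = np.array([[2.0, 1, 0],    # g1 = 2x1 + x2 - 5
              [1.0, 0, 1],    # g2 = x1 + x3 - 2
              [-1.0, 0, 0],   # g3 = 1 - x1
              [0.0, -1, 0],   # g4 = 2 - x2
              [0.0, 0, -1]])  # g5 = -x3
c = np.array([-5.0, -2.0, 1.0, 2.0, 0.0])

best = None
for active in [s for k in range(4) for s in combinations(range(5), k)]:
    m = len(active)
    A = np.zeros((3 + m, 3 + m))
    A[:3, :3] = 2 * np.eye(3)          # stationarity: 2x - sum λ_j ∇g_j = 0
    for i, j in enumerate(active):
        A[:3, 3 + i] = -G[j]
        A[3 + i, :3] = G[j]            # active constraint: g_j(x) = 0
    b = np.concatenate([np.zeros(3), -c[list(active)]])
    try:
        sol = np.linalg.solve(A, b)
    except np.linalg.LinAlgError:
        continue                        # degenerate active set, skip
    x, lam = sol[:3], sol[3:]
    feasible = np.all(G @ x + c <= 1e-9)
    signs_ok = np.all(lam <= 1e-9)      # λ_j <= 0 for minimization
    if feasible and signs_ok:
        fval = float(x @ x)
        if best is None or fval < best[0]:
            best = (fval, x)

f_min, x_opt = best
```

The enumeration recovers the trial solution above: the active set {g₃, g₄} yields X* = (1, 2, 0) with f_min = 5.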
Problem: Maximize f(x₁, x₂) = 8x₁ + 10x₂ − x₁² − x₂²
subject to 3x₁ + 2x₂ ≤ 6, x₁ ≥ 0, x₂ ≥ 0.
Solution: The Kuhn - Tucker conditions are:

    ∂f/∂x₁ = λ ∂g/∂x₁  ⇒  8 − 2x₁ = 3λ           (1)
    ∂f/∂x₂ = λ ∂g/∂x₂  ⇒  10 − 2x₂ = 2λ          (2)
    λ g = 0            ⇒  λ(3x₁ + 2x₂ − 6) = 0   (3)
    g ≤ 0              ⇒  3x₁ + 2x₂ − 6 ≤ 0      (4)
    λ ≥ 0                                         (5)

From (3): either λ = 0 or 3x₁ + 2x₂ − 6 = 0.

Consider λ = 0: then by (1) and (2), x₁ = 4 and x₂ = 5. But the point (4, 5) is not satisfying Equation (4).
∴ (4, 5) is not an optimal point. Hence, discard the case λ = 0.

Then consider λ ≠ 0, that is,

    3x₁ + 2x₂ − 6 = 0    (6)

By (1) and (2), eliminating λ: 2x₁ − 3x₂ + 7 = 0    (7)

By (6) and (7): (x₁, x₂) = (4/13, 33/13) ≈ (0.308, 2.538), and by (1): λ = 32/13 ≈ 2.462 ≥ 0.

Also the point (x₁, x₂) satisfies all the K - T conditions; therefore it is an optimal point. Hence, the optimum solution for the problem is

    x₁* = 4/13 ≈ 0.308, x₂* = 33/13 ≈ 2.538, λ = 32/13 ≈ 2.462 and Max f = 3601/169 ≈ 21.31.
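As in the previous problems, the active case reduces to a linear system; a sketch (names mine) that recovers the exact fractions x₁ = 4/13, x₂ = 33/13, λ = 32/13:

```python
import numpy as np

# Active case of the Kuhn-Tucker system:
#   2*x1        + 3*λ = 8      (condition (1))
#          2*x2 + 2*λ = 10     (condition (2))
#   3*x1 + 2*x2       = 6      (active constraint)
A = np.array([[2.0, 0.0, 3.0],
              [0.0, 2.0, 2.0],
              [3.0, 2.0, 0.0]])
b = np.array([8.0, 10.0, 6.0])
x1, x2, lam = np.linalg.solve(A, b)

f_max = 8 * x1 + 10 * x2 - x1**2 - x2**2   # = 3601/169, about 21.31
```

Since f is concave (Hessian −2I) and the constraint is linear, Table 2 applies and this Kuhn - Tucker point is the global maximum.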
Problem: Maximize f(X) = 4x₁ + 6x₂ − x₁² − x₂² − x₃²
subject to g₁ = x₁ + x₂ − 2 ≤ 0
           g₂ = 2x₁ + 3x₂ − 12 ≤ 0

Solution: The Kuhn - Tucker conditions are:

    ∂f/∂x₁ = λ₁ ∂g₁/∂x₁ + λ₂ ∂g₂/∂x₁  ⇒  4 − 2x₁ = λ₁ + 2λ₂    (1)
    ∂f/∂x₂ = λ₁ ∂g₁/∂x₂ + λ₂ ∂g₂/∂x₂  ⇒  6 − 2x₂ = λ₁ + 3λ₂    (2)
    ∂f/∂x₃ = λ₁ ∂g₁/∂x₃ + λ₂ ∂g₂/∂x₃  ⇒  −2x₃ = 0              (3)
    λ₁(x₁ + x₂ − 2) = 0                                         (4)
    λ₂(2x₁ + 3x₂ − 12) = 0                                      (5)
    x₁ + x₂ − 2 ≤ 0                                             (6)
    2x₁ + 3x₂ − 12 ≤ 0                                          (7)
    λ₁ ≥ 0, λ₂ ≥ 0                                              (8)

From (4) and (5), we can have the cases:

    Case 1: λ₁ = 0, λ₂ = 0;    Case 2: λ₁ = 0 and λ₂ ≠ 0;
    Case 3: λ₁ ≠ 0, λ₂ ≠ 0;    Case 4: λ₁ ≠ 0 and λ₂ = 0.

Checking the cases 1, 2, 3 shows that each of them violates one of the conditions (6) - (8), so these cases will be discarded.

Now consider the case 4: λ₁ ≠ 0 and λ₂ = 0. From (4): x₁ + x₂ − 2 = 0    (9)
From (1) and (2): x₁ − x₂ + 1 = 0    (10). Then from (9) and (10): x₁ = 1/2
and by (9): x₂ = 3/2; by (3): x₃ = 0; by (1): λ₁ = 3.
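Case 4 is again a linear solve; the sketch below (objective 4x₁ + 6x₂ − x₁² − x₂² − x₃² as assumed in this problem; names mine) also checks that the inactive constraint g₂ remains satisfied:

```python
import numpy as np

# Case 4 (λ1 != 0, λ2 = 0): stationarity plus the active constraint g1 = 0:
#   2*x1        + λ1 = 4      (condition (1) with λ2 = 0)
#          2*x2 + λ1 = 6      (condition (2) with λ2 = 0)
#   x1  +  x2         = 2     (condition (9))
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([4.0, 6.0, 2.0])
x1, x2, lam1 = np.linalg.solve(A, b)
x3 = 0.0                                   # from condition (3): -2*x3 = 0

g2_ok = (2 * x1 + 3 * x2 - 12) <= 0        # inactive constraint must stay feasible
f_max = 4 * x1 + 6 * x2 - x1**2 - x2**2 - x3**2
```

The solve gives (x₁, x₂, x₃) = (1/2, 3/2, 0) with λ₁ = 3 and f_max = 8.5, matching the case-4 analysis.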
Thus (x₁, x₂, x₃) = (1/2, 3/2, 0) and (λ₁, λ₂) = (3, 0) satisfy all the K - T conditions, so this is the optimal point, with Max f = 8.5.

Problem 7: Maximize f(X) = x₁² + x₂
subject to g₁ = x₁ − 1 ≤ 0
           g₂ = x₁² + x₂² − 26 ≤ 0

Solution: The Kuhn - Tucker conditions are:
    ∂f/∂x₁ = λ₁ ∂g₁/∂x₁ + λ₂ ∂g₂/∂x₁  ⇒  2x₁ = λ₁ + 2λ₂x₁    (1)
    ∂f/∂x₂ = λ₁ ∂g₁/∂x₂ + λ₂ ∂g₂/∂x₂  ⇒  1 = 2λ₂x₂           (2)
    λ₁(x₁ − 1) = 0                                            (3)
    λ₂(x₁² + x₂² − 26) = 0                                    (4)
    x₁ − 1 ≤ 0,  x₁² + x₂² − 26 ≤ 0                           (5)
    λ₁ ≥ 0, λ₂ ≥ 0                                            (6)

By trial and error, consider the possibility that both constraints are active: x₁ = 1 and x₁² + x₂² = 26, so that x₂ = 5. Then by (2): 2x₂λ₂ = 1 ⇒ λ₂ = 0.1, and by (1): 2 = λ₁ + 2λ₂ ⇒ λ₁ = 1.8.

∴ λ₁ and λ₂ ≥ 0, and clearly the point (1, 5) satisfies all the K - T conditions.

∴ (1, 5) is an optimal point. Hence, the optimum solution for the problem is x₁* = 1, x₂* = 5 and Max f = 6.

The solution of Equation (1) to Equation (12) can be found in several ways, e.g. by trial and error, checking all the possibilities. The values of λ₁, λ₂ and λ₃ corresponding to this solution can be obtained as λ₁ = 20, λ₂ = 20, λ₃ = 100. Since all the λ's ≥ 0, this solution can be identified as the optimum solution. Thus x₁* = 50, x₂* = 50, x₃* = 50 and f_min = 10500.
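A brute-force check of Problem 7 (assuming the objective f = x₁² + x₂ as reconstructed, and the non-negativity X ≥ 0 that the earlier note says is always imposed): since f increases in x₂, the best x₂ for a fixed x₁ lies on the circle x₁² + x₂² = 26, so it suffices to scan x₁ over [0, 1].

```python
import math

# Feasible region: 0 <= x1 <= 1, x2 >= 0, x1^2 + x2^2 <= 26.
# For fixed x1 the objective x1^2 + x2 is increasing in x2,
# so the best x2 is sqrt(26 - x1^2).
best_f, best_x = -math.inf, None
steps = 10000
for i in range(steps + 1):
    x1 = i / steps                       # scan x1 over [0, 1]
    x2 = math.sqrt(26 - x1 * x1)
    f = x1 * x1 + x2
    if f > best_f:
        best_f, best_x = f, (x1, x2)
```

The scan confirms the Kuhn - Tucker analysis: the maximum is attained at (1, 5) with f = 6.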