0% found this document useful (0 votes)
26 views23 pages

Optimization Techniques, Chapter - 1

Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
26 views23 pages

Optimization Techniques, Chapter - 1

Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 23

-0 1.

5
Introduction to Optimization
1.4 0 Optimization Techniques

· t f(X) is termed the obJective


,. Design of civil engineering structures such as frames, foundations, bndges, towers. where Xis an n-dimens1onal vector called the d es1gn vec or, . . 1
chimneys and dams for minimum cost function, and g,(X) and l,(X) are known as inequality and equality constraints, respective Y
6 Minimum weight design of structures for earth quake, wmd and other types of random The problem stated in Eq( 1) is called constrained optimization problem.
loading. Some optimizat10n problems do not involve any constraints and can be stated as
7. Design of water resources systems for maximum benefit .
8.
9.
Optimal plastic design of structures
Optimum design of control systems
10 . Inventory control
11 . Optimum design of linkages, cams, gears, machine tools, and other mechanical
f;nd X - l'. \ wh;,h m;,;m;zes f(X)
(2)

components
Such problems are called unconstrained optimization problems .
12. Selection of machining conditions in metal cutting processes for minimum production
Remark: How one can solve a maximization problem as a mm1mizat1on problem?
cost
13. Design of pumps, turbines and heat transfer equipment for maximum efficiency
f(x)
14. Shortest route taken by a sales person visiting various cities during one tour f(x)

15. Allocation of resources or services among several activities to maximum the benefit.
16. Planning the best strategy to obtain maximum profit in the presence of a competitor
17 . Optimal production planning, controlling and scheduling ·
\::•mof f(<l
18 . Optimum design of chemical processing equipment and plants I X'

19 . Controlling the waiting and idle times and queueing in production lines to reduce the
costs.
x•, Maximum of - f(x)
20. Selection of a site for an industry
21. Analysis of statistical data and building empirical models from experimental resuits
to obtain the most accurate representation of the physical phenomenon
'"""-f(x)
22. Design of optimum pipeline networks for process industries
Fig: 1 Minimum of f(x) = maximum of - f(x)
23. Planning of maintenance and replacement of equipment to reduce operating costs.
24 . Optimum design of electrical networks. From the above figure, if a point x.* corresponds to the minimum value of function
25. Solvmg for optimality in several mathematical-economics and military problems. f(x), the same point also corresponds to the ma'-.mrnm value of the negative of the function,
Anyway, we are going to discuss outlines of the theory _and applica~io~ of _m athemat1cal - f(x) .
programming techniques suitable for the solution of engmeenng opt1m1zat10n problems. Thus, without loss of generality, indirectly n maximization problem can be solved as a
Then it is the nght time to con,s1der: minimization problem. then optimization cnn be taken to mean mm1m1zation, smce the
maximum of a function can be found by seeking the minnnum of the negative of the same
1.4 Statement of an optimization problem . functio,o .
An optimization or a mathematical programming problem can be stated as follows. Remark: The number of vnrinbks n and the number of constraints m and/or p need not
Find X = (x,, x 2, ••• x 0 ~T which minimizes f(X) } be related in nny "n)

subject to the constraints (l) Design Vector


/1.ny cng1nccnng :;ystcrn or activity ,s defined by a set of guantities some of which are
g.(X)sO, j= 1,2, ... m viewed os Yllrtnbks dunng the design process. In general, certain quantities are usually
I (X) == 0 j == l, 2• ... p
J
1. 6 0 Opt,m,zat,on rochn1quos
/n/roduct,on to Opt1mizat1on 0 1.7

g (X) "' 0 arc fc11s1blc or acceptable. ·r he collect100 of all the constramt surfaces g1(X) -
1
0, J • I, 2, .. , m, wl11ch separate~ the acceptable region fl called the composite constraint
11
s111 f:lcc,
I} Following Fig 2 shows a 2D design space where 1he infeasible region 1s md1ca1ed
hatched lines conslrntnl surfaces ma hypothet,cal two rltmens100:il design space

Sldo constrain I g1 " O- -


In

1f 3 n_n-dimens1om1l C:mc!>1an ~pace\\ 1th each coordinate a,1s represent mg a design Unfeasible region!

,·an_ablc x ~1 -= I. 2... n) 1s rons1dC'rl"d. the.- space 1s calkd the dl'sig n vari abl e space or
51 ":PIY d~~ !>p:t_cc. E3ch point m the n-d1mcns1onal design space 1s called a design
~ a n d represents e1ther a possible or an impossible solution to the design prcibi"em." Behaviour - - +
constraint g, = 0
The choice of the 1mport:mt design ,-:uiables in an op1tm1zallon problem largely depends
on the user 3:,d
his expenence Ho,,ever, It 1s important to understand that the efficiency
and speed 01 opttm1zat10n technique depend to a large extent, on the number of chosen • Free unacceptable point
r--S..~ ccrS.:3!1\t o, = 0
design vanables. Thus by selecu,·ely choosing the design variables. the efficiency of the
opt1m1zauon technique can be increased. The first thumb rule of the formulation of an
opt1m1zat1on problem 1s to choose as few design variables as possible.
Des ign Co nstrain ts
The selected design , anaoles have 10 satisfy certain specified func ti onal and other Fig: 2 Constraint surfaces In a two d -e"s :)~ ces ;~ sca::e
requirements. knov.n as constraints (restrictions). These constraints that must be satisfied
to produce an acceptable aes1gn are coilccuvely called des ign constraints . A design point thnt hes on one or me.re t.bn o::e co::<~-: s=~ .s cu':ed a bound
Constramts that rq,:-esent hmuauons on the be.hav1or or performance of the system point, nod the nssocinted constr:unt t5 t"itlled an t.ethcc ~D.Stv.ibt
are termed beba\ior or functional constraints. Constraints that represent phys ical Design points that do not he on an) c-onstr.w·t s-.::fat"C J...-e ttO';I; ... as frtt points
l1m1tat1ons on des1gn\=ai-~abricabil1ty and transportabi lity arc Depending on whether n p!lrt1culnr des1gn pomt t-el~s to t:.e :i~p~•-e or 1$li:Cept:iblc
known as geometric or side c.onstraints. 1
region, it can bl' ident,fitd or d11s~1fied ns O!'~ oflt.e fo ' ~ - - ~ f~ ... t)'PCS
Mathemaucally. there zre usU21ly two types of constraints. Ec1u a lity or In equalit y
1. free and tlC'l'ept:lbk l'l'tnt
co n straints. l!}cqu.alrty constraints state that the relat1onsh1ps among dcs1j.!n vannblcs
11. Free nml unnl·ceptnbk pmnt
are either greater th.an. smaller than or equal to a r~source value.
Equal Hy constraints stat.c that the relat1onsh1ps should exactly match a resource vnluc 111. Hound nml :i..-,eptnbk pl,tnt
Equality constraints are usu.ally more difficult to handk and therefore, need lo be avoided l\' Ht,untl nml UIIOl'l't'l'lllhk pomt
wherever possible Thus. the tecond thumb ruk in the formul:-Jl1011 ofopl111111n1rnn p1ohk111 i\11 thl" 11b1Wl' f,,111 tYp(':- !Ire she""' m tht" :it-o,~ ft$~
is that the number of complex ~ual11y con£tra1ntfl should he kept a11 low aH pn1iH1hlt•
Objective Funct ion
Constraint surface ( I hr 1h111l 1,1,k 111 thr h,rn111lnlh'l\ p1,,~lun- \) t,, \ J ,1 i1._n-.:t1v'l tn tcrm:s of the Je.,ign ~r
1
Consider an opt1mw,11on problem w11h only incqu1.1l11y c,m11trr1111111 g1(X),. o. I he Gel 111 'tt1·1·1s11111 \',It 111hks 0111\ ,,thl'f p1,,t,km ~tam~~\",.." 1th ~,~t t,, \\ 1-•.:h the dcSJgn ~ vp.um,z.e J
0
values of X that satisfy the cqua11on gp<) 0 form'i II hyper 1urfncc 111 the ,lt:1,11•11 ~pa1:c 1'1118 l\111,111111 is \..1111\\ 11 ti, tht' 11hJ.-dh t fuu~t\ou) tht- chl,,1..."C ot obJC.-Ct1,e iundton •~.s ' cmc
"' • 1 one ol the most
and js called a constraint surface. 'lh1£ co11.&traw1 6Urlace tl1v1<kti lhc dl!tllHII RpllCl' 111111 hv• lhl" 111111,it' 11f thi' p111\,k111 l h\\" th~ sdi.'\'t1,,1\\'I the ohJ~IIH' ,undion ' b.
• 11on, 1hcrc nuy c
two regions: one 1s g (X) < 0 and the ,,tht!r Hi g1(XJ,,, 0, I hu,-. lht p1111111i ly111i: 1111 1lw hyp~1 1111p1111.1111 tk1·1s11111-. 111 th~,, h\1\c.- optlll\\11\\ ,ksl$n pn.xe~ In soin(' )llUl \ ' oblcm
1 I ni \ ptinllZ!IIIOD pr
su r fac e will satisfy the constraint g1{X) cn11cally, where /ill 1l1e p11111111 ly111v 111 1hr ri·f\11111 1111111· th,111 t1111• tih1Nll\l' 1\111,11~1n t1, ~-- ~,\lshl·ll ,1mult:inNU~) • g problem

> o arc infeasible or unacceptahlc, Mid lh<: JH1tnlll ly111g 111 tlw 1t'ft11tll wl11·11• Ill\ 11h 1111111111\1,p\(" 11\111•1'\\\ \' l\l\\\'11\111~ 1, ktH\\\ll lls a muhiotijN'thr progr.1mmtn
whe re gi (X)
/ntroduclton to Optimization 0
1.8 0 Opt1m1zet1on Techn1quos

For ex3mpk Now it is the time to consider:


' · · 1· 11 "r~ 0,.1,•'em rormulation- The purpose of the formula tic
where f,lX) :md fJX) drnote two obJcct1Ye functions :md f(>-.) 1s 3 ne\\ obJcct1vc funcuon (Optimal problem) opt1mn.a /0 £_!..JUIJ J' ~ .
· , • ~thematical rnodel of the optimal problem. which then cat1 t
for opt1m12at1on procedure 1s to create a ma .
solved using an optimization technique. . .
\\'here a, and a: are constants ,,hose ,alues md1cate the relative 11nport:mce of one
The following figure shows an outline of the steps usually mvolved m an opt1m
obi cell\ e funcuon re bit Ye to the other.
problem formulation process.
ln general. the obJe-:11,·e function can be of two types, Either the obJect1ve function 1s
to be max1m1zed or to be mmim1;:ed. But. the optimization methods are usually either for
Need for
mm1mrzanon problem or for mu,1m1zat1on problem. Fortunately, the duality principle* optimization
helps us b, a\loumg the same method 10 be used for mm1m1zation or maximization with
mmor charge m the obJect1ve funct10n instead of a change m the entire design.
* If the method. design is de,·eloped for solnng a mm1m1zat1on problem, it can also be
used to solve a mum11zat1on problem by simply multiplying the objective function by -

J
Choose design
(mmus) and nee, ersa.

Objective Function Surface


The 'ocus of 2.ll pomts satisfymg f(X) = c = constant forms a hyper surface in the design
space, and for each \'alue of c there corresponds a different member of a family of surfaces.
The~ St!TUces c21led objective function surfaces can be shown in a two dimensional
Fo~~::.~b:~,~;,o
(and I or variable
design space m follou-ing F1g 3. bounds)

Fonnulate objective
function

Choose an ophmt:ation __j I I


technique =._j
OptJmum po:r.: -

Obt3\n sohltton

Fig: 4
fig; 3
The first step is to rc,\hl\' tht' nee1.\ for using optimrzunon ma s\)ecif1c design ?t0b\er
Once the ob1ect1ve func11on surfaces are drawn along w11h the constraint surfuccs, the \'hen the dcs1~m.'r nt•c\\s 11., d\oos(' the im1,orhmt design \'anab\es associated with t\
op11mum point can be determined without much difficulty, But the main problem i~ th111, prohkm.
whenever the number of design van ables excecd!i two o, three, the constra1111 unu ohJcctivc
lhi;- t\,1m\1\11ti1.H1 or 0pt1mit;\\1on probkm invo\vcs other cons1derat10ns such a
function surfaces become complex even for v1sualmt1on and the prnhlcrn lrnK to he 1mlvcd
( 'onstrnints, t1h_11.·ct\\ c \\m1.·ti1.,11
purely as a mathematical problem.
A ty11ic11\ 1.,pt1n1imtion probkm's muthemat\cn\ mode\ is as fo\\ows:
1.10 0 Optimization Techniques 0 1.11
lntroducl1on to Optimization

1\1:iximize or minimize (objective function) Geometric programming problem (GMP): A GMP is one in which the obJective function
subJcct to (Constraints) and constraints are expressed as posynomtals m X. A function h(X) 1s called posynom1al
OR if h can be expressed as the sum of power terms each of the form
Optimize (obJective function) Ci X/11 X/•2 ... x:'"
subject to (constraints) / where c.I and a IJ are constants with c1 > 0 and x_J > 0.

1.5 Classification of optimization problems V Quadratic programming problem: A quadratic programmmg problem 1s a nonlmear
programming problem with a quadratic objective function and \mear constramts.
Linear programming problem: If the objective function and all the constraints m Eq( I)
~ptimizano_n problems can be classified in many ways, as explained below:
1• Classificatio·n based on the Existence of Constraints: are linear functions of the design variables, the mathematical programmmg problem 1s
As discussed earlier ,_ an Y op f1m1zation
· · problem can be classified as constrained or called a linear programming (LP) problem
~nconstramed dependmg on whether or not constraints exist in the problem A LP problem is often stated in the following standard form:
11 • Classification based on the Nature of the design variables:
Based on the nature of design variables, optimization problems can be classified into two
categones. -
Pa~ameter (or) std)ic optimization problems : The problem is to find values to a set of
F;nd X ~ \1 \ wluch minimizes f()O ~ t. , x,

des~gn paramet~rs that make some prescribed function of these parameters minimum
sub1ect to certarn constraints. \
subject to constraints \
Trajectory (or) _dynamic optimtzation problems : The problem is to find a set of design n
para~eters. which are all contmuous functions of some other parameter that minimizes Laux;= bf j = 1. 2 ..... rn
an obJective functlon subject to a set of constraints. ,~1

iii. Classification Based on the physical structure of the problem : x, ~ 0 i = 1. 2 ....• n


Depending on the physical structure of the problem, optimization problems can be where c1, a,i and bi are constants.
classified as optimal control and non optimal control problems. An optimal control (OC) v. Classificat~ased on.,the permissibl~ vnlues of the d~sig.n ,arillbles:
problem 1s a mathematical programming problem involving a number of stages, where According to the values pemutted for the design va.n:ibks. opt1minuon problems c:in be
each stage evolves from the preceeding stage in a prescribed manner. It is usually described classified as integer - nnd real valued progrumming probk"'s.
by two types ofvanables: the control (design) and the state variables. The control variables If some or nil of the design vnrinbles , • ,, • :\• of an opt1mi2:mon problem are
define the system and govern the evolution of the system from one stage to the next, and restricted to take on only mtegcr {discrete) , nlues, the problem is called an integer
the status of the system in any stage. The problem is to find a set of control or design
progromm111g problem.
vanables such that the total objective function over all the stages is minimized subJect to
On the other hnnd, 1f all the design v:.mabks an.· rermmed to t:i.ke any real ,·alue. the
a set of constraints on the control and state variables.
opt1m1zntion problem 1s called n ren\-\':.thied pro~rommmg problem
iv. Classification based on the nature of the Equations involved:
vi. Clnssifirntion bnscd 011 the tkterministh: " :. uure of the, nriubles :
Other important classification of optimization problem s is based on the nature or
Boscd on tllc cktennm1s\lc m\1111-e of the , .:m:.lbks mvoheJ, opt1m1zat1on problems can
expressions for the objective function and the constraints. According to thi s classificat1011,
be cl1,ss1 fiell ns 11 lkten\lln\stl,' .n,J stodust11.' \WOgr.,mmmg problems
optimization problems can be classified as linear, nonlinear, geometric and quodrnt1 e 0
A Stoc\rnsttt' pro~rnmm111g. probkm 1s :111 opt1m1zation problem in which so1ne . : a_ll
programming problems.
Nonlinear programming problem : If any of the funct1onA among the objective nnd
or the pllrnnickrs (des1~11 ,·1mubks and or or pr~·nss1gned p:irametas) are probabih 5ttc
constraint functions in Eq(l) is nonlinear, the problem is cnllcd a 11011l incnr progrnmm111g (11011 dt:tcrn1tn1$t\1' 1H s11whnst11') I ore deterministic
(NLP) problem. This is th~ most general programmmg problem and all the other problem s i\c11on\111p. \\) this 1kl1111\1011, tlh' probkm types cons11.kred pre\'tous Y'
can be considered as special cases of the NLP prohlem , ptol\rtll\\l\\in~ probkms
1.16 0 Optimiza tion Te chniqu e s
CHA PTE R 2
7 - Define the terms
a Design vector
b. Design constrain t
c . Behavwr constrain t
d. Side constrain t CLASSICAL
e . ObJecti\' e function
f. ObJecttv e function surface
g . Posynom ial
OPTIMIZATION
8. Define the folio" mg terms
Constram t surface
b. Bound pomt
c. Free point
d Active constra int
e.Feasible (or) acceptab le regwn
f. Infeasibl e (or) unaccept able region
· f . . . 2.1 Introdu ction
9. How do you solve a max im1za wn problem as mtnimiza tion problem?
10 . ✓ Classical optimiza tion techniqu es use different ial
calculus to deterrnin ~ points ofm_ax'.m
List the methods of operatio ns research . techniqu es have hm1te
and minima of continuo us and different iable functions . These
ons, since practical problems involve obJectiv e function s th:
scope in practical applicati
ng theory provides ti-
are not continuo us and/or different iable. However the underlyi
techmqu es . In this sect10n, "
basis for developi ng most of the nonlinea r program ming
ing the optimum solution of
present the necessar y and sufficien t condition s in determin
constram ts. and a multivar iab
single-va riable function, a multivar iable function with no
function with equality and inequalit y constrain ts.

2.2 Single variabl e optimiz ation


,~ The optimiza tion techniqu es of single variable function
s are simple and easier
single variable function s mvolYe only one var iable . Moreove r, the
understa nd, since
used as a subtask of many multn,an able optim1zat1on techniqu1
techniqu es are repeated ly
us in understa nding/le ami
H~nce, a thorough understa nding of these methods will help
complex techniqu es discusse d in forth coming chapter.

The problem
the value oh.= x.* ,s to be fou
A single-va riable optimiza tion problem 1s one in wh1ch
such that'-* rn1mm11 es f(\.), wher:: f(x) 1s the obJectiv e funct10n a
tn the interval [a, b)
x 1s a real variable .
x, for which the funct1
The purpose of an opt11111znt1on techmqu e is to find a solution
d here are for minimiz at
f(x) is mm1mum. Even though the opt1miza hon methods descnbe
by adopting the equival
problems . they can nlso be used to solve n mux.inuz ation problem
dunl problem ( f(,)), and consider ed it to be mm1m1z ed.
2.2 0 Classical Optimization 0
Optimization Techniques

g;,
:Ve first present the basic termin0lo' .
optimality (relatiYe mumnum ofa f and the neces~ary and sufficient cond1t1ons for
f(x)

~ . unc ion of a s111glc variable)


8 e ore we go to cond1t1ons for a oint b
p
- .
to e an optimal point, we define different types
_s
of optimal pomts. ' x, \ II

/ Relative or Local minimum x,


A function of one Yanable f(x) is said .
f(x*) $ f().. * + h) for all suffi . to have a relative or local minimum at x = x* if
ictently small positive and negative values of h.
(OR) x,
A funct10n f(x) has a loca1 or relative mt * . Fig: 5
centered at x" (neighbourhood of x* ntmum at: if there exists a (small) interval
(neighbourhood) at-which th f ) such that f(x ) $ f(x) for all x in this interval The above function 1s defined only one [a, b}. It has relative or local mmima at a .
e unction 1s defined.
x , x ; relative or local maxima at x" x. 3 , x 5 , and b; a global maximum at x 3 smce f(x
at x* (neighbou h d
Formally. a pomt x* 1s a local m · ·
r oo o x
f ·*) h
I ·
imm~ pomt of f(x) tf no pomt in the interval centered
as a funct10n value smaller than f(x*).
max 6{f(x ), f(x ), f(x ), f(b)}; and global mimmum at x, since f(xJ = min{f(a), f(x 2), f(
4

1 3 5
f(x 6) } .
✓ Relative or Local maximum
f(x) x = Relative minimum is also global rnirumum

J \
~(xfunction
A · or local maximum at x
· sat· d to h ave a relative
*) > ~( ,.of_ one variable f( x ) ts = x * if
- x -:- h) for all sufficiently small values of h close to zero.
(OR)
A funct10 n f(x) has a local or relative maximum at x* if there exists a (small) interval ''
centered at x* (neighbourhood of x*) such that f(x*) ;c: f(x) for all x in this interval !' X
'
:

(neighbourhood ofx*) at which the function is defined.' · ' ''


'
'' ''
F*orm~lly, a point x* is a local maximum point of f(x.} if no point in the interval centered
at x (ne1ghbourhood of (x*)) has a function value greater than f(x *). ,
Fig: 6
./' Global or absolute minimum The following two theorems provide the necessary and sufficient cond1t1ons for
A point x* 1s said to be a globa) or absolute minimum, If f(x*) $ f(x) for all x, and not just
for all x close to x* {1e: mterval centered at x*, (or) neighbourhood ofx*), in the domain re\ati"e minimum of a functlon of a smg\e ,~nab\e
over which f(x) 1s defined.
Theorem (Necessary condition) : If a function ft,,
~s. defined in the interval a 5, :x
df(x) ,
Formally, a point x* 1s said to be a global or absolute minimum pomt ifno point in the and has n relative nnnimum at x = , * "here a<...," <... b. s.nd if the denvatwe ~ =\ r-...""
entire domain over which f(x) 1s defined has a function value smaller than f(x*).
exists as n ftmtc number nt , = , * then f'(:\ •) = 0.
/ Global or absolute maximum Proof: Given that f'(',) C\.\sts and \Sa fimte m1mber at:\= x•
A point x* is said to be a global or abso]ute maximum, If f(x*) ~ f(x) for all x and not Just
~( . -.) - l.t (l,• + h) - f(,•)
for all x close to x* (i.e; interval centered at x*, (or) neighbourhood ofx*) m the domain then t ' - h-•O h
over which f(x) is defined.
Now we hnvc to sho,, that f'~'I.•)"" 0
Formally, a point x* is said to be a global or absolute maximum point 1f no pomt in the Smee• f' hns 11 ,d,1t1ve m11umum at,•, then from the definition of relative minm
entire domain over which f(x) is defined, has a function value greater than f(x"'). we hnvc f(x •) ~ fl,• f- h) for n\\ \'nh1c$ uf h sufficiently close to zero.
The following figure shows ~he difference between the local and global optimum points.
f(,: + h)-~'2 ;- 0
1\cncc If h-.. 0 thl'1' h
2.4 0 Opllmizalion Techniques 0
Classical Optimization

and If h < 0 then f(,. + h) • f(, ')


Proof: Applying 'I aylor's theorem with remainder after n terms a bou ·
tx==x* we hav
'
---h--~0
h2 hn-1 11
From(!) ash - ► 0 + gives f'(x"') ~ o (2) f(x* th)= f(x•) + h r(xt) + 2! f"(x•) + ... + (n - I)! f!•- (x*)
as h - ► 0 - gives f'(x*) ~ o A (1
(J) + f•l (x* + 0h) where 0<0<I
From (2) and (3): f(x*) = O
S111ce r(x*) = P'(x*) - ... ..;: f• ''(x*) =0
Hence the theorem.
Equation (1) becomes
Notes hn
1. This theorem can also be proved even if x* is a relative maximum
.f(x* + h) - f(x*) = ;; f,•J (x ♦' + 8h) (2

2· T~e thheorem does not say what happens if a minimum or maximum occurs at a point As f(•l(x*) ct- 0, there exists an interval around x• for every point x of which the nt
x w ere the denvative fails to exist. derivative f1•i(x) has the same sign, namely. that of f• 1(x*). ·
J. Th_e theorem does not say what happens 1f a minimum or maximum occurs at an end Thus for every point x* + h of th•is int:rval f'• 1(x 4 + 8h) has the sign o~ ~•>(x*),-
pomt of the interval of definition of the function
lf n is even then !1_:_ is positive irrespective of sign of 'h'. Hence f(;* + h) - f(x*) an
4. The theorem does not say that the function necessarily will have a minimum or , n! .
maximum at every point where the derivative is zero. f(•l(x*) will have tlie same signJ
For example: Thus if f(•l(x*) ~ 0 then f(x* + h)- f(~*);?: 0 ==> f(x* + h);?: f{x') [from (2)]
1
f(x)
Therefore x* is a relative minim~m of 'f ·if fC• (x*)_;?: O_
(or) if f(•l (x*) s O then f (x* + h) - f(x*) So·
⇒ f(x* + h) s f(x*) [from (2)]
Therefore x* is a relallve maximum of 'f 1f fC•> (x*) S 0.

h" · h - f 'h'
If n is odd, then -;;"T changes sign ,v1th the change m t e S1gn o •
Hence x* is neither a maximum nor a minimum Hence the theorem.
Note: The point x* 1s neither a maximum nor a mimmum. called a po~nt of intlectioi
~ \A..) :::: Go:;t tt - \ cw :..3 +-('2.0~ '- -+'°O
Fig; 7 ~ OLVED PROB-LEMS 1
Pro ble ~ Determine the maximum and minimum values of the function f(x) = 1,2<
~or the function in the above figure the derivative f(x) =Oat x = 0. However, this poi nt
V4Sx' + ~~+ 5
1s neither a minimum nor a maximum.
In general, a point ·x• at which f (x*) = 0 1s called a s tationary point .
,f S9'/fpo11 : Smee
1
P(\.) = 60 (x~ - 3'\ + 2:,:.!)
~/ ~ = 60 \.:('- - l)(\.- 2)
If a function f(x) possesses continuous derivatives of every order in the neig hborhood
of x = x* the following theorem provides the sufficient condition for the minimum or
£ ./ f(\.) - 0 :it\. 0, \. = l. snd \. - 2
maximum value of the function . rf~ / \Y Also f"(x) - 60l•h' Qy-. -h:)
}Theorem (S u fficient condition): Let ((x*) = f"(x*) = ... =fl• ''(x*) = O but fC•l(x*) 1; O.
v; }- At x \, P'('\) - 60 and henr,· \. l 1s a rtl:lt1w m:l\imt:m. since f'(x) < O, e
then f(x*) 1s ,~ dcnvat1ve .
f.,., · f( \. l) = I2 •
I.
a minimum value of f(x) 1f fl"' (x*) > 0 and n is even ~
240 anJ ht'Ol'C' \. 2 ,s a rel:1t1\'e nunimum, smce f'(x) > O, e
1
1. a maximum value of f(x) if fC•J (x*) < 0 and n 1s even
iii. neither a maximum nor a mmimum 1f n 1s odd
2) == 11
Classical OptimizatJ0/J 0 2. 7
2.6 0 Optimization Techniques

At x - 0, f"(x) = 0 and hence we must go to the next derivative Al x = -1 f"(x) = -54 1' 0
x = -1 1s neither minimum nor maximum
f'" (x) 60(12x 2 -18x + 4) = 240 at x = O
. Smee f'"(x) 1' 0 at x = 0 x ,;, 0 - . At x = 2, f'(x) = 0, f"'(x) = 0 then
111flect1on point. ' is neither a maximum nor a minimum and it is arr 2

2
fl•(x) = 6(x - 2) 2 + 12(x + 1) (x - 2) + 6(x - 2) + 6(x + ~)
Problem 2: Find the maxima and minima, if any, of the function f(x) = 4x3 - 18xz -t- 27x - 7 + l 2(x + I) (x 2) + 6(x - 2) + 6(x + ·1 ).
Solution: f(x) - 12x 2 - 36x + 27 = 3(4x 2 - 12x + 9) At x=2 ⇒ f1•(2) = 6(3) 2 + 6(3) = 6(27 + 3) = 180 > _l)

= 3(2x - 3)2 x =2 is minimizing point.


clearly f(x) =0atx=3/2 Problem 5: Maximize f(x) = x(Sn - x) on [0, 201
Now f"(x) = 3(8x - 12) Solutio11: The global maximum on [0, 70] occurs at an end point x =0 or x = 20 or at a

At x""
2
• f"(x) = f" G) = 3(8 x ; - 12) = 0, then we must go to the next denvative,
stationary point - that 1s where f(x) = 0.

Clearly f(x) = Sn - 2x, therefore x =


Srr
2
. . 01
is the only stationary pomt m [O, 2 .
f"(x) = 3(8) = 24 1' O.
Evaluatmg the objective function at ea~h of these points, we have
3
2 , f'"(x), = 24 > 0
At X =
Srr
3 X 0 -2 20
x=
2
1s neither a maximum nor a mimmum 1e; ~12. mflect1on pomt
f(x) 0 61.69 -85 .84
I
~' Prnblem 3: Determme extreme points of the (unct~on
f(x) = x 3 - 3x + 6
Solution: The necessary condition for extre1~e points 1s that the first derivative vanish :. - we can conclude that x = ~ is global maximization point and f(x) = 61.69.
r'(x) = 3x 2 -·~ = o· Sn
x ,=-±"\ or x ~l {;'l. A liter : Clearly f(x) is continuous, derivable every where f(x) = 0 at x = 2
Now, the second derivative is [(x) = 6Jt0
Srr
Al x = 1, f'(x) > 0 '-' 0 x = 1 is a minimizing point and f'(x) = -2, even, at x = 2
At x = -1, f'(x) <'0 :. x = -1 is a max1mizmg pomt
x = ~ is max1m1zation point of f(x) and then global max1m1zat10n pomt.
~ Problen@Find the minimum of the function 2
f(x~ = I 0x 6 - 48x 5 + 15x 4 + 200x 3 - 120x 2 - 480x -t- 100
I (not min, not ma~), x = 2 (mm)).
.,. 2.3 Multivariable optimization with no constraints
(Ans : x =
In this section, we consider the necessary and sufficient conditions for an n-vanabl<
Solutiou: Necessary condition : f(x) =0 ~
function f(X) with no constraints, to have optima. It 1s assumed that the first and seconc
⇒ x5 - 4x 4 + x 3 +10x 2
- 4x - 8 = 0
partial derivatives of f(X) are contmuous at every X = (x 1, x2 , ••• x.)1
By trial and error, synthetic d1v1s1on Before going to these cond1t1ons, we have to consider followmg prerequisites:
~ (x + I ) 2 (x 2) 3 = 0
Leading principal minors of a square matrix
X = -1, 2
3 Definition: I,,et A= (a,) ••• ,
Now, the second denvative 1s f''(x) = 2(x + l) (x - 2) + 3(x - 2)2 (x + 1)2
Then A 1 = \a 11 \ = a 11
when

then
x = -1, f'(x) = 0 ~
f"'(x.) = 2(x - 2) 3 + 2(x + 1) 3(x - 2) 2 + 2(x - 2) 3(x + 1)2 + 3(x - 2) 2(x + 1) A
2
= 1a11 a12
a21 a22
I
Classical Optimization

2.8 0 Optimization Techniquas

Examples

/ I. Consider the matrix A=[~ -1 - 1


=:] 5

Now Al= 3 > 0

A2 =I~~ I- 9
1=8>0 -

A.= IAI = 1:1: :~~


a1n
a20 A3 = IAI = I~
1-l -I
-l
-I
s,
1
=46>0

an, 302 3nn All A> 0 ¥1


are called lead mg pnncipal minors of the matnx A. The given matrix' A' 1s positive defimte .
In general. k'" pnnc1pal mmor of the matrix A 1s
/ 2. Consider the matnx A= [= ~ =~ =l]
r\ = I:~: : ~ .. :~~ I Now A, = -1
-I -2 _ _,

au ak2 akk A.=



\-1 -1 \ =2- l = l
1-I -2
;,..·acure of a square matrix A. = IAI = -2 + l - O = -l
A .square rn:imx A 1s positive definite <=> All the leading pnncipal minors of A are Clearly, each A, (starting f/om A,) 1s hanng the sign {-1)'. 1 = l. 2. 3
all posltl\'e
:. The given matnx. ·A· is negatwe ddimte
1 A .square matnx A 1s negative definite <=> All the leading principal minors of A;
should sausfy lhe follov,·mg property : beginnmg from that of first order (A 1). are
altemalely negauve and positive, In other words, the sign of A, is (-1 )' for 1 = I ton ../ 3. Consider the matrix A=(~ ;l 01
A sqU2re matnx A 1s positive semi definite<=> All the leading principal mmors of A A1 = 4
Now
are:?. 0
In other 1,1,,ords. some A, are pos1t1ve and the remaining A, are zero (or) At least one
A should be zero .
t\=\~ ! \ ~16-4=12

A. 3 =\AI 0
4. A square matnx A 1s negative semi definite<=> All the leading principal minors of A
are alternate!} <;; 0 and~ 0 Hence .\ <'.:0
·. The g.1'en mamx 'A' 1s positi,e senu dd,mtc.
In other words. 1'" leadmg prmc1pal minor of A 1c; A1 is either zero or has the sign of

5.
1-1).1-1.2 ... n
A square matn:r. A 1s indefinite<=> 1f it is none of the above four types
4. Consider the matnx r\ -
( / '.1
No" r\ . \ 'l)
Alternative method tr, find the nature of a square matrix
A square matrix A ,s said to be
I pos1t1ve defin11c 1f all the e1gen value,; of A > O
A, r-i' 11~0
-\
.\ ts t'tlh\'r it·ro l"•r has the sign of(-\)', 1 =- l, 2
2. negative definite rf all the c1gcn valuei; of A< O
J. pos1t1ve semr definite 1f all tht" eigen v;ilue11 of Ai! 0 and al leas I one c1gcn value .., o \he ~1wn m;Hrt\. -\ 1:- ne~utl\'C sc-m1 definite.
4. negarive semi definite 1f all the c1gcn value,; of' A~ 0 :md at lcr1Hl one c1gc11 v11luc o
5. 1ndefini te If some of the e1gcn values of/\ arc I vc ,rnd olhcr11 vc.
2 .1 0 0 Optimization Techniques
(·!) 'f --~"?YI' <t;·I J '?.,
~ \ iJ, Class/cal Optimization 0 2.11
Now A,= 4 > o l c--f\) / < -P\ <~ ;)' ()_
A = 14 -31 1J 1/ I Example <)'l,...
i -3 0 = -9 < 0 ~ \,; t~~ ' If rz_.
~
A Al = IA/ = -82
i_s not positive definite, s111ce
/J \}
(/ ,,
/'?!'I
i··• \ ' ;J'
,11..
/
/ 0(/
{IL.,
~
A JSnotnegatiYedefirrite . some A, are negative tl /
- ,sma:roneA h . U l,,
A is not positive semi d f . 'not av111g the sign (-1 )'. i = 1 2 3 \)
. e mite, some A . are ne . ' , . Then Hessian matrix (J)
A is not negative semi definite A . ' gattve and are all non zero.
. ' ; ate all non zero i e· th
. . The matrix A is indefinite. . ' ey are not alternately~ 0 and~ 0

- ian. matrix [J]·. The I1ess1an


dHess . matrix of f(X) - h
envat1ves of f(X), where X = (x x T. ts t e matrix of second order partial
,, 2' xl! ... , xJ ,

a2r a 2r a2r
ax 2
I
'ax1 ax2 ax1 axn
~
a2 r a2f a2f Now
ax2ax1 ax~ ax2axn
Hessian matrix [J]

t -~ \= -684 o
J2 = \
1
<

a2f a 2f a 2r 1
axnax1 axnax2 ax~ J =\JI= \ i
3
0
-~os\ o
-~4
-108 -TI,
<

In particular ·. J is not positive definite, not negative definite or even semi defimte at (l, 2. 3)r
when X = (x., x)T C learly J is indefinite.
·/ Necessary and sufficient condition for optimizati on of multivariable objecth
function without constraints
Working Rule: Given. Find the minimum or maximum of the funcuon f(X). wh<
Hessian matrix (J)
X = (x,, x2, ••• x.f
I Step 1:
Necessary co11ditio11: A necessary condmon for s. contrnuous functton f(X) w
continuous first and second partial derivatnes to h:.ive an e,treme pomt at X'" 1s that e:
l'irst partial dcnvat1vc of f(X.), naluated :1t X*. ,,, v:.imsh. that 1s

a2 f
Hessian matrix (J)
~ Step 2: th
Getting swtio11a1·y points by solving (I): lt 1s also important to note that e ab
a2r ncccssnry cond1t1on~ (1) an.' also satisfied for the cases other than Tr
ox3ax2 tly extreme
the givenpomt.
cond111
111cl11dc, for c,ninp\e, 1nflecllon nnd saddle pomts Consequen ,
0 2.1 :
2.12 Classical Opttmizatton
0 Opt1m1zat1011 Tach111ques

where these equations arc satisfied are'.


:ire necess:iry but not sufficient for dctermmrng the C\trunc pomts. 1 hus 11 1s more 1he extreme pomts or statt0nary points,
reason:ible to c:ill the points oht:11ncd from (1) (or) [tn stqi t soh111g (l)J as stallonory g 4 4 8)
points. (0, 0) (0, - - ), (- 3 • 0) and (- 3 • - 3 . •
3 veto use the sufficiency cond 1uo n .
Sol,e Eq(l) to get :;ta11onary pomts. T find the nature of chcse stationary po~nts, we ha
Step 3: then ~he hessian matrix [J] of f(x,,'x,) is given by
_., Finding the Sature ofstntio11arypoi11ts a,,d sufjicie11t co11ditio11: A sufficient condition
for a statwnar) pomt X* to be an e\treme pomt 1s that the Hessian matnx (J] evaluated at air
axf ~
ax1ax2] _ r 6.xi + .1 o l
6.r:: ~ 8 J
X* IS
J--[ a2f 32 f l O

1) pos1t1Ye definite ,,hen X* 1s a relatJye m1111mum point


3x_2a.frl ~
11) negau,·e definne when X .. is a relat1\'e maximum pomt
i\lso 111) mdefinite when X* 1s not an extreme pomt. Considering its leading pnnc1pal minors
Step 4: J, = (6x, + 41 = 6x 1 + ~
Calculating optimum or extreme i•alue of f(X): After find mg the nature of stationary ,.J - \6.r'o+ 4 6.r:0+ sII = (6:-: - .:t)(6x_ - s)
·
pomts obt.amed m step2 usmg step3 Fmd extreme values of given f(X) at correspond mg 2 -

extreme'pomts x~, 1fnecessary. Now coming to the nature of s t a t i o n ~ : .


0ote:
At (0, 0): J,= 4 >0
The hessian matnx is neither positive defimte nor negative definite at a point X* at which J2 = 32 > 0
ar ar a£ J is positive definite
axn = 0, then X* is called a saddle point.
·. (0, 0) 1s a relative minimum pomt and f(O. O) =6
:'iote: ·s
J, =_:W;,J)
In essence, for each solution X*, the nature of the hessian matrix is to be examined, after At (0, -
3 ):
that X* 1s a pomt of mm1mum or maximum ur saddle point of f(X) according as the J
!,_-
= -32/4.0
hess1an mamx 1s positive definite or negative defimte or mdefimte.
:. J 1s indefinite ..--....,__
!\'ote: 8 /1. ,, )/:: S
In actual practice, w~ rarely find semi defimte case . Therefore, muc h emphasis 1s not ·. (0 , - ) 1s a saddle point and f{O, - 3) ~
needed m this case.
3
At (- 4' 0): J, -4 < 0
SOLVED PROBLEMS 2 3
\ - - 32-... 0
J Problem 1: F' .,r the extreme points of the functJon J •~ 1ntkfin1tc
f(:r.,, xz) x,1 r x/ + 2x,2 + 4x/ + 6
l- ~ , 0) 1:- 3 s.\ddk p~)tl\t
"
~nil f\.- \ , 0)
... 7 l:;
Solution: The necessary conditions for the existence of extreme potnlS an·
a£ 4 S\
()..:., 3x,1 + 4x, ()::::, x,(Jx, + 4)
a~, 0
; ) 3 1 (" .+ lp1 -;,.. t'
1
\I ( ~- d· J,
ar
dx2
- 0 -:- 3x/ +- 8x 7 . (J
~ xiC3x, 1 8) - 0
?\ I c~r,<I J 1:- 1w~.1t\\t' 1kf1mtc

1e; x, 0 or and x, ,, ,ir ,t


I

( s)·\ ,~ a rd,111,c nH1\1mum


.i
point and f -;· - 1
( 8) so - 16.6
lI °'
J l
,, .,
0 2.15
Classical Optimization
2.14 0 Optimization Techniques

/problem 2: Fmd the maximum or mm1murn of the function f(x, x, x) = f(X) = x 1 +


x/ + x/ - 4x, 8x1 l 2x1 + 56 , ' i J ' ~ ~ ~) JS a relative maximum
· .·. J 1s negative definite and ( 2 • 3• 3
Solution: The necessary cond1t1on for the existence of exlrem points 1s clearly J, is with (- l)'s1gn,

ar ar ar point of f(x,, x 2, xJ
2 4 2 + 4x 2 + 4x X2 + 4x,
+ l6x x for relative
a~; ~ O, ax2 = O, OXJ O .../problem 4: Examine f(X) = x, + x2 1 1
XJ l J

⇒ 2x,-4=0, 2x 2 -8=0, 2x 1 -12=0 extrema.


Form the necessary condition
:. (2, 4, 6) 1s the only point that satisfies the ne~essary condition, So/11tio11:
Now, we must determine the nature of this stationary pomt (2, 4, 6) by using sufficiency ~ = 2x + 4x2 + 4x, = 0
ax1 I
cond1t1on.
Then the hess1an matnx evaluated at (2, 4, 6) 1s ~ = 4x, + 8x2 + 16x1 = 0
ol\2

J = [~ ~
0 0 2
~] ax3 . '
~ = 4x
+ J6x, + 8xi = O
- ow cons1denng the hessian matnx
we can have only solution is the point (0, 0, O). N '
Now J,=121=2>0

Jl = I~ ~ I= 4>0
(J] at (0, 0. 0):
., -[! ! :861j
Jco o oJ - 4 I6
JJ = 111 = 8 > O
Clearly J is positive definite J, =2 > 0
and
.-. The point (2, 4, 6) is a relative minimum point of f(x" x1, xJ Jl = 0
1 JJ = -128 '< 0
~roblem 3: Fmd the extreme points of the function f(x,, x2 , x1 ) = x, + 2x 3 + x2 X 3 - x,
-x22-~1?-- J is indefinite
Solutio11: The necessary condition for the existence bf extreme point is : (0 0 0) is a saddle point of f(X). d
' ' . - i + k (x - x )2 + k, x/) - Tx: for ma-..;.1mum an
af af af Examine f(x,, x 2) - (k2 x, 1 2
'
~ = o, ax2 = o, ax3 = 0 k and k 3 are positive)
2 h pt1mum·
⇒ I - 2x, = 0, x3 - 2x 1 = 0, 2 + x2 - 2x 1 = 0 The necessary conditions to ave o -
Solution: (l)
the solut10n of these simultaneous equations is : (i, ~. D ~
ol\1
= k x, -
i
k1 (xi - x.) =0
(2)
To find the nature ofth1s stationary pomt, consider the hess1an matrix (J] evaluated at

u.rD Then
From tl) and (2)·
Tk, T(k2 + k3) \. \·•here L\-= k k. ... k, k, + k, k,
2 ( ) (xi"'', x/) = ( --;;-~) ' ' • fhess1an rnamx (J)
f d by testino the n:1turc o
\! H) = [ ~0 ~2
I
~]
-2
1-

The sufficiency cond1t1ons. can be ven ,c "'

J, = -2 < 0 of f(xl' x 2) at (xi*, x/); -k'


Ik2 + kJ
and

I
Jl = ~2 ~2 = 4 > 0 / I The hess1an matnx Pli- 1• ., 2·1"' L -k:i
,
k1 + k:i 1
2.16 0 Optin111a1,on Techniques 0 2.11
Classical Optimization

then
J II..: ➔ k,I k, .._ 1-.J --.. o t n method
"or single variable ob1ect1ve
1•
we can solve it by unconstrained opt1m1za io
\ - !Ji i.. , k. ~ i.., i.., + kl kl > 0
J 1s pos1t1,·c defimtc func11on I
(x ,', :x/) corresponds lo the mrn1mum of!(\,, x,) f(x,} = 6x, - 2 == 0 => x, =- j
r'(x,) == 6 ~ 0
/ 2.4 Multivariable optimization with Equality constraints I
In this secu.:in. \\t' con$1dcr the opt1m1z.a11on of mult1,anable obJecllve function f(X). f"(x ) == 6 > 0 at x, = 3
Then
,, luch 1s continuous. subjected to equality constra1111s In general. the problem 1s.
opllm1ze f(X) = ~ is a relauve m1rumum of f(x ,}
x, 3
subJected to g,C:q = 0. J = L 2, ... m } (I)
"here X=(x , xi· ···'-•)T r = r(~)3
m••
= ~6
Here m ~ n, otherwise if m > n. the problem becomes over defined and 111 general.
there "111 be no solution There are several methods to solve this type of problem. out of Problem 2: Mimm1ze f(X) == x , + x/ ' 2

those ~e are cons1denng two methods . subJect to g,(X) = x, X, - 1 = O


p.4.1. Direct substitution method by direct substitution method. -
Solution: Here n = 2, m= l and n - m - 1
For a problem with n Yanables and m equality constraints, 1t is theoretically possible to
solYe simultaneously the m equality constra111ts and to express any set of m variables Now, expressing x2 interms of x,.
mterms of the remammg n - m variables. When these expressions are substituted 11110 the I
onguul objecuve function f(X), there results a new objective function 111volv111g on ly x1 = ~ ~ smgle vanable
n m ,·anables, the ne'" ob3ect1ve funct10n is not subjected to any constraint, and hence ·n f(X), the problem reduced to unconstraine1..
then, substituting l
its optimi.;m can be found by using the unconstra111ed optimization methods. 1
opt1m1zation problem: mimnuze f(x,) = :-../ "' .tj
SOLVED PROBLEMS 3
For a stationary pomt
Problem I: Mm1m1ze f(X) = ~ (x I 2 + x l 2 + x J 2)
2 fl\. ) - 0 => x./ - l = 0
subJected 10 g,(X} -: x 1 - x2 = 0 x, 1.l
g2 (X) = x, + x2 + x, - I= 0 and f'(x,).,,,., -S'-0. .
by direct substitution method. d l l) ~1\'C rel:itn e mm1mum off(~) _
.. the pomts ( l. I) an (- , g "' ' volume that c:i.n be mscnbed m a
Solution: Here n = 3 m = 2 and n - m = I Now, exprcssrng x and xi 111tcrms of x,,
1 l'roblcu_yJ-: rind the dnnens1ons oL, bo, of hlr;:-:-t
X2 - X1 sphere of unit r;\Lhus .
r., =I· x, - x 2 = I - 2x 1 So/11tio11: cknrly the probkm is
Now, subst11ut111g the.se expressions in f(X) 1\lt1\lll1l ?C f('\ ' '\ ., \.,)

f(XJ f(x 1• ~
'\' . '\ '-)
(._2)
X X ) -
2' 'I 2 (x l i· X l I ( I
I I 2x,)1) subJL'Cl to · J _ , . - t Hence the equality
rim probkm hus llm.'t' ,.m:1bks 3th\ l,ne eq\1u~t: con~trn1~1~- from the obJect1ve
I constrnmt cnn be 11:;~•ll to d1m11',\t~' ,\\\) \,ne \,f the es1gn \ ana :,
(6x,1 - 1x + I) 1
2 function d,t,o~t· 11, {'li 11111'-11 l" :-.1,
clearly, the problem rransformcd 11110 srnglc varrnbl(; optIrniza11nn prohlcrn:
(3)
I
Mm1m1ze f(.x1) = (6x,1 '1x 1 + I),
2
2.18 0 0 2.19
Optim,zat,011 Tech111<111es Classical Optimization

Where '), (),,. ).l' , .. \,J arc unknown con,;1ants called lagr.,nge mult1pl1ers. one
f(:-.. :-.J ~h '-: \ /(1 - ,f - ,"}i (4)
multiplier"-, for each constraint g,(X).
The c
'' hich , 11ax1m12ed as, J n unc\mstramcd
,
funct1on rn l\\'O 'arnblcs Clearly L(X. ),) 1s a function of n + m unknowns or variables x - ~'l" •• "•·>..,.AT A,..
necessary cond111ons for the maximum off: ' . Now. the necessary conditions for the extremum of L. which also correspond to the

(✓o - ,f- ,~) -


solution of original problem (I) are given by the fol\owmg theorem
,t:, = S\.~
/o - ,?- 'i)
,? ) -O TJ1eorem (Necessary conditions): Let f(X) and g (X). J = I. .. m be the continuous
cl1fferentiable functions. Let X" be an extremum of f(X) subJect to g/X) = 0. Then there
exist constants A., ), , ••• ),,. such that the pomt (X". i. ·) satisfies the cond1t1ons.
2
aa:~ = Sx, (./() - x? - ,i) - ,~ ) =0 oL of m a~
Jo - x~ - x~)
ax:= ax, - L,>-,~=0,1=1.2 ...
1=1
,n(r.i.<n}

s1mphfymg. I - 2x/ - x/ = o
oL
1 - x/- 2x/ = o a>.. = - g>(X) = 0. J = 1. 2. .. m.
1
solvmg x, • = . = _!__
x..,,. ..J3 and hence x3• I
= ../3 clearly, the above theorem gives the followmg necessary coud.100.:..s·
aL aL aL
To find whether t.he solution found corres d . o:q = 0, ax2 = 0, .... O'a. = 0
apply the sufficiency conditions to fix pon_ s t_o a rnax1rnu_m or a minimum, we
x/): ( ,, J½) cons1denng the hess1an matrix [J) at (x,*. g,(X) = 0. ~(X) = 0. ... g,J)...J = 0
these conditions, when solved as simultaneous hnear ec:muo::.s u:. ~ ·s =c ;_•s determine

[ -;;]
the stationary points of the lagrangian function so lh~ opnt:.inn~c of ii.X) sub1ect to
~ g,(X) = 0 1s equivalent to the optim1nuon of L(X. l).
J111• "2"l
= Now, the sufficient conditions (for dete,n.1ning the n:in:xe otsnnon:i..~ pomts) to ha"e
-16 -32
✓3 ✓3 extreme pomts are given by the follo-..\lng theorem:
Theorem (sufficient condition) : Let f\.X) and ~(X). J = \. 2 .• c. be ~,ce continuously
differentiable of X. Let there ex 1st 3. point (X"', >.. ·) :;ausf)ir:-g tb,: nece:,sa!) condltlons.
then
J, = -32
A <O
Further
= Ill > O
a nd J2
--- The hessian rt12.tmc of f(x 1, x 2) 1s negative definite at (x/", x/). Hence the point
.
LetL==
.
[.a 2
~
ih, '1 o,m
1 foral\iandJbethematfl'\Ots.:-~o.edord~rccn,J.tt,;esofL(X,A-)

(x, "', x 1 "') corresponds to the maximum off and f


. _!_
"'" = 3./3 \\'Ith ·espccl tu dcns1on ,·o.na\-1\es. ~;; rltl~lX)1
lh,,
'l -
... , ...
l ton.) :; l. 2, . m.
-,;/' 2.4.2. Method of Lagrange multipliers
Theblmathematical technique
. • 1·1ers 1s converting constra in ed optimization
ofLagra nge mu Ilip
pro em into unconstrained optimization problem · N0\\ lhc n' II\ Jll r ?, r-1 \$ -:.\\led bllrder<"ll he~si:rn m:1trh. , where O1$
l~ • \m-.oh1.111n)
To solve Eq(J), first we have to form the Lagrangian function . 1111111:--m n11ll tnl\lll'-. \hc.>11 th~- :-Hffl,-1ent ~,,nJ\t\\_,n,; f,,r e\.tremum cun b.; st:iteJ as follows:
Ill
I cl tx •,A•) lie 1h~· sH\t11.,mH)' p,,nu 1\,c th~· l a£r,mg1:.m tun~t1on l J.nJ J.J • be the vo.lue
L(x,, Xz, ... X 0 , \, ).,, ... l.n.J f{r.,. x, ,,, x.) - L J., g (x 1 1, x1 , ... x,.) (II) 111' corrcspo11d1n~ lwnkH·,l h~ss1,,n \\U\tn":\ -:,,mplltf,\ ,\t this stationary point Then t
JI
\ • lS 1111 m11\111111\\\ p,,11,t, 11' starnni:. "1th ,,nn.: 1p~\I mm,,r of order (m ... 1), the laS
,,, r·
, L(X, ?.) f(X) L \ g/X) t11 Ill) \ll\11 \.'lj\lll IH\111)\" ,,f
Jl/ fh.,1~\ l\O ullernutlllg sign p:11tern st::irung \\Ith (-1 n
(OR)
J I 111HI
2.20 C Classical Optimization 0 2.21
Optimization Teclln1ques

11. X.. 1s a minimum pomt, 1f ~lartmg \\Ith principal minor of order (2111 + l);the last
(n -m) pnnc1pa\ mrnor~ of\" have the sign of(-\)'" a2 L as.i
Note: Other conditions e,1st that are both necessary and sufficient for 1dcnt1fy111g extTeme po1111s where L,J = ax,axJ , g;, = ax, . . f rder (m + 1), the last
· 1f star t"In g with principal mmor O O . )m-•
X* is a maximum pomt, . . n pattern starting with (-1

~ = [g~
'- (n - m) pnnc1pa
. 1 minors of J s form an allematmg s1g
Define matnx
L-\1J and . . f order (2m + l ), the last
evaluated at the stationary pomt (X*, A*), whereµ 1s an unknown parameter. Consider X* is a minimum pornt, if starting with pnnc1pal °,!mor o
the determmant 1~1- then consider l.ll = 0, clearly we can have an equation of order n. (n - m) principal mrnors of Js have the sign of (-1) .
(n - m) 1molnng ~1. Then each of real (n - m) roots of the equation obtained from It.I= 0 Method 2: Consider the matrix D. instead of J,s
must be
{.g]
1 Negatffe 1f X* 1s a maximum point ./ L12
11. Positive 1f X* 1s a m1111mum pomt . . / Ln-J.L
Also 1f some roots are +ve and some are -ve, then X* is not an extreme point.
Lni
Necessary and sufficie~t Condition for optimization of Multivariable objective
Function with equality constraints (Lagrange multipliers)
"orki!lg Rule: Gtven the problem: and consider the equation \D.\ = _o ·.- . ( - m) solve for (n - m) roots ofµ,
then, we will have an equation mvolvmg µ of order n ,
Optirmze f(X)
if the roots are
subject to g,O() = 0 ¥ J = I, ... m
I. negative if X* is a maximum point
Step 1: From the Lagrangean Function L ii. positive if X* is a minimum point
L(x., x2, ••• , x., ).. 1, \ , ••• , Am) = f(x,, x 2, ••• , x.) - J... g (X) - A. g (X) . .. J..., gm(X) otherwise X* is not an extreme point.
1 1 2 2 0

where X = (x,, x2, ... , x )T, A= (A 1, A , ••• Am)


Step 2: Consider the necessary conditions to have extrema
0 2
SOLVED PROBLEMS 4
'I/
aL aL aL Problem 1: Minimize f(X) = x, + x/ + x/
1

¾ = O, ax2 = O, ... , axn = 0 subject to 4x, + x/ _+ 2x1 - 14 =O


g 1(X) = 0, g1(X) = 0, ... , gm(X} = 0 So/11tio11: The Lagrangian function 1s
Step 3: Solve the simultaneous linear equations obtamed m step 2, determrne the
stationary pomts of L.
2 3 1 +,~
Ltx,, x , '- : A ) _- x• z '. ! + xl
.! _). (4x + , ,-! + 2x 3 - 14)
_ 1•• · ,-

Necessary conditions to ha,:e extrema:


Step 4:
Method 1:
To find the nature of stationary pomts, consider the sufficiency conditions:
CalctJlate the bordered hessian matrix J 8 at each stationary point. and by the
nature of bordered hess1an matrix and from the followmg cnteria. find the nature of cnch
:~I =h - 4A, = 0 =- :x, - .2)., =- O (i)

stationary pomt.
(ii)

011 012 O1m g11 g12 gin


021 022 O2m g21 g22 .. g2n (\11

Om1 Om2 ill


J8 (bordered Hesswn matrix) Ornm gml Bm2 grnn 0 (or) gJ\.)
g11 g21 .. g,nJ L11 L12 Lin 3:1.1
g,2 g22 g,n2 L21 f-22 L211 From (1) \, !A.,
From (11) '-: 0 or 1-, = l
gin g211 /::U1n '-111 l.112 l.1111
',"\.\ ~.:!!''-lif'~\ ... ,~\": ~
C1,u1 •' Opt,m,1at on 0 2.23
2.22 ~
0 Opt m ·at 011
• IIQVH
"\~
"\.~"' "\"'l.."t.')-\'-\i

From cm\

·.~
'\ l
C'ons1dr-r th~ cas~ ') • e ,~i- .,~,-~
From \I\ land (1\. \ 1) •h i\t Cl, l, I, I):
⇒ 8l -1-2).
l-4
...
0
0
~ . '"'' ,~
1 0

\~41 i
)''I.'~
)2 .., 0 and J, - M
=- A., io l -4 and

2>.. 28 (2, -2
.,
J.,

x.,•l.
~
1"
At (2.8, 0, 1.4, 1.4): J,
1. .. u
x. X... x., i ) = (2 8. 0. 1 ... 1 -4)

~ ~
Coasider tbr cae ,., •.I: er cnaof
and\ 0 4\=-16<0.\~ -12S> andJ Jl
from ad x. = 2. X., -= l 42 oo-oa
and from "'X. + 1:1 + 2x.,- 14 = 0 max1mizat1on or min1m1zat1on
~ S +.,7-/ + 2 - 14 = 0 (2.8. 0. l 4. \ 4) IS DOC an O.t:'ffllC po ~
~ 1. -4•0 ~ ~ ::.,-
~ x,•:2
.a :-
At liter: Consider ..\ '!:I.- : '
x S,.X. A.)•(l.:12.1.l)
~
HCDCC tbr 51ab01my pomts arc
and at (l, J, 1. 1)
~--.r
~ 2. 1. 1). (2. -2. J, I). (2 8, 0, I .4, 1.4)
r.-;
l
\b I}
To ftnd die 11811ft of diac Slabmm'y pomta. we have to con11der sufficiency conditions IL.\\ o~~ i.6~
and rbea bordcffll beuwl IUUlS: (!.)
., ".. or $Q

aq?,-1, I, I)

Al (l, l, I, I):
/1 '"'

., ,u. 0, , .... \,,&)


0 -... Qµ

1
'"' 0 '\ \., \, $. \ -l.. \ ~\ ,s nul ,m
here m I and n 3st l !or I u&toury point 10 be a n11n1mum, 1larlln1& w11h
principal minor of order lm l 3 &he l (n m, .% pnnctpal m1nor1 or J, have 1hr ic, ~ Cl)'> I

s,gn of ( J )• Clearly we Ill c 10 k J J f'"4 4 '- 4 pnna1p1I m1nun or'• ., 0S ,


o,, Op\lffi\H ,,, \ \) ""' • ~ ' l~
,tJ l .. • 1 0
4 0 0
(2. 2, I. I) " • m1n1m1ut on point
0
l'robl•• 11
1ubJ01;\ \\) 1;\\M\tl,I\\ ~ ♦ \ \
20
l r.Q ~ (rl %fl.~)
,.....----
2.24 0 . )..-- 0 2.25
Opt1mrza11011 Techniques Classical Opt1m1zatt0n
J

So/111io11:
1'1 ohlc111 3: M101m1ze f(X) ,._ x/+ x/ +- x,1
subJCCl to g 1(X) = x, + 1( 1 +- 3x 3 - 2· 0
x 1 +2x,-20) g2(X) = Sx, + 2x 1 + xi 5= 0
So/11tio11: The Lagrangian function 1s
aL
a,, = Sx, - -1,1 - 'A. - 2)..1 , o (I)
L(x,, x1, xi, i.,, ). 2) = f(x,, x1, x1) - ). g (X) - t 1g7'X)
L ,_ x/ + x/ + x/ - ). 1 (x, + :< 2 + _3xi - 2) ).1{5x., ... 2xi-'- "{ 1 - 5) ,.--- 0
aL
<h2 = -hl - ·h, - )., + "i = 0 Necessary conditions to have extrema:
l' \ -~'n. --
aL aL '!.'l.., - {\)
ax1 = 2x, - A., - 5).1 = 0
a'-3 = 2x3 - A, - 211.1 = 0 (3) ..Jo; ~t '-"'
aL ry....\~~
(2)
aL axz = 2JC2 - A., - 2J..l = 0
a)., = 0 (or) (4)
aL
aL ax3 = 2XJ - JA., - )"'! =0
a>.. 2 = 0 (or) 2x 1 - x2 + 2x 3 - 20 =0 (5) aL (4)
aJ.., = 0 (or) x 1 ~ ~-'- 3~_-_2 = O
From (3) X = 2)q +.l2
3. 4 aL {5)
3
a,._I = 0 (or) 5x, 7 2..'½ - ~1 - 5 = 0
('.!J and (I).
Xl = 4\
X = 2),1 +.l2 From {I), (2) and (3) .
(2).
4
From (4) and (5): and again by,,."½· x 3 • the srationar) point is().."". A.•)= l'-, "{ 2 x3 , }., • "-21
I

subst1tutmg m (4) and (5) : 7\ + = 60


S).l (6) = (0.81. 0.35, 0.28, 0.0867. 0.3067) cons1denng the oorderro he-ss1an mamx at (X!. A"')
\ + 2).1 = 16 (7)

~~- i ~- i 11
From (6) and (7) . ). I = ~9 and i•2=
52
9
Jl\ - t ~ 0 ~ 0
and
, I O O :
n 3. m 2, n m l, ?m + \ S. thus \\e need to chec\. the determinant of 18 onl).
the stationary pomt is l\h1ch should hnve the s1g.n of( I)'. um\ \1 8 \ - ~oO' 0. ·. X.. = (0 SL 0.35. 0 28) 1s a
mm1m1zn11011 po111t off('()

At (X*. J.*), JII =· ,'


~ _~ ~ -4 0
4 '1 0
ll
~L~l:. - .,+ Prohkm 4: l\ln'\\m1zc-
subJC'C't to
f(\.)
~(\.) , 2,_i-
?"\ ... '- + 10
:\ - 0

I 2~0 2 j So/111io11: l'hc- L,1g_ran~1nn lu111:tl\)I\


Here m " 2 and n = 3, n _ m .. l, 2m l _5 lt\. \\ ~, • \,+ 10 Alx,+2x/-3)
Th '• Nl'l'l'ssnry l!lllH\it1l\\l~ 111 h;1,c- t''\trt'm;1,
-,gn ,01; (:~)amn~ t(h~t);e o_nly need to chcck the ,1e1crm1na111 of J,, uncl ti should hnvc the
"' - ~ , 1.e. pos111vc sign and IJ 11 1 72 ...-,0 ---, ,l\
X* is a mmim12a!Jon point of f(x x x )
I I l
/ .. ~
"I
). 0

~ /1
2.1.. O c,... ,c., OptlmlHllon 0 2.27 •
.
ll •l a4 2xa' 0
(2)

(3)
and
For th11 atallonary point("•, y•, )..•), Ille bordered hcdian m:atnx wlll have the stgn
arc....,,
~-. ( I)•• ( 1)' ,. 1, (check) Ha and k c:onstanlt
l•2
I
(" •, y•) .. ( ~[ ITI •) ,s a 111111Ullnati011 point.

ma• a_
95
ii Problem 6: Mu1m1zc f(x,, x,) • n/ 1 2
subJect to 2•"/ + 2n, 1 2 • A."" 2ff
diclltllil. .J1t1fllllim•1.s ~ >.-)•(iJ~ Sol•do• : The Lagrange func:tlOII:

. Kz-AJ
2
L(1,, 1s, J.) ... n,2 ~- l(ln + 2Sll

4• [: o~" •: ] and the necessary cond1bOIIS for Ille munaaa of f:


-h:I O -u.-,.. \ (ll

At m. 2) ....,111-•
I 0.52,
1st

aL
hz
=~ - 4sy- ~ 211.
=.x.2-~-o (2}

=> il -,a e
• -a-11 -• =>p=-17.79<0 al.
ii -2n.2 +~Kz-A..-•
.
(3)

r-{: ¾) •ia- ◄-,-iatlVQwith fCX•>- 16.01. From (1) and (2): 1.-~-2
~

.,..._5: IS C-.,J•Jar"r Jil (4)


9111jea11 1C-.y)•s2 +r-a2 •0; a. k > o that is ~- 2
~ 'lllef9 . ., . . . . II From Eqs. (3) and (4)

'llleN
l(x.'1.1)~~-l(x'+,S-a')

...
rt JD'i"ff I t.6e . . . . . offp,y)• 7',• -~ -~·-11--).••J(:)
.ii•-b"y' 2s1 0 (I)

(2)
If A.•2411
~,• i.a.•
Ta 111 that tl\11 1ob1tioa M.llJ
aufflolonoy oondlt\Ol\ •
~l,• '
w,..- i i • . . . . . - - of f we apply the
(3) Conalcltt
From (I) and (2)

Using (3) :
0 2.31
2.30 0 Class ical Optim izatio n
Optim izatio n Techn iques

analy zing, we
ofL with respe ct to X, Sand 1.; and
\==IJ J==1 6>o Now, taking the partia l deriva tives n
H ence J is •. Kuhn - Tucke r neces sary condi tions for the given maxim izatio
Th a pos1t1 ve defin ite matri x will have the follow ing
' ,
. us f(X) is a (stric tly) Conv ex funct 1011. probl em:
11. f(x) == -sxi
i =Ito n
Solut ion: J == [::;] == (-16)
\ gi (X) == 0, j = I to m
111 == -16 < 0 g; (X) s 0, j = I tom
functi on
Thus f(X) is a (stric tly) Conc ave .
\ 2:: 0 j = Ito m
a conve x . . . with the excep tion
: ~y local minim um of
funct ion f(X) is a globa l minim um
. Thus there to the minim izatio n case as well,
Not,e c Abov e condi tions can be applie d izatio n, the lag rang·e
can t exist more then one min.imum L0r a conve x fu t'!On. S.1m1·1ar l Y, any local maxim um in both maxim izatio n and minim
. I b I . nc that A. must be non positi ve (But, d in sign)
o f a conca ve funct1 0n fi(X) is a g o a maxim um . Th us th ere can't exist more than one ty const raints must be unres tricte
. rnu1t1pliers corres 11ond ing to equal{
maxi mum for a conca ve funct ion. case of maxim izatio n probl em and conve x in the case
of
lf L is conca ve in the
- the differ ent possib ilities of\ are as follow s:
2 ivari able t· · cons train ts minim izatio n probl em,
Mult
'7,,..,11/Till-6 now, .
O
p ,m,z atron with ineq ualit y. Table 1
. .
we studie d multi variab le optim iza ity constr aints, if the const raints
I . b tlon with equal
are of inequ ality form , we ca n so ve it y the m th d o f I_agran gean multi pliers , and this Type optim izatio n Type of const raints Lagrn nge multi pliers
ng yield s th K h e o A.j
proce dure of solvi condi tions for identi fying f(X) giX)
. e u n -: Tuck er neces sary .
. statio nary point s of a non! mear const ramed prob! b• .
to inequ ality const raints . s 2::
ient d . em su Ject
Thes e condi tions are also suffic un er certam rules that will be stated
or discu ssed s
later. Maxi mizat ion 2
= Unres tricte d
~ Cons ider the probl em .$
s
Maxi mize f(X) 2::
Minim izatio n 2
subje ct to g/X) .s O, j == 1 tom Unres tricte d
=
and x~0
- sary condi tions
be conve rted to e · b · condi tions: The Kuhn - Tucke r neces
The inequ ality const raints may quant ity add d t i~at_1ins y usmg non negat ive slack Suffic iency of the Kuhn - Tuck er satisf y certai n
varia bles. Let s.2 (~ 0) be the slack trarn tg/X )s0an ds=( s, also suffic ient if the objec tive functi on f(X) and the soluti on space
. e o eJ cons are condi tions are summ anzed as
s, ... s )Tand ~2== ( i numb er of inequ ality constr aint~. and conca vity. These
i i
s1 , s2, ··· Sm), where m JS the total condi tions regar ding conve xity
2 m
is given by follow s:
Thus , the Lagra ngean funct ion Table 2
L(x,, x2, ... x., SI' s2, ". Sm, A,, "'2' ".
Am)
m Requ ired condi tion
== f(x,, J!: 2, ... x.) - L Type of optim izatio n
j=I obJec tive Funct ion Solut ion space
(OR) f(X)
Con.cave Conv ex set
L(X, S, J...) = f(X) - fJ=l
\[g;(X ) + s/J
Maxim izatio n
Minim izatio n Conv ex Conv ex set

to prove that a
is conve x or conca ve than it is
wher e A= (A,, A. 2 , ... Am). rt is easy to verify that n functi on provi de a list of cond1t1ons that
are
const raints ) -~J ~ 0. soluti on space is Conv ex set. For
this reason , we
em with g1(~::; 0 (conv ~x trf'C
For the abov e~ax imiza tion probl - Tuck er neces sary condi tions.
\ must hold as part of the Kuhn
Thes e restri ction s on
0 2.."33
Classical Optimization

2.32 0 Optimization Techniques

easier tc• apply m practice in the sense that the convexity of the solution space can be soLV~D PROBLEM_S 6
established by checking directly the convexity or concavity of the constraint functions. Pr 6Iem 1: Maximize
T ~ e- these conditions, we define the generalized nonlinear problem as / subject to

and
Optimize f(X)
subject to g/X) s 0,
giX) ~ 0,
g/X) = 0,

r
j = I, 2, ... r
J = r + 1, ... p
j=p+l, ... m

p m
'
f
f.
f
tt
Solution:

Then
Given
Max
g(x1, x2)
subject to-
- 6

.d ·Kuhn Tucker cond1t10ns


Now, cons1 er
f( xi' x 2)
- 3 6 x - 0 4 x i + 1.6 x2 - 0.2 x/
-

i
·_ .1
= 2X 1 +· X 2 - 10 S 0 '
• 1 .~

2
- () 4 X 2 + 1.6 X - 0.2 x/-:- /1, (2xl + xl - 10 + s)
L(x1, x2' A) -.3. x1 . : i
,....._

L(X, S, 11.) = f(X) - L 11..(g.(X) + s.2) L \ (g/X) - s/) - L \ g/X) 3.6 - 0.8 x_, = 2A. (!)

~
-
j=I J J J j=r+l J=p+I
:Qr-.t 0
IV
~=
ax
A~
Bx1
1
where A. is the Lagrangean multiplier associated with constraint j. The conditions for I (2)'
establishing the sufficiency of the Kuhn - Tucker conditions are summarized below. I -01-, -~= 11,aag 1.6 - 0.4 x 2 = A.
Table 3

Type of optimization   Required conditions
                       f(X)      g_j(X)    λ_j
Maximization           Concave   Convex    ≥ 0            1 ≤ j ≤ r
                                 Concave   ≤ 0            r + 1 ≤ j ≤ p
                                 Linear    Unrestricted   p + 1 ≤ j ≤ m
Minimization           Convex    Convex    ≤ 0            1 ≤ j ≤ r
                                 Concave   ≥ 0            r + 1 ≤ j ≤ p
                                 Linear    Unrestricted   p + 1 ≤ j ≤ m
The validity of the above conditions rests on the fact that the given conditions yield a Concave Lagrangean function L in the case of maximization and a Convex L in the case of minimization. This can be verified by noticing that if g_j(x) is convex, then λ_j g_j(x) is convex if λ_j ≥ 0 and concave if λ_j ≤ 0 (the same reasoning establishes the remaining conditions). Observe that a linear function is both Convex and Concave. Also, if a function f is Concave, then -f is Convex, and vice versa.

Note: Observe that in obtaining the above K-T conditions the nonnegativity constraints X ≥ 0 were completely ignored. However, we always consider the constraint X ≥ 0, and we have to discard all solutions that violate X ≥ 0.

Note: The conditions in Table 3 represent only a subset of the conditions in Table 2, since a solution space may be convex without satisfying the conditions in Table 3.
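Since Tables 2 and 3 hinge on convexity and concavity, and every objective in the solved problems below is quadratic, the check can be mechanized: a quadratic is convex exactly when its constant Hessian is positive semidefinite, and concave when it is negative semidefinite. A small sketch (mine, not the text's; it assumes NumPy, and the function name classify is hypothetical):

# Sketch (not from the text): classify a quadratic via its constant Hessian.
import numpy as np

def classify(H):
    eig = np.linalg.eigvalsh(H)               # H is symmetric
    if np.all(eig >= -1e-9):
        return "convex (positive semidefinite Hessian)"
    if np.all(eig <= 1e-9):
        return "concave (negative semidefinite Hessian)"
    return "neither convex nor concave (indefinite)"

# f = x1^2 + x2^2 + x3^2 (Problem 3 below): Hessian 2I -> convex
print(classify(np.diag([2.0, 2.0, 2.0])))
# f = 3x1^2 + 14x1x2 - 8x2^2 (Problem 2 below): Hessian [[6,14],[14,-16]]
print(classify(np.array([[6.0, 14.0], [14.0, -16.0]])))   # indefinite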

SOLVED PROBLEMS

Problem 1: Maximize f(x1, x2) = 3.6x1 - 0.4x1² + 1.6x2 - 0.2x2²
subject to 2x1 + x2 ≤ 10, x1, x2 ≥ 0

Solution: Given
Max f(x1, x2) = 3.6x1 - 0.4x1² + 1.6x2 - 0.2x2²
subject to g(x1, x2) = 2x1 + x2 - 10 ≤ 0

Then L(x1, x2, λ) = 3.6x1 - 0.4x1² + 1.6x2 - 0.2x2² - λ(2x1 + x2 - 10 + s²)

Now consider the Kuhn-Tucker conditions:
∂f/∂x1 = λ ∂g/∂x1 ⟹ 3.6 - 0.8x1 = 2λ     (1)
∂f/∂x2 = λ ∂g/∂x2 ⟹ 1.6 - 0.4x2 = λ      (2)
λg(x) = 0 ⟹ λ(2x1 + x2 - 10) = 0          (3)
g(x) ≤ 0 ⟹ 2x1 + x2 - 10 ≤ 0              (4)
λ ≥ 0                                      (5)

From (3): either λ = 0 or 2x1 + x2 - 10 = 0.
Consider the case λ = 0; then from (1) and (2): x1 = 4.5 and x2 = 4. But equation (4) cannot be satisfied by (4.5, 4); hence (4.5, 4), which is obtained when λ = 0, is not an optimal solution. Discard the case λ = 0.
Then consider the case λ ≠ 0, that is 2x1 + x2 - 10 = 0. Along with (1) and (2), 2x1 + x2 - 10 = 0 yields (x1, x2) = (3.5, 3) and λ = 0.4 ≥ 0.
The point (3.5, 3) satisfies all the K-T conditions; hence (3.5, 3) is an optimal point.
∴ f_max = f(3.5, 3) = 10.7
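As a cross-check (not part of the text's hand solution), the same optimum can be recovered numerically. The sketch below assumes SciPy; note that SciPy's 'ineq' convention is fun(x) ≥ 0, so g(X) ≤ 0 is entered with its sign flipped:

# Numerical cross-check of Problem 1 (a sketch; assumes scipy is installed).
from scipy.optimize import minimize

f = lambda x: -(3.6*x[0] - 0.4*x[0]**2 + 1.6*x[1] - 0.2*x[1]**2)  # minimize -f
cons = [{'type': 'ineq', 'fun': lambda x: 10 - 2*x[0] - x[1]}]    # 2x1+x2 <= 10
res = minimize(f, x0=[0.0, 0.0], bounds=[(0, None), (0, None)],
               constraints=cons)
print(res.x, -res.fun)   # expect approximately (3.5, 3.0) and 10.7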

Problem 2: Maximize 3x1² + 14x1x2 - 8x2²
subject to 3x1 + 6x2 ≤ 72, x1, x2 ≥ 0

Solution: Given
Max f(x1, x2) = 3x1² + 14x1x2 - 8x2²
subject to g(x1, x2) = 3x1 + 6x2 - 72 ≤ 0
Then the Lagrangian function is given by
L(x1, x2, λ) = 3x1² + 14x1x2 - 8x2² - λ(3x1 + 6x2 - 72 + s²)

Kuhn-Tucker conditions are:
∂f/∂x1 = λ ∂g/∂x1 ⟹ 6x1 + 14x2 = 3λ      (1)
∂f/∂x2 = λ ∂g/∂x2 ⟹ 14x1 - 16x2 = 6λ     (2)
λg(x) = 0 ⟹ λ(3x1 + 6x2 - 72) = 0         (3)
g(x) ≤ 0 ⟹ 3x1 + 6x2 - 72 ≤ 0             (4)
λ ≥ 0                                      (5)

From (3): either λ = 0 or 3x1 + 6x2 - 72 = 0.
Case λ = 0: from (1) and (2), x1 = 0 and x2 = 0. The point (0, 0) is feasible but gives f(0, 0) = 0; since f is not concave, this K-T point need not be a maximum, and it is easily improved upon (for example, f(1, 0) = 3).
∴ (0, 0) is not an optimal point; discard the case λ = 0.
∴ Consider the case λ ≠ 0, that is 3x1 + 6x2 - 72 = 0     (6)
From (1) and (2): x1 = 22x2; then from (6): x2 = 1 ⟹ x1 = 22,
and λ = 146/3 ≈ 48.67 ≥ 0. The point (22, 1) satisfies all K-T conditions, hence it is an optimal point, and
f_max = f(22, 1) = 1752.
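Because this objective is indefinite (see the Hessian sketch above), the K-T conditions are necessary but not sufficient, and a local solver may stall at the stationary point (0, 0). A multi-start sketch (mine, assuming SciPy) that keeps the best local result:

# Multi-start cross-check of Problem 2 (a sketch; f is not concave).
from scipy.optimize import minimize

f = lambda x: -(3*x[0]**2 + 14*x[0]*x[1] - 8*x[1]**2)
cons = [{'type': 'ineq', 'fun': lambda x: 72 - 3*x[0] - 6*x[1]}]  # 3x1+6x2 <= 72
best = None
for x0 in ([0.0, 0.0], [10.0, 5.0], [24.0, 0.0], [20.0, 2.0]):
    r = minimize(f, x0, bounds=[(0, None), (0, None)], constraints=cons)
    if r.success and (best is None or r.fun < best.fun):
        best = r
print(best.x, -best.fun)   # expect approximately (22.0, 1.0) and 1752.0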
Problem 3: Minimize f(X) = x1² + x2² + x3²
subject to g1(X) = 2x1 + x2 - 5 ≤ 0
g2(X) = x1 + x3 - 2 ≤ 0
g3(X) = 1 - x1 ≤ 0
g4(X) = 2 - x2 ≤ 0
g5(X) = -x3 ≤ 0

Solution: The Lagrangian function is given by
L(x1, x2, x3, λ1, λ2, λ3, λ4, λ5) = x1² + x2² + x3² - λ1(2x1 + x2 - 5 + s1²) - λ2(x1 + x3 - 2 + s2²) - λ3(1 - x1 + s3²) - λ4(2 - x2 + s4²) - λ5(-x3 + s5²)

Then the Kuhn-Tucker conditions are:
2x1 - 2λ1 - λ2 + λ3 = 0     (1)
2x2 - λ1 + λ4 = 0           (2)
2x3 - λ2 + λ5 = 0           (3)
λ1(2x1 + x2 - 5) = 0        (4)
λ2(x1 + x3 - 2) = 0         (5)
λ3(1 - x1) = 0              (6)
λ4(2 - x2) = 0              (7)
λ5(-x3) = 0                 (8)
2x1 + x2 - 5 ≤ 0
x1 + x3 - 2 ≤ 0
1 - x1 ≤ 0
2 - x2 ≤ 0
-x3 ≤ 0
λ_j ≤ 0, j = 1 to 5, since the problem is a minimization problem (Table 1).

By trial and error, consider the possibility
x1 = 1, x2 = 2, x3 = 0 and λ1 = 0, λ2 = 0, λ3 = -2, λ4 = -4, λ5 = 0.
This choice satisfies all K-T conditions, therefore (x1, x2, x3) = (1, 2, 0) is an optimal point of f(X) = x1² + x2² + x3², and also, by the Kuhn-Tucker sufficient conditions (f is convex and all constraints are linear), the point (1, 2, 0) clearly yields a global minimum.
f_min = f(x1, x2, x3) = f(1, 2, 0) = 5.
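A numerical confirmation of the trial-and-error solution (a sketch assuming SciPy; each g_j ≤ 0 is rewritten in SciPy's fun(x) ≥ 0 form):

# Numerical cross-check of Problem 3 (a sketch; not the text's method).
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2 + x[2]**2
cons = [
    {'type': 'ineq', 'fun': lambda x: 5 - 2*x[0] - x[1]},  # g1 = 2x1+x2-5 <= 0
    {'type': 'ineq', 'fun': lambda x: 2 - x[0] - x[2]},    # g2 = x1+x3-2 <= 0
    {'type': 'ineq', 'fun': lambda x: x[0] - 1},           # g3 = 1-x1 <= 0
    {'type': 'ineq', 'fun': lambda x: x[1] - 2},           # g4 = 2-x2 <= 0
    {'type': 'ineq', 'fun': lambda x: x[2]},               # g5 = -x3 <= 0
]
res = minimize(f, x0=[1.2, 2.2, 0.2], constraints=cons)
print(res.x, res.fun)    # expect approximately (1.0, 2.0, 0.0) and 5.0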
Problem 4: Maximize f(x1, x2) = 8x1 + 10x2 - x1² - x2²
subject to 3x1 + 2x2 ≤ 6, x1 ≥ 0, x2 ≥ 0

Solution: Consider the Kuhn-Tucker conditions:
∂f/∂x1 = λ ∂g/∂x1 ⟹ 8 - 2x1 = 3λ     (1)
∂f/∂x2 = λ ∂g/∂x2 ⟹ 10 - 2x2 = 2λ    (2)
λg(X) = 0 ⟹ λ(3x1 + 2x2 - 6) = 0      (3)
g(X) ≤ 0 ⟹ 3x1 + 2x2 - 6 ≤ 0          (4)
λ ≥ 0                                  (5)

From (3): either λ = 0 or 3x1 + 2x2 - 6 = 0.
Suppose λ = 0; then from (1) and (2): x1 = 4, x2 = 5, but this point does not satisfy equation (4); so when λ = 0, the point (4, 5) is not optimal. Then consider λ ≠ 0, that is
3x1 + 2x2 - 6 = 0     (6)
(6) along with (1) and (2) gives (x1, x2) = (4/13, 33/13) and λ = 32/13 ≈ 2.46 ≥ 0. Clearly this point satisfies all K-T conditions, so it is an optimal (max) point according to the K-T conditions. Hence
f_max = f(4/13, 33/13) = 3601/169 ≈ 21.31.
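The fractions can be confirmed in exact arithmetic by solving the stationarity equations together with the active constraint; a sketch assuming SymPy:

# Exact check of Problem 4 (a sketch, not the text's method).
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', real=True)
sol = sp.solve([sp.Eq(8 - 2*x1, 3*lam),      # (1)
                sp.Eq(10 - 2*x2, 2*lam),     # (2)
                sp.Eq(3*x1 + 2*x2, 6)],      # active constraint (6)
               [x1, x2, lam], dict=True)[0]
print(sol)                                   # {x1: 4/13, x2: 33/13, lam: 32/13}
fmax = 8*sol[x1] + 10*sol[x2] - sol[x1]**2 - sol[x2]**2
print(fmax, float(fmax))                     # 3601/169, approx 21.31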
Problem 5: Maximize f(x1, x2, x3) = -x1² - x2² - x3² + 4x1 + 6x2
subject to x1 + x2 ≤ 2, 2x1 + 3x2 ≤ 12, x1, x2 ≥ 0

Solution: With g1 = x1 + x2 - 2 ≤ 0 and g2 = 2x1 + 3x2 - 12 ≤ 0, the Kuhn-Tucker conditions are:
∂f/∂x1 = λ1 ∂g1/∂x1 + λ2 ∂g2/∂x1 ⟹ -2x1 + 4 = λ1 + 2λ2  (1)
∂f/∂x2 = λ1 ∂g1/∂x2 + λ2 ∂g2/∂x2 ⟹ -2x2 + 6 = λ1 + 3λ2  (2)
∂f/∂x3 = λ1 ∂g1/∂x3 + λ2 ∂g2/∂x3 ⟹ -2x3 = 0             (3)
λ1(x1 + x2 - 2) = 0       (4)
λ2(2x1 + 3x2 - 12) = 0    (5)
x1 + x2 - 2 ≤ 0           (6)
2x1 + 3x2 - 12 ≤ 0        (7)
λ1 ≥ 0, λ2 ≥ 0            (8)

From (4) and (5), we can have the cases:
Case 1: λ1 = 0, λ2 = 0 ; Case 2: λ1 = 0 and λ2 ≠ 0 ; Case 3: λ1 ≠ 0 and λ2 ≠ 0 ; Case 4: λ1 ≠ 0 and λ2 = 0.
Checking cases 1, 2 and 3 shows that each leads either to an infeasible point or to a negative multiplier, so these cases are discarded.
Now consider case 4: λ1 ≠ 0 and λ2 = 0. From (4) and (5): x1 + x2 - 2 = 0   (9)
From (1) and (2): x1 - x2 + 1 = 0   (10); then from (9) and (10): x1 = 1/2,
and by (9): x2 = 3/2, by (3): x3 = 0, by (1): λ1 = 3.
(x1, x2, x3) = (1/2, 3/2, 0), (λ1, λ2) = (3, 0), satisfying all K-T conditions; ∴ it is an optimal point. Hence the optimum solution for the given problem is
x1 = 1/2, x2 = 3/2, x3 = 0 and Max f = 17/2.
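The four-case analysis above can be automated; the sketch below (my own, assuming SymPy) enumerates, for each constraint, whether its multiplier is zero or the constraint is active, and keeps only solutions satisfying feasibility and the sign conditions:

# Sketch (not from the text): automate Problem 5's case enumeration.
import itertools
import sympy as sp

x1, x2, x3, l1, l2 = sp.symbols('x1 x2 x3 l1 l2', real=True)
f = -x1**2 - x2**2 - x3**2 + 4*x1 + 6*x2
g = [x1 + x2 - 2, 2*x1 + 3*x2 - 12]
stat = [sp.Eq(sp.diff(f, v), l1*sp.diff(g[0], v) + l2*sp.diff(g[1], v))
        for v in (x1, x2, x3)]

for case in itertools.product([0, 1], repeat=2):    # 1 = constraint active
    eqs = stat + [sp.Eq(g[j], 0) if on else sp.Eq([l1, l2][j], 0)
                  for j, on in enumerate(case)]
    for s in sp.solve(eqs, (x1, x2, x3, l1, l2), dict=True):
        feasible = all(gj.subs(s) <= 0 for gj in g)
        signs_ok = s[l1] >= 0 and s[l2] >= 0
        if feasible and signs_ok:
            print(case, s, f.subs(s))   # case (1, 0): x = (1/2, 3/2, 0), f = 17/2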

Problem 6: Maximize f(x1, x2) = -x1² - x2² + 8x1 + 10x2
subject to 3x1 + 2x2 ≤ 6, 2x1 + 3x2 ≤ 12

Solution: The second constraint turns out to be inactive at the optimum, so its multiplier is zero; the Kuhn-Tucker conditions can then be written with a single multiplier λ for the first constraint:
∂f/∂x1 = λ ∂g/∂x1 ⟹ -2x1 + 8 = 3λ    (1)
∂f/∂x2 = λ ∂g/∂x2 ⟹ -2x2 + 10 = 2λ   (2)
λg = 0 ⟹ λ(3x1 + 2x2 - 6) = 0         (3)
g ≤ 0 ⟹ 3x1 + 2x2 - 6 ≤ 0             (4)
λ ≥ 0                                  (5)

From (3): either λ = 0 or 3x1 + 2x2 - 6 = 0.
Consider λ = 0: then by (1) and (2): x1 = 4, x2 = 5.
But the point (4, 5) does not satisfy equation (4).
∴ (4, 5) is not an optimal point. Hence, discard the case λ = 0.
Then consider λ ≠ 0, that is 3x1 + 2x2 - 6 = 0   (6)
By (1) and (2): 2x1 - 3x2 + 7 = 0                (7)
By (6) and (7): (x1, x2) = (4/13, 33/13) ≈ (0.308, 2.538),
and by (1): λ = 32/13 ≈ 2.46 ≥ 0.
The second constraint is indeed inactive here: 2x1 + 3x2 = 107/13 < 12. The point (x1, x2) satisfies all the K-T conditions, therefore it is an optimal point.
Hence, the optimum solution for the problem is
x1 = 4/13 ≈ 0.308, x2 = 33/13 ≈ 2.538, λ = 32/13 ≈ 2.46
and Max f = 3601/169 ≈ 21.31 (the same optimum as Problem 4, which has the same objective and the same active constraint).
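A numerical check with both constraints supplied (a sketch assuming SciPy) confirms that the second constraint stays inactive, which is why its multiplier could be taken as zero:

# Numerical check of Problem 6 with BOTH constraints supplied (a sketch).
from scipy.optimize import minimize

f = lambda x: -(-x[0]**2 - x[1]**2 + 8*x[0] + 10*x[1])
cons = [{'type': 'ineq', 'fun': lambda x: 6 - 3*x[0] - 2*x[1]},    # active
        {'type': 'ineq', 'fun': lambda x: 12 - 2*x[0] - 3*x[1]}]   # slack
res = minimize(f, x0=[1.0, 1.0], constraints=cons)
print(res.x, -res.fun)                 # expect ~ (0.3077, 2.5385) and ~ 21.31
print(12 - 2*res.x[0] - 3*res.x[1])    # > 0: second constraint inactive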
Problem 7: Maximize f(X) = -x1² + x2
subject to g1(X) = -x1 + 1 ≤ 0
g2(X) = x1² + x2² - 26 ≤ 0
g3(X) = x1 + x2 - 6 = 0
Solution: The Kuhn-Tucker conditions are:
∂f/∂x1 = λ1 ∂g1/∂x1 + λ2 ∂g2/∂x1 + λ3 ∂g3/∂x1 ⟹ -2x1 = λ1(-1) + λ2(2x1) + λ3(1)
∂f/∂x2 = λ1 ∂g1/∂x2 + λ2 ∂g2/∂x2 + λ3 ∂g3/∂x2 ⟹ 1 = λ1(0) + λ2(2x2) + λ3(1)
λ1 g1 = 0 ⟹ λ1(-x1 + 1) = 0
λ2 g2 = 0 ⟹ λ2(x1² + x2² - 26) = 0
λ3 g3 = 0 ⟹ λ3(x1 + x2 - 6) = 0, which holds automatically since g3 = 0
g1 ≤ 0 ⟹ -x1 + 1 ≤ 0
g2 ≤ 0 ⟹ x1² + x2² - 26 ≤ 0
g3 = 0 ⟹ x1 + x2 - 6 = 0
λ1 ≥ 0, λ2 ≥ 0, λ3 unrestricted (since g3 is an equality constraint).

That is,
-2x1 = -λ1 + 2x1 λ2 + λ3                  (1)
1 = 2x2 λ2 + λ3                           (2)
λ1(x1 - 1) = 0                            (3)
λ2(x1² + x2² - 26) = 0                    (4)
x1 + x2 - 6 = 0, λ3 unrestricted          (5)
-x1 + 1 ≤ 0                               (6)
x1² + x2² - 26 ≤ 0                        (7)
λ1 ≥ 0, λ2 ≥ 0, λ3 unrestricted           (8)

By (3), consider the case x1 = 1; then by (5): x2 = 5. With x1 = 1, x2 = 5 we have x1² + x2² - 26 = 0, so (4) and (7) are satisfied.
Then by (1): -2 = -λ1 + 2λ2 + λ3
and by (2): 1 = 10λ2 + λ3.
As λ3 is unrestricted, choose λ3 = 0; then λ2 = 0.1 and λ1 = 2.2.
∴ λ1 ≥ 0 and λ2 ≥ 0, and clearly the point (1, 5) satisfies all K-T conditions.
∴ (1, 5) is an optimal point.
Hence, the optimum solution for the problem is
x1 = 1, x2 = 5
and Max f = f(1, 5) = 4.
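A numerical check of this mixed problem (a sketch assuming SciPy, where 'eq' marks the equality constraint g3):

# Numerical cross-check of Problem 7 (a sketch; not the text's method).
from scipy.optimize import minimize

f = lambda x: -(-x[0]**2 + x[1])                       # maximize -x1^2 + x2
cons = [{'type': 'ineq', 'fun': lambda x: x[0] - 1},               # x1 >= 1
        {'type': 'ineq', 'fun': lambda x: 26 - x[0]**2 - x[1]**2}, # circle
        {'type': 'eq',   'fun': lambda x: x[0] + x[1] - 6}]        # line
res = minimize(f, x0=[2.0, 4.0], constraints=cons)
print(res.x, -res.fun)   # expect approximately (1.0, 5.0) and 4.0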
Problem 8: Minimize f(x1, x2, x3) = x1² + x2² + x3² + 40x1 + 20x2
subject to g1(x1, x2, x3) = x1 - 50 ≥ 0
g2(x1, x2, x3) = x1 + x2 - 100 ≥ 0
g3(x1, x2, x3) = x1 + x2 + x3 - 150 ≥ 0
and x1, x2, x3 ≥ 0

Solution: The Kuhn-Tucker conditions are given by
∂f/∂x_i = Σ_{j=1}^{3} λ_j ∂g_j/∂x_i ; i = 1 to 3
λ_j g_j(X) = 0 ; j = 1 to 3
g_j(X) ≥ 0 ; j = 1 to 3
λ_j ≥ 0 ; j = 1 to 3 (minimization with g_j ≥ 0; see Table 1)

That is,
2x1 + 40 = λ1 + λ2 + λ3       (1)
2x2 + 20 = λ2 + λ3            (2)
2x3 = λ3                      (3)
λ1(x1 - 50) = 0               (4)
λ2(x1 + x2 - 100) = 0         (5)
λ3(x1 + x2 + x3 - 150) = 0    (6)
x1 - 50 ≥ 0                   (7)
x1 + x2 - 100 ≥ 0             (8)
x1 + x2 + x3 - 150 ≥ 0        (9)
λ1 ≥ 0                        (10)
λ2 ≥ 0                        (11)
λ3 ≥ 0                        (12)

The solution of Equations (1) to (12) can be found in several ways, e.g., by trial and error, checking all the possibilities. One solution is x1 = x2 = x3 = 50, at which all three constraints are active. The values of λ1, λ2 and λ3 corresponding to this solution are obtained from (1)-(3) as
λ1 = 20, λ2 = 20, λ3 = 100.
Since all λ's ≥ 0, this solution can be identified as the optimum solution. Thus
x1* = 50, x2* = 50, x3* = 50 and f_min = 10500.
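A closing numerical sketch (mine, assuming SciPy and NumPy; not part of the text's trial-and-error solution): verify the minimizer, then recover λ1, λ2, λ3 from the stationarity equations (1)-(3), since all three constraints are active at (50, 50, 50):

# Numerical cross-check of Problem 8 plus multiplier recovery (a sketch).
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2 + x[2]**2 + 40*x[0] + 20*x[1]
cons = [{'type': 'ineq', 'fun': lambda x: x[0] - 50},
        {'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 100},
        {'type': 'ineq', 'fun': lambda x: x[0] + x[1] + x[2] - 150}]
res = minimize(f, x0=[60.0, 60.0, 60.0],
               bounds=[(0, None)]*3, constraints=cons)
print(res.x, res.fun)                     # expect (50, 50, 50) and 10500

# Stationarity at x = (50, 50, 50): solve (1)-(3) for (l1, l2, l3).
A = np.array([[1.0, 1.0, 1.0],            # coefficients of (l1, l2, l3) in (1)
              [0.0, 1.0, 1.0],            # ... in (2)
              [0.0, 0.0, 1.0]])           # ... in (3)
b = np.array([2*50 + 40, 2*50 + 20, 2*50.0])
print(np.linalg.solve(A, b))              # expect [20. 20. 100.]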
