
Optimization using Calculus

Optimization of
Functions of Multiple
Variables: Unconstrained
Optimization

1 D Nagesh Kumar, IISc Optimization Methods: M2L3


Objectives

• To study functions of multiple variables, which are more difficult to analyze owing to the difficulty of graphical representation and the tedious calculations involved in their mathematical analysis for unconstrained optimization.
• To study the above with the aid of the gradient vector and the Hessian matrix.
• To discuss the implementation of the technique through examples.

2 D Nagesh Kumar, IISc Optimization Methods: M2L3


Unconstrained optimization

• If a convex function is to be minimized, the stationary point is the global minimum and the analysis is relatively straightforward, as discussed earlier.
• A similar situation exists for maximizing a concave function.
• The necessary and sufficient conditions for the optimization of an unconstrained function of several variables are discussed below.

3 D Nagesh Kumar, IISc Optimization Methods: M2L3


Necessary condition

• In the case of multivariable functions, a necessary condition for a stationary point of the function f(X) is that each partial derivative be equal to zero. In other words, the gradient vector of f(X) at X = X*, defined as follows, must be equal to zero:
$$\nabla_x f = \begin{bmatrix} \dfrac{\partial f}{\partial x_1}(X^*) \\[6pt] \dfrac{\partial f}{\partial x_2}(X^*) \\[4pt] \vdots \\[4pt] \dfrac{\partial f}{\partial x_n}(X^*) \end{bmatrix} = \mathbf{0}$$
4 D Nagesh Kumar, IISc Optimization Methods: M2L3
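A minimal illustration of this necessary condition, not part of the original slides, using SymPy on a simple two-variable function; the function f = x² + y² − 2x is an arbitrary illustrative choice:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - 2*x                    # illustrative function, not from the slides

grad = [sp.diff(f, v) for v in (x, y)]   # gradient vector of f
print(grad)                              # [2*x - 2, 2*y]
print(sp.solve(grad, (x, y)))            # stationary point where grad f = 0: {x: 1, y: 0}
```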
Sufficient condition
• For a stationary point X* to be an extreme point, the matrix of second partial derivatives (Hessian matrix) of f(X) evaluated at X* must be:
  – positive definite when X* is a point of relative minimum, and
  – negative definite when X* is a point of relative maximum.
• When all eigenvalues of the Hessian are negative for all possible values of X, then X* is a global maximum, and when all eigenvalues are positive for all possible values of X, then X* is a global minimum.
• If some of the eigenvalues of the Hessian at X* are positive and some negative, or if some are zero, the stationary point X* is neither a local maximum nor a local minimum. (A numerical check of these conditions is sketched after this slide.)

5 D Nagesh Kumar, IISc Optimization Methods: M2L3
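The following is a minimal sketch, not part of the original slides, of how these eigenvalue tests can be applied numerically with NumPy; the function name `classify_stationary_point` and the tolerance `tol` are illustrative choices.

```python
import numpy as np

def classify_stationary_point(hessian, tol=1e-10):
    """Classify a stationary point X* from the Hessian of f evaluated at X*."""
    # eigvalsh assumes a symmetric matrix, which the Hessian of a twice
    # continuously differentiable function is.
    eigvals = np.linalg.eigvalsh(np.asarray(hessian, dtype=float))
    if np.all(eigvals > tol):
        return "relative minimum (Hessian positive definite)"
    if np.all(eigvals < -tol):
        return "relative maximum (Hessian negative definite)"
    return "neither a local maximum nor a local minimum"

# Example: a diagonal Hessian with entries 2 and 3 is positive definite,
# so the corresponding stationary point is a relative minimum.
print(classify_stationary_point([[2.0, 0.0], [0.0, 3.0]]))
```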


Example

Analyze the function

$$f(\mathbf{x}) = -x_1^2 - x_2^2 - x_3^2 + 2x_1x_2 + 2x_1x_3 + 4x_1 - 5x_3 + 2$$

and classify the stationary points as maxima, minima and points of inflection.
Solution

6 D Nagesh Kumar, IISc Optimization Methods: M2L3
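Since the detailed solution is not reproduced above, here is a minimal symbolic sketch (using SymPy; not part of the original slides) of the procedure: solve the necessary condition ∇f = 0 for the stationary point, then examine the eigenvalues of the Hessian there. The signs in f below follow the reconstruction of the example function and are an assumption.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
# Example function as reconstructed above; the signs are an assumption.
f = -x1**2 - x2**2 - x3**2 + 2*x1*x2 + 2*x1*x3 + 4*x1 - 5*x3 + 2

X = (x1, x2, x3)
grad = [sp.diff(f, v) for v in X]           # necessary condition: grad f = 0
stationary = sp.solve(grad, X, dict=True)   # stationary point(s)

H = sp.hessian(f, X)                        # sufficient condition: definiteness of H
for point in stationary:
    eigs = list(H.subs(point).eigenvals())  # eigenvalues of the Hessian at X*
    print("stationary point:", point)
    print("Hessian eigenvalues:", eigs)
```

Under that reconstruction, the script reports the single stationary point x1 = x2 = 1/2, x3 = −2 with Hessian eigenvalues −2 and −2 ± 2√2; since these have mixed signs, the point is neither a maximum nor a minimum.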


Example …contd.

7 D Nagesh Kumar, IISc Optimization Methods: M2L3


Example …contd.

8 D Nagesh Kumar, IISc Optimization Methods: M2L3


Theorem. The eigenvalues of a triangular matrix are its diagonal entries

Proof: Let the triangular matrix be


$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}$$

The characteristic equation of A is

$$\det(A - \lambda I) = \det \begin{bmatrix} a_{11}-\lambda & a_{12} & \cdots & a_{1n} \\ 0 & a_{22}-\lambda & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn}-\lambda \end{bmatrix} = 0$$

Since A − λI is also triangular, its determinant is the product of its diagonal entries, so

$$(a_{11} - \lambda)(a_{22} - \lambda)\cdots(a_{nn} - \lambda) = 0$$

Hence $\lambda_1 = a_{11},\ \lambda_2 = a_{22},\ \ldots,\ \lambda_n = a_{nn}$.

Example:

$$A = \begin{bmatrix} 1 & 1 \\ -2 & 4 \end{bmatrix}$$

The characteristic equation of A is

$$\det(A - \lambda I) = \det \begin{bmatrix} 1-\lambda & 1 \\ -2 & 4-\lambda \end{bmatrix} = 0$$

$$(1-\lambda)(4-\lambda) + 2 = 0$$

$$\lambda^2 - 5\lambda + 6 = 0$$

The eigenvalues are therefore $\lambda_1 = 2,\ \lambda_2 = 3$.
Example …contd.

11 D Nagesh Kumar, IISc Optimization Methods: M2L3


Thank you

12 D Nagesh Kumar, IISc Optimization Methods: M2L3
