Optimization Techniques (Lectures 1 and 2)

Optimization is the process of finding the best conditions to maximize or minimize a function, essential for maximizing gains and minimizing losses in various engineering applications. Techniques include classical optimization for continuous functions, unconstrained and constrained optimization methods, and the use of Lagrange multipliers for handling constraints. The document outlines necessary and sufficient conditions for identifying extreme points in both single and multi-variable optimization scenarios.


Optimization techniques

• What is optimization?
1. The process of finding the conditions that give the maximum or minimum value of a function.
2. It is the search for the best course of action under given circumstances.

• Why do we need to study optimization?


1. Maximize the gain or profit of the system and minimize the waste or loss
Optimization techniques
• Engineering applications of optimization
Optimization, in its broadest sense, can be applied to solve any engineering problem. Some typical
applications from different engineering disciplines indicate the wide scope of the subject:
1. Design of aircraft and aerospace structures for minimum weight
2. Finding the optimal trajectories of space vehicles
3. Design of civil engineering structures such as frames, foundations, bridges, towers, chimneys, and
dams for minimum cost
4. Minimum-weight design of structures for earthquake, wind, and other types of random loading
5. Design of water resources systems for maximum benefit
6. Optimal plastic design of structures
7. Optimum design of linkages, cams, gears, machine tools, and other mechanical components
8. Selection of machining conditions in metal-cutting processes for minimum production cost
9. Design of material handling equipment, such as conveyors, trucks, and cranes, for minimum cost
10. Design of pumps, turbines, and heat transfer equipment for maximum efficiency
11. Optimum design of electrical machinery such as motors, generators, and transformers
12. Optimum design of electrical networks
13. Shortest route taken by a salesperson visiting various cities during one tour
14. Optimal production planning, controlling, and scheduling
15. Analysis of statistical data and building empirical models from experimental results to obtain the most accurate
representation of the physical phenomenon
16. Optimum design of chemical processing equipment and plants
17. Design of optimum pipeline networks for process industries
18. Selection of a site for an industry
19. Planning of maintenance and replacement of equipment to reduce operating costs
20. Inventory control
21. Allocation of resources or services among several activities to maximize the benefit
22. Controlling the waiting and idle times and queueing in production lines to reduce the costs
23. Planning the best strategy to obtain maximum profit in the presence of a competitor
24. Optimum design of control systems
Classical Optimization Techniques
• Useful in finding the optimum solution of continuous and differentiable functions.

• They are analytical techniques that make use of differential calculus to locate optimum points.

• They have limited scope in practical applications.

• They form the basis for developing most of the numerical techniques of optimization.
Unconstrained Optimization-Single Variable
• Continuity and differentiability of a function

[Figure: examples of discontinuous functions: one with a hole, one broken, and ones with a sudden jump]

• Continuity is a basic condition to attain differentiability
(i.e. all differentiable functions must be continuous, but not all continuous functions are differentiable).
• Differentiability means that the derivative of the function exists at every point in the domain.
• Continuity means that the function has no breaks, holes, and/or sudden jumps.
• A continuous function can still fail to be differentiable at a corner or at a point with a vertical tangent; for example, f(x) = |x| is continuous everywhere but is not differentiable at the corner x = 0.
Unconstrained Optimization-Single Variable
• Necessary Condition
If a function f (x)
1. is defined in the interval a ≤ x ≤ b and
2. has a local (relative) minimum value at x = x*, where a < x* < b,
3. and the derivative exists as a finite number at x = x*,

then the necessary condition is:

f'(x*) = 0

• In general, a point x* at which f’(x*)=0 is called a stationary point.


Unconstrained Optimization-Single Variable
• Sufficient Condition
Let n be the order of the first non-vanishing derivative of f at the stationary point x*, so that f'(x*) = f''(x*) = ... = f^(n-1)(x*) = 0 and f^(n)(x*) ≠ 0. (If n = 1, f^(n)(x*) = f'(x*); if n = 2, f^(n)(x*) = f''(x*); if n = 3, f^(n)(x*) = f'''(x*).) Then f(x*) is
I. a minimum value of f(x), if f^(n)(x*) > 0 and n is even;
II. a maximum value of f(x), if f^(n)(x*) < 0 and n is even;
III. neither a maximum nor a minimum, if n is odd.

The sufficient condition:

If f''(x*) > 0, the stationary point x* is a local (relative) minimum.
If f''(x*) < 0, the stationary point x* is a local (relative) maximum.
If f''(x*) = 0 and f'''(x*) ≠ 0, the stationary point x* is a saddle (inflection) point.
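
As a sketch of how these conditions can be checked mechanically, the following Python snippet (using sympy) finds and classifies the stationary points of an assumed illustrative polynomial; the function chosen here is not one of this document's examples.

```python
import sympy as sp

x = sp.symbols('x')
f = 12*x**5 - 45*x**4 + 40*x**3 + 5   # assumed illustrative function

# Necessary condition: f'(x*) = 0.
stationary_points = sp.solve(sp.Eq(sp.diff(f, x), 0), x)

for xs in stationary_points:
    # n = order of the first non-vanishing derivative at x* (n >= 2 here,
    # since f'(x*) = 0 at every stationary point).
    n = 2
    value = sp.diff(f, x, n).subs(x, xs)
    while value == 0:
        n += 1
        value = sp.diff(f, x, n).subs(x, xs)
    if n % 2 == 0:
        kind = 'local minimum' if value > 0 else 'local maximum'
    else:
        kind = 'saddle (inflection) point'   # n is odd
    print(f'x* = {xs}: n = {n}, f^({n})(x*) = {value} -> {kind}')
```

For this function the script reports a saddle point at x = 0 (first non-vanishing derivative is the third), a local maximum at x = 1, and a local minimum at x = 2.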
Unconstrained Optimization-Single Variable
• Relative versus global extreme points:

[Figure: a curve illustrating relative (local) versus global extreme points]

Terminology used interchangeably in these notes: relative = local; extreme point = critical point = stationary point = optimum point.
Unconstrained Optimization-Single Variable
• Saddle point [f’(x*)=0, f’’(x*)=0 and f’’’(x*)≠0 ]:

[Figure: a curve with a saddle point, where f'(x*) = 0 but x* is neither a maximum nor a minimum]
Unconstrained Optimization-Single Variable
• Example 1: Determine the local/relative maximum and minimum values of the function.
• Example 2: [the solution identifies a relative (local) minimum]
• Example 3:
• Example 4:
• Example 5:
[The statements, worked solutions, and plots of Examples 1-5 appeared as images and are not preserved in this copy.]
Unconstrained Optimization- Multi Variable
• Necessary Condition
If f(X) has an extreme point (maximum or minimum) at X = X*
and if the first partial derivatives of f (X) exist at X*, then

The necessary condition:


$$\frac{\partial f}{\partial x_1}(X^*) = \frac{\partial f}{\partial x_2}(X^*) = \cdots = \frac{\partial f}{\partial x_n}(X^*) = 0$$
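
A minimal sympy sketch of this condition, applied to an assumed two-variable quadratic (again, not one of this document's examples):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 + x2**2 - 4*x1 - 6*x2 + 13   # assumed illustrative function

# Necessary condition: every first partial derivative vanishes at X*.
grad = [sp.diff(f, v) for v in (x1, x2)]
print(sp.solve(grad, [x1, x2], dict=True))   # [{x1: 2, x2: 3}]
```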
Unconstrained Optimization- Multi Variable
• Sufficient Condition
A sufficient condition for a stationary point X* to be an extreme point is that the matrix of second partial derivatives (the Hessian matrix) of f(X), evaluated at X*, is

I. positive definite, in which case X* is a relative/local minimum point;

II. negative definite, in which case X* is a relative/local maximum point;

III. indefinite, in which case X* is a saddle point.
Unconstrained Optimization- Multi Variable
• To build up the Hessian matrix (a sketch of the general form is given below):
1. a (2×2) Hessian matrix for two variables x and y;
2. a (3×3) Hessian matrix for three variables x, y, and z;
3. an (n×n) Hessian matrix for n variables x1, x2, x3, ..., xn.
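
The matrix entries follow the standard pattern; as a reference sketch, the (2×2) case and the general (n×n) entry are:

$$H(x, y) = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x^2} & \dfrac{\partial^2 f}{\partial x \, \partial y} \\ \dfrac{\partial^2 f}{\partial y \, \partial x} & \dfrac{\partial^2 f}{\partial y^2} \end{bmatrix}, \qquad H_n = \left[ \dfrac{\partial^2 f}{\partial x_i \, \partial x_j} \right]_{i,j = 1, \dots, n}$$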


Unconstrained Optimization- Multi Variable
• Positive versus negative definite Hessian matrix:
• Hn is a positive (+ve) definite matrix when all the leading sub determinants H1, H2, H3, ..., Hn are positive:

H1 (1×1, odd): +ve | H2 (2×2, even): +ve | H3 (3×3, odd): +ve | H4 (4×4, even): +ve | ...etc.

• Hn is a negative (-ve) definite matrix when the leading sub determinants alternate in sign starting from a negative value, i.e. the odd ones (H1, H3, H5, H7, ...) are negative and the even ones (H2, H4, ...) are positive:

H1 (1×1, odd): -ve | H2 (2×2, even): +ve | H3 (3×3, odd): -ve | H4 (4×4, even): +ve | ...etc.

Here the kth leading sub determinant = kth leading principal minor = the determinant of the (k×k) submatrix in the upper-left corner of Hn:
1st leading sub determinant (odd) = the (1×1) sub determinant;
2nd leading sub determinant (even) = the (2×2) sub determinant;
3rd leading sub determinant (odd) = the (3×3) sub determinant.
Unconstrained Optimization- Multi Variable
• Indefinite Hessian matrix:
An indefinite Hessian is neither positive nor negative definite, i.e. its leading sub determinants follow neither the all-positive sequence (+, +, +, ...) of a positive definite matrix nor the alternating sequence (-, +, -, +, ...) of a negative definite matrix. Hn is an indefinite matrix when we notice any of the following:

1. Two or more subsequent sub determinants are negative.
For example, Hn is indefinite when H1 and H2 are negative (-ve, -ve, +ve, ...), when H1, H2, and H3 are all negative (-ve, -ve, -ve, ...), or when H2 and H3 are negative (+ve, -ve, -ve, ...).
2. Any of the even sub determinants (H2, H4, ...) is negative.
For example, (+ve, -ve, +ve, ...).
3. At least two subsequent sub determinants are positive within a negative definite (alternating) sequence.
For example, Hn is indefinite when H1 and H2 are positive while H3 is negative (+ve, +ve, -ve, ...), or when H2 and H3 are positive while H1 is negative (-ve, +ve, +ve, ...).

As before, H1 is the (1×1) sub determinant (odd), H2 the (2×2) sub determinant (even), and H3 the (3×3) sub determinant (odd).
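
The following Python sketch (using numpy) applies these leading-sub-determinant sign tests; the function name and the example matrix are assumptions chosen for illustration, not taken from this document.

```python
import numpy as np

def classify_hessian(H):
    """Classify a symmetric Hessian by its leading sub determinants H1..Hn."""
    n = H.shape[0]
    minors = [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]  # H1, H2, ..., Hn
    if all(m > 0 for m in minors):
        return 'positive definite -> relative/local minimum'
    if all((m < 0 if k % 2 == 1 else m > 0) for k, m in enumerate(minors, start=1)):
        return 'negative definite -> relative/local maximum'
    # Neither sign pattern holds (a zero minor would make the test inconclusive).
    return 'indefinite (or semidefinite) -> saddle point or inconclusive'

# Assumed example: Hessian of f = x1^2 - x2^2, a classic saddle.
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
print(classify_hessian(H))  # indefinite: H1 = 2 > 0 but H2 = -4 < 0
```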
Unconstrained Optimization- Multi Variable
• Example 1: [worked solution not preserved in this copy]
[Figure: a 3D graph showing the objective function of Example 1]

Note: some reference textbooks name the Hessian matrix J, while others name it H.
• Example 2: Find the extreme points of the function.
• Example 3:
• Example 4:
[The statements and worked solutions of Examples 2-4 appeared as images and are not preserved in this copy.]
Constrained Optimization- Multi Variable
(The method of Lagrange multipliers LM with equality constraint)
• Problem statement:

Minimize (or maximize) f(x1, x2) subject to the equality constraint g(x1, x2) = 0.

This can be reformulated as the unconstrained problem of finding the stationary points of

$$L(x_1, x_2, \lambda) = f(x_1, x_2) + \lambda \, g(x_1, x_2)$$

where L(x1, x2, λ) is the Lagrange function, and λ is the Lagrange multiplier.

λ indicates the change in the optimal value of the objective function per unit change in the right-hand side of the equality constraint.
Constrained Optimization- Multi Variable
(The method of Lagrange multipliers LM with equality constraint)
• Necessary Condition:

$$\frac{\partial L}{\partial x_1} = \frac{\partial f}{\partial x_1} + \lambda \frac{\partial g}{\partial x_1} = 0, \qquad \frac{\partial L}{\partial x_2} = \frac{\partial f}{\partial x_2} + \lambda \frac{\partial g}{\partial x_2} = 0, \qquad \frac{\partial L}{\partial \lambda} = g(x_1, x_2) = 0$$
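
A minimal sympy sketch of these conditions, with an assumed objective and constraint (not this document's Example 1):

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
f = x1 * x2                # assumed objective
g = x1 + x2 - 10           # assumed constraint g(x1, x2) = 0
L = f + lam * g            # Lagrange function

# Necessary conditions: all partial derivatives of L vanish.
eqs = [sp.diff(L, v) for v in (x1, x2, lam)]
print(sp.solve(eqs, [x1, x2, lam], dict=True))  # [{x1: 5, x2: 5, lam: -5}]
```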
Constrained Optimization- Multi Variable
(The method of Lagrange multipliers LM with equality constraint)
• Example 1: [worked solution not preserved in this copy]
Constrained Optimization- Multi Variable
(The method of Lagrange multipliers LM with equality constraint)
• Sufficient Condition:

Evaluate z. In the standard two-variable, one-constraint form of this test, z is the root of the determinantal equation

$$\begin{vmatrix} L_{11} - z & L_{12} & g_1 \\ L_{21} & L_{22} - z & g_2 \\ g_1 & g_2 & 0 \end{vmatrix} = 0$$

where $L_{ij} = \partial^2 L / \partial x_i \, \partial x_j$ and $g_i = \partial g / \partial x_i$, all evaluated at the stationary point (x1*, x2*, λ*). Then:
• If z > 0, the point is a minimum (Minima).
• If z < 0, the point is a maximum (Maxima).
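
Continuing the assumed objective and constraint from the sketch above (f = x1*x2, g = x1 + x2 - 10), a sympy check of this test:

```python
import sympy as sp

x1, x2, lam, z = sp.symbols('x1 x2 lam z')
f = x1 * x2
g = x1 + x2 - 10
L = f + lam * g

# Bordered determinantal equation at the stationary point (5, 5, -5).
point = {x1: 5, x2: 5, lam: -5}
L11, L12, L22 = (sp.diff(L, a, b).subs(point) for (a, b) in [(x1, x1), (x1, x2), (x2, x2)])
g1, g2 = sp.diff(g, x1).subs(point), sp.diff(g, x2).subs(point)

M = sp.Matrix([[L11 - z, L12, g1],
               [L12, L22 - z, g2],
               [g1, g2, 0]])
print(sp.solve(sp.Eq(M.det(), 0), z))  # [-1]: z < 0, so (5, 5) is a maximum
```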
Constrained Optimization- Multi Variable
(The method of Lagrange multipliers LM with equality constraint)
• Example 2: [worked solution not preserved in this copy]
Applying the sufficiency condition to test the maxima of f(x1, x2): z is negative, thus the point (x1*, x2*) corresponds to the maximum of f(x1, x2).
Constrained Optimization- Multi Variable
(The method of Lagrange multipliers LM with equality constraint)
• Example 3: [worked solution, including the sufficiency condition check, not preserved in this copy]
Constrained Optimization- Multi Variable
(The method of Lagrange multipliers LM with inequality constraint)
• Problem statement and reformulation:

Minimize f(X) subject to the inequality constraint g(X) ≤ 0. The inequality is converted into an equality by adding a slack variable s, written as s² so that it is automatically nonnegative:

g(X) + s² = 0

The method of Lagrange multipliers is then applied to the resulting equality-constrained problem, with L(X, s, λ) = f(X) + λ (g(X) + s²).
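
A minimal sympy sketch of this slack-variable reformulation, with an assumed objective and inequality constraint (not one of this document's examples):

```python
import sympy as sp

x, s, lam = sp.symbols('x s lam')
f = (x - 3)**2             # assumed objective
g = x - 2                  # assumed constraint g(x) = x - 2 <= 0
L = f + lam * (g + s**2)   # slack variable s^2 turns g <= 0 into an equality

# Necessary conditions: all partial derivatives of L vanish.
eqs = [sp.diff(L, v) for v in (x, s, lam)]
for sol in sp.solve(eqs, [x, s, lam], dict=True):
    # Discard branches with an imaginary slack (the unconstrained minimum
    # at x = 3 is infeasible, so that branch gives s^2 < 0).
    if all(v.is_real for v in sol.values()):
        print(sol)  # {x: 2, s: 0, lam: 2}: the minimum sits on the boundary x = 2
```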
Constrained Optimization- Multi Variable
(The method of Lagrange multipliers LM with inequality constraint)
• Example 1: [worked solution not preserved in this copy]
• Example 2: [worked solution not preserved in this copy]