Non-Linear Programming problems in Engineering Optimization course

The document provides an overview of unconstrained and constrained non-linear optimization methods, focusing on techniques for minimizing functions without constraints. It details various algorithms, including direct search and gradient-based methods, and explains the iterative nature of these optimization processes. Additionally, it introduces constrained optimization, outlining methods such as sequential linear programming and penalty function approaches.


BMEE215L Engineering Optimization

Module-6: Unconstrained and constrained non-linear optimization

Lecture: Unconstrained Non-Linear Optimization

Dr. Siva Prasad Darla


M.Tech., Ph.D.

School of Mechanical Engineering


VIT, Vellore

Introduction to unconstrained NLP

Find X = [x1, x2, ..., xn]^T which minimizes f(X)
There are various methods for solving the unconstrained minimization problem.
Search methods are available that reach the optimum solution without using derivatives of the function, unlike the classical optimization methods.



Minimization Methods/Algorithms for Unconstrained NLP
Two distinct types of algorithms.
• Direct search methods use only objective function values to locate the
minimum point, and
• Gradient-based methods use the first and/or the second-order derivatives of
the objective function to locate the minimum point.


Minimization Methods/Algorithms for Unconstrained NLP

Direct search methods
• Random search method
• Grid search method
• Univariate method
• Pattern search methods
• Powell's method
• Hooke-Jeeves method
• Rosenbrock's method
• Simplex method

Indirect search (descent) methods
• Steepest descent (Cauchy) method
• Fletcher-Reeves method
• Newton's method
• Marquardt method
• Quasi-Newton methods
• Davidon-Fletcher-Powell method
• Broyden-Fletcher-Goldfarb-Shanno method



Unconstrained NLP- General approach for search method
General approach for search method
All unconstrained minimization methods are iterative in nature and hence they start from an
initial trial solution and proceed toward the minimum point in a sequential manner.
Xi+1 = Xi + λ* Si
where Xi is the starting point,
Si is the search direction (various methods exist to generate it),
λ* is the optimal step length,
and Xi+1 is the final point in iteration i.

Note: all the unconstrained minimization methods
• require an initial point, and
• differ from one another only in the method of generating the new point Xi+1 and in testing it for optimality.
A generic sketch of this iterative loop is given below.
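As a rough illustration (not part of the lecture), the generic loop can be sketched in Python; `search_direction` and `optimal_step_length` are hypothetical helpers standing in for whichever specific method is chosen.

```python
import numpy as np

def minimize_iteratively(f, x0, search_direction, optimal_step_length,
                         tol=1e-6, max_iter=100):
    """Generic loop X_{i+1} = X_i + lambda* S_i for unconstrained minimization.

    `search_direction(f, x)` and `optimal_step_length(f, x, s)` are assumed,
    method-specific helpers (e.g. a coordinate direction or -grad f, plus a
    one-dimensional search for lambda*).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = search_direction(f, x)            # S_i: depends on the chosen method
        lam = optimal_step_length(f, x, s)    # lambda*: one-dimensional search
        x_new = x + lam * s                   # X_{i+1} = X_i + lambda* S_i
        if abs(f(x_new) - f(x)) <= tol:       # simple optimality test
            return x_new
        x = x_new
    return x
```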


Unconstrained NLP- General approach for search method

Iterative process of optimization



Unconstrained NLP- General approach for search method
A sequence of improved approximations to the optimum according to the following
scheme:


Unconstrained NLP: One dimensional search methods


For fixed values of Xi and Si, f(Xi+1) = f(Xi + λSi) = f(λ) reduces to the minimization of f(λ), i.e. a single-variable problem: f(λ) is a function of the one variable λ only.
So, the methods of finding the step length λ in the solution procedure of the given problem are called single-variable minimization methods.

Several methods are available for solving a one-dimensional (i.e. single-variable) minimization problem.



Unconstrained NLP: One dimensional search methods
Various Single Variable Search Methods

Any suitable one-dimensional search method can be used to find the optimal step length λ* when computing Xi+1 = Xi + λ*Si for the given problem; one common choice, the golden-section search, is sketched below.
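A minimal sketch of such a one-dimensional search, assuming the minimum of f(λ) lies in an initial bracket [0, b] supplied by the user:

```python
import numpy as np

def golden_section_step_length(f, x, s, b=1.0, tol=1e-6):
    """Minimize phi(lam) = f(x + lam*s) over the assumed bracket [0, b]."""
    gr = (np.sqrt(5.0) - 1.0) / 2.0            # golden-section ratio ~ 0.618
    a, c = 0.0, b
    lam1 = c - gr * (c - a)
    lam2 = a + gr * (c - a)
    while c - a > tol:
        if f(x + lam1 * s) < f(x + lam2 * s):  # minimum lies in [a, lam2]
            c, lam2 = lam2, lam1
            lam1 = c - gr * (c - a)
        else:                                  # minimum lies in [lam1, c]
            a, lam1 = lam1, lam2
            lam2 = a + gr * (c - a)
    return 0.5 * (a + c)
```

For example, with f(X) = (x1 − 3)², X = [0] and S = [1], the routine returns λ* ≈ 3.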

Unconstrained NLP: Methods


Recall the earlier classification of methods:

Direct search methods
• Random search method
• Grid search method
• Univariate method
• Pattern search methods
• Powell's method
• Hooke-Jeeves method
• Rosenbrock's method
• Simplex method

Indirect search (descent) methods
• Steepest descent (Cauchy) method
• Fletcher-Reeves method
• Newton's method
• Marquardt method
• Quasi-Newton methods
• Davidon-Fletcher-Powell method
• Broyden-Fletcher-Goldfarb-Shanno method



Unconstrained NLP: Direct Search Methods
• Univariate Method
In this method, we change only one variable at a time and seek to produce a sequence of improved
approximations to the minimum point.

By starting at a base point Xi in the ith iteration, we fix the values of n-1 variables and vary the remaining
variable.

Since only one variable is changed, the problem becomes a one-dimensional minimization problem and any of the single-variable algorithms can be used to produce a new base point Xi+1.
The search procedure is continued in a new direction.
In fact, the search procedure is continued by taking each coordinate direction in turn.

After all the n directions are searched sequentially, the first cycle is complete and hence we repeat the entire
process of sequential minimization.

The procedure is continued until no further improvement is possible in the objective function in any of the n
directions of a cycle.

Unconstrained NLP: Direct Search Methods


Univariate Method
1. Choose an arbitrary starting point X1 and set i=1
2. Find the search direction Si along one of the coordinate directions (a unit vector ±ei along the coordinate being varied in the current iteration).

For the current direction Si, this means finding whether the function value decreases in the positive or negative direction.
For this, we take a small probe length (ε) in either direction and evaluate f(Xi + εSi) and f(Xi − εSi).



Unconstrained NLP: Direct Search Methods-Univariate Method
For this, we take a small probe length (ε) in either direction and evaluate f(Xi + εSi) and f(Xi − εSi) to decide the sign of Si:


Unconstrained NLP: Direct Search Methods-Univariate Method


3. Find the optimal step length λi* such that f(Xi + λi*Si) is minimized along Si.

4. Set Xi+1 = Xi + λi*Si and find f(Xi+1).

5. Set the new value of i = i + 1, and go to step 2.


Continue this procedure until no significant change is achieved in the value of the
objective function.
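A minimal sketch of this procedure, assuming a probe length ε and a one-dimensional line-search helper such as the golden-section routine sketched earlier:

```python
import numpy as np

def univariate_search(f, x0, line_search, eps=1e-4, tol=1e-8, max_cycles=100):
    """Univariate method: vary one coordinate at a time along +/- e_i."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    for _ in range(max_cycles):
        f_start = f(x)
        for i in range(n):                    # one cycle over the n coordinates
            e = np.zeros(n)
            e[i] = 1.0
            if f(x + eps * e) < f(x):         # probe the positive direction
                s = e
            elif f(x - eps * e) < f(x):       # probe the negative direction
                s = -e
            else:
                continue                      # no improvement along x_i
            lam = line_search(f, x, s)        # optimal step length along S_i
            x = x + lam * s
        if abs(f_start - f(x)) <= tol:        # no significant change in a cycle
            break
    return x
```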



Univariate Method-Example


Unconstrained NLP: Indirect (or Descent) Search Methods


Gradient of a Function
The gradient of a function f(X) is the n-component vector
∇f = [∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn]^T

If we move along the gradient direction from any


point in n-dimensional space, the function value
increases at the fastest rate. Hence the gradient
direction is called the direction of steepest ascent.
Since the gradient vector represents the direction of steepest ascent, the negative of the
gradient vector denotes the direction of steepest descent.
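When analytical derivatives are not available, the gradient can be approximated numerically; the forward-difference sketch below is an added illustration, not part of the lecture.

```python
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    """Forward-difference approximation of the n-component gradient of f at x."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    fx = f(x)
    for i in range(len(x)):
        xh = x.copy()
        xh[i] += h                 # perturb one coordinate at a time
        grad[i] = (f(xh) - fx) / h
    return grad                    # -grad points along the steepest descent direction
```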



Unconstrained NLP: Indirect (or Descent) Search Methods
Steepest Descent (Cauchy) Method
The use of the negative of the gradient vector as a direction for minimization was first made by
Cauchy.
In this method we start from an initial trial point X1 and iteratively move along the steepest
descent directions until the optimum point is found.
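A minimal sketch of the method, assuming a gradient function and a line-search helper (e.g. the golden-section routine above) are supplied:

```python
import numpy as np

def steepest_descent(f, grad, x0, line_search, tol=1e-6, max_iter=500):
    """Cauchy's method: S_i = -grad f(X_i), then X_{i+1} = X_i + lambda* S_i."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:     # gradient nearly zero: near-optimal point
            break
        s = -g                           # steepest descent direction
        lam = line_search(f, x, s)       # optimal step length lambda*
        x = x + lam * s
    return x
```

For a well-conditioned quadratic such as f(X) = x1² + x2² the iterates reach the origin quickly; for ill-conditioned functions the method zigzags, which motivates the conjugate gradient method discussed later.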


Unconstrained NLP: Indirect (or Descent) Search Methods


Steepest Descent (Cauchy) Method



Steepest Descent (Cauchy) Method - Example


Steepest Descent (Cauchy) Method - Example



Rate of Convergence
In general, an optimization method is said to have convergence of order p if

‖Xi+1 − X*‖ / ‖Xi − X*‖^p ≤ k,   k ≥ 0, p ≥ 1

where Xi = point obtained at the end of iteration i
Xi+1 = point obtained at the end of iteration i + 1
X* = optimum point
||X|| = length or norm of the vector X = (X^T X)^(1/2)

If p = 1 and 0 ≤ k ≤ 1, the method is said to be linearly convergent (corresponds to slow convergence).


If p = 2, the method is said to be quadratically convergent (corresponds to fast convergence).


Rate of Convergence: Convergence Criteria


1. When the change in the function value in two consecutive iterations is small.

2. When the partial derivatives (components of the gradient) of f are small.

3. When the change in the design vector X in two consecutive iterations is small.
A sketch of these three checks is given below.
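A minimal sketch of the checks (the tolerances ε1, ε2, ε3 are assumed values, not from the lecture):

```python
import numpy as np

def converged(f, grad, x_prev, x_curr, eps1=1e-6, eps2=1e-6, eps3=1e-6):
    """Return True if any of the three stopping criteria above is satisfied."""
    small_f_change = abs(f(x_curr) - f(x_prev)) <= eps1 * max(abs(f(x_prev)), 1.0)
    small_gradient = np.all(np.abs(grad(x_curr)) <= eps2)                       # criterion 2
    small_x_change = np.linalg.norm(np.asarray(x_curr) - np.asarray(x_prev)) <= eps3
    return small_f_change or small_gradient or small_x_change
```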



CONJUGATE GRADIENT (FLETCHER–REEVES) METHOD
• In this method, the concept of conjugate directions is used, involving the gradient of the function.
• It is developed by modifying the steepest descent method to make it quadratically convergent.
• It combines two successive directions (i.e. the current gradient direction and the previous search direction).


CONJUGATE GRADIENT (FLETCHER–REEVES) METHOD


Steps:
Choose an initial point X1.
Step 1: Take the first search direction S1 = −∇f(X1) = −∇f1.
Step 2: Find the second point X2 as
X2 = X1 + λ1* S1
where λ1* is the optimal step length along S1.
Set i = 2 and go to the next step.
Step 3: Find ∇fi = ∇f(Xi) and set
Si = −∇fi + (‖∇fi‖² / ‖∇fi−1‖²) Si−1
Step 4: Find the next point Xi+1 as
Xi+1 = Xi + λi* Si
where λi* is the optimal step length along Si.
Step 5: Test the point Xi+1 for optimality.
If Xi+1 is optimum, stop the process. Otherwise, set i = i + 1 and go to step 3.
A sketch of these steps is given below.
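A minimal sketch (the gradient and line-search routines are assumed inputs):

```python
import numpy as np

def fletcher_reeves(f, grad, x0, line_search, tol=1e-6, max_iter=200):
    """Conjugate gradient (Fletcher-Reeves) method."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    s = -g                                    # step 1: S_1 = -grad f(X_1)
    for _ in range(max_iter):
        lam = line_search(f, x, s)            # optimal step length lambda*
        x = x + lam * s                       # steps 2 and 4: new point X_{i+1}
        g_new = grad(x)
        if np.linalg.norm(g_new) <= tol:      # step 5: optimality test
            break
        beta = (g_new @ g_new) / (g @ g)      # |grad f_i|^2 / |grad f_{i-1}|^2
        s = -g_new + beta * s                 # step 3: new conjugate direction S_i
        g = g_new
    return x
```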



FLETCHER–REEVES METHOD- Example


Thank you



BMEE215L Engineering Optimization
Module-6: Unconstrained and constrained non-linear optimization
Lecture: Constrained Non-Linear Optimization

Dr. Siva Prasad Darla


M.Tech., Ph.D.

School of Mechanical Engineering


VIT, Vellore

Introduction to constrained NLP

Find X = [x1, x2, ..., xn]^T which minimizes f(X)
subject to the constraints
gj(X) ≤ 0, j = 1, 2, ..., m
hk(X) = 0, k = 1, 2, ..., p

X ≥ 0



Characteristics of Constrained NLP


Characteristics of Constrained NLP



Characteristics of Constrained NLP


Characteristics of Constrained NLP



Methods for Constrained NLP

Direct Methods
• Random search methods
• Heuristic search methods
• Complex Method
• Objective and constraint approximation methods
• Sequential linear programming method (cutting plane method)
• Sequential quadratic programming method
• Methods of feasible directions
• Zoutendijk's method
• Rosen's gradient projection method
• Generalized reduced gradient method

Indirect Methods
• Transformation of variables technique
• Sequential unconstrained minimization techniques
• Interior penalty function method
• Exterior penalty function method
• Augmented Lagrange multiplier method


Sequential linear programming method (cutting plane method)


• In the sequential linear programming (SLP) method, the solution of the original nonlinear programming problem
is found by solving a series of linear programming (LP) problems.
• Each LP problem is generated by approximating the nonlinear objective and constraint functions using first-order
Taylor series expansions about the current design vector, Xi.
• The resulting LP problem is solved using the simplex method to find the new design vector Xi+1. If Xi+1 does not satisfy the stated convergence criteria, the problem is relinearized about the point Xi+1 and the procedure is continued until the optimum solution X* is found. A sketch of this loop is given below.
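A rough sketch under several assumptions: the gradients of f and g_j are supplied as functions, box bounds keep each LP bounded, and scipy's `linprog` (HiGHS solver) stands in for the simplex step; this version relinearizes about each new point rather than accumulating cutting planes.

```python
import numpy as np
from scipy.optimize import linprog

def slp(grad_f, gs, grad_gs, x0, bounds, tol=1e-6, max_iter=50):
    """Sequential linear programming: linearize f and g_j about X_i, solve the LP, repeat.

    gs / grad_gs: lists of constraint functions g_j(x) <= 0 and their gradients.
    bounds: [(low, high), ...] box bounds on the variables, passed to linprog.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        c = grad_f(x)                                  # linearized objective coefficients
        A = np.array([dg(x) for dg in grad_gs])        # rows: grad g_j(X_i)
        b = A @ x - np.array([g(x) for g in gs])       # from g_j(X_i) + grad g_j . (X - X_i) <= 0
        res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
        if not res.success:
            break
        x_new = np.asarray(res.x)
        if np.linalg.norm(x_new - x) <= tol:           # convergence check on the design vector
            return x_new
        x = x_new
    return x
```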



Sequential linear programming method (cutting plane method)
• Consider the following problem


Sequential linear programming method (cutting plane method)


Method:
Consider boundary conditions on x as

Starting solution, x = c

Now i = 1,
linearize the problem at x1 = c
i.e. linearize f(x) at x1, f(x) = c1 x
linearize g(x) at x1,



Sequential linear programming method (cutting plane method)
Method:
LP1:

Solve LP1; the optimal solution is x* = e.
Evaluate x* in all the given constraints; if any constraint is violated by more than a prescribed small tolerance (a considered constant ε), go to the next iteration.


Sequential linear programming method (cutting plane method)


Method:
Now, for i = 2, linearize the violated constraint at x* = e and add it to LP1 to get the new LP, LP2.
LP2:

Solve LP2; the optimal solution is x* = f.



Sequential linear programming method (cutting plane method)
Method:
LP2:

Solve LP2; the optimal solution is x* = f.
Now evaluate x* in all the given constraints; if any constraint is violated by more than the tolerance ε, go to the next iteration.
Repeat the iterations until successive solutions are sufficiently close to the optimum, then stop the process.

Penalty Function Method


The penalty function method reduces the constrained NLP problem to a sequence of unconstrained NLP problems.
Consider the inequality-constrained NLP problem: minimize f(X) subject to gj(X) ≤ 0, j = 1, 2, ..., m.

Convert it into an unconstrained NLPP by constructing a function of the form

φ(X, rk) = f(X) + rk Σj Gj[gj(X)]

where Gj is some function of the constraint gj, and
rk is a positive constant known as the penalty parameter.
If the unconstrained minimization of the φ function is repeated for a sequence of values of the penalty parameter rk (k = 1, 2, ...), the solution may be brought to converge to that of the original problem.



Penalty Function Method
Form of the penalty function for inequality constraints:

Interior Penalty Function Method:
φ(X, rk) = f(X) − rk Σj 1/gj(X)

Exterior Penalty Function Method:
φ(X, rk) = f(X) + rk Σj [max(0, gj(X))]²


Penalty Function Method

Interior Penalty Function Method


In this method, the unconstrained minima of φk all lie in the feasible region: a sequence of feasible points is generated that converges to the optimal solution of the original problem as the penalty parameter rk is varied in a particular manner (decreased sequentially).

Exterior Penalty Function Method

In this method, the unconstrained minima of φk all lie in the infeasible region: a sequence of infeasible points is generated that converges to the optimal solution from outside the feasible region as the penalty parameter rk is varied in a particular manner (increased sequentially). A sketch of this exterior variant as a sequence of unconstrained minimizations is given below.
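A minimal sketch of the exterior variant, using Gj = [max(0, gj)]² and scipy's Nelder-Mead routine as the unconstrained minimizer; the starting value r0 and the growth factor are assumed choices, not from the lecture.

```python
import numpy as np
from scipy.optimize import minimize

def exterior_penalty_sequence(f, gs, x0, r0=1.0, growth=10.0, n_stages=8):
    """Minimize phi(X, r_k) = f(X) + r_k * sum_j max(0, g_j(X))^2 for increasing r_k."""
    x = np.asarray(x0, dtype=float)
    r = r0
    for _ in range(n_stages):
        phi = lambda X, r=r: f(X) + r * sum(max(0.0, g(X)) ** 2 for g in gs)
        x = minimize(phi, x, method="Nelder-Mead").x   # unconstrained minimum of phi_k
        r *= growth                                    # increase the penalty parameter
    return x
```

An interior version would instead start from a feasible point, use a barrier term such as −rk Σj 1/gj(X), and decrease rk at each stage.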



Penalty function method
Consider the simple problem:

Now, the unconstrained minima of the φk function (interior penalty function method) are

Φk = Φ(X, rk) = α x1 + rk (β − x1)^(−1)

The convergence of the unconstrained minima of φk is illustrated in the figure:
convergence takes place as the parameter rk is decreased sequentially.
As rk → 0, the sequence of optimal solutions converges to the optimum X*.


Penalty function method


Consider the simple problem:

Now, the unconstrained minima of the φk function (exterior penalty function method) are

Φk = Φ(X, rk) = α x1 + rk [max(0, β − x1)]²

The convergence of the unconstrained minima of φk is illustrated in the figure:
convergence takes place as the parameter rk is increased sequentially.



Penalty function method- Interior
Iterative procedure:


Penalty function method



Interior Penalty Function Method - Example


Interior Penalty Function Method - Example



Interior Penalty Function Method - Example


Interior Penalty Function Method - Example



Exterior Penalty function method- Example
minimize f(X) = −x1 x2
subject to
g(X) = x1 + 2x2 − 4 ≤ 0

Unconstrained NLPP with the exterior penalty function:

φ(X, r) = −x1 x2 + r [max(0, x1 + 2x2 − 4)]²

∂φ/∂x1 = −x2 + 2r (x1 + 2x2 − 4) = 0   ...(1)  for infeasible points

∂φ/∂x2 = −x1 + 4r (x1 + 2x2 − 4) = 0   ...(2)  for infeasible points


Exterior Penalty Function Method- Example


Solving equations (1) and (2) gives x1 = 2x2, and hence

x1* = 2 / (1 − 1/(8r))

x2* = 1 / (1 − 1/(8r))

When r → ∞, the values are x1* = 2 and x2* = 1, with

f_min = −2



Exterior Penalty Function Method- Example
The convergence of this method as r is gradually increased can be seen from the table below; the blank intermediate entries in this extract can be computed as sketched after the table.

Value of r | x1* | x2* | φ_min(r) | f_min(r)
0.001      |     |     |          |
0.01       |     |     |          |
0.1        |     |     |          |
1          |     |     |          |
10         |     |     |          |
100        |     |     |          |
1000       |     |     |          |
10000      |     |     |          |
∞          |  2  |  1  |   −2     |  −2
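The blank rows can be filled from the closed-form minimizer derived above; the sketch below does this for the r values at which the stationary point exists (8r > 1). It is an added illustration, not the lecture's own worked table.

```python
def exterior_penalty_row(r):
    """Closed-form minimizer of phi(X, r) for this example (valid only when 8r > 1)."""
    x2 = 1.0 / (1.0 - 1.0 / (8.0 * r))
    x1 = 2.0 * x2
    f_min = -x1 * x2
    phi_min = f_min + r * max(0.0, x1 + 2.0 * x2 - 4.0) ** 2
    return x1, x2, phi_min, f_min

for r in [1, 10, 100, 1000, 10000]:
    x1, x2, phi_min, f_min = exterior_penalty_row(r)
    print(f"r = {r:>6}: x1* = {x1:.4f}, x2* = {x2:.4f}, "
          f"phi_min = {phi_min:.4f}, f_min = {f_min:.4f}")
```

As r grows, both φ_min(r) and f_min(r) approach −2, matching the ∞ row of the table.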

Thank you

Reference: Rao, S.S. (2019). Engineering Optimization: Theory and Practice. John Wiley & Sons.

