
EC-MDPE12
Program Elective: Optimization Techniques in Design

Design for Optimization

1. What is optimization?
2. What optimization tools are used to optimize a design?
Defining Optimum Design

 In principle, an optimum design is the best or most suitable of all the feasible conceptual designs (the best possible result under the given circumstances).

 Optimization is the process of maximizing a desired quantity or minimizing an undesired one.

 The primary objective may not be to optimize absolutely but to compromise effectively and thereby produce the best formulation under a given set of restrictions.
Why is optimization necessary?

 For example, optimization is often used by mechanical engineers to achieve either a minimum manufacturing cost or a maximum component life.

 Aerospace engineers may wish to minimize the overall weight of an aircraft.

 Production engineers would like to design optimum schedules for various machining operations to minimize the idle time of machines and the overall job completion time, and so on.
Historical development

 Isaac Newton (1642-1727): Development of differential calculus methods of optimization.

 Joseph-Louis Lagrange (1736-1813): Calculus of variations, minimization of functionals, method of optimization for constrained problems.

 Augustin-Louis Cauchy (1789-1857): Solution by direct substitution, steepest descent method for unconstrained optimization.

 George Bernard Dantzig (1914-2005): Linear programming and the Simplex method (1947).

 Albert William Tucker (1905-1995): Necessary and sufficient conditions for the optimal solution of programming problems, nonlinear programming.
OPTIMIZATION PARAMETERS

Objective function

An objective function expresses the main aim of the model, which is either to be minimized or maximized.

For example, in a manufacturing process, the aim may be to maximize the profit or minimize the cost.

The two exceptions are:
• No objective function
• Multiple objective functions
Variables

A set of unknowns or variables controls the value of the objective function.

Variables can be broadly classified as:
• Independent variables
• Dependent variables
Design variables:

 The formulation of an optimization problem begins with identifying the underlying design variables, which are primarily varied during the optimization process.
 A design problem usually involves many design parameters, of which some are highly sensitive to the proper working of the design.
 These parameters are called design variables in optimization procedures.
 The first rule of thumb in formulating an optimization problem is to choose as few design variables as possible.
Design Constraints

The restrictions that must be satisfied to produce an acceptable design are collectively called design constraints.

Constraints can be broadly classified as:
• Behavioral or Functional
• Geometric
Tools for Design Optimization

 No single optimization method is available for solving all optimization problems in a uniquely efficient manner.

 Several optimization methods have been developed to date for solving different types of optimization problems.
Optimization Design
 The optimization methods are generally classified into groups such as:

(a) single variable optimization

(b) multi-variable optimization

(c) constrained optimization

(d) specialized optimization, and

(e) non-traditional optimization.


Advantages of optimization

 Optimization is a design tool that assists designers in automatically identifying the optimal design from a number of possible options, or even from an infinite set of options.

 Optimization design is increasingly applied in industry, since it provides engineers with a cheap and flexible means to identify optimal designs before physical deployment.

 Optimization capabilities have also been increasingly integrated with CAD/CAM/CAE software such as Adams, Nastran, and OptiStruct.
 Even in our daily life, we are constantly optimizing our goals (objectives) within the limit of our resources. For example, we may minimize our expenditure or maximize our savings while maintaining a certain standard of living.

 When shopping for a car, we may try to meet our preferences (performance of the car, safety, fuel economy, etc.) maximally, on the condition that the price does not exceed what we can afford. It is the same in engineering design, where we optimize the performance of the product while meeting all the design requirements.
The general process of optimization design is shown in the figure below.
 The first step of optimization design is to create an optimization model in mathematical form. This step is called optimization modeling.

 In this step, several decisions are to be made, such as what will be optimized, what design variables will be changed to produce an optimal design, and what requirements should be met.

 Modeling is the most important step in optimization design, and designers may spend a significant portion of their time on modeling during the optimization process. The second step is solving the optimization model.

 Three methods are usually used: the analytical method, the graphical method, and the numerical method.

 The last step is the posterior analysis. In this step, designers perform some analyses on the optimal solution.
Engineering Optimization Problems

• The majority of optimization problems arise from design and manufacturing. Since designs have to work efficiently, have to be cost-effective, have to have the least weight in certain applications, and have to satisfy many other criteria, it makes perfect sense to apply an optimization algorithm to find such a solution.

• Similarly, a manufacturing process involves various parameters and operations that must be optimized to produce a competitive product or to achieve a quality product. Hence, the use of an optimization procedure is essential in most manufacturing problem-solving tasks.
Optimal design of a truss structure

• Consider the seven-bar truss structure shown in the figure, along with its loading. The length of the members AC = CE = ℓ = 1 m.
• The connectivity of the truss is given; the cross-sectional areas and the material properties of the members are the design parameters.
• Let us choose the cross-sectional areas of the members as the design variables for this problem.
• There are seven design variables, each specifying the cross-section of a member (A1 to A7). Using the symmetry of the truss structure and loading, we observe that for the optimal solution, A7 = A1, A6 = A2, and A5 = A3. Thus, there are practically four design variables (A1 to A4). This completes the first task of the optimization procedure.
In the following, we present the above truss structure problem in NLP form.
• This shows the formulation of the truss structure problem (Deb et al., 2000). The seven-bar truss shown in Figure 1.3 is statically determinate, and the axial forces, stresses, and deflections can be computed exactly. Where the truss is statically indeterminate or large (for hand calculations), exact computation of stresses and deflections may not be possible.

• Finite element software may be necessary to compute the stress and deflection in any member and at any point in the truss. Although similar constraints can then be formulated with the simulated stresses and deflections, the optimization algorithm that may be used to solve the above seven-bar truss problem may not be efficient in solving the resulting NLP problem for statically indeterminate or large truss problems.

• The difficulty arises due to the inability to compute the gradients of the constraints.
Classical optimization techniques
 The classical optimization techniques are useful for obtaining the optimal solution of problems involving continuous and differentiable functions. These techniques are analytical in nature and yield the maximum and minimum points of unconstrained and constrained continuous objective functions.
Single-variable Optimization Algorithms
Non-constrained optimization
 Single-variable functions involve only one variable.

We solve minimization problems of the following type:

Minimize f(x),

where f(x) is the objective function and x is a real variable.

 The purpose of an optimization algorithm is to find a solution x for which the function f(x) is minimum.
 Two distinct types of algorithms are presented here:
 Direct search methods use only objective function values to locate the minimum point, and
 Gradient-based methods use the first and/or second-order derivatives of the objective function to locate the minimum point.
Optimality Criteria

 This section presents the necessary and sufficient conditions for optimality.

Before presenting the optimality conditions for a point, we define three different types of optimal points.

(i) Local optimal point:

A point or solution x∗ is said to be a local optimal point if there exists no point in the neighbourhood of x∗ which is better than x∗. In minimization problems, a point x∗ is a locally minimal point if no point in the neighbourhood has a function value smaller than f(x∗).
(ii) Global optimal point:

A point or solution x∗∗ is said to be a global optimal point, if there exists no point
in the entire search space which is better than the point x∗∗. Similarly, a point x∗∗
is a global minimal point if no point in the entire search space has a function value
smaller than f(x∗∗).

(iii) Inflection point:

A point x∗ is said to be an inflection point if the function value increases locally as x∗ increases and decreases locally as x∗ decreases, or if the function value decreases locally as x∗ increases and increases locally as x∗ decreases.

Certain characteristics of the objective function can be used to check whether a point is a local minimum, a global minimum, or an inflection point.
• Assuming that the first and second-order derivatives of the objective function f(x) exist in the chosen search space,

• we may expand the function in a Taylor series at any point x and impose the condition that any other point in the neighbourhood has a larger function value.

• It can then be shown that the conditions for a point x to be a minimum point are that f′(x) = 0 and f′′(x) > 0, where f′ and f′′ denote the first and second derivatives of the function.

• The first condition alone only suggests that the point is either a minimum, a maximum, or an inflection point.
In general, the sufficient conditions for optimality are given as follows:

Suppose that at a point x∗ the first derivative is zero, and let n denote the order of the first nonzero higher-order derivative; then

• If n is odd, x∗ is an inflection point.
• If n is even, x∗ is a local optimum.
Consider two simple mathematical functions to illustrate a minimum and an inflection point.

Consider the optimality of the point x = 0 in the function f(x) = x³. The function is shown in the figure below. It is clear from the figure that the point x = 0 is an inflection point, since the function value increases for x ≥ 0 and decreases for x ≤ 0 in a small neighbourhood of x = 0.

We may use the sufficient conditions for optimality to demonstrate this fact. First of all, the first derivative of the function at x = 0 is

f′(0) = 3x²|x=0 = 0.

Searching for a nonzero higher-order derivative, we observe that

f′′(0) = 6x|x=0 = 0 and
f′′′(0) = 6 (a nonzero number).

Thus, the first nonzero derivative occurs at n = 3, and since n is odd, the point x = 0 is an inflection point.
Consider the optimality of the point x = 0 in the function f(x) = x⁴. A plot of this function is shown in the figure below. The point x = 0 is a minimal point, as can be seen from the figure. Since f′(0) = 4x³|x=0 = 0, we calculate higher-order derivatives in search of a nonzero derivative at x = 0:

f′′(0) = 0, f′′′(0) = 0, and f′′′′(0) = 24.

Since the value of the fourth-order derivative is positive and n = 4 is an even number, the point x = 0 is a local minimum point.
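The odd/even rule applied in the two worked examples above can be sketched in Python. The helper name classify_stationary_point and the hand-supplied derivative lists are illustrative, not part of the slides:

```python
def classify_stationary_point(derivatives, x, tol=1e-9):
    """Apply the sufficient conditions for optimality at a point x.

    `derivatives` is a list [f', f'', f''', ...] of derivative functions.
    Let n be the order of the first nonzero derivative at x:
    n odd  -> inflection point;
    n even -> local minimum if that derivative is positive, else local maximum.
    """
    if abs(derivatives[0](x)) > tol:
        return "not a stationary point"   # the first derivative must vanish
    for n, d in enumerate(derivatives, start=1):
        value = d(x)
        if abs(value) > tol:
            if n % 2 == 1:
                return "inflection point"
            return "local minimum" if value > 0 else "local maximum"
    return "inconclusive"

# f(x) = x^3: derivatives 3x^2, 6x, 6 -> first nonzero at n = 3 (odd)
print(classify_stationary_point(
    [lambda x: 3*x**2, lambda x: 6*x, lambda x: 6], 0.0))          # inflection point

# f(x) = x^4: derivatives 4x^3, 12x^2, 24x, 24 -> first nonzero at n = 4 (even, positive)
print(classify_stationary_point(
    [lambda x: 4*x**3, lambda x: 12*x**2, lambda x: 24*x, lambda x: 24], 0.0))  # local minimum
```

The derivatives are supplied analytically here because numerical estimates of high-order derivatives are too noisy for a reliable zero test.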
1. Bracketing Methods

 The minimum of a function is found in two phases. First, a technique is used to find a lower and an upper bound on the minimum. Thereafter, a more sophisticated method is used to search within these limits and find the optimal solution to the desired accuracy.

i. Exhaustive Search Method
ii. Bounding Phase Method
1.1 Exhaustive Search Method
 In the exhaustive search method, the optimum of a function is bracketed by calculating the function values at a number of equally spaced points (see figure).
 Usually, the search begins from a lower bound on the variable and three
consecutive function values are compared at a time based on the
assumption of unimodality of the function.
 Based on the outcome of comparison, the search is either terminated or
continued by replacing one of the three points by a new point. The search
continues until the minimum is bracketed.

Fig. The exhaustive search method, which uses equally spaced points.


EXERCISE
Consider the problem:
Minimize f(x) = x² + 54/x in the interval (0, 5). A plot of the function is shown in the figure; it shows that the minimum lies at x∗ = 3.
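A minimal sketch of the exhaustive search method applied to this exercise. The function name exhaustive_search and the choice of n = 10 intervals are illustrative assumptions:

```python
def exhaustive_search(f, a, b, n):
    """Bracket the minimum of a unimodal f on (a, b) by stepping through
    n equally spaced intervals, comparing three consecutive values at a time."""
    dx = (b - a) / n
    x1, x2, x3 = a, a + dx, a + 2 * dx
    while x3 <= b:
        if f(x1) >= f(x2) <= f(x3):   # middle point is the smallest of the three
            return (x1, x3)            # minimum bracketed in (x1, x3)
        x1, x2, x3 = x2, x3, x3 + dx   # shift the three-point window rightward
    return None                        # no interior minimum found on (a, b)

# Exercise: minimize f(x) = x^2 + 54/x on (0, 5); f(x) -> infinity as x -> 0+
f = lambda x: x * x + 54 / x if x > 0 else float("inf")
print(exhaustive_search(f, 0.0, 5.0, 10))   # (2.5, 3.5), which contains x* = 3
```

With n = 10 the step is Δ = 0.5, and the first triple satisfying f(x1) ≥ f(x2) ≤ f(x3) is (2.5, 3.0, 3.5), so the minimum at x∗ = 3 is bracketed by (2.5, 3.5).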
1.2 Bounding Phase Method

• The bounding phase method is used to bracket the minimum of a function.
• This method is guaranteed to bracket the minimum of a unimodal function.
• The algorithm begins with an initial guess and finds a search direction based on two more function evaluations in the vicinity of the initial guess.
• Thereafter, an exponential search strategy is adopted to reach the optimum. In the following algorithm, an exponent of two is used.
• If the chosen Δ is large, the bracketing accuracy of the minimum point is poor, but the bracketing of the minimum is faster.
• On the other hand, if the chosen Δ is small, the bracketing accuracy is better, but more function evaluations may be necessary to bracket the minimum.
• This method of bracketing the optimum is usually faster than the exhaustive search method discussed in the previous section.
Comparison between bracketing methods

• The bounding phase algorithm approaches the optimum exponentially, but the accuracy of the obtained interval may not be very good,
• whereas in the exhaustive search method the number of iterations required to get near the optimum may be large, but the obtained accuracy is good.
• An algorithm with a mixed strategy may be more desirable.
With Δ = 0.5, the obtained bracketing is poor, but the number of function
evaluations required is only 7. It is found that with x(0) = 0.6 and Δ =
0.001, the obtained interval is (1.623, 4.695), and the number of function
evaluations is 15.
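The bounding phase method can be sketched as follows. This is an illustrative implementation of the algorithm as described; the straightforward evaluation counting (which re-evaluates the point x(0) + Δ on the first expansion step) reproduces the evaluation count quoted for Δ = 0.001:

```python
def bounding_phase(f, x0, delta):
    """Bracket the minimum of a unimodal f starting from the guess x0.

    Two extra evaluations around x0 fix the search direction; the step is
    then doubled each iteration: x(k+1) = x(k) + 2**k * delta.
    Returns (lower, upper, number_of_function_evaluations)."""
    d = abs(delta)
    f_minus, f_zero, f_plus = f(x0 - d), f(x0), f(x0 + d)
    evals = 3
    if f_minus >= f_zero >= f_plus:
        step = d                         # function decreases to the right
    elif f_minus <= f_zero <= f_plus:
        step = -d                        # function decreases to the left
    else:
        return (x0 - d, x0 + d, evals)   # x0 already straddles the minimum
    k = 0
    x_prev, x_curr, f_curr = x0 - step, x0, f_zero
    while True:
        x_next = x_curr + (2 ** k) * step
        f_next = f(x_next)               # at k = 0 this re-evaluates f(x0 + step),
        evals += 1                       # matching the slide's evaluation count
        if f_next >= f_curr:             # function has started rising: bracketed
            lo, hi = sorted((x_prev, x_next))
            return (lo, hi, evals)
        x_prev, x_curr, f_curr = x_curr, x_next, f_next
        k += 1

f = lambda x: x * x + 54 / x
print(bounding_phase(f, 0.6, 0.001))   # interval near (1.623, 4.695), 15 evaluations
```

Starting from x(0) = 0.6 with Δ = 0.001, the iterates are 0.601, 0.603, 0.607, ..., 1.623, 2.647, 4.695; the function first rises at 4.695, so the bracket is (1.623, 4.695) after 15 evaluations, as stated above.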
2. Region-Elimination Methods

• Here we describe algorithms that work primarily on the principle of region elimination and require comparatively few function evaluations.
• They depend on the function values evaluated at two points, and assume that the function is unimodal in the chosen space.
• The fundamental rule for region-elimination methods is as follows:
• Consider a unimodal function as drawn in Figure 2.5. If the function value at x1 is larger than that at x2, the minimum point x∗ cannot lie on the left side of x1. Thus, we can eliminate the region (a, x1) from further consideration, reducing our interval of interest from (a, b) to (x1, b).
• Similarly, the second possibility (f(x1) < f(x2)) can be explained. If the third situation occurs, that is, when f(x1) = f(x2) (a rare situation, especially when numerical computations are performed),
• we can conclude that the regions (a, x1) and (x2, b) can be eliminated, under the assumption that there exists only one local minimum in the search space (a, b).
2.1 Interval Halving Method

• In this method, function values at three different points are considered. The three points divide the search space into four regions.
• The fundamental region-elimination rule is used to eliminate a portion of the search space based on the function values at the three chosen points.
• The three points chosen in the interval (a, b) are equidistant from each other and from the boundaries.
• The figure shows these three points in the interval. Two of the function values are compared at a time, and some region is eliminated.
• There are three scenarios that may occur. If f(x1) < f(xm), then the minimum cannot lie beyond xm. Therefore, we reduce the interval from (a, b) to (a, xm).
• The point xm being the middle of the search space, this elimination reduces the search space to 50 per cent of the original.
• On the other hand, if f(x1) > f(xm), the minimum cannot lie in the interval (a, x1). The point x1 being at the one-fourth point of the search space, this reduction is only 25 per cent. Thereafter, we compare the function values at xm and x2 to eliminate a further 25 per cent of the search space.
• This process continues until a small enough interval is found. Since in each iteration of the algorithm exactly half of the search space is retained, the algorithm is called the interval halving method.
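A sketch of the interval halving loop on the running example f(x) = x² + 54/x; the function name and the termination tolerance are illustrative. Note that in every case the surviving point becomes the midpoint of the new, halved interval, so only two new evaluations are needed per iteration:

```python
def interval_halving(f, a, b, eps=1e-3):
    """Interval halving: evaluate the midpoint and the two quarter points of
    (a, b), eliminate half of the interval per iteration by the fundamental
    region-elimination rule, and stop when the interval is shorter than eps."""
    xm, fm = (a + b) / 2, f((a + b) / 2)
    while (b - a) > eps:
        L = b - a
        x1, x2 = a + L / 4, b - L / 4    # quarter points of the current interval
        f1, f2 = f(x1), f(x2)
        if f1 < fm:                      # minimum lies in (a, xm); x1 is its midpoint
            b, xm, fm = xm, x1, f1
        elif f2 < fm:                    # minimum lies in (xm, b); x2 is its midpoint
            a, xm, fm = xm, x2, f2
        else:                            # minimum lies in (x1, x2); xm stays midpoint
            a, b = x1, x2
    return xm

f = lambda x: x * x + 54 / x
print(round(interval_halving(f, 0.0, 5.0), 3))   # close to the true minimum x* = 3
```

On (0, 5) the first iteration keeps the middle half (1.25, 3.75), and the interval keeps halving around x∗ = 3 until it is shorter than eps.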
2.2 Fibonacci Search Method
