UNIT I: Introduction to Optimization Notes
Introduction to Optimization
Several systems can usually accomplish the same task, and some systems are better than
others. However, analysing and designing all the possibilities can be time-consuming and
costly. Usually, one type is selected on the basis of some preliminary analyses and is then
designed in detail.
To optimize means to try to bring whatever we are dealing with towards its ultimate (best)
state.
Optimization is the act of obtaining the best result under given circumstances.
Since the effort required or the benefit desired in any practical situation can be
expressed as a function of certain decision variables, optimization can be defined as
the process of finding the conditions that give the maximum or minimum value of a
function.
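As a minimal sketch of this definition (the quadratic function f(x) = (x - 3)^2 + 1 and the
starting point are assumptions made for the illustration, not part of the notes), the following
Python code finds the value of a single decision variable that minimizes the function:

```python
# Minimal sketch: find the value of one decision variable x that minimizes
# f(x) = (x - 3)**2 + 1 (an assumed example function).
from scipy.optimize import minimize

def f(x):
    # Objective expressed as a function of the decision variable x.
    return (x[0] - 3.0) ** 2 + 1.0

result = minimize(f, x0=[0.0])  # start the search from x = 0
print(result.x)                 # approximately [3.0], the minimizing value of x
print(result.fun)               # approximately 1.0, the minimum function value
```

Here the "conditions that give the minimum value" are simply x = 3, at which f attains its
smallest value of 1.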
Historical Development
The existence of optimization methods can be traced to the days of Newton, Lagrange, and
Cauchy. The development of differential calculus methods for optimization was possible
because of the contributions of Newton and Leibniz to calculus. The foundations of the calculus
of variations, which deals with the minimization of functionals, were laid by Bernoulli, Euler,
Lagrange, and Weierstrass. The method of optimization for constrained problems, which involves
the addition of unknown multipliers, became known by the name of its inventor, Lagrange.
Cauchy made the first application of the steepest descent method to solve unconstrained
optimization problems. By the middle of the twentieth century, high-speed digital
computers made the implementation of complex optimization procedures possible and
stimulated further research on newer methods. Spectacular advances followed, producing a
massive literature on optimization techniques. This advancement also resulted in the
emergence of several well defined new areas in optimization theory. Some of the major
developments in the area of numerical methods of unconstrained optimization are outlined here
with a few milestones.
• Development of the simplex method by Dantzig in 1947 for linear programming problems
• The enunciation of the principle of optimality in 1957 by Bellman for dynamic programming
problems
• Work by Kuhn and Tucker in 1951 on the necessary and sufficient conditions for the optimal
solution of programming problems laid the foundation for later research in non-linear
programming.
• The contributions of Zoutendijk and Rosen to nonlinear programming during the early 1960s
have been very significant.
• The work of Carroll, and of Fiacco and McCormick, allowed many difficult problems to be
solved by using the well-known techniques of unconstrained optimization.
• Geometric programming was developed in the 1960s by Duffin, Zener, and Peterson.
• Gomory did pioneering work in integer programming, one of the most exciting and rapidly
developing areas of optimization, since most real-world applications fall into this category
of problems.
• Dantzig, and Charnes and Cooper, developed stochastic programming techniques and solved
problems by assuming the design parameters to be independent and normally distributed.
The necessity to optimize more than one objective or goal while satisfying the physical
limitations led to the development of multi-objective programming methods. Goal
programming is a well-known technique for solving specific types of multi-objective
optimization problems. Goal programming was originally proposed for linear problems by
Charnes and Cooper in 1961. The foundation of game theory was laid by von Neumann in 1928
and since then the technique has been applied to solve several mathematical, economic and
military problems. Only during the last few years has game theory been applied to solve
engineering problems.
[Figure: sketch contrasting constrained and unconstrained optimization; the constrained case
includes the constraints x >= 0 and y >= 0.]
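To make the contrast concrete, the sketch below minimizes the same assumed objective
f(x, y) = (x + 1)^2 + (y + 2)^2 first without constraints and then with the non-negativity
constraints x >= 0 and y >= 0 (the function and starting point are illustrative assumptions):

```python
# Sketch: one objective minimized without constraints and then subject to
# the non-negativity constraints x >= 0 and y >= 0 (assumed example data).
from scipy.optimize import minimize

def f(v):
    x, y = v
    return (x + 1.0) ** 2 + (y + 2.0) ** 2  # unconstrained minimum lies at (-1, -2)

unconstrained = minimize(f, x0=[1.0, 1.0])
constrained = minimize(f, x0=[1.0, 1.0], bounds=[(0, None), (0, None)])

print(unconstrained.x)  # close to (-1, -2): the unconstrained optimum
print(constrained.x)    # close to (0, 0): the best point satisfying x >= 0, y >= 0
```

The constraints exclude the unconstrained optimum from the feasible region, so the constrained
solution lies on the boundary where both constraints are active.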
Based on the nature of design variables encountered, optimization problems can be classified
into two broad categories.
In the first category, the problem is to find values for a set of design parameters that make
some prescribed function of these parameters a minimum subject to certain constraints. Such
problems are called parameter or static optimization problems.
In the second category of problems, the objective is to find a set of design parameters, which
are all continuous functions of some other parameter, that minimizes an objective function
subject to a set of constraints. This type of problem, where each design variable is a function
of one or more parameters, is known as a trajectory or dynamic optimization problem.
Classification Based on the Nature of the Equations Involved
If the objective function and all of the constraint functions are linear functions of the design
variables, the problem is called a linear programming (LP) problem. If any of the functions
among the objective and constraint functions is nonlinear, the problem is called a nonlinear
programming (NLP) problem.
Depending on the values permitted for the design variables, optimization problems can be
classified as integer- and real-valued programming problems.
If some or all of the design variables X1, X2, . . . , Xn of an optimization problem are
restricted to take on only integer (or discrete) values, the problem is called an integer
programming problem.
If all the design variables are permitted to take any real value, the optimization problem is
called a real-valued programming problem.
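As a rough illustration of the distinction (the coefficients and bounds below are assumed for
the example), a tiny integer programming problem can be solved by simple enumeration of the
integer-valued candidates, which is practical only for very small problems but makes the
restriction to integer values explicit:

```python
# Sketch: a tiny integer programming problem solved by enumeration.
# Maximize 3*x1 + 2*x2 subject to x1 + x2 <= 4, with x1, x2 integers >= 0.
from itertools import product

best_value, best_point = None, None
for x1, x2 in product(range(5), repeat=2):  # integer candidates 0..4 for each variable
    if x1 + x2 <= 4:                        # feasibility check
        value = 3 * x1 + 2 * x2
        if best_value is None or value > best_value:
            best_value, best_point = value, (x1, x2)

print(best_point, best_value)               # (4, 0) with objective value 12
```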
Optimization, in its broadest sense, can be applied to solve any engineering problem. To
indicate the wide scope of the subject, some typical applications from different engineering
disciplines are given below.
The general optimization problem can be stated as follows: find the design vector
X = (x1, x2, . . . , xn)^T that minimizes f(X)
subject to gj(X) <= 0, j = 1, 2, . . . , m, and lj(X) = 0, j = 1, 2, . . . , p.
In this formulation, X is an n-dimensional vector called the design vector, f(X) is termed the
objective function, the gj(X) are known as the inequality constraints, and the lj(X) are known
as the equality constraints.
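A minimal sketch of this formulation is given below, assuming one example objective, one
inequality constraint, and one equality constraint (all of the functions are illustrative
assumptions, not taken from the notes):

```python
# Sketch of the general constrained formulation with one inequality and one
# equality constraint (all functions here are assumed example data).
from scipy.optimize import minimize

def f(X):
    return (X[0] - 2.0) ** 2 + (X[1] - 1.0) ** 2  # objective function f(X)

# Standard form g(X) = x1 + x2 - 1 <= 0; SciPy's "ineq" convention expects a
# function that must stay >= 0, so -g(X) is supplied.
ineq = {"type": "ineq", "fun": lambda X: 1.0 - X[0] - X[1]}
# Equality constraint l(X) = x1 - x2 = 0.
eq = {"type": "eq", "fun": lambda X: X[0] - X[1]}

result = minimize(f, x0=[0.0, 0.0], constraints=[ineq, eq])
print(result.x)  # approximately [0.5, 0.5]: the best point with x1 = x2 and x1 + x2 <= 1
```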
Some optimization problems do not involve any constraints and can be stated as: find X that
minimizes f(X). A problem of this kind is called an unconstrained optimization problem.
Constraint Surface
The set of values of X that satisfy the equation gj (X) = 0 forms a hypersurface in the design
space and is called a constraint surface.
Objective Function
A criterion has to be chosen for comparing the different alternative acceptable designs and for
selecting the best one. The criterion with respect to which the design is optimized, when
expressed as a function of the design variables, is known as the criterion or merit or objective
function.
The locus of all points satisfying f(X) = constant forms a surface; such surfaces are called
objective function surfaces.
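A quick numeric sketch of these two ideas is given below (the circle constraint and the linear
objective are assumed examples): points with g(X) = 0 lie on the constraint surface, while
points giving the same value of f(X) lie on the same objective function surface.

```python
# Sketch: sampling a constraint surface g(X) = 0 and an objective function
# surface f(X) = constant for assumed example functions of two variables.
import math

def g(x, y):
    return x ** 2 + y ** 2 - 1.0  # constraint surface g = 0 is the unit circle

def f(x, y):
    return x + 2.0 * y            # f = constant gives straight-line objective "surfaces"

# Points on the constraint surface g(X) = 0, parametrized by an angle t.
circle_points = [(math.cos(t), math.sin(t)) for t in (0.0, math.pi / 2, math.pi)]
print([round(g(x, y), 6) for x, y in circle_points])  # all ~0: the points lie on g = 0

# Two different points lying on the same objective function surface f(X) = 2.
print(f(0.0, 1.0), f(2.0, 0.0))                       # both equal 2.0
```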
The design of many engineering systems can be a complex process. Assumptions must
be made to develop realistic models that can be subjected to mathematical analysis by
the available methods, and the models must be verified by experiments.
For example, the design of a high-rise building involves designers from the architectural,
structural, mechanical, electrical, and environmental engineering disciplines.
In block 4, the conventional design method checks to ensure that the performance
criteria are met, whereas the optimum design method checks for satisfaction of all of
the constraints for the problem formulated in block 0.
In block 5, stopping criteria for the two methods are checked, and the iteration is
stopped if the specified stopping criteria are met.
In block 6, the conventional design method updates the design based on the designer’s
experience and intuition and other information gathered from one or more trial designs;
the optimum design method uses optimization concepts and procedures to update the
current design.
A function f(x) of n variables has a global (absolute) minimum at x* if the value of the
function at x* is less than or equal to the value of the function at any other point x in the
feasible set S, that is, if f(x*) <= f(x) for all x in the feasible set S. If strict inequality
holds for all x other than x*, then x* is called a strong (strict) global minimum; otherwise,
it is called a weak global minimum.
A function f(x) of n variables has a local (relative) minimum at x* if the inequality
f(x*) <= f(x) holds for all x in a small neighborhood N (vicinity) of x* in the feasible set S.
If strict inequality holds for all such x other than x*, then x* is called a strong (strict)
local minimum; otherwise, it is called a weak local minimum.
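As a small numeric illustration of the difference (the polynomial f(x) = x^4 - 3x^2 + x is an
assumed example), a coarse grid search over an interval shows a function possessing both a
global minimum and a separate, weaker local minimum:

```python
# Sketch: distinguishing a global from a local minimum by a coarse grid search
# on the assumed example function f(x) = x**4 - 3*x**2 + x over [-2, 2].
import numpy as np

def f(x):
    return x ** 4 - 3.0 * x ** 2 + x

xs = np.linspace(-2.0, 2.0, 4001)
ys = f(xs)

global_min_x = xs[np.argmin(ys)]
print(round(float(global_min_x), 2), round(float(f(global_min_x)), 2))  # ~ -1.30, f ~ -3.51 (global)

# Restricting attention to x > 0 exposes the other valley, which is only a
# local minimum of f over the full interval.
mask = xs > 0
local_min_x = xs[mask][np.argmin(ys[mask])]
print(round(float(local_min_x), 2), round(float(f(local_min_x)), 2))    # ~ 1.13, f ~ -1.07 (local)
```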
Optimality criteria
Optimality Criteria Methods: Optimality criteria are the conditions that a function must
satisfy at its minimum point. Optimization methods that seek solutions (perhaps using
numerical methods) to the optimality conditions are often called optimality criteria or
indirect methods.
The conditions that must be satisfied at the optimum point are called necessary. Stated
differently, if a point does not satisfy the necessary conditions, it cannot be optimum.
If a candidate optimum point satisfies the sufficient condition, then it is indeed an
optimum point. If the sufficient condition is not satisfied, however, or cannot be used,
we may not be able to conclude that the candidate design is not optimum.
Optimum points must satisfy the necessary conditions. Points that do not satisfy them
cannot be optimum.
A point satisfying the necessary conditions need not be optimum; that is, non-optimum
points may also satisfy the necessary conditions.
A candidate point satisfying a sufficient condition is indeed optimum.
If the sufficiency condition cannot be used or it is not satisfied, we may not be able to
draw any conclusions about the optimality of the candidate point.
The optimality conditions for unconstrained or constrained problems can be used in two
ways:
1. They can be used to check whether a given point is a local optimum for the problem.
2. They can be solved for local optimum points.
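The sketch below uses the conditions in exactly these two ways for a single-variable problem
(the function f(x) = (x - 1)^2 + 4 and the candidate point are assumed for the illustration):
it checks the first-order necessary condition f'(x*) = 0 and the second-order sufficient
condition f''(x*) > 0 at a candidate point, using finite-difference approximations of the
derivatives.

```python
# Sketch: checking the first-order necessary condition f'(x*) = 0 and the
# second-order sufficient condition f''(x*) > 0 at a candidate point, using
# finite differences on the assumed example function f(x) = (x - 1)**2 + 4.
def f(x):
    return (x - 1.0) ** 2 + 4.0

def first_derivative(func, x, h=1e-5):
    # Central-difference approximation of f'(x).
    return (func(x + h) - func(x - h)) / (2.0 * h)

def second_derivative(func, x, h=1e-4):
    # Central-difference approximation of f''(x).
    return (func(x + h) - 2.0 * func(x) + func(x - h)) / h ** 2

x_star = 1.0                                          # candidate optimum point
necessary = abs(first_derivative(f, x_star)) < 1e-6   # first-order necessary condition
sufficient = second_derivative(f, x_star) > 0.0       # second-order sufficient condition

print(necessary, sufficient)  # True True: x* = 1 satisfies both, so it is a strict local minimum
```

If the necessary condition failed, the candidate point could be rejected immediately; if only
the sufficient condition failed, no conclusion about optimality could be drawn from this test
alone.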