Linear Programming - defined as the problem of maximizing or minimizing a linear function subject to a
system of linear constraints. The constraints may be equalities or inequalities. The linear function is
called the objective function, which has the form f(x, y) = ax + by + c. The solution set of the system
of inequalities is the set of possible or feasible solutions, which are points of the form (x, y).
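To make this concrete, here is a minimal sketch (not from the original text; all coefficients are illustrative) that solves a small two-variable LP with SciPy's linprog. Since linprog minimizes, the objective is negated to perform a maximization.

```python
# Maximize f(x, y) = 3x + 2y subject to
#   x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0.
from scipy.optimize import linprog

c = [-3, -2]               # negated coefficients: maximize 3x + 2y
A_ub = [[1, 1], [1, 3]]    # left-hand sides of the <= constraints
b_ub = [4, 6]              # right-hand sides
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)     # optimal feasible point (x, y) and objective value
```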
For many general nonlinear programming problems, the objective function has many locally
optimal solutions; finding the best of all such minima, the global solution, is often difficult.
Quadratic programming (QP) is the process of solving a special type of mathematical
optimization problem—specifically, a (linearly constrained) quadratic optimization problem, that is, the
problem of optimizing (minimizing or maximizing) a quadratic function of several variables subject to
linear constraints on these variables. Quadratic programming is a particular type of nonlinear
programming.
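As a hedged illustration (the function and constraint below are made up for demonstration), a small QP can be solved with SciPy's general-purpose SLSQP method:

```python
# Minimize the quadratic f(x, y) = x^2 + y^2 - 2x - 3y
# subject to the linear constraints x + y <= 2, x >= 0, y >= 0.
import numpy as np
from scipy.optimize import minimize

def f(v):
    x, y = v
    return x**2 + y**2 - 2*x - 3*y

cons = [{"type": "ineq", "fun": lambda v: 2 - v[0] - v[1]}]  # x + y <= 2
res = minimize(f, x0=np.zeros(2), method="SLSQP",
               bounds=[(0, None), (0, None)], constraints=cons)
print(res.x, res.fun)   # constrained minimizer and objective value
```

Dedicated QP solvers exploit the quadratic-plus-linear structure directly, but a general nonlinear solver suffices for a sketch this small.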
In a geometric program (GP), the decision variables must be strictly positive quantities. This
is a good fit for engineering design equations (which are often constructed to contain only
positive quantities), but any model with variables of unknown sign (such as forces and
velocities without a predefined direction) may be difficult to express as a GP. Such
models might be better expressed as signomial programs.
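As a sketch of how the positivity restriction shapes GP modeling, the following example uses the third-party gpkit package (an assumption here, along with an installed GP solver such as cvxopt); every gpkit variable is implicitly positive, which is exactly what makes the model a valid GP.

```python
# Minimize the surface area of a box subject to a volume requirement.
# All variables are implicitly > 0, so this posynomial model is a GP.
from gpkit import Variable, Model

h = Variable("h")  # height
w = Variable("w")  # width
d = Variable("d")  # depth

surface_area = 2 * (h * w + h * d + w * d)   # posynomial objective
constraints = [h * w * d >= 10]              # volume lower bound (monomial)

m = Model(surface_area, constraints)
sol = m.solve(verbosity=0)
print(sol["variables"])
```

A variable that could be negative (e.g., a signed force) has no place in this formulation, which is why such models fall back to signomial programming.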
Multi-objective optimization is an area of multiple criteria decision making that is concerned with
mathematical optimization problems involving more than one objective function to be optimized
simultaneously. Multi-objective optimization has been applied in many fields of science, including
engineering, economics, and logistics, where optimal decisions need to be taken in the presence of
trade-offs between two or more conflicting objectives.
- Involves more than one objective function to be minimized or maximized
- The answer is a set of solutions that defines the best trade-off between competing objectives (see the Pareto-filtering sketch after this list)
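To make "best trade-off" concrete, this sketch (illustrative, not from the original text) filters a set of candidate designs down to its Pareto-optimal, i.e., non-dominated, subset when both objectives are minimized.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of `points` (both objectives minimized)."""
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some other point is <= in every objective
        # and strictly < in at least one
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return points[keep]

# Illustrative candidates: (cost, weight) pairs for some design.
candidates = [(1.0, 9.0), (2.0, 7.0), (3.0, 7.5), (4.0, 4.0), (5.0, 4.5)]
print(pareto_front(candidates))   # the trade-off set: (1,9), (2,7), (4,4)
```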
Generally, multi-objective optimization methods are classified based on three different approaches:
• A priori approach – the decision-maker specifies preferences (for example, weights on the objectives)
before the optimization, reducing the problem to a single-objective one; a weighted-sum sketch follows
this list.
• A posteriori approach – the optimization process determines a set of Pareto solutions, and then the
decision-maker chooses a solution from the set of solutions provided by the algorithm.
• Interactive approach – there is progressive interaction between the decision-maker and the solver, i.e.,
knowledge gained during the optimization process helps the decision-maker to define the preferences.
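A minimal sketch of the a priori (weighted-sum) approach, with two made-up conflicting objectives and assumed decision-maker weights:

```python
import numpy as np
from scipy.optimize import minimize

def cost(x):        # first objective (illustrative)
    return (x[0] - 1) ** 2 + x[1] ** 2

def mass(x):        # second, conflicting objective (illustrative)
    return x[0] ** 2 + (x[1] - 2) ** 2

w = (0.7, 0.3)      # preferences fixed before optimization (a priori)

res = minimize(lambda x: w[0] * cost(x) + w[1] * mass(x), x0=np.zeros(2))
print(res.x)        # one Pareto-optimal point, selected by the weights
```

Sweeping the weights and re-solving traces out (part of) the Pareto front, which is essentially the a posteriori approach.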
1. Calculus-based techniques - Numerical methods, also called calculus-based methods, use a set of
necessary and sufficient conditions that must be satisfied by the solution of the optimization problem.
They can be further subdivided into two categories, viz. direct and indirect methods. Direct search
methods perform hill climbing in the function space by moving in a direction related to the local
gradient. In indirect methods, the solution is sought by solving the set of equations that results from
setting the gradient of the objective function to zero. The calculus-based methods are local in scope and
also assume the existence of derivatives. These constraints severely restrict their application to many
real-life problems, although they can be very efficient in a small class of unimodal problems. Both
styles are contrasted in the sketch below.
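A minimal sketch (illustrative function and step size) contrasting the two calculus-based styles on the unimodal f(x) = (x - 3)^2 + 1:

```python
from scipy.optimize import fsolve

def grad(x):
    return 2.0 * (x - 3.0)       # derivative of f(x) = (x - 3)^2 + 1

# Direct method: descend by repeatedly stepping against the local gradient.
x = 0.0
for _ in range(100):
    x -= 0.1 * grad(x)           # fixed step size, for illustration only
print("direct (gradient descent):", x)

# Indirect method: solve grad(x) = 0 for the stationary point directly.
print("indirect (root of gradient):", fsolve(grad, x0=0.0)[0])
```

Both land on x = 3, but only because the function is unimodal and differentiable; on a multimodal function each would simply return whichever local optimum it reached first.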
2. Enumerative techniques - Enumerative techniques involve evaluating each and every point of the
finite, or discretized infinite, search space in order to arrive at the optimal solution [24, 112]. Dynamic
programming is a well-known example of enumerative search; a small instance is sketched below. It is
obvious that enumerative techniques will break down on problems of even moderate size and
complexity, because it may become simply impossible to search all the points in the space.
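A minimal dynamic-programming sketch, the 0/1 knapsack problem with made-up item values and weights; the DP enumerates every (item, capacity) state exactly once instead of every subset of items:

```python
def knapsack(values, weights, capacity):
    # best[c] = best total value achievable with remaining capacity c
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[1, 2, 3], capacity=5))  # 220
```

Even so, the state space itself grows with the problem, which is exactly the breakdown on large instances described above.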
3. Random techniques - Guided random search techniques are based on enumerative methods, but they
use additional information about the search space to guide the search toward potential regions of the
search space. These can be further divided into two categories, namely single-point search and
multiple-point search, depending on whether the search proceeds with just one point or with several
points at a time. The guided random search methods are useful in problems where the search space is
huge, multimodal, and discontinuous, and where a near-optimal solution is acceptable. These are robust
schemes, and they usually provide near-optimal solutions across a wide spectrum of problems; a
single-point example in the style of simulated annealing is sketched below. In the remaining part of this
chapter, we focus on such methods of optimization for both single and multiple objectives.
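A minimal single-point guided random search sketch in the style of simulated annealing, on an illustrative multimodal function (all parameters are made up):

```python
import math
import random

def f(x):
    # multimodal test function: many local minima, global minimum at x = 0
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

random.seed(0)
x = random.uniform(-5.0, 5.0)
fx = f(x)
temp = 10.0
for _ in range(5000):
    cand = x + random.gauss(0.0, 0.5)   # random move near the current point
    fc = f(cand)
    # always accept improvements; accept worse moves with a probability
    # that shrinks as the temperature cools: this is the "guidance"
    if fc < fx or random.random() < math.exp((fx - fc) / temp):
        x, fx = cand, fc
    temp *= 0.999                        # geometric cooling schedule
print(x, fx)

# A multiple-point variant would maintain a population of such points,
# as genetic algorithms do.
```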