Mathematical Programming Techniques: Introduction


INTRODUCTION

Optimization is the technique of obtaining the best results under given circumstances.


Optimization techniques, also known as mathematical programming techniques, are the methods which give the best results, under the given conditions, to the given programming problems. These optimum-seeking methods are generally studied as a part of Operations Research. Operations Research is a branch of mathematics which deals with the application of scientific methods and techniques to the complicated decision-making problems arising in Engineering, Science, Industry etc., in order to establish the best or optimal solution.

METHODS OF OPERATIONS RESEARCH

Methods of Operations Research can mainly be divided into three parts. This division, of course, is not unique:

(a) Mathematical Programming Techniques
(b) Stochastic Process Techniques
(c) Statistical Methods

(a) Mathematical Programming Techniques


1. Linear Programming: This deals with problems in which the objective function and all the constraints are linear, and all the decision variables are required to be non-negative (a minimal solver sketch is given after this list).

2. Integer Programming: A linear programming problem in which some or all of the decision variables are restricted to integer values.

3. Quadratic Programming: Here the objective function is quadratic while the constraints are linear.

4. Non-linear Programming: The objective function and the constraints may both be non-linear, but at least one of the two must be non-linear.

5. Dynamic Programming: The given problem is broken up into stages and then solved stage by stage.

6. Game Theory: Deals with the strategies of a game.

7. Geometric Programming.

8. Stochastic Programming.

9. Separable Programming.

10. Multi-objective Programming.

11. Calculus of Variations.

12. Network methods: CPM, PERT etc.
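
As a small illustration of linear programming (item 1 above), the sketch below solves a tiny LP with SciPy's linprog routine. The problem data (objective coefficients and constraints) are invented purely for illustration, and SciPy is only one of many solvers that could be used.

# Minimal linear programming sketch (illustrative data only):
# minimize c^T x subject to A_ub x <= b_ub and x >= 0.
from scipy.optimize import linprog

c = [-3, -5]                # maximize 3*x1 + 5*x2 by minimizing its negative
A_ub = [[1, 0],             # x1           <= 4
        [0, 2],             # 2*x2         <= 12
        [3, 2]]             # 3*x1 + 2*x2  <= 18
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)      # optimal decision vector and objective value

Here the bounds argument enforces the non-negativity of the decision variables required in item 1.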


(b) Stochastic Process Techniques
1. Statistical decision theory

2. Markov Process

3. Queueing theory

4. Renewal theory

5. Simulation Method.

6. Reliability theory etc.

(c) Statistical Methods


1. Regression Analysis.

2. Cluster Analysis, pattern recognition.

3. Design of experiments

4. Discriminant Analysis etc.

HISTORY OF OPTIMIZATION
Optimization has its main origin in the Second World War. At that time the scientists of England were asked to study the strategic and tactical problems related to the air and land defence of the country. Since military and other resources were limited, it was necessary to make the best possible use of them.

During World War II the military commands of the U.S.A. and the U.K. formed interdisciplinary teams of scientists for scientific research into strategic and tactical military operations. Their mission was to formulate plans that would let the military commands make the best use of their scarce military resources, and to implement the resulting decisions effectively. These scientists were not actually engaged in fighting the war, but their strategic initiatives and other intellectual support helped the military commands to win it.

After the end of the war, the success of these military teams attracted industrial managers, who wanted the same scientists to help them find ways to minimize cost and maximize profit. The first mathematical technique in this field, the Simplex Method for Linear Programming, was developed in 1947. Since then many new techniques and applications have been developed in this field.
OPTIMIZATION PROBLEM
An optimization or a mathematical programming problem can be stated as follows:

Minimize f(X)

subject to g_j(X) ≤ 0,   j = 1, 2, ..., m

           h_k(X) = 0,   k = 1, 2, ..., p

where X = (x_1, x_2, ..., x_n)^T is an n-dimensional vector called the decision vector, f(X) is the objective function, and g_j(X) ≤ 0 and h_k(X) = 0 are the inequality and equality constraints respectively. This problem is a constrained optimization problem.

Note that m and p need not be related in any way; each is fixed by the requirements of the particular problem.

If there are no constraints present, the optimization problem becomes

Minimize f(X)

where X = (x_1, x_2, ..., x_n)^T

This is called an unconstrained optimization problem.
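
To make the general statement concrete, here is a minimal sketch that solves one small problem of exactly this form with SciPy's minimize routine. The particular f, g and h below are invented for illustration, and any non-linear programming solver could be substituted.

# Sketch: minimize f(X) subject to g(X) <= 0 and h(X) = 0.
# SciPy's "ineq" convention requires fun(x) >= 0, so g <= 0 is passed as -g >= 0.
from scipy.optimize import minimize

f = lambda x: (x[0] - 1)**2 + (x[1] - 2)**2              # objective f(X)
cons = [
    {"type": "ineq", "fun": lambda x: 2 - x[0] - x[1]},  # g(X) = x1 + x2 - 2 <= 0
    {"type": "eq",   "fun": lambda x: x[0] - x[1]},      # h(X) = x1 - x2 = 0
]
res = minimize(f, x0=[0.0, 0.0], constraints=cons)
print(res.x, res.fun)       # optimum at x = (1, 1) with f = 1

Dropping the constraints list turns the same call into the unconstrained problem stated above.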

1. Decision Variables

In any engineering system we deal with a number of parameters (quantities). Some of these parameters have preassigned values whereas the others are not fixed. The parameters which are not fixed are called the decision variables, and they are collectively represented as the decision vector X = (x_1, x_2, ..., x_n)^T.

2. Objective Function

Every problem has a criterion (aim) to be satisfied, e.g. to produce an acceptable design, to maximize profit, or to minimize cost. Such a criterion can be expressed as a function of the decision variables, and this function, the one to be optimized, is called the objective function.

In some problems there may be more than one criterion to be satisfied, e.g. consider a salesman travelling to different cities to collect money from his customers. His criteria will be to maximize the collection while travelling the minimum distance. An optimization problem involving multiple objective functions is known as a multiobjective programming problem.
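
One common way (though not the only one) to treat such a multiobjective problem is to combine the criteria into a single weighted objective. The sketch below does this for two invented one-variable criteria; the weights are a modelling choice, not part of the original problem.

# Weighted-sum scalarization of two conflicting criteria (illustrative only).
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1)**2      # first criterion, best at x = 1
f2 = lambda x: (x[0] - 3)**2      # second criterion, best at x = 3
w1, w2 = 0.5, 0.5                 # weights chosen by the modeller

res = minimize(lambda x: w1 * f1(x) + w2 * f2(x), x0=[0.0])
print(res.x)                      # compromise solution, here x = 2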
3. Constraints

In most problems the decision variables depend on certain conditions (requirements) and cannot be chosen arbitrarily. The restrictions that must be satisfied by the decision variables are called the constraints.

4. Objective Function Surfaces

Let the objective function be f(X), where X is a vector. The locus of all points satisfying f(X) = C (a constant) forms a hypersurface in n-dimensional space, and each value of C gives a different member of this family of surfaces. These surfaces are called the objective function surfaces. For example, for f(x_1, x_2) = x_1^2 + x_2^2 the surfaces f(X) = C are concentric circles about the origin.

5. Constraint Surfaces

In the optimization problem, let g_j(X) ≤ 0 be one of the constraints. The locus of all points satisfying g_j(X) = 0 forms a hypersurface in n-dimensional space, called the constraint surface. This surface divides the n-dimensional space into two regions, g_j(X) > 0 and g_j(X) < 0. The points lying in the region g_j(X) > 0 are infeasible and those in the region g_j(X) < 0 are feasible (with respect to the constraint g_j(X) ≤ 0). The points lying on the hypersurface g_j(X) = 0 satisfy the constraint critically.
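
Both kinds of surface are easiest to visualise in two dimensions. The sketch below plots the objective function surfaces (the contours f(X) = C) of a sample f together with one constraint surface g(X) = 0, using matplotlib; the particular f and g are invented for illustration.

# Contours of f(X) = C (objective function surfaces) and the curve
# g(X) = 0 (constraint surface) for a made-up two-variable example.
import numpy as np
import matplotlib.pyplot as plt

x1, x2 = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
f = (x1 - 1)**2 + (x2 - 2)**2          # objective function
g = x1 + x2 - 2                        # constraint function, feasible where g <= 0

plt.contour(x1, x2, f, levels=[0.5, 1, 2, 4, 8])    # family f(X) = C
plt.contour(x1, x2, g, levels=[0], colors="red")    # constraint surface g(X) = 0
plt.xlabel("x1")
plt.ylabel("x2")
plt.show()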

ENGINEERING APPLICATIONS OF OPTIMIZATION

Optimization can be applied to a very wide range of engineering problems. Some of the applications from different branches of engineering are given below:

1. Bringing out a new, more efficient design of a machine.


2. Finding the optimal trajectories of space vehicles.
3. Design of water resources system for maximum benefit.
4. Minimum weight design of structures for earthquake, wind etc.
5. Optimum design of gears, machine tools and other mechanical equipment.
6. Optimum design of electrical equipment.
7. Electrical Networking 
8. Design of Aircraft and aerospace structure for minimum weight.
9. Design of structures: frames, bridges, towers etc. for minimum cost.
10. Production planning, scheduling and controlling
11. Travelling Salesman Problem
12. Design of chemical processing equipment and plants.
13. Planning of maintenance and replacement of equipment to reduce operating costs.
14. Inventory Control
15. Design of control systems.
16. Controlling the idle and waiting times and queueing in production lines to reduce costs etc.

There are many more applications of optimization in engineering and also in other fields of study.
