

Dhaka Univ. J. Sci. 61(1): 75-80, 2013 (January)

A Generalized Computer Technique for Solving Unconstrained Non-Linear Programming Problems

H.K. Das and M. B. Hasan
Department of Mathematics, University of Dhaka, Dhaka-1000, Bangladesh,
Email: [email protected]
Received on 20.01.2012. Accepted for Publication on 10.07.2012
Abstract
An unconstrained problem with a nonlinear objective function has many applications and is often viewed as a discipline in and of itself.
In this paper, we develop a computer technique for solving nonlinear unconstrained problems in a single framework, incorporating the
Golden Section and Gradient Search methods. For this, we first combine these algorithms and then develop a generalized computer technique
using the programming language MATHEMATICA. We demonstrate our computer technique with a number of numerical examples.
Index-Terms— Unconstrained NLPP, Optimization, Computer Algebra.

I. Introduction

In this paper, we review the basic algorithms for convex and concave quadratic programming (QP) that are part of the Optimization Subroutine Library. Optimization might be defined as the science of determining the "best" solution to certain mathematically defined problems, which are often models of physical reality. It involves the study of optimality criteria for problems, the determination of algorithmic methods of solution, the study of the structure of such methods, and computer experimentation with methods, both under trial conditions and on real-life problems. There is an extremely diverse range of practical applications, including chemical reactor design, resource allocation, scheduling, blending, data fitting and penalty functions. By nature, optimization techniques are iterative and in most cases contain a line search step. The aim of this work is to numerically evaluate the performance of methods for solving unconstrained Non-Linear Programming Problems (NLPP). Like linear programming (LP), NLPP is a mathematical technique for determining optimal solutions to many business problems, but NLPPs come in many different shapes and forms. Unlike the simplex method for LP, no single algorithm can solve all these different types of problems. We therefore study not only the unconstrained NLPP optimization problem but also the constrained one, and we complete this work in a single framework. A 1-D simplex search algorithm was presented by Choo and Kim [20], who worked only with minimization of unconstrained NLP problems. However, because of the difficulty of analyzing nonlinear calculations, the vast majority of questions that are important to the performance of optimization algorithms in practice are usually left unanswered (M. J. D. Powell [21]). Ayoade [18] worked on the one-dimensional simplex search for minimization-type problems in several variables, and a Fortran 77 program was developed to implement his technique. Our aim is therefore to develop a technique that solves unconstrained problems with both maximization and minimization, incorporating the one-dimensional Golden Section method and the multivariable Gradient Search method.

In this section, we discuss some basic definitions and theorems.

Preliminaries

Unconstrained NLPP Optimization

Unconstrained optimization problems have no constraints, so the objective is simply to

Maximize f(x) over all values of x = (x1, x2, …, xn).

The necessary condition for a particular solution x = x* to be optimal when f(x) is a differentiable function is

∂f/∂xj = 0 at x = x*, for j = 1, 2, …, n.

When f(x) is a concave function, this condition is also sufficient, so solving for x* reduces to solving the system of n equations obtained by setting the n partial derivatives equal to zero. Unfortunately, for nonlinear functions f(x) these equations often are nonlinear as well, in which case one is unlikely to be able to solve analytically for their simultaneous solution. When a variable xj does have a nonnegativity constraint xj ≥ 0, the preceding necessary and (perhaps) sufficient condition changes slightly to

∂f/∂xj ≤ 0 at x = x* if xj* = 0,
∂f/∂xj = 0 at x = x* if xj* > 0,

for each such j.

In this paper, we develop a computer technique incorporating the Golden Section and Gradient Search methods. Our program can solve any kind of unconstrained NLP faster than the methods mentioned above, so it saves time, and one does not need to worry about which type of unconstrained NLPP is at hand.

The rest of the paper is organized as follows. Sections 2 and 3 treat one-variable and multivariable unconstrained optimization problems, respectively. In Section 4, we present results and discussion with a number of numerical examples. In Section 5, we combine the algorithms and develop a computer-oriented program for solving any type of unconstrained NLPP using MATHEMATICA, together with its input-output procedure. In the last section, we give a comparison.

II. One Variable Unconstrained Optimization

We now begin discussing how to solve some of the types of problems just described by considering the simplest case: unconstrained optimization with just a single variable x, where the differentiable function f(x) to be maximized is concave. The necessary and sufficient condition for a particular solution x = x* to be optimal (a global maximum) is

df/dx = 0 at x = x*.

If this equation can be solved directly for x*, we are done. However, if f(x) is not a particularly simple function, so that the derivative is not just a linear or quadratic function, it may not be possible to solve the equation analytically. If not, the one-dimensional search procedure provides a straightforward way of solving the problem numerically.

III. Multivariable Unconstrained Optimization

Now consider the problem of maximizing a concave function f(x) of multiple variables x = (x1, x2, …, xn) when there are no constraints on the feasible values. Suppose again that the necessary and sufficient condition for optimality, given by the system of equations obtained by setting the respective partial derivatives equal to zero, cannot be solved analytically, so that a numerical search procedure must be used. The one-dimensional search procedure is therefore extended to this multidimensional problem.

IV. Results & Discussion

In this section, we present a number of numerical examples to show the efficiency of our technique. We also show the complexity of the manual process for the different types of unconstrained NLP problems.

Table. 1. Examples view for Optimality


.Example Type Number Initial Optimal solution Optimal value
No. of variables value
1 Maximize 1 No 29
2 Maximize 2 1
3 Maximize 1 No .
4 Maximize 2 (1,1) 4.66667
5 Maximize 1 (-3,3) 2.5 6.25
6 Maximize 2 No (55/8,9/2) $392.81
7 Maximize 3 No (0,0,0) 0
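Several of the entries in Table 1 can be verified directly from the optimality condition of Section II. As an illustration (ours, not part of the program of Section V), the profit function of Example 5 below, f(x) = 5x - x^2, is simple enough for f'(x) = 0 to be solved analytically in MATHEMATICA:

f[x_] := 5 x - x^2          (* profit of Example 5: x(10 - x) - 5 x *)
Solve[f'[x] == 0, x]        (* {{x -> 5/2}} *)
f''[x]                      (* -2 < 0, so f is concave and x = 5/2 is a global maximum *)
f[5/2]                      (* 25/4 = 6.25, matching the optimal value in Table 1 *)

For functions whose derivative is not linear or quadratic, this direct approach fails, which is exactly the case handled by the search procedures below.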

Example 1
Find the optimal solution to Max f(x) (Wayne L. Winston [4]), reducing the interval of uncertainty to a final length of at most 0.8.

Solution
Golden Section search is applied. At each iteration the two interior points of the current interval are computed, f is evaluated at both, and the sub-interval that cannot contain the maximum is discarded, giving a new, smaller interval of uncertainty. Iterations 2-5 proceed in the same way, each time shrinking the interval of uncertainty by the golden-section factor; the optimal solution then lies in the final interval (the program output in Section V gives [-3, -2.27884] after 5 iterations). Similarly, we get the desired results.

Example 2
Consider maximizing the function (Lieberman [3])

Max g = 2 x1 x2 + 2 x2 - x1^2 - 2 x2^2.

Solution
It is easy to see that the given problem is concave. We now solve this problem by the Gradient Search method.

Iteration 1
Let x = (0, 0) be the initial trial solution. The gradient there is ∇g(0, 0) = (0, 2), so we maximize g(0, 2t) = 4t - 8t^2 over t ≥ 0; g'(t) = 4 - 16t = 0 implies t = 1/4, giving the new trial solution (0, 1/2), where ∇g = (1, 0) with ‖∇g‖ = 1.

Iteration 2
x = (0, 1/2) + t (1, 0) = (t, 1/2);
g(t, 1/2) = t - t^2 + 1/2;
g'(t) = 1 - 2t = 0.
This implies t = 1/2, so the new trial solution is (1/2, 1/2), where ∇g = (0, 1).

Proceeding similarly up to iteration 6, the trial solutions approach the optimal solution (1, 1) with optimal value 1.

Example 3
The function to be maximized is a one-variable concave function taken from Lieberman [3]; it is handled exactly as in Example 1.

Example 4
Consider maximizing the function (Taha [9])

f(x1, x2) = 4 x1 + 6 x2 - 2 x1^2 - 2 x1 x2 - 2 x2^2,

starting from the initial trial solution (1, 1).

Real-life examples of unconstrained NLP

Example 5
It costs a monopolist $5/unit to produce a product. If he produces x units of the product, then each can be sold for 10 - x dollars (0 ≤ x ≤ 10). To maximize profit, how much should the monopolist produce? (Wayne L. Winston [4]). The profit to be maximized is thus f(x) = x(10 - x) - 5x = 5x - x^2.

Example 6
A monopolist producing a single product has two types of customers. If x1 units are produced for customer 1, then customer 1 is willing to pay a price of 70 - 4 x1 dollars. If x2 units are produced for customer 2, then customer 2 is willing to pay a price of 150 - 15 x2 dollars. For x > 0, the cost of manufacturing x units is 100 + 15x dollars. To maximize profit, how much should the monopolist sell to each customer? (Winston [4]).

Example 7
Consider the following unconstrained optimization problem (Lieberman [3]): maximize the given concave function of three variables, starting with the initial trial solution (1, 1, 1).

V. A Generalized NLP Technique
In this section, we first state a combined algorithm for solving unconstrained NLPPs. We then develop a code for it using the programming language MATHEMATICA [7].

Algorithm
Step 1: Input the number of variables v.
Step 2: If the number of variables v = 1, then go to the following sub-steps.
Sub-Step 1: Let x = x* be the optimal solution of the NLP Max f(x) s.t. a ≤ x ≤ b.
Sub-Step 2: Find the two interior points x1 = b - r(b - a) and x2 = a + r(b - a), where r = (√5 - 1)/2, and evaluate f(x1) and f(x2).
Sub-Step 3:
Case 1: If f(x1) < f(x2), then the maximum lies in (x1, b].
Case 2: If f(x1) = f(x2), then the maximum lies in [x1, x2].
Case 3: If f(x1) > f(x2), then the maximum lies in [a, x2).
The interval in which the maximum must lie is called the interval of uncertainty.
Sub-Step 4: Determine which of Cases 1-3 holds and obtain the correspondingly reduced interval of uncertainty.
Sub-Step 5: Repeat the process from Sub-Step 2 until the interval of uncertainty is sufficiently small.
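A minimal MATHEMATICA sketch of Sub-Steps 2-5 is given below. It is an illustration of the algorithm rather than the BA[GOLDEN_] module of our program; the name goldenSection and the folding of the tie case (Case 2) into the second branch are assumptions of the sketch.

goldenSection[f_, a0_, b0_, tol_] :=
 Module[{r = (Sqrt[5.] - 1)/2, a = a0, b = b0, x1, x2},
  While[b - a > tol,
   x1 = b - r (b - a);       (* Sub-Step 2: the two interior points *)
   x2 = a + r (b - a);
   If[f[x1] < f[x2],
    a = x1,                  (* Case 1: the maximum lies in (x1, b] *)
    b = x2]];                (* Cases 2-3: the maximum lies in [a, x2) *)
  {a, b}]                    (* Sub-Step 5: the final interval of uncertainty *)

For the profit function of Example 5, goldenSection[5 # - #^2 &, 0, 10, 0.01] returns a small interval around x = 2.5, consistent with the output [2.49654, 2.50387] reported below.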
Step 3: If the number of variables v ≥ 2, then go to the following sub-steps.
Sub-Step 1: Select a tolerance ε and any initial trial solution x' for the problem Max f(x), x = (x1, …, xn).
Sub-Step 2: Express f(x' + t ∇f(x')) as a function of t by setting xj = xj' + t (∂f/∂xj evaluated at x') for j = 1, …, n.
Sub-Step 3: Using the one-dimensional search procedure, find t = t* such that f(x' + t ∇f(x')) is maximized over t ≥ 0.
Sub-Step 4: Reset x' = x' + t* ∇f(x') and calculate ∇f at this new x'. If ‖∇f(x')‖ ≤ ε, i.e. if |∂f/∂xj| ≤ ε for every j, stop; otherwise return to Sub-Step 2.
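The multivariable sub-steps can be sketched in MATHEMATICA as follows. This is an illustration rather than the MA[GRADIENT_] module of our program: the built-in Grad and ArgMax are used here for the gradient and for the one-dimensional maximization over t, whereas our program performs its own one-dimensional search.

gradientSearch[f_, vars_, x0_, eps_] :=
 Module[{x = x0, g, t, tstar},
  g = Grad[f, vars] /. Thread[vars -> x];            (* gradient at the trial solution *)
  While[Norm[g] > eps,
   (* Sub-Steps 2-3: maximize f(x' + t grad f(x')) over t >= 0 *)
   tstar = ArgMax[{f /. Thread[vars -> x + t g], t >= 0}, t];
   x = x + tstar g;                                   (* Sub-Step 4: the new trial solution *)
   g = Grad[f, vars] /. Thread[vars -> x]];
  x]

With the function of Example 2, gradientSearch[2 x1 x2 + 2 x2 - x1^2 - 2 x2^2, {x1, x2}, {0, 0}, 1/100] reproduces the trial solutions (0, 1/2), (1/2, 1/2), … and approaches the optimal solution (1, 1).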
In the next section, we develop a computer technique for solving unconstrained NLP problems.

Computer technique for Unconstrained NLP

In this section, we develop a code for solving NLP problems.
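The complete listing is not reproduced here. A rough sketch of its dispatching structure is given below; it is an assumption of ours, with the goldenSection and gradientSearch sketches above standing in for the BA[GOLDEN_] and MA[GRADIENT_] modules described in the next subsection, and with hypothetical Input[] prompts.

main[unconcons_] :=
 Module[{v},
  v = Input["Number of variables:"];                 (* Step 1 of the algorithm *)
  If[v == 1,
   goldenSection[Input["f (pure function):"],
    Input["a:"], Input["b:"], 0.01],                 (* one variable: Golden Section *)
   gradientSearch[Input["f (expression):"],
    Input["variables:"], Input["initial point:"],
    0.01]]]                                          (* several variables: Gradient Search *)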
Programming Input and Output systems

In this section, we take the data of the various types of problems using the run file (the "Local Kernel" box) to get the results. In this program, we have used two module functions, BA[GOLDEN_] and MA[GRADIENT_]. The main module function is main[unconcons_], which calls the module functions defined above. The combined programming input and output is presented as follows.

When we run the above program, it asks for the number of variables, the initial values, the objective function, etc. When these requirements are complete, it automatically chooses between the two module functions. These sub-module functions take some input, such as the list of variables in the form {x1, x2, …, xn}. We must press "Enter" individually for each required statement. Finally, we get the desired results in the following way.

Example 1
Input
main[unconcons]
Output
Iteration Number = 5 and its interval
of uncertainty is:
[ -3 , -2.27884 ]
The Functional value is : 1.41564
TimeUsed[]
0.114

Example 2
Input
main[unconcons]
Output
Owing to the page limit we do not show
the complete table step by step.

After 35 iterations it gives the
approximate solution (1, 1) and the
optimal value approximately 1.

Example 5
Input:
main[unconcons]
Output:
Iteration Number = 15 and its interval
of uncertainty is:
[2.49654, 2.50387]
The Functional value is 6.25

Example 7
Input:
main[unconcons]
Output:
After the 21st iteration we get the
approximate result (0, 0, 0) with
optimal value 0.

VI. Comparison and Discussion

In this section, we give a time comparison chart to show the efficiency of our technique. The computer configuration used is: Processor: Intel(R) Pentium(R) Dual CPU @ 2.00 GHz; Memory (RAM): 1.00 GB; System type: 32-bit operating system. The input-output above shows that our technique works on NLP problems in several variables. We also present the time comparison between our technique and the built-in command.

Table 2. Comparison with the built-in command

Example No.  Variables  Iterations  Programming command time (s)  Direct command time (s)
1            1          5           0.114                         0.121
2            2          35          0.21                          0.219
3            1          21          0.13                          0.14
4            2          23          0.11                          0.12
5            1          15          0.0125                        0.21
6            2          29          0.212                         0.221
7            3          21          0.221                         0.233
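The direct command time refers to MATHEMATICA's built-in optimizer; the sketch below assumes NMaximize as the built-in command, since the text does not name it. Timings of this kind can be taken with Timing, or with TimeUsed[] as in the session logs above.

(* the built-in command applied to the profit function of Example 5 *)
Timing[NMaximize[{5 x - x^2, 0 <= x <= 10}, x]]
(* -> {cpu seconds, {6.25, {x -> 2.5}}} *)

TimeUsed[]   (* total CPU time used by the session so far, in seconds *)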
Conclusion

In this paper, we developed a combined algorithm and its computer program, incorporating the traditional Golden Section and Gradient Search methods, for solving unconstrained NLP problems. We demonstrated our algorithm and program on a number of numerical examples. We observed that the results obtained by our procedure are identical with those of the other methods, which are laborious and time-consuming. We therefore hope that our program can solve any type of unconstrained NLP and will save time and labour.

References

1. Ravindran, Phillips and Solberg, "Operations Research", John Wiley and Sons, New York.
2. Swarup, K., P. K. Gupta and Man Mohan (2003), "Tracts in Operations Research", Revised Edition, ISBN: 81-8054-059-6.
3. Hillier, F. S. and G. J. Lieberman, "Introduction to Operations Research", McGraw-Hill International Edition, USA, 1995.
4. Winston, W. L., "Operations Research: Applications and Algorithms", Duxbury Press, Belmont, California, USA, 1994.
5. Das, H. K., T. Saha and M. B. Hasan, "Numerical Experiments by Improving a Numerical Method for Solving Game Problems through Computer Algebra", IJDS, 3(1), 23-58, 2011.
6. Wolfe, P., "The Simplex Method for Quadratic Programming", Econometrica, 27(3), 382-398, 1959.
7. Wolfram, S., "Mathematica", Addison-Wesley Publishing Company, Menlo Park, California, New York, 2001.
8. WWW.eom.springer.de.com
9. Taha, H. A., "Operations Research: An Introduction", 8th Edition, Prentice Hall of India Pvt. Ltd., New Delhi.
10. Kambo, N. S., "Mathematical Programming Techniques", Revised Edition, Affiliated East-West Press Pvt. Ltd., Bangalore, 1984, 1991.
11. Gupta, P. K. and D. S. Hira, "Problems in Operations Research", S. Chand & Company Ltd., Ram Nagar, New Delhi-110055.
12. Don, E., "Theory and Problems of Mathematica", Schaum's Outline Series, McGraw-Hill, Washington, D.C., 2001.
13. Van de Panne, C., A. Whinston and E. M. L. Beale, "A Comparison of Two Methods for Quadratic Programming", Operations Research, 14(3), 422-443, 1966.
14. Sanders, J. L., "A Nonlinear Decomposition Principle", Operations Research, 13(2), 266-271, 1965.
15. Jensen, D. L. and A. J. King, "A Decomposition Method for Quadratic Programming", IBM Systems Journal, 31(1), 1992.
16. Saaty, T. L., "Mathematical Methods of Operations Research", McGraw-Hill Book Company, Inc., New York, St. Louis, San Francisco.
17. Sasieni, M., A. Yaspan and L. Friedman, "Operations Research: Methods and Problems", 9th Edition, John Wiley & Sons, Inc., 1966.
18. Kue, A., "Numerical Experiments with One-Dimensional Nonlinear Simplex Search", Computers & Operations Research, 18, 497-506, 1991.
19. Xue, G., "On the Convergence of One-Dimensional Simplex Search", Computers & Operations Research, 16, 113-116, 1989.
20. Choo, E. and C. Kim, "One-Dimensional Simplex Search", Computers & Operations Research, 14, 47-54, 1987.
21. Powell, M. J. D., "Convergence Properties of Algorithms for Nonlinear Optimization", SIAM Review, 28, 487-500, 1986.
