In Section 4, we present the results and discussion with a number of numerical examples. In Section 5, we combine the algorithm and develop a computer-oriented program for solving any type of unconstrained NLPP using MATHEMATICA, together with its input-output procedure. In the last section, we give a comparison.

II. One Variable Unconstrained Optimization

We now begin discussing how to solve some of the types of problems just described by considering the simplest case: unconstrained optimization with just a single variable x, where the differentiable function f(x) to be maximized is concave. Thus the necessary and sufficient condition for a particular solution x = x* to be optimal (a global maximum) is that the derivative df/dx equals zero at x = x*. If this equation can be solved directly for x*, we are done. However, if f(x) is not a particularly simple function, so that the derivative is not just a linear or quadratic function, it may not be possible to solve the equation analytically. If not, the one-dimensional search procedure provides a straightforward way of solving the problem numerically.
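To make the procedure concrete, the following Wolfram Language (MATHEMATICA) fragment is a minimal sketch of a bisection-type one-dimensional search for a concave, differentiable function; the objective f, the bounds and the tolerance are illustrative assumptions and not the paper's own routine main[unconcons].

f[x_] := 12 x - 3 x^4 - 2 x^6;        (* illustrative concave objective *)
oneDimSearch[a0_, b0_, eps_] := Module[{a = a0, b = b0, x},
  While[b - a > eps,
    x = (a + b)/2.;                    (* new trial solution: the midpoint *)
    If[f'[x] > 0, a = x, b = x]        (* keep the half that still contains the maximizer *)
  ];
  {a, b}                               (* final interval of uncertainty *)
];
oneDimSearch[0, 2, 0.01]

Each pass halves the interval of uncertainty, so after n iterations its length is (b0 - a0)/2^n.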
III. Multivariable Unconstrained Optimization

Now consider the problem of maximizing a concave function f(x1, x2, ..., xn) of multiple variables when there are no constraints on the feasible values. Suppose again that the necessary and sufficient condition for optimality, given by the system of equations obtained by setting the respective partial derivatives equal to zero, cannot be solved analytically, so that a numerical search procedure must be used. Hence the one-dimensional search procedure can be extended to this multidimensional problem by repeatedly maximizing along the gradient direction, as in the gradient search used in the examples below.
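As a concrete sketch of this extension, the following Wolfram Language fragment implements a basic gradient search; the two-variable objective g, the starting point and the tolerance are illustrative assumptions rather than the paper's code.

g[{x1_, x2_}] := 2 x1 x2 + 2 x2 - x1^2 - 2 x2^2;        (* illustrative concave objective *)
grad[x_List] := D[g[{u, v}], {{u, v}}] /. {u -> x[[1]], v -> x[[2]]};
gradientSearch[x0_List, tol_] := Module[{x = N[x0], d, t, s},
  While[Norm[d = grad[x]] > tol,
    t = NArgMax[{g[x + s d], s >= 0}, s];                (* step length from a one-variable search *)
    x = x + t d                                          (* move along the gradient direction *)
  ];
  x
];
gradientSearch[{0, 0}, 0.01]

Each iteration reduces the multivariable problem to a one-variable maximization in the step length s, which is where a one-dimensional search of the kind described in Section II can be applied.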
IV. Results & Discussion

In this section, we present a number of numerical examples to show the efficiency of our technique. We also show the complexity of the manual process of solving different types of unconstrained NLP problems.
Example 1
Find the optimal solution to the one-variable problem Max f(x) s/t. the stated interval (Wayne L. Winston [4]).

Solution
The one-dimensional search starts with an initial interval of uncertainty of length 0.8. At each iteration the trial point is evaluated and, according to the outcome of the test at that point, a new, shorter interval of uncertainty is obtained. Iterations 2 and 3 repeat this step; Iteration 4 gives the trial value t = 7/8 and, the test holding again, a further reduced interval of uncertainty. After Iteration 5 we similarly get the desired result, and the optimal solution lies in the final interval of uncertainty.
Example 2
Consider maximizing the function g of two variables given in Lieberman [3].

Solution
It is easy to see that the given problem is concave. We now solve this problem by the gradient search method.

Iteration 1
Let the stated point be the initial solution. Maximizing g along the gradient direction determines the first step length; this gives the new trial solution (0, 1/2), at which the gradient is (1, 0) (its norm is 1).

Iteration 2
The next trial point is x = (0, 1/2) + t (1, 0) = (t, 1/2), so the objective becomes a one-variable function f(t) = g(t, 1/2). Setting f'(t) = 1 - 2t = 0 implies t = 1/2, which gives the trial solution (1/2, 1/2); the gradient there is (0, 1), and the corresponding reported value is 0.707.

Proceeding in the same way, we reach iteration 6 and obtain the desired results.

Example 3
The function to be maximized is the one given in Lieberman [3].

Example 4
Consider maximizing the function given in Taha [9].

Real life examples of Unconstrained NLP

Example 5
It costs a monopolist $5/unit to produce a product. If he produces x units of the product, then each unit can be sold for 10 - x dollars (0 ≤ x ≤ 10). To maximize profit, how much should the monopolist produce? (Wayne L. Winston [4]).

Example 6
A monopolist producing a single product has two types of customers. If x1 units are produced for customer 1, then customer 1 is willing to pay a price of 70 - x1 dollars. If x2 units are produced for customer 2, then customer 2 is willing to pay a price of 150 - 15x2 dollars. For x > 0, the cost of manufacturing x units is 100 + 15x dollars. To maximize profit, how much should the monopolist sell to each customer? (Winston [4]).

Example 7
Consider the following unconstrained optimization problem (Lieberman [3]): maximize the stated function of three variables, starting with the initial solution (1, 1, 1).
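As an illustration of how such a real-life example can be handled directly with a built-in MATHEMATICA command, the following one-liner solves Example 5; the profit expression x (10 - x) - 5 x simply follows from the data given above.

Maximize[{x (10 - x) - 5 x, 0 <= x <= 10}, x]
(* {25/4, {x -> 5/2}}: produce 2.5 units for a maximum profit of $6.25 *)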
V. A Generalized NLP Technique

In this section, we first improve an algorithm for solving unconstrained types of NLPP. Then we develop a code using the programming language MATHEMATICA [7].

Algorithm
Step 1: Input the number of variables v.
Step 2: If the number of variables v = 1, then go to the following sub-step.
Sub-Step 1: Let x* be the optimal solution to the NLP Max f(x) s/t. the given interval.
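The paper's complete routine main[unconcons] is not reproduced here; the fragment below is only a hypothetical sketch of how Steps 1-2 could dispatch on the number of variables v, reusing the one-dimensional and gradient search sketches given in Sections II and III (solveUncon and its arguments are assumed names, not the paper's code).

solveUncon[v_Integer, a_, b_] := If[v == 1,
  oneDimSearch[a, b, 0.01],                              (* v = 1: one-dimensional search on [a, b] *)
  gradientSearch[ConstantArray[(a + b)/2., v], 0.01]     (* v > 1: gradient search from a midpoint start *)
];
solveUncon[1, 0, 2]      (* single-variable case, uses f from the Section II sketch *)
solveUncon[2, 0, 2]      (* two-variable case, uses g from the Section III sketch *)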
Example 1
Input
main[unconcons]
Output
Iteration Number = 5 and its interval of uncertainty is: [ -3 , -2.27884 ]
The functional value is: 1.41564
TimeUsed[]
0.114

Example 2
Input
main[unconcons]
Output
Because of page limitations, we do not show the complete table step by step.
VI. Comparison and Discussion

In this section, we give a time comparison chart to show the efficiency of our technique. We used the following computer configuration: Processor: Intel(R) Pentium(R) Dual CPU @ 2.00 GHz; Memory (RAM): 1.00 GB; System type: 32-bit operating system. The input-output shows that our technique works on NLP problems in several variables. We also present the time comparison between our technique and the built-in command.

Table 2. Comparison with the Built-in Command

No.   No. of Variables   Iterations Used   Programming Command Time (s)   Direct Command Time (s)
1     1                  5                 0.114                          0.121
2     2                  35                0.21                           0.219
3     1                  21                0.13                           0.14
4     2                  23                0.11                           0.12
5     1                  15                0.0125                         0.21
6     2                  29                0.212                          0.221
7     3                  21                0.221                          0.233
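For reference, timings of the kind reported in Table 2 can be collected in MATHEMATICA with TimeUsed[] (as in the output above) or with AbsoluteTiming; the lines below are a minimal sketch that times a built-in direct command on an illustrative objective, not a reproduction of the paper's measurements.

AbsoluteTiming[NMaximize[12 x - 3 x^4 - 2 x^6, x]]                     (* {elapsed seconds, {maximum, {x -> maximizer}}} *)
t0 = TimeUsed[]; NMaximize[12 x - 3 x^4 - 2 x^6, x]; TimeUsed[] - t0   (* CPU seconds used by the call *)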
References

1. Ravindran, Phillips and Solberg, "Operations Research", John Wiley and Sons, New York.
2. Swarup, K., P. K. Gupta and Man Mohan (2003), "Tracts in Operation Research", Revised Edition, ISBN: 81-8054-059-6.
3. Hillier, F. S. and G. J. Lieberman, "Introduction to Operations Research", McGraw-Hill International Edition, USA, 1995.
4. Wayne L. Winston, "Operations Research: Applications and Algorithms", Duxbury Press, Belmont, California, U.S.A., 1994.
5. Das, H. K., T. Saha and M. B. Hasan, "Numerical Experiments by Improving a Numerical Method for Solving Game Problems through Computer Algebra", IJDS, 3(1), 23-58, 2011.
6. Wolfe, P., "The Simplex Method for Quadratic Programming", Econometrica, 27(3), 382-398, 1959.
7. Wolfram, S., "Mathematica", Addison-Wesley Publishing Company, Menlo Park, California, New York, 2001.
8. WWW.eom.springer.de.com
9. Taha, H. A., "Operations Research", 8th Edition, Prentice Hall of India Pvt. Ltd., New Delhi, 2007.
10. Kambo, N. S., "Mathematical Programming Techniques", Revised Edition, Affiliated East-West Press Pvt. Ltd., Bangalore (1984, 1991).
11. Gupta, P. K. and D. S. Hira, "Problems in Operations Research", S. Chand & Company Ltd., Ram Nagar, New Delhi-110055.
12. Don, E., "Theory and Problems of Mathematica", Schaum's Outline Series, McGraw-Hill, Washington, D.C., 2001.
13. Van de Panne, C., A. Whinston and E. M. L. Beale, "A Comparison of Two Methods for Quadratic Programming", Operations Research, 14(3), 422-443, 1966.
14. Sanders, J. L., "A Nonlinear Decomposition Principle", Operations Research, 13(2), 266-271, 1965.
15. Jensen, D. L. and A. J. King, "A Decomposition Method for Quadratic Programming", IBM Systems Journal, 31(1), 1992.
16. Thomas L. Saaty, "Mathematical Methods of Operations Research", McGraw-Hill Book Company, Inc., New York, St. Louis, San Francisco.
17. Sasieni, M., A. Yaspan and L. Friedman, "Operations Research: Methods and Problems", 9th Edition, John Wiley & Sons, Inc., 1966.
18. Kue, A., "Numerical Experiments with One-Dimensional Nonlinear Simplex Search", Computers & Operations Research, 18, 497-506, 1991.
19. Xue, G., "On the Convergence of One-Dimensional Simplex Search", Computers & Operations Research, 16, 113-116, 1989.
20. Choo, E. and C. Kim, "One-Dimensional Simplex Search", Computers & Operations Research, 14, 47-54, 1987.