
EEU623: Operations Research Techniques

Lecture Notes by:


Dr. P. P. Bedekar
Professor of Electrical Engineering
Govt. College of Engineering, Amravati (M.S.)

Topics covered:
Nonlinear programming: One dimensional minimization methods,
unconstrained optimization

1) One dimensional minimization methods:


i) Unrestricted search –
(a) Search with fixed step size
(b) Search with accelerated step size
ii) Restricted search
(a) Fibonacci method
(b) Golden section method

2) Unconstrained optimization:
i) Steepest descent method
ii) Conjugate gradient method



Non-linear Programming
Unimodal function:
A function is unimodal if,
i) x1 < x2 < x* implies that f(x*) < f(x2) < f(x1)
ii) x2 > x1 > x* implies that f(x*) < f(x1) < f(x2)
where x* is the minimum point.
A function is unimodal if it has only one peak (maximum) or only one valley (minimum) in a given
interval. Some examples of unimodal functions are given below –

[Figure: two examples of unimodal functions f(x) on the interval a ≤ x ≤ b]
The following function, defined in the range a ≤ x ≤ b, is not a unimodal function, as it has
more than one minimum in the given range (moreover, it also has some maxima in the given range).



One dimensional minimization – Unrestricted search:
In most practical problems, the optimum solution is known to lie within restricted ranges of the decision variables.
In some cases this range is not known, and hence the search has to be made with no restrictions on the values of the variables.
Search with fixed step size:
(Considering the problem of minimization, with the assumption of unimodality)
01. Start with initial choice of x i.e. x1
02. Assume step size s (for better accuracy, the step size should be small)
03. Find f1 = f(x1)
04. Find x2 (x2 = x1 + s)
05. Find f2 = f(x2)
06. If f2 < f1
then the search direction is correct, go to step 07
else change the search direction (i.e. set s = -s) and go to step 07
07. i = 2
08. xi = xi–1 + s
09. fi = f(xi)
10. If fi < fi-1
then i = i + 1, go to step 08
else go to step 11
11. Print xi-1 and fi-1 (the best point found; the minimum is bracketed between xi-2 and xi)
12. Stop
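
A minimal Python sketch of the fixed-step search described by steps 01-12 is given below; the function name fixed_step_search and its arguments are choices made here, not part of the notes, and f is assumed to be unimodal near the starting point.

def fixed_step_search(f, x1, s=0.1):
    f1 = f(x1)
    x2, f2 = x1 + s, f(x1 + s)
    if f2 > f1:                     # step 06: wrong direction, reverse the step
        s = -s
        x2, f2 = x1 + s, f(x1 + s)
        if f2 > f1:                 # x1 itself already brackets the minimum
            return x1, f1
    x_prev, f_prev = x2, f2
    while True:                     # steps 08-10: keep stepping while f decreases
        x_new = x_prev + s
        f_new = f(x_new)
        if f_new < f_prev:
            x_prev, f_prev = x_new, f_new
        else:
            return x_prev, f_prev   # step 11: last improving point

# Q. 01 below: minimum of f = x(x - 1.5) starting from x = 0 with s = 0.1
x_star, f_min = fixed_step_search(lambda x: x * (x - 1.5), 0.0, 0.1)
print(x_star, f_min)                # a point near the true minimum x = 0.75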

Q. 01: Find the minimum of f = x (x – 1.5) starting from x = 0. Take step size s = 0.1
Soln.: Given x1 = 0 and s = 0.1
f(x1) = 0, x2 = 0 + 0.1 = 0.1, f(x2) = – 0.14
As f2 < f1 hence the search direction is correct. The iterations performed are shown in the
table below –
Iteration count xi f(xi) Is f(xi) < f(xi-1)
1 0 0 ---
2 0.10 – 0.14 Yes
3 0.20 – 0.26 Yes
4 0.30 – 0.36 Yes



5 0.40 – 0.44 Yes
6 0.50 – 0.5 Yes
7 0.60 – 0.54 Yes
8 0.70 – 0.56 Yes
9 0.80 – 0.56 No

Hence, x* = 0.8 and fmin = – 0.56

Q. 02: Find the minimum of f = x (x – 1.2) starting from x = 1.6. Take step size s = 0.1
Soln.: Given x1 = 1.6 and s = 0.1
f(x1) = 0.64, x2 = 1.6 + 0.1 = 1.7, f(x2) = 0.85
f2 > f1 indicates that the search direction is wrong, hence the search direction is changed i.e.
s = –s = –0.1. The iterations performed are shown in the table below –
Iteration count xi f(xi) Is f(xi) < f(xi-1)
1 1.6 0.64 ---
2 1.5 0.45 Yes
3 1.4 0.28 Yes
4 1.3 0.13 Yes
5 1.2 0 Yes
6 1.1 – 0.11 Yes
7 1.0 – 0.20 Yes
8 0.9 – 0.27 Yes
9 0.8 – 0.32 Yes
10 0.7 – 0.35 Yes
11 0.6 – 0.36 Yes
12 0.5 – 0.35 No

Hence, x* = 0.6 and fmin = – 0.36

Search with accelerated step size:


Although the search with a fixed step size appears to be very simple, its major limitation
arises from the unrestricted nature of the region in which the minimum can lie: with a small step size,
a large amount of computational work may be required.
An improvement can be achieved by increasing the step size gradually until a
minimum point is bracketed. A simple method consists of doubling the step size as long as the
move results in an improvement of the objective function.



The algorithm is the same as that of the search with fixed step size, except that in each iteration
the step size is updated as si = si-1 + ∆s (if the step size is doubled in each iteration, then ∆s = si-1), and each trial point is taken at a distance si from the starting point x1.
Algorithm (Considering the problem of minimization, with the assumption of unimodality):
01. Start with initial choice of x i.e. x1
02. Assume step size s (for better accuracy, the step size should be small)
03. Find f1 = f(x1)
04. Find x2 (x2 = x1 + s)
05. Find f2 = f(x2)
06. If f2 < f1
then the search direction is correct, go to step 07
else change the search direction (i.e. set s = -s) and go to step 07
07. i = 2, si = s
08. xi = x1 + si
09. fi = f(xi)
10. If fi < fi-1
then i = i + 1, si = si-1 + ∆s, go to step 08
else go to step 11
11. Print xi-1 and fi-1 (the best point found; the minimum is bracketed between xi-2 and xi)
12. Stop
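
The accelerated-step variant can be sketched in the same way; following the worked example below, the step length is doubled after every successful move and each trial point is taken at that distance from the starting point x1 (the function name is again only illustrative).

def accelerated_step_search(f, x1, s=0.05):
    f1 = f(x1)
    x2, f2 = x1 + s, f(x1 + s)
    if f2 > f1:                      # step 06: reverse the search direction
        s = -s
        x2, f2 = x1 + s, f(x1 + s)
        if f2 > f1:
            return x1, f1
    x_prev, f_prev = x2, f2
    while True:
        s *= 2                       # step size doubled in each iteration
        x_new = x1 + s               # trial point at distance s from x1
        f_new = f(x_new)
        if f_new < f_prev:
            x_prev, f_prev = x_new, f_new
        else:
            return x_prev, f_prev    # the minimum is now bracketed

# Worked example below: f = x(x - 1.5) from x = 0 with initial s = 0.05
print(accelerated_step_search(lambda x: x * (x - 1.5), 0.0, 0.05))  # about (0.8, -0.56)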
Q.: Find the minimum of f = x (x – 1.5) starting from x = 0. Take initial step size s = 0.05
Soln.: Given x1 = 0 and s = 0.05
f(x1) = 0, x2 = 0 + 0.05 = 0.05, f(x2) = – 0.0725
As f2 < f1 hence the search direction is correct. The iterations performed are shown in the
table below –
Iteration count Step (s) xi f(xi) Is f(xi) < f(xi-1)
1 -- 0 0 ---
2 0.05 0.05 – 0.0725 Yes
3 0.10 0.10 – 0.140 Yes
4 0.20 0.20 – 0.260 Yes
5 0.40 0.40 – 0.440 Yes
6 0.80 0.80 – 0.560 Yes
7 1.60 1.60 + 0.160 No

Hence, x* = 0.8 and fmin = – 0.56



One dimensional minimization – Restricted search:
Fibonacci Method:
This method can be used to find the minimum of a function on one variable even if the
function is not continuous. This is an elimination method, and has the following limitations –
i) The initial interval of uncertainty, in which the optimum lies, has to be known.
ii) The function which is to be optimized, has to be unimodal in the given interval
iii) The exact optimum cannot be located in this method. Only an interval, known as the
final interval of uncertainty will be known.
iv) The number of function evaluations (no. of iterations/ no. of experiments) has to be
specified beforehand.
The method makes use of the sequence of Fibonacci numbers. These numbers are defined as
F0 = F1 = 1
Fn = Fn-1 + Fn-2, n = 2, 3, 4, …
which yield the sequence 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ….
Procedure:
1. Let L0 be the initial interval of uncertainty defined by a ≤ x ≤ b and n be the total number of
experiments to be conducted. Define L2* = (Fn-2 / Fn) L0 and place the first two experiment points
x1 and x2, which are located at a distance of L2* from each end of L0.
2. Discard part of the interval using the unimodality assumption. Then there remains a smaller
interval of uncertainty, with one experiment point left in it.
3. Find the distance of this point from one side and place the next point at the same distance
from the other side. This is next experiment point.
4. Repeat steps (2) and (3) up to the specified number of experiments (iterations).
After performing n experiments, the final interval of uncertainty will be known; it is denoted as Ln.
The ratio of the final interval of uncertainty to the initial interval is called the reduction ratio:
Reduction ratio = Ln / L0 = F0 / Fn = 1 / Fn
It can be seen that the reduction ratio depends only on the number of experiments to be performed
and does not depend on the function to be optimized.
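
Under the same assumptions (unimodal f, known initial interval, n fixed in advance) the procedure can be sketched in Python as below; the helper name fibonacci_search and the small nudge applied when the last two points coincide are implementation choices, not part of the notes.

def fibonacci_search(f, a, b, n):
    F = [1, 1]                               # F0 = F1 = 1
    for _ in range(2, n + 1):
        F.append(F[-1] + F[-2])              # Fk = Fk-1 + Fk-2
    L2 = (F[n - 2] / F[n]) * (b - a)         # L2* = (Fn-2 / Fn) * L0
    x1, x2 = a + L2, b - L2                  # experiments 1 and 2
    f1, f2 = f(x1), f(x2)
    for _ in range(n - 2):                   # experiments 3 .. n
        if f1 < f2:                          # minimum lies in [a, x2]: drop (x2, b]
            b, x2, f2 = x2, x1, f1
            x1 = b - (x2 - a)                # mirror the retained point
            if abs(x1 - x2) < 1e-9 * (b - a):   # last two points may coincide
                x1 -= 1e-6 * (b - a)
            f1 = f(x1)
        else:                                # minimum lies in [x1, b]: drop [a, x1)
            a, x1, f1 = x1, x2, f2
            x2 = a + (b - x1)
            if abs(x2 - x1) < 1e-9 * (b - a):
                x2 += 1e-6 * (b - a)
            f2 = f(x2)
    return (a, x2) if f1 < f2 else (x1, b)   # final interval of uncertainty

# Que. 01 below: f = 2x^3 - 4x + 3 on [0, 2] with n = 6
print(fibonacci_search(lambda x: 2 * x**3 - 4 * x + 3, 0.0, 2.0, 6))
# about (0.769, 0.923): an interval of length roughly L0/F6 = 2/13 containing the minimum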
Que. 01: Minimize f = 2 x3 – 4 x + 3 in the range [0 , 2] using Fibonacci method. Perform six
iterations.
Solution: Given function f(x) = 2 x3 – 4 x + 3
The range in which minimum lies; a = 0, b = 2



The number of experiments (iterations) to be performed; n = 6
F0 = 1, F1 = 1, F2 = 2, F3 = 3, F4 = 5, F5 = 8, F6 = 13; hence Fn = F6 = 13
Initial interval L0 = b – a = 2 – 0 = 2
L2* = (Fn-2 / Fn) x L0 = (5/13) x 2 = 0.7692
Experiments No. 1 and 2: Distance of the two experiment points from the two ends = 0.7692
x1 = 0 + 0.7692 = 0.7692, and x2 = 2 – 0.7692 = 1.2308

f(x1) = 0.8334 and f(x2) =1.8058. As f(x1) < f(x2), hence x2 is discarded

The new range available is [0 to 1.2308]


Experiment No. 3: In the range [0 to 1.2308], one point (x1) is at a distance of 0.7692 from left side.
Hence place a new point (x3) at the same distance (0.7692) from right side, i.e. place x3 at 1.2308 –
0.7692 (at 0.4616).

f(x1) = 0.8334 and f(x3) =1.3503. As f(x1) < f(x3), hence x3 is discarded
The new range available is [0.4616 to 1.2308]
Experiment No. 4: In the range [0.4616 to 1.2308], one point (x1) is at a distance of 0.3076 (=
0.7692 – 0.4616) from left side. Hence place a new point (x4) at the same distance (0.3076) from
right side, i.e. place x4 at 1.2308 – 0.3076 (at 0.9232).

f(x1) = 0.8334 and f(x4) =0.8809. As f(x1) < f(x4), hence x4 is discarded
The new range available is [0.4616 to 0.9232]
Experiment No. 5: In the range [0.4616 to 0.9232], one point (x1) is at a distance of 0.3076 from left
side. Hence place a new point (x5) at the same distance (0.3076) from right side, i.e. place x5 at
0.9232 – 0.3076 (at 0.6156).



f(x1) = 0.8334 and f(x5) =1.0042. As f(x1) < f(x5), hence x5 is discarded
The new range available is [0.6156 to 0.9232]
Experiment No. 6: In the range [0.6156 to 0.9232], one point (x1) is at a distance of 0.1536 from left
side. Hence place a new point (x6) at the same distance (0.1536) from right side, i.e. place x6 at
0.9232 – 0.1536 (at 0.7696).

f(x1) = 0.8334 and f(x6) =0.8332. As f(x6) < f(x1), hence x1 is discarded
The new range available is [0.7692 to 0.9232]

As it is asked to perform six experiments, hence we can conclude that the minimum lies in the
range [0.7692 to 0.9232].

The final range (after performing six experiments) is L6 = 0.9232 – 0.7692 = 0.154
After performing 6 experiments (iterations), we get a reduction ratio of L6 / L0 = 0.154 / 2 = 0.077,
which is the same as 1 / Fn = 1 / 13 = 0.0769

Que. 02: Minimize f = x3 – 4 x in the range [1 , 4] using Fibonacci method. Perform six iterations.
Solution: Given function f(x) = x3 – 4 x
The range in which minimum lies; a = 1, b = 4
The number of experiments (iterations) to be performed; n = 6
F0 = 1, F1 = 1, F2 = 2, F3 = 3, F4 = 5, F5 = 8, F6 = 13; hence Fn = F6 = 13
Initial interval L0 = b – a = 4 – 1 = 3
L2* = (Fn-2 / Fn) x L0 = (5/13) x 3 = 1.1538
Experiments No. 1 and 2: Distance of the two experiment points from the two ends = 1.1538
x1 = 1 + 1.1538 = 2.1538, and x2 = 4 – 1.1538 = 2.8462
f(x1) = 1.3759 and f(x2) = 11.6718. As f(x1) < f(x2), hence x2 is discarded
The new range available is [1 to 2.8462]
Experiment No. 3: In the range [1 to 2.8462], one point (x1) is at a distance of 1.1538 from left side.
Hence place a new point (x3) at the same distance (1.1538) from right side, i.e. place x3 at 2.8462 –
1.1538 (at 1.6924).



f(x1) = 1.3759 and f(x3) = –1.92219. As f(x3) < f(x1), hence x1 is discarded
The new range available is [1 to 2.1538]
Experiment No. 4: In the range [1 to 2.1538], one point (x3) is at a distance of 0.6924 from left side.
Hence place a new point (x4) at the same distance (0.6924) from right side, i.e. place x4 at 2.1538 –
0.6924 (at 1.4614).
f(x3) = – 1.92219 and f(x4) = – 2.7245. As f(x4) < f(x3), hence x3 is discarded
The new range available is [1 to 1.6924]
Experiment No. 5: In the range [1 to 1.6924], one point (x4) is at a distance of 0.4614 from left side.
Hence place a new point (x5) at the same distance (0.4614) from right side, i.e. place x5 at 1.6924 –
0.4614 (at 1.231).
f(x4) = – 2.7245 and f(x5) = – 3.0585. As f(x5) < f(x4), hence x4 is discarded
The new range available is [1 to 1.4614]
Experiment No. 6: In the range [1 to 1.4614], one point (x5) is at a distance of 0.231 from left side.
Hence place a new point (x6) at the same distance (0.231) from right side, i.e. place x6 at 1.4614 –
0.231 (at 1.2304).
f(x5) = – 3.0585 and f(x6) = – 3.0589. As f(x6) < f(x5), hence x5 is discarded
The new range available is [1 to 1.231]

The final range (after performing six experiments) is L6 = 1.231 – 1 = 0.231
After performing 6 experiments (iterations), we get a reduction ratio of L6 / L0 = 0.231 / 3 = 0.077,
which is the same as 1 / Fn = 1 / 13 = 0.0769

Golden Section Method:


The golden section method is the same as the Fibonacci method, except that in the golden section
method the number of experiments need not be specified in advance.
This method can be used to find the minimum of a function on one variable even if the
function is not continuous. This is an elimination method, and has the following limitations –
i) The initial interval of uncertainty, in which the optimum lies, has to be known.
ii) The function which is to be optimized, has to be unimodal in the given interval
iii) The exact optimum cannot be located in this method. Only an interval, known as the
final interval of uncertainty will be known.
Location of the experiment points is obtained from:
L2* = L0 / γ²



With γ = 1.618, L2* = L0 / γ² = 0.382 L0. The ratio γ has a historical background. Ancient Greek
architects believed that a building having sides d and b satisfying (d + b) / d = d / b = γ will have
the most pleasing properties. This condition gives γ = 1 + 1/γ, i.e. γ² = γ + 1, whose positive root is
γ = (1 + √5)/2 ≈ 1.618 (the golden ratio), so that 1/γ² ≈ 0.382.

Rectangular building of sides b and d


It is also found in Euclid’s geometry that when a line segment is divided into two parts such
that the ratio of the whole to the larger part equals the ratio of the larger part to the smaller, the
division is known as the “golden section” or “golden mean”.
Procedure:
1. Let L0 be the initial interval of uncertainty defined by a ≤ x ≤ b. Define L2* = 0.382 L0 and
place the first two experiment points x1 and x2 which are located at a distance of L2* from
each end of L0.
2. Discard part of the interval using the unimodality assumption. Then there remains a smaller
interval of uncertainty, with one experiment point left in it.
3. Find the distance of this point from one side and place the next point at the same distance
from the other side. This is next experiment point.
4. Repeat steps (2) and (3) until the interval of uncertainty becomes sufficiently small.
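
A Python sketch of this procedure is given below; since the number of experiments is not fixed in advance, the loop simply stops once the interval of uncertainty falls below a chosen tolerance (the function name and the tolerance are illustrative choices).

def golden_section_search(f, a, b, tol=1e-3):
    ratio = 0.381966                  # L2*/L0 = 1/gamma^2 (0.382 in the notes)
    x1, x2 = a + ratio * (b - a), b - ratio * (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:
        if f1 < f2:                   # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - (x2 - a)         # mirror the retained point
            f1 = f(x1)
        else:                         # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + (b - x1)
            f2 = f(x2)
    return a, b                       # final interval of uncertainty

# Que. below: f = x(x - 1.5) on [0, 2]
print(golden_section_search(lambda x: x * (x - 1.5), 0.0, 2.0, tol=0.5))
# about (0.472, 0.944), matching the final range in the worked example below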

Que.: Minimize f = x (x – 1.5) in the range [0 , 2] using Golden section method.


Solution: Given function f(x) = x (x – 1.5)
The range in which minimum lies; a = 0, b = 2
Initial interval L0 = b – a = 2 – 0 = 2
Define L2* = 0.382 L0 and place first two points x1 & x2 which are located at a distance of L2* from
each end of L0
Distance of experiments (points) from two ends = L2* = 0.382 L0 = 0.382 x 2 = 0.764
We will perform four experiments as given below –
Experiments No. 1 and 2: Distance of the two experiment points from the two ends = 0.764
x1 = 0 + 0.764 = 0.764, and x2 = 2 – 0.764 = 1.236
f(x1) = – 0.5623 and f(x2) = – 0.3263. As f(x1) < f(x2), hence x2 is discarded
The new range available is [0 to 1.236]



Experiment No. 3: In the range [0 to 1.236], one point (x1) is at a distance of 0.764 from left side.
Hence place a new point (x3) at the same distance (0.764) from the right side, i.e. place x3 at 1.236 –
0.764 (at 0.472).
f(x1) = – 0.5623 and f(x3) = – 0.4852. As f(x1) < f(x3), hence x3 is discarded
The new range available is [0.472 to 1.236]
Experiment No. 4: In the range [0.472 to 1.236], one point (x1) is at a distance of 0.292 from left
side. Hence place a new point (x4) at the same distance (0.292) from right side, i.e. place x4 at 1.236
– 0.292 (at 0.944).
f(x1) = – 0.5623 and f(x4) = – 0.5248. As f(x1) < f(x4), hence x4 is discarded
The new range available is [0.472 to 0.944]

The final range (after performing four experiments) is L4 = 0.944 – 0.472 = 0.472
After performing four experiments (iterations), we get a reduction ratio of L4 / L0 = 0.472 / 2 = 0.236.

Unconstrained Optimization –
The gradient of a function of n variables is an n-component vector, and it has a very important
property: if we move along the gradient direction from any point in n-dimensional space, the function
value increases at the fastest rate. Hence the gradient direction is called the “direction of steepest
ascent.” Unfortunately, the direction of steepest ascent is a local property and not a global one.
Since the gradient vector represents the direction of steepest ascent, the negative of the gradient
vector denotes the direction of steepest descent.
Any method that makes use of the gradient vector can be expected to reach the minimum faster
than one that does not. All the descent methods make use of the gradient vector, either directly or
indirectly, in finding the search direction.
Steepest Descent Method (Cauchy’s):
The use of the negative of the gradient vector as a direction for minimization was first made
by Cauchy in 1847. In this method, we start from an initial trial point X1 and iteratively move along
the steepest descent directions until the optimum point is found.
Algorithm (for function minimization)
01. Start with initial solution i.e. initial design vector (X1). If it is not given, then it is taken as zero
vector.
02. Set iteration count i = 1.
03. Find the gradient of the function f at X = Xi. It is written as ∇fi. (The gradient is the vector of
partial derivatives of the function w.r.t. the design variables.)
∇fi = (∂f/∂x1, ∂f/∂x2, … , ∂f/∂xn), evaluated at X = Xi
04. The gradient gives the direction of “ascent”. The search direction is taken as the direction of “descent”.
Hence the search direction Si = -∇fi
05. Find Xi + λi Si in terms of λi
06. Write f(Xi + λi Si)
07. Find the optimum value of λi, i.e. λi*. For this, set d f(Xi + λi Si) / dλi = 0
08. Determine the next point Xi+1 = Xi + λi* Si
09. Check for optimality:
If ||Xi+1 - Xi|| ≤ ε then go to step (11)
else go to step (10)
10. Increment iteration count by 1, i.e. i = i + 1, and go to step (03)
11. Print results
12. Stop
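
Because every worked example below is a quadratic, the exact line search of step 07 has a closed form: writing f(X) = 0.5 X^T A X + b^T X gives the gradient A X + b and the optimum step λi* = -(∇fi^T Si)/(Si^T A Si). A minimal numpy sketch under that assumption is given below (the quadratic form and the name steepest_descent are choices made for this sketch, not part of the notes).

import numpy as np

def steepest_descent(A, b, x0, n_iter=2):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        grad = A @ x + b                    # gradient of the quadratic at x
        s = -grad                           # step 04: steepest descent direction
        lam = -(grad @ s) / (s @ A @ s)     # steps 05-07: exact line search
        x = x + lam * s                     # step 08: next point
        # (the convergence test of step 09 is omitted for brevity)
    return x

# Q. 01 below: f = x1 - x2 + 2*x1^2 + 2*x1*x2 + x2^2, i.e.
# A = [[4, 2], [2, 2]], b = [1, -1], starting from (0, 0)
A = np.array([[4.0, 2.0], [2.0, 2.0]])
b = np.array([1.0, -1.0])
print(steepest_descent(A, b, [0.0, 0.0], n_iter=2))   # about [-0.8  1.2]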

Q. 01: Minimize f(x1, x2) = x1 - x2 + 2x1² + 2x1x2 + x2², starting from point (0, 0). Solve up to two
iterations using the Steepest Descent method
Soln.: Given function f(x1, x2) = x1 - x2 + 2x1² + 2x1x2 + x2²
Its gradient is
∇f = (∂f/∂x1, ∂f/∂x2) = (1 + 4x1 + 2x2, -1 + 2x1 + 2x2)
Starting point X1 = (0, 0) (Given)

Iteration I:
Gradient, ∇f1 = (1, -1)
Search direction S1 = -∇f1 = (-1, 1)
X1 + λ1 S1 = (-λ1, λ1)
f(X1 + λ1 S1) = -λ1 - λ1 + 2λ1² - 2λ1² + λ1² = λ1² - 2λ1
d f(X1 + λ1 S1) / dλ1 = 0 ⇒ 2λ1 - 2 = 0 ⇒ λ1* = 1
New point X2 = X1 + λ1* S1 = (-1, 1)
Check for optimality: X2 - X1 = (-1, 1)
The difference is not small, so proceed to iteration II.

Iteration II:
Gradient, ∇f2 = (-1, -1)
Search direction S2 = -∇f2 = (1, 1)
X2 + λ2 S2 = (-1 + λ2, 1 + λ2)
f(X2 + λ2 S2) = (-1 + λ2) - (1 + λ2) + 2(-1 + λ2)² + 2(-1 + λ2)(1 + λ2) + (1 + λ2)² = 5λ2² - 2λ2 - 1
d f(X2 + λ2 S2) / dλ2 = 0 ⇒ 10λ2 - 2 = 0 ⇒ λ2* = 0.2
New point X3 = X2 + λ2* S2 = (-0.8, 1.2)
As it is asked to perform two iterations only, X* = (-0.8, 1.2) and fmin = -1.2

Q. 02: Minimize f(x1, x2) = 6x1² + 2x2² - 6x1x2 - x1 - 2x2 by the Steepest Descent method, starting
from point (0, 0). Solve up to two iterations.
Soln.: Given function f(x1, x2) = 6x1² + 2x2² - 6x1x2 - x1 - 2x2
Its gradient is
∇f = (∂f/∂x1, ∂f/∂x2) = (12x1 - 6x2 - 1, 4x2 - 6x1 - 2)
Starting point X1 = (0, 0) (Given)
Iteration I:
Gradient, ∇f1 = (-1, -2)
Search direction S1 = -∇f1 = (1, 2)
X1 + λ1 S1 = (λ1, 2λ1)
f(X1 + λ1 S1) = 6λ1² + 8λ1² - 12λ1² - λ1 - 4λ1 = 2λ1² - 5λ1
d f(X1 + λ1 S1) / dλ1 = 0 ⇒ 4λ1 - 5 = 0 ⇒ λ1* = 1.25
New point X2 = X1 + λ1* S1 = (1.25, 2.50)
Check for optimality: X2 - X1 = (1.25, 2.50)
The difference is not small, so proceed to iteration II.

Iteration II:
Gradient, ∇f2 = (-1, 0.5)
Search direction S2 = -∇f2 = (1, -0.5)
X2 + λ2 S2 = (1.25 + λ2, 2.5 - 0.5λ2)
f(X2 + λ2 S2) = 9.5λ2² - 1.25λ2 - 3.125
d f(X2 + λ2 S2) / dλ2 = 0 ⇒ 19λ2 - 1.25 = 0 ⇒ λ2* = 0.0658
New point X3 = X2 + λ2* S2 = (1.3158, 2.4671)
As it is asked to perform two iterations only, X* = (1.3158, 2.4671) and fmin = -3.166
Q. 03: The profit per acre of a farm is given by
20x1 + 26x2 + 4x1x2 - 4x1² - 3x2²
where x1 and x2 are the labor cost and the fertilizer cost respectively. Find the values of x1 and x2 to
maximize the profit. Use the Steepest Descent method and perform two iterations.



Soln.: Given function 20x1 + 26x2 + 4x1x2 - 4x1² - 3x2²
As this function is to be maximized, we minimize its negative, i.e. we take
f(x1, x2) = -20x1 - 26x2 - 4x1x2 + 4x1² + 3x2²
Its gradient is
∇f = (∂f/∂x1, ∂f/∂x2) = (-20 - 4x2 + 8x1, -26 - 4x1 + 6x2)
Starting point X1 = (0, 0) (Assumed)
Iteration I:
Gradient, ∇f1 = (-20, -26)
Search direction S1 = -∇f1 = (20, 26)
X1 + λ1 S1 = (20λ1, 26λ1)
f(X1 + λ1 S1) = 1548λ1² - 1076λ1
d f(X1 + λ1 S1) / dλ1 = 0 ⇒ 3096λ1 - 1076 = 0 ⇒ λ1* ≈ 0.35
New point X2 = X1 + λ1* S1 = (7, 9.1)
Check for optimality: X2 - X1 = (7, 9.1)
The difference is not small, so proceed to iteration II.

Iteration II:
Gradient, ∇f2 = (-0.4, 0.6)
Search direction S2 = -∇f2 = (0.4, -0.6)
X2 + λ2 S2 = (7 + 0.4λ2, 9.1 - 0.6λ2)
f(X2 + λ2 S2) = 2.68λ2² - 0.52λ2 - 186.97
d f(X2 + λ2 S2) / dλ2 = 0 ⇒ 5.36λ2 - 0.52 = 0 ⇒ λ2* ≈ 0.097
New point X3 = X2 + λ2* S2 = (7.039, 9.042)
As it is asked to perform two iterations only,
X* = (7.039, 9.042) ≈ (7, 9) and fmin ≈ -187, i.e. Profitmax ≈ 187

Conjugate Gradient Method (Fletcher-Reeves method):


The convergence characteristics of the steepest descent method can be improved greatly by
modifying it into a conjugate gradient method.
This method is similar to the Steepest Descent method. The initial search direction (i.e. the search
direction in the first iteration) is obtained as S1 = -∇f1 (the same as in the Steepest Descent method).
The search direction for the further iterations is found as:
Si = -∇fi + ( ||∇fi||² / ||∇fi-1||² ) Si-1
Algorithm (for function minimization)

01. Start with initial solution i.e. initial design vector (X1). If it is not given, then it is taken as zero
vector.
02. Set iteration count i = 1.
03. Find the gradient of the function f at X = X1. It is written as ∇f1.
04. Find the first search direction, S1 = -∇f1
05. Find Xi + λi Si in terms of λi
06. Write f(Xi + λi Si)
07. Find the optimum value of λi, i.e. λi*. For this, set d f(Xi + λi Si) / dλi = 0
08. Determine the next point Xi+1 = Xi + λi* Si
09. Check for optimality:
If ||Xi+1 - Xi|| ≤ ε then go to step (14)
else go to step (10)
10. Increment iteration count by 1, i.e. i = i + 1.
11. Find ∇fi
12. Find the new search direction Si (using the formula given above)
13. Go to step (05)
14. Print results
15. Stop
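
The same quadratic shortcut used in the steepest descent sketch gives a compact numpy sketch of the Fletcher-Reeves iteration (the form f(X) = 0.5 X^T A X + b^T X and the function name are again assumptions of this sketch, not part of the notes).

import numpy as np

def fletcher_reeves(A, b, x0, n_iter=2):
    x = np.asarray(x0, dtype=float)
    grad = A @ x + b
    s = -grad                                      # step 04: first search direction
    for _ in range(n_iter):
        lam = -(grad @ s) / (s @ A @ s)            # steps 05-07: exact line search
        x = x + lam * s                            # step 08: next point
        grad_new = A @ x + b                       # step 11: new gradient
        beta = (grad_new @ grad_new) / (grad @ grad)   # ||grad_i||^2 / ||grad_i-1||^2
        s = -grad_new + beta * s                   # step 12: new search direction
        grad = grad_new
    return x

# Q. 01 below: f = x1 - x2 + 2*x1^2 + 2*x1*x2 + x2^2 from (0, 0)
A = np.array([[4.0, 2.0], [2.0, 2.0]])
b = np.array([1.0, -1.0])
print(fletcher_reeves(A, b, [0.0, 0.0], n_iter=2))   # about [-1.   1.5], the exact minimum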



Q. 01: Minimize f(x1, x2) = x1 - x2 + 2x1² + 2x1x2 + x2², starting from point (0, 0). Solve up to two
iterations using the Conjugate Gradient method
Soln.: Given function f(x1, x2) = x1 - x2 + 2x1² + 2x1x2 + x2²
Its gradient is
∇f = (∂f/∂x1, ∂f/∂x2) = (1 + 4x1 + 2x2, -1 + 2x1 + 2x2)
Starting point X1 = (0, 0) (Given)
Iteration I:
Gradient, ∇f1 = (1, -1)
Search direction S1 = -∇f1 = (-1, 1)
X1 + λ1 S1 = (-λ1, λ1)
f(X1 + λ1 S1) = -λ1 - λ1 + 2λ1² - 2λ1² + λ1² = λ1² - 2λ1
d f(X1 + λ1 S1) / dλ1 = 0 ⇒ 2λ1 - 2 = 0 ⇒ λ1* = 1
New point X2 = X1 + λ1* S1 = (-1, 1)
Check for optimality: X2 - X1 = (-1, 1)
The difference is not small, so proceed to iteration II.

Iteration II:
Gradient, ∇f2 = (-1, -1)
Search direction S2 = -∇f2 + ( ||∇f2||² / ||∇f1||² ) S1 = (1, 1) + (2/2)(-1, 1) = (0, 2)
X2 + λ2 S2 = (-1, 1 + 2λ2)
f(X2 + λ2 S2) = -1 - (1 + 2λ2) + 2 - 2(1 + 2λ2) + (1 + 2λ2)² = 4λ2² - 2λ2 - 1
d f(X2 + λ2 S2) / dλ2 = 0 ⇒ 8λ2 - 2 = 0 ⇒ λ2* = 0.25
New point X3 = X2 + λ2* S2 = (-1, 1.5)
As it is asked to perform two iterations only, X* = (-1, 1.5) and fmin = -1.25

Q. 02: Minimize f(x1, x2) = 6x1² + 2x2² - 6x1x2 - x1 - 2x2 by the Conjugate Gradient method, starting
from point (0, 0). Solve up to two iterations.
Soln.: Given function f(x1, x2) = 6x1² + 2x2² - 6x1x2 - x1 - 2x2
Its gradient is
∇f = (∂f/∂x1, ∂f/∂x2) = (12x1 - 6x2 - 1, 4x2 - 6x1 - 2)
Starting point X1 = (0, 0) (Given)
Iteration I:
Gradient, ∇f1 = (-1, -2)
Search direction S1 = -∇f1 = (1, 2)
X1 + λ1 S1 = (λ1, 2λ1)
f(X1 + λ1 S1) = 6λ1² + 8λ1² - 12λ1² - λ1 - 4λ1 = 2λ1² - 5λ1
d f(X1 + λ1 S1) / dλ1 = 0 ⇒ 4λ1 - 5 = 0 ⇒ λ1* = 1.25
New point X2 = X1 + λ1* S1 = (1.25, 2.50)
Check for optimality: X2 - X1 = (1.25, 2.50)
The difference is not small, so proceed to iteration II.



Iteration II:
Gradient, ∇f2 = (-1, 0.5)
Search direction S2 = -∇f2 + ( ||∇f2||² / ||∇f1||² ) S1 = (1, -0.5) + (1.25/5)(1, 2) = (1.25, 0)
X2 + λ2 S2 = (1.25 + 1.25λ2, 2.5)
f(X2 + λ2 S2) = 9.375λ2² - 1.25λ2 - 3.125
d f(X2 + λ2 S2) / dλ2 = 0 ⇒ 18.75λ2 - 1.25 = 0 ⇒ λ2* = 0.0667
New point X3 = X2 + λ2* S2 = (1.3333, 2.5)
As it is asked to perform two iterations only, X* = (1.3333, 2.5) and fmin = -3.1667

Q. 03: The profit per acre of a farm is given by
20x1 + 26x2 + 4x1x2 - 4x1² - 3x2²
where x1 and x2 are the labor cost and the fertilizer cost respectively. Find the values of x1 and x2 to
maximize the profit. Use the Conjugate Gradient method and perform two iterations.
Soln.: Given function 20x1 + 26x2 + 4x1x2 - 4x1² - 3x2²
As this function is to be maximized, we minimize its negative, i.e. we take
f(x1, x2) = -20x1 - 26x2 - 4x1x2 + 4x1² + 3x2²
Its gradient is
∇f = (∂f/∂x1, ∂f/∂x2) = (-20 - 4x2 + 8x1, -26 - 4x1 + 6x2)
Starting point X1 = (0, 0) (Assumed)

Iteration I:
Gradient, ∇f1 = (-20, -26)
Search direction S1 = -∇f1 = (20, 26)
X1 + λ1 S1 = (20λ1, 26λ1)
f(X1 + λ1 S1) = 1548λ1² - 1076λ1
d f(X1 + λ1 S1) / dλ1 = 0 ⇒ 3096λ1 - 1076 = 0 ⇒ λ1* ≈ 0.35
New point X2 = X1 + λ1* S1 = (7, 9.1)
Check for optimality: X2 - X1 = (7, 9.1)
The difference is not small, so proceed to iteration II.

Iteration II:
Gradient, ∇f2 = (-0.4, 0.6)
Search direction S2 = -∇f2 + ( ||∇f2||² / ||∇f1||² ) S1 = (0.4, -0.6) + (0.52/1076)(20, 26) = (0.41, -0.59)
X2 + λ2 S2 = (7 + 0.41λ2, 9.1 - 0.59λ2)
f(X2 + λ2 S2) = 2.6843λ2² - 0.518λ2 - 186.97
d f(X2 + λ2 S2) / dλ2 = 0 ⇒ 5.3686λ2 - 0.518 = 0 ⇒ λ2* ≈ 0.096
New point X3 = X2 + λ2* S2 = (7.0394, 9.0434)
As it is asked to perform two iterations only,
X* = (7.0394, 9.0434) ≈ (7, 9) and fmin ≈ -187, i.e. Profitmax ≈ 187
------------------------------------------------------------------------------------------------------------------------

