Nonlinear Optimization
Nonlinear programming:
One-dimensional minimization methods
Introduction
The basic philosophy of most of the numerical methods of optimization is to produce a sequence of improved approximations to the optimum. The one-dimensional minimization methods considered here can be grouped as:
• Analytical methods, in which the derivative of the function is set equal to zero and the resulting equation is solved
• Elimination methods, in which the values of the objective function are first found at various combinations of the decision variables and the interval containing the minimum is successively narrowed
• Interpolation methods, which approximate the function by a low-order polynomial; the direct root methods are root-finding methods that can be considered equivalent to quadratic interpolation
Unimodal function
• A unimodal function is one that has only one peak (maximum) or valley (minimum) in a given interval.
• If the function values f1 and f2 are measured at two points x1 < x2 inside the interval, three outcomes are possible:
– f1 < f2
– f1 > f2
– f1 = f2
Unimodal function
• If the outcome is f1 < f2, the minimizing x cannot lie to the right of x2, so the interval [x2, 1] can be discarded.
• If the outcome is f(x1) > f(x2), the interval [0, x1] can be discarded to obtain a new, smaller interval of uncertainty, [x1, 1].
Unimodal function
• In Fig (c), two more experiments are to be placed in the new interval in
order to find a reduced interval of uncertainty.
UNRESTRICTED SEARCH
• The unrestricted search starts from an initial point and proceeds with a fixed or accelerated step size, without requiring a bracketing interval in advance; it is simple to implement.
• In this case, the points x6 and x7 do not really bracket the minimum point, but they provide information about it.
In the exhaustive search, the final interval of uncertainty after n trials is Ln = 2L0/(n+1):

Number of trials:  2    3    4    5    6    …    n
Ln/L0:             2/3  2/4  2/5  2/6  2/7  …  2/(n+1)
Exhaustive search
• Since the function is evaluated at all n points simultaneously, this
method can be called a simultaneous search method.
For the optimum to be located with an accuracy of 0.1 L0 when the middle point of the final interval is taken as the optimum, we need

1/(n + 1) ≤ 1/10,  or  n ≥ 9
Example
By taking n = 9, the function values at the equally spaced points xi = i/10 can be calculated:

i:         1      2      3      4      5      6      7      8      9
xi:        0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9
fi=f(xi):  -0.14  -0.26  -0.36  -0.44  -0.50  -0.54  -0.56  -0.56  -0.54

Since f7 = f8, the optimum lies between x7 = 0.7 and x8 = 0.8.
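The procedure is easy to state in code. Below is a minimal Python sketch (the helper name exhaustive_search and the use of Python are ours, not part of the original method statement); it evaluates f at n equally spaced interior points and returns the neighbors of the best point:

```python
# Exhaustive-search sketch: evaluate f at n equally spaced interior points
# of (a, b); for a unimodal f the minimum lies between the neighbors of the
# best point, a final interval of length 2(b - a)/(n + 1).
def exhaustive_search(f, a, b, n):
    xs = [a + (b - a) * i / (n + 1) for i in range(1, n + 1)]
    fs = [f(x) for x in xs]
    i = min(range(n), key=fs.__getitem__)   # index of the smallest value
    lo = xs[i - 1] if i > 0 else a
    hi = xs[i + 1] if i < n - 1 else b
    return lo, hi

f = lambda x: x * (x - 1.5)
print(exhaustive_search(f, 0.0, 1.0, 9))    # -> (0.6, 0.8), containing 0.75
```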
Dichotomous search
In the dichotomous search, two experiments are placed as close together as possible at the center of the interval of uncertainty:

x1 = L0/2 − δ/2
x2 = L0/2 + δ/2

where δ is a small positive number. Depending on which function value is smaller, almost half of the interval is discarded. The final intervals of uncertainty are:

Number of experiments:          2            4
Final interval of uncertainty:  (L0 + δ)/2   (1/2)((L0 + δ)/2) + δ/2

In general, after n experiments (n even),

Ln = L0/2^(n/2) + δ(1 − 1/2^(n/2))
Example: Find the minimum of f = x(x − 1.5) in (0.0, 1.0) to within 10% of the exact value, using δ = 0.001. Since the middle point of the final interval will be taken as the optimum, we require Ln ≤ 2(0.1)L0 = 1/5, i.e.,

1/2^(n/2) + 0.001 (1 − 1/2^(n/2)) ≤ 1/5

i.e.,

(999/1000)(1/2^(n/2)) ≤ 199/1000,  or  2^(n/2) ≥ 999/199 ≈ 5.0

Since n has to be even, this inequality gives the minimum admissible value of n as 6. The search is made as follows. The first two experiments are made at:
x1 = L0/2 − δ/2 = 0.5 − 0.0005 = 0.4995
x2 = L0/2 + δ/2 = 0.5 + 0.0005 = 0.5005
Dichotomous Search
with the function values given by:
f1 = f(x1) = (0.4995)(−1.0005) = −0.49975
f2 = f(x2) = (0.5005)(−0.9995) = −0.50025
Since f2 < f1, the new interval of uncertainty will be (0.4995,1.0). The
second pair of experiments is conducted at :
x3 = 0.4995 + (1.0 − 0.4995)/2 − 0.0005 = 0.74925
x4 = 0.4995 + (1.0 − 0.4995)/2 + 0.0005 = 0.75025
which gives the function values as:
f3 = f(x3) = (0.74925)(−0.75075) = −0.5624994375
f4 = f(x4) = (0.75025)(−0.74975) = −0.5624999375
Dichotomous Search
Since f3 > f4 , we delete (0.4995,x3) and obtain the new interval of
uncertainty as:
(x3,1.0)=(0.74925,1.0)
The final pair of experiments will be conducted at:

x5 = 0.74925 + (1.0 − 0.74925)/2 − 0.0005 = 0.874125
x6 = 0.74925 + (1.0 − 0.74925)/2 + 0.0005 = 0.875125
which gives the function values as:
f5 = f(x5) = (0.874125)(−0.625875) = −0.5470929844
f6 = f(x6) = (0.875125)(−0.624875) = −0.5468437342
Dichotomous Search
Since f5 < f6 , the new interval of uncertainty is given by (x3, x6)
(0.74925,0.875125). The middle point of this interval can be taken as
optimum, and hence:
xopt ≈ 0.8121875
fopt ≈ −0.5586327148
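A short Python sketch of the dichotomous search, reproducing the example above (dichotomous_search is our name; each loop iteration spends one pair of experiments):

```python
# Dichotomous-search sketch for f(x) = x(x - 1.5) on (0, 1) with delta =
# 0.001; three pairs of experiments (n = 6) reproduce the example above.
def dichotomous_search(f, a, b, delta, n_pairs):
    for _ in range(n_pairs):
        mid = (a + b) / 2.0
        x1, x2 = mid - delta / 2.0, mid + delta / 2.0
        if f(x1) < f(x2):
            b = x2                   # minimum cannot lie to the right of x2
        else:
            a = x1                   # minimum cannot lie to the left of x1
    return a, b

f = lambda x: x * (x - 1.5)
a, b = dichotomous_search(f, 0.0, 1.0, 0.001, 3)
print((a + b) / 2.0, f((a + b) / 2.0))   # -> 0.8121875, -0.5586... as above
```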
Interval halving method
In the interval halving method, exactly one half of the current
interval of uncertainty is deleted in every stage. It requires three
experiments in the first stage and two experiments in each
subsequent stage.
Remarks
1. In this method, the function value at the middle point of
the interval of uncertainty, f0, will be available in all the
stages except the first stage.
Interval halving method (cont’d)
Remarks
2. The interval of uncertainty remaining at the end of n experiments (n ≥ 3 and odd) is given by

Ln = (1/2)^((n−1)/2) L0
Example
Find the minimum of f = x (x-1.5) in the interval (0.0,1.0) to within 10% of the
exact value.
Solution: If the middle point of the final interval of uncertainty is taken as the
optimum point, the specified accuracy can be achieved if:
Ln/2 ≤ (1/10) L0,  i.e.,  Ln = (1/2)^((n−1)/2) L0 ≤ (1/5) L0    (E1)

Since L0 = 1, Eq. (E1) gives

(1/2)^((n−1)/2) ≤ 1/5  or  2^((n−1)/2) ≥ 5    (E2)
Example
Solution: Since n has to be odd, inequality (E2) gives the minimum permissible value of n as 7. With this value of n = 7, the search is conducted as follows. The first three experiments are placed at the quarter points of the interval L0 = [a = 0, b = 1]:

x1 = 0.25,  f1 = 0.25(−1.25) = −0.3125
x0 = 0.50,  f0 = 0.50(−1.00) = −0.5000
x2 = 0.75,  f2 = 0.75(−0.75) = −0.5625
Since f1 > f0 > f2, we delete the interval (a,x0) = (0.0,0.5), label x2 and x0 as the
new x0 and a so that a=0.5, x0=0.75, and b=1.0. By dividing the new interval of
uncertainty, L3 = (0.5, 1.0), into four equal parts, we obtain:

x1 = 0.625,  f1 = 0.625(−0.875) = −0.546875
x2 = 0.875,  f2 = 0.875(−0.625) = −0.546875

Again we note that f1 > f0 and f2 > f0, and hence we delete both the intervals (a, x1) and (x2, b) to obtain the new interval of uncertainty L5 = (0.625, 0.875). Dividing this interval into four equal parts once more gives x1 = 0.6875, x0 = 0.75, x2 = 0.8125 with f1 = f2 = −0.55859375 > f0 = −0.5625, so both end intervals are again deleted and L7 = (0.6875, 0.8125). By taking the middle point of this interval (L7) as the optimum, we obtain:
xopt ≈ 0.75, and fopt ≈ −0.5625
This solution happens to be the exact solution in this case.
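A Python sketch of the interval halving method (interval_halving is our name; the middle-point value f0 is reused, so each stage after the first costs two new experiments, as in remark 1):

```python
# Interval-halving sketch: one half of the interval of uncertainty is
# deleted per stage; f0 at the middle point is carried over between stages.
def interval_halving(f, a, b, stages):
    x0 = (a + b) / 2.0
    f0 = f(x0)
    for _ in range(stages):
        L = b - a
        x1, x2 = a + L / 4.0, b - L / 4.0
        f1, f2 = f(x1), f(x2)
        if f1 < f0:                  # minimum lies in (a, x0)
            b, x0, f0 = x0, x1, f1
        elif f2 < f0:                # minimum lies in (x0, b)
            a, x0, f0 = x0, x2, f2
        else:                        # minimum lies in (x1, x2)
            a, b = x1, x2
    return a, b

f = lambda x: x * (x - 1.5)
print(interval_halving(f, 0.0, 1.0, 3))   # -> (0.6875, 0.8125), i.e. L7
```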
Fibonacci method
As stated earlier, the Fibonacci method can be used to find
the minimum of a function of one variable even if the
function is not continuous. The limitations of the method
are:
– The initial interval of uncertainty, in which the optimum lies, has to be known.
– The function being optimized has to be unimodal in the initial interval of uncertainty.
– The exact optimum cannot be located; only an interval (the final interval of uncertainty) will be known.
– The number of function evaluations to be used in the search has to be specified beforehand.

The method makes use of the sequence of Fibonacci numbers, defined by

F0 = F1 = 1
Fn = F_(n−1) + F_(n−2),  n = 2, 3, 4, …

In the search, the jth experiment is placed at a distance

L*j = (F_(n−j) / F_(n−(j−2))) L_(j−1)

from one end of the current interval, and the interval of uncertainty remaining at the end of stage j is

Lj = (F_(n−(j−1)) / Fn) L0
Fibonacci method
Procedure: The reduction ratios achieved after j stages and after all n experiments are

Lj/L0 = F_(n−(j−1)) / Fn

Ln/L0 = F1/Fn = 1/Fn
Fibonacci method
• The ratio Ln/L0 will permit us to determine n, the required number of experiments, to achieve any desired accuracy in locating the optimum point. The table gives the reduction ratio in the interval of uncertainty obtainable for different numbers of experiments.
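A Python sketch of the Fibonacci search (fibonacci_search and eps are ours; after the first stage each iteration costs a single new experiment, and eps displaces the final experiment, which would otherwise coincide with the retained point, as discussed on the following slides):

```python
# Fibonacci-search sketch: n experiments reduce (a, b) to roughly
# (b - a)/F_n. One interior point and its value are reused per stage.
def fibonacci_search(f, a, b, n, eps=1e-6):
    F = [1, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])              # F_k = F_(k-1) + F_(k-2)
    x1 = a + (F[n - 2] / F[n]) * (b - a)     # first two experiments
    x2 = a + (F[n - 1] / F[n]) * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(2, n):
        if f1 > f2:                          # discard (a, x1)
            a, x1, f1 = x1, x2, f2
            x2 = a + (F[n - k] / F[n - k + 1]) * (b - a)
            if abs(x2 - x1) < eps:           # last experiment: displace it
                x2 = x1 + eps
            f2 = f(x2)
        else:                                # discard (x2, b)
            b, x2, f2 = x2, x1, f1
            x1 = a + (F[n - k - 1] / F[n - k + 1]) * (b - a)
            if abs(x1 - x2) < eps:
                x1 = x2 - eps
            f1 = f(x1)
    return (x1, b) if f1 > f2 else (a, x2)   # final interval of uncertainty

f = lambda x: x * (x - 1.5)
print(fibonacci_search(f, 0.0, 1.0, 10))     # width ~1/F_10 = 1/89, near 0.75
```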
Fibonacci method
Position of the final experiment:
• In this method, the last experiment has to be placed with some
care. Equation
L*j = (F_(n−j) / F_(n−(j−2))) L_(j−1)

gives, for j = n,

L*n / L_(n−1) = F0/F2 = 1/2 for all n

Thus the last experiment coincides with the experiment already placed at the middle of the interval L_(n−1); it must therefore be displaced slightly to one side so that the two function values can be compared.
As the number of experiments is increased indefinitely, the intervals of uncertainty approach the limits

L2 = lim_(N→∞) (F_(N−1)/F_N) L0

L3 = lim_(N→∞) (F_(N−2)/F_N) L0 = lim_(N→∞) (F_(N−2)/F_(N−1)) (F_(N−1)/F_N) L0 ≈ lim_(N→∞) (F_(N−1)/F_N)² L0

This result can be generalized to

Lk ≈ lim_(N→∞) (F_(N−1)/F_N)^(k−1) L0

By defining a ratio γ as

γ = lim_(N→∞) F_N / F_(N−1)
Golden Section Method
The equation

F_N = F_(N−1) + F_(N−2)

can be divided by F_(N−1) to give F_N/F_(N−1) = 1 + F_(N−2)/F_(N−1), which in the limit can be expressed as

γ = 1 + 1/γ

that is,

γ² − γ − 1 = 0
Golden Section Method
This gives the root γ = 1.618, and hence the equation

Lk ≈ lim_(N→∞) (F_(N−1)/F_N)^(k−1) L0

yields

Lk = (1/γ)^(k−1) L0 = (0.618)^(k−1) L0

The table shows how quickly the ratio F_(N−1)/F_N approaches its limit:

Value of N:         2    3      4    5      6       7      8       9       10      ∞
Ratio F_(N−1)/F_N:  0.5  0.667  0.6  0.625  0.6156  0.619  0.6177  0.6181  0.6184  0.618
Golden Section Method
The ratio γ has a historical background. Ancient Greek architects believed that a building having sides d and b satisfying the relation

(d + b)/d = d/b = γ

would have the most pleasing properties. It is also found in Euclid's geometry that the division of a line segment into two unequal parts, so that the ratio of the whole to the larger part equals the ratio of the larger part to the smaller, is known as the golden section, or golden mean; hence the term golden section method.
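A Python sketch of the golden section search (golden_section is our name); unlike the Fibonacci method, the number of experiments need not be fixed in advance, so a tolerance on the interval width is used instead:

```python
# Golden-section sketch: the interval shrinks by the fixed factor
# 0.618... = 1/gamma per experiment; one interior point is reused each time.
GOLDEN = (5 ** 0.5 - 1) / 2                  # 1/gamma = 0.6180339887...

def golden_section(f, a, b, tol=1e-5):
    x1 = b - GOLDEN * (b - a)                # interior points at 0.382, 0.618
    x2 = a + GOLDEN * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 > f2:                          # minimum lies in (x1, b)
            a, x1, f1 = x1, x2, f2
            x2 = a + GOLDEN * (b - a)
            f2 = f(x2)
        else:                                # minimum lies in (a, x2)
            b, x2, f2 = x2, x1, f1
            x1 = b - GOLDEN * (b - a)
            f1 = f(x1)
    return (a + b) / 2.0

f = lambda x: x * (x - 1.5)
print(golden_section(f, 0.0, 1.0))           # -> ~0.75
```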
Comparison of elimination methods
• The efficiency of an elimination method can be measured in terms of the ratio of the
final and the initial intervals of uncertainty, Ln/L0
• The values of this ratio achieved in various methods for a specified number of
experiments (n=5 and n=10) are compared in the Table below:
• It can be seen that the Fibonacci method is the most efficient method, followed by the
golden section method, in reducing the interval of uncertainty.
Comparison of elimination methods
• A similar observation can be made by considering the number of
experiments (or function evaluations) needed to achieve a specified
accuracy in various methods.
• The results are compared in the Table below for maximum permissible errors of 0.1 and 0.01.
• It can be seen that to achieve any specified accuracy, the Fibonacci method
requires the least number of experiments, followed by the golden section
method.
Interpolation methods
• The interpolation methods were originally developed as one
dimensional searches within multivariable optimization
techniques, and are generally more efficient than Fibonacci-type
approaches.
• They locate the minimizing step length λ* along a search direction S, i.e., the root of

df/dλ (λ*) = 0

where f(λ) = f(X + λS).
• Stage 2: Let

h(λ) = a + bλ + cλ²

be the quadratic function used for approximating the function f(λ). It is worth noting at this point that a quadratic is the lowest-order polynomial for which a finite minimum can exist.
Quadratic Interpolation Method
• Stage 2 cont'd: At the minimum of h(λ),

dh/dλ = b + 2cλ = 0

that is,

λ̃* = −b/(2c)

The sufficiency condition for the minimum of h(λ) is that

d²h/dλ² |_(λ̃*) > 0

that is,

c > 0
Quadratic Interpolation Method
Stage 2 cont'd:
• To evaluate the constants a, b, and c in the equation h(λ) = a + bλ + cλ², three points λ = A, λ = B, and λ = C with function values fA, fB, and fC are needed:

fA = a + bA + cA²
fB = a + bB + cB²
fC = a + bC + cC²
Quadratic Interpolation Method
Stage 2 cont'd:
• The solution of

fA = a + bA + cA²
fB = a + bB + cB²
fC = a + bC + cC²

gives

a = [fA BC(C − B) + fB CA(A − C) + fC AB(B − A)] / [(A − B)(B − C)(C − A)]

b = [fA(B² − C²) + fB(C² − A²) + fC(A² − B²)] / [(A − B)(B − C)(C − A)]

c = −[fA(B − C) + fB(C − A) + fC(A − B)] / [(A − B)(B − C)(C − A)]
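These formulas translate directly into code. A minimal Python sketch (quadratic_fit_minimum is our name; the assert merely illustrates the c > 0 sufficiency condition):

```python
# Fit h(lam) = a + b*lam + c*lam^2 through (A, fA), (B, fB), (C, fC) using
# the closed-form coefficients b and c above (a is not needed for the
# minimizer), and return lam_tilde* = -b/(2c).
def quadratic_fit_minimum(A, B, C, fA, fB, fC):
    den = (A - B) * (B - C) * (C - A)
    b = (fA * (B**2 - C**2) + fB * (C**2 - A**2)
         + fC * (A**2 - B**2)) / den
    c = -(fA * (B - C) + fB * (C - A) + fC * (A - B)) / den
    assert c > 0, "the fitted quadratic must be convex to have a minimum"
    return -b / (2.0 * c)

f = lambda lam: lam * (lam - 1.5)
print(quadratic_fit_minimum(0.0, 0.5, 1.0, f(0.0), f(0.5), f(1.0)))  # -> 0.75
```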
Quadratic Interpolation Method
Stage 2 cont'd:
• From the equations above,

λ̃* = −b/(2c) = [fA(B² − C²) + fB(C² − A²) + fC(A² − B²)] / (2[fA(B − C) + fB(C − A) + fC(A − B)])

• If the three points are chosen as A = 0, B = t, and C = 2t, the constants reduce to

a = fA
b = (4fB − 3fA − fC) / (2t)
c = (fC + fA − 2fB) / (2t²)

so that

λ̃* = t (4fB − 3fA − fC) / (4fB − 2fC − 2fA)

provided that

c = (fC + fA − 2fB) / (2t²) > 0
Quadratic Interpolation Method
Stage 2 cont'd:
• The inequality

c = (fC + fA − 2fB) / (2t²) > 0

can be satisfied if

fB < (fA + fC)/2

i.e., the function value fB should be smaller than the average of fA and fC, as shown in the figure.
Quadratic Interpolation Method
Stage 2 cont'd:
• The following procedure can be used not only to satisfy the inequality

fB < (fA + fC)/2

but also to ensure that the minimum λ̃* lies in the interval 0 < λ̃* < 2t.
1. Assuming that fA = f(λ = 0) and the initial step size t0 are known, evaluate the function f at λ = t0 and obtain f1 = f(λ = t0).
Quadratic Interpolation Method
Stage 2 cont’d:
2. If f1 > fA is realized, as shown in the figure, set fC = f1, evaluate f at λ = t0/2 to obtain fB, and compute λ̃* from

λ̃* = t (4fB − 3fA − fC) / (4fB − 2fC − 2fA)

with t = t0/2.
Quadratic Interpolation Method
Stage 2 cont’d:
3. If f1 ≤ fA is realized, as shown in the figures, set fB = f1 and evaluate the function f at λ = 2t0 to find f2 = f(λ = 2t0). This may result in any of the situations shown in the figure.

Quadratic Interpolation Method
Stage 2 cont'd:
4. If f2 turns out to be greater than f1, as shown in the figures, set fC = f2 and compute λ̃* according to the equation below with t = t0:

λ̃* = t (4fB − 3fA − fC) / (4fB − 2fC − 2fA)

5. If f2 turns out to be smaller than f1, set the new f1 = f2 and t0 = 2t0, and repeat steps 2 to 4 until we are able to find λ̃* (a code sketch of this procedure follows below).
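A Python sketch of this procedure (bracket_and_fit is our name; it assumes f decreases somewhere to the right of λ = 0 and eventually increases, so the loop terminates):

```python
# Steps 1-5 above: start from fA = f(0), double the step until the function
# rises, then minimize the quadratic through f(0), f(t), f(2t).
def bracket_and_fit(f, t0):
    fA = f(0.0)
    t = t0
    f1 = f(t)
    if f1 > fA:                      # step 2: minimum lies before t0
        fB, fC, t = f(t0 / 2.0), f1, t0 / 2.0
    else:                            # steps 3-5: double until f2 > f1
        while True:
            f2 = f(2.0 * t)
            if f2 > f1:              # step 4: bracketed at (0, t, 2t)
                fB, fC = f1, f2
                break
            t, f1 = 2.0 * t, f2      # step 5: keep doubling
    # minimizer of the quadratic through the three equally spaced points:
    return t * (4.0 * fB - 3.0 * fA - fC) / (4.0 * fB - 2.0 * fC - 2.0 * fA)

f = lambda lam: lam * (lam - 1.5)
print(bracket_and_fit(f, 0.5))       # -> 0.75 (exact, since f is quadratic)
```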
Quadratic Interpolation Method
Stage 3: The λ̃* found in Stage 2 is the minimum of the approximating quadratic h(λ), and we have to make sure that this λ̃* is sufficiently close to the true minimum λ* of f(λ) before taking λ* ≈ λ̃*. Several tests are possible to ascertain this.
• One possible test is to compare f(λ̃*) with h(λ̃*) and consider λ̃* a sufficiently good approximation if they differ by no more than a small amount. This criterion can be stated as

| (h(λ̃*) − f(λ̃*)) / f(λ̃*) | ≤ ε1
Quadratic Interpolation Method
Stage 3 cont'd:
• Another possible test is to check whether df/dλ is nearly zero at λ̃*, using a central difference approximation of the derivative:

| [f(λ̃* + Δλ̃*) − f(λ̃* − Δλ̃*)] / (2Δλ̃*) | ≤ ε2

• If these criteria are not satisfied, a new quadratic h′(λ) = a′ + b′λ + c′λ² is fitted. To evaluate the constants a′, b′, and c′, the three best function values among the current fA = f(λ = 0), fB = f(λ = t0), fC = f(λ = 2t0), and f̃ = f(λ̃*) are to be used.
• This process of trying to fit another polynomial to obtain a better approximation to λ̃* is known as refitting the polynomial.
Quadratic Interpolation Method
Stage 3 cont'd: For refitting the quadratic, we consider all possible situations and select the best three points out of the present A, B, C, and λ̃*. There are four possibilities, according to whether λ̃* lies to the left or the right of B and whether f̃ is below or above fB. The best three points to be used in refitting in each case are:

Case 1: λ̃* > B, f̃ < fB → new (A, B, C) = (B, λ̃*, C)
Case 2: λ̃* > B, f̃ > fB → new (A, B, C) = (A, B, λ̃*)
Case 3: λ̃* < B, f̃ < fB → new (A, B, C) = (A, λ̃*, B)
Case 4: λ̃* < B, f̃ > fB → new (A, B, C) = (λ̃*, B, C)
Quadratic Interpolation Method
Stage 3 cont'd: A new value of λ̃* is then computed by using the general formula

λ̃* = [fA(B² − C²) + fB(C² − A²) + fC(A² − B²)] / (2[fA(B − C) + fB(C − A) + fC(A − B)])

If this λ̃* does not satisfy the convergence criteria stated above, the refitting is repeated until convergence.

Cubic Interpolation Method
The cubic interpolation method finds the minimizing step length λ* in four stages:
• The first stage normalizes the S vector so that a step size λ = 1 is acceptable.
• The second stage establishes bounds on λ*, and the third stage finds the value of λ̃* by approximating f(λ) by a cubic polynomial h(λ).
• If the λ̃* found in stage 3 does not satisfy the prescribed convergence criteria, the cubic polynomial is refitted in the fourth stage.
Cubic Interpolation Method
• Stage 1: Calculate the largest of the absolute values |si| of the components of S and divide each component of S by it. Another method of normalization is to divide each component of S by (s1² + s2² + … + sn²)^(1/2).
• Stage 2: To establish lower and upper bounds on the optimal step size λ̃*, we need to find two points A and B at which the slope df/dλ has different signs. We know that at λ = 0,

df/dλ |_(λ=0) = Sᵀ ∇f(X) < 0

since S is presumed to be a direction of descent. Hence it is sufficient to evaluate df/dλ at increasing values of λ (for example t0, 2t0, 4t0, …) until the derivative becomes nonnegative at some point B; the preceding point gives A.
• Stage 3: The function f(λ) is approximated by a cubic polynomial

h(λ) = a + bλ + cλ² + dλ³

fitted to fA = f(A), fB = f(B), f′A = f′(A), and f′B = f′(B). Setting dh/dλ = b + 2cλ + 3dλ² = 0 leads to

λ̃* = [−c ± (c² − 3bd)^(1/2)] / (3d)

with the sufficiency condition

d²h/dλ² |_(λ̃*) = 2c + 6dλ̃* > 0
Cubic Interpolation Method
Stage 3 cont'd: In terms of the quantities known at A and B, we obtain

λ̃* = A + [(f′A + Z ± Q) / (f′A + f′B + 2Z)] (B − A)

where

Z = 3(fA − fB)/(B − A) + f′A + f′B

Q = (Z² − f′A f′B)^(1/2)

Applying the sufficiency condition 2c + 6dλ̃* ≥ 0 shows that the positive sign must be taken before Q.
Cubic Interpolation Method
Stage 3 cont'd: The constants of the cubic fitted to fA, fB, f′A, and f′B are obtained from

fA = a + bA + cA² + dA³

together with

b = [B² f′A + A² f′B + 2ABZ] / (A − B)²

c = −[(A + B)Z + B f′A + A f′B] / (A − B)²

d = [2Z + f′A + f′B] / [3(A − B)²]

where

Z = 3(fA − fB)/(B − A) + f′A + f′B

Setting

dh/dλ = b + 2cλ + 3dλ² = 0

that is,

λ̃* = [−c + (c² − 3bd)^(1/2)] / (3d)

and specializing these equations gives

λ̃* = A + [(f′A + Z + Q) / (f′A + f′B + 2Z)] (B − A)

where

Q = (Z² − f′A f′B)^(1/2)

with the sufficiency condition d²h/dλ² |_(λ̃*) = 2c + 6dλ̃* ≥ 0.
Cubic Interpolation Method
Stage 3 cont’d:
For the case where A = 0, we obtain:

a = fA
b = f′A
c = −(Z + f′A)/B
d = (2Z + f′A + f′B)/(3B²)

λ̃* = B (f′A + Z + Q) / (f′A + f′B + 2Z)

Q = (Z² − f′A f′B)^(1/2) ≥ 0

Z = 3(fA − fB)/B + f′A + f′B
Cubic Interpolation Method
Stage 3 cont'd: The two values of λ̃* given by the ± sign in

λ̃* = A + [(f′A + Z ± Q) / (f′A + f′B + 2Z)] (B − A)

correspond to the two possibilities for the vanishing of h′(λ) [i.e., at a maximum of h(λ) and at a minimum]. To avoid imaginary values of Q, we should ensure the satisfaction of the condition

Z² − f′A f′B ≥ 0

in the equation

Q = (Z² − f′A f′B)^(1/2)
Cubic Interpolation Method
Stage 3 cont'd:
This inequality is satisfied automatically since A and B are selected such that f′A < 0 and f′B ≥ 0. Furthermore, the sufficiency condition (when A = 0) requires that Q > 0, which is already satisfied. Now we compute λ̃* using

λ̃* = A + [(f′A + Z + Q) / (f′A + f′B + 2Z)] (B − A)

and proceed to the next stage.
Cubic Interpolation Method
Stage 4: The λ̃* found in stage 3 is the true minimum of the approximating cubic h(λ) and may not be close to the minimum of f(λ). Hence the following convergence criteria can be used before taking λ* ≈ λ̃*:

| (h(λ̃*) − f(λ̃*)) / f(λ̃*) | ≤ ε1

| df/dλ |_(λ̃*) | = | Sᵀ ∇f |_(λ̃*) | ≤ ε2

where ε1 and ε2 are small numbers whose values depend on the accuracy desired.
Cubic Interpolation Method
Stage 4 cont'd: The criterion

| Sᵀ ∇f |_(λ̃*) | ≤ ε2

can be stated in nondimensional form as

| (Sᵀ ∇f) / (|S| |∇f|) |_(λ̃*) ≤ ε2
If the criteria are not satisfied, the cubic is refitted: if f′(λ̃*) < 0, the new points A and B are taken as λ̃* and B, respectively; otherwise, if f′(λ̃*) > 0, the new points A and B are taken as A and λ̃*. The equation

λ̃* = A + [(f′A + Z + Q) / (f′A + f′B + 2Z)] (B − A)

is then used again to find a new optimal step size λ̃*, and the convergence criteria are retested, the process being repeated until convergence.
Example: Find the minimum of

f(λ) = λ⁵ − 5λ³ − 20λ + 5

by the cubic interpolation method.

Solution: To find the point B at which df/dλ is nonnegative, we start with t0 = 0.4 and evaluate the derivative at t0, 2t0, 4t0, …
Example
This gives:

f′(t0 = 0.4) = 5(0.4)⁴ − 15(0.4)² − 20.0 = −22.272
f′(2t0 = 0.8) = 5(0.8)⁴ − 15(0.8)² − 20.0 = −27.552
f′(4t0 = 1.6) = 5(1.6)⁴ − 15(1.6)² − 20.0 = −25.632
f′(8t0 = 3.2) = 5(3.2)⁴ − 15(3.2)² − 20.0 = +350.688

Thus we take A = 0 (fA = 5.0, f′A = −20.0) and B = 3.2 (fB = 113.0, f′B = 350.688), and compute

Z = 3(5.0 − 113.0)/3.2 − 20.0 + 350.688 = 229.588

Q = [229.588² + (20.0)(350.688)]^(1/2) = 244.0

Hence

λ̃* = 3.2 (−20.0 + 229.588 ± 244.0) / (−20.0 + 350.688 + 459.176) = 1.84 or −0.1396

By discarding the negative value, we have:

λ̃* = 1.84
Example
Convergence criterion: If λ̃* is close to the true minimum λ*, then f′(λ̃*) = df(λ̃*)/dλ should be approximately zero. Since

f′(λ) = 5λ⁴ − 15λ² − 20

f′(λ̃*) = 5(1.84)⁴ − 15(1.84)² − 20 = −13.0

Since this is not small, we go to the next iteration or refitting. As f′(λ̃*) < 0, we take A = λ̃* and

fA = f(λ̃*) = (1.84)⁵ − 5(1.84)³ − 20(1.84) + 5 = −41.70

Thus:

A = 1.84, fA = −41.70, f′A = −13.0
B = 3.2, fB = 113.0, f′B = 350.688
A < λ̃* < B
Example
Iteration 2

Z = 3(−41.7 − 113.0)/(3.20 − 1.84) + (−13.0) + 350.688 = −3.312

Q = [(−3.312)² + (13.0)(350.688)]^(1/2) = 67.5

Hence,

λ̃* = 1.84 + [(−13.0 − 3.312 + 67.5) / (−13.0 + 350.688 − 6.624)] (3.2 − 1.84) = 2.05
Convergence Criteria
f′(λ̃*) = 5.0(2.05)⁴ − 15.0(2.05)² − 20.0 = 5.35

Since this value is large, we go to the next iteration with B = λ̃* = 2.05 [as f′(λ̃*) > 0] and

fB = (2.05)⁵ − 5.0(2.05)³ − 20.0(2.05) + 5.0 = −42.90

Thus:

A = 1.84, fA = −41.70, f′A = −13.00
B = 2.05, fB = −42.90, f′B = 5.35
A < λ̃* < B
Example
Iteration 3

Z = 3(−41.7 + 42.90)/(2.05 − 1.84) + (−13.0) + 5.35 = 9.49

Q = [(9.49)² + (13.0)(5.35)]^(1/2) = 12.61

Hence

λ̃* = 1.84 + [(−13.0 + 9.49 + 12.61) / (−13.0 + 5.35 + 18.98)] (2.05 − 1.84) = 2.0086
Convergence criterion:
f′(λ̃*) = 5.0(2.0086)⁴ − 15.0(2.0086)² − 20.0 = 0.855

Assuming that this value is close to zero, we can stop the iterative process and take

λ* ≈ λ̃* = 2.0086
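A Python sketch of the whole procedure for this example (cubic_step and cubic_interpolation are our names; A is kept at 0 during bracketing, as in the example, and the loose tolerance eps = 1.0 mirrors the example, which accepts |f′| = 0.855 as close to zero):

```python
import math

# One cubic-interpolation step: the closed-form minimizer of the cubic
# fitted to (A, fA, f'A) and (B, fB, f'B), from the stage 3 formulas.
def cubic_step(A, B, fA, fB, dA, dB):
    Z = 3.0 * (fA - fB) / (B - A) + dA + dB
    Q = math.sqrt(Z * Z - dA * dB)           # real, since dA < 0 <= dB
    return A + (dA + Z + Q) / (dA + dB + 2.0 * Z) * (B - A)

def cubic_interpolation(f, df, t0, eps=1.0, max_iter=50):
    A, B = 0.0, t0
    while df(B) < 0.0:                       # stage 2: double until f'(B) >= 0
        B *= 2.0
    for _ in range(max_iter):
        lam = cubic_step(A, B, f(A), f(B), df(A), df(B))
        d = df(lam)
        if abs(d) <= eps:                    # stage 4: convergence test
            return lam
        if d < 0.0:                          # refit with minimum in (lam, B)
            A = lam
        else:                                # refit with minimum in (A, lam)
            B = lam
    raise RuntimeError("cubic interpolation did not converge")

f  = lambda x: x**5 - 5.0 * x**3 - 20.0 * x + 5.0
df = lambda x: 5.0 * x**4 - 15.0 * x**2 - 20.0
print(cubic_interpolation(f, df, 0.4))       # -> ~2.0086, as found above
```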
Direct root methods
The necessary condition for f(λ) to have a minimum at λ* is that

f′(λ*) = 0

Three root-finding methods will be considered here:
• Newton method
• Quasi-Newton method
• Secant method
Newton method
Consider the quadratic approximation of the function f(λ) at λ = λi using the Taylor series expansion:

f(λ) ≈ f(λi) + f′(λi)(λ − λi) + (1/2) f″(λi)(λ − λi)²

By setting the derivative of this equation equal to zero for the minimum of f(λ), we obtain the iteration

λ_(i+1) = λi − f′(λi) / f″(λi)

The process is repeated until

|f′(λ_(i+1))| ≤ ε

where ε is a small quantity.
Newton method
[Figure: the iterative process of the Newton method.]
Newton method
• If the starting point for the iterative process is not close to the true
solution *, the Newton iterative process may diverge as illustrated:
Newton method
Remarks:
• The method requires both the first- and second-order derivatives of f(λ).
Example
Find the minimum of the function

f(λ) = 0.65 − 0.75/(1 + λ²) − 0.65 λ tan⁻¹(1/λ)

using the Newton-Raphson method with the starting point λ1 = 0.1. Use ε = 0.01 in the convergence criterion

|f′(λ_(i+1))| ≤ ε
Example
Solution: The first and second derivatives of the function f(λ) are given by:

f′(λ) = 1.5λ/(1 + λ²)² + 0.65λ/(1 + λ²) − 0.65 tan⁻¹(1/λ)

f″(λ) = 1.5(1 − 3λ²)/(1 + λ²)³ + 0.65(1 − λ²)/(1 + λ²)² + 0.65/(1 + λ²) = (2.8 − 3.2λ²)/(1 + λ²)³
Iteration 1
λ1 = 0.1, f(λ1) = −0.188197, f′(λ1) = −0.744832, f″(λ1) = 2.68659

λ2 = λ1 − f′(λ1)/f″(λ1) = 0.377241

Convergence check: |f′(λ2)| = 0.138230 > ε
Example
Solution cont'd:
Iteration 2
f(λ2) = −0.303279, f′(λ2) = −0.138230, f″(λ2) = 1.57296

λ3 = λ2 − f′(λ2)/f″(λ2) = 0.465119

Convergence check: |f′(λ3)| = 0.0179078 > ε

Iteration 3
f(λ3) = −0.309881, f′(λ3) = −0.0179078, f″(λ3) = 1.17126

λ4 = λ3 − f′(λ3)/f″(λ3) = 0.480409

Convergence check: |f′(λ4)| = 0.0005033 < ε

Since the process has converged, the optimum solution is taken as λ* ≈ λ4 = 0.480409.
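A Python sketch reproducing the iterations above (newton_min is our name):

```python
import math

# Newton-method sketch: lam <- lam - f'(lam)/f''(lam), stopping when
# |f'(lam)| <= eps, with the derivatives of the example function.
def newton_min(df, d2f, lam, eps=0.01, max_iter=50):
    for _ in range(max_iter):
        lam = lam - df(lam) / d2f(lam)
        if abs(df(lam)) <= eps:
            return lam
    raise RuntimeError("Newton iteration did not converge")

df  = lambda x: (1.5 * x / (1 + x**2)**2 + 0.65 * x / (1 + x**2)
                 - 0.65 * math.atan(1.0 / x))
d2f = lambda x: (2.8 - 3.2 * x**2) / (1 + x**2)**3
print(newton_min(df, d2f, 0.1))   # -> 0.480409, as in iteration 3 above
```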
Quasi-Newton Method
If the function f(λ) being minimized is not available in closed form or is difficult to differentiate, the derivatives f′(λi) and f″(λi) in the Newton iteration

λ_(i+1) = λi − f′(λi) / f″(λi)

can be approximated by the central difference formulas

f′(λi) ≈ [f(λi + Δλ) − f(λi − Δλ)] / (2Δλ)

f″(λi) ≈ [f(λi + Δλ) − 2 f(λi) + f(λi − Δλ)] / Δλ²

where Δλ is a small step size. Substitution leads to

λ_(i+1) = λi − Δλ [f(λi + Δλ) − f(λi − Δλ)] / (2 [f(λi + Δλ) − 2 f(λi) + f(λi − Δλ)])
Quasi-Newton Method
This iterative process is known as the quasi-Newton method. To test the convergence of the iterative process, the following criterion can be used:

|f′(λ_(i+1))| = | [f(λ_(i+1) + Δλ) − f(λ_(i+1) − Δλ)] / (2Δλ) | ≤ ε

where a central difference formula has been used for evaluating the derivative of f, and ε is a small quantity.
Remarks:
The iteration

λ_(i+1) = λi − Δλ [f(λi + Δλ) − f(λi − Δλ)] / (2 [f(λi + Δλ) − 2 f(λi) + f(λi − Δλ)])

requires the evaluation of the function at the points λi + Δλ and λi − Δλ, in addition to λi, in each iteration.
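A Python sketch of the quasi-Newton iteration (quasi_newton_min is our name; only values of f are required):

```python
import math

# Quasi-Newton sketch: the Newton step with f' and f'' replaced by central
# difference approximations of step size dlam.
def quasi_newton_min(f, lam, dlam=0.01, eps=0.01, max_iter=50):
    for _ in range(max_iter):
        fp, f0, fm = f(lam + dlam), f(lam), f(lam - dlam)
        lam = lam - dlam * (fp - fm) / (2.0 * (fp - 2.0 * f0 + fm))
        fp, fm = f(lam + dlam), f(lam - dlam)
        if abs((fp - fm) / (2.0 * dlam)) <= eps:   # central-difference f'
            return lam
    raise RuntimeError("quasi-Newton iteration did not converge")

f = lambda x: 0.65 - 0.75 / (1 + x**2) - 0.65 * x * math.atan(1.0 / x)
print(quasi_newton_min(f, 0.1))   # converges near lam* ~ 0.48, cf. above
```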
Example
Find the minimum of the function

f(λ) = 0.65 − 0.75/(1 + λ²) − 0.65 λ tan⁻¹(1/λ)

using the quasi-Newton method with the starting point λ1 = 0.1 and the step size Δλ = 0.01 in the central difference formulas. Use ε = 0.01 in the convergence criterion

|f′(λ_(i+1))| = | [f(λ_(i+1) + Δλ) − f(λ_(i+1) − Δλ)] / (2Δλ) | ≤ ε
Secant method
The secant method uses the iteration

λ_(i+1) = λi − f′(λi)/s

where s is the slope of the line connecting the two points (A, f′(A)) and (B, f′(B)), and A and B denote two different approximations to the correct solution λ*. The slope s can be expressed as

s = [f′(B) − f′(A)] / (B − A)

so that

λ_(i+1) = λi − f′(λi)/s = A − f′(A)(B − A) / [f′(B) − f′(A)]

The iterative process given by the above equation is known as the secant method. Since the secant approaches the second derivative of f(λ) at A as B approaches A, the secant method can also be considered a quasi-Newton method.
Secant method
• It can also be considered a form of elimination technique, since part of the interval, (A, λ_(i+1)) in the figure, is eliminated in every iteration. The procedure is:
1. Set λ1 = A = 0 and evaluate f′(A). The value of f′(A) will be negative. Assume an initial trial step length t0.
2. Evaluate f′(t0).
3. If f′(t0) < 0, set A = λi = t0, f′(A) = f′(t0), new t0 = 2t0, and go to step 2.
4. If f′(t0) ≥ 0, set B = t0, f′(B) = f′(t0), and go to step 5.
5. Find the new approximate solution of the problem as

λ_(i+1) = A − f′(A)(B − A) / [f′(B) − f′(A)]
Secant method
6. Test for convergence:

|f′(λ_(i+1))| ≤ ε

where ε is a small quantity. If the criterion is satisfied, take λ* ≈ λ_(i+1) and stop the procedure. Otherwise, go to step 7.
7. If f′(λ_(i+1)) ≥ 0, set the new B = λ_(i+1), f′(B) = f′(λ_(i+1)), i = i + 1, and go to step 5.
8. If f′(λ_(i+1)) < 0, set the new A = λ_(i+1), f′(A) = f′(λ_(i+1)), i = i + 1, and go to step 5. (The procedure is sketched in code below.)
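A Python sketch of steps 1 to 8 (secant_min is our name; for the example function the derivative formula is singular at λ = 0, so its analytical limit is supplied there):

```python
import math

# Secant-method sketch: bracket a sign change of f' by doubling t0, then
# interpolate linearly between (A, f'(A)) and (B, f'(B)) until |f'| <= eps.
def secant_min(df, t0, eps=0.01, max_iter=50):
    A, dA = 0.0, df(0.0)              # step 1: f'(A) is negative at lam = 0
    t = t0
    while df(t) < 0.0:                # steps 2-3: double until the slope turns
        A, dA, t = t, df(t), 2.0 * t
    B, dB = t, df(t)                  # step 4
    for _ in range(max_iter):
        lam = A - dA * (B - A) / (dB - dA)   # step 5: secant step
        dlam = df(lam)
        if abs(dlam) <= eps:          # step 6: convergence test
            return lam
        if dlam >= 0.0:               # step 7: move B, keep the bracket
            B, dB = lam, dlam
        else:                         # step 8: move A, keep the bracket
            A, dA = lam, dlam
    raise RuntimeError("secant iteration did not converge")

def df(x):                            # f'(lam) of the example function
    if x == 0.0:
        return -0.65 * math.pi / 2.0  # limit of -0.65*atan(1/x) as x -> 0+
    return (1.5 * x / (1 + x**2)**2 + 0.65 * x / (1 + x**2)
            - 0.65 * math.atan(1.0 / x))

print(secant_min(df, 0.1))            # -> ~0.48, cf. the Newton example
```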
Secant method
Remarks:
• When the first derivatives of the function being minimized are available,
the cubic interpolation method or the secant method are expected to be
very efficient.
• On the other hand, if both the first and second derivatives of the function are available, the Newton method will be the most efficient one in finding the optimal step length λ*.