
Chapter 2

ONE-VARIABLE OPTIMIZATION

1. Optimality conditions


We begin with a formal statement of the conditions which hold at a minimum
of a one-variable differentiable function. We have already made use of these
conditions in the previous chapter.
Definition Suppose that F(x) is a continuously differentiable function of the
scalar variable x, and that, when x = x*,

    dF/dx = 0  and  d²F/dx² > 0.                                      (2.1.1)

The function F(x) is then said to have a local minimum at x*.


Conditions (2.1.1) imply that F(x*) is the smallest value of F in some region
near x*. It may also be true that F(x*) ≤ F(x) for all x, but condition (2.1.1)
does not guarantee this.

Definition If conditions (2.1.1) hold at x = x* and if F(x*) ≤ F(x) for all x
then x* is said to be the global minimum.
In practice it is usually hard to establish that x* is a global minimum and so we
shall chiefly be concerned with methods of finding local minima.
Conditions (2.1.1) are called optimality conditions. For simple problems they
can be used directly to find a minimum, as in section 1.3 of chapter 1.
As another example, consider the function

    F(x) = x³ - 3x².                                                  (2.1.2)

Here, and in what follows, we shall sometimes use the notation

    F'(x) = dF/dx  and  F''(x) = d²F/dx².

Thus, for (2.1.2), F'(x) = 3x² - 6x and so F'(x) = 0 when x = 0 and x = 2.


These values represent two stationary points of F(x); and to determine which
is a minimum we must consider F''(x) = 6x - 6. We see that F has a minimum
at x = 2 because F''(2) > 0. However, F''(0) is negative and so F(x) has a
local maximum at x = 0.
We can only use this analytical approach to solve minimization problems when
it is easy to form and solve the equation F'(x) = 0. This may not be the case
for functions F(x) which occur in practical problems - for instance Maxret1m.
Therefore we shall usually resort to iterative techniques.
Some iterative methods are called direct search techniques and are based on
simple comparison of function values at trial points. Others are known as
gradient methods. These use derivatives of the objective function and can be
viewed as iterative algorithms for solving the nonlinear equation F'(x) = 0.
Gradient methods tend to converge faster than direct search methods. They
also have the advantage that they permit an obvious convergence test - namely
stopping the iterations when the gradient is near zero. Gradient methods are
not always suitable, however - for instance when F(x) has discontinuous
derivatives, as on a piecewise linear function. We have already seen an example
of this in function (1.7.8).
Exercises
1. Show that, if the optimality conditions (2.1.1) hold, F(x* + h) > F(x*) for
h sufficiently small.
2. Find the stationary points of F(x) = 4cos x² - sin x² - 3.

2. The bisection method


A simple (but inefficient) way of estimating the least value of F(x) in a range
a ≤ x ≤ b would be to calculate the function at many points in [a, b] and then
pick the one with the lowest value. The bisection method uses a more systematic
approach to the evaluation of F in [a, b]. Each iteration uses a comparison
between function values at five points to reduce by half the size of a bracket
containing the minimum. It can be shown that this approach will locate a
minimum if it is applied to a function F(x) that is unimodal - i.e., one that has
only one minimum in the range [a, b].

A statement of the algorithm is given below.

Bisection Method for minimizing F(x) on the range [a,b]


Set xa = a, xb = b and xm = ½(a + b).
Calculate Fa = F(xa), Fb = F(xb), Fm = F(xm)
Repeat
    set xl = ½(xa + xm), xr = ½(xm + xb)
    calculate Fl = F(xl) and Fr = F(xr)
    let Fmin = min{Fa, Fb, Fm, Fl, Fr}
    if Fmin = Fa or Fl then set xb = xm, xm = xl, Fb = Fm, Fm = Fl
    else if Fmin = Fm then set xa = xl, xb = xr, Fa = Fl, Fb = Fr
    else if Fmin = Fr or Fb then set xa = xm, xm = xr, Fa = Fm, Fm = Fr
until |xb - xa| is sufficiently small
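
A minimal Python sketch of the five-point bisection algorithm above may make the bookkeeping clearer. It is not taken from the book's software; the function name bisection_minimize and the default tolerance are illustrative choices.

    def bisection_minimize(F, a, b, tol=1e-4):
        """Locate a minimum of a unimodal F on [a, b] by repeated bracket halving."""
        xa, xb = a, b
        xm = 0.5 * (xa + xb)
        Fa, Fb, Fm = F(xa), F(xb), F(xm)
        while abs(xb - xa) > tol:
            xl, xr = 0.5 * (xa + xm), 0.5 * (xm + xb)
            Fl, Fr = F(xl), F(xr)
            Fmin = min(Fa, Fb, Fm, Fl, Fr)
            if Fmin == Fa or Fmin == Fl:      # minimum lies in the left half
                xb, xm, Fb, Fm = xm, xl, Fm, Fl
            elif Fmin == Fm:                  # minimum lies in the middle half
                xa, xb, Fa, Fb = xl, xr, Fl, Fr
            else:                             # Fmin is Fr or Fb: right half
                xa, xm, Fa, Fm = xm, xr, Fm, Fr
        return 0.5 * (xa + xb)

    # Reproduces the worked example later in this section:
    # the minimum of x**3 - 3*x**2 on [0, 3] is at x = 2.
    print(bisection_minimize(lambda x: x**3 - 3*x**2, 0.0, 3.0))
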
Proposition If F(x) is unimodal for a ≤ x ≤ b with a minimum at x* then the
number of iterations taken by the bisection method to locate x* within a bracket
of width less than 10⁻ˢ is K, where K is the smallest integer which exceeds

    (s + log₁₀(b - a)) / log₁₀(2).                                    (2.2.1)

Proof The size of the bracket containing the solution is halved on each itera-
tion. Hence, after k iterations the width of the bracket is 2⁻ᵏ(b - a). To find
the value of k which gives

    2⁻ᵏ(b - a) < 10⁻ˢ

we take logs of both sides and get

    log₁₀(b - a) - k log₁₀(2) = -s

and so the width of the bracket is less than 10⁻ˢ once k exceeds (2.2.1).
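
The bound (2.2.1) is easy to evaluate numerically; the short Python sketch below assumes (2.2.1) has the form reconstructed above, and the function name is an illustrative choice.

    import math

    def bisection_iterations(a, b, s):
        """Iterations needed: ceiling of (s + log10(b - a)) / log10(2), cf. (2.2.1)."""
        return math.ceil((s + math.log10(b - a)) / math.log10(2.0))

    # Reducing a bracket of width 3 to a width below 10**-4:
    print(bisection_iterations(0.0, 3.0, 4))   # 15 iterations
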
We can show how the bisection method works by applying it to the problem

    Minimize F(x) = x³ - 3x²  for 0 ≤ x ≤ 3.

Initially xa = 0, xb = 3, xm = 1.5, and the first iteration adds the points xl = 0.75
and xr = 2.25. We then find

    Fa = 0;  Fl = -1.266;  Fm = -3.375;  Fr = -3.797;  Fb = 0.

The least function value is at xr = 2.25 and hence the search range for the next
iteration is [xm, xb] = [1.5, 3.0]. After re-labelling the points and computing
new values xl, xr we get

    xa = 1.5;  xl = 1.875;  xm = 2.25;  xr = 2.625;  xb = 3.0

and Fa = -3.375; Fl = -3.955; Fm = -3.797; Fr = -2.584; Fb = 0.
Now the least function value is at xl and the new range is [xa, xb] = [1.5, 2.25].
Re-labelling and adding the new xl and xr we get

    xa = 1.5;  xl = 1.6875;  xm = 1.875;  xr = 2.0625;  xb = 2.25

and Fa = -3.375; Fl = -3.737; Fm = -3.955; Fr = -3.988; Fb = -3.797.
These values imply the minimum lies in [xm, xb] = [1.875, 2.25]. After a few
more steps we have an acceptable approximation to the true solution at x = 2.
The application of the bisection method to a maximum-return problem is illus-
trated below in section 6 of this chapter.

Finding a bracket for a minimum


We now give a systematic way of finding a range a < x < b which contains
a minimum of F(x). This method uses the slope F' to determine whether the
search for a minimum should be to the left or the right of an initial point x0.
If F'(x0) is positive then lower values of the function will be found for x < x0,
while F'(x0) < 0 implies that lower values of F occur when x > x0. The al-
gorithm merely takes larger and larger steps in a "downhill" direction until the
function starts to increase, indicating that a minimum has been bracketed.

Finding a and b to bracket a local minimum of F (x)


Choose an initial point x0 and a step size α (> 0)
Set δ = -α × sign(F'(x0))
Repeat for k = 0, 1, 2, ...
    x_{k+1} = x_k + δ,  δ = 2δ
until F(x_{k+1}) > F(x_k)
if k = 0 then set a = x0 and b = x1
if k > 0 then set a = x_{k-1} and b = x_{k+1}
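
A direct Python transcription of this bracketing algorithm might look as follows. The function name find_bracket is an illustrative choice, and the example call uses the data of Exercise 3 below.

    import math

    def find_bracket(F, dF, x0, alpha):
        """Take doubling steps downhill from x0 until F starts to increase."""
        delta = -alpha * math.copysign(1.0, dF(x0))
        xs = [x0]
        k = 0
        while True:
            xs.append(xs[k] + delta)
            delta = 2.0 * delta
            if F(xs[k + 1]) > F(xs[k]):
                break
            k += 1
        if k == 0:
            return x0, xs[1]
        return xs[k - 1], xs[k + 1]

    # Exercise 3: bracket a minimum of F(x) = exp(x) - 2x starting from x0 = 1.
    # The returned endpoints contain the minimum at x = log(2); they come out in
    # descending order because the search steps to the left.
    a, b = find_bracket(lambda x: math.exp(x) - 2*x,
                        lambda x: math.exp(x) - 2.0, 1.0, 0.1)
    print(a, b)   # 0.9  0.3
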
Exercises
1. Apply the bisection method to F(x) = eˣ - 2x in the interval 0 ≤ x ≤ 1.
2. Do two iterations of the bisection method for the function F(x) = x³ + … - x
in the range 0 ≤ x ≤ 1. How close is the best point found to the exact minimum
of F? What happens if we apply the bisection method in the range -2 ≤ x ≤ 0?
3. Use the bracketing technique with x0 = 1 and α = 0.1 to find a bracket for a
minimum of F(x) = eˣ - 2x.
4. Apply the bracketing algorithm to the function (1.7.5) with Va = 0.00123
and ρ = 10 and using x0 = 0 and α = 0.25 to determine a starting range for
the bisection method. How does the result compare with the bracket obtained
when x0 = 1 and α = 0.2?

3. The secant method


We now consider an iterative method for solving F'(x) = 0. This will find a
local minimum of F(x) provided we use it in a region where the second deriva-
tive F''(x) remains positive. The approach is based on linear interpolation. If
we have evaluated F' at two points x = x1 and x = x2 then the calculation

    x3 = x2 - F'(x2)(x2 - x1) / (F'(x2) - F'(x1))                     (2.3.1)

gives an estimate of the point where F' vanishes.


We can show that formula (2.3.1) is exact if F is a quadratic function. Consider
F(x) = x² - 3x - 1 for which F'(x) = 2x - 3 and suppose we use x1 = 0 and
x2 = 2. Then (2.3.1) gives

    x3 = 2 - 1 × (2 - 0)/(1 - (-3)) = 2 - 0.5 = 1.5.

Hence (2.3.1) has yielded the minimum of F(x). However, when F is not


quadratic, we have to apply the interpolation formula iteratively. The algo-
rithm below shows how this might be done.

Secant method for solving F'(x) = 0

Choose x0, x1 as two estimates of the minimum of F(x)
Repeat for k = 0, 1, 2, ...
    x_{k+2} = x_{k+1} - F'(x_{k+1})(x_{k+1} - x_k) / (F'(x_{k+1}) - F'(x_k))
until |F'(x_{k+2})| is sufficiently small.


This algorithm makes repeated use of formula (2.3.1) based upon the two most
recently calculated points. In fact, this may not be the most efficient way to
proceed. When k > 1, we would normally calculate x_{k+2} using x_{k+1} together
with either x_k or x_{k-1} according to one of a number of possible strategies:
(a) Choose whichever of x_k and x_{k-1} gives the smaller value of |F'|.
(b) Choose whichever of x_k and x_{k-1} gives F' with opposite sign to F'(x_{k+1}).
(c) Choose whichever of x_k and x_{k-1} gives the smaller value of F.
Strategies (a) and (c) are based on using points which seem closer to the mini-
mum; strategy (b) reflects the fact that linear interpolation is more reliable than
linear extrapolation (which occurs when F'(x_{k+1}) and F'(x_k) have the same
sign). Strategy (b), however, can only be employed if we have chosen our
initial x0 and x1 so that F'(x0) and F'(x1) have opposite signs.
We can demonstrate the secant method with strategy (a) on F(x) = x³ - 3x²
for which F'(x) = 3x² - 6x. If x0 = 1.5 and x1 = 3 then F'(x0) = -2.25 and
F'(x1) = 9. Hence the first iteration gives

    x2 = 3 - 9 × (3 - 1.5)/(9 - (-2.25)) = 3 - 1.2 = 1.8

and so F'(x2) = -1.08. For the next iteration we re-assign x1 = 1.5, since
|F'(1.5)| < |F'(3)|, and so we get

    x3 = 1.8 - (-1.08) × (1.8 - 1.5)/(-1.08 - (-2.25)) ≈ 2.077.

The iterations appear to be converging towards the correct solution at x = 2,
and the reader can perform further calculations to confirm this.
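
A minimal Python sketch of the secant iteration (2.3.1) combined with strategy (a) is given below; the names and tolerance are illustrative rather than taken from the book, and the example call reproduces the iterates 1.8 and 2.077 obtained above.

    def secant_minimize(dF, x0, x1, tol=1e-5, max_iter=50):
        """Solve F'(x) = 0 by linear interpolation, keeping the point with smaller |F'|."""
        for _ in range(max_iter):
            g0, g1 = dF(x0), dF(x1)
            x2 = x1 - g1 * (x1 - x0) / (g1 - g0)    # formula (2.3.1)
            if abs(dF(x2)) < tol:
                return x2
            # strategy (a): retain whichever old point has the smaller gradient magnitude
            x0 = x0 if abs(g0) < abs(g1) else x1
            x1 = x2
        return x1

    # F(x) = x**3 - 3*x**2, so F'(x) = 3*x**2 - 6*x; starting from 1.5 and 3
    # the iterates are 1.8, 2.0769, ... converging to the minimum at x = 2.
    print(secant_minimize(lambda x: 3*x**2 - 6*x, 1.5, 3.0))
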
Exercises
1. Apply the secant method to F(x) = eˣ - 2x in the range 0 ≤ x ≤ 1.
2. Show that (2.3.1) will give F'(x) = 0 when applied to any quadratic function.
3. Use the secant method with strategy (b) on F(x) = x³ - 3x² with x0 = 1.5
and x1 = 3. What happens if the starting values are x0 = 0.5 and x1 = 1.5?

4. The Newton method


This method seeks the minimum of F ( x ) using both first and second deriva-
tives. In its simplest form it is described as follows.

Newton method for minimizing F ( x )


Choose x0 as an estimate of the minimum of F(x)
Repeat for k = 0, 1, 2, ...

    x_{k+1} = x_k - F'(x_k)/F''(x_k)                                  (2.4.1)

until |F'(x_{k+1})| is sufficiently small.
This algorithm is derived by expanding F(x) as a Taylor series about x_k:

    F(x_k + h) = F(x_k) + h F'(x_k) + ½h² F''(x_k) + O(h³).           (2.4.2)

Differentiation with respect to h gives a Taylor series for F'(x):

    F'(x_k + h) = F'(x_k) + h F''(x_k) + O(h²).                       (2.4.3)

If we assume that h is small enough for the O(h²) term in (2.4.3) to be neglected
then it follows that the step h = -F'(x_k)/F''(x_k) will give F'(x_k + h) ≈ 0.
As an illustration, we apply the Newton method to F(x) = x³ - 3x² for which
F'(x) = 3x² - 6x and F''(x) = 6x - 6. At the initial guess x0 = 3, F' = 9 and
F'' = 12 and so the next iterate is given by

    x1 = 3 - 9/12 = 2.25.

Iteration two uses F'(2.25) = 1.6875 and F''(2.25) = 7.5 to give

    x2 = 2.25 - 1.6875/7.5 = 2.025.

After one more iteration x3 ≈ 2.0003 and so Newton's method is converging to
the solution x* = 2 more quickly than either bisection or the secant method.
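
The basic Newton iteration (2.4.1) is equally simple to express in code. The following Python sketch uses illustrative names and reproduces the iterates 2.25, 2.025, 2.0003 quoted above.

    def newton_minimize(dF, d2F, x0, tol=1e-5, max_iter=50):
        """Basic Newton iteration x_{k+1} = x_k - F'(x_k)/F''(x_k)."""
        x = x0
        for _ in range(max_iter):
            x = x - dF(x) / d2F(x)
            if abs(dF(x)) < tol:
                break
        return x

    # F(x) = x**3 - 3*x**2 from x0 = 3: iterates 2.25, 2.025, 2.0003, ...
    print(newton_minimize(lambda x: 3*x**2 - 6*x, lambda x: 6*x - 6, 3.0))
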

Convergence of the Newton method


Because the Newton iteration is important in the development of optimization
methods we study its convergence more formally. We define

    e_k = x_k - x*                                                    (2.4.4)

as the error in the approximate minimum after k iterations.


Proposition Suppose that the Newton iteration (2.4.1) converges to x*, a local
minimum of F(x), and that F''(x*) = m > 0. Suppose also that there is some
neighbourhood, N, of x* in which the third derivatives of F are bounded, so
that, for some M > 0,

    M > F'''(x) > -M  for all x ∈ N.                                  (2.4.5)

If e_k is defined by (2.4.4) then there exists an integer K such that, for all k > K,

    |e_{k+1}| ≤ (M/m) e_k².                                           (2.4.6)

Proof Since the iterates x_k converge to x* there exists an integer K such that

    x_k ∈ N  and  |e_k| < m/(2M)  for k > K.
Then the bounds (2.4.5) on F''' imply

    F''(x_k) > F''(x*) - M|e_k| = m - M|e_k|.

Combining this with the bound on |e_k| we get

    F''(x_k) > m/2.                                                   (2.4.7)

Now, by the mean value form of Taylor's theorem,

    F'(x*) = F'(x_k) - e_k F''(x_k) + ½e_k² F'''(ξ)

for some ξ between x* and x_k; and since F'(x*) = 0 we deduce

    F'(x_k) = e_k F''(x_k) - ½e_k² F'''(ξ).

The next estimate of the minimum is x_{k+1} = x_k - δx_k where

    δx_k = F'(x_k)/F''(x_k).

Hence the error after k + 1 iterations is

    e_{k+1} = x_{k+1} - x* = e_k - δx_k = ½e_k² F'''(ξ)/F''(x_k).

Thus (2.4.6) follows, using (2.4.5) and (2.4.7).


This result shows that, when x_k is near to x*, the error e_{k+1} is proportional to e_k²
and so the Newton method ultimately approaches the minimum very rapidly.

Definition If, for some constant C, the errors e_k, e_{k+1} on successive steps of
an iterative method satisfy

    |e_{k+1}| ≤ C e_k²

then the iteration is said to have a quadratic rate of ultimate convergence.
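
Using the Newton iterates for x³ - 3x² quoted above (x0 = 3, x* = 2), this definition can be checked numerically; the small Python sketch below is illustrative only.

    # e_k = x_k - x* for the iterates 3, 2.25, 2.025, 2.0003
    errors = [1.0, 0.25, 0.025, 0.0003]
    ratios = [abs(errors[k + 1]) / errors[k]**2 for k in range(len(errors) - 1)]
    print(ratios)   # roughly 0.25, 0.4, 0.48, approaching F'''(x*)/(2 F''(x*)) = 6/12 = 0.5
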

Implementation of the Newton method


The convergence result in section 4 depends on certain assumptions about
higher derivatives and this should warn us that the basic Newton iteration may
not always converge. For instance, if the iterations reach a point where F''(x)
is zero the calculation will break down. It is not only this extreme case which
can cause difficulties, however, as the following examples show.
Consider the function F(x) = x³ - 3x², and suppose the Newton iteration is
started from x0 = 1.1. Since F'(x) = 3x² - 6x and F''(x) = 6x - 6, we get

    x1 = 1.1 - F'(1.1)/F''(1.1) = 1.1 - (-2.97)/0.6 = 6.05.

However, the minimum of x³ - 3x² is at x = 2; and hence the method has
overshot the minimum and given x1 further away from the solution than x0.
Suppose now that the Newton iteration is applied to x³ - 3x² starting from
x0 = 0.9. The new estimate of the minimum turns out to be

    x1 = 0.9 - F'(0.9)/F''(0.9) = 0.9 - (-2.97)/(-0.6) = -4.05

and the direction of the Newton step is away from the minimum. The iteration
is being attracted to the maximum of F(x) at x = 0. (Since the Newton method
solves F'(x) = 0, this is not an unreasonable outcome.)
These two examples show that convergence of the basic Newton iteration de-
pends upon the behaviour of F''(x) and a practical algorithm should include
safeguards against divergence. We should only use (2.4.1) if F''(x) is strictly
positive; and even then we should also check that the new point produced by
the Newton formula is "better" than the one it replaces. These ideas are in-
cluded in the following algorithm which applies the Newton method within a
bracket [a, b] such as can be found by the algorithm in section 2.

Safeguarded Newton method for minimizing F(x) in [a, b]

Make a guess x0 (a < x0 < b) for the minimum of F(x)
Repeat for k = 0, 1, 2, ...
    if F''(x_k) > 0 then δx = -F'(x_k)/F''(x_k)
    else δx = -F'(x_k)
    if δx < 0 then α = min(1, (a - x_k)/δx)
    if δx > 0 then α = min(1, (b - x_k)/δx)
    Repeat for j = 0, 1, ...
        α = 0.5ʲα
    until F(x_k + αδx) < F(x_k)
    Set x_{k+1} = x_k + αδx
until |F'(x_{k+1})| is sufficiently small.
As well as giving an alternative choice of δx when F'' ≤ 0, the safeguarded
Newton algorithm includes a step-size, α. This is chosen first to prevent the
correction steps from going outside the bracket [a, b] and then, by repeated
halving, to ensure that each new point has a lower value of F than the previous
one. The algorithm always tries the full step (α = 1) first and hence it can have
the same fast ultimate convergence as the basic Newton method.
We can show the working of the safeguarded Newton algorithm on the function
F(x) = x³ - 3x² in the range [1, 4] with x0 = 1.1. Since

    F(1.1) = -2.299,  F'(1.1) = -2.97  and  F''(1.1) = 0.6

the first iteration gives δx = 4.95. The full step, α = 1, gives x_k + αδx = 6.05
which is outside the range we are considering and so we must re-set

    α = (4 - 1.1)/4.95 ≈ 0.586  so that  x_k + αδx = 4.

However, F(4) = 16 > F(1.1) and α is reduced again (to about 0.293) so that

    x_k + αδx ≈ 2.45.

Now F(2.45) = -3.301 which is less than F(1.1). Therefore the inner loop of
the algorithm is complete and the next iteration can begin.
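
A Python sketch of the safeguarded algorithm follows; the function name and iteration limits are illustrative, not the book's code, and the example call repeats the calculation just described for x³ - 3x² on [1, 4] with x0 = 1.1.

    def safeguarded_newton(F, dF, d2F, a, b, x0, tol=1e-5, max_iter=50):
        """Newton's method restricted to [a, b] with a fallback step and step halving."""
        x = x0
        for _ in range(max_iter):
            g, h = dF(x), d2F(x)
            if abs(g) < tol:                        # gradient already small enough
                break
            dx = -g / h if h > 0 else -g            # Newton step, else downhill step
            # clip the full step so that x + alpha*dx stays inside [a, b]
            alpha = min(1.0, (a - x) / dx) if dx < 0 else min(1.0, (b - x) / dx)
            # halve alpha until the new point is strictly better than the old one
            while F(x + alpha * dx) >= F(x):
                alpha *= 0.5
            x = x + alpha * dx
        return x

    # First iteration: dx = 4.95 is clipped to reach x = 4, then halved to x = 2.45,
    # as in the text; the method then converges to the minimum at x = 2.
    print(safeguarded_newton(lambda x: x**3 - 3*x**2,
                             lambda x: 3*x**2 - 6*x,
                             lambda x: 6*x - 6, 1.0, 4.0, 1.1))
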
Under certain assumptions, we can show that the inner loop of the safeguarded
Newton algorithm will always terminate and hence that the safeguarded New-
ton method will converge.
Exercises
1. Use Newton's method to estimate the minimum of eˣ - 2x in 0 < x ≤ 1.
Compare the rate of convergence with that of the bisection method.
2. Show that, for any starting guess, the basic Newton algorithm converges in
one step when applied to a quadratic function.
3. Do one iteration of the basic Newton method on the function F(x) = x³ - 3x²
starting from each of the following initial guesses:

Explain what happens in each case.

4. Do two iterations of the safeguarded Newton method applied to the function
x³ - 3x² and starting from x0 = 0.9.

5. Methods using quadratic or cubic interpolation


Each iteration of Newton's method generates x_{k+1} as a stationary point of the
interpolating quadratic function defined by the values of F(x_k), F'(x_k) and
F''(x_k). In a similar way, a direct-search iterative approach can be based on
locating the minimum of the quadratic defined by values of F at three points
x_k, x_{k-1}, x_{k-2}; and a gradient approach could minimize the local quadratic
approximation given by, say, F(x_{k-1}), F'(x_{k-1}) and F(x_k). If a quadratically
predicted minimum x_{k+1} is found to be "close enough" to x* (e.g. because
F'(x_{k+1}) = 0) then the iteration terminates; otherwise x_{k+1} is used instead of
one of the current points to generate a new quadratic model and hence predict
a new minimum.
As with the Newton method, the practical implementation of this basic idea
requires certain safeguards, mostly for dealing with cases where the interpo-
lated quadratic has negative curvature and therefore does not have a minimum.
The bracketing algorithm given earlier may prove useful in locating a group of
points which implies a suitable quadratic model.
A similar approach is based on repeated location of the minimum of a cubic
polynomial fitted either to values of F at four points or to values of F and F'
at two points. This method can give faster convergence, but it also requires
fall-back options to avoid the search being attracted to a maximum rather than
a minimum of the interpolating polynomial.
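
As a sketch of the quadratic direct-search idea, the following Python function fits a parabola through three points and returns the location of its stationary point. The formula is the standard vertex formula for an interpolating quadratic rather than anything quoted from the book, the function name is illustrative, and a practical method would add the safeguards discussed above (in particular a check for a zero or negative denominator, which signals non-positive curvature).

    def quadratic_step(F, x1, x2, x3):
        """Stationary point of the quadratic interpolating F at x1, x2, x3."""
        f1, f2, f3 = F(x1), F(x2), F(x3)
        num = (x2 - x1)**2 * (f2 - f3) - (x2 - x3)**2 * (f2 - f1)
        den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
        return x2 - 0.5 * num / den

    # For F(x) = x**3 - 3*x**2 and the points 1.5, 1.875, 2.25 from the bisection
    # example, the predicted minimum (about 1.98) is already close to x* = 2.
    print(quadratic_step(lambda x: x**3 - 3*x**2, 1.5, 1.875, 2.25))
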
Exercises
1. Suppose that F(x) is a quadratic function and that, for any two points xa, xb,
the ratio D is defined by

Show that D = 0.5 when xb is the minimum of F(x). What is the expression
for D if F(x) is a cubic function?
2. Explain why the secant method can be viewed as being equivalent to quadratic
interpolation for the function F(x).
3. Design an algorithm for minimizing F(x) by quadratic interpolation based
on function values only.

6. Solving maximum-return problems


The SAMPO software was mentioned briefly in section 5 of chapter 1. The
program sample2 is designed to read asset data like that in Table 1.3 and
then to solve problem Maxret1m using either the bisection, secant or Newton
method. In this section we shall quote some results from sample2 in order
to compare the performance of the three techniques. The reader should be
able to obtain similar comparative results using other implementations of these
one-variable optimization algorithms. (It should be understood, however, that
two different versions of the same iterative optimization method may not give
exactly the same sequence of iterates even though they eventually converge to
the same solution. Minor discrepancies can arise because of rounding errors
in computer arithmetic when the same calculation is expressed in two different
ways or when two computer systems work to different precision.)
We recall that the maximum-return problem Maxret1m involves minimizing
the function (1.7.4). In particular, using data in Table 1.3, (1.7.4) becomes a
one-variable function, denoted (2.6.1), where ρ is a weighting parameter and
Va denotes an acceptable value for risk.

In order to choose a sensible value for Va it can be helpful, first of all, to solve
Minrisk0 to obtain the least possible value of risk, Vmin. For the data in Table
1.3 we find that Vmin = 0.00112. Suppose now that we seek the maximum
return when the acceptable risk is Va = 1.5Vmin = 0.00168. Table 2.1 shows
results obtained when we use the bisection, secant and Newton methods to
minimize (2.6.1) with ρ = 1.

Method      y1    y2    V        R
Bisection   0.7   0.3   0.00169  1.233%
Secant      0.7   0.3   0.00169  1.233%
Newton      0.7   0.3   0.00169  1.233%

Table 2.1. Solutions of Maxret1m for assets in Table 1.3

We see that all three minimization methods find the same solution but that they
use different numbers of iterations.
When drawing conclusions from a comparison like that in Table 2.1 it is im-
portant to ensure that all the methods have used the same (or similar) initial
guessed solutions and convergence criteria. In the above results, both bisec-
tion and the secant method were given the starting values x = 0 and x = 1.
The Newton method was started from x = 0 (it only needs one initial point).
The bisection iterations terminate when the bracket containing the optimum
has been reduced to a width less than 10⁻⁴. Convergence of the secant and
Newton methods occurs when the gradient |F'(x)| is sufficiently small. Therefore it seems
reasonable to conclude that the Newton method is indeed more efficient than
the secant method which in turn is better than bisection.
We now consider another problem to see if similar behaviour occurs. This time
we use data for the first two assets in Table 1.2 and we consider the minimiza-
tion of (1.7.4) with ρ = 10 and Va = 0.00072. Table 2.2 summarises results
obtained with sample2 using the same starting guessed values and conver-
gence criteria as for Table 2.1 together with an additional result for Newton's
method when the initial solution estimate is x0 = 1.

Table 2.2. Solutions of Maxret1m for first two assets in Table 1.2

The first three rows of Table 2.2 again show the secant and Newton meth-
ods outperforming bisection by finding the same solution in fewer iterations.
In fact the number of bisection steps depends only on the size of the starting
bracket and the convergence criterion while the iteration count for the secant
and Newton methods can vary from one problem to another.
The last row of Table 2.2 shows that a different solution is obtained if Newton's
method is started from x0 = 1 instead of x0 = 0. This alternative solution is
actually better than the one in the first three rows of the table because the
return R is larger. However, both the solutions are valid local minima of (1.7.4)
and we could say, therefore, that the bisection and secant methods have been
"unlucky" in converging to the inferior one. As mentioned in section 1, most
of the methods covered in this book will terminate when the iterations reach
a local optimum. If there are several minima, it is partly a matter of chance
which one is found, although the one "nearest" to the starting guess is probably
the strongest contender.
Exercises (To be solved using sample2 or other suitable software.)
1. Using the data for the first two assets in Table 1.2, determine the coefficients
of the function (1.7.4). Hence find the maximum return for an acceptable risk
Va = 0.0005 by using the bisection method to minimize (1.7.4). How does your
solution change for different values of ρ in the range 0.1 ≤ ρ ≤ 10?
2. Solve the maximum-return problem in question 1 but using the bisection
method to minimize the non-smooth function (1.7.7). Explain why the results
differ from those in question 1.
3. Using data for the first two assets in Table 1.2, form the function (1.7.4) and
minimize it by the secant method when Va = 0.002 and ρ = 10. Use starting
guesses 0 and 1 for x. Can you explain why a different solution is obtained
when the starting guesses for x are 0.5 and 1? (It may help to sketch the func-
tion being minimized.)
4. Minimize (2.6.1) by Newton's method using the initial guess x0 = 0.5 and
explain why the solution is different from the ones quoted in Table 2.1. Find
starting ranges for the bisection and secant methods from which they too will
converge to this alternative local minimum.
5. Plot the graph of (2.6.1) for 0.55 ≤ x ≤ 0.75, when Va = 0.00123 and ρ = 10
and observe the two local minima. Also plot the graph when Va = 0.001. Can
you explain why there is a unique minimum in this case? What is the smallest
value of Va for which two minima occur?
6. Plot the graph of (1.7.8) with ρ = 1 in the range 0.55 ≤ x ≤ 0.75 first for
Va = 0.001 and then for Va = 0.00123. Does the function have a continuous
first derivative in both cases? Explain your answer.
7. Using any suitable optimization method, minimize (2.6.1) and hence solve
the maximum-return problem for the data in Table 1.3 for values of Va in the
range 0.0011 ≤ Va ≤ 0.0033. Plot the resulting values of y1 and y2 against Va.
Do these show a linear relationship? How does the graph of maximum return
R against Va compare with the efficient frontier you would obtain by solving
Minrisk1m for the data in Table 1.3?
8. As an alternative to the composite function in Risk-Ret1, another function
whose minimum value will give a low value of risk coupled with a high value
for expected return is given by (2.6.2).
For a two-asset problem, use similar ideas to those in section 1.3 of chapter
1 and express F as a function of invested fraction y1 only. Use the bisection
method to minimize this one-variable form of (2.6.2) for the data in Table 1.3.
Also obtain an expression for dF/dy1 and use the secant method to estimate
the minimum of F for the same set of data.
http://www.springer.com/978-1-4020-8110-1
