A Simple Technique for the Generation of Correlated Random Number Sequences

… random number sequence with reference to another so as to obtain the desired correlation between the two. With no specifications on the distribution of the random number sequence (linear prediction does not assume knowledge of the probability law of the random variables), the method should be applicable to all continuous distributions. The validity of the algorithm has been justified through the derivation of expressions for the expectation and variance of the correlation coefficient. Satisfactory results have been obtained for three typical distributions, viz., normal, uniform, and exponential. The assumptions made regarding the behavior of the error of prediction have been verified through computations. The method is expected to find applications in various disciplines.

INTRODUCTION

Monte Carlo simulation provides a convenient means for the qualitative investigation of the behavior of a stochastic system. The scope of this method is determined by the extent to which the statistical characteristics of the random number generator resemble those of the system variables. There are many well-known techniques by which it is possible to generate random numbers having the required distribution characteristics [1], [2].

In some cases, the system variables display statistical interdependence. As an example, consider the problem of reliability analysis of a power system. In this case, failure of one of the power transmission links may cause overloading of another, thereby increasing its failure probability. While simulating such a system, it becomes necessary to generate sequences of random numbers having the prescribed mutual cross correlations among them.

It is well known that a multivariate normal sample can be generated through appropriate linear combinations of independent normal random variables. These techniques are applicable only to the case of the normal distribution. Moreover, they involve problems like computation of the square root of the covariance matrix or the solution of a system of nonlinear algebraic equations [3].
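To make the linear-combination approach concrete, the following minimal sketch (ours, not part of the correspondence) draws a correlated bivariate normal sample by applying a Cholesky factor, one form of the square root of the covariance matrix, to independent standard normal deviates; the variable names, sample size, and seed are arbitrary.

```python
import numpy as np

# Sketch: correlated *normal* sample via a square root (Cholesky factor)
# of the covariance matrix, the classical method the text refers to.
rng = np.random.default_rng(1)
rho = 0.8                              # desired correlation
cov = np.array([[1.0, rho],
                [rho, 1.0]])           # covariance matrix
L = np.linalg.cholesky(cov)            # "square root" of the covariance
z = rng.standard_normal((2, 10_000))   # independent N(0, 1) deviates
x, y = L @ z                           # linear combinations
print(np.corrcoef(x, y)[0, 1])         # close to 0.8
```

As the text notes, this route is tied to the normal distribution; it is shown only as the baseline the correspondence improves upon.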
Li and Hammond [4] suggest a procedure for generating correlated random variables with specified nonnormal probability distribution functions. In this procedure, a multivariate nonnormal sample is obtained through appropriate nonlinear transformation of a multivariate normal sample. The method involves operations like "predistortion" of the desired correlation matrix (in order to compensate for the distortion during the nonlinear transformation), evaluation of the square root of the correlation matrix (for the generation of the multivariate normal sample), and transformation from normal to the desired distribution. Although standard procedures are available for all these operations, the overall procedure is quite tedious. If the inverse of the desired distribution is not known in closed form, the transformation is …
… same distribution as Y. However, when averaged over the entire sample size, the (average) distribution of [Y_N] does match that of Y. The interdependence between X and Y is specified by the coefficient of linear correlation between them. The random number sequence [Y_N] is generated by rearranging another random number sequence [Z_N] with reference to [X_N]. The process of rearrangement is governed by the optimal linear predictor of Y and the error of prediction. Since the method is based on the theory of optimal linear prediction, knowledge of the joint probability law (which is very difficult to obtain in the case of nonnormal distributions) is not required. The method can easily be extended to the case of multidimensional random variables.

OPTIMAL PREDICTION

If two random variables are jointly normally distributed, then the conditional expectation of one, for a given value of the other, can be calculated from the knowledge of their means, standard deviations, and the coefficient of correlation [5]. If these moments are not known a priori, they may be calculated from the observed data.

It may happen that the joint probability law of the two random variables X and Y is unknown. In the case of nonnormal distributions, even if the joint probability law is known, it is seldom possible to derive an expression for the conditional expectation. In such cases, the prediction problem can be attempted as follows. Let

E*(Y | X) = the predictor of Y for given X,
q = Y − E*(Y | X) = the error of prediction,
L(q) = the loss function, which, for every true value of the random variable Y and its predictor, assigns a loss or cost.

The choice of the loss function is usually governed more by mathematical convenience than by any physical significance. The essential characteristics of the loss function are

i) L(q) = 0, if q = 0;
ii) monotonic, i.e., L(q_2) > L(q_1), if q_2 > q_1 > 0;
iii) symmetric, i.e., L(q) = L(−q).

It is desirable, though not essential, that the loss function be convex and continuous in the range of interest [6]. The risk function is defined as

R(q) = E[L(q)],

which is the expectation of the loss function over the admissible range of errors.

If the conditional distribution function of Y happens to be i) symmetric about the mean Ȳ and ii) convex for Y < Ȳ, then it has been shown by Sherman [7] that the random variable that minimizes the risk function is the conditional expectation.

Deutsch [8] has shown that, if the loss function is chosen to be the squared error, then the above theory leads to an optimal predictor, even without the conditional distribution function being required to be either symmetric or convex. This is the principal reason why the loss function is chosen to be the squared error. For a linear predictor E*(Y | X) = a + bX, the risk is then the mean squared error

E{[Y − (a + bX)]²}.    (1)

Expressions for the parameters a and b can be obtained by equating the respective partial derivatives of the above expression to zero. The best linear predictor of Y for X = x is given by [5]

E*(Y | X = x) = Ȳ + ρ (σ_Y/σ_X)(x − X̄)    (2)

where

X̄, Ȳ    means of X and Y,
σ_X, σ_Y    standard deviations of X and Y,
ρ    coefficient of linear correlation specifying the interdependence between X and Y.

It can be shown that the predictor of (2) is nothing but the projection of the random variable Y onto the random variable X. The existence and uniqueness of the best linear predictor are provided by the well-known projection theorem of abstract Hilbert space theory [9]. The derivation of the linear predictor does not require knowledge of the probability law of the random variables [10]. The only necessary condition for the above predictor to be realizable is that the random variables under consideration be square integrable (i.e., both σ_X² and σ_Y² should be finite).

It is observed that the expression for the best linear predictor coincides with that for the conditional expectation of jointly normally distributed random variables. Therefore, the best linear predictor could be interpreted alternatively as the conditional expectation, if the random variables under consideration are known to be jointly normally distributed. The error of prediction q is given by

q = Y − E*(Y | X).    (3)

Since there is no reason to believe that the error is more likely to be positive than negative, or vice versa, the expectation of the error can be assumed to be equal to zero. This assumption can be verified through actual computations during the simulation:

E(q) = 0.    (4)

Then the variance of the error is given by [5]

σ_q² = E(q²) = σ_Y²(1 − ρ²).    (5)
GENERATION OF CORRELATED RANDOM NUMBER SEQUENCES

The problem to be considered is that of the generation of two random number sequences [X_N] and [Y_N] of sample size N, to represent the two jointly distributed random variables X and Y, so that the following conditions are satisfied.

i) The marginal distributions of [X_N] and [Y_N], when averaged over the entire sample size, are the same as those of X and Y, respectively (i.e., according to the specifications described by the practical situation).

ii) The estimate of the coefficient of correlation between [X_N] and [Y_N] satisfies

(1/(N σ_X σ_Y)) Σ_{j=1}^{N} (x_j − X̄)(y_j − Ȳ) → ρ_d,  as N → ∞    (6)

where ρ_d is the desired correlation coefficient.
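Condition ii) translates directly into code; the helper below (our naming) computes the estimate of eq. (6) from a pair of generated sequences and the known population moments, and should approach ρ_d as N grows.

```python
import numpy as np

def rho_hat(xs, ys, mx, my, sx, sy):
    """Estimate of the correlation coefficient, eq. (6),
    using the known population moments of X and Y."""
    xs, ys = np.asarray(xs), np.asarray(ys)
    return np.sum((xs - mx) * (ys - my)) / (len(xs) * sx * sy)
```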
From (4)-(6), for every particular value x_j of x, it is possible to specify the range in which the corresponding y_j is likely to be situated with a high probability. Assuming the probability law of the random variable q to be the same as that of Y (this assumption can be verified through computations, see Fig. 5), it is possible to fix upper bounds on the magnitudes of positive and negative errors.

If the range of the random variable Y happens to be finite (as in the case of a uniform distribution), determination of the error bounds is straightforward (see Fig. 1(a)). From (5), it may be observed that the range of the error must be smaller than that of the random variable to be predicted. In the case of those probability laws in which the probability density curve asymptotically approaches the zero value, strictly speaking, the range of the random variable (and hence that of the error as well) extends to infinity. As a result, while fixing the error bounds, one is faced with the absurd task of comparing two infinities. This difficulty can be minimized by a simple approximation as described below.

Although the probability density curve may extend to infinity, in a practical sample the random numbers are distributed only over a finite range. In the case of the normal distribution, it can be assumed (to a certain approximation) that the practical values of the random variable are restricted to the range bounded by ±1.96σ (which includes 95 percent of the area under the probability density curve). Therefore, the error bounds are chosen so as to include 95 percent of the total area under the probability density curve (see Fig. 1(b)).

The same approximation can be used for determining the error bounds in the case of other distributions, provided the assumed restricted range of the random variable is consistent with that of the sample. Although it is difficult to give a rigorous explanation for the choice of the cutoff level, the range that includes 95 percent of the area under the probability density curve appears to be a reasonable approximation for the determination of the error bound in the case of probability density curves which asymptotically approach the zero value.

If the error bounds are tightened (i.e., the area included under the probability density curve is reduced), which is equivalent to assuming a stronger dependence between the two random variables, the magnitudes of the correlation coefficient will increase. However, if the error bounds are slackened (i.e., the area included under the probability density curve is increased), the result will be a decrease in the magnitudes of the correlation coefficient. If the probability law happens to be asymmetric about the mean value, the bounds for the magnitudes of positive and negative errors are different (see Fig. 1(c)).

[Fig. 1. Fixation of error bounds. (a) Uniform distribution (D = √3). (b) Normal distribution. (c) Exponential distribution: for +ve values of error, D_1 = 2.69; for −ve values of error, D_2 = 0.975.]
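The 95-percent convention can be turned into the constants D_1 and D_2 numerically. The sketch below (our construction; the paper fixes the bounds analytically from the probability law) recovers the Fig. 1(c) values for the exponential case from quantiles of a large centered sample, expressed in units of the standard deviation.

```python
import numpy as np

def error_bounds(q_sample, coverage=0.95):
    """D1, D2 from an empirical sample of the error-law proxy
    (the text assumes q is distributed like Y, shifted to zero mean)."""
    lo = np.quantile(q_sample, (1 - coverage) / 2)
    hi = np.quantile(q_sample, 1 - (1 - coverage) / 2)
    s = q_sample.std()
    return hi / s, -lo / s        # D1 (positive errors), D2 (negative)

rng = np.random.default_rng(3)
y = rng.exponential(1.0, 1_000_000)
print(error_bounds(y - y.mean()))   # close to (2.69, 0.975), cf. Fig. 1(c)
```

For the normal case the same routine returns approximately (1.96, 1.96), matching the symmetric ±1.96σ bound quoted in the text.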
Now the method of obtaining correlated random number sequences can be described as follows. We start with a random number sequence [X_N] of sample size N having the required distribution. For every particular x_j, the best linear predictor of Y can be calculated from (2). (Means and standard deviations of X and Y are specified by their marginal distributions, which are assumed to be known. We substitute ρ by ρ_d, the desired correlation coefficient.) The procedure then consists of selecting y_j from an appropriate parent population of random numbers (one which has the same distribution as that of the random variable Y), in a random manner, such that the error of prediction is within the bounds specified by the probability law of the random variable. In the case of populations of finite size, it may happen that for some x_j, no suitable match (i.e., a random number satisfying the condition of maximum allowable error) is available. Under such circumstances, a random number from the same population which gives the smallest squared error can be accepted without seriously affecting the simulation results. It is observed that even if the parent population from which the sequence [Y_N] is generated is of the same sample size N, on an average, 95 percent of the x_j find suitable matches.

THE ALGORITHM

i) Generate two random number sequences [X_N] and [Z_N] having the required distribution characteristics with the help of the techniques described in [1] and [2]. [Z_N] is to be used as a parent population for generating [Y_N].

ii) For every x_i, calculate the best linear predictor of Y:

E*(Y | X = x_i) = Ȳ + ρ_d (σ_Y/σ_X)(x_i − X̄).

iii) For every x_i, search through [Z_N] until a z_j is found such that

q_j² = [z_j − E*(Y | X = x_i)]² ≤ D_1² σ_q²,  for q_j > 0

or

q_j² ≤ D_2² σ_q²,  for q_j < 0

where σ_q² is the error variance given by (5), and D_1 and D_2 are constants to be determined from the probability law of the random variable Y, as illustrated in Fig. 1. If the probability law happens to be symmetric about the mean value, then D_1 = D_2.

iv) Choose the first z_j which satisfies the condition of the previous step to be paired with x_i as the corresponding y_i.

v) If none of the z_j satisfy the condition of step iii), choose that z_j to be paired with x_i which gives the minimum squared error.

Fig. 2 shows the computer flow chart for the above algorithm.
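A direct transcription of steps i)-v) into code follows; this is a sketch under our own naming, and the assumption that the parent population is consumed without replacement (consistent with "95 percent of the x_j find suitable matches") is ours.

```python
import numpy as np

def correlated_sequence(x, z, rho_d, mx, my, sx, sy, d1, d2):
    """Steps ii)-v): rearrange parent population z into y so that
    the correlation between x and y approaches rho_d."""
    sq = sy**2 * (1.0 - rho_d**2)             # error variance, eq. (5)
    pool = list(z)                            # parent population [Z_N]
    y = np.empty(len(x))
    for i, xi in enumerate(x):
        ep = my + rho_d * (sy / sx) * (xi - mx)   # predictor, step ii)
        chosen, best_j, best_sq = None, 0, float("inf")
        for j, zj in enumerate(pool):             # step iii): search [Z_N]
            q = zj - ep                           # error of prediction
            d = d1 if q > 0 else d2               # bound depends on sign
            if q * q <= d * d * sq:               # first admissible z_j:
                chosen = j                        # step iv)
                break
            if q * q < best_sq:                   # track fallback, step v)
                best_j, best_sq = j, q * q
        y[i] = pool.pop(chosen if chosen is not None else best_j)
    return y

rng = np.random.default_rng(0)
x = rng.exponential(1.0, 2000)                    # step i): [X_N] ...
z = rng.exponential(1.0, 2000)                    # ... and parent [Z_N]
y = correlated_sequence(x, z, 0.8, 1.0, 1.0, 1.0, 1.0, 2.69, 0.975)
print(np.corrcoef(x, y)[0, 1])                    # should approach 0.8
```

The exponential case with ρ_d = 0.8 and the bounds D_1 = 2.69, D_2 = 0.975 mirrors the configuration reported with Fig. 3 and Table I.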
[TABLE I. Values of correlation coefficient: desired value, obtained value, and standard deviation, for each distribution. X_N and Y_N are exponentially distributed; the number of Y_N sequences generated for each X_N is M = 100.]

[Figure residue: distribution function F(q); assumed mean 0, obtained mean 0; assumed variance 0.2750, obtained variance 0.2335.]

[Fig. 3. Normalized autocorrelation function of error of prediction q for exponential distribution with correlation coefficient ρ_d = 0.8.]
… of the desired correlation confirms the above conjecture about the trend of the simulation results.

iii) The algorithm rearranges one random number sequence with reference to another. It is necessary to investigate the effect of this random rearrangement on the statistical characteristics of the random number sequence [Y_N]. Figs. 3 and 4 show graphs of the autocorrelation functions of the error of prediction and of the random number sequence [Y_N]. The impulse-like nature of the autocorrelation functions suggests that the rearrangement has not introduced any significant amount of serial correlation in the random number sequence [Y_N] (a sketch of such a check is given after this list).

iv) Fig. 5 shows the empirical distribution function [11] of the error of prediction for the exponential distribution with correlation coefficient ρ_d = 0.8. The nature of the curve confirms the assumption made regarding the probability law of the error of prediction.

v) … does not impose any restrictions on the dimensionalities of the random variables.

vi) Since the method is based on the theory of optimal linear prediction (which does not assume an a priori knowledge of the probability law of the random variables under consideration), there is no reason which prevents it from being applied to the case of random variables having dissimilar marginal distributions. This particular aspect, in our opinion, deserves deeper investigation elsewhere.

vii) One of the referees has pointed out that the elements y_j of the sequence [Y_N] as generated by the procedure described in the correspondence will, in general, not be identically and independently distributed (i.i.d.). The individual distributions of the y_j are bound to differ from each other, as the selection of each y_j depends on the value of the corresponding x_j. However, when averaged over the entire sample size, the distribution of [Y_N] does match that of the random variable Y. Therefore, the procedure, although only approximate in terms of the i.i.d. property, would, it is believed, be useful in many practical applications.
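The serial-correlation check mentioned in item iii) can be reproduced in a few lines; this helper (ours) computes the normalized autocorrelation, which should look impulse-like, near 1 at lag 0 and near 0 elsewhere, for both q and [Y_N].

```python
import numpy as np

def norm_autocorr(s, max_lag=20):
    """Normalized autocorrelation r(k); an impulse-like output
    (r(0) = 1, r(k) ~ 0 for k > 0) indicates no serial correlation."""
    s = np.asarray(s, dtype=float) - np.mean(s)
    denom = float(np.dot(s, s))
    return np.array([np.dot(s[:len(s) - k], s[k:]) / denom
                     for k in range(max_lag + 1)])

# Reusing x, y from the algorithm sketch above (unit moments assumed):
# q = y - (1.0 + 0.8 * (x - 1.0))
# print(norm_autocorr(q, 10)); print(norm_autocorr(y, 10))
```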
CONCLUSIONS

The technique described in the correspondence enables the generation of random number sequences with prescribed correlation. It has been tested and found satisfactory for three typical probability laws, viz., normal, uniform, and exponential, and hence appears applicable to any continuous distribution. The method is essentially an approximate one, especially in the case of nonnormal distributions. Even with this limitation, the results of the Monte Carlo simulation procedure employed here are applicable, e.g., in the type of problems like the one mentioned in the introduction. Moreover, in the absence of a better technique, the present one provides a unified and plausible treatment for problems involving jointly distributed nonnormal random variables.
ACKNOWLEDGMENT

The authors wish to thank the referees for their comments and suggestions.

APPENDIX I
DERIVATION OF THE MOMENTS OF THE CORRELATION COEFFICIENT

Since the objective of this correspondence is to generate a prespecified cross correlation between the random number sequences, evaluation of the moments of the correlation coefficient provides an estimate of the errors that are likely to occur during simulation. It is possible to generate an infinite number of [Y_N] sequences which will bear the same correlation with a given [X_N]. Therefore, expressions for the moments of the correlation coefficient can be derived by taking expectations with respect to Y. The manner in which every y_j is selected can be assumed to be random, provided the parent sequence [Z_N] does not exhibit any specific trend. This condition is easily satisfied in almost every random or pseudorandom sequence.

Consider a simulation experiment in which we try to generate a number of [Y_N] sequences bearing the same expected correlation ρ_d with [X_N]. Then each y_j will be distributed with the same probability law as that of Y, with mean and variance given by

E(y_j) = E*(Y | X = x_j),  var (y_j) = σ_q² = σ_Y²(1 − ρ_d²).

The variance of the correlation coefficient is given by

σ_ρ² = var [(1/(N σ_X σ_Y)) Σ_{j=1}^{N} (x_j − X̄)(y_j − Ȳ)]

     = (1/(N² σ_X² σ_Y²)) Σ_{j=1}^{N} (x_j − X̄)² var (y_j − Ȳ)

     = σ_Y²(1 − ρ_d²) N σ_X² / (N² σ_X² σ_Y²) = (1 − ρ_d²)/N.    (12)
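The simulation experiment described above can be mimicked numerically; this sketch (ours) reuses correlated_sequence from the algorithm section and M = 100 repetitions, the value reported with Table I, to compare the empirical mean and variance of the estimate against ρ_d and (12).

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, rho_d = 500, 100, 0.8
x = rng.exponential(1.0, N)                  # one fixed [X_N]
rhos = []
for _ in range(M):                           # M = 100 [Y_N] sequences
    z = rng.exponential(1.0, N)
    y = correlated_sequence(x, z, rho_d, 1.0, 1.0, 1.0, 1.0, 2.69, 0.975)
    rhos.append(np.corrcoef(x, y)[0, 1])
print(np.mean(rhos), np.var(rhos))           # cf. rho_d and eq. (12):
print((1.0 - rho_d**2) / N)                  # (1 - rho_d^2) / N
```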
APPENDIX II
PREDICTION OF MULTIDIMENSIONAL RANDOM VARIABLES

Let [X] and [Y] denote the random vectors

[X] = [X_1, X_2, ···, X_m]    (13)
[Y] = [Y_1, Y_2, ···, Y_n].

Let [X̄] and [Ȳ] denote their respective mean vectors

[X̄] = [X̄_1, X̄_2, ···, X̄_m]    (14)
[Ȳ] = [Ȳ_1, Ȳ_2, ···, Ȳ_n].

Then the best linear predictor of [Y] for given [X] is

[Y*] = E*[Y | X] = [Ȳ] + {[X] − [X̄]}[β]    (15)

where [β] is an m × n matrix of coefficients so chosen as to minimize the mean squared error E[{[Y] − [Y*]}²]. The necessary and sufficient condition that [Y*] be a unique function of the random vector [X] (i.e., the projection of [Y] onto [X]) satisfying the minimum mean squared error criterion is that the following equality of the corresponding product moments is satisfied:

cov {[Y*], [X]} = cov {[Y], [X]}.    (16)
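Solving the moment condition (16) for [β] gives the usual normal-equations form β = Σ_XX⁻¹ Σ_XY; this closed form and the sample-moment estimation below are our additions, not spelled out in the appendix.

```python
import numpy as np

def multivariate_predictor(X, Y):
    """[beta] of eq. (15) from sample moments: beta = Sxx^{-1} Sxy,
    the solution of the product-moment condition (16).
    X is N x m, Y is N x n (rows are observations)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxx = Xc.T @ Xc / len(X)          # m x m covariance of [X]
    Sxy = Xc.T @ Yc / len(X)          # m x n cross covariance
    beta = np.linalg.solve(Sxx, Sxy)
    # Predictor of eq. (15): Y* = Ybar + (x - Xbar) @ beta
    return X.mean(axis=0), Y.mean(axis=0), beta
```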
NOMENCLATURE FOR FLOWCHART OF FIG. 2

I, J, K, L, M, N    Running indices.
NSMP    Sample size.
X(·)    Random number sequence representing random variable X.
Y(·)    Random number sequence representing random variable Y.
Z(·)    Random number sequence used as a parent source to generate Y(·).
RH    Desired value of correlation coefficient, (ρ_d).
XM    Mean value of X(·), (X̄).
ZM    Mean value of Z(·) = mean value of Y(·), (Ȳ).
XS    Standard deviation of X(·), (σ_X).
ZS    Standard deviation of Z(·) = standard deviation of Y(·), (σ_Y).
EP    Best linear predictor of Y for given X, [E*(Y | X)].
DIF    Squared error, (q²).
SDIF    Some constant A, large enough that the inequality DIF > SDIF (i.e., DIF > A) is not satisfied at the first encounter.
ER1    Error bound for positive error, D_1² σ_Y²(1 − ρ_d²).
ER2    Error bound for negative error, D_2² σ_Y²(1 − ρ_d²).

If the distribution is symmetric about the mean, then ER1 = ER2.

REFERENCES

[1] RAND Corp., A Million Random Digits with 100,000 Normal Deviates. New York: Free Press, 1955.
[2] J. M. Hammersley and D. C. Handscomb, Monte Carlo Methods. London: Methuen, 1965.
[3] M. Nakamura, "Obtaining a normal sequence with prescribed covariances from an independent uniform distribution of random variables," IEEE Trans. Syst. Sci. Cybern., vol. SSC-4, no. 3, p. 191, July 1968.
[4] S. T. Li and J. L. Hammond, "Generation of pseudorandom numbers with specified univariate distributions and correlation coefficients," IEEE Trans. Syst., Man, Cybern., vol. SMC-5, no. 5, pp. 557-561, Sept. 1975.
[5] E. Parzen, Modern Probability Theory and Its Applications. New York: Wiley, 1960.
[6] M. H. De Groot and M. M. Rao, "Bayes estimation with convex loss," Ann. Math. Statist., vol. 34, pp. 839-846, Sept. 1963.
[7] S. Sherman, "Non-mean-square error criteria," IRE Trans. Inform. Theory, vol. IT-4, pp. 125-126, Sept. 1958.
[8] R. Deutsch, Estimation Theory. Englewood Cliffs, NJ: Prentice-Hall, 1965, pp. 9-18.
[9] E. Parzen, "A new approach to the synthesis of optimal smoothing and prediction systems," in Mathematical Optimization Techniques. Berkeley: Univ. of California Press, 1963, pp. 75-108.
[10] E. Parzen, "Statistical inference on time series by Hilbert space methods, I," Tech. Rep. 23, O.N.R. Contract 225(21), Statistics Dept., Stanford Univ., Jan. 2, 1959.
[11] M. Fisz, Theory of Probability and Mathematical Statistics. New York: Wiley, 1963.
[12] C. R. Rao, Linear Statistical Inference and Its Applications. New York: Wiley, 1965.
Book Reviews

Optimum Systems Control--A. P. Sage and C. C. White, III (Englewood Cliffs, NJ: Prentice-Hall, 1977, 2nd ed., 413 pp.). Reviewed by George M. Siouris, Aerospace Guidance and Metrology Center (AFLC), Newark Air Force Station, Newark, OH 43055.

This authoritative book is the second edition of Professor Sage's Optimum Systems Control, providing a concise and timely analysis of optimal control theory. Since the appearance of the first edition almost ten years ago, optimum systems control theory has developed into a full-fledged area of study. As everyone who has worked in this area knows, its applications span a wide spectrum of scientific disciplines. Seen in this context, it is indeed a pleasure to welcome the second edition, which is undoubtedly another contribution to optimal control theory. The present volume has been reduced from the first edition's 562 pp. to 413 pp. It has been completely revised, and the treatment is more geared toward the engineer. The revisions have improved and modernized the overall presentation through the addition of new material.

The concepts presented in the second edition are well motivated, introduced only after sufficient justification has been given, and provide a well-balanced blend of theory and practical applications. The text takes the student step-by-step through every aspect of modern control theory. At the end of each chapter there are many exercises of varying degrees of difficulty for the student to work out and test his mastery of the material presented. Moreover, the text is amply illustrated with completely worked-out examples, which gives it an added dimension as a text.

The present edition is divided into ten chapters. The first chapter defines the problems of deterministic optimum control, state estimation, stochastic control, parameter estimation, and adaptive control, which arise in optimum systems control. A short summary of the subsequent chapters is also given. Chapter 2, entitled "Calculus of Extrema and Single-Stage Decision Processes," provides an introduction to the calculus of extrema of two and more variables, discussing such familiar topics as unconstrained extrema, extrema of functions with equality constraints, and nonlinear programming. In Chapter 3, the authors treat the classical calculus of variations. Topics discussed include dynamic optimization without constraints, transversality conditions, sufficient conditions for weak extrema, unspecified terminal time problems, the Euler-Lagrange equations and transversality conditions, dynamic optimization with equality constraints and Lagrange multipliers, and dynamic optimization with inequality constraints. In the classical sense, the problem is to find the particular functions y(x) and z(x) which minimize (or maximize) the integral ∫_{x1}^{x2} f(x, y, y', z, z') dx subject to the constraint φ(x, y, y', z) = 0. The general form of the transversality condition then takes the form [(F − y'F_{y'}) dx + Σ_j F_{y_j'} dy_j] = 0, where F = f + Σ_i λ_i φ_i and λ_i ≠ 0 (i = 1, ···, m). With regard to modern optimal control theory, one is interested in minimizing the cost function J(x) = ∫_{t0}^{tf} Φ[x(t), ẋ(t), t] dt on the interval [t0, tf]. Chapter 4 is a natural extension of Chapter 3, where more general solutions are obtained. In particular, this chapter covers the Pontryagin maximum principle, the Weierstrass-Erdmann conditions, the Bolza problem with and without inequality constraints, and the Hamilton-Jacobi equations and continuous dynamic programming. All the topics addressed are handled very well, and the authors are on solid ground. Chapter 5 deals with optimum systems control examples. In this chapter an attempt has been made to illustrate those optimal control problems for which closed-loop solutions exist. The linear regulator, the linear servomechanism, the bang-bang control and minimum time problems …