IDENTIFICATION OF
NONLINEAR
PHYSIOLOGICAL SYSTEMS
IEEE Press Series in Biomedical Engineering
The focus of our series is to introduce current and emerging technologies to biomedical and electrical engineer-
ing practitioners, researchers, and students. This series seeks to foster interdisciplinary biomedical engineering
education to satisfy the needs of the industrial and academic areas. This requires an innovative approach that
overcomes the difficulties associated with the traditional textbook and edited collections.
Advisory Board
Thomas Budinger Simon Haykin Richard Robb
Ingrid Daubechies Murat Kunt Richard Satava
Andrew Daubenspeck Paul Lauterbur Malvin Teich
Murray Eden Larry McIntire Herbert Voigt
James Greenleaf Robert Plonsey Lotfi Zadeh
Editorial Board
Eric W. Abel Gabor Herman Kris Ropella
Dan Adam Helene Hoffman Joseph Rosen
Peter Adlassing Donna Hudson Christian Roux
Berj Bardakjian Yasemin Kahya Janet Rutledge
Erol Basar Michael Khoo Wim L. C. Rutten
Katarzyna Blinowska Yongmin Kim Alan Sahakian
Bernadette Bouchon-Meunier Andrew Laine Paul S. Schenker
Tom Brotherton Rosa Lancini G. W. Schmid-Schönbein
Eugene Bruce Swamy Laxminarayan Ernest Stokely
Jean-Louis Coatrieux Richard Leahy Ahmed Tewfik
Sergio Cerutti Zhi-Pei Liang Nitish Thakor
Maurice Cohen Jennifer Linderman Michael Unser
John Collier Richard Magin Eugene Veklerov
Steve Cowin Jaakko Malmivuo Al Wald
Jerry Daniels Jorge Monzon Bruce Wheeler
Jaques Duchene Michael Neuman Mark Wiederhold
Walter Greenleaf Banu Onaral William Williams
Daniel Hammer Keith Paulsen Andy Yagle
Dennis Healy Peter Richardson Yuan-Ting Zhang
A list of books in the IEEE Press Series in Biomedical Engineering can be found
on page 262.
IDENTIFICATION OF
NONLINEAR
PHYSIOLOGICAL SYSTEMS
DAVID T. WESTWICK
ROBERT E. KEARNEY
Technical Reviewers
Metin Akay
Robert F. Kirsch
John A. Daubenspeck
Copyright © 2003 by the Institute of Electrical and Electronics Engineers. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means,
electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the
1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through
payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923,
978-750-8400, fax 978-750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be
addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011,
fax (201) 748-6008, e-mail: [email protected].
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book,
they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and
specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created
or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable
for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable
for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or
other damages. For general information on our other products and services please contact our Customer Care Department
within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993 or fax 317-572-4002. Wiley also publishes its books in a
variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.
QP33.6.M36W475 2003
612 .01 5118—dc21
2003043255
10 9 8 7 6 5 4 3 2 1
CONTENTS
Preface xi
1 Introduction 1
1.1 Signals / 1
1.1.1 Domain and Range / 2
1.1.2 Deterministic and Stochastic Signals / 2
1.1.3 Stationary and Ergodic Signals / 3
1.2 Systems and Models / 3
1.2.1 Model Structure and Parameters / 4
1.2.2 Static and Dynamic Systems / 5
1.2.3 Linear and Nonlinear Systems / 6
1.2.4 Time-Invariant and Time-Varying Systems / 7
1.2.5 Deterministic and Stochastic Systems / 7
1.3 System Modeling / 8
1.4 System Identification / 8
1.4.1 Types of System Identification Problems / 9
1.4.2 Applications of System Identification / 11
1.5 How Common are Nonlinear Systems? / 11
2 Background 13
References 251
Index 259
PREFACE

Since it first appeared in 1978, Analysis of Physiological Systems: The White
Noise Approach by P. Z. Marmarelis and V. Z. Marmarelis has been the standard ref-
erence for the field of nonlinear system identification, especially as applied in biomed-
ical engineering and physiology. Despite being long out of print, Marmarelis and Mar-
marelis is still, in many cases, the primary reference. Over the years, dramatic advances
have been made in the field, many of which became practical only with the advent of
widespread computing power. Many of these newer developments have been described
in the three volumes of the series Advanced Methods in Physiological Modeling, edited
by V. Z. Marmarelis. While these volumes have been an invaluable resource to many
researchers, helping them to stay abreast of recent developments, they are all collections
of research articles. As a resource for someone starting out in the field, they are some-
what lacking. It is difficult for a newcomer to the field to see the relationships between
myriad contributions. Choosing which approach is best for a given application can be an
arduous task, at best.
This textbook developed out of a review article (Westwick and Kearney, 1998) on the
same subject. The goal of the review article was to bring the various analyses that have
been developed by several groups of researchers into a common notation and framework,
and thus to elucidate the relationships between them. The aim of this book was to go one
step farther and to provide this common framework along with the background necessary
to bring the next generation of systems physiologists into the fold.
In this book, we have attempted to provide the student with an overview of many of
the techniques currently in use, and some of the earlier methods as well. Everything is
presented in a common notation and from a consistent theoretical framework. We hope
that the relationships between the methods and their relative strengths and weaknesses
will become apparent to the reader. The reader should be well-equipped to make an
informed decision as to which techniques to try, when faced with an identification or
modeling problem.
We have assumed that readers of this book have a background in linear signals and
systems equivalent to that given by a junior year signals and systems course. Back-
ground material beyond that level is summarized, with references given to more detailed,
pedagogical treatments.
Each chapter has several theoretical problems, which can be solved with pencil and
paper. In addition, most of the chapters conclude with some computer exercises. These
are intended to give the reader practical experience with the tools described in the text.
These computer exercises make use of MATLAB∗ and the nonlinear system identifica-
tion (NLID) toolbox (Kearney and Westwick, 2003). More information regarding the NLID
toolbox can be found at www.bmed.mcgill.ca. In addition to implementing all of the
system identification tools as MATLAB m-files, the toolbox also contains the data and
model structures used to generate the examples that run throughout the text.
Although our primary goal is to educate informed users of these techniques, we have
included several theoretical sections dealing with issues such as the generality of some
model structures, convergence of series-based models, and so on. These sections are
marked with a dagger, †, and they can be skipped by readers interested primarily in
practical application of these methods, with little loss in continuity.
The dedication in Marmarelis and Marmarelis reads “To an ambitious breed: Systems
Physiologists.” We feel that the sentiment reflected in those words is as true today as it
was a quarter century ago. The computers are (much) faster, and they will undoubtedly
be faster still in a few years. As a result, the problems that we routinely deal with
today would have been inconceivable when M & M was first published. However, with
increased computational abilities come more challenging problems. No doubt, this trend
will continue. We hope that it is an interesting ride.
DAVID T. WESTWICK
ROBERT E. KEARNEY
INTRODUCTION
The term “Biomedical Engineering” can refer to any endeavor in which techniques from
engineering disciplines are used to solve problems in the life sciences. One such under-
taking is the construction of mathematical models of physiological systems and their
subsequent analysis. Ideally the insights gained from analyzing these models will lead to
a better understanding of the physiological systems they represent.
System identification is a discipline that originated in control engineering; it deals with
the construction of mathematical models of dynamic systems using measurements of their
inputs and outputs. In control engineering, system identification is used to build a model
of the process to be controlled; the process model is then used to construct a controller.
In biomedical engineering, the goal is more often to construct a model that is detailed
enough to provide insight into how the system operates. This text deals with system
identification methods that are commonly used in biomedical engineering. Since many
physiological systems are highly nonlinear, the text will focus on methods for nonlinear
systems and their application to physiological systems. This chapter will introduce the
concepts of signals, systems, system modeling, and identification. It also provides a brief
overview of the system identification problem and introduces some of the notation and
terminology to be used in the book. The reader should be acquainted with most of the
material covered in this chapter. If not, pedagogical treatments can be found in most
undergraduate level signals and systems texts, such as that by Kamen (1990).
1.1 SIGNALS
The concept of a signal seems intuitively clear. Examples would include speech, a televi-
sion picture, an electrocardiogram, the price of the NASDAQ index, and so on. However,
formulating a concise, mathematical description of what constitutes a signal is somewhat
involved.
s : T → Y
where t ∈ T is a member of the domain set, usually time. In continuous time, T is the
real line; in discrete time, it is the set of integers. In either case, the value of the signal
is in the range set, Y . The range of the signal is given by applying the mapping to the
domain set, and is therefore s(T ).
The above definition really describes a function. A key point regarding the domain
set of a signal is the notion that it is ordered and thus has a direction. Thus, if x1 and x2
are members of the domain set, there is some way of stating x1 > x2 , or the reverse. If
time is the domain, t1 > t2 is usually taken to mean that t1 is later than t2 .
The analysis in this book will focus on signals with one-dimensional domains—usually
time. However, most of the ideas can be extended to signals with domains having
two dimensions (e.g., X-ray images), three dimensions (e.g., MRI images), or more
(e.g., time-varying EEG signals throughout the brain).
The value of a deterministic signal, such as the sinusoid yd(t) = sin(2πf t + φ), can be predicted exactly, provided that its frequency f and phase φ are known. In
contrast, if yr (k) is generated by repeatedly tossing a fair, six-sided die, there is no way
to predict the kth value of the output, even if all other output values are known. These
represent two extreme cases: yd (t) is purely deterministic while yr (k) is completely
random, or stochastic.
The die throwing example is an experiment where each repetition of the experiment
produces a single random variable: the value of the die throw. On the other hand, for
a stochastic process the result of each experiment will be a signal whose value at each
time is a random variable. Just as a single throw of a die produces a single realization of
a random variable, a random signal is a single realization of a stochastic process. Each
experiment produces a different time signal or realization of the process. Conceptually,
the stochastic process is the ensemble of all possible realizations.
In reality, most signals fall between these two extremes. Often, a signal may be
deterministic but there may not be enough information to predict it. In these cases, it is usually treated as if it were stochastic.
Similar integrals are used to compute higher-order moments. Conceptually, these integrals
can be viewed as averages taken over an infinite ensemble of all possible realizations of
the random variable, x.
The value of a random signal at a point in time, considered as a random variable,
will have a PDF, f (x, t), that depends on the time, t. Thus, any statistic obtained by
integrating over the PDF will be a function of time. Alternately, the integrals used to
compute the statistics can be viewed as averages taken over an infinite ensemble of
realizations of the stochastic process, at a particular point in time. If the PDF, and hence
statistics, of a stochastic process is independent of time, then the process is said to be
stationary.
For many practical applications, only a single realization of a stochastic process will
be available; therefore, averaging must be done over time rather than over an ensemble
of realizations. Thus, the mean of a stochastic process would be estimated as
$$\hat{\mu}_x = \frac{1}{T}\int_0^T x(t)\,dt$$
Many stochastic processes are ergodic, meaning that the ensemble and time averages are
equal.
1.2 SYSTEMS AND MODELS

Figure 1.1 shows a block diagram of a system in which the “black box,” N, transforms
the input signal, u(t), into the output y(t). This will be written as

y(t) = N(u(t))                                                    (1.1)

to indicate that when the input u(t) is applied to the system N, the output y(t) results. Note
that the domain of the signals need not be time, as shown here. For example, if the system
operates on images, the input and output domains could be two- or three-dimensional
spatial coordinates.
This book will focus mainly on single-input single-output (SISO) systems whose
domain is time. Thus u(t) and y(t) will be single-valued functions of t. For multiple-input
Figure 1.1 Block diagram of a “black box” system, which transforms the input(s) u(t), into the
output(s), y(t). The mathematical description of the transformation is represented by the operator N.
multiple-output (MIMO) systems, Figure 1.1, equation (1.1), and most of the develop-
ment to follow will not change; the input and output simply become vector-valued func-
tions of their domains. For example, a multidimensional input signal may be written as
a time-dependent vector,
u(t) = [u1(t)  u2(t)  · · ·  un(t)]                               (1.2)
where the caret, or “hat,” indicates that ŷ(t) is an estimate of the system output, y(t).
In general, a model will depend on a set of parameters contained in the
parameter vector θ. For example, if the model, M(θ), was a third-degree polynomial,

ŷ(θ, t) = c(0) + c(1)u(t) + c(2)u²(t) + c(3)u³(t)                 (1.4)
Note that in equation (1.4) the dependence of the output, ŷ(θ , t), on the parameter vector,
θ , is shown explicitly.
Models are often classified as being either parametric or nonparametric. A parametric
model generally has relatively few parameters that often have direct physical interpreta-
tions. The polynomial in equation (1.4) is an example of a parametric model. The model
structure comprises the constant, linear, quadratic and third-degree terms; the parameters
are the coefficients associated with each term. Thus each parameter is related to a par-
ticular behavior of the system; for example, the parameter c(2) defines how the output
varies with the square of the input.
In contrast, a nonparametric model is described by a curve or surface defined by its
values at a collection of points in its domain, as illustrated in Figure 1.2. Thus, a set
of samples of the curve defined by equation (1.4) would be a nonparametric model of
the same system. Here, the model structure would contain the domain values, and the
“parameters” would be the corresponding range values. Thus, a nonparametric model
usually has a large number of parameters that do not in themselves have any direct
physical interpretation.
Figure 1.2 A memoryless nonlinear system. A parametric model of this system is y(t) =
−3 − u(t) + u2 (t) − 0.5u3 (t). A nonparametric model of the same system could include a list
of some of the domain and range values, say those indicated by the dots. The entire curve is
also a nonparametric model of the system. While the parametric model is more compact, the
nonparametric model is more flexible.
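To make the distinction concrete, the following sketch (plain MATLAB; the grid spacing, variable names, and test input are our own choices, not part of the text) evaluates the parametric model of Figure 1.2 and a nonparametric, lookup-table version of the same curve:

% Parametric model of the static nonlinearity in Figure 1.2:
% four coefficients describe the system completely.
c = [-3, -1, 1, -0.5];                       % constant, linear, quadratic, cubic terms
f = @(u) c(1) + c(2)*u + c(3)*u.^2 + c(4)*u.^3;

% Nonparametric model: samples of the same curve on a grid of input values.
u_grid = linspace(-2, 2, 21);                % domain values (part of the model structure)
y_grid = f(u_grid);                          % range values (the "parameters")

% Predict the output for a new input with each model.
u_new  = 0.73;
y_par  = f(u_new);                           % evaluate the polynomial directly
y_npar = interp1(u_grid, y_grid, u_new);     % interpolate the lookup table
fprintf('parametric: %.4f   nonparametric: %.4f\n', y_par, y_npar);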
For example, the ideal delay operator,

y(t) = u(t − τ)

depends only on the value of the input at a single instant in the past, whereas the peak-hold operator,

y(t) = max_{τ ≤ t} u(τ)

retains the largest value of the past input and consequently depends on the entire history
of the input.
Dynamic systems can be further classified according to whether they respond to the
past or future values of the input, or both. The delay and peak-hold operators are both
examples of causal systems, systems whose outputs depend on previous, but not future,
values of their inputs. Systems whose outputs depend only on future values of their
inputs are said to be anti-causal or anticipative. If the output depends on both the past
and future inputs, the system is said to be noncausal or mixed causal anti-causal.
Although physical systems are causal, there are a number of situations where noncausal
system descriptions are needed. For example, behavioral systems may display a predictive
ability if the input signal is deterministic or a preview is available. For example, the
dynamics of a tracking experiment may show a noncausal component if the subject is
permitted to see future values of the input as well as its current value.
Sometimes, feedback can produce behavior that appears to be noncausal. Consider the
system in Figure 1.3. Suppose that the experimenter can measure the signals labeled u(t)
and y(t), but not w1 (t) and w2 (t). Let both N1 and N2 be causal systems that include
delays. The effect of w1 (t) will be measured first in the “input,” u(t), and then later in the
Figure 1.3 A feedback loop with two inputs. Depending on the relative power of the inputs
w1 (t) and w2 (t), the system N1 , or rather the relationship between u(t) and y(t), may appear to
be either causal, anti-causal, or noncausal.
“output,” y(t). However, the effect of the other input, w2 (t), will be noted in y(t) first,
followed by u(t). Thus, the delays in the feedback loop create what appears to be non-
causal system behavior. Of course the response is not really noncausal, it merely appears
so because neither u(t) nor y(t) was directly controlled. Thus, inadequate experimental
design can lead to the appearance of noncausal relationships between signals.
In addition, as will be seen below, there are cases where it is advantageous to reverse
the roles of the input and output. In the resulting analysis, a noncausal system description
must be used to describe the inverse system.
Let c be a constant scalar. Then if the response to the input c · u(t) satisfies

N(c · u(t)) = c · N(u(t)) = c · y(t)

for any constant c, the system is said to obey the principle of proportionality or to have
the scaling property.
Consider two pairs of inputs and their corresponding outputs,

y1(t) = N(u1(t)),        y2(t) = N(u2(t))

If the response to the sum of the two inputs is the sum of the individual responses,

N(u1(t) + u2(t)) = y1(t) + y2(t)

then the operator N is said to obey the superposition property. Systems that obey both
superposition and scaling are said to be linear.
Nonlinear systems do not obey superposition and scaling. In many cases, a system
will obey the superposition and scaling properties approximately, provided that the inputs
lie within a restricted class. In such cases, the system is said to be operating within its
“linear range.”
Systems for which equation (1.7) does not hold are said to be time-varying.
For example, the measured output of a deterministic system may contain additive noise,

z(t) = y(t) + v(t) = N(u(t)) + v(t)                               (1.8)

where v(t) is independent of the input, u(t). Although the measured output, z(t), has both
deterministic and random components, the system (1.8) is still referred to as deterministic,
since the “true” output, y(t), is a deterministic function of the input.
Alternatively, the output may depend on an unmeasurable process disturbance, w(t),

y(t) = N(u(t), w(t))

where w(t) is a white, Gaussian signal that cannot be measured. In this case, the system is
said to be stochastic, since there is no “noise free” deterministic output. The process noise
term, w(t), can be thought of as an additional input driving the dynamics of the system.
Measurement noise, in contrast, only appears additively in the final output. Clearly, it
is possible for a system to have both a process disturbance and measurement noise, as
illustrated in Figure 1.4, leading to the relation

z(t) = N(u(t), w(t)) + v(t)
Figure 1.4 Block diagram of a system including a process disturbance, w(t), and measurement
noise, v(t).
1.3 SYSTEM MODELING
In many cases, a mathematical model of a system can be constructed from “first princi-
ples.” Consider, for example, the problem of modeling a spring. As a first approxima-
tion, it might be assumed to obey Hooke’s law and have no mass so that it could be
described by
y = −ku (1.11)
where the output, y, is the force produced, the input, u, is the displacement, and k
is the spring constant. If the spring constant were known, then equation (1.11) would
constitute a mathematical model of the system. If the spring constant, k, was unknown, it
could be estimated experimentally. Whether or not the assumptions hold, equation (1.11)
is a model of the system (but not necessarily a good model). If it yields satisfactory
predictions of the system’s behavior, then, and only then, can it be considered to be
a good model. If it does not predict well, then the model must be refined, perhaps by
considering the mass of the spring and using Newton’s second law to give
$$y(t) = -k\,u(t) + m\,\frac{d^2 u(t)}{dt^2} \qquad (1.12)$$
Other possibilities abound; the spring might be damped, behave nonlinearly, or have
significant friction. The art of system modeling lies in determining which terms are likely
to be significant, and in limiting the model to relevant terms only. Thus, even in this
simple case, constructing a mathematical model based on “first principles” can become
unwieldy. For complex systems, the approach can become totally unmanageable unless
there is a good understanding of which effects should and should not be incorporated
into the model.
The scheme outlined in the previous paragraph is impractical for a number of reasons.
Most importantly, numerical differentiation amplifies high-frequency noise. Thus, the
numerically computed derivatives of the input and output, particularly the high-order
derivatives, will be dominated by high-frequency noise that will distort the parameter
estimates. Thus, a more practical approach to estimating the system dynamics from
input–output measurements is required.
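The effect is easy to demonstrate numerically. The sketch below (plain MATLAB; the test signal, noise level, and sampling rate are arbitrary choices for illustration) differentiates a slightly noisy sine wave twice using finite differences and compares the result with the noise-free case:

% Demonstration: finite differencing amplifies high-frequency noise.
dt = 0.001;                       % sampling interval (s)
t  = (0:dt:1)';                   % time vector
y  = sin(2*pi*2*t);               % noise-free signal
z  = y + 0.01*randn(size(t));     % measured signal with 1% noise

dz  = diff(z)/dt;                 % approximate first derivative of the measurement
d2z = diff(dz)/dt;                % approximate second derivative of the measurement
d2y = diff(diff(y))/dt^2;         % noise-free second derivative, for comparison

fprintf('noise variance in z:          %g\n', var(z - y));
fprintf('error variance in d2z/dt2:    %g\n', var(d2z - d2y));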
First, note that a system need not be represented as a differential equation. There
are many possible parametric and nonparametric representations or model structures for
both linear and nonlinear systems. Parameters for many of these model structures can be
estimated reliably from measured data. In general, the model structure will be represented
by an operator, M, having some general mathematical form capable of representing a
wide variety of systems. The model itself will depend on a list of parameters, the vector θ .
From this viewpoint, the system output may be written as

y(t) = M(θ, u(t))

where it is assumed that the model structure, M, and parameter vector, θ , exactly rep-
resent the physical system. Thus, the physical system, N, can be replaced with an exact
model, M(θ).
The objective of system identification is to find a suitable model structure, M, and
corresponding parameter vector, θ , given measurements of the input and output. Then,
the identified model will have a parameter vector, θ̂ , and generate the output

ŷ(t, θ̂) = M(θ̂, u(t))                                             (1.15)

where ŷ(t) is an estimate of the system output, y(t). Similarly, M(θ̂ , u(t)) represents the
model structure chosen together with a vector of estimated parameters. The system iden-
tification problem is then to choose the model structure, M, and find the corresponding
parameter vector, θ̂ , that produces the model output, given by equation (1.15), that best
predicts the measured system output.
Often, instead of having the system output, y(t), only a noise corrupted measurement
will be available. Usually, this measurement noise is assumed to be additive, random,
and statistically independent of the system’s inputs and outputs. The goal, then, is to find
the model, M(θ̂, u(t)), whose output, ŷ(t, θ̂ ), “best approximates” the measured output,
z(t). The relationship between the system, model, and the various signals, is depicted in
Figure 1.5.
Figure 1.5 The deterministic system identification problem in the presence of measurement noise.
Figure 1.6 A more realistic view of the system being identified, including the actuator, which
transforms the ideal input, µ, into the applied input, u(t), which may contain the effects of the
process noise term, n1 (t). Furthermore, the measured input, û(t), may contain noise, n2 (t). As
before, the plant may be affected by process noise, w(t), and the output may contain additive
noise, v(t).
noise in the output signals. However, to deal with noise at the input, it is necessary to
adopt a “total least-squares” or “errors in the variables” framework, both of which are
much more computationally demanding. To avoid this added complexity, identification
experiments are usually designed to minimize the noise in the input measurements. In
some cases, it may be necessary to adopt a noncausal system description so that the
measurement with the least noise may be treated as the input. Throughout this book it
will be assumed that n2 (t) is negligible, unless otherwise specified.
The system may also include an unmeasurable process noise input, w(t), and the
measured output may also contain additive noise, v(t). Given this framework, there are
three broad categories of system identification problem:
• Deterministic System Identification Problem. Find the relationship between u(t)
and y(t), assuming that the process noise, w(t), is zero. The measured output,
z(t), may contain additive noise, v(t). The identification of deterministic systems
is generally pursued with the objective of gaining insight into the system function
and is the problem of primary interest in this text.
• Stochastic System Identification Problem. Find the relationship between w(t) and
y(t), given only the system output, z(t), and assumptions regarding the statistics
of w(t). Usually, the exogenous input, u(t), is assumed to be zero or constant.
This formulation is used where the inputs are not available to the experimenter, or
where it is not evident which signals are inputs and which are outputs. The myriad
approaches to this problem have been reviewed by Brillinger (1975) and Caines
(1988).
• Complete System Identification Problem. Given both the input and the output, esti-
mate both the stochastic and deterministic components of the model. This problem
formulation is used when accurate output predictions are required, for example in
model-based control systems (Ljung, 1999; Söderström and Stoica, 1989).
1.5 HOW COMMON ARE NONLINEAR SYSTEMS?

Many physiological systems are highly nonlinear. Consider, for example, a single joint
and its associated musculature. First, the neurons that transmit signals to and from the
muscles fire with an “all or nothing” response. The geometry of the tendon insertions
is such that lever arms change with joint angle. The muscle fibers themselves have
nonlinear force–length and force–velocity properties, as well as being able to exert
force in only one direction. Nevertheless, this complex system is often represented using a
simple linear model.
BACKGROUND
This chapter will review a number of important mathematical results and establish the
notation to be used throughout the book. Material is drawn from diverse areas, some of
which are not well known and thus extensive references are provided with each section.
Many of the techniques presented in this text use numerical methods derived from linear
algebra. This section presents a brief overview of some important results to be used in
the chapters that follow. For a more thorough treatment, the reader should consult the
canonical reference by Golub and Van Loan (1989).
Vectors will be represented using lowercase, boldface letters. The same letter, in
lightface type, will be used for the elements of the vector, subscripted with its position
in the vector. Thus, an M element vector will be written as follows:
θ = [θ1  θ2  · · ·  θM]^T
X = QR
where R is upper triangular, so that r_{i,j} = 0 for i > j, and Q is an orthogonal matrix,
Q^T Q = I. Note that the columns of Q are said to be orthonormal (i.e., orthogonal and
normalized); however, the matrix itself is said to be orthogonal.
The singular value decomposition (SVD) takes a matrix, X, of any size and shape and
replaces it with the product,
X = USV^T
2.2 GAUSSIAN RANDOM VARIABLES

Gaussian random variables, and signals derived from them, will play a central role in
much of the development to follow. A Gaussian random variable has the probability
density (Bendat and Piersol, 1986; Papoulis, 1984)
$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) \qquad (2.1)$$
that is completely defined by two parameters: the mean, µ, and variance, σ 2 . By con-
vention, the mean, variance, and other statistical moments, are denoted by symbols sub-
scripted by the signal they describe. Thus,
µx = E[x],        σx² = E[(x − µx)²]
Figure 2.1 shows a single realization of a Gaussian signal, the theoretical probability
density function (PDF) of the process that generated the signal, and an estimate of the
PDF derived from the single realization.
Figure 2.1 Gaussian random variable, x, with mean µx = 2 and standard deviation σx = 3.
(A) One realization of the random process: x(t). (B) The ideal probability distribution of the
sequence x. (C) An estimate of the PDF obtained from the realization shown in A.
1. Form every distinct pair of random variables and compute the expected value of each pair. For example, when n is 4, the expected values would be E[x1x2], E[x1x3], E[x1x4], E[x2x3], E[x2x4], and E[x3x4].
2. Form all possible distinct combinations, each involving n/2 of these pairs, such that each variable is included exactly once in each combination. For n = 4, there are three such combinations.
3. For each combination, compute the product of the expected values of each pair, determined from step 1. Sum the results of all combinations to get the expected value of the overall product. Thus, for n = 4, the combinations give

E[x1x2x3x4] = E[x1x2]E[x3x4] + E[x1x3]E[x2x4] + E[x1x4]E[x2x3]
Similarly, when n is 6, there are 15 such combinations of pairs.
For the special case where all signals are identically distributed, this yields the relation
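The factoring rule for n = 4 can be checked by simulation. The sketch below (plain MATLAB; the mixing matrix is an arbitrary choice used only to generate correlated samples) compares the sample average of x1 x2 x3 x4 with the sum of the three pairwise products:

% Monte Carlo check of the Gaussian product-moment factoring for n = 4.
N = 1e6;
A = [1 0 0 0; 0.5 1 0 0; 0.3 0.2 1 0; 0.2 0.4 0.1 1];  % arbitrary mixing matrix
C = A*A';                              % resulting covariance: E[x_i x_j] = C(i,j)
X = randn(N,4)*A';                     % zero-mean Gaussian samples with cov(X) = C

lhs = mean(X(:,1).*X(:,2).*X(:,3).*X(:,4));             % E[x1 x2 x3 x4], estimated
rhs = C(1,2)*C(3,4) + C(1,3)*C(2,4) + C(1,4)*C(2,3);    % sum over the three pairings
fprintf('sample moment: %.4f   factored form: %.4f\n', lhs, rhs);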
2.3 CORRELATION FUNCTIONS

Correlation functions describe the sequential structures of signals. In signal analysis, they
can be used to detect repeated patterns within a signal. In systems analysis, they are used
to analyze relationships between signals, often a system’s input and output.
$$x_0(t) = \frac{x(t) - \mu_x}{\sigma_x} \qquad (2.4)$$
Figure 2.2 illustrates time records of several typical signals together with their autocorre-
lations. Evidently, the autocorrelations reveal structures not apparent in the time records
of the signals.
Figure 2.2 Time signals (left column) and their autocorrelation coefficient functions (right col-
umn). (A, B) Low-pass filtered white-noise signal. (C, D) Low-pass filtered white noise with a
lower cutoff. The resulting signal is smoother and the autocorrelation peak is wider. (E, F) Sine
wave. The autocorrelation function is also a sinusoid. (G, H) Sine wave buried in white noise. The
sine wave is more visible in the autocorrelation than in the time record.
Thus, the autocorrelation, φxx , at any lag, τ , depends on both the mean, µx , and the
variance, σx2 , of the signal.
In many applications, particularly where systems have been linearized about an oper-
ating point, signals will not have zero means. It is common practice in these cases
to remove the mean before calculating the correlation, resulting in the autocovariance
function:

Cxx(τ) = E[(x(t − τ) − µx)(x(t) − µx)]

Its value at zero lag, Cxx(0) = σx², is the signal's variance. Dividing the autocovariance by the variance gives the
autocorrelation coefficient function,
$$r_{xx}(\tau) = \frac{C_{xx}(\tau)}{C_{xx}(0)} = E[x_0(t-\tau)\,x_0(t)] \qquad (2.9)$$
Figure 2.3 Examples of autocorrelation functions. The first column shows the four time signals,
the second column shows their autocorrelation functions, the third column shows the corresponding
autocovariance functions, and the fourth column the equivalent autocorrelation coefficient functions.
First Row: Low-pass filtered sample of white, Gaussian noise with zero mean and unit variance.
Second Row: The signal from the first row with an offset of 2 added. Note that the mean is clearly
visible in the autocorrelation function but not in the auto-covariance or autocorrelation coefficient
function. Third Row: The signal from the top row multiplied by 10. Bottom Row: The signal from
the top row multiplied by 100. Scaling the signal changes the values of the autocorrelation and
autocovariance function but not of the autocorrelation coefficient function.
As before, removing the means of both signals prior to the computation gives the
cross-covariance function,

Cxy(τ) = E[(x(t − τ) − µx)(y(t) − µy)]

where µx is the mean of x(t), and µy is the mean of y(t). Notice that if either µx = 0
or µy = 0, the cross-correlation and the cross-covariance functions will be identical.
The cross-correlation coefficient function of two signals, x(t) and y(t), is defined by
$$r_{xy}(\tau) = \frac{C_{xy}(\tau)}{\sqrt{C_{xx}(0)\,C_{yy}(0)}} \qquad (2.13)$$
The value of the cross-correlation coefficient function at zero lag, rxy (0), will be unity
only if the two signals are identical to within a scale factor (i.e., x(t) = ky(t)). In this
case, the cross-correlation coefficient function will be the same as the autocorrelation
coefficient function of either signal.
As with the autocorrelation coefficient function, the values of the cross-correlation
coefficient function can be interpreted as correlations in the statistical sense, ranging
from complete positive correlation (1) through 0 to complete negative correlation (−1).
Furthermore, the same potential for confusion exists; the cross-covariance and cross-
correlation coefficient functions are often referred to simply as cross-correlations.
The various cross-correlation formulations are neither even nor odd, but they do satisfy the
interesting relations:

φxy(τ) = φyx(−τ)                                                  (2.14)

and

φ²xy(τ) ≤ φxx(0) φyy(0)                                           (2.15)
Suppose, for example, that

y(t) = α x(t − τ0) + v(t)

that is, y(t) is a delayed, scaled version of x(t) added to an uncorrelated noise signal,
v(t). Then,

φxy(τ) = α φxx(τ − τ0)
That is, the cross-correlation function is simply the autocorrelation of the input signal,
x(t), displaced by the delay τ0 , and multiplied by the gain α. As a result, the lag at which
the cross-correlation function reaches its maximum provides an estimate of the delay.
Suppose that only noisy measurements of the two signals are available,

w(t) = x(t) + n(t),        z(t) = y(t) + v(t)

where n(t) and v(t) are independent of x(t) and y(t) and of each other.
First, consider the autocorrelation of one of the measured signals,

φzz(τ) = E[(y(t − τ) + v(t − τ))(y(t) + v(t))]
       = φyy(τ) + φyv(τ) + φvy(τ) + φvv(τ)
The terms φyv ≡ φvy ≡ 0 will disappear because y and v are independent. However, the
remaining term, φvv , is the autocorrelation of the noise sequence and will not be zero.
As a result, additive noise will bias autocorrelation estimates:

φzz(τ) = φyy(τ) + φvv(τ)

Next, consider the cross-correlation between the two measured signals,

φwz(τ) = E[w(t − τ)z(t)]
       = E[(x(t − τ) + n(t − τ))(y(t) + v(t))]
       = φxy(τ) + φxv(τ) + φny(τ) + φnv(τ)

and φxv ≡ φny ≡ φnv ≡ 0, since the noise signals are independent of the signals and of each other by
assumption. Thus,

φwz(τ) = φxy(τ)
In any practical application, x(t) and y(t) will be finite-length, discrete-time signals.
Thus, it can be assumed that x(t) and y(t) have been sampled every Δt units from
t = 0, Δt, . . . , (N − 1)Δt, giving the samples x(i) and y(i) for i = 1, 2, . . . , N.
By using rectangular integration, the cross-correlation function (2.16) can be approxi-
mated as
$$\hat{\phi}_{xy}(\tau) = \frac{1}{N-\tau}\sum_{i=\tau}^{N} x(i-\tau)\,y(i) \qquad (2.17)$$
This is an unbiased estimator, but its variance increases with lag τ . To avoid this, it is
common to use the estimator:
$$\hat{\phi}_{xy}(\tau) = \frac{1}{N}\sum_{i=\tau}^{N} x(i-\tau)\,y(i) \qquad (2.18)$$
which is biased, because it underestimates correlation function values at long lags, but
its variance does not increase with lag τ .
Similar estimators of the auto- and cross-covariance and correlation coefficient func-
tions may be constructed. Note that if N is large with respect to the maximum lag, the
biased and unbiased estimates will be very similar.
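Both estimators can be written as a direct transcription of equations (2.17) and (2.18). The sketch below (plain MATLAB; the function name and interface are our own, and this is not the NLID toolbox implementation) returns the biased and unbiased estimates for lags 0 through maxlag:

function [phi_b, phi_u] = xcorr_est(x, y, maxlag)
% Biased (2.18) and unbiased (2.17) estimates of phi_xy(tau) for tau = 0..maxlag.
% x, y : column vectors of equal length N.
N = length(x);
phi_b = zeros(maxlag+1, 1);                  % biased: divide by N
phi_u = zeros(maxlag+1, 1);                  % unbiased: divide by N - tau
for tau = 0:maxlag
    s = sum(x(1:N-tau) .* y(1+tau:N));       % sum of x(i-tau)*y(i)
    phi_b(tau+1) = s / N;
    phi_u(tau+1) = s / (N - tau);
end
end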
∗ Strictly speaking, a deterministic signal, such as a sinusoid, is nonstationary and is certainly not ergodic.
Nevertheless, computations based on time averages are routinely employed with both stochastic and determin-
istic signals. Ljung (1999) defines a class of quasi-stationary signals, together with an alternate expected value
operator, to get around this technicality.
Taking the inverse Fourier transform of (2.22) yields (2.18), the biased estimate of the
cross-correlation. In practice, correlation functions are often computed this way, using
the FFT to transform to and from the frequency domain.
An alternative approach, the averaged periodogram (Oppenheim and Schafer, 1989),
implements the time average differently. Here, the signal is divided into D segments of
length ND , and the ensemble of segments is averaged to estimate the expected value.
Thus, the averaged periodogram spectral estimate is
$$\hat{S}_{uu}(f) = \frac{1}{D\,N_D}\sum_{d=1}^{D} U_d^*(f)\,U_d(f) \qquad (2.23)$$
where Ud (f ) is the Fourier transform of the dth segment of ND points of the signal
u(t), and the asterisk, U ∗ (f ), denotes the complex conjugate.
It is common to overlap the data blocks to increase the number of blocks averaged.
Since the FFT assumes that the data are periodic, it is also common to window the
blocks before transforming them. The proper selection of the window function, and of
the degree of overlap between windows, is a matter of experience as well as trial and
error. Further details can be found in Bendat and Piersol (1986).
The averaged periodogram (2.23) is the most commonly used nonparametric spectral
estimate, and it is implemented in MATLAB’s spectrum command. Parametric spectral
estimators have also been developed and are described in Percival and Walden (1993).
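A minimal implementation of equation (2.23), without windowing or overlap, can be written directly from the definition. The sketch below is plain MATLAB; the function name, interface, and segment length are our own choices:

function [Suu, f] = avg_periodogram(u, fs, ND)
% Averaged periodogram estimate of the power spectrum, equation (2.23).
% u : data vector,  fs : sampling rate (Hz),  ND : samples per segment.
u = u(:);
D = floor(length(u)/ND);                 % number of complete, non-overlapping segments
Suu = zeros(ND, 1);
for d = 1:D
    Ud  = fft(u((d-1)*ND+1 : d*ND));     % Fourier transform of the d-th segment
    Suu = Suu + abs(Ud).^2;              % accumulate |Ud(f)|^2
end
Suu = Suu / (D*ND);                      % normalize as in equation (2.23)
f = (0:ND-1)' * fs/ND;                   % frequency axis for the estimate
end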
2.3.6 Applications
The variance of a biased correlation estimate (2.18) is proportional to 1/N and thus
decreases as N increases. Furthermore, the effect of the bias is a scaling by a factor
of N/(N − τ ), which decreases as N increases with respect to τ . Thus, in general the
length of a correlation function should be much shorter than the data length from which
it is estimated; that is, N ≫ τ. As a rule of thumb, correlation functions should be no
more than one-fourth the length of the data and should never exceed one-half of the data
length.
Autocovariance functions determined from stochastic signals tend to “die out” at
longer lags. The lag at which the autocovariance function has decreased to values that
cannot be distinguished from zero provides a subjective measure of the extent of the
sequential structure in a signal, or its “memory.”
In contrast, if there is an underlying periodic component, the autocorrelation function
will not die out but will oscillate at the frequency of the periodic component. If there
is substantial noise in the original signal, the periodicity may be much more evident in
the correlation function than in the original data. This periodicity will be evident in the
autocorrelation function at large lags, after the contribution from the noise component
has decayed to zero. An example of this is presented in the bottom row of Figure 2.2.
A common use for the cross-covariance function, as illustrated in the middle panel
of Figure 2.4, is to estimate the delay between two signals. The delay is the lag at which
Figure 2.4 Examples of the cross-correlation coefficient function. Top Row: The signals in A and
B are uncorrelated with each other. Their cross-correlation coefficient function, C, is near zero at
all lags. Middle Row: E is a delayed, scaled version of D with additive noise. The peak in the
cross-correlation coefficient function, F, indicates the delay between input and output. Bottom Row:
G is a low-pass filtered version of H, also with additive noise. The filtering appears as ringing in I.
the cross-covariance function is maximal. For example, the delay between two signals
measured at two points along a nerve can be determined from the cross-covariance
function (Heetderks and Williams, 1975). It should be remembered that dynamics can
also give rise to delayed peaks in the cross-correlation function.
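The sketch below illustrates the idea (plain MATLAB; the gain, delay, and noise level are made up for the example): it recovers the delay of a noisy, delayed, scaled copy of a white input from the peak of the biased cross-covariance estimate.

% Estimate a pure delay from the peak of the cross-covariance function.
N = 10000; delay = 25; alpha = 0.8;          % true delay, in samples
x = randn(N,1);                              % white input
y = [zeros(delay,1); alpha*x(1:N-delay)] + 0.2*randn(N,1);   % delayed, scaled, noisy copy

maxlag = 100;
C  = zeros(maxlag+1,1);
xm = x - mean(x);  ym = y - mean(y);
for tau = 0:maxlag                           % biased cross-covariance estimate
    C(tau+1) = sum(xm(1:N-tau).*ym(1+tau:N)) / N;
end
[~, imax] = max(C);                          % lag of the peak
fprintf('estimated delay: %d samples (true: %d)\n', imax-1, delay);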
Note that there is some confusion in the literature about the terminology for this function.
Some authors (Korenberg, 1988) use the nomenclature “second-order,” as does this book;
others have used the term “third-order” (Marmarelis and Marmarelis, 1978) to describe
the same relation.
2.4 MEAN-SQUARE PARAMETER ESTIMATION

A common definition for the “best approximation” is that which minimizes the mean-
square error between the measured output, z(t), and the model output, ŷ(θ, t):
$$\text{MSE}(M, \theta, u(t)) = E\left[\left(z(t) - \hat{y}(\theta, t)\right)^2\right] \qquad (2.29)$$
If the signals are ergodic, then this expectation can be evaluated using a time average
over a single record. In discrete time, this results in the summation:
$$V_N(M, \theta, u(t)) = \frac{1}{N}\sum_{t=1}^{N}\left(z(t) - \hat{y}(\theta, t)\right)^2 \qquad (2.30)$$
which is often referred to as the “mean-square error,” even though it is computed using
a time average rather than an ensemble average.
Note that the MSE depends on the model structure, M, the parameter vector, θ , and
the test input, u(t). For a particular structure, the goal is to find the parameter vector, θ̂ ,
that minimizes (2.30). That is (Ljung, 1999):

$$\hat{\theta} = \arg\min_{\theta} V_N(M, \theta, u(t)) \qquad (2.31)$$
For model structures whose output is linear in the parameters, the output can be written as the matrix product

ŷ(θ) = Uθ                                                         (2.32)

where the columns of U contain the regressors. Substituting (2.32) into (2.30) and differentiating with respect to the parameters gives the gradient

$$\frac{\partial V_N}{\partial \theta} = \frac{2}{N}\left(U^T U\,\theta - U^T z\right)$$
The minimum mean-square error solution, θ̂, is found by setting the gradient to zero and
solving:

U^T U θ̂ = U^T z
θ̂ = (U^T U)^{−1} U^T z                                            (2.33)
Thus, for any model structure where the output is linear in the parameters, as
in equation (2.32), the optimal parameter vector may be determined directly from
equation (2.33), called the normal equations.∗ Many of the model structures examined
in this text are linear in their parameters even though they describe nonlinear systems.
Thus, solutions of the normal equations and their properties are fundamental to much of
what will follow.
2.4.1.1 Example: Polynomial Fitting Consider, for example, the following prob-
lem. Given N measurements of an input signal, u1 , u2 , . . . , uN and output z1 , z2 , . . . , zN ,
find the third-order polynomial that best describes the relation between uj and zj . To do
so, assume that

zj = c(0) + c(1)uj + c(2)uj² + c(3)uj³ + vj
∗ In the MATLAB environment, the normal equation, (2.33), can be solved using the “left division” operator.
θ̂ = U\z.
where v is a zero-mean, white Gaussian noise sequence. First, construct the regression
matrix
$$U = \begin{bmatrix} 1 & u_1 & u_1^2 & u_1^3 \\ 1 & u_2 & u_2^2 & u_2^3 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & u_N & u_N^2 & u_N^3 \end{bmatrix} \qquad (2.34)$$
Then, rewrite the expression for z as the matrix equation

z = Uθ + v                                                        (2.35)

where θ = [c(0)  c(1)  c(2)  c(3)]^T, and use equation (2.33) to solve for θ.
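The example can be carried out in a few lines of MATLAB. In the sketch below (the true coefficients and noise level are made up for illustration), the normal equations are solved with the left-division operator mentioned in the footnote:

% Fit a third-order polynomial between u and z by linear least squares.
N = 1000;
u = randn(N,1);                                % input samples
theta_true = [1; -2; 0.5; 0.25];               % c(0), c(1), c(2), c(3)
U = [ones(N,1), u, u.^2, u.^3];                % regression matrix, equation (2.34)
z = U*theta_true + 0.1*randn(N,1);             % noisy output, equation (2.35)

theta_hat = U \ z;                             % solves the normal equations (2.33)
disp([theta_true, theta_hat]);                 % compare true and estimated coefficients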
1. The model structure is correct; that is, there is a parameter vector such that the
system can be represented exactly as y = Uθ .
2. The output, z = y + v, contains additive noise, v(t), that is zero-mean and statisti-
cally independent of the input, u(t).
If conditions 1 and 2 hold, then the expected value of the estimated parameter vector is

E[θ̂] = θ

that is, the estimate is unbiased. Furthermore, if the measurement noise, v(t), is white, its variance can be estimated from the residuals,

$$\hat{\sigma}_v^2 = \frac{1}{N - M}\sum_{t=1}^{N}\left(z(t) - \hat{y}(t, \hat{\theta})\right)^2$$

where M is the number of model parameters. Thus the covariance matrix for the parameter estimates, Cθ̂, reduces to

$$C_{\hat{\theta}} = \hat{\sigma}_v^2\left(U^T U\right)^{-1} \qquad (2.41)$$
The error in the parameter estimates is then

θ̃ = θ − θ̂ = θ − (U^T U)^{−1} U^T z = −(U^T U)^{−1} U^T v          (2.42)
The Hessian of the cost function,

$$H(i,j) = \frac{\partial^2 V_N(\theta)}{\partial\theta_i\,\partial\theta_j}$$

is, to within a constant scale factor,

H = U^T U                                                         (2.43)
Furthermore, if the measurement noise is white, the inverse of the Hessian is proportional
to the parameter covariance matrix, Cθ̂ , given in equation (2.41).
Let ν = UT v be the second term in equation (2.42). Substituting these two expres-
sions gives
θ̃ = −H^{−1} ν
Now, consider the effect of a small change in ν on the parameter estimate. The Hessian
is a non-negative definite matrix, so its singular value decomposition (SVD) (Golub and
Van Loan, 1989) can be written as
H = VSV^T

where V = [v1  v2  . . .  vM] is an orthogonal matrix, V^T V = I, and S is a diagonal matrix,
S = diag[s1, s2, . . . , sM], where s1 ≥ s2 ≥ · · · ≥ sM ≥ 0. Using this, the Hessian can be
expanded as

$$H = \sum_{i=1}^{M} s_i\,v_i v_i^T$$
Notice that if the noise term, ν, changes in the direction parallel to the kth singular
vector, vk , then the change in θ̂ will be multiplied by 1/sk . Consequently, the ratio
of the largest to smallest singular values will determine the relative sensitivity of the
parameter estimates to noise. This ratio is referred to as the condition number (Golub
and Van Loan, 1989) of the matrix and ideally should be close to 1.
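The sensitivity argument can be checked numerically. The following sketch (plain MATLAB; the regressors are simply the powers of a Gaussian input, anticipating the polynomial models of the next section) computes the singular values of the Hessian and the resulting condition number:

% Condition number of the Hessian for a power-series regression matrix.
N = 1000;
u = 5*randn(N,1);                  % input with standard deviation well away from 1
U = [ones(N,1), u, u.^2, u.^3];    % power-series regressors
H = U'*U;                          % Hessian of the mean-square error (to a constant factor)

s = svd(H);                        % singular values, sorted s(1) >= ... >= s(end)
fprintf('condition number of H: %g\n', s(1)/s(end));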
2.5 POLYNOMIALS
$$= \sum_{q=0}^{Q} c^{(q)}\,\mathcal{M}^{(q)}(u)$$
1. The columns of U will have widely different amplitudes, particularly for high-order
polynomials, unless σu ≈ 1. As a result, the singular values of U, which are the
square roots of the singular values of the Hessian, will differ widely.
2. The columns of U will not be orthogonal. This is most easily seen by examining the Hessian, U^T U, which will have the form

$$H = N\begin{bmatrix} 1 & E[u] & E[u^2] & \cdots & E[u^q] \\ E[u] & E[u^2] & E[u^3] & \cdots & E[u^{q+1}] \\ E[u^2] & E[u^3] & E[u^4] & \cdots & E[u^{q+2}] \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ E[u^q] & E[u^{q+1}] & E[u^{q+2}] & \cdots & E[u^{2q}] \end{bmatrix}$$
Since H is not diagonal, the columns of U will not be orthogonal to each other.
Note that the singular values of U can be viewed as the lengths of the semiaxes of
a hyperellipsoid defined by the columns of U. Thus, nonorthogonal columns will
stretch this ellipse in directions more nearly parallel to multiple columns and will
shrink it in other directions, increasing the ratio of the axis lengths, and hence the
condition number of the estimation problem (Golub and Van Loan, 1989).
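Both effects are easy to observe. The sketch below (plain MATLAB; the input scalings are arbitrary) compares the condition number of the Hessian obtained with the raw input against that obtained after rescaling the input to zero mean and unit variance, and prints the scaled Hessian to show that its off-diagonal entries remain nonzero in either case:

% Effect of input scaling on the conditioning of a power-series fit.
N = 10000;
u  = 5*randn(N,1);                          % input with sigma_u = 5
un = (u - mean(u))/std(u);                  % same input rescaled to unit variance

U1 = [ones(N,1), u,  u.^2,  u.^3];          % raw power-series regressors
U2 = [ones(N,1), un, un.^2, un.^3];         % regressors built from the scaled input

fprintf('cond(H), raw input:    %g\n', cond(U1'*U1));
fprintf('cond(H), scaled input: %g\n', cond(U2'*U2));
disp(U2'*U2/N);                             % still not diagonal: columns not orthogonal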
This demonstrates that a particular polynomial basis function, P (q) (u), will be orthogonal
only for a particular input probability distribution. Thus each polynomial family will
be orthogonal for a particular input distribution. Figure 2.5 shows the basis functions
corresponding to three families of polynomials: the ordinary power series, as well as the
Hermite and Tchebyshev families of orthogonal polynomials to be discussed next.
Figure 2.5 Power series, Hermite and Tchebyshev polynomials of orders 0 through 5. Left Col-
umn: Power series polynomials over the arbitrarily chosen domain [−10 10]. Middle Column:
Hermite polynomials over the domain [−3 3] corresponding to most of the range of the unit-
variance, normal random variable. Right Column: Tchebyshev polynomials over their full domain
[−1 1].
Using equation (2.46) and the results for the expected value of products of Gaussian
variables (2.3), the Hermite polynomials can be shown to be

$$\mathcal{H}^{(n)}(u) = n!\sum_{m=0}^{\lfloor n/2\rfloor}\frac{(-1)^m}{m!\,2^m\,(n-2m)!}\,u^{\,n-2m} \qquad (2.47)$$
H^(0)(u) = 1
H^(1)(u) = u
H^(2)(u) = u² − 1
H^(3)(u) = u³ − 3u
The Hermite polynomials may also be generated using the recurrence relation

H^(n+1)(u) = u H^(n)(u) − n H^(n−1)(u)
Note that these polynomials are only orthogonal for zero-mean, unit variance, Gaus-
sian inputs. Consequently, input data are usually transformed to zero mean and unit
variance before fitting Hermite polynomials. The transformation is retained as part of the
polynomial representation and used to transform any other inputs that may be applied to
the polynomial.
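A sketch of this construction is shown below (plain MATLAB; the function name and interface are our own). It evaluates the Hermite polynomials of orders 0 through n at the points in u using the recurrence given above:

function Hmat = hermite_basis(u, n)
% Evaluate Hermite polynomials H^(0)..H^(n) at the points in u
% using the recurrence H^(k+1)(u) = u*H^(k)(u) - k*H^(k-1)(u).
u = u(:);
Hmat = zeros(length(u), n+1);
Hmat(:,1) = 1;                          % H^(0)(u) = 1
if n >= 1, Hmat(:,2) = u; end           % H^(1)(u) = u
for k = 1:n-1
    Hmat(:,k+2) = u.*Hmat(:,k+1) - k*Hmat(:,k);
end
end

For a long record of zero-mean, unit-variance Gaussian data, the columns of hermite_basis(u, 3) are nearly orthogonal, which is the property that motivates this choice of basis.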
T^(0)(u) = 1
T^(1)(u) = u
T^(2)(u) = 2u² − 1
T^(3)(u) = 4u³ − 3u
In practice, input data are transformed to [−1 1] prior to fitting the coefficients, and
the scale factor is retained as part of the polynomial representation.
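The Tchebyshev polynomials can be generated in the same way from the standard recurrence T^(n+1)(u) = 2u T^(n)(u) − T^(n−1)(u). A sketch mirroring the Hermite function (again, the function name and interface are our own):

function Tmat = tcheb_basis(u, n)
% Evaluate Tchebyshev polynomials T^(0)..T^(n) at the points in u (assumed in [-1,1])
% using the recurrence T^(k+1)(u) = 2*u*T^(k)(u) - T^(k-1)(u).
u = u(:);
Tmat = zeros(length(u), n+1);
Tmat(:,1) = 1;                          % T^(0)(u) = 1
if n >= 1, Tmat(:,2) = u; end           % T^(1)(u) = u
for k = 1:n-1
    Tmat(:,k+2) = 2*u.*Tmat(:,k+1) - Tmat(:,k);
end
end

As noted above, input data would first be transformed onto [−1 1] before evaluating or fitting this basis.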
M^(1)(uk) = uk
However, the second-order terms will involve products of two inputs, either two copies
of the same signal or two distinct signals. Thus, the second-order terms will be of two
types: squares of a single input, uk², and products of two distinct inputs, uk ul.
Similarly, the order-q terms will involve from one to q inputs, raised to powers such
that the sum of their exponents is q. For example, the third-order terms in a three-input
polynomial are

u1³,  u2³,  u3³,  u1²u2,  u1²u3,  u2²u1,  u2²u3,  u3²u1,  u3²u2,  u1u2u3