Fundamentals of Signal Processing in Metric Spaces with Lattice Properties: Algebraic Approach
Andrey Popoff
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Contents

List of Figures ix
Preface xiii
Introduction xix
Conclusion 385
Bibliography 389
Index 403
List of Figures
7.4.14 Useful signal s(t), realization V∗(t) of signal V(t), and realization E_V∗(t) of its envelope 338
7.4.15 Block diagram of processing unit that realizes LFM signal detection with joint estimation of time of signal arrival (ending) 349
7.4.16 Useful signal s(t) and realization w∗(t) of signal w(t) in output of adder 350
7.4.17 Useful signal s(t) and realization v∗(t) of signal v(t) in output of median filter 350
7.4.18 Useful signal s(t) and realization v∗(t) of signal v(t) in output of median filter, and δ-pulse determining time position of estimator t̂1 351
7.4.19 Useful signal s(t) and realization E_v∗(t) of envelope E_v(t) of signal v(t) 351
7.5.1 Block diagram of deterministic signals classification unit 355
7.5.2 Signals in outputs of correlation integral computation circuit z_i(t) and strobing circuit u_i(t) 357
7.6.1 Block diagram of signal resolution unit 366
7.6.2 Signal w(t) in input of limiter in absence of interference (noise) 368
7.6.3 Normalized time-frequency mismatching function ρ(δτ, δF) of filter 370
7.6.4 Cut projections of normalized time-frequency mismatching function 370
7.6.5 Realization w∗(t) of signal w(t) including signal response and residual overshoots 373
7.6.6 Realization w∗(t) of stochastic process w(t) in input of limiter and residual overshoots 376
7.6.7 Realization v∗(t) of stochastic process v(t) in output of median filter 376
7.6.8 Normalized time-frequency mismatching function of filter in presence of strong interference 377
7.7.1 Block diagram of mapping unit 379
7.7.2 Directional field patterns F_A(θ) and F_B(θ) of antennas A and B 380
C.1 Suggested scheme of interrelations between information theory, signal processing theory, and algebraic structures 387
Chapter-by-chapter abstracts
Chapter 1 is introductory in character and acquaints the reader with the most general methodological questions of classical Signal Theory and Information Theory. A brief analysis of general concepts, notions, and ideas of Natural Science, Signal Theory, and Information Theory is carried out. Chapter 1 considers a general system of notions and principles of modern research methodology in Natural Science, as well as a particular system of notions of the selected research direction (both Signal Processing Theory and Information Theory). The relations between Information Theory and the Natural Sciences are briefly discussed. Chapter 1 considers the methodological foundations of classical Signal Theory and Information Theory and shows the theoretical difficulties within both. In this chapter, we outline ways of overcoming the logical troubles in both theories. Chapter 1 finishes with the formulation of the principal concept of the book.
Chapter 2 formulates an approach to constructing a signal space on the basis of generalized Boolean algebra with a measure. The notion of information carrier space is defined. Chapter 2 considers the main relationships between the elements of a metric space built upon generalized Boolean algebra with a measure, and defines the notions of the main geometric objects of such a metric space. An axiomatic system of this metric space is formulated; it implies that the axioms of connection and the axioms of parallels are subject to essentially weaker constraints than the axioms of the analogous groups of Euclidean space. It is shown that the geometry of generalized Boolean algebra with a measure subsumes some other known geometries. Chapter 2 establishes metric and trigonometrical relationships in the space built upon generalized Boolean algebra with a measure, and investigates both the geometric and the algebraic properties of this metric space. Chapter 2 also studies the informational properties of such a metric space; they are introduced axiomatically by the axiom of a measure of a binary operation of generalized Boolean algebra.
Chapter 4 deals with the notions of informational and physical signal spaces. On the basis of the probabilistic and informational characteristics of the signals and their elements introduced in the previous chapter, we consider the characteristics and properties of an informational signal space built upon generalized Boolean algebra with a measure. At the same time, a separate signal carrying information is considered as a subalgebra of generalized Boolean algebra with a measure. It is underlined that a measure on Boolean algebra accomplishes a twofold function: firstly, it is a measure of information quantity, and secondly, it induces a metric in signal space. The interconnection between the introduced measure and the logarithmic measure of information quantity is shown. Some homomorphic mappings in informational signal space are considered; in particular, the sampling theorem is formulated for this signal space. Theorems on isomorphisms are established for informational signal space. The informational paradox of additive signal interaction in linear signal space is considered. Informational relations that take place under signal interaction in signal spaces with various algebraic properties are established. It is shown that, from the standpoint of providing minimum losses of the information contained in the signals, one should carry out their processing in signal spaces with lattice properties.
Chapter 6 considers quality indices of signal processing in metric spaces with L-group properties. These indices are based on metric relationships between the instantaneous values (the samples) of the signals determined in the third chapter. The obtained quality indices correspond to the main problems of signal processing, i.e., signal detection, signal filtering, signal classification, signal parameter estimation, and signal resolution. Chapter 6 provides a brief comparative analysis of the obtained relationships, determining the differences in signal processing quality in spaces with the mentioned properties. It is shown that potential quality indices of signal processing in metric spaces with lattice properties are characterized by the invariance property with respect to parametric and nonparametric prior uncertainty conditions. The relationships determining the capacity of a communication channel operating in the presence of interference (noise) in metric spaces with L-group properties are obtained. All the main signal processing problems are considered from the standpoint of estimating the signals and/or their parameters.
Chapter 7 deals with methods of synthesis of signal processing algorithms and units in metric spaces with lattice properties; the developed approaches to the synthesis allow operating with the minimum necessary prior data concerning the characteristics and properties of the interacting useful and interference signals. This means, firstly, that no prior data concerning the probabilistic distributions of useful signals and interference are supposed to be present. Secondly, the kind of useful signal (signals) is assumed to be known a priori, i.e., we know that it is either deterministic (quasi-deterministic) or stochastic. Within the seventh chapter, the quality indices of the synthesized signal processing algorithms and units are obtained. It is shown that algorithms and units of signal processing in metric spaces with lattice properties are characterized by the invariance property with respect to parametric and nonparametric prior uncertainty conditions. Chapter 7 finishes with methods of mapping signal spaces with group (semigroup) properties into signal spaces with lattice properties.
Preface
Electronics is one of the main high-technology sectors of the economy, providing the development and manufacture of civil, military, and dual-purpose products whose level defines the technological, economic, and informational security of the leading countries of the world. Electronics serves as a catalyst and a locomotive of scientific and technological progress, promoting the stable growth of various industry branches and of the world economy as a whole. The majority of electronic systems, means, and sets are units of information (signal) transmitting, receiving, and processing.
As in other branches of science, the progress in both information theory and
signal processing theory is directly related to understanding and investigating their
fundamental principles. While it is acceptable in the early stages of the development
of a theory to use some approximations, over the course of time it is necessary to
have a closed theory which is able to predict previously unknown phenomena and facts. This book was conceived as an introduction to the field of signal processing in non-Euclidean spaces with special properties based on a measure of information quantity.
Successful research in the 21st century is impossible without comprehensive study of a specific discipline. Specialists often have a poor understanding of the work of their colleagues in adjacent branches of science. This is not surprising, because the fragmentation of scientific disciplines obscures the interrelations existing between various areas of science. To a degree, the detriment caused by narrow specialization is compensated by popular scientific literature acquainting the reader with a wider range of phenomena, and also offset by works that aim to cover some larger subject matter domain of research.
The research subject of this book includes the methodology of constructing the
unified mathematical fundamentals of both information theory and signal process-
ing theory, the methods of synthesis of signal processing algorithms under prior
uncertainty conditions, and also the methods of evaluating their efficiency.
While this book does not constitute a transdisciplinary approach, it starts with
generalized methodology based on natural sciences to create new research concepts
with application to information and signal processing theories.
Two principal problems will be investigated. The first involves unified mathe-
matical fundamentals of information theory and signal processing theory. Its solu-
tion is provided by definition of an information quantity measure connected in a
unique way to the notion of signal space. The second problem is the need to increase
signal processing efficiency under parametric and nonparametric prior uncertainty
conditions. The resolution of that problem rests on the solution of the first problem, using signal spaces with various algebraic properties such as groups, lattices, and generalized Boolean algebras.
This book differs from traditional monographs in (1) algebraic structure of signal
spaces, (2) measures of information quantities, (3) metrics in signal spaces, (4)
common signal processing problems, and (5) methods for solving such problems
(methods for overcoming prior uncertainty).
This book is intended for professors, researchers, postgraduate and undergraduate
students, and specialists in signal processing and information theories, electronics,
radiophysics, telecommunications, various engineering disciplines including radio-
engineering, and information technology. It presents alternative approaches to con-
structing signal processing theory and information theory based upon Boolean al-
gebra and lattice theory. The book may be useful for mathematicians and physicists
interested in applied problems in their areas. The material contained in the book
differs from the traditional approaches of classical information theory and signal
processing theory and may interest students specializing in such areas as “radiophysics”, “telecommunications”, “electronic systems”, “system analysis
and control”, “automation and control”, “robotics”, “electronics and communication
engineering”, “control systems engineering”, “electronics technology”, “information
security”, and others.
Signal processing theory is currently one of the most active areas of research
and constitutes a “proving ground” in which mathematical ideas and methods find their realization in concrete physical analogues. Moreover, the analysis of IT tendencies in the 21st century, and the prospective transition toward quantum and optical systems of information storage, transmission, and processing, allow us to claim that in the near future the parallel development of signal processing theory and information theory will proceed on the basis of their deeper interpenetration.
Also, the author would like to express his confidence that these topics could awaken the interest of young researchers who will test their own strengths in this direction.
The possible directions of future applications of the new ideas and technologies based on signal processing in spaces with lattice properties described in this book could be extremely wide. Such directions could cover, for example, the search for extraterrestrial intelligence (SETI) program, electromagnetic compatibility problems, and military applications intended to provide steady operation of electronic systems under severe jamming conditions.
Models of real signals discussed in this book are based on stochastic processes.
Nevertheless, the use of generalized Boolean algebra with a measure allows us to extend the results to signal models in the form of stochastic fields, which constitutes a subject for separate discussion. Traditional aspects of classical
signal processing and information theories are not covered in the book because
they have been presented widely in the existing literature. Readers of this book
should understand the basics of set theory, abstract algebra, mathematical analysis,
and probability theory. The final two chapters require knowledge of mathematical
statistics or statistical radiophysics (radioengineering).
The book is arranged in the following way. Chapter 1 is introductory in nature
and describes general methodology questions of classical signal theory and information theory. A brief analysis of general concepts, notions, and ideas of natural science,
signal theory, and information theory is carried out. The relations between informa-
tion theory and natural sciences are briefly discussed. Theoretical difficulties within
foundations of both signal and information theories are shown. Overcoming these
obstacles is discussed.
Chapter 2 formulates an approach to constructing a signal space on the basis
of generalized Boolean algebra with a measure. Axiomatic system of metric space
built upon generalized Boolean algebra with a measure is formulated. Chapter 2
establishes metric and trigonometrical relationships in space built upon generalized
Boolean algebra with a measure and considers informational properties of metric
space built upon generalized Boolean algebra with a measure. The properties are
introduced by the axiom of a measure of a binary operation of generalized Boolean
algebra.
Chapter 3 introduces probabilistic characteristics of stochastic processes which
are invariant with respect to groups of their mappings. The interconnection be-
tween introduced probabilistic characteristics and metric relations between the in-
stantaneous values (the samples) of stochastic processes in metric space is shown.
Informational characteristics of stochastic processes are introduced. Chapter 3 es-
tablishes the necessary condition according to which a stochastic process possesses
the ability to carry information. The mapping that allows considering an arbitrary
stochastic process as subalgebra of generalized Boolean algebra with a measure is
introduced. Informational properties of stochastic processes are introduced on the
base of the axiom of a measure of a binary operation of generalized Boolean algebra.
The main results are formulated in the form of corresponding theorems.
Chapter 4 together with Chapter 7 occupies the central place in the book and
deals with the notions of informational and physical signal spaces. On the basis
of probabilistic and informational characteristics of the signals and their elements,
that are introduced in the previous chapter, Chapter 4 considers the characteristics
and the properties of informational signal space built upon generalized Boolean
algebra with a measure. At the same time, the separate signal carrying information
is considered as a subalgebra of generalized Boolean algebra with a measure. We
state that a measure on Boolean algebra accomplishes a twofold function: firstly,
it is a measure of information quantity and secondly it induces a metric in signal
space. The interconnection between introduced measure and logarithmic measure
of information quantity is shown. Some homomorphic mappings in informational
signal space are considered. Particularly for this signal space, the sampling theorem
is formulated. Theorems on isomorphisms are established for informational signal
space. The informational paradox of additive signal interaction in linear signal space
is considered. Informational relations, taking place under signal interaction in signal
spaces with various algebraic properties, are established. It is shown that from the
standpoint of providing minimum losses of information contained in the signals,
one should carry out their processing in signal spaces with lattice properties.
Chapter 5 establishes the relationships determining the quantities of informa-
tion carried by discrete and continuous signals. The upper bound of information
quantity, which can be transmitted by discrete random sequence, is determined by
a number of symbols of the sequence only, and does not depend on code base (al-
phabet size). This fact is stipulated by the fundamental property of information,
i.e., by its ability to exist exclusively within a statistical collection of structural
elements of its carrier (the signal). On the basis of introduced information quantity
measure, relationships are established to determine the capacities of discrete and
continuous noiseless communication channels. Boundedness of capacity of discrete
and continuous noiseless channels is proved. Chapter 5 concludes with examples
of evaluating the capacity of noiseless channel matched with stochastic stationary
signal characterized by certain informational properties.
Chapter 6 considers quality indices of signal processing in metric spaces with
L-group properties (i.e., with both group and lattice properties). These indices
are based on metric relationships between the instantaneous values (the samples)
of the signals determined in Chapter 3. The obtained quality indices correspond
to the main problems of signal processing, i.e., signal detection; signal filtering;
signal classification; signal parameter estimation; and signal resolution. Chapter 6
provides brief comparative analysis of the obtained relationships, determining the
differences of signal processing qualities in the spaces with mentioned properties.
Potential quality indices of signal processing in metric spaces with lattice proper-
ties are characterized by the invariance property with respect to parametric and
nonparametric prior uncertainty conditions. The relationships determining capac-
ity of communication channel operating in the presence of interference (noise) in
metric spaces with L-group properties are obtained. All the main signal processing
problems are considered from the standpoint of estimating the signals and/or their
parameters.
Chapter 7 deals with synthesis of algorithms and units of signal processing in
metric spaces with lattice properties, so that the developed approaches allow oper-
ating with minimum necessary prior data concerning characteristics and properties
of interacting useful and interference signals. This means that no prior data con-
cerning probabilistic distribution of useful signals and interference are supposed
to be present. Second, a priori, the kinds of useful signal (signals) are assumed to
be known, i.e., we know that they are either deterministic (quasi-deterministic) or
stochastic. The quality indices of synthesized signal processing algorithms and units
are obtained. Algorithms and units of signal processing in metric spaces with lattice
properties are characterized by the invariance property with respect to parametric
and nonparametric prior uncertainty conditions. Chapter 7 concludes with methods
of mapping of signal spaces with group (semigroup) properties into signal spaces
with lattice properties.
The first, second, and seventh chapters are relatively independent and may be
read separately. Before reading the third and the fourth chapters it is desirable to
get acquainted with main content of the first and second chapters. Understanding
Chapters 4 through 6 to a great extent relies on the main ideas stated in the third
chapter. Proofs, provided for the exacting and rigorous reader, may be skipped at least on a first reading.
There is a three-part numbering of formulas in the book: the first number corresponds to the chapter number, the second denotes the section number, and the third corresponds to the equation number within the current section; for example, (2.3.4) indicates the 4th formula in Section 2.3 of Chapter 2. A similar numbering system is used for axioms, theorems, lemmas, corollaries, examples, and definitions. Proofs of theorems and lemmas, as well as the endings of examples, are terminated by special marker symbols. Outlines generalizing the obtained results are placed at the end of the corresponding sections where necessary. In general, the chapters are not summarized.
While translating the book, the author attempted to preserve an “isomorphism” of the monograph's perception with respect to its principal ideas, approaches, methods, and main statements formulated in the form of mathematical relationships, along with key notions.
As remarked by Johannes Kepler, “. . . If there is no essential strictness in terms,
elucidations, proofs and inference, then a book will not be a mathematical one. If
the strictness is provided, the book reading becomes very tiresome . . . ”.1
The author’s intention in writing this book was to provide readers with con-
cise comprehensible content while ensuring adequate coverage of signal processing
and information theories and their applications to metric signal spaces with lattice
properties. The author recognizes the complexity of the subject matter and the fast
pace of related research. He welcomes all comments by readers and can be reached
(at [email protected]).
The author would like to extend his sincere appreciation, thanks, and gratitude
to Victor Astapenya, Ph.D.; Alexander Geleseff, D.Sc.; Vladimir Horoshko, D.Sc.;
Sergey Rodionov, Ph.D.; Vladimir Rudakov, D.Sc.; and Victor Seletkov, D.Sc. for
attention, support, and versatile help that contributed greatly to expediting the
writing of this book and improving its content.
The author would like to express his sincere acknowledgment to Allerton Press, Inc. and to its Senior Vice President Ruben de Semprun for granting permissions to use the material published by the author in Radioelectronics and Communications Systems.
The author would like to acknowledge understanding, patience, and support
provided within CRC Press by its staff and the assistance from Nora Konopka,
Michele Dimont, Kyra Lindholm, and unknown copy editor(s).
Finally, the author would like to express his thanks to all LaTeX developers whose tremendous efforts greatly lightened the author's burden.
Andrey Popoff
1. J. Kepler, Astronomia Nova, Prague, 1609.
Introduction
At the frontier of the 21st century, the amounts of information obtained and processed in all fields of human activity, from oceanic depths to remote parts of cosmic space, increase exponentially every year. It is impossible to satisfy the growing needs of humanity in transmitting, receiving, and processing all sorts of information without continuous improvement of acoustic, optical, and electronic systems and of signal processing methods. These two tendencies drive the development of information theory, signal processing theory, and the synthesis foundations of such systems.
The subject of information theory includes the analysis of qualitative and quan-
titative relationships that take place through transmitting, receiving, and pro-
cessing of information contained in both messages and signals, so that the use-
ful signal s(t) is considered a one-to-one function of a transmitted message m(t): s(t) = M[c(t), m(t)], where M is the modulating one-to-one function and c(t) is a signal carrier; m(t) = M^{-1}[c(t), s(t)].
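As a simple orienting instance (an illustration added here, not the author's example), double-sideband amplitude modulation realizes such a one-to-one modulating function M whenever the carrier is known and nonzero:

```latex
% DSB modulation as an instance of s(t) = M[c(t), m(t)]
s(t) = M[c(t), m(t)] = c(t)\, m(t), \qquad c(t) = \cos(2\pi f_0 t),
% with the formal inverse, at instants where c(t) \neq 0:
m(t) = M^{-1}[c(t), s(t)] = s(t)/c(t).
```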
The subject of signal processing theory includes the analysis of probabilistic-
statistic models of the signals interacting properly with each other and statistical
inference in specific aspects of the process of extracting information contained in
signals. Information theory and signal processing theory continue to develop in
various directions.
Thus, one should refer the following related directions to information theory in
its classical formulation:
taking into account the random character of the received signals. An interaction
between useful and interference (noise) signals usually is described through the
superposition principle; however, it has neither fundamental nor universal character.
This means that while constructing fundamentals of information theory and signal
processing theory, there is no need to confine the space of information material
carriers (signal space) artificially by the properties which are inherent exclusively
to linear spaces.
Meanwhile, in most publications, signal processing problems are formulated
within the framework of additive commutative groups of linear spaces, where in-
teraction between useful and interference (noise) signals is described by a binary
operation of addition. These problems are rather seldom considered in terms of ring binary operations, i.e., in additive commutative groups with an introduced multiplication operation connected with addition by distributive laws. In this case, interaction between useful and interference (noise) signals is described by multiplication and addition operations in the presence of multiplicative and additive noise (interference), respectively.
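For orientation (a sketch added here; the observation notation w(t) is an assumption), the interaction models just mentioned can be set beside the lattice operations the book develops later:

```latex
% additive interaction: group operation of a linear signal space
w(t) = s(t) + n(t)
% ring operations: multiplicative noise \mu(t) plus additive noise n(t)
w(t) = \mu(t)\, s(t) + n(t)
% lattice operations (join and meet) in a signal space with lattice properties
w(t) = s(t) \vee n(t), \qquad w(t) = s(t) \wedge n(t)
```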
Capabilities of signal processing in linear space are confined within potential
quality indices of optimal systems demonstrated by the results of classical signal
processing theory. Besides, modern electronic systems and means of different func-
tionalities operate under prior uncertainty conditions adversely affecting quality
indices of signal processing.
Under prior uncertainty conditions, the efficiency of signal processing algorithms and units can be evaluated upon some $i$-th distribution family of interference (noise) $D_i[\{a_{i,k}\}]$, where $\{a_{i,k}\}$ is a set of shape parameters of this distribution family, $i, k \in \mathbb{N}$, and $\mathbb{N}$ is the set of natural numbers, on the basis of the dependences $Q(m_1^2/m_2)$ and $Q(q^2)$ of some normalized signal processing quality index on the ratio $m_1^2/m_2$ between the squared average $m_1^2$ and the second-order moment $m_2$ of the interference (noise) envelope [156], and also on the signal-to-noise (signal-to-interference) ratio $q^2$, respectively. By a normalized signal processing quality index $Q$ we mean any signal processing quality index that takes its values in the interval $[0, 1]$, so that 1 and 0 correspond to the best and the worst values of $Q$, respectively; for instance, it could be the conditional probability of correct detection, the correlation coefficient between the useful signal and its estimator, etc. By the envelope $E_x(t)$ of a stochastic process $x(t)$ (particularly of interference) we mean the function
$$E_x(t) = \sqrt{x^2(t) + x_H^2(t)},$$
where
$$x_H(t) = -\frac{1}{\pi} \int\limits_{-\infty}^{\infty} \frac{x(\tau)}{\tau - t}\, d\tau \quad \text{(as a principal value)}$$
is the Hilbert transform of the initial stochastic process $x(t)$. Dependences $Q(m_1^2/m_2)$ of such a normalized signal processing quality index $Q$ on the ratio $m_1^2/m_2$ (with $q^2 = \mathrm{const}$) for an arbitrary $i$-th family of interference (noise) distributions $D_i[\{a_{i,k}\}]$ may look like the curves 1, 2, and 3 shown in Fig. I.1. Optimal Bayesian decisions [143, 146, 150, 152], the decisions obtained on the basis of robust methods [157, 166, 168], and those based on nonparametric statistics methods [150, 169–172] are, at a qualitative level, as a rule, characterized by the curves 1, 2, and 3, respectively. Figure I.1 conveys the generalized behaviors of some groups of signal processing algorithms operating under nonparametric prior uncertainty conditions, when (1) the distribution family of the interference is known but the concrete type of distribution is unknown (i.e., $m_1^2/m_2$ is unknown), and (2) in the worst case, even the distribution family of the interference is unknown.
FIGURE I.1 Dependences $Q(m_1^2/m_2)$ of some normalized signal processing quality index $Q$ on the ratio $m_1^2/m_2$ that characterize (1) optimal Bayesian decisions; decisions obtained on the basis of (2) robust methods and (3) nonparametric statistics methods; (4) the desirable dependence.

FIGURE I.2 Dependences $Q(q^2)$ of some normalized signal processing quality index $Q$ on the signal-to-noise ratio $q^2$ that characterize (1) the best case of signal receiving; (2) the worst case of signal receiving; (3) the desirable dependence.
Conditions when $m_1^2/m_2 \in\,]0, 0.7]$, $m_1^2/m_2 \in\,]0.7, 0.8]$, and $m_1^2/m_2 \in\,]0.8, 1.0[$ correspond to interference of pulse, intermediate, and harmonic kind, respectively. At the same time, while solving signal processing problems under prior uncertainty conditions, it is desirable, for all or at least several interference (noise) distribution families, to obtain the dependence in the form of the curve 4 shown in Fig. I.1:
$$Q(m_1^2/m_2) \to 1(m_1^2/m_2 - \varepsilon) - 1(m_1^2/m_2 - 1 + \varepsilon),$$
where $1(\cdot)$ is the Heaviside step function and $\varepsilon$ is an infinitesimally small positive number.
The curve 4 in Fig. I.1 is desirable, since the quality index $Q$ equals 1 over the whole interval $]0, 1[$ of the ratio $m_1^2/m_2$, i.e., the signal processing algorithm is absolutely robust ($Q = 1$) with respect to nonparametric prior uncertainty conditions.
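As a small numerical sketch of these quantities (an illustration added here; the test signals, durations, and seed are assumptions, and scipy.signal.hilbert is used to form the analytic signal $x(t) + jx_H(t)$):

```python
import numpy as np
from scipy.signal import hilbert  # returns the analytic signal x + j*x_H

def envelope_ratio(x):
    """Ratio m_1^2 / m_2 between the squared mean m_1^2 and the
    second-order moment m_2 of the envelope E_x(t) = |x + j*x_H|."""
    env = np.abs(hilbert(x))           # envelope via the analytic signal
    m1, m2 = env.mean(), (env ** 2).mean()
    return m1 ** 2 / m2                # in ]0, 1]; equals 1 for a constant envelope

rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 1e-4)

harmonic = np.cos(2 * np.pi * 50.0 * t)        # harmonic interference
gauss = rng.normal(size=t.size)                # wideband Gaussian noise
pulse = np.zeros_like(t)
pulse[rng.integers(0, t.size, 20)] = rng.normal(5.0, 1.0, 20)  # sparse pulses

print(envelope_ratio(harmonic))  # close to 1: harmonic kind, ]0.8, 1.0[
print(envelope_ratio(gauss))     # near pi/4 ~ 0.785: intermediate kind
print(envelope_ratio(pulse))     # well below 0.7: pulse kind
```

For a Rayleigh-distributed envelope (Gaussian noise), the ratio is $m_1^2/m_2 = \pi/4 \approx 0.785$, which indeed falls in the intermediate interval quoted above.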
Figure I.2 illustrates generalized behaviors of signal processing algorithms op-
erating in parametric prior uncertainty conditions, when (1) interference distri-
bution is known and energetic/spectral characteristics of useful and interference
signals are known (curve 1) and (2) interference distribution is unknown and/or
energetic/spectral characteristics of useful and/or interference signals are unknown
(curve 2).
Figure I.2 shows dependences $Q(q^2)$ of some normalized signal processing quality index $Q$ on the signal-to-noise ratio $q^2$ that characterize (1) the best case of signal receiving; (2) the worst case of signal receiving; (3) the desirable dependence. The curve 3 in Fig. I.2 is desirable, since the quality index $Q$ equals 1 over the whole interval $]0, \infty[$ of $q^2$, i.e., the signal processing algorithm is absolutely robust ($Q = 1$) with respect to parametric prior uncertainty conditions.
Thus, depending on the interference (noise) distribution and also on the characteristics of the useful signal, optimal variants of the solution of a given signal processing problem place the functional dependence $Q(q^2)$ within the interval between the curves 1 and 2 in Fig. I.2, which, at a qualitative level, characterize the most and the least desired cases of useful signal receiving (the best and the worst cases, respectively). However, while solving signal processing problems under prior uncertainty conditions, for all interference (noise) distribution families the most desired dependence $Q(q^2)$ is undoubtedly the curve 3 in Fig. I.2:
$$Q(q^2) \to 1(q^2 - \varepsilon),$$
where, similarly, $1(\cdot)$ is the Heaviside step function and $\varepsilon$ is an infinitesimally small positive number. Besides, at a theoretical level, one should aim at the maximum possible proximity between the functions $Q(q^2)$ and $1(q^2)$.
Meeting the constantly growing requirements for real signal processing quality indices under prior uncertainty conditions on the basis of the known approaches formulated, for instance, in [150, 152, 157, 166] appears to be problematic; thus, other ideas should be used in this case. This book is devoted to a large-scale problem, i.e., providing signal processing quality indices $Q$ in the form of the dependences 4 and 3 shown in Figs. I.1 and I.2, respectively.
The basic concepts for signal processing theory and information theory are signal
space and information quantity, respectively. The development of the notions of in-
formation quantity and signal space leads to important methodological conclusions
concerning the interrelation between information theory (within its syntactical aspects) and signal processing theory. But this leads to other questions. For example,
what is the interrelation between, on the one hand, set theory (ST), mathemati-
cal analysis (MA), probability theory (PT), and mathematical statistics (MS); and
on the other hand, between information theory (IT) and signal processing theory
(SPT)? Earlier it seemed normal to formulate the axiomatic grounds of probability theory on the basis of set theory by introducing a specific measure, i.e., the probabilistic one, and the grounds of signal theory (SgT) on the basis of mathematical analysis (function spaces of a special kind, i.e., linear spaces with scalar product). In the classical scheme, the connection ST → PT → IT was traced separately, the relation PT → MS → SPT was observed rather independently, and the link MA → SgT existed entirely apart. Here and below, arrows indicate how one or another theory is related to its mathematical foundations.
The “set” interpretation of probability theory has its own weak points, noted by many mathematicians, including its author A.N. Kolmogoroff. Since the second half of the 20th century, an approach has been known according to which Boolean algebra with a measure (BA) forms an adequate mathematical model of the notion called an “event set”; thus, the interrelation BA → PT is established [173–176]. Correspondingly, in abstract algebra (AA), within the framework of lattice theory (LT), Boolean algebras are considered a further development of algebraic structures with special properties, namely, the lattices. Instead of the traditional schemes of the theories' interrelations, in this work, to develop the relationship between information theory and signal processing theory, we use the following:
LT → BA → {PT → {IT ↔ SPT ← {MS + (SgT ← {AA + MA})}}}.
Undoubtedly, this scheme is simplified, because known interrelations between ab-
stract algebra, geometry, topology, mathematical analysis, and also adjacent parts
of modern mathematics are not shown.
The choice of Boolean algebra as mathematical apparatus for foundations of
both signal processing theory and syntactical aspects of information theory is not
an arbitrary one. Boolean algebra, considered as a set of the elements (in this case,
each signal is a set of the elements of its instantaneous values) possessing the certain
properties, is intended for signal space description. A measure defined upon it is
intended for the quantitative description of informational relationships between the
signals and their elements, i.e., the instantaneous values (the samples).
In this work, special attention is concentrated on the interrelation between information theory and signal processing theory (IT ↔ SPT). Unfortunately, such a direct and tangible interrelation between the classical variants of these theories is not observed. The answer to the principal question of signal processing theory (how, in fact, one should process the results of the interaction of useful and interference (noise) signals) is given not by information theory, but by applying special parts of mathematical statistics, i.e., statistical hypothesis testing and estimation theory. While investigating such an interrelation (IT ↔ SPT), it is important that information theory and signal processing theory be able to answer the interrelated questions below.
So, information theory with application to signal processing theory has to resolve
certain issues:
1.1. Information quantity measure, its properties and its relation to signal
space as a set of material carriers of information with special properties.
1.2. The relation between the notion of signal space in information theory
and the notion of signal space in signal processing theory.
1.3. Main informational relationships between the signals in signal space that
is the category of information theory.
1.4. Interrelation between potential quality indices (confining the efficiency)
of signal processing in signal space which is the category of information
theory, and the main informational relationships between the signals.
1.5. Algebraic properties of signal spaces, where the best signal processing
quality indices may be obtained by providing minimum losses of infor-
mation contained in useful signals.
1.6. Informational characteristics of communication channels built upon sig-
nal spaces with the properties mentioned in Item 1.5.
Signal processing theory with application to information theory has the following
issues:
2.1. Main informational relations between the signals in signal space that is
the category of signal processing theory.
2.2. Informational interrelation between main signal processing problems.
2.3. Interrelation between potential quality indices (confining the efficiency)
of signal processing in signal space with certain algebraic properties and
main informational relationships between the signals.
2.5. Quality indices of signal processing algorithms and units in signal space
with special algebraic properties, taking into consideration Item 1.5 (the
analysis problem).
Notion Abbreviation
Abstract algebra AA
Autocorrelation function ACF
Characteristic function CF
Cumulative distribution function CDF
Decision gate DG
Envelope computation unit ECU
Estimator formation unit EFU
Generalized Boolean algebra GBA
Hyperspectral density HSD
Information distribution density IDD
Information theory IT
Lattice theory LT
Least modules method LMM
Least squares method LSM
Linear frequency modulated signal LFM signal
Matched filtering unit MFU
Mathematical analysis MA
Mathematical statistics MS
Median filter MF
Mutual information distribution density mutual IDD
Mutual normalized function of statistical interrelationship mutual NFSI
Normalized function of statistical interrelationship NFSI
Normalized measure of statistical interrelationship NMSI
Overall quantity of information o.q.i.
Power spectral density PSD
Probabilistic measure of statistical interrelationship PMSI
Probability density function PDF
Probability theory PT
Radio frequency RF
Relative quantity of information r.q.i.
Set theory ST
Signal detection unit SDU
Signal extraction unit SEU
Signal processing theory SPT
Signal theory SgT
White Gaussian noise WGN
Notation System
All scientific research has both subject and methodological content. The latter is connected with critical reconsideration of the existing conceptual apparatus and of approaches for interpreting the phenomena of interest.
A researcher working with real world physical objects eventually must choose
a methodological basis to describe researched phenomena. This basis determines a
proper mathematical apparatus.
The choice of a correct methodological basis is the key to success and is often the reason that more useful information than expected is obtained. Mathematical principles and laws contain a great deal of hidden information accumulated through the ages, and mathematical ideas can give much more than was expected of them. That is why a person engaged in fundamental mathematical research may not foresee all the possible applications in natural science, sometimes creating a precedent in which “the most principal thing has been said by someone who does not understand it”. For example, Johannes Kepler possessed all the information necessary to formulate the universal gravitation law, but he did not do so.
It is clear that any mathematical model developed on the basis of a specific methodology may describe the real phenomenon under study with a certain accuracy.
accuracy. But it may happen that the mathematical model used until now does
not satisfy the requirements of completeness, adequacy, and internal consistency
anymore. In such a case, a researcher has two choices. The first is to explain all
known facts, all new facts, and appearing paradoxes within the Procrustean bed of
an old theory. The second option is to devise a new mathematical model to explain
all the available facts.
The modern methodology of every natural science involves its base, i.e., a methodological basis (a certain mathematical apparatus), a general system of notions and principles, and also a particular system of notions and principles of the given scientific direction. These components driving natural science research are interconnected within their disciplines and with other branches of science.
We first consider general system of notions and principles of modern research
methodology within natural sciences and then discuss a particular system of notions
of specific research directions, in this case signal processing theory and information
theory. We also explain the relationships of these subjects to various branches of
the natural sciences.
Phenomena and processes related to information and signal processing theories
are essential constituent parts of an entire physical world. The foundations of these
theories rely on the same fundamental notions and the same research techniques
utilized by all branches of science. The first step is to find out the content of
general system of notions used in research methodology of natural sciences, without
going into details of its structure, which is well stated in the proper literature.
Next, the researcher should analyze the suitability of the systems of notions that
are used in classical information theory and signal processing theory. Finally, one
should determine how both theories converge within the framework of research
methodology of natural sciences.
Scientists strive to understand the structure of the universe and the geometry and algebra of intra-atomic space. In the opinion of Paul Dirac:
The modern physical developments have required a mathematics that continually shifts its foundation and gets more abstract. Non-Euclidean geometry and noncommutative algebra, which at one time were considered to be purely fictions of the mind and pastimes of logical thinkers, have now been found to be very necessary for the description of general facts of the physical world [187].
Modern science has no definitive answers to these questions. At present, we can
express only the most general understanding. The main elements of cosmic space
are objects called straight lines. They are trajectories of light wave movements
or trajectories of movements of particles bearing light energy, i.e., photons. The
gravity field lines surrounding all masses of matter are considered rectilinear. Tra-
jectories of material particles (cosmic rays) moving freely throughout the universe
are rectilinear.
All these straight lines analyzed on Earth’s scale are considered identical but
that conclusion may not be correct. We have no reason yet to speak about the
geometry of the universe. We can speak only about the geometries of light rays and
gravitation fields, etc. It is quite possible that these geometries can be absolutely
different, and the issue becomes even more complicated because the concepts of
general relativity theory, electromagnetic waves, and gravity fields are dependent
on each other.
The violation of the rectilinearity of light waves within gravity fields was estab-
lished theoretically and confirmed by observations. Light rays passing a heavy body,
for example, near the Sun, are distorted. The geometry of light rays in space is com-
plicated because huge masses of matter are distributed nonuniformly throughout
the universe.
General relativity theory revealed the interdependence of gravity field space,
electromagnetic field space, and time. These objects define four-dimensional space
whose laws have been explained by modern physicists, astronomers, and mathe-
maticians.
At present, we can say only that the properties of these objects are not described
by Euclidean geometry.
The geometry of the intra-atomic world is more ambiguous. In cosmic space,
we can indicate straight lines in a certain sense, but it is impossible to do this with
atomic nuclei. We have little to say about the geometry of intra-atomic space but
we can certainly say there is no Euclidean geometry there.
Although the word information has served as a catch-all for a long time, its use
in the middle of the 20th century evolved to describe a specific concept that plays a
critical role in all fields of science. The application of the information approach has expanded greatly since then, and Claude Shannon reminded scientists working in social and humanitarian disciplines of the need to keep their houses in first-class order [188]. At present, there are many definitions of this notion. The choice of definition depends substantially on the directions, goals, techniques, and available technologies of research in each individual case.
After the appearance of the works of Claude Shannon [51] and Norbert Wiener
[85], interest in information theory and its utility in the fields of physics, biology,
psychology, and other hard and soft science fields increased. Solomon Kullback connects this also with Wiener's statement that, in statistical practice, his (Wiener's) definition of information could be used instead of Fisher's [89]. We should note Leonard Savage's remark that, “The ideas of Shannon and Wiener, though concerned with probability, seem rather far from statistics. It is, therefore, something of an accident that the term ‘information’ coined by them should be not altogether inappropriate in statistics”.
The main thesis of Wiener’s book titled Cybernetics, or Control and Commu-
nication in the Animal and the Machine [85] was the similarity of control and com-
munications processes in machines, living organisms, and societies. These processes
encompass transmission, storage, and processing of information (signals carrying
messages).
One of the most striking examples is the process of genetic information transmission, which plays a vital role in all forms of life. About 2 million species of flora and fauna inhabit the Earth. Transmission of genetic information determines the development of all organisms from single cells to their adult forms. Transmitted genetic data governs species structures and individual features for both present and future generations. All this information is preserved within the small volume of an elementary cell nucleus and is transmitted through intricate ways to all the other cells originating from a given one by cell fission; this information is also preserved during the further reproduction of the next generations of similar species.
Every field of natural science and technology relies on information transmission, receiving, and transformation. Visible light provides many living creatures with up to 90% of their data concerning the surrounding world; electromagnetic waves and fluxes of particles carry an imprint of processes taking place in remote parts of the universe. All living organisms depend on information in their relationships with each other, with other living things, and with inanimate objects. Physicists, philosophers, and all others who studied all aspects of our existence and our world depended on the availability of information and information quantity long before Shannon and Wiener defined those terms.
When information theory developed in the works of mathematicians, it dropped out of sight of physics and the other natural sciences, and some scientists opined that information was an intangible notion that had nothing to do with energy transfer and other physical phenomena. The reasons for such misunderstandings can be explained by some peculiarities of the origin of information theory and of its further development and application. The appearance of the mathematical theory of communication was stimulated by the achievements of electronics, which in those times was based on classical electrodynamics with its inherent ideas of continuity and absolute simultaneous measurability of all the parameters of physical objects. As communication technologies evolved, information theory was used to solve the problems of both communication channel optimization and encoding method optimization, and it transformed into a vast independent part of mathematics, having
pedia Britannica, if the circuit noise did not hold us to an accuracy of measurement
of perhaps one part in ten thousand”.
Another important issue is the specificity of measurement processes that have no analogues in classical physics. According to quantum theory, probabilistic prediction of the measurement results of signal receiving cannot be determined by a signal state alone; a list of measurable parameters should be specified. Exact measurement of some parameters usually precludes reliable estimates of other parameters. For example, when aspiring to measure the velocity of an elementary particle in quantum physics, or of a target in radiolocation or hydrolocation, the ability to obtain information about the object's position in space is limited.
The essential peculiarity of some measurement processes, especially those based upon indirect methods of measurement, is the nonlinearity of the measurement space (or sample space). The resulting problem of optimizing data processing methods is not trivial and, in general, has not been resolved satisfactorily as of this writing.
The development of electronic systems that serve various functions and are characterized by the use of the microwave and optical ranges of the electromagnetic spectrum requires study of the influence of physical phenomena on the processes of signal transmitting, receiving, and processing.
The application of physical methods to information theory involves a reverse process: the use of informational concepts to solve some key problems of theoretical physics.
An information interchange is an example of a process developing from the past into the future; one can say that time “has a direction”. A listener cannot understand a compact disk spinning in the reverse direction. Running film strips in reverse was widely used to create surrealistic effects in the early days of cinematography. The world may look absurd when the courses of events are reversed.
The laws of classical mechanics discovered by Isaac Newton are reversible. In
the equations time can figure with either positive or negative signs. Thus, time
can be considered reversible or irreversible. Time direction plays an important role
in research on life processes, meteorology, thermodynamics, quantum physics, and
other scientific areas.
The concept of an obvious irreversibility of time has found the most clear-cut
formulation in thermodynamics in the form of the Second Law, according to which
a quantitative characteristic called entropy never decreases. Originally, thermodynamics was used to study the properties of gases, i.e., large ensembles of particles in persistent movement and interaction with each other. We can obtain only partial data about such ensembles. Although Newton's laws are applicable to every single particle, it is impossible to observe each of them, or to distinguish one particle from another. Their characteristics cannot be determined exactly; they can be studied only on the basis of probabilistic relationships.
One can obtain certain data concerning the macroscopic properties of such an ensemble of particles, such as the number of degrees of freedom (or dimension), pressure, volume, temperature, and energy. Some properties can be represented by statistical distributions, for instance, the particles' velocity distribution. One can also observe some microscopic movements, but it is impossible to obtain complete data about every particle.
methods of continuous signal processing without information losses will create numerous problems.
One theoretical disadvantage of the sampling theorem is its orientation toward deterministic signals with known parameters, which cannot carry information. Nikolai Zheleznov [198] suggested applying a sampling theorem interpretation to stochastic signals. This idea considers the signals as nonstationary stochastic processes with certain power spectral densities. Zheleznov also proposed using the boundedness of the correlation interval and its smallness relative to the signal duration, the correlation interval being taken equal to the sampling interval. That is, applying the sampling theorem to stochastic signals rests on the requirement that neighboring samples be uncorrelated when a continuous signal is transformed into a discrete one.
The main feature of Zheleznov's variant of the sampling theorem is the statement that the sequence of discrete samples in the time domain can provide only an approximation of the initial signal. Meanwhile, the classical formulation of the sampling theorem claims absolute accuracy of signal representation.
Unfortunately, this interpretation also has disadvantages. The correlation concept used here describes only linear statistical relations and thus limits the applicability of this interpretation to non-Gaussian random processes (signals). Even if one assumes the processed signal is completely deterministic and is described by some analytic function (even a very complicated one), the use of the sampling theorem still creates theoretical and practical difficulties.
First, a real deterministic signal has a finite duration T. In the frequency domain, such a signal has an unbounded spectrum. However, due to the properties of real signal sources and the boundedness of real channel passbands, one can consider the signal spectrum to be bounded by some limiting frequency F. The bound is usually defined on the basis of an energetic criterion, i.e., the spectrum is confined within the frequency interval from 0 to F where most of the signal energy is concentrated.
This spectrum boundedness leads to a loss of some information. As a result, restoration of a band-limited signal from its samples in the time domain, based on the sampling theorem constrained by limits on the signal spectrum, can be only approximate. Errors also arise from the restriction to the finite number of samples lying within the time interval T (equal to 2FT according to the theorem). These errors appear because the infinite number of expansion functions corresponding to samples outside the interval T is neglected.
Second, the restoration procedure causes another error, arising from the impossibility of creating pulses of infinitesimal duration and transmitting them through real communication channels. The maximum output signal corresponding to the reaction of an ideal low-pass filter to a delta-function action has a delay time that tends to infinity. For a finite time T, every sample function, and the sums of sample functions that are copies of the initial continuous signals, will be formed only approximately; the smaller T, the rougher the approximation.
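The scale of these truncation errors can be probed numerically. Below is a minimal sketch (ours, not the book's; the test signal, bandwidth F, and interval T are arbitrary assumptions) that restores a band-limited signal by the cardinal (sinc) series, once from only the roughly 2FT samples inside [0, T] and once from a series extended well beyond the interval.

```python
import numpy as np

# Illustrative sketch: error of restoring a band-limited signal from the
# finite set of samples inside [0, T] versus a series extended beyond T.
F = 4.0                 # assumed one-sided bandwidth, Hz
T = 2.0                 # assumed observation interval, s
dt = 1.0 / (2 * F)      # Nyquist sampling step
t = np.linspace(0.0, T, 2000)

def s(t):
    # toy test signal with spectral content below F (an assumption)
    return np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.cos(2 * np.pi * 3.1 * t)

def restore(t, n_lo, n_hi):
    # truncated cardinal series over sample indices n_lo..n_hi
    n = np.arange(n_lo, n_hi + 1)
    return (s(n * dt)[None, :] * np.sinc(2 * F * t[:, None] - n[None, :])).sum(axis=1)

inside = restore(t, 0, int(2 * F * T))             # only the ~2FT samples within T
extended = restore(t, -300, int(2 * F * T) + 300)  # plus samples outside T
print("rms error, samples within T only:", np.sqrt(np.mean((s(t) - inside) ** 2)))
print("rms error, extended series:      ", np.sqrt(np.mean((s(t) - extended) ** 2)))
```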
Nevertheless, some formulations of the sampling theorem are free of these disadvantages, as will be shown in Chapter 4.
The first steps in developing the notion of information quantity were undertaken
H = N log q.
A number of authors obtained this equation by various methods. Shannon [51] and
Wiener [85] based their work on a statistical approach to information transmission.
However, statistical approaches vary. In the approach to information quantity
above, we dealt with average values, not information transmitted by a single symbol.
Thus, Equation (1.3.1) represents an average value. It can be rewritten as:

$$H = -\overline{\log p_i}.$$
If the quantization step ∆x of the random variable ξ is small enough against its range, the probability that the random variable will take its values within the i-th quantization interval is approximately equal to:

p_i = p(x_i)∆x,

where p(x_i) is the PDF value at the point x_i. Substituting the value p_i into Equation (1.3.1), we have:
$$H(x) = \lim_{\Delta x \to 0}\Big\{-\sum_i p(x_i)\,\Delta x\,\log[\,p(x_i)\,\Delta x\,]\Big\} = \lim_{\Delta x \to 0}\Big\{-\sum_i [\,p(x_i)\log p(x_i)\,]\,\Delta x - \log \Delta x \sum_i p(x_i)\,\Delta x\Big\}.$$

Since

$$\lim_{\Delta x \to 0}\Big\{-\sum_i [\,p(x_i)\log p(x_i)\,]\,\Delta x\Big\} = -\int_{-\infty}^{\infty} p(x)\log p(x)\,dx,$$

we obtain:

$$H(x) = -\int_{-\infty}^{\infty} p(x)\log p(x)\,dx - \lim_{\Delta x \to 0}(\log \Delta x). \qquad (1.3.2)$$
Thus, to define the entropy of a continuous random variable, Shannon's approach [51] relies on a limit passage from discrete random variables to continuous ones while ignoring the infinity: the entropy of a continuous random variable, being devoid of sense, is rejected and replaced by differential entropy. Shannon's preference for discrete communication channels does not look logical in light of recent developments in communications technology. Shannon's information quantity, in the transition to continuous noiseless channels, is coupled too closely with thermodynamic entropy, so the problems arising from both concepts have the same characteristics caused by the same reason.
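A small numeric sketch (our illustration, not the book's; a standard Gaussian density is an assumed example) makes the limit passage in (1.3.2) tangible: the entropy of the quantized variable behaves as the differential entropy minus log ∆x, and therefore diverges as ∆x → 0.

```python
import numpy as np

# Entropy of a quantized Gaussian variable versus h - log(dx), cf. (1.3.2).
def quantized_entropy(dx, sigma=1.0):
    x = np.arange(-10 * sigma, 10 * sigma, dx)
    p = dx * np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

h = 0.5 * np.log(2 * np.pi * np.e)   # differential entropy of N(0, 1), nats
for dx in (0.5, 0.1, 0.01, 0.001):
    print(dx, quantized_entropy(dx), h - np.log(dx))  # two columns agree and grow
```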
The additive noise that limits the signal receiving accuracy according to the clas-
sical information theory imparts a finite unambiguous sense to information quantity.
$$C = F \log\Big(1 + \frac{P}{N}\Big), \qquad (1.3.4)$$

where F is the channel bandwidth, P is the signal power, and N is the noise power.
Analysis of this expression leads to the conclusion that any quantity of information could be transmitted per second by indefinitely weak signals in the absence of noise. This result is based on the assumption that at low levels of interference (noise), one can distinguish two signals indefinitely close to each other with any reliability, causing an unlimited increase in channel capacity as the noise power decreases to zero. This assumption seems absurd from a theoretical view, because Nature's fundamental laws limit measurement accuracy, and they are insuperable by any technological methods and means.
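The divergence is immediate from Equation (1.3.4); the short sketch below (an illustration with an arbitrarily assumed bandwidth and signal power) shows the capacity growing without bound as the noise power tends to zero.

```python
import numpy as np

# Capacity (1.3.4) with fixed signal power and vanishing noise power.
F, P = 1.0e3, 1.0   # assumed bandwidth (Hz) and signal power (W)
for N in (1.0, 1e-3, 1e-6, 1e-9, 1e-12):
    print(f"N = {N:.0e}   C = {F * np.log2(1 + P / N):.1f} bit/s")
```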
Despite the attempts to eliminate infinity during the transition from discrete random variable entropy to continuous random variable entropy, Shannon's theory runs into the same problem when continuous channel capacity is defined based on differential entropy. The indispensable condition is the presence of noise in the channel, because the information quantity transmitted by a signal per time unit tends to infinity in the absence of noise.
Another difficulty with classical information theory arises because differential entropy (1.3.3), in contrast to Equation (1.3.1), is not preserved under bijective mappings of stochastic signals; this situation can produce paradoxical results. For example, the process y(t) obtained from an initial signal x(t) by its amplification k times (k > 1) possesses greater differential entropy than the original signal x(t) in the input of the amplifier. This, of course, does not mean that the signal y(t) = kx(t) carries more information than the original one x(t). Note an important circumstance: Shannon's theory excludes the notion of a quantity of absolute information generally contained in a signal.
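The amplification statement is verified by the standard change-of-variables argument (a textbook computation, not specific to this book): for the amplified signal the density contracts by the factor k, and the differential entropy gains an additive term.

```latex
% For y = kx, k > 1:  p_y(y) = p_x(y/k)/k, hence
h(y) = -\int_{-\infty}^{\infty} p_y(y) \log p_y(y)\, dy
     = -\int_{-\infty}^{\infty} p_x(x) \log \frac{p_x(x)}{k}\, dx
     = h(x) + \log k > h(x).
```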
The question of the quantity of information contained in a signal (stochastic
process) x(t) has no place in Shannon’s theory. In this theory, the notion of infor-
mation quantity makes sense only with respect to a pair of signals. In that case, the
appropriate question is: how much information does the signal y (t) contain with
respect to the signal x(t)? If the signals are Gaussian with correlation coefficient
ρxy , quantity of mutual information I [x(t), y (t)] contained in the signal y (t) with
respect to the signal x(t), assuming their linear relation y (t) = kx(t), is equal to
infinity:
$$I[x(t), y(t)] = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} p(x, y)\,\log\frac{p(x, y)}{p(x)\,p(y)}\;dx\,dy = -\log\sqrt{1 - \rho_{xy}^{2}} = \infty.$$
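Numerically (a hedged illustration; the correlation values are arbitrary), the quantity −log√(1 − ρ²ₓᵧ) = −½ log(1 − ρ²ₓᵧ) indeed grows without bound as the linear relation becomes exact, ρ → 1:

```python
import numpy as np

# Gaussian mutual information as a function of the correlation coefficient.
for rho in (0.9, 0.99, 0.9999, 1 - 1e-12):
    print(rho, -0.5 * np.log(1 - rho**2))   # diverges as rho -> 1
```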
Evidently, an answer to the question about the quantity of mutual information says too little about the quantity of absolute information. The question about the quantity of absolute information can be formulated more generally and neutrally.
Let y(t) be the result of a nonlinear (in the general case) one-to-one transformation of a Gaussian stochastic signal x(t):

y(t) = f[x(t)]. (1.3.5)
The question is: will the information contained in the signals x(t) and y(t) be the same if their probabilistic-statistical characteristics differ? In the case above, their probability density functions, autocorrelation functions, and power spectral densities could differ. For example, if the result of the nonlinear transformation (1.3.5) of a quasi-white Gaussian stochastic process x(t) with power spectral density width Fx is the stochastic process y(t) with power spectral density width Fy, Fy > Fx, does the resulting stochastic process y(t) carry more information than its original in the input of the transformer? It is hard to believe that classical information theory can provide a perspicuous answer to this question.
The so-called cryptographic encoding paradox relates to this question and can be explained as follows. According to Shannon [72], the information quantities obtained from two independent sources should be added. The cryptographic encoder must provide statistical independence between the input x(t) and the output y(t) signals:

p(x, y) = p(x)p(y),

where p(x, y) and p(x), p(y) are the joint and univariate probability density functions, respectively.
We can consider the signals at the input x(t) and the output y(t) of a cryptographic encoder as independent message sources. The general quantity of information I obtained from the two of them (Ix, Iy) should then equal their sum: I = Ix + Iy. However, under any one-to-one transformation of the form (1.3.5), the identity I = Ix = Iy must hold, inasmuch as both x(t) and y(t) carry the same information.
The information-theoretic conclusion that Gaussian noise possesses the strongest interference effect (maximum entropy) among all types of noise with limited average power is closely connected with the noninvariance property of differential entropy [Equation (1.3.3)] with respect to a group of signal mappings. This statement contradicts known results, for example, of signal detection theory and estimation theory, which provide examples of interference (noise) exerting a stronger influence on signal processing systems than Gaussian interference (noise).
Shannon's measure of information quantity, like Hartley's measure, cannot pretend to cover all factors determining the “uncertainty of outcome” in an arbitrary sense. For example, these measures do not account for the time aspects of signals. Entropy (1.3.1) is defined by the probabilities pi of the various outcomes. It does not depend on the nature of the outcomes, i.e., on whether they are close or distant [199]. The uncertainty degree will be the same for two discrete random variables ξ and η characterized by identical probability distributions pξ(x) and pη(y):
$$p_\xi(x) = \sum_i p_i^{\xi}\,\delta(x - x_i), \qquad p_\eta(y) = \sum_i p_i^{\eta}\,\delta(y - y_i), \qquad p_i^{\xi} = p_i^{\eta},$$

where $m_\xi = \int_{-\infty}^{\infty} x\,p_\xi(x)\,dx$ and $m_\eta = \int_{-\infty}^{\infty} y\,p_\eta(y)\,dy$ denote their mean values.
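A minimal numeric sketch of this point (the outcome values are our assumed example): two variables with the same probabilities but differently spaced outcomes get exactly the same entropy.

```python
import numpy as np

# Shannon entropy depends on the probabilities p_i only, not on the outcomes.
p = np.array([0.5, 0.3, 0.2])
xi_outcomes  = [0.0, 0.1, 0.2]       # closely spaced values of xi
eta_outcomes = [0.0, 100.0, 1.0e6]   # widely spaced values of eta
H = -(p * np.log2(p)).sum()
print(H)   # one and the same value serves both xi and eta
```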
Applying these concepts to the messages (signals) means that under Shannon’s
approach, one should consider so-called conditional entropy to account for the sta-
tistical relationships between single fragments of the messages.
If chaos is interpreted as the absence of statistical coupling between a time series of events, then Shannon's entropy (1.3.1) is an uncertainty measure in a timeless space or in a space with a time disorder. Jonathan Swift described the situation in Gulliver's Travels. The protagonist visits the Laputian Academy of Lagado and encounters a wonder machine with which “the most ignorant person, at a reasonable charge, and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study.” The sentences in such books are formed by random combinations of “particles, nouns, and verbs, and other parts of speech.” Every press of the machine's handle produces a new phrase with the help of “all the words of their language, in their several moods, tenses, and declensions, but without any order.”
Émile Borel described an experiment involving a monkey and a typewriter. A monkey randomly pressing the keys could create “texts” with maximal information content according to Shannon. It seems appropriate here to repeat Ilya Prigogine's statement about the “impossibility of surrounding world description without constructive role of time” [200]. We contend that neglecting the time component (or the statistical relations between the instantaneous values of the signals) is unsatisfactory when constructing the foundations of signal theory and information theory.
The paradox of subjective perception of a message by sender and addressee is specific to statistical information theory. The message M represents a deterministic set of elements (signs, symbols, etc.) to the sender. Thus, from the standpoint of the sender, the message contains a quantity of information I(M) equal to zero. For the addressee, the same message M* is a probabilistic-statistical totality of elements. Therefore, from the standpoint of the addressee, this message contains a quantity of information I(M*) that does not equal zero. A paradoxical situation occurs: the sender knowingly sends a message that contains a quantity of information equal to zero, while the receiver is sure the message contains real content.
Three considerations for constructing the foundations of information theory are:
1. Accepting the sender’s view that the message is completely known and its
elements form a deterministic totality
2. Considering the view of the addressee — the message is unknown and its
elements form a probabilistic-statistical totality
3. As an ideal observer, attempting to unify the views of the sender and the
addressee
The authors of the sampling theory accept the view of the sender, who knows the content of the message sent, and they prefer to work with deterministic functions. The creators of statistical information theory considered the view of the addressee indisputable. Researchers in semantic information theory tried to improve the situation by treating the message as an invariant of information [91], [201]. From this view, the quantity of semantic information transmitted by a message has to be the same for both the sender and the addressee.
Analogously, the quantity of syntactical information I(M) contained in the deterministic (for the sender) message M must be equal to the quantity of syntactical information I(M*) contained in the received message M*, which represents for the addressee a probabilistic-statistical totality of elements: I(M) = I(M*).
This circumstance demands an appropriate elucidation of information theory to ensure that a measure of information quantity combines both the probabilistic and the deterministic approaches.
The author sees a resolution of this predicament in an approach that does not require the identification of a measure of information quantity with physical entropy. Its essence lies in representing the signals carrying information by a physical system with a set of probabilistic states, and not by abstract random variables and a set of numbers.
The informational structure of a random function (stochastic signal) represents
an internal organization of the system (the signal) that determines robust relations
between its elements. The totality of the elements of informational structure of the
signal is the set of the elements of metric signal space where the metric between
any two elements is invariant with respect to a group of signal mappings.
In summary, these considerations can be divided into two groups. The first
group concerns signal space concept, its current state (Group 1 below) and its
suggested development (Group 1A below). The second group covers issues related
to a measure of information quantity, its current state (Group 2 below) and its
suggested development (Group 2A below).
Group 1:
1. Linear spaces with scalar product (Euclidean spaces) serve as the basis of
the modern variant of signal theory construction.
2. In classical signal theory, any signal, whether stochastic (carrying a certain
quantity of information) or deterministic (containing no information), is
represented by an element of linear space with scalar product (a vector).
Regardless of their informational properties, all signals are represented
equally in the signal space.
3. Using linear space with scalar product to describe the real signals im-
poses strong constraints on signal properties and signal processing. The
probabilistic properties of signals are described, mainly, by Gaussian dis-
tribution, and optimal processing of such signals is confined within linear
Group 2:
Axiom 1.3.1. The main axiom of signal processing theory. The information quantity I[y] contained in the signal y in the output of an arbitrary processing unit f[∗] does not exceed the information quantity I[x] contained in the initial signal x before processing:

I[y] ≤ I[x]; y = f[x], (1.3.6)

and the inequality (1.3.6) turns into an identity under the condition that the signal processing realizes a one-to-one mapping of the signal:
$$x \underset{f^{-1}}{\overset{f}{\rightleftarrows}} y; \qquad I[x] = I[y]. \qquad (1.3.6a)$$
The identity (1.3.6a) will be formulated and proved in the form of corresponding theorems in Chapters 2, 3, and 4. Note that some signal mappings not related to one-to-one transformations also provide the identity (1.3.6a). We consider some of these issues in Chapter 4.
W.G. Tuller [10] formulated the idea of comparing information quantities in the
input and output of a processing unit in the form of (1.3.6).
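As a stand-in illustration of inequality (1.3.6) (using Shannon entropy as the information measure, which is not the measure developed in this book), one can check that a deterministic processing unit can only merge signal values, never split them, so the entropy at the output never exceeds that at the input, with equality for a one-to-one mapping:

```python
import numpy as np
from collections import Counter

# H(f(X)) <= H(X) for deterministic f; equality when f is one-to-one.
def entropy(samples):
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
x = rng.integers(0, 8, size=100_000).tolist()
print(entropy(x))                       # ~3 bits in the input signal
print(entropy([v // 2 for v in x]))     # many-to-one unit: information is lost
print(entropy([5 * v + 1 for v in x]))  # one-to-one unit: information preserved
```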
The methodological approach applied by Shannon to the construction of information theory is based on elementary techniques and methods of probability theory. It ignores the basis of probability theory built on Boolean algebra with a measure. Thus, we should expect Boolean algebra with a measure to be used to construct the foundations of signal processing theory and information theory. This approach could be a unifying factor allowing us to provide the unity of the theoretical foundations of the aforementioned directions of mathematical science, imparting to them force, universality, and commonality.
Furthermore, the physical character of signal processing theory and its applied
orientation may stimulate the development of information theory, probability the-
ory, mathematical statistics, and other mathematical directions and application of
their research methods to all branches of natural science.
The principal concept of the work is constructing the unified foundations of signal processing theory and information theory on the basis of the notion of a signal space built upon generalized Boolean algebra with a measure. The latter induces a metric in this space and is, simultaneously, a measure of the information quantity contained in the signals. This consideration provides the unity of the theoretical foundation of the interrelated directions of mathematical science: probability theory, signal processing theory, and information theory, based upon the unified methodological basis of generalized Boolean algebra with a measure.
2
Information Carrier Space Built upon Generalized
Boolean Algebra with a Measure
In Chapter 1, the requirements for signal space, i.e., the space of material carriers
of information and the requirements for a measure of information quantity were
formulated. In this chapter, we show that the space built upon Boolean algebra with
a measure meets all the properties of signal space and a measure of information
quantity in full.
The difficulties of attempting to define general scientific notions are well known. Thus, the fundamental notion of a “set” has no direct definition, but this fact does not interfere with the study of mathematics; it is enough to know the main theses of set theory.
In this chapter, our attention will be concentrated on the fundamental, but more vague, notion of “information”. Even a very superficial analysis reveals considerable difficulty in defining information. Undoubtedly, science needs such a definition. Signal processing theory and information theory require distinctly formulated axioms and theorems describing, on the one hand, the properties of a space of material carriers of information (signal space) and, on the other hand, the properties of information itself, its measure, and the peculiarities of processing its carriers (signals).
Boolean algebra with a measure and metric space built upon Boolean algebra
and induced by this measure are well investigated [173–176, 202–213]. Main results
on generalized Boolean algebras and rings are contained in several works [214–218].
Study of interrelations between lattices and geometries begins from [219], [220],
[221]. The papers [204] and [222] are the first to describe geometric properties of
Boolean ring and Boolean algebra respectively.
Analysis of the development of signal processing theory and information theory suggests their independent and isolated existence along with weak interactions. Often, one can gain the impression that information carriers (signals) and the principles of their processing exist per se, while the transmitted information and the approaches intended to describe a wide range of informational processes exist apart from the information carriers. This is shown by the fact that the founders of signal theory, signal processing theory, and information theory considered a signal space irrespective of the information carried by the signals [122], [127], [143], and, on the other hand, a measure of information quantity regardless of its material carriers (signals) [50], [51], [85]. This contradiction inevitably raises the question of unifying the mathematical foundations of both signal processing theory and information theory. It is shown in this chapter that this theoretical difficulty can be overcome.
(a + b) + c = a + (b + c),   (a · b) · c = a · (b · c)   (associativity)
a + b = b + a,   a · b = b · a   (commutativity)
a + a = a,   a · a = a   (idempotency)
(a + b) · a = a,   (a · b) + a = a   (absorption)
Over a lattice, a null element O and a unit element I (also called zero and unity, respectively) can be defined:
a + O = a, a · O = O; a + I = I, a · I = a.
A lattice L is called a lattice with relative complements if, for any element a of any interval [b, c], an element d ∈ [b, c] can be found such that a + d = c and a · d = b. The element d is called a relative complement of the element a in the interval [b, c]. A lattice L with zero O and unity I is called a complemented lattice if every element has a relative complement in the interval [O, I]. Relative complements in the interval [O, I] are simply called complements.
In a distributive lattice L with zero O and unity I, every element a possessing the complement a′ also has a relative complement d = (a′ + b)c in any interval [b, c], a ∈ [b, c].
A distributive lattice L with null element O and relative complements is called a generalized Boolean lattice BL, in which the relative complement of the element a in the interval [O, a + b] is called the difference of the elements b and a and is denoted by b − a. A BL with unit element I is called a Boolean lattice BL₀. One can also say that a Boolean lattice BL₀ is a distributive lattice with complements, i.e., a distributive lattice L with zero O and unity I.
A generalized Boolean lattice BL, considered as a universal algebra BL = (X, T_BL) with the signature T_BL = (+, ·, −, O) of the type (2, 2, 2, 0), is called a generalized Boolean algebra B = (X, T_B), T_B ≡ T_BL. Let a∆b = (a + b) − ab in BL; then we obtain a generalized Boolean ring BR = (X, T_BR) with the signature T_BR = (∆, ·, O) of the type (2, 2, 0). Conversely, any generalized Boolean ring BR = (X, T_BR) with the signature T_BR = (∆, ·, O) of the type (2, 2, 0) can be turned into a generalized Boolean lattice BL = (X, T_BL) with the signature T_BL = (+, ·, −, O) of the type (2, 2, 2, 0), assuming a + b = a∆b∆ab.
A generalized Boolean algebra B can be defined by the following system of identities:
(a + b) + c = a + (b + c),   (a · b) · c = a · (b · c)   (associativity)
a + b = b + a,   a · b = b · a   (commutativity)
a + a = a,   a · a = a   (idempotency)
(a + b) · a = a,   (a · b) + a = a   (absorption)
a · (b + c) = ab + ac,   a + bc = (a + b)(a + c)   (distributivity)
a · (b − a) = O,   a + (b − a) = a + b.
A Boolean lattice BL₀, considered as a universal algebra BL₀ = (X, T_{BL₀}) with the signature T_{BL₀} = (+, ·, ′, O, I) of the type (2, 2, 1, 0, 0), is called a Boolean algebra B₀ = (X, T_{B₀}), T_{B₀} ≡ T_{BL₀}.
A Boolean algebra B₀ can be defined by the following system of identities:
(a + b) + c = a + (b + c),   (a · b) · c = a · (b · c)   (associativity)
a + b = b + a,   a · b = b · a   (commutativity)
a + a = a,   a · a = a   (idempotency)
(a + b)a = a,   (a · b) + a = a   (absorption)
a(b + c) = ab + ac,   a + bc = (a + b)(a + c)   (distributivity)
a + O = a,   a · I = a;
a · a′ = O,   a + a′ = I.
There are supplementary operations in Boolean algebra derived from the main ones. The most important are the difference a − b = ab′ and the symmetric difference a∆b = (a − b) + (b − a).
Let a∆b = ab′ + a′b in a Boolean algebra B₀; we obtain a Boolean ring BR₀ with the signature (∆, ·, O, I) of the type (2, 2, 0, 0). Conversely, any BR₀ with the signature (∆, ·, O, I) of the type (2, 2, 0, 0) can be transformed into a Boolean algebra B₀ with the signature (+, ·, ′, O, I) of the type (2, 2, 1, 0, 0), assuming a + b = a∆b∆ab, a′ = a∆I. Thus, Stone's duality between Boolean algebras and Boolean rings is established.
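All of the above identities can be checked in the simplest finite model, the algebra of all subsets of a finite set with union as +, intersection as ·, set difference as −, and the empty set as O (a sketch of ours, not a construction from the book):

```python
from itertools import product

# Finite model of a generalized Boolean algebra: subsets of a finite set.
elements = [frozenset(s) for s in ({0, 1}, {1, 2, 3}, {3, 4}, set())]

def delta(a, b):
    # symmetric difference a∆b = (a + b) - ab
    return (a | b) - (a & b)

for a, b, c in product(elements, repeat=3):
    assert (a | b) | c == a | (b | c) and (a & b) & c == a & (b & c)  # associativity
    assert a | b == b | a and a & b == b & a                          # commutativity
    assert a | a == a and a & a == a                                  # idempotency
    assert (a | b) & a == a and (a & b) | a == a                      # absorption
    assert a & (b | c) == (a & b) | (a & c)                           # distributivity
    assert a & (b - a) == frozenset() and a | (b - a) == a | b        # difference
    assert a | b == delta(delta(a, b), a & b)                         # ring <-> lattice
print("all identities hold in the finite model")
```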
A finitely additive measure on a generalized Boolean algebra B is a finite real function m on B satisfying the following conditions:
Definition 2.1.1. Information carrier space Ω is a set of the elements {A, B, . . .}:
A, B, . . . ⊂ Ω called information carriers, that possesses the following properties:
XY = O, (m(XY ) = 0).
AB ⊥ A′ ⊥ B′ ⊥ AB ⊥ A∆B,
FIGURE 2.2.1 Hexahedron built upon a set of all subsets of element A + B. FIGURE 2.2.2 Tetrahedron built upon elements O, A, B, C.
O (0, 0, 0), AB (0, 0, m(AB)), A′ (m(A′), 0, 0), B′ (0, m(B′), 0);

$$\rho(A, B) = |x_1^A - x_1^B| + |x_2^A - x_2^B| + |x_3^A - x_3^B|. \qquad (2.2.4)$$
It is obvious that ρ(A, B) ≥ d(A, B). The sides of the triangle ∆OAB in the space Ω, according to Equation (2.2.4), are equal to:
For an arbitrary triplet of the elements A, B, C and the null element O of the space Ω, the tetrahedron metric relationships hold (see Fig. 2.2.2):
Definition 2.2.1. Line l in the metric space Ω with metric (2.2.1) is a set con-
taining at least three elements A, B, X: A, B, X ∈ l ⊂ Ω, if the metric identity
holds:
ρ(A, B ) = ρ(A, X ) + ρ(X, B ). (2.2.6)
Lemma 2.2.1. For an arbitrary pair of the elements A, B of metric space Ω, the
elements A, X = A + B, B lie on the same line, so that the element X = A + B is
situated between the elements A and B:
Lemma 2.2.2. For an arbitrary pair of the elements A, B of metric space Ω, the
elements A, Y = A · B, B lie on the same line, so that the element Y = A · B is
situated between the elements A and B:
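Both betweenness claims are easy to test numerically in a finite model, taking the measure m to be set cardinality (our assumption for illustration) and ρ(A, B) = m(A∆B):

```python
import random

# Betweenness of A + B and A·B on the line through A and B (Lemmas 2.2.1, 2.2.2).
random.seed(1)
rho = lambda a, b: len(a ^ b)   # m = cardinality, rho(A, B) = m(A∆B)

for _ in range(1000):
    A = frozenset(random.sample(range(10), random.randint(0, 10)))
    B = frozenset(random.sample(range(10), random.randint(0, 10)))
    assert rho(A, B) == rho(A, A | B) + rho(A | B, B)   # X = A + B lies between
    assert rho(A, B) == rho(A, A & B) + rho(A & B, B)   # Y = A·B lies between
print("betweenness verified on random pairs")
```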
For the vertices of the hexahedron built upon the points O, AB, A′, B′, A, B, A∆B, A + B, the following relationships of belonging to the same line lij in Ω hold:
1. AB, B, A + B, A ∈ l1
2. O, B′, A∆B, A′ ∈ l2
3. O, AB, A, A′ ∈ l3
4. O, AB, B, B′ ∈ l4
5. A, A′, A∆B, A + B ∈ l5
6. B, B′, A∆B, A + B ∈ l6
7. O, AB, A + B, A∆B ∈ l7
8. A′, A, B′, B ∈ l8
9. O, A, A + B, B′ ∈ l9
10. AB, B′, A∆B, A′ ∈ l10
11. O, B, A + B, A′ ∈ l11
12. AB, B′, A∆B, A ∈ l12
Therefore, line l1 is identical to the lines l3, . . . , l8, l9, l11:
⇒ AB, B, A + B, A ∈ c1;
⇒ O, B′, A∆B, A′ ∈ c2;
⇒ O, AB, B, B′ ∈ c4.
Q1, Xα, Q2 ∈ [Q1, Q2] ⇒ Q1, Xα, Q2 ∈ l ⇔ Q1 ≺ Xα ≺ Q2.
Q1 < Xα ⇒ Q1 = Q1 · Xα; Xα < Q2 ⇒ Xα = Xα · Q2,
where ρ(Q1, Q2) = m(Q1∆Q2) is the metric between the elements Q1, Q2 in the space Ω.
Definition 2.2.3. The mapping ϕ associating every element α ∈ Φ with the el-
ement Aα ∈ Ω is called the indexed set {Aα }α∈Φ on generalized Boolean algebra
B(Ω) with a measure m: ϕ : α → Aα .
Definition 2.2.4. If the indexed set A = {Aα }α∈Φ is a subset of generalized
Boolean algebra B(Ω), whose elements are connected by a linear order, i.e., for an
arbitrary pair of the elements Aα , Aβ ∈ A, α ≤ β, the inequality Aα ≤ Aβ holds,
then the set A is a linearly ordered indexed set or indexed chain.
Theorem 2.2.1. For the elements {Aj}, j = 0, 1, . . . , n of a linearly ordered indexed set A = {Aj}, A ⊂ Ω, defined on generalized Boolean algebra B(Ω) with a measure m, the metric identity holds:

$$m(A_0 \Delta A_n) = \sum_{j=1}^{n} m(A_{j-1} \Delta A_j). \qquad (2.2.8)$$
Proof. For the elements of a linearly ordered set A = {Aj} defined on generalized Boolean algebra B(Ω) with a measure m, the implication A_{j−1} ≤ A_j ⇔ A_{j−1} = A_{j−1} · A_j and the metric identity hold:

$$m(A_{j-1} \Delta A_j) = m(A_j) - m(A_{j-1}). \qquad (2.2.9)$$

Inserting the right part of the identity (2.2.9) into the sum in the right part of the identity (2.2.8), we get the value of the sum:

$$\sum_{j=1}^{n} m(A_{j-1} \Delta A_j) = \sum_{j=1}^{n} [m(A_j) - m(A_{j-1})] = m(A_n) - m(A_0) = m(A_0 \Delta A_n).$$
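In the finite model with m = cardinality (an assumed illustration), the telescoping identity (2.2.8) is verified directly on any nested chain:

```python
# Chain A_0 <= A_1 <= ... <= A_n of nested sets; stepwise distances telescope.
chain = [frozenset(range(k)) for k in (0, 2, 3, 7, 10)]
total = sum(len(a ^ b) for a, b in zip(chain, chain[1:]))
assert total == len(chain[0] ^ chain[-1])   # identity (2.2.8)
print("sum of stepwise distances:", total)
```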
Based on Theorem 2.2.1, the condition (2.2.6) of three points belonging to the same line in the space Ω can be extended to the case of an arbitrarily large number of points. We consider that the elements {Aj} of the indexed set A = {Aj}, j = 0, 1, . . . , n lie on the same line in metric space Ω if the metric identity holds (compare with [222]):

$$\rho(A_0, A_n) = \sum_{j=1}^{n} \rho(A_{j-1}, A_j); \qquad (2.2.10)$$

A = {Aj : ∀Aj ∈ l : A0 ≺ A1 ≺ . . . ≺ Aj ≺ . . . ≺ An}.
Definition 2.2.5. If all the elements of the line l in metric space Ω form a partially
ordered set, then the line is called the line with partially ordered elements.
40 2 Information Carrier Space Built upon Generalized Boolean Algebra with a Measure
Definition 2.2.6. If all the elements of the line l in metric space Ω form a linearly
ordered set, then the line is called the line with linearly ordered elements.
Thus, in metric space Ω, we shall differentiate lines with partially and linearly
ordered elements.
The mutual position of two distinct lines in the space Ω is characterized by the following feature, in contrast with lines in Euclidean space.
A ∩ B = Q = {Qk }, k = 0, 1, . . . , K.
Then, according to the Corollary 2.2.1 of Theorem 2.2.1, the lines lA and lB , on
which the elements {Aj } of the set A and the elements {Bi } of the set B are
situated, respectively, intersect each other on the set Q:
lA ∩ lB = Q = {Qk}, k = 0, 1, . . . , K. □
This example implies that two distinct lines lA and lB in the space Ω could have
an arbitrarily large number K + 1 of common points {Qk } belonging to the same
interval [Q0 , QK ]: {Qk } ⊂ [Q0 , QK ].
The condition (2.2.12) implies that the intersection of the sets A′ = {A′_j} and A″ = {A″_i} contains two elements of generalized Boolean algebra B(Ω), O and A: A′ ∩ A″ = {O, A}. Then, according to Corollary 2.2.1 of Theorem 2.2.1, the lines l′ and l″, on which the elements {A′_j} of the set A′ and the elements {A″_i} of the set A″ are situated, intersect at two points of the space Ω: l′ ∩ l″ = {O, A}. □
This example implies that two distinct lines l′ and l″ in the space Ω, passing through the points of the linearly ordered sets A′ = {A′_j}, A″ = {A″_i}, respectively (A′ ⊂ l′ and A″ ⊂ l″), both being subsets of the interval [O, A] (A′ ⊂ [O, A] and A″ ⊂ [O, A]), intersect at the two extreme points O, A of this interval.
Definition 2.2.7. A linearly ordered indexed set A = {Aα}α∈Φ is called an everywhere dense set if, for all Aα, Aβ ∈ A: Aα < Aβ, an element Aγ can be found such that Aα < Aγ < Aβ.
Example 2.2.3. Consider two linearly ordered everywhere dense indexed sets A =
{Aα }α∈Φ1 and B = {Bβ }β∈Φ2 defined in the intervals [A0 , QA ] and [B0 , QB ] of
generalized Boolean algebra B(Ω), respectively: A ⊂ [A0 , QA ], B ⊂ [B0 , QB ], so
that [A0 , QA ] ∩ [B0 , QB ] = [Q1 , Q2 ], A0 < Q1 & B0 < Q1 , Q2 < QA & Q2 < QB ,
and the limits of the sequences A and B coincide with the extreme points of the
interval [Q1 , Q2 ], respectively:
$$\lim_{\alpha \to \alpha_1} A_\alpha = \lim_{\beta \to \beta_1} B_\beta = Q_1; \qquad \lim_{\alpha \to \alpha_2} A_\alpha = \lim_{\beta \to \beta_2} B_\beta = Q_2. \qquad (2.2.13)$$
Identities (2.2.13) imply that the intersection of linearly ordered sets A = {Aα }α∈Φ1
and B = {Bβ }β∈Φ2 contains two elements of generalized Boolean algebra B(Ω): Q1
and Q2 : A ∩ B = {Q1 , Q2 }.
Then, according to Corollary 2.2.1 of Theorem 2.2.1, the lines lA and lB, on which the elements A = {Aα}α∈Φ1, B = {Bβ}β∈Φ2 of the sets A, B are situated, respectively, intersect at two points of the space Ω: lA ∩ lB = {Q1, Q2}. □
This example implies that two distinct lines lA and lB , passing through the
points of linearly ordered sets A = {Aα }α∈Φ1 , B = {Bβ }β∈Φ2 , respectively: {Aα } ⊂
lA , {Bβ } ⊂ lB , can intersect at two or more points of the space Ω.
A + B = (AB) + (A − B) + (B − A),
where the elements AB, A − B, B − A are pairwise orthogonal and differ from the null element:
B − A (0, m(B − A), 0), A (m(A − B), 0, m(AB)), B (0, m(B − A), m(AB));
A∆B (m(A − B), m(B − A), 0), A + B (m(A − B), m(B − A), m(AB)).
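These coordinates can be probed in the finite model (m = cardinality, with assumed concrete sets A, B): every element generated by the disjoint blocks A − B, B − A, AB receives a triple of block measures, and the ℓ₁ distance between triples reproduces ρ(X, Y) = m(X∆Y), in agreement with (2.2.4):

```python
from itertools import combinations

# Coordinates (m(X(A-B)), m(X(B-A)), m(X·AB)) and the l1 metric of (2.2.4).
A, B = frozenset({1, 2, 3, 5}), frozenset({3, 4, 5, 6, 7})
blocks = [A - B, B - A, A & B]

def coords(X):
    return [len(X & blk) for blk in blocks]

# all unions of the three disjoint blocks (the subalgebra of the sheet)
points = [frozenset().union(*c) for r in range(4) for c in combinations(blocks, r)]
for X in points:
    for Y in points:
        l1 = sum(abs(u - v) for u, v in zip(coords(X), coords(Y)))
        assert l1 == len(X ^ Y)   # l1 distance equals m(X∆Y)
print("coordinate l1 metric agrees with rho on the sheet")
```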
The points O, AB, A − B, B − A, A, B, A∆B, A + B of the sheet LAB belong
to the sphere Sp(OAB , RAB ) in R3 , so that the center OAB and radius RAB of the
sphere are determined by the Equations (2.2.2) and (2.2.3), respectively.
Definition 2.2.10. In metric space Ω, sheet LAB is a set of the points which contains: two elements A, B ∈ Ω that are not connected by a relation of order (A ≠ AB & B ≠ AB); the lines passing through these two given points; and every line that passes through an arbitrary pair of the points on the lines passing through the two given points.
By Definitions 2.2.9 and 2.2.10, the algebraic and geometric essences of a sheet
are established.
Definition 2.2.11. Two points A and B are called generator points of the sheet LAB in metric space Ω if they are not connected by a relation of order on generalized Boolean algebra B(Ω): A ≠ AB & B ≠ AB.
Definitions 2.2.9 and 2.2.10 imply that the sheet LAB given by generator points
A, B contains the points O, AB, A − B, B − A, A, B, A∆B, A + B, and also the
points {Xα } of a linearly ordered indexed set X lying on the line passing through
the extreme points O, X of the interval [O, X ], where X = Y ∗ Z, Y = A ∗ B,
Z = A ∗ B, and the asterisk ∗ is an arbitrary signature operation of generalized
Boolean algebra B(Ω).
To denote one-to-one correspondence between an algebraic notion W and its
geometric image Geom(W ), we shall write:
W ↔ Geom(W ). (2.2.14)
For instance, the correspondence between the sheet LAB and subalgebra B(A + B )
of generalized Boolean algebra B(Ω) will be denoted as LAB ↔ A + B.
Applying the correspondence (2.2.14), one can define the relationships between
the objects of metric space Ω with some given properties. For instance, the inter-
section of two sheets LAB and LCD , determined by the generator points A, B and
C, D, respectively, is the sheet with generator points (A + B )C and (A + B )D or
the sheet with generator points (C + D)A and (C + D)B:
= A + AD + AB + BD = A + BD = A,
i.e.: (A ≡ C ) & (BD = O) ⇒ LAB ∩ LCD = A.
Definition 2.2.12. In metric space Ω, plane αABC passes through three points A, B, C which are not pairwise connected by a relation of order:
By Definitions 2.2.12 and 2.2.13, the algebraic and geometric essences of a plane are established, respectively.
Definition 2.2.14. Three points A, B, C are called generator points of the plane αABC in metric space Ω if they are not pairwise connected by a relation of order:
Using the relation (2.2.14), we obtain that the intersection of two planes αABC and
αDEF , where the points A, B, C and D, E, F are not connected with each other by
relation of order, is the plane with generator points A(D + E + F ), B (D + E + F ),
C (D + E + F ), or the plane with generator points D(A + B + C ), E (A + B + C ),
F (A + B + C ):
= A + B + C,
then the intersection of these planes is the plane αABC :
If the points A, B and D, E of the planes αABC , αDEF are pairwise identical A ≡ D,
B ≡ E, then the intersection of these planes is the plane αA,B,CF with generator
points A, B, CF :
= (A + B + C )(A + B + F ) = A + B + CF ↔ αA,B,CF ,
i.e., A ≡ D, B ≡ E ⇒ αABC ∩ αDEF = αA,B,CF . If the points A, B and D, E of
the planes αABC , αDEF are pairwise identical A ≡ D, B ≡ E, and the elements C
and F are orthogonal CF = O, then the intersection of these planes is the sheet
LAB with generator points A, B:
= (A + B + C )(A + B + F ) = A + B + CF = A + B,
i.e., (A ≡ D, B ≡ E ) & (CF = O) ⇒ αABC ∩ αDEF = LAB .
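The subalgebra computation behind this case is an ordinary distributivity exercise and can be spot-checked in the finite set model (the concrete sets are assumed for illustration):

```python
# (A + B + C)(A + B + F) = A + B + CF in the model with union and intersection.
A, B = frozenset({1}), frozenset({2})
C, F = frozenset({3, 4}), frozenset({4, 5})
assert (A | B | C) & (A | B | F) == A | B | (C & F)
print(sorted((A | B | C) & (A | B | F)))   # [1, 2, 4]
```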
But if the points A and D of the planes αABC, αDEF are identical, A ≡ D, and the elements B, C and E + F are pairwise orthogonal, B(E + F) = O, C(E + F) = O, then the intersection of these planes is the element A:
↔ (A + B + C)(D + E + F) = A(A + E + F) + B(A + E + F) + C(A + E + F) = A + BA + CA + B(E + F) + C(E + F) = A,
i.e., (A ≡ D) & (B(E + F) = O) & (C(E + F) = O) ⇒ αABC ∩ αDEF = A.
Example 2.2.4. Let the elements A and D of the planes αABC, αDEF be identical, A ≡ D, and let the elements B, C and E + F be pairwise orthogonal, B(E + F) = O, C(E + F) = O. Then the intersection of these planes is the element A:
Example 2.2.5. Consider the sheets LAB, LAE with the generator points A, B and A, E belonging to the planes αABC, αAEF, respectively. Let the lines lAB, lAE passing through the generator points A, B and A, E of these sheets, respectively, intersect each other at A, so that:
lAB ∩ lAE = A;
A, B ∈ lAB ⊂ LAB ⊂ αABC;
A, E ∈ lAE ⊂ LAE ⊂ αAEF. □
This example implies that two planes αABC , αAEF , two sheets LAB , LAE , and two
lines lAB , lAE belonging to these planes αABC , αAEF , respectively, can intersect
each other in a single point in metric space Ω.
Example 2.2.7. Consider, as in the previous example, three planes {αi}, i = 1, 2, 3 in metric space Ω, each of them with generator points {Ai, Bi, Ci}, i = 1, 2, 3 connected with each other by a relation of order: A1 < A2 < A3, B1 < B2 < B3, C1 < C2 < C3. Every pair of the generator points {Ai, Bi}, i = 1, 2, 3 determines the sheets {LABi} belonging to the corresponding planes {αi}. The points {Ai}, {Bi}, {Ci}, i = 1, 2, 3 lie on the lines lA, lB, lC, respectively: Ai ∈ lA, Bi ∈ lB, Ci ∈ lC. Let also the lines lA, lB intersect in the point AB: lA ∩ lB = AB, so that AB = AiBi, i = 1, 2, 3. It should be noted that AB < A1 < A2 and AB < B1 < B2; hence, the lines lA2, lB2 passing through the points AB, A1, A2 and AB, B1, B2, respectively (AB, A1, A2 ∈ lA2; AB, B1, B2 ∈ lB2), belong to the plane α2, so that lA2 ∩ lB2 = AB. Therefore, two crossing lines lA2, lB2 belong to the same plane α2; however, the two crossing lines lA, lB do not belong to the same plane α2. In metric space Ω, the corollary from the axioms of connection of “absolute geometry” that “two crossing lines belong to the same plane” [224, Section 2] does not hold. Thus, in metric space Ω, two crossing lines may or may not belong to the same plane. □
If the generator points {Ai , Bi , Ci } of the planes {αi } are connected by relation
of order: Ai−1 < Ai < Ai+1 , Bi−1 < Bi < Bi+1 , Ci−1 < Ci < Ci+1 and, there-
fore, lie on the same line respectively: Ai−1 , Ai , Ai+1 ∈ lA ; Bi−1 , Bi , Bi+1 ∈ lB ;
Ci−1 , Ci , Ci+1 ∈ lC , then we shall consider that the planes αi−1 ,αi ,αi+1 belong to
one another: αi−1 ⊂ αi ⊂ αi+1 .
If the generator points {Ai , Bi } of the sheets {Li } are connected by relation of
order: Ai−1 < Ai < Ai+1 , Bi−1 < Bi < Bi+1 and, therefore, lie on the same line
respectively: Ai−1 , Ai , Ai+1 ∈ lA ; Bi−1 , Bi , Bi+1 ∈ lB , then we shall consider that
the sheets Li−1 ,Li ,Li+1 belong to one another: Li−1 ⊂ Li ⊂ Li+1 .
If the generator points {Ai , Bi , Ci } of the planes {αi } are connected by relation
of order: Ai−1 < Ai < Ai+1 , Bi−1 < Bi < Bi+1 , Ci−1 < Ci < Ci+1 and, there-
fore, lie on the same line, respectively: Ai−1 , Ai , Ai+1 ∈ lA ; Bi−1 , Bi , Bi+1 ∈ lB ;
Ci−1 , Ci , Ci+1 ∈ lC , then we shall consider that the lines lAi−1 , lAi , lAi+1 :
A1.1. There exists at least one line passing through two given points.
A1.2. There exist at least three points that belong to every line.
A1.3. There exist at least three points that do not belong to the same line.
A1.5. There exists only one sheet passing through the null element and two points that do not lie with the null element on the same line.
A1.6. A null element and at least two points belong to each sheet.
A1.7. There exist at least three points that do not belong to the same sheet and differ from the null element.
A1.8. Two different crossing sheets have at least one point in common, which differs from the null element.
A1.9. There exists only one plane passing through the null element and three points that do not lie on the same sheet.
A1.10. A null element and at least three points belong to every plane.
A1.11. There exist at least four points that do not belong to the same plane and differ from the null element.
A1.12. Two different crossing planes have at least one point in common that differs from the null element.
C1.1. There exists only one plane passing through a sheet and a point that does not belong to it (from A1.9 and A1.5).
C2.1. There exists only one plane passing through two distinct sheets crossing in a point which differs from the null element (from A1.9 and A1.5).
Axioms of order
A2.2. Among any three points on a line, there exists no more than one point lying between the other two.
AB ≺ A ≺ A + B ≺ B ≺ AB or AB ≻ A ≻ A + B ≻ B ≻ AB;
∀A, B : (A ≠ AB) & (B ≠ AB) ⇒ A, B, B − A, A − B ∈ l2:
A ≺ B ≺ B − A ≺ A − B ≺ A or A ≻ B ≻ B − A ≻ A − B ≻ A.
⇔ A0 ≺ A1 ≺ . . . ≺ Aj ≺ . . . ≺ An ⇔ {Aj} ⊂ l.
Definition 2.2.15. In the space Ω, triangle ∆ABC is a set of the elements belonging to the union of intersections of each of the three sheets LAB, LBC, LCA of the plane αABC with the collections of the lines {l^i_AB}, {l^j_BC}, {l^k_CA} passing through the generator points A, B; B, C; C, A of these sheets, respectively:

$$\Delta ABC = \Big[L_{AB} \cap \big(\bigcup_i l_{AB}^i\big)\Big] \cup \Big[L_{BC} \cap \big(\bigcup_j l_{BC}^j\big)\Big] \cup \Big[L_{CA} \cap \big(\bigcup_k l_{CA}^k\big)\Big].$$
Definition 2.2.16. Angle ∠B_∆ABC of the triangle ∆ABC in the space Ω is a set of the elements belonging to the union of intersections of each of the two sheets LAB, LBC of the plane αABC with the collections of the lines {l^i_AB}, {l^j_BC} passing through the generator points A, B; B, C of these sheets, respectively:

$$\angle B_{\Delta ABC} = \Big[L_{AB} \cap \big(\bigcup_i l_{AB}^i\big)\Big] \cup \Big[L_{BC} \cap \big(\bigcup_j l_{BC}^j\big)\Big].$$
Axiom of congruence
A3.1. If, in a group G of mappings of the space Ω into itself, there exists a mapping g ∈ G preserving a measure m, such that the points A, B, C ∈ Ω, determining the plane αABC and pairwise determining the sheets LAB, LBC, LCA, are mapped into the points A′, B′, C′ ∈ Ω; the triangle ∆ABC is mapped into the triangle ∆A′B′C′; the sheets LAB, LBC, LCA are mapped into the sheets LA′B′, LB′C′, LC′A′; and the plane αABC is mapped into the plane αA′B′C′, respectively:

$$A \xrightarrow{g} A', \quad B \xrightarrow{g} B', \quad C \xrightarrow{g} C'; \qquad \Delta ABC \xrightarrow{g} \Delta A'B'C';$$
$$L_{AB} \xrightarrow{g} L_{A'B'}, \quad L_{BC} \xrightarrow{g} L_{B'C'}, \quad L_{CA} \xrightarrow{g} L_{C'A'}; \qquad \alpha_{ABC} \xrightarrow{g} \alpha_{A'B'C'},$$

then the corresponding sheets, and also the planes, triangles, and angles formed by the sheets, are congruent:

$$L_{AB} \cong L_{A'B'}, \quad L_{BC} \cong L_{B'C'}, \quad L_{CA} \cong L_{C'A'};$$
$$\alpha_{ABC} \cong \alpha_{A'B'C'}; \qquad \Delta ABC \cong \Delta A'B'C';$$
$$\angle A_{\Delta ABC} \cong \angle A'_{\Delta A'B'C'}, \quad \angle B_{\Delta ABC} \cong \angle B'_{\Delta A'B'C'}, \quad \angle C_{\Delta ABC} \cong \angle C'_{\Delta A'B'C'}.$$
C1.3. If triangles ∆ABC and ∆A′B′C′ are congruent in the space Ω, then there exists a mapping f ∈ G preserving a measure m, which maps the triangle ∆ABC into the triangle ∆A′B′C′:

$$\Delta ABC \xrightarrow{f} \Delta A'B'C'.$$
Axiom of continuity
X* = {x : O ≤ x ≤ X},
and if, at the same time, on the set X* there is no interval that lies within all the intervals of the system, then there exists exactly one element that belongs to all these intervals.
Axioms of parallels
Axioms of parallels for a sheet
A5.1(a). In a given sheet LAB, through a point C ∈ LAB that does not belong to a given line lAB passing through the points A, B, at least one line lC not crossing the given line lAB can be drawn, under the condition that the point C is an element of a linearly ordered indexed set {Cγ}γ∈Γ: C ∈ {Cγ}γ∈Γ.
A5.2(a). In a given sheet LAB, through a point C ∈ LAB that does not belong to a given line lAB passing through the points A, B, only one line lC not crossing the given line lAB can be drawn, under the condition that the point C is not an element of a linearly ordered indexed set {Cγ}γ∈Γ: C ∉ {Cγ}γ∈Γ.
A5.3(a). In a given sheet LX, through a point C ∈ LX that does not belong to a given line lAB passing through the points A, B, one cannot draw a line not crossing the given line lAB, under the condition that the points A, B belong to an interval [O, X]: A, B ∈ [O, X], so that:
A5.1(b). In a given plane αABC, through a point C that does not belong to a given line lAB passing through the points A, B, at least one line lC not crossing the given line lAB can be drawn, under the condition that the point C and the line lAB belong to distinct sheets of the plane αABC:
A5.2(b). In a given plane αABC, through a point C that does not belong to a given line lAB passing through the points A, B, only one line lC not crossing the given line lAB can be drawn, under the condition that the point C is not an element of a linearly ordered indexed set {Cγ}γ∈Γ, C ∉ {Cγ}γ∈Γ, and, along with the line lAB, it belongs to the same sheet LAB.
A5.3(b). In a given plane αABC, through a point C that does not belong to a given line lAB passing through the points A, B, one cannot draw a line not crossing the given line lAB, under the condition that the points A, B belong to an interval [O, X]: A, B ∈ [O, X], so that:
The given axiomatic system of the space Ω, built upon generalized Boolean algebra B(Ω) with a measure m, implies that the axioms of connection and the axioms of parallels are characterized by essentially weaker constraints than the axioms of the analogous groups of Euclidean space. Thus, it may be assumed that the geometry of generalized Boolean algebra B(Ω) with a measure m contains within itself some other geometries. In particular, the axioms of parallels, both for a sheet and for a plane, contain the axioms of parallels of hyperbolic, elliptic, and parabolic (Euclidean) geometries.
a = A − (B + C), b = B − (A + C), c = C − (A + B), d = ABC; (2.2.15)
a′ = BC − A, b′ = CA − B, c′ = AB − C.
Theorem 2.2.2. Cosine theorem of the space Ω. For a triangle ∆ABC in the space Ω, the following relationships hold:

$$\cos^2\varphi_{AB} = \frac{m(c) + m(c')}{m(a) + m(b) + m(c) + m(a') + m(b') + m(c')}; \qquad (2.2.27)$$

$$\sin^2\varphi_{AB} = \frac{m[A\Delta B]}{m[(A\Delta C) + (C\Delta B)]} = \frac{m(a) + m(b) + m(a') + m(b')}{m(a) + m(b) + m(c) + m(a') + m(b') + m(c')}; \qquad (2.2.28)$$

$$\cos^2\varphi_{AB} + \sin^2\varphi_{AB} = 1.$$
Theorem 2.2.3. Sine theorem of the space Ω. For a triangle ∆ABC in the space Ω, the following relationship holds:

$$\frac{m[A\Delta B]}{\sin^2\varphi_{AB}} = \frac{m[B\Delta C]}{\sin^2\varphi_{BC}} = \frac{m[C\Delta A]}{\sin^2\varphi_{CA}} = m(A + B + C) - m(ABC) = p(A, B, C). \qquad (2.2.29)$$

The proof of the theorem follows directly from Equations (2.2.26) and (2.2.28).
Theorem 2.2.4. Theorem on the cos-invariant of the space Ω. For a triangle ∆ABC in the space Ω, the identity that determines the cos-invariant of the angular measures of this triangle holds:

$$\cos^2\varphi_{AB} + \cos^2\varphi_{BC} + \cos^2\varphi_{CA} = 1. \qquad (2.2.30)$$
The proof of the theorem follows from Equations (2.2.20) through (2.2.22) and (2.2.26), and also from Equation (2.2.27).
A two-sided inequality follows from the equality (2.2.30) and determines the constraint relationships between the angles of a triangle in the space Ω:
$$3\arccos(1/\sqrt{3}) \le \varphi_{AB} + \varphi_{BC} + \varphi_{CA} \le 3\pi + 3\arccos(1/\sqrt{3}). \qquad (2.2.31)$$
According to (2.2.31), one can distinguish four essential intervals of the domain of definition of the triangle angle sum in the space Ω that determine whether a triangle belongs to a space with hyperbolic, Euclidean, elliptic, or another geometry, respectively:

3 arccos(1/√3) ≤ ϕAB + ϕBC + ϕCA < π (hyperbolic space);
ϕAB + ϕBC + ϕCA = π (Euclidean space);
π < ϕAB + ϕBC + ϕCA < 3π (elliptic space);
3π ≤ ϕAB + ϕBC + ϕCA ≤ 3π + 3 arccos(1/√3).
Thus, the commonality of the properties of the space Ω built upon generalized Boolean algebra B(Ω) with a measure m extends to three well-studied geometries: hyperbolic, Euclidean, and elliptic.
Theorem 2.2.5. Theorem on the sin-invariant of the space Ω. For a triangle ∆ABC in the space Ω, the identity that determines the sin-invariant of the angular measures of this triangle holds:

$$\sin^2\varphi_{AB} + \sin^2\varphi_{BC} + \sin^2\varphi_{CA} = 2. \qquad (2.2.32)$$

The proof of the theorem follows from Equations (2.2.17) through (2.2.19) and (2.2.26), and also from Equation (2.2.28).
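Both invariants can be confirmed in the finite model (m = cardinality, random sets standing in for the triangle vertices; our sketch): the three quantities sin²ϕ defined by (2.2.28) always sum to 2, and the complementary cosines to 1.

```python
import random

# Numeric check of the cos-invariant (2.2.30) and the sin-invariant (2.2.32).
random.seed(2)
def sin2(A, B, C):
    # sin^2 phi_AB = m(A∆B) / m((A∆C) + (C∆B)), m = cardinality
    return len(A ^ B) / len((A ^ C) | (C ^ B))

for _ in range(100):
    A, B, C = (frozenset(random.sample(range(12), random.randint(1, 12)))
               for _ in range(3))
    if not (A ^ B and B ^ C and C ^ A):
        continue   # skip degenerate triangles with coincident vertices
    s = [sin2(A, B, C), sin2(B, C, A), sin2(C, A, B)]
    assert abs(sum(s) - 2.0) < 1e-9                  # sin-invariant (2.2.32)
    assert abs(sum(1 - v for v in s) - 1.0) < 1e-9   # cos-invariant (2.2.30)
print("invariants (2.2.30) and (2.2.32) verified")
```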
Theorem 2.2.6. Function (2.2.28) is a metric.
Proof. Write the obvious inequality:

$$2\sin^2\varphi_{CA} \le 2. \qquad (2.2.33)$$

According to the identity (2.2.32), we substitute for 2 in the right part of the inequality (2.2.33) the sum of the squared sines of the triangle's angles:

$$\sin^2\varphi_{AB} + \sin^2\varphi_{BC} + \sin^2\varphi_{CA} \ge 2\sin^2\varphi_{CA}.$$

The next inequality follows from the obtained one:

$$\sin^2\varphi_{AB} + \sin^2\varphi_{BC} \ge \sin^2\varphi_{CA}. \qquad (2.2.34)$$

Equation (2.2.28) implies that sin²ϕAB = 0 only if m(A∆B) = 0, i.e., if A ≡ B:

$$A \equiv B \Rightarrow \sin^2\varphi_{AB} = 0. \qquad (2.2.35)$$

Taking into account the symmetry of the function, sin²ϕAB = sin²ϕBA, and also the properties (2.2.34) and (2.2.35), it is easy to conclude that the function (2.2.28) satisfies all the axioms of a metric.
$$\mu(A, B) = \frac{m(A \Delta B)}{m(A + B)}. \qquad (2.2.36)$$

Formally, this result could be obtained from (2.2.28) by letting C = O, where O is the null element of the space Ω. Thus, while analyzing the metric relationships in a plane in the space Ω, instead of a triangle ∆ABC, one should consider a tetrahedron OABC. Metric (2.2.36) was introduced in [225].
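For m = cardinality (an assumed finite model), (2.2.36) is the familiar Jaccard distance of finite sets, and its metric axioms, including the triangle inequality, can be checked on random triples:

```python
import random

# mu(A, B) = m(A∆B)/m(A + B): the Jaccard distance when m is cardinality.
random.seed(3)
def mu(a, b):
    return len(a ^ b) / len(a | b) if a | b else 0.0

for _ in range(2000):
    A, B, C = (frozenset(random.sample(range(8), random.randint(0, 8)))
               for _ in range(3))
    assert mu(A, B) <= mu(A, C) + mu(C, B) + 1e-12   # triangle inequality
print("mu behaves as a metric on all sampled triples")
```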
Then, according to Lemma 2.2.1, the elements Aij, Ajk, Aki are situated on the corresponding sides of a triangle ∆AiAjAk in metric space Ω, as shown in Fig. 2.2.3, so that:
Ai, Aij, Aj ∈ lij; Aj, Ajk, Ak ∈ ljk; Ak, Aki, Ai ∈ lki,
where lij, ljk, lki are lines of metric space Ω. In the plane of the triangle ∆AiAjAk, the point Aijk = Ai + Aj + Ak is situated at the intersection of the lines lijk, ljki, lkij, so that the following relations of belonging hold:
Aij, Aijk, Ak ∈ lijk; Ajk, Aijk, Ai ∈ ljki; Aki, Aijk, Aj ∈ lkij.
Lemma 2.2.4. In the space Ω, all the elements {Aj} of an n-dimensional simplex Sx(A), $A = \sum_{j=0}^{n} A_j$, A0 ≡ O, belong to some n-dimensional sphere Sp(OA, RA) with center OA and radius RA: ∀Aj ∈ Sx(A): Aj ∈ Sp(OA, RA).
In the space Ω, an n-dimensional sphere Sp(OA, RA) can be drawn around an arbitrary n-dimensional simplex Sx(A). It is worth drawing an analogy here with a Euclidean sphere in R³, where the distance between a pair of points A, B is determined both by the Euclidean metric (2.2.5) and by the angular measure of an arc of a great circle passing through the given pair of points A, B and the center of the sphere. Distances between the elements {Aj} of an n-dimensional simplex Sx(A), lying on the n-dimensional sphere Sp(OA, RA) in the space Ω, are uniquely determined by the metric µ(Ai, Aj) (2.2.36), which is a function of the angular measure ϕij between the elements Ai and Aj:

$$\mu(A_i, A_j) = \sin^2\varphi_{ij} = \frac{m(A_i \Delta A_j)}{m(A_i + A_j)}. \qquad (2.2.37)$$

FIGURE 2.2.3 Simplex Sx(Ai + Aj + Ak) in metric space Ω
Consider the main properties of metric space ω with metric µ(A, B) (2.2.36), which is a subset of metric space Ω with metric ρ(A, B) = m(A∆B). At the same time, we require that three elements A, X, B belonging to the same line l in the space Ω (A, X, B ∈ l, l ∈ Ω), such that the element X lies between the elements A and B, also belong to the same line l′ in the space ω: A, X, B ∈ l′, l′ ∈ ω. Then two metric identities hold jointly:

$$\rho(A, B) = \rho(A, X) + \rho(X, B); \qquad \mu(A, B) = \mu(A, X) + \mu(X, B). \qquad (2.2.38)$$

The first equation of the system (2.2.38), as noted above (see the identities (2.2.6) and (2.2.7)), has two solutions with respect to X: (1) X = A + B and (2) X = A·B. Substituting these values of X into the second equation of (2.2.38), we notice that it turns into an identity for X = A + B, but does not hold for X = A·B. This implies the following equivalent requirement on the space ω: ω should be closed under the operation of addition of generalized Boolean algebra B(Ω) with a measure m.
We now give a general definition of the space ω.
of the space ω with coordinates x′ = [x′₁, x′₂, . . . , x′ₙ]; thus, some transformation of the elements of the space is established; it is this method of interpretation we shall deal with henceforth while considering the properties of the space ω.
We now give one more definition of the space ω.
Definition 2.2.19. Metric space ω with metric ρω(A, B) is the set of the elements {a, b, . . .} in which each pair a, b is the result of a mapping f of the corresponding pair of elements A, B from metric space Ω:
f : A → a, B → b; A, B ∈ Ω; a, b ∈ ω,
and the distance ρω(A, B) between the elements a and b in the space ω is equal to m(A∆B)/m(A + B); moreover, three elements A, X, B belonging to the same line l in the space Ω (A, X, B ∈ l, l ∈ Ω), such that the element X lies between the elements A and B, are mapped into three elements a, x, b, respectively:
f : A → a, X → x, B → b,
so that they belong to the same line l′ in the space ω (a, x, b ∈ l′, l′ ∈ ω), and the element x lies between the elements a and b.
We now summarize the properties of the space (ω, ρω ).
Group of geometric properties
xa = k · xA , xb = k · xB , k = 1/m(A + B ). (2.2.40)
Axiom 2.3.1. Axiom of a measure of a binary operation. Measure m(a) of the el-
ement a: a = b ◦ c; a, b, c ∈ Ω, considered as a result of a binary operation ◦ of
Remark 2.3.1. The quantity of absolute information I_{Aα} may be considered as the quantity of overall information I_{Aα+Aα} = m(Aα + Aα) or as the quantity of mutual information I_{Aα·Aα} = m(Aα · Aα) contained in the element Aα with respect to itself.
FIGURE 2.3.1 Information quantities between two sets A and B: (a) quantity of overall
information; (b) quantity of mutual information; (c) quantity of particular relative infor-
mation; (d) quantity of relative information
where d(Aj , Ak ), ρ(Aj , Ak ) are metrics determined by the relationships (2.3.2) and
(2.1.2), respectively.
Here, the values of indices are denoted modulo n + 1, i.e., An+1 ≡ A0 .
The set of the elements with a continuous structure {Aα} in metric space Ω is a continuous closed line l({Aα}) situated on a sphere Sp(O, R); this line is, at the same time, a fragment of an n-dimensional simplex Sx(A) inscribed into the sphere Sp(O, R), and it in series connects the vertices {Aj} of the given simplex.
On the basis of these notions, one can distinguish two forms of structural diversity of a set A = ∪_α Aα and, correspondingly, two measures of collections of the elements of a set A, i.e., the overall quantity of information and the relative quantity of information, introduced by the following definitions.
I (A) = IA .
ring BR(Ω) with the signature (∆, ·, O) of the type (2, 2, 0), which are isomorphic to each other [176], [215].
Information contained in a set A of elements is revealed at the same time in the similarity and in the distinction between the single elements of the collection {Aα}. If a set A consists of the identical elements . . . = Aα = Aβ = . . . of a collection {Aα}, then the overall quantity of information I(A) contained in the set A is equal to 1: I(A) = 1, and the relative quantity of information I∆(A) contained in the set A is equal to zero: I∆(A) = 0. This means that the overall quantity of information contained in a set A consisting of a collection of identical elements {Aα} is equal to the measure of an element: I(A) = m(Aα) = 1, whereas the information quantity that can be extracted from such a set is determined by the relative quantity of information I∆(A) and is equal to zero.
Consider the main relationships that characterize the introduced measures of collections of the elements, m(Σ_j Aj) and m(∆_j Aj), for a set A with discrete structure {Aj}:

$$m\Big(\sum_{j=0}^{n} A_j\Big) = \sum_{j=0}^{n} m(A_j) - \sum_{0\le j<k\le n} m(A_j A_k) + \sum_{0\le j<k<l\le n} m(A_j A_k A_l) - \ldots + (-1)^n\, m\Big(\prod_{j=0}^{n} A_j\Big); \qquad (2.3.5)$$

$$m\Big(\mathop{\Delta}\limits_{j=0}^{n} A_j\Big) = \sum_{j=0}^{n} m(A_j) - 2\sum_{0\le j<k\le n} m(A_j A_k) + 4\sum_{0\le j<k<l\le n} m(A_j A_k A_l) - \ldots + (-2)^n\, m\Big(\prod_{j=0}^{n} A_j\Big). \qquad (2.3.6)$$

In (2.3.6) the k-fold intersections enter with the coefficients (−1)^{k−1} 2^{k−1}, as required for the measure of a symmetric difference.
If a set A consists of disjoint elements (Aj · Ak = O), then the inequality (2.3.8) transforms into the identity:

$$I_\Delta(A) = I_A = \sum_j m(A_j). \qquad (2.3.9)$$
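Formulas (2.3.5) and (2.3.6) can be verified numerically in the finite model (m = cardinality, random sets as an assumed example); note the coefficients 2^{k−1} that distinguish the symmetric-difference expansion from ordinary inclusion–exclusion:

```python
from itertools import combinations
from functools import reduce
import random

# Check of (2.3.5) and (2.3.6) on random finite sets with m = cardinality.
random.seed(4)
sets = [frozenset(random.sample(range(15), random.randint(1, 15))) for _ in range(4)]

inter = lambda group: reduce(lambda a, b: a & b, group)
m_union = len(frozenset().union(*sets))
m_delta = len(reduce(lambda a, b: a ^ b, sets))

ie_union = sum((-1)**(k - 1) * sum(len(inter(g)) for g in combinations(sets, k))
               for k in range(1, len(sets) + 1))
ie_delta = sum((-1)**(k - 1) * 2**(k - 1) * sum(len(inter(g)) for g in combinations(sets, k))
               for k in range(1, len(sets) + 1))
assert m_union == ie_union and m_delta == ie_delta
print("(2.3.5) and (2.3.6) confirmed:", m_union, m_delta)
```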
It can be shown that for a set A with discrete structure {Aj}, j = 0, 1, . . . , n, the following relationship holds:

$$m\Big(\sum_j A_j\Big) = P(\{A_j\}) + m\Big(\prod_j A_j\Big), \qquad (2.3.10)$$
where $P(\{A_j\}) = \tfrac{1}{2}\sum_{j=0}^{n} m(A_j \Delta A_{j+1}) = \sum_{j=0}^{n} d(A_j, A_{j+1})$ is the perimeter of a closed polygonal line l({Aj}) that in series connects the ordered elements {Aj} of the set A in the metric space of information carriers Ω. Here the values of the indices are taken modulo n + 1, i.e., A_{n+1} ≡ A_0.
The relationships (2.3.6) and (2.3.10) imply the equality:

$$\ldots + (-1)^k\, P\Big(\Big\{\prod_{i=j}^{j+k} A_i\Big\}\Big) + \ldots + m\Big(\prod_j A_j\Big)\cdot \operatorname{mod}_2(n-1), \qquad (2.3.11)$$

where

$$\operatorname{mod}_2(n-1) = \begin{cases} 1, & n = 2k,\; k \in \mathbb{N}; \\ 0, & n = 2k+1; \end{cases}$$

$P(\{A_j\}) = \sum_{j=0}^{n} d(A_j, A_{j+1})$ is the perimeter of a closed polygonal line l({Aj}) that in series connects the ordered elements {Aj} of the set A in the metric space of information carriers Ω (see Fig. 2.3.2);

$P(\{A_j \cdot A_{j+1}\}) = \sum_{j=0}^{n} d[(A_{j-1}\cdot A_j), (A_j\cdot A_{j+1})]$ is the perimeter of a closed polygonal line l({A_{Πj}}) that in series connects the ordered elements {A_{Πj}}: A_{Πj} = Aj · A_{j+1} of the set A_Π = ∪_j A_{Πj} in metric space Ω (see Fig. 2.3.2);

$P\big(\big\{\prod_{i=j}^{j+k} A_i\big\}\big) = \sum_{j=0}^{n} d\Big[\prod_{i=j}^{j+k} A_i,\; \prod_{i=j+1}^{j+k+1} A_i\Big]$ is the perimeter of a closed polygonal line l({A_{Πj,j+k}}) that in series connects the ordered elements {A_{Πj,k}}: $A_{\Pi j,k} = \prod_{i=j}^{j+k} A_i$ of the set $A_{\Pi k} = \bigcup_j A_{\Pi j,k}$ in metric space Ω.
The relationship (2.3.10) implies that the overall quantity of information $I(A)$ is the information quantity contained in a set A as a consequence of the presence of metric distinctions between the elements of a collection $\{A_j\}$. This overall quantity of information is determined by the perimeter $P(\{A_j\})$ of the closed polygonal line $l(\{A_j\})$ in the metric space Ω and also by the quantity of mutual information defined by the measure of the product of the elements, $m\bigl(\prod_j A_j\bigr)$.
The relationship (2.3.11) implies that the relative quantity of information $I_\Delta(A)$ is likewise an information quantity contained in a set A owing to the presence of metric distinctions between the elements of a collection $\{A_j\}$. But unlike the overall quantity of information $I(A)$, the relative quantity of information $I_\Delta(A)$ is determined by the perimeter $P(\{A_j\})$ of the closed polygonal line $l(\{A_j\})$ in the metric space Ω, taking into account the influence of metric distinctions between the products $\prod_{i=j}^{j+k} A_i$ of the elements of a collection $\{A_j\}$.

Both measures of structural diversity of a set of elements $A = \bigcup_j A_j$, i.e., $m\bigl(\sum_j A_j\bigr)$ and $m\bigl(\Delta_j A_j\bigr)$, are functions of the perimeter $P(\{A_j\})$ of the ordered structure of a set A in the form of a closed polygonal line $l(\{A_j\})$ that connects in series the elements of the collection $\{A_j\}$ in the metric space Ω with metric $d(A_j, A_k) = \frac{1}{2}\, m(A_j \Delta A_k)$. The first of these measures, the overall quantity of information $I(A)$, has the sense of the entire quantity of information contained in a set A considered as a collection of elements with an ordered structure. The relative quantity of information $I_\Delta(A)$ has the sense of the information quantity that may be extracted from the set by proper processing.
Under discretization, a set A with continuous structure $\{A_\alpha\}$ is represented by a finite (countable) set $\{A_j\}$. In its turn, a set of elements $A'$ with discrete structure $\{A_j\}$ may be associated with the first one:

$$A = \bigcup_\alpha A_\alpha \;\rightarrow\; A' = \bigcup_j A_j,$$

so that each element $A_j$ of the set $A'$ with discrete structure is, at the same time, an element of the set A with continuous structure: $A_j \in A$.
Thus, a discretization $D : A \rightarrow A'$ of a set of elements A with continuous structure $\{A_\alpha\}$ may be considered as a mapping of a set $A = \bigcup_\alpha A_\alpha$ into a set $A' = \bigcup_j A_j$ with discrete structure $\{A_j\}$, such that each element of the set $A'$ with discrete structure is, at the same time, an element of the set A with continuous structure, and distinct elements $A_\alpha, A_\beta \in A$, $A_\alpha \neq A_\beta$ of the set A are mapped into distinct elements $A_j, A_k \in A'$, $A_j \neq A_k$ of the set $A'$:

$$D : A \rightarrow A'; \qquad (2.3.12)$$

$$A_\alpha, A_\beta \in A,\ A_\alpha \neq A_\beta;\quad A_\alpha \rightarrow A_j,\ A_\beta \rightarrow A_k;\quad A_j, A_k \in A',\ A_j \neq A_k. \qquad (2.3.12a)$$
$A_\alpha + A_\beta \rightarrow A_j + A_k$ (addition);
$A_\alpha \cdot A_\beta \rightarrow A_j \cdot A_k$ (multiplication);
$A_\alpha - A_\beta \rightarrow A_j - A_k$ (difference / obtaining a relative complement);
$O_A \equiv O_{A'} \equiv O$ (identity of the null element).
In the general case, discretization of a set of elements A with continuous structure $\{A_\alpha\}$ is not an isomorphic mapping preserving both measures of structural diversity $I(A)$ (2.3.3) and $I_\Delta(A)$ (2.3.4). The curvature $c(A)$ of the structure $\{A_\alpha\}$ of a set A is characterized by the relative loss of overall information:

$$c(A) = \frac{I(A) - I(A')}{I(A)}. \qquad (2.3.16)$$
The information quantity $I''_L$ is called the information losses of the second genus. For the result of discretization, i.e., for a set $A'$ of elements with discrete structure $\{A_j\}$ of the space Ω, one can introduce a measure of informational redundancy $r(A')$ that characterizes the informational interrelations between the elements of the discrete structure $\{A_j\}$ of a set $A'$ (or simply, the mutual information between them):

$$r(A') = \frac{I(A') - I_\Delta(A')}{I(A')}. \qquad (2.3.18)$$
The sense of the expression (2.3.19) can be elucidated in the following way: from a set of elements A with continuous structure, one cannot extract a greater quantity of information than the value of the relative quantity of information $I_\Delta(A')$ contained in $A'$ owing to the diversity between the elements of its structure. The information losses $I'_L$ and $I''_L$ take place, on the one hand, owing to the curvature $c(A)$ of the structure $\{A_\alpha\}$ of a set A in the metric space Ω and, on the other hand, as a consequence of some informational redundancy $r(A')$ of the discrete set $A'$ (mutual information between its elements $\{A_j\}$).
Discretization $D : A \rightarrow A'$, according to (2.3.19), must provide a maximum of the ratio of the relative quantity of information $I_\Delta(A')$, contained in the set $A'$ with discrete structure, to the overall quantity of information $I(A)$ contained in the set A with continuous structure:

$$\frac{I_\Delta(A')}{I(A)} \rightarrow \max, \qquad (2.3.20)$$

or, equivalently, to provide a minimum of the sum of the information losses of the first ($I'_L$) and the second ($I''_L$) genus:

$$I'_L + I''_L \rightarrow \min.$$
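A minimal sketch of how the criteria fit together, with purely hypothetical information quantities (the numbers below are invented for illustration only):

```python
# Illustrative arithmetic only: evaluating the discretization criteria
# (2.3.16), (2.3.18), (2.3.20) for assumed information quantities.
def curvature(I_A, I_Ad):               # c(A), Eq. (2.3.16)
    return (I_A - I_Ad) / I_A

def redundancy(I_Ad, I_delta_Ad):       # r(A'), Eq. (2.3.18)
    return (I_Ad - I_delta_Ad) / I_Ad

def quality(I_delta_Ad, I_A):           # criterion (2.3.20), to be maximized
    return I_delta_Ad / I_A

# Hypothetical values for two candidate discretizations of the same set A:
I_A = 10.0
candidates = {"D1": (9.0, 7.5), "D2": (8.0, 7.8)}   # (I(A'), I_delta(A'))
best = max(candidates, key=lambda k: quality(candidates[k][1], I_A))
for name, (I_Ad, I_d) in candidates.items():
    print(name, curvature(I_A, I_Ad), redundancy(I_Ad, I_d), quality(I_d, I_A))
print("preferred:", best)               # D2: larger extractable fraction
```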
In this case, the following theorem can be formulated.
This identity provides the possibility of extracting the information quantity contained in a set A from a finite set $A'$ without any information losses. Such a representation of a set A with continuous structure by a finite set $A'$ with discrete structure, when the relation (2.3.21) holds, is equivalent from the standpoint of preserving the overall quantity of information.
In this case, between both measures of structural diversity of the sets of elements with continuous $\{A_\alpha\}$ and discrete $\{A_j\}$ structures, respectively, i.e., between the measures of the sets A and $A'$, the identity holds:

$$m\Bigl(\sum_\alpha A_\alpha\Bigr) \equiv m\Bigl(\sum_j A_j\Bigr),$$

and also the equality between the aforementioned measures and the measure of the symmetric difference $m\bigl(\Delta_j A_j\bigr)$ holds:

$$m\Bigl(\sum_\alpha A_\alpha\Bigr) = m\Bigl(\sum_j A_j\Bigr) = m\Bigl(\mathop{\Delta}_j A_j\Bigr),$$

and between the relative quantity of information $I_\Delta(A)$, contained in the set A with continuous structure, and the relative quantity of information $I_\Delta(A')$, contained in the set of elements $A'$ with discrete structure, the equality holds:

$$I_\Delta(A) = I_\Delta(A'). \qquad (2.3.22)$$
The group of the identities (2.3.25a through d) implies the following corollary.
Corollary 2.3.2. The isomorphic mapping g (2.3.23) preserves all the sorts of information quantities between an arbitrary pair of elements $A_{\alpha_1}, A_{\alpha_2} \in A$ of a set A considered as a subalgebra $B(A)$ of generalized Boolean algebra $B(\Omega)$ with signature $(+, \cdot, -, O)$ of the type (2, 2, 2, 0):

$$A \underset{g^{-1}}{\overset{g}{\rightleftarrows}} A'; \qquad B \underset{g^{-1}}{\overset{g}{\rightleftarrows}} B'; \qquad g \in G, \qquad (2.3.27)$$
The group of identities (2.3.28a) through (2.3.28d) implies the following corol-
lary.
Corollary 2.3.5. The isomorphic mapping g (2.3.27) preserves all the sorts of information quantities between a pair of sets $A, B \subset A \cup B$, whose union $A \cup B$ is considered as a subalgebra $B(A + B)$ of generalized Boolean algebra $B(\Omega)$ with signature $(+, \cdot, -, O)$ of the type (2, 2, 2, 0):

$$I_{A+B} = I_{A'+B'}; \qquad (2.3.29a)$$
$$I_{A\cdot B} = I_{A'\cdot B'}; \qquad (2.3.29b)$$
$$I_{A-B} = I_{A'-B'}; \qquad (2.3.29c)$$
$$I_{A\Delta B} = I_{A'\Delta B'}. \qquad (2.3.29d)$$
$$\ldots = \int_{-\infty}^{\infty} \bigl[\mathrm{M}\{\xi(t_1)/\xi(t_2)\} - \mathrm{M}\{\xi(t_1)\}\bigr]^2\, p(x_2, t_2)\, dx_2,$$

ψξ(tj, tk), which is characterized by the metric d(p, p′) between the joint PDF of the samples $p_2(x_j, x_k; t_j, t_k)$ and the product of their univariate PDFs $p_1(x_j, t_j)$, $p_1(x_k, t_k)$:

$$d(p, p') = \frac{1}{2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \bigl|p_2(x_j, x_k; t_j, t_k) - p_1(x_j, t_j)\, p_1(x_k, t_k)\bigr|\, dx_j\, dx_k. \qquad (3.1.2a)$$
Definition 3.1.1. The function ψξ (tj , tk ), which is defined by the expression (3.1.2)
and characterizes a measure of statistical interrelationship between two samples
ξ (tj ), ξ (tk ) of stochastic process ξ (t), is called the normalized function of statistical
interrelationship (NFSI).
where g[x] is a deterministic function that is rather exactly approximated (with an error not larger than 1% in the interval $x \in [0, 2/3]$ and not larger than 10% in the interval $x \in [2/3, 1]$) by the following expression:

$$g[x] \approx 1 - \sqrt{1 - \frac{2}{\pi}\arcsin|x|}. \qquad (3.1.4)$$
In a similar manner, we define a measure of statistical interrelationship between
two samples ξ (tj ) and η (t0k ) of stochastic processes ξ (t) and η (t0 ), introducing the
function:
where $d_{\xi\eta}(p, p')$ is the metric between the joint PDF $p_2(x_j, y_k; t_j, t'_k)$ of the samples ξ(tj) and η(t′k) and the product of their univariate PDFs $p_1(x_j, t_j)$ and $p_1(y_k, t'_k)$.
Definition 3.1.2. The function ψξη (tj , t0k ), which is defined by (3.1.5) and charac-
terizes a measure of statistical interrelationship between two samples ξ (tj ), η (t0k ) of
stochastic processes ξ (t) and η (t0 ), respectively, is called mutual normalized function
of statistical interrelationship (mutual NFSI).
Mutual NFSI ψξη(tj, t′k) of a pair of Gaussian stochastic processes ξ(t) and η(t′), from Equation (3.1.5), is defined by the normalized cross-correlation function rξη(tj, t′k):
The components ψξη(tj, t′k) and dξη(tj, t′k) appearing in formula (3.1.8) are a closeness measure and a distance measure between two samples ξ(tj) and η(t′k) of stochastic processes ξ(t) and η(t′), respectively. Using these functions, one can introduce similar measures for a pair of stochastic processes ξ(t) and η(t′) by determining the quantities ψξη and dξη in the following way:
Thus we have a closeness measure and a distance measure for a pair of stochastic processes ξ(t) and η(t′). The quantity ψξη determined by the formula (3.1.9a) we shall call the coefficient of statistical interrelation, and the quantity dξη determined by the formula (3.1.9b) we shall call the metric between stochastic processes ξ(t) and η(t′). The coefficient of statistical interrelation and the metric between stochastic processes ξ(t) and η(t′) are connected by a relationship similar to (3.1.8):
According to (3.1.11),

$$\sigma_\xi(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \psi_\xi(\tau)\, d\tau. \qquad (3.1.12)$$
Thus, this relationship implies that the effective width of NFSI, Δτ, is equal to:

$$\Delta\tau = \int_{0}^{\infty} \psi_\xi(\tau)\, d\tau. \qquad (3.1.14)$$
Thus, this relationship implies that the effective width of HSD, Δω, is equal to:

$$\Delta\omega = \frac{1}{\sigma_\xi(0)}\int_{0}^{\infty} \sigma_\xi(\omega)\, d\omega = \frac{1}{2\sigma_\xi(0)}. \qquad (3.1.15)$$
The product of the effective width of NFSI Δτ (3.1.14) and the effective width of HSD Δω (3.1.15) of a stationary stochastic process ξ(t), taking into account the relationship (3.1.12), is equal to:

$$\Delta\tau \cdot \Delta\omega = \frac{1}{2\sigma_\xi(0)}\int_{0}^{\infty} \psi_\xi(\tau)\, d\tau = \frac{\pi}{2}. \qquad (3.1.16)$$
The expression (3.1.16) characterizes the known uncertainty relation for functions connected by the Fourier transform: the larger the effective width of NFSI Δτ, the smaller the effective width of HSD Δω, and vice versa.
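As a worked illustration of (3.1.14) through (3.1.16), take the hypothetical NFSI $\psi_\xi(\tau) = e^{-|\tau|/\tau_0}$, with $\sigma_\xi(\omega)$ understood as its Fourier transform normalized consistently with (3.1.12):

$$\Delta\tau = \int_0^\infty e^{-\tau/\tau_0}\, d\tau = \tau_0, \qquad \sigma_\xi(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-|\tau|/\tau_0}\, e^{-i\omega\tau}\, d\tau = \frac{\tau_0}{\pi\,(1 + \omega^2\tau_0^2)},$$
$$\sigma_\xi(0) = \frac{\tau_0}{\pi}, \qquad \Delta\omega = \frac{1}{2\sigma_\xi(0)} = \frac{\pi}{2\tau_0}, \qquad \Delta\tau \cdot \Delta\omega = \frac{\pi}{2}.$$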
Let f be a bijective mapping of stochastic process ξ(t), determined in the interval Tξ = [t0, t∗], into stochastic process η(t′), determined in the interval Tη = [t′0, t′∗], from the group of mappings G, f ∈ G, assuming the inverse mapping f⁻¹ exists:

where ψξ(tj, tk), ψη(t′j, t′k) are the NFSIs of stochastic processes ξ(t) and η(t′), respectively.
where d(p, p′) is the metric between the joint PDF pξ(xj, xk; tj, tk) of the samples ξ(tj), ξ(tk) and the product of their univariate PDFs pξ(xj, tj), pξ(xk, tk).
$$f : \xi(t_{j,k}) \rightarrow \eta(t'_{j,k}),$$
$$p_\xi(x_j, t_j)\, dx_j = p_\eta(y_j, t'_j)\, dy_j; \qquad p_\xi(x_k, t_k)\, dx_k = p_\eta(y_k, t'_k)\, dy_k.$$
These relations imply that under bijective mappings of stochastic processes (3.1.17), the metric (3.1.19) is preserved, and therefore NFSI (3.1.18) is preserved too:

$$\psi_\xi(t_j, t_k) = \psi_\eta(t'_j, t'_k). \qquad (3.1.20)$$
Corollary 3.1.1. Under the mapping (3.1.17) preserving the domain of definition
of stochastic process Tξ = [t0 , t∗ ] = Tη the identity holds:
Corollary 3.1.2. Under the bijective mapping (3.1.17) of a stationary stochastic process ξ(t), when the condition (3.1.21) holds, besides the fact that NFSI is preserved, ψξ(τ) = ψη(τ), the HSD σξ(ω) is also preserved:
This means that under a nonlinear mapping f of stochastic process ξ(t), the power spectral density Sη(ω) of the process η(t) is distributed within a wider frequency band than the power spectral density Sξ(ω) of the process ξ(t).

Similarly, if one more nonlinear mapping of stochastic process η(t), h : η(t) → η∗(t), is realized, then the power spectral density Sη∗(ω) of the process η∗(t) is distributed within a wider frequency band than the power spectral density Sη(ω) of the process η(t). However, if the stochastic process η(t) is transformed with the inverse function h (with respect to the initial mapping f: h = f⁻¹), then we obtain the initial stochastic process ξ(t): η∗(t) = ξ(t).

It would then seem that the power spectral densities of the processes η∗(t) and ξ(t) should be identically equal, Sη∗(ω) = Sξ(ω), but that contradicts the initial statement concerning the extension of power spectral density under a nonlinear mapping of a stochastic process.
The obtained paradox can be easily elucidated with the use of HSD σξ (ω ) of
stochastic process ξ (t) and Corollary 3.1.2 of Theorem 3.1.1 on its invariance under
one-to-one transformation of stochastic process. In the mapping f : ξ (t) → η (t), we
have an identity between HSDs of the initial ξ (t) and the resultant η (t) processes:
σξ (ω ) = ση (ω ), and under the mapping h : η (t) → η ∗ (t), the similar identity be-
tween HSDs of the processes η (t) and η ∗ (t) holds: ση (ω ) = ση∗ (ω ). Thus, HSD ση∗ (ω )
of stochastic process η ∗ (t) is identically equal to HSD σξ (ω ) of initial stochastic pro-
cess ξ (t): ση∗ (ω ) = σξ (ω ), and if the secondary mapping is inverse with respect to
the initial one, h = f⁻¹, then no paradoxical conclusions appear.
There exists a neutral element (zero) 0 in the additive group Γ(+) of the L-group Γ(+, ∨, ∧): 0 ∈ Γ(+), such that for every x ∈ Γ(+) the inverse element (−x) exists and the identity x + (−x) = 0 holds.
Most signal processing problems deal with stochastic signals (processes) ξ(t), η(t) with symmetric (even) univariate PDFs pξ(x), pη(y): pξ(x) = pξ(−x); pη(y) = pη(−y). The characteristics of statistical interrelation of a pair of instantaneous values (samples) ξt, ηt of stochastic signals (processes) ξ(t), η(t) ∈ Γ introduced in this section, and the main results formulated in the form of theorems, are predominantly oriented toward the class of signals with exactly these properties, i.e., with even univariate PDFs pξ(x), pη(y); these signals (processes) interact in partially ordered sets with the properties of a lattice-ordered group Γ(+, ∨, ∧) (L-group).
Theorem 3.2.1. For the samples ξt, ηt of stochastic processes ξ(t), η(t) interacting in L-group Γ, ξt, ηt ∈ Γ, the functions µ(ξt, ηt), µ′(ξt, ηt), equal to:

are metrics.
Proof. Consider the probabilities P[ξt ∧ ηt > 0], P[ξt ∨ ηt > 0], P[ξt ∧ ηt < 0], P[ξt ∨ ηt < 0], which, according to the formulas [115, (3.2.80)] and [115, (3.2.85)], are equal to:

$$P[\xi_t \wedge \eta_t > 0] = 1 - (F_\xi(0) + F_\eta(0) - F_{\xi\eta}(0, 0)); \qquad (3.2.2a)$$
$$P[\xi_t \vee \eta_t > 0] = 1 - F_{\xi\eta}(0, 0); \qquad (3.2.2b)$$
$$P[\xi_t \wedge \eta_t < 0] = F_\xi(0) + F_\eta(0) - F_{\xi\eta}(0, 0); \qquad (3.2.2c)$$

$$P[\xi_t > 0] + P[\eta_t > 0] = P[\xi_t \vee \eta_t > 0] + P[\xi_t \wedge \eta_t > 0]. \qquad (3.2.3)$$
The valuation P[ξt > 0] is isotonic, inasmuch as the implication [221, § X.1 (V2)] holds:

Joint fulfillment of the equations (3.2.3) and (3.2.4), according to Theorem 1 of [221, § X.1], implies that the quantity µ(ξt, ηt) is equal to:

i.e., the functions µ(ξt, ηt), µ′(ξt, ηt) defined by Equations (3.2.1) and (3.2.1a) are identically equal:

$$\mu(\xi_t, \eta_t) = \mu'(\xi_t, \eta_t),$$

and the function µ′(ξt, ηt) is also a metric.
Thus, the partially ordered set Γ with the operations χ(t) = ξ(t) ∨ η(t), χ̃(t) = ξ(t) ∧ η(t), t ∈ T is a metric space (Γ, µ) with respect to the metric µ (3.2.1). Any pair of stochastic processes ξ(t), η(t) ∈ Γ with even univariate PDFs can then be associated with the following normalized measure between the samples ξt, ηt.
The last relationship and formula (3.2.5) together imply that NMSI ν(ξt, ηt) is determined by the joint CDF Fξη(x, y) and the univariate CDFs Fξ(x), Fη(y) of the samples ξt, ηt:

$$\nu(\xi_t, \eta_t) = 1 + 4F_{\xi\eta}(0, 0) - 2(F_\xi(0) + F_\eta(0)). \qquad (3.2.8)$$
Theorem 3.2.2. For a pair of stochastic processes ξ (t), η (t) with even univariate
PDFs in L–group Γ: ξ (t), η (t) ∈ Γ, t ∈ T , the metric µ(ξt , ηt ) between the samples
ξt , ηt is an invariant of a group H of continuous mappings {hα,β }, hα,β ∈ H;
α, β ∈ A of stochastic processes, which preserve zero (neutral/null element) 0 of the
group Γ(+): hα,β (0) = 0:
where ξ′t, η′t are the samples of stochastic processes ξ′(t), η′(t) in L-group Γ′: hα,β : Γ → Γ′.
Proof. Under the bijective mappings {hα,β}, hα,β ∈ H (3.2.9a), (3.2.9b), the invariance property of the probability differential holds, which implies the identity between the joint CDFs Fξη(x, y), Fξ′η′(x′, y′) and the univariate CDFs Fξ(x), Fξ′(x′); Fη(y), Fη′(y′) of the pairs of samples ξt, ηt; ξ′t, η′t, respectively:

Thus, taking into account (3.2.5), Equations (3.2.10), (3.2.10a), and (3.2.10b) imply the identity (3.2.9).
Corollary 3.2.1. For a pair of stochastic processes ξ (t), η (t) with even univariate
PDFs in L–group Γ: ξ (t), η (t) ∈ Γ, t ∈ T , NMSI ν (ξt , ηt ) of the samples ξt , ηt is
an invariant of a group H of continuous mappings {hα,β }, hα,β ∈ H; α, β ∈ A of
stochastic processes, which preserve zero 0 of group Γ(+): hα,β (0) = 0:
where ξ′t and η′t are the samples of stochastic processes ξ′(t), η′(t) in L-group Γ′: hα,β : Γ → Γ′.
Theorem 3.2.3. For a pair of Gaussian centered stochastic processes ξ (t) and η (t)
with correlation coefficient ρξη between the samples ξt , ηt , NMSI ν (ξt , ηt ) is equal
to:
$$\nu(\xi_t, \eta_t) = \frac{2}{\pi}\arcsin[\rho_{\xi\eta}]. \qquad (3.2.12)$$
Proof. Find an expression for the joint CDF Fξη(x, y) of the processes ξ(t) and η(t) at the point x = 0, y = 0, which, according to Equation (12) of Appendix II of the work [113], is determined by the double integral K₀₀(α):

$$F_{\xi\eta}(0, 0) = \frac{\sqrt{1 - \rho_{\xi\eta}^2}}{\pi}\, K_{00}(\alpha),$$

where $\alpha = \pi - \arccos(\rho_{\xi\eta})$, $K_{00}(\alpha) = \alpha/(2\sin\alpha)$ (see Equation (14) of Appendix II in [113]), and $\sin\alpha = \sqrt{1 - \rho_{\xi\eta}^2}$. Then, after the necessary transformations, we obtain the resultant expression for Fξη(0, 0):

$$F_{\xi\eta}(0, 0) = \Bigl(1 + \frac{2}{\pi}\arcsin[\rho_{\xi\eta}]\Bigr)\Big/ 4.$$

Substituting the last expression into (3.2.8), we obtain the required identity (3.2.12).
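The identity (3.2.12) together with (3.2.8) admits a direct Monte Carlo check (a sketch; the correlation value and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sketch verifying Eq. (3.2.12) against the CDF form (3.2.8)
# for centered Gaussian samples with correlation coefficient rho.
rho, n = 0.5, 1_000_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

F_x0 = np.mean(x < 0)                     # F_xi(0)
F_y0 = np.mean(y < 0)                     # F_eta(0)
F_xy00 = np.mean((x < 0) & (y < 0))       # F_xieta(0, 0)

nu_cdf = 1 + 4 * F_xy00 - 2 * (F_x0 + F_y0)     # Eq. (3.2.8)
nu_gauss = (2 / np.pi) * np.arcsin(rho)         # Eq. (3.2.12)
print(nu_cdf, nu_gauss)                   # both close to 1/3
```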
Theorem 3.2.4. For Gaussian centered stochastic processes ξ(t) and η(t), which additively interact in L-group Γ: χ(t) = ξ(t) + η(t), t ∈ T, the following relationship between NMSIs ν(ξt, χt), ν(ηt, χt), and ν(ξt, ηt) of the corresponding pairs of their samples ξt, χt; ηt, χt; ξt, ηt holds:
Using the relationships [234, (I.3.5)], it is easy to obtain the sum of the first two items on the right side of Equation (3.2.15):

$$\frac{2}{\pi}\arcsin[\rho_{\xi\chi}] + \frac{2}{\pi}\arcsin[\rho_{\eta\chi}] = \frac{2}{\pi}\bigl(\pi - \arcsin[c]\bigr) = \frac{2}{\pi}\Bigl(\pi - \arcsin\Bigl[\sqrt{1 - \rho_{\xi\eta}^2}\Bigr]\Bigr),$$

where $c = \rho_{\xi\chi}\sqrt{1 - \rho_{\eta\chi}^2} + \rho_{\eta\chi}\sqrt{1 - \rho_{\xi\chi}^2}$.

Substituting the sum of arcsines into the right side of (3.2.15), we calculate the sum of NMSIs of the pairs of samples ξt, χt; ηt, χt; ξt, ηt:
Corollary 3.2.2. For Gaussian centered stochastic processes ξ (t), η (t), which ad-
ditively interact in partially ordered set Γ: χ(t) = ξ (t) + η (t), t ∈ T , and their
corresponding pairs of the samples ξt , ηt ; ξt , χt ; χt , ηt , the metric identity holds:
Joint fulfillment of the identities (3.2.7) and (3.2.13) implies this metric identity.
Thus, Theorem 3.2.4 establishes invariance relationship (3.2.13) for NMSI of the
pairs of the samples ξt , χt ; ηt , χt ; ξt , ηt of additively interacting Gaussian stochastic
processes ξ (t), η (t). This identity does not depend on their energetic characteristics
despite the fact that NMSIs ν (ξt , χt ) and ν (ηt , χt ) are the functions of the variances
Dξ and Dη of the samples ξt and ηt of Gaussian processes ξ (t) and η (t).
The results (3.2.13) and (3.2.16) of Theorem 3.2.4 can be generalized to additively interacting processes ξ(t) and η(t) without demanding Gaussianity of their distributions. This generalization is provided by the following theorem.
Theorem 3.2.5. For non-Gaussian stochastic processes ξ(t) and η(t) with even
univariate PDFs, which additively interact with each other in L–group Γ: χ(t) =
ξ (t) + η (t), t ∈ T , and also for their corresponding pairs of the samples ξt , ηt ; ξt ,
χt ; χt , ηt , the metric identity holds:
Metrics µ(ξt , χt ) and µ(ηt , χt ), according to the identity (3.2.5), are determined by
the following relationships:
Taking into account the last identity, the sum of the metrics µ(ξt, χt), µ(ηt, χt) is equal to:

$$\mu(\xi_t, \chi_t) + \mu(\eta_t, \chi_t) = 2(F_\xi(0) + F_\eta(0)) - 4F_{\xi\eta}(0, 0) = \mu(\xi_t, \eta_t).$$
Corollary 3.2.3. For non-Gaussian stochastic processes ξ(t) and η(t) with even
univariate PDFs, which additively interact with each other in L–group Γ: χ(t) =
ξ (t) + η (t), t ∈ T , the following relationship between NMSIs ν (ξt , ηt ), ν (ξt , χt ),
ν (ηt , χt ) of the corresponding pairs of the samples ξt , ηt ; ξt , χt ; ηt , χt holds:
Theorem 3.2.6. For the samples ξt and ηt of stochastic processes ξ(t) and η(t) with even univariate PDFs, which interact in L-group Γ with the operations of join and meet, respectively: χ(t) = ξ(t) ∨ η(t), χ̃(t) = ξ(t) ∧ η(t), t ∈ T, the functions µ(ξt, χt) and µ(ξt, χ̃t), equal according to (3.2.1) to:

$$\mu(\xi_t, \tilde\chi_t) = 2(P[\xi_t \vee \tilde\chi_t > 0] - P[\xi_t \wedge \tilde\chi_t > 0]), \qquad (3.2.19b)$$

are metrics between the corresponding samples ξt, χt and ξt, χ̃t.
Theorem 3.2.7. For stochastic processes ξ (t) and η (t) with even univariate PDFs,
which interact in L–group Γ with operations of join and meet, respectively: χ(t) =
ξ (t) ∨ η (t), χ̃(t) = ξ (t) ∧ η (t), t ∈ T , between metrics µ(ξt , χt ), µ(ξt , χ̃t ) of the
corresponding pairs of the samples ξt , χt and ξt , χ̃t , the following relationships
hold:
µ(ξt , χt ) = 2[Fξ (0) − Fξη (0, 0)]; (3.2.20a)
µ(ξt , χ̃t ) = 2[Fη (0) − Fξη (0, 0)]. (3.2.20b)
Proof. According to the lattice absorption property, the following identities hold:
ξt ∧ χt = ξt ∧ (ξt ∨ ηt ) = ξt ; (3.2.21a)
ξt ∨ χ̃t = ξt ∨ (ξt ∧ ηt ) = ξt . (3.2.21b)
According to the lattice idempotency property, the following identities hold:
ξt ∨ χt = ξt ∨ (ξt ∨ ηt ) = ξt ∨ ηt ; (3.2.22a)
ξt ∧ χ̃t = ξt ∧ (ξt ∧ ηt ) = ξt ∧ ηt . (3.2.22b)
According to Definition 3.2.1, metric µ(ξt , χt ) is equal to:
µ(ξt , χt ) = 2(P[ξt ∨ χt > 0] − P[ξt ∧ χt > 0]). (3.2.23)
According to Equations (3.2.22a) and (3.2.2b), the equality holds:
P[ξt ∨ χt > 0] = P[ξt ∨ ηt > 0] = 1 − Fξη (0, 0), (3.2.24)
and according to (3.2.21a), the equality holds:
P[ξt ∧ χt > 0] = P[ξt > 0] = 1 − Fξ (0), (3.2.25)
where Fξη (x, y ) is the joint CDF of the samples ξt and ηt ; Fξ (x) is the univariate
CDF of the sample ξt .
Substituting the values of probabilities (3.2.24), (3.2.25) into (3.2.23), we obtain:
µ(ξt , χt ) = 2[Fξ (0) − Fξη (0, 0)]. (3.2.26)
Similarly, according to Definition 3.2.1, metric µ(ξt , χ̃t ) is equal to:
µ(ξt , χ̃t ) = 2(P[ξt ∨ χ̃t > 0] − P[ξt ∧ χ̃t > 0]). (3.2.27)
According to (3.2.21b), the equality holds:
P[ξt ∨ χ̃t > 0] = P[ξt > 0] = 1 − Fξ (0), (3.2.28)
and according to the relationships (3.2.22b) and (3.2.2a), the equality holds:
P[ξt ∧ χ̃t > 0] = P[ξt ∧ ηt > 0] = 1 − [Fξ (0) + Fη (0) − Fξη (0, 0)], (3.2.29)
where Fξη (x, y ) is joint CDF of the samples ξt and ηt ; Fξ (x) and Fη (y ) are univariate
CDFs of the samples ξt and ηt .
Substituting the values of probabilities (3.2.28) and (3.2.29) into (3.2.27), we
obtain:
µ(ξt , χ̃t ) = 2[Fη (0) − Fξη (0, 0)]. (3.2.30)
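Relationships (3.2.26) and (3.2.30) are also easy to verify by simulation. The sketch below draws correlated samples with even PDFs and compares the empirical metric (3.2.1) with the CDF expressions (3.2.20a), (3.2.20b); all numeric choices are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo sketch of Theorem 3.2.7: metrics between xi_t and the join/meet
# results chi = xi v eta, chi~ = xi ^ eta, via Eqs. (3.2.20a), (3.2.20b).
n = 1_000_000
x = rng.standard_normal(n)                       # even univariate PDF
y = 0.3 * x + np.sqrt(1 - 0.3**2) * rng.standard_normal(n)
chi, chi_t = np.maximum(x, y), np.minimum(x, y)  # join and meet

def mu(a, b):                                    # metric (3.2.1)
    return 2 * (np.mean(np.maximum(a, b) > 0) - np.mean(np.minimum(a, b) > 0))

F_x0, F_y0 = np.mean(x < 0), np.mean(y < 0)
F_xy00 = np.mean((x < 0) & (y < 0))
print(mu(x, chi),   2 * (F_x0 - F_xy00))         # Eq. (3.2.20a)
print(mu(x, chi_t), 2 * (F_y0 - F_xy00))         # Eq. (3.2.20b)
```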
The identities (3.2.32a) and (3.2.32b) imply that the samples ξt , χt , ηt (ξt , χ̃t ,
ηt ) of stochastic processes ξ (t), η (t), χ(t), χ̃(t), respectively, lie on the same line in
metric space Γ.
Corollary 3.2.6. For stochastic processes ξ (t) and η (t), which interact in L–group
Γ with operations: χ(t) = ξ (t) ∨ η (t), χ̃(t) = ξ (t) ∧ η (t), t ∈ T , and also for their
NMSIs ν (ξt , ηt ), ν (ξt , χt ), ν (ηt , χt ), ν (ξt , χ̃t ), ν (ηt , χ̃t ) of the corresponding pairs of
the samples ξt , ηt ; ξt , χt ; ηt , χt ; ξt , χ̃t ; ηt , χ̃t , the following relationships hold:
Proof of the corollary follows directly from the relationships (3.2.32a), (3.2.32b),
and (3.2.7).
Corollary 3.2.7. For stochastic processes ξ (t) and η (t), which interact in L–group
Γ with operations: χ(t) = ξ (t) ∨ η (t), χ̃(t) = ξ (t) ∧ η (t), t ∈ T , the relationships be-
tween metrics µ(ξt , χt ), µ(ξt , χ̃t ) (3.2.20a), (3.2.20b) and between NMSIs ν (ξt , χt ),
ν (ξt , χ̃t ) (3.2.31a), (3.2.31b) of the corresponding pairs of their samples ξt , χt ,
and ξt , χ̃t are invariants of a group H of continuous mappings {hα,β }, hα,β ∈ H;
α, β ∈ A of stochastic processes (3.2.9a), (3.2.9b):
Proof of the corollary is realized by a direct substitution of CDF values into the rela-
tionships (3.2.20) and (3.2.31): Fξ (0) = 0.5, Fη (0) = 0.5, Fξη (0, 0) = Fξ (0)Fη (0) =
0.25.
Corollary 3.2.10. For statistically independent stochastic processes ξ (t) and η (t)
with even univariate PDFs, which interact in L–group Γ with operations of join
and meet: χ(t) = ξ (t) ∨ η (t), χ̃(t) = ξ (t) ∧ η (t), t ∈ T , metric µ(χt , χ̃t ) and NMSI
ν (χt , χ̃t ) between the samples χt and χ̃t are, respectively, equal to:
Thus, Theorem 3.2.7 establishes invariance relations for metrics and NMSIs
of the pairs of the samples ξt , χt ; ηt , χt and ξt , χ̃t ; ηt , χ̃t of stochastic signals
(processes) ξ (t), η (t) that interact in L–group Γ with operations: χ(t) = ξ (t) ∨ η (t),
χ̃(t) = ξ (t) ∧ η (t), t ∈ T , so that these identities do not depend on the energetic
relationships between interacting processes.
The obtained results allow us to generalize the geometric properties of the metric signal space (Γ, µ) with metric µ (3.2.1).
Stochastic signals (processes) ξ(t), η(t), t ∈ T interact with each other in the metric signal space (Γ, µ) with the properties of L-group Γ(+, ∨, ∧) and metric µ (3.2.1), so that the results of their interaction χ⁺(t), χ∨(t), χ∧(t) are described by the binary operations of addition +, join ∨, and meet ∧, respectively:

$$\chi^{+}_t = \xi_t + \eta_t; \qquad (3.2.41a)$$
$$\chi^{\vee}_t = \xi_t \vee \eta_t; \qquad (3.2.41b)$$
$$\chi^{\wedge}_t = \xi_t \wedge \eta_t. \qquad (3.2.41c)$$

Consider a number of theorems that elucidate the main geometric properties of the metric signal space (Γ, µ) with the properties of L-group Γ(+, ∨, ∧) and metric µ (3.2.1).
Theorem 3.2.8. In the metric signal space (Γ, µ) with metric µ (3.2.1), a pair of samples ξt and ηt of stochastic signals (processes) ξ(t), η(t), t ∈ T and the result of their interaction χ⁺t, defined by the operation of addition (3.2.41a), lie on the same line l₊: ξt, χ⁺t, ηt ∈ l₊, so that the result of interaction χ⁺t lies between the samples ξt and ηt, and the metric identity holds:

$$\mu(\xi_t, \eta_t) = \mu(\xi_t, \chi^{+}_t) + \mu(\chi^{+}_t, \eta_t). \qquad (3.2.42)$$
Theorem 3.2.8 is a reformulation of Theorem 3.2.5 in terms of the notion of a line and the ternary relation of "betweenness".
Theorem 3.2.9. In the metric signal space (Γ, µ) with metric µ (3.2.1), a pair of samples ξt and ηt of stochastic signals (processes) ξ(t), η(t), t ∈ T and the result of their interaction χ∨t, defined by the operation of join (3.2.41b), lie on the same line l∨: ξt, χ∨t, ηt ∈ l∨, so that the result of interaction χ∨t lies between the samples ξt and ηt, and the metric identity holds:

$$\mu(\xi_t, \eta_t) = \mu(\xi_t, \chi^{\vee}_t) + \mu(\chi^{\vee}_t, \eta_t). \qquad (3.2.43)$$
Theorem 3.2.10. In the metric signal space (Γ, µ) with metric µ (3.2.1), a pair of samples ξt and ηt of stochastic signals (processes) ξ(t) and η(t), t ∈ T and the result of their interaction χ∧t, defined by the operation of meet (3.2.41c), lie on the same line l∧: ξt, χ∧t, ηt ∈ l∧, so that the result of interaction χ∧t lies between the samples ξt and ηt, and the metric identity holds:

$$\mu(\xi_t, \eta_t) = \mu(\xi_t, \chi^{\wedge}_t) + \mu(\chi^{\wedge}_t, \eta_t). \qquad (3.2.44)$$
Theorems 3.2.9 and 3.2.10 are a reformulation of Corollary 3.2.5 of Theorem 3.2.7 in terms of the notions of a line and the ternary relation of "betweenness". These two theorems can be united into the following one.
Theorem 3.2.11. In the metric signal space (Γ, µ) with metric µ (3.2.1), a pair of samples ξt and ηt of stochastic signals (processes) ξ(t), η(t), t ∈ T and the results of their interaction χ∨t, χ∧t, defined by the operations of join (3.2.41b) and meet (3.2.41c), lie on the same line l∨,∧: ξt, χ∨t, χ∧t, ηt ∈ l∨,∧; l∨,∧ = l∨ = l∧, so that each result of interaction χ∨t, χ∧t lies between the samples ξt and ηt, each sample of the pair ξt, ηt lies between the samples χ∨t and χ∧t, and the following metric identities between the samples ξt, χ∨t, χ∧t, ηt hold:

$$\mu(\xi_t, \chi^{\wedge}_t) = \mu(\chi^{\vee}_t, \eta_t); \qquad (3.2.45a)$$
$$\mu(\xi_t, \chi^{\vee}_t) = \mu(\chi^{\wedge}_t, \eta_t); \qquad (3.2.45b)$$
$$\mu(\xi_t, \eta_t) = \mu(\chi^{\vee}_t, \chi^{\wedge}_t); \qquad (3.2.45c)$$
$$\mu(\chi^{\vee}_t, \chi^{\wedge}_t) = \mu(\chi^{\vee}_t, \xi_t) + \mu(\xi_t, \chi^{\wedge}_t); \qquad (3.2.45d)$$
$$\mu(\chi^{\vee}_t, \chi^{\wedge}_t) = \mu(\chi^{\vee}_t, \eta_t) + \mu(\eta_t, \chi^{\wedge}_t). \qquad (3.2.45e)$$
Proof. Theorem 3.2.7 implies the following metric identities between the samples ξt, χ∨t, χ∧t, ηt:

$$\mu(\xi_t, \chi^{\vee}_t) = 2[F_\xi(0) - F_{\xi\eta}(0, 0)]; \qquad \mu(\xi_t, \chi^{\wedge}_t) = 2[F_\eta(0) - F_{\xi\eta}(0, 0)];$$
$$\mu(\eta_t, \chi^{\vee}_t) = 2[F_\eta(0) - F_{\xi\eta}(0, 0)]; \qquad \mu(\eta_t, \chi^{\wedge}_t) = 2[F_\xi(0) - F_{\xi\eta}(0, 0)],$$

and these systems imply the identities (3.2.45a) and (3.2.45b). Besides, the following triplets of samples lie on the same lines, respectively: ξt, χ∨t, ηt; ηt, χ∧t, ξt; χ∧t, ξt, χ∨t; χ∨t, ηt, χ∧t. Hence, all four samples ξt, χ∨t, χ∧t, ηt belong to the same line. Summing pairwise the values of the metrics in the first and in the second equality systems, respectively, we obtain the value of the metric µ(χ∨t, χ∧t) between the samples χ∨t, χ∧t:

$$\mu(\chi^{\vee}_t, \chi^{\wedge}_t) = \mu(\chi^{\vee}_t, \xi_t) + \mu(\xi_t, \chi^{\wedge}_t) = \mu(\chi^{\vee}_t, \eta_t) + \mu(\eta_t, \chi^{\wedge}_t) = 2(F_\xi(0) + F_\eta(0)) - 4F_{\xi\eta}(0, 0) = \mu(\xi_t, \eta_t),$$

which proves the identities (3.2.45c), (3.2.45d), and (3.2.45e).
FIGURE 3.2.1 Metric relationships between samples of signals in metric signal space (Γ, µ)
that correspond to the relationships (a) (3.2.42); (b) (3.2.43), (3.2.44), (3.2.45); (c) (3.2.46);
and (d) (3.2.48)
Theorem 3.2.13. In the metric signal space (Γ, µ) with metric µ (3.2.1), the results of interaction χ∨t, χ∧t of a pair of samples ξt and ηt of stochastic signals (processes) ξ(t), η(t), t ∈ T, defined by the operations (3.2.41b) and (3.2.41c), together with the neutral element 0 of the group Γ(+), lie on the same line l₀: χ∨t, χ∧t, 0 ∈ l₀, so that the element χ∧t lies between the elements χ∨t and 0, and the metric identity holds:

$$\mu(0, \chi^{\vee}_t) = \mu(0, \chi^{\wedge}_t) + \mu(\chi^{\wedge}_t, \chi^{\vee}_t). \qquad (3.2.48)$$
$$\mu(0, \chi^{\vee}_t) = 2(P[(\xi_t \vee \eta_t) \vee 0 > 0] - P[(\xi_t \vee \eta_t) \wedge 0 > 0]) = 2P[\xi_t \vee \eta_t > 0];$$
$$\mu(0, \chi^{\wedge}_t) = 2(P[(\xi_t \wedge \eta_t) \vee 0 > 0] - P[(\xi_t \wedge \eta_t) \wedge 0 > 0]) = 2P[\xi_t \wedge \eta_t > 0];$$
$$\mu(\chi^{\wedge}_t, \chi^{\vee}_t) = 2(P[(\xi_t \vee \eta_t) \vee (\xi_t \wedge \eta_t) > 0] - P[(\xi_t \vee \eta_t) \wedge (\xi_t \wedge \eta_t) > 0]) = 2(P[\xi_t \vee \eta_t > 0] - P[\xi_t \wedge \eta_t > 0]),$$

whence the identity (3.2.48) follows.
Lemma 3.2.1. In the metric signal space (Γ, µ) with metric µ (3.2.1), under a mapping hα from the group H of continuous mappings hα ∈ H = {hα} preserving the neutral element 0 of the group Γ(+), hα(0) = 0, for the result of interaction χt (3.2.49) of a pair of samples ξt and ηt of stochastic signals (processes) ξ(t), η(t),
FIGURE 3.2.2 Metric relationships between samples of signals in metric space (Γ, µ) elu-
cidating Theorem 3.2.14
Theorem 3.3.1. PMSI νP (ξt , ηt ) is defined by the joint CDF Fξη (x, y ) of the
samples ξt , ηt :
νP (ξt , ηt ) = 4Fξη (0, 0) − 1. (3.3.2)
Proof. According to the formula [115, (3.2.85)], random variable ξt ∨ηt is character-
ized by CDF Fξ∨η (z ) equal to Fξ∨η (z ) = Fξη (z, z ). So, the probability P[ξt ∨ηt > 0]
is equal to P[ξt ∨ ηt > 0] = 1 − Fξ∨η (0) = 1 − Fξη (0, 0).
As follows from the equalities (3.2.8) and (3.3.2), for stochastic signals (processes) ξ(t), η(t) ∈ Γ with joint CDF Fξη(x, y) and even univariate PDFs pξ(x), pη(y) of the form pξ(x) = pξ(−x), pη(y) = pη(−y), the notions of NMSI ν(ξt, ηt) and PMSI νP(ξt, ηt) coincide. Indeed, for even univariate PDFs Fξ(0) = Fη(0) = 1/2, and (3.2.8) reduces to ν(ξt, ηt) = 1 + 4Fξη(0, 0) − 2 = 4Fξη(0, 0) − 1, so that:

$$\nu(\xi_t, \eta_t) = \nu_P(\xi_t, \eta_t).$$
Thus, the theorems listed below, except for special cases, repeat the corresponding theorems of the previous section, where the properties of NMSI ν(ξt, ηt) are considered, and remain valid for PMSI νP(ξt, ηt).
where P[ξt ∧ ηt < 0] is the probability that random variable ξt ∧ ηt , which is equal
to meet of the samples ξt and ηt , takes a value less than zero.
Proof. According to the formula [115, (3.2.82)], random variable ξt ∧ηt is character-
ized by CDF Fξ∧η (z ) equal to Fξ∧η (z ) = Fξ (z ) + Fη (z ) − Fξη (z, z ), where Fξ (u) and
Fη (v ) are univariate CDFs of the samples ξt and ηt , respectively. So, the probability
P[ξt ∧ ηt < 0] is equal to:
Theorem 3.3.3. For a pair of stochastic processes ξ (t), η (t) with even univariate
PDFs in L–group Γ: ξ (t), η (t) ∈ Γ, t ∈ T , PMSI νP (ξt , ηt ) of a pair of samples ξt ,
ηt is invariant of a group H of continuous mappings {hα,β }, hα,β ∈ H; α, β ∈ A of
stochastic processes, which preserve neutral element 0 of the group Γ(+): hα,β (0) =
0:
where ξ′t and η′t are the samples of stochastic processes ξ′(t) and η′(t) in L-group Γ′: hα,β : Γ → Γ′.
Theorem 3.3.4. For Gaussian centered stochastic processes ξ (t) and η (t) with
correlation coefficient ρξη between the samples ξt and ηt , their PMSI νP (ξt , ηt ) is
equal to:
$$\nu_P(\xi_t, \eta_t) = \frac{2}{\pi}\arcsin[\rho_{\xi\eta}]. \qquad (3.3.5)$$
The proof of the theorem is the same as that of Theorem 3.2.3.
Joint fulfillment of the relationship (3.3.5) of Theorem 3.3.4 and Equation (3.1.6) implies that the mutual NFSI ψξη(t, t) and PMSI νP(ξt, ηt) between two samples ξt and ηt of stochastic processes ξ(t) and η(t), introduced by Definitions 3.1.2 and 3.3.1, respectively, are rather exactly connected by the following approximation:

$$\psi_{\xi\eta}(t, t) \approx 1 - \sqrt{1 - \nu_P(\xi_t, \eta_t)}.$$
Theorem 3.3.5. For Gaussian centered stochastic processes ξ (t) and η (t), which
additively interact in L–group Γ: χ(t) = ξ (t)+η (t), t ∈ T , between PMSIs νP (ξt , χt ),
νP (ηt , χt ), νP (ξt , ηt ) of the corresponding pairs of their samples ξt , χt ; ηt , χt ; ξt ,
ηt , the following relationship holds:
The proof of the theorem is the same as the proof of Theorem 3.2.5.
For a partially ordered set Γ with lattice properties, where the processes ξ(t) and η(t) interact with the results of interaction χ(t), χ̃(t): χ(t) = ξ(t) ∨ η(t), χ̃(t) = ξ(t) ∧ η(t), t ∈ T in the form of the lattice binary operations ∨, ∧, the notion of a normalized measure between the samples ξt and ηt of stochastic processes ξ(t) and η(t) needs refinement. This is stipulated by the fact that the equivalence of the PMSI definitions through the functions (3.3.1) and (3.3.3) does not hold in a partially ordered set Γ with lattice properties, because the evenness property of the PDFs of the samples χt and χ̃t is violated.
Definition 3.3.2. Probabilistic measures of statistical interrelationship (PMSIs)
between the samples ξt , χt and ξt , χ̃t of stochastic processes ξ (t), χ(t), χ̃(t) ∈ Γ:
χ(t) = ξ (t) ∨ η (t), χ̃(t) = ξ (t) ∧ η (t), t ∈ T in partially ordered set Γ with lattice
properties are the quantities νP (ξt , χt ), νP (ξt , χ̃t ) that are respectively equal to:
Theorem 3.3.7. For stochastic processes ξ (t) and η (t), which interact in partially
ordered set Γ with lattice properties: χ(t) = ξ (t) ∨ η (t), χ̃(t) = ξ (t) ∧ η (t), t ∈ T ,
and for their PMSIs νP (ξt , χt ), νP (ηt , χt ); νP (ξt , χ̃t ), νP (ηt , χ̃t ) of the corresponding
pairs of their samples ξt , χt ; ηt , χt and ξt , χ̃t ; ηt , χ̃t , the following relationships
hold:
νP (ξt , χt ) = 1, νP (ηt , χt ) = 1; (3.3.9a)
νP (ξt , χ̃t ) = 1, νP (ηt , χ̃t ) = 1. (3.3.9b)
Proof. According to absorption axiom of lattice, the identities hold:
ξt ∧ χt = ξt ∧ (ξt ∨ ηt ) = ξt ; (3.3.10a)
ηt ∧ χt = ηt ∧ (ξt ∨ ηt ) = ηt ; (3.3.10b)
ξt ∨ χ̃t = ξt ∨ (ξt ∧ ηt ) = ξt ; (3.3.10c)
ηt ∨ χ̃t = ηt ∨ (ξt ∧ ηt ) = ηt . (3.3.10d)
Substituting the results of (3.3.10) into the formula (3.3.8) and taking into account the evenness of the univariate PDFs pξ(x) and pη(y) of the samples ξt, ηt, the probabilities in (3.3.8) are determined by the univariate CDFs Fξ(x), Fη(y) at x = y = 0:

$$P[\xi_t \wedge \chi_t < 0] = F_\xi(0) = 1/2; \qquad P[\eta_t \wedge \chi_t < 0] = F_\eta(0) = 1/2; \qquad (3.3.11a)$$
$$P[\xi_t \vee \tilde\chi_t > 0] = F_\xi(0) = 1/2; \qquad P[\eta_t \vee \tilde\chi_t > 0] = F_\eta(0) = 1/2. \qquad (3.3.11b)$$

Substituting the results (3.3.11) into the formulas (3.3.8), we obtain (3.3.9).
Thus, Theorem 3.3.7 establishes invariance relationships for PMSIs (3.3.9) of the
pairs of samples ξt , χt ; ηt , χt and ξt , χ̃t ; ηt , χ̃t of stochastic processes ξ (t) and η (t),
which interact in partially ordered set Γ with lattice properties: χ(t) = ξ (t) ∨ η (t),
χ̃(t) = ξ (t) ∧ η (t), t ∈ T , and these identities do not depend on probabilistic
distributions of interacting signals and their energetic relationships.
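The distribution-independence claimed by Theorem 3.3.7 can be illustrated by simulation; the sketch below deliberately mixes a non-Gaussian (Laplace) process with a higher-power Gaussian one (arbitrary choices), and the probabilities (3.3.11) still come out approximately 1/2:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo sketch of Theorem 3.3.7: the probabilities (3.3.11) equal 1/2
# regardless of the distributions and power relations of the processes.
n = 1_000_000
x = 3.0 * rng.standard_normal(n)                 # even Gaussian PDF, larger variance
y = rng.laplace(size=n)                          # even non-Gaussian PDF
chi, chi_t = np.maximum(x, y), np.minimum(x, y)  # join and meet

print(np.mean(np.minimum(x, chi) < 0))           # P[xi ^ chi < 0]  -> ~0.5
print(np.mean(np.maximum(x, chi_t) > 0))         # P[xi v chi~ > 0] -> ~0.5
```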
where d(p, p′) is the metric determined by (3.1.2a); d[iξ(tj, t); iξ(tk, t)] is the metric between the functions iξ(tj, t) and iξ(tk, t) of the samples ξ(tj) and ξ(tk), respectively.
Definition 3.4.1. The function iξ (tj , t) connected with NFSI ψξ (tj , tk ) by the
relationship (3.4.1) is called information distribution density (IDD) of stochastic
process ξ (t).
For a stationary stochastic process ξ(t) with NFSI ψξ(τ), the expression (3.4.1) takes the form:

$$i_\xi(t_j, t_j - \tau) = i_\xi(t_j, t_j + \tau).$$
Taking into account the evenness of the functions iξ(τ), ψξ(τ) and the property of an integral with variable upper limit of integration, it is easy to obtain the coupling equation between IDD iξ(τ) and NFSI ψξ(τ) of a stochastic process:

$$i_\xi(\tau) = \begin{cases} \dfrac{1}{2}\,\psi'_{-}(2\tau), & \tau < 0; \\[4pt] -\dfrac{1}{2}\,\psi'_{+}(2\tau), & \tau \ge 0, \end{cases} \qquad (3.4.3)$$

where $\psi'_{-}(\tau)$ and $\psi'_{+}(\tau)$ are the derivatives of NFSI ψξ(τ) on the left and on the right, respectively.

For a Gaussian stationary stochastic process, IDD iξ(τ) is completely defined by the normalized correlation function rξ(τ):

$$i_\xi(\tau) = \begin{cases} \dfrac{1}{2}\,g'_{-}[r_\xi(2\tau)], & \tau < 0; \\[4pt] -\dfrac{1}{2}\,g'_{+}[r_\xi(2\tau)], & \tau \ge 0, \end{cases} \qquad (3.4.4)$$
where $g'_{-}[r_\xi(\tau)]$ and $g'_{+}[r_\xi(\tau)]$ are the derivatives of the deterministic function g[x] on the left and on the right, respectively (see Equations (3.1.3) and (3.1.4)).
As follows from (3.4.3), the IDD iξ(τ) of a stochastic process has a clear physical sense: it characterizes the rate of change of the statistical interrelationships between the samples of the stochastic process.
The formula (3.1.3) implies that, for a stochastic process ξ(t) in the form of white Gaussian noise, NFSI ψξ(τ) is equal to:

$$\psi_\xi(\tau) = \begin{cases} 1, & \tau = 0; \\ 0, & \tau \ne 0, \end{cases}$$

and formula (3.4.3) implies that IDD iξ(tj, t) of an arbitrary sample ξ(tj) has the form of a delta-function: iξ(tj, t) = δ(t − tj). This means that a single sample ξ(tj) of white Gaussian noise does not carry any information regarding the other samples of this stochastic process.
It is easy to determine the IDD of a stochastic process with the help of the coupling equation (3.4.3) if its NFSI is known.
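For instance, here is a sketch of evaluating (3.4.3) numerically for a Gaussian stationary process with rξ(τ) = exp(−W|τ|), taking ψξ = g[rξ] with g approximated by (3.1.4); the value of W and the differencing step are arbitrary choices:

```python
import numpy as np

# Sketch: evaluating the coupling equation (3.4.3) numerically for a stationary
# Gaussian process with normalized correlation function r(tau) = exp(-W|tau|),
# using the approximation (3.1.4) for g[x].
W, h = 1.0, 1e-6

def g(x):                                    # Eq. (3.1.4)
    return 1.0 - np.sqrt(1.0 - (2.0 / np.pi) * np.arcsin(np.abs(x)))

def psi(tau):                                # NFSI: psi(tau) = g[r(tau)], Eq. (3.1.3)
    return g(np.exp(-W * np.abs(tau)))

def idd(tau):                                # IDD via one-sided derivatives, Eq. (3.4.3)
    u = 2.0 * tau
    if tau < 0:
        return 0.5 * (psi(u) - psi(u - h)) / h    # left derivative at u = 2*tau
    return -0.5 * (psi(u + h) - psi(u)) / h       # right derivative at u = 2*tau

for tau in (-1.0, -0.25, 0.05, 0.25, 1.0):
    print(f"i_xi({tau:+.2f}) = {idd(tau):.4f}")   # positive, decaying away from 0
```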
Considering the relation between NFSI and IDD, one should note that IDD is the primary characteristic of this pair for stochastic processes that are capable of carrying information: the IDD determines the NFSI, not vice versa. This means that the expression (3.4.3) makes sense when it is known beforehand that the IDD of the stochastic process exists, i.e., that the stochastic process can carry information. Naturally, there are some classes of stochastic processes for which the use of (3.4.3) is impossible, or is possible only with a proviso. For simplicity of consideration, we take Gaussian stochastic processes as the examples of stochastic processes of various kinds.
Analytical stochastic processes form a wide class of stochastic processes, for
which the use of (3.4.3) makes no sense [115], [4].
A stochastic process ξ(t) is called analytical in the interval [t0, t0 + T] if most realizations of the process admit an analytical continuation in this interval. Analyticity of a process ξ(t) in a neighborhood of the point t0 implies the possibility of representing a realization of this process by a Taylor series with random coefficients:

$$\xi(t) = \sum_{k=0}^{\infty} \xi^{(k)}(t_0)\, \frac{(t - t_0)^k}{k!}, \qquad (3.4.5)$$

where $\xi^{(k)}(t)$ is the k-th order derivative of stochastic process ξ(t) in the mean square sense.
The following theorem determines necessary and sufficient conditions of analyt-
icity of the Gaussian stochastic process [115].
Theorem 3.4.1. Let the normalized correlation function rξ(t1, t2) of a Gaussian stochastic process ξ(t) be an analytical function of two variables in a neighborhood of the point (t0, t0). Then the stochastic process ξ(t) is analytical in a neighborhood of this point.
Below are examples of analytical stochastic processes.
Example 3.4.4. Consider a Gaussian stationary stochastic process with a constant power spectral density S(ω) = N₀ = const bounded by the band [−Δω/2, Δω/2]:

$$S(\omega) = \begin{cases} N_0, & \omega \in [-\Delta\omega/2, \Delta\omega/2]; \\ 0, & \omega \notin [-\Delta\omega/2, \Delta\omega/2]. \end{cases} \qquad (3.4.6)$$

Accordingly, its normalized correlation function r(τ) is determined by the function:

$$r(\tau) = \frac{\sin(\Delta\omega\,\tau/2)}{\Delta\omega\,\tau/2}. \qquad (3.4.7)$$
Example 3.4.5. Consider a Gaussian stationary stochastic process with the normalized correlation function:

$$r(\tau) = \exp(-\mu\tau^2). \qquad (3.4.8)$$

In this case, its power spectral density S(ω) is determined by the expression:

$$S(\omega) = \sqrt{\frac{\pi}{\mu}}\, \exp\Bigl(-\frac{\omega^2}{4\mu}\Bigr). \qquad (3.4.9)$$
Example 3.4.6. Consider a Gaussian stationary stochastic process with the normalized correlation function:

$$r(\tau) = \Bigl(1 + \frac{\tau^2}{\alpha}\Bigr)^{-1}. \qquad (3.4.10)$$

Its power spectral density is:

$$S(\omega) = A\exp(-\alpha|\omega|). \qquad (3.4.11)$$
A wide class of stochastic processes for which the use of the formula (3.4.3) is possible only with a proviso is formed by narrowband Gaussian stationary stochastic processes with an oscillating normalized correlation function:

and $\rho_\perp(\tau) = \dfrac{1}{\pi}\displaystyle\int_{-\infty}^{\infty} \frac{\rho(x)}{\tau - x}\, dx$ is the function connected with the initial one by the Hilbert transform.
2. Modules of derivatives of NFSI in the point (tj , tj ) on the left and on the
right are not equal to zero:
Proof. Let ξ(t) be an arbitrary stochastic process possessing the ability to carry information. This means that for any of its samples ξ(tj), there exists an IDD iξ(tj, t) with the properties 1, 2, 3 listed above on page 102. We find the derivatives of NFSI ψξ(tj, tk) at the point (tj, tj) on the left and on the right.
The definition of the derivative implies:

$$\psi'_\xi(t_j, t_j - 0) = \lim_{\Delta t \to 0} \frac{\psi_\xi(t_j, t_j) - \psi_\xi(t_j, t_j - \Delta t)}{\Delta t}; \qquad (3.4.14a)$$
$$\psi'_\xi(t_j, t_j + 0) = \lim_{\Delta t \to 0} \frac{\psi_\xi(t_j, t_j + \Delta t) - \psi_\xi(t_j, t_j)}{\Delta t}. \qquad (3.4.14b)$$
Taking into account the relationship (3.4.1), the expressions (3.4.14) take the form:

and

$$d[i_\xi(t_j, t);\, i_\xi(t_j \pm \Delta t, t)] \ne 0. \qquad (3.4.17)$$

According to the definition of a metric and the property of IDD evenness, the identity holds:

$$|i_\xi(t_j, t) - i_\xi(t_j - \Delta t, t)| = |i_\xi(t_j, t) - i_\xi(t_j + \Delta t, t)|. \qquad (3.4.18)$$
Joint fulfillment of the formulas (3.4.15), (3.4.16), and (3.4.18) implies that the derivatives of NFSI on the left and on the right have opposite signs and the same module:
The theorem can be described on a qualitative level in the following way. For a stochastic process ξ(t) to possess the ability to carry information, it is necessary for the NFSI ψξ(tj, tk) to be characterized by a sharp peak in a neighborhood of the point (tj, tj). Moreover, the sharper the peak of NFSI ψξ(tj, tk) (i.e., the larger the module of the derivative |ψ′ξ(tj, tj)|), the larger the maximum iξ(tj, tj) of IDD iξ(tj, t), and correspondingly, the larger the overall quantity of information I(T) that can be carried by the stochastic process within the time interval [t0, t0 + T]. Thus, Theorem 3.4.2 states that not all stochastic processes possess the ability to carry information.
Example 3.4.7. A wide class of stochastic processes can be obtained by passing white Gaussian noise through forming Butterworth filters with the squared module of the amplitude-frequency characteristic Kn(ω) [236]:

$$|K_n(\omega)|^2 = \Bigl[1 + \Bigl(\frac{\omega}{W}\Bigr)^{2n}\Bigr]^{-1},$$

where W is the Butterworth filter bandwidth at the 0.5 power level and n is the order of the Butterworth filter.

Butterworth filters satisfy the Paley-Wiener condition of physical realizability. The power spectral density Sn(ω) of a Gaussian Butterworth stochastic process of n-th order is determined by an expression similar to the previous one:

$$S_n(\omega) = \Bigl[1 + \Bigl(\frac{\omega}{W}\Bigr)^{2n}\Bigr]^{-1},$$

where n is the order of the Butterworth stochastic process.
The existence of all the derivatives $R^{(2k)}(t_1, t_2)$ of the covariation function R(t1, t2) up to the 2N-th order inclusively is a necessary and sufficient condition of N-times differentiability of a Gaussian stochastic process. This condition is equivalent to the following one: the power spectral density Sn(ω) decaying faster than $\omega^{-2N-1}$ is a necessary and sufficient condition of N-times differentiability of a Gaussian stochastic process.

The formulated condition implies that a Butterworth stochastic process of n-th order is (n−1)-times differentiable, i.e., Butterworth stochastic processes of order n > 1 are differentiable and, according to Theorem 3.4.2, do not possess the ability to carry information. The Butterworth stochastic process of the first order, with covariation function

$$R(\tau) = \frac{W}{2}\exp[-W|\tau|],$$

is nondifferentiable and has the ability to carry information.
Thus, the aforementioned properties 1, 2, 3 listed on page 102 hold for IDD iξ(tj, t). In considering the informational properties of stochastic processes below, no essential distinctions between their domains are drawn.
Note that for an arbitrary pair of samples ξ(tj) and ξ(tk) of a stochastic process ξ(t) with IDDs iξ(tj, t) and iξ(tk, t), respectively (see Fig. 3.5.1), the metric identity holds:

$$d_{jk} = \frac{1}{2}\int_{T_\xi} |i_\xi(t_j, t) - i_\xi(t_k, t)|\, dt = \frac{1}{2}\, m[X_j \Delta X_k], \qquad (3.5.2)$$

where $d_{jk}$ is the metric between IDD iξ(tj, t) of the sample ξ(tj) and IDD iξ(tk, t) of the sample ξ(tk); Xj = φ[iξ(tj, t)], Xk = φ[iξ(tk, t)]; m[Xj ΔXk] is the measure of the symmetric difference of the sets Xj and Xk.

The mapping (3.5.1) transfers the IDDs iξ(tj, t), iξ(tk, t) into the corresponding equivalent sets Xj, Xk and defines an isometric mapping of the set (I, d) of IDDs {iξ(tα, t)} with metric djk (3.5.2) into the metric space (X, d∗) of the sets {Xα} with metric $d^{*}_{jk} = \frac{1}{2}\, m[X_j \Delta X_k]$.
The normalization property 2 of IDD (see Section 3.4, page 102) provides the normalization of the measure m(Xj) of an arbitrary sample ξ(tj) of stochastic process ξ(t): $m(X_j) = \int_{T_\xi} i_\xi(t_j, t)\, dt = 1$.

FIGURE 3.5.1 IDDs iξ(tj, t), iξ(tk, t) of samples ξ(tj), ξ(tk) of stochastic process ξ(t)

The mapping (3.5.1) allows describing any stochastic process ξ(t) possessing the ability to carry information as a collection of the samples {ξ(tα)} and also as a collection of the sets {Xα}: $X = \bigcup_\alpha X_\alpha$. Note that the measure of a single element Xα possesses the normalization property: m(Xα) = 1.
Thus, any stochastic process (signal) may be considered as a system of the elements {Xα} of the metric space (X, d∗) with metric $d^{*}_{jk} = \frac{1}{2}\, m[X_j \Delta X_k]$ between a pair of elements Xj and Xk.
The mapping (3.5.1) allows considering an arbitrary stochastic process ξ(t) as a collection of statistically dependent samples {ξ(tα)} and also as a subalgebra B(X) of generalized Boolean algebra B(Ω) with a measure m and signature (+, ·, −, O) of the type (2, 2, 2, 0), with the following operations over the elements {Xα} ⊂ X, X ∈ Ω:
Axiom 3.5.1. Axiom of a measure of binary operation. The measure m(Xα) of the element Xα, Xα = Xβ ∘ Xγ; Xα, Xβ, Xγ ∈ X, considered as the result of a binary operation "∘" of subalgebra B(X) of generalized Boolean algebra B(Ω) with a measure m, defines the information quantity I(Xα) = m(Xα) that corresponds to the result of this operation.
Based on this axiom, the measure m(Xα) of the element Xα of the space (X, d∗) defines the quantitative aspect of the information it contains, whereas the binary operation "∘" of generalized Boolean algebra B(Ω) defines its qualitative aspect. Within the framework of Axiom 3.5.1, depending on the kinds of relations between the elements {ξ(tα)} of stochastic process ξ(t), we shall distinguish the following types of information quantities that, according to Axiom 3.5.1, are defined by the corresponding binary operations of generalized Boolean algebra B(Ω) (see Definitions 2.3.1 through 2.3.5).
Definition 3.5.1. The quantity of overall information $I^{+}_{jk}$ contained in an arbitrary pair of samples ξ(tj) and ξ(tk) of stochastic process ξ(t) is the information quantity equal to the measure of the sum m(Xj + Xk) of the elements Xj and Xk:

$$I^{+}_{jk} = m(X_j + X_k) = m(X_j) + m(X_k) - m(X_j \cdot X_k) = 2 - \psi_\xi(t_j, t_k),$$

where Xj = φ[iξ(tj, t)]; Xk = φ[iξ(tk, t)]; m(Xj · Xk) is the measure of the product of the elements Xj and Xk; ψξ(tj, tk) is the normalized function of statistical interrelationship (NFSI) between the samples ξ(tj) and ξ(tk) of stochastic process ξ(t).
Definition 3.5.2. The quantity of mutual information Ijk contained simultaneously in both the sample ξ(tj) and the sample ξ(tk) of stochastic process ξ(t) is the information quantity equal to the measure of the product m(Xj · Xk) of the elements Xj and Xk:

$$I_{jk} = m(X_j \cdot X_k) = \psi_\xi(t_j, t_k).$$
Definition 3.5.3. The quantity of absolute information Ij contained in an arbitrary sample ξ(tj) of stochastic process ξ(t) is the quantity of mutual information Ijj, equal to 1:

$$I_j = I_{jj} = \psi_\xi(t_j, t_j) = m(X_j \cdot X_j) = m(X_j) = 1.$$

Thus, the measure m(Xj) of the element Xj defines the information quantity contained in the sample ξ(tj) concerning itself.
Definition 3.5.4. The quantity of particular relative information $I^{-}_{jk}$ contained in the sample ξ(tj) with respect to the sample ξ(tk) is the information quantity equal to the measure of the difference m(Xj − Xk) between the elements Xj and Xk:

$$I^{-}_{jk} = m(X_j - X_k) = m(X_j) - m(X_j \cdot X_k) = 1 - \psi_\xi(t_j, t_k).$$
Definition 3.5.5. The quantity of relative information $I^{\Delta}_{jk}$ contained in the sample ξ(tj) with respect to the sample ξ(tk) and, vice versa, contained in the sample ξ(tk) with respect to the sample ξ(tj), is the information quantity equal to the measure of the symmetric difference m(Xj ΔXk) between the elements Xj and Xk:

$$I^{\Delta}_{jk} = m(X_j \Delta X_k) = m(X_j - X_k) + m(X_k - X_j) = I^{-}_{jk} + I^{-}_{kj} = 2(1 - \psi_\xi(t_j, t_k)).$$
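Definitions 3.5.1 through 3.5.5 reduce every information quantity of a sample pair to a function of a single NFSI value, which the following sketch tabulates (the correlation value is arbitrary; g is the approximation (3.1.4)):

```python
import numpy as np

# Sketch: the five information quantities of Definitions 3.5.1-3.5.5 as
# functions of the NFSI value psi = psi_xi(t_j, t_k) of a sample pair.
def information_quantities(psi):
    return {
        "overall    I+_jk": 2 - psi,        # m(X_j + X_k)
        "mutual     I_jk ": psi,            # m(X_j · X_k)
        "absolute   I_j  ": 1.0,            # m(X_j)
        "particular I-_jk": 1 - psi,        # m(X_j - X_k)
        "relative   IΔ_jk": 2 * (1 - psi),  # m(X_j Δ X_k)
    }

# For a Gaussian process, psi = g[r] with the approximation (3.1.4):
r = 0.5
psi = 1 - np.sqrt(1 - (2 / np.pi) * np.arcsin(abs(r)))
for name, value in information_quantities(psi).items():
    print(f"{name}: {value:.4f}")
```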
Thus, IDD iξ(tj, t) of the sample ξ(tj) of stochastic process ξ(t) may be defined as the limit of the ratio of the information quantity ΔI(tj, t ∈ [t0, t0 + Δt]), contained in the sample ξ(tj) concerning the instantaneous values of stochastic process ξ(t) within the interval [t0, t0 + Δt], to the value of this interval Δt, as the latter tends to zero:

$$i_\xi(t_j, t) = \lim_{\Delta t \to 0} \frac{\Delta I(t_j,\ t \in [t_0, t_0 + \Delta t])}{\Delta t}.$$

The last relationship implies that IDD iξ(tα, t) is a dimensional function, and its dimensionality is inversely proportional to the dimensionality of the time parameter.
Thus, one can draw the following conclusion: on the one hand, IDD iξ(tα, t) of stochastic process ξ(t) characterizes the distribution of the information contained in a single sample ξ(tα) concerning all the instantaneous values of stochastic process ξ(t). On the other hand, IDD iξ(tα, t) characterizes the distribution of information along the samples {ξ(tα)} of stochastic process ξ(t) concerning the sample ξ(tα).

IDD is inherent to any sample of a stochastic process because a sample is an element of a collection of statistically interconnected samples {ξ(tα)} forming the stochastic process ξ(t); an IDD cannot be associated with an arbitrary random variable, as opposed to a sample ξ(tα) of stochastic process ξ(t). A single random variable considered outside a statistical collection of random variables does not carry any information.
We can generalize the properties of the quantity of mutual information Ijk of a pair of samples ξ(tj) and ξ(tk) of stochastic process ξ(t):

$$0 \le I_{jk} \le I_{jj} = 1; \qquad I_{jk} = \psi_\xi(t_j, t_k).$$
The last identity allows us to define NFSI ψξ(tj, tk), introduced earlier by the relationship (3.1.2), based on the quantity of mutual information Ijk of a pair of samples ξ(tj) and ξ(tk) of stochastic process ξ(t). The first three properties of NFSI are similar to the properties of the quantity of mutual information listed above, and the list of the general properties of NFSI ψξ(tj, tk) of stochastic processes possessing the ability to carry information is as follows:

$$0 \le \psi_\xi(t_j, t_k) \le \psi_\xi(t_j, t_j) = 1.$$

4. The modules of the derivatives of NFSI at the point (tj, tj) on the left and on the right are not equal to zero:

5. The modules of the derivatives of NFSI at τ = 0 on the left and on the right are not equal to zero:

$$|\psi'_\xi(\tau - 0)| = |\psi'_\xi(\tau + 0)| \ne 0.$$
Taking into account the aforementioned considerations, the list of corollaries of Theorem 3.1.1 may be continued.
Corollary 3.5.1. Under the bijective mapping (3.1.17), the measures of all sorts of information contained in a pair of samples ξ(tj) and ξ(tk) of stochastic process ξ(t) are preserved.

Quantity of mutual information Ijk:

Quantity of relative information $I^{\Delta}_{jk}$:

$$I^{\Delta}_{jk} = m(X_j \Delta X_k) = 2(1 - \psi_\xi(t_j, t_k)) = 2(1 - \psi_\eta(t'_j, t'_k)) = m(Y_j \Delta Y_k), \qquad (3.5.13)$$

where Xj = φ[iξ(tj, t)]; Xk = φ[iξ(tk, t)]; Yj = φ[iη(t′j, t′)]; Yk = φ[iη(t′k, t′)] are the ordinate sets of the IDDs iξ(tα, t), iη(t′β, t′) of stochastic processes ξ(t) and η(t′), respectively, defined by the formula (3.5.1).
Corollary 3.5.2. Under the bijective mapping (3.1.17) of stochastic process ξ(t), the differential of information quantity iξ(tj, t)dt is preserved:

where d[iξη(t′k, t); iξ(tj, t)] is the metric between mutual IDD iξη(t′k, t) of the sample η(t′k) and IDD iξ(tj, t) of the sample ξ(tj).
Definition 3.5.9. The function iξη(t′k, t) connected with mutual NFSI ψξη(tj, t′k) and IDD iξ(tj, t) by the relationship (3.5.17) is called the mutual information distribution density (mutual IDD).
The function d[iξη(t′k, t); iξ(tj, t)] may be considered as a metric between the samples ξ(tj), η(t′k) of stochastic processes ξ(t) and η(t′), respectively. According to (3.5.17), the functions iξη(t′k, t), iξ(tj, t) and ψξη(tj, t′k), d[iξη(t′k, t); iξ(tj, t)] are connected by the identity:

we obtain a closeness measure and a distance measure for stochastic processes ξ(t) and η(t′), respectively.
The quantity ψξη determined by the formulas (3.1.9a) and (3.5.19a) we shall call the coefficient of statistical interrelation, and the quantity ρξη determined by the formula (3.5.19b) we shall call the metric between stochastic processes ξ(t) and η(t′). As against the metric dξη defined by the formula (3.1.9b), the metric ρξη (3.5.19b) is based on the distance between the mutual IDD and the IDD of the samples of stochastic processes ξ(t) and η(t′).
The coefficient of statistical interrelation ψξη of stochastic processes ξ(t), η(t′) and the metric ρξη between them are connected by a relationship similar to the identity (3.1.8):

$$\psi_{\xi\eta} + \rho_{\xi\eta} = 1. \qquad (3.5.20)$$
Mutual IDD iξη(t′k, t), along with IDD iη(t′k, t′), is a characteristic of a single sample η(t′k) of the sample collection {η(t′β)} of stochastic process η(t′); but unlike IDD, the sample η(t′k) is considered in its relation to the sample collection {ξ(tα)} of another stochastic process ξ(t).

Mutual IDD iηξ(tj, t′) characterizes the distribution of information along the time parameter t′ (along the samples {η(t′β)} of stochastic process η(t′) concerning the sample ξ(tj) of stochastic process ξ(t)) and is connected with mutual NFSI ψηξ(t′k, tj) by the equation:

Consider the main properties of mutual IDD iξη(t′k, t) of a pair of stochastic processes ξ(t) and η(t′).
7. For stationary coupled stochastic processes ξ(t) and η(t′), the metric (3.5.24) between mutual IDD iξη(t′k, t) of the sample η(t′k) and IDD iξ(tj, t) of the sample ξ(tj) depends only on the time difference τ = t′k − tj between the samples η(t′k) and ξ(tj) of stochastic processes η(t′) and ξ(t):

$$d[i_{\xi\eta}(t'_k, t);\, i_\xi(t_j, t)] = \frac{1}{2}\int_{-\infty}^{\infty} |i_{\xi\eta}(t_j + \tau, t) - i_\xi(t_j, t)|\, dt.$$
(a) Let ξ(t) and η(t′) be functionally interconnected stochastic processes (3.5.25). Then for each sample η(t′k) of stochastic process η(t′), there exists a sample ξ(tj) of stochastic process ξ(t) such that the metric (3.5.24) between mutual IDD iξη(t′k, t) of the sample η(t′k) and IDD iξ(tj, t) of the sample ξ(tj) is equal to zero:

$$d[i_{\xi\eta}(t'_k, t);\, i_\xi(t_j, t)] = 0.$$
The eight properties of mutual IDD directly imply general properties of mutual
NFSI.
1. Mutual NFSI ψξη(tj, t′k) is nonnegative and bounded: 0 ≤ ψξη(tj, t′k) ≤ 1.
2. Mutual NFSI ψξη(tj, t′k) is symmetric: ψξη(tj, t′k) = ψηξ(t′k, tj).
3. The equality ψξη(tj, t′k) = 1 holds if and only if stochastic processes ξ(t), η(t′) are connected by one-to-one correspondence (3.5.25).
4. The equality ψξη(tj, t′k) = 0 holds if and only if stochastic processes ξ(t), η(t′) are statistically independent.
Let each IDD iξ(tj, t), iη(t′k, t′) and also each mutual IDD iηξ(tj, t′), iξη(t′k, t) of an arbitrary pair of samples ξ(tj), η(t′k) of statistically dependent (in the general case) stochastic processes ξ(t), η(t′) be associated with their ordinate sets Xj, Yk, Xj^Y, Yk^X (see Fig. 3.5.2) by the mapping ϕ (3.5.1):

Xj = ϕ[iξ(tj, t)] = {(t, y): t ∈ Tξ; 0 ≤ y ≤ iξ(tj, t)}; (3.5.26a)
Yk = ϕ[iη(t′k, t′)] = {(t′, z): t′ ∈ Tη; 0 ≤ z ≤ iη(t′k, t′)}; (3.5.26b)
Xj^Y = ϕ[iηξ(tj, t′)] = {(t′, z): t ∈ Tξ, t′ ∈ Tη; 0 ≤ z ≤ iηξ(tj, t′)}; (3.5.26c)
Yk^X = ϕ[iξη(t′k, t)] = {(t, y): t ∈ Tξ, t′ ∈ Tη; 0 ≤ y ≤ iξη(t′k, t)}, (3.5.26d)

where Tξ = [t0, t0 + Tx], Tη = [t′0, t′0 + Ty] are the domains of definition of stochastic processes ξ(t), η(t′), respectively.
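The mapping ϕ (3.5.26) can also be made concrete by rasterizing the ordinate sets as plane regions. The sketch below (an illustration under assumed triangular IDDs, with hypothetical grids and parameters) computes the measures and the metric (3.5.27a) both as areas and as integrals of the corresponding densities:

import numpy as np

t = np.linspace(0.0, 2.0, 2001)
y = np.linspace(0.0, 3.0, 3001)
dt, dy = t[1] - t[0], y[1] - y[0]

def idd(center, a):
    # triangular IDD of unit area and half-width a (illustrative)
    return np.clip(1.0 - np.abs(t - center) / a, 0.0, None) / a

i_xi  = idd(0.9, 0.6)       # IDD i_xi(t_j, t) of the sample xi(t_j)
i_mut = idd(1.2, 0.6)       # mutual IDD i_xieta(t'_k, t) on the same axis t

# ordinate sets (3.5.26) rasterized as boolean regions {(t, y): 0 <= y <= i(t)}
X_j  = y[None, :] <= i_xi[:, None]
Y_kX = y[None, :] <= i_mut[:, None]
area = lambda mask: mask.sum() * dt * dy

print(area(X_j), area(Y_kX))            # ~1 and ~1: normalization of the measures
print(area(X_j & Y_kX))                 # m(Xj * Yk^X)
print(0.5 * area(X_j ^ Y_kX))           # metric (3.5.27a) via symmetric difference
print(0.5 * np.abs(i_xi - i_mut).sum() * dt)   # the same metric from the densities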
For an arbitrary pair of the samples ξ(tj), η(t′k) of statistically dependent stochastic processes ξ(t), η(t′) with IDDs iξ(tj, t), iη(t′k, t′) and mutual IDDs iηξ(tj, t′), iξη(t′k, t), the metric identities hold:

djk^{ξη} = d[iξ(tj, t); iξη(t′k, t)] = (1/2) ∫_{Tξ} |iξ(tj, t) − iξη(t′k, t)| dt = (1/2) m(Xj ∆ Yk^X); (3.5.27a)

dkj^{ηξ} = d[iη(t′k, t′); iηξ(tj, t′)] = (1/2) ∫_{Tη} |iη(t′k, t′) − iηξ(tj, t′)| dt′ = (1/2) m(Xj^Y ∆ Yk), (3.5.27b)
FIGURE 3.5.2 IDD iξ(tj, t) and mutual IDD iξη(t′k, t) of samples ξ(tj), η(t′k) of stochastic processes ξ(t), η(t′)
where djk^{ξη} / dkj^{ηξ} are the metrics between the mutual IDD of the sample η(t′k)/ξ(tj) and the IDD of the sample ξ(tj)/η(t′k), respectively.
According to the equality (3.5.23), the metric identity holds:
(1/2) m(Xj ∆ Yk^X) = (1/2) m(Xj^Y ∆ Yk).
Besides, for an arbitrary pair of the samples ξ(tj)/η(t′k) and ξ(tl)/η(t′m) of stochastic process ξ(t)/η(t′), the metric identities hold (see relationship (3.5.2)):

djl^ξ = (1/2) m(Xj ∆ Xl);   dkm^η = (1/2) m(Yk ∆ Ym). (3.5.28)
The mappings (3.5.26) transform IDDs iξ(tj, t), iη(t′k, t′) and mutual IDDs iηξ(tj, t′), iξη(t′k, t) into the corresponding equivalent sets Xj, Yk, Xj^Y, Yk^X, defining an isometric mapping of the function space (I, djk^{ξη}, djl^ξ, dkm^η) of stochastic processes ξ(t), η(t′) with metrics djk^{ξη} (3.5.27) and djl^ξ, dkm^η (3.5.28) into the set space (Ω, djk^{ξη}, djl^ξ, dkm^η) with metrics (1/2) m(Xj ∆ Yk^X), (1/2) m(Xj ∆ Xl), (1/2) m(Yk ∆ Ym).
The normalization property of IDD and mutual IDD of a pair of samples ξ(tj), η(t′k) of stochastic processes ξ(t), η(t′) provides the normalization of the measures m(Xj), m(Yk), m(Xj^Y), m(Yk^X):

m(Xj) = ∫_{Tξ} iξ(tj, t) dt = 1,   m(Xj^Y) = ∫_{Tη} iηξ(tj, t′) dt′ = 1;

m(Yk) = ∫_{Tη} iη(t′k, t′) dt′ = 1,   m(Yk^X) = ∫_{Tξ} iξη(t′k, t) dt = 1.
The mappings (3.5.26) allow us to consider stochastic processes (signals) ξ(t), η(t′) with IDDs iξ(tj, t), iη(t′k, t′) and mutual IDDs iηξ(tj, t′), iξη(t′k, t) as the collections of statistically dependent samples {ξ(tα)}, {η(t′β)}, and also as the collections of sets X = {Xα}, Y = {Yβ}:

ϕ: ξ(t) → X = ∪α Xα;   η(t′) → Y = ∪β Yβ,

where X and Y are the results of mappings (3.5.1) of stochastic processes ξ(t) and η(t′) into the sets of the ordinate sets {Xα}, {Yβ}: X = ∪α Xα, Y = ∪β Yβ; Xα, Yβ^X are the results of mappings (3.5.26) of IDD iξ(tα, t) and mutual IDD iξη(t′β, t) of the samples ξ(tα) and η(t′β) of stochastic processes ξ(t) and η(t′).
The properties 1 through 4 of mutual NFSI (see page 118) of stochastic processes
ξ (t) and η (t0 ) may be supplemented with the following theorem.
Theorem 3.5.1. Let f be a bijective mapping of stochastic processes ξ(t) and η(t′), defined in the intervals Tξ = [t0ξ, tξ], Tη = [t0η, tη], into stochastic processes α(t″) and β(t‴) defined in the intervals Tα = [t0α, tα], Tβ = [t0β, tβ], respectively:

α(t″) = f[ξ(t)], ξ(t) = f⁻¹[α(t″)]; (3.5.29a)
β(t‴) = f[η(t′)], η(t′) = f⁻¹[β(t‴)]. (3.5.29b)

Then the function f that maps the stochastic processes ξ(t) and η(t′) into stochastic processes α(t″) and β(t‴), respectively, preserves mutual NFSI:

ψξη(tj, t′k) = ψαβ(t″j, t‴k).
The proof of Theorem 3.5.1 is similar to that of Theorem 3.1.1.
Corollary 3.5.5. Under bijective mappings (3.5.29), the measures of all sorts of information contained in a pair of the samples ξ(tj), η(t′k) of stochastic processes ξ(t) and η(t′) are preserved:

Quantity of mutual information Ijk:

Ijk = m(Xj · Yk^X) = ψξη(tj, t′k) = ψαβ(t″j, t‴k) = m(Aj · Bk^A);

Quantity of particular relative information Ijk^−:

Ijk^− = m(Xj − Yk^X) = 1 − ψξη(tj, t′k) = 1 − ψαβ(t″j, t‴k) = m(Aj − Bk^A); (3.5.32)

Quantity of relative information Ijk^∆:

Ijk^∆ = m(Xj ∆ Yk^X) = 2(1 − ψξη(tj, t′k)) = 2(1 − ψαβ(t″j, t‴k)) = m(Aj ∆ Bk^A), (3.5.33)

where Xj = ϕ[iξ(tj, t)]; Yk^X = ϕ[iξη(t′k, t)]; Aj = ϕ[iα(t″j, t″)]; Bk^A = ϕ[iαβ(t‴k, t‴)] are the ordinate sets of IDD iξ(tj, t) and mutual IDD iξη(t′k, t) of stochastic processes ξ(t) and η(t′), and also of IDD iα(t″j, t″) and mutual IDD iαβ(t‴k, t‴) of stochastic processes α(t″) and β(t‴), respectively, defined by the relationships (3.5.26).
Corollary 3.5.6. Under bijective mappings (3.5.29), the quantity of mutual information I[ξ(t), η(t′)] contained in a pair of stochastic processes ξ(t) and η(t′) is preserved:

I[ξ(t), η(t′)] = m(X · Y) = m[Σβ Σα (Yβ^X · Xα)] = m[Σδ Σγ (Bδ^A · Aγ)] = m(A · B) = I[α(t″), β(t‴)], (3.5.34)

where Xα, Yβ^X are the results of mappings (3.5.26) of IDD iξ(tα, t) and mutual IDD iξη(t′β, t) of the samples ξ(tα) and η(t′β) of stochastic processes ξ(t) and η(t′); X, Y are the results of mappings (3.5.1) of stochastic processes ξ(t) and η(t′) into the sets of ordinate sets {Xα}, {Yβ}: X = ∪α Xα, Y = ∪β Yβ; Aγ, Bδ^A are the results of mappings (3.5.26) of IDD iα(t″γ, t″) and mutual IDD iαβ(t‴δ, t″) of the samples α(t″γ) and β(t‴δ) of stochastic processes α(t″) and β(t‴); A, B are the results of mappings (3.5.1) of stochastic processes α(t″) and β(t‴) into the sets of ordinate sets {Aγ}, {Bδ}: A = ∪γ Aγ, B = ∪δ Bδ.
The principal ideas and results of Chapters 2 and 3, taking into account the requirements on both a signal space concept and a measure of information quantity formulated in Section 1.3, allow using the apparatus of generalized Boolean algebra to construct the models of spaces where informational and real physical signal interactions occur. On the other hand, these ideas and results permit us to establish the main regularities and relationships of signal interactions in such spaces.
However, one should clearly distinguish these two models of signal spaces and
their main geometrical and informational properties. Within the framework of the
signal space model, it is necessary to obtain results that define quantitative infor-
mational relationships between the signals based on their homomorphic and iso-
morphic mappings. It is interesting to investigate the quantitative informational
relationships between the signals and the results of their interactions in a signal
space with properties of additive commutative group (in linear space) and also in
signal spaces with other algebraic properties.
Definition 4.1.1. The set Γ of stochastic signals {ξ(t), η(t), . . .} interacting with each other forms the physical signal space that is a commutative semigroup SG = (Γ, ⊕) with respect to a binary operation ⊕.
Definition 4.1.2. Binary operation ⊕: χ(t) = ξ (t) ⊕ η (t) between the elements
ξ (t) and η (t) of semigroup SG = (Γ, ⊕) is called physical interaction of the signals
ξ (t) and η (t) in physical signal space Γ.
The mappings (3.5.26) allow us to consider the stochastic processes (signals) ξ(t) and η(t′) with IDDs iξ(tj, t) and iη(t′k, t′) and mutual IDDs iηξ(tj, t′) and iξη(t′k, t) as the collections of statistically dependent samples {ξ(tα)}, {η(t′β)}, and also as the collections of sets X = {Xα}, Y = {Yβ}:

ϕ: ξ(t) → X = ∪α Xα;   η(t′) → Y = ∪β Yβ. (4.1.1)
ϕ: Γ → Ω. (4.1.2)
The mapping ϕ (4.1.1) describes the stochastic processes (signals) from a general algebraic standpoint on the basis of the notion of information carrier space, whose content is elucidated in Chapter 2.
In order to fix the interrelation (4.1.2) between physical and informational sig-
nal spaces while denoting the signals ξ (t) and η (t) and their images X and Y ,
connected by the relationship (4.1.1), the notations ξ (t)/X and η (t)/Y will be
used, respectively.
Generally, in physical signal space Γ = {ξ(t), η(t), . . .}, the instantaneous values {ξ(tα)} and {η(tβ)} of the signals ξ(t) and η(t) interact in the same reference system according to Definition 4.1.2: ξ(tα) ⊕ η(tα). Since the interrelations between informational characteristics of the signals are considered in both physical and informational signal spaces, to provide a common approach we suppose that the interaction of instantaneous values {ξ(tα)}, {η(t′β)} of the signals ξ(t), η(t′) in physical signal space may, in the general case, be realized in distinct reference systems: ξ(tα) ⊕ η(t′α).
We define the notion of informational signal space Ω, using the analogy with the notion of information carrier space (see Section 2.1), on the basis of the apparatus of generalized Boolean algebra B(Ω) with a measure m.
Definition 4.1.3. Informational signal space Ω is a set of the elements {X, Y, . . .}:
{X, Y, . . .} ⊂ Ω: {ϕ : ξ (t) → X, η (t0 ) → Y, . . .} characterized by the following
general properties.
2. In the space Γ/Ω, the signal ξ(t)/X with IDD iξ(tα, t) and NFSI ψξ(tα, tβ) forms a collection of the elements {ξ(tα)}/{Xα} with normalized measure m(Xα) = 1.
3. In the space Γ/Ω, the signal ξ(t)/X with IDD iξ(tα, t) possesses the property of continuity if, for an arbitrary pair of the elements ξ(tα), ξ(tβ)/Xα, Xβ, there exists an element ξ(tγ)/Xγ such that tα < tγ < tβ (α < γ < β). We call such a signal a signal with continuous structure. The signals that do not possess this property are called signals with discrete structure.
4. In the space Ω, the following operations are defined over the elements {Xα} ⊂ X, {Yβ} ⊂ Y, X ∈ Ω, Y ∈ Ω that characterize informational interrelations between them:

(a) Addition: X + Y = Σα Xα + Σβ Yβ;
(b) Multiplication: Xα1 · Xα2, Yβ1 · Yβ2, Xα · Yβ, X · Y = (Σα Xα) · (Σβ Yβ);
(c) Difference: Xα1 − Xα2, Yβ1 − Yβ2, Xα − Yβ, X − Y = X − (X · Y);
(d) Symmetric difference: Xα1 ∆ Xα2, Yβ1 ∆ Yβ2, Xα ∆ Yβ, X ∆ Y = (X − Y) + (Y − X);
(e) Null element O: X ∆ X = O; X − X = O.
5. A measure m introduces a metric upon generalized Boolean algebra B(Ω), defining the distance ρ(X, Y) between the signals ξ(t)/X and η(t)/Y (between the sets X, Y ∈ Ω) by the relationship:

ρ(X, Y) = m(X ∆ Y). (4.1.3)
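Under the ordinate-set representation, the operations (a) through (e) reduce to pointwise operations on the densities: addition to max, multiplication to min, difference to the positive part, and the metric (4.1.3) to the integral of |iX − iY|. A minimal sketch under this assumption (densities and grid are illustrative, not the book's):

import numpy as np

t = np.linspace(0.0, 1.0, 10001)
dt = t[1] - t[0]
ix = np.exp(-((t - 0.4) / 0.1) ** 2)     # density representing X (illustrative)
iy = np.exp(-((t - 0.6) / 0.1) ** 2)     # density representing Y

add   = np.maximum(ix, iy)               # X + Y: union of the ordinate sets
mult  = np.minimum(ix, iy)               # X * Y: intersection
diff  = np.clip(ix - iy, 0.0, None)      # X - Y: relative complement
sdiff = np.abs(ix - iy)                  # X Delta Y: symmetric difference
null  = np.abs(ix - ix)                  # X Delta X = O: null element

m = lambda d: d.sum() * dt               # measure = area under the density
print(m(add), m(ix) + m(iy) - m(mult))   # m(X+Y) = m(X) + m(Y) - m(XY)
print(m(sdiff), m(diff) + m(np.clip(iy - ix, 0.0, None)))
print(m(sdiff))                          # rho(X, Y) = m(X Delta Y), metric (4.1.3)
print(m(null))                           # 0: X Delta X = O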
Axiom 4.1.1. Axiom of a measure of binary operation. Measure m(Z ) of the element
Z: Z = X ◦ Y ; X, Y, Z ∈ Ω considered as the result of binary operation ◦ of
generalized Boolean algebra B(Ω) with a measure m defines the information quantity
IZ = m(Z ) that corresponds to the result of this operation.
The axiom implies that a measure m of the element Z of the space Ω defines a
quantitative aspect of information contained in this element, while a binary oper-
ation “◦” of generalized Boolean algebra B(Ω) defines a qualitative aspect of this
information.
Within the framework of Axiom 4.1.1, depending on relations between the signals ξ(t)/X = ∪α Xα and η(t′)/Y = ∪β Yβ, we shall distinguish the following types of information quantities. The same information quantities are denoted in a twofold manner: on the one hand, as in Definitions 3.5.1 through 3.5.5, with respect to the signals ξ(t) and η(t′) directly, and on the other hand, with respect to the sets X = ∪α Xα, Y = ∪β Yβ associated with the signals by the mapping (4.1.1).
Quantity of mutual information I[ξ(t), η(t′)] = IXY contained in a pair of the signals ξ(t)/X and η(t′)/Y is an information quantity contained in the set that is the product of the elements X and Y of generalized Boolean algebra B(Ω):

I[ξ(t), η(t′)] = IXY = m(X · Y) = m[Σα Σβ (Xα^Y · Yβ)] = m[Σβ Σα (Yβ^X · Xα)],

where X and Y are the results of the mapping (4.1.1) of stochastic processes (signals) ξ(t) and η(t′) into the sets of ordinate sets {Xα} and {Yβ}: X = ∪α Xα, Y = ∪β Yβ; Xα and Yβ^X are the results of mappings (3.5.26) of IDD iξ(tα, t) and mutual IDD iξη(t′β, t) of the samples ξ(tα) and η(t′β) of stochastic processes (signals) ξ(t) and η(t′), respectively.
Remark 4.1.1. Quantity of absolute information IX contained in the signal ξ (t)
may be interpreted as the quantity of overall information IX+X = m(X + X ), or
as the quantity of mutual information IXX = m(X · X ) contained in the signal ξ (t)
with respect to itself.
Definition 4.1.7. Quantity of particular relative information I − [ξ (t), η (t0 )] =
IX−Y contained in the signal ξ (t)/X with respect to the signal η (t0 )/Y (or vice
versa, contained in the signal η (t0 )/Y with respect to the signal ξ (t)/X, i.e.,
I − [η (t0 ), ξ (t)] = IY −X ), is an information quantity contained in the difference be-
tween the elements X and Y of generalized Boolean algebra B(Ω):
I − [ξ (t), η (t0 )] = IX−Y = m(X − Y ) = m(X ) − m(XY );
I − [η (t0 ), ξ (t)] = IY −X = m(Y − X ) = m(Y ) − m(XY ).
Definition 4.1.8. Quantity of relative information I ∆ [ξ (t), η (t0 )] = IX∆Y con-
tained in the signal ξ (t)/X with respect to the signal η (t0 )/Y is an information
quantity contained in the symmetric difference between the elements X and Y of
generalized Boolean algebra B(Ω):
I ∆ [ξ (t), η (t0 )] = IX∆Y = m(X ∆Y ) = m(X − Y ) + m(Y − X ) = IX−Y + IY −X .
Quantity of relative information IX∆Y is identically equal to an introduced
metric:
IX∆Y = ρ(X, Y ).
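These relationships admit a direct numerical check. In the sketch below (rectangular densities chosen purely for illustration), IX, IY, IXY, IX+Y, IX−Y, and IX∆Y are computed from the ordinate-set representation, and the identities IX+Y = IX + IY − IXY and IX∆Y = IX + IY − 2IXY = ρ(X, Y) are verified:

import numpy as np

t = np.linspace(0.0, 1.0, 20001)
dt = t[1] - t[0]
ix = np.where((t > 0.1) & (t < 0.6), 2.0, 0.0)   # rectangular density of X
iy = np.where((t > 0.4) & (t < 0.9), 2.0, 0.0)   # rectangular density of Y
m = lambda d: d.sum() * dt

I_X, I_Y = m(ix), m(iy)
I_XY  = m(np.minimum(ix, iy))                    # quantity of mutual information
I_XpY = m(np.maximum(ix, iy))                    # quantity of overall information
I_XmY = m(np.clip(ix - iy, 0.0, None))           # particular relative information
I_XdY = m(np.abs(ix - iy))                       # relative information

print(I_XpY, I_X + I_Y - I_XY)                   # I_{X+Y} = I_X + I_Y - I_XY
print(I_XdY, I_XmY + m(np.clip(iy - ix, 0.0, None)))
print(I_XdY, I_X + I_Y - 2 * I_XY)               # = rho(X, Y)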
Regarding units of information quantity: as a unit of information quantity in signal space Ω, we take the quantity of absolute information I[ξ(tα)] contained in a single element ξ(tα) of the signal ξ(t) with a domain of definition Tξ (in the form of a discrete set or continuum) and information distribution density (IDD) iξ(tα, t); according to the relationship (3.5.3), it is equal to:

I[ξ(tα)] = m(Xα)|_{tα∈Tξ} = ∫_{Tξ} iξ(tα, t) dt = 1 abit. (4.1.4)
The metric signal space Ω is an informational space and allows us to give the following geometrical interpretation of the main introduced notions.

In signal space Ω, all the elements of the signal ξ(t)/X = ∪α Xα are situated on the surface of some n-dimensional sphere Sp(O, R), whose center is the null element O of the space Ω and whose radius R is equal to 1:

ρ(O, Xα) = m(Xα ∆ O) = m(Xα) = R = 1.
The distance from the null element O of the space Ω to an arbitrary signal ξ(t)/X is equal to the measure of this signal m(X), i.e., to the quantity of absolute information IX contained in this signal ξ(t): ρ(O, X) = m(X) = IX. For an arbitrary pair of signals ξ(t)/X and η(t′)/Y of the space Ω, the following informational relationships hold:

IX+Y = IX + IY − IXY;
IX + IY ≥ IX+Y ≥ IX∆Y;
IX + IY ≥ IX+Y ≥ max[IX, IY];
max[IX, IY] ≥ √(IX · IY) ≥ min[IX, IY] ≥ IXY.
Informational relationships for an arbitrary triplet of the signals ξ(t)/X, η(t′)/Y, ζ(t″)/Z and the null element O of signal space Ω, which are equivalent to the metric relationships of a tetrahedron (see the relationships described in Subsection 2.2.1 on page 32), hold:
IX∆Y = IX + IY − 2IXY ;
IY ∆Z = IY + IZ − 2IY Z ;
IZ∆X = IZ + IX − 2IZX ;
IX∆Y = IX∆Z + IZ∆Y − 2I(X∆Z)(Z∆Y ) .
Concluding the discussion of the main relationships characterizing informational interaction between the signals in the space Ω, we now consider the main relationships within a single signal of the space Ω.
It is obvious that all the relationships between the elements of the signal space hold with respect to arbitrary elements (samples) {Xα} of the signal ξ(t)/X = ∪α Xα. For all the elements {Xα} of the signal ξ(t)/X with normalized measure m(Xα) = 1, it is convenient to introduce a metric between the elements Xα and Xβ that is equivalent to the metric (4.1.3):

d(Xα, Xβ) = (1/2) m(Xα ∆ Xβ) = (1/2)[m(Xα) + m(Xβ) − 2m(Xα · Xβ)] = 1 − m(Xα · Xβ);

d(Xα, Xβ) = (1/2) ρ(Xα, Xβ). (4.1.5)
Signal ξ(t)/X = ∪j Xj with discrete structure {Xj} (with discrete time domain) in metric signal space Ω is represented by the vertices {Xj} of a polygonal line l({Xj}) that lie upon a sphere Sp(O, R) and at the same time are the vertices of an n-dimensional simplex Sx(X) inscribed into the sphere Sp(O, R). The length (perimeter) P({Xj}) of the closed polygonal line l({Xj}) is determined by the expression:

P({Xj}) = Σ_{j=0}^{n} d(Xj, Xj+1) = (1/2) Σ_{j=0}^{n} ρ(Xj, Xj+1),

where d(Xj, Xk) and ρ(Xj, Xk) are the metrics determined by the relationships (4.1.5) and (4.1.3), respectively. Here the index values are taken modulo n + 1, i.e., Xn+1 ≡ X0.
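A short sketch of the perimeter computation (stationary triangular IDDs with illustrative parameters; indices are taken modulo n + 1, so the closing edge returns to X0):

import numpy as np

t = np.linspace(-1.0, 6.0, 70001)
dt = t[1] - t[0]
m = lambda d: d.sum() * dt
idd = lambda c: np.clip(1.0 - np.abs(t - c), 0.0, None)   # unit-area triangular IDD

X = [idd(c) for c in [0.0, 1.0, 2.0, 3.0, 4.0]]           # elements {X_j}

def d_metric(a, b):
    return 0.5 * m(np.abs(a - b))       # d(Xj, Xk) = (1/2) m(Xj Delta Xk)

n1 = len(X)
P = sum(d_metric(X[j], X[(j + 1) % n1]) for j in range(n1))   # indices mod n+1
print(P)                                # perimeter of the closed line l({X_j})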
The signal ξ(t)/X = ∪α Xα with continuous structure {Xα} (with continuous time domain) in metric space Ω may be represented by a continuous closed line l({Xα}) situated upon a sphere Sp(O, R), which at the same time is a fragment of an n-dimensional simplex Sx(X) inscribed into this sphere Sp(O, R) and in series connects the vertices {Xj} of a given simplex.
The measures of structural diversity (4.1.6) and (4.1.7) of the signal ξ(t)/X with discrete structure {Xj} admit the expansions:

m(Σj Xj) = Σj m(Xj) − Σ_{0≤j<k≤n} m(Xj Xk) + Σ_{0≤j<k<l≤n} m(Xj Xk Xl) − … + (−1)^n m(Π_{j=0}^{n} Xj); (4.1.8)

m(∆j Xj) = m(Σj Xj) − m(Σ_{0≤j<k≤n} Xj Xk) + m(Σ_{0≤j<k<l≤n} Xj Xk Xl) − … + (−1)^n m(Π_{j=0}^{n} Xj). (4.1.9)
If the signal ξ(t)/X considered as a collection {Xj} consists of disjoint elements (Xj · Xk = O), then the inequality (4.1.11) transforms into the identity:

I∆[ξ(t)] = I[ξ(t)] = Σj m(Xj). (4.1.12)
For the signal ξ(t)/X with discrete structure {Xj}, j = 0, 1, . . . , n, the following relationship holds:

m(Σj Xj) = P({Xj}) + m(Πj Xj), (4.1.13)

where P({Xj}) = (1/2) Σ_{j=0}^{n} m(Xj ∆ Xj+1) = Σ_{j=0}^{n} d(Xj, Xj+1) is the perimeter of a closed polygonal line l({Xj}) that connects in series the ordered elements {Xj} of the signal ξ(t)/X in metric signal space Ω. Here the index values are taken modulo n + 1, i.e., Xn+1 ≡ X0.
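The identity (4.1.13) can be verified numerically in the case where only adjacent elements of the chain overlap, so that the non-adjacent and higher-order product terms vanish (this is an assumption of the sketch; the rectangular IDDs are illustrative):

import numpy as np

t = np.linspace(-1.0, 7.0, 80001)
dt = t[1] - t[0]
m = lambda d: d.sum() * dt
idd = lambda c: np.where(np.abs(t - c) <= 0.75, 1.0 / 1.5, 0.0)   # width 1.5

X = [idd(c) for c in [0.0, 1.0, 2.0, 3.0]]    # only adjacent elements overlap

m_sum  = m(np.maximum.reduce(X))              # m(Sum_j Xj)
m_prod = m(np.minimum.reduce(X))              # m(Prod_j Xj), zero here
d = lambda a, b: 0.5 * m(np.abs(a - b))
P = sum(d(X[j], X[(j + 1) % len(X)]) for j in range(len(X)))
print(m_sum, P + m_prod)                      # both sides of (4.1.13) coincide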
The relationships (4.1.9) and (4.1.13) imply the equality:

m(∆j Xj) = P({Xj}) − P({Xj · Xj+1}) + … + (−1)^k P({Π_{i=j}^{j+k} Xi}) + … + m(Πj Xj) · mod₂(n − 1), (4.1.14)

where mod₂(n − 1) = 1 for n = 2k, k ∈ N, and mod₂(n − 1) = 0 for n = 2k + 1; P({Xj}) = Σ_{j=0}^{n} d(Xj, Xj+1) is the perimeter of a closed polygonal line l({Xj}) that connects in series the ordered elements {Xj} of the signal ξ(t)/X in metric signal space Ω; P({Xj · Xj+1}) = Σ_{j=0}^{n} d[(Xj−1 · Xj), (Xj · Xj+1)] is the perimeter of a closed polygonal line l({XΠj}) that connects in series the ordered elements {XΠj}: XΠj = Xj · Xj+1 of a set XΠ = ∪j XΠj in metric space Ω; and P({Π_{i=j}^{j+k} Xi}) = Σ_{j=0}^{n} d[Π_{i=j}^{j+k} Xi, Π_{i=j+1}^{j+k+1} Xi] is the perimeter of a closed polygonal line l({XΠj,k}) that connects in series the ordered elements {XΠj,k}: XΠj,k = Π_{i=j}^{j+k} Xi of a set XΠk = ∪j XΠj,k in metric space Ω.
The relationship (4.1.13) implies that the overall quantity of information I[ξ(t)] is an information quantity contained in the signal ξ(t)/X due to the presence of metric distinctions between the elements of a collection {Xj}. This overall quantity of information is determined by the perimeter P({Xj}) of a closed polygonal line l({Xj}) in metric space Ω and also by the quantity of mutual information between the elements of the collection, defined by the measure of the product of the elements, m(Πj Xj).
The relationship (4.1.14) implies that the relative quantity of information I∆[ξ(t)], like the overall quantity of information, is an information quantity contained in the signal ξ(t)/X due to the presence of metric distinctions between the elements of a collection {Xj}. Unlike the overall quantity of information I[ξ(t)], the relative quantity of information I∆[ξ(t)] is defined by the perimeter P({Xj}) of a closed polygonal line l({Xj}) in metric space Ω taking into account the influence of metric distinctions between the products Π_{i=j}^{j+k} Xi of the elements of the collection {Xj}.
For the signal ξ(t)/X with continuous structure {Xα}, the identity holds:

m(Σα Xα) = P({Xα}), (4.1.15)

where P({Xα}) = lim_{dα→0} (1/2) Σ_{α∈A} m(Xα ∆ Xα+dα) is the length of some line l({Xα}) (in the general case, not a straight one) that connects in series the ordered elements {Xα} of the signal ξ(t)/X in the space Ω; A = [α0, αI] is the interval of definition of the parameter α; lim_{dα→0} (1/2) m(XαI ∆ XαI+dα) = (1/2) m(XαI ∆ Xα0).
The identity (4.1.15) means that the overall quantity of information I[ξ(t)] contained in the signal ξ(t)/X, considered as a collection of the elements {Xα}, is numerically equal to the perimeter P({Xα}) of a closed line l({Xα}) that in series connects the elements {Xα} in metric space Ω with metric d(Xα, Xβ) = (1/2) m(Xα ∆ Xβ).
For a signal ξ(t)/X characterized by IDD in the form of a δ-function or a Heaviside step function a1[1(τ + a2) − 1(τ − a2)], the measures of structural diversity (4.1.7) and (4.1.6) are equivalent:

m(∆α Xα) = m(Σα Xα), (4.1.16)

and the relative quantity of information I∆[ξ(t)] contained in the signal ξ(t)/X is equal to the overall quantity of information I[ξ(t)].
For a signal ξ(t)/X characterized by IDD in the form of other functions, the measure of structural diversity (4.1.7) is equal to half of the measure (4.1.6):

m(∆α Xα) = (1/2) m(Σα Xα). (4.1.18)
Definition 4.2.1. By homomorphism of stochastic process ξ(t) with IDD iξ(tα, t), defined in the interval Tξ = [t0, t∗], t ∈ Tξ, into stochastic process η(t′) with IDD iη(t′β, t′), defined in the interval Tη = [t′0, t′∗], t′ ∈ Tη, in the terms of the mapping (3.5.1):

ϕ: {iξ(tα, t)} → {Xα},   ξ(t) → X = ∪α Xα;

ϕ: {iη(t′β, t′)} → {Yβ},   η(t′) → Y = ∪β Yβ.

Every element of a set X′ with discrete structure is simultaneously an element of a set X with continuous structure.
Thus, discretization (sampling) D: ξ(t) → {Xj} of continuous stochastic process ξ(t) with IDD iξ(tα, t) may be considered a mapping of the set X with continuous structure {Xα}, X = ∪α Xα, into the set X′ = ∪j Xj with discrete structure {Xj}, such that every element of the set X′ with discrete structure is simultaneously an element of the set X with continuous structure, and the distinct elements Xα, Xβ ∈ X, Xα ≠ Xβ of the set X are mapped into the distinct elements Xj, Xk ∈ X′, Xj ≠ Xk of the set X′:

D: X → X′; (4.2.1)

Xα → Xj,   Xβ → Xk; (4.2.1a)

Xα, Xβ ∈ X, Xα ≠ Xβ;   Xj, Xk ∈ X′, Xj ≠ Xk.
1. Xα + Xβ = Xj + Xk (addition)
2. Xα · Xβ = Xj · Xk (multiplication)
3. Xα − Xβ = Xj − Xk (difference/relative complement operation)
4. OX ≡ OX 0 ≡ O (identity of null element)
In the general case, discretization (sampling) of continuous stochastic process ξ(t) is not an isomorphic mapping preserving the measures of structural diversity of a signal I[ξ(t)] (4.1.6) and I∆[ξ(t)] (4.1.7):

I(X) ≠ I(X′), (4.2.2a)

I∆(X) ≠ I∆(X′), (4.2.2b)

where I[ξ(t)] is the overall quantity of information contained in continuous signal ξ(t), I[ξ(t)] = I(X); I[{ξ(tj)}] is the overall quantity of information contained in the sampled signal {ξ(tj)}, I[{ξ(tj)}] = I(X′); I∆[ξ(t)] is the relative quantity of information contained in the continuous signal ξ(t), I∆[ξ(t)] = I∆(X); I∆[{ξ(tj)}] is the relative quantity of information contained in the sampled signal {ξ(tj)}, I∆[{ξ(tj)}] = I∆(X′).
It should be noted that the notations (4.2.2a) and (4.2.2b), here and below, are used to denote informational characteristics of the continuous signal ξ(t) and of the result of its discretization {ξ(tj)}, and also to denote the equivalent informational characteristics of their images X, X′: ξ(t) → X, {ξ(tj)} → X′, considered in terms of generalized Boolean algebra B(X).
Since discretization D of a continuous signal (4.2.1), like an arbitrary homomorphism, in the general case does not preserve the measures of structural diversity I[ξ(t)], I∆[ξ(t)] (4.2.2) of a signal, it is impossible to formulate a sampling theorem that is strictly valid for arbitrary stochastic processes. Therefore, the principle of equivalent representation will be used in any formulation of the sampling theorem, depending on the informational properties of the signals.
Consider the main informational relationships characterizing a representation
of continuous stochastic process (signal) ξ (t) by a finite set of samples in a bounded
time interval Tξ = [0, T ], t ∈ Tξ .
The following informational inequalities hold under discretization D : X → X 0
(4.2.1):
I (X ) ≥ I (X 0 ), (4.2.3a)
I (X 0 ) ≥ I∆ (X 0 ). (4.2.3b)
The first inequality (4.2.3a) is stipulated by the relationship X 0 ⊂ X ⇒ m(X 0 ) ≤
m(X ). The second (4.2.3b) is stipulated by the relationship (4.1.10) between over-
all and relative quantity of information contained in the result of discretization
{ξ (tj )} of the signal ξ (t). The relationship between relative quantity of information
I∆ (X ) in continuous signal ξ (t), and relative quantity of information I∆ (X 0 ) in the
result of its discretization {ξ (tj )} requires comment. This relationship essentially
depends on the kind of IDD of stochastic process ξ (t). For continuous weakly (wide-
sense) stationary stochastic process ξ (t) with IDD iξ (τ ) that is differentiable in the
point τ = 0, there exists such an interval of discretization ∆t between the samples
{ξ (tj )} that relative quantity of information I∆ (X ) in continuous signal ξ (t), and
relative quantity of information I∆ (X 0 ) in the result of its discretization {ξ (tj )} are
connected by the inequality:
I∆ (X ) ≤ I∆ (X 0 ). (4.2.4)
For continuous stochastic process (signal) ξ(t) with IDD iξ(τ) that is non-differentiable at the point τ = 0, for arbitrary values of the discretization (sampling) interval ∆t between the samples {ξ(tj)}, the relative quantity of information I∆(X) in the continuous signal ξ(t) and the relative quantity of information I∆(X′) in the result of its discretization {ξ(tj)} are connected by the relationship:

I∆(X) ≥ I∆(X′). (4.2.5)
I(X) = I(X′) + I′L. (4.2.6)

The information quantity I′L is called information losses of the first genus. For a set of the elements X of a stochastic process (signal) ξ(t) with a continuous structure {Xα} in the space Ω, we can introduce a measure of curvature of the structure, c(X), which characterizes the deflection of the locus of the structure {Xα} of the set X from a line:

c(X) = [I(X) − I(X′)] / I(X). (4.2.7)
Evidently, the curvature c(X) of a set of the elements X varies within 0 ≤ c(X) ≤ 1. If for an arbitrary pair of the adjacent elements Xj, Xj+1 ∈ X′ of a discrete set X′ and an element Xα ∈ X, Xα ∈ [Xj · Xj+1, Xj + Xj+1], the metric identity holds:

d[Xj, Xj+1] = d[Xj, Xα] + d[Xα, Xj+1],

where d[Xα, Xβ] = (1/2) m(Xα ∆ Xβ) is a metric in the space Ω, then the curvature c(X) of the set of the elements X with continuous structure is equal to zero.
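A sketch of the curvature (4.2.7) under an assumed Gaussian stationary IDD (all parameters illustrative); here I(X) and I(X′) are computed as measures of the unions of ordinate sets, i.e., as integrals of the upper envelopes of the sample IDDs:

import numpy as np

t = np.linspace(-2.0, 7.0, 90001)
dt = t[1] - t[0]
m = lambda d: d.sum() * dt
sig = 0.3
idd = lambda c: np.exp(-((t - c) ** 2) / (2 * sig**2)) / (sig * np.sqrt(2 * np.pi))

def overall_info(centers):
    # I = m(Sum_alpha X_alpha): integral of the upper envelope of the IDDs
    acc = np.zeros_like(t)
    for c in centers:
        np.maximum(acc, idd(c), out=acc)
    return m(acc)

T = 5.0
I_X  = overall_info(np.arange(0.0, T, 0.02))   # continuous structure {X_alpha}
I_Xp = overall_info(np.arange(0.0, T, 1.0))    # discrete structure {X_j}
c_X = (I_X - I_Xp) / I_X                       # curvature (4.2.7), within [0, 1]
print(I_X, I_Xp, c_X)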
The overall quantity of information I(X′) contained in a set X′ with discrete structure {Xj} is equal to the sum of the relative quantity of information I∆(X′) of this set and a quantity of redundant information I″L contained in the set X′ due to nonempty pairwise intersections (due to mutual information) between its elements:

I(X′) = I∆(X′) + I″L. (4.2.8)

The information quantity I″L is called information losses of the second genus. For the result of discretization of continuous stochastic process (signal) ξ(t), i.e., a set of the elements X′ with discrete structure {Xj} of the space Ω, one can introduce a measure of informational redundancy r(X′), which characterizes the presence of informational interrelations between the elements of the discrete structure {Xj} of the set X′ (or simply mutual information between them):

r(X′) = [I(X′) − I∆(X′)] / I(X′). (4.2.9)
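A sketch of the redundancy (4.2.9) under the same ordinate-set representation. For the column sets {(t, y): 0 ≤ y ≤ ij(t)}, the measure m(∆j Xj) equals the area covered by an odd number of columns, which at every t is the alternating sum of the densities sorted in decreasing order (the triangular IDDs and the sample grid are assumptions of the demo):

import numpy as np

t = np.linspace(-2.0, 8.0, 100001)
dt = t[1] - t[0]
m = lambda d: d.sum() * dt
idd = lambda c: np.clip(1.0 - np.abs(t - c) / 1.5, 0.0, None) / 1.5   # triangular

D = np.array([idd(c) for c in np.arange(0.0, 6.0, 1.0)])   # sample IDDs {X_j}

I_overall = m(D.max(axis=0))            # I(X') = m(Sum_j Xj): upper envelope
s = -np.sort(-D, axis=0)                # densities sorted descending at each t
signs = (-1.0) ** np.arange(s.shape[0])
I_delta = m((signs[:, None] * s).sum(axis=0))   # I_Delta(X') = m(Delta_j Xj)

r = (I_overall - I_delta) / I_overall   # redundancy (4.2.9)
print(I_overall, I_delta, r)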
The meaning of the relationship (4.2.10) can be elucidated as follows. One cannot extract more information from the signal ξ(t) (i.e., from a collection of the elements X with continuous structure) than the relative quantity of information I∆(X′) contained in this signal ξ(t) due to the diversity between the elements of its structure. Information losses I′L and I″L take place, on the one hand, as a consequence of the curvature c(X) of the structure {Xα} of the signal ξ(t) in metric space Ω, and on the other hand, due to some informational redundancy r(X′) of the discrete set X′ (some mutual information between the samples {Xj}).
Discretization D: ξ(t) → {ξ(tj)}, according to the identity (4.2.10), has to be realized so as to provide the maximum ratio of the relative quantity of information I∆(X′) contained in the sampled signal {ξ(tj)} to the overall quantity of information I(X) contained in the continuous signal ξ(t) in a bounded time interval Tξ = [0, T], t ∈ Tξ:

I∆(X′)/I(X) → max, (4.2.11)

or, equivalently, to provide the minimum sum of the information losses of the first, I′L, and the second, I″L, genera:

I′L + I″L → min.
On the basis of the criterion (4.2.11), the sampling theorem may be formulated for signals with various informational and probabilistic-statistical properties. We limit consideration of possible variants of the sampling theorem to a stationary stochastic process (signal) whose IDD iξ(tα, t) of an arbitrary sample ξ(tα) is characterized by the property iξ(tα, t) ≡ iξ(τ), τ = t − tα.
This identity makes possible the extraction of the entire information quantity,
contained in this process ξ (t) in time interval Tξ = [0, T ], from a finite set of
the samples Ξ = {ξ (tj )} of stochastic process ξ (t) without any information losses.
Such representation of continuous stochastic process ξ (t) by a finite set of samples
Ξ = {ξ (tj )} in the interval Tξ = [0, T ], t ∈ Tξ , when the relationship (4.2.12) holds,
is equivalent from the standpoint of preserving overall quantity of information.
The main condition of the Theorem 4.2.1, i.e., orthogonality of the elements of
continuous structure of stochastic process (signal) ξ (t), is provided if and only if its
IDD iξ (τ ) has a uniform distribution in a bounded interval [−∆/2, ∆/2] and takes
the form:
iξ(τ) = (1/∆)[1(τ + ∆/2) − 1(τ − ∆/2)],
where 1(t) is a Heaviside step function.
In this case, the identity holds between both measures of structural diversity
of continuous stochastic process (signal) ξ (t) and a set of the samples Ξ = {ξ (tj )}
(the sets of the elements with continuous {Xα } and discrete {Xj } structures, re-
spectively), i.e., between the measures of the sets X and X 0 :
m(X) = m(Σα Xα) ≡ m(Σj Xj) = m(X′),

I(X) = I(X′) = I∆(X′) = I∆(X).
This identity provides the possibility of extracting the same relative quantity of
information contained in signal ξ (t) in the interval Tξ = [0, T ] from a finite set of
the samples Ξ = {ξ (tj )} of stochastic process (signal) ξ (t). Such representation of
continuous stochastic process ξ (t) determined in the interval Tξ = [0, T ] by a finite
set of the samples Ξ = {ξ (tj )} under the condition (4.2.13) is equivalent from the
standpoint of preserving relative quantity of information.
Theorem 4.2.2 has the following corollaries.
Corollary 4.2.1. Continuous stationary stochastic process (signal) ξ (t) with IDD
iξ (τ ) that has a uniform distribution in the bounded interval [−∆/2, ∆/2] of the
form:
iξ(τ) = (1/∆)[1(τ + ∆/2) − 1(τ − ∆/2)], (4.2.14)
may be equivalently represented by a finite set of the samples Ξ = {ξ (tj )} if the
discretization interval (the sampling interval) ∆t = tj+1 − tj is chosen to be equal
to ∆t = 1/iξ (0).
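The statement of Corollary 4.2.1 can be observed numerically: with ∆t = 1/iξ(0) = ∆ the rectangular ordinate sets tile the time axis, so the overall and relative quantities of information of the sampled set coincide (grid and parameters are illustrative):

import numpy as np

t = np.linspace(-1.0, 7.0, 160001)
dt = t[1] - t[0]
m = lambda d: d.sum() * dt
Delta = 1.0
idd = lambda c: np.where(np.abs(t - c) <= Delta / 2, 1.0 / Delta, 0.0)

# sampling step Delta_t = 1/i_xi(0) = Delta: the rectangles exactly tile the axis
D = np.array([idd(c) for c in np.arange(0.0, 6.0, Delta)])

I_overall = m(D.max(axis=0))
s = -np.sort(-D, axis=0)
signs = (-1.0) ** np.arange(s.shape[0])
I_delta = m((signs[:, None] * s).sum(axis=0))
print(I_overall, I_delta)    # both ~6: adjacent ordinate sets do not intersect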
Corollary 4.2.2. Continuous stationary stochastic process (signal) ξ(t) with IDD iξ(τ) of the following form:

iξ(τ) = (1/a)(1 − |τ|/a) for |τ| ≤ a;   iξ(τ) = 0 for |τ| > a.
In the cases cited in Corollaries 4.2.1 and 4.2.2, the identity holds between the measures of structural diversity of continuous stochastic process (signal) ξ(t) and of the set of the samples Ξ = {ξ(tj)} (the sets of the elements with continuous {Xα} and discrete {Xj} structures, respectively), namely between the measures of symmetric difference of the sets X and X′:
m(∆α Xα) = m(∆j Xj),

I∆(X) = I∆(X′).
For stochastic processes with IDD iξ (τ ) differentiable in the point τ = 0, the sam-
pling theorem may be formulated in the following way.
∆t_opt = arg max_{∆t} [I∆(X′)/I(X)]. (4.2.15)
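A numerical realization of (4.2.15) under an assumed Gaussian IDD (differentiable at τ = 0; all parameters illustrative): the sampling step is swept and the ratio I∆(X′)/I(X) is maximized. The value at a very small step also illustrates the limit 1/2 discussed below:

import numpy as np

t = np.linspace(-3.0, 13.0, 40001)
dt = t[1] - t[0]
m = lambda d: d.sum() * dt
sig = 0.4
idd = lambda c: np.exp(-((t - c) ** 2) / (2 * sig**2)) / (sig * np.sqrt(2 * np.pi))

T = 10.0
acc = np.zeros_like(t)                   # I(X) of the continuous structure
for c in np.arange(0.0, T, 0.02):
    np.maximum(acc, idd(c), out=acc)
I_X = m(acc)

def ratio(step):                         # I_Delta(X')/I(X) for a given step
    D = np.array([idd(c) for c in np.arange(0.0, T, step)])
    s = -np.sort(-D, axis=0)
    signs = (-1.0) ** np.arange(s.shape[0])
    return m((signs[:, None] * s).sum(axis=0)) / I_X

steps = np.arange(0.2, 3.01, 0.1)
vals = np.array([ratio(st) for st in steps])
print(steps[vals.argmax()])              # Delta_t_opt per (4.2.15)
print(ratio(0.1))                        # small step: the ratio approaches 1/2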
Remark 4.2.1. Stationary stochastic process (signal) ξ(t) with IDD iξ(τ) (0 < iξ(0) < ∞) defined in the interval Tξ = [0, T], t ∈ Tξ, may be represented by a finite set of the samples Ξ = {ξ(tj)} that follow at the interval ∆t = 1/iξ(0).

Analysis of the dependences of the ratio [I∆(X′)/I(X)](∆t) shown in Fig. 4.2.1 reveals certain features typical for discretization of continuous stochastic processes (signals).
1. For the plotted dependences, the limit of the ratio [I∆(X′)/I(X)](∆t) as ∆t → 0 is equal to 1/2:

lim_{∆t→0} [I∆(X′)/I(X)](∆t) = 1/2.
This confirms the result (4.1.19) concerning the ratio of the relative quantity of information to the overall quantity of information contained in a signal.
For most stochastic processes possessing the ability to carry information, the values iξ(0) belong to the interval [2∆f, 4∆f], where ∆f is the real effective width of the hyperspectral density σξ(ω), equal to ∆f = [∫₀^∞ σξ(ω) dω]/[2πσξ(0)]; thus the values of the discretization (sampling) interval ∆t = 1/iξ(0) can change within the interval [1/(4∆f), 1/(2∆f)].
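For a concrete spectral density the effective width and the admissible range of sampling intervals follow directly; the sketch below assumes a Lorentzian σξ(ω) = 1/(1 + (ω/ω₀)²) with an illustrative ω₀ (for this density the integral equals πω₀/2, so ∆f = ω₀/4):

import numpy as np

w0 = 2 * np.pi * 100.0                     # assumed half-power frequency, rad/s
w = np.linspace(0.0, 200 * w0, 2_000_001)
dw = w[1] - w[0]
sigma = 1.0 / (1.0 + (w / w0) ** 2)        # Lorentzian spectral density

delta_f = (sigma.sum() * dw) / (2 * np.pi * sigma[0])
print(delta_f, w0 / 4)                     # numerical vs analytic effective width
print(1 / (4 * delta_f), 1 / (2 * delta_f))   # admissible sampling intervals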
Aforementioned variants of the sampling theorem are formulated for stationary
stochastic processes possessing the ability to carry information (see Theorem 3.4.2)
The conclusions of Theorems 4.2.4 and 2.3.4 may be generalized to an isomorphic mapping of stochastic processes ξ(t), η(t′). Such a mapping preserves the measures of all the binary operations between the sets X, Y ⊂ X ∪ Y, whose union X ∪ Y is also a subalgebra B(X + Y) of generalized Boolean algebra B(Ω):
whose union X ∪ Y is also a subalgebra B(X + Y ) of generalized Boolean algebra
B(Ω):
m(X + Y ) = m(X 0 + Y 0 );
m(X · Y ) = m(X 0 · Y 0 );
m(X − Y ) = m(X 0 − Y 0 );
m(X ∆Y ) = m(X 0 ∆Y 0 ).
On the basis of classical information theory, one can evaluate the mutual information I[x, a] contained in the signal x concerning the signal a if, for instance, the signal x is the result of additive interaction of two signals a and b: x = a + b in linear signal space LS. However, it is impossible to evaluate the quantity of information losses that take place during such an interaction. One cannot compare the sum Ia + Ib of information quantities contained in the signals a and b, respectively, with the information quantity Ix contained in the signal x, which is the result of the interaction of the signals a and b. The reason is that the values Ix, Ia, Ib, corresponding to the information quantities contained in each signal x, a, b separately, are not defined.
The goal of this section is to further develop the approach to evaluating informational relationships between the signals covered in Sections 3.4, 3.5, 4.1, and 4.2, and to carry out a comparative analysis of the informational properties of signal spaces with various algebraic properties stipulated by the signal interactions in these spaces.
There are two problems formulated and solved in this section. First, the main informational relationships between the signals before and after their interaction in signal spaces with various algebraic properties are determined. Second, depending on the quantitative content of these informational relationships, the types of interactions between the signals in signal spaces are distinguished at a qualitative level.
Inasmuch as this section provides a research transition from informational signal
space to the physical signal spaces, we change the notation system based on the
Greek alphabet accepted for the signals (stochastic processes) before. Here and
below we will use the Latin alphabet to denote useful and interference signals
while considering the questions of their processing that form the main content of
Chapter 7.
rx (τ ) = ra (τ ) = rb (τ ) = r(τ ). (4.3.2)
For Gaussian stochastic signals, the normalized correlation function defines the normalized function of statistical interrelationship (NFSI) introduced in Section 3.1. The equality (4.3.2), according to the formula (3.1.3), implies the identity of the NFSIs ψx(τ), ψa(τ), ψb(τ) of the signals x(t), a(t), b(t):

ψx(τ) = ψa(τ) = ψb(τ), (4.3.3)

and, correspondingly, the identity of their IDDs:

ix(τ) = ia(τ) = ib(τ). (4.3.4)
On the basis of Axiom 4.1.1 formulated in Section 4.1, depending on the sort of signature relations between the images A, B of the signals a(t), b(t) in informational signal space Ω built upon a generalized Boolean algebra B(Ω) with a measure m and signature (+, ·, −, O), A ∈ Ω, B ∈ Ω, the following main types of information quantity are defined: the quantity of absolute information IA, IB, the quantity of mutual information IAB, and the quantity of overall information IA+B.
Ia = IA , Ib = IB . (4.3.9)
Thus, the notion of quantity of absolute information does not need refinement,
inasmuch as it is defined irrespective of other signals of physical signal space; it is
introduced exclusively with respect to a single signal.
Now, for physical signal space Γ with binary operation ⊕ defined on it, the informational inequality (4.3.8) can be written in the form:

Ia⊕b ≤ IA+B. (4.3.10)
Definition 4.3.1. Two signals a(t) and b(t) of physical signal space Γ, a(t), b(t) ∈ Γ
are called identical in an informational sense, if for them the inequality (4.3.10)
turns to the identity:
Ia⊕b = IA+B = IA = IB ,
so that the images A and B of the signals a(t) and b(t) in informational space Ω
are identical: A ≡ B.
As follows from Definition 4.3.1, any signal a(t) is identical to itself in an in-
formational sense. For instance, in the case of additive interaction in linear space
LS, the signals a(t) and b(t) are identical in an informational sense, if they are
connected by linear dependence: a(t) = k · b(t), where k = const.
Proceeding from the formulated informational inequality (4.3.10), the following questions may be posed.
3. If the answer to the second question is affirmative, what are the signal
spaces where such losses are minimal? If the answer to the second question
is negative, what are the signal spaces where the signal interaction is
possible without losses of information?
One can answer these questions, based on the following definition.
Definition 4.3.2. Ideal interaction x(t) = a(t) ⊕ b(t) between two statistically independent signals a(t) and b(t) in physical space Γ, where two binary operations are defined (addition ⊕ and multiplication ⊗), is a binary operation ⊕ ensuring that the quantities of overall information Ia⊕b and IA+B, contained in a pair of signals a(t), b(t) and defined for the physical Γ and informational Ω signal spaces, respectively, are equal:

Ia⊕b = IA+B. (4.3.11)
Here and below, the binary operations of addition ⊕ and multiplication ⊗ are understood as abstract operations over some algebraic structure. Let us find out what algebraic properties of the physical signal space provide for the identity (4.3.11) to hold. The answer to this question is given by the following theorem.
Theorem 4.3.1. Let there be two binary operations of addition ⊕ and multiplication ⊗ defined in physical signal space Γ. Then for the identity (4.3.11) to hold, it is necessary and sufficient that physical signal space Γ be a generalized Boolean algebra B(Γ) with a measure mΓ.
Proof of necessity. Identity (4.3.11) may be written in more detailed form:
Ia⊕b = Ix = IX = IA+B = IA + IB − IAB = Ia + Ib − Iab . (4.3.12)
According to (4.3.9), the identities between the quantities of absolute information
of the signals a(t) and b(t) defined for both physical Γ and informational Ω signal
spaces, respectively, hold: Ia = IA , Ib = IB . Besides, the identity holds Iab =
IAB between the quantities of mutual information Iab and IAB defined for both
physical Γ and informational Ω signal spaces, respectively. Then a measure mΓ
of the elements (the signals) {a(t), b(t), . . .} of physical space Γ is isomorphic to a
measure m of informational signal space Ω: i.e., for every c(t) ∈ Γ, ∃C ∈ Ω, C =
ϕ[c(t)]: mΓ [c(t)] = m(C ) [176]; and the mapping ϕ: Γ → Ω defines isomorphism of
the spaces Γ and Ω into each other: i.e., ∀ϕ, ∃ϕ−1 : ϕ−1 : Ω → Γ. Thus, physical
signal space Γ, where the identity (4.3.11) holds, is the algebraic structure identical
to informational signal space Ω, i.e., generalized Boolean algebra B(Γ) with a
measure mΓ and signature (⊕, ⊗, −, O).
Proof of sufficiency. If physical signal space Γ is a generalized Boolean algebra B(Γ)
with a measure mΓ and signature (⊕, ⊗, −, O), then the mapping ϕ defined by the
relationship (4.1.2) ϕ: Γ → Ω is a homomorphism preserving all the signature
operations, and the following equations hold:
ϕ[a(t) ⊕ b(t)] = ϕ[a(t)] + ϕ[b(t)]; (4.3.13a)
ϕ[a(t) ⊗ b(t)] = ϕ[a(t)] · ϕ[b(t)], (4.3.13b)
where x(t) = a(t) ⊕ b(t), x̃(t) = a(t) ⊗ b(t); ϕ[a(t)] = A, ϕ[b(t)] = B, ϕ[a(t) ⊕ b(t)] =
X, ϕ[a(t) ⊗ b(t)] = X̃; a(t), b(t), x(t), x̃(t) ∈ Γ; A, B, X, X̃ ∈ Ω.
According to (4.3.9), the identities between the quantities of absolute informa-
tion of the signals a(t), b(t), x(t), x̃(t) defined for both physical Γ and informational
Ω signal spaces, respectively, hold: Ia = IA , Ib = IB , Ix = IX , Ix̃ = IX̃ . These
identities define isomorphism of the measures mΓ , m of both physical Γ and infor-
mational Ω signal spaces: mΓ [a(t)] = m(A), mΓ [b(t)] = m(B ), mΓ [x(t)] = m(X ),
mΓ [x̃(t)] = m(X̃ ). Then the mapping ϕ: Γ → Ω is an isomorphism preserving a
measure and mΓ [x(t)] = m(X ) ⇒ Ix = IX ⇒ Ia⊕b = IA+B .
Note that a measure mΓ [x̃(t)] of the signal x̃(t) = a(t) ⊗ b(t) gives a sense of
the quantity of mutual information Iab contained in both the signal a(t) and the
signal b(t), which will be also denoted below as Ia⊗b , indicating the relation of this
measure to the binary operation ⊗ of the space Γ.
Thus, the main content of Theorem 4.3.1 is that during the interaction x(t) = a(t) ⊕ b(t) of the signals a(t) and b(t) in physical space Γ in the form of a generalized Boolean algebra, the measures of the corresponding sorts of information quantities defined for both the physical Γ and informational Ω signal spaces are isomorphic, and the corresponding identities hold.

1. During signal interaction in linear space LS, losses of information take place. The presence of such losses is explained by the fact that linear space LS is not isomorphic to a generalized Boolean algebra with a measure.

2. Interaction of the signals is accompanied by losses of information in all the spaces except those with the properties of a generalized Boolean algebra with a measure.
Unfortunately, Theorem 4.3.1 does not give concrete recommendations for obtaining
the signal spaces with these useful informational properties. The requirement for the
informational identity (4.3.11) to be valid for practical application may be too strict.
In this case, it is enough to require the quantity of mutual information Iax = Ia⊗x ,
Ibx = Ib⊗x , contained in the signals a(t) and b(t) and in the result of their interaction
x(t), to be identically equal to the quantities of absolute information Ia and Ib
contained in these signals:
Ia⊗x = Ia ;
Ib⊗x = Ib .
On the basis of this approach, one may define a sort of interaction of the signals in physical signal space that differs from ideal interaction with respect to its informational properties. This formulation is closer to applied aspects of signal processing and is broader in its algebraic interpretation.
Definition 4.3.3. Quasi-ideal interaction x(t) = a(t) ⊕ b(t) of two signals a(t)
and b(t) in physical signal space Γ, where two binary operations of addition ⊕ and
Let us find out what algebraic properties of physical signal space Γ are required for the equation system (4.3.14) to hold. The answer to this question is given by the following theorem.
Theorem 4.3.2. Let there be two binary operations of addition ⊕ and multiplica-
tion ⊗ defined in physical signal space Γ. Then for the equation system (4.3.14) to
hold, it is necessary and sufficient that physical signal space Γ be a lattice with a
measure mΓ and operations of join ⊕ and meet ⊗.
Proof. If physical signal space Γ is a lattice with operations of join a(t) ⊕ b(t) and
meet a(t) ⊗ b(t), then the following relationships hold:
a(t) ⊗ x(t) = a(t); (a)
b(t) ⊗ x(t) = b(t); (b) (4.3.15)
x(t) = a(t) ⊕ b(t), (c)
where a(t) ⊕ b(t) = supΓ {a(t), b(t)}; a(t) ⊗ b(t) = inf Γ {a(t), b(t)}.
Identities (4.3.15a) and (4.3.15b) define axioms of absorption of lattice [223],
[221]. If physical signal space Γ is a lattice with a measure mΓ , then the system
(4.3.15) determines the following identities:
mΓ (a(t) ⊗ x(t)) = mΓ (a(t)); (a)
mΓ (b(t) ⊗ x(t)) = mΓ (b(t)); (b) (4.3.16)
x(t) = a(t) ⊕ b(t), (c)
Thus, for the identities (4.3.14a) and (4.3.14b) of the system (4.3.14) to hold,
i.e., for quasi-ideal interaction in physical signal space Γ to take place, it is sufficient
that physical signal space Γ be a lattice Γ(∨, ∧) with operations of join and meet,
respectively: a(t) ∨ b(t) = supΓ {a(t), b(t)}, a(t) ∧ b(t) = inf Γ {a(t), b(t)}.
Then for interaction of two signals a(t), b(t) in physical signal space Γ with
lattice operations: x(t) = a(t) ∨ b(t) or x̃(t) = a(t) ∧ b(t), the initial requirement
(4.3.14) could be written in slightly extended form owing to duality of lattice op-
eration properties with the help of two equation systems, respectively:
Ia∧x = Ia ; (a)
Ib∧x = Ib ; (b) (4.3.17)
x(t) = a(t) ∨ b(t), (c)
Ia∨x̃ = Ia ; (a)
Ib∨x̃ = Ib ; (b) (4.3.18)
x̃(t) = a(t) ∧ b(t). (c)
It should be noted that in physical signal space Γ, where ideal interaction of the sig-
nals exists (i.e., the identity (4.3.11) holds), the relationships (4.3.14a) and (4.3.14b)
unconditionally hold, and correspondingly there exists a quasi-ideal interaction of
the signals. Physical signal space Γ, which is a generalized Boolean algebra B(Γ)
with a measure mΓ and signature (⊕, ⊗, −, O), is also a lattice of signature (⊕, ⊗)
with operations of least upper bound and greatest lower bound (join and meet),
respectively: a(t) ⊕ b(t) = supΓ{a(t), b(t)}, a(t) ⊗ b(t) = infΓ{a(t), b(t)}.
where â(t) is some deterministic function of the signals a(t) and x(t).
Let us find out what algebraic properties of physical signal space Γ are required for the identity (4.3.19) to hold, and establish the kind of function â(t) satisfying Equation (4.3.19). The answer is given by the following theorem.
Theorem 4.3.3. Let there be two binary operations of addition ⊕ and multiplication ⊗ defined in physical signal space Γ. Then for the identity (4.3.19) to hold, it is sufficient that physical signal space Γ be a lattice with signature (⊕, ⊗) and operations of join and meet, respectively: a(t) ⊕ b(t) = supΓ{a(t), b(t)}, a(t) ⊗ b(t) = infΓ{a(t), b(t)}.
Proof. Definition 4.3.4 implies that in quasi-ideal interaction x(t) = a(t) ⊕ b(t) of a completely known useful signal a(t) with interference (noise) signal b(t) in physical signal space Γ, there exists a binary operation ⊗ that allows, with the help of the estimator â(t) of the signal a(t) with completely known parameters (â(t) = a(t)), obtaining (extracting) the useful signal a(t) from the result of the interaction x(t) without information losses:

â(t) = a(t) ⊗ x(t) = a(t) ⊗ [a(t) ⊕ b(t)] = a(t).
The last interrelation determines the absorption property for a lattice with opera-
tions of join and meet, respectively: a(t) ⊕ b(t) and a(t) ⊗ b(t). This means that for
quasi-ideal interaction of useful a(t) and interference b(t) signals defined above to
exist in physical signal space Γ, it is sufficient that signal space Γ be a lattice with
signature (⊕, ⊗).
It should be noted that for physical signal space Γ with lattice properties and signature (⊕, ⊗), for the dual interaction of the signals in the form x̃(t) = a(t) ⊗ b(t), the relationship that is dual with respect to the equality (4.3.20) holds:

a(t) ⊕ [a(t) ⊗ b(t)] = a(t).
Example 4.3.2. Consider a model of interaction of the signal si (t) from the set
of deterministic signals S = {si (t)}, i = 1, . . . , m and noise n(t) in signal space
Γ(∨, ∧) with lattice properties and operations of join a(t) ∨ b(t) and meet a(t) ∧ b(t)
(a(t), b(t) ∈ Γ(∨, ∧)), respectively:
of information; see, for instance, [164]. The former example implies that while solving the classification problem of deterministic signals in the presence of noise, the value Eb/N0 and the probability of erroneous reception of the signal si(t) from a set of deterministic signals S = {si(t)}, i = 1, . . . , m, may be arbitrarily small. This, however, does not mean that unbounded capacity of a communication channel with noise can be provided in such signal spaces. In particular, Chapter 5 will show that communication channel capacity, even in the absence of noise, is always a finite value.
are equal to the quantity of absolute information Ix , Ix̃ contained in the results of
these binary operations x(t) = a(t) ⊕ b(t), x̃(t) = a(t) ⊗ b(t) between the signals
a(t), b(t), respectively:
Ix = Ia⊕b ; Ix̃ = Ia⊗b . (4.3.24)
Note that the two aforementioned binary operations (addition ⊕ and multiplication ⊗) cannot both be defined in an arbitrary physical signal space Γ. In particular, in the signal spaces with group (semigroup) properties, only one binary operation is defined over the signals, i.e., either addition ⊕ or multiplication ⊗.
There exists a theoretical problem which will be considered within physical
signal space Γ with the properties of an additive group of linear space, where
only one binary operation between the signals is defined, i.e., addition ⊕. By a
method based on the relationships (4.3.24), we can define only a quantity of overall
information Ia⊕b contained in the result of interaction x(t) = a(t) ⊕ b(t) in such a
signal space. It is not possible to define similarly a quantity of mutual information
contained in both signals a(t) and b(t).
In order to provide an approach to analyzing the informational relationships taking place in signal spaces with various types of interactions, it is necessary to define the quantity of mutual information contained in both signals a(t) and b(t) in a way that is acceptable for signal spaces with a minimal number of binary operations between the signals.

It is obvious that the minimal number of operations is equal to one; in this case, the only binary operation characterizes an interaction of the signals in physical signal space with group (semigroup) properties. On the other hand, a definition of the quantity Iab should ensure that the introduced notion does not contradict the results obtained earlier. The necessary approach to a formulation of the quantity of mutual information Iab that is acceptable for signal spaces with arbitrary algebraic properties is given by the following definition.
Definition 4.3.5. For stationary and stationary coupled stochastic signals a(t) and
b(t), which interact in physical signal space Γ with arbitrary algebraic properties,
quantity of mutual information, contained in the signals a(t) and b(t), is a quantity
Iab equal to:
Iab = νab min(Ia , Ib ), (4.3.25)
where νab = νP (at , bt ) is PMSI between the samples at and bt of stochastic signals
a(t) and b(t) in physical signal space Γ determined by the relationship (4.3.23).
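Definition 4.3.5 reduces to a one-line computation once νab and the quantities of absolute information are known; in the sketch below the numbers are hypothetical, and in the book's setting νab would be obtained from the PMSI (4.3.23) rather than assumed:

def mutual_information(I_a: float, I_b: float, nu_ab: float) -> float:
    # quantity of mutual information I_ab = nu_ab * min(I_a, I_b) per (4.3.25)
    assert 0.0 <= nu_ab <= 1.0
    return nu_ab * min(I_a, I_b)

I_a, I_b, nu_ab = 4.0, 6.0, 0.3            # abit; illustrative values
print(mutual_information(I_a, I_b, nu_ab))  # 1.2 abit
# statistically independent signals: nu_ab = 0 implies I_ab = 0
print(mutual_information(I_a, I_b, 0.0))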
Using Definition 4.3.5, for an arbitrary signal a(t) in physical signal space Γ with arbitrary algebraic properties, we introduce a measure of information quantity mΓ defined by Equation (4.3.25).
As shown in Section 3.3, for physical signal space Γ with group properties, the notions of PMSI (see Definition 3.3.1) and normalized measure of statistical interrelationship (NMSI) (see Definition 3.2.2) coincide; for signal space Γ with lattice properties, these notions have different content. In this section, the further analysis of informational relationships between the signals interacting in physical signal space Γ with various algebraic properties will be performed on the basis of the quantity of mutual information (4.3.25), and this quantity Iab will be used on the basis of PMSI (νab = νP(at, bt)), although it can also be defined through NMSI (νab = ν(at, bt)). For instance, in the next section this notion will be defined and used on the basis of NMSI (νab = ν(at, bt)).
As noted at the beginning of the section, in applied problems of radiophysics and radio engineering, an additive interaction of useful signal and interference (noise) is considered. In some special cases, a multiplicative signal interaction is the subject of interest [244]. These and other sorts of signal interactions in physical signal space that differ in their informational properties from the above kinds of interaction will be referred to a large group of interactions with so-called usual informational properties by means of the following definition, based on the notion of quantity of mutual information introduced by Definition 4.3.5.
Definition 4.3.6. Usual interaction of two signals a(t) and b(t) in physical signal space Γ with semigroup properties is a binary operation ⊕ such that the quantities of mutual information Iax, Ibx contained in the distinct signals a(t), b(t) (a(t) ≠ b(t)) and in the result of their interaction x(t) = a(t) ⊕ b(t) are less than the quantities of absolute information Ia, Ib contained in these signals, respectively:

Iax < Ia; (a)
Ibx < Ib; (b) (4.3.26)
x(t) = a(t) ⊕ b(t). (c)
Recall that all the definitions of information quantities listed in Section 4.1 are based upon an axiomatic statement according to which the sorts of information quantities are completely defined by the kind of binary operation between the images A and B of the signals a(t) and b(t) in informational signal space Ω built upon generalized Boolean algebra B(Ω) with a measure m and signature (+, ·, −, O). Meanwhile, the quantity of mutual information is introduced differently in Definition 4.3.5, i.e., through PMSI and the quantity of absolute information contained in the interacting signals a(t), b(t). First, it agrees with the corresponding notion introduced in Section 4.1. Second, it allows considering the features of informational relationships between the signals interacting in physical signal space Γ with arbitrary algebraic properties.
For ideal interaction x(t) = a(t) ⊕ b(t) between the signals a(t) and b(t) in
physical signal space Γ with the properties of generalized Boolean algebra B(Γ) with
a measure mΓ and signature (⊕, ⊗, −, O), the following informational relationships
hold:
Ia + Ib − Iab = Ix ; (a)
Iax = Ia ; (b) (4.3.27)
Ibx = Ib , (c)
For quasi-ideal interaction x(t) = a(t) ⊕ b(t) between the signals a(t) and b(t)
in physical signal space Γ with lattice properties with a measure mΓ , the following
main informational relationships hold:
Ia + Ib − Iab > Ix ; (a)
Iax = Ia ; (b) (4.3.29)
Ibx = Ib , (c)
For interaction x(t) = a(t) ⊕ b(t) of a pair of stochastic signals a(t) and b(t) in
physical signal space Γ with group properties (i.e., for additive interaction x(t) =
a(t) + b(t) in linear signal space LS), the following main informational relationships
hold:
νP (at , xt ) + νP (bt , xt ) − νP (at , bt ) = 1; (a)
νP (at , xt ) < 1; (b) (4.3.31)
νP (bt , xt ) < 1, (c)
where νP (at , xt ), νP (bt , xt ), νP (at , bt ) are PMSIs of the corresponding pairs of their
samples at , xt ; bt , xt ; at , bt of stochastic signals a(t), b(t), x(t).
The equality (4.3.31a) represents the result (3.3.7) of Theorem 3.3.6. The inequalities (4.3.31b), (4.3.31c) are consequences of the inequalities (4.3.26a), (4.3.26b) of Definition 4.3.6. Taking into account the relationship (4.3.25) of Definition 4.3.5, the system (4.3.31) can be rewritten in the following form:
The dependence 1 determines the upper bound of possible values of the quantities of mutual information Iax and Ibx of interacting signals a(t) and b(t): no kind of signal interaction can achieve better informational relationships for a fixed sum Ia + Ib = const.
For quasi-ideal interaction x(t) = a(t) ⊕ b(t) of a pair of signals a(t) and b(t)
in physical signal space Γ with lattice properties, the dependence 2 is determined
by the point Iax = Ia , Ibx = Ib that corresponds to the identities (4.3.29b) and
(4.3.29c).
For interaction x(t) = a(t) ⊕ b(t) between statistically independent signals a(t) and b(t) in physical signal space Γ with group properties (for additive interaction x(t) = a(t) + b(t) in linear signal space LS), the dependence 3 is determined by the line equation (4.3.32a) passing through the points (Ia, 0) and (0, Ib), respectively, so that for independent signals a(t) and b(t) the quantity of mutual information is equal to zero: Iab = 0. The dependence 3 determines the lower bound of possible values of the quantities of mutual information Iax and Ibx of interacting signals a(t) and b(t): no kind of signal interaction can yield worse informational relationships for a fixed sum Ia + Ib = const.
The dependence 4, sup_{Ia+Ib=const} [Ibx(Iax)]|_{Iab=0}, determines the upper bound of possible values of the quantities of mutual information Iax and Ibx of the signals a(t) and b(t) additively interacting in physical signal space Γ with group properties (in linear space LS): no kind of signal interaction can achieve better informational relationships for a fixed sum Ia + Ib = const. This function is determined by the relationship:

sup_{Ia+Ib=const} [Ibx(Iax)]|_{Iab=0} = (√(Ia + Ib) − √Iax)². (4.3.33)
For the interaction x(t) = a(t) ⊕ b(t) between statistically dependent signals a(t) and b(t) in physical signal space Γ with group properties (for additive interaction x(t) = a(t) + b(t) in linear signal space LS), the dependence 5 is determined by the line equation (4.3.32a) passing through the points (kIa, 0) and (0, kIb), where the coefficient k is equal to:

k = 1 + [Iab / min(Ia, Ib)], (4.3.34)

and for the dependent signals a(t) and b(t), the quantity of mutual information Iab is bounded by the quantity Iab ≤ [min(Ia, Ib)]² / max(Ia, Ib).
Dependence 5 determines the upper bound of possible values of the quantities of mutual information Iax and Ibx of statistically dependent signals a(t) and b(t) interacting in physical signal space Γ with group properties (in linear signal space LS): no kind of signal interaction can achieve better informational relationships for a fixed sum Ia + Ib = const. If the interacting signals a(t) and b(t) are identical in the informational sense (see Definition 4.3.1), i.e., they are characterized by the same information quantity Ia = Ib, so that the information contained in these signals has the same content, then the value of the coefficient k determined by the relationship (4.3.34) becomes equal to 2: k = 2. In this (and only in this) case, the dependences 5 and 1 coincide.
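A short numerical sketch in Python (illustrative values; the function names are ours, not from the text) evaluates the upper bound (4.3.33) and the coefficient k (4.3.34) as reconstructed above, and reproduces k = 2 for informationally identical signals:

    import numpy as np

    def upper_bound_Ibx(I_ax, I_a, I_b):
        # Upper bound (4.3.33) of Ibx for additive interaction of
        # independent signals at a fixed sum Ia + Ib:
        # sup Ibx = (sqrt(Ia + Ib) - sqrt(Iax))**2
        return (np.sqrt(I_a + I_b) - np.sqrt(I_ax)) ** 2

    def coefficient_k(I_a, I_b, I_ab):
        # Coefficient k (4.3.34); Iab is assumed to satisfy
        # Iab <= min(Ia, Ib)**2 / max(Ia, Ib)
        return 1.0 + I_ab / min(I_a, I_b)

    I_a, I_b = 3.0, 1.0                       # abits (illustrative)
    print(upper_bound_Ibx(np.linspace(0.0, I_a, 5), I_a, I_b))
    print(coefficient_k(2.0, 2.0, 2.0))       # identical signals: k = 2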
Thus, the locus of the lines that characterize the dependence (4.3.32a) for interaction x(t) = a(t) ⊕ b(t) of the signals a(t) and b(t) in physical signal space Γ with group properties (in linear signal space LS) lies between the curves 1 and 3 in Fig. 4.3.1.
In summary, one can make the following conclusions.
is metric.
Before proving the theorem, we transform Equation (4.4.5) to a form convenient for the subsequent reasoning, using the relationship (4.4.4) and the identity [221, Section XIII.4, (22)]:
with whose help the function dab may be written in the equivalent form:
We write the expression (4.4.7) in the form of a function of three variables:

dab = F(du, µ+u, µ∆u) = du·µ+u + (1 − du)·µ∆u, (4.4.8)

where du = µ(at, bt), µ+u = µ+ab = Ia + Ib, µ∆u = µ∆ab = |Ia − Ib|.
Similarly, we introduce the following designations for the corresponding functions dbc, dca between the pairs of signals b(t), c(t) and c(t), a(t):

dbc = F(dv, µ+v, µ∆v) = dv·µ+v + (1 − dv)·µ∆v, (4.4.9)

where dv = µ(bt, ct), µ+v = µ+bc = Ib + Ic, µ∆v = µ∆bc = |Ib − Ic|;

dca = F(dw, µ+w, µ∆w) = dw·µ+w + (1 − dw)·µ∆w, (4.4.10)

where dw = µ(ct, at), µ+w = µ+ca = Ic + Ia, µ∆w = µ∆ca = |Ic − Ia|.
It should be noted that for the values du, µ+u, µ∆u; dv, µ+v, µ∆v; dw, µ+w, µ∆w, the inequalities hold:

du ≤ dv + dw; µ+u ≤ µ+v + µ+w; µ∆u ≤ µ∆v + µ∆w, (4.4.11)

µ∆u ≤ µ+u; µ∆v ≤ µ+v; µ∆w ≤ µ+w. (4.4.12)
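The construction (4.4.8) through (4.4.12) lends itself to a direct numerical check; the Python sketch below (illustrative values, helper name ours) computes dab, dbc, dca and verifies the triangle inequality expected from Lemma 4.4.1 below:

    def d_metric(d, I_first, I_second):
        # Function of form (4.4.8)-(4.4.10):
        # F(d, mu_plus, mu_delta) = d*mu_plus + (1 - d)*mu_delta,
        # with mu_plus = I1 + I2 and mu_delta = |I1 - I2|
        mu_plus = I_first + I_second
        mu_delta = abs(I_first - I_second)
        return d * mu_plus + (1.0 - d) * mu_delta

    I_a, I_b, I_c = 2.0, 1.5, 1.0       # absolute information, abits
    d_u, d_v, d_w = 0.6, 0.4, 0.3       # pairwise metrics, d_u <= d_v + d_w

    d_ab = d_metric(d_u, I_a, I_b)
    d_bc = d_metric(d_v, I_b, I_c)
    d_ca = d_metric(d_w, I_c, I_a)
    print(d_ab, d_bc, d_ca, d_ab <= d_bc + d_ca)   # triangle inequality holds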
F(xu,v,w) = F(du,v,w, µ+u,v,w, µ∆u,v,w), (4.4.13)

where xu,v,w is a variable denoting one of the three variables of the function (4.4.13), under the condition that the other two are constants:

xu,v,w = du,v,w, µ+u,v,w = const, µ∆u,v,w = const; or
xu,v,w = µ+u,v,w, du,v,w = const, µ∆u,v,w = const; or (4.4.14)
xu,v,w = µ∆u,v,w, du,v,w = const, µ+u,v,w = const.
For the next step of the proof, a generalization of Lemma 9.0.2 in [245] will be used; it gives sufficient conditions under which the function (4.4.13) preserves the pseudometric property of a space.
Lemma 4.4.1. Let F(du, µ+u, µ∆u) be a monotonic, nondecreasing, convex upward function defined by the relationship (4.4.8) and such that F(du = 0, µ+u, µ∆u) = 0, F(du, µ+u = 0, µ∆u) ≥ 0, F(du, µ+u, µ∆u = 0) ≥ 0. Then if (Γ, du) is a pseudometric space, (Γ, F(du, µ+u, µ∆u)) is also a pseudometric space.
For a nondecreasing convex upward function F(xu,v,w), under the condition that F(0) ≥ 0 for all x: 0 ≤ x ≤ xv + xw, the relation holds:
where x and xv,w are variables denoting one of the three variables of the function (4.4.13), according to the accepted designations (4.4.14).
The inequality (4.4.16) implies the inequalities:
Proof of the identity (4.4.21) follows from the relations (4.4.18) and (4.4.20) under the condition νaa = 1.
Thus, the quantity of mutual information Iab between the signals establishes a measure of information quantity that corresponds to the measure introduced for informational signal space in Section 4.1.
Remark 4.4.2. The quantity of overall information I+ab contained in a pair of stochastic signals a(t), b(t) ∈ Γ interacting in physical signal space Γ is equal to the sum of the quantity of mutual information Iab and the quantity of relative information I∆ab, and is bounded below by max[Ia, Ib]:

I+ab = Iab + I∆ab ≥ max[Ia, Ib]. (4.4.22)

Proof of the inequality (4.4.22) follows directly from joint fulfillment of the triplets of relationships (4.4.18), (4.4.19), (4.4.20), and proof of the identity in (4.4.22) follows from the definition (4.4.20) by identical transformations:

I+ab = Ia + Ib − νab·min[Ia, Ib] =
= max[Ia, Ib] + min[Ia, Ib] − νab·min[Ia, Ib] =
= max[Ia, Ib] + (1 − νab)·min[Ia, Ib] ≥ max[Ia, Ib].
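This chain of transformations can be checked numerically; the minimal Python sketch below (illustrative values, helper name ours) evaluates I+ab for several NMSI values and confirms the bound (4.4.22):

    def overall_information(I_a, I_b, nu_ab):
        # Overall information per the chain above:
        # I+ab = Ia + Ib - nu_ab*min(Ia, Ib)
        #      = max(Ia, Ib) + (1 - nu_ab)*min(Ia, Ib)
        return I_a + I_b - nu_ab * min(I_a, I_b)

    for nu in (0.0, 0.5, 1.0):              # NMSI values, 0 <= nu_ab <= 1
        value = overall_information(3.0, 2.0, nu)
        assert value >= max(3.0, 2.0)       # bound (4.4.22)
        print(nu, value)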
Proof of remark. Write the triangle inequality for dab: dab ≤ dax + dxb, substituting in both parts the relationship between the metric dab and the quantity of relative information I∆ab (4.4.19), which implies the inequality:
Substituting into the last inequality the value of the linear combination νax + νbx − νab = 1 defined by the relationship (3.2.33a), we obtain the initial inequality (4.4.23a). Proof of the inequality (4.4.23b) is similar.
The proof of the remark is the same as the previous one, except that the value of the linear combination νax + νbx − νab = 1 is defined by the relationship (3.2.18).
As for informational signal space Ω, so for physical signal space Γ we take as the unit of information quantity measurement the quantity of absolute information I[ξ(tα)] contained in a single element ξ(tα) of stochastic signal ξ(t); according to the relationship (4.1.4), it is equal to:

I[ξ(tα)] = m(Xα)|tα∈Tξ = ∫Tξ iξ(tα; t) dt = 1 abit, (4.4.26)

where Tξ is the domain of definition of the signal ξ(t) (in the form of a discrete set or a continuum), and iξ(tα; t) is the information distribution density (IDD) of the signal ξ(t).
In the physical signal space Γ, the unit of information quantity is introduced
by the definition that corresponds to Definition 4.1.9.
arbitrary pair of stochastic signals a(t), b(t) ∈ Γ interacting in physical signal space
Γ with properties of a group Γ(+) and a lattice Γ(∨, ∧):
Ibx = Ib /2;
Ibx̃ = Ib /2; (4.4.29b)
Ibx + Ibx̃ = Ib ,
and if the signals a(t) and b(t) are statistically independent, then the identities that directly follow from the relation (3.2.39b) hold:
Theorem 4.4.2. For a pair of stationary stochastic signals a(t) and b(t) with even univariate PDFs in L-group Γ(+, ∨, ∧): a(t), b(t) ∈ Γ, t ∈ T, the quantities Iab, I∆ab, I+ab introduced by Definitions 4.4.1 through 4.4.3 are invariants of a group H of continuous mappings {hα,β}, hα,β ∈ H; α, β ∈ A of stochastic signals preserving the neutral element (zero) 0 of the group Γ(+) of L-group Γ(+, ∨, ∧): hα,β(0) = 0:

Iab = Ia′b′, I∆ab = I∆a′b′, I+ab = I+a′b′; (4.4.31)
hα: a(t) → a′(t), hβ: b(t) → b′(t); (4.4.31a)
hα⁻¹: a′(t) → a(t), hβ⁻¹: b′(t) → b(t), (4.4.31b)

where Ia′b′, I∆a′b′, I+a′b′ are the quantities of mutual, relative, and overall information between the pair of signals a′(t) and b′(t) that are the results of the mappings (4.4.31a) of the signals a(t) and b(t), respectively.
Proof. Corollary 3.5.3 (3.5.15) of Theorem 3.1.1 (and also Corollary 4.2.5 of Theorem 4.2.4) implies the invariance of the quantities of absolute information Ia and Ib contained in the signals a(t) and b(t), respectively:

Ia = Ia′, Ib = Ib′. (4.4.32)

Joint fulfillment of the equalities (4.4.18), (4.4.32), and (3.2.11) implies the identity that determines the invariance of the quantity of mutual information Iab:

Iab = Ia′b′, (4.4.33)

while joint fulfillment of the identities (4.4.32), (4.4.33), (4.4.27b), and (4.4.27c) implies the identity that determines the invariance of the quantity of relative information I∆ab:

I∆ab = I∆a′b′, (4.4.34)

and also implies the identity that determines the invariance of the quantity of overall information I+ab:

I+ab = I+a′b′. (4.4.35)
On the basis of Theorem 4.4.1 defining a metric dab (4.4.5) for stationary and stationarily coupled stochastic signals a(t), b(t′) ∈ Γ in physical signal space Γ, one can also define a metric dab(t, t′) between a pair of samples at = a(t), bt′ = b(t′) of nonstationary signals:

ρab = inf_{t,t′∈T} dab(t, t′) = Ia + Ib − 2νab min[Ia, Ib] (4.4.37)

is a metric.
Proof. Two stochastic signals a(t), b(t′) ∈ Γ can be considered as the corresponding nonintersecting sets of samples A = {at}, B = {bt′} with the distance (4.4.36) between the samples. Then the distance ρab between the sets A = {at}, B = {bt′} of the samples (between the signals a(t), b(t′)) is determined by the identity [246, Section IV.1, (1)]:

ρab = inf_{t,t′∈T} dab(t, t′) = Ia + Ib − 2 sup_{t,t′∈T} ν(at, bt′) min[Ia, Ib]. (4.4.38)

Under the condition νab = sup_{t,t′∈T} ν(at, bt′), from the equality (4.4.38) we obtain the initial relationship.
The condition νab = sup_{t,t′∈T} ν(at, bt′) appearing in Theorem 4.4.3 reflects the fact that the closest, i.e., statistically most dependent, samples of the two signals a(t) and b(t′) interact at the same time instant t. When the signals a(t) and b(t′) are statistically independent, this condition always holds.
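As a sketch of the distance (4.4.37), the following Python fragment (illustrative NMSI values on a grid of sample pairs; names are ours) takes the supremum of ν(at, bt′) over all sample pairs and evaluates ρab:

    import numpy as np

    def rho_ab(I_a, I_b, nu_grid):
        # Distance (4.4.37): nu_ab is the supremum of nu(a_t, b_t')
        # over all pairs of samples (t, t')
        nu_ab = float(np.max(nu_grid))
        return I_a + I_b - 2.0 * nu_ab * min(I_a, I_b)

    nu_grid = np.array([[0.1, 0.2, 0.3],     # illustrative NMSI values
                        [0.2, 0.7, 0.4],     # nu(a_t, b_t') per sample pair
                        [0.1, 0.4, 0.5]])
    print(rho_ab(2.0, 1.5, nu_grid))         # 2.0 + 1.5 - 2*0.7*1.5 = 1.4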
The results reported in this section allow us to draw the following conclusions.
1. The metric (4.4.5) introduced in physical signal space provides an adequate basis for the subsequent analysis of informational relationships between the signals interacting in physical signal space with both group and lattice properties.
2. Quantity of mutual information introduced by Definition 4.4.1 establishes
a measure of information quantity, which completely corresponds to a
measure of information quantity for informational signal space introduced
earlier in Section 4.1.
3. The obtained informational relationships create the basis for the analysis
of quality indices (possibilities) of signal processing in signal spaces with
both group and lattice properties.
5
Communication Channel Capacity
where M [∗, ∗] is a modulating function; c(t) is a carrier signal (or simply a carrier);
u(t) is a transmitted message.
Modulation is the variation of some parameter of a carrier signal according to the transmitted message. The parameter varied by the modulation is called the informational one. Naturally, to extract the transmitted message u(t) at the receiving side, the modulating function M[∗, ∗] has to be a one-to-one relation:
where M⁻¹[∗, ∗] is the function inverse to the initial M[∗, ∗].
To transmit information over a distance, communication systems use signals capable of propagating as electromagnetic, hydroacoustic, and other oscillations within the physical medium separating a sender and an addressee. A wide class of such signals is described by harmonic functions. The transmitted information is embedded in a high-frequency oscillation cos ω0t called a carrier:
where the amplitude A(t) and/or phase ϕ(t) are changed according to the transmitted message u(t).
Depending on which parameter of the carrier signal is changed, we distinguish amplitude, frequency, and phase modulation. Modulation of a carrier signal by a discrete message u(t) = {uj(t)}, j = 1, 2, . . . , n; uj(t) ∈ {ui}; i = 1, 2, . . . , q; q = 2^k, k ∈ N is called keying, and the signal s(t) is called a keyed (manipulated) signal. In the following section, the informational characteristics of discrete (binary (q = 2) and m-ary (q = 2^k, k > 1)) and continuous signals are considered.
and univariate PDFs of the samples u(tj ) and u(tk ) of discrete stochastic process
u(t) are equal to:
p(x1 ) = 0.5δ (x1 − u1 ) + 0.5δ (x1 − u2 ); (5.1.3a)
p(x2 ) = 0.5δ (x2 − u1 ) + 0.5δ (x2 − u2 ), (5.1.3b)
where δ(x) is the Dirac delta function.
Substituting the relationships (5.1.2), (5.1.3a), and (5.1.3b) into the formula (3.1.2), we obtain the normalized function of statistical interrelationship (NFSI) ψu(τ) of stochastic process u(t), which, for arbitrary values of the state transition probability pc, is defined by the expression:

ψu(τ) = |1 − 2pc|^[τ/τ0] · [1 − (1 − |1 − 2pc|)·(|τ|/τ0 − [τ/τ0])], (5.1.4)

where [x] denotes the integer part of x. From the relationships (5.1.2) and (5.1.3) one can also find the normalized autocorrelation function (ACF) ru(τ) of stochastic process u(t), which, for arbitrary values of the state transition probability pc, is determined by the relationship:

ru(τ) = (1 − 2pc)^[τ/τ0] · [1 − 2pc·(|τ|/τ0 − [τ/τ0])]. (5.1.5)

For stochastic process u(t) with state transition probability pc taking values in the interval [0; 0.5], the normalized ACF ru(τ) is a strictly positive function, and the NFSI ψu(τ) is identically equal to it: ψu(τ) = ru(τ) ≥ 0 (see Fig. 5.1.1(a)).
FIGURE 5.1.1 NFSI ψu(τ) and normalized ACF ru(τ) of stochastic process u(t) with state transition probability pc taking values in intervals (a) [0; 0.5], (b) ]0.5; 1]
For stochastic processes u(t) with state transition probability pc taking values in the interval ]0.5; 1], the normalized ACF ru(τ) has an oscillating character, taking both positive and negative values. An example of the relation between the normalized ACF and the NFSI for this case is shown in Fig. 5.1.1(b).
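A small Python sketch (our reading of (5.1.4) and (5.1.5), with [x] taken as the integer part) evaluates both functions; at pc = 1/2 it reproduces the triangle ψu(τ) = ru(τ) = 1 − |τ|/τ0 on [0; τ0]:

    import numpy as np

    def nfsi_binary(tau, tau0, pc):
        # NFSI (5.1.4) of a binary Markov sequence; [x] = integer part
        n = np.floor(np.abs(tau) / tau0)
        frac = np.abs(tau) / tau0 - n
        g = abs(1.0 - 2.0 * pc)
        return g ** n * (1.0 - (1.0 - g) * frac)

    def acf_binary(tau, tau0, pc):
        # Normalized ACF (5.1.5) of the same sequence
        n = np.floor(np.abs(tau) / tau0)
        frac = np.abs(tau) / tau0 - n
        return (1.0 - 2.0 * pc) ** n * (1.0 - 2.0 * pc * frac)

    tau = np.linspace(0.0, 3.0, 7)
    print(nfsi_binary(tau, 1.0, 0.5))   # triangle on [0, tau0], zero beyond
    print(acf_binary(tau, 1.0, 0.8))    # oscillating ACF for pc in ]0.5; 1]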
In the case of statistical independence of the symbols {uj(t)} of a message u(t) (pc = ps = 1/2), the normalized ACF ru(τ) and the NFSI ψu(τ) are determined by the expression:

ψu(τ) = ru(τ) = { 1 − |τ|/τ0, |τ| ≤ τ0; 0, |τ| > τ0. }
It should be noted that for stochastic processes with state transition probability
pc = P taking the values in the interval [0; 0.5], NFSI ψu (τ ) |P is identically equal to
NFSI ψu (τ ) |1−P of stochastic processes with state transition probability pc = 1 −P
taking the values in the interval [0.5; 1]:
ψu (τ ) |P = ψu (τ ) |1−P = ψu (τ ). (5.1.6)
FIGURE 5.1.2 IDD iu (τ ) of stochastic process u(t) that is determined by Equation (5.1.7)
In the case of statistical independence of the symbols {uj (t)} of a message u(t)
(pc = ps = 1/2), IDD iu (τ ) is determined by the expression:
Iu(T) = T·iu(0), (5.1.8)
symbols (states {u1 , u2 }) (P{u1 } = P{u2 } = 0.5) and state transition probability
pc = 1/2, the quantity of absolute information Iu (T ) measured in abits is equal to
information quantity Iu [n, 2] measured in bits:
NFSI is preserved:
ψu(tj, tk) = ψs(t′j, t′k), (5.1.16)
and the overall quantity of information I [u(t)] (I [s(t)]) contained in stochastic pro-
cess u(t) (s(t)), is preserved too:
Assume the processes sAM(t), sPM(t), sFM(t) are stationary. Then, according to Theorem 3.1.1, the relationships that characterize the identities between the individual characteristics of the signal s(t) and the message u(t) hold.
Identity between NFSIs:
tj = ∆ ± jτ0 ,
P{ul → um } = plm ,
where pc ∈ ]0; 1/(q − 1)]; ps ∈ [0; (q − 1)pc], and the matrix Πn of probabilities of transition from one state to another in n steps is equal to [115], [249]:

Πn = [Π11 Π12 . . . Π1q; Π21 Π22 . . . Π2q; . . .],
and the univariate PDFs p(x1), p(x2) of the samples u(tj) and u(tk) of discrete stochastic process u(t) are, respectively, equal to:

p(x1) = (1/q)·Σ_{i=1}^{q} δ(x1 − ui); p(x2) = (1/q)·Σ_{i=1}^{q} δ(x2 − ui). (5.1.20)
Substituting the relationships (5.1.19) and (5.1.20) into the formula (3.1.2), we obtain the NFSI ψu(τ) of stochastic process u(t), which, for arbitrary values of the state transition probability pc, is determined by the expression:

ψu(τ) = |1 − q·pc|^[τ/τ0] · [1 − (1 − |1 − q·pc|)·(|τ|/τ0 − [τ/τ0])]. (5.1.21)
From the relationships (5.1.19) and (5.1.20), we can also find the normalized ACF ru(τ) of stochastic process u(t).
In the case of statistical independence of the symbols {uj(t)} of a message u(t) (plm = pc = ps = 1/q), the normalized ACF ru(τ) and the NFSI ψu(τ) are determined by the relationship:

ψu(τ) = ru(τ) = { 1 − |τ|/τ0, |τ| ≤ τ0; 0, |τ| > τ0. }
According to the coupling equation (3.4.3) between IDD and NFSI (5.1.21), the IDD iu(τ) of stochastic process u(t) is determined by the expression:

iu(τ) = (1/τ0)·(1 − |1 − q·pc|)·|1 − q·pc|^[2τ/τ0]. (5.1.23)
Examples of IDDs for a multipositional sequence u(t) with state transition probability pc = 1/(q − 1) at q = 8; 64 are represented in Fig. 5.1.4. In the case of statistical independence of the symbols {uj(t)} of a message u(t) (plm = 1/q), the IDD iu(τ) is defined by the expression:
Iu (T ) = Iu [n, q ] = n, (5.1.25)
Is introducing a new measure, and its application with respect to discrete random sequences, necessary and reasonable? In the case of an arbitrary discrete sequence with q > 2, the equality (5.1.25) does not hold:

Iu(T) ≠ Iu[n, q].
Moreover, discrete random sequences {uj(t)}, {wj(t)} of the same length n (with no statistical dependence between the symbols) and distinct cardinalities Card{ui}, Card{wk} of the sets of values {ui}, {wk} (i = 1, 2, . . . , qu; k = 1, 2, . . . , qw; qu ≠ qw) contain the same quantity of absolute information Iu(T), Iw(T) measured in abits:

Iu(T) = Iw(T),

whereas the information quantities Iu[n, qu], Iw[n, qw] measured in bits are not equal to each other:

Iu[n, qu] ≠ Iw[n, qw], qu ≠ qw.

Thus, for the newly introduced unit of information quantity measurement, it is necessary to establish its interrelation with the known units based on the logarithmic measure of Hartley [50]. This may be realized by defining a new notion.
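The contrast between the two units can be made concrete with a short Python sketch (our illustration, assuming independent equiprobable symbols): the abit count depends only on the sequence length n, while Hartley's measure also depends on the code base q.

    import math

    def info_abits(n):
        # Quantity of absolute information: n abits for n independent
        # symbols, regardless of the code base q
        return n

    def info_bits(n, q):
        # Hartley's logarithmic measure: I_u[n, q] = n*log2(q) bits
        return n * math.log2(q)

    n = 16
    for q in (2, 8, 64):
        print(q, info_abits(n), info_bits(n, q))
    # Equal-length sequences carry the same number of abits,
    # while their bit measures n*log2(q) differ for qu != qw.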
tj = ∆ + jτ0 ,
It is not difficult to notice that for an arbitrary pair of random variables zi and zl from the set of values {zi} of the element ξj(t) of generalized discrete random sequence ξ(t), with PDFs pi(ci, z) and pl(cl, z), respectively, the metric identity holds:

dil = (1/2)·∫Dz |pi(ci, z) − pl(cl, z)| dz = (1/2)·m[Zi∆Zl], (5.1.27)

where dil is a metric between the PDF pi(ci, z) of random variable zi and the PDF pl(cl, z) of random variable zl; m[Zi∆Zl] is a measure of the symmetric difference of the sets Zi and Zl; Zi = ϕ[pi(ci, z)]; Zl = ϕ[pl(cl, z)].
The mapping (5.1.26) transforms the PDFs pi(ci, z), pl(cl, z) into the corresponding equivalent sets Zi, Zl and defines an isometric mapping of the function space of PDFs {pi(ci, z)} — (P, d) with metric dil (5.1.27) into the set space {Zi} — (Z, d∗) with metric d∗il = (1/2)·m[Zi∆Zl].
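A numerical sketch of the metric (5.1.27) (our code; the triangular PDFs of Example 5.1.2 below serve as the test case) evaluates dil on a grid:

    import numpy as np

    def pdf_metric(p_i, p_l, dz):
        # Metric (5.1.27): d_il = 0.5 * integral |p_i - p_l| dz,
        # i.e., half the measure of the symmetric difference of Zi, Zl
        return 0.5 * np.sum(np.abs(p_i - p_l)) * dz

    def triangular_pdf(z, c, b):
        # Triangular PDF centered at c with half-width b (Example 5.1.2)
        return np.where(np.abs(z - c) <= b,
                        (1.0 - np.abs(z - c) / b) / b, 0.0)

    z = np.linspace(-5.0, 5.0, 20001)
    dz = z[1] - z[0]
    d = pdf_metric(triangular_pdf(z, 0.0, 1.0), triangular_pdf(z, 3.0, 1.0), dz)
    print(d)    # -> 1.0 for disjoint supports: maximally distinct PDFs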
The mapping (5.1.26) permits considering the set of values {zi} of generalized discrete random sequence ξ(t) as a collection of sets {Zi}:

Z = ∪_{i=1}^{q} Zi,
Example 5.1.2. Let ξ(t) be a generalized discrete random sequence with statistically independent elements {ξj(t)}, so that the element of the sequence ξj(t), t ∈ Tj equiprobably takes values from a set {zi}, i = 1, 2, . . . , q; and {zi} are statistically independent random variables with PDF pi(ci, z):

pi(ci, z) = { (1/b)·(1 − |z − ci|/b), |z − ci| ≤ b; 0, |z − ci| > b, }
Example 5.1.3. Let ξ(t) be a generalized discrete random sequence with statistically independent elements {ξj(t)}, so that the element of the sequence ξj = ξj(t), t ∈ Tj equiprobably takes values from a set {zi}, i = 1, 2, . . . , q; and {zi} are statistically independent random variables with PDF pi(ci, z) = δ(z − ci), ci ≠ cl, i ≠ l, and the IDD iξ(τ) of the sequence ξ(t) is determined by the expression (5.1.23):

iu(τ) = (1/τ0)·(1 − |1 − q·pc|)·|1 − q·pc|^[2τ/τ0],
The result (5.1.29), first obtained by Hartley [50], corresponds to the informational characteristic of an ideal message source. From a formal standpoint, over a finite time interval Tξ = [t0, t0 + T], an ideal message source is able to produce an information quantity Iξ(n, q) equal to:
tj = ∆ ± jτ0 ,
Proof. Obviously, the part of the identity in (5.1.34) corresponding to the case when the strict inequality q < n holds does not require proof, so we consider the proof of the identity when q ≥ n. Let f be a function that realizes a bijective mapping of discrete message u(t), in the form of a discrete random sequence, into a discrete sequence w(t):

u(t) ⇄ w(t) (direct mapping f, inverse mapping f⁻¹), (5.1.35)

and the mapping f is such that each symbol uj(t) of the initial message u(t) = {uj(t)}, j = 1, 2, . . . , n, taking one of q values of a set {ui}, i = 1, 2, . . . , q (Card{ui} = q), is mapped into the corresponding symbol wl(t) of multipositional discrete sequence w(t) = {wl(t)}, l = 1, 2, . . . , n, which can take one of n values of a set {wl} (Card{wl} = n):

f: uj(t) → wl(t). (5.1.36)
The possibility of such a mapping f exists because the maximum number of distinct symbols of the initial sequence u(t) = {uj(t)}, j = 1, 2, . . . , n and the maximum number of distinct symbols of the sequence w(t) = {wl(t)}, l = 1, 2, . . . , n are both equal to the number of sequence symbols n. Under the bijective mapping f (5.1.35), the initial discrete random sequence u(t) with n symbols and code base q transforms into a discrete random sequence w(t) with n symbols and code base n. Then, according to the main axiom 2.3.1 of signal processing theory, the information quantity Iw[n, n] contained in discrete random sequence w(t) is equal to the information quantity Iu[n, q] contained in the initial discrete random sequence u(t):
Equality (5.1.37) can be interpreted in the following way: discrete random sequence u(t) = {uj(t)}, j = 1, 2, . . . , n, whose every symbol uj(t) at an arbitrary instant equiprobably takes values from a set {ui}, i = 1, 2, . . . , q, contains the information quantity I[u(t)] defined by Hartley's logarithmic measure (5.1.34), but not greater than n·log2 n bit. Consider another proof of the equality (5.1.34).
Proof. Consider the mapping h of the set of symbols Ux = {uj(t)} of discrete random sequence u(t) onto the set of values Uz = {ui}:
The mapping h has the property that for each element ui from the set of values Uz = {ui} there exists at least one element uj(t) from the set of symbols Ux,
where CardUx = n.
The cardinality CardU of the set U of all possible mappings of the elements {uj(t)} of the set Ux onto the set Uz = {ui} is equal to:
Theorem 5.1.1, taken together with the relationship (5.1.34) and the inequality (5.1.42), permits us to draw some conclusions.
tj = ∆ + jτ0 ,
The result (5.1.28) may be generalized by introducing, as the variables, the abstract measures MT, MX of the set of symbols {ξj(t)}, j = 1, 2, . . . , n and the set of values {zi}, i = 1, 2, . . . , q, respectively:
is contained in a stochastic function ξ(t) (see Section 4.1). The stochastic function ξ(t) is a one-to-one correspondence between the sets Tξ = {tα} and Ξ = {xα}. This means that both the set of values of the argument Tξ = {tα} and the set of values of the function Ξ = {xα} can be associated with a set ∪α Xα with a measure m(Σα Xα). This implies that the measure MT of the set of values of the argument of the function Tξ is equal to the measure MX of the set of values of the function Ξ, and both are equal to the measure m(Σα Xα) called the overall quantity of information I[ξ(t)]: MT = MX = I[ξ(t)] = m(Σα Xα) (see Definition 3.5.4), which is equal to the quantity of absolute information (see Definition 4.1.4).
Thus, for a continuous stochastic signal, the measure MT of the domain of definition Tξ = {tα} and the measure MX of the codomain Ξ = {xα} are identical and equal to the overall quantity of information I[ξ(t)] contained in the stochastic signal over the interval of its existence:

MT = MX = I[ξ(t)].

Thus, the overall quantity of information I[ξ(t)] contained in a stochastic signal ξ(t) and evaluated by the logarithmic measure (5.1.46) is determined by the expression:
The relative quantity of information Iξ,∆ entering the formula (5.1.47) is measured in abits and is connected with the overall quantity of information I[ξ(t)] by the relationship (4.1.19).
It should be noted that the overall quantity of information Iξ contained in continuous stochastic signal ξ(t) and evaluated by the logarithmic measure (5.1.47) does not possess the property of additivity, unlike the overall quantity of information I[ξ(t)] measured in abits. Let a collection ξk(t), t ∈ Tk, k = 1, . . . , K form a partition of a stochastic process ξ(t):

ξ(t) = ∪_{k=1}^{K} ξk(t), t ∈ Tk; (5.1.49)

Tk ∩ Tl = ∅, k ≠ l; ∪_{k=1}^{K} Tk = Tξ, Tξ = [t0, t0 + T];

ξk(t) = ξ(t), t ∈ Tk,

where Tξ = [t0, t0 + T] is the domain of definition of ξ(t).
Let I[ξk(t)] be the overall quantity of information contained in an elementary signal ξk(t) and measured in abits, and let Iξ,k be the overall quantity of information contained in an elementary signal ξk(t) and evaluated by the logarithmic measure:

Iξ,k = I[ξk(t)]·log2 I[ξk(t)] (bit). (5.1.50)

Then the identity holds:

I[ξ(t)] = Σ_{k=1}^{K} I[ξk(t)]. (5.1.51)
Substituting the equality (5.1.51) into the expression (5.1.47), we obtain the following relationship:

Iξ = (Σ_{k=1}^{K} I[ξk(t)])·log2(Σ_{k=1}^{K} I[ξk(t)]) = Σ_{k=1}^{K} (I[ξk(t)]·log2(Σ_{k=1}^{K} I[ξk(t)])). (5.1.52)
Thus, the relationship (5.1.53) implies that the overall quantity of information Iξ contained in continuous stochastic signal ξ(t) and evaluated by the logarithmic measure (5.1.47) does not possess the property of additivity; this overall quantity of information is always greater than the sum of the information quantities contained in the separate parts ξk(t) of the entire stochastic process (see (5.1.49)).
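A numerical check of this non-additivity (a minimal Python sketch with illustrative abit values; helper name ours):

    import math

    def log_measure(I_abits):
        # Logarithmic overall information (5.1.47)/(5.1.50):
        # I = I_abits * log2(I_abits) (bit)
        return I_abits * math.log2(I_abits)

    parts = [4.0, 4.0, 8.0]            # I[xi_k(t)] of the partition, abits
    whole = sum(parts)                 # additivity (5.1.51) holds for abits
    I_whole = log_measure(whole)
    I_parts = sum(log_measure(I_k) for I_k in parts)
    print(I_whole, I_parts, I_whole > I_parts)   # 64.0 40.0 True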
There are a lot of practical applications in which it is necessary to operate with
information quantity contained in an unknown nonstochastic signal (or an unknown
nonrandom parameter of this signal). An evaluation of this quantity is given by the
following theorem.
Theorem 5.1.3. The information quantity Is(T∞) contained in an unknown nonstochastic signal s(t) defined on an infinite time interval t ∈ T∞ = ]−∞, ∞[ by some continuous function:

s(t) = f(λ, t), t ∈ T∞ = ]−∞, ∞[,

where λ is an unknown nonrandom parameter, is equal to 1 abit:

Is(T∞) = 1 (abit).
is equal to 1 abit:
Iλ (T∞ ) = 1 (abit).
Is (T0 ) = 1 (abit).
is equal to 1 abit:
Iλ (T0 ) = 1 (abit).
Proof of Theorem 5.1.3 and Corollaries 5.1.2 through 5.1.4. According to the condition of the theorem, arbitrary samples s(t′) and s(t″), t″ = t′ + ∆, of the signal s(t) are connected by a one-to-one transformation:
The NFSI ψs(t′, t″) of the signal s(t) (3.1.2) between its arbitrary samples s(t′) and s(t″) is equal to one:

ψs(t′, t″) = 1.

This identity holds if and only if an arbitrary sample s(t′) of the signal s(t) is characterized by an IDD is(t′, t) of the following form:

is(t′, t) = lim_{a→∞} (1/a)·[1(t − (t′ − a/2)) − 1(t − (t′ + a/2))],
Hereinafter this statement will be extended to signals (and their parameters) defined on the closed interval [t0, t0 + T].
Thus, for continuous stochastic signal ξ(t) characterized by the partition (5.1.49), the following statements hold.
C = max_s [T·is(0)/T] = max_s is(0) (abit/s). (5.2.2)

C∆ = max_s [I∆(s)/T] (abit/s). (5.2.3)
In all the variants of the capacity definitions (5.2.1) through (5.2.3), the choice of the best, most appropriate signal s(t) for the given channel is realized over all possible signals from the signal space. In this sense, the channel is considered matched with the signal s(t) if its capacity evaluated by the overall (relative) quantity of information is equal to the ratio of the overall (relative) quantity of information contained in the signal s(t) to its duration T:

C = I(s)/T = T·is(0)/T = is(0) (abit/s); (5.2.4a)

C∆ = I∆(s)/T (abit/s), (5.2.4b)

where is(0) is the value of the IDD is(τ) at the point τ = 0.
As shown in Section 4.1, for the signals characterized by an IDD in the form of a δ-function or in the form of the difference of two Heaviside step functions (1/a)·[1(τ + a/2) − 1(τ − a/2)], the relative quantity of information I∆(s) contained in the signal s(t) is equal to the overall quantity of information I(s) (see Formula (4.1.17)):

I∆(s) = I(s).

Meanwhile, for the signals characterized by an IDD in the form of other functions, the relative quantity of information I∆(s) contained in the signal s(t) is equal to half the overall quantity of information I(s) (see Formula (4.1.19)):

I∆(s) = (1/2)·I(s).
Taking into account this interrelation between the relative quantity of information I∆(s) and the overall quantity of information I(s) contained in the signal s(t), the relationship between the noiseless channel capacity evaluated by the relative quantity of information and that evaluated by the overall quantity of information is defined by the IDD of the signal s(t). For the signals characterized by an IDD in the form of a δ-function or in the form of the difference of two Heaviside step functions (1/a)·[1(τ + a/2) − 1(τ − a/2)], the noiseless channel capacity C∆ (by r.q.i.) is equal to the capacity C (by o.q.i.):

C∆ = C. (5.2.5)

For the signals characterized by an IDD in the form of other functions, the noiseless channel capacity C∆ (by r.q.i.) is equal to half the capacity C (by o.q.i.):

C∆ = (1/2)·C. (5.2.6)
Using the relationships obtained in Section 5.1, we evaluate the capacities of both discrete and continuous noiseless channels, which characterize the information quantities carried by discrete and continuous stochastic signals, respectively.
We shall assume that a discrete random sequence u(t) is matched with a channel. Then the discrete noiseless channel capacity (by o.q.i.) C (bit/s) measured in bit/s is equal to the ratio of the overall quantity of information Iu[n, q] contained in the signal u(t) to its duration T:

C = Iu[n, q]/T = { (1/τ0)·log2 q, q < n; (1/τ0)·log2 n, q ≥ n, } (5.2.7)

where τ0 is the duration of the elementary signal uj(t).
Consider the case in which discrete random sequence u(t) = {uj(t)} is characterized by a statistical interrelation between the symbols with IDD iu(τ) and is matched with a channel. The discrete noiseless channel capacity (by o.q.i.) C (bit/s) measured in bit/s, according to the relationship (5.1.45), is bounded by the quantity:

C (bit/s) ≤ iu(0)·log2[iu(0)·T]. (5.2.8)

The relationships (5.2.7) and (5.2.8) imply that the discrete noiseless channel capacity (by o.q.i.) C (bit/s) measured in bit/s is always a finite quantity under the condition that the duration T of the sequence is bounded: T < ∞.
Let a discrete random sequence u(t) with statistically independent symbols {uj(t)}, characterized by an IDD iu(τ) in the form of the difference of two Heaviside step functions (1/τ0)·[1(τ + τ0/2) − 1(τ − τ0/2)], be matched with a channel. Then the discrete noiseless channel capacity (by r.q.i.) C∆ (bit/s) is equal to the channel capacity (by o.q.i.) C (bit/s):

C∆ (bit/s) = C (bit/s), (5.2.9)

and the latter, according to the relationships (5.2.7) and (5.1.40), is determined by the expression:

C (bit/s) = { C (abit/s)·log2 q, q < n; C (abit/s)·log2 n, q ≥ n, } (5.2.10)

where C (abit/s) = iu(0) = 1/τ0.
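A minimal Python sketch of the relationships (5.2.7) and (5.2.10) (illustrative τ0; helper name ours):

    import math

    def capacity_bits(tau0, q, n):
        # Discrete noiseless channel capacity (by o.q.i.), per (5.2.7)/(5.2.10):
        # C(abit/s) = i_u(0) = 1/tau0, scaled by log2(q) if q < n, else log2(n)
        c_abit = 1.0 / tau0
        return c_abit * math.log2(q if q < n else n)

    tau0 = 1e-3                               # symbol duration, s
    print(capacity_bits(tau0, q=4, n=100))    # 1000*log2(4)   = 2000 bit/s
    print(capacity_bits(tau0, q=256, n=100))  # 1000*log2(100) ~ 6644 bit/s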
C (bit/s) = (I[ξ(t)]/T)·log2 I[ξ(t)] = C (abit/s)·log2 I[ξ(t)] = C (abit/s)·log2[C (abit/s)·T], (5.2.11)

where C (abit/s) is the continuous noiseless channel capacity (by o.q.i.) measured in abit/s.
The expression (5.2.11) defines the ultimate information quantity Iξ that can be transmitted by the signal ξ(t) over the channel in a time equal to the signal duration T.
The relative quantity of information Iξ,∆ contained in continuous stochastic signal ξ(t) and evaluated by the logarithmic measure is determined by Expression (5.1.48):

C∆ (bit/s) = (I∆[ξ(t)]/T)·log2 I∆[ξ(t)] = C∆ (abit/s)·log2 I∆[ξ(t)] = C∆ (abit/s)·log2[C∆ (abit/s)·T], (5.2.12)

where C∆ (abit/s) is the continuous noiseless channel capacity (by r.q.i.) measured in abit/s.
The expression (5.2.12) defines the ultimate quantity of useful information Iξ,∆ that can be extracted from the transmitted signal ξ(t) in a time equal to the signal duration T.
The formulas (5.2.11), (5.2.12) converting continuous noiseless channel capacity from abit/s to bit/s have the following features. First, continuous noiseless channel capacity evaluated by the overall (relative) quantity of information C (bit/s) (C∆ (bit/s)) and measured in bit/s, unlike the channel capacity C (abit/s) (C∆ (abit/s)) measured in abit/s, depends on the IDD iξ(τ) and also on the overall (relative) quantity of information I[ξ(t)] (I∆[ξ(t)]) contained in the signal ξ(t). Second, continuous noiseless channel capacity evaluated by the overall (relative) quantity of information C (bit/s) (C∆ (bit/s)) and measured in bit/s, as well as the channel capacity C (abit/s) (C∆ (abit/s)) measured in abit/s, is always a finite quantity under the condition that the duration T of the continuous signal is bounded: T < ∞. Third, continuous noiseless channel capacity evaluated by the overall (relative) quantity of information C (bit/s) (C∆ (bit/s)) and measured in bit/s is uniquely determined only when the signal duration T is known. Fourth, (5.2.11) and (5.2.12) imply that under any signal-to-noise ratio and by means of any signal processing units, one cannot transmit (extract) a greater information quantity (quantity of useful information) within a signal duration T than the quantities determined by these relationships, respectively.
Thus, when talking about continuous noiseless channel capacity, one should take into account the following considerations. On the basis of the introduced notions of quantities of absolute, mutual, and relative information, along with overall and relative quantities of information, a continuous noiseless channel capacity can be defined uniquely. Conversely, applying the logarithmic measure of information quantity, a continuous noiseless channel capacity can be uniquely defined only for a fixed time, for instance, for a signal duration T or for a time unit (e.g., a second). In the latter case, the formulas (5.2.11), (5.2.12) converting continuous noiseless channel capacity evaluated by the overall (relative) quantity of information may be written as follows:

C (bit/s) = C (abit/s)·log2[C (abit/s)·1 s]; (5.2.13a)
C∆ (bit/s) = C∆ (abit/s)·log2[C∆ (abit/s)·1 s]. (5.2.13b)

Continuous noiseless channel capacity (by o.q.i.) C (bit/s) (5.2.13a) measured in bit/s defines the maximum information quantity that can be transmitted through a noiseless channel in one second. Continuous noiseless channel capacity (by r.q.i.) C∆ (bit/s) (5.2.13b) measured in bit/s defines the maximum quantity of useful information that can be extracted in one second from a signal transmitted through a noiseless channel.
Substituting the value of the NFSI ψs(τ) (5.2.15) into (5.2.16), we obtain the expression for the HSD σs(ω) of the signal s(t):

σs(ω) = (a/2π)·[sin(aω/2)/(aω/2)]². (5.2.17)

The effective width ∆ω of the HSD σs(ω) is equal to:

∆ω = (1/σs(0))·∫_0^∞ σs(ω) dω = 1/(2σs(0)) = π/a. (5.2.18)
The noiseless channel capacity (by o.q.i.) C, under the condition that the channel is matched with a stochastic signal with a rectangular IDD, according to the formula (5.2.4a), is determined by the maximum value is(0) of the IDD is(τ):

C = is(0) = 1/a = 2∆f (abit/s), (5.2.19)

where ∆f = ∆ω/(2π) = 1/(2a) = 0.5·is(0) is the real effective width of the HSD σs(ω).
The noiseless channel capacity (by r.q.i.) C∆, under the condition that the channel is matched with a stochastic signal with a rectangular IDD, according to the formula (5.2.5), is equal to the channel capacity (by o.q.i.) C:

C∆ = C = is(0) = 2∆f (abit/s). (5.2.20)
On the basis of the continuous noiseless channel capacity evaluated by the overall (relative) quantity of information C (abit/s) (C∆ (abit/s)) (5.2.19) and (5.2.20), we determine the channel capacity C (bit/s) (C∆ (bit/s)) measured in bit/s on the assumption that the duration T of the signal s(t) is equal to one second. In this case, according to the formulas (5.2.13), the channel capacities C (bit/s) (C∆ (bit/s)) are equal to:

C (bit/s) = 2∆f·log2[2∆f·1 s]; (5.2.21a)
C∆ (bit/s) = 2∆f·log2[2∆f·1 s], (5.2.21b)

where ∆f = 0.5·is(0) is the real effective width of the HSD σs(ω) of the signal s(t).
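The sketch below (Python, our illustration) combines (5.2.19) with the conversion (5.2.13)/(5.2.21) for a channel matched to a signal with rectangular IDD of width a:

    import math

    def capacity_rect_idd(a):
        # Capacity (5.2.19): C = i_s(0) = 1/a = 2*df abit/s, df = 1/(2a)
        return 1.0 / a

    def capacity_bits_per_s(c_abit, T=1.0):
        # Conversion (5.2.13)/(5.2.21): C(bit/s) = C(abit/s)*log2(C(abit/s)*T)
        return c_abit * math.log2(c_abit * T)

    a = 1e-3                            # IDD width, s (illustrative)
    c = capacity_rect_idd(a)            # 1000 abit/s, i.e., df = 500 Hz
    print(c, capacity_bits_per_s(c))    # 1000.0, ~9966 bit/s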
As for Gaussian channels, one should note the following consideration. The squared module of the frequency response function (amplitude-frequency characteristic) K(ω) of a channel matched with Gaussian signal s(t) is equal to its power spectral density S(ω):

|K(ω)|² = S(ω). (5.2.22)

Despite the nonlinear dependence between the NFSI ψs(τ) and the normalized autocorrelation function rs(τ) of the signal s(t), defined by the relationship (3.1.3), one may consider the power spectral density S(ω) to be approximately proportional to the HSD σs(ω):

S(ω) ≈ A·σs(ω), (5.2.23)

where A is a coefficient of proportionality providing the normalization property of the HSD: ∫_{−∞}^{∞} σξ(ω) dω = 1 (see Section 3.1).
The effective width ∆ωeff of the squared module of the frequency response function K(ω) of a channel matched with Gaussian signal s(t) is approximately equal to the effective width ∆ω of the HSD σs(ω):

∆ωeff = (1/|K(0)|²)·∫_0^∞ |K(ω)|² dω ≈ ∆ω. (5.2.24)
Noiseless channel capacity (by o.q.i.) C, under the condition that the channel is
matched with a stationary signal with Laplacian IDD, according to the formula
(5.2.4a), is determined by maximum value is (0) of IDD is (τ ):
duration T of a signal s(t) is equal to one second. In this case, according to the
formulas (5.2.13), channel capacities C (bit/s), C∆ (bit/s) are equal to:
According to the relationship (5.2.25), the formulas (5.2.33) for a noiseless chan-
nel matched with Gaussian signal s(t) may be written as follows:
In electronic systems that transmit, receive, and extract information, signal processing is always accompanied by interactions between useful signals and interference (noise). The presence of interference (noise) at the input of a signal processing unit prevents accurate reproduction of the useful signal or transmitted message at the output of the signal processing unit because of information losses.
A signal processing unit may be considered optimal if the losses of information are minimal. Solving signal processing problems requires various criteria and quality indices.
The conditions of interaction of useful signals and interference (noise) require optimal signal processing algorithms to ensure minimal information losses. The level of information losses defines the potential quality indices of signal processing.
Generally, the evaluation of potential quality indices of signal processing, along with the synthesis and analysis of signal processing algorithms, constitutes the fundamental problems of signal processing theory. In contrast to synthesis, the establishment of potential quality indices of signal processing is not based on any particular criterion of optimality. Potential quality indices of signal processing, which determine the upper bound of efficiency in solving signal processing problems, are not defined by the structure of the signal processing unit; they follow from the informational properties of the signal space where signal processing is realized.
Naturally, under given conditions of interaction between useful signals and interference (noise), the signal processing algorithms obtained via synthesis based on an optimality criterion cannot provide quality indices of signal processing better than the potential ones.
The main results obtained in Chapter 4 with respect to physical signal space create the basis for achieving the final results important for signal processing theory, which deals with establishing potential quality indices of signal processing in signal spaces with various algebraic properties. These final results are also important for information theory with respect to the relationships defining the capacity of discrete and continuous communication channels, taking into account the influence of interference (noise).
We will formulate the five main signal processing problems, excluding the problem of signal recognition, that are covered in the known works on statistical signal processing, statistical radio engineering, and statistical communication theory [159], [163], [155], and will extend their content to signal spaces with both group and lattice properties.
Undoubtedly, the problem of signal extraction (filtering) is more general and complex than the problem of parameter estimation [163], [155], inasmuch as an estimator λ̂ of a parameter λ of a signal s(t) may be obtained from the solution of a filtering problem (6.1.3) on the basis of the relationship (6.1.4a):
the estimators ŝ(t) and λ̂(t) are equivalent in the sense that an estimator ŝ(t) of a signal s(t) makes it easy to obtain an estimator λ̂(t) of a parameter λ(t) and vice versa (see the relationships (6.1.4)).
Generally, any estimator ŝ(t) of useful signal s(t) (or estimator λ̂(t) of its parameter λ(t)) is some function fŝ[x(t)] (fλ̂[x(t)]) of an observed process x(t), which is the result of interaction between a signal s(t) and interference (noise) n(t):
Is contained in useful signal s(t) in physical signal space Γ with arbitrary algebraic properties.
With respect to useful signal s(t) and the result of interaction x(t) (6.2.1), the relationship (6.2.3) takes the form:
where Isx is the quantity of mutual information contained in the result of interaction x(t) concerning the useful signal s(t); νsx = ν(st, xt) is the NMSI of the samples st, xt of stochastic signals s(t), x(t); Is, Ix are the quantities of absolute information contained in useful signal s(t) and observed process x(t), respectively.
On the basis of Theorem 3.2.14 stated for metric signal space (Γ, µ) with metric µ (3.2.1), one can formulate a similar theorem for an estimator ŝ(t) of useful signal s(t) whose interaction with interference (noise) n(t) in physical signal space is described by some binary operation of L-group Γ(+, ∨, ∧).
Theorem 6.2.1. Metric inequality of signal processing. In metric signal space (Γ, µ) with metric µ (3.2.1), while forming the estimator ŝ(t) = fŝ[x(t)], t ∈ Ts of useful signal s(t) by processing the result of interaction xt = st ⊕ nt (6.2.1) between the samples st and nt of stochastic signals s(t), n(t), t ∈ Ts, defined by a binary operation of L-group Γ(+, ∨, ∧), the metric inequality holds:
and the inequality (6.2.5) turns into an identity if and only if the mapping fŝ belongs to a group H = {hα} of continuous mappings fŝ ∈ H preserving the null (identity) element 0 of the group Γ(+): hα(0) = 0, hα ∈ H.
Thus, Theorem 6.2.1 implies the expression determining a lower bound of the metric µsŝ between useful signal s(t) and its estimator ŝ(t):
The sense of the expression (6.2.6) lies in the fact that no signal processing in physical signal space Γ with arbitrary algebraic properties permits achieving a value of the metric µsŝ between useful signal s(t) and its estimator ŝ(t) that is less than the one established by the relationship (6.2.6) (see Fig. 6.2.1).
To characterize the quality of the estimator ŝ(t) (6.2.2a) of useful signal s(t) obtained as a result of its filtering (extraction) in the presence of interference (noise), the following definition is introduced.
Definition 6.2.1. By the quality index of an estimator of a signal s(t), while solving the problem of its filtering (extraction) against the interference (noise) background within the framework of the model (6.2.1), we mean the NMSI νsŝ = ν(st, ŝt) between useful signal s(t) and its estimator ŝ(t).
Taking into account the known relation between metric and NMSI (3.2.7), as a corollary of Theorem 6.2.1 one can formulate independent theorems for the NMSIs characterizing the same pairs of signals: s(t), x(t) and s(t), ŝ(t).
Theorem 6.2.2. Signal processing inequality. In metric signal space (Γ, µ) with metric µ (3.2.1), while forming the estimator ŝ(t) = fŝ[x(t)], t ∈ Ts of useful signal s(t) by processing the result of interaction xt = st ⊕ nt (6.2.1) between the samples st, nt of stochastic signals s(t), n(t), t ∈ Ts, defined by a binary operation of L-group Γ(+, ∨, ∧), the inequality holds:
and the inequality (6.2.7) turns into an identity if and only if the mapping fŝ belongs to a group H = {hα} of continuous mappings fŝ ∈ H preserving the null (identity) element 0 of the group Γ(+): hα(0) = 0, hα ∈ H.
The sense of the relationship (6.2.8) lies in the fact that no signal processing in physical signal space Γ with arbitrary algebraic properties can provide a quality index of a useful signal estimator, determined by the NMSI νsŝ between a useful signal s(t) and its estimator ŝ(t), greater than the NMSI νsx between the useful signal s(t) and the result x(t) of its interaction with interference (noise) n(t).
Proof. Multiplying the left and right parts of the inequality (6.2.7) by the left and right parts of the obvious inequality min(Is, Iŝ) ≤ min(Is, Ix), respectively, we obtain:
The sense of the relationship (6.2.10) lies in the fact that no signal processing in physical signal space Γ with arbitrary algebraic properties can provide a quantity of mutual information Isŝ between a useful signal s(t) and its estimator ŝ(t) greater than the quantity of mutual information Isx between the useful signal s(t) and the result x(t) of its interaction with interference (noise) n(t).
In the case of additive interaction between a statistically independent Gaussian useful signal s(t) and interference (noise) n(t) in the form of white Gaussian noise (WGN) in linear signal space Γ(+):
and implies that the upper bound of the quality index νsŝ of an estimator ŝ(t) of a signal s(t), while solving the problem of its filtering (extraction) in signal space Γ(∨, ∧) with lattice properties, is equal to 1: sup νsŝ = 1.
The last relation means that the possibilities of signal filtering (extraction) against the interference (noise) background in signal space Γ(∨, ∧) with lattice properties are not bounded by the conditions of parametric and nonparametric prior uncertainty. There exists (at least theoretically) the possibility of signal processing without losses of the information contained in the processed signals x(t) and x̃(t).
In linear signal space Γ(+), as follows from (6.2.13), the potential possibilities of extraction of useful signal s(t) while forming the estimator ŝ(t), as opposed to the signal space Γ(∨, ∧) with lattice properties, are essentially bounded by the energy relations between the interacting useful and interference signals, as illustrated by the following example.
Example 6.2.1. Consider a useful Gaussian signal s(t) with power spectral density S(ω):

S(ω) = S0/(1 + (ωT)²) (6.2.17)

and interference (noise) n(t) in the form of white Gaussian noise with power spectral density (PSD) N0 that additively interact with each other in linear signal space Γ(+).
The minimal variance of the filtering error (fluctuation error of filtering) Dε is determined by the value [159, (2.122)]:

Dε = M{[s(t) − ŝ(t)]²} = S0/[T·(√(1 + q²) + 1)], (6.2.18)
The relationships (6.2.18) and (6.2.19) imply that the correlation coefficient ρsŝ between useful signal s(t) and its estimator ŝ(t) is determined by the quantity:

ρsŝ = 1 − δε = (√(1 + q²) − 1)/(√(1 + q²) + 1). (6.2.20)
The relationship (6.2.7) implies that, between the NMSI νsx of the samples st, xt of Gaussian signals s(t), x(t) and the NMSI νsŝ of the samples st, ŝt of Gaussian signals s(t), ŝ(t), the following inequality holds:

νsŝ = (2/π)·arcsin[ρsŝ] ≤ νsx = (2/π)·arcsin[ρsx]. (6.2.21)
Substituting into the inequality (6.2.21) the values of the NMSIs for Gaussian signals determined by the equality (3.2.12), together with the values of the correlation coefficients from the relations (6.2.20) and (6.2.14), we obtain the inequality:

νsŝ(q) = (2/π)·arcsin[(√(1 + q²) − 1)/(√(1 + q²) + 1)] ≤ νsx(q) = (2/π)·arcsin[q/√(1 + q²)]. (6.2.22)

The relationships (3.2.51a), (6.2.22), and (3.2.7) imply that the metric µsx between the samples st, xt of Gaussian signals s(t), x(t) and the metric µsŝ between the samples st, ŝt of Gaussian signals s(t), ŝ(t) are connected by the inequality:

µsŝ(q) = 1 − (2/π)·arcsin[(√(1 + q²) − 1)/(√(1 + q²) + 1)] ≥ µsx(q) = 1 − (2/π)·arcsin[q/√(1 + q²)]. (6.2.23)
The graphs of the dependences of the NMSIs νsŝ(q), νsx(q) and the metrics µsŝ(q), µsx(q) on the signal-to-noise ratio q² = S0/N0 are shown in Figs. 6.2.2 and 6.2.3, respectively.
The relationship (6.2.22) (see Fig. 6.2.2) implies that the NMSI dependence νsx(q) determines the upper bound of the NMSI νsŝ(q) between useful signal s(t) and its estimator ŝ(t), which cannot be exceeded by any method of Gaussian signal processing in linear signal space Γ(+) and does not depend on the kind of useful signal modulation. Similarly, the relationship (6.2.23) determines the lower bound µsx(q) of the metric µsŝ(q) between useful signal s(t) and its estimator ŝ(t), which cannot be undercut by methods of Gaussian signal processing in linear space.
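The curves of Example 6.2.1 are easy to reproduce; a Python sketch (our code) evaluates (6.2.22) and (6.2.23) over a range of signal-to-noise ratios and confirms both inequalities:

    import numpy as np

    def nmsi_estimate(q):
        # NMSI nu_s_shat(q) of the estimator, left side of (6.2.22)
        rho = (np.sqrt(1.0 + q**2) - 1.0) / (np.sqrt(1.0 + q**2) + 1.0)
        return (2.0 / np.pi) * np.arcsin(rho)

    def nmsi_observation(q):
        # NMSI nu_sx(q) of the observation, right side of (6.2.22)
        return (2.0 / np.pi) * np.arcsin(q / np.sqrt(1.0 + q**2))

    q = np.linspace(0.0, 10.0, 6)
    print(np.all(nmsi_estimate(q) <= nmsi_observation(q) + 1e-12))          # (6.2.22)
    print(np.all(1.0 - nmsi_estimate(q) >= 1.0 - nmsi_observation(q) - 1e-12))  # (6.2.23)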
Generally, the properties of the estimators of signals extracted against the interference (noise) background are not directly included in the subject of information theory. At the same time, consideration of the informational properties of signal estimators within the informational relationships between processed signals is of interest, first, to establish constraint relationships for the quality indices of signal extraction (filtering) in the presence of interference (noise), and second, to determine the capacity of communication channels with interference (noise) that operate in linear signal spaces and in signal spaces with lattice properties. The questions of evaluating noiseless channel capacity were considered in Section 5.2.
respectively:

X+,i = λ + Ni; (6.3.1a)
X∨,i = λ ∨ Ni; (6.3.1b)
X∧,i = λ ∧ Ni, (6.3.1c)

where {Ni} are independent estimation (measurement) errors represented by the sample N = (N1, . . . , Nn), Ni ∈ N, N ∈ L(X, BX; +, ∨, ∧), with a distribution from a distribution class with symmetric probability density function (PDF) pN(z) = pN(−z); {X+,i}, {X∨,i}, {X∧,i} are the results of estimation (measurement) represented by the samples X+ = (X+,1, . . . , X+,n), X∨ = (X∨,1, . . . , X∨,n), X∧ = (X∧,1, . . . , X∧,n); X+,i ∈ X+, X∨,i ∈ X∨, X∧,i ∈ X∧, respectively: X+, X∨, X∧ ∈ L(X, BX; +, ∨, ∧); +, ∨, ∧ are the operations of addition, join, and meet of sample space L(X, BX; +, ∨, ∧) with the properties of L-group L(X; +, ∨, ∧), respectively; i = 1, . . . , n is the index of the elements of the statistical collections {Ni}, {X+,i}, {X∨,i}, {X∧,i}; n is the size of the samples N = (N1, . . . , Nn), X+ = (X+,1, . . . , X+,n), X∨ = (X∨,1, . . . , X∨,n), X∧ = (X∧,1, . . . , X∧,n).
For the model (6.3.1a), the estimator λ̂n,+, which is the sample mean, is a uniformly minimum-variance unbiased estimator [250], [251]:

λ̂n,+ = (1/n)·Σ_{i=1}^{n} X+,i. (6.3.2)

As the estimator λ̂n,∧ of the parameter λ for the model (6.3.1b), we consider the meet of lattice L(X; +, ∨, ∧):

λ̂n,∧ = ∧_{i=1}^{n} X∨,i, (6.3.3)

where ∧_{i=1}^{n} X∨,i = inf_{X∨}{X∨,i} is the least element of the sample X∨ = (X∨,1, . . . , X∨,n).
As the estimator λ̂n,∨ of the parameter λ for the model (6.3.1c), we take the join of lattice L(X; +, ∨, ∧):

λ̂n,∨ = ∨_{i=1}^{n} X∧,i, (6.3.4)

where ∨_{i=1}^{n} X∧,i = sup_{X∧}{X∧,i} is the largest element of the sample X∧ = (X∧,1, . . . , X∧,n).
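A short Python sketch (Gaussian errors as an illustration; variable names are ours) implements the three estimators (6.3.2) through (6.3.4) for the observation models (6.3.1):

    import numpy as np

    rng = np.random.default_rng(0)
    lam = 2.0                               # unknown parameter (illustrative)
    N = rng.normal(0.0, 1.0, size=1000)     # symmetric errors, p_N(z) = p_N(-z)

    X_add = lam + N                         # model (6.3.1a)
    X_join = np.maximum(lam, N)             # model (6.3.1b): lambda v N_i
    X_meet = np.minimum(lam, N)             # model (6.3.1c): lambda ^ N_i

    lam_add = X_add.mean()                  # sample mean (6.3.2)
    lam_meet = X_join.min()                 # meet of the sample (6.3.3)
    lam_join = X_meet.max()                 # join of the sample (6.3.4)
    print(lam_add, lam_meet, lam_join)      # each close (or equal) to lam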
Of course, there is no doubt concerning the optimality of the estimator λ̂n,+, at least for a Gaussian distribution of the estimation (measurement) errors. At the same time, it should be noted that the estimators λ̂n,∧ (6.3.3) and λ̂n,∨ (6.3.4) are considered on the basis of optimality criteria in Chapter 7.
For normally distributed errors of estimation (measurement) {Ni}, the cumulative distribution function (CDF) Fλ̂n,+(z) of the estimator λ̂n,+ is determined by the formula:

Fλ̂n,+(z) = ∫_{−∞}^{z} pλ̂n,+(x) dx, (6.3.5)

where pλ̂n,+(x) = (2πDλ̂n,+)^{−1/2}·exp[−(x − λ)²/(2Dλ̂n,+)] is the PDF of the estimator λ̂n,+; Dλ̂n,+ = D/n is the variance of the estimator λ̂n,+; D is the variance of the estimation (measurement) errors {Ni}; λ is the estimated parameter.
Consider the expressions for the CDFs Fλ̂n,∧(z), Fλ̂n,∨(z) of the estimators λ̂n,∧ (6.3.3) and λ̂n,∨ (6.3.4), and also for the CDFs FX∨,i(z), FX∧,i(z) of the estimation (measurement) results X∨,i (6.3.1b) and X∧,i (6.3.1c), respectively, supposing that the CDF FN(z) of the estimation (measurement) errors {Ni} is arbitrary. The relationships [115, (3.2.87)] and [115, (3.2.82)] imply that the CDFs FX∨,i(z), FX∧,i(z) are, respectively, equal to:

FX∨,i(z) = Fλ(z)·FN(z); (6.3.6a)
FX∧,i(z) = Fλ(z) + FN(z) − Fλ(z)·FN(z), (6.3.6b)

where Fλ(z) = 1(z − λ) is the CDF of the unknown nonrandom parameter λ; 1(z) is the Heaviside step function; FN(z) is the CDF of the estimation (measurement) errors {Ni}.
The relationships [252, (2.1.2)] and [252, (2.1.1)] imply that the CDFs Fλ̂n,∧(z), Fλ̂n,∨(z) of the estimators λ̂n,∧ (6.3.3) and λ̂n,∨ (6.3.4) are, respectively, equal to:
FIGURE 6.3.1 CDFs: (a) FX∨,i(z) (6.3.8a) and FX∧,i(z) (6.3.8b) of estimation (measurement) results X∨,i (6.3.1b), X∧,i (6.3.1c); (b) Fλ̂n,∧(z) (6.3.8c) and Fλ̂n,∨(z) (6.3.8d) of the estimators λ̂n,∧ (6.3.3) and λ̂n,∨ (6.3.4)
To determine the quality of the estimators (6.3.2), (6.3.3), and (6.3.4) in sample space L(X, BX; +, ∨, ∧) with L-group properties, we introduce the functions µ(λ̂n, Xi) (3.2.1) and µ0(λ̂n, Xi) (3.2.1a), which characterize the distinctions between one of the estimators λ̂n (6.3.2), (6.3.3), (6.3.4) of the parameter λ and the estimation (measurement) result Xi = λ ⊕ Ni, where ⊕ is one of the operations of L-group L(X; +, ∨, ∧) in (6.3.1a), (6.3.1b), and (6.3.1c), respectively:
Theorem 6.3.1. The functions µ(λ̂n,+, X+,i), µ(λ̂n,∧, X∨,i) determined by the expression (6.3.10a) between the estimators λ̂n,+ (6.3.2), λ̂n,∧ (6.3.3) and the estimation (measurement) results X+,i (6.3.1a), X∨,i (6.3.1b), respectively, are metrics.
Proof. We use general designations to denote one of the estimators (6.3.2), (6.3.3)
of a parameter λ as λ̂n and denote the estimation (measurement) result as Xi ,
Xi = λ ⊕ Ni , where ⊕ is one of operations of L-group L(X ; +, ∨, ∧) (6.3.1a) and
(6.3.1b), respectively. Consider the probabilities P[λ̂n ∨ Xi > λ], P[λ̂n ∧ Xi > λ]
that, according to the formulas [115, (3.2.80)] and [115, (3.2.85)], are equal to:
P[λ̂n > λ] + P[Xi > λ] = P[λ̂n ∨ Xi > λ] + P[λ̂n ∧ Xi > λ]. (6.3.12)
The valuation defining the probabilities P[λ̂n > λ], P[Xi > λ] is isotone, inasmuch as the following implications hold [221, Section X.1 (V2)]:
λ̂n ≥ λ̂′n ⇒ P[λ̂n > λ] ≥ P[λ̂′n > λ]; (6.3.13a)
Xi ≥ X′i ⇒ P[Xi > λ] ≥ P[X′i > λ]. (6.3.13b)
Joint fulfillment of the relationships (6.3.12) and (6.3.13a,b), according to [221, Section X.1], implies that the quantity (6.3.10a) is a metric.
For the quantity µ0 (λ̂n , Xi ) determined by the relationship (6.3.10b), one can formulate a theorem analogous to Theorem 6.3.1; the following lemma is useful for proving it.
Lemma 6.3.1. The functions µ(λ̂n,∧ , X∨,i ), µ(λ̂n,∨ , X∧,i ); µ0 (λ̂n,∧ , X∨,i ),
µ0 (λ̂n,∨ , X∧,i ), determined by the expressions (6.3.10a,b) between the estimators
λ̂n,∧ (6.3.3), λ̂n,∨ (6.3.4) and measurement results X∨,i (6.3.1b), X∧,i (6.3.1c),
respectively, are equal to:
Proof. Determine the values of the functions µ(λ̂n,∧ , X∨,i ), µ0 (λ̂n,∧ , X∨,i ) between the estimator λ̂n,∧ (6.3.3) and the estimation (measurement) results X∨,i (6.3.1b). The join λ̂n,∧ ∨ X∨,i , which appears in the initial formulas (6.3.10a,b), according to the definition of the estimator λ̂n,∧ (6.3.3) and the lattice absorption property, is equal to the estimation (measurement) result X∨,i :
λ̂n,∧ ∨ X∨,i = [( ∧_{j≠i} X∨,j ) ∧ X∨,i ] ∨ X∨,i = X∨,i . (6.3.16)
The meet λ̂n,∧ ∧ X∨,i , which appears in the initial formulas (6.3.10a,b), according to the idempotency property of the lattice, is equal to the estimator λ̂n,∧ :
λ̂n,∧ ∧ X∨,i = [( ∧_{j≠i} X∨,j ) ∧ X∨,i ] ∧ X∨,i = λ̂n,∧ . (6.3.17)
Substituting the values of join and meet (6.3.16) and (6.3.17) into the initial for-
mulas (6.3.10a,b), we obtain the values of the functions µ(λ̂n,∧ , X∨,i ), µ0 (λ̂n,∧ , X∨,i )
between the estimator λ̂n,∧ and the estimation (measurement) result X∨,i :
where Fλ̂n,∧ (z) = inf{Fλ (z), 1 − (1 − FN (z))^n } is the CDF of the estimator λ̂n,∧ determined by the expression (6.3.8c); FX∨,i (z) = inf{Fλ (z), FN (z)} is the CDF of the random variable X∨,i (6.3.1b) determined by the expression (6.3.8a).
Similarly, we obtain the values of the functions µ(λ̂n,∨ , X∧,i ), µ0 (λ̂n,∨ , X∧,i )
determined by the expressions (6.3.10a) and (6.3.10b) between the estimator λ̂n,∨
(6.3.4) and the estimation (measurement) result X∧,i (6.3.1c). In this case, the join λ̂n,∨ ∨ X∧,i , which appears in the initial formulas (6.3.10a) and (6.3.10b), according to the definition of the estimator λ̂n,∨ (6.3.4) and the lattice idempotency property, is equal to the estimator λ̂n,∨ :
λ̂n,∨ ∨ X∧,i = [( ∨_{j≠i} X∧,j ) ∨ X∧,i ] ∨ X∧,i = λ̂n,∨ . (6.3.20)
The meet λ̂n,∨ ∧ X∧,i , which appears in the initial formulas (6.3.10a) and (6.3.10b), according to the lattice absorption property, is equal to the estimation (measurement) result X∧,i :
λ̂n,∨ ∧ X∧,i = [( ∨_{j≠i} X∧,j ) ∨ X∧,i ] ∧ X∧,i = X∧,i . (6.3.21)
Substituting the values of join and meet (6.3.20) and (6.3.21) into the initial formu-
las (6.3.10a,b), we obtain the values of the functions µ(λ̂n,∨ , X∧,i ), µ0 (λ̂n,∨ , X∧,i )
between the estimator λ̂n,∨ and the estimation (measurement) result X∧,i :
Corollary 6.3.1. The quantities µ(λ̂n,∨ , X∧,i ), µ0 (λ̂n,∧ , X∨,i ) determined by the
expressions (6.3.22a), (6.3.18b) are, respectively, equal to zero ∀λ ∈] − ∞, ∞[:
Proof. We first prove the identity (6.3.26a) for the group L(X ; +), and then the identity (6.3.26b) for the lattice L(X ; ∨, ∧).
According to the identity (6.3.12), the metric µ(λ̂n,+ , X+,i ) is equal to the quantity µ0 (λ̂n,+ , X+,i ); thus the function µ0 (λ̂n,+ , X+,i ), coinciding with µ(λ̂n,+ , X+,i ), is a metric. The first statement (the identity (6.3.26a)) of Theorem 6.3.2 is proved.
We now prove the second statement of Theorem 6.3.2. For the CDF FN (z) of the measurement errors {Ni } and the CDF Fλ (z) of the unknown nonrandom parameter λ that are symmetric with respect to their medians, the identity holds [253, (7.1)]:
FN (−z ) = 1 − FN (z ); (6.3.30a)
q{λ̂n,+ } = µ(λ̂n,+ , X+,i ) = 1 − (2/π) arcsin(1/√n). (6.3.36)
As shown by the graphs in Fig. 6.3.2, the dependences µ(λ̂n,∧ , X∨,i ) (6.3.18a), µ0 (λ̂n,∨ , X∧,i ) (6.3.22b) are plotted under an arbitrary symmetric distribution FN (z; σ) of the estimation (measurement) errors {Ni } with scale parameter σ.
FIGURE 6.3.3 Dependences on size of samples n: (a) q{λ̂n,+ } (6.3.36), q{λ̂n,∧ } (6.3.38a), and 1 − 1/n; (b) 1 − q{λ̂n,+ }, 1 − q{λ̂n,∧ }, 1/n
As seen in the graphs, the quality indices (6.3.38a,b) of the estimators in the
sample space with properties of the lattice L(X ; ∨, ∧) tend to 1 exponentially,
whereas the quality index (6.3.36) of the estimator in the sample space with properties of the group L(X ; +) tends to 1 more slowly than the inversely proportional dependence 1 − 1/n. From the informational point of view, the information contained in the estimation (measurement) results is, on a qualitative level, used better when the results are processed in the sample space with lattice properties than in the sample space with group properties. Note that by the information contained in the estimation (measurement) results (in a sample with independent elements) we mean both the information on the unknown nonrandom parameter λ and the information contained in the sample of independent estimation (measurement) errors N = (N1 , . . . , Nn ), whose use provides an estimator quality index determined by the sample size n, i.e., by the quantity of absolute information contained in this sample and measured in abits. Under an arbitrary symmetric distribution FN (z; σ) of the estimation (measurement) errors {Ni } and a rather small ratio of the estimated parameter λ to the scale parameter σ, |λ/σ| → 0, one can consider that the quality indices q{λ̂n,∧ }, q{λ̂n,∨ } (6.3.38a,b) of the estimators λ̂n,∧ (6.3.3), λ̂n,∨ (6.3.4) in the sample space with the properties of the lattice L(X ; ∨, ∧) are invariant with respect to the conditions of nonparametric prior uncertainty. This property is not inherent to the estimator λ̂n,+ (6.3.2), whose quality is determined by the CDF FN (z; σ) of the estimation (measurement) errors {Ni } [250], [251].
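To make the convergence rates discussed above concrete, the following sketch (Python, purely illustrative) tabulates the group-space quality index (6.3.36) against the inversely proportional dependence 1 − 1/n; since the exact expressions (6.3.38a,b) are not reproduced in this excerpt, an exponential dependence of the form 1 − 2⁻ⁿ is used only as a hypothetical stand-in for the lattice-space indices.

```python
from math import asin, sqrt, pi

def q_group(n):
    # Quality index (6.3.36) of the estimator in sample space with group
    # properties: q = 1 - (2/pi) * arcsin(1/sqrt(n)).
    return 1.0 - (2.0 / pi) * asin(1.0 / sqrt(n))

def q_lattice_standin(n):
    # Hypothetical exponential dependence 1 - 2**(-n), standing in for the
    # lattice-space indices (6.3.38a,b), whose exact form is not shown here.
    return 1.0 - 2.0 ** (-n)

for n in (1, 2, 5, 10, 50, 100):
    print(f"n={n:4d}  q_group={q_group(n):.6f}  "
          f"1-1/n={1 - 1/n:.6f}  q_lattice~{q_lattice_standin(n):.6f}")
```

Already for moderate n the stand-in exponential dependence is indistinguishable from 1 on this scale, whereas q{λ̂n,+ } still lags behind 1 − 1/n, in agreement with Fig. 6.3.3.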
where yi (t) is the estimator of an energetic parameter of the signal si (t) (further, simply the estimator ŝi (t) of the signal si (t)) in the i-th processing channel.
In the case of the presence of the signal sk (t) in the observed process x(t) =
sk (t) + n(t), the problem of signal classification is solved by maximization of suffi-
cient statistics (x(t), si (t)) (6.4.3):
arg max_{i∈I; si (t)∈S} [ ∫_{t∈Ts} x(t)si (t)dt ] |x(t)=sk (t)+n(t) = k̂,
where yi (t) is the estimator ŝi (t) of the signal si (t) in the i-th processing channel
in the presence of the signal sk (t) in the observed process x(t); M(∗) is the symbol
of mathematical expectation.
In the output of the i-th processing channel, the probability density function
(PDF) pŝi (y ) of the estimator yi (t) = ŝi (t) of the signal si (t), at the instant t =
t0 + T in the presence of the signal sk (t) in the observed process x(t), is determined
by the expression:
pŝi (y) = (2πD)^{−1/2} exp[−(y − rik E)²/(2D)], (6.4.6)
where D = EN0 is a noise variance in the output of the i-th processing channel; E
is the energy of the signal si (t); N0 is a power spectral density of interference (noise)
in the input of the processing unit; rik is a cross-correlation coefficient between the
signals si (t), sk (t).
Consider the estimator yk (t) = ŝk (t) of the signal sk (t) in the k-th processing
channel in signal presence in the observed process x(t) = sk (t) + n(t):
yk (t) = ŝk (t)|x(t)=sk (t)+n(t) , t = t0 + T, (6.4.7)
and also consider the estimator yk (t)|n(t)=0 = ŝk (t)|n(t)=0 of the signal sk (t) in the k-th processing channel in the absence of interference (noise) (n(t) = 0) in the observed process x(t) = sk (t):
yk (t)|n(t)=0 = ŝk (t)|x(t)=sk (t) , t = t0 + T. (6.4.8)
To characterize the quality of the estimator yk (t) = ŝk (t) of the signal sk (t)
while solving the problem of classification of the signals from a set S = {si (t)},
i = 1, . . . , m that additively interact with interference (noise) n(t) (6.4.2) in the
group Γ(+) of L-group Γ(+, ∨, ∧), we introduce the function µ(yt , yt,0 ) that is
analogous to metric (3.2.1) and characterizes the difference between the estimator
yk (t) (6.4.7) of the signal sk (t) in the k-th processing channel in signal
presence in
the observed process x(t) = sk (t) + n(t) and the estimator yk (t) n(t)=0 (6.4.8) of
the signal sk (t) in the absence of interference (noise):
µ(yt , yt,0 ) = 2[P(yt ∨ yt,0 > h) − P(yt ∧ yt,0 > h)]. (6.4.9)
In the equation, yt is a sample of the estimator yk (t) (6.4.7) of the signal sk (t) in
the output of the k-th processing channel at the instant t = t0 + T in the presence
of the signal sk (t) in the observed process x(t); yt,0 is a sample of the estimator
yk (t) n(t)=0 (6.4.8) of the signal sk (t) in the output of the k-th processing channel
at the instant t = t0 + T in the absence of interference (noise) (n(t) = 0) in the
observed process x(t), equal to the energy E of the signal: yt,0 = E; h is some
threshold level h < E determined by an average of two mathematical expectations
of the processes in the outputs of the i-th (6.4.4) and the k-th (6.4.5) processing
channels:
h = (rik E + E )/2, (6.4.10)
rik is a cross-correlation coefficient between the signals si (t) and sk (t).
Theorem 3.2.1 states that for random variables with special properties, the func-
tion defined by the relationship (3.2.1) is a metric. Meanwhile, the establishment
of this fact for the function (6.4.9) requires a separate proof stated in the following
theorem.
Theorem 6.4.1. For a pair of samples yt,0 and yt of the stochastic processes yk (t)|n(t)=0 , yk (t) in the output of the unit of optimal classification of the signals from a set S = {si (t)}, i = 1, . . . , m, which additively interact with interference (noise) n(t) (6.4.2) in the group Γ(+) of L-group Γ(+, ∨, ∧), the function µ(yt , yt,0 ) defined by the relationship (6.4.9) is a metric.
Proof. Consider the probabilities P(yt ∨ yt,0 > h), P(yt ∧ yt,0 > h) in formula
(6.4.9). Joint fulfillment of the equality yt,0 = E and the inequality h < E implies
that these probabilities are equal to:
P(yt ∨ yt,0 > h) = P(yt,0 > h) = 1; (6.4.11a)
where rik is a cross-correlation coefficient between the signals si (t) and sk (t).
Consider the values of the quality index of the estimator of the signals (6.4.17)
for the orthogonal (rik = 0) and opposite (rik = −1) signals from a set S = {si (t)},
i = 1, . . . , m under their classification in signal space with the properties of the
group Γ(+) of L-group Γ(+, ∨, ∧).
Based on the general formula (6.4.17), we obtain the dependences of quality
indices of the estimators νsŝ (q 2 ) on signal-to-noise ratio for orthogonal and opposite
signals, respectively:
νsŝ (q²) |⊥ = 2Φ(√(q²/4)) − 1, si (t) ⊥ sk (t), rik = 0; (6.4.18a)
νsŝ (q²) |− = 2Φ(√(q²)) − 1, s1 (t) = −s2 (t), r12 = −1, (6.4.18b)
FIGURE 6.4.1 Dependences νsŝ (q 2 ) |⊥ per Equation (6.4.18a); νsŝ (q 2 ) |− per Equation
(6.4.18b)
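The dependences (6.4.18a,b) are straightforward to tabulate; the following sketch (Python, illustrative only) evaluates them through the error function, using Φ(z) = (1 + erf(z/√2))/2.

```python
from math import erf, sqrt

def Phi(z):
    # Probability integral: Phi(z) = (1/2) * (1 + erf(z / sqrt(2))).
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def nu_orthogonal(q2):
    # (6.4.18a): orthogonal signals, r_ik = 0.
    return 2.0 * Phi(sqrt(q2 / 4.0)) - 1.0

def nu_opposite(q2):
    # (6.4.18b): opposite signals, r_12 = -1.
    return 2.0 * Phi(sqrt(q2)) - 1.0

for q2 in (1.0, 4.0, 9.0, 16.0):
    print(f"q^2={q2:5.1f}  nu_orth={nu_orthogonal(q2):.4f}  "
          f"nu_opp={nu_opposite(q2):.4f}")
```

For a fixed q², the opposite signals always yield the larger quality index, since their effective distance is twice that of orthogonal signals.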
µsŝ = µ(yt , yt,0 ) = 2[P(yt ∨ yt,0 > h) − P(yt ∧ yt,0 > h)]; (6.4.22a)
µ̃sŝ = µ(ỹt , ỹt,0 ) = 2[P(ỹt ∨ ỹt,0 > h) − P(ỹt ∧ ỹt,0 > h)], (6.4.22b)
where yt and ỹt are the samples of the estimators yk (t) (6.4.20a) and ỹk (t) (6.4.20b)
of the signal sk (t) in the output of the k-th processing channel at the instant t ∈ Ts
in the presence of the signal sk (t) in the observed processes x(t), x̃(t); yt,0 and ỹt,0
are the samples of the estimators yk (t) n(t)=0 (6.4.21a) and ỹk (t) n(t)=0 (6.4.21b)
of the signal sk (t) in the output of the k-th processing channel at the instant t ∈ Ts
in the absence of interference (noise) (n(t) = 0) in the observed processes x(t), x̃(t);
h is some threshold level determined by energetic and correlation relations between
the signals from a set S = {si (t)}, i = 1, . . . , m.
The absorption axiom of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧), contained in the third part of each of the four following multilink identities, implies that the estimators yk (t) (6.4.20a), ỹk (t) (6.4.20b), and the estimators yk (t)|n(t)=0 (6.4.21a), ỹk (t)|n(t)=0 (6.4.21b) are identically equal to the received signal sk (t):
yk (t) = sk (t) ∧ x(t) = sk (t) ∧ [sk (t) ∨ n(t)] = sk (t); (6.4.23a)
ỹk (t) = sk (t) ∨ x̃(t) = sk (t) ∨ [sk (t) ∧ n(t)] = sk (t); (6.4.23b)
yk (t)|n(t)=0 = sk (t) ∧ x(t) = sk (t) ∧ [sk (t) ∨ 0] = sk (t); (6.4.24a)
ỹk (t)|n(t)=0 = sk (t) ∨ x̃(t) = sk (t) ∨ [sk (t) ∧ 0] = sk (t). (6.4.24b)
The obtained relationships imply that the samples yt and ỹt of the estimators yk (t) (6.4.20a) and ỹk (t) (6.4.20b), and the samples yt,0 and ỹt,0 of the estimators yk (t)|n(t)=0 (6.4.21a) and ỹk (t)|n(t)=0 (6.4.21b), are identically equal to the received signal sk (t):
yt = yk (t) = sk (t); (6.4.25a)
ỹt = ỹk (t) = sk (t); (6.4.25b)
yt,0 = yk (t)|n(t)=0 = sk (t); (6.4.26a)
ỹt,0 = ỹk (t)|n(t)=0 = sk (t). (6.4.26b)
Substituting the obtained values of the samples yt (6.4.25a) and yt,0 (6.4.26a) into
the relationship (6.4.22a), and also the values of the samples ỹt (6.4.25b) and
ỹt,0 (6.4.26b) into the relationship (6.4.22b), we note that the values of metrics
µ(yt , yt,0 ), µ(ỹt , ỹt,0 ) between the signal sk (t) and its estimators yk (t) and ỹk (t),
while solving the problem of classification of the signals from a set S = {si (t)},
i = 1, . . . , m in signal space with the properties of the lattice Γ(∨, ∧) of L-group
Γ(+, ∨, ∧), are identically equal to zero:
The relationships (6.4.28) imply very important conclusions on solving the problem
of classification of the signals from a set S = {si (t)}, i = 1, . . . , m in signal space
with the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧). There exists a
possibility to provide absolute quality indices of the estimators of the signals νsŝ ,
ν̃sŝ . This fact creates the necessary conditions to extract deterministic signals in the
presence of interference (noise) without information losses. This is uncharacteristic
for solving the same problem within the signal space with the properties of the
group Γ(+) of L-group Γ(+, ∨, ∧) (particularly, in linear signal space). Another
advantage for solving the problem of classification of deterministic signals from a
set S = {si (t)}, i = 1, . . . , m in signal space with the properties of the lattice
Γ(∨, ∧) of L-group Γ(+, ∨, ∧) is the invariance property of both metrics (6.4.27)
and quality indices of the estimators of the signals (6.4.28) with respect to the
conditions of parametric and nonparametric prior uncertainty. The quality indices
of the estimators of the signals in the signal space with lattice properties (6.4.28)
compared to the quality indices of the estimators of the signals in linear signal
space (6.4.18), do not depend on signal-to-noise ratio and on interference (noise)
distribution in the input of processing unit. The problem of synthesis of optimal
algorithm of signal classification in signal space with lattice properties demands
additional research that is the subject of consideration of the following chapter.
Generally, the results obtained in this subsection, will be used later to determine
the capacity of discrete communication channels functioning in the signal space with
the properties of the group Γ(+) and the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧).
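The mechanism behind these zero metrics is the absorption identity sk (t) = sk (t) ∧ [sk (t) ∨ n(t)] used in (6.4.23) and (6.4.24). A minimal numerical sketch is given below, with join and meet taken pointwise as maximum and minimum over sampled waveforms; the waveforms themselves are arbitrary and chosen only for illustration.

```python
import random

# Pointwise join and meet over sampled waveforms (lists of samples).
def join(a, b):
    return [max(x, y) for x, y in zip(a, b)]

def meet(a, b):
    return [min(x, y) for x, y in zip(a, b)]

# Arbitrary useful signal and interference realization (illustration only).
s = [1.0, -0.5, 2.0, 0.3, -1.2]
n = [random.uniform(-3.0, 3.0) for _ in s]

x = join(s, n)            # observed process x(t) = s(t) v n(t)
estimate = meet(s, x)     # estimator y(t) = s(t) ^ x(t)

# Absorption: s ^ (s v n) = s, so the estimator recovers s(t) exactly,
# whatever realization n(t) takes.
assert estimate == s
print("estimator coincides with s(t):", estimate == s)
```

Whatever realization n(t) takes, the estimator coincides with sk (t) sample by sample, which is exactly the informational-lossless extraction noted above.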
t0 is known time of arrival of the signal s(t); t1 is known time of signal ending;
T = t1 − t0 is a duration of the signal s(t).
Within the general model (6.4.29), consider the additive interaction of deter-
ministic signal and interference (noise) in signal space with the properties of the
group Γ(+) of L-group Γ(+, ∨, ∧):
Suppose that interference n(t) is white Gaussian noise with a power spectral density
N0 .
While solving the problem of signal detection in linear signal space, i.e., when the interaction equation (6.4.30) holds, the sufficient statistics y(t) is the scalar product (x(t), s(t)) of the signals x(t) and s(t) in Hilbert space, equal to the correlation integral ∫_{t∈Ts} x(t)s(t)dt:
y(t) = (x(t), s(t)) = ∫_{t∈Ts} x(t)s(t)dt, (6.4.31)
where y (t) = ŝ(t) is an estimator of an energetic parameter of the signal s(t) (simply
the estimator ŝ(t) of the signal s(t)).
The signal detection problem can be solved by maximization of the sufficient statistics y(t) (6.4.31) on the domain of definition Ts of the signal and its comparison with a threshold l0 :
y(t) = ∫_{t∈Ts} x(t)s(t)dt → max over Ts ; y(t) ≷_{d0}^{d1} l0 ,
where d1 and d0 are the decisions concerning the presence and the absence of the
signal s(t) in the observed process x(t); d1 : θ̂ = 1, d0 : θ̂ = 0, respectively; θ̂ is an
estimate of parameter θ, θ ∈ {0, 1}.
In the presence (θ = 1) of the signal s(t) in the observed process x(t) (6.4.30), at the instant t = t0 + T , the correlation integral ∫_{t∈Ts} x(t)s(t)dt takes an average value equal to the energy E of the signal s(t):
M{y(t)} = M{ ∫_{t∈Ts} x(t)s(t)dt } |θ=1 = ∫_{t∈Ts} s²(t)dt = E, (6.4.32)
where y(t) is the estimator of the signal s(t) in the output of the detector; M(∗) is the symbol of mathematical expectation. In the absence (θ = 0) of the signal s(t) in the observed process x(t) (6.4.30), at the instant t = t0 + T , the correlation integral ∫_{t∈Ts} x(t)s(t)dt takes an average value equal to zero:
M{y(t)} = M{ ∫_{t∈Ts} x(t)s(t)dt } |θ=0 = 0. (6.4.33)
In the output of the detector, the PDF pŝ (y)|θ∈{0,1} of the estimator y(t) of the signal s(t), at the instant t = t0 + T , in the presence of the signal (θ = 1) or in its absence (θ = 0) in the observed process x(t), is determined by the expression:
pŝ (y)|θ∈{0,1} = (2πD)^{−1/2} exp[−(y − θE)²/(2D)], (6.4.34)
where D = EN0 is a noise variance in the output of detector; E is an energy of
the signal s(t); N0 is power spectral density of interference (noise) in the input of
processing unit.
Consider the estimator y(t) of the signal s(t) in the output of the detector in the presence of the signal in the observed process x(t) = s(t) + n(t) (6.4.30), and also the estimator y(t)|n(t)=0 of the signal s(t) in the output of the detector in the absence of interference (noise) (n(t) = 0) in the observed process x(t) = s(t).
To characterize the quality of the estimator y (t) of the signal s(t) while solving
the problem of its detection in signal space with the properties of the group Γ(+) of
L-group Γ(+, ∨, ∧) (6.4.30), we introduce the function µ(yt , yt,0 ), which is analogous
to metric (6.4.9):
µ(yt , yt,0 ) = 2[P(yt ∨ yt,0 > h) − P(yt ∧ yt,0 > h)], (6.4.35)
where yt is a sample of the estimator y (t) of the signal s(t) in the output of detector
at the instant t = t0 + T in the presence of the signal s(t) and interference (noise)
n(t) in the observed process x(t); yt,0 is a sample of the estimator y (t) n(t)=0 of
the signal s(t) in the output of detector at the instant t = t0 + T in the absence
of interference (noise) (n(t) = 0) in the observed process x(t), x(t) = s(t), which is
equal to a signal energy E: yt,0 = E; h is some threshold level determined by an
average of two mathematical expectations of the processes in the output of detector
(6.4.32) and (6.4.33):
h = E/2. (6.4.36)
Joint fulfillment of the equality yt,0 = E and the inequality h < E implies that the
probabilities appearing in the expression (6.4.35) are equal to:
P(yt ∨ yt,0 > h) = P(yt,0 > h) = 1; (6.4.37a)
P(yt ∧ yt,0 > h) = P(yt > h) = 1 − Fŝ (h) |θ=1 , (6.4.37b)
where Fŝ (y ) |θ=1 is the CDF of the estimator y (t) in the output of detector at the
instant t = t0 + T in the presence of the signal s(t) (θ = 1) in the observed process
x(t) (6.4.30), which, according to the PDF (6.4.34), is equal to:
Fŝ (y) |θ=1 = ∫_{−∞}^{y} (2πD)^{−1/2} exp[−(x − E)²/(2D)] dx. (6.4.38)
Substituting the formulas (6.4.37) into the expression (6.4.35), we obtain the resul-
tant value for metric µ(yt , yt,0 ) between the signal s(t) and its estimator y (t) while
solving the problem of signal detection in the group Γ(+) of L-group Γ(+, ∨, ∧):
µsŝ = µ(yt , yt,0 ) = 2Fŝ (h) |θ=1 = 2[1 − Φ((E − h)/√D)] = 2[1 − Φ(√(q²/4))], (6.4.39)
where Fŝ (y) |θ=1 is the CDF of the estimator y(t) in the output of the detector at the instant t = t0 + T in the presence of the signal s(t) (θ = 1) in the observed process x(t) (6.4.30), determined by the formula (6.4.38); Φ(z) = (2π)^{−1/2} ∫_{−∞}^{z} exp{−x²/2} dx is the probability integral; D = EN0 is the noise variance in the output of the detector; E is the energy of the signal s(t); h is the threshold level determined by the relationship (6.4.36); N0 is the power spectral density of interference (noise) n(t) in the input of the detector; q² = E/N0 is the signal-to-noise ratio.
By the analogy with the quality index of signal classification (6.4.16), we define
quality index of signal detection νsŝ .
Definition 6.4.2. By the quality index νsŝ of the estimator of the signal, while solving the problem of its detection, we mean the NMSI ν(yt , yt,0 ) between the samples yt,0 and yt of the stochastic processes y(t)|n(t)=0 , y(t) in the output of the detector, which is connected with the metric µ(yt , yt,0 ) (6.4.35) by the following relationship:
ν(yt , yt,0 ) = 1 − µ(yt , yt,0 ). (6.4.40)
Substituting the value of the metric (6.4.39) into the coupling equation (6.4.40), we obtain the dependence of the quality index of detection on the signal-to-noise ratio q² = E/N0 :
νsŝ (q²) = 2Φ(√(q²/4)) − 1. (6.4.41)
The obtained formula shows that the quality index of detection νsŝ (q²) (6.4.41) is identically equal to the quality index of classification of orthogonal signals νsŝ (q²) |⊥ (6.4.18a): νsŝ (q²) = νsŝ (q²) |⊥ , which is, of course, expected in view of the known link between signal detection and signal classification.
Within the general model (6.4.29), consider the interaction of deterministic
signal s(t) and interference (noise) n(t) in signal space with the properties of the
lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧), which is described by two binary operations
of join and meet, respectively:
Consider the estimators y(t) and ỹ(t) of the signal s(t) in the output of the detector in the presence of the signal (θ = 1) in the observed processes x(t) (6.4.42a), x̃(t) (6.4.42b):
y(t) = y(t)|x(t)=s(t)∨n(t) = s(t) ∧ x(t); (6.4.43a)
ỹ(t) = ỹ(t)|x̃(t)=s(t)∧n(t) = s(t) ∨ x̃(t). (6.4.43b)
Also consider the estimators y(t)|n(t)=0 , ỹ(t)|n(t)=0 of the signal s(t) in the output of the detector in the absence of interference (noise) (n(t) = 0) in the observed processes x(t) (6.4.42a) and x̃(t) (6.4.42b):
y(t)|n(t)=0 = s(t) ∧ x(t)|x(t)=s(t)∨0 ; (6.4.44a)
ỹ(t)|n(t)=0 = s(t) ∨ x̃(t)|x̃(t)=s(t)∧0 . (6.4.44b)
Determine the values of metric (6.4.35) to evaluate the quality index of the estimator
of the signal (6.4.40) while solving the problem of detection in signal space with
the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧):
µsŝ = µ(yt , yt,0 ) = 2[P(yt ∨ yt,0 > h) − P(yt ∧ yt,0 > h)]; (6.4.45a)
µ̃sŝ = µ(ỹt , ỹt,0 ) = 2[P(ỹt ∨ ỹt,0 > h) − P(ỹt ∧ ỹt,0 > h)], (6.4.45b)
where yt and ỹt are the samples of the estimators y (t) (6.4.43a) and ỹ (t) (6.4.43b)
of the signal s(t) in the output of detector at the instant t ∈ Ts in the presence
of the signal (θ = 1) in the observed processes x(t) (6.4.42a) and x̃(t) (6.4.42b);
yt,0 and ỹt,0 are the samples of the estimators y (t) n(t)=0 (6.4.44a) and ỹ (t) n(t)=0
(6.4.44b) of the signal s(t) in the output of detector at the instant t ∈ Ts in the
absence of interference (noise) (n(t) = 0) in the observed processes x(t) (6.4.42a)
and x̃(t) (6.4.42b); h is some threshold level.
The obtained relationships (6.4.43) and (6.4.44) and the absorption axiom of a lattice imply that the samples yt , ỹt of the estimators y(t) (6.4.43a) and ỹ(t) (6.4.43b), and also the samples yt,0 , ỹt,0 of the estimators y(t)|n(t)=0 (6.4.44a) and ỹ(t)|n(t)=0 (6.4.44b), are identically equal to the received useful signal s(t):
yt = y(t) = s(t); (6.4.46a)
ỹt = ỹ(t) = s(t); (6.4.46b)
yt,0 = y(t)|n(t)=0 = s(t); (6.4.47a)
ỹt,0 = ỹ(t)|n(t)=0 = s(t). (6.4.47b)
Substituting the obtained values of a pair of samples yt (6.4.46a) and yt,0 (6.4.47a)
into the relationship (6.4.45a) and also substituting the values of a pair of samples
ỹt (6.4.46b) and ỹt,0 (6.4.47b) into the relationship (6.4.45b), we obtain that the
values of metrics µ(yt , yt,0 ) and µ(ỹt , ỹt,0 ) between the signal s(t) and its estimators
y (t), ỹ (t), while solving the problem of signal detection in signal space with the
properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧), are equal to zero:
On the basis of the obtained general relationships (6.5.5) and (6.5.6), we consider
the peculiarities of evaluating the capacities of continuous channels operating in the
presence of interference (noise) for the cases of concrete kinds of interaction (6.5.1)
between useful signal s(t) and interference (noise) n(t) in physical signal space Γ
with the properties of L-group Γ(+, ∨, ∧), where ⊕ is some binary operation of
L-group: +, ∨, ∧.
For the case of additive interaction (6.5.1) between useful Gaussian signal s(t)
and interference (noise) n(t) in the form of white Gaussian noise (WGN), we deter-
mine the capacity Cn,+ of continuous channel functioning in physical signal space
Γ with the properties of additive commutative group Γ(+) of L-group Γ(+, ∨, ∧):
Substituting the relationships (6.2.13) and (6.2.14) into the formula (6.5.5), and the
relationships (6.2.12) and (6.2.14) into the formula (6.5.6), we obtain the expression
for the capacity Cn,+ of Gaussian continuous channel with WGN:
capacity C (abit/s) and does not depend on energetic relationships between use-
ful signal and interference (noise) and on their probabilistic-statistical properties.
The last circumstance defines the invariance property of continuous channels with
lattice properties with respect to parametric and nonparametric prior uncertainty
conditions.
The capacity Cn,∨/∧ (bit/s) of continuous channels, functioning in the presence
of interference (noise), where we find the simultaneous processing of the obser-
vations in the form of join (6.5.11a) and meet (6.5.11b) of the lattice Γ(∨, ∧),
measured in bit/s, according to the formulas (6.5.7) and (6.5.13), is determined by
the relationship:
Cn = max_{ŝ∈Γ} (Isŝ /T ) (abit/s). (6.5.17)
According to the expression (6.2.4) determining the quantity of mutual information Isŝ , the relationship (6.5.17) takes the form:
Cn = νsx Is /T = νsx C (abit/s), (6.5.18)
where νsx = ν (st , xt ) is the NMSI of the samples st and xt of stochastic signals
s(t) and x(t); Is is a quantity of absolute information contained in the useful signal
s(t); C = Is /T is a discrete noiseless channel capacity.
We now determine the capacity Cn,+ of discrete channel functioning in the
presence of interference (noise) in the additive interaction (6.5.8) between the useful
Gaussian signal s(t) and interference (noise) n(t) in the form of white Gaussian noise
(WGN) in physical signal space Γ with the properties of the additive commutative
group Γ(+) of L-group Γ(+, ∨, ∧).
Substituting the relationship (6.4.17) into the formula (6.5.18), we obtain the resultant expression for the capacity Cn,+ of a discrete channel with additive WGN:
Cn,+ = [2Φ(√(q²(1 − rik )/2)) − 1] · C (abit/s), (6.5.19)
Unfortunately, this correspondence does not hold when it concerns the capacity of
a discrete channel transmitting discrete messages with cardinality Card{ui } = m
of a set of values {ui } of discrete random sequence u(t) = {uj (t)} that is greater
than 2: Card{ui } = m > 2, i.e., while transmitting m-ary signals.
In this case, the capacity of discrete channels with additive noise, measured in
bit/s, is connected with the capacity measured in abit/s by the relationship (5.2.10):
Cn,+ (bit/s) = Cn,+ (abit/s) · log2 m for m < n; Cn,+ (bit/s) = Cn,+ (abit/s) · log2 n for m ≥ n, (6.5.21)
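As a numerical illustration of (6.5.19) and of the conversion rule (6.5.21), the sketch below computes the capacity in abit/s for given values of q², rik , and a noiseless capacity C, and then converts the result to bit/s; all parameter values are arbitrary.

```python
from math import erf, sqrt, log2

def Phi(z):
    # Probability integral via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def capacity_abit(q2, r_ik, C):
    # (6.5.19): C_{n,+} = [2*Phi(sqrt(q^2*(1 - r_ik)/2)) - 1] * C  (abit/s).
    return (2.0 * Phi(sqrt(q2 * (1.0 - r_ik) / 2.0)) - 1.0) * C

def capacity_bit(C_abit, m, n):
    # (6.5.21): conversion from abit/s to bit/s for m-ary signals.
    return C_abit * log2(m) if m < n else C_abit * log2(n)

C = 1000.0                       # hypothetical noiseless capacity, abit/s
C_abit = capacity_abit(q2=9.0, r_ik=0.0, C=C)
print("C_{n,+} =", round(C_abit, 1), "abit/s")
print("C_{n,+} =", round(capacity_bit(C_abit, m=4, n=16), 1), "bit/s")
```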
where ⊕ is some binary operation of L-group Γ(+, ∨, ∧); θi,k is a random parameter
that takes the values from the set {0, 1}: θi,k ∈ {0, 1}; Ts = [t0 , t0 + T ] is the domain
of definition of the signal si,k (t); t0 is the known time of arrival of the signal si,k (t);
T is a duration of the signal si,k (t).
Let the signals from the set S = {si (t), sk (t)} be characterized by the energies Ei,k = ∫_{t∈Ts} s²i,k (t)dt and the cross-correlation coefficient ri,k :
ri,k = ∫_{t∈Ts} si (t)sk (t)dt / √(Ei Ek ).
Consider the estimator yk (t) = ŝk (t) of the signal sk (t) in the k-th processing
channel in the presence of the signal sk (t) and in the absence of the signal si (t) in
the observed process x(t) = sk (t) + n(t) (θk = 1, θi = 0):
yk (t) = ŝk (t)|x(t)=sk (t)+n(t) , t = t0 + T, (6.6.3)
and also consider the estimator yk (t)|n(t)=0 = ŝk (t)|n(t)=0 of the signal sk (t) in the k-th processing channel in the absence of both interference (noise) (n(t) = 0) and the signal si (t) in the observed process x(t) = sk (t):
yk (t)|n(t)=0 = ŝk (t)|x(t)=sk (t) , t = t0 + T. (6.6.4)
To evaluate the quality index of the estimator yk (t) = ŝk (t) of the signal sk (t),
while solving the problem of resolution-detection of the signals in the group Γ(+)
of L-group Γ(+, ∨, ∧) in the presence of the signal sk (t) and in the absence of the
signal si (t) in the observed process x(t) = sk (t) + n(t) (θk = 1, θi = 0), we use
the metric µsk ŝk = µ(yt , yt,0 ) (6.4.9) introduced in Section 6.4, which characterizes
the distinction between the estimator yk (t) (6.6.3) of the signal sk (t) in the k-th
processing channel and the estimator yk (t) n(t)=0 (6.6.4) of the signal sk (t) in the
absence of interference (noise):
µsk ŝk = µ(yt , yt,0 ) = 2[P(yt ∨ yt,0 > hk ) − P(yt ∧ yt,0 > hk )], (6.6.5)
where yt is the sample of the estimator yk (t) (6.6.3) of the signal sk (t) in the output
of the k-th processing channel at the instant t = t0 + T in the presence of the signal
sk (t) and in the absence of the
signal si (t) in the observed process x(t); yt,0 is the
sample of the estimator yk (t) n(t)=0 (6.6.4) of the signal sk (t) in the output of the
k-th processing channel at the instant t = t0 + T in the absence of interference
(noise) (n(t) = 0) and the signal si (t) in the observed process x(t), which is equal
to a signal energy Ek : yt,0 = Ek ; hk is some threshold level hk < Ek determined by
an average of two mathematical expectations of the processes in the k-th processing
channel (6.4.32) and (6.4.33):
hk = Ek /2. (6.6.6)
The equality yt,0 = Ek and the inequality hk < Ek imply that the probabilities
appearing in the expression (6.6.5) are equal to:
Substituting the value of metric (6.6.9) into the coupling equation (6.6.10) be-
tween the NMSI and metric, we obtain the dependence of quality index of resolution-
detection νsk ŝk on the signal-to-noise ratio:
νsk ŝk (qk²) = 2Φ(√(qk²/4)) − 1, (6.6.11)
where qk2 = Ek /N0 is the signal-to-noise ratio in the k-th processing channel.
Consider now the estimator yk (t) = ŝk (t) of the signal sk (t) in the k-th pro-
cessing channel in the presence of signals sk (t) and si (t) in the observed process
x(t) = sk (t) + si (t) + n(t) (θk = 1, θi = 1):
yk (t) = ŝk (t)|x(t)=sk (t)+si (t)+n(t) , t = t0 + T, (6.6.12)
and also consider the estimator yk (t)|n(t)=0 = ŝk (t)|n(t)=0 of the signal sk (t) in the k-th processing channel in the absence of interference (noise) (n(t) = 0) in the observed process x(t) = sk (t) + si (t):
yk (t)|n(t)=0 = ŝk (t)|x(t)=sk (t)+si (t) , t = t0 + T. (6.6.13)
To determine the quality of the estimator yk (t) = ŝk (t) of the signal sk (t),
while solving the problem of resolution-detection of the signals in the group
Γ(+) of L-group Γ(+, ∨, ∧) in the presence of signals sk (t) and si (t) in the ob-
served process x(t) = sk (t) + si (t) + n(t) (θk = 1, θi = 1), we use the metric
µsk ŝk |θk =1,θi =1 = µ(yt , yt,0 ) (6.6.5), which characterizes the distinction between
the estimator yk (t) (6.6.12) of the signal sk (t) in the k-th processing channel and
the estimator yk (t) n(t)=0 (6.6.13) of the signal sk (t) in the absence of interference
(noise):
µsk ŝk |θk =1,θi =1 = µ(yt , yt,0 ) = 2[P(yt ∨ yt,0 > hk ) − P(yt ∧ yt,0 > hk )], (6.6.14)
where yt is the sample of the estimator yk (t) (6.6.12) of the signal sk (t) in the output
of the k-th processing channel at the instant t = t0 + T in the presence of both the
signals sk (t) and si (t) in the observed process x(t) = sk (t) + si (t) + n(t) (θk = 1,
θi = 1); yt,0 is the sample of the estimator yk (t)|n(t)=0 (6.6.13) of the signal sk (t) in the output of the k-th processing channel at the instant t = t0 + T in the absence of interference (noise) (n(t) = 0) in the observed process x(t) = sk (t) + si (t), which is equal to the sum of the energy Ek of the signal sk (t) and the mutual energy Eik = rik √(Ei Ek ) of both signals: yt,0 = Ek + Eik ; hk is some threshold level, hk < Ek ,
determined by an average of two mathematical expectations of the processes in the
k-th processing channel (6.4.32) and (6.4.33):
hk = Ek /2. (6.6.15)
The equality yt,0 = Ek + Eik and the inequality hk < Ek imply that the probabilities
in the expression (6.6.14) are equal to:
P(yt ∧ yt,0 > hk ) = P(yt > hk ) = 1 − Fŝk (hk ) |θk =1,θi =1 , (6.6.16b)
where Fŝk (y ) |θk =1,θi =1 is the CDF of the estimator yk (t) in the k-th processing
channel at the instant t = t0 + T in the presence of both the signals sk (t) and si (t)
in the observed process x(t) = sk (t) + si (t) + n(t) (θk = 1, θi = 1) (6.6.2), which,
according to PDF (6.4.34), is equal to:
Fŝk (y) |θk =1,θi =1 = ∫_{−∞}^{y} (2πDk )^{−1/2} exp{−[x − (Ek + Eik )]²/(2Dk )} dx, (6.6.17)
sk (t) and si (t); N0 is a power spectral density of interference (noise) in the input
of processing unit.
Substituting the formulas (6.6.16) into the expression (6.6.14), we obtain the
resultant value for metric µsk ŝk |θk =1,θi =1 = µ(yt , yt,0 ) between the signal sk (t) and
its estimator yk (t) while solving the problem of resolution-detection in the group
Γ(+) of L-group Γ(+, ∨, ∧):
µsk ŝk |θk =1,θi =1 = µ(yt , yt,0 ) = 2Fŝk (hk ) |θk =1,θi =1 = 2[1 − Φ(((Ek + Eik ) − hk )/√Dk )] = 2[1 − Φ(0.5qk + rik qi )], (6.6.18)
where Fŝk (y) |θk =1,θi =1 is the CDF of the estimator yk (t) in the k-th processing channel at the instant t = t0 + T in the presence of both the signals sk (t) and si (t) in the observed process x(t) = sk (t) + si (t) + n(t) (θk = 1, θi = 1) (6.6.2), determined by the formula (6.6.17); Φ(z) = (2π)^{−1/2} ∫_{−∞}^{z} exp{−x²/2} dx is the probability integral; Dk = Ek N0 is the noise variance in the k-th processing channel; Ek is the energy of the signal sk (t); Eik = rik √(Ei Ek ) is the mutual energy of the signals sk (t) and si (t); rik is the cross-correlation coefficient of the signals sk (t) and si (t); qk² and qi² are the signal-to-noise ratios in the k-th and the i-th processing channels, respectively; hk is the threshold level determined by the relationship (6.6.15); N0 is the power spectral density of interference (noise) in the input of the processing unit.
By analogy with the quality index of resolution-detection of the signals (6.6.10),
in the presence of the signal sk (t) and the absence of the signal si (t) in the observed
process x(t) = sk (t)+ n(t) (θk = 1, θi = 0), we define the quality index of resolution-
detection of the signals νsk ŝk in the presence of both signals sk (t) and si (t) in the
observed process x(t) = sk (t) + si (t) + n(t) (θk = 1, θi = 1).
Definition 6.6.2. By the quality index νsk ŝk |θk =1,θi =1 of the estimator of the signals, while solving the problem of resolution-detection in the presence of the signals sk (t) and si (t) in the observed process x(t) = sk (t) + si (t) + n(t) (θk = 1, θi = 1) (6.6.1), we mean the NMSI ν(yt , yt,0 ) between the samples yt,0 and yt of the stochastic processes yk (t)|n(t)=0 , yk (t) in the k-th processing channel of the resolution-detection unit, connected with the metric µsk ŝk |θk =1,θi =1 = µ(yt , yt,0 ) (6.6.14) by the following expression:
νsk ŝk |θk =1,θi =1 = ν(yt , yt,0 ) = 1 − µsk ŝk |θk =1,θi =1 . (6.6.19)
where rik is the cross-correlation coefficient of the signals sk (t) and si (t); qk2 , qi2 are
signal-to-noise ratios in the k-th and the i-th processing channels, respectively.
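Combining (6.6.18) with the coupling expression (6.6.19) gives νsk ŝk |θk =1,θi =1 = 2Φ(0.5qk + rik qi ) − 1. The following sketch (Python, illustrative values only) shows how the interfering signal shifts this quality index through the term rik qi .

```python
from math import erf, sqrt

def Phi(z):
    # Probability integral via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def nu_resolution_detection(qk, qi, r_ik):
    # From (6.6.18) and (6.6.19):
    # nu = 1 - 2*[1 - Phi(0.5*qk + r_ik*qi)] = 2*Phi(0.5*qk + r_ik*qi) - 1.
    return 2.0 * Phi(0.5 * qk + r_ik * qi) - 1.0

# Positive correlation raises the k-th channel statistic, negative
# correlation degrades it (arbitrary SNR values for illustration).
for r in (-0.5, 0.0, 0.5):
    print(f"r_ik={r:+.1f}  "
          f"nu={nu_resolution_detection(qk=3.0, qi=3.0, r_ik=r):.4f}")
```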
where yt is the sample of the estimator yk (t) (6.6.22) of the signal sk (t) in the output of the k-th processing channel at the instant t ∈ Ts in the presence of the signal sk (t) and in the absence of the signal si (t) in the observed process x(t) = sk (t) ∨ 0 ∨ n(t) (θk = 1, θi = 0) (6.6.21); yt,0 is the sample of the estimator yk (t)|n(t)=0 (6.6.23) of the signal sk (t) in the output of the k-th processing channel at the instant t ∈ Ts in the absence of interference (noise) (n(t) = 0) and the signal si (t) in the observed process x(t) = sk (t) ∨ 0; h is some threshold level determined by energetic and correlation relations between the signals sk (t) and si (t).
The absorption axiom of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧), contained in the third part of each of the two multilink identities, implies that the estimators yk (t) (6.6.22) and yk (t)|n(t)=0 (6.6.23) are identically equal to the received useful signal sk (t):
From (6.6.27) and the coupling equation (6.6.10), we obtain quality indices of the
estimator of the signals νsk ŝk , while solving the problem of their resolution-detection
in signal space with the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧) in
the presence of the signal sk (t) and in the absence of the signal si (t) in the observed
process x(t) = sk (t) ∨ 0 ∨ n(t) (θk = 1, θi = 0), that take absolute values equal to 1:
Consider now the estimator yk (t) of the signal sk (t) in the k-th processing channel
of the resolution-detection unit in the presence of signals sk (t) and si (t) in the
observed process x(t) = sk (t) ∨ si (t) ∨ n(t) (θk = 1, θi = 1) (6.6.21):
yk (t) = yk (t) x(t)=sk (t)∨si (t)∨n(t) = sk (t) ∧ x(t), (6.6.29)
and also consider the estimator yk (t)|n(t)=0 of the signal sk (t) in the k-th processing channel in the absence of interference (noise) (n(t) = 0) in the observed process x(t) = sk (t) ∨ si (t) ∨ 0:
yk (t)|n(t)=0 = yk (t)|x(t)=sk (t)∨si (t)∨0 = sk (t) ∧ x(t). (6.6.30)
To determine the quality of the estimator yk (t) = ŝk (t) of the signal sk (t) while
solving the problem of resolution-detection of the signals in signal space with the
properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧) in the presence of both the
signals sk (t) and si (t) in the observed process x(t) = sk (t) ∨si (t) ∨n(t) (θk = 1, θi =
1), we will use the metric µsk ŝk |θk =1,θi =1 = µ(yt , yt,0 ) (6.6.24), which characterizes
the distinctions between the estimator yk (t) (6.6.29) of the signal sk (t) in the k-th
processing channel and the estimator yk (t)|n(t)=0 (6.6.30) of the signal sk (t) in the
absence of interference (noise):
µsk ŝk |θk =1,θi =1 = µ(yt , yt,0 ) = 2[P(yt ∨ yt,0 > h) − P(yt ∧ yt,0 > h)], (6.6.31)
where yt is the sample of the estimator yk (t) (6.6.29) of the signal sk (t) in the output
of the k-th processing channel at the instant t ∈ Ts in the presence of the signals
sk (t) and si (t) in the observed process x(t) = sk (t) ∨ si (t) ∨ n(t) (θk = 1, θi = 1) (6.6.21); yt,0 is the sample of the estimator yk (t)|n(t)=0 (6.6.30) of the signal sk (t)
in the output of the k-th processing channel at the instant t ∈ Ts in the absence of
interference (noise) (n(t) = 0) in the observed process x(t) = sk (t) ∨ si (t) ∨ 0; h is
some threshold level determined by energetic and correlation relations between the
signals sk (t) and si (t).
The absorption axiom of the lattice Γ(∨, ∧), contained in the third part of each of the two multilink identities, implies that the estimators yk (t) (6.6.29) and yk (t)|n(t)=0 (6.6.30) are identically equal to the received useful signal sk (t):
yk (t) = sk (t) ∧ x(t) = sk (t) ∧ [sk (t) ∨ si (t) ∨ n(t)] = sk (t); (6.6.32a)
yk (t)|n(t)=0 = sk (t) ∧ x(t) = sk (t) ∧ [sk (t) ∨ si (t) ∨ 0] = sk (t). (6.6.32b)
The obtained relationships imply that the sample yt of the estimator yk (t) (6.6.29) and the sample yt,0 of the estimator yk (t)|n(t)=0 (6.6.30) are identically equal to the received useful signal sk (t):
The relationship (6.6.34) and the coupling equation (6.6.28) imply that quality
indices of the estimator of the signals νsk ŝk |θk =1,θi =1 while solving the problem of
their resolution-detection in signal space with the properties of the lattice Γ(∨, ∧)
of L-group Γ(+, ∨, ∧) in the presence of both the signals sk (t) and si (t) in the
observed process x(t) = sk (t) ∨ si (t) ∨ n(t) (θk = 1, θi = 1), take absolute values
that are equal to 1:
νsk ŝk |θk =1,θi =1 = 1 − µsk ŝk |θk =1,θi =1 = 1. (6.6.35)
As follows from the comparative analysis of quality indices of signal resolution-
detection (6.6.28), (6.6.35) in signal space with the properties of the lattice Γ(∨, ∧)
of L-group Γ(+, ∨, ∧), regardless of cross-correlation coefficient rik and energetic
relations between the signals sk (t) and si (t), the following identity holds:
νsk ŝk (qk2 ) = νsk ŝk (qk2 , qi2 ) |θk =1,θi =1 = 1.
This fundamentally distinguishes them from the quality indices of signal resolution-detection (6.6.11) and (6.6.20) in signal space with the properties of the group Γ(+) of L-group Γ(+, ∨, ∧), which are essentially determined by both the cross-correlation coefficient rik and the energetic relations between the signals sk (t) and si (t).
The relationships (6.6.28) and (6.6.35) imply an important conclusion. While analyzing the problem of signal resolution-detection in signal space with the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧), we obtain the possibility of providing absolute quality indices of signal resolution-detection νsk ŝk , νsk ŝk |θk =1,θi =1 . This creates the necessary conditions for resolution-detection of deterministic signals processed against an interference (noise) background without losses of information. Note that this situation, stipulated by the absence of information losses, is not typical for solving a similar problem in signal space with group properties (particularly in linear signal space). One more advantage of signal resolution-detection in signal spaces with lattice properties is the invariance of both the metrics (6.6.27) and (6.6.34) and the quality indices of signal resolution-detection (6.6.28) and (6.6.35) with respect to parametric and nonparametric prior uncertainty conditions. The quality indices of signal resolution-detection in signal space with lattice properties (6.6.28) and (6.6.35), in contrast to the quality indices of signal resolution-detection in linear signal space (6.6.11) and (6.6.20), depend neither on the signal-to-noise (signal-to-interference) ratio nor on the interference (noise) distribution in the input of the processing unit. The problem of synthesis of an optimal signal resolution algorithm in signal space with lattice properties demands additional investigation, which will be discussed in the next chapter.
the value of the parameter λ, so that both tasks are solved in the presence of the interfering signal s′(t, λ′) and interference (noise) n(t). Furthermore, the interfering signal s′(t, λ′) is a copy of the useful signal with an unknown parameter λ′ that differs from λ: λ′ ≠ λ.
Consider the general model of interaction of the signals s(t, λ) and s′(t, λ′) and interference (noise) n(t) in signal space with the properties of the lattice Γ(∨, ∧):
where ⊕ is some binary operation of the lattice Γ(∨, ∧); θ, θ′ are random parameters that take values from the set {0, 1}: θ, θ′ ∈ {0, 1}; Tobs is an observation interval, Tobs = Ts ∪ Ts′ ; Ts = [t0 , t0 + T ], Ts′ = [t′0 , t′0 + T ] are the domains of definition of the signals s(t, λ) and s′(t, λ′), respectively; t0 and t′0 are the unknown arrival times of the signals s(t, λ) and s′(t, λ′); T is the duration of the signals s(t, λ) and s′(t, λ′).
Let the signals s(t, λ) and s′(t, λ′) be periodic functions with a period T0 :
The questions dealing with obtaining the estimators ŝ∧ (t) (6.7.8) and ŝ∨ (t) (6.7.9)
on the basis of optimality criteria will be considered in Chapter 7.
Let some functions Tα and Tβ of the observed process x(t) be used as the
estimators ŝα (t) and ŝβ (t) of the signals sα (t) and sβ (t) in signal space Γ(⊕):
where dαβ is a metric between PDFs pŝα (z ) and pŝβ (z ) of the estimators ŝα (t) and
ŝβ (t) of the signals sα (t) and sβ (t) in signal space Γ(⊕), respectively.
To characterize the quality of the estimators ŝα (t) and ŝβ (t) while solving the
problem of joint resolution-estimation, we introduce the index based upon the met-
ric (6.7.11).
Definition 6.7.2. By quality index q↔ (ŝα , ŝβ ) of joint resolution-estimation of the
signals sα (t) and sβ (t), we mean the quantity equal to metric (6.7.11) between
PDFs pŝα (z ) and pŝβ (z ) of the estimators ŝα (t) and ŝβ (t) of these signals:
q↔ (ŝα , ŝβ ) = d(ŝα , ŝβ ) = (1/2) ∫_{−∞}^{∞} |pŝα (z) − pŝβ (z)|dz. (6.7.12)
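The quality index (6.7.12) is one half of the L1 distance between the two estimator PDFs, so it ranges from 0 (coinciding PDFs) to 1 (non-overlapping PDFs). A minimal numerical sketch is given below; the two Gaussian densities are assumed purely for illustration and are not the PDFs (6.7.13), (6.7.14) of the text.

```python
from math import exp, sqrt, pi

def gauss_pdf(z, mean, var):
    return exp(-(z - mean) ** 2 / (2.0 * var)) / sqrt(2.0 * pi * var)

def q_resolution_estimation(p_alpha, p_beta, lo=-20.0, hi=20.0, steps=100000):
    # (6.7.12): q = (1/2) * integral of |p_alpha(z) - p_beta(z)| dz,
    # approximated here by a Riemann sum.
    dz = (hi - lo) / steps
    return 0.5 * sum(abs(p_alpha(lo + k * dz) - p_beta(lo + k * dz))
                     for k in range(steps)) * dz

# Hypothetical estimator PDFs (Gaussians assumed only for illustration).
q = q_resolution_estimation(lambda z: gauss_pdf(z, 0.0, 1.0),
                            lambda z: gauss_pdf(z, 2.0, 1.0))
print("q <->", round(q, 4))   # 0 for identical PDFs, 1 for disjoint PDFs
```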
Determine now the quality indices q↔ (ŝ∧ |H11 , ŝ∧ |H01 ) , q↔ (ŝ∨ |H11 , ŝ∨ |H01 ) of
joint resolution-estimation (6.7.12) for the estimators ŝ∧ (t) (6.7.8) and ŝ∨ (t) (6.7.9)
in the case of separate fulfillment of the hypotheses H11 = Hθ=1,θ0 =1 and H01 =
Hθ=0,θ0 =1 (6.7.6) and (6.7.7), respectively.
By differentiating the CDF (6.3.8c), under the condition that between the samples st , s′t of instantaneous values of the useful s(t, λ) and interfering s′(t, λ′) signals the two-sided inequality 0 < s′t < st holds, we obtain the PDF of the estimator ŝ∧ (t):
pŝ∨ (z) |H01 = δ(z − s′t )Fn^N (s′t − 0) + [1 − 1(z − s′t )] N Fn^{N−1} (z)pn (z), (6.7.14b)
On the qualitative level, the forms of the PDFs (6.7.13a,b) and (6.7.14a,b) of the estimators ŝ∧ (t) |H11 ,H01 , ŝ∨ (t) |H11 ,H01 , in the case of separate fulfillment of the hypotheses H11 , H01 for N = 3, are shown in Fig. 6.7.1(a) and Fig. 6.7.1(b), respectively.
FIGURE 6.7.1 PDFs of estimators: (a) ŝ∧ (t) |H11 ,H01 : 1 = PDF pŝ∧ (z) |H11 (6.7.13a); 2 = PDF pŝ∧ (z) |H01 (6.7.13b); 3 = PDF of interference (noise) pn (z); (b) ŝ∨ (t) |H11 ,H01 : 1 = PDF pŝ∨ (z) |H11 (6.7.14a); 2 = PDF pŝ∨ (z) |H01 (6.7.14b); 3 = PDF of interference (noise) pn (z)
Substituting the values of the PDFs of the estimators ŝ∧ (t) |H11 ,H01 (6.7.13a) and (6.7.13b) into the formula (6.7.12), we obtain the quality index of joint resolution-estimation q↔ (ŝ∧ |H11 , ŝ∧ |H01 ):
q↔ (ŝ∧ |H11 , ŝ∧ |H01 ) = (1/2) ∫_{−∞}^{∞} |pŝ∧ (z) |H11 − pŝ∧ (z) |H01 | dz =
= 1 − ∫_{st}^{∞} pŝ∧ (z) |H11 dz = 1 − ∫_{st}^{∞} N (1 − Fn (z))^{N−1} pn (z) dz, st ≥ 0. (6.7.15)
Similarly, substituting the values of the PDFs of the estimators ŝ∨ (t) |H11 ,H01 (6.7.14a,b) into the expression (6.7.12), we obtain the quality index of joint resolution-estimation q↔ (ŝ∨ |H11 , ŝ∨ |H01 ):
q↔ (ŝ∨ |H11 , ŝ∨ |H01 ) = (1/2) ∫_{−∞}^{∞} |pŝ∨ (z) |H11 − pŝ∨ (z) |H01 | dz =
= 1 − ∫_{−∞}^{st} pŝ∨ (z) |H11 dz = 1 − ∫_{−∞}^{st} N Fn^{N−1} (z) pn (z) dz, st < 0. (6.7.16)
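Although closed-form evaluation of (6.7.15) and (6.7.16) is difficult, as noted below, numerical evaluation is straightforward; the following sketch approximates the integral in (6.7.15) by a Riemann sum under an assumed Gaussian interference PDF.

```python
from math import exp, sqrt, pi, erf

def pn(z, var=1.0):
    # Assumed Gaussian interference (noise) PDF (even, zero median).
    return exp(-z * z / (2.0 * var)) / sqrt(2.0 * pi * var)

def Fn(z, var=1.0):
    return 0.5 * (1.0 + erf(z / sqrt(2.0 * var)))

def q_meet(s_t, N, hi=20.0, steps=100000):
    # (6.7.15): q = 1 - integral_{s_t}^{inf} N*(1-Fn(z))**(N-1)*pn(z) dz,
    # for s_t >= 0, evaluated by a Riemann sum on [s_t, hi].
    dz = (hi - s_t) / steps
    integral = sum(N * (1.0 - Fn(s_t + k * dz)) ** (N - 1) * pn(s_t + k * dz)
                   for k in range(steps)) * dz
    return 1.0 - integral

# The quality index grows with the sample size N and the signal value s_t.
for N in (1, 3, 10):
    print(f"N={N:2d}  q={q_meet(s_t=1.0, N=N):.4f}")
```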
It is difficult to obtain exact values of the quality indices (6.7.15) and (6.7.16) even under the assumption that the interference (noise) PDF is Gaussian. Meanwhile, owing to the evenness of the function pn (z): pn (z) = pn (−z), and taking into account the positive (negative) definiteness of the samples st of instantaneous values of the signal s(t) for the estimators ŝ∧ (t) |H11 ,H01 and ŝ∨ (t) |H11 ,H01 , it is easy to obtain lower bounds of the quality indices (6.7.15) and (6.7.16), which are determined by the following inequalities:
taking into account the observed process (6.7.18), is determined by the following relationship:
ŝ∧ (t) = s1 (t) ∨ s2 (t) for t ∈ Tŝ ; ŝ∧ (t) = 0 for t ∉ Tŝ , (6.7.20)
Tŝ = [min[t01 , t02 ] + (N − 1)T0 , max[t01 , t02 ] + N T0 ], (6.7.20a)
where t0i is an unknown time of arrival of the signal si (t); N is a number of periods
of the harmonic signal si (t); T0 is a period of a carrier; i = 1, 2.
The relationship (6.7.20) shows that the filter forming the estimator (6.7.8) provides compression of the useful signal by a factor of N. The result (6.7.20) can be interpreted as the potential signal resolution in the time domain under the condition of an extremely large signal-to-noise ratio in the input of the processing unit (filter). The expression (6.7.20) also implies that the resolution ∆τmin of the filter in the time parameter (in time delay) is of the order of a quarter of the carrier period T0 : ∆τmin = T0 /4 + ε, where ε is an arbitrarily small quantity.
Fig. 6.7.2 illustrates the signal ŝ∧ (t) in the output of the filter forming the estimator (6.7.8) under the interaction of two harmonic signals s1 (t) and s2 (t) in the input of the processing unit (filter) in the absence of interference (noise), n(t) = 0, for a time delay equal to t01 − t02 = T0 /3.
FIGURE 6.7.2 Signal ŝ∧ (t) in the output of the filter forming the estimator (6.7.8)
The curves shown in the figure denote: 1, the signal s1 (t); 2, the signal s2 (t); 3, the response ŝ1 (t) to the signal s1 (t); 4, the response ŝ2 (t) to the signal s2 (t); the responses 3 and 4 to s1 (t) and s2 (t) are shown by the solid line.
The questions dealing with synthesis and analysis of signal resolution algorithm
in signal space with lattice properties are investigated in Chapter 7.
7
Synthesis and Analysis of Signal Processing
Algorithms
level of prior uncertainty with respect to useful and/or interference signals. In his
work [118], P.M. Woodward expressed an opinion that the question of prior distri-
butions of informational and non-informational signal parameters will be a stum-
bling block on the way to the synthesis of optimal signal processing algorithms.
This utterance is fully applicable to most signal processing problems, where prior
knowledge of behavior and characteristics of useful and interference signals along
with their parameters plays a significant role in synthesizing the optimal signal
processing algorithms.
The methodology of the synthesis of signal processing algorithms in signal spaces with lattice properties has not been studied yet. Nevertheless, it is necessary to develop approaches to the synthesis that allow researchers to operate with minimum prior data concerning the characteristics and properties of the interacting useful and interference signals. First, no prior data concerning the probabilistic distributions of useful signals and interference are supposed to be present. Second, the kind of useful signal (signals) is assumed to be known a priori, i.e., it is either deterministic (quasi-deterministic) or stochastic. As to interference, a time interval τ0 determining the independence of interference samples is known, and it is assumed that the relation τ0 ≪ T holds, where T is a signal duration.
Such constraints for prior data content concerning the processed signals impose
their own peculiarities upon the approaches to the synthesis of signal processing
algorithms in signal space with lattice properties. The choice of optimality criteria
is not a mathematical problem.
Meanwhile, some considerations about the choice of criteria have to be stated.
The first is that optimality criteria of solving a signal processing problem should
not depend on possible distributions of useful and interference signals.
The other circumstance influencing the choice of appropriate optimality criteria
is the presence of information regarding time interval τ0 which, according to the
Theorems 4.2.1 and/or 4.2.2, is the basis for the sampling (discretization) of the
processed signals.
The third feature is the combination of the first and second ones implying that
an optimality criterion has to be a function of a set of the samples of the observed
process resulting from interactions of useful signal (signals) and interference (noise)
in signal space with lattice properties. The additional consideration for a proper
criterion of signal processing optimality in signal space with lattice properties is
the need to consider the metric properties of nonlinear signal space.
The last two considerations reflect the fundamental feature of the considered approach to the synthesis of algorithms of optimal signal processing in signal space with lattice properties. Under optimization, one should take into account the metric relationships between the samples of the received (processed) realization of the observed stochastic process. Thus, there is no need to consider the possible properties and characteristics of unreceived realizations of signals, which more completely reflect the probabilistic-statistical characteristics and properties of the ensemble of signal realizations.
The last circumstance fundamentally distinguishes the considered approach to the synthesis from the classic one, where algorithms and units of signal processing are optimal on average with respect to the entire statistical ensemble of the received (processed) signals. With the proposed approach, the algorithms and units of signal processing are optimal in the sense of the single received realization of the observed stochastic process.
Before considering synthesis and analysis of signal processing algorithms in sig-
nal space with lattice properties, the algebraic properties of such spaces and their
relations with linear signal spaces should be considered.
The axioms above defining a lattice are not independent, so, for instance, the prop-
erty of idempotency follows from the axiom of absorption.
In this chapter, we deal mainly with physical signal spaces, which, according
to Definition 7.1.1 (4.1.1), are both semigroups and lattices, where each group
translation is isotonic. Such algebraic systems are called lattice-ordered groups LG
or L-groups [221], [223].
The assumption concerning the isotonic property of group translations has the following form [221]: for ∀c(t), s(t) ∈ Γ such that c(t) ≤ s(t), the following relationship holds:
a(t) ⊕ c(t) ⊕ b(t) ≤ a(t) ⊕ s(t) ⊕ b(t) for ∀a(t), b(t) ∈ Γ. (7.1.1)
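For sampled waveforms with the pointwise order, the isotonicity condition (7.1.1) is easy to verify numerically when ⊕ is taken as pointwise addition; the sketch below is illustrative only.

```python
import random

# Pointwise order on sampled waveforms: c <= s iff every sample of c
# does not exceed the corresponding sample of s.
def leq(c, s):
    return all(x <= y for x, y in zip(c, s))

def translate(a, c, b):
    # Group translation a(t) + c(t) + b(t) with "+" taken pointwise.
    return [x + y + z for x, y, z in zip(a, c, b)]

random.seed(0)
s = [random.uniform(-1, 1) for _ in range(8)]
c = [x - abs(random.gauss(0, 0.5)) for x in s]   # guarantees c <= s
a = [random.uniform(-1, 1) for _ in range(8)]
b = [random.uniform(-1, 1) for _ in range(8)]

# (7.1.1): c <= s implies a + c + b <= a + s + b.
assert leq(c, s) and leq(translate(a, c, b), translate(a, s, b))
print("isotonicity (7.1.1) holds for this realization")
```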
the independent measurement errors with a distribution from the distribution class
with symmetric (even) probability density function pN (z ) = pN (−z ) represented
by the sample N = (N1 , . . . , Nn ), Ni ∈ N , so that N ∈ LS (X , BX ; +); {Xi } are the
measurement results represented by the sample X = (X1 , . . . , Xn ), Xi ∈ X: X ∈
LS (X , BX ; +); “+” is operation of addition of linear sample space LS (X , BX ; +);
i = 1, . . . , n is the index of the elements of statistical collections {Ni }, {Xi }; n is a
size of the samples N = (N1 , . . . , Nn ), X = (X1 , . . . , Xn ).
The estimators obtained on the basis of the least squares method (LSM) and the least modules method (LMM), according to the criteria of the minimum of the sums of squares and of modules of the measurement errors, respectively, are the first and simplest estimators [231], [257]:
λ̂LSM = arg min_λ { ∑_i (Xi − f (λ))² }; (7.2.1a)
λ̂LMM = arg min_λ { ∑_i |Xi − f (λ)| }. (7.2.1b)
Extrema of the functions ∑_i (Xi − f (λ))² and ∑_i |Xi − f (λ)|, determined by the criteria (7.2.1a) and (7.2.1b), are found as the roots of the equations:
d[∑_i (Xi − f (λ̂))²]/dλ̂ = 0; (7.2.2a)
d[∑_i |Xi − f (λ̂)|]/dλ̂ = 0. (7.2.2b)
The values of the estimators λ̂LSM and λ̂LMM are the solutions of the Equations (7.2.2a) and (7.2.2b) in the form of a function f^{−1} [∗] of the sample mean and the sample median med{∗} of the observations {Xi }, respectively:
λ̂LSM = f^{−1} [ (1/n) ∑_{i=1}^{n} Xi ]; (7.2.3a)
λ̂LMM = f^{−1} [ med_{i∈N∩[1,n]} {Xi } ], (7.2.3b)
where f^{−1} [∗] is the function inverse to the function f (λ) of the parameter λ; N is the set of natural numbers.
The estimators (7.2.3a) and (7.2.3b) are asymptotically effective in the case of
Gaussian and Laplacian distributions of measurement errors, respectively.
Consider two models of indirect measurement of an unknown nonrandom scalar nonnegative location parameter λ ∈ R+ = [0, ∞[ in sample space with lattice properties L(Y, BY ; ∨, ∧), respectively:
Yi = f (λ) ∨ Ni ; (7.2.4a)
Ỹi = f (λ) ∧ Ni . (7.2.4b)
As the estimators of the parameter λ for the models (7.2.4a) and (7.2.4b), we take, respectively:
λ̂n,∧ = arg min_{λ∈R+} | ∧_{i=1}^{n} (Yi − f (λ))|; (7.2.5a)
λ̂n,∨ = arg min_{λ∈R+} | ∨_{i=1}^{n} (Ỹi − f (λ))|, (7.2.5b)
where ∧_{i=1}^{n} Yi = inf_Y {Yi } is the meet of a set Y = (Y1 , . . . , Yn ); ∨_{i=1}^{n} Ỹi = sup_{Ỹ} {Ỹi } is the join of a set Ỹ = (Ỹ1 , . . . , Ỹn ).
We next find the extrema of the functions | ∧_{i=1}^{n} (Yi − f (λ))| and | ∨_{i=1}^{n} (Ỹi − f (λ))| defined by the criteria (7.2.5a) and (7.2.5b), respectively, setting their derivatives at the estimator λ̂ of the parameter λ to zero:
d| ∧_{i=1}^{n} (Yi − f (λ̂))|/dλ̂ = −sign[ ∧_{i=1}^{n} (Yi − f (λ̂))] f ′(λ̂) = 0; (7.2.6a)
d| ∨_{i=1}^{n} (Ỹi − f (λ̂))|/dλ̂ = −sign[ ∨_{i=1}^{n} (Ỹi − f (λ̂))] f ′(λ̂) = 0. (7.2.6b)
The values of the estimators λ̂n,∧ and λ̂n,∨ are the solutions of the equations (7.2.6a)
and (7.2.6b) in the form of the function f −1 [∗] of meet and join of the observation
results {Yi } and {Ỹi }, respectively:
n
λ̂n,∧ = f −1 ∧ Yi ; (7.2.7a)
i=1
−1 n
λ̂n,∨ = f ∨ Ỹi , (7.2.7b)
i=1
are the minimum points of these functions and the solutions of Equations (7.2.5a),
(7.2.5b) defining these estimation criteria.
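A corresponding sketch for the lattice estimators (7.2.7a) and (7.2.7b) under the interaction models (7.2.4a,b); λ and the noise scale are chosen, as an assumption of the demonstration, so that f(λ) falls inside the range of the errors, which makes both the meet and the join recover f(λ) exactly with high probability:

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true, n = 0.5, 50
f = lambda lam: lam ** 3
f_inv = np.cbrt

N = rng.normal(0.0, 1.0, n)
Y = np.maximum(f(lam_true), N)    # Y_i  = f(lambda) v N_i   (7.2.4a)
Yt = np.minimum(f(lam_true), N)   # Y~_i = f(lambda) ^ N_i   (7.2.4b)

lam_meet = f_inv(Y.min())         # (7.2.7a): f^{-1} of the meet of {Y_i}
lam_join = f_inv(Yt.max())        # (7.2.7b): f^{-1} of the join of {Y~_i}
print(lam_meet, lam_join)         # both equal lam_true once some N_i <= f(lam) and some N_j >= f(lam)
```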
We now carry out the comparative analysis of the quality characteristics of
the estimation of an unknown nonrandom nonnegative location parameter λ ∈
R+ = [0, ∞[ for the model of the direct measurement in both linear sample space
LS (X , BX ; +) and sample space with lattice properties L(Y, BY ; ∨, ∧), respectively:
Xi = λ + Ni ; (7.2.8)
Yi = λ ∨ Ni , (7.2.9)
where {Ni } are the independent measurement errors that are normally distributed
with zero expectation and variance D, represented by the sample N = (N1 , . . . , Nn ),
Ni ∈ N , and N ∈ LS (X , BX ; +) and N ∈ L(Y, BY ; ∨, ∧); {Xi }, {Yi } are the mea-
surement results represented by the samples X = (X1 , . . . , Xn ), Y = (Y1 , . . . , Yn );
Xi ∈ X, Yi ∈ Y, respectively: X ∈ LS(X, BX; +), Y ∈ L(Y, BY; ∨, ∧); “+” and “∨” are the operation of addition of the linear sample space LS(X, BX; +) and the operation of join of the sample space with lattice properties L(Y, BY; ∨, ∧), respectively; i = 1, . . . , n is the index of the elements of the statistical collections {Ni}, {Xi}, {Yi}; n is the size of the samples N = (N1, . . . , Nn), X = (X1, . . . , Xn), Y = (Y1, . . . , Yn).
For the model (7.2.8), the estimator λ̂n,+ in the form of a sample mean is a
uniformly minimum variance unbiased estimator [250], [251]:
$$\hat\lambda_{n,+} = \frac{1}{n}\sum_{i=1}^{n} X_i. \qquad (7.2.10)$$
As the estimator λ̂n,∧ of parameter λ for the model (7.2.9), we take the meet
(7.2.7a):
$$\hat\lambda_{n,\wedge} = \bigwedge_{i=1}^{n} Y_i, \qquad (7.2.11)$$
where $\bigwedge_{i=1}^{n} Y_i = \inf_Y\{Y_i\}$ is the least value from the sample Y = (Y1, . . . , Yn).
Cumulative distribution function (CDF) Fλ̂n,+(z) and probability density function (PDF) pλ̂n,+(z) of the estimator λ̂n,+ (7.2.10) for the model (7.2.8) are determined by the expressions [250], [251]:
$$F_{\hat\lambda_{n,+}}(z) = \int_{-\infty}^{z} p_{\hat\lambda_{n,+}}(x)\,dx. \qquad (7.2.12)$$
The CDF Fλ̂n,∧(z) of the estimator λ̂n,∧ (7.2.11) for the model (7.2.9) is expressed through the CDF F(z) of the observations Yi = λ ∨ Ni:
$$F_{\hat\lambda_{n,\wedge}}(z) = 1 - [1 - F(z)]^{n}; \qquad (7.2.14)$$
$$F(z) = F_N(z)\cdot F_\lambda(z), \qquad (7.2.15)$$
where $F_N(z) = \int_{-\infty}^{z} p_N(x)\,dx$ is the CDF of the measurement error Ni; $p_N(z) = \frac{1}{\sqrt{2\pi D}}\exp\big(-\frac{z^2}{2D}\big)$ is the PDF of the measurement error Ni; Fλ(z) = 1(z − λ) is the CDF of the unknown nonrandom parameter λ ≥ 0; 1(z) is the Heaviside step function.
So, F(z) (7.2.15) can be written in the form:
$$F(z) = \begin{cases} F_N(z), & z \ge \lambda; \\ 0, & z < \lambda. \end{cases} \qquad (7.2.16)$$
Taking into account (7.2.16), formula (7.2.14) can be written in the form:
$$F_{\hat\lambda_{n,\wedge}}(z) = \begin{cases} 1 - [1 - F_N(z)]^{n}, & z \ge \lambda; \\ 0, & z < \lambda. \end{cases}$$
According to its definition, the PDF pλ̂n,∧(z) of the estimator λ̂n,∧ is the derivative of the CDF Fλ̂n,∧(z):
$$p_{\hat\lambda_{n,\wedge}}(z) = F'_{\hat\lambda_{n,\wedge}}(z) = P_c\,\delta(z-\lambda) + n\,p_N(z)\,[1-F_N(z)]^{n-1}\,1(z-\lambda), \qquad (7.2.19)$$
where Pc and Pe are the probabilities of correct and erroneous formation of the estimator λ̂n,∧:
$$P_c = 1 - [1 - F_N(\lambda)]^{n}; \qquad (7.2.20)$$
$$P_e = [1 - F_N(\lambda)]^{n}. \qquad (7.2.21)$$
The PDFs pλ̂n,∧(z) of the estimator λ̂n,∧ for n = 1, 2 are shown in Fig. 7.2.1. Each random variable Ni is determined by an even PDF pN(z) with zero expectation, so the inequality holds:
$$F_N(\lambda) \ge 1/2. \qquad (7.2.22)$$
Substituting the inequality (7.2.22) into the formulas (7.2.21) and (7.2.20), we obtain the bounds:
$$P_e \le 2^{-n}; \qquad (7.2.23)$$
$$P_c \ge 1 - 2^{-n}. \qquad (7.2.24)$$
The relationship (7.2.19) implies that the estimator λ̂n,∧ is biased; nevertheless, it is both consistent and asymptotically unbiased, inasmuch as it converges to the estimated parameter λ in probability ($\hat\lambda_{n,\wedge}\xrightarrow{P}\lambda$) and in distribution ($\hat\lambda_{n,\wedge}\xrightarrow{p}\lambda$):
$$\hat\lambda_{n,\wedge}\xrightarrow{P}\lambda:\ \lim_{n\to\infty}P\{|\hat\lambda_{n,\wedge}-\lambda|<\varepsilon\}=1\ \text{for }\forall\varepsilon>0;$$
$$\hat\lambda_{n,\wedge}\xrightarrow{p}\lambda:\ \lim_{n\to\infty}p_{\hat\lambda_{n,\wedge}}(z)=\delta(z-\lambda).$$
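The atom of the PDF (7.2.19) at z = λ, and the bound (7.2.24) that follows from it, can be checked directly by simulation; the parameter values below are assumptions of the experiment only:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
lam, D, trials = 1.0, 1.0, 20000
F_N = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0 * D)))   # Gaussian CDF of N_i

for n in (1, 2, 5, 10):
    Y = np.maximum(lam, rng.normal(0.0, sqrt(D), (trials, n)))  # Y_i = lam v N_i (7.2.9)
    est = Y.min(axis=1)                          # meet estimator (7.2.11)
    Pc_emp = np.mean(est == lam)                 # empirical probability of exact hit
    Pc_th = 1.0 - (1.0 - F_N(lam)) ** n          # (7.2.20)
    print(n, Pc_emp, Pc_th, 1.0 - 2.0 ** (-n))   # Pc_emp ~ Pc_th >= 1 - 2^{-n} (7.2.24)
```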
Theorem 7.2.1. For the model of the measurement (7.2.9), the variance D{λ̂n,∧} of the estimator λ̂n,∧ (7.2.11) of a parameter λ is bounded above by the quantity:
$$D\{\hat\lambda_{n,\wedge}\} \le P_c P_e \lambda^2 + n P_e D\left[1 + \frac{\lambda\exp(-\lambda^2/2D)}{[1 - F_N(\lambda)]\sqrt{2\pi D}}\right].$$
Proof. The probability density function pλ̂n,∧(z) (7.2.19) of the estimator λ̂n,∧ (7.2.11) can be represented in the following form:
$$p_{\hat\lambda_{n,\wedge}}(z) = \begin{cases} P_c\cdot\delta(z-\lambda), & z = \lambda; \\ 2P_e\cdot p_N(z)\cdot K(z), & z > \lambda, \end{cases} \qquad (7.2.25)$$
where K(z) is a function determined by the formula:
$$K(z) = n[1 - F_N(z)]^{n-1}/(2 P_e). \qquad (7.2.26)$$
It should be noted that for any z ≥ λ the inequality holds:
$$0 < 1 - F_N(z) \le 1 - F_N(\lambda),$$
which implies the inequality:
$$0 < K(z) \le n/(2[1 - F_N(\lambda)]). \qquad (7.2.27)$$
Taking into account the boundedness of the function K(z) (7.2.26) appearing in the PDF pλ̂n,∧(z) (7.2.25) of the estimator λ̂n,∧, one can obtain the upper bound of its variance D{λ̂n,∧}:
$$\sup_{K(z)} D\{\hat\lambda_{n,\wedge}\} = \sup_{K(z)} m_2\{\hat\lambda_{n,\wedge}\} - \Big(\inf_{K(z)} m_1\{\hat\lambda_{n,\wedge}\}\Big)^2. \qquad (7.2.28)$$
Substituting the formula (7.2.25) into the definition of the second moment (7.2.29) of the estimator λ̂n,∧, we obtain the following expression:
$$m_2\{\hat\lambda_{n,\wedge}\} = P_c\int_{-\infty}^{\infty} z^2\delta(z-\lambda)\,dz + 2P_e\int_{\lambda}^{\infty} z^2 p_N(z) K(z)\,dz. \qquad (7.2.31)$$
Replacing K(z) in (7.2.31) by its upper bound (7.2.27) and integrating, we obtain:
$$\sup_{K(z)} m_2\{\hat\lambda_{n,\wedge}\} = P_c\lambda^2 + n P_e D\left[1 + \frac{\lambda\exp(-\lambda^2/2D)}{\sqrt{2\pi D}\,[1 - F_N(\lambda)]}\right]. \qquad (7.2.32)$$
Substituting the quantities determined by the formulas (7.2.32) and (7.2.34) into the formula (7.2.28), we obtain the following expression for the upper bound of the variance D{λ̂n,∧} of the estimator λ̂n,∧:
$$\sup_{K(z)} D\{\hat\lambda_{n,\wedge}\} = P_c P_e \lambda^2 + n P_e D\left[1 + \frac{\lambda\exp(-\lambda^2/2D)}{[1 - F_N(\lambda)]\sqrt{2\pi D}}\right]. \qquad (7.2.35)$$
Proof. We investigate the behavior of the upper bound $\sup_{K(z)} D\{\hat\lambda_{n,\wedge}\}$ of the variance D{λ̂n,∧} of the estimator λ̂n,∧ on λ ≥ 0 and show that it is a monotone decreasing function of the parameter λ, so that this function takes its maximum value, equal to n · 2⁻ⁿD, at the point λ = 0:
$$\lim_{\lambda\to 0+0}\Big(\sup_{K(z)} D\{\hat\lambda_{n,\wedge}\}\Big) = n\cdot 2^{-n} D \ge \sup_{K(z)} D\{\hat\lambda_{n,\wedge}\} \ge D\{\hat\lambda_{n,\wedge}\}. \qquad (7.2.37)$$
We denote the function $\sup_{K(z)} D\{\hat\lambda_{n,\wedge}\}$ determined by the identity (7.2.35) by y(λ).
Squaring both parts of the inequality (7.2.44), we obtain the intermediate inequality:
$$4Q^2(\lambda) \le 2\pi D f(\lambda) \;\Rightarrow\; 2Q^2(\lambda) \le \pi D f(\lambda),$$
which implies that the inequality (7.2.40) holds in the case of large samples n ≫ 1 (it is sufficient that n > 2):
$$2Q^2(\lambda)(1 - Q^n(\lambda)) \le 2Q^2(\lambda) \le \pi D f(\lambda) < D f(\lambda)(n^2 - n). \qquad (7.2.45)$$
Thus, the inequality (7.2.45) implies that the function y(λ) determined by the formula (7.2.38) is a monotone decreasing function on λ ≥ 0 for n > 2, so that y(0) = n2⁻ⁿD. Corollary 7.2.1 is proved.
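A short numerical check of the statement just proved: evaluating the bound (7.2.35) as a function y(λ) exhibits the monotone decrease and the limiting value n·2⁻ⁿD at λ = 0 (D = 1 and n = 5 are assumed for the illustration):

```python
from math import erf, exp, pi, sqrt

D = 1.0
F_N = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0 * D)))

def y(lam, n):
    """Upper bound (7.2.35) of the variance of the meet estimator."""
    Pe = (1.0 - F_N(lam)) ** n
    Pc = 1.0 - Pe
    tail = lam * exp(-lam ** 2 / (2.0 * D)) / ((1.0 - F_N(lam)) * sqrt(2.0 * pi * D))
    return Pc * Pe * lam ** 2 + n * Pe * D * (1.0 + tail)

n = 5
print(y(0.0, n), n * 2.0 ** (-n) * D)                     # both 0.15625
print([round(y(l, n), 6) for l in (0.0, 0.5, 1.0, 2.0)])  # monotone decreasing
```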
Compare the quality of the estimators λ̂n,+ (7.2.10) and λ̂n,∧ (7.2.11) of a parameter λ for the models of the measurement (7.2.8) and (7.2.9), respectively, using the relative efficiency e{λ̂n,∧, λ̂n,+} equal to the ratio:
$$e\{\hat\lambda_{n,\wedge}, \hat\lambda_{n,+}\} = D\{\hat\lambda_{n,+}\}/D\{\hat\lambda_{n,\wedge}\}. \qquad (7.2.46)$$
Theorem 7.2.2. The relative efficiency e{λ̂n,∧, λ̂n,+} of the estimator λ̂n,∧ of a parameter λ in sample space with lattice properties L(Y, BY; ∨, ∧) with respect to the estimator λ̂n,+ of the same parameter λ in linear sample space LS(X, BX; +) is bounded below by the quantity:
$$e\{\hat\lambda_{n,\wedge}, \hat\lambda_{n,+}\} \ge 2^{n}/n^{2}. \qquad (7.2.47)$$
Proof. Using the statement (7.2.36) of Corollary 7.2.1 of Theorem 7.2.1 in the definition of the relative efficiency e{λ̂n,∧, λ̂n,+} (7.2.46), together with the result (7.2.35) of Theorem 7.2.1, one can determine the lower bound $\inf_{K(z)} e\{\hat\lambda_{n,\wedge}, \hat\lambda_{n,+}\}$ of the relative efficiency of the estimator λ̂n,∧ in the sample space with lattice properties (7.2.11) with respect to the estimator λ̂n,+ in linear sample space (7.2.10) for some limit cases. Assuming in the formula (7.2.35) that the parameter λ tends to zero and taking into account the equality FN(0) = 1/2, we obtain the limit of the relative efficiency lower bound of the estimator λ̂n,∧ with respect to the estimator λ̂n,+, which is identical to the statement (7.2.47) of Theorem 7.2.2:
$$\lim_{\lambda\to 0+0}\ \inf_{K(z)} e\{\hat\lambda_{n,\wedge}, \hat\lambda_{n,+}\} = 2^{n}/n^{2}. \qquad (7.2.49)$$
Theorem 7.2.2 shows how large the worst value of the relative efficiency of the estimator λ̂n,∧ in sample space with lattice properties L(Y, BY; ∨, ∧) can be with respect to the well-known estimator λ̂n,+ in the form of a sample mean in linear sample space LS(X, BX; +).
Theorem 7.2.2 can be explained in the following way. As the sample size n increases, the variance D{λ̂n,+} of the estimator λ̂n,+ in linear sample space LS(X, BX; +) decreases proportionally to 1/n, whereas the variance D{λ̂n,∧} of the estimator in sample space with lattice properties L(Y, BY; ∨, ∧) decreases proportionally to n/2ⁿ, i.e., almost exponentially.
Such striking distinctions between estimator behavior in linear sample space LS(X, BX; +) and estimator behavior in sample space with lattice properties L(Y, BY; ∨, ∧) can be elucidated by the fundamental difference between the algebraic properties of these spaces, which is revealed in how completely the information contained in the processed statistical collection is used in one sample space as against the other.
In summary, there exist estimators in nonlinear sample spaces that are characterized, for a Gaussian distribution of measurement errors, by a variance noticeably smaller than the efficient estimator variance defined by the Cramér-Rao lower bound. This circumstance poses a question regarding the adequacy of using estimator variance to determine the efficiency of estimation of an unknown nonrandom parameter over wide classes of sample spaces and over the families of symmetric distributions of measurement errors.
First, not all distributions are characterized by a finite variance. Second, a variance does not contain all information concerning the properties of a distribution. Third, the existence of superefficient estimators casts doubt on the correctness of using variance to determine parameter estimation efficiency. Fourth, the analysis of the properties of the estimator sequences {λ̂n} (n is a sample size) on the basis of their variances does not allow taking into account the topological properties of sample spaces and parameter estimators in these spaces.
On the grounds of the aforementioned considerations, another approach to determining the efficiency of estimation of an unknown nonrandom parameter is proposed. Such an approach can determine the efficiency of unbiased and asymptotically unbiased estimators; it is based on metric properties of the estimators and is considered below.
Xi = λ + Ni ; (7.2.51)
Yi = λ ⊕ Ni , (7.2.52)
where {Ni} are independent measurement errors, each with a PDF $p_N^{\alpha}(z)$ from some set $P = \{p_N^{\alpha}(z)\}$ indexed over a parameter α, represented by the sample N = (N1, . . . , Nn), Ni ∈ N, so that N ∈ LS(X, BX; +) and N ∈ A(Y, BY; S); {Xi}, {Yi}
are the measurement results represented by the samples X = (X1 , . . . , Xn ), Y =
(Y1 , . . . , Yn ), Xi ∈ X, Yi ∈ Y , respectively: X ∈ LS (X , BX ; +), Y ∈ A(Y, BY ; S );
“ +” is a binary operation of additive commutative group LS (+) of linear sample
space LS (X , BX ; +); “⊕” is a binary operation of additive commutative semigroup
A(⊕) of sample space with the properties of universal algebra A(Y, BY ; S ) and a
signature S; i = 1, . . . , n is an index of the elements of statistical collections {Ni },
{Xi }, {Yi }; n represents a size of the samples N = (N1 , . . . , Nn ), X = (X1 , . . . , Xn ),
Y = (Y1 , . . . , Yn ).
Let the estimators $\hat\lambda^{\alpha}_{n,+}$, $\tilde\lambda^{\alpha}_{n,\oplus}$ of an unknown nonrandom parameter λ, both in linear sample space LS(X, BX; +) and in sample space with the properties of universal algebra A(Y, BY; S) and a signature S, be some functions $\hat T$ and $\tilde T$ of the samples X = (X1, . . . , Xn), Y = (Y1, . . . , Yn), respectively; in the general case, the functions are different, $\hat T \ne \tilde T$:
$$\hat\lambda^{\alpha}_{n,+} = \hat T[X]; \qquad (7.2.53)$$
$$\tilde\lambda^{\alpha}_{n,\oplus} = \tilde T[Y]. \qquad (7.2.54)$$
A metric between the estimators $\hat\lambda^{\alpha}_{k,+}$ and $\tilde\lambda^{\alpha}_{n,\oplus}$ is introduced as the metric between their PDFs:
$$d^{\alpha}(\hat\lambda^{\alpha}_{k,+}, \tilde\lambda^{\alpha}_{n,\oplus}) = \frac{1}{2}\int_{-\infty}^{\infty}\big|p^{\alpha}_{\hat\lambda_{k,+}}(z) - p^{\alpha}_{\tilde\lambda_{n,\oplus}}(z)\big|\,dz = d^{\alpha}\big(p^{\alpha}_{\hat\lambda_{k,+}}(z),\, p^{\alpha}_{\tilde\lambda_{n,\oplus}}(z)\big), \qquad (7.2.55)$$
where $d^{\alpha}(p^{\alpha}_{\hat\lambda_{k,+}}(z), p^{\alpha}_{\tilde\lambda_{n,\oplus}}(z))$ is a metric between the PDFs $p^{\alpha}_{\hat\lambda_{k,+}}(z)$, $p^{\alpha}_{\tilde\lambda_{n,\oplus}}(z)$ of the estimators $\hat\lambda^{\alpha}_{k,+}$ and $\tilde\lambda^{\alpha}_{n,\oplus}$, respectively. The quality index of the estimator $\tilde\lambda^{\alpha}_{n,\oplus}$ is defined as the metric (7.2.55) between its PDF and the PDF of the estimator $\hat\lambda^{\alpha}_{1,+}$ built upon a single observation:
$$q\{\tilde\lambda^{\alpha}_{n,\oplus}\} = d^{\alpha}\big(p^{\alpha}_{\hat\lambda_{1,+}}(z),\, p^{\alpha}_{\tilde\lambda_{n,\oplus}}(z)\big). \qquad (7.2.56)$$
Now we compare how distinct from each other are the PDFs $p^{\alpha}_{\hat\lambda_{1,+}}(z)$, $p^{\alpha}_{\tilde\lambda_{n,\oplus}}(z)$ of the estimators $\hat\lambda^{\alpha}_{1,+}$ and $\tilde\lambda^{\alpha}_{n,\oplus}$, i.e., the estimators obtained on the basis of only one measurement result and n measurement results, respectively. By $\tilde\lambda^{\alpha}_{n,\oplus}$ we mean the estimator (7.2.54) obtained within the model (7.2.52), so that a linear sample space LS(X, BX; +) can also be used as a sample space with the properties of universal algebra A(Y, BY; S).
The following theorem helps to determine a relation between the values of the quality indices of estimation q{λ̂n,+} and q{λ̂n,∧} (7.2.56) for the estimators λ̂n,+ and λ̂n,∧ defined by the expressions (7.2.10) and (7.2.11) within the models of direct measurement (7.2.8), (7.2.9) in both linear sample space LS(X, BX; +) and sample space with lattice properties L(Y, BY; ∨, ∧), respectively, on the assumption of normality of the distribution of measurement errors {Ni} with PDF pN(z) of the form:
$$p_N(z) = (2\pi D)^{-1/2}\exp(-z^2/2D).$$
Theorem 7.2.3. The relation between the quality indices q{λ̂n,+} and q{λ̂n,∧} of the estimators λ̂n,+ and λ̂n,∧ in linear sample space LS(X, BX; +) and in sample space with lattice properties L(Y, BY; ∨, ∧), respectively, on the assumption of normality of the distribution of measurement errors, is determined by the following inequality:
$$q\{\hat\lambda_{n,\wedge}\} \ge 1 - 2^{-n} > 1 - [2\Phi(1) - 1]\sqrt{\frac{2}{n+1}} \ge q\{\hat\lambda_{n,+}\}, \quad n \ge 1, \qquad (7.2.57)$$
where $\Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}\exp\big(-\frac{t^2}{2}\big)\,dt$.
where pλ̂n,∧(z) is the PDF of the estimator λ̂n,∧ determined by the formula (7.2.19) when the sample Y = (Y1, . . . , Yn) of size n is used within the model (7.2.9); pλ̂1,+(z) is the PDF of the estimator λ̂1,+ determined by the formula (7.2.59) when only the element X1 from the sample X = (X1, . . . , Xk) is used within the model (7.2.8).
Substituting the expressions (7.2.19) and (7.2.59) into the formula (7.2.69), we obtain the exact value of the metric:
$$d\big(p_{\hat\lambda_{1,+}}(z),\, p_{\hat\lambda_{n,\wedge}}(z)\big) = P_c = 1 - [1 - F_N(\lambda)]^{n}. \qquad (7.2.70)$$
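The exact value (7.2.70) can be confirmed numerically: integrating the absolute difference of the continuous parts and adding the atom Pc of the PDF (7.2.19) reproduces Pc itself. The grid, parameter values, and the use of scipy are assumptions of the sketch:

```python
import numpy as np
from scipy.special import erf

lam, D, n = 1.0, 1.0, 5
p_N = lambda z: np.exp(-z ** 2 / (2 * D)) / np.sqrt(2 * np.pi * D)
F_N = lambda z: 0.5 * (1.0 + erf(z / np.sqrt(2 * D)))

Pc = 1.0 - (1.0 - F_N(lam)) ** n

z = np.linspace(lam - 12.0, lam + 12.0, 1_000_001)
dz = z[1] - z[0]
p1 = p_N(z - lam)                                                  # PDF of lam^_{1,+} = X_1
pn = np.where(z > lam, n * p_N(z) * (1 - F_N(z)) ** (n - 1), 0.0)  # continuous part of (7.2.19)
tv = 0.5 * (Pc + np.sum(np.abs(pn - p1)) * dz)                     # metric (7.2.55), atom added
print(tv, Pc)                                                      # coincide
```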
Thus, Theorem 7.2.3 determines the upper ∆∧(n) and lower ∆+(n) bounds of the rates of convergence of the quality indices q{λ̂n,∧} and q{λ̂n,+} of the estimators λ̂n,∧ and λ̂n,+ to 1, described by the following functions, respectively:
$$\Delta_{\wedge}(n) = 2^{-n}; \qquad \Delta_{+}(n) = [2\Phi(1) - 1]\sqrt{\frac{2}{n+1}},$$
whose plots are shown in Fig. 7.2.2.
The analysis of relative efficiency of the estimators λ̂n,∧ , λ̂n,+ (7.2.47) and also
the relation between their quality indices (7.2.57) allows us to draw the following
conclusions.
FIGURE 7.2.2 Upper ∆∧ (n) and lower ∆+ (n) bounds of quality indices q{λ̂n,∧ } and
q{λ̂n,+ }
$$Z_i = \lambda \wedge N_i, \qquad (7.2.72)$$
where $\bigvee_{i=1}^{n} Z_i = \sup\{Z_i\}$ is the largest value from the sample Z = (Z1, . . . , Zn), then the estimator λ̂n,∨ is characterized by the quality index q{λ̂n,∨}, which is determined according to the metric (7.2.56), and which is equal to the quality index q{λ̂n,∧} of the estimator λ̂n,∧: q{λ̂n,∨} = q{λ̂n,∧}.
5. The proposed quality index (7.2.56) can be used successfully to determine
the estimation quality on a wide class of sample spaces without constraints
with respect to their algebraic and probabilistic-statistical properties.
where y(t) and ỹ(t) are the solutions of the problems of minimization of the metrics between the observed statistical collections {x(tj)} and {x̃(tj)} and the optimization variables, i.e., the functions $\overset{\wedge}{y}(t)$ and $\overset{\vee}{y}(t)$, respectively; w(t) is the function F[∗, ∗] uniting the results y(t) and ỹ(t) of minimization of the functions of the observed collections {x(tj)} (7.3.2a) and {x̃(tj)} (7.3.2b); T∗ is a processing interval; $N \in \mathbb{N}$, $\mathbb{N}$ is the set of natural numbers; N is the number of samples of the stochastic processes x(t), x̃(t) used in primary processing; δd(N) is a relative dynamic error of filtering as a function of the sample number N; $\delta_d^0$ is a given quantity of the relative dynamic error of filtering.
$$IP[s(t)] = \begin{cases} \mathrm{M}\{u^2(t)\}\,\big|_{s(t)\equiv 0} \to \min\limits_{L}; & \text{(a)} \\[2pt] \mathrm{M}\{[u(t)-s(t)]^2\}\,\big|_{n(t)\equiv 0,\,\Delta t\to 0} = \varepsilon; & \text{(b)} \\[2pt] u(t) = L[w(t)], & \text{(c)} \end{cases} \qquad (7.3.5)$$
where v(t) = ŝ(t) is the result of filtering (the estimator ŝ(t) of the signal s(t)) that is the solution of the problem of minimization of a metric between the instantaneous values of the stochastic process u(t) and the optimization variable, i.e., the function v°(t); tj = t − j∆t, j = 0, 1, . . . , N − 1, tj ∈ T∗; $t_k = t - \frac{k}{M}\Delta\tilde T$, k = 0, 1, . . . , M − 1, tk ∈ T̃, T̃ = ]t − ∆T̃, t]; T̃ is the interval in which the smoothing of the stochastic process u(t) is realized; ∆T̃ is the length of the smoothing interval T̃; $M \in \mathbb{N}$, $\mathbb{N}$ is the set of natural numbers; M is the number of samples of the stochastic process u(t) used during smoothing; δd(∆T̃), δf(M) are relative dynamic and fluctuation errors of smoothing as functions of the length ∆T̃ of the smoothing interval T̃ and the number of samples M, respectively; δd,sm and δf,sm are given relative dynamic and fluctuation errors of smoothing, respectively.
The optimality criteria and single relations appearing in the systems (7.3.4),
(7.3.5), and (7.3.6) define consecutive stages of processing P F [s(t)], IP [s(t)],
Sm[s(t)] of the general algorithm of useful signal extraction (7.3.3).
The equations (7.3.4a), (7.3.4b) define a criterion of minimum metric between the statistical sets of observations {x(tj)} and {x̃(tj)} and the results of primary filtering y(t) and ỹ(t), respectively. The functions of metrics $|\bigwedge_{j=0}^{N-1}[x(t_j) - \overset{\wedge}{y}(t)]|$ and $|\bigvee_{j=0}^{N-1}[\tilde x(t_j) - \overset{\vee}{y}(t)]|$ are chosen taking into account the metric convergence and the convergence in probability of the sequences $y_{N-1} = \bigwedge_{j=0}^{N-1} x(t_j)$, $\tilde y_{N-1} = \bigvee_{j=0}^{N-1}\tilde x(t_j)$ to the estimated parameter for the interactions of the kind (7.2.2a) and (7.2.2b) (see Section 7.2). The equation (7.3.4d) defines the criterion of minimum metric $\sum_{j=0}^{N-1}|w(t_j) - s(t_j)|$ between the useful signal s(t) and the process w(t) in the processing interval T∗ = [t − (N − 1)∆t, t]. This criterion establishes the function
F [y (t), ỹ (t)] (7.3.4c) uniting the results y (t) and ỹ (t) of primary processing of the
observed processes x(t) and x̃(t). Criterion (7.3.4d) is considered under two con-
straint conditions: (1) interference (noise) n(t) is absent in the input of the signal
processing unit; (2) the sample interval ∆t tends to zero: ∆t → 0. The equation
(7.3.4e) establishes the criterion of the choice of sample number N of stochastic
processes x(t) and x̃(t) providing a given quantity of a relative dynamic error δd0
of primary filtering.
The equations (7.3.5a) through (7.3.5c) establish the criterion of the choice of
functional transformation L[w(t)]. The equation (7.3.5a) defines the criterion of
minimum of the second moment of the process u(t) in the absence of useful signal
s(t) in the input of the signal processing unit. The equation (7.3.5b) establishes the
quantity of the second moment of the difference between the signals u(t) and s(t)
under two constraint conditions: (1) interference (noise) n(t) is absent in the input
of the signal processing unit; (2) sample interval ∆t tends to zero: ∆t → 0. The
relation (7.3.5c) defines a coupling equation between the processes u(t) and w(t).
The equation (7.3.6a) defines the criterion of minimum metric $\sum_{k=0}^{M-1}|u(t_k) - v^{\circ}(t)|$ between instantaneous values of the process u(t) and the optimization variable v°(t) in
the smoothing interval T̃ =]t − ∆T̃ , t], requiring the final processing of the signal
u(t) in the form of its smoothing. The equation (7.3.6b) establishes a criterion of
the choice of the quantity ∆T̃ of smoothing interval T̃ based on providing a given
quantity of dynamic error of smoothing δd,sm . The equation (7.3.6c) defines the
criterion of the choice of sample number M of stochastic process u(t) providing a
given quantity of fluctuation error of smoothing δf,sm .
We obtain the estimator ŝ(t) = v(t) of the signal s(t) in the output of the signal processing unit by consecutively solving the optimization relationships of the system (7.3.4).
To solve the problem of minimization of the functions $|\bigwedge_{j=0}^{N-1}[x(t_j) - \overset{\wedge}{y}(t)]|$ (7.3.4a) and $|\bigvee_{j=0}^{N-1}[\tilde x(t_j) - \overset{\vee}{y}(t)]|$ (7.3.4b), it is necessary to determine the extrema of these functions, setting their derivatives with respect to $\overset{\wedge}{y}(t)$ and $\overset{\vee}{y}(t)$ to zero, respectively:
$$d\Big|\bigwedge_{j=0}^{N-1}[x(t_j) - \overset{\wedge}{y}(t)]\Big|\Big/d\overset{\wedge}{y}(t) = -\mathrm{sign}\Big(\bigwedge_{j=0}^{N-1}[x(t_j) - \overset{\wedge}{y}(t)]\Big) = 0; \qquad (7.3.7a)$$
$$d\Big|\bigvee_{j=0}^{N-1}[\tilde x(t_j) - \overset{\vee}{y}(t)]\Big|\Big/d\overset{\vee}{y}(t) = -\mathrm{sign}\Big(\bigvee_{j=0}^{N-1}[\tilde x(t_j) - \overset{\vee}{y}(t)]\Big) = 0. \qquad (7.3.7b)$$
The solutions of Equations (7.3.7a) and (7.3.7b) are the values of the estimators y(t) and ỹ(t) in the form of the meet and join of the observation results {x(tj)} and {x̃(tj)}, respectively:
$$y(t) = \bigwedge_{j=0}^{N-1} x(t_j) = \bigwedge_{j=0}^{N-1} x(t - j\Delta t); \qquad (7.3.8a)$$
$$\tilde y(t) = \bigvee_{j=0}^{N-1}\tilde x(t_j) = \bigvee_{j=0}^{N-1}\tilde x(t - j\Delta t). \qquad (7.3.8b)$$
The derivatives of the functions $|\bigwedge_{j=0}^{N-1}[x(t_j) - \overset{\wedge}{y}(t)]|$ and $|\bigvee_{j=0}^{N-1}[\tilde x(t_j) - \overset{\vee}{y}(t)]|$, according to the relationships (7.3.7a) and (7.3.7b), change their sign from minus to plus at the points y(t) and ỹ(t). Thus, the extrema determined by the formulas (7.3.8a), (7.3.8b) are the minimum points of these functions and, respectively, the solutions of the equations (7.3.4a), (7.3.4b) defining these criteria of estimation (signal filtering).
The conditions of the criterion (7.3.4d) of the system (7.3.4), n(t) ≡ 0, ∆t → 0, imply the equations of observations (7.3.2a,b) of the following form: x(tj) = s(tj) ∨ 0, x̃(tj) = s(tj) ∧ 0, and according to the relationships (7.3.8a), (7.3.8b), the identities hold:
$$y(t)\big|_{n(t)\equiv 0,\,\Delta t\to 0} = \bigwedge_{j=0}^{N-1}[s(t_j) \vee 0] = s(t) \vee 0; \qquad (7.3.9a)$$
$$\tilde y(t)\big|_{n(t)\equiv 0,\,\Delta t\to 0} = \bigvee_{j=0}^{N-1}[s(t_j) \wedge 0] = s(t) \wedge 0, \qquad (7.3.9b)$$
where tj = t − j∆t, j = 0, 1, . . . , N − 1.
To provide the criterion (7.3.4d) of the system (7.3.4) on joint fulfillment of the identities (7.3.9a), (7.3.9b), and (7.3.4c), it is necessary and sufficient that the coupling equation (7.3.4c) between the stochastic process w(t) and the pair of the results of primary processing y(t), ỹ(t) has the form:
$$w(t)\big|_{n(t)\equiv 0,\,\Delta t\to 0} = y(t)\big|_{n(t)\equiv 0,\,\Delta t\to 0} \vee 0 + \tilde y(t)\big|_{n(t)\equiv 0,\,\Delta t\to 0} \wedge 0 =$$
$$= s(t) \vee 0 \vee 0 + s(t) \wedge 0 \wedge 0 = s(t) \vee 0 + s(t) \wedge 0 = s(t). \qquad (7.3.10)$$
Based on the expression (7.3.10), the metric $\sum_{j=0}^{N-1}|w(t_j) - s(t_j)|$ that has to be minimized according to the criterion (7.3.4d) is minimal and equal to zero.
It is obvious that the coupling equation (7.3.4c) has to be invariant with respect to the presence (absence) of interference (noise) n(t), so the final coupling equation can be written on the basis of the identity (7.3.10) in the form:
$$w(t) = y(t) \vee 0 + \tilde y(t) \wedge 0 = y_+(t) + \tilde y_-(t). \qquad (7.3.11)$$
Thus, the identity (7.3.11) defines the kind of coupling equation (7.3.4c) obtained on the basis of joint fulfillment of criteria (7.3.4a), (7.3.4b), and (7.3.4d).
The solution u(t) of the relationships (7.3.5a) through (7.3.5c) establishing the criterion of the choice of the functional transformation of the process w(t) is the function L[w(t)] defining the gain characteristic of the limiter:
$$u(t) = L[w(t)] = \begin{cases} a, & w(t) \ge a; \\ w(t), & -a < w(t) < a; \\ -a, & w(t) \le -a, \end{cases} \qquad (7.3.12)$$
whose linear part provides the condition (7.3.5b), and whose clipping part (above and below) provides the minimization of the second moment of the process u(t) according to the criterion (7.3.5a).
The relationship (7.3.12) can be written in terms of the L-group L(+, ∨, ∧) in the form:
$$u(t) = L[w(t)] = [(w(t) \wedge a) \vee 0] + [(w(t) \vee (-a)) \wedge 0], \qquad (7.3.12a)$$
where, in the case of a Gaussian signal s(t) with a variance Ds smaller than D (Ds < D), the limiter parameter a can be chosen proportional to $\sqrt{D}$: a² ∼ D, providing that the equation (7.3.5b) holds.
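The equivalence of the L-group form (7.3.12a) and the piecewise gain characteristic (7.3.12) is easy to verify numerically; the limiter level below is an arbitrary assumption:

```python
import numpy as np

a = 1.5
w = np.linspace(-4.0, 4.0, 1001)
u = np.maximum(np.minimum(w, a), 0.0) + np.minimum(np.maximum(w, -a), 0.0)  # (7.3.12a)
assert np.allclose(u, np.clip(w, -a, a))                                    # (7.3.12)
```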
We can finally obtain the estimator ŝ(t) = v(t) of the signal s(t) in the output of the filtering unit by solving the minimization equation on the basis of criterion (7.3.6a). We find the extremum of the function $\sum_{k=0}^{M-1}|u(t_k) - v^{\circ}(t)|$, setting its derivative with respect to v°(t) to zero:
$$d\Big\{\sum_{k=0}^{M-1}|u(t_k) - v^{\circ}(t)|\Big\}\Big/dv^{\circ}(t) = -\sum_{k=0}^{M-1}\mathrm{sign}[u(t_k) - v^{\circ}(t)] = 0.$$
The solution of the last equation is the value of the estimator v(t) in the form of the sample median med{∗} of a collection of the samples {u(tk)} of the stochastic process u(t):
$$v(t) = \operatorname*{med}_{t_k\in\tilde T}\{u(t_k)\}, \qquad (7.3.13)$$
where $t_k = t - \frac{k}{M}\Delta\tilde T$, k = 0, 1, . . . , M − 1; tk ∈ T̃ = ]t − ∆T̃, t]; T̃ is the interval in which smoothing of the stochastic process u(t) is realized.
The derivative of the function $\sum_{k=0}^{M-1}|u(t_k) - v^{\circ}(t)|$ at the point v(t) changes its sign from minus to plus. Thus, the extremum determined by the function (7.3.13) is a minimum of this function, and correspondingly, the solution of Equation (7.3.6a) determining this estimation criterion.
Thus, summarizing the relationships (7.3.13), (7.3.11), (7.3.11a,b), (7.3.8a,b),
one can draw a conclusion that the estimator ŝ(t) = v (t) of the signal s(t), extracted
in the presence of interference (noise) n(t), is the function of smoothing of the
stochastic process u(t) obtained by limiting the process w(t) that is the sum of
the results y (t) and ỹ (t) of the corresponding primary processing of the observed
stochastic processes x(t) and x̃(t) in the interval T ∗ = [t − (N − 1)∆t, t].
A block diagram of the processing unit, according to the general algorithm
Ext[s(t)], its stages P F [s(t)], IP [s(t)], Sm[s(t)], and the relationships (7.3.8a,b),
(7.3.11a,b), (7.3.11), (7.3.13), includes two processing channels, each containing
transversal filters; units of evaluation of positive y+ (t) and negative ỹ− (t) parts of
the processes y (t) and ỹ (t), respectively; an adder uniting the results of signal pro-
cessing in both channels; a limiter L[w(t)], and median filter (MF) (see Fig. 7.3.1).
FIGURE 7.3.1 Block diagram of processing unit realizing general algorithm Ext[s(t)]
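A minimal discrete-time sketch of the whole unit of Fig. 7.3.1, assuming sampled observations x = s ∨ n and x̃ = s ∧ n (7.3.2a,b); the test signal, noise level, window sizes N, M, and limiter level a are illustrative choices, not values prescribed by the text:

```python
import numpy as np

def extract(x, xt, N, M, a):
    """Ext[s(t)] pipeline: meet/join primary filtering (7.3.8a,b), uniting of the
    positive/negative parts (7.3.11), limiting (7.3.12), median smoothing (7.3.13)."""
    T = len(x)
    y  = np.array([x[max(0, i - N + 1):i + 1].min() for i in range(T)])    # (7.3.8a)
    yt = np.array([xt[max(0, i - N + 1):i + 1].max() for i in range(T)])   # (7.3.8b)
    w = np.maximum(y, 0.0) + np.minimum(yt, 0.0)                           # (7.3.11)
    u = np.clip(w, -a, a)                                                  # (7.3.12)
    return np.array([np.median(u[max(0, i - M + 1):i + 1]) for i in range(T)])  # (7.3.13)

rng = np.random.default_rng(4)
t = np.arange(4000) * 1e-3
s = np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.3 * t)    # toy slowly varying signal
n = rng.normal(0.0, 3.0, t.size)                      # strong quasi-white noise
v = extract(np.maximum(s, n), np.minimum(s, n), N=40, M=25, a=1.2)
print(np.mean((v - s) ** 2))                          # small residual error of extraction
```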
$$F_y(z_1, z_2/s^*) = \{1 - [1 - F_n(z_1, z_2)]^N\}\cdot 1(z_1 - s^*(t_1))\cdot 1(z_2 - s^*(t_2)). \qquad (7.3.17)$$
According to its definition, the PDF py(z1, z2/s∗) of instantaneous values y(t1,2) of the statistics (7.3.8a) is a mixed derivative of the CDF Fy(z1, z2/s∗), so the expression evaluating it can be written in the following form:
$$p_y(z_1, z_2/s^*) = \frac{\partial^2 F_y(z_1, z_2/s^*)}{\partial z_1\partial z_2} = \{1 - [1 - F_n(z_1, z_2)]^N\}\,\delta(z_1 - s^*(t_1))\cdot\delta(z_2 - s^*(t_2)) +$$
$$+\,N[1 - F_n(z_1, z_2)]^{N-1}\{F_n(z_2/z_1)p_n(z_1)1(z_1 - s^*(t_1))\cdot\delta(z_2 - s^*(t_2)) + F_n(z_1/z_2)p_n(z_2)\delta(z_1 - s^*(t_1))\cdot 1(z_2 - s^*(t_2))\} +$$
$$+\,N[1 - F_n(z_1, z_2)]^{N-1}\Big\{\frac{N-1}{1 - F_n(z_1, z_2)}F_n(z_2/z_1)F_n(z_1/z_2)p_n(z_1)p_n(z_2) + p_n(z_1, z_2)\Big\}\cdot 1(z_1 - s^*(t_1))\cdot 1(z_2 - s^*(t_2)), \qquad (7.3.18)$$
where δ(z) is the Dirac delta function; Fn(z1/z2) and Fn(z2/z1) are conditional CDFs of instantaneous values of interference (noise) n(t1,2), $F_n(z_i/z_j) = \int_{-\infty}^{z_i} p_n(x_i, z_j)\,dx_i / p_n(z_j)$; pn(z1,2) is a univariate PDF of instantaneous values of interference (noise) n(t1,2).
With the help of analogous reasoning, one can obtain the univariate conditional CDF Fy(z/s∗), and also the univariate conditional PDF py(z/s∗) of the instantaneous value y(t) of the statistics (7.3.8a), where P(Cc) and P(Ce) are the probabilities of correct and erroneous formation of the signal estimator ŝ(t), respectively. For negative instantaneous values of the signal, s(t) < 0, the probability P(Cc) of correct formation of the estimator ŝ(t), according to (7.3.19), becomes smaller than the quantity 1 − 2⁻ᴺ.
To overcome this disadvantage (when s(t) < 0), the filtering unit shown in Fig. 7.3.1 provides further processing of only the nonnegative values y+(t) of the process y(t) in the output of the filter (7.3.11a): y+(t) = y(t) ∨ 0. Applying similar reasoning, one can elucidate the necessity of formation of the statistics ỹ−(t) determined by the expression (7.3.11b): ỹ−(t) = ỹ(t) ∧ 0.
The realization s∗ (t) of useful signal s(t) acting in the input of this filtering
unit, and also possible realization w∗ (t) of the process w(t) in the output of the
adder, obtained via statistical modeling, are shown in Fig. 7.3.2.
FIGURE 7.3.2 Realization s∗ (t) of useful signal s(t) acting in input of filtering unit and
possible realization w∗ (t) of process w(t) in output of adder
written on the basis of the PDF (7.3.18) of the process y(t) passed through the signal limiter in the following form:
$$p_w(z_1, z_2/s^*) = \{1 - [1 - F_n(|z_1|, |z_2|)]^N\}\,\delta(z_1 - s^*(t_1))\cdot\delta(z_2 - s^*(t_2)) +$$
$$+\,N[1 - F_n(|z_1|, |z_2|)]^{N-1}\{F_n(|z_2|/z_1)p_n(z_1)h_s(z_1)\delta(z_2 - s^*(t_2)) + F_n(|z_1|/z_2)p_n(z_2)h_s(z_2)\delta(z_1 - s^*(t_1))\} +$$
$$+\,N[1 - F_n(|z_1|, |z_2|)]^{N-1}\Big\{\frac{N-1}{1 - F_n(|z_1|, |z_2|)}F_n(|z_2|/z_1)F_n(|z_1|/z_2)p_n(z_1)p_n(z_2) + p_n(z_1, z_2)\Big\}\cdot h_s(z_1)h_s(z_2), \qquad (7.3.20)$$
where Fn(|z2|/z1), Fn(|z1|/z2) are the conditional CDFs of instantaneous values of interference (noise) n(t1,2): $F_n(|z_i|/z_j) = \int_{-\infty}^{|z_i|} p_n(x_i, z_j)\,dx_i / p_n(z_j)$; pn(z1,2) is the univariate PDF of instantaneous values of interference (noise) n(t1,2); hs(z1,2) is the function taking into account the sign of instantaneous values of the realization s∗(t1,2), equal to:
$$h_s(z_{1,2}) = [1 - \mathrm{sign}(s^*(t_{1,2}))]/2 + \mathrm{sign}(s^*(t_{1,2}))\cdot 1(z_{1,2} - s^*(t_{1,2})). \qquad (7.3.21)$$
The univariate conditional PDF pw(z/s∗) of instantaneous values of the resulting process w(t) in the output of the processing unit determined by the expression (7.3.11) can be written on the basis of the formula (7.3.19) determining the PDF py(z/s∗) of the process y(t) (7.3.8a):
$$p_w(z/s^*) = P^*(C_c)\cdot\delta(z - s^*(t)) + P^*(C_e)\,p_0(z); \qquad (7.3.22)$$
$$P^*(C_c) = 1 - [1 - F_n(|s^*(t)|)]^N \ge 1 - 2^{-N}; \qquad (7.3.22a)$$
$$P^*(C_e) = [1 - F_n(|s^*(t)|)]^N \le 2^{-N}, \qquad (7.3.22b)$$
where p0(z) is the PDF of the noise component in the output of the processing unit, equal to:
$$p_0(z) = p^*(z)\cdot h_s(z); \qquad (7.3.22c)$$
$$p^*(z) = N\cdot p_n(z)\,[1 - F_n(|z|)]^{N-1}/P^*(C_e),$$
and hs(z) is the function determined by the formula (7.3.21) that takes into account the sign of s∗(t).
The analysis of PDFs (7.3.20) and (7.3.22) implies that the process w(t) in
the output of the adder in the presence of the signal (s(t) 6= 0) is non-stationary;
whereas if the signal is absent (s(t) = 0), then this process is stationary. In the
absence of a signal, the stochastic process w(t) possesses an ergodic property with
respect to the univariate conditional PDF pw (z/s∗ ) (7.3.22). The random variables
w(t) and w(t + τ ) are independent under condition τ → ∞, since the samples n(t)
and n(t + τ ) of quasi-white Gaussian noise with normalized correlation function
(7.3.14), acting in the input of the processing unit, are asymptotically independent;
the condition holds [115]:
$$\lim_{\tau\to\infty} p_w(z, z;\, \tau/s^*) = p_w^2(z/s^*).$$
where $\Phi^{+}_{w_n}(u) = P^*(C_c) + P^*(C_e)\,\Phi_0(u)$; $\Phi_0(u) = \int_{-\infty}^{\infty} p_0(z + s^*(t))\,e^{juz}\,dz$.
The relationship (7.3.24) implies that the CF $\Phi^{+}_{w}(u)$ of the stochastic process w+(t) = w(t) ∨ 0 = y+(t) is the product of two CFs, $e^{jus^*(t)}$ and $\Phi^{+}_{w_n}(u)$, so the process w+(t) in the output of the adder can be represented as the sum of two independent processes, i.e., the signal ws+(t) and noise wn+(t) components: w+(t) = ws+(t) + wn+(t).
The signal component ws+ (t) is a stochastic process whose instantaneous values
∗
of its realization ws+ (t) take values equal to (1) the least positive instantaneous
value of the signal from the set {s(tj )}, tj = t − j ∆t, j = 0, 1, . . . , N − 1 with
the probability (7.3.22a) on s(t) ≥ 0 or (2) zero with the probability (7.3.22b) on
s(t) < 0. The noise component wn+ (t) is a stochastic process whose instantaneous
values of its realization take values equal to (1) the least positive instantaneous
value of interference (noise) from the set {n(tj )}, tj = t − j ∆t, j = 0, 1, . . . , N − 1
with the probability 2−N or (2) zero with the probability 1 − 2−N .
Similar reasoning can be extended to the negative part w−(t) = w(t) ∧ 0 = ỹ−(t) of the stochastic process w(t) in the output of the adder, which also can be represented as the sum of two independent processes, i.e., the signal ws−(t) and the noise wn−(t) components: w−(t) = ws−(t) + wn−(t). Thus, the analysis of the distribution pw(z/s∗) (7.3.22) of the process w(t) in the output of the adder implies that this process can be represented in the form of the signal and noise components:
In view of the symmetry of the PDF pw(z/s∗) on s∗(t) = 0, the expectation of the process wn(t) in the output of the processing unit is equal to zero. Then Rwn(τ) is determined on the basis of the PDF pw(z1, z2/s∗) (7.3.20) by the expression:
$$R_{w_n}(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} z_1 z_2\,p_w(z_1, z_2/s^*)\,dz_1\,dz_2\,\Big|_{s^*(t)=0}. \qquad (7.3.26)$$
When N = 10, the correlation of instantaneous values of the noise component wn(t) decreases to 10⁻⁸ over the interval $\tau = 5/(4f_{n,\max})$. This suggests that the rate of their decay is considerable and that the filter possesses asymptotically whitening properties.
Next we determine the mean number N⁺(H = 0) of the positive overshoots of the stochastic process w(t) in the output of the adder per time unit at the level H = 0 in the absence of the signal (s(t) = 0). Generally, for the stationary stochastic process w(t), the quantity N⁺(H) is determined by the formula [116], [264]:
$$N^{+}(H) = \int_{0}^{\infty} w'\,p(H, w')\,dw'\,\Big|_{s(t)=0}, \qquad (7.3.30)$$
the adder per time unit at the level H = 0 in the absence of the signal (s(t) = 0) is determined by the expression:
$$N^{+}(H = 0) = N\cdot 2^{-(N-1)} f_{n,\max}/\sqrt{3}, \qquad (7.3.31)$$
where P(w(t) > 0), according to the expression (7.3.22b), is equal to P∗(Ce) = 2⁻ᴺ. Then the average duration τ̄(0) of positive overshoots of the process w(t) with respect to the level H = 0, taking into account the expression (7.3.31), is determined by the formula:
$$\bar\tau(0) = \sqrt{3}/(2N f_{n,\max}). \qquad (7.3.33)$$
The noise component wn(t) of the signal w(t) in the output of the adder, according to (7.3.25b), is equal to wn(t) = wn+(t) + wn−(t). The component wn+(t) (wn−(t)) is a pulse stochastic process with an average duration of noise overshoots τ̄(0) that is small compared to the correlation interval of interference (noise) ∆t = 1/2fn,max: $\bar\tau(0) = \sqrt{3}\,\Delta t/N$. The distribution of instantaneous values of noise overshoots of the noise component wn(t) is characterized by the PDF p0(z) (7.3.22c). The component wn+(t) (wn−(t)) can be described by a homogeneous Poisson process with a constant overshoot flow density λ = N⁺(H = 0) determined by the relationship (7.3.31).
While applying the known smoothing procedures [263], [265], [266] built upon the basis of median estimators, the variance $D_{v_n}$ of the noise component vn(t) of the process v(t) in the output of the filter (7.3.13) (under the condition that the signal s(t) is a Gaussian narrowband process, and interference (noise) n(t) is very strong) can be reduced to the quantity:
$$D_{v_n} \le D_{w_n}\int_{\Delta\tilde T}^{\infty} p_{w_n}(\tau)\,d\tau = a^2\exp\Big\{-\frac{\Delta\tilde T^2 N^2 f_{n,\max}^2}{8\sqrt{\pi}}\Big\}, \qquad (7.3.34)$$
where $p_{w_n}(\tau)$ is the PDF of the duration of overshoots of the noise component wn(t) of the process w(t), which is approximated by the experimentally obtained dependence:
$$p_{w_n}(\tau) = \frac{\tau}{D_{w_n,\tau}}\exp\Big\{-\frac{\tau^2}{2D_{w_n,\tau}}\Big\},$$
where $D_{w_n,\tau} = \sqrt{\pi}/(N^2 f_{n,\max}^2)$.
The formula (7.3.34) implies that the signal-to-noise ratio $q_{out}^2$ in the output of the filter is determined by the quantity:
$$q_{out}^2 = D_s/D_{v_n} \ge \frac{1}{16}\exp\Big\{\frac{\Delta\tilde T^2 N^2 f_{n,\max}^2}{8\sqrt{\pi}}\Big\}. \qquad (7.3.35)$$
The formula (7.3.34) also implies that the fluctuation component of the signal estimator error (relative filtering error) δf is bounded by the quantity:
$$\delta_f = \mathrm{M}\{(s(t) - v(t))^2\}/2D_s\,\big|_{s(t)\equiv 0} \le 8\exp\Big\{-\frac{\Delta\tilde T^2 N^2 f_{n,\max}^2}{8\sqrt{\pi}}\Big\}, \qquad (7.3.36)$$
where ∆T̃ is the length of the smoothing interval T̃ of the useful signal s(t).
Thus, the relative error δ of the estimator of Gaussian narrowband signal
(7.3.15) in the presence of interference (noise) under their interaction in the signal
space with lattice properties is bounded by the errors determined by the relation-
ships (7.3.36), (7.3.39), and (7.3.40):
information quantity contained in the estimator ŝ(t) are the same. Quality indices
of these estimators are the same too.
If, while extracting the stochastic signal s(t), the addressee does not know the modulating function M[∗, ∗], then, on the basis of the estimator ŝ(t) of the signal s(t), one can determine its envelope Eŝ(t), phase Φŝ(t), and instantaneous frequency ωŝ(t) (see Fig. 7.3.3):
$$E_{\hat s}(t) = \sqrt{\hat s^2(t) + \tilde s^2(t)}, \qquad (7.3.44)$$
where $\tilde s(t) = \mathrm{H}[\hat s(t)] = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\hat s(\tau)\,d\tau}{t-\tau}$ is the Hilbert transform of the estimator ŝ(t);
$$\omega_{\hat s}(t) = \Phi'_{\hat s}(t) = [\tilde s'(t)\hat s(t) - \tilde s(t)\hat s'(t)]/E_{\hat s}^2(t). \qquad (7.3.46)$$
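A sketch of (7.3.44) and (7.3.46) in discrete time, using the analytic signal from scipy.signal.hilbert; the test estimator ŝ(t) and the sampling rate are assumptions of the illustration:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
s_hat = (1.0 + 0.5 * np.cos(2 * np.pi * 2.0 * t)) * np.sin(2 * np.pi * 50.0 * t)

z = hilbert(s_hat)                            # analytic signal s^ + j*H[s^]
E = np.abs(z)                                 # envelope (7.3.44)
phase = np.unwrap(np.angle(z))                # full phase of the analytic signal
inst_f = np.diff(phase) * fs / (2 * np.pi)    # discrete analog of (7.3.46), in Hz
print(E.max(), inst_f[100:105])               # envelope ~1.5, frequency near 50 Hz
```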
with the same or almost the same quality regardless of their energetic
relationships. The relation of the prior information quantity contained in
these signals does not matter.
3. The obtained signal extraction unit operating in the space with L-group
properties almost does not distort the structure of the received useful
signal regardless of the conditions of parametric prior uncertainty. High
quality of the obtained signal estimator ŝ(t) allows solving the main signal
processing problems.
4. The essential distinctions of signal extracting quality between linear space
LS (+) and signal space L(+, ∨, ∧) with L-group properties may be ex-
plained by the fundamental differences in the types of interactions between
the useful signal s(t) and interference (noise) n(t) within these spaces: ad-
ditive s(t) + n(t), and interactions in the form of join s(t) ∨ n(t) and meet
s(t) ∧ n(t), respectively.
where Ts = [t0 , t0 + T ] is the domain of definition of the signal s(t); t0 is the known
time of arrival of the signal s(t); T is a duration of the signal s(t); θ is an unknown
nonrandom parameter taking value θ = 0 (the signal is absent) or θ = 1 (the signal
is present).
Thus, deterministic signal detection is equivalent to the estimation of an unknown nonrandom parameter θ. For a solution, one has to obtain both the algorithm and the block diagram of the optimal signal detector, and its quality indices of detection (the conditional probabilities of correct detection and false alarm) have to be determined.
Assume that interference (noise) n(t) is characterized by an arbitrary distribu-
tion with an even univariate probability density function pn (x) = pn (−x).
To synthesize the detectors in linear signal space LS (+), i.e., when the interac-
tion equality x(t) = θs(t)+ n(t) holds, θ ∈ {0, 1}, the part of the theory of statistical
inference called statistical hypothesis testing is used. The strategies of decision making (within signal detection problem solving) considered in the literature suppose the computation of a likelihood ratio, which in turn requires determining a likelihood function.
Such a function is determined by multivariate probability density function of inter-
ference (noise) n(t). During signal detection problem solving in linear signal space
LS (+), i.e., when the interaction equality x(t) = s(t) + n(t) holds, the trick of the
variable change is used to obtain likelihood function: n(t) = x(t) − s(t) [254], [163].
However, it is impossible to use this subterfuge to determine likelihood ratio un-
der interaction (7.4.1) between the signal and interference (noise), inasmuch as the
equation is unsolvable with respect to the variable n(t) because the lattice L(∨, ∧)
does not possess the group properties; another approach is necessary.
As applied to the case (7.4.1), solving the signal detection problem in the presence of interference (noise) n(t) with an arbitrary distribution lies in the formation of an estimator ŝ(t) of the received signal s(t) which (on the basis of the chosen criteria) would allow the observer to distinguish two possible situations of signal receiving determined by the parameter θ. We formulate the problem of detection Det[s(t)] of the signal s(t) on the basis of minimization of the squared metric $\int_{T_s}|y(t) - s(t)|^2\,dt$ between the function y(t) = F[x(t)] of the observed process x(t) and the signal s(t) under the condition that the observed process x(t) includes the signal θs(t) = s(t) (θ = 1): x(t) = s(t) ∨ n(t):
$$Det[s(t)] = \begin{cases} y(t) = F[x(t)] = \hat s(t); & \text{(a)} \\[2pt] \int_{T_s}|y(t) - s(t)|^2\,dt\,\big|_{\theta=1} \to \min\limits_{y(t)\in Y}; & \text{(b)} \\[2pt] \hat\theta = 1\Big[\max\limits_{t\in T_s}\Big[\int_{T_s} y(t)s(t)\,dt\Big]\Big|_{\theta\in\{0,1\}} - l_0\Big]; & \text{(c)} \\[2pt] \int_{T_s} y(t)s(t)\,dt\,\Big|_{\theta=1} \ne \int_{T_s} y(t)s(t)\,dt\,\Big|_{\theta=0}, & \text{(d)} \end{cases} \qquad (7.4.2)$$
The fact that the minimum of the squared metric (7.4.2b) between the function y(t) = F[x(t)] and the signal s(t) in its presence in the observed process x(t): x(t) = s(t) ∨ n(t) is attained follows directly from the absorption axiom of the lattice L(∨, ∧) (see page 269) contained in the third part of the multilink identity:
$$y(t) = s(t) \wedge x(t) = s(t) \wedge [s(t) \vee n(t)] = s(t). \qquad (7.4.3)$$
The identity (7.4.3) directly implies, first, the kind of function F[x(t)] from the relation (7.4.2a) of the system (7.4.2):
$$y(t) = F[x(t)] = s(t) \wedge x(t). \qquad (7.4.4)$$
The identity (7.4.4) implies that in the presence of the signal s(t) in the observed process x(t) = s(t) ∨ n(t), at the instant t = t0 + T, the correlation integral takes a maximum value equal to the energy E of the signal s(t):
$$\int_{t_0}^{t_0+T} y(t)s(t)\,dt\,\Big|_{\theta=1} = \int_{t_0}^{t_0+T} s^2(t)\,dt = E. \qquad (7.4.5)$$
The set of values of the estimator y(t)|θ=0, according to the identity (7.4.4), is determined by the expression:
$$y(t)\big|_{\theta=0} = s(t) \wedge [0 \vee n(t)]. \qquad (7.4.6)$$
The identity (7.4.6) implies that under joint fulfillment of the inequalities s(t) > 0 and n(t) ≤ 0, the estimator y(t)|θ=0 takes values equal to zero:
$$y(t)\big|_{\theta=0} = 0,$$
while under joint fulfillment of the inequalities s(t) > 0 and n(t) > 0, and also when the inequality s(t) ≤ 0 holds, the estimator y(t)|θ=0 takes values determined by the instantaneous values of the useful signal s(t), so that at the instant t = t0 + T the correlation integral takes the value:
$$\int_{t_0}^{t_0+T} y(t)s(t)\,dt\,\Big|_{\theta=0} = \frac{3E}{4}. \qquad (7.4.7)$$
The identities (7.4.5) and (7.4.7) confirm fulfillment of the constraint (7.4.2d) of the system (7.4.2), and specify the upper and lower bounds for the threshold level l0, respectively:
$$3E/4 < l_0 < E. \qquad (7.4.8)$$
Thus, summarizing the relationships (7.4.2a) through (7.4.2c) of the system (7.4.2), one can draw the conclusion that the deterministic signal detector has to form the estimator y(t) = ŝ(t) that, according to (7.4.4), is equal to ŝ(t) = s(t) ∧ x(t); it also has to compute the correlation integral $\int_{T_s} y(t)s(t)\,dt$ in the interval Ts = [t0, t0 + T] and to determine the presence or absence of the useful signal s(t). According to the equation (7.4.2c) of the system (7.4.2), the decision θ̂ = 1 concerning the presence of the signal s(t) (θ = 1) in the observed process x(t) is made if, at the instant t = t0 + T, the maximum value of the correlation integral $\int_{T_s} y(t)s(t)\,dt\,|_{\theta=1} = E$ exceeds the threshold level l0. The decision θ̂ = 0 concerning the absence of the useful signal s(t) (θ = 0) in the observed process x(t) is made if the maximum value of the correlation integral $\int_{T_s} y(t)s(t)\,dt\,|_{\theta=0} = 3E/4$ observed at the instant t = t0 + T does not exceed the threshold level l0.
The block diagram of a deterministic signal detection unit synthesized in signal space with lattice properties includes the signal estimator ŝ(t) = y(t) formation unit; the correlation integral $\int_{T_s} y(t)s(t)\,dt$ computing unit; the strobing circuit (SC); and the decision gate (DG) (see Fig. 7.4.1). The correlation integral computing unit consists of a multiplier and an integrator.
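A sketch of this detection unit under the interaction x(t) = θs(t) ∨ n(t): the estimator y = s ∧ x of (7.4.4), the correlation integral over Ts, and a threshold chosen inside the interval (7.4.8). The Laplacian interference below stands in for an arbitrary even-PDF distribution; all numerical values are assumptions of the demonstration:

```python
import numpy as np

rng = np.random.default_rng(5)
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
s = np.sin(2 * np.pi * 5.0 * t)               # deterministic useful signal
E = np.sum(s ** 2) * dt                       # signal energy
l0 = 0.9 * E                                  # threshold: 3E/4 < l0 < E  (7.4.8)

def detect(theta):
    n = rng.laplace(0.0, 2.0, t.size)         # arbitrary even-PDF interference
    x = np.maximum(theta * s, n)              # x = theta*s v n   (7.4.1)
    y = np.minimum(s, x)                      # y = s ^ x         (7.4.4)
    corr = np.sum(y * s) * dt                 # correlation integral at t0 + T
    return int(corr > l0)                     # decision rule (7.4.2c)

print(detect(1), detect(0))   # prints 1 0: for theta = 1 the integral equals E exactly
```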
We now analyze metric relations between the signals θs(t) = s(t) (θ = 1), θs(t) = 0 (θ = 0) and their estimators ŝ(t)|θ=1 = y(t)|θ=1, ŝ(t)|θ=0 = y(t)|θ=0. For the signals θs(t) = s(t), θs(t) = 0 and their estimators y(t)|θ=1, y(t)|θ=0, the following metric relationships hold:
$$\|0 - y(t)|_{\theta=1}\|^2 + \|s(t) - y(t)|_{\theta=1}\|^2 = \|0 - s(t)\|^2 = \|s(t)\|^2; \qquad (7.4.9a)$$
$$\|0 - y(t)|_{\theta=0}\|^2 + \|s(t) - y(t)|_{\theta=0}\|^2 = \|0 - s(t)\|^2 = \|s(t)\|^2, \qquad (7.4.9b)$$
where $\|a(t) - b(t)\|^2 = \|a(t)\|^2 + \|b(t)\|^2 - 2(a(t), b(t))$ is a squared metric between the functions a(t), b(t) in Hilbert space HS; $\|a(t)\|^2$ is a squared norm of a function a(t) in Hilbert space HS; $(a(t), b(t)) = \int_{T^*} a(t)b(t)\,dt$ is a scalar product of the functions a(t) and b(t) in Hilbert space HS; T∗ is a domain of definition of the functions a(t) and b(t).
The relationships (7.4.4) and (7.4.9a) imply that for an arbitrary signal-to-interference (signal-to-noise) ratio in the input of a detection unit, the cross-correlation coefficient ρ[s(t), y(t)|θ=1] between the signal s(t) and the estimator y(t)|θ=1 of the signal s(t), according to their identity (y(t)|θ=1 = s(t)), is equal to 1:
$$\rho[s(t), y(t)|_{\theta=1}] = 1,$$
and the squared metrics taken from (7.4.9a) are determined by the following relationships:
$$\|s(t) - y(t)|_{\theta=1}\|^2 = 0; \quad \|0 - y(t)|_{\theta=1}\|^2 = \|0 - s(t)\|^2 = \|s(t)\|^2 = E,$$
so that the two situations of signal receiving are distinguished by their amplitude without any error. The identity (7.4.5) implies that in the presence of the signal s(t) in x(t) = s(t) ∨ n(t) (θ = 1), in the output of the integrator at the instant t = t0 + jT, j = 1, 3, the maximum value of the correlation integral (7.4.5) is equal to Eρ[s(t), y(t)|θ=1] = E. In the absence of the signal in the observed process x(t) = 0 ∨ n(t) (θ = 0), in the output of the integrator (see Fig. 7.4.1) at the instant t = t0 + jT, j = 2, 4, the value of the correlation integral is equal to Eρ[s(t), y(t)|θ=0] = 3E/4.
FIGURE 7.4.2 Signals z(t) and u(t) in outputs of correlation integral computing unit (dash line) and strobing circuit (solid line), respectively; strobing pulses v0(t) (dot line)
FIGURE 7.4.3 Estimator θ̂ of unknown nonrandom parameter θ characterizing presence (θ = 1) or absence (θ = 0) of useful signal s(t) in output of decision gate
Figure 7.4.3 shows the result of formation of the estimator θ̂ of unknown non-
random parameter θ, which characterizes the presence (θ = 1) or absence (θ = 0)
of the useful signal s(t) in the observed process x(t). The decision gate (DG)
(see Fig. 7.4.1) forms the estimator θ̂ according to the rule (7.4.2c) of the system
(7.4.2) by means of the comparison of the signal u(t) at the instants t = t0 + jT ,
j = 1, 2, 3, . . . with the threshold value l0 chosen according to two-sided inequality
(7.4.8).
Thus, regardless of the conditions of parametric and nonparametric prior un-
certainty and, respectively, independently of probabilistic-statistical properties of
interference (noise), the optimal deterministic signal detector in signal space with
lattice properties accurately detects the signals with the conditional probabilities
of the correct detection D = 1 and false alarm F = 0. Absolute values of quality
indices of signal detection D = 1, F = 0 are stipulated by the fact that the es-
timator y (t) = ŝ(t) of the received signal θs(t), θ ∈ {0, 1} formed in the input of
the correlation integral computing unit (see Fig. 7.4.1), regardless of instantaneous
values of interference (noise) n(t), can take only two values from a set {0, s(t)}.
The results of the investigation of algorithm and unit of deterministic signal
detection in signal space with lattice properties allow us to draw the following
conclusions.
is smoothing; they all form the processing stages of a general algorithm of useful
signal matched filtering (7.4.14).
The criteria of optimality determining every processing stage P F [s(t)], IP [s(t)],
Sm[s(t)] are united into the single systems:
$$PF[s(t)] = \begin{cases} y(t) = \arg\min\limits_{\overset{\wedge}{y}(t)\in Y;\ t, t_j\in T_{obs}} \Big|\bigwedge_{j=0}^{J-1}[x(t_j) - \overset{\wedge}{y}(t)]\Big|; & \text{(a)} \\[4pt] \tilde y(t) = \arg\min\limits_{\overset{\vee}{y}(t)\in \tilde Y;\ t, t_j\in T_{obs}} \Big|\bigvee_{j=0}^{J-1}[\tilde x(t_j) - \overset{\vee}{y}(t)]\Big|; & \text{(b)} \\[4pt] J = \arg\min\limits_{y(t)\in Y}\Big[\int_{T_{obs}}|y(t)|\,dt\Big]\Big|_{n(t)\equiv 0},\ \int_{T_{obs}}|y(t)|\,dt \ne 0; & \text{(c)} \\[4pt] w(t) = F[y(t), \tilde y(t)]; & \text{(d)} \\[4pt] \int_{[t_1 - T_0,\,t_1]}|w(t) - s(t)|\,dt\,\Big|_{n(t)\equiv 0} \to \min\limits_{w(t)\in W}, & \text{(e)} \end{cases} \qquad (7.4.15)$$
where y(t), ỹ(t) are the solution functions for minimization of the metrics between the observed statistical collections {x(tj)}, {x̃(tj)} and the optimization variables, i.e., the functions $\overset{\wedge}{y}(t)$, $\overset{\vee}{y}(t)$, respectively; w(t) is the function F[∗, ∗] uniting the results y(t), ỹ(t) of minimization of the functions of the observed collections {x(tj)} and {x̃(tj)}; Tobs is an observation interval of the signal; J is the number of samples of the stochastic processes x(t), x̃(t) used under processing, $J \in \mathbb{N}$; $\mathbb{N}$ is the set of natural numbers;
$$IP[s(t)] = \begin{cases} \mathrm{M}\{u^2(t)\}\,\big|_{s(t)\equiv 0} \to \min\limits_{L}; & \text{(a)} \\[2pt] \mathrm{M}\{[u(t) - s(t)]^2\}\,\big|_{n(t)\equiv 0} = \varepsilon; & \text{(b)} \\[2pt] u(t) = L[w(t)], & \text{(c)} \end{cases} \qquad (7.4.16)$$
where M{∗} is the symbol of mathematical expectation; L[w(t)] is a functional transformation of the process w(t) into the process u(t); ε is a constant that is some function of a power of the signal s(t);
$$Sm[s(t)] = \begin{cases} v(t) = \arg\min\limits_{v^{\circ}(t)\in V;\ t, t_k\in \tilde T}\ \sum_{k=0}^{M-1}|u(t_k) - v^{\circ}(t)|; & \text{(a)} \\[4pt] \Delta\tilde T:\ \delta_d(\Delta\tilde T) = \delta_{d,sm}; & \text{(b)} \\[4pt] M = \arg\max\limits_{M'\in\mathbb{N}\cap[M^*,\infty[}[\delta_f(M')]\,\big|_{M^*:\,\delta_f(M^*) = \delta_{f,sm}}, & \text{(c)} \end{cases} \qquad (7.4.17)$$
where v(t) = ŝ(t) is the result of filtering (the estimator ŝ(t) of the signal s(t)) that is the solution of the problem of minimizing the metric $\sum_{k=0}^{M-1}|u(t_k) - v^{\circ}(t)|$ between the instantaneous values of the stochastic process u(t) and the optimization variable, i.e., the function v°(t); $t_k = t - \frac{k}{M}\Delta\tilde T$, k = 0, 1, . . . , M − 1, tk ∈ T̃ = ]t − ∆T̃, t]; T̃ is the interval in which smoothing of the stochastic process u(t) is realized; $M \in \mathbb{N}$, $\mathbb{N}$ is the set of natural numbers; M is the number of the samples of the stochastic process u(t) used under smoothing on the interval T̃; δd(∆T̃) and δf(M) are relative dynamic and fluctuation errors of smoothing as the dependences on the length ∆T̃ of the smoothing interval T̃ and the number of samples M, respectively.
$$Det[s(t)]/Est[t_1] = \begin{cases} E_v\big(\hat t_1 - \frac{T_0}{2}\big) \underset{d_0}{\overset{d_1}{\gtrless}} l_0(F); & \text{(a)} \\[4pt] \hat t_1 = \arg\min\limits_{\varphi\in\Phi_1\vee\Phi_2;\ t^{\circ}\in T_{obs}} \mathrm{M}_{\varphi}\{(t_1 - t^{\circ})^2\}\,\big|_{n(t)\equiv 0}, & \text{(b)} \end{cases} \qquad (7.4.18)$$
where $E_v(\hat t_1 - \frac{T_0}{2})$ is an instantaneous value of the envelope Ev(t) of the estimator v(t) = ŝ(t) of useful signal s(t) at the instant $t = \hat t_1 - \frac{T_0}{2}$: $E_v(t) = \sqrt{v^2(t) + v_H^2(t)}$; vH(t) = H[v(t)] is the Hilbert transform; d1, d0 are the decisions made concerning the true value of an unknown nonrandom parameter θ, θ ∈ {0, 1}; l0(F) is some threshold level as a function of a given conditional probability of false alarm F; t̂1 is the estimator of the time of signal ending t1; Mϕ{(t1 − t°)²} is a mean squared difference between the true value of the time of signal ending t1 and the optimization variable t°; Mϕ{∗} is a symbol of mathematical expectation with averaging over the initial phase ϕ of the signal; Φ1 and Φ2 are possible domains of definition of the initial phase ϕ of the signal: Φ1 = [−π/2, π/2] and Φ2 = [π/2, 3π/2].
We now explain the optimality criteria and single relationships appearing in
the systems (7.4.15), (7.4.16), (7.4.17) determining the successive stages P F [s(t)],
IP [s(t)], and Sm[s(t)] of the general algorithm of useful signal processing (7.4.14).
Equations (7.4.15a) and (7.4.15b) of the system (7.4.15) define the criteria of minimum of metrics between the statistical sets of the observations {x(tj)} and {x̃(tj)} and the results of primary processing y(t) and ỹ(t), respectively. The functions of metrics $|\bigwedge_{j=0}^{J-1}[x(t_j) - \overset{\wedge}{y}(t)]|$, $|\bigvee_{j=0}^{J-1}[\tilde x(t_j) - \overset{\vee}{y}(t)]|$ are chosen to provide the metric convergence and the convergence in probability to the useful signal s(t) of the sequences $y(t) = \bigwedge_{j=0}^{J-1} x(t_j)$ and $\tilde y(t) = \bigvee_{j=0}^{J-1}\tilde x(t_j)$ for the interactions of both kinds (7.4.13a) and (7.4.13b).
The relationship (7.4.15c) determines the criterion of the choice of a number J of the samples of the stochastic processes x(t) and x̃(t) used during signal processing based on the minimization of the norm $\int_{T_{obs}}|y(t)|\,dt$. The criterion (7.4.15c) is considered under two constraints: (1) interference (noise) is identically equal to zero: n(t) ≡ 0; (2) the norm $\int_{T_{obs}}|y(t)|\,dt$ of the function y(t) is not equal to zero: $\int_{T_{obs}}|y(t)|\,dt \ne 0$.
Equation (7.4.15e) defines the criterion of minimum of the metric between the useful signal s(t) and the function w(t), i.e., $\int_{[t_1 - T_0,\,t_1]}|w(t) - s(t)|\,dt\,|_{n(t)\equiv 0}$, in the interval [t1 − T0, t1]. This criterion establishes the kind of function F[y(t), ỹ(t)] (7.4.15d) uniting the results y(t) and ỹ(t) of primary processing of the observed processes x(t) and x̃(t). The criterion (7.4.15e) is considered when interference (noise) is identically equal to zero: n(t) ≡ 0.
The equations (7.4.16a), (7.4.16b), (7.4.16c) of the system (7.4.16) define the
criterion of the choice of functional transformation L[w(t)]. The equation (7.4.16a)
defines the criterion of minimum of the second moment of the process u(t) in the
absence of the useful signal s(t) in the input of signal processing unit. The equation
(7.4.16b) determines the quantity of the second moment of the difference between
the signals u(t) and s(t) in the absence of interference (noise) n(t) in the input of
a signal processing unit.
The equation (7.4.17a) of the system (7.4.17) defines the criterion of minimum of the metric $\sum_{k=0}^{M-1}|u(t_k) - v^{\circ}(t)|$ between the instantaneous values of the process u(t) and the optimization variable v°(t) within the smoothing interval T̃ = ]t − ∆T̃, t], requiring
the final processing of the signal u(t) in the form of smoothing under the condition
that the useful signal is identically equal to zero: s(t) ≡ 0. The relationship (7.4.17b)
establishes the rule of the choice of the quantity ∆T̃ of smoothing interval T̃ based
on a relative dynamic error δd,sm of smoothing. The equation (7.4.17c) defines the
criterion of the choice of a number M of the samples of stochastic process u(t)
based on a relative fluctuation error δf,sm of smoothing.
The equation (7.4.18a) of the system (7.4.18) defines the criterion of the decision d1 concerning the signal presence (if $E_v(\hat t_1 - \frac{T_0}{2}) > l_0(F)$) or the decision d0 regarding its absence (if $E_v(\hat t_1 - \frac{T_0}{2}) < l_0(F)$). The relationship (7.4.18b) defines the criterion of forming the estimator t̂1 of the time of signal ending t1 based on minimization of the mean squared difference Mϕ{(t1 − t°)²} between the true value of the time of signal ending t1 and the optimization variable t° under the conditions that averaging is realized over the initial phase ϕ of the signal taken in one of two intervals, Φ1 = [−π/2, π/2] or Φ2 = [π/2, 3π/2], and interference (noise) is absent: n(t) ≡ 0.
To solve the problem of minimizing the functions $|\bigwedge_{j=0}^{J-1}[x(t_j) - \overset{\wedge}{y}(t)]|$ (7.4.15a) and $|\bigvee_{j=0}^{J-1}[\tilde x(t_j) - \overset{\vee}{y}(t)]|$ (7.4.15b), we find the extrema of these functions, setting their derivatives with respect to $\overset{\wedge}{y}(t)$ and $\overset{\vee}{y}(t)$ to zero, respectively:
$$d\Big|\bigwedge_{j=0}^{J-1}[x(t_j) - \overset{\wedge}{y}(t)]\Big|\Big/d\overset{\wedge}{y}(t) = -\mathrm{sign}\Big(\bigwedge_{j=0}^{J-1}[x(t_j) - \overset{\wedge}{y}(t)]\Big) = 0; \qquad (7.4.19a)$$
$$d\Big|\bigvee_{j=0}^{J-1}[\tilde x(t_j) - \overset{\vee}{y}(t)]\Big|\Big/d\overset{\vee}{y}(t) = -\mathrm{sign}\Big(\bigvee_{j=0}^{J-1}[\tilde x(t_j) - \overset{\vee}{y}(t)]\Big) = 0. \qquad (7.4.19b)$$
The solutions of Equations (7.4.19a) and (7.4.19b) are the values of the estimators y(t) and ỹ(t) in the form of the meet and join of the observation results {x(tj)} and {x̃(tj)}, respectively:
$$y(t) = \bigwedge_{j=0}^{J-1} x(t_j) = \bigwedge_{j=0}^{J-1} x(t - jT_0); \qquad (7.4.20a)$$
$$\tilde y(t) = \bigvee_{j=0}^{J-1}\tilde x(t_j) = \bigvee_{j=0}^{J-1}\tilde x(t - jT_0). \qquad (7.4.20b)$$
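A numerical sketch of the periodic primary filtering (7.4.20a,b), together with the uniting of the positive and negative parts used later in (7.4.25): for a harmonic signal observed over J = Ns periods, the meet/join over the period shifts suppresses the interference almost completely; the signal, noise level, and J are assumptions of the demonstration:

```python
import numpy as np

rng = np.random.default_rng(6)
dt, T0, J = 1e-3, 0.1, 8
P = int(T0 / dt)                              # samples per period
t = np.arange(0.0, J * T0, dt)
s = np.sin(2 * np.pi * t / T0)                # periodic useful signal, J periods
n = rng.normal(0.0, 2.0, t.size)              # strong interference

X  = np.maximum(s, n).reshape(J, P)           # x(t - j*T0), one row per period
Xt = np.minimum(s, n).reshape(J, P)
y, yt = X.min(axis=0), Xt.max(axis=0)         # meet/join over shifts (7.4.20a,b)
w = np.maximum(y, 0.0) + np.minimum(yt, 0.0)  # uniting of positive/negative parts
print(np.mean(np.abs(w - s[:P])))             # small residual on one period
```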
The derivatives of the functions $|\bigwedge_{j=0}^{J-1}[x(t_j) - \overset{\wedge}{y}(t)]|$ and $|\bigvee_{j=0}^{J-1}[\tilde x(t_j) - \overset{\vee}{y}(t)]|$, according to the relationships (7.4.19a) and (7.4.19b), change their sign from minus to plus at the points y(t) and ỹ(t). Thus, the extrema determined by the formulas (7.4.20a) and (7.4.20b) are minimum points of these functions, and the solutions of the equations (7.4.15a) and (7.4.15b) determining these estimation criteria.
The condition n(t) ≡ 0 of the criterion (7.4.15c) of the system (7.4.15) implies the corresponding changes in the observation equations (7.4.13a,b): x(tj) = s(tj) ∨ 0, x̃(tj) = s(tj) ∧ 0. Thus, according to the relationships (7.4.20a) and (7.4.20b), the identities hold:
$$y(t)\big|_{n(t)\equiv 0} = \bigwedge_{j=0}^{J-1}[s(t_j) \vee 0], \qquad (7.4.21a)$$
and the norm appearing in the criterion (7.4.15c) takes the value:
$$\int_{T_{obs}}|y(t)|\,dt = \begin{cases} 4(N_s - J + 1)A/\pi, & J \le N_s; \\ 0, & J > N_s, \end{cases} \qquad (7.4.22)$$
so that the minimum nonzero value of the norm is reached at:
$$J = N_s. \qquad (7.4.23)$$
When the last identity holds, the processes determined by the relationships (7.4.21a) and (7.4.21b) in the interval [t1 − T0, t1] are, respectively, equal to:
$$y(t)\big|_{n(t)\equiv 0} = s(t) \vee 0; \qquad (7.4.24a)$$
$$\tilde y(t)\big|_{n(t)\equiv 0} = s(t) \wedge 0. \qquad (7.4.24b)$$
To realize the criterion (7.4.15e) of the system (7.4.15) under joint fulfillment of the identities (7.4.24a), (7.4.24b) and (7.4.15d), it is necessary and sufficient that the coupling equation (7.4.15d) between the stochastic process w(t) and the results of primary processing y(t) and ỹ(t) has the form:
$$w(t)\big|_{n(t)\equiv 0} = y(t)\big|_{n(t)\equiv 0} \vee 0 + \tilde y(t)\big|_{n(t)\equiv 0} \wedge 0 =$$
$$= s(t) \vee 0 \vee 0 + s(t) \wedge 0 \wedge 0 = s(t), \quad t \in [t_1 - T_0, t_1]. \qquad (7.4.25)$$
As follows from the expression (7.4.25), the metric $\int_{[t_1 - T_0,\,t_1]}|w(t) - s(t)|\,dt\,|_{n(t)\equiv 0}$ that must be minimized according to the criterion (7.4.15e) is minimal and equal to zero.
Obviously, the coupling equation (7.4.15d) has to be invariant with respect to the presence (absence) of interference (noise) n(t), so the final variant of the coupling equation can be written on the basis of (7.4.25) in the form:
$$w(t) = y(t) \vee 0 + \tilde y(t) \wedge 0. \qquad (7.4.26)$$
Thus, the identity (7.4.26) determines the kind of the coupling equation (7.4.15d) obtained from the joint fulfillment of the criteria (7.4.15a), (7.4.15b), and (7.4.15e).
The solution u(t) of the relationships (7.4.16a), (7.4.16b), (7.4.16c) of the system
(7.4.16), defining the criterion of the choice of functional transformation of the
process w(t), is the function L[w(t)] determining the gain characteristic of the
limiter:
u(t) = L[w(t)] = { a,      w(t) ≥ a;
                   w(t),  −a < w(t) < a;      (7.4.27)
                   −a,     w(t) ≤ −a,
its linear part provides the condition (7.4.16b); its clipping part (above and below)
provides minimization of the second moment of the process u(t), according to the
criterion (7.4.16a).
The relationship (7.4.27) can be written in terms of the L-group L(+, ∨, ∧) in the
form:

u(t) = L[w(t)] = [(w(t) ∧ a) ∨ 0] + [(w(t) ∨ (−a)) ∧ 0],   (7.4.27a)

where the limiter parameter a is chosen equal to a = sup ∆A = Amax, ∆A = ]0, Amax];
Amax is the maximum possible value of the useful signal amplitude.
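In code, the L-group form (7.4.27a) of the limiter reduces on the real line (where
∧ = min and ∨ = max) to ordinary clipping; a minimal sketch (names are ours):

    import numpy as np

    def limiter_lgroup(w, a):
        """Gain characteristic (7.4.27a): u = [(w ∧ a) ∨ 0] + [(w ∨ (−a)) ∧ 0]."""
        u = np.maximum(np.minimum(w, a), 0.0) + np.minimum(np.maximum(w, -a), 0.0)
        # on the reals this is exactly clipping to [−a, a]
        assert np.allclose(u, np.clip(w, -a, a))
        return u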
We obtain the estimator v(t) = ŝ(t) of the signal s(t) by solving the minimization
equation based on the criterion (7.4.17a) of the system (7.4.17). We find the extremum
of the function Σ_{k=0}^{M−1} |u(tk) − v◦(t)|, setting its derivative with respect to
v◦(t) to zero:

d{Σ_{k=0}^{M−1} |u(tk) − v◦(t)|}/dv◦(t) = −Σ_{k=0}^{M−1} sign[u(tk) − v◦(t)] = 0.

The solution of the last equation is the value of the estimator v(t) in the form of the
sample median med{∗} of the collection {u(tk)} of the stochastic process u(t):

v(t) = med_k {u(tk)},   (7.4.28)

where tk = t − (k/M)∆T̃, k = 0, 1, . . . , M − 1; tk ∈ T̃ = ]t − ∆T̃, t]; T̃ is the smoothing
interval of the stochastic process u(t); ∆T̃ is the length of the smoothing interval T̃.
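A minimal sketch of the median smoothing (7.4.28) with a causal window of M samples;
the discretization of the window is our assumption:

    import numpy as np

    def median_smoothing(u, M):
        """Sample-median estimator (7.4.28): v(t) = med_k{u(t_k)},
        t_k taken from the causal window ]t − ∆T~, t] of M samples."""
        v = np.empty_like(u, dtype=float)
        for i in range(u.size):
            v[i] = np.median(u[max(0, i - M + 1): i + 1])
        return v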
The derivative of the function Σ_{k=0}^{M−1} |u(tk) − v◦(t)| at the point v(t) changes its
sign from minus to plus. Thus, the extremum determined by the formula (7.4.28)
is the minimum of this function, and it is the solution of Equation (7.4.17a) defining
this criterion of signal processing.
The rule of making the decision d1 concerning the presence of the signal (if
Ev(t̂1 − T0/2) > l0(F)) or the decision d0 concerning the absence of the signal (if
Ev(t̂1 − T0/2) < l0(F)), stated by Equation (7.4.18a), supposes (1) formation
of the envelope Ev(t) of the estimator v(t) = ŝ(t) of the useful signal s(t), and (2)
comparison of the value of the envelope Ev(t̂1 − T0/2) with a threshold value l0(F) at
the instant t = t̂1 − T0/2 determined by the estimator t̂1; as a result, the decision
making is realized:

Ev(t̂1 − T0/2) ≷_{d0}^{d1} l0(F).   (7.4.29)
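The decision rule (7.4.29) can be sketched as follows; the envelope is computed via the
analytic signal (scipy.signal.hilbert), and the index arithmetic is an assumption about
the sampling grid:

    import numpy as np
    from scipy.signal import hilbert

    def detect(v, i_t1_hat, i_half_period, l0):
        """Rule (7.4.29): compare E_v(t̂1 − T0/2) with the threshold l0(F).
        Returns True for decision d1 (signal present), False for d0."""
        envelope = np.abs(hilbert(v))          # E_v(t) = sqrt(v^2 + v_H^2)
        return envelope[i_t1_hat - i_half_period] > l0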
The relationship (7.4.18b) defines the criterion of forming the estimator t̂1 of time
of signal ending t1 based on minimizing the mean squared difference Mϕ {(t1 −t◦ )2 }
between true value of time of signal ending t1 and optimization variable t◦ under
the conditions that averaging is realized over the initial phase ϕ of the signal taken
in one of two intervals: Φ1 = [−π/2, π/2] or Φ2 = [π/2, 3π/2], and interference
(noise) is identically equal to zero: n(t) ≡ 0.
The solution of the optimization equation (7.4.18b) of the system (7.4.18) is deter-
mined by the identity:

t̂1 = { t̂− + (T0/2) + (T0ϕ̂/2π),  ϕ ∈ Φ1 = [−π/2, π/2];      (7.4.30)
        t̂+ + (T0/2) − (T0ϕ̂/2π),  ϕ ∈ Φ2 = [π/2, 3π/2],
where t̂± = (∫_{Tobs} t v±(t) dt)/(∫_{Tobs} v±(t) dt) is the estimator of the barycentric
coordinate of the positive v+(t) or the negative v−(t) part of the smoothed stochastic
process v(t), respectively; v+(t) = v(t) ∨ 0, v−(t) = v(t) ∧ 0; Tobs is the observation
interval of the signal: Tobs = [t′0, t′1], t′0 < t0, t1 < t′1; ϕ̂ = arcsin[2(t̂+ − t̂−)/T0] is
the estimator of the unknown nonrandom initial phase ϕ of the useful signal s(t).
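A sketch of the estimators entering (7.4.30): the barycentric coordinates t̂± of the
positive and negative parts of v(t), the phase estimator ϕ̂, and t̂1 for ϕ ∈ Φ1; the
function and variable names are ours, and numerical quadrature stands in for the integrals:

    import numpy as np

    def ending_time_estimate(t, v, T0):
        """Barycentric coordinates t̂± of v ∨ 0 and v ∧ 0, then
        ϕ̂ = arcsin[2(t̂+ − t̂−)/T0] and t̂1 per (7.4.30) for ϕ ∈ [−π/2, π/2]."""
        v_pos, v_neg = np.maximum(v, 0.0), np.minimum(v, 0.0)
        t_plus = np.trapz(t * v_pos, t) / np.trapz(v_pos, t)
        t_minus = np.trapz(t * v_neg, t) / np.trapz(v_neg, t)
        phi_hat = np.arcsin(np.clip(2.0 * (t_plus - t_minus) / T0, -1.0, 1.0))
        t1_hat = t_minus + T0 / 2.0 + T0 * phi_hat / (2.0 * np.pi)
        return t1_hat, phi_hat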
The value of the mean squared difference Mϕ{(t1 − t̂1)²} between the true value
of the time of signal ending t1 and its estimator t̂1 determined by Equation (7.4.30),
under the given conditions, is equal to zero:

Mϕ{(t1 − t̂1)²}|_{n(t)≡0} = 0.
If the initial phase ϕ of the signal can change within the interval from −π to
π, i.e., it is not known beforehand to which interval (Φ1 = [−π/2, π/2] or Φ2 =
[π/2, 3π/2] from the relationships (7.4.18b) and (7.4.30)) the phase ϕ belongs, then
satisfactory estimators t̂1 of the time of signal ending t1 can be determined by
the identities:

t̂1 = max_{t̂±∈Tobs} [t̂−, t̂+] + (T0/4), ϕ ∈ [−π, π];   (7.4.31a)

or: t̂1 = (t̂− + t̂+)/2 + T0/2, ϕ ∈ [−π, π].   (7.4.31b)
FIGURE 7.4.4 Block diagram of processing unit that realizes harmonic signal detection
with joint estimation of time of signal arrival (ending)

FIGURE 7.4.5 Useful signal s(t) (dotted line); realization w∗(t) of signal w(t) in output
of adder (dashed line); realization v∗(t) of signal v(t) in output of median filter (solid line)

FIGURE 7.4.6 Useful signal s(t) (dotted line); realization v∗(t) of signal v(t) in output
of median filter (solid line); δ-pulse determining time position of estimator t̂1 of time
of signal ending t1
Figure 7.4.6 illustrates: the useful signal s(t); realization v∗(t) of the signal v(t) in the
output of the median filter; δ-pulses determining the time position of the estimators t̂±
of barycentric coordinates of the positive v+(t) or negative v−(t) parts of the smoothed
stochastic process v(t); and the δ-pulse determining the time position of the estimator t̂1
of the time of signal ending t1, according to the formula (7.4.30).

Figure 7.4.7 illustrates: the useful signal s(t); realization v∗(t) of the signal v(t) in the
output of the median filter; δ-pulses determining the time position of the estimators t̂±
of barycentric coordinates of the positive v+(t) or negative v−(t) parts of the smoothed
stochastic process v(t); and the δ-pulse determining the time position of the estimator t̂1
of the time of signal ending t1, according to the formulas (7.4.30), (7.4.31a), and (7.4.31b).
FIGURE 7.4.7 Useful signal s(t) (dotted line); realization v∗(t) of signal v(t) in output
of median filter (solid line); δ-pulses determining time position of estimators t̂±; δ-pulse
determining time position of estimator t̂1 of time of signal ending t1

FIGURE 7.4.8 Useful signal s(t) (dotted line); realization Ev∗(t) of envelope Ev(t) of
signal v(t) (solid line); δ-pulses determining time position of estimators t̂±; δ-pulse
determining time position of estimator t̂1 of time of signal ending t1
Figure 7.4.8 illustrates: the useful signal s(t); realization Ev∗ (t) of the envelope
Ev (t) of the signal v (t) in the output of median filter; δ-pulses determining time
position of the estimators t̂± of barycentric coordinates of positive v+ (t) or negative
v− (t) parts of the smoothed stochastic process v (t); and δ-pulse determining time
position of the estimator t̂1 of time of signal ending t1 according to the formula
(7.4.30).
We can determine the quality indices of harmonic signal detection by a synthe-
sized processing unit (Fig. 7.4.4). We use the theorem from [251] indicating that
the median estimator v (t) formed by median filter (MF) converges in distribution
to a Gaussian random variable with zero mean.
Along with the signal w(t) in the output of the adder (see formula (7.3.25)),
the process v(t) in the output of the median filter can be represented as the sum of
signal vs(t) and noise vn(t) components:

v(t) = vs(t) + vn(t).   (7.4.32)
As mentioned in Section 7.3, the variance Dvn (7.3.34) of the noise component vn(t) of
the process v(t) in the output of the filter (7.4.28) (under the condition that the
signal s(t) is a harmonic oscillation of the form (7.4.12)) can be decreased to the
quantity:

Dvn ≤ Dv,max = a² exp{−∆T̃² Ns² f_{n,max}²/(8√π)},   (7.4.33)
FIGURE 7.4.9 Block diagram of unit processing observations defined by Equations: (a)
(7.4.11a); (b) (7.4.11b)
to the upper bound |δt̂1|max of the absolute value of the relative error |δt̂1| = |t̂1 − t1|/T0
of the estimator t̂1 of the time of signal ending t1, obtained by the statistical modeling
method:

|δt̂1| = |t̂1 − t1|/T0 ≤ |δt̂1|max = 0.5.   (7.4.39)

The upper bound |δt̂1|max of the absolute value of the relative error |δt̂1| of the
estimator t̂1 of the time of signal ending t1, along with the detection quality indices
(7.4.36) and (7.4.38), does not depend on the energy relationships between the useful
signal and the interference (noise).
When we have prior information concerning belonging of the initial phase of
the signal to intervals Φ1 = [−π/2, π/2] or Φ2 = [π/2, 3π/2], which determine the
methods of formation of the estimators t̂1 of time of signal ending t1 , according
the independence of the samples of the interference (noise) {n(tj)}; ∆t ≪ 1/f0, where
f0 is the known carrier frequency of the signal s(t).
Taking into account these considerations, the equations of observations in the two
processing channels (7.4.40a) and (7.4.40b) take the form:
where y(t), ỹ(t) are the solution functions of the problem of minimization of a metric
between the observed statistical collections {x(tj)}, {x̃(tj)} and the optimization
variables, i.e., the functions ŷ(t) and y̌(t), respectively; w(t) is the function F[∗, ∗]
uniting the results y(t) and ỹ(t) of minimization of the metric functions of the observed
collections {x(tj)} (7.4.42a) and {x̃(tj)} (7.4.42b); T∗ is the processing interval;
N ∈ ℕ, ℕ is the set of natural numbers; N is the number of samples of the
processes x(t) and x̃(t) used during their primary processing; δd(N) is the relative
dynamic error of filtering as a function of the number of samples N; δd0 is the given
quantity of the relative dynamic error of filtering;
IP[s(t)] = { M{u²(t)}|_{s(t)≡0} → min_L;                    (a)
             M{[u(t) − s(t)]²}|_{n(t)≡0, ∆t→0} = ε;         (b)      (7.4.45)
             u(t) = L[w(t)],                                (c)
ISm[s(t)] = { v(t) = arg min_{v◦(t)∈V; t,tk∈T̃} Σ_{k=0}^{M−1} |u(tk) − v◦(t)|;   (a)
              ∆T̃ : δd(∆T̃) = δd,sm;                                              (b)      (7.4.46)
              M = arg max_{M′∈ℕ∩[M∗,∞[} [δf(M′)]|_{M∗: δf(M∗)=δf,sm},            (c)
where v(t) = ŝ(t) is the result of filtering (the estimator ŝ(t) of the signal s(t)), that
is, the solution of the problem of minimization of the metric between the instantaneous
values of the stochastic process u(t) and the optimization variable, i.e., the function v◦(t);
tk = t − (k/M)∆T̃, k = 0, 1, . . . , M − 1, tk ∈ T̃, T̃ = ]t − ∆T̃, t]; T̃ is the interval on
which smoothing of the stochastic process u(t) is realized; ∆T̃ is the length of the
smoothing interval T̃; M ∈ ℕ, ℕ is the set of natural numbers; M is the number of
samples of the stochastic process u(t) used during smoothing; δd(∆T̃) and δf(M) are
the relative dynamic and fluctuation errors of smoothing as functions of the length ∆T̃
of the smoothing interval T̃ and the number of samples M, respectively; δd,sm, δf,sm
are the given quantities of the relative dynamic and fluctuation errors of smoothing,
respectively.
We now explain the optimality criteria and the individual relationships involved
in the systems (7.4.44), (7.4.45), (7.4.46) determining the successive stages of
processing PF[s(t)], IP[s(t)], ISm[s(t)] of the general algorithm of signal extraction
(7.4.43).
Equations (7.4.44a) and (7.4.44b) of the system (7.4.44) define the criteria of
minimum of the metrics between the statistical sets of the observations {x(tj)} and
{x̃(tj)} and the results of primary filtering y(t) and ỹ(t), respectively. The metric
functions |∧_{j=0}^{N−1}[x(tj) − ŷ(t)]| and |∨_{j=0}^{N−1}[x̃(tj) − y̌(t)]| are chosen to
provide the metric convergence and the convergence in probability to the estimated
parameter of the sequences yN−1 = ∧_{j=0}^{N−1} x(tj), ỹN−1 = ∨_{j=0}^{N−1} x̃(tj) for
the interactions in the form (7.2.2a) and (7.2.2b) (see Section 7.2).
Equation (7.4.44d) of the system (7.4.44) defines the criterion of minimum
of the metric Σ_{j=0}^{N−1} |w(tj) − s(tj)| between the useful signal s(t) and the function
w(t) in the processing interval T∗ = [t − (N − 1)∆t, t]. This criterion establishes the
kind of the function F[y(t), ỹ(t)] (7.4.44c) uniting the primary processing results
y(t) and ỹ(t) obtained from the observed processes x(t) and x̃(t). The criterion
(7.4.44d) is considered under two constraint conditions: (1) the interference (noise) is
identically equal to zero: n(t) ≡ 0; (2) the sampling interval ∆t tends to zero:
∆t → 0. Equation (7.4.44e) of the system (7.4.44) determines the criterion of
the choice of the number of samples N of the stochastic processes x(t), x̃(t), based
on the given quantity δd0 of the relative dynamic error of primary filtering.
Equations (7.4.45a), (7.4.45b), (7.4.45c) of the system (7.4.45) define the choice
of the functional transformation L[w(t)]. Equation (7.4.45a) determines the crite-
rion of minimum of the second moment of the process u(t) in the absence of useful
signal s(t) in the input of the signal processing unit. Equation (7.4.45b) determines
the second moment of the difference between the signals u(t) and s(t) under two
constraint conditions: (1) interference (noise) n(t) in the input of signal processing
unit is absent; (2) the sampling interval ∆t tends to zero: ∆t → 0.
Equation (7.4.46a) of the system (7.4.46) defines the criterion of minimum of the
metric Σ_{k=0}^{M−1} |u(tk) − v◦(t)| between the instantaneous values of the process u(t)
and the optimization variable v◦(t) in the smoothing interval T̃ = ]t − ∆T̃, t], requiring
intermediate smoothing of the signal u(t). Equation (7.4.46b) determines the criterion
of the choice of the length ∆T̃ of the smoothing interval T̃, based on a given
quantity of the dynamic error of intermediate smoothing δd,sm. Equation (7.4.46c)
determines the criterion of the choice of the number of samples M of the stochastic
process u(t), based on a given quantity of the fluctuation error of intermediate smoothing
δf,sm.
Thus, the criteria (7.4.44) through (7.4.46) define the signal extraction algorithm
in the presence of interference in the form of quasi-white noise with independent
samples (see Equation (7.3.3) from Section 7.3).
The further problem of matched filtering M F [s(t)] in signal space L(+, ∨, ∧)
with L-group properties is solved similarly, as shown in Subsection 7.4.2, with the
only difference that the statistical collections {v+ (tj )} and {v− (tj )} of the positive
v+ (t) = v (t) ∨ 0 and negative v− (t) = v (t) ∧ 0 parts of the process v (t) = ŝ(t),
obtained on the basis of the criterion (7.4.46a) of the system (7.4.46), take part in
signal processing. The intermediate processing in the limiter is excluded from the
algorithm M F [s(t)], inasmuch as it is foreseen in the signal extraction algorithm
Ext[s(t)]. Thus, the problem of matched filtering M F [s(t)] of useful signal s(t) in
the presence of interference (noise) n(t) is formulated and solved on the basis of
step-by-step processing of statistical collections {v+ (tj )} and {v− (tj )} determined
by the equations v+ (t) = v (t) ∨ 0 and v− (t) = v (t) ∧ 0:
MF[s(t)] = { IF[s(t)];   (a)
             Sm[s(t)],   (b)      (7.4.47)
where IF[s(t)] is intermediate filtering and Sm[s(t)] is smoothing; together they form
the successive processing stages of the general algorithm of matched filtering of the
useful signal (7.4.47).
The optimality criteria determining each stage of processing IF[s(t)] and
Sm[s(t)] are interrelated and involved in the separate systems:

IF[s(t)] = { Y(t) = arg min_{Ŷ(t)∈Y; t,tj∈Tobs} |∧_{j=0}^{J−1} [v+(tj) − Ŷ(t)]|;        (a)
             Ỹ(t) = arg min_{Y̌(t)∈Ỹ; t,tj∈Tobs} |∨_{j=0}^{J−1} [v−(tj) − Y̌(t)]|;       (b)
             J = arg min_{Y(t)∈Y} [∫_{Tobs} |Y(t)| dt]|_{n(t)≡0}, ∫_{Tobs} |Y(t)| dt ≠ 0;  (c)   (7.4.48)
             W(t) = F[Y(t), Ỹ(t)];                                                        (d)
             [∫_{[t1−T0,t1]} |W(t) − s(t)| dt]|_{n(t)≡0} → min_{W(t)∈W},                   (e)
where {v+(tj)} and {v−(tj)} are the processed statistical collections of the positive
v+(t) = v(t) ∨ 0 and the negative v−(t) = v(t) ∧ 0 parts of the process v(t) = ŝ(t),
obtained on the basis of the criterion (7.4.46a) of the system (7.4.46); Y(t) and
Ỹ(t) are the solution functions of the problem of minimization of the metric between
the observed statistical collections {v+(tj)}, {v−(tj)} and the optimization variables,
i.e., the functions Ŷ(t) and Y̌(t), respectively; W(t) is the function F[∗, ∗] uniting
the results Y(t) and Ỹ(t) of minimizing the metric functions of the observed collections
{v+(tj)} and {v−(tj)};
Sm[s(t)] = { V(t) = arg min_{V◦(t)∈V; t,tk∈T̃MF} Σ_{k=0}^{MMF−1} |W(tk) − V◦(t)||_{s(t)≡0};   (a)
             ∆T̃MF : δd(∆T̃MF) = δd,sm^MF;                                                     (b)      (7.4.49)
             MMF = arg max_{M′∈ℕ∩[M∗,∞[} [δf(M′)]|_{M∗: δf(M∗)=δf,sm^MF},                     (c)
where V(t) is the result of matched filtering, that is, the solution of the problem
of minimizing the metric between the instantaneous values of the stochastic process
W(t) and the optimization variable, i.e., the function V◦(t); tk = t − (k/MMF)∆T̃MF,
k = 0, 1, . . . , MMF − 1, tk ∈ T̃MF = ]t − ∆T̃MF, t]; T̃MF is the interval in which
smoothing of the stochastic process W(t) is realized; MMF ∈ ℕ, ℕ is the set of natural
numbers; MMF is the number of samples of the stochastic process W(t) used during
smoothing in the interval T̃MF; δd(∆T̃MF) and δf(MMF) are the relative dynamic and
fluctuation errors of smoothing as functions of the length ∆T̃MF of the smoothing
interval T̃MF and the number of samples MMF, respectively; δd,sm^MF and δf,sm^MF
are the given quantities of the relative dynamic and fluctuation errors of smoothing,
respectively.
We now explain the optimality criteria and the individual relationships involved
in the systems (7.4.48) and (7.4.49) determining the successive processing stages
IF[s(t)], Sm[s(t)] of the general matched filtering algorithm MF[s(t)] of the useful
signal (7.4.47).
Equations (7.4.48a) and (7.4.48b) of the system (7.4.48) define the criteria of
minimum of the metrics between the statistical sets of the observations {v+(tj)} and
{v−(tj)} and the results of primary processing Y(t) and Ỹ(t), respectively. The
metric functions |∧_{j=0}^{J−1}[v+(tj) − Ŷ(t)]| and |∨_{j=0}^{J−1}[v−(tj) − Y̌(t)]| are chosen
to provide the metric convergence and the convergence in probability to the useful signal
s(t) of the sequences Y(t) = ∧_{j=0}^{J−1} v+(tj) and Ỹ(t) = ∨_{j=0}^{J−1} v−(tj) based on
the interactions in the form (7.4.42a) and (7.4.42b). The relationship (7.4.48c) determines
the criterion of the choice of the number of samples J of the stochastic processes v+(t),
v−(t) used during signal processing on the basis of minimizing the norm ∫_{Tobs} |Y(t)| dt.
The criterion (7.4.48c) is considered under two constraint conditions: (1) the interference
(noise) is identically equal to zero: n(t) ≡ 0; (2) the norm ∫_{Tobs} |Y(t)| dt of
the function Y(t) is not equal to zero: ∫_{Tobs} |Y(t)| dt ≠ 0. Equation (7.4.48e) defines
the criterion of minimum of the metric ∫_{[t1−T0,t1]} |W(t) − s(t)| dt|_{n(t)≡0} between the
useful signal s(t) and the function W(t) in the interval [t1 − T0, t1]. This criterion
establishes the kind of the function F[Y(t), Ỹ(t)] (7.4.48d) uniting the results
Y(t) and Ỹ(t) of primary processing of the observed processes v+(t) and v−(t).
The criterion (7.4.48e) is considered under the condition that the interference (noise)
is identically equal to zero: n(t) ≡ 0.
Equation (7.4.49a) of the system (7.4.49) determines the criterion of minimum
of the metric Σ_{k=0}^{MMF−1} |W(tk) − V◦(t)| between the instantaneous values of the process
W(t) and the optimization variable V◦(t) in the smoothing interval T̃MF = ]t − ∆T̃MF, t],
requiring final processing of the signal W(t) by smoothing under the condition
that the useful signal s(t) is identically equal to zero: s(t) ≡ 0. The relationship
(7.4.49b) determines the rule of the choice of the length ∆T̃MF of the smoothing
interval T̃MF, based on a given quantity of the relative dynamic error δd,sm^MF of
smoothing. Equation (7.4.49c) determines the criterion of the choice of the number of
samples MMF of the stochastic process W(t), based on a given quantity of the relative
fluctuation error δf,sm^MF of its smoothing.
The problem of joint detection Det[s(t)] of the signal s(t) and estimation Est[t1]
of the time of its ending t1 is formulated on the detection and estimation criteria
involved in one system, which is a logical continuation of (7.4.43):

Det[s(t)]/Est[t1] = { EV(t̂1 − T0/2) ≷_{d0}^{d1} l0(F);                        (a)      (7.4.50)
                      t̂1 = arg min_{ϕ∈Φ1; t◦∈Tobs} Mϕ{(t1 − t◦)²}|_{n(t)≡0},  (b)
where EV(t̂1 − T0/2) is the instantaneous value of the envelope EV(t) of the result
V(t) of matched filtering of the useful signal s(t) at the instant t = t̂1 − T0/2: EV(t) =
√(V²(t) + V_H²(t)); V_H(t) = H[V(t)] is the Hilbert transform; d1 and d0 are the decisions
concerning the true values of the unknown nonrandom parameter θ, θ ∈ {0, 1}; l0(F) is
a threshold level depending on a given conditional probability of false
alarm F; t̂1 is the estimator of the time of signal ending t1; Mϕ{(t1 − t◦)²} is the mean
squared difference between the true value of the time of signal ending t1 and the
optimization variable t◦; Mϕ{∗} is the symbol of mathematical expectation with averaging
over the initial phase ϕ of the signal; Φ1 is the domain of definition of the initial phase ϕ
of the signal: Φ1 = [−π/2, π/2].
Equation (7.4.50a) of the system (7.4.50) determines the rule for making the
decision d1 concerning the presence of the signal (if EV(t̂1 − T0/2) > l0(F)) or the
decision d0 concerning the absence of the signal (if EV(t̂1 − T0/2) < l0(F)). The
relationship (7.4.50b) determines the criterion of formation of the estimator t̂1 of
the time of signal ending t1 on the basis of minimization of the mean squared difference
Mϕ{(t1 − t◦)²} between the true value of the time of signal ending t1 and the optimization
variable t◦, when averaging is realized over the initial phase ϕ of the signal taken in
the interval Φ1 = [−π/2, π/2], and the interference (noise) is identically equal to zero:
n(t) ≡ 0.
The problem of estimation of the amplitude A and the initial phase ϕ of the useful
signal s(t) is formulated and solved on the basis of two estimation criteria within one
system, which is a logical continuation of (7.4.44) through (7.4.46) and (7.4.48) through
(7.4.50):

Est[A, ϕ] = { ϕ̂ = arg max_{ϕ∈Φ1; t∈Tobs} ∫_{Ts} v(t) cos(ω0t + ϕ) dt;           (a)      (7.4.51)
              Â = arg min_{A∈∆A; t∈Tobs} ∫_{Ts} [v(t) − A cos(ω0t + ϕ)]² dt,      (b)
where Â and ϕ̂ are the estimators of the amplitude A and the initial phase ϕ of the
useful signal s(t), respectively; v(t) = ŝ(t) is the result of useful signal extraction
(the estimator ŝ(t) of the signal s(t)) obtained on the basis of the criterion (7.4.46a)
of the system (7.4.46); Ts is the domain of definition of the signal s(t), Ts = [t0, t1]; t0 is
the unknown time of arrival of the signal s(t); t1 is the unknown time of signal ending;
Tobs is the observation interval of the signal: Tobs = [t′0, t′1], t′0 < t0, t1 < t′1;
Φ1 is the domain of definition of the initial phase of the signal: Φ1 = [−π/2, π/2]; ∆A
is the domain of definition of the signal amplitude: ∆A = ]0, Amax].
The signal processing problems determined by the equation systems
(7.4.43), (7.4.47), (7.4.50), (7.4.51) are solved in the following way.
Solving the problem of the extraction Ext[s(t)] of the useful signal s(t) in the
presence of interference (noise) n(t) is described in detail in Section 7.3. Here we
consider the intermediate results that determine the structure-forming elements of
the general processing algorithm.
The solutions of the optimization equations (7.4.44a) and (7.4.44b) of the system
(7.4.44) are the values of the estimators y(t), ỹ(t) in the form of the meet and join of
the observation results {x(tj)} and {x̃(tj)}, respectively:

y(t) = ∧_{j=0}^{N−1} x(tj) = ∧_{j=0}^{N−1} x(t − j∆t);   (7.4.52a)

ỹ(t) = ∨_{j=0}^{N−1} x̃(tj) = ∨_{j=0}^{N−1} x̃(t − j∆t).   (7.4.52b)
The solution u(t) of the relationships (7.4.45a), (7.4.45b), (7.4.45c) of the system
(7.4.45), which define the criterion of the choice of the transformation of the process w(t),
is the function L[w(t)] that determines the gain characteristic of the limiter:

u(t) = L[w(t)] = [(w(t) ∧ a) ∨ 0] + [(w(t) ∨ (−a)) ∧ 0],   (7.4.54)

where the parameter of the limiter a is chosen equal to a = sup ∆A = Amax,
∆A = ]0, Amax]; Amax is the maximum possible value of the useful signal amplitude.

The solution of the optimization equation (7.4.46a) of the system (7.4.46) is the value
of the estimator v(t) in the form of the sample median med{∗} of the sample
collection {u(tk)} of the stochastic process u(t):

v(t) = med_k {u(tk)},   (7.4.55)

where tk = t − (k/M)∆T̃, k = 0, 1, . . . , M − 1; tk ∈ T̃ = ]t − ∆T̃, t]; T̃ is the interval
in which smoothing of the stochastic process u(t) is realized; ∆T̃ is the length of the
smoothing interval T̃.
Summarizing the relationships (7.4.52) through (7.4.55), one can draw the con-
clusion that the estimator v (t) = ŝ(t) of the signal s(t) extracted in the presence
of interference (noise) n(t) is the function of smoothing of stochastic process u(t)
obtained by limiting the process w(t) that combines the results y (t) and ỹ (t) of
a proper primary processing of the observed stochastic processes x(t), x̃(t) in the
interval T ∗ = [t − (N − 1)∆t, t].
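The whole extraction stage can be summarized in one short sketch following (7.4.52)
through (7.4.55); the sample counts N, M and the limit a are parameters supplied by the
criteria (7.4.44e) and (7.4.46b,c) and are assumed given here:

    import numpy as np

    def extract_signal(x, x_tilde, N, a, M):
        """Ext[s(t)] sketch: sliding meet/join over N samples (7.4.52a,b),
        combination w = (y ∨ 0) + (y~ ∧ 0), limiting, median smoothing (7.4.55)."""
        n = x.size
        y = np.array([x[max(0, i - N + 1): i + 1].min() for i in range(n)])
        y_tilde = np.array([x_tilde[max(0, i - N + 1): i + 1].max() for i in range(n)])
        w = np.maximum(y, 0.0) + np.minimum(y_tilde, 0.0)   # adder
        u = np.clip(w, -a, a)                               # limiter
        return np.array([np.median(u[max(0, i - M + 1): i + 1]) for i in range(n)])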
Solving the problem of matched filtering MF[s(t)] of the useful signal s(t) in the signal
space L(+, ∨, ∧) with L-group properties in the presence of interference (noise)
n(t) is described in detail in Subsection 7.4.2, so here we consider only the intermediate
results, which determine the structure-forming elements of the general processing
algorithm.
The solutions of the optimization equations (7.4.48a) and (7.4.48b) of the system
(7.4.48) are the values of the estimators Y(t) and Ỹ(t) in the form of the meet and join
of the observation results {v+(tj)} and {v−(tj)}, respectively:

Y(t) = ∧_{j=0}^{J−1} v+(tj) = ∧_{j=0}^{J−1} v+(t − jT0);   (7.4.56a)

Ỹ(t) = ∨_{j=0}^{J−1} v−(tj) = ∨_{j=0}^{J−1} v−(t − jT0).   (7.4.56b)
According to the criterion (7.4.48c), the optimal number of samples J of the stochastic
processes v+(t), v−(t) used during primary processing (7.4.56a,b) is equal to the number
of periods Ns of the signal:

J = Ns.   (7.4.57)
The coupling equation (7.4.48d) satisfying the criterion (7.4.48e) takes the form:

W(t) = Y(t) + Ỹ(t).   (7.4.58)
The solution of the optimization equation (7.4.49a) of the system (7.4.49) is the value
of the estimator V(t) in the form of the sample median med{∗} of the collection of
the samples {W(tk)} of the stochastic process W(t):

V(t) = med_k {W(tk)},   (7.4.59)

where tk = t − (k/MMF)∆T̃MF, k = 0, 1, . . . , MMF − 1; tk ∈ T̃MF = ]t − ∆T̃MF, t]; T̃MF
is the interval in which smoothing of the stochastic process W(t) is realized; ∆T̃MF is
the length of the smoothing interval T̃MF.
The sense of the obtained relationships (7.4.56) through (7.4.59) lies in the fact
that the result of matched filtering V (t) of useful signal s(t) extracted and detected
in the presence of interference (noise) n(t) is the function of smoothing of stochastic
process W (t) that is a combination of the results Y (t) and Ỹ (t) of signal processing
of the positive v+ (t) and the negative v− (t) parts of the observed stochastic process
v (t) in the interval T ∗ = [t − (Ns − 1)∆t, t].
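A corresponding sketch of the matched filtering stage (7.4.56) through (7.4.59); the
assumption that the period T0 spans n0 grid samples, and all names, are ours:

    import numpy as np

    def matched_filter_stage(v, Ns, n0, M_mf):
        """MF[s(t)] sketch: meet/join of v ∨ 0 and v ∧ 0 over Ns period-spaced
        shifts (7.4.56a,b), adder W = Y + Y~ (7.4.58), median smoothing (7.4.59)."""
        v_pos, v_neg = np.maximum(v, 0.0), np.minimum(v, 0.0)
        Y, Y_tilde = v_pos.copy(), v_neg.copy()
        for j in range(1, Ns):
            Y[j * n0:] = np.minimum(Y[j * n0:], v_pos[: v.size - j * n0])
            Y_tilde[j * n0:] = np.maximum(Y_tilde[j * n0:], v_neg[: v.size - j * n0])
        W = Y + Y_tilde
        return np.array([np.median(W[max(0, i - M_mf + 1): i + 1]) for i in range(W.size)])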
Solving the problem of joint detection Det[s(t)] of the signal s(t) and estimation
Est[t1 ] of its time of ending t1 is described in detail in Subsection 7.4.2, so here we
note only the intermediate results which determine the structure-forming elements
of the general processing algorithm.
The rule of making the decision d1 concerning the presence of the signal (if
EV(t̂1 − T0/2) > l0(F)) or the decision d0 concerning the absence of the signal (if
EV(t̂1 − T0/2) < l0(F)), determined by Equation (7.4.50a) of the system (7.4.50),
supposes formation of the envelope EV(t) of the estimator V(t) = ŝ(t) of the useful
signal s(t) and comparison of the value of the envelope EV(t̂1 − T0/2) with the
threshold value l0(F) at the instant t = t̂1 − T0/2 determined by the estimator t̂1;
as a result, the decision is made:

EV(t̂1 − T0/2) ≷_{d0}^{d1} l0(F).   (7.4.60)
The relationship (7.4.50b) of the system (7.4.50) determines the criterion of forming
the estimator t̂1 of time of signal ending t1 on the basis of minimization of mean
squared difference Mϕ {(t1 −t◦ )2 } between true value of time of signal ending t1 and
optimization variable t◦ when averaging is realized over the initial phase ϕ of the
signal taken in the interval Φ1 = [−π/2, π/2], and interference (noise) is identically
equal to zero: n(t) ≡ 0.
Generally, as shown in Subsection 7.4.2, the solution of the optimization equation
(7.4.50b) of the system (7.4.50) is determined by the identity:

t̂1 = { t̂− + (T0/2) + (T0ϕ̂/2π),  ϕ ∈ Φ1 = [−π/2, π/2];      (7.4.61)
        t̂+ + (T0/2) − (T0ϕ̂/2π),  ϕ ∈ Φ2 = [π/2, 3π/2],
where t̂± = (∫_{Tobs} t V±(t) dt)/(∫_{Tobs} V±(t) dt) is the estimator of the barycentric
coordinate of the positive V+(t) or the negative V−(t) part of the smoothed stochastic
process V(t), respectively; V+(t) = V(t) ∨ 0, V−(t) = V(t) ∧ 0; Tobs is the observation
interval of the signal: Tobs = [t′0, t′1], t′0 < t0, t1 < t′1; ϕ̂ = arcsin[2(t̂+ − t̂−)/T0]
is the estimator of the unknown nonrandom initial phase ϕ of the useful signal s(t).
However, as mentioned above, the initial phase is determined in the interval
Φ1 = [−π/2, π/2], which, according to (7.4.61), implies that the estimator t̂1 takes the
form:

t̂1 = t̂− + (T0/2) + (T0ϕ̂/2π), ϕ ∈ Φ1 = [−π/2, π/2].   (7.4.61a)
The sense of the obtained relationships (7.4.61a) and (7.4.60) lies in the fact that
the estimator t̂1 of the time of signal ending and the envelope EV(t) are formed on the
basis of proper processing of the result of matched filtering V(t) of the useful
signal s(t) extracted and detected in the presence of interference (noise) n(t). Signal
detection is fixed by comparing the instantaneous value of the envelope
EV(t) with a threshold value at the instant t = t̂1 − T0/2 determined by the estimator
t̂1.
Consider, finally, the problem of the estimation of unknown nonrandom ampli-
tude and initial phase of the signal stated, for instance, in [149], [155].
Equation (7.4.51a) of the system (7.4.51) implies that the estimator ϕ̂ of the initial
phase ϕ is found by maximizing with respect to ϕ the expression:

Q(ϕ) = ∫_{Ts} v(t) cos(ω0t + ϕ) dt → max_{ϕ∈Φ1},   (7.4.62)
where v (t) is the result of extraction Ext[s(t)] of useful signal s(t) in the presence
of interference (noise), whose algorithm is described by the system (7.4.43); Ts is a
domain of definition of the signal s(t).
Factoring the cosine of the sum, we can compute the derivative of the function
Q(ϕ) with respect to ϕ, which we set equal to zero to determine the extremum:

dQ(ϕ̂)/dϕ̂ = −sin ϕ̂ ∫_{Ts} v(t) cos(ω0t) dt − cos ϕ̂ ∫_{Ts} v(t) sin(ω0t) dt = 0.   (7.4.63)

Equation (7.4.63) has a unique solution that is the maximum of the function
Q(ϕ):

ϕ̂ = −arctg(vs/vc),   (7.4.64)

where vs = ∫_{Ts} v(t) sin(ω0t) dt (7.4.64a) and vc = ∫_{Ts} v(t) cos(ω0t) dt (7.4.64b).
Equation (7.4.51b) of the system (7.4.51) implies that the estimator Â of the amplitude
A can be found by minimizing with respect to the variable A the expression:

Q(A) = ∫_{Ts} [v(t) − A cos(ω0t + ϕ)]² dt → min_{A∈∆A}.   (7.4.65)
We find the derivative of the function Q(A) with respect to A, setting it to zero to
find the extremum:

dQ(Â)/dÂ = −2 ∫_{Ts} v(t) cos(ω0t + ϕ) dt + 2Â ∫_{Ts} cos²(ω0t + ϕ) dt = 0.   (7.4.66)
Equation (7.4.66) has a unique solution, which determines the minimum of the
function Q(A):

Â = (∫_{Ts} v(t) cos(ω0t + ϕ) dt)/(∫_{Ts} cos²(ω0t + ϕ) dt) = 2Q(ϕ)/T,   (7.4.67)

where Q(ϕ) is the function determined by the relationship (7.4.62); T is the known
duration of the signal.
It is easy to verify that the function Q(ϕ) can be represented in the form:

Q(ϕ) = √(vc² + vs²),   (7.4.68)

where vs and vc are the quantities determined by the integrals (7.4.64a) and
(7.4.64b), respectively.
Taking into account (7.4.68), we write the final expression for the estimator Â
of the amplitude A of the signal:

Â = 2√(vc² + vs²)/T.   (7.4.69)
The block diagram of the processing unit, according to the results of solving
the optimization equations involved in the systems (7.4.43), (7.4.47), (7.4.50),
(7.4.51), is described by the relationships (7.4.52) through (7.4.60), (7.4.61a),
(7.4.64), and (7.4.69), and includes: a signal extraction unit (SEU), a matched
filtering unit (MFU), a signal detection unit (SDU), and an amplitude and initial phase
estimator formation unit (EFU) (see Fig. 7.4.10).
The SEU realizes signal processing according to the relationships (7.4.52)
through (7.4.55) and includes two processing channels, each containing a transversal
filter realizing primary filtering of the observed stochastic processes x(t) and x̃(t);
the units of formation of the positive y+(t) and the negative ỹ−(t) parts of the
processes y(t) and ỹ(t), respectively; an adder summing the results of signal processing
in the two processing channels; a limiter; and a median filter (MF) realizing intermediate
smoothing of the process u(t) (see Fig. 7.4.10).
The matched filtering unit realizes signal processing according to the relationships
(7.4.56) through (7.4.59) and contains two processing channels, each including the
units of formation of the positive v+(t) and the negative v−(t) parts of the process
v(t); transversal filters realizing primary filtering of the observed stochastic processes
v+(t) and v−(t); an adder summing the results of signal processing Y(t), Ỹ(t) in the
two channels; and a median filter (MF) that smooths the process W(t) = Y(t) + Ỹ(t)
(see Fig. 7.4.10).
The SDU realizes signal processing according to the relationships (7.4.60) and
(7.4.61a), and includes a time-of-ending estimator formation unit (EFU), an envelope
computation unit (ECU), and a decision gate (DG).
The amplitude and initial phase estimator formation unit realizes signal processing
according to the relationships (7.4.64) and (7.4.69).
The transversal filters of the signal extraction unit in the two processing channels realize
primary filtering PF[s(t)]: y(t) = ∧_{j=0}^{N−1} x(t − j∆t), ỹ(t) = ∨_{j=0}^{N−1} x̃(t − j∆t)
of the stochastic processes x(t) and x̃(t), according to Equations (7.4.52a,b), fulfilling
the criteria (7.4.44a) and (7.4.44b) of the system (7.4.44). The units of formation of
the positive y+(t) and the negative ỹ−(t) parts of the processes y(t) and ỹ(t) in the two
processing channels form the values of these functions according to the identities
(7.4.53a) and (7.4.53b), respectively. The adder sums the results of signal processing
in the two processing channels, according to the equality (7.4.53), providing fulfillment
of the criteria (7.4.44c) and (7.4.44d) of the system (7.4.44).
The limiter L[w(t)] realizes intermediate processing IP[s(t)] by clipping the
signal w(t) in the output of the adder, according to the criteria (7.4.45a), (7.4.45b)
of the system (7.4.45), to exclude from further processing the noise overshoots whose
instantaneous values exceed the value a.
The median filter (MF) realizes intermediate smoothing ISm[s(t)] of w(t) =
y+(t) + ỹ−(t), according to the formula (7.4.55), providing fulfillment of the criterion
(7.4.46a) of the system (7.4.46).
In a matched filtering unit (MFU), the units of formation of the positive v+ (t)
and the negative v− (t) parts of the process v (t) form the values of these functions,
according to the identities v+ (t) = v (t) ∨ 0 and v− (t) = v (t) ∧ 0, respectively.
FIGURE 7.4.10 Block diagram of processing unit that realizes harmonic signal detection
with joint estimation of amplitude, initial phase, and time of signal arrival (ending)
The transversal filters of the matched filtering unit in the two processing channels realize
intermediate filtering IF[s(t)]: Y(t) = ∧_{j=0}^{Ns−1} v+(t − jT0) and
Ỹ(t) = ∨_{j=0}^{Ns−1} v−(t − jT0) of the observed stochastic processes v+(t) and v−(t),
according to Equations (7.4.56a,b), providing fulfillment of the criteria (7.4.48a) and
(7.4.48b) of the system (7.4.48). The adder sums the results of signal processing in the
two processing channels, according to the equality (7.4.58), providing fulfillment of the
criteria (7.4.48d) and (7.4.48e) of the system (7.4.48).
The median filter (MF) realizes smoothing Sm[s(t)] of the process W (t) =
Y (t) + Ỹ (t) according to the formula (7.4.59), providing fulfillment of the criterion
(7.4.49a) of the system (7.4.49).
In the signal detection unit (SDU), the envelope computation unit (ECU) forms
the envelope EV(t) of the signal V(t) in the output of the median filter (MF) of the
matched filtering unit. The time-of-signal-ending estimator formation unit (EFU
t̂1) forms the estimator t̂1, according to Equation (7.4.61a), providing fulfillment
of the criterion (7.4.50b) of the system (7.4.50). At the instant t = t̂1 − T0/2, the
decision gate (DG) compares the instantaneous value of the envelope EV(t) with
the threshold value l0(F) and, as a result, makes the decision d1 concerning
the presence of the signal (if EV(t̂1 − T0/2) > l0(F)) or the decision d0 concerning
the absence of the signal (if EV(t̂1 − T0/2) < l0(F)), according to the rule (7.4.60) of
the criterion (7.4.50a) of the system (7.4.50).
The amplitude and initial phase estimator formation units compute the estimators
Â and ϕ̂ according to the formulas (7.4.69) and (7.4.64), respectively; the estimator ϕ̂
is also used to form the time-of-signal-ending estimator t̂1.
Figures 7.4.11 through 7.4.14 illustrate the results of statistical modeling of
signal processing by the synthesized unit under the following conditions: the useful
signal s(t) is harmonic with the number of periods Ns = 8 and the initial phase
ϕ = π/3. The signal-to-noise ratio E/N0 is equal to E/N0 = 10^{−10}, where E is the
energy of the signal and N0 is the power spectral density of the noise. The product
T0 fn,max = 64, where T0 is the period of the carrier of the signal s(t); fn,max is the
maximum frequency of the power spectral density of the noise n(t) in the form of
quasi-white Gaussian noise.
Figure 7.4.11 illustrates the useful signal s(t) and realization w∗ (t) of the signal
w(t) in the output of the adder of the signal extraction unit (SEU). The noise
overshoots appear in the form of short pulses of considerable amplitude.
FIGURE 7.4.11 Useful signal s(t) (dotted line) and realization w∗(t) of signal w(t) in
output of adder of signal extraction unit (SEU) (solid line)

FIGURE 7.4.12 Useful signal s(t) (dotted line) and realization v∗(t) of signal v(t) in
output of median filter of signal extraction unit (SEU) (solid line)
Figure 7.4.12 illustrates the useful signal s(t) and realization v∗(t) of the signal
v(t) in the output of the median filter of the signal extraction unit (SEU). Compared
with the previous figure, the noise overshoots are removed by the median filter.
Figure 7.4.13 illustrates the useful signal s(t) and realization W∗(t) of the signal
W(t) in the output of the adder of the matched filtering unit (MFU). Comparing the
signal W∗(t) with the signal v∗(t), one can conclude that the remnants of the noise
overshoots observed in the output of the median filter of the signal extraction unit
(SEU) were removed during processing of the signal v(t) in the transversal filters
of the matched filtering unit (MFU). The MFU compresses the harmonic signal s(t) in
such a way that the duration of the signal W(t) in the input of the median filter of the
MFU is equal to the period T0 of the harmonic signal s(t), thus compressing the useful
signal Ns times, where Ns is the number of periods of the harmonic signal s(t).
FIGURE 7.4.13 Useful signal s(t) (dotted line) and realization W∗(t) of signal W(t) in
output of adder of matched filtering unit (MFU) (solid line)

FIGURE 7.4.14 Useful signal s(t) (dotted line); realization V∗(t) of signal V(t) (solid
line); realization EV∗(t) of its envelope (dashed line)
Figure 7.4.14 illustrates the useful signal s(t); realization V ∗ (t) of the signal V (t)
in the output of the median filter of the MFU; realization EV∗ (t) of the envelope
EV (t) of the signal V (t); δ-pulses determining the time position of the estimators
t̂± of barycentric coordinates of the positive v+ (t) or the negative v− (t) parts of
the smoothed stochastic process v (t); δ-pulse determining the time position of the
estimator t̂1 of the time of signal ending t1, according to the formula (7.4.61a). As can
be seen from the figure, the leading edge of realization V∗(t) of the signal V(t)
is delayed with respect to the useful signal s(t) by the time (N − 1)/(2fn,max), where
N is the number of samples of the stochastic processes x(t) and x̃(t) used during
primary processing in the transversal filters of the SEU; fn,max is the maximum frequency
of the power spectral density of the noise n(t) in the form of quasi-white Gaussian noise.
We can determine the quality indices of estimation of the unknown nonrandom
amplitude A and initial phase ϕ of the harmonic signal s(t). The errors of the estimators
of these parameters are determined by dynamic and fluctuation components. Dynamic
errors of the estimators Â and ϕ̂ of the amplitude A and the initial phase ϕ are caused
by the method of obtaining them, under the assumption that interference (noise) is
absent at the input of the signal processing unit. Fluctuation errors of these estimators
are caused by the remains of noise at the input of the estimator formation unit.
We first find the dynamic error ∆dϕ̂ of the estimator ϕ̂ of the initial phase ϕ of
the harmonic signal s(t):

∆dϕ̂ = |ϕ − ϕ̂|.   (7.4.70)
The signal v(t) in the output of the median filter of the signal extraction unit in the
absence of interference (noise) at the input of the signal processing unit can be
represented in the form:

v(t) = { s(t),        t ∈ T_{s+}^{−} ∪ T_{s−}^{+};      (7.4.71)
         s(t − ∆T),  t ∈ T_{s+}^{+} ∪ T_{s−}^{−},

where Ts = T_{s+}^{+} ∪ T_{s+}^{−} ∪ T_{s−}^{−} ∪ T_{s−}^{+} is the domain of definition of
the signals v(t) and s(t); ∆T is the length of the interval of primary processing T∗:
T∗ = [t − (N − 1)∆t, t], ∆T = (N − 1)∆t; ∆t is the sampling interval providing
independence of the interference (noise) samples {n(tj)}; T_{s+}^{+}, T_{s+}^{−},
T_{s−}^{−}, T_{s−}^{+} are the domains with the following properties:

T_{s+}^{+} = {t : (0 ≤ v(t) < s(t)) & (s′(t − ∆T) > 0)};
T_{s+}^{−} = {t : (0 ≤ v(t) = s(t)) & (s′(t) < 0)};
T_{s−}^{−} = {t : (s(t) < v(t) ≤ 0) & (s′(t − ∆T) < 0)};      (7.4.72)
T_{s−}^{+} = {t : (s(t) = v(t) ≤ 0) & (s′(t) > 0)}.
Taking into account the obtained approximations for the quantities vs and vc
((7.4.76) and (7.4.79)), the approximating expression for the estimator ϕ̂ of the initial
phase ϕ (7.4.64) takes the form:

ϕ̂ ≈ −arctg[(−Q sin ϕ + q cos ϕ)/(Q cos ϕ + q sin ϕ)] =
= −arctg[r sin(α − ϕ)/r cos(α − ϕ)] = ϕ − α,   (7.4.80)

where Q = AT/2; q = (2πA∆T/T0)(0.25T0 − ∆T); r = √(Q² + q²); α = arctg(q/Q).
Substituting the approximate value of the estimator ϕ̂ of the initial phase ϕ (7.4.80)
into the initial formula (7.4.70), we obtain the approximate value of the dynamic error
∆dϕ̂ of the estimator of the initial phase ϕ̂ of the harmonic signal s(t):

∆dϕ̂ = arctg[(π∆T/T)(1 − 4∆T/T0)], ∆T ≪ T0 < T,   (7.4.81)

where T is the known signal duration, T = Ns T0; Ns is the number of periods of the
harmonic signal s(t), Ns = T f0; f0 is the carrier frequency of the signal s(t); T0 is the
period of the carrier: T0 = 1/f0.
We now find the relative dynamic error δdÂ of the estimator Â of the amplitude
A of the useful signal s(t):

δdÂ = |A − Â|/A.   (7.4.82)
Taking into account the obtained approximations for the quantities vs (7.4.76) and
vc (7.4.79), the approximating expression for the estimator Â of the amplitude A
(7.4.69) takes the form:

Â ≈ 2√[(−Q sin ϕ + q cos ϕ)² + (Q cos ϕ + q sin ϕ)²]/T = 2√(Q² + q²)/T,   (7.4.83)
We now find the fluctuation errors of the estimators ϕ̂ and Â of the initial phase ϕ and
the amplitude A of the harmonic signal s(t), which characterize the synthesized signal
processing unit (Fig. 7.4.10). We use the theorem from [251] stating that the median
estimator v(t) obtained by the median filter (MF) converges in distribution to a
Gaussian random variable with zero mean.
As noted in Section 7.3, the variance Dvn (see Formula (7.3.34)) of the noise
component vn(t) of the process v(t) in the output of the median filter can be decreased
to the quantity:

Dvn ≤ Dv,max = a² exp{−∆T̃² N² f_{n,max}²/(8√π)},   (7.4.85)
PF[s(t)] = { y(t) = arg min_{ŷ(t)∈Y; t,tj∈Tobs} |∧_{j=0}^{J−1} [x(tj) − ŷ(t)]|;            (a)
             ỹ(t) = arg min_{y̌(t)∈Ỹ; t,tj∈Tobs} |∨_{j=0}^{J−1} [x̃(tj) − y̌(t)]|;           (b)
             J = arg min_{y(t)∈Y} [∫_{Tobs} |y(t)| dt]|_{n(t)≡0}, ∫_{Tobs} |y(t)| dt ≠ 0;   (c)   (7.4.93)
             w(t) = F[y(t), ỹ(t)];                                                          (d)
             [∫_{[t1−T0,min, t1]} |w(t) − s(t)| dt]|_{n(t)≡0} → min_{w(t)∈W},               (e)
where y(t) and ỹ(t) are the solution functions of the problem of minimization of
the metrics between the observed statistical collections {x(tj)}, {x̃(tj)} and the
optimization variables, i.e., the functions ŷ(t) and y̌(t), respectively; w(t) is a function
F[∗, ∗] uniting the results y(t) and ỹ(t) of minimization of the metric functions of the
observed collections {x(tj)} and {x̃(tj)}; Tobs is the observation interval of the signal;
J is the number of samples of the stochastic processes x(t) and x̃(t) used during signal
processing, J ∈ ℕ; ℕ is the set of natural numbers;
IP[s(t)] = { M{u²(t)}|_{s(t)≡0} → min_L;             (a)
             M{[u(t) − s(t)]²}|_{n(t)≡0} = ε;        (b)      (7.4.94)
             u(t) = L[w(t)],                         (c)
Sm[s(t)] = { v(t) = arg min_{v◦(t)∈V; t,tk∈T̃} Σ_{k=0}^{M−1} |u(tk) − v◦(t)|;   (a)
             ∆T̃ : δd(∆T̃) = δd,sm;                                              (b)      (7.4.95)
             M = arg max_{M′∈ℕ∩[M∗,∞[} [δf(M′)]|_{M∗: δf(M∗)=δf,sm},            (c)

where v(t) = ŝ(t) is the result of filtering (the estimator ŝ(t) of the signal s(t))
that is the solution of the problem of minimization of the metric between the instantaneous
values of the stochastic process u(t) and the optimization variable, i.e., the function v◦(t);
tk = t − (k/M)∆T̃, k = 0, 1, . . . , M − 1, tk ∈ T̃ = ]t − ∆T̃, t]; T̃ is the interval in which
smoothing of the stochastic process u(t) is realized; M ∈ ℕ, ℕ is the set of natural
numbers; M is the number of samples of the stochastic process u(t) used during
smoothing in the interval T̃; δd(∆T̃), δf(M) are the relative dynamic and fluctuation
errors of smoothing as functions of the length ∆T̃ of the smoothing interval T̃ and the
number of samples M, respectively; δd,sm, δf,sm are the given quantities of the relative
dynamic and fluctuation errors of smoothing, respectively.
The problem of joint detection Det[s(t)] of the LFM signal s(t) and estimation
Est[t1] of the time of its ending t1 is formulated on the basis of the detection and
estimation criteria involved in the same system, which is a logical continuation of
the system (7.4.92):

Det[s(t)]/Est[t1] = { Ev(t̂1 − T0,min/2) ≷_{d0}^{d1} l0(F);                          (a)      (7.4.96)
                      t̂1 = arg min_{ϕ∈Φ1∨Φ2; t◦∈Tobs} Mϕ{(t1 − t◦)²}|_{n(t)≡0},     (b)
where Ev(t̂1 − T0,min/2) is the instantaneous value of the envelope Ev(t) of the
estimator v(t) = ŝ(t) of the useful signal s(t) at the instant t = t̂1 − T0,min/2:
Ev(t) = √(v²(t) + v_H²(t)), v_H(t) = H[v(t)] is the Hilbert transform; d1 and d0 are
the decisions made about the true values of an unknown nonrandom parameter θ,
θ ∈ {0, 1}; l0(F) is a threshold level depending on a given conditional probability of
false alarm F; t̂1 is the estimator of the time of signal ending t1; T0,min is the minimal
period of the oscillation of the LFM signal s(t); Mϕ{(t1 − t◦)²} is the mean squared
difference between the true value of the time of signal ending t1 and the optimization
variable t◦; Mϕ{∗} is the symbol of mathematical expectation with averaging with
respect to the initial phase ϕ of the signal; Φ1 and Φ2 are the possible domains of
definition of the initial phase ϕ of the signal: Φ1 = [−π/2, π/2] and Φ2 = [π/2, 3π/2].
We now explain the optimality criteria and some relationships appearing in
the systems (7.4.93), (7.4.94), (7.4.95) that define the successive stages of signal
processing P F [s(t)], IP [s(t)], Sm[s(t)] of the general algorithm of matched filtering
M F [s(t)] of the useful signal s(t) (7.4.92).
Equations (7.4.93a) and (7.4.93b) of the system (7.4.93) determine the criteria
of minimum of the metrics between the statistical sets of the observations {x(tj)}, {x̃(tj)}
and the results of primary processing y(t), ỹ(t), respectively.
The metric functions |∧_{j=0}^{J−1}[x(tj) − ŷ(t)]| and |∨_{j=0}^{J−1}[x̃(tj) − y̌(t)]| are
chosen to provide the metric convergence and the convergence in probability to the
useful signal s(t) of the sequences y(t) = ∧_{j=0}^{J−1} x(tj), ỹ(t) = ∨_{j=0}^{J−1} x̃(tj)
based on the interactions of the kind (7.4.91a), (7.4.91b).
The relationship (7.4.93c) determines the criterion of the choice of the number of
samples J of the stochastic processes x(t), x̃(t) used during signal processing on the
basis of minimization of the norm ∫_{Tobs} |y(t)| dt. The criterion (7.4.93c) is considered
under two constraint conditions: (1) the interference (noise) is identically equal to zero:
n(t) ≡ 0; (2) the norm ∫_{Tobs} |y(t)| dt of the function y(t) is not equal to zero:
∫_{Tobs} |y(t)| dt ≠ 0.
Equation (7.4.95b) determines the rule of the choice of the length ∆T̃ of the smoothing
interval T̃, based on providing a given quantity of the relative dynamic error δd,sm of
smoothing. Equation (7.4.95c) determines the criterion of the choice of the number of
samples M of the stochastic process u(t), based on providing a given quantity of the
relative fluctuation error δf,sm of its smoothing.
Equation (7.4.96a) of the system (7.4.96) determines the rule of making the
decision d1 concerning the presence of the signal (if Ev(t̂1 − T0,min/2) > l0(F)) or the
decision d0 concerning the absence of the signal (if Ev(t̂1 − T0,min/2) < l0(F)).
The relationship (7.4.96b) determines the criterion of formation of the estima-
tor t̂1 of time of signal ending t1 on the basis of minimization of the mean squared
difference Mϕ {(t1 − t◦ )2 } between true value of time of signal ending t1 and op-
timization variable t◦ under the conditions that the averaging is realized over the
initial phase ϕ of the signal, taken in one of two intervals: Φ1 = [−π/2, π/2] or
Φ2 = [π/2, 3π/2], and interference (noise) is identically equal to zero: n(t) ≡ 0.
Solving the problem of matched filtering MF[s(t)] of the useful signal s(t) in the signal
space L(+, ∨, ∧) with L-group properties in the presence of interference (noise)
n(t) is described in detail in Subsection 7.4.2, so we consider the intermediate results,
which determine the structure-forming elements of the general signal processing
algorithm, paying attention only to the features of LFM signal processing.
The solutions of the optimization equations (7.4.93a), (7.4.93b) of the system
(7.4.93) are the values of the estimators y(t), ỹ(t) in the form of the meet and join
of the observation results {x(tj)}, {x̃(tj)}, respectively:

y(t) = ∧_{j=0}^{J−1} x(tj) = ∧_{j=0}^{J−1} x(t − T_j^{+});   (7.4.97a)

ỹ(t) = ∨_{j=0}^{J−1} x̃(tj) = ∨_{j=0}^{J−1} x̃(t − T_j^{−}),   (7.4.97b)
where the variable intervals T_j^{±} are determined by the following relationships:

T_j^{+} = t_{m,J}^{+} − t_{m,j}^{+}; {t_{m,j}^{+}} : (s′(t_{m,j}^{+}) = 0) & (s″(t_{m,j}^{+}) < 0);
t_{m,J}^{+} = max_j {t_{m,j}^{+}}, T_{j=0}^{+} ≡ 0;   (7.4.98a)

T_j^{−} = t_{m,J}^{−} − t_{m,j}^{−}; {t_{m,j}^{−}} : (s′(t_{m,j}^{−}) = 0) & (s″(t_{m,j}^{−}) > 0);
t_{m,J}^{−} = max_j {t_{m,j}^{−}}, T_{j=0}^{−} ≡ 0,   (7.4.98b)

where {t_{m,j}^{+}}, {t_{m,j}^{−}} are the positions of the local maxima and minima of the
LFM signal s(t) on the time axis, respectively; s′(t) and s″(t) are the first and
second derivatives of the useful signal s(t) with respect to time.
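The variable intervals (7.4.98a,b) can be computed numerically from the sampled signal;
the extremum search below, via sign changes of the first difference, is one possible
discretization rather than the text's prescription:

    import numpy as np

    def variable_intervals(t, s):
        """T±_j per (7.4.98a,b): distances from each local maximum (minimum)
        of the LFM signal to the last one, found from s'(t) sign changes."""
        ds = np.diff(s)
        t_max = t[1:-1][(ds[:-1] > 0) & (ds[1:] <= 0)]   # s' = 0, s'' < 0
        t_min = t[1:-1][(ds[:-1] < 0) & (ds[1:] >= 0)]   # s' = 0, s'' > 0
        return t_max[-1] - t_max, t_min[-1] - t_min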
The condition n(t) ≡ 0 of the criterion (7.4.93c) of the system (7.4.93) implies
the corresponding specifications of the observation equations (7.4.91a,b): x(tj) =
s(tj) ∨ 0 and x̃(tj) = s(tj) ∧ 0. So, according to the relationships (7.4.97a) and (7.4.97b),
the optimal number of samples J used during primary processing is equal to the number
of periods Ns of the signal:
J = Ns . (7.4.100)
Thus, the identity (7.4.101) determines the kind of the coupling equation (7.4.93d)
obtained from joint fulfillment of the criteria (7.4.93a), (7.4.93b), and (7.4.93e) of
the system (7.4.93).
The solution u(t) of the relationships (7.4.94a), (7.4.94b), (7.4.94c) of the system
(7.4.94), determining the criterion of the choice of the functional transformation of the
process w(t), is the function L[w(t)] that determines the gain characteristic of the
limiter:

u(t) = L[w(t)] = [(w(t) ∧ a) ∨ 0] + [(w(t) ∨ (−a)) ∧ 0].   (7.4.102)

The solution of the optimization equation (7.4.95a) of the system (7.4.95) is the value
of the estimator v(t) in the form of the sample median med{∗} of the collection {u(tk)}
of the stochastic process u(t):

v(t) = med_k {u(tk)},   (7.4.103)

where tk = t − (k/M)∆T̃, k = 0, 1, . . . , M − 1; tk ∈ T̃ = ]t − ∆T̃, t]; T̃ is the interval
in which smoothing of the stochastic process u(t) is realized; ∆T̃ is the length of the
smoothing interval T̃.
Solving the problem of joint detection Det[s(t)] of the signal s(t) and estimation
Est[t1] of the time of its ending t1 is described in Subsection 7.4.2. We consider only
the intermediate results that determine the structure-forming elements of the general
processing algorithm. When the interval of the initial phase ϕ is not known beforehand,
the estimator t̂1 of the time of signal ending t1 is determined by the identities:

t̂1 = max_{t̂±∈Tobs} [t̂−, t̂+] + (T0,min/4), ϕ ∈ [−π, π];   (7.4.106a)

or: t̂1 = (t̂− + t̂+)/2 + T0,min/2, ϕ ∈ [−π, π].   (7.4.106b)
Thus, summarizing the relationships (7.4.97), (7.4.100) through (7.4.105), one
can conclude that the estimator t̂1 of the time of signal ending and the envelope Ev(t)
are formed on the basis of further processing of the estimator v(t) = ŝ(t) of the
LFM signal s(t) detected in the presence of interference (noise) n(t). The estimator
ŝ(t) is the smoothing function of the stochastic process u(t) obtained by limiting
the process w(t) that combines the results y(t) and ỹ(t) of the corresponding
primary processing of the observed stochastic processes x(t) and x̃(t) in the
observation interval Tobs.
The block diagram of the signal processing unit, according to the relationships
(7.4.97), (7.4.100) through (7.4.105), includes: two processing channels, each containing
a transversal filter; the units of formation of the positive y+(t) and the negative ỹ−(t)
parts of the processes y(t) and ỹ(t), respectively; the adder that sums the results of
signal processing in the two channels; the limiter; the median filter (MF); the estimator
formation unit (EFU); the envelope computation unit (ECU); and the decision gate (DG)
(see Fig. 7.4.15).
FIGURE 7.4.15 Block diagram of processing unit that realizes LFM signal detection with
joint estimation of time of signal arrival (ending)
The decision gate (DG) compares the instantaneous value of the envelope Ev(t) with the
threshold value l0(F). The decision d1 concerning the presence of the signal (if
Ev(t̂1 − T0,min/2) > l0(F)) or the decision d0 concerning the absence of the signal (if
Ev(t̂1 − T0,min/2) < l0(F)) is made according to the rule (7.4.104) of the criterion
(7.4.96a) of the system (7.4.96).
Figures 7.4.16 through 7.4.19 illustrate the results of statistical modeling of
signal processing realized by the synthesized unit under the following conditions: the
useful signal s(t) is an LFM signal with an integer number of periods Ns = 10 and
deviation ∆ω = ω0/2, where ω0 is the known carrier frequency of the signal s(t), and
with initial phase ϕ = −π/6. The signal-to-noise ratio E/N0 is equal to E/N0 = 10^{−10},
where E is the signal energy and N0 is the power spectral density of the noise. The
product T0,min fn,max = 125, where T0,min is the minimal period of the oscillation of the
carrier of the LFM signal s(t); fn,max is the maximum frequency of the power spectral
density of the noise n(t) in the form of quasi-white Gaussian noise.
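For reproducing such experiments, a test LFM signal under the stated conditions can be
generated as below; the linear sweep law (instantaneous frequency rising from ω0 to
ω0 + ∆ω over the duration T) is an assumption made for illustration:

    import numpy as np

    def lfm_signal(t, A, omega0, d_omega, phi, T):
        """LFM test signal: A·cos(ω0·t + (∆ω/2T)·t² + ϕ); the instantaneous
        frequency sweeps linearly from ω0 to ω0 + ∆ω over [0, T]."""
        return A * np.cos(omega0 * t + 0.5 * (d_omega / T) * t**2 + phi)

    t = np.linspace(0.0, 1.0, 4096)
    omega0 = 2.0 * np.pi * 10.0                       # Ns = 10 carrier periods
    s = lfm_signal(t, 1.0, omega0, omega0 / 2.0, -np.pi / 6.0, 1.0)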
Figure 7.4.16 illustrates the useful signal s(t); realization w∗(t) of the signal w(t)
in the output of the adder; and realization v∗(t) of the signal v(t) in the output of the
median filter.
FIGURE 7.4.16 Useful signal s(t) (dotted line) and realization w∗(t) of signal w(t) in
output of adder (dashed line)

FIGURE 7.4.17 Useful signal s(t) (dotted line) and realization v∗(t) of signal v(t) in
output of median filter (solid line)
Figure 7.4.17 illustrates the useful signal s(t) and realization v∗(t) of the signal
v(t) in the output of the median filter. The matched filter provides the compression of
the LFM signal s(t) in such a way that the duration of the signal v(t) in the output
of the matched filter is equal to the minimal period of the oscillation T0,min of the LFM
signal s(t).
Figure 7.4.18 illustrates: the useful signal s(t); realization v ∗ (t) of the signal v (t)
in the output of median filter; δ-pulses determining time positions of the estimators
t̂± of barycentric coordinates of the positive v+ (t) and negative v− (t) parts of the
smoothed stochastic process; δ-pulse determining time position of the estimator t̂1
of time of signal ending t1 , according to the formula (7.4.105).
Figure 7.4.19 illustrates the useful signal s(t); realization Ev∗(t) of the envelope
Ev(t) of the signal v(t) in the output of the median filter; δ-pulses determining the time
position of the estimators t̂± of barycentric coordinates of the positive v+(t) or the
negative v−(t) parts of the smoothed stochastic process v(t); and the δ-pulse determining
the time position of the estimator t̂1 of the time of signal ending t1, according to the
formula (7.4.105).

FIGURE 7.4.18 Useful signal s(t) (dotted line) and realization v∗(t) of signal v(t) in
output of median filter (solid line)

FIGURE 7.4.19 Useful signal s(t) (dotted line) and realization Ev∗(t) of envelope Ev(t)
of signal v(t) (solid line)
The results of the investigations of algorithms and units of signal detection in
signal space with lattice properties allow us to draw the following conclusions.
(noise) n(t). Thus, while solving the problem of signal classification in linear space, i.e., when the interaction equation x(t) = si(t) + n(t) holds, i = 1, …, m, m ∈ N, the likelihood function is determined by a methodical trick: the change of variable n(t) = x(t) − si(t). However, the same device cannot be used to determine the likelihood ratio when the interaction between the signal and interference (noise) takes the form (7.5.1), inasmuch as that equation cannot be solved with respect to the variable n(t): the lattice L(∨, ∧) has no group properties. Thus, another approach is necessary here.
Based on (7.5.1), the solution of the problem of classification of the signal si(t) from the set of deterministic signals S = {si(t)}, i = 1, …, m in the presence of interference (noise) n(t) lies in the formation of an estimator ŝi(t) of the received signal, which best allows (from the standpoint of the chosen criteria) an observer to classify these signals. In this section, the problem of classification of the signals from the set S = {si(t)}, i = 1, …, m is based on minimization of the squared metric ∫_{t∈Ts} |yi(t) − si(t)|² dt |_{i=k} between the function yi(t) = Fi[x(t)] of the observed process x(t) and the signal si(t) in the presence of the signal sk(t) in x(t): x(t) = sk(t) ∨ n(t):
    yi(t) = Fi[x(t)] = ŝi(t);   (a)
    ∫_{t∈Ts} |yk(t) − sk(t)|² dt |_{x(t)=sk(t)∨n(t)} → min_{yk(t)∈Y};   (b)
    k̂ = arg max_{i∈I; si(t)∈S} [∫_{t∈Ts} yi(t)si(t) dt] |_{x(t)=sk(t)∨n(t)};   (c)   (7.5.2)
    i ∈ I,  I = N ∩ [0, m],  m ∈ N,   (d)
where yi(t) = ŝi(t) is the estimator of the signal si(t) in the presence of the signal sk(t) in the process x(t): x(t) = sk(t) ∨ n(t), 1 ≤ k ≤ m; Fi[∗] is some deterministic function; ∫_{t∈Ts} |yk(t) − sk(t)|² dt = ‖yk(t) − sk(t)‖² is the squared metric between the signals yk(t) and sk(t) in Hilbert space HS; k̂ is the decision concerning the number of the processing channel corresponding to the received signal sk(t) from the set of deterministic signals S = {si(t)}, i = 1, …, m; 1 ≤ k ≤ m, k ∈ I, I = N ∩ [0, m], m ∈ N, N is the set of natural numbers; ∫_{t∈Ts} yi(t)si(t) dt = (yi(t), si(t)) is the scalar product of the signals yi(t) and si(t) in Hilbert space HS.
The relationship (7.5.2a) of the system (7.5.2) defines the rule of formation of the estimator ŝi(t) of the received signal in the i-th processing channel in the form of some deterministic function Fi[x(t)] of the process x(t). The relationship (7.5.2b) determines the criterion of minimum squared metric ∫_{t∈Ts} |yk(t) − sk(t)|² dt in Hilbert space HS between the signals yk(t) and sk(t) in the k-th processing channel. This criterion is considered under the condition that reception of the signal sk(t) is realized: x(t) = sk(t) ∨ n(t). The relationship (7.5.2c) of the system (7.5.2) determines the criterion of maximum value of the correlation integral between the estimator yi(t) = ŝi(t) of the received signal sk(t) in the i-th processing channel and the signal si(t). According to this criterion, the choice of a channel number k̂ corresponds to the maximum value of the correlation integral ∫_{t∈Ts} yi(t)si(t) dt, i ∈ I. The relationship (7.5.2d) determines the domain of definition I of the processing channel index i.
The solution of the problem of minimization of the squared metric (7.5.2b) between the function yk(t) = Fk[x(t)] and the signal sk(t) in its presence in the process x(t): x(t) = sk(t) ∨ n(t), follows directly from the absorption axiom of the lattice L(∨, ∧) (see page 269) contained in the third part of the multilink identity:

    yk(t) = sk(t) ∧ x(t) |_{x(t)=sk(t)∨n(t)} = sk(t) ∧ [sk(t) ∨ n(t)] = sk(t).   (7.5.3)

The identity (7.5.3) directly implies the form of the function Fi[x(t)] from the relationship (7.5.2a) of the system (7.5.2):

    ŝi(t) = Fi[x(t)] = si(t) ∧ x(t).   (7.5.4)

Also, the identity (7.5.3) directly implies that the squared metric is identically equal to zero:

    ∫_{t∈Ts} |yk(t) − sk(t)|² dt |_{x(t)=sk(t)∨n(t)} = 0.   (7.5.5)
The identity (7.5.4) implies that in the presence of the signal sk(t) in the process x(t) = sk(t) ∨ n(t), the solution of the optimization equation (7.5.2c) of the system (7.5.2) is equal to:

    arg max_{i∈I; si(t)∈S} [∫_{t∈Ts} yi(t)si(t) dt] |_{x(t)=sk(t)∨n(t)} = k̂,   (7.5.6)

and at the instant t = t0 + T, the correlation integral ∫_{t∈Ts} yi(t)si(t) dt takes, at i = k, its maximum value, equal to the energy E of the signal sk(t):

    ∫_{t∈Ts} yi(t)si(t) dt |_{i=k} = ∫_{t∈Ts} sk(t)sk(t) dt = E.   (7.5.7)
To summarize the relationships (7.5.4) through (7.5.7), one can conclude that the signal processing unit has to form the estimator yi(t) = ŝi(t) of the signal si(t) in each of the m processing channels, equal, according to (7.5.4), to ŝi(t) = si(t) ∧ x(t); compute the correlation integral ∫_{t∈Ts} yi(t)si(t) dt in the interval Ts = [t0, t0 + T] in each processing channel; and, according to Equation (7.5.2c), decide that the process x(t) = sk(t) ∨ n(t) contains the signal sk(t) corresponding to the channel in which the maximum value of the correlation integral ∫_{t∈Ts} yk(t)sk(t) dt, equal to the signal energy E, is formed at the instant t = t0 + T.
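As an illustration, a minimal discrete-time sketch of this classifier in Python, assuming sampled signals on a common time grid; the function name classify and its arguments are hypothetical, introduced only for this sketch.

    import numpy as np

    def classify(x, S, dt):
        """In each channel form the estimator s_hat_i = s_i ∧ x (pointwise
        minimum, the lattice meet, per (7.5.4)), approximate the correlation
        integral of s_hat_i with s_i, and pick the channel with the maximum
        value, per (7.5.2c)."""
        z = [np.sum(np.minimum(s_i, x) * s_i) * dt for s_i in S]
        return int(np.argmax(z))

With x = np.maximum(S[k], n) (the join sk ∨ n), the k-th correlation integral approximates the signal energy E, so the maximum is attained at i = k irrespective of the noise level, mirroring (7.5.6) and (7.5.7).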
The block diagram of the unit of classification of deterministic signals in a signal space with lattice properties includes the decision gate (DG) and m parallel processing channels, each containing the circuit of formation of the estimator ŝi(t) of the signal si(t); the correlation integral computation circuit ∫_{t∈Ts} ŝi(t)si(t) dt; and the strobing circuit (SC) (see Fig. 7.5.1). The correlation integral computation circuit consists of a multiplier and an integrator.
We now analyze the relations between the signals si(t) and sk(t) and their estimators ŝi(t) and ŝk(t) in the corresponding processing channels in the presence of the signal sk(t) in the process x(t): x(t) = sk(t) ∨ n(t). Let the signals from the set S = {si(t)}, i = 1, …, m be characterized by the same energy Ei = E. For the signals si(t) and sk(t) and their estimators ŝi(t) and ŝk(t) in the corresponding processing channels, at an arbitrary signal-to-interference (signal-to-noise) ratio in the process x(t) = sk(t) ∨ n(t) observed in the input of the classification unit, the following metric relationships hold:
    ‖si(t) − ŝk(t)‖² + ‖sk(t) − ŝk(t)‖² = ‖si(t) − sk(t)‖²;   (7.5.8a)
    ‖si(t) − ŝi(t)‖² + ‖sk(t) − ŝi(t)‖² = ‖si(t) − sk(t)‖²,   (7.5.8b)
where ‖a(t) − b(t)‖² = ‖a(t)‖² + ‖b(t)‖² − 2(a(t), b(t)) is the squared metric between the functions a(t) and b(t) in Hilbert space HS; ‖a(t)‖² is the squared norm of the function a(t) in Hilbert space HS; (a(t), b(t)) = ∫_{t∈T∗} a(t)b(t) dt is the scalar product of the functions a(t) and b(t) in Hilbert space HS; T∗ is the domain of definition of the functions a(t) and b(t).
The relationships (7.5.4) and (7.5.8a) directly imply that at an arbitrary signal-to-interference (signal-to-noise) ratio in x(t) = sk(t) ∨ n(t), the correlation coefficients ρ[sk(t), ŝk(t)] and ρ[si(t), ŝk(t)] between the signals sk(t) and si(t) and the estimator ŝk(t) of the signal sk(t) in the k-th processing channel are, respectively, equal to:
ρ[sk (t), ŝk (t)] = 1; (7.5.9a)
ρ[si (t), ŝk (t)] = rik , (7.5.9b)
and the squared metrics, according to (7.5.8a), are determined by the following
relationships:
    ‖sk(t) − ŝk(t)‖² = 0;   (7.5.10a)
    ‖si(t) − ŝk(t)‖² = ‖si(t) − sk(t)‖² = 2E(1 − rik),   (7.5.10b)

where rik is the cross-correlation coefficient between the signals si(t) and sk(t), and E is the energy of the signals si(t) and sk(t).
The relationship (7.5.8b) implies that at an arbitrary signal-to-interference (signal-to-noise) ratio in the process x(t) = sk(t) ∨ n(t), the correlation coefficients ρ[si(t), ŝi(t)] and ρ[sk(t), ŝi(t)] between the signals si(t) and sk(t) and the estimator ŝi(t) of the signal si(t) in the i-th processing channel are equal to:

    ρ[si(t), ŝi(t)] = 1 − (1/4)(1 − rik);   (7.5.11a)
    ρ[sk(t), ŝi(t)] = 1 − (3/4)(1 − rik),   (7.5.11b)
and the squared metrics from (7.5.8b) are determined by the following relationships:

    ‖si(t) − ŝi(t)‖² = (1/2)E(1 − rik);   (7.5.12a)
    ‖sk(t) − ŝi(t)‖² = (3/2)E(1 − rik);   (7.5.12b)
    ‖si(t) − sk(t)‖² = 2E(1 − rik).   (7.5.12c)
The relationships (7.5.9a) and (7.5.11a) imply that while receiving the signal sk(t) in the process x(t) = sk(t) ∨ n(t), in the k-th processing channel, in the output of the integrator (see Fig. 7.5.1), at the instant t = t0 + T the maximum value of the correlation integral (7.5.7) is formed, equal to E · ρ[sk(t), ŝk(t)] = E. In the i-th processing channel (i ≠ k) at the same time, the value of the correlation integral ∫_{t∈Ts} yi(t)si(t) dt formed is equal to E · ρ[si(t), ŝi(t)] < E.
Thus, regardless of the conditions of parametric and nonparametric prior uncer-
tainty and the probabilistic-statistical properties of interference (noise), the optimal
unit of deterministic signal classification (optimal demodulator) in signal space with
lattice properties realizes error-free classification of the signals from the given set
S = {si (t)}, i = 1, . . . , m.
The three segments of Fig. 7.5.2 illustrate the signals zi(t) and ui(t) in the outputs of the correlation integral computation circuit and the strobing circuit, together with the strobing pulses, in the first, second, and third processing channels, obtained by statistical modeling under the condition that the signals s1(t), s2(t), s3(t), and s1(t) were received in the input of the classification unit in the mixture x(t) = si(t) ∨ n(t), i = 1, …, m, successively in time in the intervals [0, T], [T, 2T], [2T, 3T], [3T, 4T], respectively.
The signals s1(t), s2(t), s3(t) are orthogonal phase-shift-keyed signals with equal energies. The signal-to-interference (signal-to-noise) ratio is E/N0 = 10^(−8), where E is the energy of the signal si(t) and N0 is the power spectral density of interference (noise). When the k-th signal sk(t) from the set of deterministic signals S = {si(t)}, i = 1, …, m is received, the function zk(t) observed in the output of the correlation integral computation circuit in the k-th processing channel is linear. Conversely, if the j-th signal, j ≠ i, is received in the i-th processing channel, then the function zi(t) formed in the output of the correlation integral computation circuit differs from a linear one.
As shown in Fig. 7.5.2, although the signal-to-interference (signal-to-noise) ratio is rather small, the signals ui(t) in the inputs of the decision gate (in the outputs of the strobing circuits) in each processing channel can be accurately distinguished by their amplitude. The relationships (7.5.9a) and (7.5.11a) imply that while receiving the signal sk(t) in the observed process x(t) = sk(t) ∨ n(t), in the k-th processing channel, in the output of the integrator (see Fig. 7.5.1), at the instants t = t0 + jT, j = 1, 2, 3, …, the maximum value of the correlation integral (7.5.7) is formed, equal to E · ρ[sk(t), ŝk(t)] = E. In the i-th processing channel (i ≠ k), at the same time, the value of the correlation integral formed is equal to E · ρ[si(t), ŝi(t)] = 0.75E.
The Shannon theorem on the capacity of a communication channel with additive white Gaussian noise implies the existence of a lower bound inf[Eb/N0] on the ratio Eb/N0 of the energy Eb per transmitted bit of information to the noise power spectral density N0, called the ultimate Shannon limit [51], [52], [164].
This value, inf[Eb/N0] = ln 2, establishes the limit below which error-free information transmission cannot be realized. The previous example implies that while solving the problem of deterministic signal classification in the presence of interference (noise) in signal space with lattice properties, the value inf[Eb/N0] can be arbitrarily small, as can the probability of error while receiving a signal from the set of deterministic signals S = {si(t)}, i = 1, …, m.
However, this does not mean that in such signal spaces one can achieve unbounded values of the capacity of a noisy communication channel. Sections 5.2 and 6.5 show that the capacity of a communication channel, even in the absence of interference (noise), is a finite quantity. It is impossible “. . . to transmit all the information in the Encyclopedia Britannica in the absence of noise by the only signal si(t)”; this, in fact, follows from Theorem 5.1.1.
The results of the investigation of the deterministic signal classification problem in signal space with lattice properties permit us to draw the following conclusions.
impossible due to specificity of the properties of the signals and the algorithms of
their processing in these signal spaces; so another approach is necessary here.
Parameter λ0 of the received signal s0(t, λ0) is usually mismatched with respect to the parameter λ of the expected useful signal s(t, λ) matched with some filter used for signal processing. The effects of mismatching take place during signal detection and influence signal resolution and the estimation of signal parameters. Mismatching can be evaluated from the signal w(λ0, λ) in the output of the signal processing unit matched with the expected signal s(t, λ). In the case of a signal s0(t, λ0) with a mismatched value of the parameter λ0 in the input of the processing unit in the absence of interference (noise), the output response w(λ0, λ) is called a mismatching function [267]. The normalized mismatching function ρ(λ0, λ) is also introduced in [267]:
    ρ(λ0, λ) = w(λ0, λ) / √(w(λ0, λ0) · w(λ, λ)).   (7.6.1)
The function so determined is called the normalized time-frequency mismatching function of a processing unit if the vector parameter of the expected signal includes two scalar parameters: delay time t′0 and Doppler frequency shift F′0 [267]. The vector parameter λ0 of the received signal can be expressed by two similar scalar parameters t′0 = t0 + τ and F′0 = F0 + F, where τ and F are the time delay and Doppler frequency mismatching, respectively.
This section has a twofold goal. First, it is necessary to synthesize the algorithm and unit of resolution of radio frequency (RF) pulses without intrapulse modulation in signal space with lattice properties. Second, it is necessary to determine the potential resolution of this unit.
Synthesis and analysis of the optimal algorithm of RF pulse resolution are fulfilled under the following assumptions. In synthesis, the interference (noise) distribution is considered arbitrary, and the useful signals are considered harmonic, with unknown nonrandom amplitude, time of arrival, and initial phase. Other parameters of the useful signals are considered known. In the further analysis of the signal processing algorithm, interference (noise) is assumed to be Gaussian.
Consider the model of interaction between two harmonic signals s1(t) and s2(t) and interference (noise) n(t) in signal space L(∨, ∧) in the form of a distributive lattice with operations of join a(t) ∨ b(t) and meet a(t) ∧ b(t): a(t) ∨ b(t) = supL(a(t), b(t)), a(t) ∧ b(t) = infL(a(t), b(t)); a(t), b(t) ∈ L(∨, ∧):
Let the model of the received signals s1(t), s2(t) be determined by the expression:

    si(t) = Ai cos(ω0 t + ϕ) for t ∈ Ti;  si(t) = 0 for t ∉ Ti,   (7.6.3)
    PF[s1,2(t)] =
    y(t) = arg min_{y°(t)∈Y; t,tj∈T∗} |∧_{j=0}^{J−1} [x(tj) − y°(t)]|;   (a)
    w(t) = F[y(t)];   (b)
    J = arg min_{y(t)∈Y} [∫_{t∈T∗} |y(t)| dt |_{x(t)=si(t)∨0}],  ∫_{t∈T∗} |y(t)| dt ≠ 0;   (c)   (7.6.6)
    ∫_{t∈T∗} |w12(t) − [w1(t) ∨ w2(t)]| dt |_{|t01−t02|∈∆t0i} → min_{w(t)∈W};   (d)
    w12(t) = w(t)|_{x(t)=s1(t)∨s2(t)},  w1,2(t) = w(t)|_{x(t)=s1,2(t)∨0},
J−1
where y (t) is the solution of the problem of minimization of metric | ∧ [x(tj ) −
j=0
y ◦ (t)]| between the observed statistical collection {x(tj )} and optimization variable
(function) y ◦ (t); w(t) is some deterministic function F [∗] of the result y (t) of min-
imization of the function of the observed collection {x(tj )} (4); t0i is an unknown
7.6 Resolution of Radio Frequency Pulses in Metric Space with Lattice Properties 361
    Sm[s1,2(t)] =
    v(t) = arg min_{v°(t)∈V; t,tk∈T̃} Σ_{k=0}^{M−1} |u(tk) − v°(t)|;   (a)
    ∆T̃ : δd(∆T̃) = δd,sm;   (b)   (7.6.8)
    M = arg max_{M′∈N∩[M∗,∞[} [δf(M′)] |_{M∗: δf(M∗)=δf,sm},   (c)
where v(t) is the smoothing function of the process u(t), i.e., the solution of the problem of minimizing the metric Σ_{k=0}^{M−1} |u(tk) − v°(t)| between the instantaneous values of the stochastic process u(t) and the optimization variable v°(t); tk = t − (k/M)∆T̃, k = 0, 1, …, M − 1, tk ∈ T̃ = ]t − ∆T̃, t]; T̃ is the interval in which smoothing of the stochastic process u(t) is realized; M ∈ N, N is the set of natural numbers; M is the number of samples of the stochastic process u(t) used during smoothing in the interval T̃; δd(∆T̃) and δf(M) are the relative dynamic and fluctuation errors of smoothing as functions of the length ∆T̃ of the smoothing interval T̃ and of the number of samples M, respectively; δd,sm and δf,sm are the required quantities of the dynamic and fluctuation errors of smoothing, respectively.
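The criterion (7.6.8a) is a least-absolute-deviations problem, whose minimizer is the sample median (this is used below in (7.6.23)). A small, purely illustrative numerical check of that fact in Python:

    import numpy as np

    # The sum of absolute deviations in (7.6.8a) is minimized by the sample median.
    rng = np.random.default_rng(0)
    u = rng.normal(size=31)                      # samples u(t_k) in the window
    grid = np.linspace(u.min(), u.max(), 10001)  # candidate values of v°(t)
    cost = np.abs(u[:, None] - grid[None, :]).sum(axis=0)
    assert abs(grid[np.argmin(cost)] - np.median(u)) < 1e-3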
We now explain the optimality criteria and the individual relationships included in the systems (7.6.6), (7.6.7), (7.6.8), which determine the successive processing stages PF[s1,2(t)], IP[s1,2(t)], Sm[s1,2(t)] of the general algorithm Res[s1,2(t)] of resolution of the signals s1(t) and s2(t) (7.6.5).
Equation (7.6.6a) of the system (7.6.6) determines the criterion of minimum of the metric between the statistical set of observations {x(tj)} and the result of primary processing y(t). The choice of the metric function |∧_{j=0}^{J−1} [x(tj) − y°(t)]| should take into account the metric convergence and the convergence in probability to the estimated parameter of the sequence for the interaction in the form (7.6.2a) (see Section 7.2). Equation (7.6.6b) establishes the interrelation between the stochastic processes y(t) and w(t). The relationship (7.6.6c) determines the criterion of the choice of the number of periods J of the signals s1(t), s2(t) used in processing, on the basis of minimization of the norm ∫_{t∈T∗} |y(t)| dt. The criterion (7.6.6c) is considered under three constraint conditions: (1) interference (noise) is identically
The solution of Equation (7.6.9) is the value of the estimator y(t) in the form of the meet of the observation results {x(tj)}:

    y(t) = ∧_{j=0}^{J−1} x(tj) = ∧_{j=0}^{J−1} x(t − jT0).   (7.6.10)

The derivative of the function |∧_{j=0}^{J−1} [x(tj) − y°(t)]|, according to the relationship (7.6.9), changes its sign from minus to plus at the point y(t). Thus, the extremum determined by the formula (7.6.10) is the minimum point of this function and the solution of Equation (7.6.6a) that determines this estimation criterion.
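A minimal discrete-time sketch of the estimator (7.6.10) in Python; the parameter d (the number of samples per carrier period T0) and the start-up handling are assumptions of this illustration.

    import numpy as np

    def primary_meet(x, J, d):
        """y(t) = meet (pointwise minimum) of J copies of x delayed by
        multiples of the carrier period T0 (d samples per period)."""
        x = np.asarray(x, dtype=float)
        y = x.copy()
        for j in range(1, J):
            shifted = np.roll(x, j * d)
            shifted[: j * d] = np.inf   # start-up samples excluded from the minimum
            y = np.minimum(y, shifted)
        return y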
The condition of the criterion (7.6.6c) of the system (7.6.6), x(t) = si(t) ∨ 0, i = 1, 2, turns the observation equation (7.6.2) into the following form: x(tj) = si(tj) ∨ 0, j = 0, 1, …, J − 1; therefore, according to the relationship (7.6.10), the identity holds:

    y(t) |_{x(t)=si(t)∨0} = [si(t) ∨ 0] ∧ … ∧ [si(t − (J − 1)T0) ∨ 0].   (7.6.11)

On the basis of the identity (7.6.11), we obtain the value of the norm ∫_{t∈T∗} |y(t)| dt from the criterion (7.6.6c):

    ∫_{t∈T∗} |y(t)| dt = 4(N − J + 1)Ai/π for J ≤ N;  0 for J > N,   (7.6.12)
F[y(t)] = y(t) ∨ 0.

It is obvious that the coupling equation (7.6.6b) has to be invariant with respect to the presence (absence) of interference (noise) n(t), so the final variant of the coupling equation can be written, on the basis of Equation (7.6.18), in the form:
Hence, the identity (7.6.20) determines the form of the coupling equation (7.6.6b) obtained on the basis of the criterion (7.6.6d). According to the relationship (7.6.20), the noninformative component of the process y(t), determined by its negative part y−(t) = y(t) ∧ 0, must be excluded from signal processing, while the positive part y+(t) = y(t) ∨ 0 of the process y(t) takes part in the further processing; y+(t) is the informational component of y(t). From the energetic standpoint, the informational y+(t) and noninformational y−(t) components contain the 1/N and (N − 1)/N parts of the norm ∫_{t∈T∗} |y(t)| dt |_{x(t)=s1,2(t)}, respectively, in the presence of only the signal s1(t) or s2(t) in the input of the processing unit: x(t) = s1,2(t).
In the absence of interference (noise), n(t) = 0, the following relationships hold between the signals in the input and output of the processing unit:

The solution of the last equation is the value of the estimator v(t) in the form of the sample median med{∗} of the sample collection {u(tk)} of the stochastic process u(t) in the interval T̃ = ]t − ∆T̃, t]:

    v(t) = med{u(tk)}, tk ∈ T̃,   (7.6.23)

where tk = t − (k/M)∆T̃, k = 0, 1, …, M − 1, and the quantities ∆T̃ and M are chosen according to the criteria (7.6.8b) and (7.6.8c) of the system (7.6.8), respectively.
The derivative of the function Σ_{k=0}^{M−1} |u(tk) − v°(t)| |_{x(t)=n(t)∨0} changes its sign from minus to plus at the point v(t). Hence, the extremum determined by the formula (7.6.23) is the minimum of this function and the solution of Equation (7.6.8a) determining this estimation criterion.
Thus, summarizing the relationships (7.6.10), (7.6.13), (7.6.20), (7.6.22), (7.6.23), we conclude that the estimator v(t) of the signals s1(t) and s2(t) received in the presence of interference (noise) n(t) is the function of smoothing of the stochastic process u(t) obtained by limitation of the process w(t), which is the positive part y+(t) of the process y(t): w(t) = y+(t) = y(t) ∨ 0, where y(t) = ∧_{j=0}^{N−1} x(t − jT0) is the result of primary processing of the observed statistical collection {x(t − jT0)}, j = 0, 1, …, N − 1:

    v(t) = med{u(t − (k/M)∆T̃)};   (7.6.24a)
    u(t) = L[w(t)];   (7.6.24b)
    w(t) = [∧_{j=0}^{N−1} x(t − jT0)] ∨ 0,   (7.6.24c)
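A compact discrete-time sketch of the chain (7.6.24a)-(7.6.24c) in Python, reusing primary_meet from the earlier sketch; the limiter level limit and the window length M are illustrative parameters, and the one-sided clip standing in for the limiter L[·] is an assumption of this sketch.

    import numpy as np

    def resolve(x, N, d, limit, M):
        """Resolution chain (7.6.24): meet over N carrier periods, positive
        part, limiter, and running median smoothing over M recent samples."""
        w = np.maximum(primary_meet(x, N, d), 0.0)   # w(t) = y(t) ∨ 0   (7.6.24c)
        u = np.minimum(w, limit)                     # u(t) = L[w(t)]    (7.6.24b)
        return np.array([np.median(u[max(0, i - M + 1): i + 1])  # (7.6.24a)
                         for i in range(len(u))])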
We shall first analyze the resolution ability of the obtained processing unit on the basis of the normalized mismatching function (7.6.1), without taking into account the influence of interference (noise); then we shall do so in the presence of interference (noise) within the model (7.6.2). This analysis is carried out using the signal w(t) (7.6.24c) in the input of the limiter (see Fig. 7.6.1), which avoids inessential details while preserving the physical sense of the features of this processing.
In the absence of interference (noise), n(t) = 0, in the input of the resolution unit (see Fig. 7.6.1), under the signal model (7.6.3), the interaction equation (7.6.2) takes the form:

    x(t) = s(t) ∨ 0;   (7.6.25a)
    s(t) = A cos(ω0 t + ϕ) for t ∈ Ts;  s(t) = 0 for t ∉ Ts,   (7.6.25b)
where all the variables have the same sense as in the signal model (7.6.3): A is an unknown nonrandom amplitude of the useful signal s(t); ω0 = 2πf0; f0 is the known carrier frequency of the signal s(t); ϕ is an unknown nonrandom initial phase of the signal s(t), ϕ ∈ [−π, π]; Ts is the domain of definition of the signal s(t), Ts = [t0, t0 + T]; t0 is an unknown time of arrival of the signal s(t); T is the known signal duration, T = N T0; N is the number of periods of the harmonic signal s(t); N ∈ N, N is the set of natural numbers; T0 is the period of the carrier.
Information concerning the values of unknown nonrandom amplitude A and
initial phase ϕ of the signal s(t) is contained in its estimator ŝ(t) = w(t) in the
interval Tŝ , t ∈ Tŝ :
Tŝ = [t0 + (N − 1)T0 , t0 + N T0 ], (7.6.26)
where t0 is an unknown time of arrival of the signal s(t); N is a number of periods
of harmonic signal s(t); N ∈ N, N is the set of natural numbers; T0 is a period of
carrier.
We note that this filter cannot be adequately described by the pulse response used to define a linear filter: one can easily verify that the response of the filter to a δ-function is identically equal to zero. The response of the filter that realizes processing of the harmonic signal s(t) (7.6.25b) in the absence of interference (noise) is the estimator ŝ(t), which, according to the expression (7.6.24c), takes the values:

    ŝ(t) = s(t) = A cos[ω0(t − t0) + ϕ] for s(t) ≥ 0, t ∈ Tŝ;  ŝ(t) = 0 for s(t) < 0, t ∈ Tŝ, or t ∉ Tŝ.   (7.6.27)
Due to this property, the filter of the signal space with lattice properties fundamentally differs from the filter of the linear signal space matched with the same signal s(t), whose response is determined by the autocorrelation function of the signal. The relationship (7.6.27) shows that the filter realizing the primary processing (7.6.24c) compresses the useful signal by a factor of N = T f0 and, like any nonlinear device, expands the spectrum of the processed signal. Using the known analogy, the result (7.6.27) can be interpreted as the potential capability of the filter in signal resolution in the time domain under an extremely large signal-to-noise ratio E/N0 → ∞ in the input. The expression (7.6.27) implies that the filter resolution ∆τ in the time parameter is about a quarter of the carrier period T0: ∆τ ∼ 1/(4f0) = T0/4, where f0 is the carrier frequency of the signal.
Figure 7.6.2 illustrates the signal w(t) in the output of the unit of formation of the positive part (see Fig. 7.6.1) during the interaction of two harmonic signals s1(t) and s2(t) in the input of the synthesized unit in the absence of interference (noise), n(t) = 0. In the figure, 1 is the signal s1(t); 2 is the signal s2(t); 3 is the response ŝ1(t) of the signal s1(t); 4 is the response ŝ2(t) of the signal s2(t). The responses 3 and 4 of the signals s1(t) and s2(t) are shown by the solid line.

FIGURE 7.6.2 Signal w(t) in input of limiter in absence of interference (noise). 1 and 2: signals s1(t) and s2(t), respectively; 3 and 4: responses ŝ1(t) and ŝ2(t) of signals s1(t) and s2(t), respectively
We can determine normalized time-frequency mismatching function (7.6.1) of
the filter that realizes the primary processing algorithm (7.6.24c) of harmonic signal
s(t) within the model (7.6.25b) in signal space with lattice properties in the absence
of interference (noise), assuming for simplicity that ϕ = −π/2:
The received signal s′(t) is transformed as a result of the Doppler effect, and its time-frequency characteristics differ from those of the initial signal s(t):

where A′ is the changed amplitude of the received signal s′(t); ω′0 = 2πf′0 is the changed cyclic frequency of the carrier; f′0 = f0(1 + δF) is the changed carrier frequency of the received signal s′(t); δF = F/f0 is the relative quantity of the Doppler frequency shift; F is the absolute quantity of the Doppler frequency shift; t′0 is the time of arrival of the received signal; T′ = T/(1 + δF) = N · T′0 is the changed duration of the signal; N is the number of periods of the received signal s′(t); T′0 = T0/(1 + δF) is the changed period of the carrier.
The response w(t) to the signal s′(t) in the output of the filter in the absence of interference (noise) is described by the function:

where t′1 and t′2 are the times of beginning and ending of the response w(t); t′m = (t′1 + t′2)/2 is the time corresponding to the maximum value of the response w(t); A′ is the amplitude of the transformed signal s′(t).
The first part w↑(t) of the function w(t) characterizes the leading edge of the pulse in the output of the filter, and the second part w↓(t) its trailing edge. The leading edge w↑(t) corresponds to the first quarter-wave of the first period of the signal s′(t + (N − 1)T′0) delayed by N − 1 periods T′0 of the carrier oscillation. The trailing edge w↓(t) corresponds to the second quarter-wave of the last period of the received signal s′(t).
In the cases of positive, δF > 0, and negative, δF < 0, relative Doppler shifts, the values t′1, t′m, and t′2 are determined by the following relationships:

    t′1 = tm + (N − 1)∆T0 · 1(−δF) − 0.25T′0;
    t′2 = tm + (N − 1)∆T0 · 1(δF) + 0.5∆T0 + 0.25T′0;   (7.6.30)
    t′m = (t′1 + t′2)/2 = tm + 0.5[(N − 1) + 0.5]∆T0,

where δτ and δF are the relative time delay and frequency shift, respectively; δτ1 and δτ2 are the relative times of beginning and ending of the mismatching function ρ(δτ, δF) at F = const, respectively; δτm = (δτ1 + δτ2)/2 is the relative time corresponding to the maximum value of the mismatching function ρ(δτ, δF) at F = const.
In the cases of positive, δF > 0, and negative, δF < 0, relative frequency shifts, the values δτ1, δτm, and δτ2 are determined, according to the transformation (7.6.31) and the relationship (7.6.30), by the following expressions:

    δτ1 = (N − 1)δT0 · 1(−δF) − 0.25;
    δτ2 = (N − 1)δT0 · 1(δF) + 0.5δT0 + 0.25;   (7.6.33)
    δτm = (δτ1 + δτ2)/2 = 0.5[(N − 1) + 0.5]δT0,

where δT0 = (T′0 − T0)/T0 = (−δF)/(1 + δF) is the relative difference of the carrier periods of the transformed s′(t) and initial s(t) signals.
The form of the normalized time-frequency mismatching function ρ(δτ, δF) of the filter for N = 50 is shown in Fig. 7.6.3. Cut projections of the normalized time-frequency mismatching function ρ(δτ, δF) (7.6.32), made by horizontal planes ρ(δτ, δF) = const parallel to the coordinate plane (δτ, δF), for N ≫ 1 are similar in form
are considered resolvable, so that the quantities δτ and δF are the doubled minimal
values of the roots of the equations ρ(δτ, 0) = 0.5, ρ(0, δF ) = 0.5, respectively:
The relationships (7.6.32) and (7.6.33) imply that the potential resolutions of the filter matched with the harmonic signal (7.6.28a) in signal space with lattice properties, in relative time delay δτ and relative frequency shift δF, are equal to:

This means that the potential resolutions of such a filter in time delay ∆τ and frequency shift ∆F are determined by the relationships:
The relationship (7.6.38) implies that, to provide simultaneously the desired values of resolution in time delay ∆τ and frequency shift ∆F, it is necessary to use signals with sufficiently large numbers of periods N of oscillation. This implies that to provide simultaneously high resolution in both time and frequency parameters in a signal space with lattice properties, it is not necessary to use signals with large time-bandwidth products, inasmuch as this problem can be easily solved by means of harmonic signals of the form (7.6.28).
We now determine how the presence of interference (noise) affects the filter resolution. While receiving the realization s∗(t) of the signal s(t), the conditional probability density function (PDF) py(z; t/s∗) ≡ py(z/s∗) of the instantaneous value y(t) of the statistics (7.6.10), y(t) = ∧_{j=0}^{N−1} x(t − jT0), t ∈ Tŝ (see formula (7.6.26)), is determined by the expression:

where δ(z), 1(z) are the Dirac delta and Heaviside step functions, respectively; Fn(z) is the cumulative distribution function (CDF) of the interference (noise) n(t).
Any interval Tŝ is a partition of two sets T′ and T″, respectively:

where P = P(Cc) + N · ∫_{s(t)}^{0} pn(z)[1 − Fn(z)]^(N−1) dz, s(t) ≤ 0, t ∈ T′;
Obviously, when s(t) > 0, t ∈ T″ ⊂ Tŝ, the PDF pw(z/s∗) of the stochastic process w(t) in the output of the filter is identically equal to the PDF py(z/s∗) (7.6.39):

The probability density function pw(z/s∗), t ∈ Tŝ, is also the PDF of the estimator ŝ(t) of the instantaneous value of the signal s(t) in the input of the limiter.
Every random variable n(t) is characterized by a PDF pn(z) with zero expectation; hence, assuming that s∗(t) ≥ 0, the following inequality holds:

According to the inequality (7.6.45), the upper bound of the probability P(Ce) of erroneous formation of the estimator ŝ(t) is determined by the relationship:

Correspondingly, the lower bound of the probability P(Cc) of the correct formation of the estimator ŝ(t) is determined by the inequality:
Analysis of the relationship (7.6.44) allows us to conclude that the response to the signal s(t), t ∈ Tŝ, is observed in the filter output in the interval Tŝ = [t0 + (N − 1)T0, t0 + N T0] with extremely high probability P(Cc) ≥ 1 − 2^(−N) regardless of the signal-to-noise ratio. The relationship (7.6.44) also implies that the estimator ŝ(t) is biased; nevertheless, it is asymptotically unbiased and consistent, inasmuch as it converges in probability and in distribution to the estimated value s(t).
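A small Monte Carlo sketch of this bound in Python, under the simplifying assumption that at a fixed instant with s(t) = s ≥ 0 the N samples entering the meet (7.6.10) are x_j = s ∨ n_j with independent zero-median noise samples n_j, so that an error occurs only when all N noise samples exceed s:

    import numpy as np

    rng = np.random.default_rng(1)
    N, trials, s = 10, 200_000, 0.3          # s: an arbitrary value s(t) >= 0
    n = rng.normal(size=(trials, N))         # zero-median noise samples
    y = np.maximum(s, n).min(axis=1)         # meet of the samples x_j = s ∨ n_j
    p_correct = np.mean(np.isclose(y, s))    # fraction of exact recoveries of s
    print(p_correct >= 1 - 2.0 ** (-N))      # True: P(Cc) >= 1 - 2^(-N)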
The expressions (7.6.43) and (7.6.44) for the conditional PDF pw(z/s∗) of the instantaneous values of the stochastic process w(t) in the limiter output should be specified for arbitrary instants, inasmuch as they were obtained on the assumption that t ∈ Tŝ, where the interval Tŝ (7.6.26) corresponds to the domain of definition of the signal response in the filter output (7.6.27) and, respectively, to the domain of definition of the estimator ŝ(t) of the signal s(t).
Figure 7.6.5 illustrates the realization w∗(t) of the signal w(t), including the signal response and the residual overshoots on both sides of the signal response with amplitudes equal to the instantaneous values of the signal s(t) ≥ 0 at the random instants t ∈ Tk from the intervals {Tk} corresponding to the time positions of the positive semiperiods of the autocorrelation function of the harmonic signal s(t):
FIGURE 7.6.5 Realization w∗ (t) of signal w(t) including signal response and residual
overshoots. 1 = signal s(t); 2 = signal response ŝ(t); 3 = residual overshoots
Fig. 7.6.5 shows: 1 is the signal s(t); 2 is the signal response ŝ(t), t ∈ Tŝ ; 3
represents the residual overshoots.
Any interval Tk is a partition of two sets T′k and T″k, respectively:

    Tk = T′k ∪ T″k, T′k ∩ T″k = ∅;  s(t′) ≤ 0, t′ ∈ T′k;  s(t″) > 0, t″ ∈ T″k;   (7.6.49)
    T′k = [t0 + [(N − 1) + k]T0 + T0/4 − ϕT0/(2π); t0 + [(N − 1) + k]T0 + 3T0/4 − ϕT0/(2π)];  T″k = Tk − T′k,

and the measures of the intervals T′k and T″k are equal: m(T′k) = m(T″k) = T0/2.
Formation of the signal w(t) in the input of the limiter is realized according to the rule (7.6.24c):

    w(t) = y(t) ∨ 0 = [∧_{j=0}^{N−1} x(t − jT0)] ∨ 0, t ∈ Tk,   (7.6.50)

where Tk is determined by the formula (7.6.48), k = 0, ±1, …, ±(N − 1), Tk=0 ≡ Tŝ.
The expression (7.6.50) implies that, as a result of N − |k| tests from N, at an instant t ∈ Tk the signal w(t) can take values from the set {0, s(ti), n(tl)}, i = 1, …, N − |k|; l = 1, …, N − |k|. As a result of the remaining |k| tests from N, it can take values from the set {0, n(tm)}, m = 1, …, |k|. Thus, in the intervals {Tk}, the signal s(t), t ∈ Tk, is used less often (by |k| tests) to form the estimator than in the interval Tk=0 ≡ Tŝ, which naturally worsens the signal processing quality. Thus, at an instant t ∈ Tk, the signal w(t) is determined by the least ymin(t) = y1(t) ∧ y2(t) of two random variables y1(t), y2(t):
the random variable y2(t). Then the PDF py(z/s∗), t ∈ Tk, of the random variable ymin(t) = y1(t) ∧ y2(t) is determined by the relationship:

    py(z/s∗) = py1(z/s∗ = 0)[1 − Fy2(z/s∗)] + py2(z/s∗)[1 − Fy1(z/s∗ = 0)],   (7.6.52)
where, according to (7.6.39), the PDFs py1(z/s∗ = 0) and py2(z/s∗) of the random variables y1(t) and y2(t) are determined by the expressions:

    py2(z/s∗) = P(Cc)|_{N−|k|} · δ(z − s∗(t)) + (N − |k|) · pn(z)[1 − Fn(z)]^(N−|k|−1) · 1(z − s∗(t)),

and their CDFs Fy1(z/s∗ = 0) and Fy2(z/s∗) are determined by the relationships:

    Fy1(z/s∗ = 0) = P(Cc)|_{|k|} · 1(z),  Fy2(z/s∗) = P(Cc)|_{N−|k|} · 1(z − s∗(t));

where P(Cc)|_q = 1 − [1 − Fn(s∗(t))]^q and P(Ce)|_q = [1 − Fn(s∗(t))]^q, q = const.
Then the PDF py(z/s∗) (7.6.52) can be represented in the form:

    py(z/s∗) = py1(z/s∗ = 0){1 − 1(z − s∗(t)) + [1 − Fn(z)]^(N−|k|) · 1(z − s∗(t))} +
             + py2(z/s∗){1 − 1(z) + [1 − Fn(z)]^(|k|) · 1(z)}.   (7.6.53)
Depending on the values taken by the signal s(t), t ∈ Tk, in the interval Tk (7.6.49), s(t′) ≤ 0, t′ ∈ T′k, or s(t″) > 0, t″ ∈ T″k, the PDF py(z/s∗) (7.6.53) is determined by the expressions:

    py(z/s∗)|_{t∈T″k} = P(Cc)|_{|k|,s=0} · δ(z) + |k| · pn(z)[1 − Fn(z)]^(|k|−1) · [1(z) − 1(z − s∗(t))] +
                      + P(Cc)|_{N−|k|} P(Ce)|_{|k|} δ(z − s∗(t)) +
                      + N · pn(z)[1 − Fn(z)]^(N−1) · 1(z − s∗(t)).   (7.6.55)
Due to its nonnegativity under the condition s(t) > 0, t ∈ T″k, the PDF pw(z/s∗)|_{t∈T″k} in the filter output is identically equal to the PDF py(z/s∗)|_{t∈T″k} (7.6.55):

    pw(z/s∗)|_{t∈T″k} ≡ py(z/s∗)|_{t∈T″k}.   (7.6.56)
If the signal s(t) takes the values s(t) ≤ 0, t ∈ T′k, in the interval Tk, then the PDF is equal to:

    pw(z/s∗)|_{t∈T′k} = [P(Cc)|_{|k|} + P + P(Cc)|_{|k|,s=0} · P(Ce)|_{N−|k|,s=0}] δ(z) +
                      + N · pn(z)[1 − Fn(z)]^(N−1) · 1(z),   (7.6.57)

where

    P = (N − |k|) · ∫_{s∗(t)}^{0} pn(z)[1 − Fn(z)]^(N−|k|−1) dz;
    pws(z/s∗) = [1 − 2^(−|k|) + 2^(−N)]δ(z) + [2^(−|k|) − 2^(−N)]δ(z − s∗(t)) for t ∈ T″k;   (7.6.58)
    pws(z/s∗) = δ(z) for t ∈ T′k.

Hence, the mathematical expectation M{ws(t)} of the signal component ws(t) of the process w(t) is equal to:

    M{ws(t)} = ∫_{−∞}^{∞} z · pws(z/s∗) dz = [2^(−|k|) − 2^(−N)] · s∗(t) for t ∈ T″k;  0 for t ∈ T′k.
Figure 7.6.6 illustrates the realization w∗(t) of the stochastic process w(t) in the output of the unit of formation of the positive part, and Fig. 7.6.7 shows the realization v∗(t) of the stochastic process v(t) in the output of the median filter (see Fig. 7.6.1), under the interaction between two harmonic signals s1(t), s2(t) and interference (noise) n(t) in the input of the synthesized unit, obtained by statistical modeling of processing of the input signal x(t). In the figures, 1 denotes the signal s1(t); 2 is the signal s2(t); 3 is the response of the signal s1(t); 4 is the response of the signal s2(t); 5 represents the residual overshoots of the signal component ws∗(t) of the realization w∗(t) of the stochastic process w(t).
The examples correspond to the following conditions. The signals s1(t) and s2(t) are narrowband RF pulses without intrapulse modulation; the interference is quasi-white Gaussian noise with the ratio of the maximum frequency fn,max of the interference power spectral density to the carrier frequency f0 of the signals s1(t) and s2(t) equal to fn,max/f0 = 8; the signal-to-interference (signal-to-noise) ratios for the signals s1(t) and s2(t) take the values Es1/N0 = 8 · 10^(−7) and Es2/N0 = 3.2 · 10^(−6), respectively (where Es1, Es2 are the energies of the signals s1(t) and s2(t), and N0 is the power spectral density of the interference (noise)). The number of samples N of the input signal x(t) used in signal processing, determined by the number of periods of the carrier of the signals s1(t) and s2(t), is equal to 10. The delay of the signal s1(t) with respect to the signal s2(t) is equal to 1.25T0, where T0 is the period of the carrier oscillation: T0 = 1/f0.
The responses of the signals s1(t) and s2(t), shown in Fig. 7.6.6, are easily distinguished against the remnants of the nonlinear interaction between the interference (noise) and the signals s1(t), s2(t), which appear as residual overshoots (line 5) of the signal component ws∗(t) of the realization w∗(t) of the stochastic process w(t). As can be seen from Fig. 7.6.7, the median filter (see Fig. 7.6.1) removes the residual overshoots (line 5) of the signal component ws∗(t) and slightly cuts the tops of the responses of the signals s1(t) and s2(t). The results of statistical modeling of processing of the input signal x(t), shown in Figs. 7.6.6 and 7.6.7, confirm the high efficiency of harmonic signal resolution in the presence of strong interference provided by the synthesized processing algorithm.
Using the identity w(t) = s∗(t) with respect to the formula (7.6.59), where w(t) is determined by the function (7.6.29), normalizing this function along with the transformation of the variable t (7.6.31), we obtain the expression for the normalized time-frequency mismatching function ρ(δτ, δF) of the filter in the presence of strong interference (see Fig. 7.6.8):

    ρ(δτ, δF) = a · sin[2π(1 + δF)(δτ − δτ1,k)] for δτ1,k ≤ δτ < δτm,k;   (7.6.60)
    ρ(δτ, δF) = −a · sin[2π(1 + δF)(δτ − δτ2,k)] for δτm,k ≤ δτ < δτ2,k,

where δτ and δF are the relative time delay and relative frequency shift; a is a multiplier equal to (2^(−|k|) − 2^(−N))/(1 − 2^(−N)); δτ1,k, δτ2,k are the relative times of beginning and ending of the intervals of definition of the mismatching function ρ(δτ, δF) at F = const; δτm,k = (δτ1,k + δτ2,k)/2 is the relative time corresponding to the maximum value of the mismatching function ρ(δτ, δF) at F = const.
In the cases of positive, δF > 0, and negative, δF < 0, relative frequency shifts, the values δτ1,k, δτm,k, and δτ2,k are determined, according to the transformation (7.6.31) and the relationship (7.6.30), by the following expressions:

    δτ1,k = (N − 1)δT0 · 1(−δF) − 0.25 + k;
    δτ2,k = (N − 1)δT0 · 1(δF) + 0.5δT0 + 0.25 + k;   (7.6.61)
    δτm,k = (δτ1,k + δτ2,k)/2 = 0.5[(N − 1) + 0.5]δT0 + k,

where δT0 = (T′0 − T0)/T0 = (−δF)/(1 + δF) is the relative difference of the carrier periods of the transformed s′(t) and initial s(t) signals.
The expression (7.6.60) and the relationships (7.6.61) imply that the resolutions in relative time delay δτ and relative frequency shift δF of the filter, matched with the harmonic signal (7.6.28a) in signal space with lattice properties, remain invariable under the interaction between the signal and interference (noise), regardless of the energetic relationships between them, as determined by Equations (7.6.36). The multipeak character of the function ρ(δτ, δF), which spreads along the axis of relative time delay δτ in the interval [−0.25 − (N − 1), (N − 1) + 0.25] with maxima at the points δτ = k, k = 0, ±1, …, ±(N − 1), gives rise to ambiguity of time delay measurement.
The investigation of the resolution algorithm of RF pulses without intrapulse
modulation with a rectangular envelope in signal space with lattice properties allows
us to draw the following conclusions.
1. While solving the problem of signal resolution in signal space with lattice properties, there exists the possibility of realizing a so-called “needle-shaped” response, without side lobes, in the output of a signal processing unit. The independence of the responses of the transformed s′(t) (7.6.28b) and the initial s(t) (7.6.28a) signals at time delays and frequency shifts beyond the bounds of the filter resolution in the time ∆τ and frequency ∆F parameters is achieved by nonlinear processing in signal space with lattice properties.
2. The effect of an arbitrarily strong interference in signal space with lat-
tice properties does not change the filter resolutions in time delay and
in frequency shift. However, it does cause the ambiguity of time delay
measurement.
3. The absence of the constraints imposed by the uncertainty principle of Woodward [118] allows us to obtain any resolution in time delay ∆τ and in frequency shift ∆F in signal space with lattice properties even using harmonic signals. The latter feature is provided by the proper choice of the carrier frequency f0 and of the number of periods N of carrier oscillations within the signal duration T.
consider the construction of a signal space with lattice properties on the basis of spaces with semigroup properties.
7.7.1 Method of Mapping of Linear Signal Space into Signal Space with
Lattice Properties
Signal space with lattice properties L(∨, ∧) can be obtained by transformation of the signals of a linear space in such a way that the results of interactions x(t) and x̃(t) between the signal s(t) and interference (noise) n(t) in signal space with lattice properties L(∨, ∧) with operations of join ∨ and meet ∧ are realized according to the relationships:

    x(t) = s(t) ∨ n(t) = [s(t) + n(t) + |s(t) − n(t)|]/2;   (7.7.1a)
    x̃(t) = s(t) ∧ n(t) = [s(t) + n(t) − |s(t) − n(t)|]/2.   (7.7.1b)
Based on the relationships (7.7.1), to form the results of interaction between the signal s(t) and interference (noise) n(t) in signal space with lattice properties L(∨, ∧), it is necessary to have two linearly independent equations with respect to s(t) and n(t). Let two linearly independent functions a(t) and b(t) of the useful signal s(t) and interference (noise) n(t), received by a directional antenna A and an omnidirectional antenna B (see Fig. 7.7.1), arrive at the two inputs of the unit of mapping T of linear signal space into signal space with lattice properties.
FIGURE 7.7.2 Directional field patterns FA (θ), FB (θ) of antennas A, B; θs , θn are direc-
tions of arrival of signal and interference (noise), respectively
There are two signals in the two outputs of the mapping unit T: x(t) = s(t) ∨ n(t) and x̃(t) = s(t) ∧ n(t), determined by the relationships (7.7.1). The useful signal s(t) and interference (noise) n(t) are received by the antennas of channels A and B (see Fig. 7.7.2), which are such that: (1) the antennas A and B have the same phase center; (2) the antennas A and B are characterized by the directional field patterns FA(θ) and FB(θ), respectively, so that FA(θs) = G, FA(θn) = g, G > g, G > 1, and FB(θ) = 1; (3) the complex gain-frequency characteristics K̇A(ω) and K̇B(ω) of the receiving channels A and B are identical: K̇A(ω) = K̇B(ω). Then there are two signals a(t) and b(t) in the inputs of the mapping unit T:

    a(t) = G · s(t) + g · n(t);   (a)   (7.7.2)
    b(t) = s(t) + n(t),   (b)
where G = FA (θs ) is the gain of antenna A from the direction of arrival of the
signal s(t); g = FA (θn ) is the gain of antenna A from the direction of arrival of
interference (noise) n(t).
We suppose that the direction of arrival θs of the signal is known, while the direction of arrival θn of interference (noise) is unknown. We also consider that the interference n(t) is quasi-white Gaussian noise with independent samples {n(tj)}, j = 0, 1, 2, …, separated by the time interval ∆t = |tj − tj±1| = 1/(2fn,max), where fn,max is the upper bound frequency of the power spectral density of the interference, and that the useful signal s(t) changes only slightly over the interval ∆t, i.e., s(t) = s(t ± ∆t).
The equation system (7.7.2) cannot be solved with respect to s(t) and n(t) due to the presence of the additional unknown quantity g. To solve it, one more equation system can be used in addition to the system (7.7.2). It is formed on the basis of the observations a(t) and b(t) delayed by the interval ∆t, taking into account the last assumption concerning a slow (as compared with ∆t) signal change, s(t) = s(t ± ∆t):
We now obtain the relationships determining the algorithm of the mapping unit
T : a(t), b(t) → x(t), x̃(t); a(t), b(t) ∈ LS (+); x(t), x̃(t) ∈ L(∨, ∧).
Joint fulfillment of both pairs of Equations (7.7.2a), (7.7.3a) and (7.7.2b),
(7.7.3b) implies the system:
The equations of the system (7.7.4) imply the identity determining the gain g of antenna A affecting the interference (noise) n(t) arriving from the direction θn:

    g = FA(θn) = [a(t) − a(t′)] / [b(t) − b(t′)].   (7.7.5)
For the identity (7.7.6) to hold, it is necessary to select the values k and q as the roots of the system of equations:

    q(G − k) = 1;
    q(g − k) = −1.

Solving it, we obtain the values of the coefficients k and q providing that the identity (7.7.6) holds:

    k = (G + g)/2;   (7.7.7a)
    q = 2/(G − g).   (7.7.7b)
Taking into account the identity (7.7.6), one can compose the required relationship:
where k, q are the coefficients determined by the relationships (7.7.7a) and (7.7.7b),
respectively.
Substituting the sum s(t) + n(t) from Equation (7.7.2b) and the difference
s(t) − n(t) from Equation (7.7.8) into the identities (7.7.1a,b), we obtain the de-
sired relationships determining the algorithm of the mapping unit T : a(t), b(t) →
x(t), x̃(t); a(t), b(t) ∈ LS (+); x(t), x̃(t) ∈ L(∨, ∧):
    x(t) = [b(t) + q|a(t) − k · b(t)|]/2;   (a)
    x̃(t) = [b(t) − q|a(t) − k · b(t)|]/2;   (b)
    T:  k = (G + g)/2;   (c)   (7.7.9)
        q = 2/(G − g);   (d)
        g = [a(t) − a(t′)]/[b(t) − b(t′)].   (e)
One possible variant of the block diagram of the signal space mapping unit is
described in [268], [269].
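A minimal sketch of the mapping algorithm (7.7.9) in Python; the arguments a_d and b_d stand for the delayed observations a(t′) and b(t′), and all names are illustrative rather than taken from [268], [269].

    import numpy as np

    def mapping_unit_T(a, b, a_d, b_d, G):
        """Mapping unit T (7.7.9): from a(t) = G*s(t) + g*n(t) and
        b(t) = s(t) + n(t) and their delayed copies, form the join
        x = s ∨ n and the meet x~ = s ∧ n. Assumes b != b_d."""
        g = (a - a_d) / (b - b_d)            # gain toward interference  (7.7.9e)
        k = (G + g) / 2.0                    #                           (7.7.9c)
        q = 2.0 / (G - g)                    #                           (7.7.9d)
        r = q * np.abs(a - k * b)            # q|a - k b| = |s - n|
        return (b + r) / 2.0, (b - r) / 2.0  # (7.7.9a) and (7.7.9b)

Since a(t) − k · b(t) = (G − g)[s(t) − n(t)]/2, the quantity q|a(t) − k · b(t)| reproduces |s(t) − n(t)|, so the two outputs realize the identities (7.7.1a,b).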
Generally, the mapping of linear signal space LS(+) into the signal space with lattice properties L(∨, ∧) supposes the further use of processing algorithms that are invariant with respect to the conditions of parametric and nonparametric prior uncertainty. If further signal processing is constrained to algorithms that are critical with respect to the energetic characteristics of the useful and interference signals, the mapping unit T must form the signals x(t) and x̃(t) in its outputs in the following form:
q1 = (G + 1)/(G − g ), (7.7.13b)
where k1,2 , q1,2 are the coefficients determined by the pairs of relationships (7.7.13a),
(7.7.14a) and (7.7.13b), (7.7.14b), respectively.
Substituting the values Gs(t) − n(t) and Gs(t) + n(t) from Equations (7.7.15)
into the identities (7.7.10), we obtain the required relationships determining the
algorithm of the mapping unit T : a(t), b(t) → x(t), x̃(t); a(t), b(t) ∈ LS (+);
x(t), x̃(t) ∈ L(∨, ∧):
    x(t) = {q2[a(t) + k2 b(t)] + q1|a(t) − k1 · b(t)|}/2;   (a)
    x̃(t) = {q2[a(t) + k2 b(t)] − q1|a(t) − k1 · b(t)|}/2;   (b)
        k1 = G(1 + g)/(G + 1);   (c)
    T:  q1 = (G + 1)/(G − g);   (d)   (7.7.16)
        k2 = G(1 − g)/(G − 1);   (e)
        q2 = (G − 1)/(G − g);   (f)
        g = [a(t) − a(t′)]/[b(t) − b(t′)].   (g)
where the functions u(t) and v(t) are determined by the operations of addition and multiplication between the signal s(t) and interference n(t) that take place in signal spaces with additive SG(+) and multiplicative SG(·) semigroups, respectively:

    u(t) = s(t) + n(t);   (a)   (7.7.18)
    v(t) = s(t) · n(t).   (b)
Based on the relationships (7.7.1), to form the results of interactions x(t) and x̃(t) between the signal s(t) and interference n(t) in signal space with lattice properties L(∨, ∧), it is necessary to have two independent equations (7.7.18) with respect to s(t) and n(t), which yield:

    x(t) = s(t) ∨ n(t) = [u(t) + w(t)]/2 = [u(t) + √(u²(t) − 4v(t))]/2;   (7.7.19a)
    x̃(t) = s(t) ∧ n(t) = [u(t) − w(t)]/2 = [u(t) − √(u²(t) − 4v(t))]/2,   (7.7.19b)

where w(t) = √(u²(t) − 4v(t)) = |s(t) − n(t)| (7.7.17).
The desired relationships determining the method of mapping T′: u(t), v(t) → x(t), x̃(t); u(t) ∈ SG(+), v(t) ∈ SG(·); x(t), x̃(t) ∈ L(∨, ∧) are defined by the equation system:

    x(t) = s(t) ∨ n(t) = [u(t) + w(t)]/2;   (a)
    x̃(t) = s(t) ∧ n(t) = [u(t) − w(t)]/2;   (b)
    T′:  w(t) = √(u²(t) − 4v(t)) = |s(t) − n(t)|;   (c)   (7.7.20)
    u(t) = s(t) + n(t);   (d)
    v(t) = s(t) · n(t).   (e)
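A minimal sketch of the mapping T′ (7.7.20) in Python, with illustrative names:

    import numpy as np

    def mapping_unit_T_prime(u, v):
        """Mapping T' (7.7.20): recover the join and meet of s and n from
        their sum u = s + n and product v = s*n, using
        w = sqrt(u^2 - 4v) = |s - n|."""
        w = np.sqrt(u * u - 4.0 * v)
        return (u + w) / 2.0, (u - w) / 2.0

For example, s = 2 and n = −1 give u = 1, v = −2, w = 3, and hence x = 2 = s ∨ n and x̃ = −1 = s ∧ n.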
An example of a signal space with the mentioned properties is also a ring R(+, ·), i.e., an algebraic structure in which two binary operations (addition and multiplication) are defined such that: (1) R(+) is an additive group with neutral element 0, 0 ∈ R(+): ∀a ∈ R(+) ∃(−a): a + 0 = a, a + (−a) = 0; (2) R(·) is a multiplicative semigroup; (3) the operations of addition and multiplication are connected through the distributive laws:
Generally, the methods and algorithms of signal processing within signal spaces with lattice properties are single-channel, regardless of the number of interference sources affecting the input of the processing unit. These methods can be realized in two ways: (1) utilizing a physical medium with lattice properties, in which the wave interactions are determined by lattice operations (the direct methods); (2) utilizing the methods of mapping of signal spaces with group (semigroup) properties into the signal spaces with lattice properties (the indirect methods).
Unfortunately, materials with such properties are unknown to the author, and realization of the second group of methods inevitably involves various destabilizing factors exerting a negative influence upon the efficiency of signal processing, caused by inaccurate reproduction of the operations of join and meet between the signals. This circumstance violates the initial properties of the signal space: signal processing methods used in signal space with lattice properties cease to be optimal, and it becomes necessary to reoptimize the signal processing algorithms obtained under the assumption that lattice properties hold within the signal space. That is a subject for separate consideration.
Here, however, we note that “reoptimized” signal processing algorithms operating in signal space with lattice properties in the presence of destabilizing factors (for instance, those that decrease the statistical dependence (or cross-correlation) of interference in the receiving channels) can be more efficient than the known signal processing algorithms functioning in linear signal space.
Certainly, the direct methods of realization of signal spaces with the given properties are essentially more promising. They assume the use of a physical medium with nonlinear properties in which useful and interference signals interact on the basis of lattice operations. The problem of synthesis of a physical medium in which the interaction of wave processes is described by lattice operations oversteps the bounds of signal processing theory and requires special research.
Conclusion
The main hypothesis underlying this book can be formulated in the following way.
It is impossible to construct signal processing theory without providing unity of con-
ceptual basics with information theory (at least within its syntactical aspects), and
as a consequence, without their theoretical compatibility and harmonious association.
Apparently, the inverse statement is also true: it is impossible to state information
theory logically (within its syntactical aspects) in isolation from signal processing
theory.
There are two axiomatic statements underlying this book: the main axiom of
signal processing theory and axiom of a measure of binary operation between the el-
ements of signal space built upon generalized Boolean algebra with a measure. The
first axiom establishes the relationships between information quantities contained
in the signals in the input and output of processing unit. The second axiom deter-
mines qualitative and quantitative aspects of informational relationships between
the signals (and their elements) in a space built upon generalized Boolean algebra
with a measure. All principal results obtained in this work are the consequences
from these axiomatic statements.
The main content of this work can be formulated by the following statements:
1. Information can exist only in the presence of the set of its material carriers,
i.e., the signals forming the signal space.
2. Information contained in a signal exists only in the presence of the struc-
tural diversity between the signal elements.
3. Information contained in a couple of signals exists due to the identities
and the distinctions between the elements of these signal structures.
4. Signal space is characterized by properties peculiar to non-Euclidean geometry.
5. Information contained in the signals of signal space becomes measurable
if a measure of information quantity is introduced in such space.
6. Measure of information quantity is an invariant of a group of signal map-
pings in signal space.
7. Measure of information quantity induces metric in signal space.
8. From the standpoint of providing minimum losses of information contained
in the signals, it is expedient to realize signal processing within the signal
spaces with lattice properties.
FIGURE C.1 Suggested scheme of interrelations between information theory, signal pro-
cessing theory, and algebraic structures
Directions for further development of the approach include: solving the known
signal processing problems within linear spaces on the basis of methods of signal
processing in spaces with L-group properties; extending the book's principal ideas
to signal spaces in the form of stochastic fields; developing information processing
methods and units (including quantum and optoelectronic ones) based on generalized
Boolean algebra with a measure; and others.
The solution of the nonlinear-electrodynamics problem of synthesizing a physical
medium in which wave interaction is described by L-group operations will require
special efforts from the scientific community. Such an advance oversteps the bounds
of the subject of signal processing theory and demands a multidisciplinary approach.
Practical application of the suggested algorithms and units for signal processing
in metric spaces with lattice properties is impossible without these technological
breakthroughs and cooperative efforts.
The main problems related to the fundamentals of signal processing theory and infor-
mation theory lie in finding three necessary compromises: between (1) mathematics
and physics, (2) algebra and geometry, and (3) continuity and discreteness.
An attempt to compress the content of this book into a single paragraph would
read as follows:
The paragraph above summarizes the principal idea behind this book.
A secondary idea is a corollary of the principal one. It arises from the need
to increase the efficiency of signal processing by decreasing, or even eliminating,
the otherwise inevitable losses of information that accompany the interaction of
useful and interference signals. Achieving the goal of more efficient signal
processing requires researching and developing new technologies based upon the
suggested signal spaces, in which the interaction of useful and interference
signals takes place with substantially smaller losses of information than in
linear spaces.
The author hopes the book will reach a broad research audience and the wider
scientific community. We live in times when the boldest fantasies become tangible
realities with incredible speed. The author expresses his confidence that the
signal processing technologies and methods described in this book will appear
sooner rather than later.
Bibliography
[1] Whittaker, E.T. On the functions which are represented by the expansions of the
interpolation theory. Proc. Royal Soc. Edinburgh, 35:181–194, 1915.
[2] Carson, J.R. Notes on the theory of modulation. Proc. IRE, 10(1):57–64, 1922.
[3] Gabor, D. Theory of communication. J. IEE, 93(26):429–457, 1946.
[4] Belyaev, Yu.K. Analytical stochastic processes. Probability Theory and Appl.,
4(4):437–444, 1959 (in Russian).
[5] Zinoviev, V.A. and Leontiev, V.K. On perfect codes. Prob. Inf. Trans., 8(1):26–35,
1975 (in Russian).
[6] Wyner, A. The common information of two dependent random variables. IEEE Trans.
Inf. Theory, IT–21(2):163–179, 1975.
[7] Pierce, J.R. An Introduction to Information Theory: Symbols, Signals and Noise.
Dover Publications, New York, 2nd edition, 1980.
[8] Nyquist, H. Certain factors affecting telegraph speed. Bell Syst. Tech. J., 3:324–346,
1924.
[9] Nyquist, H. Certain topics in telegraph transmission theory. Trans. AIEE, 47:617–644,
1928.
[10] Tuller, W.G. Theoretical limitations on the rate of transmission of information. Proc.
IRE, 37(5):468–478, 1949.
[11] Kelly, J. A new interpretation of information rate. Bell Syst. Tech. J., 35:917–926,
1956.
[12] Blackwell, D., Breiman, L., and Thomasian, A.J. The capacity of a class of channels.
Ann. Math. Stat., 30:1209–1220, 1959.
[13] McDonald, R.A. and Schultheiss, P.M. Information rates of Gaussian signals under
criteria constraining the error spectrum. Proc. IEEE, 52:415–416, 1964.
[14] Wyner, A.D. The capacity of the band-limited Gaussian channel. Bell Syst. Tech. J.,
45:359–371, 1965.
[15] Pinsker, M.S. and Sheverdyaev, A.Yu. Transmission capacity with zero error and
erasure. Probl. Inf. Trans., 6(1):13–17, 1970.
[16] Ahlswede, R. The capacity of a channel with arbitrarily varying Gaussian channel
probability functions. Trans. 6th Prague Conf. Inf. Theory, 13–21, 1971.
[17] Blahut, R. Computation of channel capacity and rate distortion functions. IEEE
Trans. Inf. Theory, 18:460–473, 1972.
[18] Ihara, S. On the capacity of channels with additive non-Gaussian noise. Inf. Control,
37(1):34–39, 1978.
[19] El Gamal, A.A. The capacity of a class of broadcast channels. IEEE Trans. Inf.
Theory, 25(2):166–169, 1979.
[20] Gelfand, S.I. and Pinsker, M.S. Capacity of a broadcast channel with one deterministic
component. Probl. Inf. Trans., 16(1):17–21, 1980.
[21] Sato, H. The capacity of the Gaussian interference channel under strong interference.
IEEE Trans. Inf. Theory, 27(6):786–788, 1981.
[22] Carleial, A. Outer bounds on the capacity of the interference channel. IEEE Trans.
Inf. Theory, 29:602–606, 1983.
[23] Ozarow, L.H. The capacity of the white Gaussian multiple access channel with feed-
back. IEEE Trans. Inf. Theory, 30:623–629, 1984.
[24] Telatar, E. Capacity of multi-antenna Gaussian channels. Eur. Trans. Telecommun.,
10(6):585–595, 1999.
[25] Hamming, R.W. Error detecting and error correcting codes. Bell Syst. Tech. J.,
29:147–160, 1950.
[26] Rice, S.O. Communication in the presence of noise: probability of error for two
encoding schemes. Bell Syst. Tech. J., 29:60–93, 1950.
[27] Huffman, D.A. A method for the construction of minimum redundancy codes. Proc.
IRE, 40:1098–1101, 1952.
[28] Elias, P. Error-free coding. IRE Trans. Inf. Theory, 4:29–37, 1954.
[29] Shannon, C.E. Certain results in coding theory for noisy channels. Inf. Control,
1:6–25, 1957.
[30] Shannon, C.E. Coding theorems for a discrete source with a fidelity criterion. IRE
Natl. Conv. Record, 7:142–163, 1959.
[31] Bose, R.C. and Ray-Chaudhuri, D.K. On a class of error correcting binary group
codes. Inf. Control, 3:68–79, 1960.
[32] Wozencraft, J. and Reiffen, B. Sequential Decoding. MIT Press, Cambridge, MA,
1961.
[33] Viterbi, A.J. On coded phase-coherent communications. IRE Trans. Space Electron.
Telem., 7:3–14, 1961.
[34] Gallager, R.G. Low Density Parity Check Codes. MIT Press, Cambridge, MA, 1963.
[35] Abramson, N.M. Information Theory and Coding. McGraw-Hill, New York, 1963.
[36] Viterbi, A.J. Error bounds for convolutional codes and an asymptotically optimum
decoding algorithm. IEEE Trans. Inf. Theory, IT-13(2):260–269, 1967.
[37] Forney, G.D. Convolutional codes: algebraic structure. IEEE Trans. Inf. Theory,
16(6):720–738, 1970.
[38] Berger, T. Rate Distortion Theory: A Mathematical Basis for Data Compression.
Prentice-Hall, Englewood Cliffs, 1971.
[39] Viterbi, A.J. Convolutional codes and their performance in communication systems.
IEEE Trans. Commun. Technol., 19(5):751–772, 1971.
[40] Ziv, J. Coding of sources with unknown statistics. Distortion relative to a fidelity
criterion. IEEE Trans. Inf. Theory, 18:389–394, 1972.
[41] Slepian, D. and Wolf, J.K. A coding theorem for multiple access channels with cor-
related sources. Bell Syst. Tech. J., 52:1037–1076, 1973.
[42] Pasco, R. Source Coding Algorithms for Fast Data Compression. PhD thesis, Stanford
University, Stanford, 1976.
[68] Dembo, A., Cover, T.M., and Thomas, J.A. Information theoretic inequalities. IEEE
Trans. Inf. Theory, 37(6):1501–1518, 1991.
[69] Ihara, S. Information Theory for Continuous Systems. World Scientific, Singapore,
1993.
[70] Cover, T.M. and Thomas, J.A. Elements of Information Theory. Wiley, Hoboken,
2nd edition, 2006.
[71] Bennett, W.R. Time-division multiplex systems. Bell Syst. Tech. J., 20:199–221,
1941.
[72] Shannon, C.E. Communication theory of secrecy systems. Bell Syst. Tech. J., 28:656–
715, 1949.
[73] Wozencraft, J.M. and Jacobs, I.M. Principles of Communication Engineering. Wiley,
New York, 1965.
[74] Gallager, R.G. Information Theory and Reliable Communication. Wiley, New York,
1968.
[75] Fink, L.M. Discrete Messages Transmission Theory. Soviet Radio, Moscow, 1970 (in
Russian).
[76] Liao, H. Multiple Access Channels. PhD thesis, Department of Electrical Engineering,
University of Hawaii, Honolulu, 1972.
[77] Penin, P.I. Digital Information Transmission Systems. Soviet Radio, Moscow, 1976
(in Russian).
[78] Lindsey, W.G. and Simon, M.K. Telecommunication Systems Engineering. Prentice-
Hall, Englewood Cliffs, 1973.
[79] Thomas, C.M., Weidner, M.Y., and Durrani, S.H. Digital amplitude-phase keying
with M-ary alphabets. IEEE Trans. Commun., 22(2):168–180, 1974.
[80] Spilker, J. Digital Communications by Satellite. Prentice-Hall, Englewood Cliffs, 1977.
[81] Penin, P.I. and Filippov, L.I. Information Transmission Electronic Systems. Radio i
svyaz, Moscow, 1984 (in Russian).
[82] Varakin, L.E. Communication Systems with Noise-Like Signals. Radio i svyaz,
Moscow, 1985 (in Russian).
[83] Zyuko, A.G., Falko, A.I., and Panfilov, I.P. Noise Immunity and Efficiency of Com-
munication Systems. Radio i svyaz, Moscow, 1985 (in Russian).
[84] Zyuko, A.G., Klovskiy, D.D., Nazarov, M.V., and Fink, L.M. Signal Transmission
Theory. Radio i svyaz, Moscow, 1986 (in Russian).
[85] Wiener, N. Cybernetics, or Control and Communication in the Animal and the Ma-
chine. Wiley, New York, 1948.
[86] Bar-Hillel, Y. Semantic information and its measures. Trans. 10th Conf. Cybernetics,
33–48, 1952.
[87] Rashevsky, N. Life, information theory and topology. Bull. Math. Biophysics, 17:229–
235, 1955.
[88] Cherry, C.E. On Human Communication: A Review, a Survey, and a Criticism. MIT
Press, Cambridge, MA, 3rd edition, 1957.
[89] Kullback, S. Information Theory and Statistics. Wiley, New York, 1959.
[90] Brillouin, L. Science and Information Theory. Academic Press, New York, 1962.
[114] Malahov, A.N. Cumulant Analysis of Stochastic non-Gaussian Processes and Their
Transformations. Soviet Radio, Moscow, 1978 (in Russian).
[115] Tihonov, V.I. Statistical Radio Engineering. Radio i svyaz, Moscow, 1982 (in Russian).
[116] Tihonov, V.I. Nonlinear Transformations of Stochastic Processes. Radio i svyaz,
Moscow, 1986 (in Russian).
[117] Stark, H. and Woods, J.W. Probability and Random Processes with Applications to
Signal Processing. Prentice-Hall, Englewood Cliffs, 2002.
[118] Woodward, P.M. Probability and Information Theory, with Application to Radar.
Pergamon Press, Oxford, 1953.
[119] Siebert, W.M. Studies of Woodward’s uncertainty function. Technical Report,
Research Laboratory of Electronics, Massachusetts Institute of Technology, 1958.
[120] Wilcox, C.H. The synthesis problem for radar ambiguity functions. Technical Report
157, Mathematical Research Center, U.S. Army, University of Wisconsin, Madison,
1958.
[121] Cook, C.E. and Bernfeld, M. Radar Signals: An Introduction to Theory and Applica-
tion. Academic Press, New York, 1967.
[122] Franks, L.E. Signal Theory. Prentice-Hall, Englewood Cliffs, 1969.
[123] Papoulis, A. Signal Analysis. McGraw-Hill, New York, 1977.
[124] Varakin, L.E. Theory of Signal Systems. Soviet Radio, Moscow, 1978 (in Russian).
[125] Kolmogorov, A.N. Interpolation and extrapolation of stationary random sequences.
Izv. AN SSSR. Ser. Math., 5(1):3–11, 1941 (in Russian).
[126] North, D.O. Analysis of the factors which determine signal/noise discrimination in
pulsed carrier systems. Technical Report PTR-6C, RCA Lab., Princeton, N.J., 1943.
(reprinted in Proc. IRE, Volume 51, 1963, 1016–1027).
[127] Kotelnikov, V.A. Theory of Potential Noise Immunity. Moscow Energetic Institute,
Moscow, 1946 (in Russian).
[128] Wiener, N. Extrapolation, Interpolation and Smoothing of Stationary Time Series.
MIT Press, Cambridge, MA, 1949.
[129] Slepian, D. Estimation of signal parameters in the presence of noise. IRE Trans. Inf.
Theory, 3:68–89, 1954.
[130] Middleton, D. and Van Meter, D. Detection and extraction of signals in noise from
the point of view of statistical decision theory. J. Soc. Ind. Appl. Math., 3:192–253,
1955.
[131] Amiantov, I.N. Application of Decision Making Theory to the Problems of Signal
Detection and Signal Extraction in Background Noise. VVIA, Moscow, 1958 (in Rus-
sian).
[132] Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic
Eng. (Trans. ASME), 82D:35–45, 1960.
[133] Blackman, R.B. Data Smoothing and Prediction. Addison-Wesley, Reading, MA,
1965.
[134] Rabiner, L.R. and Gold, B. Theory and Application of Digital Signal Processing.
Prentice-Hall, Englewood Cliffs, 1975.
[135] Oppenheim, A.V. and Schafer, R.W. Digital Signal Processing. Prentice-Hall, Engle-
wood Cliffs, 1975.
[136] Applebaum, S.P. Adaptive arrays. IEEE Trans. Antennas Propag., 24(5):585–598,
1976.
[137] Monzingo, R. and Miller, T. Introduction to Adaptive Arrays. Wiley, Hoboken, NJ,
1980.
[138] Widrow, B. and Stearns, S.D. Adaptive Signal Processing. Prentice-Hall, Englewood
Cliffs, 1985.
[139] Alexander, S.T. Adaptive Signal Processing: Theory and Applications. Springer Ver-
lag, New York, 1986.
[140] Oppenheim, A.V. and Schafer, R.W. Discrete-Time Signal Processing. Prentice-Hall,
Englewood Cliffs, 1989.
[141] Lim, J.S. Two-dimensional Signal and Image Processing. Prentice-Hall, Englewood
Cliffs, 1990.
[142] Haykin, S. Adaptive Filter Theory. Prentice-Hall, Englewood Cliffs, 2002.
[143] Middleton, D. An Introduction to Statistical Communication Theory. McGraw-Hill,
New York, 1960.
[144] Vaynshtein, L.A. and Zubakov, V.D. Signal Extraction in Background Noise. So-
viet Radio, Moscow, 1960 (in Russian).
[145] Viterbi, A.J. Principles of Coherent Communication. McGraw-Hill, New York, 1966.
[146] Van Trees, H.L. Detection, Estimation, and Modulation Theory. Wiley, New York, 1968.
[147] Sage, A.P. and Melsa, J.L. Estimation Theory with Applications to Communications
and Control. McGraw Hill, New York, 1971.
[148] Amiantov, I.N. Selected Questions of Statistical Communication Theory. Soviet Radio,
Moscow, 1971 (in Russian).
[149] Levin, B.R. Theoretical Basics of Statistical Radio Engineering. Volume 2. Soviet
Radio, Moscow.
[150] Levin, B.R. Theoretical Basics of Statistical Radio Engineering. Volume 3. Soviet
Radio, Moscow.
[151] Gilbo, E.P. and Chelpanov, I.B. Signal Processing on the Basis of Ordered Selection:
Majority Transformation and Others That are Close to it. Soviet Radio, Moscow,
1977 (in Russian).
[152] Repin, V.G. and Tartakovsky, G.P. Statistical Synthesis under Prior Uncertainty and
Adaptation of Information Systems. Soviet Radio, Moscow, 1977 (in Russian).
[153] Sosulin, Yu.G. Detection and Estimation Theory of Stochastic Signals. Soviet Radio,
Moscow, 1978 (in Russian).
[154] Kulikov, E.I. and Trifonov, A.P. Estimation of Signal Parameters in the Background
of Noise. Soviet Radio, Moscow, 1978 (in Russian).
[155] Tihonov, V.I. Optimal Signal Reception. Radio i Svyaz, Moscow, 1983 (in Russian).
[156] Akimov, P.S., Bakut, P.A., Bogdanovich, V.A., et al. Signal Detection Theory. Radio
i Svyaz, Moscow, 1984 (in Russian).
[157] Kassam, S.A. and Poor, H.V. Robust techniques for signal processing: a survey.
Proc. IEEE, 73(3):433–481, 1985.
[158] Trifonov, A.P. and Shinakov, Yu. S. Joint Classification of Signals and Estimation
of Their Parameters in the Noise Background. Radio i Svyaz, Moscow, 1986 (in
Russian).
[159] Levin, B.R. Theoretical Basics of Statistical Radio Engineering. Radio i Svyaz,
Moscow, 1989 (in Russian).
[160] Kay, S.M. Fundamentals of Statistical Signal Processing. Prentice-Hall, Englewood
Cliffs, 1993.
[161] Poor, H.V. An Introduction to Signal Detection and Estimation. Springer, New York,
2nd edition, 1994.
[162] Helstrom, C.W. Elements of Signal Detection and Estimation. Prentice-Hall, Engle-
wood Cliffs, 1994.
[163] Middleton, D. An Introduction to Statistical Communication Theory. IEEE, New
York, 1996.
[164] Sklar, B. Digital Communications: Fundamentals and Applications. Prentice-Hall,
Englewood Cliffs, 2nd edition, 2001.
[165] Minkoff, J. Signal Processing Fundamentals and Applications for Communications
and Sensing Systems. Artech House, Norwood, MA, 2002.
[166] Bogdanovich, V.A. and Vostretsov, A.G. Theory of Robust Detection, Classification
and Estimation of Signals. Fiz. Math. Lit, Moscow, 2004 (in Russian).
[167] Vaseghi, S.V. Advanced Digital Signal Processing and Noise Reduction. Wiley, New
York, 2006.
[168] Huber, P.J. Robust Statistics. Wiley, New York, 1981.
[169] Fraser, D.A. Nonparametric Methods in Statistics. Wiley, New York, 1957.
[170] Noether, G.E. Elements of Nonparametric Statistics. Wiley, New York, 1967.
[171] Hajek, J. and Sidak, Z. Theory of Rank Tests. Academic Press, New York, 1967.
[172] Lehmann, E.L. Nonparametrics: Statistical Methods Based on Ranks. Holden-Day,
Oakland, CA, 1975.
[173] Kolmogorov, A.N. Complete metric Boolean algebras. Philosophical Studies, 77(1):57–
66, 1995.
[174] Horn, A. and Tarski, A. Measures in Boolean algebras. Trans. Amer. Math. Soc.,
64:467–497, 1948.
[175] Vladimirov, D.A. Boolean Algebras. Nauka, Moscow, 1969 (in Russian).
[176] Sikorsky, R. Boolean Algebras. Springer, Berlin, 2nd edition, 1964.
[177] Boltyansky, V.G. and Vilenkin, N.Ya. Symmetry in Algebra. Nauka, Moscow, 1967
(in Russian).
[178] Weyl, H. Symmetry. Princeton University Press, 1952.
[179] Wigner, E.P. Symmetries and Reflections: Scientific Essays. Indiana University Press,
Bloomington, 1967.
[180] Dorodnitsin, V.A. and Elenin, G.G. Symmetry of nonlinear phenomena. In Computers
and Nonlinear Phenomena: Informatics and Modern Nature Science, pages 123–191.
Nauka, Moscow, 1988 (in Russian).
[181] Bhatnagar, P.L. Nonlinear Waves in One-dimensional Dispersive Systems. Clarendon
Press, Oxford, 1979.
[182] Kurdyumov, S.P., Malinetskiy, G.G., Potapov, A.B., and Samarskiy, A.A. Structures
in nonlinear medium. In Computers and Nonlinear Phenomena: Informatics and
Modern Nature Science, pages 5–43. Nauka, Moscow, 1988 (in Russian).
[183] Whitham, G.B. Linear and Nonlinear Waves. Wiley, New York, 1974.
[184] Dubrovin, B.A., Fomenko, A.T., and Novikov, S.P. Modern Geometry. Nauka,
Moscow, 1986 (in Russian).
[185] Klein, F. Non-Euclidean Geometry. Editorial URSS, Moscow, 2004 (in Russian).
[186] Rosenfeld, B.A. Non-Euclidean Spaces. Nauka, Moscow, 1969 (in Russian).
[187] Dirac, P.A.M. Principles of Quantum Mechanics. Clarendon Press, Oxford, 1930.
[188] Shannon, C.E. The bandwagon. IRE Trans. Inf. Theory, 2(1):3, 1956.
[189] Ursul, A.D. Information. Methodological Aspects. Nauka, Moscow, 1971 (in Russian).
[190] Berg, A.I. and Biryukov, B.V. Cybernetics: the way of control problem solving. In
Future of Science. Volume 3. Nauka, 1970 (in Russian).
[191] Steane, A.M. Quantum computing. Rep. Progr. Phys., 61:117–173, 1998.
[192] Feynman, R.P. Quantum mechanical computers. Found. Phys., 16(6):507–531, 1986.
[193] Wiener, N. I Am a Mathematician. Doubleday, New York, 1956.
[194] Chernin, A.D. Physics of Time. Nauka, Moscow, 1987 (in Russian).
[195] Leibniz, G.W. Monadology: An Edition for Students. University of Pittsburgh Press,
1991.
[196] Riemann, B. On the Hypotheses Which Lie at the Foundation of Geometry. Göttingen
University, 1854.
[197] Klein, F. Highest Geometry. Editorial URSS, Moscow, 2004 (in Russian).
[198] Zheleznov, N.A. On Some Questions of Informational Electric System Theory.
LKVVIA, Leningrad, 1960 (in Russian).
[199] Yaglom, A.M. and Yaglom, I.M. Probability and Information. Nauka, Moscow, 1973
(in Russian).
[200] Prigogine, I. and Stengers, I. Time, Chaos and the Quantum: Towards the Resolution
of the Time Paradox. Harmony Books, New York, 1993.
[201] Shreider, Yu.A. On quantitative characteristics of semantic information. Sci. Tech.
Inform., (10), 1963 (in Russian).
[202] Ogasawara, T. Compact metric Boolean algebras and vector lattices. J. Sci. Hi-
roshima Univ., 11:125–128, 1942.
[203] Mibu, Y. Relations between measures and topology in some Boolean spaces. Proc.
Imp. Acad. Tokyo, 20:454–458, 1944.
[204] Ellis, D. Autometrized Boolean algebras. Canadian J. Math., 3:87–93, 1951.
[205] Tomita, M. Measure theory of complete Boolean algebras. Mem. Fac. Sci. Kyusyu
Univ., 7:51–60, 1952.
[206] Hewitt, E. A note on measures in Boolean algebras. Duke Math. J., 20:253–256, 1953.
[207] Vulih, B.Z. On Boolean measure. Uchen. Zap. Leningr. Ped. Inst., 125:95–114, 1956
(in Russian).
[208] Lamperti, J. A note on autometrized Boolean algebras. Amer. Math. Monthly, 64:188–
189, 1957.
[209] Heider, L.J. A representation theorem for measures on Boolean algebras. Mich. Math.
J., 5:213–221, 1958.
[210] Kelley, J.L. Measures in Boolean algebras. Pacific J. Math., 9:1165–1177, 1959.
[211] Vladimirov, D.A. On the countable additivity of a Boolean measure. Vestnik Leningr.
Univ. Mat. Mekh. Astronom, 16(19):5–15, 1961 (in Russian).
[212] Vinokurov, V.G. Representations of Boolean algebras and measure spaces. Math. Sb.,
56 (98)(3):374–391, 1962 (in Russian).
[213] Vladimirov, D.A. Invariant measures on Boolean algebras. Math. Sb., 67 (109)(3):440–
460, 1965 (in Russian).
[214] Stone, M.H. Postulates for Boolean algebras and generalized Boolean algebras. Amer.
J. Math., 57:703–732, 1935.
[215] Stone, M.H. The theory of representations for Boolean algebras. Trans. Amer. Math.
Soc., 40:37–111, 1936.
[216] McCoy, N.H. and Montgomery, D. A representation of generalized Boolean rings.
Duke Math. J., 3:455–459, 1937.
[217] Grätzer, G. and Schmidt, E.T. On the generalized Boolean algebras generated by a
distributive lattice. Nederl. Akad. Wet. Proc., 61:547–553, 1958.
[218] Subrahmanyan, N.V. Structure theory for generalized Boolean rings. Math. Ann.,
141:297–310, 1960.
[219] Whitney, H. The abstract properties of linear dependence. Amer. J. Math., 37:507–
533, 1935.
[220] Menger, K. New foundations of projective and affine geometry. Ann. Math., 37:456–
482, 1936.
[221] Birkhoff, G. Lattice Theory. American Mathematical Society, Providence, 1967.
[222] Blumenthal, L.M. Boolean geometry. Rend. Circ. Mat. Palermo, 1:1–18, 1952.
[223] Artamonov, V.A., Saliy, V.N., Skornyakov, L.A., Shevrin, L.N., and Shulgeyfer, E.G.
General Algebra. Volume 2. Nauka, Moscow, 1991 (in Russian).
[224] Hilbert, D. The Foundations of Geometry. Open Court Company, 2001.
[225] Marczewski, E. and Steinhaus, H. On a certain distance of sets and the corresponding
distance of functions. Colloq. Math., 6:319–327, 1958.
[226] Zolotarev, V.M. Modern Theory of Summation of Independent Random Variables.
Nauka, Moscow, 1986 (in Russian).
[227] Buldygin, V.V. and Kozachenko, Yu.V. Metric Characterization of Random Variables
and Random Processes. American Mathematical Society, Providence, 2000.
[228] Samuel, E. and Bachi, R. Measure of distance of distribution functions and some
applications. Metron, 13:83–112, 1964.
[229] Dudley, R.M. Distances of probability measures and random variables. Ann. Math.
Statist, 39(5):1563–1572, 1968.
[230] Senatov, V.V. On some properties of metrics at the set of distribution functions.
Math. Sb., 31(3):379–387, 1977.
[231] Kendall, M.G. and Stuart, A. The Advanced Theory of Statistics. Inference and
Relationship. Charles Griffin, London, 1961.
[232] Cramer, H. Mathematical Methods of Statistics. Princeton University Press, 1946.
[233] Melnikov, O.V., Remeslennikov, V.N., Romankov, V.A., Skornyakov, L.A., and Shes-
takov, I.P. General Algebra. Volume 1. Nauka, Moscow, 1990 (in Russian).
[234] Prudnikov, A.P., Brychkov, Yu.A., and Marichev, O.I. Integrals and Series: Elemen-
tary Functions. Gordon & Breach, New York, 1986.
[235] Paley, R.E. and Wiener, N. Fourier Transforms in the Complex Domain. American
Mathematical Society, Providence, 1934.
[236] Baskakov, S.I. Radio Circuits and Signals. Vysshaya Shkola, Moscow, 2nd edition,
1988 (in Russian).
[237] Oxtoby, J. Measure and Category. Springer, New York, 2nd edition, 1980.
[238] Kotelnikov, V.A. On the transmission capacity of “ether” and wire in electro-
communications. In Modern Sampling Theory: Mathematics and Applications.
Birkhauser, Boston, 2000. (Reprint of 1933 edition).
[239] Whittaker, J.M. Interpolatory function theory. Cambridge Tracts on Math. and Math.
Physics, (33), 1935.
[240] Jerri, A.J. The Shannon sampling theorem: its various extensions and applications:
a tutorial review. Proc. IEEE, 65(11):1565–1596, 1977.
[241] Dmitriev, V.I. Applied Information Theory. Vysshaya shkola, Moscow, 1989 (in
Russian).
[242] Popoff, A.A. Sampling theorem for the signals of space built upon generalized Boolean
algebra with a measure. Izv. VUZov. Radioelektronika, (1):31–39, 2010 (in Russian).
Reprinted in Radioelectronics and Communications Systems, 53 (1): 25–32, 2010.
[243] Tihonov, V.I. and Harisov, V.N. Statistical Analysis and Synthesis of Electronic Means
and Systems. Radio i svyaz, Moscow, 1991 (in Russian).
[244] Harkevich, A.A. Noise Reduction. Nauka, Moscow, 1965 (in Russian).
[245] Deza, M.M. and Laurent, M. Geometry of Cuts and Metrics. Springer, Berlin, 1997.
[246] Aleksandrov, P.S. Introduction to Set Theory and General Topology. Nauka, Moscow,
1977 (in Russian).
[247] Borisov, V.A., Kalmykov, V.V., and Kovalchuk, Ya.M. Electronic Systems of Infor-
mation Transmission. Radio i svyaz, Moscow, 1990 (in Russian).
[248] Zyuko, A.G., Klovskiy, D.D., Korzhik, V.I., and Nazarov, M.V. Electric Communi-
cation Theory. Radio i svyaz, Moscow, 1999 (in Russian).
[249] Tihonov, V.I. and Mironov, M.A. Markov Processes. Soviet Radio, Moscow, 1977 (in
Russian).
[250] Zacks, S. The Theory of Statistical Inference. Wiley, New York, 1971.
[251] Lehmann, E.L. Theory of Point Estimation. Wiley, New York, 1983.
[252] David, H.A. Order Statistics. Wiley, New York, 1970.
[253] Gumbel, E.J. Statistics of Extremes. Columbia University Press, New York, 1958.
[254] Van Trees, H. L. Detection, Estimation, and Modulation Theory. Wiley, New York,
1968.
[255] Grätzer, G. General Lattice Theory. Akademie Verlag, Berlin, 1978.
[256] Le Cam, L. On some asymptotic properties of maximum likelihood estimates and
related Bayes estimates. Univ. California Publ. Statist., 1:277–330, 1953.
[257] Mudrov, V.I. and Kushko, V.L. The Least Modules Method. Znanie, Moscow, 1971
(in Russian).
[258] Kendall, M.G. and Stuart, A. Distribution theory. In Advanced Theory of Statistics.
Volume 1. Charles Griffin, 1960.
[259] Cohn, P.M. Universal Algebra. Harper & Row, New York, 1965.
[280] Popoff, A.A. Comparative analysis of estimators of unknown nonrandom signal pa-
rameter in linear space and K-space. Izv. VUZov. Radioelektronika, (7):29–40, 2008
(in Russian). Reprinted in Radioelectronics and Communications Systems, 51 (7):
368–376, 2008.
[281] Popoff, A.A. Possibilities of processing the signals with completely defined parameters
under interference (noise) background in signal space with algebraic lattice properties.
Izv. VUZov. Radioelektronika, (8):25–32, 2008 (in Russian). Reprinted in Radioelec-
tronics and Communications Systems, 51 (8): 421–425, 2008.
[282] Popoff, A.A. Characteristics of processing the harmonic signals in interference (noise)
background under their interaction in K-space. Izv. VUZov. Radioelektronika, (10):69–
80, 2008 (in Russian). Reprinted in Radioelectronics and Communications Systems,
51 (10): 565–572, 2008.
[283] Popoff, A.A. Informational characteristics and properties of stochastic signal con-
sidered as subalgebra of generalized Boolean algebra with a measure. Izv. VUZov.
Radioelektronika, (11):57–67, 2008 (in Russian). Reprinted in Radioelectronics and
Communications Systems, 51 (11): 615–621, 2008.
[284] Popoff, A.A. Noiseless channel capacity in signal space built upon generalized Boolean
algebra with a measure. J. State Univ. Inform. Comm. Tech., 7(1):54–62, 2009 (in
Russian).
[285] Popoff, A.A. Geometrical properties of signal space built upon generalized Boolean
algebra with a measure. J. State Univ. Inform. Comm. Tech., 7(3):27–32, 2009 (in
Russian).
[286] Popoff, A.A. Characteristics and properties of signal space built upon generalized
Boolean algebra with a measure. Izv. VUZov. Radioelektronika, (5):34–45, 2009 (in
Russian). Reprinted in Radioelectronics and Communications Systems, 52 (5): 248–
255, 2009.
[287] Popoff, A.A. Peculiarities of continuous message filtering in signal space with alge-
braic lattice properties. Izv. VUZov. Radioelektronika, (9):29–40, 2009 (in Russian).
Reprinted in Radioelectronics and Communications Systems, 52 (9): 474–482, 2009.
[288] Popoff, A.A. Informational characteristics of scalar random fields that are invariant
with respect to group of their bijective mappings. Izv. VUZov. Radioelektronika,
(11):67–80, 2009 (in Russian). Reprinted in Radioelectronics and Communications
Systems, 52 (11): 618–627, 2009.
[289] Popoff, A.A. Analysis of stochastic signal filtering algorithm in noise background
in K-space of signals. J. State Univ. Inform. Comm. Tech., 8(3):215–224, 2010 (in
Russian).
[290] Popoff, A.A. Resolution of the harmonic signal filter in the space with algebraic lattice
properties. J. State Univ. Inform. Comm. Tech., 8(4):249–254, 2010 (in Russian).
[291] Popoff, A.A. Classification of the deterministic signals against background noise in
signal space with algebraic lattice properties. J. State Univ. Inform. Comm. Tech.,
9(3):209–217, 2011 (in Russian).
[292] Popoff, A.A. Advanced electronic Counter-Counter-Measures technologies under ex-
treme interference environment. Mod. Inform. Tech. Sphere Defence, (2):65–74, 2011.
[293] Popoff, A.A. Quality indices of APSK signal processing in signal space with L-group
properties. Mod. Special Tech., 25(2):61–72, 2011 (in Russian).
[294] Popoff, A.A. Invariants of groups of bijections of stochastic signals (messages) with ap-
plication to statistical analysis of encryption algorithms. Mod. Inform. Sec., 10(1):13–
20, 2012 (in Russian).
[295] Popoff, A.A. Detection of the deterministic signal against background noise in signal
space with lattice properties. J. State Univ. Inform. Comm. Tech., 10(2):65–71, 2012
(in Russian).
[296] Popoff, A.A. Detection of the harmonic signal with joint estimation of time of signal
arrival (ending) in signal space with L-group properties. J. State Univ. Inform. Comm.
Tech., 10(4):32–43, 2012 (in Russian).
[297] Popoff, A.A. Invariants of groups of mappings of stochastic signals samples in metric
space with L-group properties. J. State Univ. Inform. Comm. Tech., 11(1):28–38,
2013 (in Russian).
[298] Popoff, A.A. Comparative analysis of informational relationships under signal inter-
actions in spaces with various algebraic properties. J. State Univ. Inform. Comm.
Tech., 11(2):53–69, 2013 (in Russian).
[299] Popoff, A.A. Unit of digital signal filtering. Patent of Ukraine 57507, G 06 F 17/18,
2011.
[300] Popoff, A.A. Method of digital signal filtering. Patent of Ukraine 57507, G 06 F 17/18,
2011.
[301] Popoff, A.A. Radiofrequency pulse resolution unit. Patent of Ukraine 59021,
H 03 H 15/00, 2011.
[302] Popoff, A.A. Radiofrequency pulse resolution method. Patent of Ukraine 65236,
H 03 H 15/00, 2011.
[303] Popoff, A.A. Unit of signal filtering. Patent of Ukraine 60222, H 03 H 17/00, 2011.
[304] Popoff, A.A. Method of signal filtering. Patent of Ukraine 61607, H 03 H 17/00, 2011.
[305] Popoff, A.A. Deterministic signals demodulation unit. Patent of Ukraine 60223,
H 04 L 27/14, 2011.
[306] Popoff, A.A. Deterministic signals demodulation method. Patent of Ukraine 60813,
H 04 L 27/14, 2011.
[307] Popoff, A.A. Transversal filter. Patent of Ukraine 71310, H 03 H 15/00, 2012.
[308] Popoff, A.A. Transversal filter. Patent of Ukraine 74846, H 03 H 15/00, 2012.
[309] Popoff, A.A. Fundamentals of Signal Processing in Metric Spaces with Lattice Prop-
erties. Part I. Mathematical Foundations of Information Theory with Application to
Signal Processing. Central Research Institute of Armament and Defence Technologies,
Kiev, 2013 (in Russian).
Index