
Fundamentals of Signal Processing in Metric Spaces with Lattice Properties
Algebraic Approach

Andrey Popoff

Boca Raton London New York

CRC Press is an imprint of the Taylor & Francis Group, an informa business
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2018 by Andrey Popoff


CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed on acid-free paper


Version Date: 20171004

International Standard Book Number-13: 978-1-138-09938-8 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been
made to publish reliable data and information, but the author and publisher cannot assume responsibility for the valid-
ity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright
holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this
form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we
may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or
utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including pho-
tocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission
from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://
www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923,
978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For
organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com

and the CRC Press Web site at
http://www.crcpress.com
Contents

List of Figures  ix
Preface  xiii
Introduction  xix
List of Abbreviations  xxvii
Notation System  xxix

1  General Ideas of Natural Science, Signal Theory, and Information Theory  1
   1.1  General System of Notions and Principles of Modern Research Methodology  2
   1.2  Information Theory and Natural Sciences  5
   1.3  Overcoming Logical Difficulties  10

2  Information Carrier Space Built upon Generalized Boolean Algebra with a Measure  25
   2.1  Information Carrier Space  26
   2.2  Geometrical Properties of Metric Space Built upon Generalized Boolean Algebra  29
        2.2.1  Main Relationships between Elements of Metric Space Built upon Generalized Boolean Algebra with a Measure  29
        2.2.2  Notion of Line in Metric Space Built upon Generalized Boolean Algebra with a Measure  32
        2.2.3  Notions of Sheet and Plane in Metric Space Built upon Generalized Boolean Algebra with a Measure  41
        2.2.4  Axiomatic System of Metric Space Built upon Generalized Boolean Algebra with a Measure  46
        2.2.5  Metric and Trigonometrical Relationships in Space Built upon Generalized Boolean Algebra with a Measure  51
        2.2.6  Properties of Metric Space with Normalized Metric Built upon Generalized Boolean Algebra with a Measure  54
   2.3  Informational Properties of Information Carrier Space  59

3  Informational Characteristics and Properties of Stochastic Processes  75
   3.1  Normalized Function of Statistical Interrelationship of Stochastic Processes  76
   3.2  Normalized Measure of Statistical Interrelationship of Stochastic Processes  82
   3.3  Probabilistic Measure of Statistical Interrelationship of Stochastic Processes  98
   3.4  Information Distribution Density of Stochastic Processes  101
   3.5  Informational Characteristics and Properties of Stochastic Processes  108

4  Signal Spaces with Lattice Properties  123
   4.1  Physical and Informational Signal Spaces  123
   4.2  Homomorphic Mappings in Signal Space Built upon Generalized Boolean Algebra  133
        4.2.1  Homomorphic Mappings of Continuous Signal into a Finite (Countable) Sample Set in Signal Space Built upon Generalized Boolean Algebra with a Measure  133
        4.2.2  Theorems on Isomorphism in Signal Space Built upon Generalized Boolean Algebra with a Measure  143
   4.3  Features of Signal Interaction in Signal Spaces with Various Algebraic Properties  146
        4.3.1  Informational Paradox of Additive Signal Interaction in Linear Signal Space: Notion of Ideal Signal Interaction  147
        4.3.2  Informational Relationships Characterizing Signal Interaction in Signal Spaces with Various Algebraic Properties  156
   4.4  Metric and Informational Relationships between Signals Interacting in Signal Space  163

5  Communication Channel Capacity  173
   5.1  Information Quantity Carried by Discrete and Continuous Signals  174
        5.1.1  Information Quantity Carried by Binary Signals  174
        5.1.2  Information Quantity Carried by m-ary Signals  179
        5.1.3  Information Quantity Carried by Continuous Signals  190
   5.2  Capacity of Noiseless Communication Channels  196
        5.2.1  Discrete Noiseless Channel Capacity  199
        5.2.2  Continuous Noiseless Channel Capacity  200
        5.2.3  Evaluation of Noiseless Channel Capacity  201
               5.2.3.1  Evaluation of Capacity of Noiseless Channel Matched with Stochastic Stationary Signal with Uniform Information Distribution Density  201
               5.2.3.2  Evaluation of Capacity of Noiseless Channel Matched with Stochastic Stationary Signal with Laplacian Information Distribution Density  204

6  Quality Indices of Signal Processing in Metric Spaces with L-group Properties  207
   6.1  Formulation of Main Signal Processing Problems  208
   6.2  Quality Indices of Signal Filtering in Metric Spaces with L-group Properties  210
   6.3  Quality Indices of Unknown Nonrandom Parameter Estimation  218
   6.4  Quality Indices of Classification and Detection of Deterministic Signals  229
        6.4.1  Quality Indices of Classification of Deterministic Signals in Metric Spaces with L-group Properties  229
        6.4.2  Quality Indices of Deterministic Signal Detection in Metric Spaces with L-group Properties  237
   6.5  Capacity of Communication Channels Operating in Presence of Interference (Noise)  242
        6.5.1  Capacity of Continuous Communication Channels Operating in Presence of Interference (Noise) in Metric Spaces with L-group Properties  244
        6.5.2  Capacity of Discrete Communication Channels Operating in Presence of Interference (Noise) in Metric Spaces with L-group Properties  247
   6.6  Quality Indices of Resolution-Detection in Metric Spaces with L-group Properties  250
   6.7  Quality Indices of Resolution-Estimation in Metric Spaces with Lattice Properties  259

7  Synthesis and Analysis of Signal Processing Algorithms  267
   7.1  Signal Spaces with Lattice Properties  269
   7.2  Estimation of Unknown Nonrandom Parameter in Sample Space with Lattice Properties  271
        7.2.1  Efficiency of Estimator λ̂n,∧ in Sample Space with Lattice Properties with Respect to Estimator λ̂n,+ in Linear Sample Space  276
        7.2.2  Quality Indices of Estimators in Metric Sample Space  282
   7.3  Extraction of Stochastic Signal in Metric Space with Lattice Properties  287
        7.3.1  Synthesis of Optimal Algorithm of Stochastic Signal Extraction in Metric Space with L-group Properties  288
        7.3.2  Analysis of Optimal Algorithm of Signal Extraction in Metric Space with L-group Properties  294
        7.3.3  Possibilities of Further Processing of Narrowband Stochastic Signal Extracted on Basis of Optimal Filtering Algorithm in Metric Space with L-group Properties  302
   7.4  Signal Detection in Metric Space with Lattice Properties  304
        7.4.1  Deterministic Signal Detection in Presence of Interference (Noise) in Metric Space with Lattice Properties  305
        7.4.2  Detection of Harmonic Signal with Unknown Nonrandom Amplitude and Initial Phase with Joint Estimation of Time of Signal Arrival (Ending) in Presence of Interference (Noise) in Metric Space with L-group Properties  311
        7.4.3  Detection of Harmonic Signal with Unknown Nonrandom Amplitude and Initial Phase with Joint Estimation of Amplitude, Initial Phase, and Time of Signal Arrival (Ending) in Presence of Interference (Noise) in Metric Space with L-group Properties  324
        7.4.4  Features of Detection of Linear Frequency Modulated Signal with Unknown Nonrandom Amplitude and Initial Phase with Joint Estimation of Time of Signal Arrival (Ending) in Presence of Interference (Noise) in Metric Space with L-group Properties  342
   7.5  Classification of Deterministic Signals in Metric Space with Lattice Properties  352
   7.6  Resolution of Radio Frequency Pulses in Metric Space with Lattice Properties  358
   7.7  Methods of Mapping of Signal Spaces into Signal Space with Lattice Properties  378
        7.7.1  Method of Mapping of Linear Signal Space into Signal Space with Lattice Properties  379
        7.7.2  Method of Mapping of Signal Space with Semigroup Properties into Signal Space with Lattice Properties  383

Conclusion  385
Bibliography  389
Index  403
List of Figures

I.1  Dependences Q(m_1^2/m_2) of some normalized signal processing quality index Q on the ratio m_1^2/m_2  xxii
I.2  Dependences Q(q^2) of some normalized signal processing quality index Q on the signal-to-noise ratio q^2  xxii

2.2.1  Hexahedron built upon a set of all subsets of element A + B  30
2.2.2  Tetrahedron built upon elements O, A, B, C  30
2.2.3  Simplex Sx(Ai + Aj + Ak) in metric space Ω  55
2.3.1  Information quantities between two sets A and B  61
2.3.2  Elements of set A = ∪α Aα situated on n-dimensional sphere Sp(O, R)  62

3.2.1  Metric relationships between samples of signals in metric signal space (Γ, µ)  95
3.2.2  Metric relationships between samples of signals in metric space (Γ, µ) elucidating Theorem 3.2.14  97
3.5.1  IDDs iξ(tj, t), iξ(tk, t) of samples ξ(tj), ξ(tk) of stochastic process ξ(t)  109
3.5.2  IDD iξ(tj, t) and mutual IDD iξη(t′k, t) of samples ξ(tj), η(t′k) of stochastic processes ξ(t), η(t′)  119

4.2.1  Graphs of dependences of ratio I∆(X′)/I(X) on discretization interval ∆t  142
4.3.1  Dependences Ibx(Iax) of quantity of mutual information Ibx on quantity of mutual information Iax  160

5.1.1  NFSI ψu(τ) and normalized ACF ru(τ) of stochastic process  175
5.1.2  IDD iu(τ) of stochastic process  176
5.1.3  NFSI of multipositional sequence  181
5.1.4  IDD of multipositional sequence  181

6.2.1  Metric relationships between signals elucidating Theorem 6.2.1  213
6.2.2  Dependences of NMSIs νsŝ(q), νsx(q) on signal-to-noise ratio q^2 = S0/N0  217
6.2.3  Dependences of metrics µsŝ(q), µsx(q) on signal-to-noise ratio q^2 = S0/N0  217
6.3.1  CDFs of estimation (measurement) results X∨,i, X∧,i and estimators λ̂n,∧, λ̂n,∨  221
6.3.2  Graphs of metrics µ(λ̂n,∧, λ) and µ′(λ̂n,∨, λ) depending on parameter λ  226
6.3.3  Dependences q{λ̂n,+} and q{λ̂n,∧} on size of samples n  228
6.4.1  Dependences νsŝ(q^2)|⊥ and νsŝ(q^2)|−  235
6.5.1  Generalized structural scheme of communication channel functioning in presence of interference (noise)  243
6.7.1  PDFs of estimators ŝ∧(t)|H11,H01 and ŝ∨(t)|H11,H01  263
6.7.2  Signal ŝ∧(t) in output of filter forming estimator (6.7.8)  265

7.2.1  PDF pλ̂n,∧(z) of estimator λ̂n,∧  276
7.2.2  Upper ∆∧(n) and lower ∆+(n) bounds of quality indices q{λ̂n,∧} and q{λ̂n,+}  286
7.3.1  Block diagram of processing unit realizing general algorithm Ext[s(t)]  293
7.3.2  Realization s*(t) of useful signal s(t) acting in input of filtering unit and possible realization w*(t) of process w(t) in output of adder  296
7.3.3  Generalized block diagram of signal processing unit  303
7.4.1  Block diagram of deterministic signal detection unit  308
7.4.2  Signals z(t) and u(t) in outputs of correlation integral computing unit and strobing circuit  310
7.4.3  Estimator θ̂ of unknown nonrandom parameter θ of useful signal s(t) in output of decision gate  310
7.4.4  Block diagram of processing unit that realizes harmonic signal detection with joint estimation of time of signal arrival (ending)  319
7.4.5  Useful signal s(t) and realizations w*(t), v*(t) of signals in outputs of adder and median filter  320
7.4.6  Useful signal s(t) and realization v*(t) of signal in output of median filter v(t)  320
7.4.7  Useful signal s(t), realization v*(t) of signal in output of median filter v(t), and δ-pulse determining time position of estimator t̂1  321
7.4.8  Useful signal s(t), realization Ev*(t) of envelope Ev(t) of signal v(t), and δ-pulse determining time position of estimator t̂1  321
7.4.9  Block diagram of unit processing observations defined by Equations (a) (7.4.11a) and (b) (7.4.11b)  323
7.4.10  Block diagram of processing unit that realizes harmonic signal detection with joint estimation of amplitude, initial phase, and time of signal arrival (ending)  336
7.4.11  Useful signal s(t) and realization w*(t) of signal w(t) in output of adder  337
7.4.12  Useful signal s(t) and realization v*(t) of signal v(t) in output of median filter  337
7.4.13  Useful signal s(t) and realization W*(t) of signal W(t) in output of adder  338
7.4.14  Useful signal s(t), realization V*(t) of signal V(t), and realization EV*(t) of its envelope  338
7.4.15  Block diagram of processing unit that realizes LFM signal detection with joint estimation of time of signal arrival (ending)  349
7.4.16  Useful signal s(t) and realization w*(t) of signal w(t) in output of adder  350
7.4.17  Useful signal s(t) and realization v*(t) of signal v(t) in output of median filter  350
7.4.18  Useful signal s(t), realization v*(t) of signal v(t) in output of median filter, and δ-pulse determining time position of estimator t̂1  351
7.4.19  Useful signal s(t) and realization Ev*(t) of envelope Ev(t) of signal v(t)  351
7.5.1  Block diagram of deterministic signals classification unit  355
7.5.2  Signals in outputs of correlation integral computation circuit zi(t) and strobing circuit ui(t)  357
7.6.1  Block diagram of signal resolution unit  366
7.6.2  Signal w(t) in input of limiter in absence of interference (noise)  368
7.6.3  Normalized time-frequency mismatching function ρ(δτ, δF) of filter  370
7.6.4  Cut projections of normalized time-frequency mismatching function  370
7.6.5  Realization w*(t) of signal w(t) including signal response and residual overshoots  373
7.6.6  Realization w*(t) of stochastic process w(t) in input of limiter and residual overshoots  376
7.6.7  Realization v*(t) of stochastic process v(t) in output of median filter  376
7.6.8  Normalized time-frequency mismatching function of filter in presence of strong interference  377
7.7.1  Block diagram of mapping unit  379
7.7.2  Directional field patterns FA(θ) and FB(θ) of antennas A and B  380

C.1  Suggested scheme of interrelations between information theory, signal processing theory, and algebraic structures  387

Chapter-by-chapter abstracts

Chapter 1 has an introductory character and acquaints the reader with the most general methodological questions of classical Signal Theory and Information Theory. A brief analysis of the general concepts, notions, and ideas of Natural Science, Signal Theory, and Information Theory is carried out. Chapter 1 considers a general system of notions and principles of modern research methodology in Natural Science, and also a particular system of notions of the selected research direction (both Signal Processing Theory and Information Theory). The relations between Information Theory and the Natural Sciences are briefly discussed. Chapter 1 considers the methodological foundations of classical Signal Theory and Information Theory and shows the theoretical difficulties within both; the ways of overcoming these logical difficulties are outlined. Chapter 1 concludes with the formulation of the book's principal concept.

Chapter 2 formulates the approach to constructing a signal space on the basis of generalized Boolean algebra with a measure. The notion of information carrier space is defined. Chapter 2 considers the main relationships between the elements of metric space built upon generalized Boolean algebra with a measure, and defines the notions of the main geometric objects of such a metric space. An axiomatic system of metric space built upon generalized Boolean algebra with a measure is formulated. This axiomatic system implies that the axioms of connection and the axioms of parallels are subject to essentially weaker constraints than the axioms of the analogous groups of Euclidean space. It is shown that the geometry of generalized Boolean algebra with a measure contains some other known geometries. Chapter 2 establishes metric and trigonometrical relationships in the space built upon generalized Boolean algebra with a measure, and investigates both the geometric and the algebraic properties of this metric space. Chapter 2 also studies the informational properties of such a metric space; they are introduced axiomatically by the axiom of a measure of a binary operation of generalized Boolean algebra.

Chapter 3 introduces probabilistic characteristics of stochastic processes that are invariant with respect to a group of their mappings. The interconnection between the introduced probabilistic characteristics and the metric relations between the instantaneous values (the samples) of stochastic processes in metric space is shown. Informational characteristics of stochastic processes are introduced. Chapter 3 establishes the necessary condition under which a stochastic process possesses the ability to carry information. A mapping is introduced that allows considering an arbitrary stochastic process as a subalgebra of generalized Boolean algebra with a measure. Informational properties of stochastic processes are introduced axiomatically on the basis of the axiom of a measure of a binary operation of generalized Boolean algebra. The main results obtained are formulated in the form of corresponding theorems. Invariants of bijective mappings of stochastic processes, introduced on the basis of corresponding definitions, make it possible to successfully use generalized Boolean algebra with a measure to describe physical and informational signal interactions in signal spaces.

Chapter 4 deals with the notions of informational and physical signal spaces. On the basis of the probabilistic and informational characteristics of the signals and their elements introduced in the previous chapter, we consider the characteristics and properties of informational signal space built upon generalized Boolean algebra with a measure. At the same time, a separate signal carrying information is considered as a subalgebra of generalized Boolean algebra with a measure. It is underlined that a measure on Boolean algebra accomplishes a twofold function: first, it is a measure of information quantity; second, it induces a metric in signal space. The interconnection between the introduced measure and the logarithmic measure of information quantity is shown. Some homomorphic mappings in informational signal space are considered; in particular, the sampling theorem is formulated for this signal space. Theorems on isomorphisms are established for informational signal space. The informational paradox of additive signal interaction in linear signal space is considered. Informational relations taking place under signal interaction in signal spaces with various algebraic properties are established. It is shown that, from the standpoint of providing minimum losses of the information contained in the signals, one should carry out their processing in signal spaces with lattice properties.

Chapter 5 establishes the relationships determining the quantity of information carried by discrete and continuous signals. It is shown that the upper bound of the information quantity that can be transmitted by a discrete random sequence is determined by the number of symbols of the sequence only and does not depend on the code base (alphabet size). It is underlined that the reason for this can be elucidated neither by the absence of necessary technical means nor by the ill effect of interference (noise), because this fact is stipulated by a fundamental property of information, i.e., by its ability to exist exclusively within a statistical collection of structural elements of its carrier (the signal). On the basis of the introduced information quantity measure, relationships are established that determine the capacity of discrete and continuous noiseless communication channels. Boundedness of the capacity of both discrete and continuous noiseless channels is proved. Chapter 5 concludes with examples of evaluating the capacity of a noiseless channel matched with a stochastic stationary signal characterized by quite certain informational properties.

Chapter 6 considers quality indices of signal processing in metric spaces with L-group properties. These indices are based on the metric relationships between the instantaneous values (the samples) of the signals determined in the third chapter. The obtained quality indices correspond to the main problems of signal processing, i.e., signal detection, signal filtering, signal classification, signal parameter estimation, and signal resolution. Chapter 6 provides a brief comparative analysis of the obtained relationships, determining the difference in signal processing quality in the spaces with the mentioned properties. It is shown that the potential quality indices of signal processing in metric spaces with lattice properties are characterized by the invariance property with respect to parametric and nonparametric prior uncertainty conditions. The relationships determining the capacity of a communication channel operating in the presence of interference (noise) in metric spaces with L-group properties are obtained. All the main signal processing problems are considered from the standpoint of estimating the signals and/or their parameters.

Chapter 7 deals with methods of synthesis of signal processing algorithms and units in metric spaces with lattice properties; the developed approaches to synthesis allow operating with the minimum necessary prior data concerning the characteristics and properties of the interacting useful and interference signals. The latter means, first, that no prior data concerning the probabilistic distributions of useful signals and interference are assumed to be present; second, that the kind of useful signal (signals) is assumed a priori to be known, i.e., we know that it is either deterministic (quasi-deterministic) or stochastic. Within the seventh chapter, the quality indices of the synthesized signal processing algorithms and units are obtained. It is shown that algorithms and units of signal processing in metric spaces with lattice properties are characterized by the invariance property with respect to parametric and nonparametric prior uncertainty conditions. Chapter 7 concludes with methods of mapping signal spaces with group (semigroup) properties into signal space with lattice properties.
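
As a rough orientation for the reader, the sketch below (our illustration; it does not reproduce the algorithms synthesized in Chapter 7) shows the lattice operations behind the estimators λ̂n,∧ (meet, pointwise minimum) and λ̂n,∨ (join, pointwise maximum) appearing in the contents, next to the linear-space sample-mean estimator λ̂n,+, together with a sliding median filter of the order-statistic kind that recurs in Chapter 7's block diagrams. The one-sided exponential noise model is an assumption made purely for illustration.

```python
import numpy as np

# Illustration only: estimating an unknown nonrandom parameter lam from n
# observations x_i = lam + n_i, where n_i >= 0 is one-sided (pulse-like)
# exponential interference with unit mean. Under this assumed model the
# meet-based estimator min(x) is consistent for lam, while the mean needs
# a bias correction by the (here known) noise mean.
rng = np.random.default_rng(1)
lam, n = 2.0, 500
x = lam + rng.exponential(1.0, n)

lam_meet = x.min()            # lattice estimator built on meet: λ̂_{n,∧}
lam_join = x.max()            # lattice estimator built on join: λ̂_{n,∨}
lam_plus = x.mean() - 1.0     # linear-space estimator λ̂_{n,+}, bias-corrected
print(f"lam = {lam}: meet {lam_meet:.4f}, join {lam_join:.4f}, mean {lam_plus:.4f}")

def median_filter(v, width=9):
    """Sliding-window median: an order-statistic (lattice) smoother; the
    window shrinks near the edges of the record."""
    half = width // 2
    return np.array([np.median(v[max(0, i - half):i + half + 1])
                     for i in range(len(v))])
```

Under this one-sided noise model the meet-based estimator typically lands much closer to the true parameter than the bias-corrected mean; comparisons of this kind are made precise in Section 7.2.1.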
Preface

Electronics is one of the main high-end technological sectors of the economy, providing the development and manufacture of civil, military, and dual-purpose production, whose level defines the technological, economic, and informational security of the leading countries of the world. Electronics serves as a catalyst and a locomotive of scientific and technological progress, promoting the stable growth of various industry branches and of the world economy as a whole. The majority of electronic systems, means, and sets are units of information (signal) transmitting, receiving, and processing.
As in other branches of science, progress in both information theory and signal processing theory is directly related to understanding and investigating their fundamental principles. While it is acceptable in the early stages of the development of a theory to use some approximations, over the course of time it is necessary to have a closed theory able to predict previously unknown phenomena and facts. This book was conceived as an introduction to the field of signal processing in non-Euclidean spaces with special properties, based on a measure of information quantity.
Successful research in the 21st century is impossible without comprehensive study of a specific discipline. Specialists often have a poor understanding of the work of colleagues in adjacent branches of science. This is not surprising, because the separation of scientific disciplines ignores the interrelations existing between various areas of science. To a degree, the detriment caused by narrow specialization is compensated by popular scientific literature acquainting the reader with a wider range of phenomena, and also offset by works that aim to cover a larger subject matter domain of research.
The research subject of this book includes the methodology of constructing the
unified mathematical fundamentals of both information theory and signal process-
ing theory, the methods of synthesis of signal processing algorithms under prior
uncertainty conditions, and also the methods of evaluating their efficiency.
While this book does not constitute a transdisciplinary approach, it starts with
generalized methodology based on natural sciences to create new research concepts
with application to information and signal processing theories.
Two principal problems will be investigated. The first involves unified mathe-
matical fundamentals of information theory and signal processing theory. Its solu-
tion is provided by definition of an information quantity measure connected in a
unique way to the notion of signal space. The second problem is the need to increase
signal processing efficiency under parametric and nonparametric prior uncertainty
conditions. The resolution of that problem rests on the first problem solution us-

xiii
xiv Preface

ing signal spaces with various algebraic properties such as groups, lattices, and
generalized Boolean algebras.
This book differs from traditional monographs in (1) algebraic structure of signal
spaces, (2) measures of information quantities, (3) metrics in signal spaces, (4)
common signal processing problems, and (5) methods for solving such problems
(methods for overcoming prior uncertainty).
This book is intended for professors, researchers, postgraduate and undergraduate students, and specialists in signal processing and information theories, electronics, radiophysics, telecommunications, various engineering disciplines including radioengineering, and information technology. It presents alternative approaches to constructing signal processing theory and information theory based upon Boolean algebra and lattice theory. The book may be useful for mathematicians and physicists interested in applied problems in their areas. The material contained in the book differs from the traditional approaches of classical information theory and signal processing theory and may interest students specializing in the directions “radiophysics”, “telecommunications”, “electronic systems”, “system analysis and control”, “automation and control”, “robotics”, “electronics and communication engineering”, “control systems engineering”, “electronics technology”, “information security”, and others.
Signal processing theory is currently one of the most active areas of research and constitutes a “proving ground” in which mathematical ideas and methods find their realization in quite tangible physical analogues. Moreover, the analysis of IT tendencies in the 21st century and the prospective transition towards quantum and optical systems of information storage, transmission, and processing allow us to claim that in the near future the parallel development of signal processing theory and information theory will proceed on the basis of their deeper interpenetration. The author would also like to express his confidence that these topics can awaken the interest of young researchers, who will test their own strength in this direction.
The possible directions of future applications of the new ideas and technologies based on signal processing in spaces with lattice properties described in this book could be extremely wide. Such directions could cover, for example, the search for extraterrestrial intelligence (SETI) program, electromagnetic compatibility problems, and military applications intended to provide steady operation of electronic systems under severe jamming conditions.
Models of real signals discussed in this book are based on stochastic processes. Nevertheless, the use of generalized Boolean algebra with a measure allows us to extend the results to signal models in the form of stochastic fields, which constitutes a subject for independent discussion. Traditional aspects of classical signal processing and information theories are not covered in the book because they are widely presented in the existing literature. Readers of this book should understand the basics of set theory, abstract algebra, mathematical analysis, and probability theory. The final two chapters require knowledge of mathematical statistics or statistical radiophysics (radioengineering).
The book is arranged in the following way. Chapter 1 is introductory in nature
and describes general methodology questions of classical signal theory and information theory. Brief analysis of general concepts, notions, and ideas of natural science,
signal theory, and information theory is carried out. The relations between informa-
tion theory and natural sciences are briefly discussed. Theoretical difficulties within
foundations of both signal and information theories are shown. Overcoming these
obstacles is discussed.
Chapter 2 formulates an approach to constructing a signal space on the basis
of generalized Boolean algebra with a measure. Axiomatic system of metric space
built upon generalized Boolean algebra with a measure is formulated. Chapter 2
establishes metric and trigonometrical relationships in space built upon generalized
Boolean algebra with a measure and considers informational properties of metric
space built upon generalized Boolean algebra with a measure. The properties are
introduced by the axiom of a measure of a binary operation of generalized Boolean
algebra.
Chapter 3 introduces probabilistic characteristics of stochastic processes which
are invariant with respect to groups of their mappings. The interconnection be-
tween introduced probabilistic characteristics and metric relations between the in-
stantaneous values (the samples) of stochastic processes in metric space is shown.
Informational characteristics of stochastic processes are introduced. Chapter 3 es-
tablishes the necessary condition according to which a stochastic process possesses
the ability to carry information. The mapping that allows considering an arbitrary
stochastic process as subalgebra of generalized Boolean algebra with a measure is
introduced. Informational properties of stochastic processes are introduced on the
base of the axiom of a measure of a binary operation of generalized Boolean algebra.
The main results are formulated in the form of corresponding theorems.
Chapter 4 together with Chapter 7 occupies the central place in the book and
deals with the notions of informational and physical signal spaces. On the basis
of probabilistic and informational characteristics of the signals and their elements,
that are introduced in the previous chapter, Chapter 4 considers the characteristics
and the properties of informational signal space built upon generalized Boolean
algebra with a measure. At the same time, the separate signal carrying information
is considered as a subalgebra of generalized Boolean algebra with a measure. We
state that a measure on Boolean algebra accomplishes a twofold function: firstly,
it is a measure of information quantity and secondly it induces a metric in signal
space. The interconnection between introduced measure and logarithmic measure
of information quantity is shown. Some homomorphic mappings in informational
signal space are considered. Particularly for this signal space, the sampling theorem
is formulated. Theorems on isomorphisms are established for informational signal
space. The informational paradox of additive signal interaction in linear signal space
is considered. Informational relations, taking place under signal interaction in signal
spaces with various algebraic properties, are established. It is shown that from the
standpoint of providing minimum losses of information contained in the signals,
one should carry out their processing in signal spaces with lattice properties.
Chapter 5 establishes the relationships determining the quantities of informa-
tion carried by discrete and continuous signals. The upper bound of information
quantity, which can be transmitted by discrete random sequence, is determined by
a number of symbols of the sequence only, and does not depend on code base (al-
phabet size). This fact is stipulated by the fundamental property of information,
i.e., by its ability to exist exclusively within a statistical collection of structural
elements of its carrier (the signal). On the basis of introduced information quantity
measure, relationships are established to determine the capacities of discrete and
continuous noiseless communication channels. Boundedness of capacity of discrete
and continuous noiseless channels is proved. Chapter 5 concludes with examples
of evaluating the capacity of noiseless channel matched with stochastic stationary
signal characterized by certain informational properties.
Chapter 6 considers quality indices of signal processing in metric spaces with
L-group properties (i.e., with both group and lattice properties). These indices
are based on metric relationships between the instantaneous values (the samples)
of the signals determined in Chapter 3. The obtained quality indices correspond
to the main problems of signal processing, i.e., signal detection; signal filtering;
signal classification; signal parameter estimation; and signal resolution. Chapter 6
provides brief comparative analysis of the obtained relationships, determining the
differences of signal processing qualities in the spaces with mentioned properties.
Potential quality indices of signal processing in metric spaces with lattice proper-
ties are characterized by the invariance property with respect to parametric and
nonparametric prior uncertainty conditions. The relationships determining capac-
ity of communication channel operating in the presence of interference (noise) in
metric spaces with L-group properties are obtained. All the main signal processing
problems are considered from the standpoint of estimating the signals and/or their
parameters.
Chapter 7 deals with synthesis of algorithms and units of signal processing in
metric spaces with lattice properties, so that the developed approaches allow oper-
ating with minimum necessary prior data concerning characteristics and properties
of interacting useful and interference signals. This means, first, that no prior data concerning the probabilistic distributions of useful signals and interference are assumed to be present; second, that the kinds of useful signal (signals) are assumed a priori to be known, i.e., we know that they are either deterministic (quasi-deterministic) or
stochastic. The quality indices of synthesized signal processing algorithms and units
are obtained. Algorithms and units of signal processing in metric spaces with lattice
properties are characterized by the invariance property with respect to parametric
and nonparametric prior uncertainty conditions. Chapter 7 concludes with methods
of mapping of signal spaces with group (semigroup) properties into signal spaces
with lattice properties.
The first, second, and seventh chapters are relatively independent and may be read separately. Before reading the third and fourth chapters, it is desirable to get acquainted with the main content of the first and second chapters. Understanding Chapters 4 through 6 relies to a great extent on the main ideas stated in the third chapter. Proofs, provided for the exacting and rigorous reader, may be skipped at least on a first reading.
Formulas in the book carry a triple numbering: the first number corresponds to the chapter, the second denotes the section, and the third corresponds to the equation number within the current section; for example, (2.3.4) indicates the 4th formula in Section 2.3 of Chapter 2. A similar numbering system is used for axioms, theorems, lemmas, corollaries, examples, and definitions. Proofs of theorems and lemmas are closed with the symbol □. Endings of examples are indicated by the symbol ▽. Outlines generalizing the obtained results are placed at the end of the corresponding sections where necessary. In general, the chapters are not summarized.
In translating the book, the author attempted to preserve an “isomorphism” of perception of the monograph with respect to its principal ideas, approaches, methods, and main statements formulated in the form of mathematical relationships, along with key notions.
As remarked by Johannes Kepler, “. . . If there is no essential strictness in terms,
elucidations, proofs and inference, then a book will not be a mathematical one. If
the strictness is provided, the book reading becomes very tiresome . . . ”.1
The author's intention in writing this book was to provide readers with concise, comprehensible content while ensuring adequate coverage of signal processing and information theories and their applications to metric signal spaces with lattice properties. The author recognizes the complexity of the subject matter and the fast pace of related research. He welcomes all comments by readers and can be reached at [email protected].
The author would like to extend his sincere appreciation, thanks, and gratitude
to Victor Astapenya, Ph.D.; Alexander Geleseff, D.Sc.; Vladimir Horoshko, D.Sc.;
Sergey Rodionov, Ph.D.; Vladimir Rudakov, D.Sc.; and Victor Seletkov, D.Sc. for
attention, support, and versatile help that contributed greatly to expediting the
writing of this book and improving its content.
The author would like to express his sincere acknowledgment to Allerton Press, Inc. and to its Senior Vice President, Ruben de Semprun, for granting the permissions to use the material published by the author in Radioelectronics and Communications Systems.
The author would like to acknowledge understanding, patience, and support
provided within CRC Press by its staff and the assistance from Nora Konopka,
Michele Dimont, Kyra Lindholm, and unknown copy editor(s).
Finally, the author would like to express his thanks to all LaTeX developers whose tremendous efforts greatly lightened the author's burden.

Andrey Popoff

1. J. Kepler, Astronomia Nova, Prague, 1609.
Introduction

At the frontier of the 21st century, the amounts of information obtained and processed in all fields of human activity, from oceanic depths to remote parts of cosmic space, increase exponentially every year. It is impossible to satisfy the growing needs of humanity for transmitting, receiving, and processing all sorts of information without continuous improvement of acoustic, optical, and electronic systems and of signal processing methods. These last two tendencies drive the development of information theory, signal processing theory, and the foundations of synthesis of such systems.
The subject of information theory includes the analysis of the qualitative and quantitative relationships that arise in transmitting, receiving, and processing the information contained in both messages and signals, the useful signal s(t) being considered a one-to-one function of a transmitted message m(t): s(t) = M[c(t), m(t)], where M is the one-to-one modulating function and c(t) is a signal carrier, so that m(t) = M^{-1}[c(t), s(t)].
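
As a toy illustration of this one-to-one requirement (ours, not the book's; product modulation is just one possible choice of M), the message is recovered exactly wherever the carrier is nonzero:

```python
import numpy as np

# A minimal sketch: one concrete one-to-one choice of the modulating function M,
#   s(t) = M[c(t), m(t)] = c(t) * (1 + k * m(t)),  with |k * m(t)| < 1,
# whose inverse is
#   m(t) = M^{-1}[c(t), s(t)] = (s(t) / c(t) - 1) / k   wherever c(t) != 0.
t = np.linspace(0.0, 1.0, 2000)
m = 0.5 * np.sin(2 * np.pi * 3 * t)      # message m(t)
c = np.cos(2 * np.pi * 50 * t)           # carrier c(t)
k = 0.9                                  # modulation index, keeps 1 + k*m > 0

s = c * (1.0 + k * m)                    # forward mapping M
ok = np.abs(c) > 1e-3                    # inversion is defined where c(t) != 0
m_rec = (s[ok] / c[ok] - 1.0) / k        # inverse mapping M^{-1}

assert np.allclose(m[ok], m_rec)         # message recovered: M is one-to-one in m
```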
The subject of signal processing theory includes the analysis of probabilistic-statistical models of signals properly interacting with each other, and statistical inference in specific aspects of the process of extracting the information contained in signals. Information theory and signal processing theory continue to develop in various directions.
Thus, the following related directions should be referred to information theory in its classical formulation:

1. Analysis of stochastic signals and messages, including questions of information quantity evaluation
2. Analysis of informational characteristics of message sources and communication channels in the presence and absence of noise (interference)
3. Development of message encoding/decoding methods and means
4. Development of mathematical foundations of information theory
5. Development of communication system optimization methodology
6. Development of all other problems whose formulations include the notion of information
One can trace the development of information theory within the aforementioned six
directions in References [1–7], [8–24], [25–49], [50–70], [71–84], [85–98], respectively.
We distinguish several interrelated directions along which signal processing theory developed:

1. Analysis of probabilistic-statistical characteristics of stochastic signals and interference (noise), their transformations, and their influence on the operation of electronic means and systems
2. Optimization and analysis of the useful signal sets used to solve the main signal processing problems in electronic means and systems of different functionality
3. Synthesis of signal processing algorithms and units solving the main signal processing problems: signal detection (including multiple-alternative detection), classification of signals, signal extraction, signal filtering, signal parameter estimation, resolution of signals, and recognition of signals
4. Development of mathematical foundations of signal processing theory

The main ideas and approaches to solving signal processing theory problems, in their historical retrospective within the aforementioned four directions, are covered in References [99–117], [118–124], [125–142], and [143–167], respectively.
It should be noted that this bibliography does not include purely mathematical
works, especially in probability theory and mathematical statistics, which had an
impact on the course of the development of both information theory and signal
processing theory.
The analysis of the state of the art of signal processing theory and information theory reveals a contradiction between the relative proximity of the subject matter domains of these theories and their rather weak interaction at the level of key notions and methods.
The information quantity measure was introduced in information theory ignoring the notion of signal space. On the other hand, the approaches to the synthesis of signal processing algorithms and units, along with the approaches to their efficiency evaluation within the framework of signal processing theory, were developed without the use of information theory methods and categories.
At first, the material of this book was developed exclusively within the mathematical foundations of signal processing theory in non-Euclidean spaces. However, it became impossible to write an acceptable variant of signal processing fundamentals without referring to the basics of information theory. Besides, some aspects of classical information theory need a more serious refinement of principal approaches than the conceptual basics of the direction Lewis Franks named signal theory. Dissatisfaction with the state of the art of both theories, in which information carriers (the signals) and the principles of their processing exist per se, while the information transferred by them and the approaches intended to describe a wide range of informational processes are kept apart from the information carriers, made the author focus his efforts on unifying the mathematical foundations of both signal processing theory and information theory.
Most signal processing theory problems (along with problems of statistical radiophysics, statistical radioengineering, and automatic control theory) make no sense in the absence of noise and interference and, in the general case, without taking into account the random character of the received signals. Interaction between useful and interference (noise) signals is usually described by the superposition principle; however, this principle has neither a fundamental nor a universal character. This means that, while constructing the fundamentals of information theory and signal processing theory, there is no need to confine the space of material carriers of information (signal space) artificially to the properties inherent exclusively in linear spaces.
Meanwhile, in most publications, signal processing problems are formulated within the framework of additive commutative groups of linear spaces, where the interaction between useful and interference (noise) signals is described by the binary operation of addition. Rather seldom are these problems considered in the terminology of ring binary operations, i.e., in additive commutative groups with an introduced multiplication operation connected with addition by the distributive laws. In this case, the interaction between useful and interference (noise) signals is described by multiplication and addition operations in the presence of multiplicative and additive noise (interference), respectively.
The capabilities of signal processing in linear space are bounded by the potential quality indices of optimal systems demonstrated by the results of classical signal processing theory. Besides, modern electronic systems and means of different functionality operate under prior uncertainty conditions that adversely affect the quality indices of signal processing.
Under prior uncertainty conditions, the efficiency of signal processing algorithms and units can be evaluated, for some i-th distribution family of interference (noise) D_i[{a_{i,k}}] (where {a_{i,k}} is a set of shape parameters of this distribution family; i, k ∈ N, and N is the set of natural numbers), on the basis of the dependences Q(m_1^2/m_2) and Q(q^2) of some normalized signal processing quality index on the ratio m_1^2/m_2 between the squared mean m_1^2 and the second-order moment m_2 of the interference (noise) envelope [156], and on the signal-to-noise (signal-to-interference) ratio q^2, respectively. By a normalized signal processing quality index Q we mean any signal processing quality index that takes its values in the interval [0, 1], so that 1 and 0 correspond to the best and the worst values of Q, respectively; for instance, it could be the conditional probability of correct detection, the correlation coefficient between the useful signal and its estimator, etc. By the envelope E_x(t) of a stochastic process x(t) (particularly of interference) we mean the function

    E_x(t) = √(x^2(t) + x_H^2(t)),

where

    x_H(t) = −(1/π) ∫_{−∞}^{+∞} x(τ)/(τ − t) dτ

(understood as a principal value) is the Hilbert transform of the initial stochastic process x(t). Dependences Q(m_1^2/m_2) of such a normalized signal processing quality index Q on the ratio m_1^2/m_2 (q^2 = const) for an arbitrary i-th family of interference (noise) distributions D_i[{a_{i,k}}] may look like the curves 1, 2, and 3 shown in Fig. I.1. Optimal Bayesian decisions [143, 146, 150, 152] and the decisions obtained on the basis of robust methods [157, 166, 168] and of nonparametric statistics methods [150, 169–172] are, at a qualitative level, as a rule, characterized by the curves 1, 2, and 3, respectively.

FIGURE I.1: Dependences Q(m_1^2/m_2) of some normalized signal processing quality index Q on the ratio m_1^2/m_2 that characterize (1) optimal Bayesian decisions; decisions obtained on the basis of (2) robust methods and (3) nonparametric statistics methods; (4) desirable dependence.

FIGURE I.2: Dependences Q(q^2) of some normalized signal processing quality index Q on the signal-to-noise ratio q^2 that characterize (1) the best case of signal receiving; (2) the worst case of signal receiving; (3) desirable dependence.

Figure I.1 conveys the generalized behavior of some groups of signal processing algorithms operating under nonparametric prior uncertainty conditions, when (1) the distribution family of the interference is known, but the concrete type of distribution is unknown (i.e., m_1^2/m_2 is unknown); and (2), in the worst case, even the distribution family of the interference is unknown. The conditions m_1^2/m_2 ∈ ]0, 0.7], m_1^2/m_2 ∈ ]0.7, 0.8], and m_1^2/m_2 ∈ ]0.8, 1.0[ correspond to interference of pulse, intermediate, and harmonic kind, respectively. At the same time, while solving signal processing problems under prior uncertainty conditions, it is desirable, for all or at least several interference (noise) distribution families, to obtain the dependence in the form of curve 4 shown in Fig. I.1: Q(m_1^2/m_2) → 1(m_1^2/m_2 − ε) − 1(m_1^2/m_2 − 1 + ε), where 1(·) is the Heaviside step function and ε is an indefinitely small positive number.
Curve 4 in Fig. I.1 is desirable, since the quality index Q is equal to 1 over the whole interval ]0, 1[ of the ratio m_1^2/m_2; i.e., the signal processing algorithm is absolutely robust (Q = 1) with respect to nonparametric prior uncertainty conditions.
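
These definitions are straightforward to compute. The sketch below is an illustrative numerical check only (not code from the book): it obtains the envelope E_x(t) as the magnitude of the analytic signal, estimates m_1^2/m_2 by sample moments, and applies the pulse/intermediate/harmonic thresholds quoted above; the helper names envelope_ratio and interference_kind are ours.

```python
import numpy as np
from scipy.signal import hilbert  # analytic signal x(t) + j*x_H(t)

def envelope_ratio(x):
    """Estimate m_1^2 / m_2 for the envelope E_x(t) = sqrt(x^2(t) + x_H^2(t))."""
    env = np.abs(hilbert(x))      # envelope = magnitude of the analytic signal
    m1 = env.mean()               # mean of the envelope
    m2 = (env ** 2).mean()        # second-order moment of the envelope
    return m1 ** 2 / m2           # lies in ]0, 1[ by the Cauchy-Schwarz inequality

def interference_kind(r):
    """Thresholds quoted in the text: ]0, 0.7] pulse, ]0.7, 0.8] intermediate,
    ]0.8, 1.0[ harmonic."""
    return "pulse" if r <= 0.7 else ("intermediate" if r <= 0.8 else "harmonic")

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 8000)
for name, x in [("Gaussian noise", rng.standard_normal(t.size)),
                ("harmonic interference", np.cos(2 * np.pi * 60.0 * t))]:
    r = envelope_ratio(x)
    print(f"{name}: m1^2/m2 = {r:.3f} -> {interference_kind(r)}")
```

In this sketch the Gaussian case has a Rayleigh-distributed envelope, so the ratio tends to π/4 ≈ 0.785, while for a pure tone the envelope is nearly constant and the ratio approaches 1.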
Figure I.2 illustrates the generalized behavior of signal processing algorithms operating under parametric prior uncertainty conditions, when (1) the interference distribution is known and the energy/spectral characteristics of the useful and interference signals are known (curve 1), and (2) the interference distribution is unknown and/or the energy/spectral characteristics of the useful and/or interference signals are unknown (curve 2).
Figure I.2 shows the dependences Q(q^2) of some normalized signal processing quality index Q on the signal-to-noise ratio q^2 that characterize (1) the best case of signal receiving; (2) the worst case of signal receiving; (3) the desirable dependence. Curve 3 in Fig. I.2 is desirable, since the quality index Q is equal to 1 over the whole interval ]0, ∞[ of q^2; i.e., the signal processing algorithm is absolutely robust (Q = 1) with respect to parametric prior uncertainty conditions.
Thus, depending on the interference (noise) distribution and on the characteristics of the useful signal, optimal variants of the solution of a given signal processing problem place the functional dependence Q(q^2) within the interval between curves 1 and 2 in Fig. I.2, which, at a qualitative level, characterize the most and the least desired cases of useful signal receiving (the best and the worst cases, respectively). However, while solving signal processing problems under prior uncertainty conditions, for all interference (noise) distribution families the most desired dependence Q(q^2) is undoubtedly curve 3 in Fig. I.2: Q(q^2) → 1(q^2 − ε), where, similarly, 1(·) is the Heaviside step function and ε is an indefinitely small positive number. Besides, at a theoretical level, the maximum possible proximity between the functions Q(q^2) and 1(q^2) should be ensured.
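
Written out executably (our notation, with a small numerical stand-in for the indefinitely small positive constant ε from the text), the two desirable step-like dependences read:

```python
import numpy as np

eps = 1e-3  # numerical stand-in for the indefinitely small positive number eps
step = lambda z: np.heaviside(z, 1.0)                              # Heaviside 1(.)

Q_target_ratio = lambda r: step(r - eps) - step(r - (1.0 - eps))   # curve 4, Fig. I.1
Q_target_snr = lambda q2: step(q2 - eps)                           # curve 3, Fig. I.2
```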
Meeting the constantly growing requirements for real signal processing quality indices under prior uncertainty conditions on the basis of the known approaches formulated, for instance, in [150, 152, 157, 166] appears to be problematic; thus, other ideas should be used. This book is devoted to this large-scale problem, i.e., providing signal processing quality indices Q in the form of dependences 4 and 3 shown in Figs. I.1 and I.2, respectively.
The basic concepts of signal processing theory and information theory are signal space and information quantity, respectively. The development of the notions of information quantity and signal space leads to important methodological conclusions concerning the interrelation between information theory (within its syntactical aspects) and signal processing theory. But this leads to other questions. For example, what is the interrelation between, on the one hand, set theory (ST), mathematical analysis (MA), probability theory (PT), and mathematical statistics (MS), and, on the other hand, information theory (IT) and signal processing theory (SPT)? Earlier it seemed natural to formulate the axiomatic grounds of probability theory on the basis of set theory by introducing a specific measure, i.e., the probabilistic one, and the grounds of signal theory (SgT) on the basis of mathematical analysis (function spaces of a special kind, i.e., linear spaces with scalar product). In the classical scheme, the connection ST → PT → IT was traced separately, the relation PT → MS → SPT was observed rather independently, and the link MA → SgT existed entirely apart. Here and below, arrows indicate how one or another theory is related to its mathematical foundations.
The “set” interpretation of probability theory has its weak points, noted by many mathematicians, including its author A.N. Kolmogoroff. Since the second half of the 20th century, an approach has been known according to which a Boolean algebra with a measure (BA) forms an adequate mathematical model of the notion called an “event set”; thus, the interrelation BA → PT is established [173–176]. Correspondingly, in abstract algebra (AA), within the framework of lattice theory (LT), Boolean algebras are considered a further development of algebraic structures with special properties, namely, lattices. Instead of the traditional schemes of the theories’ interrelations, in this work, to develop the relationship between information theory and signal processing theory, we use the following:
LT → BA → {PT → {IT ↔ SPT ← {MS + (SgT ← {AA + MA})}}}.
Undoubtedly, this scheme is simplified, because known interrelations between ab-
stract algebra, geometry, topology, mathematical analysis, and also adjacent parts
of modern mathematics are not shown.
The choice of Boolean algebra as the mathematical apparatus for the foundations of both signal processing theory and the syntactical aspects of information theory is not an arbitrary one. Boolean algebra, considered as a set of elements possessing certain properties (in this case, each signal is a set of the elements of its instantaneous values), is intended for signal space description. A measure defined upon it is intended for the quantitative description of the informational relationships between the signals and their elements, i.e., the instantaneous values (the samples).
In this work, special attention is concentrated on the interrelation between information theory and signal processing theory (IT ↔ SPT). Unfortunately, such a direct and tangible interrelation between the classical variants of these theories is not observed. The answer to the principal question of signal processing theory, namely how, in fact, one should process the results of interaction of useful and interference (noise) signals, is given not by information theory, but by applying the special parts of mathematical statistics, i.e., statistical hypothesis testing and estimation theory. While investigating such an interrelation (IT ↔ SPT), it is important that information theory and signal processing theory answer the interrelated questions below.
So, information theory with application to signal processing theory has to resolve
certain issues:
1.1. Information quantity measure, its properties and its relation to signal
space as a set of material carriers of information with special properties.
1.2. The relation between the notion of signal space in information theory
and the notion of signal space in signal processing theory.
1.3. Main informational relationships between the signals in signal space that
is the category of information theory.
1.4. Interrelation between potential quality indices (confining the efficiency)
of signal processing in signal space which is the category of information
theory, and the main informational relationships between the signals.
1.5. Algebraic properties of signal spaces, where the best signal processing
quality indices may be obtained by providing minimum losses of infor-
mation contained in useful signals.
1.6. Informational characteristics of communication channels built upon sig-
nal spaces with the properties mentioned in Item 1.5.
Signal processing theory with application to information theory has the following
issues:
2.1. Main informational relations between the signals in signal space that is
the category of signal processing theory.
2.2. Informational interrelation between main signal processing problems.
2.3. Interrelation between potential quality indices (confining the efficiency)
of signal processing in signal space with certain algebraic properties and
main informational relationships between the signals.
2.4. The synthesis of algorithms and units of optimal signal processing in signal space with special algebraic properties, taking into consideration Item 1.5 (the synthesis problem).
2.5. Quality indices of signal processing algorithms and units in signal space with special algebraic properties, taking into consideration Item 1.5 (the analysis problem).

Apparently, it is possible to resolve the above issues only by constructing signal processing theory so that it is unified with information theory at the level of its foundations (at least within the framework of syntactical aspects), thus providing their theoretical compatibility and harmonious association.
List of Abbreviations

Notion Abbreviation

Abstract algebra AA
Autocorrelation function ACF
Characteristic function CF
Cumulative distribution function CDF
Decision gate DG
Envelope computation unit ECU
Estimator formation unit EFU
Generalized Boolean algebra GBA
Hyperspectral density HSD
Information distribution density IDD
Information theory IT
Lattice theory LT
Least modules method LMM
Least squares method LSM
Linear frequency modulated signal LFM signal
Matched filtering unit MFU
Mathematical analysis MA
Mathematical statistics MS
Median filter MF
Mutual information distribution density mutual IDD
Mutual normalized function of statistical interrelationship mutual NFSI
Normalized function of statistical interrelationship NFSI
Normalized measure of statistical interrelationship NMSI
Overall quantity of information o.q.i.
Power spectral density PSD
Probabilistic measure of statistical interrelationship PMSI
Probability density function PDF
Probability theory PT
Radio frequency RF
Relative quantity of information r.q.i.
Set theory ST
Signal detection unit SDU
Signal extraction unit SEU
Signal processing theory SPT
Signal theory SgT
White Gaussian noise WGN

Notation System

Notion Notation

abstract algebra and universal algebra


Additive semigroup SG(+)
Boolean algebra B0
Boolean lattice BL0
Boolean ring BR0
Generalized Boolean algebra B
Generalized Boolean lattice BL
Generalized Boolean ring BR
Group G
Lattice L
Measure on generalized Boolean algebra m
Metric upon generalized Boolean algebra ρ(A, B) = m(A∆B)
Multiplicative semigroup SG(·)
Null element O
Ring R(+, ·)
Semigroup SG
Signature of universal algebra A TA
Unit element I
Universal algebra A = (X, TA )
probability theory and mathematical statistics
Characteristic function Φw (u)
Coefficient of statistical interrelation ψξη
Cumulative distribution function Fξη (x, y), Fξ (x)
Estimator of parameter λ λ̂
Hyperspectral density σξ (ω)
Linear sample space LS(X , BX ; +)
Mathematical expectation M(∗)
Matrix of probabilities of transition Πn
Metric between samples ξt , ηt µ(ξt , ηt )
Metric between stochastic processes ξ(t), η(t) ρξη
Mutual normalized function of statistical interrelationship ψξη (tj , t0k )
Negative part of stochastic process v(t) v− (t)
Normalized correlation function rξ (t1 , t2 ), r(τ )
Normalized function of statistical interrelationship ψξ (tj , tk ), ψ(τ )
Normalized measure of statistical interrelationship ν(ξt , ηt ), ν(at , bt )
Normalized variance function θξ (t1 , t2 )
Positive part of stochastic process v(t) v+ (t)
Power spectral density S(ω)
Probabilistic measure of statistical interrelationship νP (ξt , ηt ), νP (at , bt )


Probability P
Probability density function pξη (x, y), pξ (x)
Sample space with lattice properties L(Y, BY ; ∨, ∧)
Sample space with L-group properties L(X , BX ; +, ∨, ∧)
Stochastic process ξ(t), η(t), . . .
signal processing theory
Carrier frequency f0
Conditional probability of correct detection D
Conditional probability of false alarm F
Cross-correlation coefficient between two signals rik
Detection of signal s(t) Det[s(t)]
Domain of definition of signal s(t) Ts
Dynamic error of signal filtering δd0
Dynamic error of signal smoothing δd,sm
Energy falling at one bit of information Eb
Energy of signal s(t) Es
Envelope of signal v(t) Ev (t)
Estimation of time parameter t1 Est[t1 ]
Estimator of the signal s(t) ŝ(t)
Estimator of time parameter t1 t̂1
Extraction of the signal s(t) Ext[s(t)]
Frequency shift ∆F
Intermediate processing of signal s(t) IP [s(t)]
Intermediate smoothing of signal s(t) ISm[s(t)]
Linear signal space LS(+)
Matched filtering M F [s(t)]
Metric signal space (Γ, µ)
Mismatching function w(λ0 , λ)
Modulating function M [∗, ∗]
Noise power spectral density N0
Normalized mismatching function ρ(λ0 , λ)
Normalized time-frequency mismatching function ρ(δτ, δF )
Number of periods of harmonic signal s(t) Ns
Observation interval Tobs
Period of signal carrier T0
Physical signal space Γ
Primary filtering P F [s(t)]
Probability of correct formation of signal estimator ŝ(t) P (Cc )
Probability of error formation of signal estimator ŝ(t) P (Ce )
Realization of signal x(t) x∗ (t)
Relative frequency shift δF
Relative time delay δτ
Resolution of signals s1 (t), s2 (t) Res[s1,2 (t)]
Signal space with L-group properties L(+, ∨, ∧), Γ(+, ∨, ∧)
Signal space with lattice properties L(∨, ∧), Γ(∨, ∧)
Signals a(t), b(t), . . .
Smoothing of signal s(t) Sm[s(t)]


Time delay ∆τ
Time of arrival of signal t0
information theory
Channel capacity C
Information distribution density iξ (tα , t), iξ (τ )
Information losses of first genus IL0
Information losses of second genus IL00
Informational signal space Ω
Mutual information distribution density iξη (t0k , t), iηξ (tj , t0 )
Overall quantity of information I(A), I[ξ(t)]
Quantity of absolute information IA
Quantity of mutual information IA·B
Quantity of overall information IA+B
Quantity of particular relative information IA−B
Quantity of relative information IA∆B
Relative quantity of information I∆ (A), I∆ [ξ(t)]
geometry
Barycenter of n-dimensional simplex bc[Sx(A)]
Curvature of set of elements A c(A)
Line l
Plane αABC
Sheet LAB
Simplex Sx(A)
Sphere Sp
Triangle ∆ABC
general notations
Cardinality of set U = {ui } Card U
Dirac delta function δ(x)
End of example 5
End of proof 
Euclidean space (n-dimensional) Rn
Fourier transform F[∗]
Heaviside step function 1(τ )
Hilbert space HS
Hilbert transform H[∗]
Indexed set {At }t∈T
Set of natural numbers N
1 General Ideas of Natural Science, Signal Theory, and Information Theory

All scientific research has both subject and methodological content. The latter is connected with critical reconsideration of the existing conceptual apparatus and approaches to the interpretation of phenomena of interest.
A researcher working with real world physical objects eventually must choose
a methodological basis to describe researched phenomena. This basis determines a
proper mathematical apparatus.
The choice of correct methodological basis is the key to success and often the
reason more useful information than expected is obtained. Mathematical principles
and laws contain a lot of hidden information accumulated during the ages. Mathematical ideas can give much more than was expected of them. That is why a person engaged in fundamental mathematical research may not foresee all the possible applications in natural science, sometimes creating the precedent in which “the most principal thing has been said by someone who does not understand it”. For example, Johannes Kepler possessed all the information necessary to formulate the law of universal gravitation, but he did not do so.
It is clear that any mathematical model developed on the basis of a specific
methodology may describe the real phenomenon studied with a certain amount of
accuracy. But it may happen that the mathematical model used until now does
not satisfy the requirements of completeness, adequacy, and internal consistency
anymore. In such a case, a researcher has two choices. The first is to explain all
known facts, all new facts, and appearing paradoxes within the Procrustean bed of
an old theory. The second option is to devise a new mathematical model to explain
all the available facts.
The modern methodology of any natural science involves its base, i.e., a methodological basis (a certain mathematical apparatus), a general system of notions and principles, and also a particular system of notions and principles of the given scientific direction. These components driving natural science research are interconnected within their disciplines and with other branches of science.
We first consider general system of notions and principles of modern research
methodology within natural sciences and then discuss a particular system of notions
of specific research directions, in this case signal processing theory and information
theory. We also explain the relationships of these subjects to various branches of
the natural sciences.
Phenomena and processes related to information and signal processing theories
are essential constituent parts of an entire physical world. The foundations of these

theories rely on the same fundamental notions and the same research techniques
utilized by all branches of science. The first step is to find out the content of
general system of notions used in research methodology of natural sciences, without
going into details of its structure, which is well stated in the proper literature.
Next, the researcher should analyze the suitability of the systems of notions that
are used in classical information theory and signal processing theory. Finally, one
should determine how both theories converge within the framework of research
methodology of natural sciences.

1.1 General System of Notions and Principles of Modern Research Methodology
Since ancient times, philosophers tried to understand natural phenomena, consider-
ing them within the unity of preservation and change of parts and the properties of
the whole; within the unity of order and disorder, regular and random; and within
the relation between diversity and duplication, discreteness and continuity. As nat-
ural science continued to develop, new concepts explained the relationships between
symmetry versus asymmetry and linearity versus nonlinearity.
Science progressed on the basis of deterministic principles for most of its history.
However, scientific facts revealed in the last few centuries demonstrated the need
for a probabilistic approach, i.e., the rejection of the unique description of the con-
sidered phenomena. Thermodynamics became the first branch of physics in which
probabilistic methods were tested. The subject was later named statistical thermo-
dynamics. Scientists of the 19th century conceived of light as waves rather than as streams of discrete particles. By the 20th century, the use of the probabilistic ap-
proach to describe these phenomena stimulated further review of fundamental ideas
and resulted in the creation of quantum mechanics. Applying probabilistic meth-
ods to classical physics led to the development of statistical physics and its first
parts, i.e., statistical thermodynamics and quantum statistics. The transition from
discreteness to continuity and from deterministic to probabilistic methods proved
productive in studies of natural phenomena.
We know that the discrete versus continuous and deterministic versus proba-
bilistic properties of matter should not be in opposition because they are closely
linked and effective if used in combination. Determining interrelations of various
properties of materials is not a simple endeavor. The degree of success is determined
by the depth of investigation of studied objects.
Symmetry in the natural world dates back to antiquity; its extraordinary ver-
satility was not known until the 19th century. Most natural phenomena display
symmetry and that fact played an important role in scientific advancement [177],
[178], [179], [180].
Studies of symmetry led to the discovery of invariance principles confining the
diversity of the objects’ structure. On the contrary, asymmetry leads to the ob-
jects’ diversity grounded in a given structural basis. Commonality of properties


constitutes symmetry; distinct properties reveal asymmetry.
Symmetry is almost impossible to define exactly. In every single phenomenon,
symmetry inevitably takes the form corresponding to it. Examples of symmetry
include the metrics of poetry and music, coloring in painting, word arrangements in
literature, constellations in astronomy. Symmetry reveals itself in the limitations on the course of physical processes. These constraints are described by the laws of conservation of energy, mass, momentum, electrical charge, etc. The creation of relativistic quantum
theory evoked the discovery of a new type of symmetry. This is symmetry of nature’s
laws with respect to simultaneous transformations of charge conjugation, parity
transformation, and time reversal, designated CPT-symmetry. Symmetry underlies elementary particle classification; in chemistry, it allowed Dmitri Mendeleev to devise his periodic table of elements. Gregor Mendel applied the idea of symmetry to
characterize hereditary factors. In abstract algebra and topology, symmetry appears
in isomorphic and homomorphic mappings respectively.
One more unusual qualitative measure of investigative complexity is nonlinearity. One of the first attempts to solve the oscillation equation of a pendulum demonstrated that its deflections are not always negligibly small, and one should use more exact expressions for the deflecting force, which makes the equation nonlinear. Even small nonlinear additions qualitatively change the situation; a sum of several solutions may not satisfy the equation. There is no principle of superposition: one cannot obtain a general solution from a group of specific solutions, and other approaches may be required.
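A minimal numerical sketch (an editor's illustration; the oscillation equation θ'' = −sin θ and the initial conditions below are chosen for demonstration only, not taken from the text) makes the failure of superposition concrete: for two solutions of the nonlinear equation, the residual of their sum does not vanish, whereas for the linearized equation θ'' = −θ it is identically zero.

    import numpy as np

    def integrate(theta0, omega0, f, dt=1e-3, steps=20000):
        # semi-implicit Euler integration of theta'' = f(theta)
        theta, omega = theta0, omega0
        path = np.empty(steps)
        for k in range(steps):
            omega += f(theta) * dt
            theta += omega * dt
            path[k] = theta
        return path

    nonlinear = lambda th: -np.sin(th)   # pendulum: theta'' = -sin(theta)

    th1 = integrate(0.8, 0.0, nonlinear)
    th2 = integrate(-0.3, 0.5, nonlinear)

    # Residual of the sum: (th1 + th2)'' + sin(th1 + th2) reduces to
    # sin(th1 + th2) - sin(th1) - sin(th2), since th1'' = -sin(th1)
    # and th2'' = -sin(th2).
    residual = np.sin(th1 + th2) - np.sin(th1) - np.sin(th2)
    print(np.max(np.abs(residual)))  # clearly nonzero: superposition fails

    # For the linearized equation theta'' = -theta, the analogous residual
    # (th1 + th2)'' + (th1 + th2) is identically zero.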
Nonlinearity appears within the majority of real systems [181], [182], [183]. The difficulties in investigating nonlinearity stem from the fact that the world of nonlinear phenomena, which requires special models for its description, is much wider than the “linear world”. There are many important nonlinear equations that should be studied and, to make matters worse, the majority of such equations cannot be solved analytically.
Nonlinearity is a feature describing the interactions of components within a
physical structure; it is also a universal property of various types of matter that
provide the diversity of material forms found throughout the world.
Among the most important ideas upon which all our knowledge systems are built is the notion of space. By the notion of space, modern geometry means a set of geometrical objects that are connected to each other by certain relations. Depending on the character of these ties, we distinguish Euclidean and non-Euclidean types of spaces.
The discoveries of non-Euclidean geometry and group theory represented turning points in the history of mathematics. Since then, a new era of mathematical development has begun, and various types of geometries beyond the Euclidean started to appear [184], [185], [186]. New algebraic systems that had no classical analogues advanced mathematics beyond the basic arithmetic of real numbers.
The greatest challenges at present are intensive research of cosmic space and studies of the behaviors of elementary particles. In this connection, we now want to
understand the structure of the universe and the geometry and algebra of intra-
atomic space. In the opinion of Paul Dirac:
The modern physical developments have required a mathematics that continu-
ally shifts its foundation and gets more abstract. Non-Euclidean geometry and
noncommutative algebra, which at one time were considered to be purely fic-
tions of the mind and pastimes of logical thinkers, have now been found to be
very necessary for the description of general facts of the physical world [187].
Modern science has no definitive answers to these questions. At present, we can
express only the most general understanding. The main elements of cosmic space
are objects called straight lines. They are trajectories of light wave movements
or trajectories of movements of particles bearing light energy, i.e., photons. The
gravity field lines surrounding all masses of matter are considered rectilinear. Tra-
jectories of material particles (cosmic rays) moving freely throughout the universe
are rectilinear.
All these straight lines analyzed on Earth’s scale are considered identical but
that conclusion may not be correct. We have no reason yet to speak about the
geometry of the universe. We can speak only about the geometries of light rays and
gravitation fields, etc. It is quite possible that these geometries can be absolutely
different, and the issue becomes even more complicated because the concepts of
general relativity theory, electromagnetic waves, and gravity fields are dependent
on each other.
The violation of the rectilinearity of light waves within gravity fields was estab-
lished theoretically and confirmed by observations. Light rays passing a heavy body,
for example, near the Sun, are distorted. The geometry of light rays in space is com-
plicated because huge masses of matter are distributed nonuniformly throughout
the universe.
General relativity theory revealed the interdependence of gravity field space,
electromagnetic field space, and time. These objects define four-dimensional space
whose laws have been explained by modern physicists, astronomers, and mathe-
maticians.
At present, we can say only that the properties of these objects are not described
by Euclidean geometry.
The geometry of the intra-atomic world is more ambiguous. In cosmic space,
we can indicate straight lines in a certain sense, but it is impossible to do this with
atomic nuclei. We have little to say about the geometry of intra-atomic space but
we can certainly say there is no Euclidean geometry there.
Although the word information has served as a catch-all for a long time, its use
in the middle of the 20th century evolved to describe a specific concept that plays a
critical role in all fields of science. Application of the information approach expanded
greatly since then and Claude Shannon reminded scientists working in social and
humanitarian disciplines of the need to keep their houses in first class order [188]. At
present, there are a lot of definitions of this notion. The definition choice sufficiently
depends on researchers’ directions, goals, techniques, and available technologies of
research in every individual case.
The definition of information has been covered widely in scientific literature


for decades but no single definition has appeared. The main reason is a persistent
development of information theory and other sciences that actively use a theoretical-
informational approach in their methodology. Expansion of the notion of informa-
tion continues to impact its definition and reveal new features and properties. The
alternative is either to develop a new definition of the notion based on a simple generalization of new data, or to give a qualitatively new definition of the notion of information, broad enough not to require revision whenever new revelations appear.
The analysis of the development of information theory allows the selection of
the most universal and sufficient features. The notion of information in its most
general form was described [189] as a reflected diversity and an interconnection
between reflection and diversity. Nevertheless, whether information is considered a
philosophical category or not, it occupies an important place in modern science and
a return to the past, when natural sciences could operate without this notion, is
impossible. The role of information in natural science has expanded steadily. The
1970 statement by Russian scientists A.I. Berg and B.V. Biryukov that, “Informa-
tion, probably, will take the first place in the 21st century in a world of scientific
and practically efficient notions” was certainly prophetic [190].
No concept or approach can be constructed without using a combination of
key notions. In this book, the key notions are probability and probabilistic ap-
proaches, symmetry and invariance principles, space and its geometry, nonlinearity,
information and its measure, discrete versus continuous, and deterministic versus
probabilistic. The main key notions of this approach are signal space and a measure
of information quantity. All necessary key notions regarding the suggested approach
are considered in their relation to research methodology used in natural sciences.

1.2 Information Theory and Natural Sciences


Analytical apparatus of information theory was created after the edifice of math-
ematical statistics had already been built. The central problem of mathematical
statistics is the development of methods that allow us to extract from data storage
the most vital information about the phenomena of interest. It is no wonder that
the first steps in studying information as a science were taken by Ronald A. Fisher, who is considered the founder of modern mathematical statistics. Apparently, he was the first mathematician who understood that this notion needed a more accurate definition, and he introduced the notion of sufficient statistics, i.e., an extract from observable data that contains all the information about distribution parameters.
Fisher's measure of the information quantity contained in data concerning an unknown parameter is well known to statisticians. This measure is the first use of the notion of “information quantity”, introduced mainly for the needs of estimation theory.
After the appearance of the works of Claude Shannon [51] and Norbert Wiener
[85], interest in information theory and its utility in the fields of physics, biology,
psychology, and other hard and soft science fields increased. Solomon Kullback connects this also with Wiener's statement that, in the practice of statistics, his (Wiener's) definition of information could be used instead of Fisher's [89]. We should note
Leonard Savage’s remark that, “The ideas of Shannon and Wiener, though con-
cerned with probability, seem rather far from statistics. It is, therefore, something
of an accident that the term ‘information’ coined by them should be not altogether
inappropriate in statistics”.
The main thesis of Wiener’s book titled Cybernetics, or Control and Commu-
nication in the Animal and the Machine [85] was the similarity of control and com-
munications processes in machines, living organisms, and societies. These processes
encompass transmission, storage, and processing of information (signals carrying
messages).
One of the brightest examples are the processes of genetic information transmis-
sion. Genetic information transmission plays a vital role in all forms of life. About 2
million species of flora and fauna inhabit the Earth. Transmission of genetic infor-
mation determines the development of all organisms from single cells to their adult
forms. Transmitted genetic data governs species structures and individual features
for both present and future generations. All this information is preserved within a
small volume of elementary cell nucleus and is transmitted through intricate ways
to all the other cells originated from a given one by cell fission; this information
is preserved also during the process of further reproduction of next generations of
similar species.
Every field of natural science and technology relies on information transmission, receiving, and transformation. Visible light delivers to many living creatures up to 90% of their data concerning the surrounding world; electromagnetic waves and fluxes of particles carry an imprint of processes in remote parts of the universe. All living organisms depend on information in their relationships with each other, with other living things, and with inanimate objects. Physicists, philosophers, and all others who studied all aspects of our existence and our world depended on the availability of information and information quantity long before Shannon and Wiener defined those terms.
When information theory developed in the works of mathematicians, it dropped out of sight of physics and the other natural sciences, and some scientists opined that information was an intangible notion that had nothing to do with energy transfer and other physical phenomena. The reasons for such misunderstandings can be explained by some peculiarities of the origin of information theory and of its further development and application. The appearance of mathematical communication theory was stimulated by the achievements of electronics, which in those times was based on classical electrodynamics with its inherent ideas of continuity and absolute simultaneous measurability of all the parameters of physical objects.
As communication technologies evolved, information theory was used to solve
the problems of both communication channel optimization and encoding method op-
timization, which transformed into a vast independent part of mathematics, having
moved into the background the problems of physical constraints in communication


systems.
Nevertheless, the analogy between Shannon – Wiener’s entropy and Boltzmann’s
thermodynamic entropy attracted a lot of research attention. Physical ideas in in-
formation theory never seemed to be strange to outstanding representatives of its
mathematical direction. The use of high frequency intervals of the electromagnetic
spectrum and miniaturization of information processing systems precipitated the
development of electronics that approaches the limit at which quantum mechanical
regularities and related constraints become essential.
By the end of the 20th century, quantum computing was accepted as a new
branch of science. Using the laws of quantum physics allowed us to create new types
of high capacity supercomputers. We know that quantum systems can provide high
computation speed and that messages transmitted via quantum communication
channels cannot be copied or intercepted [191], [192]. Russian scientist Alexander
Holevo said, “Regardless of the fact of how soon such projects could be realized,
quantum information theory is a new direction giving a key to understanding the
Nature’s fundamental regularities that were until now out of the field of researchers’
vision” [97].
What is the specificity of a physical approach to the problems of information
theory? Does the need for radical revision of Shannon’s theory really exist or does
the answer lie in application of physical equations to describe the properties of
communication channels? How should one connect the inferences of statistical in-
formation theory with general constraints imposed by Nature’s laws on communi-
cation channels? How can one establish the limits of fundamental realizability of
various information transmitting and processing systems? We know that under a
given signal power in the absence of interference (noise), channel capacity is always
a finite quantity [92].
Measurement is one of the fundamental problems of exact sciences. More gener-
ally, the problem is extracting information during detection, extraction, estimation,
classification, resolution, recognition, and other informational procedures. They are
widely used in physics (within relativity theory and quantum theory), mathematical
statistics (within statistical hypotheses testing and estimation theory), and other
disciplines. One should not consider the measurement process simply a comparison
of measured parameters with measurement units. One should take into account the
conditions under which measurements are made.
Measurement process is a specific case of the informational process; it is the
extraction of information about an object or its parameters.
In fact, the discussion of some theoretical physics questions, at least of those connected with the causality principle, has a theoretical-informational character. It is enough to note that the notion of the signal is used in the derivation of the laws of the special theory of relativity.
Regarding the close connection of information theory and measurement (or es-
timation), we recall Wiener’s remark [193]: “. . . One single voltage measured to an
accuracy of one part in ten trillion could convey all the information in the Encyclo-
pedia Britannica, if the circuit noise did not hold us to an accuracy of measurement
of perhaps one part in ten thousand”.
Another important issue is the specificity of measurement processes that has
no analogues in classical physics. According to quantum theory, probabilistic pre-
diction of measurement results of signal receiving cannot be determined only by a
signal state; a list of measurable parameters should be specified. Exact measure-
ment of some parameters usually fails to include reliable estimates regarding other
parameters. For example, while aspiring to measure velocity of an elementary par-
ticle in quantum physics or a target in radiolocation or hydrolocation, the ability
to obtain information about an object’s position in space is limited.
The essential peculiarity of some measurement processes, especially those based upon indirect methods of measurement, is the nonlinearity of the measurement space (or the sample space). The resulting problem of optimizing data processing methods is not trivial and, in general, has not been resolved satisfactorily as of this writing.
The development of electronic systems that serve various functions and are
characterized by the use of the microwave and optic ranges of the electromagnetic
spectrum requires study of the influences of physical phenomena on the processes
of signal transmitting, receiving, and processing.
The application of physical methods to information theory involves the reverse process: the use of informational approaches to solve some key problems of theoretical physics.
An information interchange is an example of a process developing from the past into the future. One can say that time “has a direction”. A listener cannot understand a compact disk spinning in the reverse direction. Reverse playback of film strips was widely used to create surrealistic effects in the early phases of cinematography development. The world may look absurd when the courses of events are reversed.
The laws of classical mechanics discovered by Isaac Newton are reversible. In
the equations time can figure with either positive or negative signs. Thus, time
can be considered reversible or irreversible. Time direction plays an important role
in research on life processes, meteorology, thermodynamics, quantum physics, and
other scientific areas.
The concept of an obvious irreversibility of time has found the most clear-cut
formulation in thermodynamics in the form of the Second Law, according to which
a quantitative characteristic called entropy never decreases. Originally, thermody-
namics was used to study the properties of the gases, i.e., large ensembles of particles
that are in persistent movement and interaction with each other. We can obtain
only partial data about such ensembles. Although Newton’s laws are applicable to
every single particle, it is impossible to observe each of them, or to distinguish one
particle from another. Their characteristics cannot be determined exactly; they can be studied only on the basis of probabilistic relationships.
One can obtain certain data concerning macroscopic properties of this ensemble
of particles, for instance, concerning a number of degrees of freedom or dimen-
sion, pressure, volume, temperature, energy. Some properties can be represented by
statistical distributions, for instance, particles’ velocity distribution. Also one can
observe some microscopic movements, but it is impossible to obtain complete data
about every particle.
An important place among probabilistic systems is occupied by information


interchange systems. Both types of systems can utilize the same statistical methods
to describe their behaviors. Information sources do not contain complete data. We
can explain the properties of an ensemble or encoding system even though some
constraints are imposed on their messages or signals. A priori, we cannot know the
details of a message source or determine which message will be transmitted at the
next moment. We cannot anticipate coming events or obtain complete information
from signals.
In statistical thermodynamics, entropy is the function of probabilities of the
states of the particles that constitute a gas. In statistical communication theory,
information quantity is the same function of message source states. In both cases,
we have some ensemble. In the case of a gas, the ensemble is the totality of particles
whose states (energies) are distributed according to some function of probability.
In communication theory, totality of messages or message source states are also
described by some function of probability. The relation between information and
entropy is revealed in the following formula of Wiener and Shannon:
H = -\sum_i p_i \log p_i,

where p_i is the probability of the i-th state of the message source.
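As a small illustration (an editor's sketch; the probability values below are arbitrary), the same formula can be evaluated for any discrete ensemble, whether its states are the energies of gas particles or the states of a message source.

    import numpy as np

    def entropy(p, base=2.0):
        # H = -sum_i p_i log p_i for a discrete distribution p
        p = np.asarray(p, dtype=float)
        assert np.isclose(p.sum(), 1.0), "probabilities must sum to 1"
        nz = p[p > 0]                    # convention: 0 * log 0 = 0
        return -np.sum(nz * np.log(nz)) / np.log(base)

    print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: maximal for 4 states
    print(entropy([0.7, 0.1, 0.1, 0.1]))      # about 1.36 bits: a less uniform source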


The amazing connection between two areas of knowledge can be used in two
ways. One can follow the exact mathematical methods, taking into account the
appropriateness of their use or follow a less formal method. A lot has been written
about the entropy of social and economic systems and their applications in vari-
ous areas of research suffering from “methodological hunger.” As noted by Edward
Cherry [88]:
It is the kind of sweeping generality which people will clutch like a straw. Some
part of these interpretations has indeed been valid and useful, but the concept of
entropy is one of considerable difficulty and of a deceptively apparent simplicity.
It is essentially a mathematical concept and the rules of its application are
clearly laid down.
Unfortunately, the ease of transfer of thermodynamics ideas to information theory
and other sciences already led to several unsubstantiated and speculative ideas.
Shannon clearly sensed that and in a critical article [188] he wrote:
Starting as a technical tool for the communication engineer, it (information
theory) has received an extraordinary amount of publicity in the popular as
well as the scientific press. . . . As a consequence, it has perhaps been ballooned
to an importance beyond its actual accomplishments. . . . What can be done to
inject a note of moderation in this situation? In the first place, workers in other
fields should realize that the basic results of the subject are aimed in a very
specific direction. . . Secondly, we must keep our own house in first class order.
Of course, keeping “our own house in first class order” in the context of information theory means neither dust removal nor furniture transposition. Now it is realized by the addition of extensions to the edifice of information theory.
Among them a researcher can select unified information theory [54], semantic
information theory [86], [91], quantum information theory [97], algebraic informa-
tion theory [95], dynamic information theory [98], combinatorial methods [59] and
topological approaches [87] to the definition of information. We cannot claim that
architectural harmony exists among the various choices. We can only hope that
information theory in the foreseeable future will be built upon an integrated load-
carrying frame.

1.3 Overcoming Logical Difficulties


Modern teachings about space are logical, consistent, and branched, and rest on
a synthesis of physical, mathematical, and philosophical ideas. In antiquity, the
first expression of mathematical abstraction of real space was developed and is now
known as Euclidean geometry. This geometry reflects the space relationships of our
daily experience with great accuracy, and has been used for more than 2,000 years.
Until recently it was the only space theory that represented lucidity, harmony, and
entirety for all natural sciences.
During the development of natural sciences and philosophy of New Time, teach-
ing about space became inseparably linked with principles of mechanics developed
by Isaac Newton and Galileo Galilei. Newton put forth the notions of absolute space
and relative space into the base of his mechanics. Within contemporary interpreta-
tion, Newton’s theses on classical mechanics [194] state:

1. Space exists independently of anything in the world.


2. Space contains all the objects of nature and gives the place to all of its
phenomena, but does not experience their influence upon itself.
3. Space is everywhere the same with respect to its properties. All its points are equivalent; space is isotropic.
4. In all times space is invariably the same.
5. Space extends unrestrictedly along all directions and has infinite volume.
6. Space has three dimensions.
7. Space is described by Euclidean geometry.
Newton’s understanding was considered the only true one for almost two centuries
despite Gottfried Leibniz's opposing position [195]. Leibniz detected the boundedness of Newton's view of space and considered the separation of space from matter to be erroneous. He wrote, “What is the space without a material object? . . . Unhesitatingly I will answer that I do not know anything about this.” Leibniz
considered space as a property of the world of material objects without which it


could not exist. This interpretation of the notion of space, while logical, did not
correspond to the majority opinion about the unity and universality of Euclidean
geometry. Although Leibniz’s idea seemed incontestable and obvious, the scientific
community would not even discuss the possible existence of a geometry other than
Euclidean.
The New Time’s teaching about space developed two opposite concepts: (a)
space is a receptacle of all the material objects and (b) space is a property of a set
of material objects. Concept (a) is connected with Newton and (b) is attributed to
Leibniz.
In a surprising development in geometry since the discoveries of Nikolai Loba-
chevsky and János Bolyai, Bernhard Riemann pursued differential-metrical research
and Felix Klein studied a group-theoretical approach. These developments in ge-
ometry found acceptance in philosophy and the natural sciences.
According to Riemann’s ideas about constructing geometry [184], [196], one
should define a variety of the elements characterized by their coordinates in some
absolute Euclidean space. He also proposed defining a differential quadratic form
that determines the linear element, i.e., the law of measurement of distances between infinitely close elements of the variety.
Conversely, according to Klein [184], [197], a mathematician constructing a geometry should define some variety of elements and the group of mappings allowing these elements to be transformed into each other. In this case, he sees the problem of geometry as studying the relationships between these elements that are invariant under all the mappings of this group.
Two opposing philosophical concepts (Newton’s concept of absolute space and
Leibniz’s treatment of space as a property of material objects) developed within
the space geometry construction theories of Riemann and Klein respectively.
The development of the idea of space in signal theory repeats the concept of
space in physics and mathematics. Vladimir Kotelnikov was one of the first scientists
to utilize the concept of signal space to investigate noise immunity in communication
systems [127], thus, he laid the foundation of signal theory. Lewis Franks’ book titled
Signal Theory [122] is the first source in this field.
According to Franks, mathematical analysis provides the base of signal theory.
It relies on the notion of signal space corresponding to the function space in terms
of mathematical analysis. The two types of function spaces are metric and linear.
The linear spaces are further classified as normed linear spaces and linear spaces
with scalar product. The linear spaces with scalar product (more simply, Euclidean
spaces) serve as the basis of modern signal theory.
This approach to the construction of signal space involves a number of disadvantages. For example, a stochastic signal carrying a certain information quantity and a deterministic signal having known parameters (but not carrying information) are represented by the same kind of element of linear space with scalar product (by a vector). Thus, regardless of their informational properties, all signals are represented equally in signal space.
It is logical to ask how to describe signals in terms of linear space with scalar product, taking into consideration their informational properties. If we use the modern view of the notion of space in physics, the question can be formulated in the following form: do any signals, whether carrying or not carrying information, deterministic or stochastic, wideband or narrowband, generate exactly Euclidean spaces and no others? Existing signal theory indicates the answer is affirmative. On the basis of the sampling theorem, the transformation progresses from continuous signals to discrete signals, i.e., into the form of a vector of the signal samples. Further, the techniques and methods of linear algebra are used to solve the signal processing problems.
problems.
Another theoretical problem lies in the use of linear space with scalar product
to describe real signals. This type of description imposes hard constraints on signal
properties and signal processing capabilities.
We take as an example a linear space S with scalar product defined by the signal set {s_i(t)} with certain probabilistic properties: all the elements {s_i(t)} of the space S are represented by signal sample vectors. The signals are assumed to be stationary Gaussian stochastic processes. We also let G = {g_ij} be a group of linear isomorphisms (bijective linear mappings) of the signals {s_i} into each other in the space S. Whatever linear mapping g_ij is applied to the elements s_i of the space S, there is no place in it for nonGaussian stochastic processes and their nonlinear transformations. Thus, the linearity property essentially confines the diversity of signal space by the probabilistic properties of the signals (in this case, the Gaussian stochastic signals alone form the signal space) and by the type of signal processing (linear signal transforms only). Note that s_i + s_j is also a Gaussian stochastic process (the closure property of a group). For this (and only this) example, the probabilistic properties of the signals are confined to stationarity and Gaussian distribution, and all possible signal processing methods are confined to linear signal processing. If no constraints are imposed on the probabilistic properties of the stochastic processes and the group of isomorphisms G = {g_ij}, then S is a linear space with nonGaussian stochastic processes and G = {g_ij} is a group of nonlinear isomorphisms.
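A quick numerical sketch (an editor's illustration with arbitrary parameters) shows the point: a linear combination of Gaussian samples remains Gaussian, while a simple nonlinear transform does not, which is already visible in the sample excess kurtosis.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    s1 = rng.normal(0.0, 1.0, n)   # samples of one Gaussian signal
    s2 = rng.normal(0.0, 2.0, n)   # another independent Gaussian signal

    def excess_kurtosis(x):
        # 0 for a Gaussian distribution; deviations indicate nonGaussianity
        x = x - x.mean()
        return np.mean(x**4) / np.mean(x**2)**2 - 3.0

    linear = 0.7 * s1 + 0.3 * s2   # a linear mapping: still Gaussian
    nonlinear = s1**2              # a nonlinear mapping: chi-square, not Gaussian

    print(excess_kurtosis(linear))     # close to 0
    print(excess_kurtosis(nonlinear))  # close to 12 (chi-square, 1 degree of freedom)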
The next theoretical difficulty lies in the relation between the continuous signal s(t) and the result of its discretization. The continuous signal and its sample vector s = [s_j] = [s(t_j)] represent elements of distinct spaces with different properties. The first is an element of Hilbert space HS. The second belongs to finite-dimensional Euclidean space R^n. These two spaces are not isomorphic to each other.
The problem arises because the signal sample vector s = [s_j] and its single sample s(t_j) are represented absolutely equally in the space R^n by a point (vector). Thus, both the single instantaneous value s(t_j) of the signal and the signal s(t) as a whole are represented equally by the least element of the space R^n (by a point).
The theoretical basis of the transfer from Hilbert space into finite-dimensional Euclidean space is the sampling theorem, which embodies an attempt to achieve harmony between continuous and discrete entities. Its inferences and relationships have paramount theoretical and applied importance. They are used to solve various problems of signal processing theory and allow continuous and discrete signal processing to be treated from the same standpoint. Nevertheless, treating the sampling theorem as an exact statement with respect to real signals, supposedly providing technological methods of continuous signal processing without information losses, creates numerous problems.
One theoretical disadvantage of the sampling theorem is its orientation toward
using deterministic signals with known parameters that cannot carry information.
Nikolai Zheleznov [198] suggested applying the sampling theorem interpretation to
stochastic signals. This idea considers the signals as nonstationary stochastic pro-
cesses with some power spectral densities. Zheleznov also proposed using an idea
concerning the boundedness of correlation interval and its smallness as against the
signal duration. The correlation interval would be considered equal to the sam-
pling interval. That is, the utility of using the sampling theorem with stochastic
signals lies in the requirement for lack of correlation of neighbor samples during the
transformation of a continuous signal to a discrete one.
The main feature of Zheleznov’s variant of sampling theorem is the statement
that the sequence of discrete samples in time domain can provide only an approx-
imation of the initial signal. Meanwhile, we know that classical formulation of the
sampling theorem claims absolute accuracy of signal representation.
Unfortunately this interpretation also presents disadvantages. The correlation
concept used here describes only linear statistical relations and thus limits the use of this interpretation for nonGaussian random processes (signals). Even if one assumes
the processed signal is completely deterministic and is described by some analytic
function (even a very complicated one), the use of the sampling theorem will create
theoretical and practical difficulties.
First, a real deterministic signal has a finite duration T . In the frequency do-
main, the signal has an unbounded spectrum. However, due to the properties of
real signal sources and the boundedness of real channel passband, one can consider
the signal spectrum to be bounded by some limited frequency F . The spectrum is
usually defined on the basis of an energetic criterion, i.e., it is bounded within the
frequency interval from 0 to F where most of the signal energy concentrates.
This spectrum boundedness leads to a loss of some information. As a result,
restoration of a bounded signal by its samples in time domain based on the sampling
theorem constrained by limits on the signal spectrum can be only approximate.
Errors arise also from the constraints on the finite number of samples lying within
time interval T (equal to 2F T according to the theorem). These errors appear to
be due to neglecting the infinite number of expansion functions corresponding to
samples outside the interval T .
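A short numerical sketch (an editor's illustration; the bandwidth F, the interval T, and the test signal are arbitrary choices) shows the effect: reconstructing a signal from only the 2FT samples lying inside the interval T with the truncated cardinal (sinc) series leaves a residual error, largest near the edges of the interval, because the expansion functions of the discarded outside samples are neglected.

    import numpy as np

    F = 4.0                    # assumed spectrum limit, Hz
    T = 2.0                    # observation interval, s
    dt = 1.0 / (2.0 * F)       # Nyquist sampling step
    n_inside = int(2 * F * T)  # the 2FT samples lying within T

    def signal(t):
        # a band-limited test signal (all components below F)
        return np.sin(2*np.pi*1.3*t) + 0.5*np.cos(2*np.pi*3.1*t)

    t = np.linspace(0.0, T, 1000)
    t_k = np.arange(n_inside) * dt   # sample instants inside [0, T]

    # truncated cardinal series: s(t) ~ sum_k s(t_k) sinc((t - t_k)/dt)
    recon = sum(signal(tk) * np.sinc((t - tk) / dt) for tk in t_k)

    err = np.abs(signal(t) - recon)
    print(err.max(), err[len(t) // 2])  # edge error far exceeds mid-interval error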
Second, the restoration procedure causes another error arising from the impos-
sibility of creating pulses of infinitesimal duration and transmitting them through
real communication channels. The maximum output signal corresponding to the
reaction of an ideal low-pass filter on delta-function action has a delay time that
tends to infinity. For a finite time T , every sample function and their sums that
are copies of initial continuous signals will be formed only approximately. Less T
means more rough approximation.
Nevertheless, some formulations of the sample theorem are free of these disad-
vantages, as will be shown in Chapter 4.
The first steps in developing the notion of information quantity were under-
taken by Ralph Hartley, an American communication specialist [50]. He suggested


characterizing the information quantity of a message consisting of N symbols of an alphabet containing q symbols by a quantity equal to the logarithm of the total number of messages from an ensemble (the logarithmic measure):

H = N log q.

Hartley considered messages consisting of both discrete symbols and continu-


ous waveforms. He stated that, “The sender is unable to control the form of the function with complete accuracy”; thus, messages of the latter type include an infinite quantity of information. Hartley considered that the information quantity which could be transmitted within the frequency band F for the time T was proportional to the product F T log S, where S is the number of “distinguishable intensities of a signal”. Hartley understood that this measure of information quantity, though a convenient approach to a number of practical problems, was imperfect because it did not account for the statistical structures of messages.
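Both of Hartley's estimates are straightforward to compute. The sketch below (an editor's illustration; the alphabet size, message length, band, time, and number of intensities are arbitrary placeholders) evaluates H = N log q and the F T log S estimate.

    import numpy as np

    def hartley_message(N, q, base=2.0):
        # H = N log q: information in a message of N symbols from a q-symbol alphabet
        return N * np.log(q) / np.log(base)

    def hartley_channel(F, T, S, base=2.0):
        # information transmissible in band F during time T, with S
        # "distinguishable intensities of a signal" (proportional to F*T*log S)
        return F * T * np.log(S) / np.log(base)

    print(hartley_message(N=100, q=32))          # 500.0 bits
    print(hartley_channel(F=3e3, T=1.0, S=16))   # 12000.0 bits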
If we assume a correspondence between the symbols of some alphabet ai (or
states of signal waveforms) and probabilities pi , Hartley’s measure determining the
information quantity in a message consisting of k symbols takes the form:
H = -\sum_{i=1}^{k} p_i \log p_i.    (1.3.1)

A number of authors obtained this equation by various methods. Shannon [51] and
Wiener [85] based their work on a statistical approach to information transmission.
However, statistical approaches vary. In the approach to information quantity
above, we dealt with average values, not information transmitted by a single symbol.
Thus, Equation (1.3.1) represents an average value. It can be rewritten as:

H = -\overline{\log p_i},

where the overbar denotes statistical averaging.

Specialists in probability theory and mathematical statistics may call H a mathe-


matical expectation of the logarithm of the probability of symbols’ appearance. Like a variance, H represents a measure of the scattering of the values of a random variable (or a stochastic process) with respect to the mean, i.e., a measure of the statistical “rarity” of message symbols. Note that for the most common random variable
(stochastic process) models, this measure is proportional to some monotonically
increasing function of variance. Cherry [88] noted, “In a sense, it is a pity that the
mathematical concepts stemming from Hartley have been called ‘information’ at
all”. Equation (1.3.1) for entropy defines only one aspect of information: statistical
rarity of a single instantaneous value (symbol) appearance in continuous or discrete
message.
In attempts to extend the notion of entropy upon the continuous random vari-
ables, another difficulty appears. Let us determine the entropy of a continuous ran-
dom variable ξ with probability density function (PDF) p(x) starting from entropy
of discrete (quantized) random variable per Equation (1.3.1). If the quantization
step ∆x of the random variable ξ is small enough against its range, the probability
that random variable will take its values within the i-th quantization interval will
be approximately equal to:
pi = p(xi )∆x,
where p(xi) is the PDF value at the point xi. Substituting the value pi into Equation
(1.3.1), we have:

H(x) = lim_{∆x→0} {−∑_i p(xi)∆x · log[p(xi)∆x]} =
     = lim_{∆x→0} {−∑_i [p(xi) log p(xi)]∆x − log ∆x · ∑_i p(xi)∆x}.

Taking into account the normalization property of the PDF p(x):

lim_{∆x→0} ∑_i p(xi)∆x = 1,

and the limit passage

lim_{∆x→0} {−∑_i [p(xi) log p(xi)]∆x} = −∫_{−∞}^{∞} p(x) log p(x) dx,

we obtain:

H(x) = −∫_{−∞}^{∞} p(x) log p(x) dx − lim_{∆x→0}(log ∆x).    (1.3.2)

As Equation (1.3.2) shows, in the passage to continuous random variables the
entropy tends to infinity. Therefore, continuous random variables do not allow
introducing a finite absolute quantitative measure of information. Differential
entropy H(x) is defined by rejecting the infinite term lim_{∆x→0}(log ∆x) of
Equation (1.3.2):

H(x) = −∫_{−∞}^{∞} p(x) log p(x) dx.    (1.3.3)

Thus, to define continuous random variable entropy, Shannon’s approach [51] lies
in a limit passage from discrete random variables to continuous ones while ignoring
the infinity, rejecting the continuous random variable entropy as devoid of sense, and
replacing it by differential entropy. Shannon’s preference for discrete communication
channels does not look logical in light of recent developments in communications
technology. Shannon’s information quantity, during the transition to continuous
noiseless channels, is coupled too closely with thermodynamic entropy, so the problems
arising from both concepts have the same characteristics and the same cause.
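
The split in Equation (1.3.2) can be observed numerically. The following Python sketch (an illustration only; the standard Gaussian density is an arbitrary assumption) quantizes a continuous random variable with step ∆x and shows that the discrete entropy grows like −log ∆x while the remainder converges to the differential entropy (1.3.3):

    import math

    def pdf(x):
        # Standard Gaussian density, chosen only for illustration.
        return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

    def quantized_entropy(dx, span=12.0):
        # Entropy (1.3.1) of the quantized variable with p_i ~ p(x_i) * dx
        # over a grid covering [-span/2, span/2].
        n = int(span / dx)
        h = 0.0
        for i in range(n):
            x = -span / 2 + (i + 0.5) * dx
            p = pdf(x) * dx
            if p > 0:
                h -= p * math.log2(p)
        return h

    for dx in (0.1, 0.01, 0.001):
        h = quantized_entropy(dx)
        # h + log2(dx) approaches the differential entropy (~2.047 bits),
        # while h itself grows without bound as dx -> 0.
        print(dx, round(h, 4), round(h + math.log2(dx), 4))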
The additive noise that limits the signal receiving accuracy according to the clas-
sical information theory imparts a finite unambiguous sense to information quantity.
An analogous situation occurs in determining the capacity of a continuous
communication channel with additive Gaussian noise [51]:

C = F log(1 + P/N),    (1.3.4)
where F is a channel bandwidth, P is the signal power, N is the noise power.
Analysis of this expression leads to a conclusion about the possibility of transmitting
any information quantity per second by indefinitely weak signals in the
absence of noise. This result is based on the assumption that at low levels of
interference (noise), one can distinguish two signals indefinitely close to each other with
any reliability, causing an unlimited increase in channel capacity while the noise power
decreases to zero. This assumption seems absurd from a theoretical view because
Nature’s fundamental laws limit measurement accuracy, and they cannot be surmounted
by any technological methods and means.
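
A short numerical sketch (with hypothetical bandwidth and power values) makes this divergence of Equation (1.3.4) explicit as the noise power tends to zero:

    import math

    def capacity(F, P, N):
        # Channel capacity (1.3.4): C = F * log2(1 + P / N).
        return F * math.log2(1 + P / N)

    F, P = 1.0e3, 1.0  # hypothetical bandwidth (Hz) and signal power (W)
    for N in (1.0, 1e-3, 1e-6, 1e-9):
        print(N, capacity(F, P, N))  # C grows without bound as N -> 0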
Despite the attempts to eliminate the infinity during the transition from discrete
random variable entropy to continuous random variable entropy, Shannon’s theory
retains the same problem, arising from defining continuous channel capacity on the
basis of differential entropy. The indispensable condition is the presence of noise in
channels, because the information quantity transmitted by a signal per time unit tends
to infinity in the absence of noise.
Another difficulty with classical information theory arises because differential
entropy (1.3.3), unlike the entropy of Equation (1.3.1), is not preserved under bijective
mappings of stochastic signals; this situation can produce paradoxical results. For
example, the process y(t) obtained from an initial signal x(t) by its amplification k times
(k > 1) possesses greater differential entropy than the original signal x(t) at the
input of the amplifier. This, of course, does not mean that the signal y(t) = kx(t)
carries more information than the original one x(t). Note an important circumstance:
Shannon’s theory excludes the notion of the quantity of absolute information
generally contained in a signal.
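
This scaling effect can be checked in closed form for a Gaussian signal, whose differential entropy equals (1/2) log2(2πeσ²); the sketch below (an illustration, with a hypothetical gain k) shows that amplification adds exactly log2 k to the differential entropy, although the mapping y(t) = kx(t) is one-to-one:

    import math

    def gaussian_diff_entropy(sigma):
        # Differential entropy (1.3.3) of a Gaussian variable: 0.5*log2(2*pi*e*sigma^2).
        return 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)

    k = 4.0                               # hypothetical amplifier gain
    h_x = gaussian_diff_entropy(1.0)      # entropy of x(t)
    h_y = gaussian_diff_entropy(k * 1.0)  # y(t) = k*x(t) scales sigma by k
    print(h_x, h_y, h_y - h_x)            # the difference equals log2(k) = 2 bits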
The question of the quantity of information contained in a signal (stochastic
process) x(t) has no place in Shannon’s theory. In this theory, the notion of infor-
mation quantity makes sense only with respect to a pair of signals. In that case, the
appropriate question is: how much information does the signal y (t) contain with
respect to the signal x(t)? If the signals are Gaussian with correlation coefficient
ρxy, the quantity of mutual information I[x(t), y(t)] contained in the signal y(t) with
respect to the signal x(t), assuming their linear relation y(t) = kx(t), is equal to
infinity:

I[x(t), y(t)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} p(x, y) log [p(x, y)/(p(x)p(y))] dx dy = −log √(1 − ρxy²) = ∞.
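
The divergence of the mutual information as the signals become deterministically related can be traced with the closed-form Gaussian expression above; in the sketch below (an illustration only), the correlation coefficient values are arbitrary:

    import math

    def gaussian_mutual_info(rho):
        # I[x(t), y(t)] = -log2(sqrt(1 - rho^2)) for jointly Gaussian signals.
        return -0.5 * math.log2(1 - rho ** 2)

    for rho in (0.9, 0.99, 0.999999):
        print(rho, gaussian_mutual_info(rho))  # diverges as rho -> 1, the case y = k*x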

Evidently, an answer given only in terms of the quantity of mutual information is
too weak for the question about the quantity of absolute information. The question
about the quantity of absolute information can be formulated more generally and neutrally.
Let y(t) be the result of a nonlinear (in the general case) one-to-one transformation of
a Gaussian stochastic signal x(t):

y (t) = f [x(t)]. (1.3.5)

The question is: will the information contained in the signals x(t) and y(t) be the
same if their probabilistic-statistical characteristics differ? In the case above, their
probability density functions, autocorrelation functions, and power spectral densities
could differ. For example, if the result of the nonlinear transformation (1.3.5) of a
quasi-white Gaussian stochastic process x(t) with power spectral density width Fx
is the stochastic process y(t) with power spectral density width Fy, Fy > Fx, does
the resulting stochastic process y(t) carry more information than its original at the
input of the transformer? It is hard to believe that classical information theory can
provide a perspicuous answer to this question.
The so-called cryptographic encoding paradox relates to this question and can
be explained. According to Shannon [72], information quantities obtained from two
independent sources should be added. The cryptographic encoder must provide
statistical independence between the input x(t) and the output y(t) signals:

p(x, y) = p(x)p(y),

where p(x, y) and p(x), p(y) are joint and univariate probability density functions,
respectively.
respectively.
We can consider the signals at the input x(t) and the output y(t) of a cryptographic
encoder as independent message sources. The general quantity of information
I obtained from both of them (Ix, Iy) should equal their sum: I = Ix + Iy.
However, under any one-to-one transformation of the form (1.3.5), the identity
I = Ix = Iy must hold, inasmuch as both x(t) and y(t) carry the same information.
The information-theoretic conclusion that Gaussian noise possesses the strongest
interference effect (maximum entropy) among all types of noise with limited average
power is connected closely with the noninvariance property of differential entropy
[Equation (1.3.3)] with respect to a group of signal mappings. This statement contradicts
known results, for example, of signal detection theory and estimation theory, which
provide examples of interference (noise) exerting a stronger influence on signal
processing systems than Gaussian interference (noise).
Shannon’s measure of information quantity, like Hartley’s measure, cannot claim
to cover all factors determining “uncertainty of outcome” in an arbitrary sense.
For example, these measures do not account for the time aspects of signals. Entropy
(1.3.1) is defined by the probabilities pi of various outcomes. It does not depend on
the nature of the outcomes, whether they are close or distant [199].
The degree of uncertainty will be the same for two discrete random variables ξ and η
characterized by identical probability distributions pξ(x) and pη(y):

pξ(x) = ∑_i p_i^ξ · δ(x − xi),  pη(y) = ∑_i p_i^η · δ(y − yi),  p_i^ξ = p_i^η,

for equipotent countable sets of the values {xi} = x1, x2, . . . , xn, {yi} =
y1, y2, . . . , yn, although the average absolute deviations of the random variables ξ and
η can differ:

∫_{−∞}^{∞} |x − mξ| pξ(x) dx << ∫_{−∞}^{∞} |y − mη| pη(y) dy,

where mξ = ∫_{−∞}^{∞} x pξ(x) dx, mη = ∫_{−∞}^{∞} y pη(y) dy.
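
A two-point example (with arbitrary outcome values, assumed only for this sketch) shows how entropy ignores the geometry of the outcomes while the average absolute deviation does not:

    import math

    def entropy(p):
        # Entropy (1.3.1) of a discrete distribution.
        return -sum(pi * math.log2(pi) for pi in p)

    def mean_abs_dev(values, p):
        m = sum(v * pi for v, pi in zip(values, p))
        return sum(abs(v - m) * pi for v, pi in zip(values, p))

    p = [0.5, 0.5]            # identical probability distributions for xi and eta
    xi_vals = [0.0, 1.0]      # "close" outcomes (illustrative values)
    eta_vals = [0.0, 1000.0]  # "distant" outcomes (illustrative values)

    print(entropy(p))                                           # 1 bit for both variables
    print(mean_abs_dev(xi_vals, p), mean_abs_dev(eta_vals, p))  # 0.5 versus 500.0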
−∞ −∞
Applying these concepts to the messages (signals) means that under Shannon’s
approach, one should consider so-called conditional entropy to account for the sta-
tistical relationships between single fragments of the messages.
If chaos is interpreted as the absence of statistical coupling between time
series of events, then Shannon’s entropy (1.3.1) is an uncertainty measure in a timeless
space or in a space with time disorder. Jonathan Swift described the situation
in Gulliver’s Travels. The protagonist visits the Laputian Academy of Lagado and en-
in Gulliver’s Travels. The protagonist visits Laputian Academy of Lagado and en-
counters a wonder machine with which “the most ignorant person, at a reasonable
charge, and with a little bodily labour, might write books in philosophy, poetry,
politics, laws, mathematics, and theology, without the least assistance from genius
of study.” The sentences in such books are formed by random combinations of “par-
ticles, nouns, and verbs, and other parts of speech.” Every press of the machine’s
handle produces a new phrase with the help of “all the words of their language, in
their several moods, tenses, and declensions, but without any order.”
Émile Borel described an experiment involving a monkey and a typewriter. A
monkey randomly pressing the keys could create “texts” with maximal information
content according to Shannon. It seems appropriate here to repeat Ilya Prigogine’s
statement about the “impossibility of surrounding world description without con-
structive role of time” [200]. We contend that neglecting the time component (or
statistical relations between the instantaneous values of the signals) is unsatisfac-
tory when constructing the foundations of signal theory and information theory.
The paradox of subjective perception of a message by sender and addressee is
specific to statistical information theory. The message M represents a deterministic
set of the elements (signs, symbols, etc.) to the sender. Thus, from the standpoint
of the sender, the message contains the quantity of information I (M ) equal to zero.
For the addressee, the same message M ∗ is a probabilistic-statistical totality of the
elements. Therefore, from the standpoint of the addressee, this message contains
the quantity of information I(M∗) that does not equal zero. A paradoxical situation
occurs when the sender knowingly sends a message that contains a quantity
of information equal to zero while the receiver is sure the message contains real
content.
Three considerations for constructing the foundations of information theory are:

1. Accepting the sender’s view that the message is completely known and its
elements form a deterministic totality
2. Considering the view of the addressee — the message is unknown and its
elements form a probabilistic-statistical totality
3. As an ideal observer, attempting to unify the views of the sender and the
addressee
The authors of the sampling theory accept the view of the sender who knows the
content of the message sent and they prefer to work with deterministic functions.
The creators of statistical information theory considered the view of the addressee
indisputable. Researchers in semantic information theory tried to improve the sit-
uation by treating the message as an invariant of information [91], [201]. From this
view, the quantity of semantic information transmitted by a message has to be the
same for both the sender and the addressee.
Analogously, quantity of syntactical information I (M ) contained in determinis-
tic (for the sender) message M must be equal to the quantity of syntactical informa-
tion I (M ∗ ) contained in the received message M ∗ that represents for the addressee
the probabilistic-statistical totality of the elements: I (M ) = I (M ∗ ).
This circumstance demands an appropriate elucidation of information theory to
ensure that a measure of information quantity combines both probabilistic and
deterministic approaches.
The author sees a resolution to this predicament in an approach that does
not require identifying the measure of information quantity with physical entropy.
Its essence lies in representing the signals carrying information by a physical
system with a set of probabilistic states, and not by abstract random variables and
sets of numbers.
The informational structure of a random function (stochastic signal) represents
an internal organization of the system (the signal) that determines robust relations
between its elements. The totality of the elements of informational structure of the
signal is the set of the elements of metric signal space where the metric between
any two elements is invariant with respect to a group of signal mappings.
In summary, these considerations can be divided into two groups. The first
group concerns signal space concept, its current state (Group 1 below) and its
suggested development (Group 1A below). The second group covers issues related
to a measure of information quantity, its current state (Group 2 below) and its
suggested development (Group 2A below).
Group 1:

1. Linear spaces with scalar product (Euclidean spaces) serve as the basis of
the modern variant of signal theory construction.
2. In classical signal theory, any signal, whether stochastic (carrying a certain
quantity of information) or deterministic (containing no information), is
represented by an element of linear space with scalar product (a vector).
Regardless of their informational properties, all signals are represented
equally in the signal space.
3. Using linear space with scalar product to describe real signals imposes
strong constraints on signal properties and signal processing. The
probabilistic properties of signals are described, mainly, by Gaussian
distribution, and optimal processing of such signals is confined within linear
processing. In the case of nonGaussian inputs, optimal signal processing
algorithms require the use of nonlinear processing methods.
4. The least element of classical signal space is a signal represented by a
point (or vector) in Euclidean space. A single element of the signal (an
instantaneous value of the signal) is described quite similarly. To maintain
logic, an instantaneous value of a signal must be the least element of signal
space. This circumstance influences the profundity of constructing the
signal space concept, the possibilities of investigating the properties of real
signals, and also the possibilities of optimal signal processing.
5. The concept of signal space is related closely to formulating sampling
theorems and provides the dialectical unity of continuous and discrete forms
of signal representation. The use of the existing variant of the sampling
theorem for passing from Hilbert to Euclidean space meets both theoretical
and practical difficulties.
Based on items 1 through 5, we can formulate a requirement list for our signal
space concept.
Group 1A:

1. Signal space has no absolute character; it is not a receptacle of material
objects (signals). Signal space is formed by a set of the signals interacting
with each other.
2. To construct the geometry of signal space, the set of the space elements
(the set of the signals) should be defined. Also one should define a group of
morphisms to allow mapping these elements into each other. The geometry
of signal space requires studying relationships of the signals that remain
invariable through all mappings of a given group.
3. The signal space concept must be based on metric spaces and not confined
exclusively to linear spaces with scalar product.
4. The signal space concept must describe interactions of signals with arbi-
trary probabilistic and informational properties.
5. The signal space concept must consider time aspect of signals.
6. Transition from continuous form of signal representation to discrete form
with the help of sampling theorem or its analogue must ensure the sig-
nals belong to the same space. The least element of metric signal space
(the point of metric space) must be a single instantaneous value (a signal
sample), not a signal as a whole.
7. The signal space concept must be in accordance with a measure of infor-
mation quantity carried by the signals.
We now adduce the considerations concerning the classical measure of information
quantity, which can be united into the second group.
Group 2:

1. Introducing entropy as a measure of information quantity was advocated
by the communication engineering of the past, and was not based on the
needs of information theory. Entropy reflects a measure of the fortuitousness
of events (of a random variable), i.e., a measure of statistical rarity. From the
standpoint of analyzing the characteristics of a random variable, entropy is
a measure of the spread of the values of the random variable around its mean
value.
A measure of information quantity of a statistical totality of elements (a
message or a signal) must reflect the diversity of all the elements, not
simply a single element of this totality (a sample of either a message or a
signal). Information contained in a totality of the elements is based on the
diversity of the statistical relationships of the elements, not on the diversity
of the possible values of the elements. Information is not a characteristic
of a single random variable considered outside a statistical totality. The
statement that a separately taken random variable contains a quantity of
information is nonsensical. Information can be used with respect to a
statistical totality as a whole or with respect to its single elements. As
applied to stochastic processes (signals), the measure of information
quantity must include consideration of the time statistical relationships
between single instantaneous values (samples) of the stochastic process.
2. Another theoretical inconvenience of classical information theory is the
inability to construct a signal space using entropy as a measure of infor-
mation quantity. The situation when information exists outside a signal
space that should have been formed by signals acting as material carriers
of information (when information exists per se, and the signal space exists
per se) is absurd.
The absence of a measure of information quantity built on relationships
of the elements of statistical totality (the signal) does not allow uniting
within the framework of the unified signal space the sets of single elements
(samples) of the signals and the sets of the signals.
3. Classical information theory is slanted toward stochastic processes in the
form of discrete random sequences (random sequences with domains of
definition and ranges of values in the form of countable sets). The entropy
introduced for this purpose depends on the average statistical rarity of the
values of a discrete random variable. Attempts to extend entropy to
continuous random variables encounter the difficulty that the entropy of a
continuous random variable is an infinitely large quantity, due to the
unboundedly increasing statistical rarity of the values of the random variable.
Trying to overcome the infinity by rejecting the infinite term, thus forming
the notion of differential entropy, yields a quantity that may look similar to
the entropy of a discrete random variable, but the action is baseless. The
result is the incorrect inference of an infinitely large channel capacity in the
absence of noise.
4. Statistical information theory is one-sided: it does not allow using the
possibilities of nonprobabilistic approaches to describe deterministic
totalities of elements. The orientation of classical information theory
toward random sequences is clear. It is impossible for two observers to
compare the information quantity of the same message when one of them
knows the message content exactly but the other does not. This
situation does not permit using a quantity of absolute information contained
in a message (signal) that is independent of subjective factors and
invariant with respect to a group of mappings of the messages (signals).
In summary, we can formulate a list of requirements for the suggested measure of
information quantity. The measure of information quantity must:
Group 2A:

1. Be a characteristic of a single element of a statistical totality and of the
statistical totality as a whole.
2. Be a measure of structural diversity of this totality.
3. Take into account statistical relationships between the elements of total-
ity: time statistical relationships of stochastic processes and space-time
statistical relationships of stochastic fields.
4. Evaluate information quantity within deterministic and probabilistic-
statistical totalities of the elements, thus combining the possibilities of
probabilistic and nonprobabilistic approaches.
5. Conform to the notion of signal space.
6. Encompass stochastic processes (signals) of all kinds, whose domains of
definition and ranges of values are countable sets, continua, or both.
7. Be invariant with respect to a group of signal mappings in signal space.
The last consideration is formulated as the main axiom of signal processing
theory.

Axiom 1.3.1. The main axiom of signal processing theory. The information quantity
I[y] contained in the signal y at the output of an arbitrary processing unit f[∗] does
not exceed the information quantity I[x] contained in the initial signal x before
processing:
I[y] ≤ I[x]; y = f[x],    (1.3.6)
so that the inequality (1.3.6) turns into an identity if the signal processing realizes
a one-to-one mapping of the signal:

x ⇄ y (direct mapping f, inverse mapping f⁻¹); I[x] = I[y],    (1.3.6a)

where f ∈ G, G is a signal mappings group.


The identity (1.3.6a) will be formulated and proved in the form of the
corresponding theorems in Chapters 2, 3, and 4. Note that some signal mappings not
related to one-to-one transformations also provide the identity (1.3.6a). We consider
some of these issues in Chapter 4.
W.G. Tuller [10] formulated the idea of comparing information quantities in the
input and output of a processing unit in the form of (1.3.6).
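
Axiom 1.3.1 can be given a quick numerical illustration if, for a discrete source, Shannon entropy is taken as a stand-in for the information quantity I[·] (an assumption made only for this sketch; the measure adopted in this book is introduced in Chapter 2):

    import math
    from collections import defaultdict

    def entropy(p):
        return -sum(q * math.log2(q) for q in p.values() if q > 0)

    def pushforward(p, f):
        # Distribution of y = f(x) for a discrete input distribution p.
        out = defaultdict(float)
        for x, q in p.items():
            out[f(x)] += q
        return dict(out)

    p_x = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}     # hypothetical input alphabet
    p_bij = pushforward(p_x, lambda x: 2 * x + 1)  # one-to-one: I[y] = I[x]
    p_many = pushforward(p_x, lambda x: x // 2)    # many-to-one: I[y] < I[x]

    print(entropy(p_x), entropy(p_bij), entropy(p_many))  # 2.0, 2.0, 1.0 bits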
The methodological approach applied to construction of information theory by
Shannon is based on elementary techniques and methods of probability theory. It
ignores the basis of probability theory built on Boolean algebra with a measure.
Thus, we should expect to use Boolean algebra with a measure to construct the
foundations of signal processing theory and information theory. This approach could
be a unified factor allowing us to provide the unity of theoretical foundations of
aforementioned directions of mathematical science, imparting to them force, uni-
versality, and commonality.
Furthermore, the physical character of signal processing theory and its applied
orientation may stimulate the development of information theory, probability the-
ory, mathematical statistics, and other mathematical directions and application of
their research methods to all branches of natural science.
The principal concept of the work is constructing the unified foundations of
signal processing theory and information theory on the basis of the notion of sig-
nal space built upon generalized Boolean algebra with a measure. The latter induces
a metric in this space, and simultaneously it is a measure of the information quantity
contained in the signals. That consideration provides the unity of the theoretical
foundation of interrelated directions of mathematical science: probability theory,
signal processing theory, and information theory, based upon the unified method-
ological basis: generalized Boolean algebra with a measure.
2
Information Carrier Space Built upon Generalized
Boolean Algebra with a Measure

In Chapter 1, the requirements for signal space, i.e., the space of material carriers
of information and the requirements for a measure of information quantity were
formulated. In this chapter, we show that the space built upon Boolean algebra with
a measure meets all the properties of signal space and a measure of information
quantity in full.
The difficulties of attempting to define some general notion of scientific use are
well known. So, the fundamental notion of a “set” has no direct definition, but this
fact does not interfere with the study of mathematics; it is enough to know the
main theses on set theory.
In this chapter, our attention will be concentrated on the fundamental, but
vaguer, notion of “information”. Even a very superficial analysis reveals
considerable difficulty in defining information. Undoubtedly, science needs such a
definition.
mulated axioms and theorems describing, on the one hand, the properties of a space
of material carriers of information (signal space), and on the other hand, the prop-
erties of information itself, its measure, and the peculiarities of processing of its
carriers (signals).
Boolean algebra with a measure and metric space built upon Boolean algebra
and induced by this measure are well investigated [173–176, 202–213]. Main results
on generalized Boolean algebras and rings are contained in several works [214–218].
The study of interrelations between lattices and geometries begins with [219], [220],
[221]. The papers [204] and [222] were the first to describe the geometric properties of
a Boolean ring and a Boolean algebra, respectively.
Analysis of the development of signal processing theory and information theory
suggests their independent and isolated existence along with their weak interac-
tions. Often, one can gain the impression, that information carriers (signals) and
principles of their processing exist per se, while transmitted information and the
approaches intended to describe a wide range of informational processes exist apart
from information carriers. This is shown by the fact that the founders of signal
theory, signal processing theory, and information theory considered a signal space
irrespective of information carried by the signals [122], [127], [143], and on the
other hand, a measure of information quantity regardless of its material carriers
(signals) [50], [51], [85]. This contradiction inevitably raises the question of
unifying the mathematical foundations of both signal processing theory and information
theory. It is shown in this chapter that this theoretical difficulty can be overcome
by utilizing generalized Boolean algebra with a measure to describe the informational
characteristics and properties of stochastic signals.
Generalized Boolean algebra with a measure is a wonderful model to unify math-
ematical foundations of information theory and signal processing theory, inasmuch
as a measure determines information quantity transferred by the signals forming
the signal space with specific properties, and also induces metric in this space.

2.1 Information Carrier Space


As for the notation system, one should note the following. Upper case Latin letters
are used to denote the sets of elements and their Boolean analogues. The symbols
∪ and ∩ are used to denote operations of set-theoretic union and intersection,
respectively. To denote the notion of Boolean sum, the symbols + and Σ are used.
The symbol ⋃ is used to emphasize a structure (for instance, a continuous or discrete
one), which is inherent to a system of elements. To denote the notion of Boolean
product, the symbols · and Π are used. The symbol ∆ is used to denote the sum
operation of a Boolean ring and also the symmetric difference of Boolean algebra. The
null element of Boolean algebra is denoted by O. An indexed set {At}t∈T is a mapping
that associates each element t ∈ T with an element At.
In this section, the main material concerning lattice theory and Boolean alge-
bras is expounded. More detailed information is contained in the known algebraic
literature (see [175], [176], [221], [223]).
Universal algebra A = (X, TA) with a signature TA = {Tn | n ≥ 0} (a system
of operations) is a set X (the algebra carrier) on which, for any n ≥ 0, every t ∈ Tn
is associated with an n-ary algebraic operation on A = (X, TA) [221], [223].
Algebraic structure L = (X, TL ), TL = (+, · ) is a lattice with binary operations
of addition a + b = supL {a, b} and multiplication a · b = inf L {a, b}; a, b ∈ X, if it
satisfies the following identities:

(a + b) + c = a + (b + c), (a · b) · c = a · (b · c) (associativity)
a+b = b + a, a·b = b·a (commutativity)
a+a = a, a·a = a (idempotency)
(a + b) · a = a, (a · b) + a = a (absorption)

Lattice L is called distributive if it satisfies the identities called distributive laws:

a(b + c) = ab + ac, a + (bc) = (a + b)(a + c).

Over a lattice a null O (unit I) element (also called zero and unity, respectively)
can be defined:

a + O = a, a · O = O; a + I = I, a · I = a.

Lattice L is called a lattice with relative complements if for any element a in any
interval [b, c] there exists an element d ∈ [b, c] such that a + d = c and
a · d = b. The element d is called a relative complement of the element a in the interval [b, c].
Lattice L with zero O and unity I is called complemented lattice, if every element
has a relative complement in the interval [O, I]. Relative complements in the interval
[O, I] are simply called complements.
In a distributive lattice L with zero O and unity I, every element a possessing the
complement a′ also has a relative complement d = (a′ + b)c in any interval [b, c],
a ∈ [b, c].
Distributive lattice L with null element O and relative complements is called
generalized Boolean lattice BL in which a relative complement of the element a in
the interval [O, a + b] is called the difference of the elements b and a and is denoted
by b−a. BL with unit element I is called Boolean lattice BL0 . One can also say that
Boolean lattice BL0 is a distributive lattice with complements, i.e., a distributive
lattice L with zero O and unity I.
Generalized Boolean lattice BL, considered as a universal algebra BL =
(X, TBL ) with the signature TBL = (+, · , −, O) of the type (2, 2, 2, 0), is called
generalized Boolean algebra B = (X, TB ), TB ≡ TBL . Let a∆b = (a + b) − ab in
BL; then we obtain generalized Boolean ring BR = (X, TBR ) with the signature
TBR = (∆, · , O) of the type (2, 2, 0). On the contrary, any generalized Boolean
ring BR = (X, TBR ) with the signature TBR = (∆, · , O) of the type (2, 2, 0)
can be turned into generalized Boolean lattice BL = (X, TBL ) with the signature
TBL = (+, · , −, O) of the type (2, 2, 2, 0), assuming a + b = a∆b∆ab.
Generalized Boolean algebra B can be defined by the following system of iden-
tities:
(a + b) + c = a + (b + c), (a · b) · c = a · (b · c) (associativity)
a+b = b + a, a·b = b·a (commutativity)
a+a = a, a·a = a (idempotency)
(a + b) · a = a, (a · b) + a = a (absorption)
a · (b + c) = ab + ac, a + bc = (a + b)(a + c) (distributivity)
a · (b − a) = O, a + ( b − a) = a + b.

Boolean lattice BL0, considered as universal algebra BL0 = (X, TBL0) with the
signature TBL0 = (+, · , ′, O, I) of the type (2, 2, 1, 0, 0), is called Boolean algebra
B0 = (X, TB0), TB0 ≡ TBL0.
Boolean algebra B0 can be defined by the following system of identities:

(a + b) + c = a + (b + c), (a · b) · c = a · (b · c) (associativity)
a+b = b + a, a·b = b·a (commutativity)
a+a = a, a·a = a (idempotency)
(a + b)a = a, (a · b) + a = a (absorption)
a(b + c) = ab + ac, a + bc = (a + b)(a + c) (distributivity)
a + O = a, a · I = a;
a · a′ = O, a + a′ = I.

There are supplementary operations in Boolean algebra derived from the main
ones. The most important ones are the difference a − b = ab′ and the symmetric difference
a∆b = (a − b) + (b − a).
Let a∆b = ab′ + a′b in Boolean algebra B0; we obtain Boolean ring BR0 with the
signature (∆, ·, O, I) of the type (2, 2, 0, 0). Conversely, any BR0 with the signature
(∆, ·, O, I) of the type (2, 2, 0, 0) can be transformed into Boolean algebra B0 with
the signature (+, ·, ′, O, I) of the type (2, 2, 1, 0, 0), assuming a + b = a∆b∆ab,
a′ = a∆I. Thus, Stone’s duality between Boolean algebras and Boolean rings is
established.
Finite additive measure on generalized Boolean algebra B is a finite real function
m on B satisfying the following conditions:

1. m(a) ≥ 0 for any a ∈ B


2. m(a + b) = m(a) + m(b), if ab = O
The conditions imply m(a + b) ≤ m(a)+ m(b), m(a) ≤ m(b) on a ≤ b and m(O) = 0.
A measure on generalized Boolean algebra B is called a σ-additive measure (or
σ-measure) if m[∑_{i=1}^{∞} xi] = ∑_{i=1}^{∞} m(xi) for any countable orthogonal system {xi :
xi · xj = O, i ≠ j; i, j ∈ {1, 2, . . .}}.
Further, to denote subalgebras of generalized Boolean algebra with a measure
defined on various sets, for instance on X and Y, X ≠ Y, we write B(X) and
B(Y), respectively.
We define the notion of information carrier space using generalized Boolean
algebra B(Ω) with a measure m.

Definition 2.1.1. Information carrier space Ω is a set of the elements {A, B, . . .}:
A, B, . . . ⊂ Ω called information carriers, that possesses the following properties:

1. The space Ω is generalized Boolean algebra B with a measure m and the


signature (+, ·, −, O) of the type (2, 2, 2, 0).
2. Any set A: A ⊂ Ω, as an information carrier in Ω, can be represented by
a totality of the elements {Aα}: A = ⋃_α Aα; each of them is characterized
by normalized measure: m(Aα) = 1. Thus, any set A: A ⊂ Ω forms a
system of elements (of sets) with normalized measure.
3. Any set A of the space Ω is a set linearly ordered by an index, i.e., any two of
its elements Aα and Aβ are comparable by the index: α < β ⇒ Aα < Aβ.
Systems of the elements A, B, . . . not possessing this property will not be
considered below.
4. System of the elements A = {Aα } possesses the property of continuity, if
for any arbitrary pair of the elements Aα and Aβ of the system A, there
exists such an element Aγ of the system A that α < γ < β. This is a
system of elements with continuous structure. Systems not possessing this
property are systems of elements with discrete structure.
5. The following operations are defined in the space Ω over the elements Aα,
Aα ∈ A, A = ∑_α Aα; Bβ, Bβ ∈ B, B = ∑_β Bβ; A, B ⊂ Ω:

(a) addition: A + B = ∑_α Aα + ∑_β Bβ
(b) multiplication: Aα1 · Aα2, Bβ1 · Bβ2, Aα · Bβ, A · B = (∑_α Aα) · (∑_β Bβ)
(c) difference: Aα1 − Aα2, Bβ1 − Bβ2, Aα − Bβ, A − B = A − (A · B)
(d) symmetric difference: Aα1∆Aα2, Bβ1∆Bβ2, Aα∆Bβ, A∆B = (A − B) + (B − A)
(e) null element O: A∆A = O; A − A = O
6. A measure m on generalized Boolean algebra B(Ω) introduces a metric
defining a distance ρ(A, B ) between the sets A, B by the relationship:

ρ(A, B ) = m(A∆B ), (2.1.1)

where A∆B is a symmetric difference of sets A, B.


Analogously, the distance ρ(Aα1, Aα2) between the single elements Aα1
and Aα2 of the set A is defined:

ρ(Aα1, Aα2) = m(Aα1∆Aα2).    (2.1.2)

Thus, the information carrier space (Ω, ρ) is a metric space.


7. An isomorphism Iso from the space Ω onto the space Ω′, Iso: Ω → Ω′,
preserving a measure m: m(A) = m(A′), m(B) = m(B′), A, B ⊂ Ω,
A′, B′ ⊂ Ω′, also preserves the metric induced by this measure:

ρ(A, B) = m(A∆B) = m(A′∆B′) = ρ(A′, B′).

Thus, the spaces (Ω, ρ) and (Ω′, ρ) are isometric.
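
The metric (2.1.1) is easy to experiment with numerically. Below is a minimal Python sketch (an illustration, not part of the original construction) that models information carriers as finite subsets of a common set Ω with the counting measure m(A) = |A|, so that ρ(A, B) = |A∆B|; the particular sets are arbitrary assumptions:

    from itertools import permutations

    def rho(A, B):
        # Metric (2.1.1) with the counting measure: rho(A, B) = m(A delta B) = |A ^ B|.
        return len(A ^ B)

    # Hypothetical information carriers: finite subsets of a common set Omega.
    carriers = [frozenset({1, 2, 3}), frozenset({2, 3, 4, 5}), frozenset({1, 5}), frozenset()]

    # The axioms of a metric hold; in particular, the triangle inequality:
    for A, B, C in permutations(carriers, 3):
        assert rho(A, C) <= rho(A, B) + rho(B, C)
    print("triangle inequality verified for all triples")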

2.2 Geometrical Properties of Metric Space Built upon


Generalized Boolean Algebra
In this section, we consider the questions concerning geometrical properties of met-
ric space Ω based on generalized Boolean algebra B(Ω) with a measure m.

2.2.1 Main Relationships between Elements of Metric Space Built upon


Generalized Boolean Algebra with a Measure
Let B(Ω) be a generalized Boolean algebra B = (Ω, TB ) with a measure m defined
on a set Ω. A measure m on B(Ω) introduces a metric in Ω, determining a distance
ρ(A, B ) between the elements A, B ∈ Ω by the formula [175, Section VI.1,(I)]:

ρ(A, B ) = m(A∆B ), (2.2.1)


where A∆B is the symmetric difference between the elements A, B of the space Ω.
Thus, introducing the metric, generalized Boolean algebra B = (Ω, TB) turns
into the metric space Ω, whose topological properties are considered in [175]. The main
relationships between the elements A, B of the metric space Ω can be obtained easily
from the relations of B(Ω) with a measure m and null element O.
The main group of identities:

m(A + B ) = m(A) + m(B ) − m(AB );

m(A + B ) = m(A∆B ) + m(AB );


m(A∆B ) = m(A) + m(B ) − 2m(AB ).
The main group of inequalities:

m(A) + m(B) ≥ m(A + B) ≥ m(A∆B);

m(A) + m(B) ≥ m(A + B) ≥ max[m(A), m(B)];

max[m(A), m(B)] ≥ √(m(A) · m(B)) ≥ min[m(A), m(B)] ≥ m(AB).
Relationships of orthogonality:
Two elements X, Y ∈ Ω of generalized Boolean algebra B(Ω) are called
orthogonal, X⊥Y, if the identity holds [175, Section VI.1,(I)]:

XY = O (m(XY) = 0).

Then, for an arbitrary pair of the elements A, B, we have:

AB⊥A′⊥B′⊥AB⊥A∆B,

where A′ = A∆(AB), B′ = B∆(AB).

FIGURE 2.2.1 Hexahedron built upon a set of all subsets of element A + B.
FIGURE 2.2.2 Tetrahedron built upon elements O, A, B, C.

Consider the hexahedron O, AB, A′, B′, A, B, A∆B, A + B built upon the set of all
subsets of the element A + B of generalized Boolean algebra B(Ω), represented
by A + B = (AB)∆A′∆B′ (see Fig. 2.2.1). One can try to perceive the geometry of
the space Ω by the method of immersing the metric space Ω, formed by B(Ω) with a
measure m, into Euclidean space. Consider the points of the hexahedron O, AB, A′, B′,
A, B, A∆B, A + B as the points of Euclidean space Rⁿ of dimension n = 3, so that
the null element of the space Ω coincides with the null element (zero) of Euclidean space R³,
and the axes x1, x2, x3 pass through the points A′, B′, AB, respectively. The coordinates of
the hexahedron points O, AB, A′, B′, A, B, A∆B, A + B are, respectively, equal to:

O (0, 0, 0), AB (0, 0, m(AB)), A′ (m(A′), 0, 0), B′ (0, m(B′), 0);
A (m(A′), 0, m(AB)), B (0, m(B′), m(AB));
A∆B (m(A′), m(B′), 0), A + B (m(A′), m(B′), m(AB)).
The points O, AB, A′, B′, A, B, A∆B, A + B belong to a sphere Sp(OAB, RAB)
in R³, so that the coordinates of the center OAB and the radius RAB of the sphere are,
respectively, equal to:

OAB (m(A′)/2, m(B′)/2, m(AB)/2),    (2.2.2)

RAB = √(m(A′)² + m(B′)² + m(AB)²)/2.    (2.2.3)
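
Relationships (2.2.2) and (2.2.3) admit a direct numerical check. The sketch below (an illustration with arbitrarily chosen measures m(A′) = 2, m(B′) = 3, m(AB) = 1) verifies that all eight vertices are equidistant from OAB in R³:

    import math

    a, b, c = 2.0, 3.0, 1.0  # assumed values of m(A'), m(B'), m(AB)

    points = {
        "O": (0, 0, 0),  "AB": (0, 0, c),  "A'": (a, 0, 0),   "B'": (0, b, 0),
        "A": (a, 0, c),  "B": (0, b, c),   "AdB": (a, b, 0),  "A+B": (a, b, c),
    }
    center = (a / 2, b / 2, c / 2)                    # Equation (2.2.2)
    radius = math.sqrt(a ** 2 + b ** 2 + c ** 2) / 2  # Equation (2.2.3)

    for name, p in points.items():
        assert abs(math.dist(p, center) - radius) < 1e-12, name
    print("all eight vertices lie on the sphere of radius", radius)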
Metric (2.2.1) between the points A, B in the space Ω is given by the relationship:

ρ(A, B) = |x1^A − x1^B| + |x2^A − x2^B| + |x3^A − x3^B|.    (2.2.4)

Metric d(A, B) between the points A, B in Euclidean space R³ is defined by the
known equation:

d(A, B) = [|x1^A − x1^B|² + |x2^A − x2^B|² + |x3^A − x3^B|²]^{1/2}.    (2.2.5)

It is obvious that ρ(A, B) ≥ d(A, B). The sides of the triangle ∆OAB in the space Ω,
according to Equation (2.2.4), are equal to:

ρ(O, A) = m(A′) + m(AB); ρ(O, B) = m(B′) + m(AB);

ρ(A, B) = m(A′) + m(B′).

The sides of the triangle ∆OAB in the space R³, according to Equation (2.2.5), are
equal to:

d(O, A) = [m²(A′) + m²(AB)]^{1/2}; d(O, B) = [m²(B′) + m²(AB)]^{1/2};

d(A, B) = [m²(A′) + m²(B′)]^{1/2}.


For the sides of triangle ∆OAB, the relationships of sine rule (sine theorem) and
cosine rule (cosine theorem) of a triangle in Euclidean space R3 hold.
For the hexahedron O, AB, A′, B′, A, B, A∆B, A + B, the following relationships
hold.
The equalities between the opposite sides of the hexahedron:

m(A′) = m(A′∆O) = m[A∆(AB)] = m[(A + B)∆B] = m[(A∆B)∆B′];

m(B′) = m(B′∆O) = m[B∆(AB)] = m[(A + B)∆A] = m[(A∆B)∆A′].
The equalities between the diagonals of the opposite faces of the hexahedron:

m(A∆B) = m[(A∆B)∆O] = m(A′∆B′) = m[(A + B)∆(AB)];

m(A) = m(A∆O) = m[A′∆(AB)] = m[(A + B)∆B′] = m[(A∆B)∆B];

m(B) = m(B∆O) = m[B′∆(AB)] = m[(A + B)∆A′] = m[(A∆B)∆A].

The equalities between the large diagonals of the hexahedron:

m(A + B) = m[(A + B)∆O] = m[(A∆B)∆AB] = m(A′∆B) = m(A∆B′).

For an arbitrary triplet of the elements A, B, C and the null element O of the space
Ω, the tetrahedron metric relationships hold (see Fig. 2.2.2):

m(A∆B ) = m(A∆O) + m(O∆B ) − 2m[(A∆O)(O∆B )];

m(B ∆C ) = m(B ∆O) + m(O∆C ) − 2m[(B ∆O)(O∆C )];


m(C ∆A) = m(C ∆O) + m(O∆A) − 2m[(C ∆O)(O∆A)];
m(A∆B ) = m(A∆C ) + m(C ∆B ) − 2m[(A∆C )(C ∆B )];
m(B ∆C ) = m(B ∆A) + m(A∆C ) − 2m[(B ∆A)(A∆C )];
m(C ∆A) = m(C ∆B ) + m(B ∆A) − 2m[(C ∆B )(B ∆A)].

2.2.2 Notion of Line in Metric Space Built upon Generalized Boolean


Algebra with a Measure
The notion of a line (we shall mean a straight line here and below) in metric space
Ω with metric (2.2.1) differs from the corresponding notion in Euclidean space and
is introduced by the following definition [222]:

Definition 2.2.1. Line l in the metric space Ω with metric (2.2.1) is a set con-
taining at least three elements A, B, X: A, B, X ∈ l ⊂ Ω, if the metric identity
holds:
ρ(A, B ) = ρ(A, X ) + ρ(X, B ). (2.2.6)

We say that the element X is situated between the elements A and B (A ≺
X ≺ B or B ≻ X ≻ A). The condition (2.2.6) is equivalent to the identity:

m(X ) = m(AX ) + m(XB ) − m(AB ). (2.2.7)

Equation (2.2.7) has two solutions with respect to X: 1) X = A + B, 2) X = A·B.


Then, obviously, the following lemmas hold.

Lemma 2.2.1. For an arbitrary pair of the elements A, B of metric space Ω, the
elements A, X = A + B, B lie on the same line, so that the element X = A + B is
situated between the elements A and B:

m(A∆X ) + m(X ∆B ) = m(A∆B ).


Lemma 2.2.2. For an arbitrary pair of the elements A, B of metric space Ω, the
elements A, Y = A · B, B lie on the same line, so that the element Y = A · B is
situated between the elements A and B:

m(A∆Y ) + m(Y ∆B ) = m(A∆B ).
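
The two solutions of Equation (2.2.7) can be checked with the counting-measure model used above (a sketch with arbitrary finite sets assumed only for illustration):

    def rho(A, B):
        # Metric (2.2.1) with the counting measure: m(A delta B) = |A ^ B|.
        return len(A ^ B)

    A, B = {1, 2, 3, 4}, {3, 4, 5}    # hypothetical elements of the space
    for X in (A | B, A & B):          # X = A + B and X = A*B, the solutions of (2.2.7)
        assert rho(A, B) == rho(A, X) + rho(X, B)
    print("A + B and A·B are both situated between A and B")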

For the vertices of the hexahedron built upon the points O, AB, A′, B′, A, B, A∆B,
A + B, the following relationships of belonging to the same line lij in Ω hold:

1. For the points AB, B, A + B, A:

m[(AB )∆(A + B )] = m[(AB )∆B ]+ m[B ∆(A + B )] ⇒ AB, B, A + B ∈ l11 ;

m(B ∆A) = m[B ∆(A + B )] + m[(A + B )∆A] ⇒ B, A + B, A ∈ l12 ;


m[(A + B )∆(AB )] = m[(A + B )∆A] + m[A∆(AB )] ⇒ A + B, A, AB ∈ l13 ;
m[A∆B ] = m[A∆(AB )] + m[(AB )∆B ] ⇒ A, AB, B ∈ l14 .

2. For the points O, B′, A∆B, A′:

m[O∆(A∆B)] = m[O∆B′] + m[B′∆(A∆B)] ⇒ O, B′, A∆B ∈ l21;

m(B′∆A′) = m[B′∆(A∆B)] + m[(A∆B)∆A′] ⇒ B′, A∆B, A′ ∈ l22;

m[(A∆B)∆O] = m[(A∆B)∆A′] + m[A′∆O] ⇒ A∆B, A′, O ∈ l23;
m[A′∆B′] = m[A′∆O] + m[O∆B′] ⇒ A′, O, B′ ∈ l24.

3. For the points O, AB, A, A′:

m[O∆A] = m[O∆(AB)] + m[(AB)∆A] ⇒ O, AB, A ∈ l31;

m[(AB)∆A′] = m[(AB)∆A] + m[A∆A′] ⇒ AB, A, A′ ∈ l32;

m[A∆O] = m[A∆A′] + m[A′∆O] ⇒ A, A′, O ∈ l33;
m[A′∆(AB)] = m[A′∆O] + m[O∆(AB)] ⇒ A′, O, AB ∈ l34.

4. For the points O, AB, B, B′:

m[O∆B] = m[O∆(AB)] + m[(AB)∆B] ⇒ O, AB, B ∈ l41;

m[(AB)∆B′] = m[(AB)∆B] + m[B∆B′] ⇒ AB, B, B′ ∈ l42;

m[B∆O] = m[B∆B′] + m[B′∆O] ⇒ B, B′, O ∈ l43;
m[B′∆(AB)] = m[B′∆O] + m[O∆(AB)] ⇒ B′, O, AB ∈ l44.
5. For the points A, A′, A∆B, A + B:

m[A∆(A∆B)] = m[A∆A′] + m[A′∆(A∆B)] ⇒ A, A′, A∆B ∈ l51;

m[A′∆(A + B)] = m[A′∆(A∆B)] + m[(A∆B)∆(A + B)] ⇒
⇒ A′, A∆B, A + B ∈ l52;
m[(A∆B)∆A] = m[(A∆B)∆(A + B)] + m[(A + B)∆A] ⇒
⇒ A∆B, A + B, A ∈ l53;
m[(A + B)∆A′] = m[(A + B)∆A] + m[A∆A′] ⇒ A + B, A, A′ ∈ l54.

6. For the points B, B′, A∆B, A + B:

m[B∆(A∆B)] = m[B∆B′] + m[B′∆(A∆B)] ⇒ B, B′, A∆B ∈ l61;

m[B′∆(A + B)] = m[B′∆(A∆B)] + m[(A∆B)∆(A + B)] ⇒
⇒ B′, A∆B, A + B ∈ l62;
m[(A∆B)∆B] = m[(A∆B)∆(A + B)] + m[(A + B)∆B] ⇒
⇒ A∆B, A + B, B ∈ l63;
m[(A + B)∆B′] = m[(A + B)∆B] + m[B∆B′] ⇒ A + B, B, B′ ∈ l64.

7. For the points O, AB, A + B, A∆B:

m[O∆(A + B )] = m[O∆(AB )]+ m[(AB )∆(A + B )] ⇒ O, AB, A + B ∈ l71 ;

m[(AB )∆(A∆B )] = m[(AB )∆(A + B )] + m[(A + B )∆(A∆B )] ⇒


⇒ AB, A + B, A∆B ∈ l72 ;
m[(A + B )∆O] = m[(A + B )∆(A∆B )] + m[(A∆B )∆O] ⇒
⇒ A + B, A∆B, O ∈ l73 ;
m[(A∆B )∆(AB )] = m[(A∆B )∆O] + m[O∆(AB )] ⇒ A∆B, O, AB ∈ l74 .

8. For the points A′, A, B′, B:

m(A′∆B) = m(A′∆A) + m(A∆B) ⇒ A′, A, B ∈ l81;

m(A∆B′) = m(A∆B) + m(B∆B′) ⇒ A, B, B′ ∈ l82;

m(B∆A′) = m(B∆B′) + m(B′∆A′) ⇒ B, B′, A′ ∈ l83;
m(B′∆A) = m(B′∆A′) + m(A′∆A) ⇒ B′, A′, A ∈ l84.
9. For the points O, A, A + B, B′:

m[O∆(A + B)] = m(O∆A) + m[A∆(A + B)] ⇒ O, A, A + B ∈ l91;

m(A∆B′) = m[A∆(A + B)] + m[(A + B)∆B′] ⇒ A, A + B, B′ ∈ l92;

m[(A + B)∆O] = m[(A + B)∆B′] + m(B′∆O) ⇒ A + B, B′, O ∈ l93;
m(B′∆A) = m(B′∆O) + m(O∆A) ⇒ B′, O, A ∈ l94.

10. For the points AB, B, A∆B, A′:

m[(AB)∆(A∆B)] = m[(AB)∆B] + m[B∆(A∆B)] ⇒ AB, B, A∆B ∈ l101;

m(B∆A′) = m[B∆(A∆B)] + m[(A∆B)∆A′] ⇒ B, A∆B, A′ ∈ l102;

m[(A∆B)∆(AB)] = m[(A∆B)∆A′] + m[A′∆(AB)] ⇒ A∆B, A′, AB ∈ l103;
m(A′∆B) = m[A′∆(AB)] + m[(AB)∆B] ⇒ A′, AB, B ∈ l104.

11. For the points O, B, A + B, A′:

m[O∆(A + B)] = m(O∆B) + m[B∆(A + B)] ⇒ O, B, A + B ∈ l111;

m(B∆A′) = m[B∆(A + B)] + m[(A + B)∆A′] ⇒ B, A + B, A′ ∈ l112;

m[(A + B)∆O] = m[(A + B)∆A′] + m[A′∆O] ⇒ A + B, A′, O ∈ l113;
m(A′∆B) = m(A′∆O) + m(O∆B) ⇒ A′, O, B ∈ l114.

12. For the points AB, B′, A∆B, A:

m[(AB)∆(A∆B)] = m[(AB)∆B′] + m[B′∆(A∆B)] ⇒ AB, B′, A∆B ∈ l121;

m(B′∆A) = m[B′∆(A∆B)] + m[(A∆B)∆A] ⇒ B′, A∆B, A ∈ l122;

m[(A∆B)∆(AB)] = m[(A∆B)∆A] + m[A∆(AB)] ⇒ A∆B, A, AB ∈ l123;
m(A∆B′) = m[A∆(AB)] + m[(AB)∆B′] ⇒ A, AB, B′ ∈ l124.

If the combination axiom of Hilbert’s axiom system [224, Section 2], claiming that
there exists only one line passing through two given points, is applied to this
group of identities, then we obtain the following paradoxical conclusion.
The relationships (1 through 12) imply that the following quartets of points belong to the
same lines:

1. AB, B, A + B, A ∈ l1
2. O, B′, A∆B, A′ ∈ l2
3. O, AB, A, A′ ∈ l3
4. O, AB, B, B′ ∈ l4
5. A, A′, A∆B, A + B ∈ l5
6. B, B′, A∆B, A + B ∈ l6
7. O, AB, A + B, A∆B ∈ l7
8. A′, A, B′, B ∈ l8
9. O, A, A + B, B′ ∈ l9
10. AB, B, A∆B, A′ ∈ l10
11. O, B, A + B, A′ ∈ l11
12. AB, B′, A∆B, A ∈ l12
Therefore, line l1 is identical to the lines l3 . . . l8 , l9 , l11 :

l1 ≡ l3 (since the points AB, A ∈ l1 and AB, A ∈ l3 );


l1 ≡ l4 (since the points AB, B ∈ l1 and AB, B ∈ l4 );
l1 ≡ l5 (since the points A, A + B ∈ l1 and A, A + B ∈ l5 );
l1 ≡ l6 (since the points B, A + B ∈ l1 and B, A + B ∈ l6 );
l1 ≡ l7 (since the points AB, A + B ∈ l1 and AB, A + B ∈ l7 );
l1 ≡ l8 (since the points A, B ∈ l1 and A, B ∈ l8 );
l1 ≡ l9 (since the points A, A + B ∈ l1 and A, A + B ∈ l9 );
l1 ≡ l11 (since the points B, A + B ∈ l1 and B, A + B ∈ l11 );
and the line l2 is identical to the lines l3 . . . l8, l10, l12:

l2 ≡ l3 (since the points O, A′ ∈ l2 and O, A′ ∈ l3);

l2 ≡ l4 (since the points O, B′ ∈ l2 and O, B′ ∈ l4);
l2 ≡ l5 (since the points A′, A∆B ∈ l2 and A′, A∆B ∈ l5);
l2 ≡ l6 (since the points B′, A∆B ∈ l2 and B′, A∆B ∈ l6);
l2 ≡ l7 (since the points O, A∆B ∈ l2 and O, A∆B ∈ l7);
l2 ≡ l8 (since the points A′, B′ ∈ l2 and A′, B′ ∈ l8);
l2 ≡ l10 (since the points A′, A∆B ∈ l2 and A′, A∆B ∈ l10);
l2 ≡ l12 (since the points B′, A∆B ∈ l2 and B′, A∆B ∈ l12).
This implies that all the vertices of the hexahedron O, AB, A′, B′, A, B, A∆B, A + B
belong to the same line l. However, that cannot be, inasmuch as the following
triplets of the points A′, B′, AB; A′, B′, A + B; A, B, A∆B; O, A, B; AB, A′, A + B;
AB, B′, A + B do not lie on the same line.
It is easy to see that the axiomatics of the metric space Ω has to differ from Hilbert’s
axiom system.
The following relationships of belonging of four points to the same circle ci in
R³ with metric (2.2.5) are inherent to the vertices of the hexahedron O, AB, A′, B′,
A, B, A∆B, A + B:
1. For the points AB, B, A + B, A:

d(A + B, AB )d(A, B ) = d(AB, B )d(A + B, A) + d(AB, A)d(A + B, B ) ⇒

⇒ AB, B, A + B, A ∈ c1 .

2. For the points O, B′, A∆B, A′:

d(A∆B, O)d(A′, B′) = d(B′, O)d(A∆B, A′) + d(A′, O)d(A∆B, B′) ⇒

⇒ O, B′, A∆B, A′ ∈ c2.

3. For the points O, AB, A, A′:

d(A, O)d(A′, AB) = d(AB, O)d(A, A′) + d(A, AB)d(A′, O) ⇒

⇒ O, AB, A, A′ ∈ c3.

4. For the points O, AB, B, B′:

d(B, O)d(B′, AB) = d(AB, O)d(B, B′) + d(B, AB)d(B′, O) ⇒

⇒ O, AB, B, B′ ∈ c4.

5. For the points A, A′, A∆B, A + B:

d(A + B, A′)d(A∆B, A) = d(A∆B, A′)d(A + B, A) +

+ d(A + B, A∆B)d(A, A′) ⇒ A, A′, A∆B, A + B ∈ c5.

6. For the points B, B′, A∆B, A + B:

d(A + B, B′)d(A∆B, B) = d(A∆B, B′)d(A + B, B) +

+ d(A + B, A∆B)d(B, B′) ⇒ B, B′, A∆B, A + B ∈ c6.

7. For the points O, AB, A + B, A∆B:

d(A + B, O)d(A∆B, AB ) = d(A + B, A∆B )d(AB, O)+


+d(A + B, AB )d(A∆B, O) ⇒ O, AB, A + B, A∆B ∈ c7 .

8. For the points A′, A, B′, B:

d(B, A′)d(A, B′) = d(B, B′)d(A, A′) + d(A, B)d(A′, B′) ⇒

⇒ A′, A, B′, B ∈ c8.

9. For the points O, A, A + B, B′:

d(A + B, O)d(A, B′) = d(A, O)d(A + B, B′) + d(B′, O)d(A + B, A) ⇒

⇒ O, A, A + B, B′ ∈ c9.

10. For the points AB, B, A∆B, A′:

d(A∆B, AB)d(A′, B) = d(AB, B)d(A∆B, A′) + d(AB, A′)d(A∆B, B) ⇒

⇒ AB, B, A∆B, A′ ∈ c10.

11. For the points O, B, A + B, A′:

d(A + B, O)d(A′, B) = d(B, O)d(A + B, A′) + d(A′, O)d(A + B, B) ⇒

⇒ O, B, A + B, A′ ∈ c11.

12. For the points AB, B′, A∆B, A:

d(A∆B, AB)d(A, B′) = d(AB, B′)d(A∆B, A) + d(AB, A)d(A∆B, B′) ⇒

⇒ AB, B′, A∆B, A ∈ c12.

Thus, the aforementioned quartets of the vertices of the hexahedron O, AB, A′, B′, A,
B, A∆B, A + B belong to the corresponding circles c1 . . . c12 in Euclidean space
R³, which will be identified with the lines in the space Ω, and all eight vertices of
the hexahedron belong to the same sphere Sp(OAB, RAB) in R³.
There is a relation of order on generalized Boolean algebra B(Ω), as on any
lattice, so for the elements A, B ∈ Ω the relation A ≤ B means that A = AB:
A ≤ B ⇔ A = AB. The relation of strict order A < B between the elements
A, B ∈ Ω means the equivalence: A < B ⇔ (A ≤ B) & (A ≠ B).
Definition 2.2.2. If Q1 and Q2 are the elements of generalized Boolean algebra
B(Ω), then the set [Q1, Q2] = {Xα : Xα ∈ Ω, Q1 ≤ Xα ≤ Q2} is called a closed
interval (or simply an interval).
Lemma 2.2.3. The elements Q1, Xα, Q2 of the interval [Q1, Q2], Xα ≠ Q1,2, lie on
the same line l in the metric space Ω:

Q1 , Xα , Q2 ∈ [Q1 , Q2 ] ⇒ Q1 , Xα , Q2 ∈ l ⇔ Q1 ≺ Xα ≺ Q2 .

Proof. For the elements Q1 , Xα , Q2 of the interval [Q1 , Q2 ], defined on generalized


Boolean algebra B(Ω) with a measure m, the following relations hold:

Q1 < Xα ⇒ Q1 = Q1 · Xα , Xα < Q2 ⇒ Xα = Xα · Q2 ,

so, the metric identities hold:

m(Q1 ∆Xα ) = m(Xα ) − m(Q1 ), m(Xα ∆Q2 ) = m(Q2 ) − m(Xα ).

According to these relationships, the metric identity, being the condition for
three points to belong to the same line, holds:

ρ(Q1 , Q2 ) = ρ(Q1 , Xα ) + ρ(Xα , Q2 ) = m(Q2 ) − m(Q1 ),

where ρ(Q1 , Q2 ) = m(Q1 ∆Q2 ) is metric between the elements Q1 , Q2 in the space
Ω.
Definition 2.2.3. The mapping ϕ associating every element α ∈ Φ with the el-
ement Aα ∈ Ω is called the indexed set {Aα }α∈Φ on generalized Boolean algebra
B(Ω) with a measure m: ϕ : α → Aα .
Definition 2.2.4. If the indexed set A = {Aα }α∈Φ is a subset of generalized
Boolean algebra B(Ω), whose elements are connected by a linear order, i.e., for an
arbitrary pair of the elements Aα , Aβ ∈ A, α ≤ β, the inequality Aα ≤ Aβ holds,
then the set A is a linearly ordered indexed set or indexed chain.
Theorem 2.2.1. For the elements {Aj }, j = 0, 1, . . . , n of linearly ordered indexed
set A = {Aj }, A ⊂ Ω, defined on generalized Boolean algebra B(Ω) with a measure
m, the metric identity holds:
m(A0∆An) = ∑_{j=1}^{n} m(A_{j−1}∆Aj).    (2.2.8)

Proof. For the elements of the linearly ordered set A = {Aj} defined on generalized
Boolean algebra B(Ω) with a measure m, the implication A_{j−1} ≤ Aj ⇔ A_{j−1} =
A_{j−1} · Aj and the metric identity hold:

m(A_{j−1}∆Aj) = m(Aj) − m(A_{j−1}).    (2.2.9)

Inserting the right part of the identity (2.2.9) into the sum on the right side of the
identity (2.2.8), we get the value of the sum by telescoping:

∑_{j=1}^{n} m(A_{j−1}∆Aj) = ∑_{j=1}^{n} [m(Aj) − m(A_{j−1})] = m(An) − m(A0) = m(A0∆An).

Based on Theorem 2.2.1, the condition (2.2.6) of three points belonging to the same
line in the space Ω can be extended to the case of an arbitrarily large number of points.
We consider that the elements {Aj} of the indexed set A = {Aj}, j = 0, 1, . . . , n lie
on the same line in the metric space Ω, if the metric identity holds (compare with [222]):

ρ(A0, An) = ∑_{j=1}^{n} ρ(A_{j−1}, Aj),    (2.2.10)

where ρ(Aα, Aβ) = m(Aα∆Aβ) is the metric in the space Ω.


Then Theorem 2.2.1 has the following corollary.
Corollary 2.2.1. The elements {Aj } of linearly ordered indexed set A = {Aj },
j = 0, 1, . . . , n, A ⊂ Ω belong to the same line l:

A = {Aj : ∀Aj ∈ l : A0 ≺ A1 ≺ . . . ≺ Aj ≺ . . . ≺ An }.
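
Identity (2.2.8) can likewise be verified for a finite chain under the counting measure (a sketch; the particular nested sets are arbitrary assumptions):

    def rho(A, B):
        return len(A ^ B)   # metric induced by the counting measure

    # A linearly ordered (nested) chain A0 <= A1 <= ... <= An of finite sets.
    chain = [set(), {1}, {1, 2}, {1, 2, 3, 4}, {1, 2, 3, 4, 7}]

    total = sum(rho(chain[j - 1], chain[j]) for j in range(1, len(chain)))
    assert total == rho(chain[0], chain[-1])   # identity (2.2.8): telescoping
    print("chain identity (2.2.8) holds; total =", total)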

Definition 2.2.5. If all the elements of the line l in metric space Ω form a partially
ordered set, then the line is called the line with partially ordered elements.
Definition 2.2.6. If all the elements of the line l in metric space Ω form a linearly
ordered set, then the line is called the line with linearly ordered elements.

Thus, in metric space Ω, we shall differentiate lines with partially and linearly
ordered elements.
The mutual situation of two distinct lines in the space Ω, in contrast with lines in
Euclidean space, is characterized by the following feature.

Example 2.2.1. Consider two linearly ordered indexed sets A = {Aj}, j =
0, 1, . . . , J, B = {Bi}, i = 0, 1, . . . , I on generalized Boolean algebra B(Ω):
A, B ⊂ Ω, whose subsets A∗ ⊂ A, B∗ ⊂ B are identical to each other in the
interval [Q0, QK]:

A∗ = {Aj : Aj = Qk, Aj ∈ A, j = k + NJ, k = 0, 1, . . . , K};
B∗ = {Bi : Bi = Qk, Bi ∈ B, i = k + NI, k = 0, 1, . . . , K},    (2.2.11)

where K, NJ, NI ∈ N are constants, so that K + NJ < J and K + NI < I.
Condition (2.2.11) implies that the intersection of the linearly ordered sets A = {Aj},
B = {Bi} is the linearly ordered set Q = {Qk}:

A ∩ B = Q = {Qk}, k = 0, 1, . . . , K.

Then, according to Corollary 2.2.1 of Theorem 2.2.1, the lines lA and lB, on
which the elements {Aj} of the set A and the elements {Bi} of the set B are
situated, respectively, intersect each other on the set Q:

lA ∩ lB = Q = {Qk}, k = 0, 1, . . . , K.

This example implies that two distinct lines lA and lB in the space Ω can have
an arbitrarily large number K + 1 of common points {Qk} belonging to the same
interval [Q0, QK]: {Qk} ⊂ [Q0, QK].

Example 2.2.2. Consider two linearly ordered sets A′ = {A′j}, j = 0, 1, . . . , J and
A″ = {A″i}, i = 0, 1, . . . , I defined in the interval [O, A] of generalized Boolean
algebra B(Ω), which are such that A′ ⊂ A, A″ ⊂ A and the limits of the sequences
of the sets {A′j} and {A″i} coincide with their largest and least elements, which are,
respectively, equal to:

A′ = {A′j : A′j ∈ [O, A], lim_{j→0} A′j = O & lim_{j→∞} A′j = A};
A″ = {A″i : A″i ∈ [O, A], lim_{i→0} A″i = O & lim_{i→∞} A″i = A}.    (2.2.12)

The condition (2.2.12) implies that the intersection of the sets A′ = {A′j} and
A″ = {A″i} contains two elements of generalized Boolean algebra B(Ω), O and A:
A′ ∩ A″ = {O, A}. Then, according to Corollary 2.2.1 of Theorem 2.2.1, the lines
l′ and l″, on which the elements {A′j} of the set A′ and the elements {A″i} of the
set A″ are situated, intersect at two points of the space Ω: l′ ∩ l″ = {O, A}.
This example implies that two distinct lines l′ and l″ in the space Ω, passing through
the points of the linearly ordered sets A′ = {A′j}, A″ = {A″i}, respectively: A′ ⊂ l′ and
A″ ⊂ l″, being subsets of the interval [O, A]: A′ ⊂ [O, A] and A″ ⊂ [O, A],
intersect at the two extreme points O, A of this interval.

Definition 2.2.7. Linearly ordered indexed set A = {Aα}α∈Φ is called an everywhere dense set if, for any Aα, Aβ ∈ A such that Aα < Aβ, an element Aγ ∈ A can be found such that Aα < Aγ < Aβ.

Definition 2.2.8. The element Q ∈ Ω is called the limit of everywhere dense linearly ordered indexed set A = {Aα}α∈Φ defined on generalized Boolean algebra B(Ω) with a measure m, A ⊂ Ω, if any neighborhood Uε(Q) of the point Q can be associated with an index αU ∈ Φ such that for all α > αU the condition Aα ∈ Uε(Q) holds:

Q = lim_{α→αQ} Aα ⇔ ∀ε > 0 : ∃αU : ∀α > αU : m(Aα∆Q) < ε.

Example 2.2.3. Consider two linearly ordered everywhere dense indexed sets A = {Aα}α∈Φ1 and B = {Bβ}β∈Φ2 defined in the intervals [A0, QA] and [B0, QB] of generalized Boolean algebra B(Ω), respectively: A ⊂ [A0, QA], B ⊂ [B0, QB], so that [A0, QA] ∩ [B0, QB] = [Q1, Q2], A0 < Q1 & B0 < Q1, Q2 < QA & Q2 < QB, and the limits of the sequences A and B coincide with the extreme points of the interval [Q1, Q2], respectively:

lim_{α→α1} Aα = lim_{β→β1} Bβ = Q1;
lim_{α→α2} Aα = lim_{β→β2} Bβ = Q2. (2.2.13)

Identities (2.2.13) imply that the intersection of linearly ordered sets A = {Aα}α∈Φ1 and B = {Bβ}β∈Φ2 contains two elements of generalized Boolean algebra B(Ω), Q1 and Q2: A ∩ B = {Q1, Q2}.

Then, according to Corollary 2.2.1 of Theorem 2.2.1, the lines lA and lB, on which the elements {Aα}α∈Φ1, {Bβ}β∈Φ2 of the sets A, B are situated, respectively, intersect at two points of the space Ω: lA ∩ lB = {Q1, Q2}. □

This example implies that two distinct lines lA and lB , passing through the
points of linearly ordered sets A = {Aα }α∈Φ1 , B = {Bβ }β∈Φ2 , respectively: {Aα } ⊂
lA , {Bβ } ⊂ lB , can intersect at two or more points of the space Ω.

2.2.3 Notions of Sheet and Plane in Metric Space Built upon Generalized Boolean Algebra with a Measure
To characterize geometrical relations between a pair and a triplet of elements in
metric space Ω not connected by the relation of an order, we introduce extra-
axiomatic definitions of the notions of sheet and plane, respectively.

Definition 2.2.9. In metric space Ω, sheet LAB is defined as a geometrical image of subalgebra B(A + B) of generalized Boolean algebra B(Ω), A, B ∈ Ω, represented by the three-element partition:

A + B = (AB) + (A − B) + (B − A),

where the elements AB, A − B, B − A are pairwise orthogonal and differ from the null element:

AB⊥(A − B)⊥(B − A)⊥AB ⇔ (AB)·(A − B) = O, (A − B)·(B − A) = O, (B − A)·(AB) = O;
AB ≠ O, A − B ≠ O, B − A ≠ O.

The coordinates of the points O, AB, A − B, B − A, A, B, A∆B, A + B of the sheet LAB in R³ are equal to:

O (0, 0, 0), AB (0, 0, m(AB)), A − B (m(A − B), 0, 0);
B − A (0, m(B − A), 0), A (m(A − B), 0, m(AB)), B (0, m(B − A), m(AB));
A∆B (m(A − B), m(B − A), 0), A + B (m(A − B), m(B − A), m(AB)).

The points O, AB, A − B, B − A, A, B, A∆B, A + B of the sheet LAB belong to the sphere Sp(OAB, RAB) in R³, so that the center OAB and radius RAB of the sphere are determined by Equations (2.2.2) and (2.2.3), respectively.
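As an illustration (an added sketch, not from the original), these coordinates can be computed directly for finite sets with the counting measure; all eight points are then vertices of a rectangular box in R³ and hence lie on a common sphere. The center below is taken at the box center, an assumption consistent with the roles of OAB and RAB; the concrete sets A, B are hypothetical.

```python
# Sketch: counting measure m = len; A, B finite sets with AB, A - B, B - A nonnull.
A = {1, 2, 3, 4}
B = {3, 4, 5, 6, 7}
m = len
ab, a_b, b_a = A & B, A - B, B - A          # AB, A - B, B - A (pairwise disjoint)

coords = {
    "O":    (0,      0,      0),
    "AB":   (0,      0,      m(ab)),
    "A-B":  (m(a_b), 0,      0),
    "B-A":  (0,      m(b_a), 0),
    "A":    (m(a_b), 0,      m(ab)),
    "B":    (0,      m(b_a), m(ab)),
    "AdB":  (m(a_b), m(b_a), 0),            # A Δ B
    "A+B":  (m(a_b), m(b_a), m(ab)),
}

# Assumed center: midpoint of the box spanned by the eight points.
cx, cy, cz = m(a_b) / 2, m(b_a) / 2, m(ab) / 2
R2 = cx**2 + cy**2 + cz**2                   # squared circumscribed radius
for x, y, z in coords.values():
    assert (x - cx)**2 + (y - cy)**2 + (z - cz)**2 == R2
```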
Definition 2.2.10. In metric space Ω, sheet LAB is a set of points which contains: two elements A, B ∈ Ω that are not connected by a relation of order (A ≠ AB & B ≠ AB); the lines passing through these two given points; and every line that passes through an arbitrary pair of points on the lines passing through the two given points.
By Definitions 2.2.9 and 2.2.10, the algebraic and geometric essences of a sheet
are established.
Definition 2.2.11. Two points A and B are called generator points of the sheet
LAB in metric space Ω, if they are not connected by relation of an order on gener-
alized Boolean algebra B(Ω): A 6= AB & B 6= AB.
Definitions 2.2.9 and 2.2.10 imply that the sheet LAB given by generator points
A, B contains the points O, AB, A − B, B − A, A, B, A∆B, A + B, and also the
points {Xα } of a linearly ordered indexed set X lying on the line passing through
the extreme points O, X of the interval [O, X ], where X = Y ∗ Z, Y = A ∗ B,
Z = A ∗ B, and the asterisk ∗ is an arbitrary signature operation of generalized
Boolean algebra B(Ω).
To denote one-to-one correspondence between an algebraic notion W and its
geometric image Geom(W ), we shall write:

W ↔ Geom(W ). (2.2.14)

For instance, the correspondence between the sheet LAB and subalgebra B(A + B) of generalized Boolean algebra B(Ω) will be denoted as LAB ↔ A + B.

Applying the correspondence (2.2.14), one can define the relationships between the objects of metric space Ω with some given properties. For instance, the intersection of two sheets LAB and LCD, determined by the generator points A, B and C, D, respectively, is the sheet with generator points (A + B)C and (A + B)D, or the sheet with generator points (C + D)A and (C + D)B:

LAB ∩ LCD ↔ (A + B)(C + D) = (A + B)C + (A + B)D ↔ L(A+B)C,(A+B)D;
LAB ∩ LCD ↔ (A + B)(C + D) = (C + D)A + (C + D)B ↔ L(C+D)A,(C+D)B.
Using the last relationship, if the sheet LAB is a subset of the sheet LCD, we obtain:

LAB ⊂ LCD ↔ A + B ⊂ C + D ⇒ (A + B)(C + D) = A + B ↔ LAB,

so the intersection is the sheet LAB: LAB ∩ LCD = LAB.

If the points A and C of the sheets LAB and LCD are identically equal, A ≡ C, then the intersection of these sheets is the sheet LA,BD with generator points A and BD:

LAB ∩ LCD ↔ (A + B)(C + D) = (A + B)(A + D) = A + AD + AB + BD = A + BD ↔ LA,BD,

i.e., A ≡ C ⇒ LAB ∩ LCD = LA,BD.

But if the points A and C of the sheets LAB and LCD are identically equal, A ≡ C, and the elements B and D are orthogonal, BD = O, then the intersection of these sheets is the element A:

LAB ∩ LCD ↔ (A + B)(C + D) = (A + B)(A + D) = A + AD + AB + BD = A + BD = A,

i.e., (A ≡ C) & (BD = O) ⇒ LAB ∩ LCD = A.
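These intersection rules can be verified mechanically. In the following added sketch (an illustration, not from the original), addition and multiplication of B(Ω) become set union and intersection, and the concrete sets are hypothetical, chosen so that A ≡ C and BD = O.

```python
# Sketch: + is set union, the product is set intersection, O is the empty set.
A = {1, 2}
B = {3, 4}
C = A                                    # A ≡ C
D = {5, 6}                               # chosen so that BD = O

lhs = (A | B) & (C | D)                  # (A + B)(C + D)
assert lhs == A | (B & D)                # = A + BD, the sheet L_{A,BD}
assert (B & D) == set() and lhs == A     # with BD = O the intersection is the point A
```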
Definition 2.2.12. In metric space Ω, plane αABC passing through three points A, B, C, which are not pairwise connected by relation of order:

A ≠ AB & B ≠ AB & B ≠ BC & C ≠ CB & A ≠ AC & C ≠ AC,

is a geometric image of subalgebra B(A + B + C) of generalized Boolean algebra B(Ω).
Definition 2.2.13. In metric space Ω, plane αABC is a set of points which contains: three arbitrary points A, B, C not lying on the same line; the lines passing through any pair of the three points A, B, C; and any line that passes through an arbitrary pair of points on the lines passing through any pair of the three given points A, B, C.

By Definitions 2.2.12 and 2.2.13, the algebraic and geometric essences of a plane
are established, respectively.

Definition 2.2.14. Three points A, B, C are called generator points of the plane
αABC in metric space Ω, if they are not pairwise connected by relation of an order:

A 6= AB & B 6= AB & B 6= BC & C 6= CB & A 6= AC & C 6= AC.

Using the relation (2.2.14), we obtain that the intersection of two planes αABC and αDEF, where the points A, B, C and D, E, F are not connected with each other by relation of order, is the plane with generator points A(D + E + F), B(D + E + F), C(D + E + F), or the plane with generator points D(A + B + C), E(A + B + C), F(A + B + C):

αABC ∩ αDEF ↔ (A + B + C)(D + E + F) =
 = A(D + E + F) + B(D + E + F) + C(D + E + F) ↔ αA(D+E+F),B(D+E+F),C(D+E+F);
 = D(A + B + C) + E(A + B + C) + F(A + B + C) ↔ αD(A+B+C),E(A+B+C),F(A+B+C).
Using the last relation, if the plane αABC is a subset of the plane αDEF, we obtain:

αABC ⊂ αDEF ↔ A + B + C ⊂ D + E + F ⇒ (A + B + C)(D + E + F) = A + B + C,

so the intersection of these planes is the plane αABC:

αABC ⊂ αDEF ⇒ αABC ∩ αDEF = αABC.

If the points A, B and D, E of the planes αABC, αDEF are pairwise identical, A ≡ D, B ≡ E, then the intersection of these planes is the plane αA,B,CF with generator points A, B, CF:

A ≡ D, B ≡ E ⇒ αABC ∩ αDEF ↔ (A + B + C)(D + E + F) = (A + B + C)(A + B + F) = A + B + CF ↔ αA,B,CF,

i.e., A ≡ D, B ≡ E ⇒ αABC ∩ αDEF = αA,B,CF. If the points A, B and D, E of the planes αABC, αDEF are pairwise identical, A ≡ D, B ≡ E, and the elements C and F are orthogonal, CF = O, then the intersection of these planes is the sheet LAB with generator points A, B:

(A ≡ D, B ≡ E) & (CF = O) ⇒ αABC ∩ αDEF ↔ (A + B + C)(D + E + F) = (A + B + C)(A + B + F) = A + B + CF = A + B,

i.e., (A ≡ D, B ≡ E) & (CF = O) ⇒ αABC ∩ αDEF = LAB.

But if the points A and D of the planes αABC, αDEF are identical, A ≡ D, and the elements B, C and E + F are pairwise orthogonal, B(E + F) = O, C(E + F) = O, then the intersection of these planes is the element A:

(A ≡ D) & (B(E + F) = O) & (C(E + F) = O) ⇒ αABC ∩ αDEF ↔
↔ (A + B + C)(D + E + F) = A(A + E + F) + B(A + E + F) + C(A + E + F) =
= A + BA + CA + B(E + F) + C(E + F) = A,

i.e., (A ≡ D) & (B(E + F) = O) & (C(E + F) = O) ⇒ αABC ∩ αDEF = A.

Example 2.2.4. Let the elements A and D of the planes αABC, αDEF be identical, A ≡ D, and let the elements B, C and E + F be pairwise orthogonal: B(E + F) = O, C(E + F) = O. Then the intersection of these planes is the element A:

(A ≡ D) & (B(E + F) = O) & (C(E + F) = O) ⇒ αABC ∩ αDEF = A. □

Example 2.2.5. Consider the sheets LAB, LAE with the generator points A, B and A, E belonging to the planes αABC, αAEF, respectively. Let the lines lAB, lAE passing through the generator points A, B and A, E of these sheets, respectively, intersect each other at the point A, so that:

lAB ∩ lAE = A;
A, B ∈ lAB ⊂ LAB ⊂ αABC;
A, E ∈ lAE ⊂ LAE ⊂ αAEF. □

This example implies that two planes αABC , αAEF , two sheets LAB , LAE , and two
lines lAB , lAE belonging to these planes αABC , αAEF , respectively, can intersect
each other in a single point in metric space Ω.

Example 2.2.6. Consider three planes {αi}, i = 1, 2, 3 in metric space Ω, each of them with the generator points {Ai, Bi, Ci}, i = 1, 2, 3 connected by relation of order: A1 < A2 < A3, B1 < B2 < B3, C1 < C2 < C3. Then, according to Corollary 2.2.1 of Theorem 2.2.1, the points {Ai}, {Bi}, {Ci}, i = 1, 2, 3 lie on the lines lA, lB, lC, respectively: Ai ∈ lA, Bi ∈ lB, Ci ∈ lC. It should be noted that A1 < A2; hence, the points A1, A2 belong to the plane α2: A1, A2 ∈ α2. However, A2 < A3; hence, the point A3 does not belong to the plane α2. This example implies that in metric space Ω, among three points A1, A2, A3 belonging to the same line lA, there are points A1, A2 that belong to the plane α2, while the point A3 does not belong to this plane. This is explained by the fact that in metric space Ω, the axiom of "absolute geometry" stating that "if two points of a line belong to a plane, then all the points of this line belong to this plane" [224, Section 2] does not hold. □

Example 2.2.7. Consider, as in the previous example, three planes {αi}, i = 1, 2, 3 in metric space Ω, each of them with generator points {Ai, Bi, Ci}, i = 1, 2, 3 connected with each other by relation of order: A1 < A2 < A3, B1 < B2 < B3, C1 < C2 < C3. Every pair of the generator points {Ai, Bi}, i = 1, 2, 3 determines the sheets {LABi} belonging to the corresponding planes {αi}. The points {Ai}, {Bi}, {Ci}, i = 1, 2, 3 lie on the lines lA, lB, lC, respectively: Ai ∈ lA, Bi ∈ lB, Ci ∈ lC. Let also the lines lA, lB intersect in the point AB: lA ∩ lB = AB, so that AB = AiBi, i = 1, 2, 3. It should be noted that AB < A1 < A2 and AB < B1 < B2; hence, the lines lA2, lB2 passing through the points AB, A1, A2 and AB, B1, B2, respectively (AB, A1, A2 ∈ lA2; AB, B1, B2 ∈ lB2), belong to the plane α2, so that lA2 ∩ lB2 = AB. Therefore, two crossing lines lA2, lB2 belong to the same plane α2; however, two crossing lines lA, lB do not belong to the same plane α2. In metric space Ω, the corollary from the axioms of connection of "absolute geometry" that "two crossing lines belong to the same plane" [224, Section 2] does not hold. Thus, in metric space Ω, two crossing lines may or may not belong to the same plane. □

If the generator points {Ai , Bi , Ci } of the planes {αi } are connected by relation
of order: Ai−1 < Ai < Ai+1 , Bi−1 < Bi < Bi+1 , Ci−1 < Ci < Ci+1 and, there-
fore, lie on the same line respectively: Ai−1 , Ai , Ai+1 ∈ lA ; Bi−1 , Bi , Bi+1 ∈ lB ;
Ci−1 , Ci , Ci+1 ∈ lC , then we shall consider that the planes αi−1 ,αi ,αi+1 belong to
one another: αi−1 ⊂ αi ⊂ αi+1 .
If the generator points {Ai , Bi } of the sheets {Li } are connected by relation of
order: Ai−1 < Ai < Ai+1 , Bi−1 < Bi < Bi+1 and, therefore, lie on the same line
respectively: Ai−1 , Ai , Ai+1 ∈ lA ; Bi−1 , Bi , Bi+1 ∈ lB , then we shall consider that
the sheets Li−1 ,Li ,Li+1 belong to one another: Li−1 ⊂ Li ⊂ Li+1 .
If the generator points {Ai , Bi , Ci } of the planes {αi } are connected by relation
of order: Ai−1 < Ai < Ai+1 , Bi−1 < Bi < Bi+1 , Ci−1 < Ci < Ci+1 and, there-
fore, lie on the same line, respectively: Ai−1 , Ai , Ai+1 ∈ lA ; Bi−1 , Bi , Bi+1 ∈ lB ;
Ci−1 , Ci , Ci+1 ∈ lC , then we shall consider that the lines lAi−1 , lAi , lAi+1 :

lAi = lA ∩ αi = {Aα : (∀Aα ∈ Ai ) & (∀Aα ∈ lA )},

belong to one another, respectively: lAi−1 ⊂ lAi ⊂ lAi+1 .

2.2.4 Axiomatic System of Metric Space Built upon Generalized Boolean Algebra with a Measure
To formulate the axiomatic system of metric space Ω built upon generalized Boolean
algebra B(Ω) with a measure m, we shall use Hilbert’s idea that every space is a set
of the objects of three types—points, lines, and planes, so that there are relations of
mutual containment (connection or incidence); betweenness relations (i.e., relations
of an order); relations of congruence, continuity, and parallelism, and descriptions of
these relations are given by axioms [224, Section 1]. Consider the axiomatic system
of metric space Ω starting from the properties of its main objects — points, lines,
sheets, and planes.

Axioms of connection (containment)

A1.1. There exists at least one line passing through two given points.

A1.2. There exist at least three points that belong to every line.

A1.3. There exist at least three points that do not belong to the same line.

A1.4. Two crossing lines have at least one point in common.

A1.5. There exists only one sheet passing through the null element and two points that do not lie with the null element on the same line.

A1.6. A null element and at least two points belong to each sheet.

A1.7. There exist at least three points that do not belong to the same sheet and differ from the null element.

A1.8. Two different crossing sheets have at least one point in common, which differs from the null element.

A1.9. There exists only one plane passing through the null element and three points that do not lie on the same sheet.

A1.10. A null element and at least three points belong to every plane.

A1.11. There exist at least four points that do not belong to the same plane and differ from the null element.

A1.12. Two different crossing planes have at least one point in common that differs from the null element.

Corollaries from axioms of connection

C1.1. There exists only one plane passing through a sheet and a point that does not belong to it (from A1.9 and A1.5).

C1.2. There exists only one plane passing through two distinct sheets crossing in a point which differs from the null element (from A1.9 and A1.5).

Axioms of order

A2.1. Three points A, B, X of metric space Ω belong to some line l, and X is situated between the points A and B, if the metric identity holds:

ρ(A, B) = ρ(A, X) + ρ(X, B) ⇒ A ≺ X ≺ B (or A ≻ X ≻ B),

where ρ(A, B) = m(A∆B) is a metric between the elements A, B in the space Ω.

A2.2. Among any three points on a line, there exists no more than one point lying between the other two.

A2.3. Two elements A, B of metric space Ω, which are not connected by relation of order ((A ≠ AB) & (B ≠ AB)), lie on the same line l1 along with the elements AB and A + B: A, B, AB, A + B ∈ l1, and also lie on the same line l2 along with the elements A − B and B − A: A, B, B − A, A − B ∈ l2, so that the following relations of geometric order hold:

∀A, B : (A ≠ AB) & (B ≠ AB) ⇒ A, B, AB, A + B ∈ l1:
AB ≺ A ≺ A + B ≺ B ≺ AB or AB ≻ A ≻ A + B ≻ B ≻ AB;
∀A, B : (A ≠ AB) & (B ≠ AB) ⇒ A, B, B − A, A − B ∈ l2:
A ≺ B ≺ B − A ≺ A − B ≺ A or A ≻ B ≻ B − A ≻ A − B ≻ A.

A2.4. The elements {Aj} of linearly ordered indexed set A = {Aj}, j = 0, 1, . . . , n, Aj < Aj+1, in metric space Ω lie on the same line l, so that the metric identity holds:

ρ(A0∆An) = Σ_{j=1}^{n} ρ(Aj−1∆Aj) ⇔ A0 ≺ A1 ≺ . . . ≺ Aj ≺ . . . ≺ An ⇔ {Aj} ⊂ l.

Definition 2.2.15. In the space Ω, triangle ∆ABC is a set of the elements belonging to the union of intersections of each of three sheets LAB, LBC, LCA of the plane αABC with the collections of the lines {l^i_AB}, {l^j_BC}, {l^k_CA} passing through the generator points A, B; B, C; C, A of these sheets, respectively:

∆ABC = [LAB ∩ (∪_i l^i_AB)] ∪ [LBC ∩ (∪_j l^j_BC)] ∪ [LCA ∩ (∪_k l^k_CA)].

Definition 2.2.16. Angle ∠B∆ABC of the triangle ∆ABC in the space Ω is a set of the elements belonging to the union of intersections of each of two sheets LAB, LBC of the plane αABC with the collections of the lines {l^i_AB}, {l^j_BC} passing through the generator points A, B; B, C of these sheets, respectively:

∠B∆ABC = [LAB ∩ (∪_i l^i_AB)] ∪ [LBC ∩ (∪_j l^j_BC)].

Axiom of congruence

A3.1. If, in a group G of mappings of the space Ω into itself, there exists the mapping g ∈ G preserving a measure m, and at the same time the points A, B, C ∈ Ω, determining the plane αABC and pairwise determining the sheets LAB, LBC, LCA, are mapped into the points A′, B′, C′ ∈ Ω; the triangle ∆ABC is mapped into the triangle ∆A′B′C′; the sheets LAB, LBC, LCA are mapped into the sheets LA′B′, LB′C′, LC′A′; and the plane αABC is mapped into the plane αA′B′C′, respectively:

g: A → A′, B → B′, C → C′; ∆ABC → ∆A′B′C′;
LAB → LA′B′, LBC → LB′C′, LCA → LC′A′; αABC → αA′B′C′,

then the corresponding sheets, and also the planes, triangles, and angles formed by the sheets, are congruent:

LAB ≅ LA′B′, LBC ≅ LB′C′, LCA ≅ LC′A′;
αABC ≅ αA′B′C′; ∆ABC ≅ ∆A′B′C′;
∠A∆ABC ≅ ∠A′∆A′B′C′, ∠B∆ABC ≅ ∠B′∆A′B′C′, ∠C∆ABC ≅ ∠C′∆A′B′C′.

Corollary from axiom of congruence

C3.1. If triangles ∆ABC and ∆A′B′C′ are congruent in the space Ω, then there exists a mapping f ∈ G preserving a measure m, which maps the triangle ∆ABC into the triangle ∆A′B′C′:

f: ∆ABC → ∆A′B′C′.

Axiom of continuity

A4.1 (Cantor's axiom). If, within the interval X∗ = [O, X], X∗ ⊂ Ω:

X∗ = {x : O ≤ x ≤ X},

there is an infinite system of intervals {[ai, bi]}, where each following interval is contained within the previous one:

O < ai < ai+1 < bi+1 < bi < X,

and at the same time, on the set X∗, there is no interval that lies within all the intervals of the system, then there exists exactly one element that belongs to all these intervals.

Axioms of parallels

Axioms of parallels for a sheet

A5.1 (a). In a given sheet LAB, through a point C ∈ LAB that does not belong to a given line lAB passing through the points A, B, there can be drawn at least one line lC not crossing the given line lAB, under the condition that the point C is an element of linearly ordered indexed set {Cγ}γ∈Γ: C ∈ {Cγ}γ∈Γ.

A5.2 (a). In a given sheet LAB, through a point C ∈ LAB that does not belong to a given line lAB passing through the points A, B, there can be drawn only one line lC not crossing the given line lAB, under the condition that the point C is not an element of linearly ordered indexed set {Cγ}γ∈Γ: C ∉ {Cγ}γ∈Γ.

A5.3 (a). In a given sheet LX, through a point C ∈ LX that does not belong to a given line lAB passing through the points A, B, one cannot draw a line not crossing the given line lAB, under the condition that the points A, B belong to an interval [O, X]: A, B ∈ [O, X], so that:

O < A < B < X ⇒ O, A, B, X ∈ lAB,

and the point C is not an element of linearly ordered indexed set {Cγ}γ∈Γ, C ∉ {Cγ}γ∈Γ, while C takes values from the set {B∆A, X∆B, X∆A, X∆B∆A}.

Axioms of parallels for a plane

A5.1 (b). In a given plane αABC, through a point C that does not belong to a given line lAB passing through the points A, B, there can be drawn at least one line lC not crossing the given line lAB, under the condition that the point C and the line lAB belong to distinct sheets of the plane αABC:

(lAB ∈ LAB) & [(C ∈ LBC) or (C ∈ LCA)];
LAB ≠ LBC ≠ LCA ≠ LAB.

A5.2 (b). In a given plane αABC, through a point C that does not belong to a given line lAB passing through the points A, B, there can be drawn only one line lC not crossing the given line lAB, under the condition that the point C is not an element of linearly ordered indexed set {Cγ}γ∈Γ, C ∉ {Cγ}γ∈Γ, and along with the line lAB it belongs to the same sheet LAB.

A5.3 (b). In a given plane αABC, through a point C that does not belong to a given line lAB passing through the points A, B, one cannot draw a line not crossing the given line lAB, under the condition that the points A, B belong to an interval [O, X]: A, B ∈ [O, X], so that:

O < A < B < X ⇒ O, A, B, X ∈ lAB,

and the point C is not an element of linearly ordered indexed set {Cγ}γ∈Γ, C ∉ {Cγ}γ∈Γ, while C takes values from the set {B∆A, X∆B, X∆A, X∆B∆A}.

The given axiomatic system of the space Ω, built upon generalized Boolean algebra B(Ω) with a measure m, implies that the axioms of connection and the axioms of parallels impose essentially weaker constraints than the axioms of the analogous groups of Euclidean space. Thus, it may be assumed that the geometry of generalized Boolean algebra B(Ω) with a measure m contains in itself some other geometries. In particular, the axioms of parallels, both for a sheet and for a plane, contain the axioms of parallels of hyperbolic, elliptic, and parabolic (Euclidean) geometries.

2.2.5 Metric and Trigonometrical Relationships in Space Built upon Generalized Boolean Algebra with a Measure
Consider metric and trigonometrical relationships in the space built upon generalized Boolean algebra B(Ω) with a measure m. The distinction between the metric (2.2.1) of the space Ω built upon generalized Boolean algebra B(Ω) with a measure m and the metric of Euclidean space predetermines the presence in the space Ω of geometric properties that differ from the properties of Euclidean space. The axiom A1.1 from the axioms of connection of the geometry of generalized Boolean algebra B(Ω) with a measure m, asserting that "there exists at least one line passing through two given points", does not, however, mean that the distance between points A and B is an arbitrary quantity. Metric ρ(A, B) for a pair of arbitrary points A and B of the space Ω is a strictly determined quantity and does not depend on the line, passing through the points A and B, along which this metric is evaluated. This statement poses a question: what should be considered as the sides and the angles of a triangle built upon three arbitrary points A, B, C of the space Ω, if through two of the triangle's vertices there can be drawn at least one line? The answer to this question is contained in Definitions 2.2.15 and 2.2.16.
The main metric relationships of a triangle ∆ABC of the space Ω will be obtained on the basis of the following argument. For an arbitrary triplet of the elements A, B, C in the space Ω, we denote:

a = A − (B + C ), b = B − (A + C ), c = C − (A + B ), d = ABC ; (2.2.15)

a0 = BC − ABC, b0 = AC − ABC, c0 = AB − ABC. (2.2.16)


Then:
m(A∆B ) = m(a) + m(b) + m(a0 ) + m(b0 ); (2.2.17)
m(B ∆C ) = m(b) + m(c) + m(b0 ) + m(c0 ); (2.2.18)
m(C ∆A) = m(c) + m(a) + m(c0 ) + m(a0 ); (2.2.19)
m[(A∆C )(C ∆B )] = m(c) + m(c0 ); (2.2.20)
m[(B ∆A)(A∆C )] = m(a) + m(a0 ); (2.2.21)
m[(C ∆B )(B ∆A)] = m(b) + m(b0 ). (2.2.22)

Theorem 2.2.2. Cosine theorem of the space Ω. For a triangle ∆ABC in the space
Ω, the following relationships hold:

m(A∆B ) = m(A∆C ) + m(C ∆B ) − 2m[(A∆C )(C ∆B )]; (2.2.23)

m(B ∆C ) = m(B ∆A) + m(A∆C ) − 2m[(B ∆A)(A∆C )]; (2.2.24)


m(C ∆A) = m(C ∆B ) + m(B ∆A) − 2m[(C ∆B )(B ∆A)]. (2.2.25)
Proof of the theorem follows directly from Equations (2.2.17) through (2.2.22).
It should be noted that the identity holds:

m[(A∆C) + (C∆B)] = m[(B∆A) + (A∆C)] = m[(C∆B) + (B∆A)] = m(A + B + C) − m(ABC), (2.2.26)

where m(A + B + C) − m(ABC) = p(A, B, C) is the half-perimeter of triangle ∆ABC:

p(A, B, C) = [m(A∆B) + m(B∆C) + m(C∆A)]/2 = m(a) + m(b) + m(c) + m(a′) + m(b′) + m(c′).


To evaluate the proximity measure of the sides AC and CB of triangle ∆ABC, an angular measure ϕAB may be introduced that possesses the following properties:

cos²ϕAB = m[(A∆C)(C∆B)] / m[(A∆C) + (C∆B)] =
 = [m(c) + m(c′)] / [m(a) + m(b) + m(c) + m(a′) + m(b′) + m(c′)]; (2.2.27)

sin²ϕAB = m[A∆B] / m[(A∆C) + (C∆B)] =
 = [m(a) + m(b) + m(a′) + m(b′)] / [m(a) + m(b) + m(c) + m(a′) + m(b′) + m(c′)]; (2.2.28)

cos²ϕAB + sin²ϕAB = 1.
Theorem 2.2.3. Sine theorem of the space Ω. For a triangle ∆ABC in the space Ω, the following relationship holds:

m[A∆B]/sin²ϕAB = m[B∆C]/sin²ϕBC = m[C∆A]/sin²ϕCA = m(A + B + C) − m(ABC) = p(A, B, C). (2.2.29)

Proof of the theorem follows directly from Equations (2.2.26) and (2.2.28).
Theorem 2.2.4. Theorem on cos-invariant of the space Ω. For a triangle ∆ABC in the space Ω, the identity that determines the cos-invariant of angular measures of this triangle holds:

cos²ϕAB + cos²ϕBC + cos²ϕCA = 1. (2.2.30)

Proof of the theorem follows from Equations (2.2.20) through (2.2.22) and (2.2.26), and also from Equation (2.2.27).

There is a two-sided inequality that follows from the equality (2.2.30) and determines constraint relationships between the angles of a triangle in the space Ω:

3 arccos(1/√3) ≤ ϕAB + ϕBC + ϕCA ≤ 3π + 3 arccos(1/√3). (2.2.31)
According to (2.2.31), one can single out four essential intervals of the domain of definition of a triangle angle sum in the space Ω that determine the belonging of a triangle to a space with hyperbolic, Euclidean, elliptic, and other geometries, respectively:

3 arccos(1/√3) ≤ ϕAB + ϕBC + ϕCA < π (hyperbolic space);
ϕAB + ϕBC + ϕCA = π (Euclidean space);
π < ϕAB + ϕBC + ϕCA < 3π (elliptic space);
3π ≤ ϕAB + ϕBC + ϕCA ≤ 3π + 3 arccos(1/√3).

Thus, the commonality of the properties of the space Ω built upon generalized Boolean algebra B(Ω) with a measure m extends over three well studied geometries: hyperbolic, Euclidean, and elliptic.
Theorem 2.2.5. Theorem on sin-invariant of the space Ω. For a triangle ∆ABC in the space Ω, the identity that determines the sin-invariant of angular measures of this triangle holds:

sin²ϕAB + sin²ϕBC + sin²ϕCA = 2. (2.2.32)

Proof of the theorem follows from Equations (2.2.17) through (2.2.19) and (2.2.26), and also from Equation (2.2.28).
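All four statements above are straightforward to verify numerically. The sketch below is an added illustration (not from the original): it takes finite random sets under the counting measure, with the sum, product, and symmetric difference of B(Ω) realized as set union, intersection, and symmetric difference, and checks the cosine theorem (2.2.23), the sine theorem (2.2.29), and the invariants (2.2.30) and (2.2.32).

```python
# Sketch: finite sets, counting measure m = len; Δ is ^, product is &, sum is |.
import random

random.seed(1)
m = len
U = range(40)
A, B, C = (set(random.sample(U, 20)) for _ in range(3))

def cos2(x, y, z):   # cos^2 of phi_xy with vertex z, as in (2.2.27)
    return m((x ^ z) & (z ^ y)) / m((x ^ z) | (z ^ y))

def sin2(x, y, z):   # sin^2 of phi_xy with vertex z, as in (2.2.28)
    return m(x ^ y) / m((x ^ z) | (z ^ y))

# cosine theorem (2.2.23)
assert m(A ^ B) == m(A ^ C) + m(C ^ B) - 2 * m((A ^ C) & (C ^ B))
# sine theorem (2.2.29): the common value is the half-perimeter p(A, B, C)
p = m(A | B | C) - m(A & B & C)
assert abs(m(A ^ B) / sin2(A, B, C) - p) < 1e-9
# cos-invariant (2.2.30) and sin-invariant (2.2.32)
assert abs(cos2(A, B, C) + cos2(B, C, A) + cos2(C, A, B) - 1) < 1e-9
assert abs(sin2(A, B, C) + sin2(B, C, A) + sin2(C, A, B) - 2) < 1e-9
```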
Theorem 2.2.6. Function (2.2.28) is a metric.

Proof. Write an obvious inequality:

2 sin²ϕCA ≤ 2. (2.2.33)

According to the identity (2.2.32), we substitute for 2 in the right part of the inequality (2.2.33) the sum of the squared sines of the triangle's angles:

sin²ϕAB + sin²ϕBC + sin²ϕCA ≥ 2 sin²ϕCA.

The next inequality follows from the obtained one:

sin²ϕAB + sin²ϕBC ≥ sin²ϕCA. (2.2.34)

Equation (2.2.28) implies that sin²ϕAB = 0 only if m(A∆B) = 0, i.e., if A ≡ B:

A ≡ B ⇒ sin²ϕAB = 0. (2.2.35)

Taking into account the symmetry of the function sin²ϕAB = sin²ϕBA and also the properties (2.2.34) and (2.2.35), it is easy to conclude that the function (2.2.28) satisfies all the axioms of the metric.

Nevertheless, it is too problematic to use the function (2.2.28) as a metric in the space Ω because of the presence of the third element C. It is possible to overcome this problem by introducing the following metric in the space Ω:

µ(A, B) = m(A∆B) / m(A + B). (2.2.36)

Formally, this result could be obtained from (2.2.28) by letting C = O, where O is a null element of the space Ω. Thus, while analyzing the metric relationships in a plane in the space Ω, instead of a triangle ∆ABC, one should consider a tetrahedron OABC. Metric (2.2.36) was introduced in [225].
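For finite sets with the counting measure, the normalized metric (2.2.36) coincides with the classical Jaccard distance 1 − m(AB)/m(A + B). The sketch below is an added illustration (not from the original, with hypothetical random sets) checking the triangle inequality for µ:

```python
# Sketch: mu(A, B) = m(A Δ B) / m(A + B) with the counting measure m = len.
import itertools
import random

random.seed(7)

def mu(a, b):
    u = a | b
    return len(a ^ b) / len(u) if u else 0.0   # mu(O, O) = 0 by convention

sets = [frozenset(random.sample(range(30), random.randint(1, 20)))
        for _ in range(6)]
for a, b, c in itertools.permutations(sets, 3):
    assert mu(a, b) <= mu(a, c) + mu(c, b) + 1e-12   # triangle inequality
```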

2.2.6 Properties of Metric Space with Normalized Metric Built upon Generalized Boolean Algebra with a Measure
It is convenient, with the help of the metric (2.2.36), to measure the distances between the elements of some collection {O, A1, A2, . . . , An}, for instance, of n-dimensional simplex Sx(A): A = Σ_{j=0}^{n} Aj, A0 ≡ O, when it is not necessary to know the absolute values of the distances ρ(Ai, Aj) = m(Ai∆Aj) between the pairs of elements Ai, Aj.
Consider a set A that is a sum of the elements {Aj}: A = Σ_{j=0}^{n} Aj. The elements {Aj} of a set A in metric space Ω are situated in the vertices of n-dimensional simplex Sx(A). Further, separate three elements Ai, Aj, and Ak from a set A: Ai, Aj, Ak ∈ A, and introduce the following designations:

Ai + Aj = Aij; Aj + Ak = Ajk; Ak + Ai = Aki; Ai + Aj + Ak = Aijk.

Then, according to Lemma 2.2.1, the elements Aij, Ajk, Aki are situated on the corresponding sides of a triangle ∆AiAjAk in metric space Ω, as shown in Fig. 2.2.3, so that:

Ai, Aij, Aj ∈ lij; Aj, Ajk, Ak ∈ ljk; Ak, Aki, Ai ∈ lki,

where lij, ljk, lki are the lines of metric space Ω.

In the plane of triangle ∆AiAjAk, the point Aijk = Ai + Aj + Ak is situated at the intersection of the lines lijk, ljki, lkij, so that the following relations of belonging hold:

Aij, Aijk, Ak ∈ lijk; Ajk, Aijk, Ai ∈ ljki; Aki, Aijk, Aj ∈ lkij.

Definition 2.2.17. The point Aijk = Ai + Aj + Ak of a two-dimensional face (triangle) ∆AiAjAk of n-dimensional simplex Sx(A): A = Σ_{j=0}^{n} Aj, A0 ≡ O, is called the barycenter of two-dimensional simplex (triangle) ∆AiAjAk:

Aijk = bc[Sx(Ai + Aj + Ak)].

Extending the introduced notion to three-dimensional simplex (tetrahedron) Sx(Ai + Aj + Ak + Am) formed by a collection of elements {Ai, Aj, Ak, Am} of a set A in metric space Ω, we obtain that a set Aijkm = Ai + Aj + Ak + Am is the barycenter of simplex Sx(Ai + Aj + Ak + Am):

Aijkm = bc[Sx(Ai + Aj + Ak + Am)],

so that in metric space Ω, the point Aijkm lies on the intersection of four lines passing through the barycenters of the four two-dimensional faces of tetrahedron Sx(Ai + Aj + Ak + Am) and also through the opposite vertices:

Ajkm, Aijkm, Ai ∈ li; Aikm, Aijkm, Aj ∈ lj; Aijm, Aijkm, Ak ∈ lk; Aijk, Aijkm, Am ∈ lm,

where

Ajkm = bc[Sx(Aj + Ak + Am)]; Aikm = bc[Sx(Ai + Ak + Am)];
Aijm = bc[Sx(Ai + Aj + Am)]; Aijk = bc[Sx(Ai + Aj + Ak)];
Aijkm = li ∩ lj ∩ lk ∩ lm.

Similarly, a set A = Σ_{j=0}^{n} Aj, A0 ≡ O, is the barycenter of n-dimensional simplex Sx(A) in metric space Ω:

A = bc[Sx(A = Σ_{j=0}^{n} Aj)].

Lemma 2.2.4. In the space Ω, all the elements {Aj} of n-dimensional simplex Sx(A): A = Σ_{j=0}^{n} Aj, A0 ≡ O, belong to some n-dimensional sphere Sp(OA, RA) with center OA and radius RA: ∀Aj ∈ Sx(A): Aj ∈ Sp(OA, RA).
In the space Ω, an n-dimensional sphere Sp(OA, RA) can be drawn around an arbitrary n-dimensional simplex Sx(A). It is instructive here to draw an analogy with a Euclidean sphere in R³, where a distance between a pair of points A, B is determined by the Euclidean metric (2.2.5), and by the angular measure of an arc of a great circle passing through the given pair of points A, B and the center of the sphere. Distances between the elements {Aj} of n-dimensional simplex Sx(A), lying on n-dimensional sphere Sp(OA, RA) in the space Ω, are uniquely determined by the metric µ(Ai, Aj) (2.2.36), which is the function of the angular measure ϕij between the elements Ai and Aj:

µ(Ai, Aj) = sin²ϕij = m(Ai∆Aj) / m(Ai + Aj). (2.2.37)

FIGURE 2.2.3 Simplex Sx(Ai + Aj + Ak) in metric space Ω

Consider the main properties of metric space ω with metric µ(A, B) (2.2.36), which is a subset of metric space Ω with metric ρ(A, B) = m(A∆B). At the same time, we require that three elements A, X, B belonging to the same line l in the space Ω: A, X, B ∈ l, l ∈ Ω, such that the element X lies between the elements A and B, would also belong to the same line l′ in the space ω: A, X, B ∈ l′, l′ ∈ ω. Then two metric identities jointly hold:

ρ(A, B) = ρ(A, X) + ρ(X, B);
µ(A, B) = µ(A, X) + µ(X, B). (2.2.38)

According to the definitions of metrics ρ(A, B) and µ(A, B) of spaces Ω and ω, respectively, the system (2.2.38) can be written as follows:

m(A∆B) = m(A∆X) + m(X∆B);
m(A∆B)/m(A + B) = m(A∆X)/m(A + X) + m(X∆B)/m(X + B). (2.2.39)

The first equation of the system (2.2.39), as noted above (see the identities (2.2.6) and (2.2.7)), has two solutions with respect to X: (1) X = A + B and (2) X = A·B. Substituting these values of X into the second equation of (2.2.39), we notice that it turns into an identity on X = A + B, but it does not hold on X = A·B. This implies the following requirement on the space ω: ω should be closed under the operation of addition of generalized Boolean algebra B(Ω) with a measure m.
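This asymmetry between the two solutions is easy to exhibit numerically. In the added sketch below (an illustration with hypothetical finite sets and the counting measure), X = A + B satisfies both identities of (2.2.39), while X = A·B satisfies only the first:

```python
# Sketch: rho and mu as above, with m = len; + is union, the product is intersection.
A = {1, 2, 3, 4, 5}
B = {4, 5, 6, 7}

def rho(a, b): return len(a ^ b)
def mu(a, b):  return len(a ^ b) / len(a | b)

for X in (A | B, A & B):                      # both candidates solve the first equation
    assert rho(A, B) == rho(A, X) + rho(X, B)

additive = lambda X: abs(mu(A, B) - (mu(A, X) + mu(X, B))) < 1e-12
assert additive(A | B)                        # second equation holds on X = A + B
assert not additive(A & B)                    # ... but fails on X = A * B
```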
We now give a general definition of the space ω.

Definition 2.2.18. Metric space ω with metric ρω (A, B ) = m(A∆B )/m(A + B )


is the subset of the elements {A, B, . . .} of metric space Ω with metric ρ(A, B ) =
m(A∆B ) built upon generalized Boolean algebra B(Ω) with a measure m, which:
(1) includes null element O of generalized Boolean algebra B(Ω), O ∈ ω; (2) is
closed under operation of addition of generalized Boolean algebra B(Ω), A, B ∈
ω ⇒ A + B ∈ ω; (3) along with every its element A, contains all the elements
contained in A: A ∈ ω, X ∈ A ⇒ X ∈ ω.

Summarizing this definition briefly, metric space ω with metric ρω (A, B ) =


m(A∆B )/m(A+B ) is the set of principal ideals {JA+B = A+B, JB+C = B +C, . . .}
of generalized Boolean algebra B(Ω) with a measure m formed by the pairs of
nonnull elements A, B; B, C; . . . of metric space Ω with metric ρ(A, B ) = m(A∆B ).
Generally, mapping of one space into another (in this case, the space Ω into the space ω: Ω → ω), according to Klein's terminology [197], can be interpreted by two distinct methods: "active" and "passive". While using the passive method of interpretation, mapping Ω → ω is represented as a change of coordinate system, i.e., for a fixed element, for instance A ∈ Ω, which has coordinates x = [x1, x2, . . . , xn], new coordinates x′ = [x′1, x′2, . . . , x′n] are assigned in space ω. It is from this standpoint that Definition 2.2.18 of the space ω is given. Conversely, with the active understanding, the coordinate system is fixed, and the space is transformed. Each element A of the space Ω with coordinates x = [x1, x2, . . . , xn] is associated with the element a of the space ω with coordinates x′ = [x′1, x′2, . . . , x′n]; thus, some transformation of the elements of the space is established. It is this method of interpretation we shall deal with henceforth while considering the properties of the space ω.
We now give one more definition of the space ω.
Definition 2.2.19. Metric space ω with metric ρω(A, B) is the set of the elements {a, b, . . .} in which each pair a, b is the result of mapping f of the corresponding pair of elements A, B from metric space Ω:

f: A → a, B → b; A, B ∈ Ω; a, b ∈ ω,

the distance ρω(A, B) between the elements a and b in the space ω is equal to m(A∆B)/m(A + B), and three elements A, X, B belonging to the same line l in the space Ω: A, X, B ∈ l, l ∈ Ω, such that the element X lies between the elements A and B, are mapped into three elements a, x, b, respectively:

f: A → a, X → x, B → b;

thus, they belong to the same line l′ in the space ω: a, x, b ∈ l′, l′ ∈ ω, and the element x lies between the elements a and b.
We now summarize the properties of the space (ω, ρω ).
Group of geometric properties

1. Space (ω, ρω) is a metric space with metric ρω(A, B) = m(A∆B)/m(A + B), where fAB: A → a, B → b; A, B ∈ Ω; a, b ∈ ω, and the indices A, B of the mapping fAB point to the fact that it is applied to the pair of elements A, B only.

2. Under mapping fAB: A → a, B → b of a pair of elements A, B of the space Ω with coordinates x^A = [x^A_1, . . . , x^A_n], x^B = [x^B_1, . . . , x^B_n] into the elements a, b of the space ω with coordinates x^a = [x^a_1, . . . , x^a_n], x^b = [x^b_1, . . . , x^b_n], respectively, the coordinates x^A, x^B of the elements A, B and the coordinates x^a, x^b of the elements a, b are interconnected by the relationships:

x^a = k·x^A, x^b = k·x^B, k = 1/m(A + B). (2.2.40)

(a) Under pairwise mapping of three elements A, B, C of the space Ω into the elements a, b, c of the space ω, respectively, the connection between the coordinates x^A, x^B, x^C of the elements A, B, C and the coordinates x^a, x^b, x^c of the elements a, b, c is determined by the relationships:

fAB: A → a′, B → b′, x^{a′} = kab·x^A, x^{b′} = kab·x^B;
fBC: B → b″, C → c″, x^{b″} = kbc·x^B, x^{c″} = kbc·x^C;
fCA: C → c‴, A → a‴, x^{c‴} = kca·x^C, x^{a‴} = kca·x^A,

where kab = 1/m(A + B); kbc = 1/m(B + C); kca = 1/m(C + A).

(b) Under mapping fAB of a pair of elements A, B of the space Ω into the elements a, b of the space ω, fAB: A → a, B → b, every element x ∈ ω: fAB: X → x, X ∈ A + B, is characterized by normalized measure µ(x):

µ(x) = m(X)/m(A + B). (2.2.41)

3. Under mapping f: A → a, X → x, B → b of three elements A, X, B ∈ Ω belonging to the same line l in the space Ω, where the element X lies between the elements A and B, the elements a, x, b also belong to some line l′ in the space ω: a, x, b ∈ l′, l′ ∈ ω, and the element x lies between the elements a and b.
4. Under mapping f: Sx(A) → Sx(a), Sx(A) ∈ Ω, Sx(a) ∈ ω of the simplex Sx(A): A = Σ_{j=0}^{n} Aj, A0 ≡ O, into the simplex Sx(a): a = Σ_{j=0}^{n} aj, a0 ≡ O, the barycenters of p-dimensional faces of the simplex Sx(A), p ≤ n, are mapped into the barycenters of the corresponding faces of the simplex Sx(a):

Aij = bc[Sx(Ai + Aj)] → aij = bc[Sx(ai + aj)];
Aijk = bc[Sx(Ai + Aj + Ak)] → aijk = bc[Sx(ai + aj + ak)];
Aijkm = bc[Sx(Ai + Aj + Ak + Am)] → aijkm = bc[Sx(ai + aj + ak + am)];
. . . ;
A = bc[Sx(A)] → a = bc[Sx(a)].

5. Diameter diam(ω) of the space ω is equal to 1:

diam(ω) = sup_{a,b∈ω} µ(a, b) = µ(a, O) = 1.

Group of algebraic properties

1. Metric space (ω, ρω ) is a set of principal ideals {JA+B = A + B, JB+C =


B + C, . . .} of generalized Boolean algebra B(Ω) with a measure m, which
are formed by the corresponding pairs of non-null elements A, B; B, C;
. . . of metric space Ω with metric ρ(A, B ) = m(A∆B ).
(a) The principal ideal JA+B of generalized Boolean algebra B(Ω) with a
measure m, generated by a pair of non-null elements A and B, is an
independent generalized Boolean algebra B(A + B ) with a measure
m.
(b) The mapping fAB : A → a, B → b of a pair of elements A, B of the
space Ω into a pair of elements a, b of the space ω determines an
isomorphism between generalized Boolean algebras B(A + B ) with a
measure m and B(a + b) with a measure µ, so that measures m and
µ are connected by the relationship (2.2.41).

2. Metric space ω contains the null element O of generalized Boolean algebra


B(Ω) with a measure m.
3. Metric space ω is closed under the operation of addition of generalized
Boolean algebra B(Ω) with a measure m. Taking into account closure, as-
sociativity, and commutativity of operation of addition, one can conclude
that the space ω is an additive Abelian semigroup ω = (SG, +).
4. Metric space ω is closed under operation of multiplication of generalized
Boolean algebra B(Ω) with a measure m. Taking into account closure,
associativity, and commutativity of operation of multiplication, one can
conclude that the space ω is a multiplicative Abelian semigroup ω =
(SG, · ).
5. Stone’s duality between generalized Boolean algebra B(Ω) with the signa-
ture (+, ·, −, O) of the type (2, 2, 2, 0) and generalized Boolean ring BR(Ω)
with the signature (∆, ·, O) of the type (2, 2, 0) induces the following cor-
respondence. If ω is an ideal of generalized Boolean algebra B(Ω), then ω
is an ideal of generalized Boolean ring BR(Ω).
6. Metric space ω is closed under an operation of addition ∆ of generalized
Boolean ring BR(Ω) with a measure m. Taking into account closure, asso-
ciativity, and commutativity of addition operation ∆, the presence of null
element O ∈ ω: a∆O = O∆a = a, and the presence of inverse element
a−1 = a: a∆a−1 = O for each element a ∈ ω, one can conclude that the
space ω is an additive Abelian group: ω = (G, ∆).
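The group structure of item 6 can be seen directly on finite sets, where the addition ∆ of the generalized Boolean ring is the symmetric difference. A small added sketch (the concrete sets are hypothetical):

```python
# Sketch: Δ is the symmetric difference ^; O (the empty set) is the identity.
O = frozenset()
a = frozenset({1, 2, 3})
b = frozenset({2, 4})

assert a ^ O == a == O ^ a        # O is the neutral element
assert a ^ a == O                 # every element is its own inverse
assert (a ^ b) ^ b == a           # inverses cancel
assert a ^ b == b ^ a             # addition Δ is commutative
```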

2.3 Informational Properties of Information Carrier Space


The content of this section is based on the definition of information carrier space
built upon generalized Boolean algebra with a measure and introduced in sec-
tion 2.1. The narration is arranged in such a manner that at first we consider
main relationships between information carriers A, B ⊂ Ω (sets of elements) of in-
formation carrier space Ω (the description of the space Ω at a macrolevel), and then
we consider main relationships within a separate information carrier (a set of the
elements) A of the space Ω (the description of the space Ω at a microlevel). Any
set of the elements A = {Aα }, A ⊂ Ω is considered simultaneously as a subalgebra
B(A) of generalized Boolean algebra B(Ω) with signature (+, ·, −, O) of the type
(2, 2, 2, 0). In the final part of the section, we introduce theorems characterizing the main informational relationships for the most interesting (from the viewpoint of signal processing theory) mappings of the elements of information carrier space. Informational properties of information carrier space Ω are introduced by the following axiom.

Axiom 2.3.1. Axiom of a measure of a binary operation. Measure m(a) of the element a: a = b ◦ c; a, b, c ∈ Ω, considered as a result of a binary operation ◦ of generalized Boolean algebra B(Ω) with a measure m, determines the quantity of information Ia = m(a) corresponding to the result of the given operation.

Based on this axiom, a measure m of the element a of carrier space Ω determines the quantitative aspect of information, while a binary operation ◦ of generalized Boolean algebra determines the qualitative aspect of this information. Within the formulated axiomatic statement 2.3.1, depending on the type of relations between the sets A = {Aα}, B = {Bβ} and/or their elements, we shall distinguish the following types of information quantity, introduced by the definitions below.

Definition 2.3.1. Quantity of overall information IAα1 +Aα2 , contained in an arbi-


trary pair of elements Aα1 ,Aα2 : Aα1 , Aα2 ∈ A of the set A considered as subalgebra
of generalized Boolean algebra B(Ω), is an information quantity equal to the mea-
sure of the sum of these elements:

IAα1 +Aα2 = m(Aα1 + Aα2 ).

Definition 2.3.2. Quantity of mutual information IAα1 ·Aα2 , contained in an arbi-


trary pair of elements Aα1 ,Aα2 : Aα1 , Aα2 ∈ A of the set A considered as subalgebra
of generalized Boolean algebra B(Ω), is an information quantity equal to the mea-
sure of the product of these elements:

IAα1 ·Aα2 = m(Aα1 · Aα2 ).

Definition 2.3.3. Quantity of absolute information IAα , contained in arbitrary


element Aα : Aα ∈ A of the set A considered as subalgebra of generalized Boolean
algebra B(Ω), is an information quantity equal to 1, according to the property 2 of
information carrier space Ω (see Definition 2.1.1):

IAα = m(Aα · Aα ) = m(Aα + Aα ) = m(Aα ) = 1.

Remark 2.3.1. Quantity of absolute information IAα may be considered the quan-
tity of overall information IAα +Aα = m(Aα + Aα ) or the quantity of mutual infor-
mation IAα ·Aα = m(Aα · Aα ) contained in the element Aα with respect to itself.

Definition 2.3.4. Quantity of particular relative information, which is contained


in the element Aα1 with respect to the element Aα2 (or vice versa, is contained in
the element Aα2 with respect to the element Aα1 ), Aα1 , Aα2 ∈ A, is an information
quantity equal to a measure of a difference of these elements:

IAα1 −Aα2 = m(Aα1 − Aα2 ); IAα2 −Aα1 = m(Aα2 − Aα1 ).

Definition 2.3.5. Quantity of relative information IAα1 ∆Aα2 , which is contained


by the element Aα1 with respect to the element Aα2 and vice versa, Aα1 , Aα2 ∈ A,
is an information quantity equal to a measure of a symmetric difference of these
elements:
IAα1 ∆Aα2 = m(Aα1 ∆Aα2 ) = IAα1 −Aα2 + IAα2 −Aα1 .

Remark 2.3.2. Quantity of relative information IAα1 ∆Aα2 is identically equal to


the introduced metric (2.1.1) between the elements Aα1 and Aα2 of the set A
considered a subalgebra of generalized Boolean algebra B(Ω):

IAα1 ∆Aα2 = m(Aα1 ∆Aα2 ) = ρ(Aα1 ∆Aα2 ).

Definition 2.3.6. Quantity of absolute information IA (IB) is an information quantity contained in a set A (B), considered a subalgebra of generalized Boolean algebra B(Ω) with a collection of the elements {Aα} ({Bβ}), in consequence of the structural diversity of set A (B):

IA = m(A) = m(∪α Aα) ≡ m(Σα Aα);
IB = m(B) = m(∪β Bβ) ≡ m(Σβ Bβ).

Definition 2.3.7. Quantity of overall information IA+B contained in an arbitrary


pair of sets A and B is an information quantity equal to a measure of sum of these
two sets A and B; each is considered a subalgebra of generalized Boolean algebra
B(Ω) (see Fig. 2.3.1a):
IA+B = m(A + B ).

FIGURE 2.3.1 Information quantities between two sets A and B: (a) quantity of overall information; (b) quantity of mutual information; (c) quantity of particular relative information; (d) quantity of relative information

Definition 2.3.8. Quantity of mutual information IA·B contained in an arbitrary


pair of sets A and B is an information quantity equal to a measure of product of
sets A and B; each is considered a subalgebra of generalized Boolean algebra B(Ω)
(see Fig. 2.3.1b):
IA·B = m(A · B ).

Remark 2.3.3. Quantity of absolute information IA may be considered a quantity


of overall information IA+A = m(A + A) or a quantity of mutual information
IA·A = m(A · A) contained in a set A with respect to itself.

Definition 2.3.9. Quantity of particular relative information contained in the set


A with respect to the set B, i.e. IA−B (or vice versa, in the set B with respect to
the set A, i.e. IB−A ), is an information quantity equal to a measure of the difference
of the sets A and B, respectively (see Fig. 2.3.1c):

IA−B = m(A − B ), IB−A = m(B − A).

Definition 2.3.10. Quantity of relative information IA∆B contained in the set


A with respect to the set B and vice versa is an information quantity equal to a
measure of the symmetric difference of the sets A and B (see Fig. 2.3.1d):

IA∆B = m(A∆B ) = IA−B + IB−A .

Remark 2.3.4. Quantity of relative information IA∆B is identically equal to the


introduced metric (2.1.1) between the elements A and B of the space Ω:

IA∆B = m(A∆B ) = ρ(A∆B ). (2.3.1)

Metric space Ω allows us to give the following geometric interpretation of the main introduced notions.

In the information carrier space Ω (see Definition 2.1.1), all the elements of the set A = ∪α Aα are situated on a surface of some n-dimensional sphere Sp(O, R), whose center is a null element O of generalized Boolean algebra B(Ω) and whose radius R is equal to 1 (see Fig. 2.3.2):

R = m(Aα∆O) = m(Aα) = 1.

A distance from null element O of the space Ω to some arbitrary set A is equal to a measure of the set m(A), or, equivalently, to the quantity of absolute information IA contained in the given set:

ρ(A∆O) = m(A∆O) = m(A) = IA.

FIGURE 2.3.2 Elements of set A = ∪α Aα situated on surface of n-dimensional sphere Sp(O, R)

It is obvious that quantity of absolute information IA of a set A is equal to quantity


of relative information IA∆O contained in a set A with respect to null element O:

IA = m(A) = m(A∆O) = IA∆O .

Measure m(A) of a set A of the elements or a quantity of absolute information IA


contained in a given set may be interpreted as a length of a set in metric space Ω.
Quantity of relative information IA∆B contained in a set A with respect to a set B,
and vice versa, as noted in Remark 2.3.2, is identically equal to a distance between
sets A and B in the space Ω (2.3.1).
Main relationships between the sets A and B in metric space Ω and geometric properties of Ω are considered in section 2.2. From these relationships, one can easily obtain the corresponding interrelations between various types of information. Here we present only the main ones.
Group of the identities:

IA+B = IA + IB − IAB; IA+B = IA∆B + IAB; IA∆B = IA + IB − 2IAB.

Group of the inequalities:

IA + IB ≥ IA+B ≥ IA∆B; IA + IB ≥ IA+B ≥ max[IA, IB];
max[IA, IB] ≥ √(IA·IB) ≥ min[IA, IB] ≥ IA·B.

Informational relationships for an arbitrary triplet of sets A, B, C and null element O of the space Ω, which are equivalent to the metric relationships of a tetrahedron (see Fig. 2.2.2), are:

IA∆B = IA + IB − 2IAB; IB∆C = IB + IC − 2IBC; IC∆A = IC + IA − 2ICA;
IA∆B = IA∆C + IC∆B − 2I(A∆C)(C∆B).
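All of the listed identities and inequalities can be checked directly for finite information carriers. The sketch below is an added illustration (not from the original): it computes the information quantities of Definitions 2.3.6 through 2.3.10 with the counting measure playing the role of m, on hypothetical sets A, B:

```python
# Sketch: I_A = m(A), I_{A+B} = m(A + B), I_{A*B} = m(AB), I_{AΔB} = m(A Δ B),
# with m = len on finite sets.
import math

A = {1, 2, 3, 4, 5, 6}
B = {4, 5, 6, 7, 8}

I_A, I_B = len(A), len(B)          # absolute information
I_sum    = len(A | B)              # overall information I_{A+B}
I_mut    = len(A & B)              # mutual information I_{A*B}
I_AmB    = len(A - B)              # particular relative information I_{A-B}
I_BmA    = len(B - A)              # particular relative information I_{B-A}
I_delta  = len(A ^ B)              # relative information I_{AΔB}

assert I_sum == I_A + I_B - I_mut
assert I_sum == I_delta + I_mut
assert I_delta == I_A + I_B - 2 * I_mut == I_AmB + I_BmA
assert I_A + I_B >= I_sum >= max(I_A, I_B) and I_sum >= I_delta
assert max(I_A, I_B) >= math.sqrt(I_A * I_B) >= min(I_A, I_B) >= I_mut
```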


Now one should consider the units of information quantity measurement. The mathematical apparatus for constructing the stated foundations of both signal processing theory and information theory is Boolean algebra with a measure. As a unit of information quantity, we take a quantity of absolute information IAα, which is contained in a single element Aα of a set A = ∪α Aα with normalized measure (m(Aα) = 1), and according to Definition 2.3.3, it is equal to 1:

IAα = m(Aα)|Aα∈A = 1.

Definition 2.3.11. Quantity of absolute information IAα contained in a single element Aα: Aα ∈ A ⊂ Ω of a set A = ∪α Aα is called abit (abit is an abbreviation for absolute unit): IAα = m(Aα) = 1 abit.

Now we finish the consideration of the main relationships between information carriers (sets of elements) of the space Ω and consider the main relationships within the framework of a single information carrier (a set of the elements) of the space Ω. As noted above, all the elements {Aα} of an arbitrary set A = ∪α Aα in information carrier space are situated on a surface of some n-dimensional sphere Sp(O, R), whose center is a null element O of generalized Boolean algebra B(Ω) and radius R is equal to 1: R = 1 (see Fig. 2.3.2). Evidently, all the relationships between the sets of the space Ω hold for arbitrary elements {Aα} of a single set A = ∪α Aα.
For all the elements {Aα} with normalized measure m(Aα) = 1 of an arbitrary set A = ∪α Aα, one may introduce a metric between the elements Aα and Aβ that is equivalent to the metric (2.1.2):

d(Aα, Aβ) = (1/2) m(Aα∆Aβ) = (1/2)[m(Aα) + m(Aβ) − 2m(Aα·Aβ)] = 1 − m(Aα·Aβ);

d(Aα, Aβ) = (1/2) ρ(Aα, Aβ). (2.3.2)
The set of the elements with a discrete structure {Aj} in metric space Ω is represented by the vertices {Aj} of polygonal line l({Aj}) that lie upon a sphere Sp(O, R) and, at the same time, are the vertices of an n-dimensional simplex Sx(A) inscribed into the sphere Sp(O, R). The length (perimeter) P({Aj}) of a closed polygonal line l({Aj}) is determined by the expression:

P({Aj}) = Σ_{j=0}^{n} d(Aj, Aj+1) = (1/2) Σ_{j=0}^{n} ρ(Aj, Aj+1),

where d(Aj, Ak), ρ(Aj, Ak) are the metrics determined by the relationships (2.3.2) and (2.1.2), respectively. Here, the values of the indices are taken modulo n + 1, i.e., An+1 ≡ A0.
The set of the elements with a continuous structure {Aα} in metric space Ω is a continuous closed line l({Aα}) situated upon a sphere Sp(O, R), which at the same time is a fragment of n-dimensional simplex Sx(A) inscribed into the sphere Sp(O, R) and connects in series the vertices {Aj} of the given simplex.
On the basis of these notions, one can distinguish two forms of structural diversity of a set A = ∪α Aα and, correspondingly, two measures of collections of the elements of a set A, i.e., overall quantity of information and relative quantity of information, introduced by the following definitions.

Definition 2.3.12. Overall quantity of information I(A), which is contained in a set A considered a collection of the elements {Aα}, is an information quantity equal to the measure of their sum:

I(A) = m(A) = m(∪α Aα) = m(Σα Aα). (2.3.3)

Evidently, this measure of structural diversity is identical to the quantity of


absolute information IA contained in a set A:

I (A) = IA .

Definition 2.3.13. Relative quantity of information I∆(A), which is contained in a set A in consequence of distinctions between the elements of a collection {Aα}, is an information quantity equal to the measure of their symmetric difference:

I∆(A) = m(∆α Aα), (2.3.4)

where ∆α Aα = . . . ∆Aα∆Aβ . . . is a symmetric difference between the elements of collection {Aα} of a set A.

The presence of two measures of structural diversity of a collection {Aα} of the set elements is stipulated by Stone's duality between generalized Boolean algebra B(Ω) with signature (+, ·, −, O) of the type (2, 2, 2, 0) and a generalized Boolean ring BR(Ω) with signature (∆, ·, O) of the type (2, 2, 0), which are isomorphic to each other [176], [215].
Information contained in a set A of elements is revealed at the same time in similarity and distinction between the single elements of collection {Aα}. If a set A consists of the identical elements . . . = Aα = Aβ = . . . of a collection {Aα}, then the overall quantity of information I(A) contained in a set A is equal to 1: I(A) = 1, and the relative quantity of information I∆(A) contained in a set A is equal to zero: I∆(A) = 0. This means that the overall quantity of information contained in a set A consisting of a collection of identical elements {Aα} is equal to the measure of a single element: I(A) = m(Aα) = 1, whereas the information quantity that can be extracted from such a set is determined by the relative quantity of information I∆(A) and is equal to zero.
Consider the main relationships that characterize the introduced measures of collections of the elements, m(Σ_j Aj) and m(∆_j Aj), for a set A with discrete structure {Aj}:

m(Σ_{j=0}^{n} Aj) = Σ_{j=0}^{n} m(Aj) − Σ_{0≤j<k≤n} m(AjAk) + Σ_{0≤j<k<l≤n} m(AjAkAl) − . . . + (−1)^n m(Π_{j=0}^{n} Aj); (2.3.5)

m(∆_{j=0}^{n} Aj) = m(Σ_{j=0}^{n} Aj) − m(Σ_{0≤j<k≤n} AjAk) + m(Σ_{0≤j<k<l≤n} AjAkAl) − . . . + (−1)^n m(Π_{j=0}^{n} Aj). (2.3.6)
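Both expansions are verifiable by direct computation. In the added sketch below (an illustration with a hypothetical family of finite sets and the counting measure), (2.3.5) is the usual inclusion-exclusion formula, while in (2.3.6) each term is the measure of a sum of k-fold products:

```python
# Sketch: verify (2.3.5) and (2.3.6) for a concrete family of finite sets.
from itertools import combinations
from functools import reduce

family = [{1, 2, 3, 4}, {3, 4, 5}, {1, 4, 6, 7}, {2, 4, 5, 7}]
n = len(family)

def products(k):
    """All k-fold products (intersections) of the family members."""
    return [reduce(set.intersection, combo) for combo in combinations(family, k)]

# (2.3.5): m(sum Aj) = sum of m(Aj) - sum of m(Aj Ak) + ... (inclusion-exclusion)
lhs = len(reduce(set.union, family))
rhs = sum((-1) ** (k + 1) * sum(map(len, products(k))) for k in range(1, n + 1))
assert lhs == rhs

# (2.3.6): m(Δ Aj) = m(sum Aj) - m(sum Aj Ak) + ... with alternating signs
lhs_d = len(reduce(set.symmetric_difference, family))
rhs_d = sum((-1) ** (k + 1) * len(reduce(set.union, products(k)))
            for k in range(1, n + 1))
assert lhs_d == rhs_d
```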

The identities (2.3.5) and (2.3.6) imply the double inequality:

m(∆_j Aj) ≤ m(Σ_j Aj) ≤ Σ_j m(Aj), (2.3.7)

which implies, in its turn, the double informational inequality:

I∆(A) ≤ IA ≤ Σ_j m(Aj). (2.3.8)

If a set A consists of pairwise disjoint elements (Aj·Ak = O, j ≠ k), then the inequality (2.3.8) transforms into the identity:

I∆(A) = IA = Σ_j m(Aj). (2.3.9)

It can be shown that for a set A with discrete structure {Aj}, j = 0, 1, . . . , n, the following relationship holds:

    m(∑_j Aj) = P({Aj}) + m(∏_j Aj),    (2.3.10)

where P({Aj}) = (1/2) ∑_{j=0}^{n} m(Aj ∆ Aj+1) = ∑_{j=0}^{n} d(Aj, Aj+1) is the perimeter of a closed polygonal line l({Aj}) that connects in series the ordered elements {Aj} of a set A in the metric space of information carriers Ω. Here the values of the indices are taken modulo n + 1, i.e., An+1 ≡ A0.
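A quick numeric illustration of (2.3.10), again under the counting measure, with d(A, B) = (1/2) m(A ∆ B) and a nested chain of elements, one ordered configuration for which the identity can be verified directly (the helper names are illustrative):

def d(A, B):                     # metric d(A, B) = (1/2) m(A ∆ B), counting measure
    return 0.5 * len(A ^ B)

def perimeter(sets):             # P({A_j}) = Σ_j d(A_j, A_{j+1}), indices mod n+1
    return sum(d(sets[j], sets[(j + 1) % len(sets)]) for j in range(len(sets)))

A = [{1}, {1, 2}, {1, 2, 3}]     # ordered (nested) chain of elements
lhs = len(set.union(*A))                          # m(Σ_j A_j)
rhs = perimeter(A) + len(set.intersection(*A))    # P({A_j}) + m(Π_j A_j)
print(lhs, rhs)                  # 3 3.0, in agreement with (2.3.10)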
The relationships (2.3.6) and (2.3.10) imply the equality:

    m(∆_j Aj) = P({Aj}) − P({Aj · Aj+1}) + P({Aj · Aj+1 · Aj+2}) − . . . + (−1)^k P({∏_{i=j}^{j+k} Ai}) + . . . + m(∏_j Aj) · mod₂(n − 1),    (2.3.11)

where mod₂(n − 1) = 1 for n = 2k and mod₂(n − 1) = 0 for n = 2k + 1, k ∈ N;

P({Aj}) = ∑_{j=0}^{n} d(Aj, Aj+1) is the perimeter of a closed polygonal line l({Aj}) that connects in series the ordered elements {Aj} of a set A in the metric space of information carriers Ω (see Fig. 2.3.2);

P({Aj · Aj+1}) = ∑_{j=0}^{n} d[(Aj−1 · Aj), (Aj · Aj+1)] is the perimeter of a closed polygonal line l({AΠj}) that connects in series the ordered elements {AΠj}: AΠj = Aj · Aj+1 of a set AΠ = ⋃_j AΠj in the metric space Ω (see Fig. 2.3.2);

P({∏_{i=j}^{j+k} Ai}) = ∑_{j=0}^{n} d[∏_{i=j}^{j+k} Ai, ∏_{i=j+1}^{j+k+1} Ai] is the perimeter of a closed polygonal line l({AΠj,k}) that connects in series the ordered elements {AΠj,k}: AΠj,k = ∏_{i=j}^{j+k} Ai of a set AΠk = ⋃_j AΠj,k in the metric space Ω.
j
The relationship (2.3.10) implies that the overall quantity of information I(A) is an information quantity contained in a set A as a consequence of the presence of metric distinctions between the elements of a collection {Aj}. This overall quantity of information is determined by the perimeter P({Aj}) of a closed polygonal line l({Aj}) in the metric space Ω and also by the quantity of mutual information defined by the measure of the product of the elements, m(∏_j Aj).

The relationship (2.3.11) implies that the relative quantity of information I∆(A) is likewise an information quantity contained in a set A owing to the presence of metric distinctions between the elements of a collection {Aj}. But unlike the overall quantity of information I(A), the relative quantity of information I∆(A) is determined by the perimeter P({Aj}) of a closed polygonal line l({Aj}) in the metric space Ω taking into account the influence of metric distinctions between the products ∏_{i=j}^{j+k} Ai of the elements of a collection {Aj}.

Both measures of structural diversity of a set of the elements A = ⋃_j Aj, i.e., m(∑_j Aj) and m(∆_j Aj), are functions of the perimeter P({Aj}) of the ordered structure of a set A in the form of a closed polygonal line l({Aj}) that connects in series the elements of a collection {Aj} in the metric space Ω with metric d(Aj, Ak) = (1/2) m(Aj ∆ Ak). The first of these measures, i.e., the overall quantity of information I(A), has the sense of the entire quantity of information contained in a set A considered a collection of elements with an ordered structure. The relative quantity of information I∆(A) has the sense of the information quantity that may be extracted from such a set by proper processing.
Under discretization, a set A with continuous structure {Aα} is represented by a finite (countable) set {Aj}. In its turn, a set of the elements A′ with discrete structure {Aj} may be associated with the former:

    A = ⋃_α Aα → A′ = ⋃_j Aj,

so that each element Aj of a set A′ with discrete structure is, at the same time, an element of the set A with continuous structure: Aj ∈ A.

Thus, a discretization D: A → A′ of a set of the elements A with continuous structure {Aα} may be considered a mapping of a set A = ⋃_α Aα into a set A′ = ⋃_j Aj with discrete structure {Aj}, such that each element of a set A′ with discrete structure is, at the same time, an element of the set A with continuous structure, and the distinct elements Aα, Aβ ∈ A, Aα ≠ Aβ of a set A are mapped into the distinct elements Aj, Ak ∈ A′, Aj ≠ Ak of a set A′:

    D: A → A′;    (2.3.12)
    Aα, Aβ ∈ A, Aα ≠ Aβ, Aα → Aj, Aβ → Ak, Aj, Ak ∈ A′, Aj ≠ Ak.    (2.3.12a)

The mapping D possessing the property (2.3.12a) is injective. The homomorphism (2.3.12) maps generalized Boolean algebra B(A) with the signature (+, ·, −, O) of the type (2, 2, 2, 0) into the subalgebra B(A′) preserving the operations:

    Aα + Aβ → Aj + Ak    (addition);
    Aα · Aβ → Aj · Ak    (multiplication);
    Aα − Aβ → Aj − Ak    (difference / obtaining a relative complement);
    OA ≡ OA′ ≡ O    (identity of the null element).
In the general case, discretization of a set of the elements A with continuous structure {Aα} is not an isomorphic mapping preserving both measures of structural diversity, I(A) (2.3.3) and I∆(A) (2.3.4):

    I(A) ≠ I(A′);  I∆(A) ≠ I∆(A′),    (2.3.13)

where I(A) is the overall quantity of information contained in a set of the elements A with continuous structure {Aα}; I(A′) is the overall quantity of information contained in a set of the elements A′ with discrete structure {Aj}; I∆(A) is the relative quantity of information contained in a set of the elements A with continuous structure {Aα}; I∆(A′) is the relative quantity of information contained in a set of the elements A′ with discrete structure {Aj}.
Consider the main informational relationships characterizing the representation of a set of the elements A with continuous structure {Aα} by a finite (countable) set {Aj}.

Under discretization D: A → A′ (2.3.12), the following informational inequalities hold:

    I(A) ≥ I(A′),    (2.3.14a)
    I(A′) ≥ I∆(A′).    (2.3.14b)

It should be noted that the first inequality (2.3.14a) is stipulated by Axiom 1.3.1. The second inequality is stipulated by the relationship (2.3.6) between the overall quantity of information and the relative quantity of information contained in the result of discretization {Aj} of a set of the elements A with continuous structure {Aα}.

The overall quantity of information I(A), which is contained in a set of the elements A with continuous structure {Aα}, is equal to the sum of the overall quantity of information I(A′) contained in a set A′ obtained as a result of discretization of the set A and a quantity of information I′L that arises from the curvature of the structure of the set A and is lost under discretization:

    I(A) = I(A′) + I′L.    (2.3.15)
The information quantity I′L is called information losses of the first genus. For a set of the elements A with continuous structure {Aα} in the space Ω, we can introduce a measure of structure curvature, denoted c(A) and characterizing the deflection of the structure locus {Aα} of a set A from a line:

    c(A) = [I(A) − I(A′)] / I(A).    (2.3.16)

Obviously, the curvature c(A) of a set of the elements A can vary within 0 ≤ c(A) ≤ 1. If, for an arbitrary pair of adjacent elements Aj, Aj+1 ∈ A′ of the discrete set A′ and any element Aα ∈ A, Aα ∈ [Aj Aj+1, Aj + Aj+1], the metric identity holds:

    d[Aj, Aj+1] = d[Aj, Aα] + d[Aα, Aj+1],

where d[Aα, Aβ] = (1/2) m(Aα ∆ Aβ) is the metric in the space Ω, then the curvature c(A) of the set of elements A with continuous structure is equal to zero.
The overall quantity of information I(A′) contained in a set A′ with discrete structure {Aj} is equal to the sum of the relative quantity of information I∆(A′) of this set and the quantity of redundant information I″L contained in the set A′ owing to nonempty pairwise intersections (due to mutual information) between its elements:

    I(A′) = I∆(A′) + I″L.    (2.3.17)

The information quantity I″L is called information losses of the second genus. For the result of discretization, i.e., for a set A′ of the elements with discrete structure {Aj} of the space Ω, one can introduce a measure of informational redundancy r(A′) that characterizes informational interrelations between the elements of the discrete structure {Aj} of the set A′ (or simply, mutual information between them):

    r(A′) = [I(A′) − I∆(A′)] / I(A′).    (2.3.18)

If the measure of the product of every pair of adjacent elements is equal to zero, then the measure of informational redundancy r(A′) of a structure {Aj} is also equal to zero.
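The quantities (2.3.17) and (2.3.18) are straightforward to compute for a finite discrete structure. A sketch under the counting-measure assumption (illustrative helper name), contrasting an overlapping structure, r(A′) > 0, with an orthogonal one, I″L = r(A′) = 0:

from functools import reduce

def losses_and_redundancy(sets):
    I_overall = len(set.union(*sets))                         # I(A') = m(Σ A_j)
    I_relative = len(reduce(set.symmetric_difference, sets))  # I_∆(A') = m(∆ A_j)
    IL2 = I_overall - I_relative                              # I''_L, cf. (2.3.17)
    return I_overall, I_relative, IL2, IL2 / I_overall        # r(A'), cf. (2.3.18)

print(losses_and_redundancy([{1, 2}, {2, 3}, {3, 4}]))  # (4, 2, 2, 0.5): redundant
print(losses_and_redundancy([{1, 2}, {3, 4}, {5, 6}]))  # (6, 6, 0, 0.0): orthogonal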
Substituting the equality (2.3.17) into (2.3.15), we obtain the following relation:

    I(A) = I∆(A′) + I′L + I″L.    (2.3.19)

The sense of the expression (2.3.19) can be elucidated in the following way: from a set of the elements A with continuous structure, one cannot extract a larger information quantity than the value of the relative quantity of information I∆(A′) contained in the result of discretization A′ owing to the diversity between the elements of its structure. The information losses I′L and I″L take place, on the one hand, owing to the curvature c(A) of the structure {Aα} of a set A in the metric space Ω, and on the other hand, as a consequence of some informational redundancy r(A′) of a discrete set A′ (mutual information between its elements {Aj}).

Discretization D: A → A′, according to (2.3.19), must provide a maximum of the ratio of the relative quantity of information I∆(A′) contained in a set A′ with discrete structure to the overall quantity of information I(A) contained in a set A with continuous structure:

    I∆(A′) / I(A) → max,    (2.3.20)

or, equivalently, a minimum of the sum of the information losses of the first (I′L) and the second (I″L) genus:

    I′L + I″L → min.
In this case, the following theorem can be formulated.

Theorem 2.3.1. Theorem on equivalent (from the standpoint of preserving overall quantity of information) representation of a set of the elements A with continuous structure {Aα} by a set A′ with discrete structure {Aj}. In the information carrier space Ω, a set of the elements A = ⋃_α Aα, A ⊂ Ω with continuous structure {Aα} and a finite measure m(A) < ∞ can be represented equivalently, without any information losses, by a finite set A′ = ⋃_j Aj, provided that the set A of the elements with continuous structure {Aα} can be represented by a partition into orthogonal elements {Aj}, Aj · Ak = O, so that between the relative quantity of information I∆(A′) contained in the set A′ and the overall quantity of information I(A) contained in the set A, the equality holds:

    I(A) = I∆(A′).    (2.3.21)

This identity provides the possibility of extracting the same information quantity contained in a set A from a finite set A′ without any information losses. Such a representation of a set A with continuous structure by a finite set A′ with discrete structure, when the relation (2.3.21) holds, is equivalent from the standpoint of preserving overall quantity of information.

In this case, between both measures of structural diversity of the sets of the elements with continuous {Aα} and discrete {Aj} structures, respectively, i.e., between the measures of the sets A and A′, the identity holds:

    m(∑_α Aα) ≡ m(∑_j Aj),

and also the equality between the aforementioned measures and the measure of the symmetric difference m(∆_j Aj) holds:

    m(∑_α Aα) = m(∑_j Aj) = m(∆_j Aj),

which, from an informational standpoint, is equivalent to the identity:

    I(A) = I(A′) = I∆(A′) = I∆(A).

Fulfillment of these identities is provided by the orthogonality of the elements {Aj}, so that the measure of a set A′ with discrete structure {Aj} is equal to the measure of the symmetric difference of the elements of the given set:

    m(∑_j Aj) = m(∆_j Aj),

i.e., the overall quantity of information I(A′) of a set A′ is equal to the relative quantity of information I∆(A′) of this set: I(A′) ≡ I∆(A′). This means that under a discrete representation of a set of the elements A with continuous structure {Aα} possessing the mentioned property, the information losses of the first (I′L) and second (I″L) genus are equal to zero: I′L = I″L = 0.

The structural diversity measure of a set of the elements A with continuous structure {Aα}, based on the measure of the symmetric difference between the elements (2.3.4), allows us to formulate the following variant of a theorem on equivalent representation.

Theorem 2.3.2. Theorem on equivalent (from the standpoint of preserving relative quantity of information) representation of a set of the elements A with continuous structure {Aα} by a set A′ with discrete structure {Aj}. In the information carrier space Ω, a set of the elements A = ⋃_α Aα, A ⊂ Ω with continuous structure {Aα} and a finite measure m(A) < ∞ can be represented equivalently by a finite set A′ = ⋃_j Aj, if between the relative quantity of information I∆(A) contained in the set A of the elements with continuous structure {Aα} and the relative quantity of information I∆(A′) contained in the set of the elements A′ with discrete structure, the equality holds:

    I∆(A) = I∆(A′).    (2.3.22)

This identity provides the possibility of extracting the same relative quantity of information contained in a set of the elements A = ⋃_α Aα with continuous structure {Aα} from a finite set A′ = ⋃_j Aj with discrete structure {Aj}. Such a representation of a set A with continuous structure by a finite set A′ with discrete structure, while the relation (2.3.22) holds, is equivalent from the standpoint of preserving relative quantity of information.

Theorem 2.3.3. Theorem on isomorphic mapping preserving a measure. If in the metric space Ω, a set of the elements A = ⋃_α Aα with continuous (discrete) structure {Aα} and a finite measure m(A) < ∞ is isomorphic to a set of the elements A′ = ⋃_α A′α with continuous (discrete) structure {A′α}, A, A′ ⊂ Ω:

    g: Aα ⇄ A′α;  g: A ⇄ A′;  g ∈ G,    (2.3.23)

(the inverse correspondence being realized by g⁻¹), where G is an automorphism group of generalized Boolean algebra B(Ω) with a measure m, then the mapping g is a measure preserving isomorphism [175]:

    g(Aα1 + Aα2) = A′α1 + A′α2;    (2.3.24a)
    g(Aα1 · Aα2) = A′α1 · A′α2;    (2.3.24b)
    g(Aα1 − Aα2) = A′α1 − A′α2;    (2.3.24c)
    g(Aα1 ∆ Aα2) = A′α1 ∆ A′α2.    (2.3.24d)

Corollary 2.3.1. Isomorphic mapping g (2.3.23) of a set A with continuous (discrete) structure A = ⋃_α Aα into a set A′ with continuous (discrete) structure A′ = ⋃_α A′α preserves the measures of all binary operations between an arbitrary pair of the elements Aα1, Aα2 ∈ A of a set A considered as subalgebra B(A) of generalized Boolean algebra B(Ω) with signature (+, ·, −, O) of the type (2, 2, 2, 0):

    m(Aα1 + Aα2) = m(A′α1 + A′α2);    (2.3.25a)
    m(Aα1 · Aα2) = m(A′α1 · A′α2);    (2.3.25b)
    m(Aα1 − Aα2) = m(A′α1 − A′α2);    (2.3.25c)
    m(Aα1 ∆ Aα2) = m(A′α1 ∆ A′α2).    (2.3.25d)

The group of identities (2.3.25a) through (2.3.25d) implies the following corollary.

Corollary 2.3.2. Isomorphic mapping g (2.3.23) preserves all the sorts of information quantities between an arbitrary pair of the elements Aα1, Aα2 ∈ A of a set A considered as subalgebra B(A) of generalized Boolean algebra B(Ω) with signature (+, ·, −, O) of the type (2, 2, 2, 0):

    I_{Aα1+Aα2} = I_{A′α1+A′α2};    (2.3.26a)
    I_{Aα1·Aα2} = I_{A′α1·A′α2};    (2.3.26b)
    I_{Aα1−Aα2} = I_{A′α1−A′α2};    (2.3.26c)
    I_{Aα1∆Aα2} = I_{A′α1∆A′α2}.    (2.3.26d)

Corollary 2.3.3. Isomorphic mapping g (2.3.23) preserves the overall quantity of information I(A) contained in a set A considered as a collection of elements {Aα}:

    I(A) = m(A) = m(∑_α Aα) = m(∑_α A′α) = m(A′) = I(A′).

Corollary 2.3.4. Isomorphic mapping g (2.3.23) preserves the relative quantity of information I∆(A) contained in a set A owing to distinctions between the elements {Aα}:

    I∆(A) = m(∆_α Aα) = m(∆_α A′α) = I∆(A′).
α α

The results of Theorem 2.3.3 can be generalized to the isomorphic mapping of a pair of sets that are information carriers.

Theorem 2.3.4. Theorem on isomorphic mapping. Under isomorphic mapping g (2.3.23) of a pair of sets A, B with continuous (discrete) structures A = ⋃_α Aα, B = ⋃_β Bβ into the sets A′, B′ with continuous (discrete) structures A′ = ⋃_α A′α, B′ = ⋃_β B′β in the information carrier space Ω, respectively, A, A′ ⊂ Ω; B, B′ ⊂ Ω:

    g: A ⇄ A′;  g: B ⇄ B′;  g ∈ G,    (2.3.27)

the measures of all binary operations between the pair of sets A, B ⊂ A ∪ B, whose union A ∪ B is considered a subalgebra B(A + B) of generalized Boolean algebra B(Ω) with signature (+, ·, −, O) of the type (2, 2, 2, 0), are preserved:

    m(A + B) = m(A′ + B′);    (2.3.28a)
    m(A · B) = m(A′ · B′);    (2.3.28b)
    m(A − B) = m(A′ − B′);    (2.3.28c)
    m(A ∆ B) = m(A′ ∆ B′).    (2.3.28d)

The group of identities (2.3.28a) through (2.3.28d) implies the following corollary.

Corollary 2.3.5. Isomorphic mapping g (2.3.27) preserves all the sorts of information quantity between the pair of sets A, B ⊂ A ∪ B, whose union A ∪ B is considered a subalgebra B(A + B) of generalized Boolean algebra B(Ω) with signature (+, ·, −, O) of the type (2, 2, 2, 0):

    I_{A+B} = I_{A′+B′};    (2.3.29a)
    I_{A·B} = I_{A′·B′};    (2.3.29b)
    I_{A−B} = I_{A′−B′};    (2.3.29c)
    I_{A∆B} = I_{A′∆B′}.    (2.3.29d)

By the relationships (2.3.26a) through (2.3.26d) and (2.3.29a) through (2.3.29d), Corollaries 2.3.2 and 2.3.5 establish the invariance properties of the quantities of overall, mutual, particular relative, and relative information, respectively. Corollaries 2.3.3 and 2.3.4 define the invariance properties of the overall and relative quantities of information, respectively. Besides, Corollary 2.3.3 sets up the invariance property of the quantity of absolute information.
3 Informational Characteristics and Properties of Stochastic Processes

The physical nature of real signals participating in the processes of transmitting, receiving, and processing information can vary widely. However, despite the diversity of signals, there is a class of mathematical models embracing the main properties of real signals of various types. Such a class is represented, in the general case, by multivariate stochastic functions; within this work, however, the models of the signals are restricted to scalar stochastic processes only.

All fundamental inferences of information theory and signal processing theory are based on probabilistic-statistical descriptions of the signals, their interactions, and transformations. This approach, being closely interrelated with Boolean algebra, remains a determinative one and is used in this chapter and subsequent ones.
The essential role of the notion of space in recent natural science research was underlined in Section 1.1. The notion of space performs an important function in probability theory. Correct problem statements concerning a measure of closeness between random variables must specify what is to be understood by such a measure: closeness of their realizations on a probability space, closeness of the distribution functions of these random variables, and so on. All the results concerning distance evaluation upon sets of random variables, as well as the methods of their use for applied problems of various types, form the subject of the theory of probabilistic metric spaces [226].

Many publications are devoted to the investigation of metric characteristics of random variables and stochastic processes in metric spaces [226], [227], [228], [229], [230].
It was also noted in Section 1.1 that nonlinearity, as a general property of various sorts of matter, provides the diversity of material forms of the outer world. The processes of signal receiving and signal processing in electronic systems of various functionalities are not an exception in this sense. The use of invariance principles is considered one of the basic methods of describing nonlinear phenomena of various types.

This work describes invariants of groups of mappings of stochastic processes that characterize probabilistic-statistical interrelations between their instantaneous values (samples) and are based upon metric relations between them.


3.1 Normalized Function of Statistical Interrelationship of Stochastic Processes

A stochastic process ξ(t) is represented by a set of random variables under fixed values of the argument t, so the same probabilistic characteristics that are used for random variables are usually used for its description: probability density functions, cumulative distribution functions, characteristic functions, moment and cumulant functions [114], [115], [113].
To describe the characteristics of a pair of stochastic processes ξ(t) and η(t′), we shall use different designations of the time parameters t and t′, assuming that these stochastic processes exist in different reference systems connected by location b and scale a (a ≠ 0) transformations:

    t′ = a · t + b;  t = (t′ − b)/a.    (3.1.1)

Normalized correlation function rξ(t1, t2) and normalized cross-correlation function rξη(t1, t′2) characterize only linear statistical interrelationships (dependences) of random variables, i.e., of the samples ξ(t1), ξ(t2) and ξ(t1), η(t′2) of stochastic processes ξ(t) and η(t′) at the instants t1, t2, t′2, respectively. The correlation ratio (normalized variance function), considered a more complete characteristic of statistical interrelation between two samples of a stochastic process, is defined by the following expression [231], [232]:

    θ²ξ(t1, t2) = (1/Dξ(t1)) M_{ξ(t2)}{[M{ξ(t1)/ξ(t2)} − M{ξ(t1)}]²},

where Dξ(t) is the variance of the stochastic process ξ(t); M(∗) denotes the symbol of expectation; M{ξ(t)} is the expectation of the stochastic process ξ(t); M{ξ(t1)/ξ(t2)} is the conditional expectation of the stochastic process ξ(t);

    M_{ξ(t2)}{[M{ξ(t1)/ξ(t2)} − M{ξ(t1)}]²} = ∫_{−∞}^{∞} [M{ξ(t1)/ξ(t2)} − M{ξ(t1)}]² p(x2, t2) dx2,

where p(x, t) is a probability density function (PDF).


It should be noted that the correlation ratio does not possess the symmetry property, θξ(t1, t2) ≠ θξ(t2, t1), and it is not invariant with respect to one-to-one mappings of stochastic processes, inasmuch as it is not preserved under such mappings.

To define a complete measure of statistical interrelationship between two samples ξ(tj) and ξ(tk) of a stochastic process ξ(t), and also to define an invariant of bijective mappings of a stochastic process, it is logical to introduce the function

ψξ(tj, tk), which is characterized by the metric d(p, p0) between the joint PDF of the samples, p2(xj, xk; tj, tk), and the product of their univariate PDFs p1(xj, tj), p1(xk, tk):

    ψξ(tj, tk) = d(p, p0),    (3.1.2)

where the metric d(p, p0) is defined by the following expression:

    d(p, p0) = (1/2) ∫_{−∞}^{∞} ∫_{−∞}^{∞} |p2(xj, xk; tj, tk) − p1(xj, tj) p1(xk, tk)| dxj dxk.    (3.1.2a)

The index 0 attached to p0 indicates that the product of univariate PDFs corresponds to the bivariate PDF of statistically independent samples.

Definition 3.1.1. The function ψξ (tj , tk ), which is defined by the expression (3.1.2)
and characterizes a measure of statistical interrelationship between two samples
ξ (tj ), ξ (tk ) of stochastic process ξ (t), is called the normalized function of statistical
interrelationship (NFSI).

The NFSI ψξ(tj, tk) of a Gaussian stochastic process, as follows from (3.1.2), is entirely defined by the normalized correlation function rξ(tj, tk):

    ψξ(tj, tk) = g[rξ(tj, tk)],    (3.1.3)

where g[x] is some deterministic function, which is rather exactly (with an error not larger than 1% in the interval x ∈ [0, 2/3] and not larger than 10% in the interval x ∈ [2/3, 1]) approximated by the following expression:

    g[x] ≈ 1 − √(1 − (2/π) arcsin[|x|]).    (3.1.4)
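The quality of the approximation (3.1.4) can be checked by evaluating the metric (3.1.2a) numerically for a bivariate Gaussian PDF. The following sketch (NumPy assumed; simple grid quadrature, not a high-precision scheme, with illustrative function names) compares the two for several values of the normalized correlation:

import numpy as np

def g(x):                                      # approximation (3.1.4)
    return 1.0 - np.sqrt(1.0 - (2.0 / np.pi) * np.arcsin(abs(x)))

def nfsi_gauss(r, lim=6.0, n=801):             # direct evaluation of (3.1.2a)
    x = np.linspace(-lim, lim, n)
    X, Y = np.meshgrid(x, x)
    p1x = np.exp(-X**2 / 2) / np.sqrt(2 * np.pi)       # univariate PDFs p1
    p1y = np.exp(-Y**2 / 2) / np.sqrt(2 * np.pi)
    q = (X**2 - 2 * r * X * Y + Y**2) / (2 * (1 - r**2))
    p2 = np.exp(-q) / (2 * np.pi * np.sqrt(1 - r**2))  # joint Gaussian PDF p2
    h = x[1] - x[0]
    return 0.5 * float(np.sum(np.abs(p2 - p1x * p1y))) * h * h

for r in (0.3, 0.6, 0.9):
    print(r, round(nfsi_gauss(r), 3), round(g(r), 3))  # agreement within stated error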
In a similar manner, we define a measure of statistical interrelationship between two samples ξ(tj) and η(t′k) of stochastic processes ξ(t) and η(t′), introducing the function:

    ψξη(tj, t′k) = dξη(p, p0),    (3.1.5)

    dξη(p, p0) = (1/2) ∫_{−∞}^{∞} ∫_{−∞}^{∞} |p2(xj, yk; tj, t′k) − p1(xj, tj) p1(yk, t′k)| dxj dyk,    (3.1.5a)

where dξη(p, p0) is the metric between the joint PDF p2(xj, yk; tj, t′k) of the samples ξ(tj), η(t′k) and the product of their univariate PDFs p1(xj, tj) and p1(yk, t′k).

Definition 3.1.2. The function ψξη (tj , t0k ), which is defined by (3.1.5) and charac-
terizes a measure of statistical interrelationship between two samples ξ (tj ), η (t0k ) of
stochastic processes ξ (t) and η (t0 ), respectively, is called mutual normalized function
of statistical interrelationship (mutual NFSI).

Mutual NFSI ψξη(tj, t′k) of a pair of Gaussian stochastic processes ξ(t) and η(t′), as follows from Equation (3.1.5), is defined by the normalized cross-correlation function rξη(tj, t′k):

    ψξη(tj, t′k) = g[rξη(tj, t′k)],    (3.1.6)

where the function g[x] is approximated by Equation (3.1.4).


The main properties of NFSI and mutual NFSI are determined by informational properties of stochastic processes and will be considered in the next section.

We shall call the function dξη(tj, t′k), connected with the metric dξη(p, p0) by the relationship:

    dξη(tj, t′k) = 1 − dξη(p, p0),    (3.1.7)

the metric between two samples ξ(tj) and η(t′k) of stochastic processes ξ(t) and η(t′), respectively. Then, according to the formulas (3.1.5) and (3.1.7), the functions ψξη(tj, t′k) and dξη(tj, t′k) are connected by the identity:

    ψξη(tj, t′k) + dξη(tj, t′k) = 1.    (3.1.8)

The components ψξη(tj, t′k) and dξη(tj, t′k) appearing in formula (3.1.8) are a closeness measure and a distance measure between two samples ξ(tj) and η(t′k) of stochastic processes ξ(t) and η(t′), respectively. Using these functions, one can introduce similar measures for a pair of stochastic processes ξ(t) and η(t′) by determining the quantities ψξη and dξη in the following way:

    ψξη = sup_{tj, t′k ∈ ]−∞,∞[} ψξη(tj, t′k);    (3.1.9a)
    dξη = inf_{tj, t′k ∈ ]−∞,∞[} dξη(tj, t′k).    (3.1.9b)

We thus have a closeness measure and a distance measure for a pair of stochastic processes ξ(t) and η(t′). The quantity ψξη determined by the formula (3.1.9a) we shall call the coefficient of statistical interrelation, and the quantity dξη determined by the formula (3.1.9b) we shall call the metric between stochastic processes ξ(t) and η(t′). The coefficient of statistical interrelation and the metric between stochastic processes ξ(t) and η(t′) are connected by a relationship similar to (3.1.8):

    ψξη + dξη = 1.    (3.1.10)

Based on the known and introduced characteristics of stochastic processes, one can classify them taking into account the constraints imposed upon their probabilistic characteristics. A stochastic process is considered stationary in the narrow sense if its NFSI is determined exclusively by the time difference τ = t2 − t1 between a pair of its samples.

The Fourier transform F[∗] of the NFSI ψξ(τ) of a stationary stochastic process ξ(t) allows us to define a new characteristic.

Definition 3.1.3. The Fourier transform F of the NFSI ψξ(τ) of a stationary stochastic process ξ(t) is called the hyperspectral density σξ(ω):

    σξ(ω) = F[ψξ(τ)] = (1/2π) ∫_{−∞}^{∞} ψξ(τ) exp[−iωτ] dτ.    (3.1.11)

According to (3.1.11),

    σξ(0) = (1/2π) ∫_{−∞}^{∞} ψξ(τ) dτ.    (3.1.12)

The inverse Fourier transform F⁻¹ allows restoring the NFSI ψξ(τ) of a stochastic process ξ(t) on the base of its hyperspectral density σξ(ω):

    ψξ(τ) = F⁻¹[σξ(ω)] = ∫_{−∞}^{∞} σξ(ω) exp[iτω] dω.    (3.1.13)

Enumerate the main properties of the hyperspectral density (HSD) σξ(ω) of a stationary stochastic process ξ(t):

1. σξ(ω) is a continuous and real function.
2. σξ(ω) is a nonnegative function: σξ(ω) ≥ 0.
3. σξ(ω) is an even function: σξ(ω) = σξ(−ω).
4. σξ(ω) is a normalized function: ∫_{−∞}^{∞} σξ(ω) dω = 1.

For the NFSI ψξ(τ) of a stationary stochastic process ξ(t), we introduce a quantity ∆τ characterizing the effective width of its NFSI:

    ∆τ: ψξ(0) · ∆τ = ∫_{0}^{∞} ψξ(τ) dτ;

taking into account that ψξ(0) = 1 (by (3.1.13) and the normalization property 4), this relationship implies that the effective width of the NFSI is equal to:

    ∆τ = ∫_{0}^{∞} ψξ(τ) dτ.    (3.1.14)

For the HSD σξ(ω) of a stationary stochastic process ξ(t), we introduce a quantity ∆ω characterizing the effective width of its HSD:

    ∆ω: σξ(0) · ∆ω = ∫_{0}^{∞} σξ(ω) dω;

thus, this relationship implies that the effective width of the HSD is equal to:

    ∆ω = (1/σξ(0)) ∫_{0}^{∞} σξ(ω) dω = 1/(2σξ(0)).    (3.1.15)

The product of the effective width ∆τ of the NFSI (3.1.14) and the effective width ∆ω of the HSD (3.1.15) of a stationary stochastic process ξ(t), taking into account the relationship (3.1.12), is equal to:

    ∆τ · ∆ω = (1/(2σξ(0))) ∫_{0}^{∞} ψξ(τ) dτ = π/2.    (3.1.16)

The expression (3.1.16) characterizes the known uncertainty relation for functions connected by the Fourier transform: the larger the effective width ∆τ of the NFSI, the smaller the effective width ∆ω of the HSD, and vice versa.
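As an illustration of (3.1.14) through (3.1.16), assume the exponential model ψξ(τ) = exp(−|τ|/τc) (an assumption for the sake of the example, not a result of the text); then (3.1.11) gives σξ(ω) = τc/[π(1 + ω²τc²)] in closed form, ∆τ = τc, σξ(0) = τc/π, ∆ω = π/(2τc), and the product is exactly π/2. A short numeric check (NumPy assumed):

import numpy as np

tau_c = 2.0                                    # assumed NFSI width parameter
tau = np.linspace(0.0, 60.0 * tau_c, 200001)
psi = np.exp(-tau / tau_c)                     # assumed NFSI model ψ_ξ(τ)

step = tau[1] - tau[0]
d_tau = np.sum(psi) * step                     # Δτ = ∫₀^∞ ψ_ξ(τ)dτ, (3.1.14)
sigma0 = d_tau / np.pi                         # σ_ξ(0) = (1/π)∫₀^∞ ψ_ξ(τ)dτ, by (3.1.12)
d_omega = 1.0 / (2.0 * sigma0)                 # Δω, (3.1.15)

print(d_tau * d_omega, np.pi / 2)              # uncertainty relation (3.1.16): ≈ π/2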
Let f be a bijective mapping of a stochastic process ξ(t), determined in the interval Tξ = [t0, t∗], into a stochastic process η(t′), determined in the interval Tη = [t′0, t′∗], from the group of mappings G, f ∈ G, assuming the inverse mapping f⁻¹ exists:

    f: ξ(t) → η(t′);  f⁻¹: η(t′) → ξ(t);  f⁻¹ · f = 1,    (3.1.17)

where 1 is the unity of the group G. Then the following theorem holds.

Theorem 3.1.1. Theorem on invariance of NFSI ψξ(tj, tk) under bijective mapping of a stochastic process. The bijective mapping (3.1.17), which maps a stochastic process ξ(t) into a process η(t′), preserves its NFSI:

    ψξ(tj, tk) = ψη(t′j, t′k),

where ψξ(tj, tk), ψη(t′j, t′k) are the NFSIs of stochastic processes ξ(t) and η(t′), respectively.

Proof. We again describe the probabilistic characteristics of a pair of stochastic processes ξ(t) and η(t′) by using different designations of the time parameters t and t′, assuming that these stochastic processes exist in different reference systems. According to Definition 3.1.1, the NFSI is the function:

    ψξ(tj, tk) = d(p, p0),    (3.1.18)

    d(p, p0) = (1/2) ∫_{−∞}^{∞} ∫_{−∞}^{∞} |pξ(xj, xk; tj, tk) − pξ(xj, tj) pξ(xk, tk)| dxj dxk,    (3.1.19)

where d(p, p0) is the metric between the joint PDF pξ(xj, xk; tj, tk) of the samples ξ(tj), ξ(tk) and the product of their univariate PDFs pξ(xj, tj), pξ(xk, tk).

We know that under bijective mappings of random variables:

    f: ξ(tj,k) → η(t′j,k),

the invariance property of the probability differential holds:

    pξ(xj, xk; tj, tk) dxj dxk = pη(yj, yk; t′j, t′k) dyj dyk;
    pξ(xj, tj) dxj = pη(yj, t′j) dyj;  pξ(xk, tk) dxk = pη(yk, t′k) dyk.

These relations imply that under bijective mappings of stochastic processes (3.1.17), the metric (3.1.19) is preserved, and therefore the NFSI (3.1.18) is preserved too:

    ψξ(tj, tk) = ψη(t′j, t′k).    (3.1.20)

Corollary 3.1.1. Under the mapping (3.1.17) preserving the domain of definition of the stochastic process, Tξ = [t0, t∗] = Tη, the identity holds:

    ψξ(tj, tk) = ψη(tj, tk).    (3.1.21)

Corollary 3.1.2. Under the bijective mapping (3.1.17) of a stationary stochastic process ξ(t), when the condition (3.1.21) holds, besides the NFSI being preserved, ψξ(τ) = ψη(τ), the HSD σξ(ω) is also preserved:

    σξ(ω) = F[ψξ(τ)] = F[ψη(τ)] = ση(ω),    (3.1.22)

where F[∗] denotes the Fourier transform.

Thus, under a bijective mapping of the stationary stochastic process ξ(t) into the stochastic process η(t), their NFSI and HSD are preserved.
Example 3.1.1. If ξ(t) is a Gaussian stochastic process, then the notions of NFSI ψξ(tj, tk) and normalized correlation function rξ(tj, tk) of a pair of samples ξ(tj) and ξ(tk) coincide up to some deterministic function g[x] (see (3.1.3)):

    ψξ(tj, tk) = g[rξ(tj, tk)],

where the function g[x] is approximated by (3.1.4).


Example 3.1.2. The extension of power spectral density Sξ(ω) under a nonlinear mapping of a stationary stochastic process ξ(t) is well known in statistical radiophysics (radioengineering):

    f: ξ(t) → η(t);  f⁻¹: η(t) → ξ(t).

This means that under a nonlinear mapping f of the stochastic process ξ(t), the power spectral density Sη(ω) of the process η(t) is distributed within a wider frequency band than the power spectral density Sξ(ω) of the process ξ(t).

Similarly, if one more nonlinear mapping of the stochastic process η(t), h: η(t) → η∗(t), is realized, then the power spectral density Sη∗(ω) of the process η∗(t) is distributed within a wider frequency band than the power spectral density Sη(ω) of the process η(t). However, if the stochastic process η(t) is transformed with the function h inverse with respect to the initial mapping f (h = f⁻¹), then we obtain the initial stochastic process ξ(t): η∗(t) = ξ(t).

It is obvious that the power spectral densities of the processes η∗(t) and ξ(t) should be identically equal, Sη∗(ω) = Sξ(ω), but this contradicts the initial statement concerning the extension of power spectral density under a nonlinear mapping of a stochastic process.

The obtained paradox can be easily elucidated with the use of the HSD σξ(ω) of the stochastic process ξ(t) and Corollary 3.1.2 of Theorem 3.1.1 on its invariance under one-to-one transformations of a stochastic process. In the mapping f: ξ(t) → η(t), we have an identity between the HSDs of the initial ξ(t) and the resultant η(t) processes, σξ(ω) = ση(ω), and under the mapping h: η(t) → η∗(t), a similar identity between the HSDs of the processes η(t) and η∗(t) holds: ση(ω) = ση∗(ω). Thus, the HSD ση∗(ω) of the stochastic process η∗(t) is identically equal to the HSD σξ(ω) of the initial stochastic process ξ(t), ση∗(ω) = σξ(ω), and if the secondary mapping is inverse with respect to the initial one, h = f⁻¹, then no paradoxical conclusions appear.

3.2 Normalized Measure of Statistical Interrelationship of Stochastic Processes

In Section 3.1, the normalized function of statistical interrelationship (NFSI) and mutual NFSI were introduced by Definitions 3.1.1 and 3.1.2, respectively. NFSI and mutual NFSI characterize a measure of statistical interrelationship between two instantaneous values (samples) of stochastic signals (processes) at distinct moments of time. This measure is based on metric relationships between the samples of stochastic signals (processes) and is defined by a metric between the joint probability density function (PDF) and the product of the univariate PDFs of the samples; accurate values of NFSI and mutual NFSI can thus be obtained only if the distributions of the samples are known. Nevertheless, for practical applications it is often necessary to know the amount of dependence between the instantaneous values of two distinct stochastic signals (processes) interacting at the same instant, while information concerning the distributions of the processed signals is absent. In order to establish such a characteristic of a pair of interacting samples of two stochastic signals (processes), it is useful to take into account the considerations listed below.

Any pair of stochastic signals (processes) ξ(t), η(t) may be considered a partially ordered set Γ, where for two instantaneous values (samples) ξt1 = ξ(t1), ηt2 = η(t2) of the processes ξ(t), η(t) ∈ Γ at each instant t ∈ T, the relation of order ξt ≤ ηt (or ξt ≥ ηt) is defined.

The partially ordered set Γ is a lattice with operations of least upper bound and greatest lower bound (of join and meet), respectively, ξt ∨ ηt = sup_Γ{ξt, ηt}, ξt ∧ ηt = inf_Γ{ξt, ηt}, and if ξt ≤ ηt, then ξt ∧ ηt = ξt and ξt ∨ ηt = ηt [221], [223]:

    ξt ≤ ηt  ⇔  ξt ∧ ηt = ξt  and  ξt ∨ ηt = ηt.

Upon the partially ordered set Γ, we can naturally define an operation of addition ξt + ηt between two samples ξt, ηt of the processes ξ(t), η(t) ∈ Γ at each instant t ∈ T. Then the partially ordered set Γ becomes a lattice-ordered group Γ(+, ∨, ∧) (or L-group).

In any L-group, the following statements hold [221], [233]:

1. Γ(+) is an additive group.
2. Γ(∨, ∧) is a lattice.
3. For arbitrary elements ξt, ηt, at, bt from Γ(+, ∨, ∧), the following identities hold:

    at + (ξt ∧ ηt) + bt = (at + ξt + bt) ∧ (at + ηt + bt);
    at + (ξt ∨ ηt) + bt = (at + ξt + bt) ∨ (at + ηt + bt).

There exists a neutral element (zero) 0 in the additive group Γ(+) of the L-group Γ(+, ∨, ∧), 0 ∈ Γ(+), such that for every x ∈ Γ(+) the inverse element (−x) exists and the identity x + (−x) = 0 holds.
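For real-valued samples ordered by their numerical values, the join and meet reduce to the pointwise maximum and minimum. A minimal sketch (plain Python, illustrative function names) of the operations used throughout this section, with a spot check of the identities in statement 3:

def join(x, y):          # ξ_t ∨ η_t = sup{ξ_t, η_t}
    return max(x, y)

def meet(x, y):          # ξ_t ∧ η_t = inf{ξ_t, η_t}
    return min(x, y)

# spot check of statement 3 for arbitrary real samples a, x, y, b
a, x, y, b = 0.7, -1.2, 0.4, 2.0
assert a + meet(x, y) + b == meet(a + x + b, a + y + b)
assert a + join(x, y) + b == join(a + x + b, a + y + b)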
Most signal processing problems deal with stochastic signals (processes) ξ(t), η(t) with symmetric (even) univariate PDFs pξ(x), pη(y) of the following kind: pξ(x) = pξ(−x); pη(y) = pη(−y). The characteristics of statistical interrelation of a pair of instantaneous values (samples) ξt, ηt of stochastic signals (processes) ξ(t), η(t) ∈ Γ introduced in this section, and the main results formulated in the form of theorems, are oriented predominantly toward the class of signals with exactly these properties, i.e., with even univariate PDFs pξ(x), pη(y); these signals (processes) interact in partially ordered sets with the properties of a lattice-ordered group Γ(+, ∨, ∧) (L-group).

Theorem 3.2.1. For the samples ξt, ηt of stochastic processes ξ(t), η(t) interacting in an L-group Γ, ξt, ηt ∈ Γ, the functions µ(ξt, ηt), µ′(ξt, ηt) equal to:

    µ(ξt, ηt) = 2(P[ξt ∨ ηt > 0] − P[ξt ∧ ηt > 0]);    (3.2.1)
    µ′(ξt, ηt) = 2(P[ξt ∨ ηt < 0] − P[ξt ∧ ηt < 0]),    (3.2.1a)

are metrics.

Proof. Consider the probabilities P[ξt ∧ ηt > 0], P[ξt ∨ ηt > 0], P[ξt ∧ ηt < 0], P[ξt ∨ ηt < 0], which, according to the formulas [115, (3.2.80)] and [115, (3.2.85)], are equal to:

    P[ξt ∧ ηt > 0] = 1 − (Fξ(0) + Fη(0) − Fξη(0, 0));    (3.2.2a)
    P[ξt ∨ ηt > 0] = 1 − Fξη(0, 0);    (3.2.2b)
    P[ξt ∧ ηt < 0] = Fξ(0) + Fη(0) − Fξη(0, 0);    (3.2.2c)

    P[ξt ∨ ηt < 0] = Fξη(0, 0),    (3.2.2d)

where Fξη(x, y) is the joint cumulative distribution function (CDF) of the samples ξt, ηt; Fξ(x), Fη(y) are the univariate CDFs of the samples ξt, ηt.

The function P[ξt > 0] defined on the lattice Γ is a valuation, since the identity holds [221, § X.1 (V1)]:

    P[ξt > 0] + P[ηt > 0] = P[ξt ∨ ηt > 0] + P[ξt ∧ ηt > 0].    (3.2.3)

The valuation P[ξt > 0] is isotonic, inasmuch as the implication holds [221, § X.1 (V2)]:

    ξt ≥ ξ′t ⇒ P[ξt > 0] ≥ P[ξ′t > 0].    (3.2.4)

Joint fulfillment of the equations (3.2.3) and (3.2.4), according to Theorem 1 [221, § X.1], implies that the quantity µ(ξt, ηt) equal to:

    µ(ξt, ηt) = 2(P[ξt ∨ ηt > 0] − P[ξt ∧ ηt > 0]) = 2(Fξ(0) + Fη(0)) − 4Fξη(0, 0),    (3.2.5)

is a metric.

Substituting Equations (3.2.2c) and (3.2.2d) into the formula (3.2.1a), we obtain the expression for the function µ′(ξt, ηt):

    µ′(ξt, ηt) = 2(Fξ(0) + Fη(0)) − 4Fξη(0, 0),    (3.2.6)

i.e., the functions µ(ξt, ηt), µ′(ξt, ηt) defined by Equations (3.2.1) and (3.2.1a) are identically equal:

    µ(ξt, ηt) = µ′(ξt, ηt),

and the function µ′(ξt, ηt) is also a metric.

Definition 3.2.1. The quantity µ(ξt, ηt) defined by Equation (3.2.1) is called the metric between two samples ξt, ηt of stochastic processes ξ(t), η(t) interacting in an L-group Γ.

Thus, the partially ordered set Γ with operations χ(t) = ξ(t) ∨ η(t), χ̃(t) = ξ(t) ∧ η(t), t ∈ T is a metric space (Γ, µ) with respect to the metric µ (3.2.1). Then any pair of stochastic processes ξ(t), η(t) ∈ Γ with even univariate PDFs can be associated with the following normalized measure between the samples ξt, ηt.

Definition 3.2.2. Normalized measure of statistical interrelationship (NMSI) between the samples ξt, ηt of stochastic processes ξ(t), η(t) with even univariate PDFs is the quantity ν(ξt, ηt) equal to:

    ν(ξt, ηt) = 1 − µ(ξt, ηt),    (3.2.7)

where µ(ξt, ηt) is the metric between the two samples ξt, ηt of stochastic processes ξ(t), η(t), determined by Equation (3.2.1).

The last relationship and formula (3.2.5) together imply that NMSI ν(ξt, ηt) is determined by the joint CDF Fξη(x, y) and the univariate CDFs Fξ(x), Fη(y) of the samples ξt, ηt:

    ν(ξt, ηt) = 1 + 4Fξη(0, 0) − 2(Fξ(0) + Fη(0)).    (3.2.8)

Theorem 3.2.2. For a pair of stochastic processes ξ(t), η(t) with even univariate PDFs in an L-group Γ: ξ(t), η(t) ∈ Γ, t ∈ T, the metric µ(ξt, ηt) between the samples ξt, ηt is an invariant of a group H of continuous mappings {hα,β}, hα,β ∈ H; α, β ∈ A of stochastic processes, which preserve zero (the neutral/null element) 0 of the group Γ(+): hα,β(0) = 0:

    µ(ξt, ηt) = µ(ξ′t, η′t);    (3.2.9)
    hα: ξ(t) → ξ′(t),  hβ: η(t) → η′(t);    (3.2.9a)
    hα⁻¹: ξ′(t) → ξ(t),  hβ⁻¹: η′(t) → η(t),    (3.2.9b)

where ξ′t, η′t are the samples of stochastic processes ξ′(t), η′(t) in the L-group Γ′: hα,β: Γ → Γ′.

Proof. Under the bijective mappings {hα,β}, hα,β ∈ H (3.2.9a), (3.2.9b), the invariance property of the probability differential holds, which implies the identities between the joint CDFs Fξη(x, y), Fξ′η′(x′, y′) and the univariate CDFs Fξ(x), Fξ′(x′); Fη(y), Fη′(y′) of the pairs of samples ξt, ηt and ξ′t, η′t, respectively:

    Fξ′η′(x′, y′) = Fξη(x, y);    (3.2.10)
    Fξ′(x′) = Fξ(x);    (3.2.10a)
    Fη′(y′) = Fη(y).    (3.2.10b)

Thus, taking into account (3.2.5), Equations (3.2.10), (3.2.10a), and (3.2.10b) imply the identity (3.2.9).

Corollary 3.2.1. For a pair of stochastic processes ξ(t), η(t) with even univariate PDFs in an L-group Γ: ξ(t), η(t) ∈ Γ, t ∈ T, NMSI ν(ξt, ηt) of the samples ξt, ηt is an invariant of a group H of continuous mappings {hα,β}, hα,β ∈ H; α, β ∈ A of stochastic processes, which preserve zero 0 of the group Γ(+): hα,β(0) = 0:

    ν(ξt, ηt) = ν(ξ′t, η′t);    (3.2.11)
    hα: ξ(t) → ξ′(t),  hβ: η(t) → η′(t);    (3.2.11a)
    hα⁻¹: ξ′(t) → ξ(t),  hβ⁻¹: η′(t) → η(t),    (3.2.11b)

where ξ′t and η′t are the samples of stochastic processes ξ′(t), η′(t) in the L-group Γ′: hα,β: Γ → Γ′.

Theorem 3.2.3. For a pair of Gaussian centered stochastic processes ξ(t) and η(t) with correlation coefficient ρξη between the samples ξt, ηt, NMSI ν(ξt, ηt) is equal to:

    ν(ξt, ηt) = (2/π) arcsin[ρξη].    (3.2.12)

Proof. Find an expression for the joint CDF Fξη(x, y) of the processes ξ(t) and η(t) at the point x = 0, y = 0, which, according to Equation (12) of Appendix II of the work [113], is determined by the double integral K00(α):

    Fξη(0, 0) = (√(1 − ρ²ξη) / π) K00(α),

where α = π − arccos(ρξη), K00(α) = α/(2 sin α) (see Equation (14) of Appendix II in [113]), and sin α = √(1 − ρ²ξη). Then, after the necessary transformations, we obtain the resultant expression for Fξη(0, 0):

    Fξη(0, 0) = (1 + (2/π) arcsin[ρξη]) / 4.

Substituting the last expression into (3.2.8), we obtain the required identity (3.2.12).
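Relation (3.2.12) is easy to cross-check by Monte Carlo, since the metric (3.2.1) needs only the sign statistics of the joined and met samples. A sketch (NumPy assumed; sample size and seed are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)

def nmsi_mc(rho, n=10**6):
    xi = rng.standard_normal(n)                # samples ξ_t
    eta = rho * xi + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # η_t, corr. ρ
    mu = 2 * (np.mean(np.maximum(xi, eta) > 0)     # P[ξ_t ∨ η_t > 0]
              - np.mean(np.minimum(xi, eta) > 0))  # P[ξ_t ∧ η_t > 0], cf. (3.2.1)
    return 1 - mu                              # ν(ξ_t, η_t), cf. (3.2.7)

for rho in (-0.5, 0.0, 0.5, 0.9):
    print(rho, round(nmsi_mc(rho), 3), round(2 / np.pi * np.arcsin(rho), 3))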

Joint fulfillment of the relationships (3.1.18) of Theorem 3.1.1 and (3.2.12) of Theorem 3.2.3, together with Equation (3.1.6), implies that mutual NFSI ψξη(t, t) and NMSI ν(ξt, ηt) between two samples ξt and ηt of stochastic processes ξ(t) and η(t), which are introduced by Definitions 3.1.2 and 3.2.2, respectively, are connected by the approximate equality:

    ψξη(t, t) ≈ 1 − √(1 − ν(ξt, ηt)).

Theorem 3.2.4. For Gaussian centered stochastic processes ξ(t) and η(t), which additively interact in an L-group Γ: χ(t) = ξ(t) + η(t), t ∈ T, the following relationship between NMSIs ν(ξt, χt), ν(ηt, χt), and ν(ξt, ηt) of the corresponding pairs of their samples ξt, χt; ηt, χt; ξt, ηt holds:

    ν(ξt, χt) + ν(ηt, χt) − ν(ξt, ηt) = 1.    (3.2.13)

Proof. Let q² = Dη/Dξ be the ratio of the variances Dη, Dξ, and let ρξη be the correlation coefficient between the samples ξt, ηt of Gaussian processes ξ(t), η(t). Then the correlation coefficients ρξχ, ρηχ of the pairs of samples ξt, χt and ηt, χt are correspondingly equal to:

    ρξχ = (1 + ρξη q) / √(1 + 2ρξη q + q²),    (3.2.14a)
    ρηχ = (q + ρξη) / √(1 + 2ρξη q + q²).    (3.2.14b)

For Gaussian stochastic signals, NMSI is defined by Equation (3.2.12), so the identity holds:

    ν(ξt, χt) + ν(ηt, χt) − ν(ξt, ηt) = (2/π) arcsin[ρξχ] + (2/π) arcsin[ρηχ] − (2/π) arcsin[ρξη].    (3.2.15)

Using the relationships [234, (I.3.5)], it is easy to obtain the sum of the first two items of the right side of Equation (3.2.15):

    (2/π) arcsin[ρξχ] + (2/π) arcsin[ρηχ] = (2/π)(π − arcsin[c]) = (2/π)(π − arcsin[√(1 − ρ²ξη)]),

where c = ρξχ √(1 − ρ²ηχ) + ρηχ √(1 − ρ²ξχ).

Substituting the sum of arcsines into the right side of (3.2.15), we calculate the sum of NMSIs of the pairs of samples ξt, χt; ηt, χt; ξt, ηt:

    ν(ξt, χt) + ν(ηt, χt) − ν(ξt, ηt) = (2/π)(π − arcsin[√(1 − ρ²ξη)]) − (2/π) arcsin[ρξη] = 1.
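The invariance (3.2.13) can also be verified directly from (3.2.14a), (3.2.14b), and (3.2.12) for arbitrary variance ratios q² and correlation coefficients. A brief numeric sweep (NumPy assumed; parameter values are arbitrary):

import numpy as np

def nmsi(rho):                                 # NMSI of a Gaussian pair, (3.2.12)
    return 2 / np.pi * np.arcsin(rho)

for q in (0.3, 1.0, 2.5):                      # q^2 = D_eta / D_xi
    for rho in (-0.7, 0.0, 0.4, 0.9):
        den = np.sqrt(1 + 2 * rho * q + q**2)
        rho_xc = (1 + rho * q) / den           # ρ_ξχ, (3.2.14a)
        rho_ec = (q + rho) / den               # ρ_ηχ, (3.2.14b)
        print(q, rho, round(nmsi(rho_xc) + nmsi(rho_ec) - nmsi(rho), 12))  # always 1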

Theorem 3.2.4 has the following corollary.

Corollary 3.2.2. For Gaussian centered stochastic processes ξ(t), η(t), which additively interact in the partially ordered set Γ: χ(t) = ξ(t) + η(t), t ∈ T, and their corresponding pairs of the samples ξt, ηt; ξt, χt; χt, ηt, the metric identity holds:

    µ(ξt, ηt) = µ(ξt, χt) + µ(χt, ηt).    (3.2.16)

Joint fulfillment of the identities (3.2.7) and (3.2.13) implies this metric identity. Thus, Theorem 3.2.4 establishes the invariance relationship (3.2.13) for NMSIs of the pairs of the samples ξt, χt; ηt, χt; ξt, ηt of additively interacting Gaussian stochastic processes ξ(t), η(t). This identity does not depend on their energetic characteristics, despite the fact that NMSIs ν(ξt, χt) and ν(ηt, χt) are functions of the variances Dξ and Dη of the samples ξt and ηt of the Gaussian processes ξ(t) and η(t).
The results (3.2.13) and (3.2.16) of Theorem 3.2.4 can be generalized to additively interacting processes ξ(t) and η(t) without demanding Gaussianity of their distributions. This generalization is provided by the following theorem.

Theorem 3.2.5. For non-Gaussian stochastic processes ξ(t) and η(t) with even univariate PDFs, which additively interact with each other in an L-group Γ: χ(t) = ξ(t) + η(t), t ∈ T, and for their corresponding pairs of the samples ξt, ηt; ξt, χt; χt, ηt, the metric identity holds:

    µ(ξt, ηt) = µ(ξt, χt) + µ(χt, ηt).    (3.2.17)

Proof. According to (3.2.5), the metric µ(ξt, ηt) is equal to:

    µ(ξt, ηt) = 2(Fξ(0) + Fη(0)) − 4Fξη(0, 0).

The metrics µ(ξt, χt) and µ(ηt, χt), according to the identity (3.2.5), are determined by the following relationships:

    µ(ξt, χt) = 2(Fξ(0) + Fχ(0)) − 4Fξχ(0, 0);
    µ(ηt, χt) = 2(Fη(0) + Fχ(0)) − 4Fηχ(0, 0).

Note that under the additive interaction χ(t) = ξ(t) + η(t), the following relationship between the CDFs of the samples ξt, ηt, χt holds:

    Fξχ(0, 0) + Fηχ(0, 0) = Fξη(0, 0) + Fχ(0).

Taking into account the last identity, the sum of the metrics µ(ξt, χt), µ(ηt, χt) is equal to:

    µ(ξt, χt) + µ(ηt, χt) = 2(Fξ(0) + Fη(0)) − 4Fξη(0, 0) = µ(ξt, ηt).

Corollary 3.2.3. For non-Gaussian stochastic processes ξ(t) and η(t) with even univariate PDFs, which additively interact with each other in an L-group Γ: χ(t) = ξ(t) + η(t), t ∈ T, the following relationship between NMSIs ν(ξt, ηt), ν(ξt, χt), ν(ηt, χt) of the corresponding pairs of the samples ξt, ηt; ξt, χt; ηt, χt holds:

    ν(ξt, χt) + ν(ηt, χt) − ν(ξt, ηt) = 1.    (3.2.18)

The metric identity (3.2.17) implies the conclusion that the samples ξt, χt, ηt of stochastic processes ξ(t), χ(t), η(t), respectively, lie on the same line in the metric space (Γ, µ).

For stochastic processes ξ(t) and η(t) with even univariate PDFs pξ(x) and pη(y), pξ(x) = pξ(−x), pη(y) = pη(−y), which interact in an L-group Γ with operations of join χ(t) = ξ(t) ∨ η(t) and meet χ̃(t) = ξ(t) ∧ η(t), t ∈ T, there are some peculiarities of the metrics and NMSIs of the corresponding pairs of their samples, established by the following theorems.

Theorem 3.2.6. For the samples ξt and ηt of stochastic processes ξ(t) and η(t) with even univariate PDFs, which interact in an L-group Γ with operations of join and meet, respectively, χ(t) = ξ(t) ∨ η(t), χ̃(t) = ξ(t) ∧ η(t), t ∈ T, the functions µ(ξt, χt) and µ(ξt, χ̃t), equal according to (3.2.1) to:

    µ(ξt, χt) = 2(P[ξt ∨ χt > 0] − P[ξt ∧ χt > 0]);    (3.2.19a)
    µ(ξt, χ̃t) = 2(P[ξt ∨ χ̃t > 0] − P[ξt ∧ χ̃t > 0]),    (3.2.19b)

are metrics between the corresponding samples ξt, χt and ξt, χ̃t.

The proof of the theorem is realized by testing the valuation identity (3.2.3) and the isotonicity condition for the functions (3.2.19a) and (3.2.19b).

Theorem 3.2.7. For stochastic processes ξ(t) and η(t) with even univariate PDFs, which interact in an L-group Γ with operations of join and meet, respectively, χ(t) = ξ(t) ∨ η(t), χ̃(t) = ξ(t) ∧ η(t), t ∈ T, between the metrics µ(ξt, χt), µ(ξt, χ̃t) of the corresponding pairs of the samples ξt, χt and ξt, χ̃t, the following relationships hold:

    µ(ξt, χt) = 2[Fξ(0) − Fξη(0, 0)];    (3.2.20a)
    µ(ξt, χ̃t) = 2[Fη(0) − Fξη(0, 0)].    (3.2.20b)

Proof. According to the lattice absorption property, the following identities hold:

    ξt ∧ χt = ξt ∧ (ξt ∨ ηt) = ξt;    (3.2.21a)
    ξt ∨ χ̃t = ξt ∨ (ξt ∧ ηt) = ξt.    (3.2.21b)

According to the lattice idempotency property, the following identities hold:

    ξt ∨ χt = ξt ∨ (ξt ∨ ηt) = ξt ∨ ηt;    (3.2.22a)
    ξt ∧ χ̃t = ξt ∧ (ξt ∧ ηt) = ξt ∧ ηt.    (3.2.22b)

According to Definition 3.2.1, the metric µ(ξt, χt) is equal to:

    µ(ξt, χt) = 2(P[ξt ∨ χt > 0] − P[ξt ∧ χt > 0]).    (3.2.23)

According to Equations (3.2.22a) and (3.2.2b), the equality holds:

    P[ξt ∨ χt > 0] = P[ξt ∨ ηt > 0] = 1 − Fξη(0, 0),    (3.2.24)

and according to (3.2.21a), the equality holds:

    P[ξt ∧ χt > 0] = P[ξt > 0] = 1 − Fξ(0),    (3.2.25)

where Fξη(x, y) is the joint CDF of the samples ξt and ηt; Fξ(x) is the univariate CDF of the sample ξt.

Substituting the values of the probabilities (3.2.24), (3.2.25) into (3.2.23), we obtain:

    µ(ξt, χt) = 2[Fξ(0) − Fξη(0, 0)].    (3.2.26)

Similarly, according to Definition 3.2.1, the metric µ(ξt, χ̃t) is equal to:

    µ(ξt, χ̃t) = 2(P[ξt ∨ χ̃t > 0] − P[ξt ∧ χ̃t > 0]).    (3.2.27)

According to (3.2.21b), the equality holds:

    P[ξt ∨ χ̃t > 0] = P[ξt > 0] = 1 − Fξ(0),    (3.2.28)

and according to the relationships (3.2.22b) and (3.2.2a), the equality holds:

    P[ξt ∧ χ̃t > 0] = P[ξt ∧ ηt > 0] = 1 − [Fξ(0) + Fη(0) − Fξη(0, 0)],    (3.2.29)

where Fη(y) is the univariate CDF of the sample ηt.

Substituting the values of the probabilities (3.2.28) and (3.2.29) into (3.2.27), we obtain:

    µ(ξt, χ̃t) = 2[Fη(0) − Fξη(0, 0)].    (3.2.30)
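The expressions (3.2.20a) and (3.2.20b) can be confirmed by Monte Carlo for a correlated Gaussian pair, for which Fξη(0, 0) = 1/4 + arcsin(ρξη)/(2π), as obtained in the proof of Theorem 3.2.3. A sketch (NumPy assumed; illustrative parameters):

import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.6, 10**6
xi = rng.standard_normal(n)
eta = rho * xi + np.sqrt(1 - rho**2) * rng.standard_normal(n)
chi = np.maximum(xi, eta)                      # χ_t = ξ_t ∨ η_t
chi_t = np.minimum(xi, eta)                    # χ̃_t = ξ_t ∧ η_t

def mu(a, b):                                  # metric (3.2.1) from sign statistics
    return 2 * (np.mean(np.maximum(a, b) > 0) - np.mean(np.minimum(a, b) > 0))

F00 = 0.25 + np.arcsin(rho) / (2 * np.pi)      # F_ξη(0,0) of the Gaussian pair
print(round(mu(xi, chi), 3), round(2 * (0.5 - F00), 3))    # (3.2.20a), F_ξ(0) = 0.5
print(round(mu(xi, chi_t), 3), round(2 * (0.5 - F00), 3))  # (3.2.20b), F_η(0) = 0.5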

Theorem 3.2.7 implies several corollaries.

Corollary 3.2.4. For stochastic processes ξ(t) and η(t), which interact in an L-group Γ with operations χ(t) = ξ(t) ∨ η(t), χ̃(t) = ξ(t) ∧ η(t), t ∈ T, the NMSIs ν(ξt, χt), ν(ξt, χ̃t) of the corresponding pairs of their samples ξt, χt and ξt, χ̃t satisfy the following relationships:

    ν(ξt, χt) = 1 + 2[Fξη(0, 0) − Fξ(0)];    (3.2.31a)
    ν(ξt, χ̃t) = 1 + 2[Fξη(0, 0) − Fη(0)].    (3.2.31b)

The proof of the corollary follows directly from (3.2.7).
Corollary 3.2.5. For stochastic processes ξ(t), η(t), which interact in an L-group Γ with operations χ(t) = ξ(t) ∨ η(t), χ̃(t) = ξ(t) ∧ η(t), t ∈ T, and for the metrics µ(ξt, ηt), µ(ξt, χt), µ(ηt, χt), µ(ξt, χ̃t), µ(ηt, χ̃t) between the corresponding pairs of their samples ξt, ηt; ξt, χt; ηt, χt; ξt, χ̃t; ηt, χ̃t, the metric identities hold:

    µ(ξt, ηt) = µ(ξt, χt) + µ(χt, ηt);    (3.2.32a)
    µ(ξt, ηt) = µ(ξt, χ̃t) + µ(χ̃t, ηt).    (3.2.32b)

Proof. Joint fulfillment of the relationships (3.2.26) and (3.2.5):

    µ(ξt, χt) = 2[Fξ(0) − Fξη(0, 0)];
    µ(ηt, χt) = 2[Fη(0) − Fξη(0, 0)];
    µ(ξt, ηt) = 2(Fξ(0) + Fη(0)) − 4Fξη(0, 0),

implies the identity (3.2.32a). Similarly, joint fulfillment of the relationships (3.2.30) and (3.2.5):

    µ(ξt, χ̃t) = 2[Fη(0) − Fξη(0, 0)];
    µ(ηt, χ̃t) = 2[Fξ(0) − Fξη(0, 0)];
    µ(ξt, ηt) = 2(Fξ(0) + Fη(0)) − 4Fξη(0, 0),

implies the identity (3.2.32b).

The identities (3.2.32a) and (3.2.32b) imply that the samples ξt, χt, ηt (respectively, ξt, χ̃t, ηt) of stochastic processes ξ(t), χ(t), η(t) (χ̃(t)) lie on the same line in the metric space (Γ, µ).
Corollary 3.2.6. For stochastic processes ξ(t) and η(t), which interact in an L-group Γ with operations χ(t) = ξ(t) ∨ η(t), χ̃(t) = ξ(t) ∧ η(t), t ∈ T, the NMSIs ν(ξt, ηt), ν(ξt, χt), ν(ηt, χt), ν(ξt, χ̃t), ν(ηt, χ̃t) of the corresponding pairs of the samples ξt, ηt; ξt, χt; ηt, χt; ξt, χ̃t; ηt, χ̃t satisfy the following relationships:

    ν(ξt, χt) + ν(ηt, χt) − ν(ξt, ηt) = 1;    (3.2.33a)
    ν(ξt, χ̃t) + ν(ηt, χ̃t) − ν(ξt, ηt) = 1.    (3.2.33b)

The proof of the corollary follows directly from the relationships (3.2.32a), (3.2.32b), and (3.2.7).
Corollary 3.2.7. For stochastic processes ξ(t) and η(t), which interact in an L-group Γ with operations χ(t) = ξ(t) ∨ η(t), χ̃(t) = ξ(t) ∧ η(t), t ∈ T, the relationships between the metrics µ(ξt, χt), µ(ξt, χ̃t) (3.2.20a), (3.2.20b) and between the NMSIs ν(ξt, χt), ν(ξt, χ̃t) (3.2.31a), (3.2.31b) of the corresponding pairs of their samples ξt, χt and ξt, χ̃t are invariants of a group H of continuous mappings {hα,β}, hα,β ∈ H; α, β ∈ A of stochastic processes (3.2.9a), (3.2.9b):

    µ(ξt, χt) = µ(ξ′t, χ′t);  µ(ξt, χ̃t) = µ(ξ′t, χ̃′t);    (3.2.34)
    ν(ξt, χt) = ν(ξ′t, χ′t);  ν(ξt, χ̃t) = ν(ξ′t, χ̃′t),    (3.2.35)

where ξ′t and η′t are the samples of stochastic processes ξ′(t) and η′(t), which interact with each other in the L-group Γ′, hα,β: Γ → Γ′, with operations χ′(t) = ξ′(t) ∨ η′(t), χ̃′(t) = ξ′(t) ∧ η′(t), t ∈ T; χ′t, χ̃′t are the samples of the results χ′(t) and χ̃′(t) of the interactions of stochastic processes ξ′(t) and η′(t).

The proof of the corollary follows directly from the identities (3.2.20) and (3.2.31), and also from the CDF invariance relations (3.2.10).
Corollary 3.2.8. For statistically independent stochastic processes ξ(t) and η(t) with even univariate PDFs, which interact in an L-group Γ with operations χ(t) = ξ(t) ∨ η(t), χ̃(t) = ξ(t) ∧ η(t), t ∈ T, the metrics µ(ξt, χt), µ(ξt, χ̃t) (3.2.20) and the NMSIs ν(ξt, χt), ν(ξt, χ̃t) (3.2.31) of the corresponding pairs of their samples ξt, χt and ξt, χ̃t satisfy the following equations:

    µ(ξt, χt) = 1 − 2Fξη(0, 0);  µ(ξt, χ̃t) = 1 − 2Fξη(0, 0);    (3.2.36)
    ν(ξt, χt) = 2Fξη(0, 0);  ν(ξt, χ̃t) = 2Fξη(0, 0).    (3.2.37)

The proof of the corollary is realized by a direct substitution of the CDF values Fξ(0) = 0.5, Fη(0) = 0.5 into the relationships (3.2.20) and (3.2.31).
Corollary 3.2.9. For statistically independent stochastic processes ξ(t) and η(t) with even univariate PDFs, which interact in an L-group Γ with operations χ(t) = ξ(t) ∨ η(t), χ̃(t) = ξ(t) ∧ η(t), t ∈ T, the metrics µ(ξt, χt), µ(ξt, χ̃t) (3.2.20) and the NMSIs ν(ξt, χt), ν(ξt, χ̃t) (3.2.31) of the corresponding pairs of their samples ξt, χt and ξt, χ̃t satisfy the following relationships:

    µ(ξt, χt) = 0.5;  µ(ξt, χ̃t) = 0.5;    (3.2.38a)
    ν(ξt, χt) = 0.5;  ν(ξt, χ̃t) = 0.5.    (3.2.38b)

The proof of the corollary is realized by a direct substitution of the CDF values Fξ(0) = 0.5, Fη(0) = 0.5, Fξη(0, 0) = Fξ(0)Fη(0) = 0.25 into the relationships (3.2.20) and (3.2.31).

Corollary 3.2.10. For statistically independent stochastic processes ξ(t) and η(t) with even univariate PDFs, which interact in an L-group Γ with operations of join and meet, χ(t) = ξ(t) ∨ η(t), χ̃(t) = ξ(t) ∧ η(t), t ∈ T, the metric µ(χt, χ̃t) and NMSI ν(χt, χ̃t) between the samples χt and χ̃t are, respectively, equal to:

    µ(χt, χ̃t) = µ(ξt, ηt) = 1;    (3.2.39a)
    ν(χt, χ̃t) = ν(ξt, ηt) = 0.    (3.2.39b)

Proof. Consider the probabilities P[χt ∨ χ̃t > 0], P[χt ∧ χ̃t > 0], which for statistically independent stochastic processes ξ(t) and η(t) are, respectively, equal to:

    P[χt ∨ χ̃t > 0] = P[(ξt ∨ ηt) ∨ (ξt ∧ ηt) > 0] = P[ξt ∨ ηt > 0];
    P[χt ∧ χ̃t > 0] = P[(ξt ∨ ηt) ∧ (ξt ∧ ηt) > 0] = P[ξt ∧ ηt > 0].

Substituting the last two relationships into the definition of the metric (3.2.1), we obtain the first identity (3.2.39a). This identity and the coupling Equation (3.2.7) between the metric and NMSI of a pair of samples imply the second equality (3.2.39b).
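A joint Monte Carlo check of (3.2.38a), (3.2.38b), and (3.2.39a) for statistically independent processes with even univariate PDFs (here Laplace, an arbitrary non-Gaussian choice; NumPy assumed):

import numpy as np

rng = np.random.default_rng(2)
n = 10**6
xi, eta = rng.laplace(size=n), rng.laplace(size=n)  # independent, even PDFs
chi = np.maximum(xi, eta)                           # χ_t = ξ_t ∨ η_t
chi_t = np.minimum(xi, eta)                         # χ̃_t = ξ_t ∧ η_t

def mu(a, b):                                       # metric (3.2.1)
    return 2 * (np.mean(np.maximum(a, b) > 0) - np.mean(np.minimum(a, b) > 0))

print(round(mu(xi, chi), 3), round(mu(xi, chi_t), 3))   # both ≈ 0.5, cf. (3.2.38a,b)
print(round(mu(chi, chi_t), 3), round(mu(xi, eta), 3))  # both ≈ 1, cf. (3.2.39a)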

Thus, Theorem 3.2.7 establishes invariance relations for the metrics and NMSIs of the pairs of the samples ξt, χt; ηt, χt and ξt, χ̃t; ηt, χ̃t of stochastic signals (processes) ξ(t), η(t) that interact in an L-group Γ with operations χ(t) = ξ(t) ∨ η(t), χ̃(t) = ξ(t) ∧ η(t), t ∈ T, such that these identities do not depend on the energetic relationships between the interacting processes.
The obtained results allow us to generalize the geometric properties of metric
signal space (Γ, µ) with metric µ (1).
Stochastic signals (processes) ξ (t), η (t), t ∈ T interact with each other in metric
signal space (Γ, µ) with the properties of L–group Γ(+, ∨, ∧) and metric µ (1), so
that the results of their interaction χ+ (t), χ∨ (t), χ∧ (t) are described by binary
operations of addition +, join ∨ and meet ∧, respectively:

χ+ (t) = ξ (t) + η (t); (3.2.40a)

χ∨ (t) = ξ (t) ∨ η (t); (3.2.40b)


χ∧ (t) = ξ (t) ∧ η (t). (3.2.40c)
Similarly, the samples ξt , ηt of stochastic signals (processes) ξ (t), η (t), t ∈ T inter-
act in metric signal space (Γ, µ) with L–group properties and metric µ (3.2.1), so
that the results of their interaction χ+ ∨ ∧
t , χt , χt are described by the same binary
operations, respectively:
χ+
t = ξt + ηt ; (3.2.41a)
χ∨
t = ξt ∨ ηt ; (3.2.41b)
3.2 Normalized Measure of Statistical Interrelationship of Stochastic Processes 93

χ∧
t = ξt ∧ ηt . (3.2.41c)
Consider a number of theorems, which can elucidate the main geometric properties
of metric signal space (Γ, µ) with the properties of L–group Γ(+, ∨, ∧) and metric
µ (1).
Theorem 3.2.8. In metric signal space (Γ, µ) with metric µ (3.2.1), a pair of
samples ξt and ηt of stochastic signals (processes) ξ (t), η (t), t ∈ T and the result
of their interaction χ+ t , defined by operation of addition (3.2.41a), lie on the same
line l+ : ξt , χt , ηt ∈ l+ , so that the result of interaction χ+
+
t lies between the samples
ξt and ηt , and the metric identity holds:
µ(ξt , ηt ) = µ(ξt , χ+ +
t ) + µ(χt , ηt ). (3.2.42)
Theorem 3.2.8 is an overformulation of Theorem 5 through the notion of line
and ternary relation of “betweenness”.
Theorem 3.2.9. In metric signal space (Γ, µ) with metric µ (3.2.1), a pair of
samples ξt and ηt of stochastic signals (processes) ξ (t), η (t), t ∈ T and the result
of their interaction χ∨ t , defined by operation of join (3.2.41b), lie on the same line
l∨ : ξt , χ∨
t , ηt ∈ l∨ , so that the result of interaction χ∨t lies between the samples ξt
and ηt , and the metric identity holds:
µ(ξt , ηt ) = µ(ξt , χ∨ ∨
t ) + µ(χt , ηt ). (3.2.43)
Theorem 3.2.10. In metric signal space (Γ, µ) with metric µ (3.2.1), a pair of
samples ξt and ηt of stochastic signals (processes) ξ (t) and η (t), t ∈ T and the result
of their interaction χ∧ t , defined by operation of meet (3.2.41c), lie on the same line
l∧ : ξt , χt , ηt ∈ l∧ , so that the result of interaction χ∧

t lies between the samples ξt
and ηt , and the metric identity holds:
µ(ξt , ηt ) = µ(ξt , χ∧ ∧
t ) + µ(χt , ηt ). (3.2.44)
Theorems 3.2.9 and 3.2.10 are an overformulation of Corollary 3.2.5 of Theorem
3.2.7 through the notions of line and ternary relation of “betweenness”. These two
theorems could be united by the following one.
Theorem 3.2.11. In metric signal space (Γ, µ) with metric µ (3.2.1), a pair of
samples ξt and ηt of stochastic signals (processes) ξ (t), η (t), t ∈ T and the results of
their interaction χ∨ ∧
t /χt , defined by operations of join (3.2.41b) and meet (3.2.41c),
lie on the same line l∨,∧ : ξt , χ∨ ∧
t , χt , ηt ∈ l∨,∧ : l∨,∧ = l∨ = l∧ , so that the result of
∨ ∧
interaction χt /χt lies between the samples ξt , ηt , and each sample of a pair ξt
and ηt lies between the samples χ∨ ∧
t and χt , and the metric identities between the
∨ ∧
samples ξt , χt , χt , ηt hold:
µ(ξt , χ∧ ∨
t ) = µ(χt , ηt ); (3.2.45a)
µ(ξt , χ∨ ∧
t ) = µ(χt , ηt ); (3.2.45b)
µ(ξt , ηt ) = µ(χ∨ ∧
t , χt ); (3.2.45c)
µ(χ∨ ∧ ∨ ∧
t , χt ) = µ(χt , ξt ) + µ(ξt , χt ); (3.2.45d)
µ(χ∨ ∧
t , χt ) = µ(χ∨
t , ηt ) + µ(ηt , χ∧
t ). (3.2.45e)
94 3 Informational Characteristics and Properties of Stochastic Processes

Proof. Theorem 3.2.7 implies the following metric identities between the samples
ξt , χ∨ ∧
t , χt , ηt :
µ(ξt , χ∨

t ) = 2[Fξ (0) − Fξη (0, 0)];

µ(ξt , χt ) = 2[Fη (0) − Fξη (0, 0)];
µ(ηt , χ∨

t ) = 2[Fη (0) − Fξη (0, 0)];
µ(ηt , χ∧
t ) = 2[Fξ (0) − Fξη (0, 0)],
and these systems imply the identities (3.2.45a) and (3.2.45b). Besides, the following
triplets of the samples lie on the same lines, respectively: ξt , χ∨ ∧ ∧
t , ηt ; ηt , χt , ξt ; χt ,
∨ ∨ ∧ ∨ ∧
ξt , χt ; χt , ηt , χt . Hence, all four samples ξt , χt , χt , ηt belong to the same line.
Summing pairwise the values of metrics in the first and in the second equality
systems respectively, we obtain the value of metric µ(χ∨ ∧
t , χt ) between the samples
∨ ∧
χt , χt :
µ(χ∨ ∧ ∨ ∧ ∨ ∧
t , χt ) = µ(χt , ξt ) + µ(ξt , χt ) = µ(χt , ηt ) + µ(ηt , χt ) =

= 2[Fξ (0) + Fη (0)] − 4Fξη (0, 0).


The last relationship is identically equal to metric µ(ξt , ηt ) between the samples ξt
and ηt , which is defined by the expression (3.2.5), so the initial equality (3.2.45c)
holds. Summing pairwise the values of metrics from the first and the second equali-
ties of both equality systems respectively, we obtain metric identities (3.2.45d) and
(3.2.45e).
Theorem 3.2.12. In metric signal space (Γ, µ) with metric µ (3.2.1), the results of
interaction χ+ ∨ ∧
t , χt , χt of the samples ξt and ηt of stochastic signals (processes) ξ (t),
+
η (t), t ∈ T , defined by operations in (3.2.41), lie on the same line l: χ∨ ∧
t , χt , χt ∈ l,
+ ∨ ∧
so that the element χt lies between the samples χt , χt , and the metric identity
holds:
µ(χ∨ ∧ ∨ + + ∧
t , χt ) = µ(χt , χt ) + µ(χt , χt ). (3.2.46)
Proof. The samples of triplet χ+ ∨ ∧
t , χt , χt are connected with each other by operation
+
of addition of group Γ(+): χt = χt + χ∧

t , so Theorem 3.2.5 implies the identity
(3.2.46).
The following visual interpretation could be given for Theorems 3.2.8 through
3.2.12. The images of lines of metric signal space (Γ, µ) with metric µ (3.2.1) in R3
are circles, so the main content of Theorems 3.2.8 through 3.2.12 is represented in
Fig. 3.2.1a through c, respectively, by denoting the triplets of points lying on the
corresponding lines (circles in R3 ).
For neutral element (zero) 0 of group Γ(+), and also for a pair of samples ξt
and ηt of stochastic signals (processes) ξ (t), η (t), t ∈ T and the result of their
interaction χ+ t defined by binary operation (3.2.41a) of addition of the group, the
following metric identities hold:
µ(0, ξt ) = 1; (3.2.47a)
µ(0, ηt ) = 1; (3.2.47b)
µ(0, χ+
t ) = 1. (3.2.47c)
3.2 Normalized Measure of Statistical Interrelationship of Stochastic Processes 95

(a) (b) (c) (d)

FIGURE 3.2.1 Metric relationships between samples of signals in metric signal space (Γ, µ)
that correspond to the relationships (a) (3.2.42); (b) (3.2.43), (3.2.44), (3.2.45); (c) (3.2.46);
and (d) (3.2.48)

Theorem 3.2.13. In metric signal space (Γ, µ) with metric µ (3.2.1), the results
of interaction χ∨ ∧
t , χt of a pair of samples ξt and ηt of stochastic signals (processes)
ξ (t), η (t), t ∈ T , defined by the operations (3.2.41b) and (3.2.41c), and also neutral
element 0 of the group Γ(+) lie on the same line l0 : χ∨ ∧
t , χt , 0 ∈ l0 , so that the
element χ∧ ∨
t lies between the elements χt , 0; and the metric identity holds:

µ(0, χ∨ ∧ ∧ ∨
t ) = µ(0, χt ) + µ(χt , χt ). (3.2.48)

Proof. According to the equality (3.2.1), the values of metrics µ(0, χ∨ ∧


t ), µ(0, χt ),
∧ ∨
µ(χt , χt ) are determined by the following expressions:

µ(0, χ∨
t ) = 2(P[(ξt ∨ ηt ) ∨ 0 > 0] − P[(ξt ∨ ηt ) ∧ 0 > 0]) = 2P[ξt ∨ ηt > 0];

µ(0, χ∧
t ) = 2(P[(ξt ∧ ηt ) ∨ 0 > 0] − P[(ξt ∧ ηt ) ∧ 0 > 0]) = 2P[ξt ∧ ηt > 0];

µ(χ∧ ∨
t , χt ) = 2(P[(ξt ∨ ηt ) ∨ (ξt ∧ ηt ) > 0] − P[(ξt ∨ ηt ) ∧ (ξt ∧ ηt ) > 0]) =

= 2(P[ξt ∨ ηt > 0] − P[ξt ∧ ηt > 0]),


and their joint fulfillment directly implies the initial identity (3.2.48).

The content of Theorem 3.2.13 is visually represented in Fig. 3.2.1(d).


Let the samples ξt , ηt of the stochastic signals (processes) ξ (t), η (t), t ∈ T
interact with each other in metric signal space (Γ, µ) with metric µ (3.2.1), so
that the result of their interaction χt is described by some signature operation of
L–group Γ(+, ∨, ∧):
χt = ξt ⊕ ηt . (3.2.49)
Then the following lemma holds.

Lemma 3.2.1. In metric signal space (Γ, µ) with metric µ (3.2.1), under the map-
ping hα from the group H of continuous mappings hα ∈ H = {hα } preserving
neutral element 0 of the group Γ(+): hα (0) = 0, for the result of interaction χt
(3.2.49) of a pair of samples ξt and ηt of stochastic signals (processes) ξ (t), η (t),
96 3 Informational Characteristics and Properties of Stochastic Processes

t ∈ T , which is defined by some operations (3.2.41), metric between the elements


χt and hα (χt ) is equal to zero:

µ(χt , hα (χt )) = 0; (3.2.50)


0
hα : χ(t) → χ (t) = hα (χ(t)); (3.2.50a)
h−1
α
0
: χ (t) → χ(t). (3.2.50b)

Proof. According to Theorem 3.2.2, metric µ (3.2.1) is invariant of the group H of


mappings {hα,β }, hα,β ∈ H; α, β ∈ A of stochastic signals (processes) (3.2.9a) and
(3.2.9b):
µ(ξt , ηt ) = µ(ξt0 , ηt0 );
hα : ξ (t) → ξ 0 (t), hβ : η (t) → η 0 (t);
h−1 0
α : ξ (t) → ξ (t), h−1 0
β : η (t) → η (t).

Thus, if the relations hold:

h1 : χ(t) → χ(t), hα : χ(t) → χ0 (t) = hα (χ(t)),

where h1 is an identity mapping (the unity of the group H): h−1


α · hα = h1 , then
the identity holds:
µ(χt , χt ) = µ(χt , hα (χt )) = 0.

The lemma is necessary to prove the following theorem.


Theorem 3.2.14. In metric signal space (Γ, µ) with metric µ (3.2.1), under an
arbitrary mapping f of the result of interaction χt (3.2.49) of a pair of samples ξt
and ηt of stochastic signals (processes) ξ (t), η (t), t ∈ T , which is defined by some
of operations (3.2.41), the metric inequalities hold:

µ(ξt , χt ) ≤ µ(ξt , f (χt )); (3.2.51a)

µ(ηt , χt ) ≤ µ(ηt , f (χt )), (3.2.51b)


so that the inequalities (3.2.51) turn into equalities, if and only if the mapping f
belongs to a group H = {hα } of continuous mappings f ∈ H preserving neutral
element 0 of the additive group Γ(+): hα (0) = 0, hα ∈ H.
Proof. At first we prove the second part of theorem. If over the result of interaction
χt (3.2.49) of the samples ξt and ηt , the mapping f is realized such that it belongs
to a group H of odd mappings f ∈ H, then according to Lemma 3.2.1, the identity
µ(χt , f (χt )) = 0 implies the identity χt ≡ f (χt ) between the result of interaction
χt (3.2.49) of the samples ξt , ηt of stochastic signals (processes) ξ (t), η (t), t ∈ T
and the result of its mapping f (χt ). Then, according to metric identities (3.2.42)
through (3.2.44) of Theorems 3.2.8 through 3.2.10, respectively, the samples ξt ,
χt = f (χt ), ηt lie on the same line l⊕ (see Fig. 3.2.2), and the identities hold:

µ(ξt , ηt ) = µ(ξt , χt ) + µ(χt , ηt );


3.2 Normalized Measure of Statistical Interrelationship of Stochastic Processes 97

µ(ξt , ηt ) = µ(ξt , f (χt )) + µ(f (χt ), ηt );


µ(ξt , χt ) = µ(ξt , f (χt ));
µ(ηt , χt ) = µ(ηt , f (χt )).
The second part of theorem has been proved.
If over the result of interaction χt (3.2.49) of the samples ξt and ηt , the mapping
f is realized such that it does not belong to a group H of continuous mappings
f ∈/ H preserving neutral element 0 of the group Γ(+): hα (0) = 0, hα ∈ H, then
6 χt , and correspondingly, f (χt ) does not belong to the line l⊕ , on which the
f (χt ) =
samples ξt , ηt , χt lie (see Fig. 3.2.2).

FIGURE 3.2.2 Metric relationships between samples of signals in metric space (Γ, µ) elu-
cidating Theorem 3.2.14

Then the strict inequalities hold:


µ(ξt , ηt ) < µ(ξt , f (χt )) + µ(f (χt ), ηt );
µ(ξt , χt ) < µ(ξt , f (χt ));
µ(ηt , χt ) < µ(ηt , f (χt )).
2
Theorem 3.2.14 has a consequence that is important for both signal processing
theory and information theory. We formulate it in the form of the following theorem.
Theorem 3.2.15. In metric signal space (Γ, µ) with metric µ (3.2.1), under an
arbitrary mapping f of the result of interaction χt (3.2.49) of the samples ξt and
ηt of stochastic signals (processes) ξ (t), η (t), t ∈ T , which is defined by some of
operations (3.2.41), the inequalities between NMSIs hold:
ν (ξt , f (χt )) ≤ ν (ξt , χt ); (3.2.52a)
ν (ηt , f (χt )) ≤ ν (ηt , χt ), (3.2.52b)
so that inequalities (3.2.52) turn into identities, if and only if the mapping f belongs
to a group H = {hα } of continuous mappings f ∈ H preserving neutral element 0
of the additive group Γ(+): hα (0) = 0, hα ∈ H.
98 3 Informational Characteristics and Properties of Stochastic Processes

Theorem 3.2.15 is a substantial analogue of the known theorem on data pro-


cessing inequality formulated for mutual information I (X, Y ) between the signals
X, Y, Z, which form a Markov chain [70]:

X→Y →Z: I (X, Y ) ≥ I (X, Z ).

As well as a normalized measure of statistical interrelationship, mutual infor-


mation I (X, Y ) is a measure of statistical dependence between the signals, which is
preserved under their bijective mappings. The essential distinction between them
lies in the fact that normalized measure of statistical interrelationship is completely
defined by metric (3.2.1) through the relation (3.2.7), requiring the evenness of uni-
variate PDFs of stochastic signals (processes) instead of Markov chain presence.

3.3 Probabilistic Measure of Statistical Interrelationship of


Stochastic Processes
Let ξt and ηt be the instantaneous values (samples) of stochastic signals (processes)
ξ (t), η (t), t ∈ T with joint cumulative distribution function (CDF) Fξη (x, y ), joint
probability density function (PDF) pξη (x, y ) and also with even univariate PDFs
pξ (x) and pη (y ) of the kind: pξ (x) = pξ (−x); pη (y ) = pη (−y ).
Then each pair of stochastic processes ξ (t), η (t) ∈ Γ can be associated with the
following normalized measure between the samples ξt and ηt based on probabilistic
relationships between them.

Definition 3.3.1. Probabilistic measure of statistical interrelationship (PMSI) be-


tween the samples ξt , ηt of stochastic processes ξ (t), η (t) ∈ Γ is the quantity
νP (ξt , ηt ) equal to:
νP (ξt , ηt ) = 3 − 4P[ξt ∨ ηt > 0], (3.3.1)
where P[ξt ∨ ηt > 0] is the probability that the random variable ξt ∨ ηt , which is
equal to join of the samples ξt and ηt , takes a value greater than zero.

Theorem 3.3.1. PMSI νP (ξt , ηt ) is defined by the joint CDF Fξη (x, y ) of the
samples ξt , ηt :
νP (ξt , ηt ) = 4Fξη (0, 0) − 1. (3.3.2)

Proof. According to the formula [115, (3.2.85)], random variable ξt ∨ηt is character-
ized by CDF Fξ∨η (z ) equal to Fξ∨η (z ) = Fξη (z, z ). So, the probability P[ξt ∨ηt > 0]
is equal to P[ξt ∨ ηt > 0] = 1 − Fξ∨η (0) = 1 − Fξη (0, 0).

As follows from the equalities (3.2.8) and (3.3.2), for stochastic signals (pro-
cesses) ξ (t), η (t) ∈ Γ with joint CDF Fξη (x, y ) and even univariate PDFs pξ (x),
pη (y ) of a sort: pξ (x) = pξ (−x); pη (y ) = pη (−y ), the notions of NMSI ν (ξt , ηt ) and
PMSI νP (ξt , ηt ) coincide:
ν (ξt , ηt ) = νP (ξt , ηt ).
3.3 Probabilistic Measure of Statistical Interrelationship of Stochastic Processes 99

Thus the theorems listed below except special cases, repeat the corresponding the-
orems of the previous section where the properties of NMSI ν (ξt , ηt ) are considered
and remain valid for PMSI νP (ξt , ηt ).

Theorem 3.3.2. PMSI νP (ξt , ηt ) can be defined by the following way:

νP (ξt , ηt ) = 3 − 4P[ξt ∧ ηt < 0] = 4Fξη (0, 0) − 1, (3.3.3)

where P[ξt ∧ ηt < 0] is the probability that random variable ξt ∧ ηt , which is equal
to meet of the samples ξt and ηt , takes a value less than zero.

Proof. According to the formula [115, (3.2.82)], random variable ξt ∧ηt is character-
ized by CDF Fξ∧η (z ) equal to Fξ∧η (z ) = Fξ (z ) + Fη (z ) − Fξη (z, z ), where Fξ (u) and
Fη (v ) are univariate CDFs of the samples ξt and ηt , respectively. So, the probability
P[ξt ∧ ηt < 0] is equal to:

P[ξt ∧ ηt < 0] = Fξ (0) + Fη (0) − Fξη (0, 0) = 1 − Fξη (0, 0).

Theorem 3.3.3. For a pair of stochastic processes ξ (t), η (t) with even univariate
PDFs in L–group Γ: ξ (t), η (t) ∈ Γ, t ∈ T , PMSI νP (ξt , ηt ) of a pair of samples ξt ,
ηt is invariant of a group H of continuous mappings {hα,β }, hα,β ∈ H; α, β ∈ A of
stochastic processes, which preserve neutral element 0 of the group Γ(+): hα,β (0) =
0:

νP (ξt , ηt ) = νP (ξt0 , ηt0 ); (3.3.4)


0 0
hα : ξ (t) → ξ (t), hβ : η (t) → η (t); (3.3.4a)
h−1
α
0
: ξ (t) → ξ (t), h−1
β
0
: η (t) → η (t), (3.3.4b)

where ξt0 and ηt0 are the samples of stochastic processes ξ 0 (t) and η 0 (t) in L–group
Γ’: hα,β : Γ → Γ0 .

The proof of theorem is the same as the proof of Theorem 3.2.2.

Theorem 3.3.4. For Gaussian centered stochastic processes ξ (t) and η (t) with
correlation coefficient ρξη between the samples ξt and ηt , their PMSI νP (ξt , ηt ) is
equal to:
2
νP (ξt , ηt ) = arcsin[ρξη ]. (3.3.5)
π
The proof of theorem is the same as well as the proof of Theorem 3.2.3.
Joint fulfillment of the relationship (3.3.5) of Theorem 3.3.4 and Equation (3.1.6)
implies, that mutual NFSI ψξη (t, t) and PMSI νP (ξt , ηt ) between two samples ξt
and ηt of stochastic processes ξ (t) and η (t) introduced by Definitions 3.1.2 and
3.3.1, respectively, are rather exactly connected by the following approximation:
p
ψξη (t, t) ≈ 1 − 1 − νP (ξt , ηt ).
100 3 Informational Characteristics and Properties of Stochastic Processes

Theorem 3.3.5. For Gaussian centered stochastic processes ξ (t) and η (t), which
additively interact in L–group Γ: χ(t) = ξ (t)+η (t), t ∈ T , between PMSIs νP (ξt , χt ),
νP (ηt , χt ), νP (ξt , ηt ) of the corresponding pairs of their samples ξt , χt ; ηt , χt ; ξt ,
ηt , the following relationship holds:

νP (ξt , χt ) + νP (ηt , χt ) − νP (ξt , ηt ) = 1. (3.3.6)

The proof of theorem is the same as the proof of Theorem 3.2.4.


Thus, Theorem 3.3.5 establishes an invariance relationship for PMSIs of the
pairs of the samples ξt , χt ; ηt , χt ; ξt , ηt of additively interacting Gaussian stochastic
processes ξ (t), η (t), and besides, this identity does not depend on their energetic
relationships, despite the fact that PMSIs νP (ξt , χt ), νP (ηt , χt ) are the functions of
variances Dξ , Dη of the samples ξt , ηt of Gaussian signals ξ (t), η (t).
The result (3.3.6) of Theorem 3.3.5 could be generalized upon additively inter-
acting processes ξ (t), η (t) with rather arbitrary probabilistic-statistical properties,
whose diversity is bounded by cardinality of the group of mappings H = {hα,β },
α, β ∈ A, and besides, every mapping hα,β ∈ H possesses the property of the map-
pings (3.3.4a,b). The required generalization is provided by the following theorem.
Theorem 3.3.6. For nonGaussian stochastic processes ξ (t) and η (t), which ad-
ditively interact with each other in L–group Γ: χ(t) = ξ (t) + η (t), t ∈ T , between
PMSIs νP (ξt , ηt ), νP (ξt , χt ), νP (ηt , χt ) of the corresponding pairs of the samples ξt ,
ηt ; ξt , χt ; χt , ηt , that are invariants of the group H of odd mappings H = {hα,β },
hα,β ∈ H, α, β ∈ A of stochastic processes (3.3.4), the relationship holds:

νP (ξt , χt ) + νP (ηt , χt ) − νP (ξt , ηt ) = 1. (3.3.7)

The proof of the theorem is the same as the proof of Theorem 3.2.5.
For partially ordered set Γ with lattice properties, where the processes ξ (t) and
η (t) interact with the results of interaction χ(t), χ̃(t): χ(t) = ξ (t) ∨ η (t), χ̃(t) =
ξ (t) ∧ η (t), t ∈ T in the form of lattice binary operations ∨, ∧, the notion of
normalized measure between the samples ξt and ηt of stochastic processes ξ (t) and
η (t) needs refinement. It is stipulated by the fact that the sameness of the PMSI
definition through the functions (3.3.1) and (3.3.3) does not hold in partially ordered
set Γ with lattice properties because of violation of evenness property of PDFs of
the samples χt and χ̃t .
Definition 3.3.2. Probabilistic measures of statistical interrelationship (PMSIs)
between the samples ξt , χt and ξt , χ̃t of stochastic processes ξ (t), χ(t), χ̃(t) ∈ Γ:
χ(t) = ξ (t) ∨ η (t), χ̃(t) = ξ (t) ∧ η (t), t ∈ T in partially ordered set Γ with lattice
properties are the quantities νP (ξt , χt ), νP (ξt , χ̃t ) that are respectively equal to:

νP (ξt , χt ) = 3 − 4P[ξt ∧ χt < 0]; (3.3.8a)

νP (ξt , χ̃t ) = 3 − 4P[ξt ∨ χ̃t > 0], (3.3.8b)


where P[ξt ∧ χt < 0] and P[ξt ∨ χ̃t > 0] are the probabilities that random variables
ξt ∧ χt /ξt ∨ χ̃t , which are equal to meet and join of the samples ξt , χt and ξt , χ̃t ,
take the values less and more than zero, respectively.
3.4 Information Distribution Density of Stochastic Processes 101

Theorem 3.3.7. For stochastic processes ξ (t) and η (t), which interact in partially
ordered set Γ with lattice properties: χ(t) = ξ (t) ∨ η (t), χ̃(t) = ξ (t) ∧ η (t), t ∈ T ,
and for their PMSIs νP (ξt , χt ), νP (ηt , χt ); νP (ξt , χ̃t ), νP (ηt , χ̃t ) of the corresponding
pairs of their samples ξt , χt ; ηt , χt and ξt , χ̃t ; ηt , χ̃t , the following relationships
hold:
νP (ξt , χt ) = 1, νP (ηt , χt ) = 1; (3.3.9a)
νP (ξt , χ̃t ) = 1, νP (ηt , χ̃t ) = 1. (3.3.9b)
Proof. According to absorption axiom of lattice, the identities hold:
ξt ∧ χt = ξt ∧ (ξt ∨ ηt ) = ξt ; (3.3.10a)
ηt ∧ χt = ηt ∧ (ξt ∨ ηt ) = ηt ; (3.3.10b)
ξt ∨ χ̃t = ξt ∨ (ξt ∧ ηt ) = ξt ; (3.3.10c)
ηt ∨ χ̃t = ηt ∨ (ξt ∧ ηt ) = ηt . (3.3.10d)
Substituting the results of (3.3.10) into the formula (3.3.8) and taking into account
the evenness of univariate PDFs pξ (x) and pη (y ) of the samples ξt , ηt , the proba-
bilities in (3.3.8) are determined by univariate CDFs Fξ (x), Fη (y ) on x = y = 0:

P[ξt ∧ χt < 0] = Fξ (0) = 1/2;
(3.3.11a)
P[ηt ∧ χt < 0] = Fη (0) = 1/2;

P[ξt ∨ χ̃t > 0] = Fξ (0) = 1/2;
(3.3.11b)
P[ηt ∨ χ̃t > 0] = Fη (0) = 1/2.
Substituting the results (3.3.11) into the formulas (3.3.8) we obtain (3.3.9).
Thus, Theorem 3.3.7 establishes invariance relationships for PMSIs (3.3.9) of the
pairs of samples ξt , χt ; ηt , χt and ξt , χ̃t ; ηt , χ̃t of stochastic processes ξ (t) and η (t),
which interact in partially ordered set Γ with lattice properties: χ(t) = ξ (t) ∨ η (t),
χ̃(t) = ξ (t) ∧ η (t), t ∈ T , and these identities do not depend on probabilistic
distributions of interacting signals and their energetic relationships.

3.4 Information Distribution Density of Stochastic Processes


Each sample ξ (tj ) of stochastic process ξ (t) with normalized function of statisti-
cal interrelationship (NFSI) ψξ (tj , tk ), introduced in the previous section, can be
associated with some function iξ (tj , t) connected with NFSI by the relationship:
ψξ (tj , tk ) = d(p, p0 ) = 1 − d[iξ (tj , t); iξ (tk , t)]; (3.4.1)
Z∞
1
d[iξ (tj , t); iξ (tk , t)] = |iξ (tj , t) − iξ (tk , t)|dt, (3.4.1a)
2
−∞

where d(p, p0 ) is metric determined by (3.1.2a); d[iξ (tj , t); iξ (tk , t)] is the metric be-
tween the functions iξ (tj , t) and iξ (tk , t) of the samples ξ (tj ) and ξ (tk ), respectively.
102 3 Informational Characteristics and Properties of Stochastic Processes

Definition 3.4.1. The function iξ (tj , t) connected with NFSI ψξ (tj , tk ) by the
relationship (3.4.1) is called information distribution density (IDD) of stochastic
process ξ (t).

For stationary stochastic process ξ (t) with NFSI ψξ (τ ), the expression (3.4.1)
takes the form:

ψξ (τ ) = 1 − d[iξ (tj , t); iξ (tj + τ, t)]; (3.4.2)


Z∞
1
d[iξ (tj , t); iξ (tj + τ, t)] = |iξ (tj , t) − iξ (tj + τ, t)|dt, (3.4.2a)
2
−∞

The IDD of stationary stochastic process ξ (t) will be denoted as iξ (τ ) if it is not


necessary to fix the relation of IDD to a single sample ξ (tj ).
Consider the main properties of the IDD of stochastic process ξ (t) carrying a
finite information quantity I (Tξ ) < ∞ on the domain of definition Tξ .

1. IDD is a nonnegative and bounded function: 0 ≤ iξ (tj , t) ≤ iξ (tj , tj ).


R
2. IDD is a normalized function: iξ (tj , t)dt = 1.

3. IDD is a symmetric function with respect to the line t = tj :

iξ (tj , tj − τ ) = iξ (tj , tj + τ ).

For a stationary stochastic process with a normalized function of statistical inter-


relationship (NFSI) ψξ (τ ), one can obtain a coupling equation for IDD iξ (τ ) that
is dual with respect to the expression (3.4.2a) directly from its consequence:

1
1− iξ (x)dx = ψξ (2τ ), τ > 0.
2
−∞

Taking into account the evenness of the functions iξ (τ ), ψξ (τ ) and the property of
an integral with variable upper limit of integration, it is easy to obtain the coupling
equation between IDD iξ (τ ) and NFSI ψξ (τ ) of stochastic process:
 1 0
iξ (τ ) = 2 ψ− (2τ ), τ < 0;
(3.4.3)
1 0
− 2 ψ+ (2τ ), τ ≥ 0,
0 0
where ψ− (τ ) and ψ+ (τ ) are the derivatives of NFSI ψξ (τ ) on the left and on the
right, respectively.
For Gaussian stationary stochastic process, IDD iξ (τ ) is completely defined by
the normalized correlation function rξ (τ ):
 1 0
iξ (τ ) = 2 g− [rξ (2τ )], τ < 0;
(3.4.4)
1 0
− 2 g+ [rξ (2τ )], τ ≥ 0,
3.4 Information Distribution Density of Stochastic Processes 103
0 0
where g− [rξ (τ )] and g+ [rξ (τ )] are derivatives of deterministic functions g [x] on the
left and on the right, respectively (see Equations (3.1.3) and (3.1.4)).
The IDD iξ (τ ) of a stochastic process, as follows from (3.4.3), has clear physical
sense that IDD iξ (τ ) characterizes a rate of change of statistical interrelationships
between the samples of a stochastic process.
The formula (3.1.3) implies that, for stochastic process ξ (t) in the form of white
Gaussian noise, NFSI ψξ (τ ) is equal to:

1, τ = 0;
ψξ (τ ) =
0, τ 6= 0,

and formula (3.4.3) implies that IDD iξ (tj , t) of an arbitrary sample ξ (tj ) has a
form of delta-function iξ (tj , t) = δ (t − tj ). This means that a single sample ξ (tj ) of
white Gaussian noise does not carry any information regarding the other samples
of this stochastic process.
It is easy to determine the IDD of stochastic process with help of the coupling
equation (3.4.3), if its NFSI is known.

Example 3.4.1. Let the NFSI ψ (τ ) of a stochastic process be determined by the


function: 
|τ |
1− , |τ | ≤ a;

ψ (τ ) = a
 0, |τ | ≥ a.
Then its IDD i(τ ) is equal to:
1h  a  a i
i(τ ) = 1 τ+ −1 τ − ,
a 2 2
where 1(τ ) is the Heaviside step function. 5

Example 3.4.2. Let the NFSI ψ (τ ) of a stochastic process be determined by the


function:  2
|τ |
ψ (τ ) = 1 − , |τ | ≤ 2a.
2a
Then its IDD i(τ ) is equal to:
( h i
1 |τ |
a 1− a , |τ | ≤ a;
i(τ ) =
0, |τ | ≥ a. 5

Example 3.4.3. Let the NFSI ψ (τ ) of a stochastic process be determined by the


function:
ψ (τ ) = exp(−λ |τ |).
Then its IDD i(τ ) is equal to:

i(τ ) = λ exp(−2λ |τ |). 5


104 3 Informational Characteristics and Properties of Stochastic Processes

Considering the relation between NFSI and IDD, one should note that IDD is
a primary characteristic of this pair (IDD and NFSI) for the stochastic processes
that are capable of carrying information. The IDD determines NFSI, not vice versa.
This means that the expression (3.4.3) has a sense when known beforehand that the
IDD of stochastic process exists, or the stochastic process can carry information.
Naturally, there are some classes of stochastic processes for which the use of (3.4.3)
is impossible or is possible but with a proviso. Based on the simplicity of consid-
eration, we consider Gaussian stochastic processes as the examples of stochastic
processes of various kinds.
Analytical stochastic processes form a wide class of stochastic processes, for
which the use of (3.4.3) makes no sense [115], [4].
Stochastic process ξ (t) is called analytical in the interval [t0 , t0 + T ], if most
realizations of the process assume an analytical continuation in this interval. Ana-
lyticity of a process ξ (t) in a neighborhood of the point t0 implies a possibility of
representation of this process realization by Taylor’s series with random coefficients:

X (t − t0 )k
ξ (t) = ξ (k) (t0 ) , (3.4.5)
k!
k=0

where ξ (k) (t) is a k-th order derivative of stochastic process ξ (t) in mean square
sense.
The following theorem determines necessary and sufficient conditions of analyt-
icity of the Gaussian stochastic process [115].
Theorem 3.4.1. Let the normalized correlation function rξ (t1 , t2 ) of Gaussian
stochastic process ξ (t) be analytical function of two variables in a neighborhood of
the point (t0 , t0 ). Then stochastic process ξ (t) is analytical in a neighborhood of
this point.
Below are examples of analytical stochastic processes.
Example 3.4.4. Consider a Gaussian stationary stochastic process with a constant
power spectral density S (ω ) = N0 = const bounded by the band [−∆ω/2, ∆ω/2]:

N0 , ω ∈ [−∆ω/2, ∆ω/2];
S (ω ) = (3.4.6)
0, ω ∈/ [−∆ω/2, ∆ω/2].
According to this, its normalized correlation function r(τ ) is determined by the
function:
sin(∆ωτ /2)
r(τ ) = . 5 (3.4.7)
∆ωτ /2
Example 3.4.5. Consider a Gaussian stationary stochastic process with the nor-
malized correlation function:
r(τ ) = exp(−µτ 2 ). (3.4.8)
In this case, its power spectral density S (ω ) is determined by the expression:
ω2
r  
π
S (ω ) = exp − . 5 (3.4.9)
µ 4µ
3.4 Information Distribution Density of Stochastic Processes 105

Example 3.4.6. Consider a Gaussian stationary stochastic process with the nor-
malized correlation function:
  τ 2 −1
r(τ ) = 1 + . (3.4.10)
α

In this case, its power spectral density S (ω ) is determined by the expression:

S (ω ) = A exp(−α|ω|). 5 (3.4.11)

All the values of realization of analytical stochastic process may be restored on


the base of an indefinitely small interval of its realization while using the methods
of analytical continuation. So, normalized correlation function and power spectral
density of ergodic analytical stationary process could be determined on an indefi-
nitely small interval of its realization. The realization of an analytical process may
be extrapolated arbitrarily into the future with a given accuracy on the base of a
small interval of past realization. In this sense, the realizations of analytical pro-
cesses are similar to usual deterministic functions. Thus, the analytical stochastic
processes cannot carry information.
The stochastic processes, shown by Examples 3.4.4 through 3.4.6, may be ob-
tained by passing white Gaussian noise through the forming filter with amplitude-
frequency characteristic (frequency response) K (ω ), whose squared module is equal
to the power spectral density of the output stochastic process (3.4.6), (3.4.9), and
(3.4.11):
2
|K (ω )| = S (ω ). (3.4.12)
However, filters with such amplitude-frequency characteristics are not physically
realizable ones because they do not satisfy the corresponding Paley-Wiener condi-
tion [235]:
Z∞ ln |K (ω )|2
dω < ∞.
1 + ω2
−∞

A wide class of stochastic processes indicates that the use of the formula (3.4.3) is
possible with a proviso; it is formed by narrowband Gaussian stationary stochastic
processes with oscillated normalized correlation function:

ρ(τ ) = r(τ ) cos ω0 τ, (3.4.13)

where ω0 is a central frequency of a power spectral density of a process; r(τ ) is an


envelope of normalized correlation function:
q
r(τ ) = ρ2 (τ ) + ρ2⊥ (τ );

1
R∞ ρ(x)
and ρ⊥ (τ ) = π τ −x dx is a function connected with an initial one by Hilbert
−∞
transform.
106 3 Informational Characteristics and Properties of Stochastic Processes

A narrowband Gaussian stationary stochastic process ξ (t) with normalized cor-


relation function (3.4.13) can carry information if its envelope is not an analytical
function.
The following theorem defines the necessary condition, according to which a
stochastic process has the ability to carry information.
Theorem 3.4.2. For stochastic process ξ (t) to possess the ability to carry infor-
mation, it is necessary that its NFSI ψξ (tj , tk ) satisfy the following requirements:
1. NFSI ψξ (tj , tk ) is not a differentiable function in the point (tj , tj ), and
its derivatives in this point on the left ψξ0 (tj , tj − 0) and on the right
ψξ0 (tj , tj + 0) have unlike signs and the same modules:

ψξ0 (tj , tj − 0) = −ψξ0 (tj , tj + 0);

2. Modules of derivatives of NFSI in the point (tj , tj ) on the left and on the
right are not equal to zero:

|ψξ0 (tj , tj − 0)| = |ψξ0 (tj , tj + 0)| 6= 0.

Proof. Let ξ (t) be an arbitrary stochastic process possessing the ability to carry
information. This means that for any its sample ξ (tj ), there exists IDD iξ (tj , t)
with the properties 1, 2, 3 listed above on page 102. We can find the derivatives of
NFSI ψξ (tj , tk ) in the point (tj , tj ) on both the left and right.
The definition of the derivative implies:
∆ψξ (tj , tj − 0) ψξ (tj , tj ) − ψξ (tj , tj − 0)
ψξ0 (tj , tj − 0) = lim = lim ; (3.4.14a)
∆t→0 ∆t ∆t→0 ∆t
∆ψξ (tj , tj + 0) ψξ (tj , tj + 0) − ψξ (tj , tj )
ψξ0 (tj , tj + 0) = lim = lim . (3.4.14b)
∆t→0 ∆t ∆t→0 ∆t
Taking into account the relationship (3.4.1), the expressions (3.4.14) take the form:

d[i(tj , t); i(tj − ∆t, t)]


ψξ0 (tj , tj − 0) = lim ; (3.4.15a)
∆t→0 ∆t
d[i(tj , t); i(tj + ∆t, t)]
ψξ0 (tj , tj + 0) = − lim , (3.4.15b)
∆t→0 ∆t
where
Z∞
1
d[iξ (tj , t); iξ (tj ± ∆t, t)] = |iξ (tj , t) − iξ (tj ± ∆t, t)|dt, (3.4.16)
2
−∞

and
d[iξ (tj , t); iξ (tj ± ∆t, t)] 6= 0. (3.4.17)
According to the definition of metric and the property of IDD evenness, the identity
holds:
|iξ (tj , t) − iξ (tj − ∆t, t)| = |iξ (tj , t) − iξ (tj + ∆t, t)| . (3.4.18)
3.4 Information Distribution Density of Stochastic Processes 107

Joint fulfillment of the formulas (3.4.15), (3.4.16), and (3.4.18) implies that deriva-
tives of NFSI on the left and on the right have unlike signs in the same module:

ψξ0 (tj , tj − 0) = −ψξ0 (tj , tj + 0),

and NFSI ψξ (tj , tk ) of stochastic process in the point (tj , tj ) is nondifferentiable.


Besides, the relationship (3.4.17) implies that modules of derivatives of NFSI
on the left and on the right are not equal to zero:

|ψξ0 (tj , tj − 0)| = |ψξ0 (tj , tj + 0)| 6= 0.

The theorem can be described on a qualitative level in the following way. For
stochastic process ξ (t) to possess the ability to carry information, it is necessary
for the NFSI ψξ (tj , tk ) to be characterized by the narrowed peak in a neighbor-
hood of the point (tj , tj ). Moreover, the sharper the peak of NFSI ψξ (tj , tk ) (i.e.,
the larger the module of derivative |ψξ0 (tj , tj )|), the larger the maximum iξ (tj , tj )
of IDD iξ (tj , t), and correspondingly, the larger the overall quantity of information
I (T ) that can be carried by stochastic process within time interval [t0 , t0 + T ]. Thus,
Theorem 3.4.2 states that not all stochastic processes possess the ability to carry
information.
Example 3.4.7. A wide class of stochastic processes can be obtained by passing
white Gaussian noise through the forming Butterworth filters with squared module
of amplitude-frequency characteristic Kn (ω ) [236]:
  ω 2n −1
2
|Kn (ω )| = 1 + ,
W
where W is Butterworth filter bandwidth at 0.5 power level; n is an order of But-
terworth filter.
Butterworth filters satisfy the Paley-Wiener condition of physical realizability.
Power spectral density Sn (ω ) of Gaussian Butterworth stochastic processes of n-th
order is determined by an expression similar to the previous one:
  ω 2n −1
Sn (ω ) = 1 + ,
W
where n is an order of Butterworth stochastic process.
The existence of all the derivatives R(2k) (t1 , t2 ) of covariation function R(t1 , t2 )
up to 2N -th order inclusively is a necessary and sufficient condition of N -times
differentiability of Gaussian stochastic process. This condition is equivalent to the
following one. Power spectral density Sn (ω ) decay faster than ω −2N −1 is necessary
and sufficient condition of N -times differentiability of Gaussian stochastic process.
The formulated condition implies that Butterworth stochastic process of n-th
order is n − 1-times differentiable, i.e., Butterworth stochastic processes of n > 1-th
order are differentiable, and according to Theorem 3.4.2, they do not possess the
108 3 Informational Characteristics and Properties of Stochastic Processes

ability to carry information. On the large values of an order n, Butterworth stochas-


tic process asymptotically tends to quasi-white Gaussian process with constant
power spectral density bounded by a bandwidth W . Thus, Butterworth stochastic
processes of large orders, with respect to their properties, are close to the properties
of analytical stochastic processes.
Conversely, Butterworth stochastic process of an order n = 1 with power spec-
tral density S1 (ω ):
  ω 2 −1
S1 (ω ) = 1 + ,
W
and correlation function R(τ ):

W
R(τ ) = exp[−W |τ |],
2
is nondifferentiable, and has the ability to carry information. 5

It should be noted that informational properties of stochastic processes with dis-


crete time domain and informational properties of continuous stochastic processes
may be described on the base of IDD. The use of Dirac delta function δ (t) allows
IDD of stochastic process with discrete time domain to be represented in the form:
n
X n
X
iξ (tj , t) = ij+k · δ (t − tj+k ), ij+k = 1.
k=−n k=−n

Thus, the aforementioned properties 1, 2, 3 listed on page 102 hold for IDD iξ (tj , t).
While considering informational properties of stochastic processes, the essential
distinctions between their domains are not drawn as shown below.

3.5 Informational Characteristics and Properties of Stochastic


Processes
By the mapping ϕ[iξ (tj , t)], information distribution density (IDD) iξ (tj , t) of an
arbitrary sample ξ (tj ) of stochastic process ξ (t) with domain of definition Tξ can
be associated with its ordinate set Xj [237]:

Xj = ϕ[iξ (tj , t)] = {(t, y ) : t ∈ Tξ , 0 ≤ y ≤ iξ (tj , t)}. (3.5.1)

Note that for an arbitrary pair of samples ξ (tj ) and ξ (tk ) of stochastic process
ξ (t) with IDDs iξ (tj , t) and iξ (tk , t), respectively (see Fig. 3.5.1), the metric identity
holds: Z
1 1
djk = |iξ (tj , t) − iξ (tk , t)|dt = m[Xj ∆Xk ], (3.5.2)
2 2

3.5 Informational Characteristics and Properties of Stochastic Processes 109
1
R
where djk = 2 |iξ (tj , t) − iξ (tk , t)|dt is the metric between IDD iξ (tj , t) of the sam-

ple ξ (tj ) and IDD iξ (tk , t) of the sample ξ (tk ); Xj = ϕ[iξ (tj , t)], Xk = ϕ[iξ (tk , t)];
m[Xj ∆Xk ] is a measure of symmetric difference of the sets Xj and Xk .
The mapping (3.5.1) transfers IDDs iξ (tj , t), iξ (tk , t) into corresponding equiv-
alent sets Xj , Xk and defines isometric mapping of set (I, d) of IDDs {iξ (tα , t)}
with metric djk (3.5.2) into the metric space (X, d∗ ) of the sets {Xα } with metric
d∗jk = 12 m[Xj ∆Xk ].
The normalization property 2 of IDD
(see Section 3.4, page 102) provides the nor-
malization of a measure m(Xj ) of an ar-
bitrary sample Rξ (tj ) of stochastic process
ξ (t): m(Xj ) = iξ (tj , t)dt = 1.

The mapping (3.5.1) allows describing
any stochastic process ξ (t) possessing the
ability to carry information as a collection
of the samples {ξ (tα )} andS
also as a collec-
tion of the sets {Xα }: X = Xα . Note that
α
FIGURE 3.5.1 IDDs iξ (tj , t), iξ (tk , t) of
a measure of a single element Xα possesses
samples ξ(tj ), ξ(tk ) of stochastic process
the normalization property: m(Xα ) = 1.
ξ(t)
Thus, any stochastic process (signal)
may be considered a system of the elements {Xα } of metric space (X, d∗ ) with
metric d∗jk between a pair of elements Xj and Xk : d∗jk = 21 m[Xj ∆Xk ].
The mapping (3.5.1) allows considering an arbitrary stochastic process ξ (t) as a
collection of statistically dependent samples {ξ (tα )} and also as subalgebra B(X )
of generalized Boolean algebra B(Ω) with a measure m and signature (+, ·, −, O)
of the type (2, 2, 2, 0) with the following operations over the elements {Xα } ⊂ X,
X ∈ Ω:

1. addition Xα1 + Xα2


2. multiplication Xα1 · Xα2
3. difference Xα1 − Xα2
4. symmetric difference Xα1 ∆Xα2
5. null element O: Xα ∆Xα = O; Xα − Xα = O
Informational characteristics of stochastic process ξ (t), considered a subalgebra
B(X ) of generalized Boolean algebra B(Ω) with measure m, are introduced by the
following axiom that is similar to Axiom 2.3.1.

Axiom 3.5.1. Axiom of a measure of binary operation. Measure m(Xα ) of the ele-
ment Xα , Xα = Xβ ◦ Xγ ; Xα , Xβ , Xγ ∈ X, considered a result of binary operation
“◦” of subalgebra B(X ) of generalized Boolean algebra B(Ω) with a measure m,
defines information quantity I (Xα ) = m(Xα ) that corresponds to the result of this
operation.
110 3 Informational Characteristics and Properties of Stochastic Processes

Based on the axiom, the measure m(Xα ) of the element Xα of the space (X, d∗ )
defines a quantitative aspect of information it contains whereas binary operation
“◦” of generalized Boolean algebra B(Ω) defines a qualitative aspect of information.
Within the framework of the formulated Axiom 3.5.1 depending on the kinds of re-
lations between the elements {ξ (tα )} of stochastic process ξ (t), we shall distinguish
the following types of information quantities that according to Axiom 3.5.1, are de-
fined by the corresponding binary operation of general Boolean algebra B(Ω) (see
Definitions 2.3.1 through 2.3.5).
+
Definition 3.5.1. Quantity of overall information Ijk contained in an arbitrary
pair of samples ξ (tj ) and ξ (tk ) of stochastic process ξ (t) is an information quantity
equal to a measure of sum m(Xj + Xk ) of the elements Xj and Xk :
+
Ijk = m(Xj + Xk ) = m(Xj ) + m(Xk ) − m(Xj · Xk ) = 2 − ψξ (tj , tk ),

where Xj = ϕ[iξ (tj , t)]; Xk = ϕ[iξ (tk , t)]; m(Xj · Xk ) is a measure of product of the
elements Xj and Xk ; ψξ (tj , tk ) is a normalized function of statistical interrelation-
ship (NFSI) between the samples ξ (tj ) and ξ (tk ) of stochastic process ξ (t).
Definition 3.5.2. Quantity of mutual information Ijk contained simultaneously
in both the sample ξ (tj ) and the sample ξ (tk ) of stochastic process ξ (t) is an infor-
mation quantity equal to a measure of product m(Xj · Xk ) of the elements Xj and
Xk :
Ijk = m(Xj · Xk ) = ψξ (tj , tk ).
Definition 3.5.3. Quantity of absolute information Ij contained in an arbitrary
sample ξ (tj ) of stochastic process ξ (t) is a quantity of mutual information Ijj equal
to 1:
Ij = Ijj = ψξ (tj , tj ) = m(Xj · Xj ) = m(Xj ) = 1.
Thus, a measure m(Xj ) of the element Xj defines an information quantity
contained in the sample ξ (tj ) concerning itself.

Definition 3.5.4. Quantity of particular relative information Ijk contained in the
sample ξ (tj ) with respect to the sample ξ (tk ) is an information quantity equal to a
measure of difference m(Xj − Xk ) between the elements Xj and Xk :

Ijk = m(Xj − Xk ) = m(Xj ) − m(Xj · Xk ) = 1 − ψξ (tj , tk ).

Definition 3.5.5. Quantity of relative information Ijk contained in the sample
ξ (tj ) with respect to the sample ξ (tk ) and, vice versa, contained in the sample ξ (tk )
with respect to the sample ξ (tj ), is an information quantity equal to a measure of
symmetric difference m(Xj ∆Xk ) between the elements Xj and Xk :
∆ − −
Ijk = m(Xj ∆Xk ) = m(Xj − Xk ) + m(Xk − Xj ) = Ijk + Ikj = 2(1 − ψξ (tj , tk )).

We should consider a unit of information quantity (see Definition 2.3.11). Mathe-


matical apparatus designed to construct the stated fundamentals of signal process-
ing theory and information theory is generalized Boolean algebra with a normalized
3.5 Informational Characteristics and Properties of Stochastic Processes 111

measure. So, as an information unit, we take the quantity of absolute information


I [ξ (tα )] contained in a single element ξ (tα ) of stochastic process ξ (t) with a domain
of definition Tξ (in the form of a finite set or a continuum) and information distri-
bution density (IDD) iξ (tα , t), and according to the property of IDD normalization,
the result is equal to:
Z
I [ξ (tα )] = m(Xα )|tα ∈Tξ = iξ (tα ; t)dt = 1 abit. (3.5.3)

Definition 3.5.6. Quantity of absolute information contained in a single element


(sample) ξ (tα ) of stochastic process ξ (t), considered a subalgebra B(X ) of gen-
eralized Boolean algebra B(Ω) with a measure m, is called an absolute unit, or
abit.
On the base of the mapping (3.5.1) and the approach that allows considering an
arbitrary stochastic process ξ (t) as a collection of statistically dependent samples
{ξ (tα )} and as a subalgebra B(X ) of generalized Boolean algebra B(Ω) with a
measure m and the signature (+, ·, −, O) of the type (2, 2, 2, 0), one canSdetermine
two forms of structural diversity of the stochastic process ξ (t)/X = Xα and,
α
correspondingly, two measures of collection of the elements {ξ (tα )} of the signal
ξ (t): overall quantity of information and relative quantity of information, which
are introduced by the following definitions.
Definition 3.5.7. Overall quantity of information I [ξ (t)] contained in a stochastic
process ξ (t) considered a collection of the elements {Xα } is an information quantity
equal to a measure of their sum:
[ X
I [ξ (t)] = m( Xα ) = m( Xα ). (3.5.4)
α α

It is obvious that a given measure of structural diversity is identical to the quantity


of absolute information IX contained in a stochastic process ξ (t): I [ξ (t)] = IX .
Definition 3.5.8. Relative quantity of information I∆ [ξ (t)] contained in a stochas-
tic process ξ (t) owing to distinctions between the elements of the collection {Xα }
is an information quantity equal to a measure of symmetric difference of these
elements:
I∆ [ξ (t)] = m(∆ Xα ), (3.5.5)
α
where ∆ Xα = . . . ∆Xα ∆Xβ ∆ . . . is the symmetric difference of the elements of the
α
collection {Xα } of stochastic process ξ (t)/X.
Overall quantity of information I [ξ (t)] contained in stochastic process ξ (t) with
IDD iξ (tα , t) and strictly distributed in its domain of definition Tξ = [t0 , t0 + T ],
according to the Definition 3.5.7, is equal to:
X Z
I [ξ (t)] = m( Xα )|tα ∈Tξ = sup [iξ (tα , t)]dt (abit). (3.5.6)
α tα ∈Tξ

112 3 Informational Characteristics and Properties of Stochastic Processes

Overall quantity of information I [ξ (t)] contained in stationary stochastic process


ξ (t) with IDD iξ (τ ) in its domain of definition Tξ = [t0 , t0 + T ] is determined by
the identity:
Iξ (Tξ ) = iξ (0) · T (abit), (3.5.7)
where iξ (0) is a maximum value of IDD iξ (τ ) of stochastic process ξ (t).
Information quantity ∆I (tj , t ∈ [a, b]) contained in a sample ξ (tj ) concerning
the instantaneous values of stochastic process ξ (t) within time interval [a, b] ⊂ Tξ
is equal to:
Zb
∆I (tj , t ∈ [a, b]) = iξ (tj , t)dt. (3.5.8)
a

Thus, IDD iξ (tj , t) of the sample ξ (tj ) of stochastic process ξ (t) may be defined
as the limit of the ratio of information quantity ∆I (tj , t ∈ [t0 , t0 + ∆t]), contained
in the sample ξ (tj ) concerning the instantaneous values of stochastic process ξ (t)
within the interval [t0 , t0 + ∆t], to the value of this interval ∆t, while the last one
tends to zero:
∆I (tj , t ∈ [t0 , t0 + ∆t])
iξ (tj , t) = lim .
∆t→0 ∆t
The last relationship implies that IDD iξ (tα , t) is a dimensional function, and its
dimensionality is inversely proportional to the dimensionality of the time parameter.
Thus, one can draw the following conclusion: on the one hand, IDD iξ (tα , t)
of stochastic process ξ (t) characterizes a distribution of information, contained in
a single sample ξ (tα ) concerning all the instantaneous values of stochastic process
ξ (t). On the other hand, IDD iξ (tα , t) characterizes a distribution of information
along the samples {ξ (tα )} of stochastic process ξ (t) concerning the sample ξ (tα ).
IDD is inherent to any sample of stochastic process owing to a sample is an
element of a collection of statistically interconnected other samples {ξ (tα )} forming
as the stochastic process ξ (t) and cannot be associated with an arbitrary random
variable as opposed to the sample ξ (tα ) of stochastic process ξ (t). So, a single
random variable considered outside the statistical collection of random variables
does not carry any information.
We can generalize the properties of quantity of mutual information Ijk of a pair
of samples ξ (tj ) and ξ (tk ) of stochastic process ξ (t):

1. Quantity of mutual information Ijk is nonnegative and bounded:

0 ≤ Ijk ≤ Ijj = 1.

2. Quantity of mutual information Ijk is symmetric: Ijk = Ikj .


3. Quantity of mutual information Ijk is identical to normalized function of
statistical interrelationship (NFSI) ψξ (tj , tk ) between the samples ξ (tj )
and ξ (tk ) of stochastic process ξ (t):

Ijk = ψξ (tj , tk ).
3.5 Informational Characteristics and Properties of Stochastic Processes 113

The last identity allows us to define NFSI ψξ (tj , tk ), introduced earlier by the
relationship (3.1.2), based on quantity of mutual information Ijk of a pair of samples
ξ (tj ) and ξ (tk ) of stochastic process ξ (t). It is obvious that the first three properties
of NFSI are similar to the properties of quantity of mutual information on the list
above, and a list of the general properties of NFSI ψξ (tj , tk ) of stochastic processes
possessing the ability to carry information looks like:

1. NFSI ψξ (tj , tk ) is nonnegative and is bounded:

0 ≤ ψξ (tj , tk ) ≤ ψξ (tj , tj ) = 1.

2. NFSI ψξ (tj , tk ) is symmetric: ψξ (tj , tk ) = ψξ (tk , tj ).


3. NFSI ψξ (tj , tk ) is not differentiable in the point (tj , tj ), and its derivatives
in this point on the left ψξ0 (tj , tj − 0) and on the right ψξ0 (tj , tj + 0) are
different in sign and the same in module:

ψξ0 (tj , tj − 0) = −ψξ0 (tj , tj + 0).

4. Modules of derivatives of NFSI in the point (tj , tj ) on the left and on the
right are not equal to zero:

|ψξ0 (tj , tj − 0)| = |ψξ0 (tj , tj + 0)| 6= 0.

Normalized functions of statistical interrelation (NFSIs) ψξ (τ ) of stationary


stochastic processes possess the following properties:

1. 0 ≤ ψξ (τ ) ≤ ψξ (0) (nonnegativity and boundedness).


2. ψξ (τ ) = ψξ (−τ ) (evenness).
3. lim ψξ (τ ) = 0.
τ →∞
4. NFSI ψξ (τ ) is not a differentiable function in the point τ = 0, and its
derivatives in this point on the left ψξ0 (τ − 0) and on the right ψξ0 (τ + 0)
are different in sign and the same in module:

ψξ0 (τ − 0) = −ψξ0 (τ + 0);

5. Modules of derivatives of NFSI τ = 0 on the left and on the right are not
equal to zero:
|ψξ0 (τ − 0)| = |ψξ0 (τ + 0)| 6= 0.

6. Fourier transform F of NFSI ψξ (τ ) of stationary stochastic process ξ (t) is


a nonnegative function:
Z∞
1
F [ψξ (τ )] = ψξ (τ ) exp[−iωτ ]dτ ≥ 0.

−∞
114 3 Informational Characteristics and Properties of Stochastic Processes

Taking into account the aforementioned considerations, the corollary list of the
Theorem 3.1.1 may be continued.

Corollary 3.5.1. Under bijective mapping (3.1.17), the measures of all sorts of
information contained in a pair of samples ξ (tj ) and ξ (tk ) of stochastic process ξ (t)
are preserved:
Quantity of mutual information Ijk :

Ijk = m(Xj · Xk ) = ψξ (tj , tk ) = ψη (t0j , t0k ) = m(Yj · Yk ); (3.5.9)

Quantity of absolute information Ij :

Ij = m(Xj ) = ψξ (tj , tj ) = ψη (t0j , t0j ) = m(Yj ) = 1; (3.5.10)


+
Quantity of overall information Ijk :
+
Ijk = m(Xj + Xk ) = 2 − ψξ (tj , tk ) = 2 − ψη (t0j , t0k ) = m(Yj + Yk ); (3.5.11)

Quantity of particular relative information Ijk :

Ijk = m(Xj − Xk ) = 1 − ψξ (tj , tk ) = 1 − ψη (t0j , t0k ) = m(Yj − Yk ); (3.5.12)


Quantity of relative information Ijk :

Ijk = m(Xj ∆Xk ) = 2(1 − ψξ (tj , tk )) = 2(1 − ψη (t0j , t0k )) = m(Yj ∆Yk ), (3.5.13)

where Xj = ϕ[iξ (tj , t)]; Xk = ϕ[iξ (tk , t)]; Yj = ϕ[iη (t0j , t0 )]; Yk = ϕ[iη (t0k , t0 )] are
the ordinate sets of IDDs iξ (tα , t), iη (tβ , t) of stochastic processes ξ (t) and η (t),
respectively, defined by the formula (3.5.1).

Corollary 3.5.2. Under bijective mapping of stochastic processes ξ (t) (3.1.17), the
differential of information quantity iξ (tj , t)dt is preserved:

iξ (tj , t)dt = iη (t0j , t0 )dt0 . (3.5.14)

Corollary 3.5.3. Under bijective mapping (3.1.17), overall quantity of information


I [ξ (t)]S contained in stochastic process ξ (t) considered a collection of the elements
X = Xα where every element Xα corresponds to IDD iξ (tα , t) of the sample ξ (tα )
α
is preserved:
[ [
I [ξ (t)] = m(X ) = m[ Xα ] = m[ Yβ ] = m(Y ) = I [η (t0 )], (3.5.15)
α β

where X is the result of mapping


S (3.5.1) of stochastic process ξ (t) into the set of
ordinate sets {Xα }: X = Xα .
α
3.5 Informational Characteristics and Properties of Stochastic Processes 115

Corollary 3.5.4. Under bijective mapping (3.1.17), relative quantity of informa-


tion I∆
S[ξ (t)] contained in stochastic process ξ (t) considered a collection of elements
X = Xα , where every element Xα corresponds to IDD iξ (tα , t) of the sample
α
ξ (tα ), is preserved:

I∆ [ξ (t)] = m(∆ Xα ) = m(∆ Yβ ) = I∆ [η (t0 )], (3.5.16)


α β

where X is the result of mapping


S (3.5.1) of stochastic process ξ (t) into the set of
ordinate sets {Xα }: X = Xα .
α

We now introduce the characterization of informational properties of two


stochastic processes ξ (t) and η (t0 ).
Each pair of samples ξ (tj ), η (t0k ) of stochastic processes ξ (t) and η (t0 ) may
be associated with two functions iξη (t0k , t) and iηξ (tj , t0 ) that we shall call mutual
information distribution density (mutual IDD).
Mutual IDD iξη (t0k , t) characterizes the distribution of information along the
time parameter t (along the samples {ξ (tα )} of stochastic process ξ (t)) concerning
the sample η (t0k ) of stochastic process η (t0 ) and is connected with mutual NFSI
ψξη (tj , t0k ) (see Definition 3.1.2) by the relationship:

ψξη (tj , t0k ) = 1 − d[iξη (t0k , t); iξ (tj , t)]; (3.5.17)


Z∞
0 1
d[iξη (tk , t); iξ (tj , t)] = |iξη (t0k , t) − iξ (tj , t)|dt, (3.5.17a)
2
−∞

where d[iξη (t0k , t); iξ (tj , t)] is the metric between mutual IDD iξη (t0k , t) of the sample
η (t0k ) and IDD iξ (tj , t) of the sample ξ (tj ).
Definition 3.5.9. The function iξη (t0k , t) connected with mutual NFSI ψξη (tj , t0k )
and IDD iξ (tj , t) by the relationship (3.5.17) is called mutual information distribu-
tion density (mutual IDD).
The function d[iξη (t0k , t); iξ (tj , t)] may be considered a metric between the sam-
ples ξ (tj ), η (t0k ) of stochastic processes ξ (t) and η (t0 ), respectively. According to
(3.5.17), the functions iξη (t0k , t), iξ (tj , t) and ψξη (tj , t0k ), d[iξη (t0k , t); iξ (tj , t)] are con-
nected by the identity:

ψξη (tj , t0k ) + d[iξη (t0k , t); iξ (tj , t)] = 1. (3.5.18)

Defining the quantities ψξη and ρξη by:

ψξη = sup ψξη (tj , t0k ); (3.5.19a)


tj , t0k ∈]−∞,∞[

ρξη = inf d[iξη (t0k , t); iξ (tj , t)], (3.5.19b)


tj , t0k ∈]−∞,∞[

we obtain a closeness measure and a distance measure for stochastic processes ξ (t)
and η (t0 ), respectively.
116 3 Informational Characteristics and Properties of Stochastic Processes

The quantity ψξη determined by the formulas (3.1.9a) and (3.5.19a) we shall
call coefficient of statistical interrelation, and the quantity ρξη determined by the
formula (3.5.1b), we shall call metric between stochastic processes ξ (t) and η (t0 ).
As against the metric dξη defined by the formula (3.1.9b), the metric ρξη (3.5.19b)
is based on a distance between mutual IDD and IDD of the samples of stochastic
processes ξ (t) and η (t0 ).
Coefficient of statistical interrelation ψξη of stochastic processes ξ (t), η (t0 ) and
the metric ρξη between them are connected by a relationship similar to the identity
(3.1.8):
ψξη + ρξη = 1. (3.5.20)
Mutual IDD iξη (t0k , t) along with IDD iη (t0k , t0 ) are characteristics of a single
sample η (t0k ) of the sample collection {η (t0β )} of stochastic process η (t0 ), but unlike
IDD, the sample η (t0k ) is considered within its relation with the sample collection
{ξ (tα )} of another stochastic process ξ (t).
Mutual IDD iηξ (tj , t0 ) characterizes the distribution of information along the
time parameter t0 (along the samples {η (t0β )} of stochastic process η (t0 ) concerning
the sample ξ (tj ) of stochastic process ξ (t)) and is connected with mutual NFSI
ψηξ (t0k , tj ) by the equation:

ψηξ (t0k , tj ) = 1 − d[iηξ (tj , t0 ); iη (t0k , t0 )], (3.5.21)


R∞
where d[iηξ (tj , t0 ); iη (t0k , t0 )] = 1
2 |iηξ (tj , t0 ) − iη (t0k , t0 )|dt0 is the metric between
−∞
mutual IDD iηξ (tj , t0 ) of the sample ξ (tj ) and IDD iη (t0k , t0 ) of the sample η (t0k ).
Mutual IDD iηξ (tj , t0 ), also as IDD iξ (tj , t), is a characteristic of a single sam-
ple ξ (tj ) of the sample collection {ξ (tα )} of stochastic process ξ (t), but, in this
case, unlike IDD, the sample ξ (tj ) is considered within its relation with the sample
collection {η (t0β )} of another stochastic process η (t0 ).
The values of the functions ψξη (tj , t0k ) and ψηξ (t0k , tj ) determined by the formulas
(3.5.17) and (3.5.21) characterize the quantity of mutual information Ijk (Ijk = Ikj )
contained in both the sample ξ (tj ) and the sample η (t0k ) of stochastic processes ξ (t)
and η (t0 ):
ψξη (tj , t0k ) = ψηξ (t0k , tj ) = Ijk . (3.5.22)
The relations between IDDs iξ (tj , t), iη (t0k , t0 ) and mutual IDDs ψξη (tj , t0k ),
ψηξ (t0k , tj ) of the samples ξ (tj ), η (t0k ), respectively, are defined by the consequence
from the identity (3.5.22):
Z∞ Z∞
|iξη (t0k , t) − iξ (tj , t)|dt = |iηξ (tj , t0 ) − iη (t0k , t0 )|dt0 . (3.5.23)
−∞ −∞

Consider the main properties of mutual IDD iξη (t0k , t) of a pair of stochastic pro-
cesses ξ (t) and η (t0 ).

1. Mutual IDD is nonnegative: iξη (t0k , t) ≥ 0.


3.5 Informational Characteristics and Properties of Stochastic Processes 117
R∞
2. Mutual IDD is normalized: iξη (t0k , t)dt = 1.
−∞
3. In general case, mutual IDD is not even: iξη (t0k , t) 6= iξη (t0k , −t).
4. Information quantity ∆I (t0k , t ∈ [a, b]) contained in the sample η (t0k ) con-
cerning the instantaneous values of stochastic process ξ (t) within the time
Rb
interval [a, b] is equal to ∆I (t0k , t ∈ [a, b]) = iξη (t0k , t)dt.
a
5. Metric relationships between IDDs iξ (tj , t), iη (t0k , t0 ) and mutual IDDs
iηξ (tj , t0 ), iξη (t0k , t) of the samples ξ (tj ), η (t0k ) of stochastic processes ξ (t)
and η (t0 ), respectively, are determined by the formula (3.5.23).
6. For a pair of stochastic processes ξ (t) and η (t0 ), each sample ξ (tj ) of
stochastic process ξ (t) may be associated with the sample η (t0k ) of stochas-
tic process η (t0 ), which is such that metric d[iξη (t0k , t); iξ (tj , t)] between
mutual IDD iξη (t0k , t) of the sample η (t0k ) and IDD iξ (tj , t) of the sample
ξ (tj ) is minimal:
Z∞
1
d[iξη (t0k , t); iξ (tj , t)] = |iξη (t0k , t) − iξ (tj , t)|dt → min . (3.5.24)
2
−∞

7. For stationary coupled stochastic processes ξ (t) and η (t0 ), the metric
(3.5.24) between mutual IDD iξη (t0k , t) of the sample η (t0k ) and IDD iξ (tj , t)
of the sample ξ (tj ) depends only on the time difference τ = t0k −tj between
the samples η (t0k ) and ξ (tj ) of stochastic processes η (t0 ) and ξ (t):
Z∞
1
d[iξη (t0k , t); iξ (tj , t)] = |iξη (tj + τ, t) − iξ (tj , t)|dt =
2
−∞

= d[iξη (tj + τ, t); iξ (tj , t)] = d[τ ].


(a) For stationary coupled stochastic processes ξ (t) and η (t0 ), the mutual
NFSI ψξη (tj , t0k ) depends only on the time difference τ = t0k − tj
between the samples η (t0k ) and ξ (tj ) of stochastic processes η (t0 ) and
ξ (t) respectively:
ψξη (tj , t0k ) = ψξη (τ ) = 1 − d[τ ].

8. Let ξ (t) and η (t0 ) be functionally interconnected stochastic processes:


η (t0k ) = f [ξ (t)]; ξ (t) = f −1 [η (t0k )], (3.5.25)
where f [∗] is a one-to-one function.
Then for each sample η (t0k ) of stochastic process η (t0 ), there exists such a
sample ξ (tj ) of stochastic process ξ (t), that mutual IDD iξη (t0k , t) of the
sample η (t0k ) and IDD iξ (tj , t) of the sample ξ (tj ) are identically equal:
iξη (t0k , t) ≡ iξ (tj , t).
118 3 Informational Characteristics and Properties of Stochastic Processes

(a) Let ξ (t) and η (t0 ) are functionally interconnected stochastic processes
(3.5.25). Then for each sample η (t0k ) of stochastic process η (t0 ), there
exists such a sample ξ (tj ) of stochastic process ξ (t), that the metric
(3.5.24) between mutual IDD iξη (t0k , t) of the sample η (t0k ) and IDD
iξ (tj , t) of the sample ξ (tj ) is equal to zero:
d[iξη (t0k , t); iξ (tj , t)] = 0.

The eight properties of mutual IDD directly imply general properties of mutual
NFSI.

1. Mutual NFSI ψξη (tj , t0k ) is nonnegative and bounded: 0 ≤ ψξη (tj , t0k ) ≤ 1.
2. Mutual NFSI ψξη (tj , t0k ) is symmetric:
ψξη (tj , t0k ) = ψηξ (t0k , tj ).

3. The equality ψξη (tj , t0k ) = 1 holds if and only if stochastic processes ξ (t),
η (t0 ) are connected by one-to-one correspondence (3.5.25).
4. The equality ψξη (tj , t0k ) = 0 holds if and only if stochastic processes ξ (t),
η (t0 ) are statistically independent.
Let each IDD iξ (tj , t), iη (t0k , t0 ) and also each mutual IDD iηξ (tj , t0 ), iξη (t0k , t) of
an arbitrary pair of samples ξ (tj ), η (t0k ) of statistically dependent (in general case)
stochastic processes ξ (t), η (t0 ) be associated with their ordinate sets Xj , Yk , XjY ,
YkX (see Fig. 3.5.2) by the mapping ϕ (3.5.1):
Xj = ϕ[iξ (tj , t)] = {(t, y ) : t ∈ Tξ ; 0 ≤ y ≤ iξ (tj , t)}; (3.5.26a)
Yk = ϕ[iη (t0k , t0 )] = {(t0 , z ) : t0 ∈ Tη ; 0 ≤ z ≤ iη (t0k , t0 )}; (3.5.26b)
0 0 0 0
XjY = ϕ[iηξ (tj , t )] = {(t , z ) : t ∈ Tξ , t ∈ Tη ; 0 ≤ z ≤ iηξ (tj , t )}; (3.5.26c)
YkX = ϕ[iξη (t0k , t)] 0
= {(t, y ) : t ∈ Tξ , t ∈ Tη ; 0 ≤ y ≤ iξη (t0k , t)}, (3.5.26d)
where Tξ = [t0 , t0 + Tx ], Tη = [t00 , t00 + Ty ] are the domains of definitions of stochastic
processes ξ (t), η (t0 ), respectively.
For an arbitrary pair of the samples ξ (tj ), η (t0k ) of statistically dependent stochas-
tic processes ξ (t), η (t0 ) with IDDs iξ (tj , t), iη (t0k , t0 ) and mutual IDDs iηξ (tj , t0 ),
iξη (t0k , t), the metric identities hold:

dξη 0
jk = d[iξ (tj , t); iξη (tk , t)] =
Z
1 1
= |iξ (tj , t) − iξη (t0k , t)|dt = m(Xj ∆YkX ); (3.5.27a)
2 2

dηξ 0 0 0
kj = d[iη (tk , t ); iηξ (tj , t )] =
Z
1 1
= |iη (t0k , t0 ) − iηξ (tj , t0 )|dt0 = m(XjY ∆Yk ), (3.5.27b)
2 2

3.5 Informational Characteristics and Properties of Stochastic Processes 119

FIGURE 3.5.2 IDD iξ (tj , t) and mutual IDD iξη (t0k , t) of samples ξ(tj ), η(t0k ) of stochastic
processes ξ(t), η(t0 )

where dξη ηξ 0
jk /dkj are the metrics between mutual IDD of the sample η (tk )/ξ (tj ) and
0
IDD of the sample ξ (tj )/η (tk ), respectively.
According to the equality (3.5.23), the metric identity holds:
1 1
m(Xj ∆YkX ) = m(XjY ∆Yk ).
2 2
Besides, for an arbitrary pair of the samples ξ (tj )/η (t0k ) and ξ (tl )/η (t0m ) of stochastic
process ξ (t)/η (t0 ), the metric identities hold (see relationship (3.5.2)):

dξjl = d[iξ (tj , t); iξ (tl , t)] =


Z
1 1
= |iξ (tj , t) − iξ (tl , t)|dt = m(Xj ∆Xl ); (3.5.28a)
2 2

dηkm = d[iη (t0k , t0 ); iη (t0m , t0 )] =


Z
1 1
= |iη (t0k , t0 ) − iη (t0m , t0 )|dt0 = m(Yk ∆Ym ). (3.5.28b)
2 2

The mappings (3.5.26) transform IDDs iξ (tj , t), iη (t0k , t0 ) and mutual IDDs
iηξ (tj , t0 ), iξη (t0k , t) into the corresponding equivalent sets Xj , Yk , XjY , YkX , defin-
ing isometric mapping of the function space (I, dξη ξ η
jk , djl , dkm ) of stochastic pro-
cesses ξ (t), η (t0 ) with metrics dξη ξ η
jk (3.5.27) and djl , dkm (3.5.28) into the set space
(Ω, dξη ξ η 1 X 1 1
jk , djl , dkm ) with metrics 2 m(Xj ∆Yk ), 2 m(Xj ∆Xl ), 2 m(Yk ∆Ym ).
The normalization property of IDD and mutual IDD of a pair of samples ξ (tj ),
η (t0k ) of stochastic processes ξ (t), η (t0 ) provides the normalization of the measures
m(Xj ), m(Yk ), m(XjY ), m(YkX ):
Z Z
m(Xj ) = iξ (tj , t)dt = 1, m(Xj ) = iηξ (tj , t0 )dt0 = 1;
Y

Tξ Tη
Z Z
m(Yk ) = iη (t0k , t0 )dt0 = 1, m(YkX ) = iξη (t0k , t)dt = 1.
Tη Tξ
120 3 Informational Characteristics and Properties of Stochastic Processes

The mappings (3.5.26) allow us to consider stochastic processes (signals) ξ (t), η (t0 )
with IDDs iξ (tj , t), iη (t0k , t0 ) and mutual IDDs iηξ (tj , t0 ), iξη (t0k , t) as the collections
of statistically dependent samples {ξ (tα )}, {η (t0β )}, and also as the collections of
sets X = {Xα }, Y = {Yβ }:
[ [
ϕ : ξ (t) → X = Xα ; η (t0 ) → Y = Yβ .
α β

Besides, the sets X ⊂ Ω, Y ⊂ Ω form the subalgebra B(X + Y ) of generalized


Boolean algebra B(Ω) with a measure m and signature (+, ·, −, O) of the type
(2, 2, 2, 0), with operations of addition, multiplication, subtraction (relative com-
plement operation), and null element O [223], [175], [176].
On the base of mappings (3.5.26) and considering an arbitrary pair of stochastic
processes ξ (t), η (t0 ) a collection of statistically dependent samples {ξ (tα )}, {η (t0β )}
and also a subalgebra B(X +Y ) of generalized Boolean algebra B(Ω) with a measure
m and signature (+, ·, −, O) of the type (2, 2, 2, 0), one may define a measure of a
collection of the elements of this pair of signals ξ (t), η (t0 ) as a quantity of mutual
information introduced by the following definition.
Definition 3.5.10. Quantity of mutual information I [ξ (t), η (t0 )] = IXY contained
in an arbitrary pair of signals ξ (t) and η (t0 ) (sets X and Y , respectively) is an
information quantity contained in a set that is the product of the elements X and
Y of generalized Boolean algebra B(Ω) with a measure m:
X X X X
I [ξ (t), η (t0 )] = IXY = m(X · Y ) = m[ (XαY · Yβ )] = m[ (YβX · Xα )],
α β β α

where X and Y are the results of mappings (3.5.1) of stochastic processes ξ (t) and
η (t0 ) into the sets of the ordinate sets {Xα }, {Yβ }: X = Xα , Y = Yβ ; Xα , YβX
S S
α β
are the results of mappings (3.5.26) of IDD iξ (tα , t) and mutual IDD iξη (t0β , t) of
the samples ξ (tα ) and η (t0β ) of stochastic processes ξ (t) and η (t0 ).
The properties 1 through 4 of mutual NFSI (see page 118) of stochastic processes
ξ (t) and η (t0 ) may be supplemented with the following theorem.
Theorem 3.5.1. Let f be a bijective mapping of stochastic processes ξ (t) and η (t0 ),
defined in the intervals Tξ = [t0ξ , tξ ], Tη = [t0η , tη ], into stochastic processes α(t00 )
and β (t000 ) defined in the intervals Tα = [t0α , tα ], Tβ = [t0β , tβ ], respectively:
α(t00 ) = f [ξ (t)], ξ (t) = f −1 [α(t00 )]; (3.5.29a)
β (t000 ) = f [η (t0 )], η (t0 ) = f −1 [β (t000 )]. (3.5.29b)
Then the function f that maps the stochastic processes ξ (t) and η (t0 ) into stochastic
processes α(t00 ) and β (t000 ), respectively, preserves mutual NFSI:
ψξη (tj , t0k ) = ψαβ (t00j , t000
k ),

where ψξη (tj , t0k ), ψαβ (t00j , t000


k ) are mutual NFSIs of the pairs of stochastic processes
ξ (t), η (t0 ) and α(t00 ), β (t000 ), respectively.
3.5 Informational Characteristics and Properties of Stochastic Processes 121

The proof of the Theorem 3.5.1 is similar to the proof of the Theorem 3.1.1.

Corollary 3.5.5. Under bijective mappings (3.5.29), the measures of all sorts of
information contained in a pair of the samples ξ (tj ), η (t0k ) of stochastic processes
ξ (t) and η (t0 ) are preserved:
Quantity of mutual information Ijk :

Ijk = m(Xj · YkX ) = ψξη (tj , t0k ) = ψαβ (t00j , t000 A


k ) = m(Aj · Bk ); (3.5.30)
+
Quantity of overall information Ijk :
+
Ijk = m(Xj + YkX ) = 2 − ψξη (tj , t0k ) = 2 − ψαβ (t00j , t000 A
k ) = m(Aj + Bk ); (3.5.31)


Quantity of particular relative information Ijk :

Ijk = m(Xj − YkX ) = 1 − ψξη (tj , t0k ) = 1 − ψαβ (t00j , t000 A
k ) = m(Aj − Bk ); (3.5.32)


Quantity of relative information Ijk :

Ijk = m(Xj ∆YkX ) = 2(1−ψξη (tj , t0k )) = 2(1−ψαβ (t00j , t000 A
k )) = m(Aj ∆Bk ), (3.5.33)

where Xj = ϕ[iξ (tj , t)]; YkX = ϕ[iξη (t0k , t)]; Aj = ϕ[iα (t00j , t00 )]; BkA = ϕ[iαβ (t000 000
k , t )]
0 0 0
are ordinate sets of IDDs iξ (tj , t), iξη (tk , t ) of stochastic processes ξ (t) and η (t ) and
also IDDs iα (t00j , t00 ), iαβ (t000 000 00 000
k , t ) of stochastic processes α(t ) and β (t ) respectively,
defined by the relationships (3.5.26).

Corollary 3.5.6. Under bijective mappings (3.5.29), the quantity of mutual in-
formation I [ξ (t), η (t0 )] contained in a pair of stochastic processes ξ (t) and η (t0 ), is
preserved:
X X
I [ξ (t), η (t0 )] = m(X, Y ) = m[ (YβX · Xα )] =
β α
X X
= m[ (BδA · Aγ )] = m(A, B ) = I [α(t00 ), β (t000 )], (3.5.34)
δ γ

where Xα , YβX are the results of mappings (3.5.26) of IDD iξ (tα , t) and mutual IDD
iξη (t0β , t) of the samples ξ (tα ) and η (t0β ) of stochastic processes ξ (t) and η (t0 ); X,
Y are the results of mappings (3.5.1) of S stochastic processes ξ (t) and η (t0 ) into the
sets of ordinate sets {Xα }, {Yβ }: X = Xα , Y = Yβ ; Aγ , BδA are the results
S
α β
of mappings (3.5.26) of IDD iα (t00γ , t00 ) and mutual IDD iαβ (t000 00
δ , t ) of the samples
00 000 00 000
α(tγ ) and β (tδ ) of stochastic processes α(t ) and β (t ); A, B are the results of
mappings (3.5.1) of stochastic
S processes
S α(t00 ) and β (t000 ) into the sets of ordinate
sets {Aγ }, {Bδ }: A = Aγ , B = Bδ .
γ δ

Summarizing the aforementioned considerations, the mapping ϕ defined by the


122 3 Informational Characteristics and Properties of Stochastic Processes

relationships (3.5.26) realizes morphism of physical signal space Γ = {ξ (t), η (t0 ), . . .}


into informational signal space Ω = {X, Y, . . .}:
[ [
ϕ : ξ (t) → X = Xα ; η (t0 ) → Y = Yβ ; Γ → Ω, (3.5.35)
α β

whose properties are the subject of Chapter 4.


Invariants of bijective mappings of stochastic processes in the form of NFSI,
mutual NFSI, IDD and mutual IDD, introduced on the base of Definitions 3.1.1,
3.1.2, 3.4.1, and 3.5.9, respectively, with the help of the mappings (3.5.26), allow us
to use generalized Boolean algebra with a measure to describe physical and informa-
tional signal interactions in physical and informational signal spaces, respectively.
Measure m on generalized Boolean algebra B(Ω), on the one hand, is a measure
of information quantity transferred by the signals forming the informational signal
space Ω, and on the other hand, induces a metric in this space.
Invariants of a group of mappings of stochastic processes in the form of IDD and
mutual IDD will be used to investigate characteristics and properties of signal space
built upon generalized Boolean algebra with a measure, and also to establish the
main quantitative regularities characterizing the signal functional transformations
with special properties used in signal processing.
On the basis of invariants of a group of mappings of stochastic processes in
the form of IDD, mutual IDD, NFSI and PMSI, the informational and metric
relationships between the signals, which interact in signal spaces with the distinct
algebraic properties, will be established.
NFSI and IDD will be used to evaluate the quantities of information transferred
by discrete and continuous signals and determine noiseless communication channel
capacity.
NMSI will be used to investigate the potential quality indices of signal processing
in signal spaces with distinct algebraic properties.
A pair of invariants, such as NFSI and HSD connected by Fourier transform,
may be used for analysis of nonlinear transformations of stochastic processes.
Invariants of group of mappings of stochastic processes in the form of mutual
NFSI, NMSI and PMSI may be used while synthesizing algorithms and units of
multichannel signal processing under prior uncertainty conditions in the presence
of interference (noise).
4
Signal Spaces with Lattice Properties

The principal ideas and results of Chapters 2 and 3 taking into account the re-
quirements to both a signal space concept and a measure of information quantity
formulated in Section 1.3, allow using the apparatus of generalized Boolean algebra
to construct the models of spaces where informational and real physical signal in-
teractions occur. On the other hand, these ideas and results permit us to establish
the main regularities and relationships of signal interactions in such spaces.
However, one should clearly distinguish these two models of signal spaces and
their main geometrical and informational properties. Within the framework of the
signal space model, it is necessary to obtain results that define quantitative infor-
mational relationships between the signals based on their homomorphic and iso-
morphic mappings. It is interesting to investigate the quantitative informational
relationships between the signals and the results of their interactions in a signal
space with properties of additive commutative group (in linear space) and also in
signal spaces with other algebraic properties.

4.1 Physical and Informational Signal Spaces


The notions of information distribution density (IDD) and mutual IDD, introduced
in Sections 3.4 and 3.5, provide an entire informational description of stochastic
process (signal). Such a description is complete in the sense that it characterizes a
stochastic process (a signal) within its interrelation with other stochastic processes
(signals) from physical signal space Γ = {ξ (t), η (t), . . .}. The main informational
relationships between an arbitrary pair of signals ξ (t) and η (t) from the space Γ
are determined by IDD and mutual IDD of their samples {ξ (tα )} and {η (tβ )},
respectively.

Definition 4.1.1. The set Γ of stochastic signals {ξ (t), η (t), . . .} interacting with
each other forms the physical signal space that is a commutative semigroup SG =
(Γ, ⊕) with respect to a binary operation ⊕ with the following properties:

ξ (t) ⊕ η (t) ∈ Γ (closure);

ξ (t) ⊕ η (t) = η (t) ⊕ ξ (t) (commutativity);


ξ (t) ⊕ (η (t) ⊕ ζ (t)) = (ξ (t) ⊕ η (t)) ⊕ ζ (t) (associativity).

123
124 4 Signal Spaces with Lattice Properties

Definition 4.1.2. Binary operation ⊕: χ(t) = ξ (t) ⊕ η (t) between the elements
ξ (t) and η (t) of semigroup SG = (Γ, ⊕) is called physical interaction of the signals
ξ (t) and η (t) in physical signal space Γ.

The mappings (3.5.26) allow us to consider the stochastic processes (signals) ξ (t)
and η (t0 ) with IDDs iξ (tj , t) and iη (t0k , t0 ) and mutual IDDs iηξ (tj , t0 ) and iξη (t0k , t)
as the collections of statistically dependent samples {ξ (tα )}, {η (t0β )} and also as
the collections of sets X = {Xα }, Y = {Yβ }:
[ [
ϕ : ξ (t) → X = Xα ; η (t0 ) → Y = Yβ . (4.1.1)
α β

The sets X ⊂ Ω, Y ⊂ Ω form the subalgebra B(X + Y ) of generalized Boolean


algebra B(Ω) with a measure m and signature (+, ·, −, O) of the type (2, 2, 2, 0)
with the operations of addition, multiplication, subtraction (relative complement
operation), and null element O [175], [176], [223].
The mapping ϕ (4.1.1) realizes a morphism of physical signal space Γ =
{ξ (t), η (t0 ), . . .} into informational signal space Ω = {X, Y, . . .}:

ϕ: Γ → Ω. (4.1.2)

The mapping ϕ (4.1.1) describes the stochastic processes (signals) from general
algebraic positions on the basis of the notion of information carrier space, whose
content is elucidated in Chapter 2.
In order to fix the interrelation (4.1.2) between physical and informational sig-
nal spaces while denoting the signals ξ (t) and η (t) and their images X and Y ,
connected by the relationship (4.1.1), the notations ξ (t)/X and η (t)/Y will be
used, respectively.
Generally, in physical signal space Γ = {ξ (t), η (t), . . .}, the instantaneous values
{ξ (tα )} and {η (tβ )} of the signals ξ (t) and η (t) interact in the same coordinate
dimension according to the Definition 4.1.2: ξ (tα ) ⊕ η (tα ). Since the interrelations
between informational characteristics of the signals are considered in physical and
informational signal spaces, to provide the commonality of approach, we suppose
that the interaction of instantaneous values {ξ (tα )}, {η (t0β )} of the signals ξ (t), η (t0 )
in physical signal space, in a general case, may be realized in the distinct reference
systems ξ (tα ) ⊕ η (t0α ).
We define the notion of informational signal space Ω, using the analogy with
the notion of information carrier space (see Section 2.1), on the basis of apparatus
of Boolean algebra B(Ω) with a measure m.

Definition 4.1.3. Informational signal space Ω is a set of the elements {X, Y, . . .}:
{X, Y, . . .} ⊂ Ω: {ϕ : ξ (t) → X, η (t0 ) → Y, . . .} characterized by the following
general properties.

1. The space Ω is generalized Boolean algebra B(Ω) with a measure m and


signature (+, ·, −, O) of the type (2,2,2,0).
4.1 Physical and Informational Signal Spaces 125

2. In the space Γ/Ω, the signal ξ (t)/X with IDD iξ (tα , t) and NFSI ψξ (tα , tβ )
forms a collection of the elements {ξ (tα )}/{Xα } with normalized measure:

m(Xα ) = m(Xα · Xα ) = ψξ (tα , tα ) = 1.

3. In the space Γ/Ω, the signal ξ (t)/X with IDD iξ (tα , t) possesses
the property of continuity if, for an arbitrary pair of the elements
ξ (tα ), ξ (tβ )/Xα , Xβ , there exists such an element ξ (tγ )/Xγ , that: tα <
tγ < tβ (α < γ < β ). We call this a signal with continuous structure.
The signals that do not possess this property are called the signals with
discrete structure.
4. In the space Ω, the following operations are defined over the elements
{Xα } ⊂ X, {Yβ } ⊂ Y , X ∈ Ω, Y ∈ Ω that characterize informational
interrelations between them:
P P
(a) Addition — X + Y = Xα + Yβ ;
α β
P P
(b) Multiplication — Xα1 ·Xα2 , Yβ1 ·Yβ2 , Xα ·Yβ , X ·Y = ( Xα ) · ( Yβ );
α β
(c) Difference — Xα1 − Xα2 , Yβ1 − Yβ2 , Xα − Yβ , X − Y = X − (X · Y );
(d) Symmetric difference — Xα1 ∆Xα2 , Yβ1 ∆Yβ2 , Xα ∆Yβ , X ∆Y = (X −
Y ) + (Y − X );
(e) Null element O: X ∆X = O; X − X = O.
5. A measure m introduces a metric upon generalized Boolean algebra B(Ω)
defining the distance ρ(X, Y ) between the signals ξ (t)/X and η (t)/Y (be-
tween the sets X, Y ∈ Ω) by the relationship:

ρ(X, Y ) = m(X ∆Y ) = m(X ) + m(Y ) − 2m(XY ), (4.1.3)

where X ∆Y is a symmetric difference of the sets X and Y .


Similarly, we define the distance ρ(Xα , Xβ ) between the single elements
Xα and Xβ of the signal ξ (t)/X: ρ(Xα , Xβ ) = m(Xα ∆Xβ ) and the dis-
tance ρ(Xα , Yβ ) between the single elements Xα and Yβ of the signals
ξ (t)/X and η (t0 )/Y , respectively:

ρ(Xα , Yβ ) = m(Xα ∆YβX ) = m(XαY ∆Yβ ).

Thus, the signal space Ω is metric space.


Iso
6. Isomorphism Iso of signal space Ω into the signal space Ω0 — Iso: Ω →
Ω0 , preserving a measure m: m(X ) = m(X 0 ), m(Y ) = m(Y 0 ), X, Y ⊂ Ω,
X 0 , Y 0 ⊂ Ω0 , also preserves metric induced by this measure:

ρ(X, Y ) = m(X ∆Y ) = m(X 0 ∆Y 0 ) = ρ(X 0 , Y 0 ).

Thus, the spaces Ω and Ω0 are isometric.


126 4 Signal Spaces with Lattice Properties

Informational properties of signal space Ω are introduced by Axiom 4.1.1 that


is analogous to Axiom 3.5.1 concerning the signal ξ (t)/X considered a subalgebra
B(X ) of generalized Boolean algebra B(Ω) with a measure m.

Axiom 4.1.1. Axiom of a measure of binary operation. Measure m(Z ) of the element
Z: Z = X ◦ Y ; X, Y, Z ∈ Ω considered as the result of binary operation ◦ of
generalized Boolean algebra B(Ω) with a measure m defines the information quantity
IZ = m(Z ) that corresponds to the result of this operation.

The axiom implies that a measure m of the element Z of the space Ω defines a
quantitative aspect of information contained in this element, while a binary oper-
ation “◦” of generalized Boolean algebra B(Ω) defines a qualitative aspect of this
information.
Within the S
framework of Axiom 4.1.1, depending on relations between the sig-
0
S
nals ξ (t)/X = Xα and η (t )/Y = Yβ , we shall distinguish the following types
α β
of information quantities. The same information quantities are denoted in a twofold
manner, for instance, as in Definitions 3.5.1 through 3.5.5, with respect to the signals
ξ (t) and η (t0 ) directly, and on the other hand, with respect to the sets X = Xα ,
S
S α
Y = Yβ associated with the signals by the mapping (4.1.1).
β

Definition 4.1.4. Quantity of absolute information I [ξ (t)] = IX (or I [η (t0 )] = IY )


is an information quantity contained in the signal ξ (t) (or η (t0 )) considered as
a collection of the elements {Xα } (or {Yβ }) in the consequence of its structural
diversity: X
I [ξ (t)] = IX = m(X ) = m( Xα );
α
X
I [η (t0 )] = IY = m(Y ) = m( Yβ ),
β
P P
where m( Xα ) and m( Yβ ) are measures of sums of the elements {Xα } and
α β
{Yβ } of the signals ξ (t) and η (t0 ), respectively.

Quantity of absolute information with respect to its content is identical to over-


all quantity of information (see Definition 3.5.7) and is calculated by the formula
(3.5.6).

Definition 4.1.5. Quantity of overall information I + [ξ (t), η (t0 )] = IX+Y contained


in an arbitrary pair of the signals ξ (t) and η (t0 ) (in the sets X and Y , respectively)
is an information quantity contained in the set that is the sum of the elements X
and Y of generalized Boolean algebra B(Ω):

I + [ξ (t), η (t0 )] = IX+Y = m(X + Y ) = m(X ) + m(Y ) − m(XY ).

Definition 4.1.6. Quantity of mutual information I [ξ (t), η (t0 )] = IXY contained


in an arbitrary pair of signals ξ (t) and η (t0 ) (in the sets X and Y , respectively) is
4.1 Physical and Informational Signal Spaces 127

an information quantity contained in the set that is the product of the elements X
and Y of generalized Boolean algebra B(Ω):
X X X X
I [ξ (t), η (t0 )] = IXY = m(X · Y ) = m[ (XαY · Yβ )] = m[ (YβX · Xα )],
α β β α

where X and Y are the results of the mapping (4.1.1) of stochastic processesS(sig-
nals) ξ (t) and η (t0 ) into the sets of ordinate sets {Xα } and {Yβ }: X = Xα ,
α
Y = Yβ ; Xα and YβX are the results of mappings (3.5.26) of IDD iξ (tα , t) and
S
β
mutual IDD iξη (t0β , t) of the samples ξ (tα ) and η (t0β ) of stochastic processes (signals)
ξ (t) and η (t0 ), respectively.
Remark 4.1.1. Quantity of absolute information IX contained in the signal ξ (t)
may be interpreted as the quantity of overall information IX+X = m(X + X ), or
as the quantity of mutual information IXX = m(X · X ) contained in the signal ξ (t)
with respect to itself.
Definition 4.1.7. Quantity of particular relative information I − [ξ (t), η (t0 )] =
IX−Y contained in the signal ξ (t)/X with respect to the signal η (t0 )/Y (or vice
versa, contained in the signal η (t0 )/Y with respect to the signal ξ (t)/X, i.e.,
I − [η (t0 ), ξ (t)] = IY −X ), is an information quantity contained in the difference be-
tween the elements X and Y of generalized Boolean algebra B(Ω):
I − [ξ (t), η (t0 )] = IX−Y = m(X − Y ) = m(X ) − m(XY );
I − [η (t0 ), ξ (t)] = IY −X = m(Y − X ) = m(Y ) − m(XY ).
Definition 4.1.8. Quantity of relative information I ∆ [ξ (t), η (t0 )] = IX∆Y con-
tained in the signal ξ (t)/X with respect to the signal η (t0 )/Y is an information
quantity contained in the symmetric difference between the elements X and Y of
generalized Boolean algebra B(Ω):
I ∆ [ξ (t), η (t0 )] = IX∆Y = m(X ∆Y ) = m(X − Y ) + m(Y − X ) = IX−Y + IY −X .
Quantity of relative information IX∆Y is identically equal to an introduced
metric:
IX∆Y = ρ(X, Y ).
Regarding the units of information quantity, as a unit of information quantity
in signal space Ω, we take the quantity of absolute information I [ξ (tα )] contained
in a single element ξ (tα ) of the signal ξ (t) with a domain of definition Tξ (in the
form of a discrete set or continuum) and information distribution density (IDD)
iξ (tα , t), and according to the relationship (3.5.3), it is equal to:
Z
I [ξ (tα )] = m(Xα )|tα ∈Tξ = iξ (tα ; t)dt = 1abit. (4.1.4)

So, the unit of information quantity in signal space Ω is introduced by definition


that is similar to the Definition 3.5.6.
128 4 Signal Spaces with Lattice Properties

Definition 4.1.9. In signal space Ω, a unit of information quantity is the quantity


of absolute information contained in a single element (the sample) ξ (tα ) of stochastic
process (signal) ξ (t) considered a subalgebra B(X ) of generalized Boolean algebra
B(Ω) with a measure m, and it is called absolute unit or abit (absolute unit).

The metric signal space Ω is an informational space and allows us to give the
following geometrical interpretation of the main introduced notions.
S
In signal space Ω, all the elements of the signal ξ (t)/X = Xα are situated on
α
the surface of some n-dimensional sphere Sp(O, R), whose center is null element O
of the space Ω, and its radius R is equal to 1:

R = m(Xα ∆O) = m(Xα ) = 1.

The distance from null element O of the space Ω to an arbitrary signal ξ (t)/X is
equal to a measure of this signal m(X ) or a quantity of absolute information IX
contained in this signal ξ (t):

ρ(X ∆O) = m(X ∆O) = m(X ) = IX .

It is obvious that quantity of absolute information IX of the signal ξ (t)/X is equiv-


alent to the quantity of relative information IX∆O contained in the signal ξ (t)/X
with respect to null element O:

IX = m(X ) = m(X ∆O) = IX∆O .

Measure m(X ) of the signal ξ (t)/X, or a quantity of absolute information IX con-


tained in this signal, may be interpreted as a length of the signal ξ (t) in metric
space Ω. Quantity of relative information IX∆Y contained in the signal ξ (t) with
respect to the signal η (t0 ), has a sense of a distance between the signals ξ (t) and
η (t0 ) (between the sets X and Y ) in signal space Ω:

IX∆Y = m(X ∆Y ) = ρ(X, Y ).

In informational signal space Ω, informational properties of the signals are charac-


terized by the following main relationships.
Group of the identities for an arbitrary pair of signals ξ (t)/X and η (t0 )/Y :

IX+Y = IX + IY − IXY ;

IX+Y = IX∆Y + IXY ;


IX∆Y = IX + IY − 2IXY ;
Group of the inequalities:

IX + IY ≥ IX+Y ≥ IX∆Y ;

IX + IY ≥ IX+Y ≥ max[IX , IY ];
4.1 Physical and Informational Signal Spaces 129
p
max[IX , IY ] ≥ IX IY ≥ min[IX , IY ] ≥ IXY .
Informational relationships for an arbitrary triplet of the signals ξ (t)/X, η (t0 )/Y ,
ζ (t00 )/Z and null elements O of signal space Ω, that are equivalent to metric re-
lationships of tetrahedron (see the relationships described in Subsection 2.2.1 on
page 32), hold:
IX∆Y = IX + IY − 2IXY ;
IY ∆Z = IY + IZ − 2IY Z ;
IZ∆X = IZ + IX − 2IZX ;
IX∆Y = IX∆Z + IZ∆Y − 2I(X∆Z)(Z∆Y ) .
To conclude discussion of the main relationships characterizing an informational
interaction between the signals in the space Ω, we begin considering the main
relationships within a single signal of the space Ω.
It is obvious that all the relationships between the elements of the signal space
hold
S with respect to arbitrary elements (samples) {Xα } of the signal ξ (t)/X =
Xα . For all the elements {Xα } of the signal ξ (t)/X with normalized measure
α
m(Xα ) = 1, it is convenient to introduce a metric between the elements Xα and
Xβ that is equivalent to the metric (4.1.3):

1 1
d(Xα , Xβ ) = m(Xα ∆Xβ ) = [m(Xα ) + m(Xβ ) − 2m(Xα · Xβ )] = 1 − m(Xα · Xβ );
2 2
1
d(Xα , Xβ ) = ρ(Xα , Xβ ). (4.1.5)
2
S
Signal ξ (t)/X = Xj with discrete structure {Xj } (with discrete time domain)
j
in metric signal space Ω is represented by the vertices {Xj } of polygonal line
l({Xj }) that lie upon a sphere Sp(O, R) and at the same time are the vertices
of n-dimensional simplex Sx(X ) inscribed into the sphere Sp(O, R). The length
(perimeter) P ({Xj }) of the closed polygonal line l({Xj }) is determined by the
expression:
n n
X 1X
P ({Xj }) = d(Xj , Xj+1 ) = ρ(Xj , Xj+1 ),
j=0
2 j=0

where d(Xj , Xk ) and ρ(Xj , Xk ) are the metrics determined by the relationships
(4.1.5) and (4.1.3), respectively. Here, the values of indices are denoted modulo
n + 1, i.e., Xn+1 ≡ X0 . S
The signal ξ (t)/X = Xα with continuous structure {Xα } (with continuous
α
time domain) in metric space Ω may be represented by a continuous closed line
l({Xα }) situated upon a sphere Sp(O, R), which at the same time is a fragment
of n-dimensional simplex Sx(X ) inscribed into this sphere Sp(O, R), and in series
connects the vertices {Xj } of a given simplex.
130 4 Signal Spaces with Lattice Properties

On the basis of the introduced notions


S one can distinguish two forms of struc-
tural diversity of the signal ξ (t)/X = Xα , respectively, two measures of a col-
α
lection of the elements of the signal ξ (t), i.e., overall quantity of information and
relative quantity of information introduced by the following definitions.

Definition 4.1.10. Overall quantity of information I [ξ (t)] contained in the signal


ξ (t)/X considered a collection of the elements {Xα } is an information quantity
equal to the measure of their sum:
[ X
I [ξ (t)] = m(X ) = m( Xα ) = m( Xα ). (4.1.6)
α α

Evidently, this measure of structural diversity is identical to the quantity of


absolute information IX contained in the signal ξ (t)/X: I [ξ (t)] = IX .

Definition 4.1.11. Relative quantity of information I∆ [ξ (t)] contained in the sig-


nal ξ (t) owing to distinctions between the elements of a collection {Xα } is an
information quantity equal to the measure of symmetric difference between these
elements:
I∆ [ξ (t)] = m(∆ Xα ), (4.1.7)
α

where ∆ Xα = . . . ∆Xα ∆Xβ . . . is symmetric difference of the elements of a collec-


α
tion {Xα } of the signal ξ (t)/X.

The presence of two measures of structural diversity of the signal ξ (t)/X is


stipulated by Stone’s duality between generalized Boolean algebra B(Ω) with sig-
nature (+, ·, −, O) of the type (2, 2, 2, 0) and generalized Boolean ring BR(Ω)
with signature (∆, ·, O) of the type (2, 2, 0), that are isomorphic between each
other [175], [176], [223]:
Iso
B(Ω) ↔ BR(Ω).
The overall quantity of information I [ξ (t)] has a sense of an entire information
quantity contained in the signal ξ (t)/X considered a collection of the elements
with an ordered structure. Relative quantity of information I∆ [ξ (t)] has a sense of
maximal information quantity that can be extracted from the signal ξ (t) by the
proper processing.
Consider the main relationships
P characterizing the introduced measures of a
collection of the elements m( Xj ) and m(∆ Xj ) for the signal ξ (t)/X with discrete
j j
structure {Xj }:
X X X
m( Xj ) = m(Xj ) − m(Xj Xk )+
j j 0≤j<k≤n

X n
Y
+ m(Xj Xk Xl ) − . . . + (−1)n m( Xj ); (4.1.8)
0≤j<k<l≤n j=0
4.1 Physical and Informational Signal Spaces 131
X X
m(∆ Xj ) = m( Xj ) − m( Xj Xk )+
j
j 0≤j<k≤n

X n
Y
+m( Xj Xk Xl ) − . . . + (−1)n m( Xj ). (4.1.9)
0≤j<k<l≤n j=0

The identities (4.1.8) and (4.1.9) imply the double inequality:


X X
m(∆ Xj ) ≤ m( Xj ) ≤ m(Xj ), (4.1.10)
j
j j

which implies the double informational inequality:


X
I∆ [ξ (t)] ≤ I [ξ (t)] ≤ m(Xj ). (4.1.11)
j

If the signal ξ (t)/X considered as a collection {Xj } consists of the disjoint elements
(Xj · Xk = O), then the inequality (4.1.11) transforms to the identity:
X
I∆ [ξ (t)] = I [ξ (t)] = m(Xj ). (4.1.12)
j

For the signal ξ (t)/X with discrete structure {Xj }, j = 0, 1, . . . , n, the following
relationship holds: X Y
m( Xj ) = P ({Xj }) + m( Xj ), (4.1.13)
j j
n n
1
P P
where P ({Xj }) = 2 m(Xj ∆Xj+1 ) = d(Xj , Xj+1 ) is the perimeter of a closed
j=0 j=0
polygonal line l({Xj }) that connects in series the ordered elements {Xj } of the
signal ξ (t)/X in metric signal space Ω. Here, the values of indices are denoted
modulo n + 1, i.e., Xn+1 ≡ X0 .
The relationships (4.1.9) and (4.1.13) imply the equality:

m(∆ Xj ) = P ({Xj }) − P ({Xj · Xj+1 }) + P ({Xj · Xj+1 · Xj+2 }) − . . . (4.1.14)


j

j+k
Y Y
+(−1)k P ({ Xi }) + . . . + m( Xj ) · mod 2 (n − 1),
i=j j

1, n = 2k, k ∈ N;
where mod 2 (n − 1) =
0, n = 2k + 1,
n
P
P ({Xj }) = d(Xj , Xj+1 ) is the perimeter of a closed polygonal line l({Xj }) that
j=0
in series connects the ordered elements {Xj } of the signal ξ (t)/X in metric signal
Pn
space Ω; P ({Xj · Xj+1 }) = d[(Xj−1 · Xj ), (Xj · Xj+1 )] is the perimeter of a
j=0
closed polygonal line l({XΠj }) that connects in series the ordered elements {XΠj }:
132 4 Signal Spaces with Lattice Properties

S j+k
Q
XΠj = Xj · Xj+1 of a set XΠ = XΠj in metric space Ω; and P ({ Xi }) =
j i=j
n
P j+k
Q j+k+1
Q
d[ Xi , Xi ] is the perimeter of a closed polygonal line l({XΠj,j+k }) that
j=0 i=j i=j+1
j+k
Q
connects in series the ordered elements {XΠj,k }: XΠj,k = Xi of a set XΠk =
i=j
S
XΠj,k in metric space Ω.
j
The relationship (4.1.13) implies that overall quantity of information I [ξ (t)] is an
information quantity contained in the signal ξ (t)/X due to the presence of metric
distinctions between the elements of a collection {Xj }. This overall quantity of
information is determined by perimeter P ({Xj }) of a closed polygonal line l({Xj })
in metric space Ω and also by the quantity of mutual information between Q the
elements of a collection defined by measure of the product of the elements m( Xj ).
j
The relationship (4.1.14) implies that relative quantity of information I∆ [ξ (t)] is
an information quantity contained in the signal ξ (t)/X due to the presence of metric
distinctions between the elements of a collection {Xj }, as well as overall quantity
of information. Unlike overall quantity of information I [ξ (t)], relative quantity of
information I∆ [ξ (t)] is defined by perimeter P ({Xj }) of a closed polygonal line
l({Xj }) in metric space Ω, taking into account the influence of metric distinctions
j+k
Q
between the products Xi of the elements of a collection {Xj }.
i=j
For the signal ξ (t)/X with continuous structure {Xα }, the identity holds:
X
m( Xα ) = P ({Xα }), (4.1.15)
α

1
P
where P ({Xα }) = lim m(Xα ∆Xα+dα ) is a length of some line l({Xα }) (in
dα→0 2 α∈A
general case, not a straight one) that connects in series the ordered elements {Xα } of
the signal ξ (t)/X in space Ω; A = [α0 , αI ] is an interval of definition of a parameter
α; lim 12 m(XαI ∆XαI +dα ) = 12 m(XαI ∆Xα0 ).
dα→0
The identity (4.1.15) means that overall quantity of information I [ξ (t)] con-
tained in the signal ξ (t)/X considered a collection of the elements {Xα } is nu-
merically equal to the perimeter P ({Xα }) of a closed polygonal line l({Xα }) that
in series connects the elements {Xα } in metric space Ω with metric d(Xα , Xβ ) =
1
2 m(Xα ∆Xβ ).
For a signal ξ(t)/X characterized by IDD in the form of δ-function or Heaviside
step function a1 1(τ + a2 ) − 1(τ − a2 ) , the measures of structural diversity (4.1.7)
and (4.1.6) are equivalent:
X
m(∆ Xα ) = m( Xα ), (4.1.16)
α
α

and the relative quantity of information I∆ [ξ (t)] contained in the signal ξ (t)/X is
4.2 Homomorphic Mappings in Signal Space Built upon Generalized Boolean Algebra 133

equal to the overall quantity of information I [ξ (t)]:

I∆ [ξ (t)] = I [ξ (t)]. (4.1.17)

For a signal ξ (t)/X characterized by IDD in the form of the other functions,
the measure of structural diversity (4.1.7) is equal to a half of the measure (4.1.6):
1 X
m(∆ Xα ) = m( Xα ), (4.1.18)
α 2 α

and correspondingly, relative quantity of information I∆ [ξ (t)] contained in a signal


ξ (t)/X is equal to half of the overall quantity of information I [ξ (t)]:
1
I∆ [ξ (t)] = I [ξ (t)]. (4.1.19)
2
Both measures of Pstructural diversity of a signal
P ξ (t)/X with discrete (continu-
ous) structure, m( Xj ) and m(∆ Xj ) (m( Xα ) and m(∆ Xα )), are functions
j j α α
of perimeter P ({Xj }) (P ({Xα })) of an ordered structure of a set X in the form
of a closed polygonal line l({Xj }) (l({Xα })) that in series connects the elements
{Xj } ({Xα }) in metric space Ω with metric d(Xj , Xk ) = 21 m(Xj ∆Xk ). The overall
quantity of information I [ξ (t)] has a sense of an entire quantity of information con-
tained in a signal ξ (t)/X considered a collection of the elements with an ordered
structure. Relative quantity of information I∆ [ξ (t)] has a sense of an information
quantity that can be extracted from a signal ξ (t)/X by the proper processing.
The interests of information theory and signal processing theory require the
formulation of the main relationships, characterizing informational interrelations
between the signals and their elements, under all the transformations realized over
processed signals. The next section is devoted to establishing such relationships.

4.2 Homomorphic Mappings in Signal Space Built upon


Generalized Boolean Algebra
4.2.1 Homomorphic Mappings of Continuous Signal Into a Finite
(Countable) Sample Set in Signal Space Built upon Generalized
Boolean Algebra with a Measure
The sampling theorem is a cornerstone of signal processing theory that was known
for a long time [238], [239], [52]. As a theoretical base for a transition from Hilbert
signal space into finite-dimensional Euclidean space, the sampling theorem over-
comes the main difficulties of research and signal processing problems, in an at-
tempt to achieve a harmony between continual and discrete. Its conclusions and
relationships have extremely important theoretical and applied significance; they
are used to solve various signal processing theory problems. Nevertheless, the use of
the sampling theorem, as an exact statement with respect to the real signals, and
134 4 Signal Spaces with Lattice Properties

attempts to organize the technological methods of signal processing without losses


of useful information contained in the signals, encounters a number of difficulties.
Theoretical complications that arise during use of this theorem and its interpreta-
tions are well known from a large body of scientific literature ranging from narrow
and specialized [240] to educational [241].
The basis of the classical formulation of the sampling theorem is the principle
of equivalence. According to this principle, the transition from continuous signal to
the result of its discretization (sampling or quantization in time) is realized. The
principle of equivalent representation of continuous signal by a finite (countable)
set of samples is based on continuous function expansion in a series of orthogonal
functions (generalized Fourier series) in Hilbert space.
Disadvantages of this theorem and its interpretations were discussed in Chap-
ter 1. In the most general sense, all disadvantages of classical sampling theorem
relate to ignoring the informational properties of continuous signal that must be
represented by a finite (countable) set of the samples, and on the other hand, to
neglecting geometrical properties of signal space.
In this section, we consider possible variants of sampling theorem formulation
with application to stochastic processes, taking into account the informational char-
acteristics and properties of stochastic signals covered in Section 3.5.
Consider the sampling of continuous stochastic process (signal) on the basis of
theoretical-set approach, whose essence is revealed below.
The mapping (3.5.1) of IDD iξ (tα , t) of an arbitrary sample ξ (tα ) of stochastic
process ξ (t) into its ordinate set Xα – ϕ: {iξ (tα , t)} → {Xα } considers any stochastic
process (signal) ξ (t) with IDD iξ (tα , t) in the space Ω as a collection of statistically
dependent samples {ξ (tα )}, and also as a subalgebra B(X ) of generalized Boolean
algebra B(Ω) with a measure m and signature (+, ·, −, O) of the type (2, 2, 2, 0).
The set of the elements X is characterized by continuous structure {Xα }:
[
ξ (t) → X = Xα ,
α

and a measure of a single element Xα possesses the normalization property


m(Xα ) = 1.

Definition 4.2.1. By homomorphism of stochastic process ξ (t) with IDD iξ (tα , t),
defined in the interval Tξ = [t0 , t∗ ], t ∈ Tξ , into stochastic process η (t0 ) with IDD
iη (t0β , t0 ), defined in the interval Tη = [t00 , t0∗ ], t0 ∈ Tη , in the terms of the mapping
(3.5.1): [
ϕ : {iξ (tα , t)} → {Xα }, ξ (t) → X = Xα ;
α
[
ϕ : {iη (t0β , t0 )} → {Yβ }, η (t0 ) → Y = Yβ ,
α

we mean the mapping h: ξ (t) → η (t0 ), X → Y of generalized Boolean algebra B(X )


into generalized Boolean algebra B(Y ) preserving all its signature operations:

1. Xα1 + Xα2 = Yβ1 + Yβ2 (addition)


4.2 Homomorphic Mappings in Signal Space Built upon Generalized Boolean Algebra 135

2. Xα1 · Xα2 = Yβ1 · Yβ2 (multiplication)


3. Xα1 − Xα2 = Yβ1 − Yβ2 (difference/relative complement operation)
4. OX ≡ OY ≡ O (identity of null element)
Definition 4.2.2. By isomorphism of stochastic process ξ (t) with IDD iξ (tα , t),
defined in the interval Tξ = [t0 , t∗ ], t ∈ Tξ , into stochastic process η (t0 ) with IDD
iη (t0β , t0 ), defined in the interval Tη = [t00 , t0∗ ], t0 ∈ Tη , in the terms of the mapping
(3.5.1), we mean bijective homomorphism.
Stochastic processes ξ (t) and η (t0 ) connected by homomorphism (isomorphism)
are called homomorphic (isomorphic) in terms of the mappings (3.5.1).
As a result of sampling, the stochastic process (signal) ξ (t) is represented by a
finite (countable) set of the samples Ξ = {ξ (tj )}. The last set may be associated
with a set of the elements X 0 with discrete structure {Xj }:
[
Ξ = {ξ (tj )} → X 0 = Xj ,
j

and every element of a set X 0 with discrete structure is simultaneously the element
of a set X with continuous structure.
Thus, discretization (sampling) D: ξ (t) → {Xj } of continuous stochastic process
ξ (t) with IDD iξ (tα , t)Smay be considered a mapping of the set X with continuous
structure {Xα }, X = Xα into the set X 0 = Xj with discrete structure {Xj },
S
α j
and every element of the set X 0 with discrete structure is simultaneously the element
of a set X with continuous structure, and the distinct elements Xα , Xβ ∈ X, Xα 6=
Xβ of the set X are mapped into the distinct elements Xj , Xk ∈ X 0 , Xj 6= Xk of
the set X 0 :

D : X → X 0; (4.2.1)
Xα → Xj , Xβ → Xk ; (4.2.1a)
0
Xα , Xβ ∈ X, Xα 6= Xβ ; Xj , Xk ∈ X , Xj 6= Xk .

The mapping D, possessing the property (4.2.1a), is injective. Homomorphism


(4.2.1) maps generalized Boolean algebra B(X ) with signature (+, ·, −, O) of the
type (2, 2, 2, 0) into subalgebra B(X 0 ), preserving the operations:

1. Xα + Xβ = Xj + Xk (addition)
2. Xα · Xβ = Xj · Xk (multiplication)
3. Xα − Xβ = Xj − Xk (difference/relative complement operation)
4. OX ≡ OX 0 ≡ O (identity of null element)
In a general case, discretization (sampling) of continuous stochastic process ξ (t) is
not an isomorphic mapping preserving both measures of structural diversity of a
signal I [ξ (t)] (4.1.6) and I∆ [ξ (t)] (4.1.7):

I [ξ (t)] 6= I [{ξ (tj )}], I∆ [ξ (t)] 6= I∆ [{ξ (tj )}]; (4.2.2a)


136 4 Signal Spaces with Lattice Properties

I (X ) 6= I (X 0 ), I∆ (X ) 6= I∆ (X 0 ), (4.2.2b)
where I [ξ (t)] is overall quantity of information contained in continuous signal ξ (t),
I [ξ (t)] = I (X ); I [{ξ (tj )}] is overall quantity of information contained in a sampled
signal {ξ (tj )}, I [{ξ (tj )}] = I (X 0 ); I∆ [ξ (t)] is relative quantity of information con-
tained in a continuous signal ξ (t), I∆ [ξ (t)] = I∆ (X ); I∆ [{ξ (tj )}] is relative quantity
of information contained in a sampled signal {ξ (tj )}, I∆ [{ξ (tj )}] = I∆ (X 0 ).
It should be noted that notations (4.2.2a) and (4.2.2b), here and below, are used
to denote informational characteristics of continuous signal ξ (t) and the result of its
discretization {ξ (tj )}, and also to denote equivalent informational characteristics of
their images X, X 0 : ξ (t) → X, {ξ (tj )} → X 0 considered in terms of general Boolean
algebra B(X ).
Since discretization D of a continuous signal (4.2.1), like an arbitrary homo-
morphism, in a general case, does not preserve measures of structural diversity
I [ξ (t)], I∆ [ξ (t)] (4.2.2a) of a signal, it is impossible to formulate strictly the sam-
pling theorem valid for arbitrary stochastic processes. So, the principle of equivalent
representation on any formulation of the sampling theorem will be used depending
on informational properties of the signals.
Consider the main informational relationships characterizing a representation
of continuous stochastic process (signal) ξ (t) by a finite set of samples in a bounded
time interval Tξ = [0, T ], t ∈ Tξ .
The following informational inequalities hold under discretization D : X → X 0
(4.2.1):
I (X ) ≥ I (X 0 ), (4.2.3a)
I (X 0 ) ≥ I∆ (X 0 ). (4.2.3b)
The first inequality (4.2.3a) is stipulated by the relationship X 0 ⊂ X ⇒ m(X 0 ) ≤
m(X ). The second (4.2.3b) is stipulated by the relationship (4.1.10) between over-
all and relative quantity of information contained in the result of discretization
{ξ (tj )} of the signal ξ (t). The relationship between relative quantity of information
I∆ (X ) in continuous signal ξ (t), and relative quantity of information I∆ (X 0 ) in the
result of its discretization {ξ (tj )} requires comment. This relationship essentially
depends on the kind of IDD of stochastic process ξ (t). For continuous weakly (wide-
sense) stationary stochastic process ξ (t) with IDD iξ (τ ) that is differentiable in the
point τ = 0, there exists such an interval of discretization ∆t between the samples
{ξ (tj )} that relative quantity of information I∆ (X ) in continuous signal ξ (t), and
relative quantity of information I∆ (X 0 ) in the result of its discretization {ξ (tj )} are
connected by the inequality:
I∆ (X ) ≤ I∆ (X 0 ). (4.2.4)
For continuous stochastic process (signal) ξ (t) with IDD iξ (τ ) that is non-
differentiable in the point τ = 0 on arbitrary values of discretization interval (sam-
pling interval) ∆t between the samples {ξ (tj )}, relative quantity of information
I∆ (X ) in continuous signal ξ (t) and relative quantity of information I∆ (X 0 ) in the
result of its discretization {ξ (tj )} are connected by the relationship:
I∆ (X ) ≥ I∆ (X 0 ). (4.2.5)
4.2 Homomorphic Mappings in Signal Space Built upon Generalized Boolean Algebra 137

Overall quantity of information I (X ) contained in a continuous stochastic process


(signal) ξ (t) considered a set X with continuous structure {Xα } is equal to the
sum of overall quantity of information I (X 0 ) contained in a finite set of the samples
Ξ = {ξ (tj )}/X 0 (i.e., the result of discretization of stochastic process (signal) ξ (t)/
a set X) and an information quantity IL0 existing due to a curvature of a structure
of a set X in signal space Ω that is lost under its discretization:

I (X ) = I (X 0 ) + IL0 . (4.2.6)

Information quantity IL0 is called information losses of the first genus. For a set
of the elements X of a stochastic process (signal) ξ (t) with a continuous structure
{Xα } in the space Ω, we can introduce a measure of a curvature of a structure
c(X ), which characterizes a deflection of a locus of a structure {Xα } of a set X
from a line:
I (X ) − I (X 0 )
c(A) = . (4.2.7)
I (X )
Evidently, the curvature c(X ) of a set of the elements X can be varied within
0 ≤ c(X ) ≤ 1. If for an arbitrary pair of the adjacent elements Xj , Xj+1 ∈ X 0 of a
discrete set X 0 and the element Xα ∈ X, Xα ∈ [Xj Xj+1 , Xj + Xj+1 ], the metric
identity holds:
d[Xj , Xj+1 ] = d[Xj , Xα ] + d[Xα , Xj+1 ],
where d[Xα , Xβ ] = 21 m(Xα ∆Xβ ) is a metric in the space Ω, then the curvature
c(X ) of a set of elements X with continuous structure is equal to zero.
Overall quantity of information I (X 0 ) contained in a set X 0 with discrete struc-
ture {Xj } is equal to the sum of relative quantity of information I∆ (X 0 ) of this set
and a quantity of redundant information IL00 contained in a set X 0 due to nonempty
pairwise intersections (due to mutual information) between its elements:

I (X 0 ) = I∆ (X 0 ) + IL00 . (4.2.8)

Information quantity IL00 is called information losses of the second genus. For the
result of discretization of continuous stochastic process (signal) ξ (t), i.e., a set of
the elements X 0 with discrete structure {Xj } of the space Ω, one can introduce
a measure of informational redundancy r(X 0 ), which characterizes a presence of
informational interrelations between the elements of discrete structure {Xj } of a
set X 0 (or simply mutual information between them):

I (X 0 ) − I∆ (X 0 )
r(X 0 ) = . (4.2.9)
I (X 0 )

If for an arbitrary pair of the adjacent elements Xj , Xj+1 ∈ X 0 , the measure of


their product is equal to zero: m(Xj Xj+1 ) = 0, then a measure of informational
redundancy r(X 0 ) of a structure {Xj } also is equal to zero: r(X 0 ) = 0.
Substituting the equality (4.2.8) into (4.2.6), we obtain the following relation-
ship:
I (X ) = I∆ (X 0 ) + IL0 + IL00 . (4.2.10)
138 4 Signal Spaces with Lattice Properties

The meaning of the relationship (4.2.10) can be elucidated. One cannot extract
more information quantity from the signal ξ (t) (i.e., a collection of the elements X
with continuous structure) than a value of relative quantity of information I∆ (X 0 )
contained in this signal ξ (t) due to diversity between the elements of its structure.
Information losses IL0 and IL00 take place, on the one hand, in the consequence of
curvature c(X ) of a structure {Xα } of signal ξ (t) in metric space Ω, and on the
other hand, due to some informational redundancy r(X 0 ) of a discrete set X 0 (some
mutual information between the samples {Xj }).
Discretization D: ξ (t) → {ξ (tj )}, according to the identity (4.2.10), has to
be realized to provide maximum ratio of relative quantity of information I∆ (X 0 )
contained in the sampled signal {ξ (tj )} to overall quantity of information I (X )
contained in continuous signal ξ (t) in a bounded time interval Tξ = [0, T ], t ∈ Tξ :

I∆ (X 0 )
→ max, (4.2.11)
I (X )

or, equivalently, to provide minimum sum of information losses of the first IL0 and
the second IL00 genera:
IL0 + IL00 → min .
On the base of criterion (4.2.11), the sampling theorem may be formulated for sig-
nals with various informational and probabilistic-statistical properties. We limit the
consideration of possible variants of the sampling theorem to a stationary stochastic
process (signal), whose IDD iξ (tα , t) of an arbitrary sample ξ (tα ) is characterized
by the property: iξ (tα , t) ≡ iξ (τ ), τ = t − tα .

Theorem 4.2.1. Theorem on equivalent (from the standpoint of preserving overall


quantity of information) representation of continuous stochastic process by a finite
set of the samples. Continuous stationary stochastic process (signal) ξ (t) with IDD
iξ (τ ) (0 < iξ (0) < ∞) defined in the interval Tξ = [0, T ], t ∈ Tξ can be equivalently
represented by a finite set of the samples
S Ξ = {ξ (tj )} without any information
losses if a set of the elements X = Xα of the signal ξ (t) can be represented by
α
a partition X 0 = Xj of its orthogonal elements {Xj }: Xj · Xk = O, j 6= k, so
S
j
that for overall quantity of information I [{ξ (tj )}] in the sampled signal {ξ (tj )} and
overall quantity of information I [ξ (t)] in continuous signal ξ (t), the equality holds:

X ≡ X 0 ⇒ I [ξ (t)] = I [{ξ (tj )}]. (4.2.12)

This identity makes possible the extraction of the entire information quantity,
contained in this process ξ (t) in time interval Tξ = [0, T ], from a finite set of
the samples Ξ = {ξ (tj )} of stochastic process ξ (t) without any information losses.
Such representation of continuous stochastic process ξ (t) by a finite set of samples
Ξ = {ξ (tj )} in the interval Tξ = [0, T ], t ∈ Tξ , when the relationship (4.2.12) holds,
is equivalent from the standpoint of preserving overall quantity of information.
The main condition of the Theorem 4.2.1, i.e., orthogonality of the elements of
continuous structure of stochastic process (signal) ξ (t), is provided if and only if its
4.2 Homomorphic Mappings in Signal Space Built upon Generalized Boolean Algebra 139

IDD iξ (τ ) has a uniform distribution in a bounded interval [−∆/2, ∆/2] and takes
the form:
1 ∆ ∆
iξ (τ ) = [1(τ + ) − 1(τ − )],
∆ 2 2
where 1(t) is a Heaviside step function.
In this case, the identity holds between both measures of structural diversity
of continuous stochastic process (signal) ξ (t) and a set of the samples Ξ = {ξ (tj )}
(the sets of the elements with continuous {Xα } and discrete {Xj } structures, re-
spectively), i.e., between the measures of the sets X and X 0 :
X X
m(X ) = m( Xα ) ≡ m( Xj ) = m(X 0 ),
α j

and a measure of symmetric difference of a set m(∆ Xj ):


j
X X
m( Xα ) = m( Xj ) = m(∆ Xj ) = m(∆ Xα ),
j α
α j

that from an informational view is equivalent to the identity:

I (X ) = I (X 0 ) = I∆ (X 0 ) = I∆ (X ).

Fulfillment of these identities is provided by orthogonality of the elements {Xj },


so measure of a set X 0 with a discrete structure {Xj } is equal to a measure of
symmetric difference of the elements of this set:
X
m( Xj ) = m(∆ Xj ).
j
j

The overall quantity of information I (X 0 ) of a set X 0 is equal to the relative quantity


of information I∆ (X 0 ) of this set: I (X 0 ) ≡ I∆ (X 0 ). This means that under discrete
representation of a stochastic process (signal) ξ (t) possessing the mentioned prop-
erty, the information losses of the first IL00 and the second IL00 genera are equal to
zero: IL0 = IL00 = 0.
The presence of a measure of structural diversity I∆ [ξ (t)] (relative quantity of
information) of the signal ξ (t), based on symmetric differences of the elements,
allows us to formulate the following variant of the sampling theorem.
Theorem 4.2.2. Theorem on equivalent (from the standpoint of preserving relative
quantity of information) representation of continuous stochastic process by a finite set
of the samples. A continuous stationary stochastic process (signal) ξ (t) with IDD
iξ (τ ) (0 < iξ (0) < ∞) defined in the interval Tξ = [0, T ], t ∈ Tξ may be equivalently
represented by a finite set of the samples Ξ = {ξ (tj )}, if the equality between relative
quantity of information I∆ [ξ (t)] contained in stochastic process (signal)ξ (t) in the
interval Tξ = [0, T ] and relative quantity of information I∆ [{ξ (tj )}] contained in a
finite set of the samples Ξ = {ξ (tj )} holds:

I∆ [ξ (t)] = I∆ [{ξ (tj )}]; t, tj ∈ Tξ . (4.2.13)


140 4 Signal Spaces with Lattice Properties

This identity provides the possibility of extracting the same relative quantity of
information contained in signal ξ (t) in the interval Tξ = [0, T ] from a finite set of
the samples Ξ = {ξ (tj )} of stochastic process (signal) ξ (t). Such representation of
continuous stochastic process ξ (t) determined in the interval Tξ = [0, T ] by a finite
set of the samples Ξ = {ξ (tj )} under the condition (4.2.13) is equivalent from the
standpoint of preserving relative quantity of information.
Theorem 4.2.2 has the following corollaries.

Corollary 4.2.1. Continuous stationary stochastic process (signal) ξ (t) with IDD
iξ (τ ) that has a uniform distribution in the bounded interval [−∆/2, ∆/2] of the
form:     
1 ∆ ∆
iξ (τ ) = 1 τ+ −1 τ − , (4.2.14)
∆ 2 2
may be equivalently represented by a finite set of the samples Ξ = {ξ (tj )} if the
discretization interval (the sampling interval) ∆t = tj+1 − tj is chosen to be equal
to ∆t = 1/iξ (0).

Corollary 4.2.2. Continuous stationary stochastic process (signal) ξ (t) with IDD
iξ (τ ) of the following form:
iξ(τ) = { (1/a)[1 − |τ|/a], |τ| ≤ a;  0, |τ| > a, }

may be equivalently represented by a finite set of the samples Ξ = {ξ(tj)} if the
discretization interval (the sampling interval) ∆t = tj+1 − tj is chosen within the
interval ∆t ∈ ]0, 2/iξ(0)].

In the cases cited in Corollaries 4.2.1 and 4.2.2 between measures of structural
diversity of continuous stochastic process (signal) ξ (t) and a set of the samples Ξ =
{ξ (tj )} (sets of the elements with continuous {Xα } and discrete {Xj } structures,
respectively), namely between measures of symmetric difference of the sets X and
X′, the identity holds:

m(∆α Xα) = m(∆j Xj),

which, from an informational point of view, is equivalent to the identity:

I∆(X) = I∆(X′).

For stochastic processes with IDD iξ (τ ) differentiable in the point τ = 0, the sam-
pling theorem may be formulated in the following way.

Theorem 4.2.3. Theorem on equivalent (from the standpoint of maximal ratio
(4.2.11)) representation of continuous stochastic process by a finite set of the sam-
ples. Continuous stationary stochastic process (signal) ξ(t), defined in the interval
ples. Continuous stationary stochastic process (signal) ξ (t), defined in the interval
Tξ = [0, T ], t ∈ Tξ , with IDD iξ (τ ) (0 < iξ (0) < ∞) that is differentiable in the point
τ = 0, may be equivalently represented by a finite set of the samples Ξ = {ξ (tj )}, if
a chosen value of the discretization interval (sampling interval) ∆t provides the
maximal ratio of relative quantity of information I∆(X′) contained in the sampled signal
{ξ(tj)} to overall information quantity I(X) contained in continuous signal ξ(t) in
its domain of definition Tξ = [0, T], t ∈ Tξ:

∆t_opt = arg max_{∆t} [I∆(X′)/I(X)].     (4.2.15)

Representation of continuous signal defined by Theorem 4.2.3 is equivalent in the
sense that it is impossible to extract from the initial signal ξ(t) a larger information
quantity than the I∆(X′) contained in a collection of the samples {ξ(tj)} that follow
through the interval defined by the relationship (4.2.15). Other variants of Theorems
4.2.1 through 4.2.3 can be found in [242].
Recommendations concerning a discretization (sampling) of continuous stochastic
signals may be stated in a less strict formulation that is closer to applied problems.
One such variant is based upon an approximation of the real IDD iξ(τ) of the signal
by the function (4.2.14) with iξ(0) · ∆ = 1 and may be expressed by the following
remark.

Remark 4.2.1. Stationary stochastic process (signal) ξ (t) with IDD iξ (τ ) (0 <
iξ (0) < ∞) defined in the interval Tξ = [0, T ], t ∈ Tξ may be represented by a finite
set of the samples Ξ = {ξ (tj )} that follow through the interval:

∆t = tj+1 − tj = 1/iξ (0). (4.2.16)

Graphs of dependences of the ratio [I∆(X′)/I(X)](∆t) of relative quantity of
information I∆[{ξ(tj)}] contained in a sampled signal {ξ(tj)} to overall quantity of
information I[ξ(t)] contained in continuous signal on the discretization interval ∆t
are shown in Fig. 4.2.1. Graphs of dependences are plotted for stochastic processes
with the following IDDs:
i1(τ) = [1/(√(2π) a)] exp[−τ²/(2a²)];     i2(τ) = 1/{πb[1 + (τ/b)²]};

i3(τ) = c exp(−2c|τ|);     i4(τ) = { (1/d)[1 − |τ|/d], |τ| ≤ d;  0, |τ| > d. }
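As a minimal numerical sketch (with hypothetical unit parameters a = b = d = 1, c = 1, chosen only for illustration and not taken from the text), the values ik(0) and the corresponding Remark 4.2.1 sampling intervals ∆t = 1/ik(0) can be computed as follows:

```python
import numpy as np

# Illustrative sketch of the four IDDs above with assumed unit parameters
# (a = b = d = 1, c = 1); these values are hypothetical, not from the text.
a = b = c = d = 1.0

def i1(tau):  # Gaussian-shaped IDD (differentiable at tau = 0)
    return np.exp(-tau**2 / (2 * a**2)) / (np.sqrt(2 * np.pi) * a)

def i2(tau):  # Cauchy-shaped IDD (differentiable at tau = 0)
    return 1.0 / (np.pi * b * (1 + (tau / b)**2))

def i3(tau):  # two-sided exponential IDD (nondifferentiable at tau = 0)
    return c * np.exp(-2 * c * np.abs(tau))

def i4(tau):  # triangular IDD (nondifferentiable at tau = 0)
    return np.where(np.abs(tau) <= d, (1 - np.abs(tau) / d) / d, 0.0)

# Remark 4.2.1: sampling interval dt = 1 / i(0) for each IDD
for name, idd in [("i1", i1), ("i2", i2), ("i3", i3), ("i4", i4)]:
    print(name, " i(0) =", float(idd(0.0)), "  dt =", 1.0 / float(idd(0.0)))
```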

Analysis of the dependences of the ratio [I∆(X′)/I(X)](∆t) shown in Fig. 4.2.1
reveals certain features typical for discretization of continuous stochastic processes
(signals).

1. For the plotted dependences, the limit of the ratio [I∆(X′)/I(X)](∆t) as
∆t → 0 is equal to 1/2:

lim_{∆t→0} [I∆(X′)/I(X)](∆t) = 1/2.
This confirms the result (4.1.19) concerning the ratio of relative quantity
of information I∆[ξ(t)] to overall quantity of information I[ξ(t)] contained
in a continuous signal ξ(t) with IDD that differs from a uniform one:

I∆[ξ(t)] = 0.5 I[ξ(t)].

2. For stochastic processes with IDDs i1(τ) and i2(τ) (IDD iξ(τ) is differentiable
at the point τ = 0), the dependence [I∆(X′)/I(X)](∆t) has a
pronounced maximum. This feature allows choosing the discretization interval
∆t for practical applications, maximizing the ratio I∆(X′)/I(X)
according to Theorem 4.2.3 and Formula (4.2.15).
When I∆(X′)/I(X) > 1/2, the inequality I∆(X′) > I∆(X) holds. Under a
discretization through the interval ∆t equal to 1/iξ(0), the deflection
from the maximum value [I∆(X′)/I(X)]max for the plotted dependences
does not exceed 10%:

|I∆(X′)/I(X) − [I∆(X′)/I(X)]max| / [I∆(X′)/I(X)]max ≤ 0.1.

3. For stochastic processes with IDDs i3(τ) and i4(τ) (IDD iξ(τ) is nondifferentiable
at the point τ = 0), the function [I∆(X′)/I(X)](∆t) is nonincreasing
and has no maximum. In this case, the inequality I∆(X′) ≤ I∆(X) holds.
Discretization of the process with an arbitrary interval ∆t
is accompanied by losses of the relative quantity of information I∆[ξ(t)]
that may be extracted from the signal.

FIGURE 4.2.1 Graphs of dependences of ratio I∆(X′)/I(X) on discretization interval ∆t.
Modified from [242], with permission.

For most stochastic processes possessing the ability to carry information, the
values iξ(0) belong to the interval [2∆f, 4∆f], where ∆f is the real effective width of
the power spectral density σξ(ω), equal to ∆f = [∫₀^∞ σξ(ω)dω]/[2πσξ(0)], so the values of
the discretization (sampling) interval ∆t = 1/iξ(0) could change within the interval
[1/(4∆f), 1/(2∆f)].
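As an illustrative sketch, assuming a Lorentzian spectral density σξ(ω) = σ0/[1 + (ω/ω0)²] (a hypothetical choice, for which analytically ∆f = ω0/4), the effective width and the resulting range of ∆t can be evaluated numerically:

```python
import numpy as np

# Sketch: effective width Δf = [∫_0^∞ σ(ω) dω] / [2π σ(0)] for an assumed
# Lorentzian density σ(ω) = σ0 / (1 + (ω/ω0)^2); analytically Δf = ω0/4 here.
sigma0 = 1.0
omega0 = 2 * np.pi * 1.0e3                 # assumed half-width parameter
omega = np.linspace(0.0, 500 * omega0, 2_000_001)
sigma = sigma0 / (1.0 + (omega / omega0) ** 2)

integral = np.sum(sigma) * (omega[1] - omega[0])   # simple Riemann sum
df = integral / (2 * np.pi * sigma0)
print("Δf ≈", df, " (exact ω0/4 =", omega0 / 4, ")")

# The text's range of sampling intervals for Δt = 1/iξ(0):
print("Δt ∈ [", 1 / (4 * df), ",", 1 / (2 * df), "]")
```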
Aforementioned variants of the sampling theorem are formulated for stationary
stochastic processes possessing the ability to carry information (see Theorem 3.4.2)
and characterized by unbounded (in frequency domain) power spectral density.


For such stochastic processes, the interval of discretization is entirely determined
by IDD introduced by Definition 3.4.1, which in the case of Gaussian stochas-
tic processes, is entirely determined by their normalized correlation function. The
suggested approach to formulating the sampling theorem can be generalized for
nonstationary processes. In this case, the discretization of continuous signal will
be characterized by nonequidistant (in time domain) samples. Depending on IDD,
one should distinguish two types of stochastic processes possessing the ability to
carry information. Under discretization of stochastic processes with IDD, which is
differentiable in the point τ = 0, by processing a finite set of the samples {ξ (tj )},
j = 1, . . . , n in the interval Tξ = [0, T ] instead of processing the sample contin-
uum {ξ (tα )}, tα ∈ Tξ , one can obtain a quantitative gain in the use of information
contained in the signal, according to the relationship (4.2.4). On the other hand,
processing a finite set of the samples {ξ (tj )} of stochastic processes with IDD iξ (τ ),
which is nondifferentiable in the point τ = 0, from an informational view, is inex-
pedient (see the relationship (4.2.5)). Such signals should be processed as a whole
over the continuum of the samples {ξ(tα)}, tα ∈ Tξ.

4.2.2 Theorems on Isomorphism in Signal Space Built upon Generalized Boolean Algebra with a Measure
The mappings (3.5.1) and (3.5.26), which allow us to use generalized Boolean algebra
with a measure for describing informational properties of stochastic processes
(signals), create the possibility of reformulating the known results as applied to
stochastic processes (signals) in the form of the following theorems and their
corollaries.
Let ϕ: Γ → Ω be a morphism of physical signal space Γ into informational signal
space Ω defined by the relationship (4.1.1). Let also G be a group of automorphisms
of generalized Boolean algebra B(Ω) with a measure m, and let an arbitrary
mapping g of the group G:

g: ξ(t) → ξ′(t); ξ(t), ξ′(t) ∈ Γ; g ∈ G,

be an isomorphic mapping of stochastic process (signal) in the sense of Definition 4.2.2.
On the base of Theorem 2.3.3, one can formulate a similar theorem for stochastic
processes (signals).

Theorem 4.2.4. Theorem on isomorphic mapping preserving a measure of information
quantity. If in metric signal space Ω, stochastic signal ξ(t), considered as
a subalgebra B(X) of generalized Boolean algebra B(Ω) with a measure m and
signature (+, ·, −, O) of the type (2, 2, 2, 0), and characterized by IDD iξ(tα, t)
(0 < iξ(tα, t) < ∞) in the interval Tξ = [t0, t∗], t ∈ Tξ, is isomorphic to stochastic
signal ξ′(t): g: ξ(t) → ξ′(t), g ∈ G, in the sense of Definition 4.2.2, then the mapping
g is a measure-preserving isomorphism.
Corollary 4.2.3. Isomorphic mapping g of stochastic process (signal) ξ(t) into
stochastic process (signal) ξ′(t):

g: ξ(t) → ξ′(t);     (4.2.17)
{iξ(tα, t)} → {i′ξ(tα, t)};
{Xα} → {X′α};   X = ∪α Xα → X′ = ∪α X′α,
preserves the measures of all the binary operations between an arbitrary pair of the
elements Xα1, Xα2 ∈ X of a set X of the corresponding pair of samples ξ(tα1),
ξ(tα2) of the signal ξ(t)/X = ∪α Xα, considered as a subalgebra B(X) of generalized
Boolean algebra B(Ω) with signature (+, ·, −, O) of the type (2, 2, 2, 0):

m(Xα1 + Xα2) = m(X′α1 + X′α2);     (4.2.18a)
m(Xα1 · Xα2) = m(X′α1 · X′α2);     (4.2.18b)
m(Xα1 − Xα2) = m(X′α1 − X′α2);     (4.2.18c)
m(Xα1 ∆ Xα2) = m(X′α1 ∆ X′α2).     (4.2.18d)
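As a finite-set sketch of these identities (an illustration with hypothetical sets and the counting measure, not the construction used in the text), one can check that a bijection preserves the measures of all four Boolean operations:

```python
# Finite-set sketch of (4.2.18): a bijection g between ground sets preserves,
# under the counting measure m = len, the measures of sum (union), product
# (intersection), difference, and symmetric difference.  The sets X1, X2 and
# the mapping g are hypothetical, chosen only for illustration.
X1 = {0, 1, 2, 3}
X2 = {2, 3, 4}

g = {u: u + 10 for u in X1 | X2}      # a measure-preserving relabeling
img = lambda S: {g[u] for u in S}
X1p, X2p = img(X1), img(X2)

m = len
assert m(X1 | X2) == m(X1p | X2p)     # cf. (4.2.18a)
assert m(X1 & X2) == m(X1p & X2p)     # cf. (4.2.18b)
assert m(X1 - X2) == m(X1p - X2p)     # cf. (4.2.18c)
assert m(X1 ^ X2) == m(X1p ^ X2p)     # cf. (4.2.18d)
```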
The group of identities (4.2.18) implies the following corollary.
Corollary 4.2.4. Isomorphic mapping g (4.2.17) preserves all the sorts of information
quantities (see Definitions 3.5.1 through 3.5.5) between arbitrary elements
Xα1, Xα2 ∈ X of a set X of the corresponding samples ξ(tα1), ξ(tα2) of the signal
ξ(t)/X = ∪α Xα, considered as a subalgebra B(X) of generalized Boolean algebra B(Ω)
with signature (+, ·, −, O) of the type (2, 2, 2, 0):

I_{Xα1+Xα2} = I_{X′α1+X′α2};     (4.2.19a)
I_{Xα1·Xα2} = I_{X′α1·X′α2};     (4.2.19b)
I_{Xα1−Xα2} = I_{X′α1−X′α2};     (4.2.19c)
I_{Xα1∆Xα2} = I_{X′α1∆X′α2}.     (4.2.19d)
Corollary 4.2.5. Isomorphic mapping g (4.2.17) preserves overall quantity of information
I[ξ(t)] contained in stochastic process (signal) ξ(t), considered as a collection
of the elements X = ∪α Xα; each element corresponds to IDD iξ(tα, t) of the sample
ξ(tα):

I[ξ(t)] = m(∑α Xα) = m(∑α X′α) = I[ξ′(t)].

Corollary 4.2.6. Isomorphic mapping g (4.2.17) preserves relative quantity of information
I∆[ξ(t)] contained in stochastic process (signal) ξ(t), considered as a collection
of the elements X = ∪α Xα; each of them corresponds to IDD iξ(tα, t) of the
sample ξ(tα):

I∆[ξ(t)] = m(∆α Xα) = m(∆α X′α) = I∆[ξ′(t)].
The conclusions of Theorems 4.2.4 and 2.3.4 may be generalized to an isomorphic
mapping of stochastic processes ξ(t), η(t′).

Theorem 4.2.5. Theorem on isomorphic mapping of stochastic processes. Isomorphic
mapping g (4.2.17) of a pair of stochastic processes (signals) ξ(t), η(t′) —
ϕ: ξ(t)/X = ∪α Xα, η(t′)/Y = ∪β Yβ — considered as subalgebras B(X), B(Y) of
generalized Boolean algebra B(Ω) with a measure m and signature (+, ·, −, O) of
the type (2, 2, 2, 0), into a pair of stochastic processes (signals) ξ′(t)/X′ = ∪α X′α,
η′(t′)/Y′ = ∪β Y′β, respectively:

g: ξ(t) → ξ′(t), η(t′) → η′(t′), g ∈ G;     (4.2.20)

X′ = g(X), X = g⁻¹(X′);  Y′ = g(Y), Y = g⁻¹(Y′),     (4.2.20a)

preserves the measures of all the binary operations between the sets X, Y ⊂ X ∪ Y,
whose union X ∪ Y is also a subalgebra B(X + Y) of generalized Boolean algebra
B(Ω):

m(X + Y) = m(X′ + Y′);
m(X · Y) = m(X′ · Y′);
m(X − Y) = m(X′ − Y′);
m(X ∆ Y) = m(X′ ∆ Y′).

Corollary 4.2.7. Isomorphic mapping g (4.2.20) preserves all the sorts of information
quantities between the signals ξ(t)/X = ∪α Xα, η(t′)/Y = ∪β Yβ, considered as
subalgebras B(X), B(Y) of generalized Boolean algebra B(Ω) with a measure m
and signature (+, ·, −, O) of the type (2, 2, 2, 0):

Quantity of overall information:

I⁺[ξ(t), η(t′)] = I_{X+Y} = I_{X′+Y′} = I⁺[ξ′(t), η′(t′)];     (4.2.21)

Quantity of mutual information:

I[ξ(t), η(t′)] = I_{X·Y} = I_{X′·Y′} = I[ξ′(t), η′(t′)];     (4.2.22)

Quantity of particular relative information:

I⁻[ξ(t), η(t′)] = I_{X−Y} = I_{X′−Y′} = I⁻[ξ′(t), η′(t′)];     (4.2.23)

Quantity of relative information:

I^∆[ξ(t), η(t′)] = I_{X∆Y} = I_{X′∆Y′} = I^∆[ξ′(t), η′(t′)].     (4.2.24)


4.3 Features of Signal Interaction in Signal Spaces with Various Algebraic Properties
In most of the literature, the problems of signal processing in the presence of inter-
ference (noise) are formulated in terms of linear signal space LS, where the result of
interaction x of useful signal s and interference (noise) n is described by the opera-
tion of addition x = s + n of an additive commutative group [148], [143], [122], [153].
Meanwhile, some authors generally state the problem of signal processing in the
presence of interference (noise) with respect to the kind of interaction [159], [156],
[243]: x = F (s, n), where F (s, n) is some deterministic function.
The results of synthesis of concrete signal processing algorithms and units are
obtained on the base of criteria of optimality, usually assuming an additive (in
terms of linear space) interaction x of signal s and interference (noise) n: x = s + n.
As a rule, the question concerning optimality with respect to the kind of signal
interaction is not investigated. How close to optimal, with respect to its content, is
the model of such an additive signal interaction at the input of a processing unit?
Could better quality indices of signal processing be achieved during interaction between
the signals if the signal space is not linear?
optimal signal processing algorithm and unit be solved in the most general form
on the assumption that the properties of signal space where the signal interaction
x = F (s, n) is realized are not defined? In other words, could we find an unknown
function x = F (s, n) that determines the kind of interaction between useful signal s
and interference (noise) n, while solving some specific problems of synthesis of signal
processing algorithms and units on the base of the known criteria of optimality?
If, within such a formulation, the problem of synthesis has no solution, one can
try to obtain a solution using various optimality criteria: first, to determine the
kind of signal interaction F(s, n) at the input of a signal processing unit, and second,
to realize a synthesis of an optimal signal processing algorithm and unit.
If the algebraic properties of the signal space where the best signal
interaction F(s, n) of a corresponding kind takes place are defined, then the requirement of
determinacy of a function F(s, n) used for the subsequent synthesis of optimal signal
processing algorithms will not seem too strict a constraint.
In order to answer these and related questions, it is necessary to provide a gen-
eralized approach concerning the analysis of informational relationships that take
place during signal interaction in signal spaces with various algebraic properties.
An important circumstance should be emphasized here. Shannon's information
theory does not assume that the notion of the information quantity contained in a
single signal exists at all, so it makes no sense to ask: "What is the quantity of information
contained in a signal a?", inasmuch as the notion of information quantity is
meaningful here only with respect to a pair of signals [51].
of the question is: “How much information does the signal b contain concerning the
signal a?” This means that classical information theory operates exclusively with
the notion of mutual information which may be applied to a pair of signals only.
On the basis of classical information theory one can evaluate mutual information
I [x, a] contained in the signal x concerning the signal a, if, for instance, the signal
x is the result of additive interaction of two signals a and b: x = a + b in linear
signal space LS. However, it is impossible to evaluate quantity of information losses
that take place during such an interaction. One cannot compare the sum Ia + Ib
of information quantities contained in the signals a and b, respectively, and an
information quantity Ix contained in the signal x, which is a result of interaction
of signals a and b. The reason is the same, inasmuch as the values Ix , Ia , Ib ,
corresponding to information quantities contained in each signal x, a, b separately,
are not defined.
The goal of this section is to further develop the approach to evaluating informational
relationships between the signals covered in Sections 3.4, 3.5, 4.1, and 4.2,
and to carry out a comparative analysis of informational properties of signal spaces
with various algebraic properties stipulated by signal interactions in these spaces.
There are two problems formulated and solved in this section. First, the main
informational relationships between the signals before and after their interaction in
signal spaces with various algebraic properties are determined. Second, depending
on the quantitative content of these informational relationships, the types of
interactions between the signals in signal spaces are distinguished at a qualitative level.
Inasmuch as this section provides a research transition from informational signal
space to the physical signal spaces, we change the notation system based on the
Greek alphabet accepted for the signals (stochastic processes) before. Here and
below we will use the Latin alphabet to denote useful and interference signals
while considering the questions of their processing that form the main content of
Chapter 7.

4.3.1 Informational Paradox of Additive Signal Interaction in Linear Signal Space: Notion of Ideal Signal Interaction
The notions of physical and informational signal spaces, and also physical inter-
actions and informational interrelations between the signals were introduced in
Section 4.1.
Unlike informational interrelations between the signals, which do not directly affect
their physical interaction, the latter impacts the informational relationships between
the signals and the results of their interaction. Consider these features of
physical signal interaction, citing as an example the additive interaction of two sta-
tionary statistically independent Gaussian stochastic processes a(t) and b(t) with
zero expectations in linear (Hilbert) space LS, a(t), b(t) ∈ LS, based on the ideas
stated in Sections 3.3 and 4.1:
x(t) = a(t) + b(t), t ∈ T. (4.3.1)
Let the centered stochastic processes (the signals) a(t) and b(t) be charac-
terized by the variances Da , Db and identical normalized correlation functions
ra (τ ) = rb (τ ) = r(τ ) ≥ 0, respectively. The correlation function Rx (τ ) of an addi-
tive mixture x(t) (4.3.1) of these two signals is equal to:
Rx (τ ) = Da ra (τ ) + Db rb (τ ) = (Da + Db )r(τ ),
and its normalized correlation function rx(τ) is identically equal to the normalized
correlation function of the signals a(t) and b(t):

rx (τ ) = ra (τ ) = rb (τ ) = r(τ ). (4.3.2)

For Gaussian stochastic signals, the normalized correlation function defines its nor-
malized function of statistical interrelationship (NFSI) introduced in Section 3.1.
The equality (4.3.2), according to the formula (3.1.3), implies the identity of NFSIs
ψx (τ ), ψa (τ ), ψb (τ ) of the signals x(t), a(t), b(t):

ψx (τ ) = ψa (τ ) = ψb (τ ), (4.3.3)

which implies the identity of information distribution densities (IDDs) ix(τ), ia(τ),
ib(τ) of the signals x(t), a(t), b(t) (see formula (3.4.3)):

ix (τ ) = ia (τ ) = ib (τ ). (4.3.4)

On the basis of Axiom 4.1.1 formulated in Section 4.1, depending on the sort of sig-
nature relations between the images A, B of the signals a(t), b(t) in informational
signal space Ω built upon a generalized Boolean algebra B(Ω) with a measure m
and signature (+, ·, −, O), A ∈ Ω, B ∈ Ω, the following main types of information
quantity are defined: quantity of absolute information IA , IB , quantity of mutual
information IAB , and quantity of overall information IA+B determined by the fol-
lowing relationships, respectively:

IA = m(A), IB = m(B ); (4.3.5a)

IAB = m(AB ); (4.3.5b)


IA+B = m(A + B ) = m(A) + m(B ) − m(AB ) = IA + IB − IAB . (4.3.5c)
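These quantities obey the inclusion-exclusion structure of the measure m. A minimal finite-set sketch of (4.3.5a) through (4.3.5c), with hypothetical sets A and B standing in for the signal images and the counting measure playing the role of m:

```python
# Finite-set illustration of (4.3.5a)-(4.3.5c) with the counting measure.
# The sets A and B are hypothetical stand-ins for the signal images in Ω.
A = {1, 2, 3, 4, 5}
B = {4, 5, 6, 7}

I_A, I_B = len(A), len(B)                 # absolute information, cf. (4.3.5a)
I_AB = len(A & B)                         # mutual information, cf. (4.3.5b)
I_ApB = len(A | B)                        # overall information, cf. (4.3.5c)

assert I_ApB == I_A + I_B - I_AB          # inclusion-exclusion identity
print(I_A, I_B, I_AB, I_ApB)              # 5 4 2 7
```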
The images A and B of the corresponding signals a(t) and b(t) are connected by
the mapping (3.5.26a).
Identity (4.3.4), according to the formula (3.5.6b), means that, within the time
interval T , the signals x(t), a(t), b(t) carry the same quantity of absolute informa-
tion:
IX = IA = IB . (4.3.6)
However, taking into account statistical independence of the signals a(t) and b(t)
(i.e., IAB = 0), it is easy to conclude that information contents carried by these
signals are absolutely distinct despite quantitative equality IA = IB . Thus, the
quantity of overall information IA+B carried by the signals a(t) and b(t) together
must equal the sum of IA and IB , and according to the relationships (4.3.5c) and
(4.3.6), is equal to:
IA+B = IA + IB = 2IX . (4.3.7)
Comparing the equations (4.3.6) and (4.3.7), it is natural to ask: why the quantity
of overall information IA+B carried by two independent Gaussian signals a(t) and
b(t) with identical normalized correlation functions is twice as large as the quantity
of absolute information IX contained in additive mixture x(t) of the signals a(t)
and b(t), although the strict equality between them seems intuitively correct:

IA+B = IX ?

Nevertheless, during additive interaction (4.3.1) of two stationary statistically independent
signals a(t), b(t) with zero expectations and arbitrary NFSIs ψa(τ), ψb(τ)
(and, respectively, IDDs ia(τ), ib(τ)) in linear space LS, the informational inequality
always holds:

IX < IA+B = IA + IB.     (4.3.8)
We shall call this relationship the informational paradox (informational inequality)
of additive interaction between the signals in linear space, whose essence lies in
the nonequivalence (IA+B ≠ IX, IA+B > IX) of the quantity of overall information
IA+B = IA + IB contained in two statistically independent signals a(t), b(t) and the
quantity of absolute information IX contained in their additive sum x(t).
On the qualitative level, this paradox can be elucidated by the losses of informa-
tion ∆I that accompany an additive interaction of statistically independent signals
a(t) and b(t), and are stipulated by destroying the part of information contained in
the signals:
∆I = IA+B − IX .

Example 4.3.1. In the interaction of two Gaussian statistically independent
stochastic signals with identical normalized correlation functions ra(τ) = rb(τ) =
r(τ) (i.e., IA = IB = IX) and arbitrary variances Da, Db, the value of losses ∆I is
equal to half the sum of the information quantities carried by both signals:

∆I = (IA + IB)/2 = IA = IB = IX.
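This value follows directly from (4.3.6) and (4.3.7): ∆I = IA+B − IX = 2IX − IX = IX = (IA + IB)/2.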

Thus, interaction of the signals in linear space LS is accompanied by losses of the
information contained in them.
We will distinguish the corresponding sorts of information quantity defined for
informational Ω and physical Γ signal spaces. For quantity of absolute information,
quantity of mutual information, and quantity of overall information, defined in
informational signal space Ω, we preserve the designations accepted in Section 4.1,
respectively: IA , IB ; IAB ; IA+B . All these notions defined in physical signal space
Γ with an interaction of the signals a(t), b(t) in the form of x(t) = a(t) ⊕ b(t), where
⊕ is some binary operation in Γ, we shall denote as Ia, Ib; Iab; Ia⊕b, respectively.
The problem of information losses during signal interaction in physical signal
space Γ in general, and in particular, during their additive interaction (4.3.1) in lin-
ear space LS, has, surely, nontrivial character. In physical signal spaces that differ
from generalized Boolean algebra in their algebraic structure, the informational re-
lationships between the interacting signals obtained in Section 4.1 for informational
signal space Ω cease to hold. This means that physical interaction of the signals
negatively affects the value of some types of information quantities defined for
physical signal space Γ. In particular, the quantity of overall information Ia⊕b ≠ IA+B
and the quantity of mutual information Iab ≠ IAB corresponding to a pair of interacting
signals a(t), b(t) are changed. Meanwhile, the quantity of absolute information
contained in each signal remains unchanged:

Ia = IA , Ib = IB . (4.3.9)

Thus, the notion of quantity of absolute information does not need refinement,
inasmuch as it is defined irrespective of other signals of physical signal space; it is
introduced exclusively with respect to a single signal.
Now, for physical signal space Γ with binary operation ⊕ defined on it, infor-
mational inequality (4.3.8) can be written in the form:

Ia⊕b ≤ IA+B = IA + IB − IAB , (4.3.10)

where Ia⊕b is a quantity of overall information contained in a pair of signals a(t)


and b(t) in physical space Γ that is equal to the quantity of absolute information
Ix = IX contained in the result of their interaction x(t) = a(t) ⊕ b(t): Ia⊕b = Ix ;
IA+B is a quantity of overall information contained in a pair of signals a(t) and b(t)
in informational signal space Ω; IAB is a quantity of mutual information contained
in a pair of signals a(t) and b(t) in informational signal space Ω; Ia = IA , Ib = IB is
a quantity of absolute information contained in every signal a(t) and b(t) separately.
Unlike inequality (4.3.8), inequality (4.3.10) is not strict, assuming the existence of
signal spaces, where the signal interaction occurs without information losses.
For any physical signal space Γ with arbitrary algebraic properties, there exists
a pair of signals a(t) and b(t), for which informational inequality (4.3.10) turns to
identity. Informational properties of such signals are introduced by the following
definition.

Definition 4.3.1. Two signals a(t) and b(t) of physical signal space Γ, a(t), b(t) ∈ Γ
are called identical in an informational sense, if for them the inequality (4.3.10)
turns to the identity:
Ia⊕b = IA+B = IA = IB ,
so that the images A and B of the signals a(t) and b(t) in informational space Ω
are identical: A ≡ B.

As follows from Definition 4.3.1, any signal a(t) is identical to itself in an in-
formational sense. For instance, in the case of additive interaction in linear space
LS, the signals a(t) and b(t) are identical in an informational sense, if they are
connected by linear dependence: a(t) = k · b(t), where k = const.
In connection with the formulated informational inequality (4.3.10), the following
questions arise.

1. Why do the losses of information take place during signal interaction in
linear space LS?
2. Is the signal interaction accompanied by the losses of information in all
the spaces?
3. If the answer to the second question is affirmative, what are the signal
spaces where such losses are minimal? If the answer to the second question
is negative, what are the signal spaces where the signal interaction is
possible without losses of information?
One can answer these questions, based on the following definition.
Definition 4.3.2. Ideal interaction x(t) = a(t) ⊕ b(t) between two statistically in-
dependent signals a(t) and b(t) in physical space Γ, where two binary operations are
defined: addition ⊕ and multiplication ⊗, is a binary operation ⊕, which provides
the quantities of overall information Ia⊕b , IA+B , contained in a pair of signals a(t),
b(t) and defined for both physical Γ and informational Ω signal spaces, respectively,
to be equivalent:
Ia⊕b = IA+B . (4.3.11)
Here and below, the binary operations of addition ⊕ and multiplication ⊗ are
understood as abstract operations over some algebraic structure. Find out, what are
the algebraic properties of the physical signal space providing the identity (4.3.11)
to hold. The answer to this question is given by the following theorem.
Theorem 4.3.1. Let there be two binary operations of addition ⊕ and multiplication
⊗ defined in physical signal space Γ. Then for the identity (4.3.11) to hold,
it is necessary and sufficient that physical signal space Γ be a generalized Boolean
algebra B(Γ) with a measure mΓ.
Proof of necessity. Identity (4.3.11) may be written in more detailed form:
Ia⊕b = Ix = IX = IA+B = IA + IB − IAB = Ia + Ib − Iab . (4.3.12)
According to (4.3.9), the identities between the quantities of absolute information
of the signals a(t) and b(t) defined for both physical Γ and informational Ω signal
spaces, respectively, hold: Ia = IA , Ib = IB . Besides, the identity holds Iab =
IAB between the quantities of mutual information Iab and IAB defined for both
physical Γ and informational Ω signal spaces, respectively. Then a measure mΓ
of the elements (the signals) {a(t), b(t), . . .} of physical space Γ is isomorphic to a
measure m of informational signal space Ω: i.e., for every c(t) ∈ Γ, ∃C ∈ Ω, C =
ϕ[c(t)]: mΓ [c(t)] = m(C ) [176]; and the mapping ϕ: Γ → Ω defines isomorphism of
the spaces Γ and Ω into each other: i.e., ∀ϕ, ∃ϕ−1 : ϕ−1 : Ω → Γ. Thus, physical
signal space Γ, where the identity (4.3.11) holds, is the algebraic structure identical
to informational signal space Ω, i.e., generalized Boolean algebra B(Γ) with a
measure mΓ and signature (⊕, ⊗, −, O). 
Proof of sufficiency. If physical signal space Γ is a generalized Boolean algebra B(Γ)
with a measure mΓ and signature (⊕, ⊗, −, O), then the mapping ϕ defined by the
relationship (4.1.2) ϕ: Γ → Ω is a homomorphism preserving all the signature
operations, and the following equations hold:
ϕ[a(t) ⊕ b(t)] = ϕ[a(t)] + ϕ[b(t)]; (4.3.13a)
ϕ[a(t) ⊗ b(t)] = ϕ[a(t)] · ϕ[b(t)], (4.3.13b)
where x(t) = a(t) ⊕ b(t), x̃(t) = a(t) ⊗ b(t); ϕ[a(t)] = A, ϕ[b(t)] = B, ϕ[a(t) ⊕ b(t)] =
X, ϕ[a(t) ⊗ b(t)] = X̃; a(t), b(t), x(t), x̃(t) ∈ Γ; A, B, X, X̃ ∈ Ω.
According to (4.3.9), the identities between the quantities of absolute informa-
tion of the signals a(t), b(t), x(t), x̃(t) defined for both physical Γ and informational
Ω signal spaces, respectively, hold: Ia = IA , Ib = IB , Ix = IX , Ix̃ = IX̃ . These
identities define isomorphism of the measures mΓ , m of both physical Γ and infor-
mational Ω signal spaces: mΓ [a(t)] = m(A), mΓ [b(t)] = m(B ), mΓ [x(t)] = m(X ),
mΓ [x̃(t)] = m(X̃ ). Then the mapping ϕ: Γ → Ω is an isomorphism preserving a
measure and mΓ [x(t)] = m(X ) ⇒ Ix = IX ⇒ Ia⊕b = IA+B . 
Note that a measure mΓ[x̃(t)] of the signal x̃(t) = a(t) ⊗ b(t) has the sense of
the quantity of mutual information Iab contained in both the signal a(t) and the
signal b(t), which will be also denoted below as Ia⊗b , indicating the relation of this
measure to the binary operation ⊗ of the space Γ.
Thus, the main content of Theorem 4.3.1 is that during the interaction
x(t) = a(t) ⊕ b(t) of the signals a(t) and b(t) in physical space Γ in the form of
generalized Boolean algebra, the measures of the corresponding sorts of informa-
tion quantities defined for both physical Γ and informational Ω signal spaces are
isomorphic and the identities hold:

Ia = IA , Ib = IB , Ix = IX , Ix̃ = IX̃ , Ia⊕b = IA+B , Iab = Ia⊗b = IAB .

The answers to the questions above are as follows.

1. During signal interaction in linear space LS, the losses of information take
place. The presence of such losses is explained by the fact that linear space
LS is not isomorphic to generalized Boolean algebra with a measure.
2. Interaction of the signals is accompanied by losses of information, though
not in all the spaces. The exceptions are the spaces with the properties
of a generalized Boolean algebra with a measure.
Unfortunately, Theorem 4.3.1 does not give concrete recommendations for obtaining
the signal spaces with these useful informational properties. The requirement for the
informational identity (4.3.11) to be valid for practical application may be too strict.
In this case, it is enough to require the quantity of mutual information Iax = Ia⊗x ,
Ibx = Ib⊗x , contained in the signals a(t) and b(t) and in the result of their interaction
x(t), to be identically equal to the quantities of absolute information Ia and Ib
contained in these signals:

Ia⊗x = Ia;
Ib⊗x = Ib.
On the basis of this approach, one may define a sort of interaction of the sig-
nals in physical signal space that differs from ideal interaction with respect to its
informational properties. This formulation is closer to applied aspects of signal
processing and is expanded in its algebraic interpretation.

Definition 4.3.3. Quasi-ideal interaction x(t) = a(t) ⊕ b(t) of two signals a(t)
and b(t) in physical signal space Γ, where two binary operations of addition ⊕ and
multiplication ⊗ are defined, is a binary operation ⊕ providing that the quantity
of mutual information Iax = Ia⊗x , Ibx = Ib⊗x contained in the signals a(t) and
b(t), and in the result of their interaction x(t) is equal to the quantity of absolute
information Ia, Ib contained in these signals, respectively:

Ia⊗x = Ia;     (a)
Ib⊗x = Ib;     (b)     (4.3.14)
x(t) = a(t) ⊕ b(t).     (c)

Let us find out what algebraic properties physical signal space Γ must possess for the
equation system (4.3.14) to hold. The answer to this question is given by the following
theorem.
Theorem 4.3.2. Let there be two binary operations of addition ⊕ and multiplica-
tion ⊗ defined in physical signal space Γ. Then for the equation system (4.3.14) to
hold, it is necessary and sufficient that physical signal space Γ be a lattice with a
measure mΓ and operations of join ⊕ and meet ⊗.
Proof. If physical signal space Γ is a lattice with operations of join a(t) ⊕ b(t) and
meet a(t) ⊗ b(t), then the following relationships hold:

a(t) ⊗ x(t) = a(t);     (a)
b(t) ⊗ x(t) = b(t);     (b)     (4.3.15)
x(t) = a(t) ⊕ b(t),     (c)

where a(t) ⊕ b(t) = supΓ {a(t), b(t)}; a(t) ⊗ b(t) = inf Γ {a(t), b(t)}.
Identities (4.3.15a) and (4.3.15b) define the absorption axioms of a lattice [223],
[221]. If physical signal space Γ is a lattice with a measure mΓ, then the system
(4.3.15) determines the following identities:

mΓ(a(t) ⊗ x(t)) = mΓ(a(t));     (a)
mΓ(b(t) ⊗ x(t)) = mΓ(b(t));     (b)     (4.3.16)
x(t) = a(t) ⊕ b(t),     (c)

where mΓ(a(t) ⊗ x(t)) = Ia⊗x, mΓ(b(t) ⊗ x(t)) = Ib⊗x, mΓ(a(t)) = Ia, mΓ(b(t)) = Ib.
Thus, for the identities (4.3.14a) and (4.3.14b) of the system (4.3.14) to hold,
it is sufficient that physical signal space Γ be a lattice with a measure.

Thus, for the identities (4.3.14a) and (4.3.14b) of the system (4.3.14) to hold,
i.e., for quasi-ideal interaction in physical signal space Γ to take place, it is sufficient
that physical signal space Γ be a lattice Γ(∨, ∧) with operations of join and meet,
respectively: a(t) ∨ b(t) = supΓ {a(t), b(t)}, a(t) ∧ b(t) = inf Γ {a(t), b(t)}.
Then for interaction of two signals a(t), b(t) in physical signal space Γ with
lattice operations: x(t) = a(t) ∨ b(t) or x̃(t) = a(t) ∧ b(t), the initial requirement
(4.3.14) could be written in slightly extended form owing to duality of lattice op-
eration properties, with the help of two equation systems, respectively:

Ia∧x = Ia;     (a)
Ib∧x = Ib;     (b)     (4.3.17)
x(t) = a(t) ∨ b(t),     (c)

Ia∨x̃ = Ia;     (a)
Ib∨x̃ = Ib;     (b)     (4.3.18)
x̃(t) = a(t) ∧ b(t).     (c)

It should be noted that in physical signal space Γ, where ideal interaction of the sig-
nals exists (i.e., the identity (4.3.11) holds), the relationships (4.3.14a) and (4.3.14b)
unconditionally hold, and correspondingly there exists a quasi-ideal interaction of
the signals. Physical signal space Γ, which is a generalized Boolean algebra B(Γ)
with a measure mΓ and signature (⊕, ⊗, −, O), is also a lattice of signature (⊕, ⊗)
with operations of least upper bound and greatest lower bound (join and meet),
respectively:

a(t) ⊕ b(t) = supΓ {a(t), b(t)};   a(t) ⊗ b(t) = inf Γ {a(t), b(t)}.

On the basis of Definition 4.3.3, another variant of the formulation of the notion of
quasi-ideal interaction may be suggested that does not directly reflect the informational
properties of interaction of this kind, putting an accent on its algebraic properties
only.
Definition 4.3.4. Quasi-ideal interaction x(t) = a(t) ⊕ b(t) of useful a(t) and
interference b(t) signals in physical signal space Γ, where two binary operations of
addition ⊕ and multiplication ⊗ are defined, is a binary operation ⊕ forming a
result of signal interaction x(t) that allows extracting a completely known signal
a(t) from a mixture x(t) without losses of information, so that the identity holds:

â(t) ⊗ x(t) = a(t), (4.3.19)

where â(t) is some deterministic function of the signals a(t) and x(t).
Let us find out what algebraic properties physical signal space Γ must possess for the
identity (4.3.19) to hold; we must also establish the kind of the function â(t) satisfying
Equation (4.3.19). The answer to this question is given by the following theorem.
Theorem 4.3.3. Let there be two binary operations of addition ⊕ and multiplication
⊗ defined in physical signal space Γ. Then for the identity (4.3.19)
to hold, it is sufficient that physical signal space Γ be a lattice with signature (⊕, ⊗)
and operations of join and meet, respectively:

a(t) ⊕ b(t) = supΓ {a(t), b(t)};   a(t) ⊗ b(t) = inf Γ {a(t), b(t)}.

Proof. Definition 4.3.4 implies that in quasi-ideal interaction x(t) = a(t) ⊕ b(t) of a
completely known useful signal a(t) with interference (noise) signal b(t) in physical
signal space Γ, there exists a binary operation ⊗ that allows, with the help of
the estimator â(t) of the signal a(t) with completely known parameters (â(t) =
a(t)), obtaining (extracting) the useful signal a(t) from the result of interaction
x(t) without information losses:

y (t) = â(t) ⊗ x(t) = a(t) ⊗ (a(t) ⊕ b(t)) = a(t). (4.3.20)


The last interrelation determines the absorption property for a lattice with opera-
tions of join and meet, respectively: a(t) ⊕ b(t) and a(t) ⊗ b(t). This means that for
quasi-ideal interaction of useful a(t) and interference b(t) signals defined above to
exist in physical signal space Γ, it is sufficient that signal space Γ be a lattice with
signature (⊕, ⊗).

It should be noted that for physical signal space Γ with lattice properties and
signature (⊕, ⊗), for dual interaction of the signals in the form of x̃(t) = a(t) ⊗ b(t),
the relationship that is dual with respect to the equality (4.3.20), holds:

y(t) = â(t) ⊕ x̃(t) = a(t) ⊕ (a(t) ⊗ b(t)) = a(t).     (4.3.21)

The utility of the notion of quasi-ideal interaction is illustrated by the following
examples.

Example 4.3.2. Consider a model of interaction of the signal si (t) from the set
of deterministic signals S = {si (t)}, i = 1, . . . , m and noise n(t) in signal space
Γ(∨, ∧) with lattice properties and operations of join a(t) ∨ b(t) and meet a(t) ∧ b(t)
(a(t), b(t) ∈ Γ(∨, ∧)), respectively:

x(t) = si (t) ∨ n(t), t ∈ Ts , i = 1, . . . , m, (4.3.22)

where Ts = [t0, t0 + T] is the domain of definition of the signal si(t); t0 is a known
time of signal arrival; T is a signal duration; m ∈ N, N is the set of natural numbers.
We consider that the noise n(t) is characterized by arbitrary probabilistic-
statistical properties. In the presence of the signal sk (t), sk (t) ∈ S, 1 ≤ k ≤ m
in the observed process x(t): x(t) = sk (t) ∨ n(t), t ∈ Ts , to solve a classification
problem for the signal sk (t) from a set of deterministic signals S = {si (t)} in the
presence of noise n(t), it is necessary to form the estimator ŝk (t), which is equal
to the received signal sk (t), so that the structure-forming function of the optimal
signal processing algorithm has the form ŝk (t) ∧ x(t), and the result of optimal
processing yk (t), according to the lattice absorption axiom, is identically equal to
the received signal sk (t):

yk (t) = ŝk (t) ∧ x(t) = sk (t) ∧ [sk (t) ∨ n(t)] = sk (t).

Thus, regardless of parametric and nonparametric prior uncertainty conditions and,
correspondingly, regardless of the probabilistic-statistical properties of noise (interference),
the optimal demodulator of deterministic signals in the signal space with
lattice properties accurately classifies (demodulates) the signals from the given set
S = {si(t)}, i = 1, . . . , m.
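A minimal numerical sketch of this example, assuming one concrete realization of the lattice operations as pointwise maximum (join) and minimum (meet) of sample functions — an illustrative choice, since the text keeps Γ(∨, ∧) abstract:

```python
import numpy as np

# Sketch of Example 4.3.2 with join/meet realized as pointwise max/min —
# one concrete lattice on sample functions (an illustrative assumption).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)

s_k = np.sin(2 * np.pi * 5 * t)        # a known signal from S (hypothetical)
n = rng.normal(0.0, 10.0, t.size)      # noise with arbitrary statistics

x = np.maximum(s_k, n)                 # x(t) = s_k(t) ∨ n(t), cf. (4.3.22)
y_k = np.minimum(s_k, x)               # y_k(t) = ŝ_k(t) ∧ x(t), with ŝ_k = s_k

# Lattice absorption: y_k(t) = s_k(t) ∧ [s_k(t) ∨ n(t)] = s_k(t) exactly,
# regardless of the noise distribution or its power.
assert np.array_equal(y_k, s_k)
```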

Example 4.3.3. Shannon's theorem on capacity of communication channel with
additive white Gaussian noise implies a lower bound value of the ratio Eb /N0 of
the energy Eb that falls to one bit of information contained in the signal to the
value of noise power spectral density N0 ; this ratio is called an ultimate Shannon
limit. This value Eb /N0 equal to ln 2 establishes the limit of errorless transmission
of information; see, for instance, [164]. The former example implies that while solving
the classification problem of deterministic signals in the presence of noise, the
value Eb/N0 and the probability of erroneous reception of the signal si(t) from a set
of deterministic signals S = {si(t)}, i = 1, . . . , m may be arbitrarily small. This,
however, does not mean that the unbounded capacity of a communication channel
with noise can be provided in such signal spaces. In particular, Chapter 5 will show
that communication channel capacity, even in the absence of noise, is always a finite
value.

4.3.2 Informational Relationships Characterizing Signal Interaction in Signal Spaces with Various Algebraic Properties
Analysis of informational relationships taking place under different sorts of signal
interactions in the signal spaces, i.e., spaces with various algebraic properties, has
to be performed from unified positions. To provide such an approach, we use the
following considerations.
The statement of Sections 3.2 and 3.3 that any pair of stochastic processes can
be considered a partially ordered set may be generalized for a physical signal space
as a whole. Any physical signal space Γ can be considered a partially ordered set.
In every time instant t ∈ T between two instantaneous values (samples) at , bt of
the signals a(t), b(t) ∈ Γ, a relation of order at ≤ bt (or at ≥ bt ) is defined. Then the
partially ordered set Γ is a lattice with operations of join and meet, respectively:
at ∨ bt = supΓ {at , bt }, at ∧ bt = inf Γ {at , bt }, and if at ≤ bt , then at ∧ bt = at and
at ∨ bt = bt [221]:

at ≤ bt ⇔ { at ∧ bt = at;  at ∨ bt = bt }.
Thus, let at and bt be the instantaneous values (samples) of stochastic signals a(t)
and b(t) with symmetric (even) univariate probability density functions (PDFs)
pa (u), pb (v ): pa (u) = pa (−u); pb (v ) = pb (−v ). The quantity νP (at , bt ), introduced
by Definition 3.3.1 in Section 3.3, is called a probabilistic measure of statistical
interrelationship (PMSI) between a pair of the samples at , bt of stochastic signals
a(t) and b(t) in physical signal space Γ: a(t), b(t) ∈ Γ:
νP (at , bt ) = 3 − 4P[at ∨ bt > 0], (4.3.23)
where P[at ∨ bt > 0] is the probability that random variable at ∨ bt , which is equal
to join of the samples at and bt , takes a value greater than zero.
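A Monte-Carlo sketch of (4.3.23) for zero-mean Gaussian samples (an illustration with assumed distributions): for statistically independent at, bt one obtains νP ≈ 0, for bt = at one obtains νP = 1, and for bt = −at one obtains νP = −1:

```python
import numpy as np

# Monte-Carlo sketch of (4.3.23) for zero-mean Gaussian samples (illustrative).
rng = np.random.default_rng(1)
N = 1_000_000
a = rng.normal(size=N)

cases = [(rng.normal(size=N), "independent"), (a, "b = a"), (-a, "b = -a")]
for b, label in cases:
    p = np.mean(np.maximum(a, b) > 0)   # estimate of P[a_t ∨ b_t > 0]
    print(label, " ν_P ≈", 3 - 4 * p)   # ≈ 0, 1, and −1, respectively
```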
In the further consideration of informational relationships taking place during
signal interaction in signal spaces with various algebraic properties, the notion of
PMSI and the material of Section 3.3 will be used to a substantial degree.
The notions of ideal and quasi-ideal interactions between the signals are in-
troduced by Definitions 4.3.2, 4.3.3 respectively, and besides, it is specified that
physical signal space Γ, where these sorts of interactions take place, necessarily has
to possess two binary operations, i.e., the operations of addition ⊕ and multipli-
cation ⊗ appearing in definition of the notions of quantity of overall information
Ia⊕b and quantity of mutual information Iab = Ia⊗b, respectively. The latter
are equal to the quantity of absolute information Ix, Ix̃ contained in the results of
these binary operations x(t) = a(t) ⊕ b(t), x̃(t) = a(t) ⊗ b(t) between the signals
a(t), b(t), respectively:
Ix = Ia⊕b ; Ix̃ = Ia⊗b . (4.3.24)
Note that the two aforementioned binary operations (addition ⊕ and multiplication
⊗) cannot always be defined simultaneously in a physical signal space Γ. The
exceptions are the signal spaces with group (semigroup) properties, where only one
binary operation is defined over the signals, i.e., either addition ⊕ or multiplication
⊗.
There exists a theoretical problem which will be considered within physical
signal space Γ with the properties of an additive group of linear space, where
only one binary operation between the signals is defined, i.e., addition ⊕. By a
method based on the relationships (4.3.24), we can define only a quantity of overall
information Ia⊕b contained in the result of interaction x(t) = a(t) ⊕ b(t) in such a
signal space. It is not possible to define similarly a quantity of mutual information
contained in both signals a(t) and b(t).
In order to provide an approach to analyzing informational relationships taking
place in signal spaces with various types of interactions, it is necessary to define the
quantity of mutual information contained in both signals a(t) and b(t) in such a way
that it is acceptable for signal spaces with a minimal number of binary operations
between the signals.
It is obvious that the minimal number of operations is equal to one; in this case,
the only binary operation characterizes an interaction of the signals in physical sig-
nal space with group (semigroup) properties. On the other hand, the definition of the
quantity Iab should ensure that the introduced notion does not contradict the results
obtained earlier. The necessary approach to a formulation of the quantity of mutual
information Iab that is acceptable for signal spaces with arbitrary algebraic properties
is given by the following definition.
Definition 4.3.5. For stationary and stationary coupled stochastic signals a(t) and
b(t), which interact in physical signal space Γ with arbitrary algebraic properties,
quantity of mutual information, contained in the signals a(t) and b(t), is a quantity
Iab equal to:
Iab = νab min(Ia , Ib ), (4.3.25)
where νab = νP (at , bt ) is PMSI between the samples at and bt of stochastic signals
a(t) and b(t) in physical signal space Γ determined by the relationship (4.3.23).
With the help of Definition 4.3.5, for an arbitrary signal a(t) in physical signal
space Γ with arbitrary algebraic properties, we introduce a measure of information
quantity mΓ defined by Equation (4.3.25):

mΓ [a(t)] = Iaa = νaa min(Ia , Ia ) = Ia .

As shown in Section 3.3, for physical signal space Γ with group properties, the
notions of PMSI (see Definition 3.3.1) and normalized measure of statistical inter-
relationship (NMSI) (see Definition 3.2.2) coincide; as for signal space Γ with lattice
properties, these notions have different content. In this section, the further analy-
sis of informational relationships between the signals interacting in physical signal
space Γ with various algebraic properties will be performed on the basis of quantity
of mutual information (4.3.25), and this quantity Iab will be used on the basis of
PMSI (νab = νP (at , bt )), although it can be defined through NMSI (νab = ν (at , bt )).
So, for instance, in the next section, this notion will be defined and used on the
basis of NMSI (νab = ν (at , bt )).
As noted at the beginning of the section, in applied problems of radiophysics and
radio engineering, an additive interaction of useful signal and interference (noise) is
considered. In some special cases, a multiplicative signal interaction is the subject
of interest [244]. These and other sorts of signal interactions in physical signal space
that differ in their informational properties from the above kinds of interaction will
be referred to a large group of interactions with so-called usual informational properties
with the help of the following definition, based on the notion of quantity of mutual
information introduced by Definition 4.3.5.
Definition 4.3.6. Usual interaction of two signals a(t) and b(t) in physical signal
space Γ with semigroup properties is a binary operation ⊕ providing that the quantity of
mutual information Iax, Ibx contained in the distinct signals a(t), b(t) (a(t) ≠ b(t))
and in the result of their interaction x(t) = a(t) ⊕ b(t) is less than the quantity of
absolute information Ia, Ib contained in these signals, respectively:

Iax < Ia;     (a)
Ibx < Ib;     (b)     (4.3.26)
x(t) = a(t) ⊕ b(t).     (c)

Recall that all the definitions of information quantities listed in Section 4.1 are
based upon an axiomatic statement according to which the sorts of information
quantities are completely defined by the kind of binary operation between the images
A and B of the signals a(t) and b(t) in informational signal space Ω built upon
generalized Boolean algebra B(Ω) with a measure m and signature (+, ·, −, O).
Meanwhile, the quantity of mutual information is introduced differently in Definition
4.3.5, i.e., through PMSI and the quantities of absolute information contained in the
interacting signals a(t), b(t). First, it is consistent with the corresponding notion
introduced in Section 4.1. Second, it allows considering the features of informational
relationships between the signals interacting in physical signal space Γ with arbitrary
algebraic properties.
For ideal interaction x(t) = a(t) ⊕ b(t) between the signals a(t) and b(t) in
physical signal space Γ with the properties of generalized Boolean algebra B(Γ) with
a measure mΓ and signature (⊕, ⊗, −, O), the following informational relationships
hold:

Ia + Ib − Iab = Ix;     (a)
Iax = Ia;     (b)     (4.3.27)
Ibx = Ib,     (c)

where Ia = mΓ[a(t)] = IA = m(A), Ib = mΓ[b(t)] = IB = m(B), Ix = mΓ[x(t)] =
IX = IA+B = m(A + B), Iab = mΓ[a(t) ⊗ b(t)] = IAB = m(AB); m is a measure of
informational signal space Ω that is isomorphic to a measure mΓ of physical signal
space Γ.
Equality (4.3.27a) represents an informational identity of ideal interaction
(4.3.11) and (4.3.12). Equalities (4.3.27b) and (4.3.27c) represent informational
identities of quasi-ideal interaction (4.3.14a), (4.3.14b) that take place during ideal
interaction of signals. The values of quantities of mutual information Iab , Iax , Ibx
appearing in all the equations of the system (4.3.27) should be considered in the con-
text of Definition 4.3.5. The identities (4.3.27b) and (4.3.27c) are the consequence
of lattice absorption axioms (4.3.15a) and (4.3.15b). Based on the equations in
(4.3.27), the following system of equations can be obtained (dividing (4.3.27a) by Ix
and using (4.3.27b) and (4.3.27c)):

Iax/Ix + Ibx/Ix − Iab/Ix = 1;     (a)
Iax = Ia;     (b)     (4.3.28)
Ibx = Ib.     (c)

For quasi-ideal interaction x(t) = a(t) ⊕ b(t) between the signals a(t) and b(t)
in physical signal space Γ with lattice properties and a measure mΓ, the following
main informational relationships hold:

Ia + Ib − Iab > Ix;     (a)
Iax = Ia;     (b)     (4.3.29)
Ibx = Ib,     (c)

where Ia = mΓ[a(t)] = IA = m(A), Ib = mΓ[b(t)] = IB = m(B), Ix = mΓ[x(t)] =
IX = m(X), Iab = νab min(Ia, Ib); m is a measure of informational signal space Ω;
mΓ is a measure of physical signal space Γ with lattice properties.
Here, the inequality (4.3.29a) represents the informational inequality (4.3.10),
and the equalities (4.3.29b) and (4.3.29c) represent informational identities of quasi-
ideal interaction (4.3.14a) and (4.3.14b). The quantities of mutual information Iab ,
Iax , Ibx , appearing in all the equations of the system (4.3.29), should be considered
in the sense of Definition 4.3.5. The identities (4.3.29b) and (4.3.29c) follow from
the results (3.3.9a) and (3.3.9b) of Theorem 3.3.7.
On the basis of relationships of the system (4.3.29), the following system of
relationships can be obtained:

Iax/Ix + Ibx/Ix − Iab/Ix > 1;     (a)
Iax = Ia;     (b)     (4.3.30)
Ibx = Ib.     (c)

For interaction x(t) = a(t) ⊕ b(t) of a pair of stochastic signals a(t) and b(t) in
physical signal space Γ with group properties (i.e., for additive interaction x(t) =
a(t) + b(t) in linear signal space LS), the following main informational relationships
hold:

νP(at, xt) + νP(bt, xt) − νP(at, bt) = 1;     (a)
νP(at, xt) < 1;     (b)     (4.3.31)
νP(bt, xt) < 1,     (c)

where νP (at , xt ), νP (bt , xt ), νP (at , bt ) are PMSIs of the corresponding pairs of their
samples at , xt ; bt , xt ; at , bt of stochastic signals a(t), b(t), x(t).
The equality (4.3.31a) represents the result (3.3.7) of Theorem 3.3.6. The
inequalities (4.3.31b), (4.3.31c) are the consequences of the inequalities (4.3.26a),
(4.3.26b) of Definition 4.3.6. Considering the relationship (4.3.25) of Definition 4.3.5,
the system (4.3.31) can be rewritten in the following form:

Iax/Ia + Ibx/Ib − Iab/min[Ia, Ib] = 1;     (a)
Iax < Ia;     (b)
Ibx < Ib,     (c)     (4.3.32)
Ia/Ix + Ib/Ix − Iab/Ix > 1,     (d)
where Ia = IA = m(A), Ib = IB = m(B ), Ix = IX = m(X ); Iab = νab min(Ia , Ib ),
Iax = νax min(Ia , Ix ), Ibx = νbx min(Ib , Ix ); m is a measure of informational signal
space Ω.
The equality (4.3.32a) is the consequence of joint fulfillment of the relationships
(4.3.31a) and (4.3.25). The inequalities (4.3.32b), (4.3.32c) are the consequences of
the inequalities (4.3.26a), (4.3.26b) of Definition 4.3.6. Inequality (4.3.32d) repre-
sents the informational inequality (4.3.10).
Fig. 4.3.1 illustrates the dependences Ibx(Iax) of quantity of mutual information
Ibx contained in the signal x(t) with respect to the signal b(t) on quantity of mutual
information Iax contained in the signal x(t) with respect to the signal a(t): line 1
corresponds to ideal interaction; point 2 is the quasi-ideal interaction; line 3 illustrates
the usual interaction of statistically independent signals (i.e., with Iab = 0) in physical
signal space Γ with group properties (in linear signal space LS); curve 4 is the usual
interaction of statistically independent signals in physical signal space Γ with group
properties (in linear signal space LS) with Ibx(Iax) ≤ sup_{Ia+Ib=const} [Ibx(Iax)]|_{Iab=0};
line 5 is the usual interaction of statistically dependent signals in physical signal
space Γ with group properties (in linear signal space LS); the dependences are built
on the relationships (4.3.28), (4.3.30), and (4.3.32), respectively.

FIGURE 4.3.1 Dependences Ibx(Iax) of quantity of mutual information Ibx on quantity of mutual information Iax.
For the ideal interaction x(t) = a(t) ⊕ b(t) between the signals a(t), b(t) in physical
signal space Γ with properties of generalized Boolean algebra B(Γ) with signature
(⊕, ⊗, −, O), the dependence 1 is determined by the line equation (4.3.28a), with
Ibx = 0 at Iax = Ia + Ib and Iax = 0 at Ibx = Ia + Ib. The dependence 1 determines
the upper bound of possible values of quantities of mutual information Iax and Ibx
of interacting signals a(t) and b(t). No sort of signal interaction allows achieving
better informational relationships for a fixed sum Ia + Ib = const.
For quasi-ideal interaction x(t) = a(t) ⊕ b(t) of a pair of signals a(t) and b(t)
in physical signal space Γ with lattice properties, the dependence 2 is determined
by the point Iax = Ia , Ibx = Ib that corresponds to the identities (4.3.29b) and
(4.3.29c).
For interaction x(t) = a(t) ⊕ b(t) between statistically independent signals a(t)
and b(t) in physical signal space Γ with group properties (for additive interaction
x(t) = a(t) + b(t) in linear signal space LS), the dependence 3 is determined by the
line equation (4.3.32a) passing through the points (Ia, 0) and (0, Ib), since for
independent signals a(t) and b(t) the quantity of mutual information is
equal to zero: Iab = 0. The dependence 3 determines the lower bound of possible
values of quantities of mutual information Iax and Ibx of interacting signals a(t)
and b(t). Whatever the signal interaction, one cannot obtain worse informational
relationships for a fixed sum Ia + Ib = const.
The dependence 4, $\sup\limits_{I_a+I_b=\mathrm{const}}[I_{bx}(I_{ax})]\big|_{I_{ab}=0}$, determines the upper bound of possible values of quantities of mutual information Iax and Ibx of the signals a(t) and b(t) additively interacting in physical signal space Γ with group properties (in linear space LS). Whatever the signal interactions, one cannot achieve better informational relationships for the fixed sum Ia + Ib = const. This function is determined by the relationship:

$$\sup_{I_a+I_b=\mathrm{const}}[I_{bx}(I_{ax})]\Big|_{I_{ab}=0} = \left(\sqrt{I_a + I_b} - \sqrt{I_{ax}}\right)^2. \quad (4.3.33)$$

For the interaction x(t) = a(t) ⊕ b(t) between statistically dependent signals a(t)
and b(t) in physical signal space Γ with group properties (for additive interaction
x(t) = a(t) + b(t) in linear signal space LS), the dependence 5 is determined by the
line equation (4.3.32a) passing through the points (kIa , 0) and (0, kIb ), where k is
equal to:
k = 1 + [Iab / min(Ia , Ib )], (4.3.34)
and for the dependent signals a(t) and b(t), the quantity of mutual information Iab
is bounded by the quantity $I_{ab} \le [\min(I_a, I_b)]^2/\max(I_a, I_b)$.
Dependence 5 determines the upper bound of possible values of quantities of
mutual information Iax and Ibx of statistically dependent signals a(t) and b(t)
interacting in physical signal space Γ with group properties (in linear signal space
LS). Whatever the signal interactions, one cannot achieve better informational relationships for the fixed sum Ia + Ib = const. If the interacting signals
a(t) and b(t) are identical in informational sense (see Definition 4.3.1), i.e., they
are characterized by the same information quantity Ia = Ib , so that information
contained in these signals has the same content, then the value of coefficient k
determined by relationship (4.3.34) becomes equal to 2: k = 2. In this (and only in
this) case, the dependences 5 and 1 coincide.
Thus, the locus of the lines that characterize the dependence (4.3.32a) for interaction x(t) = a(t) ⊕ b(t) of the signals a(t) and b(t) in physical signal space Γ
with group properties (in linear signal space LS) is between the curves 1 and 3 in
Fig. 4.3.1.
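These bounds are easy to evaluate numerically. The following sketch (illustrative only; the values of Ia and Ib are hypothetical) computes the dependences 1, 3, 4, and 5 of Fig. 4.3.1 from the relationships (4.3.28a), (4.3.32a), (4.3.33), and (4.3.34):

```python
import numpy as np

# Illustrative evaluation of the bounds in Fig. 4.3.1; Ia and Ib are
# hypothetical quantities of absolute information (in abits).
Ia, Ib = 3.0, 2.0

def line1(Iax):                      # ideal interaction: Iax + Ibx = Ia + Ib
    return Ia + Ib - Iax

def line3(Iax):                      # Eq. (4.3.32a) with Iab = 0
    return Ib * (1.0 - Iax / Ia)

def curve4(Iax):                     # Eq. (4.3.33)
    return (np.sqrt(Ia + Ib) - np.sqrt(Iax)) ** 2

Iab_max = min(Ia, Ib) ** 2 / max(Ia, Ib)   # admissible bound on Iab
k = 1.0 + Iab_max / min(Ia, Ib)            # Eq. (4.3.34)
print(f"line 5 intercepts: ({k * Ia:.2f}, 0) and (0, {k * Ib:.2f}); k = {k:.3f}")

for Iax in np.linspace(0.0, Ia, 4):
    print(f"Iax={Iax:4.2f}: line1={line1(Iax):5.2f} "
          f"line3={line3(Iax):5.2f} curve4={curve4(Iax):5.2f}")
```

For Ia = Ib the coefficient k evaluates to 2 and line 5 coincides with line 1, in agreement with the statement above.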
In summary, one can make the following conclusions.

1. As a rule, in all physical signal spaces, the signal interaction is accompanied by the losses of some quantity of overall information. Exceptions are the signal spaces with the properties of generalized Boolean algebra.
In such spaces, the signal interaction is characterized by preservation of
quantity of overall information contained in a pair of interacting signals,
so that the informational identity (4.3.12) holds.
2. During a signal interaction in physical signal space that is non-isomorphic
to generalized Boolean algebra with a measure, there inevitably appear
the losses of some quantity of overall information. On the qualitative level,
these losses are described by informational inequality (4.3.10). The inter-
action of the signals in physical signal space Γ with group properties (and
also in linear space LS) is always accompanied by losses of some part
of overall information.
3. As a rule, in all physical signal spaces, the signal interaction is accom-
panied by the losses of some quantity of mutual information between the
signals before and after their interactions. Exceptions are the signal spaces
with lattice properties. The signal interactions in the spaces of such type
are characterized by preservation of quantity of mutual information con-
tained in one of the interacting signals (a(t) or b(t)), and in the result
of their interaction x(t) = a(t) ⊕ b(t), and the informational identities
(4.3.14a) and (4.3.14b) hold.
4. During a signal interaction in physical signal space that is non-isomorphic
to lattice, there inevitably appear the losses of some quantity of mutual
information contained in one of the interacting signals (a(t) or b(t)) and
in the result of their interaction x(t) = a(t) ⊕ b(t). On the qualitative
level, these losses are described by informational inequalities (4.3.24a),
(4.3.24b). The signal interaction in physical signal space Γ with group
properties (and also in linear space LS) is always accompanied by the
losses of some quantity of mutual information contained in one of the
interacting signals and in the result of their interaction.
5. Depending on qualitative relations between quantity of overall informa-
tion Ia⊕b , quantity of mutual information Iab , and quantities of absolute
information Ia and Ib contained in the interacting signals a(t) and b(t),
we distinguish the following types of interactions between the signals in
physical signal space: ideal interaction, quasi-ideal interaction, and usual
interaction introduced by Definitions 4.3.2, 4.3.3, and 4.3.6, respectively.
6. During ideal interaction of the signals in physical signal space with the
properties of generalized Boolean algebra, the informational relationships
(4.3.28) hold.
7. During quasi-ideal interaction of the signals in physical signal space with lattice properties, the informational relationships (4.3.30) hold.
8. During the signal interaction in physical signal space Γ with group prop-
erties (in linear signal space LS), the informational relationships (4.3.32)
hold.
9. Dependences 1 and 3 in Fig. 4.3.1 defined by the equalities (4.3.28a)
and (4.3.32a) establish the corresponding upper and the lower theoretical
bounds that confine the informational relationships of the signals inter-
acting in signal spaces with various algebraic properties.
10. Informational signal space is an abstract notion that allows us to evalu-
ate possible informational relationships between the signals interacting in
real physical signal space. If we will discover a physical medium in which
the signal interactions will have the properties of ideal interactions de-
fined here, then highest possible quality indices of signal processing will
be achieved. Unlike signal spaces admitting ideal interaction of signals,
physical signal space with lattice properties can be realized now. The pre-
requisites for developing such signal spaces and their relations with linear
signal spaces were defined by Birkhoff [221, Sections XIII.3 and XIII.4].
11. Unfortunately, the quantity equal to 1 − νP (at , bt ) that acts as a metric in
a linear signal space is not a metric in a signal space with lattice properties
and does not allow adequate evaluation of the informational relationships
describing signal interactions in signal spaces with various algebraic prop-
erties based on unified positions. This statement arises from the defini-
tion of quantity of mutual information (4.3.25) based on the probabilistic
measure of statistical interrelationship (PMSI). One main property of a
measure of information quantity is providing signal space metrization that
cannot be realized based on quantity of mutual information (4.3.25). To
construct a metric signal space with concrete algebraic properties, we use
the normalized measure of statistical interrelationship (NMSI) introduced
in Section 3.2. That approach is the subject of Section 4.4.

4.4 Metric and Informational Relationships between Signals Interacting in Signal Space
Results of comparative analysis on the main informational relationships describing
the signal interactions in both linear signal space and the signal space with lattice
properties discussed in Section 4.3 require clarification concerning metric and in-
formational properties of such physical signal spaces. As noted in Section 4.3, the
physical interaction x(t) = a(t) ⊕ b(t) of two signals a(t) and b(t) in physical space
Γ exerts a negative influence on the values of some types of information quantities defined for physical signal space Γ. The quantities of overall information $I_{a\oplus b} \ne I_{A+B}$ and mutual information $I_{ab} \ne I_{AB}$, corresponding to a pair of interacting signals a(t) and b(t), are changed with respect to the quantities $I_{A+B}$ and $I_{AB}$ defining these notions in informational signal space Ω.
The only way to solve the problem of defining informational relationships be-
tween signals interacting in a physical signal space is introducing a metric into it. Other-
wise, one cannot establish how close or far the signals are from each other. In other
words, introducing a metric in physical signal space provides adequacy of analy-
sis of informational relationships of the signals. A newly introduced metric must
satisfy a critical property: it must conform to the measure of information quantity
introduced earlier.
The goals of this section are (1) metrization of physical signal space on the basis
of a normalized metric µ(at , bt ) (3.2.1) between the samples at , bt of stochastic sig-
nals a(t), b(t) ∈ Γ and (2) definition of the main types of information quantities that
characterize signal interactions in physical signal spaces with various informational
properties.
A key notion of this section is a metric µ(at , bt ) between the samples at , bt
of stochastic signals a(t), b(t) ∈ Γ interacting in physical signal space Γ with the
properties of either a group Γ(+) or a lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧).
Preserving the introduced notation system for the signals interacting in physical
space Γ, we consider an additive interaction of two stationary stochastic signals a(t)
and b(t) in a group Γ(+) of L-group Γ(+, ∨, ∧), a(t), b(t) ∈ Γ(+), t ∈ T :
x(t) = a(t) + b(t), (4.4.1)
and also their interaction in the signal space with lattice properties Γ(∨, ∧) of
L-group Γ(+, ∨, ∧) with operations of join and meet, respectively: a(t) ∨ b(t) =
supΓ {a(t), b(t)}, a(t) ∧ b(t) = infΓ {a(t), b(t)}, a(t), b(t) ∈ Γ(∨, ∧), t ∈ T :
x(t) = a(t) ∨ b(t), x̃(t) = a(t) ∧ b(t). (4.4.2)
On the basis of Axiom 4.1.1 formulated in Section 4.1, depending on the sig-
nature relations between the images A and B of the signals a(t) and b(t) in in-
formational signal space Ω built upon generalized Boolean algebra B(Ω) with a
measure m and signature (+, ·, −, O), A ∈ Ω, B ∈ Ω, the following sorts of infor-
mation quantity are defined: quantities of absolute information IA and IB , quantity
of mutual information IAB , and quantity of overall information IA+B defined by
the following relationships:
IA = m(A), IB = m(B ); (4.4.3a)
IAB = m(AB ); (4.4.3b)
IA+B = m(A + B ) = IA + IB − IAB , (4.4.3c)
so that the images A and B of the corresponding signals a(t) and b(t) are defined
by their information distribution densities and mappings (3.5.26).
Theorems 4.4.1 through 4.4.3 formulated below allow us to realize the metric transformation of physical signal space Γ into the metric space (Γ, µ) with a metric µ(at , bt ) (3.2.1) equal
to:
µ(at , bt ) = 1 − ν (at , bt ), (4.4.4)
where ν (at , bt ) is a normalized measure of stochastic interrelation (NMSI) between


the samples at and bt of stochastic signals a(t), b(t) ∈ Γ introduced by Defini-
tion 3.2.2.
Taking into account the peculiarities of the main informational relationships
between the signals interacting in the signal space with lattice properties, which
are cited in the previous section, we define a metric in physical signal space of
this type on the basis of the metric (4.4.4), supposing the interacting signals to be
stationary and stationary coupled, i.e., ν (at , bt ) =νab =const, t ∈ T .

Theorem 4.4.1. For stationary and stationary coupled stochastic signals


a(t), b(t) ∈ Γ in physical signal space Γ that are characterized by the quantity
of absolute information Ia and Ib , respectively, and also by NMSI ν (at , bt ) =νab
between the samples at and bt , the function dab determined by the equation:

dab = Ia + Ib − 2νab min[Ia , Ib ], (4.4.5)

is a metric.

Before proving the theorem, we transform Equation (4.4.5) to the form that
is convenient for the following reasoning, using the relationship (4.4.4) and the
identity [221, Section XIII.4;(22)]:

min[Ia , Ib ] = 0.5(Ia + Ib − |Ia − Ib |), (4.4.6)

with whose help the function dab may be written in the equivalent form:

dab = µ(at , bt )(Ia + Ib ) + (1 − µ(at , bt ))|Ia − Ib |. (4.4.7)
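The algebraic equivalence of the forms (4.4.5) and (4.4.7) can be confirmed numerically; the following minimal sketch (illustrative, with randomly drawn Ia, Ib and a random normalized metric value µ) relies only on the identity (4.4.6):

```python
import numpy as np

# Numerical check (illustrative, not from the book) that the metric (4.4.5)
# and its equivalent form (4.4.7) coincide, via identity (4.4.6):
# min[Ia, Ib] = 0.5*(Ia + Ib - |Ia - Ib|).
rng = np.random.default_rng(0)
for _ in range(10_000):
    Ia, Ib = rng.uniform(0.0, 10.0, size=2)
    mu = rng.uniform(0.0, 1.0)          # normalized metric mu(at, bt)
    nu = 1.0 - mu                       # NMSI, Eq. (4.4.4)
    d_445 = Ia + Ib - 2.0 * nu * min(Ia, Ib)             # Eq. (4.4.5)
    d_447 = mu * (Ia + Ib) + (1.0 - mu) * abs(Ia - Ib)   # Eq. (4.4.7)
    assert np.isclose(d_445, d_447)
print("forms (4.4.5) and (4.4.7) agree on all random samples")
```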

We write the expression (4.4.7) in the form of the function of three variables:

$$d_{ab} = F(d_u, \mu_u^{+}, \mu_u^{\Delta}) = d_u\,\mu_u^{+} + (1 - d_u)\,\mu_u^{\Delta}, \quad (4.4.8)$$

where $d_u = \mu(a_t, b_t)$, $\mu_u^{+} = \mu_{ab}^{+} = I_a + I_b$, $\mu_u^{\Delta} = \mu_{ab}^{\Delta} = |I_a - I_b|$.
Similarly, we introduce the following designations for the corresponding func-
tions dbc , dca between the pairs of signals b(t), c(t) and c(t), a(t):

$$d_{bc} = F(d_v, \mu_v^{+}, \mu_v^{\Delta}) = d_v\,\mu_v^{+} + (1 - d_v)\,\mu_v^{\Delta}, \quad (4.4.9)$$

where $d_v = \mu(b_t, c_t)$, $\mu_v^{+} = \mu_{bc}^{+} = I_b + I_c$, $\mu_v^{\Delta} = \mu_{bc}^{\Delta} = |I_b - I_c|$;

$$d_{ca} = F(d_w, \mu_w^{+}, \mu_w^{\Delta}) = d_w\,\mu_w^{+} + (1 - d_w)\,\mu_w^{\Delta}, \quad (4.4.10)$$

where $d_w = \mu(c_t, a_t)$, $\mu_w^{+} = \mu_{ca}^{+} = I_c + I_a$, $\mu_w^{\Delta} = \mu_{ca}^{\Delta} = |I_c - I_a|$.
It should be noted that for the values $d_u, \mu_u^{+}, \mu_u^{\Delta}$; $d_v, \mu_v^{+}, \mu_v^{\Delta}$; $d_w, \mu_w^{+}, \mu_w^{\Delta}$, the inequalities hold:

$$d_u \le d_v + d_w; \quad \mu_u^{+} \le \mu_v^{+} + \mu_w^{+}; \quad \mu_u^{\Delta} \le \mu_v^{\Delta} + \mu_w^{\Delta}, \quad (4.4.11)$$

$$\mu_u^{\Delta} \le \mu_u^{+}; \quad \mu_v^{\Delta} \le \mu_v^{+}; \quad \mu_w^{\Delta} \le \mu_w^{+}. \quad (4.4.12)$$
Further we shall denote:

$$F(x_{u,v,w}) = F(d_{u,v,w}, \mu_{u,v,w}^{+}, \mu_{u,v,w}^{\Delta}), \quad (4.4.13)$$

where $x_{u,v,w}$ is a variable denoting one of the three variables of the function (4.4.13), under the condition that the other two are constants:

$$\begin{cases} x_{u,v,w} = d_{u,v,w};\\ \mu_{u,v,w}^{+} = \mathrm{const};\\ \mu_{u,v,w}^{\Delta} = \mathrm{const}, \end{cases} \text{ or } \begin{cases} d_{u,v,w} = \mathrm{const};\\ x_{u,v,w} = \mu_{u,v,w}^{+};\\ \mu_{u,v,w}^{\Delta} = \mathrm{const}, \end{cases} \text{ or } \begin{cases} d_{u,v,w} = \mathrm{const};\\ \mu_{u,v,w}^{+} = \mathrm{const};\\ x_{u,v,w} = \mu_{u,v,w}^{\Delta}. \end{cases} \quad (4.4.14)$$

For the next proof, a generalization of Lemma 9.0.2 in [245] will be used. It defines sufficient conditions under which the function (4.4.13) preserves the pseudometric property of a space.
Lemma 4.4.1. Let $F(d_u, \mu_u^{+}, \mu_u^{\Delta})$ be a monotonic, nondecreasing, convex upward function defined by the relationship (4.4.8) such that $F(d_u = 0, \mu_u^{+}, \mu_u^{\Delta}) = 0$, $F(d_u, \mu_u^{+} = 0, \mu_u^{\Delta}) \ge 0$, $F(d_u, \mu_u^{+}, \mu_u^{\Delta} = 0) \ge 0$. Then if $(\Gamma, d_u)$ is a pseudometric space, then $(\Gamma, F(d_u, \mu_u^{+}, \mu_u^{\Delta}))$ is also a pseudometric space.

Proof of lemma. Fulfillment of the condition $F(d_u = 0, \mu_u^{+}, \mu_u^{\Delta}) = 0$ for the function (4.4.8) is obvious, inasmuch as the condition du = µ(at , bt ) = 0 implies the identity a ≡ b and thus Ia = Ib . Taking into account the designations introduced earlier, to prove the triangle inequality it is enough to show that if xu , xv , xw are nonnegative values such that xu ≤ xv + xw , then F (xu ) ≤ F (xv ) + F (xw ). It is obvious that for the nondecreasing function F (xu,v,w ), the inequality holds:

F (xu ) ≤ F (xv + xw ). (4.4.15)

For nondecreasing convex upward function F (xu,v,w ), under the condition that
F (0) ≥ 0 for ∀x: 0 ≤ x ≤ xv + xw , the relation holds:

F (x)/x ≥ F (xv + xw )/(xv + xw ), (4.4.16)

where x and xv,w are variables denoting one of the three variables of the function (4.4.13), according to the accepted designations (4.4.14).
The inequality (4.4.16) implies the inequalities:

F (xv ) ≥ xv F (xv + xw )/(xv + xw ); (4.4.17a)

F (xw ) ≥ xw F (xv + xw )/(xv + xw ). (4.4.17b)


Taking into account the inequality (4.4.15), summing the left and the right parts
of the inequalities (4.4.17a) and (4.4.17b), we obtain the triangle inequality being
proved:
F (xv ) + F (xw ) ≥ F (xv + xw ) ≥ F (xu ).
The symmetry property of the pseudometrics (4.4.5) and (4.4.7) is obvious and follows from the symmetry of the functions $\mu(a_t, b_t)$, $\mu_{ab}^{+} = I_a + I_b$, $\mu_{ab}^{\Delta} = |I_a - I_b|$. $\square$
Proof of theorem. According to Lemma 4.4.1, the function $d_{ab} = F(d_u, \mu_u^{+}, \mu_u^{\Delta})$ (4.4.8) (and also the function (4.4.7)) is a pseudometric. We prove the identity property of a metric. Equality dab = 0 implies the equalities µ(at , bt ) = 0 and Ia = Ib ,
and the equality µ(at , bt ) = 0 implies the identity a ≡ b. Conversely, the identity
a ≡ b implies the equalities µ(at , bt ) = 0 and Ia = Ib that implies the equality
dab = 0. $\square$
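The following Monte-Carlo sketch illustrates Theorem 4.4.1 under an assumed model: since the NMSI-based metric (3.2.1) is not constructed here, µ(at , bt ) is emulated by the bounded metric r/(1 + r) between random points, which supplies the metric axioms required by Lemma 4.4.1:

```python
import numpy as np

# Illustrative check of Theorem 4.4.1: d_ab = Ia + Ib - 2*nu*min[Ia, Ib]
# satisfies the triangle inequality whenever mu = 1 - nu is a normalized
# metric. Here mu is a stand-in: the bounded metric r/(1 + r) between
# random points, NOT the book's NMSI construction.
rng = np.random.default_rng(1)

def d_ab(Ia, Ib, mu):
    nu = 1.0 - mu
    return Ia + Ib - 2.0 * nu * min(Ia, Ib)    # Eq. (4.4.5)

def mu(x, y):
    r = np.linalg.norm(x - y)
    return r / (1.0 + r)                       # bounded metric in [0, 1)

for _ in range(10_000):
    xa, xb, xc = rng.normal(size=(3, 4))       # three "signals" as points
    Ia, Ib, Ic = rng.uniform(0.1, 5.0, size=3) # absolute information, abit
    dab = d_ab(Ia, Ib, mu(xa, xb))
    dbc = d_ab(Ib, Ic, mu(xb, xc))
    dca = d_ab(Ic, Ia, mu(xc, xa))
    assert dab >= 0.0                          # nonnegativity
    assert dab <= dbc + dca + 1e-12            # triangle inequality
print("triangle inequality held in all trials")
```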
After introducing the metric dab in physical signal space Γ, which is determined
by the relationship (4.4.5), one should define the main sorts of information quanti-
ties.
The notions of the quantity of absolute information in physical and informa-
tional signal spaces, as shown in the previous section, completely coincide, inasmuch
as physical interaction of the signals has no influence upon this sort of information
quantity carried by the signals. The notion of quantity of absolute information for
physical signal spaces with arbitrary properties remains without changes within the
initial variant contained in the Definition 4.1.4.
Unlike the quantity of absolute information, the other sorts of information quan-
tities introduced by Definitions 4.1.5, 4.1.6, and 4.1.8 for informational signal space
Ω need their content clarifying with respect to physical signal space Γ. Such notions
are introduced by Definitions 4.4.1 through 4.4.3.
Definition 4.4.1. Quantity of mutual information Iab contained in a pair of
stochastic signals a(t), b(t) ∈ Γ, so that each is characterized by quantity of ab-
solute information Ia and Ib , respectively, with NMSI ν (at , bt ) = νab between the
samples at and bt , is an information quantity equal to:
Iab = νab min[Ia , Ib ]. (4.4.18)
Definition 4.4.1 establishes a measure of information quantity contained simul-
taneously in two distinct signals a(t) 6= b(t) and also a measure of information
quantity for two identical signals a(t) ≡ b(t). This essential feature of the quantity
of mutual information is discussed below in Remark 4.4.1.
The list below shows (without a proof) the main properties of quantity of mutual
information Iab of stationary coupled stochastic signals a(t) and b(t) that interact
in physical signal space Γ.
1. Iab ≤ Ia , Iab ≤ Ib (boundedness)
2. Iab ≥ 0 (nonnegativity)
3. Iab = Iba (symmetry)
4. Iaa = Ia , Ibb = Ib (idempotency)
5. Iab = 0 if and only if the stochastic signals a(t) and b(t) are statistically
independent (ν (at , bt ) = 0)

Definition 4.4.2. Quantity of relative information $I_{ab}^{\Delta}$ contained in a pair of stochastic signals a(t), b(t) ∈ Γ, so that each is characterized by quantity of absolute information Ia and Ib , respectively, with NMSI ν (at , bt ) = νab between the samples at and bt , is an information quantity equal to dab (4.4.5):

$$I_{ab}^{\Delta} = d_{ab} = I_a + I_b - 2\nu_{ab}\min[I_a, I_b]. \quad (4.4.19)$$
Definition 4.4.3. Quantity of overall information $I_{ab}^{+}$ contained in a pair of stochastic signals a(t), b(t) ∈ Γ, so that each is characterized by quantity of absolute information Ia and Ib , respectively, with NMSI ν (at , bt ) = νab between the samples at and bt , is an information quantity equal to:

$$I_{ab}^{+} = I_a + I_b - \nu_{ab}\min[I_a, I_b]. \quad (4.4.20)$$

Remark 4.4.1. Quantity of absolute information Ia contained in the signal a(t) ∈ Γ of physical signal space Γ is identically equal to the quantity of mutual information Iaa and the quantity of overall information $I_{aa}^{+}$ contained in the signal a(t) interacting with itself in the signal space with corresponding algebraic properties:

$$I_a = I_{aa} = I_{aa}^{+}. \quad (4.4.21)$$

Proof of the identity (4.4.21) follows from the relations (4.4.18) and (4.4.20),
under the condition that νaa = 1.
Thus, quantity of mutual information Iab between the signals establishes a mea-
sure of information quantity that corresponds to a measure for informational signal
space introduced in Section 4.1.
Remark 4.4.2. Quantity of overall information $I_{ab}^{+}$ contained in a pair of stochastic signals a(t), b(t) ∈ Γ interacting in physical signal space Γ is equal to the sum of the quantity of mutual information Iab and the quantity of relative information $I_{ab}^{\Delta}$ and is bounded below by max[Ia , Ib ]:

$$I_{ab}^{+} = I_{ab} + I_{ab}^{\Delta} \ge \max[I_a, I_b]. \quad (4.4.22)$$

Proof of the inequality (4.4.22) follows directly from joint fulfillment of the triplet of relationships (4.4.18) through (4.4.20), and proof of the identity in (4.4.22) follows from the definition (4.4.20) by identical transformations:

$$I_{ab}^{+} = I_a + I_b - \nu_{ab}\min[I_a, I_b] = \max[I_a, I_b] + \min[I_a, I_b] - \nu_{ab}\min[I_a, I_b] = \max[I_a, I_b] + (1 - \nu_{ab})\min[I_a, I_b] \ge \max[I_a, I_b]. \;\square$$

Remark 4.4.3. Quantity of absolute information Ix (Ix̃ ) contained in the result


of interaction x(t) = a(t) ∨ b(t) (x̃(t) = a(t) ∧ b(t)) between the stochastic signals
a(t), b(t) ∈ Γ in physical signal space Γ(∨, ∧) with lattice properties is bounded
below by the value of linear combination between the quantities of mutual infor-
mation of a pair of signals a(t) and b(t) and the result of their interaction x(t), and
also by min[Ia , Ib ]:
Ix ≥ Iax + Ibx − Iab ≥ min[Ia , Ib ]; (4.4.23a)
Ix̃ ≥ Iax̃ + Ibx̃ − Iab ≥ min[Ia , Ib ]. (4.4.23b)

Proof of remark. Write the triangle inequality for dab : dab ≤ dax + dxb , substituting into both parts the relationship between the metric dab and the quantity of relative information (4.4.19), which implies the inequality:

$$I_x \ge I_{ax} + I_{bx} - I_{ab} \ge \nu_{ax} I_a + \nu_{bx} I_b - \nu_{ab}\min[I_a, I_b] \ge (\nu_{ax} + \nu_{bx} - \nu_{ab})\min[I_a, I_b]. \quad (4.4.24)$$

Substituting into the last inequality the value of linear combination νax + νbx −νab =
1 defined by the relationship (3.2.33a), we obtain the initial inequality (4.4.23a).
Proof of the inequality (4.4.23b) is similar. 

Remark 4.4.4. Quantity of absolute information Ix contained in the result of


interaction x(t) = a(t) + b(t) of a pair of stochastic signals a(t), b(t) ∈ Γ in physical
signal space Γ(+) with group properties, is bounded below by the value of a linear
combination between the quantities of mutual information of a pair of signals a(t)
and b(t) and the result of their interaction x(t), and also by min[Ia , Ib ]:

Ix ≥ Iax + Ibx − Iab ≥ min[Ia , Ib ]. (4.4.25)

The proof of remark is the same as the previous one, except that the value of linear
combination νax + νbx − νab = 1 is defined by the relationship (3.2.18).
As accepted for informational signal space Ω, similarly, for physical signal space
Γ as a unit of information quantity measurement, we take the quantity of absolute
information I [ξ (tα )] contained in a single element ξ (tα ) of stochastic signal ξ (t),
and according to the relationship (4.1.4), it is equal to:
$$I[\xi(t_\alpha)] = m(X_\alpha)\big|_{t_\alpha \in T_\xi} = \int_{T_\xi} i_\xi(t_\alpha; t)\,dt = 1\ \text{abit}, \quad (4.4.26)$$

where Tξ is a domain of definition of the signal ξ (t) (in the form of a discrete set
or continuum); iξ (tα ; t) is an information distribution density (IDD) of the signal
ξ (t).
In the physical signal space Γ, the unit of information quantity is introduced
by the definition that corresponds to Definition 4.1.9.

Definition 4.4.4. In physical signal space Γ, unit of information quantity is the


quantity of absolute information, which is contained in a single element (a sample)
ξ (tα ) of stochastic process (signal) ξ (t) considered a subalgebra B(X ) of generalized
Boolean algebra B(Ω) with a measure m, and is called absolute unit or abit.

Below we discuss informational relationships that characterize interactions of


the signals a(t) and b(t) in physical signal space Γ, a(t), b(t) ∈ Γ:

x(t) = a(t) ⊕ b(t),

where ⊕ is a binary operation of a group Γ(+) or a lattice Γ(∨, ∧).


Consider a group of identities that characterize informational relations for an
arbitrary pair of stochastic signals a(t), b(t) ∈ Γ interacting in physical signal space
Γ with properties of a group Γ(+) and a lattice Γ(∨, ∧):

$$\begin{aligned}
I_{ab} &= \nu_{ab}\min[I_a, I_b]; &(4.4.27a)\\
I_{ab}^{\Delta} &= I_a + I_b - 2I_{ab}; &(4.4.27b)\\
I_{ab}^{+} &= I_a + I_b - I_{ab}; &(4.4.27c)\\
I_{ab}^{+} &= I_{ab} + I_{ab}^{\Delta}; &(4.4.27d)\\
I_{ax} &= \nu_{ax}\min[I_a, I_x] = \nu_{ax} I_a \le I_a; &(4.4.27e)\\
I_{ax}^{\Delta} &= I_a + I_x - 2\nu_{ax}\min[I_a, I_x] = I_x + (\mu(a_t, x_t) - \nu_{ax})I_a; &(4.4.27f)\\
I_{ax}^{+} &= I_a + I_x - \nu_{ax}\min[I_a, I_x] = I_x + \mu(a_t, x_t)\,I_a. &(4.4.27g)
\end{aligned}$$
Consider also a group of inequalities that characterize informational relations for
an arbitrary pair of stochastic signals a(t), b(t) ∈ Γ interacting in physical signal
space Γ with properties of a group Γ(+) and a lattice Γ(∨, ∧):
$$\begin{aligned}
I_a + I_b &\ge I_{ab}^{+} \ge I_{ab}^{\Delta}; &(4.4.28a)\\
I_a + I_b &\ge I_{ab}^{+} \ge I_{ab}; &(4.4.28b)\\
I_a + I_b &> I_x \ge I_{ax} + I_{bx} - I_{ab} \ge \min[I_a, I_b]; &(4.4.28c)\\
I_a + I_b &> I_{ab}^{+} \ge \max[I_a, I_b]. &(4.4.28d)
\end{aligned}$$
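A quick numerical spot-check (illustrative, with random Ia , Ib and NMSI νab ) of the identities (4.4.27a) through (4.4.27d) and the inequalities (4.4.28a), (4.4.28b), (4.4.28d) may look as follows:

```python
import numpy as np

# Illustrative spot-check of (4.4.27a-d) and (4.4.28a,b,d) for random
# absolute-information values and a random NMSI nu in (0, 1].
rng = np.random.default_rng(2)
for _ in range(10_000):
    Ia, Ib = rng.uniform(0.1, 5.0, size=2)
    nu = rng.uniform(1e-6, 1.0)
    I_mut  = nu * min(Ia, Ib)                  # (4.4.27a)
    I_rel  = Ia + Ib - 2.0 * I_mut             # (4.4.27b)
    I_over = Ia + Ib - I_mut                   # (4.4.27c)
    assert np.isclose(I_over, I_mut + I_rel)   # (4.4.27d)
    assert Ia + Ib >= I_over >= I_rel          # (4.4.28a)
    assert Ia + Ib >= I_over >= I_mut          # (4.4.28b)
    assert Ia + Ib > I_over >= max(Ia, Ib) - 1e-12   # (4.4.28d)
print("all identities and inequalities verified")
```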
For an arbitrary pair of stochastic signals a(t), b(t) ∈ Γ interacting in physical
signal space Γ(∨, ∧) with lattice properties: x(t) = a(t) ∨ b(t), x̃(t) = a(t) ∧ b(t), as
follows from the relation (3.2.38b), the identities hold:

$$\begin{cases} I_{ax} = I_a/2;\\ I_{a\tilde{x}} = I_a/2;\\ I_{ax} + I_{a\tilde{x}} = I_a, \end{cases} \quad (4.4.29a)$$

$$\begin{cases} I_{bx} = I_b/2;\\ I_{b\tilde{x}} = I_b/2;\\ I_{bx} + I_{b\tilde{x}} = I_b, \end{cases} \quad (4.4.29b)$$

and if the signals a(t) and b(t) are statistically independent, then the identities that directly follow from the relation (3.2.39b) hold:

Iab = Ixx̃ = 0. (4.4.30)


The invariance property of the quantities $I_{ab}$, $I_{ab}^{\Delta}$, $I_{ab}^{+}$ introduced by Definitions 4.4.1 through 4.4.3 under bijective mappings of stochastic signals a(t) and b(t) is defined by the following theorem.
Theorem 4.4.2. For a pair of stationary stochastic signals a(t) and b(t) with even univariate PDFs in L-group Γ(+, ∨, ∧): a(t), b(t) ∈ Γ, t ∈ T , the quantities $I_{ab}$, $I_{ab}^{\Delta}$, $I_{ab}^{+}$ introduced by Definitions 4.4.1 through 4.4.3 are invariants of a group H of continuous mappings $\{h_{\alpha,\beta}\}$, $h_{\alpha,\beta} \in H$; $\alpha, \beta \in A$ of stochastic signals preserving the neutral element (zero) 0 of the group Γ(+) of L-group Γ(+, ∨, ∧): $h_{\alpha,\beta}(0) = 0$:

$$I_{ab} = I_{a'b'}, \quad I_{ab}^{\Delta} = I_{a'b'}^{\Delta}, \quad I_{ab}^{+} = I_{a'b'}^{+}; \quad (4.4.31)$$
$$h_\alpha: a(t) \to a'(t), \quad h_\beta: b(t) \to b'(t); \quad (4.4.31a)$$
$$h_\alpha^{-1}: a'(t) \to a(t), \quad h_\beta^{-1}: b'(t) \to b(t), \quad (4.4.31b)$$

where $I_{a'b'}$, $I_{a'b'}^{\Delta}$, $I_{a'b'}^{+}$ are the quantities of mutual, relative, and overall information between a pair of signals a′(t) and b′(t) that are the results of the mappings (4.4.31a) of the signals a(t) and b(t), respectively.
Proof. Corollary 3.5.3 of Theorem 3.1.1 (3.5.15) (and also Corollary 4.2.5 of Theorem 4.2.4) implies the invariance property of the quantities of absolute information Ia and Ib contained in the signals a(t) and b(t), respectively:

$$I_a = I_{a'}, \quad I_b = I_{b'}. \quad (4.4.32)$$

Joint fulfillment of the equalities (4.4.18), (4.4.32), and (3.2.11) implies the identity that determines invariance of the quantity of mutual information Iab :

$$I_{ab} = I_{a'b'}, \quad (4.4.33)$$

while joint fulfillment of the identities (4.4.32), (4.4.33), (4.4.27b), and (4.4.27c) implies the identity that determines invariance of the quantity of relative information $I_{ab}^{\Delta}$:

$$I_{ab}^{\Delta} = I_{a'b'}^{\Delta}, \quad (4.4.34)$$

and also the identity that determines invariance of the quantity of overall information $I_{ab}^{+}$:

$$I_{ab}^{+} = I_{a'b'}^{+}. \quad (4.4.35)$$

On the basis of Theorem 4.4.1, defining a metric dab (4.4.5) for stationary and stationary coupled stochastic signals a(t), b(t′) ∈ Γ in physical signal space Γ, one can also define a metric dab (t, t′) between a pair of the samples at = a(t), bt′ = b(t′) for nonstationary signals:

$$d_{ab}(t, t') = I_a + I_b - 2\nu(a_t, b_{t'})\min[I_a, I_b], \quad t, t' \in T, \quad (4.4.36)$$

where Ia and Ib are the quantities of absolute information contained in the signals a(t) and b(t′); ν(at , bt′ ) = 1 − µ(at , bt′ ) is the NMSI of a pair of the samples at = a(t), bt′ = b(t′) of the signals a(t), b(t′); µ(at , bt′ ) is a normalized metric between the samples at = a(t), bt′ = b(t′) of the signals a(t) and b(t′), which is defined by relationship (3.2.1).
Such a possibility is provided by the following theorem.
Theorem 4.4.3. For nonstationary stochastic signals a(t), b(t′) ∈ Γ interacting in physical signal space Γ in such a way that for an arbitrary pair of the samples at = a(t), bt′ = b(t′) the condition $\nu_{ab} = \sup_{t,t' \in T} \nu(a_t, b_{t'})$ holds, the function ρab determined by the relationship:

$$\rho_{ab} = \inf_{t,t' \in T} d_{ab}(t, t') = I_a + I_b - 2\nu_{ab}\min[I_a, I_b], \quad (4.4.37)$$

is a metric.

Proof. Two stochastic signals a(t), b(t′) ∈ Γ can be considered as the corresponding nonintersecting sets of the samples A = {at }, B = {bt′ } with the distance between the samples (4.4.36). Then the distance ρab between the sets A = {at }, B = {bt′ } of the samples (between the signals a(t), b(t′)) is determined by the identity [246, Section IV.1;(1)]:

$$\rho_{ab} = \inf_{t,t' \in T} d_{ab}(t, t') = I_a + I_b - 2\sup_{t,t' \in T}\nu(a_t, b_{t'})\,\min[I_a, I_b]. \quad (4.4.38)$$

Under the condition $\nu_{ab} = \sup_{t,t' \in T}\nu(a_t, b_{t'})$, from the equality (4.4.38) we obtain the initial relationship. $\square$

The condition $\nu_{ab} = \sup_{t,t' \in T}\nu(a_t, b_{t'})$ appearing in Theorem 4.4.3 reflects the case when the closest, i.e., the most statistically dependent, samples of two signals a(t) and b(t′) interact at the same time instant t. When the signals a(t) and b(t′) are statistically independent, this condition always holds.
The results reported in this section allow us to draw the following conclusions.

1. The metric (4.4.5) introduced in physical signal space provides adequacy of the subsequent analysis of informational relationships between the signals interacting in physical signal space with both group and lattice properties.
2. Quantity of mutual information introduced by Definition 4.4.1 establishes
a measure of information quantity, which completely corresponds to a
measure of information quantity for informational signal space introduced
earlier in Section 4.1.
3. The obtained informational relationships create the basis for the analysis
of quality indices (possibilities) of signal processing in signal spaces with
both group and lattice properties.
5
Communication Channel Capacity

The intermediate inferences on informational signal space shown in Chapter 4 allow


us to obtain the final results that are important for both signal processing theory
and information theory and, in particular, determine the regularities and capacities
of discrete and continuous noiseless communication channels.
In the most general case, the signal s(t) carrying information can be represented
in the following form [247], [248]:

s(t) = M [c(t), u(t)],

where M [∗, ∗] is a modulating function; c(t) is a carrier signal (or simply a carrier);
u(t) is a transmitted message.
Modulation is changing some parameter of a carrier signal according to a transmitted message. The parameter varied under modulation is called the informational parameter. Naturally, to extract the transmitted message u(t) at the receiving side,
the modulating function M [∗, ∗] has to be a one-to-one relation:

u(t) = M −1 [c(t), s(t)],

where M −1 [∗, ∗] is a function that is an inverse with respect to the initial M [∗, ∗].
To transmit information over some distance, the systems use the signals possess-
ing the ability to be propagated as electromagnetic, hydroacoustic, and other oscil-
lations within the proper physical medium separating a sender and an addressee.
The wide class of such signals is described by harmonic functions. The transmitted
information must be included in high-frequency oscillation cos ω0 t called a carrier:

s(t) = A(t) cos(ω0 t + ϕ(t)) = A(t) cos(Φ(t)),

where the amplitude A(t) and/or phase ϕ(t) are changed according to the trans-
mitted message u(t).
Depending on which parameter of a carrier signal is changed, we distinguish
amplitude, frequency, and phase modulations. Modulation of a carrier signal by a
discrete message u(t) = {uj (t)}, j = 1, 2, . . . , n; uj (t) ∈ {ui }; i = 1, 2, . . . , q; q = 2k ;
k ∈ N is called keying, and the signal s(t) is called a keyed signal. In the following
section, the informational characteristics of discrete (binary (q = 2) and m-ary
(q = 2k ; k > 1)) and continuous signals are considered.


5.1 Information Quantity Carried by Discrete and Continuous Signals
5.1.1 Information Quantity Carried by Binary Signals
A large class of binary signals can be represented in the following form [115]:
s(t) = u(t)c1 (t) + [1 − u(t)]c2 (t), (5.1.1)
where u(t) is a binary informational message taking two values only {u1 , u2 }, u1 ≠ u2 , u1,2 ∈ {0, 1}; c1 (t) and c2 (t) are some narrowband deterministic signals.
In particular, the signals with amplitude-shift, frequency-shift, and phase-shift
keying can be associated with this class.
Let u(t) be a discrete stochastic process that may accept one of two values u1
and u2 at arbitrary instant t with the same probabilities p1 = P{u(t) = u1 } = 0.5,
p2 = P{u(t) = u2 } = 0.5, so that the change of the states (the values) is possible
at the fixed instants tj :
tj = ∆ ± jτ0 ,
where τ0 is a duration of an elementary signal; j = 0, 1, 2, . . . is an integer nonnega-
tive number; ∆ is a random variable that does not depend on u(t) and is uniformly
distributed in the interval [0, τ0 ].
If the probabilities of state transition P{u1 → u2 }, P{u2 → u1 } and probabili-
ties of state preservation P{u1 → u1 }, P{u2 → u2 } are assumed to be equal:
P{u1 → u2 } = P{u2 → u1 } = pc ; P{u1 → u1 } = P{u2 → u2 } = ps = 1 − pc ,
then the transition matrix of one-step probabilities of a transition Π1 from one state to another is equal to [249]:

$$\Pi_1 = \begin{bmatrix} p_s & p_c\\ p_c & p_s \end{bmatrix},$$

and the matrix of probabilities of transition Πn from one state to another for n steps is equal to [115], [249]:

$$\Pi_n = \begin{bmatrix} \Pi_{11} & \Pi_{12}\\ \Pi_{21} & \Pi_{22} \end{bmatrix},$$

where $\Pi_{ij} = 0.5\,(1 + (-1)^{i+j}\,\Pi)$; $\Pi = (p_s - p_c)^{\left[\frac{\tau}{\tau_0}\right]} \left(1 - 2p_c\left(\frac{|\tau|}{\tau_0} - \left[\frac{\tau}{\tau_0}\right]\right)\right)$; $n = \left[\frac{\tau}{\tau_0}\right]$ is an integer part of the ratio $\frac{\tau}{\tau_0}$; τ = tj − tk is a time difference between two arbitrary samples u(tj ) and u(tk ) of discrete stochastic process (message) u(t).
The joint bivariate probability density function (PDF) p(x1 , x2 ; τ ) of two sam-
ples u(tj ) and u(tk ) of discrete stochastic process u(t) is determined by the expres-
sion:
p(x1 , x2 ; τ ) = 0.5Π11 δ (x1 − u1 )δ (x2 − u1 ) + 0.5Π12 δ (x1 − u1 )δ (x2 − u2 )+
+ 0.5Π21 δ (x1 − u2 )δ (x2 − u1 ) + 0.5Π22 δ (x1 − u2 )δ (x2 − u2 ), (5.1.2)

and univariate PDFs of the samples u(tj ) and u(tk ) of discrete stochastic process
u(t) are equal to:
p(x1 ) = 0.5δ (x1 − u1 ) + 0.5δ (x1 − u2 ); (5.1.3a)
p(x2 ) = 0.5δ (x2 − u1 ) + 0.5δ (x2 − u2 ), (5.1.3b)
where δ (x) is Dirac delta function.
Substituting the relationships (5.1.2), (5.1.3a), and (5.1.3b) into the formula
(3.1.2), we obtain the normalized function of statistical interrelationship (NFSI)
ψu (τ ) of stochastic process u(t) which for arbitrary values of state transition prob-
ability pc , is defined by the expression:
$$\psi_u(\tau) = |1 - 2p_c|^{\left[\frac{\tau}{\tau_0}\right]} \left(1 - (1 - |1 - 2p_c|)\cdot\left(\frac{|\tau|}{\tau_0} - \left[\frac{\tau}{\tau_0}\right]\right)\right). \quad (5.1.4)$$

From the relationships (5.1.2) and (5.1.3) one can also find the normalized autocor-
relation function (ACF) ru (τ ) of stochastic process u(t), which, on arbitrary values
of state transition probabilities pc , is determined by the relationship:
$$r_u(\tau) = (1 - 2p_c)^{\left[\frac{\tau}{\tau_0}\right]} \left(1 - 2p_c\left(\frac{|\tau|}{\tau_0} - \left[\frac{\tau}{\tau_0}\right]\right)\right). \quad (5.1.5)$$

For stochastic process u(t) with state transition probability pc taking the values in
the interval [0; 0.5], the normalized ACF ru (τ ) is a strictly positive function, and
NFSI ψu (τ ) is identically equal to it: ψu (τ ) = ru (τ ) ≥ 0 (see Fig. 5.1.1(a)).
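This agreement can be illustrated by simulation. At integer lags kτ0 the expression (5.1.5) reduces to ru (kτ0 ) = (1 − 2pc )^k; the sketch below (an assumed model with states {0, 1} and flip probability pc ) compares the empirical correlation of a simulated sequence with this value:

```python
import numpy as np

# Simulation sketch (assumed model: states {0, 1}, flip probability pc).
# At integer lags k*tau0, Eq. (5.1.5) gives r_u(k*tau0) = (1 - 2*pc)**k.
rng = np.random.default_rng(3)
pc, n = 0.3, 100_000

u = np.empty(n)
u[0] = rng.integers(0, 2)
flips = rng.random(n - 1) < pc                 # Markov state transitions
for j in range(1, n):
    u[j] = 1.0 - u[j - 1] if flips[j - 1] else u[j - 1]

u0 = u - u.mean()
for k in range(5):
    r_emp = np.dot(u0[: n - k], u0[k:]) / np.dot(u0, u0)
    print(f"lag {k}: empirical {r_emp:+.4f}  Eq. (5.1.5) {(1 - 2 * pc) ** k:+.4f}")
```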

FIGURE 5.1.1 NFSI ψu (τ ) and normalized ACF ru (τ ) of stochastic process u(t) with state transition probability pc taking values in intervals (a) [0; 0.5], (b) ]0.5; 1]

For stochastic processes u(t) with state transition probability pc taking the
values in the interval ]0.5; 1], normalized ACF ru (τ ) has an oscillated character
possessing both positive and negative values.
An example of a relation between normalized ACF and NFSI for this case is
shown in the Fig. 5.1.1(b).
In the case of statistical independence of the symbols {uj (t)} of a message u(t)
(pc = ps = 1/2), normalized ACF ru (τ ) and NFSI ψu (τ ) are determined by the
expression:

$$\psi_u(\tau) = r_u(\tau) = \begin{cases} 1 - \dfrac{|\tau|}{\tau_0}, & |\tau| \le \tau_0;\\[4pt] 0, & |\tau| > \tau_0. \end{cases}$$
It should be noted that for stochastic processes with state transition probability
pc = P taking the values in the interval [0; 0.5], NFSI ψu (τ ) |P is identically equal to
NFSI ψu (τ ) |1−P of stochastic processes with state transition probability pc = 1 −P
taking the values in the interval [0.5; 1]:

ψu (τ ) |P = ψu (τ ) |1−P = ψu (τ ). (5.1.6)

According to the coupling equation between information distribution density (IDD)


and NFSI (3.4.3), IDD iu (τ ) of stochastic process u(t) is determined by the expres-
sion (see Fig. 5.1.2):
h i

iu (τ ) = (1 − |1 − 2pc |) · |1 − 2pc | . (5.1.7)
τ
0

FIGURE 5.1.2 IDD iu (τ ) of stochastic process u(t) that is determined by Equation (5.1.7)

In the case of statistical independence of the symbols {uj (t)} of a message u(t)
(pc = ps = 1/2), IDD iu (τ ) is determined by the expression:

iu (τ ) = [1(τ + τ0 /2) − 1(τ − τ0 /2)]/τ0 .

Quantity of absolute information Iu (T ) carried by stochastic process u(t) in the


interval t ∈ [0; T ], according to (3.5.7), is equal to:

Iu (T ) = T iu (0), (5.1.8)

where iu (0) is the maximum value of IDD iu (τ ) of stochastic process u(t) at τ = 0.


There are the sequences with state transition probability pc = 1/2 among all
binary random sequences that carry the largest information quantity Iu (T ) during
the time interval T , inasmuch as in this case, there is no statistical interrelation
between the elements of the sequence:

$$\sup_{p_c} I_u(T) = T \cdot \sup_{p_c} i_u(0) = T/\tau_0 = n; \quad (5.1.9)$$

$$\arg\sup_{p_c} I_u(T) = 1/2,$$

where n is a number of binary symbols of the sequence u(t).


Conversely, some sequences among all binary random sequences carry the least
information quantity Iu (T ) if their state transition and state preservation proba-
bilities are equal to pc = 0, ps = 1 (pc = 1, ps = 0), respectively:

$$\inf_{p_c} I_u(T) = T \cdot \inf_{p_c} i_u(0) = 0. \quad (5.1.10)$$

According to the expression (5.1.10), binary sequences in the form of a constant


(pc = 0, u(t) = {. . . u1 , u1 , u1 , . . .} or {. . . u2 , u2 , u2 , . . .}) and also in the form of a
meander (pc = 1, u(t) = {. . . u1 , u2 , u1 , u2 , u1 , u2 , . . .}) do not carry any informa-
tion.
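A short illustrative tabulation makes this explicit: with iu (0) = (1 − |1 − 2pc |)/τ0 following from the IDD (5.1.7), the quantity Iu (T ) = T · iu (0) attains its supremum at pc = 1/2 and vanishes at pc ∈ {0, 1} (the values of τ0 and T below are assumed):

```python
# Illustrative tabulation of Iu(T) = T * i_u(0), with
# i_u(0) = (1 - |1 - 2*pc|)/tau0 following from the IDD (5.1.7).
tau0, T = 1.0, 100.0      # assumed symbol duration and observation time
for pc in (0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0):
    iu0 = (1.0 - abs(1.0 - 2.0 * pc)) / tau0
    print(f"pc = {pc:3.1f}:  Iu(T) = {T * iu0:6.1f} abit")
# pc = 0.5 attains sup Iu(T) = T/tau0 = n, Eq. (5.1.9);
# pc in {0, 1} give Iu(T) = 0, Eq. (5.1.10).
```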
The mathematical apparatus for the construction of the stated fundamentals of
signal processing theory and information theory is Boolean algebra with a measure.
As a unit of measurement of information quantity, we accept the quantity of abso-
lute information I [s(tα )] contained in a single element s(tα ) of stochastic process
s(t) with a domain of definition Ts (in the form of discrete set or continuum) and
IDD i(tα ; t) which, according to the relationship (3.5.3), is equal to:
$$I[s(t_\alpha)] = \int_{T_s} i(t_\alpha; t)\,dt = 1\ \text{abit}.$$

The unit of the quantity of absolute information contained in a single element of


stochastic process, according to Definition 3.5.6, is called absolute unit or abit. For
newly introduced unit of measurement of information quantity, it is necessary to
establish interrelation with known units of measurement of information quantity
that take their origins from the logarithmic measure of Hartley [50].
Let ξ (tj ) be a discrete random sequence with independent elements, which is
defined upon a finite set of instants Tξ = {tj }; j = 1, 2, . . . , n, and the random vari-
able ξj = ξ (tj ) equiprobably takes the values from the set Ξ = {xi }; i = 1, 2, . . . , q.
Then the information quantity Iξ [n, q ] contained in discrete random sequence ξ (tj )
is determined by the known relationship:

Iξ [n, q ] = n log q. (5.1.11)

For a logarithmic measure to base 2, information quantity Iξ [n, q ] is equal to n log2 q


binary units (bit).
According to the formula (5.1.9), information quantity Iu [n, q ], contained in a
binary sequence u(t) (q = 2) with equiprobable states {u1 , u2 } (P{u1 } = P{u2 } =
0.5) and state transition probability pc = 1/2, is equal to:

Iu [n, q ] = n log2 q = n, (bit) (5.1.12)

that corresponds to the result determined by formula (5.1.9).


Thus, for binary sequence u(t) containing n elements with two equiprobable
symbols (states {u1 , u2 }) (P{u1 } = P{u2 } = 0.5) and state transition probability
pc = 1/2, the quantity of absolute information Iu (T ) measured in abits is equal to
information quantity Iu [n, 2] measured in bits:

Iu (T ) (abit) = Iu [n, 2] = n. (bit) (5.1.13)

Despite the equality (5.1.13), a quantity of absolute information Iu (T ) measured in


abits is not equivalent to information quantity Iu [n, 2] measured in bits, inasmuch
as the first one is measured on the basis of IDD, and the second is measured on the basis
of entropy. The difference between these measures of information quantity will be
elucidated more thoroughly below. Now we consider informational characteristics
of binary signals s(t) formed on the base of binary messages u(t).
The signal representation (5.1.1) admits the possibility of unique extraction
(demodulation) of initial message u(t) from the signal s(t) using information con-
cerning the carrier signals c1,2 (t). Thus, for a message u(t), informational signal
s(t), and carrier signal c1,2 (t), there exists some one-to-one modulating function
M [∗, ∗] such that the following relations hold:

s(t) = M [u(t), c1,2 (t)]; u(t) = M −1 [s(t), c1,2 (t)], (5.1.14)

where M −1 [∗, ∗] is a function that is inverse with respect to initial one.


According to Theorem 3.1.1, under the isomorphic mapping of the processes
u(t) and s(t) into each other:
$$u(t) \underset{M^{-1}}{\overset{M}{\rightleftarrows}} s(t), \quad (5.1.15)$$

NFSI is preserved:
ψu (tj , tk ) = ψs (t′j , t′k ), (5.1.16)
and the overall quantity of information I [u(t)] (I [s(t)]) contained in stochastic pro-
cess u(t) (s(t)), is preserved too:

I [u(t)] = I [s(t)]. (5.1.17)

If the mapping (5.1.14) preserves domain Tu = [t0 ; t0 + T ] = Ts of stochastic


process u(t) (s(t)), then the identity holds:

ψu (tj , tk ) = ψs (tj , tk ). (5.1.18)

Let MAM , MP M , MF M be the functions describing the transformation (5.1.15) of a


binary message u(t) into a binary signal s(t) under amplitude-shift sAM (t), phase-
shift sP M (t), and frequency-shift sF M (t) keying of binary message u(t) = {uj (t)},
j = 1, 2, . . . , n; uj (t) ∈ {ui }; i = 1, 2, respectively:

MAM : u(t) → {sAM j (t)} = sAM (t);

MP M : u(t) → {sP M j (t)} = sP M (t);


MF M : u(t) → {sF M j (t)} = sF M (t).
5.1 Information Quantity Carried by Discrete and Continuous Signals 179

Assume the processes sAM (t), sP M (t), sF M (t) are stationary. Then, according to
Theorem 3.1.1, the relationships, which characterize the identities between single
characteristics of the signal s(t) and a message u(t), hold:
Identity between NFSIs:

ψsAM (τ ) = ψsP M (τ ) = ψsF M (τ ) = ψu (τ );


Identity between IDDs:

isAM (τ ) = isP M (τ ) = isF M (τ ) = iu (τ );

Identity between HSDs:

σsAM (ω ) = σsP M (ω ) = σsF M (ω ) = σu (ω );

Identity of overall quantity of information:

I [u(t)] = I [sAM (t)] = I [sP M (t)] = I [sF M (t)].

5.1.2 Information Quantity Carried by m-ary Signals


At arbitrary instant t, let each symbol uj (t) of discrete stochastic process u(t) =
{uj (t)}, j = 0, 1, 2, . . . takes one of q values from the set {ui }, i = 1, 2, . . . , q; q = 2k ;
k ∈ N with the same probabilities pi = P{u(t) = ui } = 1/q, so that state transition
is possible at the fixed instants tj only:

tj = ∆ ± jτ0 ,

where τ0 is duration of an elementary signal; j = 0, 1, 2, . . . is an integer nonnegative


number; ∆ is a random variable that does not depend on u(t) and is uniformly
distributed in the interval [0, τ0 ].
If state transition probabilities P{ul → um } (1 ≤ l ≤ q; 1 ≤ m ≤ q) and state
preservation probabilities P{ul → ul } are, respectively, equal to:

P{ul → um } = plm ,

then transition matrix of one-step probabilities of a transition Π1 from one state


to another is equal to:
 
$$\Pi_1 = \begin{bmatrix} p_{11} & p_{12} & \dots & p_{1q}\\ p_{21} & p_{22} & \dots & p_{2q}\\ \dots & \dots & \dots & \dots\\ p_{q1} & p_{q2} & \dots & p_{qq} \end{bmatrix}, \qquad \sum_{m=1}^{q} p_{lm} = 1.$$

In general, it is extremely difficult to obtain the expressions for normalized ACF


ru (τ ) and NFSI ψu (τ ) of a discrete random sequence u(t) with arbitrary state
transition probabilities plm , (l ≠ m) and cardinality Card{ui } = q of a set {ui }


of values. Nevertheless, if we consider the case when state transition probabilities
P{ul → um }, (l ≠ m) and state preservation probabilities P{ul → ul } are equal
to:
P{ul → um } = pc ; P{ul → ul } = ps ,
then the transition matrix of one-step probabilities of a transition Π1 from one
state to another is equal to:
 
$$\Pi_1 = \begin{bmatrix} p_s & p_c & \dots & p_c\\ p_c & p_s & \dots & p_c\\ \dots & \dots & \dots & \dots\\ p_c & p_c & \dots & p_s \end{bmatrix},$$

where pc ∈]0; 1/(q − 1)]; ps ∈ [0; (q − 1)pc ], and the transition matrix of probabilities
of transition Πn from one state to another for n steps is equal to [115], [249]:
 
$$\Pi_n = \begin{bmatrix} \Pi_{11} & \Pi_{12} & \dots & \Pi_{1q}\\ \Pi_{21} & \Pi_{22} & \dots & \Pi_{2q}\\ \dots & \dots & \dots & \dots\\ \Pi_{q1} & \Pi_{q2} & \dots & \Pi_{qq} \end{bmatrix},$$

where $\Pi_{ij} = (1 + (-1)^{i+j}\,\Pi)/q$; $\Pi = (1 - q\,p_c)^{\left[\frac{\tau}{\tau_0}\right]} \left(1 - q\,p_c\left(\frac{|\tau|}{\tau_0} - \left[\frac{\tau}{\tau_0}\right]\right)\right)$; $n = \left[\frac{\tau}{\tau_0}\right]$ is an integer part of the ratio $\frac{\tau}{\tau_0}$; $q = 2^k$; $k \in \mathbb{N}$; τ = tj − tk is a time difference between two arbitrary samples u(tj ), u(tk ) of discrete stochastic process u(t).
Then joint bivariate PDF p(x1 , x2 ; τ ) of two samples u(tj ), u(tk ) of discrete
stochastic process u(t) is determined by the expression:
$$p(x_1, x_2; \tau) = \frac{1}{q}\sum_{i,j=1}^{q} \Pi_{ij}\,\delta(x_1 - u_i)\,\delta(x_2 - u_j), \quad (5.1.19)$$

and univariate PDFs p(x1 ), p(x2 ) of the samples u(tj ) and u(tk ) of discrete stochas-
tic process u(t) are, respectively, equal to:
$$p(x_1) = \frac{1}{q}\sum_{i=1}^{q}\delta(x_1 - u_i); \qquad p(x_2) = \frac{1}{q}\sum_{i=1}^{q}\delta(x_2 - u_i). \quad (5.1.20)$$

Substituting the relationships (5.1.19) and (5.1.20) into the formula (3.1.2), we
obtain NFSI ψu (τ ) of stochastic process u(t), which, on arbitrary values of state
transition probabilities pc , is determined by the expression:
$$\psi_u(\tau) = |1 - q\,p_c|^{\left[\frac{\tau}{\tau_0}\right]} \left(1 - (1 - |1 - q\,p_c|)\cdot\left(\frac{|\tau|}{\tau_0} - \left[\frac{\tau}{\tau_0}\right]\right)\right). \quad (5.1.21)$$

From the relationships (5.1.19) and (5.1.20), we can also find the normalized ACF
ru (τ ) of stochastic process u(t), which, for arbitrary values of state transition probability pc , is determined by the expression:

$$r_u(\tau) = (1 - q\,p_c)^{\left[\frac{\tau}{\tau_0}\right]} \left(1 - q\,p_c\cdot\left(\frac{|\tau|}{\tau_0} - \left[\frac{\tau}{\tau_0}\right]\right)\right), \quad (5.1.22)$$

where $\left[\frac{\tau}{\tau_0}\right]$ is an integer part of the ratio $\frac{\tau}{\tau_0}$.
For discrete stochastic process u(t) with state transition probability pc taking
the values in the interval [0; 1/q ], normalized ACF ru (τ ) is a strictly positive func-
tion, and NFSI ψu (τ ) is identically equal to it: ψu (τ ) = ru (τ ) ≥ 0. Conversely, for
stochastic process u(t) with state transition probability pc taking the values in the
interval ]1/q ; 1/(q − 1)], normalized ACF ru (τ ) is an oscillating function, taking
both positive and negative values. The examples of NFSIs for multipositional se-
quence u(t) with state transition probability pc equal to pc = 1/(q − 1) on q = 8; 64
are represented in Fig. 5.1.3.

FIGURE 5.1.3 NFSI of multipositional sequence
FIGURE 5.1.4 IDD of multipositional sequence

In the case of statistical independence of the symbols {uj (t)} of a message u(t)
(plm = pc = ps = 1/q), normalized ACF ru (τ ) and NFSI ψu (τ ) are determined by
the relationship:

$$\psi_u(\tau) = r_u(\tau) = \begin{cases} 1 - \dfrac{|\tau|}{\tau_0}, & |\tau| \le \tau_0;\\[4pt] 0, & |\tau| > \tau_0. \end{cases}$$
According to the coupling equation (3.4.3) between IDD and NFSI (5.1.21), IDD
iu (τ ) of stochastic process u(t) is determined by the expression:
$$i_u(\tau) = \frac{1}{\tau_0}\,(1 - |1 - q\,p_c|)\cdot|1 - q\,p_c|^{\left[\frac{|\tau|}{\tau_0}\right]}. \quad (5.1.23)$$

The examples of IDDs for multipositional sequence u(t) with state transition proba-
bility pc equal to pc = 1/(q− 1) on q = 8; 64 are represented in Fig. 5.1.4. In the case
of statistical independence of the symbols {uj (t)} of a message u(t) (plm = 1/q),
IDD iu (τ ) is defined by the expression:

iu (τ ) = [1(τ + τ0 /2) − 1(τ − τ0 /2)]/τ0 ,



where 1(t) is Heaviside step function.


Quantity of absolute information Iu (T ) carried by stochastic process u(t) in the
interval t ∈ Tu = [t0 , t0 + T ], according to (5.1.8), is equal to:

Iu (T ) = T iu (0) = n (abit), (5.1.24)

where iu (0) is the maximum value of IDD iu (τ ) of stochastic process u(t) at τ = 0;


n is a number of symbols of a sequence.
Comparing the obtained result (5.1.24) with formula (5.1.9), one may conclude
that the quantity of absolute information Iu (T ), contained in a discrete message
u(t) being the sequence of statistically independent symbols {uj (t)}, whose every
symbol uj (t) takes one of q values from the set ui , is equal to the number n of
symbols of the sequence:
Iu (T ) = n (abit),
i.e., quantity of absolute information (in abits) carried by a discrete sequence does
not depend on cardinality Card{ui } = q of the set of values {ui } of sequence
symbols. This result can be elucidated by the fact that abit characterizes the infor-
mation quantity contained in a single element uj (t) of statistical collection and it is
defined by a measure m(Uj ) of ordinate set ϕ[i(tj , t)] of IDD i(tj , t) of this element:
$$m(U_j) = \int_{T_u} i(t_j, t)\,dt = 1,$$

where Uj = ϕ[i(tj , t)] = {(t, y ) : t ∈ Tu , 0 ≤ y ≤ i(tj , t)}.


Thus, independently of cardinality Card{ui } = q of a set of the values {ui } of the
sequence symbols {uj (t)}, information quantity (in abits) contained in a discrete
message in the case of statistical independence of the symbols is determined by
their number n. It is fair to ask: if an equality between the information quantity
Iu (T ), contained in discrete random sequence {uj (t)} and measured in abits, and
the information quantity Iu [n, q ], measured in bits, holds with respect to binary
sequences only (q = 2) (5.1.13):

Iu (T ) = Iu [n, q ] = n, (5.1.25)

is introducing a new measure and its application with respect to discrete random
sequences necessary and reasonable? In the case of arbitrary discrete sequence with
q > 2, the equality (5.1.25) does not hold:

Iu (T ) ≠ Iu [n, q ].

Moreover, discrete random sequences {uj (t)}, {wj (t)} with the same length n
(there is no statistical dependence between the symbols) and distinct cardinali-
ties Card{ui }, Card{wk } of the sets of the values {ui }, {wk } (i = 1, 2, . . . , qu ;
k = 1, 2, . . . , qw ; qu 6= qw ) contain the same quantity of absolute information Iu (T ),
Iw (T ) measured in abits:
Iu (T ) = Iw (T ),
5.1 Information Quantity Carried by Discrete and Continuous Signals 183

whereas information quantities Iu [n, qu ], Iw [n, qw ] measured in bits are not equal
to each other:
Iu [n, qu ] ≠ Iw [n, qw ], qu ≠ qw .
Thus, for a newly introduced unit of information quantity measurement, it is neces-
sary to establish the interrelation with known units based on logarithmic measure
of Hartley [50]. It may be realized by the definition of a new notion.

Definition 5.1.1. Discrete random sequence ξ (t) = {ξj (t)} defined on a finite set $T_\xi = \bigcup\limits_j T_j$ of time intervals Tj = ]tj−1 , tj ]:

$$t_j = \Delta + j\tau_0,$$

where τ0 is a duration of an elementary signal; j = 1, 2, . . . , n is an integer positive


number; ∆ is a random variable that does not depend on ξ (t) and is uniformly
distributed in the segment [0, τ0 ], is called generalized discrete random sequence, if
the element of the sequence ξj (t), t ∈ Tj equiprobably takes the values from the set
{zi }, i = 1, 2, . . . , q; q = 2k ; k ∈ N, so that, in general case, {zi } are statistically
independent random variables and each of them has symmetric (with respect to ci )
PDF pi (ci , z ):
pi (ci , z ) = pi (ci , 2ci − z ),
where ci is a center of symmetry of the function pi (ci , z ).

Thus, a generalized discrete random sequence ξ (t) consists of the elements


{ξj (t)}, so that each of them may be considered a stochastic process ξj (t) defined
in the interval Tj =]tj−1 , tj ], and an arbitrary pair of elements ξj (t), t ∈ Tj ; ξk (t),
t ∈ Tk is a pair of statistically dependent stochastic processes.
For PDF pi (ci , z ) of random variable zi with domain Dz , we denote its ordinate
set Zi by ϕ[pi (ci , z )]:

Zi = ϕ[pi (ci , z )] = {(z, p) : z ∈ Dz , 0 ≤ p ≤ pi (ci , z )}. (5.1.26)

It is not difficult to notice that for an arbitrary pair of random variables zi and
zl from the set of values {zi } of the element ξj (t) of generalized discrete random
sequence ξ (t) with PDFs pi (ci , z ) and pl (cl , z ), respectively, the metric identity
holds:

$$d_{il} = \frac{1}{2}\int_{D_z} |p_i(c_i, z) - p_l(c_l, z)|\,dz = \frac{1}{2}\,m[Z_i \Delta Z_l], \quad (5.1.27)$$

where dil is a metric between PDF pi (ci , z ) of random variable zi and PDF pl (cl , z )
of random variable zl ; m[Zi ∆Zl ] is a measure of symmetric difference of the sets Zi
and Zl ; Zi = ϕ[pi (ci , z )]; Zl = ϕ[pl (cl , z )].
The mapping (5.1.26) transforms PDFs pi (ci , z ), pl (cl , z ) into corresponding
equivalent sets Zi , Zl and defines isometric mapping of the function space of PDFs
{pi (ci , z )} — (P, d) with metric dil (5.1.27) into the set space {Zi } — (Z, d∗ ) with
metric $d^{*}_{il} = \frac{1}{2}\,m[Z_i \Delta Z_l]$.
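As a worked instance of the metric (5.1.27), the sketch below (illustrative; b = 1 is an assumed scale parameter) numerically integrates |pi − pl | for two triangular PDFs of the kind used in Example 5.1.2 below, whose centers are one scale parameter apart; the analytical value of dil for this geometry is 3/4:

```python
import numpy as np

# Illustrative evaluation of the metric (5.1.27) for two triangular PDFs
# with centers b apart (b = 1 is an assumed scale parameter).
b = 1.0
z = np.linspace(-2.0, 4.0, 600_001)
dz = z[1] - z[0]

def tri_pdf(c, z):
    # symmetric triangular PDF centered at c with support [c - b, c + b]
    return np.maximum(0.0, 1.0 - np.abs(z - c) / b) / b

p_i, p_l = tri_pdf(0.0, z), tri_pdf(b, z)
d_il = 0.5 * np.sum(np.abs(p_i - p_l)) * dz    # Riemann sum of the integral
print(f"d_il = {d_il:.4f} (analytically 3/4 for this geometry)")
```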

The normalization property of PDF provides normalization of a measure m(Zi )


of random variable zi from a set of the values {ui }:
Z
m(Zi ) = pi (ci , z )dz = 1.
Dz

The mapping (5.1.26) permits considering a set of values {zi } of generalized discrete
random sequence ξ (t) as a collection of sets {Zi }:
$$Z = \bigcup_{i=1}^{q} Z_i,$$

so that a measure of a single element Zi is normalized: m(Zi ) = 1.


By a measure m(Z ) we mean a quantity of absolute information I [{zi }], which
is contained in a set of values {zi } of a symbol ξj (t) of generalized discrete random
sequence ξ (t), as in a collection of the elements {Zi }:
$$I[\{z_i\}] = m\Big[\bigcup_{i=1}^{q} Z_i\Big].$$

The information quantity Iξ (n, q ), contained in generalized discrete random se-


quence ξ (t) and measured in bits, can be evaluated on the basis of the logarithmic mea-
sure of Hartley [50] and two measures of quantity of absolute information I [ξ (t)],
I [{zi }] contained in this sequence and measured in abits:

Iξ (n, q ) = I [ξ (t)] · log2 I [{zi }] (bit), (5.1.28)


where $I[\xi(t)] = m\big(\bigcup_{j=1}^{n} X_j\big) = T \cdot i_u(0)$ (abit) is a quantity of absolute information contained in generalized discrete random sequence ξ (t) as in a collection of the elements Xj ; Xj = ϕ[i(tj , t)] = {(t, y ) : t ∈ T, 0 ≤ y ≤ i(tj , t)} is an ordinate set of IDD i(tj , t) of a symbol ξj (t) of a sequence ξ (t); $I[\{z_i\}] = m\big[\bigcup_{i=1}^{q} Z_i\big]$ (abit) is a quantity of absolute information contained in a set of values {zi } of a symbol ξj (t) of a sequence ξ (t); Zi = ϕ[pi (ci , z )] = {(z, p) : z ∈ Dz , 0 ≤ p ≤ pi (ci , z )} is an ordinate set of PDF pi (ci , z ) of random values {zi } forming the set of values of a symbol ξj (t) of a sequence ξ (t).
Example 5.1.1. Let ξ (t) be generalized discrete random sequence with statistically
independent elements {ξj (t)}, so that the element of a sequence ξj = ξj (t), t ∈
Tj equiprobably takes the values from a set {zi }, i = 1, 2, . . . , q; and {zi } are
statistically independent random variables with PDF pi (ci , z ) = δ (z − ci ), ci ≠ cl , i ≠ l.
The information quantity Iξ (n, q ) contained in generalized discrete random se-
quence ξ (t) and measured in bits, according to (5.1.28), is equal to:

Iξ (n, q ) = n log2 q (bit), (5.1.29)


inasmuch as I [ξ (t)] = n (abit) (see (5.1.24)), and $I[\{z_i\}] = m\big[\bigcup_{i=1}^{q} Z_i\big] = q$ (abit).

A random sequence in the output of a message source is an example of practical


application of a generalized discrete random sequence of this type in the absence
of various destabilizing and interfering factors influencing symbol generation.

Example 5.1.2. Let ξ (t) be a generalized discrete random sequence with statisti-
cally independent elements {ξj (t)}, so that the element of a sequence ξj (t), t ∈ Tj
equiprobably takes the values from a set {zi }, i = 1, 2, . . . , q; and {zi } are statisti-
cally independent random variables with PDF pi (ci , z ):

$$p_i(c_i, z) = \begin{cases} \dfrac{1}{b}\left(1 - \dfrac{|z - c_i|}{b}\right), & |z - c_i| \le b;\\[4pt] 0, & |z - c_i| > b, \end{cases}$$

where ci = b · i is a center of symmetry of PDF pi (ci , z ); b is a scale parameter of


PDF pi (ci , z ).
The information quantity Iξ (n, q ) contained in generalized discrete random se-
quence ξ (t) and measured in bits, according to (5.1.28), is equal to:

Iξ (n, q ) = n log2 (3q/4) < n log2 q (bit),


inasmuch as I [ξ (t)] = n (abit), and $I[\{z_i\}] = m\big[\bigcup_{i=1}^{q} Z_i\big] = 3q/4$ (abit).

A random sequence in the outputs of transmitting and receiving sets, under


influence of various destabilizing and/or interfering factors during generation and
receiving message symbols, respectively, is an example of practical application of a
generalized discrete random sequence of this type (pi (ci , z ) ≠ δ (z − ci )).
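The measure $I[\{z_i\}] = m[\bigcup_i Z_i]$ entering (5.1.28) can be evaluated numerically as the integral of the upper envelope $\max_i p_i(c_i, z)$ of the symbol PDFs. The sketch below (illustrative; the values of b, q, and n are assumed) does this for the overlapping triangular PDFs of Example 5.1.2 and reproduces a value close to 3q/4:

```python
import numpy as np

# Illustrative: I[{z_i}] = m[union Z_i] as the integral of the upper
# envelope of the symbol PDFs (triangular case of Example 5.1.2).
# The values of b, q, n are assumptions for this sketch.
b, q, n = 1.0, 8, 100
z = np.linspace(0.0, b * (q + 1), 400_001)
dz = z[1] - z[0]

def tri_pdf(c, z):
    return np.maximum(0.0, 1.0 - np.abs(z - c) / b) / b

envelope = np.zeros_like(z)
for i in range(1, q + 1):                 # centers c_i = b*i
    envelope = np.maximum(envelope, tri_pdf(b * i, z))

I_z = np.sum(envelope) * dz               # m[union Z_i], in abits
print(f"I[{{z_i}}] = {I_z:.3f} abit, close to 3q/4 = {3 * q / 4:.2f}")
print(f"I_xi(n, q) = n*log2(I[{{z_i}}]) = {n * np.log2(I_z):.1f} bit "
      f"(ideal n*log2(q) = {n * np.log2(q):.1f} bit)")
```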

Example 5.1.3. Let ξ(t) be a generalized discrete random sequence with statistically independent elements {ξj(t)}, so that the element of the sequence ξj = ξj(t), t ∈ Tj equiprobably takes the values from a set {zi}, i = 1, 2, . . . , q; and {zi} are statistically independent random variables with PDF pi(ci, z) = δ(z − ci), ci ≠ cl, i ≠ l, and the IDD iξ(τ) of the sequence ξ(t) is determined by the expression (5.1.23):

iξ(τ) = (1/τ0) · [(1 − |1 − q · pc|) · |1 − q · pc|^{|τ|/τ0}],

where pc = 0.5/q is the state transition probability P{ul → um} (l ≠ m).


The information quantity Iξ(n, q) contained in the generalized discrete random sequence ξ(t) and measured in bits, according to (5.1.28), is equal to:

Iξ(n, q) = T iξ(0) log2 q = 0.5n log2 q (bit),

inasmuch as I[ξ(t)] = T iξ(0) = 0.5n (abit), and I[{zi}] = m[⋃_{i=1}^{q} Zi] = q (abit). 5
The result (5.1.29), first obtained by Hartley [50], corresponds to the informational characteristic of an ideal message source. From a formal standpoint, for a finite time interval Tξ = [t0, t0 + T], an ideal message source is able to produce an information quantity Iξ(n, q) equal to:

Iξ(n, q) = n log2 q (bit), (5.1.30)

where n = T/τ0 is the number of sequence symbols in the interval Tξ; τ0 is the duration of a sequence symbol.
The information quantity Iξ(1, q) corresponding to one symbol ξj(t) of a multiposition discrete random sequence {ξj(t)} is evaluated by the formula:

Iξ(1, q) = log2 q (bit). (5.1.31)

According to (5.1.30), a multiposition discrete random sequence with an arbitrarily large code base (alphabet size) q permits generating and transmitting an arbitrarily large information quantity Iξ(n, q) within a finite time interval:

lim_{q→∞} Iξ(n, q) = lim_{q→∞} (n log2 q) = ∞. (5.1.32)

Then the information quantity Iξ(1, q) transmitted by a single symbol of a multiposition discrete random sequence with an arbitrarily large code base q is an infinitely large quantity:

lim_{q→∞} Iξ(1, q) = ∞. (5.1.33)

This feature of multiposition discrete random sequences was reported by Wiener in [193]: "One single voltage measured to an accuracy of one part in ten trillion could convey all the information in the Encyclopedia Britannica, if the circuit noise did not hold us to an accuracy of measurement of perhaps one part in ten thousand". In Wiener's view, the only restriction on the information transmission process using multiposition discrete random sequences is the interfering action of noise. However, beyond noise and interference, there exist fundamental properties of the signals themselves that severely confine the information quantity transmitted through communication channels. We shall show that, even setting noise and interference aside, another confining condition does not permit transmitting an infinitely large information quantity Iξ(n, q) → ∞ within a finite time interval Tξ = [t0, t0 + T].
Let u(t) = {uj(t)} be a discrete random sequence with statistically independent symbols, where each symbol uj(t) at an arbitrary instant t may take one of q values from a set {ui}, i = 1, 2, . . . , q with the same probabilities pi = P{uj(t) = ui} = 1/q, so that a state (value) transition is possible at the fixed instants tj:

tj = ∆ ± jτ0,

where τ0 is the duration of an elementary signal; j = 1, 2, . . . , n is a positive integer; ∆ is some deterministic quantity.
Then the following theorem holds.
Theorem 5.1.1. The information quantity Iu[n, q] carried by a discrete sequence u(t) with n symbols and code base q is defined by Hartley's measure but does not exceed n log2 n bit:

Iu[n, q] = { n · log2 q, q < n;
             n · log2 n, q ≥ n. (5.1.34)

Proof. The part of identity (5.1.34) corresponding to the case of strict inequality q < n obviously requires no proof, so we consider the proof of the identity when q ≥ n. Let f be a function that realizes a bijective mapping of a discrete message u(t), in the form of a discrete random sequence, into a discrete sequence w(t):

u(t) ⇄ w(t) (direct mapping f, inverse mapping f⁻¹), (5.1.35)

and the mapping f is such that each symbol uj(t) of the initial message u(t) = {uj(t)}, j = 1, 2, . . . , n, taking one of q values of a set {ui}, i = 1, 2, . . . , q (Card{ui} = q), is mapped into a corresponding symbol wl(t) of a multiposition discrete sequence w(t) = {wl(t)}, l = 1, 2, . . . , n, which can take one of n values of a set {wl} (Card{wl} = n):

f : uj(t) → wl(t). (5.1.36)
The possibility of such a mapping f exists because the maximum number of distinct symbols of the initial sequence u(t) = {uj(t)}, j = 1, 2, . . . , n and the maximum number of distinct symbols of the sequence w(t) = {wl(t)}, l = 1, 2, . . . , n are both equal to the number of sequence symbols n. Under the bijective mapping f (5.1.35), the initial discrete random sequence u(t) with n symbols and code base q transforms into a discrete random sequence w(t) with n symbols and code base n. Then, according to the main axiom of signal processing theory 2.3.1, the information quantity Iw[n, n] contained in the discrete random sequence w(t) is equal to the information quantity Iu[n, q] contained in the initial discrete random sequence u(t):

Iu[n, q] = Iw[n, n] = n log2[min(q, n)] (bit). (5.1.37)

Equality (5.1.37) can be interpreted in the following way: a discrete random sequence u(t) = {uj(t)}, j = 1, 2, . . . , n, whose every symbol uj(t) at an arbitrary instant equiprobably takes the values from a set {ui}, i = 1, 2, . . . , q, contains the information quantity I[u(t)] defined by Hartley's logarithmic measure (5.1.34), but not greater than n log2 n bit. Consider another proof of equality (5.1.34).

Proof. Consider the mapping h of a set of symbols Ux = {uj (t)} of discrete random
sequence u(t) onto a set of values Uz = {ui }:

h : Ux = {uj (t)} → Uz = {ui }. (5.1.38)

The mapping h has the property that for each element ui from the set of values Uz = {ui} there exists at least one element uj(t) from the set of symbols Ux, uj(t) ∈ Ux = {uj(t)}, i.e., h is a surjective mapping of the set Ux = {uj(t)} onto the set Uz = {ui}. This implies that, for an arbitrary sequence u(t) = {uj(t)}, j = 1, 2, . . . , n, the cardinality CardUz of the set Uz = {ui} does not exceed the cardinality CardUx of the set Ux = {uj(t)}:

CardUz ≤ CardUx , (5.1.39)

where CardUx = n.
The cardinality CardU of the set U of all possible mappings of the elements {uj(t)} of the set Ux onto the set Uz = {ui} is equal to:

CardU = (CardUz)^CardUx. (5.1.40)

Taking the logarithm of both sides of (5.1.40), we obtain Hartley's logarithmic measure defining the information quantity Iu[n, q] carried by the discrete random sequence u(t):

Iu[n, q] = log2(CardU) = CardUx log2(CardUz). (5.1.41)

Applying the inequality (5.1.39) to the relationship (5.1.41) and replacing the cardinality CardUx of the set Ux with n, we obtain the upper bound of the information quantity Iu[n, q] contained in a discrete random sequence:

Iu[n, q] = CardUx log2(CardUz) ≤ CardUx log2(CardUx);

Iu[n, q] ≤ n log2 n. (5.1.42)
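The content of Theorem 5.1.1 condenses into a one-line computation; a minimal Python sketch (function name hypothetical):

```python
import math

def hartley_info(n: int, q: int) -> float:
    """Information quantity Iu[n, q] of a discrete random sequence with
    n equiprobable, independent symbols and code base q, per (5.1.34):
    Hartley's measure n*log2(q), capped at n*log2(n)."""
    return n * math.log2(min(q, n))

# A code base larger than the sequence length adds nothing:
print(hartley_info(8, 4))       # 16.0 bit = n*log2(q)
print(hartley_info(8, 1024))    # 24.0 bit, the cap n*log2(n)
print(hartley_info(8, 10**9))   # still 24.0 bit
```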

Theorem 5.1.1, taken with the relationship (5.1.34) and inequality (5.1.42), per-
mits us to draw some conclusions.

1. The upper bound of the information quantity Iu[n, q] that can be transmitted by a discrete random sequence u(t) with a length of n symbols and code base q is determined only by the number of symbols n of the sequence and does not depend on the code base (alphabet size) q.
2. If a discrete random sequence u(t) is defined on a finite time interval Tu = [t0, t0 + T], then the information quantity Iu[n, q] contained in this sequence is always a finite quantity:

   Iu[n, q] ≤ (T/τ0) · log2(T/τ0) (bit), (5.1.43)

   where τ0 is the duration of a sequence symbol.
3. There exists an exclusively theoretical possibility of transmitting an infinitely large information quantity Ij by a single symbol uj(t) of a discrete message u(t), which can be realized only on the basis of a sequence of infinitely large length n → ∞:

   lim_{n→∞} Ij = lim_{n→∞} log2 n = ∞ (bit).
4. Generally, it is impossible to transmit an arbitrary information quantity by a single elementary signal uj(t) considered outside the statistical collection {uj(t)}. The reason cannot be attributed to the absence of necessary technical means or to the ill effect of interference (noise). The right answer is stipulated by a fundamental property of information, i.e., its ability to exist exclusively within a statistical collection of structural elements of its carrier (the signal).
5. If ξ(t) = {ξj(t)} is a generalized discrete random sequence defined on a finite set Tξ = ⋃_j Tj of time intervals Tj = ]tj−1, tj]:

   tj = ∆ + jτ0,

   where τ0 is the duration of an elementary signal; j = 1, 2, . . . , n is a positive integer; ∆ is a random variable that does not depend on ξ(t) and is uniformly distributed in the segment [0, τ0]; and the element of the sequence ξj(t), t ∈ Tj equiprobably takes the values from a set {zi}, i = 1, 2, . . . , q; q = 2^k, k ∈ N, so that, in the general case, {zi} are statistically dependent random variables, each of them with a PDF pi(ci, z) that is symmetric with respect to ci:

   pi(ci, z) = pi(ci, 2ci − z),

   where ci is the center of symmetry of the PDF pi(ci, z), then the following statements hold.
(a) The information quantity Iξ(n, q) carried by the generalized discrete random sequence ξ(t) is determined by the relationships (5.1.28) and (5.1.34):

    Iξ(n, q) = { I[ξ(t)] · log2 I[{zi}], q < n;
                 I[ξ(t)] · log2 I[ξ(t)], q ≥ n (bit), (5.1.44)

    where I[ξ(t)] = m(⋃_{j=1}^{n} Xj) = T · iu(0) (abit) is the quantity of absolute information contained in the generalized discrete random sequence ξ(t) considered as a collection of the elements Xj; Xj = ϕ[i(tj, t)] = {(t, y) : t ∈ T, 0 ≤ y ≤ i(tj, t)} is the ordinate set of the IDD i(tj, t) of the symbol ξj(t) of the sequence ξ(t); I[{zi}] = m[⋃_{i=1}^{q} Zi] (abit) is the quantity of absolute information contained in the set of values {zi} of the symbol ξj(t) of the sequence ξ(t); Zi = ϕ[pi(ci, z)] = {(z, p) : z ∈ Dz, 0 ≤ p ≤ pi(ci, z)} is the ordinate set of the PDF pi(ci, z) of the random variables {zi} forming the set of values of the symbol ξj(t) of the sequence ξ(t).
    The relationship (5.1.44) means that if the cardinality Card{zi} of the set of values {zi}, i = 1, 2, . . . , q of the generalized discrete random sequence ξ(t) = {ξj(t)}, j = 1, 2, . . . , n is less than the cardinality of the set of symbols Card{ξj(t)}:

    Card{zi} = q < Card{ξj(t)} = n,

    the information quantity Iξ(n, q) carried by this sequence is equal to:

    Iξ(n, q) = I[ξ(t)] · log2 I[{zi}] (bit).

    If the cardinality Card{zi} of the set of values {zi}, i = 1, 2, . . . , q of the generalized discrete random sequence ξ(t) = {ξj(t)}, j = 1, 2, . . . , n is greater than or equal to the cardinality of the set of symbols Card{ξj(t)}:

    Card{zi} = q ≥ Card{ξj(t)} = n,

    then the information quantity Iξ(n, q) carried by this sequence is equal to:

    Iξ(n, q) = I[ξ(t)] · log2 I[ξ(t)] (bit).
(b) The information quantity Iξ(n, q) carried by a generalized discrete random sequence ξ(t) cannot exceed some upper bound:

    Iξ(n, q) ≤ I[ξ(t)] · log2 I[ξ(t)] (bit), (5.1.45)

    where I[ξ(t)] = m(⋃_{j=1}^{n} Xj) is the quantity of absolute information contained in the generalized discrete random sequence ξ(t) considered as a collection of the elements {Xj}. Thus, for generalized discrete random sequences ξ(t) with a relatively large cardinality of the set of values Card{zi} = q > n, the information quantity Iξ(n, q) carried by the sequence and measured in bits (5.1.45) is entirely determined by the quantity of absolute information I[ξ(t)] measured in abits.
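Statement 5(a) likewise admits a compact sketch, assuming the two measures I[ξ(t)] and I[{zi}] (in abits) have already been obtained, e.g., by the grid method sketched after Example 5.1.2 (function name hypothetical):

```python
import math

def generalized_info(I_seq: float, I_val: float, n: int, q: int) -> float:
    """Information quantity per (5.1.44): I[xi(t)]*log2 I[{zi}] for q < n,
    and I[xi(t)]*log2 I[xi(t)] for q >= n (the upper bound (5.1.45))."""
    return I_seq * math.log2(I_val if q < n else I_seq)

# Example 5.1.1 setting with q < n: I[xi(t)] = n abit, I[{zi}] = q abit
print(generalized_info(8.0, 4.0, n=8, q=4))   # 16.0 bit = n*log2(q)
```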

5.1.3 Information Quantity Carried by Continuous Signals


As shown in the previous subsection, the information quantity Iξ(n, q) contained in a generalized discrete random sequence ξ(t) and measured in bits can be evaluated on the basis of Hartley's logarithmic measure and the two measures of quantity of absolute information I[ξ(t)] and I[{zi}] contained in this sequence and measured in abits (see Formula (5.1.28)):

Iξ(n, q) = I[ξ(t)] · log2 I[{zi}] (bit),

where I[ξ(t)] = m(⋃_{j=1}^{n} Xj) (abit) is the quantity of absolute information contained in the generalized discrete random sequence ξ(t) considered as a collection of the elements {Xj}; I[{zi}] = m[⋃_{i=1}^{q} Zi] (abit) is the quantity of absolute information contained in the set of values {zi} of the symbols ξj(t) of the sequence ξ(t).
The result (5.1.28) may be generalized by introducing, as the variables, the abstract measures MT, MX of the set of symbols {ξj(t)}, j = 1, 2, . . . , n and the set of values {zi}, i = 1, 2, . . . , q, respectively:

Iξ = MT log2 MX (bit). (5.1.46)

For the information quantity Iξ carried by a usual discrete random sequence, the measures MT and MX are determined by the cardinalities Card{ξj(t)} = n, Card{zi} = q of the sets {ξj(t)}, {zi}, respectively:

MT = Card{ξj(t)} = n;  MX = Card{zi} = q.

For the information quantity Iξ carried by a generalized discrete random sequence, the measures MT and MX are determined by the quantities of absolute information I[ξ(t)] and I[{zi}] of the sets {ξj(t)} and {zi}, respectively:

MT = I[ξ(t)] = m(⋃_{j=1}^{n} Xj);  MX = I[{zi}] = m[⋃_{i=1}^{q} Zi].

The result of applying Hartley's generalized logarithmic measure (5.1.46) to continuous stochastic processes (stochastic functions or signals), in order to evaluate the information quantity they contain measured in bits, is contained in the following theorem.

Theorem 5.1.2. The overall quantity of information Iξ contained in a continuous stochastic signal ξ(t) and evaluated by the logarithmic measure (5.1.46) is determined by the expression:

Iξ = I[ξ(t)] · log2 I[ξ(t)] (bit).

Proof. A continuous stochastic function ξ(t), defined in the interval Tξ = [t0, t0 + T], can be considered a mapping of the set of values of the argument Tξ = {tα} (the domain of definition of the function) into the set of values of the function Ξ = {xα} (the codomain of the function): xα = ξ(tα), where xα is a random variable. For any tα, tβ ∈ T, the equality ξ(tα) = ξ(tβ) implies the equality tα = tβ, i.e., the stochastic function ξ(t) is an injective mapping [233]. For continuous random variables xα = ξ(tα), xβ = ξ(tβ), the probability P[xβ = x/xα = x] that the random variable xβ takes the same value x taken by the random variable xα is equal to zero, P[xβ = x/xα = x] = 0, if tα ≠ tβ. On the other hand, for any xα ∈ Ξ, there exists at least one element tα ∈ Tξ such that ξ(tα) = xα, i.e., the stochastic function is a surjective mapping [233]. Being simultaneously an injective and a surjective mapping, the stochastic function ξ(t) is also a bijective mapping.
Each element tα from the set Tξ (every sample ξ(tα) of the stochastic process ξ(t)) is associated with a function called the IDD i(tα; t), which is associated with its ordinate set Xα with normalized measure m(Xα) = 1 by the mapping (3.5.1). Hence, the set of values of the argument Tξ = {tα} (the set of samples {ξ(tα)}) is associated with a set ⋃_α Xα with a measure m(⋃_α Xα) called the overall quantity of information I[ξ(t)] contained in the stochastic function ξ(t) (see Section 4.1). The stochastic function ξ(t) is a one-to-one correspondence between the sets Tξ = {tα} and Ξ = {xα}.
This means that both the set of values of the argument Tξ = {tα} and the set of values of the function Ξ = {xα} can be associated with a set ⋃_α Xα with a measure m(⋃_α Xα). This implies that the measure MT of the set of values of the argument of the function Tξ is equal to the measure MX of the set of values of the function Ξ, and is equal to the measure m(⋃_α Xα) called the overall quantity of information I[ξ(t)]: MT = MX = I[ξ(t)] = m(⋃_α Xα) (see Definition 3.5.4), which is equal to the quantity of absolute information (see Definition 4.1.4).
Thus, for a continuous stochastic signal, a measure MT of a domain of definition
Tξ = {tα } and a measure MX of a codomain Ξ = {xα } are identical and equal to
the overall quantity of information I [ξ (t)] contained in a stochastic signal in the
interval of its existence:
MT = MX = I [ξ (t)].
Thus, the overall quantity of information I [ξ (t)] contained in a stochastic signal ξ (t)
and evaluated by logarithmic measure (5.1.46) is determined by the expression:

Iξ = I [ξ (t)] log2 I [ξ (t)] (bit). (5.1.47)

It should be recalled that the overall quantity of information I[ξ(t)] entering the formula (5.1.47) is measured in abits.
Theorem 5.1.2 has a corollary.

Corollary 5.1.1. Relative quantity of information Iξ,∆ contained in stochastic


signal ξ (t) and evaluated by logarithmic measure (5.1.46) is defined by the following
expression:
Iξ,∆ = I∆ [ξ (t)] log2 I∆ [ξ (t)] (bit). (5.1.48)

The relative quantity of information I∆[ξ(t)] entering the formula (5.1.48) is measured in abits and is connected with the overall quantity of information I[ξ(t)] by the relationship (4.1.19).
It should be noted that the overall quantity of information Iξ contained in a continuous stochastic signal ξ(t) and evaluated by the logarithmic measure (5.1.47) does not possess the property of additivity, unlike the overall quantity of information I[ξ(t)] measured in abits. Let a collection {ξk(t)}, t ∈ Tk, k = 1, 2, . . . , K form a partition of a stochastic process ξ(t):

ξ(t) = ⋃_{k=1}^{K} ξk(t), t ∈ Tk; (5.1.49)

Tk ∩ Tl = ∅, k ≠ l;  ⋃_{k=1}^{K} Tk = Tξ, Tξ = [t0, t0 + T];
ξk(t) = ξ(t), t ∈ Tk,

where Tξ = [t0, t0 + T] is the domain of definition of ξ(t).
Let I[ξk(t)] be the overall quantity of information contained in an elementary signal ξk(t) and measured in abits, and Iξ,k be the overall quantity of information contained in an elementary signal ξk(t) and evaluated by the logarithmic measure:

Iξ,k = I[ξk(t)] log2 I[ξk(t)] (bit). (5.1.50)

Then the following identity holds:

I[ξ(t)] = Σ_{k=1}^{K} I[ξk(t)]. (5.1.51)
Substituting the equality (5.1.51) into the expression (5.1.47), we obtain the following relationship:

Iξ = (Σ_{k=1}^{K} I[ξk(t)]) · log2(Σ_{k=1}^{K} I[ξk(t)]) = Σ_{k=1}^{K} (I[ξk(t)] · log2(Σ_{l=1}^{K} I[ξl(t)])). (5.1.52)

Using the identity (5.1.52) and the inequality:

Σ_{k=1}^{K} (I[ξk(t)] · log2(Σ_{l=1}^{K} I[ξl(t)])) > Σ_{k=1}^{K} (I[ξk(t)] · log2 I[ξk(t)]),

we obtain the following relationship:

Iξ > Σ_{k=1}^{K} Iξ,k. (5.1.53)

Thus, the relationship (5.1.53) implies that the overall quantity of information Iξ contained in a continuous stochastic signal ξ(t) and evaluated by the logarithmic measure (5.1.47) does not possess the property of additivity, and this overall quantity of information is always greater than the sum of the information quantities contained in the separate parts ξk(t) of the entire stochastic process (see (5.1.49)).
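The superadditivity expressed by (5.1.53) is easy to verify numerically; a minimal sketch with arbitrary illustrative values of the per-element measures I[ξk(t)]:

```python
import math

I_parts = [2.0, 3.0, 5.0]      # I[xi_k(t)] of a partition, in abits
I_total = sum(I_parts)         # additivity (5.1.51): I[xi(t)] = 10 abit

I_xi = I_total * math.log2(I_total)               # (5.1.47): whole signal, in bits
I_sum = sum(I * math.log2(I) for I in I_parts)    # sum of the parts (5.1.50)
print(I_xi, I_sum, I_xi > I_sum)   # ~33.22 > ~18.36: the whole exceeds its parts
```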
There are many practical applications in which it is necessary to operate with the information quantity contained in an unknown nonstochastic signal (or an unknown nonrandom parameter of this signal). An evaluation of this quantity is given by the following theorem.

Theorem 5.1.3. The information quantity Is(T∞) contained in an unknown nonstochastic signal s(t), defined in an infinite time interval t ∈ T∞ by some continuous function:

s(t) = f(λ, t), t ∈ T∞ = ]−∞, ∞[,

where λ is an unknown nonrandom parameter, is equal to 1 abit:

Is(T∞) = 1 (abit).
Corollary 5.1.2. Information quantity Iλ (T∞ ) contained in an unknown nonran-


dom parameter λ(t), which is a constant in an infinite time interval t ∈ T∞ :

λ(t) = λ = const, t ∈ T∞ =] − ∞, ∞[,

is equal to 1 abit:
Iλ (T∞ ) = 1 (abit).

Corollary 5.1.3. Information quantity Is (T0 ) contained in an unknown non-


stochastic signal s(t) defined in a bounded open time interval t ∈ T0 by some con-
tinuous function:
s(t) = f (λ, t), t ∈ T0 =]t0 , t0 + T [,
where λ is an unknown nonrandom parameter, is equal to 1 abit:

Is (T0 ) = 1 (abit).

Corollary 5.1.4. Information quantity Iλ (T0 ) contained in an unknown nonran-


dom parameter λ(t), which is a constant in a finite time interval t ∈ T0 :

λ(t) = λ = const, t ∈ T0 =]t0 , t0 + T [,

is equal to 1 abit:
Iλ (T0 ) = 1 (abit).

Proof of Theorem 5.1.3 and Corollaries 5.1.2 through 5.1.4. According to the condition of the theorem, arbitrary samples s(t′) and s(t″), t″ = t′ + ∆, of the signal s(t) are connected by a one-to-one transformation:

s(t″) = f(λ, t′ + ∆) = s(t′ + ∆);

s(t′) = f(λ, t″ − ∆) = s(t″ − ∆).

The NFSI ψs(t′, t″) of the signal s(t) (3.1.2) between its arbitrary samples s(t′) and s(t″) is equal to one:

ψs(t′, t″) = 1.

This identity holds if and only if an arbitrary sample s(t′) of the signal s(t) is characterized by an IDD is(t′, t) of the following form:

is(t′, t) = lim_{a→∞} (1/a) · [1(t − (t′ − a/2)) − 1(t − (t′ + a/2))],

where 1(t) is the Heaviside step function.
According to the formula (3.5.6), the overall quantity of information I[ξ(t)] contained in a signal ξ(t) with IDD iξ(tα, t) in the interval Tξ = [t0, t0 + Tx] is equal to:

I[ξ(t)] = ∫_{Tξ} sup_{tα∈Tξ} [iξ(tα, t)] dt.

Thus, the information quantity Is(T∞) contained in an unknown nonstochastic signal s(t) defined in the infinite time interval t ∈ T∞ is equal to:

Is(T∞) = ∫_{T∞} sup_{t′∈T∞} [is(t′, t)] dt = 1 (abit).

Corollary 5.1.2 is proved similarly.
To prove Corollary 5.1.3, consider the one-to-one mapping g of the signal s(t) = f(λ, t), defined on the infinite time interval t ∈ T∞ = ]−∞, ∞[, into the signal s*(t) = f*(λ, t) defined on the open time interval t ∈ T0 = ]t0, t0 + T[:

s(t) ⇄ s*(t) (direct mapping g, inverse mapping g⁻¹).

According to Corollary 4.2.5 of Theorem 4.2.4, such a mapping g preserves the overall quantity of information Is(T∞) contained in the signal s(t):

Is(T∞) = Is(T0) = 1 (abit).

Corollary 5.1.4 is proved similarly.
Thus, the information quantity contained in an unknown nonstochastic signal s(t) = f(λ, t) and in its constant parameter λ in a bounded open time interval t ∈ T0 = ]t0, t0 + T[ is equal to 1 abit:

Is(T0) = Iλ(T0) = 1 (abit). 
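The key step of the proof, namely that the limiting rectangular IDD integrates to one for every scale a, can be illustrated numerically; a sketch (our own check, with finite values of a standing in for the limit a → ∞):

```python
import numpy as np

def rect_idd(t, t_prime, a):
    """Finite-a form of the IDD in the proof: a rectangle of width a
    and height 1/a centered at t_prime."""
    return np.where(np.abs(t - t_prime) <= a / 2.0, 1.0 / a, 0.0)

t = np.linspace(-5.0e4, 5.0e4, 2_000_001)
dt = t[1] - t[0]
for a in (10.0, 1.0e2, 1.0e4):
    print((rect_idd(t, 0.0, a) * dt).sum())  # ~1.0 for every scale a,
                                             # hence Is(T_inf) = 1 abit in the limit
```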

Hereinafter, this statement will be extended to the signals (and their parameters) defined on the closed interval [t0, t0 + T].
Thus, for a continuous stochastic signal ξ(t) characterized by the partition (5.1.49), the following statements hold.

1. The overall quantity of information I[ξ(t)] measured in abits possesses the property of additivity (5.1.51).
2. Conversely, the overall quantity of information Iξ contained in a continuous stochastic signal ξ(t) and evaluated by the logarithmic measure (5.1.47) does not possess the property of additivity. This quantity Iξ is always greater than the sum of the quantities Iξ,k characterizing the overall quantities of information contained in the elements {ξk(t)} of a partition of the signal ξ(t) (5.1.53).
3. Information quantity contained in an unknown nonstochastic signal s(t) =
f (λ, t) and its constant parameter λ in a finite time interval is equal to
1 abit:
Is (T0 ) = Iλ (T0 ) = 1 (abit).

Regarding the information quantity carried by continuous stochastic processes, one should summarize the following facts. First, the information quantity (both overall and relative) carried by a continuous stochastic process ξ(t) in a bounded time interval Tξ = [t0, t0 + T], independently of the units, is always a finite quantity. The exceptions are stochastic processes with IDD in the form of a δ-function and, in particular, delta-correlated Gaussian processes. Second, the overall quantity of information Iξ contained in a continuous stochastic signal ξ(t) and evaluated by the logarithmic measure (5.1.47) does not possess the property of additivity, unlike the overall quantity of information I[ξ(t)], which is measured in abits. In this case, the inequality (5.1.53) confirms Aristotle's pronouncement that "the whole is always greater than the sum of its parts".

5.2 Capacity of Noiseless Communication Channels


Using the formalized descriptions of signals as elements of a metric space with special properties, together with the introduced measures of information quantities, this section considers the informational characteristics of communication channels in order to establish the most general regularities defining information transmission in the absence of noise (interference). The informational characteristics of communication channels under the influence of noise (interference) will be investigated in Chapter 6.
Channels and signals considered within informational systems are characterized by a diversity of physical natures and properties. To determine the general regularities, it is necessary to abstract from their concrete physical content and to operate with general notions.
An information transmitting and receiving channel (communication channel or, simply, channel), in a wide sense, is a collection of transmitting, receiving, and processing units (sets), together with the physical medium, providing transmission of information from one point of space to another by signals, their reception, and their processing to extract the useful information contained in the signals [247], [248]. In information theory, one distinguishes continuous and discrete channels, intended for transmitting continuous and discrete messages, respectively. Real channels are complicated inertial nonlinear objects whose characteristics change in a random manner over time. To analyze such channels, special mathematical models are used, differing in complexity level and adequacy degree. Gaussian channels are the types of real channel models built under the following suppositions [247], [248], [70]:

1. The main physical parameters of a channel are known nonrandom quantities.
2. The channel passband is bounded by some upper bound frequency F.
3. Stochastic Gaussian signals with bounded power, normal distribution of instantaneous values, and uniform power spectral density are transmitted over the channel within its bounded passband.
4. The channel contains additive Gaussian quasi-white noise with bounded power and uniform power spectral density bounded by the channel passband.
5. There are no statistical relations between a signal and noise.


This model needs more detail concerning the boundedness of a channel passband. An ideal filter with a rectangular passband is physically unrealizable inasmuch as it does not satisfy the causality principle [235]. The need to apply an ideal filter as a theoretical model of a channel was stipulated by the sampling theorem, which requires the signal band to be bounded by some value F. Here we shall consider physically feasible channels, operating with the "effective bandwidth of a channel" and the "effective bandwidth of signal power spectral density".
If the ill effects of noise and/or interference in a channel can be ignored, then analyzing the channel involves a model in the form of an ideal channel, also called a noiseless channel. When a high level of information transmission quality is required and the noise (interference) influence in a channel cannot be neglected, a more complicated model is used, known as a noisy channel (channel with noise/interference).
Capacity is the main informational characteristic of a channel. In the most general case, the noiseless channel capacity C means the maximum information quantity I(s) that can be transmitted over the channel by a signal s(t) (discrete or continuous) during the signal duration T:

C = max_s I(s)/T.
Earlier we introduced two measures of the information quantity carried by a signal, i.e., the overall quantity of information I(s) and the relative quantity of information I∆(s). According to this approach, one should distinguish the channel capacity C evaluated by overall quantity of information I(s) and the channel capacity C∆ evaluated by relative quantity of information I∆(s). By denoting channel capacities C (by o.q.i.) and C∆ (by r.q.i.), we mean the capacities evaluated by these respective measures of information quantity.

Definition 5.2.1. By the noiseless channel capacity C (by o.q.i.) we mean the maximum information quantity I(s) that can be transmitted through the channel by a signal s(t) with duration T:

C = max_s I(s)/T (abit/s). (5.2.1)
Substituting into Definition (5.2.1) the value of the overall quantity of information I(s), which, according to the formula (5.1.8), is expressed through the information distribution density (IDD) is(τ) of the signal s(t), we obtain one more variant of the definition of the noiseless channel capacity C:

C = max_s [T · is(0)/T] = max_s is(0) (abit/s). (5.2.2)

Definition 5.2.2. By the noiseless channel capacity C∆ (by r.q.i.) we mean the maximum information quantity I∆(s) that can be extracted from a signal s(t) with duration T under its transmission through a noiseless channel:

C∆ = max_s I∆(s)/T (abit/s). (5.2.3)
In all the variants of the capacity definitions (5.2.1) through (5.2.3), the choice of the best and most appropriate signal s(t) for the given channel is realized over all possible signals from the signal space. In this sense, the channel is considered to be matched with the signal s(t) if its capacity evaluated by overall (relative) quantity of information is equal to the ratio of the overall (relative) quantity of information contained in the signal s(t) to its duration T:

C = I(s)/T = T · is(0)/T = is(0) (abit/s); (5.2.4a)

C∆ = I∆(s)/T (abit/s), (5.2.4b)

where is(0) is the value of the IDD is(τ) at the point τ = 0.
As shown in Section 4.1, for the signals characterized by an IDD in the form of a δ-function or in the form of the difference of two Heaviside step functions (1/a)·[1(τ + a/2) − 1(τ − a/2)], the relative quantity of information I∆(s) contained in the signal s(t) is equal to the overall quantity of information I(s) (see Formula (4.1.17)):

I∆(s) = I(s).

Meanwhile, for the signals characterized by an IDD in the form of other functions, the relative quantity of information I∆(s) contained in the signal s(t) is equal to half the overall quantity of information I(s) (see Formula (4.1.19)):

I∆(s) = (1/2) · I(s).
Taking into account this interrelation between the relative quantity of information I∆(s) and the overall quantity of information I(s) contained in the signal s(t), the relationship between the noiseless channel capacities evaluated by relative and by overall quantity of information is defined by the IDD of the signal s(t).
For the signals characterized by an IDD in the form of a δ-function or in the form of the difference of two Heaviside step functions (1/a)·[1(τ + a/2) − 1(τ − a/2)], the noiseless channel capacity C∆ (by r.q.i.) is equal to the capacity C (by o.q.i.):

C∆ = C. (5.2.5)

For the signals characterized by an IDD in the form of other functions, the noiseless channel capacity (by r.q.i.) C∆ is equal to half the capacity (by o.q.i.) C:

C∆ = (1/2) · C. (5.2.6)
Using the relationships obtained in Section 5.1, we evaluate the capacities of both
discrete and continuous noiseless channels, which characterize information quanti-
ties carried by both discrete and continuous stochastic signals respectively.
5.2.1 Discrete Noiseless Channel Capacity


According to Theorem 5.1.1, a discrete random sequence u(t) = {uj(t)}, j = 1, 2, . . . , n with statistically independent symbols, in which each symbol uj(t) equiprobably takes the values from the set {ui}, i = 1, 2, . . . , q at an arbitrary instant t ∈ Tu = [t0, t0 + T], contains the information quantity I[u(t)] = Iu[n, q] determined by Hartley's logarithmic measure Iu[n, q] = n log2 q (bit), but always less than or equal to n log2 n bit (see (5.1.34)):

Iu[n, q] = { n · log2 q, q < n;
             n · log2 n, q ≥ n. (bit)

We shall assume that the discrete random sequence u(t) is matched with the channel. Then the discrete noiseless channel capacity (by o.q.i.) C (bit/s) measured in bit/s is equal to the ratio of the overall quantity of information Iu[n, q] contained in the signal u(t) to its duration T:

C = Iu[n, q]/T = { (1/τ0) · log2 q, q < n;
                   (1/τ0) · log2 n, q ≥ n, (5.2.7)

where τ0 is the duration of the elementary signal uj(t).
Consider the case in which the discrete random sequence u(t) = {uj(t)} is characterized by a statistical interrelation between the symbols with IDD iu(τ), and is matched with the channel. The discrete noiseless channel capacity (by o.q.i.) C (bit/s) measured in bit/s, according to the relationship (5.1.45), is bounded by the quantity:

C (bit/s) ≤ iu(0) log2[iu(0)T]. (5.2.8)

The relationships (5.2.7) and (5.2.8) imply that the discrete noiseless channel capacity (by o.q.i.) C (bit/s) measured in bit/s is always a finite quantity under the condition that the duration T of the sequence is a bounded value: T < ∞.
Let a discrete random sequence u(t) with statistically independent symbols {uj(t)}, characterized by an IDD iu(τ) in the form of the difference of two Heaviside step functions (1/τ0)·[1(τ + τ0/2) − 1(τ − τ0/2)], be matched with the channel. Then the discrete noiseless channel capacity (by r.q.i.) C∆ (bit/s) is equal to the channel capacity (by o.q.i.) C (bit/s):

C∆ (bit/s) = C (bit/s), (5.2.9)

and the latter, according to the relationships (5.2.7) and (5.1.40), is determined by the expression:

C (bit/s) = { C (abit/s) · log2 q, q < n;
              C (abit/s) · log2 n, q ≥ n, (5.2.10)

where C (abit/s) = iu(0) = 1/τ0.
5.2.2 Continuous Noiseless Channel Capacity


According to Theorem 5.1.2, the overall quantity of information Iξ contained in a continuous signal ξ(t), t ∈ Tξ = [t0, t0 + T] and evaluated by the logarithmic measure is determined by the expression (5.1.47):

Iξ = I[ξ(t)] log2 I[ξ(t)] (bit),

where I[ξ(t)] = m(⋃_α Xα) (abit) is the overall quantity of information contained in the continuous signal ξ(t) considered as a collection of the elements {Xα}.
We assume that the continuous stochastic signal is matched with the channel and vice versa. The continuous noiseless channel capacity (by o.q.i.) C (bit/s) measured in bit/s is equal to the ratio of the overall quantity of information Iξ contained in the signal ξ(t) to its duration T:

C (bit/s) = (I[ξ(t)]/T) · log2 I[ξ(t)] = C (abit/s) · log2 I[ξ(t)] = C (abit/s) · log2[C (abit/s) · T], (5.2.11)

where C (abit/s) is the continuous noiseless channel capacity (by o.q.i.) measured in abit/s.
The expression (5.2.11) defines the ultimate information quantity Iξ that can be transmitted by the signal ξ(t) over the channel in a time equal to the signal duration T.
The relative quantity of information Iξ,∆ contained in a continuous stochastic signal ξ(t) and evaluated by the logarithmic measure is determined by Expression (5.1.48):

Iξ,∆ = I∆[ξ(t)] log2 I∆[ξ(t)] (bit).

The continuous noiseless channel capacity (by r.q.i.) C∆ (bit/s) measured in bit/s is equal to the ratio of the relative quantity of information Iξ,∆ contained in the signal ξ(t) to its duration T:

C∆ (bit/s) = (I∆[ξ(t)]/T) · log2 I∆[ξ(t)] = C∆ (abit/s) · log2 I∆[ξ(t)] = C∆ (abit/s) · log2[C∆ (abit/s) · T], (5.2.12)

where C∆ (abit/s) is the continuous noiseless channel capacity (by r.q.i.) measured in abit/s.
The expression (5.2.12) defines the ultimate quantity of useful information Iξ,∆ that can be extracted from the transmitted signal ξ(t) in a time equal to the signal duration T.
The formulas (5.2.11), (5.2.12) converting continuous noiseless channel capacity from abit/s to bit/s have the following features. First, the continuous noiseless channel capacity evaluated by overall (relative) quantity of information C (bit/s) (C∆ (bit/s)) and measured in bit/s, unlike the channel capacity C (abit/s) (C∆ (abit/s)) measured in abit/s, depends on the IDD iξ(τ) and also on the overall (relative) quantity of information I[ξ(t)] (I∆[ξ(t)]) contained in the signal ξ(t). Second, the continuous noiseless channel capacity evaluated by overall (relative) quantity of information C (bit/s) (C∆ (bit/s)) and measured in bit/s, as well as the channel capacity C (abit/s) (C∆ (abit/s)) measured in abit/s, is always a finite quantity under the condition that the duration T of the continuous signal is a bounded value: T < ∞. Third, the continuous noiseless channel capacity evaluated by overall (relative) quantity of information C (bit/s) (C∆ (bit/s)) and measured in bit/s is uniquely determined only when the signal duration T is known. Fourth, (5.2.11) and (5.2.12) imply that under any signal-to-noise ratio and by means of any signal processing units, one cannot transmit (extract) a greater information quantity (quantity of useful information) during the signal duration T than the quantities determined by these relationships, respectively.
Thus, when talking about continuous noiseless channel capacity, one should take into account the following considerations. On the basis of the introduced notions of quantities of absolute, mutual, and relative information, along with overall and relative quantities of information, a continuous noiseless channel capacity can be defined uniquely. Conversely, applying the logarithmic measure of information quantity, a continuous noiseless channel capacity can be uniquely defined exclusively for a fixed time, for instance, for the signal duration T, or for a time unit (e.g., a second). In the latter case, the formulas converting continuous noiseless channel capacity evaluated by overall (relative) quantity of information (5.2.11), (5.2.12) may be written as follows:
C (bit/s) = C (abit/s) · log2 [C (abit/s) · 1 s]; (5.2.13a)
C∆ (bit/s) = C∆ (abit/s) · log2 [C∆ (abit/s) · 1 s]. (5.2.13b)
Continuous noiseless channel capacity (by o.q.i.) C (bit/s) (5.2.13a) measured in
bit/s defines maximum information quantity that can be transmitted through a
noiseless channel for one second. Continuous noiseless channel capacity (by r.q.i.)
C∆ (bit/s) (5.2.13b) measured in bit/s defines maximum quantity of useful infor-
mation that can be extracted from a signal transmitted through a noiseless channel
for one second.
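The conversion formulas (5.2.13) translate directly into code; a minimal sketch (function name hypothetical):

```python
import math

def capacity_bit_per_s(C_abit_per_s: float) -> float:
    """Convert a noiseless channel capacity from abit/s to bit/s for a
    one-second interval, per (5.2.13): C(bit/s) = C(abit/s)*log2(C(abit/s)*1 s)."""
    return C_abit_per_s * math.log2(C_abit_per_s * 1.0)

print(capacity_bit_per_s(6800.0))   # ~8.66e4 bit/s; cf. Example 5.2.1 below
```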

5.2.3 Evaluation of Noiseless Channel Capacity


5.2.3.1 Evaluation of Capacity of Noiseless Channel Matched with Stochastic Stationary Signal with Uniform Information Distribution Density
We can determine the capacity of a noiseless channel matched with a stochastic stationary signal s(t) characterized by a uniform (rectangular) IDD is(τ):

is(τ) = (1/a) · [1(τ + a/2) − 1(τ − a/2)], (5.2.14)

where 1(τ) is the Heaviside step function; a is a scale parameter.
A multiposition discrete random sequence s(t) = {sj(t)}, j = 1, 2, . . . , n with statistically independent symbols {sj(t)} is an example of such a signal; the duration τ0 of a symbol sj(t) of this sequence is equal to the parameter a appearing in the formula: τ0 = a.
Normalized function of statistical interrelationship (NFSI) ψs (τ ) of a stationary
signal s(t) is completely defined by IDD is (τ ) by the relationship (3.4.2):

ψs(τ) = { 1 − |τ|/a, |τ| ≤ a;
          0,          |τ| > a. (5.2.15)
Hyperspectral density (HSD) σs(ω) of the signal s(t), according to the formula (3.1.11), is connected with the NFSI ψs(τ) through the Fourier transform:

σs(ω) = F[ψs(τ)] = (1/2π) ∫_{−∞}^{∞} ψs(τ) exp[−iωτ] dτ. (5.2.16)

Substituting the value of the NFSI ψs(τ) (5.2.15) into (5.2.16), we obtain the expression for the HSD σs(ω) of the signal s(t):

σs(ω) = (a/2π) · [sin(aω/2)/(aω/2)]². (5.2.17)
The effective width ∆ω of the HSD σs(ω) is equal to:

∆ω = (1/σs(0)) ∫_{0}^{∞} σs(ω) dω = 1/(2σs(0)) = π/a. (5.2.18)
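The chain (5.2.15)–(5.2.18) admits a quick numerical verification (our own check; note that np.sinc(x) = sin(πx)/(πx), so the argument of (5.2.17) is rescaled accordingly):

```python
import numpy as np

a = 2.0
w = np.linspace(0.0, 5.0e3, 2_000_001)
dw = w[1] - w[0]
# HSD (5.2.17); sin(a*w/2)/(a*w/2) = np.sinc(a*w/(2*pi)):
sigma = (a / (2.0 * np.pi)) * np.sinc(a * w / (2.0 * np.pi)) ** 2

d_omega = sigma.sum() * dw / sigma[0]   # (5.2.18) over [0, inf), truncated tail
print(d_omega, np.pi / a)               # both ~1.571, i.e., effective width pi/a
```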

The noiseless channel capacity (by o.q.i.) C, under the condition that the channel is matched with a stochastic signal with a rectangular IDD, according to the formula (5.2.4a), is determined by the maximum value is(0) of the IDD is(τ):

C = is(0) = 1/a = 2∆f (abit/s), (5.2.19)

where ∆f = ∆ω/(2π) = 1/(2a) = 0.5is(0) is the real effective width of the HSD σs(ω).
The noiseless channel capacity (by r.q.i.) C∆, under the condition that the channel is matched with a stochastic signal with a rectangular IDD, according to the formula (5.2.5), is equal to the channel capacity (by o.q.i.) C:

C∆ = C = is(0) = 2∆f (abit/s). (5.2.20)

On the basis of the continuous noiseless channel capacity evaluated by overall (relative) quantity of information C (abit/s) (C∆ (abit/s)) (5.2.19) and (5.2.20), we shall determine the channel capacity C (bit/s) (C∆ (bit/s)) measured in bit/s on the assumption that the duration T of the signal s(t) is equal to one second. In this case, according to the formulas (5.2.13), the channel capacities C (bit/s) (C∆ (bit/s)) are equal to:

C (bit/s) = 2∆f · log2[2∆f · 1 s]; (5.2.21a)

C∆ (bit/s) = 2∆f · log2[2∆f · 1 s], (5.2.21b)

where ∆f = 0.5is(0) is the real effective width of the HSD σs(ω) of the signal s(t).
Example 5.2.1. For a channel matched with a signal characterized by an IDD is(τ) in the form (5.2.14) with real effective width of HSD ∆f = 3.4 kHz, the channel capacities C (bit/s) and C∆ (bit/s), determined by the formulas (5.2.21a) and (5.2.21b), are equal to each other: C (bit/s) = C∆ (bit/s) = 87 kbit/s. The value C (bit/s) determines the maximum information quantity that can be transmitted over this channel by the signal s(t) in one second. The value C∆ (bit/s) determines the maximum quantity of useful information that can be extracted from the signal s(t) in one second. For any channel, as follows from Formulas (5.2.21), both these values are potentially achievable and are finite under a finite value of the real effective width of HSD ∆f of the signal s(t). It should be noted that C (bit/s) and C∆ (bit/s) are not identical quantities, and their possible equality is determined by the features of the IDD is(τ) (5.2.14) of the signal s(t). The expression (5.2.21a) (respectively (5.2.21b)) implies that under any signal-to-noise ratio in a channel and by means of any signal processing units, one cannot transmit (extract) more information quantity (quantity of useful information) in one second than the quantity determined by these formulas. 5

As for Gaussian channels, one should note the following consideration. The squared module of the frequency response function (amplitude-frequency characteristic) K(ω) of a channel matched with a Gaussian signal s(t) is equal to its power spectral density S(ω):

|K(ω)|² = S(ω). (5.2.22)

Despite the nonlinear dependence between the NFSI ψs(τ) and the normalized autocorrelation function rs(τ) of the signal s(t), which is defined by the relationship (3.1.3), one may consider the power spectral density S(ω) to be approximately proportional to the HSD σs(ω):

S(ω) ≈ Aσs(ω), (5.2.23)

where A is a coefficient of proportionality providing the normalization property of the HSD: ∫_{−∞}^{∞} σξ(ω) dω = 1 (see Section 3.1).
The effective width ∆ωeff of the squared module of the frequency response function K(ω) of a channel matched with a Gaussian signal s(t) is approximately equal to the effective width ∆ω of the HSD σs(ω):

∆ωeff = (1/|K(0)|²) ∫_{0}^{∞} |K(ω)|² dω ≈ ∆ω. (5.2.24)

The real effective width Feff of the squared module of the frequency response function K(ω) of a channel matched with a Gaussian signal s(t) is connected with the effective width ∆ωeff by the known relationship and is approximately equal to the real effective width of HSD ∆f:

Feff = ∆ωeff/2π ≈ ∆f. (5.2.25)

According to the last relationship, the formulas (5.2.21) for a channel matched with a Gaussian signal s(t) may be written in the following form:

C (bit/s) ≈ 2Feff · log2 [2Feff · 1 s]; (5.2.26a)

C∆ (bit/s) ≈ 2Feff · log2 [2Feff · 1 s]. (5.2.26b)

5.2.3.2 Evaluation of Capacity of Noiseless Channel Matched with Stochastic Stationary Signal with Laplacian Information Distribution Density

We now determine the capacity of a noiseless channel matched with a stochastic stationary signal s(t) characterized by a Laplacian IDD is(τ):

is(τ) = b exp(−2b|τ|), (5.2.27)
where b is a scale parameter.
The NFSI ψs(τ) of the stationary signal s(t) is completely defined by the IDD is(τ) through the relationship (3.4.2):

ψs(τ) = exp(−b|τ|). (5.2.28)
The HSD σs(ω) of the signal s(t), according to formula (3.1.11), is connected with the NFSI ψs(τ) by the Fourier transform and is equal to:

σs(ω) = F[ψs(τ)] = (1/2π) ∫_{−∞}^{∞} ψs(τ) exp[−iωτ] dτ = b/(π(b² + ω²)). (5.2.29)

The effective width ∆ω of the HSD σs(ω) is equal to:

∆ω = (1/σs(0)) ∫_{0}^{∞} σs(ω) dω = 1/(2σs(0)) = π·b/2. (5.2.30)

The noiseless channel capacity (by o.q.i.) C, under the condition that the channel is matched with a stationary signal with Laplacian IDD, according to the formula (5.2.4a), is determined by the maximum value is(0) of the IDD is(τ):

C = is(0) = b = 4∆f (abit/s), (5.2.31)

where ∆f = ∆ω/(2π) = b/4 = is(0)/4 is the real effective width of the HSD σs(ω).


The noiseless channel capacity (by r.q.i.) C∆, under the condition that the channel is matched with a stationary signal with Laplacian IDD, according to the formula (5.2.6), is equal to half the noiseless channel capacity (by o.q.i.) C:

C∆ = (1/2)C = (1/2)is(0) = 2∆f (abit/s). (5.2.32)
On the basis of the noiseless channel capacity evaluated by overall (relative) quantity of information C (abit/s) (C∆ (abit/s)) (5.2.31) and (5.2.32), we determine the channel capacities C (bit/s) and C∆ (bit/s) measured in bit/s, on the assumption that the duration T of the signal s(t) is equal to one second. In this case, according to the formulas (5.2.13), the channel capacities C (bit/s), C∆ (bit/s) are equal to:

C (bit/s) = 4∆f · log2[4∆f · 1 s]; (5.2.33a)

C∆ (bit/s) = 2∆f · log2[2∆f · 1 s], (5.2.33b)

where ∆f = is(0)/4 is the real effective width of the HSD σs(ω) of the signal s(t).

Example 5.2.2. For a channel matched with a signal characterized by an IDD is(τ) in the form (5.2.27) with real effective width of HSD ∆f = 3.4 kHz, the noiseless channel capacities C (bit/s) and C∆ (bit/s) determined by the formulas (5.2.33a) and (5.2.33b) are, respectively, equal to C (bit/s) = 187 kbit/s and C∆ (bit/s) = 87 kbit/s. The quantity C (bit/s) determines the maximum information quantity that can be transmitted over such a channel by the signal s(t) in one second, and the quantity C∆ (bit/s) determines the maximum quantity of useful information that can be extracted from the signal s(t) in one second. Both values, for any channel, as follows from the formulas (5.2.33), are potentially achievable and are finite under a finite value of the real effective width of HSD ∆f of the signal s(t). Thus, the expression (5.2.33a) (respectively (5.2.33b)) implies that under any signal-to-noise ratio and by means of any signal processing units one cannot transmit (extract) a greater information quantity (quantity of useful information) in one second than the quantity determined by these formulas. 5
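The figures of Examples 5.2.1 and 5.2.2 follow directly from the formulas (5.2.21) and (5.2.33); a compact numerical summary (helper name hypothetical):

```python
import math

def C_bits_per_s(rate_abit_s: float) -> float:
    # (5.2.13) over a one-second interval
    return rate_abit_s * math.log2(rate_abit_s)

df = 3400.0                                        # real effective width of HSD, Hz
print(C_bits_per_s(2 * df), C_bits_per_s(2 * df))  # rectangular IDD: ~8.66e4 bit/s each
print(C_bits_per_s(4 * df), C_bits_per_s(2 * df))  # Laplacian IDD: ~1.87e5 and ~8.66e4
```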

According to the relationship (5.2.25), the formulas (5.2.33) for a noiseless chan-
nel matched with Gaussian signal s(t) may be written as follows:

C (bit/s) ≈ 4Feff · log2 [4Feff · 1 s]; (5.2.34a)

C∆ (bit/s) ≈ 2Feff · log2 [2Feff · 1 s], (5.2.34b)


where Feff is real effective width of a squared module of frequency-response function
of the channel matched with a Gaussian signal s(t).
Comparing the expressions (5.2.21) and (5.2.33), which determine the capacities C (bit/s) (C∆ (bit/s)) for two distinct channels matched with signals characterized by the same real effective width of HSD ∆f, one can draw the following conclusions. First, a channel matched with a signal characterized by the Laplacian IDD (5.2.27) permits transmitting in one second more than double the information quantity of the channel matched with a signal characterized by the rectangular IDD (5.2.14). This conclusion raises the question of which channel from a given collection, under a fixed value of real effective width of HSD ∆f = const, permits transmission of the greatest information quantity per unit time. Second, in spite of the noted feature, one can extract from the signal characterized by the Laplacian IDD (5.2.27) the same quantity of useful information per unit time as can be extracted from the signal characterized by the rectangular IDD (5.2.14), under the condition that the values of the real effective width of HSD ∆f of both signals, with which these channels are matched, are the same. This conclusion allows us to formulate the question of which channel from a given collection, under a fixed value of real effective width of HSD ∆f = const, permits extraction of the greatest quantity of useful information from a transmitted signal per unit time.
Among all physically feasible channels with a fixed value of real effective width of HSD ∆f = const, only the channel matched with a stochastic stationary signal characterized by the Laplacian IDD of the form (5.2.27) can transmit the largest information quantity per unit time, equal to is(0) = 4∆f (abit/s). From this signal transmitted through such a channel, one can extract the largest quantity of useful information per unit time, equal to 0.5is(0) = 2∆f (abit/s). The latter property also belongs to all discrete random sequences (both binary and multiposition) with statistically independent symbols and rectangular IDDs of the form (5.2.14).
The investigation of noisy channel capacity is essentially based on the general relationships characterizing the efficiency of signal processing against a noise (interference) background, so the approaches to evaluating the capacity of channels with noise (interference) will be considered in Chapter 6.
6 Quality Indices of Signal Processing in Metric Spaces with L-group Properties

In electronic systems that transmit, receive, and extract information, signal processing is always accompanied by interactions between useful signals and interference (noise). The presence of interference (noise) at the input of a signal processing unit prevents accurate reproduction of the useful signal or transmitted message at the output of the signal processing unit because of information losses.
A signal processing unit may be considered optimal if the losses of information are minimal. Solving signal processing problems requires various criteria and quality indices.
The conditions of interaction of useful signals and interference (noise) require optimal signal processing algorithms to ensure minimal information losses. The level of information losses defines the potential quality indices of signal processing.
Generally, the evaluation of potential quality indices of signal processing and
also synthesis and analysis of signal processing algorithms are fundamental problems
of signal processing theory. In contrast to synthesis, the establishment of potential
quality indices of signal processing is not based on any possible criteria of optimality.
Potential quality indices of signal processing that determine the upper bound of
efficiency of solving signal processing problems are not defined by the structure of
signal processing unit. They follow from informational properties of the signal space
where signal processing is realized.
Naturally, under certain conditions of interactions between useful signals and
interference (noise), the algorithms of signal processing in the presence of inter-
ference (noise) obtained via the synthesis based on an optimality criterion cannot
provide quality indices of signal processing that are better than those of potential
ones.
The main results obtained in Chapter 4 with respect to physical signal space
create the basis for achieving the final results that are important for signal pro-
cessing theory which deals with establishment of potential quality indices of signal
processing in signal spaces with various algebraic properties. These final results also
are important for information theory with respect to the relationships defining the
capacity of discrete and continuous communication channels, taking into account
the influence of interference (noise) there.
We will formulate the five main signal processing problems, excluding the problem of signal recognition, that are covered in the known works on statistical signal processing, statistical radio engineering, and statistical communication theory [159], [163], [155]. We will extend their content to signal spaces with lattice properties, so that consideration of these problems is not confined to signal spaces with group properties (to linear spaces).

6.1 Formulation of Main Signal Processing Problems


Let a useful signal s(t) interact with noise (interference) n(t) in a physical signal space Γ with the properties of the L-group Γ(+, ∨, ∧):

x(t) = s(t) ⊕ n(t), t ∈ Ts, (6.1.1)

where ⊕ is one of the three binary operations of the L-group Γ(+, ∨, ∧); Ts = [t0, t0 + T] is the domain of definition of the useful signal s(t); T is the duration of the useful signal s(t).
The probabilistic-statistical properties of the stochastic signal s(t) and the noise (interference) n(t) are known to some extent.
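The three interaction models admitted by the L-group Γ(+, ∨, ∧) are easy to illustrate on sampled waveforms; a minimal sketch in which the join ∨ and the meet ∧ are taken samplewise (an illustrative convention of this sketch, not the author's formal construction):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
s = np.sin(2.0 * np.pi * 5.0 * t)          # useful signal s(t)
n = 0.3 * rng.standard_normal(t.size)      # interference (noise) n(t)

# The three binary operations of the L-group, applied samplewise:
x_add  = s + n                  # x(t) = s(t) + n(t)
x_join = np.maximum(s, n)       # x(t) = s(t) v n(t)  (join)
x_meet = np.minimum(s, n)       # x(t) = s(t) ^ n(t)  (meet)
```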
The useful signal s(t), as a rule, is a deterministic one-to-one function M[∗, ∗] of a carrier signal c(t) and a vector of informational parameters λ(t) changing on the domain of definition Ts of the signal s(t):

s(t) = M[c(t), λ(t)], (6.1.2)

where λ(t) = [λ1(t), . . . , λm(t)], λ(t) ∈ Λ is the vector of informational parameters of the signal s(t).
Evaluation of an estimator ŝ(t + τ) of the signal s(t) as a functional Fŝ[x(t)] of an observed realization x(t), t ∈ Ts is called a signal filtering (extraction) problem for τ = 0, and a signal interpolation (smoothing) problem for τ < 0 [159], [163], [155]:

ŝ(t + τ) = Fŝ[x(t)], t ∈ Ts. (6.1.3)
Taking into account the one-to-one property of modulating function M [∗, ∗], the
problems of evaluation of the estimators ŝ(t) and λ̂(t) are equivalent in a sense that
by having an estimator ŝ(t) of a signal s(t), it is easy to obtain an estimator λ̂(t)
of a parameter λ(t) and vice versa:

λ̂(t) = M −1 [c(t), ŝ(t)]; (6.1.4a)

ŝ(t) = M [c(t), λ̂(t)]. (6.1.4b)


If an estimated vector parameter λ(t) does not change on a domain of definition Ts
of a signal s(t), the problem of parameter filtering λ(t) is equivalent to the problem
of estimation, which is formulated in the following way.
Let a useful signal s(t) be a deterministic one-to-one function M [∗, ∗] of a carrier
signal c(t) and informational parameter λ that remains permanent on a domain of
definition Ts of a signal s(t):

s(t) = M [c(t), λ], (6.1.5)


where λ = [λ1 , . . . , λm ], λ ∈ Λ is a vector of unknown informational parameters of


a signal s(t), λ1 = const,. . . , λm = const.
Then the problem of estimation of an unknown vector parameter λ = [λ1, . . . , λm] of the signal s(t) under the influence of noise (interference) n(t) lies in forming (according to a certain criterion) an estimator vector λ̂ in the form of a vector functional Fλ̂[x(t)] of an observed realization x(t):

λ̂ = Fλ̂ [x(t)], t ∈ Ts , λ̂ ∈ Λ. (6.1.6)

Undoubtedly, the problem of signal extraction (filtering) is more general and com-
plex than the problem of parameter estimation [163], [155], inasmuch as an esti-
mator λ̂ of parameter λ of a signal s(t) may be obtained from the solution of a
filtering problem (6.1.3) on the base of the relationship (6.1.4a):

λ̂ = M −1 [c(t), ŝ(t)]; (6.1.7a)

ŝ(t) = Fŝ [x(t)]. (6.1.7b)


Consider now a model of interaction between a signal si (t) from a signal set
S = {si (t)}, i = 1, . . . , m and interference (noise) n(t) in a signal space with the
properties of L-group Γ (+, ∨, ∧):

x(t) = si (t) ⊕ n(t), t ∈ Ts , i = 1, . . . , m, (6.1.8)

where ⊕ is some binary operation of L-group Γ (+, ∨, ∧); Ts = [t0 , t0 + T ] is a


domain of definition of a signal si (t); t0 is a time of signal arrival; T is a signal
duration; m ∈ N, N is the set of natural numbers.
The problem of classification of signals in the presence of interference (noise) lies in making a decision, using some criterion, as to which signal from the set S = {si(t)}, i = 1, . . . , m is contained in the observed process x(t). Detection of a signal s(t) under the influence of interference (noise) n(t) is a special case of the problem of binary classification of two signals with S = {s1(t) = 0, s2(t) = s(t)}, m = 2.
Consider a model of interaction between a signal si (t, λ) from a signal set S =
{si (t, λ)}, i = 1, . . . , m and interference (noise) n(t) in a signal space with the
properties of L-group Γ (+, ∨, ∧):
 
x(t) = ⊕_{i=1}^{m} θi si(t; λ) ⊕ n(t), t ∈ Tobs, i = 1, . . . , m, (6.1.9)

where ⊕ is some binary operation of L-group Γ (+, ∨, ∧); Tobs is an observation


interval of the signals si (t, λ); θi is a random parameter taking the values from the
set {0, 1}: θi ∈ {0, 1}; λ is an unknown nonrandom scalar parameter of a signal
si (t, λ) that takes the values on a set Λ: λ ∈ Λ; m ∈ N, N is the set of natural
numbers.
The problem of signal resolution-classification under interference (noise) back-
ground lies in making a decision, using some criterion, distinguishing which one

of signal combinations from a set S = {si (t, λ)}, i = 1, . . . , m is contained in the


observed process x(t). If the signals from a set S = {si(t, λ)}, i = 1, . . . , m are such
that the formulated problem has a solution for any of the 2^m signal combinations,
then one can claim that the signals {si(t, λ)} are resolvable in a parameter λ.
It is obvious that the formulated problem of signal resolution-classification is
equivalent to the problem of classification of the signals from a set S′ = {sk(t, λ)},
k = 1, . . . , 2^m.
Within the process of resolution-classification, one can consider resolution-
detection of a useful signal si (t, λ) in the presence of interference (noise) n(t) and
other interfering signals {sj(t, λ)}, j ≠ i, j = 1, . . . , m. Thus, under resolution-
detection, the problem of signal processing is the establishment of the presence of
the i-th signal in the observed process x(t) (6.1.9), i.e., its separate detection.
If the type of signal combination (6.1.9) from a set S = {si (t, λ)} is established
and information extraction lies in estimation of their parameters taken separately,
then the problem of signal processing turns to resolution-estimation.
Thus, two of the five main signal processing problems formulated above are
the most general: filtering (extraction) of the signals and/or their parameters
against an interference (noise) background, and signal classification (or multiple-
alternative detection) in the presence of interference (noise); this is repeatedly
stressed in the works on statistical communication theory and statistical radio en-
gineering [163], [155].
Within every section of this chapter, we establish the potential quality indices
of signal processing that define the efficiency upper bound of solving each main
signal processing problem formulated here.

6.2 Quality Indices of Signal Filtering in Metric Spaces with


L-group Properties
Let useful signal s(t) interact with interference (noise) n(t) in physical signal space
Γ with the properties of L-group Γ (+, ∨, ∧):

x(t) = s(t) ⊕ n(t), t ∈ Ts , (6.2.1)

where ⊕ is some binary operation of L-group Γ (+, ∨, ∧); Ts = [t0 , t0 + T ] is a


domain of definition of a useful signal s(t); T is a duration of useful signal s(t).
Useful signal s(t) is a deterministic one-to-one function M [∗, ∗] of a carrier signal
c(t) and informational parameter λ(t) changing on a domain of definition Ts of a
signal s(t):
s(t) = M [c(t), λ(t)].
Then, by filtering (extraction) of a signal s(t) (or its parameter λ(t)) in the presence
of interference (noise) n(t), we mean the formation of an estimator ŝ(t) of a signal
s(t) (or estimator λ̂(t) of its parameter λ(t)) [163], [155]. Taking into account the
one-to-one property of modulating function M [∗, ∗], the problems of formation of

the estimators ŝ(t) and λ̂(t) are equivalent in the sense that an estimator ŝ(t)
of a signal s(t) makes it easy to obtain an estimator λ̂(t) of a parameter λ(t) and
vice versa (see the relationships (6.1.4)).
Generally, any estimator ŝ(t) of useful signal s(t) (or estimator λ̂(t) of its pa-
rameter λ(t)) is some function fŝ [x(t)] (fλ̂ [x(t)]) of an observed process x(t), which
is a result of interaction between a signal s(t) and interference (noise) n(t):

ŝ(t) = fŝ [x(t)]; (6.2.2a)

λ̂(t) = fλ̂ [x(t)]. (6.2.2b)


Under given probabilistic-statistical properties of the signal and interference (noise),
and a certain type of interaction between them, the quality of the estimator ŝ(t)
(λ̂) of a signal s(t) (or its parameter λ) is defined by the kind of the processing
function fŝ (fλ̂ ).
Meanwhile, one can claim with confidence that the quality of the best (opti-
mal with respect to a chosen criterion) estimators will be defined by the probabilistic-
statistical properties of the useful signal and interference (noise) and also by the type
of their interaction. Thus, to establish the limiting values (as a rule, the upper
bounds) of the quality indices of estimation of signals and their parameters, it is not
necessary to know the concrete kind of processing function fŝ (fλ̂).
In this section, we do not formulate the problem of establishing processing
function (estimating function) fŝ (fλ̂ ) which provides the best estimator of useful
signal s(t) (or its parameter λ); the next chapter is intended to solve this problem.
This section has two goals: (1) explaining the relationships determining the po-
tential possibilities for signal extraction (filtering) under interference (noise) con-
ditions in linear signal space and signal space with lattice properties based on
analyzing simple informational relationships between the processed signals and (2)
specifying the results of evaluating the capacity of a noiseless communication chan-
nel with respect to the channels operating in the presence of interference (noise).
We again consider the interaction of stochastic signals a(t) and b(t) in physical
signal space Γ: a(t), b(t) ∈ Γ, t ∈ Ts with properties of L-group Γ(+, ∨, ∧), bearing
in mind that the result of their interaction v (t) is described by some binary operation
of L-group Γ(+, ∨, ∧):
v (t) = a(t) ⊕ b(t), t ∈ Ts .
For stochastic signals a(t) and b(t) interacting in signal space Γ: a(t), b(t) ∈ Γ, by
Definition 4.4.1, we introduced Iab called the quantity of mutual information and
equal to:
Iab = νab min[Ia , Ib ], (6.2.3)
where νab = ν (at , bt ) is a normalized measure of statistical relationship (NMSI) of
the samples at , bt of stochastic signals a(t), b(t); Ia and Ib represent the quantity
of absolute information contained in the signals a(t) and b(t), respectively.
The relationship (6.2.3) permits establishing the correspondence between quan-
tity of mutual information Isx contained in the result of interaction x(t) of useful
signal s(t) and interference (noise) n(t) (6.2.1), and quantity of absolute information

Is contained in useful signal s(t) in physical signal space Γ with arbitrary algebraic
properties.
With respect to useful signal s(t) and the result of interaction x(t) (6.2.1), the
relationship (6.2.3) takes the form:

Isx = νsx min[Is , Ix ], (6.2.4)

where Isx is the quantity of mutual information contained in the result of interaction
x(t) concerning a useful signal s(t); νsx = ν (st , xt ) is NMSI of the samples st , xt of
stochastic signals s(t), x(t); Is, Ix are the quantities of absolute information contained in
useful signal s(t) and observed process x(t), respectively.
On the basis of Theorem 3.2.14 stated for metric signal space (Γ, µ) with metric
µ (3.2.1), one can formulate a similar theorem for an estimator ŝ(t) of useful
signal s(t), whose interaction with interference (noise) n(t) in physical signal space
is described by some binary operation of L-group Γ(+, ∨, ∧).
Theorem 6.2.1. Metric inequality of signal processing. In metric signal space (Γ, µ)
with metric µ (3.2.1), while forming the estimator ŝ(t) = fŝ [x(t)], t ∈ Ts of use-
ful signal s(t) by processing the result of interaction xt = st ⊕ nt (6.2.1) between
the samples st and nt of stochastic signals s(t), n(t), t ∈ Ts defined by a binary
operation of L-group Γ(+, ∨, ∧), the metric inequality holds:

µsx = µ(st , xt ) ≤ µsŝ = µ(st , ŝt ), (6.2.5)

and inequality (6.2.5) turns into identity if and only if the mapping fŝ belongs to a
group H = {hα } of continuous mappings fŝ ∈ H preserving null (identity) element
0 of a group Γ(+): hα (0) = 0, hα ∈ H.
Thus, Theorem 6.2.1 implies the expression determining a lower bound of metric
µsŝ between useful signal s(t) and its estimator ŝ(t):

µsx = inf_{s,n,ŝ∈Γ} µsŝ ≤ µsŝ. (6.2.6)

The sense of the expression (6.2.6) lies in the fact that no signal processing in
physical signal space Γ with arbitrary algebraic properties permits achieving a value
of metric µsŝ between useful signal s(t) and its estimator ŝ(t) less than that
established by the relationship (6.2.6) (see Fig. 6.2.1).
To characterize the quality of the estimator ŝ(t) (6.2.2a) of useful signal s(t)
obtained as a result of its filtering (extraction) in the presence of interference (noise),
the following definition is introduced.
Definition 6.2.1. By quality index of estimator of a signal s(t), while solving the
problem of its filtering (extraction) under interference (noise) background within
the framework of the model (6.2.1), we mean the NMSI νsŝ = ν (st , ŝt ) between
useful signal s(t) and its estimator ŝ(t).
Taking into account the known relation between metric and NMSI (3.2.7), as
a corollary of Theorem 6.2.1, one can formulate independent theorems for NMSIs
characterizing the same pairs of the signals: s(t), x(t) and s(t), ŝ(t).

FIGURE 6.2.1 Metric relationships between signals elucidating Theorem 6.2.1

Theorem 6.2.2. Signal processing inequality. In metric signal space (Γ, µ) with
metric µ (3.2.1), while forming the estimator ŝ(t) = fŝ [x(t)], t ∈ Ts of useful signal
s(t) by processing of the result of interaction xt = st ⊕nt (6.2.1) between the samples
st , nt of stochastic signals s(t), n(t), t ∈ Ts defined by a binary operation of L-group
Γ(+, ∨, ∧), the inequality holds:

νsŝ = ν (st , ŝt ) ≤ νsx = ν (st , xt ), (6.2.7)

and inequality (6.2.7) turns into identity if and only if the mapping fŝ belongs to a
group H = {hα } of continuous mappings fŝ ∈ H preserving null (identity) element
0 of a group Γ(+): hα (0) = 0, hα ∈ H.

According to Theorem 6.2.2, the relationship (6.2.7) determines the maximal part
of the information quantity contained in the result of interaction x(t) with respect
to useful signal s(t) that can be extracted when a useful signal s(t) is processed in
the presence of interference (noise) n(t) by signal filtering, i.e., while forming the
estimator ŝ(t) of a useful signal s(t) in a physical signal space.
The relationship (6.2.7) determines the potential possibilities of extraction (fil-
tering) of a useful signal s(t) from the mixture x(t), i.e., it determines the quality
of the estimator ŝ(t) of the useful signal s(t), thus, establishing the upper bound of
NMSI νsŝ = ν (st , ŝt ) between a useful signal s(t) and its estimator ŝ(t):

νsŝ ≤ sup_{s,n,ŝ∈Γ} νsŝ = νsx. (6.2.8)

The sense of the relationship (6.2.8) lies in the fact that no signal processing in
physical signal space Γ with arbitrary algebraic properties can provide a quality index
of a useful signal estimator, determined by the NMSI νsŝ between a useful signal s(t)
and its estimator ŝ(t), that is greater than the value of the NMSI νsx between a
useful signal s(t) and the result of its interaction x(t) with interference (noise) n(t).
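The inequality (6.2.8) can be probed numerically. The sketch below (an illustration added here, not from the original text) draws Gaussian s and n, forms x = s + n, and compares the Gaussian NMSI ν = (2/π) arcsin(ρ) of the pair (s, x) with that of (s, f(x)) for several candidate processing functions f; the choice of f and all numeric values are assumptions:

    # Monte Carlo probe of inequality (6.2.8) for Gaussian s, n and x = s + n;
    # the candidate processing functions f are arbitrary assumptions, and the
    # Gaussian NMSI formula nu = (2/pi) arcsin(rho) is used as in (6.2.12).
    import numpy as np

    rng = np.random.default_rng(1)
    s = rng.normal(0.0, 1.0, 200_000)   # useful signal samples
    n = rng.normal(0.0, 1.0, 200_000)   # noise samples, q^2 = 1
    x = s + n                           # interaction (6.2.1), "+" case

    def nu(a, b):                       # NMSI estimate via correlation
        rho = np.corrcoef(a, b)[0, 1]
        return (2.0 / np.pi) * np.arcsin(rho)

    print("nu_sx =", round(nu(s, x), 3))
    for f in (np.tanh, lambda z: z ** 3, np.sign):
        print("nu_s,shat =", round(nu(s, f(x)), 3))  # never exceeds nu_sx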

Theorem 6.2.3. Informational inequality of signal processing. In metric signal space


(Γ, µ) with metric µ (3.2.1), while forming the estimator ŝ(t) = fŝ [x(t)], t ∈ Ts of
useful signal s(t) by processing the result of interaction xt = st ⊕ nt (6.2.1) between

the samples st , nt of stationary stochastic signals s(t), n(t), t ∈ Ts determined by a


binary operation of L-group Γ(+, ∨, ∧), between the quantities of mutual informa-
tion Isx , Isŝ contained in the pairs of signals s(t), x(t) and s(t), ŝ(t), respectively,
the inequality holds:
Isŝ ≤ Isx , (6.2.9)
and inequality (6.2.9) turns into identity if and only if the mapping fŝ belongs to a
group H = {hα } of continuous mappings fŝ ∈ H preserving null (identity) element
0 of the group Γ(+): hα (0) = 0, hα ∈ H.

Proof. Multiplying the left and right parts of inequality (6.2.7) by the left and right
parts of obvious inequality min(Is , Iŝ ) ≤ min(Is , Ix ), respectively, we obtain:

νsŝ min(Is , Iŝ ) ≤ νsx min(Is , Ix ) ⇒ Isŝ ≤ Isx .

According to Theorem 6.2.3, the relationship (6.2.9) determines the maximum
quantity of information Isx contained in the result of interaction x(t) concerning
useful signal s(t) that can be extracted when a useful signal s(t) is processed in the
presence of interference (noise) n(t) by signal filtering, i.e., by forming the estimator
ŝ(t) of a useful signal s(t) in a physical signal space. Besides, the relationship (6.2.9)
determines the potential possibilities of extraction (filtering) of useful signal s(t) from
a mixture x(t) (6.2.1), i.e., determines the informational quality of the estimator ŝ(t)
of a useful signal s(t), thus establishing the upper bound of the quantity of mutual
information Isŝ between a useful signal s(t) and its estimator ŝ(t):

Isŝ ≤ sup_{s,n,ŝ∈Γ} Isŝ = Isx. (6.2.10)

The sense of the relationship (6.2.10) lies in the fact that no signal processing in
physical signal space Γ with arbitrary algebraic properties can provide the quantity
of mutual information Isŝ between a useful signal s(t) and its estimator ŝ(t) that
is greater than the value of quantity of mutual information Isx between the useful
signal s(t) and the result of its interaction x(t) with interference (noise) n(t).
In the case of additive interaction between a statistically independent Gaussian
useful signal s(t) and interference (noise) n(t) in the form of white Gaussian noise
(WGN) in linear signal space Γ(+):

x(t) = s(t) + n(t), (6.2.11)

according to Theorem 3.2.3, the following relationships hold:


νsx = (2/π) arcsin[ρsx]; (6.2.12)

Isx = νsx min[Is, Ix] = (2/π) arcsin[ρsx] · Is; (6.2.13)

ρsx = q/√(1 + q²), (6.2.14)

where νsx = (2/π) arcsin[ρsx] is the NMSI between the samples st and xt of Gaussian
signals s(t) and x(t); ρsx is the correlation coefficient between the samples st and
xt of Gaussian signals s(t) and x(t); q² = S0/N0 = Ds/Dn is the ratio of the energy
of Gaussian useful signal to the power spectral density of interference (noise) n(t);
Ds = R(0) = (1/2π) ∫_{−∞}^{∞} S(ω)dω is the variance of useful signal s(t);
S0 = S(0) = ∫_{−∞}^{∞} R(τ)dτ; Dn = N0 (∫_{−∞}^{∞} S(ω)dω)/(2πS0) is the variance
of interference (noise) n(t) brought to the energy of useful signal s(t); S(ω) is the
power spectral density of useful signal s(t); N0 is the power spectral density of
interference (noise) n(t) in the form of WGN.
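A small numeric sketch (added for illustration; function and variable names are arbitrary) evaluating (6.2.12) and (6.2.14) for several signal-to-noise ratios q²:

    # Numeric sketch of (6.2.12) and (6.2.14): NMSI nu_sx as a function of
    # the signal-to-noise ratio q^2 = S0/N0; the grid of values is arbitrary.
    import numpy as np

    def nu_sx(q2):
        rho_sx = np.sqrt(q2) / np.sqrt(1.0 + q2)      # (6.2.14)
        return (2.0 / np.pi) * np.arcsin(rho_sx)      # (6.2.12)

    for q2 in (0.1, 1.0, 10.0, 100.0):
        # By (6.2.13), I_sx = nu_sx * I_s, i.e., nu_sx is the extractable
        # fraction of the absolute information of the useful signal.
        print(f"q^2 = {q2:6.1f}: nu_sx = {nu_sx(q2):.3f}")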
In the case of interactions between a useful signal s(t) and interference (noise)
n(t) in signal space Γ(∨, ∧) with lattice properties:
x(t) = s(t) ∨ n(t); (6.2.15a)
x̃(t) = s(t) ∧ n(t), (6.2.15b)
according to the identities (3.2.38b) and (6.2.4), the following relationships deter-
mining the values of the quantities of mutual information and also their sum hold:

Isx = 0.5Is ; (6.2.16a)


Isx̃ = 0.5Is ; (6.2.16b)
Isx + Isx̃ = Is . (6.2.16c)
The relationships (6.2.13), (6.2.16a), and (6.2.16b) determine information quantity
contained in the results of interactions (6.2.11), (6.2.15a), and (6.2.15b) concern-
ing the useful signal s(t) that can be extracted by useful signal filtering, i.e., by
forming the estimator ŝ(t) in linear signal space Γ(+) and in the signal space
Γ(∨, ∧) with lattice properties, respectively. The relationships (6.2.13) and (6.2.16)
determine the potential possibilities of signal filtering from the received mixture
in linear signal space and in signal space with lattice properties, respectively. The
quantity of mutual information (6.2.13), (6.2.16a), and (6.2.16b), according to the
relationship (6.2.9) of the informational inequality of signal processing, represents
the information quantity contained in the optimal (according to the criterion of
maximum quantity of extracted useful information) estimator ŝ(t) of useful signal
s(t) obtained by processing the observed stochastic process x(t). The relationships
(6.2.16a) and (6.2.16b) imply that in signal space Γ(∨, ∧) with lattice properties
there exists a possibility of useful signal extraction (filtering) from a mixture with
permanent quality that does not depend on energetic relations of useful and inter-
ference signals. Optimal estimators ŝa (t) = fa [x(t)], ŝb (t) = fb [x̃(t)] of useful signal
s(t) in the form of (6.2.2a) obtained as a result of proper processing (filtering) of
the received mixture x(t) (6.2.15a) and x̃(t) (6.2.15b) in signal space Γ(∨, ∧) with
lattice properties contain exactly half of the quantity of absolute information Is
contained in useful signal s(t).
Due to its importance, the relationship (6.2.16c) demands a separate elucidation.
It claims that under the proper use of the results of simultaneous processing of ob-
servations in the form of join (6.2.15a) and meet (6.2.15b) of the lattice Γ(∨, ∧), one

can avoid the losses of information contained in useful signals. It is elucidated by


the statistical independence of the processes x(t) (6.2.15a) and x̃(t) (6.2.15b) owing
to fulfillment of the relationship (3.2.39b); each of them, according to the identities
(6.2.16a) and (6.2.16b), contains half the information quantity contained in useful
signal s(t). On a qualitative level, this information has distinct content because of
statistical independence of the processes x(t), x̃(t), when taken together (but not
in the sum) they contain all information of a useful signal, not only on a quantita-
tive level, but on a qualitative one. While processing the signal y(t) = x(t) ∪ x̃(t)
(i.e., under simultaneous processing of the signals x(t) and x̃(t) taken separately),
according to Theorem 6.2.3 and the relationships (6.2.16), the inequality holds:

Isŝ ≤ Isy = Isx + Isx̃ = Is ,

and implies that the upper bound of quality index νsŝ of an estimator ŝ(t) of a
signal s(t), while solving the problem of its filtering (extraction) in signal space
Γ(∨, ∧) with lattice properties is equal to 1: sup νsŝ = 1.
The last relation means that the possibilities of signal filtering (extraction) under
interference (noise) background in signal space Γ(∨, ∧) with lattice properties are
not bounded by the conditions of parametric and nonparametric prior uncertainties.
There exists (at least theoretically) a possibility of signal processing without losses
of information contained in the processed signals x(t) and x̃(t).
In linear signal space Γ(+), as follows from (6.2.13), the potential possibilities
of extraction of useful signal s(t) while forming the estimator ŝ(t), in contrast to the
signal space Γ(∨, ∧) with lattice properties, are essentially bounded by the energetic
relations between the interacting useful and interference signals, as illustrated by the
following example.
Example 6.2.1. Consider a useful Gaussian signal s(t) with power spectral density
S(ω):

S(ω) = S0/(1 + (ωT)²) (6.2.17)

that additively interacts with interference (noise) n(t) in the form of white Gaussian
noise with power spectral density (PSD) N0 in linear signal space Γ(+).
The minimal variance of filtering error (fluctuation error of filtering) Dε is de-
termined by the value [159, (2.122)]:

Dε = M{[s(t) − ŝ(t)]²} = S0/(T(√(1 + q²) + 1)), (6.2.18)

where M(∗) is a symbol of mathematical expectation, and the minimal relative


filtering error (relative fluctuation error of filtering) δε is, respectively, equal to:
δε = Dε/(2Ds) = 2/(√(1 + q²) + 1), (6.2.19)

where Ds is the signal variance equal to Ds = S0/(4T); T is a time constant;
q² = S0/N0 is the ratio of signal energy to noise PSD.

The relationships (6.2.18) and (6.2.19) imply that correlation coefficient ρsŝ
between useful signal s(t) and its estimator ŝ(t) is determined by the quantity:
ρsŝ = 1 − δε = (√(1 + q²) − 1)/(√(1 + q²) + 1). (6.2.20)

The relationship (6.2.7) implies that, between NMSI νsx of the samples st , xt of
Gaussian signals s(t), x(t) and NMSI νsŝ of the samples st , ŝt of Gaussian signals
s(t), ŝ(t), the following inequality holds:
νsŝ = (2/π) arcsin[ρsŝ] ≤ νsx = (2/π) arcsin[ρsx]. (6.2.21)
Substituting the values of NMSIs for Gaussian signals determined by the equality
(3.2.12) and also the values of correlation coefficients from the relations (6.2.20)
and (6.2.14) into the inequality (6.2.21), we obtain the inequality:
νsŝ(q) = (2/π) arcsin[(√(1 + q²) − 1)/(√(1 + q²) + 1)] ≤ νsx(q) = (2/π) arcsin[q/√(1 + q²)]. (6.2.22)

The relationships (3.2.51a), (6.2.22), and (3.2.7) imply that metric µsx between the
samples st , xt of Gaussian signals s(t), x(t) and metric µsŝ between the samples st ,
ŝt of Gaussian signals s(t), ŝ(t) are connected by the inequality:
µsŝ(q) = 1 − (2/π) arcsin[(√(1 + q²) − 1)/(√(1 + q²) + 1)] ≥

≥ µsx(q) = 1 − (2/π) arcsin[q/√(1 + q²)]. (6.2.23)

The graphs of dependences of NMSIs νsŝ (q ), νsx (q ) and metrics µsŝ (q ), µsx (q )
on a signal-to-noise ratio q 2 = S0 /N0 are shown in Figs. 6.2.2 and 6.2.3, respectively.

FIGURE 6.2.2 Dependences of NMSIs νsŝ(q), νsx(q) on signal-to-noise ratio q² = S0/N0

FIGURE 6.2.3 Dependences of metrics µsŝ(q), µsx(q) on signal-to-noise ratio q² = S0/N0
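The following sketch (an added illustration, not the author's code) computes the four curves in (6.2.22) and (6.2.23), i.e., the values behind Figs. 6.2.2 and 6.2.3:

    # Evaluation of the curves (6.2.22) and (6.2.23), i.e., the values
    # behind Figs. 6.2.2 and 6.2.3; the grid of q^2 values is arbitrary.
    import numpy as np

    def curves(q2):
        r = np.sqrt(1.0 + q2)
        rho_ss = (r - 1.0) / (r + 1.0)                 # (6.2.20)
        rho_sx = np.sqrt(q2) / r                       # (6.2.14)
        nu_ss = (2.0 / np.pi) * np.arcsin(rho_ss)
        nu_sx = (2.0 / np.pi) * np.arcsin(rho_sx)
        return nu_ss, nu_sx, 1.0 - nu_ss, 1.0 - nu_sx  # metrics mu = 1 - nu

    for q2 in (0.1, 1.0, 10.0, 100.0):
        nu_ss, nu_sx, mu_ss, mu_sx = curves(q2)
        print(f"q^2 = {q2:6.1f}: nu_ss = {nu_ss:.3f} <= nu_sx = {nu_sx:.3f}; "
              f"mu_ss = {mu_ss:.3f} >= mu_sx = {mu_sx:.3f}")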

The relationship (6.2.22) (see Fig. 6.2.2) implies that NMSI dependence νsx (q )
determines the upper bound of NMSI νsŝ (q ) between useful signal s(t) and its esti-
mator ŝ(t), which cannot be exceeded by any methods of Gaussian signal processing
in linear signal space Γ(+) and does not depend on the kinds of useful signal mod-
ulation. Similarly, the relationship (6.2.23) determines the lower bound µsx(q) of
the metric µsŝ(q) between useful signal s(t) and its estimator ŝ(t), which cannot be
reduced by any methods of Gaussian signal processing in linear space.
Generally, the properties of the estimators of the signals extracted against an inter-
ference (noise) background are not directly included in the subject of information
theory. At the same time, consideration of the informational properties of signal esti-
mators in informational relationships between processed signals is of interest, first,
to establish constraint relationships for quality indices of signal extraction (filter-
ing) in the presence of interference (noise), and second, to determine the capacity
of communication channels with interference (noise) that operate in linear signal
spaces and in signal spaces with lattice properties. The questions of evaluation of
noiseless channel capacity are considered within Section 5.2.

6.3 Quality Indices of Unknown Nonrandom Parameter


Estimation
This section discusses comparative characteristics of unknown nonrandom parame-
ter estimators in sample space with group properties and sample space with lattice
properties. Algebraic structures unifying the properties of a group and a lattice
are called L-groups. In existing algebraic literature, L-groups have been known for
a long time and are well investigated [221], [233]. Sample space L(X , BX ; +, ∨, ∧)
with L-group properties is defined as a probabilistic space (X , BX ) in which the
axioms of distributive lattice L(X ; ∨, ∧) with operations of join and meet hold:
a ∨ b = supL {a, b}, a ∧ b = inf L {a, b}; a, b ∈ L(X ; ∨, ∧), and also the axioms of a
group L(X ; +) with operation of addition hold.
In a wide range of literature on mathematical statistics, much attention is devoted
to estimator properties in sample space, where the interaction of some deterministic
function f(λ) of an unknown nonrandom parameter λ with estimation errors (mea-
surement errors) {Ni} is realized on the basis of the addition operation of an additive
commutative group: X+,i = f(λ) + Ni [250], [231], [251]. The goals of this section
are, first, introducing quality indices of estimators based on metric relationships
between random variables (statistics), and second, consideration of the main qual-
itative and quantitative characteristics of some estimators in sample space with
L-group properties.
We now perform comparative analysis of characteristics of unknown nonrandom
parameter estimators for the models of a direct estimation (measurement) in sample
space L(X , BX ; +, ∨, ∧) with group and lattice properties of L-group L(X ; +, ∨, ∧),

respectively:
X+,i = λ + Ni ; (6.3.1a)
X∨,i = λ ∨ Ni ; (6.3.1b)
X∧,i = λ ∧ Ni , (6.3.1c)
where {Ni } are independent estimation (measurement) errors that are represented
by the sample N = (N1 , . . . , Nn ), Ni ∈ N , N ∈ L(X , BX ; +, ∨, ∧) with distribu-
tion from a distribution class with symmetric probability density function (PDF)
pN (z ) = pN (−z ) ; {X+,i }, {X∨,i }, {X∧,i } are the results of estimation (measure-
ment) represented by the sample X+ = (X+,1 , . . . , X+,n ), X∨ = (X∨,1 , . . . , X∨,n ),
X∧ = (X∧,1 , . . . , X∧,n ); X+,i ∈ X+ , X∨,i ∈ X∨ , X∧,i ∈ X∧ , respectively:
X+ , X∨ , X∧ ∈ L(X , BX ; +, ∨, ∧); +, ∨, ∧ are operations of addition, join, and
meet of sample space L(X , BX ; +, ∨, ∧) with properties of L-group L(X ; +, ∨, ∧),
respectively; i = 1, . . . , n is an index of the elements of statistical collections
{Ni }, {X+,i }, {X∨,i }, {X∧,i }; n represents size of the samples N = (N1 , . . . , Nn ),
X+ = (X+,1 , . . . , X+,n ), X∨ = (X∨,1 , . . . , X∨,n ), X∧ = (X∧,1 , . . . , X∧,n ).
For the model (6.3.1a), the estimator λ̂n,+ , which is a sample mean, is a uni-
formly minimum variance unbiased estimator [250], [251]:
λ̂n,+ = (1/n) Σ_{i=1}^{n} X+,i. (6.3.2)

As the estimator λ̂n,∧ of a parameter λ for the model (6.3.1b), we consider the meet
of the lattice L(X; +, ∨, ∧):

λ̂n,∧ = ∧_{i=1}^{n} X∨,i, (6.3.3)

where ∧_{i=1}^{n} X∨,i = inf_{X∨} {X∨,i} is the least element of the sample X∨ =
(X∨,1, . . . , X∨,n).
As the estimator λ̂n,∨ of a parameter λ for the model (6.3.1c), we take the join
of the lattice L(X; +, ∨, ∧):

λ̂n,∨ = ∨_{i=1}^{n} X∧,i, (6.3.4)

where ∨_{i=1}^{n} X∧,i = sup_{X∧} {X∧,i} is the largest element of the sample X∧ =
(X∧,1, . . . , X∧,n).
Of course, there are no doubts concerning optimality of the estimator λ̂n,+, at
least under a Gaussian distribution of estimation (measurement) errors. At the same
time, it should be noted that the questions of optimality of the estimators λ̂n,∧ (6.3.3)
and λ̂n,∨ (6.3.4), on the basis of optimality criteria, are considered in Chapter 7.
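A Monte Carlo sketch of the three direct-estimation models and their estimators is given below (added here; the Gaussian error model, parameter value λ = 0.5, and sample sizes are assumptions of the sketch):

    # Monte Carlo sketch of the models (6.3.1a)-(6.3.1c) and the estimators
    # (6.3.2)-(6.3.4); Gaussian errors, lambda = 0.5, n = 20 are assumptions.
    import numpy as np

    rng = np.random.default_rng(2)
    lam, n_size, trials = 0.5, 20, 10_000
    est_add, est_meet, est_join = [], [], []
    for _ in range(trials):
        N = rng.normal(0.0, 1.0, n_size)              # errors N_i, symmetric PDF
        est_add.append(np.mean(lam + N))              # (6.3.2), sample mean
        est_meet.append(np.min(np.maximum(lam, N)))   # (6.3.3), meet of X_v,i
        est_join.append(np.max(np.minimum(lam, N)))   # (6.3.4), join of X_^,i

    for name, e in (("mean", est_add), ("meet", est_meet), ("join", est_join)):
        e = np.asarray(e)
        rms = np.sqrt(np.mean((e - lam) ** 2))
        print(f"{name}: average = {e.mean():+.4f}, RMS error = {rms:.4f}")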
For normally distributed errors of estimation (measurement) {Ni }, cumulative
distribution function (CDF) Fλ̂n,+ (z ) of the estimator λ̂n,+ is determined by the
formula:
Fλ̂n,+(z) = ∫_{−∞}^{z} pλ̂n,+(x)dx, (6.3.5)
 
where pλ̂n,+(x) = (2πDλ̂n,+)^{−1/2} exp[−(x − λ)²/(2Dλ̂n,+)] is the PDF of the esti-
mator λ̂n,+; Dλ̂n,+ = D/n is the variance of the estimator λ̂n,+; D is the variance of
estimation (measurement) errors {Ni}; λ is an estimated parameter.
Consider the expressions for CDFs Fλ̂n,∧ (z ), Fλ̂n,∨ (z ) of the estimators λ̂n,∧
(6.3.3) and λ̂n,∨ (6.3.4) and also for CDFs FX∨,i (z ), FX∧,i (z ) of estimation (mea-
surement) results X∨,i (6.3.1b) and X∧,i (6.3.1c), respectively, supposing that CDF
FN (z ) of estimation (measurement) errors {Ni } is an arbitrary one. The relation-
ships [115, (3.2.87)] and [115, (3.2.82)] imply that CDFs FX∨,i (z ), FX∧,i (z ) are,
respectively, equal to:
FX∨,i (z ) = Fλ (z )FN (z ); (6.3.6a)
FX∧,i (z ) = Fλ (z ) + FN (z ) − Fλ (z )FN (z ), (6.3.6b)
where Fλ (z ) = 1(z − λ) is CDF of unknown nonrandom parameter λ; 1(z ) is Heav-
iside step function; FN (z ) is the CDF of estimation (measurement) errors {Ni }.
The relationships [252, (2.1.2)] and [252, (2.1.1)] imply that CDFs Fλ̂n,∧ (z ),
Fλ̂n,∨ (z ) of the estimators λ̂n,∧ (6.3.3) and λ̂n,∨ (6.3.4) are, respectively, equal to:

Fλ̂n,∧(z) = [1 − (1 − FN(z))^n]Fλ(z); (6.3.7a)

Fλ̂n,∨(z) = F_{X∧,i}^n(z). (6.3.7b)
Generalizing the relationships (6.3.6), (6.3.7), the expressions for CDFs FX∨,i (z ),
FX∧,i (z ) of estimation (measurement) results X∨,i (6.3.1b), X∧,i (6.3.1c) and CDFs
Fλ̂n,∧ (z ), Fλ̂n,∨ (z ) of the estimators λ̂n,∧ (6.3.3) and λ̂n,∨ (6.3.4) are, respectively:

FX∨,i(z) = inf_z [Fλ(z), FN(z)]; (6.3.8a)

FX∧,i(z) = sup_z [Fλ(z), FN(z)]; (6.3.8b)

Fλ̂n,∧(z) = inf_z [Fλ(z), (1 − (1 − FN(z))^n)]; (6.3.8c)

Fλ̂n,∨(z) = sup_z [Fλ(z), F_N^n(z)], (6.3.8d)

where Fλ(z) = 1(z − λ) is the CDF of unknown nonrandom parameter λ; 1(z)
is the Heaviside step function; FN(z) is the CDF of estimation (measurement) errors
{Ni}; n represents the size of the samples N = (N1, . . . , Nn), X∨ = (X∨,1, . . . , X∨,n),
X∧ = (X∧,1, . . . , X∧,n).
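The CDFs (6.3.8a) through (6.3.8d) are easy to tabulate; the sketch below (an added illustration assuming standard normal errors and one convention for the value of the Heaviside function at its discontinuity point) evaluates them on a grid, reproducing the shape of the curves in Fig. 6.3.1:

    # Tabulation of the CDFs (6.3.8a)-(6.3.8d) for standard normal errors
    # (the curves of Fig. 6.3.1); the convention 1(0) = 0 at z = lambda is
    # an assumption of the sketch.
    import numpy as np
    from scipy.stats import norm

    def cdfs(z, lam, n):
        F_lam = (z > lam).astype(float)   # F_lambda(z) = 1(z - lambda)
        F_N = norm.cdf(z)                 # CDF of errors N_i, D = 1
        return (np.minimum(F_lam, F_N),                         # (6.3.8a)
                np.maximum(F_lam, F_N),                         # (6.3.8b)
                np.minimum(F_lam, 1 - (1 - F_N) ** n),          # (6.3.8c)
                np.maximum(F_lam, F_N ** n))                    # (6.3.8d)

    z = np.linspace(-3.0, 3.0, 7)
    for row in zip(z, *cdfs(z, lam=0.0, n=5)):
        print("z = %+.1f:  %.3f  %.3f  %.3f  %.3f" % row)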
For standard normal distribution of estimation (measurement) errors {Ni } with
the variance D = 1, the graphs of the CDFs (6.3.8a,b) are shown in Fig. 6.3.1(a),
and the graphs of the CDFs of the estimators (6.3.8c,d) are shown in Fig. 6.3.1(b).
In the graphs, CDFs FX∧,i (z ), Fλ̂n,∨ (z ) are shown by dotted lines, and CDFs
FX∨,i (z ), Fλ̂n,∧ (z ) are shown by solid lines. As follows from the relationships
(6.3.8a,b) and (6.3.8c,d), CDFs FX∨,i (z ), FX∧,i (z ), Fλ̂n,∧ (z ), Fλ̂n,∨ (z ) are contin-
uous on a union D<λ ∪ D>λ of open intervals D<λ =] − ∞, λ[, D>λ =]λ, ∞[, i.e.,

FIGURE 6.3.1 CDFs: (a) CDFs FX∨,i(z) (6.3.8a) and FX∧,i(z) (6.3.8b) of estimation (mea-
surement) results X∨,i (6.3.1b), X∧,i (6.3.1c); (b) CDFs Fλ̂n,∧(z) (6.3.8c) and Fλ̂n,∨(z)
(6.3.8d) of estimators λ̂n,∧ (6.3.3) and λ̂n,∨ (6.3.4)

there is only one-sided continuity, and z = λ is a discontinuity point of the first kind;
the left-hand limit of each CDF at this point is not equal to the right-hand limit:

lim_{z→λ−0} FX∨,i(z) = FX∨,i(λ − 0), lim_{z→λ+0} FX∨,i(z) = FX∨,i(λ + 0); (6.3.9a)

lim_{z→λ−0} FX∧,i(z) = FX∧,i(λ − 0), lim_{z→λ+0} FX∧,i(z) = FX∧,i(λ + 0); (6.3.9b)

lim_{z→λ−0} Fλ̂n,∧(z) = Fλ̂n,∧(λ − 0), lim_{z→λ+0} Fλ̂n,∧(z) = Fλ̂n,∧(λ + 0); (6.3.9c)

lim_{z→λ−0} Fλ̂n,∨(z) = Fλ̂n,∨(λ − 0), lim_{z→λ+0} Fλ̂n,∨(z) = Fλ̂n,∨(λ + 0). (6.3.9d)

To determine a quality of the estimators (6.3.2), (6.3.3), and (6.3.4) in sample space
with L-group properties L(X , BX ; +, ∨, ∧), we introduce the functions µ(λ̂n , Xi )
(3.2.1), µ0 (λ̂n , Xi ) (3.2.1a), which characterize distinctions between one of the esti-
mators λ̂n (6.3.2), (6.3.3), and (6.3.4) of a parameter λ and the estimation (measure-
ment) result Xi = λ⊕Ni , where ⊕ is one of the operations of L-group L(X ; +, ∨, ∧)
(6.3.1a), (6.3.1b), and (6.3.1c), respectively:

µ(λ̂n , Xi ) = 2(P[λ̂n ∨ Xi > λ] − P[λ̂n ∧ Xi > λ]); (6.3.10a)

µ0 (λ̂n , Xi ) = 2(P[λ̂n ∧ Xi < λ] − P[λ̂n ∨ Xi < λ]). (6.3.10b)


Theorem 3.2.1 establishes that, for random variables with special properties, the
functions defined by the relationships (3.2.1), (3.2.1a) are metrics. Meanwhile, es-
tablishing this fact for the estimators (6.3.2), (6.3.3), and (6.3.4) requires a separate
proof contained in two following theorems.

Theorem 6.3.1. The functions µ(λ̂n,+ , X+,i ), µ(λ̂n,∧ , X∨,i ) determined by the ex-
pression (6.3.10a) between the estimators λ̂n,+ (6.3.2), λ̂n,∧ (6.3.3) and the estima-
tion (measurement) results X+,i (6.3.1a), X∨,i (6.3.1b), respectively, are metrics.

Proof. We use general designations to denote one of the estimators (6.3.2), (6.3.3)
of a parameter λ as λ̂n and denote the estimation (measurement) result as Xi ,
Xi = λ ⊕ Ni , where ⊕ is one of operations of L-group L(X ; +, ∨, ∧) (6.3.1a) and
(6.3.1b), respectively. Consider the probabilities P[λ̂n ∨ Xi > λ], P[λ̂n ∧ Xi > λ]
that, according to the formulas [115, (3.2.80)] and [115, (3.2.85)], are equal to:

P[λ̂n ∧ Xi > λ] = 1 − [Fλ̂ (λ + 0) + FXi (λ + 0) − Fλ̂,Xi (λ + 0, λ + 0)]; (6.3.11a)

P[λ̂n ∨ Xi > λ] = 1 − Fλ̂,Xi (λ + 0, λ + 0). (6.3.11b)


where Fλ̂,Xi(z1, z2), Fλ̂(z1), FXi(z2) are joint and univariate CDFs of the estimator
λ̂n and the measurement result Xi, respectively, and Fλ̂,Xi(λ + 0, λ + 0) ≠ 1, Fλ̂(λ +
0) ≠ 1, FXi(λ + 0) ≠ 1.
Then the functions P[λ̂n > λ] = 1 − Fλ̂(λ + 0), P[Xi > λ] = 1 − FXi(λ + 0) are
valuations upon the lattice L(X; ∨, ∧), inasmuch as the relationships (6.3.11a,b) imply
the identity [221, Section X.1 (V1)]:

P[λ̂n > λ] + P[Xi > λ] = P[λ̂n ∨ Xi > λ] + P[λ̂n ∧ Xi > λ]. (6.3.12)

The valuations P[λ̂n > λ], P[Xi > λ] are isotonic, inasmuch as the implications hold [221,
Section X.1 (V2)]:

λ̂n ≥ λ̂′n ⇒ P[λ̂n > λ] ≥ P[λ̂′n > λ]; (6.3.13a)

Xi ≥ X′i ⇒ P[Xi > λ] ≥ P[X′i > λ]. (6.3.13b)
Joint fulfillment of the relationships (6.3.12) and (6.3.13a,b), according to Theo-
rem 6.3.1 [221, Section X.1], implies that the quantity (6.3.10a), equal to:

µ(λ̂n, Xi) = 2(P[λ̂n ∨ Xi > λ] − P[λ̂n ∧ Xi > λ]) =

= 2[Fλ̂(λ + 0) + FXi(λ + 0)] − 4Fλ̂,Xi(λ + 0, λ + 0), (6.3.14)

is a metric.

For the quantity µ0 (λ̂n , Xi ) determined by the relationship (6.3.10b), one can
formulate a theorem that is analogous to Theorem 6.3.1; and to prove it, the fol-
lowing lemma may be useful.

Lemma 6.3.1. The functions µ(λ̂n,∧ , X∨,i ), µ(λ̂n,∨ , X∧,i ); µ0 (λ̂n,∧ , X∨,i ),
µ0 (λ̂n,∨ , X∧,i ), determined by the expressions (6.3.10a,b) between the estimators
λ̂n,∧ (6.3.3), λ̂n,∨ (6.3.4) and measurement results X∨,i (6.3.1b), X∧,i (6.3.1c),
respectively, are equal to:

µ(λ̂n,∧ , X∨,i ) = 2[Fλ̂n,∧ (λ + 0) − FX∨,i (λ + 0)]; (6.3.15a)

µ(λ̂n,∨ , X∧,i ) = 2[FX∧,i (λ + 0) − Fλ̂n,∨ (λ + 0)]; (6.3.15b)

µ0 (λ̂n,∧ , X∨,i ) = 2[Fλ̂n,∧ (λ − 0) − FX∨,i (λ − 0)]; (6.3.15c)

µ0 (λ̂n,∨ , X∧,i ) = 2[FX∧,i (λ − 0) − Fλ̂n,∨ (λ − 0)]. (6.3.15d)



Proof. Determine the values of functions µ(λ̂n,∧ , X∨,i ), µ0 (λ̂n,∧ , X∨,i ) between the
estimator λ̂n,∧ (6.3.3) and the estimation (measurement) results X∨,i (6.3.1b). Join
λ̂n,∧ ∨ X∨,i , which appears in the initial formulas (6.3.10a,b), according to the
definition of the estimator λ̂n,∧ (6.3.3), and also according to the lattice absorption
property, is equal to the estimation (measurement) result X∨,i :

λ̂n,∧ ∨ X∨,i = [(∧_{j≠i} X∨,j) ∧ X∨,i] ∨ X∨,i = X∨,i. (6.3.16)

Meet λ̂n,∧ ∧ X∨,i , which appears in the initial formulas (6.3.10a,b) according to the
idempotency property of lattice, is equal to the estimator λ̂n,∧ :

λ̂n,∧ ∧ X∨,i = [(∧_{j≠i} X∨,j) ∧ X∨,i] ∧ X∨,i = λ̂n,∧. (6.3.17)

Substituting the values of join and meet (6.3.16) and (6.3.17) into the initial for-
mulas (6.3.10a,b), we obtain the values of the functions µ(λ̂n,∧ , X∨,i ), µ0 (λ̂n,∧ , X∨,i )
between the estimator λ̂n,∧ and the estimation (measurement) result X∨,i :

µ(λ̂n,∧ , X∨,i ) = 2(P[X∨,i > λ] − P[λ̂n,∧ > λ]) =


= 2[Fλ̂n,∧ (λ + 0) − FX∨,i (λ + 0)]; (6.3.18a)

µ0 (λ̂n,∧ , X∨,i ) = 2(P[λ̂n,∧ < λ] − P[X∨,i < λ]) =


= 2[Fλ̂n,∧ (λ − 0) − FX∨,i (λ − 0)], (6.3.18b)
where
P[X∨,i > λ] = 1 − FX∨,i (λ + 0); (6.3.19a)
P[λ̂n,∧ > λ] = 1 − Fλ̂n,∧ (λ + 0); (6.3.19b)
P[X∨,i < λ] = FX∨,i (λ − 0); (6.3.19c)
P[λ̂n,∧ < λ] = Fλ̂n,∧ (λ − 0), (6.3.19d)

Fλ̂n,∧(z) = inf_z [Fλ(z), (1 − (1 − FN(z))^n)] is the CDF of the estimator λ̂n,∧ de-
termined by the expression (6.3.8c); FX∨,i(z) = inf_z [Fλ(z), FN(z)] is the CDF of
random variable X∨,i (6.3.1b) determined by the expression (6.3.8a).
Similarly, we obtain the values of the functions µ(λ̂n,∨ , X∧,i ), µ0 (λ̂n,∨ , X∧,i )
determined by the expressions (6.3.10a) and (6.3.10b) between the estimator λ̂n,∨
(6.3.4) and the estimation (measurement) result X∧,i (6.3.1c). In this case, join
λ̂n,∨ ∨ X∧,i , which appears in the initial formulas (6.3.10a) and (6.3.10b), according
to the definition of the estimator λ̂n,∨ (6.3.4), and also according to the lattice
idempotency property, is equal to the estimator λ̂n,∨ :

λ̂n,∨ ∨ X∧,i = [(∨_{j≠i} X∧,j) ∨ X∧,i] ∨ X∧,i = λ̂n,∨. (6.3.20)

Meet λ̂n,∨ ∧ X∧,i , which appears in the initial formulas (6.3.10a) and (6.3.10b), ac-
cording to the lattice absorption property, is equal to the estimation (measurement)
result X∧,i :
λ̂n,∨ ∧ X∧,i = [(∨_{j≠i} X∧,j) ∨ X∧,i] ∧ X∧,i = X∧,i. (6.3.21)

Substituting the values of join and meet (6.3.20) and (6.3.21) into the initial formu-
las (6.3.10a,b), we obtain the values of the functions µ(λ̂n,∨ , X∧,i ), µ0 (λ̂n,∨ , X∧,i )
between the estimator λ̂n,∨ and the estimation (measurement) result X∧,i :

µ(λ̂n,∨ , X∧,i ) = 2(P[λ̂n,∨ > λ] − P[X∧,i > λ]) =


= 2[FX∧,i (λ + 0) − Fλ̂n,∨ (λ + 0)]; (6.3.22a)

µ0 (λ̂n,∨ , X∧,i ) = 2(P[X∧,i < λ] − P[λ̂n,∨ < λ]) =


= 2[FX∧,i (λ − 0) − Fλ̂n,∨ (λ − 0)], (6.3.22b)
where
P[X∧,i > λ] = 1 − FX∧,i (λ + 0); (6.3.23a)
P[λ̂n,∨ > λ] = 1 − Fλ̂n,∨ (λ + 0); (6.3.23b)
P[X∧,i < λ] = FX∧,i (λ − 0); (6.3.23c)
P[λ̂n,∨ < λ] = Fλ̂n,∨ (λ − 0), (6.3.23d)

Fλ̂n,∨(z) = sup_z [Fλ(z), F_N^n(z)] is the CDF of the estimator λ̂n,∨ determined by the
expression (6.3.8d); FX∧,i(z) = sup_z [Fλ(z), FN(z)] is the CDF of random variable
X∧,i (6.3.1c) determined by the expression (6.3.8b).

Corollary 6.3.1. The quantities µ(λ̂n,∨ , X∧,i ), µ0 (λ̂n,∧ , X∨,i ) determined by the
expressions (6.3.22a), (6.3.18b) are, respectively, equal to zero ∀λ ∈] − ∞, ∞[:

µ(λ̂n,∨ , X∧,i ) = 0; (6.3.24a)

µ0 (λ̂n,∧ , X∨,i ) = 0. (6.3.24b)


The proof of the corollary is provided by the direct substitution of the values of
CDFs equal to:
Fλ̂n,∨ (λ + 0) = FX∧,i (λ + 0) = 1; (6.3.25a)
Fλ̂n,∧ (λ − 0) = FX∨,i (λ − 0) = 0, (6.3.25b)
into the relationships (6.3.22a) and (6.3.18b), respectively.
Theorem 6.3.2. The functions µ0 (λ̂n,+ , X+,i ), µ0 (λ̂n,∨ , X∧,i ), determined by the
expression (6.3.10b) between the estimators λ̂n,+ (6.3.2), λ̂n,∨ (6.3.4) and the es-
timation (measurement) results X+,i (6.3.1a), X∧,i (6.3.1c), respectively, are met-
rics, and for all λ ∈ ]−∞, ∞[ the identities hold:
Within the group L(X ; +):

µ0 (λ̂n,+ , X+,i ) = µ(λ̂n,+ , X+,i ); (6.3.26a)



Within the lattice L(X ; ∨, ∧):

µ0 (λ̂n,∨ , X∧,i ) |λ=−λ0 = µ(λ̂n,∧ , X∨,i ) |λ=λ0 . (6.3.26b)

Proof. We first prove the identity (6.3.26a) for the group L(X ; +), and then the
identity (6.3.26b) for the lattice L(X ; ∨, ∧).
According to the identity (6.3.12), metric µ(λ̂n,+ , X+,i ) is equal to:

µ(λ̂n,+ , X+,i ) = 2(P[λ̂n,+ ∨ X+,i > λ] − P[λ̂n,+ ∧ X+,i > λ]) =

= 2[Fλ̂+ (λ + 0) + FX+,i (λ + 0)] − 4Fλ̂+,Xi (λ + 0, λ + 0), (6.3.27)


where Fλ̂+,Xi (z1 , z2 ) and Fλ̂+ (z1 ), FX+,i (z2 ) are joint and univariate CDFs of the
estimator λ̂n,+ and the estimation (measurement) result X+,i respectively.
Similarly, according to the formulas [115, (3.2.80)] and [115, (3.2.85)], the prob-
abilities P[λ̂n,+ ∧ X+,i < λ], P[λ̂n,+ ∨ X+,i < λ] are equal to:

P[λ̂n,+ ∧ X+,i < λ] = Fλ̂+ (λ − 0) + FX+,i (λ − 0) − Fλ̂+,Xi (λ − 0, λ − 0);

P[λ̂n,+ ∨ X+,i < λ] = Fλ̂+,Xi (λ − 0, λ − 0),

so the function µ0 (λ̂n,+ , X+,i ) is equal to:

µ0 (λ̂n,+ , X+,i ) = 2(P[λ̂n,+ ∧ X+,i < λ] − P[λ̂n,+ ∨ X+,i < λ]) =

= 2[Fλ̂+ (λ − 0) + FX+,i (λ − 0)] − 4Fλ̂+,Xi (λ − 0, λ − 0). (6.3.28)


Due to the continuity of CDFs Fλ̂+,Xi (z1 , z2 ), Fλ̂+ (z1 ), FX+,i (z2 ), the identities
hold:
Fλ̂+ (λ − 0) = Fλ̂+ (λ + 0) = Fλ̂+ (λ); (6.3.29a)
FX+,i (λ − 0) = FX+,i (λ + 0) = FX+,i (λ); (6.3.29b)
Fλ̂+,Xi (λ − 0, λ − 0) = Fλ̂+,Xi (λ + 0, λ + 0) = Fλ̂+,Xi (λ, λ). (6.3.29c)
Joint fulfillment of the equalities (6.3.29) implies the identity between the relation-
ships (6.3.27) and (6.3.28):

µ0 (λ̂n,+ , X+,i ) = µ(λ̂n,+ , X+,i ),

thus, the function µ0(λ̂n,+, X+,i), coinciding with µ(λ̂n,+, X+,i), is a metric. The
first statement (the identity (6.3.26a)) of Theorem 6.3.2 is proved.
We prove the second statement of Theorem 6.3.2. For the CDF FN(z) of mea-
surement errors {Ni} and the CDF Fλ(z) of unknown nonrandom parameter λ that
are symmetrical with respect to their medians, the identities hold [253, (7.1)]:

FN (−z ) = 1 − FN (z ); (6.3.30a)

Fλ (−z ) |λ=−λ0 = 1 − Fλ (z ) |λ=λ0 . (6.3.30b)


Joint fulfillment of the relationships (6.3.30a,b), (6.3.8a), and (6.3.8b) implies the

identities determining the symmetry of a pair of distributions FX∨,i (z ), FX∧,i (z ) of


measurement results [253, (7.1)]:

FX∨,i (−z ) |λ=−λ0 = 1 − FX∧,i (z ) |λ=λ0 ; (6.3.31a)

FX∧,i (−z ) |λ=−λ0 = 1 − FX∨,i (z ) |λ=λ0 . (6.3.31b)


Similarly, joint fulfillment of the relationships (6.3.30a,b), (6.3.8c), and (6.3.8d)
implies the identities determining the symmetry of a pair of distributions Fλ̂n,∧ (z ),
Fλ̂n,∨ (z ) of the estimators λ̂n,∧ , λ̂n,∨ [253, (7.2)]:

Fλ̂n,∧ (−z ) |λ=−λ0 = 1 − Fλ̂n,∨ (z ) |λ=λ0 ; (6.3.32a)

Fλ̂n,∨ (−z ) |λ=−λ0 = 1 − Fλ̂n,∧ (z ) |λ=λ0 . (6.3.32b)


Joint fulfillment of the relationships (6.3.18a), (6.3.31a), (6.3.32a) and of the relation-
ships (6.3.22b), (6.3.31b), (6.3.32b) implies the identity (6.3.26b); thus, the function
µ0(λ̂n,∨, X∧,i), as well as µ(λ̂n,∧, X∨,i), is a metric.

Graphs of the metrics µ(λ̂n,∧, X∨,i) = µ(λ̂n,∧, λ) and µ0(λ̂n,∨, X∧,i) = µ0(λ̂n,∨, λ)
as functions of the parameter λ, for the sizes n = 3, 5, 20 of the sample of the
measurement results {X∨,i} (6.3.1b), {X∧,i} (6.3.1c), under standard normal
distribution of the measurement errors {Ni}, are shown in Fig. 6.3.2.

FIGURE 6.3.2 Graphs of metrics µ(λ̂n,∧, λ) and µ0(λ̂n,∨, λ) depending on parameter λ; n = 3, 5, 20

The parts of the graphs µ(λ̂n,∧, λ), µ0(λ̂n,∨, λ) passing above 1.0 are conditionally
called the parcels of “superefficiency” of the estimators. Further, for an adequate and
impartial description of estimator properties, the intervals where the metrics are
less than or equal to 1.0 are conditionally called “operating” intervals. The operating
interval for the metric µ(λ̂n,∧, λ) is the domain where the parameter λ is positively
defined: λ ∈ [0, ∞[; for the metric µ0(λ̂n,∨, λ), the operating interval is the domain
where the parameter λ is negatively defined: λ ∈ ]−∞, 0].
The metrics µ(λ̂n , Xi ) (6.3.10a) and µ0 (λ̂n , Xi ) (6.3.10b) between one of the
estimators λ̂n (6.3.2), (6.3.3), (6.3.4) of a parameter λ and the measurement result
Xi = λ⊕Ni , where ⊕ is one of operations of L-group L(X ; +, ∨, ∧) (6.3.1a) through
(6.3.1c), respectively, were introduced in order to characterize the quality of these
estimators (6.3.2) through (6.3.4) in sample space L(X , BX ; +, ∨, ∧) with the prop-
erties of L-group L(X ; +, ∨, ∧). Let us define the notion of quality of the estimator
λ̂n of an unknown nonrandom parameter λ in sample space L(X , BX ; +, ∨, ∧) with
L-group properties.

Definition 6.3.1. In sample space L(X , BX ; +, ∨, ∧) with L-group properties, by a


quality index of estimator λ̂n we mean the quantity q{λ̂n } equal to metric (6.3.10a)
and (6.3.10b) between one of the estimators λ̂n — λ̂n,+ (6.3.2), λ̂n,∧ (6.3.3), and
λ̂n,∨ (6.3.4) of a parameter λ and the estimation (measurement) results Xi =
λ ⊕ Ni — X+,i (6.3.1a), X∨,i (6.3.1b), X∧,i (6.3.1c), respectively, where ⊕ is one
of operations of L-group L(X ; +, ∨, ∧):
Within the group L(X ; +):

q{λ̂n,+ } = µ(λ̂n,+ , X+,i ), on λ ∈] − ∞, ∞[; (6.3.33a)

Within the lattice L(X ; ∨, ∧):

q{λ̂n,∧ } = µ(λ̂n,∧ , X∨,i ), on λ ∈ [0, ∞[; (6.3.33b)

q{λ̂n,∨ } = µ0 (λ̂n,∨ , X∧,i ), on λ ∈] − ∞, 0]. (6.3.33c)

For normally distributed estimation (measurement) errors {Ni}, joint CDF
Fλ̂+,Xi(z1, z2) is invariant with respect to a shift of parameter λ, so, according
to the identity [115, (3.3.22)], it is equal to:

Fλ̂+,Xi(λ, λ) = Fλ̂+,Xi(0, 0) |λ=0 = [1 + (2/π) arcsin(ρλ̂,X)]/4, (6.3.34)
where ρλ̂,X is the correlation coefficient between the estimator λ̂n,+ (6.3.2) and the
result of interaction X+,i (6.3.1a), equal to ρλ̂,X = 1/√n; n is the size of the sample
N = (N1, . . . , Nn).
Then, substituting the values of joint CDF (6.3.34) and the values of univariate
CDFs Fλ̂n,+(λ) = 0.5, FX+,i(λ) = 0.5 into the formula (6.3.27), we obtain the
value of metric µ(λ̂n,+, X+,i) for the case of interaction between a parameter λ and
the estimation (measurement) errors {Ni} within the model (6.3.1a) under normal
distribution of the latter:

µ(λ̂n,+, X+,i) = 1 − (2/π) arcsin(1/√n), (6.3.35)

where n represents size of the samples N = (N1 , . . . , Nn ), X+ = (X+,1 , . . . , X+,n ).


According to Definition 6.3.1, the quality index q{λ̂n,+} of the estimator
λ̂n,+ (6.3.2) of an unknown nonrandom parameter λ, on λ ∈ ]−∞, ∞[, is equal to:

q{λ̂n,+} = µ(λ̂n,+, X+,i) = 1 − (2/π) arcsin(1/√n). (6.3.36)

As shown by the graphs in Fig. 6.3.2, the dependences µ(λ̂n,∧ , X∨,i ) (6.3.18a),
µ0 (λ̂n,∨ , X∧,i ) (6.3.22b), under an arbitrary symmetric distribution FN (z ; σ ) of the
estimation (measurement) errors {Ni } with scale parameter σ and the property

(6.3.30a), will be determined by the ratio λ/σ between an estimated parameter λ
and the scale parameter σ. In this connection, we obtain the dependences of the
quality index q{λ̂n} of estimation of an unknown nonrandom parameter λ deter-
mined by the relationships (6.3.33b) and (6.3.33c) for large estimation
(measurement) errors {Ni}, i.e., when the modulus of the ratio λ/σ is much less
than 1: |λ/σ| ≪ 1. Substituting the value of CDF FN(0) = 0.5 of the estimation
(measurement) errors {Ni} with the property (6.3.30a) and also the value of CDF
Fλ(z) = 1(z − λ) |λ=0 of an unknown nonrandom parameter λ into the initial for-
mulas for calculation of CDFs of the estimation (measurement) results (6.3.8a,b)
and (6.3.8c,d), we obtain:

FX∨,i(λ + 0) = 0.5, Fλ̂n,∧(λ + 0) = 1 − 2^{−n}; (6.3.37a)

FX∧,i(λ − 0) = 0.5, Fλ̂n,∨(λ − 0) = 2^{−n}. (6.3.37b)


Substituting the pairs of values of CDFs (6.3.37a) into the expression (6.3.18a), and
the pairs of values of CDFs (6.3.37b) into the expression (6.3.22b), we obtain the
values of quality indices q{λ̂n,∧ }, q{λ̂n,∨ } of the estimators λ̂n,∧ (6.3.3), λ̂n,∨ (6.3.4)
of an unknown nonrandom parameter λ under an arbitrary symmetric distribution
FN (z ; σ ) of the estimation (measurement) errors {Ni } and on |λ/σ| → 0:

q{λ̂n,∧} = µ(λ̂n,∧, X∨,i) = 1 − 2^{−(n−1)}; (6.3.38a)

q{λ̂n,∨} = µ0(λ̂n,∨, X∧,i) = 1 − 2^{−(n−1)}. (6.3.38b)


The graphs of the dependences q{λ̂n,+} (6.3.36), q{λ̂n,∧} (6.3.38a), and 1 − 1/n on the
sample size n are shown in Fig. 6.3.3(a). The graphs of the dependences 1 − q{λ̂n,+},
1 − q{λ̂n,∧}, and 1/n on the sample size n are shown in Fig. 6.3.3(b) in logarithmic
scale.

FIGURE 6.3.3 Dependences on size of samples n: (a) q{λ̂n,+} (6.3.36), q{λ̂n,∧} (6.3.38a),
and 1 − 1/n; (b) 1 − q{λ̂n,+}, 1 − q{λ̂n,∧}, 1/n
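The contrast between (6.3.36) and (6.3.38a,b) can be tabulated directly; the following sketch (added for illustration; the list of sample sizes is arbitrary) computes the values behind Fig. 6.3.3:

    # Tabulation of the quality indices (6.3.36) and (6.3.38a,b), i.e., the
    # values behind Fig. 6.3.3.
    import numpy as np

    for n in (3, 5, 10, 20, 50):
        q_group   = 1.0 - (2.0 / np.pi) * np.arcsin(1.0 / np.sqrt(n))  # (6.3.36)
        q_lattice = 1.0 - 2.0 ** (-(n - 1))                            # (6.3.38a,b)
        print(f"n = {n:3d}: q_group = {q_group:.4f}, "
              f"q_lattice = {q_lattice:.6f}, 1 - 1/n = {1.0 - 1.0 / n:.4f}")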

As seen in the graphs, the quality indices (6.3.38a,b) of the estimators in the
sample space with properties of the lattice L(X ; ∨, ∧) tend to 1 exponentially,

whereas the quality index (6.3.36) of the estimator in the sample space with prop-
erties of the group L(X; +) tends to 1 more slowly than the inversely proportional
dependence 1 − 1/n. From an informational point of view, the information contained
in the estimation (measurement) results is, on a qualitative level, used better when
the estimation (measurement) results are processed in the sample space with lattice
properties than in the sample space with group properties. We should note that by
the information contained in the estimation (measurement) results (in the sample
with independent elements) we mean both the information on an unknown nonrandom
parameter λ and the information contained in the sample of the independent estimation
(measurement) errors N = (N1, . . . , Nn), whose use provides the estimator
quality index determined by the sample size n or, in other words, by the quantity
of absolute information contained in this sample and measured in abits. Under an
arbitrary symmetric distribution FN(z; σ) of the estimation (measurement) errors
{Ni} and a rather small ratio λ/σ of the estimated parameter λ to the scale parameter σ,
|λ/σ| → 0, one can consider that the quality indices q{λ̂n,∧}, q{λ̂n,∨} (6.3.38a,b) of the
estimators λ̂n,∧ (6.3.3), λ̂n,∨ (6.3.4) in the sample space with properties of the lattice
L(X; ∨, ∧) are characterized by the invariance property with respect to the conditions
of nonparametric prior uncertainty. This property is not inherent to the estima-
tor λ̂n,+ (6.3.2), whose quality is determined by the CDF FN(z; σ) of the estimation
(measurement) errors {Ni} [250], [251].

6.4 Quality Indices of Classification and Detection of Deterministic


Signals
6.4.1 Quality Indices of Classification of Deterministic Signals in Metric
Spaces with L-group Properties
The classification of the signals with discrete informational parameters that do
not change their values on a signal domain of definition plays a key role in in-
formation transmitting and receiving. In this subsection, we determine the quality of
signal classification in metric spaces with L-group properties and also establish a
connection between the quality of classification of the signals with discrete parameters
and discrete communication channel capacity.
Consider a general model of interaction between the signal si (t) from a set of
deterministic signals S = {si (t)}, i = 1, . . . , m and interference (noise) n(t) in signal
space with properties of L-group Γ(+, ∨, ∧):

x(t) = si (t) ⊕ n(t), t ∈ Ts , i = 1, . . . , m, (6.4.1)

where ⊕ is some binary operation of L-group Γ(+, ∨, ∧); Ts = [t0 , t0 + T ] is domain


of definition of the signal si (t); t0 is a known time of arrival of the signal si (t); T is
a duration of the signal si (t); m ∈ N, N is the set of natural numbers.
Let the signals from a set S = {si(t)}, i = 1, . . . , m be characterized by
the same energy Ei = ∫_{t∈Ts} si²(t)dt = E and by the cross-correlation coefficients
rik = (1/E) ∫_{t∈Ts} si(t)sk(t)dt.
During signal classification, we should distinguish which one of the m useful signals
from a set S = {si(t)}, i = 1, . . . , m has arrived at the receiver input at a given
moment of time.
Generally, quality of classification (as well as quality of multiple-alternative
detection) of signals is defined by probability of a signal receiving error taking into
account corresponding types of discrete modulation of informational parameters of
a signal [163], [155], [149]. This probability of error is some function of a distance
between the signals in Hilbert signal space [149, Section 4.2;(36)]. However, for the
case of classification of m > 2 signals from a set S = {si (t)}, an exact evaluation
of error probability is problematic even under normal distribution of interference
(noise) in the input of a processing unit (see, for instance, [155, (2.4.43)], [254,
Section 4.2;(59)]), so, the upper bound of error probability is often used for its
evaluation (see [155, (2.4.46)], [254, Section 4.2;(65)]). In a given subsection, quality
index of classification of deterministic signals is identified not by signal receiving
error probability, but by a quality index of the estimator ŝk (t) of a received signal
sk (t) from a set S = {si (t)}, i = 1, . . . , m, sk (t) ∈ S based on metric relationships
between the signals (in this case, between the signal sk (t) and its estimator ŝk (t)).
This section has two goals: (1) definition of quality indices of classification and
detection of deterministic signals on the basis of the metric (3.2.1), and (2) estab-
lishment of the most general qualitative and quantitative regularities in signal
classification (detection) in linear signal space and signal space with lattice prop-
erties. As a model of signal space featuring the properties of these spaces, we use
the signal space with the properties of L-group Γ(+, ∨, ∧).
Further, on the basis of the metric (3.2.1), we shall define and evaluate the quality
indices of signal classification in signal space with the properties of a group Γ(+) and
a lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧).
Within the framework of the general model (6.4.1), we consider the case of
additive interaction between the signal si (t) from a set of deterministic signals
S = {si (t)}, i = 1, . . . , m and interference (noise) n(t) in signal space with the
properties of the group Γ(+) of L-group Γ(+, ∨, ∧):

x(t) = si (t) + n(t), t ∈ Ts , i = 1, . . . , m, (6.4.2)

where Ts = [t0 , t0 + T ] is a domain of definition of the signal si (t); t0 is a known


time of arrival of the signal si (t); T is a duration of the signal si (t); m ∈ N, N is
the set of natural numbers.
Suppose the interference n(t) is white Gaussian noise with power spectral den-
sity N0 .
While solving the problem of signal classification in linear space, i.e., when
the interaction equation (6.4.2) holds, the sufficient statistics yi(t) in the i-th processing
channel is the scalar product (x(t), si(t)) of the signals x(t), si(t) in Hilbert space,
equal to the correlation integral ∫_{t∈Ts} x(t)si(t)dt:

yi(t) = ŝi(t) = (x(t), si(t)) = ∫_{t∈Ts} x(t)si(t)dt, (6.4.3)

where yi (t) is the estimator of energetic parameter of the signal si (t) (further,
simply the estimator ŝi (t) of the signal si (t)) in the i-th processing channel.
In the case of the presence of the signal sk(t) in the observed process x(t) =
sk(t) + n(t), the problem of signal classification is solved by maximization of the suffi-
cient statistics (x(t), si(t)) (6.4.3):

arg max_{i∈I; si(t)∈S} [∫_{t∈Ts} x(t)si(t)dt] |_{x(t)=sk(t)+n(t)} = k̂,

where k̂ is the estimator of the channel number corresponding to the number of a
received signal sk(t) from a set S = {si(t)}.
The correlation integral ∫_{t∈Ts} x(t)si(t)dt in the i-th processing channel at the instant
t = t0 + T takes a mean value equal to the product of the cross-correlation coefficient
rik between the signals si(t), sk(t) and the signal energy E:

M{yi(t)} = M{∫_{t∈Ts} x(t)si(t)dt} |_{x(t)=sk(t)+n(t)} = rik E, (6.4.4)

so that in the k-th processing channel (i = k) at the instant t = t0 + T, it takes a
mean value equal to the energy E of the signal sk(t):

M{yi(t)} = M{∫_{t∈Ts} x(t)si(t)dt} |_{i=k} = ∫_{t∈Ts} sk²(t)dt = E, (6.4.5)

where yi (t) is the estimator ŝi (t) of the signal si (t) in the i-th processing channel
in the presence of the signal sk (t) in the observed process x(t); M(∗) is the symbol
of mathematical expectation.
In the output of the i-th processing channel, the probability density function
(PDF) pŝi (y ) of the estimator yi (t) = ŝi (t) of the signal si (t), at the instant t =
t0 + T in the presence of the signal sk (t) in the observed process x(t), is determined
by the expression:

pŝi(y) = (2πD)^{−1/2} exp[−(y − rik E)²/(2D)], (6.4.6)

where D = EN0 is a noise variance in the output of the i-th processing channel; E
is the energy of the signal si (t); N0 is a power spectral density of interference (noise)
in the input of the processing unit; rik is a cross-correlation coefficient between the
signals si (t), sk (t).
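A discretized sketch of the correlation classifier (6.4.3) through (6.4.5) is given below (an added illustration; sampling rate, signal set, and noise level are assumptions, and the per-sample noise standard deviation is chosen so that the channel-output noise variance equals D = EN0):

    # Discretized correlation classifier per (6.4.3)-(6.4.5); sampling rate,
    # signal set, and N0 are assumptions; the per-sample noise std is chosen
    # so that the channel-output noise variance equals D = E * N0.
    import numpy as np

    rng = np.random.default_rng(3)
    fs, T = 1000, 1.0
    t = np.arange(0.0, T, 1.0 / fs)
    dt = 1.0 / fs
    S = [np.sin(2 * np.pi * f * t) for f in (5, 7, 9)]  # set S = {s_i(t)}
    E = np.sum(S[0] ** 2) * dt                          # common energy E

    k = 1                                               # transmitted signal index
    N0 = 0.05                                           # noise PSD
    x = S[k] + rng.normal(0.0, np.sqrt(N0 * fs), t.size)  # observation (6.4.2)

    y = [np.sum(x * si) * dt for si in S]               # statistics y_i (6.4.3)
    print("y_i =", np.round(y, 3), "-> k_hat =", int(np.argmax(y)))
    # The mean of y_k approximates E (6.4.5); the means of the other
    # channels approximate r_ik * E = 0 for this orthogonal set (6.4.4).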

Consider the estimator yk(t) = ŝk(t) of the signal sk(t) in the k-th processing
channel under signal presence in the observed process x(t) = sk(t) + n(t):

yk(t) = ŝk(t) |_{x(t)=sk(t)+n(t)}, t = t0 + T, (6.4.7)

and also consider the estimator yk(t) |_{n(t)=0} = ŝk(t) |_{n(t)=0} of the signal sk(t) in
the k-th processing channel in the absence of interference (noise) (n(t) = 0) in the
observed process x(t) = sk(t):

yk(t) |_{n(t)=0} = ŝk(t) |_{x(t)=sk(t)}, t = t0 + T. (6.4.8)
To characterize the quality of the estimator yk (t) = ŝk (t) of the signal sk (t)
while solving the problem of classification of the signals from a set S = {si (t)},
i = 1, . . . , m that additively interact with interference (noise) n(t) (6.4.2) in the
group Γ(+) of L-group Γ(+, ∨, ∧), we introduce the function µ(yt , yt,0 ) that is
analogous to metric (3.2.1) and characterizes the difference between the estimator
yk(t) (6.4.7) of the signal sk(t) in the k-th processing channel under signal presence in
the observed process x(t) = sk(t) + n(t) and the estimator yk(t) |_{n(t)=0} (6.4.8) of
the signal sk(t) in the absence of interference (noise):
µ(yt , yt,0 ) = 2[P(yt ∨ yt,0 > h) − P(yt ∧ yt,0 > h)]. (6.4.9)
In the equation, yt is a sample of the estimator yk(t) (6.4.7) of the signal sk(t) in
the output of the k-th processing channel at the instant t = t0 + T in the presence
of the signal sk(t) in the observed process x(t); yt,0 is a sample of the estimator
yk(t) |_{n(t)=0} (6.4.8) of the signal sk(t) in the output of the k-th processing channel
at the instant t = t0 + T in the absence of interference (noise) (n(t) = 0) in the
observed process x(t), equal to the energy E of the signal: yt,0 = E; h is some
threshold level h < E determined by the average of the two mathematical expectations
of the processes in the outputs of the i-th (6.4.4) and the k-th (6.4.5) processing
channels:

h = (rik E + E)/2,    (6.4.10)

where rik is the cross-correlation coefficient between the signals si(t) and sk(t).
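The metric (6.4.9) can also be estimated empirically. The following Monte Carlo
sketch, assuming the Gaussian output model (6.4.6) with illustrative values of E, D,
and rik, estimates the two probabilities from paired samples and compares the result
with the closed form obtained below in (6.4.15):

```python
# Empirical estimate of µ(yt, yt,0) = 2[P(yt ∨ yt,0 > h) − P(yt ∧ yt,0 > h)].
import numpy as np
from math import erf, sqrt

def Phi(z):                                   # probability integral via erf
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(1)
E, D, r_ik = 1.0, 0.1, 0.0                    # illustrative energy, variance, correlation
h = (r_ik * E + E) / 2                        # threshold of Eq. (6.4.10)

y_t = rng.normal(loc=E, scale=sqrt(D), size=200_000)   # samples of yk(t), Eq. (6.4.7)
y_t0 = np.full_like(y_t, E)                            # noise-free sample yt,0 = E

mu_mc = 2 * (np.mean(np.maximum(y_t, y_t0) > h) - np.mean(np.minimum(y_t, y_t0) > h))
mu_th = 2 * (1 - Phi((E - h) / sqrt(D)))      # closed form, Eq. (6.4.15)
print(f"{mu_mc:.4f} vs {mu_th:.4f}")          # the two values agree closely
```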
Theorem 3.2.1 states that for random variables with special properties, the func-
tion defined by the relationship (3.2.1) is a metric. Meanwhile, establishing this
fact for the function (6.4.9) requires a separate proof, stated in the following
theorem.
Theorem 6.4.1. For a pair of samples yt,0 and yt of the stochastic processes
yk(t) |_{n(t)=0}, yk(t) in the output of the unit of optimal classification of the signals
from a set S = {si(t)}, i = 1, . . . , m, which additively interact with interference
(noise) n(t) (6.4.2) in a group Γ(+) of L-group Γ(+, ∨, ∧), the function µ(yt, yt,0)
defined by the relationship (6.4.9) is a metric.
Proof. Consider the probabilities P(yt ∨ yt,0 > h), P(yt ∧ yt,0 > h) in formula
(6.4.9). Joint fulfillment of the equality yt,0 = E and the inequality h < E implies
that these probabilities are equal to:

P(yt ∨ yt,0 > h) = P(yt,0 > h) = 1;    (6.4.11a)

P(yt ∧ yt,0 > h) = P(yt > h) = 1 − Fŝk(h),    (6.4.11b)


where Fŝk(y) is the cumulative distribution function (CDF) of the estimator yk(t) =
ŝk(t) in the output of the k-th processing channel at the instant t = t0 + T, which,
according to the PDF (6.4.6), is equal to:

Fŝk(y) = (2πD)^(−1/2) ∫_{−∞}^{y} exp[−(x − E)²/(2D)] dx.    (6.4.12)

The identities (6.4.11) imply the equality:

P(yt > h) + P(yt,0 > h) = P(yt ∨ yt,0 > h) + P(yt ∧ yt,0 > h).    (6.4.13)

The probability P(yt > h) is a valuation [221, Section X.1 (V1)]. Besides, the
probability P(yt > h) is isotonic, inasmuch as the implication holds [221, Section
X.1 (V2)]:

yt ≥ y′t ⇒ P(yt > h) ≥ P(y′t > h).    (6.4.14)

Joint fulfillment of the relationships (6.4.13) and (6.4.14), according to the theorem
in [221, Section X.1], implies that the quantity µ(yt, yt,0) (6.4.9) is a metric.
Substituting the formulas (6.4.11) into the expression (6.4.9), we obtain the value
of the metric µ(yt, yt,0) between the signal sk(t) and its estimator yk(t) = ŝk(t) while
solving the problem of classification of the signals from a set S = {si(t)}, i =
1, . . . , m in the group Γ(+) of L-group Γ(+, ∨, ∧):

µsŝ = µ(yt, yt,0) = 2Fŝk(h) = 2[1 − Φ((E − h)/√D)],    (6.4.15)
where Fŝk(y) is the CDF of the estimator yk(t) = ŝk(t) in the output of the k-th
processing channel at the instant t = t0 + T, which is determined by the formula
(6.4.12); Φ(z) = (2π)^(−1/2) ∫_{−∞}^{z} exp{−x²/2} dx is the probability integral;
D = EN0 is the variance of the noise in the output of the i-th processing channel;
E is the energy of the signals si(t) from a set S = {si(t)}, i = 1, . . . , m; h is the
threshold level defined by the relationship (6.4.10); N0 is the power spectral density
of interference (noise) in the input of a processing unit.
Thus, the expression (6.4.15) defines the metric between the signal sk(t) and its
estimator yk(t) = ŝk(t), which can be considered as a measure of their closeness.
Definition 3.2.2 establishes the relationship (3.2.7) between the normalized measure
of statistical interrelationship (NMSI) of a pair of samples and its metric (3.2.1).
Similarly, the following definition establishes the connection between the quality
index of signal classification and the metric (6.4.9).

Definition 6.4.1. By quality index of the estimator of the signals νsŝ while solving
the problem of their classification from a set S = {si(t)}, i = 1, . . . , m, we mean the
NMSI ν(yt, yt,0) between the samples yt,0, yt of the stochastic processes yk(t) |_{n(t)=0},
yk(t) in the output of a signal classification unit, connected with the metric µ(yt, yt,0)
(6.4.9) by the following relationship:

νsŝ = ν(yt, yt,0) = 1 − µ(yt, yt,0).    (6.4.16)

We now determine the value of the quality index of the estimator of the signals
(6.4.16) for signals with arbitrary correlation properties from a set S = {si(t)},
i = 1, . . . , m under their classification in signal space with the properties of the
group Γ(+) of L-group Γ(+, ∨, ∧).
Substituting the value of the correlation coefficient rik, which determines, according
to the formula (6.4.10), the value of the threshold h = (rik E + E)/2, into the
expression (6.4.15), and simultaneously using the coupling equation (6.4.16) between
NMSI and metric, we obtain the dependence of the quality index of the estimator of
the signals νsŝ(q²) on the signal-to-noise ratio q² = E/N0 for signals with arbitrary
correlation properties:

νsŝ(q²) = 2Φ(√(q²) (1 − rik)/2) − 1,    (6.4.17)

where rik is a cross-correlation coefficient between the signals si (t) and sk (t).
Consider the values of the quality index of the estimator of the signals (6.4.17)
for the orthogonal (rik = 0) and opposite (rik = −1) signals from a set S = {si (t)},
i = 1, . . . , m under their classification in signal space with the properties of the
group Γ(+) of L-group Γ(+, ∨, ∧).
Based on the general formula (6.4.17), we obtain the dependences of quality
indices of the estimators νsŝ (q 2 ) on signal-to-noise ratio for orthogonal and opposite
signals, respectively:
p 
νsŝ (q 2 ) |⊥ = 2Φ q 2 /4 − 1, si (t)⊥sk (t), rik = 0; (6.4.18a)
p 
νsŝ (q 2 ) |− = 2Φ q 2 − 1, s1 (t) = −s2 (t), r12 = −1, (6.4.18b)

where q 2 = E/N0 is signal-to-noise ratio.


The dependences νsŝ(q²) |⊥ (6.4.18a) and νsŝ(q²) |− (6.4.18b) are represented in
Fig. 6.4.1 by the solid line and the dotted line, respectively. According to the
formulas (6.4.18a) and (6.4.18b), the use of opposite signals provides a gain in
signal energy as against orthogonal signals, which corresponds rather well to known
results [163], [155], [149]. Here, the fourfold gain in signal energy is elucidated by
the fact that the metric (6.4.15) is a function of the square of the Euclidean metric
∫_{t∈Ts} (si(t) − sk(t))² dt between the signals si(t) and sk(t) in Hilbert space.
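A short numerical sketch of Eqs. (6.4.18a,b) reproduces these dependences; here Φ(z)
is expressed through the error function as Φ(z) = [1 + erf(z/√2)]/2:

```python
# Quality indices of the signal estimators vs. signal-to-noise ratio q².
from math import erf, sqrt

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

for q2 in (1.0, 4.0, 9.0, 16.0):
    nu_orth = 2 * Phi(sqrt(q2 / 4)) - 1      # Eq. (6.4.18a), rik = 0
    nu_opp = 2 * Phi(sqrt(q2)) - 1           # Eq. (6.4.18b), r12 = -1
    print(f"q^2 = {q2:>5}: nu_orth = {nu_orth:.3f}, nu_opp = {nu_opp:.3f}")
```

Comparing νsŝ(q²) |− at q² with νsŝ(q²) |⊥ at 4q² makes the fourfold energy gain
visible numerically.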
Consider the model of interaction between the signal si (t) from a set of deter-
ministic signals S = {si (t)}, i = 1, . . . , m and interference (noise) n(t) in signal
space with the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧) with opera-
tions of join si (t) ∨ n(t) and meet si (t) ∧ n(t), respectively:

x(t) = si (t) ∨ n(t); (6.4.19a)

x̃(t) = si (t) ∧ n(t), (6.4.19b)


where t ∈ Ts ; Ts = [t0 , t0 + T ] is a domain of definition of the signal si (t); t0 is a
known time of arrival of the signal si (t); T is a duration of the signal si (t); m ∈ N,
N is the set of natural numbers.

FIGURE 6.4.1 Dependences νsŝ (q 2 ) |⊥ per Equation (6.4.18a); νsŝ (q 2 ) |− per Equation
(6.4.18b)

Let the signals from a set S = {si(t)}, i = 1, . . . , m be characterized by
the same energy Ei = ∫_{t∈Ts} si²(t)dt = E and cross-correlation coefficients
rik = ∫_{t∈Ts} si(t)sk(t)dt / E. Also suppose that interference (noise) n(t) is
characterized by arbitrary probabilistic-statistical properties.
Consider the estimators yk(t) and ỹk(t) of the signal sk(t) in the k-th processing
channel of the classification unit in the presence of the signal sk(t) in the observed
processes x(t) (6.4.19a) and x̃(t) (6.4.19b), which are determined by the following
relationships:

yk(t) = yk(t) |_{x(t)=sk(t)∨n(t)} = sk(t) ∧ x(t);    (6.4.20a)

ỹk(t) = ỹk(t) |_{x̃(t)=sk(t)∧n(t)} = sk(t) ∨ x̃(t),    (6.4.20b)

and also consider the similar estimators yk(t) |_{n(t)=0}, ỹk(t) |_{n(t)=0} of the signal
sk(t) in the k-th processing channel in the absence of interference (noise) (n(t) = 0)
in the observed processes x(t) (6.4.19a) and x̃(t) (6.4.19b):

yk(t) |_{n(t)=0} = sk(t) ∧ x(t) |_{x(t)=sk(t)∨0};    (6.4.21a)

ỹk(t) |_{n(t)=0} = sk(t) ∨ x̃(t) |_{x̃(t)=sk(t)∧0}.    (6.4.21b)
We next determine the values of the function (6.4.9), which for the lattice
Γ(∨, ∧) is also a metric, along with determining the quality index of the estimator
of the signals (6.4.16), while solving the problem of signal classification from a set
S = {si(t)}, i = 1, . . . , m in signal space with the properties of the lattice Γ(∨, ∧)
of L-group Γ(+, ∨, ∧):

µsŝ = µ(yt, yt,0) = 2[P(yt ∨ yt,0 > h) − P(yt ∧ yt,0 > h)];    (6.4.22a)

µ̃sŝ = µ(ỹt, ỹt,0) = 2[P(ỹt ∨ ỹt,0 > h) − P(ỹt ∧ ỹt,0 > h)],    (6.4.22b)

where yt and ỹt are the samples of the estimators yk(t) (6.4.20a) and ỹk(t) (6.4.20b)
of the signal sk(t) in the output of the k-th processing channel at the instant t ∈ Ts
in the presence of the signal sk(t) in the observed processes x(t), x̃(t); yt,0 and ỹt,0
are the samples of the estimators yk(t) |_{n(t)=0} (6.4.21a) and ỹk(t) |_{n(t)=0} (6.4.21b)
of the signal sk(t) in the output of the k-th processing channel at the instant t ∈ Ts
in the absence of interference (noise) (n(t) = 0) in the observed processes x(t), x̃(t);
h is some threshold level determined by energetic and correlation relations between
the signals from a set S = {si(t)}, i = 1, . . . , m.
The absorption axiom of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧), applied at the
third link of each of the four following multilink identities, implies that the estimators
yk(t) (6.4.20a), ỹk(t) (6.4.20b), and the estimators yk(t) |_{n(t)=0} (6.4.21a),
ỹk(t) |_{n(t)=0} (6.4.21b), are identically equal to the received signal sk(t):

yk(t) = sk(t) ∧ x(t) = sk(t) ∧ [sk(t) ∨ n(t)] = sk(t);    (6.4.23a)

ỹk(t) = sk(t) ∨ x̃(t) = sk(t) ∨ [sk(t) ∧ n(t)] = sk(t);    (6.4.23b)

yk(t) |_{n(t)=0} = sk(t) ∧ x(t) = sk(t) ∧ [sk(t) ∨ 0] = sk(t);    (6.4.24a)

ỹk(t) |_{n(t)=0} = sk(t) ∨ x̃(t) = sk(t) ∨ [sk(t) ∧ 0] = sk(t).    (6.4.24b)
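The absorption identities (6.4.23) are easy to verify numerically; in the sketch below
(the waveform and the heavy-tailed noise are illustrative assumptions) the lattice
estimators recover the signal sk(t) exactly for every noise realization:

```python
# Absorption in the lattice model: sk ∧ (sk ∨ n) = sk and sk ∨ (sk ∧ n) = sk.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1000)
s_k = np.sin(2 * np.pi * 5 * t)              # received deterministic signal sk(t)
n = rng.standard_cauchy(t.size)              # noise with arbitrary (heavy-tailed) law

x = np.maximum(s_k, n)                       # x(t)  = sk(t) ∨ n(t), Eq. (6.4.19a)
x_tilde = np.minimum(s_k, n)                 # x̃(t) = sk(t) ∧ n(t), Eq. (6.4.19b)

y_k = np.minimum(s_k, x)                     # sk ∧ [sk ∨ n], Eq. (6.4.23a)
y_k_tilde = np.maximum(s_k, x_tilde)         # sk ∨ [sk ∧ n], Eq. (6.4.23b)
print(np.array_equal(y_k, s_k), np.array_equal(y_k_tilde, s_k))   # True True
```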
The obtained relationships imply that the samples yt and ỹt of the estimators
yk(t) (6.4.20a) and ỹk(t) (6.4.20b), and the samples yt,0 and ỹt,0 of the estimators
yk(t) |_{n(t)=0} (6.4.21a) and ỹk(t) |_{n(t)=0} (6.4.21b), are identically equal to the
received signal sk(t):

yt = yk(t) = sk(t);    (6.4.25a)

ỹt = ỹk(t) = sk(t);    (6.4.25b)

yt,0 = yk(t) |_{n(t)=0} = sk(t);    (6.4.26a)

ỹt,0 = ỹk(t) |_{n(t)=0} = sk(t).    (6.4.26b)
Substituting the obtained values of the samples yt (6.4.25a) and yt,0 (6.4.26a) into
the relationship (6.4.22a), and also the values of the samples ỹt (6.4.25b) and
ỹt,0 (6.4.26b) into the relationship (6.4.22b), we note that the values of metrics
µ(yt , yt,0 ), µ(ỹt , ỹt,0 ) between the signal sk (t) and its estimators yk (t) and ỹk (t),
while solving the problem of classification of the signals from a set S = {si (t)},
i = 1, . . . , m in signal space with the properties of the lattice Γ(∨, ∧) of L-group
Γ(+, ∨, ∧), are identically equal to zero:

µsŝ = µ(yt , yt,0 ) = 0; (6.4.27a)

µ̃sŝ = µ(ỹt , ỹt,0 ) = 0. (6.4.27b)


From the relationships (6.4.27) and the coupling equation (6.4.16), we obtain that
quality indices of the estimators νsŝ , ν̃sŝ of the signals, while solving the problem
of their classification from a set S = {si (t)}, i = 1, . . . , m in signal space with the
properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧), take absolute values equal
to 1:
νsŝ = 1 − µ(yt , yt,0 ) = 1; (6.4.28a)
ν̃sŝ = 1 − µ(ỹt , ỹt,0 ) = 1. (6.4.28b)

The relationships (6.4.28) imply very important conclusions about solving the problem
of classification of the signals from a set S = {si(t)}, i = 1, . . . , m in signal space
with the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧). There exists a
possibility to provide absolute quality indices of the estimators of the signals νsŝ,
ν̃sŝ. This fact creates the necessary conditions to extract deterministic signals in the
presence of interference (noise) without information losses, which is uncharacteristic
of the same problem within signal space with the properties of the group Γ(+) of
L-group Γ(+, ∨, ∧) (particularly, in linear signal space). Another advantage of solving
the problem of classification of deterministic signals from a set S = {si(t)}, i =
1, . . . , m in signal space with the properties of the lattice Γ(∨, ∧) of L-group
Γ(+, ∨, ∧) is the invariance of both the metrics (6.4.27) and the quality indices of
the estimators of the signals (6.4.28) with respect to the conditions of parametric
and nonparametric prior uncertainty. The quality indices of the estimators of the
signals in the signal space with lattice properties (6.4.28), as compared to the quality
indices of the estimators of the signals in linear signal space (6.4.18), do not depend
on signal-to-noise ratio or on the interference (noise) distribution in the input of the
processing unit. The problem of synthesis of an optimal algorithm of signal
classification in signal space with lattice properties demands additional research,
which is the subject of the following chapter.

Generally, the results obtained in this subsection will be used later to determine
the capacity of discrete communication channels functioning in the signal space with
the properties of the group Γ(+) and the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧).

6.4.2 Quality Indices of Deterministic Signal Detection in Metric Spaces with
L-group Properties
The problem of binary signal detection can be considered as classification of a pair
of signals, one of which is assumed to be equal to zero. For signal detection in signal
space with the properties of L-group Γ(+, ∨, ∧), we will use the results of the
previous subsection.

Generally, the quality of signal detection is determined by the conditional proba-
bilities of false alarm F and correct detection D (or false dismissal 1 − D) [163],
[155], [149]. Here, however, the quality indices of signal detection will not be
identified with the conditional probabilities F, D; instead, within the framework of
the problem of signal classification in signal space with the properties of L-group
Γ(+, ∨, ∧), they will be determined by the quality index of the estimator of the
detected signal, based on the metric relationships between the signals (in this case,
between the signal s(t) and its estimator ŝ(t)).
Consider the general model of interaction between deterministic signal s(t) and
interference (noise) n(t) in signal space with the properties of L-group Γ(+, ∨, ∧):

x(t) = θs(t) ⊕ n(t), t ∈ Ts , (6.4.29)

where ⊕ is some binary operation of L-group Γ(+, ∨, ∧); θ is an unknown nonran-
dom parameter that can take only two values, θ ∈ {0, 1}: θ = 0 (signal is absent) or
θ = 1 (signal is present); Ts is the domain of definition of the signal s(t), Ts = [t0, t1];
t0 is the known time of arrival of the signal s(t); t1 is the known time of signal ending;
T = t1 − t0 is the duration of the signal s(t).
Within the general model (6.4.29), consider the additive interaction of deter-
ministic signal and interference (noise) in signal space with the properties of the
group Γ(+) of L-group Γ(+, ∨, ∧):

x(t) = θs(t) + n(t), t ∈ Ts , θ ∈ {0, 1}. (6.4.30)

Suppose that interference n(t) is white Gaussian noise with a power spectral density
N0 .
While solving the problem of signal detection in linear signal space, i.e., when the
interaction equation (6.4.30) holds, the sufficient statistic y(t) is the scalar product
(x(t), s(t)) of the signals x(t) and s(t) in Hilbert space, equal to the correlation
integral ∫_{t∈Ts} x(t)s(t)dt:

y(t) = (x(t), s(t)) = ∫_{t∈Ts} x(t)s(t)dt,    (6.4.31)

where y(t) = ŝ(t) is the estimator of the energetic parameter of the signal s(t)
(simply the estimator ŝ(t) of the signal s(t)).
The signal detection problem can be solved by maximization of the sufficient statistic
y(t) (6.4.31) on the domain of definition of the signal Ts and its comparison with a
threshold l0:

y(t) = ∫_{t∈Ts} x(t)s(t)dt → max_{Ts},   y(t) ≷^{d1}_{d0} l0,

where d1 and d0 are the decisions concerning the presence and the absence of the
signal s(t) in the observed process x(t); d1: θ̂ = 1, d0: θ̂ = 0, respectively; θ̂ is an
estimate of the parameter θ, θ ∈ {0, 1}.
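A minimal discrete-time sketch of this correlation detector, with illustrative waveform,
noise level, and threshold choice, may look as follows:

```python
# Correlation detection, cf. Eqs. (6.4.30)-(6.4.31): correlate and threshold.
import numpy as np

rng = np.random.default_rng(3)
fs = 1000
t = np.arange(0.0, 1.0, 1 / fs)
s = np.sin(2 * np.pi * 10 * t)               # deterministic signal s(t)
E = np.sum(s * s) / fs                       # signal energy E

theta = 1                                    # true hypothesis: signal present
x = theta * s + rng.normal(scale=0.7, size=t.size)     # x(t) = θs(t) + n(t)

y = np.sum(x * s) / fs                       # correlation integral, Eq. (6.4.31)
l0 = E / 2                                   # an assumed midpoint threshold, cf. Eq. (6.4.36)
decision = 1 if y > l0 else 0                # d1: θ̂ = 1; d0: θ̂ = 0
print(decision)
```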
In the presence (θ = 1) of the signal s(t) in the observed process x(t) (6.4.30),
at the instant t = t0 + T, the correlation integral ∫_{t∈Ts} x(t)s(t)dt takes an
average value equal to the energy E of the signal s(t):

M{y(t)} = M{∫_{t∈Ts} x(t)s(t)dt} |_{θ=1} = ∫_{t∈Ts} s(t)s(t)dt = E,    (6.4.32)

where y(t) is the estimator of the signal s(t) in the output of the detector; M(∗) is
the symbol of mathematical expectation. In the absence (θ = 0) of the signal s(t)
in the observed process x(t) (6.4.30), at the instant t = t0 + T, the correlation
integral ∫_{t∈Ts} x(t)s(t)dt takes an average value equal to zero:

M{y(t)} = M{∫_{t∈Ts} x(t)s(t)dt} |_{θ=0} = 0.    (6.4.33)

In the output of the detector, the PDF pŝ(y) |_{θ∈{0,1}} of the estimator y(t) of the
signal s(t), at the instant t = t0 + T, in the presence of the signal (θ = 1) or in its
absence (θ = 0) in the observed process x(t), is determined by the expression:

pŝ(y) |_{θ∈{0,1}} = (2πD)^(−1/2) exp[−(y − θE)²/(2D)],    (6.4.34)
where D = EN0 is a noise variance in the output of detector; E is an energy of
the signal s(t); N0 is power spectral density of interference (noise) in the input of
processing unit.
Consider the estimator y(t) of the signal s(t) in the output of the detector in the
presence of the signal in the observed process x(t) = s(t) + n(t) (6.4.30), and also
the estimator y(t) |_{n(t)=0} of the signal s(t) in the output of the detector in the
absence of interference (noise) (n(t) = 0) in the observed process x(t), x(t) = s(t).
To characterize the quality of the estimator y (t) of the signal s(t) while solving
the problem of its detection in signal space with the properties of the group Γ(+) of
L-group Γ(+, ∨, ∧) (6.4.30), we introduce the function µ(yt , yt,0 ), which is analogous
to metric (6.4.9):
µ(yt , yt,0 ) = 2[P(yt ∨ yt,0 > h) − P(yt ∧ yt,0 > h)], (6.4.35)
where yt is a sample of the estimator y(t) of the signal s(t) in the output of the
detector at the instant t = t0 + T in the presence of the signal s(t) and interference
(noise) n(t) in the observed process x(t); yt,0 is a sample of the estimator y(t) |_{n(t)=0}
of the signal s(t) in the output of the detector at the instant t = t0 + T in the
absence of interference (noise) (n(t) = 0) in the observed process x(t), x(t) = s(t),
which is equal to the signal energy E: yt,0 = E; h is a threshold level determined by
the average of the two mathematical expectations of the processes in the output of
the detector (6.4.32) and (6.4.33):

h = E/2.    (6.4.36)
Joint fulfillment of the equality yt,0 = E and the inequality h < E implies that the
probabilities appearing in the expression (6.4.35) are equal to:
P(yt ∨ yt,0 > h) = P(yt,0 > h) = 1; (6.4.37a)
P(yt ∧ yt,0 > h) = P(yt > h) = 1 − Fŝ (h) |θ=1 , (6.4.37b)
where Fŝ(y) |_{θ=1} is the CDF of the estimator y(t) in the output of the detector at
the instant t = t0 + T in the presence of the signal s(t) (θ = 1) in the observed
process x(t) (6.4.30), which, according to the PDF (6.4.34), is equal to:

Fŝ(y) |_{θ=1} = (2πD)^(−1/2) ∫_{−∞}^{y} exp[−(x − E)²/(2D)] dx.    (6.4.38)

Substituting the formulas (6.4.37) into the expression (6.4.35), we obtain the resul-
tant value of the metric µ(yt, yt,0) between the signal s(t) and its estimator y(t)
while solving the problem of signal detection in the group Γ(+) of L-group
Γ(+, ∨, ∧):

µsŝ = µ(yt, yt,0) = 2Fŝ(h) |_{θ=1} = 2[1 − Φ((E − h)/√D)] = 2[1 − Φ(√(q²/4))],    (6.4.39)

where Fŝ(y) |_{θ=1} is the CDF of the estimator y(t) in the output of the detector at
the instant t = t0 + T in the presence of the signal s(t) (θ = 1) in the observed
process x(t) (6.4.30), which is determined by the formula (6.4.38); Φ(z) =
(2π)^(−1/2) ∫_{−∞}^{z} exp{−x²/2} dx is the probability integral; D = EN0 is the noise
variance in the output of the detector; E is the energy of the signal s(t); h is the
threshold level determined by the relationship (6.4.36); N0 is the power spectral
density of interference (noise) n(t) in the input of the detector; q² = E/N0 is the
signal-to-noise ratio.
By the analogy with the quality index of signal classification (6.4.16), we define
quality index of signal detection νsŝ .

Definition 6.4.2. By quality index of the estimator of the signal νsŝ while solving
the problem of its detection, we mean the NMSI ν(yt, yt,0) between the samples yt,0
and yt of the stochastic processes y(t) |_{n(t)=0}, y(t) in the output of the detector,
which is connected with the metric µ(yt, yt,0) (6.4.35) by the following relationship:

νsŝ = ν(yt, yt,0) = 1 − µ(yt, yt,0).    (6.4.40)

Substituting the value of the metric (6.4.39) into the coupling equation (6.4.40),
we obtain the dependence of the quality index of detection on the signal-to-noise
ratio q² = E/N0:

νsŝ(q²) = 2Φ(√(q²/4)) − 1.    (6.4.41)

Based on the obtained formula, the quality index of detection νsŝ(q²) (6.4.41) is
identically equal to the quality index of classification of orthogonal signals νsŝ(q²) |⊥
(6.4.18a): νsŝ(q²) = νsŝ(q²) |⊥, which, of course, is trivial, taking into account the
known link between signal detection and signal classification.
Within the general model (6.4.29), consider the interaction of deterministic
signal s(t) and interference (noise) n(t) in signal space with the properties of the
lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧), which is described by two binary operations
of join and meet, respectively:

x(t) = θs(t) ∨ n(t); (6.4.42a)

x̃(t) = θs(t) ∧ n(t), (6.4.42b)


where θ is an unknown nonrandom parameter that is able to take only two values
θ ∈ {0, 1}: θ = 0 (signal is absent) or θ = 1 (signal is present); t ∈ Ts , Ts is domain
of definition of the detected signal s(t), Ts = [t0 , t1 ]; t0 is known time of arrival of
the signal s(t); t1 is time of signal ending; T = t1 − t0 is a duration of the signal
s(t).
Suppose that interference (noise) n(t) is characterized by arbitrary probabilistic-
statistical properties.
Consider the estimators y(t), ỹ(t) of the signal s(t) in the output of a detector
in the presence of the signal (θ = 1) in the observed processes x(t) (6.4.42a), x̃(t)
(6.4.42b):

y(t) = y(t) |_{x(t)=s(t)∨n(t)} = s(t) ∧ x(t);    (6.4.43a)

ỹ(t) = ỹ(t) |_{x̃(t)=s(t)∧n(t)} = s(t) ∨ x̃(t).    (6.4.43b)

Also consider the estimators y(t) |_{n(t)=0}, ỹ(t) |_{n(t)=0} of the signal s(t) in the
output of the detector in the absence of interference (noise) (n(t) = 0) in the
observed processes x(t) (6.4.42a) and x̃(t) (6.4.42b):

y(t) |_{n(t)=0} = s(t) ∧ x(t) |_{x(t)=s(t)∨0};    (6.4.44a)

ỹ(t) |_{n(t)=0} = s(t) ∨ x̃(t) |_{x̃(t)=s(t)∧0}.    (6.4.44b)
Determine the values of the metric (6.4.35) to evaluate the quality index of the
estimator of the signal (6.4.40) while solving the problem of detection in signal
space with the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧):

µsŝ = µ(yt, yt,0) = 2[P(yt ∨ yt,0 > h) − P(yt ∧ yt,0 > h)];    (6.4.45a)

µ̃sŝ = µ(ỹt, ỹt,0) = 2[P(ỹt ∨ ỹt,0 > h) − P(ỹt ∧ ỹt,0 > h)],    (6.4.45b)

where yt and ỹt are the samples of the estimators y(t) (6.4.43a) and ỹ(t) (6.4.43b)
of the signal s(t) in the output of the detector at the instant t ∈ Ts in the presence
of the signal (θ = 1) in the observed processes x(t) (6.4.42a) and x̃(t) (6.4.42b);
yt,0 and ỹt,0 are the samples of the estimators y(t) |_{n(t)=0} (6.4.44a) and
ỹ(t) |_{n(t)=0} (6.4.44b) of the signal s(t) in the output of the detector at the instant
t ∈ Ts in the absence of interference (noise) (n(t) = 0) in the observed processes
x(t) (6.4.42a) and x̃(t) (6.4.42b); h is some threshold level.
The obtained relationships (6.4.43) and (6.4.44), together with the absorption
axiom of a lattice, imply that the samples yt, ỹt of the estimators y(t) (6.4.43a) and
ỹ(t) (6.4.43b), and also the samples yt,0, ỹt,0 of the estimators y(t) |_{n(t)=0} (6.4.44a)
and ỹ(t) |_{n(t)=0} (6.4.44b), are identically equal to the received useful signal s(t):

yt = y(t) = s(t);    (6.4.46a)

ỹt = ỹ(t) = s(t);    (6.4.46b)

yt,0 = y(t) |_{n(t)=0} = s(t);    (6.4.47a)

ỹt,0 = ỹ(t) |_{n(t)=0} = s(t).    (6.4.47b)
Substituting the obtained values of the pair of samples yt (6.4.46a) and yt,0 (6.4.47a)
into the relationship (6.4.45a), and also substituting the values of the pair of samples
ỹt (6.4.46b) and ỹt,0 (6.4.47b) into the relationship (6.4.45b), we obtain that the
values of the metrics µ(yt, yt,0) and µ(ỹt, ỹt,0) between the signal s(t) and its
estimators y(t), ỹ(t), while solving the problem of signal detection in signal space
with the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧), are equal to zero:

µsŝ = µ(yt, yt,0) = 0;    (6.4.48a)

µ̃sŝ = µ(ỹt, ỹt,0) = 0.    (6.4.48b)


The relationships (6.4.48) and the coupling equation (6.4.40) imply that quality
indices of the estimators of the signals νsŝ and ν̃sŝ , while solving the problem of
signal detection in signal space with the properties of the lattice Γ(∨, ∧) of L-group
Γ(+, ∨, ∧), take absolute values equal to 1:
νsŝ = 1 − µsŝ = 1; (6.4.49a)
ν̃sŝ = 1 − µ̃sŝ = 1. (6.4.49b)
The relationships (6.4.49) imply an important conclusion. While solving the prob-
lem of signal detection in signal space with the properties of the lattice Γ(∨, ∧) of
L-group Γ(+, ∨, ∧), there exists the possibility to provide absolute quality indices
of the estimators of the signals νsŝ and ν̃sŝ. This creates the necessary conditions to
detect deterministic signals in the presence of interference (noise) without informa-
tion losses, which is uncharacteristic when solving the same problem within signal
space with the properties of the group Γ(+) of L-group Γ(+, ∨, ∧) (particularly,
in linear signal space). One more advantage of solving the problem of detection of
deterministic signals in signal space with the properties of the lattice Γ(∨, ∧) of
L-group Γ(+, ∨, ∧) is the invariance of both the metrics (6.4.48) and the quality
indices of the estimators of the signals (6.4.49) with respect to parametric and
nonparametric prior uncertainty conditions. The quality indices of signal detection
in the signal space with lattice properties (6.4.49), as against the quality indices
of signal detection in linear signal space (6.4.41), do not depend on signal-to-noise
ratio or on the interference (noise) distribution in the input of the processing unit.
The problem of synthesis of an optimal signal detection algorithm in signal space
with lattice properties demands additional research, which is the subject of the
following chapter.
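A compact numerical illustration of the lattice detection model (6.4.42a), (6.4.43a):
when the signal is present, the absorption law yields y(t) ≡ s(t) exactly; when it is
absent, the identity fails almost surely, which separates the two hypotheses (all
waveforms and the noise law here are illustrative assumptions):

```python
# Lattice-model detection: test whether the estimator reproduces s(t) exactly.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 1000)
s = np.sin(2 * np.pi * 5 * t)
n = rng.normal(size=t.size)

for theta in (0, 1):
    x = np.maximum(theta * s, n)             # x(t) = θs(t) ∨ n(t), Eq. (6.4.42a)
    y = np.minimum(s, x)                     # y(t) = s(t) ∧ x(t),  Eq. (6.4.43a)
    print(theta, np.array_equal(y, s))       # exact match only under θ = 1
```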

6.5 Capacity of Communication Channels Operating in Presence of Interference
(Noise)
Based on formalized descriptions of the signals and their instantaneous values as
the elements of metric space with special properties and measures of information
quantity, and relying upon the results in Section 6.2, we consider the informational
characteristics of communication channels operating in metric spaces with L-group
properties to establish the most general regularities defining the processes of infor-
mation transmitting and receiving in the presence of interference (noise).
In Section 5.2, we defined the relationships establishing upper bounds of the
capacities of continuous and discrete noiseless channels. The goals of this section are:
(1) to obtain the relationships characterizing the capacities of continuous and discrete
channels that operate in the presence of interference (noise), and (2) to perform a
brief comparative analysis of the obtained relationships determining channel capacity
against known results of classical information theory.

Consider a generalized structural scheme of a communication channel function-
ing in the presence of interference (noise), as shown in Fig. 6.5.1. In the input of
the receiving set of communication system, let the result x(t) of interaction be-
tween useful signal s(t) and interference (noise) n(t) be determined by some binary
operation of L-group Γ(+, ∨, ∧):

x(t) = s(t) ⊕ n(t); t ∈ Ts , (6.5.1)

where ⊕ is some binary operation of L-group: +, ∨, ∧; Ts = [t0 , t0 + T ] is a


domain of definition of useful signal s(t); T is a duration of useful signal s(t).
The interference (noise) exerts a negative influence on the quantity of absolute
information Iŝ contained in the estimator ŝ(t) = fŝ [x(t)], t ∈ Ts of useful signal
s(t) formed by the proper processing of the interaction result x(t) (6.5.1) known as
filtering of useful signal (or its extraction) (see Fig. 6.5.1). The quantity of absolute
information Ix contained in the observed process x(t) can be arbitrarily large due to
interference (noise) influence. The task of the receiving set is forming a processing
function fŝ [∗], so that an obtained estimator ŝ(t) = fŝ [x(t)], t ∈ Ts of useful signal
s(t) formed in the output of the receiving set would contain maximum information
quantity concerning the signal s(t). In other words, the main informational property
of the estimator ŝ(t) obtained as the result of extraction of useful signal s(t) in the
presence of interference (noise) n(t) must provide the largest quantity of mutual
information Isŝ contained in the estimator ŝ(t) concerning the useful signal s(t):

ŝ(t) = fŝ [x(t)] |Isŝ →max . (6.5.2)

On the base of the relationship (6.5.2), we define the capacity Cn of a communication
channel functioning in the presence of interference (noise).

FIGURE 6.5.1 Generalized structural scheme of communication channel functioning in
presence of interference (noise)

Definition 6.5.1. By capacity Cn of a communication channel operating in the
presence of interference (noise) in physical signal space Γ with the properties of
L-group Γ(+, ∨, ∧), we mean the largest quantity of mutual information Isŝ between
useful signal s(t) and its estimator ŝ(t) that can be extracted from the interaction
result x(t) (6.5.1) in a signal duration T:
Cn = max_{ŝ∈Γ} (Isŝ / T)  (abit/s).    (6.5.3)

According to Theorem 6.2.3, under interaction (6.5.1) between useful signal
s(t) and interference (noise) n(t) in physical signal space Γ with the properties of
L-group Γ(+, ∨, ∧), between the quantities of mutual information Isx and Isŝ con-
tained in the pairs of the signals s(t), x(t) and s(t), ŝ(t), respectively, the inequality
holds:
Isŝ ≤ Isx , (6.5.4)
so that the inequality (6.5.4) turns into identity if and only if the mapping fŝ
(6.5.2) belongs to a group H = {hα } of continuous mappings fŝ ∈ H preserving
null (identity) element 0 of the group Γ(+) of L-group Γ(+, ∨, ∧): hα (0) = 0,
hα ∈ H.
In the first subsection, we investigate the informational characteristics of contin-
uous communication channels and in the second one we consider the informational
characteristics of discrete communication channels.

6.5.1 Capacity of Continuous Communication Channels Operating in Presence of
Interference (Noise) in Metric Spaces with L-group Properties
According to the informational inequality of signal processing (6.5.4) of Theo-
rem 6.2.3, the capacity Cn of a continuous communication channel functioning
in the presence of interference (noise) in physical signal space Γ with the proper-
ties of L-group Γ(+, ∨, ∧) is determined by the quantity of mutual information Isx
contained in the observation result x(t) (6.5.1) concerning the useful signal s(t) on
its domain of definition Ts with a duration T of this interval:
Cn = Isx / T  (abit/s).    (6.5.5)
According to the expression (6.2.4), determining the quantity of mutual information
Isx , the relationship (6.5.5) takes the form:
Cn = (νsx Is) / T = νsx C  (abit/s),    (6.5.6)
where νsx = ν (st , xt ) is normalized measure of statistical interrelationship (NMSI)
of the samples st and xt of stochastic signals s(t) and x(t); Is is the quantity of
absolute information contained in useful signal s(t); C = Is /T is a continuous
noiseless channel capacity.
Thus, the capacity Cn (abit/s) of a continuous channel functioning under in-
teraction (6.5.1) between useful signal s(t) and interference (noise) n(t) in physical
signal space Γ with the properties of L-group Γ(+, ∨, ∧), is determined by capac-
ity C (abit/s) of noiseless continuous channel and by NMSI νsx = ν (st , xt ) of the
samples st and xt of stochastic signals s(t) and x(t).
The capacity of the channel in the presence of interference (noise) Cn (bit/s)
measured in bit/s, according to the expression (5.2.11), is determined by the rela-
tionship:
Cn (bit/s) = Cn (abit/s) · log2 [Cn (abit/s) · 1 s]. (6.5.7)

On the base of the obtained general relationships (6.5.5) and (6.5.6), we consider
the peculiarities of evaluating the capacities of continuous channels operating in the
presence of interference (noise) for the cases of concrete kinds of interaction (6.5.1)
between useful signal s(t) and interference (noise) n(t) in physical signal space Γ
with the properties of L-group Γ(+, ∨, ∧), where ⊕ is some binary operation of
L-group: +, ∨, ∧.
For the case of additive interaction (6.5.1) between useful Gaussian signal s(t)
and interference (noise) n(t) in the form of white Gaussian noise (WGN), we deter-
mine the capacity Cn,+ of continuous channel functioning in physical signal space
Γ with the properties of additive commutative group Γ(+) of L-group Γ(+, ∨, ∧):

x(t) = s(t) + n(t); t ∈ Ts . (6.5.8)

Substituting the relationships (6.2.13) and (6.2.14) into the formula (6.5.5), and the
relationships (6.2.12) and (6.2.14) into the formula (6.5.6), we obtain the expression
for the capacity Cn,+ of a Gaussian continuous channel with WGN:

Cn,+ = Isx/T = (νsx Is)/T = νsx C = (2C/π) arcsin[q/√(1 + q²)]  (abit/s),    (6.5.9)

where νsx = (2/π) arcsin[ρsx] is the NMSI between the samples st, xt of the Gaussian
stochastic signals s(t), x(t); ρsx = q/√(1 + q²) is the correlation coefficient between
the samples st and xt of the Gaussian stochastic signals s(t) and x(t); q² = S0/N0 =
Ds/Dn is the ratio of the energy S0 of the Gaussian useful signal to the power
spectral density N0 of the interference (noise) n(t); Ds = R(0) =
(1/2π) ∫_{−∞}^{∞} S(ω)dω is the variance of the useful signal s(t); S0 = S(0) =
∫_{−∞}^{∞} R(τ)dτ; Dn = N0 (∫_{−∞}^{∞} S(ω)dω)/(2πS0) is the variance of the
interference (noise) n(t) brought to the energy of the useful signal s(t); S(ω) is the
power spectral density of the useful signal s(t); N0 is the power spectral density of
the interference (noise) n(t) in the form of WGN; C is the continuous noiseless
channel capacity.
Thus, under additive interaction (6.5.8) between useful signal s(t) and interfer-
ence (noise) n(t), the capacity Cn,+ (abit/s) of continuous channel functioning in
physical signal space Γ with the properties of additive commutative group Γ(+) of
L-group Γ(+, ∨, ∧) is determined by the capacity C (abit/s) of the noiseless channel
and the ratio q 2 = S0 /N0 of the energy of the Gaussian useful signal s(t) to the
power spectral density of interference (noise) n(t), but it does not exceed noiseless
channel capacity C (abit/s).
The capacity Cn,+ (bit/s) of the Gaussian continuous channel with additive WGN,
according to the formulas (6.5.7) and (6.5.9), is determined by the relationship:

Cn,+ (bit/s) = Cn,+ (abit/s) · log2 [Cn,+ (abit/s) · 1 s],    (6.5.10)

where Cn,+ = (2C/π) arcsin[q/√(1 + q²)] (abit/s); q² = S0/N0 = Ds/Dn is the ratio
of the energy of the Gaussian useful signal s(t) to the power spectral density of the
interference (noise) n(t).
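As a worked sketch of Eqs. (6.5.9), (6.5.7), and (6.5.10), the following helper computes
Cn,+ in abit/s and converts it to bit/s; the noiseless capacity C and the ratios q² are
illustrative inputs:

```python
# Capacity of a Gaussian continuous channel with additive WGN.
from math import asin, sqrt, log2, pi

def capacity_awgn_abit(C, q2):
    """Cn,+ = (2C/π)·arcsin(q/√(1 + q²)), Eq. (6.5.9)."""
    return (2 * C / pi) * asin(sqrt(q2) / sqrt(1 + q2))

def abit_to_bit(Cn):
    """Cn(bit/s) = Cn(abit/s)·log2[Cn(abit/s)·1 s], Eq. (6.5.7)."""
    return Cn * log2(Cn)

C = 1.0e4                                    # assumed noiseless capacity, abit/s
for q2 in (0.1, 1.0, 10.0, 100.0):
    Cn = capacity_awgn_abit(C, q2)
    print(f"q^2 = {q2:>6}: {Cn:10.0f} abit/s = {abit_to_bit(Cn):12.0f} bit/s")

# As q² grows, Cn,+ approaches the noiseless capacity C but never exceeds it.
```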

We now determine the capacity Cn,∨/∧ of continuous channels functioning in
the presence of interference (noise) in the interaction (6.5.1) between useful signal
s(t) and interference (noise) n(t) in physical signal space Γ, where ⊕ is a binary
operation of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧) in the form of its join and
meet, respectively:
x(t) = s(t) ∨ n(t); (6.5.11a)
x̃(t) = s(t) ∧ n(t), (6.5.11b)
where t ∈ Ts ; Ts = [t0 , t0 + T ] is a domain of definition of useful signal s(t); T is a
duration of useful signal s(t).
Substituting the identities (6.2.16a) and (6.2.16b) into the formula (6.5.5), we
obtain the relationships for the capacity Cn,∨ and Cn,∧ of continuous channel func-
tioning under the interaction between useful signal s(t) and interference (noise) n(t)
in the form of join and meet of the lattice Γ(∨, ∧):

Cn,∨ = Isx /T = 0.5Is /T = 0.5C ; (6.5.12a)

Cn,∧ = Isx̃ /T = 0.5Is /T = 0.5C, (6.5.12b)


where C is continuous noiseless channel capacity; Isx and Isx̃ are the quantities
of mutual information contained in the observed processes x(t) (6.5.11a) and x̃(t)
(6.5.11b) concerning useful signal s(t), respectively; T is a duration of useful signal
s(t).
Note that the sense of the relationship (6.2.16c), as the consequence of the iden-
tities (6.2.16a) and (6.2.16b), lies in the fact that by using the results of simulta-
neous processing of the observations in the form of join (6.5.11a) and meet
(6.5.11b) of the lattice Γ(∨, ∧), one can avoid the losses of information when
extracting a useful signal. Due to fulfillment of the relationship (3.2.39b), elucidating
the statistical independence of the processes x(t) (6.5.11a), x̃(t) (6.5.11b), each of
them, according to the identities (6.2.16a) and (6.2.16b), contains half the information
quantity contained in the useful signal s(t). On a qualitative level, this information
is distinct with respect to its content, inasmuch as the processes x(t) and x̃(t) are
statistically independent. However, taken together (but not in the sum!), they
contain the complete information contained in the useful signal.
Substituting the identity (6.2.16c) into the formula (6.5.5), we obtain the rela-
tionship for the capacity Cn,∨/∧ of a continuous channel functioning in the presence
of interference (noise) with simultaneous processing of the observations in the form
of join (6.5.11a) and meet (6.5.11b) of the lattice Γ(∨, ∧):

Cn,∨/∧ = (Isx + Isx̃ )/T = Is /T = C, (6.5.13)

where C is the continuous noiseless channel capacity.


The resultant capacity of continuous channel Cn,∨/∧ (abit/s), functioning in
the presence of interference (noise) under the interactions (6.5.11a,b) between use-
ful signal s(t) and interference (noise) n(t) in physical signal space Γ with the
properties of the lattice Γ(∨, ∧), is determined by the noiseless continuous channel
capacity C (abit/s) and does not depend on energetic relationships between use-
ful signal and interference (noise) and on their probabilistic-statistical properties.
The last circumstance defines the invariance property of continuous channels with
lattice properties with respect to parametric and nonparametric prior uncertainty
conditions.
The capacity Cn,∨/∧ (bit/s) of continuous channels functioning in the presence
of interference (noise) with simultaneous processing of the observations in the form
of join (6.5.11a) and meet (6.5.11b) of the lattice Γ(∨, ∧), measured in bit/s,
according to the formulas (6.5.7) and (6.5.13), is determined by the relationship:

Cn,∨/∧ (bit/s) = Cn,∨/∧ (abit/s) · log2 [Cn,∨/∧ (abit/s) · 1 s] =

= C (abit/s) · log2 [C (abit/s) · 1 s], (6.5.14)


where C is continuous noiseless channel capacity.
Thus, the obtained results (6.5.9), (6.5.10) and (6.5.13), (6.5.14), determining
the expressions for the capacities of continuous channels operating in linear signal
spaces and the signal spaces with lattice properties respectively, imply that, inde-
pendently of energetic relationships between useful signal and interference (noise),
the capacity of a continuous channel is a finite quantity determined by the con-
tinuous noiseless channel capacity. These results, although contradictory to known
results of classical information theory, confirm the remarkable phrase of Vadim Mi-
tyugov in 1976 [92]: “. . . on a given signal power, even in the absence of interference,
information transmission rate is always a finite quantity”.

6.5.2 Capacity of Discrete Communication Channels Operating in Presence of
Interference (Noise) in Metric Spaces with L-group Properties
Consider now the peculiarities of evaluation of the capacity of discrete communica-
tion channels functioning in the presence of interference (noise).
In the most general case, the signal s(t) transmitting a discrete message u(t) in
the form of a sequence of quasi-deterministic signals {sj (t)} can be represented in
the following form:
s(t) = M[c(t), u(t)] = ∑_{j=0}^{n−1} sj(t − jτ0), t ∈ Ts,    (6.5.15)

where M[∗, ∗] is a modulating function; Ts = [t0, t0 + T] is a domain of definition of
useful signal s(t); T is a duration time of useful signal s(t); sj(t) = M[c(t), uj(t)],
sj (t) ∈ S = {si (t)}, i = 1, . . . , m; c(t) is a carrier signal; u(t) is a transmitted dis-
crete message represented by a discrete random sequence of informational symbols
{uj (t)}:
u(t) = ∑_{j=0}^{n−1} uj(t − jτ0), t ∈ Ts,    (6.5.16)

where τ0 is the duration of a symbol uj(t); n is the number of informational symbols
of the random sequence {uj(t)}.
Suppose that, at an arbitrary instant t ∈ Ts, each symbol uj(t) of the discrete
random sequence u(t) = {uj(t)} (6.5.16), j = 0, 1, . . . , n − 1, equiprobably takes
values from a set {ui}, i = 1, . . . , m.
We first define the discrete channel capacity measured in abit/s, and then define
the capacity measured in units accepted by classical information theory, i.e., in bit/s.
Based on Definition 6.5.1, by capacity Cn of discrete channel, functioning in the
presence of interference (noise) in signal space Γ with the properties of L-group
Γ(+, ∨, ∧), we mean the largest quantity of mutual information Isŝ between useful
signal s(t) and its estimator ŝ(t) that can be extracted from the interaction result
x(t) (6.5.1) in a signal duration T :

Cn = max_{ŝ∈Γ} (Isŝ / T)  (abit/s).    (6.5.17)
According to the expression (6.2.4) determining the quantity of mutual information
Isŝ , the relationship (6.5.17) takes the form:

Cn = (νsx Is) / T = νsx C  (abit/s),    (6.5.18)
where νsx = ν (st , xt ) is the NMSI of the samples st and xt of stochastic signals
s(t) and x(t); Is is a quantity of absolute information contained in the useful signal
s(t); C = Is /T is a discrete noiseless channel capacity.
We now determine the capacity Cn,+ of discrete channel functioning in the
presence of interference (noise) in the additive interaction (6.5.8) between the useful
Gaussian signal s(t) and interference (noise) n(t) in the form of white Gaussian noise
(WGN) in physical signal space Γ with the properties of the additive commutative
group Γ(+) of L-group Γ(+, ∨, ∧).
Substituting the relationship (6.4.17) into the formula (6.5.18), we obtain the
resultant expression for the capacity Cn,+ of a discrete channel with additive WGN:

Cn,+ = [2Φ(√(q²) (1 − rik)/2) − 1] C  (abit/s),    (6.5.19)

where q² = E/N0 is the signal-to-noise ratio; E is the energy of the signals si(t)
from a set S = {si(t)}, i = 1, . . . , m; N0 is the power spectral density of inter-
ference (noise) in the input of the processing unit (the receiving set); Φ(z) =
(2π)^(−1/2) ∫_{−∞}^{z} exp{−x²/2} dx is the probability integral; rik is the cross-
correlation coefficient between the signals si(t), sk(t).
In a binary channel with additive WGN, where the transmission of opposite
signals s1(t) = −s2(t) (rik = −1) is realized, its capacity, according to the formula
(6.5.19), is equal to:

Cn,+ = [2Φ(√(q²)) − 1] C  (abit/s),    (6.5.20)

where C = Is /T is binary noiseless channel capacity.


According to the relationship (5.1.13), information quantities transmitted by
the signal (6.5.15) on the base of binary sequence (6.5.16), and measured in abits
and in bits, are equivalent, so the capacity of a binary channel with additive WGN,
measured in bit/s, exactly corresponds to the capacity measured in abit/s:

Cn,+ (abit/s) = Cn,+ (bit/s).

Unfortunately, this correspondence does not hold for the capacity of a discrete
channel transmitting discrete messages whose cardinality Card{ui} = m of the set
of values {ui} of the discrete random sequence u(t) = {uj(t)} is greater than 2:
Card{ui} = m > 2, i.e., while transmitting m-ary signals.
In this case, the capacity of discrete channels with additive noise, measured in
bit/s, is connected with the capacity measured in abit/s by the relationship (5.2.10):

Cn,+ (bit/s) = { Cn,+ (abit/s) · log2 m,  m < n;
                 Cn,+ (abit/s) · log2 n,  m ≥ n,    (6.5.21)

where m is the cardinality of the set of values {ui} of the discrete random sequence
u(t) = {uj(t)} (6.5.16): m = Card{ui}; n is the length of the discrete random
sequence u(t) = {uj(t)} (6.5.16); Cn,+ (abit/s) is the capacity of a discrete channel
with interference (noise), measured in abit/s, determined by the formula (6.5.19).

According to the formula (6.5.21), over a discrete channel transmitting discrete
messages with a length of n symbols, it is impossible to transmit more information
per time unit than the quantity Cn,+ (abit/s) · log2 n (bit/s).
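A short sketch of Eqs. (6.5.19) and (6.5.21), with illustrative numeric inputs:

```python
# Capacity of an m-ary discrete channel with additive WGN, in abit/s and bit/s.
from math import erf, sqrt, log2

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def capacity_discrete_abit(C, q2, r_ik):
    """Cn,+ = [2Φ(√(q²)(1 − rik)/2) − 1]·C, Eq. (6.5.19)."""
    return (2 * Phi(sqrt(q2) * (1 - r_ik) / 2) - 1) * C

def abit_to_bit(Cn_abit, m, n):
    """Piecewise conversion of Eq. (6.5.21)."""
    return Cn_abit * log2(m if m < n else n)

Cn = capacity_discrete_abit(C=1000.0, q2=4.0, r_ik=-1.0)   # opposite signals
print(f"{Cn:.1f} abit/s -> {abit_to_bit(Cn, m=4, n=64):.1f} bit/s")
```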
We can determine the capacity Cn,∨/∧ of discrete channel functioning in the
presence of interference (noise), in the interaction between useful signal s(t) and
interference (noise) n(t) in physical signal space Γ with the properties of the lattice
Γ(∨, ∧) in the form of its join (6.5.11a) and meet (6.5.11b), respectively.
Substituting the relationships (6.4.28a) and (6.4.28b) into the formula (6.5.18),
we obtain the expressions for the capacity Cn,∨ , Cn,∧ of discrete channel with in-
terference (noise) for interactions between useful signal s(t) and interference (noise)
n(t) in physical signal space Γ in the form of join (6.5.11a) and meet (6.5.11b) of
the lattice Γ(∨, ∧), respectively:

Cn,∨ = νsŝ C = C (abit/s); (6.5.22a)

Cn,∧ = ν̃sŝ C = C (abit/s), (6.5.22b)


where C = Is /T (abit/s) is the discrete noiseless channel capacity measured in
abit/s.
In the signal space Γ(∨, ∧) with lattice properties, the capacity Cn,∨/∧ of a
discrete channel with interference (noise), measured in bit/s, is connected with
the discrete noiseless channel capacity C measured in abit/s by the relationship
(5.2.10):

Cn,∨/∧ (bit/s) = { C (abit/s) · log2 m,  m < n;
                   C (abit/s) · log2 n,  m ≥ n,    (6.5.23)

where m is the cardinality of the set of values {ui} of the discrete random sequence
u(t) = {uj(t)} (6.5.16): m = Card{ui}; n is the length of the discrete random sequence
u(t) = {uj (t)} (6.5.16); C = Is /T (abit/s) is the discrete noiseless channel capacity
measured in abit/s.
Thus, the obtained results (6.5.19) and (6.5.22a,b), determining the expressions
of the capacities of discrete channels operating in the presence of interference (noise)
in linear signal space and in signal space with lattice properties, imply that, regard-
less of energetic relationships between useful and interference signals, the capacity
of discrete channel is a finite quantity determined by the discrete noiseless channel
capacity.
The relationships (6.5.22a,b) and (6.5.23) imply an important conclusion. In
signal space Γ(∨, ∧) with lattice properties, the capacity of a discrete channel
functioning in the presence of interference (noise) is equal to the discrete noiseless
channel capacity and does not depend on signal-to-interference (signal-to-noise)
ratio or interference (noise) distribution. The relationships (6.5.22) are the direct
consequence of the absolute values of the quality indices of the signal estimators
(6.4.28) obtained while solving the problem of their classification from a set
S = {si(t)}, i = 1, . . . , m, which is stipulated by the possibility of extraction of
deterministic signals si(t) against the interference (noise) background without
information losses.
The results in this section allow us to draw the following conclusions.

1. The capacity of a channel functioning in the presence of interference
(noise) in both linear signal space (6.5.9), (6.5.19) and a signal space
with lattice properties (6.5.13), (6.5.22), is a finite quantity, whose upper
bound is determined by the noiseless channel capacity.
2. The capacity of a channel functioning in the presence of interference
(noise) in a signal space with lattice properties is equal to the noiseless
channel capacity and does not depend on energetic relations between use-
ful and interference signals and their probabilistic-statistical properties.
The last circumstance defines the invariance property of a channel func-
tioning in the signal space with lattice properties with respect to para-
metric and nonparametric prior uncertainty conditions.

6.6 Quality Indices of Resolution-Detection in Metric Spaces with L-group
Properties
The general model of signal interaction related to the problem of their resolution is
described by the formula (6.1.9). Here this problem is considered in a pared-down
form, in which the number of interacting useful signals does not exceed 2.
We consider a simple model of interaction between the signals si,k (t) from a
set of deterministic signals S = {si (t), sk (t)} and interference (noise) n(t) in signal
space with the properties of L-group Γ(+, ∨, ∧):

x(t) = θk sk (t) ⊕ θi si (t) ⊕ n(t), t ∈ Ts , (6.6.1)

where ⊕ is some binary operation of L-group Γ(+, ∨, ∧); θi,k is a random parameter
that takes the values from the set {0, 1}: θi,k ∈ {0, 1}; Ts = [t0 , t0 + T ] is the domain
of definition of the signal si,k (t); t0 is the known time of arrival of the signal si,k (t);
T is a duration of the signal si,k (t).
Let the signals from the set S = {si(t), sk(t)} be characterized by the energies
Ei,k = ∫_{t∈Ts} s²i,k(t)dt and the cross-correlation coefficient ri,k:

ri,k = ∫_{t∈Ts} si(t)sk(t)dt / √(Ei Ek).
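These definitions translate directly into a few lines of numpy for sampled signals;
the two phase-shifted waveforms below are illustrative assumptions:

```python
# Energies and cross-correlation coefficient of two sampled signals.
import numpy as np

fs = 1000
t = np.arange(0.0, 1.0, 1 / fs)
s_i = np.sin(2 * np.pi * 5 * t)
s_k = np.sin(2 * np.pi * 5 * t + np.pi / 3)   # phase-shifted copy of si(t)

E_i = np.sum(s_i ** 2) / fs                    # Ei = ∫ si²(t)dt
E_k = np.sum(s_k ** 2) / fs                    # Ek = ∫ sk²(t)dt
r_ik = (np.sum(s_i * s_k) / fs) / np.sqrt(E_i * E_k)
print(E_i, E_k, r_ik)                          # r_ik ≈ cos(π/3) = 0.5
```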

As noted in Section 6.1, the problem of resolution-classification (i.e., joint reso-
lution and classification) of signals in the presence of interference (noise) consists
in deciding, on the basis of some criterion, which combination of useful signals
from a set S = {si(t), sk(t)} is included in the observed process x(t).
In this section we consider the problem of resolution-detection (i.e., joint resolution
and detection) of a useful signal sk (t) in the presence of interference (noise) n(t),
and also in the presence of interfering signal si (t). Under this problem statement,
the task of signal processing is establishing the presence of the k-th signal in the
observed process x(t) (6.6.1), i.e., its separate detection.
The first goal of this section is evaluation of quality indices of resolution-
detection of deterministic signals on the base of metric (3.2.1). The second one is
establishing general qualitative and quantitative regularities for resolution-detection
of the signals in linear signal spaces and in signal spaces with lattice properties.
As a model of signal space generalizing the properties of spaces of the mentioned
types, we will use the signal space with the properties of L-group Γ(+, ∨, ∧).
Further, on the base of metric (3.2.1), and considering the known interrelation
between signal classification and signal resolution, based on material in Section 6.4,
we will determine the quality indices of resolution-detection of the signals in signal
space with the properties of both the group Γ(+) and the lattice Γ(∨, ∧) of L-group
Γ(+, ∨, ∧).
Within the framework of the general model (6.6.1), we consider the additive
interaction between the signals si,k (t) from a set S = {si (t), sk (t)} and interfer-
ence (noise) n(t) in signal space with the properties of the group Γ(+) of L-group
Γ(+, ∨, ∧):
x(t) = θk sk (t) + θi si (t) + n(t), t ∈ Ts , (6.6.2)
where θi,k is a random parameter taking the values from a set {0, 1}: θi,k ∈ {0, 1};
Ts = [t0 , t0 + T ] is a domain of definition of the signal si,k (t); t0 is a known time of
arrival of the signal si,k (t); T is a duration of the signal si,k (t).
Suppose that interference n(t) is white Gaussian noise with power spectral den-
sity N0 .

Consider the estimator yk(t) = ŝk(t) of the signal sk(t) in the k-th processing
channel in the presence of the signal sk(t) and in the absence of the signal si(t) in
the observed process x(t) = sk(t) + n(t) (θk = 1, θi = 0):

yk(t) = ŝk(t) |_{x(t)=sk(t)+n(t)}, t = t0 + T,    (6.6.3)

and also consider the estimator yk(t) |_{n(t)=0} = ŝk(t) |_{n(t)=0} of the signal sk(t) in
the k-th processing channel in the absence of both interference (noise) (n(t) = 0)
and the signal si(t) in the observed process x(t) = sk(t):

yk(t) |_{n(t)=0} = ŝk(t) |_{x(t)=sk(t)}, t = t0 + T.    (6.6.4)

To evaluate the quality index of the estimator yk(t) = ŝk(t) of the signal sk(t),
while solving the problem of resolution-detection of the signals in the group Γ(+)
of L-group Γ(+, ∨, ∧) in the presence of the signal sk(t) and in the absence of the
signal si(t) in the observed process x(t) = sk(t) + n(t) (θk = 1, θi = 0), we use
the metric µskŝk = µ(yt, yt,0) (6.4.9) introduced in Section 6.4, which characterizes
the distinction between the estimator yk(t) (6.6.3) of the signal sk(t) in the k-th
processing channel and the estimator yk(t) |_{n(t)=0} (6.6.4) of the signal sk(t) in the
absence of interference (noise):

µskŝk = µ(yt, yt,0) = 2[P(yt ∨ yt,0 > hk) − P(yt ∧ yt,0 > hk)],    (6.6.5)

where yt is the sample of the estimator yk(t) (6.6.3) of the signal sk(t) in the output
of the k-th processing channel at the instant t = t0 + T in the presence of the signal
sk(t) and in the absence of the signal si(t) in the observed process x(t); yt,0 is the
sample of the estimator yk(t) |_{n(t)=0} (6.6.4) of the signal sk(t) in the output of the
k-th processing channel at the instant t = t0 + T in the absence of interference
(noise) (n(t) = 0) and the signal si(t) in the observed process x(t), which is equal
to the signal energy Ek: yt,0 = Ek; hk is some threshold level hk < Ek determined by
the average of the two mathematical expectations of the processes in the k-th
processing channel (6.4.32) and (6.4.33):

hk = Ek/2.    (6.6.6)
The equality yt,0 = Ek and the inequality hk < Ek imply that the probabilities
appearing in the expression (6.6.5) are equal to:

P(yt ∨ yt,0 > hk ) = P(yt,0 > hk ) = 1; (6.6.7a)

P(yt ∧ yt,0 > hk ) = P(yt > hk ) = 1 − Fŝk (hk ) |θk =1 , (6.6.7b)


where Fŝk (y ) |θk =1 is the cumulative distribution function (CDF) of the estimator
yk (t) in the output of detector at the instant t = t0 + T in the presence of the signal
sk (t) (θk = 1) in the observed process x(t) (6.6.2), which, according to the PDF
(6.4.34), is equal to:
Fŝk(y) |_{θk=1} = (2πDk)^(−1/2) ∫_{−∞}^{y} exp[−(x − Ek)²/(2Dk)] dx,    (6.6.8)

where Dk = Ek N0 is a variance of a noise in the output of the detector; Ek is an


energy of the signal sk (t); N0 is the power spectral density of interference (noise)
in the input of processing unit.
Substituting the formulas (6.6.7) into the expression (6.6.5), we obtain the resul-
tant value of the metric µskŝk = µ(yt, yt,0) between the signal sk(t) and its estimator
yk(t) while solving the problem of resolution-detection in the group Γ(+) of L-group
Γ(+, ∨, ∧):

µskŝk = µ(yt, yt,0) = 2Fŝk(hk) |_{θk=1} = 2[1 − Φ((Ek − hk)/√Dk)] = 2[1 − Φ(√(q²k/4))],    (6.6.9)

where Fŝk(y) |_{θk=1} is the CDF of the estimator yk(t) in the output of the detector
at the instant t = t0 + T, in the presence of the signal sk(t) (θk = 1) in the
observed process x(t) (6.6.2), which is determined by the formula (6.6.8); Φ(z) =
(2π)^(−1/2) ∫_{−∞}^{z} exp{−x²/2} dx is the probability integral; Dk = EkN0 is the
variance of the noise in the k-th processing channel; Ek is the energy of the signal
sk(t); q²k = Ek/N0 is the signal-to-noise ratio in the k-th processing channel; hk is
the threshold level determined by the relationship (6.6.6); N0 is the power spectral
density of interference (noise) in the input of the processing unit.
By analogy with the quality index of signal classification (6.4.16) and the quality index of signal detection (6.4.40), we now determine the quality index of resolution-detection of the signals νsk ŝk.
Definition 6.6.1. By quality index of estimator of the signals νsk ŝk, while solving the problem of resolution-detection in the presence of the signal sk(t) and in the absence of the signal si(t) in the observed process x(t) (θk = 1, θi = 0) (6.6.1), we mean the normalized measure of statistical interrelationship (NMSI) ν(yt, yt,0) between the samples yt,0 and yt of the stochastic processes yk(t)|n(t)=0 and yk(t) in the k-th processing channel of a unit of resolution-detection, connected with the metric µsk ŝk = µ(yt, yt,0) (6.6.5) by the following equation:

νsk ŝk = ν (yt , yt,0 ) = 1 − µsk ŝk . (6.6.10)

Substituting the value of metric (6.6.9) into the coupling equation (6.6.10) be-
tween the NMSI and metric, we obtain the dependence of quality index of resolution-
detection νsk ŝk on the signal-to-noise ratio:
νsk ŝk(qk²) = 2Φ(√(qk²/4)) − 1, (6.6.11)

where qk2 = Ek /N0 is the signal-to-noise ratio in the k-th processing channel.
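The dependence (6.6.11) is straightforward to tabulate numerically; the following minimal Python sketch (the SNR values are illustrative, and scipy's norm.cdf plays the role of the probability integral Φ(z)) does so:

```python
# Sketch: quality index of resolution-detection (6.6.11) in the group Gamma(+):
# nu(q_k^2) = 2*Phi(sqrt(q_k^2/4)) - 1, where q_k^2 = E_k/N_0 is the SNR.
import numpy as np
from scipy.stats import norm

def nu_group(q2):
    """Quality index (6.6.11) as a function of the SNR q2 = E_k/N_0."""
    return 2.0 * norm.cdf(np.sqrt(q2 / 4.0)) - 1.0

for q2 in (1.0, 4.0, 9.0, 16.0):
    print(f"q_k^2 = {q2:5.1f}  ->  nu = {nu_group(q2):.4f}")
```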
Consider now the estimator yk (t) = ŝk (t) of the signal sk (t) in the k-th pro-
cessing channel in the presence of signals sk (t) and si (t) in the observed process
x(t) = sk (t) + si (t) + n(t) (θk = 1, θi = 1):

yk(t) = ŝk(t)|x(t)=sk(t)+si(t)+n(t), t = t0 + T. (6.6.12)

and also consider the estimator yk(t)|n(t)=0 = ŝk(t)|n(t)=0 of the signal sk(t) in the k-th processing channel in the absence of interference (noise) (n(t) = 0) in the observed process x(t) = sk(t) + si(t):

yk(t)|n(t)=0 = ŝk(t)|x(t)=sk(t)+si(t), t = t0 + T. (6.6.13)

To determine the quality of the estimator yk (t) = ŝk (t) of the signal sk (t),
while solving the problem of resolution-detection of the signals in the group
Γ(+) of L-group Γ(+, ∨, ∧) in the presence of signals sk (t) and si (t) in the ob-
served process x(t) = sk (t) + si (t) + n(t) (θk = 1, θi = 1), we use the metric
µsk ŝk |θk =1,θi =1 = µ(yt , yt,0 ) (6.6.5), which characterizes the distinction between
the estimator yk (t) (6.6.12) of the signal sk (t) in the k-th processing channel and
the estimator yk(t)|n(t)=0 (6.6.13) of the signal sk(t) in the absence of interference
(noise):

µsk ŝk |θk =1,θi =1 = µ(yt , yt,0 ) = 2[P(yt ∨ yt,0 > hk ) − P(yt ∧ yt,0 > hk )], (6.6.14)

where yt is the sample of the estimator yk(t) (6.6.12) of the signal sk(t) in the output of the k-th processing channel at the instant t = t0 + T in the presence of both the signals sk(t) and si(t) in the observed process x(t) = sk(t) + si(t) + n(t) (θk = 1, θi = 1); yt,0 is the sample of the estimator yk(t)|n(t)=0 (6.6.13) of the signal sk(t) in the output of the k-th processing channel at the instant t = t0 + T in the absence of interference (noise) (n(t) = 0) in the observed process x(t) = sk(t) + si(t), which is equal to the sum of the energy Ek of the signal sk(t) and the mutual energy of both signals Eik = rik√(Ei Ek): yt,0 = Ek + Eik; hk is some threshold level, hk < Ek, determined by the average of two mathematical expectations of the processes in the k-th processing channel (6.4.32) and (6.4.33):

hk = Ek /2. (6.6.15)

The equality yt,0 = Ek + Eik and the inequality hk < Ek imply that the probabilities
in the expression (6.6.14) are equal to:

P(yt ∨ yt,0 > hk ) = P(yt,0 > hk ) = 1; (6.6.16a)

P(yt ∧ yt,0 > hk ) = P(yt > hk ) = 1 − Fŝk (hk ) |θk =1,θi =1 , (6.6.16b)
where Fŝk (y ) |θk =1,θi =1 is the CDF of the estimator yk (t) in the k-th processing
channel at the instant t = t0 + T in the presence of both the signals sk (t) and si (t)
in the observed process x(t) = sk (t) + si (t) + n(t) (θk = 1, θi = 1) (6.6.2), which,
according to PDF (6.4.34), is equal to:
Fŝk(y)|θk=1,θi=1 = (2πDk)^{−1/2} ∫_{−∞}^{y} exp{−[x − (Ek + Eik)]²/(2Dk)} dx, (6.6.17)

where Dk = Ek N0 is the variance of the noise in the k-th processing channel; Ek is the energy of the signal sk(t); Eik = rik√(Ei Ek) is the mutual energy of the signals sk(t) and si(t); N0 is the power spectral density of the interference (noise) in the input of the processing unit.
Substituting the formulas (6.6.16) into the expression (6.6.14), we obtain the
resultant value for metric µsk ŝk |θk =1,θi =1 = µ(yt , yt,0 ) between the signal sk (t) and
its estimator yk (t) while solving the problem of resolution-detection in the group
Γ(+) of L-group Γ(+, ∨, ∧):

µsk ŝk|θk=1,θi=1 = µ(yt, yt,0) = 2Fŝk(hk)|θk=1,θi=1 = 2[1 − Φ(((Ek + Eik) − hk)/√Dk)] = 2[1 − Φ(0.5qk + rik qi)], (6.6.18)
where Fŝk(y)|θk=1,θi=1 is the CDF of the estimator yk(t) in the k-th processing channel at the instant t = t0 + T in the presence of both the signals sk(t) and si(t) in the observed process x(t) = sk(t) + si(t) + n(t) (θk = 1, θi = 1) (6.6.2), which is determined by the formula (6.6.17); Φ(z) = (2π)^{−1/2} ∫_{−∞}^{z} exp{−x²/2} dx is the probability integral; Dk = Ek N0 is the variance of the noise in the k-th processing channel; Ek is the energy of the signal sk(t); Eik = rik√(Ei Ek) is the mutual energy of the signals sk(t) and si(t); rik is the cross-correlation coefficient of the signals sk(t) and si(t); qk² and qi² are the signal-to-noise ratios in the k-th and the i-th processing channels, respectively; hk is the threshold level determined by the relationship (6.6.15); N0 is the power spectral density of the interference (noise) in the input of the processing unit.
By analogy with the quality index of resolution-detection of the signals (6.6.10),
in the presence of the signal sk (t) and the absence of the signal si (t) in the observed
process x(t) = sk(t) + n(t) (θk = 1, θi = 0), we define the quality index of resolution-
detection of the signals νsk ŝk in the presence of both signals sk (t) and si (t) in the
observed process x(t) = sk (t) + si (t) + n(t) (θk = 1, θi = 1).

Definition 6.6.2. By quality index of estimator of the signals νsk ŝk|θk=1,θi=1, while solving the problem of resolution-detection in the presence of the signals sk(t) and si(t) in the observed process x(t) = sk(t) + si(t) + n(t) (θk = 1, θi = 1) (6.6.1), we mean the NMSI ν(yt, yt,0) between the samples yt,0 and yt of the stochastic processes yk(t)|n(t)=0 and yk(t) in the k-th processing channel of the resolution-detection unit, connected with the metric µsk ŝk|θk=1,θi=1 = µ(yt, yt,0) (6.6.14) by the following expression:

νsk ŝk|θk=1,θi=1 = ν(yt, yt,0) = 1 − µsk ŝk|θk=1,θi=1. (6.6.19)

Substituting the value of the metric (6.6.18) into the coupling equation (6.6.19), we obtain the dependence of the quality index of resolution-detection νsk ŝk(qk², qi²)|θk=1,θi=1 on the signal-to-noise ratios in the processing channels:

νsk ŝk (qk2 , qi2 ) |θk =1,θi =1 = 2Φ (0.5qk + rik qi ) − 1, (6.6.20)

where rik is the cross-correlation coefficient of the signals sk (t) and si (t); qk2 , qi2 are
signal-to-noise ratios in the k-th and the i-th processing channels, respectively.

Comparative analysis of the quality indices of resolution-detection of the signals (6.6.11) and (6.6.20) in signal space with the properties of the group Γ(+) of L-group Γ(+, ∨, ∧) shows that, depending on the value of the cross-correlation coefficient rik between the signals sk(t) and si(t), the following relationships hold:

νsk ŝk(qk²) ≥ νsk ŝk(qk², qi²)|θk=1,θi=1, for rik ≤ 0;
νsk ŝk(qk²) ≤ νsk ŝk(qk², qi²)|θk=1,θi=1, for rik ≥ 0.
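These relationships are easy to verify numerically. A minimal Python sketch follows; the equal SNRs qk = qi and the particular values of rik are assumptions made purely for illustration:

```python
# Sketch: comparison of the quality indices (6.6.11) and (6.6.20)
# for several values of the cross-correlation coefficient r_ik.
from scipy.stats import norm

def nu_single(qk):                 # (6.6.11): theta_k = 1, theta_i = 0
    return 2.0 * norm.cdf(qk / 2.0) - 1.0

def nu_both(qk, qi, rik):          # (6.6.20): theta_k = 1, theta_i = 1
    return 2.0 * norm.cdf(0.5 * qk + rik * qi) - 1.0

qk = qi = 3.0                      # illustrative value of q_k = q_i
for rik in (-0.5, 0.0, 0.5):
    print(f"r_ik = {rik:+.1f}: nu(q_k^2) = {nu_single(qk):.4f}, "
          f"nu(q_k^2, q_i^2) = {nu_both(qk, qi, rik):.4f}")
```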
Within the framework of the general model (6.6.1), consider the case of interaction
between the signals si,k (t) from a set S = {si (t), sk (t)} and interference (noise) n(t)
in the signal space with the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧):
x(t) = θk sk (t) ∨ θi si (t) ∨ n(t), t ∈ Ts , (6.6.21)
where ∨ and ∧ are binary operations of join and meet of the lattice Γ(∨, ∧) of
L-group Γ(+, ∨, ∧), respectively; θi,k is a random parameter taking values from the
set {0, 1}: θi,k ∈ {0, 1}; Ts = [t0 , t0 + T ] is a domain of definition of the signal si,k (t);
t0 is known time of arrival of the signal si,k (t); T is a duration of the signal si,k (t).
Let the signals from the set S = {si(t), sk(t)} be characterized by the energies Ei,k = ∫_{t∈Ts} s²i,k(t) dt and the cross-correlation coefficient rik:

rik = ∫_{t∈Ts} si(t)sk(t) dt / √(Ei Ek).

Suppose also that interference (noise) n(t) is characterized by arbitrary


probabilistic-statistical properties.
Determine the quality indices of resolution-detection of the signals introduced
by the Definitions 6.6.1, 6.6.2 and by the expressions (6.6.10), (6.6.19), respectively,
while solving the problem of resolution-detection in signal space with the properties
of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧).
Consider the estimator yk (t) of the signal sk (t) in the k-th processing channel
of a resolution-detection unit in the presence of the signal sk (t) and in the absence
of the signal si (t) in the observed process x(t) = sk (t) ∨ 0 ∨ n(t) (θk = 1, θi = 0)
(6.6.21):
yk(t) = yk(t)|x(t)=sk(t)∨0∨n(t) = sk(t) ∧ x(t), (6.6.22)

and also consider the estimator yk(t)|n(t)=0 of the signal sk(t) in the k-th processing channel in the absence of interference (noise) (n(t) = 0) and the signal si(t) in the observed process x(t) = sk(t) ∨ 0:

yk(t)|n(t)=0 = sk(t) ∧ x(t)|x(t)=sk(t)∨0. (6.6.23)
Determine the value of the function (6.6.5), which, if the signal space is the lattice Γ(∨, ∧), is also a metric, and then determine the quality index of the estimator of the signals (6.6.10) while solving the problem of resolution-detection in signal space with the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧):
µsk ŝk = µ(yt , yt,0 ) = 2[P(yt ∨ yt,0 > h) − P(yt ∧ yt,0 > h)], (6.6.24)

where yt is the sample of the estimator yk (t) (6.6.22) of the signal sk (t) in the output
of the k-th processing channel at the instant t ∈ Ts in the presence of the signal sk (t)
and in the absence of the signal si (t) in the observed process x(t) = sk (t) ∨ 0 ∨ n(t)
(θk = 1, θi = 0) (6.6.21); yt,0 is the sample of the estimator yk(t)|n(t)=0 (6.6.23) of
the signal sk (t) in the output of the k-th processing channel at the instant t ∈ Ts
in the absence of interference (noise) n(t) = 0 and the signal si (t) in the observed
process x(t) = sk (t) ∨ 0; h is some threshold level determined by energetic and
correlation relations between the signals sk (t) and si (t).
The absorption axiom of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧), contained in the third part of each of the two multilink identities, implies that the estimators yk(t) (6.6.22) and yk(t)|n(t)=0 (6.6.23) are identically equal to the received useful signal sk(t):

yk(t) = sk(t) ∧ x(t) = sk(t) ∧ [sk(t) ∨ 0 ∨ n(t)] = sk(t); (6.6.25a)

yk(t)|n(t)=0 = sk(t) ∧ x(t) = sk(t) ∧ [sk(t) ∨ 0] = sk(t). (6.6.25b)
The obtained relationships imply that the sample yt of the estimator yk(t) (6.6.22) and the sample yt,0 of the estimator yk(t)|n(t)=0 (6.6.23) are identically equal to the received signal sk(t):

yt = yk(t) = sk(t); (6.6.26a)

yt,0 = yk(t)|n(t)=0 = sk(t). (6.6.26b)
Substituting the obtained values of the samples yt (6.6.26a) and yt,0 (6.6.26b) into the relationship (6.6.24), we find that the value of the metric µsk ŝk = µ(yt, yt,0) between the signal sk(t) and its estimator yk(t) while solving the problem of resolution-detection of the signals in signal space with the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧) is equal to zero:

µsk ŝk = µ(yt , yt,0 ) = 0. (6.6.27)

From (6.6.27) and the coupling equation (6.6.10), we obtain quality indices of the
estimator of the signals νsk ŝk , while solving the problem of their resolution-detection
in signal space with the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧) in
the presence of the signal sk (t) and in the absence of the signal si (t) in the observed
process x(t) = sk (t) ∨ 0 ∨ n(t) (θk = 1, θi = 0), that take absolute values equal to 1:

νsk ŝk = 1 − µsk ŝk = 1. (6.6.28)
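Since (6.6.25a,b) rest only on the absorption law, they can be checked sample-wise on arbitrary realizations. A minimal numpy sketch follows; the Gaussian interference and the particular signal are illustrative assumptions, as the distribution is irrelevant to the identity:

```python
# Sketch: numerical check of the absorption identities (6.6.25a,b); the join
# and meet are realized sample-wise by np.maximum / np.minimum.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
s_k = np.cos(2 * np.pi * 5.0 * t)            # useful signal s_k(t)
n = rng.normal(0.0, 1.0, t.size)             # interference realization n(t)

x = np.maximum(np.maximum(s_k, 0.0), n)      # x(t) = s_k v 0 v n (theta_i = 0)
y_k = np.minimum(s_k, x)                     # y_k(t) = s_k ^ x(t), (6.6.22)

print(np.array_equal(y_k, s_k))              # True: y_k(t) = s_k(t), so mu = 0
```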

Consider now the estimator yk (t) of the signal sk (t) in the k-th processing channel
of the resolution-detection unit in the presence of signals sk (t) and si (t) in the
observed process x(t) = sk (t) ∨ si (t) ∨ n(t) (θk = 1, θi = 1) (6.6.21):

yk(t) = yk(t)|x(t)=sk(t)∨si(t)∨n(t) = sk(t) ∧ x(t), (6.6.29)

and also consider the estimator yk(t)|n(t)=0 of the signal sk(t) in the k-th processing channel in the absence of interference (noise) (n(t) = 0) in the observed process x(t) = sk(t) ∨ si(t) ∨ 0:

yk(t)|n(t)=0 = sk(t) ∧ x(t)|x(t)=sk(t)∨si(t)∨0. (6.6.30)

To determine the quality of the estimator yk (t) = ŝk (t) of the signal sk (t) while
solving the problem of resolution-detection of the signals in signal space with the
properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧) in the presence of both the
signals sk(t) and si(t) in the observed process x(t) = sk(t) ∨ si(t) ∨ n(t) (θk = 1, θi = 1), we will use the metric µsk ŝk|θk=1,θi=1 = µ(yt, yt,0) (6.6.24), which characterizes the distinction between the estimator yk(t) (6.6.29) of the signal sk(t) in the k-th processing channel and the estimator yk(t)|n(t)=0 (6.6.30) of the signal sk(t) in the absence of interference (noise):

µsk ŝk |θk =1,θi =1 = µ(yt , yt,0 ) = 2[P(yt ∨ yt,0 > h) − P(yt ∧ yt,0 > h)], (6.6.31)

where yt is the sample of the estimator yk (t) (6.6.29) of the signal sk (t) in the output
of the k-th processing channel at the instant t ∈ Ts in the presence of the signals
sk (t) and si (t) in the observed process x(t) = sk (t) ∨ si (t) ∨ n(t) (θk = 1, θi = 1)
(6.6.21); yt,0 is the sample of the estimator yk(t)|n(t)=0 (6.6.30) of the signal sk(t)
in the output of the k-th processing channel at the instant t ∈ Ts in the absence of
interference (noise) (n(t) = 0) in the observed process x(t) = sk (t) ∨ si (t) ∨ 0; h is
some threshold level determined by energetic and correlation relations between the
signals sk (t) and si (t).
The absorption axiom of the lattice Γ(∨, ∧), contained in the third part of each of the two multilink identities, implies that the estimators yk(t) (6.6.29) and yk(t)|n(t)=0 (6.6.30) are identically equal to the received useful signal sk(t):

yk(t) = sk(t) ∧ x(t) = sk(t) ∧ [sk(t) ∨ si(t) ∨ n(t)] = sk(t); (6.6.32a)

yk(t)|n(t)=0 = sk(t) ∧ x(t) = sk(t) ∧ [sk(t) ∨ si(t) ∨ 0] = sk(t). (6.6.32b)
The obtained relationships imply that the sample yt of the estimator yk(t) (6.6.29) and the sample yt,0 of the estimator yk(t)|n(t)=0 (6.6.30) are identically equal to the received useful signal sk(t):

yt = yk(t) = sk(t); (6.6.33a)

yt,0 = yk(t)|n(t)=0 = sk(t). (6.6.33b)
Substituting the obtained values of the samples yt (6.6.33a) and yt,0 (6.6.33b) into the relationship (6.6.31), we find that the value of the metric µsk ŝk|θk=1,θi=1 = µ(yt, yt,0) between the signal sk(t) and its estimator yk(t) while solving the problem of resolution-detection of the signal in signal space with the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧) is equal to zero:

µsk ŝk |θk =1,θi =1 = µ(yt , yt,0 ) = 0. (6.6.34)

The relationship (6.6.34) and the coupling equation (6.6.19) imply that the quality indices of the estimator of the signals νsk ŝk|θk=1,θi=1, while solving the problem of their resolution-detection in signal space with the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧) in the presence of both the signals sk(t) and si(t) in the observed process x(t) = sk(t) ∨ si(t) ∨ n(t) (θk = 1, θi = 1), take absolute values equal to 1:

νsk ŝk|θk=1,θi=1 = 1 − µsk ŝk|θk=1,θi=1 = 1. (6.6.35)
As follows from the comparative analysis of quality indices of signal resolution-
detection (6.6.28), (6.6.35) in signal space with the properties of the lattice Γ(∨, ∧)
of L-group Γ(+, ∨, ∧), regardless of cross-correlation coefficient rik and energetic
relations between the signals sk (t) and si (t), the following identity holds:
νsk ŝk (qk2 ) = νsk ŝk (qk2 , qi2 ) |θk =1,θi =1 = 1.
This fundamentally distinguishes them from the quality indices of signal resolution-detection (6.6.11) and (6.6.20) in signal space with the properties of the group Γ(+) of L-group Γ(+, ∨, ∧), which essentially depend on both the cross-correlation coefficient rik and the energy relations between the signals sk(t) and si(t).
The relationships (6.6.28) and (6.6.35) imply an important conclusion. While analyzing the problem of signal resolution-detection in signal space with the properties of the lattice Γ(∨, ∧) of L-group Γ(+, ∨, ∧), we obtain the possibility of achieving absolute quality indices of signal resolution-detection νsk ŝk, νsk ŝk|θk=1,θi=1. This
creates the necessary conditions for resolution-detection of deterministic signals pro-
cessed under interference (noise) background without losses of information. Note
that this situation stipulated by the absence of information losses is not typical
for solving a similar problem in signal space with group properties (particularly
in linear signal space). One more advantage of signal resolution-detection in signal
spaces with lattice properties is the invariance property of both metrics (6.6.27) and
(6.6.34) and the quality indices of signal resolution-detection (6.6.28) and (6.6.35)
with respect to parametric and nonparametric prior uncertainty conditions. The
quality indices of signal resolution-detection in signal space with lattice proper-
ties (6.6.28) and (6.6.35), as against quality indices of signal resolution-detection
in linear signal space (6.6.11) and (6.6.20), do not depend on signal-to-noise ratio
(signal-to-interference ratio) and interference (noise) distribution in the input of the
processing unit. The problem of synthesis of optimal signal resolution algorithm in
signal space with lattice properties demands additional investigation that will be
discussed in the next chapter.

6.7 Quality Indices of Resolution-Estimation in Metric Spaces with


Lattice Properties
The problem of resolution-detection-estimation (i.e., joint resolution, detection, and estimation) requires us to establish (1) whether the useful signal s(t, λ) with an unknown nonrandom parameter λ is present in the input of a processing unit and (2) what the value of the parameter λ is, so that both tasks are solved in the presence of an interfering signal s0(t, λ0) and interference (noise) n(t). Furthermore, the interfering signal s0(t, λ0) is a copy of the useful signal with an unknown parameter λ0, which differs from λ: λ0 ≠ λ.
Consider the general model of interaction of the signals s(t, λ) and s0 (t, λ0 ) and
interference (noise) n(t) in signal space with the properties of the lattice Γ(∨, ∧):

x(t) = θs(t, λ) ⊕ θ0 s0 (t, λ0 ) ⊕ n(t), t ∈ Tobs , (6.7.1)

where ⊕ is some binary operation of the lattice Γ(∨, ∧); θ, θ0 are random parameters taking values from the set {0, 1}: θ, θ0 ∈ {0, 1}; Tobs is an observation interval, Tobs = Ts ∪ Ts0; Ts = [t0, t0 + T], Ts0 = [t00, t00 + T] are the domains of definition of the signals s(t, λ) and s0(t, λ0), respectively; t0 and t00 are unknown arrival times of the signals s(t, λ) and s0(t, λ0); T is the duration of the signals s(t, λ) and s0(t, λ0).
Let the signals s(t, λ) and s0 (t, λ0 ) be periodic functions with a period T0 :

s(t, λ) = s(t + T0 , λ); s0 (t, λ0 ) = s0 (t + T0 , λ0 ), (6.7.2)

and they are connected by the proportionality relation:

s0 (t, λ0 ) = as(t + τ, λ + ∆λ), (6.7.3)

where a, τ , and ∆λ are some constants: a=const, τ =const, ∆λ=const.


Consider also that interference n(t) is quasi-white Gaussian noise with power spectral density N0 and upper bound frequency Fn: Fn ≫ 1/T0.
The problem of joint resolution-detection-estimation can be formulated by test-
ing statistical hypothesis Hθθ0 concerning the presence (absence) of useful s(t, λ) and
interfering s0 (t, λ0 ) signals observed under interference (noise) background. If either
the hypothesis H11 or the hypothesis H10 holds (θ = 1, i.e., in the observed process
x(t), the useful signal s(t, λ) is present, whereas the interfering signal s0 (t, λ0 ) can
be present or absent: θ0 ∈ {0, 1}), then on the base of the relationship (6.1.7a), the
estimator λ̂ of a parameter λ is formed:

λ̂ = M −1 [c(t), ŝ(t)]; (6.7.4a)

ŝ(t) = Fŝ [x(t)], (6.7.4b)


where M [∗, ∗] is a deterministic one-to-one function (modulating function); c(t) is
a carrier signal; ŝ(t) is the estimator of useful signal s(t, λ); Fŝ [∗] is the estimator
forming function.
The goal of this section is to establish the main potential characteristics of signal resolution-estimation in signal space with lattice properties on the basis of the estimators considered in Section 6.3.
The initial general model of signal interaction (6.7.1) in signal space with lattice properties can be written as two separate models, each of which is determined by the lattice operation of join and meet, respectively:

x(t) = θs(t, λ) ∨ θ0 s0 (t, λ0 ) ∨ n(t); (6.7.5a)



x̃(t) = θs(t, λ) ∧ θ0 s0 (t, λ0 ) ∧ n(t), (6.7.5b)


where t ∈ Tobs ; Tobs is an observation interval.
While formulating the problem of joint resolution-detection-estimation for the interactions (6.7.5a), (6.7.5b), we consider two hypotheses H11 = Hθ=1,θ0=1 and H01 = Hθ=0,θ0=1 concerning the presence of the interfering signal s0(t, λ0) (θ0 = 1) and the presence or absence of the useful signal s(t, λ) (θ ∈ {0, 1}):

H11 : x(t) = s(t, λ) ∨ s0 (t, λ0 ) ∨ n(t); (6.7.6a)

H01 : x(t) = 0 ∨ s0 (t, λ0 ) ∨ n(t); (6.7.6b)

H11 : x̃(t) = s(t, λ) ∧ s0 (t, λ0 ) ∧ n(t); (6.7.7a)


H01 : x̃(t) = 0 ∧ s0 (t, λ0 ) ∧ n(t), (6.7.7b)
where t ∈ Tobs ; Tobs is an observation interval; 0 is a null (neutral) element of the
group Γ(+) of L-group Γ(+, ∨, ∧).
As the estimator ŝ∧(t) of the useful signal s(t, λ) for the model (6.7.5a), we take the meet of the lattice (6.3.3):

ŝ∧(t) = ∧_{j=1}^{N} x(t − jT0), (6.7.8)

where N is the number of periods of the carrier of the useful signal s(t, λ) in its domain of definition Ts = [t0, t0 + T].
As the estimator ŝ∨(t) of the useful signal s(t, λ) for the model (6.7.5b), we take the join of the lattice (6.3.4):

ŝ∨(t) = ∨_{j=1}^{N} x(t − jT0). (6.7.9)

The questions dealing with obtaining the estimators ŝ∧ (t) (6.7.8) and ŝ∨ (t) (6.7.9)
on the basis of optimality criteria will be considered in Chapter 7.
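A minimal numpy sketch of the meet estimator (6.7.8) for the join model (6.7.5a) follows; the nonnegative periodic signal, the Gaussian noise, and the value N = 8 are illustrative assumptions:

```python
# Sketch: the meet estimator (6.7.8) over N carrier periods for the model
# x(t) = s(t) v n(t); np.minimum.reduce realizes the N-fold meet.
import numpy as np

rng = np.random.default_rng(1)
fs, N = 200, 8                                # samples per period, periods
t = np.arange(fs) / fs                        # one period of the time grid
s = 1.0 + np.cos(2 * np.pi * t)               # nonnegative periodic signal s(t)

# observed periods x(t - j*T0) = s(t) v n_j(t), j = 1, ..., N
periods = [np.maximum(s, rng.normal(0.0, 1.0, fs)) for _ in range(N)]
s_meet = np.minimum.reduce(periods)           # meet estimator (6.7.8)

print(f"max residual over one period: {np.max(s_meet - s):.3f}")
```

Since every observed period satisfies x ≥ s, the meet approaches s(t) from above as N grows, which is the mechanism behind the bounds derived below.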
Let some functions Tα and Tβ of the observed process x(t) be used as the
estimators ŝα (t) and ŝβ (t) of the signals sα (t) and sβ (t) in signal space Γ(⊕):

ŝα (t) = Tα [x(t)]; (6.7.10a)

ŝβ (t) = Tβ [x(t)], (6.7.10b)


where x(t) = θα sα (t) ⊕ θβ sβ (t) ⊕ n(t); ⊕ is a binary operation in signal space Γ(⊕);
θα and θβ are random parameters taking the values from a set {0, 1}: θα , θβ ∈ {0, 1}.
Let also the estimators ŝα (t) and ŝβ (t) of the signals sα (t) and sβ (t) be charac-
terized by PDFs pŝα (z ) and pŝβ (z ), respectively.
Definition 6.7.1. A set Sαβ of pairs of the estimators {ŝα(t), ŝβ(t)} of the corresponding pairs of signals {sα(t), sβ(t)}, {ŝα(t), ŝβ(t)} ⊂ Sαβ, α, β ∈ A, is called an estimator space (Sαβ, dαβ) with metric dαβ:

dαβ = d(ŝα, ŝβ) = (1/2) ∫_{−∞}^{∞} |pŝα(z) − pŝβ(z)| dz, (6.7.11)

where dαβ is a metric between PDFs pŝα (z ) and pŝβ (z ) of the estimators ŝα (t) and
ŝβ (t) of the signals sα (t) and sβ (t) in signal space Γ(⊕), respectively.
To characterize the quality of the estimators ŝα (t) and ŝβ (t) while solving the
problem of joint resolution-estimation, we introduce the index based upon the met-
ric (6.7.11).
Definition 6.7.2. By quality index q↔ (ŝα , ŝβ ) of joint resolution-estimation of the
signals sα (t) and sβ (t), we mean the quantity equal to metric (6.7.11) between
PDFs pŝα (z ) and pŝβ (z ) of the estimators ŝα (t) and ŝβ (t) of these signals:
q↔(ŝα, ŝβ) = d(ŝα, ŝβ) = (1/2) ∫_{−∞}^{∞} |pŝα(z) − pŝβ(z)| dz. (6.7.12)

Determine now the quality indices q↔ (ŝ∧ |H11 , ŝ∧ |H01 ) , q↔ (ŝ∨ |H11 , ŝ∨ |H01 ) of
joint resolution-estimation (6.7.12) for the estimators ŝ∧ (t) (6.7.8) and ŝ∨ (t) (6.7.9)
in the case of separate fulfillment of the hypotheses H11 = Hθ=1,θ0 =1 and H01 =
Hθ=0,θ0 =1 (6.7.6) and (6.7.7), respectively.
By differentiating CDF (6.3.8c), under the condition that, between the samples of instantaneous values st, s0t of the useful s(t, λ) and interfering s0(t, λ0) signals, the two-sided inequality 0 < s0t < st holds, we obtain the PDF of the estimator ŝ∧(t):

pŝ∧(z)|H11 = δ(z − st)[1 − (1 − Fn(st + 0))^N] + 1(z − st) N (1 − Fn(z))^{N−1} pn(z); (6.7.13a)

pŝ∧(z)|H01 = δ(z − s0t)[1 − (1 − Fn(s0t + 0))^N] + 1(z − s0t) N (1 − Fn(z))^{N−1} pn(z), (6.7.13b)
where st and s0t are the samples of instantaneous values of the signals s(t, λ) and
s0 (t, λ0 ), respectively: st = s(t, λ), s0t = s0 (t, λ0 ), t ∈ Tobs ; Fn (z ), pn (z ) are the
univariate CDF and PDF of interference (noise), respectively; δ (t) is the Dirac
delta function; 1(t) is the Heaviside step function; N is a number of periods of a
carrier of useful signal s(t, λ) on its domain of definition Ts = [t0 , t0 + T ].
Similarly, differentiating CDF (6.3.8d), under the condition that, between the samples of instantaneous values st, s0t of the useful s(t, λ) and interfering s0(t, λ0) signals, the two-sided inequality st < s0t < 0 holds, we obtain the PDF of the estimator ŝ∨(t):

pŝ∨(z)|H11 = δ(z − st) Fn^N(st − 0) + [1 − 1(z − st)] N Fn^{N−1}(z) pn(z); (6.7.14a)

pŝ∨(z)|H01 = δ(z − s0t) Fn^N(s0t − 0) + [1 − 1(z − s0t)] N Fn^{N−1}(z) pn(z). (6.7.14b)
On the qualitative level, the forms of the PDFs (6.7.13a,b) and (6.7.14a,b) of the estimators ŝ∧(t)|H11,H01 and ŝ∨(t)|H11,H01, in the case of separate fulfillment of the hypotheses H11, H01 for N = 3, are shown in Fig. 6.7.1(a) and Fig. 6.7.1(b), respectively.
FIGURE 6.7.1 PDFs of estimators: (a) ŝ∧(t)|H11,H01; 1 = PDF pŝ∧(z)|H11 (6.7.13a); 2 = PDF pŝ∧(z)|H01 (6.7.13b); 3 = PDF of interference (noise) pn(z); (b) ŝ∨(t)|H11,H01; 1 = PDF pŝ∨(z)|H11 (6.7.14a); 2 = PDF pŝ∨(z)|H01 (6.7.14b); 3 = PDF of interference (noise) pn(z)

Substituting the values of the PDFs of the estimators ŝ∧(t)|H11,H01 (6.7.13a) and (6.7.13b) into the formula (6.7.12), we obtain the quality index of joint resolution-estimation q↔(ŝ∧|H11, ŝ∧|H01):
q↔(ŝ∧|H11, ŝ∧|H01) = (1/2) ∫_{−∞}^{∞} |pŝ∧(z)|H11 − pŝ∧(z)|H01| dz =
= 1 − ∫_{st}^{∞} pŝ∧(z)|H11 dz = 1 − ∫_{st}^{∞} N (1 − Fn(z))^{N−1} pn(z) dz, st ≥ 0. (6.7.15)

Similarly, substituting the values of the PDFs of the estimators ŝ∨(t)|H11,H01 (6.7.14a,b) into the expression (6.7.12), we obtain the quality index of joint resolution-estimation q↔(ŝ∨|H11, ŝ∨|H01):

q↔(ŝ∨|H11, ŝ∨|H01) = (1/2) ∫_{−∞}^{∞} |pŝ∨(z)|H11 − pŝ∨(z)|H01| dz =
= 1 − ∫_{−∞}^{st} pŝ∨(z)|H11 dz = 1 − ∫_{−∞}^{st} N Fn^{N−1}(z) pn(z) dz, st < 0. (6.7.16)

It is difficult to obtain the exact values of the quality indices (6.7.15) and (6.7.16) even under the assumption that the interference (noise) PDF is Gaussian. Meanwhile, due to the evenness of the function pn(z): pn(z) = pn(−z), and taking into account the positive (negative) definiteness of the samples st of instantaneous values of the signal s(t) for the estimators ŝ∧(t)|H11,H01 and ŝ∨(t)|H11,H01, it is easy to obtain the lower bounds of the quality indices (6.7.15) and (6.7.16), which are determined by the following inequalities:

q↔(ŝ∧|H11, ŝ∧|H01) ≥ 1 − 2^−N; (6.7.17a)

q↔(ŝ∨|H11, ŝ∨|H01) ≥ 1 − 2^−N. (6.7.17b)


The lower bounds of quality indices of joint resolution-estimation of the signals
in signal space with lattice properties (6.7.17a) and (6.7.17b), determined by the
metrics (6.7.15) and (6.7.16) between PDFs of the pairs of the estimators ŝ∧ (t) |H11 ,
ŝ∧ (t) |H01 and ŝ∨ (t) |H11 , ŝ∨ (t) |H01 , respectively, do not depend on both signal-
to-noise ratio and interference (noise) distribution. They are determined by the
number N of independent samples of interference (noise) n(t) used while forming
the estimators (6.7.8) and (6.7.9) equal to the number of periods of carrier of the
signal s(t, λ). Thus, the quality indices of joint resolution-estimation of the signals
(6.7.15) and (6.7.16) in signal space with lattice properties are invariant with respect
to parametric and nonparametric prior uncertainty conditions.
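Under the additional assumption of Gaussian noise, the integral in (6.7.15) reduces to the closed form (1 − Fn(st))^N, and the bound (6.7.17a) becomes easy to check numerically; a minimal Python sketch:

```python
# Sketch: check of the lower bound (6.7.17a). For any noise CDF F_n, the
# integral in (6.7.15) equals (1 - F_n(s_t))**N, and s_t >= 0 with an even
# noise PDF gives F_n(s_t) >= 1/2, hence q >= 1 - 2**(-N).
from scipy.stats import norm

def q_meet(s_t, N, sigma=1.0):        # quality index (6.7.15), Gaussian noise
    return 1.0 - (1.0 - norm.cdf(s_t, scale=sigma)) ** N

for N in (1, 3, 5):
    vals = ", ".join(f"{q_meet(s_t, N):.4f}" for s_t in (0.0, 0.5, 1.0))
    print(f"N = {N}: q = [{vals}], bound 1 - 2^-N = {1.0 - 2.0 ** (-N):.4f}")
```

At st = 0 the bound is attained with equality, since Fn(0) = 1/2 for an even noise PDF.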
This circumstance, despite the fundamental distinction between the statistical and deterministic approaches to evaluating the quality indices of signal resolution, indicates that the conclusions obtained on their basis coincide. The reason is that both statistical and deterministic characteristics of resolution depend strongly on the metric between the estimators ŝ∧(t)|H11 and ŝ∧(t)|H01 (ŝ∨(t)|H11 and ŝ∨(t)|H01) of the signals s(t, λ) ∨ s0(t, λ0) and s0(t, λ0) (s(t, λ) ∧ s0(t, λ0) and s0(t, λ0)) within the hypotheses (6.7.6a,b) and (6.7.7a,b), and also on the metric between the signals s(t, λ) ∨ s0(t, λ0) and s0(t, λ0) (s(t, λ) ∧ s0(t, λ0) and s0(t, λ0)).
Under the deterministic approach, signal resolution is characterized by the minimal difference ∆λmin = |λ − λ0| in the parameters of the interacting signals s(t, λ), s0(t, λ0) at which the two responses corresponding to these signals are observed separately in the output of the processing unit and the two single maxima of these responses can be distinguished.
Within the framework of the stated deterministic approach, we obtain the value
of signal resolution in time delay for harmonic signals in signal space with lattice
properties.
Consider, within the model (6.7.5a), the interaction of two harmonic signals
s1 (t) and s2 (t) in signal space Γ(∨, ∧) with lattice properties in the absence of
interference (noise) n(t):
x(t) = s1 (t) ∨ s2 (t) ∨ 0. (6.7.18)
Let the model of the received signals s1(t) and s2(t) be determined by the expression:

si(t) = Ai cos(ω0 t + ϕ), t ∈ Ti; si(t) = 0, t ∉ Ti, (6.7.19)
where Ai is an unknown nonrandom amplitude of useful signal si (t); i = 1, 2;
ω0 = 2πf0 ; f0 is a known carrier frequency of the signal si (t); ϕ is an unknown
nonrandom initial phase of the signal si (t), ϕ ∈ [−π, π ]; Ti is a domain of definition
of the signal si (t), Ti = [t0i , t0i + T ]; t0i is an unknown time of arrival of the signal
si (t); T is a duration time of the signal; T = N T0 ; T0 is a period of a carrier; N
is a number of periods of harmonic signal si (t), N = T f0 ; N ∈ N, N is the set of
natural numbers.
Then the expression for the estimator ŝ∧(t) (6.7.8) of the signal si(t), i = 1, 2, taking into account the observed process (6.7.18), is determined by the following relationship:

ŝ∧(t) = s1(t) ∨ s2(t), t ∈ Tŝ; ŝ∧(t) = 0, t ∉ Tŝ, (6.7.20)

Tŝ = [min[t01, t02] + (N − 1)T0, max[t01, t02] + N T0], (6.7.20a)
where t0i is an unknown time of arrival of the signal si (t); N is a number of periods
of the harmonic signal si (t); T0 is a period of a carrier; i = 1, 2.
The relationship (6.7.20) shows that the filter forming the estimator (6.7.8) provides compression of the useful signal by a factor of N. The result (6.7.20) can be interpreted as the potential signal resolution in the time domain under the condition of an extremely large signal-to-noise ratio in the input of the processing unit (filter). The expression (6.7.20) also implies that the resolution ∆τmin of the filter in the time parameter (in time delay) is a quantity on the order of a quarter of the carrier period T0: ∆τmin = T0/4 + ε, where ε is an arbitrarily small value.
Fig. 6.7.2 illustrates the signal ŝ∧(t) in the output of the filter forming the estimator (6.7.8) under the interaction of two harmonic signals s1(t) and s2(t) in the input of the processing unit (filter) in the absence of interference (noise) (n(t) = 0), for a time delay equal to t01 − t02 = T0/3.

FIGURE 6.7.2 Signal ŝ∧ (t) in the output of the filter forming the estimator (6.7.8)

The curves in the figure denote: 1, the signal s1(t); 2, the signal s2(t); 3, the response ŝ1(t) to the signal s1(t); 4, the response ŝ2(t) to the signal s2(t); the responses 3 and 4 to s1(t) and s2(t) are shown by the solid line.
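A numerical sketch of this experiment in Python follows; the carrier frequency, pulse length, sampling rate, and arrival times are illustrative assumptions:

```python
# Sketch: the meet filter (6.7.8) applied to the noise-free interaction
# (6.7.18) of two harmonic pulses (6.7.19) delayed by T0/3.
import numpy as np

f0, fs, Np = 5.0, 2000, 4                     # carrier, sample rate, periods
T0 = 1.0 / f0
t = np.arange(0.0, 2.0, 1.0 / fs)

def pulse(t0):                                # pulse (6.7.19), A = 1, phi = 0
    inside = (t >= t0) & (t < t0 + Np * T0)
    return np.where(inside, np.cos(2 * np.pi * f0 * (t - t0)), 0.0)

s1, s2 = pulse(0.2), pulse(0.2 + T0 / 3.0)    # arrival times differ by T0/3
x = np.maximum(np.maximum(s1, s2), 0.0)       # x(t) = s1 v s2 v 0, (6.7.18)

shift = int(round(T0 * fs))                   # one carrier period in samples
s_hat = np.minimum.reduce([np.roll(x, j * shift) for j in range(1, Np + 1)])

support = t[s_hat > 0.0]
print(f"response occupies [{support.min():.3f}, {support.max():.3f}] s "
      f"vs. pulse duration {Np * T0:.3f} s")
```

The printed support of the response is much shorter than the pulse duration, illustrating the compression by the factor N described above.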
The questions dealing with synthesis and analysis of signal resolution algorithm
in signal space with lattice properties are investigated in Chapter 7.
7
Synthesis and Analysis of Signal Processing
Algorithms

We demonstrated in Section 4.3 that, under signal interaction in a physical signal space that is not isomorphic to a lattice, there inevitably exist losses of some quantity of the information contained in the interacting signals (a(t) or b(t)) with respect to the information quantity contained in the result of interaction x(t) = a(t) ⊕ b(t), where ⊕ is a commutative binary operation. At a qualitative level, these losses
where ⊕ is a commutative binary operation. At a qualitative level, these losses
are described by informational inequalities (4.3.26). Thus, in linear space LS (+),
signal interaction is always accompanied by losses of a part of information quantity
contained in one of the interacting signals, as against the information quantity
contained in the result of signal interaction.
As noted in Section 4.3, in all physical signal spaces except those with lattice properties, signal interaction is accompanied by losses of the information quantity contained in the interacting signals. In signal spaces with lattice properties, signal interaction is characterized by the preservation of the quantity of mutual information contained in one of the interacting signals (a(t) or b(t)) and in the result of their interaction x(t) = a(t) ⊕ b(t), so that the informational identities (4.3.14) hold.
This chapter is a logical continuation of the main ideas stated in the fourth and
the sixth chapters concerning signal processing theory. One can confirm or deny
the results obtained in these chapters only on the basis of synthesis of optimal
algorithms of signal processing in the spaces with lattice properties and further
analysis of the quality indices of signal processing based on such algorithms. It is
also of interest to carry out the comparative analysis of the synthesized algorithms
and units of signal processing with the analogues obtained for linear signal spaces.
The known variants of the main signal processing problems have been formu-
lated for the case of additive interaction x(t) = s(t) + n(t) (in terms of linear
space) between the signal s(t) and interference (noise) n(t). Nevertheless, some
authors assume the statement of these problems on the basis of a more general
model of interaction: x(t) = Φ[s(t), n(t)], where Φ is some deterministic func-
tion [155], [159], [163], [166]. The diversity of algebraic systems studied in modern
mathematics allow investigating the properties and characteristics of both signal
estimators and signal parameter estimators during interactions between useful sig-
nals and interference (noise) in signal spaces which differ from linear ones in their
algebraic properties. The lattices have been described and investigated in algebraic
literature [221], [223], [255].
The statement of a signal processing problem is often characterized by some


level of prior uncertainty with respect to useful and/or interference signals. In his
work [118], P.M. Woodward expressed an opinion that the question of prior distri-
butions of informational and non-informational signal parameters will be a stum-
bling block on the way to the synthesis of optimal signal processing algorithms.
This utterance is fully applicable to most signal processing problems, where prior
knowledge of behavior and characteristics of useful and interference signals along
with their parameters plays a significant role in synthesizing the optimal signal
processing algorithms.
The methodology of the synthesis of signal processing algorithms in signal spaces with lattice properties has not been studied yet. Nevertheless, it is necessary to develop approaches to the synthesis that allow researchers to operate with minimal prior data concerning characteristics and properties of interacting useful
and interference signals. First, no prior data concerning probabilistic distribution
of useful signals and interference are supposed to be present. Second, a priori, the
kind of useful signal (signals) is assumed to be known, i.e., it is either deterministic
(quasi-deterministic) or stochastic. As to interference, a time interval τ0 determining the independence of interference samples is known, and it is assumed that the relation τ0 ≪ T holds, where T is a signal duration.
Such constraints for prior data content concerning the processed signals impose
their own peculiarities upon the approaches to the synthesis of signal processing
algorithms in signal space with lattice properties. The choice of optimality criteria
is not a mathematical problem.
Meanwhile, some considerations about the choice of criteria have to be stated.
The first is that optimality criteria of solving a signal processing problem should
not depend on possible distributions of useful and interference signals.
The other circumstance influencing the choice of appropriate optimality criteria
is the presence of information regarding time interval τ0 which, according to the
Theorems 4.2.1 and/or 4.2.2, is the basis for the sampling (discretization) of the
processed signals.
The third feature is the combination of the first and second ones implying that
an optimality criterion has to be a function of a set of the samples of the observed
process resulting from interactions of useful signal (signals) and interference (noise)
in signal space with lattice properties. The additional consideration for a proper
criterion of signal processing optimality in signal space with lattice properties is
the need to consider the metric properties of nonlinear signal space.
The last two considerations take into account the fundamental feature of the
considered approach to the synthesis of algorithms of optimal signal processing in
signal space with lattice properties. Under optimization, one should take into ac-
count the metric relationships between the samples of received (processed) realiza-
tion of the observed stochastic process. Thus, there is no need to take into account
the possible properties and characteristics of unreceived realizations of signals that
more completely reflect probabilistic-statistical characteristics and properties of the
ensemble of signal realizations.
The last circumstance fundamentally distinguishes the considered approach to
the synthesis from the classic one, where algorithms and units of signal processing

are optimal on average with respect to entire statistical ensemble of the received
(processed) signals. With the proposed approach, the algorithms and units of signal
processing are optimal in the sense of the only realization of the observed stochastic
process.
Before considering synthesis and analysis of signal processing algorithms in sig-
nal space with lattice properties, the algebraic properties of such spaces and their
relations with linear signal spaces should be considered.

7.1 Signal Spaces with Lattice Properties


Within Section 4.1, we distinguished the notions of physical signal space and informational signal space established by Definitions 4.1.1 and 4.1.3, respectively. It is useful to review the main features of physical signal spaces, inasmuch as our main subject is physical signal spaces with lattice properties. Using two definitions that are similar to Definitions 4.1.1 and 4.1.2, we introduce the notions defining the algebraic properties of physical signal space.
Definition 7.1.1. A set Γ of stochastic signals Γ = {a(t), b(t), . . .} considered
under their physical interaction with each other is a physical signal space Γ, which
forms a commutative semigroup SG = (Γ, ⊕) with respect to a binary operation
⊕, introduced over it, with the following properties:
a(t) ⊕ b(t) ∈ Γ (closure);
a(t) ⊕ b(t) = b(t) ⊕ a(t) (commutativity);
a(t) ⊕ (b(t) ⊕ c(t)) = (a(t) ⊕ b(t)) ⊕ c(t) (associativity).
Definition 7.1.2. A binary operation ⊕: x(t) = a(t) ⊕ b(t) between the elements
a(t) and b(t) of semigroup SG = (Γ, ⊕) is called a physical interaction between the
signals a(t) and b(t) in physical signal space Γ.
Any pair of stochastic signals a(t) and b(t) can be considered a partially ordered set Γ where, at each instant, between two instantaneous values (samples) at1 = a(t1), bt2 = b(t2) of the processes a(t), b(t) ∈ Γ, there exists the relation of an order at ≤ bt (or at ≥ bt). Then the partially ordered set Γ is a lattice with operations of join and meet, respectively: at ∨ bt = supΓ{at, bt}, at ∧ bt = infΓ{at, bt}, and if a(t) ≤ b(t), then at ∧ bt = at and at ∨ bt = bt [221], [223]:

at ≤ bt ⇔ (at ∧ bt = at and at ∨ bt = bt).
For a lattice L = (Γ, ∨, ∧), the following axioms hold [221], [223]:

a(t) ∧ a(t) = a(t), a(t) ∨ a(t) = a(t) (idempotency);


a(t) ∧ b(t) = b(t) ∧ a(t), a(t) ∨ b(t) = b(t) ∨ a(t) (commutativity);
a(t) ∧ (b(t) ∧ c(t)) = (a(t) ∧ b(t)) ∧ c(t),
a(t) ∨ (b(t) ∨ c(t)) = (a(t) ∨ b(t)) ∨ c(t) (associativity);
a(t) ∧ (a(t) ∨ b(t)) = a(t) ∨ (a(t) ∧ b(t)) = a(t) (absorption).

The axioms above defining a lattice are not independent, so, for instance, the prop-
erty of idempotency follows from the axiom of absorption.
In this chapter, we deal mainly with physical signal spaces, which, according
to Definition 7.1.1 (4.1.1), are both semigroups and lattices, where each group
translation is isotonic. Such algebraic systems are called lattice-ordered groups LG
or L-groups [221], [223].
The assumption concerning the isotonic property of group translations has the following form [221]: for ∀c(t), s(t) ∈ Γ: c(t) ≤ s(t), the following relationship holds:

a(t) ⊕ c(t) ⊕ b(t) ≤ a(t) ⊕ s(t) ⊕ b(t), for ∀a(t), b(t) ∈ Γ. (7.1.1)

Thus, any L-group LG = (Γ, ⊕, ∨, ∧, 0) can be considered a universal algebra (Γ, ⊕, ∨, ∧, 0) with signature (⊕, ∨, ∧, 0) determined by three binary operations ⊕, ∨, ∧ and one 0-ary operation 0. Besides, G = (Γ, ⊕) is a commutative group, L = (Γ, ∨, ∧) is a lattice, SG∨ = (Γ, ∨), SG∧ = (Γ, ∧) are commutative semigroups, and 0 is the null (neutral) element of the group G = (Γ, ⊕): ∀a(t) ∈ Γ: 0 ⊕ a(t) = a(t), ∃a^−1(t): a^−1(t) ⊕ a(t) = 0.
The L-group is the only known algebraic structure in which, along with lattice axiomatics, the group axioms of a linear space LS hold. This circumstance makes L-groups extraordinarily interesting from the standpoint of signal processing theory, inasmuch as such a diversity of algebraic (and also geometric) properties allows the use of L-groups to describe, on the one hand, signal spaces with group properties (for instance, linear signal spaces), and on the other hand, signal spaces with lattice properties.
The most common model describing physical signal space is linear space LS,
by which we mean an additive commutative group GLS = (Γ, +) over a ring of
scalars [233]. The physical signal space Γ can be constructed on the basis of L-
group LG = (Γ, +, ∨, ∧, 0), so that GLS = (Γ, +) is an additive commutative
group, L = (Γ, ∨, ∧) is a lattice, and SG∨ = (Γ, ∨), SG∧ = (Γ, ∧) are commutative
semigroups, 0 is a null/neutral element of a group GLS = (Γ, +).
Further, to denote algebraic systems of L-group LG = (Γ, +, ∨, ∧, 0), we use
the following designations: LS (+) for a linear space; L(∨, ∧) for a lattice.
The main properties of L-group are stated, for instance, in [221, Section XIII.3].
Signal space with lattice properties L(∨, ∧) can be obtained by a transformation
of the signals of a linear space LS (+) in such a way that in signal space L(∨, ∧) with
lattice properties and operations of join and meet ∨, ∧, the results of interaction be-
tween the signals a(t) and b(t) are realized according to the following relationships:

a(t) ∨ b(t) = {[a(t) + b(t)] + |a(t) − b(t)|}/2; (7.1.2a)


a(t) ∧ b(t) = {[a(t) + b(t)] − |a(t) − b(t)|}/2, (7.1.2b)
which are the consequence of the equations cited in [221, Section XIII.3;(14)], [221,
Section XIII.4;(22)].
The identities (7.1.2a,b) determine the mapping of a linear signal space LS(+) into a signal space with lattice properties L(∨, ∧): T: LS(+) → L(∨, ∧). The mapping T^−1, inverse to the initial one T: T^−1: L(∨, ∧) → LS(+), is determined by the known identity [221, Section XIII.3;(14)]:

a(t) + b(t) = a(t) ∨ b(t) + a(t) ∧ b(t). (7.1.3)
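The identities (7.1.2a,b) and (7.1.3) can be checked sample-wise on arbitrary realizations; a minimal numpy sketch (the Gaussian samples are purely illustrative):

```python
# Sketch: the identities (7.1.2a,b) and (7.1.3) checked sample-wise.
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=10)                       # realization a(t)
b = rng.normal(size=10)                       # realization b(t)

join = ((a + b) + np.abs(a - b)) / 2.0        # (7.1.2a): a v b
meet = ((a + b) - np.abs(a - b)) / 2.0        # (7.1.2b): a ^ b

print(np.allclose(join, np.maximum(a, b)))    # True
print(np.allclose(meet, np.minimum(a, b)))    # True
print(np.allclose(a + b, join + meet))        # True: identity (7.1.3)
```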

7.2 Estimation of Unknown Nonrandom Parameter in Sample Space with Lattice Properties
Estimation of the signals and their parameters is the most general problem of
signal processing under interference (noise) background. Some other problems of
signal processing relate to this one, for instance, signal detection, signal classifica-
tion, and signal resolution. In the literature, the problems of signal processing in
the presence of interference (noise) are formulated in terms of linear signal space,
where the result of interaction x between the signal s and interference (noise) n is
described by operation of addition of an additive commutative group: x = s + n.
Quite similarly (i.e., by operation of addition), the literature describes the results of
interaction between unknown nonrandom parameters of the signal with estimation
errors (measurement errors) caused by the influence of interference (noise). How-
ever, in a number of cases, the interaction between useful signal and interference
(noise), or the interaction between signal parameters and measurement errors can
be nonlinear.
Characteristics and behavior of estimators under additive (in terms of linear sample space) interaction between the estimated parameter and measurement errors from some arbitrary family of distributions are covered in the corresponding literature [250], [231], [232], [251]. Examples of estimators whose asymptotic variance never exceeds the Cramer-Rao lower bound, and for some values of the estimated parameter lies below it, were proposed by Hodges and Le Cam [250], [251], [256]. Such estimators are called superefficient. Here we consider an example of estimators that are close to superefficient ones with respect to their properties, but the nature of whose superefficiency is fundamentally different: it is stipulated by the differences between the algebraic properties of the sample spaces where they arise and the properties of linear sample space.
The subject for further consideration is comparative characteristics of the esti-
mators of an unknown nonrandom location parameter in both linear sample space
and sample space with lattice properties. Lattices are well investigated [221], [223].
Sample space L(Y, BY; ∨, ∧) with lattice properties is defined as a probability space (Y, BY) in which the axioms of a distributive lattice L(Y; ∨, ∧) hold, with operations of join and meet, respectively: a ∨ b = supL{a, b}, a ∧ b = infL{a, b}; a, b ∈ L(Y; ∨, ∧).
In most works on point estimation, the model of indirect measurement of an
unknown nonrandom scalar location parameter λ is described by its additive in-
teraction with statistically independent measurement errors in linear sample space
LS (X , BX ; +):
Xi = f (λ) + Ni ,
where f (λ) is some known one-to-one function of a measured parameter; {Ni } are

the independent measurement errors with a distribution from the distribution class
with symmetric (even) probability density function pN (z ) = pN (−z ) represented
by the sample N = (N1 , . . . , Nn ), Ni ∈ N , so that N ∈ LS (X , BX ; +); {Xi } are the
measurement results represented by the sample X = (X1 , . . . , Xn ), Xi ∈ X: X ∈
LS (X , BX ; +); “+” is operation of addition of linear sample space LS (X , BX ; +);
i = 1, . . . , n is the index of the elements of statistical collections {Ni }, {Xi }; n is a
size of the samples N = (N1 , . . . , Nn ), X = (X1 , . . . , Xn ).
The estimators obtained on the basis of the least squares method (LSM) and the least modules method (LMM), according to the criteria of the minimum of the sums of squares and modules of measurement errors, respectively, are the first and simplest estimators [231], [257]:

λ̂LSM = arg min_λ { ∑_i (Xi − f(λ))² }; (7.2.1a)

λ̂LMM = arg min_λ { ∑_i |Xi − f(λ)| }. (7.2.1b)
Extrema of the functions ∑_i (Xi − f(λ))² and ∑_i |Xi − f(λ)| determined by the criteria (7.2.1a) and (7.2.1b) are found as the roots of the equations:

d ∑_i (Xi − f(λ̂))² / dλ̂ = 0; (7.2.2a)

d ∑_i |Xi − f(λ̂)| / dλ̂ = 0. (7.2.2b)

The values of the estimators λ̂LSM and λ̂LMM are the solutions of Equations (7.2.2a) and (7.2.2b) in the form of a function f^−1[∗] of the sample mean and the sample median med{∗} of the observations {Xi}, respectively:

λ̂LSM = f^−1( (1/n) ∑_{i=1}^{n} Xi ); (7.2.3a)

λ̂LMM = f^−1[ med_{i∈N∩[1,n]} {Xi} ], (7.2.3b)

where f^−1[∗] is a function that is inverse with respect to the function f(λ) of the parameter λ; N is the set of natural numbers.
The estimators (7.2.3a) and (7.2.3b) are asymptotically efficient in the case of Gaussian and Laplacian distributions of measurement errors, respectively.
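A minimal Python sketch of the closed-form estimators (7.2.3a,b) follows; the one-to-one function f(λ) = λ³ (with inverse given by the cube root), the Gaussian errors, and the sample size are illustrative assumptions:

```python
# Sketch: closed-form LSM/LMM estimators (7.2.3a,b) for X_i = f(lambda) + N_i,
# with the illustrative one-to-one choice f(lambda) = lambda**3.
import numpy as np

rng = np.random.default_rng(3)
lam, n = 2.0, 1001
X = lam ** 3 + rng.normal(0.0, 1.0, n)        # measurements (Gaussian errors)

lam_lsm = np.cbrt(np.mean(X))                 # (7.2.3a): f^-1(sample mean)
lam_lmm = np.cbrt(np.median(X))               # (7.2.3b): f^-1(sample median)
print(f"LSM: {lam_lsm:.4f}, LMM: {lam_lmm:.4f}, true lambda = {lam}")
```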
Consider two models of indirect measurement of an unknown nonrandom scalar
nonnegative location parameter λ ∈ R+ = [0, ∞[ in sample space with lattice
properties L(Y, BY ; ∨, ∧) respectively:

Yi = f (λ) ∨ Ni ; (7.2.4a)

Ỹi = f (λ) ∧ Ni , (7.2.4b)



where f (λ) is some known one-to-one function of a measured parameter; {Ni }


are the independent measurement errors with a distribution from the distribution
class with symmetric probability density function pN (z ) = pN (−z ) represented by
the sample N = (N1 , . . . , Nn ), Ni ∈ N , and besides N ∈ L(Y, BY ; ∨, ∧); {Yi },
{Ỹi } are the measurement results represented by the samples Y = (Y1 , . . . , Yn ),
Ỹ = (Ỹ1 , . . . , Ỹn ), Yi ∈ Y , Ỹi ∈ Ỹ , respectively: Y, Ỹ ∈ L(Y, BY ; ∨, ∧); ∨, ∧ are the
operations of join and meet of sample space with lattice properties L(Y, BY ; ∨, ∧),
respectively; i = 1, . . . , n is the index of the elements of statistical collections {Ni },
{Yi }, {Ỹi }; n represents a size of the samples N = (N1 , . . . , Nn ), Y = (Y1 , . . . , Yn ),
Ỹ = (Ỹ1 , . . . , Ỹn ).
We can obtain the estimators λ̂n,∧, λ̂n,∨ of the parameter λ for the models of indirect measurement (7.2.4a) and (7.2.4b) according to the criteria of the minimum of the meet/join of measurement errors, respectively:

λ̂n,∧ = arg min_{λ∈R+} | ∧_{i=1}^{n} (Yi − f(λ)) |; (7.2.5a)

λ̂n,∨ = arg min_{λ∈R+} | ∨_{i=1}^{n} (Ỹi − f(λ)) |, (7.2.5b)

where ∧_{i=1}^{n} Yi = inf_Y {Yi} is the meet of the set Y = (Y1, . . . , Yn); ∨_{i=1}^{n} Ỹi = sup_Ỹ {Ỹi} is the join of the set Ỹ = (Ỹ1, . . . , Ỹn).
We next find the extrema of the functions | ∧_{i=1}^{n} (Yi − f(λ)) | and | ∨_{i=1}^{n} (Ỹi − f(λ)) | defined by the criteria (7.2.5a) and (7.2.5b), respectively, setting their derivatives at the estimator λ̂ of the parameter λ to zero:

d| ∧_{i=1}^{n} (Yi − f(λ̂)) |/dλ̂ = −sign[ ∧_{i=1}^{n} (Yi − f(λ̂)) ] f′(λ̂) = 0; (7.2.6a)

d| ∨_{i=1}^{n} (Ỹi − f(λ̂)) |/dλ̂ = −sign[ ∨_{i=1}^{n} (Ỹi − f(λ̂)) ] f′(λ̂) = 0. (7.2.6b)

The values of the estimators λ̂n,∧ and λ̂n,∨ are the solutions of Equations (7.2.6a) and (7.2.6b) in the form of the function f^−1[∗] of the meet and join of the observation results {Yi} and {Ỹi}, respectively:

λ̂n,∧ = f^−1( ∧_{i=1}^{n} Yi ); (7.2.7a)

λ̂n,∨ = f^−1( ∨_{i=1}^{n} Ỹi ), (7.2.7b)

where f^−1[∗] is the inverse function with respect to the function f[∗] of the parameter λ.
Derivatives of the functions | ∧_{i=1}^{n} (Yi − f(λ)) | and | ∨_{i=1}^{n} (Ỹi − f(λ)) |, according to the identities (7.2.6a) and (7.2.6b), change their signs from minus to plus at the points λ̂n,∧, λ̂n,∨, so the extrema determined by the formulas (7.2.7a), (7.2.7b) are the minimum points of these functions and the solutions of Equations (7.2.5a), (7.2.5b) defining these estimation criteria.
We now carry out the comparative analysis of the quality characteristics of
the estimation of an unknown nonrandom nonnegative location parameter λ ∈
R+ = [0, ∞[ for the model of the direct measurement in both linear sample space
LS (X , BX ; +) and sample space with lattice properties L(Y, BY ; ∨, ∧), respectively:

Xi = λ + Ni ; (7.2.8)

Yi = λ ∨ Ni , (7.2.9)
where {Ni } are the independent measurement errors that are normally distributed
with zero expectation and variance D, represented by the sample N = (N1 , . . . , Nn ),
Ni ∈ N , and N ∈ LS (X , BX ; +) and N ∈ L(Y, BY ; ∨, ∧); {Xi }, {Yi } are the mea-
surement results represented by the samples X = (X1 , . . . , Xn ), Y = (Y1 , . . . , Yn );
Xi ∈ X, Yi ∈ Y , respectively: X ∈ LS (X , BX ; +), Y ∈ L(Y, BY ; ∨, ∧); “ +” and “∨”
are operation of addition of linear sample space LS (X , BX ; +) and operation of join
of sample space with lattice properties L(Y, BY ; ∨, ∧), respectively; i = 1, . . . , n is
the index of the elements of statistical collections {Ni }, {Xi }, {Yi }; n is a size of
the samples N = (N1 , . . . , Nn ), X = (X1 , . . . , Xn ), Y = (Y1 , . . . , Yn ).
For the model (7.2.8), the estimator λ̂n,+ in the form of a sample mean is a uniformly minimum variance unbiased estimator [250], [251]:

λ̂n,+ = (1/n) ∑_{i=1}^{n} Xi. (7.2.10)

As the estimator λ̂n,∧ of the parameter λ for the model (7.2.9), we take the meet (7.2.7a):

λ̂n,∧ = ∧_{i=1}^{n} Yi, (7.2.11)

where ∧_{i=1}^{n} Yi = inf_Y {Yi} is the least value from the sample Y = (Y1, . . . , Yn).
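A Monte Carlo sketch in Python comparing the two estimators follows; the Gaussian errors and the particular values of λ, D, n, and the trial count are illustrative assumptions:

```python
# Sketch: Monte Carlo comparison of the sample-mean estimator (7.2.10) for
# X_i = lambda + N_i with the meet estimator (7.2.11) for Y_i = lambda v N_i.
import numpy as np

rng = np.random.default_rng(4)
lam, D, n, trials = 0.5, 1.0, 5, 200000

N = rng.normal(0.0, np.sqrt(D), (trials, n))      # measurement errors
est_plus = np.mean(lam + N, axis=1)               # (7.2.10), model (7.2.8)
est_meet = np.min(np.maximum(lam, N), axis=1)     # (7.2.11), model (7.2.9)

print(f"var lambda_n,+ = {est_plus.var():.4f}  (theory D/n = {D / n:.4f})")
print(f"var lambda_n,^ = {est_meet.var():.6f}")
print(f"P_e (empirical) = {np.mean(est_meet > lam):.4f}")
```

The run illustrates the qualitative difference between the two sample spaces: the variance of λ̂n,∧ falls far below D/n, at the price of a biased error event whose probability is bounded by (7.2.23) below.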
The cumulative distribution function (CDF) Fλ̂n,+(z) and probability density function (PDF) pλ̂n,+(z) of the estimator λ̂n,+ (7.2.10) for the model (7.2.8) are determined by the expressions [250], [251]:

Fλ̂n,+(z) = ∫_{−∞}^{z} pλ̂n,+(x) dx; (7.2.12)

pλ̂n,+(z) = (2πD/n)^{−1/2} exp[−(z − λ)²/(2D/n)], (7.2.13)
where D is a variance of a measurement error Ni ; n is a size of the sample collection
{Xi }.
It is also known that CDF Fλ̂n,∧ (z ) of the estimator λ̂n,∧ (7.2.11) for the model
(7.2.9) is determined by the expression [252], [253]:

Fλ̂n,∧(z) = 1 − [1 − F(z)]^n, (7.2.14)



where F(z) is the CDF of the random variable Yi = λ ∨ Ni:

F(z) = FN(z) · Fλ(z), (7.2.15)

FN(z) = ∫_{−∞}^{z} pN(x) dx is the CDF of the measurement error Ni; pN(z) = (2πD)^{−1/2} exp[−z²/(2D)] is the PDF of the measurement error Ni; Fλ(z) = 1(z − λ) is the CDF of the unknown nonrandom parameter λ ≥ 0; 1(z) is the Heaviside step function.
So, F(z) (7.2.15) can be written in the form:

F(z) = FN(z), z ≥ λ; F(z) = 0, z < λ. (7.2.16)

Taking into account (7.2.16), formula (7.2.14) can be written in the form:

Fλ̂n,∧(z) = 1 − [1 − FN(z)]^n, z ≥ λ; Fλ̂n,∧(z) = 0, z < λ,

or, more compactly:

Fλ̂n,∧(z) = (1 − [1 − FN(z)]^n) · 1(z − λ). (7.2.17)

According to its definition, the PDF pλ̂n,∧(z) of the estimator λ̂n,∧ is the derivative of the CDF Fλ̂n,∧(z):

pλ̂n,∧(z) = F′λ̂n,∧(z) = (1 − [1 − FN(λ)]^n) · δ(z − λ) + n · pN(z)[1 − FN(z)]^{n−1} · 1(z − λ), (7.2.18)


where δ(z) is the delta function.
The expression (7.2.18) can be written in the following form:

pλ̂n,∧(z) = Pc · δ(z − λ), z = λ; pλ̂n,∧(z) = n · pN(z)[1 − FN(z)]^{n−1}, z > λ, (7.2.19)

where Pc = 1 − Pe is the probability of correct formation of the estimator λ̂n,∧:

Pc = 1 − [1 − FN(λ)]^n, (7.2.20)

and Pe is the probability of erroneous formation of the estimator λ̂n,∧:

Pe = [1 − FN(λ)]^n. (7.2.21)

The PDFs pλ̂n,∧ (z ) of the estimator λ̂n,∧ for n = 1, 2 are shown in Fig. 7.2.1. Each
random variable Ni is determined by an even PDF pN(z) with zero expectation, and the parameter satisfies λ ≥ 0, so the inequality holds:

FN(λ) ≥ 1/2. (7.2.22)
276 7 Synthesis and Analysis of Signal Processing Algorithms

FIGURE 7.2.1 PDF pλ̂n,∧(z) of estimator λ̂n,∧: (a) n = 1; (b) n = 2

According to the formula (7.2.22), the probability Pe of erroneous formation of the estimator λ̂n,∧ is bounded above by the quantity:

Pe ≤ 2^−n. (7.2.23)

Correspondingly, the probability Pc of correct formation of the estimator λ̂n,∧ is bounded below by the quantity:

Pc ≥ 1 − 2^−n. (7.2.24)

The relationship (7.2.19) implies that the estimator λ̂n,∧ is biased; nevertheless, it is both consistent and asymptotically unbiased, inasmuch as it converges in probability (λ̂n,∧ →_P λ) and in distribution (λ̂n,∧ →_p λ) to the estimated parameter λ:

λ̂n,∧ →_P λ :  lim_{n→∞} P{|λ̂n,∧ − λ| < ε} = 1 for ∀ε > 0;

λ̂n,∧ →_p λ :  lim_{n→∞} pλ̂n,∧(z) = δ(z − λ).
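These properties are easy to probe numerically; a short sketch (with assumed parameters) checks the error probability (7.2.21) against the bound (7.2.23) for growing n:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
lam, D, trials = 0.5, 1.0, 200_000                 # assumed values, lam >= 0
for n in (1, 2, 4, 8, 16):
    Y = np.maximum(lam, rng.normal(0.0, np.sqrt(D), (trials, n)))
    p_err = np.mean(Y.min(axis=1) > lam)           # empirical P_e: no N_i fell below lam
    p_err_th = (1.0 - norm.cdf(lam, scale=np.sqrt(D))) ** n   # eq. (7.2.21)
    print(n, p_err, p_err_th, 2.0 ** (-n))         # bound (7.2.23)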

7.2.1 Efficiency of Estimator λ̂n,∧ in Sample Space with Lattice Properties with Respect to Estimator λ̂n,+ in Linear Sample Space
It is difficult to obtain an exact value of the variance D{λ̂n,∧} of the estimator λ̂n,∧ directly on the basis of formula (7.2.19). The following theorem allows us to get an idea of the variance D{λ̂n,∧} of the estimator λ̂n,∧.

Theorem 7.2.1. For the model of the measurement (7.2.9), the variance D{λ̂n,∧} of the estimator λ̂n,∧ (7.2.11) of a parameter λ is bounded above by the quantity:

D{λ̂n,∧} ≤ Pc Pe λ² + n · Pe D [1 + λ exp(−λ²/2D) / ([1 − FN(λ)] √(2πD))] ,

where Pc = 1 − [1 − FN(λ)]^n, Pe = [1 − FN(λ)]^n.



Proof. The probability density function pλ̂n,∧(z) (7.2.19) of the estimator λ̂n,∧ (7.2.11) can be represented in the following form:

pλ̂n,∧(z) = Pc · δ(z − λ), z = λ;   pλ̂n,∧(z) = 2Pe · pN(z) · K(z), z > λ,   (7.2.25)

where K(z) is a function determined by the formula:

K(z) = n[1 − FN(z)]^{n−1}/(2 · Pe).   (7.2.26)
It should be noted that for any z ≥ λ, the inequality holds:

0 < [1 − FN(z)] ≤ [1 − FN(λ)],

which implies the inequality:

0 < K(z) ≤ n/(2 · [1 − FN(λ)]).   (7.2.27)

Taking into account the boundedness of the function K(z) (7.2.26) appearing in the PDF pλ̂n,∧(z) (7.2.25) of the estimator λ̂n,∧, one can obtain the upper bound sup_{K(z)} D{λ̂n,∧} of its variance D{λ̂n,∧}:

sup_{K(z)} D{λ̂n,∧} = sup_{K(z)} m2{λ̂n,∧} − (inf_{K(z)} m1{λ̂n,∧})² ,   (7.2.28)
where m2{λ̂n,∧} is the second moment of the estimator λ̂n,∧:

m2{λ̂n,∧} = ∫_{−∞}^{∞} z² pλ̂n,∧(z) dz ;   (7.2.29)

m1{λ̂n,∧} is the expectation of the estimator λ̂n,∧:

m1{λ̂n,∧} = ∫_{−∞}^{∞} z pλ̂n,∧(z) dz .   (7.2.30)

Substituting the formula (7.2.25) into the definition of the second moment (7.2.29) of the estimator λ̂n,∧, we obtain the following expression:

m2{λ̂n,∧} = Pc ∫_{−∞}^{∞} z² δ(z − λ) dz + 2Pe ∫_{λ}^{∞} z² pN(z) K(z) dz .   (7.2.31)

Applying the inequality (7.2.27), characterizing the boundedness of K(z) above, to the second component in the right part of the formula (7.2.31), we obtain the quantity of the upper bound sup_{K(z)} m2{λ̂n,∧} of the second moment m2{λ̂n,∧}:

sup_{K(z)} m2{λ̂n,∧} = Pc λ² + n · Pe D [1 + λ exp(−λ²/2D) / (√(2πD) [1 − FN(λ)])] .   (7.2.32)

Similarly, substituting the formula (7.2.25) into the definition of expectation (7.2.30) of the estimator λ̂n,∧, we obtain the following expression:

m1{λ̂n,∧} = Pc ∫_{−∞}^{∞} z δ(z − λ) dz + 2Pe ∫_{λ}^{∞} z pN(z) K(z) dz .   (7.2.33)

Applying the inequality (7.2.27), characterizing the boundedness of K(z) below, to the second component in the right part of the formula (7.2.33), we obtain the quantity of the lower bound inf_{K(z)} m1{λ̂n,∧} of the expectation m1{λ̂n,∧}:

inf_{K(z)} m1{λ̂n,∧} = Pc λ .   (7.2.34)

Substituting the quantities determined by the formulas (7.2.32) and (7.2.34) into the formula (7.2.28), we obtain the following expression for the upper bound sup_{K(z)} D{λ̂n,∧} of the variance D{λ̂n,∧} of the estimator λ̂n,∧:

sup_{K(z)} D{λ̂n,∧} = Pc Pe λ² + n · Pe D [1 + λ exp(−λ²/2D) / ([1 − FN(λ)] √(2πD))] ,   (7.2.35)

where Pc = 1 − [1 − FN(λ)]^n, Pe = [1 − FN(λ)]^n.

Theorem 7.2.1 implies the following corollary.

Corollary 7.2.1. The variance D{λ̂n,∧} of the estimator λ̂n,∧ (7.2.11) of a parameter λ for the model of measurement (7.2.9) is bounded above by the quantity:

D{λ̂n,∧} ≤ n · 2^{−n} D .   (7.2.36)

Proof. Investigate the behavior of the upper bound sup_{K(z)} D{λ̂n,∧} of the variance D{λ̂n,∧} of the estimator λ̂n,∧ on λ ≥ 0. Show that sup_{K(z)} D{λ̂n,∧} is a monotone decreasing function of the parameter λ, so that this function takes a maximum value equal to n · 2^{−n} D in the point λ = 0:

lim_{λ→0+0} (sup_{K(z)} D{λ̂n,∧}) = n · 2^{−n} D ≥ sup_{K(z)} D{λ̂n,∧} ≥ D{λ̂n,∧}.   (7.2.37)

We denote the function sup_{K(z)} D{λ̂n,∧} determined by the identity (7.2.35) by y(λ):

y(λ) = (1 − Q^n(λ))Q^n(λ)λ² + nQ^n(λ)D(1 + λf(λ)Q^{−1}(λ)),   (7.2.38)

where Q(x) = 1 − FN(x); f(x) = (2πD)^{−1/2} exp{−x²/(2D)}.
Determine the derivative of the function (7.2.38), whose expression, omitting the intermediate computations, can be written in the form:

y′(λ) = −2nλ²f(λ)Q^{n−1}(λ)(1 − Q^n(λ)) − Df(λ)Q^{n−1}(λ)(n² − n) +
+ λQ^{n−2}(λ)[2Q²(λ)(1 − Q^n(λ)) − Df²(λ)(n² − n)].   (7.2.39)
It is obvious that the first two components in (7.2.39) are negative. Then, to prove that the derivative y′(λ) of the function y(λ) (7.2.38) is negative, it is sufficient to show that the second multiplier in the third component is smaller than zero:

u(λ) = 2Q²(λ)(1 − Q^n(λ)) − Df²(λ)(n² − n) < 0.   (7.2.40)
To prove the inequality (7.2.40) we use the following lemma.

Lemma 7.2.1. For a Gaussian random variable ξ with zero expectation and the variance D, the inequality holds:

2(1 − F(x)) ≤ (2πD)^{1/2} f(x),   (7.2.41)

where f(x) = (2πD)^{−1/2} exp{−x²/(2D)} is the PDF of the random variable ξ; F(x) = ∫_{−∞}^{x} f(t) dt is the CDF of the random variable ξ.
Proof of lemma. Consider Mills' ratio R(x) [258]:

R(x) = [1 − F(x)]/f(x) ≥ 0,  R(0) = [πD/2]^{1/2},  x ≥ 0.

Obviously, to prove the inequality (7.2.41), it is sufficient to prove that R(x) is a monotone decreasing function on x ≥ 0. The derivative R′(x) of Mills' function is equal to:

R′(x) = xR(x)/D − 1.   (7.2.42)

There exists the representation of Mills' function R(x) in the form of a continued fraction [258]:

R(x) = 1·D / (x + 1·D / (x + 2·D / (x + 3·D / (x + 4·D / (x + . . . ))))).   (7.2.43)

The relation (7.2.43) implies that R(x) < D/x, and the relation (7.2.42), in its turn, implies that R′(x) < 0. Thus, R(x) ≤ [πD/2]^{1/2} and R′(x) < 0, x ≥ 0. Lemma 7.2.1 is proved.
We rewrite the statement (7.2.41) of Lemma 7.2.1 in a form convenient for proving the inequality (7.2.40):

2Q(λ) ≤ (2πD)^{1/2} f(λ).   (7.2.44)

Squaring both parts of the inequality (7.2.44), we obtain the intermediate inequality:

4Q²(λ) ≤ 2πDf²(λ) ⇒ 2Q²(λ) ≤ πDf²(λ),

which implies that the inequality (7.2.40) holds in the case of large samples n >> 1 (it is sufficient that n > 2):

2Q²(λ)(1 − Q^n(λ)) ≤ 2Q²(λ) ≤ πDf²(λ) < Df²(λ)(n² − n).   (7.2.45)

Thus, the inequality (7.2.45) implies that the function y(λ) determined by the formula (7.2.38) is a monotone decreasing function on λ ≥ 0 and n > 2, so that y(0) = n · 2^{−n} D. Corollary 7.2.1 is proved.

Compare the quality of the estimators λ̂n,+ (7.2.10) and λ̂n,∧ (7.2.11) of a parameter λ for the models of the measurement (7.2.8) and (7.2.9), respectively, using the relative efficiency e{λ̂n,∧, λ̂n,+} equal to the ratio:

e{λ̂n,∧, λ̂n,+} = D{λ̂n,+}/D{λ̂n,∧},   (7.2.46)

where D{λ̂n,+} = D/n is the variance of the efficient estimator λ̂n,+ of a parameter λ in linear sample space; D{λ̂n,∧} is the variance of the estimator λ̂n,∧ of a parameter λ in sample space with lattice properties; n is the size of the samples X = (X1, . . . , Xn), Y = (Y1, . . . , Yn).
The comparative result of the quality of the estimators λ̂n,+ and λ̂n,∧ is determined by the following theorem.

Theorem 7.2.2. The relative efficiency e{λ̂n,∧, λ̂n,+} of the estimator λ̂n,∧ of a parameter λ in sample space with lattice properties L(Y, BY; ∨, ∧) with respect to the estimator λ̂n,+ of the same parameter λ in linear sample space LS(X, BX; +) is bounded below by the quantity:

e{λ̂n,∧, λ̂n,+} ≥ 2^n/n².   (7.2.47)
Proof. Using the statement (7.2.36) of Corollary 7.2.1 of Theorem 7.2.1 in the definition of relative efficiency e{λ̂n,∧, λ̂n,+} (7.2.46), we obtain:

D{λ̂n,∧} ≤ n · 2^{−n} D ⇒ e{λ̂n,∧, λ̂n,+} ≥ (D/n)/(n · 2^{−n} D) = 2^n/n².
Using the result (7.2.35) of Theorem 7.2.1, one can determine the lower bound inf_{K(z)} e{λ̂n,∧, λ̂n,+} of the relative efficiency e{λ̂n,∧, λ̂n,+} of the estimator λ̂n,∧ in the sample space with lattice properties (7.2.11) with respect to the estimator λ̂n,+ in linear sample space (7.2.10) for some limit cases:

inf_{K(z)} e{λ̂n,∧, λ̂n,+} = D{λ̂n,+} / sup_{K(z)} D{λ̂n,∧},   (7.2.48)

where D{λ̂n,+} = D/n is the variance of the efficient estimator λ̂n,+ of a parameter λ in linear sample space; sup_{K(z)} D{λ̂n,∧} is the upper bound of the variance D{λ̂n,∧} (7.2.35) of the estimator λ̂n,∧ of a parameter λ in sample space with lattice properties.

Assuming in the formula (7.2.35) that the parameter λ tends to zero and taking into account the equality FN(0) = 1/2, we obtain the value of the limit of the relative efficiency lower bound of the estimator λ̂n,∧ with respect to the estimator λ̂n,+, which is identical to the statement (7.2.47) of Theorem 7.2.2:

lim_{λ→0+0} (inf_{K(z)} e{λ̂n,∧, λ̂n,+}) = 2^n/n².   (7.2.49)

Analyzing Theorem 7.2.2, it is easy to see that the asymptotic relative efficiency of the estimator λ̂n,∧ with respect to the estimator λ̂n,+ as n → ∞ is infinitely large for arbitrary values of the estimated parameter λ:

lim_{n→∞} e{λ̂n,∧, λ̂n,+} → ∞.   (7.2.50)
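Both Corollary 7.2.1 and Theorem 7.2.2 admit a direct numerical check; the sketch below (assumed parameters; λ → 0 + 0 is the limit case of (7.2.49)) compares the empirical variance of λ̂n,∧ with the bound (7.2.36) and the empirical relative efficiency with the bound (7.2.47):

import numpy as np

rng = np.random.default_rng(2)
lam, D, trials = 0.0, 1.0, 400_000                 # lam -> 0: worst case of (7.2.49)
for n in (2, 4, 8, 16):
    var_meet = np.maximum(lam, rng.normal(0.0, np.sqrt(D), (trials, n))).min(axis=1).var()
    print(n,
          var_meet, n * 2.0 ** (-n) * D,           # empirical variance vs bound (7.2.36)
          (D / n) / var_meet, 2.0 ** n / n ** 2)   # efficiency (7.2.46) vs bound (7.2.47)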

Theorem 7.2.2 shows how large the worst value of the relative efficiency of the estimator λ̂n,∧ in sample space with lattice properties L(Y, BY; ∨, ∧) can be with respect to the well-known estimator λ̂n,+ in the form of a sample mean in linear sample space LS(X, BX; +).

Theorem 7.2.2 can be explained in the following way. While increasing the sample size n, the variance D{λ̂n,+} of the estimator λ̂n,+ in linear sample space LS(X, BX; +) decreases at a rate proportional to n, whereas the variance D{λ̂n,∧} of the estimator in sample space with lattice properties L(Y, BY; ∨, ∧) decreases at a rate proportional to 2^n/n, i.e., almost exponentially.

Such striking distinctions between estimator behavior in linear sample space LS(X, BX; +) and estimator behavior in sample space with lattice properties L(Y, BY; ∨, ∧) are explained by the fundamental difference between the algebraic properties of these spaces, which is revealed in how fully the information contained in the processed statistical collection is used in one sample space as against another.
In summary, there exist estimators in nonlinear sample spaces characterized, under a Gaussian distribution of measurement errors, by a variance that is noticeably smaller than the efficient estimator variance defined by the Cramer-Rao lower bound. This circumstance poses a question regarding the adequacy of estimator variance as a measure of the efficiency of unknown nonrandom parameter estimation over wide classes of sample spaces and over the families of symmetric distributions of measurement errors.

First, not all distributions are characterized by a finite variance. Second, a variance does not contain all information concerning the properties of a distribution. Third, the existence of superefficient estimators casts doubt on the correctness of using a variance to determine parameter estimation efficiency. Fourth, the analysis of the properties of the estimator sequences {λ̂n} (n is a sample size) on the basis of their variances does not allow taking into account the topological properties of sample spaces and parameter estimators in these spaces.

On the grounds of the aforementioned considerations, another approach is proposed to determine the efficiency of unknown nonrandom parameter estimation. Such an approach can determine the efficiency of unbiased and asymptotically unbiased estimators; it is based on metric properties of the estimators and is considered below.

7.2.2 Quality Indices of Estimators in Metric Sample Space
Let the estimation of an unknown nonrandom parameter λ ≥ 0 be realized within the framework of the models of direct measurement in linear sample space LS(X, BX; +) and in sample space with the properties of a universal algebra A(Y, BY; S) with a signature S, respectively [259], [260]:

Xi = λ + Ni ;   (7.2.51)

Yi = λ ⊕ Ni ,   (7.2.52)

where {Ni} are independent measurement errors, each with a PDF p^α_N(z) from some indexed (over a parameter α) set P = {p^α_N(z)}, represented by the sample N = (N1, . . . , Nn), Ni ∈ N, so that N ∈ LS(X, BX; +) and N ∈ A(Y, BY; S); {Xi}, {Yi} are the measurement results represented by the samples X = (X1, . . . , Xn), Y = (Y1, . . . , Yn), Xi ∈ X, Yi ∈ Y, respectively: X ∈ LS(X, BX; +), Y ∈ A(Y, BY; S); “+” is a binary operation of the additive commutative group LS(+) of linear sample space LS(X, BX; +); “⊕” is a binary operation of the additive commutative semigroup A(⊕) of sample space with the properties of universal algebra A(Y, BY; S) and a signature S; i = 1, . . . , n is an index of the elements of the statistical collections {Ni}, {Xi}, {Yi}; n represents the size of the samples N = (N1, . . . , Nn), X = (X1, . . . , Xn), Y = (Y1, . . . , Yn).
Let the estimators λ̂^α_{n,+}, λ̃^α_{n,⊕} of an unknown nonrandom parameter λ, both in linear sample space LS(X, BX; +) and in sample space with the properties of universal algebra A(Y, BY; S) and a signature S, be some functions T̂ and T̃ of the samples X = (X1, . . . , Xn), Y = (Y1, . . . , Yn), respectively, where, in the general case, the functions T̂, T̃ are different, T̂ ≠ T̃:

λ̂^α_{n,+} = T̂[X];   (7.2.53)

λ̃^α_{n,⊕} = T̃[Y].   (7.2.54)
We also assume that the estimators λ̂^α_{n,+}, λ̃^α_{n,⊕} of an unknown nonrandom parameter in sample spaces LS(X, BX; +) and A(Y, BY; S) are characterized by PDFs p^α_{λ̂n,+}(z), p^α_{λ̃n,⊕}(z), respectively, under the condition that the measurement errors {Ni} are independent and are described by the PDF p^α_N(z).
Definition 7.2.1. A set Λ^α of pairs of the estimators {λ̂^α_{k,+}; λ̃^α_{n,⊕}}, k ≤ n, {λ̂^α_{k,+}; λ̃^α_{n,⊕}} ⊂ Λ^α, is called estimator space (Λ^α, d^α) with metric d^α(λ̂^α_{k,+}, λ̃^α_{n,⊕}):

d^α(λ̂^α_{k,+}, λ̃^α_{n,⊕}) = (1/2) ∫_{−∞}^{∞} |p^α_{λ̂k,+}(z) − p^α_{λ̃n,⊕}(z)| dz = d^α(p^α_{λ̂k,+}(z), p^α_{λ̃n,⊕}(z)),   (7.2.55)

where d^α(p^α_{λ̂k,+}(z), p^α_{λ̃n,⊕}(z)) = (1/2) ∫_{−∞}^{∞} |p^α_{λ̂k,+}(z) − p^α_{λ̃n,⊕}(z)| dz is a metric between the PDFs p^α_{λ̂k,+}(z), p^α_{λ̃n,⊕}(z) of the estimators λ̂^α_{k,+} and λ̃^α_{n,⊕}, respectively, under the condition that the independent measurement errors {Ni} are characterized by a PDF p^α_N(z) from some indexed (over a parameter α) set P = {p^α_N(z)}; k and n are the sizes of the samples X = (X1, . . . , Xk), Y = (Y1, . . . , Yn); X ∈ LS(X, BX; +), Y ∈ A(Y, BY; S), respectively, k ≤ n.
To define the quality of the estimator λ̃^α_{n,⊕} of an unknown nonrandom parameter λ in estimator space (Λ^α, d^α), we introduce an index based on the metric (7.2.55).
Definition 7.2.2. By a quality index of the estimator λ̃^α_{n,⊕} we mean a quantity q{λ̃^α_{n,⊕}} equal to the metric (7.2.55) between the PDF p^α_{λ̂1,+}(z) of the estimator λ̂^α_{1,+} of a parameter λ obtained on the basis of the only measurement result X1 within the model of additive interaction between the parameter λ and measurement errors (7.2.51), and the PDF p^α_{λ̃n,⊕}(z) of the estimator λ̃^α_{n,⊕} of parameter λ obtained on the basis of all n measurement results of the observed sample collection Y = (Y1, . . . , Yn) within the model of investigated interaction between the parameter λ and measurement errors (7.2.52):

q{λ̃^α_{n,⊕}} = d^α(p^α_{λ̂1,+}(z), p^α_{λ̃n,⊕}(z)).   (7.2.56)
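Numerically, the metric (7.2.55) is half the L1 distance between estimator PDFs and can be evaluated on a grid; a sketch under the Gaussian assumptions used below (all parameter values assumed):

import numpy as np
from scipy.stats import norm

lam, D, n = 1.0, 1.0, 8                            # assumed illustrative values
z = np.linspace(lam - 8.0, lam + 8.0, 40_001)
p_1 = norm.pdf(z, loc=lam, scale=np.sqrt(D))       # one-sample estimator PDF, cf. (7.2.59)
p_n = norm.pdf(z, loc=lam, scale=np.sqrt(D / n))   # n-sample mean PDF, cf. (7.2.60)
q_plus = 0.5 * np.trapz(np.abs(p_1 - p_n), z)      # quality index (7.2.56) via metric (7.2.55)

q_meet = 1.0 - (1.0 - norm.cdf(lam, scale=np.sqrt(D))) ** n   # closed form, cf. (7.2.70) below
print(q_plus, q_meet)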

Now we compare how distinct from each other are the PDFs p^α_{λ̂1,+}(z), p^α_{λ̃n,⊕}(z) of the estimators λ̂^α_{1,+} and λ̃^α_{n,⊕}, i.e., the estimators obtained on the basis of the only measurement result and n measurement results, respectively. By λ̃^α_{n,⊕} we mean the estimator (7.2.54) obtained within the model (7.2.52), so that a linear sample space LS(X, BX; +) can also be used as a sample space with the properties of universal algebra A(Y, BY; S).
The following theorem helps to determine a relation between the values of the quality indices of the estimation q{λ̃n,+} and q{λ̃n,∧} (7.2.56) for the estimators λ̃n,+ and λ̃n,∧ defined by the expressions (7.2.10) and (7.2.11) within the models of direct measurement (7.2.8), (7.2.9) in both linear sample space LS(X, BX; +) and sample space with lattice properties L(Y, BY; ∨, ∧), respectively, on the assumption of normalcy of distribution of measurement errors {Ni} with PDF pN(z) in the form:

pN(z) = (2πD)^{−1/2} exp{−z²/2D}.
Theorem 7.2.3. The relation between quality indices q{λ̃n,+} and q{λ̃n,∧} of the estimators λ̃n,+ and λ̃n,∧ in linear sample space LS(X, BX; +) and in sample space with lattice properties L(Y, BY; ∨, ∧), respectively, on the assumption of normalcy of distribution of measurement errors, is determined by the following inequality:

q{λ̃n,∧} ≥ 1 − 2^{−n} > 1 − [2Φ(1) − 1] √(2/(n + 1)) ≥ q{λ̃n,+},  n ≥ 1,   (7.2.57)

where Φ(x) = (1/√(2π)) ∫_{−∞}^{x} exp(−t²/2) dt.
Proof. As for the estimator λ̃n,+, on the assumption of normalcy of distribution of measurement errors {Ni}, the metric d(p_{λ̂1,+}(z), p_{λ̂n,+}(z)) is determined by the expression:

d(p_{λ̂1,+}(z), p_{λ̂n,+}(z)) = 2 |Φ(√(n ln n/(n − 1))) − Φ(√(ln n/(n − 1)))| ,   (7.2.58)

where Φ(x) = (1/√(2π)) ∫_{−∞}^{x} exp(−t²/2) dt;

p_{λ̂1,+}(z) = (2πD)^{−1/2} exp{−(z − λ)²/2D};   (7.2.59)

p_{λ̂n,+}(z) = (2πD/n)^{−1/2} exp{−(z − λ)²/(2D/n)}.   (7.2.60)
To make the following calculations convenient, we denote:

x′ = √(n ln n/(n − 1)),  x″ = √(ln n/(n − 1)).

Then the metric (7.2.58) is determined by the expression:

d(p_{λ̂1,+}(z), p_{λ̂n,+}(z)) = 2 |Φ(x′) − Φ(x″)| .   (7.2.61)
On x ≥ 0, the following inequality holds [258], [261]:

Φ(x) ≤ Φ1(x) = 1/2 + (1/2)√(1 − exp(−2x²/π)).   (7.2.62)

On n ≥ 1, the relation x″ = √(ln n/(n − 1)) ≤ 1 is true. Thus, on 0 ≤ x ≤ 1, the inequality holds:

Φ(x) ≥ Φ2(x) = 1/2 + [Φ(1) − 1/2] · x.   (7.2.63)
Using the inequalities (7.2.62) and (7.2.63), we obtain the following inequality:

d(p_{λ̂1,+}(z), p_{λ̂n,+}(z)) = 2 |Φ(x′) − Φ(x″)| ≤ 2 |Φ1(x′) − Φ2(x″)| .   (7.2.64)
The following series expansion of the logarithmic function is known [262]:

ln x = 2 ∑_{k=1}^{∞} (x − 1)^{2k−1} / ((2k − 1)(x + 1)^{2k−1}),  x > 0.   (7.2.65)

Then the following inequality holds:

x″ = √(ln n/(n − 1)) ≥ √(2/(n + 1)),  n ≥ 1.   (7.2.66)
Applying the inequality (7.2.66) to the inequality (7.2.64) and taking into account that Φ1(x′) ≤ 1, we obtain the inequality:

d(p_{λ̂1,+}(z), p_{λ̂n,+}(z)) ≤ 2 |Φ1(x′) − Φ2(x″)| ≤ 1 − [2Φ(1) − 1] √(2/(n + 1)).   (7.2.67)
The quality index q{λ̂n,+} of the estimator λ̂n,+, according to (7.2.56), is determined by the metric (7.2.58), which, by virtue of the inequality (7.2.67), is bounded above by the quantity:

q{λ̂n,+} = d(p_{λ̂1,+}(z), p_{λ̂n,+}(z)) ≤ 1 − [2Φ(1) − 1] √(2/(n + 1)).   (7.2.68)
As for the estimator λ̂n,∧ of parameter λ in sample space with lattice properties L(Y, BY; ∨, ∧), the metric d(p_{λ̂1,+}(z), p_{λ̂n,∧}(z)) is determined by the expression:

d(p_{λ̂1,+}(z), p_{λ̂n,∧}(z)) = (1/2) ∫_{−∞}^{∞} |p_{λ̂1,+}(z) − p_{λ̂n,∧}(z)| dz,   (7.2.69)

where p_{λ̂n,∧}(z) is the PDF of the estimator λ̂n,∧ determined by the formula (7.2.19) when the sample Y = (Y1, . . . , Yn) with a size n is used within the model (7.2.9); p_{λ̂1,+}(z) is the PDF of the estimator λ̂n,+ determined by the formula (7.2.59) when only the element X1 from the sample X = (X1, . . . , Xk) is used within the model (7.2.8).

Substituting the expressions (7.2.19) and (7.2.59) into the formula (7.2.69), we obtain the exact value of the metric:

d(p_{λ̂1,+}(z), p_{λ̂n,∧}(z)) = Pc = 1 − [1 − FN(λ)]^n,   (7.2.70)

where Pc is a probability of a correct formation of the estimator λ̂n,∧ defined by


the formula (7.2.20).
Since the estimated parameter λ takes nonnegative values (λ ≥ 0), the quality
index q{λ̂n,∧ } of the estimator λ̂n,∧ , according to definition (7.2.56), the identity
(7.2.70), and the inequality (7.2.24), is bounded below by the quantity 1 − 2−n :
q{λ̂n,∧ } = d(pλ̂1,+ (z ), pλ̂n,∧ (z )) ≥ 1 − 2−n . (7.2.71)
The joint fulfillment of the inequalities (7.2.68) and 7.2.71), and the inequality:
r
−n 2
1 − 2 > 1 − [2Φ(1) − 1] ,
n+1
implies fulfillment of the inequality (7.2.57).

Thus, Theorem 7.2.3 determines the upper ∆∧(n) and lower ∆+(n) bounds of the rates of convergence of the quality indices q{λ̂n,∧} and q{λ̂n,+} of the estimators λ̂n,∧ and λ̂n,+ to 1, described by the following functions, respectively:

∆∧(n) = 2^{−n};   ∆+(n) = [2Φ(1) − 1] √(2/(n + 1)),

whose plots are shown in Fig. 7.2.2.
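A few numerical values of these bounds (a simple sketch) make the contrast in convergence rates explicit:

import numpy as np
from scipy.stats import norm

c = 2.0 * norm.cdf(1.0) - 1.0                      # 2*Phi(1) - 1, approx 0.6827
for n in (1, 2, 4, 8, 16, 32):
    print(n, 2.0 ** (-n), c * np.sqrt(2.0 / (n + 1.0)))   # Delta_meet(n), Delta_plus(n)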
The analysis of relative efficiency of the estimators λ̂n,∧ , λ̂n,+ (7.2.47) and also
the relation between their quality indices (7.2.57) allows us to draw the following
conclusions.
FIGURE 7.2.2 Upper ∆∧(n) and lower ∆+(n) bounds of quality indices q{λ̂n,∧} and q{λ̂n,+}

1. The essential distinctions in the characteristics and behavior of the estimators λ̂n,∧ and λ̂n,+, showing the advantage of the estimator λ̂n,∧, are explained by the features of the interaction between an unknown nonrandom parameter λ and measurement errors {Ni} in sample space with lattice properties (model (7.2.9)). Under interactions between a parameter and measurement errors {Ni} in sample space with lattice properties L(Y, BY; ∨, ∧), the information contained in a sample is used with essentially lower losses than under their additive interaction in linear sample space LS(X, BX; +). This circumstance accounts for the higher efficiency of the estimation in sample space with lattice properties L(Y, BY; ∨, ∧) as against the estimation in linear sample space LS(X, BX; +).

2. The asymptotic properties of the estimator λ̂n,∧ in sample space with lattice properties L(Y, BY; ∨, ∧) are much better than those of the estimator λ̂n,+ in linear sample space LS(X, BX; +). The rate of convergence of the quality index (7.2.56) to 1 for the estimator λ̂n,∧ is determined by an exponential dependence on the sample size n, whereas for the estimator λ̂n,+ this rate does not exceed the square root of the sample size n.

3. The estimator λ̂n,∧ is more robust with respect to the kind of the distribution of measurement errors {Ni} than the estimator λ̂n,+.
4. Consider the case of the estimation of an unknown nonrandom nonpositive parameter λ ≤ 0 for the model of direct measurement in sample space with lattice properties L(Y, BY; ∨, ∧):

Zi = λ ∧ Ni ,   (7.2.72)

where {Ni} are independent measurement errors normally distributed with zero expectation and a variance D, represented by the sample N = (N1, . . . , Nn), Ni ∈ N, so that N ∈ L(Y, BY; ∨, ∧); {Zi} are the measurement results represented by the sample Z = (Z1, . . . , Zn), Zi ∈ Z: Z ∈ L(Y, BY; ∨, ∧); ∧ is the meet operation of the sample space with lattice properties L(Y, BY; ∨, ∧); i = 1, . . . , n is an index of the elements of the statistical collections {Ni}, {Zi}; n is the size of the samples N = (N1, . . . , Nn), Z = (Z1, . . . , Zn).
If we take the join of the lattice (7.2.7b) as the estimator λ̂n,∨ of a parameter λ within the model (7.2.72):

λ̂n,∨ = ∨_{i=1}^{n} Zi ,

where ∨_{i=1}^{n} Zi = sup{Zi} is the largest value from the sample Z = (Z1, . . . , Zn), then the estimator λ̂n,∨ is characterized by the quality index q{λ̂n,∨}, which is determined according to the metric (7.2.56), and which is equal to the quality index q{λ̂n,∧} of the estimator λ̂n,∧: q{λ̂n,∨} = q{λ̂n,∧}.
5. The proposed quality index (7.2.56) can be used successfully to determine
the estimation quality on a wide class of sample spaces without constraints
with respect to their algebraic and probabilistic-statistical properties.

7.3 Extraction of Stochastic Signal in Metric Space with Lattice Properties
The extraction (filtering) of a signal under interference (noise) background is the
most general problem of signal processing theory. The known variants of an extrac-
tion problem statement are formulated for an additive interaction x(t) = s(t) + n(t)
(in terms of linear space) between useful signal s(t) and interference (noise) n(t)
[254], [149], [163], [155]. Nevertheless, some authors assume the statement of this
problem on the basis of a more general model of interaction: x(t) = Φ[s(t), n(t)],
where Φ is some deterministic function [155, 163].
This section is intended to achieve a twofold goal. First, it is necessary to synthesize an algorithm and a unit of narrowband signal extraction in the presence of interference (noise) with independent samples in signal space with L-group properties. Second, it is necessary to describe the characteristics and properties of the synthesized unit and compare them with known analogues solving the extraction (filtering) problem in linear signal space.
The synthesis and analysis of optimal algorithm of stochastic narrowband signal
extraction (filtering) are realized under the following assumptions. While synthe-
sizing, distributions of the signal s(t) and interference (noise) n(t) are considered
arbitrary. Further, while analyzing a synthesized processing unit and its algorithm,
both the signal s(t) and interference (noise) n(t) are considered Gaussian stochastic
processes.
Let an interaction between stochastic signal s(t) and interference (noise) n(t)
in signal space L(+, ∨, ∧) with L-group properties be described by two binary

operations ∨ and ∧ in two receiving channels, respectively:

x(t) = s(t) ∨ n(t);   (7.3.1a)

x̃(t) = s(t) ∧ n(t).   (7.3.1b)
Instantaneous values (time samples) of the signal {s(tj )} and interference (noise)
{n(tj )} are the elements of signal space: s(tj ), n(tj ) ∈ L(+, ∨, ∧). Time samples
of interference (noise) {n(tj )} are considered independent. Let the samples s(tj )
and n(tj ) of the signal s(t) and interference (noise) n(t) be taken on the domain
of definition Ts of the signal s(t): tj ∈ Ts through the discretization (sampling)
interval ∆t, which provides the independence of the samples of interference (noise)
{n(tj )}, and ∆t << 1/f0 , where f0 is an unknown carrier frequency of the signal
s(t).
Taking into account the aforementioned constraints, the equations of observations in the two receiving (processing) channels (7.3.1a) and (7.3.1b) look like:

x(tj) = s(tj) ∨ n(tj);   (7.3.2a)

x̃(tj) = s(tj) ∧ n(tj),   (7.3.2b)

where tj = t − j∆t, j = 0, 1, . . . , N − 1, tj ∈ T*, T* ⊂ Ts; T* is a processing interval: T* = [t − (N − 1)∆t, t]; N ∈ ℕ, ℕ is the set of natural numbers.
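A simulation sketch of the two observation channels (7.3.2a,b), under assumptions close to those of the analysis below (bell-shaped narrowband pulse, unit-variance quasi-white Gaussian noise; all parameter values assumed):

import numpy as np

rng = np.random.default_rng(3)
f0, fn_max = 1.0, 64.0                 # carrier and noise cutoff, fn_max/f0 = 64
dt = 1.0 / (2.0 * fn_max)              # sampling step providing independent noise samples
t = np.arange(0.0, 8.0, dt)
s = np.exp(-((t - 4.0) / 1.5) ** 2) * np.cos(2.0 * np.pi * f0 * t)  # narrowband pulse s(t)
noise = rng.normal(0.0, 1.0, t.size)   # independent samples n(t_j)
x = np.maximum(s, noise)               # join channel, eq. (7.3.2a)
x_t = np.minimum(s, noise)             # meet channel, eq. (7.3.2b)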

7.3.1 Synthesis of Optimal Algorithm of Stochastic Signal Extraction in Metric Space with L-group Properties
While solving the problem of useful signal extraction (filtering) on the basis of processing of the two random sequences {x(tj)} and {x̃(tj)}, it is necessary to form such an estimator ŝ(t) of the useful signal s(t) in the output of the processing unit, which in the best way (with respect to chosen criteria) corresponds to the received signal s(t). The problem of extraction Ext[s(t)] of the stochastic signal s(t) is formulated and solved through step-by-step processing of the statistical collections {x(tj)} and {x̃(tj)} determined by the observation Equations (7.3.2a) and (7.3.2b):

Ext[s(t)] = { PF[s(t)] (a);  IP[s(t)] (b);  Sm[s(t)] (c) },   (7.3.3)

where PF[s(t)] is primary filtering; IP[s(t)] is intermediate processing; Sm[s(t)] is smoothing; they represent the successive stages of signal processing of the general algorithm of useful signal extraction (7.3.3).
The optimality criteria defining every processing stage PF[s(t)], IP[s(t)], and Sm[s(t)] are interrelated and united into separate systems:

PF[s(t)]:
  y(t) = arg min_{ŷ(t)∈Y; t,tj∈T*} | ∧_{j=0}^{N−1} [x(tj) − ŷ(t)] | ;   (7.3.4a)
  ỹ(t) = arg min_{y̌(t)∈Ỹ; t,tj∈T*} | ∨_{j=0}^{N−1} [x̃(tj) − y̌(t)] | ;   (7.3.4b)
  w(t) = F[y(t), ỹ(t)] ;   (7.3.4c)
  ∑_{j=0}^{N−1} |w(tj) − s(tj)| |_{n(t)≡0, ∆t→0} → min_{w(tj)∈W} ;   (7.3.4d)
  N = arg max_{N′∈ℕ∩]0,N*]} [δd(N′)] ,  N*: δd(N*) = δd0 ,   (7.3.4e)

where y(t) and ỹ(t) are the solutions of the problems of minimization of the metrics between the observed statistical collections {x(tj)}, {x̃(tj)} and the optimization variables, i.e., the functions ŷ(t) and y̌(t), respectively; w(t) is the function F[∗, ∗] uniting the results y(t) and ỹ(t) of minimization over the observed collections {x(tj)} (7.3.2a) and {x̃(tj)} (7.3.2b); T* is a processing interval; N ∈ ℕ, ℕ is the set of natural numbers; N is the number of the samples of the stochastic processes x(t), x̃(t) used in primary processing; δd(N) is the relative dynamic error of filtering as a function of the sample number N; δd0 is a given quantity of the relative dynamic error of filtering;
IP[s(t)]:
  M{u²(t)} |_{s(t)≡0} → min_{L} ;   (7.3.5a)
  M{[u(t) − s(t)]²} |_{n(t)≡0, ∆t→0} = ε ;   (7.3.5b)
  u(t) = L[w(t)] ,   (7.3.5c)

where M{∗} is a symbol of mathematical expectation; L[w(t)] is a functional transformation of the process w(t) into the process u(t); ε is a constant which, generally, is some function of signal power; for instance, in the case of a Gaussian signal it is equal to ε = ε0 Ds, ε0 = const, ε0 << 1, where Ds is the variance of the Gaussian signal s(t);

Sm[s(t)]:
  v(t) = arg min_{v°(t)∈V; t,tk∈T̃} ∑_{k=0}^{M−1} |u(tk) − v°(t)| ;   (7.3.6a)
  ∆T̃ : δd(∆T̃) = δd,sm ;   (7.3.6b)
  M = arg max_{M′∈ℕ∩[M*,∞[} [δf(M′)] ,  M*: δf(M*) = δf,sm ,   (7.3.6c)

where v(t) = ŝ(t) is the result of filtering (the estimator ŝ(t) of the signal s(t)) that is the solution of the problem of minimization of the metric between the instantaneous values of the stochastic process u(t) and the optimization variable, i.e., the function v°(t); tj = t − j∆t, j = 0, 1, . . . , N − 1, tj ∈ T*; tk = t − (k/M)∆T̃, k = 0, 1, . . . , M − 1, tk ∈ T̃, T̃ = ]t − ∆T̃, t]; T̃ is the interval in which the smoothing of the stochastic process u(t) is realized; ∆T̃ is the quantity of the smoothing interval T̃; M ∈ ℕ, ℕ is the set of natural numbers; M is the number of samples of the stochastic process u(t) used during smoothing; δd(∆T̃), δf(M) are relative dynamic and fluctuation errors of smoothing as the dependences on the quantity ∆T̃ of the smoothing interval T̃ and on the number of the samples M, respectively; δd,sm and δf,sm are given relative dynamic and fluctuation errors of smoothing, respectively.
The optimality criteria and single relations appearing in the systems (7.3.4),
(7.3.5), and (7.3.6) define consecutive stages of processing P F [s(t)], IP [s(t)],
Sm[s(t)] of the general algorithm of useful signal extraction (7.3.3).
The equations (7.3.4a), (7.3.4b) define a criterion of minimum metric between the statistical sets of observations {x(tj)}, {x̃(tj)} and the results of primary filtering y(t) and ỹ(t), respectively. The metric functions | ∧_{j=0}^{N−1} [x(tj) − ŷ(t)] | and | ∨_{j=0}^{N−1} [x̃(tj) − y̌(t)] | are chosen taking into account the metric convergence and the convergence in probability of the sequences y_{N−1} = ∧_{j=0}^{N−1} x(tj), ỹ_{N−1} = ∨_{j=0}^{N−1} x̃(tj) to the estimated parameter for the interactions of the kind (7.2.2a) and (7.2.2b) (see Section 7.2). The equation (7.3.4d) defines the criterion of minimum metric ∑_{j=0}^{N−1} |w(tj) − s(tj)| between the useful signal s(t) and the process w(t) in the processing interval T* = [t − (N − 1)∆t, t]. This criterion establishes the function F[y(t), ỹ(t)] (7.3.4c) uniting the results y(t) and ỹ(t) of primary processing of the observed processes x(t) and x̃(t). Criterion (7.3.4d) is considered under two constraint conditions: (1) interference (noise) n(t) is absent in the input of the signal processing unit; (2) the sample interval ∆t tends to zero: ∆t → 0. The equation (7.3.4e) establishes the criterion of the choice of the sample number N of the stochastic processes x(t) and x̃(t) providing a given quantity δd0 of the relative dynamic error of primary filtering.
The equations (7.3.5a) through (7.3.5c) establish the criterion of the choice of
functional transformation L[w(t)]. The equation (7.3.5a) defines the criterion of
minimum of the second moment of the process u(t) in the absence of useful signal
s(t) in the input of the signal processing unit. The equation (7.3.5b) establishes the
quantity of the second moment of the difference between the signals u(t) and s(t)
under two constraint conditions: (1) interference (noise) n(t) is absent in the input
of the signal processing unit; (2) sample interval ∆t tends to zero: ∆t → 0. The
relation (7.3.5c) defines a coupling equation between the processes u(t) and w(t).
The equation (7.3.6a) defines the criterion of minimum metric ∑_{k=0}^{M−1} |u(tk) − v°(t)| between the instantaneous values of the process u(t) and the optimization variable v°(t) in the smoothing interval T̃ = ]t − ∆T̃, t], requiring the final processing of the signal u(t) in the form of its smoothing. The equation (7.3.6b) establishes a criterion of the choice of the quantity ∆T̃ of the smoothing interval T̃ based on providing a given quantity of the dynamic error of smoothing δd,sm. The equation (7.3.6c) defines the criterion of the choice of the sample number M of the stochastic process u(t) providing a given quantity of the fluctuation error of smoothing δf,sm.
We obtain the estimator ŝ(t) = v(t) of the signal s(t) in the output of the signal processing unit by consecutively solving the optimization relationships of the system (7.3.4).
To solve the problem of minimization of the functions | ∧_{j=0}^{N−1} [x(tj) − ŷ(t)] | (7.3.4a), | ∨_{j=0}^{N−1} [x̃(tj) − y̌(t)] | (7.3.4b), it is necessary to determine the extrema of these functions, setting their derivatives with respect to ŷ(t) and y̌(t) to zero, respectively:

d| ∧_{j=0}^{N−1} [x(tj) − ŷ(t)] | / dŷ(t) = −sign( ∧_{j=0}^{N−1} [x(tj) − ŷ(t)] ) = 0;   (7.3.7a)

d| ∨_{j=0}^{N−1} [x̃(tj) − y̌(t)] | / dy̌(t) = −sign( ∨_{j=0}^{N−1} [x̃(tj) − y̌(t)] ) = 0.   (7.3.7b)

The solutions of Equations (7.3.7a) and (7.3.7b) are the values of the estimators y(t) and ỹ(t) in the form of the meet and join of the observation results {x(tj)} and {x̃(tj)}, respectively:

y(t) = ∧_{j=0}^{N−1} x(tj) = ∧_{j=0}^{N−1} x(t − j∆t);   (7.3.8a)

ỹ(t) = ∨_{j=0}^{N−1} x̃(tj) = ∨_{j=0}^{N−1} x̃(t − j∆t).   (7.3.8b)

The derivatives of the functions | ∧_{j=0}^{N−1} [x(tj) − ŷ(t)] | and | ∨_{j=0}^{N−1} [x̃(tj) − y̌(t)] |, according to the relationships (7.3.7a) and (7.3.7b), change their sign from minus to plus in the points y(t) and ỹ(t). Thus, the extrema determined by the formulas (7.3.8a), (7.3.8b) are the minimum points of these functions and, respectively, are the solutions of the equations (7.3.4a), (7.3.4b) defining these criteria of estimation (signal filtering).
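In discrete time, the primary filtering (7.3.8a,b) is a causal sliding minimum/maximum over the last N samples; a minimal sketch (the use of numpy's sliding_window_view is an implementation choice assumed here):

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def sliding_meet(a, N):
    # y(t) = meet (minimum) of the last N samples, eq. (7.3.8a);
    # output index j corresponds to time t_{j+N-1}
    return sliding_window_view(a, N).min(axis=1)

def sliding_join(a, N):
    # y~(t) = join (maximum) of the last N samples, eq. (7.3.8b)
    return sliding_window_view(a, N).max(axis=1)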
The conditions of the criterion (7.3.4d) of the system (7.3.4), n(t) ≡ 0, ∆t → 0, imply the equations of observations (7.3.2a,b) of the following form: x(tj) = s(tj) ∨ 0, x̃(tj) = s(tj) ∧ 0, and according to the relationships (7.3.8a), (7.3.8b), the identities hold:

y(t) |_{n(t)≡0, ∆t→0} = ∧_{j=0}^{N−1} [s(tj) ∨ 0] = s(t) ∨ 0;   (7.3.9a)

ỹ(t) |_{n(t)≡0, ∆t→0} = ∨_{j=0}^{N−1} [s(tj) ∧ 0] = s(t) ∧ 0,   (7.3.9b)

where tj = t − j∆t, j = 0, 1, . . . , N − 1.
To provide the criterion (7.3.4d) of the system (7.3.4) on joint fulfillment of the identities (7.3.9a), (7.3.9b), and (7.3.4c), it is necessary and sufficient that the coupling equation (7.3.4c) between the stochastic process w(t) and the pair of the results of primary processing y(t), ỹ(t) has the form:

w(t) |_{n(t)≡0, ∆t→0} = y(t) |_{n(t)≡0, ∆t→0} ∨ 0 + ỹ(t) |_{n(t)≡0, ∆t→0} ∧ 0 =
= s(t) ∨ 0 ∨ 0 + s(t) ∧ 0 ∧ 0 = s(t) ∨ 0 + s(t) ∧ 0 = s(t).   (7.3.10)

Based on the expression (7.3.10), the metric ∑_{j=0}^{N−1} |w(tj) − s(tj)| that has to be minimized according to the criterion (7.3.4d) is minimal and equal to zero.

It is obvious that the coupling equation (7.3.4c) has to be invariant with respect to the presence (absence) of interference (noise) n(t), so the final coupling equation can be written on the basis of the identity (7.3.10) in the form:

w(t) = y+(t) + ỹ−(t);   (7.3.11)

y+(t) = y(t) ∨ 0;   (7.3.11a)

ỹ−(t) = ỹ(t) ∧ 0.   (7.3.11b)

Thus, the identity (7.3.11) defines the kind of coupling equation (7.3.4c) obtained on the basis of joint fulfillment of the criteria (7.3.4a), (7.3.4b), and (7.3.4d).
The solution u(t) of the relationships (7.3.5a) through (7.3.5c), establishing the criterion of the choice of the functional transformation of the process w(t), is the function L[w(t)] defining the gain characteristic of the limiter:

u(t) = L[w(t)] = { a, w(t) ≥ a;  w(t), −a < w(t) < a;  −a, w(t) ≤ −a },   (7.3.12)

whose linear part provides the condition (7.3.5b), and whose clipping part (above and below) provides the minimization of the second moment of the process u(t) according to the criterion (7.3.5a).

The relationship (7.3.12) can be written in terms of the L-group L(+, ∨, ∧) in the form:

u(t) = L[w(t)] = [(w(t) ∧ a) ∨ 0] + [(w(t) ∨ (−a)) ∧ 0],   (7.3.12a)

where, in the case of a Gaussian signal s(t) with a variance Ds that is smaller than D (Ds < D), the limiter parameter a can be chosen proportional to √D (a² ∼ D), providing that the equation (7.3.5b) holds.
We can finally obtain the estimator ŝ(t) = v(t) of the signal s(t) in the output of the filtering unit by solving the minimization equation on the basis of criterion (7.3.6a). We find the extremum of the function ∑_{k=0}^{M−1} |u(tk) − v°(t)|, setting its derivative with respect to v°(t) to zero:

d{ ∑_{k=0}^{M−1} |u(tk) − v°(t)| } / dv°(t) = − ∑_{k=0}^{M−1} sign[u(tk) − v°(t)] = 0.

The solution of the last equation is the value of the estimator v(t) in the form of the sample median med{∗} of the collection of the samples {u(tk)} of the stochastic process u(t):

v(t) = med_{tk∈T̃} {u(tk)},   (7.3.13)

where tk = t − (k/M)∆T̃, k = 0, 1, . . . , M − 1; tk ∈ T̃ = ]t − ∆T̃, t]; T̃ is the interval in which smoothing of the stochastic process u(t) is realized.
The derivative of the function ∑_{k=0}^{M−1} |u(tk) − v°(t)| in the point v(t) changes its sign from minus to plus. Thus, the extremum determined by the function (7.3.13) is a minimum of this function and, correspondingly, is the solution of Equation (7.3.6a) determining this estimation criterion.
Thus, summarizing the relationships (7.3.13), (7.3.11), (7.3.11a,b), (7.3.8a,b),
one can draw a conclusion that the estimator ŝ(t) = v (t) of the signal s(t), extracted
in the presence of interference (noise) n(t), is the function of smoothing of the
stochastic process u(t) obtained by limiting the process w(t) that is the sum of
the results y (t) and ỹ (t) of the corresponding primary processing of the observed
stochastic processes x(t) and x̃(t) in the interval T ∗ = [t − (N − 1)∆t, t].
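Gathering the stages, an end-to-end sketch of Ext[s(t)] (it reuses s, x, x_t from the observation sketch and sliding_meet/sliding_join from above; the limiter level and median window are assumed values):

import numpy as np
from scipy.signal import medfilt

N = 16
y = sliding_meet(x, N)                         # primary filtering, channel 1, eq. (7.3.8a)
y_t = sliding_join(x_t, N)                     # primary filtering, channel 2, eq. (7.3.8b)
w = np.maximum(y, 0.0) + np.minimum(y_t, 0.0)  # adder: w = y_+ + y_-, eq. (7.3.11)
a = 4.0 * s.std()                              # limiter level: a^2 = 16*Ds assumed, cf. (7.3.34)
u = np.clip(w, -a, a)                          # limiter L[w(t)], eq. (7.3.12)
s_hat = medfilt(u, kernel_size=9)              # median smoothing, eq. (7.3.13); M = 9 assumed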
A block diagram of the processing unit, according to the general algorithm
Ext[s(t)], its stages P F [s(t)], IP [s(t)], Sm[s(t)], and the relationships (7.3.8a,b),
(7.3.11a,b), (7.3.11), (7.3.13), includes two processing channels, each containing
transversal filters; units of evaluation of positive y+ (t) and negative ỹ− (t) parts of
the processes y (t) and ỹ (t), respectively; an adder uniting the results of signal pro-
cessing in both channels; a limiter L[w(t)], and median filter (MF) (see Fig. 7.3.1).

FIGURE 7.3.1 Block diagram of processing unit realizing general algorithm Ext[s(t)]

Transversal filters in both processing channels perform the primary filtration PF[s(t)]: y(t) = ∧_{j=0}^{N−1} x(t − j∆t), ỹ(t) = ∨_{j=0}^{N−1} x̃(t − j∆t) of the observed stochastic processes x(t) and x̃(t) according to Equations (7.3.8a,b), providing fulfillment of the criteria (7.3.4a), (7.3.4b). The units of evaluation of the positive y+(t) and negative ỹ−(t) parts of the processes y(t) and ỹ(t) in the two processing channels compute the values of these functions according to the identities (7.3.11a) and (7.3.11b), respectively. The adder unites the results of signal processing in both channels according to the equality (7.3.11), providing that the criteria (7.3.4c) and (7.3.4d) hold. The limiter L[w(t)] realizes intermediate processing IP[s(t)] in the form of clipping of the signal w(t) in the output of the adder, according to the criteria (7.3.5a), (7.3.5b), to exclude from further processing the interference (noise) overshoots whose instantaneous values exceed a given value a. A median filter (MF) performs smoothing Sm[s(t)] of the stochastic process u(t) according to the formula (7.3.13), providing fulfillment of criterion (7.3.6a).

7.3.2 Analysis of Optimal Algorithm of Signal Extraction in Metric Space with L-group Properties
The further analysis of the synthesized signal processing algorithm and unit will be
realized so that the probabilistic-statistical characteristics of stochastic processes
at various processing stages will be obtained, and then the signal processing quality
indices of this unit will be determined.
To simplify the analysis of the synthesized unit, we assume that interference n(t) is a quasi-white Gaussian noise with a power spectral density N(ω) = N0 = const, 0 ≤ ω ≤ ωn,max, ωn,max = 2πfn,max, and the corresponding normalized correlation function rn(τ):

rn(τ) = sin(2πfn,max τ)/(2πfn,max τ),   (7.3.14)

where fn,max is the upper bound frequency of the power spectral density of interference (noise).
We also consider that the received signal s(t) is a narrowband stochastic process with unknown dependences of amplitude A(t) and phase ϕ(t) modulation:

s(t) = A(t) cos(ω0 t + ϕ(t)),  t ∈ Ts,   (7.3.15)

where ω0 = 2πf0, f0 is an unknown frequency of the signal carrier; ωn,max >> ω0; Ts is the domain of definition of the signal s(t).
Determine, respectively, the bivariate conditional probability density function (PDF) and cumulative distribution function (CDF) of the instantaneous values y(t1,2) of statistics y(t) (7.3.8a) under receiving the realization s*(t) of the signal s(t):

py(z1, z2; t1, t2/s*) ≡ py(z1, z2/s*);   Fy(z1, z2; t1, t2/s*) ≡ Fy(z1, z2/s*).
It is known that the CDF Fy(z1, z2/s*) of the meet y(t1,2) of N independent random variables {x(t1,2 − l∆t)} (7.3.8a) is determined by the expression [243]:

Fy(z1, z2/s*) = 1 − [1 − F(z1, z2/s*)]^N,   (7.3.16)

where F(z1, z2/s*) is the conditional CDF of the bivariate random variable x(t1,2) = s*(t1,2) ∨ n(t1,2):

F(z1, z2/s*) = Fn(z1, z2) · Fs(z1, z2),

where Fn(z1, z2) = ∫_{−∞}^{z2} ∫_{−∞}^{z1} pn(x1, x2) dx1 dx2 is the bivariate CDF of the instantaneous values of interference (noise) n(t1,2); pn(z1, z2) is the bivariate PDF of the instantaneous values of interference (noise) n(t1,2); Fs(z1, z2) = 1(z1 − s*(t1)) · 1(z2 − s*(t2)) is the bivariate CDF of the instantaneous values of the realization s*(t1,2) of the signal s(t1,2); 1(t) is the Heaviside step function.

Taking into account the above, the formula (7.3.16) can be represented in the form:

Fy(z1, z2/s*) = (1 − [1 − Fn(z1, z2)]^N) · 1(z1 − s*(t1)) · 1(z2 − s*(t2)).   (7.3.17)

According to its definition, the PDF py(z1, z2/s*) of the instantaneous values y(t1,2) of statistics (7.3.8a) is a mixed derivative of the CDF Fy(z1, z2/s*), so the expression evaluating it can be written in the following form:

py(z1, z2/s*) = ∂²Fy(z1, z2/s*)/∂z1∂z2 =
= (1 − [1 − Fn(z1, z2)]^N) δ(z1 − s*(t1)) · δ(z2 − s*(t2)) +
+ N[1 − Fn(z1, z2)]^{N−1} {Fn(z2/z1)pn(z1)1(z1 − s*(t1)) · δ(z2 − s*(t2)) +
+ Fn(z1/z2)pn(z2)δ(z1 − s*(t1)) · 1(z2 − s*(t2))} +
+ N[1 − Fn(z1, z2)]^{N−1} {((N − 1)/(1 − Fn(z1, z2))) Fn(z2/z1)Fn(z1/z2)pn(z1)pn(z2) +
+ pn(z1, z2)} · 1(z1 − s*(t1)) · 1(z2 − s*(t2)),   (7.3.18)

where δ(z) is the Dirac delta function; Fn(z1/z2) and Fn(z2/z1) are the conditional CDFs of the instantaneous values of interference (noise) n(t1,2), Fn(zi/zj) = ∫_{−∞}^{zi} pn(xi, zj) dxi / pn(zj); pn(z1,2) is the univariate PDF of the instantaneous values of interference (noise) n(t1,2).
By analogous reasoning, one can obtain the univariate conditional CDF Fy(z/s*):

Fy(z/s*) = (1 − [1 − Fn(z)]^N) · 1(z − s*(t)),

and also the univariate conditional PDF py(z/s*) of the instantaneous value y(t) of statistics (7.3.8a):

py(z/s*) = dFy(z/s*)/dz = P(Cc) · δ(z − s*(t)), z ≤ s*(t);
py(z/s*) = N · pn(z)[1 − Fn(z)]^{N−1}, z > s*(t),   (7.3.19)

where P(Cc) and P(Ce) are the probabilities of correct and error formation of the signal estimator ŝ(t), respectively:

P(Cc) = 1 − [1 − Fn(s*(t))]^N;   (7.3.19a)

P(Ce) = [1 − Fn(s*(t))]^N.   (7.3.19b)
Qualitative analysis of the expression (7.3.19) for the PDF py(z/s*) allows us to draw the following conclusion. On nonnegative values of the signal s(t) ≥ 0, the process y(t) rather accurately (P(Cc) ≥ 1 − 2^{−N}) reproduces the signal s(t). On the other hand, from (7.3.19), the initial distribution of the noise component of the stochastic process y(t) in the output of the transversal filter is essentially changed. On negative values of the signal s(t) < 0, the event y(t) < s(t) < 0 is impossible. Generally, on s(t) ≤ y(t) < 0, the process y(t) reproduces the useful signal s(t) with distortions. However, for a sufficiently small signal-interference (signal-noise) ratio, regardless of the sign of s(t), the process y(t) accurately (P(Cc) ≈ 1 − 2^{−N}) reproduces the signal s(t). If this condition does not hold, then, while processing negative instantaneous values of the signal s(t) < 0, the probability P(Cc) of a correct formation of the estimator ŝ(t), according to (7.3.19), becomes smaller than the quantity 1 − 2^{−N}.
To overcome this disadvantage (when s(t) < 0), within the filtering unit shown
in Fig. 7.3.1, we foresee the further processing of exclusively nonnegative values
y+ (t) of the process in the output of the filter y (t) (7.3.11a): y+ (t) = y (t) ∨ 0.
Applying similar reasoning, one can elucidate the necessity of formation of the
statistics ỹ− (t) determined by the expression (7.3.11b): ỹ− (t) = ỹ (t) ∧ 0.
The realization s∗ (t) of useful signal s(t) acting in the input of this filtering
unit, and also possible realization w∗ (t) of the process w(t) in the output of the
adder, obtained via statistical modeling, are shown in Fig. 7.3.2.

FIGURE 7.3.2 Realization s∗ (t) of useful signal s(t) acting in input of filtering unit and
possible realization w∗ (t) of process w(t) in output of adder

The example corresponds to the following conditions. The signal is a narrowband pulse with a bell-shaped envelope; interference is a quasi-white Gaussian noise with the ratio of maximum frequency of power spectral density to signal carrier frequency fn,max/f0 = 64; the signal-interference (signal-noise) ratio Es/N0 is equal to Es/N0 = 10^{−6} (Es is the signal energy, N0 is the power spectral density of interference (noise)). Transversal filters include time-delay lines with 16 taps (the number N of the samples of the signals x(t) and x̃(t) used while processing is equal to 16). The delay of the leading edge of semi-periods of the process w(t) with respect to the signal s(t), stipulated by the dynamic error of formation of the signal estimator ŝ(t), is equal to (N − 1)∆t, ∆t = 1/(2fn,max) << 1/f0.

The probability of error formation P(Ce) of the signal estimator ŝ(t) (the probability of noise overshoot formation) is negligibly small, P(Ce) ≈ 2^{−16}, so there are no noise overshoots in the interval of modeling (m = 2^{10} = 1024 samples). While varying the signal-interference (signal-noise) ratio Es/N0 within wide bounds (10 . . . 10^{−10}), the cross-correlation coefficient of the signals s(t) and w(t) takes the values 0.97 . . . 0.98, which is explained by the invariance property of the investigated algorithm (unit) with respect to the conditions of parametric prior uncertainty.
It should be noted that the probabilistic-statistical properties of the processes y+(t) (7.3.11a) and ỹ−(t) (7.3.11b) are identical; in particular, their univariate and bivariate PDFs are reflectively symmetrical. Thus, the expression for the bivariate conditional PDF pw(z1, z2/s*) of the process w(t) in the output of the filter can be written, on the basis of the PDF (7.3.18) of the process y(t) passed through the signal limiter, in the following form:

pw(z1, z2/s*) = (1 − [1 − Fn(|z1|, |z2|)]^N) δ(z1 − s*(t1)) · δ(z2 − s*(t2)) +
+ N[1 − Fn(|z1|, |z2|)]^{N−1} {Fn(|z2|/z1)pn(z1)hs(z1)δ(z2 − s*(t2)) +
+ Fn(|z1|/z2)pn(z2)hs(z2)δ(z1 − s*(t1))} + N[1 − Fn(|z1|, |z2|)]^{N−1} ×
× {((N − 1)/(1 − Fn(|z1|, |z2|))) Fn(|z2|/z1)Fn(|z1|/z2)pn(z1)pn(z2) +
+ pn(z1, z2)} · hs(z1)hs(z2),   (7.3.20)

where Fn(|z2|/z1), Fn(|z1|/z2) are the conditional CDFs of the instantaneous values of interference (noise) n(t1,2): Fn(|zi|/zj) = ∫_{−∞}^{|zi|} pn(xi, zj) dxi / pn(zj); pn(z1,2) is the univariate PDF of the instantaneous values of interference (noise) n(t1,2); hs(z1,2) is the function taking into account the sign of the instantaneous values of the realization s*(t1,2), equal to:

hs(z1,2) = [1 − sign(s*(t1,2))]/2 + sign(s*(t1,2)) · 1(z1,2 − s*(t1,2)).   (7.3.21)
The univariate conditional PDF pw(z/s*) of the instantaneous values of the resulting process w(t) in the output of the processing unit, determined by the expression (7.3.11), can be written on the basis of the formula (7.3.19) determining the PDF py(z/s*) of the process y(t) (7.3.8a):

pw(z/s*) = P*(Cc) · δ(z − s*(t)) + P*(Ce)p0(z);   (7.3.22)

P*(Cc) = 1 − [1 − Fn(|s*(t)|)]^N ≥ 1 − 2^{−N};   (7.3.22a)

P*(Ce) = [1 − Fn(|s*(t)|)]^N ≤ 2^{−N},   (7.3.22b)

where p0(z) is the PDF of the noise component in the output of the processing unit equal to:

p0(z) = p*(z) · hs(z);   (7.3.22c)

p*(z) = N · pn(z)[1 − Fn(|z|)]^{N−1}/P*(Ce),

where hs(z) is the function determined by the formula (7.3.21) that takes into account the sign of s*(t).
The analysis of the PDFs (7.3.20) and (7.3.22) implies that the process w(t) in the output of the adder in the presence of the signal (s(t) ≠ 0) is non-stationary, whereas if the signal is absent (s(t) = 0), then this process is stationary. In the absence of a signal, the stochastic process w(t) possesses an ergodic property with respect to the univariate conditional PDF pw(z/s*) (7.3.22). The random variables w(t) and w(t + τ) are independent under the condition τ → ∞, since the samples n(t) and n(t + τ) of quasi-white Gaussian noise with normalized correlation function (7.3.14), acting in the input of the processing unit, are asymptotically independent; the condition holds [115]:

lim_{τ→∞} pw(z, z; τ/s*) = p²w(z/s*).

On nonnegative instantaneous values of the process w(t) ≥ 0, the PDF pw(z/s*) corresponds to the PDF p+w(z/s*) of its positive part, i.e., to the PDF of the process y+(t):

p+w(z/s*) = pw(z/s*),  z ≥ 0.   (7.3.23)

Based on the expression (7.3.22), on negative instantaneous values of the process w(t) < 0, the PDF p−w(z/s*) of its negative part ỹ−(t) is reflectively symmetrical with respect to the PDF p+w(z/s*) determined by the formula (7.3.23). We now analyze the properties of the distribution p+w(z/s*) (7.3.23) of the positive part y+(t) of the stochastic process w(t) in the output of the adder. The PDFs pw(z/s*), p+w(z/s*), and p−w(z/s*) belong to the class of so-called ε-contaminated distributions that are widely used in probability theory and mathematical statistics [251], [263]. The characteristic function (CF) Φ+w(u) corresponding to the PDF p+w(z/s*) (7.3.23) is represented in the form:

Φ+w(u) = Φ+wn(u) e^{jus*(t)},   (7.3.24)

where Φ+wn(u) = P*(Cc) + P*(Ce)Φ0(u);  Φ0(u) = ∫_{−∞}^{∞} p0(z + s*(t)) e^{juz} dz.

The relationship (7.3.24) implies that the CF Φ+w(u) of the stochastic process w+(t) = w(t) ∨ 0 = y+(t) is the product of two CFs, e^{jus*(t)} and Φ+wn(u), so the process w+(t) in the output of the adder can be represented as the sum of two independent processes, i.e., the signal ws+(t) and noise wn+(t) components:

w+(t) = ws+(t) + wn+(t).

The signal component ws+(t) is a stochastic process whose instantaneous values take values equal to (1) the least positive instantaneous value of the signal from the set {s(tj)}, tj = t − j∆t, j = 0, 1, . . . , N − 1, with the probability (7.3.22a) on s(t) ≥ 0, or (2) zero with the probability (7.3.22b) on s(t) < 0. The noise component wn+(t) is a stochastic process whose instantaneous values take values equal to (1) the least positive instantaneous value of interference (noise) from the set {n(tj)}, tj = t − j∆t, j = 0, 1, . . . , N − 1, with the probability 2^{−N}, or (2) zero with the probability 1 − 2^{−N}.

Similar reasoning can be extended onto the negative part w−(t) = w(t) ∧ 0 = ỹ−(t) of the stochastic process w(t) in the output of the adder, which also can be represented as the sum of two independent processes, i.e., the signal ws−(t) and the noise wn−(t) components: w−(t) = ws−(t) + wn−(t). Thus, the analysis of the distribution pw(z/s*) (7.3.22) of the process w(t) in the output of the adder implies that this process can be represented in the form of the signal and noise components:

w(t) = ws(t) + wn(t) = w+(t) + w−(t);   (7.3.25)

ws(t) = ws+(t) + ws−(t);   (7.3.25a)

wn(t) = wn+(t) + wn−(t).   (7.3.25b)
We now evaluate the correlation function Rwn(τ) of the noise component wn(t) of the signal w(t) in the output of the adder. Based on the evenness of the PDF pw(z/s*) on s*(t) = 0, the expectation of the process wn(t) in the output of the processing unit is equal to zero. Then Rwn(τ) is determined on the basis of the PDF pw(z1, z2/s*) (7.3.20) by the expression:

Rwn(τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} z1 z2 pw(z1, z2/s*) dz1 dz2 |_{s*(t)=0} .   (7.3.26)

Computation of the exact value of the function (7.3.26) is accompanied by considerable difficulties, so we simply use an upper bound of Rwn(τ) determined by the inequality:

Rwn(τ) ≤ N · 2^{−(N−1)} · Dn [rn(τ)]^{2(1+(N ln 2)/2)},   (7.3.27)

where Dn is the variance of interference (noise) in the input of the processing unit, equal to Dn = N0 fn,max; rn(τ) is the normalized correlation function of interference (noise) n(t) in the input of the processing unit determined by the identity (7.3.14); N is the number of the samples of interference (noise) n(t) which simultaneously take part in processing.
The analysis of the relationship (7.3.27) permits us to draw the following conclusions. The variance Dwn of the noise component wn(t) is bounded above by the quantity:

Dwn = Rwn(0) ≤ N · 2^{−(N−1)} · Dn .   (7.3.28)

The normalized correlation function rwn(τ) of the noise component wn(t) is bounded above by the quantity:

rwn(τ) ≤ [rn(τ)]^{2(1+(N ln 2)/2)} .   (7.3.29)

When N = 10, the correlative relations of the instantaneous values of the noise component wn(t) decrease to 10^{−8} over the interval τ = 5/(4fn,max). This suggests that the rate of their destruction is considerable, and the filter possesses asymptotically whitening properties.
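The N = 10 figure just quoted follows directly from (7.3.14) and (7.3.29); a one-off numerical check (sketch, fn,max assumed):

import numpy as np

fn_max, N = 64.0, 10
tau = 5.0 / (4.0 * fn_max)
r_n = np.sinc(2.0 * fn_max * tau)        # r_n(tau) of (7.3.14); np.sinc(x) = sin(pi*x)/(pi*x)
print(abs(r_n) ** (2.0 * (1.0 + N * np.log(2.0) / 2.0)))   # bound (7.3.29), approx 1e-8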
Next we determine the mean N + (H = 0) of the positive overshoots of stochastic
process w(t) in the output of the adder per time unit at the level H = 0 in the
absence of the signal (s(t) = 0). Generally, for the stationary stochastic process
w(t), the quantity N + (H ) is determined by the formula [116], [264]:
Z∞
+
N (H ) = w0 p(H, w0 )dw0 |s(t)=0 , (7.3.30)
0

where w0 is a variable corresponding to derivative w0 (t) of the stochastic process


w(t); p(w, w0 ) is a joint PDF of the process w(t) and its derivative w0 (t).
Method of evaluation of PDF p(w, w0 ) is considered in [115], [116]. It is based
upon the computation of the limit: p(w, w0 ) = lim ∆ · pw (z − ∆ 0 ∆ 0 ∗
2 w , z + 2 w /s ),
∆→0
where pw(z1, z2/s∗) is a bivariate PDF of the process w(t) (7.3.20). As a result, the mean N⁺(H = 0) of the positive overshoots of stochastic process w(t) in the output of the adder per time unit at the level H = 0 in the absence of the signal (s(t) = 0) is determined by the expression:

N⁺(H = 0) = N · 2^{−(N−1)} fn,max /√3, (7.3.31)
where N is a number of the samples of interference (noise) n(t), which simultaneously take part in signal processing; fn,max is maximum frequency of power spectral
density of interference (noise) n(t).
It should be noted that the result (7.3.31) can also be obtained as the mean number of coinciding overshoots of N stochastic processes, according to the method considered, for instance, in [116], [264].
The average duration τ̄ (H ) of positive overshoots of stationary ergodic stochas-
tic process w(t) with respect to the level H is determined over the probability of
its exceeding P (w(t) > H ) by the formula [116], [264]:

τ̄ (H ) · N + (H ) = P (w(t) > H ), (7.3.32)

where P(w(t) > 0), according to the expression (7.3.22b), is equal to P∗(Ce) = 2^{−N}.
Then the average duration τ̄ (0) of positive overshoots of the process w(t) with
respect to the level H = 0, taking into account the expression (7.3.31), is determined
by the formula:
τ̄(0) = √3/(2N fn,max). (7.3.33)
The noise component wn (t) of the signal w(t) in the output of the adder, according
to (7.3.25b), is equal to wn (t) = wn+ (t) + wn− (t). The component wn+ (t) (wn− (t))
is a pulse stochastic process with average duration of noise overshoots τ̄(0) that is small compared to the correlation interval of interference (noise) ∆t = 1/(2fn,max): τ̄(0) = √3·∆t/N. Distribution of instantaneous values of noise overshoots of the noise
component wn (t) is characterized by PDF p0 (z ) (7.3.22c). The component wn+ (t)
(wn−(t)) can be described by a homogeneous Poisson process with a constant overshoot flow density λ = N⁺(H = 0) determined by the relationship (7.3.31).
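A short sketch, under the same illustrative normalization as before, evaluates the overshoot statistics (7.3.31) through (7.3.33) and the flow density λ of the Poisson model:

```python
# Overshoot statistics of the noise component per (7.3.31)-(7.3.33);
# f_max and N are illustrative assumptions.
import numpy as np

N = 10          # number of interference samples taking part in processing
f_max = 1.0     # maximum frequency of the interference PSD (normalized)

dt = 1.0 / (2.0 * f_max)                               # correlation interval of interference
N_plus = N * 2.0 ** (-(N - 1)) * f_max / np.sqrt(3.0)  # (7.3.31): overshoots per time unit
tau_bar = np.sqrt(3.0) / (2.0 * N * f_max)             # (7.3.33): average overshoot duration

print(f"N+(0) = {N_plus:.4e}, tau(0) = {tau_bar:.4f} (= sqrt(3)*dt/N)")

# Consistency with (7.3.32): tau(0) * N+(0) must equal P(w(t) > 0) = 2^(-N)
print(f"tau(0)*N+(0) = {tau_bar * N_plus:.4e}, 2^(-N) = {2.0 ** (-N):.4e}")

lam = N_plus    # overshoot flow density of the homogeneous Poisson model
```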
While applying the known smoothing procedures [263], [265], [266] built upon
the basis of median estimators, a variance Dvn of noise component vn (t) of the
process v (t) in the output of the filter (7.3.13) (under condition that the signal s(t)
is a Gaussian narrowband process, and interference (noise) n(t) is very strong) can
be reduced to the quantity:
Dvn ≤ Dwn ∫_{∆T̃}^{∞} pwn(τ) dτ = a² exp{−∆T̃² N² f²n,max /(8√π)}, (7.3.34)

where a is a parameter of the limiter L[w(t)], which, for instance, on ε ≈ 10^{−5} (see (7.3.5b) of the system (7.3.5)) can be equal to a² = 16Ds; Ds is a variance of Gaussian signal s(t); ∆T̃ is a quantity of the smoothing interval T̃; N is a number of the samples of the interference (noise) n(t), which simultaneously take part in signal processing; fn,max is maximum frequency of power spectral density of interference
(noise) n(t); pwn (τ ) is PDF of duration of overshoots of the noise component wn (t)
of the process w(t), which is approximated by experimentally obtained dependence:
pwn(τ) = (τ/Dwn,τ) exp{−τ²/(2Dwn,τ)},
where Dwn,τ = π/(N² f²n,max).
The formula (7.3.34) implies that the signal-to-noise ratio q²out in the output of the filter is determined by the quantity:
q²out = Ds/Dvn ≥ (1/16) exp{∆T̃² N² f²n,max /(8√π)}. (7.3.35)

The formula (7.3.34) also implies that the fluctuation component of signal estimator error (relative filtering error) δf is bounded by the quantity:
δf = M{(s(t) − v(t))²}/2Ds |s(t)≡0 ≤ 8 exp{−∆T̃² N² f²n,max /(8√π)}, (7.3.36)
where M{∗} is a symbol of mathematical expectation.
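The bounds (7.3.34) through (7.3.36) are straightforward to evaluate numerically; in the sketch below the values of ∆T̃, N, fn,max, and a² = 16Ds are illustrative assumptions:

```python
# Evaluation of the bounds (7.3.34)-(7.3.36) for one illustrative parameter set.
import numpy as np

N = 10              # number of interference samples taking part in processing
f_max = 1.0         # maximum frequency of the interference PSD (normalized)
Ds = 1.0            # variance of the Gaussian signal s(t) (normalized)
a2 = 16.0 * Ds      # limiter parameter a^2 (the value suggested in the text)
dT = 2.0 / f_max    # smoothing interval length (assumed)

arg = dT ** 2 * N ** 2 * f_max ** 2 / (8.0 * np.sqrt(np.pi))
D_vn_bound = a2 * np.exp(-arg)      # (7.3.34)
q2_out = np.exp(arg) / 16.0         # lower bound of (7.3.35), = Ds / D_vn_bound
delta_f = 8.0 * np.exp(-arg)        # upper bound of (7.3.36), = D_vn_bound / (2 Ds)

print(f"D_vn <= {D_vn_bound:.3e}, q2_out >= {q2_out:.3e}, delta_f <= {delta_f:.3e}")
```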


In the interval of primary processing T∗ = [t − (N−1)∆t, t], the instantaneous values of the signal s(t) vary slightly, which causes a dynamic error of signal primary filtering δd0. This error can be evaluated. Assume there is no interference (noise) at the input of the processing unit (n(t) = 0). Then the domain of definition of the signal s(t) can be represented by the partition into subsets T_{s+}^{s′+}, T_{s+}^{s′−}, T_{s−}^{s′−}, T_{s−}^{s′+}:
Ts = T_{s+}^{s′+} ∪ T_{s+}^{s′−} ∪ T_{s−}^{s′−} ∪ T_{s−}^{s′+}, T_{s±}^{s′±} ∩ T_{s∓}^{s′∓} = ∅, T_{s±}^{s′∓} ∩ T_{s∓}^{s′±} = ∅;
T_{s+}^{s′+} = {t : s(t) > 0 & s′(t) > 0}, T_{s+}^{s′−} = {t : s(t) > 0 & s′(t) ≤ 0};
T_{s−}^{s′−} = {t : s(t) ≤ 0 & s′(t) ≤ 0}, T_{s−}^{s′+} = {t : s(t) ≤ 0 & s′(t) > 0};
m(T_{s+}^{s′+}) = m(T_{s+}^{s′−}) = m(T_{s−}^{s′−}) = m(T_{s−}^{s′+}), (7.3.37)
where m(X) is a measure of a set X; s′(t) is the derivative of the narrowband signal s(t).
Then the process v(t) in the output of the processing unit is determined by the relationship:
v(t) = s(t − (N−1)∆t) for t ∈ T_{s+}^{s′+} ∪ T_{s−}^{s′−}; v(t) = s(t) for t ∈ T_{s+}^{s′−} ∪ T_{s−}^{s′+}, (7.3.38)
i.e., in the absence of interference (noise) (n(t) = 0), the stochastic signal s(t) is exactly reproduced within descending parts (s′(t) ≤ 0) on s(t) > 0 and also within ascending parts (s′(t) > 0) on s(t) ≤ 0, and reproduced with a delay equal to (N−1)∆t within ascending parts (s′(t) > 0) on s(t) > 0 and also within descending parts (s′(t) ≤ 0) on s(t) ≤ 0 (see Fig. 7.3.2). Thus, the dynamic error of signal filtering δd0, taking into account the identity (7.3.37), is equal to the quantity:
δd0 = M{(s(t) − v(t))²}/2Ds |n(t)=0 = 1 − rs[(N−1)∆t], (7.3.39)


where rs(τ) is the normalized correlation function of the signal s(t); ∆t = 1/(2fn,max) ≪ 1/f0.
Smoothing of the process u(t) in a median filter is accompanied by a dynamic error of signal smoothing δd,sm caused by partial smoothing of local extrema of
the signal s(t). Then, taking into account the relationship (7.3.38), dynamic error
of signal smoothing δd,sm is equal to:

δd,sm = M{(s(t) − u(t))2 }/2Ds |n(t)=0 = 1 − rs [∆T̃ ], (7.3.40)

where ∆T̃ is a quantity of the smoothing interval T̃ of the useful signal s(t).
Thus, the relative error δ of the estimator of Gaussian narrowband signal
(7.3.15) in the presence of interference (noise) under their interaction in the signal
space with lattice properties is bounded by the errors determined by the relation-
ships (7.3.36), (7.3.39), and (7.3.40):

δ ≤ δf + δd0 + δd,sm . (7.3.41)
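As an illustration of assembling the bound (7.3.41), the following sketch combines (7.3.36), (7.3.39), and (7.3.40); the narrowband correlation model rs(τ) used here is an assumption chosen only for this example:

```python
# Error budget (7.3.41) = fluctuation + two dynamic errors, for an assumed
# Gaussian-envelope narrowband correlation model of the signal.
import numpy as np

f0 = 1.0                   # carrier frequency of the narrowband signal (normalized)
f_max = 32.0               # maximum frequency of the interference PSD
N = 10                     # number of samples in primary processing
dt = 1.0 / (2.0 * f_max)   # sampling step, dt << 1/f0
dT = 2.0 / f_max           # smoothing interval length (assumed)
df_s = 0.05 * f0           # effective bandwidth of the signal envelope (assumed)

def r_s(tau):
    # assumed normalized correlation function of the narrowband signal
    return np.exp(-(np.pi * df_s * tau) ** 2) * np.cos(2.0 * np.pi * f0 * tau)

arg = dT ** 2 * N ** 2 * f_max ** 2 / (8.0 * np.sqrt(np.pi))
delta_f = 8.0 * np.exp(-arg)           # fluctuation error bound (7.3.36)
delta_d0 = 1.0 - r_s((N - 1) * dt)     # dynamic error of primary filtering (7.3.39)
delta_dsm = 1.0 - r_s(dT)              # dynamic error of smoothing (7.3.40)

print(f"delta <= {delta_f + delta_d0 + delta_dsm:.4f} "
      f"(delta_f = {delta_f:.2e}, delta_d0 = {delta_d0:.4f}, delta_dsm = {delta_dsm:.4f})")
```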

7.3.3 Possibilities of Further Processing of Narrowband Stochastic Signal Extracted on Basis of Optimal Filtering Algorithm in Metric Space with L-group Properties
According to the formula (6.1.2), the useful signal s(t), as a rule, is a determin-
istic one-to-one function M [∗, ∗] of carrier signal c(t) and vector of informational
parameters λ(t) varying on a domain of definition Ts of the signal s(t):

s(t) = M [c(t), λ(t)], (7.3.42)

where λ(t) = [λi(t)] = [λ1(t), . . . , λm(t)], i = 1, . . . , m, λ(t) ∈ Λ is the vector of informational parameters of the signal s(t).
If the modulating function M [∗, ∗] is known, the estimator ŝ(t) = v (t) of the
signal s(t) extracted in the presence of interference (noise) n(t) determined by the
formula (7.3.13) allows us to obtain the estimator λ̂(t) of informational parameters
λ(t) of the signal which, according to the formula (7.3.42), is determined by an
inverse function with respect to M [∗, ∗] (see Fig. 7.3.3):

λ̂(t) = [λ̂i (t)] = M −1 [c(t), ŝ(t)], λ̂(t) ∈ Λ. (7.3.43)

If all the parameters λ(t) = [λi (t)], i = 1, . . . , m are unknown nonrandom


and constant on a domain of definition Ts of the signal s(t) (λi (t) = const = λi ,
t ∈ Ts ), then application of the relationship (7.3.43), together with the formula
(7.3.13), defines an algorithm of signal parameter estimation. If the dependence
λi (t) cannot be neglected and it is necessary to trace the instantaneous values of
varying informational parameters λi (t) that are stochastic processes, the application
of the relationship (7.3.43) together with the formula (7.3.13) defines an algorithm
of signal parameter filtering.
Inasmuch as the estimator ŝ(t) = v (t) of the useful signal and the estimator λ̂(t)
of informational parameters [λi (t)] of useful signal s(t) are connected by a one-to-
one relation (7.3.43), the information quantity contained in the estimator λ̂(t) and
7.3 Extraction of Stochastic Signal in Metric Space with Lattice Properties 303

FIGURE 7.3.3 Generalized block diagram of signal processing unit

information quantity contained in the estimator ŝ(t) are the same. Quality indices
of these estimators are the same too.
If, while extracting the stochastic signal s(t), the addressee does not know the
modulating function M [∗, ∗], then, on the basis of the estimator ŝ(t) of the signal
s(t), one can determine its envelope Eŝ (t), phase Φŝ (t), and instantaneous frequency
ωŝ(t) (see Fig. 7.3.3):
Eŝ(t) = √(ŝ²(t) + s̃²(t)), (7.3.44)
where s̃(t) = H[ŝ(t)] = (1/π) ∫_{−∞}^{∞} ŝ(τ)dτ/(t − τ) is the Hilbert transform of the estimator ŝ(t);
Φŝ(t) = arctg[s̃(t)/ŝ(t)]; (7.3.45)
ωŝ(t) = Φ′ŝ(t) = [s̃′(t)ŝ(t) − s̃(t)ŝ′(t)]/E²ŝ(t). (7.3.46)

On the basis of the obtained relationships (7.3.44) through (7.3.46), a modulating function M[∗, ∗] can be determined.
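In discrete time, the relationships (7.3.44) through (7.3.46) can be evaluated through the analytic signal; the following minimal sketch assumes a test estimator ŝ(t) and uses scipy.signal.hilbert (the instantaneous frequency is obtained as the derivative of the unwrapped phase, which is equivalent to (7.3.46)):

```python
# Envelope, phase, and instantaneous frequency of a signal estimator via the
# Hilbert transform; the test signal and sampling rate are assumptions.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                          # sampling rate, Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
s_hat = np.cos(2 * np.pi * 50 * t + 0.3 * np.sin(2 * np.pi * 3 * t))  # test estimator

analytic = hilbert(s_hat)            # s_hat(t) + j * H[s_hat(t)]
s_tilde = np.imag(analytic)          # Hilbert transform H[s_hat(t)]

E = np.sqrt(s_hat ** 2 + s_tilde ** 2)           # envelope (7.3.44)
Phi = np.unwrap(np.arctan2(s_tilde, s_hat))      # phase (7.3.45), unwrapped
omega = np.gradient(Phi, t)                      # instantaneous frequency, rad/s

print(f"mean instantaneous frequency: {np.mean(omega) / (2 * np.pi):.1f} Hz")
```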
The investigation of optimal filtering algorithm in signal space with L-group
properties led to the following conclusions.

1. The problem formulation of the synthesis of the stochastic signal extraction algorithm on the basis of successive minimization of the functions of the observed statistical collections (7.3.2a,b), with simultaneous rejection of the use of prior data concerning possible behavior and characteristics of the signal and the interference (noise) distribution, provides the invariance property of signal processing with respect to the conditions of both parametric and nonparametric prior uncertainty, confirmed by the obtained quality indices of signal processing (7.3.22a,b), (7.3.35), (7.3.36), (7.3.39), (7.3.40), and (7.3.41).
2. The feature of signal filtering in a space with L-group properties is that both useful and interference signals can be extracted from the received mixture with the same or almost the same quality regardless of their energetic relationships. The relation of the prior information quantities contained in these signals does not matter.
3. The obtained signal extraction unit operating in the space with L-group properties hardly distorts the structure of the received useful signal regardless of the conditions of parametric prior uncertainty. The high quality of the obtained signal estimator ŝ(t) allows solving the main signal processing problems.
4. The essential distinctions of signal extracting quality between linear space
LS (+) and signal space L(+, ∨, ∧) with L-group properties may be ex-
plained by the fundamental differences in the types of interactions between
the useful signal s(t) and interference (noise) n(t) within these spaces: ad-
ditive s(t) + n(t), and interactions in the form of join s(t) ∨ n(t) and meet
s(t) ∧ n(t), respectively.

7.4 Signal Detection in Metric Space with Lattice Properties
The known variants of signal detection problem statement are mostly formulated
for the case of an additive interaction x(t) = s(t)+ n(t) (in terms of linear space) be-
tween the signal s(t) and interference (noise) n(t) [254], [163], [155], [153]. Neverthe-
less, this problem can be formulated on the basis of a more general model of signal
interaction: x(t) = Φ[s(t), n(t)], where Φ is some deterministic function [159], [166].
As a rule, signal detection is not realized without simultaneous solving of other
signal processing problems. Meanwhile, signal detection is an independent signal processing problem that has to be solved while constructing signal processing fundamentals in signal spaces with new properties that fundamentally differ from the properties of linear signal space.
This section has a twofold goal. First, it is necessary to synthesize algorithm of
signal detection in the presence of interference (noise) in signal space with lattice
properties. Second, the synthesized algorithm of signal detection should be ana-
lyzed; besides, characteristics and properties of this algorithm should be compared
with the known analogues solving the signal detection problem in linear signal
space.
In linear signal space, the signal detection problem is solved by signal energy estimation on the basis of comparison of a signal energy estimator with some threshold [254], [163]. This problem, formulated in signal space with lattice properties, is solved in a different way: as shown below, it relies on forming an estimator of the received useful signal.

7.4.1 Deterministic Signal Detection in Presence of Interference (Noise) in Metric Space with Lattice Properties
The use of deterministic signals is confined to the framework of theoretical analysis, whose results, in the form of potential signal processing quality indices, may be used for comparison with other signal processing algorithms. The known literature on signal detection usually holds that the probabilistic character of the observed stochastic process causes any signal detector operating against an interference (noise) background to be characterized by nonzero probabilities of erroneous decisions. However, as shown below, that is not quite so, for instance, in the case of deterministic signal detection in signal space with lattice properties.
For signal detection, it is necessary to find out whether the signal s(t) is present
or absent in the received realization of stochastic process x(t) within the observa-
tion interval Tobs . Consider the model of interaction between the signal s(t) with
completely known parameters and interference (noise) n(t) in signal space in the
form of distributive lattice L(∨, ∧) with binary operations of join a ∨ b and meet
a ∧ b, respectively: a ∨ b = supL (a, b), a ∧ b = inf L (a, b); a, b ∈ L(∨, ∧):

x(t) = θs(t) ∨ n(t); t ∈ Ts ≡ Tobs ; θ ∈ {0, 1}, (7.4.1)

where Ts = [t0 , t0 + T ] is the domain of definition of the signal s(t); t0 is the known
time of arrival of the signal s(t); T is a duration of the signal s(t); θ is an unknown
nonrandom parameter taking value θ = 0 (the signal is absent) or θ = 1 (the signal
is present).
Thus, deterministic signal detection is equivalent to the estimation of an unknown nonrandom parameter θ. For a solution, one has to obtain both the algorithm and the block diagram of the optimal signal detector, and its quality indices of detection (conditional probabilities of correct detection and false alarm) have to be determined.
Assume that interference (noise) n(t) is characterized by an arbitrary distribu-
tion with an even univariate probability density function pn (x) = pn (−x).
To synthesize the detectors in linear signal space LS (+), i.e., when the interac-
tion equality x(t) = θs(t)+ n(t) holds, θ ∈ {0, 1}, the part of the theory of statistical
inference called statistical hypotheses testing is used. The strategies of decision mak-
ing (within signal detection problem solving) considered in the literature suppose a
likelihood ratio computation deemed necessary to determine a likelihood function.
Such a function is determined by multivariate probability density function of inter-
ference (noise) n(t). During signal detection problem solving in linear signal space
LS (+), i.e., when the interaction equality x(t) = s(t) + n(t) holds, the trick of the
variable change is used to obtain likelihood function: n(t) = x(t) − s(t) [254], [163].
However, it is impossible to use this subterfuge to determine likelihood ratio un-
der interaction (7.4.1) between the signal and interference (noise), inasmuch as the
equation is unsolvable with respect to the variable n(t) because the lattice L(∨, ∧)
does not possess the group properties; another approach is necessary.
As applied to the case (7.4.1), solving the signal detection problem in the pres-
ence of interference (noise) n(t) with an arbitrary distribution lies in formation of an
estimator ŝ(t) of the received signal s(t) which (on the basis of the chosen criteria)
would allow the observer to distinguish two possible situations of signal receiving
determined by the parameter θ. We formulate the problem of detection Det[s(t)] of the signal s(t) on the basis of minimization of the squared metric ∫_{Ts} |y(t) − s(t)|² dt between the function y(t) = F[x(t)] of the observed process x(t) and the signal s(t), under the condition that the observed process x(t) includes the signal θs(t) = s(t) (θ = 1): x(t) = s(t) ∨ n(t):


Det[s(t)] = (7.4.2)
  y(t) = F[x(t)] = ŝ(t); (a)
  ∫_{Ts} |y(t) − s(t)|² dt |θ=1 → min_{y(t)∈Y} ; (b)
  θ̂ = 1[ max_{t∈Ts} [∫_{Ts} y(t)s(t)dt] |θ∈{0,1} − l0 ]; (c)
  ∫_{Ts} y(t)s(t)dt |θ=1 ≠ ∫_{Ts} y(t)s(t)dt |θ=0 , (d)

where y(t) = ŝ(t) is the estimator of the signal θs(t), θ ∈ {0, 1}, in the observed process x(t): x(t) = θs(t) ∨ n(t); F[∗] is some deterministic function; ∫_{Ts} |y(t) − s(t)|²dt = ‖y(t) − s(t)‖² is a squared metric between the signals y(t), s(t) in Hilbert space HS; θ̂ is an estimator of an unknown nonrandom parameter θ, θ ∈ {0, 1}; ∫_{Ts} y(t)s(t)dt = (y(t), s(t)) is the scalar product of the signals y(t), s(t) in Hilbert space HS; 1[x] is the Heaviside step function; l0 is some threshold value.
The relation (7.4.2a) of the system (7.4.2) specifies the rule of formation of the estimator ŝ(t) of the received signal in the form of some deterministic function F[x(t)] of the observed process x(t). The relation (7.4.2b) defines the criterion of minimum of the squared metric ∫_{Ts} |y(t) − s(t)|² dt |θ=1 in Hilbert space HS between the signals y(t) and s(t); this criterion is applied when the signal is present in the observed process, θs(t) = s(t) (θ = 1): x(t) = s(t) ∨ n(t). The relation (7.4.2c) of the system (7.4.2) determines the rule of forming the estimator θ̂ of the parameter θ, which lies in computation of the maximum value of the correlation integral between the estimator y(t) = ŝ(t) and the useful signal s(t) and in further comparison with a threshold value l0. The inequality (7.4.2d) determines a constraint, which must hold unconditionally to successfully solve the detection problem (to form the estimator θ̂) according to the rule (7.4.2c).
The solution of the problem of minimization of the squared metric
∫_{Ts} |y(t) − s(t)|² dt |θ=1 → min
between the function y(t) = F[x(t)] and the signal s(t) in its presence in the observed process x(t): x(t) = s(t) ∨ n(t) follows directly from the absorption axiom of the lattice L(∨, ∧) (see page 269) contained in the third part of the multilink identity:
y(t) = s(t) ∧ x(t) = s(t) ∧ [s(t) ∨ n(t)] = s(t). (7.4.3)

The identity (7.4.3) directly implies, first, the kind of function F [x(t)] from the
relation (7.4.2a) of the system (7.4.2):

y (t) = F [x(t)] = s(t) ∧ x(t). (7.4.4)


Second, the relationship (7.4.3) implies that the squared metric ‖y(t) − s(t)‖² is identically equal to zero:
∫_{Ts} |y(t) − s(t)|² dt |θ=1 = 0.

The identity (7.4.4) implies that in the presence of the signal s(t) in the observed process x(t) = s(t) ∨ n(t), at the instant t = t0 + T, the correlation integral ∫_{Ts} y(t)s(t)dt |θ=1 takes a maximum value equal to the energy E of the signal s(t):
max_{t=t0+T} [∫_{Ts} y(t)s(t)dt |θ=1] = ∫_{t0}^{t0+T} s(t)s(t)dt = E. (7.4.5)

The set of values of the estimator y (t) |θ=0 , according to the identity (7.4.4), is
determined by the expression:

y (t) |θ=0 = s(t) ∧ [0 ∨ n(t)]. (7.4.6)

The identity (7.4.6) implies that under joint fulfillment of the inequalities s(t) > 0
and n(t) ≤ 0, the estimator y (t) |θ=0 takes values equal to zero:

y (t) |θ=0 = 0,

while under joint fulfillment of the inequalities s(t) > 0 and n(t) > 0, and also when the inequality s(t) ≤ 0 holds, the estimator y(t) |θ=0 takes values equal to the instantaneous values of the useful signal s(t):

y (t) |θ=0 = s(t).

Based on a prior assumption concerning evenness of univariate probability density


function of interference (noise) pn (x) = pn (−x), the estimator y (t) |θ=0 determined
by the identity (7.4.6) takes values equal to zero y (t) |θ=0 = 0 with probability
P = 1/4, and values equal to useful signal y (t) |θ=0 = s(t) with probability P = 3/4.
However, despite the rather high cross-correlation between the estimator y(t) |θ=0 and the useful signal s(t), the maximum value of the correlation integral ∫_{Ts} y(t)s(t)dt |θ=0 on θ = 0 at the instant t = t0 + T is equal to 3/4 of the energy E of the signal s(t):

max_{t=t0+T} [∫_{Ts} y(t)s(t)dt |θ=0] = (3/4) ∫_{t0}^{t0+T} s(t)s(t)dt = 3E/4. (7.4.7)

The identities (7.4.5) and (7.4.7) confirm fulfillment of the constraint (7.4.2d) of
the system (7.4.2), and specify the upper and lower bounds for the threshold level
l0 , respectively:
3E/4 < l0 < E. (7.4.8)
Thus, summarizing the relationships (7.4.2a) through (7.4.2c) of the system (7.4.2), one can draw the conclusion that the deterministic signal detector has to form the estimator y(t) = ŝ(t) that, according to (7.4.4), is equal to ŝ(t) = s(t) ∧ x(t); also it has to compute the correlation integral ∫_{Ts} y(t)s(t)dt in the interval Ts = [t0, t0 + T] and to determine the presence or absence of the useful signal s(t). According to the equation (7.4.2c) of the system (7.4.2), the decision θ̂ = 1 concerning the presence of the signal s(t) (θ = 1) in the observed process x(t) is made if, at the instant t = t0 + T, the maximum value of the correlation integral ∫_{Ts} y(t)s(t)dt |θ=1 = E exceeds the threshold level l0. The decision θ̂ = 0 concerning the absence of the useful signal s(t) (θ = 0) in the observed process x(t) is made if the maximum value of the correlation integral ∫_{Ts} y(t)s(t)dt |θ=0 = 3E/4 observed at the instant t = t0 + T does not exceed the threshold level l0.
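A statistical-modeling sketch of this decision rule is given below: it forms the estimator (7.4.4), computes the correlation integral over Ts, and compares it with a threshold chosen inside the interval (7.4.8); the discretization, the test signal, and the noise level are illustrative assumptions:

```python
# Deterministic signal detection in the lattice model x = theta*s v n:
# y = s ^ x per (7.4.4), decision per (7.4.2c) with 3E/4 < l0 < E per (7.4.8).
import numpy as np

rng = np.random.default_rng(0)
M = 2048                                    # samples per observation interval
t = np.arange(M)
s = np.sin(2 * np.pi * 8 * t / M)           # deterministic signal (assumed shape)
E = np.sum(s * s)                           # signal energy
l0 = 0.875 * E                              # threshold inside (3E/4, E)

def detect(theta, noise_scale=1e4):
    n = noise_scale * rng.standard_normal(M)   # strong noise with even PDF
    x = np.maximum(theta * s, n)               # interaction x = theta*s v n (7.4.1)
    y = np.minimum(s, x)                       # estimator y = s ^ x (7.4.4)
    return float(np.sum(y * s) > l0)           # decision rule (7.4.2c)

trials = 200
D = np.mean([detect(1) for _ in range(trials)])   # conditional probability of detection
F = np.mean([detect(0) for _ in range(trials)])   # conditional probability of false alarm
print(f"D = {D:.2f}, F = {F:.2f}")   # D = 1.00, F = 0.00 irrespective of the noise level
```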
The block diagram of a deterministic signal detection unit synthesized in signal space with lattice properties includes the signal estimator ŝ(t) = y(t) formation unit; the correlation integral ∫_{Ts} y(t)s(t)dt computing unit; the strobing circuit (SC) and decision gate (DG) (see Fig. 7.4.1). The correlation integral computing unit consists of a multiplier and an integrator.

FIGURE 7.4.1 Block diagram of deterministic signal detection unit

We now analyze metric relations between the signals θs(t) = s(t) (θ = 1), θs(t) = 0 (θ = 0) and their estimators ŝ(t) |θ=1 = y(t) |θ=1, ŝ(t) |θ=0 = y(t) |θ=0. For the signals θs(t) = s(t), θs(t) = 0 and their estimators y(t) |θ=1, y(t) |θ=0, the following metric relationships hold:
‖0 − y(t) |θ=1‖² + ‖s(t) − y(t) |θ=1‖² = ‖0 − s(t)‖² = ‖s(t)‖²; (7.4.9a)
‖0 − y(t) |θ=0‖² + ‖s(t) − y(t) |θ=0‖² = ‖0 − s(t)‖² = ‖s(t)‖², (7.4.9b)
where ‖a(t) − b(t)‖² = ‖a(t)‖² + ‖b(t)‖² − 2(a(t), b(t)) is a squared metric between the functions a(t), b(t) in Hilbert space HS; ‖a(t)‖² is a squared norm of a function
a(t) in Hilbert space HS; (a(t), b(t)) = ∫_{T∗} a(t)b(t)dt is a scalar product of the functions a(t) and b(t) in Hilbert space HS; T∗ is a domain of definition of the functions a(t) and b(t).
The relationships (7.4.4) and (7.4.9a) imply that for an arbitrary signal-to-interference (signal-to-noise) ratio in the input of a detection unit, the cross-correlation coefficient ρ[s(t), y(t) |θ=1] between the signal s(t) and the estimator y(t) |θ=1 of the signal s(t), according to their identity (y(t) |θ=1 = s(t)), is equal to 1:
ρ[s(t), y(t) |θ=1] = 1,
and the squared metrics taken from (7.4.9a) are determined by the following relationships:
‖s(t) − y(t) |θ=1‖² = 0; ‖0 − y(t) |θ=1‖² = ‖0 − s(t)‖² = ‖s(t)‖² = E,
where E is the energy of the signal s(t).


The relationships (7.4.7) and (7.4.9b) imply that in the absence of the signal in the observed process x(t) = 0 ∨ n(t) in the input of a detector unit, the correlation coefficient ρ[s(t), y(t) |θ=0] between the signal s(t) and the estimator y(t) |θ=0 is equal to:
ρ[s(t), y(t) |θ=0] = (y(t) |θ=0, s(t)) / √(‖y(t) |θ=0‖² ‖s(t)‖²) = √(3/4),
and the squared metrics taken from (7.4.9b) are determined by the following relationships:
‖0 − y(t) |θ=0‖² = 3E/4; (7.4.10a)
‖s(t) − y(t) |θ=0‖² = E/4; (7.4.10b)
‖0 − s(t)‖² = E. (7.4.10c)
The relationships (7.4.10) indicate that the difference s(t) − y(t) |θ=0 between the signal s(t) and the estimator y(t) |θ=0, and the estimator y(t) |θ=0 itself, are orthogonal in Hilbert space:
(s(t) − y(t)) |θ=0 ⊥ y(t) |θ=0.
Figure 7.4.2 illustrates the signals z (t) and u(t) in the outputs of the correlation
integral computing unit (dash line) and strobing circuit (solid line), respectively,
and also the strobing pulses v0 (t) (dot line), obtained by statistical modeling under
the condition that the mixture x(t) = θs(t) ∨ n(t) contains signals θs(t) = s(t),
θs(t) = 0, θs(t) = s(t), θs(t) = 0 in the input of a detection unit distributed in
series over the intervals [0, T], [T, 2T], [2T, 3T], [3T, 4T], respectively.
The signal-to-noise ratio E/N0 is equal to E/N0 = 10^{−8}, where E is the energy
of the signal s(t); N0 is a power spectral density of noise n(t). The interference
n(t) is quasi-white Gaussian noise with independent samples. In the output of the
correlation integral computing unit, the function z (t) is linear.
As seen in Fig. 7.4.2, despite an extremely small signal-to-noise ratio, in the
input of the decision gate (in the output of strobing circuit), the signals u(t) are
distinguished by their amplitude without any error. The identity (7.4.5) implies
that in the presence of the signal s(t) in x(t) = s(t) ∨ n(t) (θ = 1) in the output
of the integrator at the instant t = t0 + jT , j = 1, 3, the maximum value of
correlation integral (7.4.5) is equal to Eρ[s(t), y (t) |θ=1 ] = E. In the absence of the
signal in the observed process x(t) = 0 ∨ n(t) (θ = 0) in the output of the integrator
(see Fig. 7.4.1) at the instant t = t0 + jT , j = 2, 4, the value of correlation integral
is equal to Eρ[s(t), y (t) |θ=0 ] = 3E/4.

FIGURE 7.4.2 Signals z(t) and u(t) in outputs of correlation integral computing unit (dash line) and strobing circuit (solid line), respectively; strobing pulses v0(t) (dot line)
FIGURE 7.4.3 Estimator θ̂ of unknown nonrandom parameter θ characterizing presence (θ = 1) or absence (θ = 0) of useful signal s(t) in output of decision gate

Figure 7.4.3 shows the result of formation of the estimator θ̂ of unknown non-
random parameter θ, which characterizes the presence (θ = 1) or absence (θ = 0)
of the useful signal s(t) in the observed process x(t). The decision gate (DG)
(see Fig. 7.4.1) forms the estimator θ̂ according to the rule (7.4.2c) of the system
(7.4.2) by means of the comparison of the signal u(t) at the instants t = t0 + jT ,
j = 1, 2, 3, . . . with the threshold value l0 chosen according to two-sided inequality
(7.4.8).
Thus, regardless of the conditions of parametric and nonparametric prior un-
certainty and, respectively, independently of probabilistic-statistical properties of
interference (noise), the optimal deterministic signal detector in signal space with
lattice properties accurately detects the signals with the conditional probabilities
of the correct detection D = 1 and false alarm F = 0. Absolute values of quality
indices of signal detection D = 1, F = 0 are explained by the fact that the es-
timator y (t) = ŝ(t) of the received signal θs(t), θ ∈ {0, 1} formed in the input of
the correlation integral computing unit (see Fig. 7.4.1), regardless of instantaneous
values of interference (noise) n(t), can take only two values from a set {0, s(t)}.
The results of the investigation of algorithm and unit of deterministic signal
detection in signal space with lattice properties allow us to draw the following
conclusions.

1. Formulation of the synthesis problem of the deterministic signal detection algorithm on the basis of minimization of squared metric in Hilbert space HS (7.4.2b), with simultaneous rejection of the use of prior data on the interference (noise) distribution, provides the invariance property of the algorithm and unit of signal detection with respect to the conditions of parametric and nonparametric prior uncertainty, confirmed by the absolute values of the conditional probabilities of correct detection D and false alarm F.
2. The existing distinctions between deterministic signal detection quality indices in linear signal space LS(+) and in signal space with lattice properties L(∨, ∧) are elucidated by fundamental differences in the kinds of interactions between the signal s(t) and interference (noise) n(t) in these spaces: additive s(t) + n(t) and the interaction in the form of the operations of join s(t) ∨ n(t) and meet s(t) ∧ n(t), respectively, and, as a consequence, by the distinctions of the properties of these signal spaces.

7.4.2 Detection of Harmonic Signal with Unknown Nonrandom Amplitude and Initial Phase with Joint Estimation of Time of Signal Arrival (Ending) in Presence of Interference (Noise) in Metric Space with L-group Properties
As a rule, if time of signal arrival (ending) is known, the detection problem makes
no sense from the standpoint of obtaining information concerning the presence
(or absence) of the signal. Within this subsection, the signal detection problem is
considered along with the problem of estimation of time of signal arrival (ending).
Synthesis and analysis of optimal algorithm of detection of harmonic signal with
unknown nonrandom amplitude and initial phase with joint estimation of time of
signal arrival (ending) in the presence of interference (noise) in signal space with
lattice properties are fulfilled under the following assumptions. During synthesis, the
distribution of interference (noise) n(t) is considered arbitrary. During the further
analysis of the synthesized algorithm and unit of signal processing, interference
(noise) n(t) is considered Gaussian.
Let the interaction between the harmonic signal s(t) and interference (noise)
n(t) in signal space L(+, ∨, ∧) with L-group properties be described by two binary
operations ∨ and ∧ in two receiving channels, respectively:

x(t) = θs(t) ∨ n(t); (7.4.11a)

x̃(t) = θs(t) ∧ n(t), (7.4.11b)


where θ is an unknown nonrandom parameter that takes only two values θ ∈ {0, 1}: θ = 0 (the signal is absent) and θ = 1 (the signal is present); t ∈ Ts, Ts is a domain of definition of the signal s(t), Ts = [t0, t1]; t0 is an unknown time of arrival of the signal s(t); t1 is an unknown time of signal ending; T = t1 − t0 is the known signal duration; Ts ⊂ Tobs; Tobs is an observation interval of the signal: Tobs = [t′0, t′1]; t′0 < t0, t1 < t′1.
Let the model of the received harmonic signal s(t) be determined by the expression:
s(t) = A cos(ω0 t + ϕ) for t ∈ Ts; s(t) = 0 for t ∉ Ts, (7.4.12)

where A is an unknown nonrandom amplitude of the useful signal s(t); ω0 = 2πf0 ;


f0 is known carrier frequency of the signal s(t); ϕ is an unknown nonrandom initial
phase of the useful signal, ϕ ∈ [−π, π ]; Ts is a domain of definition of the signal s(t),
Ts = [t0 , t1 ]; t0 is an unknown time of arrival of the signal s(t); t1 is an unknown
time of signal ending; T = t1 − t0 is known signal duration, T = Ns T0 ; Ns is a
number of periods of the harmonic signal s(t), Ns ∈ N, N is the set of natural
numbers; T0 is a period of signal carrier.
We assume that interference (noise) n(t) is characterized by such statistical
properties that two neighbor samples of interference (noise) n(tj ) and n(tj±1 ),
which are distant from each other over the interval |tj±1 − tj | = 1/f0 = T0 , are
independent. Instantaneous values (the samples) of the signal {s(tj )} and interfer-
ence (noise) {n(tj )} are the elements of signal space: s(tj ), n(tj ) ∈ L(+, ∨, ∧). The
samples s(tj ) and n(tj ) of the signal s(t) and interference (noise) n(t) are taken
on a domain of definition Ts of the signal s(t): tj ∈ Ts through the period of sig-
nal carrier T0 = 1/f0 providing the independence of interference (noise) samples
{n(tj )}.
Taking into account the aforementioned considerations, the equations of the
observations in both processing channels (7.4.11a) and (7.4.11b) take the form:

x(tj ) = θs(tj ) ∨ n(tj ); (7.4.13a)

x̃(tj ) = θs(tj ) ∧ n(tj ), (7.4.13b)


where tj = t − jT0, j = 0, 1, . . . , J − 1, tj ∈ Ts ⊂ Tobs; Ts = [t0, t1], Tobs = [t′0, t′1]; t′0 < t0, t1 < t′1; J ∈ N, N is the set of natural numbers.
In this subsection, the problem of joint detection of the signal s(t) and estimation
of its time of ending t1 is formulated and solved on the basis of matched filtering
of the signal s(t), after which the joint detection of the signal s(t) and estimation
of its time of ending t1 are realized.
The content of the matched filtering problem in the signal space with L-group (or lattice) properties essentially differs from the similar problem solved in linear signal space LS. As is well known, the basis of the formulation and solution of the matched filtering problem in linear signal space LS is the criterion of maximum signal-to-noise ratio in the filter output [254], [163], or the criterion of minimum average risk, which determines the proper structure of a matched filter. In this subsection the signal is not assumed to be known, and signal processing is not considered linear; that defines certain features of processing.
The problem of matched filtering MF[s(t)] in signal space L(+, ∨, ∧) with L-group properties is solved through step-by-step processing of the statistical collections {x(tj)} and {x̃(tj)} determined by the observation equations (7.4.13a) and (7.4.13b):
MF[s(t)] = (7.4.14)
  PF[s(t)]; (a)
  IP[s(t)]; (b)
  Sm[s(t)], (c)
where PF[s(t)] is primary filtering; IP[s(t)] is intermediate processing; Sm[s(t)] is smoothing; they all form the processing stages of a general algorithm of useful signal matched filtering (7.4.14).
The criteria of optimality determining every processing stage P F [s(t)], IP [s(t)],
Sm[s(t)] are united into the single systems:

PF[s(t)] = (7.4.15)
  y(t) = arg min_{y°(t)∈Y; t,tj∈Tobs} |∧_{j=0}^{J−1} [x(tj) − y°(t)]|; (a)
  ỹ(t) = arg min_{ỹ°(t)∈Ỹ; t,tj∈Tobs} |∨_{j=0}^{J−1} [x̃(tj) − ỹ°(t)]|; (b)
  J = arg min_{y(t)∈Y} [∫_{Tobs} |y(t)|dt] |n(t)≡0 , ∫_{Tobs} |y(t)|dt ≠ 0; (c)
  w(t) = F[y(t), ỹ(t)]; (d)
  ∫_{[t1−T0,t1]} |w(t) − s(t)|dt |n(t)≡0 → min_{w(t)∈W} , (e)

where y(t), ỹ(t) are the solution functions for minimization of the metrics between the observed statistical collections {x(tj)}, {x̃(tj)} and the optimization variables, i.e., the functions y°(t), ỹ°(t), respectively; w(t) is the function F[∗, ∗] uniting the results y(t), ỹ(t) of minimization of the functions of the observed collections {x(tj)} and {x̃(tj)}; Tobs is an observation interval of the signal; J is a number of samples of the stochastic processes x(t), x̃(t) used under processing, J ∈ N; N is the set of natural numbers;
IP[s(t)] = (7.4.16)
  M{u²(t)} |s(t)≡0 → min_L ; (a)
  M{[u(t) − s(t)]²} |n(t)≡0 = ε; (b)
  u(t) = L[w(t)], (c)
where M{∗} is the symbol of mathematical expectation; L[w(t)] is a functional transformation of the process w(t) into the process u(t); ε is a constant that is some function of a power of the signal s(t);

Sm[s(t)] = (7.4.17)
  v(t) = arg min_{v°(t)∈V; t,tk∈T̃} Σ_{k=0}^{M−1} |u(tk) − v°(t)|; (a)
  ∆T̃ : δd(∆T̃) = δd,sm ; (b)
  M = arg max_{M′∈N∩[M∗,∞[} [δf(M′)] |M∗: δf(M∗)=δf,sm , (c)

where v(t) = ŝ(t) is the result of filtering (the estimator ŝ(t) of the signal s(t)) that is the solution of the problem of minimizing the metric Σ_{k=0}^{M−1} |u(tk) − v°(t)| between the instantaneous values of the stochastic process u(t) and the optimization variable, i.e., the function v°(t); tk = t − (k/M)∆T̃, k = 0, 1, . . . , M − 1, tk ∈ T̃ = ]t − ∆T̃, t]; T̃ is an interval in which smoothing of the stochastic process u(t) is realized; M ∈ N, N is the set of natural numbers; M is a number of the samples of the stochastic process u(t) used under smoothing on the interval T̃; δd(∆T̃) and δf(M) are relative dynamic and fluctuation errors of smoothing as the dependence on the quantity ∆T̃ of the
smoothing interval T̃ and a number of the samples M , respectively; δd,sm , δf,sm


are the quantities of the relative dynamic and fluctuation errors of smoothing,
respectively.
The problems of joint detection Det[s(t)] of the signal s(t) and estimation Est[t1 ]
of time of its ending t1 are based on the detection and estimation criteria united
into one system, which is a logical continuation of the system (7.4.14):

Det[s(t)]/Est[t1] = (7.4.18)
  Ev(t̂1 − T0/2) ≷_{d0}^{d1} l0(F); (a)
  t̂1 = arg min_{ϕ∈Φ1∨Φ2; t°∈Tobs} Mϕ{(t1 − t°)²} |n(t)≡0 , (b)

where Ev(t̂1 − T0/2) is an instantaneous value of the envelope Ev(t) of the estimator v(t) = ŝ(t) of the useful signal s(t) at the instant t = t̂1 − T0/2: Ev(t) = √(v²(t) + v²H(t)); vH(t) = H[v(t)] is the Hilbert transform; d1, d0 are the decisions made concerning the true value of an unknown nonrandom parameter θ, θ ∈ {0, 1}; l0(F) is some threshold level as a function of a given conditional probability of false alarm F; t̂1 is the estimator of the time of signal ending t1; Mϕ{(t1 − t°)²} is a mean squared difference between the true value of the time of signal ending t1 and the optimization variable t°; Mϕ{∗} is a symbol of mathematical expectation with averaging over the initial phase ϕ of the signal; Φ1 and Φ2 are possible domains of definition of the initial phase ϕ of the signal: Φ1 = [−π/2, π/2] and Φ2 = [π/2, 3π/2].
We now explain the optimality criteria and individual relationships appearing in the systems (7.4.15), (7.4.16), and (7.4.17) determining the successive stages PF[s(t)], IP[s(t)], and Sm[s(t)] of the general algorithm of useful signal processing (7.4.14).
Equations (7.4.15a) and (7.4.15b) of the system (7.4.15) define the criteria of minimum of the metrics between the statistical sets of the observations {x(tj)} and {x̃(tj)} and the results of primary processing y(t) and ỹ(t), respectively. The metric functions |∧_{j=0}^{J−1} [x(tj) − y°(t)]| and |∨_{j=0}^{J−1} [x̃(tj) − ỹ°(t)]| are chosen to provide the metric convergence and the convergence in probability to the useful signal s(t) of the sequences y(t) = ∧_{j=0}^{J−1} x(tj) and ỹ(t) = ∨_{j=0}^{J−1} x̃(tj) for the interactions of both kinds (7.4.13a) and (7.4.13b).
The relationship (7.4.15c) determines the criterion of the choice of a number J of the samples of the stochastic processes x(t) and x̃(t) used during signal processing, based on the minimization of the norm ∫_{Tobs} |y(t)|dt. The criterion (7.4.15c) is considered under two constraints: (1) interference (noise) is identically equal to zero: n(t) ≡ 0; (2) the norm ∫_{Tobs} |y(t)|dt of the function y(t) is not equal to zero: ∫_{Tobs} |y(t)|dt ≠ 0.
Equation (7.4.15e) defines the criterion of minimum of the metric between the useful signal s(t) and the function w(t), i.e., ∫_{[t1−T0,t1]} |w(t) − s(t)|dt |n(t)≡0 , in the interval [t1 − T0, t1]. This criterion establishes the kind of the function F[y(t), ỹ(t)] (7.4.15d) uniting the results y(t) and ỹ(t) of primary processing of the observed processes x(t) and x̃(t). The criterion (7.4.15e) is considered when interference (noise) is identically equal to zero: n(t) ≡ 0.
The equations (7.4.16a), (7.4.16b), (7.4.16c) of the system (7.4.16) define the
criterion of the choice of functional transformation L[w(t)]. The equation (7.4.16a)
defines the criterion of minimum of the second moment of the process u(t) in the
absence of the useful signal s(t) in the input of signal processing unit. The equation
(7.4.16b) determines the quantity of the second moment of the difference between
the signals u(t) and s(t) in the absence of interference (noise) n(t) in the input of
a signal processing unit.
The equation (7.4.17a) of the system (7.4.17) defines the criterion of minimum of the metric Σ_{k=0}^{M−1} |u(tk) − v°(t)| between the instantaneous values of the process u(t) and the optimization variable v°(t) within the smoothing interval T̃ = ]t − ∆T̃, t], requiring the final processing of the signal u(t) in the form of smoothing under the condition that the useful signal is identically equal to zero: s(t) ≡ 0. The relationship (7.4.17b) establishes the rule of the choice of the quantity ∆T̃ of the smoothing interval T̃ based on a relative dynamic error δd,sm of smoothing. The equation (7.4.17c) defines the criterion of the choice of a number M of the samples of the stochastic process u(t) based on a relative fluctuation error δf,sm of smoothing.
The equation (7.4.18a) of the system (7.4.18) defines the criterion of the decision d1 concerning the signal presence (if Ev(t̂1 − T0/2) > l0(F)) or the decision d0 regarding its absence (if Ev(t̂1 − T0/2) < l0(F)). The relationship (7.4.18b) defines the criterion of forming the estimator t̂1 of the time of signal ending t1 based on minimization of the mean squared difference Mϕ{(t1 − t°)²} between the true value of the time of signal ending t1 and the optimization variable t° under the conditions that averaging is realized over the initial phase ϕ of the signal taken in one of two intervals: Φ1 = [−π/2, π/2] or Φ2 = [π/2, 3π/2], and interference (noise) is absent: n(t) ≡ 0.
To solve the problem of minimizing the functions |∧_{j=0}^{J−1} [x(tj) − y°(t)]| (7.4.15a) and |∨_{j=0}^{J−1} [x̃(tj) − ỹ°(t)]| (7.4.15b), we find the extrema of these functions, setting their derivatives with respect to y°(t) and ỹ°(t) to zero, respectively:
d|∧_{j=0}^{J−1} [x(tj) − y°(t)]| / dy°(t) = −sign(∧_{j=0}^{J−1} [x(tj) − y°(t)]) = 0; (7.4.19a)
d|∨_{j=0}^{J−1} [x̃(tj) − ỹ°(t)]| / dỹ°(t) = −sign(∨_{j=0}^{J−1} [x̃(tj) − ỹ°(t)]) = 0. (7.4.19b)

The solutions of Equations (7.4.19a) and (7.4.19b) are the values of the estimators y(t) and ỹ(t) in the form of the meet and join of the observation results {x(tj)} and {x̃(tj)}, respectively:
y(t) = ∧_{j=0}^{J−1} x(tj) = ∧_{j=0}^{J−1} x(t − jT0); (7.4.20a)
ỹ(t) = ∨_{j=0}^{J−1} x̃(tj) = ∨_{j=0}^{J−1} x̃(t − jT0). (7.4.20b)
The derivatives of the functions |∧_{j=0}^{J−1} [x(tj) − y°(t)]| and |∨_{j=0}^{J−1} [x̃(tj) − ỹ°(t)]|, according to the relationships (7.4.19a) and (7.4.19b), change their sign from minus to plus at the points y(t) and ỹ(t). Thus, the extrema determined by the formulas (7.4.20a) and (7.4.20b) are minimum points of these functions and the solutions of the equations (7.4.15a) and (7.4.15b) determining these estimation criteria.
The condition n(t) ≡ 0 of the criterion (7.4.15c) of the system (7.4.15) implies the corresponding changes in the observation equations (7.4.13a,b): x(tj) = s(tj) ∨ 0, x̃(tj) = s(tj) ∧ 0. Thus, according to the relationships (7.4.20a) and (7.4.20b), the identities hold:
y(t) |n(t)≡0 = ∧_{j=0}^{J−1} [s(tj) ∨ 0] = [s(t) ∨ 0] ∧ [s(t − (J−1)T0) ∨ 0]; (7.4.21a)
ỹ(t) |n(t)≡0 = ∨_{j=0}^{J−1} [s(tj) ∧ 0] = [s(t) ∧ 0] ∨ [s(t − (J−1)T0) ∧ 0], (7.4.21b)


where tj = t − jT0, j = 0, 1, . . . , J − 1. Based on (7.4.21a), we obtain the value of the norm ∫_{Tobs} |y(t)|dt from the criterion (7.4.15c):
∫_{Tobs} |y(t)|dt = 4(Ns − J + 1)A/π for J ≤ Ns; 0 for J > Ns, (7.4.22)

where Ns is a number of periods of the harmonic signal s(t); A is an unknown nonrandom amplitude of the useful signal s(t).
From Equation (7.4.22), according to the criterion (7.4.15c), we obtain the optimal number J of samples of the stochastic processes x(t) and x̃(t) used during primary processing (7.4.20a,b), equal to the number of signal periods Ns:

J = Ns . (7.4.23)

When the last identity holds, the processes determined by the relationships (7.4.21a)
and (7.4.21b) in the interval [t1 − T0 , t1 ] are, respectively, equal to:

y (t) n(t)≡0 = s(t) ∨ 0; (7.4.24a)

ỹ (t) n(t)≡0 = s(t) ∧ 0. (7.4.24b)
To realize the criterion (7.4.15e) of the system (7.4.15) under joint fulfillment of the
identities (7.4.24a), (7.4.24b) and (7.4.15d), it is necessary and sufficient that the
coupling equation (7.4.15d) between stochastic process w(t) and results of primary
processing y (t) and ỹ (t) has the form:

w(t) |n(t)≡0 = y(t) |n(t)≡0 ∨ 0 + ỹ(t) |n(t)≡0 ∧ 0 = s(t) ∨ 0 ∨ 0 + s(t) ∧ 0 ∧ 0 = s(t), t ∈ [t1 − T0, t1]. (7.4.25)
As follows from the expression (7.4.25), the metric ∫_{[t1−T0,t1]} |w(t) − s(t)|dt |n(t)≡0 , which must be minimized according to the criterion (7.4.15e), is minimal and equal to zero.
Obviously, the coupling equation (7.4.15d) has to be invariant with respect to
the presence (the absence) of interference (noise) n(t), so the final variant of the
coupling equation can be written on the base of (7.4.25) in the form:

w(t) = y+ (t) + ỹ− (t); (7.4.26)


y+ (t) = y (t) ∨ 0; (7.4.26a)
ỹ− (t) = ỹ (t) ∧ 0. (7.4.26b)

Thus, the identity (7.4.26) determines the kind of the coupling equation (7.4.15d)
obtained from the joint fulfillment of the criteria (7.4.15a), (7.4.15b), and (7.4.15e).
The solution u(t) of the relationships (7.4.16a), (7.4.16b), (7.4.16c) of the system (7.4.16), defining the criterion of the choice of the functional transformation of the process w(t), is the function L[w(t)] determining the gain characteristic of the limiter:
u(t) = L[w(t)] = a for w(t) ≥ a; w(t) for −a < w(t) < a; −a for w(t) ≤ −a, (7.4.27)
its linear part provides the condition (7.4.16b); its clipping part (above and below) provides minimization of the second moment of the process u(t), according to the criterion (7.4.16a).
The relationship (7.4.27) can be written in terms of L-group L(+, ∨, ∧) in the
form:
u(t) = L[w(t)] = [(w(t) ∧ a) ∨ 0] + [(w(t) ∨ (−a)) ∧ 0], (7.4.27a)
where the limiter parameter a is chosen to be equal to a = sup ∆A = Amax, ∆A = ]0, Amax]; Amax is the maximum possible value of the useful signal amplitude.
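A quick check that the L-group form (7.4.27a) reproduces the piecewise gain characteristic (7.4.27):

```python
# Equivalence of (7.4.27) and (7.4.27a) on a grid of test points; a is assumed.
import numpy as np

a = 2.0                                   # limiter parameter, a = A_max
w = np.linspace(-3 * a, 3 * a, 1001)

u_piecewise = np.clip(w, -a, a)           # (7.4.27)
u_lgroup = (np.maximum(np.minimum(w, a), 0.0)
            + np.minimum(np.maximum(w, -a), 0.0))   # (7.4.27a)

assert np.allclose(u_piecewise, u_lgroup)
print("(7.4.27a) matches (7.4.27) at all test points")
```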
We obtain the estimator v(t) = ŝ(t) of the signal s(t) by solving the equation of minimization of the function Σ_{k=0}^{M−1} |u(tk) − v°(t)| on the base of the criterion (7.4.17a) of the system (7.4.17). We find the extremum of this function, setting its derivative with respect to v°(t) to zero:
d{Σ_{k=0}^{M−1} |u(tk) − v°(t)|} / dv°(t) = −Σ_{k=0}^{M−1} sign[u(tk) − v°(t)] = 0.

The solution of the last equation is the value of the estimator v (t) in the form of
sample median med{∗} of the collection {u(tk )} of stochastic process u(t):

v(t) = med_{tk∈T̃} {u(tk)}, (7.4.28)

where tk = t − (k/M)∆T̃, k = 0, 1, . . . , M − 1; tk ∈ T̃ = ]t − ∆T̃, t]; T̃ is a smoothing interval of the stochastic process u(t); ∆T̃ is a quantity of the smoothing interval T̃.
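A minimal sketch of the smoother (7.4.28) as a sliding sample median over the M most recent samples; the test signal, overshoot density, and window size are illustrative assumptions:

```python
# Sliding-median smoothing per (7.4.28): v(t) = med{u(t_k)}, t_k in (t - dT, t].
import numpy as np

def median_smooth(u, M):
    # v[i] = median of u[i-M+1..i]; the first M-1 outputs use a shorter window
    v = np.empty(len(u))
    for i in range(len(u)):
        v[i] = np.median(u[max(0, i - M + 1):i + 1])
    return v

rng = np.random.default_rng(2)
t = np.linspace(0.0, 4 * np.pi, 400)
clean = np.sin(t)                           # stand-in for the limited process u(t)
u = clean.copy()
u[rng.integers(20, 400, 40)] += 5.0         # sparse noise overshoots after limitation
v = median_smooth(u, M=15)
print(f"max residual after smoothing: {np.max(np.abs(v - clean)):.2f}")
```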
The derivative of the function Σ_{k=0}^{M−1} |u(tk) − v°(t)| at the point v(t) changes its sign from minus to plus. Thus, the extremum determined by the formula (7.4.28) is the minimum of this function and the solution of Equation (7.4.17a) defining this criterion of signal processing.
The rule of making the decision d1 concerning the presence of the signal (if Ev(t̂1 − T0/2) > l0(F)) or the decision d0 concerning the absence of the signal (if Ev(t̂1 − T0/2) < l0(F)), stated by the equation (7.4.18a), suggests (1) formation of the envelope Ev(t) of the estimator v(t) = ŝ(t) of the useful signal s(t), and (2) comparison of the value of the envelope Ev(t̂1 − T0/2) with a threshold value l0(F) at the instant t = t̂1 − T0/2 determined by the estimator t̂1; as the result, the decision making is realized:
Ev(t̂1 − T0/2) ≷_{d0}^{d1} l0(F). (7.4.29)
The relationship (7.4.18b) defines the criterion of forming the estimator t̂1 of time
of signal ending t1 based on minimizing the mean squared difference Mϕ {(t1 −t◦ )2 }
between true value of time of signal ending t1 and optimization variable t◦ under
the conditions that averaging is realized over the initial phase ϕ of the signal taken
in one of two intervals: Φ1 = [−π/2, π/2] or Φ2 = [π/2, 3π/2], and interference
(noise) is identically equal to zero: n(t) ≡ 0.
The solution of the optimization equation (7.4.18b) of the system (7.4.18) is determined by the identity:
t̂1 = t̂− + (T0/2) + (T0 ϕ̂/2π) for ϕ ∈ Φ1 = [−π/2, π/2]; t̂1 = t̂+ + (T0/2) − (T0 ϕ̂/2π) for ϕ ∈ Φ2 = [π/2, 3π/2], (7.4.30)
where t̂± = (∫_{Tobs} t v±(t)dt) / (∫_{Tobs} v±(t)dt) is the estimator of the barycentric coordinate of the positive v+(t) or the negative v−(t) part of the smoothed stochastic process v(t), respectively; v+(t) = v(t) ∨ 0, v−(t) = v(t) ∧ 0; Tobs is an observation interval of the signal: Tobs = [t′0, t′1]; t′0 < t0, t1 < t′1; ϕ̂ = arcsin[2(t̂+ − t̂−)/T0] is the estimator of an unknown nonrandom initial phase ϕ of the useful signal s(t).
The value of mean squared difference Mϕ {(t1 − t̂1 )2 } between the true value
of time of signal ending t1 and its estimator t̂1 determined by Equation (7.4.30),
under the given conditions, is equal to zero:

Mϕ{(t1 − t̂1)²} |n(t)≡0 = 0, on ϕ ∈ Φ1 = [−π/2, π/2] or ϕ ∈ Φ2 = [π/2, 3π/2].


If the initial phase ϕ of the signal can change within the interval from −π to
π, i.e., it is not known beforehand to which interval (Φ1 = [−π/2, π/2] or Φ2 =
[π/2, 3π/2] from the relationships (7.4.18b) and (7.4.30)) the phase ϕ belongs, then
the estimators t̂1 of time of signal ending t1 can be satisfactory and determined by
the identities:
t̂1 = max_{t̂±∈Tobs} [t̂−, t̂+] + (T0/4), ϕ ∈ [−π, π]; (7.4.31a)
or: t̂1 = (1/2)(t̂− + t̂+) + (T0/2), ϕ ∈ [−π, π]. (7.4.31b)

Summarizing the relationships (7.4.30), (7.4.29), (7.4.28), (7.4.27), (7.4.26), (7.4.25), and (7.4.20a,b), one can conclude that the estimator t̂1 of the time of signal ending and the envelope Ev(t) are formed from further processing of the estimator v(t) = ŝ(t) of the harmonic signal s(t) detected in the presence of interference (noise) n(t), which is the function of smoothing the stochastic process u(t) obtained by limitation of the process w(t) that combines the results y(t) and ỹ(t) of primary processing of the stochastic processes x(t) and x̃(t) in the observation interval Tobs.
The block diagram of the processing unit, according to the relationships
(7.4.20a,b), (7.4.23), (7.4.26), (7.4.27), (7.4.28), (7.4.29), and (7.4.30), includes two
processing channels, each containing transversal filters; the units of formation of
positive y+ (t) and negative ỹ− (t) parts of the processes y (t) and ỹ (t), respectively;
an adder uniting the results of signal processing in both channels; a limiter L[w(t)];
a median filter (MF), an estimator formation unit (EFU), an envelope computation
unit (ECU), and a decision gate (DG) (see Fig. 7.4.4).

FIGURE 7.4.4 Block diagram of processing unit that realizes harmonic signal detection
with joint estimation of time of signal arrival (ending)

Transversal filters in the two processing channels realize primary filtering PF[s(t)]: y(t) = ∧_{j=0}^{Ns−1} x(t − jT0), ỹ(t) = ∨_{j=0}^{Ns−1} x̃(t − jT0) of the observed stochastic processes x(t), x̃(t), according to the equations (7.4.20a,b) and (7.4.23), providing fulfillment of the criteria (7.4.15a), (7.4.15b), and (7.4.15c) of the system (7.4.15). In the two processing channels, the units of formation of the positive y+(t) and negative ỹ−(t) parts of the processes y(t) and ỹ(t) form the values of these functions according to the identities (7.4.26a) and (7.4.26b), respectively. The adder unites the results
of signal processing in two channels, according to the equality (7.4.26), providing


fulfillment of the criteria (7.4.15d) and (7.4.15e) of the system (7.4.15).
The limiter L[w(t)] realizes intermediate processing IP [s(t)] in the form of lim-
itation of the signal w(t) in the output of the adder, according to the criteria
(7.4.16a), (7.4.16b) of the system (7.4.16), to exclude from further processing noise
overshoots whose instantaneous values exceed a given value a. The median filter
(MF) realizes smoothing Sm[s(t)] of the process u(t), according to formula (7.4.28),
fulfilling criterion (7.4.17a) of the system (7.4.17).
Envelope computation unit (ECU) forms the envelope Ev (t) of the signal v (t) in
the output of the median filter (MF). The estimator formation unit (EFU) forms the
estimator t̂1 of time of signal ending, according to the equation (7.4.30), providing
fulfillment of the criterion (7.4.18b) of the system (7.4.18).
The decision gate (DG) compares, at the instant t = t̂1 − T0/2, the instantaneous value of the envelope Ev(t) with the threshold value l0(F), and makes the decision d1 concerning the presence of the signal (if Ev(t̂1 − T0/2) > l0(F)) or the decision d0 concerning the absence of the signal (if Ev(t̂1 − T0/2) < l0(F)), according to the rule (7.4.29) of the criterion (7.4.18a) of the system (7.4.18).
Figures 7.4.5 through 7.4.8 illustrate the results of statistical modeling of signal
processing by a synthesized unit. The useful signal s(t) is harmonic with a number
of periods Ns = 16, with the initial phase ϕ = π/3. The signal-to-noise ratio E/N0 is equal to E/N0 = 10^{−10}, where E is signal energy; N0 is power spectral density.
Product T0 fn,max = 32, where T0 is a period of a carrier of the signal s(t); fn,max
is a maximum frequency of power spectral density of noise n(t) in the form of
quasi-white Gaussian noise.
Fig. 7.4.5 illustrates the useful signal s(t) and realizations w∗ (t), v ∗ (t) of the
signals w(t), v (t) in the outputs of the adder and median filter. As shown in the
figure, the matched filter provides the compression of the useful harmonic signal
s(t) in such a way that the duration of the signal v (t) in the output of matched filter
is equal to a period T0 of the harmonic signal s(t); thus, compressing the useful
signal Ns times, where Ns is a number of periods of harmonic signal s(t).

FIGURE 7.4.5 Useful signal s(t) (dot line); realization w∗(t) of signal w(t) in output of adder (dash line); realization v∗(t) of signal v(t) in output of median filter (solid line)
FIGURE 7.4.6 Useful signal s(t) (dot line); realization v∗(t) of signal in output of median filter v(t) (solid line); δ-pulse determining time position of estimator t̂1 of time of signal ending t1

Figure 7.4.6 illustrates: the useful signal s(t); realization v ∗ (t) of the signal in the
output of median filter v (t); δ-pulses determining time position of the estimators t̂±
of barycentric coordinates of positive v+ (t) or negative v− (t) parts of the smoothed
stochastic process v (t); and δ-pulse determining time position of the estimator t̂1
of time of signal ending t1 , according to the formula (7.4.30).
Figure 7.4.7 illustrates: the useful signal s(t); realization v ∗ (t) of the signal in the
output of median filter v (t); δ-pulses determining time position of the estimators t̂±
of barycentric coordinates of positive v+ (t) or negative v− (t) parts of the smoothed
stochastic process v (t); and δ-pulse determining time position of the estimator t̂1 of
time of signal ending t1 , according to the formulas (7.4.30), (7.4.31a), and (7.4.31b).

FIGURE 7.4.7 Useful signal s(t) (dot line); realization v∗(t) of signal in output of median filter v(t) (solid line); δ-pulses determining time position of estimators t̂±; δ-pulse determining time position of estimator t̂1 of time of signal ending t1
FIGURE 7.4.8 Useful signal s(t) (dot line); realization Ev∗(t) of envelope Ev(t) of signal v(t) (solid line); δ-pulses determining time position of estimators t̂±; δ-pulse determining time position of estimator t̂1 of time of signal ending t1

Figure 7.4.8 illustrates: the useful signal s(t); realization Ev∗ (t) of the envelope
Ev (t) of the signal v (t) in the output of median filter; δ-pulses determining time
position of the estimators t̂± of barycentric coordinates of positive v+ (t) or negative
v− (t) parts of the smoothed stochastic process v (t); and δ-pulse determining time
position of the estimator t̂1 of time of signal ending t1 according to the formula
(7.4.30).
We can determine the quality indices of harmonic signal detection by a synthe-
sized processing unit (Fig. 7.4.4). We use the theorem from [251] indicating that
the median estimator v (t) formed by median filter (MF) converges in distribution
to a Gaussian random variable with zero mean.
As with the signal w(t) in the output of the adder (see formula (7.3.25)), the process v(t) in the output of a median filter can be represented as the sum of signal vs(t) and noise vn(t) components:

v (t) = vs (t) + vn (t). (7.4.32)

As mentioned in Section 7.3, the variance Dvn (7.3.34) of the noise component vn(t) of the process v(t) in the output of the filter (7.4.28) (under the condition that the signal s(t) is a harmonic oscillation of the form (7.4.12)) can be decreased to the quantity:

$$ D_{v_n} \le D_{v,\max} = a^2 \exp\left(-\frac{\Delta\tilde{T}^2 N_s^2 f_{n,\max}^2}{8\sqrt{\pi}}\right), \qquad (7.4.33) $$

where a is a parameter of the limiter L[w(t)]; ∆T̃ is a value of smoothing interval T̃ ;


Ns is a number of periods of harmonic signal s(t); fn,max is a maximum frequency
of power spectral density of interference (noise) n(t).
Under additive interaction between the signal vs(t) and noise vn(t) components in the output of the median filter (7.4.32), and assuming normality of the distribution of the noise component vn(t) with zero mean, the conditional PDF pv(z/θ = 1) of the envelope Ev(t) of the process v(t) in the output of the median filter can be represented in the form of a Rice distribution:

$$ p_v(z/\theta = 1) = \frac{z}{D_{v,\max}} \exp\left(-\frac{z^2 + E^2}{2 D_{v,\max}}\right) I_0\left(\frac{E \cdot z}{D_{v,\max}}\right), \quad z \ge 0, \qquad (7.4.34) $$

where E is a value of the envelope Ev (t); Dv,max is a maximum value of a variance of


noise component vn (t) (7.4.33); θ is an unknown nonrandom parameter, θ ∈ {0, 1},
determining the presence or absence of a useful signal in the observed stochastic
process (7.4.11a,b).
Formula (7.4.34), on E = 0 (the signal is absent), reduces to the Rayleigh distribution:

$$ p_v(z/\theta = 0) = \frac{z}{D_{v,\max}} \exp\left(-\frac{z^2}{2 D_{v,\max}}\right), \quad z \ge 0. \qquad (7.4.35) $$
The conditional probability of false alarm F, according to (7.4.35), is equal to:

$$ F = \int_{l_0}^{\infty} p_v(z/\theta = 0)\,dz = \exp\left(-\frac{l_0^2}{2 D_{v,\max}}\right), \qquad (7.4.36) $$

where l0 is a threshold value determined by the inverse dependence:

$$ l_0 = \sqrt{-2 D_{v,\max} \ln(F)}, \qquad (7.4.37) $$

and Dv,max is the maximum value of the variance of the noise component vn(t) determined by the formula (7.4.33).
The conditional probability of correct detection D, according to (7.4.34), is equal to:

$$ D = \int_{l_0}^{\infty} p_v(z/\theta = 1)\,dz, \qquad (7.4.38) $$

where the value E of the envelope Ev(t) is equal to E = A; A is the amplitude of the useful signal.
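The pair (7.4.36) through (7.4.38) is straightforward to evaluate numerically. Below is a minimal sketch in Python, assuming the Rayleigh/Rice envelope model above; the values of Dv,max, F, and A are illustrative placeholders, not values prescribed by the text.

```python
# Numerical evaluation of the detection quality indices (7.4.36)-(7.4.38);
# a sketch, assuming the Rayleigh/Rice envelope model with variance Dv_max.
import numpy as np
from scipy.stats import rayleigh, rice

Dv_max = 0.01                      # example value of D_{v,max} from (7.4.33)
sigma = np.sqrt(Dv_max)
F = 1e-4                           # given conditional false-alarm probability
l0 = sigma * np.sqrt(-2.0 * np.log(F))        # threshold (7.4.37)
A = 1.0                            # envelope value E = A (signal amplitude)

D = rice.sf(l0, b=A / sigma, scale=sigma)     # correct detection (7.4.38)
assert np.isclose(rayleigh.sf(l0, scale=sigma), F)  # consistency with (7.4.36)
print(f"l0 = {l0:.4f}, D = {D:.6f}")
```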
FIGURE 7.4.9 Block diagram of unit processing observations defined by Equations: (a) (7.4.11a); (b) (7.4.11b)

It is too difficult to obtain an exact quantity D[δt̂1] of the variance of the relative error δt̂1 = (t̂1 − t1)/T0 of the estimator t̂1 of the time of signal ending t1. For the three estimators determined by the formulas (7.4.30), (7.4.31a), and (7.4.31b), we point to the upper bound |δt̂1|max of the absolute quantity of the relative error |δt̂1| = |t̂1 − t1|/T0 of the estimator t̂1 of the time of signal ending t1, obtained by the statistical modeling method:

$$ |\delta\hat{t}_1| = |\hat{t}_1 - t_1|/T_0 \le |\delta\hat{t}_1|_{\max} = 0.5. \qquad (7.4.39) $$
The upper bound |δt̂1|max of the absolute quantity of the relative error |δt̂1| of the estimator t̂1 of the time of signal ending t1, along with the detection quality indices (7.4.36) and (7.4.38), does not depend on the energetic relationships between the useful signal and the interference (noise).
When we have prior information that the initial phase of the signal belongs to the interval Φ1 = [−π/2, π/2] or Φ2 = [π/2, 3π/2], which determines the method of formation of the estimator t̂1 of the time of signal ending t1 according to (7.4.30), then to construct the processing unit it is enough to use only one of the observation equations, (7.4.11a) or (7.4.11b), respectively. The block diagrams of the processing unit are shown in Fig. 7.4.9a and Fig. 7.4.9b, respectively.

7.4.3 Detection of Harmonic Signal with Unknown Nonrandom Amplitude and Initial Phase with Joint Estimation of Amplitude, Initial Phase, and Time of Signal Arrival (Ending) in Presence of Interference (Noise) in Metric Space with L-group Properties
Synthesis and analysis of the optimal algorithm of detection of a harmonic signal with unknown nonrandom amplitude and initial phase, with joint estimation of the amplitude, the initial phase, and the time of signal arrival (ending) in the presence of interference (noise) in metric space with L-group properties, are fulfilled under the following assumptions. Under the synthesis, the interference distribution is considered to be arbitrary. In the further analysis, the interference (noise) n(t) is considered to be quasi-white Gaussian noise.
The interaction between harmonic signal s(t) and interference (noise) n(t) in
metric space L(+, ∨, ∧) with L-group properties is described by two binary opera-
tions ∨ and ∧ in two receiving channels, respectively:

x(t) = θs(t) ∨ n(t); (7.4.40a)

x̃(t) = θs(t) ∧ n(t), (7.4.40b)


where θ is an unknown nonrandom parameter that takes only two values θ ∈ {0, 1}: θ = 0 (the signal is absent) and θ = 1 (the signal is present); t ∈ Ts, Ts is the domain of definition of the signal s(t), Ts = [t0, t1]; t0 is an unknown time of arrival of the signal s(t); t1 is an unknown time of signal ending; T = t1 − t0 is the known signal duration; Ts ⊂ Tobs; Tobs is the observation interval of the signal, Tobs = [t′0, t′1]; t′0 < t0, t1 < t′1.
Let the model of the received harmonic signal s(t) be determined by the expression:

$$ s(t) = \begin{cases} A\cos(\omega_0 t + \varphi), & t \in T_s;\\ 0, & t \notin T_s, \end{cases} \qquad (7.4.41) $$
where A is an unknown nonrandom amplitude of the useful signal s(t); ω0 = 2πf0 ;
f0 is a known carrier frequency of the signal s(t); ϕ is an unknown nonrandom
initial phase of the useful signal, ϕ ∈ [−π, π ]; Ts is a domain of definition of the
signal s(t), Ts = [t0 , t1 ]; t0 is an unknown time of arrival of the signal s(t); t1 is an
unknown time of signal ending; T = t1 − t0 is known signal duration, T = Ns T0 ;
Ns is a number of periods of harmonic signal s(t), Ns ∈ N, N is the set of natural
numbers; T0 is a carrier period.
The instantaneous values (the samples) of the signal {s(tj)} and interference (noise) {n(tj)} are the elements of the signal space: s(tj), n(tj) ∈ L(+, ∨, ∧). The samples of interference (noise) n(tj) are considered independent. The samples s(tj) and n(tj) of the signal s(t) and interference (noise) n(t) are taken on the domain of definition Ts of the signal s(t): tj ∈ Ts, over the sample interval ∆t that provides the independence of the samples of interference (noise) {n(tj)}; ∆t ≪ 1/f0, where f0 is the known carrier frequency of the signal s(t).
Taking into account these considerations, the equations of observations in two
processing channels (7.4.40a) and (7.4.40b) take the form:

x(tj ) = θs(tj ) ∨ n(tj ); (7.4.42a)

x̃(tj ) = θs(tj ) ∧ n(tj ), (7.4.42b)


where tj = t − j∆t, j = 0, 1, …, N − 1, tj ∈ T∗ ⊂ Ts ⊂ Tobs; T∗ is the interval of primary processing, T∗ = [t − (N − 1)∆t, t]; Ts = [t0, t1], Tobs = [t′0, t′1]; t′0 < t0, t1 < t′1; N ∈ ℕ, ℕ is the set of natural numbers.
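For intuition, the observation model (7.4.42a,b) is easy to simulate. The sketch below, assuming quasi-white Gaussian noise samples and discrete time (the noise level and seed are illustrative assumptions), forms the two channel observations as elementwise join and meet:

```python
# Sketch of the two-channel lattice observation model (7.4.42a,b):
# x = θs ∨ n (join/max), x~ = θs ∧ n (meet/min), with independent noise samples.
import numpy as np

rng = np.random.default_rng(0)

def observe(s, noise_std=1.0, theta=1):
    n = rng.normal(0.0, noise_std, size=s.shape)  # quasi-white Gaussian noise
    x = np.maximum(theta * s, n)        # (7.4.42a)
    x_tilde = np.minimum(theta * s, n)  # (7.4.42b)
    return x, x_tilde
```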
In this subsection, as in Subsection 7.4.2, the problems of joint detection of the
signal s(t) and estimation of its time of ending t1 , and also estimation of both the
amplitude A and the initial phase ϕ are formulated and solved on the basis of the
extraction Ext[s(t)] of useful signal s(t) in the presence of interference (noise) n(t),
with further matched filtering M F [s(t)]. After that joint detection of the signal s(t)
and estimation of its time of ending t1 are realized, and estimation of the amplitude
A and the initial phase ϕ is fulfilled.
The problem of extraction Ext[s(t)] of the useful signal s(t) in the presence of interference (noise) n(t) is formulated and solved similarly as shown in Section 7.3, on the basis of step-by-step processing of the statistical collections {x(tj)} and {x̃(tj)} determined by the observation equations (7.4.42a) and (7.4.42b):

$$ Ext[s(t)] = \begin{cases} PF[s(t)]; & \text{(a)}\\ IP[s(t)]; & \text{(b)}\\ ISm[s(t)], & \text{(c)} \end{cases} \qquad (7.4.43) $$

where PF[s(t)] is primary filtering, IP[s(t)] is intermediate processing, and ISm[s(t)] is intermediate smoothing; these form the successive stages of processing of the general signal extraction algorithm (7.4.43).
The optimality criteria determining every stage of processing PF[s(t)], IP[s(t)], ISm[s(t)] are interrelated and combined into single systems:

$$ PF[s(t)] = \begin{cases}
y(t) = \underset{\overset{\wedge}{y}(t)\in Y;\ t,t_j\in T^*}{\arg\min}\ \bigl|\,\bigwedge_{j=0}^{N-1}[x(t_j) - \overset{\wedge}{y}(t)]\,\bigr|; & \text{(a)}\\
\tilde{y}(t) = \underset{\overset{\vee}{y}(t)\in \tilde{Y};\ t,t_j\in T^*}{\arg\min}\ \bigl|\,\bigvee_{j=0}^{N-1}[\tilde{x}(t_j) - \overset{\vee}{y}(t)]\,\bigr|; & \text{(b)}\\
w(t) = F[y(t), \tilde{y}(t)]; & \text{(c)}\\
\sum_{j=0}^{N-1} |w(t_j) - s(t_j)|\,\bigr|_{n(t)\equiv 0,\ \Delta t\to 0} \to \underset{w(t_j)\in W}{\min}; & \text{(d)}\\
N = \underset{N'\in \mathbb{N}\,\cap\,]0,N^*]}{\arg\max}\,[\delta_d(N')]\,\bigr|_{N^*:\ \delta_d(N^*)=\delta_{d0}}, & \text{(e)}
\end{cases} \qquad (7.4.44) $$

where y(t), ỹ(t) are the solution functions of the problem of minimization of a metric between the observed statistical collections {x(tj)}, {x̃(tj)} and the optimization variables, i.e., the functions $\overset{\wedge}{y}(t)$ and $\overset{\vee}{y}(t)$, respectively; w(t) is the function F[∗, ∗] of uniting the results y(t) and ỹ(t) of minimization of the functions of the observed collections {x(tj)} (7.4.42a) and {x̃(tj)} (7.4.42b); T∗ is the processing interval; N ∈ ℕ, ℕ is the set of natural numbers; N is the number of the samples of the processes x(t) and x̃(t) used under their primary processing; δd(N) is the relative dynamic error of filtering as a dependence on the number of the samples N; δd0 is a given quantity of the relative dynamic error of filtering;
$$ IP[s(t)] = \begin{cases}
\mathrm{M}\{u^2(t)\}\,\bigr|_{s(t)\equiv 0} \to \min; & \text{(a)}\\
\mathrm{M}\{[u(t) - s(t)]^2\}\,\bigr|_{n(t)\equiv 0,\ \Delta t\to 0} = \varepsilon_L; & \text{(b)}\\
u(t) = L[w(t)], & \text{(c)}
\end{cases} \qquad (7.4.45) $$

where M{∗} is the symbol of mathematical expectation; L[w(t)] is a functional transformation of the stochastic process w(t) into the process u(t); ε_L is a constant which, in the general case, is some function of a power of the signal s(t);
$$ ISm[s(t)] = \begin{cases}
v(t) = \underset{v^{\circ}(t)\in V;\ t,t_k\in \tilde{T}}{\arg\min}\ \sum_{k=0}^{M-1} |u(t_k) - v^{\circ}(t)|; & \text{(a)}\\
\Delta\tilde{T}:\ \delta_d(\Delta\tilde{T}) = \delta_{d,sm}; & \text{(b)}\\
M = \underset{M'\in \mathbb{N}\,\cap\,[M^*,\infty[}{\arg\max}\,[\delta_f(M')]\,\bigr|_{M^*:\ \delta_f(M^*)=\delta_{f,sm}}, & \text{(c)}
\end{cases} \qquad (7.4.46) $$

where v(t) = ŝ(t) is the result of filtering (the estimator ŝ(t) of the signal s(t)), that is, the solution of the problem of minimization of the metric between the instantaneous values of the stochastic process u(t) and the optimization variable, i.e., the function v°(t); tk = t − (k/M)∆T̃, k = 0, 1, …, M − 1, tk ∈ T̃, T̃ = ]t − ∆T̃, t]; T̃ is the interval on which smoothing of the stochastic process u(t) is realized; ∆T̃ is the quantity of the smoothing interval T̃; M ∈ ℕ, ℕ is the set of natural numbers; M is the number of the samples of the stochastic process u(t) used under smoothing; δd(∆T̃) and δf(M) are the relative dynamic and fluctuation errors of smoothing as dependences on the quantity ∆T̃ of the smoothing interval T̃ and the number of the samples M, respectively; δd,sm, δf,sm are given quantities of the relative dynamic and fluctuation errors of smoothing, respectively.
We now explain the optimality criteria and the single relationships involved in the systems (7.4.44), (7.4.45), (7.4.46) determining the successive stages of processing PF[s(t)], IP[s(t)], ISm[s(t)] of the general algorithm of signal extraction (7.4.43).

Equations (7.4.44a) and (7.4.44b) of the system (7.4.44) define the criteria of minimum of metrics between the statistical sets of the observations {x(tj)} and {x̃(tj)} and the results of primary filtering y(t) and ỹ(t), respectively. The functions of metrics $|\bigwedge_{j=0}^{N-1}[x(t_j) - \overset{\wedge}{y}(t)]|$ and $|\bigvee_{j=0}^{N-1}[\tilde{x}(t_j) - \overset{\vee}{y}(t)]|$ are chosen to provide the metric convergence and the convergence in probability to the estimated parameter of the sequences $y_{N-1} = \bigwedge_{j=0}^{N-1} x(t_j)$, $\tilde{y}_{N-1} = \bigvee_{j=0}^{N-1} \tilde{x}(t_j)$ for the interactions in the form (7.2.2a) and (7.2.2b) (see Section 7.2).
The equation (7.4.44d) of the system (7.4.44) defines the criterion of minimum of the metric $\sum_{j=0}^{N-1} |w(t_j) - s(t_j)|$ between the useful signal s(t) and the function w(t) in the processing interval T∗ = [t − (N − 1)∆t, t]. This criterion establishes the kind of the function F[y(t), ỹ(t)] (7.4.44c) uniting the primary processing results y(t) and ỹ(t) obtained from the observed processes x(t) and x̃(t). The criterion (7.4.44d) is considered under two constraint conditions: (1) interference (noise) is identically equal to zero: n(t) ≡ 0; (2) the sampling interval ∆t tends to zero: ∆t → 0. The equation (7.4.44e) of the system (7.4.44) determines the criterion of the choice of the number of the samples N of the stochastic processes x(t), x̃(t), based on the quantity δd0 of the relative dynamic error of primary filtering.
Equations (7.4.45a), (7.4.45b), (7.4.45c) of the system (7.4.45) define the choice
of the functional transformation L[w(t)]. Equation (7.4.45a) determines the crite-
rion of minimum of the second moment of the process u(t) in the absence of useful
signal s(t) in the input of the signal processing unit. Equation (7.4.45b) determines
the second moment of the difference between the signals u(t) and s(t) under two
constraint conditions: (1) interference (noise) n(t) in the input of signal processing
unit is absent; (2) the sampling interval ∆t tends to zero: ∆t → 0.
Equation (7.4.46a) of the system (7.4.46) defines the criterion of minimum of the metric $\sum_{k=0}^{M-1} |u(t_k) - v^{\circ}(t)|$ between the instantaneous values of the process u(t) and the optimization variable v°(t) in the smoothing interval T̃ = ]t − ∆T̃, t], requiring intermediate smoothing of the signal u(t). Equation (7.4.46b) determines the criterion of the choice of the quantity ∆T̃ of the smoothing interval T̃, based on a given quantity of the dynamic error of intermediate smoothing δd,sm. Equation (7.4.46c) determines the criterion of the choice of the number of the samples M of the stochastic process u(t), based on a given quantity of the fluctuation error of intermediate smoothing δf,sm.
Thus, the criteria (7.4.44) through (7.4.46) define the signal extraction algorithm
in the presence of interference in the form of quasi-white noise with independent
samples (see Equation (7.3.3) from Section 7.3).
The further problem of matched filtering M F [s(t)] in signal space L(+, ∨, ∧)
with L-group properties is solved similarly, as shown in Subsection 7.4.2, with the
only difference that the statistical collections {v+ (tj )} and {v− (tj )} of the positive
v+ (t) = v (t) ∨ 0 and negative v− (t) = v (t) ∧ 0 parts of the process v (t) = ŝ(t),
obtained on the basis of the criterion (7.4.46a) of the system (7.4.46), take part in
signal processing. The intermediate processing in the limiter is excluded from the
algorithm M F [s(t)], inasmuch as it is foreseen in the signal extraction algorithm
Ext[s(t)]. Thus, the problem of matched filtering M F [s(t)] of useful signal s(t) in
the presence of interference (noise) n(t) is formulated and solved on the basis of
step-by-step processing of statistical collections {v+ (tj )} and {v− (tj )} determined
by the equations v+ (t) = v (t) ∨ 0 and v− (t) = v (t) ∧ 0:
$$ MF[s(t)] = \begin{cases} IF[s(t)]; & \text{(a)}\\ Sm[s(t)], & \text{(b)} \end{cases} \qquad (7.4.47) $$

where IF[s(t)] is intermediate filtering and Sm[s(t)] is smoothing; together they form the successive processing stages of the general algorithm of matched filtering (7.4.47) of the useful signal.
The optimality criteria determining every stage of processing IF [s(t)] and
Sm[s(t)] are interrelated and involved in the separate systems:
$$ IF[s(t)] = \begin{cases}
Y(t) = \underset{\overset{\wedge}{Y}(t)\in Y;\ t,t_j\in T_{obs}}{\arg\min}\ \bigl|\,\bigwedge_{j=0}^{J-1}[v_+(t_j) - \overset{\wedge}{Y}(t)]\,\bigr|; & \text{(a)}\\
\tilde{Y}(t) = \underset{\overset{\vee}{Y}(t)\in \tilde{Y};\ t,t_j\in T_{obs}}{\arg\min}\ \bigl|\,\bigvee_{j=0}^{J-1}[v_-(t_j) - \overset{\vee}{Y}(t)]\,\bigr|; & \text{(b)}\\
J = \underset{Y(t)\in Y}{\arg\min}\Bigl[\int_{T_{obs}} |Y(t)|\,dt\Bigr]\Bigr|_{n(t)\equiv 0}, \quad \int_{T_{obs}} |Y(t)|\,dt \ne 0; & \text{(c)}\\
W(t) = F[Y(t), \tilde{Y}(t)]; & \text{(d)}\\
\int_{[t_1 - T_0,\, t_1]} |W(t) - s(t)|\,dt\,\Bigr|_{n(t)\equiv 0} \to \underset{W(t)\in W}{\min}, & \text{(e)}
\end{cases} \qquad (7.4.48) $$

where {v+(tj)} and {v−(tj)} are the processed statistical collections of the positive v+(t) = v(t) ∨ 0 and the negative v−(t) = v(t) ∧ 0 parts of the process v(t) = ŝ(t), obtained on the basis of the criterion (7.4.46a) of the system (7.4.46); Y(t) and Ỹ(t) are the solution functions of the problem of minimization of metric between the observed statistical collections {v+(tj)}, {v−(tj)} and the optimization variables, i.e., the functions $\overset{\wedge}{Y}(t)$ and $\overset{\vee}{Y}(t)$, respectively; W(t) is the function F[∗, ∗] of uniting the results Y(t) and Ỹ(t) of minimizing the functions of the observed collections {v+(tj)} and {v−(tj)};

$$ Sm[s(t)] = \begin{cases}
V(t) = \underset{V^{\circ}(t)\in V;\ t,t_k\in \tilde{T}_{MF}}{\arg\min}\ \sum_{k=0}^{M_{MF}-1} |W(t_k) - V^{\circ}(t)|\,\Bigr|_{s(t)\equiv 0}; & \text{(a)}\\
\Delta\tilde{T}_{MF}:\ \delta_d(\Delta\tilde{T}_{MF}) = \delta_{d,sm}^{MF}; & \text{(b)}\\
M_{MF} = \underset{M'\in \mathbb{N}\,\cap\,[M^*,\infty[}{\arg\max}\,[\delta_f(M')]\,\Bigr|_{M^*:\ \delta_f(M^*)=\delta_{f,sm}^{MF}}, & \text{(c)}
\end{cases} \qquad (7.4.49) $$

where V(t) is the result of matched filtering, that is, the solution of the problem of minimizing the metric between the instantaneous values of the stochastic process W(t) and the optimization variable, i.e., the function V°(t); tk = t − (k/M_MF)∆T̃_MF, k = 0, 1, …, M_MF − 1, tk ∈ T̃_MF = ]t − ∆T̃_MF, t]; T̃_MF is the interval in which smoothing of the stochastic process W(t) is realized; M ∈ ℕ, ℕ is the set of natural numbers; M_MF is the number of the samples of the stochastic process W(t) used during smoothing in the interval T̃_MF; δd(∆T̃_MF) and δf(M_MF) are the relative dynamic and fluctuation errors of smoothing, as dependences on the quantity ∆T̃_MF of the smoothing interval T̃_MF and the number of the samples M_MF, respectively; δ_{d,sm}^{MF} and δ_{f,sm}^{MF} are the given quantities of the relative dynamic and fluctuation errors of smoothing, respectively.
We now explain the optimality criteria and the single relationships involved in the systems (7.4.48) and (7.4.49) determining the successive processing stages IF[s(t)], Sm[s(t)] of the general matched filtering algorithm MF[s(t)] of the useful signal (7.4.47).
Equations (7.4.48a) and (7.4.48b) of the system (7.4.48) define the criteria of minimum of the metric between the statistical sets of the observations {v+(tj)} and {v−(tj)} and the results of primary processing Y(t) and Ỹ(t), respectively. The functions of metrics $|\bigwedge_{j=0}^{J-1}[v_+(t_j) - \overset{\wedge}{Y}(t)]|$ and $|\bigvee_{j=0}^{J-1}[v_-(t_j) - \overset{\vee}{Y}(t)]|$ are chosen to provide the metric convergence and the convergence in probability to the useful signal s(t) of the sequences $Y(t) = \bigwedge_{j=0}^{J-1} v_+(t_j)$ and $\tilde{Y}(t) = \bigvee_{j=0}^{J-1} v_-(t_j)$ based on the interactions in the form (7.4.42a) and (7.4.42b). The relationship (7.4.48c) determines the criterion of the choice of the number of the samples J of the stochastic processes v+(t), v−(t) used during signal processing, on the basis of minimizing the norm $\int_{T_{obs}} |Y(t)|\,dt$. The criterion (7.4.48c) is considered under two constraint conditions: (1) interference (noise) is identically equal to zero: n(t) ≡ 0; (2) the norm $\int_{T_{obs}} |Y(t)|\,dt$ of the function Y(t) is not equal to zero: $\int_{T_{obs}} |Y(t)|\,dt \ne 0$. Equation (7.4.48e) defines the criterion of minimum of the metric $\int_{[t_1 - T_0,\, t_1]} |W(t) - s(t)|\,dt\,|_{n(t)\equiv 0}$ between the useful signal s(t) and the function W(t) in the interval [t1 − T0, t1]. This criterion establishes the kind of the function F[Y(t), Ỹ(t)] (7.4.48d) uniting the results Y(t) and Ỹ(t) of primary processing of the observed processes v+(t) and v−(t). The criterion (7.4.48e) is considered under the condition that interference (noise) is identically equal to zero: n(t) ≡ 0.
Equation (7.4.49a) of the system (7.4.49) determines the criterion of minimum of the metric $\sum_{k=0}^{M_{MF}-1} |W(t_k) - V^{\circ}(t)|$ between the instantaneous values of the process W(t) and the optimization variable V°(t) in the smoothing interval T̃_MF = ]t − ∆T̃_MF, t], requiring final processing of the signal W(t) by smoothing under the condition that the useful signal s(t) is identically equal to zero: s(t) ≡ 0. The relationship (7.4.49b) determines the rule of the choice of the quantity ∆T̃_MF of the smoothing interval T̃_MF, based on a given quantity of the relative dynamic error δ_{d,sm}^{MF} of smoothing. Equation (7.4.49c) determines the criterion of the choice of the number of the samples M_MF of the stochastic process W(t), based on a given quantity of the relative fluctuation error δ_{f,sm}^{MF} of its smoothing.
The problem of joint detection Det[s(t)] of the signal s(t) and estimation Est[t1] of the time of its ending t1 is formulated on the detection and estimation criteria involved in one system, which is a logical continuation of (7.4.43):

$$ Det[s(t)]/Est[t_1] = \begin{cases}
E_V\bigl(\hat{t}_1 - \tfrac{T_0}{2}\bigr) \underset{d_0}{\overset{d_1}{\gtrless}} l_0(F); & \text{(a)}\\
\hat{t}_1 = \underset{\varphi\in\Phi_1;\ t^{\circ}\in T_{obs}}{\arg\min}\ \mathrm{M}_{\varphi}\{(t_1 - t^{\circ})^2\}\,\bigr|_{n(t)\equiv 0}, & \text{(b)}
\end{cases} \qquad (7.4.50) $$

where E_V(t̂1 − T0/2) is the instantaneous value of the envelope E_V(t) of the result V(t) of matched filtering of the useful signal s(t) at the instant t = t̂1 − T0/2: $E_V(t) = \sqrt{V^2(t) + V_H^2(t)}$; V_H(t) = H[V(t)] is the Hilbert transform; d1 and d0 are the decisions concerning the true values of the unknown nonrandom parameter θ, θ ∈ {0, 1}; l0(F) is some threshold level as a dependence on a given conditional probability of false alarm F; t̂1 is the estimator of the time of signal ending t1; Mϕ{(t1 − t°)²} is the mean squared difference between the true value of the time of signal ending t1 and the optimization variable t°; Mϕ{∗} is the symbol of mathematical expectation with averaging over the initial phase ϕ of the signal; Φ1 is the domain of definition of the initial phase ϕ of the signal: Φ1 = [−π/2, π/2].
Equation (7.4.50a) of the system (7.4.50) determines the rule for making the decision d1 concerning the presence of the signal (if E_V(t̂1 − T0/2) > l0(F)) or the decision d0 concerning the absence of the signal (if E_V(t̂1 − T0/2) < l0(F)). The relationship (7.4.50b) determines the criterion of formation of the estimator t̂1 of the time of signal ending t1 on the basis of minimization of the mean squared difference Mϕ{(t1 − t°)²} between the true value of the time of signal ending t1 and the optimization variable t°, when averaging is realized over the initial phase ϕ of the signal taken in the interval Φ1 = [−π/2, π/2], and interference (noise) is identically equal to zero: n(t) ≡ 0.

The problem of estimation of the amplitude A and the initial phase ϕ of the useful signal s(t) is formulated and solved on the basis of two estimation criteria within one system that is a logical continuation of (7.4.44) through (7.4.46) and (7.4.48) through (7.4.50):

$$ Est[A, \varphi] = \begin{cases}
\hat{\varphi} = \underset{\varphi\in\Phi_1;\ t\in T_{obs}}{\arg\max}\ \int_{T_s} v(t)\cos(\omega_0 t + \varphi)\,dt; & \text{(a)}\\
\hat{A} = \underset{A\in\Delta_A;\ t\in T_{obs}}{\arg\min}\ \int_{T_s} [v(t) - A\cos(\omega_0 t + \varphi)]^2\,dt, & \text{(b)}
\end{cases} \qquad (7.4.51) $$

where Â and ϕ̂ are the estimators of the amplitude A and the initial phase ϕ of the useful signal s(t), respectively; v(t) = ŝ(t) is the result of useful signal extraction (the estimator ŝ(t) of the signal s(t)) obtained on the basis of the criterion (7.4.46a) of the system (7.4.46); Ts is the domain of definition of the signal s(t), Ts = [t0, t1]; t0 is an unknown time of arrival of the signal s(t); t1 is an unknown time of signal ending; Tobs is the interval of the observation of the signal, Tobs = [t′0, t′1]; t′0 < t0, t1 < t′1; Φ1 is the domain of definition of the initial phase of the signal: Φ1 = [−π/2, π/2]; ∆A is the domain of definition of the signal amplitude: ∆A = ]0, Amax].
The solution of the signal processing problems determined by the equation systems (7.4.43), (7.4.47), (7.4.50), (7.4.51) is realized in the following way.

The solution of the problem of the extraction Ext[s(t)] of the useful signal s(t) in the presence of interference (noise) n(t) is described in detail in Section 7.3. Here we consider the intermediate results that determine the structure-forming elements of the general processing algorithm.
The solutions of the optimization equations (7.4.44a) and (7.4.44b) of the system (7.4.44) are the values of the estimators y(t), ỹ(t) in the form of meet and join of the observation results {x(tj)} and {x̃(tj)}, respectively:

$$ y(t) = \bigwedge_{j=0}^{N-1} x(t_j) = \bigwedge_{j=0}^{N-1} x(t - j\Delta t); \qquad (7.4.52a) $$

$$ \tilde{y}(t) = \bigvee_{j=0}^{N-1} \tilde{x}(t_j) = \bigvee_{j=0}^{N-1} \tilde{x}(t - j\Delta t). \qquad (7.4.52b) $$

The coupling equation (7.4.44c) is obtained from the criteria (7.4.44a), (7.4.44b), (7.4.44d), and takes the form:

$$ w(t) = y_+(t) + \tilde{y}_-(t); \qquad (7.4.53) $$
$$ y_+(t) = y(t) \vee 0; \qquad (7.4.53a) $$
$$ \tilde{y}_-(t) = \tilde{y}(t) \wedge 0. \qquad (7.4.53b) $$

The solution u(t) of the relationships (7.4.45a), (7.4.45b), (7.4.45c) of the system (7.4.45), which define the criterion of the choice of the transformation of the process w(t), is the function L[w(t)] that determines the gain characteristic of the limiter:

$$ u(t) = L[w(t)] = [(w(t) \wedge a) \vee 0] + [(w(t) \vee (-a)) \wedge 0], \qquad (7.4.54) $$

where the parameter of the limiter a is chosen to be equal to $a = \sup \Delta_A = A_{\max}$, ∆A = ]0, Amax]; Amax is the maximum possible value of the useful signal amplitude.
The solution of the optimization equation (7.4.46a) of the system (7.4.46) is the value of the estimator v(t) in the form of the sample median med{∗} of the sample collection {u(tk)} of the stochastic process u(t):

$$ v(t) = \underset{t_k\in\tilde{T}}{\mathrm{med}}\{u(t_k)\}, \qquad (7.4.55) $$

where tk = t − (k/M)∆T̃, k = 0, 1, …, M − 1; tk ∈ T̃ = ]t − ∆T̃, t]; T̃ is the interval in which smoothing of the stochastic process u(t) is realized; ∆T̃ is the quantity of the smoothing interval T̃.
Summarizing the relationships (7.4.52) through (7.4.55), one can draw the con-
clusion that the estimator v (t) = ŝ(t) of the signal s(t) extracted in the presence
of interference (noise) n(t) is the function of smoothing of stochastic process u(t)
obtained by limiting the process w(t) that combines the results y (t) and ỹ (t) of
a proper primary processing of the observed stochastic processes x(t), x̃(t) in the
interval T ∗ = [t − (N − 1)∆t, t].
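As a discrete-time illustration of this chain, the following sketch implements (7.4.52) through (7.4.55) with numpy/scipy; the window sizes N and M and the limiter parameter a are illustrative placeholders, not the values prescribed by the criteria (7.4.44e) and (7.4.46b,c).

```python
# A sketch of the extraction algorithm Ext[s(t)] (7.4.52)-(7.4.55).
import numpy as np
from scipy.ndimage import minimum_filter1d, maximum_filter1d, median_filter

def extract(x, x_tilde, N=8, M=33, a=1.0):
    # (7.4.52a,b): primary filtering -- running meet (min) of x and
    # running join (max) of x_tilde over a window of N samples
    y = minimum_filter1d(x, size=N)
    y_tilde = maximum_filter1d(x_tilde, size=N)
    # (7.4.53): unite the positive part of y and the negative part of y_tilde
    w = np.maximum(y, 0.0) + np.minimum(y_tilde, 0.0)
    # (7.4.54): the limiter L[w] is equivalent to clipping w to [-a, a]
    u = np.clip(w, -a, a)
    # (7.4.55): intermediate smoothing -- sliding median over M samples
    return median_filter(u, size=M)
```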
The solution of the problem of matched filtering MF[s(t)] of the useful signal s(t) in the signal space L(+, ∨, ∧) with L-group properties in the presence of interference (noise) n(t) is described in detail in Subsection 7.4.2, so here, similarly, we consider only the intermediate results that determine the structure-forming elements of the general processing algorithm.
The solutions of the optimization equations (7.4.48a) and (7.4.48b) of the system (7.4.48) are the values of the estimators Y(t) and Ỹ(t) in the form of meet and join of the observation results {v+(tj)} and {v−(tj)}, respectively:

$$ Y(t) = \bigwedge_{j=0}^{J-1} v_+(t_j) = \bigwedge_{j=0}^{J-1} v_+(t - jT_0); \qquad (7.4.56a) $$

$$ \tilde{Y}(t) = \bigvee_{j=0}^{J-1} v_-(t_j) = \bigvee_{j=0}^{J-1} v_-(t - jT_0). \qquad (7.4.56b) $$

According to the criterion (7.4.48c), the optimal value of the number of the samples J of the stochastic processes v+(t), v−(t) used during primary processing (7.4.56a,b) is equal to the number of periods Ns of the signal:

J = Ns . (7.4.57)

The coupling equation (7.4.48d) satisfying the criterion (7.4.48e) takes the form:

W (t) = Y (t) + Ỹ (t); Y (t) ≥ 0, Ỹ (t) ≤ 0. (7.4.58)

The solution of the optimization equation (7.4.49a) of the system (7.4.49) is the value of the estimator V(t) in the form of the sample median med{∗} of the collection of the samples {W(tk)} of the stochastic process W(t):

$$ V(t) = \underset{t_k\in\tilde{T}_{MF}}{\mathrm{med}}\{W(t_k)\}, \qquad (7.4.59) $$

where tk = t − (k/M_MF)∆T̃_MF, k = 0, 1, …, M_MF − 1; tk ∈ T̃_MF = ]t − ∆T̃_MF, t]; T̃_MF is the interval in which smoothing of the stochastic process W(t) is realized; ∆T̃_MF is the quantity of the smoothing interval T̃_MF.
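A discrete-time sketch of (7.4.56) through (7.4.59) is given below, assuming the carrier period T0 spans an integer number n0 of samples; np.roll gives a crude cyclic boundary treatment, and the smoothing window M_mf is an illustrative placeholder.

```python
# A sketch of the matched filtering algorithm MF[s(t)] (7.4.56)-(7.4.59).
import numpy as np
from scipy.ndimage import median_filter

def matched_filter(v, n0, Ns, M_mf=33):
    v_pos = np.maximum(v, 0.0)   # v ∨ 0
    v_neg = np.minimum(v, 0.0)   # v ∧ 0
    # (7.4.56a,b): transversal filters with J = Ns taps spaced one period apart
    Y, Y_tilde = v_pos.copy(), v_neg.copy()
    for j in range(1, Ns):
        Y = np.minimum(Y, np.roll(v_pos, j * n0))              # meet over j
        Y_tilde = np.maximum(Y_tilde, np.roll(v_neg, j * n0))  # join over j
    W = Y + Y_tilde                                            # (7.4.58)
    return median_filter(W, size=M_mf)                         # (7.4.59)
```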
The sense of the obtained relationships (7.4.56) through (7.4.59) lies in the fact that the result of matched filtering V(t) of the useful signal s(t), extracted and detected in the presence of interference (noise) n(t), is the function of smoothing of the stochastic process W(t) that combines the results Y(t) and Ỹ(t) of signal processing of the positive v+(t) and the negative v−(t) parts of the observed stochastic process v(t) in the interval T∗ = [t − (Ns − 1)T0, t].
The solution of the problem of joint detection Det[s(t)] of the signal s(t) and estimation Est[t1] of its time of ending t1 is described in detail in Subsection 7.4.2, so here we note only the intermediate results which determine the structure-forming elements of the general processing algorithm.

The rule of making the decision d1 concerning the presence of the signal (if E_V(t̂1 − T0/2) > l0(F)) or the decision d0 concerning the absence of the signal (if E_V(t̂1 − T0/2) < l0(F)), determined by Equation (7.4.50a) of the system (7.4.50), supposes formation of the envelope E_V(t) of the estimator V(t) = ŝ(t) of the useful signal s(t) and comparison of the value of the envelope E_V(t̂1 − T0/2) with the threshold value l0(F) at the instant t = t̂1 − T0/2 determined by the estimator t̂1; as a result, the decision is made:

$$ E_V\Bigl(\hat{t}_1 - \frac{T_0}{2}\Bigr) \underset{d_0}{\overset{d_1}{\gtrless}} l_0(F). \qquad (7.4.60) $$

The relationship (7.4.50b) of the system (7.4.50) determines the criterion of forming
the estimator t̂1 of time of signal ending t1 on the basis of minimization of mean
squared difference Mϕ {(t1 −t◦ )2 } between true value of time of signal ending t1 and
optimization variable t◦ when averaging is realized over the initial phase ϕ of the
signal taken in the interval Φ1 = [−π/2, π/2], and interference (noise) is identically
equal to zero: n(t) ≡ 0.
Generally, as shown in Subsection 7.4.2, the solution of the optimization equality (7.4.50b) of the system (7.4.50) is determined by the identity:

$$ \hat{t}_1 = \begin{cases}
\hat{t}_- + (T_0/2) + (T_0\hat{\varphi}/2\pi), & \varphi \in \Phi_1 = [-\pi/2, \pi/2];\\
\hat{t}_+ + (T_0/2) - (T_0\hat{\varphi}/2\pi), & \varphi \in \Phi_2 = [\pi/2, 3\pi/2],
\end{cases} \qquad (7.4.61) $$

where $\hat{t}_\pm = \bigl(\int_{T_{obs}} t V_\pm(t)\,dt\bigr) / \bigl(\int_{T_{obs}} V_\pm(t)\,dt\bigr)$ is the estimator of the barycentric coordinate of the positive V+(t) or the negative V−(t) part of the smoothed stochastic process V(t), respectively; V+(t) = V(t) ∨ 0, V−(t) = V(t) ∧ 0; Tobs is the interval of the observation of the signal, Tobs = [t′0, t′1]; t′0 < t0, t1 < t′1; ϕ̂ = arcsin[2(t̂+ − t̂−)/T0] is the estimator of the unknown nonrandom initial phase ϕ of the useful signal s(t).

However, as mentioned above, the initial phase is determined in the interval Φ1 = [−π/2, π/2], which, according to (7.4.61), supposes that the estimator t̂1 takes the form:

$$ \hat{t}_1 = \hat{t}_- + (T_0/2) + (T_0\hat{\varphi}/2\pi), \quad \varphi \in \Phi_1 = [-\pi/2, \pi/2]. \qquad (7.4.61a) $$
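A sketch of (7.4.61a), assuming the matched-filter output V is sampled on a uniform time grid t; note that the sample spacing cancels in the barycentric ratios:

```python
# A sketch of the time-of-ending estimator (7.4.61a).
import numpy as np

def estimate_t1(t, V, T0):
    V_pos, V_neg = np.maximum(V, 0.0), np.minimum(V, 0.0)
    # barycentric coordinates of the positive / negative parts of V(t)
    t_plus = np.sum(t * V_pos) / np.sum(V_pos)
    t_minus = np.sum(t * V_neg) / np.sum(V_neg)
    phi_hat = np.arcsin(2.0 * (t_plus - t_minus) / T0)  # initial-phase estimator
    return t_minus + T0 / 2.0 + T0 * phi_hat / (2.0 * np.pi)
```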
The sense of the obtained relationships (7.4.61a) and (7.4.60) lies in the fact that the estimator t̂1 of the time of signal ending and the envelope E_V(t) are formed on the basis of proper processing of the result of matched filtering V(t) of the useful signal s(t) extracted and detected in the presence of interference (noise) n(t). Signal detection is fixed on the basis of comparing the instantaneous value of the envelope E_V(t) with a threshold value at the instant t = t̂1 − T0/2 determined by the estimator t̂1.
Consider, finally, the problem of the estimation of unknown nonrandom ampli-
tude and initial phase of the signal stated, for instance, in [149], [155].
Equation (7.4.51a) of the system (7.4.51) implies that the estimator ϕ̂ of the initial phase ϕ is found by maximizing, with respect to ϕ, the expression:

$$ Q(\varphi) = \int_{T_s} v(t)\cos(\omega_0 t + \varphi)\,dt \to \underset{\varphi\in\Phi_1}{\max}, \qquad (7.4.62) $$

where v (t) is the result of extraction Ext[s(t)] of useful signal s(t) in the presence
of interference (noise), whose algorithm is described by the system (7.4.43); Ts is a
domain of definition of the signal s(t).
Expanding the cosine of the sum, we compute the derivative of the function Q(ϕ) with respect to ϕ and set it to zero to determine the extremum:

$$ \frac{dQ(\hat{\varphi})}{d\hat{\varphi}} = -\sin\hat{\varphi} \int_{T_s} v(t)\cos(\omega_0 t)\,dt - \cos\hat{\varphi} \int_{T_s} v(t)\sin(\omega_0 t)\,dt = 0. \qquad (7.4.63) $$
Equation (7.4.63) has a unique solution that is the maximum of the function Q(ϕ):

$$ \hat{\varphi} = -\mathrm{arctg}(v_s/v_c); \qquad (7.4.64) $$

$$ v_s = \int_{T_s} v(t)\sin(\omega_0 t)\,dt; \qquad (7.4.64a) $$

$$ v_c = \int_{T_s} v(t)\cos(\omega_0 t)\,dt. \qquad (7.4.64b) $$

Equation (7.4.51b) of the system (7.4.51) implies that the estimator Â of the amplitude A can be found by minimizing, with respect to the variable A, the expression:

$$ Q(A) = \int_{T_s} [v(t) - A\cos(\omega_0 t + \varphi)]^2\,dt \to \underset{A\in\Delta_A}{\min}. \qquad (7.4.65) $$

We find the derivative of the function Q(A) with respect to A, setting it to zero to find the extremum:

$$ \frac{dQ(\hat{A})}{d\hat{A}} = -2\int_{T_s} v(t)\cos(\omega_0 t + \varphi)\,dt + 2\hat{A}\int_{T_s} \cos^2(\omega_0 t + \varphi)\,dt = 0. \qquad (7.4.66) $$

Equation (7.4.66) has a unique solution which determines the minimum of the function Q(A):

$$ \hat{A} = \frac{\int_{T_s} v(t)\cos(\omega_0 t + \varphi)\,dt}{\int_{T_s} \cos^2(\omega_0 t + \varphi)\,dt} = 2Q(\varphi)/T, \qquad (7.4.67) $$

where Q(ϕ) is the function determined by the relationship (7.4.62); T is the known
duration of the signal.
It is easy to make sure that the function Q(ϕ) can be represented in the form:

$$ Q(\varphi) = \sqrt{v_c^2 + v_s^2}, \qquad (7.4.68) $$

where vs and vc are the quantities determined by the integrals (7.4.64a) and (7.4.64b), respectively.

Taking into account (7.4.68), we write the final expression for the estimator Â of the amplitude A of the signal:

$$ \hat{A} = 2\sqrt{v_c^2 + v_s^2}/T. \qquad (7.4.69) $$

Thus, the solution of the problem of estimation of the unknown nonrandom amplitude and initial phase of a harmonic signal is described by the relationships (7.4.64) and (7.4.69). These estimators coincide with the estimators obtained by the maximum likelihood method; see, for instance, [155, (3.3.32)], [149, (6.10)].
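Numerically, (7.4.64) and (7.4.69) are the familiar quadrature estimators; a sketch, assuming v is the extracted signal sampled on a uniform grid t over Ts:

```python
# A sketch of the amplitude and initial-phase estimators (7.4.64), (7.4.69).
import numpy as np

def estimate_amplitude_phase(t, v, f0, T):
    w0 = 2.0 * np.pi * f0
    dt = t[1] - t[0]                              # uniform sample spacing
    vs = np.sum(v * np.sin(w0 * t)) * dt          # (7.4.64a)
    vc = np.sum(v * np.cos(w0 * t)) * dt          # (7.4.64b)
    phi_hat = -np.arctan2(vs, vc)                 # (7.4.64)
    A_hat = 2.0 * np.sqrt(vc**2 + vs**2) / T      # (7.4.69)
    return A_hat, phi_hat
```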

The block diagram of the processing unit, according to the results of solving the optimization equations involved in the systems (7.4.43), (7.4.47), (7.4.50), (7.4.51), is described by the relationships (7.4.52) through (7.4.60), (7.4.61a), (7.4.64), and (7.4.69) and includes: a signal extraction unit (SEU), a matched filtering unit (MFU), a signal detection unit (SDU), and an amplitude and initial phase estimator formation unit (EFU) (see Fig. 7.4.10).

The SEU realizes signal processing according to the relationships (7.4.52) through (7.4.55) and includes two processing channels, each containing a transversal filter realizing primary filtering of the observed stochastic processes x(t) and x̃(t); the units of formation of the positive y+(t) and the negative ỹ−(t) parts of the processes y(t) and ỹ(t), respectively; an adder summing the results of signal processing in the two processing channels; a limiter; and a median filter (MF) realizing intermediate smoothing of the process u(t) (see Fig. 7.4.10).
Matched filtering unit realizes signal processing, according to the relationships
(7.4.56) through (7.4.59), and contains: two processing channels, each including the
units of formation of the positive v+ (t) and the negative v− (t) parts of the process
v (t); transversal filters realizing primary filtering of the observed stochastic process
v+ (t) and v− (t); an adder summing the results of signal processing Y (t), Ỹ (t) in
two channels; a median filter (MF) that smooths the process W (t) = Y (t) + Ỹ (t)
(see Fig. 7.4.10).
The SDU realizes signal processing, according to the relationships (7.4.60) and
(7.4.61a), and includes time of ending estimator formation unit (EFU), envelope
computation unit (ECU), and decision gate (DG).
The amplitude and initial phase estimator formation unit realizes signal pro-
cessing according to the relationships (7.4.64) and (7.4.69).
Transversal filters of the signal extraction unit in the two processing channels realize primary filtering PF[s(t)]: $y(t) = \bigwedge_{j=0}^{N-1} x(t - j\Delta t)$, $\tilde{y}(t) = \bigvee_{j=0}^{N-1} \tilde{x}(t - j\Delta t)$ of the stochastic processes x(t) and x̃(t), according to the equations (7.4.52a,b), fulfilling the criteria (7.4.44a) and (7.4.44b) of the system (7.4.44). The units of formation of the positive y+(t) and the negative ỹ−(t) parts of the processes y(t) and ỹ(t) in the two processing channels form the values of these functions according to the identities (7.4.53a) and (7.4.53b), respectively. The adder sums the results of signal processing in the two processing channels, according to the equality (7.4.53), providing fulfillment of the criteria (7.4.44c) and (7.4.44d) of the system (7.4.44).
The limiter L[w(t)] realizes intermediate processing IP [s(t)] by clipping the
signal w(t) in the output of the adder, according to the criteria (7.4.45a), (7.4.45b)
of the system (7.4.45) to exclude noise overshoots whose instantaneous values exceed
value a from further processing.
The median filter (MF) realizes intermediate smoothing ISm[s(t)] of w(t) =
y+ (t)+ ỹ− (t), according to the formula (7.4.55), providing fulfillment of the criterion
(7.4.46a) of the system (7.4.46).
In a matched filtering unit (MFU), the units of formation of the positive v+ (t)
and the negative v− (t) parts of the process v (t) form the values of these functions,
according to the identities v+ (t) = v (t) ∨ 0 and v− (t) = v (t) ∧ 0, respectively.
FIGURE 7.4.10 Block diagram of processing unit that realizes harmonic signal detection with joint estimation of amplitude, initial phase, and time of signal arrival (ending)

Transversal filters of the matched filtering unit in the two processing channels realize intermediate filtering IF[s(t)]: $Y(t) = \bigwedge_{j=0}^{N_s-1} v_+(t - jT_0)$ and $\tilde{Y}(t) = \bigvee_{j=0}^{N_s-1} v_-(t - jT_0)$ of the observed stochastic processes v+(t) and v−(t), according to Equations (7.4.56a,b), providing fulfillment of the criteria (7.4.48a) and (7.4.48b) of the system (7.4.48).
The adder sums the results of signal processing in the two processing channels, according to the equality (7.4.58), providing fulfillment of the criteria (7.4.48d) and (7.4.48e) of the system (7.4.48).
The median filter (MF) realizes smoothing Sm[s(t)] of the process W (t) =
Y (t) + Ỹ (t) according to the formula (7.4.59), providing fulfillment of the criterion
(7.4.49a) of the system (7.4.49).
In the signal detection unit (SDU), the envelope computation unit (ECU) forms the envelope E_V(t) of the signal V(t) in the output of the median filter (MF) of the matched filtering unit. The time of signal ending estimator formation unit (EFU t̂1) forms the estimator t̂1, according to Equation (7.4.61a), providing fulfillment of the criterion (7.4.50b) of the system (7.4.50). At the instant t = t̂1 − T0/2, the decision gate (DG) compares the instantaneous value of the envelope E_V(t) with the threshold value l0(F), and as a result, it makes the decision d1 concerning the presence of the signal (if E_V(t̂1 − T0/2) > l0(F)) or the decision d0 concerning the absence of the signal (if E_V(t̂1 − T0/2) < l0(F)), according to the rule (7.4.60) of the criterion (7.4.50a) of the system (7.4.50).

The amplitude and initial phase estimator formation units compute the estimators Â, ϕ̂ according to the formulas (7.4.69) and (7.4.64), respectively; the estimator ϕ̂ is used to form the time of signal ending estimator t̂1.
Figures 7.4.11 through 7.4.14 illustrate the results of statistical modeling of signal processing by the synthesized unit under the following conditions: the useful signal s(t) is harmonic with the number of periods Ns = 8 and the initial phase ϕ = π/3. The signal-to-noise ratio E/N0 is equal to E/N0 = 10^{-10}, where E is the energy of the signal and N0 is the power spectral density of the noise. The product T0 · fn,max = 64, where T0 is the period of the carrier of the signal s(t), and fn,max is the maximum frequency of the power spectral density of the noise n(t) in the form of quasi-white Gaussian noise.
Figure 7.4.11 illustrates the useful signal s(t) and realization w∗ (t) of the signal
w(t) in the output of the adder of the signal extraction unit (SEU). The noise
overshoots appear in the form of short pulses of considerable amplitude.

FIGURE 7.4.11 Useful signal s(t) (dotted line) and realization w∗(t) of signal w(t) in output of adder of signal extraction unit (SEU) (solid line)

FIGURE 7.4.12 Useful signal s(t) (dotted line) and realization v∗(t) of signal v(t) in output of median filter of signal extraction unit (SEU) (solid line)

Figure 7.4.12 illustrates the useful signal s(t) and realization v ∗ (t) of the sig-
nal v (t) in the output of median filter of the signal extraction unit (SEU). Noise
overshoots in comparison with the previous figure are removed by median filter.

Figure 7.4.13 illustrates the useful signal s(t) and the realization W∗(t) of the signal W(t) in the output of the adder of the matched filtering unit (MFU). Comparing the signal W∗(t) with the signal v∗(t), one can conclude that the remnants of the noise overshoots observed in the output of the median filter of the signal extraction unit (SEU) were removed during processing of the signal v(t) in the transversal filters of the matched filtering unit (MFU). The MFU compresses the harmonic signal s(t) in such a way that the duration of the signal W(t) in the input of the median filter of the MFU is equal to the period T0 of the harmonic signal s(t); thus, the useful signal is compressed by a factor of Ns, where Ns is the number of periods of the harmonic signal s(t).

FIGURE 7.4.13 Useful signal s(t) (dotted line) and realization W∗(t) of signal W(t) in output of adder of matched filtering unit (MFU) (solid line)

FIGURE 7.4.14 Useful signal s(t) (dotted line); realization V∗(t) of signal V(t) (solid line); realization EV∗(t) of its envelope (dashed line)

Figure 7.4.14 illustrates the useful signal s(t); the realization V∗(t) of the signal V(t) in the output of the median filter of the MFU; the realization EV∗(t) of the envelope EV(t) of the signal V(t); δ-pulses determining the time position of the estimators t̂± of the barycentric coordinates of the positive v+(t) or the negative v−(t) parts of the smoothed stochastic process v(t); and the δ-pulse determining the time position of the estimator t̂1 of the time of signal ending t1, according to the formula (7.4.61a). As can be seen from the figure, the leading edge of the realization V∗(t) of the signal V(t) is delayed with respect to the useful signal s(t) by the time (N − 1)/(2fn,max), where N is the number of the samples of the stochastic processes x(t) and x̃(t) used during primary processing in the transversal filters of the SEU, and fn,max is the maximum frequency of the power spectral density of the noise n(t) in the form of quasi-white Gaussian noise.
We can determine the quality indices of the estimation of the unknown nonrandom amplitude A and initial phase ϕ of the harmonic signal s(t). The errors of the estimators of these parameters are determined by dynamic and fluctuation components. Dynamic errors of the estimators Â and ϕ̂ of the amplitude A and the initial phase ϕ are caused by the method by which they are obtained, under the assumption that interference (noise) is absent in the input of the signal processing unit. Fluctuation errors of these estimators are caused by the residual noise in the input of the estimator formation unit.
We first find the dynamic error ∆dϕ̂ of the estimator of the initial phase ϕ̂ of the harmonic signal s(t), determined by the relationship (7.4.64):

$$ \Delta_d\hat{\varphi} = |\hat{\varphi} - \varphi|. \qquad (7.4.70) $$

The signal v(t) in the output of the median filter of the signal extraction unit, in the absence of interference (noise) in the input of the signal processing unit, can be represented in the form:

$$ v(t) = \begin{cases} s(t), & t \in T_{s+}^{-} \cup T_{s-}^{+};\\ s(t - \Delta T), & t \in T_{s+}^{+} \cup T_{s-}^{-}, \end{cases} \qquad (7.4.71) $$

where $T_s = T_{s+}^{+} \cup T_{s+}^{-} \cup T_{s-}^{-} \cup T_{s-}^{+}$ is the domain of definition of the signals v(t) and s(t); ∆T is the quantity of the interval of primary processing T∗: T∗ = [t − (N − 1)∆t, t], ∆T = (N − 1)∆t; ∆t is the sample interval providing independence of the interference (noise) samples {n(tj)}; $T_{s+}^{+}$, $T_{s+}^{-}$, $T_{s-}^{-}$, $T_{s-}^{+}$ are the domains with the following properties:

$$ \begin{cases}
T_{s+}^{+} = \{t : (0 \le v(t) < s(t))\ \&\ (s'(t - \Delta T) > 0)\};\\
T_{s+}^{-} = \{t : (0 \le v(t) = s(t))\ \&\ (s'(t) < 0)\};\\
T_{s-}^{-} = \{t : (s(t) < v(t) \le 0)\ \&\ (s'(t - \Delta T) < 0)\};\\
T_{s-}^{+} = \{t : (s(t) = v(t) \le 0)\ \&\ (s'(t) > 0)\}.
\end{cases} \qquad (7.4.72) $$

The smallness of the quantity ∆T of the interval of primary processing T∗ implies that the value of the signal v(t) in the output of the median filter of the signal extraction unit, equal to v(t) = s(t − ∆T) on the domain $t \in T_{s+}^{+} \cup T_{s-}^{-}$, can be approximately represented by two terms of the expansion in a Taylor series in the neighborhood of the point t:

$$ v(t) = s(t - \Delta T) \approx s(t) - \Delta T \cdot s'(t), \quad t \in T_{s+}^{+} \cup T_{s-}^{-}. \qquad (7.4.73) $$

The quantity vs determined by the integral (7.4.64a) can be approximately represented in the form of the following sum:

$$ v_s = \int_{T_s} v(t)\sin(\omega_0 t)\,dt \approx \int_{T_s} s(t)\sin(\omega_0 t)\,dt + Q_+ + Q_-, \qquad (7.4.74) $$

where $Q_+ = \Delta T \int_{T_{s+}^{+}} s'(t)\sin(\omega_0 t)\,dt$, $Q_- = \Delta T \int_{T_{s-}^{-}} s'(t)\sin(\omega_0 t)\,dt$.
Taking into account that the derivative s'(t) of the useful signal s(t) is equal to $s'(t) = -A\omega_0\sin(\omega_0 t + \varphi)$, the sum of the last two terms of the expansion (7.4.74) is equal to:

$$ Q_+ + Q_- = \frac{2\pi A \Delta T}{T_0}(0.25 T_0 - \Delta T)\cos\varphi. \qquad (7.4.75) $$

Substituting the value of the sum Q+ + Q− (7.4.75) into the relationship (7.4.74), we obtain the final approximating expression for the quantity vs:

$$ v_s \approx \int_{T_s} s(t)\sin(\omega_0 t)\,dt + \frac{2\pi A \Delta T}{T_0}(0.25 T_0 - \Delta T)\cos\varphi. \qquad (7.4.76) $$
Similarly, the quantity vc determined by the integral (7.4.64b) can be approximately represented in the form of the following sum:

$$ v_c = \int_{T_s} v(t)\cos(\omega_0 t)\,dt \approx \int_{T_s} s(t)\cos(\omega_0 t)\,dt + R_+ + R_-, \qquad (7.4.77) $$

where $R_+ = \Delta T \int_{T_{s+}^{+}} s'(t)\cos(\omega_0 t)\,dt$, $R_- = \Delta T \int_{T_{s-}^{-}} s'(t)\cos(\omega_0 t)\,dt$.

The sum of the last two terms of the expansion (7.4.77) is equal to:

$$ R_+ + R_- = \frac{2\pi A \Delta T}{T_0}(0.25 T_0 - \Delta T)\sin\varphi. \qquad (7.4.78) $$

Substituting the value of the sum R+ + R− (7.4.78) into the relationship (7.4.77), we obtain the final approximating expression for the quantity vc:

$$ v_c \approx \int_{T_s} s(t)\cos(\omega_0 t)\,dt + \frac{2\pi A \Delta T}{T_0}(0.25 T_0 - \Delta T)\sin\varphi. \qquad (7.4.79) $$

Taking into account the obtained approximations for the quantities vs and vc, (7.4.76) and (7.4.79), the approximating expression for the estimator ϕ̂ of the initial phase ϕ (7.4.64) takes the form:

$$ \hat{\varphi} \approx -\mathrm{arctg}\left(\frac{-Q\sin\varphi + q\cos\varphi}{Q\cos\varphi + q\sin\varphi}\right) = -\mathrm{arctg}\left(\frac{r\sin(\alpha - \varphi)}{r\cos(\alpha - \varphi)}\right) = \varphi - \alpha, \qquad (7.4.80) $$

where Q = AT/2; $q = \frac{2\pi A \Delta T}{T_0}(0.25 T_0 - \Delta T)$; $r = \sqrt{Q^2 + q^2}$; α = arctg(q/Q).

Substituting the approximate value of the estimator ϕ̂ of the initial phase ϕ (7.4.80) into the initial formula (7.4.70), we obtain the approximate value of the dynamic error ∆dϕ̂ of the estimator of the initial phase ϕ̂ of the harmonic signal s(t):

$$ \Delta_d\hat{\varphi} = \mathrm{arctg}\left[\frac{\pi\Delta T}{T}\left(1 - \frac{4\Delta T}{T_0}\right)\right], \quad \Delta T \ll T_0 < T, \qquad (7.4.81) $$

where T is the known signal duration, T = Ns·T0; Ns is the number of periods of the harmonic signal s(t), Ns = T·f0; f0 is the carrier frequency of the signal s(t); T0 is the period of the carrier: T0 = 1/f0.
We now find the relative dynamic error δdÂ of the estimator Â of the amplitude A of the useful signal s(t):

$$ \delta_d\hat{A} = \frac{|A - \hat{A}|}{A}. \qquad (7.4.82) $$

Taking into account the obtained approximations for the quantities vs (7.4.76) and vc (7.4.79), the approximating expression for the estimator Â of the amplitude A (7.4.69) takes the form:

$$ \hat{A} \approx 2\sqrt{(-Q\sin\varphi + q\cos\varphi)^2 + (Q\cos\varphi + q\sin\varphi)^2}/T = 2\sqrt{Q^2 + q^2}/T, \qquad (7.4.83) $$
where Q = AT/2; $q = \frac{2\pi A \Delta T}{T_0}(0.25 T_0 - \Delta T)$.

Substituting the approximate value of the estimator Â of the amplitude A (7.4.83) into the initial formula (7.4.82), we obtain the approximate value of the relative dynamic error δdÂ of the amplitude A of the harmonic signal s(t):

$$ \delta_d\hat{A} \approx \sqrt{1 + \left[\frac{\pi\Delta T}{T}\left(1 - \frac{4\Delta T}{T_0}\right)\right]^2} - 1, \quad \Delta T \ll T_0 < T. \qquad (7.4.84) $$

We now find the fluctuation errors of the estimators ϕ̂ and Â of the initial phase ϕ and the amplitude A of the harmonic signal s(t), which characterize the synthesized signal processing unit (Fig. 7.4.10). We use the theorem from [251] stating that the median estimator v(t) obtained by the median filter (MF) converges in distribution to a Gaussian random variable with zero mean.
As noted in Section 7.3, the variance Dvn (see formula (7.3.34)) of the noise component vn(t) of the process v(t) in the output of the median filter can be decreased to the quantity:

$$ D_{v_n} \le D_{v,\max} = a^2 \exp\left(-\frac{\Delta\tilde{T}^2 N^2 f_{n,\max}^2}{8\sqrt{\pi}}\right), \qquad (7.4.85) $$

where a is a parameter of the limiter L[w(t)]; ∆T̃ is the quantity of the smoothing interval T̃; N is the number of the samples of the interference (noise) n(t) that simultaneously take part in signal processing; fn,max is the maximum frequency of the power spectral density of the interference (noise) n(t).
The variances of fluctuation errors of the estimators ϕ̂ and  of the initial phase
ϕ and the amplitude A of the harmonic signal s(t) are determined on the basis of
the variances of the envelope and the phase of a vector with independent Gaussian
components vs and vc .
To determine the expectations and variances of the Gaussian components vs and vc, we use the relationships for the first moments of the distribution of the process in the output of a correlation device [115, (3.6.65)], from which, substituting the initial values, we obtain the quantities of the variances Ds and Dc and the expectations ms and mc of the components vs and vc, respectively:

$$ D_s = 0.5 A^2 \cos^2\varphi + D_{v,\max}, \quad D_c = 0.5 A^2 \sin^2\varphi + D_{v,\max}; \qquad (7.4.86a) $$

$$ m_s = A\sin\varphi/\sqrt{2}, \quad m_c = A\cos\varphi/\sqrt{2}. \qquad (7.4.86b) $$
Then the variance Dϕ̂ of the estimator ϕ̂ of the initial phase ϕ is determined by the formula [113, (3.93)], and the variance DÂ of the estimator of the amplitude Â is determined on the basis of the formula [113, (3.73)] for the moments of the envelope; in the case of strong interference (noise) they are, respectively, equal to:

$$ D_{\hat{\varphi}} \approx D_{v,\max}/A^2; \qquad (7.4.87a) $$

$$ D_{\hat{A}} \approx 4 D_{v,\max}\,[1 - D_{v,\max}/2A^2]/T^2, \qquad (7.4.87b) $$

where Dv,max is the maximum variance of the noise component vn(t) of the process v(t) in the output of the median filter of the signal extraction unit.
7.4.4 Features of Detection of Linear Frequency Modulated Signal with Unknown Nonrandom Amplitude and Initial Phase with Joint Estimation of Time of Signal Arrival (Ending) in Presence of Interference (Noise) in Metric Space with L-group Properties
In this subsection, we consider the features of the synthesis of an optimal algorithm of detection of a linear frequency modulated (LFM) signal with unknown nonrandom amplitude and initial phase, with joint estimation of the time of signal arrival (ending), in the presence of interference (noise) in metric space with L-group properties. Under synthesis of the optimal detection algorithm, the interference (noise) distribution is considered to be arbitrary.
Let the interaction between LFM signal s(t) and interference (noise) n(t) in met-
ric space L(+, ∨, ∧) with L-group properties be described by two binary operations
of join ∨ and meet ∧ in two receiving channels, respectively:

x(t) = θs(t) ∨ n(t); (7.4.88a)

x̃(t) = θs(t) ∧ n(t), (7.4.88b)


where θ is an unknown nonrandom parameter that can take only two values θ ∈
{0, 1}: θ = 0 (the signal is absent) or θ = 1 (the signal is present); t ∈ Ts , Ts is a
domain of definition of the signal s(t), Ts = [t0 , t1 ]; t0 is an unknown time of arrival
of the signal s(t); t1 is an unknown time of signal ending; T = t1 −t0 is known signal
duration; Ts ⊂ Tobs ; Tobs is the observation interval of the signal: Tobs = [t00 , t01 ];
t00 < t0 , t1 < t01 .
Let also the model of the received LFM signal s(t) be determined by the expression:

$$ s(t) = \begin{cases} A\cos([(\Delta\omega/T)t + \omega_0]t + \varphi), & t \in T_s;\\ 0, & t \notin T_s, \end{cases} \qquad (7.4.89) $$
where A is an unknown nonrandom amplitude of the useful signal s(t); ω0 = 2πf0 ; f0
is the known carrier frequency of the signal s(t); ∆ω is the known deviation of LFM
signal; ϕ is an unknown nonrandom initial phase of the useful signal, ϕ ∈ [−π, π ];
Ts is a domain of definition of the signal s(t), Ts = [t0 , t1 ]; t0 is an unknown time
of arrival of the signal s(t); t1 is an unknown time of signal ending; T = t1 − t0 is
known signal duration, T = Ns T0 ; Ns is integer number of periods of LFM signal
s(t), Ns ∈ N, N is the set of natural numbers; T0 is a period of signal carrier.
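A sampled version of the model (7.4.89) is easy to generate; a sketch with illustrative parameters (the time grid t and the arrival time t0 are assumptions of the illustration):

```python
# A sketch of the LFM signal model (7.4.89).
import numpy as np

def lfm_signal(t, A, f0, delta_omega, T, phi, t0=0.0):
    tau = t - t0                                   # time from signal arrival
    s = A * np.cos(((delta_omega / T) * tau + 2.0 * np.pi * f0) * tau + phi)
    s[(tau < 0.0) | (tau > T)] = 0.0               # zero outside Ts = [t0, t1]
    return s
```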
We suppose that two neighboring samples n(tj) and n(tj±1) of the interference (noise) n(t) are statistically independent if the time distance between them is greater than or equal to the interval:

$$ |t_{j\pm 1} - t_j| \ge T_{0,\min}, \qquad (7.4.90) $$

where T0,min is the minimal period of the oscillation of the LFM signal s(t).
The instantaneous values (the samples) of the signal {s(tj )} and interference
(noise) {n(tj )} are the elements of signal space: s(tj ), n(tj ) ∈ L(+, ∨, ∧). The sam-
ples s(tj ) and n(tj ) of the signal s(t) and interference (noise) n(t) are taken on
the domain of definition Ts of the signal s(t): tj ∈ Ts over the variable interval
Tj± ≥ T0,min that provides the independence of interference (noise) samples {n(tj )}.
Taking into account the aforementioned considerations, the observation equations in the two processing channels (7.4.88a) and (7.4.88b) take the form:

$$ x(t_j) = \theta s(t_j) \vee n(t_j); \qquad (7.4.91a) $$

$$ \tilde{x}(t_j) = \theta s(t_j) \wedge n(t_j), \qquad (7.4.91b) $$

where tj = t − Tj±, j = 0, 1, …, J − 1, tj ∈ Ts ⊂ Tobs; Ts = [t0, t1], Tobs = [t′0, t′1]; t′0 < t0, t1 < t′1; J ∈ ℕ, ℕ is the set of natural numbers; the interval Tj± may change in duration depending on tj.
In this subsection, as in Subsection 7.4.2, the problem of joint detection of the useful LFM signal s(t) and estimation of its time of ending t1 is formulated and solved on the basis of matched filtering of the signal s(t); then the joint detection of the signal s(t) and estimation of the time of its ending t1 are performed.
The matched filtering problem M F [s(t)] in the signal space L(+, ∨, ∧) with
L-group properties is solved via step-by-step processing of statistical collec-
tions {x(tj )} and {x̃(tj )} determined by the observation equations (7.4.91a) and
(7.4.91b), as shown in Subsection 7.4.2 for harmonic signal, with the only distinction
that the features of time structure of LFM signal are taken into account:
$$ MF[s(t)] = \begin{cases} PF[s(t)]; & \text{(a)}\\ IP[s(t)]; & \text{(b)}\\ Sm[s(t)], & \text{(c)} \end{cases} \qquad (7.4.92) $$

where PF[s(t)] is primary filtering, IP[s(t)] is intermediate processing, and Sm[s(t)] is smoothing; together they form the successive processing stages of the general algorithm of matched filtering MF[s(t)] of the useful LFM signal (7.4.92).
The optimality criteria defining every single processing stage PF[s(t)], IP[s(t)], Sm[s(t)] are involved in single systems:

$$ PF[s(t)] = \begin{cases}
y(t) = \underset{\overset{\wedge}{y}(t)\in Y;\ t,t_j\in T_{obs}}{\arg\min}\ \bigl|\,\bigwedge_{j=0}^{J-1}[x(t_j) - \overset{\wedge}{y}(t)]\,\bigr|; & \text{(a)}\\
\tilde{y}(t) = \underset{\overset{\vee}{y}(t)\in \tilde{Y};\ t,t_j\in T_{obs}}{\arg\min}\ \bigl|\,\bigvee_{j=0}^{J-1}[\tilde{x}(t_j) - \overset{\vee}{y}(t)]\,\bigr|; & \text{(b)}\\
J = \underset{y(t)\in Y}{\arg\min}\Bigl[\int_{T_{obs}} |y(t)|\,dt\Bigr]\Bigr|_{n(t)\equiv 0}, \quad \int_{T_{obs}} |y(t)|\,dt \ne 0; & \text{(c)}\\
w(t) = F[y(t), \tilde{y}(t)]; & \text{(d)}\\
\int_{[t_1 - T_{0,\min},\, t_1]} |w(t) - s(t)|\,dt\,\Bigr|_{n(t)\equiv 0} \to \underset{w(t)\in W}{\min}, & \text{(e)}
\end{cases} \qquad (7.4.93) $$

where y(t) and ỹ(t) are the solution functions of the problem of minimization of metrics between the observed statistical collections {x(tj)}, {x̃(tj)} and the optimization variables, i.e., the functions $\overset{\wedge}{y}(t)$ and $\overset{\vee}{y}(t)$, respectively; w(t) is the function F[∗, ∗] of uniting the results y(t) and ỹ(t) of minimization of the functions of the observed collections {x(tj)} and {x̃(tj)}; Tobs is the observation interval of the signal; J is the number of the samples of the stochastic processes x(t) and x̃(t) used under signal processing, J ∈ ℕ; ℕ is the set of natural numbers;
$$ IP[s(t)] = \begin{cases}
\mathrm{M}\{u^2(t)\}\,\bigr|_{s(t)\equiv 0} \to \min; & \text{(a)}\\
\mathrm{M}\{[u(t) - s(t)]^2\}\,\bigr|_{n(t)\equiv 0} = \varepsilon_L; & \text{(b)}\\
u(t) = L[w(t)], & \text{(c)}
\end{cases} \qquad (7.4.94) $$

where M{∗} is the symbol of mathematical expectation; L[w(t)] is a functional transformation of the process w(t) into the process u(t); ε_L is a constant which is some function of a power of the signal s(t);
$$ Sm[s(t)] = \begin{cases}
v(t) = \underset{v^{\circ}(t)\in V;\ t,t_k\in \tilde{T}}{\arg\min}\ \sum_{k=0}^{M-1} |u(t_k) - v^{\circ}(t)|; & \text{(a)}\\
\Delta\tilde{T}:\ \delta_d(\Delta\tilde{T}) = \delta_{d,sm}; & \text{(b)}\\
M = \underset{M'\in \mathbb{N}\,\cap\,[M^*,\infty[}{\arg\max}\,[\delta_f(M')]\,\bigr|_{M^*:\ \delta_f(M^*)=\delta_{f,sm}}, & \text{(c)}
\end{cases} \qquad (7.4.95) $$

where v(t) = ŝ(t) is the result of filtering (the estimator ŝ(t) of the signal s(t)), that is, the solution of the problem of minimization of the metric between the instantaneous values of the stochastic process u(t) and the optimization variable, i.e., the function v°(t); tk = t − (k/M)∆T̃, k = 0, 1, …, M − 1, tk ∈ T̃ = ]t − ∆T̃, t]; T̃ is the interval in which smoothing of the stochastic process u(t) is realized; M ∈ ℕ, ℕ is the set of natural numbers; M is the number of the samples of the stochastic process u(t) used during smoothing in the interval T̃; δd(∆T̃), δf(M) are the relative dynamic and fluctuation errors of smoothing, as dependences on the quantity ∆T̃ of the smoothing interval T̃ and the number of the samples M, respectively; δd,sm, δf,sm are given quantities of the relative dynamic and fluctuation errors of smoothing, respectively.
The problem of joint detection $Det[s(t)]$ of the LFM signal $s(t)$ and estimation $Est[t_1]$ of the time of its ending $t_1$ is formulated on the basis of the detection and estimation criteria involved in the same system, which is a logical continuation of the system (7.4.92):

$$Det[s(t)]/Est[t_1] = \begin{cases}
E_v\Bigl(\hat{t}_1-\dfrac{T_{0,min}}{2}\Bigr)\ \underset{d_0}{\overset{d_1}{\gtrless}}\ l_0(F); & (a)\\[6pt]
\hat{t}_1 = \arg\min\limits_{\varphi\in\Phi_1\vee\Phi_2;\ t^{\circ}\in T_{obs}} \mathsf{M}_{\varphi}\{(t_1-t^{\circ})^2\}\Bigr|_{n(t)\equiv 0}, & (b)
\end{cases} \tag{7.4.96}$$

where $E_v(\hat{t}_1-\frac{T_{0,min}}{2})$ is the instantaneous value of the envelope $E_v(t)$ of the estimator $v(t)=\hat{s}(t)$ of the useful signal $s(t)$ at the instant $t=\hat{t}_1-\frac{T_{0,min}}{2}$: $E_v(t)=\sqrt{v^2(t)+v_H^2(t)}$, $v_H(t)=H[v(t)]$ is the Hilbert transform; $d_1$ and $d_0$ are the decisions made about the true values of an unknown nonrandom parameter $\theta$, $\theta\in\{0,1\}$; $l_0(F)$ is some threshold level dependent on a given conditional probability of false alarm $F$; $\hat{t}_1$ is the estimator of the time of signal ending $t_1$; $T_{0,min}$ is the minimal period of the oscillation of the LFM signal $s(t)$; $\mathsf{M}_{\varphi}\{(t_1-t^{\circ})^2\}$ is the mean squared difference between the true value of the time of signal ending $t_1$ and the optimization variable $t^{\circ}$; $\mathsf{M}_{\varphi}\{*\}$ is the symbol of mathematical expectation with averaging over the initial phase $\varphi$ of the signal; $\Phi_1$ and $\Phi_2$ are the possible domains of definition of the initial phase $\varphi$ of the signal: $\Phi_1=[-\pi/2,\pi/2]$ and $\Phi_2=[\pi/2,3\pi/2]$.
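A minimal discrete-time sketch of the envelope computation $E_v(t)=\sqrt{v^2(t)+v_H^2(t)}$, assuming a uniformly sampled estimate `v[n]` and using the analytic signal returned by `scipy.signal.hilbert`, whose magnitude is exactly this envelope:

```python
import numpy as np
from scipy.signal import hilbert

def envelope(v):
    """Envelope E_v = sqrt(v^2 + v_H^2): hilbert() returns the analytic
    signal v + j*v_H, so its magnitude is the envelope."""
    return np.abs(hilbert(v))

# Example: the envelope of an amplitude-modulated tone recovers the window.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
v = np.sin(np.pi * t) * np.cos(2.0 * np.pi * 50.0 * t)
Ev = envelope(v)   # close to |sin(pi*t)| away from the interval edges
```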
We now explain the optimality criteria and some relationships appearing in
the systems (7.4.93), (7.4.94), (7.4.95) that define the successive stages of signal
processing P F [s(t)], IP [s(t)], Sm[s(t)] of the general algorithm of matched filtering
M F [s(t)] of the useful signal s(t) (7.4.92).
Equations (7.4.93a) and (7.4.93b) of the system (7.4.93) determine the criteria
of minimum of metrics between statistical sets of the observations {x(tj )}, {x̃(tj )}
and the results of primary processing y (t), ỹ (t), respectively.
The metric functions $\bigl|\bigwedge_{j=0}^{J-1}[x(t_j)-\grave{y}(t)]\bigr|$ and $\bigl|\bigvee_{j=0}^{J-1}[\tilde{x}(t_j)-\acute{y}(t)]\bigr|$ are chosen to provide the metric convergence and the convergence in probability to the useful signal $s(t)$ of the sequences $y(t)=\bigwedge_{j=0}^{J-1}x(t_j)$, $\tilde{y}(t)=\bigvee_{j=0}^{J-1}\tilde{x}(t_j)$ based on the interactions of the kind (7.4.91a), (7.4.91b).
The relationship (7.4.93c) determines the criterion of the choice of the number of samples $J$ of the stochastic processes $x(t)$, $\tilde{x}(t)$ used during signal processing, on the basis of minimization of the norm $\int_{T_{obs}}|y(t)|\,dt$. The criterion (7.4.93c) is considered under two constraint conditions: (1) interference (noise) is identically equal to zero, $n(t)\equiv 0$; (2) the norm of the function $y(t)$ is not equal to zero, $\int_{T_{obs}}|y(t)|\,dt\neq 0$.

Equation (7.4.93e) determines the criterion of minimum of the metric $\int_{[t_1-T_{0,min},\,t_1]}|w(t)-s(t)|\,dt\,\bigr|_{n(t)\equiv 0}$ between the useful signal $s(t)$ and the function $w(t)$ in the interval $[t_1-T_{0,min},\,t_1]$. This criterion establishes the kind of the function $F[y(t),\tilde{y}(t)]$ (7.4.93d) uniting the results $y(t)$ and $\tilde{y}(t)$ of primary processing of the observed stochastic processes $x(t)$ and $\tilde{x}(t)$. The criterion (7.4.93e) is considered under the condition that interference (noise) is identically equal to zero: $n(t)\equiv 0$.
Equations (7.4.94a), (7.4.94b), (7.4.94c) of the system (7.4.94) determine the
criterion of the choice of functional transformation L[w(t)]. The equation (7.4.94a)
determines the criterion of minimum of the second moment of the process u(t) in
the absence of the LFM signal s(t) in the input of the signal processing unit. The
equation (7.4.94b) determines the quantity of the second moment of the difference
between the signals u(t) and s(t) in the absence of interference (noise) n(t) in the
input of the signal processing unit.
Equation (7.4.95a) of the system (7.4.95) determines the criterion of minimum of the metric $\sum_{k=0}^{M-1}|u(t_k)-v^{\circ}(t)|$ between the instantaneous values of the process $u(t)$ and the optimization variable $v^{\circ}(t)$ in the smoothing interval $\tilde{T}=]t-\Delta\tilde{T},t]$, requiring the final processing of the signal $u(t)$ in the form of smoothing under the condition that the useful signal is identically equal to zero: $s(t)\equiv 0$. The relationship (7.4.95b)
determines the rule of the choice of the quantity ∆T̃ of the smoothing interval T̃ ,
based on providing a given quantity of relative dynamic error δd,sm of smoothing.
Equation (7.4.95c) determines the criterion of the choice of a number of the
samples M of stochastic process u(t), based on providing a given quantity of relative
fluctuation error δf,sm of its smoothing.
Equation (7.4.96a) of the system (7.4.96) determines the rule of making the decision $d_1$ concerning the presence of the signal (if $E_v(\hat{t}_1-\frac{T_{0,min}}{2})>l_0(F)$) or the decision $d_0$ concerning the absence of the signal (if $E_v(\hat{t}_1-\frac{T_{0,min}}{2})<l_0(F)$).
The relationship (7.4.96b) determines the criterion of formation of the estima-
tor t̂1 of time of signal ending t1 on the basis of minimization of the mean squared
difference Mϕ {(t1 − t◦ )2 } between true value of time of signal ending t1 and op-
timization variable t◦ under the conditions that the averaging is realized over the
initial phase ϕ of the signal, taken in one of two intervals: Φ1 = [−π/2, π/2] or
Φ2 = [π/2, 3π/2], and interference (noise) is identically equal to zero: n(t) ≡ 0.
Solving the problem of matched filtering M F [s(t)] of the useful signal s(t) in sig-
nal space L(+, ∨, ∧) with L-group properties in the presence of interference (noise)
n(t) is described in detail in Subsection 7.4.2, so we consider intermediate results,
which determine the structure-forming elements of the general signal processing
algorithm, paying attention only to the features of LFM signal processing.
The solutions of optimization equations (7.4.93a), (7.4.93b) of the system
(7.4.93) are the values of the estimators y (t), ỹ (t) in the form of meet and join
of the observation results {x(tj )}, {x̃(tj )}, respectively:
$$y(t) = \bigwedge_{j=0}^{J-1}x(t_j) = \bigwedge_{j=0}^{J-1}x(t-T_j^+); \tag{7.4.97a}$$

$$\tilde{y}(t) = \bigvee_{j=0}^{J-1}\tilde{x}(t_j) = \bigvee_{j=0}^{J-1}\tilde{x}(t-T_j^-), \tag{7.4.97b}$$

where the variable intervals $T_j^{\pm}$ are determined by the following relationships:

$$T_j^+ = t_{m,J}^+ - t_{m,j}^+;\quad \{t_{m,j}^+\}:\ (s'(t_{m,j}^+)=0)\ \&\ (s''(t_{m,j}^+)<0);\quad t_{m,J}^+=\max_j\{t_{m,j}^+\},\ T_{j=0}^+\equiv 0; \tag{7.4.98a}$$

$$T_j^- = t_{m,J}^- - t_{m,j}^-;\quad \{t_{m,j}^-\}:\ (s'(t_{m,j}^-)=0)\ \&\ (s''(t_{m,j}^-)>0);\quad t_{m,J}^-=\max_j\{t_{m,j}^-\},\ T_{j=0}^-\equiv 0, \tag{7.4.98b}$$

where $\{t_{m,j}^+\}$, $\{t_{m,j}^-\}$ are the positions of the local maxima and minima of the LFM signal $s(t)$ on the time axis, respectively; $s'(t)$ and $s''(t)$ are the first and second derivatives of the useful signal $s(t)$ with respect to time.
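A discrete-time sketch of the primary processing (7.4.97a,b) with the lag structure (7.4.98a,b). The uniform sampling, the integer sample lags, the helper names (`extrema_lags`, `primary_filter`), and the boundary convention (ignoring samples shifted in from outside the record) are all assumptions of this illustration:

```python
import numpy as np

def extrema_lags(s_ref, maxima=True):
    """Sample lags per (7.4.98a,b): T_j = t_{m,J} - t_{m,j}, where t_{m,j}
    are the local maxima (or minima) of the reference LFM signal; T_0 = 0."""
    s = s_ref if maxima else -s_ref
    m = np.where((s[1:-1] > s[:-2]) & (s[1:-1] >= s[2:]))[0] + 1  # extrema indices
    return (m[-1] - m)[::-1].tolist()      # ascending list of lags, first entry 0

def primary_filter(x, x_tilde, lags_pos, lags_neg):
    """Meet/join over lag-shifted samples, per (7.4.97a,b):
    y = AND_j x(t - T_j^+),  y~ = OR_j x~(t - T_j^-).  Float arrays assumed;
    lags_*[0] == 0, so the unshifted term is already in the accumulator."""
    y, y_t = x.copy(), x_tilde.copy()
    for lag in lags_pos[1:]:
        xs = np.roll(x, lag); xs[:lag] = np.inf    # +inf: no constraint on the meet
        y = np.minimum(y, xs)
    for lag in lags_neg[1:]:
        xt = np.roll(x_tilde, lag); xt[:lag] = -np.inf
        y_t = np.maximum(y_t, xt)
    return y, y_t
```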
The condition $n(t)\equiv 0$ of the criterion (7.4.93c) of the system (7.4.93) implies the corresponding specifications of the observation equations (7.4.91a,b): $x(t_j)=s(t_j)\vee 0$ and $\tilde{x}(t_j)=s(t_j)\wedge 0$. So, according to the relationships (7.4.97a), (7.4.97b),
the identities hold:

$$y(t)\bigr|_{n(t)\equiv 0} = \bigwedge_{j=0}^{J-1}[s(t_j)\vee 0] = [s(t)\vee 0]\wedge[s(t-T_{J-1}^+)\vee 0]; \tag{7.4.99a}$$

$$\tilde{y}(t)\bigr|_{n(t)\equiv 0} = \bigvee_{j=0}^{J-1}[s(t_j)\wedge 0] = [s(t)\wedge 0]\vee[s(t-T_{J-1}^-)\wedge 0], \tag{7.4.99b}$$

where $t_j = t-T_j^{\pm}$, $j=0,1,\ldots,J-1$; $T_{j=0}^{\pm}\equiv 0$.
According to the criterion (7.4.93c) of the system (7.4.93), the optimal value of the number of samples $J$ of the stochastic processes $x(t)$, $\tilde{x}(t)$ used under primary processing (7.4.97a,b) is equal to an integer number of periods $N_s$ of the LFM signal:

$$J = N_s. \tag{7.4.100}$$

The final coupling equation can be written in the form:

$$w(t) = y_+(t) + \tilde{y}_-(t); \tag{7.4.101}$$
$$y_+(t) = y(t)\vee 0; \tag{7.4.101a}$$
$$\tilde{y}_-(t) = \tilde{y}(t)\wedge 0. \tag{7.4.101b}$$

Thus, the identity (7.4.101) determines the kind of the coupling equation (7.4.93d) obtained from joint fulfillment of the criteria (7.4.93a), (7.4.93b), and (7.4.93e) of the system (7.4.93).
The solution $u(t)$ of the relationships (7.4.94a), (7.4.94b), (7.4.94c) of the system (7.4.94), determining the criterion of the choice of the functional transformation of the process $w(t)$, is the function $L[w(t)]$ that determines the gain characteristic of the limiter:

$$u(t) = L[w(t)] = [(w(t)\wedge a)\vee 0] + [(w(t)\vee(-a))\wedge 0], \tag{7.4.102}$$

where $a$ is a parameter of the limiter chosen to be equal to $a=\sup_A \Delta A = A_{max}$, $\Delta A = ]0, A_{max}]$; $A_{max}$ is the maximum possible value of the amplitude of the useful signal.
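In discrete time, the coupling equation (7.4.101) and the limiter (7.4.102) reduce to elementwise lattice operations; the two bracketed terms of (7.4.102) together clip $w(t)$ to the interval $[-a, a]$. A minimal sketch, assuming NumPy arrays:

```python
import numpy as np

def couple(y, y_tilde):
    """(7.4.101): w = y_+ + y~_-, with y_+ = y OR 0 and y~_- = y~ AND 0."""
    return np.maximum(y, 0.0) + np.minimum(y_tilde, 0.0)

def limiter(w, a):
    """(7.4.102): u = [(w AND a) OR 0] + [(w OR -a) AND 0],
    which equals clip(w, -a, a) written in lattice operations."""
    return np.maximum(np.minimum(w, a), 0.0) + np.minimum(np.maximum(w, -a), 0.0)
```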
The solution of the optimization equation (7.4.95a) of the system (7.4.95) is the value of the estimator $v(t)$ in the form of a sample median $\mathrm{med}\{*\}$ of the collection of samples $\{u(t_k)\}$ of the stochastic process $u(t)$:

$$v(t) = \underset{t_k\in\tilde{T}}{\mathrm{med}}\{u(t_k)\}, \tag{7.4.103}$$

where $t_k = t-\frac{k}{M}\Delta\tilde{T}$, $k=0,1,\ldots,M-1$; $t_k\in\tilde{T}=]t-\Delta\tilde{T},t]$; $\tilde{T}$ is the interval in which smoothing of the stochastic process $u(t)$ is realized; $\Delta\tilde{T}$ is the quantity of the smoothing interval $\tilde{T}$.
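A direct discrete-time reading of (7.4.103), assuming $M$ uniformly spaced samples in the trailing window $]t-\Delta\tilde{T},t]$; the shortened windows near the record start are a boundary convention of this sketch:

```python
import numpy as np

def smooth_median(u, M):
    """Causal sliding sample median over the last M samples, per (7.4.103)."""
    v = np.empty_like(u)
    for n in range(u.size):
        v[n] = np.median(u[max(0, n - M + 1): n + 1])
    return v
```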
Solving the problem of joint detection $Det[s(t)]$ of the signal $s(t)$ and estimation $Est[t_1]$ of the time of its ending $t_1$ is described in Subsection 7.4.2. We consider only the intermediate results determining the structure-forming elements of the general signal processing algorithm.
The rule of making the decision $d_1$ concerning the presence of the signal (if $E_v(\hat{t}_1-\frac{T_{0,min}}{2})>l_0(F)$) or the decision $d_0$ concerning the absence of the signal (if $E_v(\hat{t}_1-\frac{T_{0,min}}{2})<l_0(F)$), determined by the equation (7.4.96a) of the system (7.4.96), includes (1) forming the envelope $E_v(t)$ of the estimator $v(t)=\hat{s}(t)$ of the useful signal $s(t)$, and (2) comparison of the value of the envelope $E_v(t)$ with a threshold value $l_0(F)$ at the instant $t=\hat{t}_1-\frac{T_{0,min}}{2}$ determined by the estimator $\hat{t}_1$. Thus, decision making is realized in the following form:

$$E_v\Bigl(\hat{t}_1-\frac{T_{0,min}}{2}\Bigr)\ \underset{d_0}{\overset{d_1}{\gtrless}}\ l_0(F). \tag{7.4.104}$$
The solution of the optimization equation (7.4.96b) of the system (7.4.96) is determined by the identity:

$$\hat{t}_1 = \begin{cases} \hat{t}_- + (T_{0,min}/2) + (T_{0,min}\hat{\varphi}/2\pi), & \varphi\in\Phi_1=[-\pi/2,\pi/2];\\ \hat{t}_+ + (T_{0,min}/2) - (T_{0,min}\hat{\varphi}/2\pi), & \varphi\in\Phi_2=[\pi/2,3\pi/2], \end{cases} \tag{7.4.105}$$

where $\hat{t}_{\pm} = \Bigl(\int_{T_{obs}} t\,v_{\pm}(t)\,dt\Bigr)\Big/\Bigl(\int_{T_{obs}} v_{\pm}(t)\,dt\Bigr)$ is the estimator of the barycentric coordinate of the positive $v_+(t)$ or the negative $v_-(t)$ part of the smoothed stochastic process $v(t)$, respectively; $v_+(t)=v(t)\vee 0$, $v_-(t)=v(t)\wedge 0$; $T_{obs}$ is the observation interval of the signal: $T_{obs}=[t_0', t_1']$; $t_0'<t_0$, $t_1<t_1'$; $\hat{\varphi}=\arcsin[2(\hat{t}_+-\hat{t}_-)/T_{0,min}]$ is the estimator of the unknown nonrandom initial phase $\varphi$ of the useful signal $s(t)$; $T_{0,min}$ is the minimal period of the oscillation of the LFM signal $s(t)$.

If the initial phase $\varphi$ of the signal can change from $-\pi$ to $\pi$, i.e., it is unknown beforehand to which interval ($\Phi_1=[-\pi/2,\pi/2]$ or $\Phi_2=[\pi/2,3\pi/2]$) from the relationships (7.4.96b) and (7.4.105) $\varphi$ belongs, we use the estimators $\hat{t}_1$ of the time of signal ending $t_1$ determined by the identities:

$$\hat{t}_1 = \max_{\hat{t}_{\pm}\in T_{obs}}[\hat{t}_-,\hat{t}_+] + (T_{0,min}/4), \quad \varphi\in[-\pi,\pi]; \tag{7.4.106a}$$

or:

$$\hat{t}_1 = \frac{1}{2}(\hat{t}_-+\hat{t}_+) + \frac{T_{0,min}}{2}, \quad \varphi\in[-\pi,\pi]. \tag{7.4.106b}$$
Thus, summarizing the relationships (7.4.97), (7.4.100) through (7.4.105), one
can conclude that the estimator t̂1 of time of signal ending and the envelope Ev (t)
are formed on the basis of the further processing of the estimator v (t) = ŝ(t) of the
LFM signal s(t) detected in the presence of interference (noise) n(t). The estimator
ŝ(t) is the smoothing function of the stochastic process u(t) obtained by limitation
of the process $w(t)$ that combines the results $y(t)$ and $\tilde{y}(t)$ of the corresponding primary processing of the observed stochastic processes $x(t)$ and $\tilde{x}(t)$ in the observation interval $T_{obs}$.
The block diagram of the signal processing unit, according to the relationships (7.4.97), (7.4.100) through (7.4.105), includes: two processing channels, each containing a transversal filter; the units of formation of the positive $y_+(t)$ and the negative $\tilde{y}_-(t)$ parts of the processes $y(t)$ and $\tilde{y}(t)$, respectively; the adder that sums the results of signal processing in the two channels; the limiter; the median filter (MF); the estimator formation unit (EFU), envelope computation unit (ECU), and decision gate (DG) (see Fig. 7.4.15).

FIGURE 7.4.15 Block diagram of processing unit that realizes LFM signal detection with
joint estimation of time of signal arrival (ending)

Transversal filters in the two processing channels realize primary filtering $PF[s(t)]$: $y(t)=\bigwedge_{j=0}^{N-1}x(t-jT_0)$ and $\tilde{y}(t)=\bigvee_{j=0}^{N-1}\tilde{x}(t-jT_0)$ of the observed stochastic processes $x(t)$ and $\tilde{x}(t)$, according to Equations (7.4.97a,b) and (7.4.100), providing fulfillment of the criteria (7.4.93a), (7.4.93b), and (7.4.93c) of the system (7.4.93). The units of formation of the positive $y_+(t)$ and the negative $\tilde{y}_-(t)$ parts of the processes $y(t)$ and $\tilde{y}(t)$ in the two processing channels form the values of these functions according to the identities (7.4.101a) and (7.4.101b), respectively. The adder unites the results of signal processing in the two channels according to the equality (7.4.101), providing fulfillment of the criteria (7.4.93d) and (7.4.93e) of the system (7.4.93).
The limiter $L[w(t)]$ realizes intermediate processing $IP[s(t)]$ by means of limiting the signal $w(t)$ in the output of the adder, according to the criteria (7.4.94a), (7.4.94b) of the system (7.4.94), to exclude from further processing the noise overshoots whose instantaneous values exceed the value $a$.

The median filter (MF) realizes smoothing $Sm[s(t)]$ of the process $u(t)=L[w(t)]$, $w(t)=y_+(t)+\tilde{y}_-(t)$, according to the formula (7.4.103), providing fulfillment of the criterion (7.4.95a) of the system (7.4.95). The envelope computation unit (ECU) forms the envelope $E_v(t)$ of the signal $v(t)$ in the output of the median filter (MF).
The estimator formation unit (EFU) forms the estimator $\hat{t}_1$ of the time of signal ending, according to Equation (7.4.105), providing fulfillment of the criterion (7.4.96b) of the system (7.4.96). At the instant $t=\hat{t}_1-\frac{T_{0,min}}{2}$, the decision gate (DG) compares the instantaneous value of the envelope $E_v(t)$ with the threshold value $l_0(F)$. The decision $d_1$ concerning the presence of the signal (if $E_v(\hat{t}_1-\frac{T_{0,min}}{2})>l_0(F)$) or the decision $d_0$ concerning the absence of the signal (if $E_v(\hat{t}_1-\frac{T_{0,min}}{2})<l_0(F)$) is made according to the rule (7.4.104) of the criterion (7.4.96a) of the system (7.4.96).
Figures 7.4.16 through 7.4.19 illustrate the results of statistical modeling of signal processing realized by the synthesized unit under the following conditions: the useful signal $s(t)$ is an LFM signal with an integer number of periods $N_s=10$, deviation $\Delta\omega=\omega_0/2$, where $\omega_0$ is the known carrier frequency of the signal $s(t)$, and initial phase $\varphi=-\pi/6$. The signal-to-noise ratio is $E/N_0=10^{-10}$, where $E$ is the signal energy and $N_0$ is the power spectral density of the noise. The product $T_{0,min}f_{n,max}=125$, where $T_{0,min}$ is the minimal period of the oscillation of the carrier of the LFM signal $s(t)$, and $f_{n,max}$ is the maximum frequency of the power spectral density of the noise $n(t)$ in the form of quasi-white Gaussian noise.
Figure 7.4.16 illustrates the useful signal $s(t)$; the realization $w^*(t)$ of the signal $w(t)$ in the output of the adder; and the realization $v^*(t)$ of the signal $v(t)$ in the output of the median filter.

FIGURE 7.4.16 Useful signal s(t) (dotted line) and realization w∗(t) of signal w(t) in output of adder (dashed line)

FIGURE 7.4.17 Useful signal s(t) (dotted line) and realization v∗(t) of signal v(t) in output of median filter (solid line)

Figure 7.4.17 illustrates the useful signal $s(t)$ and the realization $v^*(t)$ of the signal $v(t)$ in the output of the median filter. The matched filter provides the compression of the LFM signal $s(t)$ in such a way that the duration of the signal $v(t)$ in the output of the matched filter is equal to the minimal period of the oscillation $T_{0,min}$ of the LFM signal $s(t)$.
Figure 7.4.18 illustrates: the useful signal s(t); realization v ∗ (t) of the signal v (t)
in the output of median filter; δ-pulses determining time positions of the estimators
t̂± of barycentric coordinates of the positive v+ (t) and negative v− (t) parts of the
smoothed stochastic process; δ-pulse determining time position of the estimator t̂1
of time of signal ending t1 , according to the formula (7.4.105).
Figure 7.4.19 illustrates the useful signal s(t); realization Ev∗ (t) of the envelope
Ev (t) of the signal v (t) in the output of median filter; δ-pulses determining time
position of the estimators t̂± of barycentric coordinates of the positive v+ (t) or the
negative v− (t) parts of the smoothed stochastic process v (t); δ-pulse determining
time position of the estimator $\hat{t}_1$ of the time of signal ending $t_1$, according to the formula (7.4.105).

FIGURE 7.4.18 Useful signal s(t) (dotted line) and realization v∗(t) of signal v(t) in output of median filter (solid line)

FIGURE 7.4.19 Useful signal s(t) (dotted line) and realization Ev∗(t) of envelope Ev(t) of signal v(t) (solid line)
The results of the investigations of algorithms and units of signal detection in
signal space with lattice properties allow us to draw the following conclusions.

1. The proposed approach to the synthesis problem of deterministic and quasi-deterministic signal detection algorithms, with rejection of the use of prior data concerning the interference (noise) distribution, provides the invariance properties of the algorithms and units of signal detection with respect to the conditions of parametric and nonparametric prior uncertainty, confirmed by the obtained signal processing quality indices.
2. While solving the problem of deterministic signal detection in the presence of interference (noise) in signal space with lattice properties, no constraints are imposed on a lower bound of the signal-to-interference (signal-to-noise) ratio $E/N_0$ providing absolute values of the signal detection quality indices, i.e., the conditional probabilities of correct detection $D$ and false alarm $F$ equal to $D=1$, $F=0$ on an arbitrarily small ratio $E/N_0\neq 0$.
3. The existing distinctions between signal detection quality indices describ-
ing the detection in linear signal space LS (+) and signal space with lattice
properties L(∨, ∧) are elucidated by fundamental differences in the kinds
of interactions between the signal s(t) and interference (noise) n(t) in
these spaces: the additive s(t) + n(t) and the interaction in the form of
join s(t) ∨ n(t) and meet s(t) ∧ n(t), respectively, and, therefore, by the
distinctions of the properties of these signal spaces.
7.5 Classification of Deterministic Signals in Metric Space with Lattice Properties
The known variants of the statement of signal classification problem are formulated
mainly for the case of additive interaction (in terms of linear space): x(t) = si (t) +
n(t) between the signal si (t), si (t) ∈ S and interference (noise) n(t) [155], [159],
[166], [127], [158]. Nevertheless, the statement of this problem is admissible on
the basis of more general model of interaction: x(t) = Φ[si (t), n(t)], where Φ is
some deterministic function [159], [166]. This section has a twofold goal. First, it is
necessary to synthesize the algorithm and unit of classification of the deterministic
signals in the presence of interference (noise) in a signal space with lattice properties.
Second, it is necessary to describe characteristics and properties of a synthesized
unit and to compare them with known analogues that solve the problem of signal
classification in linear signal space.
The synthesis of optimal algorithm of deterministic signal classification is ful-
filled on the assumption of an arbitrary distribution of interference (noise) n(t). Sig-
nal processing theory usually assumes that probabilistic character of the observed
stochastic process causes any algorithm of signal classification in the presence of
interference (noise) to be characterized by nonzero probabilities of error decisions.
However, as shown below, this is not always true.
During signal classification, it is necessary to determine which one from m useful
signals in the set S = {si (t)}, i = 1, . . . , m is received in the input of a signal
classification unit at a given instant. Consider the model of the interaction between
the signal si (t) from the set of deterministic signals S = {si (t)}, i = 1, . . . , m and
interference (noise) n(t) in signal space with the properties of distributive lattice
L(∨, ∧) with binary operations of join a(t) ∨ b(t) and meet a(t) ∧ b(t), respectively:
a(t) ∨ b(t) = supL (a(t), b(t)), a(t) ∧ b(t) = inf L (a(t), b(t)); a(t), b(t) ∈ L(∨, ∧):

$$x(t) = s_i(t)\vee n(t),\quad t\in T_s,\ i=1,\ldots,m, \tag{7.5.1}$$

where $T_s=[t_0,t_0+T]$ is the domain of definition of the signal $s_i(t)$; $t_0$ is the known time of arrival of the signal $s_i(t)$; $T$ is the duration of the signal $s_i(t)$; $m\in\mathbb{N}$, $\mathbb{N}$ is the set of natural numbers.
Let the signals from the set $S=\{s_i(t)\}$, $i=1,\ldots,m$ be characterized by the same energy $E_i=\int_{t\in T_s}s_i^2(t)\,dt=E$ and by cross-correlation coefficients $r_{ik}=\int_{t\in T_s}s_i(t)s_k(t)\,dt\,/E$. Assume that interference (noise) $n(t)$ is characterized by arbitrary probabilistic-statistical properties.
To synthesize classification algorithms in linear signal space, i.e., when the re-
lation x(t) = si (t) + n(t) holds, signal processing theory uses the part of statistical
inference theory called statistical hypothesis testing. Within the signal classification
problem, any strategy of decision making in the literature supposes the likelihood
ratio to be computed and the likelihood function to be determined. Likelihood func-
tion is determined by the multivariate probability density function of interference
(noise) n(t). Thus, while solving the problem of signal classification in linear space,
i.e., when the interaction equation x(t) = si (t) + n(t) holds, i = 1, . . . , m, m ∈ N,
to determine the likelihood function, the methodical trick is used that supposes
a change of variable: n(t) = x(t) − si (t). However, it is not possible to use the
same subterfuge to determine likelihood ratio when the interaction between the
signal and interference (noise) takes the form (7.5.1), inasmuch as the equation is
unsolvable with respect to the variable n(t) since the lattice L(∨, ∧) has no group
properties; thus, another approach is necessary here.
Based on (7.5.1), the solution of the problem of classification of the signal si (t)
from the set of deterministic signals S = {si (t)}, i = 1, . . . , m in the presence of
interference (noise) n(t) lies in formation of an estimator ŝi (t) of the received sig-
nal, which best allows (from the standpoint of the chosen criteria) an observer to
classify these signals. In this section, the problem of classification of the signals
from the set $S=\{s_i(t)\}$, $i=1,\ldots,m$ is based on minimization of the squared metric $\int_{t\in T_s}|y_i(t)-s_i(t)|^2\,dt\,\bigr|_{i=k}$ between the function $y_i(t)=F_i[x(t)]$ of the observed process $x(t)$ and the signal $s_i(t)$ in the presence of the signal $s_k(t)$ in $x(t)$: $x(t)=s_k(t)\vee n(t)$:

$$\begin{cases}
y_i(t) = F_i[x(t)] = \hat{s}_i(t); & (a)\\[2pt]
\int_{t\in T_s}|y_k(t)-s_k(t)|^2\,dt\,\bigr|_{x(t)=s_k(t)\vee n(t)} \to \min\limits_{y_k(t)\in Y}; & (b)\\[2pt]
\hat{k} = \arg\max\limits_{i\in I;\,s_i(t)\in S}\Bigl[\int_{t\in T_s}y_i(t)s_i(t)\,dt\Bigr]\Bigr|_{x(t)=s_k(t)\vee n(t)}; & (c)\\[2pt]
i\in I,\quad I=\mathbb{N}\cap[0,m],\quad m\in\mathbb{N}, & (d)
\end{cases} \tag{7.5.2}$$

where yi (t) = ŝi (t) is the estimator of the signal si (t) in the presence of the signal
sk (t) in theR process x(t): x(t) = sk (t) ∨ n(t), 1 ≤ k ≤ m; Fi [∗] is some deterministic
2
function; |yk (t) − sk (t)|2 dt = kyk (t) − sk (t)k is the squared metric between the
t∈Ts
signals yk (t) and sk (t) in Hilbert space HS; k̂ is the decision concerning the number
of processing channel, which corresponds to the received signal sk (t) from the set
of deterministic signals S = {si (t)}, i = 1, .R. . , m; 1 ≤ k ≤ m, k ∈ I, I = N ∩ [0, m],
m ∈ N, N is the set of natural numbers; yi (t)si (t)dt = (yi (t), si (t)) is a scalar
t∈Ts
product of the signals yi (t) and si (t) in Hilbert space HS.
The relationship (7.5.2a) of the system (7.5.2) defines the rule of formation of the estimator $\hat{s}_i(t)$ of the received signal in the $i$-th processing channel in the form of some deterministic function $F_i[x(t)]$ of the process $x(t)$. The relationship (7.5.2b) determines the criterion of minimum of the squared metric $\int_{t\in T_s}|y_k(t)-s_k(t)|^2\,dt$ in Hilbert space $HS$ between the signals $y_k(t)$ and $s_k(t)$ in the $k$-th processing channel. This criterion is considered under the condition that reception of the signal $s_k(t)$ is realized: $x(t)=s_k(t)\vee n(t)$. The relationship (7.5.2c) of the system (7.5.2) determines the criterion of maximum value of the correlation integral between the estimator $y_i(t)=\hat{s}_i(t)$ of the received signal $s_k(t)$ in the $i$-th processing channel and the signal $s_i(t)$. According to this criterion, the choice of the channel number $\hat{k}$ corresponds to the maximum value of the correlation integral $\int_{t\in T_s}y_i(t)s_i(t)\,dt$, $i\in I$. The relationship (7.5.2d) determines the domain of definition $I$ of the processing channel number $i$.
The solution of the problem of minimization of the squared metric (7.5.2b) between the function $y_k(t)=F_k[x(t)]$ and the signal $s_k(t)$ in its presence in the process $x(t)$: $x(t)=s_k(t)\vee n(t)$, follows directly from the absorption axiom of the lattice $L(\vee,\wedge)$ (see page 269) contained in the third part of the multilink identity:

$$y_k(t) = s_k(t)\wedge x(t) = s_k(t)\wedge[s_k(t)\vee n(t)] = s_k(t). \tag{7.5.3}$$

The identity (7.5.3) directly implies the type of the function $F_i[x(t)]$ from the relationship (7.5.2a) of the system (7.5.2):

$$y_i(t) = F_i[x(t)] = s_i(t)\wedge x(t) = \hat{s}_i(t). \tag{7.5.4}$$

Also, the identity (7.5.3) directly implies that the squared metric is identically equal to zero:

$$\int_{t\in T_s}|y_k(t)-s_k(t)|^2\,dt\,\Bigr|_{x(t)=s_k(t)\vee n(t)} = 0. \tag{7.5.5}$$

The identity (7.5.4) implies that in the presence of the signal $s_k(t)$ in the process $x(t)=s_k(t)\vee n(t)$, the solution of the optimization equation (7.5.2c) of the system (7.5.2) is equal to:

$$\arg\max_{i\in I;\,s_i(t)\in S}\Bigl[\int_{t\in T_s}y_i(t)s_i(t)\,dt\Bigr]\Bigr|_{x(t)=s_k(t)\vee n(t)} = \hat{k}, \tag{7.5.6}$$

and at the instant $t=t_0+T$, the correlation integral $\int_{t\in T_s}y_i(t)s_i(t)\,dt$ takes its maximum value on $i=k$, equal to the energy $E$ of the signal $s_i(t)$:

$$\int_{t\in T_s}y_i(t)s_i(t)\,dt\,\Bigr|_{i=k} = \int_{t\in T_s}s_k(t)s_k(t)\,dt = E. \tag{7.5.7}$$

Summarizing the relationships (7.5.4) through (7.5.7), one can conclude that the signal processing unit has to form the estimator $y_i(t)=\hat{s}_i(t)$ of the signal $s_i(t)$ in each of the $m$ processing channels, equal, according to (7.5.4), to $\hat{s}_i(t)=s_i(t)\wedge x(t)$; compute the correlation integral $\int_{t\in T_s}y_i(t)s_i(t)\,dt$ in the interval $T_s=[t_0,t_0+T]$ in each processing channel; and, according to Equation (7.5.2c), make the decision that the process $x(t)=s_k(t)\vee n(t)$ contains the signal $s_k(t)$ corresponding to the channel where the maximum value of the correlation integral $\int_{t\in T_s}y_k(t)s_k(t)\,dt$, equal to the signal energy $E$, is formed at the instant $t=t_0+T$.
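A compact discrete-time sketch of this classification unit, assuming sampled signals on a common grid with spacing `dt` (channel indices here run from 0 rather than 1):

```python
import numpy as np

def classify(x, signals, dt):
    """Per (7.5.4) and (7.5.2c): form s^_i = s_i AND x in every channel,
    integrate y_i * s_i over T_s, and pick the channel with the maximum."""
    integrals = [np.sum(np.minimum(s_i, x) * s_i) * dt for s_i in signals]
    return int(np.argmax(integrals))
```

By the absorption identity (7.5.3), the channel matched to the received signal reproduces it exactly, so its integral equals the signal energy $E$ for any realization of $n(t)$.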
t∈Ts
The block diagram of the unit of classification of deterministic signals in a signal space with lattice properties includes the decision gate (DG) and $m$ parallel processing channels, each containing the circuit of formation of the estimator $\hat{s}_i(t)$ of the signal $s_i(t)$; the correlation integral computation circuit $\int_{t\in T_s}\hat{s}_i(t)s_i(t)\,dt$; and the strobing circuit (SC) (see Fig. 7.5.1). The correlation integral computation circuit consists of a multiplier and an integrator.

FIGURE 7.5.1 Block diagram of deterministic signals classification unit
We can analyze the relations between the signals $s_i(t)$ and $s_k(t)$ and their estimators $\hat{s}_i(t)$ and $\hat{s}_k(t)$ in the corresponding processing channels in the presence of the signal $s_k(t)$ in the process $x(t)$: $x(t)=s_k(t)\vee n(t)$. Let the signals from the set $S=\{s_i(t)\}$, $i=1,\ldots,m$ be characterized by the same energy $E_i=E$. For the signals $s_i(t)$ and $s_k(t)$ and their estimators $\hat{s}_i(t)$ and $\hat{s}_k(t)$ in the corresponding processing channels, on an arbitrary signal-to-interference (signal-to-noise) ratio in the process $x(t)=s_k(t)\vee n(t)$ observed in the input of the classification unit, the following metric relationships hold:

$$\|s_i(t)-\hat{s}_k(t)\|^2 + \|s_k(t)-\hat{s}_k(t)\|^2 = \|s_i(t)-s_k(t)\|^2; \tag{7.5.8a}$$
$$\|s_i(t)-\hat{s}_i(t)\|^2 + \|s_k(t)-\hat{s}_i(t)\|^2 = \|s_i(t)-s_k(t)\|^2, \tag{7.5.8b}$$

where $\|a(t)-b(t)\|^2 = \|a(t)\|^2 + \|b(t)\|^2 - 2(a(t),b(t))$ is the squared metric between the functions $a(t)$ and $b(t)$ in Hilbert space $HS$; $\|a(t)\|^2$ is the squared norm of the function $a(t)$ in Hilbert space $HS$; $(a(t),b(t)) = \int_{t\in T^*}a(t)b(t)\,dt$ is the scalar product of the functions $a(t)$ and $b(t)$ in Hilbert space $HS$; $T^*$ is the domain of definition of the functions $a(t)$ and $b(t)$.

The relationships (7.5.4) and (7.5.8a) directly imply that on an arbitrary signal-to-interference (signal-to-noise) ratio in $x(t)=s_k(t)\vee n(t)$, the correlation coefficients $\rho[s_k(t),\hat{s}_k(t)]$ and $\rho[s_i(t),\hat{s}_k(t)]$ between the signals $s_i(t)$, $s_k(t)$ and the estimator $\hat{s}_k(t)$ of the signal $s_k(t)$ in the $k$-th processing channel are, respectively, equal to:

$$\rho[s_k(t),\hat{s}_k(t)] = 1; \tag{7.5.9a}$$
$$\rho[s_i(t),\hat{s}_k(t)] = r_{ik}, \tag{7.5.9b}$$

and the squared metrics, according to (7.5.8a), are determined by the following relationships:

$$\|s_k(t)-\hat{s}_k(t)\|^2 = 0; \tag{7.5.10a}$$
$$\|s_i(t)-\hat{s}_k(t)\|^2 = \|s_i(t)-s_k(t)\|^2 = 2E(1-r_{ik}), \tag{7.5.10b}$$

where $r_{ik}$ is the cross-correlation coefficient between the signals $s_i(t)$ and $s_k(t)$; $E$ is the energy of the signals $s_i(t)$ and $s_k(t)$.
The relationship (7.5.8b) implies that on an arbitrary signal-to-interference (signal-to-noise) ratio in the process $x(t)=s_k(t)\vee n(t)$, the correlation coefficients $\rho[s_i(t),\hat{s}_i(t)]$ and $\rho[s_k(t),\hat{s}_i(t)]$ between the signals $s_i(t)$, $s_k(t)$ and the estimator $\hat{s}_i(t)$ of the signal $s_i(t)$ in the $i$-th processing channel are equal to:

$$\rho[s_i(t),\hat{s}_i(t)] = 1 - \tfrac{1}{4}(1-r_{ik}); \tag{7.5.11a}$$
$$\rho[s_k(t),\hat{s}_i(t)] = 1 - \tfrac{3}{4}(1-r_{ik}), \tag{7.5.11b}$$

and the squared metrics from (7.5.8b) are determined by the following relationships:

$$\|s_i(t)-\hat{s}_i(t)\|^2 = \tfrac{1}{2}E(1-r_{ik}); \tag{7.5.12a}$$
$$\|s_k(t)-\hat{s}_i(t)\|^2 = \tfrac{3}{2}E(1-r_{ik}); \tag{7.5.12b}$$
$$\|s_i(t)-s_k(t)\|^2 = 2E(1-r_{ik}). \tag{7.5.12c}$$
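These relationships can be probed numerically. The sketch below, under the assumptions of two orthogonal tones ($r_{ik}=0$) and strong Gaussian interference, checks the absorption-based equality (7.5.9a) exactly and prints the mismatched-channel coefficient for comparison with the value $1-\frac{1}{4}(1-r_{ik})=0.75$ predicted by (7.5.11a):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
s_k = np.cos(2 * np.pi * 10 * t)          # received signal
s_i = np.cos(2 * np.pi * 20 * t)          # orthogonal signal, r_ik = 0
n = rng.normal(0.0, 10.0, t.size)         # strong interference (noise)
x = np.maximum(s_k, n)                    # x = s_k OR n

rho = lambda a, b: np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))
print(rho(s_k, np.minimum(s_k, x)))       # exactly 1, per (7.5.3)/(7.5.9a)
print(rho(s_i, np.minimum(s_i, x)))       # compare with 0.75, per (7.5.11a)
```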
The relationships (7.5.9a) and (7.5.11a) imply that while receiving the signal $s_k(t)$ in the process $x(t)=s_k(t)\vee n(t)$, in the $k$-th processing channel in the output of the integrator (see Fig. 7.5.1), at the instant $t=t_0+T$, the maximum value of the correlation integral (7.5.7) formed is equal to $E\cdot\rho[s_k(t),\hat{s}_k(t)]=E$. In the $i$-th processing channel ($i\neq k$) at the same time, the value of the correlation integral $\int_{t\in T_s}y_i(t)s_i(t)\,dt$ formed is equal to $E\cdot\rho[s_i(t),\hat{s}_i(t)]<E$.
Thus, regardless of the conditions of parametric and nonparametric prior uncer-
tainty and the probabilistic-statistical properties of interference (noise), the optimal
unit of deterministic signal classification (optimal demodulator) in signal space with
lattice properties realizes error-free classification of the signals from the given set
S = {si (t)}, i = 1, . . . , m.
The three segments of Fig. 7.5.2 illustrate the signals $z_i(t)$ and $u_i(t)$ in the outputs of the correlation integral computation circuit and the strobing circuit, and the strobing pulses in the first, second, and third processing channels, obtained by statistical modeling under the condition that the signals $s_1(t)$, $s_2(t)$, $s_3(t)$, and $s_1(t)$ were received in the input of the classification unit in the mixture $x(t)=s_i(t)\vee n(t)$, $i=1,\ldots,m$, successively in time in the intervals $[0,T]$, $[T,2T]$, $[2T,3T]$, $[3T,4T]$, respectively.

FIGURE 7.5.2 Signals in outputs of correlation integral computation circuit zi(t) and strobing circuit ui(t) within (a) first, (b) second, and (c) third processing channels

The signals $s_1(t)$, $s_2(t)$, $s_3(t)$ are orthogonal phase-shift-keyed signals with equal energies. The signal-to-interference (signal-to-noise) ratio is $E/N_0=10^{-8}$, where $E$ is the energy of the signal $s_i(t)$ and $N_0$ is the power spectral density of interference (noise). In the case of reception of the $k$-th signal $s_k(t)$ from the set of deterministic signals $S=\{s_i(t)\}$, $i=1,\ldots,m$, the function $z_k(t)$ observed in the output of the correlation integral computation circuit of the $k$-th processing channel is linear. Conversely, if in the $i$-th processing channel the $j$-th signal is received, $j\neq i$, then the function $z_i(t)$ formed in the output of the correlation integral computation circuit differs from a linear one.

As shown in Fig. 7.5.2, although the signal-to-interference (signal-to-noise) ratio is rather small, the signals $u_i(t)$ in the inputs of the decision gate (in the outputs of the strobing circuits) in each processing channel can be accurately distinguished by their amplitude. The relationships (7.5.9a) and (7.5.11a) imply that while receiving the signal $s_k(t)$ in the observed process $x(t)=s_k(t)\vee n(t)$, in the $k$-th processing channel in the output of the integrator (see Fig. 7.5.1), at the instants $t=t_0+jT$, $j=1,2,3,\ldots$, the maximum value of the correlation integral (7.5.7) formed is equal to $E\cdot\rho[s_k(t),\hat{s}_k(t)]=E$. In the $i$-th processing channel ($i\neq k$), at the same time, the value of the correlation integral formed is equal to $E\cdot\rho[s_i(t),\hat{s}_i(t)]=0.75E$.
The Shannon theorem on the capacity of the communication channel with additive white Gaussian noise assumes the existence of a lower bound $\inf[E_b/N_0]$ of the ratio $E_b/N_0$ of the energy $E_b$ that falls at one bit of information transferred by the signal to the quantity of the noise power spectral density $N_0$, called the ultimate Shannon limit [51], [52], [164].

This value, $\inf[E_b/N_0]=\ln 2$, establishes the limit below which error-free information transmission cannot be realized. The previous example implies that while solving the problem of deterministic signal classification in the presence of interference (noise) in signal space with lattice properties, the value $\inf[E_b/N_0]$ can be arbitrarily small, as can the probability of error while receiving a signal from the set of deterministic signals $S=\{s_i(t)\}$, $i=1,\ldots,m$.
However, this does not mean that in such signal spaces one can achieve unbounded values of the capacity of a noisy communication channel. Sections 5.2 and 6.5 show that the capacity of a communication channel, even in the absence of interference (noise), is a finite quantity. It is impossible “. . . to transmit all the information in the Encyclopedia Britannica in the absence of noise by the only signal $s_i(t)$”; this, in fact, follows from Theorem 5.1.1.
The results of the investigation of the deterministic signal classification problem in signal space with lattice properties permit us to draw the following conclusions.

1. Formulation of the synthesis problem of a deterministic signal classification algorithm on the basis of minimization of a squared metric in Hilbert space (7.5.2b), with simultaneous rejection of data concerning the prior probabilities of signal reception and the interference (noise) distribution, provides the invariance property of the synthesized algorithm with respect to the conditions of parametric and nonparametric prior uncertainty. The obtained values of the correlation coefficients (7.5.9a,b) and (7.5.11a,b) confirm this fact.
2. While solving the problem of deterministic signal classification in the pres-
ence of interference (noise) in a signal space with lattice properties, no
constraints are imposed on a lower bound of the ratio Eb /N0 of the en-
ergy Eb , that falls at one bit of transmitted information, to the quantity
of power spectral density of interference (noise) N0 . This ratio can be ar-
bitrarily small as can the probability of error while receiving a signal from
the set of deterministic signals S = {si (t)}, i = 1, . . . , m, m ∈ N.
3. The essential distinctions in signal classification quality between the lin-
ear signal space and the signal space with lattice properties are elucidated
by fundamental differences in the interactions of the signal si (t) and in-
terference (noise) n(t) in these spaces: the additive si (t) + n(t), and the
interactions in the form of binary operations of join si (t) ∨ n(t) and meet
si (t) ∧ n(t), respectively.

7.6 Resolution of Radio Frequency Pulses in Metric Space with


Lattice Properties
The uncertainty function [118] introduced by P. M. Woodward as a generalized characteristic of the resolution of a signal processing unit accurately describes the properties of all the signals applied in practice. However, utilizing the uncertainty function as an analytical tool for theoretical investigation of the signal resolution problem in non-Euclidean spaces in general, and in signal space with lattice properties in particular, is impossible due to the specificity of the properties of the signals and the algorithms of their processing in these signal spaces; so another approach is necessary here.
The parameter $\lambda_0$ of the received signal $s_0(t,\lambda_0)$ is usually mismatched with respect to the parameter $\lambda$ of the expected useful signal $s(t,\lambda)$ matched with some filter used for signal processing. The effects of mismatching take place during signal detection, and influence signal resolution and the estimation of signal parameters. Mismatching can be evaluated over the signal $w(\lambda_0,\lambda)$ in the output of the signal processing unit matched with the expected signal $s(t,\lambda)$. In the case of a signal $s_0(t,\lambda_0)$ with a mismatched value of the parameter $\lambda_0$ in the input of the processing unit in the absence of interference (noise), the output response $w(\lambda_0,\lambda)$ is called a mismatching function [267]. The normalized mismatching function $\rho(\lambda_0,\lambda)$ is also introduced in [267]:

$$\rho(\lambda_0,\lambda) = \frac{w(\lambda_0,\lambda)}{\sqrt{w(\lambda_0,\lambda_0)\,w(\lambda,\lambda)}}. \tag{7.6.1}$$

The function so determined is called a normalized time-frequency mismatching function of a processing unit if the vector parameter of the expected signal includes two scalar parameters: delay time $t_0'$ and Doppler frequency shift $F_0'$ [267]. The vector parameter $\lambda_0$ of the received signal can be expressed by two similar scalar parameters $t_0'=t_0+\tau$ and $F_0'=F_0+F$, where $\tau$ and $F$ are the time delay and Doppler frequency mismatching, respectively.
This section has a twofold goal. First, it is necessary to synthesize the algorithm
and unit of resolution of radio frequency (RF) pulses without intrapulse modulation
in signal space with lattice properties. Second, it is necessary to determine potential
resolution of this unit.
Synthesis and analysis of the optimal algorithm of RF pulse resolution are fulfilled on the following assumptions. In synthesis, the interference (noise) distribution is considered arbitrary, and the useful signals are considered harmonic, with unknown nonrandom amplitude, time of arrival, and initial phase. Other parameters of the useful signals are considered to be known. Under the further analysis of the signal processing algorithm, interference (noise) is assumed to be Gaussian.
Consider the model of interaction between two harmonic signals $s_1(t)$ and $s_2(t)$ and interference (noise) $n(t)$ in the signal space $L(\vee,\wedge)$ in the form of a distributive lattice with the operations of join $a(t)\vee b(t)$ and meet $a(t)\wedge b(t)$, respectively: $a(t)\vee b(t)=\sup_L(a(t),b(t))$, $a(t)\wedge b(t)=\inf_L(a(t),b(t))$; $a(t),b(t)\in L(\vee,\wedge)$:

$$x(t) = s_1(t)\vee s_2(t)\vee n(t). \tag{7.6.2}$$

Let the model of the received signals $s_1(t)$, $s_2(t)$ be determined by the expression:

$$s_i(t) = \begin{cases} A_i\cos(\omega_0 t+\varphi), & t\in T_i;\\ 0, & t\notin T_i, \end{cases} \tag{7.6.3}$$

where Ai is an unknown nonrandom amplitude of the useful signal si (t); ω0 = 2πf0 ;


f0 is known carrier frequency of the signal si (t); ϕ is an unknown nonrandom initial
phase of the signal si (t), ϕ ∈ [−π, π ]; Ti is a domain of definition of the signal si (t),
Ti = [t0i , t0i + T ], i = 1, 2; t0i is an unknown time of arrival of the signal si (t); T
360 7 Synthesis and Analysis of Signal Processing Algorithms

is known signal duration, T = N T0 ; N is a number of periods of harmonic signal


si (t); N ∈ N, N is the set of natural numbers; T0 is a period of a signal carrier.
We suppose interference (noise) $n(t)$ is characterized by such statistical properties that two neighboring independent interference (noise) samples $n(t_j)$ and $n(t_{j\pm 1})$ are separated by the interval $\Delta\tau = 1/f_0 = T_0$. The instantaneous values (samples) of the signal $\{s(t_j)\}$ and interference (noise) $\{n(t_j)\}$ are elements of the signal space: $s(t_j), n(t_j)\in L(\vee,\wedge)$.

The observation equation, according to the interaction equation (7.6.2), takes the form:

$$x(t_j) = s_1(t_j)\vee s_2(t_j)\vee n(t_j), \tag{7.6.4}$$

where $t_j=t-jT_0$, $j=0,1,\ldots,J-1$, $t_j\in T^*$; $T^*$ is the processing interval: $T^*=[\min_{i=1,2}\{t_{0i}\},\ \max_{i=1,2}\{t_{0i}+T\}]$; $T^*\subset T_1\cup T_2$; $J\in\mathbb{N}$, $\mathbb{N}$ is the set of natural numbers.
The resolution problem lies in forming in the output of a processing unit such estimators $\hat{s}_1(t)$, $\hat{s}_2(t)$ of the signals $s_1(t)$, $s_2(t)$, based on processing of the observed signal $x(t)$ (7.6.2), as allow (according to the chosen criteria) distinguishing these signals by the time parameter. The resolution problem $Res[s_{1,2}(t)]$ for two harmonic signals $s_1(t)$ and $s_2(t)$ in the signal space $L(\vee,\wedge)$ with lattice properties is formulated and solved on the basis of step-by-step processing of the statistical collection $\{x(t_j)\}$ determined by the observation equation (7.6.4):

$$Res[s_{1,2}(t)] = \begin{cases} PF[s_{1,2}(t)]; & (a)\\ IP[s_{1,2}(t)]; & (b)\\ Sm[s_{1,2}(t)], & (c) \end{cases} \tag{7.6.5}$$

where $PF[s_{1,2}(t)]$ is primary filtering, $IP[s_{1,2}(t)]$ is intermediate processing, and $Sm[s_{1,2}(t)]$ is smoothing; together they form the successive processing stages of the general algorithm of resolution $Res[s_{1,2}(t)]$ of the signals $s_1(t)$ and $s_2(t)$ (7.6.5).
The optimality criteria determining every signal processing stage $PF[s_{1,2}(t)]$, $IP[s_{1,2}(t)]$, $Sm[s_{1,2}(t)]$ are interrelated and involved in single systems:

$$PF[s_{1,2}(t)] = \begin{cases}
y(t) = \arg\min\limits_{y^{\circ}(t)\in Y;\ t,t_j\in T^*}\Bigl|\bigwedge\limits_{j=0}^{J-1}[x(t_j)-y^{\circ}(t)]\Bigr|; & (a)\\[4pt]
w(t) = F[y(t)]; & (b)\\[4pt]
J = \arg\min\limits_{y(t)\in Y}\Bigl[\int_{t\in T^*}|y(t)|\,dt\,\Bigr|_{x(t)=s_i(t)\vee 0}\Bigr],\quad \int_{t\in T^*}|y(t)|\,dt\neq 0; & (c)\\[4pt]
\int_{t\in T}|w_{12}(t)-[w_1(t)\vee w_2(t)]|\,dt\,\Bigr|_{|t_{01}-t_{02}|\in\Delta t_{0i}} \to \min\limits_{w(t)\in W}; & (d)\\[4pt]
w_{12}(t) = w(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)},\quad w_{1,2}(t) = w(t)\bigr|_{x(t)=s_{1,2}(t)\vee 0},
\end{cases} \tag{7.6.6}$$

where $y(t)$ is the solution of the problem of minimization of the metric $\bigl|\bigwedge_{j=0}^{J-1}[x(t_j)-y^{\circ}(t)]\bigr|$ between the observed statistical collection $\{x(t_j)\}$ and the optimization variable (function) $y^{\circ}(t)$; $w(t)$ is some deterministic function $F[*]$ of the result $y(t)$ of minimization of the function of the observed collection $\{x(t_j)\}$ (7.6.4); $t_{0i}$ is an unknown time of arrival of the signal $s_i(t)$, $i=1,2$; $\Delta t_{0i}=[0.75T_0, 1.25T_0]$; $T^*$ is the processing interval; $J$ is the number of samples of the stochastic process $x(t)$ used during signal processing, $J\in\mathbb{N}$, $\mathbb{N}$ is the set of natural numbers;

$$IP[s_{1,2}(t)] = \begin{cases}
\mathsf{M}\{u^2(t)\}\bigr|_{s_{1,2}(t)\equiv 0} \to \min\limits_{L}; & (a)\\
\mathsf{M}\{[u(t)-s_i(t)]^2\}\bigr|_{x(t)=s_i(t)\vee 0} = \varepsilon; & (b)\\
u(t) = L[w(t)], & (c)
\end{cases} \tag{7.6.7}$$

where $\mathsf{M}\{*\}$ is the symbol of mathematical expectation; $L[w(t)]$ is the functional transformation of the process $w(t)$ into the process $u(t)$; $\varepsilon$ is a constant defined by some function of a power of the signal $s(t)$;

$$Sm[s_{1,2}(t)] = \begin{cases}
v(t) = \arg\min\limits_{v^{\circ}(t)\in V;\ t,t_k\in\tilde{T}}\sum\limits_{k=0}^{M-1}|u(t_k)-v^{\circ}(t)|; & (a)\\[4pt]
\Delta\tilde{T}:\ \delta_d(\Delta\tilde{T}) = \delta_{d,sm}; & (b)\\[4pt]
M = \arg\max\limits_{M'\in\mathbb{N}\cap[M^*,\infty[}[\delta_f(M')]\,\Bigr|_{M^*:\,\delta_f(M^*)=\delta_{f,sm}}, & (c)
\end{cases} \tag{7.6.8}$$

where $v(t)$ is the smoothing function of the process $u(t)$ that is the solution of the problem of minimizing the metric $\sum_{k=0}^{M-1}|u(t_k)-v^{\circ}(t)|$ between the instantaneous values of the stochastic process $u(t)$ and the optimization variable $v^{\circ}(t)$; $t_k=t-\frac{k}{M}\Delta\tilde{T}$, $k=0,1,\ldots,M-1$, $t_k\in\tilde{T}=]t-\Delta\tilde{T},t]$; $\tilde{T}$ is the interval in which smoothing of the stochastic process $u(t)$ is realized; $M\in\mathbb{N}$, $\mathbb{N}$ is the set of natural numbers; $M$ is the number of samples of the stochastic process $u(t)$ used during smoothing in the interval $\tilde{T}$; $\delta_d(\Delta\tilde{T})$ and $\delta_f(M)$ are the relative dynamic and fluctuation errors of smoothing, as the dependences on the quantity $\Delta\tilde{T}$ of the smoothing interval $\tilde{T}$ and the number of samples $M$, respectively; $\delta_{d,sm}$ and $\delta_{f,sm}$ are the given quantities of the relative dynamic and fluctuation errors of smoothing, respectively.
We now explain the optimality criteria and the single relationships included
into the systems (7.6.6), (7.6.7), (7.6.8) determining the successive processing
stages P F [s1,2 (t)], IP [s1,2 (t)], Sm[s1,2 (t)] of the general algorithm of resolution
Res[s1,2 (t)] of the signals s1 (t) and s2 (t) (7.6.5).
Equation (7.6.6a) of the system (7.6.6) determines the criterion of minimum of the metric between the statistical set of the observations $\{x(t_j)\}$ and the result of primary processing $y(t)$. The choice of the metric function $\bigl|\bigwedge_{j=0}^{J-1}[x(t_j)-y^{\circ}(t)]\bigr|$ should take into account the metric convergence and the convergence in probability to the estimated parameter of the sequence for the interaction in the form (7.6.2) (see Section 7.2). Equation (7.6.6b) establishes the interrelation between the stochastic processes $y(t)$ and $w(t)$. The relationship (7.6.6c) determines the criterion of the choice of the number of periods $J$ of the signals $s_1(t)$, $s_2(t)$ used in processing, on the basis of minimization of the norm $\int_{t\in T^*}|y(t)|\,dt$. The criterion (7.6.6c) is considered under three constraint conditions: (1) interference (noise) is identically equal to zero: $n(t)\equiv 0$; (2) the second signal $s_k(t)$, $k\neq i$, $k=1,2$ is absent; (3) the norm $\int_{t\in T^*}|y(t)|\,dt$ of the function $y(t)$ is not equal to zero: $\int_{t\in T^*}|y(t)|\,dt\neq 0$. The relationship (7.6.6d) defines the criterion of minimum of the norm of the difference $w_{12}(t)-[w_1(t)\vee w_2(t)]$ of the processing unit responses $w_{12}(t)$ and $w_{1,2}(t)$ under the joint interaction of the signals $s_1(t)$ and $s_2(t)$ ($x(t)=s_1(t)\vee s_2(t)$), and also under the effect of only one of the signals $s_i(t)$, $i=1,2$ ($x(t)=s_{1,2}(t)\vee 0$), respectively. The criterion (7.6.6d) is considered under the constraint condition that the modulus of the difference $|t_{01}-t_{02}|$ of the times of arrival $t_{0i}$ of the signals $s_i(t)$, $i=1,2$ takes values within the interval $0.75T_0\le|t_{01}-t_{02}|\le 1.25T_0$.
Equations (7.6.7a), (7.6.7b), (7.6.7c) define the criterion of the choice of the functional transformation $L[w(t)]$. Equation (7.6.7a) determines the criterion of minimum of the second moment of the process $u(t)$ in the absence of the signals $s_i(t)$, $i=1,2$ in the input of the processing unit. Equation (7.6.7b) establishes the quantity of the second moment of the difference between the signals $u(t)$, $s_{1,2}(t)$ under two constraint conditions: (1) interference (noise) is identically equal to zero: $n(t)\equiv 0$; (2) the second signal $s_k(t)$, $k\neq i$, $k=1,2$ is absent.
Equation (7.6.8a) of the system (7.6.8) determines the criterion of minimum of
metric between the process u(t) and the result of its smoothing v (t) in the interval
T̃ . The criterion (7.6.8a) is considered under the constraint condition that both
useful signals are identically equal to zero: si (t) ≡ 0, i = 1, 2, whereas interference
(noise) n(t) differs from zero: x(t) = n(t) ∨ 0. The relationship (7.6.8b) determines
the rule of choosing the quantity ∆T̃ of the smoothing interval T̃ based on a given
quantity of relative dynamic error δd,sm of smoothing. Equation (7.6.8c) determines
the number of samples M of stochastic process u(t) based on the relative fluctuation
error δf,sm of its smoothing.
To solve the problem of resolution algorithm synthesis according to the chosen criteria, we obtain an expression for the process $v(t)$ in the output of the processing unit by successively solving the equation system (7.6.5).

To solve the problem of minimization of the function $\bigl|\bigwedge_{j=0}^{J-1}[x(t_j)-y^{\circ}(t)]\bigr|$ (7.6.6a) of the system (7.6.6), we find its extremum, setting the derivative with respect to $y^{\circ}(t)$ to zero:

$$\frac{d}{dy^{\circ}(t)}\Bigl|\bigwedge_{j=0}^{J-1}(x(t_j)-y^{\circ}(t))\Bigr| = -\mathrm{sign}\Bigl[\bigwedge_{j=0}^{J-1}(x(t_j)-y^{\circ}(t))\Bigr] = 0. \tag{7.6.9}$$

The solution of Equation (7.6.9) is the value of the estimator $y(t)$ in the form of the meet of the observation results $\{x(t_j)\}$:

$$y(t) = \bigwedge_{j=0}^{J-1}x(t_j) = \bigwedge_{j=0}^{J-1}x(t-jT_0). \tag{7.6.10}$$

The derivative of the function $\bigl|\bigwedge_{j=0}^{J-1}[x(t_j)-y^{\circ}(t)]\bigr|$, according to the relationship (7.6.9), changes its sign from minus to plus at the point $y(t)$. Thus, the extremum determined by the formula (7.6.10) is the minimum point of this function and the solution of Equation (7.6.6a) that determines this criterion of estimation.
The condition $x(t)=s_i(t)\vee 0$, $i=1,2$ of the criterion (7.6.6c) of the system (7.6.6) determines the observation equation (7.6.4) of the following form: $x(t_j)=s_i(t_j)\vee 0$, $j=0,1,\ldots,J-1$; therefore, according to the relationship (7.6.10), the identity holds:

$$y(t)\bigr|_{x(t)=s_i(t)\vee 0} = [s_i(t)\vee 0]\wedge[s_i(t-(J-1)T_0)\vee 0]. \tag{7.6.11}$$

On the basis of the identity (7.6.11), we obtain the value of the norm $\int_{t\in T^*}|y(t)|\,dt$ from the criterion (7.6.6c):

$$\int_{t\in T^*}|y(t)|\,dt = \begin{cases} 4(N-J+1)A_i/\pi, & J\le N;\\ 0, & J>N, \end{cases} \tag{7.6.12}$$

where $N$ is the number of periods of the harmonic signal $s_i(t)$, and $A_i$ is an unknown nonrandom amplitude of the signal $s_i(t)$.

From Equation (7.6.12), according to the criterion (7.6.6c), we obtain the optimal value $J$ of the number of signal periods used in primary processing (7.6.10), equal to $N$:

$$J = N. \tag{7.6.13}$$
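The behavior behind the criterion (7.6.6c) and the result (7.6.13) can be probed numerically. In the following sketch, the sample rate, carrier, and $N$ are illustrative assumptions, and the closed-form constant in (7.6.12) depends on the time normalization; what matters for the criterion is that the norm decreases linearly with $J$ and vanishes for $J>N$:

```python
import numpy as np

fs, f0, N = 100_000, 100.0, 10            # assumed sample rate, carrier, periods
T0 = 1.0 / f0
t = np.arange(0.0, 2 * N * T0, 1.0 / fs)
s = np.where(t < N * T0, np.cos(2 * np.pi * f0 * t), 0.0)   # N-period pulse
x = np.maximum(s, 0.0)                    # x = s OR 0 (noise-free condition)
P = round(fs * T0)                        # samples per carrier period

for J in (1, 5, 10, 11):
    y = x.copy()                          # y = AND_{j<J} x(t - j*T0), per (7.6.11)
    for j in range(1, J):
        y = np.minimum(y, np.concatenate([np.zeros(j * P), x[:x.size - j * P]]))
    print(J, np.sum(np.abs(y)) / fs)      # ~ (N - J + 1)*T0/pi here; zero for J > N
```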
Under the joint effect of the signals $s_1(t)$ and $s_2(t)$ ($x(t)=s_1(t)\vee s_2(t)$), the response $w_{12}(t)$ of the processing unit is the function of the expression (7.6.10) on $J=N$:

$$y(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)} = y_+(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)} + y_-(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)}; \tag{7.6.14}$$

$$y_+(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)} = [s_1(t)\vee s_2(t)\vee 0]\wedge[s_1(t-(N-1)T_0)\vee s_2(t-(N-1)T_0)\vee 0]; \tag{7.6.14a}$$

$$y_-(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)} = [s_1(t)\wedge s_1(t-(N-1)T_0)\wedge 0]\vee[s_2(t)\wedge s_2(t-(N-1)T_0)\wedge 0], \tag{7.6.14b}$$

where $y_+(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)}$ and $y_-(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)}$ are the positive and negative parts of the function $y(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)}$, respectively:

$$y_+(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)} = y(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)}\vee 0; \tag{7.6.15a}$$

$$y_-(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)} = y(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)}\wedge 0. \tag{7.6.15b}$$

Under the effect of only one of the signals $s_1(t)$ or $s_2(t)$ ($x(t)=s_{1,2}(t)\vee 0$), the response $w_{1,2}(t)$ of the processing unit is the function (7.6.6b) of the expression (7.6.10) on $J=N$:

$$y(t)\bigr|_{x(t)=s_{1,2}(t)\vee 0} = y_+(t)\bigr|_{x(t)=s_{1,2}(t)\vee 0} + y_-(t)\bigr|_{x(t)=s_{1,2}(t)\vee 0}; \tag{7.6.16}$$

$$y_+(t)\bigr|_{x(t)=s_{1,2}(t)\vee 0} = [s_{1,2}(t)\vee 0]\wedge[s_{1,2}(t-(N-1)T_0)\vee 0]; \tag{7.6.16a}$$

$$y_-(t)\bigr|_{x(t)=s_{1,2}(t)\vee 0} = 0, \tag{7.6.16b}$$
where $y_+(t)\bigr|_{x(t)=s_{1,2}(t)\vee 0}$ and $y_-(t)\bigr|_{x(t)=s_{1,2}(t)\vee 0}$ are the positive and negative parts of the function $y(t)\bigr|_{x(t)=s_{1,2}(t)\vee 0}$, respectively.

The identity (7.6.16b) is stipulated by the absence of the negative component $x_-(t)$ of the input effect $x(t)=s_{1,2}(t)\vee 0$: $x_-(t)=x(t)\wedge 0=0$.

If the criterion (7.6.6d) $|t_{01}-t_{02}|\in\Delta t_{0i}=[0.75T_0, 1.25T_0]$ holds, then the expression (7.6.14a) can be represented in the following form:

$$y_+(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)} = y_+(t)\bigr|_{x(t)=s_1(t)\vee 0} + y_+(t)\bigr|_{x(t)=s_2(t)\vee 0}, \tag{7.6.17}$$

where $y_+(t)\bigr|_{x(t)=s_{1,2}(t)\vee 0}$ is the function determined by the relationship (7.6.16a).
Analyzing the relationships (7.6.14), (7.6.16), (7.6.17), it is easy to conclude that the norm $\int_{t\in T^*}|w_{12}(t)-[w_1(t)\vee w_2(t)]|\,dt$ of the difference of the functions $w_{12}(t)$ and $w_1(t)\vee w_2(t)$ is minimal and equal to zero if and only if between the functions $w(t)$ and $y(t)$ the following identity holds:

$$w(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)} = y(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)}\vee 0. \tag{7.6.18}$$

The summand $y_-(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)}$ determined by the expression (7.6.14b) is excluded from further processing, and the following equalities hold:

$$w_{12}(t) = w_1(t)\vee w_2(t); \tag{7.6.19a}$$

$$w(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)} = w(t)\bigr|_{x(t)=s_1(t)\vee 0} + w(t)\bigr|_{x(t)=s_2(t)\vee 0}. \tag{7.6.19b}$$

Thus, as the result of minimization of the norm $\int_{t\in T^*}|w_{12}(t)-[w_1(t)\vee w_2(t)]|\,dt$ of the difference of the functions $w_{12}(t)$ and $w_1(t)\vee w_2(t)$, according to the criterion (7.6.6d), the coupling equation (7.6.6b) between the functions $w(t)$ and $y(t)$ is determined by the relationship (7.6.18):

$$F[y(t)] = y(t)\vee 0.$$

It is obvious that the coupling equation (7.6.6b) has to be invariant with respect to the presence (absence) of interference (noise) $n(t)$, so the final variant of the coupling equation can be written on the basis of Equation (7.6.18) in the form:

$$w(t) = F[y(t)] = y(t)\vee 0. \tag{7.6.20}$$

Hence, the identity (7.6.20) determines the form of the coupling equation (7.6.6b)
obtained on the basis of the criterion (7.6.6d). According to the relationship (7.6.20),
the noninformative component of the process y (t), determined by its negative part
y− (t) = y (t) ∧ 0, must be excluded from signal processing, while the positive part
y+ (t) = y (t) ∨ 0 of the process y (t) takes part in the further processing, and y+ (t) is
the informational component of y (t). From the energetic standpoint, informational
y+ (t) and noninformational
R y− (t) components contain 1/N and (N − 1)/N parts
of the norm |y (t)|dt x(t)=s1,2 (t) , respectively, in the presence of the only signal
t∈T ∗
s1 (t) or s2 (t) in the input of the processing unit: x(t) = s1,2 (t) .
In the absence of interference (noise), $n(t)=0$, between the signals in the input and output of the processing unit the following relationships hold:

$$x_{12}(t) = x_1(t)\vee x_2(t); \tag{7.6.21a}$$

$$w_{12}(t) = w_1(t)\vee w_2(t), \tag{7.6.21b}$$

where $x_{12}(t)=s_1(t)\vee s_2(t)\vee 0$; $x_{1,2}(t)=s_{1,2}(t)\vee 0$; $w_{12}(t)=w(t)\bigr|_{x(t)=s_1(t)\vee s_2(t)}$; $w_{1,2}(t)=w(t)\bigr|_{x(t)=s_{1,2}(t)\vee 0}$.
The relationships (7.6.21a,b) determine homomorphism between the signals in
the input and output of the processing unit. If the time difference |t01 −t02 | between
arrivals of the signals s1 (t) and s2 (t) exceeds the quantity 1.5T0 : |t01 − t02 | ≥ 1.5T0 ,
then the relationship (7.6.21b) does not hold. This fact is stipulated by nonlinear
interaction of the responses of the signals s1 (t) and s2 (t) under their simultaneous
effect in the input of the processing unit.
The solution $u(t)$ of the relationships (7.6.7a), (7.6.7b), (7.6.7c) of the system (7.6.7), which establish the criterion of the choice of the functional transformation of the process $w(t)$, is the function $L[w(t)]$ determining the gain characteristic of the limiter:

$$u(t) = L[w(t)] = [(w(t)\wedge a)\vee 0] + [(w(t)\vee(-a))\wedge 0], \tag{7.6.22}$$

where $a$ is the parameter of the limiter chosen to equal $a=\sup_A\Delta A=A_{max}$, $\Delta A=]0,A_{max}]$; $A_{max}$ is the maximal possible value of the signal amplitude.
To obtain the expression for the process $v(t)$ in the output of the processing unit, we solve the function minimization equation on the basis of the criterion (7.6.8a) of the system (7.6.8). To solve this problem, we find the extremum of the function $\sum_{k=0}^{M-1}|u(t_k)-v^{\circ}(t)|\,\bigr|_{x(t)=n(t)\vee 0}$, setting its derivative with respect to $v^{\circ}(t)$ to zero:

$$\frac{d}{dv^{\circ}(t)}\Bigl\{\sum_{k=0}^{M-1}|u(t_k)-v^{\circ}(t)|\,\Bigr|_{x(t)=n(t)\vee 0}\Bigr\} = -\sum_{k=0}^{M-1}\mathrm{sign}[u(t_k)-v^{\circ}(t)]\,\Bigr|_{x(t)=n(t)\vee 0} = 0.$$

The solution of the last equation is the value of the estimator $v(t)$ in the form of the sample median $\mathrm{med}\{*\}$ of the sample collection $\{u(t_k)\}$ of the stochastic process $u(t)$ in the interval $\tilde{T}=]t-\Delta\tilde{T},t]$:

$$v(t) = \underset{t_k\in\tilde{T}}{\mathrm{med}}\{u(t_k)\},\quad t_k\in\tilde{T}, \tag{7.6.23}$$

where $t_k=t-\frac{k}{M}\Delta\tilde{T}$, $k=0,1,\ldots,M-1$, and the quantities $\Delta\tilde{T}$ and $M$ are chosen according to the criteria (7.6.8b) and (7.6.8c) of the system (7.6.8), respectively.

The derivative of the function $\sum_{k=0}^{M-1}|u(t_k)-v^{\circ}(t)|\,\bigr|_{x(t)=n(t)\vee 0}$ changes its sign from minus to plus at the point $v(t)$. Hence, the extremum determined by the formula (7.6.23) is the minimum of this function and the solution of the equation (7.6.8a) determining this estimation criterion.
Thus, summarizing the relationships (7.6.10), (7.6.13), (7.6.20), (7.6.22), (7.6.23), we conclude that the estimator v(t) of the signals s1(t) and s2(t) received in the presence of interference (noise) n(t) is the function of smoothing of the stochastic process u(t) obtained by limitation of the process w(t), which is the positive part y+(t) of the process y(t): w(t) = y+(t) = y(t) ∨ 0, where y(t) is the result of primary processing of the observed statistical collection {x(t − jT0)}, j = 0, 1, ..., N − 1: $y(t) = \bigwedge_{j=0}^{N-1}x(t-jT_0)$:

$$v(t) = \operatorname{med}\{u(t - \tfrac{k}{M}\Delta\tilde T)\}; \qquad (7.6.24a)$$
$$u(t) = L[w(t)]; \qquad (7.6.24b)$$
$$w(t) = \Big[\bigwedge_{j=0}^{N-1}x(t-jT_0)\Big] \vee 0, \qquad (7.6.24c)$$

where k = 0, 1, . . . , M − 1; ∆T̃ is a quantity of the smoothing interval T̃ ; T̃ =


]t − ∆T̃ , t].
The block diagram of the signal resolution unit, according to the processing algorithm (7.6.24), involves a transversal filter realizing the primary over-period processing in the form $\bigwedge_{j=0}^{N-1}x(t-jT_0)$; the unit of formation of the positive part w(t) = y+(t) of the process y(t) (7.6.20), which excludes its negative instantaneous values from further processing; the limiter L[w(t)]; and also a median filter (MF) (see Fig. 7.6.1).

FIGURE 7.6.1 Block diagram of signal resolution unit
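A minimal discrete-time sketch of the chain of Fig. 7.6.1 may help fix ideas; it transcribes the algorithm (7.6.24) directly. The sampling grid, the window length M, and the way the limiter parameter a and the median window are passed in are illustrative assumptions of this sketch, not values prescribed by the synthesis.

```python
import numpy as np

def resolution_unit(x, period_samples, n_periods, a, m_window):
    """Sketch of the resolution algorithm (7.6.24): over-period meet
    (7.6.24c), limiter (7.6.22), trailing-window median (7.6.24a)."""
    # Primary processing (7.6.24c): y(t) = meet of N period-spaced delayed
    # copies of the input; the meet (^) is a pointwise minimum.
    y = x.copy()
    for j in range(1, n_periods):
        delayed = np.empty_like(x)
        delayed[:j * period_samples] = 0.0      # zero-fill the leading gap
        delayed[j * period_samples:] = x[:-j * period_samples]
        y = np.minimum(y, delayed)
    w = np.maximum(y, 0.0)                      # positive part: w = y v 0
    # Limiter (7.6.22): u = [(w ^ a) v 0] + [(w v (-a)) ^ 0].
    u = np.maximum(np.minimum(w, a), 0.0) + np.minimum(np.maximum(w, -a), 0.0)
    # Median filter (7.6.23): sample median over the trailing window of
    # M samples, approximating v(t) = med{u(t_k)}, t_k in ]t - dT, t].
    v = np.array([np.median(u[max(0, i - m_window + 1):i + 1])
                  for i in range(u.size)])
    return w, u, v
```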

We shall first analyze the resolution ability of the obtained processing unit on the basis of the normalized mismatching function (7.6.1), without taking into account the influence of interference (noise); then we shall do so in the presence of interference (noise) within the model (7.6.2). This analysis will be carried out using the signal w(t) (7.6.24c) in the input of the limiter (see Fig. 7.6.1), which avoids inessential details while preserving the physical sense of the features of this processing.
In the absence of interference (noise), n(t) = 0, in the input of the resolution unit (see Fig. 7.6.1), under the signal model (7.6.3), the interaction equation (7.6.2) takes the form:

$$x(t) = s(t) \vee 0; \qquad (7.6.25a)$$
$$s(t) = \begin{cases} A\cos(\omega_0 t + \varphi), & t \in T_s; \\ 0, & t \notin T_s, \end{cases} \qquad (7.6.25b)$$
where all the variables have the same sense as in the signal model (7.6.3): A is an unknown nonrandom amplitude of the useful signal s(t); ω0 = 2πf0; f0 is the known carrier frequency of the signal s(t); ϕ is an unknown nonrandom initial phase of the signal s(t), ϕ ∈ [−π, π]; Ts is the domain of definition of the signal s(t), Ts = [t0, t0 + T]; t0 is an unknown time of arrival of the signal s(t); T is the known signal duration, T = N T0; N is the number of periods of the harmonic signal s(t); N ∈ ℕ, where ℕ is the set of natural numbers; T0 is the period of the carrier.
Information concerning the values of unknown nonrandom amplitude A and
initial phase ϕ of the signal s(t) is contained in its estimator ŝ(t) = w(t) in the
interval Tŝ , t ∈ Tŝ :
Tŝ = [t0 + (N − 1)T0 , t0 + N T0 ], (7.6.26)
where t0 is an unknown time of arrival of the signal s(t); N is a number of periods
of harmonic signal s(t); N ∈ N, N is the set of natural numbers; T0 is a period of
carrier.
We note that this filter cannot be adequately described by the pulse-response characteristic used to define linear filters: one can easily verify that the filter response to a δ-function is identically equal to zero. The response of the filter realizing the processing of the harmonic signal s(t) (7.6.25b) in the absence of interference (noise) is the estimator ŝ(t), which, according to the expression (7.6.24c), takes the values:

$$\hat s(t) = \begin{cases} s(t) = A\cos[\omega_0(t - t_0) + \varphi], & s(t) \ge 0,\ t \in T_{\hat s}; \\ 0, & s(t) < 0,\ t \in T_{\hat s}\ \text{or}\ t \notin T_{\hat s}. \end{cases} \qquad (7.6.27)$$

Due to this property, the filter of the signal space with lattice properties fundamentally differs from the filter of the linear signal space matched with the same signal s(t), whose response is determined by the autocorrelation function of the signal. The relationship (7.6.27) shows that the filter realizing the primary processing (7.6.24c) compresses the useful signal by a factor of N = T f0 and, like any nonlinear device, expands the spectrum of the processed signal. Using the known analogy, the result (7.6.27) can be interpreted as the potential capability of the filter for signal resolution in the time domain under an extremely large signal-to-noise ratio E/N0 → ∞ in the input. The expression (7.6.27) implies that the filter resolution ∆τ in the time parameter is about a quarter of a carrier period T0: ∆τ ∼ 1/(4f0) = T0/4, where f0 is the carrier frequency of the signal.
Figure 7.6.2 illustrates the signal w(t) in the output of the unit of formation of the positive part (see Fig. 7.6.1) during the interaction of two harmonic signals s1(t) and s2(t) in the input of the synthesized unit in the absence of interference (noise), n(t) = 0. In the figure, 1 is the signal s1(t); 2 is the signal s2(t); 3 is the response ŝ1(t) of the signal s1(t); 4 is the response ŝ2(t) of the signal s2(t). The responses 3 and 4 of the signals s1(t) and s2(t) are shown by the solid line.

FIGURE 7.6.2 Signal w(t) in input of limiter in absence of interference (noise). 1 and 2: signals s1(t) and s2(t), respectively; 3 and 4: responses ŝ1(t) and ŝ2(t) of signals s1(t) and s2(t), respectively
We can determine the normalized time-frequency mismatching function (7.6.1) of the filter realizing the primary processing algorithm (7.6.24c) of the harmonic signal s(t) within the model (7.6.25b) in signal space with lattice properties in the absence of interference (noise), assuming for simplicity that ϕ = −π/2:

s(t) = A cos[ω0 (t − t0 ) − π/2], t ∈ Ts = [t0 , t0 + T ]. (7.6.28)

The received signal s′(t) is transformed as a result of the Doppler effect, and its time-frequency characteristics differ from those of the initial signal s(t):

$$s'(t) = A'\cos[\omega_0'(t - t_0') - \pi/2], \quad t \in T_{s'} = [t_0', t_0' + T'], \qquad (7.6.28a)$$

where A′ is the changed amplitude of the received signal s′(t); ω′0 = 2πf′0 is the changed cyclic frequency of the carrier; f′0 = f0(1 + δF) is the changed carrier frequency of the received signal s′(t); δF = F/f0 is the relative quantity of the Doppler frequency shift; F is the absolute quantity of the Doppler frequency shift; t′0 is the time of arrival of the received signal; T′ = T/(1 + δF) = N·T′0 is the changed duration of the signal; N is the number of periods of the received signal s′(t); T′0 = T0/(1 + δF) is the changed period of the carrier.
The response w(t) to the signal s′(t) in the output of the filter in the absence of interference (noise) is described by the function:

$$w(t) = \begin{cases} w_\uparrow(t) = A'\sin[2\pi f_0(1+\delta F)(t - t_1')], & t_1' \le t < t_m'; \\ w_\downarrow(t) = -A'\sin[2\pi f_0(1+\delta F)(t - t_2')], & t_m' \le t < t_2', \end{cases} \qquad (7.6.29)$$

where t′1 and t′2 are the times of beginning and ending of the response w(t); t′m = (t′1 + t′2)/2 is the time corresponding to the maximum value of the response w(t); A′ is the amplitude of the transformed signal s′(t).

The first part w↑(t) of the function w(t) characterizes the leading edge of the pulse in the output of the filter, and the second part w↓(t) is its trailing edge. The leading edge w↑(t) corresponds to the first quarter-wave of the first period of the signal s′(t + (N − 1)T′0) delayed by N − 1 carrier periods T′0. The trailing edge w↓(t) corresponds to the second quarter-wave of the last period of the received signal s′(t).
In the cases of positive (δF > 0) and negative (δF < 0) relative Doppler shifts, the values t′1, t′m, and t′2 are determined by the following relationships:

$$\begin{cases} t_1' = t_m + (N-1)\Delta T_0\cdot 1(-\delta F) - 0.25T_0'; \\ t_2' = t_m + (N-1)\Delta T_0\cdot 1(\delta F) + 0.5\Delta T_0 + 0.25T_0'; \\ t_m' = (t_1' + t_2')/2 = t_m + 0.5[(N-1)+0.5]\Delta T_0, \end{cases} \qquad (7.6.30)$$

where tm is the time corresponding to the maximum value of the response w(t) in the output of the filter at zero Doppler shift, δF = 0; T′0 = T0/(1 + δF) is the period of the changed carrier of the transformed signal s′(t); ∆T0 = T′0 − T0 = T0(−δF)/(1 + δF) is the difference of the carrier periods of the transformed s′(t) and the initial s(t) signals; 1(x) is the Heaviside step function.
Normalizing the function according to the definition (7.6.1) and transforming the variable t in formula (7.6.29) with respect to the location tm and scale T′0 parameters:

$$\delta\tau = (t - t_m)/T_0', \qquad (7.6.31)$$
we obtain the expression for the normalized time-frequency mismatching function ρ(δτ, δF) of the filter:

$$\rho(\delta\tau, \delta F) = \begin{cases} \sin[2\pi(1+\delta F)(\delta\tau - \delta\tau_1)], & \delta\tau_1 \le \delta\tau < \delta\tau_m; \\ -\sin[2\pi(1+\delta F)(\delta\tau - \delta\tau_2)], & \delta\tau_m \le \delta\tau < \delta\tau_2, \end{cases} \qquad (7.6.32)$$

where δτ and δF are the relative time delay and frequency shift, respectively; δτ1 and δτ2 are the relative times of beginning and ending of the mismatching function ρ(δτ, δF) at F = const, respectively; δτm = (δτ1 + δτ2)/2 is the relative time corresponding to the maximum value of the mismatching function ρ(δτ, δF) at F = const.
In the cases of positive (δF > 0) and negative (δF < 0) relative frequency shifts, the values δτ1, δτm, and δτ2 are determined, according to the transformation (7.6.31) and the relationship (7.6.30), by the following expressions:

$$\begin{cases} \delta\tau_1 = (N-1)\delta T_0\cdot 1(-\delta F) - 0.25; \\ \delta\tau_2 = (N-1)\delta T_0\cdot 1(\delta F) + 0.5\delta T_0 + 0.25; \\ \delta\tau_m = (\delta\tau_1 + \delta\tau_2)/2 = 0.5[(N-1)+0.5]\delta T_0, \end{cases} \qquad (7.6.33)$$

where δT0 = (T′0 − T0)/T0 = (−δF)/(1 + δF) is the relative difference of the carrier periods of the transformed s′(t) and initial s(t) signals.
The form of the normalized time-frequency mismatching function ρ(δτ, δF) of the filter for N = 50 is shown in Fig. 7.6.3. Cut projections of the normalized time-frequency mismatching function ρ(δτ, δF) (7.6.32), made by horizontal planes ρ(δτ, δF) = const that are parallel to the coordinate plane (δτ, δF), for N ≫ 1 are similar in form to a parallelogram with its center at the origin of coordinates, such that its small diagonal belongs to the axis δτ and two sides are perpendicular to this diagonal. They are parallel to the axis δF and belong to the second and fourth quadrants. The sizes of the cut projection ρ(δτ, δF) = 0 along the axes δτ and δF are equal to 1/2 and 1/[2(N − 1)], respectively. Cut projections of the normalized time-frequency mismatching function ρ(δτ, δF) made by horizontal planes ρ(δτ, δF) = const that are parallel to the coordinate plane (δτ, δF), for N = 50, are shown in Fig. 7.6.4. The curves 1 through 5 in Fig. 7.6.4 correspond to the values ρ(δτ, δF) = 0; 0.2; 0.4; 0.6; and 0.8, respectively.
Any two copies of the signal whose mutual relative mismatching in time δτ and in frequency δF exceeds the resolution measure of the filter in relative time delay $\overline{\delta\tau}$ and frequency shift $\overline{\delta F}$, respectively:

$$|\delta\tau| > \overline{\delta\tau}, \quad |\delta F| > \overline{\delta F}, \qquad (7.6.34)$$

are considered resolvable, where the quantities $\overline{\delta\tau}$ and $\overline{\delta F}$ are the doubled minimal values of the roots of the equations ρ(δτ, 0) = 0.5 and ρ(0, δF) = 0.5, respectively:

$$\overline{\delta\tau} = 2\inf_{\delta\tau\in\Delta\delta\tau}\{\arg[\rho(\delta\tau, 0) = 0.5]\}; \qquad (7.6.35a)$$
$$\overline{\delta F} = 2\inf_{\delta F\in\Delta\delta F}\{\arg[\rho(0, \delta F) = 0.5]\}. \qquad (7.6.35b)$$

FIGURE 7.6.3 Normalized time-frequency mismatching function ρ(δτ, δF) of filter on N = 50

FIGURE 7.6.4 Cut projections of normalized time-frequency mismatching function. Lines 1 through 5 correspond to ρ(δτ, δF) = 0; 0.2; 0.4; 0.6; and 0.8, respectively

The relationships (7.6.32) and (7.6.33) imply that the potential resolutions of the filter matched with the harmonic signal (7.6.28a) in signal space with lattice properties, in relative time delay $\overline{\delta\tau}$ and relative frequency shift $\overline{\delta F}$, are equal to:

$$\overline{\delta\tau} = 1/3; \quad \overline{\delta F} = 1/[3(N-1)]. \qquad (7.6.36)$$



This means that the potential resolutions of such a filter in time delay $\overline{\Delta\tau}$ and frequency shift $\overline{\Delta F}$ are determined by the relationships:

$$\overline{\Delta\tau} = T_0/3; \quad \overline{\Delta F} = f_0/[3(N-1)], \qquad (7.6.37)$$

and their product is determined by the number of periods N of the harmonic signal s(t):

$$\overline{\Delta\tau}\cdot\overline{\Delta F} = \overline{\delta\tau}\cdot\overline{\delta F} = 1/[9(N-1)]. \qquad (7.6.38)$$

The relationship (7.6.38) implies that to provide simultaneously the desired values of resolution in time delay $\overline{\Delta\tau}$ and frequency shift $\overline{\Delta F}$, it is necessary to use signals with a sufficiently large number of periods N of oscillations. This means that to provide simultaneously high resolution in both time and frequency parameters in a signal space with lattice properties, it is not necessary to use signals with large time-bandwidth products, inasmuch as this problem can be solved easily by means of harmonic signals of the form (7.6.28).
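As a numerical illustration (with hypothetical parameters), a harmonic signal with carrier frequency f0 = 10 MHz and N = 50 periods (duration T = 5 μs) provides, by (7.6.37), $\overline{\Delta\tau} = T_0/3 \approx 33$ ns and $\overline{\Delta F} = f_0/[3\cdot 49] \approx 68$ kHz, with the product (7.6.38) equal to $1/441 \approx 2.3\cdot 10^{-3}$; increasing N to 500 at the same carrier leaves $\overline{\Delta\tau}$ unchanged and improves $\overline{\Delta F}$ roughly tenfold.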
We now determine how the presence of interference (noise) affects the filter resolution. While receiving the realization s∗(t) of the signal s(t), the conditional probability density function (PDF) py(z; t/s∗) ≡ py(z/s∗) of the instantaneous value y(t) of the statistics (7.6.10), $y(t) = \bigwedge_{j=0}^{N-1}x(t-jT_0)$, t ∈ Tŝ (see formula (7.6.26)), is determined by the expression:

$$p_y(z/s^*) = P(C_c)\cdot\delta(z - s^*(t)) + N\cdot p_n(z)[1 - F_n(z)]^{N-1}\cdot 1(z - s^*(t)), \qquad (7.6.39)$$

where P(Cc) = 1 − P(Ce) is the probability of correct formation of the estimator ŝ(t) = s∗(t):

$$P(C_c) = 1 - [1 - F_n(s^*(t))]^N; \qquad (7.6.40)$$

P(Ce) is the probability of erroneous formation of the estimator ŝ(t) = s∗(t):

$$P(C_e) = [1 - F_n(s^*(t))]^N; \qquad (7.6.41)$$

δ(z), 1(z) are the Dirac delta and Heaviside step functions, respectively; Fn(z) is the cumulative distribution function (CDF) of the interference (noise) n(t).
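The order-statistic structure of (7.6.39)-(7.6.41) is easy to verify by simulation. A minimal Monte Carlo check, assuming standard Gaussian interference and an illustrative value s∗(t) = 0.3:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
N, s_star, trials = 10, 0.3, 200_000          # illustrative parameters
# Each over-period sample at the response instant is s* v n_j; the statistic
# (7.6.10) is their meet, so y = min_j max(s*, n_j) = max(s*, min_j n_j).
n = rng.standard_normal((trials, N))
y = np.maximum(s_star, n.min(axis=1))
p_correct_mc = np.mean(y == s_star)           # estimator formed correctly
Fn = 0.5 * (1.0 + erf(s_star / sqrt(2.0)))    # Gaussian CDF F_n(s*)
p_correct = 1.0 - (1.0 - Fn) ** N             # P(Cc) from (7.6.40)
print(p_correct_mc, p_correct)                # the two values should agree
```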
Any interval Tŝ is partitioned into two sets T′ and T″, respectively:

$$T_{\hat s} = T' \cup T'', \quad T' \cap T'' = \varnothing; \quad s(t') \le 0,\ t' \in T'; \quad s(t'') > 0,\ t'' \in T''; \qquad (7.6.42)$$

$$T' = \Big[t_0 + (N-1)T_0 + \frac{T_0}{4} - \frac{\varphi T_0}{2\pi};\ t_0 + (N-1)T_0 + \frac{3T_0}{4} - \frac{\varphi T_0}{2\pi}\Big]; \quad T'' = T_{\hat s} - T',$$

and the measures of the intervals T′, T″ are equal to m(T′) = m(T″) = T0/2.
Then, depending on whether the sample w(t) belongs to one of the sets T′ and T″ of the partition (7.6.42) of the interval Tŝ, the conditional PDF pw(z; t/s∗) ≡ pw(z/s∗), t ∈ Tŝ, of the instantaneous values w(t) of the stochastic process w(t) = y(t) ∨ 0 in the input of the limiter is determined by the relationships:

$$p_w(z/s^*) = P\cdot\delta(z) + N\cdot p_n(z)[1 - F_n(z)]^{N-1}\cdot 1(z), \quad t \in T' \subset T_{\hat s}, \qquad (7.6.43)$$



where $P = P(C_c) + N\int_{s(t)}^{0} p_n(z)[1 - F_n(z)]^{N-1}\,dz$, s(t) ≤ 0, t ∈ T′;

$$p_w(z/s^*) = P(C_c)\,\delta(z - s^*(t)) + N\cdot p_n(z)[1 - F_n(z)]^{N-1}\cdot 1(z - s^*(t)), \quad t \in T'' \subset T_{\hat s}. \qquad (7.6.44)$$

Obviously, when s(t) > 0, t ∈ T 00 ⊂ Tŝ , the PDF pw (z/s∗ ) of the stochastic process
w(t) in the output of the filter is identically equal to the PDF py (z/s∗ ) (7.6.39):

pw (z/s∗ ) = py (z/s∗ ), t ∈ T 00 ⊂ Tŝ .

The probability density function pw (z/s∗ ), t ∈ Tŝ is also the PDF of the estimator
ŝ(t) of the instantaneous value of the signal s(t) in the input of the limiter.
The random variable n(t) is characterized by a PDF pn(z) with zero expectation; hence, assuming that s∗(t) ≥ 0, the following inequality holds:

$$F_n(s^*(t)) \ge 1/2. \qquad (7.6.45)$$

According to the inequality (7.6.45), the upper bound of the probability P(Ce) of erroneous formation of the estimator ŝ(t) is determined by the relationship:

$$P(C_e) \le 2^{-N} \;\Rightarrow\; \sup[P(C_e)] = 2^{-N}. \qquad (7.6.46)$$

Correspondingly, the lower bound of the probability P(Cc) of the correct formation of the estimator ŝ(t) is determined by the inequality:

$$P(C_c) \ge 1 - 2^{-N} \;\Rightarrow\; \inf[P(C_c)] = 1 - 2^{-N}. \qquad (7.6.47)$$
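For example, for N = 10 carrier periods these bounds give $\sup[P(C_e)] = 2^{-10} \approx 9.8\cdot 10^{-4}$ and $\inf[P(C_c)] = 1 - 2^{-10} \approx 0.999$, whatever the signal-to-noise ratio.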

Analysis of the relationship (7.6.44) allows us to conclude that the response of the
signal s(t), t ∈ Tŝ is observed in the filter output in the interval Tŝ = [t0 + (N −
1)T0 , t0 + N T0 ] with extremely high probability P (Cc ) ≥ 1 − 2−N regardless of
signal-to-noise ratio. The relationship (7.6.44) also implies that the estimator ŝ(t)
is biased; nevertheless, it is asymptotically unbiased and consistent, inasmuch as it
converges in probability and in distribution to the estimated value s(t).
The expressions (7.6.43) and (7.6.44) for conditional PDF pw (z/s∗ ) of the in-
stantaneous values of stochastic process w(t) in the limiter output for arbitrary
instants should be specified, inasmuch as they were obtained on the assumption
that t ∈ Tŝ , where the interval Tŝ (7.6.26) corresponds to the domain of definition
of the signal response in the filter output (7.6.27), and, respectively, to domain of
definition of the estimator ŝ(t) of the signal s(t).
Figure 7.6.5 illustrates the realization w∗(t) of the signal w(t), including the signal response and the residual overshoots on both sides of the signal response, with amplitudes equal to the instantaneous values of the signal s(t) ≥ 0 at random instants t ∈ Tk from the intervals {Tk} corresponding to the time positions of the positive semiperiods of the autocorrelation function of the harmonic signal s(t):

$$T_k = [t_0 + ((N-1)+k)T_0;\ t_0 + (N+k)T_0], \quad k = 0, \pm 1, \ldots, \pm(N-1), \qquad (7.6.48)$$



where N is the number of periods of the harmonic signal s(t), N = T f0; T0 is the period of the harmonic signal; f0 is the carrier frequency of the signal; t0 is the time of signal arrival; Tk=0 ≡ Tŝ.

FIGURE 7.6.5 Realization w∗ (t) of signal w(t) including signal response and residual
overshoots. 1 = signal s(t); 2 = signal response ŝ(t); 3 = residual overshoots

Fig. 7.6.5 shows: 1 is the signal s(t); 2 is the signal response ŝ(t), t ∈ Tŝ ; 3
represents the residual overshoots.
Any interval Tk is partitioned into two sets T′k and T″k, respectively:

$$T_k = T_k' \cup T_k'', \quad T_k' \cap T_k'' = \varnothing; \quad s(t') \le 0,\ t' \in T_k'; \quad s(t'') > 0,\ t'' \in T_k''; \qquad (7.6.49)$$

$$T_k' = \Big[t_0 + [(N-1)+k]T_0 + \frac{T_0}{4} - \frac{\varphi T_0}{2\pi};\ t_0 + [(N-1)+k]T_0 + \frac{3T_0}{4} - \frac{\varphi T_0}{2\pi}\Big]; \quad T_k'' = T_k - T_k',$$
and the measures of the intervals T′k and T″k are equal to m(T′k) = m(T″k) = T0/2.
The formation of the signal w(t) in the input of the limiter is realized according to the rule (7.6.24c):

$$w(t) = y(t) \vee 0 = \Big[\bigwedge_{j=0}^{N-1}x(t-jT_0)\Big] \vee 0, \quad t \in T_k, \qquad (7.6.50)$$

where Tk is determined by the formula (7.6.48), k = 0, ±1, . . . , ±(N − 1), Tk=0 ≡ Tŝ .
The expression (7.6.50) implies that, as a result of N − |k| tests from N, at the instant t ∈ Tk the signal w(t) can take values from the set {0, s(ti), n(tl)}, i = 1, ..., N − |k|; l = 1, ..., N − |k|. As a result of |k| tests from N, it can take values from the set {0, n(tm)}, m = 1, ..., |k|. Thus, in the intervals {Tk}, the signal s(t), t ∈ Tk, is used less often (by |k|) to form the estimator than in the interval Tk=0 ≡ Tŝ, which naturally degrades the signal processing quality. Thus, at the instant t ∈ Tk, the signal w(t) is determined by the least ymin(t) = y1(t) ∧ y2(t) of two random variables y1(t), y2(t):

$$w(t) = y_{\min}(t) \vee 0 = [y_1(t) \wedge y_2(t)] \vee 0, \quad t \in T_k, \qquad (7.6.51)$$

where $y_1(t) = \bigwedge_{j=0}^{|k|-1}x(t_j)$, $y_2(t) = \bigwedge_{j=|k|}^{N-1}x(t_j)$.
We determine the conditional PDF pw(z/s∗), t ∈ Tk, based on the fact that only samples of the form {0 ∨ n(tj)} take part in the formation of the random variable y1(t), and only samples of the form {s(tj) ∨ n(tj)} take part in the formation of the random variable y2(t). Then the PDF py(z/s∗), t ∈ Tk, of the random variable ymin(t) = y1(t) ∧ y2(t) is determined by the relationship:

$$p_y(z/s^*) = p_{y_1}(z/s^*{=}0)[1 - F_{y_2}(z/s^*)] + p_{y_2}(z/s^*)[1 - F_{y_1}(z/s^*{=}0)], \qquad (7.6.52)$$

where, according to (7.6.39), the PDFs py1(z/s∗ = 0) and py2(z/s∗) of the random variables y1(t) and y2(t) are determined by the expressions:

$$p_{y_1}(z/s^*{=}0) = P(C_c)\big|_{|k|,s=0}\cdot\delta(z) + |k|\cdot p_n(z)[1 - F_n(z)]^{|k|-1}\cdot 1(z);$$
$$p_{y_2}(z/s^*) = P(C_c)\big|_{N-|k|}\cdot\delta(z - s^*(t)) + (N-|k|)\cdot p_n(z)[1 - F_n(z)]^{N-|k|-1}\cdot 1(z - s^*(t)),$$

and their CDFs Fy1(z/s∗ = 0) and Fy2(z/s∗) are determined by the relationships:

$$F_{y_1}(z/s^*{=}0) = P(C_c)\big|_{|k|}\cdot 1(z), \quad F_{y_2}(z/s^*) = P(C_c)\big|_{N-|k|}\cdot 1(z - s^*(t)),$$

where $P(C_c)\big|_q = 1 - [1 - F_n(s^*(t))]^q$ and $P(C_e)\big|_q = [1 - F_n(s^*(t))]^q$, q = const.
Then the PDF py(z/s∗) (7.6.52) can be represented in the form:

$$p_y(z/s^*) = p_{y_1}(z/s^*{=}0)\{1 - 1(z - s^*(t)) + [1 - F_n(z)]^{N-|k|}\cdot 1(z - s^*(t))\} + p_{y_2}(z/s^*)\{1 - 1(z) + [1 - F_n(z)]^{|k|}\cdot 1(z)\}. \qquad (7.6.53)$$

Depending on the values taken by the signal s(t), t ∈ Tk, in the interval Tk (7.6.49), namely s(t′) ≤ 0, t′ ∈ T′k, or s(t″) > 0, t″ ∈ T″k, the PDF py(z/s∗) (7.6.53) is determined by the expressions:

$$p_y(z/s^*)\big|_{t\in T_k'} = P(C_c)\big|_{N-|k|}\cdot\delta(z - s^*(t)) + (N-|k|)\cdot p_n(z)[1 - F_n(z)]^{N-|k|-1}\cdot[1(z - s^*(t)) - 1(z)] + P(C_c)\big|_{|k|,s=0}P(C_e)\big|_{N-|k|,s=0}\,\delta(z) + N\cdot p_n(z)[1 - F_n(z)]^{N-1}\cdot 1(z); \qquad (7.6.54)$$

$$p_y(z/s^*)\big|_{t\in T_k''} = P(C_c)\big|_{|k|,s=0}\cdot\delta(z) + |k|\cdot p_n(z)[1 - F_n(z)]^{|k|-1}\cdot[1(z) - 1(z - s^*(t))] + P(C_c)\big|_{N-|k|}P(C_e)\big|_{|k|}\,\delta(z - s^*(t)) + N\cdot p_n(z)[1 - F_n(z)]^{N-1}\cdot 1(z - s^*(t)). \qquad (7.6.55)$$

Due to its non-negative definiteness, under the condition that s(t) > 0, t ∈ T″k, the PDF pw(z/s∗)|t∈T″k in the filter output is identically equal to the PDF py(z/s∗)|t∈T″k (7.6.55):

$$p_w(z/s^*)\big|_{t\in T_k''} \equiv p_y(z/s^*)\big|_{t\in T_k''}. \qquad (7.6.56)$$
If the signal s(t) takes the values s(t) ≤ 0, t ∈ T′k, in the interval Tk, then the PDF is equal to:

$$p_w(z/s^*)\big|_{t\in T_k'} = \big[P(C_c)\big|_{|k|} + P + P(C_c)\big|_{|k|,s=0}\cdot P(C_e)\big|_{N-|k|,s=0}\big]\delta(z) + N\cdot p_n(z)[1 - F_n(z)]^{N-1}\cdot 1(z), \qquad (7.6.57)$$

where

$$P = (N - |k|)\int_{s^*(t)}^{0} p_n(z)[1 - F_n(z)]^{N-|k|-1}\,dz;$$
$$P(C_c)\big|_{|k|,s=0} = 1 - 2^{-|k|}; \quad P(C_e)\big|_{N-|k|,s=0} = 2^{-N+|k|}.$$


The identity (7.6.56) implies that, at a small signal-to-interference (signal-to-noise) ratio in the filter input, at the instants t ∈ T″k the output of the filter forms: (1) the signal estimator with the probability P[C(w = ŝ)] ≈ 2^−|k| − 2^−N; (2) the value equal to zero, w(t) = 0, with the probability P[C(w = 0)] = 1 − 2^−|k|; or (3) the least positive value nmin of the interference (noise) n(t) from the set {n(tj)} with the probability P[C(w = nmin)] ≈ 2^−N. The expression (7.6.57) implies that, at a small signal-to-interference (signal-to-noise) ratio in the filter input, at the instants t ∈ T′k the output of the filter forms: (1) the values equal to zero, w(t) = 0, with the probability P[C(w = 0)] = 1 − 2^−N, or (2) the values equal to the least positive value nmin of the interference (noise) n(t) from the set {n(tj)} with the probability P[C(w = nmin)] ≈ 2^−N.
The signal w(t) in the limiter input can be represented by the linear combination
of signal ws (t) and interference (noise) wn (t) components (see (7.3.25)):

w(t) = ws (t) + wn (t).

The signal component ws(t) is a stochastic process whose realization instantaneous values in each interval Tk are equal to the positive instantaneous values of the signal s(t) with probability P[C(w = ŝ)] or to zero with probability P[C(w = 0)]. The interference (noise) component wn(t) is a stochastic process whose realization instantaneous values in each interval Tk are equal to the least positive instantaneous value of the interference (noise) nmin with probability P[C(w = nmin)], or to zero with probability P[C(w = 0)].
The relationships (7.6.54) and (7.6.55) imply that, at a small signal-to-interference (signal-to-noise) ratio in the filter input, the PDF pws(z/s∗) of the signal component ws(t) in its output is equal to:

$$p_{w_s}(z/s^*) = \begin{cases} [1 - 2^{-|k|} + 2^{-N}]\,\delta(z) + [2^{-|k|} - 2^{-N}]\,\delta(z - s^*(t)), & t \in T_k''; \\ \delta(z), & t \in T_k'. \end{cases} \qquad (7.6.58)$$

Hence, the mathematical expectation M{ws(t)} of the signal component ws(t) of the process w(t) is equal to:

$$\mathsf{M}\{w_s(t)\} = \int_{-\infty}^{\infty} z\,p_{w_s}(z/s^*)\,dz = [2^{-|k|} - 2^{-N}]\cdot[s^*(t) \vee s^*(t - (N-1)T_0) \vee 0], \quad t \in T_k. \qquad (7.6.59)$$

Figure 7.6.6 illustrates the realization w∗(t) of the stochastic process w(t) in the output of the unit of formation of the positive part, and Fig. 7.6.7 shows the realization
v∗(t) of the stochastic process v(t) in the output of the median filter (see Fig. 7.6.1), under the interaction between two harmonic signals s1(t), s2(t) and interference (noise) n(t) in the input of the synthesized unit, obtained by statistical modeling of the processing of the input signal x(t). In the figures, 1 denotes the signal s1(t); 2 is the signal s2(t); 3 is the response of the signal s1(t); 4 is the response of the signal s2(t); 5 represents the residual overshoots of the signal component w∗s(t) of the realization w∗(t) of the stochastic process w(t).

FIGURE 7.6.6 Realization w∗(t) of stochastic process w(t) in input of limiter and residual overshoots

FIGURE 7.6.7 Realization v∗(t) of stochastic process v(t) in output of median filter

The examples correspond to the following conditions. The signals s1 (t) and
s2 (t) are narrowband RF pulses without intrapulse modulation; interference is a
quasi-white Gaussian noise with the ratio of maximum frequency of interference
power spectral density fn,max to a carrier frequency f0 of the signals s1 (t) and s2 (t):
fn,max /f0 = 8; the signal-to-interference (signal-to-noise) ratios for the signals s1 (t)
and s2 (t) take the values Es1 /N0 = 8 · 10−7 and Es2 /N0 = 3.2 · 10−6 , respectively
(where Es1 , Es2 are the energies of the signals s1 (t) and s2 (t); N0 is power spectral
density of interference (noise)). The number of samples N of the input signal x(t)
used in signal processing and determined by the number of periods of a carrier of
the signals s1 (t) and s2 (t) is equal to 10. The delay of the signal s1 (t) with respect
to the signal s2 (t) is equal to 1.25T0 , where T0 is a period of a carrier oscillation:
T0 = 1/f0 .
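A sketch of how such a statistical modeling experiment might be set up numerically, reusing resolution_unit() from the sketch after Fig. 7.6.1; the sampling rate, amplitudes, noise level, and the assumed join-type interaction x(t) = s1(t) ∨ s2(t) ∨ n(t) are assumptions of this sketch, not the exact conditions used for Figs. 7.6.6 and 7.6.7.

```python
import numpy as np

rng = np.random.default_rng(0)
f0, fs, N = 1.0, 64.0, 10                 # carrier, sampling rate, N = 10 periods
T0 = 1.0 / f0
ps = int(fs * T0)                         # samples per carrier period
t = np.arange(int(24 * fs * T0)) / fs

def rf_pulse(t0, amp):                    # RF pulse without intrapulse modulation
    mask = (t >= t0) & (t < t0 + N * T0)
    return amp * np.cos(2.0 * np.pi * f0 * (t - t0)) * mask

s1 = rf_pulse(3.0 * T0, 1.0)
s2 = rf_pulse(3.0 * T0 + 1.25 * T0, 2.0)  # signals 1.25*T0 apart, as in the text
noise = rng.normal(0.0, 25.0, t.size)     # strong quasi-white interference
x = np.maximum(np.maximum(s1, s2), noise) # assumed lattice (join) interaction
w, u, v = resolution_unit(x, ps, N, a=2.0, m_window=ps // 2)
```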
The responses of the signals s1(t) and s2(t), shown in Fig. 7.6.6, are easily distinguished against the remnants of the nonlinear interaction between the interference (noise) and the signals s1(t), s2(t), which appear as residual overshoots (line 5) of the signal component w∗s(t) of the realization w∗(t) of the stochastic process w(t). As can be seen from Fig. 7.6.7, the median filter (see Fig. 7.6.1) removes the residual overshoots (line 5) of the signal component w∗s(t) of the realization w∗(t) of the stochastic process w(t) and slightly cuts the tops of the responses of the signals s1(t) and s2(t).
of statistical modeling of processing of the input signal x(t) shown in Figs. 7.6.6
and 7.6.7, confirm the high efficiency of harmonic signal resolution in the presence
of strong interference provided by the synthesized processing algorithm.
Using the identity w(t) = s∗(t) with respect to the formula (7.6.59), where w(t) is determined by the function (7.6.29), and normalizing this function along with the transformation of the variable t (7.6.31), we obtain the expression for the normalized time-frequency mismatching function ρ(δτ, δF) of the filter in the presence of strong interference (see Fig. 7.6.8):

$$\rho(\delta\tau, \delta F) = \begin{cases} a\cdot\sin[2\pi(1+\delta F)(\delta\tau - \delta\tau_{1,k})], & \delta\tau_{1,k} \le \delta\tau < \delta\tau_{m,k}; \\ -a\cdot\sin[2\pi(1+\delta F)(\delta\tau - \delta\tau_{2,k})], & \delta\tau_{m,k} \le \delta\tau < \delta\tau_{2,k}, \end{cases} \qquad (7.6.60)$$

where δτ and δF are the relative time delay and relative frequency shift; a is a multiplier equal to (2^−|k| − 2^−N)/(1 − 2^−N); δτ1,k, δτ2,k are the relative times of beginning and ending of the intervals of definition of the mismatching function ρ(δτ, δF) at F = const; δτm,k = (δτ1,k + δτ2,k)/2 is the relative time corresponding to the maximum value of the mismatching function ρ(δτ, δF) at F = const.

FIGURE 7.6.8 Normalized time-frequency mismatching function of filter in presence of strong interference

In the cases of positive (δF > 0) and negative (δF < 0) relative frequency shifts, the values δτ1,k, δτm,k, and δτ2,k are determined, according to the transformation (7.6.31) and the relationship (7.6.30), by the following expressions:

$$\begin{cases} \delta\tau_{1,k} = (N-1)\delta T_0\cdot 1(-\delta F) - 0.25 + k; \\ \delta\tau_{2,k} = (N-1)\delta T_0\cdot 1(\delta F) + 0.5\delta T_0 + 0.25 + k; \\ \delta\tau_{m,k} = (\delta\tau_{1,k} + \delta\tau_{2,k})/2 = 0.5[(N-1)+0.5]\delta T_0 + k, \end{cases} \qquad (7.6.61)$$

where δT0 = (T′0 − T0)/T0 = (−δF)/(1 + δF) is the relative difference of the carrier periods of the transformed s′(t) and the initial s(t) signals.
The expression (7.6.60) and the relationships (7.6.61) imply that the resolutions in relative time delay δτ and relative frequency shift δF of the filter matched with the harmonic signal (7.6.28a) in signal space with lattice properties remain invariable under the interaction between the signal and interference (noise), regardless of the energetic relationships between them, as determined by Equations (7.6.36).

The multipeak character of the function ρ(δτ, δF), which spreads along the axis of relative time delay δτ in the interval [−0.25 − (N − 1), (N − 1) + 0.25] with maxima at the points δτ = k, k = 0, ±1, ..., ±(N − 1), gives rise to ambiguity of time delay measurement.
The investigation of the resolution algorithm of RF pulses without intrapulse
modulation with a rectangular envelope in signal space with lattice properties allows
us to draw the following conclusions.

1. While solving the problem of signal resolution in signal space with lattice properties, there exists the possibility of realizing a so-called “needle-shaped” response in the output of a signal processing unit without side lobes. The independence of the responses of the transformed s′(t) (7.6.28a) and the initial s(t) (7.6.28) signals in the range of time delay ∆τ and frequency shift ∆F that are out of the bounds of the filter resolution in time $\overline{\Delta\tau}$ and frequency $\overline{\Delta F}$ parameters ($|\Delta\tau| > \overline{\Delta\tau}$, $|\Delta F| > \overline{\Delta F}$) is achieved by nonlinear processing in signal space with lattice properties.
2. The effect of an arbitrarily strong interference in signal space with lattice properties does not change the filter resolutions in time delay and in frequency shift. However, it does cause ambiguity of time delay measurement.
3. The absence of the constraints imposed by the uncertainty principle of Woodward [118] allows us to obtain any resolution in time delay $\overline{\Delta\tau}$ and in frequency shift $\overline{\Delta F}$ in signal space with lattice properties even using a harmonic signal. The last feature is provided by the proper choice of the carrier frequency f0 and the number of periods N of carrier oscillations within the signal duration T.

7.7 Methods of Mapping of Signal Spaces into Signal Space with Lattice Properties
Finally we consider some possible methods of constructing the signal space with
lattice properties, which in principle differ from the direct methods of realization
of signal spaces with such properties. The direct methods of realization of signal
spaces with given properties assume the use of physical medium (in a general case,
with nonlinear properties) in which useful and interference signals interact on the
basis of operations determining algebraic properties of signal space. The methods
considered in this section are based on the signal mappings from a single space
(or from several spaces) into a space with given properties (in this case, lattice
properties).
We first consider the method of construction of signal space with lattice properties on the basis of a space with group properties (linear signal space), and then consider the construction of a signal space with lattice properties on the basis of spaces with semigroup properties.

7.7.1 Method of Mapping of Linear Signal Space into Signal Space with Lattice Properties
Signal space with lattice properties L(∨, ∧) can be obtained by transformation of
the signals of linear space in such a way that the results of interactions x(t) and
x̃(t) between the signal s(t) and interference (noise) n(t) in signal space with lattice
properties L(∨, ∧) with operations of join ∨ and meet ∧ are realized according to
the relationships:

x(t) = s(t) ∨ n(t) = {[s(t) + n(t)] + |s(t) − n(t)|}/2; (7.7.1a)

x̃(t) = s(t) ∧ n(t) = {[s(t) + n(t)] − |s(t) − n(t)|}/2, (7.7.1b)


which are consequences of the equations [221, Section XIII.3, (14)] and [221, Section XIII.4, (22)].
The identities (7.7.1) determine the mapping T of linear signal space LS(+) into signal space with lattice properties L(∨, ∧): T: LS(+) → L(∨, ∧). The inverse mapping T⁻¹: L(∨, ∧) → LS(+) is determined by the known identity [221, Section XIII.3, (14)]:

s(t) + n(t) = s(t) ∨ n(t) + s(t) ∧ n(t).
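These identities are easily checked numerically; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
s, n = rng.standard_normal(1000), rng.standard_normal(1000)
join = ((s + n) + np.abs(s - n)) / 2.0   # (7.7.1a): s v n
meet = ((s + n) - np.abs(s - n)) / 2.0   # (7.7.1b): s ^ n
assert np.allclose(join, np.maximum(s, n))
assert np.allclose(meet, np.minimum(s, n))
assert np.allclose(join + meet, s + n)   # the inverse mapping identity
```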

Based on the relationships (7.7.1), to form the results of interaction between the sig-
nal s(t) and interference (noise) n(t) in signal space with lattice properties L(∨, ∧),
it is necessary to have two linearly independent equations with respect to s(t) and
n(t). Let two linearly independent functions a(t) and b(t) of useful signal s(t) and
interference (noise) n(t), received by a directional antenna A and an omnidirec-
tional antenna B (see Fig. 7.7.1), arrive onto two inputs of the unit of mapping T
of linear signal space into signal space with lattice properties.

FIGURE 7.7.1 Block diagram of mapping unit



FIGURE 7.7.2 Directional field patterns FA (θ), FB (θ) of antennas A, B; θs , θn are direc-
tions of arrival of signal and interference (noise), respectively

There are two signals in two outputs of the mapping unit T : x(t) = s(t) ∨ n(t)
and x̃(t) = s(t) ∧ n(t) determined by the relationships (7.7.1). The useful signal
s(t) and interference (noise) n(t) are received by the antennas of channels A and
B (see Fig. 7.7.2), which are such that: (1) the antennas A and B have the same
phase center and (2) the antennas A and B are characterized by the directional field
patterns FA (θ) and FB (θ), respectively, so that FA (θs ) = G; FA (θn ) = g; G > g
and G > 1; FB (θ) = 1; complex gain-frequency characteristics K̇A (ω ) and K̇B (ω )
of the receiving channels A and B are identical: K̇A (ω ) = K̇B (ω ), and there are
two signals a(t) and b(t) in the inputs of the mapping unit T :

$$\begin{cases} a(t) = G\cdot s(t) + g\cdot n(t); & \text{(a)} \\ b(t) = s(t) + n(t), & \text{(b)} \end{cases} \qquad (7.7.2)$$

where G = FA (θs ) is the gain of antenna A from the direction of arrival of the
signal s(t); g = FA (θn ) is the gain of antenna A from the direction of arrival of
interference (noise) n(t).
We suppose that the direction of arrival θs of the signal is known, and the direction of arrival θn of the interference (noise) is unknown. We also consider that the interference n(t) is quasi-white Gaussian noise with independent samples {n(tj)}, j = 0, 1, 2, ..., separated by the time interval ∆t = |tj − tj±1| = 1/(2fn,max), where fn,max is the upper bound frequency of the power spectral density of the interference, and that the useful signal s(t) changes slightly over the interval ∆t, i.e., s(t) = s(t ± ∆t).
The equation system (7.7.2) cannot be solved with respect to s(t) and n(t) due to the presence of the additional unknown quantity g. To solve it, one more equation system can be used in addition to the system (7.7.2). It is formed on the basis of the observations a(t) and b(t) delayed by the interval ∆t, taking into account the last assumption concerning a slow (as compared with ∆t) signal change (s(t) = s(t ± ∆t)):

$$\begin{cases} a(t') = G\cdot s(t) + g\cdot n(t'); & \text{(a)} \\ b(t') = s(t) + n(t'), & \text{(b)} \end{cases} \qquad (7.7.3)$$

where t′ = t − ∆t, ∆t = 1/(2fn,max).



We now obtain the relationships determining the algorithm of the mapping unit
T : a(t), b(t) → x(t), x̃(t); a(t), b(t) ∈ LS (+); x(t), x̃(t) ∈ L(∨, ∧).
Joint fulfillment of both pairs of Equations (7.7.2a), (7.7.3a) and (7.7.2b), (7.7.3b) implies the system:

$$\begin{cases} a(t) - a(t') = g\cdot[n(t) - n(t')]; \\ b(t) - b(t') = n(t) - n(t'). \end{cases} \qquad (7.7.4)$$

The equations of the system (7.7.4) imply the identity determining the gain g of the antenna A that affects the interference (noise) n(t) arriving from the direction θn:

$$g = F_A(\theta_n) = \frac{a(t) - a(t')}{b(t) - b(t')}. \qquad (7.7.5)$$

Multiplying both parts of Equation (7.7.2b) by some coefficient k, subtracting them from the corresponding parts of Equation (7.7.2a) of the same system, and multiplying the difference a(t) − kb(t) by some coefficient q, we compose the auxiliary identity:

$$q[a(t) - kb(t)] = q(G - k)\cdot s(t) + q(g - k)\cdot n(t) = s(t) - n(t). \qquad (7.7.6)$$

For the identity (7.7.6) to hold, it is necessary to select the values k and q that are the roots of the system of equations:

$$\begin{cases} q(G - k) = 1; \\ q(g - k) = -1. \end{cases}$$

Solving it, we obtain the values of the coefficients k and q for which the identity (7.7.6) holds:

$$k = (G + g)/2; \qquad (7.7.7a)$$
$$q = 2/(G - g). \qquad (7.7.7b)$$
Taking into account the identity (7.7.6), one can compose the required relationship:

s(t) − n(t) = q [a(t) − k · b(t)], (7.7.8)

where k, q are the coefficients determined by the relationships (7.7.7a) and (7.7.7b),
respectively.
Substituting the sum s(t) + n(t) from Equation (7.7.2b) and the difference s(t) − n(t) from Equation (7.7.8) into the identities (7.7.1a,b), we obtain the desired relationships determining the algorithm of the mapping unit T: a(t), b(t) → x(t), x̃(t); a(t), b(t) ∈ LS(+); x(t), x̃(t) ∈ L(∨, ∧):

$$T = \begin{cases} x(t) = [b(t) + q|a(t) - k\cdot b(t)|]/2; & \text{(a)} \\ \tilde x(t) = [b(t) - q|a(t) - k\cdot b(t)|]/2; & \text{(b)} \\ k = (G + g)/2; & \text{(c)} \\ q = 2/(G - g); & \text{(d)} \\ g = [a(t) - a(t')]/[b(t) - b(t')]. & \text{(e)} \end{cases} \qquad (7.7.9)$$


One possible variant of the block diagram of the signal space mapping unit is
described in [268], [269].
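A minimal sketch of the mapping unit per the system (7.7.9); the scalar test values and the assumption b(t) ≠ b(t′) (so that (7.7.9e) is well defined) are illustrative.

```python
import numpy as np

def mapping_unit_T(a, b, a_prev, b_prev, G):
    """Mapping unit T (7.7.9): forms x = s v n and x~ = s ^ n from the
    two-channel observations (7.7.2) and their delayed copies (7.7.3)."""
    g = (a - a_prev) / (b - b_prev)          # (7.7.9e), assumes b(t) != b(t')
    k = 0.5 * (G + g)                        # (7.7.9c)
    q = 2.0 / (G - g)                        # (7.7.9d)
    x = (b + q * np.abs(a - k * b)) / 2.0    # (7.7.9a)
    x_t = (b - q * np.abs(a - k * b)) / 2.0  # (7.7.9b)
    return x, x_t

# Scalar check with s = 0.7, n = -1.3, G = 10, g = 0.2, n(t') = 0.4:
s, n, n_prev, G, g = 0.7, -1.3, 0.4, 10.0, 0.2
x, x_t = mapping_unit_T(G*s + g*n, s + n, G*s + g*n_prev, s + n_prev, G)
assert np.isclose(x, max(s, n)) and np.isclose(x_t, min(s, n))
```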
Generally, the mapping of linear signal space LS(+) into the signal space with lattice properties L(∨, ∧) presupposes the further use of processing algorithms that are invariant with respect to the conditions of parametric and nonparametric prior uncertainty. If further signal processing is constrained to the use of algorithms that are critical with respect to the energetic characteristics of the useful and interference signals, the mapping unit T must form signals of the following form at the outputs x(t) and x̃(t):

x(t) = Gs(t) ∨ n(t) = {[Gs(t) + n(t)] + |Gs(t) − n(t)|}/2; (7.7.10a)

x̃(t) = Gs(t) ∧ n(t) = {[Gs(t) + n(t)] − |Gs(t) − n(t)|}/2, (7.7.10b)


where G = FA (θs ) is a gain of antenna A expected from the known direction of
arrival of the signal θs .
Following the approach used in forming Equation (7.7.6), we can formulate the required relationships based on the identities (7.7.10):

q1 [a(t) − k1 b(t)] = q1 (G − k1 ) · s(t) + q1 (g − k1 ) · n(t) = Gs(t) − n(t); (7.7.11a)

q2 [a(t) + k2 b(t)] = q2 (G + k2 ) · s(t) + q2 (g + k2 ) · n(t) = Gs(t) + n(t). (7.7.11b)


To ensure that the identities (7.7.11) hold, it is necessary to find the values k1,2 and q1,2 that are the roots of the following equation systems:

$$\begin{cases} q_1(G - k_1) = G; \\ q_1(g - k_1) = -1, \end{cases} \qquad (7.7.12a)$$
$$\begin{cases} q_2(G + k_2) = G; \\ q_2(g + k_2) = 1. \end{cases} \qquad (7.7.12b)$$
Solving the equation systems (7.7.12), we obtain the values of the coefficients k1,2 and q1,2, which are equal, respectively, to:

$$k_1 = G(1 + g)/(G + 1); \qquad (7.7.13a)$$
$$q_1 = (G + 1)/(G - g), \qquad (7.7.13b)$$
$$k_2 = G(1 - g)/(G - 1); \qquad (7.7.14a)$$
$$q_2 = (G - 1)/(G - g). \qquad (7.7.14b)$$
Then, according to the identities (7.7.11), we can arrange the required relationships
of the following form:

Gs(t) − n(t) = q1 [a(t) − k1 b(t)]; (7.7.15a)

Gs(t) + n(t) = q2 [a(t) + k2 b(t)], (7.7.15b)



where k1,2 , q1,2 are the coefficients determined by the pairs of relationships (7.7.13a),
(7.7.14a) and (7.7.13b), (7.7.14b), respectively.
Substituting the values Gs(t) − n(t) and Gs(t) + n(t) from Equations (7.7.15)
into the identities (7.7.10), we obtain the required relationships determining the
algorithm of the mapping unit T : a(t), b(t) → x(t), x̃(t); a(t), b(t) ∈ LS (+);
x(t), x̃(t) ∈ L(∨, ∧):


$$T = \begin{cases} x(t) = \{q_2[a(t) + k_2 b(t)] + q_1|a(t) - k_1\cdot b(t)|\}/2; & \text{(a)} \\ \tilde x(t) = \{q_2[a(t) + k_2 b(t)] - q_1|a(t) - k_1\cdot b(t)|\}/2; & \text{(b)} \\ k_1 = G(1 + g)/(G + 1); & \text{(c)} \\ q_1 = (G + 1)/(G - g); & \text{(d)} \\ k_2 = G(1 - g)/(G - 1); & \text{(e)} \\ q_2 = (G - 1)/(G - g); & \text{(f)} \\ g = [a(t) - a(t')]/[b(t) - b(t')]. & \text{(g)} \end{cases} \qquad (7.7.16)$$

7.7.2 Method of Mapping of Signal Space with Semigroup Properties into Signal Space with Lattice Properties
The signal space with lattice properties L(∨, ∧) can be realized by transformation
of the signals from two spaces with semigroup properties, namely by transformation
from spaces with the properties of additive SG(+) and multiplicative SG(·) semi-
groups (see Definition 4.1.1). To realize this method, it is necessary to synthesize
the units in which the additive and multiplicative signal interactions take place.
The results of interactions x(t) and x̃(t) between the signal s(t) and interference
n(t) in signal space with lattice properties L(∨, ∧) with operations of join ∨ and
meet ∧ are realized according to the relationships (7.7.1). The function |s(t) − n(t)|
can be formed on the basis of the known equation:
p
|s(t) − n(t)| = u2 (t) − 4v (t) = w(t), (7.7.17)

where the functions u(t) and v(t) are determined through the operations of addition and multiplication between the signal s(t) and the interference n(t), which take place in the signal spaces with additive SG(+) and multiplicative SG(·) semigroup properties, respectively:

$$\begin{cases} u(t) = s(t) + n(t); & \text{(a)} \\ v(t) = s(t)\cdot n(t). & \text{(b)} \end{cases} \qquad (7.7.18)$$
Based on the relationships (7.7.1), to form the results of the interactions x(t) and x̃(t) between the signal s(t) and the interference n(t) in signal space with lattice properties L(∨, ∧), it is necessary to have the two independent equations (7.7.18) with respect to s(t) and n(t), which yield:

$$x(t) = s(t) \vee n(t) = [u(t) + w(t)]/2 = [u(t) + \sqrt{u^2(t) - 4v(t)}]/2; \qquad (7.7.19a)$$
$$\tilde x(t) = s(t) \wedge n(t) = [u(t) - w(t)]/2 = [u(t) - \sqrt{u^2(t) - 4v(t)}]/2, \qquad (7.7.19b)$$

where $w(t) = \sqrt{u^2(t) - 4v(t)} = |s(t) - n(t)|$ (7.7.17).
The desired relationships determining the method of mapping T′: u(t), v(t) → x(t), x̃(t); u(t) ∈ SG(+), v(t) ∈ SG(·); x(t), x̃(t) ∈ L(∨, ∧) are defined by the equation system:

$$T' = \begin{cases} x(t) = s(t) \vee n(t) = [u(t) + w(t)]/2; & \text{(a)} \\ \tilde x(t) = s(t) \wedge n(t) = [u(t) - w(t)]/2; & \text{(b)} \\ w(t) = \sqrt{u^2(t) - 4v(t)} = |s(t) - n(t)|; & \text{(c)} \\ u(t) = s(t) + n(t); & \text{(d)} \\ v(t) = s(t)\cdot n(t). & \text{(e)} \end{cases} \qquad (7.7.20)$$
An example of a signal space with the mentioned properties is a ring R(+, ·), i.e., an algebraic structure in which two binary operations (addition and multiplication) are defined, such that: (1) R(+) is an additive group with neutral element 0, 0 ∈ R(+): ∀a ∈ R(+) ∃(−a): a + 0 = a, a + (−a) = 0; (2) R(·) is a multiplicative semigroup; (3) the operations of addition and multiplication are connected through the distributive laws:

$$a(b + c) = ab + ac; \quad (b + c)a = ba + ca; \quad a, b, c \in R(+, \cdot).$$

Generally, the methods and algorithms of signal processing within signal spaces with lattice properties are single-channel, regardless of the number of interference sources affecting the input of the processing unit. These methods can be realized in two ways: (1) utilizing a physical medium with lattice properties, where the wave interactions are determined by lattice operations (the direct methods); (2) utilizing the methods of mapping of signal spaces with group (semigroup) properties into the signal spaces with lattice properties (the indirect methods).
Unfortunately, materials with such properties are unknown to the author, and realization of the second group of methods inevitably involves various destabilizing factors that exert a negative influence upon the efficiency of signal processing, caused by inaccurate reproduction of the operations of join and meet between the signals. This circumstance violates the initial properties of the signal space. Signal processing methods used in signal space with lattice properties then cease to be optimal; thus, it is necessary to reoptimize the signal processing algorithms obtained under the assumption that lattice properties hold within the signal space. That is a subject for separate consideration.
Here, however, we note that “reoptimized” signal processing algorithms operating in signal space with lattice properties in the presence of destabilizing factors (for instance, those that decrease the statistical dependence (or cross-correlation) of the interference in the receiving channels) can be more efficient than the known signal processing algorithms functioning in linear signal space.
Certainly, the direct methods of realization of signal spaces with given proper-
ties are essentially more promising. They assume the use of physical medium with
nonlinear properties in which useful and interference signals interact on the basis of
lattice operations. The problem of synthesis of physical medium where interaction
of wave processes is described by the lattice operations oversteps the bounds of the
signal processing theory and requires special research.
Conclusion

The main hypothesis underlying this book can be formulated in the following way.
It is impossible to construct signal processing theory without providing unity of con-
ceptual basics with information theory (at least within its syntactical aspects), and
as a consequence, without their theoretical compatibility and harmonious association.
Apparently, the inverse statement is also true: it is impossible to state information
theory logically (within its syntactical aspects) in isolation from signal processing
theory.
There are two axiomatic statements underlying this book: the main axiom of
signal processing theory and axiom of a measure of binary operation between the el-
ements of signal space built upon generalized Boolean algebra with a measure. The
first axiom establishes the relationships between information quantities contained
in the signals in the input and output of processing unit. The second axiom deter-
mines qualitative and quantitative aspects of informational relationships between
the signals (and their elements) in a space built upon generalized Boolean algebra
with a measure. All principal results obtained in this work are the consequences
from these axiomatic statements.
The main content of this work can be formulated by the following statements:

1. Information can exist only in the presence of the set of its material carriers,
i.e., the signals forming the signal space.
2. Information contained in a signal exists only in the presence of the struc-
tural diversity between the signal elements.
3. Information contained in a couple of signals exists due to the identities
and the distinctions between the elements of these signal structures.
4. Signal space is characterized by properties peculiar to non-Euclidean geometry.
5. Information contained in the signals of signal space becomes measurable
if a measure of information quantity is introduced in such space.
6. Measure of information quantity is an invariant of a group of signal map-
pings in signal space.
7. Measure of information quantity induces metric in signal space.
8. From the standpoint of providing minimum losses of information contained
in the signals, it is expedient to realize signal processing within the signal
spaces with lattice properties.


9. Algorithms and units of signal processing in metric spaces with lattice


properties are characterized by invariance property with respect to para-
metric and nonparametric prior uncertainty conditions.
Unlike the approaches formulated in [150], [152], [157], [166], algorithms and units
of signal processing in metric spaces with lattice properties, characterized by signal
processing quality indices that at qualitative and quantitative levels are unattain-
able for their analogues in linear spaces, have been obtained as a consequence of
answering Question 1.5 in the Introduction.
In this work, the problem of qualitative increasing the signal processing efficiency
under prior uncertainty conditions has been solved theoretically without overstep-
ping the bounds of signal processing theory. The interaction between useful and
interference signals is investigated on the basis of the operations of the space with
lattice properties, i.e., without invasion into the subject matter of other branches
of science that could answer the questions concerning the practical realization of
wave process interactions with aforementioned properties.
The interrelations between information theory, signal processing theory, and al-
gebraic structures, established in the book and shown in Fig. C.1, have the following
meaning.
The notion of physical signal space has been introduced by Definition 4.1.1 on
the basis of the most general (among all known algebraic structures) concept of the
semigroup. Properties of signal spaces used in signal processing theory may differ,
but all the signal spaces are always semigroups. In Chapters 6 and 7, the models of
signal spaces have been considered on the basis of L-groups possessing both group
and lattice properties. Linear spaces, widely used as signal space models are also
characterized by group properties.
The main interrelation considered in this book, namely between generalized
Boolean algebra with a measure and mathematical foundations of information the-
ory, shown in Fig. C.1 has been investigated within Chapters 2 through 5. A mea-
sure introduced upon generalized Boolean algebra accomplishes two functions: (1)
it is a measure of information quantity; (2) it induces metric in signal space. This
statement is a cornerstone of interrelation between information theory and signal
processing theory.
Informational and metric relationships between the signals interacting in signal
spaces with various algebraic properties have been established on the basis of prob-
abilistic and informational characteristics of stochastic signals that are invariants
of groups of signal mappings introduced in Chapter 3. This provides indirect inter-
relation between probability theory and signal processing theory realized through
information theory. Finally, the relationships determining the channel capacity in
signal spaces with L-group properties cited in Section 6.5 establish the interrelation
between information theory and L-groups.
FIGURE C.1 Suggested scheme of interrelations between information theory, signal processing theory, and algebraic structures

It is impossible to consider “Fundamentals of Signal Processing. . . ” to be accomplished, since interesting directions remain to be investigated, for instance, solving the known signal processing problems within linear spaces on the basis of methods
of signal processing in spaces with L-group properties; expanding the book’s prin-
cipal ideas into the signal spaces in the form of stochastic fields; development of
information processing methods and units (including quantum and optic-electronic)
based on generalized Boolean algebra with a measure, and others.
The solution of nonlinear electrodynamics problem of synthesis of a physical
medium in which wave interaction is described by L-group operations requires spe-
cial efforts from the scientific community. However, such an advance would overstep
the bounds of signal processing theory subject and require a multidisciplinary ap-
proach. Practical application of suggested algorithms and units of signal processing
in metric spaces with lattice properties is impossible without the breakthroughs to
the aforementioned technologies and cooperative efforts noted above.
The main problem related to fundamentals of signal processing theory and infor-
mation theory rely on finding three necessary compromises among: (1) mathematics
and physics, (2) algebra and geometry, and (3) continuity and discreteness.

An attempt to compress the content of this book into a single paragraph would
read as follows:

Information is the property that is inherent to non-Euclidean signal space forming a set of material carriers and is also inherent to every single signal. This
property is explained by identities and distinctions between signals and their
structural elements. It also reveals itself in signal interactions. A measure of in-
formation quantity induces metric in signal space, and this measure is invariant
of a group of signal mappings.

The paragraph above summarizes the principal idea behind this book.
A secondary idea is the corollary from the principal one. It arises from the need
to increase the efficiency of signal processing by decreasing or even eliminating
the inevitable losses of information that accompany the interactions of useful and
interference signals. Achieving the goal of more efficient signal processing requires
researching and developing new technologies based upon suggested signal spaces,
in which interactions of useful and interference signals take place with essentially
lesser losses of information as against linear spaces.
The author hopes the book will inform a broad research audience and the respected scientific community. We live in times when the most insane fantasies
become perceptible realities with an incredible speed. The author expresses his
confidence that the signal processing technologies and methods described in this
book will appear sooner rather than later.
Bibliography

[1] Whittaker, E.T. On the functions which are represented by the expansions of the interpolation theory. Proc. Royal Soc. Edinburgh, 35:181–194, 1915.
[2] Carson, J.R. Notes on the theory of modulation. Proc. IRE, 10(1):57–64, 1922.
[3] Gabor, D. Theory of communication. J. IEE, 93(26):429–457, 1946.
[4] Belyaev, Yu.K. Analytical stochastic processes. Probability Theory and Appl.,
4(4):437–444, 1959 (in Russian).
[5] Zinoviev, V.A. and Leontiev, V.K. On perfect codes. Probl. Inf. Trans., 8(1):26–35, 1975 (in Russian).
[6] Wyner, A. The common information of two dependent random variables. IEEE Trans.
Inf. Theory, IT–21(2):163–179, 1975.
[7] Pierce, J.R. An Introduction to Information Theory: Symbols, Signals and Noise.
Dover Publications, New York, 2nd edition, 1980.
[8] Nyquist, H. Certain factors affecting telegraph speed. Bell Syst. Tech.J., 3:324–346,
1924.
[9] Nyquist, H. Certain topics in telegraph transmission theory. Trans. AIEE, 47:617–644,
1928.
[10] Tuller, W.G. Theoretical limitations on the rate of transmission of information. Proc.
IRE, 37(5):468–478, 1949.
[11] Kelly, J. A new interpretation of information rate. Bell Syst. Tech. J., 35:917–926,
1956.
[12] Blackwell, D., Breiman, L., and Thomasian, A.J. The capacity of a class of channels.
Ann. Math. Stat., 30:1209–1220, 1959.
[13] McDonald, R.A. and Schultheiss, P.M. Information rates of Gaussian signals under
criteria constraining the error spectrum. Proc. IEEE, 52:415–416, 1964.
[14] Wyner, A.D. The capacity of the band-limited Gaussian channel. Bell Syst. Tech. J.,
45:359–371, 1965.
[15] Pinsker, M.S. and Sheverdyaev, A.Yu. Transmission capacity with zero error and
erasure. Probl. Inf. Trans., 6(1):13–17, 1970.
[16] Ahlswede, R. The capacity of a channel with arbitrary varying Gaussian channel
probability function. Trans. 6th Prague Conf. Inf. Theory, 13–21, 1971.
[17] Blahut, R. Computation of channel capacity and rate distortion functions. IEEE
Trans. Inf. Theory, 18:460–473, 1972.
[18] Ihara, S. On the capacity of channels with additive non-Gaussian noise. Inf. Control, 37(1):34–39, 1978.
[19] El Gamal, A.A. The capacity of a class of broadcast channels. IEEE Trans. Inf.
Theory, 25(2):166–169, 1979.


[20] Gelfand, S.I. and Pinsker, M.S. Capacity of a broadcast channel with one deterministic
component. Probl. Inf. Trans., 16(1):17–21, 1980.
[21] Sato, H. The capacity of the Gaussian interference channel under strong interference.
IEEE Trans. Inf. Theory, 27(6):786–788, 1981.
[22] Carleial, A. Outer bounds on the capacity of the interference channel. IEEE Trans.
Inf. Theory, 29:602–606, 1983.
[23] Ozarow, L.H. The capacity of the white Gaussian multiple access channel with feed-
back. IEEE Trans. Inf. Theory, 30:623–629, 1984.
[24] Teletar, E. Capacity of multiple antenna Gaussian channels. Eur. Trans. Telecom-
mun., 10(6):585–595, 1999.
[25] Hamming, R.W. Error detecting and error correcting codes. Bell Syst. Tech. J., 29:147–160, 1950.
[26] Rice, S.O. Communication in the presence of noise: probability of error for two
encoding schemes. Bell System Tech. J., 29:60–93, 1950.
[27] Huffman, D.A. A method for the construction of minimum redundancy codes. Proc.
IRE, 40:1098–1101, 1952.
[28] Elias, P. Error-free coding. IRE Trans. Inf. Theory, 4:29–37, 1954.
[29] Shannon, C.E. Certain results in coding theory for noisy channels. Infor. Control.,
1:6–25, 1957.
[30] Shannon, C.E. Coding theorems for a discrete source with a fidelity criterion. IRE
Natl. Conv. Record, 7:142–163, 1959.
[31] Bose, R.C. and Ray-Chaudhuri, D.K. On a class of error correcting binary group
codes. Inf. Control, (3):68–79, 1960.
[32] Wozencraft, J. and Reiffen, B. Sequential Decoding. MIT Press, Cambridge, MA,
1961.
[33] Viterbi, A.J. On coded phase-coherent communications. IRE Trans. Space Electron.
Telem., 7:3–14, 1961.
[34] Gallager, R.G. Low Density Parity Check Codes. MIT Press, Cambridge, MA, 1963.
[35] Abramson, N.M. Information Theory and Coding. McGraw-Hill, New York, 1963.
[36] Viterbi, A.J. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans. Inf. Theory, IT-13(2):260–269, 1967.
[37] Forney, G.D. Convolutional codes: algebraic structure. IEEE Trans. Inf. Theory,
16(6):720–738, 1970.
[38] Berger, T. Rate Distortion Theory: A Mathematical Basis for Data Compression.
Prentice-Hall, Englewood Cliffs, 1971.
[39] Viterbi, A.J. Convolutional codes and their performance in communication systems.
IEEE Trans. Commun. Technol., 19(5):751–772, 1971.
[40] Ziv, J. Coding of sources with unknown statistics–Part II: Distortion relative to a
fidelity criterion. IEEE Trans. Inf. Theory, 18:389–394, 1972.
[41] Slepian, D. and Wolf, J.K. A coding theorem for multiple access channels with cor-
related sources. Bell Syst. Tech. J., 52:1037–1076, 1973.
[42] Pasco, R. Source Coding Algorithms for Fast Data Compression. PhD thesis, Stanford
University, Stanford, 1976.
[43] Wolfowitz, J. Coding Theorems of Information Theory. Springer, Berlin, 3rd edition,
1978.
[44] Viterbi, A.J. and Omura, J.K. Principles of Digital Communication and Coding.
McGraw-Hill, New York, 1979.
[45] Blahut, R.E. Theory and Practice of Error Control Codes. Addison-Wesley, Reading,
MA, 1983.
[46] Lin, S. and Costello, D.J. Error Control Coding: Fundamentals and Applications.
Prentice-Hall, Englewood Cliffs, 1983.
[47] Gray, R.M. Source Coding Theory. Kluwer, Boston, 1990.
[48] Berrou, C., Glavieux, A., and Thitimajshima, P. Near Shannon limit error-correcting
codes and decoding: turbo codes. Proc. 1993 Int. Conf. Commun., 1064–1070, 1993.
[49] Sayood, K. Introduction to Data Compression. Morgan Kaufmann, San Francisco,
1996.
[50] Hartley, R.V.L. Transmission of information. Bell Syst. Tech. J., 7(3):535–563, 1928.
[51] Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J.,
27:379–423, 623–656, 1948.
[52] Shannon, C.E. Communication in the presence of noise. Proc. IRE, 37:10–21,
1949.
[53] Goldman, S. Information Theory. Prentice-Hall, Englewood Cliffs, 1953.
[54] Powers, K.H. A unified theory of information. Technical Report 311, R.L.E. Mas-
sachusetts Institute of Technology, 1956.
[55] Gelfand, I.M., Kolmogorov, A.N., and Yaglom, A.M. On the general definition of the
quantity of information. Dokl. Akad. Nauk SSSR, 111(4):745–748, 1956.
[56] Feinstein, A. Foundations of Information Theory. McGraw-Hill, New York, 1958.
[57] Stam, A. Some inequalities satisfied by the quantities of information of Fisher and
Shannon. Inf. Control, 2:101–112, 1959.
[58] Fano, R.M. Transmission of Information: A Statistical Theory of Communication.
Wiley, New York, 1961.
[59] Kolmogorov, A.N. Three approaches to the quantitative definition of information.
Probl. Inf. Trans., 1(1):1–7, 1965.
[60] Jelinek, F. Probabilistic Information Theory. McGraw-Hill, New York, 1968.
[61] Kolmogorov, A.N. Logical basis for information theory and probability theory. IEEE
Trans. Inf. Theory, 14:662–664, 1968.
[62] Schwartz, M. Information, Transmission, Modulation and Noise. McGraw-Hill, New
York, 1970.
[63] Harkevich, A.A. Information Theory. Pattern Recognition. In Selected Proceedings.
Volume 3. Nauka, 1972 (in Russian).
[64] Slepian, D. Key Papers in the Development of Information Theory. IEEE Press, New
York, 1974.
[65] Cover, T.M. Open problems in information theory. Proc. Moscow Inf. Theory Work-
shop, 35–36, 1975.
[66] Stratonovich, R.L. Information Theory. Soviet Radio, Moscow, 1975 (in Russian).
[67] Blahut, R.E. Principles and Practice of Information Theory. Addison-Wesley, Read-
ing, MA, 1987.
[68] Dembo, A., Cover, T.M., and Thomas, J.A. Information theoretic inequalities. IEEE
Trans. Inf. Theory, 37(6):1501–1518, 1991.
[69] Ihara, S. Information Theory for Continuous Systems. World Scientific, Singapore,
1993.
[70] Cover, T.M. and Thomas, J.A. Elements of Information Theory. Wiley, Hoboken,
2nd edition, 2006.
[71] Bennett, W.R. Time-division multiplex systems. Bell Syst. Tech. J., 20:199–221,
1941.
[72] Shannon, C.E. Communication theory of secrecy systems. Bell Syst. Tech. J., 28:656–
715, 1949.
[73] Wozencraft, J.M. and Jacobs, I.M. Principles of Communication Engineering. Wiley,
New York, 1965.
[74] Gallager, R.G. Information Theory and Reliable Communication. Wiley, New York,
1968.
[75] Fink, L.M. Discrete Messages Transmission Theory. Soviet Radio, Moscow, 1970 (in
Russian).
[76] Liao, H. Multiple Access Channels. PhD thesis, Department of Electrical Engineering,
University of Hawaii, Honolulu, 1972.
[77] Penin, P.I. Digital Information Transmission Systems. Soviet Radio, Moscow, 1976
(in Russian).
[78] Lindsey, W.C. and Simon, M.K. Telecommunication Systems Engineering. Prentice-
Hall, Englewood Cliffs, 1973.
[79] Thomas, C.M., Weidner, M.Y., and Durrani, S.H. Digital amplitude-phase keying
with m-ary alphabets. IEEE Trans. Commun., 22(2):168–180, 1974.
[80] Spilker, J. Digital Communications by Satellite. Prentice-Hall, Englewood Cliffs, 1977.
[81] Penin, P.I. and Filippov, L.I. Information Transmission Electronic Systems. Radio i
svyaz, Moscow, 1984 (in Russian).
[82] Varakin, L.E. Communication Systems with Noise-Like Signals. Radio i svyaz,
Moscow, 1985 (in Russian).
[83] Zyuko, A.G., Falko, A.I., and Panfilov, I.P. Noise Immunity and Efficiency of Com-
munication Systems. Radio i svyaz, Moscow, 1985 (in Russian).
[84] Zyuko, A.G., Klovskiy, D.D., Nazarov, M.V., and Fink, L.M. Signal Transmission
Theory. Radio i svyaz, Moscow, 1986 (in Russian).
[85] Wiener, N. Cybernetics, or Control and Communication in the Animal and the Ma-
chine. Wiley, New York, 1948.
[86] Bar-Hillel, Y. Semantic information and its measures. Trans. 10th Conf. Cybernetics,
33–48, 1952.
[87] Rashevsky, N. Life, information theory and topology. Bull. Math. Biophysics, 17:229–
235, 1955.
[88] Cherry, C.E. On Human Communication: A Review, a Survey, and a Criticism. MIT
Press, Cambridge, MA, 1957.
[89] Kullback, S. Information Theory and Statistics. Wiley, New York, 1959.
[90] Brillouin, L. Science and Information Theory. Academic Press, New York, 1962.
[91] Shreider, Yu.A. On a model of semantic information theory. Probl. Cybernetics,
13:233–240, 1965 (in Russian).
[92] Mityugov, V.V. Physical Foundations of Information Theory. Soviet Radio, Moscow,
1976 (in Russian).
[93] Chaitin, G.J. Algorithmic Information Theory. Cambridge University Press, Cam-
bridge, 1987.
[94] Yockey, H.P. Information Theory and Molecular Biology. Cambridge University Press,
New York, 1992.
[95] Goppa, V.D. Introduction to Algebraic Information Theory. Nauka, Moscow, 1995
(in Russian).
[96] Bennett, C.H. and Shor, P.W. Quantum information theory. IEEE Trans. Inf. Theory,
44:2724–2742, 1998.
[97] Holevo, A.S. Introduction to Quantum Information Theory. MCCME, Moscow, 2002
(in Russian).
[98] Chernavsky, D.S. Synergetics and Information (Dynamic Information Theory). Edi-
torial URSS, Moscow, 2nd edition, 2004 (in Russian).
[99] Schottky, W. Über spontane Stromschwankungen in verschiedenen Elektrizitätsleitern.
Annalen der Physik, 57:541–567, 1918.
[100] Schottky, W. Zur Berechnung und Beurteilung des Schroteffektes. Annalen der Physik,
68:157–176, 1922.
[101] Nyquist, H. Thermal agitation of electric charge in conductors. Phys. Rev., 32:110–
113, 1928.
[102] Johnson, J.B. Thermal agitation of electricity in conductors. Phys. Rev., 32:97–109,
1928.
[103] Papaleksy, N.D. Radio Noise and Noise Reduction. Gos. Tech. Izdat, Moscow, 1942
(in Russian).
[104] Rice, S.O. Mathematical analysis of random noise. Bell Syst. Tech. J., 23(3):282–
332, 1944.
[105] Rice, S.O. Statistical properties of a sine wave plus random noise. Bell Syst. Tech.
J., 27(1):109–157, 1948.
[106] Davenport, W.B. and Root, W.L. Introduction to the Theory of Random Signals and
Noise. McGraw-Hill, New York, 1958.
[107] Bunimovich, V.I. Fluctuating Processes in Radio Receivers. Soviet Radio, Moscow,
1951 (in Russian).
[108] Tihonov, V.I. Fluctuating Noise in the Elements of Receiving and Amplifying Sets.
VVIA, Moscow, 1951 (in Russian).
[109] Doob, J.L. Stochastic Processes. Wiley, New York, 1953.
[110] Zheleznov, N.A. On some questions of spectral-correlative theory of non-stationary
signals. Radiotekhnika i elektronika, 4(3):359–373, 1959 (in Russian).
[111] Deutsch, R. Nonlinear Transformations of Random Processes. Prentice-Hall, Engle-
wood Cliffs, 1962.
[112] Tihonov, V.I. Statistical Radio Engineering. Soviet Radio, Moscow, 1966 (in Russian).
[113] Levin, B.R. Theoretical Basics of Statistical Radio Engineering. Volume 1. Soviet Ra-
dio, Moscow, 1969 (in Russian).
[114] Malahov, A.N. Cumulant Analysis of Stochastic non-Gaussian Processes and Their
Transformations. Soviet Radio, Moscow, 1978 (in Russian).
[115] Tihonov, V.I. Statistical Radio Engineering. Radio i svyaz, Moscow, 1982 (in Russian).
[116] Tihonov, V.I. Nonlinear Transformations of Stochastic Processes. Radio i svyaz,
Moscow, 1986 (in Russian).
[117] Stark, H. and Woods, J.W. Probability and Random Processes with Applications to
Signal Processing. Prentice-Hall, Englewood Cliffs, 2002.
[118] Woodward, P.M. Probability and Information Theory, with Application to Radar.
Pergamon Press, Oxford, 1953.
[119] Siebert, W.M. Studies of Woodward’s uncertainty function. Technical report, R.L.E.
Massachusetts Institute of Technology, 1958.
[120] Wilcox, C.H. The synthesis problem for radar ambiguity functions. Technical Report
157, Mathematical Research Center, U.S. Army, University of Wisconsin, Madison,
1958.
[121] Cook, C.E. and Bernfeld, M. Radar Signals. An Introduction to Theory and Applica-
tion. Academic Press, New York, 1967.
[122] Franks, L.E. Signal Theory. Prentice-Hall, Englewood Cliffs, 1969.
[123] Papoulis, A. Signal Analysis. McGraw-Hill, New York, 1977.
[124] Varakin, L.E. Theory of Signal Systems. Soviet Radio, Moscow, 1978 (in Russian).
[125] Kolmogorov, A.N. Interpolation and extrapolation of stationary random sequences.
Izv. AN SSSR. Ser. Math., 5(1):3–11, 1941 (in Russian).
[126] North, D.O. Analysis of the factors which determine signal/noise discrimination in
pulsed carrier systems. Technical Report PTR-6C, RCA Lab., Princeton, N.J., 1943.
(reprinted in Proc. IRE, Volume 51, 1963, 1016–1027).
[127] Kotelnikov, V.A. Theory of Potential Noise Immunity. Moscow Energetic Institute,
Moscow, 1946 (in Russian).
[128] Wiener, N. Extrapolation, Interpolation and Smoothing of Stationary Time Series.
MIT Press, Cambridge, MA, 1949.
[129] Slepian, D. Estimation of signal parameters in the presence of noise. IRE Trans. Inf.
Theory, 3:68–89, 1954.
[130] Middleton, D. and Van Meter, D. Detection and extraction of signals in noise from
the point of view of statistical decision theory. J. Soc. Ind. Appl. Math., 3:192–253,
1955.
[131] Amiantov, I.N. Application of Decision Making Theory to the Problems of Signal
Detection and Signal Extraction in Background Noise. VVIA, Moscow, 1958 (in Rus-
sian).
[132] Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic
Eng. (Trans. ASME), 82D:35–45, 1960.
[133] Blackman, R.B. Data Smoothing and Prediction. Addison-Wesley, Reading, MA,
1965.
[134] Rabiner, L.R. and Gold, B. Theory and Application of Digital Signal Processing.
Prentice-Hall, Englewood Cliffs, 1975.
[135] Oppenheim, A.V. and Schafer, R.W. Digital Signal Processing. Prentice-Hall, Engle-
wood Cliffs, 1975.
[136] Applebaum, S.P. Adaptive arrays. IEEE Trans. Antennas Propag., 24(5):585–598,
1976.
[137] Monzingo, R. and Miller, T. Introduction to Adaptive Arrays. Wiley, Hoboken, NJ,
1980.
[138] Widrow, B. and Stearns, S.D. Adaptive Signal Processing. Prentice-Hall, Englewood
Cliffs, 1985.
[139] Alexander, S.T. Adaptive Signal Processing: Theory and Applications. Springer,
New York, 1986.
[140] Oppenheim, A.V. and Schafer, R.W. Discrete-Time Signal Processing. Prentice-Hall,
Englewood Cliffs, 1989.
[141] Lim, J.S. Two-dimensional Signal and Image Processing. Prentice-Hall, Englewood
Cliffs, 1990.
[142] Haykin, S. Adaptive Filter Theory. Prentice-Hall, Englewood Cliffs, 2002.
[143] Middleton, D. An Introduction to Statistical Communication Theory. McGraw-Hill,
New York, 1960.
[144] Vaynshtein, L.A. and Zubakov, V.D. Signal Extraction in Background Noise. So-
viet Radio, Moscow, 1960 (in Russian).
[145] Viterbi, A.J. Principles of Coherent Communication. McGraw-Hill, New York, 1966.
[146] Van Trees, H.L. Detection, Estimation, and Modulation Theory. Wiley, New York, 1968.
[147] Sage, A.P. and Melsa, J.L. Estimation Theory with Applications to Communications
and Control. McGraw-Hill, New York, 1971.
[148] Amiantov, I.N. Selected Questions of Statistical Communication Theory. Soviet Radio,
Moscow, 1971 (in Russian).
[149] Levin, B.R. Theoretical Basics of Statistical Radio Engineering. Volume 2. Soviet
Radio, Moscow.
[150] Levin, B.R. Theoretical Basics of Statistical Radio Engineering. Volume 3. Soviet
Radio, Moscow.
[151] Gilbo, E.P. and Chelpanov, I.B. Signal Processing on the Basis of Ordered Selection:
Majority Transformation and Others That are Close to it. Soviet Radio, Moscow,
1977 (in Russian).
[152] Repin, V.G. and Tartakovsky, G.P. Statistical Synthesis under Prior Uncertainty and
Adaptation of Information Systems. Soviet Radio, Moscow, 1977 (in Russian).
[153] Sosulin, Yu.G. Detection and Estimation Theory of Stochastic Signals. Soviet Radio,
Moscow, 1978 (in Russian).
[154] Kulikov, E.I. and Trifonov, A.P. Estimation of Signal Parameters in the Background
of Noise. Soviet Radio, Moscow, 1978 (in Russian).
[155] Tihonov, V.I. Optimal Signal Reception. Radio i Svyaz, Moscow, 1983 (in Russian).
[156] Akimov, P.S., Bakut, P.A., Bogdanovich, V.A., et al. Signal Detection Theory. Radio
i Svyaz, Moscow, 1984 (in Russian).
[157] Kassam, S.A. and Poor, H.V. Robust techniques for signal processing: a survey.
Proc. IEEE, 73(3):433–481, 1985.
[158] Trifonov, A.P. and Shinakov, Yu. S. Joint Classification of Signals and Estimation
of Their Parameters in the Noise Background. Radio i Svyaz, Moscow, 1986 (in
Russian).
[159] Levin, B.R. Theoretical Basics of Statistical Radio Engineering. Radio i Svyaz,
Moscow, 1989 (in Russian).
[160] Kay, S.M. Fundamentals of Statistical Signal Processing. Prentice-Hall, Englewood
Cliffs, 1993.
[161] Poor, H.V. An Introduction to Signal Detection and Estimation. Springer, New York,
2nd edition, 1994.
[162] Helstrom, C.W. Elements of Signal Detection and Estimation. Prentice-Hall, Engle-
wood Cliffs, 1994.
[163] Middleton, D. An Introduction to Statistical Communication Theory. IEEE, New
York, 1996.
[164] Sklar, B. Digital Communications: Fundamentals and Applications. Prentice-Hall,
Englewood Cliffs, 2nd edition, 2001.
[165] Minkoff, J. Signal Processing Fundamentals and Applications for Communications
and Sensing Systems. Artech House, Norwood, MA, 2002.
[166] Bogdanovich, V.A. and Vostretsov, A.G. Theory of Robust Detection, Classification
and Estimation of Signals. Fiz. Math. Lit, Moscow, 2004 (in Russian).
[167] Vaseghi, S.V. Advanced Digital Signal Processing and Noise Reduction. Wiley, New
York, 2006.
[168] Huber, P.J. Robust Statistics. Wiley, New York, 1981.
[169] Fraser, D.A. Nonparametric Methods in Statistics. Wiley, New York, 1957.
[170] Noether, G.E. Elements of Nonparametric Statistics. Wiley, New York, 1967.
[171] Hájek, J. and Šidák, Z. Theory of Rank Tests. Academic Press, New York, 1967.
[172] Lehmann, E.L. Nonparametrics: Statistical Methods Based on Ranks. Holden-Day,
Oakland, CA, 1975.
[173] Kolmogorov, A.N. Complete metric Boolean algebras. Philosophical Studies, 77(1):57–
66, 1995.
[174] Horn, A. and Tarski, A. Measures in Boolean algebras. Trans. Amer. Math. Soc.,
64:467–497, 1948.
[175] Vladimirov, D.A. Boolean Algebras. Nauka, Moscow, 1969 (in Russian).
[176] Sikorski, R. Boolean Algebras. Springer, Berlin, 2nd edition, 1964.
[177] Boltyansky, V.G. and Vilenkin, N.Ya. Symmetry in Algebra. Nauka, Moscow, 1967
(in Russian).
[178] Weyl, H. Symmetry. Princeton University Press, 1952.
[179] Wigner, E.P. Symmetries and Reflections: Scientific Essays. Indiana University Press,
Bloomington, 1967.
[180] Dorodnitsin, V.A. and Elenin, G.G. Symmetry of nonlinear phenomena. In Computers
and Nonlinear Phenomena: Informatics and Modern Nature Science, pages 123–191.
Nauka, Moscow, 1988 (in Russian).
[181] Bhatnagar, P.L. Nonlinear Waves in One-dimensional Dispersive Systems. Clarendon
Press, Oxford, 1979.
[182] Kurdyumov, S.P., Malinetskiy, G.G., Potapov, A.B., and Samarskiy, A.A. Structures
in nonlinear medium. In Computers and Nonlinear Phenomena: Informatics and
Modern Nature Science, pages 5–43. Nauka, Moscow, 1988 (in Russian).
[183] Whitham, G.B. Linear and Nonlinear Waves. Wiley, New York, 1974.
[184] Dubrovin, B.A., Fomenko, A.T., and Novikov, S.P. Modern Geometry. Nauka,
Moscow, 1986 (in Russian).
[185] Klein, F. Non-Euclidean Geometry. Editorial URSS, Moscow, 2004 (in Russian).
[186] Rosenfeld, B.A. Non-Euclidean Spaces. Nauka, Moscow, 1969 (in Russian).
[187] Dirac, P.A.M. Principles of Quantum Mechanics. Clarendon Press, Oxford, 1930.
[188] Shannon, C.E. The bandwagon. IRE Trans. Inf. Theory, 2(1):3, 1956.
[189] Ursul, A.D. Information. Methodological Aspects. Nauka, Moscow, 1971 (in Russian).
[190] Berg, A.I. and Biryukov, B.V. Cybernetics: the way of control problem solving. In
Future of Science. Volume 3. Nauka, 1970 (in Russian).
[191] Steane, A.M. Quantum computing. Rep. Prog. Phys., 61:117–173, 1998.
[192] Feynman, R.P. Quantum mechanical computers. Found. Phys., 16:507–531, 1986.
[193] Wiener, N. I Am a Mathematician. Doubleday, New York, 1956.
[194] Chernin, A.D. Physics of Time. Nauka, Moscow, 1987 (in Russian).
[195] Leibniz, G.W. Monadology: An Edition for Students. University of Pittsburgh Press,
1991.
[196] Riemann, B. On the Hypotheses Which Lie at the Foundation of Geometry. Göttingen
University, 1854.
[197] Klein, F. Highest Geometry. Editorial URSS, Moscow, 2004 (in Russian).
[198] Zheleznov, N.A. On Some Questions of Informational Electric System Theory.
LKVVIA, Leningrad, 1960 (in Russian).
[199] Yaglom, A.M. and Yaglom, I.M. Probability and Information. Nauka, Moscow, 1973
(in Russian).
[200] Prigogine, I. and Stengers, I. Time, Chaos and the Quantum: Towards the Resolution
of the Time Paradox. Harmony Books, New York, 1993.
[201] Shreider, Yu.A. On quantitative characteristics of semantic information. Sci. Tech.
Inform., (10), 1963 (in Russian).
[202] Ogasawara, T. Compact metric Boolean algebras and vector lattices. J. Sci. Hiroshima
Univ., 11:125–128, 1942.
[203] Mibu, Y. Relations between measures and topology in some Boolean spaces. Proc.
Imp. Acad. Tokyo, 20:454–458, 1944.
[204] Ellis, D. Autometrized Boolean algebras. Canadian J. Math., 3:87–93, 1951.
[205] Tomita, M. Measure theory of complete Boolean algebras. Mem. Fac. Sci. Kyusyu
Univ., 7:51–60, 1952.
[206] Hewitt, E. A note on measures in Boolean algebras. Duke Math. J., 20:253–256, 1953.
[207] Vulih, B.Z. On Boolean measure. Uchen. Zap. Leningr. Ped. Inst., 125:95–114, 1956
(in Russian).
[208] Lamperti, J. A note on autometrized Boolean algebras. Amer. Math. Monthly, 64:188–
189, 1957.
[209] Heider, L.J. A representation theorem for measures on Boolean algebras. Mich. Math.
J., 5:213–221, 1958.
[210] Kelley, J.L. Measures on Boolean algebras. Pacific J. Math., 9:1165–1177, 1959.
[211] Vladimirov, D.A. On the countable additivity of a Boolean measure. Vestnik Leningr.
Univ. Mat. Mekh. Astronom, 16(19):5–15, 1961 (in Russian).
[212] Vinokurov, V.G. Representations of Boolean algebras and measure spaces. Math. Sb.,
56 (98)(3):374–391, 1962 (in Russian).
[213] Vladimirov, D.A. Invariant measures on Boolean algebras. Math. Sb., 67 (109)(3):440–
460, 1965 (in Russian).
[214] Stone, M.H. Postulates for Boolean algebras and generalized Boolean algebras. Amer.
J. Math., 57:703–732, 1935.
[215] Stone, M.H. The theory of representations for Boolean algebras. Trans. Amer. Math.
Soc., 40:37–111, 1936.
[216] McCoy, N.H. and Montgomery, D. A representation of generalized Boolean rings.
Duke Math. J., 3:455–459, 1937.
[217] Grätzer, G. and Schmidt, E.T. On the generalized Boolean algebras generated by a
distributive lattice. Nederl. Akad. Wet. Proc., 61:547–553, 1958.
[218] Subrahmanyam, N.V. Structure theory for generalized Boolean rings. Math. Ann.,
141:297–310, 1960.
[219] Whitney, H. On the abstract properties of linear dependence. Amer. J. Math., 57:509–
533, 1935.
[220] Menger, K. New foundations of projective and affine geometry. Ann. Math., 37:456–
482, 1936.
[221] Birkhoff, G. Lattice Theory. American Mathematical Society, Providence, 1967.
[222] Blumenthal, L.M. Boolean geometry. Rend. Circ. Mat. Palermo, 1:1–18, 1952.
[223] Artamonov, V.A., Saliy, V.N., Skornyakov, L.A., Shevrin, L.N., and Shulgeyfer, E.G.
General Algebra. Volume 2. Nauka, Moscow, 1991 (in Russian).
[224] Hilbert, D. The Foundations of Geometry. Open Court Company, 2001.
[225] Marczewski, E. and Steinhaus, H. On a certain distance of sets and the corresponding
distance of functions. Colloq. Math., 6:319–327, 1958.
[226] Zolotarev, V.M. Modern Theory of Summation of Independent Random Variables.
Nauka, Moscow, 1986 (in Russian).
[227] Buldygin, V.V. and Kozachenko, Yu.V. Metric Characterization of Random Variables
and Random Processes. American Mathematical Society, Providence, 2000.
[228] Samuel, E. and Bachi, R. Measure of distance of distribution functions and some
applications. Metron, 13:83–112, 1964.
[229] Dudley, R.M. Distances of probability measures and random variables. Ann. Math.
Stat., 39(5):1563–1572, 1968.
[230] Senatov, V.V. On some properties of metrics at the set of distribution functions.
Math. Sb., 31(3):379–387, 1977.
[231] Kendall, M.G. and Stuart, A. The Advanced Theory of Statistics. Inference and
Relationship. Charles Griffin, London, 1961.
[232] Cramér, H. Mathematical Methods of Statistics. Princeton University Press, 1946.
[233] Melnikov, O.V., Remeslennikov, V.N., Romankov, V.A., Skornyakov, L.A., and Shes-
takov, I.P. General Algebra. Volume 1. Nauka, Moscow, 1990 (in Russian).
[234] Prudnikov, A.P., Brychkov, Yu.A., and Marichev, O.I. Integrals and Series: Elemen-
tary Functions. Gordon & Breach, New York, 1986.
[235] Paley, R.E. and Wiener, N. Fourier Transforms in the Complex Domain. American
Mathematical Society, Providence, 1934.
[236] Baskakov, S.I. Radio Circuits and Signals. Vysshaya Shkola, Moscow, 2nd edition,
1988 (in Russian).
[237] Oxtoby, J. Measure and Category. Springer, New York, 2nd edition, 1980.
[238] Kotelnikov, V.A. On the transmission capacity of “ether” and wire in electro-
communications. In Modern Sampling Theory: Mathematics and Applications.
Birkhauser, Boston, 2000. (Reprint of 1933 edition).
[239] Whittaker, J.M. Interpolatory function theory. Cambridge Tracts on Math. and Math.
Physics, (33), 1935.
[240] Jerri, A.J. The Shannon sampling theorem: its various extensions and applications:
a tutorial review. Proc. IEEE, 65:1565–1596, 1977.
[241] Dmitriev, V.I. Applied Information Theory. Vysshaya shkola, Moscow, 1989 (in
Russian).
[242] Popoff, A.A. Sampling theorem for the signals of space built upon generalized Boolean
algebra with a measure. Izv. VUZov. Radioelektronika, (1):31–39, 2010 (in Russian).
Reprinted in Radioelectronics and Communications Systems, 53 (1): 25–32, 2010.
[243] Tihonov, V.I. and Harisov, V.N. Statistical Analysis and Synthesis of Electronic Means
and Systems. Radio i svyaz, Moscow, 1991 (in Russian).
[244] Harkevich, A.A. Noise Reduction. Nauka, Moscow, 1965 (in Russian).
[245] Deza, M.M. and Laurent, M. Geometry of Cuts and Metrics. Springer, Berlin, 1997.
[246] Aleksandrov, P.S. Introduction to Set Theory and General Topology. Nauka, Moscow,
1977 (in Russian).
[247] Borisov, V.A., Kalmykov, V.V., and Kovalchuk, Ya.M. Electronic Systems of Infor-
mation Transmission. Radio i svyaz, Moscow, 1990 (in Russian).
[248] Zyuko, A.G., Klovskiy, D.D., Korzhik, V.I., and Nazarov, M.V. Electric Communi-
cation Theory. Radio i svyaz, Moscow, 1999 (in Russian).
[249] Tihonov, V.I. and Mironov, M.A. Markov Processes. Soviet Radio, Moscow, 1977 (in
Russian).
[250] Zacks, S. The Theory of Statistical Inference. Wiley, New York, 1971.
[251] Lehmann, E.L. Theory of Point Estimation. Wiley, New York, 1983.
[252] David, H.A. Order Statistics. Wiley, New York, 1970.
[253] Gumbel, E.J. Statistics of Extremes. Columbia University Press, New York, 1958.
[254] Van Trees, H. L. Detection, Estimation, and Modulation Theory. Wiley, New York,
1968.
[255] Grätzer, G. General Lattice Theory. Akademie Verlag, Berlin, 1978.
[256] Le Cam, L. On some asymptotic properties of maximum likelihood estimates and
related Bayes estimates. Univ. California Publ. Statist., 1:277–330, 1953.
[257] Mudrov, V.I. and Kushko, V.L. The Least Modules Method. Znanie, Moscow, 1971
(in Russian).
[258] Kendall, M.G. and Stuart, A. Distribution theory. In Advanced Theory of Statistics.
Volume 1. Charles Griffin, 1960.
[259] Cohn, P.M. Universal Algebra. Harper & Row, New York, 1965.
[260] Grätzer, G. Universal Algebra. Springer, Berlin, 1979.
[261] Pólya, G. Remarks on characteristic functions. Proc. First Berkeley Symp. Math.
Stat. Probabil., 115–123, 1949.
[262] Dwight, H.B. Tables of Integrals and Other Mathematical Data. Macmillan, New
York, 1961.
[263] Tukey, J.W. Exploratory Data Analysis. Addison-Wesley, Reading, MA, 1977.
[264] Tihonov, V.I. Overshoots of Random Processes. Nauka, Moscow, 1970 (in Russian).
[265] Mallows, C.L. Some theory of nonlinear smoothers. Ann. Stat., 8:695–715, 1980.
[266] Gallagher, N.C. and Wise, G.L. A theoretical analysis of the properties of median
filters. IEEE Trans. Acoustics, Speech and Signal Proc., ASSP-29:1136–1141, 1981.
[267] Shirman, Ya.D. and Manzhos, V.N. Theory and Technique for Radar Data Processing
in the Presence of Noise. Radio i Svyaz, Moscow, 1981 (in Russian).
[268] Popoff, A.A. Unit of signal space mapping. Patent of Ukraine 56926, H 04 B 15/00,
2011.
[269] Popoff, A.A. Method of signal space mapping. Patent of Ukraine 60051, H 04 B 15/00,
2011.
[270] Popoff, A.A. Interrelation of signal theory and information theory. The ways of logical
difficulties overcoming. J. State Univ. Inform. Comm. Tech., 4(4):312–324, 2006 (in
Russian).
[271] Popoff, A.A. Probabilistic-statistical and informational characteristics of stochastic
processes that are invariant with respect to the group of bijective mappings. J. State
Univ. Inform. Comm. Tech., 5(1):52–62, 2007 (in Russian).
[272] Popoff, A.A. Informational relationships between the elements of signal space built
upon generalized Boolean algebra with a measure. J. State Univ. Inform. Comm.
Tech., 5(2):175–184, 2007 (in Russian).
[273] Popoff, A.A. Information quantity measure in signal space built upon generalized
Boolean algebra with a measure. J. State Univ. Inform. Comm. Tech., 5(3):253–261,
2007 (in Russian).
[274] Popoff, A.A. Informational and physical signal interactions in signal space. The notion
of ideal signal interaction. J. State Univ. Inform. Comm. Tech., 5(4):19–27, 2007 (in
Russian).
[275] Popoff, A.A. Invariants of one-to-one functional transformations of stochastic pro-
cesses. Izv. VUZov. Radioelektronika, (11):35–43, 2007 (in Russian). Reprinted in
Radioelectronics and Communications Systems, 50 (11): 609–615, 2007.
[276] Popoff, A.A. Information quantity transferred by binary signals in the signal space
built upon generalized Boolean algebra with a measure. J. State Univ. Inform. Comm.
Tech., 6(1):27–32, 2008 (in Russian).
[277] Popoff, A.A. Homomorphic mappings in the signal space built upon generalized
Boolean algebra with a measure. J. State Univ. Inform. Comm. Tech., 6(3):238–247,
2008 (in Russian).
[278] Popoff, A.A. Information quantity transferred by m-ary signals in the signal space
built upon generalized Boolean algebra with a measure. J. State Univ. Inform. Comm.
Tech., 6(4):287–295, 2008 (in Russian).
[279] Popoff, A.A. The unified axiomatics of signal theory and information theory. Inform.
Sec. Sci. Proc., (5):199–205, 2008 (in Russian).
[280] Popoff, A.A. Comparative analysis of estimators of unknown nonrandom signal pa-
rameter in linear space and K-space. Izv. VUZov. Radioelektronika, (7):29–40, 2008
(in Russian). Reprinted in Radioelectronics and Communications Systems, 51 (7):
368–376, 2008.
[281] Popoff, A.A. Possibilities of processing the signals with completely defined parameters
under interference (noise) background in signal space with algebraic lattice properties.
Izv. VUZov. Radioelektronika, (8):25–32, 2008 (in Russian). Reprinted in Radioelec-
tronics and Communications Systems, 51 (8): 421–425, 2008.
[282] Popoff, A.A. Characteristics of processing the harmonic signals in interference (noise)
background under their interaction in K-space. Izv. VUZov. Radioelektronika, (10):69–
80, 2008 (in Russian). Reprinted in Radioelectronics and Communications Systems,
51 (10): 565–572, 2008.
[283] Popoff, A.A. Informational characteristics and properties of stochastic signal con-
sidered as subalgebra of generalized Boolean algebra with a measure. Izv. VUZov.
Radioelektronika, (11):57–67, 2008 (in Russian). Reprinted in Radioelectronics and
Communications Systems, 51 (11): 615–621, 2008.
[284] Popoff, A.A. Noiseless channel capacity in signal space built upon generalized Boolean
algebra with a measure. J. State Univ. Inform. Comm. Tech., 7(1):54–62, 2009 (in
Russian).
[285] Popoff, A.A. Geometrical properties of signal space built upon generalized Boolean
algebra with a measure. J. State Univ. Inform. Comm. Tech., 7(3):27–32, 2009 (in
Russian).
[286] Popoff, A.A. Characteristics and properties of signal space built upon generalized
Boolean algebra with a measure. Izv. VUZov. Radioelektronika, (5):34–45, 2009 (in
Russian). Reprinted in Radioelectronics and Communications Systems, 52 (5): 248–
255, 2009.
[287] Popoff, A.A. Peculiarities of continuous message filtering in signal space with alge-
braic lattice properties. Izv. VUZov. Radioelektronika, (9):29–40, 2009 (in Russian).
Reprinted in Radioelectronics and Communications Systems, 52 (9): 474–482, 2009.
[288] Popoff, A.A. Informational characteristics of scalar random fields that are invariant
with respect to group of their bijective mappings. Izv. VUZov. Radioelektronika,
(11):67–80, 2009 (in Russian). Reprinted in Radioelectronics and Communications
Systems, 52 (11): 618–627, 2009.
[289] Popoff, A.A. Analysis of stochastic signal filtering algorithm in noise background
in K-space of signals. J. State Univ. Inform. Comm. Tech., 8(3):215–224, 2010 (in
Russian).
[290] Popoff, A.A. Resolution of the harmonic signal filter in the space with algebraic lattice
properties. J. State Univ. Inform. Comm. Tech., 8(4):249–254, 2010 (in Russian).
[291] Popoff, A.A. Classification of the deterministic signals against background noise in
signal space with algebraic lattice properties. J. State Univ. Inform. Comm. Tech.,
9(3):209–217, 2011 (in Russian).
[292] Popoff, A.A. Advanced electronic Counter-Counter-Measures technologies under ex-
treme interference environment. Mod. Inform. Tech. Sphere Defence, (2):65–74, 2011.
[293] Popoff, A.A. Quality indices of APSK signal processing in signal space with L-group
properties. Mod. Special Tech., 25(2):61–72, 2011 (in Russian).
[294] Popoff, A.A. Invariants of groups of bijections of stochastic signals (messages) with ap-
plication to statistical analysis of encryption algorithms. Mod. Inform. Sec., 10(1):13–
20, 2012 (in Russian).
[295] Popoff, A.A. Detection of the deterministic signal against background noise in signal
space with lattice properties. J. State Univ. Inform. Comm. Tech., 10(2):65–71, 2012
(in Russian).
[296] Popoff, A.A. Detection of the harmonic signal with joint estimation of time of signal
arrival (ending) in signal space with L-group properties. J. State Univ. Inform. Comm.
Tech., 10(4):32–43, 2012 (in Russian).
[297] Popoff, A.A. Invariants of groups of mappings of stochastic signals samples in metric
space with L-group properties. J. State Univ. Inform. Comm. Tech., 11(1):28–38,
2013 (in Russian).
[298] Popoff, A.A. Comparative analysis of informational relationships under signal inter-
actions in spaces with various algebraic properties. J. State Univ. Inform. Comm.
Tech., 11(2):53–69, 2013 (in Russian).
[299] Popoff, A.A. Unit of digital signal filtering. Patent of Ukraine 57507, G 06 F 17/18,
2011.
[300] Popoff, A.A. Method of digital signal filtering. Patent of Ukraine 57507, G 06 F 17/18,
2011.
[301] Popoff, A.A. Radiofrequency pulse resolution unit. Patent of Ukraine 59021,
H 03 H 15/00, 2011.
[302] Popoff, A.A. Radiofrequency pulse resolution method. Patent of Ukraine 65236,
H 03 H 15/00, 2011.
[303] Popoff, A.A. Unit of signal filtering. Patent of Ukraine 60222, H 03 H 17/00, 2011.
[304] Popoff, A.A. Method of signal filtering. Patent of Ukraine 61607, H 03 H 17/00, 2011.
[305] Popoff, A.A. Deterministic signals demodulation unit. Patent of Ukraine 60223,
H 04 L 27/14, 2011.
[306] Popoff, A.A. Deterministic signals demodulation method. Patent of Ukraine 60813,
H 04 L 27/14, 2011.
[307] Popoff, A.A. Transversal filter. Patent of Ukraine 71310, H 03 H 15/00, 2012.
[308] Popoff, A.A. Transversal filter. Patent of Ukraine 74846, H 03 H 15/00, 2012.
[309] Popoff, A.A. Fundamentals of Signal Processing in Metric Spaces with Lattice Prop-
erties. Part I. Mathematical Foundations of Information Theory with Application to
Signal Processing. Central Research Institute of Armament and Defence Technologies,
Kiev, 2013 (in Russian).
Index

Symbols harmonic signal detection algorithm


σ-additive measure, 28 synthesis and analysis, 324–341
linear frequency modulated signal
A detection algorithm synthesis
Abelian group, 59 and analysis, 342–351
Abit (absolute unit), 63, 169, 177–178, Analytical stochastic processes, 104–105
182, 200–201 Analyticity condition of Gaussian
Abit per second (abit/s) measure of stochastic process, 104
channel capacity, 249 Angle, 48
Absolute geometry, 45–46 Angular measure, 52
Absolute unit (abit), 63, 169, 177–178, Autocorrelation function (ACF), 175,
182, 200–201 179–181
Active mapping, 56 Axioms
Additive interaction main axiom of signal processing
channel capacity in presence of theory, 22–23, 385
noise, 245 measure of binary operations,
features of signal interaction in 59–60, 109–110, 126, 385
signal space, 147–156 signal spaces with lattice
ideal/quasi-ideal interactions, properties, 269–270
147–156 system of metric space built upon
information quantity in a signal, generalized Boolean algebra
146 with a measure, 46–51
optimality criteria, 146 congruence, 48–49
quality indices problems connection (containment), 47
signal classification, 230–232 continuity, 49
signal detection, 238 order, 47–48
signal classification problem, 352 parallels for sheet and plane,
signal detection problem, 304 49–50
unknown nonrandom parameter
estimation in sample space B
with lattice properties, 271 Barycenter, 54, 55, 58, 318, 321, 333,
Additive measure on Boolean algebra, 338, 348, 350
28 Binary signals, information quantity
Additive noise and information carried by, 174–179
quantity, 16 Bit per second (bit/s) measure of
Amplitude, 173 channel capacity, 244, 247, 249
Amplitude estimation Bits as measure of information
quantity, 178, 200–201

403
404 Index

Boolean algebra, 27–28 Coefficient of statistical interrelation,


Boolean algebra with a measure, 23, 25, 78, 116
177, See also Generalized Communication channel, 196, 243, See
Boolean algebra with a measure also Channel capacity
Boolean lattice, 27 Communication channel capacity, See
Boolean ring, 28 Channel capacity
Butterworth filters, 107–108 Complemented lattice, 27
Complements, 27
C Conditional probability density
Cantor’s axiom, 49 functions, See Probability
Capacity of a communication channel, density functions
See Channel capacity Conditional probability of false alarm,
Carrier signal, 173, 208 237, 310, 314, 322, 330, 344,
Channel capacity, See Channel 351
capacity, 173 Congruence axiom for metric space
always a finite quantity, 7, 188, 196, built upon generalized Boolean
199, 201, 247, 358 algebra with a measure, 48–49
bit/s and abit/s measures, 244, 247, Connection (containment) axiom for
249 metric space built upon
definition, 243, 248 generalized Boolean algebra
generalized discrete random with a measure, 47
sequence, 183–190 Continuity axiom for metric space built
information quantity carried by upon generalized Boolean
binary signals, 174–179 algebra with a measure, 49
information quantity carried by Continuous channel capacity, 190–196
continuous signals, 190–196 distinguishing continuous and
information quantity carried by discrete channels, 196
m-ary signals, 179–190 noiseless channels, 200–201
noiseless channels, 196–206 presence of interference (noise),
continuous channel capacity, 244–247
200–201 Continuous random variable entropy, 15
discrete channel capacity, 199 Correlation ratio, 76
evaluation of, 201–206 Cosine theorem for metric space built
presence of interference (noise), upon generalized Boolean
242–250 algebra with a measure, 51–54
continuous channels, 244–247 Cos-invariant theorem, 52
discrete channels, 247–250 Cramer-Rao lower bound, 271
related concepts and notation, 173 Cryptographic coding paradox, 17
Shannon’s limit, 155–156, 357 Cumulative distribution functions
signal classification problem, (CDFs)
357–358 estimator for quality indices of
Wiener’s statement, 186 unknown nonrandom
Classification, See Signal classification parameter, 219-221
Closed interval, 38 probabilistic measure of statistical
interrelationship, 98-99
Index 405

RF signal resolution algorithm, 371 Discretization, 67–68, 134–136


signal detection quality index, Disjoint elements, 65
239–240 Distributive lattice, 26–27
signal extraction algorithm Diversity, 2, 3, 5, 12, 21
analysis, 294–296 Doppler shifts, 368–369
signal resolution-detection quality Dynamic error of signal filtering, 301
indices, 252 Dynamic error of signal smoothing, 302
signal resolution-estimation quality
indices, 262 E
stochastic process description, 76 Effective width, 79–80, 197, 203–206
unknown nonrandom parameter Elliptic geometry, 51
estimation in sample spaces Entropy, 7, 8
with lattice properties, 274–275 differential entropy noninvariance
Curvature measure, 68, 137 problem, 17
Cybernetics, 6 information relationship, 9, 14–18,
21
D transition from discrete to
Decision gate (DG), 309–310, 320, 335, continuous random variable,
337, 349, 354 15–16
Deterministic approach, 2 Envelope computation unit (ECU), 320,
combining probabilistic approaches 337, 348
for measures of information Equivalence principle in sampling
quantity, 18–19 theorem, 134, 136, See also
Deterministic signal classification, Equivalent representation
algorithm synthesis and theorems
analysis, 352–358 Equivalent representation theorems,
Deterministic signal detection, 69–71, 138–141
algorithm synthesis and Estimation errors, 271
analysis, 305–311 Estimation problem, 208–209
Difference, 27 quality indices of unknown
Differential entropy, 15-16 parameter, 218–229 See also
Differential entropy noninvariance Estimators; Quality indices
property, 17 Estimator error, fluctuation component
Dirac delta function, 108, 132, 262, 295, of, 301
371 Estimator formation unit (EFU),
Discrete channel capacity 348–349
distinguishing continuous and Estimators, 208, 272
discrete channels, 196 for known and unknown
generalized discrete random modulating functions, 302–303
sequence, 183–190 quality index of estimator
noiseless channels, 199 definitions, 212, 227, 233–234,
normalized autocorrelation 240, 253, 283
function, 179–180 quality indices for metric sample
presence of interference (noise), space, 282–287
247–250 superefficiency of, 226, 271, 281
406 Index

unknown nonrandom parameters in ideal/quasi-ideal signal interactions


sample spaces with lattice during additive interactions in
properties, 271–281 See also signal space, 151–156
Quality indices informational properties of
Estimator space with metric, 261–262, information carrier space,
282 59–73, See also Information
Euclidean geometry, 10–11 carrier space, informational
Everywhere dense set, 41 properties
Extraction problem, See Signal filtering notation and identities, 26–28
subalgebra for physical and
F informational signal space, 124
False alarm conditional probability, 237, Generalized Boolean lattice, 27
314, 322, 330, 344, 351 Generalized Boolean ring, 27, 59, 130
Filtering (extraction) problem, See Generalized discrete random sequence,
Signal filtering 183–190
Filtering error, fluctuation component Generator points of plane, 44
of, 301 Generator points of sheet, 42
Filter of signal space with lattice Genetic information transmission, 6
properties, 367
Finite additive measure, 28 H
Fischer’s measure of information Half-perimeter of triangle, 52
quantity, 5 Harmonic signal detection, algorithm
Fluctuation component of signal synthesis and analysis, 311
estimator error, 301 block diagrams, 319
Fourier transform, 78–79 with joint estimation of signal
Franks, Lewis, 11 arrival, 311–324
Frequency modulated signal detection, with joint estimation of signal
algorithm synthesis and arrival, initial phase, and
analysis for metric space with amplitude, 324–341
L-group properties, 342–351 Hartley’s logarithmic measure, 14, 184,
Frequency modulation, 173 191
Heaviside step function, 132, 139, 182,
G 198, 262, 275, 294, 371
Galilei, Galileo, 9 Hexahedron, 30–32
Gaussian communication channel, 196 Hilbert transform, 303, 314, 330, 344
Generalized Boolean algebra, 27 Hilbert’s axiom system, 35–36
Generalized Boolean algebra with a Homomorphic mappings in signal space
measure, 23, 26, 122, 126, 386 built upon generalized Boolean
axiom of measure of binary algebra with a measure,
operations, 109–110, 126, 385 133–145
definition, 28–29 Homomorphism of stochastic process,
geometrical properties of metric 134–140
space built upon, 29–59 Hyperbolic geometry, 51
homomorphic mappings in signal Hyperspectral density (HSD), 79–81,
space built upon, 133–145 202–206
Index 407

Hypothesis testing, 305 information quantity


relationships, 65–67
I sets under discretization, 67–68
Ideal and quasi-ideal signal interaction, theorems on equivalent
147–156 representation, 69–71
Ideal filter, 197 theorems on isomorphic mapping,
Identity of signals in informational 71–73
sense, 150 notation and identities, 26–28 See
Indexed set, 26, 39 also Informational signal space;
Information, defining, 4–5, 25, 388 Metric space built upon
Informational inequality (informational generalized Boolean algebra
paradox) in signal space, with a measure; Signal space
149–150 Information distribution density (IDD)
Informational inequality of signal of stochastic processes,
processing theorem, 213–214 101–109, 112, 122, 134, 198
Informational interrelations, 125 features of signal interaction in
Informational paradox (informational signal space with algebraic
inequality), 149–150 properties, 148
Informational signal space, 123–133 homomorphism of continuous
features of signal interaction, See stochastic process, 134, 136–140
Signal space, features of signal informational signal space
interaction in space with definition, 125
algebraic properties information quantity carried by
information quantity definitions binary signals, 176
and relationships, 126–133, 149 information quantity carried by
morphism of physical signal space, m-ary signals, 181–182
122 information quantity definitions
units of information quantity, and relationships, 110–113
127–128 See also Information mutual IDD, 115–122
carrier space; Metric space noiseless channel capacity
built upon generalized Boolean evaluation, 201–206
algebra with a measure; Signal physical signal space definition, 124
space See also Mutual information
Informational variable, 173 distribution density
Information and entropy relationship, 9, Information losses, 267
14–18 of the first genus, 68, 137
Information carrier space in physical signal space during
definition, 28–29 additive interactions, 149–151,
informational properties, 59–73 267
axiom of a measure of binary of the second genus, 68–69, 137
operation, 59–60 signal processing quality indices,
information losses of first and See Quality indices
second genus, 68–69 Information quantity, 267
information quantity definitions, carried by binary signals, 174–179
60–64
408 Index

carried by continuous signals, Generalized Boolean algebra


190–196 with a measure
carried by m-ary signals, 179–190 distinguishing continuous and
entropy as measure of, 21 discrete channels, 196
generalized discrete random information quantity in a signal,
sequence, 183–190 146–147
main axiom of signal processing measurement and, 7–8
theory, 22–23 morphism of physical signal space,
measures, See Measure of 124
information quantity natural sciences and, 5–10
mutual information in a signal, signal processing theory
146–147 relationship, 25–26, 385,
theoretical problems, 14–22 See 386–387
also Channel capacity; subjective perception of message
Information distribution problem, 18
density (IDD) of stochastic theoretical problems, 14–22 See also
processes; Normalized measure Information quantity; Measure
of statistical interrelationship; of information quantity
Probabilistic measure of Information transmitting and receiving
statistical interrelationship; channel, 196, See also Channel
specific quantities capacity
Information quantity, definitions and Initial phase estimation
relationships harmonic signal detection algorithm
features of signal interaction in synthesis and analysis, 324–341
signal space with algebraic linear frequency modulated signal
properties, 148, 157–158 detection algorithm synthesis
informational signal space, 126–133, and analysis, 342–351
149 Interference (noise) conditions, channel
information carrier space, 60–67 capacity and, 242–250, See also
information losses of first and Channel capacity
second genus, 68–69, 137 Interpolation problem, See Signal
metric and informational interpolation
relationships in signal space, Invariance, 2–3, 75, 386
164, 167–170 differential entropy noninvariance
physical signal space, 149 problem, 17
for stochastic processes, 110–113 signal-resolution detection quality
theorems on equivalent indices problem, 259
representation, 69–71, 138–141 theorem of normalized function of
theorems on isomorphic mapping, statistical interrelationship,
71–73, 143–145 See also specific 80–81
quantities Invariance principles, 2
Information theory Isomorphic mapping theorems, 71–73,
constructing using Boolean algebra 143–145
with a measure, 23, See also Isomorphism of stochastic process, 135
Index 409

Isomorphism preserving a measure, 29, Likelihood ratio computation, 305,


125 352-358
Limit of everywhere dense linearly
J ordered indexed set, 41
Join, 82, 88, 92, 153–156, 164 Line, 4
Joint detection and estimation, 312, with linearly ordered elements, 40
314, 325, 329, 332, 343, 344, in metric space built upon
347 generalized Boolean algebra
with a measure, 32–41
K with partially ordered elements, 39
Kepler, Johannes, 1
Linear frequency modulated (LFM)
Keying, 173
signal detection, algorithm
Klein, Felix, 11
synthesis and analysis for
L metric space with L-group
Lattice, 26–27, 82–83, 269 properties, 342–351
algebraic properties, 269–271 Linearly ordered indexed set, 39
ideal/quasi-ideal signal interactions Linear sample space, 271–272
during additive interactions in signal detection problem, 304–305
signal space, 153–154 unknown nonrandom parameter
signal interaction informational estimation, 271–276, 286
relationships in signal spaces quality indices of estimators in
with algebraic properties, 156 metric sample space, 283
See also L-groups; Metric space relative estimation efficiency,
with lattice properties; Sample 276–281 See also Sample space
space with lattice properties with lattice properties
Lattice-ordered groups, See L-groups Linear spaces, 11–12, 19–20, 270, 386
Lattice with relative components, 26–27 mapping into signal space with
Least modules method (LMM) based lattice properties, 379–383 See
estimators, 272 also Linear sample space;
Least-squares method (LSM) based Physical signal space
estimators, 272 Logarithmic measure of information
Leibniz, Gottfried, 10–11 quantity, 14, 184, 191–193
L-groups (lattice-ordered groups),
83–84, 218, 270, 386
M
Main axiom of signal processing theory,
algebraic axioms, 270
22–23, 385
channel capacity, See Channel
Mapping methods for metric space, 56
capacity
Mapping signal spaces into signal space
main signal processing problems,
with lattice properties, 378–384
208–210
linear signal space, 379–383
quality indices for signal processing
signal space with semigroup
in metric spaces, See Quality
properties, 383–384
indices See also Metric space
M -ary signals, information quantity
with lattice properties; Sample
carried by, 179–190
space with lattice properties
Matched filtering in signal space with
410 Index

L-group properties, 312–313, upon generalized Boolean


327, 331, 343, 345 algebra with a measure, 51–54
Matched filtering unit (MFU), 335–336, Metric function theorems for quality
338 indices of unknown nonrandom
Measurement, 7–8 parameter estimation, 221–225
Measurement errors, 271, 272 Metric function theorems for signal
Measure of binary operations axiom, interactions in physical signal
59–60, 109–110, 126 space, 165, 172
Measure of information quantity, 14, Metric inequality of signal processing,
17–18, 25, 157, 385, 386, 388 212
absolute unit (abit), 63, 169, Metric space, 11, 29–30, 56–57, 125, 385
177–178, 200–201 Metric space built upon generalized
bits, 178, 200–201 Boolean algebra with a
building using Boolean algebra with measure, 29
a measure, 25 algebraic properties, 58–59
combining probabilistic and definition, 28, 56–57
deterministic approaches, 18–19 geometrical properties, 57–58
entropy as, 21, See also Entropy axiomatic system, 46–51
Fischer’s measure, 5 line, 32–41
logarithmic measure, 14, 184, main relationships between
191–193 elements, 29–32
Shannon’s measure, 17–18 metric and trigonometrical
suggested criteria for, 22 relationships, 51–54
units of in signal space, 127–128 properties with normalized
See also Generalized Boolean metric, 54–59
algebra with a measure; sheet and plane, 41–46
Information quantity; Metric informational space definition, 125
space built upon generalized mapping methods, 56
Boolean algebra with a normalized measure of statistical
measure; Normalized measure interrelationship properties,
of statistical interrelationship; 93–98
Probabilistic measure of pseudometric space, 166 See also
statistical interrelationship Informational signal space;
Meet, 82, 88, 92, 153–156, 164 Information carrier space;
Metric, 29, 31, 77–78, 80, 84, 101, 115, Physical signal space; Sample
125, 385 space with lattice properties;
interacting samples in L-group, Signal space
83–84 Metric space with lattice properties, 386
between stochastic processes, 78, channel capacity in presence of
116 noise, 242–250, See also
between two samples, 78, 84 See Channel capacity
also Measure of information quality indices of unknown
quantity nonrandom parameter
Metric and trigonometrical estimators, 282–287
relationships, metric space built
Index 411

signal classification algorithm and Nonlinearity, 3, 75, 387


analysis, 352–358 Normalized autocorrelation function
signal detection algorithm synthesis (ACF), 175, 179–181
and analysis, 304–351 Normalized function of statistical
signal processing algorithm interrelationship (NFSI),
synthesis and analysis, See 76–82, 122, 148
Signal processing algorithms, definition, 77, 113
synthesis and analysis discrete random sequence with
signal processing quality indices, arbitrary state transition
See Quality indices probabilities, 179–190
stochastic signal extraction hyperspectral density, 79–81
algorithm synthesis and informational signal space
analysis, 287–304 See also definition, 125
Sample space with lattice information distribution density,
properties 101–109
Mills’ function, 279 information quantity carried by
Mismatching parameter for signal binary signals, 175–176
resolution algorithm, 359, invariance theorem, 80–81
368–370, 377 mutual NFSI, 77–78, 115, 118
Mismatching function, 359 necessary condition for carrying
Mityugov, Vadim, 247 information, 106–107
Modulating functions, 173, 178, 208, noiseless channel capacity
210, 247, 260, 302–303 evaluation, 202, 204
Modulation, 173 quality index of signal
Mutual information distribution density classification, 233
(mutual IDD), 115–122 Normalized measure, 28, 58
physical signal space definition, 124
Normalized measure of statistical
Mutual information quantity, See interrelationship (NMSI),
Quantity of mutual information 82–98, 122, 163
Mutual normalized function of definition, 84–85
statistical interrelationship quality index of estimator
(mutual NFSI), 77–78, 115, 118 definitions, 212, 240
signal interaction informational
N relationships in signal spaces
Natural sciences with algebraic properties,
concepts and research methodology, 157–158
1–5 signal processing inequality
information theory and, 5–10 theorem, 213
space-related problems, 10–12 Normalized metric, metric space built
Newton, Isaac, 8, 10 upon generalized Boolean
Noise conditions, channel capacity and, algebra with a measure, 54–59
See Channel capacity Normalized mismatching function, 359
Noiseless channel capacity, 196–206 Normalized time-frequency
Non-Euclidean geometries, 3–4, 385, mismatching function, 359,
388 368–370, 377
412 Index

Normalized variance function, 76 Physical interaction, 124, 269


Null element, 26 Physical signal space, 123–124, 269, 386
algebraic properties, 123–124
O ideal/quasi-ideal interactions during
Optimality criteria, 146, 207 additive interactions, 151–156
choice considerations, 268–269 information losses during additive
harmonic signal detection interactions, 149–151, 267
algorithm, 325–329 information quantity definitions
linear frequency modulated signal and relationships, 149, 157–158
detection algorithm, 343, 345 metric and informational
RF signal resolution algorithm, relationships between
360–362 See also Quality interacting signals, 163–172
indices morphism into informational signal
Order axiom for metric space built space, 122, 124
upon generalized Boolean signal interaction informational
algebra with a measure, 47–48 relationships in spaces with
Ordinate set, 58, 108, 114–115, 118, algebraic properties, 156–163
120–121 stochastic process generalization,
Orthogonality relationships, metric 156 See also Linear spaces;
space built upon generalized Metric space built upon
Boolean algebra with a generalized Boolean algebra
measure, 30–32 with a measure; Signal space
Overall quantity of information, 64–66, Plane and sheet, notions for metric
111, 130, 132, 132–133, 137, space built upon generalized
139, 197, See also Quantity of Boolean algebra with a
overall information measure, 41–46
invariance property of, 72, 73, 144, axioms for parallels, 49–50
145, 171 Potential resolutions of a filter, 359,
370–371
P Power spectral density, 81–82, 105, 107
Paley-Wiener condition, 105, 107 Prior uncertainty, 122, 229, 237, 242,
Parabolic geometry, 51 247, 259, 264, 268, 303–304,
Parallels for a plane, axiom, 50 310–311, 351, 356, 358, 386
Parallels for a sheet, axiom, 50–51 Probabilistic approach, 2, 75
Parameter filtering, 208 combining deterministic approaches
Partially ordered set, 82–84, 100, 156, for measures of information
269 quantity, 18–19
Passive mapping, 56 Probabilistic measure of statistical
Phase estimation interrelationship (PMSI),
harmonic signal detection algorithm 98–101, 122, 156–157, 163
synthesis and analysis, 324–341 Probabilities, 14
linear frequency modulated signal Probability density functions (PDFs)
detection algorithm synthesis entropy and information
and analysis, 342-351 relationship, 14–15
Phase modulation, 173
Index 413

  generalized discrete random sequence, 183–185
  information quantity carried by binary signals, 174
  probabilistic measure of statistical interrelationship, 98–99
  RF signal resolution algorithm, 371–376
  signal detection quality index, 239
  signal extraction algorithm analysis, 294–301
  signal resolution-detection quality indices, 252
  signal resolution-estimation quality indices, 261–263
  stochastic process description, 76–77, 82
  unknown nonrandom parameter estimation in sample spaces with lattice properties, 274–276
Pseudometric space, 166

Q
Quality indices, 207
  estimator space with metric, 261–262, 282
  main signal processing problems, 208–210
  metric function theorems, 221–225
  optimality criteria, 146
  quality index of estimator definitions, 212, 227, 233–234, 240
  resolution-detection, 250–259
  resolution-estimation, 259–265
  sample size and, 228
  signal classification, 229–237
  signal detection, 237–242, 321
  signal filtering (extraction), 210–218
  signal processing inequality theorems, 212–214
  unknown nonrandom parameter estimation, 218–229
  See also Optimality criteria
Quantity of absolute information, 60–63, 110–111, 126, 127, 148, 149–150, 164, 168–169
  invariance property of, 72, 144, 171
  unit of measure (abit), 63, 169, 177–178
Quantity of mutual information, 60–61, 110, 112, 120, 121, 126–127, 148, 149–150, 157, 164, 167, 211
  invariance property of, 72, 73, 144, 145, 171
Quantity of overall information, 60–61, 110, 121, 126, 148, 149, 150, 163–164, 168, 176
  invariance property of, 72, 73, 144, 145, 171
Quantity of particular relative information, 60, 62, 110, 121, 127
  invariance property of, 72, 73, 144, 145
Quantity of relative information, 60–62, 110–111, 121, 127, 167
  invariance property of, 72, 73, 144, 145, 171
Quantum information theory, 7
Quasi-ideal signal interactions in signal space, 152–156

R
Radio frequency (RF) pulse resolution, 358–378, See also Signal resolution
Relative component, 27
Relative quantity of information, 64–66, 111, 130, 132–133, 137, 197, 200, See also Quantity of relative information
  invariance property of, 72, 144
Research methodology, 1–5
Resolution, See Signal resolution
Resolution-classification, 209
Resolution-detection, 210
  quality index, 250–259
  See also Signal detection; Signal resolution
Resolution-estimation, 210
  quality indices, 259–265
  resolution-detection-estimation problem, 259–260
  See also Signal resolution
Resolution in relative frequency shift, 377
Resolution in relative time delay, 377
Resolution measure of a filter, 370
Resolution of radio frequency (RF) pulses, 358–378, See also Signal resolution
Riemann, Bernhard, 11
Ring, 384

S
Sample space with lattice properties, 218, 269–271
  algebraic properties, 268–269
  mapping signal space into, 378–384
    linear signal space, 379–383
    signal space with semigroup properties, 383–384
  matched filtering problem, 312–313, 327, 331, 343, 345
  unknown nonrandom parameter estimation, 271–276, 286–287
    efficiency based on quality indices in metric sample space, 282–287
    estimator variance theorem and corollary, 276–279
    indirect measurement models, 272–273
    quality indices of estimators in metric sample space, 282–287
    relative estimation efficiency with respect to linear sample space, 276–281
  See also L-groups; Metric space with lattice properties
Sample space with L-group properties, 218, See also Sample space with lattice properties
Sampling, 134, 135, 141
Sampling interval, 136, 140, 141
Sampling theorem, 12–13, 19, 133–134
  equivalence principle, 134, 136, See also Equivalent representation theorems
  homomorphic mappings in signal space built upon generalized Boolean algebra with a measure, 134
Second Law of thermodynamics, 8
Semantic information theory, 10
Semigroup properties
  defined binary operations, 157
  mapping signal spaces into signal space with lattice properties, 383–384
  physical signal space definition, 123, 269–270
  usual interaction of signals, 158
Shannon, Claude, and information theory, 4, 6, 9, 15
  limit on channel capacity, 155–156, 357
  measure of information quantity, 17–18
Sheet and plane, 41–46
  axioms for parallels, 49–50
Signal amplitude estimation
  harmonic signal detection algorithm synthesis and analysis, 324–341
  linear frequency modulated signal detection algorithm synthesis and analysis, 342–351
Signal classification, 209
  algorithm synthesis and analysis, 352–358
  likelihood ratio computation, 352–353
  processing unit block diagram, 355
  quality indices, 229–237
  resolution-classification, 209–210
Signal detection, 209, 304
  algorithm synthesis and analysis, 304, 351
    block diagrams, 308, 323, 335, 336, 348–349
    deterministic signal, 305–311
    harmonic signal, 311–341
    likelihood ratio computation, 305
    linear frequency modulated signal, 342–351
    optimality criteria, 325–329, 343, 345
  quality indices, 237–242, 321
  quality indices of resolution-detection, 250–259
  resolution-detection-estimation problem, 259–260
Signal estimator, 208
Signal estimator error, fluctuation component of, 301
Signal extraction, See Signal filtering
Signal filtering (extraction), 208, 210
  algorithm synthesis and analysis, 287–288, 303–304
    analysis of optimal algorithm, 294–302
    further processing possibilities, 302–303
    optimal algorithm synthesis, 288–293
  dynamic error, 301
  estimators for known and unknown modulating functions, 302–303
  processing unit block diagram, 293, 303
  quality indices, 210–218
Signal initial phase estimation
  harmonic signal detection algorithm synthesis and analysis, 324–341
  linear frequency modulated signal detection algorithm synthesis and analysis, 342–351
Signal interpolation, 208, See also Signal smoothing
Signal parameter estimation, 208, See also Estimation problem
Signal parameter estimator, 208, See also Estimators
Signal parameter filtering, 208, See also Signal filtering
Signal processing algorithms, synthesis and analysis, 267–268
  block diagrams
    classification unit, 355
    detection units, 308, 319, 323, 335, 336, 348–349
    extraction unit, 293, 303, 336
    matched filtering unit, 336
    resolution unit, 366
  choice of optimality criteria, 268–269
  decision gates, 309–310, 320, 335, 337, 349, 354
  deterministic signal classification, 352–358
  estimators for known and unknown modulating functions, 302–303
  mapping signal spaces into signal space with lattice properties, 378–384
  matched filtering problem, 312–313, 327, 331, 343, 345
  prior uncertainty problem, 268
  signal detection, 304, 351
    deterministic signal, 305–311
    harmonic signal, 311–341
    linear frequency modulated signal, 342–351
    optimality criteria, 325–329, 343, 345
  signal resolution, 358–378
    block diagram, 366
    conditional PDF, 371–376
    Doppler shifts, 368–369
    filter of signal space with lattice properties, 367
    normalized time-frequency mismatching function, 359, 368–370, 377
    optimality criteria, 360–362
  stochastic signal extraction, 287–288, 303–304
    analysis of optimal algorithm, 294–302
    further processing possibilities, 302–303
    optimal algorithm synthesis, 288–293
    processing unit block diagram, 293
Signal processing inequality theorems, 212–214
Signal processing problems, 207–210, 267
Signal processing theory
  constructing using Boolean algebra with a measure, 23, See also Generalized Boolean algebra with a measure
  information theory and, 25–26, 385, 386–387
  main axiom, 22–23, 385
  main problems, general model of interaction, 267
  main problems, quality indices context, 207–210, See also Quality indices
  sampling theorem, 12–13, 133–134
Signal receiving error probability, 230
Signal representation equation, 173
Signal resolution, 358–360
  mismatching parameter, 359, 368–370, 377
  quality indices, 259–265
  in relative frequency shift, 377
  in relative time delay, 377
  resolution-classification, 209–210
  resolution-detection, 210
  resolution-detection-estimation problem, 259–260
  resolution-detection quality indices, 250–259
  resolution-estimation, 210
  RF pulse resolution algorithm synthesis and analysis, 358–378
    block diagram, 366
    conditional PDF, 371–376
    Doppler shifts, 368–369
    filter of signal space with lattice properties, 367
    filter potential resolution, 370–371
    filter resolution measure, 370
    normalized time-frequency mismatching function, 368–369, 377
    optimality criteria, 360–362
    uncertainty function, 358–359
Signal smoothing, 208, 302
Signal space, 11–12, 19–20, 123, 385
  homomorphic mappings of continuous signal, 133–143
    theorems on equivalent representation, 138–141
    theorems on isomorphic mapping, 143–145
  information carrier space definition, 28–29
  information carrier space notation and identities, 26–28
  information quantity definitions and relationships, 126–133, See also Information quantity
  mapping into signal space with lattice properties, 378–384
  physical and informational, 122, 123–133, See also Informational signal space; Physical signal space
  theoretical problems, 25
  units of information quantity, 127–128
  See also Information carrier space; Linear sample space; Metric space built upon generalized Boolean algebra with a measure; Sample space with lattice properties
Signal space, features of signal interaction in space with algebraic properties, 146–163
  additive interaction and ideal/quasi-ideal interaction, 147–156
  informational paradox (informational inequality), 149–150
  information quantity definitions and relationships, 148–150
  optimality criteria, 146
  space with various algebraic properties, 156–163
  summary conclusions, 162–163
Signal space, metric and informational relationships between interacting signals, 163–172
  information quantity definitions and relationships, 164, 167–170
  theorems on metric functions, 165, 172
Signal spaces with lattice properties, mapping of signal spaces into, 378–384
  linear signal space, 379–383
  signal space with semigroup properties, 383–384
  See also Sample space with lattice properties
Signal theory, space in, 11–12, 19–20, See also Signal space
Signal time of arrival (ending) estimation
  harmonic signal detection algorithm synthesis and analysis, 311–341
  linear frequency modulated signal detection algorithm synthesis and analysis, 342–351
Simplex, 54, 55–56
Sine theorem for metric space built upon generalized Boolean algebra with a measure, 52
Sin-invariant theorem, 53
Smoothing problem, See Signal smoothing
Space, 3–4
  estimator space with metric, 261–262, 282
  probabilistic approach, 75
  in signal theory, 11–12, 19–20, See also Signal space
  theoretical problems, 10–12
  See also Informational signal space; Information carrier space; Linear spaces; Metric space built upon generalized Boolean algebra with a measure; Physical signal space; Sample space with lattice properties; Signal space
Statistical approaches, 5, 14
Statistical hypotheses testing, 305
Stochastic processes, 75–76, 191
  analyticity, 104–105
  Butterworth filters and, 107–108
  classification, 78
  dependence between instantaneous values of interacting signals, 82–83
  discretization of continuous signal, 135–136
  generalized for physical signal space as a whole, 156
  homomorphism of continuous process, 134–140
  hyperspectral density, 79–81
  informational characteristics and properties, 108–122, 123, See also Information quantity
    axiom of measure of binary operations, 109–110
    information distribution density, 101–108, 112, See also Information density distribution (IDD) of stochastic processes
    information quantity definitions and relationships, 110–113
    mutual information density distribution, 115–122
    necessary condition for carrying information, 106–107
  isomorphism of, 135
  metric between, 78, 116
  normalized function of statistical interrelationship, 77–82, 113
  normalized measure of statistical interrelationship, 82–98
  probabilistic measure of statistical interrelationship, 98–101
  quantity of information carried by signals, See Information quantity
  signal extraction in metric space with lattice properties, 287–303
  theorems on equivalent representation, 69–71, 143–145
  theorems on isomorphic mapping, 71–73, 143–145
  See also Normalized function of statistical interrelationship; Normalized measure of statistical interrelationship; Probabilistic measure of statistical interrelationship
Stone's duality, 28, 59, 64, 130
Straight lines, 4
Strobing circuit (SC), 355
Superefficient estimators, 226, 271, 281
Superposition principle, 3
Swift, Jonathan, 17–18
Symmetry, 2–3
System of elements, 28

T
Tetrahedron, 30, 32, 55
Theorem on cos-invariant, 52
Theorem on sin-invariant, 53
Theorems on equivalent representation, 69–71, 138–141
Theorems on isomorphic mapping, 71–73, 143–145
Thermodynamic entropy, 7, 8
Time direction, 8
Time of signal arrival (ending) estimation
  harmonic signal detection algorithm synthesis and analysis, 311–341
  linear frequency modulated signal detection algorithm synthesis and analysis, 342–351
Triangle, 31, 48, 51
Triangle angle sum, 53
Trigonometrical relationships, metric space built upon generalized Boolean algebra with a measure, 51–54
Tuller, W. G., 23

U
Ultimate Shannon's limit, 155–156
Uncertainty function, 358–359
Unit element, 26
Unit of measure of information quantity (abit), 63, 169, 177–178
Unity, 26
Universal algebra, 26–27, 282
Usual interaction of two signals, 158

V
Valuation, 84, 222, 233
Valuation identity, 88

W
Weak stationary stochastic process, 78
Wiener, Norbert, 6, 9, 186
Woodward, P. M., 268, 358

Z
Zero, 26
Zheleznov, Nikolai, 13