A Binary Analog to the Entropy Power Inequality
The "entropy-power inequality" [1, Sect. 7.10], [3, Sect. 22] is a useful lower bound on the differential entropy of the sum of two independent real-valued stationary random sequences. In this correspondence, we establish an analogous inequality for the modulo-2 sum of two independent stationary binary random sequences. This bound is a generalization of "Mrs. Gerber's Lemma" [4, Theorem 1] and is proved in a similar way.

We begin by stating the entropy-power inequality. Let {X_n}, -∞ < n < ∞, be a stationary, real-valued random sequence, with probability density function for X^{(n)} = (X_1, ..., X_n) given by p_n(x), x ∈ R^n, 1 ≤ n < ∞. The "nth-order differential entropy" of the sequence is

H_n(X) = -(1/n) ∫ p_n(x) log p_n(x) dx,

and the corresponding nth-order entropy power is A_n(X) = (1/(2πe)) e^{2 H_n(X)}. For the stationary sequence we take the limits

H(X) = lim_{n→∞} H_n(X)

and

A(X) = lim_{n→∞} A_n(X) = (1/(2πe)) e^{2 H(X)}.

Thus the entropy power A(X) is the variance of an i.i.d. Gaussian random sequence {X̃_n} with the same differential entropy as {X_n}.

Corresponding to the stationary binary sequence {X_n}, let {X̃_n} be the independent identically distributed (i.i.d.) binary sequence with the same entropy, i.e.,

H(X) = H(X̃) = H(X̃_1) = h(α(X)),    (7)

where h(λ) = -λ log λ - (1-λ) log(1-λ), 0 ≤ λ ≤ 1, is the binary entropy function, and α(X) is taken to be in [0, 1/2]. In other words, α(X), defined by

α(X) = h^{-1}(H(X)),    0 ≤ α(X) ≤ 1/2,    (8)

is the "success" probability of a Bernoulli sequence with the same entropy as {X_n}. The quantity α(X) corresponding to the binary random sequence {X_n} is analogous to the entropy power of a continuous random sequence.
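The quantities in (7) and (8) are easy to evaluate numerically. The following sketch is not part of the original correspondence; it is an illustrative Python snippet (with hypothetical helper names) that computes the binary entropy function h, its inverse on [0, 1/2] by bisection, and hence α(X) = h^{-1}(H(X)), together with the binary convolution a ∗ b = a(1 - b) + b(1 - a) that appears in the bound quoted below.

import math

def h(p):
    """Binary entropy function h(p) in bits, with h(0) = h(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def h_inv(H, tol=1e-12):
    """Inverse of h restricted to [0, 1/2], by bisection (h is increasing there)."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < H:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def star(a, b):
    """Binary convolution a * b = a(1 - b) + b(1 - a)."""
    return a * (1.0 - b) + b * (1.0 - a)

# Example: a stationary binary source with entropy rate H(X) = 0.5 bit/symbol.
alpha_x = h_inv(0.5)          # ~0.110, the "success" probability of (8)
print(alpha_x, h(alpha_x))    # h(alpha_x) recovers 0.5

For two independent i.i.d. binary sequences, the modulo-2 sum is itself i.i.d. with parameter α(X) ∗ α(Y), so its entropy is exactly h(α(X) ∗ α(Y)); the inequality established in the correspondence makes this quantity a lower bound when the sequences are merely stationary.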
If {X_n} is this max-entropic sequence, then our theorem gives a lower bound on H(Z).

Repeating the entire argument for Y instead of X (with H(X_n | X^{(n-1)}) held fixed), we have

H(Z_n | Z^{(n-1)}) ≥ h[ h^{-1}( H(X_n | X^{(n-1)}) ) ∗ h^{-1}( H(Y_n | Y^{(n-1)}) ) ].
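For a rough numerical feel for the bound above (the values here are chosen purely for illustration and do not appear in the correspondence): if H(X_n | X^{(n-1)}) = H(Y_n | Y^{(n-1)}) = 0.5 bit, then h^{-1}(0.5) ≈ 0.110, 0.110 ∗ 0.110 = 2(0.110)(0.890) ≈ 0.196, and h(0.196) ≈ 0.71, so H(Z_n | Z^{(n-1)}) ≥ 0.71 bit, strictly larger than the conditional entropy of either summand, in direct analogy with the Gaussian entropy-power inequality.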
REFERENCES

[1] R. B. Ash, Information Theory. New York: Interscience, 1965.
[2] G. Ungerboeck, "Channel coding with multilevel/phase signals," IEEE Trans. Inform. Theory, vol. IT-28, pp. 55-67, Jan. 1982.
[3] C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., vol. 27, pp. 379-423, 623-656, Oct. 1948. Reprinted in D. Slepian, Ed., Key Papers in the Development of Information Theory. New York: IEEE Press, 1974.
[4] A. D. Wyner and J. Ziv, "A theorem on the entropy of certain binary sequences and applications (Part I)," IEEE Trans. Inform. Theory, vol. IT-19, pp. 769-777, Nov. 1973.
[5] H. S. Witsenhausen, "Entropy inequalities for discrete channels," IEEE Trans. Inform. Theory, vol. IT-20, pp. 610-616, Sept. 1974.
[6] S. Shamai (Shitz) and Y. Kofman, "On the capacity of binary and Gaussian channels with run-length limited inputs," to appear in IEEE Trans. Commun.
[7] R. Ahlswede and J. Körner, "On the connection between the entropies of input and output distributions of discrete memoryless channels," in Proc. Fifth Conf. Probability Theory (Brasov, 1974). Bucharest: Academy Rep. Soc. Romania, 1977, pp. 13-23.
A New Recursive Filter for Systems with Multiplicative Noise

B. S. CHOW AND W. P. BIRKEMEIER

Abstract—In a previous work, an optimal linear recursive MMSE estimator was developed for a zero-mean signal corrupted by multiplicative noise in its measurement model. This recursive filter cannot be obtained with the recursive structure of a conventional Kalman filter, in which the new estimate is a linear combination of the previous estimate and the new data. Instead, the recursive structure was achieved by combining the previous estimate with a recursive innovation, a linear combination of the two most recent data samples and the previous estimate. In this correspondence the signal is extended to be nonzero-mean. In the conventional Kalman filter, the superposition principle can be applied to both the signal and the measurement models for this nonzero-mean extension. However, when multiplicative noise exists, the measurement model becomes nonlinear. Therefore, a new recursive structure for the innovation process needs to be developed to achieve a recursive filter.

Index Terms—Multiplicative noise, Kalman filter, recursive estimation, innovation process, nonlinear measurement model.

Manuscript received June 1989; revised August 1989.
B. S. Chow is with the Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung, Taiwan 80424, R.O.C.
W. P. Birkemeier is at S-11 463 Soeldner Road, Spring Green, WI 53588.
IEEE Log Number 9038000.

I. INTRODUCTION

The Kalman filter [1] is a well-known estimator with a simple recursive structure. This structure is made possible by the elegant form of its innovation process [2]. The innovation process is based on the filter's linear signal and measurement models. Therefore, the signal and measurement models are very important in the development of Kalman-filter-type estimators.

However, the signal in the Kalman filter's measurement model is assumed to be corrupted only by additive noise. In our previous work [3] we included a multiplicative noise in the measurement model and retained the same signal model (with a zero-mean constraint). Since the multiplicative noise makes the measurement model nonlinear, we cannot exploit the Kalman filter's form of innovation. As a result, we developed a new structure of recursive estimator based upon a new form of recursive innovation process.

In this correspondence, we generalize the zero-mean signal to the nonzero-mean case. For the Kalman filter, the superposition principle can be applied to both the signal and the measurement models for the nonzero-mean extension. For our problem, because the measurement model is nonlinear, we need to work with a new, corresponding measurement model (relative to our original measurement model) after we apply superposition to the linear models of the signal and the multiplicative noise.

Multiplicative noise is important in many cases, such as fading or reflection of the transmitted signal over an ionospheric channel, and also in certain situations involving sampling, gating, or amplitude modulation. Most of the research on multiplicative noise concerns uncertain observations [4]-[8], in which a discrete random switching sequence determines whether a signal is present in the data. Under the assumption of a white switching sequence, Nahi [4] derived an optimal linear MMSE recursive filter. Monzingo [5] extended it to an optimal linear smoother. Tugnait [6] studied the stability of Nahi's estimator. Hadidi and Schwartz [7] used a two-state Markov chain to develop a more general model for the switching sequence. They also proved that the optimal linear MMSE filter cannot be achieved by the conventional structure of the Kalman filter. Wang [8] reached the same conclusion by a different approach.

However, little research has been done for the case of multiplicative noise with a continuous range of values. Rajasekaran [9] developed a linear MMSE recursive estimator for a continuous white-noise case, and Tugnait [10] has analyzed the stability of this estimator. In a different category, Koning [11] has studied optimal estimation for systems with white stochastic parameters in the signal model. In our previous work [3], we developed a model for nonwhite continuous multiplicative noise, described by a dynamic equation. Rajasekaran's model turns out to be a special case of our model, but his approach is not suitable for the more general case because the nonwhite multiplicative noise in the measurement model invalidates his form of the innovation process.
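For contrast with the discussion above, the following sketch shows one predict/update cycle of the conventional Kalman filter, in which the new estimate is exactly a linear combination of the predicted previous estimate and the new datum through the innovation. This is generic textbook material included only for orientation; it is not the filter of [3], nor the one developed in this correspondence, and all symbols and numbers are illustrative.

import numpy as np

def kalman_step(x_prev, P_prev, z, A, Q, H, R):
    """One cycle of the standard Kalman filter for the purely additive-noise model
    x(k+1) = A x(k) + w(k), z(k) = H x(k) + v(k), with cov(w) = Q and cov(v) = R."""
    # Predict.
    x_pred = A @ x_prev
    P_pred = A @ P_prev @ A.T + Q
    # Update: new estimate = predicted estimate + gain * innovation.
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    innovation = z - H @ x_pred
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x_prev)) - K @ H) @ P_pred
    return x_new, P_new

# Tiny one-dimensional example with made-up numbers.
x, P = np.array([0.0]), np.array([[1.0]])
A, H = np.array([[0.9]]), np.array([[1.0]])
Q, R = np.array([[0.1]]), np.array([[0.5]])
x, P = kalman_step(x, P, np.array([1.2]), A, Q, H, R)
print(x, P)

When the measurement contains the product of the signal and a random multiplicative noise, as in (3) below, this simple innovation is no longer adequate, which is why a different recursive structure is required.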
II. PROBLEM FORMULATION

A. Notation Specification

The notation in this correspondence obeys the following two rules.

1) Random variables are distinguished from deterministic constants by their time arguments being parenthesized instead of subscripted.
2) Matrices and vectors are distinguished from scalars by being written in upper case.

B. System Models

Consider the following system.

Signal Model:
X(k+1) = A_k X(k) + B_k U(k).    (1)

Multiplicative Noise Model:
r(k+1) = c_k r(k) + d_k c(k).    (2)

Measurement Model:
Z(k) = r(k) H_k X(k) + F_k N(k),    (3)

where

1) X(k), r(k), N(k), and Z(k) are the signal, multiplicative noise, additive noise, and data, respectively;
2) U(k) and c(k) are the generating random sequences for the signal and the multiplicative noise.
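To make the model concrete, here is a small simulation sketch of (1)-(3). It is not taken from the correspondence: the constant coefficient values, the scalar signal, and the assumption that U(k), c(k), and N(k) are zero-mean white Gaussian sequences are illustrative choices made only to generate sample paths.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative constant coefficients (the model allows them to vary with k).
A_k, B_k = 0.95, 1.0      # signal model (1)
c_k, d_k = 0.8, 0.5       # multiplicative noise model (2)
H_k, F_k = 1.0, 1.0       # measurement model (3)
n_steps = 200

X = np.zeros(n_steps)     # signal X(k)
r = np.zeros(n_steps)     # multiplicative noise r(k)
r[0] = 1.0                # arbitrary initial condition

for k in range(n_steps - 1):
    U = rng.standard_normal()       # generating sequence of the signal
    c = rng.standard_normal()       # generating sequence of the multiplicative noise
    X[k + 1] = A_k * X[k] + B_k * U         # (1)
    r[k + 1] = c_k * r[k] + d_k * c         # (2)

N = rng.standard_normal(n_steps)            # additive noise N(k)
Z = r * H_k * X + F_k * N                   # (3): data contain the product r(k)X(k)

print(Z[:5])

Because Z(k) involves the product r(k)X(k) of two random sequences, the measurement model is nonlinear in the quantities to be estimated, which is the difficulty addressed by the recursive innovation structure of this correspondence.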