An Analytical Constant Modulus Algorithm

IEEE Transactions on Signal Processing, vol. 44, no. 5, May 1996, pp. 1136-1155.
Abstract: Iterative constant modulus algorithms such as Godard and CMA have been used to blindly separate a superposition of cochannel constant modulus (CM) signals impinging on an antenna array. These algorithms have certain deficiencies in the context of convergence to local minima and the retrieval of all individual CM signals that are present in the channel. In this paper, we show that the underlying constant modulus factorization problem is, in fact, a generalized eigenvalue problem, and may be solved via a simultaneous diagonalization of a set of matrices. With this new analytical approach, it is possible to detect the number of CM signals present in the channel, and to retrieve all of them exactly, rejecting other, non-CM signals. Only a modest number of samples is required. The algorithm is robust in the presence of noise and is tested on measured data collected from an experimental set-up.
I. INTRODUCTION
A. Blind Signal Separation
An elementary problem in the area of spatial signal processing is that of blind beamforming. This problem arises, e.g., in the following wireless communications scenario, illustrated in Fig. 1. Consider a number of sources (users) at distinct locations, all broadcasting signals at the same frequency and at the same time. The signals are received by a central platform containing an array of antennas. By linearly combining the antenna outputs, the objective is to separate the signals and to copy each of them without interference from the other signals. The task of the blind beamformer is to compute the proper weight vectors w_i from the measured data only, without detailed knowledge of the signals and the channel.
Mathematically, the situation is described by the simple and
well-known data model
X = AS.    (1)
WX = S,    |S_ij| = 1    (2)

CM = { S : |S_ij| = 1, all i, j }.
Problem P1 asks for all row vectors w (the rows of W) such that wX = s is a CM signal, for linearly independent signals s. Hence, we have the following lemma.
(A) s ∈ row(X),
(B) s ∈ CM.
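As a concrete numerical illustration of the data model (1) and of conditions (A) and (B), the following sketch (our own, in numpy, with made-up dimensions) generates d unit-modulus sources, mixes them through a random full-rank matrix A, and verifies that a row of the pseudoinverse A† is a separating weight vector:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, n = 6, 3, 40   # antennas, CM sources, snapshots (illustrative values)

# d constant modulus sources: unit-modulus complex exponentials
S = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(d, n)))
# unknown mixing matrix (array response), generically full column rank
A = rng.standard_normal((m, d)) + 1j * rng.standard_normal((m, d))
X = A @ S            # observed data, the model X = AS of (1)

assert np.allclose(np.abs(S), 1)        # the CM property, condition (B)
assert np.linalg.matrix_rank(X) == d    # the rank of X reveals d
# a separating row vector: row i of A's pseudoinverse copies source i
w = np.linalg.pinv(A)[0]
assert np.allclose(w @ X, S[0])
```

Since A has full column rank, A†A = I and wX equals the first source row exactly; any row s = wX of this kind satisfies both (A) and (B).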
From this formulation, it is straightforward to devise an algorithm based on alternating projections: start with a (random) choice of s in the row span of X, and alternatingly project it onto CM and back onto the row span of X. The set CM is not a linear subspace, so that the projection onto CM is nonlinear, as follows:
w^(i+1) = [P_CM(w^(i) X)] X^†.    (3)

(Note that s^(i) = w^(i) X and that X^† X is a projection onto the row span of X.)
It is interesting to note that this is a well-established algorithm in the field of optics for solving the phase-retrieval problem, where it is known as the Gerchberg-Saxton algorithm (GSA) [38]. The connection of the phase-retrieval problem with the CM problem was made only recently [37]. Essentially the same algorithm was derived from the CMA by Agee [39], called the LSCMA, and claimed to have faster convergence than the standard CMA. It is also closely related to the OCMA variant by Gooch and Lundell [11], who replaced the LMS-type updating of the CMA by an RLS-update. One difference of the GSA and LSCMA with other CMA methods is that they are block methods: they iterate on X rather than update w for each new sample vector x_k. Hence, they typically require less data (smaller n), although of course the standard iterative CMA could reuse old data as well. Conversely, the GSA/LSCMA methods could be used on data matrices of increasing sizes by introducing updating versions for the pseudoinverse, which leads to the OCMA. The disadvantage of using these iterative algorithms on small finite data sets is that global convergence properties are lost: spurious local minima could be introduced. It is not known how large the block size has to be before the asymptotic global convergence results are applicable.
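The alternating-projection iteration (3) is easy to sketch numerically. The following is an illustrative numpy sketch (our own, not the authors' code; dimensions and iteration count are arbitrary). To stay inside the basin of attraction of one CM signal, it starts from a perturbed exact separating vector:

```python
import numpy as np

rng = np.random.default_rng(2)
m, d, n = 4, 2, 50                     # illustrative values
S = np.exp(1j * rng.uniform(0, 2 * np.pi, (d, n)))   # CM sources
A = rng.standard_normal((m, d)) + 1j * rng.standard_normal((m, d))
X = A @ S
Xpinv = np.linalg.pinv(X)              # projection back uses X's pseudoinverse

def modulus_error(s):
    # standard deviation of the modulus around 1
    return np.sqrt(np.mean((np.abs(s) - 1.0) ** 2))

# start near a true separating vector (a perturbed row of A's pseudoinverse);
# a random start often works too, but may converge to a local minimum
w = np.linalg.pinv(A)[0] + 0.1 * (rng.standard_normal(m)
                                  + 1j * rng.standard_normal(m))
err0 = modulus_error(w @ X)
for _ in range(200):
    s = w @ X
    s_cm = s / np.abs(s)               # project onto the CM set: |s_k| = 1
    w = s_cm @ Xpinv                   # project back onto the row span of X
err = modulus_error(w @ X)
assert err < 1e-5 and err < err0       # modulus error driven to (near) zero
```

On noise-free data and from a good initial point, the modulus error decreases to machine precision; this is the block (GSA/LSCMA-style) behavior described above.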
(A): s ∈ row(X)  ⇔  s = wV, where V : d × n contains a basis of the row span of X.

(B): s = [(s)_1 ... (s)_n] ∈ CM  ⇔  [|(s)_1|^2 ... |(s)_n|^2] = [1 ... 1], i.e.,

w v_k v_k* w* = 1,    k = 1, ..., n    (4)

where v_k is the k-th column of V. With y := w ⊗ w̄, these n quadratic conditions are linear in y:

P y = 1,    P := [vec(v_1 v_1*) ... vec(v_n v_n*)]^T    (5)

where 1 denotes the vector of n ones.
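The linearization step can be checked numerically. In the following numpy sketch (our own illustration; the Kronecker ordering used here is one valid vec convention), the row p_k = vec(v_k v_k*)^T is realized as v_k ⊗ v̄_k, and for an exact separating vector w the system Py = 1 holds to machine precision:

```python
import numpy as np

rng = np.random.default_rng(3)
m, d, n = 4, 2, 30                     # illustrative values
S = np.exp(1j * rng.uniform(0, 2 * np.pi, (d, n)))
A = rng.standard_normal((m, d)) + 1j * rng.standard_normal((m, d))
X = A @ S

# orthonormal basis of row(X): the first d right singular vectors
V = np.linalg.svd(X, full_matrices=False)[2][:d]     # d x n

# row k of P realized as the Kronecker product v_k (x) conj(v_k),
# so that (P y)_k = |w v_k|^2 for y = w (x) conj(w)
P = np.stack([np.kron(V[:, k], V[:, k].conj()) for k in range(n)])

# a true separating vector expressed in V coordinates: solve w V = S[0]
w = np.linalg.lstsq(V.T, S[0], rcond=None)[0]
y = np.kron(w, w.conj())                             # y = w (x) conj(w)
assert np.allclose(P @ y, np.ones(n))                # the CM conditions (4)
```

Each entry of Py is exactly |w v_k|^2 = |(s)_k|^2, so Py = 1 restates (4).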
vec^{-1}(y) := the d × d matrix Y such that vec(Y) = y.

If we set

P̂ := QP    (8)

then simple choices for Q suffice; e.g., Q could be a Householder transformation [40] that maps the all-ones vector 1 to n^{1/2} e_1. Since Py = 1, we obtain

QP y = [n^{1/2}  0 ... 0]^T    (9)

so that, apart from a scaling fixed by the first row, every solution of the CM problem satisfies

P̂ y = 0,    y = w ⊗ w̄.    (10)
Hence, we have to find all linear combinations

α_1 y_1 + ... + α_δ y_δ = w ⊗ w̄,  i.e.,  α_1 Y_1 + ... + α_δ Y_δ = w*w

(a generalized eigenvalue problem, extended
for more than two matrices) such that the result is a rank-1 Hermitian matrix, hence factorizable as w*w. Linearly independent solutions w correspond to linearly independent y, and in turn to linearly independent parameter vectors [α_1 ... α_δ]. Hence, we may rewrite the conditions (13) in terms of the Y_k, which gives a new problem statement that is entirely equivalent to the original CM problem.
Problem P2 (Equivalent CM Problem): Let X be the given matrix, from which the set of d × d matrices {Y_1, ..., Y_δ} is derived as discussed above. The CM problem P1 is precisely equivalent to the following problem: determine all independent nonzero parameter vectors [α_1 ... α_δ] such that

α_1 Y_1 + ... + α_δ Y_δ = w*w.    (14)
This nongeneric case can only happen if there are specific phase relations between the signals. Explicit examples can be constructed for the case of two CM signals s_1, s_2: a rank deficiency occurs if and only if there are constants a, c ∈ ℂ such that, for each sample point k,

(s_1)_k (s̄_2)_k + a (s_2)_k (s̄_1)_k = c.

Writing (s_2)_k = (s_1)_k exp(jφ_k), where φ_k is the phase difference between the two signals, this reduces to

e^{-jφ_k} + a e^{jφ_k} = c.    (15)
B. Computation of W

We will assume from now on that δ̂ := dim ker P̂ is equal to the number δ of CM signals in X. The CM problem is solved once we have found all δ independent parameter vectors [α_1, ..., α_δ] that make the generalized matrix pencil (14) rank-1 and Hermitian. This problem is in essence a generalized eigenvalue problem. Indeed, if d = δ = 2, then there are two matrices Y_1 and Y_2, each of size 2 × 2, and we have to find λ = α_2/α_1 such that Y_1 + λY_2 has its rank reduced by one (to become one). For larger δ, there are more than two matrices, and the rank should be reduced to one by taking linear combinations of all of them. This can be viewed as an extension of the generalized eigenvalue problem.
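For δ = 2, the rank-reduction can be carried out with a standard eigenvalue routine: the values of λ with det(Y_1 + λY_2) = 0 are the eigenvalues of -(Y_2^{-1} Y_1). A small synthetic numpy illustration (our own sketch; B_1, B_2 play the role of the rank-1 Hermitian pencil basis, and all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 2
# two independent row vectors w1, w2 and the rank-1 Hermitian pencil basis
w1 = rng.standard_normal(d) + 1j * rng.standard_normal(d)
w2 = rng.standard_normal(d) + 1j * rng.standard_normal(d)
B1, B2 = np.outer(w1, w1.conj()), np.outer(w2, w2.conj())

# Y1, Y2: two generic linear combinations, as delivered by ker(Phat)
L = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
Y1 = L[0, 0] * B1 + L[0, 1] * B2
Y2 = L[1, 0] * B1 + L[1, 1] * B2

# lambda with det(Y1 + lambda*Y2) = 0: eigenvalues of -(Y2^{-1} Y1)
lams = np.linalg.eigvals(-np.linalg.solve(Y2, Y1))
for lam in lams:
    C = Y1 + lam * Y2
    sv = np.linalg.svd(C, compute_uv=False)
    assert sv[1] < 1e-8 * sv[0]        # rank is reduced to one
```

Each of the two eigenvalues zeroes one coefficient of the combination, leaving a rank-1 multiple of B_1 or B_2, from which the corresponding row vector can be read off.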
We first show that there is a relation between δ, the number of CM signals that are present in X, and δ̂, the dimension of the kernel of P̂. With δ independent constant modulus signals, there are δ linearly independent solutions w to the CM factorization problem, corresponding to δ linearly independent vectors y = w ⊗ w̄ in the kernel of P̂. Hence, it is necessary that dim ker P̂ ≥ δ. Since P̂ : (n-1) × d^2, we also have dim ker P̂ ≥ d^2 - (n-1). To be able to detect δ from dim ker P̂, we have to require that, at least, n ≥ d^2 + 1.

Proposition 5: Let δ be the number of CM signals in X, and suppose n > d^2. Then dim ker P̂ ≥ δ. Generically, dim ker P̂ = δ.

Proof: See Appendix B.    □

In the proof it is shown that the occurrence of the nongeneric case where δ̂ > δ is independent of the propagation environment.

From the opposite perspective, suppose that the solutions of the CM problem are w_1, ..., w_δ. We already showed that w_1 ⊗ w̄_1, ..., w_δ ⊗ w̄_δ is a linearly independent set of vectors; together they are a basis of the kernel of P̂. Moving to matrices, each of the matrices Y_1, ..., Y_δ is a different (independent) linear combination of the pencil basis w_1*w_1, ..., w_δ*w_δ, i.e.,

Y_1 = λ_11 w_1*w_1 + λ_12 w_2*w_2 + ... + λ_1δ w_δ*w_δ = W* Λ_1 W
Y_2 = λ_21 w_1*w_1 + λ_22 w_2*w_2 + ... + λ_2δ w_δ*w_δ = W* Λ_2 W
  ⋮
Y_δ = λ_δ1 w_1*w_1 + λ_δ2 w_2*w_2 + ... + λ_δδ w_δ*w_δ = W* Λ_δ W    (16)

where Λ_i = diag[λ_i1, ..., λ_iδ]. Hence, by the existence of a solution to the CM problem, there must be a matrix W whose inverse simultaneously
diagonalizes them:

Y_1 = W* Λ_1 W
Y_2 = W* Λ_2 W
  ⋮                      (18)
Y_δ = W* Λ_δ W,    (Λ_1, ..., Λ_δ ∈ ℂ^{δ×δ} diagonal).
M* Y_1 N = Λ_1
M* Y_2 N = Λ_2

M and N are unique up to equal permutations of their columns and (possibly different) right diagonal invertible factors. This uniqueness implies that, after a suitable diagonal scaling, we can arrange it such that M = N = W†, or W = N†, with each row of W having norm n^{1/2}.
For the case where Y_1 and Y_2 are not of rank δ, it is possible that they do not fully determine W, so that the other Y_k also have to be taken into account. It is obvious that it is possible to obtain W also in this case, but we omit the details of this more general procedure at this point. Numerically, it is better to take all Y_k into account in all cases, and this is of course preferable in the presence of noise as well. Such an algorithm is described in the next section.

Hence, we have shown at this point that, in the absence of noise, the CM problem is, in fact, a generalized eigenvalue problem and can be solved explicitly.
B. Simultaneous Diagonalization as a
Super-Generalized Schur Problem
Assume for the moment that there is no noise added to X. As we have seen in Theorem 6, the matrix W that we try to find is full rank and such that

Y_1 = W* Λ_1 W
  ⋮
Y_δ = W* Λ_δ W.
With noise, we can try to find M = W† to simultaneously make M*Y_1M, ..., M*Y_δM as diagonal as possible. Because M is not unitary, the fact that it has to have full rank is hard to quantify, and it makes sense to rewrite this δ-generalized eigenvalue problem as a super-generalized Schur decomposition in terms of unitary matrices, with upper triangular factors R_1, ..., R_δ whose diagonal entries (R_k)_11, ..., (R_k)_δδ take the role of the generalized eigenvalues.
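The noise-free statement is easy to verify numerically: if the Y_k are exact congruences W*Λ_kW as in (18), then M = W† simultaneously diagonalizes all of them. A numpy sketch (our own illustration, with synthetic W and Λ_k):

```python
import numpy as np

rng = np.random.default_rng(5)
delta = 3                              # illustrative size
# a generic full-rank W (square here, so its pseudoinverse is its inverse)
W = rng.standard_normal((delta, delta)) + 1j * rng.standard_normal((delta, delta))
M = np.linalg.pinv(W)                  # M = W^dagger

# build Y_k = W* Lambda_k W with diagonal Lambda_k, as in (18)
Ys = []
for _ in range(delta):
    Lam = np.diag(rng.standard_normal(delta) + 1j * rng.standard_normal(delta))
    Ys.append(W.conj().T @ Lam @ W)

# M simultaneously diagonalizes every Y_k: M* Y_k M = Lambda_k
for Y in Ys:
    D = M.conj().T @ Y @ M
    off = D - np.diag(np.diag(D))
    assert np.linalg.norm(off) < 1e-8 * np.linalg.norm(D)
```

With noise, the Y_k are only approximate congruences and such an exact M no longer exists, which is what motivates the unitary (Schur-type) reformulation above.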
[Fig. 2. Analytic constant modulus factorization algorithm (ACMA). In outline: 1. estimate row(X) via an SVD and let V̂ be the first d rows of V*; 2a. form P := [vec(v_1v_1*) ... vec(v_nv_n*)]^T and P̂ := QP, with Q as in (11); 2b. compute the SVD of P̂ =: U_P Σ_P V_P*, estimate δ, and take the kernel vectors y_1, ..., y_δ; 3. set Y_1 = vec^{-1}(y_1), ..., Y_δ = vec^{-1}(y_δ) and simultaneously diagonalize them to obtain the rows w_k of W; 4. set s_k := w_k V̂. The vectoring operations in Steps 2a and 3a may be replaced by Hermitian vectoring operations (see Appendix A).]
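For the case δ = d = 2, the pipeline of Fig. 2 fits in a short script. The following numpy sketch is our own illustration (noise-free data, generic random parameters, arbitrary dimensions); the simultaneous-diagonalization step is replaced by the 2 × 2 pencil trick of Section B, and the kernel of P̂ is taken from its last δ right singular vectors:

```python
import numpy as np

rng = np.random.default_rng(6)
m, d, n = 4, 2, 50                                  # delta = d = 2 here

# noise-free mixture of d CM sources
S = np.exp(1j * rng.uniform(0, 2 * np.pi, (d, n)))
A = rng.standard_normal((m, d)) + 1j * rng.standard_normal((m, d))
X = A @ S

# Step 1: estimate row(X)
V = np.linalg.svd(X, full_matrices=False)[2][:d]    # d x n basis

# Step 2a: row k of P is v_k (x) conj(v_k); Phat := QP with Q a
# Householder reflection whose first row is n^{-1/2}[1 ... 1]
P = np.stack([np.kron(V[:, k], V[:, k].conj()) for k in range(n)])
e1 = np.zeros(n); e1[0] = 1.0
u = e1 - np.ones(n) / np.sqrt(n)
Q = np.eye(n) - 2.0 * np.outer(u, u) / (u @ u)
Phat = (Q @ P)[1:]                                  # (n-1) x d^2

# Step 2b: kernel of Phat = span of the last delta right singular vectors
Vp = np.linalg.svd(Phat)[2]
ys = [Vp[-(k + 1)].conj() for k in range(d)]
Y1, Y2 = (y.reshape(d, d) for y in ys)

# Step 3 (delta = 2): values of lambda making Y1 + lambda*Y2 rank 1 are
# the eigenvalues of -(Y2^{-1} Y1); each rank-1 factor gives a row of W
signals = []
for lam in np.linalg.eigvals(-np.linalg.solve(Y2, Y1)):
    C = Y1 + lam * Y2
    w = np.linalg.svd(C)[0][:, 0]      # columns of C are multiples of w
    s = w @ V
    s /= np.mean(np.abs(s))            # fix the (real) scale
    signals.append(s)

for s in signals:
    assert np.std(np.abs(s)) < 1e-6    # recovered signals are exactly CM
# each recovered signal matches one true source, up to a phase
corr = np.abs(np.stack(signals) @ S.conj().T) / n
assert sorted(np.argmax(corr, axis=1)) == [0, 1]
assert np.all(np.max(corr, axis=1) > 0.99)
```

The recovered signals equal the true sources up to the usual unit-modulus phase ambiguity; with noise, or with δ > 2, the pencil step would be replaced by the simultaneous diagonalization over all Y_k described in the text.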
Given the matrices Y_1, ..., Y_δ, find Q, Z (unitary) such that each Q*Y_kZ is upper triangular.
Fig. 3. Singular values of P̂: no noise, source separation 30°, for n = 100, 26, 17. (a) Four CM signals. (b) Two CM signals. (c) Similar to (a), but with separation 5°.
D. Computational Complexity

We briefly investigate the computational complexity of the proposed algorithm (Fig. 2). The ACMA consists of mainly three computational steps: an SVD of X (size m × n), an SVD of P̂ (size (n-1) × d^2), and a simultaneous diagonalization of δ matrices Y_k of size d × d. The second SVD is the most expensive and has order n(d^2)^2 operations. Since we require n > d^2, and m ≥ d, the complexity of this step is at least O(d^6). In comparison, the first SVD has O(m^2 n) ≥ O(d^4) operations, and the complexity of the simultaneous diagonalization step is also O(d^4). This implies that d cannot be very large and that the algorithm is too complex for equalization purposes (where sometimes d > 100 is taken).[5] Since only subspaces are needed but not the individual singular vectors, the SVD's may be replaced by any other principal subspace estimator, such as provided by the Schur subspace estimation (SSE) method [48], the URV updating [49], the PAST method [50], the FSD [51], or the FST [52]. The latter three algorithms can also exploit the fact that only δ kernel vectors out of d^2 singular vectors are needed, which gives rise to significant savings. In addition, [...]

[5] As mentioned before, the CM equalization problem satisfies the same model, but has different properties: the data matrix has a Hankel structure, and, in principle, it suffices to find only one solution. In this case, a combination with other (intersection-type) algorithms that make use of this structure is probably preferable.
[TABLE I. Approximate computational complexity of blind algorithms (kflop per snapshot), comparing GSA and ACMA for m = 4 and m = 10 antennas and several values of d.]
Fig. 4. Four CM signals. Each signal has SNR = 15 dB, source separation 30°. (a) Singular values of P̂. (b) Convergence of the Gerchberg algorithm for analytically computed initial points, and for random initial points, for n = 17.
A. Computer-Generated Data
Fig. 7. Four CM signals. Each signal [...] after each iteration step (solid line), for the case n = 17.
We have chosen this "1-2" norm rather than the "2-2" norm in (17) because it has a nicer physical interpretation (as the standard deviation of the modulus of the signals), and because the convergence of the GSA is usually monotonic in this norm, but not in (17). It is seen that the post-processing hardly changes the computed w_k, which is reflected by the horizontal lines: they are almost equal to the optimal values. For n = 26 (not shown), the lines are perfectly straight. Although not clearly visible in Fig. 4(b), all four signals are resolved; because the signals have the same amplitude, the modulus error lines tend to overlap. The independence of the retrieved signals was checked by computing their covariance. The value of the modulus error is commensurate with the [...]
[TABLE II. Worst SIR [dB] after separation, case SNR(s_2) = 17.6 dB (worst received SIR = -2.4 dB/antenna); ACMA versus ACMA+GSA, for several combinations of d̂ and δ. Entries in parentheses indicate that not all signals were recovered.]
[TABLE III. Worst SIR [dB] after separation, case SNR(s_2) = 17.6 dB (worst received SIR = -11 dB/antenna); ACMA versus ACMA+GSA. Entries in parentheses indicate that not all signals were recovered.]
Fig. 8. Four CM signals. Each signal has SNR = 15 dB, source separation 5°.
Fig. 10. Experiment with four FM signals and six antennas; SNR(s_2) = 17.6 dB: (a) singular values of X; (b) singular values of P̂, with d = 4.

[Fig. 11. Modulus error after each Gerchberg iteration, for analytically computed and for random initial points (m = 6, n = 17 and n = 50; d = 4, d_1 = 4).]

The resulting lower bounds (32) and (33) on n involve cond(A) (we still require n > d^2, too). They also allow to set automatic decision thresholds for rank detection in subspace estimation.
VI. CONCLUDING REMARKS

In this paper, we have described an analytic method for solving the constant modulus problem. The method condenses [...] deconvolution of constant modulus signals. However, it gives fundamental solutions to a number of problems that have plagued iterative CM algorithms ever since their inception in the early 1980's. The most important advantages of the analytic approach are listed below.
Fig. 12. Experiment with four FM signals (SNR(s_2) = 7.6 dB): (a) singular values of X; (b) singular values of P̂ (m = 6, d = 4; n = 100, 50, 26).
APPENDIXB
PROOFS
Proof of Lemma 1: Without loss of generality, we may take d = m. Our approach is to determine how many vectors w there can be such that wX is a CM signal. As derived in Section II-C, each column of X gives a quadratic equation that the entries of w have to satisfy. We assume that these constraints are independent.
[α_1 y_1 + α_2 y_2 + ... + α_δ y_δ = 0  ⇒  α_i = 0, i = 1, ..., δ]
⇔ [α_1 Y_1 + α_2 Y_2 + ... + α_δ Y_δ = 0  ⇒  α_i = 0, i = 1, ..., δ]
⇔ rank[Y_1 Y_2 ... Y_δ] = δ
⇔ rank[w_1* w_2* ... w_δ*] = δ.    □
Proof of Lemma 4: The only issue to show is the equivalence of p̂_1 y = n^{1/2} to ‖w‖ = n^{1/2}. This proof consists of two (technical) steps.

1) We first show that p̂_1 y = n^{1/2} ⇔ tr(Y) = n, where Y = vec^{-1}(y). (tr(·) is the trace operator.) Indeed, let P̂_1 = vec^{-1}(p̂_1). We show that P̂_1 = n^{-1/2} I. For this, we use the fact that Q is unitary and P is constructed from V, an isometry. p̂_1 only depends on the first row of Q. This row must be equal to n^{-1/2}[1 ... 1], because all other rows of Q are necessarily orthogonal to this vector. Using the definition of P gives

n^{1/2} P̂_1 = v_1 v_1* + ... + v_n v_n* = VV* = I.
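Step 1) of this proof can be checked numerically: for any isometry V (orthonormal rows), the outer products of its columns sum to the identity, which is exactly the statement n^{1/2} P̂_1 = VV* = I. A small numpy sketch (our own illustration, synthetic dimensions):

```python
import numpy as np

rng = np.random.default_rng(7)
d, n = 3, 20                           # illustrative sizes
# V: a d x n isometry (orthonormal rows), e.g. from an SVD
G = rng.standard_normal((d, n)) + 1j * rng.standard_normal((d, n))
V = np.linalg.svd(G, full_matrices=False)[2]

# sum over columns: v_1 v_1* + ... + v_n v_n* = V V* = I
acc = sum(np.outer(V[:, k], V[:, k].conj()) for k in range(n))
assert np.allclose(acc, np.eye(d))
```

The trace condition tr(Y) = n then follows because p̂_1 y = tr(P̂_1* Y) = n^{-1/2} tr(Y).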
ACKNOWLEDGMENT
REFERENCES
[1] A. J. van der Veen and A. Paulraj, "A constant modulus factorization technique for smart antenna applications in mobile communications," in Proc. SPIE, Adv. Signal Processing Algorithms, Architect., Implementat. V, F. T. Luk, Ed., San Diego, CA, July 1994, vol. 2296, pp. 230-241.
[2] A. J. van der Veen, S. Talwar, and A. Paulraj, "Blind estimation of multiple digital signals transmitted over FIR channels," IEEE Signal Processing Lett., vol. 2, pp. 99-102, May 1995.
[3] R. O. Schmidt, "Multiple emitter location and signal parameter estimation," IEEE Trans. Antennas Propagat., vol. 34, pp. 276-280, Mar. 1986.
[4] R. Roy, A. Paulraj, and T. Kailath, "ESPRIT: A subspace rotation approach to estimation of parameters of cisoids in noise," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, no. 5, pp. 1340-1342, 1986.
[5] Y. Sato, "A method of self-recovering equalization for multilevel amplitude-modulation systems," IEEE Trans. Commun., vol. COM-23, pp. 679-682, June 1975.
[6] D. N. Godard, "Self-recovering equalization and carrier tracking in two-dimensional data communication systems," IEEE Trans. Commun., vol. COM-28, pp. 1867-1875, Nov. 1980.
[7] J. R. Treichler and B. G. Agee, "A new approach to multipath correction of constant modulus signals," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-31, pp. 459-471, Apr. 1983.
[8] M. G. Larimore and J. R. Treichler, "Convergence behavior of the constant modulus algorithm," in Proc. IEEE ICASSP, 1983, vol. 1, pp. 13-16.
[9] S. Haykin, Ed., Blind Deconvolution. Englewood Cliffs, NJ: Prentice-Hall, 1994.
[10] J. R. Treichler and M. G. Larimore, "New processing techniques based on the constant modulus adaptive algorithm," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 420-431, Apr. 1985.
[11] R. Gooch and J. Lundell, "The CM array: An adaptive beamformer for constant modulus signals," in Proc. IEEE ICASSP, Tokyo, Japan, 1986, pp. 2523-2526.
[12] R. P. Gooch and B. J. Sublett, "Joint spatial and temporal equalization in a decision-directed adaptive antenna system," in 22nd IEEE Asilomar Conf. Signals, Syst., Comput., 1988, vol. 1, pp. 255-259.
[13] S. Talwar, M. Viberg, and A. Paulraj, "Blind estimation of multiple co-channel digital signals arriving at an antenna array," in 27th IEEE Asilomar Conf. Signals, Syst., Comput., 1993, vol. 1, pp. 349-353.
[14] B. G. Agee, S. V. Schell, and W. A. Gardner, "Spectral self-coherence restoral: A new approach to blind adaptive signal extraction using antenna arrays," Proc. IEEE, vol. 78, pp. 753-767, Apr. 1990.
[15] J. F. Cardoso, "Super-symmetric decomposition of the fourth-order cumulant tensor. Blind identification of more sources than sensors," in Proc. IEEE ICASSP, Toronto, Canada, 1991, vol. 5, pp. 3109-3112.
[16] J. F. Cardoso, "Iterative techniques for blind source separation using only fourth-order cumulants," in Signal Processing VI: Proc. EUSIPCO-92, J. Vandewalle et al., Eds. Brussels, Belgium: Elsevier, 1992, vol. 2, pp. 739-742.
[17] J. F. Cardoso and A. Souloumiac, "Blind beamforming for non-Gaussian signals," IEE Proc. F (Radar and Signal Processing), vol. 140, pp. 362-370, Dec. 1993.
[18] L. Tong, Y. Inouye, and R.-W. Liu, "Waveform-preserving blind estimation of multiple independent sources," IEEE Trans. Signal Processing, vol. 41, pp. 2461-2470, July 1993.
[19] S. Mayrargue, "A blind spatio-temporal equalizer for a radio-mobile channel using the constant modulus algorithm (CMA)," in Proc. IEEE ICASSP, 1994, pp. IV:317-320.
[20] L. Tong, G. Xu, and T. Kailath, "Blind identification and equalization based on second-order statistics: A time domain approach," IEEE Trans. Inform. Theory, vol. 40, pp. 340-349, Mar. 1994.
[21] E. Moulines, P. Duhamel, J. F. Cardoso, and S. Mayrargue, "Subspace methods for the blind identification of multichannel FIR filters," in Proc. IEEE ICASSP, 1994, pp. IV:573-576.
[22] D. T. M. Slock, "Blind fractionally-spaced equalization, perfect-reconstruction filter banks and multichannel linear prediction," in Proc. IEEE ICASSP, 1994, pp. IV:585-588.
[23] Y. Li and Z. Ding, "Global convergence of fractionally spaced Godard equalizers," in 28th IEEE Asilomar Conf. Signals, Syst., Comput., 1994, pp. 617-621.
[24] I. Fijalkow, F. Lopez de Victoria, and C. R. Johnson, Jr., "Adaptive fractionally spaced blind equalization," in Proc. 6th IEEE DSP Workshop, Yosemite, 1994, pp. 257-260.
[25] H. Liu and G. Xu, "A deterministic approach to blind symbol estimation," IEEE Signal Processing Lett., vol. 1, pp. 205-207, Dec. 1994.
[26] J. J. Shynk and R. P. Gooch, "Convergence properties of the multistage CMA adaptive beamformer," in 27th IEEE Asilomar Conf. Signals, Syst., Comput., 1993, vol. 1, pp. 622-626.
[27] A. V. Keerthi, A. Mathur, and J. J. Shynk, "Direction-finding performance of the multistage CMA array," in 28th IEEE Asilomar Conf. Signals, Syst., Comput., 1994, vol. 2, pp. 847-852.
[28] B. G. Agee, "Fast adaptive polarization control using the least-squares constant modulus algorithm," in 20th IEEE Asilomar Conf. Signals, Syst., Comput., 1987, pp. 590-595.
[29] B. G. Agee, "Blind separation and capture of communication signals using a multitarget constant modulus beamformer," in Proc. [...], Boston, MA, 1989, vol. 2, pp. 340-346.
[30] F. McCarthy, "Multiple signal direction-finding and interference reduction techniques," in Wescon/93 Conf. Rec., San Francisco, CA, Sept. 1993, pp. 354-361.
[31] F. McCarthy, Demonstration in Workshop Smart Antennas Wireless Mobile Commun., Stanford University, Stanford, CA, June 1994.
[32] B. G. Agee, "Convergent behavior of modulus-restoring adaptive arrays in Gaussian interference environments," in 22nd Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 1988, pp. 818-822.
[33] C. R. Johnson, "Admissibility in blind adaptive channel equalization," IEEE Control Syst. Mag., vol. 11, pp. 3-15, Jan. 1991.
[34] H. Jamali and S. L. Wood, "Error surface analysis for the complex constant modulus adaptive algorithm," in 24th IEEE Asilomar Conf. Signals, Syst., Comput., 1990, vol. 1, pp. 248-252.
[35] H. Jamali, S. L. Wood, and R. Cristi, "Experimental validation of the Kronecker product Godard blind adaptive algorithms," in 26th Asilomar Conf. Signals, Syst., Comput., 1992, vol. 1, pp. 1-5.
[36] K. Dogancay and R. A. Kennedy, "A globally admissible off-line modulus restoral algorithm for low-order adaptive channel equalisers," in Proc. IEEE ICASSP, Adelaide, Australia, 1994, vol. 3, pp. III:61-64.
[37] Y. Wang et al., "A matrix factorization approach to signal copy of constant modulus signals arriving at an antenna array," in Proc. 28th Conf. Inform. Sci. Syst., Princeton, NJ, Mar. 1994.
[38] R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik, vol. 35, pp. 237-246, 1972.
[39] B. G. Agee, "The least-squares CMA: A new technique for rapid correction of constant modulus signals," in Proc. IEEE ICASSP, Tokyo, Japan, 1986, pp. 953-956.
[40] G. H. Golub and C. F. Van Loan, Matrix Computations. Baltimore, MD: The Johns Hopkins Univ. Press, 1989.
[41] M. D. Zoltowski and D. Stavrinides, "Sensor array signal processing via a Procrustes rotations based eigenanalysis of the ESPRIT data pencil," IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, pp. 832-861, June 1989.
[42] A. A. Shah and D. W. Tufts, "Determination of the dimension of a signal subspace from short data records," IEEE Trans. Signal Processing, vol. 42, pp. 2531-2535, Sept. 1994.
[43] L. Tong, Y. Inouye, and R.-W. Liu, "A finite-step global convergence algorithm for the cumulant-based parameter estimation of multichannel moving average processes," in Proc. IEEE ICASSP, 1991, pp. V:3445-3448.