Hadamard Transforms
Published by
SPIE
P.O. Box 10
Bellingham, Washington 98227-0010 USA
Phone: +1 360.676.3290
Fax: +1 360.647.1445
Email: [email protected]
Web: http://spie.org
Copyright
© 2011 Society of Photo-Optical Instrumentation Engineers (SPIE)
All rights reserved. No part of this publication may be reproduced or distributed in
any form or by any means without written permission of the publisher.
The content of this book reflects the work and thoughts of the author(s). Every
effort has been made to publish reliable and accurate information herein, but the
publisher is not responsible for the validity of the information or for any outcomes
resulting from reliance thereon. For the latest updates about this title, please visit
the book’s page on our website.
Figure 1.1 James Joseph Sylvester (1814–1897, London, England) is known especially
for his work on matrices, determinants, algebraic invariants, and the theory of numbers. In
1878, he founded the American Journal of Mathematics, the first mathematical journal in the
United States (from: www.gap-system.org/~history/Biographies).
$$H_8=\begin{pmatrix}+&+&+&+&+&+&+&+\\+&-&+&-&+&-&+&-\\+&+&-&-&+&+&-&-\\+&-&-&+&+&-&-&+\\+&+&+&+&-&-&-&-\\+&-&+&-&-&+&-&+\\+&+&-&-&-&-&+&+\\+&-&-&+&-&+&+&-\end{pmatrix},\qquad(1.3b)$$
$$H_{16}=\begin{pmatrix}
+&+&+&+&+&+&+&+&+&+&+&+&+&+&+&+\\
+&-&+&-&+&-&+&-&+&-&+&-&+&-&+&-\\
+&+&-&-&+&+&-&-&+&+&-&-&+&+&-&-\\
+&-&-&+&+&-&-&+&+&-&-&+&+&-&-&+\\
+&+&+&+&-&-&-&-&+&+&+&+&-&-&-&-\\
+&-&+&-&-&+&-&+&+&-&+&-&-&+&-&+\\
+&+&-&-&-&-&+&+&+&+&-&-&-&-&+&+\\
+&-&-&+&-&+&+&-&+&-&-&+&-&+&+&-\\
+&+&+&+&+&+&+&+&-&-&-&-&-&-&-&-\\
+&-&+&-&+&-&+&-&-&+&-&+&-&+&-&+\\
+&+&-&-&+&+&-&-&-&-&+&+&-&-&+&+\\
+&-&-&+&+&-&-&+&-&+&+&-&-&+&+&-\\
+&+&+&+&-&-&-&-&-&-&-&-&+&+&+&+\\
+&-&+&-&-&+&-&+&-&+&-&+&+&-&+&-\\
+&+&-&-&-&-&+&+&-&-&+&+&+&+&-&-\\
+&-&-&+&-&+&+&-&-&+&+&-&+&-&-&+
\end{pmatrix}.\qquad(1.3c)$$
The symbols + and − denote +1 and −1, respectively, throughout the book.
Figure 1.2 displays the Sylvester-type Hadamard matrices of order 2, 4, 8, 16,
and 32. The black squares correspond to the value of +1, and the white squares
correspond to the value of −1.
and so on,
$$H_{2^n}=\underbrace{H_2\otimes H_2\otimes\cdots\otimes H_2}_{n\ \text{factors}}=\begin{pmatrix}+&+\\+&-\end{pmatrix}\otimes\begin{pmatrix}+&+\\+&-\end{pmatrix}\otimes\cdots\otimes\begin{pmatrix}+&+\\+&-\end{pmatrix}.\qquad(1.5)$$
$$\begin{array}{c|rrrr}00&1&1&1&1\\01&1&-1&1&-1\\10&1&1&-1&-1\\11&1&-1&-1&1\end{array}$$
The set of functions {wal_h(0, k), wal_h(1, k), . . . , wal_h(n − 1, k)} is called the discrete Walsh–Hadamard system.
Figure 1.4 The first eight continuous Walsh Hadamard functions on the interval [0, 1).
The set of functions {walh (0, t), walh (1, t), . . . , walh (n − 1, t)} is called the
continuous Walsh–Hadamard system. The discrete Walsh–Hadamard system can
be generated by sampling continuous Walsh–Hadamard functions {walh (k, t), k =
0, 1, 2, . . . , n − 1} at t = 0, 1/N, 2/N, . . . , (N − 1)/N. The first eight continuous
Walsh–Hadamard functions are shown in Fig. 1.4.
The discrete 1D forward and inverse WHTs of the signal x[k], k = 0, 1, . . . , N −1
are defined as
$$y=\frac{1}{\sqrt N}H_N x\ \ \text{(forward WHT)},\qquad x=\frac{1}{\sqrt N}H_N y\ \ \text{(inverse WHT)},\qquad(1.9)$$
or, elementwise,
$$y[k]=\frac{1}{\sqrt N}\sum_{n=0}^{N-1}x[n]\,wal_h[n,k],\qquad k=0,1,\ldots,N-1,\qquad(1.10)$$
$$x[n]=\frac{1}{\sqrt N}\sum_{k=0}^{N-1}y[k]\,wal_h[n,k],\qquad n=0,1,\ldots,N-1.\qquad(1.11)$$
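As a quick illustration of Eqs. (1.9)–(1.11), the following minimal NumPy sketch (added here; the function names are ours, not the book's) builds H_N by repeated Kronecker products, per Eq. (1.5), and applies the normalized forward and inverse transforms:

```python
import numpy as np

def hadamard(n):
    """Sylvester-type Hadamard matrix of order N = 2**n, per Eq. (1.5)."""
    H2 = np.array([[1, 1], [1, -1]])
    H = np.array([[1]])
    for _ in range(n):
        H = np.kron(H, H2)
    return H

def wht(x):
    """Forward WHT of Eq. (1.9): y = H_N x / sqrt(N)."""
    N = len(x)
    return hadamard(int(np.log2(N))) @ x / np.sqrt(N)

def iwht(y):
    """Inverse WHT of Eq. (1.9): x = H_N y / sqrt(N)."""
    N = len(y)
    return hadamard(int(np.log2(N))) @ y / np.sqrt(N)

x = np.arange(1.0, 9.0)
assert np.allclose(iwht(wht(x)), x)   # H_N H_N^T = N I_N, cf. Eq. (1.20)
```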
Similarly, the discrete 2D forward and inverse WHTs of a 2D signal X are defined in matrix form as
$$Y=\frac{1}{N^2}H_N X H_N^T,\qquad X=H_N^T\,Y\,H_N.\qquad(1.14)$$
The 2D WHT can be computed via 1D WHTs. In other words, the 1D WHT is evaluated for each column of the input array X to produce a new array A; then the 1D WHT is evaluated for each row of A to produce Y, as in Fig. 1.5.
Let a 2D signal have the form
$$X=\begin{pmatrix}9&7\\5&3\end{pmatrix}.\qquad(1.15)$$
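The transform of this X can be checked in a few lines of NumPy (our added illustration, not part of the original example):

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])
X = np.array([[9, 7], [5, 3]])

Y = H2 @ X @ H2.T / 4                  # forward 2D WHT, Eq. (1.14), N = 2
assert np.allclose(H2.T @ Y @ H2, X)   # inverse 2D WHT recovers X

A = H2 @ X                             # 1D WHT of each column ...
assert np.allclose(A @ H2.T / 4, Y)    # ... then of each row (Fig. 1.5)
```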
Thus, the 2D Walsh–Hadamard discrete basis functions are obtained from the 1D basis functions as follows:
$$\begin{pmatrix}1\\1\end{pmatrix}(1\ \ 1)=\begin{pmatrix}1&1\\1&1\end{pmatrix},\qquad\begin{pmatrix}1\\1\end{pmatrix}(1\ -1)=\begin{pmatrix}1&-1\\1&-1\end{pmatrix},\qquad(1.17)$$
$$\begin{pmatrix}1\\-1\end{pmatrix}(1\ \ 1)=\begin{pmatrix}1&1\\-1&-1\end{pmatrix},\qquad\begin{pmatrix}1\\-1\end{pmatrix}(1\ -1)=\begin{pmatrix}1&-1\\-1&1\end{pmatrix}.\qquad(1.19)$$
Selected Properties
• The row vectors of H define a complete set of orthogonal functions.
• The elements in the first column and the first row are all equal to +1. The elements in all of the other rows and columns are evenly divided between positive and negative.
• The WHT matrix is orthogonal; that is, the inner product of any two of its distinct rows is equal to zero. This is equivalent to HH^T = N I_N. For example,
$$H_2H_2^T=\begin{pmatrix}+&+\\+&-\end{pmatrix}\begin{pmatrix}+&+\\+&-\end{pmatrix}=\begin{pmatrix}2&0\\0&2\end{pmatrix}=2I_2.\qquad(1.20)$$
$$wal_h(m,k)=\exp\!\left(j\pi\sum_{i=0}^{n-1}m_ik_i\right)=(-1)^{\sum_{i=0}^{n-1}m_ik_i},\qquad j=\sqrt{-1},\qquad(1.25)$$
since exp(jπr) = cos(πr) + j sin(πr).
It has been shown (see the proof in the Appendix) that if A is an N × N matrix with Ax_n = a_n x_n, n = 1, 2, . . . , N, and B is an M × M matrix with By_m = b_m y_m, m = 1, 2, . . . , M, then
$$(A\otimes B)(x_n\otimes y_m)=a_nb_m\,(x_n\otimes y_m).$$
This means that if {x_n} is a Karhunen–Loève transform (KLT)^46 basis for A, and {y_m} is a KLT basis for B, then x_n ⊗ y_m is the KLT basis for A ⊗ B. Using this fact, we may find the eigenvalues and the eigendecomposition of the matrix H_n.
• If Hf and Hg are WHTs of vectors f and g, respectively, then H(f ∗ g) = Hf · Hg, where ∗ is the dyadic convolution of the two vectors f and g, defined by
$$v(m)=\sum_{k=0}^{N-1}f(k)\,g(m\oplus k),$$
where m ⊕ k is the decimal number whose binary expansion is [(m_0 + k_0) mod 2, (m_1 + k_1) mod 2, . . . , (m_{n−1} + k_{n−1}) mod 2], and m, k are given by Eq. (1.7).
$$F=H_4f=\begin{pmatrix}+&+&+&+\\+&-&+&-\\+&+&-&-\\+&-&-&+\end{pmatrix}\begin{pmatrix}f_0\\f_1\\f_2\\f_3\end{pmatrix}=\begin{pmatrix}f_0+f_1+f_2+f_3\\f_0-f_1+f_2-f_3\\f_0+f_1-f_2-f_3\\f_0-f_1-f_2+f_3\end{pmatrix}=\begin{pmatrix}F_0\\F_1\\F_2\\F_3\end{pmatrix},\qquad(1.31a)$$
$$G=H_4g=\begin{pmatrix}+&+&+&+\\+&-&+&-\\+&+&-&-\\+&-&-&+\end{pmatrix}\begin{pmatrix}g_0\\g_1\\g_2\\g_3\end{pmatrix}=\begin{pmatrix}g_0+g_1+g_2+g_3\\g_0-g_1+g_2-g_3\\g_0+g_1-g_2-g_3\\g_0-g_1-g_2+g_3\end{pmatrix}=\begin{pmatrix}G_0\\G_1\\G_2\\G_3\end{pmatrix}.\qquad(1.31b)$$
Now, compute $v_m=\sum_{k=0}^{3}f_k\,g(m\oplus k)$. We find that
$$\begin{aligned}v_0&=f_0g_0+f_1g_1+f_2g_2+f_3g_3,&v_1&=f_0g_1+f_1g_0+f_2g_3+f_3g_2,\\ v_2&=f_0g_2+f_1g_3+f_2g_0+f_3g_1,&v_3&=f_0g_3+f_1g_2+f_2g_1+f_3g_0.\end{aligned}\qquad(1.32)$$
$$H_4v=\begin{pmatrix}+&+&+&+\\+&-&+&-\\+&+&-&-\\+&-&-&+\end{pmatrix}\begin{pmatrix}v_0\\v_1\\v_2\\v_3\end{pmatrix}=\begin{pmatrix}v_0+v_1+v_2+v_3\\v_0-v_1+v_2-v_3\\v_0+v_1-v_2-v_3\\v_0-v_1-v_2+v_3\end{pmatrix}=\begin{pmatrix}F_0G_0\\F_1G_1\\F_2G_2\\F_3G_3\end{pmatrix}.\qquad(1.33)$$
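This property is easy to confirm numerically; the sketch below (ours) computes the dyadic convolution directly and checks Eq. (1.33):

```python
import numpy as np

H4 = np.array([[1, 1, 1, 1],
               [1, -1, 1, -1],
               [1, 1, -1, -1],
               [1, -1, -1, 1]])

def dyadic_conv(f, g):
    """v(m) = sum_k f(k) g(m XOR k) -- the dyadic convolution."""
    N = len(f)
    return np.array([sum(f[k] * g[m ^ k] for k in range(N)) for m in range(N)])

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([2.0, -1.0, 0.5, 3.0])
v = dyadic_conv(f, g)

# H(f * g) = Hf . Hg, elementwise, as in Eq. (1.33)
assert np.allclose(H4 @ v, (H4 @ f) * (H4 @ g))
```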
Below, we present the Paley matrices of orders 2, 4, 8, and 16 (see Fig. 1.6).
For n = 1, we have
$$P_2=\begin{pmatrix}P_1\otimes(+\ +)\\P_1\otimes(+\ -)\end{pmatrix}=\begin{pmatrix}(+)\otimes(+\ +)\\(+)\otimes(+\ -)\end{pmatrix}=\begin{pmatrix}+&+\\+&-\end{pmatrix}.\qquad(1.35)$$
For n = 3, we have
$$P_8=\begin{pmatrix}P_4\otimes(+\ +)\\P_4\otimes(+\ -)\end{pmatrix}
=\begin{pmatrix}\begin{pmatrix}+&+&+&+\\+&+&-&-\\+&-&+&-\\+&-&-&+\end{pmatrix}\otimes(+\ +)\\[4pt]\begin{pmatrix}+&+&+&+\\+&+&-&-\\+&-&+&-\\+&-&-&+\end{pmatrix}\otimes(+\ -)\end{pmatrix}
=\begin{pmatrix}+&+&+&+&+&+&+&+\\+&+&+&+&-&-&-&-\\+&+&-&-&+&+&-&-\\+&+&-&-&-&-&+&+\\+&-&+&-&+&-&+&-\\+&-&+&-&-&+&-&+\\+&-&-&+&+&-&-&+\\+&-&-&+&-&+&+&-\end{pmatrix}.\qquad(1.37)$$
$$\begin{aligned}wal_h(0,t)&=wal_p(0,t),&wal_h(4,t)&=wal_p(1,t),\\ wal_h(1,t)&=wal_p(4,t),&wal_h(5,t)&=wal_p(5,t),\\ wal_h(2,t)&=wal_p(2,t),&wal_h(6,t)&=wal_p(3,t),\\ wal_h(3,t)&=wal_p(6,t),&wal_h(7,t)&=wal_p(7,t).\end{aligned}\qquad(1.42)$$
This means that most properties of the Walsh–Hadamard matrices and functions carry over to the Walsh–Paley basis functions.
Figure 1.8 The first eight continuous Walsh–Paley functions in the interval [0, 1).
the Walsh system. On the basis of this system, we derive two important
orthogonal systems, namely the Cal–Sal and Haar systems, to be discussed in
the following sections. Both of these systems have applications in signal/image
processing, communication, and digital logic.1–79 The Walsh–Hadamard function
was introduced in 1923 by Walsh.21
1.3.1 Walsh system
Walsh matrices are often described as discrete analogues of the cosine and sine functions. The Walsh matrix is constructed recursively by
$$W_N=\left[W_2\otimes A_1,\ (W_2R)\otimes A_2,\ \ldots,\ W_2\otimes A_{(N/2)-1},\ (W_2R)\otimes A_{N/2}\right],\qquad(1.43)$$
where $W_2=\begin{pmatrix}+&+\\+&-\end{pmatrix}$, $R=\begin{pmatrix}0&1\\1&0\end{pmatrix}$, and $A_i$ is the i'th column of the Walsh matrix of order N/2, N = 2^n.
Example: Walsh matrices of orders 4 and 8 have the following structures:
$$W_4=\begin{pmatrix}+&+&+&+\\+&+&-&-\\+&-&-&+\\+&-&+&-\end{pmatrix},\qquad
W_8=\begin{pmatrix}+&+&+&+&+&+&+&+\\+&+&+&+&-&-&-&-\\+&+&-&-&-&-&+&+\\+&+&-&-&+&+&-&-\\+&-&-&+&+&-&-&+\\+&-&-&+&-&+&+&-\\+&-&+&-&-&+&-&+\\+&-&+&-&+&-&+&-\end{pmatrix}.\qquad(1.44)$$
Indeed,
$$W_4=\left[W_2\otimes\begin{pmatrix}+\\+\end{pmatrix},\ (W_2R)\otimes\begin{pmatrix}+\\-\end{pmatrix}\right]
=\left[\begin{pmatrix}+&+\\+&-\end{pmatrix}\otimes\begin{pmatrix}+\\+\end{pmatrix},\ \begin{pmatrix}+&+\\-&+\end{pmatrix}\otimes\begin{pmatrix}+\\-\end{pmatrix}\right]
=\begin{pmatrix}+&+&+&+\\+&+&-&-\\+&-&-&+\\+&-&+&-\end{pmatrix},\qquad(1.45a)$$
$$W_8=\left[W_2\otimes\begin{pmatrix}+\\+\\+\\+\end{pmatrix},\ (W_2R)\otimes\begin{pmatrix}+\\+\\-\\-\end{pmatrix},\ W_2\otimes\begin{pmatrix}+\\-\\-\\+\end{pmatrix},\ (W_2R)\otimes\begin{pmatrix}+\\-\\+\\-\end{pmatrix}\right]$$
$$=\left[\begin{pmatrix}+&+\\+&-\end{pmatrix}\otimes\begin{pmatrix}+\\+\\+\\+\end{pmatrix},\ \begin{pmatrix}+&+\\-&+\end{pmatrix}\otimes\begin{pmatrix}+\\+\\-\\-\end{pmatrix},\ \begin{pmatrix}+&+\\+&-\end{pmatrix}\otimes\begin{pmatrix}+\\-\\-\\+\end{pmatrix},\ \begin{pmatrix}+&+\\-&+\end{pmatrix}\otimes\begin{pmatrix}+\\-\\+\\-\end{pmatrix}\right].\qquad(1.45b)$$
Figure 1.9 The first eight continuous Walsh functions in the interval [0, 1).
where N = 2^n and j_m, k_m are the m'th bits in the binary representations of j and k, respectively.
The set of functions {wal_w(0, k), wal_w(1, k), . . . , wal_w(n − 1, k)}, where
$$\begin{aligned}
wal_w(0,k)&=\{wal_w(0,0),\,wal_w(0,1),\,wal_w(0,2),\,\ldots,\,wal_w(0,n-1)\},\\
wal_w(1,k)&=\{wal_w(1,0),\,wal_w(1,1),\,wal_w(1,2),\,\ldots,\,wal_w(1,n-1)\},\\
&\ \ \vdots\\
wal_w(n-1,k)&=\{wal_w(n-1,0),\,wal_w(n-1,1),\,\ldots,\,wal_w(n-1,n-1)\},
\end{aligned}\qquad(1.47)$$
is called a discrete Walsh system, or discrete Walsh basis functions. The set of
functions {walw (0, t), walw (1, t), . . . , walw (n − 1, t)}, t ∈ [0, 1) are called continuous
Walsh functions (Fig. 1.9).
The continuous Walsh functions can be defined as
walw (2m + p, t) = walw [m, 2(t + 1/2)] + (−1)m+p walw [m, 2(t − 1/2)], t ∈ [0, 1),
(1.48)
where m = 0, 1, 2, . . ., walw (0, t) = 1, for all t ∈ [0, 1).
where the symbol ⊕ denotes the logic operation Exclusive OR, i.e., 0 ⊕ 0 = 0,
0 ⊕ 1 = 1, 1 ⊕ 0 = 1, and 1 ⊕ 1 = 0.
For example, let n = 3, or 3 = 0·2² + 1·2¹ + 1·2⁰, and m = 5, or 5 = 1·2² + 0·2¹ + 1·2⁰; then
$$n\oplus m=(0\oplus1)\cdot2^2+(1\oplus0)\cdot2^1+(1\oplus1)\cdot2^0=6.$$
Hence, we obtain
$$wal_w(3,t)\,wal_w(5,t)=wal_h(3\oplus5,t)=wal_h(6,t).\qquad(1.51)$$
where walw ( j, k) is the ( j’th, k’th) element of the Walsh matrix defined in
Eq. (1.46).
The Cal–Sal matrix elements can be calculated by $T(j,k)=(-1)^{\sum_{i=0}^{n-1}p_ik_i}$, where j, k = 0, 1, . . . , 2^n − 1 and p_0 = j_{n−1}, p_1 = j_{n−2} + j_{n−1}, . . . , p_{n−2} = j_1 + j_2, p_{n−1} = j_0 + j_1.
Cal–Sal Hadamard matrices of orders 4 and 8 are of the following form:
$$T_2=\begin{pmatrix}+&+\\+&-\end{pmatrix},\qquad
T_4=\begin{pmatrix}+&+&+&+\\+&-&-&+\\+&-&+&-\\+&+&-&-\end{pmatrix},\qquad
T_8=\begin{pmatrix}+&+&+&+&+&+&+&+\\+&+&-&-&-&-&+&+\\+&-&-&+&+&-&-&+\\+&-&+&-&-&+&-&+\\+&-&+&-&+&-&+&-\\+&-&-&+&-&+&+&-\\+&+&-&-&+&+&-&-\\+&+&+&+&-&-&-&-\end{pmatrix}.\qquad(1.53)$$
Cal–Sal matrices of order 2, 4, 8, 16, and 32 are shown in Fig. 1.10, and the first
eight continuous Cal–Sal functions are shown in Fig. 1.11.
Figure 1.11 The first eight continuous Cal–Sal functions in the interval [0, 1).
There are many selected properties of the Cal–Sal system. The Walsh functions
can be constructed by
where the symbol ⊕ denotes the logic operation Exclusive OR, i.e.,
0 ⊕ 0 = 0, 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, and 1 ⊕ 1 = 0.   (1.56)
For example, let n = 3 = 0·2² + 1·2¹ + 1·2⁰ and m = 5 = 1·2² + 0·2¹ + 1·2⁰; then
$$n\oplus m=(0\oplus1)\cdot2^2+(1\oplus0)\cdot2^1+(1\oplus1)\cdot2^0=1\cdot2^2+1\cdot2^1+0\cdot2^0=6,\qquad(1.58)$$
thus,
$$wal_w(n\oplus m,t)=wal_w(3\oplus5,t)=wal_w(3,t)\,wal_w(5,t)=wal_w(6,t).\qquad(1.59)$$
Furthermore,
Decimal 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Binary 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111
Gray 0000 0001 0011 0010 0110 0111 0101 0100 1100 1101 1111 1110 1010 1011 1001 1000
The conversion from a Gray-coded number to binary can be achieved by using the following scheme:
• To find the binary next-to-MSB (most significant bit), add the binary MSB and the Gray-code next-to-MSB.
• Record the sum, discarding any carry (i.e., add modulo 2).
• Continue this computation, bit by bit, from the first to the last bit.
Note that the binary and the Gray-coded numbers have the same number of bits, and the binary MSB (left-hand bit) and Gray-code MSB are always the same.
$$c_i=b_{n-i-1},\qquad i=0,1,\ldots,n-1.\qquad(1.68)$$
$$b_i=c_{n-i-1},\qquad i=0,1,\ldots,n-1.\qquad(1.69)$$

Binary inverse   1    0    1    1    1    0
                 ↓⊕   ↓⊕   ↓⊕   ↓⊕   ↓⊕   ↓        (1.73)
Output code      1    1    0    0    1    0
                 g0   g1   g2   g3   g4   g5

(each output bit is the modulo-2 sum of the bit above it and its right-hand neighbor; the last bit passes through unchanged).
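These rules translate directly into code. The sketch below (our illustration; the function names are ours) performs both conversions with the modulo-2 additions described above:

```python
def binary_to_gray(b):
    """g_i = b_i XOR b_{i-1}; the MSB passes through unchanged."""
    return b ^ (b >> 1)

def gray_to_binary(g):
    """Recover b from g by accumulating modulo-2 sums from the MSB down."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Reproduces the decimal/binary/Gray table above for 0..15.
for n in range(16):
    assert gray_to_binary(binary_to_gray(n)) == n
print([format(binary_to_gray(n), '04b') for n in range(16)])
```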
where
$$a_m=\int_{-1/2}^{1/2}f(x)\,cal(m,x)\,dx,\qquad b_m=\int_{-1/2}^{1/2}f(x)\,sal(m,x)\,dx,\qquad m=0,1,2,\ldots.$$
Defining $c_m=\sqrt{a_m^2+b_m^2}$ and $\alpha_m=\tan^{-1}(b_m/a_m)$, and plotting them versus the sequency m, yields plots similar to Fourier magnitude and phase spectra. Here, c_m provides an analogy to the modulus, while the artificial phase α_m is analogous to a classical phase.
If the signal f(x) is square integrable over [0, 1], then f(x) can be represented by a Walsh–Fourier series. The Parseval identity is also valid.
the first two Haar coefficients. The Haar transform is real, allowing simple implementation as well as simple visualization and interpretation. The advantages of these basis functions are that they are well localized in time, may be very easily implemented, and are by far the fastest among unitary transforms.
The Haar transform provides a transform domain in which a type of differential
energy is concentrated in localized regions. This kind of property is very useful in
image processing applications such as edge detection and contour extraction. The
Haar transform is the simplest example of an orthonormal wavelet transform. The
orthogonal Haar functions are defined as follows:42,46
$$H_{00}(k)=1,$$
$$H_{iq}(k)=\begin{cases}2^{(i-1)/2},&\dfrac{q}{2^{\,i-1}}\le k<\dfrac{q+0.5}{2^{\,i-1}},\\[6pt]-2^{(i-1)/2},&\dfrac{q+0.5}{2^{\,i-1}}\le k<\dfrac{q+1}{2^{\,i-1}},\\[6pt]0,&\text{at all other points},\end{cases}\qquad(1.76)$$
where i = 1, 2, . . . , n and q = 0, 1, . . . , 2^{i−1} − 1.
Note that for any n there will be 2n Haar functions. Discrete sampling of the
set of Haar functions gives the orthogonal matrix of order 2n . The Haar transform
matrix is defined as
$$[\text{Haar}]_{2^n}=H(2^n)=\begin{pmatrix}H(2^{n-1})\otimes(+1\ \ +1)\\ \sqrt{2^{n-1}}\,I(2^{n-1})\otimes(+1\ -1)\end{pmatrix},\qquad n=2,3,\ldots,\qquad(1.77)$$
where $H(2)=\begin{pmatrix}+1&+1\\+1&-1\end{pmatrix}$, ⊗ is the Kronecker product, and I(2^n) is the identity matrix of order 2^n.
Below are the Haar matrices of orders 2, 4, 8, and 16 (here s = √2):
$$[\text{Haar}]_2=\begin{pmatrix}1&1\\1&-1\end{pmatrix},\qquad(1.78a)$$
$$[\text{Haar}]_4=\begin{pmatrix}1&1&1&1\\1&1&-1&-1\\s&-s&0&0\\0&0&s&-s\end{pmatrix},\qquad(1.78b)$$
$$[\text{Haar}]_8=\begin{pmatrix}1&1&1&1&1&1&1&1\\1&1&1&1&-1&-1&-1&-1\\s&s&-s&-s&0&0&0&0\\0&0&0&0&s&s&-s&-s\\2&-2&0&0&0&0&0&0\\0&0&2&-2&0&0&0&0\\0&0&0&0&2&-2&0&0\\0&0&0&0&0&0&2&-2\end{pmatrix},\qquad(1.78c)$$
$$[\text{Haar}]_{16}=\begin{pmatrix}
1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1\\
1&1&1&1&1&1&1&1&-1&-1&-1&-1&-1&-1&-1&-1\\
s&s&s&s&-s&-s&-s&-s&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&s&s&s&s&-s&-s&-s&-s\\
2&2&-2&-2&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&2&2&-2&-2&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&2&2&-2&-2&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&2&2&-2&-2\\
2s&-2s&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&2s&-2s&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&2s&-2s&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&2s&-2s&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&2s&-2s&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&2s&-2s&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&2s&-2s&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&2s&-2s
\end{pmatrix}.\qquad(1.78d)$$
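The recursion of Eq. (1.77) is straightforward to implement; the NumPy sketch below (ours, not the authors' code) reproduces the matrices of Eqs. (1.78a)–(1.78d):

```python
import numpy as np

def haar_matrix(n):
    """[Haar]_{2**n} built from the recursion of Eq. (1.77)."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]])              # H(2)
    for m in range(2, n + 1):
        top = np.kron(H, [1.0, 1.0])                     # H(2^{m-1}) (x) (+1 +1)
        bot = np.sqrt(2.0 ** (m - 1)) * np.kron(np.eye(2 ** (m - 1)), [1.0, -1.0])
        H = np.vstack([top, bot])
    return H

H8 = haar_matrix(3)                            # reproduces Eq. (1.78c), s = sqrt(2)
assert np.allclose(H8 @ H8.T, 8 * np.eye(8))   # rows orthogonal with norm sqrt(N)
```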
Figure 1.14 shows the structure of Haar matrices of different orders, and
Fig. 1.15 shows the structure of continuous Haar functions.
The discrete Haar basis system can be generated by sampling Haar systems at
t = 0, 1/N, 2/N, . . . , (N − 1)/N. The 16-point discrete Haar functions are shown in
Fig. 1.16.
Properties:
(1) The Haar transform Y = [Haar]2n X (where X is an input signal) provides a
domain that is both globally and locally sensitive. The first two functions reflect
the global character of the input signal; the rest of the functions reflect the local
characteristics of the input signal. A local change in the data signal results in a
local change in the Haar transform coefficients.
(2) The Haar transform is real (not complex like a Fourier transform), so real data
give real Haar transform coefficients.
Figure 1.14 The structure of Haar matrices of order 2, 4, 8, 16, 32, 64, 128, and 256.
Figure 1.15 The first eight continuous Haar functions in the interval [0, 1).
Figure 1.16 The 16-point discrete Haar functions (plots not reproduced here).
Figure 1.17 Two images (left) and their 2D Haar transform images (right).
where $H(2)=\begin{pmatrix}+&+\\+&-\end{pmatrix}$, and I(2^n) is the identity matrix of order 2^n.
Example: For N = 4 and N = 8, we have (recall that s = √2)
$$H(4)=\begin{pmatrix}1&1&1&1\\1&-1&1&-1\\s&0&-s&0\\0&s&0&-s\end{pmatrix},\qquad(1.83a)$$
$$H(8)=\begin{pmatrix}1&1&1&1&1&1&1&1\\1&-1&1&-1&1&-1&1&-1\\s&0&-s&0&s&0&-s&0\\0&s&0&-s&0&s&0&-s\\2&0&0&0&-2&0&0&0\\0&2&0&0&0&-2&0&0\\0&0&2&0&0&0&-2&0\\0&0&0&2&0&0&0&-2\end{pmatrix}.\qquad(1.83b)$$
$$[\text{Haar}]_8=\begin{pmatrix}\begin{pmatrix}+&+&+&+\\+&+&-&-\\+&-&0&0\\0&0&+&-\end{pmatrix}\otimes(+\ +)\\[4pt]\begin{pmatrix}+&0&0&0\\0&+&0&0\\0&0&+&0\\0&0&0&+\end{pmatrix}\otimes(+\ -)\end{pmatrix}
=\begin{pmatrix}1&1&1&1&1&1&1&1\\1&1&1&1&-1&-1&-1&-1\\1&1&-1&-1&0&0&0&0\\0&0&0&0&1&1&-1&-1\\1&-1&0&0&0&0&0&0\\0&0&1&-1&0&0&0&0\\0&0&0&0&1&-1&0&0\\0&0&0&0&0&0&1&-1\end{pmatrix},\qquad(1.85a)$$
$$[\text{Haar}]_8^{-1}=\frac12\left[\frac14\begin{pmatrix}1&1&2&0\\1&1&-2&0\\1&-1&0&2\\1&-1&0&-2\end{pmatrix}\otimes\begin{pmatrix}+1\\+1\end{pmatrix},\ \begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}\otimes\begin{pmatrix}+1\\-1\end{pmatrix}\right]\qquad(1.85b)$$
$$=\frac18\begin{pmatrix}1&1&2&0&4&0&0&0\\1&1&2&0&-4&0&0&0\\1&1&-2&0&0&4&0&0\\1&1&-2&0&0&-4&0&0\\1&-1&0&2&0&0&4&0\\1&-1&0&2&0&0&-4&0\\1&-1&0&-2&0&0&0&4\\1&-1&0&-2&0&0&0&-4\end{pmatrix}.$$
where e_k is the all-ones row vector of length k, I(m) is the identity matrix of order m, ⊗ is the Kronecker product, and [AH](k) = A(k) is an orthogonal matrix of order k of the following form:
$$A(k)=\begin{pmatrix}1&1&\cdots&1&1\\ &&A_{1k}&&\end{pmatrix}.\qquad(1.88)$$
$$[AH](2)=\begin{pmatrix}1&1\\ \exp(j\alpha)&-\exp(j\alpha)\end{pmatrix},\qquad(1.89a)$$
$$[AH](4)=\begin{pmatrix}[AH](2)\otimes(1\ \ 1)\\ \sqrt2\,I(2)\otimes(e^{j\alpha}\ -e^{j\alpha})\end{pmatrix}
=\begin{pmatrix}1&1&1&1\\ e^{j\alpha}&e^{j\alpha}&-e^{j\alpha}&-e^{j\alpha}\\ \sqrt2e^{j\alpha}&-\sqrt2e^{j\alpha}&0&0\\ 0&0&\sqrt2e^{j\alpha}&-\sqrt2e^{j\alpha}\end{pmatrix},\qquad(1.89b)$$
$$[AH](8)=\begin{pmatrix}[AH](4)\otimes(1\ \ 1)\\ 2\,I(4)\otimes(e^{j\alpha}\ -e^{j\alpha})\end{pmatrix}
=\begin{pmatrix}1&1&1&1&1&1&1&1\\
e^{j\alpha}&e^{j\alpha}&e^{j\alpha}&e^{j\alpha}&-e^{j\alpha}&-e^{j\alpha}&-e^{j\alpha}&-e^{j\alpha}\\
\sqrt2e^{j\alpha}&\sqrt2e^{j\alpha}&-\sqrt2e^{j\alpha}&-\sqrt2e^{j\alpha}&0&0&0&0\\
0&0&0&0&\sqrt2e^{j\alpha}&\sqrt2e^{j\alpha}&-\sqrt2e^{j\alpha}&-\sqrt2e^{j\alpha}\\
2e^{j\alpha}&-2e^{j\alpha}&0&0&0&0&0&0\\
0&0&2e^{j\alpha}&-2e^{j\alpha}&0&0&0&0\\
0&0&0&0&2e^{j\alpha}&-2e^{j\alpha}&0&0\\
0&0&0&0&0&0&2e^{j\alpha}&-2e^{j\alpha}\end{pmatrix}.\qquad(1.89c)$$
(3) A new Haar-like system matrix is generalized based on the Haar matrices of orders 3 and 9:
$$[AH](3)=\begin{pmatrix}1&1&1\\[2pt]\dfrac{\sqrt2}{2}&\dfrac{\sqrt2}{2}&-\sqrt2\\[6pt]\dfrac{\sqrt6}{2}&-\dfrac{\sqrt6}{2}&0\end{pmatrix},\qquad(1.90a)$$
$$[AH](9)=\begin{pmatrix}
1&1&1&1&1&1&1&1&1\\[2pt]
\frac{\sqrt2}{2}&\frac{\sqrt2}{2}&\frac{\sqrt2}{2}&\frac{\sqrt2}{2}&\frac{\sqrt2}{2}&\frac{\sqrt2}{2}&-\sqrt2&-\sqrt2&-\sqrt2\\[2pt]
\frac{\sqrt6}{2}&\frac{\sqrt6}{2}&\frac{\sqrt6}{2}&-\frac{\sqrt6}{2}&-\frac{\sqrt6}{2}&-\frac{\sqrt6}{2}&0&0&0\\[2pt]
\frac{\sqrt6}{2}&\frac{\sqrt6}{2}&-\sqrt6&0&0&0&0&0&0\\[2pt]
\frac{\sqrt{18}}{2}&-\frac{\sqrt{18}}{2}&0&0&0&0&0&0&0\\[2pt]
0&0&0&\frac{\sqrt6}{2}&\frac{\sqrt6}{2}&-\sqrt6&0&0&0\\[2pt]
0&0&0&\frac{\sqrt{18}}{2}&-\frac{\sqrt{18}}{2}&0&0&0&0\\[2pt]
0&0&0&0&0&0&\frac{\sqrt6}{2}&\frac{\sqrt6}{2}&-\sqrt6\\[2pt]
0&0&0&0&0&0&\frac{\sqrt{18}}{2}&-\frac{\sqrt{18}}{2}&0
\end{pmatrix}.\qquad(1.90b)$$
"
n n
(det B)2 ≤ b2i, j , (1.91)
i=1 j=1
where equality is achieved when B is an orthogonal matrix. In the case bi, j = ±1,
the determinant will obtain its maximum absolute value, and B will be a Hadamard
matrix. Equality in this bound is attained for a real matrix M if and only if M
is a Hadamard matrix. A square matrix Hn of order n with elements −1 and +1
having a maximal determinant is known as a Hadamard matrix.72 The geometrical
interpretation of the maximum determinant problem is to look for n vectors from
the origin contained within the cubes −1 ≤ bi, j ≤ +1, i, j = 1, 2, . . . , n and forming
a rectangular parallelepiped of maximum volume.
We have seen that the origin of the Hadamard matrix goes back to 1867, to the time when Sylvester constructed Hadamard matrices of order 2^n. It is obvious that the Sylvester, Walsh–Hadamard, Cal–Sal, and Walsh matrices are classical examples of equivalent Hadamard matrices. Now we provide an example of a Hadamard matrix of order 12, which cannot be constructed from the above-defined classical Hadamard matrices.
$$H_{12}=\begin{pmatrix}
+&+&+&+&+&-&-&-&+&-&-&-\\
-&+&-&+&+&+&+&-&+&+&+&-\\
-&+&+&-&+&-&+&+&+&-&+&+\\
-&-&+&+&+&+&-&+&+&+&-&+\\
+&-&-&-&+&+&+&+&+&-&-&-\\
+&+&+&-&-&+&-&+&+&+&+&-\\
+&-&+&+&-&+&+&-&+&-&+&+\\
+&+&-&+&-&-&+&+&+&+&-&+\\
+&-&-&-&+&-&-&-&+&+&+&+\\
+&+&+&-&+&+&+&-&-&+&-&+\\
+&-&+&+&+&-&+&+&-&+&+&-\\
+&+&-&+&+&+&-&+&-&-&+&+
\end{pmatrix}.\qquad(1.93)$$
The expression in Eq. (1.92) is equivalent to the statement that any two distinct
rows (columns) in a matrix Hn are orthogonal. It is clear that rearrangement
of rows (columns) in Hn and/or their multiplication by −1 will preserve this
property.
$$\begin{pmatrix}
+&+&+&+&+&+&+&+&+&+&+&+&+&+&+&+\\
+&-&+&-&+&-&+&-&+&-&+&-&+&-&+&-\\
+&+&-&-&+&+&-&-&+&+&-&-&+&+&-&-\\
+&-&-&+&+&-&-&+&+&-&-&+&+&-&-&+\\
+&+&+&+&-&-&-&-&+&+&+&+&-&-&-&-\\
+&-&+&-&-&+&-&+&+&-&+&-&-&+&-&+\\
+&+&-&-&-&-&+&+&+&+&-&-&-&-&+&+\\
+&-&-&+&-&+&+&-&+&-&-&+&-&+&+&-\\
+&+&+&+&+&+&+&+&-&-&-&-&-&-&-&-\\
+&-&+&-&+&-&+&-&-&+&-&+&-&+&-&+\\
+&+&-&-&+&+&-&-&-&-&+&+&-&-&+&+\\
+&-&-&+&+&-&-&+&-&+&+&-&-&+&+&-\\
+&+&+&+&-&-&-&-&-&-&-&-&+&+&+&+\\
+&-&+&-&-&+&-&+&-&+&-&+&+&-&+&-\\
+&+&-&-&-&-&+&+&-&-&+&+&+&+&-&-\\
+&-&-&+&-&+&+&-&-&+&+&-&+&-&-&+
\end{pmatrix}\qquad(1.94)$$
$$\begin{pmatrix}
+&+&+&+&+&+&+&+&+&+&+&+&+&+&+&+\\
+&+&+&+&+&+&+&+&-&-&-&-&-&-&-&-\\
+&+&+&+&-&-&-&-&+&+&+&+&-&-&-&-\\
+&+&+&+&-&-&-&-&-&-&-&-&+&+&+&+\\
+&+&-&-&+&+&-&-&+&+&-&-&+&+&-&-\\
+&+&-&-&+&+&-&-&-&-&+&+&-&-&+&+\\
+&+&-&-&-&-&+&+&+&+&-&-&-&-&+&+\\
+&+&-&-&-&-&+&+&-&-&+&+&+&+&-&-\\
+&-&+&-&+&-&+&-&+&-&+&-&+&-&+&-\\
+&-&+&-&+&-&+&-&-&+&-&+&-&+&-&+\\
+&-&+&-&-&+&-&+&+&-&+&-&-&+&-&+\\
+&-&+&-&-&+&-&+&-&+&-&+&+&-&+&-\\
+&-&-&+&+&-&-&+&+&-&-&+&+&-&-&+\\
+&-&-&+&+&-&-&+&-&+&+&-&-&+&+&-\\
+&-&-&+&-&+&+&-&+&-&-&+&-&+&+&-\\
+&-&-&+&-&+&+&-&-&+&+&-&+&-&-&+
\end{pmatrix}.$$
t1 + t2 + t3 + t4 = n,
t1 + t 2 − t 3 − t 4 = 0,
(1.95)
t1 − t 2 + t 3 − t 4 = 0,
t1 − t 2 − t 3 + t 4 = 0.
Hadamard matrices of order $\prod_{i,j}(p_i+1)(q_j+1)$, where p_i ≡ 3 (mod 4) and q_j ≡ 1 (mod 4) are prime powers. Paley's theorem states that Hadamard matrices can be constructed for all positive orders divisible by 4 except those in the following sequence: multiples of 4 that are not equal to a power of 2 multiplied by q + 1 for some power q of an odd prime.
• Multiplicative methods:31,47 Hadamard’s original construction of Hadamard
matrices seems to be a “multiplication theorem” because it uses the fact that the
(1) Show that if H1 and H2 are complex Hadamard matrices of order n and m, then
there exists an Hadamard matrix of order mn/2.
(2) For any natural number n, how many equivalent classes of complex Hadamard
matrices of order n exist?
where
$$[PS]_2(a,b)=\begin{pmatrix}1&a\\b&-1\end{pmatrix}.$$
The i'th, k'th element of the complex Sylvester–Hadamard matrix [PS]_{2^n} may be defined by
$$h(i,k)=(-1)^{\sum_{t=0}^{n-1}\left[i_t+(i_t\oplus k_t)/2\right]},\qquad(1.99)$$
where (i_{n−1}, i_{n−2}, . . . , i_0) and (k_{n−1}, k_{n−2}, . . . , k_0) are the binary representations of i and k, respectively.^40
For instance, from Eq. (1.98), for n = 2, we obtain
$$[PS]_4=\begin{pmatrix}1&j\\-j&-1\end{pmatrix}\otimes\begin{pmatrix}1&j\\-j&-1\end{pmatrix}=\begin{pmatrix}1&j&j&-1\\-j&-1&1&-j\\-j&1&-1&-j\\-1&j&j&1\end{pmatrix}.\qquad(1.100)$$
The element h_{1,3} of [PS]_4 in the second row [i = (01)] and in the fourth column [k = (11)] is equal to $h_{1,3}=(-1)^{1+(1\oplus1)/2+0+(0\oplus1)/2}=(-1)^{1+1/2}=-j$.
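Formula (1.99) can be checked against the Kronecker construction of Eq. (1.100). In the sketch below (ours), the exponent is evaluated through exp(jπ·) so that half-integer powers of −1 come out as ±j:

```python
import numpy as np

def ps_element(i, k, n):
    """h(i,k) = (-1)^{sum_t [i_t + (i_t XOR k_t)/2]}, Eq. (1.99)."""
    e = 0.0
    for t in range(n):
        it, kt = (i >> t) & 1, (k >> t) & 1
        e += it + (it ^ kt) / 2.0
    return np.exp(1j * np.pi * e)   # (-1)**e with (-1)**(1/2) = +j

PS2 = np.array([[1, 1j], [-1j, -1]])
PS4 = np.kron(PS2, PS2)            # Eq. (1.100)
F = np.array([[ps_element(i, k, 2) for k in range(4)] for i in range(4)])
assert np.allclose(F, PS4)
```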
where
$$[WH]_{c1}=\begin{pmatrix}1&1\\-j&j\end{pmatrix},\qquad
[WH]_{c2}=\begin{pmatrix}1&1&1&1\\1&-1&-1&j\\1&1&-1&-1\\1&-1&1&-1\end{pmatrix}.$$
This recurrent relation gives a complex Hadamard matrix of order 2^m. (It is also called a complex Walsh–Hadamard matrix.)
Note that if H and Q1 = (A1 , A2 ) and Q2 = (B1 , B2 , B3 , B4 ) are complex
Hadamard matrices of orders m and n, respectively, then the matrices C1 and C2
are complex Hadamard matrices of order mn:
C1 = [H ⊗ A1 , (HR) ⊗ A2 ] ,
C2 = [H ⊗ B1 , (HR) ⊗ B2 , H ⊗ B3 , (HR) ⊗ B4 ] , (1.103)
or
$$[WH]_3=\begin{pmatrix}
1&1&1&1&1&1&1&1\\
1&1&-1&-1&-j&-j&j&j\\
1&1&1&1&-1&-1&-1&-1\\
1&1&-1&-1&j&j&-j&-j\\
j&-j&-j&j&j&-j&-j&j\\
j&-j&j&-j&1&-1&1&-1\\
j&-j&-j&j&-j&j&j&-j\\
j&-j&j&-j&-1&1&-1&1
\end{pmatrix}.\qquad(1.106)$$
Figure 1.20 The first eight real (left) and imaginary (right) parts of discrete complex
Hadamard functions corresponding to the matrix [WH]3 .
Figure 1.21 The first eight real (left) and imaginary (right) parts of continuous complex
Hadamard functions corresponding to the Paley complex Hadamard matrix W3p .
Figure 1.22 The first eight real (left) and imaginary (right) parts of continuous complex
Walsh–Hadamard functions corresponding to the complex Walsh matrix W3 .
or
$$W_3=\begin{pmatrix}
(1\ \ 1)\otimes\begin{pmatrix}1&1&1&1\\1&-1&1&-1\\1&-j&-1&j\\1&j&-1&-j\end{pmatrix}\\[8pt]
(1\ -1)\otimes\left[\begin{pmatrix}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1\end{pmatrix}\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&j&0\\0&0&0&j\end{pmatrix}\right]
\end{pmatrix}
=\begin{pmatrix}
1&1&1&1&1&1&1&1\\
1&-1&1&-1&1&-1&1&-1\\
1&-j&-1&j&1&-j&-1&j\\
1&j&-1&-j&1&j&-1&-j\\
1&1&j&j&-1&-1&-j&-j\\
1&-1&j&-j&-1&1&-j&j\\
1&1&-j&-j&-1&-1&j&j\\
1&-1&-j&j&-1&1&j&-j
\end{pmatrix}.\qquad(1.110)$$
References
30. J. Hadamard, “Résolution d'une question relative aux déterminants,” Bull. Sci.
Math. 17, 240–246 (1893).
31. S. S. Agaian, Hadamard Matrices and their Applications, Lecture Notes in
Mathematics, 1168, Springer, New York (1985).
32. J. Williamson, “Hadamard determinant theorem and sum of four squares,”
Duke Math. J. 11, 65–81 (1944).
33. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block
designs,” in Surveys in Contemporary Design Theory, John Wiley & Sons,
Hoboken, NJ (1992).
52. A. B. Németh, “On Alfred Haar’s original proof of his theorem on best
approximation,” in Proc. A. Haar Memorial Conf. I, II, Amsterdam, New York,
pp. 651–659 (1987).
53. B. S. Nagy, “Alfred Haar (1885–1933),” Resultate Math. 8 (2), 194–196
(1985).
54. K. J. R. Liu, “VLSI computing architectures for Haar transform,” Electron.
Lett. 26 (23), 1962–1963 (1990).
55. T. J. Davis, “Fast decomposition of digital curves into polygons using the Haar
transform,” IEEE Trans. Pattern Anal. Mach. Intell. 21 (8), 786–790 (1999).
56. B. J. Falkowski and S. Rahardja, “Sign Haar Transform,” in Proc. of IEEE Int.
Symp. Circuits Syst., ISCAS ’94 2, 161–164 (1994).
57. K.-W. Cheung, C.-H. Cheung and L.-M. Po, “A novel multi wavelet-based
integer transform for lossless image coding,” in Proc. Int. Conf. Image
Processing, ICIP 99 1, 444–447, City Univ. of Hong Kong, Kobe (1999).
58. B. J. Falkowski and S. Rahardja, “Properties of Boolean functions in spectral
domain of sign Haar transform,” Inf. Commun. Signal Process 1, 64–68 (1997).
59. B. J. Falkowski and C.-H. Chang, “Properties and applications of paired Haar
transform,” Inf. Commun. Signal Process. 1997, ICICS 1, 48–51 (1997).
60. S. Yu and R. Liu, “A new edge detection algorithm: fast and localizing to a
single pixel,” in Proc. of IEEE Int. Symp. on Circuits and Systems, ISCAS ’93
1, 539–542 (1993).
61. T. Lonnestad, “A new set of texture features based on the Haar transform,”
Proc. 11th IAPR Int. Conf. on Pattern Recognition, Image, Speech and Signal
Analysis, (The Hague, 30 Aug.–3 Sept., 1992), 3, 676–679 (1992).
62. G. M. Megson, “Systolic arrays for the Haar transform,”in IEE Proc. of
Computers and Digital Techniques, vol. 145, pp. 403–410 (1998).
63. G. A. Ruiz and J. A. Michell, “Memory efficient programmable processor
chip for inverse Haar transform,” IEEE Trans. Signal Process 46 (1), 263–268
(1998).
64. M. A. Thornton, “Modified Haar transform calculation using digital
circuit output probabilities,” Proc. of IEEE Int. Conf. on Information,
Communications and Signal Processing 1, 52–58 (1997).
65. J. P. Hansen and M. Sekine, “Decision diagram based techniques for the Haar
wavelet transform,” in Proc. of Int. Conf. on Information, Communications and
Signal Processing 1, pp. 59–63 (1997).
66. Y.-D. Wang and M. J. Paulik, “A discrete wavelet model for target recognition,”
in Proc. of IEEE 39th Midwest Symp. on Circuits and Systems 2, 835–838
(1996).
67. K. Egiazarian and J. Astola, “Generalized Fibonacci cubes and trees for DSP
applications,” in Proc. of IEEE Int. Symp. on Circuits and Systems, ISCAS ’96
2, 445–448 (1996).
68. L. M. Kaplan and J. C.-C. Kuo, “Signal modeling using increments of extended
self-similar processes,” in Proc. of IEEE Int. Conf. on Acoustics, Speech, and
Signal Processing, ICASSP-94 4, 125–128 (1994).
69. L. Prasad, “Multiresolutional Fault Tolerant Sensor Integration and Object
Recognition in Images,” Ph.D. dissertation, Louisiana State University (1995).
70. B. J. Falkowski and C. H. Chang, “Forward and inverse transformations
between Haar spectra and ordered binary decision diagrams of Boolean
functions,” IEEE Trans. Comput. 46 (11), 1272–1279 (1997).
71. G. Ruiz, J. A. Michell and A. Buron, “Fault detection and diagnosis for MOS
circuits from Haar and Walsh spectrum analysis: on the fault coverage of
Haar reduced analysis,” in Theory and Application of Spectral Techniques,
C. Moraga, Ed., Dortmund University Press, pp. 97–106 (1988).
72. J. Brenner and L. Cummings, “The Hadamard maximum determinant
problem,” Am. Math. Mon. 79, 626–630 (1972).
73. R. E. A. C. Paley, “On orthogonal matrices,” J. Math. Phys. 12, 311–320
(1933).
74. W. K. Pratt, J. Kane, and H. C. Andrews, “Hadamard transform image
coding,” Proc. IEEE 57, 58–68 (1969).
75. R. K. Yarlagadda and J. E. Hershey, Hadamard Matrix Analysis and Synthesis,
Kluwer Academic Publishers, Boston (1996).
76. S. Agaian and A. Matevosian, “Haar transforms and automatic quality test of
printed circuit boards,” Acta Cybernet. 5 (3), 315–362 (1981).
77. S. Agaian and H. Sarukhanyan, “Generalized δ-codes and Hadamard
matrices,” Prob. Inf. Transmission 16 (3), 203–211 (1980).
78. S. Georgiou, C. Koukouvinos and J. Seberry, “Hadamard matrices, orthogonal
designs and construction algorithms,” available at Research Online, http://ro.
uow.edu.au/infopapers/308.
79. G. Ruiz, J. A. Michell and A. Buron, “Fault detection and diagnosis for MOS
circuits from Haar and Walsh spectrum analysis: on the fault coverage of
Haar reduced analysis,” in Theory and Application of Spectral Techniques,
C. Moraga, Ed., Dortmund University Press, pp. 97–106 (1988).
80. http://www.websters-online-dictionary.org/Gr/Gray+code.html.
81. F. Gray, “Pulse code communication,” U.S. Patent No. 2,632,058 (March 17
1953).
This chapter describes efficient (in terms of space and time) computational procedures for a commonly used class of 2^n-point HTs and Haar transforms. There are many distinct fast HT algorithms involving a wide range of mathematics. We will focus mostly on a matrix approach. Section 2.1 describes the general concept of matrix-based fast DOT (discrete orthogonal transform) algorithms. Section 2.2 presents the 2^n-point WHT. Section 2.3 presents the fast Walsh–Paley transform. Section 2.4 presents fast Cal–Sal transforms. Sections 2.5 and 2.6 describe the complex HTs and the fast Haar transform algorithm.
$$Y[k]=\frac{1}{\sqrt N}\sum_{n=0}^{N-1}f[n]\,\phi_n[k],\qquad k=0,1,\ldots,N-1,\qquad(2.1)$$
where {φ_n[k]} is an orthogonal system. Or, in matrix form, $Y=(1/\sqrt N)H_Nf$, and Eq. (2.1) can be written as
The idea of a fast algorithm is to map the given computational problem into several subproblems, which leads to a reduction of the order of complexity of the problem:
General Concept in the Design of Fast DOT Algorithms: A fast transform T_N f may be achieved by factoring the transform matrix T_N into the product of k sparse matrices. Typically, N = 2^n, k = log₂ N = n, and
$$T_{2^n}=F_nF_{n-1}\cdots F_2F_1,\qquad(2.4)$$
$$T_{2^n}^{-1}=T_{2^n}^{T}=(F_nF_{n-1}\cdots F_2F_1)^T=F_1^TF_2^T\cdots F_{n-1}^TF_n^T.\qquad(2.5)$$
Thus, one can implement the transform T N f via the following consecutive
computations:
2D DOTs: The simplest and most common 2D DOT algorithm, known as the row-column algorithm, corresponds to first performing 1D fast DOTs (by any of the 1D DOT algorithms) on all of the rows and then on all of the columns, or vice versa. 2D
transforms can be performed in two steps, as follows:
Step 1. Compute N-point 1D DOT on the columns of the data.
Step 2. Compute N-point DOT on the rows of the intermediate result.
This idea can be very easily extended to the multidimensional case (see Fig. 2.1).
$$Y=\frac{1}{\sqrt N}H_NX,\qquad(2.7)$$
$$Y=\frac{1}{\sqrt N}H_NY.\qquad(2.8)$$
It has been shown that a fast WHT algorithm exists with C(N) = N log₂ N addition/subtraction operations.^1 To understand the concept of the construction of fast transform algorithms, we start with the 8-point WHT
$$F=\frac{1}{\sqrt8}H_8f,\qquad(2.9)$$
(2) The Hadamard matrix H8 can be expressed as the product of the following
three matrices:
H8 = B3 B2 B1 , (2.12)
where
B1 = H2 ⊗ I4 , (2.13)
B2 = (H2 ⊗ I2 ) ⊕ (H2 ⊗ I2 ), (2.14)
B3 = I4 ⊗ H2 , (2.15)
or
$$B_1=\begin{pmatrix}+&0&0&0&+&0&0&0\\0&+&0&0&0&+&0&0\\0&0&+&0&0&0&+&0\\0&0&0&+&0&0&0&+\\+&0&0&0&-&0&0&0\\0&+&0&0&0&-&0&0\\0&0&+&0&0&0&-&0\\0&0&0&+&0&0&0&-\end{pmatrix},\qquad(2.16)$$
$$B_2=\begin{pmatrix}+&0&+&0&0&0&0&0\\0&+&0&+&0&0&0&0\\+&0&-&0&0&0&0&0\\0&+&0&-&0&0&0&0\\0&0&0&0&+&0&+&0\\0&0&0&0&0&+&0&+\\0&0&0&0&+&0&-&0\\0&0&0&0&0&+&0&-\end{pmatrix},\qquad(2.17)$$
$$B_3=\begin{pmatrix}+&+&0&0&0&0&0&0\\+&-&0&0&0&0&0&0\\0&0&+&+&0&0&0&0\\0&0&+&-&0&0&0&0\\0&0&0&0&+&+&0&0\\0&0&0&0&+&-&0&0\\0&0&0&0&0&0&+&+\\0&0&0&0&0&0&+&-\end{pmatrix}.\qquad(2.18)$$
The comparison of Eq. (2.11) with Eq. (2.12) shows the following:
• The expression in Eq. (2.12), which computes the DHT, produces exactly the same result as evaluating the DHT definition directly [see Eq. (2.11)].
• The direct calculation of an 8-point HT H8 f requires 56 operations. However,
for calculation of H8 f via the fast algorithm, only 24 operations are required.
This is because each product of the matrix and vector requires only eight
additions or subtractions, since each sparse matrix has only two nonzero
elements in each row. Thus, all operations (additions, subtractions) that are
required for the H8 f calculation equal 24 = 8 log2 8. The difference in speed can
be significant, especially for long data sets, where N may be in the thousands or
millions.
• To perform the 8-point HT requires only eight storage locations.
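Each sparse factor B_i corresponds to one in-place "butterfly" pass of N additions/subtractions, which is exactly how a fast WHT is usually coded. A minimal sketch (ours, not the book's program):

```python
import numpy as np

def fwht(x):
    """In-place fast WHT: log2(N) passes of N add/subtract operations,
    one pass per sparse factor in H_8 = B3 B2 B1 of Eq. (2.12)."""
    y = np.array(x, dtype=float)
    N = len(y)
    h = N // 2                     # B1 = H2 (x) I_{N/2} is applied first
    while h >= 1:
        for start in range(0, N, 2 * h):
            for i in range(start, start + h):
                y[i], y[i + h] = y[i] + y[i + h], y[i] - y[i + h]
        h //= 2
    return y

f = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
H8 = np.array([[1]])
for _ in range(3):
    H8 = np.kron(H8, [[1, 1], [1, -1]])
assert np.allclose(fwht(f), H8 @ f)   # 24 = 8 log2(8) add/subtract operations
```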
(The flow graph of Fig. 2.2 traces the intermediate butterfly sums b + f, c + g, a + e − c − g, b + f − d − h, and so on; for the input (a, b, c, d, e, f, g, h) its final outputs are
(a+e+c+g)+(b+f+d+h), (a+e+c+g)−(b+f+d+h),
(a+e−c−g)+(b+f−d−h), (a+e−c−g)−(b+f−d−h),
(a−e+c−g)+(b−f+d−h), (a−e+c−g)−(b−f+d−h),
(a−e−c+g)+(b−f−d+h), (a−e−c+g)−(b−f−d+h).)
• The inverse 8-point HT matrix can be expressed by the product of three matrices:
H8 = B1 B2 B3 .
The fast WHT algorithms are best explained using signal flow diagrams, as shown in Fig. 2.2. These diagrams consist of a series of nodes, each representing a variable that is expressed as the sum of the variables originating from the left of the diagram and connected to the node by solid lines. A dashed connecting line indicates a term to be subtracted. Figure 2.2 shows the signal flow graph illustrating the computation of the WHT coefficients for N = 8 and shows all steps of the flow graph simultaneously.
In general, the flow graph is used without the node block (Fig. 2.3).
$$B_1\ [\text{see Eq.}\,(2.16)]=\begin{pmatrix}+&+\\+&-\end{pmatrix}\otimes I_4=H_2\otimes I_4=I_1\otimes H_2\otimes I_4,\qquad(2.22)$$
$$B_2\ [\text{see Eq.}\,(2.17)]=\begin{pmatrix}H_2\otimes I_2&0\\0&H_2\otimes I_2\end{pmatrix}=I_2\otimes H_2\otimes I_2,\qquad(2.23)$$
$$B_3\ [\text{see Eq.}\,(2.18)]=I_4\otimes H_2=I_4\otimes H_2\otimes I_1.\qquad(2.24)$$
F1 = H2 ⊗ I8 , (2.26)
F2 = I2 ⊗ (H2 ⊗ I4 ) = (H2 ⊗ I4 ) ⊕ (H2 ⊗ I4 ), (2.27)
F3 = I4 ⊗ (H2 ⊗ I2 ) = (H2 ⊗ I2 ) ⊕ (H2 ⊗ I2 ) ⊕ (H2 ⊗ I2 ) ⊕ (H2 ⊗ I2 ), (2.28)
F4 = I8 ⊗ H2 . (2.29)
[Flow graph with inputs X(0)–X(15) and outputs Y(0)–Y(15); plot not reproduced here.]
Now, using the properties of the Kronecker product, we obtain the desired
results. From this, it is not difficult to show that the WHT matrix of order 2n can
be factored as
"
n
H2n = (I2m−1 ⊗ H2 ⊗ I2n−m ). (2.30)
m=1
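This factorization is easy to verify numerically; the short sketch below (ours) multiplies the sparse factors for n = 4 and compares the product with the Kronecker definition:

```python
import numpy as np

def factor(n, m):
    """Sparse factor I_{2^{m-1}} (x) H_2 (x) I_{2^{n-m}} of Eq. (2.30)."""
    H2 = np.array([[1, 1], [1, -1]])
    return np.kron(np.kron(np.eye(2 ** (m - 1)), H2), np.eye(2 ** (n - m)))

n = 4
H = np.eye(2 ** n)
for m in range(1, n + 1):
    H = H @ factor(n, m)           # the factors act on disjoint tensor slots

H_direct = np.array([[1]])
for _ in range(n):
    H_direct = np.kron(H_direct, [[1, 1], [1, -1]])
assert np.allclose(H, H_direct)
```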
then
$$H_{2^n}=(H_2\otimes I_{2^{n-1}})(I_2\otimes H_{2^{n-1}})=(H_2\otimes I_{2^{n-1}})\begin{pmatrix}H_{2^{n-1}}&O_{2^{n-1}}\\O_{2^{n-1}}&H_{2^{n-1}}\end{pmatrix}.\qquad(2.33)$$
(2) The WHT of the signal f can be computed with n·2^n addition/subtraction operations.
Proof: From the definition of the Walsh–Hadamard matrix, we have
$$H_{2^n}=\begin{pmatrix}H_{2^{n-1}}&H_{2^{n-1}}\\H_{2^{n-1}}&-H_{2^{n-1}}\end{pmatrix}.\qquad(2.35)$$
Using Lemma 2.2.1, we may rewrite this equation in the following form:
$$H_{2^n}=(H_2\otimes I_{2^{n-1}})(I_2\otimes H_{2^{n-1}})=(I_{2^0}\otimes H_2\otimes I_{2^{n-1}})(I_2\otimes H_{2^{n-1}}),\qquad I_{2^0}=1.\qquad(2.36)$$
Using the same procedure with the Walsh–Hadamard matrix of order 2^{n−1}, we obtain
$$H_{2^{n-1}}=\begin{pmatrix}H_{2^{n-2}}&H_{2^{n-2}}\\H_{2^{n-2}}&-H_{2^{n-2}}\end{pmatrix}=(I_{2^0}\otimes H_2\otimes I_{2^{n-2}})(I_2\otimes H_{2^{n-2}}).\qquad(2.37)$$
N      N log₂ N   N(N − 1)    (N − 1)/log₂ N
4      8          12          3/2
8      24         56          7/3
16     64         240         15/4
32     160        992         31/5
64     384        4,032       63/6
128    896        16,256      127/7
256    2,048      65,280      255/8
512    4,608      261,632     511/9
1024   10,240     1,047,552   1023/10
Thus, we have
Note that [WP]1 = (1). Using the properties of the Kronecker product, we obtain
$$[WP]_{2^n}=\begin{pmatrix}[WP]_{2^{n-1}}\otimes(+\ +)\\ [WP]_{2^{n-1}}\otimes(+\ -)\end{pmatrix}
=\begin{pmatrix}\bigl([WP]_{2^{n-1}}I_{2^{n-1}}\bigr)\otimes\bigl(I_1(+\ +)\bigr)\\ \bigl([WP]_{2^{n-1}}I_{2^{n-1}}\bigr)\otimes\bigl(I_1(+\ -)\bigr)\end{pmatrix}
=\begin{pmatrix}\bigl([WP]_{2^{n-1}}\otimes I_1\bigr)\bigl(I_{2^{n-1}}\otimes(+\ +)\bigr)\\ \bigl([WP]_{2^{n-1}}\otimes I_1\bigr)\bigl(I_{2^{n-1}}\otimes(+\ -)\bigr)\end{pmatrix}.\qquad(2.42)$$
we obtain
$$[WP]_{2^n}=\begin{pmatrix}[WP]_{2^{n-1}}&0\\0&[WP]_{2^{n-1}}\end{pmatrix}\begin{pmatrix}I_{2^{n-1}}\otimes(+\ +)\\I_{2^{n-1}}\otimes(+\ -)\end{pmatrix}.\qquad(2.44)$$
Thus,
$$[WP]_{2^n}=\bigl(I_2\otimes[WP]_{2^{n-1}}\bigr)\begin{pmatrix}I_{2^{n-1}}\otimes(+\ +)\\I_{2^{n-1}}\otimes(+\ -)\end{pmatrix}.\qquad(2.45)$$
Example 2.3.1: The Walsh–Paley matrices of orders 4, 8, and 16 can be factored as
$$N=4:\quad[WP]_4=\left(I_2\otimes\begin{pmatrix}+&+\\+&-\end{pmatrix}\right)P_1=\begin{pmatrix}+&+&0&0\\+&-&0&0\\0&0&+&+\\0&0&+&-\end{pmatrix}\begin{pmatrix}+&+&0&0\\0&0&+&+\\+&-&0&0\\0&0&+&-\end{pmatrix},\qquad(2.46)$$
$$N=8:\quad[WP]_8=\left[I_4\otimes\begin{pmatrix}+&+\\+&-\end{pmatrix}\right](I_2\otimes P_1)\,P_2,\qquad(2.47)$$
where
$$P_1=\begin{pmatrix}+&+&0&0\\0&0&+&+\\+&-&0&0\\0&0&+&-\end{pmatrix},\qquad
P_2=\begin{pmatrix}+&+&0&0&0&0&0&0\\0&0&+&+&0&0&0&0\\0&0&0&0&+&+&0&0\\0&0&0&0&0&0&+&+\\+&-&0&0&0&0&0&0\\0&0&+&-&0&0&0&0\\0&0&0&0&+&-&0&0\\0&0&0&0&0&0&+&-\end{pmatrix}.\qquad(2.48)$$
$$N=16:\quad[WP]_{16}=\left[I_8\otimes\begin{pmatrix}+&+\\+&-\end{pmatrix}\right](I_4\otimes P_1)(I_2\otimes P_2)\,P_3,\qquad(2.49)$$
where
$$P_3=\begin{pmatrix}I_8\otimes(+\ +)\\I_8\otimes(+\ -)\end{pmatrix}=\begin{pmatrix}
+&+&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&+&+&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&+&+&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&+&+&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&+&+&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&+&+&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&+&+&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&+&+\\
+&-&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&+&-&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&+&-&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&+&-&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&+&-&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&+&-&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&+&-&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&+&-
\end{pmatrix}.\qquad(2.50)$$
[Flow graph of the fast 8-point Walsh–Paley transform: inputs x0–x7, outputs in bit-reversed order y0, y4, y2, y6, y1, y5, y3, y7.]
and
$$R_{2^n}=\begin{pmatrix}0&0&\cdots&0&1\\0&0&\cdots&1&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\0&1&\cdots&0&0\\1&0&\cdots&0&0\end{pmatrix}.\qquad(2.58)$$
This matrix can also be expressed as
Example 2.3.2: A factorization of Walsh matrices of orders 4, 8, and 16, using the relation of Eq. (2.56), is obtained as follows:
(1) For N = 4 [see Eq. (2.52)],
$$[WP]_2=H_2,\qquad(2.61)$$
$$[WP]_4=\begin{pmatrix}H_2&0\\0&H_2\end{pmatrix}\begin{pmatrix}I_2\otimes(+\ +)\\I_2\otimes(+\ -)\end{pmatrix},\qquad(2.62)$$
and
$$G_4=\left(I_1\otimes\begin{pmatrix}I_2&0\\0&R_2\end{pmatrix}\right)\left(I_2\otimes\begin{pmatrix}I_1&0\\0&R_1\end{pmatrix}\right)=\begin{pmatrix}I_2&0\\0&R_2\end{pmatrix};\qquad(2.63)$$
then, we obtain
$$W_4=\begin{pmatrix}I_2&0\\0&R_2\end{pmatrix}\begin{pmatrix}H_2&0\\0&H_2\end{pmatrix}\begin{pmatrix}I_2\otimes(+\ +)\\I_2\otimes(+\ -)\end{pmatrix}.\qquad(2.64)$$
$$A_2=\begin{pmatrix}+&+&0&0&0&0&0&0\\0&0&+&+&0&0&0&0\\+&-&0&0&0&0&0&0\\0&0&+&-&0&0&0&0\\0&0&0&0&+&+&0&0\\0&0&0&0&+&-&0&0\\0&0&0&0&0&0&+&+\\0&0&0&0&0&0&+&-\end{pmatrix},\qquad(2.70c)$$
$$A_3=\begin{pmatrix}+&+&0&0&0&0&0&0\\0&0&+&+&0&0&0&0\\0&0&0&0&+&+&0&0\\0&0&0&0&0&0&+&+\\+&-&0&0&0&0&0&0\\0&0&+&-&0&0&0&0\\0&0&0&0&+&-&0&0\\0&0&0&0&0&0&+&-\end{pmatrix}.\qquad(2.70d)$$
because
$$G_{16}=\begin{pmatrix}I_8&0\\0&R_8\end{pmatrix}\left(I_2\otimes\begin{pmatrix}I_4&0\\0&R_4\end{pmatrix}\right)\left(I_4\otimes\begin{pmatrix}I_2&0\\0&R_2\end{pmatrix}\right)(I_8\otimes I_2).\qquad(2.72)$$
$$A_0=\begin{pmatrix}
+&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&+&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&+&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&+&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&+&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&+&0&0&0&0&0&0&0&0\\
0&0&0&0&0&+&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&+&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&+&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&+&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&+\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&+&0\\
0&0&0&0&0&0&0&0&0&0&+&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&+&0&0&0&0\\
0&0&0&0&0&0&0&0&0&+&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&+&0&0&0&0&0&0&0
\end{pmatrix}\qquad(2.73)$$
(row i has its single + in the column given by the Gray code of i − 1, plus one),
$$A_1=I_8\otimes H_2=I_8\otimes\begin{pmatrix}+&+\\+&-\end{pmatrix},\qquad(2.74)$$
$$A_2=I_4\otimes\begin{pmatrix}I_2\otimes(+\ +)\\I_2\otimes(+\ -)\end{pmatrix}=I_4\otimes P_1,\qquad(2.75)$$
$$A_3=I_2\otimes\begin{pmatrix}I_4\otimes(+\ +)\\I_4\otimes(+\ -)\end{pmatrix}=I_2\otimes P_2,\qquad(2.76)$$
$$A_4=\begin{pmatrix}I_8\otimes(+\ +)\\I_8\otimes(+\ -)\end{pmatrix}.\qquad(2.77)$$
Example 2.3.3: A factorization of Walsh matrices of orders 4, 8, and 16, using the relation of Eq. (2.59), is obtained as follows. Because
$$H_{2^n}=\prod_{m=0}^{n-1}I_{2^m}\otimes(H_2\otimes I_{2^{n-m-1}}),\qquad(2.78)$$
$$Q_{2^n}=\prod_{m=0}^{n-1}\left[I_{2^{n-m-1}}\otimes\begin{pmatrix}I_{2^m}\otimes(+\ 0)\\R_{2^m}\otimes(0\ +)\end{pmatrix}\right].\qquad(2.79)$$
Then, using Eq. (2.59), the Walsh matrix W_{2^n} can be factored as
$$W_8=(I_4\otimes I_2)\left[I_2\otimes\begin{pmatrix}I_2\otimes(+\ 0)\\R_2\otimes(0\ +)\end{pmatrix}\right]\begin{pmatrix}I_4\otimes(+\ 0)\\R_4\otimes(0\ +)\end{pmatrix}(H_2\otimes I_4)\,[I_2\otimes(H_2\otimes I_2)]\,(I_4\otimes H_2)$$
$$=\begin{pmatrix}+&0&0&0&0&0&0&0\\0&0&+&0&0&0&0&0\\0&0&0&+&0&0&0&0\\0&+&0&0&0&0&0&0\\0&0&0&0&+&0&0&0\\0&0&0&0&0&0&+&0\\0&0&0&0&0&0&0&+\\0&0&0&0&0&+&0&0\end{pmatrix}
\begin{pmatrix}+&0&0&0&0&0&0&0\\0&0&+&0&0&0&0&0\\0&0&0&0&+&0&0&0\\0&0&0&0&0&0&+&0\\0&0&0&0&0&0&0&+\\0&0&0&0&0&+&0&0\\0&0&0&+&0&0&0&0\\0&+&0&0&0&0&0&0\end{pmatrix}
\begin{pmatrix}+&0&0&0&+&0&0&0\\0&+&0&0&0&+&0&0\\0&0&+&0&0&0&+&0\\0&0&0&+&0&0&0&+\\+&0&0&0&-&0&0&0\\0&+&0&0&0&-&0&0\\0&0&+&0&0&0&-&0\\0&0&0&+&0&0&0&-\end{pmatrix}$$
$$\times\begin{pmatrix}+&0&+&0&0&0&0&0\\0&+&0&+&0&0&0&0\\+&0&-&0&0&0&0&0\\0&+&0&-&0&0&0&0\\0&0&0&0&+&0&+&0\\0&0&0&0&0&+&0&+\\0&0&0&0&+&0&-&0\\0&0&0&0&0&+&0&-\end{pmatrix}
\begin{pmatrix}+&+&0&0&0&0&0&0\\+&-&0&0&0&0&0&0\\0&0&+&+&0&0&0&0\\0&0&+&-&0&0&0&0\\0&0&0&0&+&+&0&0\\0&0&0&0&+&-&0&0\\0&0&0&0&0&0&+&+\\0&0&0&0&0&0&+&-\end{pmatrix}.\qquad(2.82)$$
where u = 2^{n−1}u_{n−1} + 2^{n−2}u_{n−2} + · · · + u_0, v = 2^{n−1}v_{n−1} + 2^{n−2}v_{n−2} + · · · + v_0, and p_{n−1} = u_0, p_i = u_{n−i−1} + u_{n−i−2}, i = 0, 1, . . . , n − 2.
Let x = (x_0, x_1, . . . , x_{N−1})^T be an input signal vector; then, the forward and inverse Cal–Sal transforms can be expressed as
$$y=\frac{1}{N}H_{cs}x,\qquad x=H_{cs}y.\qquad(2.84)$$
$$H_{cs}(8)=\begin{pmatrix}
1&1&1&1&1&1&1&1\\
1&1&-1&-1&-1&-1&1&1\\
1&-1&-1&1&1&-1&-1&1\\
1&-1&1&-1&-1&1&-1&1\\
1&-1&1&-1&1&-1&1&-1\\
1&-1&-1&1&-1&1&1&-1\\
1&1&-1&-1&1&1&-1&-1\\
1&1&1&1&-1&-1&-1&-1
\end{pmatrix},\qquad(2.86)$$
whose rows are, from top to bottom, wal(0, t), cal(1, t), cal(2, t), cal(3, t), sal(4, t), sal(3, t), sal(2, t), sal(1, t), with sequencies 0, 1, 2, 3, 4, 3, 2, 1.
Similar to other HT matrices, the Cal–Sal matrix Hcs(N) of order N can be factored into sparse matrices, leading to a fast algorithm. For example, we have
$$H_{cs}(4)=\begin{pmatrix}\begin{pmatrix}+&0\\0&+\end{pmatrix}&\begin{pmatrix}+&0\\0&+\end{pmatrix}\\[4pt]\begin{pmatrix}0&+\\+&0\end{pmatrix}&\begin{pmatrix}0&-\\-&0\end{pmatrix}\end{pmatrix}\begin{pmatrix}\begin{pmatrix}+&+\\+&-\end{pmatrix}&O_2\\[4pt]O_2&\begin{pmatrix}+&+\\-&+\end{pmatrix}\end{pmatrix}
=\begin{pmatrix}I_2&I_2\\R_2&-R_2\end{pmatrix}\begin{pmatrix}H_2&O_2\\O_2&H_2R_2\end{pmatrix},\qquad(2.87)$$
$$H_{cs}(8)=\begin{pmatrix}
+&0&0&0&+&0&0&0\\
0&0&+&0&0&0&+&0\\
0&+&0&0&0&+&0&0\\
0&0&0&+&0&0&0&+\\
0&0&0&+&0&0&0&-\\
0&+&0&0&0&-&0&0\\
0&0&+&0&0&0&-&0\\
+&0&0&0&-&0&0&0
\end{pmatrix}
\begin{pmatrix}\begin{pmatrix}I_2&I_2\\I_2&-I_2\end{pmatrix}&O_4\\[4pt]O_4&\begin{pmatrix}I_2&I_2\\-I_2&I_2\end{pmatrix}\end{pmatrix}
\left(I_4\otimes\begin{pmatrix}+&+\\+&-\end{pmatrix}\right),\qquad(2.88)$$
and, for Hcs(16),
$$H_{cs}(16)=B_1B_2B_3B_4,\qquad(2.89)$$
where
$$B_1=\begin{pmatrix}[CBR](I_8)&[CBR](I_8)\\ [HR]\{[CBR](I_8)\}&-[HR]\{[CBR](I_8)\}\end{pmatrix},\quad
[CBR](I_8)=\begin{pmatrix}
+&0&0&0&0&0&0&0\\
0&0&0&0&+&0&0&0\\
0&0&+&0&0&0&0&0\\
0&0&0&0&0&0&+&0\\
0&+&0&0&0&0&0&0\\
0&0&0&0&0&+&0&0\\
0&0&0&+&0&0&0&0\\
0&0&0&0&0&0&0&+
\end{pmatrix},\qquad(2.90)$$
$$B_2=\begin{pmatrix}\begin{pmatrix}I_4&I_4\\I_4&-I_4\end{pmatrix}&O_8\\[4pt]O_8&\begin{pmatrix}I_4&I_4\\-I_4&I_4\end{pmatrix}\end{pmatrix},\qquad(2.91)$$
$$B_3=\mathrm{diag}\left[\begin{pmatrix}I_2&I_2\\I_2&-I_2\end{pmatrix},\begin{pmatrix}I_2&I_2\\-I_2&I_2\end{pmatrix},\begin{pmatrix}I_2&I_2\\I_2&-I_2\end{pmatrix},\begin{pmatrix}I_2&I_2\\-I_2&I_2\end{pmatrix}\right],\qquad(2.92)$$
$$B_4=\mathrm{diag}\left[\begin{pmatrix}+&+\\+&-\end{pmatrix},\begin{pmatrix}+&+\\-&+\end{pmatrix},\begin{pmatrix}+&+\\+&-\end{pmatrix},\begin{pmatrix}+&+\\-&+\end{pmatrix},\begin{pmatrix}+&+\\+&-\end{pmatrix},\begin{pmatrix}+&+\\-&+\end{pmatrix},\begin{pmatrix}+&+\\+&-\end{pmatrix},\begin{pmatrix}+&+\\-&+\end{pmatrix}\right].\qquad(2.93)$$
We will now introduce the column bit reversal (CBR) operation. Let A be an m × m matrix (m a power of 2). [CBR](A) is the m × m matrix obtained from A by rearranging its columns in bit-reversed order. For example, consider the following 4 × 4 matrix:
$$A=\begin{pmatrix}a_{11}&a_{12}&a_{13}&a_{14}\\a_{21}&a_{22}&a_{23}&a_{24}\\a_{31}&a_{32}&a_{33}&a_{34}\\a_{41}&a_{42}&a_{43}&a_{44}\end{pmatrix},\quad\text{then}\quad
[CBR](A)=\begin{pmatrix}a_{11}&a_{13}&a_{12}&a_{14}\\a_{21}&a_{23}&a_{22}&a_{24}\\a_{31}&a_{33}&a_{32}&a_{34}\\a_{41}&a_{43}&a_{42}&a_{44}\end{pmatrix}.\qquad(2.94)$$
The horizontal reflection (HR) operation for a matrix of any size is defined as
$$[HR](A)=\begin{pmatrix}a_{14}&a_{13}&a_{12}&a_{11}\\a_{24}&a_{23}&a_{22}&a_{21}\\a_{34}&a_{33}&a_{32}&a_{31}\\a_{44}&a_{43}&a_{42}&a_{41}\end{pmatrix}.\qquad(2.95)$$
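Both operations are simple column permutations; the sketch below (our illustration, with our own function names) implements them for matrices whose width is a power of two:

```python
import numpy as np

def bit_reverse(i, bits):
    """Reverse the lowest `bits` bits of index i."""
    r = 0
    for _ in range(bits):
        r = (r << 1) | (i & 1)
        i >>= 1
    return r

def cbr(A):
    """[CBR](A): columns rearranged in bit-reversed order, cf. Eq. (2.94)."""
    m = A.shape[1]
    bits = m.bit_length() - 1
    return A[:, [bit_reverse(j, bits) for j in range(m)]]

def hr(A):
    """[HR](A): horizontal reflection (column order reversed), cf. Eq. (2.95)."""
    return A[:, ::-1]

A = np.arange(16).reshape(4, 4)   # a 4 x 4 test matrix
print(cbr(A))   # for width 4: column order (1, 3, 2, 4), as in Eq. (2.94)
print(hr(A))    # column order (4, 3, 2, 1), as in Eq. (2.95)
```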
Similarly, we can define the block horizontal reflection (BHR) operation. Using these notations, we can represent the Cal–Sal matrices Hcs(4), Hcs(8), and Hcs(16) as follows.
For N = 4, we have
$$H_{cs}(4)=B_1B_2,\qquad(2.96)$$
where
$$B_1=\begin{pmatrix}[CBR](I_2)&[CBR](I_2)\\ [HR]\{[CBR](I_2)\}&-[HR]\{[CBR](I_2)\}\end{pmatrix},\qquad
B_2=\begin{pmatrix}H_2&O_2\\O_2&[HR](H_2)\end{pmatrix}.\qquad(2.97)$$
For N = 8, we have
$$H_{cs}(8)=B_1B_2B_3,\qquad(2.98)$$
where
$$B_1=\begin{pmatrix}[CBR](I_4)&[CBR](I_4)\\ [HR]\{[CBR](I_4)\}&-[HR]\{[CBR](I_4)\}\end{pmatrix},\quad
B_2=\begin{pmatrix}H_2\otimes I_2&O_4\\O_4&[BHR](H_2\otimes I_2)\end{pmatrix},\quad
B_3=I_4\otimes H_2.\qquad(2.99)$$
For N = 16, we have
$$H_{cs}(16)=B_1B_2B_3B_4,\qquad(2.100)$$
where
$$B_1=\begin{pmatrix}[CBR](I_8)&[CBR](I_8)\\ [HR]\{[CBR](I_8)\}&-[HR]\{[CBR](I_8)\}\end{pmatrix},\qquad
B_2=\begin{pmatrix}H_2\otimes I_4&O_8\\O_8&[BHR](H_2\otimes I_4)\end{pmatrix},$$
$$B_3=I_2\otimes\begin{pmatrix}H_2\otimes I_2&O_4\\O_4&[BHR](H_2\otimes I_2)\end{pmatrix},\qquad
B_4=I_4\otimes\begin{pmatrix}H_2&O_2\\O_2&[HR](H_2)\end{pmatrix}.\qquad(2.101)$$
In general, for even n,
$$H_{cs}(2^n)=B_1B_2\cdots B_n,\qquad(2.102)$$
where
$$B_1=\begin{pmatrix}[CBR](I_{2^{n-1}})&[CBR](I_{2^{n-1}})\\ [HR]\{[CBR](I_{2^{n-1}})\}&-[HR]\{[CBR](I_{2^{n-1}})\}\end{pmatrix},$$
$$B_i=I_{2^{i-2}}\otimes\begin{pmatrix}H_2\otimes I_{2^{n-i}}&O_{2^{n-i+1}}\\O_{2^{n-i+1}}&[BHR](H_2\otimes I_{2^{n-i}})\end{pmatrix},\qquad i=2,3,\ldots,n.\qquad(2.103)$$
For odd n, n ≥ 3,
$$H_{cs}(2^n)=B_1B_2\cdots B_n,\qquad(2.104)$$
where
$$B_1=\begin{pmatrix}[CBR](I_{2^{n-1}})&[CBR](I_{2^{n-1}})\\ [HR]\{[CBR](I_{2^{n-1}})\}&-[HR]\{[CBR](I_{2^{n-1}})\}\end{pmatrix},\qquad B_n=I_{2^{n-1}}\otimes H_2,$$
$$B_i=I_{2^{i-2}}\otimes\begin{pmatrix}H_2\otimes I_{2^{n-i}}&O_{2^{n-i+1}}\\O_{2^{n-i+1}}&[BHR](H_2\otimes I_{2^{n-i}})\end{pmatrix},\qquad i=2,3,\ldots,n-1.\qquad(2.105)$$
HH ∗ = H ∗ H = NIN , (2.106)
Theorem 2.5.1: The complex Sylvester matrix of order 2^n [see Eq. (2.107)] can be factored as
$$[CS]_{2^n}=\left[\prod_{m=1}^{n-1}\left(I_{2^{m-1}}\otimes H_2\otimes I_{2^{n-m}}\right)\right]\left(I_{2^{n-1}}\otimes[CS]_2\right).\qquad(2.108)$$
Proof: Indeed, from the definition of the complex Sylvester matrix in Eq. (2.107), we have
$$[CS]_{2^n}=\begin{pmatrix}[CS]_{2^{n-1}}&[CS]_{2^{n-1}}\\ [CS]_{2^{n-1}}&-[CS]_{2^{n-1}}\end{pmatrix}=H_2\otimes[CS]_{2^{n-1}}.\qquad(2.109)$$
$$[CS]_{2^n}=(H_2\otimes I_{2^{n-1}})\left[I_2\otimes\left(H_2\otimes[CS]_{2^{n-2}}\right)\right].\qquad(2.112)$$
Note that [CS]_{2^n} is a Hermitian matrix, i.e., [CS]*_{2^n} = [CS]_{2^n}.
Because $[CS]_2=\begin{pmatrix}1&j\\-j&-1\end{pmatrix}$, it follows from Eq. (2.109) that the complex Sylvester–Hadamard matrices of orders 4 and 8 are of the form
$$[CS]_4=\begin{pmatrix}1&j&1&j\\-j&-1&-j&-1\\1&j&-1&-j\\-j&-1&j&1\end{pmatrix},\qquad(2.113)$$
$$[CS]_8=\begin{pmatrix}
1&j&1&j&1&j&1&j\\
-j&-1&-j&-1&-j&-1&-j&-1\\
1&j&-1&-j&1&j&-1&-j\\
-j&-1&j&1&-j&-1&j&1\\
1&j&1&j&-1&-j&-1&-j\\
-j&-1&-j&-1&j&1&j&1\\
1&j&-1&-j&-1&-j&1&j\\
-j&-1&j&1&j&1&-j&-1
\end{pmatrix}.\qquad(2.114)$$
Now, according to Eq. (2.112), the matrix in Eq. (2.114) can be expressed as the product of two matrices,
$$[CS]_8=A\,(B_1+jB_2),\qquad(2.115)$$
where
$$A=\begin{pmatrix}+&0&0&0&+&0&0&0\\0&+&0&0&0&+&0&0\\0&0&+&0&0&0&+&0\\0&0&0&+&0&0&0&+\\+&0&0&0&-&0&0&0\\0&+&0&0&0&-&0&0\\0&0&+&0&0&0&-&0\\0&0&0&+&0&0&0&-\end{pmatrix},$$
$$B_1=\begin{pmatrix}+&0&+&0&0&0&0&0\\0&-&0&-&0&0&0&0\\+&0&-&0&0&0&0&0\\0&-&0&+&0&0&0&0\\0&0&0&0&+&0&+&0\\0&0&0&0&0&-&0&-\\0&0&0&0&+&0&-&0\\0&0&0&0&0&-&0&+\end{pmatrix},\qquad
B_2=\begin{pmatrix}0&+&0&+&0&0&0&0\\-&0&-&0&0&0&0&0\\0&+&0&-&0&0&0&0\\-&0&+&0&0&0&0&0\\0&0&0&0&0&+&0&+\\0&0&0&0&-&0&-&0\\0&0&0&0&0&+&0&-\\0&0&0&0&-&0&+&0\end{pmatrix}.\qquad(2.116)$$
Step 1. Calculate B₁F:
$$B_1F=\begin{pmatrix}+&0&+&0&0&0&0&0\\0&-&0&-&0&0&0&0\\+&0&-&0&0&0&0&0\\0&-&0&+&0&0&0&0\\0&0&0&0&+&0&+&0\\0&0&0&0&0&-&0&-\\0&0&0&0&+&0&-&0\\0&0&0&0&0&-&0&+\end{pmatrix}\begin{pmatrix}a\\b\\c\\d\\e\\f\\g\\h\end{pmatrix}=\begin{pmatrix}a+c\\-b-d\\a-c\\-b+d\\e+g\\-f-h\\e-g\\-f+h\end{pmatrix}.\qquad(2.117)$$
Step 2. Calculate A(B₁F):
$$A(B_1F)=\begin{pmatrix}+&0&0&0&+&0&0&0\\0&+&0&0&0&+&0&0\\0&0&+&0&0&0&+&0\\0&0&0&+&0&0&0&+\\+&0&0&0&-&0&0&0\\0&+&0&0&0&-&0&0\\0&0&+&0&0&0&-&0\\0&0&0&+&0&0&0&-\end{pmatrix}\begin{pmatrix}a+c\\-b-d\\a-c\\-b+d\\e+g\\-f-h\\e-g\\-f+h\end{pmatrix}=\begin{pmatrix}(a+c)+(e+g)\\-(b+d)-(f+h)\\(a-c)+(e-g)\\-(b-d)-(f-h)\\(a+c)-(e+g)\\-(b+d)+(f+h)\\(a-c)-(e-g)\\-(b-d)+(f-h)\end{pmatrix}.\qquad(2.118)$$
Step 3. Calculate B₂F:
$$B_2F=\begin{pmatrix}0&+&0&+&0&0&0&0\\-&0&-&0&0&0&0&0\\0&+&0&-&0&0&0&0\\-&0&+&0&0&0&0&0\\0&0&0&0&0&+&0&+\\0&0&0&0&-&0&-&0\\0&0&0&0&0&+&0&-\\0&0&0&0&-&0&+&0\end{pmatrix}\begin{pmatrix}a\\b\\c\\d\\e\\f\\g\\h\end{pmatrix}=\begin{pmatrix}b+d\\-a-c\\b-d\\-a+c\\f+h\\-e-g\\f-h\\-e+g\end{pmatrix}.\qquad(2.119)$$
Step 4. Calculate A(B₂F):
$$A(B_2F)=\begin{pmatrix}+&0&0&0&+&0&0&0\\0&+&0&0&0&+&0&0\\0&0&+&0&0&0&+&0\\0&0&0&+&0&0&0&+\\+&0&0&0&-&0&0&0\\0&+&0&0&0&-&0&0\\0&0&+&0&0&0&-&0\\0&0&0&+&0&0&0&-\end{pmatrix}\begin{pmatrix}b+d\\-a-c\\b-d\\-a+c\\f+h\\-e-g\\f-h\\-e+g\end{pmatrix}=\begin{pmatrix}(b+d)+(f+h)\\-(a+c)-(e+g)\\(b-d)+(f-h)\\-(a-c)-(e-g)\\(b+d)-(f+h)\\-(a+c)+(e+g)\\(b-d)-(f-h)\\-(a-c)+(e-g)\end{pmatrix}.\qquad(2.120)$$
Figure 2.6 Flow graph of fast 8-point complex Sylvester–Hadamard transform: (a) real
part; (b) imaginary part.
For example, C + ([CS ]4 ) = 4, C + ([CS ]8 ) = 16, and C + ([CS ]16 ) = 48. From
Theorem 2.5.1, it follows that the complex Hadamard matrix of order 16 can be
represented as
where
A1 = H2 ⊗ I8 ,
x0 y0
x1 y1
x2
y2
x3 y3
x4 y4
x5 y5
x6 y6
x7 y7
x8 y8
x9 y9
x10 y10
x11 y11
x12 y12
x13 y13
x14 y14
x15 y15
Figure 2.7 Flow graph of the fast 16-point complex Sylvester–Hadamard transform (real
part).
A2 = (H2 ⊗ I4 ) ⊕ (H2 ⊗ I4 ),
A3 = (H2 ⊗ I2 ) ⊕ (H2 ⊗ I4 ) ⊕ (H2 ⊗ I4 ),
+ 0 0 +
B1 = I8 ⊗ T 1 , B2 = I8 ⊗ T 2 , where T 1 = , T2 = . (2.124)
0 − − 0
1 1
X= [Haar]N f = H(N) f, (2.125)
N N
where [Haar]N = H(N) is the Haar transform matrix of order N, and f is the signal
vector of length N.
x0 y0i
x1 y1i
x2 y2i
x3 y3i
x4 y4i
x5 y5i
x6 y6i
x7 y7i
x8 y8i
x9 y9i
x10 i
y10
x11 i
y11
x12 i
y12
x13 i
y13
x14 i
y14
x15 i
y15
Figure 2.8 Flow graph of the fast 16-point complex Sylvester–Hadamard transform
(imaginary part).
First, consider an example. Let N = 8, and let the input data vector be
f = (a, b, c, d, e, f, g, h)√T . It is easy to check that the direct evaluation of the Haar
transform (below s = 2)
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟⎟ ⎜⎜⎜a ⎟⎟⎟ ⎜⎜⎜a + b + c + d + e + f + g + h⎟⎟⎟
⎜⎜⎜1 1 1 1 −1 −1 −1 −1⎟⎟⎟ ⎜⎜⎜b ⎟⎟⎟ ⎜⎜⎜a + b + c + d − e − f − g − h⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟ ⎜ ⎟⎟⎟
⎜⎜⎜ s s −s −s 0 0 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜c ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ s(a + b − c − d) ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜0 0 0 0 s s −s −s ⎟⎟⎟ ⎜⎜⎜d ⎟⎟⎟ ⎜⎜⎜ s(e + f − g − h) ⎟⎟⎟
H(8) f = ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ = ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜⎜ 2 −2 0 0 0 0 0 0 ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜
e 2(a − b) ⎟⎟⎟
⎟
⎜⎜⎜0 0 2 −2 0 0 0 0⎟⎟⎟ ⎜⎜⎜ f ⎟⎟⎟ ⎜⎜⎜ ⎜ ⎟ ⎜ 2(c − d) ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎝0 0 0 0 2 −2 0 0⎟⎟⎠ ⎜⎜⎝g ⎟⎟⎠ ⎜⎜⎝ 2(e − f ) ⎟⎟⎟
⎠
0 0 0 0 0 0 2 −2 h 2(g − h)
(2.126)
requires 56 operations.
The Haar matrix H(8) order N = 8 may be expressed by the product of three
matrices
H(8) = H1 H2 H3 , (2.127)
where
⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 0 0 0 0 0 0⎟⎟⎟ ⎜⎜⎜1 1 0 0 0 0 0 0⎟⎟⎟
⎜⎜⎜1 −1 0 0 0 0 0 0⎟⎟⎟ ⎜⎜⎜0 0 1 1 0 0 0 0⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜0 0 s 0 0 0 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜1 −1 0 0 0 0 0 0⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜0 0 0 s 0 0 0 0⎟⎟⎟⎟⎟ ⎜0 0 1 −1 0 0 0 0⎟⎟⎟⎟⎟
H1 = ⎜⎜⎜⎜⎜ ⎟⎟ , H2 = ⎜⎜⎜⎜⎜ ⎟⎟ ,
⎜⎜⎜⎜0 0 0 0 1 0 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜⎜0 0 0 0 2 0 0 0⎟⎟⎟⎟⎟
⎜⎜⎜0 0 0 0 0 1 0 0⎟⎟⎟ ⎜⎜⎜0 0 0 0 0 2 0 0⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝0 0 0 0 0 0 1 0⎟⎟⎟⎟⎠ ⎜⎜⎝0 0 0 0 0 0 2 0⎟⎟⎟⎟⎠
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 2
⎛ ⎞
⎜⎜⎜1 1 0 0 0 0 0 0⎟⎟
⎟
⎜⎜⎜0 0 1 1 0 0 0 0⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜0 0 0
⎜⎜⎜ 0 1 1 0 0⎟⎟⎟⎟⎟
⎜0 0 0 0 0 0 1 1⎟⎟⎟⎟⎟
H3 = ⎜⎜⎜⎜⎜ ⎟. (2.128)
⎜⎜⎜⎜1 −1 0 0 0 0 0 0⎟⎟⎟⎟
⎟
⎜⎜⎜0 0 1 −1 0 0 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝0 0 0 0 1 −1 0 0⎟⎟⎟⎟
⎠
0 0 0 0 0 0 1 −1
Step 1. Calculate
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 0 0 0 0 0 0⎟⎟⎟ ⎜⎜⎜a ⎟⎟⎟ ⎜⎜⎜a + b⎟⎟⎟
⎜⎜⎜0 0 1 1 0 0 0 0⎟⎟⎟ ⎜⎜⎜b ⎟⎟⎟ ⎜⎜⎜c + d ⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜0 0 0 0 1 1 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜c ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜e + f ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟ ⎜ ⎟
⎜0 0 0 0 0 0 1 1⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜d ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜g + h⎟⎟⎟⎟⎟
H3 f = ⎜⎜⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ = ⎜⎜⎜ ⎟⎟. (2.129)
⎜⎜⎜⎜1 −1 0 0 0 0 0 0⎟⎟⎟⎟ ⎜⎜⎜⎜e ⎟⎟⎟⎟ ⎜⎜⎜⎜a − b⎟⎟⎟⎟⎟
⎜⎜⎜0 0 1 −1 0 0 0 0⎟⎟⎟ ⎜⎜⎜ f ⎟⎟⎟ ⎜⎜⎜c − d ⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎝0 0 0 0 1 −1 0 0⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝g ⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝e − f ⎟⎟⎟⎟⎠
0 0 0 0 0 0 1 −1 h g−h
Step 2. Calculate
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 0 0 0 0 0 0⎟⎟⎟ ⎜⎜⎜a + b⎟⎟⎟ ⎜⎜⎜a + b + (c + d)⎟⎟⎟
⎜⎜⎜⎜0 0 1 1 0 0 0 0⎟⎟⎟⎟ ⎜⎜⎜⎜c + d ⎟⎟⎟⎟ ⎜⎜⎜⎜e + f + (g + h)⎟⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜1 −1 0 0 0 0 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜e + f ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜a + b − (c + d)⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜ ⎟ ⎜
0 0 1 −1 0 0 0 0⎟⎟⎟⎟ ⎜⎜⎜⎜g + h⎟⎟⎟⎟ ⎜⎜⎜⎜e + f − (g + h)⎟⎟⎟⎟⎟
H2 (H3 f ) = ⎜⎜⎜⎜⎜ ⎟⎜ ⎟=⎜ ⎟. (2.130)
⎜⎜⎜0 0 0 0 2 0 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜a − b⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ 2(a − b) ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 0 0 0 2 0 0⎟⎟⎟ ⎜⎜⎜c − d ⎟⎟⎟ ⎜⎜⎜ 2(c − d) ⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜0 0 0 0 0 0 2 0⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜e − f ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ 2(e − f ) ⎟⎟⎟⎟⎟
⎝ ⎠⎝ ⎠ ⎝ ⎠
0 0 0 0 0 0 0 2 g−h 2(g − h)
1/8
A
1/8
B
–1
sqrt(2)8
–1 C
sqrt(2)8
D
–1
–1
2/8
E
–1 2/8
F
2/8
–1
G
2/8
H
–1
Figure 2.9 Signal flow diagram of the fast 8-point 1D Haar transform.
Step 3. Calculate
⎛ ⎞⎛ ⎞
⎜⎜⎜1 1 0 0 0 0 0 0⎟⎟⎟ ⎜⎜⎜a + b + (c + d)⎟⎟⎟
⎜⎜⎜1 −1 0 0 0 0 0 0⎟⎟⎟ ⎜⎜⎜e + f + (g + h)⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟
⎜⎜⎜0 0 s 0 0 0 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜a + b − (c + d)⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 s 0 0 0 0⎟⎟⎟ ⎜⎜⎜e + f − (g + h)⎟⎟⎟⎟⎟
H1 [H2 (H3 f )] = ⎜⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟
⎜⎜⎜⎜0 0 0 0 1 0 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ 2(a − b) ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 0 0 0 1 0 0⎟⎟⎟ ⎜⎜⎜ 2(c − d) ⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟
⎜⎜⎝0 0 0 0 0 0 1 0⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝ 2(e − f ) ⎟⎟⎟⎟⎠
0 0 0 0 0 0 0 1 2(g − h)
⎛ ⎞
⎜⎜⎜a + b + c + d + (e + f + g + h)⎟⎟⎟
⎜⎜⎜⎜a + b + c + d − (e + f + g + h)⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ s[a + b − (c + d)] ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜
⎜ s[e + f − (g + h)] ⎟⎟⎟.
= ⎜⎜⎜ ⎟⎟⎟ (2.131)
⎜⎜⎜⎜ 2(a − b) ⎟⎟⎟
⎜⎜⎜ 2(c − d) ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎝ 2(e − f ) ⎟⎟⎠
2(g − h)
⎧⎛ % &⎞ ⎫
⎪
⎪
⎪ ⎜⎜⎜I ⊗ ⎟⎟⎟ ⎪
⎪
⎪
⎪
⎨⎜⎜ ⎜ 2 1 1 ⎟
⎟ ⎪
⎬
H2 = diag ⎪ ⎜
⎜⎜⎝ % & ⎟
⎟
⎟⎟⎠ , I ⎪ , (2.133)
⎪
⎪
⎪
4 ⎪
⎪
⎪
⎩ I2 ⊗ 1 −1 ⎭
⎛ % &⎞
⎜⎜⎜I ⊗ ⎟
1 1 ⎟⎟⎟⎟
H3 = ⎜⎜⎜⎜⎝
4
⎟⎟⎠ . (2.134)
I4 ⊗ 1 −1
where
1 1
H = H(2) = , (2.136)
1 −1
Using the property of the Kronecker product from Eq. (2.137), we obtain
⎛ * +* + ⎞
⎜⎜⎜ H(2n−1 ) ⊗ I(20 ) I(2n−1 ) ⊗ (+1 + 1) ⎟⎟⎟
H(2n ) = ⎜⎜⎜⎜⎝* √ +* + ⎟⎟⎟⎟⎠ , n = 2, 3, . . . . (2.138)
2n−1 I(2n−1 ) ⊗ I(20 ) I(2n−1 ) ⊗ (+1 − 1)
Then, from Eq. (2.138) and from the following property of matrix algebra:
AB A 0 B
= ,
CD 0 C D
we obtain
n−1
H(2n−1 ) √ 0 I(2 ) ⊗ (+1 + 1)
H(2 ) =
n
, n = 2, 3, . . . . (2.139)
0 n−1 n−1
2 I(2 ) I(2 n−1
) ⊗ (+1 − 1)
Examples:
(1) Let n = 2; then, the Haar matrix of order four can be represented as a product
of two matrices:
H(2) √ 0 I(2) ⊗ (+1 + 1)
H(4) = = H1 H2 ; (2.140)
0 2I(2) I(2) ⊗ (+1 − 1)
√
where (s = 2),
⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 0 0⎟⎟⎟ ⎜⎜⎜1 1 0 0⎟⎟⎟
⎜⎜⎜1 −1 0 0⎟⎟⎟ ⎜⎜⎜0 0 1 1⎟⎟⎟
H1 = ⎜⎜⎜⎜ ⎟⎟ , H2 = ⎜⎜⎜⎜ ⎟⎟ .
⎜⎜⎝0 0 s 0⎟⎟⎟⎟⎠ ⎜⎜⎝1 −1 0 0⎟⎟⎟⎟⎠
(2.141)
0 0 0 s 0 0 1 −1
(2) Let n = 3; then, the Haar matrix of order 8 can be expressed as a product of
three matrices,
H(8) = H1 H2 H3 , (2.142)
where $
1 1 √
H1 = diag , 2I2 , 2I4 ,
1 −1
⎧⎛ % &⎞ ⎫
⎪
⎪ ⎜⎜⎜I ⊗ ⎟ ⎪ ⎪
⎪
⎪
⎨⎜⎜⎜ 1 ⎟⎟⎟⎟ ⎪ ⎪
& ⎟⎟⎟ , I4 ⎬
2 1
H2 = diag ⎪ ⎜⎜⎜ % ,
⎪
⎪ ⎟⎠ ⎪ ⎪
⎪
⎩⎝ I 2 ⊗ 1
⎪ −1 ⎪
⎭
⎛ % &⎞
⎜⎜⎜I ⊗ ⎟
⎜⎜⎜ 4 1 1 ⎟⎟⎟⎟
H3 = ⎜⎜⎜ % & ⎟⎟⎟ . (2.143)
⎝I4 ⊗ ⎟⎠
1 −1
Now, from Eq. (2.145), and following the property of matrix algebra,
AB 0 A 0 B 0
= ,
0 αI(M) 0 αI(M) 0 I(M)
we obtain
⎛ ⎞⎛ ⎞
⎜⎜⎜H(2) √ 0 0 ⎟⎟ ⎜⎜I(2) ⊗ (+1
⎟⎟ ⎜ + 1) 0 ⎟⎟
⎜⎜⎜ ⎟ I(4) ⊗ (+1 + 1)
H(8) = ⎜⎜ 0 2I(2) 0 ⎟⎟⎟⎟ ⎜⎜⎜⎜⎝I(2) ⊗ (+1 − 1) 0 ⎟⎟⎟⎟
⎠ I(4) ⊗ (+1
⎝ ⎠ − 1)
0 0 2I(4) 0 I(4)
= H1 H2 H3 . (2.146)
where
Hn = diag H(2), 21/2 I(2), 2I(4), 23/2 I(8), . . . , 2(n−1)/2 I(2n−1 ) , (2.148)
n−1
I(2 ) ⊗ (1 1)
H1 = , (2.149)
I(2n−1 ) ⊗ (1 −1)
⎛ m−1 ⎞
⎜⎜⎜I(2 ) ⊗ 1 1 0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
Hm = ⎜⎜⎜I(2 ) ⊗ 1 −1
m−1
0 ⎟⎟⎟ , m = 2, 3, . . . , n − 1. (2.150)
⎜⎝ ⎟
m ⎠
0 I(2 − 2 )
n
(2) The Haar transform may be calculated via 2(2n − 1) operations or via O(N)
operations.
(3) Only 2n storage locations are returned to perform the 2n -point Haar transform.
(4) The inverse 2n -point Haar transform matrix be represented as
Note that each Hm [see Eq. (2.150)] has the 2m rows with only two nonzero
elements and 2n − 2m rows with only one nonzero element, so the product of a
matrix Hm by a vector requires only 2n − 4 addition operations, an H1 transform
[see, Eq. (2.148)] requires only 2n additions, and an Hn transform requires 2
additions and 2n − 2 multiplications.
So a 2n -point Haar transform requires 2· 2n − 2 addition and 2n − 2 multiplication
operations.
From Eqs. (2.148) and (2.150), we obtain the following factors of the Haar
transform matrix of order 16:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜I2 ⊗ (+ +) O2×12 ⎟⎟⎟ ⎜⎜⎜I4 ⊗ (+ +) O4×8 ⎟⎟⎟
I ⊗ (+ +)
H1 = 8 , ⎜ ⎜
H2 = ⎜⎜⎜⎝I2 ⊗ (+ −) O2×12 ⎟⎟⎟⎠ , H3 = ⎜⎜⎜⎝I4 ⊗ (+ −) O4×8 ⎟⎟⎟⎟⎠ ,
⎟
I8 ⊗ (+ −)
O12×4 I12 O8×8 I8
$
+ + √ √
H4 = diag , 2I2 , 2I4 , 8I8 , (2.152)
+ −
References
1. S. Agaian, Advances and problems of the fast orthogonal transforms
for signal-image processing applications (Part 1), Pattern Recognition,
Classification, Forecasting, Yearbook, 3, Russian Academy of Sciences,
Nauka, Moscow (1990) 146–215 (in Russian).
2. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal
Processing, Springer-Verlag, New York (1975).
3. G. R. Reddy and P. Satyanarayana, “Interpolation algorithm using
Walsh–Hadamard and discrete Fourier/Hartley transforms,” Circuits and
Systems 1, 545–547 (1991).
4. C.-F. Chan, “Efficient implementation of a class of isotropic quadratic filters
by using Walsh–Hadamard transform,” in Proc. of IEEE Int. Symp. on Circuits
and Systems, June 9–12, Hong Kong, 2601–2604 (1997).
5. R. K. Yarlagadda and E. J. Hershey, Hadamard Matrix Analysis and Synthesis
with Applications and Signal/Image Processing, Kluwer Academic Publishers,
Boston (1996).
6. L. Chang and M. Wu, “A bit level systolic array for Walsh–Hadamard
transforms,” IEEE Trans. Signal Process 31, 341–347 (1993).
7. P. M. Amira and A. Bouridane, “Novel FPGA implementations of
Walsh–Hadamard transforms for signal processing,” IEE Proc. of Vision,
Image and Signal Processing 148, 377–383 (2001).
8. S. K. Bahl, “Design and prototyping a fast Hadamard transformer for
WCDMA,” in Proc. of 14th IEEE Int. Workshop on Rapid Systems
Prototyping, 134–140 (2003).
9. S. V. J. C. R. Hashemian, “A new gate image encoder; algorithm, design
and implementation,” in Proc. of 42nd IEEE Midwest Symp. Circuits and
Systems 1, 418–421 (1999).
10. B. J. Falkowski and T. Sasao, “Unified algorithm to generate Walsh functions
in four different orderings and its programmable hardware implementations,”
IEE Proc.-Vis. Image Signal Process. 152 (6), 819–826 (2005).
11. S. Agaian, Advances and problems of the fast orthogonal transforms
for signal-image processing applications (Part 2), Pattern Recognition,
Classification, Forecasting, Yearbook, 4, Russian Academy of Sciences,
Nauka, Moscow (1991) 156–246 (in Russian).
12. S. Agaian, K. Tourshan, and J. Noonan, “Generalized parametric slant-
Hadamard transforms,” Signal Process 84, 1299–1307 (2004).
13. S. Agaian, H. Sarukhanyan, and J. Astola, “Skew Williamson–Hadamard
transforms,” Multiple Valued Logic Soft Comput. J. 10 (2), 173–187 (2004).
14. S. Agaian, K. Tourshan, and J. Noonan, “Performance of parametric
Slant-Haar transforms,” J. Electron. Imaging 12 (3), 539–551 (2003)
[doi:10.1117/1.1580494].
93
In this chapter, we present the WHT based on the fast discrete orthogonal
algorithms such as Fourier, cosine, sine, slant, and others. The basic idea of these
algorithms is the following: first we compute the WHT coefficients, then using
the so-called correction matrix, we convert these coefficients to transform domain
coefficients. These algorithms are useful for development of integer-to-integer
DOTs and for new applications, such as data hiding and signal/image encryption.
X = F N x, (3.1)
where x = (x0 , x1 , . . . , xN−1 ) and X = (X0 , X1 , . . . , XN−1 ) denote the input and
output column vectors, respectively, and F N is an arbitrary DOT matrix of order N.
We can represent Eq. (3.1) in the following form:
1
X = FN x = F N HN HNT x, (3.2)
N
X = AN HN x. (3.3)
In other words, the HT coefficients are computed first and then they are used to
obtain the coefficients of discrete transform F N . This is achieved by the transform
matrix AN , which is orthonormal and has a block-diagonal structure. We will
call AN a correction transform. Thus, any transform can be decomposed into two
orthogonal transforms, namely, (1) an HT and (2) a correction transform.
Lemma 3.1.1: Let the orthogonal transform matrix F N = F2n have the following
representation:
F 2n−1 F 2n−1
F2n = , (3.4)
B2n−1 −B2n−1
that is, the AN matrix has a block-diagonal structure, where ⊕ denotes the direct
sum of matrices.
Proof: Clearly, this is true for n = 1. Let us assume that Eq. (3.5) is valid for
N = 2k−1 ; i.e.,
Using the definitions of F2k−1 and H2k−1 once again, we can rewrite Eq. (3.7) as
X = F N x, (3.10)
where x = (x0 , x1 , . . . , xN−1 )T and X = (X0 , X1 , . . . , XN−1 )T denote the input and
output column vectors, respectively, and
N−1
F N = WNkm (3.11)
k,m=0
and
H2n−1 H2n−1
H 2n = , H1 = (1). (3.17)
H2n−1 −H2n−1
This means that first, the HT coefficients are computed, and then they are used to
obtain the DFT coefficients. Using Eqs. (3.13) and (3.14), we can represent the
DFT matrix by Eq. (3.4). Hence, according to Lemma 3.1.1, the matrix
AN = (1/N)F N HN (3.19)
Figure 3.1 Generalized block diagram of the procedure for obtaining HT coefficients.
Without losing the generalization, we prove it for the cases N = 4, 8, and 16.
Case N = 4: The Fourier matrix of order 4 is
⎛ ⎞
⎜⎜⎜1 1 1 1⎟⎟⎟
3 ⎜⎜⎜1 − j −1 j ⎟⎟⎟⎟
F4 = W4km = ⎜⎜⎜⎜ ⎟.
⎜⎜⎝1 −1 1 −1⎟⎟⎟⎟⎠
(3.21)
k,m=0
1 j −1 − j
Using the permutation matrix
⎛ ⎞
⎜⎜⎜1 0 0 0⎟⎟
⎟
⎜⎜⎜0 0 1 0⎟⎟⎟⎟
P1 = ⎜⎜⎜⎜ ⎟,
0⎟⎟⎟⎠⎟
(3.22)
⎜⎜⎝0 1 0
0 0 0 1
we can represent the matrix F4 in the following equivalent form:
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 0 0 0⎟⎟⎟ ⎜⎜⎜1 1 1 1⎟⎟⎟ ⎜⎜⎜1 1 1 1⎟⎟⎟
⎜⎜⎜0 0 1 0⎟⎟⎟ ⎜⎜⎜1 − j −1 j ⎟⎟⎟ ⎜⎜⎜1 −1 1 −1⎟⎟⎟ H2 H2
F 4 = P1 F4 = ⎜⎜⎜ ⎜ ⎟
⎟⎜ ⎜ ⎟
⎟=⎜ ⎜ ⎟
⎟= . (3.23)
⎜⎜⎝0 1 0 0⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝1 −1 1 −1⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝1 − j −1 j ⎟⎟⎟⎟⎠ B2 −B2
0 0 0 1 1 j −1 − j 1 j −1 − j
Then, we obtain
1 0 1 0
A4 = (1/4)F4 H4 = (1/4) (2H2 H2 ⊕ 2H2 B2 ) = ⊕ , (3.24)
0 1 0 −j
i.e., A4 is the block-diagonal matrix.
Case N = 8: The Fourier matrix of order 8 is
⎛ 0 ⎞
⎜⎜⎜W8 W80 W80 W80 W80 W80 W80 W80 ⎟⎟⎟
⎜⎜⎜⎜ 0 ⎟⎟
⎜⎜⎜W8 W81 W82 W83 −W80 −W81 −W82 −W83 ⎟⎟⎟⎟⎟
⎜⎜⎜ 0 ⎟⎟
⎜⎜⎜W8 W82 −W80 −W82 W80 W82 −W80 −W82 ⎟⎟⎟⎟
⎜⎜⎜ 0 ⎟⎟
⎜⎜W W83 −W82 W81 −W80 −W83 W82 −W81 ⎟⎟⎟⎟
F8 = ⎜⎜⎜⎜ 80 ⎟⎟ . (3.25)
⎜⎜⎜W −W 0 W 0 −W 0 W 0
⎜⎜⎜ 8 −W80 W80 −W80 ⎟⎟⎟⎟⎟
8 8
⎜⎜⎜W 0 −W 1 W 2 −W 3 −W 0
8 8
⎟⎟
⎜⎜⎜ 8 W81 −W82 W83 ⎟⎟⎟⎟
8 8 8
⎜⎜⎜W 0 −W 2 −W 0 W 2 W 0
8 ⎟⎟
⎜⎜⎝ 8 −W82 −W80 W82 ⎟⎟⎟⎟
8 8 8 8 ⎟⎠
W80 −W83 −W82 −W81 −W80 W83 W82 W81
where
⎛ ⎞ ⎛ ⎞
⎜⎜⎜W 0 W 0 W 0 W 0 ⎟⎟⎟ ⎜⎜⎜W 0 W 1 W 2 W 3 ⎟⎟⎟
⎜⎜⎜⎜ 8 8 8 8⎟ ⎟ ⎜⎜⎜⎜ 8 8 8 8⎟ ⎟
⎜⎜⎜W 0 W 2 −W 0 −W 2 ⎟⎟⎟⎟⎟ ⎜⎜⎜W 0 W 3 −W 2 W 1 ⎟⎟⎟⎟⎟
F4 = ⎜⎜⎜⎜ 8 8 8 8⎟ ⎟, B4 = ⎜⎜⎜⎜ 8 8 8 8⎟ ⎟ . (3.28)
⎜⎜⎜W 0 −W 0 W 0 −W 0 ⎟⎟⎟⎟⎟ ⎜⎜⎜W 0 −W 1 W 2 −W 3 ⎟⎟⎟⎟⎟
⎜⎜⎜⎝ 8 8 8 8⎟ ⎟⎟⎠ ⎜⎜⎜⎝ 8 8 8 8⎟ ⎟⎟
⎠
W80 −W82 −W80 W82 W80 −W83 −W82 −W81
where
⎛ ⎞
⎜⎜⎜1 a − j −a∗ ⎟⎟⎟ √
⎜⎜⎜ ⎟
1 1 1 −j ⎜1 −a∗ j a ⎟⎟⎟⎟⎟ 2
F2 = H2 = , B2 = , B4 = ⎜⎜⎜⎜ , a= (1 − j).
1 −1 1 j ⎜⎜⎜1 −a − j a∗ ⎟⎟⎟⎟⎟ 2
⎝ ⎠
1 a∗ j −a
(3.30)
We can show that the correction matrix of order 8 has the following form:
1
A8 = (D0 ⊕ D1 ⊕ D2 ) , (3.31)
8
where
1− j 1+ j
D0 = 8I2 , D1 = 4 ,
1+ j 1− j
⎛ ⎞
⎜⎜⎜(1 − j) + (a − a∗ ) (1 − j) − (a − a∗ ) (1 + j) + (a + a∗ ) (1 + j) − (a + a∗ )⎟⎟
⎜⎜⎜(1 + ∗ ⎟
j) + (a − a∗ ) (1 + j) − (a − a∗ ) (1 − j) − (a + a∗ ) (1 − j) + (a + a )⎟⎟⎟⎟
D2 = 2 ⎜⎜⎜⎜ ⎟.
⎜⎜⎝(1 + j) − (a − a∗ ) (1 − j) + (a − a∗ ) (1 + j) − (a + a∗ ) (1 + j) + (a + a∗ )⎟⎟⎟⎟⎠
(1 − j) − (a − a∗ ) (1 + j) + (a − a∗ ) (1 − j) + (a + a∗ ) (1 − j) − (a + a∗ )
(3.32)
√ √
Because a − a∗ = − j 2 and a + a∗ = 2,
⎛ √ √ ⎞ ⎛ √ √ ⎞
⎜⎜⎜1 1 + √2 1 − ⎟ ⎜⎜⎜ 1 + − ⎟
⎜⎜⎜
1 √2⎟⎟⎟⎟ ⎜⎜⎜ √ 2 1 √2 −1 −1⎟⎟⎟⎟
⎜1 1 − √2 1 + ⎟
⎟ ⎜ ⎟⎟
D2 = 2 ⎜⎜⎜⎜⎜
1 √2⎟⎟⎟⎟ − 2 j ⎜⎜⎜⎜−1 + √2 −1 − √2 1 1⎟⎟⎟⎟ . (3.33)
⎜⎜⎜1 ⎟⎟⎟ ⎜⎜⎜ 1 − ⎟
⎝
1 1 − √2 1 + 2
√ ⎟⎠ ⎜⎝ √2 1 + √2 −1 −1⎟⎟⎟⎠
1 1 1+ 2 1− 2 −1 − 2 −1 + 2 1 1
√ √
We introduce the notations: b = (1/4) + ( 2/4), c = (1/4) − ( 2/4). Now the
correction matrix A8 = Ar8 + jAi8 can be written as
⎛ ⎞
⎜⎜⎜⎜1/4 1/4 b c ⎟⎟⎟⎟
1 0 1/2 1/2 ⎜⎜⎜1/4 1/4 c b⎟⎟⎟
Ar8 = ⊕ ⊕⎜ ⎟,
1/2 1/2 ⎜⎜⎜⎜⎝1/4 1/4 c b⎟⎟⎟⎟⎠
(3.34)
0 1
1/4 1/4 b c
⎛ ⎞
⎜⎜⎜⎜ b c −1/4 −1/4⎟⎟⎟⎟
0 0 −1/2 1/2 ⎜⎜⎜−c −b 1/4 1/4⎟⎟⎟
Ai8 = ⊕ ⊕⎜ ⎟.
1/2 −1/2 ⎜⎜⎜⎜⎝ c b −1/4 −1/4⎟⎟⎟⎟⎠
(3.35)
0 0
−b −c 1/4 1/4
y0 z0r y0 z0i = 0
y1 z1r y1 z1i = 0
1/2 1/2
y2 z2r = z3r y2 z2i = – z3i
y3 y3
1/4 a
y4 z4r = z7r y4 z4i
b b
y5 y5 z5i
a
a 1/4 i
y6 z5r = z6r y6 z6
a b
y7 y7 z7i
b
Figure 3.2 Flow graph (real and imaginary parts) of an 8-point correction transform.
where
⎛ ⎞
⎜⎜⎜W 0 0
W16 0
W16 0
W16 0
W16 0
W16 0
W16 0 ⎟
W16 ⎟⎟⎟
⎜⎜⎜⎜ 16 ⎟⎟
⎜⎜⎜W 0 6 ⎟ ⎟⎟⎟
⎜⎜⎜ 16
2
W16 4
W16 6
W16 −W16
0
−W16
2
−W16
4
−W16 ⎟⎟⎟
⎜⎜⎜ 0 4 ⎟ ⎟⎟⎟⎟
⎜⎜⎜W16 4
W16 −W16
0
−W16
4 0
W16 4
W16 −W16
0
−W16 ⎟⎟
⎜⎜⎜
⎜⎜W 0 2 ⎟ ⎟⎟⎟
6
−W16
4 2
−W16
0
−W16
6 4
−W16
F8 = ⎜⎜⎜⎜⎜ 16 ⎟⎟⎟ ,
W16 W16 W16
⎜⎜⎜W 0 0 ⎟
⎟
⎜⎜⎜ 16 −W16
0 0
W16 −W16
0 0
W16 −W16
0 0
W16 −W16 ⎟⎟⎟⎟
⎜⎜⎜ 0 ⎟⎟
6 ⎟ ⎟⎟⎟
⎜⎜⎜W16 −W16
0 4
W16 −W16
6
−W16
0 2
W16 −W16
4
W16 ⎟⎟⎟
⎜⎜⎜ 0 4 ⎟ ⎟⎟⎟
⎜⎜⎜W16 −W16
2
−W16
0 4
W16 −W16
0
−W16
4
−W16
0
W16 ⎟⎟⎟
⎜⎜⎝
2 ⎠
0
W16 −W16
6
−W16
4
−W16
2 0
W16 6
W16 4
W16 W16
⎛ ⎞ (3.37)
⎜⎜⎜W 0 1
W16 2
W16 3
W16 4
W16 5
W16 6
W16 7 ⎟
W16 ⎟⎟⎟
⎜⎜⎜⎜ 16 ⎟⎟
⎜⎜⎜W 0 5 ⎟
⎜⎜⎜ 16
3
W16 6
W16 −W16
1
−W16
4
−W16
7 2
W16 W16 ⎟⎟⎟⎟⎟
⎜⎜⎜ 0 ⎟⎟
3 ⎟
⎜⎜⎜W16 5
W16 −W16
2
−W16
7 4
W16 −W16
1
−W16
6
W16 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜W 0 1 ⎟
7
−W16
6 5
−W16
4 3
−W16
2
W16 ⎟⎟⎟⎟
B8 = ⎜⎜⎜⎜⎜ 16
W16 W16 W16
⎟⎟ .
⎜⎜⎜W 0 7 ⎟ ⎟⎟⎟
⎜⎜⎜ 16 −W16
1 2
W16 −W16
3 4
W16 −W16
5 6
W16 −W16 ⎟⎟⎟
⎜⎜⎜ 0 5 ⎟ ⎟⎟⎟
⎜⎜⎜W16 −W16
3 6
W16 1
W16 −W16
4 7
W16 2
W16 −W16 ⎟⎟⎟
⎜⎜⎜ 0 3 ⎟ ⎟⎟⎟
⎜⎜⎜W16 −W16
5
−W16
2 7
W16 4
W16 1
W16 −W16
6
−W16 ⎟⎟⎟
⎜⎜⎝
1 ⎠
0
W16 −W16
7
−W16
6
−W16
5
−W16
4
−W16
3
−W16
2
−W16
Similarly, we obtain
⎛ ⎞
⎜⎜⎜W 0 W16 0 ⎟
⎟⎟⎟
F
F8 = 4
F4
, F4 =
F2 F2
, F2 = ⎜⎜⎝ ⎜ 16 ⎟⎟ ,
B4 −B4 B2 −B2 0 ⎠
0
W16 −W16
⎛ ⎞
⎜⎜⎜W 0 2
W16 4
W16 6 ⎟
W16 ⎟⎟⎟
⎛ ⎞ ⎜
⎜⎜⎜ 16 ⎟⎟ (3.38)
⎜⎜⎜W 0 4 ⎟⎟⎟⎟ ⎜⎜⎜W 0 2 ⎟ ⎟⎟⎟
W16 6
W16 −W16
4
W16
B2 = ⎜⎜⎜⎝ 16 ⎟⎟⎠ , B4 = ⎜⎜⎜ ⎜ 16 ⎟⎟⎟ .
⎜ 6 ⎟
0
W16 −W16
4
⎜⎜⎜W16 −W16 W16 −W16 ⎟⎟⎟⎟⎟
⎜ 0 2 4
⎜⎝ 0 2 ⎠
⎟
W16 −W166
−W16
4
−W16
Therefore, the Fourier transform matrix of order 16 from Eq. (3.36) can be
represented in the following equivalent form:
⎛ ⎞
⎜⎜⎜F2 F2 F2 F2 F2 F2 F2 F2 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜
⎜ B2 −B2 B2 −B2 B2 −B2 B2 −B2 ⎟⎟⎟⎟
F16 = ⎜⎜⎜ ⎟⎟⎟ . (3.39)
⎜⎜⎝ B4 −B4 B4 −B4 ⎟⎟⎠
B8 −B8
π π
1
W16 = cos − j sin = c − js = b,
8 8 √
π π 2
2
W16 = cos − j sin = (1 − j) = a,
4 4 2
3π 3π
3
W16 = cos − j sin = s − jc = − jb∗ ,
8 8
π π
4
W16 = cos − j sin = − j, (3.40)
2 2
5π 5π
5
W16 = cos − j sin = s + jc = jb,
8 8 √
3π 3π 2
6
W16 = cos − j sin = (1 + j) = a∗ ,
4 4 2
7π 7π
7
W16 = cos − j sin = −c − js = −b∗ .
8 8
where
1 0 0 −1 1 1
B12 = , B22 = , F2 = . (3.42)
1 0 0 1 1 −1
where
⎛ √ √ ⎞ ⎛ √ √ ⎞
⎜⎜⎜ 2 2 ⎟⎟⎟ ⎜⎜⎜ 2 2 ⎟⎟⎟
⎜⎜⎜1 0 ⎟
⎟⎟⎟ ⎜
⎜⎜⎜0 − −1 ⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟
⎟ ⎜
⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟⎟⎟ ⎜
⎜⎜⎜0 2 2 ⎟⎟⎟
⎜
⎜⎜⎜ 1 0 ⎟
⎟
⎟ ⎜ 1 − ⎟⎟
B4 = ⎜⎜⎜
1
√2 ⎟
√2 ⎟⎟⎟⎟ , B4 = ⎜⎜⎜⎜2 ⎜ √2 √2 ⎟⎟⎟⎟⎟ . (3.44)
⎜⎜⎜ 2 2 ⎟⎟⎟ ⎜⎜⎜ 2 2 ⎟⎟⎟
⎜⎜⎜1 − 0 − ⎟⎟⎟ ⎜⎜⎜0 −1 − ⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟⎟ ⎟ ⎜
⎜⎜⎜ √2 √2 ⎟⎟⎟⎟
⎜⎜⎜ ⎜⎜⎜ ⎟
⎜⎝ 2 2 ⎟⎟⎠ ⎝0 − 2 1 2 ⎟⎟⎟⎠
1 − 0 −
2 2 2 2
⎛ ⎞
⎜⎜⎜1 b a − jb∗ − j jb a∗ −b∗ ⎟⎟
⎜⎜⎜⎜1 − jb∗ a∗ −b ⎟
⎜⎜⎜ j b∗ a jb ⎟⎟⎟⎟⎟
⎜⎜⎜1 jb −a b∗ − j −b −a∗ − jb∗ ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
∗
1 −b −a ∗
j − jb∗ −a b ⎟⎟⎟⎟
B8 = ⎜⎜⎜⎜⎜
jb
∗ ⎟ . (3.45)
⎜⎜⎜1 −b a ∗
jb − j − jb a ∗
b ⎟⎟⎟⎟
⎜⎜⎜1 jb∗ a∗ b ⎟
⎜⎜⎜ j −b∗ a − jb ⎟⎟⎟⎟
⎜⎜⎜1 − jb −a −b∗ − j b −a∗ jb∗ ⎟⎟⎟⎟⎟
⎝ ⎠
1 b∗ −a∗ − jb j jb∗ −a −b
where
⎛ √ √ ⎞
⎜⎜⎜ 2 2 ⎟⎟
⎜⎜⎜1
⎜⎜⎜
c s 0 s −c⎟⎟⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟⎟
⎜⎜⎜1 s −c 0 c s⎟⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜1 2 2 ⎟
⎜⎜⎜ s − c 0 −c − s⎟⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
2 2 ⎟
⎜⎜⎜1 −c − s 0 s − c⎟⎟⎟⎟
B8 = ⎜⎜⎜⎜⎜
1
√2 √2 ⎟⎟⎟ ,
⎟⎟⎟
⎜⎜⎜ 2 2 ⎟
⎜⎜⎜1 −c −s 0 −s c⎟⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ 2 2 ⎟
⎜⎜⎜1 −s c 0 −c −s⎟⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜1 2 2 ⎟
⎜⎜⎜ −s − −c 0 c − −s⎟⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎝ ⎟⎟⎟
2 2 ⎟
1 c − −s 0 −s − −c⎠
2 2
⎛ √ √ ⎞
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜0 −s −
2
c −1
2
−s⎟⎟⎟⎟⎟
⎜⎜⎜ c
⎟⎟⎟
⎜⎜⎜ √2 √2
⎟⎟⎟
⎜⎜⎜ 2 2 ⎟
⎜⎜⎜0 −c s 1 s − c⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜0 2 2 ⎟⎟
⎜⎜⎜ c s −1 s − −c⎟⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟⎟
√ √ ⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟
⎜⎜⎜0 −s − c 1 −c −s⎟⎟⎟⎟
⎜
B18 = ⎜⎜⎜⎜ √2 √2 ⎟⎟⎟ .
⎟⎟⎟ (3.47)
⎜⎜⎜ ⎟
⎜⎜⎜0 −
2
c −1 −c
2
s⎟⎟⎟⎟⎟
⎜⎜⎜ s
⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜ 2 2 ⎟
⎜⎜⎜0 c −s 1 −s − −c⎟⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟⎟
⎜⎜⎜ √ √ ⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟
⎜⎜⎜0 −c −s −1 −s − c⎟⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟⎟
⎜⎜⎜ √ √ ⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟
⎝0 s − −c 1 c s⎠
2 2
Now, using Eq. (3.39), the Fourier transform matrix can be represented in the
following equivalent form:
F16 = F16
1
+ jF16
2
, (3.48)
where
⎛ ⎞
⎜⎜⎜H2 H2 H2 H2 H2 H2 H2 H2 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ B1 −B12 B12 −B12 B12 −B12 B12 −B12 ⎟⎟⎟⎟
1
F16 = ⎜⎜⎜⎜ 2 ⎟⎟⎟ ,
⎜⎜⎜ B14 −B14 B14 −B14 ⎟⎟⎟⎟
⎜⎜⎝ ⎟⎠
B18 −B18
⎛ ⎞ (3.49)
⎜⎜⎜O2 O2 O2 O2 O2 O2 O2 O2 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ B2 −B22 B22 −B22 B22 −B22 B22 −B22 ⎟⎟⎟⎟
2
F16 = ⎜⎜⎜⎜ 2 ⎟⎟⎟ ,
⎜⎜⎜ B24 −B24 B24 −B24 ⎟⎟⎟
⎜⎜⎝ ⎟⎟⎠
B28 −B28
where
1
A116 = 8I2 ⊕ 8B12 H2 ⊕ 4B14 H4 ⊕ 2B18 H8 ,
16 (3.51)
1
A216 = O2 ⊕ 8B22 H2 ⊕ 4B24 H4 ⊕ 2B28 H8 .
16
Now we want to show that the transform can be realized via fast algorithm. We
denote y = H16 x. Then, X = (1/16)A16 y. We perform the transform as
Let z = (z0 , z1 , . . . , z15 ) and y = (y0 , y1 , . . . , y15 ). First we compute a real part of
this transform. Using the following notations:
we obtain
A1 = y2 − y3 , Bi1 = y4 + y5 ,
Bi2 = uy6 + vy7 , Bi3 = vy6 + uy7 ,
(3.57)
C1i = c1 y10 + c2 y11 , C2i = c2 y10 + c1 y11 ,
S 1i = s1 y8 + s2 y9 , S 2i = s2 y8 + s1 y9 ,
Q = qy12 + ty13 + gy14 + ey15 ,
T = ty12 + qy13 + hy14 + f y15 ,
(3.58)
R = ry12 + py13 + hy14 + f y15 ,
P = py12 + ry13 + f y14 + hy15 ,
we obtain
Figure 3.3 Flow graph of the real part of 16-point Fourier correction transform.
transform, if using the correction transform, needs only 68+64 = 132 real addition
and 56 real multiplication operations (see Figs. 3.3 and 3.4).
where
Figure 3.4 Flow graph of the imaginary part of a 16-point Fourier correction transform.
1
z = [Hart]N x = [Hart]N HN HN x = BN x, (3.62)
N
where
BN = (1/N)[Hart]N HN , (3.63)
where
1
BN = [Hart]N HN (3.68)
N
can be represented as a block-diagonal structure [see Eq. (3.5)]. Without losing the
generalization, we can prove it for the cases N = 4, 8, and 16.
Case N = 4: The discrete Hartley transform matrix of order 4 is
⎛ ⎞
⎜⎜⎜c0,0 + s0,0 c0,1 + s0,1 c0,2 + s0,2 c0,3 + s0,3 ⎟⎟
⎟
⎜⎜⎜c + s c1,1 + s1,1 c1,2 + s1,2 c1,3 + s1,3 ⎟⎟⎟⎟
[Hart]4 = ⎜⎜⎜⎜ 1,0 1,0
⎟.
c2,3 + s2,3 ⎟⎟⎟⎠⎟
(3.69)
⎜⎝⎜c2,,0 + s2,0 c2,1 + s2,1 c2,2 + s2,2
c3,0 + s3,0 c3,1 + s3,1 c3,2 + s3,2 c3,3 + s3,3
By using the relations in Eq. (3.67) and ordering the rows of [Hart]4 as 0, 2, 1, 3,
we obtain
⎛ ⎞
⎜⎜⎜c0,0 + s0,0 c0,1 + s0,1 c0,0 + s0,0 c0,1 + s0,1 ⎟⎟
⎟
⎜⎜⎜c + s c2,1 + s2,1 c2,0 + s2,0 c2,1 + s2,1 ⎟⎟⎟⎟
[Hart]4 = ⎜⎜⎜⎜ 2,0 2,0
⎟⎟⎟ = A2 A2 , (3.70)
⎜⎜⎝c1,0 + s1,0 c1,1 + s1,1 −(c1,0 + s1,0 ) −(c1,1 + s1,1 )⎟⎟⎠ P2 −P2
c3,0 + s3,0 c3,1 + s3,1 −(c3,0 + s3,0 ) −(c3,1 + s3,1 )
where
A2 = P2 = H2 , (3.71)
i.e., [Hart]4 is the Hadamard matrix; therefore, the correction transform in this case
(B4 ) is the identity matrix.
where
⎛ √ ⎞
⎜⎜⎜1 ⎟⎟
√ ⎟⎟⎟⎟
2 1 0
⎜⎜⎜⎜
1 1 ⎜⎜1 0 −1 2⎟⎟⎟⎟
H2 = , P4 = ⎜⎜⎜⎜ √ ⎟.
⎜⎜⎜1 − 2 1 0 ⎟⎟⎟⎟⎟
(3.73)
1 −1
⎜⎜⎝ √ ⎟⎟⎠
1 0 −1 − 2
Note that
where
⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 0 0 0 0 0 0 0⎟⎟⎟ ⎜⎜⎜1 0 0 0 0 0 0 0⎟⎟⎟
⎜⎜⎜⎜0 ⎟
0⎟⎟⎟⎟⎟ ⎜⎜⎜⎜0 ⎟
0⎟⎟⎟⎟⎟
⎜⎜⎜ 0 1 0 0 0 0
⎟ ⎜⎜⎜ 0 1 0 0 0 0
⎟
⎜⎜⎜0 0⎟⎟⎟⎟⎟ ⎜⎜⎜0 0⎟⎟⎟⎟⎟
⎜⎜⎜ 0 0 0 1 0 0
⎟ ⎜⎜⎜ 1 0 0 0 0 0
⎟
⎜⎜⎜0 0 0 0 0 0 1 0⎟⎟⎟⎟ ⎜⎜⎜0 0 0 1 0 0 0 0⎟⎟⎟⎟
Q1 = ⎜⎜⎜⎜ ⎟⎟ , Q2 = ⎜⎜⎜⎜ ⎟⎟ . (3.75)
⎜⎜⎜0 1 0 0 0 0 0 0⎟⎟⎟⎟ ⎜⎜⎜0 0 0 0 1 0 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 0 1 0 0 0 0⎟⎟⎟⎟ ⎜⎜⎜0 0 0 0 0 1 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜⎝0 0 0 0 0 1 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜⎝0 0 0 0 0 0 1 0⎟⎟⎟⎟⎟
⎠ ⎠
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1
The correction matrix in this case will be B8 = (1/8)[4I2 ⊕ 4I2 ⊕ 2P4 H4 ], i.e.,
⎡ ⎛ ⎞⎤
⎢⎢⎢ ⎜⎜⎜ b a s −s ⎟⎟⎟⎥⎥⎥
1 ⎢⎢ ⎢ ⎜
⎜⎜ s −s a b⎟⎟⎟⎟⎥⎥⎥⎥
B8 = ⎢⎢⎢⎢I4 ⊕ ⎜⎜⎜⎜ ⎟⎥ ,
⎜⎜⎝ a b −s s ⎟⎟⎟⎟⎠⎥⎥⎥⎥⎦
(3.76)
8 ⎢⎢⎣
−s s b a
where
√
s= 2, a = 2 − s, b = 2 + s. (3.77)
We can see that the third block of matrix B8 may be factorized as (see Fig. 3.5)
⎛ ⎞ ⎛ ⎞⎛ ⎞⎛ ⎞
⎜⎜⎜ b a s −s ⎟⎟⎟ ⎜⎜⎜1 1 0 1⎟⎟⎟ ⎜⎜⎜2 0 0 0⎟⎟ ⎜⎜1 1 0 0⎟⎟
⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ s −s a b⎟⎟⎟ ⎜⎜⎜0 1 1 −1⎟⎟⎟ ⎜⎜⎜0 s 0 0⎟⎟ ⎜⎜1 −1 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜ ⎟⎜
⎜⎜⎜ a b −s s ⎟⎟⎟⎟⎟ = ⎜⎜⎜⎜⎜1 −1 0 −1⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜0 0 2
⎟⎟⎟ ⎜⎜⎜ ⎟.
0⎟⎟⎠ ⎜⎜⎝0 0 1 −1⎟⎟⎟⎟⎠
(3.78)
⎝ ⎠ ⎝ ⎠⎝
−s s b a 0 −1 1 1 0 0 0 s 0 0 1 −1
Case N = 16: Using the properties of the elements of a Hartley matrix [see
Eq. (3.67) ], the Hartley transform matrix of order 16 can be represented as
⎛ ⎞
⎜⎜⎜A2 A2 A2 A2 A2 A2 A2 A2 ⎟⎟⎟
⎜⎜⎜⎜P2 −P P2 −P2 P2 −P2 P2
⎟
−P2 ⎟⎟⎟⎟
A16 = ⎜⎜⎜⎜ 2
⎟⎟⎟ , (3.79)
⎜⎜⎝ P4 −P4 P4 −P4 ⎟⎟⎠
P8 −P8
where
1 1
A2 = C 2 + S 2 = ,
1 −1
1 1
P2 = Pc2 + P2s = ,
1 −1
⎛ √ ⎞ (3.80)
⎜⎜⎜1 2 1 0√ ⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
1 0√ −1 2⎟⎟⎟⎟
P4 = Pc4 + P4s = ⎜⎜⎜⎜⎜ ⎟,
⎜⎜⎜1 − 2 1 0√ ⎟⎟⎟⎟⎟
⎝ ⎠
1 0 −1 − 2
and P8 = Pc8 + P8s [here we use the notations ci = cos(iπ/8) and si = sin(iπ/8)]:
⎛ ⎞
⎜⎜⎜1 c1 c2 c3 0 −c3 −c2 −c1 ⎟⎟
⎜⎜⎜1 ⎟
⎜⎜⎜ c3 −c2 −c1 0 c1 c2 −c3 ⎟⎟⎟⎟⎟
⎜⎜⎜1 ⎟
⎜⎜⎜ −c3 −c2 c1 0 −c1 c2 c3 ⎟⎟⎟⎟
⎟
⎜⎜1 −c1 c2 −c3 0 c3 −c2 c1 ⎟⎟⎟⎟
P8 = ⎜⎜⎜⎜
c ⎟,
c1 ⎟⎟⎟⎟⎟
(3.81)
⎜⎜⎜1 −c1 c2 −c3 0 c3 −c2
⎜⎜⎜ ⎟
⎜⎜⎜⎜1 −c3 −c2 c1 0 −c1 c2 c3 ⎟⎟⎟⎟
⎟
⎜⎜⎜1 c3 −c2 −c1 0 c1 c2 −c3 ⎟⎟⎟⎟
⎝ ⎠
1 c1 c2 c3 0 −c3 −c2 −c1
⎛ ⎞
⎜⎜⎜0 s1 s2 s3 1 s3 s2 s1 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 s3 s2 −s1 1 −s1 s2 s3 ⎟⎟⎟⎟
⎜⎜⎜0 ⎟
⎜⎜⎜ s3 −s2 −s1 1 −s1 −s2 s3 ⎟⎟⎟⎟⎟
⎜⎜⎜0 ⎟
s1 −s2 s3 1 s3 −s2 s1 ⎟⎟⎟⎟
P8 = ⎜⎜⎜⎜
c ⎟⎟ . (3.82)
⎜⎜⎜0 −s1 s2 −s3 1 −s3 s2 −s1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 −s3 s2 s1 1 s1 s2 −s3 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎝0 −s3 −s2 s1 1 s1 −s2 −s3 ⎟⎟⎟⎟
⎠
0 −s1 −s2 −s3 1 −s3 −s2 −s1
From Eq. (3.79) and Lemma 3.1.1, we obtain the Hartley correction matrix as
1
B16 = 8A2 H2 ⊕ 8P2 H2 ⊕ 4P4 H4 ⊕ 2P8 H8 ; (3.83)
16
denoted by
√
s = 2, a = 2 − s, b = 2 + s,
e = 1 − s, f = 1 + s,
π 3π π 3π
c+ = 2 cos + cos , c− = 2 cos − cos , (3.84)
8 8 8 8
π 3π π 3π
s+ = 2 sin + sin , s− = 2 sin − sin .
8 8 8 8
A2 H2 = P2 H2 = 2I2 ,
⎛ ⎞
⎜⎜⎜ b a s −s ⎟⎟⎟
⎜⎜⎜ s −s a b⎟⎟⎟
P4 H4 = ⎜⎜⎜⎜ ⎟⎟ ,
⎜⎜⎝ a b −s s ⎟⎟⎟⎟⎠
(3.85)
−s s b a
P8 H8 = Pc8 H8 + P8s H8 .
Now, we wish to show that the Hartley transform can be realized via fast
algorithms. The 16-point Hartley transform z = [Hart]16 x can be realized as
follows. First, we perform the 16-point HT y = H16 x, then we compute the 16-point
correction transform. Using Eq. (3.83), we find that
⎛ ⎞ ⎛ ⎞
⎜⎜⎜y4 ⎟⎟⎟ ⎜⎜⎜y8 ⎟⎟⎟
⎜ ⎟ ⎜⎜⎜y ⎟⎟⎟
⎜
⎜y ⎟ ⎟ ⎜ 9⎟
z = 8A2 H2 0 ⊕ 8P2 H2 2 ⊕ 4P4 H4 ⎜⎜⎜⎜ 5 ⎟⎟⎟⎟ ⊕ 2P8 H8 ⎜⎜⎜⎜⎜.. ⎟⎟⎟⎟⎟ .
y y
(3.89)
y1 y3 ⎜⎜⎝y6 ⎟⎟⎠ ⎜⎜⎜. ⎟⎟⎟
y7 ⎝ ⎠
y15
The coefficients
⎛ ⎞ ⎛ ⎞
⎜⎜⎜z8 ⎟⎟⎟ ⎜⎜⎜y8 ⎟⎟⎟
⎜⎜⎜z ⎟⎟⎟ ⎜⎜⎜y ⎟⎟⎟
⎜⎜⎜ 9 ⎟⎟⎟ ⎜⎜ 9 ⎟⎟
⎜⎜⎜⎜.. ⎟⎟⎟⎟ = P8 H8 ⎜⎜⎜⎜⎜.. ⎟⎟⎟⎟⎟ (3.91)
⎜⎜⎝. ⎟⎟⎠ ⎜⎜⎝. ⎟⎟⎠
z15 y15
z8 = A1 + B1 + C1 + D,
z9 = A3 + B3 − C3 + D,
z10 = A5 + B5 + C3 − D,
z11 = A7 + B7 − C4 + D,
(3.92)
z12 = A2 + B2 − C2 + D,
z13 = A4 + B4 + C2 − D,
z14 = A6 + B6 − C3 − D,
z15 = A8 + B8 + C4 + D,
where
A5 y12 C1
c+
–1
A6 y13 C3
c
b B3 y14 C2
s+
y8 a B1 y15 C4
s
c+ s
y9 B4 y12 D
c B2 y13
A7 y14
–1
A8 y15
where
√
2
a0 = , ak = 1, k 0. (3.95)
2
For more detail on DCT transforms, see also Refs. 9, 19, 32, 33, 40, 49, 80–82,
and 98.
We can check that C N is an orthogonal matrix, i.e., C N C NT = (N/2)IN . We denote
the elements of the DCT-2 matrix (without normalizing coefficients ak ) by
(2n + 1)kπ
ck,n = cos , k, n = 0, 1, . . . , N − 1. (3.96)
2N
N
c2k,n = c2k,N−n−1 , c2k+1,n = c2k+1,N−n−1 , k, n = 0, 1, . . . , − 1. (3.97)
2
⎛ ⎞
⎜⎜⎜1 1 1 1 ⎟⎟
⎟
⎜⎜⎜c c3 −c3 −c1 ⎟⎟⎟⎟
C4 = ⎜⎜⎜⎜ 1 ⎟.
⎜⎜⎝c2 −c2 −c2 c2 ⎟⎟⎟⎟⎠
(3.99)
c3 c1 −c1 −c3
⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 0 0 0⎟⎟⎟ ⎜⎜⎜1 0 0 0⎟⎟⎟
⎜⎜⎜⎜ ⎟ ⎜⎜⎜⎜ ⎟
0⎟⎟⎟⎟⎟ 0⎟⎟⎟⎟⎟
P1 = ⎜⎜⎜⎜⎜ P2 = ⎜⎜⎜⎜⎜
0 0 1 0 1 0
⎟, ⎟, (3.100)
⎜⎜⎜0 1 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜0 0 0 1⎟⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠
0 0 0 1 0 0 1 0
we obtain
⎛ ⎞
⎜⎜⎜1 1 1 1 ⎟⎟
⎜⎜⎜c −c2 c2 −c2 ⎟⎟⎟⎟⎟
C4 = P1C4 P2 = ⎜⎜⎜⎜ 2 ⎟⎟⎟ = C2 C2 . (3.101)
⎜⎜⎝c1 c3 −c1 −c3 ⎟⎟⎠ D2 −D2
c3 c1 −c3 −c1
1 2 0 2(c1 + c3 ) 2(c1 − c3 )
A4 = 2C2 H2 ⊕ 2D2 H2 = ⊕ . (3.102)
4 0 2c2 2(c1 + c3 ) −2(c1 − c3 )
Figure
√ 3.9 Flow graph of the 4-point cosine correction transform (r1 = c1 + c3 , r2 = c1 − c3 ,
s = 2).
Case N = 8: The DCT matrix of order 8 has the form [here we use the notation
ci = cos(iπ/16)]
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 ⎟⎟
⎟
⎜⎜⎜c c3 c5 c7 −c7 −c5 −c3 −c1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 1
⎜⎜⎜c2
⎜⎜⎜ c6 −c6 −c2 −c2 −c6 c6 c2 ⎟⎟⎟⎟⎟
⎟
⎜c −c7 −c1 −c5 c5 c1 c7 −c3 ⎟⎟⎟⎟
C8 = ⎜⎜⎜⎜⎜ 3 ⎟. (3.103)
⎜⎜⎜c4 −c4 −c4 c4 c4 −c4 −c4 c4 ⎟⎟⎟⎟
⎜⎜⎜c5 ⎟
⎜⎜⎜ −c1 c7 c3 −c3 −c7 c1 −c5 ⎟⎟⎟⎟
⎟
⎜⎜⎜c6 −c2 c2 −c6 −c6 c2 −c2 c6 ⎟⎟⎟⎟
⎝ ⎠
c7 −c5 c3 −c1 c1 −c3 c5 −c7
Let
⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 0 0 0 0 0 0 0⎟⎟⎟ ⎜⎜⎜1 0 0 0 0 0 0 0⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟ ⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 1 0 0 0 0 0⎟⎟⎟⎟ ⎜⎜⎜0 1 0 0 0 0 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 0 0 1 0 0 0⎟⎟⎟⎟ ⎜⎜⎜0 0 1 0 0 0 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
0⎟⎟⎟⎟⎟ 0⎟⎟⎟⎟⎟
P1 = ⎜⎜⎜⎜⎜ P2 = ⎜⎜⎜⎜⎜
0 0 0 0 0 0 1 0 0 0 1 0 0 0
⎟, ⎟,
⎜⎜⎜0 1 0 0 0 0 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜0 0 0 0 0 0 0 1⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 1 0 0 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜0 0 0 0 0 0 1 0⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 0 1 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜0 0 0 0 0 1 0 0⎟⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠
0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0
⎛ ⎞
⎜⎜⎜1 0 0 0 0 0 0 0⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 1 0 0 0 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎛ ⎞
⎜⎜⎜0 1 0 0 0 0 0 0⎟⎟⎟⎟ ⎜⎜⎜1 0 0 0⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜0 0 0⎟⎟⎟⎟⎟ ⎜0 0⎟⎟⎟⎟⎟
P3 = ⎜⎜⎜⎜⎜ Q = ⎜⎜⎜⎜⎜
0 1 0 0 0 1 0 Q 0
⎟, ⎟, P4 = . (3.104)
⎜⎜⎜0 0 0 0 1 0 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜0 0 0 1⎟⎟⎟⎟⎟ 0 Q
⎜⎜⎜⎜0 0 0⎟⎟⎟⎟⎟
⎟ ⎝ ⎠
⎜⎜⎜ 0 0 0 1 0
⎟
0 0 1 0
⎜⎜⎜0 0 0⎟⎟⎟⎟⎟
⎜⎜⎝ 0 0 0 0 1
⎠
0 0 0 0 0 0 0 1
Using the above-given matrices, we obtain the block representation for the DCT
matrix of order 8 as
⎛ ⎞
⎜⎜⎜C2 C2 C2 C2 ⎟⎟⎟
⎜⎜⎜ ⎟
C8 = P3 P1C8 P2 P4 = ⎜⎜ B2 −B2 B2 −B2 ⎟⎟⎟⎟ , (3.105)
⎝ ⎠
D4 Q −D4 Q
where
⎛ ⎞
⎜⎜⎜c1 c3 c7 c5 ⎟⎟⎟
1 1 c2 c6 ⎜⎜⎜c3 −c7 −c5 −c1 ⎟⎟⎟
C2 = , B2 = , D4 Q = ⎜⎜⎜⎜c −c ⎟
c3 c7 ⎟⎟⎟⎟⎠ . (3.106)
c4 −c4 c6 −c2 ⎜⎝ 5 1
c7 −c5 −c1 c3
Therefore, the correction matrix can take the following block-diagonal form:
⎡ ⎛ ⎞⎤
⎢⎢⎢ ⎜⎜⎜⎜ a1 a2 a3 a4 ⎟⎟⎟⎟⎥⎥⎥⎥
⎢
1 ⎢⎢ 1 0 r 1 r2 ⎜⎜−b b2 b3 −b4 ⎟⎟⎟⎥⎥⎥
A8 = ⎢⎢⎢⎢8 ⊕4 ⊕ ⎜⎜⎜⎜ 1 ⎟⎥ ,
⎜⎝⎜−b4 b3 −b2 b1 ⎟⎟⎟⎠⎟⎥⎥⎥⎦⎥
(3.107)
8 ⎢⎣⎢ 0 c4 −r2 r1
−a4 −a3 a2 a1
where
a1 = c1 + c3 + c5 + c7 , a2 = c1 − c3 − c5 + c7 ,
a3 = c1 + c3 − c5 − c7 , a4 = c1 − c3 + c5 − c7 ,
b1 = c1 − c3 + c5 + c7 , b2 = c1 + c3 − c5 + c7 , (3.108)
b3 = c1 + c3 + c5 − c7 , b4 = c1 − c3 − c5 − c7 ,
r1 = c2 + c6 , r2 = c2 − c6 .
Case N = 16: Denote rk = cos(kπ/32). From the cosine transform matrix C16 of
order 16 we generate a new matrix by the following operations:
(1) Rewrite the rows of the matrix C16 in the following order: 0, 2, 4, 6, 8, 10, 14,
1, 3, 5, 7, 9, 11, 13, 15.
(2) Rewrite the first eight rows of the new matrix as 0, 2, 4, 6, 1, 3, 5, 7.
(3) Reorder the columns of this matrix as follows: 0, 1, 3, 2, 4, 5, 7, 6, 8, 9, 11, 10,
12, 13, 15, 14.
Finally, the DCT matrix of order 16 can be represented by the equivalent block
matrix as
⎛ ⎞
⎜⎜⎜C2 C2 C2 C2 C2 C2 C2 C2 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜A2 −A2 A2 −A2 A2 −A2 A2 −A2 ⎟⎟⎟⎟
⎜⎜⎜⎜ B11 B12 −B11 −B12 B11 B12 −B11 −B12 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜B B22 −B21 −B22 B21 B22 −B21 −B22 ⎟⎟⎟⎟
C16 = ⎜⎜⎜⎜⎜ 21 ⎟, (3.109)
⎜⎜⎜ B31 B32 B34 B33 −B31 −B32 −B34 −B33 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ B41 B42 B44 B43 −B41 −B42 −B44 −B43 ⎟⎟⎟⎟⎟
⎜⎜⎜ B ⎟
⎜⎝ 51 B52 B54 B53 −B51 −B52 −B54 −B53 ⎟⎟⎟⎠
B61 B62 B64 B63 −B61 −B62 −B64 −B63
where
1 1 r4 r12
C2 = r −r , A2 = r −r ;
8 8 12 4
r r r r r −r r r
B11 = r2 −r6 , B12 = −r14 −r10 , B21 = r10 −r2 , B22 = −r6 r14 ,
6 14 10 2 14 10 2 6
r r r r r r r r
B31 = r1 r3 , B32 = −r7 r5 , B33 = −r9 −r11 , B34 = −r15 −r13 ,
3 9 11 15 5 1 13 7
r5 r15 −r3 −r7 −r13 r9 r11 r1
B41 = r −r , B42 = r −r , B43 = r r , B22 = −r −r , (3.110)
7 11 15 3 1 13 9 5
r −r r −r13 −r15 −r3 r7 r11
B51 = r9 −r5 , B52 = r1 r9 , B53 = −r3 r7 , B54 = −r5 r15 ,
11 1 13
r −r −r r r r r −r
B61 = r13 −r7 , B62 = −r5 r1 , B63 = r11 −r15 , B64 = −r3 r9 .
15 13 9 11 7 5 1 3
where
P1,1 = a1 + a2 + a3 + a4 , P1,2 = a1 − a2 − a3 + a4 ,
P1,3 = a1 + a2 − a3 − a4 , P1,4 = a1 − a2 + a3 − a4 ,
P1,5 = b1 + b2 + b3 + b4 , P1,6 = b1 − b2 − b3 + b4 ,
P1,7 = b1 + b2 − b3 − b4 , P1,8 = b1 − b2 + b3 − b4 ;
⎛ ⎞
⎜⎜⎜ P1,1 P1,2 P1,3 P1,4 P1,5 P1,6 P1,7 P1,8 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 2,1 P P 2,2 P 2,3 P 2,4 P2,5 P 2,6 P 2,7 P 2,8 ⎟ ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ P3,1 P3,2 P3,3 P3,4 P3,5 P3,6 P3,7 P3,8 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ P4,1 P4,2 P4,3 P4,4 P4,5 P4,6 P4,7 P4,8 ⎟⎟⎟⎟⎟
P = ⎜⎜⎜ ⎜ ⎟⎟ . (3.118)
⎜⎜⎜ P4,8 P4,7 P4,6 P4,5 −P4,4 −P4,3 −P4,2 −P4,1 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜−P −P −P −P P3,4 P3,3 P3,2 P3,1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 3,8 3,7 3,6 3,5
⎟⎟
⎜⎜⎜
⎜⎜⎜ P2,8 P2,7 P2,6 P2,5 −P2,4 −P2,3 −P2,2 −P2,1 ⎟⎟⎟⎟⎟
⎜⎝ ⎟⎠
−P1,8 −P1,7 −P1,6 −P1,5 P1,4 P1,3 P1,2 P1,1
The following shows that the cosine transform can be done via a fast algorithm.
Denote y = H16 x. Then, z = A16 y. Using Eq. (3.113), we find that
⎛ ⎞ ⎛ ⎞
⎜ y ⎟ ⎜⎜⎜y8 ⎟⎟⎟
⎜
⎜⎜⎜ ⎟⎟⎟4 ⎟ ⎜⎜⎜y ⎟⎟⎟
⎜ ⎟ ⎜ 9⎟
⊕ 4D4 H4 ⎜⎜⎜⎜ 5 ⎟⎟⎟⎟ ⊕ 2D8 H8 ⎜⎜⎜⎜⎜.. ⎟⎟⎟⎟⎟ .
y0 y2 y
z = 8C2 H2 ⊕ 8D2 H2 (3.119)
y1 y3 ⎜⎜⎝y6 ⎟⎟⎠ ⎜⎜⎜⎝. ⎟⎟⎟⎠
y7 y15
√
From Eqs. (3.115) and (3.116), we obtain (here s = 2)
z0 = 2y0 ,
z1 = sy1 ,
z2 = r4 (y2 + y3 ) + r12 (y2 − y3 ),
z3 = r4 (y2 − y3 ) + r12 (y2 + y3 ),
(3.120)
z4 = q1 (y4 + y5 ) + q2 (y4 − y5 ) + t1 (y6 + y7 ) + t2 (y6 − y7 ),
z5 = −q1 (y4 − y5 ) + t2 (y4 + y5 ) + t1 (y6 − y7 ) + q2 (y6 + y7 ),
z6 = −t1 (y4 − y5 ) + q2 (y4 + y5 ) − q1 (y6 − y7 ) − t2 (y6 + y7 ),
z7 = −t1 (y4 + y5 ) + t2 (y4 − y5 ) + q1 (y6 + y7 ) − q2 (y6 − y7 ).
we obtain
2
y0 z0
s
y1 z1
r4
y2 z2
r12
r12
y3 z3
–r4
z4
t1
q1 q2
t2
q2 z6 t2
y4 y6
t1 q1
q2 t2
q1 t1
y5 y7
z5 q1
t1
q2
t2
z7
From Eq. (3.123), it follows that the matrix X2n can be represented as
⎛ ⎞
⎜⎜⎜X n−1 X2n−1 ⎟⎟⎟
X2n ≡ ⎜⎝ ⎜
⎜ 2 ⎟⎟⎟ . (3.125)
⎠
2(n−1)/2 I2n−1 −2(n−1)/2 I2n−1
1
AN = XN HN (3.126)
N
Figure 3.12 Flow graph of the computation of components zi , i = 10, 11, 12, 13.
Figure 3.13 Flow graph of the computation of components z14 and z15 .
Case N = 16: Consider a Haar matrix of order 16. For n = 4 from Eq. (3.33), we
obtain
⎛ ⎞
⎜⎜⎜X X8 ⎟⎟⎟⎟
X16 = ⎜⎝ ⎜
⎜ 8
⎟⎟⎠ ,
23/2 I8 −23/2 I8
⎛ ⎞
⎜⎜⎜X4 X4 ⎟⎟⎟
X8 = ⎝⎜ ⎟⎠ , (3.131)
2I4 −2I4
⎛ ⎞
⎜⎜⎜X2 X2 ⎟⎟⎟⎟
X4 = ⎜⎝ ⎜ √ √ ⎟⎠ .
2I2 − 2I2
Note that
+ +
X2 = H2 = . (3.132)
+ −
Hence, using Eq. (3.131), the Haar transform matrix X16 of order 16 is represented
as
⎛ ⎞
⎜⎜⎜H H2 ⎟⎟⎟⎟⎟
⎜⎜⎜ 2 H2 H2 H2 H2 H2 H2
⎜⎜⎜ √ √ √ √ √ √ √ √ ⎟⎟⎟
⎜⎜⎜ 2I2 − 2I2 2I2 − 2I2 2I2 − 2I2 2I2 − 2I2 ⎟⎟⎟⎟
X16 = ⎜⎜⎜ ⎟⎟⎟ ,
⎜⎜⎜⎜ 2I4 −2I4 2I4 −2I4
⎟⎟⎟
⎟⎟⎟
⎜⎜⎜ ⎟⎠
⎝
23/2 I8 −23/2 I8
(3.133)
√
or as (here s = 2)
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜⎜1 −1 1 −1 1 −1 1 −1 1 −1 1 −1 1 −1 1 −1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ s 0 −s 0 s 0 −s 0 s 0 −s 0 s 0 −s 0 ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 −s 0 −s 0 −s s 0 −s ⎟⎟⎟⎟⎟
⎜⎜⎜ s 0 s 0 s 0
⎜⎜⎜2 0 0 0 −2 0 0 0 2 0 0 0 −2 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 2 0 0 0 −2 0 0 0 2 0 0 0 −2 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 2 0 0 0 −2 0 0 0 2 0 0 0 −2 0 ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 0 2 0 0 0 −2 0 0 0 2 0 0 0 −2 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜2s 0 0 0 0 0 0 0 −2s 0 0 0 0 0 0 0 ⎟⎟⎟⎟⎟ .
⎜⎜⎜ ⎟
⎜⎜⎜0 2s 0 0 0 0 0 0 0 −2s 0 0 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 2s 0 0 0 0 0 0 0 −2s 0 0 0 0 0 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 2s 0 0 0 0 0 0 0 −2s 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 2s 0 0 0 0 0 0 0 −2s 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 0 2s 0 0 0 0 0 0 0 −2s 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 0 0 0 0 2s 0 0 0 0 0 0 0 −2s 0 ⎟⎟⎟
⎜⎝ ⎟⎠
0 0 0 0 0 0 0 2s 0 0 0 0 0 0 0 −2s
y0 z0
8
l2
y1 z1
z2
y2
8 √2
H2
x0
y3
H16 z3
y4 z4
x15 4
H4
y7 z7
z8
y8
4 √2
H8
y15 z15
1
A16 = (4I2 ⊕ 2sH2 ⊕ 2H4 ⊕ sH8 ) . (3.134)
4
Now we want to show that the Haar transform can be realized via a fast
algorithm. Denote y = H16 x and z = A16 y. Using Eq. (3.134), we find that
⎡ ⎛ ⎞ ⎛ ⎞⎤
⎢⎢⎢ ⎜⎜⎜y4 ⎟⎟⎟ ⎜⎜⎜y8 ⎟⎟⎟⎥⎥⎥
1 ⎢⎢ y ⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟⎥⎥
z = ⎢⎢⎢⎢4 0 ⊕ 2sH2 2 ⊕ 2H4 ⎜⎜⎜⎜... ⎟⎟⎟⎟ ⊕ sH8 ⎜⎜⎜⎜... ⎟⎟⎟⎟⎥⎥⎥⎥ .
y
(3.135)
4 ⎣⎢ y1 y3 ⎜⎝ ⎟⎠ ⎜⎝ ⎟⎠⎥⎦
y7 y15
Most linear transforms, however, yield noninteger outputs even when the inputs
are integers, making them unsuitable for many applications such as lossless
compression. In general, the transformed coefficients require theoretically infinite
bits for perfect representation. In such cases, the transform coefficients must be
rounded or truncated to a finite precision that depends on the number of bits
available for their representation. This, of course, introduces an error, which in
general degrades the performance of the transform. Recently, reversible integer-to-
integer wavelet transforms have been introduced.23 An integer-to-integer transform
is an attractive approach to solving the rounding problem, and it offers easier
hardware implementation. This is because integer transforms can be exactly
represented by finite bits.
The purpose of Section 3.6 is to show how to construct an integer slant transform
and reduce the computational complexity of the algorithm for computing the 2D
slant transform. An effective algorithm for computing the 1D slant transform via
Hadamard is also introduced.
X = S 2n x, x = S 2Tn X, (3.136)
where O0 and O0 are row and column zero vectors, respectively, and the parameters
a2n and b2n are defined recursively by
−(1/2)
b2n = 1 + 4a22n−1 , a2n = 2b2n a2n−1 , a2 = 1. (3.139)
From Eq. (3.138), it follows that Q2n QT2n is the diagonal matrix, i.e.,
Q2n QT2n = diag 2, 2(a22n + a22n ), 2I2n−1 −2 , 2, 2(a22n + a22n ), 2I2n−1 −2 . (3.140)
Because a22n + b22n = 1, Q2n is the orthogonal matrix and Q2n QT2n = 2I2n .
⎜⎜⎜ ⎟⎟⎟ 21
⎜⎜⎜3 1 −1 −3 −3 −1 1 3⎟⎟⎟⎟ 1
·√
⎜⎜⎜ ⎟⎟⎟ 5
⎟
1 ⎜⎜⎜⎜7 −1 −9 −17 17 9 1 −7⎟⎟⎟⎟ ·√
1
S 8 = √ ⎜⎜⎜ ⎟⎟⎟ 105 . (3.142)
8 ⎜⎜⎜⎜1 −1 −1 1 1 −1 −1 1⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 −1 −1 1 −1 1 1 −1⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜
1⎟⎟⎟⎟
1
⎜⎜⎜1 −3 3 −1 −1 3 −3 ·√
⎜⎜⎝ ⎟⎟⎟ 5
1 −3 3 −1 1 −3 3 −1⎠ 1
·√
5
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜a b c d −d −c −b −a ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜e f − f −e −e − f f e ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜b −d −a −c a d −b ⎟⎟⎟⎟
[PS ]8 (a, b, c, d, e, f ) = ⎜⎜⎜⎜⎜
c
⎟. (3.144)
⎜⎜⎜1 −1 −1 1 1 −1 −1 1 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜c −a d b −b −d a −c ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ f −e e − f −f e −e f ⎟⎟⎟⎟⎟
⎝ ⎠
d −c b −a a −b c −d
We call the matrices in Eqs. (3.143) and (3.144) parametric slant Hadamard
matrices. Note that [PS ]4 (1, 1) and [PS ]8 (1, 1, 1, 1, 1, 1) are Hadamard matrices of
order 4 and 8, respectively. Note also that the matrix in Eq. (3.144) is a slant-type
matrix if it satisfies the following conditions:
a ≥ b ≥ c ≥ d, e ≥ f, and ab = ac + bd + cd.
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 ⎟⎟
⎟⎟⎟
⎜⎜⎜⎜ ⎟ 2
−a ⎟⎟⎟⎟
·√
⎜⎜⎜a b c d −d −c −b a2 + · · · + d 2
⎜⎜⎜ ⎟⎟⎟ .
⎜⎜⎜ ⎟
⎜⎜⎜e f − f −e −e − f f e ⎟⎟⎟⎟ ·
2
⎜⎜⎜ ⎟⎟⎟ e2 + f 2
⎜ ⎟
1 ⎜⎜⎜⎜b −d −a −c c a d −b ⎟⎟⎟⎟ ·√
2
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜4 −4⎟⎟⎟⎟⎟
1
⎜⎜⎜ 2 2 0 0 −2 −2 ·√
⎜⎜⎜ ⎟⎟⎟ /6
⎜⎜⎜2 1 −1 −2 −2 −1 1 2⎟⎟⎟⎟ ·
2
⎜⎜⎜ ⎟⎟⎟ 5
⎟
1 ⎜⎜⎜⎜⎜2 0 −4 −2 −2⎟⎟⎟⎟ 1
2 4 0 ·√
√ ⎜⎜⎜ ⎟⎟⎟ 6
. (3.147)
8 ⎜⎜⎜1
⎜⎜⎜ −1 −1 1 1 −1 −1 1⎟⎟⎟⎟⎟
⎜⎜⎜2 ⎟⎟
⎜⎜⎜ −4 0 2 −2 0 4 −2⎟⎟⎟⎟ ·√
1
⎜⎜⎜ ⎟⎟⎟ /6
⎟
⎜⎜⎜1 −2 2 −1 −1 2 −2 1⎟⎟⎟⎟ ·
2
⎜⎜⎜ ⎟⎟⎟ 5
⎝0 −2 2 −4 4 −2 2 0⎠ 1
·√
6
Construction 2:25–27 Introduce the following expressions for a2n and b2n [see Eq.
(3.139)] to construct parametric slant HTs of order 2n :
. .
3 · 22n−2 22n−2 − β2n
a2n = , b2 n = , (3.148)
4 · 22n−2 − β2n 4 · 22n−2 − β2n
where
⎟⎟⎟ (3.151)
⎜⎜⎜0 0 O0 0 1 O ⎟⎟⎟
⎜⎜⎜ 0
⎟⎟⎟
⎜⎜⎜⎜0 a2n O0 −b2n 0 O0 ⎟⎟⎠
⎝ 0 0
O O O2n−1 −2 O0 O0 I2n−1 −2
where M2 = I2 · Om denotes a zero matrix of order m, Im denotes an identity
matrix of order m, H2n is the Hadamard-ordered Walsh–Hadamard matrix of
size 2n , the parameters a2n and b2n are given in Eq. (3.148), and O0 and O0
denote the zero row and zero column, both of length 2n−1 − 2, respectively.
Example: For 2n = 8 we have, respectively, classical case (β2n = 1), constant-β
case (β2n = 1.7), multiple-β case (β4 = 1.7 and β8 = 8.1), and Hadamard case
(β4 = 4, β8 = 16). Figure 3.15 shows the basis vectors for this example.
(1) Classical case (β2n = 1):
⎛ ⎞
⎜⎜⎜1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1.5275 1.0911 0.6547 0.2182 −0.2182 −0.6547 −1.0911 −1.5275⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1.0000 −1.0000 −1.0000 1.0000 1.0000 −1.0000 −1.0000 1.0000⎟⎟⎟⎟
⎜ ⎟⎟
1 ⎜⎜⎜⎜0.4472 −1.3416 1.3416 −0.4472 0.4472 −1.3416 1.3416 −0.4472⎟⎟⎟⎟
S Classical = √ ⎜⎜⎜ ⎟⎟.
8 ⎜⎜⎜⎜1.3416 0.4472 −0.4472 −1.3416 −1.3416 −0.4472 0.4472 1.3416⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0.6831 −0.0976 −0.8783 −1.6590 1.6590 0.8783 0.0976 −0.6831⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎝1.0000 −1.0000 −1.0000 1.0000 −1.0000 1.0000 1.0000 −1.0000⎟⎟⎟⎟
⎠
0.4472 −1.3416 1.3416 −0.4472 −0.4472 1.3416 −1.3416 0.4472
(3.152)
(2) Constant-β case (β2n = 1.7):
⎛ ⎞
⎜⎜⎜1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1.5088 1.1245 0.6310 0.2467 −0.2467 −0.6310 −1.1245 −1.5088⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1.0000 −1.0000 −1.0000 1.0000 1.0000 −1.0000 −1.0000 1.0000⎟⎟⎟⎟
⎜ ⎟⎟
1 ⎜⎜⎜⎜0.5150 −1.3171 1.3171 −0.5150 0.5150 −1.3171 1.3171 −0.5150⎟⎟⎟⎟
S Const = √ ⎜⎜⎜ ⎟⎟.
8 ⎜⎜⎜⎜1.3171 0.5150 −0.5150 −1.3171 −1.3171 −0.5150 0.5150 1.3171⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0.6770 −0.0270 −0.9312 −1.6352 1.6352 0.9312 0.0270 −0.6770⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎝1.0000 −1.0000 −1.0000 1.0000 −1.0000 1.0000 1.0000 −1.0000⎟⎟⎟⎟
⎠
0.5150 −1.3171 1.3171 −0.5150 −0.5150 1.3171 −1.3171 0.5150
(3.153)
Figure 3.15 Parametric slant-transform basis vectors for (2n = 8): (a) classical case,
(b) constant-β case (β2n = 1.7), (c) multiple-β case (β4 = 1.7 and β8 = 8.1), and (d) Hadamard
case (β4 = 4, β8 = 16).
S 2N = [H2 ⊗ A1 , H1 ⊗ A2 , . . . , H2 ⊗ AN−1 , H1 ⊗ AN ] ,
⎡ −1 ⎤
⎢⎢⎢H2 ⊗ B1 ⎥⎥⎥
⎢⎢⎢ −1 ⎥
⎢⎢⎢H1 ⊗ B2 ⎥⎥⎥⎥⎥
⎢ ⎥⎥⎥⎥ (3.156)
−1
S 2N = ⎢⎢⎢⎢⎢... ⎥⎥⎥
⎢⎢⎢ −1
⎢⎢⎢H2 ⊗ BN−1 ⎥⎥⎥⎥⎥
⎣ −1 ⎦
H1 ⊗ B N
are the forward and inverse sequential slant HT matrices of order 2N, where Ai
−1
and Bi are the i’th column
+ +and i’th row of the S N and S N matrices, respectively,
+ +
H2 = + − , and H1 = − + .
The construction will be based on the parametric sequential integer slant
matrices and Lemma 3.7.1. Examples of parametric sequential slant matrices and
their inverse matrices of order 3 and 5 are given below:
⎛ ⎞
⎜⎜⎜ 1 1 1 ⎟⎟⎟⎟
⎜⎜
⎜
⎛ ⎞ ⎜⎜⎜ 3 2a 6b ⎟⎟⎟⎟⎟
⎜⎜⎜1 1 1⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜ ⎟
[PS ]3 (a, b) = ⎜⎜⎜⎜a 0 −a⎟⎟⎟⎟ , [PS ]−1 (a, b) = ⎜⎜⎜⎜⎜ 1 0 − 1 ⎟⎟⎟⎟⎟ ,
⎝ ⎠ 3
⎜⎜⎜ 3 3b ⎟⎟⎟⎟
b −2b b ⎜⎜⎜ ⎟
⎜⎜⎝ 1 1 1 ⎟⎟⎟⎠
−
3 2a 6b
⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − ⎟
⎜⎜⎜ 5 5b
⎜⎜⎜ 6a 10b 15c ⎟⎟⎟⎟⎟
⎛ ⎞ ⎜⎜⎜ 1 1 1 1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 1 1 1 1 1 ⎟⎟ ⎜
⎜ − ⎟
10c ⎟⎟⎟⎟⎟
⎟ ⎜⎜⎜ 5 10b 0
⎜⎜⎜ 2b b 0 −b −2b⎟⎟⎟⎟ ⎜ 5b
⎜⎜⎜ ⎟ ⎜
⎜⎜⎜ 1 ⎟
[PS ]5 (a, b, c) = ⎜⎜⎜⎜ a 0 −2a 0 a ⎟⎟⎟⎟ , [PS ]−1 1 1 ⎟⎟⎟⎟ .
⎟ 5 =⎜ ⎜
⎜ 0 − 0 ⎟⎟
⎜⎜⎜−b 2b 0 −2b b ⎟⎟⎟⎟⎠ ⎜⎜⎜ 5 3a 15c ⎟⎟⎟⎟
⎜⎝ ⎜⎜⎜ ⎟
2c −3c 2c −3c 2c ⎜⎜⎜ 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − 0 − − ⎟⎟
⎜⎜⎜ 5 10b 5b 10c ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎝ 1 1 1 1 1 ⎟⎟⎟⎠
−
5 5b 6a 10b 15c
(3.157)
Remark 1: The slant transform matrices in Eqs. (3.143), (3.144), and (3.157)
possess the sequency property in ordered form.
Remark 2: One can construct a class of slant HTs of order 3 · 2n , 4 · 2n , 5 · 2n ,
and 8 · 2n , for n = 1, 2, . . ., by utilizing Lemma 3.7.1 and the parametric integer
slant-transform matrices in Eqs. (3.143), (3.144), (3.156), and (3.157).
Example 3.7.1: (a) Using Eqs. (3.156) and (3.157), for N = 3 and n = 1, we have
the forward integer slant HT matrix [PS ]6 and inverse slant HT matrix [PS ]−1
6
of order 6:
⎡ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞⎤
⎢⎢⎢ ⎜1⎟ ⎜ 1 ⎟⎟
⎟⎟⎟ + + ⎜⎜⎜⎜⎜ 1⎟⎟⎟⎟⎟⎥⎥⎥⎥⎥
⎢⎢ + + ⎜⎜⎜⎜⎜ ⎟⎟⎟⎟⎟ +
⎢ + ⎜⎜⎜⎜
[PS ]6 (a, b) = ⎢⎢ ⊗ a , ⊗ ⎜ 0 ⎟⎟⎟ , ⊗ ⎜−a⎟⎥
⎣ + − ⎜⎜⎝ ⎟⎟⎠ − + ⎜⎝⎜ ⎠ + − ⎜⎜⎝ ⎟⎟⎠⎥⎥⎦
b −2b b
⎛1 1 1 ⎞
⎜⎜⎜ 1 1 1⎟⎟
⎜⎜⎜a a 0
⎜⎜⎜ 0 −a −a⎟⎟⎟⎟⎟
⎟
⎜⎜b b −2b −2b
⎜ b b⎟⎟⎟⎟
= ⎜⎜⎜ ⎟. (3.158)
⎜⎜⎜⎜1 −1 −1 1 1 −1⎟⎟⎟⎟
⎟
⎜⎜⎝a −a 0 0 −a a⎟⎟⎟⎠
b −b 2b −2b b −b
⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ 3 2a 6b 3 2a
⎜⎜⎜ 6b ⎟⎟⎟⎟⎟
⎛ % & ⎞ ⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ + + 1 1 1 ⎟⎟⎟ ⎜⎜⎜ − − − ⎟⎟⎟⎟
⎜⎜⎜⎜ + ⊗ ⎟ ⎜
⎜⎜⎜ 3 2a 6b 3 2a 6b ⎟⎟⎟
⎜⎜⎜ − 3 2a 6b ⎟⎟⎟⎟ ⎜ 1 ⎟⎟⎟⎟
⎟
⎜⎜⎜ + % & ⎟⎟⎟⎟ 1 ⎜⎜⎜⎜ 1 0 − 1 − 1 0 ⎟⎟
⎜⎜⎜ − 1 1 ⎟⎟⎟ ⎜⎜ 3b ⎟⎟⎟⎟.
[PS ]−1 (a, b) = ⎜⎜⎜ + ⊗ 0 − ⎟⎟⎟ = ⎜⎜⎜⎜ 3 3b 3
⎟
+ ⎜⎜⎜⎜ 1 1 ⎟⎟⎟
6
⎜⎜⎜ 3 3b ⎟
⎟ 2 1 1
⎜⎜⎜ + % &⎟ ⎟ ⎜⎜⎜ 0 − − ⎟⎟⎟⎟
1 1 ⎟⎟⎟⎟
0
⎜⎝ + 1
⎠ ⎜⎜⎜ 3 3b 3 3b ⎟⎟⎟
⊗ − ⎜ ⎟
+ − 3 2a 6b ⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − − ⎟⎟
⎜⎜⎜ 3 2a 6b 3 2a 6b ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎝ 1 1 1 1 1 1 ⎟⎟⎟
− − − ⎠
3 2a 6b 3 2a 6b
(3.159)
(b) Using Eqs. (3.143) and (3.157), for N = 4 and n = 1, we obtain, using the
notation c = 2(a2 + b2 ),
⎡ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞⎤
⎢⎢⎢ ⎜⎜1⎟⎟ ⎜⎜ 1⎟⎟ ⎜⎜ 1⎟⎟ ⎜⎜ 1⎟⎟⎥⎥
⎢⎢⎢⎢ + + ⎜⎜⎜⎜⎜a⎟⎟⎟⎟⎟ + + ⎜⎜⎜⎜⎜ b⎟⎟⎟⎟⎟ + + ⎜⎜⎜⎜⎜−b⎟⎟⎟⎟⎟ + + ⎜⎜⎜⎜⎜−a⎟⎟⎟⎟⎟⎥⎥⎥⎥⎥
[PS ]8 (a, b) = ⎢⎢⎢ ⊗ ⎜ ⎟, ⊗ ⎜ ⎟, ⊗ ⎜ ⎟, ⊗ ⎜ ⎟⎥
⎢⎢⎣ + − ⎜⎜⎜⎜⎝1⎟⎟⎟⎟⎠ − + ⎜⎜⎜⎜⎝−1⎟⎟⎟⎟⎠ + − ⎜⎜⎜⎜⎝−1⎟⎟⎟⎟⎠ − + ⎜⎜⎜⎜⎝ 1⎟⎟⎟⎟⎠⎥⎥⎥⎥⎦
b −a a −b
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟⎟
⎜⎜⎜a a b b −b −b −a −a⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜⎜1 1 −1 −1 −1 −1 1 1⎟⎟⎟⎟⎟
⎜b b −a −a a a −b −b⎟⎟⎟
= ⎜⎜⎜⎜⎜ ⎟. (3.160)
⎜⎜⎜1 −1 −1 1 1 −1 −1 1⎟⎟⎟⎟⎟
⎜⎜⎜a −a −b b −b b a −a⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝1 −1 1 −1 −1 1 −1 1⎟⎟⎟⎟⎠
b −b a −a a −a b −b
⎛⎛ ⎞ ⎞
⎜⎜⎜⎜⎜⎜+ +⎟⎟⎟ % 1 a 1 b ⎟⎟⎟&
⎜⎜⎜⎜⎜⎝ ⎜ ⎟⎟⎠ ⊗ ⎟⎟⎟
⎜⎜⎜ + − 4 c 4 c ⎟⎟⎟
⎜⎜⎜⎛ ⎟⎟
⎜⎜⎜⎜⎜+ −⎞⎟⎟ % 1 & ⎟⎟
b 1 a ⎟⎟⎟⎟⎟
⎜⎜⎜⎜⎜⎜ ⎟⎟⎟ ⊗ − − ⎟
⎜⎜⎜⎝ ⎠ c 4 c ⎟⎟⎟⎟⎟
1 ⎜
⎜⎜⎜⎛ + + 4
[PS ]−1 = ⎟
(a, b)
2 ⎜⎜⎜⎜⎜+ +⎟⎟ ⎜
⎜ ⎞ % & ⎟⎟⎟
b 1 a ⎟⎟⎟⎟
8
⎜⎜⎜⎜⎜⎜⎝ ⎟⎟ 1
⎟⎟
⎜⎜⎜ + −⎟⎠ ⊗ 4 − −
c 4 c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜⎛ ⎞ & ⎟⎟⎟⎟
⎜⎜⎜⎜⎜⎜+ −⎟⎟⎟ % 1 a 1 b ⎟⎟
⎜⎝⎜⎜⎝ ⎟⎟⎠ ⊗ − − ⎟⎠
+ + 4 c 4 c
⎛ ⎞
⎜⎜⎜ 1 a 1 b 1 a 1 b ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 4 c 4 c 4 c 4 c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 a 1 b 1 a 1 b ⎟⎟⎟⎟
⎜⎜⎜ − − − − ⎟⎟⎟
⎜⎜⎜ 4 c 4 c 4 c 4 c ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 b 1 a 1 b 1 a ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 4 c − 4 −
c
−
4
−
c 4 c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 b 1 a 1 b 1 a ⎟⎟⎟⎟
⎜
1 ⎜⎜⎜⎜⎜ 4 c − 4 −
c 4 c
−
4
− ⎟⎟⎟
c ⎟⎟⎟ .
= ⎜⎜⎜ ⎟⎟ (3.161)
2 ⎜⎜⎜ 1 b 1 a 1 b 1 a ⎟⎟⎟⎟
⎜⎜⎜ − − − − ⎟⎟
⎜⎜⎜ 4 c 4 c 4 c 4 c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜⎜ 1 b 1 a 1 b 1 a ⎟⎟⎟⎟
⎜⎜⎜ − − − − ⎟⎟⎟
⎜⎜⎜ 4 c 4 c 4 c 4 c ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 a 1 b 1 a 1 b ⎟⎟⎟⎟
⎜⎜⎜ − − − − ⎟⎟
⎜⎜⎜ 4 c 4 c 4 c 4 c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎝ 1 a 1 b 1 a 1 b ⎟⎟⎟⎠
− − − −
4 c 4 c 4 c 4 c
(c) For N = 5 and n = 1, we have integer slant HT matrix [PS ]10 of order 10:
⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 2b −b −b −2b −2b⎟⎟⎟⎟⎟
⎜⎜⎜ 2b b b 0 0
⎟⎟
⎜⎜⎜
⎜⎜⎜ a a 0 0 −2a −2a 0 0 a a ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜−b −b 2b 2b 0 0 −2b −2b b b ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜ 2c 2c −3c −3c 2c −3c 2c ⎟⎟⎟⎟⎟
[PS ]10 (a, b, c) = ⎜⎜⎜⎜⎜
2c 3c 2c
⎟⎟ , (3.162)
⎜⎜⎜ 1
⎜⎜⎜ −1 −1 1 1 −1 −1 1 1 −1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 2b ⎟⎟⎟
⎜⎜⎜ −2b −b b 0 0 b −b −2b 2b⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ a −a 0 0 −2a 2a 0 0 a −a ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜−b b −2b 2b 0 0 2b −2b b −b ⎟⎟⎟⎟⎟
⎜⎝ ⎟⎠
2c −2c 3c −3c 2c −2c 3c −3c 2c −2c
⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − − ⎟⎟
⎜⎜⎜ 5 5a 6a 10a 15c 5 5a 6a 10a 15c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 1 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − − − − − ⎟⎟
⎜⎜⎜ 5 5a 6a 10a 15c 5 5a 6a 10a 15c ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ 1
⎜⎜⎜ 1 1 1 1 1 1 1 ⎟⎟⎟⎟⎟
0 − − − 0 − ⎟
⎜⎜⎜ 5
⎜⎜⎜ 10b 5b 10c 5 10b 5b 10c ⎟⎟⎟⎟⎟
⎟
⎜⎜⎜ 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ 0 − 0 − ⎟⎟
⎜⎜⎜ 5 10b 5b 10c 5 10b 5b 10c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟⎟⎟
−1 1 ⎜⎜⎜⎜ 5 0 − 0 0 − 0
15c ⎟⎟⎟⎟⎟ .
⎟
[PS ]10 (a, b, c) = ⎜⎜⎜ 3a 15c 5 3a
⎟
2 ⎜⎜⎜ 1 1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 0 −
1
0
1
−
1
0
1
0 − ⎟
⎜⎜⎜ 5 3a 15c 5 3a 15c ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − 0 − − − 0 ⎟⎟
⎜⎜⎜ 5 10b 5b 10c 5 10b 5b 10c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − 0 − − − 0 − − ⎟
⎜⎜⎜ 5
⎜⎜⎜ 10b 5b 10c 5 10b 5b 10c ⎟⎟⎟⎟⎟
⎟
⎜⎜⎜ 1
⎜⎜⎜ 1 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟⎟
− − ⎟
⎜⎜⎜ 5
⎜⎜⎜ 5b 6a 10b 15c 5 5b 6a 10b 15c ⎟⎟⎟⎟⎟
⎟
⎜⎝⎜ 1 1 1 1 1 1 1 1 1 1 ⎟⎟⎟⎠
− − − − −
5 5b 6a 10b 15c 5 5b 6a 10b 15c
(3.163)
Some useful properties of the integer slant HT matrix are given below.
Properties:
(a) The slant HT matrix S 2N is an orthogonal matrix only if N is a power of two.
(b) If S N is sequential, then S 2N is also a sequential integer slant HT matrix [see
Eq. (3.156)].
Proof: Let Ri and R1i be i’th rows of S N and S 2N , respectively, and let ui, j be
an i’th and j’th element of S N , i, j = 0, 1, . . . , N − 1. The top half of S 2N ,
R1i i = 0, 1, . . . , N − 1, is obtained from (1, 1) ⊗ ui, j , which does not alter the
sequential number of the rows.
Thus, the sequential number of R1i is equal to the sequential number of Ri ,
i = 0, 1, . . . , N − 1. The bottom half of S 2N , R1i , i = N, N + 1, . . . , 2N − 1, is obtained
from (1, −1) ⊗ ui, j , and (−1, 1) ⊗ ui, j . This causes the sequential number of each row
to increase by N. Thus, the sequential number of each R1i , i = N, N + 1, . . . , 2N − 1,
is equal to the sequential number of its corresponding Ri , i = 0, 1, . . . , N − 1 plus
N. This implies that the sequential number of R1i i = 0, 1, . . . , 2N − 1 grows with
its index and S 2N is sequential, as can be seen from the examples given above.
(c) One can construct the same size slant-transform matrix in different ways.
Indeed, the slant-transform matrix of order N = 16 can be obtained by two
ways using Lemma 3.7.1 with initial matrix [PS ]4 (a, b) [see Eq. (3.143)]
or using Lemma 3.7.1 once with the initial matrix [PS ]8 (a, b, c, d, e, f ) [see
Eq. (3.144)]. It shows that we can construct an integer slant transform of
order 2n .
(d) The integer slant matrices [PS ]4 (a, b) and [PS ]−1 4 (a, b) = Q4 (a, b) can be
factored as
⎛ ⎞⎛ ⎞
⎜⎜⎜1 1 0 0⎟⎟⎟ ⎜⎜⎜1 0 0 1⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜0 0 b a⎟⎟⎟ ⎜⎜⎜0 1 1 0⎟⎟⎟⎟⎟
[PS ]4 (a, b) = S 2 S 1 = ⎜⎜⎜⎜ ,
⎜⎜⎜1 −1 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜0 1 −1 0⎟⎟⎟⎟⎟
⎝ ⎠⎝ ⎠
0 0 −a b 1 0 0 −1
⎛ ⎞⎛ ⎞ (3.164)
⎜⎜⎜1 0 1 0⎟⎟⎟ ⎜⎜⎜c 0 c 0⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟
⎜0 1 0 1⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜c 0 −c 0⎟⎟⎟⎟⎟
Q4 (a, b) = Q2 Q1 = ⎜⎜⎜⎜ ⎟⎜ ⎟,
⎜⎜⎜⎝0 1 0 0⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝0 a 0 b⎟⎟⎟⎟⎠
1 0 −1 −1 0 b 0 −a
where
c = (a2 + b2 )/2. (3.165)
For N = 4, we have
* +
S 8 S 8T = 2 I4 ⊕ 2(a2 + b2 ) . (3.168)
We can also check that the inverse matrix of S 4 (a, b) has the following form:
⎛ ⎞
⎜⎜⎜c a c b⎟⎟⎟
⎜ ⎟
1 ⎜⎜⎜⎜c b −c −a⎟⎟⎟⎟ a2 + b2
Q4 (a, b) = ⎜⎜⎜ ⎟⎟⎟ , c = , (3.169)
4c ⎜⎜⎜c −b −c a⎟⎟⎟ 2
⎝ ⎠
c −a c −b
i.e., S 4 (a, b)Q4 (a, b) = Q4 (a, b)S 4 (a, b) = I4 , and if parameters a and b are both
even or odd, the matrix in Eq. (3.169) is an integer matrix without granting a
coefficient.
One can verify that the following matrices are mutually inverse matrices of
order 8:
S 8 (a, b) = [H2 ⊗ A1 , H1 ⊗ A2 , H2 ⊗ A3 , H1 ⊗ A4 ] ,
1 (3.170)
Q8 (a, b) = [H2 ⊗ Q1 , H1 ⊗ Q2 , H2 ⊗ Q3 , H1 ⊗ Q4 ] ,
4c
where Ai and Qi are the i’th column and row of the matrices S 4 (a, b) and Q4 (a, b),
respectively.
where
⎛ ⎞
⎜⎜⎜1 0 O0 0 0 ⎟⎟⎟ O0
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜0 bn O0 an 0 O0 ⎟⎟⎟
⎜⎜⎜ 0 ⎟⎟
⎜O O0 I2n−1 −2 O0 O0 O2n−1 −2 ⎟⎟⎟⎟
M2n = ⎜⎜⎜⎜⎜ ⎟⎟⎟ , (3.172)
⎜⎜⎜⎜0 0 O0 0 1 O0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜0 an O0 −bn 0 O0 ⎟⎟⎟
⎜⎜⎝ ⎟⎟⎠
O0 O0 O2n−1 −2 O0 O0 I2n−1 −2
where Om denotes a zero matrix of order m and M2 = I2 . One can show that a slant
matrix of order 2n can be factored as
where
It is easy to prove that the fast algorithm based on decomposition in Eq. (3.173)
requires C + (2n ) addition and C × (2n ) multiplication operations,
We see that the integer slant matrices in Eqs. (3.143) and (3.169) can be factored
as
⎛ ⎞⎛ ⎞
⎜⎜⎜1 1 0 0⎟⎟⎟ ⎜⎜⎜1 0 0 1⎟⎟⎟
⎜⎜⎜⎜ ⎟⎜ ⎟
a⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜0 1 1 0⎟⎟⎟⎟⎟
S 4 (a, b) = S 2 S 1 = ⎜⎜⎜⎜⎜
0 0 b
⎟⎜ ⎟. (3.176)
⎜⎜⎜1 −1 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜0 1 −1 0⎟⎟⎟⎟⎟
⎝ ⎠⎝ ⎠
0 0 −a b 1 0 0 −1
⎛ ⎞⎛ ⎞
⎜⎜⎜1 0 1 0⎟⎟⎟ ⎜⎜⎜c 0 c 0⎟⎟⎟
⎜⎜⎜⎜0 ⎟⎜
1 0 1⎟⎟⎟⎟ ⎜⎜⎜⎜c
⎟
0 −c 0⎟⎟⎟⎟
Q4 (a, b) = Q2 Q1 = ⎜⎜⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟ . (3.177)
⎜⎜⎜0 1 0 −1⎟⎟⎟⎟ ⎜⎜⎜⎜0 a 0 b⎟⎟⎟⎟
⎝ ⎠⎝ ⎠
1 0 −1 0 0 b 0 −a
Now, using the above-given representation of matrices S 4 (a, b), Q4 (a, b), and the
formula in Eq. (3.170), we can find the following respective complexities:
• 2n+1 additions and 2n multiplications for forward transform.
• 2n+1 additions and 2n+1 multiplications for inverse transform.
⎜⎜⎜1 1 −2 −2 1 1⎟⎟⎟⎟⎟
·
⎜⎜⎜ ⎟⎟⎟ /
2
·
1
⎜
S 5 = ⎜⎜⎜ 1 0 −2 0 1⎟⎟⎟⎟ ·
5
S 6 = ⎜⎜⎜ ⎟⎟ 2
(3.179)
⎜⎜⎜
⎜⎜⎜−1 2 0 −2 1⎟⎟⎟⎟⎟
⎟ 6
⎜⎜⎜⎜1 −1 −1 1 1 −1⎟⎟⎟⎟⎟ /
1
⎜⎜⎜ ⎟
⎜⎜⎜2 −2 0 0 −2 2⎟⎟⎟⎟⎟
· 3
⎜⎜⎝ ⎟⎟⎠ 2 ·
8
2 −3 2 −3 2 ⎜⎜⎜ ⎟
⎝1 −1 2 −2 1 −1⎟⎟⎠
1 /
· 1
6 ·
2
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟
⎟⎟⎟
⎜⎜⎜⎜4 2 2 0 0 −2 −2 −4⎟⎟⎟⎟
/
⎜⎜⎜ ⎟⎟⎟ ·
1
⎜⎜⎜ 6
2⎟⎟⎟⎟⎟
/
⎜⎜⎜2 1 −1 −2 −2 −1 1 2
⎜⎜⎜ ⎟⎟⎟ ·
⎜⎜⎜ 5
−2⎟⎟⎟⎟⎟
/
⎜⎜⎜2 0 −4 −2 2 4 0 ·
1
S 8 = ⎜⎜⎜⎜ ⎟⎟ (3.180)
1⎟⎟⎟⎟⎟
6
⎜⎜⎜⎜1 −1 −1 1 1 −1 −1 /
⎜⎜⎜ ⎟⎟
−2⎟⎟⎟⎟
1
⎜⎜⎜2 −4 0 2 −2 0 4 ·
⎜⎜⎜ ⎟⎟⎟ /
6
⎜⎜⎜1 ⎟
1⎟⎟⎟⎟
2
⎜⎜⎜ −2 2 −1 −1 2 −2 ·
⎜⎜⎝ ⎟⎟⎟ /
5
⎟
0⎠
1
0 −2 2 −4 4 −2 2 ·
6
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 1⎟⎟
⎟⎟⎟
⎜⎜⎜⎜ /
⎜⎜⎜4 3 2 1 0 −1 −2 −3 −4⎟⎟⎟⎟ ·
3
⎜⎜⎜ ⎟⎟⎟ 20
⎜⎜⎜1 ⎟ /
⎜⎜⎜ 1 1 −2 −2 −2 1 1 1⎟⎟⎟⎟ ·
1
⎜⎜⎜ ⎟⎟⎟ /
2
⎟
⎜⎜⎜1 0 −1 −2 0 2 1 0 −1⎟⎟⎟⎟ ·
3
⎜⎜⎜ ⎟⎟⎟ 4
⎜ 3⎟⎟⎟⎟
S 9 = ⎜⎜⎜⎜3 0 −3 0 −3 1
0 0 0 · (3.181)
⎜⎜⎜ ⎟⎟⎟ 2
/
⎜⎜⎜2
⎜⎜⎜ −1 −4 3 0 −3 4 1 −2⎟⎟⎟⎟⎟ ·
3
⎜⎜⎜ ⎟⎟⎟ /
20
−1⎟⎟⎟⎟⎟
3
⎜⎜⎜1 −2 1 0 0 0 −1 2 ·
⎜⎜⎜ ⎟⎟⎟ /
4
⎜⎜⎜ ⎟
1⎟⎟⎟⎟
1
⎜⎜⎜1 −2 1 1 −2 1 1 −2 ·
2
⎜⎝ ⎟⎟
1⎠
1
1 −2 1 −2 4 −2 1 −2 ·
2
⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
−13⎟⎟⎟⎟⎟
/
⎜⎜⎜13 9 15 11 5 1 7 3 −3 −7 −1 −5 −11 −15 −9 1
⎜⎜⎜ ⎟⎟ ·
⎜⎜⎜ 1 −1 −1 −1 −1 −1 −1 −1 −1 1 ⎟⎟⎟⎟⎟
85
⎜⎜⎜ 1 1 1 1 1 1
⎟⎟
⎜⎜⎜ /
⎜⎜⎜ 1 1 1 1 −3 −3 −3 −3 3 3 3 3 −1 −1 −1 −1 ⎟⎟⎟⎟⎟ ·
1
⎜⎜⎜ ⎟⎟ 5
⎜⎜⎜ 3
⎜⎜⎜ 1 −1 −3 −9 −3 3 9 9 3 −3 −9 −3 −1 1 3 ⎟⎟⎟⎟⎟ ·
1
⎜⎜⎜ ⎟⎟⎟ 5
/
⎜⎜⎜ 3 1 −1 −3 −3 −1 1 3 −3 −1 1 3 3 1 −1 −3 ⎟⎟⎟⎟ ·
1
⎜⎜⎜ ⎟⎟⎟ 5
⎜⎜⎜ 9
⎜⎜⎜ 3 −3 −9 3 1 −1 −3 −3 −1 1 3 −9 −3 3 9 ⎟⎟⎟⎟⎟ ·
1
⎜⎜ ⎟⎟⎟ 5
−13⎟⎟⎟⎟
/
1 ⎜⎜⎜⎜⎜11 5 −3 −13 11 5 −3 −13 11 5 −3 −13 11 5 −3
⎟⎟⎟ ·
1
S 16 = ⎜⎜⎜ 85
.
4 ⎜⎜⎜ 1 1 −1 −1 1 1 −1 −1 1 1 −1 −1 1 1 −1 −1 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ /
⎜⎜⎜ 3 −3 ⎟⎟⎟⎟
1
−3 −3 3 1 −1 −1 1 −1 1 1 −1 −3 3 3 ·
⎜⎜⎜ ⎟⎟⎟ 5
⎜⎜⎜ 1 −1 −1 −1 1 −1 −1 1 −1 −1 −1 1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 1 1 1 1
⎟⎟
/
⎜⎜⎜
−1 ⎟⎟⎟⎟
1
·
⎜⎜⎜ 1 −1 −1 1 −3 3 3 −3 3 −3 −3 3 −1 1 1 5
⎜⎜⎜ ⎟⎟⎟
1 ⎟⎟⎟⎟⎟
1
⎜⎜⎜ 1 −3 3 −1 −3 9 −9 3 3 −9 9 −3 −1 3 −3 ·
⎜⎜⎜ ⎟⎟⎟
5
/
⎜⎜⎜
−1 ⎟⎟⎟⎟
1
⎜⎜⎜ 1 −3 3 −1 −1 3 −3 1 −1 3 −3 1 1 −3 3 ·
⎜⎜⎜ ⎟⎟⎟ 5
⎜⎜⎜ 3 3 ⎟⎟⎟⎟⎟
1
⎜⎜⎜ −9 9 −3 1 −3 3 −1 −1 3 −3 1 −3 9 −9 ·
⎟⎟⎟
5
/
⎜⎝
−1 ⎠
1
1 −3 3 −1 1 −3 3 −1 1 −3 3 −1 1 −3 3 ·
5
(3.182)
3.7.3 Iterative parametric slant Haar transform construction
The forward and inverse parametric slant Haar transforms of order 2n (n ≥ 1) with
parameters β22 , β23 , . . . , β2n are defined as29
X = S 2n (β22 , β23 , . . . , β2n )x,
(3.183)
x = S 2Tn (β22 , β23 , . . . , β2n )X,
where x is an input data vector of length 2n and S 2n is generated recursively as
A2 ⊗ S 2,2n−1
S 2n = S 2n (β22 , β23 , . . . , β2n ) = Q2n , (3.184)
I2 ⊗ S 2n −2,2n−1
where S 2,2n−1 is a matrix of the dimension 2 × 2n−1 comprising the first two rows of
S 2n−1 , and S 2n−1 −2,2n−1 is a matrix of the dimension 2n−1 − 2 × 2n−1 comprising the
third to the 2n−1 rows of S 2n−1 , ⊗ denotes the operator of the Kronecker product, and
1 1 1
A2 = √ . (3.185)
2 1 −1
S 4 is the 4-point parametric slant HT constructed in the previous chapter. Q2n is
the recursion kernel matrix defined as
⎡ ⎤
⎢⎢⎢1 0 0 0 · · · 0⎥
⎢⎢⎢0 b n a n 0 · · · 0⎥⎥⎥⎥⎥
⎢⎢⎢ 2 2 ⎥⎥
⎢⎢⎢⎢0 a2n −b2n 0 · · · 0⎥⎥⎥⎥⎥
Q2n = ⎢⎢⎢⎢0 0 0 1 · · · 0⎥⎥⎥⎥⎥ , (3.186)
⎢⎢⎢ ⎥
⎥
⎢⎢⎢.. .. .. . .⎥
⎢⎢⎣. . . 0 . . .. ⎥⎥⎥⎥
⎦
0 0 0 0 ··· 1
where
n−2 n−2
−22 ≤ β2n ≤ 22 , n ≥ 3. (3.188)
R1 RT2 = (A2 ⊗ S 2,2n−1 )(I2 ⊗ S 2n−1 −2,2n−1 )T = A2 I2T ⊗ S 2,2n−1 S 2Tn−1 −2,2n−1
= A2 ⊗ O2,2n−1 −2 = O4,2n −4 , (3.190)
R2 RT1 = (I2 ⊗ S 2n−1 −2,2n−1 )(A2 ⊗ S 2,2n−1 ) = T
I2 AT2 ⊗ T
S 2n−1 −2,2n−1 S 2,2 n−1
but
⎡ .. ⎤
⎡ ⎤ ⎢⎢⎢ R RT . R1 RT2 ⎥⎥⎥⎥⎥
⎢⎢⎢R1 ⎥⎥⎥ 1 2 ⎢⎢⎢ 1 1 ⎥⎥⎥ T
⎢
S 2n S 2Tn = Q2n ⎢⎢⎢⎢⎣−−⎥⎥⎥⎥⎦ RT ... RT QT2n = Q2n ⎢⎢⎢⎢− − − ..
. − − −⎥⎥⎥⎥ Q2n
1 2 ⎢⎢⎢ ⎥⎥⎦
R2 ⎣ ..
R2 RT1 . R2 RT2
⎡ .. ⎤
⎢⎢⎢⎢ I4 . O4,2n −4 ⎥⎥⎥⎥⎥
⎢⎢⎢ ⎥⎥
= Q2n ⎢⎢⎢⎢− − −− ... − − −−⎥⎥⎥⎥ QT2n (3.193)
⎢⎢⎢ ⎥⎥⎥
⎣ . ⎦
O2n −4,4 .. I2n −4,4
Figure 3.16 Parametric slant Haar transform basis vectors for (2n = 8): (a) classical case
(β4 = 1, β8 = 1), (b) multiple-β case (β4 = 4, β8 = 16), (c) constant-β case (β4 = 1.7, β8 = 1.7),
and (d) multiple-β case (β4 = 1.7, β8 = 7).
The parametric slant-Haar transform falls into one of at least three different
categories according to β2n values:
• For β4 = β8 = · · · = β2n = β = 1, we obtain the classical slant Haar transform.20
• For β4 = β8 = · · · = β2n = β for β ≤ |4|, we refer to this as the constant-β slant
Haar transform.
n−2 n−2
• For β4 β8 · · · β2n for −22 ≤ β2n ≤ 22 , n = 2, 3, 4, . . ., we refer to this
as the multiple-β slant Haar transform; some of the β2n values can be equal, but
not all of them.
⎜⎜⎜ ⎟⎟⎟ 5
⎟
1 ⎜⎜⎜⎜7 −1 −9 −17 17 9 1 −7⎟⎟⎟⎟ ·√
1
S Classical = √ ⎜⎜⎜ ⎟⎟⎟ 105 . (3.194)
8 ⎜⎜⎜⎜1 −1 −1 1 0 0 0 0⎟⎟⎟⎟ √
· 2
⎜⎜⎜⎜0 0 0 0 ⎟⎟
⎜⎜⎜ 1 −1 −1 1⎟⎟⎟⎟ √
· 2
⎜⎜⎜ ⎟⎟⎟ /
⎟
⎜⎜⎜1 −3 3 −1 0 0 0 0⎟⎟⎟⎟ ·
2
⎜⎜⎜ ⎟⎟⎟ /
5
⎜⎝ ⎟
−3 −1⎠
2
0 0 0 0 1 3 ·
5
(b) The multiple-β case (β4 = 4, β8 = 16). Note that this is a special case of Haar
transform:
⎛ ⎞
⎜⎜⎜ ⎟
⎜⎜⎜1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜1 ⎟
⎜⎜⎜ 1 1 1 −1 −1 −1 −1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎟
⎜⎜⎜1 −1 1 −1 1 −1 1 −1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜
⎜ ⎟
1 ⎜1 −1 1 −1 −1 −1 1 ⎟⎟⎟⎟
= √ ⎜⎜⎜⎜⎜ √
1
S Multiple
⎜ √ √ √ ⎟⎟⎟. (3.195)
8 ⎜⎜ 2
⎜⎜⎜ 2 − 2 − 2 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜⎜ √ √ √ √ ⎟⎟⎟
⎜⎜⎜ 2 − 2 − 2 2 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ √ √ √ √ ⎟⎟⎟
⎜⎜⎜0 − 2 − 2⎟⎟⎟⎟⎟
⎜⎜⎜ 0 0 0 2 2
⎜⎝ √ √ √ √ ⎟⎟⎠
0 0 0 0 2 − 2 − 2 2
References
1. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in
Mathematics, 1168, Springer-Verlag, Berlin (1985).
2. R. Stasinski and J. Konrad, “A new class of fast shape-adaptive orthogonal
transforms and their application to region-based image compression,” IEEE
Trans. Circuits Syst. Video Technol. 9 (1), 16–34 (1999).
3. M. Barazande-Pour and J.W. Mark, “Adaptive MHDCT coding of images,”
in Proc. IEEE Image Proces. Conf., ICIP-94 1, 90–94 (Nov. 1994).
4. G.R. Reddy and P. Satyanarayana, “Interpolation algorithm using Walsh–
Hadamard and discrete Fourier/Hartley transforms,” in Circuits and Systems
1990, Proc.33rd Midwest Symp. 1, 545–547 (Aug. 1990).
5. Ch.-Fat Chan, “Efficient implementation of a class of isotropic quadratic
filters by using Walsh–Hadamard transform,” in Proc. of IEEE Int. Symp.
on Circuits and Systems, Hong Kong, 2601–2604 (June 9–12, 1997).
6. B. K. Harms, J. B. Park, and S. A. Dyer, “Optimal measurement techniques
utilizing Hadamard transforms,” IEEE Trans. Instrum. Meas. 43 (3), 397–402
(1994).
7. C. Anshi, Li Di and Z. Renzhong, “A research on fast Hadamard transform
(FHT) digital systems,” in Proc. of IEEE TENCON 93, Beijing, 541–546
(1993).
8. H.G. Sarukhanyan, “Hadamard matrices: construction methods and
applications,” in Proc. of Workshop on Transforms and Filter Banks,
Tampere, Finland, 95–130 (Feb. 21–27, 1998).
9. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal
Processing, Springer-Verlag, New York (1975).
10. S.S. Agaian and H.G. Sarukhanyan, “Hadamard matrices representation by
(−1, +1)-vectors,” in Proc. Int. Conf. Dedicated to Hadamard Problem’s
Centenary, Australia, (1993).
155
then
⎛ ⎞
⎜⎜⎜ An Bn Cn Dn ⎟⎟⎟
⎜⎜⎜−B ⎟
⎜⎜⎜ n An −Dn Cn ⎟⎟⎟⎟⎟ (4.2)
⎜⎜⎜⎝−Cn Dn An −Bn ⎟⎟⎠⎟
−Dn −Cn Bn An
where U is the (0, 1) matrix of order m with first row (0 1 0 · · · 0), second row
obtained by one-bit cyclic shifts, third row obtained by 2-bit cyclic shifts, and so
on. For m = 5, we have the following matrices:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜0 1 0 0 0⎟⎟⎟ ⎜⎜⎜0 0 1 0 0⎟⎟⎟
⎜⎜⎜0 0 1 0 0⎟⎟⎟ ⎜⎜⎜0 0 0 1 0⎟⎟⎟
⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟
U = ⎜⎜⎜⎜⎜0 0 0 1 0⎟⎟⎟⎟⎟ , U 2 = ⎜⎜⎜⎜⎜0 0 0 0 1⎟⎟⎟⎟⎟ ,
⎜⎜⎜0 0 0 0 1⎟⎟⎟ ⎜⎜⎜1 0 0 0 0⎟⎟⎟
⎜⎝ ⎟⎠ ⎜⎝ ⎟⎠
1 0 0 0 0 0 1 0 0 0
⎛ ⎞ ⎛ ⎞ (4.4)
⎜⎜⎜0 0 0 1 0⎟⎟⎟ ⎜⎜⎜0 0 0 0 1⎟⎟⎟
⎜⎜⎜⎜0 0 0 0 1⎟⎟⎟⎟ ⎜⎜⎜1 0 0 0 0⎟⎟⎟
⎜⎜ ⎟⎟
⎜⎜⎜ ⎟⎟⎟
U = ⎜⎜⎜1 0 0 0 0⎟⎟⎟ , U = ⎜⎜⎜⎜⎜0 1 0 0 0⎟⎟⎟⎟⎟ .
3 4
⎜⎜⎜0 1 0 0 0⎟⎟⎟ ⎜⎜⎜0 0 1 0 0⎟⎟⎟
⎜⎝ ⎟⎠ ⎜⎝ ⎟⎠
0 0 1 0 0 0 0 0 1 0
U 0 = Im , U p U q = U p+q , U m = Im . (4.5)
Therefore, the cyclic matrix of order n with first row (a0 a1 a2 · · · an−1 ) has the
form
⎛ ⎞
⎜⎜⎜a0 a1 · · · an−1 ⎟⎟⎟
⎜⎜⎜⎜an−1 a0 · · · an−2 ⎟⎟⎟⎟⎟
C(a0 , a1 , . . . , an−1 ) = ⎜⎜⎜⎜⎜.. .. . . .. ⎟⎟⎟⎟ . (4.6)
⎜⎜⎜. . . . ⎟⎟⎟
⎝ ⎠
a1 a2 · · · a0
In other words, each row of A is equal to the previous row rotated downward by
one element. Thus, a cyclic matrix of order n is specified (or generated) by its
first row and denoted by C(a0 , a1 , . . . , an−1 ). For example, starting with the vector
(a, b, c, d), we can form the 4 × 4 cyclic matrix
⎛ ⎞
⎜⎜⎜a b c d⎟⎟
⎟
⎜⎜⎜⎜d a b c ⎟⎟⎟⎟
⎜⎜⎜ ⎟.
b⎟⎟⎟⎟⎠
(4.7)
⎜⎜⎝c d a
b c d a
It can be shown that the multiplication of two cyclic matrices is also cyclic. This
can be proved by direct verification. For N = 4, we obtain the multiplication
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜a0 a1 a2 a3 ⎟⎟ ⎜⎜b0
⎟⎜
b1 b2 b3 ⎟⎟ ⎜⎜c0
⎟ ⎜
c1 c2 c3 ⎟⎟
⎟
⎜⎜⎜⎜a3 a0 a1 a2 ⎟⎟⎟⎟ ⎜⎜⎜⎜b3 b0 b1 b2 ⎟⎟⎟⎟ ⎜⎜⎜⎜c3 c0 c1 c2 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟=⎜ ⎟.
a1 ⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝b2 b1 ⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝c2 c1 ⎟⎟⎟⎟⎠
(4.8)
⎜⎜⎝a2 a3 a0 b3 b0 c3 c0
a1 a2 a3 a0 b1 b2 b3 b0 c1 c2 c3 c0
If A, B, C, D are cyclic symmetric (+1, −1) matrices of order n, then the first
relation of Eq. (4.1) is automatically satisfied, and the second condition becomes
A2 + B2 + C 2 + D2 = 4nIn . (4.9)
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ − − − −⎟⎟
⎟⎟⎟ ⎜⎜⎜+ + − − +⎟⎟
⎟⎟⎟ ⎜⎜⎜+ − + + −⎟⎟
⎟
⎜⎜⎜⎜− + − − −⎟⎟⎟ ⎜⎜⎜+
⎜⎜⎜ + + − −⎟⎟⎟ ⎜⎜⎜− +
⎜⎜⎜ − + +⎟⎟⎟⎟⎟
⎜⎜ ⎟ ⎟ ⎟
A5 = B5 = ⎜⎜⎜⎜− − + − −⎟⎟⎟⎟ , C5 = ⎜⎜⎜⎜− + + + −⎟⎟⎟⎟ , D5 = ⎜⎜⎜⎜+ − + − +⎟⎟⎟⎟ . (4.13)
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎝− − − + −⎟⎟ ⎜⎜⎝− − + + +⎟⎟ ⎜⎜⎝+ + − + −⎟⎟⎟⎟
⎠ ⎠ ⎠
− − − − + + − − + + − + + − +
(4) The first rows and Williamson matrices of order 7 are given as follows:
⎛ ⎞
⎜⎜⎜+ − − − − + − − − − + + − − + + − + + −⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − − − − + − − − + + + − − − + − + +⎟⎟⎟⎟
⎜⎜⎜− ⎟
⎜⎜⎜ − + − − − − + − − − + + + − + − + − +⎟⎟⎟⎟
⎟
⎜⎜⎜−
⎜⎜⎜ − − + − − − − + − − − + + + + + − + −⎟⎟⎟⎟⎟
⎟
⎜⎜⎜⎜− − − − + − − − − + + − − + + − + + − +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜−
⎜⎜⎜ + + + + + − − − − − + − − + + + − − +⎟⎟⎟⎟⎟
⎟
⎜⎜⎜+ − + + + − + − − − + − + − − + + + − −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + − + + − − + − − − + − + − − + + + −⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜ + + − + − − − + − − − + − + − − + + +⎟⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜ + + + − − − − − + + − − + − + − − + +⎟⎟⎟⎟
H20 = ⎜⎜⎜⎜ ⎟⎟⎟. (4.17)
⎟
⎜⎜⎜⎜− − + + − + − + + − + − − − − − + + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− − − + + − + − + + − + − − − + − + + +⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜ − − − + + − + − + − − + − − + + − + +⎟⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜ + − − − + + − + − − − − + − + + + − +⎟⎟⎟⎟
⎟
⎜⎜⎜− + + − − − + + − + − − − − + + + + + −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − − + − − + + − + − − − − + − − − −⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜ − + − − − − − + + − + − − − − + − − −⎟⎟⎟⎟
⎟
⎜⎜⎜−
⎜⎜⎜ + − + − + − − − + − − + − − − − + − −⎟⎟⎟⎟⎟
⎟
⎜⎜⎜− − + − + + + − − − − − − + − − − − + −⎟⎟⎟⎟
⎝ ⎠
+ − − + − − + + − − − − − − + − − − − +
1 n, where n ≤ 100 except 35, 39, 47, 53, 67, 73, 83, 89, and 949
2 3a , where a is a natural number10
3 (p + 1)pr /2, where is a prime power, and r is a natural number11,12
4 n(4n + 3), n(4n − 1), where n ∈ {1, 3, 5, . . . , 25}13
5 (p + 1)(p + 2), where p ≡ 1(mod 4) is a prime number and p + 3 is an order of symmetric Hadamard
matrix9
6 2n(4n + 7), where 4n + 1 is a prime number and n ∈ {1, 3, 5, . . . , 25}9
7 2.39, 2.103, 2.303, 2.333, 2.669, 2.695, 2.160911
8 2n, where n is an order of Williamson-type matrices9
Ai = Ai−1 ⊗ X + Bi−1 ⊗ Y,
Bi = Bi−1 ⊗ X − Ai−1 ⊗ Y,
(4.22)
Ci = Ci−1 ⊗ X + Di−1 ⊗ Y,
Di = Di−1 ⊗ X − Ci−1 ⊗ Y,
Taking into account the conditions of Eqs. (4.1) and (4.19) and summarizing the
last expressions, we find that
Similarly, we obtain
Now, summarizing the last two equations and taking into account that A0 , B0 ,
C0 , D0 are Williamson matrices of order n, and X and Y satisfy the conditions of
Eq. (4.19), we have
Let us now prove equality of A1 BT1 = B1 AT1 . From Eq. (4.22), we have
where
P, Q ∈ {A1 , B1 , C1 , D1 } . (4.29)
prove that Ai+1 , Bi+1 , Ci+1 , and Di+1 also are Williamson matrices. Check only the
second condition of Eq. (4.1). By computing
35, 37, 39, 43, 49, 51, 55, 63, 69, 77, 81, 85, 87, 93, 95, 99, 105, 111, 115, 117, 119, 121, 125, 129, 133, 135,
143, 145, 147, 155, 161, 165, 169, 171, I75, 185, 187, 189, 195, 203, 207, 209, 215, 217, 221, 225, 231, 243,
247, 253, 255, 259, 261, 273, 275, 279, 285, 289, 297, 299, 301, 315, 319, 323, 333, 335, 341, 345, 351, 357,
361, 363, 377, 387, 391, 403, 405, 407, 425, 429, 437, 441, 455, 459, 465, 473, 475, 481, 483, 495, 513, 525,
527, 529, 551, 559, 561, 567, 575, 589, 609, 621, 625, 627, 637, 645, 651, 667, 675, 693, 713, 725, 729, 731,
751, 759, 775, 777, 783, 817, 819, 825, 837, 851, 891, 899, 903, 925, 957, 961, 989, 1023, 1073, 1075, 1081,
1089, 1147, 1161, 1221, 1247, 1333, 1365, 1419, 1547, 1591, 1729, 1849, 2013
⎛ ⎞
⎜⎜⎜+ + + + − − + + + + + − + − − + − +⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + + + − − + + + + + − + − − + −⎟⎟⎟⎟
⎟
⎜⎜⎜+
⎜⎜⎜ + + + + + − − + − + + + − + − − +⎟⎟⎟⎟
⎟
⎜⎜⎜+
⎜⎜⎜ + + + + + + − − + − + + + − + − −⎟⎟⎟⎟⎟
⎟
⎜⎜⎜− + + + + + + + − − + − + + + − + −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− − + + + + + + + − − + − + + + − +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜+ − − + + + + + + + − − + − + + + −⎟⎟⎟⎟
⎟
⎜⎜⎜+
⎜⎜⎜ + − − + + + + + − + − − + − + + +⎟⎟⎟⎟
⎟
⎜⎜⎜+ + + − − + + + + + − + − − + − + +⎟⎟⎟⎟⎟
A18 = B18 = ⎜⎜⎜⎜⎜ ⎟⎟⎟ , (4.47a)
⎜⎜⎜+ + − + − − + − + − + + + − − + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + − + − − + − + − + + + − − + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + + + − + − − + + + − + + + − − +⎟⎟⎟⎟
⎟
⎜⎜⎜+
⎜⎜⎜ − + + + − + − − + + + − + + + − −⎟⎟⎟⎟⎟
⎟
⎜⎜⎜−
⎜⎜⎜ + − + + + − + − − + + + − + + + −⎟⎟⎟⎟
⎟
⎜⎜⎜− − + − + + + − + − − + + + − + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ − − + − + + + − + − − + + + − + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝− + − − + − + + + + + − − + + + − +⎟⎟⎟⎟
⎠
+ − + − − + − + + + + + − − + + + −
C18 = D18
⎛ ⎞
⎜⎜⎜+ − − − + + − − − + + − + − − + − +⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − − − + + − − + + + − + − − + −⎟⎟⎟⎟
⎜⎜⎜− − ⎟
⎜⎜⎜ + − − − + + − − + + + − + − − +⎟⎟⎟⎟⎟
⎜⎜⎜− − ⎟
⎜⎜⎜ − + − − − + + + − + + + − + − −⎟⎟⎟⎟
⎜⎜⎜+ − ⎟⎟
⎜⎜⎜ − − + − − − + − + − + + + − + −⎟⎟⎟⎟
⎟
⎜⎜⎜+ +
⎜⎜⎜ − − − + − − − − − + − + + + − +⎟⎟⎟⎟⎟
⎟
⎜⎜⎜− + + − − − + − − + − − + − + + + −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− − + + − − − + − − + − − + − + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜− − − + + − − − + + − + − − + − + +⎟⎟⎟⎟⎟
= ⎜⎜⎜⎜⎜ ⎟⎟⎟ . (4.47b)
⎜⎜⎜+ +
⎜⎜⎜ − + − − + − + − − − − + + − − −⎟⎟⎟⎟⎟
⎟
⎜⎜⎜+ + + − + − − + − − − − − − + + − −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− + + + − + − − + − − − − − − + + −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ − + + + − + − − − − − − − − − + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − + + + − + − + − − − − − − − +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− − + − + + + − + + + − − − − − − −⎟⎟⎟⎟
⎜⎜⎜+ − ⎟
⎜⎜⎜ − + − + + + − − + + − − − − − −⎟⎟⎟⎟⎟
⎜⎜⎜− + ⎟
⎜⎝ − − + − + + + − − + + − − − − −⎟⎟⎟⎟
⎠
+ − + − − + − + + − − − + + − − − −
Ai ATj = A j ATi , i, j = 1, 2, . . . , 8,
8 (4.49)
Ai ATi = 8nIn .
i=1
A1 + A2 A1 − A2
X1 = P1 ⊗ − P2 ⊗ ,
2 2
A 1 − A2 A 1 + A2
X2 = P1 ⊗ + P2 ⊗ ,
2 2
A1 + A2 A1 − A2
X3 = P3 ⊗ − P4 ⊗ ,
2 2
A 1 − A2 A 1 + A2
X4 = P3 ⊗ + P4 ⊗ , (4.51)
2 2
A3 + A4 A 3 − A4
X5 = P1 ⊗ − P2 ⊗ ,
2 2
A3 − A4 A3 + A4
X6 = P1 ⊗ + P2 ⊗ ,
2 2
A3 − A4 A 3 + A4
X7 = P3 ⊗ − P4 ⊗ ,
2 2
A 3 + A4 A 3 − A4
X8 = P3 ⊗ + P4 ⊗ .
2 2
Below, we check that Xi , i = 1, 2, . . . , 8 are 8-Williamson matrices of order mn,
i.e., the conditions of Eq. (4.49) are satisfied. Check the first condition,
Xi X Tj = X j XiT , i, j = 1, 2, . . . , 8. (4.53)
Now we check the second condition of Eq. (4.49). With this purpose, we calculate
But, (A1 + A2 )(A1 + A2 )T + (A1 − A2 )(A1 − A2 )T = 2(A1 AT1 + A2 AT2 ). Thus, from
Eq. (4.55) we have
4 4
1
Xi XiT = Pi PTi ⊗ (A1 AT1 + A2 AT2 ). (4.56)
i=1
2 i=1
Summarizing both parts of equalities [Eqs. (4.56) and (4.57)], we find that
8 4 4
1
Xi XiT = Pi PTi ⊗ Ai ATi . (4.58)
i=1
2 i=1 i=1
Now, substituting the last expressions into Eq. (4.58), we conclude that
8
Xi XiT = 8mnImn . (4.60)
i=1
Step 2. Substitute the matrices Xi into the array in Eq. (4.50) to obtain a
Williamson–Hadamard matrix of order 24n:
⎛ ⎞
⎜⎜⎜ X1 X2 X3 X4 X5 X3 X3 X3 ⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜−X2 X1 X4 −X3 X3 −X5 −X3 X3 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜−X3 −X4 X1 X2 X3 X3 −X5 −X3 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜−X4 X3 −X2 X2 X3 −X3 X3 −X5 ⎟⎟⎟⎟
⎜⎜⎜−X −X −X −X ⎟. (4.63)
⎜⎜⎜ 5 3 3 3 X 1 X2 X3 X4 ⎟⎟⎟⎟⎟
⎜⎜⎜−X ⎟
⎜⎜⎜ 3 X 5 −X 3 X 3 −X 2 X1 −X4 X3 ⎟⎟⎟⎟⎟
⎜⎜⎜−X ⎟
⎜⎜⎝ 3 X3 X5 −X3 −X3 X4 X1 −X2 ⎟⎟⎟⎟⎟
⎠
−X3 −X3 X3 X5 −X4 −X3 X2 X 1
⎛ ⎞
⎜⎜⎜+ − − + + + + + +⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − + + + + + +⎟⎟⎟⎟
⎜⎜⎜− − + ⎟
⎜⎜⎜ + + + + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ + +
⎜⎜ + − − + + +⎟⎟⎟⎟⎟
⎟
X2 = ⎜⎜⎜⎜+ + + − + − + + +⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟
⎜⎜⎜+ + + − − + + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ + + + + + + − −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + + + + − + −⎟⎟⎟⎟
⎜⎝ ⎠
+ + + + + + − − +
⎛ ⎞
⎜⎜⎜+ − − + − − + − −⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − − + − − + −⎟⎟⎟⎟
⎜⎜⎜− − + ⎟
⎜⎜⎜ − − + − − +⎟⎟⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ − − + − − + − −⎟⎟⎟⎟⎟
⎜ ⎟
X4 = ⎜⎜⎜⎜− + − − + − − + −⎟⎟⎟⎟ ,
⎜⎜⎜⎜− − + − − + − − +⎟⎟⎟⎟⎟
⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜
⎜⎜⎜+ − −
⎜⎜⎜ + − − + − −⎟⎟⎟⎟⎟
⎟
⎜⎜⎜− + − − + − − + −⎟⎟⎟⎟
⎝ ⎠
− − + − − + − − +
⎛ ⎞
⎜⎜⎜+ + + + + + + + +⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + + + + + + +⎟⎟⎟⎟
⎜⎜⎜+ + + ⎟
⎜⎜⎜ + + + + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ + +
⎜⎜ + + + + + +⎟⎟⎟⎟⎟
⎟
X5 = ⎜⎜⎜⎜+ + + + + + + + +⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟
⎜⎜⎜+ + + + + + + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ + + + + + + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + + + + + + +⎟⎟⎟⎟
⎜⎝ ⎠
+ + + + + + + + +
⎛ ⎞
⎜⎜⎜+ − − − + + − + +⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − + − + + − +⎟⎟⎟⎟
⎜⎜⎜− ⎟
⎜⎜⎜ − + + + − + + −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜−
⎜⎜ + + + − − − + +⎟⎟⎟⎟⎟
⎟
X3 = X6 = X7 = X8 = ⎜⎜⎜⎜+ − + − + − + − +⎟⎟⎟⎟ . (4.64)
⎜⎜⎜⎜+ + − − − + + + −⎟⎟⎟⎟⎟
⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜
⎜⎜⎜−
⎜⎜⎜ + + − + + + − −⎟⎟⎟⎟⎟
⎟
⎜⎜⎜+ − + + − + − + −⎟⎟⎟⎟
⎝ ⎠
+ + − + + − − − +
From Theorem 4.2.2 and Corollary 4.1.3, we have the following.
3, 5, . . . , 39, 43, 45, 49, 51, 55, 57, 63, 65, 69, 75, 77, 81, 85, 87, 91, 93, 95, 99, 105, 111, 115, 117, 119, 121,
125, 129, 133, 135, 143, 145, 147, 153, 155, 161, 165, 169, 171, 175, 185, 187, 189, 195, 203, 207, 209, 215,
217, 221, 225, 231, 243, 247, 253, 255, 259, 261, 273, 275, 279, 285, 289, 297, 299, 301, 315, 319, 323, 325,
333, 341, 345, 351, 361, 375, 377, 387, 391, 399, 403, 405, 407, 425, 435, 437, 441, 455, 459, 473, 475, 481,
483, 493, 495, 513, 525, 527, 529, 551, 555, 559, 567, 575, 589, 609, 621, 625, 629, 637, 645, 651, 667, 675,
703, 713, 725, 729, 731, 775, 777, 783, 817, 819, 837, 841, 851, 899, 903, 925, 961, 989, 999, 1001, 1073
From Corollary 4.2.1 and Theorem 4.2.3, we conclude that there are eight
Williamson-type matrices of order 2mn, where
m ∈ W8 , n ∈ W ∪ L. (4.67)
Ai A j = J, i j, i, j = 1, 2, . . . , s,
ATi A j = A j ATi , i = 1, 2, . . . , s, (4.69)
s
(Ai ATi + ATi Ai ) = 2smIm .
i=1
where
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + −⎟⎟⎟ ⎜⎜⎜+ + −⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
B1 = ⎜⎜⎜⎜− + +⎟⎟⎟⎟⎟ , B2 = ⎜⎜⎜⎜+ + −⎟⎟⎟⎟⎟ ; (4.71)
⎜⎝ ⎟⎠ ⎜⎝ ⎟⎠
+ − + + + −
i.e.,
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + − + + − + + −⎟⎟
⎟ ⎜⎜⎜+ + − + + − + + −⎟⎟
⎟
⎜⎜⎜− + + − + + − + +⎟⎟⎟⎟⎟ ⎜⎜⎜+ + − + + − + + −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎜⎜⎜
⎜⎜⎜+
⎜⎜⎜ − + + − + + − +⎟⎟⎟⎟⎟ ⎜⎜⎜+
⎜⎜⎜ + − + + − + + −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜− + + + − + + + −⎟⎟⎟⎟ ⎜⎜⎜− + + − + + − + +⎟⎟⎟⎟
⎟ ⎟
A1 = ⎜⎜⎜⎜⎜+ − + + + − − + +⎟⎟⎟⎟ ,
⎟ A2 = ⎜⎜⎜⎜⎜− + + − + + − + +⎟⎟⎟⎟ .
⎟
⎜⎜⎜+ + − − + + + − +⎟⎟⎟⎟⎟ ⎜⎜⎜− + + − + + − + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎜⎜⎜
⎜⎜⎜+ − + + + − − + +⎟⎟⎟⎟ ⎜⎜⎜+ − + + − + + − +⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝+ + − − + + + − +⎟⎟⎟⎟ ⎜⎜⎝+ − + + − + + − +⎟⎟⎟⎟
⎠ ⎠
− + + + − + + + − + − + + − + + − +
(4.72)
n ∈ R1 = {5, 9, 13, 17, 25, 29, 37, 41, 49, 53, 61, 73, 81, 89, 97} ,
m ∈ R2
= {3, 7, 11, 19, 23, 27, 31, 43, 47, 59, 67, 71, 79, 83, 103, 107, 119,
127, 131, 139, 151, 163, 167, 179, 191} . (4.73)
or
⎛ k ⎞
⎜⎜⎜a11 B(k−1)m+1 ak12 B(k−1)m+2 ··· ak1m Bkm ⎟⎟⎟
⎜⎜⎜⎜ k ⎟⎟⎟
⎜⎜a21 B(k−1)m+2 ak22 B(k−1)m+3 ··· a2m B(k−1)m+1 ⎟⎟⎟⎟
k
Ck = ⎜⎜⎜⎜⎜ .. .. ..
⎟⎟⎟ ,
⎟⎟⎟ (4.75)
⎜⎜⎜ ..
⎜⎜⎝ . . . . ⎟⎟⎟
⎠
akm1 Bkm akm2 B(k−1)m+1 ··· akmm Bkm−1
Theorem 4.3.5: Let 8-Williamson matrices of order n and the regular 4n-sequ-
ence of matrices of order m exist. Then 8-Williamson matrices of order mn also
exist.
We can see that X1 X2T = X2 X1T . We can also show that Xi X Tj = X j XiT , for all i,
j = 1, 2, . . . , 8. Now, we will prove the second condition of Eq. (4.49). With this
purpose, we calculate the i’th and j’th element P(i, j) of the matrix 8i=1 Xi XiT :
n
P(i, j) = a1i,r a1j,r Qi+r−1 QTj+r−1 + a2i,r a2j,r Qn+i+r−1 QTn+ j+r−1
r=1
+ a3i,r a3j,r Qi+r−1 QTj+r−1 + a4i,r a4j,r Qn+i+r−1 QTn+ j+r−1
+ a5i,r a5j,r Q2n+i+r−1 QT2n+ j+r−1 + a6i,r a6j,r Q3n+i+r−1 QT3n+ j+r−1
+ a7i,r a7j,r Q2n+i+r−1 QT2n+ j+r−1 + a8i,r a8j,r Q3n+i+r−1 QT3n+ j+r−1 . (4.79)
From the conditions of Eqs. (4.49) and (4.69), and from the above relation, we
obtain
8 n
P(i, j) = Jm ati,r atj,r = 0, i j,
t=1 r=1
(4.80)
4n
P(i, i) = Q j QTj + QTj Q j = 8mnIm .
j=1
X = A ⊗ X1 + B ⊗ X2 + C ⊗ X3 + D ⊗ X4 ,
Y = −B ⊗ X1 + A ⊗ X2 − D ⊗ X3 + C ⊗ X4 ,
(4.84)
Z = −C ⊗ X1 + D ⊗ X2 + A ⊗ X3 − B ⊗ X4 ,
W = −D ⊗ X1 − C ⊗ X2 + B ⊗ X3 + A ⊗ X4
gives a Baumert–Hall array (for more detail, see forthcoming chapters). There are
infinite classes of T -matrices of orders 2a 10b 26c + 1, where a, b, c are nonnegative
integers.
where
n−1
XY(i, j) = Jm ai,k bk, j ,
k=0
n−1
(4.93)
Y X(i, j) = Jm bi,k ak, j , i, j = 1, 2, . . . , n − 1.
k=0
Hence, the i’th, j’th block elements of matrices XRY T and YRX T have the
following form:
n−1
XRY T (i, j) = ai,n−k−1 b j,k Q(n−i−1−k) QTn+(n− j+k) ,
k=0
n−1
(4.95)
YRX (i, j) =
T
bi,n−k−1 a j,k Qn+(2n−i−1−k) QT(n− j+k) .
k=0
n−1
XRY T (i, j) = Jm ai,n−k−1 b j,k ,
k=0
n−1
(4.96)
YRX (i, j) = Jm
T
bi,n−k−1 a j,k .
k=0
Thus, the second condition of Eq. (4.87) is satisfied, which means that we have
Now we are going to prove the third condition of Eq. (4.87). We can see that the
i’th block rows of matrices X, Y, Z, W, have the following forms, respectively:
ai,1 Q(n−i) ai,2 Q(n−i+1) · · · ai,n Q(2n−i−1) ;
bi,1 Qn+(n−i) bi,2 Qn+(n−i+1) · · · bi,n Qn+(2n−i−1) ;
(4.98)
ci,1 QT(n−i) ci,2 QT(n−i+1) · · · ci,n QT(2n−i−1) ;
di,1 QTn+(n−i) di,2 QTn+(n−i+1) · · · di,n QTn+(2n−i−1) .
m ∈ W81 = {23, 71, 103, 119, 151, 167, 263, 311, 359, 423, 439}. (4.102)
References
1. J. Williamson, “Hadamard determinant theorem and sum of four squares,”
Duke Math. J. 11, 65–81 (1944).
2. J. Williamson, “Note on Hadamard’s determinant theorem,” Bull. Am. Math.
Soc. (53), 608–613 (1947).
3. W. D. Wallis, A. P. Street, and J. S. Wallis, Combinatorics: Room Squares,
Sum-Free Sets, Hadamard Matrices, Lecture Notes in Mathematics, 292,
Springer, Berlin/Heidelberg (1972) 273–445.
4. J. S. Wallis, “Some matrices of Williamson type,” Utilitas Math. 4, 147–154
(1973).
5. J. M. Geothals and J. J. Seidel, “Orthogonal matrices with zero diagonal,”
Can. J. Math. 19, 1001–1010 (1967).
6. J. M. Geothals and J. J. Seidel, “A skew Hadamard matrix of order 36,”
J. Austral. Math. Soc. 11, 343–344 (1970).
7. M. Hall Jr., Combinatorial Theory, Blaisdell Publishing Co., Waltham, MA
(1970).
8. R. J. Turyn, “An infinitive class of Williamson matrices,” J. Comb. Theory,
Ser. A 12, 319–322 (1972).
9. J. S. Wallis, “On Hadamard matrices,” J. Comb. Theory, Ser. A 18, 149–164
(1975).
10. A. G. Mukhopodhyay, “Some infinitive classes of Hadamard matrices,”
J. Comb. Theory, Ser. A 25, 128–141 (1978).
11. J. S. Wallis, “Williamson matrices of even order,” in Combinatorial Mathe-
matics, Proc. 2nd Austral. Conf., Lecture Notes in Mathematics, 403 132–142
Springer, Berlin/Heidelberg (1974).
12. E. Spence, “An infinite family of Williamson matrices,” J. Austral. Math. Soc.,
Ser. A 24, 252–256 (1977).
13. J. S. Wallis, “Construction of Williamson type matrices,” Lin. Multilin.
Algebra 3, 197–207 (1975).
14. S. S. Agaian and H. G. Sarukhanian, “Recurrent formulae of the construction
Williamson type matrices,” Math. Notes 30 (4), 603–617 (1981).
15. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in
Mathematics, 1168, Springer, Berlin/Heidelberg (1985).
16. H.G. Sarukhanyan, “Hadamard Matrices and Block Sequences”, Doctoral
thesis, Institute for Informatics and Automation Problems NAS RA, Yerevan,
Armenia (1998).
47. M. H. Dawson and S. E. Tavares, “An expanded set of S-box design criteria
based on information theory and its relation to differential-like attacks,”
in Advances in Cryptology—EUROCRYPT’91, Lecture Notes in Computer
Science, 547 352–367 Springer-Verlag, Berlin (1991).
48. G. M’gan Edmonson, J. Seberry, and M. Anderson, “On the existence of
Turyn sequences of length less than 43,” Math. Comput. 62, 351–362 (1994).
49. S. Eliahou, M. Kervaire, and B. Saffari, “A new restriction on the lengths of
Golay complementary sequences,” J. Combin. Theory, Ser A 55, 49–59 (1990).
50. S. Eliahou, M. Kervaire, and B. Saffari, “On Golay polynomial pairs,” Adv.
Appl. Math. 12, 235–292 (1991).
51. J. Hammer and J. Seberry, “Higher dimensional orthogonal designs and
Hadamard matrices,” in Congressus Numerantium, Proc. 9th Manitoba Conf.
on Numerical Mathematics 27, 23–29 (1979).
52. J. Hammer and J. Seberry, “Higher dimensional orthogonal designs and
applications,” IEEE Trans. Inf. Theory 27 (6), 772–779 (1981).
53. H. F. Harmuth, Transmission of Information by Orthogonal Functions,
Springer-Verlag, Berlin (1972).
54. C. Koukouvinos, C. Kounias, and K. Sotirakoglou, “On Golay sequences,”
Disc. Math. 92, 177–185 (1991).
55. C. Koukouvinos, M. Mitrouli, and J. Seberry, “On the smith normal form of
d-optimal designs,” J. Lin. Multilin. Algebra 247, 277–295 (1996).
56. Ch. Koukouvinos and J. Seberry, “Construction of new Hadamard matrices
with maximal excess and infinitely many new SBIBD (4k2, 2k2 + k, k2 + k),”
in Graphs, Matrices and Designs: A Festschrift for Norman J. Pullman,
R. Rees, Ed., Lecture Notes in Pure and Applied Mathematics, Marcel Dekker,
New York (1992).
57. R. E. A. C. Paley, “On orthogonal matrices,” J. Math. Phys. 12, 311–320
(1933).
58. D. Sarvate and J. Seberry, “A note on small defining sets for some SBIBD(4t−
1, 2t − 1, t − 1),” Bull. Inst. Comb. Appl. 10, 26–32 (1994).
59. J. Seberry, “Some remarks on generalized Hadamard matrices and theorems
of Rajkundlia on SBIBDs,” in Combinatorial Mathematics VI, Lecture Notes
in Mathematics, 748 154–164 Springer-Verlag, Berlin (1979).
60. J. Seberry, X.-M. Zhang, and Y. Zheng, “Cryptographic Boolean functions
via group Hadamard matrices,” Australas. J. Combin. 10, 131–145 (1994).
61. S. E. Tavares, M. Sivabalan, and L. E. Peppard, “On the designs of {SP}
networks from an information theoretic point of view,” in Advances in
Cryptology—CRYPTO’92, Lecture Notes in Computer Science, 740 260–279
Springer-Verlag, Berlin (1992).
189
then,
⎛ ⎞
⎜⎜⎜ A B C D⎟⎟⎟
⎜⎜⎜ −B A −D C ⎟⎟⎟
= ⎜⎜⎜⎜ ⎟⎟
⎜⎜⎝ −C D A −B⎟⎟⎟⎟⎠
W4n (5.2)
−D −C B A
n−1
A= ai U i , (5.3)
i=0
where U is a cyclic matrix of order n with the first row (0, 1, 0, . . . , 0) of length n,
and U n+i = U i , ai = an−i , for i = 1, 2, . . . , n − 1.
Thus, the four cyclic symmetric Williamson matrices A ⇔ (a0 , a1 , . . . , an−1 ),
B ⇔ (b0 , b1 , . . . , bn−1 ), C ⇔ (c0 , c1 , . . . , cn−1 ), D ⇔ (d0 , d1 , . . . , dn−1 ) can be repre-
sented as
n−1
A(a0 , a1 , . . . , an−1 ) = ai U i ,
i=0
n−1
B(b0 , b1 , . . . , bn−1 ) = bi U i ,
i=0
n−1
(5.4)
C(c0 , c1 , . . . , cn−1 ) = ci U ,
i
i=0
n−1
D(d0 , d1 , . . . , dn−1 ) = di U i ,
i=0
Additionally, if a0 = b0 = c0 = d0 = 1, then
where Q+ denotes the (0, 1) matrix, which is obtained from the (+1, −1) matrix
Q by replacement of −1 by zero, and Q− denotes the (0, 1) matrix, which is
obtained from the (+1, −1) matrix Q by replacement of −1 by +1 and +1 by zero,
respectively.
A2 + B2 + C 2 + D2 = 4nIn (5.7)
can be expressed by
, -2 , -2 , -2 , -2
2A+ − J + 2B+ − J + 2C + − J + 2D+ − J = 4nIn , (5.8)
It has been shown that for any ai , bi , ci, , di , 0 ≤ i ≤ n − 1 with |ai | = |bi | = |ci | =
|di | = 1, for all 0 ≤ i ≤ n − 1 and ai = an−i , bi = bn−i , ci = cn−i, , di = dn−i , the matrix
W4n (a0 , . . . , an−1 , . . . , d0 , . . . dn−1 ) is a Williamson–Hadamard matrix of order 4n.
The following is an example. Let (a0 , a1 , a1 ), (b0 , b1 , b1 ), (c0 , c1 , c1 ), and (d0 ,
d1 , d1 ) be the first rows of parametric Williamson-type cyclic symmetric matrices
of order 3. Using Algorithm 5.1.1, we can construct the following parametric
We form the second (and third) block P1 as follows: (1) from the second, fifth,
eighth, and eleventh elements of the first row, we make the first row (a1 , b1 , c1 , d1 )
of block P1 ; (2) from the second, fifth, eighth, and eleventh elements of the fourth
row we make the second row (−b1 , a1 , −d1 , c1 ) of block P1 , and so on.
Hence, we obtain
⎛ ⎞
⎜⎜⎜ a1 b1 c1 d1 ⎟⎟⎟
⎜⎜⎜−b a1 −d1 c1 ⎟⎟⎟⎟
P1 = ⎜⎜⎜⎜ 1 ⎟.
⎜⎜⎝ 1 d1 a1 −b1 ⎟⎟⎟⎟⎠
(5.14)
−c
−d1 −c1 b1 a1
or
⎛ ⎞
⎜⎜⎜P0 P1 P1 ⎟⎟⎟
⎜⎜⎜ ⎟
[BW]12 = ⎜⎜⎜P1 P0 P1 ⎟⎟⎟⎟⎟ , (5.16)
⎝ ⎠
P1 P1 P0
n−1
[BW]4n = U i ⊗ Qi , (5.18)
i=0
where
⎛ ⎞
⎜⎜⎜ ai bi ci di ⎟⎟⎟
⎜⎜⎜−b a −d c ⎟⎟⎟
Qi (ai , bi , ci , di ) = ⎜⎜⎜⎜ i i i i⎟
⎟,
⎜⎜⎝ −ci di ai −bi ⎟⎟⎟⎟⎠
(5.19)
−di −ci bi ai
⎛ ⎞
⎜⎜⎜+ + + + + − − − + − − −⎟⎟
⎟
⎜⎜⎜⎜− + − + + + + − + + + −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + + − + − + + + − + +⎟⎟⎟⎟⎟
⎜⎜⎜− − + + + + − + + + − +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜
⎜⎜⎜+ − − − + + + + + − − −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜+ + + − − + − + + + + −⎟⎟⎟⎟
= ⎜⎜⎜⎜ ⎟,
+⎟⎟⎟⎟⎟
[BW]12 (5.20)
⎜⎜⎜+ − + + − + + − + − +
⎜⎜⎜+ + − + − − + + + + − +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜
⎜⎜⎜+ − − − + − − − + + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + − + + + − − + − +⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎝ − + + + − + + − + + −⎟⎟⎟⎟⎠
+ + − + + + − + − − + +
or
⎛ ⎞
⎜⎜⎜Q0 (+1, +1, +1, +1) Q4 (+1, −1, −1, −1) Q4 (+1, −1, −1, −1)⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
[BW]12 = ⎜⎜Q4 (+1, −1, −1, −1) Q0 (+1, +1, +1, +1) Q4 (+1, −1, −1, −1)⎟⎟⎟⎟ . (5.21)
⎜⎝ ⎟⎠
Q4 (+1, −1, −1, −1) Q4 (+1, −1, −1, −1) Q0 (+1, +1, +1, +1)
From Eq. (5.18), we can see that all of the blocks are Hadamard matrices of
the Williamson type of order 4. In Ref. 14, it was proved that cyclic symmetric
Williamson–Hadamard block matrices can be constructed using only five different
blocks, for instance, as
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + + +⎟⎟
⎟ ⎜⎜⎜+ + + −⎟⎟
⎟ ⎜⎜⎜+ + − +⎟⎟
⎟
⎜⎜⎜− + − +⎟⎟⎟⎟ ⎜⎜⎜− + + +⎟⎟⎟⎟ ⎜⎜⎜− + − −⎟⎟⎟⎟
Q0 = ⎜⎜⎜⎜ ⎟, Q1 = ⎜⎜⎜⎜ ⎟, Q2 = ⎜⎜⎜⎜ ⎟,
⎜⎜⎝− + + −⎟⎟⎟⎟⎠ ⎜⎜⎝− − + −⎟⎟⎟⎟⎠ ⎜⎜⎝+ + + −⎟⎟⎟⎟⎠
− − + + + − + + − + + +
⎛ ⎞ ⎛ ⎞ (5.22)
⎜⎜⎜⎜+ − + +⎟⎟
⎟ ⎜⎜⎜⎜+ − − −⎟⎟
⎟
⎜⎜+ + − +⎟⎟⎟⎟ ⎜⎜+ + + −⎟⎟⎟⎟
Q3 = ⎜⎜⎜⎜ ⎟, Q4 = ⎜⎜⎜⎜ ⎟.
⎜⎜⎝− + + +⎟⎟⎟⎟⎠ ⎜⎜⎝+ − + +⎟⎟⎟⎟⎠
− − − + + + − +
The set of blocks with a fixed first block with odd +1 is as follows:
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + + −⎟⎟
⎟ ⎜⎜⎜+ − − +⎟⎟
⎟ ⎜⎜⎜− + − +⎟⎟
⎟
⎜⎜⎜− + + +⎟⎟⎟⎟ ⎜⎜⎜+ + − −⎟⎟⎟⎟ ⎜⎜⎜− − − −⎟⎟⎟⎟
Q0 = ⎜⎜⎜⎜
1 ⎟, Q1 = ⎜⎜⎜⎜
1 ⎟, Q2 = ⎜⎜⎜⎜
1 ⎟,
⎜⎜⎝− − + −⎟⎟⎟⎟⎠ ⎜⎜⎝+ + + +⎟⎟⎟⎟⎠ ⎜⎜⎝+ + − −⎟⎟⎟⎠⎟
+ − + + − + − + − + + −
⎛ ⎞ ⎛ ⎞ (5.23)
⎜⎜⎜− − + +⎟⎟
⎟ ⎜⎜⎜+ + + +⎟⎟
⎟
⎜⎜⎜+ − − +⎟⎟⎟⎟ ⎜⎜⎜− + − +⎟⎟⎟⎟
Q13 = ⎜⎜⎜⎜ ⎟, Q14 = ⎜⎜⎜⎜ ⎟.
⎜⎜⎝− + − +⎟⎟⎟⎟⎠ ⎜⎜⎝− + + −⎟⎟⎟⎟⎠
− − − − − − + +
F = [BW]4n f. (5.24)
n−1
f = Pi ⊗ X i , (5.25)
i=0
where Pi are column vectors of dimension n whose i’th element is equal to 1, the
remaining elements are equal to 0, and
n−1 n−1
[BW]4n f = U i P j ⊗ Qi X j = B j, (5.28)
i, j=0 j=0
where B j = U i P j ⊗ Qi X j .
From Eq. (5.28), we see that in order to perform the fast Williamson–Hadamard
transform, we need to calculate the spectral coefficients of the block transforms,
such as Yi = Qi X. Here, Qi , i = 0, 1, 2, 3, 4 have the form of Eq. (5.22), and
Y0 = (y00 , y10 , y20 , y30 ), Y1 = (y01 , y11 , y21 , y31 ), Y2 = (y02 , y12 , y22 , y32 ),
(5.31)
Y3 = (y03 , y13 , y23 , y33 ), Y4 = (y04 , y14 , y24 , y34 ).
0
x0 y0
1
x1 y0
Q0 X
2
x2 y0
x3 3
y0
x0 0
y1
x1 1
y1
Q1 X
x2 2
y1
x3 3
y1
0
x0 y2
1
x1 y2
Q2 X
2
x2 y2
x3 3
y2
0
x0 y3
1
x1 y3
Q3 X
x2 2
y3
3
x3 y3
0
x0 y4
x1 1
y4
Q4 X
2
x2 y4
x3 3
y4
Step 1. Split vector F36 into nine parts as follows: F36 = (X0 , X1 ,
. . . , X8 )T , where
It is easy to check that the joint four-point transform computation requires fewer
operations than its separate computations. The separate computations of transforms
Q0 X and Q1 X require 14 addition/subtraction operations and six one-bit shifts;
however, for their joint computation, only 10 addition/subtraction operations and
three one-bit shifts are necessary. Thus, using this fact, the complexity of the fast
Williamson–Hadamard transform will be discussed next.
Y0 Y1 Y2
Q0 X0 Q1X1 –Q2X2
A (Q0, Q1, Q2) Q1 X0 A (Q0, Q1, Q2) Q0X1 Q1X2
–Q2 X0 Q1X1 Q0X2
Q0 X0 Q0 X1 Q0 X2
Q1 X0 –Q2X1 Q1X2
–Q1 X0 Q1X1 –Q2X2
Q1 X0 Q1 X1 Q1 X2
–Q1 X0 –Q1X1 Q1X2
Q2 X0 Q1 X0 Q2 X 1 –Q1X1 Q2 X2 –Q1X2
–Q2X1 Q1X2
Q1 X0
Y3 Y4 Y5
Q1X3 –Q1X4 –Q1X5
A (Q0, Q1, Q2) –Q2X3 A (Q0, Q1, Q2) Q1X4 A (Q0, Q1, Q2) –Q1X5
Q1X3 –Q2X4 Q1X5
Q 0 X3 Q0 X4 Q0 X5
Q0X3 Q1X4 Q1X5
Q1X3 Q0X4 –Q2X5
Q 1 X3 Q1 X4 Q1 X5
–Q2X3 Q1X4 Q1X5
Y6 Y7 Y8
Q1X6 –Q2X7 Q1 X8
A (Q0, Q1, Q2) –Q1X6 A (Q0, Q1, Q2) Q1X7 A (Q0, Q1, Q2) –Q2X8
Q1X7
–Q2X6 Q0X8
For n = 5,
⎛ ⎞
⎜⎜⎜+ − − − − + 0 0 0 0 ⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜− + − − − 0 + 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− − + − − 0 0 + 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− − − + − 0 0 0 + 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜− − − − + +⎟⎟⎟⎟
X = ⎜⎜⎜⎜⎜
0 0 0 0
⎟⎟ ,
⎜⎜⎜+ 0 0 0 0 − + + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 0 + 0 0 0 + − + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 0 0 + 0 0 + + − + +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 0 0 0 + 0 + + + − +⎟⎟⎟⎟
⎜⎝ ⎟⎠
0 0 0 0 + + + + + −
⎛ ⎞ (5.39)
⎜⎜⎜ 0 0 0 0 0 0 + − − +⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 0 0 0 0 0 + 0 + − −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 0
⎜⎜⎜ 0 0 0 0 − + 0 + −⎟⎟⎟⎟⎟
⎟
⎜⎜⎜ 0
⎜⎜⎜ 0 0 0 0 − − + 0 +⎟⎟⎟⎟⎟
⎟⎟
⎜⎜ 0 + − − + 0 ⎟⎟⎟⎟
Y = ⎜⎜⎜⎜⎜
0 0 0 0
⎟⎟ .
⎜⎜⎜ 0 − + + − 0 0 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− 0 − + + 0 0 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜+ − 0 − + 0 0 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜+ + − 0 − 0 0 0 0 0 ⎟⎟⎟⎟
⎜⎝ ⎟⎠
− + + − 0 0 0 0 0 0
Let A0 = (1), B0 = (1), C0 = (1), D0 = (1) and A = B = (1, −1, −1, −1, −1),
C = (1, 1, −1, −1, 1), and D = (1, −1, 1, 1, −1) be cyclic symmetric matrices of
order 1 and 5, respectively. Then, from Eq. (5.36), we obtain Williamson matrices
of order 10, i.e.,
⎛ ⎞
⎜⎜⎜+ − − − − + + − − +⎟⎟
⎟
⎜⎜⎜− + − − − + + + − −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜−
⎜⎜⎜ − + − − − + + + −⎟⎟⎟⎟⎟
⎜⎜⎜− − − + − − − + + +⎟⎟⎟⎟⎟
⎜⎜⎜
− − − − + + − − + +⎟⎟⎟⎟⎟
A1 = A3 = ⎜⎜⎜⎜⎜ ⎟,
⎜⎜⎜+ − + + − − + + + +⎟⎟⎟⎟
⎜⎜⎜− ⎟
⎜⎜⎜ + − + + + − + + +⎟⎟⎟⎟
⎟
⎜⎜⎜+ − + − + + + − + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝+ + − + − + + + − +⎟⎟⎟⎟⎠
− + + − + + + + + −
⎛ ⎞ (5.42)
⎜⎜⎜+ − − − − + − + + −⎟⎟
⎟
⎜⎜⎜− + − − − − + − + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜−
⎜⎜⎜ − + − − + − + − +⎟⎟⎟⎟⎟
⎜⎜⎜− − − + − + + − + −⎟⎟⎟⎟⎟
⎜⎜⎜
− − − − + − + + − +⎟⎟⎟⎟⎟
A2 = A4 = ⎜⎜⎜⎜⎜ ⎟.
⎜⎜⎜ + + − − + − + + + +⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜ + + − − + − + + +⎟⎟⎟⎟
⎟
⎜⎜⎜− + + + − + + − + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝− − + + + + + + − +⎟⎟⎟⎟⎠
+ − − + + + + + + −
Now the Williamson–Hadamard matrix of order 40 can be synthesized as
⎛ ⎞
⎜⎜⎜ A1 A2 A1 A2 ⎟⎟⎟
⎜⎜⎜−A A1 −A2 A1 ⎟⎟⎟⎟
[WH]40 = ⎜⎜⎜⎜ 2 ⎟.
⎜⎝⎜−A1 A2 A1 −A2 ⎟⎟⎟⎠⎟
(5.43)
−A2 −A1 A2 A1
P = X ⊗ H1 + Y ⊗ S 4m H1 , (5.44)
Figure 5.3 Flow graph for the joint computation of XF and Y F transforms.
From Eq. (5.50), it follows that joint computation of XF and Y F requires only
18 additions/subtractions (see Fig. 5.3). Then, from Eq. (5.47), we can conclude
that the complexity of the PF transform algorithm can be obtained by
Example 5.5.1: Let Hm be a Hadamard matrix of order m, let X and Y have the
form as in Algorithm 5.1.1, and let F = ( fi )6m
i=1 be an input vector. Then, we have
a Hadamard matrix of order 6m of the form H6m = X ⊗ Hm + Y ⊗ S m Hm . As in
Eq. (5.55), we have H6m = A1 (I6 ⊗ Hm ), where A1 = X ⊗ Im + Y ⊗ S m , and
⎛ ⎞
⎜⎜⎜ Im Om Om Im −Im −Im ⎟⎟
⎜⎜⎜ Om ⎟
⎜⎜⎜ Im Om −Im Im −Im ⎟⎟⎟⎟
⎟
⎜⎜ O Om Im −Im −Im Im ⎟⎟⎟⎟
X ⊗ Im = ⎜⎜⎜⎜ m ⎟,
⎜⎜⎜⎜ Im −Im −Im −Im Om Om ⎟⎟⎟⎟⎟
⎜⎜⎜−Im
⎝ Im −Im Om −Im Om ⎟⎟⎟⎟⎠
−Im −Im Im Om Om −Im
⎛ ⎞ (5.56)
⎜⎜⎜Om Sm Sm Om Om Om ⎟⎟
⎟
⎜⎜⎜ S m Om Sm Om Om Om ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜ S Sm Om Om Om Om ⎟⎟⎟⎟
Y ⊗ S m = ⎜⎜⎜⎜ m ⎟.
⎜⎜⎜Om Om Om Om Sm S m ⎟⎟⎟⎟⎟
⎜⎜⎜O S m ⎟⎟⎟⎟⎠
⎜⎝ m Om Om Sm Om
Om Om Om Sm Sm Om
⎛ ⎞ ⎛ ⎞
⎜⎜⎜T 1 + T 4 − (T 5 + T 6 )⎟⎟⎟ ⎜⎜⎜S m (T 2 + T 3 )⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟ ⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜T 2 + T 5 − (T 4 + T 6 )⎟⎟⎟⎟⎟ ⎜⎜⎜S m (T 1 + T 3 )⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
T + T 6 − (T 4 + T 5 )⎟⎟⎟⎟ S (T + T 2 )⎟⎟⎟⎟
(X ⊗ Im )T = ⎜⎜⎜⎜⎜ 3 ⎟, (Y ⊗ S m )T = ⎜⎜⎜⎜⎜ m 1 ⎟ . (5.58)
⎜⎜⎜T 1 − T 4 − (T 2 + T 3 )⎟⎟⎟⎟⎟ ⎜⎜⎜S m (T 5 + T 6 )⎟⎟⎟⎟⎟
⎜⎜⎜⎜T − T − (T + T )⎟⎟⎟⎟ ⎜⎜⎜⎜S (T + T )⎟⎟⎟⎟
⎜⎜⎝ 2 5 1 3 ⎟ ⎟⎠ ⎜⎜⎝ m 4 6 ⎟ ⎟⎠
T 3 − T 6 − (T 1 + T 2 ) S m (T 4 + T 5 )
From Eqs. (5.57) and (5.58), it follows that the computational complexity of
transform H6m F is C(H6m ) = 24m + 6C(Hm ), where C(Hm ) is a complexity of an
m-point HT.
Table 5.1 Values of parameters n, m, tm , Nm, j and the complexity of the Williamson-type HT
of order 4n.
n 4n M tm Nm, j Cr (H4n ) Direct comp.
3 12 0 0 0 60 132
5 20 0 0 0 140 380
7 28 2 1 2 224 756
9 36 2 1 3 324 1260
11 44 2 1 2 528 1892
13 52 3 1 2 676 2652
15 60 2 3 3, 2, 2 780 3540
17 68 2 2 2, 3 1088 4558
19 76 2 3 2„4, 3 1140 5700
21 84 2 3 2, 2, 5 1428 6972
23 92 2 3 4, 2, 2 1840 8372
25 100 2 3 2, 7, 2 1850 9900
With this observation, one can reduce several operations in summing up the
vectors Yi (see step 3 of the above example and its corresponding flow graphs). Let
m be a length of the cyclic congruent circuits of the first block row of the block-
cyclic, block-symmetric Hadamard matrix of order 4n, tm be a number of various
cyclic congruent circuits of length m, and Nm, j be the number of cyclic congruent
circuits of type j and length m. Then, the complexity of the HT of order 4n takes
the form
⎡ ⎤
⎢⎢⎢ tm ⎥⎥⎥
Cr (H4n ) = 4n ⎢⎢⎣n + 2 − 2 (Nm, j − 1)(i − 1)⎥⎥⎥⎦ .
⎢ (5.63)
j=1
C ± = 2n(2n + 3),
(5.64)
C sh = 3n,
3 12 60 54 9 60 54 132
5 20 140 130 15 140 130 380
7 28 252 238 21 224 210 756
9 36 396 378 27 324 306 1260
11 44 572 550 33 528 506 1892
13 52 780 754 39 676 650 2652
15 60 1020 990 45 780 750 3540
17 68 1292 1258 51 1088 1054 4558
19 76 1596 1558 57 1140 1102 5700
21 84 1932 1890 63 1428 1386 6972
23 92 2300 2254 69 1840 1794 8372
25 100 2700 2650 75 1900 1850 9900
results, are given in the formula in Eq. (5.66) and in Table 5.2, respectively.
C = 4n(n⎡ + 2), ⎤
⎢⎢⎢ m tm ⎥⎥⎥
Cr = 4n ⎢⎢⎢⎣n + 2 − 2 (Nm, j − 1)(i − 1)⎥⎥⎥⎦ ,
i=2 j=1
C ± = 2n(2n + 3),
(5.66)
C sh = 3n,⎡ ⎤
⎢⎢⎢ m tm ⎥⎥⎥
Cr± = 2n ⎢⎢⎢⎣n + 3 − 2 (Nm, j − 1)(i − 1)⎥⎥⎥⎦ ,
i=2 j=1
C sh = 3n.
can be factorized as
where
n, F T = ( f1 , f2 , . . . , fmkn ).
F
Hmk (5.70)
Hm Complexity
References
1. N. Ahmed and K. Rao, Orthogonal Transforms for Digital Signal Processing,
Springer-Verlag, New York (1975).
213
Definition 6.1.2: A Hadamard matrix H4n of order 4n of the form H4n = I4n + S 4n
T
is called skew-symmetric type, skew symmetric, or skew if S 4n = −S 4n .34,35
We can see that if H4n = I4n + S 4n is a skew-symmetric Hadamard matrix of
order 4n, then
T
S 4n S 4n = S 4n
2
= (1 − 4n)I4n . (6.2)
Indeed,
T
H4n H4n = (I4n + S 4n )(I4n − S 4n ) = I4n − S 4n + S 4n − S 4n
2
= I4n − S 4n
2
= 4nI4n , (6.3)
2
from which we obtain S 4n = (1 − 4n)I4n .
A skew-Hadamard matrix Hm of order m can always be written in a skew-normal
form as
1 e
Hm = , (6.4)
−eT Cm−1 + Im−1
⎛ ⎞
⎜⎜⎜+ + + + + + + +⎟⎟
⎟
⎜⎜⎜− + − + − + − +⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜−
⎜⎜⎜ + + − − + + −⎟⎟⎟⎟⎟
⎜− − + + − − + +⎟⎟⎟⎟⎟
H8 = ⎜⎜⎜⎜⎜ ⎟, (6.7)
⎜⎜⎜⎜− + + + + − − −⎟⎟⎟⎟
⎟
⎜⎜⎜− − − + + + + −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝− + − − + − + +⎟⎟⎟⎟
⎠
− − + − + + − +
⎛ ⎞
⎜⎜⎜+ + + + + + + + + + + + + + + +⎟⎟
⎟
⎜⎜⎜⎜− + − + − + − + − + − + − + − +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + + − − + + − − + + − − + + −⎟⎟⎟⎟⎟
⎜⎜⎜− − + + − − + + − − + + − − + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + + + + − − − − + + + + − − −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− − − + + + + − − − − + + + + −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − − + − + + − + − − + − + +⎟⎟⎟⎟
⎜⎜⎜− ⎟
− + − + + − + − − + − + + − +⎟⎟⎟⎟
= ⎜⎜⎜⎜ ⎟.
−⎟⎟⎟⎟⎟
H16 (6.8)
⎜⎜⎜− + + + + + + + + − − − − − −
⎜⎜⎜− − − + − + − + + + + − + − + −⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜−
⎜⎜⎜ + − − − + + − + − + + + − − +⎟⎟⎟⎟⎟
⎟
⎜⎜⎜− − + − − − + + + + − + + + − −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + + + − − − − + − − − + + + +⎟⎟⎟⎟
⎜⎜⎜− ⎟
⎜⎜⎜ − − + + − + − + + + − − + − +⎟⎟⎟⎟
⎟
⎜⎜⎜−
⎝ + − − + − − + + − + + − + + −⎟⎟⎟⎟
⎠
− − + − + + − − + + − + − − + +
A = In + A1 , AT1 = −A1 ,
BT = B, C T = C, DT = B, (6.9)
AA + BB + CC + DD = 4nIn .
T T T T
where U is the cyclic matrix of order n, with the first row (0, 1, 0, . . . , 0), U 0 =
U n = In being an identity matrix of order n, and U n+i = U i , ai = −an−i , bi = bn−i ,
ci = cn−i , di = dn−i , for i = 1, 2, . . . , n − 1. Now, the skew-symmetric Williamson-
type Hadamard matrix H4n can be represented as
n−1
H4n = U i ⊗ Pi , (6.15)
i=0
where
⎛ ⎞
⎜⎜⎜ ai bi ci di ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜−bi ai di −ci ⎟⎟⎟⎟⎟
Pi = ⎜⎜⎜ ⎟, i = 0, 1, . . . , n − 1, (6.16)
⎜⎜⎜−ci −di ai bi ⎟⎟⎟⎟⎟
⎝ ⎠
−di ci −bi ai
and ai , bi , ci , di = ±1.
We will call the Hadamard matrices of the form of Eq. (6.15) skew-symmetric,
block-cyclic Williamson–Hadamard matrices.
An example of a skew-symmetric, block-cyclic Williamson–Hadamard matrix
of order 12 is given as follows:
⎛ ⎞
⎜⎜⎜+ + + + − − − + + − − +⎟⎟⎟
⎜⎜⎜⎜− + + − + − + + + + + +⎟⎟⎟⎟
⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− − + + + − − − + − + −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜−
⎜⎜⎜ + − + − − + − − − + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ − − + + + + + − − − +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + + − + + − + − + +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ . (6.17)
⎜⎜⎜+ − + − − − + + + − − −⎟⎟⎟⎟
⎜⎜⎜− ⎟
⎜⎜⎜ − + + − + − + − − + −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜−
⎜⎜⎜ − − + + − − + + + + +⎟⎟⎟⎟⎟
⎟
⎜⎜⎜+ − + + + + + + − + + −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜+ − − − + − + − − − + +⎟⎟⎟⎟
⎝ ⎠
− − + − − − + + − + − +
x0 0
y0
1
x1 y0
P0 X
2
x2 y0
x3 3
y0
x0 0
y1
x1 y1
1
P1X
x2 2
y1
x3 3
y1
0 1 2 3 0 1 2 3
y0 y0 y0 y0 y1 y1 y1 y1
x0 0 0
y0 x0 y0
1 1
x1 y0 x1 y0
P3X P2 X
2 2
x2 y0 x2 y0
3 3
x3 y0 x3 y0
x0 0 x0 0
y0 y0
x1 1 x1 1
y0 y0
P5 X P4X
x2 2 x2 2
y0 y0
x3 3 3
y0 x3 y0
x0 0 x0 0
y1 y1
x1 1 x1 1
y1 y1
P6 X P7X
x2 2 x2 2
y1 y1
x3 3 x3 3
y1 y1
Now, from Eqs. (6.19a)–(6.20), we can see that the joint computation of 4-point
transforms Pi X, i = 0, 1, . . . , 7 requires only 12 addition/subtraction operations. In
Fig. 6.1, the joint Pi X transforms, i = 0, 1, . . . , 7, are shown.
Let us give an example. The block-cyclic, skew-symmetric Hadamard matrix of
the Williamson type of order 36 has the following form:
⎛ ⎞
⎜⎜⎜ P0 −P3 −P1 −P2 P2 −P5 P5 P6 P4 ⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ P4 P0 −P3 −P1 −P2 P2 −P5 P5 P6 ⎟⎟⎟⎟
⎜⎜⎜ P ⎟
⎜⎜⎜ 6 P4 P0 −P3 −P1 −P2 P2 −P5 P5 ⎟⎟⎟⎟⎟
⎜⎜⎜ P5 ⎟
⎜⎜ P6 P4 P0 −P3 −P1 −P2 P2 −P5 ⎟⎟⎟⎟
⎟
H36 = ⎜⎜⎜⎜⎜−P5 P5 P6 P4 P0 −P3 −P1 −P2 P2 ⎟⎟⎟⎟⎟ . (6.21)
⎜⎜⎜ ⎟
⎜⎜⎜ P2 −P5 P5 P6 P4 P0 −P3 −P1 −P2 ⎟⎟⎟⎟
⎜⎜⎜−P ⎟
⎜⎜⎜ 2 P2 −P5 P5 P6 P4 P0 −P3 −P1 ⎟⎟⎟⎟⎟
⎜⎜⎜−P ⎟
⎜⎝ 1 −P2 P2 −P5 P5 P6 P4 P0 −P3 ⎟⎟⎟⎟
⎠
−P3 −P1 −P2 P2 −P5 P5 P6 P4 P0
where
XiT = f4i , f4i+1 , f4i+2 , f4i+3 , i = 0, 1, . . . , 8. (6.23)
From Eqs. (6.19a)–(6.19d) and the above-given equalities for Yi , we can see that
in order to compute all transforms Pi X j i = 0, 1, . . . , 6, j = 0, 1, . . . , 8 resulting
in Yi , i = 0, 1, . . . , 8, 108 addition operations are necessary, as in the block-cyclic,
block-symmetric case. Hence, the complexity of the block-cyclic, skew-symmetric
HT can be calculated by the formula
Analysis of the 4-point transforms given above shows that their joint com-
putation requires fewer operations than does their separate computations. For
example, the transforms P0 X and P1 X require 14 addition/subtraction operations
and three one-bit shifts; however, for their joint computation, only 10 addition/
subtraction operations and three one-bit shifts are necessary.
One can show that formulas of the complexity, in this case, are similar to ones
in the case of symmetric Williamson–Hadamard matrices, i.e.,
3 60 9 60 54
5 140 15 140 130
7 252 21 252 238
9 396 27 360 342
11 572 33 484 462
13 780 39 676 650
15 1020 45 900 870
17 1292 51 1088 1054
19 1596 57 1216 1178
21 1932 63 1596 1554
23 2300 69 1840 1794
25 2700 75 2100 2050
References
1. R. Craigen, “Hadamard matrices and designs,” in The CRC Handbook of
Combinatorial Designs, C. J. Colbourn and J. H. Dinitz, Eds., pp. 370–377
CRC Press, Boca Raton (1996).
2. D. Z. Djokovic, “Skew Hadamard matrices of order 4 × 37 and 4 × 43,”
J. Combin. Theory, Ser. A 61, 319–321 (1992).
3. D. Z. Djokovic, “Ten new orders for Hadamard matrices of skew type,” Univ.
Beograd. Pupl. Electrotehn. Fak., Ser. Math. 3, 47–59 (1992).
4. D. Z. Djokovic, “Construction of some new Hadamard matrices,” Bull.
Austral. Math. Soc. 45, 327–332 (1992).
5. D. Z. Djokovic, “Good matrices of order 33, 35 and 127 exist,” J. Combin.
Math. Combin. Comput. 14, 145–152 (1993).
6. D. Z. Djokovic, “Five new orders for Hadamard matrices of skew type,”
Australas. J. Combin. 10, 259–264 (1994).
7. D. Z. Djokovic, “Six new orders for G-matrices and some new orthogonal
designs,” J. Combin. Inform. System Sci. 20, 1–7 (1995).
8. R. J. Fletcher, C. Koukouvinos, and J. Seberry, “New skew-Hadamard
matrices of order 4 · 49 and new D-optimal designs of order 2 · 59,” Discrete
Math. 286, 251–253 (2004).
9. S. Georgiou and C. Koukouvinos, “On circulant G-matrices,” J. Combin.
Math. Combin. Comput. 40, 205–225 (2002).
10. S. Georgiou and C. Koukouvinos, “Some results on orthogonal designs and
Hadamard matrices,” Int. J. Appl. Math. 17, 433–443 (2005).
11. S. Georgiou, C. Koukouvinos, and J. Seberry, “On circulant best matrices and
their applications,” Linear Multilin. Algebra 48, 263–274 (2001).
12. S. Georgiou, C. Koukouvinos, and S. Stylianou, “On good matrices, skew
Hadamard matrices and optimal designs,” Comput. Statist. Data Anal. 41,
171–184 (2002).
13. S. Georgiou, C. Koukouvinos, and S. Stylianou, “New skew Hadamard
matrices and their application in edge designs,” Utilitas Math. 66, 121–136
(2004).
14. S. Georgiou, C. Koukouvinos, and S. Stylianou, “Construction of new skew
Hadamard matrices and their use in screening experiments,” Comput. Stat.
Data Anal. 45, 423–429 (2004).
15. V. Geramita and J. Seberry, Orthogonal Designs: Quadratic Forms and
Hadamard Matrices, Marcel Dekker, New York (1979).
16. J. M. Goethals and J. J. Seidel, “A skew Hadamard matrix of order 36,”
J. Austral. Math. Soc. 11, 343–344 (1970).
17. H. Kharaghani and B. Tayfeh-Rezaie, “A Hadamard matrix of order 428,”
J. Combin. Des. 13, 435–440 (2005).
18. C. Koukouvinos and J. Seberry, “On G-matrices,” Bull. ICA 9, 40–44 (1993).
19. S. Kounias and T. Chadjipantelis, “Some D-optimal weighing designs for
n ≡ 3 (mod 4),” J. Statist. Plann. Inference 8, 117–127 (1983).
20. R. E. A. C. Paley, “On orthogonal matrices,” J. Math. Phys. 12, 311–320
(1933).
21. J. Seberry Wallis, “A skew-Hadamard matrix of order 92,” Bull. Austral. Math.
Soc. 5, 203–204 (1971).
22. J. Seberry Wallis, “On skew Hadamard matrices,” Ars Combin. 6, 255–275
(1978).
23. J. Seberry Wallis and A. L. Whiteman, “Some classes of Hadamard matrices
with constant diagonal,” Bull. Austral. Math. Soc. 7, 233–249 (1972).
24. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block
designs,” in Contemporary Design Theory—A Collection of Surveys, J. H.
Dinitz and D. R. Stinson, Eds., 431–560 Wiley, Hoboken, NJ (1992).
25. E. Spence, “Skew-Hadamard matrices of order 2(q + 1),” Discrete Math. 18,
79–85 (1977).
26. G. Szekeres, “A note on skew type orthogonal ±1 matrices,” in Combinatorics,
Colloquia Mathematica Societatis, Vol. 52, J. Bolyai, A. Hajnal, L. Lovász,
and V. T. Sòs, Eds., 489–498 North-Holland, Amsterdam (1988).
27. W. D. Wallis, A. P. Street and J. Seberry Wallis, Combinatorics: Room
Squares, Sum-Free Sets, Hadamard Matrices, Lecture Notes in Mathematics,
292, Springer, New York, 1972.
229
1. p1 q1 = p2 q2 = n ≡ 0 (mod 4),
2. Xi ∗ X j = 0, i j, i, j = 1, 2, . . . , k, * is Hadamard product,
k
3. Xi is a (+1, −1) matrix,
i=1
k k
4. Xi XiT ⊗ Ai ATi + Xi X Tj ⊗ Ai ATj = nIn , i j,
i=1 i, j=1
k k
5. XiT Xi ⊗ ATi Ai + XiT X j ⊗ ATi A j = nIn , i j.
i=1 i, j=1
The first three conditions are evident. The two last conditions are jointly equivalent
to conditions
HH T = H T H = nIn . (7.2)
Now, let us consider the case where Ai are (+1, −1) vectors. Note that any
Hadamard matrix Hn of order n can be represented as
where X, Y are (0, ±1) matrices of dimension n × (n/2), Ai are (0, ±1) matrices of
dimension n × (n/4), and vi are the following four-dimensional (+1, −1) vectors:
B1 = A1 + A2 + A7 + A8 , B2 = A3 + A4 + A5 + A6 ,
(7.9)
B3 = A1 − A2 − A5 + A6 , B4 = −A3 + A4 + A7 − A8 .
Theorem 7.1.2:15 For the existence of Hadamard matrices of order n, the existence
of (0, ±1) matrices Bi , i = 1, 2, 3, 4 of dimension n × (n/4) is necessary and
sufficient, satisfying the following conditions:
1. B1 ∗ B2 = 0, B3 ∗ B4 = 0,
2. B1 ± B2 , B3 ± B4 are (+1, −1)-matrices,
4
n
3. Bi BTi = In , (7.10)
i=1
2
4. BTi B j= 0, i j, i, j = 1, 2, 3, 4,
n
5. BTi Bi = In/4 , i, j = 1, 2, 3, 4.
2
Hn = v1 ⊗ A1 + v2 ⊗ A2 + · · · + v8 ⊗ A8 . (7.11)
Ai ∗ A j = 0, i j, i, j = 1, 2, . . . , 8,
(7.12)
A1 + A2 + · · · + A8 is a (+1, −1)-matrix.
On the other hand, it is not difficult to show that the matrix Hn can also be presented
as
Q1 = (B1 + B2 )T , Q2 = (B1 − B2 )T ,
(7.16)
Q3 = (B3 + B4 )T , Q4 = (B3 − B4 )T
Qi QTj = 0, i j, i = 1, 2, 3, 4,
(7.17)
Qi QTi = nIn/4 , i = 1, 2, 3, 4.
X = A1 ⊗ (B1 + B2 )T + A2 ⊗ (B1 − B2 )T ,
(7.18)
Y = A3 ⊗ (B3 + B4 )T + A4 ⊗ (B3 − B4 )T .
XY T = X T Y = 0,
mn (7.19)
XX T + YY T = X T X + Y T Y = Imn/4 .
2
Again, we rewrite matrices X, Y in the following form:
X1 ∗ X2 = X3 ∗ X4 = Y1 ∗ Y2 = Y3 ∗ Y4 = 0,
X1 ± X2 , X3 ± X4 , Y1 ± Y2 , Y3 ± Y4 are (+1, −1) matrices,
4 4
Xi YiT = XiT Yi = 0, (7.21)
i=1 i=1
4 4 mn
Xi XiT + Yi YiT = XiT Xi + YiT Yi = Imn/4 .
i=1 i=1
4
(+1, −1) matrices P and Q of orders pq/4 can be constructed in a manner similar
to the construction of Hadamard matrices of order p and q, with the conditions of
Eq. (7.19).
Now, consider the following (0, ±1) matrices:
P+Q P−Q
Z= , W= ,
2 2 (7.22)
Ci = Xi ⊗ Z + Yi ⊗ W, i = 1, 2, 3, 4.
Z ∗ W = 0,
ZW T = WZ T , (7.23)
pq
ZZ T = Z T Z = WW T = W T W = I pq/4 ,
8
assuming that matrices Ci of dimension (mnpq/16) × (mnpq/64) satisfy the
conditions of Eq. (7.10).
Hence, according to Theorem 7.1.2, the matrix
where Xi , Yi , Zi , Wi , i = 1, 2 are (0, ±1) matrices of the dimension (n2 /4) × (n2 /2).
From the condition of Eq. (7.17) and the representation of Eq. (7.25), we find
that
X1 ∗ X2 = Y1 ∗ Y2 = Z1 ∗ Z2 = W1 ∗ W2 = 0,
X1 ± X2 , Y1 ± Y2 , Z1 ± Z2 , W1 ± W2 are (+1, −1) matrices,
X1 Y1T + X2 Y2T = 0, X1 Z1T + X2 Z2T = 0, X1 W1T + X2 W2T = 0, (7.26)
Y1 Z1T + Y2 Z2T = 0, Y1 W1T + Y2 W2T = 0, Z1 W1T + Z2 W2T = 0,
n2
X1 X1T + X2 X2T = Y1 Y1T + Y2 Y2T = Z1 Z1T + Z2 Z2T = W1 W1T + W2 W2T = In /4 .
2 2
Now, we define the following matrices:
X ⊗ A1 + Y1 ⊗ A2 X ⊗ A1 + Y2 ⊗ A2
C1 = 1 , C2 = 2 ,
Z1 ⊗ A1 + W1 ⊗ A2 Z2 ⊗ A1 + W2 ⊗ A2
(7.27)
X ⊗ A3 + Y1 ⊗ A4 X ⊗ A3 + Y2 ⊗ A4
C3 = 1 , C4 = 2 .
Z1 ⊗ A3 + W1 ⊗ A4 Z2 ⊗ A3 + W2 ⊗ A4
Theorem 7.1.3: For any natural numbers k and t, there is a Hadamard matrix
of order [n1 n2 . . . nt(k+2)+1 ]/2t(k+3) , where ni ≥ 4 are orders of known Hadamard
matrices.
Proof: The case for t = 1 and k = 1, 2, . . . was proved in Corollary 7.1.4. Let t > 1
and assume that the assertion is correct for t = t0 > 1, i.e., there is a Hadamard
matrix of order [n1 n2 · · · nt0 (k+3) ]/2t0 (k+3) .
Prove the theorem for t = t0 + 1. We have k + 3 Hadamard matrices of the following
orders:
n1 n2 . . . nt0 (k+2)+1
m1 = , nt0 (k+2)+2 , . . . , nt0 (k+2)+k+3 . (7.28)
2t0 (k+3)
n1 n2 . . . n(t0 +1)(k+2)+1
. (7.29)
2(t0 +1)(k+3)
Proof: We prove the lemma for k = 3, 5. For the other value k, the proof is similar.
For k = 3, allow a Hadamard matrix Hn of order n of the type in Eq. (7.30) to exist,
i.e.,
B1 = A1 + A2 , B2 = A3 , B3 = A1 − A2 , B4 = −A3 (7.33)
of dimension n × (n/4) must satisfy all the conditions in Eq. (7.10). In particular,
the following conditions should be satisfied:
n
BT2 B4 = 0, BT2 B2 = In/4 . (7.34)
2
That is, on the one hand AT3 A3 = 0, and on the other hand AT3 A3 = (n/2)In/4 , which
is impossible. Now we consider the case k = 5.
Let
B1 = A1 + A2 , B2 = A3 + A4 + A5 , B3 = A1 − A2 − A5 , B4 = −A3 + A4
(7.37)
must satisfy the conditions of Eq. (7.10). We can see that the conditions
n
BT1 B1 = BT3 B3 = In/4 (7.38)
2
mean that any column of matrices B1 and B3 contains precisely n/2 nonzero
elements. From this point, we find that A5 = 0, which contradicts the condition
of Lemma 7.1.1.
Hn = v1 ⊗ B1 + v2 ⊗ B2 + · · · + vk ⊗ Bk . (7.39)
We call the Hadamard matrices having the representation in Eq. (7.39) an A(n, k)-
type Hadamard matrix or simply an A(n, k)-matrix.
Bi ∗ B j = 0, i j, i, j = 1, 2, . . . , k,
k
Bi is a (+1, −1) matrix,
i=1
k
n (7.40)
Bi BTi = In ,
i=1
k
BTi B j = 0, i j, i, j = 1, 2, . . . , k,
n
BTi Bi = In/k , i = 1, 2, . . . , k.
k
Proof: Necessity: To avoid excessive formulas, we prove the theorem for the case
k = 4. The general case is then a direct extension. Let Hn be a Hadamard matrix of
type A(n, 4), i.e., Hn has the form of Eq. (7.39), where
vi vTj = 0, i j, i, j = 1, 2, 3, 4,
(7.41)
vi vTi = 4, i = 1, 2, 3, 4.
Consider the last two conditions of Eq. (7.40). Note that the Hadamard matrix Hn
has the form
C1 = B1 + B2 , C2 = B3 + B4 , C3 = B1 − B2 , C4 = B3 − B4 (7.45)
Hence, taking into account the last two conditions of Eq. (7.40), we can see that
the matrices Bi satisfy the following equations:
BTi B j = 0, i j, i, j = 1, 2, 3, 4,
n (7.47)
BTi Bi = In/4 , i = 1, 2, 3, 4.
4
Sufficiency: Let (0, ±1) matrices Bi , i = 1, 2, 3, 4 satisfy the conditions of
Eq. (7.40). We shall show that the matrix in Eq. (7.43) is a Hadamard matrix.
Indeed, calculating Hn HnT and HnT Hn , we find that
4 4
n
Hn HnT = 4 Bi BTi = HnT Hn = vTi vi ⊗ In/4 = nIn . (7.48)
i=1 i=1
4
Hn = v1 ⊗ B1 + v2 ⊗ B2 + · · · + vk ⊗ Bk , (7.49)
where (0, ±1) matrices Bi of dimension n × n/k satisfy the conditions of Eq. (7.40).
Note that the fifth condition of Eq. (7.40) means that BTi are orthogonal (0, ±1)
matrices and any row of this matrix contains n/k nonzero elements. Hence, for
T
matrix BTi , a matrix Bi corresponds to it having the following form:
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜/// ··· ···
· · ·⎟⎟
⎟ ⎜⎜⎜· · · /// ···· · ·⎟⎟
⎟ ⎜⎜⎜· · · ··· ··· ///⎟⎟
⎜⎜⎜⎜.. .. .... ⎟⎟⎟⎟ ⎜⎜⎜.
⎜⎜.. .. .... ⎟⎟⎟⎟ ⎜⎜⎜.
⎜⎜.. .. .. .. ⎟⎟⎟⎟⎟
⎜. . . . ⎟⎟⎟ . . . ⎟⎟⎟ . . . ⎟⎟⎟
B1 = ⎜⎜⎜⎜⎜ ⎟⎟⎟ , B2 = ⎜⎜⎜⎜⎜ ⎟⎟⎟ , . . . , Bk = ⎜⎜⎜⎜⎜
T T T
⎟,
⎜⎜⎜/// · · · · · · · · ·⎟⎟ ⎜⎜⎜· · · /// · · · · · ·⎟⎟ ⎜⎜⎜· · · · · · · · · ///⎟⎟⎟⎟
⎜⎝.
.. .. .. .. ⎟⎟⎠ ⎜⎝.
.. .. .. .. ⎟⎟⎠ ⎜⎝.
.. .. .. .. ⎟⎟⎠
. . . . . . . . .
(7.50)
where the shaded portions of rows contain ±1, and other parts of these rows are
filled with zeros.
T
From the condition Bi Bi = (n/k)In/k , it follows that the shaded pieces of i’th
T
rows of matrices Bi contain an even number of ±1s, and from the condition
T
Bi B j = 0, i j, (7.51)
it follows that other parts of the i’th row also contain an even number of ±1s. It
follows that n/k is an even number, i.e., n/k = 2l; hence, n ≡ 0 (mod 2k).
Naturally, the following problem arises:
For any n, n ≡ 0 (mod 2k), construct an A(n, k)-type Hadamard matrix.
Next, we present some properties of A(n, k)-type Hadamard matrices.
Property 7.2.1: (a) If A(n, k)- and A(m, r)-type Hadamard matrices exist, then an
A(mn, kr)-type Hadamard matrix also exists.
(b) If a Hadamard matrix of order n exists, then there also exists an A(2i−1 n, 2i )-
type Hadamard matrix, i = 1, 2, . . ..
(c) If Hadamard matrices of order ni , i = 1, 2, . . . exist, then a Hadamard matrix
of type A{[n1 n2 . . . nt(r+2)+2 ]/2t(k+3) , 4} exists, where k, t = 1, 2, . . ..
Proof: Represent the A(n, k) matrix Hn as HnT = PT1 , PT2 , . . . , PTr , where (+1, −1)
matrices Pi of dimension n/r × n have the form
k
Pi = vt ⊗ Ai,t , i = 1, 2, . . . , r, (7.53)
t=1
At,i ⊗ At, j = 0, i j, t = 1, 2, . . . , r, i, j = 1, 2, . . . , k,
k
At,i is a (+1, −1) matrix, t = 1, 2, . . . , r,
i=1
k
(7.54)
Ai,t ATj,t = 0, i j, i, j = 1, 2, . . . , r,
t=1
k
n
Ai,t ATi,t = In/r , i = 1, 2, . . . , r.
t=1
r
k
U2i−1 ∗ U2i = 0, i = 1, 2, . . . , ,
2
U2i−1 ± U2i is a (+1, −1) matrix,
(7.57)
k
m
Ui UiT = Im .
i=1
2
By using the conditions of Eqs. (7.54) and (7.57), we can verify that these matrices
satisfy the conditions of Eq. (7.52). From Theorem 7.2.3, some useful corollaries
follow.
Corollary 7.2.2:1,2,18 The existence of Hadamard matrices of orders m and n
implies the existence of a Hadamard matrix of order mn/2.
Indeed, according to Theorem 7.2.3, for k = r = 2, there are (0, ±1) matrices
B1,1 and B1,2 , satisfying the conditions of Eq. (7.52). Now it is not difficult to show
that (++) ⊗ B1,1 + (+−) ⊗ B1,2 is a Hadamard matrix of order mn/2.
Corollary 7.2.3:19 If Hadamard matrices of order m and n exist, then there are
(0, ±1) matrices X, Y of order mn/4, satisfying the conditions
XY T = 0,
mn (7.59)
XX T + YY T = Imn/4 .
2
According to Theorem 7.2.3, for k = 2 and r = 4, we have two pairs of (0,
±1) matrices B1,1 , B1,2 and B2,1 , B2,2 of dimension mn/4 × mn/8 satisfying the
conditions of Eq. (7.52). We can show that matrices
satisfying the conditions of Eq. (7.59). Here, vi are mutually orthogonal four-
dimensional (+1, −1) vectors. The proof of this corollary follows from Theorem
7.2.3 for r = k = 4. As mentioned, the length of k mutually orthogonal (+1, −1)-
vectors is equal to 2 or k ≡ 0 (mod 4).
Below, we consider vectors of the dimension k = 2t . Denote the set of all
Hadamard matrices by C and the set of A(n, k)-type Hadamard matrices by Ck .
From Theorem 7.2.1, it follows that C = C2 , and from Corollary 7.2.1 it directly
follows that
C = C 2 ⊃ C 4 ⊃ C 8 ⊃ · · · ⊃ C 2k . (7.62)
where (0, ±1) matrices Ai, j of dimensions n/k × m/r satisfy the conditions
Ai,t ∗ Ai,p = 0, i = 1, 2, . . . , k, t p, t, p = 1, 2, . . . , r,
r
Ai,t is a (+1, −1)-matrix, i = 1, 2, . . . , k,
t=1
r
(7.64)
Ai,t ATp,t = 0, i p, i, p = 1, 2, . . . , k,
t=1
r
m
Ai,t ATi,t = Im/k , i = 1, 2, . . . , k.
t=1
r
One can show that matrices Di satisfy the conditions of Eq. (7.40). According
to Theorem 7.2.1, this means that there is a Hadamard matrix of type A(mn/k, r),
where Hmn/k ∈ Cr , thus proving the theorem.
Hi H Tj = 0, i j, i, j = 1, 2, . . . , k,
k
(7.66)
Hi HiT = mIm .
i=1
where vi are mutually orthogonal k-dimensional (+1, −1) vectors, and Ai, j are (0,
±1) matrices of the dimension n/4 × n/k satisfying the conditions
At,i ∗ At, j = 0, t = 1, 2, 3, 4, i j, i, j = 1, 2, . . . , k,
k
At,i is a (+1, −1) matrix, t = 1, 2, 3, 4,
i=1
k
(7.68)
At,i ATr,i = 0, t r, t, r = 1, 2, 3, 4,
i=1
k
n
At,i ATt,i = In/4 , t = 1, 2, 3, 4.
i=1
k
P1 + P2 P1 − P2 P3 + P4 P3 − P4
U1 = , U2 = , U3 = , U4 = . (7.69)
2 2 2 2
U1 ∗ U2 = U3 ∗ U4 = 0,
U1 ± U2 , U3 ± U4 are (+1, −1) matrices,
4 (7.70)
m
Ui UiT = Im .
i=1
2
One can show that these matrices satisfy the conditions of Eq. (7.59). According
to Corollary 7.2.3, from the existence of Hadamard matrices of orders p and q,
the existence of (+1, −1) matrices X1 , Y1 of order pq/4 follows, satisfying the
conditions of Eq. (7.59). Now we can show that (0, ±1) matrices
X1 + Y1 X1 − Y1
Z= , W= , (7.72)
2 2
satisfy the conditions
Z ∗ W = 0,
Z ± W is a (+1, −1) matrix,
ZW T = WZ T , (7.73)
pq
ZZ T = WW T = I pq/4 .
8
Finally, we introduce (0, ±1) matrices Bi , i = 1, 2, . . . , k of dimensions (mnpq/16)×
(mnpq/16):
, - , -
Bi = U1 ⊗ A1,i + U2 ⊗ A2,i ⊗ Z + U3 ⊗ A3,i + U4 ⊗ A4,i ⊗ W. (7.74)
We can show that the matrices Bi satisfy the conditions of Theorem 7.2.1. Hence,
there is a Hadamard matrix of type A[(mnpq/16), k].
From Corollary 7.2.2 and Theorems 7.3.1 and 7.3.2, the following ensues:
Corollary 7.3.1: (a) If there is an A(n1 , k) matrix and Hadamard matrices of
orders ni , i = 2, 3, 4, . . ., then a Hadamard matrix also exists of type
%n n . . . n &
1 2 3t+1
A 4t
, k , t = 1, 2, . . . . (7.75)
2
(b) If there are Hadamard matrices of orders ni , i = 1, 2, . . ., then there are also
%n n . . . n & %n n . . . n &
1 2 3t+1 1 2 3t+1
A , 4 and A , 8 (7.76)
24t−1 24t−2
matrices, t = 1, 2, . . ..
(c) If Hadamard matrices of orders ni , i = 1, 2, . . ., exist, then there is also a
Hadamard matrix of order (n1 n2 . . . n3i+2 )/24i+1 .
Theorem 7.3.3: If there is an A(n, k) matrix and orthogonal design OD m; mk ,
k , . . . , k , then orthogonal design OD k ; k2 , k2 , . . . , k2 exists.
m m mn mn mn mn
References
1. S. S. Agaian and H. G. Sarukhanyan, “Recurrent formulae for construction of
Williamson type matrices,” Math. Notes 30 (4), 603–617 (1981).
2. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in
Mathematics 1168, Springer-Verlag, Berlin, (1985).
3. R. Craigen, J. Seberry, and X. Zhang, “Product of four Hadamard matrices,”
J. Combin. Theory, Ser. A 59, 318–320 (1992).
4. H. Sarukhanyan, S. Agaian, J. Astola, and K. Egiazarian, “Binary matrices,
decomposition and multiply-add architectures,” Proc. SPIE 5014, 111–122
(2003) [doi:10.1117/12.473134].
5. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, “Construction of
Williamson type matrices and Baumert–Hall, Welch and Plotkin arrays,” in
Proc. First Int. Workshop on Spectral Techniques and Logic Design for Future
Digital Systems, Tampere, Finland SPECLOG’2000, TICSP Ser. 10, 189–205
(2000).
6. J. Seberry and M. Yamada, “On the multiplicative theorem of Hadamard
matrices of generalize quaternion type using M-structure,” http://www.uow.
edu.au/∼jennie/WEB/WEB69-93/max/183_1993.pdf.
7. S. M. Phoong and K. Y. Chang, “Antipodal paraunitary matrices and
their application to OFDM systems,” IEEE Trans. Signal Process. 53 (4),
1374–1386 (2005).
8. W. A. Rutledge, “Quaternions and Hadamard matrices,” Proc. Am. Math. Soc.
3 (4), 625–630 (1952).
9. M.J.T. Smith and T.P. Barnwell III, “A procedure for designing exact
reconstruction filter banks for tree-structured subband coders,” in Proc. of
IEEE Int. Conf. Acoust. Speech, Signal Process, San Diego, 27.11–27.14 (Mar.
1984).
10. P. P. Vaidyanathan, “Theory and design of M-channel maximally decimated
quadrature mirror filters with arbitrary M, having perfect reconstruction
property,” IEEE Trans. Acoust., Speech, Signal Process. ASSP-35, 476–492
(Apr. 1987).
11. S.S. Agaian, “Spatial and high dimensional Hadamard matrices,” in Mathe-
matical Problems of Computer Science (in Russian), NAS RA, Yerevan,
Armenia, 12, 5–50 (1984).
12. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, “Decomposition of
Hadamard matrices,” in Proc. of First Int. Workshop on Spectral Techniques
and Logic Design for Future Digital Systems, 2–3 June 2000 Tampere, Finland
SPECLOG’2000, TICSP Ser. 10, pp. 207–221 (2000).
13. S. Agaian, H. Sarukhanyan, and J. Astola, “Multiplicative theorem based
fast Williamson–Hadamard transforms,” Proc. SPIE 4667, 82–91 (2002)
[doi:10.1117/12.467969].
14. http://www.uow.edu.au/∼jennie.
15. H.G. Sarukhanyan, “Hadamard Matrices and Block Sequences,” Doctoral
thesis, Institute for Informatics and Automation Problems of NAS RA,
Yerevan, Armenia (1998).
16. H. G. Sarukhanyan, “Decomposition of Hadamard matrices by orthogonal
(−1, +1) vectors and algorithm of fast Hadamard transform,” Rep. Acad. Sci.
Armenia 97 (2), 3–6 (1997) (in Russian).
17. H. G. Sarukhanyan, “Decomposition of the Hadamard matrices and fast
Hadamard transform”, Computer Analysis of Images and Patterns, Lecture
Notes in Computer Science 1296, pp. 575–581 (1997).
18. H. G. Sarukhanyan, S. S. Agaian, J. Astola, and K. Egiazarian, “Decomposi-
tion of binary matrices and fast Hadamard transforms,” Circuits, Systems, and
Signal Processing 24 (4), 385–400 (1993).
19. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block
designs”, in Surveys in Contemporary Design Theory, Wiley-Interscience
Series in Discrete Mathematics, 431–560 John Wiley & Sons, Hoboken, NJ
(1992).
20. J. S. Wallis, “On Hadamard matrices,” J. Combin. Theory, Ser. A 18, 149–164
(1975).
249
X = B1 ⊗ (A1 + A2 )T + B2 ⊗ (A1 − A2 )T ,
(8.8)
Y = B3 ⊗ (A3 + A4 )T + B4 ⊗ (A3 − A4 )T .
Z = Q1 ⊗ (P1 + P2 )T + Q2 ⊗ (P1 − P2 )T ,
(8.11)
W = Q3 ⊗ (P3 + P4 )T + Q4 ⊗ (P3 − P4 )T .
Z+W Z−W
P= , Q= . (8.12)
2 2
Step 8. Construct the Hadamard matrix as
Hmnpq/16 = X ⊗ P + Y ⊗ Q. (8.13)
Hn = v1 ⊗ A1 + v2 ⊗ A2 + · · · + vk ⊗ Ak (8.14)
is called the Hadamard matrix of type A(n, k), or the A(n, k) matrix,1,2,4–6 where
vi are orthogonal (+1, −1) vectors of length k, and Ai are (0, ±1) matrices of
dimension n × n/k.
Theorem 8.2.1: A matrix Hn of order n is an A(n, k)-type Hadamard matrix if and
only if, there are nonzero (0, ±1) matrices Ai , i = 1, 2, . . . , k of size n×n/k satisfying
the following conditions:
Ai ∗ A j , i j, i, j = 1, 2, . . . , k,
k
Ai is a (+1, −1) matrix,
i=1
k
n (8.15)
Ai ATi = In ,
i=1
k
ATi A j = 0, i j, i, j = 1, 2, . . . , k,
n
ATi Ai = In/k , i = 1, 2, . . . , k.
k
Proof: Necessity: In order to avoid excessive formulas, we prove the theorem for
the case k = 4. The general case is then a straightforward extension of the proof.
Let Hn be a Hadamard matrix of type A(n, k), i.e., Hn has the form of Eq. (8.14),
where
Consider the last two conditions of Eq. (8.15). Note that the Hadamard matrix Hn
has the form
C1 = A1 + A2 , C2 = A3 + A4 , C3 = A1 − A2 , C4 = A3 − A4 (8.20)
4 4
n
Hn HnT = 4 Ai ATi = HnT Hn = vTi vi ⊗ In/4 = nIn . (8.23)
i=1 i=1
4
Below, we give an algorithm based on this theorem. Note that any Hadamard
matrix Hn of order n can be presented as
where X, Y are (0, ±1) matrices with dimension n × n/2. Examples of the
decomposition of Hadamard matrices are given below.
Example 8.2.1: (1) The following Hadamard matrix of order 4 can be decom-
posed:
(a) via two vectors (+ +), (+ −),
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + + +⎟⎟
⎟⎟⎟ ⎜⎜⎜+ +⎟⎟
⎟⎟⎟ ⎜⎜⎜0 0 ⎟⎟
⎟
⎜⎜⎜+ − + −⎟⎟ ⎜⎜0 ⎜⎜+ +⎟⎟⎟⎟
⎟⎟⎟ = (++) ⊗ ⎜⎜⎜⎜⎜
0 ⎟⎟
H4 = ⎜⎜⎜⎜ ⎟⎟⎟ + (+−) ⊗ ⎜⎜⎜⎜⎜ ⎟,
0 ⎟⎟⎟⎟⎠
(8.27)
⎜⎜⎝+ + − −⎟⎟⎠ ⎜⎜⎝+ −⎟⎟⎠ ⎜⎜⎝0
+ − − + 0 0 + −
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+⎟⎟⎟ ⎜⎜⎜0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜0 ⎟⎟⎟ ⎜+⎟
H4 = (+ + ++) ⊗ ⎜⎜⎜ ⎟⎟⎟ + (+ − +−) ⊗ ⎜⎜⎜⎜⎜ ⎟⎟⎟⎟⎟
⎜⎜⎜0 ⎟⎟⎟ ⎜⎜⎜0 ⎟⎟⎟
⎝ ⎠ ⎝ ⎠
0 0
⎛ ⎞ ⎛ ⎞
⎜⎜⎜0 ⎟⎟⎟ ⎜⎜⎜0 ⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟⎟⎟ ⎜⎜⎜⎜ ⎟⎟⎟⎟
+ (+ + −−) ⊗ ⎜⎜⎜⎜⎜ ⎟⎟⎟⎟⎟ + (+ − −+) ⊗ ⎜⎜⎜⎜⎜ ⎟⎟⎟⎟⎟ .
0 0
(8.28)
⎜⎜⎜+⎟⎟⎟ ⎜⎜⎜0 ⎟⎟⎟
⎝ ⎠ ⎝ ⎠
0 +
F = B1 + B2 + · · · + Bn/k . (8.40)
Example 8.3.1: The 12-point FHT algorithm. Consider the block-cyclic Hada-
mard matrix H12 of order 12 with first block row (Q0 , Q1 , Q1 ), i.e.,
H12 = Q0 ⊗ I3 + Q1 ⊗ U + Q1 ⊗ U 2 , (8.41)
where
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + + +⎟⎟
⎟ ⎜⎜⎜+ − − −⎟⎟
⎟
⎜⎜⎜− + − +⎟⎟⎟⎟ ⎜⎜⎜+ + + −⎟⎟⎟⎟
Q0 = ⎜⎜⎜⎜ ⎟, Q1 = ⎜⎜⎜⎜ ⎟.
−⎟⎟⎟⎟⎠ +⎟⎟⎟⎟⎠
(8.42)
⎜⎜⎝− + + ⎜⎜⎝+ − +
− − + + + + − +
Algorithm 8.3.2:
Input: An A(12, 2)-type Hadamard matrix H12 , X = (x1 , x2 , . . . , x12 )T signal
vector and Pi column vectors of dimension 12/2 = 6, whose i’th element
is equal to 1, and whose remaining elements are equal to 0.
Step 1. Decompose H12 as H12 = (+ +) ⊗ A1 + (+−) ⊗ A2 , where
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + 0 − 0 −⎟⎟⎟ ⎜⎜⎜0 0 + 0 + 0 ⎟⎟⎟
⎜⎜⎜⎜0 ⎟ ⎜⎜⎜⎜− ⎟
⎜⎜⎜ 0 + 0 + 0 ⎟⎟⎟⎟⎟ ⎜⎜⎜ − 0 + 0 +⎟⎟⎟⎟⎟
⎟ ⎟
⎜⎜⎜0
⎜⎜⎜ 0 0 + 0 +⎟⎟⎟⎟⎟ ⎜⎜⎜−
⎜⎜⎜ + + 0 + 0 ⎟⎟⎟⎟⎟
⎟ ⎟
⎜⎜⎜−
⎜⎜⎜ + + 0 + 0 ⎟⎟⎟⎟⎟ ⎜⎜⎜0
⎜⎜⎜ 0 0 − 0 −⎟⎟⎟⎟⎟
⎟ ⎟
⎜⎜⎜0
⎜⎜⎜ − + + 0 −⎟⎟⎟⎟ ⎜⎜⎜+
⎜⎜⎜ 0 0 0 + 0 ⎟⎟⎟⎟
⎟⎟ ⎟⎟
⎜⎜⎜+ 0 0 0 + 0 ⎟⎟⎟⎟ ⎜⎜⎜0 + − − 0 +⎟⎟⎟⎟
A1 = ⎜⎜⎜⎜ ⎟⎟ , A2 = ⎜⎜⎜⎜ ⎟⎟ . (8.43)
⎜⎜⎜0 + 0 0 0 +⎟⎟⎟⎟ ⎜⎜⎜+ 0 − + + 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜+ 0 − + + 0 ⎟⎟⎟⎟ ⎜⎜⎜0 − 0 0 0 −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜0 − 0 − + +⎟⎟⎟⎟ ⎜⎜⎜+ 0 + 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜+ 0 + 0 0 0 ⎟⎟⎟⎟ ⎜⎜⎜0 + 0 + − −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜0 + 0 + 0 0 ⎟⎟⎟⎟ ⎜⎜⎜+ 0 + 0 − +⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠
+ 0 + 0 − + 0 − 0 − 0 0
f = X1 ⊗ P1 + X2 ⊗ P2 + · · · + X6 ⊗ P6 , (8.44)
B1 B2 B3 B4 B5 B6
where
f f f
X1 = 1 , X2 = 3 , X3 = 5 ,
f2 f4 f6
(8.45)
f f f
X4 = 7 , X5 = 9 , X6 = 11 .
f8 f10 f12
Step 3. Perform the fast WHTs
+ +
X for i = 1, 2, . . . , 6. (8.46)
+ − i
Step 4. Compute
B j = (++)X j ⊗ A1 P j + (+−)X j ⊗ A2 P j , j = 1, 2, . . . , 6, (8.47)
where the values of B j are shown in Table 8.1.
Step 5. Compute the spectral elements of the transform as F = B1 + B2 +
· · · + B6 .
Output: The 12-point HT coefficients.
Flow graphs of 12-dimensional vectors B j , j = 1, 2, . . . , 6 computations are
given in Fig. 8.1.
Note that A1 and A2 are block-cyclic matrices of dimension 12 × 6 with the first
block rows represented as (R0 , R1 , R1 ) and (T 0 , T 1 , T 1 ), where
⎛ ⎞ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ +⎟⎟⎟ ⎜⎜⎜0 −⎟⎟⎟ ⎜⎜⎜0 0 ⎟⎟⎟ ⎜⎜⎜+ 0 ⎟⎟⎟
⎜⎜⎜0 0 ⎟⎟⎟ ⎜⎜⎜+ 0 ⎟⎟⎟ ⎜⎜⎜− −⎟⎟⎟ ⎜⎜⎜0 +⎟⎟⎟
R0 = ⎜⎜⎜⎜ ⎟⎟ , R1 = ⎜⎜⎜⎜ ⎟⎟ , T 0 = ⎜⎜⎜⎜ ⎟⎟ , T 1 = ⎜⎜⎜⎜ ⎟⎟ . (8.48)
⎜⎝⎜0 0 ⎟⎟⎟⎠⎟ ⎜⎝⎜0 +⎟⎟⎟⎠⎟ ⎜⎝⎜− +⎟⎟⎟⎠⎟ ⎜⎝⎜+ 0 ⎟⎟⎟⎠⎟
− + + 0 0 0 0 −
Thus,
⎛ ⎞ ⎛ ⎞
⎜⎜⎜R0 R1 R1 ⎟⎟⎟ ⎜⎜⎜T 0 T 1 T 1 ⎟⎟⎟
⎜
A1 = ⎜⎜⎜⎝R1 R0 R1 ⎟⎟⎟⎟⎠ , ⎜
A2 = ⎜⎜⎜⎝T 1 T 0 T 1 ⎟⎟⎟⎟⎠ . (8.49)
R1 R1 R0 T1 T1 T0
f1 f2 f3 f4
1 –2 –3 –4 5 6 7 8 9 10 11 12 1 –2 3 4 –5 6 7 –8 –9 10 11 –12
f5 f6 f7 f8
1 2 3 4 5 –6 –7 –8 9 10 11 12 –1 2 3 –4 5 –6 7 8 –9 10 11 –12
f9 f10 f11 f12
Note that above we ignored the interior structure of a Hadamard matrix. Now
we examine it in more detail. We see that (1) the Hadamard matrix H12 is a block-
cyclic, block-symmetric matrix; (2) the matrices in Eq. (8.46) are also block-cyclic,
block-symmetric matrices; and (3) the 12-point HT requires only 60 addition
operations.
Let us prove the last statement. In reality, to compute all elements of vectors
B1 and B2 , it is necessary to perform four addition operations, i.e., two 2-point
HTs are necessary. Then, it is not difficult to see that the computation of the sum
B1 + B2 requires only 12 additions because there are four repetition pairs, as well as
B1 (4+i)+B2 (4+i) = B1 (8+i)+B2 (8+i), for i = 1, 2, 3, 4. A similar situation occurs
for computing B3 + B4 and B5 + B6 . Hence, the complete 12-point HT requires only
60 addition operations.
Now, we continue Example 8.3.1 for an inverse transform. Note that the 12-point
inverse HT can be computed as
1 T
X= H Y. (8.50)
12 12
Step 2. Split the input vector Y into six parts as Y = Y1 ⊗P1 +Y2 ⊗P2 +· · ·+
Y6 ⊗ P6 , where
y y y
Y1 = 1 , Y2 = 3 , Y3 = 5 ,
y2 y4 y6
(8.52)
y y y
Y4 = 7 , Y5 = 9 , Y6 = 11 .
y8 y10 y12
D, D2 D3 D4 D5 D6
B1 B2 B3 B4 B5
f1 + f2 f3 + f 4 − f5 − f6 f7 − f 8 − f9 − f10
− f1 + f2 − f3 + f4 f5 − f6 f7 + f 8 f9 − f10
− f1 + f2 f3 − f 4 − f5 − f6 − f7 + f8 f9 + f10
− f1 − f2 f3 + f 4 f5 − f6 − f7 − f8 − f9 + f10
− f1 − f2 f3 − f 4 f5 + f6 f7 + f 8 − f9 − f10
f1 − f2 f3 + f 4 − f5 + f6 − f7 + f8 f9 − f10
− f1 − f2 − f3 + f4 − f5 + f6 f7 − f 8 − f9 − f10
f1 − f2 − f3 − f4 − f5 − f6 f7 + f 8 f9 − f10
− f1 − f2 − f3 + f4 − f5 − f6 f7 − f 8 f9 + f10
f1 − f2 − f3 − f4 f5 − f6 f7 − f 8 − f9 + f10
f1 + f2 − f3 + f4 − f5 − f6 − f7 + f8 − f9 + f10
− f1 + f2 − f3 − f4 f5 − f6 − f7 − f8 − f9 − f10
− f1 − f2 − f3 + f4 − f5 − f6 − f7 + f8 − f9 − f10
f1 − f2 − f3 − f4 f5 − f6 − f7 − f8 f9 − f10
f1 + f2 − f3 + f4 f5 + f6 − f7 + f8 − f9 − f10
− f1 + f2 − f3 − f4 − f5 + f6 − f7 − f8 f9 − f10
− f1 − f2 f3 − f 4 − f5 − f6 − f7 + f8 − f9 − f10
f1 − f2 f3 + f 4 f5 − f6 − f7 − f8 f9 − f10
− f1 − f2 − f3 + f4 f5 + f6 − f7 + f8 f9 + f10
f1 − f2 − f3 − f4 − f5 + f6 − f7 − f8 − f9 + f10
B6 B7 B8 B9 B10
− f11 + f12 − f13 − f14 − f15 + f16 − f17 − f18 f19 − f20
− f11 − f12 f13 − f14 − f15 − f16 f17 − f18 f19 + f20
− f11 + f12 f13 + f14 − f15 + f16 − f17 − f18 − f19 + f20
− f11 − f12 − f13 + f14 − f15 − f16 f17 − f18 − f19 − f20
f11 − f12 − f13 − f14 − f15 + f16 − f17 − f18 f19 + f20
f11 + f12 f13 − f14 − f15 − f16 f17 − f18 − f19 − f20
− f11 + f12 f13 + f14 − f15 + f16 f17 + f18 − f19 + f20
− f11 − f12 − f13 + f14 − f15 − f16 − f17 − f18 − f19 − f20
f11 + f12 − f13 − f14 f15 − f16 − f17 − f18 − f19 + f20
− f11 + f12 f13 − f14 f15 + f16 f17 − f18 − f19 − f20
f11 − f12 − f13 − f14 − f15 + f16 f17 + f18 − f19 + f20
f11 + f12 f13 − f14 − f15 − f16 − f17 + f18 − f19 − f20
f11 − f12 f13 + f14 f15 + f16 − f17 − f18 f19 − f20
f11 + f12 − f13 + f14 − f15 + f16 f17 − f18 f19 + f20
− f11 + f12 − f13 + f14 f15 − f16 − f17 − f18 − f19 + f20
− f11 − f12 − f13 − f14 f15 + f16 f17 − f18 − f19 − f20
− f11 + f12 − f13 − f14 f15 − f16 f17 + f18 f19 + f20
− f11 − f12 f13 − f14 f15 + f16 − f17 + f18 − f19 + f20
− f11 + f12 − f13 − f14 − f15 + f16 − f17 + f18 f19 − f20
− f11 − f12 f13 − f14 − f15 − f16 − f17 − f18 f19 + f20
a1 a1
5 1
a2 6 a2 2
b1 7 b1 3
b2 4
8 b2
9
9
a1 = f1 + f2 a1 = f5 + f6
10
10
a2 = f1 – f2 a2 = f5 – f6
11
b1 = f3 + f4 11 b1 = f7 + f8
b2 = f3 – f4 b2 = f7 – f8 12
12
13
13
a1 + b1 = C2 (5) 14
14
a1 + b1 = C1 (1)
15 –a2 – b2 = C2 (6) 15
–a2 – b2 = C1 (2) –a2 + b2 = C2 (7)
16
16
–a2 + b2 = C1 (3) –a1 + b1 = C2 (8)
–a1 + b1 = C1 (4) 17
17
18
18
19
19
20
20
C1 (.)
C2 (.)
Note that above we ignored the interior structure of the Hadamard matrix. Now
we examine it in more detail. We can see that to compute all of the elements of
vector Bi , it is necessary to perform two addition operations, i.e., 2-point HTs are
necessary. Then, it is not difficult to see that the computation of the sum B2i–1 + B2i
requires only eight additions. Hence, the complete 20-point HT requires only 140
addition operations.
We introduce the notation Ci = B2i–1 + B2i , i = 1, 2, 3, 4, 5; the spectral elements
of the vector F can be calculated as F = C1 + C2 + · · · + C5 . The flow graphs of
computation of Ci are given in Figs. 8.2–8.4.
a1 a1
1 1
a2 2 a2 2
b1 3 b1 3
b2 4 4
b2
5 5
a1 = f9 + f10 a1 = f13 + f14
6 6
a2 = f9 – f10 a2 = f13 – f14
7 7
b1 = f11 + f12 b1 = f15 + f16
8 8
b2 = f11 – f12 b2 = – f16
13 9
14
a1 + b1 = C4 (13) 10
a1 + b1 = C3 (9) 15 11
–a2 – b2 = C4 (14)
16 12
–a2 – b2 = C3 (10) –a2 + b2 = C4 (15)
–a2 + b2 = C3 (11) 17 17
–a1 + b1 = C4 (16)
18 18
–a1 + b1 = C3 (12)
19 19
20
20
C3 (.) C4 (.)
a1 1
a2 2
3
b1
b2 4
a1 = f17 + f18
6
a2 = f17 – f18
b1 = f19 + f20 7
b2 = f19 – f20
8
9
a1 + b1 = C5 (17)
10
–a2 – b2 = C5 (18)
–a2 + b2 = C5 (19) 11
–a1 + b1 = C5 (20)
12
13
14
15
16
C5 (.)
R1 ( j) = v1 X j ⊗ D1 P j , R2 ( j) = v2 X j ⊗ D2 P j ,
(8.62)
R3 ( j) = v3 X j ⊗ D2 P j , R4 ( j) = v4 X j ⊗ D4 P j .
B1 B2 B3
F = B1 + B2 + · · · + B6 . (8.63)
Output: 24-point HT coefficients.
B4 B5 B6
− f13 − f14 − f15 − f16 f17 + f18 − f19 − f20 − f21 − f22 − f23 − f24
f13 + f14 − f15 − f16 f17 + f18 + f19 + f20 f21 + f22 − f23 − f24
f13 + f14 + f15 + f16 f17 + f18 − f19 − f20 f21 + f22 + f23 + f24
− f13 − f14 + f15 + f16 f17 + f18 + f19 + f20 − f21 − f22 + f23 + f24
f13 + f14 + f15 + f16 f17 + f18 − f19 − f20 − f21 − f22 − f23 − f24
− f13 − f14 + f15 + f16 f17 + f18 + f19 + f20 f21 + f22 − f23 − f24
f13 + f14 − f15 − f16 f17 + f18 − f19 − f20 f21 + f22 + f23 + f24
f13 + f14 + f15 + f16 f17 + f18 + f19 + f20 − f21 − f22 + f23 + f24
− f13 − f14 − f15 − f16 f17 + f18 + f19 + f20 f21 + f22 + f23 + f24
f13 + f14 − f15 − f16 − f17 − f18 + f19 + f20 − f21 − f22 + f23 + f24
f13 + f14 + f15 + f16 − f17 − f18 + f19 + f20 f21 + f22 − f23 − f24
f13 + f14 − f15 − f16 − f17 − f18 − f19 − f20 f21 + f22 + f23 + f24
− f13 + f14 − f15 + f16 f17 − f18 − f19 + f20 − f21 + f22 − f23 + f24
f13 − f14 − f15 + f16 f17 − f18 + f19 − f20 f21 − f22 − f23 + f24
f13 − f14 + f15 − f16 f17 − f18 − f19 + f20 f21 − f22 + f23 − f24
− f13 + f14 + f15 − f16 f17 − f18 + f19 − f20 − f21 + f22 + f23 − f24
f13 − f14 + f15 − f16 f17 − f18 − f19 + f20 − f21 + f22 − f23 + f24
− f13 + f14 + f15 − f16 f17 − f18 + f19 − f20 f21 − f22 − f23 + f24
f13 − f14 − f15 + f16 f17 − f18 − f19 + f20 f21 − f22 + f23 − f24
f13 − f14 + f15 − f16 f17 − f18 + f19 − f20 − f21 + f22 + f23 − f24
− f13 + f14 + f15 − f16 f17 − f18 + f19 − f20 f21 − f22 + f23 − f24
f13 − f14 − f15 + f16 − f17 + f18 + f19 − f20 − f21 + f22 + f23 − f24
f13 − f14 + f15 − f16 − f17 + f18 + f19 − f20 f21 − f22 − f23 + f24
− f13 + f14 + f15 − f16 − f17 + f18 − f19 + f20 f21 − f22 + f23 − f24
Thus, the 4-point WHT can be computed by seven additions and three one-bit shift
operations (two operations to calculate z0 , one for z1 , and four for y0 , y1 , y2 , y3 , and
three one-bit shift operations).
We next demonstrate the full advantages of shift/add architecture on the 16-point
FHT algorithm.
Algorithm 8.5.1: 16-point FHT.
Input: The signal vector X = (x0 , x1 , . . . , x15 )T .
Step 1. Split input vector X as
⎛ ⎞ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜ x0 ⎟⎟⎟ ⎜⎜⎜ x4 ⎟⎟⎟ ⎜⎜⎜ x8 ⎟⎟⎟ ⎜⎜⎜ x12 ⎟⎟⎟
⎜⎜⎜ x ⎟⎟⎟ ⎜⎜⎜ x ⎟⎟⎟ ⎜⎜⎜ x ⎟⎟⎟ ⎜⎜⎜ x ⎟⎟⎟
X 0 = ⎜⎜⎜⎜ 1 ⎟⎟⎟⎟ , X 1 = ⎜⎜⎜⎜ 5 ⎟⎟⎟⎟ , X 2 = ⎜⎜⎜⎜ 9 ⎟⎟⎟⎟ , X 3 = ⎜⎜⎜⎜ 13 ⎟⎟⎟⎟ . (8.66)
⎜⎜⎝ x2 ⎟⎟⎠ ⎜⎜⎝ x6 ⎟⎟⎠ ⎜⎜⎝ x10 ⎟⎟⎠ ⎜⎜⎝ x14 ⎟⎟⎠
x3 x7 x11 x15
Pi = H4 X i , i = 0, 1, 2, 3. (8.67)
Step 3. Define the vectors
r0 = P1 + P2 + P3 , r1 = P0 − r0 . (8.68)
Step 4. Compute the vectors
Y 0 = (y0 , y1 , y2 , y3 )T = r0 + P0 , Y 1 = (y4 , y5 , y6 , y7 )T = r1 + 2P2 ,
Y 2 = (y8 , y9 , y10 , y11 )T = r1 + 2P1 , Y 3 = (y12 , y13 , y14 , y15 )T = r1 + 2P3 .
(8.69)
T
Output: The 16-point FHT coefficients, i.e., Y 0 , Y 1 , Y 2 , Y 3 .
We conclude that a 1D WHT of order 16 requires only 56 addition/subtraction
operations and 24 one-bit shifts. In Fig. 8.5, the flow graph of a 1D WHT with
shifts for N = 16 is given.
Now, let us consider the j’th item of the sum of Eq. (8.73),
B j = v1 X j ⊗ A1 P j + v2 X j ⊗ A2 P j + · · · + vk X j ⊗ Ak P j . (8.71)
x0
2
x1
x2 2
x3 2 r0
x4
2 r0
x5
x6 2
2
x7 2 r1
2 r2
x8
x9 2
x10 2
2
r3
x11 2
x12 r1
x13 2
x14 2
2
x15
operations, and in order to obtain each sum B2i–1 + B2i , i = 1, 2, . . . , n/2k , we need
only 2k+2 operations. Hence, the complexity of the Hn f transform can be calculated
as
n n % n & % n &
C(n, 2k ) = k k2k + k+1 2k+2 + n k+1 − 1 = n k + k+1 + 1 . (8.73)
2 2 2 2
12 3 1 4 16 64 72 60 132
24 3 2 5 32 160 168 144 552
48 3 3 6 64 384 384 336 2256
96 3 4 7 128 896 864 768 9120
20 5 1 5 32 160 200 140 380
40 5 2 6 64 384 440 320 1560
80 5 3 7 128 896 960 720 6320
160 5 4 8 256 2048 2080 1600 25,440
320 5 5 9 512 4608 4480 3520 102,080
28 7 1 5 32 160 392 252 766
56 7 2 6 64 384 840 560 3080
112 7 3 7 128 896 1792 1232 12432
224 7 4 8 256 2048 3808 2688 49,952
36 9 1 6 64 384 648 396 1260
72 9 2 7 128 896 1458 864 5112
144 9 3 8 256 2048 2880 1872 20,592
288 9 4 9 512 4608 6048 4032 82,656
References
1. S. S. Agaian and H. G. Sarukhanyan, Hadamard matrices representation by
(−1, +1)-vectors, in Proc. of Int. Conf. Dedicated to Hadamard Problem’s
Centenary, Australia, (1993).
9.1 ODs
The original definition of OD was proposed by Geramita et al.6 Dr. Seberry
(see Fig. 9.1), a co-author of that paper, is world renowned for her discoveries
on Hadamard matrices, ODs, statistical designs, and quaternion OD (QOD).
She also did important work on cryptography. Her studies of the application
of discrete mathematics and combinatorial computing via bent functions and
S -box design have led to the design of secure crypto algorithms and strong
hashing algorithms for secure and reliable information transfer in networks and
telecommunications. Her studies of Hadamard matrices and ODs are also applied
in CDMA technologies.11
An OD of order n and type (s1 , s2 , . . . , sk ), denoted by OD(n; s1 , s2 , . . . , sk ) is
an n × n matrix D with entries from the set (0, ±x1 , ±x2 , . . . , ±xk ) where each xi
275
occurs si times in each row and column, such that the distinct rows are pairwise
orthogonal, i.e.,
k
D(x1 , x2 , . . . , xk )DT (x1 , x2 , . . . , xk ) = si xi2 In , (9.1)
i=1
It is well known that the maximum number of variables that may appear in an
OD is given by Radon’s function ρ(n), which is defined by ρ(n) = 8c + 2d , where
n = 2a b, b is an odd number, and a = 4c + d, 0 ≤ d < 4 (see, for example, Ref. 5).
Now we present two simple OD construction methods.5 Let two cyclic matrices
A1 , A2 of order n exist, satisfying the condition
then
⎛ ⎞
⎜⎜⎜ A BR CR DR ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ −BR T ⎟
⎜⎜⎜ A D R −C R⎟⎟⎟⎟
T
is a W(4n, f ) when A, B, C, D are (0, −1, +1) matrices, and an OD, OD(4n; s1 ,
s2 , . . . , sk ), on x1 , x2 , . . . , xk when A, B, C, D are entries from (0, ±x1 , ±x2 , . . . ,
±xk ) and f = ki=1 si xi2 . Here, R is a back-diagonal identity matrix of order n.
• If there are four sequences A, B, C, D of length n with entries from (0, ±x1 ,
±x2 , ±x3 , ±x4 ) with zero periodic or nonperiodic autocorrelation functions, then
these sequences can be used as the first rows of cyclic matrices that can be used
in the Goethals–Seidel array to form an OD(4n; s1 , s2 , s3 , s4 ). Note that if there
are sequences of length n with zero nonperiodic autocorrelation functions, then
there are sequences of length n + m for all m ≥ 0.
• OD of order 2t = (m − 1)n and type (1, m − 1, mn − m − n) exist.
• If two Golay sequences of length m and a set of two Golay sequences of length
k exist, then a three-variable full OD, OD[4(m + 2k); 4m, 4k, 4k], exists.76
k
CC H = C H C = si s2i In , (9.9)
i=1
k
S HS = |zi |2 In . (9.10)
i=1
We can generalize these definitions to allow the design entries to be real linear
combinations of the permitted variables and their quaternion multipliers, in which
case we say the design is by linear processing.
Examples:
• The matrix X = −x − jx2 kx1 is a QOD on real variables x1 , x2 .
1 ix2
• The matrix Z = iz−1jz∗2 izjz2∗1 is a QOD on complex variables z1 , z2 .
• The matrix A = a0 0a is the most obvious example of a QOD on quaternion
variable a. Note that QODs on quaternion variables are the most difficult to
construct.
Theorem 9.1.1: 93 Let A and B be CODs, COD(n, n; s1 , s2 , . . . , sk ) and COD(n, n;
t1 , t2 , . . . , tk ), respectively, on commuting complex variables z1 , z2 , . . . , zk . If H H B
is symmetric, then A + jB is QOD QOD(n, n; s1 + t1 , s2 + t2 , . . . , sk6 + tk ) on the
complex variables z1 , z2 , . . . , zk , where AH is the quaternion transpose.
Xi ∗ X j = 0, i j, i, j = 1, 2, 3, 4;
Xi X j = X j Xi , i, j = 1, 2, 3, 4;
Xi RX Tj = X j RXiT , i, j = 1, 2, 3, 4;
4
(9.18)
Xi is a (+1, −1) matrix;
i=1
4
Xi XiT = kIk .
i=1
Note that in Refs. 10 and 92, only cyclic T matrices were constructed. In this case,
the second and the third conditions of Eq. (9.18) are automatically satisfied.
The first rows of some examples of cyclic T matrices of orders 3, 5, 7, 9 are
given as follows:
9.3 A Matrices
H(x11 , x21 , . . . , xl1 )H T (x12 , x22 , . . . , xl2 ) + H(x12 , x22 , . . . , xl2 )H T (x11 , x21 , . . . , xl1 )
l
2m
= xi1 xi2 Im . (9.22)
l i=1
• Baumert–Hall array if l = 4.
• Plotkin array if l = 8.
• Yang array if l = 2.
The A matrix of order 12 depending on three parameters is given as follows:
⎛ ⎞
⎜⎜⎜ a b c b −c a c b −a c a −b⎟⎟⎟
⎜⎜⎜⎜ c a b −c a b b −a c a −b c⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ b c a a b −c −a c b −b c a⎟⎟⎟⎟⎟
⎜⎜⎜−b c −a a b c −c b −a c −a b⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ c −a −b c a b b −a −c −a b c⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜−a −b c b c a −a −c b b c −a⎟⎟⎟⎟⎟
A(a, b, c) = ⎜⎜⎜ ⎟ . (9.23)
⎜⎜⎜ −c −b a c −b a a b c −b −a c⎟⎟⎟⎟⎟
⎜⎜⎜−b a −c −b a c c a b −a c −b⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ a −c −b a c −b b c a c −b −a⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ −c −a b −c a −b b a −c a b c⎟⎟⎟⎟⎟
⎜⎜⎜−a b −c a −b −c a −c b c a b⎟⎟⎟
⎜⎝ ⎟⎠
b −c −a −b −c a −c b a b c a
Note that for a, b, c = ±1, the above-given matrix is the Hadamard matrix of
order 12. For a = 1, b = 2, and c = 1, this matrix is the integer orthogonal matrix
⎛ ⎞
⎜⎜⎜ 1 2 1 2 −1 1 1 2 −1 1 1 −2⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ 1 1 2 −1 1 2 2 −1 1 1 −2 1⎟⎟⎟⎟
⎜⎜⎜ 2 ⎟
⎜⎜⎜ 1 1 1 2 −1 −1 1 2 −2 1 1⎟⎟⎟⎟⎟
⎜⎜⎜−2 ⎟
⎜⎜⎜ 1 −1 1 2 1 −1 2 −1 1 −1 2⎟⎟⎟⎟
⎟
⎜⎜⎜ 1
⎜⎜⎜ −1 −2 1 1 2 2 −1 −1 −1 2 1⎟⎟⎟⎟⎟
⎟
⎜⎜−1 −2 1 2 1 1 −1 −1 2 2 1 −1⎟⎟⎟⎟
A(1, 2, 1) = ⎜⎜⎜⎜ ⎟⎟ . (9.24)
⎜⎜⎜−1 −2 1 1 −2 1 1 2 1 −2 −1 1⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜−2 1 −1 −2 1 1 1 1 2 −1 1 −2⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜⎜ 1 −1 −2 1 1 −2 2 1 1 1 −2 −1⎟⎟⎟⎟
⎟
⎜⎜⎜−1
⎜⎜⎜ −1 2 −1 1 −2 2 1 −1 1 2 1⎟⎟⎟⎟⎟
⎟
⎜⎜⎜−1 2 −1 1 −2 −1 1 −1 2 1 1 2⎟⎟⎟⎟
⎝ ⎠
2 −1 −1 −2 −1 1 −1 2 −1 2 1 1
We can see that if H(x1 , x2 , . . . , xl ) is an A matrix, then H(±1, ±1, . . . , ±1) is the
Hadamard matrix.
Theorem 9.3.1:14,15,98 For the existence of an A matrix of order m depending
on l parameters, it is necessary and sufficient that there are (0, ±1) matrices Ki ,
I = 1, 2, . . . , l satisfying the conditions
Ki ∗ K j = 0, i j, i, j = 1, 2, . . . , l,
Ki K Tj + K j KiT = 0, i j, i, j = 1, 2, . . . , l, (9.25)
m
Ki KiT = Im , i = 1, 2, . . . , l.
l
Case 2. l = 4,
H1 (x1 , x2 , x3 , x4 ) = x1 K1 + x2 K2 + x3 K3 + x4 K4 ,
H2 (x1 , x2 , x3 , x4 ) = H1 (−x2 , x1 , −x4 , x3 ),
(9.28)
H3 (x1 , x2 , x3 , x4 ) = H1 (−x3 , x4 , x1 , −x2 ),
H4 (x1 , x2 , x3 , x4 ) = H1 (−x4 , −x3 , x2 , x1 ).
Case 3. l = 8,
H1 (x1 , x2 , . . . , x8 ) = x1 K1 + x2 K2 + · · · + x8 K8 ,
H2 (x1 , x2 , . . . , x8 ) = H1 (−x2 , x1 , x4 , −x3 , x6 , −x5 , −x8 , x7 ),
H3 (x1 , x2 , . . . , x8 ) = H1 (−x3 , −x4 , x1 , x2 , x7 , x8 , −x5 , −x5 ),
H4 (x1 , x2 , . . . , x8 ) = H1 (−x4 , x3 , −x2 , x1 , x8 , −x7 , x6 , −x5 ),
(9.29)
H5 (x1 , x2 , . . . , x8 ) = H1 (−x5 , −x6 , −x7 , −x8 , x1 , x2 , x3 , x4 ),
H6 (x1 , x2 , . . . , x8 ) = H1 (−x6 , x5 , −x8 , x7 , −x2 , x1 , −x4 , x3 ),
H7 (x1 , x2 , . . . , x8 ) = H1 (−x7 , x8 , x5 , −x6 , −x3 , x4 , x1 , −x2 ),
H8 (x1 , x2 , . . . , x8 ) = H1 (−x8 , −x7 , x6 , x5 , −x4 , −x3 , x2 , x1 ).
The following lemma relates to the construction of 4-basic frame using T matrices.
⎛ ⎞ ⎛ ⎞
⎜⎜⎜X B −X −X1 X2 ⎟⎟⎟⎟⎟ ⎜⎜⎜X B −X2 −X1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 3 n 4
⎟⎟⎟ ⎜⎜⎜ 4 n X3
⎟⎟⎟
⎜⎜⎜ ⎜⎜⎜
⎜⎜⎜ X4 X3 Bn −X2T −X1T ⎟⎟⎟⎟ ⎜⎜⎜ −X3 X4 Bn X1T −X2T ⎟⎟⎟⎟
K3 = ⎜⎜⎜ ⎟⎟⎟ , K4 = ⎜⎜⎜ ⎟⎟⎟ . (9.30b)
⎜⎜⎜ −X −X T −X B −X4T ⎟⎟⎟⎟ ⎜⎜⎜ −X X1T −X4 Bn X3T ⎟⎟⎟⎟
⎜⎜⎜ 1 2 3 n
⎟⎟⎠ ⎜⎜⎜ 2
⎟⎟⎠
⎝ ⎝
X2 −X1T X4T −X3 Bn −X1 −X2T −X3T −X4 Bn
Example 9.3.1: The 4-basic frame of order 12. Using the following T matrices:
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ 0 0 ⎟⎟⎟ ⎜⎜⎜0 + 0 ⎟⎟⎟ ⎜⎜⎜0 0 +⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
X1 = ⎜⎜⎜0 + 0 ⎟⎟⎟⎟ , X2 = ⎜⎜⎜0 0 +⎟⎟⎟⎟ , X3 = ⎜⎜⎜+ 0 0 ⎟⎟⎟⎟ , (9.31)
⎝⎜ ⎠⎟ ⎝⎜ ⎟⎠ ⎝⎜ ⎠⎟
0 0 + + 0 0 0 + 0
we obtain
⎛ ⎞
⎜⎜⎜0
⎜⎜⎜ 0 + 0 + 0 0 0 + 0 0 0 ⎟⎟⎟⎟
⎟
⎜⎜⎜0
⎜⎜⎜ + 0 0 0 + + 0 0 0 0 0 ⎟⎟⎟⎟⎟
⎟⎟
⎜⎜⎜+
⎜⎜⎜ 0 0 + 0 0 0 + 0 0 0 0 ⎟⎟⎟⎟
⎟⎟
⎜⎜⎜0
⎜⎜⎜ − 0 0 0 + 0 0 0 0 + 0 ⎟⎟⎟⎟
⎟⎟
⎜⎜⎜0
⎜⎜⎜ 0 − 0 + 0 0 0 0 0 0 +⎟⎟⎟⎟⎟
⎜⎜⎜− ⎟⎟
+ + 0 ⎟⎟⎟⎟
K1 = ⎜⎜⎜⎜⎜
0 0 0 0 0 0 0 0
⎟⎟ , (9.32a)
⎜⎜⎜0 0 − 0 0 0 0 0 − 0 0 +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− 0 0 0 0 0 0 − 0 + 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0
⎜⎜⎜ − 0 0 0 0 − 0 0 0 + 0 ⎟⎟⎟⎟⎟
⎟⎟
⎜⎜⎜0
⎜⎜⎜ 0 0 0 + 0 0 0 − 0 0 −⎟⎟⎟⎟
⎟⎟
⎜⎜⎜0
⎜⎜⎝ 0 0 0 0 + − 0 0 0 − 0 ⎟⎟⎟⎟
⎟⎠
0 0 0 + 0 0 0 − 0 − 0 0
⎛ ⎞
⎜⎜⎜0
⎜⎜⎜ + 0 − 0 0 0 0 0 0 0 −⎟⎟⎟⎟
⎟
⎜⎜⎜+
⎜⎜⎜ 0 0 0 − 0 0 0 0 − 0 0 ⎟⎟⎟⎟⎟
⎟⎟
⎜⎜⎜0
⎜⎜⎜ 0 + 0 0 − 0 0 0 0 − 0 ⎟⎟⎟⎟
⎟⎟
⎜⎜⎜+
⎜⎜⎜ 0 0 0 + 0 0 + 0 0 0 0 ⎟⎟⎟⎟
⎟⎟
⎜⎜⎜0
⎜⎜⎜ + 0 + 0 0 0 0 + 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜0 ⎟⎟
+ + + 0 ⎟⎟⎟⎟
K2 = ⎜⎜⎜⎜⎜
0 0 0 0 0 0 0
⎟⎟ , (9.32b)
⎜⎜⎜0 0 0 0 + 0 0 − 0 − 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 0 0 0 + − 0 0 0 − 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0
⎜⎜⎜ 0 0 + 0 0 0 0 − 0 0 −⎟⎟⎟⎟⎟
⎟⎟
⎜⎜⎜0
⎜⎜⎜ 0 + 0 0 0 + 0 0 0 − 0 ⎟⎟⎟⎟
⎟⎟
⎜⎜⎜+
⎜⎜⎝ 0 0 0 0 0 0 + 0 − 0 0 ⎟⎟⎟⎟
⎟⎠
0 + 0 0 0 0 0 0 + 0 0 −
⎛ ⎞
⎜⎜⎜+ 0 0 0 0 0 − 0 0 0 + 0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 + 0 0 0 0 − 0 0 0 +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 + 0 0 0 0 0 0 − + 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 + 0 0 0 0 − − 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 0 + − 0 0 0 − 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜0 0 0 0 + 0 0 − 0 0 0 −⎟⎟⎟⎟
K3 = ⎜⎜⎜⎜ ⎟⎟ , (9.32c)
⎜⎜⎜− 0 0 0 0 − 0 0 − 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 − 0 − 0 0 0 − 0 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 − 0 − 0 − 0 0 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜⎜0 + 0 − 0 0 0 0 0 − 0 0 ⎟⎟⎟⎟
⎟
⎜⎜⎜0
⎜⎜⎝ 0 + 0 − 0 0 0 0 0 0 −⎟⎟⎟⎟⎟
⎠
+ 0 0 0 0 − 0 0 0 0 − 0
⎛ ⎞
⎜⎜⎜0 0 0 0 0 + 0 − 0 − 0 0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 0 + 0 0 0 0 − 0 − 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 + 0 − 0 0 0 0 −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 − 0 0 0 + 0 0 0 0 −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− 0 0 0 0 0 0 + 0 − 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜0 − 0 0 0 0 0 0 + 0 − 0 ⎟⎟⎟⎟
K4 = ⎜⎜⎜⎜ ⎟⎟ . (9.32d)
⎜⎜⎜0 − 0 + 0 0 0 0 0 0 + 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 − 0 + 0 0 0 0 0 0 +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− 0 0 0 0 + 0 0 0 + 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− 0 0 0 0 − 0 − 0 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜⎝0 − 0 − 0 0 0 0 − 0 0 0 ⎟⎟⎟⎟⎟
⎠
0 0 − 0 − 0 − 0 0 0 0 0
Ai = X1 ⊗ Ai−1 − X2 ⊗ BTi−1 ,
Bi = X1 ⊗ Bi−1 + X2 ⊗ ATi−1 ,
(9.33)
Ci = X1 ⊗ Ci−1 − X2 ⊗ DTi−1 ,
Di = X1 ⊗ Di−1 + X2 ⊗ Ci−1
T
.
Proof: Prove that the matrices in Eq. (9.33) satisfy the conditions of Eq. (9.18).
Prove the theorem by induction. Let i = 1. Compute
P ∗ Q = 0, P Q, P, Q ∈ {A1 , B1 , C1 , D1 }. (9.35)
Hence, we have A1 Rmn BT1 = B1 Rmn AT1 . Similarly, we can determine that
A1 + B1 + C1 + D1 = X1 ⊗ (A0 + B0 + C0 + D0 ) + X2
⊗ (AT0 − BT0 + C0T − DT0 ). (9.39)
Now, assuming that matrices Ai , Bi , Ci , Di are T matrices for all i ≤ k, prove the
theorem for i = k + 1. We verify only the fifth condition of Eq. (9.18),
From Theorem 9.3.2 and Lemma 9.3.3, there follows the existence of T matrices
of order 2n, where n takes its values from the following set of numbers:
{63, 65, 75, 77, 81, 85, 87, 91, 93, 95, 99, 111, 115, 117, 119, 123,
125, 129, 133, 135, 141, 148, 145, 147, 153, 155, 159, 161, 165, 169, 171,
175, 177, 185, 189, 195, 203, 205, 209, 215, 221, 225, 231, 235, 243, 245,
247, 255, 259, 265, 273, 275, 285, 287, 295, 297, 299, 301, 303, 305, 315,
323, 325, 329, 343, 345, 351, 357, 361, 371, 375, 377, 385, 387, 399, 403,
405, 413, 425, 427, 429, 435, 437, 441, 455, 459, 465, 475, 481, 483, 495,
505, 507, 513, 525, 533, 551, 555, 559, 567, 575, 585, 589, 603, 609, 611,
615, 621, 625, 627, 637, 645, 651, 663, 665, 675, 689, 693, 703, 705, 707,
715, 725, 729, 735, 741, 765, 767, 771, 775, 777, 779, 783, 793, 805, 817,
819, 825, 837, 845, 855, 861, 875, 885, 891, 893, 903, 915, 925, 931, 945,
963, 969, 975, 987, 999, 1005, 1007, 1025, 1029, 1045, 1053, 1071, 1075,
1083, 1107, 1113, 1121, 1125, 1127, 1155, 1159, 1161, 1175, 1197, 1203,
1215, 1225, 1235, 1239, 1251, 1269, 1275, 1281, 1285, 1305, 1313, 1323,
1325, 1365, 1375, 1377, 1407, 1425, 1431, 1463, 1475, 1485, 1515, 1525,
1539, 1563, 1575, 1593, 1605, 1625, 1647, 1677, 1701, 1755, 1799, 1827,
1919, 1923, 1935, 2005, 2025, 2085, 2093, 2121, 2187, 2205, 2243, 2403,
2415, 2451, 2499, 2525, 2565, 2613, 2625, 2709, 2717, 2727, 2807, 2835,
2919, 3003, 3015, 3059}.
where R is the back-diagonal identity matrix and A, B, C, and D are cyclic (−1, +1)
matrices of order n satisfying
If A, B, C, and D are cyclic symmetric (−1, +1) matrices, then a Williamson array
results in
⎛ ⎞
⎜⎜⎜ A B C D⎟⎟⎟
⎜⎜⎜ −B A −D C ⎟⎟⎟⎟
W = ⎜⎜⎜⎜ ⎟.
⎜⎜⎝ −C D A −B⎟⎟⎟⎟⎠
(9.50)
−D −C B A
A = aT 1 + bT 2 + cT 3 + dT 4 ,
B = −bT 1 + aT 2 + dT 3 − cT 4 ,
(9.51)
C = −cT 1 − dT 2 + aT 3 + bT 4 ,
D = −dT 1 + cT 2 − bT 3 + aT 4 .
to form an OD(4t; t, t, t), where R is the permutation matrix that transforms cyclic
to back-cyclic matrices or type 1 to type 2 matrices.
X = T 1 ⊗ A + T 2 ⊗ B + T 3 ⊗ C + T 4 ⊗ D,
Y = −T 1 ⊗ B + T 2 ⊗ A + T 3 ⊗ D − T 4 ⊗ C,
(9.53)
Z = −T 1 ⊗ C − T 2 ⊗ D + T 3 ⊗ A + T 4 ⊗ B,
W = −T 1 ⊗ D + T 2 ⊗ C − T 3 ⊗ B + T 4 ⊗ A
B1 = B2 = B3 = B4 = 1,
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟ ⎜⎜⎜+ − −⎟⎟⎟
A1 = ⎜⎜⎜⎜⎝+ + +⎟⎟⎟⎟⎠ , A2 = A3 = A4 = ⎜⎜⎜⎜⎝− + −⎟⎟⎟⎟⎠ . (9.56)
+ + + − − +
Consider a (0, ±1) matrix P = (pi, j ) of order 4n with elements pi, j , defined as
⎧
⎪
⎪
⎪ p = 1, p2i,2i−1 = −1, i = 1, 2, . . . , 2n,
⎨ 2i−1,2i
⎪
⎪ pi, j = 0, if i 2k − 1 & (9.61)
⎪
⎩ j 2k or i 2k & j 2k − 1, k = 1, 2, . . . , 2n.
S (X1 , X2 , X3 , X4 ) = X ⊗ H1 + Y ⊗ H2 , (9.64)
where matrices
1 A+B C+D 1 A−B C−D
X= , Y= , (9.65)
2 C + D −A − B 2 −C + D A − B
satisfy the conditions
X ∗ Y = 0,
X ± Y is a (+1, −1) matrix,
(9.66)
XY T = Y X T ,
XX T + YY T = 2kI2k .
where
aS + bT cU + dV
B8n (a, b, c, d) = . (9.72)
−cU − dV aS + bT
In Ref. 100, a Plotkin array of order 24 was presented, and the following
conjuncture was given.
Problem for exploration (Plotkin conjuncture): There are Plotkin arrays in every
order 8n, where n is a positive integer. Only two Plotkin arrays of order 8t are
known at this time. These arrays of order 8 and 24 are given below.92,100
⎛ ⎞
⎜⎜⎜ y x x x −w w z y −z z w −y⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ −x y x −x −z z −w −y w −w z −y⎟⎟⎟⎟
⎜⎜⎜⎜ −x ⎟
−x y x −y −w y −w −z −z w z⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ −x x −x y w w −z −w −y z y z⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜−w −w −z −y z x x x −y −y z −w⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
−z −w −x −x −w −w −z y⎟⎟⎟⎟
B(x, y, z, w) = ⎜⎜⎜⎜⎜
y y z x
⎟ . (9.75)
⎜⎜⎜−w w −w −y −x −x z x z y y z⎟⎟⎟⎟
⎜⎜⎜ z ⎟
⎜⎜⎜⎜ −w −w z −x x −x z y −y y w⎟⎟⎟⎟⎟
⎟
⎜⎜⎜ z −z y −w y y w −z w x x x⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ y −y −z −w −z −z −w −y −x w x −x⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝ z z y −z w −y −y w −x −x w x⎟⎟⎟⎟
⎠
−w −z w −z −y y −y z −x x −x w
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−b −a −c c −d⎟⎟⎟ ⎜⎜⎜ a b −d d b⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−d −b −a −c c⎟⎟⎟⎟ ⎜⎜⎜ b a b −d d⎟⎟⎟⎟
⎟ ⎟
W3,1 = ⎜⎜⎜⎜⎜ c −d −b −a −c⎟⎟⎟⎟⎟ , W3,2 = ⎜⎜⎜⎜⎜ d b a b −d⎟⎟⎟⎟⎟ , (9.76e)
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝ −c c −d −b −a⎟⎟⎟⎟ ⎜⎜⎝−d d b a b⎟⎟⎟⎟
⎠ ⎠
−a −c c −d −b b −d d b a
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−d −b c c b⎟⎟⎟ ⎜⎜⎜ −c a −d −d −a⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ b −d −b c c⎟⎟⎟⎟ ⎜⎜⎜−a −c a −d −d⎟⎟⎟⎟
⎟ ⎟
W3,3 = ⎜⎜⎜⎜⎜ c b −d −b c⎟⎟⎟⎟⎟ , W3,4 = ⎜⎜⎜⎜⎜−d −a −c a −d⎟⎟⎟⎟⎟ , (9.76f)
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝ c c b −d −b⎟⎟⎟⎟ ⎜⎜⎝−d −d −a −c a⎟⎟⎟⎟
⎠ ⎠
−b c c b −d a −d −d −a −c
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−a −b −d d −b⎟⎟⎟ ⎜⎜⎜ b −a c −c −a⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−b −a −b −d d⎟⎟⎟⎟ ⎜⎜⎜−a b −a c −c⎟⎟⎟⎟
⎟ ⎟
W4,1 = ⎜⎜⎜⎜⎜ d −b −a −b −d⎟⎟⎟⎟⎟ , W4,2 = ⎜⎜⎜⎜⎜ −c −a b −a c⎟⎟⎟⎟⎟ , (9.76g)
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝−d d −b −a −b⎟⎟⎟⎟ ⎜⎜⎝ c −c −a b −a⎟⎟⎟⎟
⎠ ⎠
−b −d d −b −a −a c −c −a b
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ c a d d −a⎟⎟⎟ ⎜⎜⎜−d b c c −b⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−a c a d d⎟⎟⎟⎟ ⎜⎜⎜−b −d b c c⎟⎟⎟⎟
⎟ ⎟
W4,3 = ⎜⎜⎜⎜⎜ d −a c a d⎟⎟⎟⎟⎟ , W4,4 = ⎜⎜⎜⎜⎜ c −b −d b c⎟⎟⎟⎟⎟ . (9.76h)
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝ d d −a c a⎟⎟⎟⎟ ⎜⎜⎝ c c −b −d b⎟⎟⎟⎟
⎠ ⎠
a d d −a c b c c −b −d
where
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ a a a b c d −b −d −c⎟⎟⎟ ⎜⎜⎜ b −a a b c −d b d −c⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a a a d b c −c −b −d⎟⎟⎟⎟ ⎜⎜⎜ a b −a −d b c −c b d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a a a c d b −d −c −b⎟⎟⎟⎟ ⎜⎜⎜−a a b c −d b d −c b⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−b −d −c a a a b c d⎟⎟⎟⎟⎟ ⎜⎜⎜ b c −d b −a a b c −d⎟⎟⎟⎟⎟
⎜ ⎟⎟ ⎜⎜⎜ ⎟
A1 = ⎜⎜⎜⎜⎜ −c −b −d a a a d b c⎟⎟⎟⎟ , A2 = ⎜⎜⎜−d b c a b −a −d b c⎟⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−d −c −b a a a c d b⎟⎟⎟⎟ ⎜⎜⎜ c −d b −a a b c −d b⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ b c d −b −d −c a a a⎟⎟⎟⎟ ⎜⎜⎜ b c −d −a a b b −a a⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ d b c −c −b −d a a a⎟⎟⎟⎟ ⎜⎜⎜−d b c b −a a a b −a⎟⎟⎟⎟⎟
⎝ ⎟⎠ ⎝ ⎠
c d b −d −c −b a a a c −d b a b −a −a a b
(9.78a)
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ c −a a −b c d b −d c⎟⎟⎟ ⎜⎜⎜ d −a a b −c d −b d c⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a c −a d −b c c b −d⎟⎟⎟⎟⎟ ⎜⎜⎜ a d −a d b −c c −b d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−a a c c d −b −d c b⎟⎟⎟⎟⎟ ⎜⎜⎜−a a d −c d b d c −b⎟⎟⎟⎟
⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜
⎜⎜⎜ b −d c c −a a −b c d⎟⎟⎟⎟⎟ ⎜⎜⎜−b d c d −a a b −c d⎟⎟⎟⎟
⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟
A3 = ⎜⎜⎜⎜ c b −d a c −a d −b c⎟⎟⎟⎟ , A4 = ⎜⎜⎜⎜ c −b d a d −a d b −c⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜−d c b −a a c c d −b⎟⎟⎟⎟⎟ ⎜⎜⎜ d c −b −a a d −c d b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜⎜−b c d b −d c c −a a⎟⎟⎟⎟⎟ ⎜⎜⎜⎜ b −c d −b d c d −a a⎟⎟⎟⎟
⎜⎜⎜ d −b c c b −d a c −a⎟⎟⎟ ⎜⎜⎜ d ⎟⎟
⎜⎝ ⎟⎠ ⎜⎝ b −c c −b d a d −a⎟⎟⎟⎟
⎠
c d −b −d c b −a a c −c d b d c −b −a a d
(9.78b)
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−b a −a −b c −d −b d −c⎟⎟⎟ ⎜⎜⎜ a a a b −c −d −b d −c⎟⎟⎟
⎜⎜⎜⎜−a ⎟
−b a −d −b c −c −b d⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜ a a
⎟
a −d b −c −c −b d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a −a −b c −d −b d −c −b⎟⎟⎟⎟ ⎜⎜⎜ a a a −c −d b d −c −b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜−b ⎟
⎜⎜⎜−b d −c −b a −a −b c −d⎟⎟⎟⎟ d −c a a a b −c −d⎟⎟⎟⎟
⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
B1 = ⎜⎜⎜⎜ −c −b d −a −b a −d −b c⎟⎟⎟⎟ , B2 = ⎜⎜⎜⎜ −c −b d a a a −d b −c⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜ d −c −b a −a −b c −d −b⎟⎟⎟⎟ ⎜⎜⎜ d −c −b a a a −c −d b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−b c −d −b d −c −b a −a⎟⎟⎟⎟ ⎜⎜⎜ b −c −d −b d −c a a a⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎝−d −b c −c −b d −a −b a⎟⎟⎟⎟ ⎜⎜⎝−d b −c −c −b d a a a⎟⎟⎟⎟
⎠ ⎠
c −d −b d −c −b a −a −b −c −d b d −c −b a a a
(9.78c)
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−d −a a b −c −d −b −d −c⎟⎟⎟ ⎜⎜⎜ c a −a b c d −b −d c⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a −d −a −d b −c −c −b −d⎟⎟⎟⎟⎟ ⎜⎜⎜−a c a d b c c −b −d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−a a −d −c −d b −d −c −b⎟⎟⎟⎟ ⎜⎜⎜ a −a c c d b −d c −b⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−b −d −c −d −a a b c −d⎟⎟⎟⎟
⎟ ⎜⎜⎜−b −d c c a −a b c d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟⎟
B3 = ⎜⎜⎜⎜ −c −b −d a −d −a −d b c⎟⎟⎟⎟ , B4 = ⎜⎜⎜⎜ c −b −d −a c a d b c⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−d −c −b −a a −d c −d b⎟⎟⎟⎟ ⎜⎜⎜−d c −b a −a c c d b⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ b c −d −b −d −c −d −a a⎟⎟⎟⎟ ⎜⎜⎜ b c d −b −d c c a −a⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎝−d b c −c −b −d a −d −a⎟⎟⎟⎟ ⎜⎜⎝ d b c c −b −d −a c a⎟⎟⎟⎟⎠
⎠
c −d b −d −c −b −a a −d c d b −d c −b a −a c
(9.78d)
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ −c a −a −b −c d b −d −c⎟⎟⎟ ⎜⎜⎜ d a −a b c d −b d −c⎟⎟⎟
⎜⎜⎜⎜−a ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ −c a d −b −c c b −d⎟⎟⎟⎟⎟ ⎜⎜⎜−a d a d b c −c −b d⎟⎟⎟⎟⎟
⎟ ⎜⎜⎜⎜ a ⎟
⎜⎜⎜ a
⎜⎜⎜ −a −c −c d −b −d c b⎟⎟⎟⎟ ⎜⎜⎜ −a d c d b d −c −b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎟ ⎟
⎜⎜⎜ b −d −c −c a −a −b −c d⎟⎟⎟⎟ ⎜⎜⎜−b d −c d a −a b c d⎟⎟⎟⎟
⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟
C1 = ⎜⎜⎜⎜ −c b −d −a −c a d −b −c⎟⎟⎟⎟ , C2 = ⎜⎜⎜⎜ −c −b d −a d a d b c⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜−d −c b a −a −c −c d −b⎟⎟⎟⎟ ⎜⎜⎜ d −c −b a −a d c d b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−b −c d b −d −c −c a −a⎟⎟⎟⎟ ⎜⎜⎜ b c d −b d −c d a −a⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎝ d −b −c −c b −d −a −c a⎟⎟⎟⎟ ⎜⎜⎝ d b c −c −b d −a d a⎟⎟⎟⎟
⎠ ⎠
−c d −b −d −c b a −a −c c d b d −c −b a −a d
(9.78e)
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ a a a −b c −d b d −c⎟⎟⎟ ⎜⎜⎜−b −a a −b c d −b −d −c⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a a a −d −b c −c b d⎟⎟⎟⎟⎟ ⎜⎜⎜ a −b −a d −b c −c −b −d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a a a c −d −b d −c b⎟⎟⎟⎟⎟ ⎜⎜⎜−a a −b c d −b −d −c −b⎟⎟⎟⎟
⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜
⎜⎜⎜ b d −c a a a −b c −d⎟⎟⎟⎟⎟ ⎜⎜⎜−b −d −c −b −a a −b c d⎟⎟⎟⎟
⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟
C3 = ⎜⎜⎜⎜ −c b d a a a −d −b c⎟⎟⎟⎟ , C4 = ⎜⎜⎜⎜ −c −b −d a −b −a d −b c⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜ d −c b a a a c −d −b⎟⎟⎟⎟⎟ ⎜⎜⎜−d −c −b −a a −b c d −b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−b c −d b d −c a a a⎟⎟⎟⎟⎟ ⎜⎜⎜⎜−b c d −b −d −c −b −a a⎟⎟⎟⎟
⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝−d −b c −c b d a a a⎟⎟⎟⎟⎠ ⎜⎜⎜ d
⎜⎝ −b c −c −b −d a −b −a⎟⎟⎟⎟
⎠
c −d −b d −c b a a a c d −b −d −c −b −a a −b
(9.78f)
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−d a −a b −c −d −b −d c⎟⎟⎟ ⎜⎜⎜ −c −a a b −c d −b −d −c⎟⎟⎟
⎜⎜⎜⎜−a ⎟
−d a −d b −c c −b −d⎟⎟⎟⎟⎟ ⎜⎜⎜⎜ a ⎟
−c −a d b −c −c −b −d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a −a −d −c −d b −d c −b⎟⎟⎟⎟ ⎜⎜⎜−a a −c −c d b −d −c −b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−b −d c −d a −a b −c −d⎟⎟⎟⎟ ⎜⎜⎜−b −d −c −c −a a b −c d ⎟⎟⎟⎟
⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟
D1 = ⎜⎜⎜⎜ c −b −d −a −d a −d b −c⎟⎟⎟⎟ , D2 = ⎜⎜⎜⎜ −c −b −d a −c −a d b −c⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜−d c −b a −a −d −c −d b⎟⎟⎟⎟ ⎜⎜⎜−d c −b −a a −c −c d b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ b −c −d −b −d c −d a −a⎟⎟⎟⎟ ⎜⎜⎜ b −c d −b −d −c −c −a a⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎝−d b −c c −b −d −a −d a⎟⎟⎟⎟ ⎜⎜⎝ d b −c −c −b −d a −c −a⎟⎟⎟⎟
⎠ ⎠
−c −d b −d c −b a −a −d −c d b −d −c −b −a a −c
(9.78g)
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ b a −a b c d b −d −c⎟⎟⎟ ⎜⎜⎜ a a a −b −c d b −d c⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−a b a d b c −c b −d⎟⎟⎟⎟⎟ ⎜⎜⎜ a a a d −b −c c b −d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a −a b c d b −d −c b⎟⎟⎟⎟⎟ ⎜⎜⎜ a a a −c d −b −d c b⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ b −d −c b a −a b c d⎟⎟⎟⎟⎟ ⎜⎜⎜ b −d c a a a −b −c d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
D3 = ⎜⎜⎜⎜ −c b −d −a b a d b c⎟⎟⎟⎟ , D4 = ⎜⎜⎜⎜ c b −d a a a d −b −c⎟⎟⎟⎟ .
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−d c b a −a b c d b⎟⎟⎟⎟⎟ ⎜⎜⎜−d c b a a a −c d −b⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ b c d b −d −c b a −a⎟⎟⎟⎟⎟ ⎜⎜⎜−b −c d b −d c a a a⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝ d b c −c b −d −a b a⎟⎟⎟⎟⎠ ⎜⎜⎝ d −b −c c b −d a a a⎟⎟⎟⎟⎠
c d b −d −c b a −a b −c d −b −d c b a a a
(9.78h)
Now, if Xi , i = 1, 2, 3, 4 are T matrices of order k, then by substituting matrices
4 4 4 4
A= Ai ⊗ X i , B= Bi ⊗ Xi , C= C i ⊗ Xi , D= Di ⊗ Xi (9.79)
i=1 i=1 i=1 i=1
into the array in Eq. (9.20), we obtain the Baumert–Hall array of order 4 · 9k. Using
the Welch array we can obtain the Baumert–Hall array of order 4 · 5k. Hence, from
Remark 9.2.1 and Theorem 9.2.1, we have the following:
Corollary 9.6.1: There are Baumert–Hall arrays of orders 4k, 20k, and 36k,
where k ∈ M.
Corollary 9.6.2: (a) The matrices in Eq. (9.79) are the generalized parametric
Williamson matrices of order 9k.
(b) Matrices {Ai }4i=1 , {Bi }4i=1 , {Ci }4i=1 , and {Di }4i=1 are generalized parametric
Williamson matrices of order 9. Furthermore, the array in Eq. (9.77), where
Ai , Bi , Ci , Di are mutually commutative parametric matrices, will be called a
Welch-type array.
Theorem 9.6.1: Let there be a Welch array of order 4k. Then, there is also a Welch
array of order 4k(p + 1), where p ≡ 1 (mod 4) is a prime power.
Proof: Let Eq. (9.9) be a Welch array of order 4k, i.e., {Ai }4i=1 , {Bi }4i=1 , {Ci }4i=1 , {Di }4i=1
are parametric matrices of order k satisfying the conditions
PQ = QP, P, Q ∈ {Ai , Bi , Ci , Di } ,
4 4 4 4 4 4
Ai BTi = AiCiT = Ai DTi = BiCiT = Bi DTi = Ci DTi = 0,
i=1 i=1 i=1 i=1 i=1 i=1 (9.80)
4 4 4 4
Ai ATi = Bi BTi = CiCiT = Di DTi = k(a2 + b2 + c2 + d2 )Ik .
i=1 i=1 i=1 i=1
Now, let p ≡ 1 (mod 4) be a prime power. According to Ref. 8, there exist cyclic
symmetric Williamson matrices of orders (p + 1)/2 of the form I + A, I − A, B, B.
Consider the matrices
I B A 0
X= , Y= . (9.81)
B −I 0 A
We can verify that (0, ±1) matrices X, Y of orders p + 1 satisfy the conditions
X ∗ Y = 0,
X T = X, Y T = Y,
XY = Y X, (9.82)
X ± Y is a (+1, −1)matrix,
X 2 + Y 2 = (p + 1)I p+1 .
Now we introduce the following matrices:
X1 = X ⊗ A 1 + Y ⊗ A 2 , X2 = X ⊗ A2 − Y ⊗ A1 ,
X3 = X ⊗ A3 + Y ⊗ A4 , X4 = X ⊗ A4 − Y ⊗ A3 ;
Y1 = X ⊗ B1 + Y ⊗ B2 , Y2 = X ⊗ B2 − Y ⊗ B1 ,
Y3 = X ⊗ B3 + Y ⊗ B4 , Y4 = X ⊗ B4 − Y ⊗ B3 ;
(9.83)
Z1 = X ⊗ C1 + Y ⊗ C2 , Z2 = X ⊗ C2 − Y ⊗ C1 ,
Z3 = X ⊗ C3 + Y ⊗ C4 , Z4 = X ⊗ C4 − Y ⊗ C3 ;
W1 = X ⊗ D1 + Y ⊗ D2 , W2 = X ⊗ D2 − Y ⊗ D1 ,
W3 = X ⊗ D3 + Y ⊗ D4 , W4 = X ⊗ D4 − Y ⊗ D3 .
Let us prove that the parametric matrices in Eq. (9.83) of order k(p + 1) satisfy the
conditions of Eq. (9.80). The first condition is evident. We will prove the second
condition of Eq. (9.80).
Similarly, we can prove the validity of the second condition of Eq. (9.80) for all
matrices Xi , Yi , Zi , Wi , i = 1, 2, 3, 4.
Now, prove the third condition of Eq. (9.80). We obtain
Summarizing, we obtain
Hence, taking into account the third condition of Eq. (9.80), we have
4
Xi XiT = k(p + 1)(a2 + b2 + c2 + d2 )Ik(p+1) . (9.91)
i=1
Remark 9.6.1: There are Welch arrays of orders 20(p + 1) and 36(p + 1), where
p ≡ 1 (mod 4) is a prime power.
References
1. A. Hurwitz, “Uber die komposition der quadratischen formen,” Math. Ann.
88 (5), 1–25 (1923).
2. J. Radon, Lineare scharen orthogonaler matrizen, Abhandlungen aus dem,
presented at Mathematischen Seminar der Hamburgischen Universitat, 1–14,
1922.
3. T.-K. Woo, “A novel complex orthogonal design for space–time coding in
sensor networks,” Wireless Pers. Commun. 43, 1755–1759 (2007).
4. R. Craigen, “Hadamard matrices and designs,” in The CRC Handbook of
Combinatorial Design, Ch. J. Colbourn and J. H. Dinitz, Eds., 370–377
CRC Press, Boca Raton (1996).
5. A. V. Geramita and J. Seberry, Orthogonal Designs. Quadratic Forms and
Hadamard Matrices, in Lecture Notes in Pure and Applied Mathematics 45,
Marcel Dekker, New York (1979).
6. A. V. Geramita, J. M. Geramita, and J. Seberry Wallis, “Orthogonal designs,”
Linear Multilinear Algebra 3, 281–306 (1976).
7. A. S. Hedayat, N. J. A. Sloane, and J. Stufken, Orthogonal Arrays: Theory
and Applications, Springer-Verlag, New York (1999).
8. R. J. Turyn, “An infinitive class of Williamson matrices,” J. Combin. Theory,
Ser. A 12, 19–322 (1972).
9. M. Hall Jr., Combinatorics, Blaisdell Publishing Co., Waltham, MA (1970).
10. R. J. Turyn, “Hadamard matrices, Baumert–Hall units, four-symbol sequen-
ces, puls compression, and surface wave encoding,” J. Combin. Theory, Ser.
A 16, 313–333 (1974).
11. http://www.uow.edu.au/∼jennie.
12. S. Agaian and H. Sarukhanyan, “Generalized δ-codes and construction of
Hadamard matrices,” Prob. Transmission Inf. 16 (3), 50–59 (1982).
13. J. S. Wallis, “On Hadamard matrices,” J. Comb. Theory Ser. A 18, 149–164
(1975).
14. S. Agaian and H. Sarukhanyan, “On Plotkin’s hypothesis,” Dokladi NAS RA
LXVI (5), 11–15 (1978) (in Russian).
15. S. Agaian and H. Sarukhanyan, “Plotkin hypothesis about D(4k, 4) decompo-
sition,” J. Cybernetics and Systems Analysis 18 (4), 420–428 (1982).
16. H. Sarukhanyan, “Construction of new Baumert–Hall arrays and Hadamard
matrices,” J. of Contemporary Mathematical Analysis, NAS RA, Yerevan 32
(6), 47–58 (1997).
17. J. M. Goethals and J. J. Seidel, “Orthogonal matrices with zero diagonal,”
Can. J. Math. 19, 1001–1010 (1967).
Figure 10.1 Rock salt crystal: The black circles represent sodium atoms; the white circles
represent chlorine atoms.1,2
309
Definition 10.1.1:3,4 The 3D matrix H = (hi, j,k )ni, j,k=1 is called a Hadamard matrix
if all elements hi, j,k = ±1 and
Matrices satisfying Eq. (10.2) are called “proper” or regular Hadamard matrices.
Matrices satisfying Eq. (10.1) but not all of Eq. (10.2) are called “improper.”
A 3D Hadamard matrix of order 2 [or size (2 × 2 × 2)] is presented in Fig. 10.2.
Three-dimensional Hadamard matrices of order 2m (see Figs. 10.3 and 10.4) can
be generated as follows:
(1) From m − 1 successive direct products (see Appendix A.1 of 23 Hadamard
matrices.
(2) The direct product of three 2D Hadamard matrices of order m in different
orientations.5
Example 10.2.1: It can be shown that from the 3D Williamson array (see
Fig. 10.5) we obtain the following:
(1) The matrices parallel to plane (X, Y) are Williamson arrays
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ a b c d⎟⎟⎟ ⎜⎜⎜−b a −d c ⎟⎟⎟
⎜⎜⎜−b a −d c ⎟⎟⎟ ⎜⎜⎜−a −b −c −d⎟⎟⎟⎟
AX,Y = ⎜⎜⎜⎜ ⎟⎟ , BX,Y = ⎜⎜⎜⎜ ⎟,
⎜⎜⎝−c d a −b⎟⎟⎟⎟⎠ ⎜⎜⎝ d c −b −a⎟⎟⎟⎟⎠
(10.5a)
−d −c b a −c d a −b
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−c d a −b⎟⎟⎟ ⎜⎜⎜−d −c b a⎟⎟⎟
⎜⎜⎜−d −c b a⎟⎟⎟ ⎜⎜⎜ c −d −a b⎟⎟⎟
C X,Y = ⎜⎜⎜⎜ ⎟⎟ , DX,Y = ⎜⎜⎜⎜ ⎟⎟ .
⎜⎜⎝−a −b −c −d⎟⎟⎟⎟⎠ ⎜⎜⎝−b a −d c ⎟⎟⎟⎟⎠
(10.5b)
b −a d −c −a −b −c −d
(2) Similarly, the matrices parallel to plane (X, Z) are Williamson arrays
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ a b c d⎟⎟⎟ ⎜⎜⎜−b a −d c ⎟⎟⎟
⎜⎜⎜−b a −d c ⎟⎟⎟ ⎜⎜⎜−a −b −c −d⎟⎟⎟⎟
AX,Z = ⎜⎜⎜⎜ ⎟⎟ , BX,Z = ⎜⎜⎜⎜ ⎟,
⎜⎜⎝−c d a −b⎟⎟⎟⎟⎠ ⎜⎜⎝−d −c b a⎟⎟⎟⎟⎠
(10.6a)
−d −c b a c −d −a b
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−c d a −b⎟⎟⎟ ⎜⎜⎜ −d −c b a⎟
⎜⎜⎜ d c −b −a⎟⎟⎟ ⎜⎜⎜−c d a −b⎟⎟⎟⎟⎟
C X,Z = ⎜⎜⎜⎜ ⎟⎟ , DX,Z = ⎜⎜⎜⎜ ⎟⎟ .
⎜⎜⎝−a −b −c −d⎟⎟⎟⎟⎠ ⎜⎜⎝ b −a d −c ⎟⎟⎟⎟⎠
(10.6b)
−b a −d c −a −b −c −d
(3) Similarly, the following matrices that are parallel to plane (Y, Z) are Williamson
arrays:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ a −b −c −d⎟⎟ ⎜⎜⎜ b a d −c ⎟⎟
⎜⎜⎜−b ⎟ ⎟
−a d −c ⎟⎟⎟⎟ ⎜⎜⎜ a −b c d⎟⎟⎟⎟
= ⎜⎜⎜⎜ ⎟, BY,Z = ⎜⎜⎜⎜ ⎟,
−d −a b⎟⎟⎟⎟⎠ −c −b −a⎟⎟⎟⎟⎠
AY,Z (10.7a)
⎜⎜⎝−c ⎜⎜⎝ d
−d c −b −a −c −d a −b
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ c −d a b⎟⎟ ⎜⎜⎜ d c −b a⎟⎟
⎜⎜⎜−d ⎟ ⎟
−c −b a⎟⎟⎟⎟ ⎜⎜⎜ c −d −a −b⎟⎟⎟⎟
= ⎜⎜⎜⎜ ⎟, DY,Z = ⎜⎜⎜⎜ ⎟.
b −c d⎟⎟⎟⎟⎠ a −d −c ⎟⎟⎟⎟⎠
CY,Z (10.7b)
⎜⎜⎝ a ⎜⎜⎝−b
b −a −d −c a b c −d
b c d
a
–d
–b a c
–c d a –b
–c b a
–d –b a –d c
–a –b –c
–d
d c –b
–a
d a
–c –b
–c –b
d a
–d
–c b a
–a
–b –d
–c
b d –c
–d –a
a
–c b
c
–d –a b
–b
a c
–d
–a
–b –c –d
Sylvester–Hadamard matrices parallel to planes (X, Y), (X, Z), and (Y, Z) have
the following forms, respectively:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + + +⎟⎟
⎟⎟⎟ ⎜⎜⎜+ − + −⎟⎟
⎟
⎜⎜⎜+ − + −⎟⎟ ⎜⎜− − − −⎟⎟⎟⎟
HX,Y = HX,Z = HY,Z = ⎜⎜⎜⎜
1 1 1 ⎟⎟⎟ , HX,Y = HX,Z = HY,Z = ⎜⎜⎜⎜⎜
2 2 2 ⎟ , (10.8a)
⎜⎜⎝+ + − −⎟⎟⎠ ⎜⎜⎝+ − − +⎟⎟⎟⎠⎟
+ − − + − − + +
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + − −⎟⎟
⎟ ⎜⎜⎜+ − − +⎟⎟
⎟
⎜⎜⎜+ − − +⎟⎟⎟⎟ ⎜⎜− − + +⎟⎟⎟⎟
HX,Y = HX,Z = HY,Z = ⎜⎜⎜⎜
3 3 3 ⎟ , HX,Y = HX,Z = HY,Z = ⎜⎜⎜⎜⎜
4 4 4 ⎟ . (10.8b)
⎜⎜⎝− − − −⎟⎟⎟⎟⎠ ⎜⎜⎝− + − +⎟⎟⎟⎟⎠
− + − + + + + +
Let
V0 = R, U, U 2 , . . . , U n−2 , U n−1 ,
V1 = U, U 2 , . . . , U n−1 , R ,
V2 = U 2 , U 3 , . . . , U n−1 , R, U , (10.14)
...,
Vn−1 = U n−1 , R, U, . . . , U n−3 , U n−2 ,
or
33 0 0 0 3
331 0 0 ... 0 0 000 1 0 ... 0 000 000 0 0 ... 0 0 1 33
330 0 0 0 3
1 0 ... 0 0 000 0 1 ... 0 000 001 0 0 ... 0 0 0 33
33 0 0 0 3
0 0 1 ... 0 0 00. . . ... . . 00 000 1 0 ... 0 0 0 33
V0 = 33 ··· . (10.15)
33. . . ... . . 000. . . ... . . 000 000. . . ... . . . 333
330 0 0 ... 1 0 0000 0 0 ... 0 100 0000
0 0 0 ... 1 0 0 333
33
0 0 0 ... 0 1 01 0 0 ... 0 00 00 0 0 ... 0 0 03
n−1 n−1
H(i, j, 0)H(i, j, 1) = 0,
i=0 j=0
n−1 n−1
(10.17)
H(i, j, 0)H(i, j, 0) = n . 2
i=0 j=0
B C D
A
A –D C
–B
D A –B
–C
–C B A
–D
–B A –D
C
–A –B –C
–D
D C –B
–A
D A
–C –B
–C
D A –B
–D
B A
–C
–A
–D
–B –C
B
D –C
–D –A
–C B A
C
–D –A B
–B A –D C
–A
–B –C –D
Thus,
However,
i.e.,
n−1 n−1
RA (s, t) = A(i, j)A (i + s)mod n, ( j + s)mod m = 0, (s, t) (0, 0).
i=0 j=0
(10.20)
An example of PBA(6, 6) is given by (see Ref. 31)
⎛ ⎞
⎜⎜⎜− + + + + −⎟⎟
⎟
⎜⎜⎜+ − + + + −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜+ + − + + −⎟⎟⎟⎟
A = ⎜⎜⎜⎜⎜ ⎟. (10.21)
⎜⎜⎜+ + + − + −⎟⎟⎟⎟
⎜⎜⎜+ ⎟
+ + + − −⎟⎟⎟⎟
⎝⎜ ⎠
− − − − − +
Theoremm−1 10.3.2: (See more detail in Ref. 31). If A is PBA(m, m), then B =
B(i, j, k) i, j,k=0 is a 3D Hadamard matrix of order m, where
B(i, j, k) = A (k + i)mod m, (k + j)mod m, m , 0 ≤ i, j, k ≤ m − 1. (10.22)
Now, using Theorem 10.3.2 and Eq. (10.21), we present a 3D Hadamard matrix of
order 6. Because B(i, j, k) = B( j, i, k), which means that the layers of the x and y
directions are the same, we need give only the layers in the z and y directions as
follows:
Layers in the z direction:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜− + + + + −⎟⎟
⎟ ⎜⎜⎜− + + + − +⎟⎟
⎟
⎜⎜⎜⎜+ − + + + −⎟⎟⎟⎟⎟ ⎜⎜⎜⎜+ − + + − +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜+ + − + + −⎟⎟⎟⎟ ⎜+ + − + − +⎟⎟⎟⎟
[B(i, j, 0)] = ⎜⎜⎜⎜⎜ ⎟, [B(i, j, 1)] = ⎜⎜⎜⎜⎜ ⎟,
⎜⎜⎜+ + + − + −⎟⎟⎟⎟ ⎜⎜⎜+ + + − − +⎟⎟⎟⎟
⎜⎜⎜+ ⎟ ⎟
⎜⎝ + + + − −⎟⎟⎟⎟ ⎜⎜⎜−
⎜⎝ − − − + −⎟⎟⎟⎟
⎠ ⎠
− − − − − + + + + + − −
⎛ ⎞ ⎛ ⎞
⎜⎜⎜− + + − + +⎟⎟
⎟ ⎜⎜⎜− + − + + +⎟⎟
⎟
⎜⎜⎜+ − + − + +⎟⎟⎟⎟⎟ ⎜⎜⎜+ − − + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜+ + − − + +⎟⎟⎟⎟ ⎜⎜− − + − − −⎟⎟⎟⎟
[B(i, j, 2)] = ⎜⎜⎜⎜⎜ ⎟, [B(i, j, 3)] = ⎜⎜⎜⎜⎜ ⎟ , (10.23)
⎜⎜⎜− − − + − −⎟⎟⎟⎟ ⎜⎜⎜+ + − − + +⎟⎟⎟⎟
⎜⎜⎜+ ⎟ ⎟
⎜⎝ + + − − +⎟⎟⎟⎟ ⎜⎜⎜+
⎜⎝ + − + − +⎟⎟⎟⎟
⎠ ⎠
+ + + − + − + + − + + −
⎛ ⎞ ⎛ ⎞
⎜⎜⎜− − + + + +⎟⎟
⎟ ⎜⎜⎜ + − − − − −⎟⎟
⎟
⎜⎜⎜− + − − − −⎟⎟⎟⎟⎟ ⎜⎜⎜− − + + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜+ − − + + +⎟⎟⎟⎟ ⎜⎜− + − + + +⎟⎟⎟⎟
[B(i, j, 4)] = ⎜⎜⎜⎜⎜ ⎟, [B(i, j, 5)] = ⎜⎜⎜⎜⎜ ⎟.
⎜⎜⎜+ − + − + +⎟⎟⎟⎟ ⎜⎜⎜− + + − + +⎟⎟⎟⎟
⎜⎜⎜+ ⎟ ⎟
⎜⎝ − + + − +⎟⎟⎟⎟ ⎜⎜⎜−
⎜⎝ + + + − +⎟⎟⎟⎟
⎠ ⎠
+ − + − + − − + + + + −
To prove this statement, we can verify the correctness of the condition Eq. (10.20)
only for the matrix A4 , i.e., we can prove that
3
RA (s, t) = A(i, j)A (i + s)mod 4, ( j + t)mod 4 , (s, t) (0, 0). (10.26)
i, j=0
RA (0, 1) = 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 = 0,
RA (0, 2) = 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 = 0, (10.28)
RA (0, 3) = −1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 = 0.
Case for s = 1, t = 0, 1, 2, 3:
RA (1, 0) = 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 − 1 − 1 − 1 − 1 = 0,
RA (1, 1) = 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 = 0,
(10.30)
RA (1, 2) = 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 = 0,
RA (1, 3) = −1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 = 0.
Case for s = 2, t = 0, 1, 2, 3:
RA (2, 0) = A(0, 0)A(2, 0) + A(0, 1)A(2, 1) + A(0, 2)A(2, 2) + A(0, 3)A(2, 3)
+ A(1, 0)A(3, 0) + A(1, 1)A(3, 1) + A(1, 2)A(3, 2) + A(1, 3)A(3, 3)
+ A(2, 0)A(0, 0) + A(2, 1)A(0, 1) + A(2, 2)A(0, 2) + A(2, 3)A(0, 3)
+ A(3, 0)A(1, 0) + A(3, 1)A(1, 1) + A(3, 2)A(1, 2) + A(3, 3)A(1, 3),
(10.31a)
RA (2, 1) = A(0, 0)A(2, 1) + A(0, 1)A(2, 2) + A(0, 2)A(2, 3) + A(0, 3)A(2, 0)
+ A(1, 0)A(3, 1) + A(1, 1)A(3, 2) + A(1, 2)A(3, 3) + A(1, 3)A(3, 0)
+ A(2, 0)A(0, 1) + A(2, 1)A(0, 2) + A(2, 2)A(0, 3) + A(2, 3)A(0, 0)
+ A(3, 0)A(1, 1) + A(3, 1)A(1, 2) + A(3, 2)A(1, 3) + A(3, 3)A(1, 0),
RA (2, 2) = A(0, 0)A(2, 2) + A(0, 1)A(2, 3) + A(0, 2)A(2, 0) + A(0, 3)A(2, 1)
+ A(1, 0)A(3, 2) + A(1, 1)A(3, 3) + A(1, 2)A(3, 0) + A(1, 3)A(3, 1)
+ A(2, 0)A(0, 2) + A(2, 1)A(0, 3) + A(2, 2)A(0, 0) + A(2, 3)A(0, 1)
+ A(3, 0)A(1, 2) + A(3, 1)A(1, 3) + A(3, 2)A(1, 0) + A(3, 3)A(1, 1),
(10.31b)
RA (2, 3) = A(0, 0)A(2, 3) + A(0, 1)A(2, 0) + A(0, 2)A(2, 1) + A(0, 3)A(2, 2)
+ A(1, 0)A(3, 3) + A(1, 1)A(3, 0) + A(1, 2)A(3, 1) + A(1, 3)A(3, 2)
+ A(2, 0)A(0, 3) + A(2, 1)A(0, 0) + A(2, 2)A(0, 1) + A(2, 3)A(0, 2)
+ A(3, 0)A(1, 3) + A(3, 1)A(1, 0) + A(3, 2)A(1, 1) + A(3, 3)A(1, 2).
By substituting the elements of matrix A4 into these expressions, we obtain
RA (2, 0) = 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 = 0,
RA (2, 1) = 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 = 0,
(10.32)
RA (2, 2) = 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 = 0,
RA (2, 3) = −1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 = 0.
Case for s = 3, t = 0, 1, 2, 3:
RA (3, 0) = −1 − 1 − 1 − 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 = 0,
RA (3, 1) = −1 − 1 + 1 + 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 = 0,
(10.34)
RA (3, 2) = −1 + 1 − 1 + 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 = 0,
RA (3, 3) = 1 − 1 − 1 + 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 = 0.
1
W −1 = W. (10.35)
2n
1
F = W f, f = WF (10.36)
2n
Theorem 10.4.1:32 Let W = [W(i, j, k)] be the 3D Hadamard matrix in Eq. (10.37)
and f = [ f (i, j)], 0 ≤ i, j ≤ 2n − 1, an image signal. Then, the transform
1
F = [F(i, j)] = W f and f = WF (10.38)
2n
can be factorized as
"
n−1
F =Wf = (I2i ⊗ A ⊗ I2n−1−i ) f, (10.39)
i=0
F(0, 0) = f (0, 0) + f (1, 0) + f (2, 0) + f (3, 0), F(1, 0) = f (0, 0) − f (1, 0) + f (2, 0) − f (3, 0),
F(0, 1) = f (0, 1) − f (1, 1) + f (2, 1) − f (3, 1), F(1, 1) = − f (0, 1) − f (1, 1) − f (2, 1) − f (3, 1),
F(0, 2) = f (0, 2) + f (1, 2) − f (2, 2) − f (3, 2), F(1, 2) = f (0, 2) − f (1, 2) − f (2, 2) + f (3, 2),
F(0, 3) = f (0, 3) − f (1, 3) − f (2, 3) + f (3, 3); F(1, 3) = − f (0, 3) − f (1, 3) + f (2, 3) + f (3, 3);
(10.41)
F(2, 0) = f (0, 0) + f (1, 0) − f (2, 0) − f (3, 0), F(3, 0) = f (0, 0) − f (1, 0) − f (2, 0) + f (3, 0),
F(2, 1) = f (0, 1) − f (1, 1) − f (2, 1) + f (3, 1), F(3, 1) = − f (0, 1) − f (1, 1) + f (2, 1) + f (3, 1),
F(2, 2) = − f (0, 2) − f (1, 2) − f (2, 2) − f (3, 2), F(3, 2) = − f (0, 2) + f (1, 2) − f (2, 2) + f (3, 2),
F(2, 3) = − f (0, 3) + f (1, 3) − f (2, 3) + f (3, 3); F(3, 3) = f (0, 3) + f (1, 3) + f (2, 3) + f (3, 3).
F = W f = (A ⊗ I2 ) (I2 ⊗ A) f = W1 W2 f = W1 R, (10.44)
where W1 and W2 are taken from Figs. 10.13 and 10.14, respectively. From
Example 10.3.1, we have R = {R(p, q)} = W2 f , where
b1 −1 b2 −1 bm −1
C1 (k1 , k2 , . . . , kn ) = ··· A1 (k1 , . . . , km , t1 , . . . , tm )
t1 =0 t2 =0 tm =0
× B1 (t1 , . . . , tm , km+1 , . . . , kn )
b1 −1 b2 −1 bm −1
− ··· A2 (k1 , . . . , km , t1 , . . . , tm )
t1 =0 t2 =0 tm =0
× B2 (t1 , . . . , tm , km+1 , . . . , kn ),
b1 −1 b2 −1 bm −1
(10.47)
C2 (k1 , k2 , . . . , kn ) = ··· A1 (k1 , . . . , km , t1 , . . . , tm )
t1 =0 t2 =0 tm =0
× B2 (t1 , . . . , tm , km+1 , . . . , kn )
b1 −1 b2 −1 bm −1
+ ··· A2 (k1 , . . . , km , t1 , . . . , tm )
t1 =0 t2 =0 tm =0
× B1 (t1 , . . . , tm , km+1 , . . . , kn ).
b1 −1 b2 −1 bm −1
C1 (k1 , k2 , . . . , kn ) = ··· A1 (k1 , . . . , km , t1 , . . . , tm , kn )
t1 =0 t2 =0 tm =0
× B1 (t1 , . . . , tm , km+1 , . . . , kn )
b1 −1 b2 −1 bm −1
− ··· A2 (k1 , . . . , km , t1 , . . . , tm , kn )
t1 =0 t2 =0 tm =0
× B2 (t1 , . . . , tm , km+1 , . . . , kn ),
b1 −1 b2 −1 bm −1
(10.48)
C2 (k1 , k2 , . . . , kn ) = ··· A1 (k1 , . . . , km , t1 , . . . , tm , kn )
t1 =0 t2 =0 tm =0
× B2 (t1 , . . . , tm , km+1 , . . . , kn )
b1 −1 b2 −1 bm −1
+ ··· A2 (k1 , . . . , km , t1 , . . . , tm , kn )
t1 =0 t2 =0 tm =0
× B1 (t1 , . . . , tm , km+1 , . . . , kn ).
Note also that for every given integer n and the size a1 × a2 × · · · × an
satisfying conditions (a) and (b), there is one, and only one, identity matrix.
That identity matrix is defined as follows:
(c) If n = 2m, then the identity matrix I = [I(i1 , i2 , . . . , in )] of size a1 × · · · ×
am × a1 × · · · × am is defined as
1, if (i1 , i2 , . . . , im ) = (im+1 , im+2 , . . . , in ),
I(i1 , i2 , . . . , in ) = (10.51)
0, otherwise.
∗ ∗ ∗
(AB) = B A .
Definition 10.6.1: A 3D complex Hadamard matrix H = (hi, j,k )ni, j,k=1 of order n
is called a regular 3D complex Hadamard matrix if the following conditions are
satisfied:
n n
hi,a,r h∗i,b,r = ha, j,r h∗b, j,r = nδa,b ,
i=1 j=1
n n
hi,q,a h∗i,q,b = ha,q,k h∗b,q,k = nδa,b , (10.54)
i=1 k=1
n n
h p, j,a h∗p, j,b = h p,a,k h∗p,b,k = nδa,b ,
j=1 k=1
√
where ha,b,c ∈ {−1, +1, − j, + j}, j = −1, δa,b is a Kronecker function, i.e., δa,a = 1
and δa,b = 0 if a b.
From the conditions of Eq. (10.54), it follows that for fixed i0 , j0 , k0 , the matrices
(hi0 , j,k )nj,k=1 , (hi, j0 ,k )ni,k=1 , and (hi, j,k 0 )ni, j=1 are 2D complex Hadamard matrices of
order n. In Fig. 10.15, two 3D complex Hadamard matrices of size 2 × 2 × 2 are
given.
The higher size of 3D complex Hadamard matrices can be obtained by the
Kronecker product. The 3D complex Hadamard matrix of size 4 × 4 × 4 constructed
3
D(m, 0, k) = C(m, n, k) f (m, 0, k), m, k = 0, 1, 2, 3. (10.55)
n=0
The set of elements of the matrix in Eq. (10.58) with fixed values i j1 , i j2 , . . . , i jk of
indices i j1 , i j2 , . . . , i jk (1 ≤ jr ≤ n, 1 ≤ k ≤ n − 1) defines a k-tuple section of the
orientation (i j1 , i j2 , . . . , i jk ), and is given by the (n − k)-dimensional matrix of order
m. The matrix
Let [A]n = [Ai1 ,i2 ,...,in ] and [B]r = [B j1 , j2 ,..., jr ] be n- and r-dimensional matrices of
order m, respectively, (i1 , i2 , . . . , in , j1 , . . . , jr = 1, 2, . . . , m).
Definition 10.7.1:34 The (λ, μ) convolute product of the matrix [A]n to the matrix
[B]r with decomposition by indices s and c is called a t-dimensional matrix [D]t of
order m, defined as
⎡ ⎤
⎢⎢⎢ ⎥⎥⎥
[D]t = [Dl,s,k ] = ([A]n [B]r ) = ⎢⎣⎢
(λ,μ) ⎢ Al,s,c Bc,s,k ⎥⎥⎦⎥ , (10.62)
(c)
where
n = k + λ + μ, r = ν + λ + μ, t = n + r − (λ + 2μ),
l = (l1 , l2 , . . . , lk ), s = (s1 , s2 , . . . , sλ ), c = (c1 , c2 , . . . , cμ ), (10.63)
k = (k1 , k2 , . . . , kν ).
Definition 10.7.3:8,35 The matrix H of order m will be called the (λ, μ) orthogonal
matrix by all axial-oriented directions with parameters λ, μ, if, for fixed values λ, μ
(μ 0), the following conditions hold:
λ,μ
Ht Ht = mμ E(λ, k), t = 1, 2, . . . , N, (10.65)
where
⎛ ⎞ ⎛ ⎞
⎜⎜⎜i
⎜⎜⎜ 1 i2 · · · it−1 it ⎟⎟⎟⎟⎟ ⎜⎜⎜i
⎜⎜⎜ t it+1 · · · in−1 in ⎟⎟⎟⎟⎟
⎜⎝ ⎟ ⎟
Ht = i2
(H ) i3 · · · it i1 ⎟⎠ , Ht = in
(H )
⎜⎝
it · · · in−2 in−1 ⎟⎠ . (10.68)
where
⎛ ⎞
⎜⎜⎜i
⎜⎜⎜ 1 · · · it−1 it iq iq+1 · · · in ⎟⎟⎟⎟⎟
⎜⎝ ⎟
Ht,q = i2
(H ) · · · it i1 in iq · · · in−1 ⎟⎠ ,
⎛ ⎞ (10.70)
⎜⎜⎜i
⎜⎜⎜ 1 · · · it−1 it iq iq+1 · · · in ⎟⎟⎟⎟⎟
⎜⎝ ⎟
Ht,q = i2
(H ) · · · it i1 in iq · · · in−1 ⎟⎠ .
φ(i, j)
H1 = {hi, j } = γ p , i, j = 0, 1, 2, . . . , m − 1. (10.71)
m−1
φ(i1 , j)−φ(i2 , j) m, if i1 = i2 ,
γp = (10.72)
0, if i1 i2 .
j=0
m2 −1
The matrix H1(2) = H1 ⊗ H1 = h(2)
i, j can be defined as
i, j=0
φ(i1 , j1 )+φ(i0 , j0 )
h(2) (2)
i, j = hmi1 +i0 ,m j1 + j0 = hi1 , j1 hi0 , j0 = γ p , (10.73)
where i, j = 0, 1, . . . , m2 − 1, i0 , i1 , j0 , j1 = 0, 1, . . . , m − 1.
Now, consider the 3D matrix A = [H(p, m)]3 with elements Ai1 ,i2 ,i3 (i1 , i2 , i3 =
0, 1, . . . , m − 1),
In other words, any section of a matrix A = {Ai1 ,i2 ,i3 } of the orientation i1 is the
i1 (m + 1)’th row of the matrix H1(2) .
Prove that A is the 3D generalized matrix A = [H(p, m)]3 . For this, we can check
the matrix system in Eq. (10.67), which can be represented as
0,2
(A1t A2t ) = m2 E(0, 1), t = 1, 2, 3, (10.75)
Now we will check the system in Eq. (10.75) for defining the matrix A by
Eq. (10.74).
(1) ⎛ ⎛
⎜⎜⎜i i i ⎟⎟⎟ ⎟
⎞⎞
⎜⎜⎜ ⎜⎜⎜ 1 2 3 ⎟⎟⎟ ⎟
⎜ ⎟⎟
⎜⎜⎜(AA∗ )⎝i3 i1 i2 ⎠ ⎟⎟⎟⎟⎟ = m2 E(0, 1),
m−1 m−1
0,2 ⎜
⎜
⎜⎜⎜ ⎟⎟⎟ i.e., Ai1 ,i2 ,i3 A j1 ,i2 ,i3 = m2 δi1 , j1 ,
⎝ ⎠ i2 =0 i3 =0
(10.77)
or according to Eqs. (10.72) and (10.74),
m−1 m−1
φ(i1 ,i2 )+φ(i1 ,i3 )−φ( j1 ,i2 )−φ( j1 ,i3 )
γp = m2 δi1 , j1 . (10.78)
i2 =0 i3 =0
(2) ⎛ ⎛⎜ ⎞ ⎛ ⎞⎞
⎜⎜⎜ ⎜⎜⎜⎜⎜i1 i2 ⎟⎟⎟⎟⎟⎟ ⎜⎜⎜i i ⎟⎟⎟ ⎟
⎜⎜⎜ 2 3 ⎟⎟⎟ ⎟
0,2 ⎜
⎜⎜ ⎜⎝i i ⎟⎠ ∗ ⎜⎝i i ⎟⎠ ⎟⎟⎟⎟ m−1 m−1
⎜⎜⎜A 2 1 (A ) 3 2 ⎟⎟⎟ = m2 E(0, 1), i.e. Ai1 ,i2 ,i3 Ai1 , j2 ,i3 = m2 δi2 , j2 ,
⎜⎜⎝ ⎟⎟⎠
i1 =0 i3 =0
(10.79)
or, according to Eqs. (10.72) and (10.74),
m−1 m−1
φ(i1 ,i2 )+φ(i1 ,i3 )−φ(i1 , j2 )−φ( j1 ,i2 )
γp = m2 δi2 , j2 . (10.80)
i1 =0 i3 =0
(3) ⎛ ⎛⎜ ⎞ ⎞
⎜⎜⎜ ⎜⎜⎜⎜⎜i1 i2 i3 ⎟⎟⎟⎟⎟⎟ ⎟⎟⎟
0,2 ⎜
⎜⎜ ⎜⎝i i i ⎟⎠ ∗ ⎟⎟⎟ m−1 m−1
⎜⎜⎜A 2 3 1 A ⎟⎟⎟ , i.e., Ai1 ,i2 ,i3 Ai1 , j2 , j3 = m2 δi3 , j3 , (10.81)
⎜⎜⎝ ⎟⎟⎠
i1 =0 i2 =0
Hence, the matrix A defined by Eq. (10.74) is the 3D generalized Hadamard matrix
[H(p, m)]3 .
The generalized matrices contained in the 3D generalized Hadamard matrix
[H(3, 3)]3 of order 3 are given below.
• Generalized Hadamard matrices parallel to the flat (X, Y) are
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 ⎟⎟⎟ ⎜⎜⎜ x1 x2 1⎟⎟⎟ ⎜⎜⎜ x1 1 x2 ⎟⎟⎟
⎜⎜⎜⎜1 x1 x2 ⎟⎟⎟⎟ , ⎜⎜⎜⎜ x2 x1 1⎟⎟⎟⎟ , ⎜⎜⎜⎜1 1 1 ⎟⎟⎟⎟ . (10.83)
⎝ ⎠ ⎝ ⎠ ⎝ ⎠
1 x 2 x1 1 1 1 x2 1 x 1
References
1. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes of
Mathematics, 1168, Springer-Verlag, Berlin (1985).
2. H. Harmuth, Sequency Theory, Foundations and Applications, Academic
Press, New York (1977).
3. P. J. Shlichta, “Three- and four-dimensional Hadamard matrices,” Bull. Am.
Phys. Soc., Ser. 11 16, 825–826 (1971).
4. S. S. Agaian, “On three-dimensional Hadamard matrix of Williamson type,”
(Russian–Armenian summary) Akad. Nauk Armenia, SSR Dokl. 72, 131–134
(1981).
5. P. J. Slichta, “Higher dimensional Hadamard matrices,” IEEE Trans. Inf.
Theory IT-25 (5), 566–572 (1979).
6. S.S. Agaian, “A new method for constructing Hadamard matrices and the
solution of the Shlichta problem,” in Proc. of 6th Hungarian Coll. Comb.,
Budapesht, Hungary, 6–11, pp. 2–3 (1981).
7. A. M. Trachtman and B. A. Trachtman, (in Russian) Foundation of the Theory
of Discrete Signals on Finite Intervals, Nauka, Moscow (1975).
8. S. S. Agaian, “Two and high dimensional block Hadamard matrices,” (In
Russian) Math. Prob. Comput. Sci. 12, 5–50 (1984) Yerevan, Armenia.
Definition 11.1.1.1:1 A square matrix H(p, N) of order N with elements of the p’th
root of unity is called a generalized Hadamard matrix if
HH ∗ = H ∗ H = NIN ,
343
√ √ √
where x0 = 1, x1 = −(1/2) + j( 3/2), x2 = −(1/2) − j( 3/2), j = −1. A
generalized Hadamard matrix H(p, N) with the first row and first column of the
form (11 . . . .1) is called a normalized matrix.
For example, from H(3, 6), one can generate a normalized matrix by two stages.
At first, multiplying the columns with numbers 1, 3, 4, and 6 of the matrix H(3, 6)
by x1 , x2 , x2 , and x1 , respectively, we obtain the matrix
⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ x1 x2 1 x2 x1 1 ⎟⎟⎟⎟⎟
⎜⎜⎜⎜ x 1 x2 x2 1
⎟
x1 ⎟⎟⎟⎟
H 1 (3, 6) = ⎜⎜⎜⎜ 1 ⎟⎟ . (11.2)
⎜⎜⎜ 1 1 x1 x2 x1 x2 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ x1 x2 x1 1 1 x2 ⎟⎟⎟⎟
⎝ ⎠
1 x2 x2 1 x1 x1
Then, multiplying the rows with numbers 2, 3, and 5 of the matrix H 1 (3, 6) by
x2 , we obtain the normalized matrix corresponding to the generalized Hadamard
matrix H(3, 6) of the following form:
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 x1 x2 x1 1 x2 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜1 x2 x1 x1 x2 1 ⎟⎟⎟⎟
Hn (3, 6) = ⎜⎜⎜⎜ ⎟⎟ . (11.3)
⎜⎜⎜1 1 x1 x2 x1 x2 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 x1 1 x2 x2 x1 ⎟⎟⎟⎟
⎝ ⎠
1 x2 x2 1 x1 x1
Note that generalized Hadamard matrices also can be defined as the matrix with
one of the elements being the p’th root of unity. For example, the matrix Hn (3, 6)
can be represented as follows:
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 x x2 x 1 x2 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜1 x2 x x x2 1 ⎟⎟⎟⎟
Hn (3, 6) = ⎜⎜⎜⎜ 2⎟
⎟, (11.4)
⎜⎜⎜1 1 x x2 x x ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 x 1 x2 x2 x ⎟⎟⎟⎟
⎝ ⎠
1 x2 x2 1 x x
√
where x = −(1/2) + j( 3/2).
In Refs. 1 and 13 it was proven that for any prime p, nonnegative integer m,
and any natural number k (m ≤ k), there exists an H(p2m , pk ) matrix. If an
H(2, N) matrix exists, then for any nonzero natural number p, an H(2p, N) matrix
exists.
The Kronecker product of two generalized Hadamard matrices is also a
generalized Hadamard matrix. For example,
⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 ⎟⎟⎟ ⎜⎜⎜1 1 1 ⎟⎟⎟
⎜ ⎟ ⎜ ⎟
H(3, 3) ⊗ H(3, 3) = ⎜⎜⎜⎜⎜1 x x2 ⎟⎟⎟⎟⎟ ⊗ ⎜⎜⎜⎜⎜1 x x2 ⎟⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠
1 x2 x 1 x2 x
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 x x2 1 x x2 1 x x2 ⎟⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜1 x2 x 1 x2 x 1 x2 x ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 1 1 x x x x2 x2 x2 ⎟⎟⎟⎟⎟
⎜ ⎟⎟
= ⎜⎜⎜⎜⎜1 x x2 x x2 1 x2 1 x ⎟⎟⎟⎟ . (11.5)
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 x2 x x 1 x2 x2 x 1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 1 1 x2 x2 x2 x x x ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 x x2 x2 1 x x x2 1 ⎟⎟⎟⎟
⎜⎝ ⎟⎠
1 x2 x x2 x 1 x 1 x2
• If p is prime, then the generalized Hadamard matrix H(p, N) can exist only for
N = pt, where t is a natural number.
• If p = 2, then the generalized Hadamard matrix H(p, 2p) can exist,
• If pn is a prime power, then a generalized Hadamard matrix H(pn , N) can exist
only for N = pt, where t is a positive integer.
Problems for exploration: The inverse problem, i.e., the problem of construction
or proof of the existence of the generalized Hadamard matrix H(p, pt) for any
prime p, remains open.
More complete information about construction methods and applications of
generalized Hadamard matrices can be obtained from Refs. 2,11, and 15–30.
HH ∗ = H ∗ H = NIN ,
Example: N = 4,
⎛ ⎞
⎜⎜⎜1 1 1 1⎟
⎜⎜⎜1 je jα −1 − je jα ⎟⎟⎟⎟⎟ √
H4 = ⎜⎜⎜⎜ ⎟⎟ , where α ∈ [0, π), j = −1.
−1⎟⎟⎟⎠⎟
(11.6)
⎜⎜⎝1 −1 1
1 − je jα −1 je jα
N = p2 t, N = 2pt,
or (11.9)
N=p N = p.
and denote
A 0 0 B
X= 0 C , Y = −D 0 . (11.13)
Now it is not difficult to prove that X and Y satisfy the conditions in Eq. (11.11);
i.e., Γ(p, n) = {X, Y} is the generalized hyperframe.
Now, let H and G be generalized Hadamard matrices H(p, m) of the following
form:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ h0 ⎟⎟⎟ ⎜⎜⎜ h1 ⎟⎟⎟
⎜⎜⎜ h ⎟⎟⎟ ⎜⎜⎜ −h ⎟⎟⎟
⎜⎜⎜ 1 ⎟⎟⎟ ⎜⎜⎜ 0 ⎟
⎜⎜⎜ h2 ⎟⎟⎟ ⎜⎜⎜ h3 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
H = ⎜⎜⎜⎜ h3 ⎟⎟⎟⎟ , G = ⎜⎜⎜⎜ −h2 ⎟⎟⎟⎟ . (11.17)
⎜⎜⎜ .. ⎟⎟⎟ ⎜⎜⎜ .. ⎟⎟⎟
⎜⎜⎜ . ⎟⎟⎟ ⎜⎜⎜ . ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝hm−2 ⎟⎟⎟⎟⎠ ⎜⎜⎝ hm−1 ⎟⎟⎟⎟⎠
hm−1 −hm−2
It is evident that
HG H + GH H = 0. (11.18)
Hn = X ⊗ Hn−1 + Y ⊗ Gn−1 ,
(11.19)
Gn = X ⊗ Gn−1 − Y ⊗ Hn−1 , n≥1
are
• generalized Hadamard matrices H(2p, mkn ), where p = l.c.m.(p1 , p2 ), if
l.c.m.(p1 , 2) = 1 and l.c.m.(p2 , 2) = 1;
• generalized Hadamard matrices H(p, mkn ), where p = l.c.m.(p1 , p2 ), if p1
and/or p2 , are even.
Proof: First, let us prove that Hn HnH = mkn Imkn . Using the properties of the
Kronecker product, from Eq. (11.19), we obtain
⊗ Gn−1 Hn−1
H
+ YY H ⊗ Gn−1Gn−1
H
Similarly, we can show that GnGnH = mkn Imkn . Now prove that HnGnH + Gn HnH = 0.
Indeed,
Hn = M1 M2 · · · Mn+1 , (11.22)
where
M_{n+1} = I_{k^n} \otimes H_0,
M_i = I_{k^{n-i}} \otimes \left( X \otimes I_{mk^{i-1}} + Y \otimes P_{mk^{i-1}} \right),   (11.23)
P_{mk^{i-1}} = I_{(mk^{i-1})/2} \otimes \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \quad i = 1, 2, \ldots, n.
It is easy to show that H(p, p^n) exists, where p is a prime number. Indeed, these matrices can be constructed using the Kronecker product. Let us give an example. Let p = 3; then, we have
H(3,3) = \begin{pmatrix} 1 & 1 & 1 \\ 1 & a & a^2 \\ 1 & a^2 & a \end{pmatrix},
H(3,9) = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & a & a^2 & 1 & a & a^2 & 1 & a & a^2 \\
1 & a^2 & a & 1 & a^2 & a & 1 & a^2 & a \\
1 & 1 & 1 & a & a & a & a^2 & a^2 & a^2 \\
1 & a & a^2 & a & a^2 & 1 & a^2 & 1 & a \\
1 & a^2 & a & a & 1 & a^2 & a^2 & a & 1 \\
1 & 1 & 1 & a^2 & a^2 & a^2 & a & a & a \\
1 & a & a^2 & a^2 & 1 & a & a & a^2 & 1 \\
1 & a^2 & a & a^2 & a & 1 & a & 1 & a^2
\end{pmatrix}.   (11.24)
AB^H = BA^H,
AA^H + BB^H = 2nI_n.   (11.25)
Note that for p = 2, the generalized Yang matrix coincides with classical Yang
matrices.11 Now, let us construct generalized Yang matrices. We will search A and
B as cyclic matrices, i.e.,
A = a_0 U^0 + a_1 U^1 + a_2 U^2 + \cdots + a_{n-1} U^{n-1},
B = b_0 U^0 + b_1 U^1 + b_2 U^2 + \cdots + b_{n-1} U^{n-1}.   (11.26)
We see that
\sum_{i=0}^{n-1} \left( a_i a_i^* + b_i b_i^* \right) = 2n,
\sum_{i=0}^{n-1} \left( a_i a_{(i-t)\bmod n}^* + b_i b_{(i-t)\bmod n}^* \right) = 0, \quad t = 1, 2, \ldots, \left\lfloor \frac{n}{2} \right\rfloor.   (11.28)
Ai+1 = X ⊗ Ai + Y ⊗ Bi ,
(11.29)
Bi+1 = X ⊗ Bi − Y ⊗ Ai , i = 0, 1, . . .
are generalized Yang matrices A(2p, nk^{i+1}) and B(2p, nk^{i+1}), where p = l.c.m.(p_1, p_2), l.c.m.(p_1, 2) = 1, and l.c.m.(p_2, 2) = 1, i ≥ 1. We find generalized Yang matrices A(2p, nk^{i+1}) and B(2p, nk^{i+1}), where p = l.c.m.(p_1, p_2), and p_1 and/or p_2 are even numbers.
Corollary 11.1.4.1: The following matrix is the generalized Hadamard matrix:
H(2p, n2^{i+1}) = \begin{pmatrix} A_i & B_i \\ -B_i & A_i \end{pmatrix}.   (11.30)
system, and the latter plays an important role in Walsh–Fourier analysis. The
Rademacher functions {rn (x)} form an incomplete set of orthogonal, normalized,
periodic square wave functions with their period equal to 1. Using the Rademacher
system functions, one may generate the Walsh–Hadamard, Walsh–Paley, and
Harmuth function systems, as well as the Walsh–Rademacher function systems. The Rademacher functions are defined in Refs. 2, 5–8, 31, and 32.
Rademacher function systems {r_n(x)} may also be defined by the formula
r_n(x) = \operatorname{sign}[\sin(2^n \pi x)],   (11.34)
Properties:
• The Rademacher matrix is a rectangular (n + 1) × 2^n matrix with (+1, −1) elements whose first row contains only +1 elements,
then define Rademacher functions Rad(n, x) via the dilation operation α(2^n x).
φ_0(x) = 1,
φ_n(x) = ϕ_{n_1}^{a_1}(x)\, ϕ_{n_2}^{a_2}(x) \cdots ϕ_{n_m}^{a_m}(x),   (11.40)
where n = \sum_{k=1}^{m} a_k p^{n_k}, 0 < a_k < p, n_1 > n_2 > \cdots > n_m.
Note that the 2^n Walsh functions for any n constitute a closed set of orthogonal functions; the multiplication of any two functions always generates a function within this set. However, the Rademacher functions are an incomplete set of n + 1 orthogonal functions, a subset of the Walsh functions from which all 2^n Walsh functions can be generated by multiplication. The Rademacher functions
may be defined as follows:
We see that
Rad(0, k) = Wal(0, k),
Rad(1, k) = Wal(1, k),
Rad(2, k) = Wal(3, k),
Rad(3, k) = Wal(7, k),
(11.45)
Rad(1, k) ∗ Rad(2, k) = Wal(2, k),
Rad(1, k) ∗ Rad(3, k) = Wal(6, k),
Rad(2, k) ∗ Rad(3, k) = Wal(4, k),
Rad(1, k) ∗ Rad(2, k) ∗ Rad(3, k) = Wal(5, k).
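The generation property in Eq. (11.45) is easy to verify numerically. The following sketch (an illustration assuming numpy, not from the book; the subset ordering produced here is not the sequency ordering) builds discrete Rademacher functions of length 2^n and checks that their products over all subsets yield a complete orthogonal Walsh system:

```python
import numpy as np
from itertools import combinations

n = 3
N = 2 ** n
k = np.arange(N)

rad = [np.ones(N, dtype=int)]                       # Rad(0, k) = Wal(0, k)
for i in range(1, n + 1):
    # Rad(i, k) changes sign every 2^(n-i) samples.
    rad.append(np.where(((k >> (n - i)) & 1) == 0, 1, -1))

walsh = []
for r in range(n + 1):
    for subset in combinations(range(1, n + 1), r):
        w = np.ones(N, dtype=int)
        for i in subset:
            w = w * rad[i]                          # element-wise products
        walsh.append(w)

W = np.array(walsh)
assert np.array_equal(W @ W.T, N * np.eye(N, dtype=int))  # orthogonality
```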
where Ch^{(3)}(0, t) = 1.
An alternative definition for the Chrestenson functions yields the same complete
set, but in dissimilar order, as follows:
\mathrm{Ch}^{(p)}(k, t) = \exp\left( j \frac{2\pi}{p} C(k, t) \right),
C(k, t) = \sum_{m=0}^{n-1} k_m t_{n-1-m},   (11.53)
Finally, we can also consider a subset of the Chrestenson functions for any
p, n, which constitute the generalization of the Rademacher functions, and from
which the complete set of orthogonal functions for the given p, n can be generated
by element-by-element multiplication. The generalized Rademacher functions are
defined as
\mathrm{Rad}^{(p)}(k, t) = \exp\left( j \frac{2\pi}{p} C'(k, t) \right),
C'(k, t) = \sum_{m=0}^{n-1} k_m t_m,   (11.55)
where km here is a subset of k, whose decimal identification numbers are 0, 1 and all
higher values of k that are divisible by a power of p. The closed set of Chrestenson
functions for p = 3, n = 2 generated from the reduced set can be represented as
follows:
Ch(3) (0, t) = Rad(3) (0, t),
Ch(3) (1, t) = Rad(3) (1, t),
Ch(3) (2, t) = Rad(3) (1, t) ∗ Rad(3) (1, t),
Ch(3) (3, t) = Rad(3) (3, t),
Ch(3) (4, t) = Rad(3) (1, t) ∗ Rad(3) (3, t), (11.56)
Ch(3) (5, t) = Rad(3) (1, t) ∗ Rad(3) (1, t) ∗ Rad(3) (3, t),
Ch(3) (6, t) = Rad(3) (3, t) ∗ Rad(3) (3, t),
Ch(3) (7, t) = Rad(3) (1, t) ∗ Rad(3) (3, t) ∗ Rad(3) (3, t),
Ch(3) (8, t) = Rad(3) (1, t) ∗ Rad(3) (1, t) ∗ Rad(3) (3, t) ∗ Rad(3) (3, t).
where
v_0 = (x_0 + x_1 + x_2) + j(y_0 + y_1 + y_2),
v_1 = \left[ x_0 + (x_1 + x_2)\cos\frac{2\pi}{3} - (y_1 - y_2)\sin\frac{2\pi}{3} \right] + j\left[ y_0 + (x_1 - x_2)\sin\frac{2\pi}{3} + (y_1 + y_2)\cos\frac{2\pi}{3} \right],   (11.58)
v_2 = \left[ x_0 + (x_1 + x_2)\cos\frac{2\pi}{3} + (y_1 - y_2)\sin\frac{2\pi}{3} \right] + j\left[ y_0 - (x_1 - x_2)\sin\frac{2\pi}{3} + (y_1 + y_2)\cos\frac{2\pi}{3} \right].
Therefore, the complexity of the C31 transform is: C + (C31 ) = 13, C × (C31 ) = 4, where
C + and C × denote the number of real additions and multiplications, respectively.
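As a sanity check of Eq. (11.58), the sketch below (assuming numpy; the random test vector is illustrative) evaluates v_0, v_1, v_2 and compares them with the direct matrix product C_3 Z:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(3), rng.standard_normal(3)
z = x + 1j * y

w = np.exp(2j * np.pi / 3)
C3 = np.array([[1, 1, 1], [1, w, w**2], [1, w**2, w]])

c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
v0 = (x[0] + x[1] + x[2]) + 1j * (y[0] + y[1] + y[2])
v1 = (x[0] + (x[1] + x[2]) * c - (y[1] - y[2]) * s) \
     + 1j * (y[0] + (x[1] - x[2]) * s + (y[1] + y[2]) * c)
v2 = (x[0] + (x[1] + x[2]) * c + (y[1] - y[2]) * s) \
     + 1j * (y[0] - (x[1] - x[2]) * s + (y[1] + y[2]) * c)

assert np.allclose(C3 @ z, [v0, v1, v2])   # matches Eq. (11.58)
```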
Now, let Z^T = (x_i + jy_i)_{i=0}^{N-1} be a complex-valued vector of length N = 3^n (n > 1). We introduce the following notations: P_i denotes a (0, 1) column vector of length N/3 whose only i'th element is equal to 1 (i = 0, 1, \ldots, N/3 - 1), and Z^i = (x_{3i} + jy_{3i}, x_{3i+1} + jy_{3i+1}, x_{3i+2} + jy_{3i+2}).
The 1D forward Chrestenson transform of order N can be performed as follows
[see Eq. (11.52)]:
C_{3^n} Z = \begin{pmatrix} (C_{3^{n-1}} \otimes i_3) Z \\ (C_{3^{n-1}} \otimes b_3) Z \\ (C_{3^{n-1}} \otimes b_3^*) Z \end{pmatrix}.   (11.60)
From Eq. (11.58), it follows that C + (b3 Z i ) = 6 and C × (b3 Z i ) = 4. Then we obtain
Similarly, we obtain
C^+(C_{3^{n-1}} \otimes b_3^*) = C^+(C_{3^{n-1}}) + 3 \cdot 3^{n-1},
C^\times(C_{3^{n-1}} \otimes b_3^*) = C^\times(C_{3^{n-1}}),   (11.65)
or
C^+(C_{3^n}) = 13 \cdot 3^{n-1} n,
C^\times(C_{3^n}) = 4 \cdot 3^{n-1} n, \quad n \ge 1.   (11.67)
For example, we have C + (C32 ) = 78, C × (C32 ) = 24, C + (C33 ) = 351, C × (C33 ) = 108.
From the relations in Eq. (11.49), we obtain the Chrestenson transform matrix of order 5:
C_{5^1} = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 \\
1 & a & a^2 & a^3 & a^4 \\
1 & a^2 & a^4 & a & a^3 \\
1 & a^3 & a & a^4 & a^2 \\
1 & a^4 & a^3 & a^2 & a
\end{pmatrix} = \begin{pmatrix} i_5 \\ a_1 \\ a_2 \\ a_2^* \\ a_1^* \end{pmatrix}.   (11.69)
t_1 = x_1 + x_4, \quad t_2 = x_2 + x_3, \quad t_3 = y_1 + y_4, \quad t_4 = y_2 + y_3,
b_1 = x_1 - x_4, \quad b_2 = x_2 - x_3, \quad b_3 = y_1 - y_4, \quad b_4 = y_2 - y_3,
c_1 = b_1 \sin\frac{2\pi}{5}, \quad c_2 = b_2 \sin\frac{4\pi}{5}, \quad c_3 = b_3 \sin\frac{2\pi}{5}, \quad c_4 = b_4 \sin\frac{4\pi}{5},
d_1 = t_1 \cos\frac{2\pi}{5}, \quad d_2 = t_2 \cos\frac{4\pi}{5}, \quad d_3 = t_3 \cos\frac{2\pi}{5}, \quad d_4 = t_4 \cos\frac{4\pi}{5},   (11.73)
e_1 = t_1 \cos\frac{4\pi}{5}, \quad e_2 = t_2 \cos\frac{2\pi}{5}, \quad e_3 = t_3 \cos\frac{4\pi}{5}, \quad e_4 = t_4 \cos\frac{2\pi}{5},
f_1 = b_1 \sin\frac{4\pi}{5}, \quad f_2 = b_2 \sin\frac{2\pi}{5}, \quad f_3 = b_3 \sin\frac{4\pi}{5}, \quad f_4 = b_4 \sin\frac{2\pi}{5},
A_1 = x_0 + d_1 + d_2, \quad A_2 = c_3 + c_4, \quad A_3 = y_0 + d_3 + d_4, \quad A_4 = c_1 + c_2,
B_1 = x_0 + e_1 + e_2, \quad B_2 = f_3 - f_4, \quad B_3 = y_0 + e_3 + e_4, \quad B_4 = f_1 - f_2.
v0 = x0 + t1 + t2 + j(y0 + t3 + t4 ),
v1 = A1 − A2 + j(A3 + A4 ),
v2 = B1 − B2 + j(B3 + B4 ), (11.74)
v3 = B1 + B2 + j(B3 − B4 ),
v4 = A1 + A2 + j(A3 − A4 ).
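The fast 5-point formulas in Eqs. (11.73) and (11.74) can be checked numerically. The sketch below (assuming numpy; the random test vector is illustrative, and it uses B_1 = x_0 + e_1 + e_2 as corrected above) compares them with the direct product C_5 Z:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal(5), rng.standard_normal(5)
z = x + 1j * y

a = np.exp(2j * np.pi / 5)
C5 = np.array([[a ** ((i * k) % 5) for k in range(5)] for i in range(5)])

c1, c2 = np.cos(2 * np.pi / 5), np.cos(4 * np.pi / 5)
s1, s2 = np.sin(2 * np.pi / 5), np.sin(4 * np.pi / 5)

t1, t2, t3, t4 = x[1] + x[4], x[2] + x[3], y[1] + y[4], y[2] + y[3]
b1, b2, b3, b4 = x[1] - x[4], x[2] - x[3], y[1] - y[4], y[2] - y[3]

A1, A2 = x[0] + t1 * c1 + t2 * c2, b3 * s1 + b4 * s2
A3, A4 = y[0] + t3 * c1 + t4 * c2, b1 * s1 + b2 * s2
B1, B2 = x[0] + t1 * c2 + t2 * c1, b3 * s2 - b4 * s1
B3, B4 = y[0] + t3 * c2 + t4 * c1, b1 * s2 - b2 * s1

v = [x[0] + t1 + t2 + 1j * (y[0] + t3 + t4),
     A1 - A2 + 1j * (A3 + A4),
     B1 - B2 + 1j * (B3 + B4),
     B1 + B2 + 1j * (B3 - B4),
     A1 + A2 + 1j * (A3 - A4)]

assert np.allclose(C5 @ z, v)   # matches Eq. (11.74)
```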
C + (i5 Z = v0 ) = 8, C × (i5 Z) = 0,
C + (a1 Z = v1 ) = 8, C × (a1 Z) = 8,
C + (a2 Z = v2 ) = 8, C × (a2 Z) = 8, (11.75)
C + (a∗2 Z = v3 ) = 2, C × (a∗2 Z) = 0,
C + (a∗1 Z = v4 ) = 2, C × (a∗1 Z) = 0.
The numerical results of the complexities of the Chrestenson transforms are given in Table 11.1.

Table 11.1  Complexities of the Chrestenson transforms.

N     n    C^+      C^×
3     1    13       4
9     2    78       24
27    3    351      108
81    4    1404     432
5     1    28       16
25    2    280      160
125   3    2100     1200
625   4    14,000   8000
H_{0,0}^{0}(k) = 1, \quad 0 \le k < 1,
H_{i,1}^{q}(k) = \sqrt{2^{\,i-1}}, \quad \frac{2q}{2^i} \le k < \frac{2q+1}{2^i},
H_{i,2}^{q}(k) = -\sqrt{2^{\,i-1}}, \quad \frac{2q+1}{2^i} \le k < \frac{2q+2}{2^i},   (11.85)
H_{i,t}^{q}(k) = 0 \text{ at all other points,}
from which we generate a classical Haar transform matrix of order 2n (see previous
chapters in this book).
Row 1: H^{01}_{11}(k) = 1, 0 ≤ k < 1/3;        Row 2: H^{02}_{11}(k) = 1, 0 ≤ k < 1/3;
       H^{01}_{12}(k) = a, 1/3 ≤ k < 2/3;              H^{02}_{12}(k) = a^2, 1/3 ≤ k < 2/3;   (11.86a)
       H^{01}_{13}(k) = a^2, 2/3 ≤ k < 1;              H^{02}_{13}(k) = a, 2/3 ≤ k < 1;

Row 3: H^{01}_{21}(k) = s, 0 ≤ k < 1/9;        Row 4: H^{11}_{21}(k) = s, 1/3 ≤ k < 4/9;
       H^{01}_{22}(k) = sa, 1/9 ≤ k < 2/9;             H^{11}_{22}(k) = sa, 4/9 ≤ k < 5/9;    (11.86b)
       H^{01}_{23}(k) = sa^2, 2/9 ≤ k < 1/3;           H^{11}_{23}(k) = sa^2, 5/9 ≤ k < 2/3;

Row 5: H^{21}_{21}(k) = s, 2/3 ≤ k < 7/9;      Row 6: H^{02}_{21}(k) = s, 0 ≤ k < 1/9;
       H^{21}_{22}(k) = sa, 7/9 ≤ k < 8/9;             H^{02}_{22}(k) = sa^2, 1/9 ≤ k < 2/9;  (11.86c)
       H^{21}_{23}(k) = sa^2, 8/9 ≤ k < 1;             H^{02}_{23}(k) = sa, 2/9 ≤ k < 1/3;

Row 7: H^{12}_{21}(k) = s, 1/3 ≤ k < 4/9;      Row 8: H^{22}_{21}(k) = s, 2/3 ≤ k < 7/9;
       H^{12}_{22}(k) = sa^2, 4/9 ≤ k < 5/9;           H^{22}_{22}(k) = sa^2, 7/9 ≤ k < 8/9;  (11.86d)
       H^{12}_{23}(k) = sa, 5/9 ≤ k < 2/3;             H^{22}_{23}(k) = sa, 8/9 ≤ k < 1.
Therefore, the complete orthogonal generalized Haar transform matrix for p = 3,
n = 2 has the following form:
H_9 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & a & a & a & a^2 & a^2 & a^2 \\
1 & 1 & 1 & a^2 & a^2 & a^2 & a & a & a \\
s & sa & sa^2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & s & sa & sa^2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & s & sa & sa^2 \\
s & sa^2 & sa & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & s & sa^2 & sa & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & s & sa^2 & sa
\end{pmatrix}.   (11.87)
From the above-given generalized Haar transform matrices, we can see that the
Haar transform is globally sensitive for the first p of the pn row vectors, but locally
sensitive for all subsequent vectors.
for n = 1,
H_2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} i_2 \\ j_2 \end{pmatrix};   (11.89)
for n = 2,
H_4 = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & 1 & -1 & -1 \\
\sqrt{2} & -\sqrt{2} & 0 & 0 \\
0 & 0 & \sqrt{2} & -\sqrt{2}
\end{pmatrix} = \begin{pmatrix} H_2 \otimes i_2 \\ \sqrt{2}\, I_2 \otimes j_2 \end{pmatrix};   (11.90)
for n = 3,
H_8 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
\sqrt{2} & \sqrt{2} & -\sqrt{2} & -\sqrt{2} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \sqrt{2} & \sqrt{2} & -\sqrt{2} & -\sqrt{2} \\
2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 2 & -2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 2 & -2 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 2 & -2
\end{pmatrix} = \begin{pmatrix} H_4 \otimes i_2 \\ 2 I_4 \otimes j_2 \end{pmatrix}.   (11.91)
where
y_0 = (x_0 + x_1) + (x_2 + x_3),
y_1 = (x_0 + x_1) - (x_2 + x_3),   (11.94)
y_2 = \sqrt{2}\,(x_0 - x_1),
y_3 = \sqrt{2}\,(x_2 - x_3).
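The recursion in Eqs. (11.89)–(11.91) can be implemented in a few lines. The sketch below (assuming numpy, not part of the original text) builds H_{2^n} and verifies that its rows are orthogonal with H H^T = N I_N:

```python
import numpy as np

# Recursive Haar construction: H_{2^(k+1)} stacks H_{2^k} (x) i2 on top
# of sqrt(2)^k I_{2^k} (x) j2, with i2 = (1, 1) and j2 = (1, -1).
def haar(n):
    i2 = np.array([[1.0, 1.0]])
    j2 = np.array([[1.0, -1.0]])
    H = np.array([[1.0]])
    for k in range(n):
        top = np.kron(H, i2)
        bot = np.sqrt(2.0) ** k * np.kron(np.eye(2 ** k), j2)
        H = np.vstack([top, bot])
    return H

H8 = haar(3)
assert np.allclose(H8 @ H8.T, 8 * np.eye(8))   # H H^T = N I_N
```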
Now we compute the complexity of the (\sqrt{2}^{\,n-1} I_{N/2} \otimes j_2)X transform,
(\sqrt{2}^{\,n-1} I_{N/2} \otimes j_2)X = (\sqrt{2}^{\,n-1} I_{N/2} \otimes j_2)\left( P_1 \otimes X^1 + P_2 \otimes X^2 + \cdots + P_{N/2} \otimes X^{N/2} \right)
= \sqrt{2}^{\,n-1}\left[ P_1(x_0 - x_1) + P_2(x_2 - x_3) + \cdots + P_{N/2}(x_{N-2} - x_{N-1}) \right]
= \sqrt{2}^{\,n-1} \begin{pmatrix} x_0 - x_1 \\ x_2 - x_3 \\ \vdots \\ x_{N-2} - x_{N-1} \end{pmatrix}.   (11.98)
C^+(H_{2^n}) = 2^{n+1} - 2,
C^\times(H_{2^n}) = 2^n - 2, \quad n = 1, 2, 3, \ldots.   (11.100)
for n = 1,
H_3 = \begin{pmatrix} 1 & 1 & 1 \\ 1 & a & a^2 \\ 1 & a^2 & a \end{pmatrix} = \begin{pmatrix} i_3 \\ b_3 \\ b_3^* \end{pmatrix};   (11.101)
for n = 2,
H_9 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & a & a & a & a^2 & a^2 & a^2 \\
1 & 1 & 1 & a^2 & a^2 & a^2 & a & a & a \\
s & sa & sa^2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & s & sa & sa^2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & s & sa & sa^2 \\
s & sa^2 & sa & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & s & sa^2 & sa & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & s & sa^2 & sa
\end{pmatrix} = \begin{pmatrix} H_3 \otimes i_3 \\ sI_3 \otimes b_3 \\ sI_3 \otimes b_3^* \end{pmatrix};   (11.102)
for n = 3,
H_{27} = \begin{pmatrix} H_9 \otimes i_3 \\ s^2 I_9 \otimes b_3 \\ s^2 I_9 \otimes b_3^* \end{pmatrix}.   (11.103)
Continuing this process, we obtain a recursive representation of the generalized
Haar matrices of any order 3n as follows:
H_{3^n} = \begin{pmatrix} H_{3^{n-1}} \otimes i_3 \\ s^{n-1} I_{3^{n-1}} \otimes b_3 \\ s^{n-1} I_{3^{n-1}} \otimes b_3^* \end{pmatrix}.   (11.104)
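The recursion in Eq. (11.104) can be checked numerically. The sketch below (assuming numpy) takes s = √3, an assumption made here by analogy with the √2 factor in the order-2^n case, so that H H* = N I_N:

```python
import numpy as np

a = np.exp(2j * np.pi / 3)
i3 = np.array([[1, 1, 1]], dtype=complex)
b3 = np.array([[1, a, a ** 2]])

def gen_haar_3(n):
    H = np.array([[1.0 + 0j]])
    s = np.sqrt(3.0)                     # assumed scale factor
    for k in range(n):
        H = np.vstack([np.kron(H, i3),
                       s ** k * np.kron(np.eye(3 ** k), b3),
                       s ** k * np.kron(np.eye(3 ** k), b3.conj())])
    return H

H9 = gen_haar_3(2)
assert np.allclose(H9 @ H9.conj().T, 9 * np.eye(9))
```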
Now we compute the complexity of the generalized Haar transform of order 3^n. First, we calculate the complexity of the H_3 transform. Let Z^T = (z_0, z_1, z_2) = (x_0 + jy_0, x_1 + jy_1, x_2 + jy_2) be a complex-valued vector of length 3, a = \exp[j(2\pi/3)], j = \sqrt{-1}. Because the generalized Haar transform matrix H_3 is identical to the
Chrestenson matrix of order 3, the 1D forward generalized Haar transform of
order 3 can be performed in the manner that was shown in Section 11.3.1 [see
Eqs. (11.57) and (11.58)], and has the complexity
C + (i3 Z) = 4, C × (i3 Z) = 0,
C + (b3 Z) = 6, C × (b3 Z) = 4, (11.105)
C + (b∗3 Z) = 1, C × (b∗3 Z) = 0.
That is, C + (H3 ) = 11, C × (H3 ) = 4.
Now, let Z T = (z0 , z1 , . . . , zN−1 ) be a complex-valued vector of length N = 3n .
We introduce the following notations: Pi denotes a (0, 1) column vector of length
N/3 whose only i’th element is equal to 1 (i = 0, 1, . . . , N/3 − 1), and (Z i )T =
(z3i , z3i+1 , z3i+2 ). The 1D forward generalized Haar transform of order N can be
performed as follows:
H_{3^n} Z = \begin{pmatrix} (H_{3^{n-1}} \otimes i_3) Z \\ (s^{n-1} I_{3^{n-1}} \otimes b_3) Z \\ (s^{n-1} I_{3^{n-1}} \otimes b_3^*) Z \end{pmatrix}.   (11.106)
We obtain a similar result for the (s^{n-1} I_{3^{n-1}} \otimes b_3^*)Z transform. Hence, using Eq. (11.105), we obtain
or
where
Because
we can write
Because
a_1 Z^i = (1, j, -1, -j)\begin{pmatrix} z_{4i-4} \\ z_{4i-3} \\ z_{4i-2} \\ z_{4i-1} \end{pmatrix} = (1, j, -1, -j)\begin{pmatrix} x_{4i-4} + jy_{4i-4} \\ x_{4i-3} + jy_{4i-3} \\ x_{4i-2} + jy_{4i-2} \\ x_{4i-1} + jy_{4i-1} \end{pmatrix}
= (x_{4i-4} - x_{4i-2}) - (y_{4i-3} - y_{4i-1}) + j[(y_{4i-4} - y_{4i-2}) + (x_{4i-3} - x_{4i-1})],   (11.122)
we obtain
and
because
Now we will compute the complexity of the generalized Haar transform of order
5n . First, we calculate the complexity of the H5 transform. Let Z T = (z0 , z1 , . . . , z4 )
be a complex-valued vector of length 5; then,
H_5 Z = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 \\
1 & a & a^2 & a^3 & a^4 \\
1 & a^2 & a^4 & a & a^3 \\
1 & a^3 & a & a^4 & a^2 \\
1 & a^4 & a^3 & a^2 & a
\end{pmatrix}\begin{pmatrix} z_0 \\ z_1 \\ z_2 \\ z_3 \\ z_4 \end{pmatrix} = \begin{pmatrix} i_5 \\ a_1 \\ a_2 \\ a_2^* \\ a_1^* \end{pmatrix}\begin{pmatrix} z_0 \\ z_1 \\ z_2 \\ z_3 \\ z_4 \end{pmatrix} = \begin{pmatrix} v_0 \\ v_1 \\ v_2 \\ v_3 \\ v_4 \end{pmatrix},   (11.133)
where
v_2^i = y_0 - (y_1 + y_4)\cos\frac{\pi}{5} + (y_2 + y_3)\cos\frac{2\pi}{5} + (x_1 - x_4)\sin\frac{\pi}{5} - (x_2 - x_3)\sin\frac{2\pi}{5},
v_3^r = x_0 - (x_1 + x_4)\cos\frac{\pi}{5} + (x_2 + x_3)\cos\frac{2\pi}{5} + (y_1 - y_4)\sin\frac{\pi}{5} - (y_2 - y_3)\sin\frac{2\pi}{5},
v_3^i = y_0 - (y_1 + y_4)\cos\frac{\pi}{5} + (y_2 + y_3)\cos\frac{2\pi}{5} - (x_1 - x_4)\sin\frac{\pi}{5} + (x_2 - x_3)\sin\frac{2\pi}{5}.
X_1 = x_1 + x_4, \quad X_2 = x_2 + x_3, \quad X_1' = x_1 - x_4, \quad X_2' = x_2 - x_3,
Y_1 = y_1 + y_4, \quad Y_2 = y_2 + y_3, \quad Y_1' = y_1 - y_4, \quad Y_2' = y_2 - y_3;
C_1 = X_1 \cos\frac{2\pi}{5}, \quad C_2 = X_2 \cos\frac{\pi}{5}, \quad C_3 = Y_1 \cos\frac{2\pi}{5}, \quad C_4 = Y_2 \cos\frac{\pi}{5},
T_1 = X_1 \cos\frac{\pi}{5}, \quad T_2 = X_2 \cos\frac{2\pi}{5}, \quad T_3 = Y_1 \cos\frac{\pi}{5}, \quad T_4 = Y_2 \cos\frac{2\pi}{5};   (11.135)
S_1 = X_1' \sin\frac{2\pi}{5}, \quad S_2 = X_2' \sin\frac{\pi}{5}, \quad S_3 = Y_1' \sin\frac{2\pi}{5}, \quad S_4 = Y_2' \sin\frac{\pi}{5},
R_1 = Y_1' \sin\frac{\pi}{5}, \quad R_2 = Y_2' \sin\frac{2\pi}{5}, \quad R_3 = X_1' \sin\frac{\pi}{5}, \quad R_4 = X_2' \sin\frac{2\pi}{5}.
v0 = x0 + X1 + X2 + j(y0 + Y1 + Y2 ),
v1 = (x0 + C1 − C2 ) − (S 3 + S 4 ) + j[(y0 + C3 − C4 ) + (S 1 + S 2 )],
v4 = (x0 + C1 − C2 ) + (S 3 + S 4 ) + j[(y0 + C3 − C4 ) − (S 1 + S 2 )], (11.136)
v2 = (x0 − T 1 + T 2 ) − (R1 − R2 ) + j[(y0 − T 3 + T 4 ) + (R3 − R4 )],
v3 = (x0 − T 1 + T 2 ) + (R1 − R2 ) + j[(y0 − T 3 + T 4 ) − (R3 − R4 )].
The 1D forward generalized Haar transform of order N = 5^n can be performed as follows:
H_{5^n} Z = \begin{pmatrix}
(H_{5^{n-1}} \otimes i_5) Z \\
(\sqrt{5}^{\,n-1} I_{5^{n-1}} \otimes a_1) Z \\
(\sqrt{5}^{\,n-1} I_{5^{n-1}} \otimes a_2) Z \\
(\sqrt{5}^{\,n-1} I_{5^{n-1}} \otimes a_2^*) Z \\
(\sqrt{5}^{\,n-1} I_{5^{n-1}} \otimes a_1^*) Z
\end{pmatrix}.   (11.137)
(H_{5^{n-1}} \otimes i_5) Z = (H_{5^{n-1}} \otimes i_5)\left( P_1 \otimes Z^1 + P_2 \otimes Z^2 + \cdots + P_{N/5} \otimes Z^{N/5} \right)
= H_{N/5}\left[ P_1(z_0 + \cdots + z_4) + P_2(z_5 + \cdots + z_9) + \cdots + P_{N/5}(z_{N-5} + \cdots + z_{N-1}) \right]
= H_{N/5}\begin{pmatrix} z_0 + z_1 + \cdots + z_4 \\ z_5 + z_6 + \cdots + z_9 \\ \vdots \\ z_{N-5} + \cdots + z_{N-1} \end{pmatrix}.   (11.138)
Now we compute the complexity of the (\sqrt{5}^{\,n-1} I_{5^{n-1}} \otimes a_1)Z transform.
The numerical results of the complexities of the generalized Haar transforms are given in Table 11.2.

Table 11.2  Complexities of the generalized Haar transforms.

N      n    C^+               C^×              Shifts
2      1    2                 0                0
4      2    6                 2                0
8      3    14                6                0
16     4    30                14               0
3^n         6(3^n − 3) + 11   3(3^n − 3) + 4   0
3      1    11                4                0
9      2    47                22               0
27     3    155               76               0
81     4    479               238              0
4      1    16                0                0
16     2    80                0                6
64     3    336               0                24
256    4    1360              0                96
5^n         7(5^n − 1)        6·5^n − 14       0
5      1    28                16               0
25     2    168               136              0
125    3    868               736              0
625    4    4368              3736             0
References
1. A. T. Butson, “Generalized Hadamard matrices,” Proc. Am. Math. Soc. 13,
894–898 (1962).
2. S. S. Agaian, Hadamard Matrices and their Applications, Lecture Notes in
Mathematics, 1168, Springer, New York (1985).
3. C. Mackenzie and J. Seberry, “Maximal q-ary codes and Plotkin’s bound,”
Ars Combin 26B, 37–50 (1988).
4. D. Jungnickel and H. Lenz, Design Theory, Cambridge University Press,
Cambridge, UK (1993).
5. S. S. Agaian, “Advances and problems of fast orthogonal transforms for
signal-images processing applications, Part 1,” in Ser. Pattern Recognition,
Classification, Forecasting Yearbook, The Russian Academy of Sciences,
146–215 Nauka, Moscow (1990).
6. G. Beauchamp, Walsh Functions and their Applications, Academic Press,
London (1980).
7. I. J. Good, “The interaction algorithm and practical Fourier analysis,” J. R. Stat. Soc. London B-20, 361–372 (1958).
8. A. M. Trachtman and B. A. Trachtman, Fundamentals of the Theory of
Discrete Signals on Finite Intervals, Sov. Radio, Moscow (1975) (in Russian).
9. P. J. Slichta, “Higher dimensional Hadamard matrices,” IEEE Trans. Inf.
Theory IT-25 (5), 566–572 (1979).
10. J. Hammer and J. Seberry, “Higher dimensional orthogonal designs and
applications,” IEEE Trans. Inf. Theory IT-27, 772–779 (1981).
11. S. S. Agaian and K. O. Egiazarian, “Generalized Hadamard matrices,” Math.
Prob. Comput. Sci. 12, 51–88 (1984) (in Russian), Yerevan.
12. W. Tadej and K. Zyczkowski, “A concise guide to complex Hadamard
matrices,” Open Syst. Inf. Dyn. 13, 133–177 (2006).
13. T. Butson, “Relations among generalized Hadamard matrices, relative
difference sets, and maximal length linear recurring sequences,” Can. J. Math.
15, 42–48 (1963).
14. R. J. Turyn, Complex Hadamard Matrices. Combinatorial Structures and their
Applications, Gordon and Breach, New York (1970) pp. 435–437.
15. C. H. Yang, “Maximal binary matrices and sum of two squares,” Math.
Comput 30 (133), 148–153 (1976).
16. C. Watari, “A generalization of Haar functions,” Tohoku Math. J. 8 (3),
286–290 (1956).
17. S. Agaian, J. Astola, and K. Egiazarian, Binary Polynomial Transforms and Nonlinear Digital Filters, Marcel Dekker, New York (1995).
Definition 12.1.1: The square matrix A = (a_{i,j}) of order n is called a jacket matrix if its entries are nonzero and real, complex, or from a finite field, and satisfy
AB = BA = I_n,   (12.1)
where B = \frac{1}{n}\left[ 1/a_{i,j} \right]^T is the normalized transpose of the element-wise inverse of A. Examples:
(1) J_2 = \begin{pmatrix} 1 & 1 \\ 1 & \alpha \end{pmatrix}, \quad J_2^{-1} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1/\alpha \end{pmatrix},   (12.3)
hence, A is the jacket matrix for all nonzero a and c, and when a = c = 1, it is a
Hadamard matrix.
(3) In Ref. 9, the kernel jacket matrix of order 2 is defined as
J_2 = \begin{pmatrix} a & b \\ b & -c \end{pmatrix},   (12.5)
where we must accept that a = b. Clearly, the result is a classical Hadamard matrix
of order 2,
J_2 = aH_2 = \begin{pmatrix} a & a \\ a & -a \end{pmatrix}.   (12.9)
(5) Let w be a third root of unity, i.e., w = \exp\left(j\frac{2\pi}{3}\right) = \cos\frac{2\pi}{3} + j\sin\frac{2\pi}{3}, j = \sqrt{-1}; then, we have
B_3 = \begin{pmatrix} 1 & 1 & 1 \\ 1 & w & w^2 \\ 1 & w^2 & w \end{pmatrix}, \quad B_3^{-1} = \frac{1}{3}\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1/w & 1/w^2 \\ 1 & 1/w^2 & 1/w \end{pmatrix}.   (12.11)
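A quick numerical check (a sketch assuming numpy) confirms the jacket property of B_3 in Eq. (12.11): the inverse is the transposed element-wise reciprocal, scaled by 1/n:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
B3 = np.array([[1, 1, 1],
               [1, w, w**2],
               [1, w**2, w]])

B3_inv = (1.0 / B3).T / 3          # jacket inverse: (1/n) [1/a_ij]^T
assert np.allclose(B3 @ B3_inv, np.eye(3))
assert np.allclose(B3_inv @ B3, np.eye(3))
```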
\sum_{i=0}^{n-1} \frac{a_{j,i}}{a_{k,i}} = \sum_{i=0}^{n-1} \frac{a_{i,k}}{a_{i,j}} = 0, \quad \text{for all } j \neq k, \; j, k = 0, 1, \ldots, n-1.   (12.14)
where w is a third root of unity. Then, the following matrix is the jacket matrix dependent on a parameter α:
J_6(α) = \begin{pmatrix}
1 & 1 & 1 & α & α & α \\
1 & w & w^2 & α & αw^2 & αw \\
1 & w^2 & w & α & αw & αw^2 \\
1 & 1 & 1 & -α & -α & -α \\
1 & w & w^2 & -α & -αw^2 & -αw \\
1 & w^2 & w & -α & -αw & -αw^2
\end{pmatrix},   (12.19a)
J_6^{-1}(α) = \frac{1}{6}\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1/w & 1/w^2 & 1 & 1/w & 1/w^2 \\
1 & 1/w^2 & 1/w & 1 & 1/w^2 & 1/w \\
1/α & 1/α & 1/α & -1/α & -1/α & -1/α \\
1/α & 1/(αw^2) & 1/(αw) & -1/α & -1/(αw^2) & -1/(αw) \\
1/α & 1/(αw) & 1/(αw^2) & -1/α & -1/(αw) & -1/(αw^2)
\end{pmatrix}.   (12.19b)
The jacket matrices and their inverse matrices of order 6 for various values of α
are given below (remember that w is a third root of unity):
(1) α = 2:
J_6 = \begin{pmatrix}
1 & 1 & 1 & 2 & 2 & 2 \\
1 & w & w^2 & 2 & 2w^2 & 2w \\
1 & w^2 & w & 2 & 2w & 2w^2 \\
1 & 1 & 1 & -2 & -2 & -2 \\
1 & w & w^2 & -2 & -2w^2 & -2w \\
1 & w^2 & w & -2 & -2w & -2w^2
\end{pmatrix},   (12.20a)
J_6^{-1} = \frac{1}{6}\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1/w & 1/w^2 & 1 & 1/w & 1/w^2 \\
1 & 1/w^2 & 1/w & 1 & 1/w^2 & 1/w \\
1/2 & 1/2 & 1/2 & -1/2 & -1/2 & -1/2 \\
1/2 & 1/(2w^2) & 1/(2w) & -1/2 & -1/(2w^2) & -1/(2w) \\
1/2 & 1/(2w) & 1/(2w^2) & -1/2 & -1/(2w) & -1/(2w^2)
\end{pmatrix}.   (12.20b)
(2) α = 1/2:
J_6 = \begin{pmatrix}
1 & 1 & 1 & 1/2 & 1/2 & 1/2 \\
1 & w & w^2 & 1/2 & w^2/2 & w/2 \\
1 & w^2 & w & 1/2 & w/2 & w^2/2 \\
1 & 1 & 1 & -1/2 & -1/2 & -1/2 \\
1 & w & w^2 & -1/2 & -w^2/2 & -w/2 \\
1 & w^2 & w & -1/2 & -w/2 & -w^2/2
\end{pmatrix}, \quad
J_6^{-1} = \frac{1}{6}\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1/w & 1/w^2 & 1 & 1/w & 1/w^2 \\
1 & 1/w^2 & 1/w & 1 & 1/w^2 & 1/w \\
2 & 2 & 2 & -2 & -2 & -2 \\
2 & 2/w^2 & 2/w & -2 & -2/w^2 & -2/w \\
2 & 2/w & 2/w^2 & -2 & -2/w & -2/w^2
\end{pmatrix}.   (12.21)
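The parametric matrix J_6(α) is easy to test numerically. The sketch below (assuming numpy; the value of α is an arbitrary test choice) exploits the block pattern visible in Eq. (12.19a), namely J_6(α) = [[B, αB*], [B, −αB*]] with B from Eq. (12.11), and verifies the jacket property:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
alpha = 2.5                         # arbitrary nonzero test value
B = np.array([[1, 1, 1], [1, w, w**2], [1, w**2, w]])

J6 = np.block([[B, alpha * B.conj()],
               [B, -alpha * B.conj()]])
J6_inv = (1.0 / J6).T / 6           # jacket inverse: (1/n) [1/a_ij]^T
assert np.allclose(J6 @ J6_inv, np.eye(6))
assert np.allclose(J6_inv @ J6, np.eye(6))
```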
is a jacket matrix of order 2n, where e is the column vector of all 1 elements of
length n − 1 (remember that if A = (ai, j ), then AH = [(1/ai, j )]T ).
Proof: Because A is a jacket matrix of order n with a core A1 , we have
AA^H = A^H A = \begin{pmatrix} 1 & e^T \\ e & A_1 \end{pmatrix}\begin{pmatrix} 1 & e^T \\ e & A_1^H \end{pmatrix} = \begin{pmatrix} 1 & e^T \\ e & A_1^H \end{pmatrix}\begin{pmatrix} 1 & e^T \\ e & A_1 \end{pmatrix} = nI_n.   (12.24)
e^T + e^T A_1 = 0, \quad (I_{n-1} + A_1)e = 0,
ee^T + A_1 A_1^H = ee^T + A_1^H A_1 = nI_{n-1}.   (12.25)
e^T + e^T B_1 = 0, \quad (I_{n-1} + B_1)e = 0, \quad ee^T + B_1B_1^H = ee^T + B_1^H B_1 = nI_{n-1},
e^T + e^T C_1 = 0, \quad (I_{n-1} + C_1)e = 0, \quad ee^T + C_1C_1^H = ee^T + C_1^H C_1 = nI_{n-1},   (12.26)
e^T + e^T D_1 = 0, \quad (I_{n-1} + D_1)e = 0, \quad ee^T + D_1D_1^H = ee^T + D_1^H D_1 = nI_{n-1}.
From Eq. (12.27), it follows that AC^H = BD^H if and only if A_1C_1^H = B_1D_1^H. On the other hand, from the fourth property of jacket matrices given above, we obtain
from which it follows that AC^H = BD^H if and only if CA^H = DB^H. Finally, we find that
We can see that the matrix [S W]4 is derived by doubling elements of the
inner 2 × 2 submatrix of the Sylvester–Hadamard matrix. Such matrices are
called weighted Hadamard or centered matrices.4 As for the Sylvester–Hadamard
matrix, a recursive relation governs the generation of higher orders of weighted
Sylvester–Hadamard matrices and their inverses, i.e.,
[SW]_{2^k} = [SW]_{2^{k-1}} \otimes H_2, \quad k = 3, 4, \ldots,
[SW]_{2^k}^{-1} = [SW]_{2^{k-1}}^{-1} \otimes H_2^{-1}, \quad k = 3, 4, \ldots,   (12.32)
where H_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix}.
The forward and inverse weighted Sylvester–Hadamard transform matrices are
given below (Fig. 12.1).
[SW]_8 = [SW]_4 \otimes H_2 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
1 & 1 & -2 & -2 & 2 & 2 & -1 & -1 \\
1 & -1 & -2 & 2 & 2 & -2 & -1 & 1 \\
1 & 1 & 2 & 2 & -2 & -2 & -1 & -1 \\
1 & -1 & 2 & -2 & -2 & 2 & -1 & 1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1
\end{pmatrix},   (12.33a)
[SW]_8^{-1} = [SW]_4^{-1} \otimes H_2^{-1} = \frac{1}{16}\begin{pmatrix}
2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\
2 & -2 & 2 & -2 & 2 & -2 & 2 & -2 \\
2 & 2 & -1 & -1 & 1 & 1 & -2 & -2 \\
2 & -2 & -1 & 1 & 1 & -1 & -2 & 2 \\
2 & 2 & 1 & 1 & -1 & -1 & -2 & -2 \\
2 & -2 & 1 & -1 & -1 & 1 & -2 & 2 \\
2 & 2 & -2 & -2 & -2 & -2 & 2 & 2 \\
2 & -2 & -2 & 2 & -2 & 2 & 2 & -2
\end{pmatrix}.   (12.33b)
Figure 12.1 The first (a) four and (b) eight continuous weighted Sylvester–Hadamard
functions in the interval (0, 1).
Therefore, we have
where Om is the zero matrix of order m. The 8 × 8 weighted coefficient matrix has
the following form:
[RC]_8 = 2 \cdot \begin{pmatrix}
4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 4 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 6 & 0 & -2 & 0 & 0 & 0 \\
0 & 0 & 0 & 6 & 0 & -2 & 0 & 0 \\
0 & 0 & -2 & 0 & 6 & 0 & 0 & 0 \\
0 & 0 & 0 & -2 & 0 & 6 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 4 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 4
\end{pmatrix}.   (12.40)
Because [RC]_4 is symmetric and has at most two nonzero elements in each row and column, from Eq. (12.37) it follows that the same is true for [RC]_{2^n} (n ≥ 2). Note that from Eq. (12.34) it follows that
[SW]_{2^n} = \frac{1}{2^n} H_{2^n} [RC]_{2^n}.   (12.41)
[RJ]_n = \frac{1}{n} H_n^T [RJ]_n H_n   (12.43)
is called the parametric reverse jacket matrix. Furthermore, we will consider the case when H_n is a Sylvester–Hadamard matrix of order n = 2^k, i.e., Eq. (12.43) takes the form [RJ]_n = (1/n) H_n [RJ]_n H_n.
Examples of parametric reverse jacket matrices of order 4 with one and two
parameters are given as follows:
[RJ]_4(a) = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -a & a & -1 \\ 1 & a & -a & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix}, \quad
[RJ]_4(a, b) = \begin{pmatrix} b & 1 & 1 & b \\ 1 & -a & a & -1 \\ 1 & a & -a & -1 \\ b & -1 & -1 & b \end{pmatrix},
[RJ]_4^{-1}(a) = \frac{1}{4}\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1/a & 1/a & -1 \\ 1 & 1/a & -1/a & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix}, \quad
[RJ]_4^{-1}(a, b) = \frac{1}{4}\begin{pmatrix} 1/b & 1 & 1 & 1/b \\ 1 & -1/a & 1/a & -1 \\ 1 & 1/a & -1/a & -1 \\ 1/b & -1 & -1 & 1/b \end{pmatrix}.
  (12.44)
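A short numerical sketch (assuming numpy; the parameter value is an arbitrary test choice) verifies Eq. (12.44) and the commutation with the Sylvester–Hadamard matrix used in Theorem 12.3.1 below:

```python
import numpy as np

a = 3.0
RJ4 = np.array([[1, 1, 1, 1],
                [1, -a, a, -1],
                [1, a, -a, -1],
                [1, -1, -1, 1]], dtype=float)
RJ4_inv = 0.25 * np.array([[1, 1, 1, 1],
                           [1, -1/a, 1/a, -1],
                           [1, 1/a, -1/a, -1],
                           [1, -1, -1, 1]])
H2 = np.array([[1, 1], [1, -1]], dtype=float)
H4 = np.kron(H2, H2)

assert np.allclose(RJ4 @ RJ4_inv, np.eye(4))   # inverse from Eq. (12.44)
assert np.allclose(RJ4 @ H4, H4 @ RJ4)         # [RJ]4 commutes with H4
```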
Theorem 12.3.1: The matrix [RJ]_n is a parametric reverse jacket matrix if and only if
[RJ]_n H_n = H_n [RJ]_n.   (12.45)
Note that if [RJ]n is a reverse jacket matrix, then the matrix ([RJ]n )k (k is an integer)
is a reverse jacket matrix, too. Indeed, we have
Note that the matrix [RJ]2 (a, b) is a unique parametric reverse jacket matrix of
order 2.
(1) The Kronecker product of two parametric reverse jacket matrices satisfies
Eq. (12.45). Indeed, let [RJ]n (x0 , . . . , xk−1 ), [RJ]m (y0 , . . . , yr−1 ) be parametric
reverse jacket matrices and Hn , Hm be Hadamard matrices of order n and m,
respectively; then, we have
(2) The Kronecker product of a parametric reverse jacket matrix with a nonparametric reverse jacket matrix is a parametric reverse jacket matrix. Indeed,
let [RJ]n (x0 , . . . , xk−1 ), [RJ]m be parametric and nonparametric reverse jacket
matrices, and Hn , Hm be Hadamard matrices of order n and m, respectively;
then, we have
([RJ]n (x0 , . . . , xk−1 ) ⊗ [RJ]m )Hmn = ([RJ]n (x0 , . . . , xk−1 ) ⊗ [RJ]m )(Hn ⊗ Hm )
= ([RJ]n (x0 , . . . , xk−1 )Hn ) ⊗ ([RJ]m Hm )
= (Hn [RJ]n (x0 , . . . , xk−1 )) ⊗ (Hm [RJ]m )
= (Hn ⊗ Hm )([RJ]n (x0 , . . . , xk−1 ) ⊗ [RJ]m )
= Hmn ([RJ]n (x0 , . . . , xk−1 ) ⊗ [RJ]m ). (12.51)
[RJ]_2(a, b) \otimes [RJ]_2(1, 1) = \begin{pmatrix} a & b \\ b & a-2b \end{pmatrix} \otimes \begin{pmatrix} + & + \\ + & - \end{pmatrix}
= \begin{pmatrix}
a & a & b & b \\
a & -a & b & -b \\
b & b & a-2b & a-2b \\
b & -b & a-2b & -a+2b
\end{pmatrix}.   (12.52)
[RJ]_2(a, b) \otimes [RJ]_2(1, 2) = \begin{pmatrix} a & b \\ b & a-2b \end{pmatrix} \otimes \begin{pmatrix} 1 & 2 \\ 2 & -3 \end{pmatrix}
= \begin{pmatrix}
a & 2a & b & 2b \\
2a & -3a & 2b & -3b \\
b & 2b & a-2b & 2a-4b \\
2b & -3b & 2a-4b & -3a+6b
\end{pmatrix}.   (12.53)
[RJ]_2(a, b) \otimes [RJ]_2(2, 1) = \begin{pmatrix} a & b \\ b & a-2b \end{pmatrix} \otimes \begin{pmatrix} 2 & 1 \\ 1 & 0 \end{pmatrix}
= \begin{pmatrix}
2a & a & 2b & b \\
a & 0 & b & 0 \\
2b & b & 2a-4b & a-2b \\
b & 0 & a-2b & 0
\end{pmatrix}.   (12.54)
is the parametric reverse jacket matrix of order 2^n, n = 2, 3, \ldots. Here we provide an example. Let P_2 = \begin{pmatrix} 1 & 2 \\ 2 & -3 \end{pmatrix} be a reverse jacket matrix. Then, the following matrix also is a reverse jacket matrix:
P_4 = \begin{pmatrix} P_2 & P_2 \\ P_2 & -P_2 \end{pmatrix} = \begin{pmatrix}
1 & 2 & 1 & 2 \\
2 & -3 & 2 & -3 \\
1 & 2 & -1 & -2 \\
2 & -3 & -2 & 3
\end{pmatrix}.   (12.56)
Note that the matrix in Eq. (12.56) satisfies the condition of Eq. (12.45) for a
Hadamard matrix of the following form:
H_4 = \begin{pmatrix} H_2 & H_2 \\ H_2 & -H_2 \end{pmatrix} = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1
\end{pmatrix}.   (12.57)
\begin{pmatrix} P_2 & P_2R \\ RP_2 & -RP_2R \end{pmatrix} = \begin{pmatrix}
1 & 2 & 2 & 1 \\
2 & -3 & -3 & 2 \\
2 & -3 & 3 & -2 \\
1 & 2 & -2 & -1
\end{pmatrix}.   (12.59)
Let
A = [RJ]_2(a, b) = \begin{pmatrix} a & b \\ b & a-2b \end{pmatrix}, \quad B = [RJ]_2(c, d), \quad C = [RJ]_2(e, f);
then, from Eq. (12.63), we find the following reverse jacket matrix of order 8 depending on six parameters:
[RJ]_8 = \frac{1}{2}\begin{pmatrix}
a & b & c & d & c & d & a & b \\
b & a-2b & d & c-2d & d & c-2d & b & a-2b \\
c & d & -e & -f & e & f & -c & -d \\
d & c-2d & -f & -e+2f & f & e-2f & -d & -c+2d \\
c & d & e & f & -e & -f & -c & -d \\
d & c-2d & f & e-2f & -f & -e+2f & -d & -c+2d \\
a & b & -c & -d & -c & -d & a & b \\
b & a-2b & -d & -c+2d & -d & -c+2d & b & a-2b
\end{pmatrix}.   (12.68)
Using Theorem 12.3.1.1 and the parametric reverse jacket matrix Q1 (a, b, c)
from Eq. (12.43), we can construct a reverse jacket matrix of order 16 depending
on nine parameters. This matrix has the following form:
\frac{1}{2}\begin{pmatrix}
a & b & b & a & d & e & e & d & d & e & e & d & a & b & b & a \\
b & -c & c & -b & e & -f & f & -e & e & -f & f & -e & b & -c & c & -b \\
b & c & -c & -b & e & f & -f & -e & e & f & -f & -e & b & c & -c & -b \\
a & -b & -b & a & d & -e & -e & d & d & -e & -e & d & a & -b & -b & a \\
d & e & e & d & -g & -h & -h & -g & g & h & h & g & -d & -e & -e & -d \\
e & -f & f & -e & -h & q & -q & h & h & -q & q & -h & -e & f & -f & e \\
e & f & -f & -e & -h & -q & q & h & h & q & -q & -h & -e & -f & f & e \\
d & -e & -e & d & -g & h & h & -g & g & -h & -h & g & -d & e & e & -d \\
d & e & e & d & g & h & h & g & -g & -h & -h & -g & -d & -e & -e & -d \\
e & -f & f & -e & h & -q & q & -h & -h & q & -q & h & -e & f & -f & e \\
e & f & -f & -e & h & q & -q & -h & -h & -q & q & h & -e & -f & f & e \\
d & -e & -e & d & g & -h & -h & g & -g & h & h & -g & -d & e & e & -d \\
a & b & b & a & -d & -e & -e & -d & -d & -e & -e & -d & a & b & b & a \\
b & -c & c & -b & -e & f & -f & e & -e & f & -f & e & b & -c & c & -b \\
b & c & -c & -b & -e & -f & f & e & -e & -f & f & e & b & c & -c & -b \\
a & -b & -b & a & -d & e & e & -d & -d & e & e & -d & a & -b & -b & a
\end{pmatrix}.   (12.69)
respectively,
[RJ]_4(a) = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -a & a & -1 \\ 1 & a & -a & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix}, \quad
[RJ]_4^{-1}(a) = \frac{1}{4}\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1/a & 1/a & -1 \\ 1 & 1/a & -1/a & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix},   (12.75a)
[RJ]_8(a) = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
1 & 1 & -a & -a & a & a & -1 & -1 \\
1 & -1 & -a & a & a & -a & -1 & 1 \\
1 & 1 & a & a & -a & -a & -1 & -1 \\
1 & -1 & a & -a & -a & a & -1 & 1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1
\end{pmatrix},   (12.75b)
[RJ]_8^{-1}(a) = \frac{1}{8}\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
1 & 1 & -1/a & -1/a & 1/a & 1/a & -1 & -1 \\
1 & -1 & -1/a & 1/a & 1/a & -1/a & -1 & 1 \\
1 & 1 & 1/a & 1/a & -1/a & -1/a & -1 & -1 \\
1 & -1 & 1/a & -1/a & -1/a & 1/a & -1 & 1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1
\end{pmatrix}.   (12.75c)
One can check that the inverse matrices of Eq. (12.76) can be defined as
It can be shown that the matrices generated by these formulas are also reverse
jacket matrices, examples of which are given as follows:
[RJ]_4(a) = \begin{pmatrix} 1 & a & a & 1 \\ a & -a^2 & a^2 & -a \\ a & a^2 & -a^2 & -a \\ 1 & -a & -a & 1 \end{pmatrix},
[RJ]_4^{-1}(a) = \frac{1}{4}\begin{pmatrix} 1 & 1/a & 1/a & 1 \\ 1/a & -1/a^2 & 1/a^2 & -1/a \\ 1/a & 1/a^2 & -1/a^2 & -1/a \\ 1 & -1/a & -1/a & 1 \end{pmatrix},   (12.78a)
[RJ]_8(a) = \begin{pmatrix}
1 & 1 & a & a & a & a & 1 & 1 \\
1 & -1 & a & -a & a & -a & 1 & -1 \\
a & a & -a^2 & -a^2 & a^2 & a^2 & -a & -a \\
a & -a & -a^2 & a^2 & a^2 & -a^2 & -a & a \\
a & a & a^2 & a^2 & -a^2 & -a^2 & -a & -a \\
a & -a & a^2 & -a^2 & -a^2 & a^2 & -a & a \\
1 & 1 & -a & -a & -a & -a & 1 & 1 \\
1 & -1 & -a & a & -a & a & 1 & -1
\end{pmatrix},   (12.78b)
[RJ]_8^{-1}(a) = \frac{1}{8}\begin{pmatrix}
1 & 1 & 1/a & 1/a & 1/a & 1/a & 1 & 1 \\
1 & -1 & 1/a & -1/a & 1/a & -1/a & 1 & -1 \\
1/a & 1/a & -1/a^2 & -1/a^2 & 1/a^2 & 1/a^2 & -1/a & -1/a \\
1/a & -1/a & -1/a^2 & 1/a^2 & 1/a^2 & -1/a^2 & -1/a & 1/a \\
1/a & 1/a & 1/a^2 & 1/a^2 & -1/a^2 & -1/a^2 & -1/a & -1/a \\
1/a & -1/a & 1/a^2 & -1/a^2 & -1/a^2 & 1/a^2 & -1/a & 1/a \\
1 & 1 & -1/a & -1/a & -1/a & -1/a & 1 & 1 \\
1 & -1 & -1/a & 1/a & -1/a & 1/a & 1 & -1
\end{pmatrix}.   (12.78c)
Substituting a = j = \sqrt{-1} into the matrices of Eqs. (12.75a)–(12.75c) and (12.78a)–(12.78c), we obtain complex reverse jacket matrices.
We now introduce the notation [RJ]_8(a, b, c, d, e, f) = (R_{i,j})_{i,j=0}^{3}, where R_{i,j} are parametric reverse jacket matrices of order 2 [see Eq. (12.68)], i.e.,
R0,0 = R0,3 = [RJ]2 (a, b), R0,1 = R0,2 = [RJ]2 (c, d),
R1,0 = −R1,3 = [RJ]2 (c, d), R1,1 = −R1,2 = −[RJ]2 (e, f ),
(12.80)
R2,0 = −R2,3 = [RJ]2 (c, d), R2,1 = −R2,2 = [RJ]2 (e, f ),
R3,0 = R3,3 = [RJ]2 (a, b), R3,1 = R3,2 = −[RJ]2 (c, d).
One can show that the matrices of Eq. (12.81) and their inverse matrices in Eq. (12.82) are parametric reverse jacket matrices and have the following form, respectively:
R_8^{(1)} = \begin{pmatrix}
R_{0,0} & R_{0,1} & R_{0,1} & R_{0,0} \\
R_{0,1} & -wR_{1,1} & wR_{1,1} & -R_{0,1} \\
R_{0,1} & wR_{1,1} & -wR_{1,1} & -R_{0,1} \\
R_{0,0} & -R_{0,1} & -R_{0,1} & R_{0,0}
\end{pmatrix},   (12.83a)
\left(R_8^{(1)}\right)^{-1} = \begin{pmatrix}
R_{0,0}^{-1} & R_{0,1}^{-1} & R_{0,1}^{-1} & R_{0,0}^{-1} \\
R_{0,1}^{-1} & -(1/w)R_{1,1}^{-1} & (1/w)R_{1,1}^{-1} & -R_{0,1}^{-1} \\
R_{0,1}^{-1} & (1/w)R_{1,1}^{-1} & -(1/w)R_{1,1}^{-1} & -R_{0,1}^{-1} \\
R_{0,0}^{-1} & -R_{0,1}^{-1} & -R_{0,1}^{-1} & R_{0,0}^{-1}
\end{pmatrix},   (12.83b)
R_8^{(2)} = \begin{pmatrix}
R_{0,0} & wR_{0,1} & wR_{0,1} & R_{0,0} \\
wR_{0,1} & -w^2R_{1,1} & w^2R_{1,1} & -wR_{0,1} \\
wR_{0,1} & w^2R_{1,1} & -w^2R_{1,1} & -wR_{0,1} \\
R_{0,0} & -wR_{0,1} & -wR_{0,1} & R_{0,0}
\end{pmatrix},   (12.83c)
\left(R_8^{(2)}\right)^{-1} = \begin{pmatrix}
R_{0,0}^{-1} & (1/w)R_{0,1}^{-1} & (1/w)R_{0,1}^{-1} & R_{0,0}^{-1} \\
(1/w)R_{0,1}^{-1} & -(1/w^2)R_{1,1}^{-1} & (1/w^2)R_{1,1}^{-1} & -(1/w)R_{0,1}^{-1} \\
(1/w)R_{0,1}^{-1} & (1/w^2)R_{1,1}^{-1} & -(1/w^2)R_{1,1}^{-1} & -(1/w)R_{0,1}^{-1} \\
R_{0,0}^{-1} & -(1/w)R_{0,1}^{-1} & -(1/w)R_{0,1}^{-1} & R_{0,0}^{-1}
\end{pmatrix},   (12.83d)
R_8^{(3)} = \begin{pmatrix}
R_{0,0} & wR_{0,1} & wR_{0,1} & R_{0,0} \\
wR_{0,1} & -R_{1,1} & R_{1,1} & -wR_{0,1} \\
wR_{0,1} & R_{1,1} & -R_{1,1} & -wR_{0,1} \\
R_{0,0} & -wR_{0,1} & -wR_{0,1} & R_{0,0}
\end{pmatrix},   (12.83e)
\left(R_8^{(3)}\right)^{-1} = \begin{pmatrix}
R_{0,0}^{-1} & (1/w)R_{0,1}^{-1} & (1/w)R_{0,1}^{-1} & R_{0,0}^{-1} \\
(1/w)R_{0,1}^{-1} & -R_{1,1}^{-1} & R_{1,1}^{-1} & -(1/w)R_{0,1}^{-1} \\
(1/w)R_{0,1}^{-1} & R_{1,1}^{-1} & -R_{1,1}^{-1} & -(1/w)R_{0,1}^{-1} \\
R_{0,0}^{-1} & -(1/w)R_{0,1}^{-1} & -(1/w)R_{0,1}^{-1} & R_{0,0}^{-1}
\end{pmatrix}.   (12.83f)
X = (X0 , X1 , X2 , X3 )T , (12.85)
where
Now the parametric RJT can be presented as (the coefficient 1/2 is omitted)
QX = \begin{pmatrix} A & B & B & A \\ B & -C & C & -B \\ B & C & -C & -B \\ A & -B & -B & A \end{pmatrix}\begin{pmatrix} X_0 \\ X_1 \\ X_2 \\ X_3 \end{pmatrix} = \begin{pmatrix}
A(X_0 + X_3) + B(X_1 + X_2) \\
B(X_0 - X_3) - C(X_1 - X_2) \\
B(X_0 - X_3) + C(X_1 - X_2) \\
A(X_0 + X_3) - B(X_1 + X_2)
\end{pmatrix}.   (12.87)
where C P+ (n) and C P× (n) denote the number of additions and multiplications of the
jacket transform P. Below, we present in detail the reverse jacket transforms for
some small orders.
We see that the parametric RJT needs eight addition and one multiplication operations. The higher-order parametric RJT generated by the Kronecker product has the following complexity (N = 2^n):
C^+_{[RJ]_N}(a) = N\log_2 N, \quad C^\times_{[RJ]_N}(a) = \frac{N}{4}.   (12.91)
From Eq. (12.75a), it follows that the inverse 1D parametric RJT has the same complexity. Note that if a is a power of 2, then we have
C^+_{[RJ]_N}(a) = N\log_2 N, \quad C^{shift}_{[RJ]_N}(a) = \frac{N}{4},   (12.92)
where
H_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix}, \quad X^T = (x_0, x_1, \ldots, x_7), \quad Y^T = (y_0, y_1, \ldots, y_7),
X_0^T = (x_0, x_1), \quad X_1^T = (x_2, x_3), \quad X_2^T = (x_4, x_5), \quad X_3^T = (x_6, x_7),   (12.93)
Y_0^T = (y_0, y_1), \quad Y_1^T = (y_2, y_3), \quad Y_2^T = (y_4, y_5), \quad Y_3^T = (y_6, y_7).
[RJ]_4(a)\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1 & a & a & 1 \\ a & -a^2 & a^2 & -a \\ a & a^2 & -a^2 & -a \\ 1 & -a & -a & 1 \end{pmatrix}\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix}
(x_0 + x_3) + a(x_1 + x_2) \\
a(x_0 - x_3) - a^2(x_1 - x_2) \\
a(x_0 - x_3) + a^2(x_1 - x_2) \\
(x_0 + x_3) - a(x_1 + x_2)
\end{pmatrix}.   (12.94)
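The butterfly structure of Eq. (12.94) is easy to trace in code. The sketch below (assuming numpy; the input vector is illustrative) computes the fast form and compares it with the direct product; counting a^2 as a precomputed constant gives the eight additions and three multiplications cited below:

```python
import numpy as np

a = 2.0
x = np.array([1.0, 2.0, -1.0, 3.0])

RJ4 = np.array([[1, a, a, 1],
                [a, -a**2, a**2, -a],
                [a, a**2, -a**2, -a],
                [1, -a, -a, 1]])

s03, d03 = x[0] + x[3], x[0] - x[3]   # 2 additions
s12, d12 = x[1] + x[2], x[1] - x[2]   # 2 additions
u1 = a * s12                          # 1 multiplication
u2 = a * d03                          # 1 multiplication
u3 = (a * a) * d12                    # 1 multiplication (a^2 precomputed)
y = np.array([s03 + u1, u2 - u3, u2 + u3, s03 - u1])  # 4 more additions

assert np.allclose(RJ4 @ x, y)
```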
Figure 12.4 Flow graph of the 8-point parametric RJT in Eq. (12.94).
We see that this transform needs only eight addition and three multiplication operations. The higher-order parametric RJT matrix generated by the Kronecker product has the following complexity (N = 2^n):
C^+_{[RJ]_N}(a) = N\log_2 N, \quad C^\times_{[RJ]_N}(a) = \frac{3N}{4}.   (12.96)
The same complexity is required for the inverse parametric RJT from Eq. (12.78a). Note that if a is a power of 2, then we have
C^+_{[RJ]_N}(a) = N\log_2 N, \quad C^{shift}_{[RJ]_N}(a) = \frac{3N}{4}.   (12.97)
A flow graph of an 8-point [RJ]8 (a)X = ([RJ]4 (a) ⊗ H2 )X = Y transform [see
Eqs. (12.93) and (12.94)] is given in Fig. 12.4.
From Eq. (12.98), we can see that the forward 1D parametric RJT of order 4
requires eight addition and four multiplication operations. A flow graph of the
4-point transform in Eq. (12.98) is given in Fig. 12.5.
Note that if a, b, c are powers of 2, then the forward 1D parametric RJT of order 4
can be performed without multiplication operations. It requires only eight addition
and four shift operations.
A parametric reverse jacket matrix of higher order N = 2k (k > 2) can be
generated recursively as
From Eq. (12.102), it follows that an 8-point parametric RJT with two parameters
needs 24 addition and eight multiplication operations. A flow graph of this
transform is given in Fig. 12.7.
12.5.2.2 Case of three parameters
(1) Let a = b, c = d, and e = f . From the matrix in Eq. (12.68), we find that
[RJ]_8(a, c, e)X = \begin{pmatrix}
a & a & c & c & c & c & a & a \\
a & -a & c & -c & c & -c & a & -a \\
c & c & -e & -e & e & e & -c & -c \\
c & -c & -e & e & e & -e & -c & c \\
c & c & e & e & -e & -e & -c & -c \\
c & -c & e & -e & -e & e & -c & c \\
a & a & -c & -c & -c & -c & a & a \\
a & -a & -c & c & -c & c & a & -a
\end{pmatrix}\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{pmatrix}.   (12.103)
From Eq. (12.104), it follows that an 8-point parametric RJT with three parameters
needs 24 addition and eight multiplication operations. A flow graph of this
transform is given in Fig. 12.8.
(2) Let a = b = c = d. From the matrix in Eq. (12.68), we find (below r = e − 2f) that
[RJ]_8(a, e, f)X = \begin{pmatrix}
a & a & a & a & a & a & a & a \\
a & -a & a & -a & a & -a & a & -a \\
a & a & -e & -f & e & f & -a & -a \\
a & -a & -f & -r & f & r & -a & a \\
a & a & e & f & -e & -f & -a & -a \\
a & -a & f & r & -f & -r & -a & a \\
a & a & -a & -a & -a & -a & a & a \\
a & -a & -a & a & -a & a & a & -a
\end{pmatrix}\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{pmatrix},   (12.105)
where
y0 = a[(x0 + x7 ) + (x1 + x6 )] + a[(x2 + x4 ) + (x3 + x5 )],
y1 = a[(x0 − x7 ) − (x1 − x6 )] + a[(x2 + x4 ) − (x3 + x5 )],
y2 = a[(x0 − x7 ) + (x1 − x6 )] − e(x2 − x4 ) − f (x3 − x5 ),
y3 = a[(x0 + x7 ) − (x1 + x6 )] − f (x2 − x4 ) − r(x3 − x5 ),
(12.106)
y4 = a[(x0 − x7 ) + (x1 − x6 )] + e(x2 − x4 ) + f (x3 − x5 ),
y5 = a[(x0 + x7 ) − (x1 + x6 )] + f (x2 − x4 ) + r(x3 − x5 ),
y6 = a[(x0 + x7 ) + (x1 + x6 )] − a[(x2 + x4 ) + (x3 + x5 )],
y7 = a[(x0 − x7 ) − (x1 − x6 )] − a[(x2 + x4 ) − (x3 + x5 )].
We see that the 8-point parametric RJT in Eq. (12.105) with three parameters
needs 24 addition and 10 multiplication operations. A flow graph of this transform
is given in Fig. 12.9.
12.5.2.3 Case of four parameters
(1) Let a = b, c = d. From the matrix in Eq. (12.68), we find that
[RJ]_8(a, c, e, f)X = \begin{pmatrix}
a & a & c & c & c & c & a & a \\
a & -a & c & -c & c & -c & a & -a \\
c & c & -e & -f & e & f & -c & -c \\
c & -c & -f & -r & f & r & -c & c \\
c & c & e & f & -e & -f & -c & -c \\
c & -c & f & r & -f & -r & -c & c \\
a & a & -c & -c & -c & -c & a & a \\
a & -a & -c & c & -c & c & a & -a
\end{pmatrix}\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{pmatrix},   (12.107)
where r = e − 2 f and
From Eq. (12.108), it follows that an 8-point parametric RJT with four parameters
needs 24 addition and 10 multiplication operations. A flow graph of this transform
is given in Fig. 12.10.
(2) Let e = f and c = d. From the matrix in Eq. (12.68), we find
[RJ]_8(a, b, c, e)X = \begin{pmatrix}
a & b & c & c & c & c & a & b \\
b & p & c & -c & c & -c & b & p \\
c & c & -e & -e & e & e & -c & -c \\
c & -c & -e & e & e & -e & -c & c \\
c & c & e & e & -e & -e & -c & -c \\
c & -c & e & -e & -e & e & -c & c \\
a & b & -c & -c & -c & -c & a & b \\
b & p & -c & c & -c & c & b & p
\end{pmatrix}\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{pmatrix},   (12.109)
where p = a − 2b and
y0 = a(x0 + x6 ) + b(x1 + x7 ) + c[(x2 + x4 ) + (x3 + x5 )],
y1 = b(x0 + x6 ) + p(x1 + x7 ) + c[(x2 + x4 ) − (x3 + x5 )],
y2 = c[(x0 − x6 ) + (x1 − x7 )] − e[(x2 − x4 ) + (x3 − x5 )],
y3 = c[(x0 − x6 ) − (x1 − x7 )] − e[(x2 − x4 ) − (x3 − x5 )],
(12.110)
y4 = c[(x0 − x6 ) + (x1 − x7 )] + e[(x2 − x4 ) + (x3 − x5 )],
y5 = c[(x0 − x6 ) − (x1 − x7 )] + e[(x2 − x4 ) − (x3 − x5 )],
y6 = a(x0 + x6 ) + b(x1 + x7 ) − c[(x2 + x4 ) + (x3 + x5 )],
y7 = b(x0 + x6 ) + p(x1 + x7 ) − c[(x2 + x4 ) − (x3 + x5 )].
From Eq. (12.110), it follows that an 8-point parametric RJT with four
parameters needs 24 addition and 10 multiplication operations. A flow graph of
this transform is given in Fig. 12.11.
12.5.2.4 Case of five parameters
(1) Let e = f . From the matrix in Eq. (12.68), we find that
[RJ]_8(a, b, c, d, e)X = \begin{pmatrix}
a & b & c & d & c & d & a & b \\
b & p & d & q & d & q & b & p \\
c & d & -e & -e & e & e & -c & -d \\
d & q & -e & e & e & -e & -d & -q \\
c & d & e & e & -e & -e & -c & -d \\
d & q & e & -e & -e & e & -d & -q \\
a & b & -c & -d & -c & -d & a & b \\
b & p & -d & -q & -d & -q & b & p
\end{pmatrix}\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{pmatrix},   (12.111)
where p = a − 2b and q = c − 2d.
From Eq. (12.112), it follows that an 8-point parametric RJT with five
parameters needs 24 addition and 14 multiplication operations. A flow graph of
this transform is given in Fig. 12.12.
12.5.2.5 Case of six parameters
Let X = (x0 , x1 , . . . , x7 ) be a column vector. The forward 1D parametric reverse
jacket transforms depending on six parameters [see Eq. (12.68)] can be realized as
follows:
[RJ]_8(a, b, c, d, e, f)X = \begin{pmatrix}
a & b & c & d & c & d & a & b \\
b & p & d & q & d & q & b & p \\
c & d & -e & -f & e & f & -c & -d \\
d & q & -f & -r & f & r & -d & -q \\
c & d & e & f & -e & -f & -c & -d \\
d & q & f & r & -f & -r & -d & -q \\
a & b & -c & -d & -c & -d & a & b \\
b & p & -d & -q & -d & -q & b & p
\end{pmatrix}\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{pmatrix},   (12.113)
where p = a − 2b, q = c − 2d, and r = e − 2f.
From Eq. (12.114), we can see that a forward 1D parametric RJT of order
8 requires 24 addition and 16 multiplication operations. A flow graph of this
transform is given in Fig. 12.13.
References
1. S. S. Agaian, K. O. Egiazarian, and N. A. Babaian, “A family of fast orthogonal transforms reflecting psychophysical properties of vision,” Pattern Recogn. Image Anal. 2 (1), 1–8 (1992).
2. M. Lee and D. Kim, Weighted Hadamard transformation for S/N ratio
enhancement in image transformation, in Proc. of IEEE Int. Symp. Circuits
and Syst. Proc., Vol. 1, May, Montreal, 65–68 (1984).
3. D. M. Khuntsariya, “The use of the weighted Walsh transform in problems of
effective image signal coding,” GPI Trans. Tbilisi. 10 (352), 59–62 (1989).
4. M.H. Lee, Ju.Y. Park, M.W. Kwon and S.R. Lee, The inverse jacket matrix
of weighted Hadamard transform for multidimensional signal processing, in
Proc. 7th IEEE Int. Symp. Personal, Indoor and Mobile Radio Communi-
cations, PIMRC’96, 15–18 Oct. pp. 482–486 (1996).
5. P. P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice-Hall,
Englewood Cliffs, NJ (1993).
6. M. H. Lee and S. R. Lee, “On the reverse jacket matrix for weighted Hadamard
transform,” IEEE Trans. Circuits Syst. 45, 436–441 (1998).
7. M. H. Lee, “A new reverse jacket transform and its fast algorithm,” IEEE
Trans. Circuits Syst. 47 (1), 39–47 (2000).
8. M. Lee, B. Sundar Rajan, and J. Y. Park, “Q generalized reverse jacket
transform,” IEEE Trans. Circuits Syst.-II 48 (7), 684–690 (2001).
9. J. Hou, J. Liu and M.H. Lee, “Doubly stochastic processing on jacket
matrices,” in Proc. IEEE Region 10 Conference: TENCON, 21–24 Nov. 2004,
1, 681–684 (2004).
10. M.H. Lee, “Jacket matrix and its fast algorithms for cooperative wireless signal
processing,” Report, 92 (July 2008).
11. M. H. Lee, “The center weighted Hadamard transform,” IEEE Trans. Circuits
Syst. 36 (9), 1247–1249 (1989).
12. K. J. Horadam, “The jacket matrix construction,” in Hadamard Matrices and
their Applications, 85–91 Princeton University Press, London (2007) Chapter
4.5.1.
13. W. P. Ma and M. H. Lee, “Fast reverse jacket transform algorithms,” Electron.
Lett. 39 (18), 47–48 (2003).
14. M. H. Lee, “A new reverse jacket transform and its fast algorithm,” IEEE
Trans. Circuits Syst. II 47 (1), 39–47 (2000).
15. M. H. Lee, B. S. Rajan, and J. Y. Park, “A generalized reverse jacket
transform,” IEEE Trans. Circuits Syst. II 48 (7), 684–691 (2001).
16. G.L. Feng and M.H. Lee, “An explicit construction of co-cyclic Jacket matrices
with any size,” in Proc. of 5th Shanghai Conf. on Combinatorics, May 14–18,
Shanghai (2005).
17. R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge Univ.
Press, New York (1991).
18. F. J. MacWilliams and N. J. A. Sloane, The Theory of Error Correcting Codes,
Elsevier, Amsterdam (1988).
19. E. Viscito and P. Allebach, “The analysis and design of multidimensional FIR
perfect reconstruction filter banks for arbitrary sampling lattices,” IEEE Trans.
Circuits Syst. 38, 29–41 (1991).
20. P. P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice-Hall,
Englewood Cliffs, NJ (1993).
21. S.R. Lee and M.H. Lee, “On the reverse jacket matrix for weighted Hadamard
transform,” Schriftenreihe des Fachbereichs Math., SM-DU-352, Duisburg
(1996).
22. M.H. Lee, “Fast complex reverse jacket transform,” in Proc. 22nd Symp. on
Information Theory and Its Applications: SITA99, Yuzawa, Niigata, Japan.
Nov. 30–Dec. 3 (1999).
Figure 13.1 Part of the Grand Canyon on Mars. This photograph was transmitted by
Mariner 9 (from Ref. 20).
Figure 13.3 Shannon (Bell Laboratories) and Hamming (AT&T Bell Laboratories) (from
http://www-groups.dcs.st-and.ac.uk/history/Bioglndex.html).
Example 13.1.2.1: Let us assume that we want to send the message 1101. Suppose that we receive 10101. Is there an error? If so, what is the correct bit pattern? To answer these questions, we add a 0 or 1 to the end of this message so that the resulting message has an even number of 1s. Thus, we may encode 1101 as 11011. If the original message were 1001, we would encode that as 10010, because the original message already had an even number of 1s. Now, consider receiving the message 10101. Because the number of 1s in the message is odd, we know that an error has been made in transmission. However, we do not know how many errors occurred in transmission or which digit or digits were affected. Thus, a parity check scheme detects errors, but does not locate them for correction. The number of extra symbols is called the redundancy of the code.
All error-detection codes (which include all error-detection-and-correction
codes) transmit more bits than were in the original data. We can imagine that as
the number of parity bits increases, it should be possible to correct more errors.
However, as more and more parity check bits are added, the required transmission
bandwidth increases as well. Because of the resultant increase in bandwidth, more
noise is introduced, and the chance of error increases. Therefore, the goal of the
error-detection-and-correction coding theory is to choose extra added data in such
a way that it corrects as many errors as possible, while keeping the communications
efficiency as high as possible.
Example 13.1.2.2: The parity check code can be used to design a code that can
correct an error of one bit. Let the input message symbol have 20 bits: (10010
01101 10110 01101).
Parity check error-detection algorithm:
Input: Suppose we have 20 bits and arrange them in a 4 × 5 array:
1 0 0 1 0
0 1 0 0 1
(13.1)
1 0 1 1 0
0 1 1 0 1
Step 1. Calculate the parity along the rows and columns and define the last
bit in the lower right by the parity of the column/row of parity bits:
1 0 0 1 0 : 0
0 1 0 0 1 : 0
1 0 1 1 0 : 1   (13.2)
0 1 1 0 1 : 1
···
0 0 0 0 0 : 0
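A short sketch (assuming numpy; the flipped position is an illustrative choice) shows how the row and column parities of Eq. (13.2) locate and correct a single-bit error:

```python
import numpy as np

data = np.array([[1, 0, 0, 1, 0],
                 [0, 1, 0, 0, 1],
                 [1, 0, 1, 1, 0],
                 [0, 1, 1, 0, 1]])

row_par = data.sum(axis=1) % 2      # parity bit per row: 0 0 1 1
col_par = data.sum(axis=0) % 2      # parity bit per column: 0 0 0 0 0

received = data.copy()
received[2, 3] ^= 1                 # a single-bit transmission error

bad_rows = np.nonzero(received.sum(axis=1) % 2 != row_par)[0]
bad_cols = np.nonzero(received.sum(axis=0) % 2 != col_par)[0]
assert (bad_rows[0], bad_cols[0]) == (2, 3)   # error located
received[bad_rows[0], bad_cols[0]] ^= 1       # and corrected
assert np.array_equal(received, data)
```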
(00001) (00000)
(00010) (00011)
(00100) (00101)
(00111) (00110)
(01000) (01001)
(01011) (01010)
(01101) (01100)
(01110) (01111)
(10000) (10001)
(10011) (10010)
(10101) (10100)
(10110) (10111)
(11001) (11000)
(11010) (11011)
(11100) (11101)
(11111) (11110)
codeword whose parity bits are on the right-hand side of the information bits:
n = k + r bits
The rate R of a code is a useful measure of the redundancy within a block code
and is defined as the ratio of the number of information bits to the block length,
R = k/n. Informally, a rate is the amount of information (about the message)
contained in each bit of the codeword. We can see that the code rate is bounded
by 0 ≤ R ≤ 1.
For a fixed number of information bits, the code rate R tends to 0 as the number of parity bits r increases. In the case R = 1 we have n = k, which means that no coding occurs because there are no parity bits. Low code rates reflect
high levels of redundancy. Several definitions are provided, as follows.
The Hamming distance d(v_1, v_2) of two codewords v_1 and v_2, having the same number n of bits, is defined as the number of positions in which the words v_1 and v_2 differ, or
d(v_1, v_2) = v_1^1 \oplus v_2^1 + v_1^2 \oplus v_2^2 + \cdots + v_1^n \oplus v_2^n,   (13.3)
The linear code C, with code length n, m information symbols, and minimum distance d, is said to be an [n, m, d] linear code. We will refer to any code C that maps m message bits to codewords of length n with distance d as an (n, m, d) code. Hence, a linear code of dimension m contains 2^m codewords.
A linear code has the following properties:
• The all-zero word (0 0 0 · · · 0) is always a codeword.
• A linear code can be described by a set of linear equations, usually in the form of a single matrix, called the parity check matrix. That is, for any [n, k, d] linear code C, there exists an (n − k) × n matrix P such that c ∈ C ⇔ cP^T = 0.
• For any given three codewords c_i, c_j, and c_k such that c_k = c_i ⊕ c_j, the distance between two codewords equals the weight of their sum, i.e., d(c_i, c_j) = w(c_k).
• The minimum distance of the code dmin = wmin , where wmin is the weight of any
nonzero codeword with the smallest weight.
The third property is of particular importance because it enables the minimum
distance to be found quite easily. For an arbitrary block code, the minimum distance
is found by considering the distance between all codewords. However, with a
linear code, we only need to evaluate the weight of every nonzero codeword. The
minimum distance of the code is then given by the smallest weight obtained. This
is much quicker than considering the distance between all codewords. Because an
[n, m, d] linear code encodes a message of length m as a codeword of length n, the
redundancy of a linear [n, m, d] code is n − m.
(S ) = {v0 , v1 , v2 , v3 , v4 , v5 , v1 ⊕ v2 , v1 ⊕ v3 , v1 ⊕ v4 , v1 ⊕ v5 , v2 ⊕ v3 ,
v2 ⊕ v4 , v2 ⊕ v5 , v3 ⊕ v4 , v3 ⊕ v5 , v4 ⊕ v5 }, (13.6)
or
(S ) = {00000, 01001, 11010, 11100, 00110, 10101, 10011, 10101
01111, 11100, 00110, 11100, 01111, 11010, 01001, 10011}. (13.7)
We changed the encoding alphabet from {−1, 1} to {0, 1}. It is also possible to
change the encoding alphabet from {−1, 1} to {1, 0}.
The codewords of C16 are
Properties: Let v_i be the i'th codeword of C_{2n}. It is not difficult to show the following:
• This code has 2n codewords of length n.
• d(v_i, −v_i) = n and d(v_i, v_j) = d(−v_i, −v_j) = n/2 for i ≠ j, i, j = 1, 2, \ldots, n, which means that the minimum distance between any distinct codewords is n/2. Hence, the constructed code has minimal distance n/2; it corrects n/4 − 1 errors in an n-bit encoded block and also detects n/4 errors.
• The Hadamard codes are optimal for this Plotkin distance/bound (see more
detail in the next section).
• The Hadamard codes are self-dual.
• Let e be a vector of 1s and −1s of length n. If vector e differs from v_i
(a) in at most n/4 − 1 positions, then it differs from v_j in at least n/4 + 1 positions, whenever i ≠ j;
(b) in n/4 positions, then it differs from v_j in at least n/4 positions.
• A generator matrix of the Hadamard code of length 2^n is an (n + 1) × 2^n rectangular matrix with 0, 1 elements. A Hadamard code of length 16 based on a 5 × 2^4 generator matrix has the form
Figure 13.5 Matrix of the Hadamard code (32, 6, 16) for the NASA space probe Mariner
9 (from Ref. 20).
G_{16} = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0
\end{pmatrix}.   (13.15)
Note that the corresponding code encodes blocks of length five to blocks of
length 16. In practice, columns 1, 9, 5, 3, and 2 of matrix G16 form a basis for the
code. Every codeword from C16 is representable as a unique linear combination
of basis vectors. The generator matrix of the (32, 6, 16) Hadamard code (based
on Hadamard matrix of order 16) is a 6 × 32 rectangular matrix. The technical
characteristics of this code are: (1) codewords are 32 bits long and there are 64
of them, (2) the minimum distance is 16, and (3) it can correct seven errors. This
code was used on Mariner’s space mission in 1969 (see Fig. 13.5).
The decoding algorithm is also simple:
Input: The received (0, 1) signal vector v of length n (n must be divisible by 4, i.e., n is the order of a Hadamard matrix).
Step 1. Replace each 0 by +1 and each 1 by −1 in the received signal v. Denote the resulting vector y.
Step 2. Compute the n-point FHT u = Hy^T of the vector y.
Step 3. Find the coefficient of maximal modulus of the vector u; in the worked example, this is s_6 = −6.
Step 4. Apply the decision rule: the 8 + 6 = 14 codeword was sent, i.e., (−1, 1, −1, 1, 1, −1, 1, −1).
Step 5. Change all −1s to zeroes.
Output: Codeword that has been sent: 01011010.
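The steps above translate directly into a few lines of code. This is a sketch (assuming numpy, with a plain matrix product standing in for a fast Hadamard transform; the received word is illustrative):

```python
import numpy as np

def sylvester(n):
    # Sylvester-type Hadamard matrix of order n (n a power of 2).
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H8 = sylvester(8)
received = np.array([0, 1, 0, 1, 1, 0, 1, 0])   # possibly corrupted word
y = 1 - 2 * received                            # map 0 -> +1, 1 -> -1
u = H8 @ y                                      # correlations (Step 2)
k = np.argmax(np.abs(u))                        # maximal modulus (Step 3)
# Decoded codeword: row k of H8 (or its complement if u[k] < 0),
# mapped back to (0, 1) symbols (Step 5).
codeword = np.where(np.sign(u[k]) * H8[k] == 1, 0, 1)
print(k, codeword)
```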
Remarks:
• It can be shown that the Hadamard code is the first-order Reed–Muller code
in the case of q = 2. These codes are some of the oldest error-correcting
codes. Reed–Muller codes were invented independently in 1954 by Muller and
Reed. Reed–Muller codes are relatively easy to decode, and first-order codes
are especially efficient. Reed–Muller codes are the first large class of codes to
correct more than a single error. From time to time, the Reed–Muller code is
used in magnetic data storage systems.
• The Sylvester–Hadamard matrix codes are all linear codes.
• It is possible to construct codes based on normalized Hadamard matrices H_n by replacing in H_n each element +1 by 0 and each −1 by 1 (denote the result Q_n). For instance,
• [n − 1, n, n/2] code An consisting of the rows of Qn with the first column (of
1s) deleted.
• [n−1, 2n, n/2−1] code Bn consisting of the rows of An and their complements.
• [n, 2n, n/2] code Cn consisting of the rows of Qn and their complements.
• In general, Hadamard codes are not necessarily linear codes. A Hadamard code
can be made linear by forming a code with the generator matrix (In , Hn ), where
Hn is a binary Hadamard matrix of order n.
Figure 13.6 Graphical representation of the Hadamard code with generator matrix (I4 , H4 ).
It can be verified that the minimum distance of this code is at least 3. In this
representation, the left nodes (right nodes) are called the “variable nodes” (“check
nodes”). Thus, the code is defined as the set of all binary settings on the variable
nodes such that for all check nodes, the sum of the settings of the adjacent variable
nodes is zero.
Indeed, the minimum distance is not one; otherwise, there is a variable node
that is not connected to any check node, which is a contradiction to the fact that the
degree of the variable nodes is larger than one. Suppose that the minimum distance
is two, and assume that the minimum weight word is (1, 1, 0, . . . , 0). Consider the
subgraph induced by the two first variable nodes. All check nodes in this graph
must have an even degree (or else they would not be satisfied). Moreover, there
are at least two check nodes in this graph of a degree greater than zero, since the degrees of the variable nodes are assumed to be greater than or equal to two. Then, the
graph formed by the two first variable nodes and these two check nodes is a cycle
of length four, contrary to the assumption.
and only if the code C̄ obtained by adding a parity-check bit to each codeword in C is an (n + 1, k, d + 1) code. Therefore, if d is even, then A(n, d) = A(n − 1, d − 1). The challenge here is to understand the behavior of A(n, d) for the case when d is even.18,23
In 1965, Plotkin18 gave a simple counting argument that leads to an upper bound
B(n, d) for A(n, d) when d < n/2. The following also holds:
• If d ≤ n < 2d, then A(n, d) ≤ B(n, d) = 2[d/(2d − n)].
• If n = 2d, then A(n, d) ≤ B(n, d) = 4d.
Levenshtein13 proved that if Hadamard’s conjecture is true, then Plotkin’s bound
is sharp. Let Qn be a binary matrix received from a normalized Hadamard matrix
of order n by replacement of +1 by 0 and −1 by 1. It is clear that the matrix Qn
allows design of the following Hadamard codes:
• (n − 1, n, n/2) code An consisting of rows of a matrix Qn without the first column
of Qn .
• (n − 1, 2n, n/2 − 1) code Bn consisting of codewords of a code An and their
complements.
• (n, 2n, n/2) code Cn consisting of rows of a matrix Qn and their complements.
In Ref. 13, it was proved that if there are the suitable Hadamard matrices, then
the Plotkin bounds have the following form:
• If d is an even number, then
M(n, d) = 2\left[ \frac{d}{2d - n} \right], \quad n < 2d,
M(2d, d) = 4d.   (13.19)
Table 13.2 Conditions and formulas of construction of the maximal codes for the given n
and d.
N d/(2d − n) k = [d/(2d − n)] Correct Code
In this table, ◦ means matrix connections, and a and b are defined by the
following:
ka + (k + 1)b = d
(13.21)
(2k − 1)a + (2k + 1)b = n.
One can verify that d/(2d − n) is a fractional number, and that k = 2. These
parameters correspond to the second row of Table 13.2. There are also correct
matrices of order 2k and 4(k + 1), obtained from Hadamard matrices of order 4 and
12. Solving the above linear system, we find that a = 1, b = 2. Hence, the code
found can be represented as A14 ◦ A212 .
Consider the Hadamard matrix of order 4 with last column consisting of +1:
H_4 = \begin{pmatrix}
+ & + & + & + \\
- & + & - & + \\
- & - & + & + \\
+ & - & - & +
\end{pmatrix}.   (13.22)
H12 =
+ + + + − − + − − + − −
+ + + − + − − + − − + −
+ + + − − + − − + − − +
− + + + + + − + + + − −
+ − + + + + + − + − + −
+ + − + + + + + − − − +
− + + + − − + + + − + +
+ − + − + − + + + + − +
+ + − − − + + + + + + −
− + + − + + + − − + + +
+ − + + − + − + − + + +
+ + − + + − − − + + + + ,

H12+ =
− − − − + + − + + − + +
− − − + − + + − + + − +
+ + + − − + − − + − − +
+ − − − − − + − − − + +
− + − − − − − + − + − +
+ + − + + + + + − − − +
− + + + − − + + + − + +
+ − + − + − + + + + − +
− − + + + − − − − − − +
− + + − + + + − − + + +
+ − + + − + − + − + + +
+ + − + + − − − + + + + .     (13.24)
Hence, applying the replacement +1 → 0, −1 → 1 (the definition of Qn above) to H12+, we find that
A12 =
1 1 1 1 0 0 1 0 0 1 0 0
1 1 1 0 1 0 0 1 0 0 1 0
0 0 0 1 1 0 1 1 0 1 1 0
0 1 1 1 1 1 0 1 1 1 0 0
1 0 1 1 1 1 1 0 1 0 1 0
0 0 1 0 0 0 0 0 1 1 1 0
1 0 0 0 1 1 0 0 0 1 0 0
0 1 0 1 0 1 0 0 0 0 1 0
1 1 0 0 0 1 1 1 1 1 1 0
1 0 0 1 0 0 0 1 1 0 0 0
0 1 0 0 1 0 1 0 1 0 0 0
0 0 1 0 0 1 1 1 0 0 0 0 .     (13.25)
A212 =
1 1 1 0 1 0 0 1 0 0
0 0 0 1 1 0 1 1 0 1
1 0 1 1 1 1 1 0 1 0
0 0 1 0 0 0 0 0 1 1
0 1 0 1 0 1 0 0 0 0
1 1 0 0 0 1 1 1 1 1 .
Hence, the codewords of the maximal equidistant code A14 ◦A212 with the parameters
n = 13, d = 8, M = 4 are represented as
A14 ◦ A212 =
{ 0 0 0 1 1 1 0 1 0 0 1 0 0,
  1 0 1 0 0 0 1 1 0 1 1 0 1,
  1 1 0 1 0 1 1 1 1 1 0 1 0,
  0 1 1 0 0 1 0 0 0 0 0 1 1 }.     (13.26)
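The equidistance of this code is easy to confirm numerically; the following sketch computes all pairwise Hamming distances of the four codewords of Eq. (13.26).

```python
# Sketch: verifying that the codewords of Eq. (13.26) form an equidistant
# code with parameters n = 13, d = 8, M = 4.
import numpy as np
from itertools import combinations

code = np.array([
    [0,0,0,1,1,1,0,1,0,0,1,0,0],
    [1,0,1,0,0,0,1,1,0,1,1,0,1],
    [1,1,0,1,0,1,1,1,1,1,0,1,0],
    [0,1,1,0,0,1,0,0,0,0,0,1,1],
])
dists = {int((u != v).sum()) for u, v in combinations(code, 2)}
print(dists)   # {8}: every pair of codewords is at Hamming distance 8
```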
where D0 = (1) and Om is the zero matrix of order m. Note that the matrix Dt has dimension (t + 2)2^{t−1} × 2^t; i.e., it is the difference matrix of a uniquely decodable base code with (t + 2)2^{t−1} users of length 2^t. Note that in the formula in Eq. (13.29), a Hadamard matrix may be substituted for D0.
Now we consider the problem of decoding. Let (C1, C2, . . . , Ck) be a uniquely decodable base code of length n, and let Ci = {Ui, Vi}. The vector Y = V1 + V2 + · · · + Vk is called the base vector of the code. Let Xi ∈ Ci be the message of the i’th user, and let us calculate r = X1 + X2 + · · · + Xk. Then S = r − Y is called the syndrome of the code corresponding to the vector r. Because
Xi − Vi = di if Xi = Ui, and Xi − Vi = 0 if Xi = Vi, we have

S = Σ_{i=1}^{k} qi di,     (13.30)

where

qi = 1 if Xi = Ui, and qi = 0 if Xi = Vi.     (13.31)
X ⊗ D1 + Y ⊗ D 1 (13.32)
We now provide an example of a base code with 30 users of length 18. Let us
have a difference matrix with five users of length three:
D1 =
+ + +
+ + −
+ − +
+ 0 −
+ − 0 .     (13.34)
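The syndrome relation of Eqs. (13.30) and (13.31) can be illustrated with the difference matrix D1 of Eq. (13.34). In this sketch the vectors Vi are arbitrary illustrative choices (not from the text), and unique decodability is checked by brute force over all binary q.

```python
# Sketch of syndrome decoding, Eqs. (13.30)-(13.31), with D1 of Eq. (13.34).
import numpy as np
from itertools import product

D1 = np.array([[1,  1,  1],
               [1,  1, -1],
               [1, -1,  1],
               [1,  0, -1],
               [1, -1,  0]])
V = np.arange(15).reshape(5, 3)          # hypothetical vectors V_i
U = V + D1                               # so that U_i - V_i = d_i (rows of D1)
Y = V.sum(axis=0)                        # base vector

q_true = np.array([1, 0, 1, 1, 0])       # users 1, 3, 4 send U; users 2, 5 send V
X = np.where(q_true[:, None] == 1, U, V)
S = X.sum(axis=0) - Y                    # syndrome: S = sum_i q_i d_i

# Unique decodability: exactly one binary q reproduces the syndrome S.
sols = [q for q in product([0, 1], repeat=5)
        if np.array_equal(np.array(q) @ D1, S)]
print(sols)   # [(1, 0, 1, 1, 0)]
```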
According to Eq. (13.28), we determine the components of this code to be
Now, let A and B = C = D be cyclic Williamson matrices of order 3 with first rows
(+ + +) and (+ + −), respectively. Using Theorem 13.1.7.1, we obtain the following
difference matrix:
D2 =
 D1   D1   D1   D1  −D1  −D1
 D1   D1   D1  −D1   D1  −D1
 D1   D1   D1  −D1  −D1   D1
 D1  −D1  −D1  −D1   D1   D1
−D1   D1  −D1   D1  −D1   D1
−D1  −D1   D1   D1   D1  −D1 .     (13.36)
Denote Ci = {Ui, Vi}, i = 1, 2, . . . , 30. The base vector of this code is

Y = {10, 12, 12, 10, 12, 12, 10, 12, 12, 15, 12, 12, 15, 12, 12, 15, 12, 12}.     (13.38)

Let the following vectors be sent: Ui, V5+i, U10+i, U15+i, V20+i, U25+i, i = 1, 2, 3, 4, 5. The total vector of this message is

r = {20, 12, 12, 10, 12, 12, 20, 12, 12, 15, 12, 12, 15, 12, 12, 15, 12, 12}.     (13.39)
S = qD2 . (13.41)
V = (0, −1, −3, −1, +3, −3, −1, +1). Then, computing the spectral vector H3 V =
(−5, +3, +3, +11, −5, −5, +3, −5), the decoder again resolves that the information
word (011) was transmitted.
Let r be the repetition number of each projector, and let k be the information word length. In Ref. 13, we proved that the above-constructed code corrects

t = 2^{k−2} r − 1     (13.43)

errors.
Suppose that there are 2^k − 1 projectors, each repeated r − 1 times. Then the codeword length is M1 = (2^k − 1)(r − 1), and by Eq. (13.43) this code corrects t1 = 2^{k−2}(r − 1) − 1 errors. However, if each projector is repeated r times, the codeword length is M2 = (2^k − 1)r, and that code corrects t2 = 2^{k−2} r − 1 errors. It is necessary to build an optimal code with minimal length M (M1 < M < M2) that corrects t errors, t1 < t < t2.
Let d = 2^m (m < k) be the number of projectors whose repetition is reduced. In Ref. 9, it is shown that the shortened projection code of length M2 = (2^k − 1)r − 2^m corrects t = 2^{k−2} r − 2^{m−2} − 1 errors. Note that m = [log2(2^{k−2} r − t − 1)] + 2.
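A small helper (hypothetical name shortened_params), sketched from the formulas above, computes the shortening parameter m, the shortened length, and the number of correctable errors; it reproduces the example that follows.

```python
# Sketch: shortened projection-code parameters for given k, r, and target t.
import math

def shortened_params(k, r, t):
    m = int(math.log2(2 ** (k - 2) * r - t - 1)) + 2   # m = [log2(2^{k-2} r - t - 1)] + 2
    length = (2 ** k - 1) * r - 2 ** m                 # shortened length (2^k - 1) r - 2^m
    t_corr = 2 ** (k - 2) * r - 2 ** (m - 2) - 1       # correctable errors
    return m, length, t_corr

print(shortened_params(3, 3, 4))   # (2, 17, 4): matches the example below
```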
Now we give an example. Let the information word (011) be transmitted. The repetition is r = 3, and it is necessary to correct four errors. From the previous formula we obtain m = 2; hence, in d = 2² = 4 projectors the repetition must be reduced by one. As was shown above, if the repetition of the first 2^m projectors with the smallest values is reduced by one, the resulting code is optimal. The coder forms the shortened codeword (11110000111111000) of length 17. If no errors occur in the channel, the decoder, on receiving the codeword, determines the vector V = (0, −2, −2, 2, 2, −3, −3, 3). Furthermore, we obtain H3 V = (−3, −3, −3, 17, −1, −1, −1, −5)T. We see that the maximal coefficient 17 correctly identifies the information word (011).
Now suppose that four errors occur in the channel, in positions 1, 5, 6, and 15 of the received codeword (01111100111111100). In this case, the decoder determines the vector V = (0, 0, −2, −2, 2, −3, −3, 1). Next, we obtain H3 V = (−7, 1, 5, 9, −1, −1, 3, −9)T, which means that the maximal coefficient 9 still correctly identifies the transmitted information word. Thus, using a code of length 17, four errors can be corrected.
The experiments were made on grayscale and color images; the results are given in Table 13.3. For an eight-bit 256 × 256 image, a total of 255 projectors is required. The first 2^6 = 64 projectors were repeated two times each, and the remaining ones three times; i.e., the codeword length is 701 and, therefore, the resulting code can correct all combinations of t = 2^6 · 3 − 2^4 − 1 = 175 errors.
Furthermore, t ≥ 175 errors were introduced into the codeword using a pseudo-random number generator. Table 13.3 gives the decoding results, where “Err. num.” stands for the number of errors, and a “+” in the “M. filter” column indicates that median filtering was performed after decoding. A similar trend is observed for other types of signals as well.
No.   Err. num.   M. filter   MSE        PSNR (dB)
1     0–175       −           0          Infinity
2     200         −           0.00000    Infinity
3     210         −           0.00024    84.25
4     220         −           0.057      60.53
5     230         −           8.65       38.76
6     240         −           35.91      32.57
7     250         −           143.21     26.57
8     250         +           67.16      29.86
9     260         −           476.26     21.33
10    260         +           75.58      29.34
11    270         −           1186.22    17.39
12    270         +           104.48     27.94
13    280         −           2436.11    14.26
14    280         +           166.03     25.92
Figure 13.8 Two-branch transmit diversity scheme with two transmitting and one receiving
antenna.
over all codewords ct,1, ct,2, . . . , ct,n, t = 1, 2, . . . , l, and decides in favor of the codeword that minimizes the sum in Eq. (13.45).
After several mathematical manipulations, we see that the problem of finding the ct,i that minimize the expression in Eq. (13.45) leads to minimization of the following expression (x∗ denotes the conjugate of x):
Σ_{t=1}^{l} Σ_{j=1}^{m} [ Σ_{i=1}^{n} rt,j α∗i,j c∗t,i + Σ_{i=1}^{n} r∗t,j αi,j ct,i − Σ_{i,k=1}^{n} αi,j α∗k,j ct,i c∗t,k ].     (13.46)
Note that the l × n matrix C = (ci, j ) is called the coding matrix. More complete
information about wireless systems and space–time codes can be found in
Refs. 39–53 and 57.
We examine the two-branch transmit diversity scheme with two transmitting and one receiving antenna shown in Fig. 13.8. This scheme is defined by the following three functions: (1) the encoding and transmission sequence of information symbols at the transmitter, (2) the combining scheme at the receiver, and (3) the decision rule for maximum-likelihood detection.
(1) The encoding and transmission sequence: At a given symbol period T, two signals are simultaneously transmitted from the two antennas. The signal transmitted from antenna 0 is denoted by x0 and that from antenna 1 by x1. During the next symbol period, signal (−x1∗) is transmitted from antenna 0, and signal x0∗ is transmitted from antenna 1. Note that the encoding is done in both space and time:

r0 = r(t) = h0 x0 + h1 x1 + n0,
r1 = r(t + T) = −h0 x1∗ + h1 x0∗ + n1,     (13.48)

where r0 and r1 are the received signals at times t and t + T, and n0 and n1 are complex random variables representing receiver noise and interference.
(2) The combining scheme: The combiner shown in Fig. 13.8 builds the following two combined signals that are sent to the maximum-likelihood detector:

x̃0 = h0∗ r0 + h1 r1∗,
x̃1 = h1∗ r0 − h0 r1∗.     (13.49)
(3) The maximum-likelihood decision rule: The combined signals [Eq. (13.49)] are then sent to the maximum-likelihood detector, which, for each of the signals x̃0 and x̃1, uses the decision rule expressed in
or in the following equation for phase-shift-keying (PSK) signals (equal-energy constellations):
The maximal-ratio combiner may then construct the signal x̃0, as shown in Fig. 13.8, so that the maximum-likelihood detector may produce x̂0, which is a maximum-likelihood estimate of x0.
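In the noiseless case, the combining rules of Eq. (13.49) reduce the detection of x0 and x1 to a scaling by |h0|² + |h1|², which the following sketch verifies; the QPSK constellation and the random channel model are illustrative assumptions.

```python
# Sketch of the two-branch (Alamouti-type) scheme of Eqs. (13.48)-(13.49)
# without noise: the combiner outputs equal (|h0|^2 + |h1|^2) times x0, x1.
import numpy as np

rng = np.random.default_rng(0)
x0, x1 = np.exp(2j * np.pi * rng.integers(0, 4, size=2) / 4)   # two QPSK symbols
h0, h1 = rng.normal(size=2) + 1j * rng.normal(size=2)          # flat fading gains

r0 = h0 * x0 + h1 * x1                          # first symbol period
r1 = -h0 * np.conj(x1) + h1 * np.conj(x0)       # second symbol period

x0_hat = np.conj(h0) * r0 + h1 * np.conj(r1)    # combining, Eq. (13.49)
x1_hat = np.conj(h1) * r0 - h0 * np.conj(r1)

g = abs(h0) ** 2 + abs(h1) ** 2
print(np.allclose(x0_hat, g * x0), np.allclose(x1_hat, g * x1))  # True True
```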
The matrices A2 (a, b), A4 (a, b, c, d), and A8 (a, b, . . . , h) are called Yang,
Williamson, and Plotkin arrays,14 respectively:
A2(a, b) =
  a   b
 −b   a ,

A4(a, b, c, d) =
  a   b   c   d
 −b   a  −d   c
 −c   d   a  −b
 −d  −c   b   a ,

A8(a, b, . . . , h) =
  a   b   c   d   e   f   g   h
 −b   a   d  −c   f  −e  −h   g
 −c  −d   a   b   g   h  −e  −f
 −d   c  −b   a   h  −g   f  −e
 −e  −f  −g  −h   a   b   c   d
 −f   e  −h  −g  −b   a  −d   c
 −g   h   e  −f  −c   d   a  −b
 −h  −g   f   e  −d  −c   b   a .     (13.52)
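The defining property of such arrays, A A^T = (a² + b² + · · ·) I, can be checked symbolically; the following sketch does so for the Williamson array A4(a, b, c, d) of Eq. (13.52).

```python
# Sketch: symbolic check that A4(a, b, c, d) A4^T = (a^2 + b^2 + c^2 + d^2) I4.
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A4 = sp.Matrix([[ a,  b,  c,  d],
                [-b,  a, -d,  c],
                [-c,  d,  a, -b],
                [-d, -c,  b,  a]])
residual = sp.expand(A4 * A4.T - (a**2 + b**2 + c**2 + d**2) * sp.eye(4))
print(residual)   # the 4 x 4 zero matrix
```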
H12 = Q0 ⊗ I3 + Q1 ⊗ U + Q1 ⊗ U 2 , (13.56)
where
where

Q0 =
+ + + +
− + − +
− + + −
− − + + ,

Q1 =
+ − − −
+ + + −
+ − + +
+ + − + ,

U =
0 + 0
0 0 +
+ 0 0 .     (13.57)
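The construction of Eq. (13.56) is easy to assemble and test numerically; the following sketch verifies that the resulting matrix satisfies H12 H12^T = 12 I12.

```python
# Sketch: assembling Eq. (13.56) with Kronecker products and checking the
# Hadamard property of the result.
import numpy as np

Q0 = np.array([[ 1,  1,  1,  1],
               [-1,  1, -1,  1],
               [-1,  1,  1, -1],
               [-1, -1,  1,  1]])
Q1 = np.array([[ 1, -1, -1, -1],
               [ 1,  1,  1, -1],
               [ 1, -1,  1,  1],
               [ 1,  1, -1,  1]])
U = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])            # cyclic shift, U^3 = I

H12 = np.kron(Q0, np.eye(3, dtype=int)) + np.kron(Q1, U) + np.kron(Q1, U @ U)
print(np.array_equal(H12 @ H12.T, 12 * np.eye(12, dtype=int)))   # True
```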
References
1. H. F. Harmuth, Transmission of Information by Orthogonal Functions,
Springer-Verlag, Berlin (1969).
2. H. F. Harmuth, Sequency Theory: Foundations and Applications, Academic
Press, New York (1977).
3. A. K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Inc.,
Englewood Cliffs, NJ (1989).
4. R. K. Yarlagadda and J. E. Hershey, Hadamard Matrix Analysis and Synthesis:
With Applications to Communications and Signal/Image Processing, Kluwer,
Dordrecht (1997).
5. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in
Math., 1168, Springer-Verlag, Berlin (1985).
6. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block
designs,” in Surveys in Contemporary Design Theory, Wiley-Interscience
Series in Discrete Mathematics, John Wiley & Sons, Hoboken, NJ (1992).
7. V. Tarokh, H. Jafarkhani, and A. R. Calderbank, “Space–time codes from
orthogonal designs,” IEEE Trans. Inf. Theory 45 (5), 1456–1467 (1999).
8. H. Sarukhanyan, “Decomposition of the Hadamard matrices and fast Hadamard transform,” in Computer Analysis of Images and Patterns, Lecture Notes in Computer Science, 1298, 575–581, Springer, Berlin (1997).
9. H. Sarukhanyan, A. Anoyan, K. Egiazarian, J. Astola, and S. Agaian, “Codes generated from Hadamard matrices,” in Proc. Int. Workshop on Trends and Recent Achievements in Information Technology, Cluj-Napoca, Romania, May 16–17, 7–18 (2002).
10. Sh.-Ch. Chang and E. J. Weldon, “Coding for T-user multiple-access
channels,” IEEE Trans. Inf. Theory 25 (6), 684–691 (1979).
27. D. W. Bliss, K. W. Forsythe, and A. F. Yegulalp, “MIMO communication capacity using infinite dimension random matrix eigenvalue distributions,” in Conf. Rec. 35th Asilomar Conf. on Signals, Systems and Computers, 2, Pacific Grove, CA, Nov. 4–7, 969–974 (2001).
28. F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes,
North-Holland, Amsterdam (1977).
29. P. Balaban and J. Salz, “Dual diversity combining and equalization in digital cellular mobile radio,” IEEE Trans. Veh. Technol. 40, 342–354 (1991).
30. G. J. Foschini Jr, “Layered space–time architecture for wireless
communication in a fading environment when using multi-element antennas,”
Bell Labs Tech. J. 1 (2), 41–59 (1996).
31. G. J. Foschini Jr. and M. J. Gans, “On limits of wireless communication
in a fading environment when using multiple antennas,” Wireless Personal
Commun. 6 (3), 311–335 (1998).
32. J.-C. Guey, M. P. Fitz, M. R. Bell, and W.-Y. Kuo, “Signal design for transmitter diversity wireless communication systems over Rayleigh fading channels,” in Proc. IEEE VTC’96, 136–140 (1996).
33. N. Seshadri and J. H. Winters, “Two signaling schemes for improving the
error performance of frequency-division-duplex (FDD) transmission systems
using transmitter antenna diversity,” Int. J. Wireless Inf. Networks 1 (1), 49–60
(1994).
34. V. Tarokh, N. Seshardi, and A. R. Calderbank, “Space–time codes for
high data rate wireless communication: Performance analysis and code
construction,” IEEE Trans. Inf. Theory 44 (2), 744–756 (1998).
35. V. Tarokh, A. Naguib, N. Seshardi, and A. R. Calderbank, “Space–time
codes for high data rate wireless communications: Performance criteria in
the presence of channel estimation errors, mobility and multiple paths,” IEEE
Trans. Commun. 47 (2), 199–207 (1999).
36. E. Telatar, Capacity of multi-antenna Gaussian channels, AT&T-Bell
Laboratories Internal Tech. Memo (Jun. 1995).
37. J. Winters, J. Salz, and R. D. Gitlin, “The impact of antenna diversity on the capacity of wireless communication systems,” IEEE Trans. Commun. 42 (2/3/4), 1740–1751 (1994).
38. A. Wittneben, “Base station modulation diversity for digital SIMULCAST,” in Proc. IEEE VTC, 505–511 (May 1993).
39. A. Wittneben, “A new bandwidth efficient transmit antenna modulation diversity scheme for linear digital modulation,” in Proc. IEEE ICC, 1630–1634 (1993).
40. K. J. Horadam, Hadamard Matrices and Their Applications, Princeton
University Press, Princeton (2007).
∗ Yue Wu ([email protected]) and Joseph P. Noonan ([email protected]) are with the Dept. of Electrical and Computer Engineering, Tufts University, Medford, MA 02155 USA. Sos Agaian is with the Dept. of Electrical and Computer Engineering, University of Texas at San Antonio, San Antonio, TX 78249 USA.
number); (4) may not have generated the exact form of RDOT,39,40 leading to
inevitable approximation errors in implementation;41 and (5) contained minimal
mention of requirements for cryptanalysis.
This chapter proposes a randomization theorem for DOTs and thus a general method of obtaining RDOTs from DOTs conforming to Eq. (14.1). It can be demonstrated that building the proposed RDOT system is equivalent to improving a DOT system by adding a pair of pre- and post-processes. The proposed randomization method is very compact and is easily adopted by any existing, user-selected DOT system. Furthermore, the proposed RDOT matrix is exact, since it is obtained directly by a series of matrix multiplications involving the parameter matrix and the original DOT matrix. Hence, it avoids the approximation errors commonly seen in eigenvector-decomposition-based RDOT methods,39,40 while keeping good features or optimizations, such as fast algorithms, already designed for existing DOTs. Any current DOT system can be improved to an RDOT system and fulfill the needs of secure communication and data encryption.
The remainder of this chapter is organized as follows:
• Section 14.1 reviews several well-known DOTs in the matrix form and briefly
discusses the model of secure communication.
• Section 14.2 proposes the new model of randomizing a general form of the
DOT, including theoretical foundations, qualified candidates of the parameter
matrix, properties and features of new transforms, and several examples of new
transforms.
• Section 14.3 discusses encryption applications for both 1D and 2D digital data;
the confusion and diffusion properties of an encryption system are also explored.
14.1 Preliminaries
This section briefly discusses two topics: the matrix form of a DOT, and cryptography background. The first step is to unify all DOTs in a general form so that the DOT randomization theory presented in Section 14.2 can be derived directly from this general form. The second step is to explain the concepts and terminology used in the model so that secure communication and encryption applications based on the RDOT can be presented clearly in Section 14.3.
Without loss of generality, let the vector x be the time-domain signal of size 1 × n, the vector y be the corresponding transform-domain signal of size 1 × n, the matrix M be the forward transform matrix of size n × n, and the matrix M̃ be the inverse transform matrix of size n × n.
Equation (14.1) is called the general matrix form of a DOT. Matrix theory states that the transform pair in Eq. (14.1) is valid for any time-domain signal x if and only if the matrix product of the forward transform matrix M and the inverse transform matrix M̃ is the identity matrix In:

Mn M̃n = In,     (14.2)
In =
1 0 · · · 0
0 1 · · · 0
· · ·
0 0 · · · 1   (n × n).     (14.3)
In reality, many discrete transforms are of the above type and can be denoted
in the general form of Eq. (14.1). It is the transform matrix pair of M and M̃ that
makes a distinct DOT. For example, the DHT transform pair is of the form of
Eq. (14.1) directly, with its forward transform matrix H and its inverse transform
matrix H̃ defined in Eqs. (14.4) and (14.5), respectively, where ⊗ denotes the
Kronecker product and H T denotes the transpose of H [see equivalent definitions
in Eq. (1.1)]:
Hn =
  H1 = (1),              n = 1,
  H2 = [1 1; 1 −1],      n = 2,
  H2 ⊗ H_{2^{k−1}},      n = 2^k.     (14.4)

H̃n = (1/n) Hn^T.     (14.5)
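As a quick sanity check of Eqs. (14.4) and (14.5), the following sketch builds H8 by repeated Kronecker products and verifies the forward/inverse round trip in the row-vector convention y = xM used in this chapter.

```python
# Sketch: the DHT pair of Eqs. (14.4)-(14.5) for n = 8, with a round trip.
import numpy as np

n = 8
H2 = np.array([[1, 1], [1, -1]])
Hn = H2.copy()
while Hn.shape[0] < n:
    Hn = np.kron(H2, Hn)                 # H_n = H_2 (x) H_{n/2}
Ht = Hn.T / n                            # inverse transform matrix, Eq. (14.5)

x = np.random.default_rng(1).normal(size=n)
y = x @ Hn                               # forward transform, y = x M
print(np.allclose(y @ Ht, x))            # True: M_n M~_n = I_n
```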
Similarly, the pair of size n × n DFT matrices can be defined as Eqs. (14.6) and
(14.7),
Fn = (1/√n) ×
w_1^1  w_1^2  · · ·  w_1^n
w_2^1  w_2^2  · · ·  w_2^n
 ...    ...   · · ·   ...
w_n^1  w_n^2  · · ·  w_n^n ,     (14.6)

F̃n = (1/√n) ×
w̄_1^1  w̄_1^2  · · ·  w̄_1^n
w̄_2^1  w̄_2^2  · · ·  w̄_2^n
 ...    ...   · · ·   ...
w̄_n^1  w̄_n^2  · · ·  w̄_n^n  = Fn∗,     (14.7)

where w_m^k is defined in Eq. (14.8), w̄_m^k is the complex conjugate of w_m^k, and Fn∗ is the complex conjugate of Fn. In Eq. (14.8), j denotes the standard imaginary unit, with the property j² = −1:

w_m^k = exp[−j(2π/n)(m − 1)(k − 1)].     (14.8)
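The following sketch builds Fn from Eqs. (14.6) and (14.8) and compares it against NumPy's orthonormally scaled FFT; the comparison with np.fft is an implementation detail, not part of the text.

```python
# Sketch: the unitary DFT matrix of Eqs. (14.6) and (14.8) for n = 8.
import numpy as np

n = 8
m, k = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing='ij')
Fn = np.exp(-2j * np.pi * (m - 1) * (k - 1) / n) / np.sqrt(n)

print(np.allclose(Fn, np.fft.fft(np.eye(n), norm='ortho')))   # True
print(np.allclose(Fn @ np.conj(Fn), np.eye(n)))               # F_n F~_n = I_n
```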
Similarly, the pair of size n × n DCT matrices can be defined as in Eqs. (14.9) and (14.10), where c_m^k is defined in Eq. (14.11):

Cn =
(1/√n) c_1^1   √(2/n) c_1^2   · · ·   √(2/n) c_1^n
(1/√n) c_2^1   √(2/n) c_2^2   · · ·   √(2/n) c_2^n
    ...            ...        · · ·       ...
(1/√n) c_n^1   √(2/n) c_n^2   · · ·   √(2/n) c_n^n ,     (14.9)

C̃n = Cn^T,     (14.10)

c_m^k = cos[π(2m − 1)(k − 1)/(2n)].     (14.11)
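Similarly, Cn of Eq. (14.9) can be generated and the inverse relation of Eq. (14.10) verified numerically:

```python
# Sketch: the DCT matrix of Eqs. (14.9)-(14.11) (row index m, column index k)
# and a check of the inverse relation C~_n = C_n^T.
import numpy as np

n = 8
m, k = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing='ij')
Cn = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m - 1) * (k - 1) / (2 * n))
Cn[:, 0] = np.sqrt(1.0 / n)              # first column carries the 1/sqrt(n) factor

print(np.allclose(Cn @ Cn.T, np.eye(n)))   # True: C_n C_n^T = I_n
```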
Besides the examples shown above, other DOTs can be written in the form of Eq. (14.1), for instance, the discrete Hartley transform1 and the discrete M-transform.38
14.1.2 Cryptography
The fundamental objective of cryptography is to enable two people, usually
referred to as Alice and Bob, to communicate over an insecure channel so that
an opponent, Oscar, cannot understand what is being said.42 The information that
Alice wants to send is usually called a plaintext, which can be numerical data, a text
message, or anything that can be represented by a digital bit stream. Alice encrypts
the plaintext, using a predetermined key K, and obtains an encrypted version of
the plaintext, which is called ciphertext. Then Alice sends this resulting ciphertext
over the insecure channel. Oscar (the eavesdropper), upon seeing the ciphertext,
cannot determine the contents of the plaintext. However, Bob (the genuine receiver)
knows the encryption key K and thus can decrypt the ciphertext and reconstruct the
plaintext sent by Alice. Figure 14.1 illustrates this general cryptography model.
In this model, it seems that only the ciphertext communicated over the insecure
channel is accessible by Oscar, making the above cryptosystem appear very secure.
In reality, however, Oscar should be considered a very powerful intruder who
transforms that randomize the original transform’s basis matrix by introducing two
new square matrices P and Q, such that the response y in the transform domain
will dramatically change, while keeping the new transform pair valid for any given
input signal x.
and square parameter matrices Pn and Qn are such that Pn Qn = In, then Ln and L̃n define a valid pair of transforms:

Forward transform:  y = x Ln,  where  Ln = Pn Mn Qn,
Inverse transform:  x = y L̃n,  where  L̃n = Pn M̃n Qn.
Proof: We want to show that for any signal vector x, the inverse transform (IT)
response z of x’s forward transform (FT) response y is equal to x. Consider the
following:
Q = P−1.     (14.12)

Matrix theory states that as long as a matrix P is invertible, its inverse exists; therefore, infinitely many matrices can be used for the matrix P. According to Eq. (14.12), Q is determined once P is determined. The remainder of this section focuses only on the matrix P, since Q can be determined correspondingly.
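A minimal sketch of the randomized-transform theorem, with M the order-8 WHT matrix and P a random matrix (invertible with probability 1 for a continuous distribution, an assumption made here):

```python
# Sketch: a randomized WHT, L_n = P_n M_n Q_n with Q = P^{-1}; the pair
# (L_n, L~_n) still inverts exactly for any signal.
import numpy as np

n = 8
H2 = np.array([[1., 1.], [1., -1.]])
M = np.kron(np.kron(H2, H2), H2)          # forward WHT matrix
Mt = M.T / n                              # inverse WHT matrix

rng = np.random.default_rng(2)
P = rng.uniform(-1, 1, size=(n, n))       # random parameter matrix
Q = np.linalg.inv(P)                      # Eq. (14.12)

L, Lt = P @ M @ Q, P @ Mt @ Q
x = rng.normal(size=n)
print(np.allclose((x @ L) @ Lt, x))       # True
```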
One good candidate for the matrix P is the permutation matrix family P.
The permutation matrix is sparse and can be compactly denoted. Two types of
permutation matrices are introduced here: the unitary permutation matrix and the
generalized permutation matrix.
Definition 14.1: The unitary permutation matrix.54 A square matrix P is called a
unitary permutation matrix (denoted as P ∈ U), if in every column and every row
there is exactly one nonzero entry, whose value is 1.
Definition 14.2: The generalized permutation matrix.54 A square matrix P is
called a generalized permutation matrix (denoted as P ∈ G), if in every column
and every row there is exactly one nonzero entry.
If P is a unitary permutation matrix, i.e., P ∈ U, then the n × n matrix P can be denoted by a 1 × n vector.
Example 14.2.1: The row permutation sequence [3, 4, 2, 1] denotes Eq. (14.13):54
P = PU =
0 0 0 1
0 1 0 0
1 0 0 0
0 0 1 0 .     (14.13)
Q = P−1 = PT.     (14.14)

P = D PU,     (14.15)
Q = PU^T D−1,     (14.16)

D =
d1   0   · · ·   0
0    d2  · · ·   0
· · ·
0    0   · · ·   dn ,     (14.17)

D−1 =
d1^{−1}   0        · · ·   0
0         d2^{−1}  · · ·   0
· · ·
0         0        · · ·   dn^{−1} .     (14.18)
Example 14.2.2: The row permutation sequence [3, 4, 2, 1] and the weight
sequence [w1 , w2 , w3 , w4 ] define the generalized permutation matrix:
P = PG =
0    0    0    w4
0    w2   0    0
w1   0    0    0
0    0    w3   0 .     (14.19)
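The following sketch builds a generalized permutation matrix as in Eqs. (14.15) and (14.16) and verifies PQ = I4; the permutation indexing convention (row i carries its nonzero entry in column seq[i]) and the weight values are assumptions made for illustration.

```python
# Sketch: a generalized permutation matrix P = D P_U and its inverse
# Q = P_U^T D^{-1}, Eqs. (14.15)-(14.16).
import numpy as np

seq = [3, 4, 2, 1]                        # row permutation sequence (assumed convention)
w = np.array([0.5, -1.3, 2.0, 0.7])       # illustrative nonzero weights

PU = np.zeros((4, 4))
PU[np.arange(4), np.array(seq) - 1] = 1.0 # row i has its single 1 in column seq[i]
D = np.diag(w)
P = D @ PU                                # generalized permutation matrix
Q = PU.T @ np.linalg.inv(D)               # its exact inverse
print(np.allclose(P @ Q, np.eye(4)))      # True: P Q = I4
```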
As a result, the new DOT matrix L = PMQ can be interpreted as a weighted and
shuffled basis matrix of the original transform basis matrix M. It is worth noting
that the parameter matrix P can be any invertible matrix and that the permutation
matrix family is just one special case of the invertible matrix family.
In this example, four Ps, namely Pa, Pb, Pc, and Pd, are used, and the corresponding new transform matrices are La, Lb, Lc, and Ld, respectively. Figure 14.2 illustrates the Ps used and the resulting transform matrices Ls. For Pa = I8, La = I8 H8 I8−1, and thus La = H8, as plot (a) shows; for Pb = PU, a unitary permutation matrix, Lb is a row-column-wise shuffled version of H8; for Pc = PG, a generalized permutation matrix, Lc is a row-column-wise shuffled version of H8 with additional weights; and for Pd = R8, an invertible random square matrix, Ld also becomes a random matrix. Obviously, the new transform basis matrices (f), (g), and (h) in Fig. 14.2 are very different from the original transform basis matrix (e).
Table 14.1 Transform matrix statistics. Mean and standard deviations were calculated from
100 experiments.
Ps \ Ms    I    F    C    H

mean(M) = uM = Σ_{i=1}^{n} Σ_{j=1}^{n} Mi,j / n²,     (14.21)

std(M) = σM = [ Σ_{i=1}^{n} Σ_{j=1}^{n} (Mi,j − uM)² / n² ]^{1/2},     (14.22)
where Mi, j denotes the element at the intersection of the i’th row and j’th column
of the matrix M.
Table 14.1 illustrates the first two statistical moments of the new generated
transform matrix L under various pairs of P and M at size 64. Here the random
full rank matrix R is uniformly distributed on [−1, 1], and the symmetric matrix
S is generated via Eq. (14.23); the nonzero elements in PG are also uniformly
distributed on [−1, 1].
S = (R + RT )/2. (14.23)
The selected M matrices are F (the DFT matrix), C (the DCT matrix) and
H (the DHT matrix). For real Ls, statistical tests are made on its elements
directly; for complex Ls, tests are made on its phase elements. In general, a more
uniformly distributed P matrix on [−1, 1] leads to a higher standard deviation in the
L matrix.
It is worth noting that the first two statistical moments measure only the
population properties; thus, the permutation within the matrix is not accounted
for. However, the permutations within the matrix will cause a significant difference
in the resulting new generated transform.
then the new transform system of R(.) and R−1 (.) can be directly realized by the
original transform system of S (.) and S −1 (.), as the following equation shows:
R(x) = x L = x(P M Q) = S(xP) · Q,
R−1(y) = y L̃ = y(P M̃ Q) = S−1(yP) · Q.     (14.26)
Equation (14.26) is very important because it demonstrates that the new transform
system is completely compatible with the original transform. Unlike other
randomization transforms,39,40 the new randomized transform does not require
any computations on eigenvector decompositions and thus does not create any
approximation-caused errors. Any existing transform system conforming to the
model can be used to obtain new transforms without any change.
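A sketch of Eq. (14.26) for M = Fn: the randomized DFT is evaluated once directly and once through NumPy's FFT applied to the permuted input, with identical results.

```python
# Sketch: implementing the randomized DFT via the ordinary FFT, Eq. (14.26).
import numpy as np

n = 8
rng = np.random.default_rng(3)
P = rng.uniform(-1, 1, size=(n, n))       # parameter matrix (invertible w.p. 1)
Q = np.linalg.inv(P)

F = np.fft.fft(np.eye(n), norm='ortho')   # unitary DFT matrix
x = rng.normal(size=n)

y_direct = x @ (P @ F @ Q)                     # R(x) = x L, L = P M Q
y_fast = np.fft.fft(x @ P, norm='ortho') @ Q   # S(xP) . Q, reusing the FFT
print(np.allclose(y_direct, y_fast))           # True
```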
The resulting signals in transform domains are shown in Fig. 14.3. It is easy
to see that the signals in the proposed transforms tend to have more random-like
patterns than DFT, DRFT and DFRFT in both phase and magnitude components.
More detailed statistics of each transformed signal are shown in Table 14.2. In
addition, compared to other methods, our randomization method is advantageous
because
• no eigenvector decomposition approximation is required;
• it is completely compatible with DFT at arbitrary size; and
• it can be directly implemented with fast Fourier transforms.
In Table 14.2, it is worth noting that a uniformly distributed random phase variable on [−π, π] has a mean u = 0 and a standard deviation σ = π/√3 ≈ 1.8138.
In addition, the symmetry of the phase and the magnitude component of each
transformed signal was measured. This measure compares the left half and the
right half of the transform domain signal and is defined in Eq. (14.28), where y
denotes the transformed signal, and corr denotes the correlation operation defined
in Eq. (14.29), where E denotes the mathematical expectation:
Figure 14.5 The flowchart of the encryption system based on existent transforms.
the transform domain. As a result, their transform domain signal provides very little
information about the plaintext, the rectangular wave in the time domain defined
by Eq. (14.27).
One might ask whether the proposed encryption method for 1D cases can be
also used for high-dimensional digital data. The answer is affirmative, because the
high-dimensional data can always be decomposed to a set of lower-dimensional
data.
For example, a digital video clip is normally taken in the format of a sequence
of digital images. In other words, a digital video (3D data) can be considered as
a set of digital images (2D data). Therefore, from this point of view, the proposed
encryption method in Fig. 14.5 can be applied even for high-dimensional digital
data for encryption.
For example, assume that the input digital data is an n × n gray image. If the key matrix is restricted to the unitary permutation matrix set U (see Definition 14.1), then the number of allowed keys is n!. For a 256 × 256 gray image (this size is commonly used for key space analysis and is much smaller than standard digital photos), the allowed key space is 256! ≈ 2^1684, i.e., about 1684 bits, which implies a sufficiently large key space. It is worth noting that current encryption algorithms and ciphers consider a 256-bit key space (2^256 keys) large enough to resist brute-force attacks.
If P is restricted to the generalized permutation matrix set G (see Definition 14.2),
then the number of allowed keys is infinite because the possible weights can be
defined as any decimal number.
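The figure 2^1684 follows from log2(256!), which can be computed via the log-gamma function:

```python
# Sketch: bit size of the key space 256!, using log2(256!) = lgamma(257)/ln 2.
import math
print(math.lgamma(257) / math.log(2))   # ~1684, i.e. 256! ~ 2^1684
```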
14.3.3.2 Confusion property
The confusion property is desirable because ciphertexts generated by different
keys have the same statistics, in which case the statistics of a ciphertext give no
information about the used key.53 This section argues that even the naïve image
encryption system presented in Fig. 14.5 has promising confusion and diffusion
properties. The confusion property of the proposed system is difficult to prove,
so this property will be illustrated with various images. The diffusion property of
the proposed system can be shown by calculating the number-of-pixel change rate
(NPCR) of the system, so a proof will be given directly.
Figures 14.6–14.8 depict the confusion property of the system from different
aspects by using various plaintext image and P matrices. It is worth noting
that the transforms that these figures all refer to are 2D transforms defined in
Eqs. (14.31)–(14.33), and a random transform is taken with respect to the general
form of Eq. (14.34).
In Fig. 14.6, the gray 256 × 256 “Lena” image is used as the plaintext image.
The ciphertext for each 2D transform according to the unitary permutation matrix
PU , the generalized permutation matrix PG , and the full-rank random matrix
R are shown in the odd rows. In the even rows, histograms are plotted below
their corresponding images. From a visual inspection and histogram analysis
perspective, it is clear that various key matrices P generate similar statistics.
In Fig. 14.7, three 256 × 256 gray images from the USC-SIPI image database
demonstrate the effect of using various plaintext images and different types of P
matrices. It is not difficult to see that the ciphertext images have similar coefficient
histograms compared to those in Fig. 14.6, although both of the P matrices and the
plaintext images are different. This indicates that the statistics of ciphertext images
provide very limited information about the key, and thus the proposed system has
the property of confusion.
Figure 14.8 investigates the transform coefficient histogram of the ciphertext
image in the first row of Fig. 14.7. Note that the ciphertext images are sized at
256 × 256. Instead of looking at coefficient histograms for the whole ciphertext
image, the whole ciphertext image was divided into sixteen 64 × 64 image
blocks without overlapping, then the coefficient histogram of each image block
was inspected. These block coefficient histograms are shown in the second
column of Fig. 14.8. The third and fourth columns show the mean and standard
Figure 14.6 Image encryption using the proposed random transform—Part I: influences
of different types of random matrix P. (See note for Fig. 14.7 on p. 467.)
Figure 14.7 Image encryption using the proposed random transform—Part II: using different input images. Note: (1) The histograms of O are plotted using 256 bins uniformly distributed on [0, 255], the range of a gray image. (2) The histograms of PU are plotted using two bins, i.e., 0 and 1; the number of pixels in each bin is shown after taking the base-10 logarithm. (3) The histograms of PG and R are plotted using 256 bins uniformly distributed on [−1, 1]; the number of pixels in each bin is shown after taking the base-10 logarithm. (4) The histograms of ciphertext images of the DCT and WHT are plotted using 256 bins whose base-10 logarithmic lengths are uniformly distributed on [−log m, log m], where m is the maximum absolute transform-domain coefficient. (5) The histograms of ciphertext images of the DFT are plotted with respect to magnitude and phase, respectively. The magnitude histogram is plotted using 256 bins whose base-10 logarithmic lengths are uniformly distributed on [0, log m], where m is the maximum magnitude. The phase histogram is plotted using 256 bins uniformly distributed on [−π, π].
Definition 14.3:

NPCR = ( Σ_{i,j} Di,j / T ) × 100%,
Di,j = 1 if C¹i,j ≠ C²i,j, and Di,j = 0 if C¹i,j = C²i,j,     (14.36)
Figure 14.8 Image encryption using the proposed random transform—Part III: random-
ness of the transform coefficients.
where C 1 and C 2 are ciphertexts before and after changing one pixel in plaintext,
respectively; D has the same size as image C 1 ; and T denotes the total number of
pixels in C 1 .
Lemma 14.1: Suppose A and B are both n × n transform matrices with nonzero elements, i.e., for any subscript pair i, j, Ai,j ≠ 0 and Bi,j ≠ 0, and let the corresponding 2D transform S(.) be defined as in Eq. (14.30). Suppose x and y are two n × n matrices with yi,j = z if i = r, j = c, and yi,j = xi,j otherwise, where r and c are constant integers between 1 and n, and z is a constant with z ≠ xr,c. Then, for any subscript pair i, j, [S(x)]i,j ≠ [S(y)]i,j.
Similarly, there is

[S(y)]i,j = Ai,r × yr,c × Bc,j + Σ_{(m,k)≠(r,c)} Ai,m × ym,k × Bk,j
          = Ai,r × yr,c × Bc,j + Σ_{(m,k)≠(r,c)} Ai,m × xm,k × Bk,j.     (14.38)
Then,
Proof: Suppose that x and X are two plaintexts with a one-pixel difference, as in Lemma 14.1, and that y and Y are the corresponding ciphertexts obtained using the proposed encryption system. The ciphertext can be denoted as
and correspondingly,
If both M and Z satisfy the condition that they contain no zero components, then Lemma 14.1 can be applied directly. Therefore, for all i, j, yi,j ≠ Yi,j, and, equivalently, Di,j = 1 for all i, j.
As a result,
NPCR = 100%.
Remarks 14.1: Nonzero conditions imposed on M and Z are automatically
satisfied if P is a unitary permutation matrix PU or a generalized permutation
matrix PG .
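A minimal sketch of this diffusion argument, using a permuted Hadamard basis (which has no zero entries) for both factor matrices of the 2D transform; the specific key and image here are illustrative assumptions:

```python
# Sketch: with S(x) = A x B and zero-free A, B, a one-pixel change in the
# plaintext changes every ciphertext coefficient, so NPCR = 100%.
import numpy as np

n = 8
H2 = np.array([[1., 1.], [1., -1.]])
H = np.kron(np.kron(H2, H2), H2)          # Hadamard basis: no zero entries
rng = np.random.default_rng(4)
PU = np.eye(n)[rng.permutation(n)]        # unitary permutation key
A = B = PU @ H @ PU.T                     # randomized basis, still zero-free

x = rng.integers(0, 256, size=(n, n)).astype(float)
y = x.copy()
y[2, 5] += 1.0                            # change one pixel

C1, C2 = A @ x @ B, A @ y @ B
print((C1 != C2).mean() * 100)            # 100.0
```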
References
1. C.-Y. Hsu and J.-L. Wu, “Fast computation of discrete Hartley transform via
Walsh–Hadamard transform,” Electron. Lett. 23 (9), 466–468 (1987).
2. S. Sridharan, E. Dawson, and B. Goldburg, “Speech encryption in the
transform domain,” Electron. Lett. 26, 655–657 (1990).
3. A. H. Delaney and Y. Bresler, “A fast and accurate Fourier algorithm for iterative parallel-beam tomography,” IEEE Trans. Image Process. 5, 740–753 (1996).
4. T. M. Foltz and B. M. Welsh, “Symmetric convolution of asymmetric multidimensional sequences using discrete trigonometric transforms,” IEEE Trans. Image Process. 8, 640–651 (1999).
5. I. Djurovic and V. V. Lukin, “Robust DFT with high breakdown point for complex-valued impulse noise environment,” IEEE Signal Process. Lett. 13, 25–28 (2006).
6. G. A. Shah and T. S. Rathore, “A new fast radix-2 decimation-in-frequency algorithm for computing the discrete Hartley transform,” in Proc. First Int. Conf. on Computational Intelligence, Communication Systems and Networks (CICSYN ’09), 363–368 (2009).
7. J. Bruce, “Discrete Fourier transforms, linear filters, and spectrum weighting,” IEEE Trans. Audio Electroacoust. 16, 495–499 (1968).
8. C. J. Zarowski, M. Yunik, and G. O. Martens, “DFT spectrum filtering,” IEEE Trans. Acoust., Speech, Signal Process. 36, 461–470 (1988).
9. R. Kresch and N. Merhav, “Fast DCT domain filtering using the DCT and the DST,” IEEE Trans. Image Process. 8, 821–833 (1999).
10. V. F. Candela, A. Marquina, and S. Serna, “A local spectral inversion of a linearized TV model for denoising and deblurring,” IEEE Trans. Image Process. 12, 808–816 (2003).
39. Z. Liu and S. Liu, “Randomization of the Fourier transform,” Opt. Lett. 32,
478–480 (2007).
40. S.-C. Pei and W.-L. Hsue, “Random discrete fractional Fourier transform,” IEEE Signal Process. Lett. 16, 1015–1018 (2009).
41. M. T. Hanna, N. P. A. Seif, and W. A. E. M. Ahmed, “Hermite–Gaussian-like eigenvectors of the discrete Fourier transform matrix based on the singular-value decomposition of its orthogonal projection matrices,” IEEE Trans. Circuits Syst. I: Regular Papers 51, 2245–2254 (2004).
42. D. Stinson, Cryptography: Theory and Practice, CRC Press (2006).
43. M. Yang, N. Bourbakis, and S. Li, “Data-image-video encryption,” IEEE Potentials 23, 28–34 (2004).
44. T. Chuang and J. Lin, “A new multiresolution approach to still image
encryption,” Pattern Recognition and Image Analysis 9, 431–436 (1999).
45. Y. Wu, J. P. Noonan, and S. Agaian, “Binary data encryption using the Sudoku block,” in Proc. IEEE Int. Conf. on Systems, Man and Cybernetics (SMC 2010) (2010).
46. G. Chen, Y. Mao, and C. K. Chui, “A symmetric image encryption scheme
based on 3D chaotic cat maps,” Chaos, Solitons & Fractals 21, 749–761
(2004).
47. L. Zhang, X. Liao, and X. Wang, “An image encryption approach based on
chaotic maps,” Chaos, Solitons & Fractals 24, 759–765 (2005).
48. X. Wu and P. W. Moo, “Joint image/video compression and encryption via high-order conditional entropy coding of wavelet coefficients,” in Proc. IEEE Int. Conf. on Multimedia Computing and Systems, 2, 908–912 (1999).
49. S. S. Maniccam and N. G. Bourbakis, “Image and video encryption using
SCAN patterns,” Pattern Recognition 37, 725–737 (2004).
50. N. Bourbakis and A. Dollas, “SCAN-based compression-encryption-hiding
for video on demand,” Multimedia, IEEE 10, 79–87 (2003).
51. S. Maniccam and N. Bourbakis, “Lossless image compression and encryption
using SCAN,” Pattern Recognition 34, 1229–1245 (2001).
52. R. Sutton, Secure Communications: Applications and Management, John
Wiley & Sons, New York (2002).
53. C. E. Shannon, “Communication theory of secrecy systems,” Bell Syst. Tech. J. 28, 656–715 (1949).
54. L. Smith, Linear Algebra, 3rd ed., Springer-Verlag, New York (1998).
55. D. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas with Applica-
tion to Linear Systems Theory, Princeton University Press, Princeton (2005).
56. A. Gupta and K. R. Rao, “An efficient FFT algorithm based on the discrete sine transform,” IEEE Trans. Signal Process. 39, 486–490 (1991).
57. R. González and R. Woods, Digital Image Processing, Prentice Hall, New
York (2008).
58. C. K. Huang and H. H. Nien, “Multi chaotic systems based pixel shuffle for
image encryption,” Optics Communications 282, 2123–2127 (2009).
59. H. S. Kwok and W. K. S. Tang, “A fast image encryption system based on
chaotic maps with finite precision representation,” Chaos, Solitons & Fractals
32, 1518–1529 (2007).
60. Y. Mao, G. Chen, and S. Lian, “A novel fast image encryption scheme based
on 3D chaotic Baker maps,” Int. J. Bifurcation and Chaos 14 (10), 3613–3624
(2003).