Optimal Least-Squares Design of Sparse FIR Filters for Big-Data Signal Processing

Abstract—Since the amount of big data is extremely large, low-delay and low-complexity signal-processing devices are strongly required in big-data signal processing. Digital filters are the key devices for digital signal processing. Digital filters with sparse coefficients (zero coefficients) are beneficial for reducing the computational complexity. This paper proposes a design method for low-delay FIR filters with sparse coefficients. We consider the combinatorial optimization of the selection of sparse coefficients. Once the sparse coefficients are selected, the real coefficients can be computed by the Lagrange multiplier method. We employ the branch and bound method incorporated with the Lagrange multiplier method. Also, we propose a selection method of the initial cost for high-speed search. The features of this method are as follows: (a) the number of zero coefficients can be explicitly specified; (b) the optimality is guaranteed in the least squares sense. We present a design example to demonstrate the effectiveness of our method.

Keywords: Least Squares Design, FIR filter, Sparse coefficients, Low-delay, Big-Data Signal Processing

I. INTRODUCTION

Digital signal processing is widely used in various fields such as measurement, automatic control, and biomedical engineering. Recently, as digital signals have grown into big data, the signals to be processed have become large. Thus, low-delay and low-complexity signal-processing devices are required for big-data signal processing.

According to their structures, digital filters [1]-[3] can be classified into two types: finite impulse response (FIR) filters [4]-[10] and infinite impulse response (IIR) filters [11]-[17]. Although FIR filters can achieve an exact linear-phase property through symmetric coefficients and are guaranteed to be stable, the order of an FIR filter is generally higher than that of an IIR filter of comparable accuracy. Hence, a recent trend in the design of FIR filters [4]-[10] is to employ sparsity (zero coefficients).

Digital filters with sparse coefficients are beneficial for reducing the computational complexity. This paper proposes a design method for low-delay FIR filters with sparse coefficients. We consider the combinatorial optimization of the selection of sparse coefficients. Using the Lagrange multiplier method, the real coefficients can be computed for a given selection of sparse coefficients. Also, we employ the branch and bound method to reduce the search space based on the relaxation of the sub-problem. However, the search time is extremely long even if we use the relaxation. Hence, we propose a selection method of the initial cost for high-speed search. The feature of this method is that the optimal filter with sparse coefficients can be obtained in the least squares sense.

The paper is organized as follows. In Section II, we present the design problem of FIR filters. In Section III, a design scheme based on the branch and bound method is applied to the combinatorial optimization problem. In Section IV, a design example is given to demonstrate the effectiveness of the proposed method.

II. LOW-DELAY FIR FILTERS

Let the frequency response be

    H(\omega) = \sum_{l=0}^{n} b_l e^{-jl\omega}    (1)

where b_l are the filter coefficients and n is the order of the polynomial. Here, define

    \boldsymbol{b} = [b_0, b_1, \cdots, b_n]^T    (2)

where the superscript T indicates the transposition of the matrix.

Let the desired response be H_d(ω) and the weighting function be W(ω). Then, the cost function of the least-squares error is defined as

    J = \int_0^{\pi} W(\omega) \bigl| H_d(\omega) - H(\omega) \bigr|^2 \, d\omega.    (3)

For simplicity, we consider the multi-band response [17]. Thus, the desired response is defined as

    H_d(\omega) =
    \begin{cases}
    G_1 e^{-j\tau_1\omega}, & \omega_1 \le \omega < \omega_2 \\
    G_2 e^{-j\tau_2\omega}, & \omega_2 \le \omega < \omega_3 \\
    \quad \vdots \\
    G_N e^{-j\tau_N\omega}, & \omega_N \le \omega < \omega_{N+1}
    \end{cases}    (4)

where G_i, i = 1, 2, …, N are the filter gains, and τ_i, i = 1, 2, …, N are the desired group delays in each band.

Moreover, the weighting function is set as

    W(\omega) :=
    \begin{cases}
    W_1, & \omega_1 \le \omega < \omega_2 \\
    W_2, & \omega_2 \le \omega < \omega_3 \\
    \quad \vdots \\
    W_N, & \omega_N \le \omega < \omega_{N+1}
    \end{cases}    (5)
Authorized licensed use limited to: Qilu University of Technology. Downloaded on April 18,2023 at 03:24:13 UTC from IEEE Xplore. Restrictions apply.
where W_1, W_2, …, W_N are non-negative real values.

Then, (3) can be expressed as

    J = \begin{bmatrix} 1 & \boldsymbol{b}^T \end{bmatrix}
        \begin{bmatrix} p_0 & \boldsymbol{q} \\ \boldsymbol{q}^T & \boldsymbol{R} \end{bmatrix}
        \begin{bmatrix} 1 \\ \boldsymbol{b} \end{bmatrix}
      = \boldsymbol{x}^T \boldsymbol{K} \boldsymbol{x}    (6)

where

    \boldsymbol{x} = \begin{bmatrix} 1 & \boldsymbol{b}^T \end{bmatrix}^T
                   = [x_0, x_1, \cdots, x_{n+1}]^T,    (7)

    \boldsymbol{K} = \begin{bmatrix} p_0 & \boldsymbol{q} \\ \boldsymbol{q}^T & \boldsymbol{R} \end{bmatrix}.    (8)

Now, we define q_l as the elements of q and R_{l,l'} as the elements of R:

    \boldsymbol{q} = [\, q_0 \;\; q_1 \;\; \cdots \;\; q_n \,],    (9)

    \boldsymbol{R} = \begin{bmatrix}
    R_{0,0} & R_{0,1} & \cdots & R_{0,n} \\
    R_{1,0} & R_{1,1} & \cdots & R_{1,n} \\
    \vdots  & \vdots  & \ddots & \vdots  \\
    R_{n,0} & R_{n,1} & \cdots & R_{n,n}
    \end{bmatrix}.    (10)

Here, p_0 is computed as

    p_0 = \sum_{i=1}^{N} W_i G_i^2 (\omega_{i+1} - \omega_i),    (11)

and q_l can be obtained as

    q_l = -\sum_{i=1}^{N} W_i G_i \tilde{q}_i    (12)

with

    \tilde{q}_i =
    \begin{cases}
    \omega_{i+1} - \omega_i & \text{if } l = \tau_i \\
    \dfrac{1}{\tau_i - l}\{\sin[(\tau_i - l)\omega_{i+1}] - \sin[(\tau_i - l)\omega_i]\} & \text{otherwise.}
    \end{cases}    (13)

Also, R_{l,l'} can be obtained as

    R_{l,l'} =
    \begin{cases}
    \displaystyle\sum_{i=1}^{N} W_i (\omega_{i+1} - \omega_i) & \text{if } l = l' \\
    \displaystyle\sum_{i=1}^{N} \dfrac{W_i}{l - l'}\{\sin[(l - l')\omega_{i+1}] - \sin[(l - l')\omega_i]\} & \text{otherwise.}
    \end{cases}    (14)

III. SPARSE OPTIMIZATION

A. Constrained optimization problem for filter design with sparse coefficients

Let us consider the filter design with sparse (zero) coefficients. Assume that the number of sparse coefficients is p (i.e., the number of multipliers is n + 1 − p). Now, let the set of sparse coefficients be x_s and the set of real coefficients be x_r with x_r ∈ R^{n+1−p}, respectively. Each element of x belongs to either x_s or x_r.

Fig. 1. Tree model of sparse coefficients. [figure omitted: tree whose nodes are x_1, …, x_5]

Here, define the set of indices of the sparse coefficients as

    \mathcal{I}_s = \{\, k \mid x_k = 0 \,\}    (15)

where |I_s| = p.

Hence, the design problem with the sparse coefficients can be formulated as

    \min_{\boldsymbol{x}} \; J(\boldsymbol{x})    (16a)
    \text{subject to} \;\; x_0 = 1,    (16b)
    x_k = 0 \;\; \text{for } k \in \mathcal{I}_s.    (16c)

The standard approach to sparse optimization is to use a new cost function J̃(x) defined as

    \tilde{J}(\boldsymbol{x}) = J(\boldsymbol{x}) + r(\boldsymbol{x})    (17)

where r(x) is a regularization term. A sparse solution can be obtained by solving problem (17). However, the sparsity of the solution depends on the weight of r(x). Hence, it is hard to adjust the number of zero coefficients.

In this paper, we present a new approach to obtain a sparse solution. This method can explicitly specify the number of zero coefficients as p.

Now, let us consider all possible combinations of the selection of sparse coefficients. The number of candidates is

    \binom{n+1}{p} = \frac{(n+1)!}{p!\,(n+1-p)!}.    (18)

Note that the number of candidates for S is equal to (18). Fig. 1 shows the tree model of the example above.

B. Lagrange multiplier method

Assume that x_s is fixed. Then x_r can be computed by solving the following constrained problem:

    \min_{\boldsymbol{x}} \; J(\boldsymbol{x})    (19a)
    \text{subject to} \;\; \boldsymbol{x}^T \boldsymbol{S} = \boldsymbol{u}_{p+1}    (19b)
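As a quick numerical illustration, the quadratic form (6) with the closed-form entries (11)-(14) can be sketched as follows. This is a minimal Python sketch, not the authors' code; the band edges, gains, group delays, and the order n are illustrative assumptions, and x^T K x is cross-checked against a direct numerical evaluation of (3).

```python
# Sketch of the quadratic form (6): p_0, q_l, R_{l,l'} per (11)-(14),
# K assembled per (8), cross-checked against the integral cost (3).
import math
import numpy as np

n = 30
# (w_i, w_{i+1}, G_i, tau_i, W_i) for each band, as in (4)-(5); illustrative.
bands = [(0.0, 0.3 * np.pi, 1.0, 12.0, 1.0),
         (0.4 * np.pi, np.pi, 0.0, 0.0, 1.0)]

def cos_int(a, lo, hi):
    """Integral of cos(a*w) over [lo, hi]; the a = 0 case gives hi - lo."""
    return hi - lo if a == 0 else (math.sin(a * hi) - math.sin(a * lo)) / a

p0 = sum(Wi * G ** 2 * (hi - lo) for lo, hi, G, tau, Wi in bands)      # (11)
q = np.array([-sum(Wi * G * cos_int(tau - l, lo, hi)                   # (12)-(13)
                   for lo, hi, G, tau, Wi in bands) for l in range(n + 1)])
R = np.array([[sum(Wi * cos_int(l - lp, lo, hi)                        # (14)
                   for lo, hi, G, tau, Wi in bands)
               for lp in range(n + 1)] for l in range(n + 1)])
K = np.block([[np.array([[p0]]), q[None, :]],                          # (8)
              [q[:, None], R]])

# Cross-check x^T K x against a numerical evaluation of (3).
rng = np.random.default_rng(0)
b = rng.standard_normal(n + 1)
x = np.concatenate(([1.0], b))                                         # (7)
J_quad = x @ K @ x

w = np.linspace(0, np.pi, 20001)
Hd = np.zeros_like(w, dtype=complex)
W = np.zeros_like(w)
for lo, hi, G, tau, Wi in bands:
    mask = (w >= lo) & (w < hi)
    Hd[mask] = G * np.exp(-1j * tau * w[mask])
    W[mask] = Wi
H = np.exp(-1j * np.outer(w, np.arange(n + 1))) @ b                    # (1)
J_num = np.sum(W * np.abs(Hd - H) ** 2) * (w[1] - w[0])                # (3)

p = 10
n_candidates = math.comb(n + 1, p)                                     # (18)
```

Since J = x^T K x is the weighted error energy of (3), K is symmetric positive semi-definite, and the agreement J_quad ≈ J_num confirms the closed-form entries.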
where

    \boldsymbol{S} = \begin{bmatrix}
    1 & \boldsymbol{0}_{1 \times p} \\
    \boldsymbol{0}_{(n+1) \times 1} & \boldsymbol{S}^{(n+1) \times p}
    \end{bmatrix},    (20)

    \boldsymbol{u}_{p+1} = [\, 1 \;\; \underbrace{0 \;\cdots\; 0}_{p} \,].    (21)

Here, S is an (n + 2) × (p + 1) matrix, and u_{p+1} is a (p + 1)-dimensional row vector. Now, let S_{k,l} be an element of S. Then, S_{k,l} ∈ {0, 1}, k = 1, 2, …, (n + 2), l = 1, 2, …, (p + 1).

We employ the Lagrange multiplier method to solve problem (19). Define

    L(\boldsymbol{x}, \boldsymbol{\lambda}) = \frac{1}{2}\boldsymbol{x}^T \boldsymbol{K} \boldsymbol{x} - (\boldsymbol{x}^T \boldsymbol{S} - \boldsymbol{u}_{p+1})\boldsymbol{\lambda}    (22)

where λ is the vector of Lagrange multipliers. Differentiating L(x, λ) with respect to x and λ, and equating to 0, we have

    \boldsymbol{x} = \boldsymbol{K}^{-1} \boldsymbol{S} (\boldsymbol{S}^T \boldsymbol{K}^{-1} \boldsymbol{S})^{-1} \boldsymbol{u}_{p+1}^T.    (23)

C. Branch and bound method

Fig. 2. Comparison between the relaxed problem and the temporary solution. [figure omitted]

Fig. 3. Termination of a subtree. [figure omitted]

Let the temporary solution be x̃_tmp = (x_3 = x_4 = x_5 = 0, {x_1, x_2} ∈ R^2). Now, consider the relaxation solution x̃_rlx = (x_1 = x_2 = 0, {x_3, x_4, x_5} ∈ R^3), as shown in Fig. 2. Now, if

    J(\tilde{\boldsymbol{x}}_{\mathrm{rlx}}) > J(\tilde{\boldsymbol{x}}_{\mathrm{tmp}}),    (26)

no optimal solution exists under the sub-tree x_1 = x_2 = 0, as shown in Fig. 3. Based on the branch and bound method above, the number of searches can be reduced.

Let the initial cost value of the solution be J_i. (The computation procedure of J_i is introduced in the next section.) The branch and bound method for the optimization of sparse coefficients is summarized as follows.

STEP 1: If not all sub-trees are excluded, select another sub-tree. Otherwise, stop the algorithm.

STEP 2: If a possible solution can be computed, go to STEP 3. If a relaxation solution can be computed, go to STEP 4.

STEP 3: Compute the possible solution. If the possible solution is better than the temporary solution, update the temporary solution and its cost value. Return to STEP 1.

STEP 4: Compare the relaxation solution and the temporary solution. If the relaxation solution is better than the temporary solution, return to STEP 1. If not, exclude the sub-tree and return to STEP 1.

D. Selection of the initial cost for high-speed search

The performance of the branch and bound method strongly depends on the initial cost value. That is, it is preferable that the initial cost value be slightly worse than that of the optimal solution x̃*. Let us consider how to select the initial cost value. First, compute an optimal (non-sparse) solution by using (23), and let the optimal solution be x*. Second, we obtain an initial temporary solution x̃_0 based on the following procedure.

(i) Select p elements from x* in ascending order and equate the p elements to zero.

(ii) Set S according to the elements selected in (i).

(iii) Compute the temporary solution x̃_0 with S set in (ii) by using (23).

(iv) The initial cost value is set as

    J_1 = \frac{J(\boldsymbol{x}^*) + J(\tilde{\boldsymbol{x}}_0)}{2}.    (27)

(v) Now, let the optimal sparse solution be x̃*. If …
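A minimal Python sketch of the constrained solve (23) and the initial cost J_1 of (27) follows. It is not the authors' implementation: K is a hypothetical symmetric positive-definite stand-in for the matrix of (8), and "ascending order" in step (i) is read here as ascending order of magnitude, which is an assumption.

```python
# Sketch of (23) for one sparsity pattern, and of the initial cost (27).
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 3
A = rng.standard_normal((n + 2, n + 2))
K = A @ A.T + (n + 2) * np.eye(n + 2)        # hypothetical SPD stand-in for (8)

def solve_pattern(K, zero_idx):
    """x = K^{-1} S (S^T K^{-1} S)^{-1} u_{p+1}^T, i.e. (23).

    S stacks the unit vector enforcing x_0 = 1 (first column) with the
    unit vectors of the indices in zero_idx (x_k = 0 for k in I_s of (15))."""
    m = K.shape[0]
    cols = [0] + list(zero_idx)
    S = np.zeros((m, len(cols)))
    S[cols, np.arange(len(cols))] = 1.0
    u = np.zeros(len(cols))
    u[0] = 1.0                                # u_{p+1} of (21)
    KiS = np.linalg.solve(K, S)               # K^{-1} S
    return KiS @ np.linalg.solve(S.T @ KiS, u)

def cost(K, x):
    return float(x @ K @ x)                   # J(x) = x^T K x, cf. (6)

# Optimal non-sparse solution x*: only x_0 = 1 is enforced.
x_star = solve_pattern(K, [])
# (i)-(iii): zero the p smallest-magnitude coefficients and re-solve (assumed reading).
smallest = np.argsort(np.abs(x_star[1:]))[:p] + 1
x0_tilde = solve_pattern(K, smallest)
# (iv): initial cost value (27).
J1 = 0.5 * (cost(K, x_star) + cost(K, x0_tilde))
```

Since x̃_0 satisfies strictly more constraints than x*, J(x*) ≤ J_1 ≤ J(x̃_0), so J_1 sits between the non-sparse optimum and the heuristic sparse solution, as intended.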
TABLE I. COST VALUES

    p  | J(x̃_0)     | J_i                                | J(x̃*)
    10 | 8.5411e-05 | 6.9110e-05 (J_1), 7.7261e-05 (J_2) | 7.1895e-05
    20 | 1.8383e-04 | 1.1832e-04 (J_1), 1.5107e-04 (J_2) | 1.2062e-04
    30 | 4.6325e-04 | 2.5803e-04                         | 2.5014e-04
    40 | 1.1665e-03 | 6.0967e-04                         | 4.9775e-04

[Fig. 4 plots omitted: magnitude responses in dB and magnitude of complex errors versus normalized frequency.]

… the black broken lines are the responses of the non-sparse filter (x*). It should be noted that we can design the optimal sparse filter in the least squares sense. The cost values are shown in Table I. Table I indicates that the trials with J_1 obtain no solution when p = 10, 20. Hence, the initial cost is changed to J_2; after using J_2, the optimal solution is found.

Table II shows the comparison of the number of combinations. From Table II, we can see that our method reduces the number of searches in comparison with round-robin search.
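Since Table II itself is not reproduced here, the following toy Python sketch (a hypothetical small problem, not the paper's design example) illustrates the comparison: a depth-first branch and bound over the sparsity patterns that prunes a sub-tree when its relaxed cost already exceeds the incumbent, cf. (26), checked for optimality against round-robin (exhaustive) search over the count (18).

```python
# Toy branch-and-bound over sparsity patterns with relaxation pruning.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, p = 8, 3
A = rng.standard_normal((n + 2, n + 2))
K = A @ A.T + (n + 2) * np.eye(n + 2)         # hypothetical SPD stand-in for (8)

def solve_pattern(K, zero_idx):
    """Minimizer of x^T K x with x_0 = 1 and x_k = 0 for k in zero_idx, via (23)."""
    m = K.shape[0]
    cols = [0] + list(zero_idx)
    S = np.zeros((m, len(cols)))
    S[cols, np.arange(len(cols))] = 1.0
    u = np.zeros(len(cols))
    u[0] = 1.0
    KiS = np.linalg.solve(K, S)
    return KiS @ np.linalg.solve(S.T @ KiS, u)

def cost(K, x):
    return float(x @ K @ x)

def branch_and_bound(K, p):
    m = K.shape[0]
    best_J, best_zeros, evals = np.inf, None, 0

    def visit(start, zeros):
        nonlocal best_J, best_zeros, evals
        # Relaxation at this node: only the zeros fixed so far are enforced,
        # so J lower-bounds every completion of the pattern.
        J = cost(K, solve_pattern(K, zeros))
        evals += 1
        if len(zeros) == p:                    # leaf: a possible (fully sparse) solution
            if J < best_J:
                best_J, best_zeros = J, zeros
            return
        if J > best_J:                         # (26): bound beats incumbent -> exclude sub-tree
            return
        for k in range(start, m - p + len(zeros) + 1):
            visit(k + 1, zeros + [k])

    visit(1, [])
    return best_J, best_zeros, evals

best_J, best_zeros, evals = branch_and_bound(K, p)

# Round-robin (exhaustive) search over all C(n+1, p) patterns of (18).
brute_J = min(cost(K, solve_pattern(K, list(c)))
              for c in itertools.combinations(range(1, n + 2), p))
```

Both searches return the same minimum cost; the pruned search visits at most as many nodes as the full tree, which is the mechanism behind the reduction reported above.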
Fig. 4. (a) Magnitude responses in dB, (b) group delays, and (c) magnitude of complex errors with p = 10. [plots omitted]

V. CONCLUSION

We have proposed a design method for low-delay FIR filters with sparse coefficients. We have considered the combinatorial optimization of the selection of sparse coefficients. We employ the branch and bound method to reduce the search space via the relaxation of the sub-problem. The feature of the proposed method is that the optimality of the sparse filters is guaranteed in the least squares sense. Also, we propose a selection method of the initial cost for high-speed search. We confirm that the number of searches is significantly reduced in the design example.

ACKNOWLEDGEMENT

This work was partially supported by a research grant from The Furukawa Technology Foundation and by Scientific Research (B) (Grant No. 16H02921) from the Japan Society for the Promotion of Science (JSPS).

1 MATLAB is a trademark of The MathWorks, Inc.
REFERENCES
[1] V. K. Ingle and J. G. Proakis, Digital Signal Processing Using MATLAB, Cengage Learning, Jan. 2011.
[2] A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing,
Prentice Hall, Aug. 2009.
[3] T. Hinamoto and W. -S. Lu, Digital Filter Design and Realization, River Publishers, June 2017.
[4] C. Rusu and B. Dumitrescu, “Iterative reweighted l1 design of sparse
FIR filters,” Signal Process., vol.92, no.4, pp.905-911, Apr. 2012.
[5] A. Jiang, H. K. Kwan, and Y. Zhu, “Peak-error-constrained sparse FIR
filter design using iterative SOCP,” IEEE Trans. Signal Process., vol.60,
no.8, pp. 4035-4044, Aug. 2012.
[6] A. Jiang and H. K. Kwan, “WLS design of sparse FIR digital filters,”
IEEE Trans. Circuits Syst. I, vol.60, no.1, pp. 125-135, Jan. 2013.
[7] D. Wei, C. K. Sestok, and A. V. Oppenheim, “Sparse filter design under a quadratic constraint: low-complexity algorithms,” IEEE Trans. Signal Process., vol.61, no.4, pp. 857-870, Feb. 2013.
[8] D. Wei and A. V. Oppenheim, “A branch-and-bound algorithm for quadratically-constrained sparse filter design,” IEEE Trans. Signal Process., vol.61, no.4, pp. 1006-1018, Feb. 2013.
[9] A. Jiang, H. K. Kwan, Y. Zhu, X. Liu, N. Xu, and Y. Tang, “Design of sparse FIR filters with joint optimization of sparsity and filter order,” IEEE Trans. Circuits Syst. I, vol.62, no.1, pp. 195-204, Jan. 2015.
[10] W. -S. Lu and T. Hinamoto, “A unified approach to the design of interpolated and frequency-response-masking FIR filters,” IEEE Trans. Circuits Syst. I, vol.63, no.12, pp. 2257-2266, Dec. 2016.
[11] C. T. Mullis and R. A. Roberts, “The use of second-order information in
the approximation of discrete-time linear systems,” IEEE Trans. Acoust.,
Speech, Signal Process., vol. ASSP-24, no. 3, pp. 226-238, Jun. 1976.
[12] A. T. Chottera and G. A. Jullien, “A linear programming approach to
recursive digital filter design with linear phase”, IEEE Trans. Circuits
Syst., Vol.CAS-29, no. 3, pp.139-149, Mar. 1982.
[13] W. -S. Lu, S. -C. Pei, and C. -C. Tseng, “A weighted least-squares
method for the design of stable 1-D and 2-D IIR digital filters,” IEEE
Trans. Signal Process., vol.46, no.1, pp.1-10, Jan. 1998.
[14] M. C. Lang, “Least-squares design of IIR filters with prescribed
magnitude and phase responses and a pole radius constraint,” IEEE
Trans. on Signal Process., vol.48, no.11, pp.3109-3121, Nov. 2000.
[15] W. -S. Lu and T. Hinamoto, “Optimal design of IIR digital filters
with robust stability using conic quadratic-programming updates,” IEEE
Trans. Signal Process., vol.51, no.6, pp.1581-1592, June 2003.
[16] A. Jiang and H. K. Kwan, “Minimax design of IIR digital filters using
iterative SOCP,” IEEE Trans. Circuits Syst. I, vol.57, no.6, pp.1326-
1337, Jun. 2010.
[17] M. Nakamoto and S. Ohno, “Design of multi-band digital filters and
full-band digital differentiators without frequency sampling and iterative
optimization,” IEEE Trans. Ind. Electron., vol.61, no.9, pp.4857-4866,
Sep. 2014.