Performance Evaluation of Evolutionary Algorithms For Digital Filter Design
I. INTRODUCTION
Signals arise in almost every field of science and engineering. Two general classes of signals can be identified, namely, continuous-time and discrete-time signals. A discrete-time signal is one that is defined at discrete instants of time. The numerical manipulation of signals and data in discrete time is called digital signal processing (DSP). Almost any DSP algorithm or processor can reasonably be described as a filter. Filtering is a process by which the frequency spectrum of a signal can be modified, reshaped, or manipulated according to some desired specification. Digital filters can be broadly classified into two groups: recursive and nonrecursive. The response of nonrecursive (FIR) filters depends only on present and previous values of the input signal, whereas the response of recursive (IIR) filters depends not only on the input data but also on one or more previous output values. The main advantage of a digital IIR filter is that it can provide much better performance than an FIR filter with the same number of coefficients [1]. Design of a digital filter is the process of synthesizing and implementing a filter network so that a set of prescribed excitations results in a set of desired responses. Like most other engineering problems, the design of digital filters involves multiple, often conflicting, design criteria and specifications, and finding an optimum design is therefore not a simple task. Analytic or simple iterative methods usually lead to sub-optimal designs. Consequently, there is a need for optimization-based methods that can design digital filters satisfying prescribed specifications.
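In standard textbook form, the two classes correspond to the difference equations

$$\text{FIR:}\quad y(n) = \sum_{k=0}^{N} b_k\, x(n-k)$$

$$\text{IIR:}\quad y(n) = \sum_{k=0}^{N} b_k\, x(n-k) \;-\; \sum_{k=1}^{M} a_k\, y(n-k)$$

where the feedback terms $a_k\,y(n-k)$ are what make the IIR response depend on previous output values.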
However, optimization problems for the design of digital filters are often complex, highly nonlinear, and multimodal in nature; they usually exhibit many local minima. Ideally, the optimization method should lead to the global optimum of the objective function with a minimum amount of computation. Classical optimization methods are generally fast and efficient and have been found to work reasonably well for the design of digital filters. These methods are very good at locating local minima, but unfortunately they are not designed to discard inferior local solutions in favor of better ones; therefore, they tend to locate minima in the locale of the initialization point [2, 3]. Genetic algorithms (GAs) have received considerable attention for their potential as a novel optimization technique for complex problems, especially problems with non-differentiable solution spaces. While these algorithms tend to require a large amount of computation, they also offer certain unique features with respect to classical gradient-based algorithms. For example, having located local suboptimal solutions, GAs can discard them in favor of more promising subsequent local solutions and, therefore, in the long run they are more likely to obtain better solutions to multimodal problems. GAs are also very flexible, non-problem-specific, and robust. Furthermore, owing to their heuristic nature, arbitrary constraints can be imposed on the objective function without increasing the mathematical complexity of the problem [4]. Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the social behavior of bird flocking or fish schooling. The algorithm starts with a random initialization of a swarm of individuals, referred to as particles, within the problem search space. It then endeavors to find a globally optimal solution by adjusting the trajectory of each individual toward its own best location visited so far and toward the best position of the entire swarm at each optimization step. The attractions of the PSO method include its simplicity of implementation, its ability to converge quickly to a reasonably good solution, and its robustness against local minima [5]. Therefore, some researchers have attempted to develop design methods based on modern global optimization algorithms.
The working process of a basic PSO is illustrated in the steps listed below; a minimal implementation sketch follows the list.

Step 1: Create a population of agents (called particles) uniformly distributed over the search space X.
Step 2: Evaluate each particle's position according to the objective function.
Step 3: If a particle's current position is better than its previous best position, update it.
Step 4: Determine the best particle (according to the particles' previous best positions).
Step 5: Update the particles' velocities according to

$$\mathbf{v}_i(t+1) = w\,\mathbf{v}_i(t) + c_1 R_1\,[\mathbf{p}_i - \mathbf{x}_i(t)] + c_2 R_2\,[\mathbf{g} - \mathbf{x}_i(t)]$$

where $\mathbf{x}_i$ and $\mathbf{v}_i$ are the position and velocity of particle i, respectively; $\mathbf{p}_i$ and $\mathbf{g}$ are the positions with the best objective value found so far by particle i and by the entire population, respectively; w is a parameter controlling the flying dynamics; $R_1$ and $R_2$ are random variables in the range [0, 1]; and $c_1$ and $c_2$ are factors controlling the relative weighting of the corresponding terms.
Step 6: Move the particles to their new positions according to

$$\mathbf{x}_i(t+1) = \mathbf{x}_i(t) + \mathbf{v}_i(t+1)$$

After updating, $\mathbf{x}_i$ should be checked and limited to the allowed range.
Step 7: Go to Step 2 until the stopping criteria are satisfied.

PSO is a metaheuristic: it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as PSO do not guarantee that an optimal solution is ever found. More specifically, PSO does not use the gradient of the problem being optimized, which means it does not require the optimization problem to be differentiable, as is required by classical methods such as gradient descent and quasi-Newton methods. PSO can therefore also be used on optimization problems that are partially irregular, noisy, time-varying, and so on.
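The following is a minimal sketch of these steps in Python; the paper gives no implementation, and NumPy, the parameter values (w = 0.7, c1 = c2 = 1.5, 30 particles), and the Rastrigin test function are illustrative assumptions.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over the box `bounds` (list of (low, high) pairs)."""
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)

    # Step 1: particles uniformly distributed over the search space X
    x = lo + np.random.rand(n_particles, dim) * (hi - lo)
    v = np.zeros_like(x)

    # Steps 2-4: initial evaluation, personal bests, global best
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()

    for _ in range(n_iters):
        r1 = np.random.rand(n_particles, dim)   # R1, R2 ~ U[0, 1]
        r2 = np.random.rand(n_particles, dim)
        # Step 5: velocity update
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        # Step 6: move particles and limit them to the allowed range
        x = np.clip(x + v, lo, hi)
        # Steps 2-3: re-evaluate and update personal bests
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        # Step 4: update the global best
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Usage on a multimodal test function (2-D Rastrigin)
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
best, val = pso(rastrigin, [(-5.12, 5.12), (-5.12, 5.12)])
print(best, val)
```

Clipping positions in Step 6 is only one way to keep particles in the allowed range; velocity clamping or reflecting boundaries are common alternatives.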
5. Discretize and eliminate values that are not free to vary.
6. Check the nearest integer values for a better realizable filter (a sketch of this quantization check is given below).
7. Plot the frequency response after optimization.
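As an illustration of steps 5 and 6, the sketch below rounds a set of coefficients to a fixed grid and compares the realized magnitude response with the unquantized one; the coefficient values, the 8-bit grid, and the use of SciPy's freqz are assumptions for illustration, not details from the paper.

```python
import numpy as np
from scipy.signal import freqz

def quantize(coeffs, q_bits=8):
    """Round coefficients to the nearest value on a 2**-q_bits grid,
    mimicking the 'nearest realizable value' check of step 6."""
    scale = 2.0 ** q_bits
    return np.round(np.asarray(coeffs) * scale) / scale

b = [0.2929, 0.5858, 0.2929]   # illustrative optimized numerator (assumption)
a = [1.0, 0.0, 0.1716]         # illustrative optimized denominator (assumption)
bq, aq = quantize(b), quantize(a)

_, h = freqz(b, a, worN=256)     # response before quantization
_, hq = freqz(bq, aq, worN=256)  # response after quantization
print("max magnitude deviation:", np.max(np.abs(np.abs(h) - np.abs(hq))))
```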
The parameters considered for the optimal filter design are the normalized passband and stopband edge frequencies, the passband and stopband ripples, the stopband attenuation, and the transition width. These parameters are mainly determined by the filter coefficients. In any filter design problem, some of these parameters are fixed while others need to be determined. Hence, the design of these filters can be considered as an optimization problem over the cost function J(w):

$$\min_{\mathbf{w}}\; J(\mathbf{w}) \qquad (4)$$

where $\mathbf{w} = [a_0, a_1, \ldots, b_0, b_1, \ldots]^{T}$ is the coefficient vector of the IIR filter. The aim is to minimize the cost function J(w) for the IIR filter by adjusting w. The cost function is usually expressed as the time-averaged cost function defined by (5):

$$J(\mathbf{w}) = \frac{1}{N}\sum_{k=1}^{N}\Big[\,|H_{\mathrm{ideal}}(k)| - |H_{\mathrm{actual}}(k)|\,\Big]^{2} \qquad (5)$$

where $|H_{\mathrm{ideal}}(k)|$ and $|H_{\mathrm{actual}}(k)|$ are the magnitude responses of the ideal and the actual filter at the k-th frequency sample, and N is the order of the filter. The evolutionary approaches are then applied to bring the actual filter response as close as possible to the ideal response, and the filter coefficients are obtained.
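A possible implementation of the cost of Eq. (5) is sketched below; the packing of w as numerator followed by denominator coefficients, the 128-point grid, and the brick-wall ideal low-pass response are illustrative assumptions rather than details given in the paper.

```python
import numpy as np
from scipy.signal import freqz

def cost(w_vec, h_ideal, grid):
    """Time-averaged cost J(w): mean squared difference between the
    ideal and actual magnitude responses over a frequency grid."""
    half = len(w_vec) // 2
    b, a = w_vec[:half], w_vec[half:]  # numerator / denominator of H(z) (assumed packing)
    _, h = freqz(b, a, worN=grid)      # actual frequency response on the grid
    return np.mean((h_ideal - np.abs(h)) ** 2)

# Illustrative ideal brick-wall low-pass response with edge at 0.4*pi rad/sample
grid = np.linspace(0, np.pi, 128)
h_ideal = (grid <= 0.4 * np.pi).astype(float)

w0 = np.array([0.25, 0.25, 0.25, 0.25,    # b0..b3 (assumed values)
               1.00, -0.40, 0.10, 0.00])  # a0..a3 (assumed values)
print(cost(w0, h_ideal, grid))
```

An evolutionary optimizer such as the PSO sketch above can then be run with this cost as the objective and w as the particle position.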
Figure 2: Evaluation of the numerator parameters of the LPF filter for the algorithms: (a) PSO, (b) and (c) GA.
Figure 3: Improvement of the average best solution by (a) PSO and (b) GA.
Figure 4: Histogram of the results obtained for the fitness function by (a) PSO and (b) GA.
Figure 5: Magnitude and phase response of (a) the desired filter and the filters obtained using (b) PSO and (c) GA.
Figure 6: Pole-zero plot of (a) the desired filter and the filters obtained using (b) PSO and (c) GA.
Table 1: Comparison of the desired filter coefficients (a0, a1, a2, a3, b0, b1, b2, b3) with those obtained by PSO and GA.

From Table 1 it is clear that the coefficients obtained by PSO are almost identical to the desired coefficients, whereas those obtained by GA deviate more.
VI. CONCLUSION
The PSO algorithm is a relatively recent heuristic approach with three main advantages: it can find the true global minimum of a multimodal search space, it converges quickly, and it requires only a few control parameters. In this work, both the PSO and GA algorithms were applied to digital filter design. From the simulations, it was observed that the performance of the PSO algorithm, in terms of convergence speed and computation time, is better than that of the GA.
VII. APPENDIX
Example 1: Design a digital low-pass IIR filter with the following specifications: a passband ripple of 4 dB, a stopband attenuation of 30 dB, band edges of 400 Hz/800 Hz, and a sampling frequency of 2000 Hz.
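For comparison, the same specification can also be met with a classical (non-evolutionary) design routine; the sketch below uses SciPy's iirdesign with a Butterworth prototype as an assumed baseline, not the evolutionary method evaluated in the paper.

```python
import numpy as np
from scipy.signal import iirdesign, freqz

# Example 1 specification: fs = 2000 Hz, band edges 400/800 Hz,
# passband ripple 4 dB, stopband attenuation 30 dB
fs = 2000.0
wp, ws = 400 / (fs / 2), 800 / (fs / 2)   # edges normalized to Nyquist
b, a = iirdesign(wp, ws, gpass=4, gstop=30, ftype='butter')

w, h = freqz(b, a, worN=512)
print("filter order:", max(len(b), len(a)) - 1)
k = np.argmin(np.abs(w - ws * np.pi))     # grid point nearest the stopband edge
print("attenuation at 800 Hz: %.1f dB" % (20 * np.log10(np.abs(h[k]))))
```

Coefficients designed this way could serve as the desired filter against which the evolutionary results are compared.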
VIII. REFERENCES

i. Andreas Antoniou, Digital Filters: Analysis, Design and Applications, Tata McGraw-Hill, New Delhi, 2005.
ii. Vinay K. Ingle and John G. Proakis, Digital Signal Processing Using MATLAB, Thomson Books, New Delhi, 2004.
iii. A. P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons Ltd., 2005.
iv. J. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, 1975.
v. J. Kennedy and R. C. Eberhart, "Particle Swarm Optimization," Proceedings of the IEEE International Conference on Neural Networks, vol. IV, pp. 1942-1948, 1995.
vi. Joelle Skaf and Stephen P. Boyd, "Filter Design with Low Complexity Coefficients," IEEE Transactions on Signal Processing, vol. 56, no. 7, pp. 3162-3170, July 2008.
vii. Richard J. Vaccaro and Brian F. Harrison, "Optimal Matrix-Filter Design," IEEE Transactions on Signal Processing, vol. 44, no. 3, pp. 705-710, March 1996.
viii. Xi Zhang and Hiroshi Iwakura, "Design of IIR Digital Filters Based on Eigenvalue Problem," IEEE Transactions on Signal Processing, vol. 44, no. 6, June 1996.
ix. Fabrizio Argenti and Enrico Del Re, "Design of IIR Eigenfilters in the Frequency Domain," IEEE Transactions on Signal Processing, vol. 46, no. 6, pp. 1694-1700, June 1998.
x. Xin Yao, Yong Liu, and Guangming Lin, "Evolutionary Programming Made Faster," IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 83-102, July 1999.
xi. T. William and W. C. Miller, "Genetic Algorithms for the Design of Digital Filters," Proceedings of IEEE ICSP'04, pp. 9-12, 2004.
xii. Revna Acar Varul, "Performance Evaluation of Evolutionary Algorithms for Optimal Filter Design," IEEE Transactions on Evolutionary Computation, vol. 16, no. 1, February 2012.