
In signal processing, the Wiener filter is a filter proposed by Norbert Wiener during the 1940s and published in 1949.

Its purpose is to reduce the amount of noise present in a signal by comparison with an estimation of the desired noiseless signal. The discrete-time equivalent of Wiener's work was derived independently by Kolmogorov and published in 1941. Hence the theory is often called the Wiener-Kolmogorov filtering theory. The Wiener-Kolmogorov filter was the first statistically designed filter to be proposed and subsequently gave rise to many others, including the famous Kalman filter. A Wiener filter is not an adaptive filter because the theory behind this filter assumes that the inputs are stationary.[2]

Contents

1 Description
2 Wiener filter problem setup
3 Wiener filter solutions
   3.1 Noncausal solution
   3.2 Causal solution
4 Finite Impulse Response Wiener filter for discrete series
   4.1 Relationship to the least mean squares filter
5 See also
6 References
7 External links

Description
The goal of the Wiener filter is to filter out noise that has corrupted a signal. It is based on a statistical approach. Typical filters are designed for a desired frequency response. However, the design of the Wiener filter takes a different approach. One is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks the linear time-invariant filter whose output would come as close to the original signal as possible. Wiener filters are characterized by the following:

1. Assumption: signal and (additive) noise are stationary linear stochastic processes with known spectral characteristics or known autocorrelation and cross-correlation
2. Requirement: the filter must be physically realizable/causal (this requirement can be dropped, resulting in a non-causal solution)
3. Performance criterion: minimum mean-square error (MMSE)

This filter is frequently used in the process of deconvolution; for this application, see Wiener deconvolution.

Wiener filter problem setup

The input to the Wiener filter is assumed to be a signal, $s(t)$, corrupted by additive noise, $n(t)$. The output, $\hat{s}(t)$, is calculated by means of a filter, $g(t)$, using the following convolution:[3]

$\hat{s}(t) = g(t) * (s(t) + n(t))$

where

$s(t)$ is the original signal (not exactly known; to be estimated)
$n(t)$ is the noise
$\hat{s}(t)$ is the estimated signal (the intention is to equal $s(t + \alpha)$)
$g(t)$ is the Wiener filter's impulse response

The error is defined as

$e(t) = s(t + \alpha) - \hat{s}(t)$

where

$\alpha$ is the delay of the Wiener filter (since it is causal)

In other words, the error is the difference between the estimated signal and the true signal shifted by $\alpha$. The squared error is

$e^2(t) = s^2(t + \alpha) - 2 s(t + \alpha)\,\hat{s}(t) + \hat{s}^2(t)$

where

$s(t + \alpha)$ is the desired output of the filter
$e(t)$ is the error

Depending on the value of $\alpha$, the problem can be described as follows:

If $\alpha > 0$ then the problem is that of prediction (error is reduced when $\hat{s}(t)$ is similar to a later value of s)
If $\alpha = 0$ then the problem is that of filtering (error is reduced when $\hat{s}(t)$ is similar to $s(t)$)
If $\alpha < 0$ then the problem is that of smoothing (error is reduced when $\hat{s}(t)$ is similar to an earlier value of s)

Writing $\hat{s}(t)$ as a convolution integral:

$\hat{s}(t) = \int_{-\infty}^{\infty} g(\tau)\,[s(t - \tau) + n(t - \tau)]\,d\tau$

Taking the expected value of the squared error results in

$E(e^2) = R_s(0) - 2\int_{-\infty}^{\infty} g(\tau)\,R_{xs}(\tau + \alpha)\,d\tau + \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(\tau)\,g(\theta)\,R_x(\tau - \theta)\,d\tau\,d\theta$

where

$x(t) = s(t) + n(t)$ is the observed signal
$R_s$ is the autocorrelation function of $s(t)$
$R_x$ is the autocorrelation function of $x(t)$
$R_{xs}$ is the cross-correlation function of $x(t)$ and $s(t)$

If the signal $s(t)$ and the noise $n(t)$ are uncorrelated (i.e., the cross-correlation $R_{sn}$ is zero), then this means that

$R_{xs} = R_s$
$R_x = R_s + R_n$

For many applications, the assumption of uncorrelated signal and noise is reasonable. The goal is to minimize $E(e^2)$, the expected value of the squared error, by finding the optimal $g(\tau)$, the Wiener filter impulse response function. The minimum may be found by calculating the first order incremental change in the least square error resulting from an incremental change in $g(\cdot)$ for positive time. This is

$\delta E(e^2) = -2\int_{0}^{\infty} \delta g(\tau)\left[R_{xs}(\tau + \alpha) - \int_{0}^{\infty} g(\theta)\,R_x(\tau - \theta)\,d\theta\right]d\tau$

For a minimum, this must vanish identically for all $\delta g(\tau)$, which leads to the Wiener-Hopf equation

$R_{xs}(\tau + \alpha) = \int_{0}^{\infty} g(\theta)\,R_x(\tau - \theta)\,d\theta, \qquad \tau \geq 0$

This is the fundamental equation of the Wiener theory. The right-hand side resembles a convolution but is only over the semi-infinite range. The equation can be solved by a special technique due to Wiener and Hopf.

Wiener filter solutions

The Wiener filter problem has solutions for three possible cases: one where a noncausal filter is acceptable (requiring an infinite amount of both past and future data), the case where a causal filter is desired (using an infinite amount of past data), and the finite impulse response (FIR) case where a finite amount of past data is used. The first case is simple to solve but is not suited for real-time applications. Wiener's main accomplishment was solving the case where the causality requirement is in effect, and in an appendix of Wiener's book Levinson gave the FIR solution.

Noncausal solution

$G(s) = \dfrac{S_{x,s}(s)}{S_x(s)}\, e^{\alpha s}$

Provided that $g(t)$ is optimal, then the minimum mean-square error equation reduces to

$E(e^2) = R_s(0) - \int_{-\infty}^{\infty} g(\tau)\,R_{xs}(\tau + \alpha)\,d\tau$

and the solution $g(t)$ is the inverse two-sided Laplace transform of $S_{x,s}(s)\, e^{\alpha s} / S_x(s)$.
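It may help intuition to note a standard special case (not spelled out above): when the signal and noise are uncorrelated and no delay is used ($\alpha = 0$), so that $S_{x,s} = S_s$ and $S_x = S_s + S_n$, the noncausal filter becomes the familiar spectral ratio

$G(f) = \dfrac{S_s(f)}{S_s(f) + S_n(f)}$

i.e., the filter passes frequencies where the signal power dominates the noise power and attenuates those where the noise dominates.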

Causal solution

$G(s) = \dfrac{H(s)}{S_x^{+}(s)}$

where

$H(s)$ consists of the causal part of $\dfrac{S_{x,s}(s)\, e^{\alpha s}}{S_x^{-}(s)}$ (that is, that part of this fraction having a positive time solution under the inverse Laplace transform)
$S_x^{+}(s)$ is the causal component of $S_x(s)$ (i.e., the inverse Laplace transform of $S_x^{+}(s)$ is non-zero only for $t \geq 0$)
$S_x^{-}(s)$ is the anti-causal component of $S_x(s)$ (i.e., the inverse Laplace transform of $S_x^{-}(s)$ is non-zero only for $t < 0$)

This general formula is complicated and deserves a more detailed explanation. To write down the solution in a specific case, one should follow these steps:[4]

1. Start with the spectrum $S_x(s)$ in rational form and factor it into causal and anti-causal components:

$S_x(s) = S_x^{+}(s)\, S_x^{-}(s)$

where $S_x^{+}(s)$ contains all the zeros and poles in the left half-plane (LHP) and $S_x^{-}(s)$ contains the zeros and poles in the right half-plane (RHP). This is called the Wiener-Hopf factorization.

2. Divide $S_{x,s}(s)\, e^{\alpha s}$ by $S_x^{-}(s)$ and write out the result as a partial fraction expansion.

3. Select only those terms in this expansion having poles in the LHP. Call these terms $H(s)$.

4. Divide $H(s)$ by $S_x^{+}(s)$. The result is the desired filter transfer function $G(s)$.
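As an illustration of these steps (a standard textbook-style example, not taken from the text above), assume the signal has autocorrelation $R_s(\tau) = e^{-|\tau|}$, so $S_s(s) = \frac{2}{1 - s^2}$, and that it is observed in uncorrelated unit white noise ($S_n = 1$) with no delay ($\alpha = 0$). Then

$S_x(s) = \dfrac{2}{1 - s^2} + 1 = \dfrac{3 - s^2}{1 - s^2} = \underbrace{\dfrac{s + \sqrt{3}}{s + 1}}_{S_x^{+}(s)}\;\underbrace{\dfrac{\sqrt{3} - s}{1 - s}}_{S_x^{-}(s)}$

$\dfrac{S_{x,s}(s)}{S_x^{-}(s)} = \dfrac{2}{(1 + s)(\sqrt{3} - s)} = \dfrac{\sqrt{3} - 1}{1 + s} + \dfrac{\sqrt{3} - 1}{\sqrt{3} - s}$

Keeping only the LHP term gives $H(s) = \frac{\sqrt{3} - 1}{s + 1}$, so

$G(s) = \dfrac{H(s)}{S_x^{+}(s)} = \dfrac{\sqrt{3} - 1}{s + \sqrt{3}}, \qquad g(t) = (\sqrt{3} - 1)\, e^{-\sqrt{3}\, t}, \quad t \geq 0$

which is indeed a causal (one-sided) impulse response.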

Finite Impulse Response Wiener filter for discrete series

Block diagram view of the FIR Wiener filter for discrete series. An input signal w[n] is convolved with the Wiener filter g[n] and the result is compared to a reference signal s[n] to obtain the filtering error e[n].

The causal finite impulse response (FIR) Wiener filter, instead of using some given data matrix X and output vector Y, finds optimal tap weights by using the statistics of the input and output signals. It populates the input matrix X with estimates of the auto-correlation of the input signal (T) and populates the output vector Y with estimates of the cross-correlation between the output and input signals (V).

In order to derive the coefficients of the Wiener filter, consider the signal w[n] being fed to a Wiener filter of order N with coefficients $a_i$, $i = 0, \ldots, N$. The output of the filter is denoted x[n], which is given by the expression

$x[n] = \sum_{i=0}^{N} a_i\, w[n - i]$

The residual error is denoted e[n] and is defined as e[n] = x[n] - s[n] (see the corresponding block diagram). The Wiener filter is designed so as to minimize the mean square error (MMSE criterion), which can be stated concisely as follows:

$\min_{a_i}\; E\{e^2[n]\}$

where $E\{\cdot\}$ denotes the expectation operator. In the general case, the coefficients $a_i$ may be complex and may be derived for the case where w[n] and s[n] are complex as well. With a complex signal, the matrix to be solved is a Hermitian Toeplitz matrix, rather than a symmetric Toeplitz matrix. For simplicity, the following considers only the case where all these quantities are real. The mean square error (MSE) may be rewritten as:

$E\{e^2[n]\} = E\{(x[n] - s[n])^2\} = E\{x^2[n]\} + E\{s^2[n]\} - 2E\{x[n]\, s[n]\}$

To find the coefficient vector which minimizes the expression above, calculate its derivative with respect to each $a_i$:

$\dfrac{\partial}{\partial a_i} E\{e^2[n]\} = 2E\{x[n]\, w[n - i]\} - 2E\{s[n]\, w[n - i]\}, \qquad i = 0, \ldots, N$

Assuming that w[n] and s[n] are each stationary and jointly stationary, the sequences $R_w[m]$ and $R_{ws}[m]$, known respectively as the autocorrelation of w[n] and the cross-correlation between w[n] and s[n], can be defined as follows:

$R_w[m] = E\{w[n]\, w[n + m]\}$
$R_{ws}[m] = E\{w[n]\, s[n + m]\}$

The derivative of the MSE may therefore be rewritten as (notice that $R_{ws}[-i] = R_{sw}[i]$)

$\dfrac{\partial}{\partial a_i} E\{e^2[n]\} = 2\left(\sum_{j=0}^{N} R_w[j - i]\, a_j\right) - 2 R_{ws}[i], \qquad i = 0, \ldots, N$

Letting the derivative be equal to zero results in

$\sum_{j=0}^{N} R_w[j - i]\, a_j = R_{ws}[i], \qquad i = 0, \ldots, N$

which can be rewritten in matrix form

$\mathbf{T}\,\mathbf{a} = \mathbf{v}$

These equations are known as the Wiener-Hopf equations. The matrix T appearing in the equation is a symmetric Toeplitz matrix. These matrices are known to be positive definite and therefore non-singular, yielding a unique solution to the determination of the Wiener filter coefficient vector $\mathbf{a}$. Furthermore, there exists an efficient algorithm to solve such Wiener-Hopf equations, known as the Levinson-Durbin algorithm, so an explicit inversion of $\mathbf{T}$ is not required.
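As a concrete illustration of how these equations might be used in practice, the following sketch estimates the correlations from data and solves T a = v directly. The signals, noise level and filter order below are invented for the example; in a real application the statistics would come from a model or from training data.

% Illustrative FIR Wiener filter via the Wiener-Hopf equations (assumed example data)
n = 1:4000;
s = sin(0.05*n);                       % desired signal (assumed for the example)
w = s + 0.5*randn(size(s));            % observed signal = signal + white noise
N = 20;                                % filter order (assumed)
Rw  = zeros(N+1,1);                    % autocorrelation estimates R_w[m]
Rws = zeros(N+1,1);                    % cross-correlation estimates R_ws[m]
for m = 0:N
    Rw(m+1)  = mean(w(1:end-m) .* w(1+m:end));
    Rws(m+1) = mean(w(1:end-m) .* s(1+m:end));
end
T = toeplitz(Rw);                      % symmetric Toeplitz matrix T
v = Rws;                               % right-hand side vector v
a = T \ v;                             % filter taps (Levinson-Durbin could be used instead)
xhat = filter(a, 1, w);                % filtered estimate of s[n]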

Relationship to the least mean squares filter

The realization of the causal Wiener filter looks a lot like the solution to the least squares estimate, except in the signal processing domain. The least squares solution, for input matrix $\mathbf{X}$ and output vector $\mathbf{y}$, is

$\boldsymbol{\hat\beta} = (\mathbf{X}^{\mathsf T}\mathbf{X})^{-1}\mathbf{X}^{\mathsf T}\mathbf{y}$

The FIR Wiener filter is related to the least mean squares filter, but minimizing the error criterion of the latter does not rely on cross-correlations or auto-correlations. Its solution converges to the Wiener filter solution.

In mathematics, Wiener deconvolution is an application of the Wiener filter to the noise problems inherent in deconvolution. It works in the frequency domain, attempting to minimize the impact of deconvolved noise at frequencies which have a poor signal-to-noise ratio. The Wiener deconvolution method has widespread use in image deconvolution applications, as the frequency spectrum of most visual images is fairly well behaved and may be estimated easily. Wiener deconvolution is named after Norbert Wiener.
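For orientation, the standard form of the deconvolution filter (the notation here is supplied, not given in the text above) is: if the observation is $Y(f) = H(f)\,S(f)$ plus noise, the Wiener deconvolution filter applied to $Y(f)$ is

$G(f) = \dfrac{H^{*}(f)\, S_s(f)}{|H(f)|^{2}\, S_s(f) + S_n(f)}$

where $H(f)$ is the known blurring transfer function, $S_s(f)$ the power spectrum of the original signal and $S_n(f)$ that of the noise; when the noise vanishes this reduces to the plain inverse filter $1/H(f)$.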

WIENER FILTERING: Finding the filter that shapes one wavelet into another is not an exact process, but the filter which produces the closest result can be obtained by a mathematical technique known as least squares. The Wiener filter is that which best (in a least squares sense) shapes a given wavelet to a desired wavelet. Applications include shaping a source wavelet to its minimum-phase equivalent, shaping a wavelet within the data to a spike (to improve resolution), or shaping a time series with multiples to one without multiples (predictive deconvolution). Without going into the mathematics, it turns out that the filter is found by dividing the cross-correlation of the input with the desired output by the auto-correlation of the input. This solution sets up a series of simultaneous equations which are solved rapidly in the computer by matrix inversion using the Levinson algorithm. A certain percentage of noise (called white noise or white light) is added to stabilise the inversion.
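A minimal sketch of such a shaping filter follows; the wavelets, filter length and 1% white-noise level are invented for illustration, not values given in the text.

% Least-squares (Wiener) shaping filter with white-noise stabilisation
in  = [1.0 -0.5 0.2]';            % input wavelet (assumed values)
des = [0 1 0 0 0 0 0]';           % desired output: a delayed spike (assumed)
L   = 5;                          % shaping filter length (assumed)
M   = numel(in) + L - 1;          % length of the convolution output
C   = zeros(M, L);                % convolution matrix: C*f = conv(in, f)
for k = 1:L
    C(k:k+numel(in)-1, k) = in;
end
% C'*C is the Toeplitz autocorrelation matrix of the input and C'*des the
% cross-correlation of the input with the desired output (the quantities
% referred to in the text); mu adds the stabilising "white light".
mu = 0.01 * (C(:,1)'*C(:,1));     % 1% of the zero-lag autocorrelation (assumed level)
f  = (C'*C + mu*eye(L)) \ (C'*des);
shaped = conv(in, f);             % approximates the desired wavelet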

Description

The two-step noise reduction (TSNR) technique removes the annoying reverberation effect while maintaining the benefits of the decision-directed approach. However, classic short-time noise reduction techniques, including TSNR, introduce harmonic distortion in the enhanced speech. To overcome this problem, a method called harmonic regeneration noise reduction (HRNR) is implemented in order to refine the a priori SNR used to compute a spectral gain able to preserve the speech harmonics, as proposed by Plapous et al. ("Improved Signal-to-Noise Ratio Estimation for Speech Enhancement", IEEE Transactions on ASLP, Vol. 14, Issue 6, pp. 2098-2108, Nov. 2006).

MATLAB release: MATLAB 7.6 (R2008a)
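TSNR and HRNR build on the classic decision-directed estimate of the a priori SNR. A bare-bones sketch of that underlying rule for a single frequency bin is shown below; the smoothing constant, noise estimate and dummy data are assumptions for illustration, not the published TSNR/HRNR algorithm itself.

% Decision-directed a priori SNR estimate and Wiener spectral gain for one bin
beta     = 0.98;                  % smoothing constant (assumed)
noisePow = 0.1;                   % estimated noise power in this bin (assumed)
Y        = abs(randn(1,100));     % magnitude of the noisy bin over 100 frames (dummy data)
Shat     = zeros(size(Y));        % enhanced magnitude
for m = 2:numel(Y)
    snrPost = Y(m)^2 / noisePow;                                   % a posteriori SNR
    snrPrio = beta*Shat(m-1)^2/noisePow + (1-beta)*max(snrPost-1, 0);
    G       = snrPrio / (1 + snrPrio);                             % Wiener gain
    Shat(m) = G * Y(m);
end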

Coding of the FFT Wiener Filter

The source code for the FFT Wiener Filter was created in Matlab, as this is a strong mathematical programming language with fast built-in Fourier functions which increase the efficiency of the program. My experience in Matlab has been fairly limited in the past, and hence studying Wiener filtering in Matlab gives me an opportunity to learn a new language. The basic premise of the code below is taking the Fourier transform of the image and multiplying it by the transform of the filter function (h). The process eliminates the low-frequency signals in the Fourier domain, which are the noise, and emphasizes the high-frequency signals, which are the line. The output image is then created by taking the inverse Fourier transform of the product. The filter multiplies each pixel in the Fourier image by this filter to give the final filtered image. The original model for the FFT Wiener Filter code was taken from a website, http://www.owlnet.rice.edu/~elec539/Projects99/BACH/proj2/wiener.html, although many alterations have been made to the original Matlab functions and the program restructured. The original code was about twice as long and did not operate as efficiently or as effectively as it does now. Originally the code could only just make out a line at an intensity of 1.5 above the noise mean; now it clearly shows a line at an intensity of only 0.5 above the noise mean. I have also commented all steps in the code so the Wiener filtering process may be followed and understood in relation to the code.

Matlab Code for Fourier Transform Wiener Filter


WienerDriver
clear; clf;
% generate random noise array
SIZE = 128;
N = randn(SIZE,SIZE);      % generates a random Gaussian matrix with SIZExSIZE components
N(50,:) = N(50,:) + 2;     % line is x=50 and raised by 2 above the noise mean
figure(1)
colormap(jet)
image(N*50)                % multiplies each pixel in the image by 50
h = ones(10,10)/16;        % filter function
sigma = 1;                 % should always be 1 for a Gaussian noise distribution as stated in the Matlab RANDN function
Xf = fft2(N);              % Fourier transform of supplied image
Hf = fft2(h,SIZE,SIZE);    % Fourier transform of blurring filter
y = real(ifft2(Hf.*Xf));   % y is the real inverse Fourier transform of the Fourier transform of
                           % the initial image multiplied by the Fourier transform of the filter
                           % function (h). This creates the output image y
% restoration using generalized Wiener filtering
gamma = 1;
alpha = 1;
ewx = wienerFilter(y,h,sigma,gamma,alpha);
figure(2)
colormap(jet)
image(ewx*50)              % magnifies each pixel in the image by 50
return

WienerFilter
function ex = wienerFilter(y,h,sigma,gamma,alpha)
%
% ex = wienerFilter(y,h,sigma,gamma,alpha);
%
% Generalized Wiener filter using parameter alpha. When
% alpha = 1 (the noise power), it is the Wiener filter. It is also called
% the regularized inverse filter.
%
SIZE = size(y,1);
Yf  = fft2(y);                 % Fourier transform of the blurred image y
Hf  = fft2(h,SIZE,SIZE);       % Fourier transform of the filter function
Pyf = abs(Yf).^2/SIZE^2;       % power spectrum estimate of the blurred image

sHf = Hf.*(abs(Hf)>0) + 1/gamma*(abs(Hf)==0);   % filter function with zeros replaced by 1/gamma
iHf = 1./sHf;                                    % its inverse
iHf = iHf.*(abs(Hf)*gamma>1) + gamma*abs(sHf).*iHf.*(abs(sHf)*gamma<=1);
                               % regularized inverse: kept where |Hf| is large, damped where it is small
Pyf = Pyf.*(Pyf>sigma^2) + sigma^2*(Pyf<=sigma^2);
                               % power spectrum clipped from below at the noise variance sigma^2
Gf = iHf.*(Pyf-sigma^2)./(Pyf-(1-alpha)*sigma^2);
                               % Gf is the generalized Wiener filter transfer function
% Restored image
eXf = Gf.*Yf;                  % the filter multiplies each pixel in the Fourier image (Yf) by Gf
ex  = real(ifft2(eXf));        % inverse Fourier transform of the filtered spectrum gives the result
return

Wiener filtering
GARCIA Frederic, VIBOT MSc

Objective: restoration of a blurred and noisy image (program in Matlab)

The purpose of the Wiener filter is to reduce the amount of noise present in a signal by comparison with an estimation of the desired noiseless signal. It is based on a statistical approach. Typical filters are designed for a desired frequency response. The Wiener filter approaches filtering from a different angle. One is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks the LTI filter whose output would come as close to the original signal as possible. Wiener filters are characterized by the following:

1. Assumption: signal and (additive) noise are stationary linear stochastic processes with known spectral characteristics or known autocorrelation and cross-correlation.
2. Requirement: the filter must be physically realizable, i.e. causal (this requirement can be dropped, resulting in a non-causal solution).
3. Performance criterion: minimum mean-square error.

1. Generate a 128x128p image:
a. background = 0
b. a centred 80x80p square of level 100
c. a centred disk (radius = 20) of level 200

Matlab does not have a specific function to draw a square or a circle, so it is necessary to do it manually. A 128x128 grid of cells has been built in order to define a mask for the square and another mask for the disk. The square mask is easy to compute, while for the disk mask it has been necessary to apply the circumference equation (Eq. 1).

$x^2 + y^2 < r^2$    (Eq. 1)

Once both masks have been computed, they are multiplied by the specific colour and merged using the mergeImages(img1,img2) function, which merges both images taking into account the maximum colour of each pixel. Hence, it maintains the square and disk colours after they are merged into a single image. The Matlab code to perform this step is presented in Table 1:
% 1. Generate a 128x128p image:
% a. background = 0
n = 128;
ind = -n/2:1:(n/2)-1;
[Y,X] = meshgrid(ind,ind);
cc = [0,0];                                      % center
% b. a centred 80x80p square of level 100
r = 40;                                          % radius of the square
sqMask = max(abs(X-cc(1)),abs(Y-cc(2))) < r;     % mask containing the square
square = sqMask*100;                             % fill with color = 100
% c. a centred disk (radius = 20) of level 200
r = 20;                                          % radius of the disk
dkMask = (X-cc(1)).^2 + (Y-cc(2)).^2 < r^2;      % mask containing the disk
disk = dkMask*200;                               % fill with color = 200
iSrc = mergeImages(square,disk);
Table 1. Step 1


The following figures present both masks and the resulting image after being merged.
Figure 1a. Mask for the square image.
Figure 1b. Mask for the disk image.
Figure 1c. Initial image with the specific colours for the square and for the disk.

2. Perform a blurring step of the image with the following filter (choose s < 1):

$h(x, y) = \dfrac{s}{2\pi}\, e^{-s\sqrt{x^2 + y^2}}$

The implementation of this step is based on the generation of the filter h, done by the function generateFilter(s,img); once the filter is computed it has to be applied to the image in order to obtain a blurred image. To apply this filter there are two possible ways:

1. By convolution of the filter and the image in the spatial domain. The problem with this approach is that the Matlab function conv2() performs the 2D convolution of matrices, but if [ma,na] = size(A) and [mb,nb] = size(B), then size(C) = [ma+mb-1,na+nb-1], whereas the resulting image is expected to have the same size as the input. There are three flags that determine the output of conv2:
'full' returns the full 2D convolution (default, presented in figure 2a).
'same' returns only the central part of the convolution, the same size as A (figure 2b).
'valid' returns only those parts of the convolution that are computed without the zero-padded edges, size(C) = [ma-mb+1,na-nb+1] when all(size(A) >= size(B)); otherwise C is an empty matrix [].


Figure 2a. Obtained image after the convolution using flag: full.
Figure 2b. Obtained image after the convolution using flag: same.

As can be observed, the convolution does not return the expected image size or the desired part of the convolution, which suggests working in the frequency domain instead.

2. By computing the product between the filter and the image in the frequency domain. In order not to spend much time trying to select the valid range of the convolution, the computation has been done in the Fourier domain, as presented in Table 2.
% 2. Perform a blurring step of the image with the following filter:
s = 0.5;
[iBlur,h] = applyBlurFilter(s,iSrc);

function [iRes,h] = applyBlurFilter(s,img)
    iRes = [];
    h = generateFilter(s,img);
    H = fft2(h);
    IMG = fft2(double(img));
    I_BLUR = IMG .* H;
    iRes = ifft2(I_BLUR);
end

function h = generateFilter(s,img)
    h = [];
    [width,height] = size(img);
    for y = 1:height
        for x = 1:width
            h(x,y) = (s/(2*pi))^(-s*sqrt(x^2+y^2));
        end
    end
    % 'h' is the blurring filter
    h = h / sum(sum(h));   % normalize
end
Table 2. Step 2

The resulting image after the blur step is presented in figure 3:

Figure 3. Blurred image.

3. Add a Gaussian (white) noise: variance v > 10 (mean = 0).

Table 3 presents the Matlab code to add white noise to an image. The noise is simulated by normally distributed random values scaled by the standard deviation.

% 3. Add a gaussian (white) noise: variance v>10 (mean=0)
variance = 11;                         % must be bigger than 10
std_dev = sqrt(variance);
noise = std_dev.*randn(size(iBlur));
iBlurNoise = iBlur + noise;
Table 3. Step 3

The resulting image after the addition of noise is presented in figure 4:


Figure 4. Noisy and blurred image.

4. Perform a restoration of this built image by using the simplified Wiener restoration filtering:

$W = \dfrac{H^{*}}{|H|^{2} + K}$

where the restored image $\hat{F}$ is obtained from the observed (degraded) image G in the Fourier space by:

$\hat{F} = W\,G$

Give the K value leading to the best visual restoration.


In order to compute the K value leading to the best visual restoration, an array of 100 values from 0.001 to 0.1 has been generated. Each value has been assigned in turn to K and the restoration has been performed. The selected K value corresponds to the minimum error obtained between the original image and the restored one. The Matlab code is presented in Table 4:

% 4. Perform a restoration of this built image by using the simplified Wiener filter
H = fft2(h);                       % transform the filter into the Fourier domain
% Give the K value leading to the best visual restoration, calculated
% by minimizing the error between the initial image and the restored image
K = linspace(0.001,0.1,100);
errorVect = zeros(1,100);
for i=1:length(K)
    % Generate restored Wiener filter
    W = conj(H)./(abs(H).^2 + K(i));
    % Apply filter
    G = fft2(iBlurNoise);
    F = W.*G;
    iRestored = uint8(ifft2(F));
    % Calculate error
    error = uint8(iSrc) - iRestored;
    errorVect(i) = mean(error(:))^2;
end
% Retrieve minimum error
[minErrorValue minErrorPos] = min(errorVect);
idealK = K(minErrorPos);
W = conj(H)./(abs(H).^2 + idealK);
G = fft2(iBlurNoise);
F = W.*G;
iRestored = ifft2(F);
Table 4. Step 4

And the restored image is presented in figure 5:


Figure 5. Restored image.

Figure 6 presents the results of all the computations until this step:

Figure 6. Results of the steps 1-4.

Although the information about the spectrum of the undistorted image and also of the noise is known, it is not possible to achieve a perfect restoration. The main reason is the random nature of the noise.

5. Try to introduce two different blurs: one (s1) for the square and another (s2 < s1/2) for the disk. Add noise and restore the image. Conclusion?

In order to perform this step, two different filters h have been computed, one for the square and one for the disk, with different s values. After computing both images separately, they are added in order to obtain a single image. Then the noise is added and the restoration process is performed. In this case it is not possible to find a priori a suitable value of s to make a correct restoration, so an array of 100 elements between s1 and s2 has been used to compute the blur filter and, as in step 4, the best K value has been computed for each s value. The Matlab code for this step is presented in Table 5:
% 5. Try to introduce different blurs: one (s1) for the square and another
% (s2<s1/2) for the disk. Add noise and restore the image
s1 = 0.5;              % square
s2 = (s1/2) - 0.05;    % disk
% Perform the bluring step
[iBlurSq,hSq] = applyBlurFilter(s1,square);   % apply the filter to the square
[iBlurDk,hDk] = applyBlurFilter(s2,disk);     % apply the filter to the disk
iBlurNew = mergeImages(iBlurSq,iBlurDk);
% Add the gaussian noise
iBlurNoiseNew = iBlurNew + noise;
% Find the ideal H by trial and error
sVect = linspace(s1,s2,100);
kVal = [];                       % best K value for each value of s
kVect = linspace(0.001,0.1,100);
errorVect = zeros(1,100);
for i=1:length(sVect)
    % Generate the filter
    h = generateFilter(sVect(i),iBlurNew);
    H = fft2(h);
    errorKVect = zeros(1,100);
    for j=1:length(kVect)
        % Generate restored Wiener filter
        W = conj(H)./(abs(H).^2 + kVect(j));
        % Apply filter
        G = fft2(iBlurNoiseNew);
        F = W.*G;
        iRestoredNew = uint8(ifft2(F));
        % Calculate error
        error = uint8(iSrc) - iRestoredNew;
        errorKVect(j) = mean(error(:))^2;
    end
    % Retrieve minimum error
    [minErrorValue minErrorPos] = min(errorKVect);
    kVal(i) = kVect(minErrorPos);
    % Generate restored Wiener filter
    W = conj(H)./(abs(H).^2 + kVal(i));
    % Apply filter
    G = fft2(iBlurNoiseNew);
    F = W.*G;
    iRestoredNew = uint8(ifft2(F));
    % Calculate error
    error = uint8(iSrc) - iRestoredNew;
    errorVect(i) = mean(error(:))^2;
end
% Retrieve minimum error
[minErrorValue minErrorPos] = min(errorVect);
% Generate the best filter 'h'
h = generateFilter(sVect(minErrorPos),iBlurNew);
H = fft2(h);
W = conj(H)./(abs(H).^2 + kVal(minErrorPos));
G = fft2(iBlurNoiseNew);
F = W.*G;
iRestoredNew = ifft2(F);


Table 5. Step 5


The results are presented in the following figure:


Figure 6. Results of step 5.

In this case the restoration is slightly worse than in the previous step, but the time invested to perform the best restoration is considerably higher.

Conclusions: As stated in the introduction, the Wiener filter is used to reduce the amount of noise present in a signal by comparison with an estimation of the desired noiseless signal. As can be observed in the results, the image restoration is not absolutely perfect, but it achieves an image very close to the original one. However, if it is necessary to search for a value of s or K to perform the closest restoration, the time invested is very high, which can be a problem in many cases.

Description: The aim of this course is to prepare upper level undergraduates so that they can be productive when faced with technical problems related to biomedical imaging. The basic underlying techniques (mathematics, physics, signal processing, data analysis) for understanding the several phenomena related to image formation in biomedical devices are presented. Several methods for computational information extraction from image data are also presented (segmentation, registration, pattern recognition, etc.). Course work will include homework assignments (including analytical and programming exercises) as well as an independent project. Field trips to observe biomedical imaging devices in action are also planned.

Abstract
Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on the a priori knowledge about the type of noise corrupting the image and on the image features. This makes the standard filters application- and image-specific. The most popular filters, such as the average, Gaussian and Wiener filters, reduce noisy artifacts by smoothing. However, this operation normally results in smoothing of the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image non-smooth. An integrated general approach to designing filters based on the discrete cosine transform (DCT) is proposed in this study for optimal medical image filtering. This algorithm exploits the better energy compaction property of the DCT and re-arranges these coefficients in a wavelet manner to obtain better energy clustering at the desired spatial locations. This algorithm performs optimal smoothing of the noisy image while preserving high- and low-frequency features. Evaluation results show that the proposed filter is robust under various noise distributions.


Abstract: Image filtering techniques have potential applications in biomedical image processing such as image restoration and image enhancement. The potential of traditional filters largely depends on the a priori knowledge about the type of noise corrupting the image. This makes the standard filters application specific. For example, the well-known median filter and its variants can remove salt-and-pepper (or impulse) noise at low noise levels. Each of these methods has its own advantages and disadvantages. In this paper, we have introduced a new finite impulse response (FIR) filter for image restoration where the filter undergoes a learning procedure. The filter coefficients are adaptively updated based on correlated Hebbian learning. This algorithm exploits the inter-pixel correlation in the form of Hebbian learning and hence performs optimal smoothing of the noisy images. The application of the proposed filter to images corrupted with Gaussian noise results in restorations which are better in quality compared to those restored by average and Wiener filters. The restored image is found to be visually appealing and artifact-free.

Abstract
Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on the a priori knowledge about the type of noise corrupting the image. This makes the standard filters application specific. Widely used filters such as average, Gaussian, and Wiener reduce noisy artifacts by smoothing. However, this operation normally results in smoothing of the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image nonsmooth. An integrated general approach to design a finite impulse response filter based on Hebbian learning is proposed for optimal image filtering. This algorithm exploits the interpixel correlation by updating the filter coefficients using Hebbian learning. The algorithm is made iterative for achieving efficient learning from the neighborhood pixels. This algorithm performs optimal smoothing of the noisy image by preserving high-frequency as well as low-frequency features. Evaluation results show that the proposed finite impulse response filter is robust under various noise distributions such as Gaussian noise, salt-and-pepper noise, and speckle noise. Furthermore, the proposed approach does not require any a priori knowledge about the type of noise. The number of unknown parameters is small, and most of these parameters are adaptively obtained from the processed image. The proposed filter is successfully applied for image reconstruction in a positron emission tomography imaging modality. The images reconstructed by the proposed algorithm are found to be superior in quality compared with those reconstructed by existing PET image reconstruction methodologies.
