
Bilkent University EE321 - Signals and Systems Lab 6 Report


Duru Gül/22102015

Section 1

30/04/2025

1 INTRODUCTION
This lab focuses on three fundamental signal processing operations: convolution, the discrete-
time Fourier transform (DTFT), and its inverse (IDTFT). Part 1 covers the analytical treatment
of these ideas together with their matrix representations. In Part 2, the DTFT, IDTFT, and
convolution are implemented in MATLAB without built-in functions, and error and performance
analyses are carried out. Parts 3 and 4 cover the denoising of a horn signal and the design of a
simple equalizer for the analysis and filtering of multi-instrument recordings. By combining
theory and audio processing, the lab builds a deeper understanding of signal behavior in the
time and frequency domains.

2 SIMULATIONS

2.1 Part 1

This part consists of handwritten derivations, which can be seen in the following figures.
Figure 1: Solutions of part 1
Figure 2: Solutions of part 1
Figure 3: Solutions of part 1
Figure 4: Solutions of part 1
Figure 5: Solutions of part 1

2.2 Part 2

2.2.1 Implementation of DTFT and IDTFT

The aim of this section is to implement the discrete-time Fourier transform (DTFT) and its
inverse (IDTFT) using MATLAB's matrix operations, without built-in Fourier functions such as
fft, ifft, or fftshift. The DTFT maps a finite-length discrete-time sequence to a continuous
(here, densely sampled) frequency-domain representation, and the IDTFT reconstructs the
time-domain sequence from its DTFT.

The DTFT is defined as:


$$X(e^{j\omega}) = \sum_{n=0}^{N-1} x[n]\, e^{-j\omega n}$$

To compute this numerically, I evaluated the sum over a frequency vector ω ∈ [−π, π]. The
function is implemented using MATLAB matrix operations and can be found in the appendix.
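As a minimal sketch of that evaluation (taken from the appendix function, assuming x is a row vector, n its time indices, and w a column vector so that w*n forms a frequency-by-time grid):

w = linspace(-pi, pi, 1000).';           % sampled frequency grid on [-pi, pi]
x = [1 1 1 1 1 0 0 0 0 0];               % test sequence from Part 2.1
n = 0:length(x)-1;
X = sum(x .* exp(-1j * (w * n)), 2);     % sum over n for each frequency row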

The inverse DTFT is given as:

$$x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(e^{j\omega})\, e^{j\omega n}\, d\omega$$

In MATLAB, this integral is approximated using a Riemann sum over a finely sampled
frequency vector. The implementation of the function can be found in the appendix.
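A minimal sketch of this approximation, assuming X was computed on the column vector w as in the sketch above and n is a row vector of time indices:

dw = w(2) - w(1);                                           % uniform frequency spacing
x_rec = real(sum(X .* exp(1j * (w * n)), 1) * dw / (2*pi)); % Riemann sum over w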

To verify the correctness of the implementation, the test signal ϕ1[n] given in the lab
manual was used. The reconstruction error was then computed as:

$$E = \sum_{n=0}^{N} \big(\phi_1[n] - \phi_2[n]\big)^2$$

The reconstructed signal ϕ2[n] closely matches the original ϕ1[n], confirming that the DTFT
and IDTFT functions operate correctly. The resulting error can be seen below.

Figure 6: The error for part 2.1

A small nonzero error was observed. This was expected due to the finite sampling of the
frequency vector ω and round-off in floating-point arithmetic; since the error is very small,
it is acceptable.

2.2.2 Implementation of Convolution

In this part of the lab, the goal is to implement the convolution of two discrete-time
sequences using the frequency-domain method. This relies on the fact that convolution in the
time domain corresponds to multiplication in the frequency domain:

$$y[n] = x[n] * h[n]$$

$$Y(e^{j\omega}) = X(e^{j\omega})\, H(e^{j\omega})$$


The implementation must not use any built-in convolution functions. The function was
implemented under the name convFUNC, with the following signature:

function [y] = convFUNC(x, h, nx, nh, ny, w)

Code for this function can be seen in the appendix.
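As an illustrative usage sketch (assuming the DTFT and IDTFT helpers from Part 2.1 with a column frequency vector; the test signals are those used later in Part 2.2.4):

w  = linspace(-pi, pi, 1000).';              % column frequency grid, as in Part 2.1
x  = [2 4 6 8 7 6 5 4 3 2 1];  nx = 0:length(x)-1;
h  = [1 2 1 -1];               nh = 0:length(h)-1;
ny = 0:(length(x) + length(h) - 2);          % support of the full convolution
y  = convFUNC(x, h, nx, nh, ny, w);          % should match conv(x, h) up to numerical error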

2.2.3 Another Approach for Convolution

In this part, the objective is to implement the convolution of two discrete-time signals using
matrix multiplication. Convolution can be interpreted as a special case of polynomial
multiplication, and thus modeled as a Toeplitz matrix multiplication.

Given two sequences:

$$x[n] \in \mathbb{R}^{N}, \qquad h[n] \in \mathbb{R}^{M}, \quad M \le N$$

The convolution y[n] = x[n] ∗ h[n] results in a sequence of length N + M − 1. It can be
represented as:

$$y = H_{\mathrm{matrix}}\, x_{\mathrm{pad}}$$

where H_matrix is a Toeplitz matrix formed from h[n], x_pad is a zero-padded version of x[n],
and the operation becomes a simple matrix-vector multiplication.

Code for this function can be seen in the appendix.
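The following small worked example sketches the Toeplitz construction used by ConvFUNC_M (the short sequences here are chosen for illustration only):

h = [1 2 1];  x = [1 0 2 3];                         % M = 3, N = 4, output length N+M-1 = 6
Hmat = toeplitz([h(:); zeros(length(x)-1, 1)], ...   % first column: h padded with zeros
                [h(1), zeros(1, length(x)-1)]);      % first row: h(1) followed by zeros
y = Hmat * x(:);                                     % equals conv(x, h) = [1 2 3 7 8 3].'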

2.2.4 Testing the Convolution Function

The purpose of this section is to compare the accuracy and performance of the two custom
convolution implementations from Sections 2.2.2 (convFUNC) and 2.2.3 (ConvFUNC_M) against
MATLAB's built-in conv() function. The following discrete-time signals were used as test
inputs:

x[n]=[2,4,6,8,7,6,5,4,3,2,1]

h[n]=[1,2,1,−1]

The frequency vector 𝑤 was linearly spaced in the interval [−𝜋, 𝜋] with 1000 samples.

Execution times were measured using tic and toc. The results can be seen below.
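A condensed sketch of the measurement pattern, following the Part 2.4 script in the appendix (variable names are illustrative):

x = [2 4 6 8 7 6 5 4 3 2 1];  h = [1 2 1 -1];
w = linspace(-pi, pi, 1000).';
nx = 0:length(x)-1;  nh = 0:length(h)-1;  ny = 0:(length(x)+length(h)-2);
tic; y1 = convFUNC(x, h, nx, nh, ny, w);  t1 = toc;   % frequency-domain method
tic; y2 = ConvFUNC_M(x, h);               t2 = toc;   % Toeplitz method (column output)
tic; y3 = conv(x, h);                     t3 = toc;   % built-in reference
E = sum(abs(y2.' - y3).^2);                           % norm-square difference vs. conv()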
Figure 7: The output of convFUNC and its execution time

Figure 8: The output of convFUNC_M and its execution time

Figure 9: The output of conv() and its execution time

The norm-square difference between outputs was computed as:

$$E = \sum_{n} \left|y_1[n] - y_2[n]\right|^2$$

The following comparisons were made:

Figure 10: The errors


Owing to internal optimizations, MATLAB's built-in conv() function was the fastest. The
zero-error results verify that the frequency-domain implementation based on the DTFT and
IDTFT is accurate and that all implementations are numerically identical in this specific case.

2.3 Part 3

This section aims to denoise an anechoic horn recording by designing and implementing two
moving-average filters: a simple moving average (SMAV) and a Gaussian moving average (GMAV).
Using the previously developed functions (ConvFUNC or ConvFUNC_M), the filtering is applied by
convolution, and each filter's effectiveness is evaluated both visually and audibly. The SMAV
and GMAV impulse responses are defined as:

$$h_{\mathrm{SMAV}}[n] = \frac{1}{N}\,[1, 1, \ldots, 1, 1]$$

$$h_{\mathrm{GMAV}}[n] = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(n-\mu)^2}{2\sigma^2}}$$
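A short sketch of how the two impulse responses can be built, following the Part 3 script in the appendix (a 10 ms SMAV window and a GMAV with σ = 0.7 over n = −5:5; the file name is the one used in the script):

[y, Fs] = audioread('Part3_recording.flac');   % horn recording and its sample rate
N = round(0.01 * Fs);                          % 10 ms moving-average window
hSMAV = ones(1, N) / N;                        % uniform weights summing to 1
n = -5:5;  sigma = 0.7;
hGMAV = exp(-(n.^2) / (2*sigma^2));            % Gaussian weights, mu = 0
hGMAV = hGMAV / sum(hGMAV);                    % normalize to unit DC gain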

The given horn recording was then convolved with these filters. The results can be seen below.

Figure 11: Impulse responses


The SMAV has a rectangular impulse response: it is constant over its support, giving equal
weight to all input samples within the filter window. This uniform averaging is effective at
removing high-frequency noise, but it may distort sudden changes and boundaries in the signal.
The GMAV has a rounded impulse response: the weights peak at the center and decay toward the
edges, so the filter prioritizes central samples while reducing the influence of distant ones.
As a result, it reduces noise more smoothly and better preserves edges and transients, making
it better suited to signals that require clarity and structure.

The SMAV's rectangular shape results in stronger averaging, while the GMAV's tapering impulse
response allows a balance between noise reduction and signal quality.

Figure 12: The filter outputs

The filtered outputs were played using the sound() function. The SMAV significantly reduced
background noise while preserving the horn's character, producing a smoother output. The GMAV
also reduced noise but added extra smoothing, so the horn sounded slightly blurred. Although
Gaussian filters have theoretical advantages, the SMAV filter performed better in this case,
providing a cleaner output with less noise. This implies that the uniform averaging of the
SMAV was better matched to the given horn signal and noise characteristics.

2.4 Part 4

This section designs a simple equalizer with Gaussian-based low-pass and high-pass filters.
The filters are applied to recordings of various musical instruments, and their effects are
examined in the frequency domain. The goal is to understand how the filters reduce or preserve
the frequency components of the recordings. Gaussian impulse responses with different standard
deviations are used to build the filters: the LPF passes low frequencies using a broader
Gaussian (σ = 0.4), while the HPF attenuates low frequencies and passes high ones using a
narrow Gaussian (σ = 0.02).
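A brief sketch of how the filters and their magnitude responses are formed, following the Part 4 script in the appendix:

sigmaLPF = 0.4;  sigmaHPF = 0.02;
n = -ceil(3*sigmaLPF) : ceil(3*sigmaLPF);        % n = -2:2 for these sigmas
hLPF = exp(-0.5 * (n / sigmaLPF).^2);            % broad Gaussian -> low-pass
hHPF = -exp(-0.5 * (n / sigmaHPF).^2);           % negated narrow Gaussian ...
idx  = ceil(length(hHPF)/2) + 1;                 % index used in the script
hHPF(idx) = hHPF(idx) + 1;                       % ... plus a unit sample -> high-pass
omega = linspace(-pi, pi, 2048);
magLPF = abs(hLPF * exp(-1j * n' * omega));      % |H_LPF(e^jw)| on the omega grid
magHPF = abs(hHPF * exp(-1j * n' * omega));      % |H_HPF(e^jw)|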

The DTFT magnitude of each filter was plotted in order to examine its behavior. The results
can be seen below.

Figure 13: Frequency Responses


Figure 14: Impulse Responses

Each recording was loaded, and the DTFT was used to evaluate its frequency content.

My observations:

Flute: energy concentrated at higher frequencies.

Bassoon and cello: mostly low-frequency content.

Trumpet: energy spread over a wide frequency range (broadband).

This was verified by plotting the DTFT magnitude of each recording; a sketch of this check is
given below. Each instrument recording was then passed through the LPF and HPF, and the
results can be seen in the following figures.
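As a sketch of how such a check can be made (using the DTFT convention of the appendix; only a short segment of the recording is transformed here, since the full exp(-1j*n'*omega) matrix would be very large for an entire file):

[fluteSignal, fs] = audioread('flute.flac');      % one of the given recordings
segment = fluteSignal(1:min(2000, end), 1).';     % short mono segment as a row vector
n = 0:length(segment)-1;
omega = linspace(-pi, pi, 2048);
Xmag = abs(segment * exp(-1j * n' * omega));      % |X(e^jw)| on the omega grid
plot(omega/(2*pi)*fs, Xmag);                      % frequency axis in Hz
xlabel('Frequency (Hz)'); ylabel('Magnitude');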
Figure 15: Bassoon’s Filtering

Figure 16: Cello’s Filtering


Figure 17: Flute’s Filtering

Figure 18: Trumpet’s Filtering

The results were listened to and inspected. The LPF attenuated high-frequency instruments
such as the flute and trumpet, while the HPF removed most of the cello and bassoon content
but preserved the flute and trumpet.
All four instrument signals were combined into one audio stream to create a simple orchestra.
The same LPF and HPF were then applied to this combined signal. The result can be seen below.

Figure 19: Orchestra Filtering

In the original orchestra mix, all instruments were audible across the full frequency range.
After filtering with the LPF, the low-frequency instruments (cello and bassoon) dominated,
while the high-frequency instruments were largely muted. After filtering with the HPF, there
was a noticeable decrease in bass and a relative increase in the high-frequency instruments
such as the flute and trumpet. These observations matched the expected behavior based on each
instrument's frequency characteristics, confirming the filter design.

3 CONCLUSION

This lab explored fundamental signal processing topics, namely convolution, the DTFT, and the
IDTFT, using both theoretical derivations and MATLAB implementations. Custom convolution
functions were created and shown to match MATLAB's built-in results with negligible error.
Simple and Gaussian moving-average filters were used to denoise a horn recording, and the
SMAV filter provided the better balance of clarity and noise reduction. Finally, individual
instrument recordings and a combined orchestral signal were filtered with Gaussian-based
low-pass and high-pass filters. Frequency-domain analysis and audio playback confirmed that
the filters performed as expected, passing low- or high-frequency content depending on the
instrument's characteristics. Overall, the lab combined theory and practical methods to
improve understanding of time- and frequency-domain signal behavior.

4 APPENDIX

• convFUNC

function [y] = convFUNC(x, h, nx, nh, ny, w)
% Frequency-domain convolution: multiply the DTFTs of x and h, then invert.
y = IDTFT(DTFT(x,nx,w) .* DTFT(h,nh,w), ny, w);
end

• ConvFUNC_M

function [y] = ConvFUNC_M(x, h)
% Convolution as a Toeplitz matrix-vector product: the first column of H is h
% padded with zeros, and each column of H is h shifted down by one sample.
H = toeplitz([h(:); zeros(length(x)-1,1)], [h(1), zeros(1,length(x)-1)]);
y = H * x(:);
end

• Part 2.1

w = linspace(-pi, pi, 1000).';


original_signal = [1 1 1 1 1 0 0 0 0 0];
n = 0:(length(original_signal)-1);
X = DTFT(original_signal, n, w);
final_signal = IDTFT(X, n, w);
error = sum(abs(original_signal - final_signal).^2);
disp(['The error is: ', num2str(error)]);
function X = DTFT(x, n, w)
% Evaluate X(e^jw) = sum_n x[n] e^{-jwn} for every frequency in the column vector w.
X = sum(x .* exp(-1j * (w * n)), 2);
end
function x = IDTFT(X, n, w)
% Riemann-sum approximation of (1/(2*pi)) * integral of X(e^jw) e^{jwn} dw over [-pi, pi].
x = real((1 / (2 * pi)) * sum(X .* exp(1j * (w * n)), 1) * (w(2) - w(1)));
end

• Part 2.4

omega = linspace(-pi, pi, 1000);


inputSignal = [2, 4, 6, 8, 7, 6, 5, 4, 3, 2, 1];
lenSignal = length(inputSignal) - 1;
timeIndexSignal = 0:lenSignal;

impulseResponse = [1, 2, 1, -1];


lenImpulse = length(impulseResponse) - 1;
timeIndexImpulse = 0:lenImpulse;
tic;
outputFreqDomain = ConvFUNC(inputSignal, impulseResponse, timeIndexSignal, ...
    timeIndexImpulse, 0:lenSignal+lenImpulse, omega);
timeFreq = toc;
disp('Result from ConvFUNC (frequency-domain method):');
disp(outputFreqDomain);
disp(['Execution time (ConvFUNC): ', num2str(timeFreq), ' seconds']);
tic;
outputMatrixMethod = ConvFUNC_M(inputSignal, impulseResponse);
timeMatrix = toc;
disp('Result from ConvFUNC_M (matrix-based method):');
disp(outputMatrixMethod);
disp(['Execution time (ConvFUNC_M): ', num2str(timeMatrix), ' seconds']);
tic;
outputBuiltin = conv(inputSignal, impulseResponse);
timeBuiltin = toc;
disp('Result from MATLAB built-in conv():');
disp(outputBuiltin);
disp(['Execution time (conv): ', num2str(timeBuiltin), ' seconds']);
err_freq_vs_builtin = sum(abs(outputFreqDomain - outputBuiltin).^2);
err_matrix_vs_builtin = sum(abs(outputMatrixMethod.' - outputBuiltin).^2);
err_matrix_vs_freq = sum(abs(outputMatrixMethod.' - outputFreqDomain).^2);

disp(['Error between conv() and ConvFUNC: ', num2str(err_freq_vs_builtin)]);


disp(['Error between conv() and ConvFUNC_M: ', num2str(err_matrix_vs_builtin)]);
disp(['Error between ConvFUNC_M and ConvFUNC: ', num2str(err_matrix_vs_freq)]);
function [y] = ConvFUNC(x, h, nx, nh, ny, w)
y = round(IDFT(DTFT(x, nx, w) .* DTFT(h, nh, w), ny, w));
end

function y = ConvFUNC_M(x, h)
y = toeplitz([h(:); zeros(length(x)-1, 1)], [h(1), zeros(1, length(x)-1)]) * x(:);
end

function X = DTFT(x, n, w)
X = x * exp(-1j * n' * w);
end

function x = IDFT(X, n, w)
x = real((X * exp(1j * w' * n)) * (w(2) - w(1)) / (2 * pi));
end

• Part 3

[y, Fs] = audioread('Part3_recording.flac');


N = round(0.01*Fs); % 10 ms averaging window (rounded in case 0.01*Fs is not an integer)
SMAVh = ones(1,N)*(1/N);
SMAVFiltered = conv(y,SMAVh,'same');
n = -5:5;
GMAVH = (1/(0.7*sqrt(2*pi)))*exp(-(n.^2)/(2*0.7^2)); % Gaussian weights, mu = 0, sigma = 0.7
GMAVH = GMAVH/sum(GMAVH); % normalize so the weights sum to 1
figure;
subplot(2,1,1);
stem(0:N-1, SMAVh, 'filled', 'r');
title('Impulse Response of SMAV Filter');
xlabel('n'); ylabel('Amplitude');
subplot(2,1,2);
stem(n, GMAVH, 'filled', 'r');
title('Impulse Response of GMAV Filter');
xlabel('n'); ylabel('Amplitude');
GMAVFiltered = conv(y,GMAVH,'same');
figure;
subplot(2,2,[1,2]);
plot((0:length(y)-1)/Fs, y,'r');
title('Original Given Signal');
xlabel('Time (s)');
ylabel('Amplitude');
xlim([0 length(y)/Fs]);
subplot(2,2,3);
plot((0:length(y)-1)/Fs, SMAVFiltered, 'r');
title('SMAV Filter Output');
xlabel('Time (sec)');
ylabel('Amplitude');
xlim([0 length(SMAVFiltered)/Fs]);
subplot(2,2,4);
plot((0:length(y)-1)/Fs, GMAVFiltered, 'r');
title('GMAV Filter Output');
xlabel('Time (sec)');
ylabel('Amplitude');
xlim([0 length(GMAVFiltered)/Fs]);
pause(length(y)/Fs);
sound(GMAVFiltered, Fs);
pause(length(y)/Fs);
sound(SMAVFiltered, Fs);
function y = ConvFUNC_M(x,h)
y = toeplitz([h(:);
zeros(length(x)-1,1)],[h(1),zeros(1,length(x)-1)])*x(:);
end
function [y] = ConvFUNC(x,h,nx,nh,ny,w)
y = round(IDFT(DTFT(x,nx,w).*DTFT(h,nh,w),ny,w));
end
function x = IDFT(X,n,w)
x = (real((X*exp(1j*w'*n))*(w(2)-w(1)))/(2*pi));
end
function X = DTFT(x,n,w)
X = x*exp(-1j*n'*w);
end

• Part 4

sigmaLPF = 0.4;
sigmaHPF = 0.02;
nLimit = ceil(3 * sigmaLPF);
nFilter = -nLimit:nLimit;
filterLPF = exp(-0.5 * (nFilter / sigmaLPF).^2);
filterHPF = -exp(-0.5 * (nFilter / sigmaHPF).^2);
filterHPF(ceil(length(filterHPF)/2)+1) = filterHPF(ceil(length(filterHPF)/2)+1) + 1;
figure;
subplot(2,1,1);
stem(nFilter, filterLPF, 'filled', 'r');
title('Impulse Response of Low-Pass Filter');
xlabel('n'); ylabel('Amplitude');

subplot(2,1,2);
stem(nFilter, filterHPF, 'filled', 'r');
title('Impulse Response of High-Pass Filter');
xlabel('n'); ylabel('Amplitude');
omega = linspace(-pi, pi, 2048);
[bassoonSignal, fs] = audioread('bassoon.flac');
freqHz = omega / (2 * pi) * fs;
responseLPF = DTFT(filterLPF, nFilter, omega);
responseHPF = DTFT(filterHPF, nFilter, omega);
figure;
subplot(2,1,1);
plot(freqHz, abs(responseLPF),'r');
title('Low-Pass Filter Magnitude Response');
xlabel('Frequency (Hz)');
ylabel('Magnitude');
xlim([0 fs/2]);
subplot(2,1,2);
plot(freqHz, abs(responseHPF),'r');
title('High-Pass Filter Magnitude Response');
xlabel('Frequency (Hz)');
ylabel('Magnitude');
xlim([0 fs/2]);
[celloSignal, ~] = audioread('cello.flac');
[fluteSignal, ~] = audioread('flute.flac');
[trumpetSignal, ~] = audioread('trumpet.flac');
bassoonLPF = conv(bassoonSignal, filterLPF, 'same');
bassoonHPF = conv(bassoonSignal, filterHPF, 'same');
celloLPF = conv(celloSignal, filterLPF, 'same');
celloHPF = conv(celloSignal, filterHPF, 'same');
fluteLPF = conv(fluteSignal, filterLPF, 'same');
fluteHPF = conv(fluteSignal, filterHPF, 'same');
trumpetLPF = conv(trumpetSignal, filterLPF, 'same');
trumpetHPF = conv(trumpetSignal, filterHPF, 'same');
figure;
subplot(3,1,1);
plot((0:length(bassoonSignal)-1)/fs, bassoonSignal,'r');
title('Bassoon - Original');
xlabel('Time (s)'); ylabel('Amplitude');
subplot(3,1,2);
plot((0:length(bassoonLPF)-1)/fs, bassoonLPF,'r');
title('Bassoon - After Low-Pass Filter');
xlabel('Time (s)'); ylabel('Amplitude');
subplot(3,1,3);
plot((0:length(bassoonHPF)-1)/fs, bassoonHPF,'r');
title('Bassoon - After High-Pass Filter');
xlabel('Time (s)'); ylabel('Amplitude');
figure;
subplot(3,1,1);
plot((0:length(celloSignal)-1)/fs, celloSignal,'r');
title('Cello - Original');
xlabel('Time (s)'); ylabel('Amplitude');
subplot(3,1,2);
plot((0:length(celloLPF)-1)/fs, celloLPF,'r');
title('Cello - After Low-Pass Filter');
xlabel('Time (s)'); ylabel('Amplitude');
subplot(3,1,3);
plot((0:length(celloHPF)-1)/fs, celloHPF,'r');
title('Cello - After High-Pass Filter');
xlabel('Time (s)'); ylabel('Amplitude');
figure;
subplot(3,1,1);
plot((0:length(fluteSignal)-1)/fs, fluteSignal,'r');
title('Flute - Original');
xlabel('Time (s)'); ylabel('Amplitude');
subplot(3,1,2);
plot((0:length(fluteLPF)-1)/fs, fluteLPF,'r');
title('Flute - After Low-Pass Filter');
xlabel('Time (s)'); ylabel('Amplitude');
subplot(3,1,3);
plot((0:length(fluteHPF)-1)/fs, fluteHPF,'r');
title('Flute - After High-Pass Filter');
xlabel('Time (s)'); ylabel('Amplitude');
figure;
subplot(3,1,1);
plot((0:length(trumpetSignal)-1)/fs, trumpetSignal,'r');
title('Trumpet - Original');
xlabel('Time (s)'); ylabel('Amplitude');
subplot(3,1,2);
plot((0:length(trumpetLPF)-1)/fs, trumpetLPF,'r');
title('Trumpet - After Low-Pass Filter');
xlabel('Time (s)'); ylabel('Amplitude');
subplot(3,1,3);
plot((0:length(trumpetHPF)-1)/fs, trumpetHPF,'r');
title('Trumpet - After High-Pass Filter');
xlabel('Time (s)'); ylabel('Amplitude');
sound(trumpetSignal, fs);
orchestra = bassoonSignal + celloSignal + fluteSignal + trumpetSignal;
orchestraLPF = conv(orchestra, filterLPF, 'same');
orchestraHPF = conv(orchestra, filterHPF, 'same');
sound(orchestra, fs); pause(length(orchestra)/fs + 1);
sound(orchestraLPF, fs); pause(length(orchestraLPF)/fs + 1);
sound(orchestraHPF, fs); pause(length(orchestraHPF)/fs + 1);
figure;
subplot(3,1,1);
plot((0:length(orchestra)-1)/fs, orchestra,'r');
title('Orchestra - Original Mix');
xlabel('Time (s)'); ylabel('Amplitude');
subplot(3,1,2);
plot((0:length(orchestraLPF)-1)/fs, orchestraLPF,'r');
title('Orchestra - LPF Applied');
xlabel('Time (s)'); ylabel('Amplitude');
subplot(3,1,3);
plot((0:length(orchestraHPF)-1)/fs, orchestraHPF,'r');
title('Orchestra - HPF Applied');
xlabel('Time (s)'); ylabel('Amplitude');
function X = DTFT(x, n, w)
X = x * exp(-1j * n' * w);
end
