The document outlines a Problem-Based Learning (PBL) project for Electrical Engineering students focused on understanding sampling and modulation in signals and systems. Students are tasked with recording audio signals at various sampling frequencies, analyzing them using MATLAB, and designing a system for modulation. The project emphasizes the importance of the Nyquist theorem and includes practical steps for recording, transforming, and visualizing audio signals in both single-sided and double-sided formats.


UNIVERSITY OF ENGINEERING AND TECHNOLOGY, TAXILA

ELECTRICAL ENGINEERING DEPARTMENT

SIGNALS AND SYSTEMS


PROBLEM BASED LEARNING (PBL)

SUBMITTED BY:
• Hajra Shafique (22-EE-13)
• Maryam Fatima (22-EE-33)
• Muhammad Haris (22-EE-49)
• Ghadeer Raza (22-EE-109)
• Waqar Ahmad (21-EE-R01)

SUBMITTED TO:
Sir Dr. Junaid Mir and Ma’am Zainab Shahid
SECTION: A
Designing a system that can perform modulation

Objective:
• To understand and observe the effects of sampling a continuous-time signal at different sampling rates, and of changing the sampling rate of an already-sampled signal.
• To explain concepts related to the sampling theorem and design a system that can perform modulation.
Sampling and Nyquist Criteria:
In signals and systems, sampling refers to the process of converting a continuous-time signal into a discrete-time signal by taking samples at regular intervals in time. The samples are then used to reconstruct the original signal. The Nyquist-Shannon sampling theorem states that a continuous-time signal can be perfectly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency component of the signal.
The Nyquist criterion is a fundamental concept in signal processing: the sampling frequency must be at least twice the highest frequency contained in the signal, or information about the signal will be lost. If the sampling frequency is less than twice the maximum analog signal frequency, a phenomenon known as aliasing occurs, in which higher frequencies fold back and appear as lower frequencies. In other words, to accurately capture a signal with a certain frequency content, you need to sample it at a rate at least twice its highest frequency component.
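As a quick illustration of the criterion, the sketch below samples a 300 Hz tone above and below its Nyquist rate; the frequencies are illustrative and are not taken from the lab recordings. In the undersampled case the samples trace out a 100 Hz alias.

```matlab
% Aliasing sketch: 300 Hz tone sampled above and below the Nyquist rate.
f0 = 300;                       % tone frequency (Hz) - illustrative only
t  = 0:1/48000:0.02;            % dense grid standing in for continuous time
x  = cos(2*pi*f0*t);

Fs_ok  = 1000;                  % > 2*f0, satisfies the Nyquist criterion
Fs_bad = 400;                   % < 2*f0, causes aliasing
n1 = 0:1/Fs_ok:0.02;  x1 = cos(2*pi*f0*n1);
n2 = 0:1/Fs_bad:0.02; x2 = cos(2*pi*f0*n2);

subplot(2,1,1); plot(t,x); hold on; stem(n1,x1); hold off
title('Fs = 1000 Hz (no aliasing)');
subplot(2,1,2); plot(t,x); hold on; stem(n2,x2); hold off
title('Fs = 400 Hz (samples trace a 100 Hz alias)');
```

The aliased frequency is |f0 - Fs_bad| = |300 - 400| = 100 Hz, which is what the undersampled stems appear to follow.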
Discrete-time Fourier transform (DTFT):
The Discrete-Time Fourier Transform (DTFT) is a tool used in signal processing and analysis to represent a discrete-time signal in the frequency domain. For a discrete-time signal x[n], it is defined as

X(e^jw) = sum over n from -inf to +inf of x[n] e^(-jwn)

and is periodic in w with period 2*pi. The DTFT is used to analyze signals and systems in the frequency domain. In MATLAB, the fft function computes the Discrete Fourier Transform (DFT), which gives N uniformly spaced samples of the DTFT over one period; the fast Fourier transform (FFT) algorithm makes this computation efficient.
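As a minimal sketch of this relationship (using a short made-up signal, not the lab recordings), the fft output can be checked against a direct evaluation of the DTFT sum at the frequencies w_k = 2*pi*k/N:

```matlab
% The DFT (via fft) gives N samples of the DTFT at w_k = 2*pi*k/N.
x  = [1 2 3 4];                 % short illustrative discrete-time signal
N  = length(x);
Xk = fft(x);                    % N samples of the DTFT

% Direct evaluation of the DTFT sum at the same frequencies
k = 0:N-1;
w = 2*pi*k/N;
Xw = zeros(1,N);
for n = 0:N-1
    Xw = Xw + x(n+1)*exp(-1j*w*n);   % accumulate x[n] e^{-jwn}
end
max(abs(Xk - Xw))               % ~0: the two computations agree
```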
Modulation:
Modulation refers to the process of varying one or more properties of a carrier
signal in accordance with the information signal, typically to transmit data over a
communication channel. It enables efficient transmission by adapting the signal to
the characteristics of the channel and allowing multiple signals to share the same
transmission medium. Common modulation techniques include Amplitude
Modulation (AM), Frequency Modulation (FM), and Phase Modulation (PM).
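A minimal amplitude-modulation sketch, with illustrative message and carrier frequencies chosen for easy plotting rather than taken from the lab:

```matlab
% AM (DSB-SC) sketch: a low-frequency message multiplied by a carrier.
Fs = 10000;  t = 0:1/Fs:0.1;    % illustrative sampling rate and duration
fm = 50;  fc = 1000;            % message and carrier frequencies (Hz)
m  = cos(2*pi*fm*t);            % message signal
c  = cos(2*pi*fc*t);            % carrier signal
y  = m .* c;                    % modulated signal
plot(t, y); hold on; plot(t, m, '--'); hold off
xlabel('Time (s)'); title('Message and modulated carrier');
```

The message appears as the envelope of the modulated waveform, which is the property the PBL system in this lab is meant to demonstrate.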
PBL lab brief:
A composite audio signal needs to be self-created and investigated to understand
and explain the concepts related to the sampling theorem. Then a system is to be
designed which can be used to perform modulation of any signal.
Step-1: Record individual messages:
This PBL lab is conducted in groups of 5. Each student individually records an audio signal of 6 seconds at one sampling frequency Fs (as specified below). Student 1 will use Fs=2000, Student 2 will use Fs=4000, Student 3 will use Fs=8000, Student 4 will use Fs=12000 and Student 5 will use Fs=14000 while recording their signals.
Matlab code:
STUDENT 1:
Fs1 = 3000;                        % Set your sampling frequency
recObj = audiorecorder(Fs1, 8, 1); % Create an audio recorder object
disp('Start speaking.')
recordblocking(recObj, 6);         % record for a duration of 6 seconds
disp('End of Recording.');
get(recObj)
MARYAM = getaudiodata(recObj);     % extract the audio from the recorder object
                                   % and save it in 'MARYAM' as a column vector
save('my_recording.mat','MARYAM');
load('my_recording.mat')
sound(MARYAM, Fs1);
L = length(MARYAM);
Y = fft(MARYAM');                  % DFT of the signal stored in 'MARYAM'
P2 = abs(Y/L);
P1 = P2(1:L/2+1);                  % first half of the magnitude spectrum
P1(2:end-1) = 2*P1(2:end-1);       % double the interior values (single-sided)
f1 = Fs1*(0:(L/2))/L;
figure(1);
subplot(5,1,1);
plot(MARYAM);
title('Maryam recording');
grid on
STUDENT 2:
Fs2 = 4000;                        % Set your sampling frequency
recObj = audiorecorder(Fs2, 8, 1); % Create an audio recorder object
disp('Start speaking.')
recordblocking(recObj, 6);         % record for a duration of 6 seconds
disp('End of Recording.');
get(recObj)
HAJIRA = getaudiodata(recObj);     % extract the audio from the recorder object
                                   % and save it in 'HAJIRA' as a column vector
save('my_recording.mat','HAJIRA');
load('my_recording.mat')
sound(HAJIRA, Fs2);
L = length(HAJIRA);
Y = fft(HAJIRA');                  % DFT of the signal stored in 'HAJIRA'
P2 = abs(Y/L);
P1 = P2(1:L/2+1);                  % first half of the magnitude spectrum
P1(2:end-1) = 2*P1(2:end-1);       % double the interior values (single-sided)
f2 = Fs2*(0:(L/2))/L;
subplot(5,1,2);
plot(HAJIRA);
title('Hajira recording');
grid on
STUDENT 3:
Fs3 = 8000;                        % Set your sampling frequency
recObj = audiorecorder(Fs3, 8, 1); % Create an audio recorder object
disp('Start speaking.')
recordblocking(recObj, 6);         % record for a duration of 6 seconds
disp('End of Recording.');
get(recObj)
GHADEER = getaudiodata(recObj);    % extract the audio from the recorder object
                                   % and save it in 'GHADEER' as a column vector
save('my_recording.mat','GHADEER');
load('my_recording.mat')
sound(GHADEER, Fs3);
L = length(GHADEER);
Y = fft(GHADEER');                 % DFT of the signal stored in 'GHADEER'
P2 = abs(Y/L);
P1 = P2(1:L/2+1);                  % first half of the magnitude spectrum
P1(2:end-1) = 2*P1(2:end-1);       % double the interior values (single-sided)
f3 = Fs3*(0:(L/2))/L;
subplot(5,1,3);
plot(GHADEER);
title('Ghadeer recording');
grid on
STUDENT 4:
Fs4 = 12000;                       % Set your sampling frequency
recObj = audiorecorder(Fs4, 8, 1); % Create an audio recorder object
disp('Start speaking.')
recordblocking(recObj, 6);         % record for a duration of 6 seconds
disp('End of Recording.');
get(recObj)
HARIS = getaudiodata(recObj);      % extract the audio from the recorder object
                                   % and save it in 'HARIS' as a column vector
save('my_recording.mat','HARIS');
load('my_recording.mat')
sound(HARIS, Fs4);
L = length(HARIS);
Y = fft(HARIS');                   % DFT of the signal stored in 'HARIS'
P2 = abs(Y/L);
P1 = P2(1:L/2+1);                  % first half of the magnitude spectrum
P1(2:end-1) = 2*P1(2:end-1);       % double the interior values (single-sided)
f4 = Fs4*(0:(L/2))/L;
subplot(5,1,4);
plot(HARIS);
title('Haris recording');
grid on
STUDENT 5:
Fs5 = 14000;                       % Set your sampling frequency
recObj = audiorecorder(Fs5, 8, 1); % Create an audio recorder object
disp('Start speaking.')
recordblocking(recObj, 6);         % record for a duration of 6 seconds
disp('End of Recording.');
get(recObj)
WAQAR = getaudiodata(recObj);      % extract the audio from the recorder object
                                   % and save it in 'WAQAR' as a column vector
save('my_recording.mat','WAQAR');
load('my_recording.mat')
sound(WAQAR, Fs5);
L = length(WAQAR);
Y = fft(WAQAR');                   % DFT of the signal stored in 'WAQAR'
P2 = abs(Y/L);
P1 = P2(1:L/2+1);                  % first half of the magnitude spectrum
P1(2:end-1) = 2*P1(2:end-1);       % double the interior values (single-sided)
f5 = Fs5*(0:(L/2))/L;
subplot(5,1,5);
plot(WAQAR);
title('Waqar recording');
grid on

Step 2: DFT magnitude spectrum of audio recording


MATLAB CODE:
1) Single-sided
L = length(MARYAM);
Y = fft(MARYAM');
P2 = abs(Y/L);
P1 = P2(1:L/2+1);
P1(2:end-1) = 2*P1(2:end-1);
f1 = Fs1*(0:(L/2))/L;
figure(1);
subplot(5,1,1);
plot(f1,P1,'m')
grid on
title('Single-Sided Amplitude Spectrum of X(t)')
xlabel('f (Hz)')
ylabel('|P1(f)|')
L = length(HAJIRA);
Y = fft(HAJIRA');
P2 = abs(Y/L);
P1 = P2(1:L/2+1);
P1(2:end-1) = 2*P1(2:end-1);
f2 = Fs2*(0:(L/2))/L;
subplot(5,1,2);
plot(f2,P1,'b')
grid on
title('Single-Sided Amplitude Spectrum of X(t)')
xlabel('f (Hz)')
ylabel('|P1(f)|')
L = length(GHADEER);
Y = fft(GHADEER');
P2 = abs(Y/L);
P1 = P2(1:L/2+1);
P1(2:end-1) = 2*P1(2:end-1);
f3 = Fs3*(0:(L/2))/L;
subplot(5,1,3);
plot(f3,P1,'k')
grid on
title('Single-Sided Amplitude Spectrum of X(t)')
xlabel('f (Hz)')
ylabel('|P1(f)|')
L = length(HARIS);
Y = fft(HARIS');
P2 = abs(Y/L);
P1 = P2(1:L/2+1);
P1(2:end-1) = 2*P1(2:end-1);
f4 = Fs4*(0:(L/2))/L;
subplot(5,1,4);
plot(f4,P1,'r')
grid on
title('Single-Sided Amplitude Spectrum of X(t)')
xlabel('f (Hz)')
ylabel('|P1(f)|')
L = length(WAQAR);
Y = fft(WAQAR');
P2 = abs(Y/L);
P1 = P2(1:L/2+1);
P1(2:end-1) = 2*P1(2:end-1);
f5 = Fs5*(0:(L/2))/L;
subplot(5,1,5);
plot(f5,P1,'g')
grid on
title('Single-Sided Amplitude Spectrum of X(t)')
xlabel('f (Hz)')
ylabel('|P1(f)|')
sgtitle('DFT of individual Audios')

2) Double-sided
figure(2);
subplot(5,1,1)
L1 = length(MARYAM);
Y1 = fftshift(fft(MARYAM));
P2 = abs(Y1/L1);
P1 = P2;
fa = (-Fs1/2):(Fs1/L1):(Fs1/2-Fs1/L1);
plot(fa,P1,'m')
sgtitle('DFT Magnitude Spectrum of Individual Audio Signals (Two Sided)')
xlabel('f (Hz)')
ylabel('|P1(f)|')
subplot(5,1,2)
L2 = length(HAJIRA);
Y2 = fftshift(fft(HAJIRA));
P2 = abs(Y2/L2);
P1 = P2;
fb = (-Fs2/2):(Fs2/L2):(Fs2/2-Fs2/L2);
plot(fb,P1,'k')
xlabel('f (Hz)')
ylabel('|P1(f)|')
subplot(5,1,3)
L3 = length(GHADEER);
Y3 = fftshift(fft(GHADEER));
P2 = abs(Y3/L3);
P1 = P2;
fC = (-Fs3/2):(Fs3/L3):(Fs3/2-Fs3/L3);
plot(fC,P1,'b')
xlabel('f (Hz)')
ylabel('|P1(f)|')
subplot(5,1,4)
L4 = length(HARIS);
Y4 = fftshift(fft(HARIS));
P2 = abs(Y4/L4);
P1 = P2;
fd = (-Fs4/2):(Fs4/L4):(Fs4/2-Fs4/L4);
plot(fd,P1,'g')
xlabel('f (Hz)')
ylabel('|P1(f)|')
subplot(5,1,5)
L5 = length(WAQAR);
Y5 = fftshift(fft(WAQAR));
P2 = abs(Y5/L5);
P1 = P2;
fe = (-Fs5/2):(Fs5/L5):(Fs5/2-Fs5/L5);
plot(fe,P1,'r')
xlabel('f (Hz)')
ylabel('|P1(f)|')
Step 3: Create composite signal
Fs = 8000;
composite_data = [MARYAM; HAJIRA; GHADEER; HARIS; WAQAR];
save('composite_recording.mat', 'composite_data');
sound(composite_data, Fs);
a) Single-sided composite signal DFT magnitude spectrum
Lq = length(composite_data);
Yq = fft(composite_data');
P2 = abs(Yq/Lq);
P1 = P2(1:Lq/2+1);
P1(2:end-1) = 2*P1(2:end-1);
fl = Fs*(0:(Lq/2))/Lq;
plot(fl,P1)
grid on
title('DFT Magnitude spectrum of Composite(single sided) ')
xlabel('f (Hz)')
ylabel('|P1(f)|')
b) Double-sided composite signal DFT magnitude spectrum
Lp = length(composite_data);
Yp = fftshift(fft(composite_data));
P2 = abs(Yp/Lp);
P1 = P2;
fp = (-Fs/2):(Fs/Lp):(Fs/2-Fs/Lp);
plot(fp,P1,'g')
xlabel('f (Hz)')
ylabel('|P1(f)|')
sgtitle('DFT magnitude spectrum of composite(double sided)')

CT COMPOSITE AUDIO SIGNAL:


MARYAM = composite_data(1:length(MARYAM));
HAJIRA = composite_data(length(MARYAM)+1:length(MARYAM)+length(HAJIRA));
GHADEER = composite_data(length(MARYAM)+length(HAJIRA)+1:length(MARYAM)+length(HAJIRA)+length(GHADEER));
HARIS = composite_data(length(MARYAM)+length(HAJIRA)+length(GHADEER)+1:length(MARYAM)+length(HAJIRA)+length(GHADEER)+length(HARIS));
WAQAR = composite_data(length(MARYAM)+length(HAJIRA)+length(GHADEER)+length(HARIS)+1:end);
t = 0:1/Fs:(length(composite_data)-1)/Fs;  % Creation of time axis
% Plot the composite data in continuous time (sample indices are divided
% by Fs so every segment uses the same time axis in seconds)
hold on;
plot(t, composite_data, 'm');
plot((1:length(MARYAM))/Fs, MARYAM, 'r');
plot((length(MARYAM)+1:length(MARYAM)+length(HAJIRA))/Fs, HAJIRA, 'k');
plot((length(MARYAM)+length(HAJIRA)+1:length(MARYAM)+length(HAJIRA)+length(GHADEER))/Fs, GHADEER, 'b');
plot((length(MARYAM)+length(HAJIRA)+length(GHADEER)+1:length(MARYAM)+length(HAJIRA)+length(GHADEER)+length(HARIS))/Fs, HARIS, 'r');
plot((length(MARYAM)+length(HAJIRA)+length(GHADEER)+length(HARIS)+1:length(composite_data))/Fs, WAQAR, 'g');
hold off;
title('Composite CT Audio Signal');
xlabel('Time (sec)');
ylabel('Audio Data');

DT COMPOSITE SIGNAL:
stem(1:length(composite_data), composite_data, 'm');
hold on;
stem(1:length(MARYAM), MARYAM, 'r');
stem(length(MARYAM)+1:length(MARYAM)+length(HAJIRA), HAJIRA, 'k');
stem(length(MARYAM)+length(HAJIRA)+1:length(MARYAM)+length(HAJIRA)+length(GHADEER), GHADEER, 'b');
stem(length(MARYAM)+length(HAJIRA)+length(GHADEER)+1:length(MARYAM)+length(HAJIRA)+length(GHADEER)+length(HARIS), HARIS, 'r');
stem(length(MARYAM)+length(HAJIRA)+length(GHADEER)+length(HARIS)+1:length(composite_data), WAQAR, 'g');
hold off;
title('Composite DT Audio Signal');
xlabel('Sample number');
ylabel('Audio Data');

Step 4:
Theoretical calculations (shared separately)
IMPULSE RESPONSE:
% Impulse response of the modulator system
t = linspace(0, 1, 1000);          % Time vector from 0 to 1 second
carrier_freq = 10^9;               % 1 GHz carrier
x_t = cos(30*pi*t);                % message signal
c_t = cos(2*pi*carrier_freq*t);    % carrier signal
y_t = x_t .* c_t;                  % modulated signal
Y_f = fft(y_t);
impulse_response = ifft(Y_f);      % inverse transform (recovers y_t)
% Note: the 1 GHz carrier is far above the Nyquist rate of this 1000-point
% time grid, so the plotted waveform is aliased and illustrative only.
figure;
plot(t, impulse_response);
xlabel('Time (s)');
ylabel('Signal amplitude');
title('Impulse Response of the Modulator System');

FREQUENCY RESPONSE:
n = -10:10;
s = zeros(size(n));
s(n==1) = pi;                      % impulses at +/- the carrier frequency
s(n==-1) = pi;
stem(n, s);
xlabel('F (GHz)');
ylabel('Magnitude');
title('Frequency response of modulator system')
Questions:
What is the minimum sampling frequency for recording human audio signals?
According to the Nyquist theorem, for an accurate digital representation of a sound wave the sample rate must be at least twice the highest frequency to be recorded. Since the highest sound a human can hear has a frequency of 20 kHz, the minimum sample rate must be 40 kHz to digitize this frequency. Therefore, the minimum sampling frequency for recording human audio signals is 40 kHz.

What do different commands in Step-1 do?


• Fs1 = 3000;
This line sets the sampling frequency of the audio signal to 3000 Hz.
• recObj = audiorecorder(Fs1, 8, 1);
This line creates an audio recorder object called `recObj` using the
`audiorecorder` function. It takes three arguments: the sampling frequency
(`Fs`), the number of bits per sample (8 bits), and the number of audio channels
(1 channel)
• disp('Start speaking.') This line displays the message "Start speaking." on the
command window.
• recordblocking(recObj, 6);
This line records audio data for a duration of 6 seconds using the
`recordblocking` function. It takes two arguments: the audio recorder object
(`recObj`) and the duration in seconds
• disp('End of Recording.');
This line displays the message "End of Recording." on the command window.
• get(recObj)
This line retrieves information about the audio recorder object `recObj` using
the `get` function. It returns various properties and their values, such as the
sampling frequency and the number of channels
• MARYAM = getaudiodata(recObj);
This line extracts the audio data from the audio recorder object `recObj` using
the `getaudiodata` function. It returns the recorded audio data as a column
vector, which is assigned to the variable `MARYAM`.
• save('my_recording.mat','MARYAM'); This line saves the variable `MARYAM`
containing the recorded audio data to a file named "my_recording.mat" using
the `save` function. The data is stored in MAT-file format, which can be loaded
and accessed later.
• sound(MARYAM, Fs1)
This line plays the audio data stored in the variable `MARYAM` using the `sound`
function. The audio is played back at the specified sampling frequency `Fs1`.

What is the relation between the number of samples, signal duration and
sampling frequency in Step-1? E.g., if we use Fs=2000 and record 6 seconds
duration CT signal, your signal will be saved in how many discrete (sample)
points?

The number of samples is directly proportional to the signal duration and the
sampling frequency. The formula for calculating the number of samples is:

Number of samples = Signal duration x Sampling frequency

In this example, if we use Fs=2000 and record 6 seconds duration CT signal, the
signal will be saved in 12000 discrete (sample) points.
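This relation can be checked directly in MATLAB (a minimal check using the Fs=2000, 6-second example from the question):

```matlab
% Number of samples = signal duration x sampling frequency
Fs = 2000;   % sampling frequency (Hz)
T  = 6;      % signal duration (s)
N  = Fs * T  % 12000 discrete sample points
```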

What do different commands in Step-2 do?


• L1 = length(MARYAM);
This command calculates the length of the signal stored in the variable MARYAM
and assigns it to the variable L1.
• Y1 = fftshift(fft(MARYAM));
This command takes the Fast Fourier Transform (FFT) of the signal MARYAM,
then shifts the zero-frequency component to the center of the spectrum. The
result is stored in Y1.
• P2 = abs(Y1/L1);
This command calculates the magnitude spectrum by taking the absolute value
of each element of Y1 and dividing it by L1. The result is stored in P2.
• P1 = P2;
This command assigns the values of P2 to P1.
• fa = (-Fs1/2):(Fs1/L1):(Fs1/2-Fs1/L1);
This command generates the frequency axis fa for the spectrum. It starts at
-Fs1/2, ends at Fs1/2-Fs1/L1, and has a step size of Fs1/L1, where Fs1 is the
sampling frequency.
• plot(fa, P1, 'm')
This command plots the magnitude spectrum P1 against the frequency axis fa
with a color of magenta ('m').
• sgtitle('DFT Magnitude Spectrum of Individual Audio Signals (Two Sided)')
This command adds a super title to the plot.
• xlabel('f (Hz)'):
This command adds a label to the x-axis of the plot.
• ylabel('|P1(f)|'):
This command adds a label to the y-axis of the plot.
• subplot(5,1,2):
This command creates a subplot grid with 5 rows, 1 column, and activates the
second subplot for further plotting.

What is the main frequency present in your audio signal?


The main frequency present in the audio signal is 8000 Hz.

If Fs is your sampling frequency, what is the maximum frequency which can appear in the DFT or be stored while sampling?
The maximum frequency that can be represented in the DFT or stored while
sampling depends on the sampling frequency Fs. It is given by the Nyquist
frequency, which is half of the sampling frequency (also known as the folding
frequency). Frequencies above the Nyquist frequency will be aliased and will
appear as lower frequencies in the spectrum.
The Nyquist frequency is given by:

Nyquist frequency = Fs / 2

What are your observations while listening to the composite audio signal in
Step-3 at Fs=8000? Why are the characteristics of individual audio signals
different now?
When listening to the composite audio signal at Fs=8000, the individual
segments no longer sound the way they were recorded. The composite signal is a
concatenation of five recordings, but each one was made at a different sampling
frequency, while playback uses the single rate of 8000 Hz. A segment recorded
at Fs<8000 is played back faster and at a higher pitch than the original, and a
segment recorded at Fs>8000 is played back slower and at a lower pitch, because
the same samples are reproduced at a different rate than they were captured.
In addition, each recording has its own frequency content and amplitude, so
some parts of the composite sound louder than others or emphasize different
frequencies. These differences explain why the characteristics of the
individual audio signals are different in the composite signal.

How do the characteristics of audio signals recorded at Fs<8000 differ from those recorded at Fs>8000?
The characteristics of audio signals recorded at Fs<8000 differ from those
recorded at Fs>8000 in several ways. The most significant difference is the
frequency range that can be captured. When an audio signal is sampled at a
lower frequency (Fs<8000), it can only capture frequencies up to half of the
sampling rate (the Nyquist frequency); for example, if the sampling rate is
4000 Hz, the highest frequency that can be captured is 2000 Hz. In contrast,
when an audio signal is sampled at a higher frequency (Fs>8000), it can capture
a wider range of frequencies; for example, at a sampling rate of 16000 Hz the
highest capturable frequency is 8000 Hz. Another difference is the amount of
data captured: at a lower sampling frequency, fewer data points are captured
per second, which can result in a loss of detail and accuracy in the audio
signal.
Can you tell which frequency belongs to which message signal from the
composite signal frequency domain representation?
We changed the color of each individual signal within the composite plot, so we
can tell which frequency belongs to which message signal from the composite
signal's frequency-domain representation.

Propose a solution that can be used to identify the main frequencies of all five
audio signals from the composite signal, and the only information you have is
the signal duration?
To determine the main frequency present in the audio signal, you can
examine the frequency domain representation (P1) obtained from the FFT.
The peak amplitude in P1 corresponds to the dominant frequency
component in the signal, indicating the main frequency present.

➢ MARYAM (110 Hz)
➢ HAJRA (260 Hz)
➢ GHADEER (250 Hz)
➢ HARIS (280 Hz)
➢ WAQAR (500 Hz)
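The peak-picking approach described above can be sketched in MATLAB as follows, applied here to the Step-1 variables MARYAM and Fs1 (any segment and its sampling frequency can be substituted):

```matlab
% Locate the dominant frequency of one recorded segment from its
% single-sided magnitude spectrum.
L  = length(MARYAM);
Y  = fft(MARYAM');
P2 = abs(Y/L);
P1 = P2(1:floor(L/2)+1);           % single-sided spectrum
P1(2:end-1) = 2*P1(2:end-1);
f  = Fs1*(0:floor(L/2))/L;         % frequency axis (Hz)
[~, idx] = max(P1);                % index of the largest spectral peak
main_freq = f(idx)                 % dominant frequency in Hz
```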

Result:
Upon implementing the modulation system, we observed that the modulated signal
carries the information signal encoded in its amplitude variations. This modulated
signal can be transmitted over the communication channel and successfully
received at the receiver end.
Conclusion:
Through this lab experiment, we have gained insights into the fundamental
concepts of sampling theorem and modulation techniques. We have learned about
the importance of sampling frequency in accurately representing signals and how
modulation enables efficient data transmission over communication channels. This
hands-on experience has deepened our understanding of signal processing and
communication systems, laying a solid foundation for further exploration in this
field.
