Lab 02
1 Overview
This laboratory covers sampling, aliasing, and reconstruction. Sampling is a critical step in
nearly all signal processing applications, and it must be applied properly to avoid aliasing and to
allow faithful reconstruction of the continuous-time signal. Sampling is investigated here by first
considering a rotating disk illuminated by a strobe light. After applying sampling to this
mechanical system, we consider one- and two-dimensional signals. In the one-dimensional case,
we employ chirp signals, whose frequency varies with time. Such signals can be visualized in the
time and frequency domains, as well as listened to through speakers. Lastly, we use images as
two-dimensional signals to study the effects of sampling and reconstruction. In all cases, the
observed results will be compared to those predicted by the Sampling Theorem.
2 Procedures
2.1 Strobe Sampling of a Rotating Disk
The effects of sampling and aliasing can be demonstrated with a rotating disk and a strobe light.
Besides their use in entertainment, strobe lights can be used to determine the frequency of
mechanical motion. In this experiment, a disk is affixed to a motor that rotates at a constant
speed. The disk is marked with an arrow representing a phasor, so the system represents a
rotating phasor. A strobe light illuminates the rotating phasor at a fixed frequency. Using this
setup, complete the following and include the results in your report:
a) Vary the frequency of the strobe light to determine the frequency of rotation of the phasor
and motor. What should the apparent motion of the phasor be when the strobe light and
the phasor are at the same frequency? Determine what other strobe frequencies give the
same result. What is the relation between these frequencies?
b) After determining the frequency of the phasor, increase the frequency of the strobe light
so that it is slightly greater than the frequency of the phasor. Observe the apparent motion
of the phasor. Then decrease the frequency of the strobe light so that it is slightly less than
the frequency of the phasor, and observe the apparent motion again. Can you explain this
behavior? Is aliasing observed? Is folding observed?
c) Derive an expression for the complex phasor p[n] that gives the position of the phasor at
the nth flash, assuming that the phasor is initially pointing straight up at n = 0. Draw a
two-sided spectrum for three cases: (1) the strobe frequency equals the phasor frequency,
(2) the strobe frequency is slightly greater than the phasor frequency, and (3) the strobe
frequency is slightly less than the phasor frequency. Explain the observed apparent
rotation (direction and speed) of the phasor based on the spectra. (A sketch of the setup
for the derivation follows this list.)
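As a starting point for part (c), here is a sketch of the setup only, under the assumptions that the disk rotates at $f_d$ revolutions per second, the strobe flashes at $f_s$ flashes per second, and the flash times are therefore $t_n = n/f_s$ (the symbols $f_d$ and $f_s$ are introduced here just for illustration):
$$p[n] = e^{\,j\left(2\pi f_d t_n + \pi/2\right)} = e^{\,j\left(2\pi \frac{f_d}{f_s}\, n + \frac{\pi}{2}\right)}$$
The $\pi/2$ term encodes the assumption that the phasor points straight up at $n = 0$; the apparent rotation depends on the ratio $f_d / f_s$.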
The argument of the cosine is also the exponent of the complex exponential, so the angle of this
signal is the exponent $(2\pi f_0 t + \phi)$. This angle function changes linearly versus time, and its time
derivative is $2\pi f_0$, which equals the constant frequency of the cosine in rad/sec.
A generalization is available if we adopt the following notation for the class of signals
represented by a cosine function with a time-varying angle:
$$x(t) = A\cos(\psi(t)) = \Re e\left\{A e^{\,j\psi(t)}\right\} \qquad (2)$$
The time derivative of the angle from (2) gives a frequency in rad/sec,
$$\omega_i(t) = \frac{d}{dt}\,\psi(t) \quad \text{(rad/sec)}$$
but we prefer units of hertz, so we divide by $2\pi$ to define the instantaneous frequency:
$$f_i(t) = \frac{1}{2\pi}\,\frac{d}{dt}\,\psi(t) \quad \text{(Hz)} \qquad (3)$$
For a linear-FM (chirp) signal, the angle function is quadratic in time:
$$\psi(t) = 2\pi\mu t^2 + 2\pi f_0 t + \phi$$
The (scaled) derivative of $\psi(t)$ yields an instantaneous frequency, equation (3), that changes
linearly versus time:
$$f_i(t) = 2\mu t + f_0$$
The slope of $f_i(t)$ is equal to $2\mu$ and its intercept is equal to $f_0$. If the signal starts at time
$t = 0$ seconds, then $f_0$ is also the starting frequency. The frequency variation produced by such a
time-varying angle is called frequency modulation. This kind of signal is an example of a
frequency modulated (FM) signal. More generally, we often consider them to be part of a larger
class called angle modulation signals. Finally, since the linear variation of the frequency can
produce an audible sound similar to a siren or a chirp, the linear-FM signals are also called
“chirps.”
The following MATLAB code will synthesize a chirp:
fsamp = 11025;
dt = 1/fsamp;
dur = 1.8;
tt = 0 : dt : dur;
psi = 2*pi*(0.25 + 200*tt + 500*tt.*tt);
xx = real( 7.7*exp(j*psi) );
soundsc( xx, fsamp );
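To relate the code to the formulas above, the instantaneous frequency can also be estimated numerically. Here is a sketch, assuming the variables psi, dt, and tt from the code above are still in the workspace:
fi = diff(psi) / (2*pi*dt);      %-- approximate (1/2pi) d(psi)/dt; one sample shorter than psi
plot( tt(1:end-1), fi )          %-- instantaneous frequency versus time
xlabel('time (s)'), ylabel('instantaneous frequency (Hz)')
The result should be a straight line, consistent with $f_i(t) = 2\mu t + f_0$.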
(a) Determine the total duration of the synthesized signal in seconds, and also the length of
the tt vector (number of samples).
(b) In MATLAB, signals can only be synthesized by evaluating the signal’s defining formula
at discrete instants of time. These are called samples of the signal. For the chirp we do the
following:
$$x(t_n) = A\cos(2\pi\mu t_n^2 + 2\pi f_0 t_n + \phi)$$
where $t_n = nT_s$ represents the discrete time instants. In the MATLAB code above, what are the
values of $t_n$ (i.e., what is $T_s$)? What are the values of $A$, $\mu$, $f_0$, and $\phi$?
(c) Determine the range of frequencies (in hertz) that will be synthesized by the MATLAB
script above. Make a sketch or plot, by hand or in MATLAB, of the instantaneous frequency
versus time. What are the minimum and maximum frequencies that will be heard?
(d) Listen to the signal to determine whether the signal’s frequency content is increasing or
decreasing (use soundsc()). Notice that soundsc() needs to know the sampling
rate at which the signal samples were created. For more information do help sound
and help soundsc.
Use the code above as a starting point in order to write a MATLAB function that will synthesize a
“chirp” signal according to the following comments:
function [xx,tt] = mychirp( f1, f2, dur, fsamp )
%MYCHIRP generate a linear-FM chirp signal
%
% usage: xx = mychirp( f1, f2, dur, fsamp )
%
% f1 = starting frequency
% f2 = ending frequency
% dur = total time duration
% fsamp = sampling frequency (OPTIONAL: default is
% 11025)
%
% xx = (vector of) samples of the chirp signal
% tt = vector of time instants for t=0 to t=dur
%
if( nargin < 4 ) %-- Allow optional input argument
fsamp = 11025;
end
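Here is one possible sketch of the remaining function body, following the chirp formulas above; the amplitude and starting phase are arbitrary choices here, and your own implementation may differ:
tt = 0 : (1/fsamp) : dur;         %-- time instants from t=0 to t=dur
mu = (f2 - f1) / (2*dur);         %-- slope of fi(t) is 2*mu = (f2-f1)/dur
psi = 2*pi*( mu*tt.*tt + f1*tt ); %-- quadratic angle function (starting phase taken as 0)
xx = real( exp(1j*psi) );         %-- chirp samples with amplitude A = 1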
As a test case, generate a chirp sound whose frequency starts at 2500 Hz and ends at 500 Hz; its
duration should be 1.5 sec. Listen to the chirp using the soundsc function. Include in your
report a listing of the mychirp.m function that you wrote.
For a constant-frequency sinusoid, the spectrum consists of two components, one at $2\pi f_0$, the other at $-2\pi f_0$. For more
complicated signals, the spectrum may be very interesting and, in the case of FM, the spectrum is
considered to be time-varying. One way to represent the time-varying spectrum of a signal is the
spectrogram (see Chapter 3 in the text). A spectrogram is found by estimating the frequency
content in short sections of the signal. The magnitude of the spectrum over individual sections is
plotted as intensity or color on a two-dimensional plot versus frequency and time.
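As a sketch, assuming the Signal Processing Toolbox is available and that a chirp vector xx has been synthesized at the sampling rate fsamp as above, a spectrogram can be displayed with:
spectrogram( xx, hamming(256), 128, 256, fsamp, 'yaxis' );  %-- magnitude versus time and frequency
A shorter window gives finer time resolution but coarser frequency resolution.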
[1] The variables $t_1$ and $t_2$ do not denote time; they represent spatial dimensions, so their units would be inches or some other unit of length.
For monochrome images, the signal $x(t_1, t_2)$ would be a scalar function of the two spatial variables [1],
but for color images the function $x(\cdot,\cdot)$ would have to be a vector-valued function of the two
variables [2]. Moving images (such as TV) would add a time variable to the two spatial variables.
Monochrome images are displayed using black and white and shades of gray, so they are called
grayscale images. Here, we will consider only sampled gray-scale still images. A sampled gray-
scale still image would be represented as a two-dimensional array of numbers of the form
$$x[m,n] = x(mT_1, nT_2), \qquad 1 \le m \le M, \quad 1 \le n \le N$$
where $T_1$ and $T_2$ give the sample spacing in the horizontal and vertical directions. Typical
values of $M$ and $N$ are 256 or 512; e.g., a 512 × 512 image has nearly the same resolution as
a standard TV image. In MATLAB we can represent an image as a matrix, so it would consist of
$M$ rows and $N$ columns. The matrix entry at $(m, n)$ is the sample value $x[m, n]$, called a
pixel (short for picture element).
An important property of light images such as photographs and TV pictures is that their values
are always non-negative and finite in magnitude; i.e.,
$$0 \le x[m,n] \le X_{\max} < \infty$$
This is because light images are formed by measuring the intensity of reflected or emitted light,
which must always be a positive finite quantity. When stored in a computer or displayed on a
monitor, the values of $x[m,n]$ have to be scaled relative to a maximum value $X_{\max}$. Usually an
eight-bit integer representation is used. With 8-bit integers, the maximum value (in the computer)
is $X_{\max} = 2^8 - 1 = 255$, and there are $2^8 = 256$ different gray levels for the display, from 0 to
255. NOTE: The $[0, X_{\max}]$ range is often scaled to $[0, 1]$ by dividing all values by $X_{\max}$.
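A one-line sketch of this scaling, assuming the image is stored in an array xx with values in [0, 255]:
xx01 = double(xx) / 255;   %-- map [0,255] values to [0,1]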
[2] For example, an RGB color system needs three values at each spatial location: one for red, one for green, and one for blue.
In MATLAB, a grayscale colormap can be created with gray(256), which gives a 256×3 matrix where all 3 columns are equal. The function
colormap(gray(256)) creates a linear mapping, so that each input pixel amplitude
is rendered with a screen intensity proportional to its value (assuming the monitor is
calibrated). For our lab experiments, non-linear color mappings would introduce an extra
level of complication, so they will not be used.
When the image values lie outside the range [0,255], or when the image is scaled so that
it only occupies a small portion of the range [0,255], the display may have poor quality.
In this lab, we will use imshow to display images, which automatically rescales the
image to the appropriate range.
In order to probe your understanding of image display, do the following simple displays:
a) Load and display the 326 × 426 “lighthouse” image from lighthouse.mat. The
command load lighthouse will put the sampled image into the array xx. Use
whos to check the size of xx after loading. When you display the image it might be
necessary to set the colormap via colormap(gray(256)).
b) Use the colon operator to extract the 200th row of the “lighthouse” image, and make a
plot of that row as a 1-D discrete-time signal (a command sketch for parts (a) and (b) follows this list).
xx200 = xx(200,:);
Observe that the range of signal values is between 0 and 255. Which values represent
white and which ones black? Can you identify the region where the 200th row crosses
the fence?
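A minimal command sketch for parts (a) and (b), assuming lighthouse.mat is on the MATLAB path and that imshow is available:
load lighthouse                %-- puts the sampled image into the array xx
whos xx                        %-- check the size (expect 326 x 426)
imshow( xx, [0 255] )          %-- display as a grayscale image
xx200 = xx(200,:);             %-- extract the 200th row
figure, plot( xx200 )          %-- view the row as a 1-D signal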
[3] For this example, the sampling periods would be $T_1 = T_2 = 1/300$ inches.
[4] The Sampling Theorem applies to digital images, so there is a Nyquist Rate that depends on the maximum spatial frequency in the image.
c) This part is challenging: explain why aliasing occurs in the down-sampled lighthouse
image by using a “frequency domain” explanation. In other words, estimate the frequency of the
features that are being aliased. Give this frequency as a number in cycles per pixel. (Note
that the fence provides a sort of “spatial chirp” where the spatial frequency increases
from left to right.) Can you relate your frequency estimate to the Sampling Theorem?
You might try zooming in on a very small region of both the original and downsampled images.
Figure 1: 2-D Interpolation broken down into row and column operations: the gray dots indicate
repeated data values created by a zero-order hold; or, in the case of linear interpolation, they are the
interpolated values.
For these reconstruction experiments, use the lighthouse image down-sampled by a factor
of 3. Generate it by loading the image from lighthouse.mat, which puts the original image in
the array xx, and store the down-sampled image in the variable xx3. The objective will be to
reconstruct an approximation to the original 326 × 426 lighthouse image from the smaller
down-sampled image.
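One way to create the down-sampled image (a sketch that keeps every third sample in each direction):
load lighthouse                  %-- original image in the array xx
xx3 = xx(1:3:end, 1:3:end);      %-- down-sample by 3 along rows and columns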
a) The simplest interpolation would be reconstruction with a square pulse which produces a
“zero-order hold.” Here is a method that works for a one-dimensional signal (i.e., one
row or one column of the image), assuming that we start with a row vector xr1, and the
result is the row vector xr1hold.
xr1 = (-2).^(0:6);
L = length(xr1);
nn = ceil((0.999:1:4*L)/4);   %<-- round up to the integer part
xr1hold = xr1(nn);
Plot the vector xr1hold to verify that it is a zero-order hold version derived from xr1.
Explain what values are contained in the indexing vector nn. If xr1hold is treated as
an interpolated version of xr1, then what is the interpolation factor? Your lab report
should include an explanation for this part, but plots are optional—use them if they
simplify the explanation.
b) Now return to the down-sampled lighthouse image, and process all the rows of xx3
to fill in the missing points. Use the zero-order hold idea from part (a), but do it for an
interpolation factor of 3. Call the result xholdrows. Display xholdrows as an
image, and compare it to the downsampled image xx3; compare the size of the images as
well as their content.
c) Now process all the columns of xholdrows to fill in the missing points in each column
and call the result xhold. Compare the result (xhold) to the original image
lighthouse. Include your code for parts (b) and (c) in the lab report.
d) Linear interpolation can be done in MATLAB using the interp1 function (that’s
“interp-one”). When unsure about a command, use help. Its default mode is linear
interpolation, which is equivalent to using the ’*linear’ option, but interp1 can
also do other types of polynomial interpolation. Here is an example on a 1-D signal:
n1 = 0:6;
xr1 = (-2).^n1;
tti = 0:0.1:6; %-- locations between the n1 indices
xr1linear = interp1(n1,xr1,tti);   %-- function is INTERP-ONE
stem(tti,xr1linear)
For the example above, what is the interpolation factor when converting xr1 to
xr1linear?
e) In the case of the lighthouse image, you need to carry out a linear interpolation
operation on both the rows and columns of the down-sampled image xx3. This requires
two calls to the interp1 function, because one call will only process all the columns of
a matrix [5]. Name the interpolated output image xxlinear. Include your code for this
part in the lab report.
f) Compare xxlinear to the original image lighthouse. Comment on the visual
appearance of the “reconstructed” image versus the original; point out differences and
similarities. Can the reconstruction (i.e., zooming) process remove the aliasing effects
from the down-sampled lighthouse image?
g) Compare the quality of the linear interpolation result to the zero-order hold result. Point
out regions where they differ and try to justify this difference by estimating the local
frequency content. In other words, look for regions of “low-frequency” content and
“high-frequency” content and see how the interpolation quality is dependent on this
factor. A couple of questions to think about: Are edges low frequency or high frequency
features? Are the fence posts low frequency or high frequency features? Is the
background a low frequency or high frequency feature?
Comment: You might use MATLAB’s zooming feature to show details in small patches of the
output image. However, be careful because zooming does its own interpolation, probably a zero-
order hold.
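If you want to examine a patch without the viewer's own interpolation, you can index and display a small sub-block directly; here is a sketch in which the row and column ranges are arbitrary:
imshow( xhold(100:160, 200:260), [0 255] )   %-- small patch of the zero-order-hold result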
3 Report Components
Your report should be neatly prepared and have a cover sheet, short overview, discussion section,
and conclusions. The body of the report should answer all questions asked and include necessary
plots and figures as well as MATLAB code. Please make sure all plots, figures, and code are
appropriately referenced in the body of the report.
[5] Use a matrix transpose in between the interpolation calls. The transpose will turn rows into columns.