Example Scripts - What has been covered in Class:

Contents
Lab 2
Lab 3
Lab 4
Lab 5
Lab 6
Lab 2:
Introduction

Current Directory
Remember to set your ‘current folder’ to the folder that you created last week to
hold all your MATLAB scripts and results for this module. You should also add any
other folders that contain images that you might want to use to the search path
that MATLAB uses (under the ‘Home’ menu ribbon; ‘Set path’ button).

Scripts
Rather than simply running all commands interactively from the Command
window, you should start to create script (‘.m’) files to hold your commands – i.e.
start creating ‘programs’ within MATLAB.

Help Information
You can search for help on MATLAB commands and topics from within the
MATLAB environment itself. This help information is also available online from the
Mathworks website. For ease of access, links to the online help are provided in
the text below (however, you can also search for the same terms etc. from within
the MATLAB environment).

Reading and Displaying Images

In Lab_01, reading and displaying of images was introduced. Some further
examples are now provided.

Create a script that does the following:


 Reads three images into three separate variables using the ‘imread’
function. (You should read the help information on imread for further
details on the range of parameters available with imread:
https://www.mathworks.com/help/matlab/ref/imread.html ).
 Display the images in three separate ‘figures’ - read the help information
on ‘figure’: https://www.mathworks.com/help/matlab/ref/figure.html .
 Modify properties of the figures. There are a number of properties that can
be changed, such as the figure title, etc. These can be modified either
when the figure is created or by using the figure object reference that is
returned when the figure is created:
f1 = figure('Name', 'Test figure'); % set the figure name
f1 = figure; % f1 holds a 'reference' to the figure which can be used later

f1.Name = 'My figure'; % changes the title of the figure

figure(f1); % makes f1 the 'current' figure

 Create a fourth figure that uses ‘subplot’ to display all three images in one
figure (help on subplot:
https://www.mathworks.com/help/matlab/ref/subplot.html ). Display the
three in a row and then in a column format.
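
For example, a possible sketch (the built-in MATLAB demo images are used here
as placeholders; substitute your own image files):

% Read three images into separate variables
imgA = imread('peppers.png');
imgB = imread('coins.png');
imgC = imread('cameraman.tif');

% Display each in its own named figure
f1 = figure('Name', 'Image A'); imshow(imgA);
f2 = figure('Name', 'Image B'); imshow(imgB);
f3 = figure('Name', 'Image C'); imshow(imgC);

% Fourth figure: all three in a row (use subplot(3,1,...) for a column layout)
f4 = figure('Name', 'All three');
subplot(1,3,1), imshow(imgA);
subplot(1,3,2), imshow(imgB);
subplot(1,3,3), imshow(imgC);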

Capturing Images from a Webcam

Video data can easily be captured from a webcam. To see a list of all available
webcams attached to a computer use the command:

webcamlist;

To create a webcam ‘object’ that can be used in your programs, assign one of the
webcams to a variable:

cam = webcam(1); % picks webcam number 1


...
cam = webcam; % if only one webcam is connected

Note: the camera can be ‘switched off’ by simply clearing the ‘cam’ variable:

clear cam; % or:
clear('cam');

Live video can be previewed and closed at any time using:

preview(cam);

closePreview(cam);

A single image frame can be captured using:

img = snapshot(cam);
Take a look at the help information for webcam, preview, closePreview and
snapshot.

Exercise:
Write a script that:
 Opens the webcam and previews live video
 Creates a figure to display a captured frame (with title ‘Snapshot’)
 Loops through e.g. five iterations of:
o Capturing a frame
o Displaying the frame
o Pausing (to allow the image to be viewed before the next iteration)
 Close the preview and turn the camera off.
(Note: when you run the script, you may need to move some windows around to
make sure that you can see the preview and snapshots on screen).
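
One possible outline for this script (a sketch, assuming a single connected
webcam and the MATLAB webcam support package; the pause length is an example):

cam = webcam; % open the camera
preview(cam); % show live video

f = figure('Name', 'Snapshot'); % figure for the captured frames
for k = 1:5
    img = snapshot(cam); % capture a frame
    figure(f), imshow(img); % display it in the 'Snapshot' figure
    pause(1); % allow time to view before the next iteration
end

closePreview(cam);
clear cam; % turn the camera off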

The script captures and displays colour images. Modify your script so that the
colour image is converted to a greyscale image (using ‘rgb2gray’). Then
(i) display the greyscale image rather than the colour image, and
(ii) display both colour and greyscale in separate figures.
Histograms

The histogram of an image can be obtained using:


h = imhist(img); % where ‘img’ is a previously read image

This generates a 1D array or vector called ‘h’ that holds a count of the number of
times each greylevel occurs in the image. If ‘imhist’ is used on its own (without
assigning the result to a variable), then a plot of the histogram will be displayed.
Take a look at the MATLAB help information for ‘imhist’.

Exercise:
 Write a script that:
o Reads an image into an image variable (e.g. ‘moon.bmp’ into ‘img’)
o Displays the image in a figure
o Calculates the histogram and puts the result into a variable (e.g. ‘h’)
 Run the script and examine the variable ‘h’ in the workspace:
o Double-click on the variable to see the values that it holds
o Note that each element is a count of the number of times each
greylevel occurs (although the array is indexed from 1 to 256, this
represents the greylevels 0 to 255).
 Add to the script and find the min and max pixel values in the image:
min(img(:)); % finds the minimum pixel value
max(img(:)); % finds the maximum pixel value

(Note that you need the ‘:’ in the code above – otherwise min will find the
minimum value in every column of the image and generate a vector of
these minima).
 Run the script and examine these values. There should be zeros in the
histogram (h) variable for any entries less than the min or greater than the
max.
 Extend the script to display the image histogram in a figure (remember: if
you use imhist on its own – in a similar way to imshow – a plot of the
histogram will be generated in a figure).
 Modify your script so that the image and the histogram are displayed side-
by-side in the same figure (use ‘subplot’).
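
A sketch of one possible script (assuming ‘moon.bmp’ from the module image
folder):

img = imread('moon.bmp');
h = imhist(img); % histogram counts, for inspection in the workspace
disp(min(img(:))); % minimum pixel value
disp(max(img(:))); % maximum pixel value

figure;
subplot(1,2,1), imshow(img);
subplot(1,2,2), imhist(img); % called without an output: plots the histogram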

Brightness and Contrast Adjustment

The brightness of an image can be changed by simply adding or subtracting a
greylevel amount to each pixel in the image. Contrast can be adjusted by
multiplying (or dividing) each pixel value by a factor.
Exercise
 Write a script that:
o Reads an image
o Reduces the contrast by dividing the pixel values by 2 (put the
result in a new image)
imNew = imOld / 2;

o Displays both images (the second should appear darker than the
original)
 Add to the script by calculating and displaying the images together with
their respective histograms in a grid in the same figure. The result should
be similar to the following:
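
One way to lay out this grid (a sketch; ‘moon.bmp’ is an assumed example file):

imOld = imread('moon.bmp');
imNew = imOld / 2; % reduced contrast

figure;
subplot(2,2,1), imshow(imOld);
subplot(2,2,2), imshow(imNew);
subplot(2,2,3), imhist(imOld);
subplot(2,2,4), imhist(imNew);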

 Note the different appearance of the two histograms.


 Modify the script:
o Divide the image by a factor of 4 rather than 2
o Add a value of 100 to all pixels
o Display the images and histograms in a grid as before
o The result should be similar to:
o Examine the different appearance of the resulting image and
histogram.
 The contrast of the image can be modified when the image is being
displayed (i.e. just for display – the original image data is not changed).
This is achieved by using an additional parameter to the ‘imshow’ function
(check the help information). To ‘stretch’ the contrast to the full range
(during display) you can use:
imshow(img, []);
% low and high display limits can also be specified within the []

 Write a new script that:


o Reads in two images
o Divides each by a factor of two
o Adds the images together and displays the final result
 Selected parts of an image can be processed, rather than the full image.
Remember that the ‘:’ can be used to select 2D regions of interest from a
matrix (or image), e.g.:
o Given a 4x5 matrix:
A = [10,12,14,16,18; 14,15,13,16,17; 16,17,14,16,17; 12,14,16,17,12];

o we can process just elements between rows 2 to 3 and columns 4 to 5;
e.g. set just these elements to 0:
A(2:3, 4:5) = 0;

o Try the example above interactively in MATLAB, observing the
workspace variable as the change is made.
 Duplicate the first script from this section (reducing the contrast by a
factor of 2), and then modify the script as follows (rather than reducing the
contrast of the whole image):
o Reduce just the top half of the image
o Reduce just the left-hand half
o Reduce a square in the centre of the image
o Note that you will need to obtain the size of the image to calculate
the parts of the image that are to be processed:
[rows, cols] = size(img1); % returns the number of rows and columns of the image
o The result should be similar to:
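
A sketch of one possible approach (assuming ‘moon.bmp’; the index ranges are
examples):

img1 = imread('moon.bmp');
[rows, cols] = size(img1);

imTop = img1; % top half reduced
imTop(1:floor(rows/2), :) = imTop(1:floor(rows/2), :) / 2;

imLeft = img1; % left-hand half reduced
imLeft(:, 1:floor(cols/2)) = imLeft(:, 1:floor(cols/2)) / 2;

imCentre = img1; % central region reduced
r = floor(rows/4)+1 : floor(3*rows/4);
c = floor(cols/4)+1 : floor(3*cols/4);
imCentre(r, c) = imCentre(r, c) / 2;

figure;
subplot(2,2,1), imshow(img1);
subplot(2,2,2), imshow(imTop);
subplot(2,2,3), imshow(imLeft);
subplot(2,2,4), imshow(imCentre);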

Contrast Adjustment on Live Video

We can combine the scripts developed in the previous two sections to reduce the
contrast on live video obtained from a webcam.

Exercise:
 Write a script that reads images from a webcam and reduces the contrast
of the centre portion of each image frame. The results should be displayed
in real-time (no ‘pauses’) in an output figure as a greyscale image
sequence. (Rather than running forever, you could run for 100-200
iterations – this should demonstrate the algorithms adequately).
 (As an additional exercise you could replicate the figure above, but for
video (i.e. subplot original, top half, left half and centre contrast
reductions)).
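
A minimal sketch for the first exercise (assumes the webcam support package
and a single camera; the iteration count and region are examples):

cam = webcam;
f = figure('Name', 'Centre contrast reduction');

for k = 1:100
    img = rgb2gray(snapshot(cam)); % capture and convert to greyscale
    [rows, cols] = size(img);
    r = floor(rows/4)+1 : floor(3*rows/4); % central region
    c = floor(cols/4)+1 : floor(3*cols/4);
    img(r, c) = img(r, c) / 2; % reduce the contrast of the centre
    figure(f), imshow(img);
    drawnow; % flush graphics each iteration (no pause)
end

clear cam;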
Lab 3:
Introduction

This lab session is concerned with implementing algorithms for a number
of image operations that have been covered in the lectures. An important
aspect of this lab class is that you experiment with ranges of parameter
values and observe the effects they have on a selection of images.

Note:
When you read a greyscale intensity image into MATLAB, an image of type ‘uint8’
is created. For some operations these images can be processed directly.
However, for other types of operations the results obtained would be outside the
permitted range for uint8 (i.e. the resulting values would be outside the range [0
… 255]). In this case you should create, for example, a ‘double’ array of the
image data, process this and then scale the results back to the range [0 … 255]
before converting back to a uint8 image for display.

For example, a general approach could be the following:
imIn = imread('moon.bmp'); % read the input image
dImg = double(imIn); % creates an array of 'doubles'

% process the image (dImg) as required…

dImg = rescale(dImg, 0, 255); % possibly rescale greyscale range to [0 … 255] if needed
imOut = uint8(dImg); % create uint8 image for display purposes
figure, imshow(imIn); % show the input image
figure, imshow(imOut); % show the output image

Note that a ‘double’ array is not the same as an image of type ‘double’. The
former can have any value; an image of type ‘double’ has pixel values in the
range between 0 and 1.
A number of the built-in MATLAB Image Processing toolbox functions use the
approach above – i.e. you might pass an image as a parameter to the function,
and it will return an image as the output of the function. But within the function,
it converts to double, processes the data, and converts back to an image.

1. Mean and Standard Deviation

The mean, or average value, of an image can be obtained in MATLAB as follows
(given an image ‘img1’):
m = mean(img1(:));

The standard deviation of an image can be obtained using the ‘std’ function.
However, the input to the function must be of type ‘double’; hence the following
to compute the standard deviation:
dbl1 = double(img1); % convert the image to an array/matrix of type double
sd1 = std(dbl1(:));

You can use the ‘disp’ function to display values of variables in the
command window, e.g.:
disp(m); % display the value of m
disp(sd1); % display the standard deviation

% To display text and variable values you can create a string array:
str = ['Mean: ', num2str(m), ' SD: ', num2str(sd1)];
disp(str);

Check the help information for ‘disp’ for other options.

Exercise:
Experiment with processing a number of images: calculate their means
and standard deviations, and display these values. You should also display
the images and their corresponding histograms. Initially, you should
create some images with different greyscale distributions (or ranges). To
do this, you can use code similar to the following:
img1 = imread('baboon.bmp'); % read the image
dbl1 = double(img1); % create a 'double' array from the image
dbl2 = rescale(img1, 64, 124); % rescale the greyscale range, creating 'dbl2'
Read the help on ‘rescale’ and then create a few images (dbl2, dbl3,
dbl4, ..) with different contrasts. Process these images, finding the mean
and standard deviation, and display these values (together with displaying
the images and histograms). Make sure that you understand the
relationships between mean, standard deviation, brightness and contrast.

You should be able to generate something similar to the following:

… with the means and standard deviations displayed in the command window.

For each of the images shown, think carefully about the relationships
between mean, standard deviation, brightness and contrast.
2. Gains and Biases
The following function performs the linear pixel value mapping discussed
in lectures, applying a ‘gain’ and ‘bias’ to an input image. Note that once
the mapping is applied, it checks to ensure that values above 255 are set
to 255 and that values below 0 are set to 0:
function imArrOut = remap(imArrIn,gain,bias)
% Assume an array of double as input image

imArrOut = imArrIn * gain + bias; % apply the gain and bias

imArrOut(imArrOut > 255) = 255; % clip values above 255
imArrOut(imArrOut < 0) = 0; % clip values below 0
end

Copy the code above into an ‘.m’ file (select ‘New’ | ‘Function’ within the
Editor and paste the code above into the file). (More information on how
to define functions can be found in the MATLAB help documentation:
https://uk.mathworks.com/help/matlab/ref/function.html ).
Test the function on a number of images by applying different gain and
bias values, and observe the effect on the images. (Don’t forget that the
function is expecting a ‘double’ array as an input parameter (not an
image) and produces a ‘double’ array as output. You will need to create
an image from this output double array in order to display the results. See
the ‘Note’ box at the start of this document.). Once again, display
histograms of the images to gain an understanding of the relationships
between mean, standard deviation, brightness and contrast.
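
For example, one possible test (a sketch; the file name, gain and bias values
are just examples):

imIn = imread('moon.bmp');
dImg = double(imIn); % remap expects a 'double' array

dOut = remap(dImg, 1.5, -20); % example gain of 1.5 and bias of -20
imOut = uint8(dOut); % convert back to uint8 for display

figure;
subplot(2,2,1), imshow(imIn);
subplot(2,2,2), imshow(imOut);
subplot(2,2,3), imhist(imIn);
subplot(2,2,4), imhist(imOut);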

3. Thresholding
Thresholding can be performed in MATLAB using the ‘imbinarize’ function.
There are a number of options for the input parameters to the function.
For example, an image can be thresholded using:
bw = imbinarize(img1, T);
where img1 is the image to be thresholded, T is a threshold value (in the
range [0 … 1]), and bw is the output (logical) image. (Help information on
imbinarize can be found at:
https://www.mathworks.com/help/images/ref/imbinarize.html ).

Experiment with different threshold values using the following (or other)
images: rice.png, paper.bmp, coins.png, and printedtext.png. Which
threshold values work best with which images? Why does a single (global)
threshold not work well for some images?

Adaptive thresholding can also be performed, where a different threshold
value could be applied to each pixel (similar to the ‘variable’ thresholding
approach covered in lectures). The function ‘adaptthresh’ creates an array
or matrix of threshold values for a particular image – i.e. the threshold
value is adapted to the local neighbourhood of each pixel:
t = adaptthresh(img1);
The variable ‘t’ will be a 2D matrix of the same size as ‘img1’ and will hold
threshold values for each pixel. This matrix of threshold values can then
be applied to the image to produce a final binary result:
bwadaptive = imbinarize(img1, t);
Apply adaptive thresholding to the same images that you used above for
global thresholding. Compare the results. On which images did adaptive
thresholding perform well? Why?
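
A sketch comparing the two approaches side-by-side (assuming ‘rice.png’; the
global threshold value is an example):

img1 = imread('rice.png');

bwGlobal = imbinarize(img1, 0.5); % global threshold (example value)
t = adaptthresh(img1); % per-pixel threshold matrix
bwAdaptive = imbinarize(img1, t);

figure;
subplot(1,3,1), imshow(img1);
subplot(1,3,2), imshow(bwGlobal);
subplot(1,3,3), imshow(bwAdaptive);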

Lab 4:

Introduction

In this lab class we will investigate a number of algorithms / operators that
perform convolution, image filtering and edge detection. Once again, it is
important that you experiment with the operator parameters and apply them to
a range of images, observing the differences in the results that you obtain.

4. Convolution

Convolution can be performed in MATLAB using either of two primary functions:

 conv2 – takes an array / matrix and a mask / template as input and
produces an output array / matrix. Additional parameters can be
specified, for example:
o 'full' — Return the full 2-D convolution.
i.e. prior to convolution, the image is padded with border rows
of zeros above and below, and border columns of zeros left
and right, as necessary, so that all elements of the template
interact with all elements of the image.
 In this case, the output array is larger than the image
o 'same' — Return the central part of the convolution, which is
the same size as the image.
i.e. prior to convolution, the image is padded with border rows
and columns of zeros, as appropriate, to enable the “origin” of
the template to interact with all elements in the image.
 In this case, the output array is the same size as the image
o 'valid' — Return only parts of the convolution that are
computed without zero-padded borders.
i.e. the “origin” of the template interacts only with elements
in the image for which the template remains within the area
of the image.
 In this case, the output array is smaller than the image

 imfilter – takes an input image and mask / template; produces an
output image (additional parameters can be specified).

conv2:
Note that ‘conv2’ will rotate the mask by 180° prior to applying it – this is
the correct use of the term convolution. ‘imfilter’ uses correlation to apply
the mask (without any rotation). However, if the mask is symmetric, both
correlation and convolution will return the same result.

In order to get a better understanding of convolution, we will use some
simple input data to represent an image and a mask. In particular, we will
initially create an ‘image’ (A) with two horizontal (step) edges:

% Create a 10x5 'image' with two horizontal step edges:

A = zeros(10, 5);

A(3, :) = 2;

A(4:7, :) = 4;

A(8, :) = 2;

Enter the code above into a script, and run it. Then, in the Workspace
Variables window, examine the values in the array A. You should see what
appear to be two horizontal ‘edges’ in the ‘image’ A.

Now create a mask ‘M’ – add the following to your script:

% Create a 3x3 mask:

M = [1,1,1 ; 0,0,0 ; -1,-1,-1];

We will now use ‘conv2’ to compute the convolution of the mask M with
the ‘image’ A.
(Note that, by default, conv2 will compute a ‘full’ convolution as indicated
above):

B = conv2(A, M); % Result matrix will be larger than the input matrix A
                 % This is equivalent to using 'full' as an additional parameter
                 % - i.e. part of the mask can be placed outside the bounds of the 'image' A
                 % Undefined input elements are assumed to be zero

Run this code and examine the values of the variables – make sure that
you understand the results.

We can also use the parameters indicated above, ‘same’ and ‘valid’, to
modify how conv2 computes its results:

C = conv2(A, M, 'same'); % Result is the same size as A
                         % Undefined input elements are assumed to be zero

D = conv2(A, M, 'valid'); % The mask is always placed completely inside A
                          % The resulting matrix will be smaller than A

Run the code and examine the values of the workspace variables.

Exercise:
Experiment with different mask values; examine the results and ensure that
you understand how the results were generated.
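
Building on the script above, one variation to try (a sketch):

Mv = M'; % transpose of M: responds to vertical edges
E = conv2(A, Mv, 'same'); % A contains only horizontal edges, so E should be
                          % zero in the interior (non-zero values appear near
                          % the left/right borders because of the zero padding)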

(Further information on conv2:
https://uk.mathworks.com/help/matlab/ref/conv2.html ).
Lab 5:
Introduction

In this lab session you will experiment with the Fourier transform and its
use in filtering. MATLAB provides a number of built-in functions that can
be used to compute and process Fourier transforms.

1. Calculating the FT and displaying the FT spectrum

The function ‘fft2’ calculates the Fourier Transform of an image (fft2 refers
to the ‘Fast Fourier Transform’ in 2D).

The following code will read an image, create a ‘double’ version of the
image, and then compute the FT (add this code to a script and run it):
img1 = imread('gull2.bmp');

% Create double image from img1
dbl1 = double(img1);

% Calculate the FT
ft1 = fft2(dbl1);

‘ft1’ is a 2D array of complex numbers representing the FT of ‘img1’.
Examine the ft1 workspace variable.

In order to display the FT, we must shift the origin of the FT (i.e. “centre”
the FT), calculate its magnitude and its ‘spectrum’:
ft1 = fftshift(ft1); % Shifts the origin of the FT to the centre
magft1 = abs(ft1); % Calculates the magnitude of the FT
spectrum = log(1 + magft1); % Scales the Fourier spectrum for display

In order to display the FT spectrum we will create an ‘image’ copy and
display the image and the spectrum in separate figures:
% Create an 'image' version of the spectrum for display:
img2 = uint8(rescale(spectrum, 0, 255));

% Display in separate figures so that we can examine the results more closely:
f1 = figure;
imshow(img1);
impixelinfo(f1);
f2 = figure;
imshow(img2);
impixelinfo(f2);

We can now calculate the inverse of the FT and compare the resulting
image with the original image. As we have not applied any filter, the final
image should be the same as the original:
% Now calculate the inverse FT and compare it with the original image
ft2 = ifftshift(ft1);
invft = ifft2(ft2);
dbl2 = abs(invft);
img3 = uint8(rescale(dbl2, 0, 255));

f3 = figure;
imshow(img3);
impixelinfo(f3);

2. Generating ‘sin’ noise

Before we experiment with filtering in the Fourier domain, we will
generate some ‘noisy’ images. The following function generates ‘sin’ noise
– effectively sine waves with a specified number of cycles across a matrix
(of size ‘rows x cols’) and a given intensity (given by ‘factor’):
function [noiseMat] = generateSinNoise(rows, cols, cycles, factor)
% Generates a sinusoidal noise matrix of size rows x cols
% The number of cycles is given by 'cycles' and the intensity of noise is
% determined by 'factor'

% Generate linearly spaced coordinates
vals = linspace(-cycles*pi, cycles*pi, cols);

sinWave = sin(vals); % creates a 1D array of values

noiseMat = repmat(sinWave, rows, 1); % copy to all rows of noiseMat

% The values generated so far are in the range -1 … 1;
% multiply by 'factor' to create the final 'noise' matrix
noiseMat = factor*noiseMat;

end

This function will return a matrix of ‘sinusoidal’ noise. This noise can now
be added to an image (so that subsequently we can test noise filters to
see how well they work in removing the noise – particularly in the
frequency domain). The following code reads in an image, generates
noise, adds this noise to the image, and then displays both the original
and noisy images:
img1 = imread('gull2.bmp');
[rows, cols] = size(img1);

noiseArray = generateSinNoise(rows, cols, 70, 100);

% Add the noise to the image and scale to 0 … 255
dblImg1 = rescale(double(img1) + noiseArray, 0, 255);
% Create a uint8 copy for display purposes
img2 = uint8(dblImg1);

f1 = figure;
subplot(1,2,1), imshow(img1);
subplot(1,2,2), imshow(img2);
impixelinfo(f1);

Test the code shown so far. Experiment with different amounts (i.e.
different intensities) of noise (by trying different numbers of ‘cycles’ and
different ‘factor’ values) to see the effect on the image.
img1 = imread('gull2.bmp');
[rows, cols] = size(img1);

% Add sin noise to the image;
% the last two parameters are the number of cycles and the intensity of the noise
dblImg = double(img1) + generateSinNoise(rows, cols, 50, 100);

% Display the original image and the noisy image
f1 = figure;
subplot(1,2,1), imshow(img1);
subplot(1,2,2), imshow(dblImg, []);

% Calculate the FT and spectrum:
ft1 = fft2(dblImg); % Calculate Fourier transform
ft1 = fftshift(ft1); % Shift origin to the centre to "centre" the spectrum
magft1 = abs(ft1); % Calculate magnitude of the FT
spectrum = log(1 + magft1); % Calculate the Fourier spectrum

% Create image of the spectrum for display purposes
% and display the image and spectrum in separate figures
img2 = uint8(rescale(spectrum, 0, 255));
f2 = figure;
imshow(dblImg, []);
impixelinfo(f2);
f3 = figure;
imshow(img2);
impixelinfo(f3);

Your noisy image and the image of the spectrum should be similar to the
following:
Note that there are two bright spots to the left and right of the centre of
the spectrum (they are difficult to see in this document, but you will see
them on screen when you run the code). These positions correspond to
the ‘frequencies’ of the noise, and we can use a ‘notch’ filter to filter out
these frequencies prior to taking an inverse FT.

3. Creating and applying a notch filter

To create a notch filter we need to know the positions of the bright spots
in the spectrum (indicating the frequencies that represent the noise that
has been added). For this exercise we will complete this task interactively
by ‘clicking’ on the bright spots in the display of the spectrum and
capturing their coordinates. This can be achieved by using the ‘ginput’
function in MATLAB.

Add the following code to the script that you have been developing:
% Capture a sequence of mouse clicks from the spectrum until the <return>
% key is pressed.
% Clicks are stored in two arrays holding the x and y coordinates

[c1, r1] = ginput(1); % capture locations one at a time so that we can
                      % display and process them
while ~isempty(c1)
    % Need to convert the location to integer values so that we can index
    % into the image
    c = int16(c1);
    r = int16(r1);

    % Show 'notches' on the spectrum (an 11x11 square, centred on (r, c))
    img2(r-5:r+5, c-5:c+5) = 255;
    imshow(img2);

    % Now apply the notch filter to the FT
    ft1(r-5:r+5, c-5:c+5) = 0;

    % Get next mouse click (or <return> key)
    [c1, r1] = ginput(1);
end

This code fragment effectively captures mouse clicks (on the current
figure). In this case we will capture just one click at a time, display a white
square at that position on the spectrum image, and then apply that
‘notch’ filter to the FT. We continue capturing mouse clicks until the return
key is pressed.

Finally, we can calculate the inverse FT from the filtered transform, and
display the resulting image:
% The FT array has now been filtered
% We now calculate the inverse FT and display the result
ft1 = ifftshift(ft1);
invft = ifft2(ft1);
img3 = uint8(rescale(abs(invft), 0, 255));
f3 = figure;
imshow(img3);

E.g. the filtered spectrum and resulting image should be similar to the
following:

Note that not all of the ‘noise’ has been removed from the image – there
are still some ‘shadows’ of noise. However, the filter has performed
significantly better than what could be achieved by using spatial low-pass
filters.

Experiment with different parameter values for the noise frequency and
intensity, and observe your results. You could also add ‘horizontal’ sin
noise to the image by simply transposing the noise matrix (using the '
operator). For example:
noise = generateSinNoise(rows, cols, 50, 100);
% To transpose the noise:
noise = noise';

4. Generating vertical and horizontal line noise

The following two functions will generate vertical and horizontal ‘line’
noise with specified line spacing, width and intensity:
function [noiseMatrix] = generateVertLineNoise(spacing, width, value, rows, cols)
% Generate a noise matrix of vertical 'lines' with width 'width',
and spacing 'spacing'
% The noise intensity is 'value'
% The image size is rows x cols

noiseMatrix = zeros(rows,cols);

for idx = 1:width
    noiseMatrix(:, idx:spacing:end) = value;
end

end

function [noiseMatrix] = generateHorzLineNoise(spacing, width, value, rows, cols)
% Generate a noise matrix of horizontal 'lines' with width 'width',
and spacing 'spacing'
% The noise intensity is 'value'
% The image size is rows x cols

noiseMatrix = zeros(rows,cols);

for idx = 1:width
    noiseMatrix(idx:spacing:end, :) = value;
end

end

Modify your previous script so that, e.g., vertical line noise is added to the
image rather than sinusoidal noise:
% Generate the noise array and add it to the image
dbl1 = double(img1) + generateVertLineNoise(20, 2, 100, rows, cols);

Run your code. In this case you should see a number of ‘bright’ spots
across the horizontal axis of the Fourier spectrum. Filter all these out and
observe the results. For the parameter values above there could be nine
spots to the left and right of the centre.
5. Applying Ideal and Butterworth low-pass filters

The following functions will generate Ideal and Butterworth low-pass filters
of size rows x cols, and specified ‘cut-off’ frequency (and of a particular
‘order’ for the Butterworth filter).
function [filter] = generateIdealLPFilter(cutoff, rows, cols)
% Creates a (double) low pass filter of size rows x cols with cut-off value 'cutoff'

x = (1:rows) - (rows / 2);
y = (1:cols) - (cols / 2);

[X, Y] = meshgrid(y, x); % arguments ordered so the grids are rows x cols
                         % (matches non-square images)

filter = double((X.*X + Y.*Y) < cutoff*cutoff);

end

function [filter] = generateButterworthLPFilter(cutoff, order, rows, cols)
% Creates a (double) low pass filter of size rows x cols
% with cut-off value 'cutoff' and order 'order'

x = (1:rows) - (rows / 2);
y = (1:cols) - (cols / 2);

[X, Y] = meshgrid(y, x); % arguments ordered so the grids are rows x cols

temp = (sqrt(X.*X + Y.*Y) / cutoff) .^ (2*order);

filter = 1 ./ (1 + temp);
end

For example, a low-pass Ideal filter with cut-off frequency of 50 could be
generated as follows:
lpfilter = generateIdealLPFilter(50, rows, cols);
This could then be applied to the FT (e.g. ft1) of an image:
ft2 = ft1 .* lpfilter;

Remember that the ‘.*’ operator applies point-to-point multiplication of
the two matrices, ft1 and lpfilter.

The following complete script will read an image, compute the FT of the
image, generate an Ideal low-pass filter, apply this filter to the FT, and
compute the inverse FT:
%% Read an image, create an Ideal low-pass filter, apply this filter to the
%% FT, and view results

img1 = imread('gull2.bmp');
[rows,cols] = size(img1);
dblImg = double(img1);

% Create an Ideal low-pass filter with the given cut-off frequency and size rows x cols
lpfilter = generateIdealLPFilter(50, rows, cols);

% Plot the filter - sometimes appears to be black because the grid lines are so dense;
% we can set the line style to 'none' to solve this issue
figure;
h = surf(lpfilter);
set(h,'LineStyle','none')

ft1 = fft2(dblImg); % Calculate Fourier transform
ft1 = fftshift(ft1); % Shift the origin to the centre of the spectrum

% Create image of the spectrum for display purposes
% and display the image and spectrum in separate figures
magft1 = abs(ft1); % Calculate magnitude of FT
ft1Spectrum = log(1 + magft1); % Calculate the Fourier spectrum
img2 = uint8(rescale(ft1Spectrum, 0, 255)); % Create spectrum image for display

% Apply the Ideal low pass filter
ft2 = ft1 .* lpfilter;

% Create image of the filtered spectrum for display
magft2 = abs(ft2); % Calculate magnitude of FT
ft2Spectrum2 = log(1 + magft2); % Calculate the Fourier spectrum
img3 = uint8(rescale(ft2Spectrum2, 0, 255));

% Calculate the inverse FT of the filtered image
ft3 = ifftshift(ft2); % Undo centering
invft = ifft2(ft3); % Calculate the inverse FT
img4 = uint8(rescale(abs(invft), 0, 255));

% Display all images
f1 = figure;
subplot(2,2,1), imshow(img1);
subplot(2,2,2), imshow(img2);
subplot(2,2,3), imshow(img3);
subplot(2,2,4), imshow(img4);
impixelinfo(f1);
Note that a surface plot of the filter is also shown. Run the code;
experiment with different cut-off frequencies. The results should be similar
to the following:

Examine the output image carefully. When using the Ideal filter, you
should see a ‘ringing’ effect – this is due to the sharp cut-off at the
specified cut-off frequency.

This ‘ringing’ effect can be reduced if this sharp transition in the filter is
removed. For example, we could use the Butterworth filter instead:
lpfilter = generateButterworthLPFilter(50, 2, rows, cols);

Here we apply a cut-off frequency of 50 with an order of 2. The results
should be similar to the following:

Experiment with different cut-off frequencies (and order values) for both
the Ideal and Butterworth filters, and observe the results – make sure that
you understand why the output images appear as they do.
imfilter:
We will now repeat the conv2 examples from Lab 4 above using the ‘imfilter’
function. (Note that imfilter can be used on array data as well as image
variables). Create a new script, set up the variables A and M again, and then
add the following:

% Use imfilter to compute the convolution of the mask M with A
% Note: imfilter will return a result the same size as A; by default, it
% uses correlation
% - specify 'conv' as a parameter for convolution

B = imfilter(A, M, 'conv'); % Convolves M with A
                            % result will be the same size as A
                            % elements lying outside A will be padded with zeros

Run the code and examine the output variable values. The result you get
for B should be the same as that for the variable C above when using
‘conv2’.

C = imfilter(A, M, 0, 'conv'); % Same as above; can pad with any value
                               % - here we still pad with 0

D = imfilter(A, M, 'replicate', 'conv'); % Border of input data is replicated,
                                         % rather than padding with zeros

Exercise:
Experiment with different mask values (and maybe different mask sizes);
examine the results and ensure that you understand how the results were
generated.

(For further information on imfilter:
https://uk.mathworks.com/help/images/ref/imfilter.html ).

5. Filtering – average and weighted average filters

We can create filters or masks explicitly, as we have done above (e.g. for
mask M). However, MATLAB also provides a function, called ‘fspecial’, that
can be used to generate some ‘standard’ masks. For example, a 3x3
average mask can be created as follows:

% 3x3 average filter
hAvg3x3 = fspecial('average',3);

Run this and examine the mask values in the workspace (each of the nine
values should be 1/9).

Other masks can be created, e.g. a 7x7 average, a Gaussian (weighted)
average (with size 9 and standard deviation 3) and a ‘disk’ average of size
17:

% 7x7 average filter

hAvg7x7 = fspecial('average',7);

% Gaussian (weighted average) filter

hGaussian = fspecial('gaussian',9,3);

% disk average filter

hDisk = fspecial('disk',8);

Again, examine the workspace variable values. We can also display a
‘surface’ plot of the masks to get a better idea of how they appear (using
the function ‘surf’):

f1 = figure('Name', 'Smoothing filters');

subplot(2,2,1), surf(hAvg3x3);

subplot(2,2,2), surf(hAvg7x7);

subplot(2,2,3), surf(hGaussian);

subplot(2,2,4), surf(hDisk);

This should generate a figure similar to the following:


(Read the help information on fspecial:
https://uk.mathworks.com/help/images/ref/fspecial.html ).

Exercise:
Apply the filters above to an image. Experiment with different mask / filter
sizes; observe the effects on the image data. You could display the resulting
images using ‘subplots’ for easy comparison.
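
For instance, a sketch using the masks created above (the image file is an
assumed example):

img1 = imread('cameraman.tif');

res3x3 = imfilter(img1, hAvg3x3, 'replicate');
res7x7 = imfilter(img1, hAvg7x7, 'replicate');
resGauss = imfilter(img1, hGaussian, 'replicate');
resDisk = imfilter(img1, hDisk, 'replicate');

figure('Name', 'Smoothing results');
subplot(2,3,1), imshow(img1);
subplot(2,3,2), imshow(res3x3);
subplot(2,3,3), imshow(res7x7);
subplot(2,3,4), imshow(resGauss);
subplot(2,3,5), imshow(resDisk);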

6. Edge-preserving Smoothing

A side effect of using smoothing filters (e.g. to remove noise) is that the
resulting image can be ‘blurred’ to an extent that depends on the size and
shape of the filter. In addition to the ‘weighted’ filters in the previous
section (e.g. the Gaussian filter) and those shown in the lecture slides
(Image Operations), the ‘median’, ‘mode’ and ‘k-nearest neighbours’
filters can be used to reduce the blurring effect while removing / reducing
noise.

In MATLAB a median filter can be applied to an image using the ‘medfilt2’
function. For example, we could open an image, add some noise and then
apply a 3x3 median filter to it as follows:

% Apply a median filter to an image

img1 = imread('cameraman.tif');

img1 = imnoise(img1,'salt & pepper', 0.02);

medRes1 = medfilt2(img1, [3, 3]);

Run this code, displaying the noisy image and the resulting filtered image.

Exercise:
Experiment with different amounts of noise, different images and different
sizes and shapes of median filters, displaying and observing the results.
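
For example, one comparison to try (a sketch; the noise density and filter
sizes are example values):

img1 = imread('cameraman.tif');
noisy = imnoise(img1, 'salt & pepper', 0.05); % heavier noise than above

med3 = medfilt2(noisy, [3, 3]);
med7 = medfilt2(noisy, [7, 7]); % stronger noise removal, but more blurring

figure;
subplot(1,3,1), imshow(noisy);
subplot(1,3,2), imshow(med3);
subplot(1,3,3), imshow(med7);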

7. Edge Detection

As an initial example of edge detection we will consider the use of the
Sobel masks to detect edges in an image. Recall that the Sobel masks are
defined as follows:
SobelH = [ 1  2  1        SobelV = [ 1  0 -1
           0  0  0                   2  0 -2
          -1 -2 -1 ]                 1  0 -1 ]

Each of the masks is convolved with the image at each pixel point, and
(for each pixel) the absolute values of the two results are added to give a
final result image. Note that the intermediary results from applying the
masks could contain negative values, and the final pixel values could be
outside the range 0 … 255. Hence, we will need to convert the image to a
‘double’ array and scale the final result to the range 0 … 255 prior to
display.

The masks can be created as follows in MATLAB:
sobelH = [1,2,1 ; 0,0,0 ; -1,-2,-1];

sobelV = sobelH'; % the symbol ' will transpose sobelH

Create a script containing this code; run it and examine the workspace
variables to confirm that the masks have been created correctly.

Note that the function ‘fspecial’ can also be used to create the masks:
sobelH = fspecial('sobel');

sobelV = sobelH';

Each Sobel mask can now be convolved with the image using the imfilter
function (here ‘dbl1’ is a ‘double’ copy of the input image, as described in
the ‘Note’ box at the start of this document):
resultH = imfilter(dbl1, sobelH, 'replicate', 'conv');
resultV = imfilter(dbl1, sobelV, 'replicate', 'conv');
Note that we have used the ‘replicate’ parameter to effectively extend the
input image around its outer boundary so that the masks can be applied
to every image pixel (including those on the boundary) – in this way, the
parts of the mask that lie outside the image will still have valid pixel data
to operate on. We also use the ‘conv’ parameter, indicating that imfilter
should use convolution rather than correlation. However, in this case the
default correlation could be used and the ‘conv’ parameter removed. This
is because when ‘conv’ is used the mask is rotated by 180° – so with the
Sobel masks, this means that the results will be negated. However, since
we will take the ‘absolute’ value of the results, then the results will be the
same regardless of whether convolution or correlation is used.

Now add the absolute values of the results of applying the two Sobel masks
to obtain a final result image:
resultH = abs(resultH);
resultV = abs(resultV);

result = resultH + resultV;

We can now display the results:
f1 = figure('Name', 'Using imfilter...');

subplot(2,2,1), imshow(img1);

subplot(2,2,2), imshow(result, []);

subplot(2,2,3), imshow(resultH, []);

subplot(2,2,4), imshow(resultV, []);

impixelinfo(f1);

Examine the results – for example, if you use the ‘blocks4’ image, the
results should be similar to the following. What can you tell from the
images below?

Note the use of an additional parameter ( [] ) for the imshow function. This
simply means that the three ‘result’ images will be scaled to the min /
max range prior to display. Recall that the result images could have pixels
outside the range 0 … 255. Normally we would ‘rescale’ these to the
range 0 … 255 and then convert them to uint8 images prior to display.
However, as we are only interested in the appearance of the results, we
can do this scaling ‘on-the-fly’ using the additional imshow parameter.

Exercise:
Experiment with the code above, applying it to a number of images. Select
images that have a number of straight lines in them, e.g. the ‘blocks’ images
in the image folder.
Try some of the other edge detection masks presented in the lectures,
e.g. the Roberts and Prewitt operators (or the ‘larger window’ operators –
some are shown in the Segmentation notes). Compare these with the
results from using the Sobel masks.
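
As a starting point, a sketch for the Prewitt and Roberts operators (the
image file is an assumed example):

prewittH = fspecial('prewitt'); % [1 1 1; 0 0 0; -1 -1 -1]
prewittV = prewittH';

robertsA = [1, 0; 0, -1]; % the Roberts cross masks
robertsB = [0, 1; -1, 0];

dbl1 = double(imread('cameraman.tif'));
resP = abs(imfilter(dbl1, prewittH, 'replicate', 'conv')) + ...
       abs(imfilter(dbl1, prewittV, 'replicate', 'conv'));
resR = abs(imfilter(dbl1, robertsA, 'replicate', 'conv')) + ...
       abs(imfilter(dbl1, robertsB, 'replicate', 'conv'));

figure;
subplot(1,2,1), imshow(resP, []);
subplot(1,2,2), imshow(resR, []);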

Lab 6:
1. Introduction

 In this session we will use MATLAB to write some short programs to do the
following:
 Read and play video data
 from a stored video file
 using streamed video data from a camera
 Explore two simple ways of identifying motion in video:
 Subtract the background from the scene in a video (fixed camera
position)
 Calculate differences between successive frames in video data

2. Reading and Playing Video Data

2.1 Reading and Playing Data from a Stored Video File

 Firstly, identify the video file to be read. Remember that the file must either
be in the “Current Folder”, or you will need to use “Set Path” to the folder in
which the file is located.
%Name of video file to read and play:
Vid_Filename = 'Walk1_CAVIAR.mp4';

 Then define a VideoReader object:
vReader1 = VideoReader(Vid_Filename);

 And a VideoPlayer object:
videoPlayer1 = vision.VideoPlayer;

 Check the frame rate for the video:
vReader1.FrameRate

 Read and play the video frame by frame:
while hasFrame(vReader1)
%read the next frame:
frame = readFrame(vReader1);
%play the current frame:
videoPlayer1(frame);
%pause between frames so that the video is played at correct speed:
pause(1/vReader1.FrameRate);
end

 Inspect the VideoReader and VideoPlayer objects in the MATLAB Workspace:

 Experiment with the parameter values in the VideoPlayer object; for example:
videoPlayer1.Position = [100 100 410 300];

 Try slowing down or speeding up the rate at which the video frames are
played.

2.2 Reading and Playing Data using Streamed Video Data from a
Camera

 Create a Video Input object (using video adaptor parameters):
vid1 = videoinput('winvideo', 1);

 Create 2 Video Player objects: to play the original input video data; and to
play the (processed) output:
videoPlayer1 = vision.VideoPlayer('Name','Input_Image','Position',[100,200,640,360]);
videoPlayer2 = vision.VideoPlayer('Name','Output_Image','Position',[750,200,640,360]);

 Set the number of frames to be played, and the maximum number of frames
to be captured when the video input is triggered:
N=300;
vid1.FramesPerTrigger = N+30;
 Define an operator to process the video (i.e. process each frame) – for
example, Sobel edge detector:
%Define a horizontal Sobel operator:
h1=fspecial('sobel');

%Define a vertical Sobel operator:
h2=h1';
 Initiate video acquisition and play the output:
start(vid1);
while (vid1.FramesAcquired < N)
%store the current frame:
frame = getsnapshot(vid1);
%Play the frame:
videoPlayer1(frame);
end
 To check the acquired frame number currently being played, you could
(temporarily) insert:
vid1.FramesAcquired
 Processing the video frames:
Within the loop, convert the frame format from RGB to greyscale, and apply
the Sobel gradient filter:
frame_grey=double(rgb2gray(frame)); % conv2 requires floating-point input
sobel_gradient = abs(conv2(h1,frame_grey)) + abs(conv2(h2,frame_grey));

 For display, the Sobel gradient output must be scaled to values between 0
(black) and 255 (white):
sobel_max=max(sobel_gradient(:));
sobel_show=uint8(round(255*sobel_gradient/sobel_max));

 Now the Sobel gradient output can be played alongside the video input:
videoPlayer2(sobel_show);

3. Identifying Motion in Video

3.1 Using Background Subtraction

 Firstly, identify the video file to be read:
Vid_Filename = 'Walk1_CAVIAR.mp4';

 Then define a VideoReader object:
vReader1 = VideoReader(Vid_Filename);

 And two VideoPlayer objects:
 one for displaying the input video
 and one for displaying the background difference
videoPlayer1 = vision.VideoPlayer('Name','Input_Image','Position',[100,200,300,300]);
videoPlayer2 = vision.VideoPlayer('Name','Output_Image','Position',[400,200,300,300]);
 Set a threshold value for the background difference:
th=20;

 Acquire the "background" frame (e.g. in this case, the first frame in the
video):
frame1 = readFrame(vReader1);

 Identify the numbers of rows (My) and columns (Nx) in each video frame:
[My,Nx,Sz]=size(frame1);

 Separate its R, G and B components:
bg_r=frame1(:,:,1);
bg_g=frame1(:,:,2);
bg_b=frame1(:,:,3);

 Start reading the video, and for each frame compute the background
difference in each of the R, G and B channels, and find the maximum
difference:
while hasFrame(vReader1)
%Define a "background difference indicator" array, initially all zeros.
%This array will be used to highlight where a frame differs from the
background.
%Background will be denoted by 0 (black); foreground by 255 (white):
BGI=zeros(My,Nx);

%Read in the next frame and separate its R, G and B components:
frame2 = readFrame(vReader1);
f_r=frame2(:,:,1);
f_g=frame2(:,:,2);
f_b=frame2(:,:,3);

%Compute the difference between the current frame and the background
%for each of the R,G,B components:
C1=abs(f_r-bg_r);
C2=abs(f_g-bg_g);
C3=abs(f_b-bg_b);

%Calculate the maximum difference over the 3 channels:
Cabs12=max(C1,C2);
Cabs=max(Cabs12,C3);
Cmax=uint8(Cabs);
end

 If we want to compute a "motion silhouette", then within the loop compute
an indicator value (white = 255) to show where the frame differs from the
background by more than the threshold value:
BGI(Cmax>th)=255;
BGI=uint8(BGI);
 Within the loop, play the video frame and the frame difference from the
background:
videoPlayer1(frame2);
videoPlayer2(BGI);
pause(1/vReader1.FrameRate);

Exercise:
 What results do you observe in terms of distinguishing the parts
of each frame that differ from the background?
 What happens if you change the threshold value?
 How do you think the performance of this approach could be
improved?
 See the next section

3.2 Using Background Subtraction with Smoothing

The performance of the Background Subtraction approach can be improved
significantly by smoothing the input frames of the video before carrying out
subtraction from the background.
 To do this we can convolve each frame with a simple local average filter

 Firstly, define an averaging filter of size = hs x hs (for example, hs = 5):
hs=5;
h_average=fspecial('average',[hs hs]);

 Now after we acquire the "background" frame, after separating its R, G and B
components, we smooth each component using the local averaging filter:
%Acquire the "background" frame:
frame1 = readFrame(vReader1);
%Separate its R, G and B components, and smooth each component using a
%local averaging filter (conv2 needs floating-point input, so convert each
%channel to double first):
bg_r=conv2(double(frame1(:,:,1)),h_average,'same');
bg_g=conv2(double(frame1(:,:,2)),h_average,'same');
bg_b=conv2(double(frame1(:,:,3)),h_average,'same');
 Convolution using the conv2 function with the parameter ‘same’ uses
padding and so the output image will be the same size as the input image.
 Similarly, when we read each frame in the video and separate its R, G, and B
components, we also need to smooth each component using the local
averaging filter:
%Read in the next frame:
frame2 = readFrame(vReader1);
%Separate its R, G and B components, and smooth each component
%using a local averaging filter:
f_r=conv2(double(frame2(:,:,1)),h_average,'same');
f_g=conv2(double(frame2(:,:,2)),h_average,'same');
f_b=conv2(double(frame2(:,:,3)),h_average,'same');
 So now, within the loop, we can compute the difference between the
smoothed current frame and the smoothed background for each of the R, G,
B components (same script as before).
 And calculate the maximum difference over the 3 channels (same script as
before).
 And compute a "motion silhouette" by computing an indicator value
(white=255) to show where the frame differs from the background (same
script as before).
 And finally play the video frame and the frame difference from background to
highlight where the frame differs from the background (same script as
before).

Exercise:
 When each frame is smoothed, what results do you observe now?
 What happens if you change the amount of smoothing? (For
example, change the size of the average filter.)

3.3 Playing just the Foreground Frame Content (Rather than a Silhouette)

 If we want to show the frame content where it differs from the background
(i.e. show the foreground content (which may represent moving objects)), we
can modify the way that we use the background difference indicator (the
array BGI).
 We can use the background difference indicator array as a binary
characterisation array, with indicator value (0) to denote background and
value (1) to show foreground (i.e. where the frame differs from the
background). Then these binary values are used as multipliers for the pixel
values in the frame:
BGI(Cmax>th)=1;
BGI=uint8(BGI);
 Then, within the loop, use the indicator array to keep just the foreground pixels:
%Highlight those pixels in the frame that differ from the background:
frame3=frame2;
frame3(:,:,1)=BGI.*frame2(:,:,1);
frame3(:,:,2)=BGI.*frame2(:,:,2);
frame3(:,:,3)=BGI.*frame2(:,:,3);
 Then, as before, we can play the video frame and the frame difference from
the background:
videoPlayer1(frame2);
videoPlayer2(frame3);
pause(1/vReader1.FrameRate);

Exercise:
 Again, when each frame is smoothed, what results do you observe
now?
 What happens if you change the amount of smoothing? (For
example, change the size of the average filter.)

 Modify your script so that it works with streamed video data from
a camera rather than input obtained by reading a video file.

3.4 Frame Differencing

Exercise:
 Modify your script so that it computes the difference between
successive video frames rather than between each frame and a
background frame (a starter sketch is given after this list).
 What do you observe if you introduce smoothing by
convolving each frame with a local averaging filter?
 How could you smooth the video frames over time?
 What effect does this have?

 Again, modify your script so that it works with streamed video
data from a camera rather than input obtained by reading a video
file.
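
A starter sketch for the frame-differencing exercise (assumes the video file
and Computer Vision Toolbox player used earlier; greyscale differencing is
one possible choice):

vReader1 = VideoReader('Walk1_CAVIAR.mp4');
videoPlayer1 = vision.VideoPlayer('Name','Frame_Difference');

prevFrame = double(rgb2gray(readFrame(vReader1)));
while hasFrame(vReader1)
    currFrame = double(rgb2gray(readFrame(vReader1)));
    diffImg = uint8(abs(currFrame - prevFrame)); % difference between successive frames
    videoPlayer1(diffImg);
    prevFrame = currFrame; % the current frame becomes the new reference
    pause(1/vReader1.FrameRate);
end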
