Example Scripts
Contents
Lab 2
Lab 3
Lab 4
Lab 5
Lab 6
Lab 2:
Introduction
Current Directory
Remember to set your ‘current folder’ to the folder that you created last week to
hold all your MATLAB scripts and results for this module. You should also add any
other folders that contain images that you might want to use to the search path
that MATLAB uses (under the ‘Home’ menu ribbon; ‘Set path’ button).
Scripts
Rather than simply running all commands interactively from the Command
window, you should start to create script (‘.m’) files to hold your commands – i.e.
start creating ‘programs’ within MATLAB.
Help Information
You can search for help on MATLAB commands and topics from within the
MATLAB environment itself. This help information is also available online from the
Mathworks website. For ease of access, links to the online help are provided in
the text below (however, you can also search for the same terms etc. from within
the MATLAB environment).
Create a fourth figure that uses ‘subplot’ to display all three images in one
figure (help on subplot:
https://www.mathworks.com/help/matlab/ref/subplot.html ). Display the
three images first in a row and then in a column format.
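For example, a minimal sketch (assuming three images have already been read into ‘img1’, ‘img2’ and ‘img3’):
figure;
subplot(1,3,1), imshow(img1); % 1 row, 3 columns, position 1
subplot(1,3,2), imshow(img2);
subplot(1,3,3), imshow(img3);
% For the column format, use subplot(3,1,1) ... subplot(3,1,3) instead.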
Video data can easily be captured from a webcam. To see a list of all available
webcams attached to a computer use the command:
webcamlist;
To create a webcam ‘object’ that can be used in your programs, assign one of the
webcams to a variable:
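For example (assuming the MATLAB support package for USB webcams is installed):
cam = webcam(1); % create a webcam object from the first camera in the list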
The camera can then be previewed, and individual frames captured:
preview(cam);
…
closePreview(cam);
img = snapshot(cam);
Note: the camera can be ‘switched off’ by simply clearing the ‘cam’ variable:
clear cam;
Take a look at the help information for webcam, preview, closePreview and
snapshot.
Exercise:
Write a script that:
Opens the webcam and previews live video
Creates a figure to display a captured frame (with title ‘Snapshot’)
Loops through e.g. five iterations of:
o Capturing a frame
o Displaying the frame
o Pausing (to allow the image to be viewed before the next iteration)
Close the preview and turn the camera off.
(Note: when you run the script, you may need to move some windows around to
make sure that you can see the preview and snapshots on screen).
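One possible solution is sketched below (the frame count and pause length are example choices):
cam = webcam(1);
preview(cam); % live video preview
f = figure('Name', 'Snapshot');
for i = 1:5
img = snapshot(cam); % capture a frame
imshow(img); % display the frame
title('Snapshot');
pause(1); % allow the image to be viewed before the next iteration
end
closePreview(cam);
clear cam; % 'switch off' the camera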
The script captures and displays colour images. Modify your script so that the
colour image is converted to a greyscale image (using ‘rgb2gray’). Then
(i) display the greyscale image rather than the colour image, and
(ii) display both colour and greyscale in separate figures.
Histograms
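The histogram of an image can be computed using the ‘imhist’ function, e.g. (assuming a greyscale image variable ‘img’):
h = imhist(img);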
This generates a 1D array or vector called ‘h’ that holds a count of the number of
times each greylevel occurs in the image. If ‘imhist’ is used on its own (without
assigning the result to a variable), then a plot of the histogram will be displayed.
Take a look at the MATLAB help information for ‘imhist’.
Exercise:
Write a script that:
o Reads an image into an image variable (e.g. ‘moon.bmp’ into ‘img’)
o Displays the image in a figure
o Calculates the histogram and puts the result into a variable (e.g. ‘h’)
Run the script and examine the variable ‘h’ in the workspace:
o Double-click on the variable to see the values that it holds
o Note that each element is a count of the number of times each
greylevel occurs (although the array is indexed from 1 to 256, this
represents the greylevels 0 to 255).
Add to the script and find the min and max pixel values in the image:
min(img(:)); % finds the minimum pixel value
max(img(:)); % finds the maximum pixel value
(Note that you need the ‘:’ in the code above – otherwise min will find the
minimum value in every column of the image and generate a vector of
these minima).
Run the script and examine these values. There should be zeros in the
histogram (h) variable for any entries less than the min or greater than the
max.
Extend the script to display the image histogram in a figure (remember: if
you use imhist on its own – in a similar way to imshow – a plot of the
histogram will be generated in a figure).
Modify your script so that the image and the histogram are displayed side-
by-side in the same figure (use ‘subplot’).
Next, extend the script so that it creates a reduced-contrast copy of the
image, and:
o Displays both images (the second should appear darker than the
original)
Add to the script by calculating and displaying the images together with
their respective histograms in a grid in the same figure. The result should
be similar to the following:
We can combine the scripts developed in the previous two sections to reduce the
contrast on live video obtained from a webcam.
Exercise:
Write a script that reads images from a webcam and reduces the contrast
of the centre portion of each image frame. The results should be displayed
in real-time (no ‘pauses’) in an output figure as a greyscale image
sequence. (Rather than running forever, you could run for 100-200
iterations – this should demonstrate the algorithms adequately).
(As an additional exercise you could replicate the figure above, but for
video (i.e. subplot original, top half, left half and centre contrast
reductions)).
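A minimal sketch of the main exercise (the gain/bias values, region size and iteration count are example choices):
cam = webcam(1);
f = figure;
for i = 1:150
img = rgb2gray(snapshot(cam)); % capture a frame and convert to greyscale
dbl = double(img);
[rows, cols] = size(dbl);
r = round(rows/4):round(3*rows/4); % centre rows
c = round(cols/4):round(3*cols/4); % centre columns
dbl(r, c) = 0.5 * dbl(r, c) + 64; % reduce contrast in the centre portion
imshow(uint8(dbl));
drawnow; % update the display without pausing
end
clear cam;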
Lab 3:
Introduction
Note:
When you read a greyscale intensity image into MATLAB, an image of type ‘uint8’
is created. For some operations these images can be processed directly.
However, for other types of operations the results obtained would be outside the
permitted range for uint8 (i.e. the resulting values would be outside the range [0
… 255]). In this case you should create, for example, a ‘double’ array of the
image data, process this and then scale the results back to the range [0 … 255]
before converting back to a uint8 image for display.
Note that a ‘double’ array is not the same as an image of type ‘double’. The
former can have any value; an image of type ‘double’ has pixel values in the
range between 0 and 1.
A number of the built-in MATLAB Image Processing toolbox functions use the
approach above – i.e. you might pass an image as a parameter to the function,
and it will return an image as the output of the function. But within the function,
it converts to double, processes the data, and converts back to an image.
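A minimal sketch of this convert–process–convert pattern (the processing step here is just an illustrative placeholder):
dbl = double(img); % uint8 image -> 'double' array
dbl = dbl * 1.5 - 20; % e.g. some processing that may leave [0 ... 255]
img2 = uint8(rescale(dbl, 0, 255)); % scale back to [0 ... 255] and convert for display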
The standard deviation of an image can be obtained using the ‘std’ function.
However, the input to the function must be of type ‘double’; hence the following
to compute the standard deviation:
dbl1 = double(img1); % convert the image to an array/matrix of type double
sd1 = std(dbl1(:));
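(The calculation of the mean is not shown in this extract; it can be computed in the same way, e.g.:)
m = mean(dbl1(:)); % the mean of all pixel values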
You can use the ‘disp’ function to display values of variables in the
command window, e.g.:
disp(m); % display the value of m
disp(sd1); % display the standard deviation
% To display text and variable values you can create a string array:
str = ['Mean: ', num2str(m), ' SD: ', num2str(sd1)];
disp(str);
Exercise:
Experiment with processing a number of images: calculate their means
and standard deviations, and display these values. You should also display
the images and their corresponding histograms. Initially, you should
create some images with different greyscale distributions (or ranges). To
do this, you can use code similar to the following:
img1 = imread('baboon.bmp'); % read the image
dbl1 = double(img1); % create a ‘double’ array from the image
dbl2 = rescale(img1, 64, 124); % rescale the greyscale range, creating ‘dbl2’
Read the help on ‘rescale’ and then create a few images (dbl2, dbl3,
dbl4, ..) with different contrasts. Process these images, finding the mean
and standard deviation, and display these values (together with displaying
the images and histograms). Make sure that you understand the
relationships between mean, standard deviation, brightness and contrast.
For each of the images shown, think carefully about the relationships
between mean, standard deviation, brightness and contrast.
2. Gains and Biases
The following function performs the linear pixel value mapping discussed
in lectures, applying a ‘gain’ and ‘bias’ to an input image. Note that once
the mapping is applied, it checks to ensure that values above 255 are set
to 255 and that values below 0 are set to 0:
function imArrOut = remap(imArrIn,gain,bias)
% Assume an array of double as input image
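% (The function body was omitted from this extract; a minimal completion
% based on the description above:)
imArrOut = gain * imArrIn + bias; % apply the linear gain/bias mapping
imArrOut(imArrOut > 255) = 255; % set values above 255 to 255
imArrOut(imArrOut < 0) = 0; % set values below 0 to 0
end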
Copy the code above into an ‘.m’ file (select ‘New’ | ‘Function’ within the
Editor and paste the code above into the file). (More information on how
to define functions can be found in the MATLAB help documentation:
https://uk.mathworks.com/help/matlab/ref/function.html ).
Test the function on a number of images by applying different gain and
bias values, and observe the effect on the images. (Don’t forget that the
function is expecting a ‘double’ array as an input parameter (not an
image) and produces a ‘double’ array as output. You will need to create
an image from this output double array in order to display the results. See
the ‘Note’ box at the start of this document.). Once again, display
histograms of the images to gain an understanding of the relationships
between mean, standard deviation, brightness and contrast.
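For example, a minimal sketch of such a test (the gain and bias values are example choices):
dbl1 = double(imread('moon.bmp'));
out = remap(dbl1, 1.5, -30); % apply an example gain and bias
img2 = uint8(out); % remap has already clamped values to [0 ... 255]
figure; imshow(img2);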
3. Thresholding
Thresholding can be performed in MATLAB using the ‘imbinarize’ function.
There are a number of options for the input parameters to the function.
For example, an image can be thresholded using:
bw = imbinarize(img1, T);
where img1 is the image to be thresholded, T is a threshold value (in the
range [0 … 1]), and bw is the output (logical) image. (Help information on
imbinarize can be found at:
https://www.mathworks.com/help/images/ref/imbinarize.html ).
Experiment with different threshold values using the following (or other)
images: rice.png, paper.bmp, coins.png, and printedtext.png. Which
threshold values work best with which images? Why does a single (global)
threshold not work well for some images?
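A minimal sketch for one image and one threshold value:
img1 = imread('rice.png');
bw = imbinarize(img1, 0.4); % try different values of T in [0 ... 1]
figure;
subplot(1,2,1), imshow(img1);
subplot(1,2,2), imshow(bw);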
Lab 4:
Introduction
4. Convolution
conv2:
Note that ‘conv2’ will rotate the mask by 180° prior to applying it – this is
the correct use of the term convolution. ‘imfilter’ uses correlation to apply
the mask (without any rotation). However, if the mask is symmetric, both
correlation and convolution will return the same result.
A = zeros(10, 5);
A(3, :) = 2;
A(4:7, :) = 4;
A(8, :) = 2;
Enter the code above into a script, and run it. Then, in the Workspace
Variables window, examine the values in the array A. You should see what
appear to be two horizontal ‘edges’ in the ‘image’ A.
We will now use ‘conv2’ to compute the convolution of the mask M with
the ‘image’ A.
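(The definition of the mask M was omitted from this extract; a simple vertical-gradient mask, which responds to the horizontal edges in A, would be one reasonable choice – the original values are not shown:)
M = [-1; 0; 1]; % example mask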
(Note that, by default, conv2 will compute a ‘full’ convolution as indicated
above):
B = conv2(A, M); % Result matrix will be larger than the input matrix A
Run this code and examine the values of the variables – make sure that
you understand the results.
We can also use the parameters indicated above, ‘same’ and ‘valid’, to
modify how conv2 computes its results:
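(The code itself was omitted from this extract; presumably along the lines of:)
C = conv2(A, M, 'same'); % result is the same size as A
D = conv2(A, M, 'valid'); % result contains only positions where the mask fits entirely inside A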
Run the code and examine the values of the workspace variables.
Exercise:
Experiment with different mask values; examine the results and ensure that
you understand how the results were generated.
Lab 5:
Introduction
In this lab session you will experiment with the Fourier transform and its
use in filtering. MATLAB provides a number of built-in functions that can
be used to compute and process Fourier transforms.
The function ‘fft2’ calculates the Fourier Transform of an image (fft2 refers
to the ‘Fast Fourier Transform’ in 2D).
The following code will read an image, create a ‘double’ version of the
image, and then compute the FT (add this code to a script and run it):
img1 = imread('gull2.bmp');
dbl1 = double(img1); % create a ‘double’ version of the image
% Calculate the FT
ft1 = fft2(dbl1);
In order to display the FT, we must shift the origin of the FT (i.e. “centre”
the FT), calculate its magnitude and its ‘spectrum’:
ft1 = fftshift(ft1); % Shifts the origin of the FT to the centre
magft1 = abs(ft1); % Calculates the magnitude of the FT
spectrum = log(1 + magft1); % Scales the Fourier spectrum for display
We can now calculate the inverse of the FT and compare the resulting
image with the original image. As we have not applied any filter, the final
image should be the same as the original:
% Now calculate the inverse FT and compare it with the original image
ft2 = ifftshift(ft1);
invft = ifft2(ft2);
dbl2 = abs(invft);
img3 = uint8(rescale(dbl2, 0, 255));
f3 = figure;
imshow(img3);
impixelinfo(f3);
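The noise-generating function itself is not included in this extract. A minimal sketch consistent with how it is called below, generateSinNoise(rows, cols, cycles, factor), might be:
function [noiseMatrix] = generateSinNoise(rows, cols, cycles, factor)
% Generate vertical sinusoidal noise with 'cycles' full periods across
% the image width and amplitude 'factor' (a reconstruction - assumptions)
x = 0:cols-1;
oneRow = factor * sin(2 * pi * cycles * x / cols);
noiseMatrix = repmat(oneRow, rows, 1);
end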
This function will return a matrix of ‘sinusoidal’ noise. This noise can now
be added to an image (so that subsequently we can test noise filters to
see how well they work in removing the noise – particularly in the
frequency domain). The following code reads in an image, generates
noise, adds this noise to the image, and then displays both the original
and noisy images:
img1 = imread('gull2.bmp');
[rows, cols] = size(img1);
% Generate the noise and add it to the image (the values 50 and 100 are examples, as used later):
noise = generateSinNoise(rows, cols, 50, 100);
img2 = uint8(double(img1) + noise); % uint8 saturates values outside [0, 255]
f1 = figure;
subplot(1,2,1), imshow(img1);
subplot(1,2,2), imshow(img2);
impixelinfo(f1);
Test the code shown so far. Experiment with different amounts (i.e.
different intensities) of noise (by trying different numbers of ‘cycles’ and
different ‘factor’ values) to see the effect on the image.
Now compute the centred FT of the noisy image and display its spectrum
(as in the earlier script). Your noisy image and the image of the spectrum
should be similar to the following:
Note that there are two bright spots to the left and right of the centre of
the spectrum (they are difficult to see in this document, but you will see
them on screen when you run the code). These positions correspond to
the ‘frequencies’ of the noise, and we can use a ‘notch’ filter to filter out
these frequencies prior to taking an inverse FT.
To create a notch filter we need to know the positions of the bright spots
in the spectrum (indicating the frequencies that represent the noise that
has been added). For this exercise we will complete this task interactively
by ‘clicking’ on the bright spots in the display of the spectrum and
capturing their coordinates. This can be achieved by using the ‘ginput’
function in MATLAB.
Add the following code to the script that you have been developing:
% Capture a sequence of mouse clicks from the spectrum until the <return> key is pressed.
% Clicks are stored in two arrays holding the x and y coordinates
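A minimal sketch of this loop (the half-width ‘sz’ of the notch squares is a hypothetical choice, and no bounds checking is done near the image edges):
sz = 3;
[x, y] = ginput(1); % capture one click (empty if <return> is pressed)
while ~isempty(x)
r = round(y); c = round(x);
ft1(r-sz:r+sz, c-sz:c+sz) = 0; % apply the 'notch': zero the FT there
spectrum(r-sz:r+sz, c-sz:c+sz) = max(spectrum(:)); % mark a white square
imshow(rescale(spectrum)); % redisplay the marked spectrum
[x, y] = ginput(1);
end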
This code fragment effectively captures mouse clicks (on the current
figure). In this case we will capture just one click at a time, display a white
square at that position on the spectrum image, and then apply that
‘notch’ filter to the FT. We continue capturing mouse clicks until the return
key is pressed.
Finally, we can calculate the inverse FT from the filtered transform, and
display the resulting image:
% The FT array has now been filtered
% We now calculate the inverse FT and display the result
ft1 = ifftshift(ft1);
invft = ifft2(ft1);
img3 = uint8(rescale(abs(invft), 0, 255));
f3 = figure;
imshow(img3);
E.g. the filtered spectrum and resulting image should be similar to the
following:
Note that not all of the ‘noise’ has been removed from the image – there
are still some ‘shadows’ of noise. However, the filter has performed
significantly better than what could be achieved by using spatial low-pass
filters.
Experiment with different parameter values for the noise frequency and
intensity, and observe your results. You could also add ‘horizontal’
sinusoidal noise to the image by simply transposing the noise matrix
(using the ' operator). For example:
noise = generateSinNoise(rows, cols, 50, 100);
% To transpose the noise:
noise = noise';
The following two functions will generate vertical and horizontal ‘line’
noise with specified line spacing, width and intensity:
function [noiseMatrix] = generateVertLineNoise(spacing, width, value, rows, cols)
% Generate a noise matrix of vertical 'lines' with width 'width', and spacing 'spacing'
% The noise intensity is 'value'
% The image size is rows x cols
noiseMatrix = zeros(rows,cols);
% (The line-drawing loop was omitted from this extract; a minimal completion:)
for c = 1:spacing:cols
noiseMatrix(:, c:min(c+width-1, cols)) = value;
end
end
function [noiseMatrix] = generateHorzLineNoise(spacing, width, value, rows, cols)
% Generate a noise matrix of horizontal 'lines' with width 'width', and spacing 'spacing'
% The noise intensity is 'value'
% The image size is rows x cols
noiseMatrix = zeros(rows,cols);
% (The line-drawing loop was omitted from this extract; a minimal completion:)
for r = 1:spacing:rows
noiseMatrix(r:min(r+width-1, rows), :) = value;
end
end
Modify your previous script so that, e.g., vertical line noise is added to the
image rather than sinusoidal noise:
% Generate the noise array and add it to the image
dbl1 = double(img1) + generateVertLineNoise(20, 2, 100, rows, cols);
Run your code. In this case you should see a number of ‘bright’ spots
across the horizontal axis of the Fourier spectrum. Filter all these out and
observe the results. For the parameter values above there could be nine
spots to the left and right of the centre.
5. Applying Ideal and Butterworth low-pass filters
The following functions will generate Ideal and Butterworth low-pass filters
of size rows x cols, and specified ‘cut-off’ frequency (and of a particular
‘order’ for the Butterworth filter).
function [filter] = generateIdealLPFilter(cutoff, rows, cols)
% Creates a (double) low pass filter of size rows x cols with cut-off value 'cutoff'
% (The body below is a reconstruction; the original was garbled in this extract)
[x, y] = meshgrid(1:cols, 1:rows);
dist = sqrt((x - cols/2).^2 + (y - rows/2).^2); % approximate distance from the centre
filter = double(dist <= cutoff); % 1 inside the cut-off radius, 0 outside
end
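The Butterworth function is not shown in this extract (the surviving line ‘filter = 1 ./ (1 + temp)’ appears to be its final step). A sketch consistent with the call generateButterworthLPFilter(cutoff, order, rows, cols) used below:
function [filter] = generateButterworthLPFilter(cutoff, order, rows, cols)
% Creates a (double) Butterworth low pass filter of size rows x cols
[x, y] = meshgrid(1:cols, 1:rows);
dist = sqrt((x - cols/2).^2 + (y - rows/2).^2); % approximate distance from the centre
temp = (dist ./ cutoff).^(2 * order);
filter = 1 ./ (1 + temp);
end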
The following complete script will read an image, compute the FT of the
image, generate an Ideal low-pass filter, apply this filter to the FT, and
compute the inverse FT:
%% Read an image, create an Ideal low-pass filter, apply this filter to the FT, and view results
img1 = imread('gull2.bmp');
[rows,cols] = size(img1);
dblImg = double(img1);
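% (The remainder of the script was omitted from this extract; a minimal
% completion following the steps described above:)
ft1 = fftshift(fft2(dblImg)); % compute and centre the FT
lpfilter = generateIdealLPFilter(50, rows, cols); % the cut-off 50 is an example value
ft1 = ft1 .* lpfilter; % apply the filter to the FT
invft = ifft2(ifftshift(ft1)); % compute the inverse FT
img2 = uint8(rescale(abs(invft), 0, 255));
figure; imshow(img2);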
Run the script. The output should be similar to the following:
Examine the output image carefully. When using the Ideal filter, you
should see a ‘ringing’ effect – this is due to the sharp cut-off at the
specified cut-off frequency.
This ‘ringing’ effect can be reduced if this sharp transition in the filter is
removed. For example, we could use the Butterworth filter instead:
lpfilter = generateButterworthLPFilter(50, 2, rows, cols);
Experiment with different cut-off frequencies (and order values) for both
the Ideal and Butterworth filters, and observe the results – make sure that
you understand why the output images appear as they do.
imfilter:
We will now repeat the examples above using the ‘imfilter’ function. (Note
that imfilter can be used on array data as well as image variables). Create
a new script, set up the variables A and M again, and then add the
following:
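(The code itself was omitted from this extract; presumably something like:)
B = imfilter(A, M, 'conv'); % correlation is the default; 'conv' selects convolution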
Run the code and examine the output variable values. The result you get
for B should be the same as that for the variable C above when using
‘conv2’.
B2 = imfilter(A, M, 'replicate', 'conv'); % replicate the boundary values rather
% than padding with zeros (the variable name B2 is an assumption)
Exercise:
Experiment with different mask values (and maybe different mask sizes);
examine the results and ensure that you understand how the results were
generated.
We can create filters or masks explicitly, as we have done above (e.g. for
mask M). However, MATLAB also provides a function, called ‘fspecial’, that
can be used to generate some ‘standard’ masks. For example, a 3x3
average mask can be created as follows:
hAvg3x3 = fspecial('average',3);
Run this and examine the mask values in the workspace (each of the nine
values should be 1/9).
Other standard masks can be created, and visualised using ‘surf’, as follows:
hAvg7x7 = fspecial('average',7);
hGaussian = fspecial('gaussian',9,3);
hDisk = fspecial('disk',8);
subplot(2,2,1), surf(hAvg3x3);
subplot(2,2,2), surf(hAvg7x7);
subplot(2,2,3), surf(hGaussian);
subplot(2,2,4), surf(hDisk);
Exercise:
Apply the filters above to an image. Experiment with different mask / filter
sizes; observe the effects on the image data. You could display the resulting
images using ‘subplots’ for easy comparison.
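A minimal sketch (applying the Gaussian filter above to an example image):
img1 = imread('cameraman.tif');
smoothed = imfilter(img1, hGaussian, 'replicate');
figure;
subplot(1,2,1), imshow(img1);
subplot(1,2,2), imshow(smoothed);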
6. Edge-preserving Smoothing
A side effect of using smoothing filters (e.g. to remove noise) is that the
resulting image can be ‘blurred’ to an extent that depends on the size and
shape of the filter. In addition to the ‘weighted’ filters in the previous
section (e.g. the Gaussian filter) and those shown in the lecture slides
(Image Operations), the ‘median’, ‘mode’ and ‘k-nearest neighbours’
filters can be used to reduce the blurring effect while removing / reducing
noise.
img1 = imread('cameraman.tif');
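The remainder of the snippet is not shown in this extract; a typical sequence (adding salt-and-pepper noise and then applying a median filter – the parameter values are examples) might be:
noisy = imnoise(img1, 'salt & pepper', 0.05); % add salt & pepper noise
filtered = medfilt2(noisy, [3 3]); % apply a 3x3 median filter
figure;
subplot(1,2,1), imshow(noisy);
subplot(1,2,2), imshow(filtered);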
Run this code, displaying the noisy image and the resulting filtered image.
Exercise:
Experiment with different amounts of noise, different images and different
sizes and shapes of median filters, displaying and observing the results.
7. Edge Detection
Each of the masks is convolved with the image at each pixel position, and
(for each pixel) the absolute values of the two results are added to give a
final result image. Note that the intermediary results from applying the
masks could contain negative values, and the final pixel values could be
outside the range 0 … 255. Hence, we will need to convert the image to a
‘double’ array and scale the final result to the range 0 … 255 prior to
display.
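(The explicit mask definitions are not shown in this extract; the standard Sobel masks are:)
sobelH = [1 2 1; 0 0 0; -1 -2 -1]; % responds to horizontal edges
sobelV = sobelH'; % responds to vertical edges
dbl1 = double(img1); % (assuming an image has been read into 'img1', e.g. the 'blocks4' image)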
Create a script containing this code; run it and examine the workspace
variables to confirm that the masks have been created correctly.
Note that the function ‘fspecial’ can also be used to create the masks:
sobelH = fspecial('sobel');
sobelV = sobelH';
Each Sobel mask can now be convolved with the image using the imfilter
function:
resultH = imfilter(dbl1, sobelH, 'replicate', 'conv');
resultV = imfilter(dbl1, sobelV, 'replicate', 'conv');
Note that we have used the ‘replicate’ parameter to effectively extend the
input image around its outer boundary so that the masks can be applied
to every image pixel (including those on the boundary) – in this way, the
parts of the mask that lie outside the image will still have valid pixel data
to operate on. We also use the ‘conv’ parameter, indicating that imfilter
should use convolution rather than correlation. However, in this case the
default correlation could be used and the ‘conv’ parameter removed. This
is because when ‘conv’ is used the mask is rotated by 180° – so with the
Sobel masks, this means that the results will be negated. However, since
we will take the ‘absolute’ value of the results, then the results will be the
same regardless of whether convolution or correlation is used.
Now add the absolute values of the results of applying the two Sobel masks
to obtain a final result image:
resultH = abs(resultH);
resultV = abs(resultV);
result = resultH + resultV; % combine the two results into a final result image
f1 = figure;
subplot(2,2,1), imshow(img1);
subplot(2,2,2), imshow(resultH, []);
subplot(2,2,3), imshow(resultV, []);
subplot(2,2,4), imshow(result, []);
impixelinfo(f1);
Examine the results – for example, if you use the ‘blocks4’ image, the
results should be similar to the following. What can you tell from the
images below?
Note the use of an additional parameter ( [] ) for the imshow function. This
simply means that the three ‘result’ images will be scaled to the min /
max range prior to display. Recall that the result images could have pixels
outside the range 0 … 255. Normally we would ‘rescale’ these to the
range 0 … 255 and then convert them to uint8 images prior to display.
However, as we are only interested in the appearance of the results, we
can do this scaling ‘on-the-fly’ using the additional imshow parameter.
Exercise:
Experiment with the code above, applying it to a number of images. Select
images that have a number of straight lines in them, e.g. the ‘blocks’ images
in the image folder.
Try some of the other edge detection masks presented in the lectures,
e.g. the Roberts and Prewitt operators (or the ‘larger window’ operators –
some are shown in the Segmentation notes). Compare these with the
results from using the Sobel masks.
Lab 6:
1. Introduction
In this session we will use MATLAB to write some short programs to do the
following:
Read and play video data:
o from a stored video file
o using streamed video data from a camera
Explore two simple ways of identifying motion in video:
o Subtract the background from the scene in a video (fixed camera
position)
o Calculate differences between successive frames in video data
Firstly, identify the video file to be read. Remember that the file must either
be in the “Current Folder”, or you will need to use “Set Path” to add the
folder in which the file is located.
%Name of video file to read and play:
Vid_Filename = 'Walk1_CAVIAR.mp4';
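The reading and playing code itself is not included in this extract; a minimal sketch using ‘VideoReader’ (consistent with the functions used later in this lab) might be:
vReader1 = VideoReader(Vid_Filename);
videoPlayer1 = vision.VideoPlayer;
while hasFrame(vReader1)
frame = readFrame(vReader1); % read the next frame
videoPlayer1(frame); % play it
pause(1/vReader1.FrameRate); % change this value to alter the playback rate
end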
Try slowing down or speeding up the rate at which the video frames are
played.
2.2 Reading and Playing Data using Streamed Video Data from a
Camera
Create 2 Video Player objects: to play the original input video data; and to
play the (processed) output:
videoPlayer1 = vision.VideoPlayer('Name','Input_Image','Position',[100,200,640,360]);
videoPlayer2 = vision.VideoPlayer('Name','Output_Image','Position',[750,200,640,360]);
Set the number of frames to be played, and the maximum number of frames
to be captured when the video input is triggered:
N=300;
vid1.FramesPerTrigger = N+30;
Define an operator to process the video (i.e. process each frame) – for
example, Sobel edge detector:
%Define a horizontal Sobel operator:
h1=fspecial('sobel');
%Define a vertical Sobel operator:
h2=h1';
Initiate video acquisition and play the output:
start(vid1);
while (vid1.FramesAcquired < N)
%store the current frame:
frame = getsnapshot(vid1);
%Play the frame:
videoPlayer1(frame);
end
To check the acquired frame number currently being played, you could
(temporarily) insert:
vid1.FramesAcquired
Processing the video frames:
Within the loop, convert the frame format from RGB to greyscale, and apply
the Sobel gradient filter:
frame_grey=rgb2gray(frame);
sobel_gradient = abs(conv2(h1,double(frame_grey))) + abs(conv2(h2,double(frame_grey))); % conv2 requires single/double input
For display, the Sobel gradient output must be scaled to values between 0
(black) and 255 (white):
sobel_max=max(sobel_gradient(:));
sobel_show=uint8(round(255*sobel_gradient/sobel_max));
Now the Sobel gradient output can be played alongside the video input:
videoPlayer2(sobel_show);
Acquire the "background" frame (e.g. in this case, the first frame in the
video):
frame1 = readFrame(vReader1);
Identify the numbers of rows (My) and columns (Nx) in each video frame:
[My,Nx,Sz]=size(frame1);
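(The separation of the background frame into its R, G and B channels is not shown in this extract; converting to ‘double’ avoids uint8 saturation in the differences computed below:)
bg_r = double(frame1(:,:,1));
bg_g = double(frame1(:,:,2));
bg_b = double(frame1(:,:,3));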
Start reading the video, and for each frame compute the background
difference in each of the R, G and B channels, and find the maximum
difference:
while hasFrame(vReader1)
%Read in the next frame and separate its R, G and B channels (this step was
%not shown in this extract):
frame2 = readFrame(vReader1);
f_r = double(frame2(:,:,1));
f_g = double(frame2(:,:,2));
f_b = double(frame2(:,:,3));
%Define a "background difference indicator" array, initially all zeros.
%This array will be used to highlight where a frame differs from the background.
%Background will be denoted by 0 (black); foreground by 255 (white):
BGI=zeros(My,Nx);
%Compute the difference between the current frame and the background for each of the R,G,B components:
C1=abs(f_r-bg_r);
C2=abs(f_g-bg_g);
C3=abs(f_b-bg_b);
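The remainder of the loop is not shown in this extract; a minimal completion consistent with the description above (the threshold ‘th’ is a value you choose – 30 here is just an example):
Cmax = max(C1, max(C2, C3)); % maximum difference over the three channels
th = 30; % example threshold value
BGI(Cmax > th) = 255; % mark foreground pixels as white
videoPlayer1(frame2); % play the frame...
videoPlayer2(uint8(BGI)); % ...and the background difference indicator
pause(1/vReader1.FrameRate);
end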
Exercise:
What results do you observe in terms of distinguishing the parts
of each frame that differ from the background?
What happens if you change the threshold value?
How do you think the performance of this approach could be
improved?
(See the next section.)
Now, after we acquire the "background" frame and separate its R, G and B
components, we smooth each component using the local averaging filter:
%Acquire the "background" frame:
frame1 = readFrame(vReader1);
%Separate its R, G and B components, and smooth each component using a local averaging filter:
h_average = fspecial('average', 5); % (the filter size 5 is an example value; conv2 requires double input)
bg_r=conv2(double(frame1(:,:,1)),h_average,'same');
bg_g=conv2(double(frame1(:,:,2)),h_average,'same');
bg_b=conv2(double(frame1(:,:,3)),h_average,'same');
Convolution using the conv2 function with the parameter ‘same’ uses
padding and so the output image will be the same size as the input image.
Similarly, when we read each frame in the video and separate its R, G, and B
components, we also need to smooth each component using the local
averaging filter:
%Read in the next frame:
frame2 = readFrame(vReader1);
%Separate its R, G and B components, and smooth each component using a local averaging filter:
f_r=conv2(double(frame2(:,:,1)),h_average,'same');
f_g=conv2(double(frame2(:,:,2)),h_average,'same');
f_b=conv2(double(frame2(:,:,3)),h_average,'same');
So now, within the loop, we can compute the difference between the
smoothed current frame and the smoothed background for each of the R, G,
B components (same script as before).
And calculate the maximum difference over the 3 channels (same script as
before).
And compute a "motion silhouette" by computing an indicator value
(white=255) to show where the frame differs from the background (same
script as before).
And finally play the video frame and the frame difference from background to
highlight where the frame differs from the background (same script as
before).
Exercise:
When each frame is smoothed, what results do you observe now?
What happens if you change the amount of smoothing? (For
example, change the size of the average filter.)
If we want to show the frame content where it differs from the background
(i.e. show the foreground content (which may represent moving objects)), we
can modify the way that we use the background difference indicator (the
array BGI).
We can use the background difference indicator array as a binary
characterisation array, with indicator value (0) to denote background and
value (1) to show foreground (i.e. where the frame differs from the
background). Then these binary values are used as multipliers for the pixel
values in the frame:
BGI(Cmax>th)=1;
BGI=uint8(BGI);
Then use the indicator values to highlight the foreground pixels in the frame:
%Highlight those pixels in the frame that differ from the background:
frame3=frame2;
frame3(:,:,1)=BGI.*frame2(:,:,1);
frame3(:,:,2)=BGI.*frame2(:,:,2);
frame3(:,:,3)=BGI.*frame2(:,:,3);
Then, as before, we can play the video frame and the frame difference from
the background:
videoPlayer1(frame2);
videoPlayer2(frame3);
pause(1/vReader1.FrameRate);
Exercise:
Again, when each frame is smoothed, what results do you observe
now?
What happens if you change the amount of smoothing? (For
example, change the size of the average filter.)
Modify your script so that it works with streamed video data from
a camera rather than input obtained by reading a video file.
Exercise:
Modify your script so that it computes the difference between
successive video frames rather than between each frame and a
background frame.
What do you observe if you introduce smoothing by
convolving each frame with a local averaging filter?
How could you smooth the video frames over time?
What effect does this have?
Again, modify your script so that it works with streamed video
data from a camera rather than input obtained by reading a video
file.