
TP2

M1 E3A international track Lab 2 Image and signal processing - M1 E3A - UEVE/UPSay

Group:

Orellana Pablo (20225753)

Calderon Wilder (20225780)

Spatial filtering
Exercise 1: Generate a noisy image
1. On the image of your choice, generate Gaussian noise, then uniform and salt-and-pepper noise, using imnoise.
Vary the density of the noise. Keep these images in reserve to evaluate the low pass filters in the next
exercise.

clear all
image = imread('image3.jpg');
imshow(image)
title('Original Image')

image2 = im2gray(image);
imshow(image2)

title('Image in Gray Scale')

D = [0.05 0.1 0.2 0.3 0.4 0.5] ; %Noise Density


M = 0 ;%Mean
V = 0.06; %Variance
J = imnoise(image2,'gaussian',M,V);
figure;
imshow(J)
title('Gaussian Noise Image');

figure;
for i=1:size(D,2)
K = imnoise(image2,'salt & pepper',D(i));
subplot(3,2,i)
imshow(K)
title( 'Salt and Pepper Noise Img with '+ string(D(i))+ ' density');
end
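The exercise also asks for uniform noise, which imnoise does not generate directly. Below is a minimal sketch of one common workaround, adding zero-mean uniform noise with rand to the grayscale image image2; the amplitude a and the variable names are only illustrative and not part of the original script.

%Uniform noise: imnoise has no 'uniform' option, so it is added manually
a = 0.2; %noise amplitude (peak-to-peak range), to be varied like D above
I = im2double(image2); %work in the [0,1] range
U = I + a*(rand(size(I)) - 0.5); %add zero-mean uniform noise
U = min(max(U,0),1); %clip back to the valid range
figure;
imshow(U)
title('Uniform Noise Image')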

Exercise 2: Comparison of spatial low pass filters

Let us define the signal-to-noise ratio as

$$\mathrm{SNR} = 10\log_{10}\frac{\sum_{x,y} I(x,y)^2}{\sum_{x,y}\left(\hat{I}(x,y)-I(x,y)\right)^2}$$

where $\hat{I}$ is the filtered image and $I$ the image before degradation with noise.
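Since the images are stored as uint8, squaring and subtracting them directly saturates at 0 and 255, which slightly biases the ratio. A minimal sketch of a helper that evaluates the expression above in double precision is given below; the name snr_db is hypothetical and not part of the original script.

%Hypothetical helper: SNR in dB between the reference image I and the
%filtered image Ihat, computed in double precision to avoid uint8 clipping
snr_db = @(I,Ihat) 10*log10( sum(double(I(:)).^2) / ...
    sum((double(Ihat(:)) - double(I(:))).^2) );

It can then be called, for example, as snr_db(image2, img_blur_gauss).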

%Gaussian filter
n = [3 7 11];
len_n = size(n,2);
sigma = (n-1)/6;

for i=1:len_n
img_blur_gauss = imgaussfilt(J,sigma(i));
figure;
montage({J,img_blur_gauss})
title('Noisy Image (Left) Vs. Gaussian Filtered Image (Right)')
SNR_gauss = 10*log10((sum(sum(image2.^2)))/(sum(sum((img_blur_gauss-image2).^2))))
end

SNR_gauss = 3.5560

SNR_gauss = 4.3690

SNR_gauss = 4.5975

%Average filter
avg3 = ones(3)/9;
avg7 = ones(7)/49;
avg11 = ones(11)/121;
img_blur_avg = imfilter(J, avg3, 'symmetric');
figure;
montage({J,img_blur_avg})
title('Noisy Image (Left) Vs. Average Filtered Image (Right)')

SNR_avg3 = 10*log10((sum(sum(image2.^2)))/(sum(sum((img_blur_avg-image2).^2))))

SNR_avg3 = 4.2176

img_blur_avg = imfilter(J, avg7, 'symmetric');


figure;
montage({J,img_blur_avg})
title('Noisy Image (Left) Vs. Average Filtered Image (Right)')

SNR_avg7 = 10*log10((sum(sum(image2.^2)))/(sum(sum((img_blur_avg-image2).^2))))

SNR_avg7 = 4.5981

img_blur_avg = imfilter(J, avg11, 'symmetric');


figure;
montage({J,img_blur_avg})
title('Noisy Image (Left) Vs. Average Filtered Image (Right)')

SNR_avg11 = 10*log10((sum(sum(image2.^2)))/(sum(sum((img_blur_avg-image2).^2))))

SNR_avg11 = 4.6008

%Median filter
for i=1:len_n
img_blur_med = medfilt2(J,[n(i) n(i)]);
figure;
montage({J,img_blur_med})
title('Noisy Image (Left) Vs. Median Filtered Image (Right)')
SNR_med = 10*log10((sum(sum(image2.^2)))/(sum(sum((img_blur_med-image2).^2))))
end

SNR_med = 4.5501

SNR_med = 6.3945

SNR_med = 7.3537

Exercise 3: Spatial high pass filters
1. Calculate the modulus and the orientation of the gradient using a Sobel filter on the image of your choice.
Comment.

BW = edge(image2,"sobel");
%Display the filtered image.
figure;
imshowpair(image2,BW,'montage')
title('Original Image (Left) Vs. Sobel Filtered Image(Right)')

%Calculate the gradient magnitude and direction, specifying the Sobel gradient operator.
[Gx, Gy] = imgradientxy(image2, 'sobel');
[Gmag, Gdir] = imgradient(Gx, Gy);

%Display the gradient magnitude and direction.


figure;
imshowpair(Gmag, Gdir, 'montage');
title('Gradient Magnitude (left), and Gradient Direction (right), using Sobel method')

The modulus of the gradient measures the strength of the intensity variation at each pixel, while the
orientation gives the direction of that variation. The imgradient function returns the orientation in degrees
with respect to the positive x-axis: 0 degrees corresponds to a horizontal gradient (a vertical edge) and ±90
degrees to a vertical gradient (a horizontal edge).

The Sobel filter is a popular choice for edge detection and gradient analysis because it is relatively simple to
implement and provides good results in many cases.
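To make the link with the Sobel masks explicit, the modulus and orientation can also be computed by hand from the two directional responses. A minimal sketch is given below; the variable names (hx, hy, Gmag2, Gdir2) are only illustrative, and the sign convention for the orientation follows the one used by imgradient.

%Sobel masks for the two derivatives (fspecial('sobel') approximates the
%vertical derivative; its transpose approximates the horizontal one)
hy = fspecial('sobel');
hx = hy';
Ix = imfilter(im2double(image2), hx, 'replicate');
Iy = imfilter(im2double(image2), hy, 'replicate');
Gmag2 = sqrt(Ix.^2 + Iy.^2); %modulus of the gradient
Gdir2 = atan2d(-Iy, Ix); %orientation in degrees (image y axis points down)
figure;
imshowpair(Gmag2, Gdir2, 'montage');
title('Gradient Magnitude (left) and Direction (right), computed manually')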

2. Convolve the image with a Laplacian 3 × 3. Determine zero crossings.

laplacian_filter = fspecial('laplacian');
conv_image = conv2(im2double(image2),laplacian_filter);

imshowpair(image2, conv_image,'montage')
title('Original Image (Left) Vs. Laplacian Filtered Image (Right)')

zerocross_image = edge(conv_image,"zerocross");
imshowpair(image2, zerocross_image,'montage')
title('Original Image (Left) Vs. Zerocross Filtered Image (Right)')
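As an alternative to edge(...,"zerocross"), the zero crossings of the Laplacian response can be located directly by testing for sign changes between neighbouring pixels. A minimal sketch, applied to the conv_image computed above, is:

%Mark a pixel when the Laplacian response changes sign between
%horizontally or vertically adjacent pixels
L = conv_image;
sx = sign(L(:,1:end-1)).*sign(L(:,2:end)) < 0; %sign change between horizontal neighbours
sy = sign(L(1:end-1,:)).*sign(L(2:end,:)) < 0; %sign change between vertical neighbours
zc = false(size(L));
zc(:,1:end-1) = zc(:,1:end-1) | sx;
zc(1:end-1,:) = zc(1:end-1,:) | sy;
figure;
imshow(zc)
title('Zero crossings of the Laplacian (manual sign-change test)')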

3. Convolve the image with a LOG then a DOG of size 5 × 5 suitably generated. Plot the profiles of the two
filters. Determine zero crossings.

%LoG filter (renamed to avoid shadowing the built-in log function)
log_filter = fspecial('log',5,2);
log_image = conv2(image2,log_filter,'same');

%DoG filter
H1 = fspecial('gaussian',5,15);
H2 = fspecial('gaussian',5,20);
dog = H1 - H2;
dog_image = conv2(image2,dog,'same');

imshowpair(log_image, dog_image,'montage')
title('LoG Image (Left) Vs. DoG Image (Right)')
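The statement also asks to plot the profiles of the two filters. A minimal sketch, plotting the central row of the 5 x 5 LoG and DoG kernels defined above (log_filter and dog), is:

%Profiles (central row) of the two 5x5 kernels
figure;
subplot(1,2,1); plot(log_filter(3,:),'-o'); title('LoG profile (central row)');
subplot(1,2,2); plot(dog(3,:),'-o'); title('DoG profile (central row)');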

zerocross_log_image = edge(log_image,"zerocross");
zerocross_dog_image = edge(dog_image,"zerocross");
imshowpair(zerocross_log_image, zerocross_dog_image,'montage')
title('Zero crossings LoG Image (Left) Vs. Zero crossings DoG Image (Right)')

Image enhancement
Exercise 4: Simple enhancement using histograms
1. Load the images: cameraman, kids, pout and flowers (imread then imshow). Change the colormaps using
the Matlab colormaps (PINK, HSV, GRAY, HOT, COOL, BONE, COPPER, FLAG).

img1 = imread('cameraman.tif');
img2 = imread('kids.tif');
img3 = imread('pout.tif');
img4 = imread('flowers.jpg');

imshow(img1)
title('Image 1: Cameraman image')

imshow(img2)
title('Image 2: Kids image')

imshow(img3)
title('Image 3: Pouting image')

img4 = im2gray(img4);

% Change the colormap to pink


figure;
imshow(img4, pink);
title('Image 4: Pink Colormap');

% Change the colormap to HSV
figure;
imshow(img4, hsv);
title('Image 4: HSV Colormap');

% Change the colormap to gray
figure;
imshow(img4, gray);
title('Image 4: Gray Colormap');

% Change the colormap to hot
figure;
imshow(img4, hot);
title('Image 4: Hot Colormap');

% Change the colormap to cool
figure;
imshow(img4, cool);
title('Image 4: Cool Colormap');

% Change the colormap to bone
figure;
imshow(img4, bone);
title('Image 4: Bone Colormap');

% Change the colormap to copper
figure;
imshow(img4, copper);
title('Image 4: Copper Colormap');

% Change the colormap to flag
figure;
imshow(img4, flag);
title('Image 4: Flag Colormap');

2. Display pixel values and corresponding values in the colormap in a small neighborhood (imtool). Focus on
homogeneous regions. Conclude.

% Display the image using imtool


threshold = 10;
high_variance_img4 = stdfilt(img4);
uniform_area = zeros(size(high_variance_img4));

uniform_area(high_variance_img4<=threshold)=1;

% Display the uniform areas


imtool(uniform_area);

%imtool(img4,"Colormap",pink);
%imtool(img4,"Colormap",hsv);

%imtool(img4,"Colormap",gray);
%imtool(img4,"Colormap",hot);
%imtool(img4,"Colormap",cool);
%imtool(img4,"Colormap",bone);
%imtool(img4,"Colormap",copper);
%imtool(img4,"Colormap",flag);

3. Compare the characteristics of these images using imfinfo: format, resolution, dynamics. Note for each
image the mean, the variance as well as the min and max values. Make a link between these values and the
visual quality of the images.

info1 = imfinfo('cameraman.tif');
info2 = imfinfo('kids.tif');
info3 = imfinfo('pout.tif');
info4 = imfinfo('flowers.jpg');

% Print the format, resolution, and dynamics information


fprintf('Image 1: Cameraman\n');

Image 1: Cameraman

fprintf('Format: %s\n', info1.Format);

Format: tif

fprintf('Resolution: %d x %d\n', info1.Width, info1.Height);

Resolution: 256 x 256

fprintf('Dynamics: %d\n\n', info1.BitDepth);

Dynamics: 8

fprintf('Mean value: %s\n', string(mean(img1(:))));

Mean value: 118.7245

fprintf('Variance value: %s\n', string(var(double(img1(:)))));

Variance value: 3886.4895

fprintf('Min value: %s\n', string(min(img1(:))));

Min value: 7

fprintf('Max value: %s\n', string(max(img1(:))));

Max value: 253

fprintf('Dynamic Range: %s\n', string(20*log10(double(max(img1(:)))/double(min(img1(:))+1))));

Dynamic Range: 30.0006

The first image is a .tif file with a resolution of 256x256 and a bit depth of 8, which allows up to 256 gray
levels. The mean value of 118.72 is close to half of the maximum level 255, but individual pixel values vary
considerably around it; this spread is measured by the variance (or standard deviation). Given the variance of
3886.49, the minimum level of 7 and the maximum level of 253, the gray-level distribution is spread over almost
the whole range, so this is a high-contrast image.

% Print the format, resolution, and dynamics information


fprintf('Image 2: Kids\n');

Image 2: Kids

fprintf('Format: %s\n', info2.Format);

Format: tif

fprintf('Resolution: %d x %d\n', info2.Width, info2.Height);

Resolution: 318 x 400

fprintf('Dynamics: %d\n\n', info2.BitDepth);

Dynamics: 8

fprintf('Mean value: %s\n', string(mean(img2(:))));

Mean value: 26.1343

fprintf('Variance value: %s\n', string(var(double(img2(:)))));

Variance value: 369.7469

fprintf('Min value: %s\n', string(min(img2(:))));

Min value: 0

fprintf('Max value: %s\n', string(max(img2(:))));

Max value: 63

fprintf('Dynamic Range: %s\n', string(20*log10(double(max(img2(:)))/double(min(img2(:))+1))));

Dynamic Range: 35.9868

The second image is a .tif file with a resolution of 318x400 and a bit depth of 8, which allows up to 256 gray
levels. The mean value of 26.13 indicates that most pixels have low gray levels, and this is confirmed by the
variance of 369.75, the minimum value of 0 and the maximum value of 63: the intensities are confined to the
low end of the range, so this is a dark, low-contrast image.

% Print the format, resolution, and dynamics information

fprintf('Image 3: Pout\n');

Image 3: Pout

fprintf('Format: %s\n', info3.Format);

Format: tif

fprintf('Resolution: %d x %d\n', info3.Width, info3.Height);

Resolution: 240 x 291

fprintf('Bit Depth: %d\n\n', info3.BitDepth);

Bit Depth: 8

fprintf('Mean value: %s\n', string(mean(img3(:))));

Mean value: 110.3037

fprintf('Variance value: %s\n', string(var(double(img3(:)))));

Variance value: 537.3647

fprintf('Min value: %s\n', string(min(img3(:))));

Min value: 74

fprintf('Max value: %s\n', string(max(img3(:))));

Max value: 224

fprintf('Dynamic Range: %s\n', string(20*log10(double(max(img3(:)))/double(min(img3(:))+1))));

Dynamic Range: 9.5037

The third image is a .tif file with a resolution of 240x291 and a bit depth of 8, which allows up to 256 gray
levels. The mean value of 110.30 is close to the mid level 255/2, but the relatively small variance of 537.36
shows that the pixel values stay close to this mean. With a minimum of 74 and a maximum of 224, most
intensities are concentrated in a narrow band around the middle of the range, so this is a low-contrast image.

% Print the format, resolution, and dynamics information


fprintf('Image 4: Flowers\n');

Image 4: Flowers

fprintf('Format: %s\n', info4.Format);

Format: jpg

fprintf('Resolution: %d x %d\n', info4.Width, info4.Height);

Resolution: 640 x 853

fprintf('Dynamics: %d\n\n', info4.BitDepth);

Dynamics: 24

fprintf('Mean value: %s\n', string(mean(img4(:))));

Mean value: 98.8255

fprintf('Variance value: %s\n', string(var(double(img4(:)))));

Variance value: 3174.8306

fprintf('Min value: %s\n', string(min(img4(:))));

Min value: 0

fprintf('Max value: %s\n', string(max(img4(:))));

Max value: 253

fprintf('Dynamic Range: %s\n', string(20*log10(double(max(img4(:)))/double(min(img4(:))+1))));

Dynamic Range: 48.0624

The fourth image is a .jpg file with a resolution of 640x853 and a bit depth of 24, which allows up to
256x256x256 (about 16.7 million) colours, because the image is RGB (each pixel is composed of red, green and
blue components); the statistics above are computed on its grayscale version. The mean value of 98.83 is close
to the mid level 255/2, but individual pixel values vary widely around it, as measured by the variance. Given
the variance of 3174.83, the minimum value of 0, the maximum value of 253 and the fact that the gray-level
distribution is spread over the whole range, this is a high-contrast image.

4. Calculate and display the histogram of each of the images. Make a link between the histogram and the
content of the images when possible.

imhist(img1)
title('Histogram of Image 1')

The histogram of image 1 shows that the gray levels are concentrated in two regions, one around level 13 and
the other around level 162, so the mean value falls between these two modes.

imhist(img2)

title('Histogram of Image 2')

The histogram of image 2 shows that the gray levels are concentrated in a single region of low values, with a
mean of about 26.

imhist(img3)

title('Histogram of Image 3')

The histogram of image 3 shows that the gray levels are concentrated in a single central region, covering
values from 74 to 224 with a mean of about 110.

imhist(img4)

title('Histogram of Image 4')

The histogram of image 4 shows that the gray levels are spread over the whole range from 0 to 253. The
distribution has a roughly triangular shape with a peak of 4571 at level 42, meaning that 4571 pixels have
gray level 42.

5. In order to improve contrast, apply a point function to the image: linear, exponential then logarithmic.
Explain the role of each function and configure them appropriately. Apply them to the images that are best
suited to each function. Compare the histograms before and after application of each function.

% Apply an exponential point function using imadjust


gamma = 0.5; % specify the gamma value
img1_adj = imadjust(img1, [], [], gamma);
figure;
imshowpair(img1,img1_adj,'montage');
title('Img 1 Original Image vs Exponential Point function Image')

imhist(img1_adj)

%IMG 2 Apply a linear point function using imadjust


img2_adj = imadjust(img2);

figure;
imshowpair(img2,img2_adj,'montage');
title('Img 2 Original Image vs Linear Point function Image')

imhist(img2_adj)

%IMG 3 Apply a linear point function using imadjust
img3_adj = imadjust(img3);
figure;
imshowpair(img3,img3_adj,'montage');
title('Img 3 Original Image vs Linear Point function Image')

imhist(img3_adj)

%IMG 4 Apply an exponential point function using imadjust
img4_adj = imadjust(img4, [], [], gamma);
figure;
imshowpair(img4,img4_adj,'montage');
title('Img 4 Original Image vs Exponential Point function Image')

imhist(img4_adj)

Linear point function:

A linear point function maps the pixel values linearly onto a new output range. This is useful when the image
has low contrast, i.e. the range of pixel values is narrow: stretching that range linearly over the full dynamic
improves the contrast (this is what imadjust does for images 2 and 3).

Exponential point function:

An exponential (gamma) point function maps the pixel values through a power-law curve. With an exponent
greater than 1 it compresses the dark values and expands the bright ones, enhancing detail in bright regions;
with an exponent smaller than 1, such as the gamma = 0.5 used above, it does the opposite and brings out
detail in the dark regions.

Logarithmic point function:

A logarithmic point function maps the pixel values through a log curve, which expands the dark values and
compresses the bright ones. This is useful for images with a high dynamic range, since it enhances the details
in the darker parts of the image while keeping the bright parts from saturating.
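The logarithmic mapping is not provided by imadjust, so it can be applied manually. A minimal sketch on img1 is given below; the normalisation constant c and the choice of image are assumptions, not part of the original script.

%Logarithmic point function: s = c*log(1 + r), with r in [0,1]
r = im2double(img1);
c = 1/log(2); %so that r = 1 maps to s = 1
img1_log = c*log(1 + r);
figure;
imshowpair(im2double(img1), img1_log, 'montage');
title('Img 1 Original Image vs Logarithmic Point function Image')

imhist(img1_log)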

6. Compare previous functions with histogram equalization (histeq).

img1_histeq = histeq(img1);
img2_histeq = histeq(img2);
img3_histeq = histeq(img3);
img4_histeq = histeq(img4);

figure;
imshowpair(img1_adj,img1_histeq,'montage');
title('Img 1: Exponential Image vs Histogram Equalized Image')

figure;
imshowpair(img2_adj,img2_histeq,'montage');
title('Img 2: Linear Image vs Histogram Equalized Image')

figure;
imshowpair(img3_adj,img3_histeq,'montage');
title('Img 3: Linear Image vs Histogram Equalized Image')

figure;
imshowpair(img4_adj,img4_histeq,'montage');
title('Img 4: Exponential Image vs Histogram Equalized Image')

Histogram equalization enhances an image by redistributing its gray levels so that the resulting histogram is
approximately flat. The comparison with the previous point operations shows, however, that this contrast
enhancement is not always the best option: it can give poor results for images containing several distinct
objects or regions. In such cases, thresholding can be a more suitable choice.

7. Use the histogram to find the best threshold for the coins, blood1 and rice images. Do the same with an
image of your choice.

coins_img = imread('coins.png');
imhist(coins_img)

For the coins image, the best threshold should be 90.

rice_img = imread('rice.png');
imhist(rice_img)

For the rice image, the best thresholds should be around 70 and 150.

imhist(image2)

For the image we chose, the best threshold should be 180.
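To check these values, the thresholds read off the histograms can be applied directly; a minimal sketch for the coins image and for our own image (using the levels 90 and 180 chosen above) is:

%Binarize with the thresholds chosen from the histograms
coins_bw = coins_img > 90;
own_bw = image2 > 180;
figure;
subplot(1,2,1); imshow(coins_bw); title('Coins thresholded at 90');
subplot(1,2,2); imshow(own_bw); title('Our image thresholded at 180');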

8. Compare the adaptive thresholding method (to be implemented) with Otsu’s method (graythresh) on the
previous images and on the image of your choice. Carefully keep the binary images obtained for the following
exercise.

% Adaptive thresholding
global_threshold = sum(coins_img(:))/(size(coins_img,1)*size(coins_img,2));

for i=1:10
pivot1_img = coins_img;
pivot2_img = coins_img;
pivot1_img(coins_img<global_threshold)=0;
pivot2_img(coins_img>=global_threshold)=0;
m1 = sum(pivot1_img(:))/nnz(pivot1_img);
m2 = sum(pivot2_img(:))/nnz(pivot2_img);
global_threshold = (m1+m2)/2;
end
BW_adapt = coins_img;
BW_adapt(coins_img<global_threshold) = 0;
BW_adapt(coins_img>=global_threshold) = 255;

% Otsu's method
level = graythresh(coins_img);
BW_otsu = imbinarize(coins_img, level);

% Display the results


figure;
subplot(1,3,1); imshow(coins_img); title('Original image');
subplot(1,3,2); imshow(BW_otsu); title('Otsu');
subplot(1,3,3); imshow(BW_adapt); title('Adaptive thresholding');

Adaptive thresholding generally adjusts the threshold locally according to the properties of the surrounding
pixels, whereas Otsu's method determines a single global threshold separating foreground from background
based on the intensity histogram of the entire image. The version implemented above is a simplified iterative
variant that repeatedly updates one global threshold from the means of the two classes it produces.

With both methods the coins are separated from the background. The main difference is that Otsu's method
works directly on the histogram (the frequencies of the gray level values) and picks the threshold that best
separates the two classes, which here gives a somewhat more accurate segmentation than the iterative method.
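For comparison, Matlab also provides a genuinely local adaptive threshold through adaptthresh and imbinarize. A minimal sketch on the coins image is given below; the sensitivity value 0.5 is an assumption.

%Locally adaptive threshold: the threshold map T varies across the image
T = adaptthresh(coins_img, 0.5); %sensitivity between 0 and 1
BW_local = imbinarize(coins_img, T);
figure;
imshowpair(BW_otsu, BW_local, 'montage');
title('Otsu (left) vs. Locally Adaptive Threshold (right)')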

Segmentation
Exercise 5: Highlighting zones
1. On the previous thresholded images (you will choose the best result obtained), apply a labeling in connected
components (blob coloring) (bwlabel then label2rgb).

% Label the connected components in the binary image (4-connectivity)
cc4 = bwlabel(BW_otsu,4);

% Generate a color map for the labels


cmap4 = label2rgb(cc4, 'lines', 'k', 'shuffle');

% Display the results


figure;
subplot(1,2,1); imshow(BW_otsu); title('Binary image');
subplot(1,2,2); imshow(cmap4); title('Labeled 4-connectivity image');

2. Compare 4-connected labeling with 8-connected labeling.

% Label the connected components in the binary image (8-connectivity)


cc8 = bwlabel(BW_otsu,8);

% Generate a color map for the labels


cmap8 = label2rgb(cc8, 'lines', 'k', 'shuffle');

% Display the results


figure;
subplot(1,2,1); imshow(cmap4); title('Labeled 4-connectivity image');
subplot(1,2,2); imshow(cmap8); title('Labeled 8-connectivity image');
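A simple quantitative comparison is the number of components found with each connectivity; a minimal sketch is:

%With 8-connectivity, diagonally touching pixels are merged into the same
%component, so the label count is less than or equal to the 4-connectivity count
fprintf('Components (4-connectivity): %d\n', max(cc4(:)));
fprintf('Components (8-connectivity): %d\n', max(cc8(:)));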

3. Find a simple way to calculate the area in pixels of labeled objects.

% Find the unique labels in the image (excluding 0)


labels = unique(cc8);
labels(labels == 0) = [];

% Initialize a vector to store the areas of each object


areas = zeros(size(labels));

% Loop over each object label and count the number of pixels with that label
for i = 1:length(labels)
areas(i) = sum(cc8(:) == labels(i));
end

% Display the results


fprintf('Number of labeled objects: %d\n', length(labels));

Number of labeled objects: 15

fprintf('Area of each labeled object (in pixels):\n');

Area of each labeled object (in pixels):

disp(areas);

2649
1854
2667

964
1
2
2748
1
5
1
2510
2587
2544
1893
1864

fprintf('Total area of all labeled objects (in pixels): %d\n', sum(areas));

Total area of all labeled objects (in pixels): 22290
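An even simpler way is to let regionprops count the pixels of each labeled object; a minimal sketch is:

%regionprops returns, among other measurements, the area (pixel count) of
%every labeled object
stats = regionprops(cc8, 'Area');
areas_rp = [stats.Area]; %row vector of object areas, in pixels
disp(areas_rp);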

4. Perform edge detection (as in Exercise 3) starting from the labeled image. What do you deduce from this?

% Convert the labeled image to a binary mask (any nonzero label belongs to an object)

binary_mask = cc8 > 0;

% Apply the Sobel edge detector to the binary mask


edge_mask = edge(binary_mask, 'sobel');

% Display the original labeled image and the edge mask


figure;
subplot(1,2,1); imshow(cmap8); title('Labeled Image');
subplot(1,2,2); imshow(edge_mask); title('Edge Mask');

Performing edge detection on a labeled image provides useful information about the boundaries between the
different labeled objects in the image.

To do this, we first convert the labeled image to a binary mask in which the labeled objects are set to 1 and
the background to 0. We can then apply an edge detection algorithm, such as the Sobel or Canny detector, to
this binary mask to highlight the object boundaries.
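The same boundaries can also be obtained directly from the binary mask with bwperim, which avoids the gradient computation altogether; a minimal sketch is:

%bwperim keeps only the mask pixels that touch the background,
%i.e. the object boundaries
perim_mask = bwperim(binary_mask);
figure;
imshowpair(edge_mask, perim_mask, 'montage');
title('Sobel Edges (left) vs. bwperim Boundaries (right)')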
