
INTRODUCTION TO IMAGE PROCESSING

LECTURE 7
Hough Transform
Segmentation
Hough Transform
– The Hough Transform is used to detect parametric geometric structures (such as lines, circles, and ellipses) in images.
– The general practice is to detect the edges first.
– The input to line detection with the Hough Transform is the binary edge image.
Representation of lines

x cos θ + y sin θ = ρ

where ρ is the length of the normal from the origin to the line, and θ is the angle between this normal and the x-axis.
Define the parameter space (ρ, θ).
Quantize the parameter space (ρ, θ), i.e., divide it into cells.
We call this quantized space the accumulator cells.
Hough Transform

– Initialize the accumulator cells with 0.
– For each edge point (x, y), add +1 to all the cells (ρ, θ) that satisfy the above line equation.
– If a line exists in the edge map, the corresponding cell (ρ, θ) will get many votes from the edge points on that line (see the sketch below).
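A minimal sketch of this voting procedure, assuming a binary edge image BW and a one-degree quantization of θ (the built-in hough function used in the demo below performs an equivalent computation):

% Hough accumulator sketch: 1-degree theta bins, 1-pixel rho bins.
[rows, cols] = find(BW);                  % coordinates of the edge points
thetas = -90:89;                          % quantized theta values in degrees
rhoMax = ceil(hypot(size(BW,1), size(BW,2)));
H = zeros(2*rhoMax + 1, numel(thetas));   % accumulator cells, initialized to 0

for p = 1:numel(rows)                     % for each edge point (x, y) ...
    x = cols(p);  y = rows(p);
    for t = 1:numel(thetas)
        rho = x*cosd(thetas(t)) + y*sind(thetas(t));
        r = round(rho) + rhoMax + 1;      % map rho to an accumulator row index
        H(r, t) = H(r, t) + 1;            % ... add one vote to cell (rho, theta)
    end
end
% Cells with many votes correspond to lines in the edge map.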
Hough Transform
Example: 11x11 edge map and its Hough Transform
MATLAB DEMO
Detecting Lines Using the Hough Transform
I = imread('circuit.tif');

rotI = imrotate(I,33,'crop');
fig1 = imshow(rotI);

BW = edge(rotI,'canny');
figure, imshow(BW);

[H,theta,rho] = hough(BW);

figure, imshow(imadjust(mat2gray(H)),[],'XData',theta,'YData',rho,...
'InitialMagnification','fit');
xlabel('\theta (degrees)'), ylabel('\rho');
axis on, axis normal, hold on;

P = houghpeaks(H,5,'threshold',ceil(0.3*max(H(:))));

lines = houghlines(BW,theta,rho,P,'FillGap',5,'MinLength',7);
Detecting Lines Using the Hough Transform
figure, imshow(rotI), hold on
max_len = 0;
for k = 1:length(lines)
   xy = [lines(k).point1; lines(k).point2];
   plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');

   % Plot beginnings and ends of lines
   plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
   plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');

   % Determine the endpoints of the longest line segment
   len = norm(lines(k).point1 - lines(k).point2);
   if (len > max_len)
      max_len = len;
      xy_long = xy;
   end
end

% Highlight the longest line segment
plot(xy_long(:,1),xy_long(:,2),'LineWidth',2,'Color','red');
[Figure: Hough transform of the edge image displayed as an image, with θ (degrees) on the horizontal axis and ρ on the vertical axis.]
lines = houghlines(BW,theta,rho,P,'FillGap',5,'MinLength',7);

Each element of the structure array has these fields:
– point1: end-point of the line segment; two-element vector
– point2: end-point of the line segment; two-element vector
– theta: angle (in degrees) of the Hough transform bin
– rho: rho-axis position of the Hough transform bin
Segmentation
• Segmentation: Segmentation is the decomposition of an image into regions and objects.
  – To facilitate recognition, understanding, and region-of-interest (ROI) processing
• Ill-defined problem
  – The definition of a region is context-dependent
• The result of segmentation is another image where the value of each pixel is the «label» of the region the pixel belongs to.
Segmentation
• UNSUPERVISED METHODS: Use the properties (gray-level or color statistics, homogeneity, edge information, etc.) of the original image to partition it into regions.

• SUPERVISED METHODS: Use training images with associated ground truth to train machine learning algorithms such as neural networks.
Histogram-based segmentation
Threshold Segmentation

g(x, y) = \begin{cases} 1 & \text{if } f(x, y) > T \quad \text{(object point)} \\ 0 & \text{if } f(x, y) \le T \quad \text{(background point)} \end{cases}

T : global threshold

Multiple thresholding:

g(x, y) = \begin{cases} a & \text{if } f(x, y) > T_2 \\ b & \text{if } T_1 < f(x, y) \le T_2 \\ c & \text{if } f(x, y) \le T_1 \end{cases}
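A short MATLAB sketch of this three-level labeling; the image name, the thresholds T1 and T2, and the numeric labels standing in for a, b, c are only illustrative:

% Dual-threshold segmentation into three labels (values are illustrative).
f  = imread('coins.png');
T1 = 85;  T2 = 170;            % assumed example thresholds, T1 < T2
g  = zeros(size(f), 'uint8');
g(f >  T2)           = 2;      % label "a": pixels above T2
g(f >  T1 & f <= T2) = 1;      % label "b": pixels between T1 and T2
                               % label "c" (0): pixels at or below T1
imshow(g, [])                  % display the label image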

The Role of Noise in Image Thresholding
The Role of Illumination and Reflectance
Basic Global Thresholding

1. Select an initial estimate for the global threshold, T.
2. Segment the image using T. This produces two groups of pixels: G1, consisting of all pixels with intensity values > T, and G2, consisting of the pixels with values ≤ T.
3. Compute the average intensity values m1 and m2 for the pixels in G1 and G2, respectively.
4. Compute a new threshold value:
   T = \frac{1}{2}(m_1 + m_2)
5. Repeat Steps 2 through 4 until the difference between values of T in successive iterations is smaller than a predefined parameter ΔT (see the sketch below).
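A minimal MATLAB sketch of this iteration; the test image and the tolerance ΔT = 0.5 are assumptions for the demo:

% Iterative global thresholding (Steps 1-5 above).
f  = double(imread('coins.png'));
T  = mean(f(:));                      % Step 1: initial estimate of T
dT = 0.5;                             % assumed convergence tolerance
while true
    m1 = mean(f(f >  T));             % Steps 2-3: mean of G1 (values > T)
    m2 = mean(f(f <= T));             %            mean of G2 (values <= T)
    Tnew = 0.5*(m1 + m2);             % Step 4: new threshold
    if abs(Tnew - T) < dT, break; end % Step 5: stop when T has converged
    T = Tnew;
end
g = f > T;                            % final binary segmentation
imshow(g)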
Otsu’s Thresholding Method

• Based on a very simple idea: find the threshold that minimizes the weighted within-class variance.
• This turns out to be the same as maximizing the between-class variance.
• Operates directly on the gray-level histogram [e.g. 256 numbers, P(i)], so it's fast (once the histogram is computed).
Optimum Global Thresholding Using
Otsu’s Method
• Principle: maximizing the between-class variance

Let {0, 1, 2, ..., L-1} denote the L distinct intensity levels in a digital image of size M × N pixels, and let n_i denote the number of pixels with intensity i.

p_i = n_i / MN \quad \text{and} \quad \sum_{i=0}^{L-1} p_i = 1

k is a threshold value, C_1 → [0, k], C_2 → [k+1, L-1]

P_1(k) = \sum_{i=0}^{k} p_i \quad \text{and} \quad P_2(k) = \sum_{i=k+1}^{L-1} p_i = 1 - P_1(k)
Optimum Global Thresholding Using
Otsu’s Method
The mean intensity value of the pixels assigned to class C_1 is

m_1(k) = \sum_{i=0}^{k} i \, P(i \mid C_1) = \frac{1}{P_1(k)} \sum_{i=0}^{k} i \, p_i

The mean intensity value of the pixels assigned to class C_2 is

m_2(k) = \sum_{i=k+1}^{L-1} i \, P(i \mid C_2) = \frac{1}{P_2(k)} \sum_{i=k+1}^{L-1} i \, p_i

P_1 m_1 + P_2 m_2 = m_G \quad \text{(global mean value)}
Optimum Global Thresholding Using
Otsu’s Method

Between-class variance, σ_B², is defined as

\sigma_B^2 = P_1 (m_1 - m_G)^2 + P_2 (m_2 - m_G)^2
           = P_1 P_2 (m_1 - m_2)^2
           = \frac{(m_G P_1 - m_1 P_1)^2}{P_1 (1 - P_1)}
           = \frac{(m_G P_1 - m)^2}{P_1 (1 - P_1)}

where m = m(k) = \sum_{i=0}^{k} i \, p_i is the cumulative mean up to level k (note that m = P_1 m_1).
Optimum Global Thresholding Using
Otsu’s Method

The optimum threshold is the value, k*, that maximizes σ_B²(k*):

\sigma_B^2(k^*) = \max_{0 \le k \le L-1} \sigma_B^2(k)

g(x, y) = \begin{cases} 1 & \text{if } f(x, y) > k^* \\ 0 & \text{if } f(x, y) \le k^* \end{cases}

Separability measure: \eta = \dfrac{\sigma_B^2}{\sigma_G^2}
Otsu’s Algorithm: Summary

1. Compute the normalized histogram of the input image. Denote the components of the histogram by p_i, i = 0, 1, …, L-1.
2. Compute the cumulative sums, P_1(k), for k = 0, 1, …, L-1.
3. Compute the cumulative means, m(k), for k = 0, 1, …, L-1.
4. Compute the global intensity mean, m_G.
5. Compute the between-class variance, σ_B²(k), for k = 0, 1, …, L-1.
6. Obtain Otsu's threshold, k*, as the value that maximizes the between-class variance.
7. Obtain the separability measure, η (see the sketch below).
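A compact sketch of steps 1-7 operating directly on a 256-bin histogram (graythresh, used in the demo at the end of the lecture, implements the same criterion); the test image is an assumption:

% Otsu's threshold computed from the normalized histogram (L = 256 levels).
f   = imread('coins.png');
p   = imhist(f) / numel(f);                 % 1. normalized histogram p_i
P1  = cumsum(p);                            % 2. cumulative sums P1(k)
m   = cumsum((0:255)' .* p);                % 3. cumulative means m(k)
mG  = m(end);                               % 4. global intensity mean
sigmaB2 = (mG*P1 - m).^2 ./ (P1.*(1 - P1) + eps);   % 5. between-class variance
[~, idx] = max(sigmaB2);                    % 6. k* maximizes sigma_B^2
kstar = idx - 1;                            %    (histogram bins are 1-based in MATLAB)
eta = sigmaB2(idx) / var(double(f(:)), 1);  % 7. separability measure
BW = f > kstar;                             % threshold the image at k*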
Using Image Smoothing to Improve Global Thresholding
Multiple Thresholds

In the case of K classes, C_1, C_2, ..., C_K, the between-class variance is

\sigma_B^2 = \sum_{k=1}^{K} P_k (m_k - m_G)^2

where

P_k = \sum_{i \in C_k} p_i \quad \text{and} \quad m_k = \frac{1}{P_k} \sum_{i \in C_k} i \, p_i

The optimum threshold values k_1^*, k_2^*, ..., k_{K-1}^* are those that maximize the between-class variance:

\sigma_B^2(k_1^*, k_2^*, ..., k_{K-1}^*) = \max_{0 < k_1 < k_2 < \cdots < k_{K-1} < L-1} \sigma_B^2(k_1, k_2, ..., k_{K-1})
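For reference, the Image Processing Toolbox function multithresh computes K-1 thresholds with Otsu's criterion; a brief example (the image and the choice of two thresholds are assumptions, not part of the original slides):

% Multi-level Otsu thresholding with the toolbox functions.
I      = imread('coins.png');
thresh = multithresh(I, 2);         % two thresholds -> three classes
labels = imquantize(I, thresh);     % assign each pixel the label 1, 2, or 3
imshow(label2rgb(labels))           % visualize the label image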
Multiple Thresholds
Grouping and Segmentation
• Segmentation is the process of dividing an image into regions of
“related content”
Homogeneous Region

[Figure: a PCA-fitted sample ellipsoid illustrating a homogeneous region.]
K-Means Clustering
Algorithm
1. Choose a fixed number of clusters and initial cluster centers
2. Allocate points to the clusters they are closest to
3. Recompute the cluster centers
4. Go back to step 2, and repeat until convergence (see the sketch below)
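A minimal sketch of these four steps for gray-level segmentation; the number of clusters K = 3, the initialization, and the test image are assumptions, and the code relies on MATLAB implicit expansion (R2016b or later):

% Hand-coded K-means on pixel intensities, following the steps above.
f = double(imread('coins.png'));
x = f(:);                                   % one feature per pixel: gray level
K = 3;                                      % assumed number of clusters
centers = linspace(min(x), max(x), K)';     % 1. initial cluster centers
for iter = 1:100
    [~, idx] = min(abs(x - centers'), [], 2);            % 2. assign to nearest center
    newCenters = accumarray(idx, x, [K 1], @mean, NaN);  % 3. recompute the centers
    newCenters(isnan(newCenters)) = centers(isnan(newCenters));  % keep empty clusters
    if max(abs(newCenters - centers)) < 1e-3, break; end % 4. repeat until convergence
    centers = newCenters;
end
labels = reshape(idx, size(f));             % label image: cluster index per pixel
imshow(labels, [])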
K-Means Clustering
More General Segmentation
• Region Growing (bottom-up approach):
  – Tile the image
  – Start a region with a seed tile
  – Merge similar neighboring tiles into the region body
  – When the threshold is exceeded, start a new region

• Region Splitting (top-down approach):
  – Start with one large region
  – Recursively:
    • Choose the region with the highest dissimilarity
    • If sufficiently similar, stop; otherwise split
    • Repeat until no more splitting occurs
Region Growing

• A simple approach to image segmentation is to start from some pixels (seeds) representing distinct image regions and to grow them until they cover the entire image.
• For region growing we need a rule describing a growth mechanism and a rule checking the homogeneity of the regions after each growth step.
Region Growing
• The growth mechanism: at each stage k and for each region Ri(k), i = 1, …, N, we check if there are unclassified pixels in the 8-neighbourhood of each pixel of the region border.
• Before assigning such a pixel x to a region Ri(k), we check that the region homogeneity P(Ri(k) ∪ {x}) = TRUE still holds.
Region Growing

The arithmetic mean m and standard deviation std of a region R having n = |R| pixels are:

m(R) = \frac{1}{n} \sum_{(r,c) \in R} I(r, c)

std(R) = \sqrt{\frac{1}{n-1} \sum_{(r,c) \in R} \big( I(r, c) - m(R) \big)^2}

The following property P is used to decide whether merging two regions R1 and R2 is allowed: if

|m(R1) − m(R2)| < k · min{std(R1), std(R2)},

the two regions R1 and R2 are merged.
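A small MATLAB sketch of this merging test, for two regions given as linear pixel-index lists R1 and R2 into an image I (the function name and interface are illustrative):

% Homogeneity predicate P for merging two regions (index lists into image I).
function ok = canMerge(I, R1, R2, k)
    v1 = double(I(R1));  v2 = double(I(R2));
    % std uses the (n-1)-normalized estimate, matching the formula above.
    ok = abs(mean(v1) - mean(v2)) < k * min(std(v1), std(v2));
end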
Region Splitting
• The opposite approach to region growing is region splitting.
• It is a top-down approach: it starts with the assumption that the entire image is homogeneous.
• If this is not true, the image is split into four sub-images.
• This splitting procedure is repeated recursively until we have split the image into homogeneous regions.
Region Splitting

• If the original image is square, N × N, with dimensions that are powers of two (N = 2^n):
• All regions produced by the splitting algorithm are squares of dimensions M × M, where M is a power of two as well.
• Since the procedure is recursive, it produces an image representation that can be described by a tree whose nodes have four children each.
• Such a tree is called a quadtree.
Region Splitting

Quadtree
[Figure: quadtree decomposition example, showing a region split into sub-regions R0 to R3, with R0 further split into R00, R01, R02, …, and the corresponding quadtree.]


Split / Merge

• If a region R is inhomogeneous (P(R) = FALSE), then it is split into four sub-regions.
• If two adjacent regions Ri, Rj are homogeneous (P(Ri ∪ Rj) = TRUE), they are merged.
• The algorithm stops when no further splitting or merging is possible (a splitting sketch follows below).
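The splitting half of this procedure can be sketched with the toolbox quadtree decomposition qtdecomp; the test image, the resizing, and the threshold 0.27 are assumptions for the demo, and the merge step is omitted:

% Quadtree splitting with qtdecomp (the merge step of split/merge is not shown).
I = imread('coins.png');
I = imresize(I, [256 256]);    % qtdecomp needs a square, power-of-two sized image
S = qtdecomp(I, 0.27);         % split any block whose intensity range exceeds 0.27*255
imshow(full(S) ~= 0)           % mark the top-left corner of each leaf block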
Otsu’s Method with MATLAB

I = imread('coins.png');
imshow(I)
level = graythresh(I);   % Otsu's threshold, returned as a normalized value in [0, 1]
BW = im2bw(I,level);     % binarize the image at that level
imshow(BW)
