Image Segmentation
Segmentation
⚫ Discontinuity-based algorithms
  ⚫ Isolated points
  ⚫ Line detection
⚫ Similarity-based algorithms
  ⚫ Thresholding
  ⚫ Region growing
Background
⚫ An approximation to the first-order derivative at an arbitrary point x of a one-dimensional function f(x) is obtained by expanding the function f(x + Δx) into a Taylor series about x, where Δx is the separation between samples of f.
Background (contd.)
⚫ When Δx = 1 (forward difference): ∂f/∂x ≈ f(x + 1) − f(x)
Background (contd.)
⚫ When Δx = −1 (backward difference): ∂f/∂x ≈ f(x) − f(x − 1)
Background (contd.)
⚫ Keeping only the linear terms and subtracting the two expansions, the central difference is: ∂f/∂x ≈ [f(x + 1) − f(x − 1)] / 2
Background (contd.)
⚫ To find the second-order derivative, add the Taylor expansions of f(x + 1) and f(x − 1) and keep terms up to second order: ∂²f/∂x² = f(x + 1) + f(x − 1) − 2f(x)
Background (contd.)
⚫ First-order derivative: ∂f/∂x = f(x + 1) − f(x)
⚫ Second-order derivative: ∂²f/∂x² = f(x + 1) + f(x − 1) − 2f(x)
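These discrete derivatives are easy to evaluate directly. A minimal sketch (plain Python; the 1-D intensity profile is made up) showing that the first difference is non-zero all along a ramp, while the second difference responds only where the ramp ends:

```python
def first_derivative(f, x):
    # Forward difference: f'(x) ~ f(x+1) - f(x)
    return f[x + 1] - f[x]

def second_derivative(f, x):
    # Second difference: f''(x) ~ f(x+1) + f(x-1) - 2*f(x)
    return f[x + 1] + f[x - 1] - 2 * f[x]

# Hypothetical 1-D profile: a ramp (0..3) followed by a constant region.
profile = [0, 1, 2, 3, 3, 3]
d1 = [first_derivative(profile, x) for x in range(len(profile) - 1)]
d2 = [second_derivative(profile, x) for x in range(1, len(profile) - 1)]
print(d1)  # non-zero along the ramp
print(d2)  # non-zero only where the ramp ends
```

The output matches the behaviour summarized in the derivative table: the first derivative is non-zero along the ramp, the second derivative only at its end.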
Background (contd.)
Definition for derivatives

                                     First derivative    Second derivative
Constant-intensity areas             zero                zero
Onset of an intensity step or ramp   non-zero            non-zero (also at the end)
Along an intensity ramp              non-zero            zero
Rafael C. Gonzalez and Richard E.Woods, Digital Image Processing (3rd edition), Prentice–Hall of India, 2016
Characteristics of First and Second Order
Derivatives
⚫ First-order derivatives generally produce thicker edges in an image
Detection of Isolated Points
⚫ The Laplacian
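A point is flagged as isolated when the magnitude of the Laplacian response at that pixel meets or exceeds a threshold T. A minimal sketch using the 8-neighbour Laplacian kernel (the 5×5 test image and the threshold value are invented for illustration):

```python
LAPLACIAN = [[1, 1, 1], [1, -8, 1], [1, 1, 1]]  # 8-neighbour Laplacian kernel

def laplacian_response(img, x, y):
    # Correlate the 3x3 kernel with the neighbourhood of (x, y).
    return sum(LAPLACIAN[i][j] * img[x + i - 1][y + j - 1]
               for i in range(3) for j in range(3))

def isolated_points(img, T):
    # Flag interior pixels whose |response| meets the threshold T.
    h, w = len(img), len(img[0])
    return [(x, y) for x in range(1, h - 1) for y in range(1, w - 1)
            if abs(laplacian_response(img, x, y)) >= T]

img = [[10] * 5 for _ in range(5)]
img[2][2] = 200  # a single bright isolated point
print(isolated_points(img, 500))  # only (2, 2) is detected
```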
Line Detection
⚫ Second derivatives result in a stronger response and produce thinner lines than first derivatives
Detecting Line in Specified Directions
⚫ Let R1, R2, R3, and R4 denote the responses of the masks in Fig. 10.6. If, at a
given point in the image, |Rk|>|Rj|, for all j≠k, that point is said to be more
likely associated with a line in the direction of mask k.
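This decision rule is short to code with the four standard 3×3 line-detection kernels (horizontal, +45°, vertical, −45°); the tiny test neighbourhood below is invented:

```python
# The four standard 3x3 line-detection kernels.
MASKS = {
    "horizontal": [[-1, -1, -1], [2, 2, 2], [-1, -1, -1]],
    "+45":        [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]],
    "vertical":   [[-1, 2, -1], [-1, 2, -1], [-1, 2, -1]],
    "-45":        [[-1, -1, 2], [-1, 2, -1], [2, -1, -1]],
}

def response(img, mask, x, y):
    # Correlate a 3x3 mask with the neighbourhood of (x, y).
    return sum(mask[i][j] * img[x + i - 1][y + j - 1]
               for i in range(3) for j in range(3))

def likely_direction(img, x, y):
    # The point is associated with the mask k giving the largest |Rk|.
    return max(MASKS, key=lambda k: abs(response(img, MASKS[k], x, y)))

# A bright vertical line through the middle column.
img = [[0, 100, 0] for _ in range(3)]
print(likely_direction(img, 1, 1))  # vertical
```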
Edge Detection
⚫ Edges are pixels where the brightness function changes abruptly
⚫ Edge models
Two additional properties of the second derivative:
1. It produces two values for every edge in an image
2. Its zero crossings can be used for locating the centers of thick edges
(figure: first-order vs. second-order derivative responses across an edge)
Steps for edge detection
⚫ Image Smoothing
⚫ To reduce noise
⚫ Detection of edge points
⚫ A local operation to extract all points that are potential edge points
⚫ Edge localization
⚫ Select from candidate points only the points which are members of
the set of points comprising an edge.
Basic Edge Detection: Gradient and its
properties
Basic Edge Detection: Gradient and its
properties (contd.)
What are the gradient magnitude and angle 𝛼 for the highlighted pixel?
Gradient operators
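As a concrete example of one such operator, here is a minimal Sobel sketch computing the gradient magnitude M(x, y) and angle α(x, y) at a single pixel (the 3×3 test patch is invented):

```python
import math

# Sobel kernels: change along rows (gx) and along columns (gy).
SOBEL_X = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
SOBEL_Y = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]

def gradient(img, x, y):
    gx = sum(SOBEL_X[i][j] * img[x + i - 1][y + j - 1]
             for i in range(3) for j in range(3))
    gy = sum(SOBEL_Y[i][j] * img[x + i - 1][y + j - 1]
             for i in range(3) for j in range(3))
    magnitude = math.hypot(gx, gy)             # M(x, y)
    angle = math.degrees(math.atan2(gy, gx))   # alpha(x, y), in degrees
    return magnitude, angle

# A horizontal step edge: dark rows above, bright row below.
img = [[0, 0, 0], [0, 0, 0], [100, 100, 100]]
M, a = gradient(img, 1, 1)
print(M, a)  # strong gradient pointing across the edge
```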
Prewitt and Sobel filters to find
diagonal edges
Kirsch compass kernels
⚫ The Kirsch compass kernels (Kirsch [1971]) are designed to detect edge magnitude and direction (angle) in all eight compass directions.
⚫ The edge magnitude computation:
⚫ Convolve an image with all eight kernels
⚫ Assign the edge magnitude at a point as the response of the kernel that gave the strongest convolution value at that point.
⚫ The edge angle
⚫ The direction associated with that kernel.
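A sketch of this procedure: the eight kernels are generated by rotating the border of a base "north" kernel one step at a time, and the strongest (maximum) response picks both the magnitude and the direction. The direction labels follow one common convention, and the test patch is invented:

```python
NORTH = [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]]  # base Kirsch kernel
RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def rotate45(kernel):
    # Shift the 8 border entries one step around the ring (a 45-degree turn).
    vals = [kernel[r][c] for r, c in RING]
    vals = vals[-1:] + vals[:-1]
    out = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
    for (r, c), v in zip(RING, vals):
        out[r][c] = v
    return out

def kirsch(img, x, y):
    # Edge magnitude: the largest of the eight kernel responses;
    # edge angle: the direction associated with that kernel.
    k, best = NORTH, (None, None)
    for d in ("N", "NE", "E", "SE", "S", "SW", "W", "NW"):
        r = sum(k[i][j] * img[x + i - 1][y + j - 1]
                for i in range(3) for j in range(3))
        if best[0] is None or r > best[0]:
            best = (r, d)
        k = rotate45(k)
    return best

# Bright region above a dark region: a "north-facing" edge.
img = [[100, 100, 100], [50, 100, 50], [0, 0, 0]]
print(kirsch(img, 1, 1))
```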
Rafael C. Gonzalez and Richard E.Woods, Digital Image Processing (4th edition), Prentice–Hall of India
Kirsch compass kernels
Edge Linking and Boundary Detection
⚫ Edge detection typically is followed by linking algorithms designed to
assemble edge pixels into meaningful edges and/or region boundaries
⚫ Three approaches to edge linking
⚫ Local processing
⚫ Regional processing
⚫ Global processing
Local Processing
⚫ Analyze the characteristics of pixels in a small neighborhood about every
point (x,y) that has been declared an edge point
⚫ All points that are similar according to predefined criteria are linked, forming an edge of pixels.
⚫ Establishing similarity:
⚫ The strength (magnitude) and
⚫ The direction of the gradient vector.
A pixel with coordinates (s,t) in Sxy is linked to the pixel at (x,y) if both
magnitude and direction criteria are satisfied.
Local Processing (contd.)
⚫ Let Sxy denote the set of coordinates of a neighborhood centered at point
(x,y) in an image. An edge pixel with coordinate (s,t) in Sxy is similar in
magnitude to the pixel at (x,y) if
|M(s,t) – M(x,y)| ≤ E
⚫ An edge pixel with coordinate (s,t) in Sxy is similar in angle to the pixel at (x,y)
if
|α(s,t) – α(x,y)| ≤ A
Local Processing (contd.)
1. Compute the gradient magnitude and angle arrays, M(x,y) and α(x,y), of
the input image f(x,y)
2. Form a binary image, g, whose value at any pair of coordinates (x,y) is given by g(x,y) = 1 if M(x,y) > TM AND α(x,y) ∈ [A − TA, A + TA]; g(x,y) = 0 otherwise. Here TM is a magnitude threshold, A is a specified angle direction, and TA is a band of acceptable directions about A.
3. Scan the rows of g and fill (set to 1) all gaps (sets of 0s) in each row that do not exceed a specified length, L.
4. To detect gaps in any other direction, rotate g by this angle and apply the horizontal scanning procedure in step 3.
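Step 3, the horizontal gap filling, can be sketched on a single row of the binary image g (the row contents and the maximum gap length L are illustrative):

```python
def fill_row_gaps(row, L):
    # Set to 1 every run of 0s of length <= L that lies between two 1s.
    out = row[:]
    ones = [i for i, v in enumerate(row) if v == 1]
    for a, b in zip(ones, ones[1:]):
        gap = b - a - 1
        if 0 < gap <= L:
            for i in range(a + 1, b):
                out[i] = 1
    return out

row = [1, 0, 0, 1, 0, 0, 0, 0, 1]
print(fill_row_gaps(row, 2))  # the 2-gap is filled, the 4-gap is not
```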
TM = 30% of the maximum gradient value, A = 90°, TA = 45°
Thresholding
g(x, y) = 1 if f(x, y) > T   (object point)
          0 if f(x, y) ≤ T   (background point)

T: global threshold

Multiple thresholding:

g(x, y) = a if f(x, y) > T2
          b if T1 < f(x, y) ≤ T2
          c if f(x, y) ≤ T1
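Both rules in a minimal sketch (the test image, the threshold values, and the class labels a, b, c are all arbitrary illustrations):

```python
def global_threshold(img, T):
    # Object points (f > T) map to 1, background points to 0.
    return [[1 if v > T else 0 for v in row] for row in img]

def multiple_threshold(img, T1, T2, a=2, b=1, c=0):
    # Three classes split by two thresholds T1 < T2.
    return [[a if v > T2 else (b if v > T1 else c) for v in row]
            for row in img]

img = [[10, 200], [90, 40]]
print(global_threshold(img, 100))           # [[0, 1], [0, 0]]
print(multiple_threshold(img, 50, 100))     # [[0, 2], [1, 0]]
```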
Thresholding (contd.)
⚫ The separation between peaks (the further apart the peaks are,
the better the chances of separating the modes);
⚫ The noise content in the image (the modes broaden as noise
increases);
⚫ The relative sizes of objects and background;
⚫ The uniformity of the illumination source; and
⚫ The uniformity of the reflectance properties of the image.
The Role of Noise in Image Thresholding
The Role of Illumination and Reflectance
Basic Global Thresholding
1. Select an initial estimate for the global threshold, T.
2. Segment the image using T. It will produce two groups of pixels: G1
consisting of all pixels with intensity values > T and G2 consisting of pixels
with values ≤ T.
3. Compute the average intensity values m1 and m2 for the pixels in G1 and
G2, respectively.
4. Compute a new threshold value: T = (1/2)(m1 + m2)
5. Repeat Steps 2 through 4 until the difference between values of T in
successive iterations is smaller than a predefined parameter ∆T.
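The iteration above, sketched on a flat list of pixel intensities (toy bimodal data; the initial T is taken as the mean intensity, a common choice):

```python
def basic_global_threshold(pixels, dT=0.5):
    T = sum(pixels) / len(pixels)  # initial estimate: mean intensity
    while True:
        g1 = [p for p in pixels if p > T]    # pixels above T
        g2 = [p for p in pixels if p <= T]   # pixels at or below T
        if not g1 or not g2:                 # degenerate split: stop
            return T
        m1 = sum(g1) / len(g1)
        m2 = sum(g2) / len(g2)
        T_new = 0.5 * (m1 + m2)
        if abs(T_new - T) < dT:              # converged within dT
            return T_new
        T = T_new

print(basic_global_threshold([10, 10, 10, 200, 200, 200]))
```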
Optimum Global Thresholding Using
Otsu’s Method
⚫ Principle: maximizing the between-class variance
⚫ Let {0, 1, 2, ..., 𝐿−1} denote the 𝐿 distinct intensity levels in a digital image of
size M×N pixels, and let ni denote the number of pixels with intensity 𝑖.
Optimum Global Thresholding Using
Otsu’s Method (contd.)
Between-class variance, σB², is defined as

σB² = P1(m1 − mG)² + P2(m2 − mG)²
    = P1P2(m1 − m2)²
    = (mG P1 − m1P1)² / [P1(1 − P1)]
    = (mG P1 − m)² / [P1(1 − P1)]
Optimum Global Thresholding Using
Otsu’s Method (contd.)
g(x, y) = 1 if f(x, y) > k*
          0 if f(x, y) ≤ k*

Separability measure: η = σB² / σG²
Otsu’s Algorithm: Summary
1. Compute the normalized histogram of the input image. Denote
the components of the histogram by pi, i=0, 1, …, L-1.
2. Compute the cumulative sums, P1(k), for k = 0, 1, …, L-1.
3. Compute the cumulative means, m(k), for k = 0, 1, …, L-1.
4. Compute the global intensity mean, mG.
5. Compute the between-class variance, σB²(k), for k = 0, 1, …, L-1.
6. Obtain Otsu’s threshold, k*, as the value of k for which σB²(k) is maximum.
7. Obtain the separability measure.
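The steps above collapse into a short scan over all k that keeps the value maximizing the between-class variance (the 5-bin histogram is a toy example):

```python
def otsu_threshold(hist):
    # hist[i] = number of pixels with intensity i (i = 0..L-1).
    n = sum(hist)
    p = [h / n for h in hist]                   # normalized histogram
    mG = sum(i * pi for i, pi in enumerate(p))  # global intensity mean
    best_k, best_var, P1, m = 0, -1.0, 0.0, 0.0
    for k in range(len(p) - 1):
        P1 += p[k]        # cumulative sum P1(k)
        m += k * p[k]     # cumulative mean m(k)
        if 0 < P1 < 1:
            # Between-class variance sigma_B^2(k).
            var_b = (mG * P1 - m) ** 2 / (P1 * (1 - P1))
            if var_b > best_var:
                best_k, best_var = k, var_b
    return best_k

print(otsu_threshold([4, 1, 0, 1, 4]))  # bimodal toy histogram
```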
Region-Based Segmentation
⚫ Region Growing
1. Region growing is a procedure that groups pixels or subregions into larger
regions.
2. The simplest of these approaches is pixel aggregation, which starts with a
set of “seed” points and from these grows regions by appending to each seed
point those neighboring pixels that have similar properties (such as gray
level, texture, color, shape).
3. Region-growing techniques are better than edge-based techniques in noisy
images, where edges are difficult to detect.
Region-Based Segmentation (contd.)
Example: Region Growing based on 8-connectivity
𝑓(𝑥,𝑦): input image array
𝑆(𝑥,𝑦): seed array containing 1s (seeds) and 0s
𝑄(𝑥,𝑦): predicate which is TRUE if the absolute difference of intensities between
the seed and the pixel at (𝑥,𝑦) ≤ T and FALSE otherwise.
1. Find all connected components in S(𝑥,𝑦) and erode each connected component to one
pixel; label all such pixels found as 1. All other pixels in S are labeled 0.
2. Form an image 𝑓𝑄 such that, at each pair of coordinates (𝑥,𝑦), 𝑓𝑄(𝑥,𝑦) = 1 if 𝑄 is satisfied and 𝑓𝑄(𝑥,𝑦) = 0 otherwise.
3. Let 𝑔 be an image formed by appending to each seed point in 𝑆 all the 1−valued points in 𝑓𝑄
that are 8−connected to that seed point.
4. Label each connected component in g with a different region label. This is the segmented
image obtained by region growing.
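A sketch of this procedure using a breadth-first flood from each seed with 8-connectivity. The test image, seeds, and threshold T are invented, and step 1 (eroding each seed component to one pixel) is assumed already done:

```python
from collections import deque

def region_grow(img, seeds, T):
    # Grow each seed over 8-connected pixels whose absolute intensity
    # difference from the seed value is <= T (the predicate Q).
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    for label, (sx, sy) in enumerate(seeds, start=1):
        q = deque([(sx, sy)])
        labels[sx][sy] = label
        while q:
            x, y = q.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nx, ny = x + dx, y + dy
                    if (0 <= nx < h and 0 <= ny < w
                            and labels[nx][ny] == 0
                            and abs(img[nx][ny] - img[sx][sy]) <= T):
                        labels[nx][ny] = label
                        q.append((nx, ny))
    return labels

img = [[10, 10, 80], [10, 80, 80], [80, 80, 80]]
print(region_grow(img, [(0, 0), (2, 2)], 20))
```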
Numerical
4-connectivity
8-connectivity
Region Splitting and Merging
Q = TRUE if σ > a and 0 < m < b
    FALSE otherwise
Watershed Segmentation Algorithm
◼ Visualize an image in 3D: spatial coordinates and gray levels.
◼ In such a topographic interpretation, there are 3 types of points:
❑ Points belonging to a regional minimum
❑ Points at which a drop of water would fall with certainty to a single minimum (the catchment basin of that minimum)
❑ Points at which water would be equally likely to fall to more than one minimum (the crest lines, or watershed lines)
Watershed Segmentation Algorithm
(contd.)
◼ The objective is to find watershed lines.
◼ The idea is simple:
❑ Suppose that a hole is punched in each regional minimum and that the entire topography
is flooded from below by letting water rise through the holes at a uniform rate.
❑ When the rising water in distinct catchment basins is about to merge, a dam is built to
prevent merging. These dam boundaries correspond to the watershed lines.
Watershed Segmentation Algorithm
(contd.)
⚫ Start with all pixels with the lowest possible value.
⚫ These form the basis for initial watersheds
⚫ For each intensity level k:
⚫ For each group of pixels of intensity k
⚫ If adjacent to exactly one existing region, add these pixels to that region
⚫ Else, if adjacent to more than one existing region, mark them as boundary (watershed) pixels
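The flooding loop above can be sketched in one dimension, treating the signal as a topographic profile (toy data; labels > 0 are regions, −1 marks a dam/watershed line):

```python
def watershed_1d(signal):
    # Flood a 1-D "topography" level by level: pixels adjacent to exactly
    # one region join it; pixels adjacent to two regions become dams.
    n = len(signal)
    labels = [0] * n            # 0 = unflooded, -1 = dam, >0 = region id
    next_label = 1
    for level in sorted(set(signal)):
        for i in [j for j in range(n) if signal[j] == level]:
            neighbours = {labels[k] for k in (i - 1, i + 1)
                          if 0 <= k < n and labels[k] > 0}
            if len(neighbours) == 1:
                labels[i] = neighbours.pop()
            elif len(neighbours) > 1:
                labels[i] = -1          # watershed line (dam)
            else:
                labels[i] = next_label  # new regional minimum
                next_label += 1
    return labels

# Two valleys separated by a peak: the peak becomes the watershed line.
print(watershed_1d([1, 3, 5, 3, 1]))
```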
Watershed Segmentation Algorithm
(contd.)
Watershed algorithm might be used on the gradient image instead of the original image.
Watershed Segmentation Algorithm
(contd.)
A solution is to limit the number of regional minima. Use markers to
specify the only allowed regional minima.
Watershed Segmentation Algorithm
(contd.)
(For example, gray-level values might be used as markers.)
K-Means Clustering
⚫ Partition the data points into K clusters randomly and find the centroid of each cluster.
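The full iteration (Lloyd's algorithm: assign each point to the nearest centroid, recompute centroids, repeat) in a scalar-intensity sketch with made-up data:

```python
import random

def kmeans(points, K, iters=20, seed=0):
    # Lloyd's algorithm on scalar intensities.
    rng = random.Random(seed)
    centroids = rng.sample(points, K)   # random initial centroids
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(K)]
        for p in points:
            k = min(range(K), key=lambda c: abs(p - centroids[c]))
            clusters[k].append(p)
        # Update step: recompute each centroid as its cluster mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two well-separated intensity groups converge to their means.
print(kmeans([10, 12, 11, 90, 92, 91], K=2))
```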
K-Means Clustering (contd.)
Clustering Example
D. Comaniciu and P. Meer, Robust Analysis of Feature Spaces: Color Image Segmentation, 1997.