Image Segmentation


Dr. Navjot Singh


Image and Video Processing
Acknowledgements
⚫ Gonzalez, Rafael C. Digital image processing. Pearson, 4th
edition, 2018.
⚫ Jain, Anil K. Fundamentals of digital image processing. Prentice-
Hall, Inc., 1989.
⚫ Digital Image Processing course by Brian Mac Namee, Dublin
Institute of Technology
⚫ Digital Image Processing course by Christophoros Nikou,
University of Ioannina

Segmentation

⚫ Discontinuity-based algorithms
  ⚫ Isolated points
  ⚫ Line detection
  ⚫ Edge detection
⚫ Similarity-based algorithms
  ⚫ Thresholding
  ⚫ Region growing
  ⚫ Region splitting and merging
Fundamentals
⚫ Let R represent the entire spatial region occupied by an image. Image
segmentation is a process that partitions R into n sub-regions,
R1, R2, …, Rn, such that:
  (a) ∪ Ri = R (the union of the sub-regions covers the entire image);
  (b) each Ri is a connected set, i = 1, 2, …, n;
  (c) Ri ∩ Rj = ∅ for all i ≠ j (the sub-regions are disjoint);
  (d) Q(Ri) = TRUE for i = 1, 2, …, n, where Q is a logical predicate defined
      over the points in a region (e.g., "all pixels have similar intensity");
  (e) Q(Ri ∪ Rj) = FALSE for any two adjacent regions Ri and Rj.
Background
⚫ An approximation to the first-order derivative at an arbitrary point x of a
one-dimensional function f(x) is obtained by expanding f(x + Δx) into a Taylor
series about x, where Δx is the separation between samples of f:

  f(x + Δx) = f(x) + Δx f′(x) + (Δx²/2!) f″(x) + (Δx³/3!) f‴(x) + …
Background (contd.)
⚫ When x =1

⚫ Keeping linear terms, the forward difference is:

7
Background (contd.)
⚫ When x = -1

⚫ Keeping only linear terms, the backward difference is:

8
Background (contd.)
⚫ Subtracting the expansion of f(x − 1) from that of f(x + 1) and keeping only
the linear terms, the central difference is:

  ∂f/∂x ≈ f′(x) = [f(x + 1) − f(x − 1)] / 2
Background (contd.)
⚫ To obtain the second-order derivative, add the expansions of f(x + 1) and
f(x − 1) and keep terms up to the second derivative:

  ∂²f/∂x² ≈ f″(x) = f(x + 1) + f(x − 1) − 2f(x)
Background (contd.)
⚫ First-order derivative:

  ∂f/∂x = f(x + 1) − f(x)

⚫ Second-order derivative:

  ∂²f/∂x² = f(x + 1) + f(x − 1) − 2f(x)
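To make the difference formulas concrete, here is a small numpy sketch (not from the slides; the array values and names are illustrative) that evaluates the forward, backward, central, and second differences on a 1-D profile containing a ramp and a step:

```python
import numpy as np

# 1-D intensity profile with a descending ramp (indices 2-6) and a step (index 10)
f = np.array([5, 5, 5, 4, 3, 2, 1, 1, 1, 1, 6, 6, 6], dtype=float)
x = np.arange(1, len(f) - 1)            # interior samples where all differences exist

forward  = f[x + 1] - f[x]              # f(x+1) - f(x)
backward = f[x] - f[x - 1]              # f(x)   - f(x-1)
central  = (f[x + 1] - f[x - 1]) / 2    # [f(x+1) - f(x-1)] / 2
second   = f[x + 1] + f[x - 1] - 2 * f[x]   # f(x+1) + f(x-1) - 2 f(x)

# The first derivative is non-zero along the ramp; the second derivative is
# non-zero only at the onset/end of the ramp and at the step.
for name, d in [("forward", forward), ("backward", backward),
                ("central", central), ("second", second)]:
    print(name, d)
```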
Background (contd.)
Behavior of the first and second derivatives:

                                        First derivative    Second derivative
  Areas of constant intensity           Zero                Zero
  Onset of an intensity step or ramp    Non-zero            Non-zero (and non-zero
                                                            at the end of the ramp)
  Along an intensity ramp               Non-zero            Zero
Characteristics of First and Second Order
Derivatives
⚫ First-order derivatives generally produce thicker edges in an image.

⚫ Second-order derivatives have a stronger response to fine detail, such as
  thin lines, isolated points, and noise.

⚫ Second-order derivatives produce a double-edge response at ramp and step
  transitions in intensity.

⚫ The sign of the second derivative can be used to determine whether a
  transition into an edge is from light to dark or dark to light.

Detection of Isolated Points
⚫ The Laplacian:

  ∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²
            = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y)

⚫ A point is detected at (x, y) if the absolute value of the Laplacian response
  at that location exceeds a threshold T:

  g(x, y) = 1 if |R(x, y)| ≥ T, and 0 otherwise

  where R(x, y) is the response of the Laplacian kernel centered at (x, y).
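As an illustration of Laplacian-based point detection, the following sketch convolves the image with a 3×3 Laplacian kernel (the variant that also includes the diagonal terms) and thresholds the absolute response; the function name and the default threshold fraction are my own choices, not taken from the slides.

```python
import numpy as np
from scipy.ndimage import convolve

def detect_isolated_points(image, thresh_frac=0.9):
    """Flag pixels whose absolute Laplacian response exceeds a threshold.

    thresh_frac expresses the threshold as a fraction of the maximum
    absolute response (a common choice; tune per image).
    """
    laplacian = np.array([[1,  1, 1],
                          [1, -8, 1],
                          [1,  1, 1]], dtype=float)   # Laplacian including diagonals
    response = convolve(image.astype(float), laplacian, mode="reflect")
    T = thresh_frac * np.abs(response).max()
    return (np.abs(response) >= T).astype(np.uint8)
```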
Line Detection
⚫ Second derivatives result in a stronger response and produce thinner lines
  than first derivatives.

⚫ The double-line effect of the second derivative must be handled properly.

Detecting Line in Specified Directions

⚫ Let R1, R2, R3, and R4 denote the responses of the four line-detection masks
  in Fig. 10.6 (oriented horizontally, at +45°, vertically, and at −45°,
  respectively). If, at a given point in the image, |Rk| > |Rj| for all j ≠ k,
  that point is said to be more likely associated with a line in the direction
  of mask k.
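A possible implementation sketch follows. The four line-detection kernels are the standard 3×3 masks from Gonzalez & Woods; which diagonal kernel should be labelled +45° or −45° depends on the image-axis convention, so the comments below are only descriptive.

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 line-detection kernels (Gonzalez & Woods, Fig. 10.6)
LINE_KERNELS = [
    np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], float),  # horizontal lines
    np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], float),  # one diagonal
    np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], float),  # vertical lines
    np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], float),  # other diagonal
]

def dominant_line_mask(image):
    """Per pixel, return k = argmax_k |R_k|, the index of the strongest kernel."""
    responses = np.stack([convolve(image.astype(float), k, mode="reflect")
                          for k in LINE_KERNELS])
    return np.abs(responses).argmax(axis=0), responses
```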

Edge Detection
⚫ Edges are pixels where the brightness function changes abruptly
⚫ Edge models: step, ramp, and roof edges

Two additional properties of the second derivative:
1. It produces two values for every edge in an image.
2. Its zero crossings can be used for locating the centers of thick edges.

(Figure: first-order vs. second-order derivative response across an edge.)
Steps for edge detection
⚫ Image smoothing
  ⚫ To reduce noise.
⚫ Detection of edge points
  ⚫ A local operation to extract all points that are potential edge points.
⚫ Edge localization
  ⚫ Select from the candidate points only the points which are members of the
    set of points comprising an edge.

Basic Edge Detection: Gradient and its properties

⚫ The gradient of an image f at (x, y) is the vector

  ∇f = [gx, gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

⚫ ∇f points in the direction of the greatest rate of change of f at (x, y).
Basic Edge Detection: Gradient and its properties (contd.)

⚫ The magnitude (length) of the gradient vector:

  M(x, y) = ||∇f|| = √(gx² + gy²)

⚫ The direction of the gradient vector (measured with respect to the x-axis):

  α(x, y) = tan⁻¹(gy / gx)

⚫ The direction of an edge at (x, y) is perpendicular to the direction of the
  gradient vector at that point.
Gradient, 𝛼 for the highlighted pixel??

Gradient operators

⚫ Common digital approximations to gx and gy: the Roberts cross-gradient
  operators (2×2 kernels), the Prewitt operators (3×3 kernels), and the Sobel
  operators (3×3 kernels, which weight the center row/column by 2 to provide a
  degree of smoothing).
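For reference, a minimal Sobel-based gradient computation with scipy (function and variable names are mine; the angle is returned in degrees):

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # responds to vertical edges
SOBEL_Y = SOBEL_X.T                              # responds to horizontal edges

def sobel_gradient(image):
    """Return gradient magnitude M(x, y) and angle alpha(x, y) in degrees."""
    f = image.astype(float)
    gx = convolve(f, SOBEL_X, mode="reflect")
    gy = convolve(f, SOBEL_Y, mode="reflect")
    magnitude = np.hypot(gx, gy)                 # sqrt(gx**2 + gy**2)
    angle = np.degrees(np.arctan2(gy, gx))       # direction of the gradient vector
    return magnitude, angle
```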
Prewitt and Sobel filters to find diagonal edges

⚫ The Prewitt and Sobel kernels can be rotated by 45° so that they respond most
  strongly to edges oriented along the two diagonals.
Kirsch compass kernels
⚫ The Kirsch compass kernels (Kirsch [1971]) are designed to detect edge
  magnitude and direction (angle) in all eight compass directions.
⚫ Edge magnitude:
  ⚫ Convolve the image with all eight kernels.
  ⚫ Assign as the edge magnitude at a point the response of the kernel that
    gave the strongest convolution value at that point.
⚫ Edge angle:
  ⚫ The direction associated with that kernel.

Kirsch compass kernels (contd.)

⚫ Each of the eight kernels has three coefficients equal to 5 along one side of
  the 3×3 neighborhood, a center coefficient of 0, and the remaining five
  coefficients equal to −3; successive kernels are 45° rotations of one
  another, so the coefficients of every kernel sum to zero.
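A sketch that builds the eight kernels by rotating the border coefficients and applies the magnitude/angle rule described above; the mapping from kernel index to compass direction (index × 45°) is an assumed convention, not taken from the slides.

```python
import numpy as np
from scipy.ndimage import convolve

def kirsch_kernels():
    """Generate the eight 3x3 Kirsch compass kernels by rotating the border."""
    border = [5, 5, 5, -3, -3, -3, -3, -3]        # border coefficients, walked clockwise
    # border positions (row, col) walked clockwise from the top-left corner
    pos = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3))
        for (r, c), v in zip(pos, np.roll(border, shift)):
            k[r, c] = v                            # center coefficient stays 0
        kernels.append(k)
    return kernels

def kirsch_edges(image):
    """Edge magnitude = max response over the 8 kernels; angle = argmax * 45 deg."""
    responses = np.stack([convolve(image.astype(float), k, mode="reflect")
                          for k in kirsch_kernels()])
    magnitude = responses.max(axis=0)
    direction = responses.argmax(axis=0) * 45      # index of the winning kernel
    return magnitude, direction
```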
Edge Linking and Boundary Detection
⚫ Edge detection is typically followed by linking algorithms designed to
  assemble edge pixels into meaningful edges and/or region boundaries.
⚫ Three approaches to edge linking:
  ⚫ Local processing
  ⚫ Regional processing
  ⚫ Global processing

Local Processing
⚫ Analyze the characteristics of pixels in a small neighborhood about every
  point (x, y) that has been declared an edge point.
⚫ All points that are similar according to predefined criteria are linked,
  forming an edge of pixels.
⚫ Similarity is established using:
  ⚫ the strength (magnitude) of the gradient vector, and
  ⚫ the direction of the gradient vector.
⚫ A pixel with coordinates (s, t) in Sxy is linked to the pixel at (x, y) if
  both the magnitude and direction criteria are satisfied.

Local Processing (contd.)
⚫ Let Sxy denote the set of coordinates of a neighborhood centered at point
  (x, y) in an image. An edge pixel with coordinates (s, t) in Sxy is similar
  in magnitude to the pixel at (x, y) if

  |M(s, t) − M(x, y)| ≤ E

⚫ An edge pixel with coordinates (s, t) in Sxy is similar in angle to the pixel
  at (x, y) if

  |α(s, t) − α(x, y)| ≤ A

  where E is a positive magnitude threshold and A is a positive angle
  threshold.
Local Processing (contd.)
1. Compute the gradient magnitude and angle arrays, M(x, y) and α(x, y), of the
   input image f(x, y).
2. Form a binary image, g, whose value at any pair of coordinates (x, y) is
   given by

   g(x, y) = 1 if M(x, y) > TM AND α(x, y) = A ± TA; 0 otherwise

   where TM is a threshold, A is a specified angle direction, and ±TA defines a
   band of acceptable directions about A.
3. Scan the rows of g and fill (set to 1) all gaps (sets of 0s) in each row that
   do not exceed a specified length, L.
4. To detect gaps in any other direction, θ, rotate g by this angle, apply the
   horizontal scanning procedure of step 3, and rotate the result back.

Example parameters (Gonzalez & Woods): TM = 30% of the maximum gradient value,
A = 90°, TA = 45°.
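A hedged sketch of steps 2-3 for a single direction (the rotation of step 4 is omitted; for vertical gaps one could apply the same routine to the transposed image). Parameter names mirror TM, A, TA, and L above; the defaults are the example values.

```python
import numpy as np

def link_edges_horizontally(M, alpha, TM_frac=0.3, A=90.0, TA=45.0, max_gap=25):
    """Local edge linking in one direction (steps 2-3 of the procedure above).

    M, alpha : gradient magnitude and angle (degrees) arrays.
    A, TA    : direction of interest and angular tolerance.
    max_gap  : maximum run of 0s to fill within a row (the length L).
    """
    TM = TM_frac * M.max()
    ang = np.abs(((alpha - A) + 180) % 360 - 180)       # smallest angular difference
    g = ((M > TM) & (ang <= TA)).astype(np.uint8)       # step 2: binary image

    # Step 3: fill gaps no longer than max_gap in each row of g
    for row in g:
        ones = np.flatnonzero(row)
        for a, b in zip(ones[:-1], ones[1:]):
            if 1 < b - a <= max_gap + 1:                # gap length is b - a - 1
                row[a:b] = 1
    return g
```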

Thresholding
⚫ Single (global) threshold T:

  g(x, y) = 1 if f(x, y) > T   (object point)
            0 if f(x, y) ≤ T   (background point)

⚫ Multiple thresholding with thresholds T1 < T2:

  g(x, y) = a if f(x, y) > T2
            b if T1 < f(x, y) ≤ T2
            c if f(x, y) ≤ T1
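Both rules translate directly into numpy; the output labels chosen for a, b, c below are arbitrary and only illustrative:

```python
import numpy as np

def global_threshold(f, T):
    """Binary segmentation: 1 for object points (f > T), 0 for background."""
    return (f > T).astype(np.uint8)

def multiple_threshold(f, T1, T2, values=(2, 1, 0)):
    """Three-level segmentation: values[0] where f > T2,
    values[1] where T1 < f <= T2, values[2] where f <= T1."""
    a, b, c = values
    g = np.full(f.shape, c, dtype=np.uint8)
    g[(f > T1) & (f <= T2)] = b
    g[f > T2] = a
    return g
```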
Thresholding (contd.)
⚫ The success of intensity thresholding depends on:
  ⚫ the separation between peaks (the further apart the peaks are, the better
    the chances of separating the modes);
  ⚫ the noise content in the image (the modes broaden as noise increases);
  ⚫ the relative sizes of objects and background;
  ⚫ the uniformity of the illumination source; and
  ⚫ the uniformity of the reflectance properties of the image.

The Role of Noise in Image Thresholding

⚫ As noise increases, the histogram modes broaden and eventually merge, making
  a single global threshold unreliable.
The Role of Illumination and Reflectance

⚫ Nonuniform illumination or reflectance can distort the histogram so severely
  that no single global threshold can separate object from background.
Basic Global Thresholding
1. Select an initial estimate for the global threshold, T.
2. Segment the image using T. It will produce two groups of pixels: G1
consisting of all pixels with intensity values > T and G2 consisting of pixels
with values ≤ T.
3. Compute the average intensity values m1 and m2 for the pixels in G1 and
G2, respectively.
4. Compute a new threshold value:

   T = (m1 + m2) / 2
5. Repeat Steps 2 through 4 until the difference between values of T in
successive iterations is smaller than a predefined parameter ∆T.
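A direct transcription of steps 1-5 (initialising T to the image mean is a common choice, not mandated by the algorithm):

```python
import numpy as np

def basic_global_threshold(f, delta_T=0.5):
    """Iterative global threshold estimation (steps 1-5 above).

    f       : grayscale image as a numpy array.
    delta_T : stop when successive thresholds differ by less than this amount.
    """
    T = f.mean()                              # step 1: initial estimate
    while True:
        G1 = f[f > T]                         # step 2: pixels brighter than T
        G2 = f[f <= T]
        m1 = G1.mean() if G1.size else 0.0    # step 3: group means
        m2 = G2.mean() if G2.size else 0.0
        T_new = 0.5 * (m1 + m2)               # step 4: new threshold
        if abs(T_new - T) < delta_T:          # step 5: convergence test
            return T_new
        T = T_new
```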
Optimum Global Thresholding Using
Otsu’s Method
⚫ Principle: maximizing the between-class variance
⚫ Let {0, 1, 2, …, L−1} denote the L distinct intensity levels in a digital
  image of size M×N pixels, and let ni denote the number of pixels with
  intensity i; the normalized histogram has components pi = ni / MN.

⚫ A threshold k defines two classes, C1 → [0, k] and C2 → [k+1, L−1], with
  cumulative probabilities

  P1(k) = Σ_{i=0}^{k} pi     and     P2(k) = Σ_{i=k+1}^{L−1} pi = 1 − P1(k)
Optimum Global Thresholding Using Otsu's Method (contd.)

⚫ The class means, the cumulative mean up to level k, and the global mean are

  m1(k) = [1/P1(k)] Σ_{i=0}^{k} i pi        m2(k) = [1/P2(k)] Σ_{i=k+1}^{L−1} i pi

  m(k) = Σ_{i=0}^{k} i pi                   mG = Σ_{i=0}^{L−1} i pi

  so that P1 m1 + P2 m2 = mG and P1 + P2 = 1.
Optimum Global Thresholding Using
Otsu’s Method (contd.)
The between-class variance, σB², is defined as

  σB²(k) = P1 (m1 − mG)² + P2 (m2 − mG)²
         = P1 P2 (m1 − m2)²
         = (mG P1 − m1 P1)² / [P1 (1 − P1)]
         = [mG P1(k) − m(k)]² / {P1(k) [1 − P1(k)]}

(using m(k) = P1(k) m1(k) in the last step)
Optimum Global Thresholding Using
Otsu’s Method (contd.)

⚫ The optimum threshold is the value, k*, that maximizes the between-class
  variance:

  σB²(k*) = max over 0 ≤ k ≤ L−1 of σB²(k)

⚫ The image is then segmented as

  g(x, y) = 1 if f(x, y) > k*
            0 if f(x, y) ≤ k*

⚫ Separability measure:

  η = σB² / σG²

  where σG² is the global variance of the intensities in the image.
Otsu’s Algorithm: Summary
1. Compute the normalized histogram of the input image. Denote
the components of the histogram by pi, i=0, 1, …, L-1.
2. Compute the cumulative sums, P1(k), for k = 0, 1, …, L-1.
3. Compute the cumulative means, m(k), for k = 0, 1, …, L-1.
4. Compute the global intensity mean, mG.
5. Compute the between-class variance, σB²(k), for k = 0, 1, …, L-1.
6. Obtain Otsu's threshold, k*, as the value of k that maximizes σB²(k) (if the
   maximum is not unique, average the values of k for which it occurs).
7. Obtain the separability measure, η.
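A compact implementation of the summary above, assuming an 8-bit grayscale image (L = 256); it returns both k* and the separability measure η:

```python
import numpy as np

def otsu_threshold(image, L=256):
    """Otsu's method following the seven steps above: returns (k_star, eta)."""
    # 1. normalized histogram
    hist, _ = np.histogram(image, bins=L, range=(0, L))
    p = hist / hist.sum()
    i = np.arange(L)

    # 2-4. cumulative sums, cumulative means, global mean
    P1 = np.cumsum(p)
    m = np.cumsum(i * p)
    mG = m[-1]

    # 5. between-class variance (ignore divisions where P1 is 0 or 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_B2 = (mG * P1 - m) ** 2 / (P1 * (1 - P1))
    sigma_B2 = np.nan_to_num(sigma_B2)

    # 6. threshold: mean of the k values that maximize sigma_B2
    k_star = int(np.mean(np.flatnonzero(sigma_B2 == sigma_B2.max())))

    # 7. separability measure
    sigma_G2 = np.sum((i - mG) ** 2 * p)
    eta = sigma_B2[k_star] / sigma_G2 if sigma_G2 > 0 else 0.0
    return k_star, eta
```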

Region-Based Segmentation
⚫ Region Growing
1. Region growing is a procedure that groups pixels or subregions into larger
   regions.
2. The simplest of these approaches is pixel aggregation, which starts with a
   set of "seed" points and grows regions from them by appending to each seed
   point those neighboring pixels that have similar properties (such as gray
   level, texture, color, shape).
3. Region-growing techniques are better than edge-based techniques in noisy
   images, where edges are difficult to detect.

Region-Based Segmentation (contd.)
Example: Region growing based on 8-connectivity
  f(x, y): input image array
  S(x, y): seed array containing 1s (seeds) and 0s
  Q(x, y): predicate that is TRUE if the absolute difference between the
           intensities of the seed and of the pixel at (x, y) is ≤ T, and
           FALSE otherwise
1. Find all connected components in S(x, y) and erode each connected component
   to one pixel; label all such pixels 1. All other pixels in S are labeled 0.
2. Form an image fQ such that fQ(x, y) = 1 if Q is satisfied at (x, y), and
   fQ(x, y) = 0 otherwise.
3. Let g be the image formed by appending to each seed point in S all the
   1-valued points in fQ that are 8-connected to that seed point.
4. Label each connected component in g with a different region label. This is
   the segmented image obtained by region growing.
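A sketch of this procedure using scipy.ndimage; as a simplification, instead of eroding each seed component to a single pixel (step 1) it represents each seed component by its mean intensity. Function and variable names are my own.

```python
import numpy as np
from scipy import ndimage

def region_grow(f, seeds, T):
    """Region growing by 8-connectivity, following steps 1-4 above.

    f     : grayscale image
    seeds : binary seed array S(x, y)
    T     : intensity-difference threshold used by the predicate Q
    """
    eight = np.ones((3, 3), dtype=bool)                  # 8-connected structuring element

    # Step 1 (simplified): one representative value per connected seed component
    seed_labels, n_seeds = ndimage.label(seeds, structure=eight)
    seed_values = ndimage.mean(f, labels=seed_labels, index=np.arange(1, n_seeds + 1))

    # Step 2: fQ = 1 wherever |f - seed value| <= T for at least one seed
    fQ = np.zeros(f.shape, dtype=bool)
    for v in np.atleast_1d(seed_values):
        fQ |= np.abs(f.astype(float) - v) <= T

    # Step 3: keep only the fQ components that are 8-connected to a seed
    comp_labels, _ = ndimage.label(fQ, structure=eight)
    keep = np.unique(comp_labels[(seeds > 0) & (comp_labels > 0)])
    g = np.isin(comp_labels, keep)

    # Step 4: label each connected region of the result
    segmented, _ = ndimage.label(g, structure=eight)
    return segmented
```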
Numerical example
(Worked example figures: region growing on a sample array using 4-connectivity
and 8-connectivity.)
Region Splitting and Merging

⚫ Subdivide the image into a set of disjoint regions and then split, merge, or
  both, in an attempt to satisfy the segmentation conditions:
  1. Split into four disjoint quadrants any region Ri for which Q(Ri) = FALSE
     (this leads to a quadtree representation).
  2. When no further splitting is possible, merge any adjacent regions Rj and
     Rk for which Q(Rj ∪ Rk) = TRUE.
  3. Stop when no further merging is possible.
⚫ Example predicate based on the mean m and standard deviation σ of the
  intensities in a region:

  Q = TRUE  if σ > a and 0 < m < b
      FALSE otherwise
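A minimal sketch of the splitting stage (step 1) using a simple homogeneity predicate (standard deviation ≤ a); the merging pass and the slide's specific σ/m predicate are left out for brevity, and all names are my own.

```python
import numpy as np

def homogeneous(block, a=10.0):
    """Simple predicate Q: TRUE if the block's intensity standard deviation <= a."""
    return block.std() <= a

def _split(f, r, c, size, min_size, out, counter):
    block = f[r:r + size, c:c + size]
    if size > min_size and not homogeneous(block):
        half = size // 2                       # Q is FALSE: split into 4 quadrants
        for dr in (0, half):
            for dc in (0, half):
                _split(f, r + dr, c + dc, half, min_size, out, counter)
    else:
        counter[0] += 1                        # Q is TRUE (or block is minimal): one leaf region
        out[r:r + size, c:c + size] = counter[0]

def quadtree_split(f, min_size=8):
    """Splitting stage of split-and-merge; the merging pass (step 2) is omitted."""
    n = f.shape[0]
    assert f.shape[0] == f.shape[1] and n & (n - 1) == 0, "expects a square 2^k x 2^k image"
    out = np.zeros(f.shape, dtype=np.int32)
    _split(f.astype(float), 0, 0, n, min_size, out, [0])
    return out
```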
Watershed Segmentation Algorithm
◼ Visualize an image in 3D: spatial coordinates and gray levels.
◼ In such a topographic interpretation, there are 3 types of points:
  ❑ Points belonging to a regional minimum.
  ❑ Points at which a drop of water would fall to a single minimum (the
    catchment basin or watershed of that minimum).
  ❑ Points at which a drop of water would be equally likely to fall to more
    than one minimum (the divide lines or watershed lines).
Watershed Segmentation Algorithm
(contd.)
◼ The objective is to find watershed lines.
◼ The idea is simple:
❑ Suppose that a hole is punched in each regional minimum and that the entire topography
  is flooded from below by letting water rise through the holes at a uniform rate.
❑ When the rising water in distinct catchment basins is about to merge, a dam is built to
  prevent merging. These dam boundaries correspond to the watershed lines.

Watershed Segmentation Algorithm
(contd.)
⚫ Start with all pixels with the lowest possible value; these form the basis
  for the initial watersheds.
⚫ For each intensity level k:
  ⚫ For each group of pixels of intensity k:
    ⚫ If adjacent to exactly one existing region, add these pixels to that region.
    ⚫ Else, if adjacent to more than one existing region, mark as boundary.
    ⚫ Else, start a new region.
Watershed Segmentation Algorithm
(contd.)
The watershed algorithm is often applied to the gradient image instead of the original image;
regional minima of the gradient then tend to correspond to the interiors of objects.

Watershed Segmentation Algorithm
(contd.)

Due to noise and other local irregularities of the gradient, over-segmentation
might occur.
Watershed Segmentation Algorithm
(contd.)
A solution is to limit the number of regional minima: use markers to specify
the only allowed regional minima (for example, gray-level values might be used
as a marker).
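At the usage level, a marker-controlled watershed on the gradient image can be sketched with scikit-image (assuming it is available); taking connected low-gradient regions as markers is just one illustrative choice, and the quantile parameter is my own.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import sobel
from skimage.segmentation import watershed

def marker_watershed(image, marker_quantile=0.2):
    """Marker-controlled watershed on the gradient image.

    Markers are connected regions of low gradient (an illustrative choice);
    only these regional minima are allowed to seed catchment basins.
    """
    gradient = sobel(image.astype(float))          # flood the gradient, not the image
    low = gradient < np.quantile(gradient, marker_quantile)
    markers, _ = ndimage.label(low)                # each low-gradient blob = one marker
    return watershed(gradient, markers)            # label image: one region per marker
```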
K-Means Clustering
1. Partition the data points into K clusters randomly and find the centroid of
   each cluster.
2. For each data point:
   ⚫ Calculate the distance from the data point to each cluster centroid.
   ⚫ Assign the data point to the closest cluster.
3. Recompute the centroid of each cluster.
4. Repeat steps 2 and 3 until there is no further change in the assignment of
   data points (or in the centroids); see the sketch below.
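A minimal sketch of these steps for grayscale pixel intensities (in practice one would usually use a library implementation such as scikit-learn's KMeans or OpenCV's kmeans); names and defaults are my own.

```python
import numpy as np

def kmeans_segment(image, K=3, n_iter=100, seed=0):
    """Segment a grayscale image by K-means clustering of pixel intensities."""
    rng = np.random.default_rng(seed)
    data = image.reshape(-1, 1).astype(float)                   # one feature: intensity
    centroids = data[rng.choice(len(data), K, replace=False)]   # step 1: random init

    for _ in range(n_iter):
        # step 2: assign each pixel to the nearest centroid
        dist = np.abs(data - centroids.T)                       # (n_pixels, K) distances
        labels = dist.argmin(axis=1)
        # step 3: recompute centroids (keep old one if a cluster is empty)
        new_centroids = np.array([data[labels == k].mean(axis=0) if np.any(labels == k)
                                  else centroids[k] for k in range(K)])
        # step 4: stop when the centroids no longer change
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids

    return labels.reshape(image.shape), centroids.ravel()
```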

Clustering Example

D. Comaniciu and P. Meer, Robust Analysis of Feature Spaces: Color Image
Segmentation, 1997.
