
Danang University of Science and Technology (DUT)

Image Processing
Lecture 6
Image Segmentation
Fundamentals
 Let R represent the entire spatial region occupied by an image. Image segmentation is a process that partitions R into n sub-regions, $R_1, R_2, \dots, R_n$, such that:
 $\bigcup_{i=1}^{n} R_i = R$
 $R_i$ is a connected set, for $i = 1, 2, \dots, n$
 $R_i \cap R_j = \emptyset$ for all $i$ and $j$, $i \neq j$
 $P(R_i) = \text{TRUE}$ for $i = 1, 2, \dots, n$
 $P(R_i \cup R_j) = \text{FALSE}$ for any adjacent regions $R_i$ and $R_j$
Background
 First-order derivative:
$\frac{\partial f}{\partial x} = f'(x) = f(x+1) - f(x)$
 Second-order derivative:
$\frac{\partial^2 f}{\partial x^2} = f''(x) = f(x+1) + f(x-1) - 2f(x)$
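As a quick illustration, both discrete derivatives can be computed with simple array differences; the 1-D intensity profile below is an assumed, illustrative example, not taken from the slides.

import numpy as np

f = np.array([6, 6, 5, 4, 3, 2, 1, 1, 1, 6, 1, 1], dtype=float)  # illustrative 1-D profile
f1 = f[1:] - f[:-1]                    # first derivative:  f(x+1) - f(x)
f2 = f[2:] + f[:-2] - 2.0 * f[1:-1]    # second derivative: f(x+1) + f(x-1) - 2 f(x)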
Characteristics of First and Second
Order Derivatives
 First-order derivatives generally produce thicker edges in an image.
 Second-order derivatives have a stronger response to fine detail, such as thin lines, isolated points, and noise.
 Second-order derivatives produce a double-edge response at ramp and step transitions in intensity.
 The sign of the second derivative can be used to determine whether a transition into an edge is from light to dark or from dark to light.
Detection of Isolated Points
 The Laplacian:
$\nabla^2 f(x,y) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y)$
 Point detection:
$g(x,y) = \begin{cases} 1 & \text{if } |R(x,y)| \ge T \\ 0 & \text{otherwise} \end{cases}$
with
$R = \sum_{k=1}^{9} w_k z_k$
where the $w_k$ are the mask coefficients and the $z_k$ are the pixel intensities under the mask.
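A minimal NumPy/SciPy sketch of this point-detection rule, assuming the 4-neighbour Laplacian mask corresponding to the expression above; the image f and threshold T are placeholders.

import numpy as np
from scipy.ndimage import convolve

def detect_points(f, T):
    # 3x3 Laplacian mask matching the Laplacian expression above
    w = np.array([[0,  1, 0],
                  [1, -4, 1],
                  [0,  1, 0]], dtype=float)
    R = convolve(f.astype(float), w, mode='reflect')  # R(x,y) = sum_k w_k z_k at every pixel
    return (np.abs(R) >= T).astype(np.uint8)          # g(x,y) = 1 where |R(x,y)| >= T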
Point detection (example figure)
Line Detection
 Second derivatives tend to produce a stronger response and thinner lines than first derivatives.
 The double-line effect of the second derivative must be handled properly (see edge detection later).
Detecting Line in Specified Directions
 Let R1, R2, R3, and R4 denote the responses of the four masks in Fig. 10.6. If, at a given point in the image, |Rk| > |Rj| for all j ≠ k, that point is said to be more likely associated with a line in the direction of mask k. A sketch of this test is given below.
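A short sketch of the test, assuming the standard 3×3 line-detection masks; whether each diagonal mask corresponds to +45° or −45° depends on the axis convention used in Fig. 10.6.

import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 line masks (assumed to match Fig. 10.6)
masks = {
    'horizontal': np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], float),
    'diagonal_1': np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], float),
    'vertical':   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], float),
    'diagonal_2': np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], float),
}

def dominant_line_direction(f):
    # |Rk| for each mask; the arg-max at each pixel gives the most likely line direction
    R = np.stack([np.abs(convolve(f.astype(float), m, mode='reflect')) for m in masks.values()])
    return np.array(list(masks.keys()))[R.argmax(axis=0)]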
Edge Detection
 Edges are pixels where the brightness function changes
abruptly
 Edge models: step, ramp, and roof edges (illustrated in the figures)
Basic Edge Detection by Using
First-Order Derivative
$\nabla f \equiv \mathrm{grad}(f) = \begin{bmatrix} g_x \\ g_y \end{bmatrix} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix}$
 The magnitude of $\nabla f$:
$M(x,y) = \mathrm{mag}(\nabla f) = \sqrt{g_x^2 + g_y^2}$
 The direction of $\nabla f$:
$\alpha(x,y) = \tan^{-1}\left(\frac{g_y}{g_x}\right)$
 The direction of the edge:
$\phi = \alpha - 90°$
Basic Edge Detection by Using
First-Order Derivative
 Edge normal:
$\nabla f \equiv \mathrm{grad}(f) = \begin{bmatrix} g_x \\ g_y \end{bmatrix} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix}$
 Edge unit normal: $\nabla f / \mathrm{mag}(\nabla f)$
 In practice, the magnitude is sometimes approximated by
$\mathrm{mag}(\nabla f) \approx \left|\frac{\partial f}{\partial x}\right| + \left|\frac{\partial f}{\partial y}\right|$
or
$\mathrm{mag}(\nabla f) \approx \max\left(\left|\frac{\partial f}{\partial x}\right|, \left|\frac{\partial f}{\partial y}\right|\right)$
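A small sketch computing gx and gy with the 3×3 Sobel masks (one common choice of mask pair), together with the magnitude and angle defined above.

import numpy as np
from scipy.ndimage import convolve

def gradient_magnitude_angle(f):
    sobel_rows = np.array([[-1, -2, -1],
                           [ 0,  0,  0],
                           [ 1,  2,  1]], dtype=float)   # derivative across rows
    sobel_cols = sobel_rows.T                            # derivative across columns
    gx = convolve(f.astype(float), sobel_cols, mode='reflect')
    gy = convolve(f.astype(float), sobel_rows, mode='reflect')
    M = np.hypot(gx, gy)               # sqrt(gx^2 + gy^2)
    alpha = np.arctan2(gy, gx)         # gradient direction in radians
    return M, alpha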
Low-pass filter before edge detection (example figures)
Advanced Techniques for Edge
Detection
 The Marr-Hildreth edge detector uses the Gaussian
$G(x,y) = e^{-\frac{x^2 + y^2}{2\sigma^2}}$
 Laplacian of Gaussian (LoG):
$\nabla^2 G(x,y) = \frac{\partial^2 G(x,y)}{\partial x^2} + \frac{\partial^2 G(x,y)}{\partial y^2}$
$= \frac{\partial}{\partial x}\left[\frac{-x}{\sigma^2} e^{-\frac{x^2+y^2}{2\sigma^2}}\right] + \frac{\partial}{\partial y}\left[\frac{-y}{\sigma^2} e^{-\frac{x^2+y^2}{2\sigma^2}}\right]$
$= \left[\frac{x^2}{\sigma^4} - \frac{1}{\sigma^2}\right] e^{-\frac{x^2+y^2}{2\sigma^2}} + \left[\frac{y^2}{\sigma^4} - \frac{1}{\sigma^2}\right] e^{-\frac{x^2+y^2}{2\sigma^2}}$
$= \frac{x^2 + y^2 - 2\sigma^2}{\sigma^4}\, e^{-\frac{x^2+y^2}{2\sigma^2}}$
Marr-Hildreth Algorithm
1. Filter the input image with an n×n Gaussian lowpass filter, where n is the smallest odd integer greater than or equal to 6σ.
2. Compute the Laplacian of the image resulting from step 1:
$g(x,y) = \nabla^2 G(x,y) \star f(x,y)$
3. Find the zero crossings of the image from step 2. A sketch of these steps is given below.
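A minimal sketch of the three steps, assuming a NumPy/SciPy implementation; zero crossings are marked wherever horizontally or vertically adjacent Laplacian values change sign.

import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def marr_hildreth(f, sigma=4.0):
    g = laplace(gaussian_filter(f.astype(float), sigma))   # steps 1-2: Gaussian smoothing, then Laplacian
    zc = np.zeros(g.shape, dtype=bool)                     # step 3: zero crossings
    zc[:, :-1] |= g[:, :-1] * g[:, 1:] < 0                 # sign change between horizontal neighbours
    zc[:-1, :] |= g[:-1, :] * g[1:, :] < 0                 # sign change between vertical neighbours
    return zc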
The Canny Edge Detector
 Optimal for step edges corrupted by white noise.
 Objectives:
1. Low error rate: the edges detected must be as close as possible to the true edges.
2. Good localization: the edge points located must be as close as possible to the true edge points.
3. Single edge point response: the detector should return only one point for each true edge point; the number of local maxima around the true edge should be minimal.
The Canny Edge Detector: Algorithm (1)
 Let f(x,y) denote the input image and G(x,y) denote the Gaussian function:
$G(x,y) = e^{-\frac{x^2 + y^2}{2\sigma^2}}$
 We form a smoothed image $f_s(x,y)$ by convolving G and f:
$f_s(x,y) = G(x,y) \star f(x,y)$
The Canny Edge Detector: Algorithm(2)
 Compute the gradient magnitude and direction (angle):
$M(x,y) = \sqrt{g_x^2 + g_y^2}$
and
$\alpha(x,y) = \arctan\left(\frac{g_y}{g_x}\right)$
where $g_x = \partial f_s / \partial x$ and $g_y = \partial f_s / \partial y$.
 Note: any of the filter mask pairs in Fig. 10.14 can be used to obtain gx and gy.
The Canny Edge Detector: Algorithm(3)
 The gradient M(x,y) typically contains wide ridges around local maxima. The next step is to thin those ridges.
 Nonmaxima suppression: let d1, d2, d3, and d4 denote the four basic edge directions for a 3×3 region: horizontal, −45°, vertical, and +45°, respectively.
1. Find the direction dk that is closest to $\alpha(x,y)$.
2. If the value of M(x,y) is less than at least one of its two neighbors along dk, let gN(x,y) = 0 (suppression); otherwise, let gN(x,y) = M(x,y).
The Canny Edge Detector: Algorithm(4)
 The final operation is to threshold gN(x,y) to reduce false edge points.
 Hysteresis thresholding with two thresholds, $T_L < T_H$:
$g_{NH}(x,y) = g_N(x,y) \ge T_H$
$g_{NL}(x,y) = g_N(x,y) \ge T_L$
and
$g_{NL}(x,y) = g_{NL}(x,y) - g_{NH}(x,y)$
 Comment: after the subtraction, $g_{NL}$ contains only the "weak" edge pixels; every strong pixel in $g_{NH}$ has been removed from $g_{NL}$.
The Canny Edge Detector: Algorithm(5)
 Depending on the value of TH, the edges in gNH(x,y)
typically have gaps. Longer edges are formed using the
following procedure:
(a) Locate the next unvisited edge pixel p in gNH(x,y).
(b) Mark as valid edge pixels all the weak pixels in gNL(x,y) that are connected to p using 8-connectivity.
(c) If all nonzero pixels in gNH(x,y) have been visited, go to step (d); else return to (a).
(d) Set to zero all pixels in gNL(x,y) that were not marked as valid edge pixels.
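A compact sketch of the double thresholding and linking in steps 4-5, using connected-component labeling, which has the same effect as the visit-and-mark procedure above; gN, TL, and TH follow the notation of the slides.

import numpy as np
from scipy.ndimage import label

def hysteresis(gN, TL, TH):
    strong = gN >= TH                                  # gNH
    candidates = gN >= TL                              # gNH plus the weak pixels of gNL
    eight = np.ones((3, 3), dtype=int)                 # 8-connectivity
    labels, n = label(candidates, structure=eight)
    # A connected component is a valid edge if it contains at least one strong pixel
    valid = np.zeros(n + 1, dtype=bool)
    valid[np.unique(labels[strong])] = True
    valid[0] = False                                   # background stays invalid
    return valid[labels]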
The Canny Edge Detection: Summary
 Smooth the input image with a Gaussian filter.
 Compute the gradient magnitude and angle images.
 Apply nonmaxima suppression to the gradient magnitude image.
 Use double thresholding and connectivity analysis to detect and link edges.
 Implementation: Matlab (a Python alternative is sketched below).
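For reference, a roughly equivalent call using scikit-image's Canny implementation; the file name and parameter values are illustrative, not taken from the slides.

from skimage import io, feature

f = io.imread('input_image.tif', as_gray=True)   # hypothetical input image
edges = feature.canny(f,
                      sigma=4,                   # scale of the Gaussian smoothing
                      low_threshold=0.04,        # TL
                      high_threshold=0.10)       # TH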
Example: TL = 0.04, TH = 0.10, σ = 4, Gaussian mask of size 25×25 (Canny gives the better result).
Example: TL = 0.05, TH = 0.15, σ = 2, Gaussian mask of size 13×13.
Edge Linking and Boundary Detection
 Edge detection is typically followed by linking algorithms designed to assemble edge pixels into meaningful edges and/or region boundaries.
 Three approaches to edge linking
 Local processing
 Regional processing
 Global processing
Local Processing
 Analyze the characteristics of pixels in a small neighborhood about every point (x,y) that has been declared an edge point.
 All points that are similar according to predefined criteria are linked, forming an edge of pixels.
 Establishing similarity uses:
 the strength (magnitude), and
 the direction of the gradient vector.
 A pixel with coordinates (s,t) in Sxy is linked to the pixel at (x,y) if both the magnitude and direction criteria are satisfied.
Local Processing
 Let Sxy denote the set of coordinates of a neighborhood centered at point (x,y) in an image. An edge pixel with coordinates (s,t) in Sxy is similar in magnitude to the pixel at (x,y) if
$|M(s,t) - M(x,y)| \le E$
 An edge pixel with coordinates (s,t) in Sxy is similar in angle to the pixel at (x,y) if
$|\alpha(s,t) - \alpha(x,y)| \le A$
Local Processing: Steps (1)
1. Compute the gradient magnitude and angle arrays,
M(x,y) and 𝛼(𝑥, 𝑦), of the input image f(x,y)
2. Form a binary image, g, whose value at any pair of coordinates (x,y) is given by
$g(x,y) = \begin{cases} 1 & \text{if } M(x,y) > T_M \text{ and } \alpha(x,y) = A \pm T_A \\ 0 & \text{otherwise} \end{cases}$
where TM is a threshold, A is a specified angle direction, and TA defines a "band" of acceptable directions about A.
Local Processing: Steps (2)
3. Scan the rows of g and fill (set to 1) all gaps (runs of 0s) in each row that do not exceed a specified length, K.
4. To detect gaps in any other direction, rotate g by that angle and apply the horizontal scanning procedure of step 3. A sketch of steps 2-3 for a single direction is given below.
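A sketch of steps 2-3 for a single direction A (step 4, rotating g for other directions, is omitted); M, alpha, TM, A, TA, and K follow the notation above.

import numpy as np

def link_edges_horizontal(M, alpha, TM, A, TA, K):
    g = ((M > TM) & (np.abs(alpha - A) <= TA)).astype(np.uint8)   # step 2
    for row in g:                                                 # step 3: scan each row
        ones = np.flatnonzero(row)
        for a, b in zip(ones[:-1], ones[1:]):
            if 1 < b - a <= K + 1:        # at most K zeros between two edge pixels
                row[a:b] = 1              # fill the gap
    return g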
Regional Processing
 The locations of regions of interest in an image are known or can be determined.
 Polygonal approximations can capture the essential shape features of a region while keeping the representation of the boundary relatively simple.
 The curve may be open or closed. A curve is treated as open when there is a large distance between two consecutive points in the ordered sequence relative to the distances between the other points.
Regional Processing: Steps
1. Let P be the sequence of ordered, distinct, 1-valued
points of a binary image. Specify two starting points,
A and B.
2. Specify a threshold, T, and two empty stacks, OPEN
and CLOSED.
3. If the points in P correspond to a closed curve, put A into OPEN and put B into OPEN and CLOSED. If the points correspond to an open curve, put A into OPEN and B into CLOSED.
4. Compute the parameters of the line passing from the
last vertex in CLOSED to the last vertex in OPEN.
Regional Processing: Steps
5. Compute the distances from the line in Step 4 to all
the points in P whose sequence places them
between the vertices from Step 4. Select the point,
Vmax, with the maximum distance, Dmax
6. If Dmax> T, place Vmax at the end of the OPEN stack
as a new vertex. Go to step 4.
7. Else, remove the last vertex from OPEN and insert it
as the last vertex of CLOSED.
8. If OPEN is not empty, go to step 4.
9. Else, exit. The vertices in CLOSED are the vertices
of the polygonal fit to the points in P.
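A recursive sketch of the same splitting idea (steps 4-7): keep the farthest point from the current chord as a new vertex whenever its distance exceeds T. This formulation uses recursion instead of the explicit OPEN/CLOSED stacks and assumes an open curve; points is an ordered (N, 2) array of boundary points.

import numpy as np

def polygonal_fit(points, T):
    A, B = points[0], points[-1]
    if len(points) <= 2:
        return [tuple(A), tuple(B)]
    # Perpendicular distance from the chord A-B to every point of the segment
    d = np.abs(np.cross(B - A, points - A)) / (np.linalg.norm(B - A) + 1e-12)
    k = int(np.argmax(d))
    if d[k] > T:                                   # step 6: new vertex at the farthest point
        left = polygonal_fit(points[:k + 1], T)
        right = polygonal_fit(points[k:], T)
        return left[:-1] + right                   # merge, dropping the duplicated vertex
    return [tuple(A), tuple(B)]                    # steps 7-9: accept the chord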
Global Processing Using the
Hough Transform
Edge-linking Based on the Hough Transform
1. Obtain a binary edge image.
2. Specify subdivisions in the ρθ-plane.
3. Examine the counts of the accumulator cells for high pixel concentrations.
4. Examine the relationship between the pixels in a chosen cell. A sketch of the accumulator is given below.
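A minimal accumulator sketch for steps 1-2, assuming the usual normal-form parameterization ρ = x·cos(θ) + y·sin(θ); the numbers of ρ and θ subdivisions are illustrative.

import numpy as np

def hough_accumulator(edges, n_theta=180, n_rho=400):
    rows, cols = edges.shape
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
    rho_max = np.hypot(rows, cols)                    # largest possible |rho|
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edges)                        # coordinates of edge pixels
    for x, y in zip(xs, ys):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1             # one vote per theta subdivision
    return acc, thetas                                # high cells indicate collinear pixels (step 3)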
Global Thresholding
 Single thresholding:
$g(x,y) = \begin{cases} 1 & \text{if } f(x,y) > T \text{ (object point)} \\ 0 & \text{if } f(x,y) \le T \text{ (background point)} \end{cases}$
where T is a global threshold.
 Multiple thresholding:
$g(x,y) = \begin{cases} a & \text{if } f(x,y) > T_2 \\ b & \text{if } T_1 < f(x,y) \le T_2 \\ c & \text{if } f(x,y) \le T_1 \end{cases}$
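Both rules translate directly into array comparisons; a, b, and c below are arbitrary output labels.

import numpy as np

def global_threshold(f, T):
    return (f > T).astype(np.uint8)               # 1 = object point, 0 = background point

def multiple_threshold(f, T1, T2, a=2, b=1, c=0):
    g = np.full(f.shape, c, dtype=np.uint8)
    g[f > T1] = b                                 # provisionally label everything above T1 as b
    g[f > T2] = a                                 # then relabel everything above T2 as a
    return g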
Region-Based Segmentation
 Region Growing
 Region growing is a procedure that groups pixels or subregions into larger regions.
 The simplest of these approaches is pixel aggregation, which starts with a set of "seed" points and from these grows regions by appending to each seed point those neighboring pixels that have similar properties (such as gray level, texture, color, shape).
 Region-growing techniques perform better than edge-based techniques in noisy images, where edges are difficult to detect.
Region-Based Segmentation
 Example: region growing based on 8-connectivity.
f(x,y): input image array
S(x,y): seed array containing 1s (seeds) and 0s
Q(x,y): predicate
$Q = \begin{cases} \text{TRUE} & \text{if the absolute difference between the intensities of the seed and the pixel at } (x,y) \text{ is } \le T \\ \text{FALSE} & \text{otherwise} \end{cases}$
Example: Region Growing based on 8-
connectivity
1. Find all connected components in S(x,y) and erode each connected component to one pixel; label all such pixels 1. All other pixels in S are labelled 0.
2. Form an image fQ such that, at each pair of coordinates (x,y), fQ(x,y) = 1 if Q is satisfied and fQ(x,y) = 0 otherwise.
3. Let g be the image formed by appending to each seed point in S all the 1-valued points in fQ that are connected to that seed point.
4. Label each connected component in g with a different region label. This is the segmented image obtained by region growing. A sketch of this procedure is given below.
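A breadth-first sketch of this procedure: seeds is a list of (row, col) coordinates (the eroded, 1-labelled pixels from step 1), T is the intensity-difference threshold of Q, and growth uses 8-connectivity.

import numpy as np
from collections import deque

def region_grow(f, seeds, T):
    labels = np.zeros(f.shape, dtype=np.int32)
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for region_id, (r0, c0) in enumerate(seeds, start=1):
        queue = deque([(r0, c0)])
        labels[r0, c0] = region_id
        while queue:
            r, c = queue.popleft()
            for dr, dc in nbrs:
                rr, cc = r + dr, c + dc
                if (0 <= rr < f.shape[0] and 0 <= cc < f.shape[1]
                        and labels[rr, cc] == 0
                        and abs(float(f[rr, cc]) - float(f[r0, c0])) <= T):   # predicate Q
                    labels[rr, cc] = region_id
                    queue.append((rr, cc))
    return labels            # each grown region carries its own label (step 4)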
Example results using 4-connectivity and 8-connectivity (figures)
Region Splitting and Merging
R: entire image region; Ri: a subregion of R; Q: predicate
1. For any region Ri, if Q(Ri) = FALSE, divide Ri into four quadrants.
2. When no further splitting is possible, merge any adjacent regions Rj and Rk for which Q(Rj ∪ Rk) = TRUE.
3. Stop when no further merging is possible. A sketch of the split phase is given below.
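A quadtree sketch of the split phase (the merge phase, which joins adjacent regions satisfying Q, is omitted for brevity); the predicate shown at the end is just one plausible choice.

import numpy as np

def quadtree_split(f, Q, min_size=2):
    regions = []                                      # list of (row, col, height, width) tuples

    def split(r0, c0, h, w):
        block = f[r0:r0 + h, c0:c0 + w]
        if Q(block) or min(h, w) <= min_size:
            regions.append((r0, c0, h, w))            # accept this region
            return
        h2, w2 = h // 2, w // 2                       # otherwise divide into quadrants
        split(r0,      c0,      h2,     w2)
        split(r0,      c0 + w2, h2,     w - w2)
        split(r0 + h2, c0,      h - h2, w2)
        split(r0 + h2, c0 + w2, h - h2, w - w2)

    split(0, 0, f.shape[0], f.shape[1])
    return regions

# Example predicate: a region is accepted if its intensity spread is small
Q = lambda block: float(block.max()) - float(block.min()) <= 10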
Segmentation Using Morphological
Watersheds
 Three types of points in a topographic interpretation:
 Points belonging to a regional minimum
 Points at which a drop of water would fall to a single
minimum. (The catchment basin or watershed of that
minimum.)
 Points at which a drop of water would be equally likely
to fall to more than one minimum. (The divide lines or
watershed lines.)
Watershed lines (illustration)
Segmentation Using Morphological
Watersheds: Backgrounds

http://www.icaen.uiowa.edu/~dip/LECTURE/Segmentation3.html#watershed
Watershed Segmentation: Example
 The objective is to find watershed lines.
 The idea is simple:
 Suppose that a hole is punched in each regional
minimum and that the entire topography is flooded
from below by letting water rise through the holes
at a uniform rate.
 When the rising water in distinct catchment basins is about to merge, a dam is built to prevent the merging. These dam boundaries correspond to the watershed lines.
Watershed Segmentation Algorithm
 Start with all pixels having the lowest possible value. These form the basis for the initial watersheds.
 For each intensity level k, and for each connected group of pixels of intensity k:
1. If the group is adjacent to exactly one existing region, add these pixels to that region.
2. Else, if it is adjacent to more than one existing region, mark the group as boundary (watershed line).
3. Else, start a new region.
A direct sketch of this flooding procedure is given below.
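An unoptimized sketch of the level-by-level flooding described above; groups of pixels at each intensity level are found by connected-component labeling, and -1 marks watershed (boundary) pixels.

import numpy as np
from scipy.ndimage import label, binary_dilation

def simple_watershed(f):
    labels = np.zeros(f.shape, dtype=np.int32)
    next_region = 1
    eight = np.ones((3, 3), dtype=bool)                        # 8-connectivity
    for k in np.unique(f):                                     # each intensity level k, low to high
        groups, n = label(f == k, structure=eight)
        for g in range(1, n + 1):
            mask = groups == g
            ring = binary_dilation(mask, structure=eight) & ~mask
            adjacent = set(np.unique(labels[ring])) - {0, -1}  # existing regions touching the group
            if len(adjacent) == 1:
                labels[mask] = adjacent.pop()                  # 1. grow the single adjacent region
            elif len(adjacent) > 1:
                labels[mask] = -1                              # 2. mark as boundary
            else:
                labels[mask] = next_region                     # 3. start a new region
                next_region += 1
    return labels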
Watershed Segmentation: Examples
The watershed algorithm is often applied to the gradient image instead of the original image.
Watershed Segmentation: Examples
Due to noise and other local irregularities of the gradient, over-segmentation might occur.
Watershed Segmentation: Examples
A solution is to limit the number of regional minima: use markers to specify the only allowed regional minima (for example, gray-level values might be used as a marker).
K-means Clustering
1. Partition the data points into K clusters randomly. Find the centroid of each cluster.
2. For each data point:
 Calculate the distance from the data point to each cluster centroid.
 Assign the data point to the cluster with the closest centroid.
3. Re-compute the centroid of each cluster.
4. Repeat steps 2 and 3 until there is no further change in the assignment of data points (or in the centroids). A sketch is given below.
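A compact sketch of these four steps; X is an (N, D) array of feature vectors (for segmentation, typically one row per pixel with intensity or color features), and the random initial partition follows step 1.

import numpy as np

def kmeans(X, K, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, K, size=len(X))                     # step 1: random partition
    centroids = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                          else X[rng.integers(len(X))] for k in range(K)])
    for _ in range(max_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)   # step 2: distances
        new_labels = d.argmin(axis=1)                            # assign to the closest centroid
        if np.array_equal(new_labels, labels):                   # step 4: assignments unchanged -> stop
            break
        labels = new_labels
        centroids = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                              else centroids[k] for k in range(K)])          # step 3: re-compute centroids
    return labels, centroids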
K-Means Clustering: example segmentation results (figures)
Appendix: Similarity
 Similarity can be evaluated using distance
 There are several kinds of distance
 Euclidean distance
 Mahalanobis distance
 Bhattacharyya distance
 Manhattan distance (city block distance)
 Kullback-Leibler divergence
 Efficient computation of distances is highly important in practice (see the vectorized example below).
Example: Euclidean Distance
 Matrix X contains M column vectors of K-dimensions
 Matrix Y contains N column vectors of K-dimensions
 Compute the Euclidean distance between each column
vector of X and each column vector of Y
$X = \begin{bmatrix} x_{1,1} & x_{1,2} & \dots & x_{1,M} \\ x_{2,1} & x_{2,2} & \dots & x_{2,M} \\ \vdots & \vdots & & \vdots \\ x_{K,1} & x_{K,2} & \dots & x_{K,M} \end{bmatrix} \quad \text{and} \quad Y = \begin{bmatrix} y_{1,1} & y_{1,2} & \dots & y_{1,N} \\ y_{2,1} & y_{2,2} & \dots & y_{2,N} \\ \vdots & \vdots & & \vdots \\ y_{K,1} & y_{K,2} & \dots & y_{K,N} \end{bmatrix}$
Example: Euclidean Distance
$d(X,Y) = \begin{bmatrix} d_{1,1} & d_{1,2} & \dots & d_{1,N} \\ d_{2,1} & d_{2,2} & \dots & d_{2,N} \\ \vdots & \vdots & & \vdots \\ d_{M,1} & d_{M,2} & \dots & d_{M,N} \end{bmatrix}$
with
$d_{m,n} = D_E(X_m, Y_n)$
and
$X_m = \begin{bmatrix} x_{1,m} \\ x_{2,m} \\ \vdots \\ x_{K,m} \end{bmatrix} \quad \text{and} \quad Y_n = \begin{bmatrix} y_{1,n} \\ y_{2,n} \\ \vdots \\ y_{K,n} \end{bmatrix}$
Example: Euclidean Distance
Prove that in Matlab we have:
d(X,Y).^2 = sum(X.^2)' * ones(1,N) + ones(M,1) * sum(Y.^2) - 2*X'*Y
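A NumPy sketch of the same vectorized identity (the squared distances expand into the squared column norms of X and Y minus twice the cross terms), with X of shape (K, M) and Y of shape (K, N) as defined above.

import numpy as np

def pairwise_euclidean(X, Y):
    sq = (np.sum(X**2, axis=0)[:, None]       # squared column norms of X, shape (M, 1)
          + np.sum(Y**2, axis=0)[None, :]     # squared column norms of Y, shape (1, N)
          - 2.0 * X.T @ Y)                    # cross terms, shape (M, N)
    return np.sqrt(np.maximum(sq, 0.0))       # clamp tiny negative values from round-off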