
Computer Vision & Digital Image Processing

The document discusses image segmentation techniques. It describes how segmentation divides an image into constituent parts or objects. The level of subdivision depends on the problem being solved. Common segmentation methods include detecting discontinuities using edge detection, and detecting similarities using thresholding, region growing, region splitting and region merging. Edge detection algorithms often use gradient and Laplacian operators to find boundaries between regions with distinct gray level properties.

Computer Vision & Digital Image Processing

Image Segmentation

Electrical & Computer Engineering Dr. D. J. Jackson Lecture 16-1

Image segmentation

• Segmentation divides an image into its constituent parts or objects
• Level of subdivision depends on the problem being solved
• Segmentation stops when objects of interest in an
application have been isolated
• Example:
– For an air-to-ground target acquisition system interest may lie in
identifying vehicles on a road
• Segment the road from the image
• Segment contents of the road down to objects of a range of sizes that
correspond to potential vehicles
• No need to go below this level, or segment outside the road boundary



Image segmentation (continued)

• Autonomous segmentation is one of the most difficult tasks in image
processing - it largely determines the eventual success or failure of the
process
• Segmentation algorithms for monochrome images are based
on one of two basic properties of gray-level values
– Discontinuity
– Similarity
• For discontinuity, the approach is to partition an image
based on abrupt changes in gray level
• The principal areas of interest are:
– detection of isolated points
– detection of lines and edges in an image


Image segmentation (continued)

• For similarity, the principal approaches are based on
– thresholding
– region growing
– region splitting
– region merging
• Using discontinuity and similarity of gray-level pixel
values is applicable to both static and dynamic (time
varying) images
• For dynamic images, the concept of motion can be
exploited in the segmentation process
Discontinuity detection
• Detecting discontinuities (points, lines and edges) is generally
accomplished by mask processing (much as in the spatial domain filter
examples)
• Use the response equation

R = w_1 z_1 + w_2 z_2 + \cdots + w_9 z_9 = \sum_{i=1}^{9} w_i z_i

• A mask used for detecting isolated points (different from a constant
background) would be

-1 -1 -1
-1 8 -1
-1 -1 -1

Isolated point detection

• Detection of isolated points is accomplished by using the previous mask
• An isolated point is detected if the response of the mask is greater than
a predetermined threshold
R > T
• This measures the weighted difference between a center
point and its neighbors
• The mask is the same as the high frequency filtering mask
• The emphasis here is on the detection of points
– Only differences that are large enough to be considered isolated
points in an image are of interest
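The point-detection test above can be sketched in Python. This is a minimal illustration, not code from the slides; the helper function `apply_mask`, the 5x5 test image, and the threshold value T are all assumptions made here (the absolute value of R is compared so that both bright and dark points are caught):

```python
import numpy as np

# Hypothetical helper: apply a 3x3 mask at every interior pixel
# (border pixels are left at zero for simplicity)
def apply_mask(image, mask):
    out = np.zeros(image.shape)
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            region = image[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.sum(region * mask)  # R = sum of w_i * z_i
    return out

# The point-detection mask from the slide
point_mask = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float)

# Constant background with a single isolated bright point
img = np.full((5, 5), 10.0)
img[2, 2] = 100.0

R = apply_mask(img, point_mask)
T = 100.0                  # assumed threshold for this example
points = np.abs(R) > T     # True only at the isolated point
```

At the isolated point the response is 8(100) - 8(10) = 720, well above the threshold; at every other interior pixel the magnitude of the response stays below it.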



Line detection

• Line detection would involve the application of several masks
• In the creation of masks, the intent is to form a
mask (or set of masks) that will respond to a 1-pixel
thick line in a given orientation
– Horizontal, Vertical, +45º, -45º

Horizontal:      Vertical:        +45º:            -45º:
-1 -1 -1         -1  2 -1         -1 -1  2          2 -1 -1
 2  2  2         -1  2 -1         -1  2 -1         -1  2 -1
-1 -1 -1         -1  2 -1          2 -1 -1         -1 -1  2


Line detection (continued)

• With a constant background, the maximum response occurs when the line is
“lined up” with the center of the mask
• Note that the preferred direction of each mask is weighted
with a larger coefficient than other possible directions
• Let R1, R2, R3 and R4 denote the responses of the masks
• If, at a certain point in the image,
R_i > R_j for all j ≠ i
• that point is said to be more likely associated with a line in the
direction of mask i
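This mask-selection rule can be illustrated with the four masks above (an assumed example; the test image and probe location are made up):

```python
import numpy as np

# The four line-detection masks from the slide
masks = {
    "horizontal": np.array([[-1, -1, -1],
                            [ 2,  2,  2],
                            [-1, -1, -1]], dtype=float),
    "vertical":   np.array([[-1,  2, -1],
                            [-1,  2, -1],
                            [-1,  2, -1]], dtype=float),
    "+45":        np.array([[-1, -1,  2],
                            [-1,  2, -1],
                            [ 2, -1, -1]], dtype=float),
    "-45":        np.array([[ 2, -1, -1],
                            [-1,  2, -1],
                            [-1, -1,  2]], dtype=float),
}

# One-pixel-thick horizontal line on a constant background
img = np.zeros((5, 5))
img[2, :] = 10.0

# Response R_i of each mask at the line's center pixel (2, 2)
region = img[1:4, 1:4]
responses = {name: float(np.sum(region * m)) for name, m in masks.items()}

# The mask whose direction matches the line gives the largest response
best = max(responses, key=responses.get)
```

The horizontal mask responds with 2(10 + 10 + 10) = 60 at the line's center while the other three masks respond with 0, so the rule attributes the point to a horizontal line.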



Edge detection

• Edge detection is by far the most common approach for detecting
discontinuities in gray levels
– Isolated points and 1-pixel thin lines are not common in most practical
applications
• Basic formulation and initial assumptions
– An edge is a boundary between two regions with relatively distinct
gray-level properties
– Regions are sufficiently homogeneous so that the transition between
the regions can be determined on the basis of gray-level
discontinuities alone
– If this assumption is not valid, other techniques must be used
• The basic idea behind most edge detection techniques is the
computation of a local derivative operator


Derivative operators

• An image of a dark stripe on a light background (and vice versa)
• A profile of the lines in the image
(modeled as a gradual rather than
sharp transition)
– Edges in images tend to be slightly
blurred as a result of sampling
• The first derivative: the magnitude
detects the presence of an edge
• The second derivative: the sign tells the type of transition (light-to-dark
or dark-to-light). Note also the presence of a zero-crossing at each edge



Gradient operators

• The first derivative at any point in an image is computed using the
magnitude of the gradient
\nabla f = [G_x, G_y]^T = [\partial f/\partial x, \partial f/\partial y]^T
• where the magnitude is
|\nabla f| = \mathrm{mag}(\nabla f) = (G_x^2 + G_y^2)^{1/2} \approx |G_x| + |G_y|
• The direction of the gradient vector is the angle α(x,y) given
by
\alpha(x, y) = \tan^{-1}(G_y / G_x)
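Worked numerically with hypothetical component values (G_x = 3, G_y = 4, not from the slides):

```python
import math

# Assumed gradient components at some pixel
gx, gy = 3.0, 4.0

magnitude = (gx**2 + gy**2) ** 0.5   # exact magnitude: (9 + 16)^(1/2) = 5
approx = abs(gx) + abs(gy)           # |Gx| + |Gy| approximation: 7
alpha = math.atan2(gy, gx)           # direction angle in radians
```

math.atan2 is used instead of a plain arctangent so the angle lands in the correct quadrant when G_x is negative or zero.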


Gradient operators (continued)

• The derivatives may be digitally implemented in several ways, but the
Sobel operators are commonly chosen as they provide both differencing and
smoothing
– The smoothing is advantageous because derivative operators enhance noise
• The gradient computation using Sobel operators is given as
G_x = (z_7 + 2z_8 + z_9) - (z_1 + 2z_2 + z_3)
G_y = (z_3 + 2z_6 + z_9) - (z_1 + 2z_4 + z_7)

G_x mask:        G_y mask:
-1 -2 -1         -1  0  1
 0  0  0         -2  0  2
 1  2  1         -1  0  1
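A sketch of the Sobel computation (an assumed implementation; the step-edge image, the border handling, and the use of the |Gx| + |Gy| approximation are choices made here, not prescribed by the slides):

```python
import numpy as np

# Sobel operators, with z1..z9 numbered row by row as on the slide
sobel_x = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)   # Gx
sobel_y = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)     # Gy

def gradient(image):
    """Sobel gradient magnitude using the |Gx| + |Gy| approximation."""
    h, w = image.shape
    mag = np.zeros((h, w))          # borders left at zero for simplicity
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            region = image[i - 1:i + 2, j - 1:j + 2]
            gx = np.sum(region * sobel_x)
            gy = np.sum(region * sobel_y)
            mag[i, j] = abs(gx) + abs(gy)
    return mag

# Vertical step edge: dark left half, light right half
img = np.zeros((5, 6))
img[:, 3:] = 10.0

mag = gradient(img)
```

On this step edge the magnitude is 40 along the two columns that straddle the transition and 0 over the flat interior.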
The Laplacian

• The Laplacian is a second order derivative operator given by


\nabla^2 f = \partial^2 f/\partial x^2 + \partial^2 f/\partial y^2
• As with the gradient, this may be implemented digitally
• With a 3x3 mask, the most common form is

\nabla^2 f = 4z_5 - (z_2 + z_4 + z_6 + z_8)

• The basic requirement for the digital Laplacian is that the center
coefficient be positive, the other coefficients be negative (or zero), and
that the sum of the coefficients be zero (indicating a zero response over a
constant area)
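A sketch of the digital Laplacian (an assumed example; the flat test image and the step-edge image are made up):

```python
import numpy as np

# 3x3 digital Laplacian: 4*z5 - (z2 + z4 + z6 + z8)
def laplacian(image):
    h, w = image.shape
    out = np.zeros((h, w))          # borders left at zero for simplicity
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = 4 * image[i, j] - (image[i - 1, j] + image[i + 1, j]
                                           + image[i, j - 1] + image[i, j + 1])
    return out

# Zero response over a constant area (the coefficients sum to zero)
flat = laplacian(np.full((4, 4), 7.0))

# A vertical step edge produces the double-edge response:
# negative on the dark side, positive on the light side
img = np.zeros((5, 5))
img[:, 2:] = 10.0
lap = laplacian(img)
```

The sign change between the -10 and +10 responses on either side of the step is the zero crossing that the following slides exploit.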


The Laplacian (continued)

• Although the Laplacian responds to changes in intensity, it is seldom
used in edge detection for several reasons
– As a second derivative operator it is typically
unacceptably sensitive to noise
– The Laplacian produces double edges
– It is unable to detect edge direction
• As such, the Laplacian is used in the secondary role
of detector for establishing whether a pixel is on the
light or dark side of an edge



The Laplacian (continued)

• A more general use of the Laplacian is to find the location of edges
using the zero-crossings property
• Basic idea is to convolve an image with the
Laplacian of a 2-D Gaussian function of the form
h(x, y) = \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right)

• σ = standard deviation.
• If r^2 = x^2 + y^2, then the Laplacian of h is
\nabla^2 h = \left( \frac{r^2 - \sigma^2}{\sigma^4} \right) \exp\left( -\frac{r^2}{2\sigma^2} \right)
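The Laplacian-of-Gaussian idea can be sketched as follows. Everything here is an assumed implementation, not from the slides: the kernel size, the choice σ = 1, the subtraction of the kernel mean (so a constant area gives exactly zero response), and the sign-change test between horizontal neighbors.

```python
import numpy as np

def log_kernel(sigma=1.0, size=7):
    # Sample the formula ((r^2 - sigma^2)/sigma^4) * exp(-r^2/(2*sigma^2))
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = x**2 + y**2
    k = ((r2 - sigma**2) / sigma**4) * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()   # force zero response over constant regions

def convolve(image, kernel):
    # Direct computation over the interior (the kernel is symmetric,
    # so correlation and convolution coincide)
    kh = kernel.shape[0] // 2
    out = np.zeros(image.shape)
    for i in range(kh, image.shape[0] - kh):
        for j in range(kh, image.shape[1] - kh):
            region = image[i - kh:i + kh + 1, j - kh:j + kh + 1]
            out[i, j] = np.sum(region * kernel)
    return out

def zero_crossings(resp):
    # A sign change between horizontal neighbors marks an edge pixel
    return (resp[:, :-1] * resp[:, 1:]) < 0

img = np.zeros((15, 15))
img[:, 8:] = 10.0                   # vertical step edge
resp = convolve(img, log_kernel())
zc = zero_crossings(resp)           # True where the response changes sign
```

Because the response changes sign right at the step, zc flags the edge while the flat regions (where the response is exactly zero) stay unflagged; a production detector would also test vertical and diagonal neighbors and suppress weak crossings.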


Example using the Laplacian

[Figure: four panels showing the original image; the image convolved with
the Laplacian; the result of thresholding the convolved image to yield a
binary image; and the zero crossings obtained from the binary image]
