Lecture 9-Edge Detection

The document discusses edge detection techniques in image processing. It describes how edge detection works by calculating derivatives to find boundaries between different image regions. Several common first-order derivative methods like Roberts, Prewitt, and Sobel operators are explained for computing gradients to detect edges in images.


PART I

IMAGE PROCESSING
INSTRUCTOR: DR. NGUYEN NGOC TRUONG MINH
SCHOOL OF ELECTRICAL ENGINEERING, INTERNATIONAL UNIVERSITY (VNU-HCMC)

Ho Chi Minh City, Spring 2024


Lecture 9 – Edge Detection 2

LECTURE IX –
EDGE DETECTION



LECTURE CONTENT
• What is edge detection and why is it so important to computer vision?

• What are the main edge detection techniques and how well do they work?

• How can edge detection be performed in MATLAB?

• What is the Hough transform and how can it be used to postprocess the
results of an edge detection algorithm?

• Chapter Summary - What have we learned?

• Problems

9.1 FORMULATION OF THE PROBLEM


• Edge detection is a fundamental image processing operation used in many
computer vision solutions.
• The goal of edge detection algorithms is to find the most relevant edges in
an image or scene. These edges should then be connected into meaningful
lines and boundaries, resulting in a segmented image containing two or
more regions.
• Subsequent stages in a machine vision system will use the segmented
results for tasks such as object counting, measuring, feature extraction,
and classification.

• The need for edge detection algorithms as part of a vision system also has its
roots in biological vision: there is compelling evidence that the very early
stages of the human visual system (HVS) contain edge-sensitive cells that
respond strongly (i.e., exhibit a higher firing rate) when presented with
edges of certain intensity and orientation.

• Edge detection algorithms, therefore, attempt to emulate an ability present in the human visual system.


• Edge detection is a hard image processing problem. Most edge detection
solutions exhibit limited performance in the presence of images containing
real-world scenes, that is, images that have not been carefully controlled in
their illumination, size and position of objects, and contrast between
objects and background.
• The impacts of shadows, occlusion among objects and parts of the scene,
and noise—to mention just a few—on the results produced by an edge
detection solution are often significant. Consequently, it is common to
precede the edge detection stage with preprocessing operations such as
noise reduction and illumination correction.

9.2 BASIC CONCEPTS

• An edge can be defined as a boundary between two image regions having distinct characteristics according to some feature (e.g., gray level, color, or texture).
• In this chapter, we focus primarily on edges in grayscale 2D images, which
are usually associated with a sharp variation of the intensity function
across a portion of the image.


• Edge detection methods usually rely on calculations of the first or second
derivative along the intensity profile.
• The first derivative has the desirable property of being directly proportional
to the difference in intensity across the edge; consequently, the magnitude
of the first derivative can be used to detect the presence of an edge at a
certain point in the image.
• The sign of second derivative can be used to determine whether a pixel lies
on the dark or on the bright side of an edge. Moreover, the zero crossing
between its positive and negative peaks can be used to locate the center
of thick edges.
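The behavior just described can be illustrated with a short sketch (Python/NumPy is used here for a self-contained illustration alongside the lecture's MATLAB; the ramp-edge profile is made up):

```python
import numpy as np

# Hypothetical 1-D intensity profile of a ramp edge:
# dark region, linear ramp, bright region.
profile = np.array([0, 0, 0, 1, 2, 3, 4, 4, 4], dtype=float)

# First derivative (central differences): peaks over the ramp.
d1 = np.gradient(profile)

# Second derivative: positive where the ramp begins, negative where it
# ends, and zero at the center of the edge (the zero crossing).
d2 = np.gradient(d1)

print(d1)  # maximal over the ramp
print(d2)  # sign change around the ramp center
```

The peak of d1 marks the edge, while the sign change in d2 locates its center, exactly as in the figure discussed below.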


• The figure shows an image with a ramp edge
and the corresponding intensity profile,
first and second derivatives, and zero
crossing for any horizontal line in the
image. It shows that the first derivative of the
intensity function has a peak at the center of
the luminance edge, whereas the second
derivative—which is the slope of the first
derivative function—has a zero crossing at
the center of the luminance edge, with a
positive value on one side and a negative
value on the other.

• The edges in Figures 9.1 and 9.2 were noise-free.

• When the input image is corrupted by noise, the first and second derivatives
respond quite differently. Even modest noise levels—barely noticeable when
you look at the original image or its profile—can render the second
derivative results useless, whereas more pronounced noise levels will also
impact the first derivative results to a point that they cannot be used for
edge detection.

• In summary, the process of edge detection consists of three main steps:


1. Noise Reduction: Because the first and second derivatives are highly sensitive to noise, the use of image smoothing techniques before applying the edge detection operator is strongly recommended.
2. Detection of Edge Points: Here local operators that respond strongly to edges
and weakly elsewhere are applied to the image, resulting in an output image
whose bright pixels are candidates to become edge points.
3. Edge Localization: Here the edge detection results are postprocessed, spurious
pixels are removed, and broken edges are turned into meaningful lines and
boundaries, using techniques such as the Hough transform.

In MATLAB
The IPT has a function for edge
detection (edge) whose variants
and options will be explored
throughout the tutorial.

9.3 FIRST-ORDER DERIVATIVE EDGE DETECTION


• The simplest edge detection methods work by estimating the gray-level
gradient at a pixel, which can be approximated by the digital equivalent of
the first-order derivative as follows:
g_x(x, y) ≈ f(x + 1, y) − f(x − 1, y)
g_y(x, y) ≈ f(x, y + 1) − f(x, y − 1)
• The 2×2 approximations of the first-order derivative above are also known as the Roberts operators and can be represented using a 2×2 matrix notation as

g_x = [ 0  −1 ]        g_y = [ −1  0 ]
      [ 1   0 ]              [  0  1 ]
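As an illustration of how these 2×2 cross differences respond to an edge, here is a Python/NumPy sketch (the 4×4 test image is hypothetical; this is not the IPT implementation):

```python
import numpy as np

def roberts_magnitude(f):
    """Roberts cross differences (correlation with the 2x2 kernels above,
    origin at the top-left element), combined as |gx| + |gy|."""
    gx = f[1:, :-1] - f[:-1, 1:]   # kernel [[0, -1], [1, 0]]
    gy = f[1:, 1:] - f[:-1, :-1]   # kernel [[-1, 0], [0, 1]]
    return np.abs(gx) + np.abs(gy)

# Hypothetical 4x4 image with a vertical step edge (dark left, bright right).
f = np.array([[0, 0, 10, 10]] * 4, dtype=float)
g = roberts_magnitude(f)
print(g)  # strong response only along the column where the step occurs
```

Flat regions produce zero response (the kernel coefficients sum to zero), while the step column responds with the full intensity difference along both diagonals.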

• These gradients are often computed within a 3×3 neighborhood using convolution:
g_x(x, y) = h_x ⋆ f(x, y)
g_y(x, y) = h_y ⋆ f(x, y)
where h_x and h_y are appropriate convolution masks (kernels).
• The simplest pair of kernels, known as the Prewitt edge detector (operator), is as follows:

h_x = [ −1  0  1 ]        h_y = [ −1 −1 −1 ]
      [ −1  0  1 ]              [  0  0  0 ]
      [ −1  0  1 ]              [  1  1  1 ]
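A sketch of the convolution step with the Prewitt kernels (Python/SciPy; ndimage.correlate is used — for these kernels, convolution differs only in sign — and the step-edge test image is made up):

```python
import numpy as np
from scipy.ndimage import correlate

# Prewitt kernels as given above.
hx = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=float)
hy = hx.T

# Hypothetical image with a vertical step edge.
f = np.array([[0, 0, 0, 10, 10, 10]] * 6, dtype=float)

gx = correlate(f, hx, mode='reflect')  # responds to vertical edges
gy = correlate(f, hy, mode='reflect')  # responds to horizontal edges
print(gx.max())        # strong response in the columns next to the step
print(np.abs(gy).max())  # no horizontal edges in this image
```

Note how gx fires only near the step while gy stays zero everywhere: each mask responds to edges of one orientation.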

• A similar pair of kernels, which gives more emphasis to on-axis pixels, is the Sobel edge detector, given by the following:

h_x = [ −1  0  1 ]        h_y = [ −1 −2 −1 ]
      [ −2  0  2 ]              [  0  0  0 ]
      [ −1  0  1 ]              [  1  2  1 ]

• As you may have noticed, despite their differences, the 3x3 masks
presented so far share two properties:
➢ They have coefficients of opposite signs (separated by a row or column of zero coefficients) in order to obtain a high response in image regions with variations in intensity (possibly due to the presence of an edge).
➢ The sum of the coefficients is equal to zero, which means that when applied
to perfectly homogeneous regions in the image (i.e., a patch of the image
with constant gray level), the result will be 0 (black pixel).

In MATLAB
The IPT function edge has options for both Prewitt and Sobel operators.
Edge detection using Prewitt and Sobel operators can also be achieved by
using imfilter with the corresponding 3x3 masks (which can be created
using fspecial).

EXAMPLE 9.1

• Figure 9.4 shows an example of using imfilter to apply the Prewitt edge detector to a test image. Because the Prewitt kernels have both positive and negative coefficients, the resulting array contains both negative and positive values.
• Since negative values are usually truncated when displaying an image in
MATLAB, the partial results (Figure 9.4b and c) have been mapped to a
modified gray-level range (where the highest negative value becomes black,
the highest positive value is displayed as white, and all zero values are
shown with a midlevel gray).

• The combined final result (Figure 9.4d) was obtained by computing the
magnitude of the gradient, originally defined as

g = √(g_x² + g_y²)
which can be approximated by
g ≈ |g_x| + |g_y|
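A quick numeric comparison of the exact magnitude and the |g_x| + |g_y| approximation (Python/NumPy, with made-up gradient values):

```python
import numpy as np

# Hypothetical gradient components at three pixels.
gx = np.array([3.0, 0.0, -4.0])
gy = np.array([4.0, 5.0,  3.0])

g_exact  = np.sqrt(gx**2 + gy**2)   # sqrt(gx^2 + gy^2)
g_approx = np.abs(gx) + np.abs(gy)  # cheaper, avoids the square root

print(g_exact)   # [5. 5. 5.]
print(g_approx)  # [7. 5. 7.] -- exact only when one component is zero
```

The approximation never underestimates the true magnitude and overestimates it by at most a factor of √2, which is usually acceptable once the result is thresholded.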

EXAMPLE 9.2

• Figure 9.5 shows examples of edge detection using imfilter to apply the
Sobel operator on a grayscale image.
• All results are displayed in their negative (using imcomplement) for better
viewing on paper.
• The idea of using horizontal and vertical masks used by the Prewitt and
Sobel operators can be extended to include all eight compass directions:
north, northeast, east, southeast, south, southwest, west, and northwest.
• The Kirsch (Figure 9.6) and the Robinson (Figure 9.7) kernels are two
examples of compass masks.

• Edge detection results can be thresholded to reduce the number of false positives (i.e., pixels that appear in the output image although they do not correspond to actual edges).
• Figure 9.8 shows an example of using edge to implement the Sobel
operator with different threshold levels. The results range from
unacceptable because of too many spurious pixels (part (a)) to
unacceptable because of too few edge pixels (part (d)).
• Part (c) shows the result using the best threshold value, as determined by the edge function using the syntax [BW,thresh] = edge(I,'sobel');, where I is the input image.

9.4 SECOND-ORDER DERIVATIVE EDGE DETECTION

• The Laplacian operator is a straightforward digital approximation of the second-order derivative of the intensity.
• Although it has the potential for being employed as an isotropic (i.e.,
omnidirectional) edge detector, it is rarely used in isolation because of
two limitations (commented earlier in this lecture):
➢ It generates “double edges,” that is, positive and negative values for each
edge.
➢ It is extremely sensitive to noise.

In MATLAB
• Edge detection using the Laplacian operator can be implemented using the
fspecial function (to generate the Laplacian 3x3 convolution mask) and the
zerocross option in function edge as follows:
h = fspecial('laplacian',0);
J = edge(I,'zerocross',t,h);
where t is a user-provided sensitivity threshold.
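The zero-crossing idea itself can be sketched as follows (Python/SciPy; the common 3×3 Laplacian mask is used, and the threshold t and test image are illustrative):

```python
import numpy as np
from scipy.ndimage import correlate

# Common 3x3 Laplacian mask.
lap = np.array([[0,  1, 0],
                [1, -4, 1],
                [0,  1, 0]], dtype=float)

# Hypothetical image with a vertical step edge.
f = np.array([[0, 0, 0, 10, 10, 10]] * 5, dtype=float)
L = correlate(f, lap, mode='reflect')

# Zero crossings between horizontal neighbors: a strict sign change
# whose amplitude exceeds the sensitivity threshold t.
t = 1.0
zc = (L[:, :-1] * L[:, 1:] < 0) & (np.abs(L[:, :-1] - L[:, 1:]) > t)
print(np.argwhere(zc[0]))  # the crossing sits at the center of the edge
```

The Laplacian response goes positive on the dark side of the step and negative on the bright side; the zero crossing between them marks the edge center.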

EXAMPLE 9.3
• Figure 9.9 shows the results of applying the zero-cross edge detector to
an image and the impact of varying the thresholds.
• Part (a) shows a clean input image; part (b) shows the results of edge
detection in (a) using default parameters, whereas part (c) shows the
effects of reducing the threshold to 0. Clearly the result in (b) is much
better than (c).
• Part (d) is a noisy version of (a) (with zero-mean Gaussian noise of variance 0.0001). Parts (e) and (f) are the edge detection results using the noisy
image in part (d) as an input and the same options as in (b) and (c),
respectively. In this case, although the amount of noise is hardly noticeable
in the original image (d), both edge detection results are unacceptable.

9.4.1 LAPLACIAN OF GAUSSIAN

• The Laplacian of Gaussian (LoG) edge detector works by smoothing the image with a Gaussian low-pass filter (LPF) and then applying a Laplacian edge detector to the result. The resulting transfer function (which resembles a Mexican hat in its 3D view) is represented in Figure 9.10.
• The LoG filter can sometimes be approximated by taking the differences
of two Gaussians of different widths in a method known as difference of
Gaussians (DoG).
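The DoG approximation can be checked numerically (Python/SciPy sketch; the 1.6 width ratio is the classic Marr–Hildreth choice, and the impulse input simply exposes each filter's shape):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

# 1-D impulse: filtering it yields the filter's own impulse response.
x = np.zeros(51)
x[25] = 1.0

sigma = 2.0
log_resp = gaussian_laplace(x, sigma)                       # LoG response
dog_resp = gaussian_filter(x, sigma) - gaussian_filter(x, 1.6 * sigma)

# Up to sign and scale, the two shapes should be very similar.
corr = np.corrcoef(dog_resp, -log_resp)[0, 1]
print(round(corr, 2))
```

The high correlation between the two (sign-flipped) responses is what justifies using DoG as a cheap stand-in for LoG.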
In MATLAB
• Edge detection using the LoG filter can be implemented using the log
option in function edge.

EXAMPLE 9.4

• Figure 9.11 shows the results of applying the LoG edge detector to an
image, and the impact of varying σ.
• Part (a) shows the input image; part (b) shows the results of edge
detection in (a) using default parameters (i.e., σ = 2), whereas parts (c)
and (d) show the effects of reducing or increasing sigma (to 1 and 3,
respectively).
• Reducing σ causes the resulting image to contain more fine details,
whereas an increase in σ leads to a coarser edge representation, as
expected.

9.5 THE CANNY EDGE DETECTOR

• The Canny edge detector is one of the most popular, powerful, and effective
edge detection operators available today. Its algorithm can be described as
follows:
1. The input image is smoothed using a Gaussian low-pass filter, with a specified value
of σ: large values of σ will suppress much of the noise at the expense of weakening
potentially relevant edges.
2. The local gradient (intensity and direction) is computed for each point in the
smoothed image.
3. The edge points at the output of step 2 result in wide ridges. The algorithm thins
those ridges, leaving only the pixels at the top of each ridge, in a process known as
nonmaximal suppression.
4. The ridge pixels are then thresholded using two thresholds, T_low and T_high: ridge pixels with values greater than T_high are considered strong edge pixels; ridge pixels with values between T_low and T_high are said to be weak pixels. This process is known as hysteresis thresholding.
5. The algorithm performs edge linking, aggregating weak pixels that are 8-connected to the strong pixels.
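Steps 4 and 5 can be sketched with connected components (a Python/NumPy illustration of hysteresis thresholding plus edge linking, not the actual code inside MATLAB's edge; the gradient values are made up):

```python
import numpy as np
from scipy.ndimage import label

def hysteresis(grad, t_low, t_high):
    """Keep strong pixels (> t_high) plus weak pixels (> t_low) that are
    8-connected, possibly through other weak pixels, to a strong pixel."""
    strong = grad > t_high
    weak_or_strong = grad > t_low
    # Label 8-connected components of the weak-or-strong mask, then keep
    # only those components containing at least one strong pixel.
    labels, n = label(weak_or_strong, structure=np.ones((3, 3), dtype=int))
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False                      # background label
    return keep[labels]

# Hypothetical gradient magnitudes: one strong pixel with an adjacent weak
# pixel (kept) and one isolated weak pixel (discarded).
grad = np.array([[0.9, 0.4, 0.0, 0.4],
                 [0.0, 0.0, 0.0, 0.0]])
out = hysteresis(grad, 0.3, 0.8)
print(out.astype(int))  # the adjacent weak pixel survives; the isolated one does not
```

The two thresholds let weak evidence count only when it is attached to strong evidence, which is what suppresses isolated noise responses.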

In MATLAB
• The edge function includes the Canny edge detector, which can be invoked
using the following syntax:
J = edge(I,'canny',T,sigma);
where I is the input image, T = [T_low T_high] is a 1×2 vector containing the two thresholds explained in step 4 of the algorithm, sigma is the standard deviation of the Gaussian smoothing filter, and J is the output image.

EXAMPLE 9.5

• Figure 9.12 shows the results of applying the Canny detector to an image (Figure 9.5a), and the impact of varying σ and the thresholds. Part (a) uses the syntax BW = edge(J,'canny');, which results in t = [0.0625 0.1563] and σ = 1. In part (b), we change the value of σ (to 0.5), leaving everything else unchanged.

• In part (c), we change the value of σ (to 2), leaving everything else unchanged. Changing σ causes the resulting image to contain more (part (b)) or fewer (part (c)) edge points (compared to part (a)), as expected.
• Finally, in part (d), we keep σ in its default value and change the thresholds
to t = [0.01 0.1]. Since both Tlow and Thigh were lowered, the resulting
image contains more strong and weak pixels, resulting in a larger number
of edge pixels (compared to part (a)), as expected.

9.6 EDGE LINKING AND BOUNDARY DETECTION

• The goal of edge detection algorithms should be to produce an image containing only the edges of the original image.
• However, due to the many technical challenges discussed earlier (noise,
shadows, and occlusion, among others), most edge detection algorithms
will output an image containing fragmented edges.
• In order to turn these fragmented edge segments into useful lines and
object boundaries, additional processing is needed. In this section, we
discuss a global method for edge linking and boundary detection: the
Hough transform.

9.6.1 THE HOUGH TRANSFORM


• The Hough transform is a mathematical method designed to find lines in
images. It can be used for linking the results of edge detection, turning
potentially sparse, broken, or isolated edges into useful lines that
correspond to the actual edges in the image.
• Let (𝑥, 𝑦) be the coordinates of a point in a binary image. The Hough
transform stores in an accumulator array all pairs (𝑎, 𝑏) that satisfy the
equation 𝑦 = 𝑎𝑥 + 𝑏. The (𝑎, 𝑏) array is called the transform array. For
example, the point (𝑥, 𝑦) = (1, 3) in the input image will result in the
equation 𝑏 = −𝑎 + 3, which can be plotted as a line that represents all
pairs (𝑎, 𝑏) that satisfy this equation (Figure 9.13).

• Since each point in the image will map to a line in the transform domain,
repeating the process for other points will result in many intersecting
lines, one per point (Figure 9.14).

• The meaning of two or more lines intersecting in the transform domain is that the points to which they correspond are aligned in the image. The points with the greatest number of intersections in the transform domain correspond to the longest lines in the image.


• Describing lines using the equation y = ax + b (where a represents the
gradient) poses a problem, since vertical lines have infinite gradient. This
limitation can be circumvented by using the normal representation of a
line, which consists of two parameters: ρ (the perpendicular distance from
the line to the origin) and θ (the angle between the line’s perpendicular
and the horizontal axis).

• In this new representation (Figure 9.15), vertical lines will have θ = 0. It is common to allow ρ to have negative values, thereby restricting θ to the range −90° < θ ≤ 90°.


The relationship between ρ, θ, and the original coordinates (x, y) is
ρ = x cos θ + y sin θ
Under the new set of coordinates, the Hough transform can be
implemented as follows:
1. Create a 2D array corresponding to a discrete set of values for ρ and θ.
Each element in this array is often referred to as an accumulator cell.
2. For each pixel (x, y) in the image and for each chosen value of θ, compute ρ = x cos θ + y sin θ and increment the accumulator cell at the corresponding position (ρ, θ).
3. The highest values in the (ρ, θ) array will correspond to the most
relevant lines in the image.
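The three steps above can be sketched in a few lines (Python/NumPy; a toy accumulator with hypothetical resolution and angles — the IPT's hough function is far more complete):

```python
import numpy as np

def hough_accumulate(points, thetas_deg, rho_res=1.0):
    """Minimal (rho, theta) accumulator for a list of (x, y) edge points."""
    thetas = np.deg2rad(np.asarray(thetas_deg, dtype=float))
    # rho = x cos(theta) + y sin(theta), for every point and every theta.
    rhos = np.array([x * np.cos(thetas) + y * np.sin(thetas)
                     for x, y in points])
    bins = np.round((rhos - rhos.min()) / rho_res).astype(int)
    acc = np.zeros((bins.max() + 1, thetas.size), dtype=int)
    for p in range(len(points)):
        for t in range(thetas.size):
            acc[bins[p, t], t] += 1        # one vote per (point, theta)
    return acc

# Three collinear points on the line y = x: they all vote for the same
# cell at theta = -45 degrees, producing the highest peak.
acc = hough_accumulate([(1, 1), (2, 2), (3, 3)], np.arange(-90, 91, 45))
print(acc.max())  # 3
```

Each collinear point contributes one vote to the same (ρ, θ) cell, so the peak height equals the number of aligned points.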

In MATLAB
• The IPT contains a function for Hough transform calculations, hough,
which takes a binary image as an input parameter, and returns the
corresponding Hough transform matrix and the arrays of ρ and θ values
over which the Hough transform was calculated. Optionally, the
resolution of the discretized 2D array for both ρ and θ can be specified
as additional parameters.

EXAMPLE 9.6

• In this example, we use the hough function to find the strongest lines in a binary image obtained as a result of an edge detection operator (BW), using the following call:
[H,T,R] = hough(BW,'RhoResolution',0.5,'ThetaResolution',0.5);
• Figure 9.16 shows the original image and the results of the Hough transform calculations. You will notice that some of the highest peaks in the transform image (approximately at θ = −60° and θ = 60°) correspond to the main diagonal lines in the scissors shape.

In MATLAB

• The IPT also includes two useful companion functions for exploring and
plotting the results of Hough transform calculations: houghpeaks (which
identifies the k most salient peaks in the Hough transform results, where
k is passed as a parameter) and houghlines, which draws the lines
associated with the highest peaks on top of the original image.

EXAMPLE 9.7

• In this example, we use hough, houghpeaks, and houghlines on a grayscale test image whose edges have been extracted using the Canny edge detector. Figure 9.17a shows the results of the Hough transform calculations with two small squares indicating the two highest peaks.
• Figure 9.17b shows the result of the Canny edge detector (displayed with
black pixels against a white background for better visualization). Figure
9.17c displays the original image with the highest (cyan) and second
highest (yellow) lines overlaid.

9.7 TUTORIAL: EDGE DETECTION

Goal
• The goal of this tutorial is to learn how to implement edge detection and
associated techniques in MATLAB.
Objectives
• Learn how to use the IPT edge function.
• Explore the most popular first-derivative edge detectors: Roberts, Sobel,
and Prewitt.
• Explore the Marr–Hildreth Laplacian of Gaussian edge detector.
• Explore the Canny edge detector.
• Learn how to implement edge detection with compass masks (Kirsch and
Robinson).

Edge Detection Using the Prewitt Operator


1. Load and display the test image.
I = imread('lenna.tif');
figure, subplot(2,2,1), imshow(I), title('Original Image');
2. Extract the edges in the image using the Prewitt operator.
[I_prw1,t1] = edge(I,'prewitt');
subplot(2,2,2), imshow(I_prw1), title('Prewitt, default thresh');
Question 1. What does the t1 variable represent?
Edge detection methods are often compared by their ability to detect edges in noisy images. Let us apply the Prewitt operator to the Lenna image with additive Gaussian noise.


3. Add noise to the test image and extract its edges.
I_noise = imnoise(I,'gaussian');
[I_prw2,t2] = edge(I_noise,'prewitt');
subplot(2,2,3), imshow(I_noise), title('Image w/ noise');
subplot(2,2,4), imshow(I_prw2), title('Prewitt on noise');
Question 2. How did the Prewitt edge detector perform in the presence of noise (compared to no noise)?
Question 3. Did MATLAB use a different threshold value for the noisy image?
Question 4. Try using different threshold values. Do these different values affect the operator's response to noise? How does the threshold value affect the edges of the object?

Edge Detection Using the Sobel Operator


4. Extract the edges from the test image using the Sobel edge detector.
[I_sob1,t1] = edge(I,'sobel');
figure, subplot(2,2,1), imshow(I), title('Original Image');
subplot(2,2,2), imshow(I_sob1), title('Sobel, default thresh');
5. Extract the edges from the test image with Gaussian noise using the Sobel edge detector.
[I_sob2,t2] = edge(I_noise,'sobel');
subplot(2,2,3), imshow(I_noise), title('Image w/ noise');
subplot(2,2,4), imshow(I_sob2), title('Sobel on noise');


Question 5. How does the Sobel operator compare with the Prewitt operator, with and without noise?
Another feature of the edge function is thinning, which reduces the thickness of the detected edges. Although this feature is on by default, it can be turned off, which results in faster edge detection.
6. Extract the edges from the test image with the Sobel operator with no thinning.
I_sob3 = edge(I,'sobel','nothinning');
figure, subplot(1,2,1), imshow(I_sob1), title('Thinning');
subplot(1,2,2), imshow(I_sob3), title('No Thinning');
As you already know, the Sobel operator actually performs two convolutions
(horizontal and vertical). These individual images can be obtained by using
additional output parameters.


7. Display the horizontal and vertical convolution results from the Sobel
operator.
[I_sob4,t,I_sobv,I_sobh] = edge(I,'sobel');
figure
subplot(2,2,1), imshow(I), title('Original Image');
subplot(2,2,2), imshow(I_sob4), title('Complete Sobel');
subplot(2,2,3), imshow(abs(I_sobv),[]), title('Sobel Vertical');
subplot(2,2,4), imshow(abs(I_sobh),[]), title('Sobel Horizontal');
Question 6. Why do we display the absolute value of the vertical and horizontal images? Hint: Inspect the minimum and maximum values of these images.


Question 7. Change the code in step 7 to display thresholded (binarized), not thinned, versions of all images.
As you may have noticed, the edge function returns the vertical and horizontal
images before any thresholding takes place.
Edge Detection with the Roberts Operator
Similar options are available with the edge function when the Roberts
operator is used.
8. Extract the edges from the original image using the Roberts operator.
I_rob1 = edge(I,'roberts');
figure
subplot(2,2,1), imshow(I), title('Original Image');
subplot(2,2,2), imshow(I_rob1), title('Roberts, default thresh');


9. Apply the Roberts operator to a noisy image.
[I_rob2,t] = edge(I_noise,'roberts');
subplot(2,2,3), imshow(I_noise), title('Image w/ noise');
subplot(2,2,4), imshow(I_rob2), title('Roberts on noise');
Question 8. Compare the Roberts operator with the Sobel and Prewitt operators. How does it hold up to noise?
Question 9. If we were to adjust the threshold, would we get better results when filtering the noisy image?
Question 10. Suggest a method to reduce the noise in the image before performing edge detection.

Edge Detection with the Laplacian of Gaussian Operator


The LoG edge detector can be implemented with the edge function as well.
Let us see its results.
10. Extract the edges from the original image using the LoG edge detector.
I_log1 = edge(I,'log');
figure
subplot(2,2,1), imshow(I), title('Original Image');
subplot(2,2,2), imshow(I_log1), title('LoG, default parameters');

11. Apply the LoG edge detector to the noisy image.


[I_log2,t] = edge(I_noise,'log');
subplot(2,2,3), imshow(I_noise), title('Image w/ noise');
subplot(2,2,4), imshow(I_log2), title('LoG on noise');
Question 11. By default, the LoG edge detector uses a value of 2 for σ (the standard deviation of the filter). What happens when we increase this value?


Edge Detection with the Canny Operator
12. Extract the edges from the original image using the Canny edge detector.
I_can1 = edge(I,'canny');
figure
subplot(2,2,1), imshow(I), title('Original Image');
subplot(2,2,2), imshow(I_can1), title('Canny, default parameters');
13. Apply the filter to the noisy image.
[I_can2,t] = edge(I_noise,'canny');
subplot(2,2,3), imshow(I_noise), title('Image w/ noise');
subplot(2,2,4), imshow(I_can2), title('Canny on noise');


As you know, the Canny detector first applies a Gaussian smoothing
function to the image, followed by edge enhancement. To achieve better
results on the noisy image, we can increase the size of the Gaussian
smoothing filter through the sigma parameter.
14. Apply the Canny detector on the noisy image where sigma =2.
[I_can3,t] = edge(I_noise,'canny',[],2);
figure
subplot(1,2,1), imshow(I_can2), title('Canny, default parameters');
subplot(1,2,2), imshow(I_can3), title('Canny, sigma = 2');
Question 12. Does increasing the value of sigma give us better results when using the Canny detector on a noisy image?


Another parameter of the Canny detector is the threshold value, which
affects the sensitivity of the detector.
15. Close any open figures and clear all workspace variables.
16. Load the mandrill image and perform the Canny edge detector with
default parameters.
I = imread('mandrill.tif');
[I_can1,thresh] = edge(I,'canny');
figure
subplot(2,2,1), imshow(I), title('Original Image');
subplot(2,2,2), imshow(I_can1), title('Canny, default parameters');
17. Inspect the contents of variable thresh.

18. Use a threshold value higher than the one in variable thresh.
[I_can2,thresh] = edge(I,'canny',0.4);
subplot(2,2,3), imshow(I_can2), title('Canny, thresh = 0.4');
19. Use a threshold value lower than the one in variable thresh.
[I_can2,thresh] = edge(I,'canny',0.08);
subplot(2,2,4), imshow(I_can2), title('Canny, thresh = 0.08');
Question 13. How does the sensitivity of the Canny edge detector change when the threshold value is increased?

Edge Detection with the Kirsch Operator


The remaining edge detection techniques discussed in this tutorial are not
included in the current implementation of the edge function, so we must
implement them as they are defined. We will begin with the Kirsch operator.
20. Close any open figures and clear all workspace variables.
21. Load the mandrill image and convert it to double format.
I = imread('mandrill.tif');
I = im2double(I);

Previously, when we were using the edge function, we did not need to
convert the image to class double because the function took care of this for
us automatically.
Since now we are implementing the remaining edge detectors on our own,
we must perform the class conversion to properly handle negative values
(preventing unwanted truncation).
Next, we will define the eight Kirsch masks. For ease of implementation, we
will store all eight masks in a 3 × 3 × 8 matrix. Figure 9.18 illustrates this
storage format.


22. Create the Kirsch masks and store them in a preallocated matrix.
k = zeros(3,3,8);
k(:,:,1) = [-3 -3 5; -3 0 5; -3 -3 5];
k(:,:,2) = [-3 5 5; -3 0 5; -3 -3 -3];
k(:,:,3) = [5 5 5; -3 0 -3; -3 -3 -3];
k(:,:,4) = [5 5 -3; 5 0 -3; -3 -3 -3];
k(:,:,5) = [5 -3 -3; 5 0 -3; 5 -3 -3];
k(:,:,6) = [-3 -3 -3; 5 0 -3; 5 5 -3];
k(:,:,7) = [-3 -3 -3; -3 0 -3; 5 5 5];
k(:,:,8) = [-3 -3 -3; -3 0 5; -3 5 5];
Next, we must convolve each mask with the image, generating eight images. We will store these images in a three-dimensional matrix just as we did for the masks. Because all the masks are stored in one matrix, we can use a for loop to perform all eight convolutions with fewer lines of code.
23. Convolve each mask with the image using a for loop.
I_k = zeros(size(I,1), size(I,2), 8);
for i = 1:8
I_k(:,:,i) = imfilter(I,k(:,:,i));
end
24. Display the resulting images.
figure
for j = 1:8
subplot(2,4,j), imshow(abs(I_k(:,:,j)),[]), ...
title(['Kirsch mask ', num2str(j)]);
end
Question 14. Why are we required to display the absolute value of each mask? Hint: Inspect the minimum and maximum values.
Question 15. How did we dynamically display the mask number when displaying all eight images?
Next we must find the maximum value of all the images for each pixel. Again,
because they are all stored in one matrix, we can do this in one line of code.
25. Find the maximum values.
I_kir = max(I_k,[],3);
figure, imshow(I_kir,[]);
Question 16. When calculating the maximum values, what does the last parameter in the max function call mean?
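For comparison, the whole Kirsch pipeline (steps 22–25) can be sketched in pure NumPy: zero-padded cross-correlation stands in for imfilter, and the eight compass masks are generated by rolling the outer ring of the base mask rather than typing all eight. The function names here are our own.

```python
import numpy as np

def correlate3(img, k):
    """3x3 cross-correlation with zero padding (imfilter-style)."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def kirsch_edges(img):
    """Per-pixel maximum over the 8 Kirsch compass responses."""
    # The eight masks are rotations of one another: rolling the outer
    # ring of the base mask by one position generates the next mask.
    ring_pos = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    ring_vals = np.array([-3, -3, 5, 5, 5, -3, -3, -3], dtype=float)
    responses = []
    for i in range(8):
        k = np.zeros((3, 3))
        for (r, c), v in zip(ring_pos, np.roll(ring_vals, -i)):
            k[r, c] = v
        responses.append(correlate3(img, k))
    return np.max(responses, axis=0)

step = np.zeros((8, 8))
step[:, 4:] = 1.0                  # ideal vertical edge
out = kirsch_edges(step)
print(out[4, 3], out[4, 1])        # → 15.0 0.0 (strong at the edge, 0 in flat areas)
```

Note that each Kirsch mask sums to zero, so flat regions produce a response of exactly 0, and the maximum over the eight rotations is always nonnegative.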
Question 17. Why are we required to scale the image when displaying it?
In the previous step, we scaled the result for display purposes only. If we wish to threshold the image (as we did with all previous edge detectors), we must first scale the image so that its values fall within the range [0, 255] and then convert it to class uint8. To do so, we can create a linear transformation function that maps all current values into that range.
26. Create a transformation function to map the image to the grayscale range
and perform the transformation.
m = 255 / (max(I_kir(:)) - min(I_kir(:)));
I_kir_adj = uint8(m * I_kir);
figure, imshow(I_kir_adj);
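The same linear mapping can be checked numerically with a NumPy sketch (the sample response values are made up; the minimum is subtracted explicitly so the smallest value maps exactly to 0 — for the Kirsch maximum this is a no-op, since flat regions already respond with 0):

```python
import numpy as np

resp = np.array([-1.5, 0.0, 2.5, 6.0])     # hypothetical filter responses
m = 255.0 / (resp.max() - resp.min())      # slope of the linear map
adj = np.uint8(np.round(m * (resp - resp.min())))
print(adj.tolist())                        # → [0, 51, 136, 255]
```

The smallest response lands at 0 and the largest at 255, which is exactly what the uint8 conversion requires to avoid truncation.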
Question 18. Why is it not necessary to scale this image (I_kir_adj) when displaying it?
Question 19. Make a copy of the mandrill image and add Gaussian noise to it. Then perform the Kirsch edge detector on it. Comment on its performance when noise is present.
Edge Detection with the Robinson Operator
The Robinson edge detector can be implemented in the same manner as
the Kirsch detector. The only difference is the masks.
27. Generate the Robinson masks.


r = zeros(3,3,8);
r = zeros(3,3,8);
r(:,:,1) = [-1 0 1; -2 0 2; -1 0 1];
r(:,:,2) = [0 1 2; -1 0 1; -2 -1 0];
r(:,:,3) = [1 2 1; 0 0 0; -1 -2 -1];
r(:,:,4) = [2 1 0; 1 0 -1; 0 -1 -2];
r(:,:,5) = [1 0 -1; 2 0 -2; 1 0 -1];
r(:,:,6) = [0 -1 -2; 1 0 -1; 2 1 0];
r(:,:,7) = [-1 -2 -1; 0 0 0; 1 2 1];
r(:,:,8) = [-2 -1 0; -1 0 1; 0 1 2];
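Note that the eight Robinson masks are compass rotations of the Sobel kernel, so instead of typing them all out they can be generated by rolling the outer ring of the base mask, as this NumPy sketch illustrates (the variable names are our own):

```python
import numpy as np

# Outer-ring positions of a 3x3 mask, listed clockwise from the top-left;
# the center element is always 0 for the Robinson masks.
ring_pos = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
ring_vals = np.array([-1, 0, 1, 2, 1, 0, -1, -2], dtype=float)

masks = np.zeros((8, 3, 3))
for i in range(8):
    for (r, c), v in zip(ring_pos, np.roll(ring_vals, -i)):
        masks[i, r, c] = v

print(masks[0])   # the Sobel-like base mask, r(:,:,1) above
```

The same rolling trick works for the Kirsch masks, since they too are eight rotations of a single base mask.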

28. Filter the image with the eight Robinson masks and display the
output.
I_r = zeros(size(I,1), size(I,2), 8);
for i = 1:8
I_r(:,:,i) = imfilter(I,r(:,:,i));
end
figure
for j = 1:8
subplot(2,4,j), imshow(abs(I_r(:,:,j)),[]), ...
title(['Robinson mask ', num2str(j)]);
end
29. Calculate the max of all eight images and display the result.

I_rob = max(I_r,[],3);
figure, imshow(I_rob,[]);
Question 20. How does the Robinson edge detector compare with the Kirsch detector?
WHAT HAVE WE LEARNED?


• Edge detection is a fundamental image processing operation that attempts to
emulate an ability present in the human visual system. Edges in grayscale
2D images are usually defined as a sharp variation of the intensity function.
In a more general sense, an edge can be defined as a boundary between
two image regions having distinct characteristics according to some
feature (e.g., gray level, color, or texture).
• Edge detection is a fundamental step in many image processing techniques:
after edges have been detected, the regions enclosed by these edges are
segmented and processed accordingly.
• There are numerous edge detection techniques in the image processing literature. They range from simple convolution masks (e.g., Sobel and Prewitt) to biologically inspired techniques (e.g., the Marr–Hildreth method), and the quality of the results they provide varies widely. The Canny edge detector is arguably the most popular contemporary edge detection method.
• MATLAB has a function edge that implements several edge detection
methods such as Prewitt, Sobel, Laplacian of Gaussian, and Canny.
• The results of an edge detection algorithm are typically postprocessed by an edge linking procedure that eliminates undesired points, bridges gaps, and produces cleaner edges for use in subsequent stages of an edge-based image segmentation solution.

• The Hough transform is a commonly used technique for finding long straight edges (i.e., line segments) within edge detection results.
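To make the voting idea concrete, here is a minimal NumPy sketch of Hough-transform line detection (not MATLAB's hough function; the point set and accumulator size are illustrative): each edge point (x, y) votes for every (θ, ρ) pair satisfying ρ = x cos θ + y sin θ, and collinear points pile their votes into the same accumulator bin.

```python
import numpy as np

pts = [(x, 2 * x + 1) for x in range(10)]      # ten points on the line y = 2x + 1
thetas = np.deg2rad(np.arange(0, 180))         # one accumulator row per degree
offset = 40                                    # shifts rho so indices are nonnegative
acc = np.zeros((len(thetas), 2 * offset + 1), dtype=int)

for x, y in pts:
    # Each point votes once per theta, in the bin of its quantized rho.
    rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
    acc[np.arange(len(thetas)), rhos + offset] += 1

# All ten collinear points vote into the same (theta, rho) bin.
print(acc.max())   # → 10
```

Peaks in the accumulator correspond to the dominant line segments, which is how the Hough transform recovers long straight edges from scattered edge pixels.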
PROBLEMS

Problem 1. Write a MATLAB script to generate a test image containing an ideal edge and plot the intensity profile and the first and second derivatives along a horizontal line of the image.

Problem 2. Repeat Problem 1 for a ramp edge.

Problem 3. Show that the LoG edge detector can be implemented using
fspecial and imfilter (instead of edge) and provide a reason why this
implementation may be preferred.

END OF LECTURE 9

LECTURE 10 – IMAGE SEGMENTATION
