PengSis Ch4
CHAPTER 4: Edge Detection and Image Segmentation
Edge Detection: identify the boundary between objects
Figure: input image, the Sobel horizontal mask, and the output image.
➢The Sobel mask for horizontal edges (in the mask image, the coefficients 0, 2, and 1 appear completely dark, bright, and medium, respectively; see the sketch after this list).
➢Laplacian usage: with 1st derivatives alone (masks such as Sobel), a weak edge may not be detected.
➢The 2nd derivative (Laplacian) amplifies the changes in the 1st derivative.
➢However, the Laplacian used alone also magnifies noise (false edges).
➢Gaussian usage: a Gaussian filter h(x, y) is used to filter out high-frequency noise before applying the Laplacian operator.
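A minimal MATLAB sketch of the masks described above, written out explicitly (the input file name, the 5×5 Gaussian with σ = 1, and the particular Laplacian mask are assumptions for illustration; the built-in edge() calls in the next example are the usual shortcut):
➢Sh = [-1 -2 -1; 0 0 0; 1 2 1]; % Sobel mask for horizontal edges (1st derivative)
➢Lap = [0 1 0; 1 -4 1; 0 1 0]; % Laplacian mask (2nd derivative)
➢G = fspecial('gaussian', 5, 1); % Gaussian h(x,y) to suppress high-frequency noise
➢I = im2double(rgb2gray(imread('image.jpg'))); % hypothetical input image
➢Eh = imfilter(I, Sh); % horizontal-edge response
➢LoG = imfilter(imfilter(I, G), Lap); % Gaussian smoothing followed by the Laplacian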
MATLAB Example: Laplacian of Gaussian
➢I = imread('image.jpg');
➢I = rgb2gray(I);
➢imshow(I);
➢J = edge(I,'sobel');
➢figure, imshow(J);
➢% Laplacian of Gaussian
➢JL = edge(I,'log');
➢figure, imshow(JL);
Figure: (a) original image, (b) edge-detected image using Sobel, and (c) edge-detected image using Laplacian of Gaussian. (Courtesy of Andre D'Avila, MD, Heart Institute (InCor), University of Sao Paulo, Medical School, Sao Paulo, Brazil.)
The result in (c) is closer to the complete set of true edges in the image.
3. Canny Edge Detection
➢Among the most popular edge detection techniques
➢Has a number of specialized versions
➢All Canny edge detection systems, however, have 4 fundamental steps:
➢Step 1: The image is smoothed using a Gaussian filter.
➢Step 2: The gradient magnitude and orientation are computed using finite difference
approximations for the partial derivatives (as discussed in the following).
➢Step 3: Non-maxima suppression is applied to the gradient magnitude to search for
pixels that can identify the existence of an edge.
➢Step 4: A double thresholding algorithm is used to detect significant edges and link
them.
➢The output of the smoothing filter, S(i, j), is related to the original image and the smoothing filter as in (4.5).
➢The gradient of S(i, j) is used to produce the horizontal and vertical partial derivatives P(i, j) and Q(i, j), as in (4.6) and (4.7), respectively.
➢The magnitude M(i, j) and orientation θ(i, j) of the gradient vector are given in (4.8) and (4.9); a sketch of these standard forms follows this list.
➢In the third step, thresholding is applied to identify the peaks of the edge pixels.
➢In the Canny method, an edge point is defined as a point whose gradient magnitude is a local maximum in the direction of the gradient.
➢This results in an image N(i, j), which is 0 except at the local maxima points.
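A hedged reconstruction of the standard forms that equations (4.5)–(4.9) usually take (assuming the usual Canny formulation: Gaussian smoothing by convolution, 2×2 finite-difference approximations of the partial derivatives, and the gradient magnitude and orientation):
\begin{align}
S(i,j) &= G(i,j;\sigma) * I(i,j) \tag{4.5}\\
P(i,j) &\approx \tfrac{1}{2}\left[S(i,j+1) - S(i,j) + S(i+1,j+1) - S(i+1,j)\right] \tag{4.6}\\
Q(i,j) &\approx \tfrac{1}{2}\left[S(i,j) - S(i+1,j) + S(i,j+1) - S(i+1,j+1)\right] \tag{4.7}\\
M(i,j) &= \sqrt{P(i,j)^2 + Q(i,j)^2} \tag{4.8}\\
\theta(i,j) &= \arctan\bigl(Q(i,j)/P(i,j)\bigr) \tag{4.9}
\end{align}
Here G(i,j;σ) is the Gaussian smoothing filter with scale σ and * denotes convolution.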
➢The image N(i, j) has false edge fragments.
➢To discard these, apply thresholding and set all of the values below the threshold value to 0.
➢After thresholding, an array including the edges of the image, I(i, j), is obtained.
➢While small threshold values can allow many false edges, excessively large thresholds can miss the true edges (see the sketch after the figure below).
➢At this step, the surviving edge pixels are connected to form complete edges.
MATLAB Example: Canny
➢I = imread('image.jpg');
➢I = rgb2gray(I);
➢imshow(I);
➢J = edge(I,'canny');
➢figure, imshow(J);
Figure: input image and edge-detection results using Canny and Laplacian.
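Following up on the threshold discussion above, edge() also accepts an explicit pair of Canny thresholds; the values below are hypothetical and only illustrate the trade-off:
➢J = edge(I,'canny',[0.05 0.20]); % [low high]: lower values keep more (possibly false) edges, higher values can miss true edges
➢figure, imshow(J);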
PART 2: IMAGE SEGMENTATION
➢Image segmentation is considered the most sensitive step in medical image processing applications.
➢It is necessary to separate different regions and objects in an image.
➢For instance, in processing of cytological samples, we need to segment an image into regions corresponding to nuclei, cytoplasm, and background pixels.
➢In the first category of techniques, segmentation is conducted based on the discontinuity of points across two regions (by thresholding and detecting gray-level discontinuities, e.g., points, lines, and edges).
➢In the second group, the algorithms exploit the similarities among the points within the same region.
1. Point Detection
➢The goal is to detect isolated points in an image.
➢This is done using the difference between their gray levels and those of the neighboring pixels.
➢Masks that magnify these differences are used to distinguish such points.
➢Formally speaking, for any point in the image, the point detection method checks the condition |R| > T, where R is the pixel value after masking (the mask response) and T is a nonnegative threshold.
➢If the condition holds, then the point is marked as an isolated point that stands
out and needs to be investigated.
➢In biomedical image processing applications, such isolated points may be caused by "salt and pepper" noise or by small abnormalities (e.g., small tumors).
➢This emphasizes the importance of point detection methods.
MATLAB Example: Point Detection
➢I = imread('synthetic.jpg');
➢I = rgb2gray(I);
➢Maxpix = max(max(I));
➢H = [-1 -1 -1; -1 8 -1; -1 -1 -1];
➢Sharpened = imfilter(I,H);
➢Maxpix = double(Maxpix);
➢Sharpened = (Sharpened > 0.9*Maxpix); % threshold T = 0.9*Maxpix
➢imshow(I);
➢figure, imshow(Sharpened);
2. Line Detection
➢Line detection uses a variety of line detection masks for magnifying and detecting horizontal lines, vertical lines, or lines at any prespecified angle (e.g., 45°).
➢The figure shows four masks that can be used for detecting horizontal lines, vertical lines, rising lines (+45°), and falling lines (−45°).
➢Then, through a thresholding, the algorithm decides whether the point belongs to a line
in a specific direction or not.
➢Often, only the direction with the maximum mask response is chosen as the line direction the point belongs to (a sketch of this rule follows the MATLAB example below).
MATLAB Example: Line Detection
➢I = imread('18.jpg'); I = rgb2gray(I);
➢Hh = [-1 -1 -1; 2 2 2; -1 -1 -1];
➢Hv = [-1 2 -1; -1 2 -1; -1 2 -1];
➢H45 = [-1 -1 2; -1 2 -1; 2 -1 -1];
➢Hlinedetected = imfilter(I,Hh);
➢Vlinedetected = imfilter(I,Hv);
➢Line45detected = imfilter(I,H45);
➢imshow(I);
➢figure, imshow(Hlinedetected);
➢figure, imshow(Vlinedetected);
➢figure, imshow(Line45detected);
Figure: (a) original image, (b) image after horizontal line detection, (c) image after vertical line detection, and (d) image after 45° line detection. (Courtesy of Andre D'Avila, MD, Heart Institute (InCor), University of Sao Paulo, Medical School, Sao Paulo, Brazil.)
3. Region and Object Segmentation
➢The focus is to distinguish and detect regions representing different objects.
➢One needs to detect regions representing objects, e.g., tumors, from the background.
➢Consists of:
1. Region Segmentation Using Luminance Thresholding
2. Region Growing
3. Quad-Trees
Region Segmentation
Using Luminance Thresholding
➢The figure shows a synthetic cell image that contains cells (much darker than the bright background) and its histogram.
➢The histogram contains:
➢a very strong bright region: the background,
➢a moderately bright region: the cytoplasm,
➢a small dark interval: the nuclei.
➢This separation indicates that the three parts of the histogram can be rather easily separated from each other by thresholding.
A practical problem is selecting the threshold values (a minimal sketch follows).
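A minimal MATLAB sketch of luminance thresholding into the three histogram regions described above (the file name 'cells.jpg' and the cut-off values T1 and T2 are hypothetical; in practice they are read off the histogram):
➢I = rgb2gray(imread('cells.jpg')); % hypothetical synthetic cell image
➢imhist(I); % inspect the histogram to choose the thresholds
➢T1 = 60; T2 = 180; % assumed dark/bright cut-offs
➢nuclei = I < T1; % small dark interval
➢cytoplasm = I >= T1 & I < T2; % moderately bright region
➢background = I >= T2; % very strong bright region
➢figure, imshow(nuclei); figure, imshow(cytoplasm); figure, imshow(background);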
Region Growing
➢The methods discussed earlier use discontinuities among gray levels of entities
in an image (e.g., point, line, edge, and region)
➢Now we focus on the 2nd category, which attempts to find segmented regions of the image using the similarities of the points inside regions.
➢In region growing methods, segmentation often starts by selecting a seed pixel
for each region in the image
➢Seed pixels are often chosen close to the center of the region or object.
➢For example, if we are to segment a tumor from the background, it is always
advisable to select the seed point for the tumor in the middle of the tumor and
the seed point for the background somewhere deep in the background region
➢Then, the region growing algorithm expands each
region based on a criterion, which is defined to
determine similarity between pixels of each region
➢This means that, starting from the seed points and using the criterion, the algorithm decides whether the neighboring pixels are similar enough to the other points in the region.
➢If so, these neighboring pixels are assigned to the same region that the seed point belongs to.
➢This process is performed on every pixel in each
region until all the points in the image are covered
➢The most important factors in region growing are selecting a suitable similarity
criterion and starting from a suitable set of seed points
➢These choices mainly depend on the type of application at hand.
➢For example, for monochrome (gray-level) images, the similarity criterion is often based on gray-level features and spatial properties such as moments or texture (a minimal region-growing sketch follows).
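A minimal region-growing sketch for a single seed, under stated assumptions: a hypothetical file name and seed location, a simple similarity criterion (gray level within a fixed tolerance of the seed value rather than of the whole region), and 4-connected neighbors:
➢I = im2double(rgb2gray(imread('image.jpg'))); % hypothetical input image
➢seed = [120 140]; tol = 0.1; % assumed seed (row, col) near the region center, and tolerance
➢region = false(size(I)); visited = false(size(I));
➢stack = seed; seedval = I(seed(1), seed(2));
➢while ~isempty(stack)
➢    p = stack(end,:); stack(end,:) = []; r = p(1); c = p(2);
➢    if r < 1 || c < 1 || r > size(I,1) || c > size(I,2) || visited(r,c), continue; end
➢    visited(r,c) = true;
➢    if abs(I(r,c) - seedval) <= tol % similarity criterion: close enough to the seed gray level
➢        region(r,c) = true;
➢        stack = [stack; r-1 c; r+1 c; r c-1; r c+1]; % grow into the 4-connected neighbors
➢    end
➢end
➢figure, imshow(region); % the grown region for this seed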
Quad-Trees
➢Does not rely on a set of seed pixels.
➢Divides the image into a set of disjoint regions and then uses splitting and merging of pixels or regions to obtain segmented regions that satisfy a prespecified criterion (gray-level variations).
➢If the gray levels of the pixels in two regions are not in the same range, they are assumed to belong to different objects, and, therefore, the region is split into a number of subregions (see the sketch after this list).
➢Quad-tree methods are often computationally time consuming and less accurate, due to the fact that they do not require seed points.
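A minimal sketch of the splitting step using the Image Processing Toolbox function qtdecomp (the file name and the 0.2 criterion on gray-level variation are assumptions; the merging step is not shown):
➢I = rgb2gray(imread('image.jpg')); % hypothetical input image
➢I = imresize(I, [256 256]); % resize so the dimensions are a power of two, convenient for qtdecomp
➢S = qtdecomp(I, 0.2); % split any block whose gray-level range exceeds the criterion
➢blocks = full(S); % sparse output: block size stored at each block's top-left corner
➢figure, imagesc(blocks); axis image; % visualize how finely each area was split
The merging of adjacent blocks whose gray levels fall in the same range would follow this splitting step.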