Lecture 10-Image Segmentation
IMAGE PROCESSING
INSTRUCTOR: DR. NGUYEN NGOC TRUONG MINH
SCHOOL OF ELECTRICAL ENGINEERING, INTERNATIONAL UNIVERSITY (VNU-HCMC)
LECTURE CONTENT
• What is image segmentation and why is it relevant?
• What are the most commonly used image segmentation techniques and how
do they work?
• Problems
Lecture 10 – Image Segmentation 4
10.1 INTRODUCTION
• Segmentation is one of the most crucial tasks in image processing and computer vision (CV).
• As we may recall from our discussion in Lecture 1 (Section 1.5), image
segmentation is the operation that marks the transition between low-level
image processing and image analysis: the input of a segmentation block in a
machine vision system (MVS) is a preprocessed image, whereas the output is a
representation of the regions within that image.
• This representation can take the form of the boundaries among those regions
(e.g. when edge-based segmentation techniques are used) or information
about which pixel belongs to which region (e.g. in clustering-based
segmentation). Once an image has been segmented, the resulting individual
regions (or objects) can be described, represented, analyzed, and classified
with appropriate techniques.
• Figure 10.1 illustrates the problem. At the top, it shows the color and
grayscale versions of a hard test image that will be used later in this
lecture.
• Segmenting this image into its four main objects (Lego bricks) and the
background is not a simple task for contemporary image segmentation
algorithms, due to uneven lighting, projected shadows, and occlusion
among objects. Attempting to do so without resorting to color
information makes the problem virtually impossible to solve for the
techniques described in this lecture.
• The bottom part of Figure 10.1 shows another test image, which is
considerably simpler and will probably lead to perfect segmentation with
even the simplest techniques.
• Although the original image has a few imperfections (particularly on one
coin that is significantly darker than the others), simple preprocessing
operations such as region filling using the imfill function will turn it into an
image suitable for global thresholding and subsequent labeling of the
individual regions.
• There is no universally accepted taxonomy for the classification of image
segmentation algorithms. In this lecture, we have organized the
different segmentation methods into the following categories:
⮚ Intensity-based methods, also known as non-contextual methods, work based
on pixel distributions (i.e., histograms). The best-known example of an
intensity-based segmentation technique is thresholding.
⮚ Region-based methods, also known as contextual methods, rely on adjacency
and connectivity criteria between a pixel and its neighbors. The best-known
examples of region-based segmentation techniques are region growing and
split and merge.
⮚ Other methods, where we have grouped relevant segmentation techniques
that do not belong to any of the two categories above. These include
segmentation based on texture, edges, and motion, among others.
In MATLAB
• The IPT has a function to convert a grayscale image into a binary (black-
and-white) image, im2bw, which takes an image and a threshold value as
input parameters.
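As an illustration only, here is a minimal NumPy sketch of the same operation. The lecture's examples use MATLAB and the IPT; this Python stand-in, its function name, and the toy image are assumptions, not the actual im2bw implementation:

```python
import numpy as np

def im2bw_sketch(img, level):
    """Binarize a grayscale image: pixels strictly above level*max_val
    become 1 (white), the rest 0 (black). `level` is normalized to [0, 1],
    mirroring the convention MATLAB's im2bw uses."""
    max_val = 255 if img.dtype == np.uint8 else 1.0
    return (img > level * max_val).astype(np.uint8)

gray = np.array([[ 10,  50, 200],
                 [120, 130,  30],
                 [250,  90, 180]], dtype=np.uint8)
bw = im2bw_sketch(gray, 0.5)   # threshold at 0.5 * 255 = 127.5
```

With a level of 0.5, only pixels brighter than 127.5 survive as foreground.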
EXAMPLE 10.1
• The basic idea of region growing methods is to start from a pixel and
grow a region around it, as long as the resulting region continues to
satisfy a homogeneity criterion. It is, in that sense, a bottom-up approach
to segmentation, which starts with individual pixels (also called seeds) and
produces segmented regions at the end of the process.
• The key factors in region growing are as follows:
⮚The Choice of Similarity Criteria: For monochrome images, regions are
analyzed based on intensity levels (either the gray levels themselves or
the measures that can easily be calculated from them, for example,
moments and texture descriptors) and connectivity properties.
Let f(x,y) be the input image
Define a set of regions R1, R2, ..., Rn, each consisting of a single seed pixel
Let Mi be the mean gray level of pixels in Ri
repeat
    for i = 1 to n do
        for each pixel p at the border of Ri do
            for all neighbors of p do
                Let (x,y) be the neighbor's coordinates
                if the neighbor is unassigned and |f(x,y) - Mi| <= Delta then
                    Add neighbor to Ri
                    Update Mi
                end if
            end for
        end for
    end for
until no more pixels can be assigned to regions
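The pseudocode above can be sketched as executable code. The following Python/NumPy version is a stand-in for the lecture's MATLAB environment; the function name, the 4-connectivity choice, and the toy image are assumptions. For simplicity it grows each seeded region to completion in turn, rather than interleaving regions round-robin as the pseudocode does:

```python
import numpy as np
from collections import deque

def region_grow(f, seeds, delta):
    """Grow one region per seed: an unassigned 4-neighbor of a region pixel
    joins region Ri when |f(x,y) - Mi| <= delta, where Mi (the region's
    mean gray level) is updated after every addition."""
    labels = np.zeros(f.shape, dtype=int)          # 0 = unassigned
    for i, (r, c) in enumerate(seeds, start=1):
        labels[r, c] = i
        mean, count = float(f[r, c]), 1
        frontier = deque([(r, c)])
        while frontier:
            r0, c0 = frontier.popleft()
            for r1, c1 in ((r0-1, c0), (r0+1, c0), (r0, c0-1), (r0, c0+1)):
                if (0 <= r1 < f.shape[0] and 0 <= c1 < f.shape[1]
                        and labels[r1, c1] == 0
                        and abs(float(f[r1, c1]) - mean) <= delta):
                    labels[r1, c1] = i
                    mean = (mean * count + float(f[r1, c1])) / (count + 1)
                    count += 1
                    frontier.append((r1, c1))
    return labels

# Toy image: a dark left half and a bright right half
f = np.array([[10, 11, 80, 82],
              [12, 10, 81, 80],
              [11, 12, 79, 83],
              [10, 11, 80, 81]])
labels = region_grow(f, seeds=[(0, 0), (0, 3)], delta=5)
```

Each seed grows until every candidate neighbor differs from the running mean by more than delta, so the two halves end up as two separate regions.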
EXAMPLE 10.2
EXAMPLE 10.3
• A quick inspection shows that the results are virtually useless, primarily
due to poor contrast between the two darker and two brighter bricks
and the influence of projected shadows.
• For comparison purposes, we ran the same algorithm with an easy input
image (Figure 15.8c) in unsupervised mode, that is, without specifying any
points to be used as seeds (or even the total number of regions that the
algorithm should return).
• The results (Figure 15.8d) are good, comparable to the ones obtained
using global thresholding earlier in this lecture.
• At the end of the process, it is guaranteed that the resulting regions satisfy
the homogeneity criterion. It is possible, however, that two or more
adjacent regions are similar enough that they should be combined into one.
• This is the goal of the merging step: to merge two or more adjacent
regions into one if they satisfy a homogeneity criterion.
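As a rough illustration, the merging step can be sketched as follows. This Python/NumPy stand-in for the MATLAB workflow uses a mean-difference homogeneity criterion; that criterion, all names, and the toy data are assumptions:

```python
import numpy as np

def merge_similar_regions(f, labels, delta):
    """Merging step sketch: repeatedly fuse two 4-adjacent regions whose
    mean gray levels differ by at most delta, until no such pair remains."""
    labels = labels.copy()
    changed = True
    while changed:
        changed = False
        # Collect label pairs that touch horizontally or vertically
        pairs = set()
        h = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
        v = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
        for a, b in np.concatenate([h, v]):
            if a != b:
                pairs.add((min(a, b), max(a, b)))
        for a, b in sorted(pairs):
            if abs(f[labels == a].mean() - f[labels == b].mean()) <= delta:
                labels[labels == b] = a   # merge region b into region a
                changed = True
                break                     # recompute adjacency after a merge
    return labels

# Three vertical strips: the two dark ones should merge, the bright one not
f = np.array([[10, 12, 90],
              [11, 10, 91],
              [12, 11, 92]])
labels = np.array([[1, 2, 3],
                   [1, 2, 3],
                   [1, 2, 3]])
merged = merge_similar_regions(f, labels, delta=5)
```

Regions 1 and 2 (means around 11) are fused, while region 3 (mean around 91) survives as a separate region.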
In MATLAB
• The IPT function watershed implements the watershed transform. It takes
an input image and (optionally) a connectivity criterion (4- or 8-
connectivity) as input parameters and produces a labeled matrix (of the
same size as the input image) as a result.
• Elements labeled 1 or higher each belong to a unique watershed region,
identified by the label number, whereas elements labeled 0 do not belong
to any watershed region.
EXAMPLE 10.4
• This example shows the creation of a 5 × 5 test matrix and the results of
computing the distance transform using bwdist with two different distance
calculations: Euclidean and city block.
>>a = [0 1 1 0 1; 1 1 1 0 0; 0 0 0 1 0; 0 0 0 0 0; 0 1 0 0 0];
a=
0 1 1 0 1
1 1 1 0 0
0 0 0 1 0
0 0 0 0 0
0 1 0 0 0
>> b = bwdist(a)
b=
1.0000 0 0 1.0000 0
0 0 0 1.0000 1.0000
1.0000 1.0000 1.0000 0 1.0000
1.4142 1.0000 1.4142 1.0000 1.4142
1.0000 0 1.0000 2.0000 2.2361
>> b = bwdist(a,'cityblock')
b=
1 0 0 1 0
0 0 0 1 1
1 1 1 0 1
2 1 2 1 2
1 0 1 2 3
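For illustration, the bwdist values above can be reproduced with a brute-force sketch. This Python/NumPy stand-in (function name is an assumption) does an O(n²) pairwise search, which is fine only for tiny arrays like this one:

```python
import numpy as np

def bwdist_sketch(a, metric="euclidean"):
    """Distance transform: for each zero pixel, the distance to the nearest
    nonzero pixel of `a` (the quantity MATLAB's bwdist computes); nonzero
    pixels get distance 0. Brute-force search, illustration only."""
    ones = np.argwhere(a != 0)
    out = np.zeros(a.shape, dtype=float)
    for (r, c) in np.argwhere(a == 0):
        dr = ones[:, 0] - r
        dc = ones[:, 1] - c
        if metric == "cityblock":
            out[r, c] = np.min(np.abs(dr) + np.abs(dc))
        else:
            out[r, c] = np.min(np.sqrt(dr ** 2 + dc ** 2))
    return out

# The same 5x5 test matrix used in Example 10.4
a = np.array([[0, 1, 1, 0, 1],
              [1, 1, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0]])
b_city = bwdist_sketch(a, "cityblock")
```

The city-block result matches the bwdist output shown above entry for entry; the Euclidean variant likewise reproduces values such as sqrt(5) ≈ 2.2361 in the bottom-right corner.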
EXAMPLE 10.5
• Finally, part (d) shows the overlap between parts (a) and (c), indicating
how the watershed transform results lead to an excellent segmentation
result in this particular case.
Goal
• The goal of this tutorial is to learn to perform image thresholding using
MATLAB and the IPT.
Objectives
• Learn how to visually select a threshold value using a heuristic approach.
• Explore the graythresh function for automatic threshold value selection.
• Learn how to implement adaptive thresholding.
T2 = graythresh(I);
I_thresh2 = im2bw(I,T2);
figure, imshow(I_thresh2), title('Threshold Image (graythresh)');
Question 5. How did the graythresh function compare with the
heuristic approach?
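For reference, graythresh implements Otsu's method, which picks the threshold that maximizes the between-class variance of the gray-level histogram. A minimal NumPy sketch follows (a stand-in for the MATLAB function; the function name and the synthetic test image are assumptions):

```python
import numpy as np

def graythresh_sketch(img):
    """Otsu's method: choose the gray level t that maximizes the
    between-class variance of the histogram split at t. The returned
    level is normalized to [0, 1], as graythresh's is."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # gray-level probabilities
    omega = np.cumsum(p)                   # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))     # first moment up to t
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold t
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0     # empty classes contribute nothing
    return np.argmax(sigma_b) / 255.0

# Bimodal synthetic image: two clearly separated gray-level populations
img = np.concatenate([np.full(500, 40, dtype=np.uint8),
                      np.full(500, 200, dtype=np.uint8)]).reshape(25, 40)
level = graythresh_sketch(img)
```

For this bimodal image the chosen level falls at the edge of the dark population, so thresholding with it cleanly separates the two classes.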
Adaptive Thresholding
Bimodal images are fairly easy to separate using basic thresholding
techniques discussed thus far. Some images, however, are not as well
behaved and require a more advanced thresholding technique such as
adaptive thresholding. Take, for example, one of the images we used back in
Tutorial 6.1: a scanned text document with a nonuniform gradient
background.
6. Close all open figures and clear all workspace variables.
7. Load the gradient_with_text image and prepare a subplot.
I = imread('gradient_with_text.tif');
figure, imshow(I), title('Original Image');
17. Replace the last line of our function with the following code. Save the
function after the alteration.
if std2(x) < 1
y = ones(size(x,1),size(x,2));
else
y = im2bw(x,graythresh(x));
end
Question 7. How does our function label a block of pixels as
background?
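A block-wise sketch of the same idea in Python/NumPy follows. The block size, the use of a per-block mean in place of the MATLAB function's graythresh call, and all names are assumptions made to keep the sketch dependency-free:

```python
import numpy as np

def adaptive_threshold_blocks(img, block=8):
    """Block-wise adaptive thresholding, mirroring the blockproc-style
    function above: nearly uniform blocks (std < 1) are labeled background
    (all white), while every other block is thresholded on its own
    statistics (here, the block mean)."""
    h, w = img.shape
    out = np.ones((h, w), dtype=np.uint8)
    for r in range(0, h, block):
        for c in range(0, w, block):
            x = img[r:r + block, c:c + block].astype(float)
            if x.std() < 1:
                out[r:r + block, c:c + block] = 1   # uniform -> background
            else:
                out[r:r + block, c:c + block] = (x > x.mean()).astype(np.uint8)
    return out

# Synthetic "text on gradient" toy: left half flat, right half bimodal
img = np.zeros((8, 16), dtype=np.uint8)
img[:, :8] = 100                  # flat background block
img[:, 8:] = 50
img[2:6, 10:14] = 220             # bright "text" patch
bw = adaptive_threshold_blocks(img, block=8)
```

The flat left block is declared background outright, while the right block is thresholded locally so the bright patch is isolated regardless of the overall brightness difference between the two halves.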
PROBLEMS
Problem 1. Explain in your own words why the image on the top right of
Figure 10.1 is significantly harder to segment than the one on the bottom
left of the same figure.
Problem 2. Modify the MATLAB code to perform iterative threshold
selection on an input gray-level image (Section 10.2.2) to include a variable
that counts the number of iterations and an array that stores the values of
T for each iteration.
Problem 3. Write a MATLAB script to demonstrate that thresholding
techniques can be used to subtract the background of an image.
END OF LECTURE 10