
Image Processing – CSIT 5th Semester

Unit 6: Image Segmentation

Introduction:
Image segmentation is a method in which a digital image is broken down into subgroups called image segments, reducing the complexity of the image so that further processing or analysis becomes simpler.

Segmentation refers to the process of partitioning an image into multiple regions. It is typically used to locate objects and boundaries in images.

In simple terms, segmentation assigns labels to pixels: all picture elements (pixels) belonging to the same category receive a common label.

For example, suppose an image must be provided as input for object detection. Rather than processing the whole image, the detector can be given a region selected by a segmentation algorithm. This prevents the detector from processing the whole image, thereby reducing inference time.

Similarity and discontinuity approaches

Similarity approach: This approach is based on detecting similarity between image pixels to form a segment, based on a threshold. ML algorithms such as clustering use this type of approach to segment an image.

Discontinuity approach: This approach relies on discontinuities in the pixel intensity values of the image. Line, point, and edge detection techniques use this type of approach to obtain intermediate segmentation results, which can later be processed to obtain the final segmented image.

By Lec. Pratik Chand, NCCS Page 1



Discontinuity Based Technique:


In the discontinuity-based approach, the partitioning or sub-division of an image is based on abrupt changes in the intensity levels of the image. Here, we are mainly interested in identifying isolated points, lines, and edges in an image. To identify these, we use a 3×3 mask operation.

The discontinuity-based segmentation can be classified into three approaches:

 Point detection
 Line detection
 Edge detection

Point Detection:

A point is the most basic type of discontinuity in a digital image. The most common approach to finding discontinuities is to run an (n × n) mask over each point in the image. The mask is shown in the figure below.

A point is detected at a location (x, y) in the image where the convolution operation with this mask gives an absolute response |Z| greater than a threshold value T; that point is labeled 1, and all other points are labeled 0.

Here Z is the response of the mask at a point in the image, and T is a non-negative threshold value.


For Example:

Here a point is detected in the middle of the image.
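The detection rule above can be sketched in Python. Since the mask figure is not reproduced here, the sketch assumes the commonly used 3×3 Laplacian-type point-detection mask:

```python
import numpy as np

# 3x3 point-detection mask (Laplacian type): strong response at isolated points
MASK = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]])

def detect_points(image, T):
    """Label a pixel 1 where |Z| > T (Z = mask response), else 0.
    Border pixels are left 0 for simplicity."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            z = np.sum(MASK * image[y-1:y+2, x-1:x+2])
            out[y, x] = 1 if abs(z) > T else 0
    return out

img = np.full((5, 5), 10, dtype=int)
img[2, 2] = 100          # an isolated bright point on a flat background
print(detect_points(img, T=400))
# only the isolated point at (2, 2) is labeled 1
```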

Line Detection

Line detection is the next level of complexity in the direction of image discontinuity. For any point in the image, a response can be calculated for each direction, indicating the line direction with which the point is most associated. The masks for the different directions are given below.

Horizontal direction Vertical direction


45° direction -45° direction

Perform the convolution operation on the given image using these masks.
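The four-direction test can be sketched as follows. The mask figures above are not reproduced here, so the sketch assumes the standard textbook line-detection masks (Gonzalez and Woods convention):

```python
import numpy as np

# Standard 3x3 line-detection masks, one per direction (assumed, since the
# figures are not reproduced; these follow the usual textbook forms)
MASKS = {
    "horizontal": np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),
    "vertical":   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),
    "+45":        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),
    "-45":        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),
}

def response(window, mask):
    """Mask response Z at one pixel: elementwise product, then sum."""
    return int(np.sum(window * mask))

# A 3x3 window containing a horizontal line of intensity 10 on a 0 background
window = np.array([[0, 0, 0], [10, 10, 10], [0, 0, 0]])
for name, mask in MASKS.items():
    print(name, response(window, mask))
# the horizontal mask gives the strongest |Z| (60), so a horizontal line is declared
```

The direction whose mask gives the largest |Z| is taken as the line direction at that point.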

Edge detection

Since isolated points and lines of unit pixel thickness are infrequent in most practical applications, edge detection is the most common approach in gray-level discontinuity segmentation. An edge is a boundary between two regions having distinct intensity levels. It is very useful for detecting discontinuities in an image, where the image changes from dark to white or vice versa.

Three different edge types are observed:

Step edge: a transition of intensity level over a single pixel in the ideal case, or over a few pixels in practice

Ramp edge: a slow and gradual transition

Roof edge: a transition to a different intensity and back


First-order derivative (gradient-based) filters such as the Roberts cross, Prewitt, and Sobel operators are preferred for detecting thicker edges.

Second-order derivative filters such as the Laplacian are preferred for detecting thinner edges.

Roberts (Cross-Gradient) Operator

This operator finds the gradient difference in cross or diagonal pixel positions.

The filter masks of the Roberts operator are:

By convolving the input image with each of these filter masks, we can calculate g(x) and g(y).

Prewitt Operator

This method takes the central difference of the neighboring pixels.

The filter masks of the Prewitt operator are:

By convolving the input image with each of these filter masks, we can calculate g(x) and g(y).

Sobel Operator
This method also takes the central difference of the neighboring pixels, and it provides both a differentiating and a smoothing effect.

The filter masks of the Sobel operator are:


By convolving the input image with each of these filter masks, we can calculate g(x) and g(y).
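Since the filter-mask figures are not reproduced here, the following sketch assumes the standard textbook kernel pairs for the three operators, and shows how g(x) and g(y) combine into a gradient magnitude:

```python
import numpy as np

# Standard gradient kernel pairs (gx, gy) -- assumed forms, one common sign
# convention, since the mask figures above are not reproduced here
ROBERTS = (np.array([[1, 0], [0, -1]]),
           np.array([[0, 1], [-1, 0]]))          # 2x2 cross-diagonal pair
PREWITT = (np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]),
           np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]]))
SOBEL   = (np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),
           np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]))

def gradient_magnitude(img, kx, ky):
    """Slide both kernels over the image and combine: |g| = sqrt(gx^2 + gy^2)."""
    h, w = img.shape
    kh, kw = kx.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = img[y:y+kh, x:x+kw]
            gx, gy = np.sum(win * kx), np.sum(win * ky)
            out[y, x] = np.hypot(gx, gy)
    return out

# A vertical step edge: dark left half, bright right half
img = np.array([[0, 0, 10, 10]] * 4, dtype=float)
print(gradient_magnitude(img, *SOBEL))  # uniform response 40 along the edge
```

Swapping in `ROBERTS` or `PREWITT` for `SOBEL` changes only the kernel pair; the combination step is the same.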

Edge Linking and Boundary Detection

Edge Linking:
Ideally, edge detection should yield sets of pixels lying only on edges. In practice,
these pixels rarely characterize edges completely because of non-uniform
illumination, noise and breaks in the edges. Therefore, edge detection typically is
followed by linking algorithms designed to assemble edge pixels into meaningful
edges and/or region boundaries.

Edge linking may be:

Local: requiring knowledge of edge points in a small neighborhood.

Regional: requiring knowledge of edge points on the boundary of a region.

Global: the Hough transform, involving the entire edge image.

Edge Linking by Local Processing

All points that are similar according to predefined criteria are linked, forming an
edge of pixels that share common properties.

Edge Linking by Regional Processing

Often, the location of the regions of interest is known, and pixel membership to regions is available. The region boundary can then be approximated by fitting a polygon.

Polygons are attractive because:

 They capture the essential shape


 They keep the representation simple

Requirements

 Two starting points must be specified (e.g. rightmost and leftmost points).
 The points must be ordered (e.g. clockwise).
 Variations of the algorithm handle both open and closed curves.

If this information is not provided, it may be determined by distance criteria:

 Uniform separation between points indicates a closed curve
 A relatively large distance between consecutive points, relative to the distances between other points, indicates an open curve

We present here the basic mechanism for polygon fitting.

Given the end points A and B, compute the straight line AB. Compute the perpendicular distance from all other points to this line. If the maximum such distance exceeds a threshold, the corresponding point C is declared a vertex. Then compute lines AC and CB and continue recursively on each segment.
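The recursive procedure above can be sketched as follows (a minimal sketch; the point list and threshold are illustrative):

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x1, y1), (x2, y2), (x, y) = a, b, p
    num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    return num / math.hypot(y2 - y1, x2 - x1)

def fit_polygon(points, T):
    """Recursive polygon fitting: split at the farthest point C whenever its
    distance from line AB exceeds the threshold T."""
    a, b = points[0], points[-1]
    dists = [perp_dist(p, a, b) for p in points[1:-1]]
    if not dists or max(dists) <= T:
        return [a, b]                      # segment AB is a good enough fit
    i = dists.index(max(dists)) + 1        # index of the new vertex C
    left = fit_polygon(points[:i + 1], T)  # fit A..C
    right = fit_polygon(points[i:], T)     # fit C..B
    return left[:-1] + right               # C appears only once

# A roof-shaped curve: the corner (2, 2) should become a vertex
curve = [(0, 0), (1, 1), (2, 2), (3, 1), (4, 0)]
print(fit_polygon(curve, T=0.5))  # [(0, 0), (2, 2), (4, 0)]
```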


Edge Linking by Global Processing

Hough Transform
The Hough Transform is an algorithm patented by Paul V. C. Hough and was
originally invented to recognize complex lines in photographs (Hough, 1962).
Since its inception, the algorithm has been modified and enhanced to be able to
recognize other shapes such as circles and quadrilaterals of specific types.

It is mainly used to connect disjoint edge points.

Equation of line is

y = mx + c

Where, m =slope and c = intercept of the line


A single point can be part of infinitely many lines. Therefore, we transform each point in the x-y plane into a line in the m-c plane.

If A and B are two points connected by a line in the spatial domain, their corresponding lines will intersect at a point in the Hough (m-c) space.


All three lines intersect at a single point (1, 1), which means the points (1, 2), (2, 3), and (3, 4) are collinear and lie on the same line (with slope m = 1 and intercept c = 1).
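The collinearity check above can be verified with a small sketch: each image point (x, y) maps to the parameter-space line c = y - m*x, and the pairwise intersections are computed directly:

```python
from itertools import combinations

# Each image point (x, y) maps to the line c = y - m*x in the (m, c) plane
points = [(1, 2), (2, 3), (3, 4)]

def hough_intersection(p1, p2):
    """Where the two parameter-space lines c = y - m*x meet."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y1 - y2) / (x1 - x2)   # from y1 - m*x1 = y2 - m*x2
    c = y1 - m * x1
    return m, c

for pair in combinations(points, 2):
    print(pair, "->", hough_intersection(*pair))
# every pair meets at (m, c) = (1.0, 1.0): the points lie on y = x + 1
```

A practical Hough implementation quantizes the parameter plane into an accumulator array and votes, rather than solving intersections analytically, but the principle is the same.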

Thresholding
The simplest method for segmentation in image processing is the threshold method. It divides the pixels in an image by comparing each pixel's intensity with a specified value (the threshold). It is useful when the required object has a higher intensity than the background (the unnecessary parts).

You can consider the threshold value (T) to be a constant but it would only work if
the image has very little noise (unnecessary information and data). You can keep
the threshold value constant or dynamic according to your requirements.

The thresholding method converts a grey-scale image into a binary image by


dividing it into two segments (required and not required sections).

Steps to apply threshold

1. Select a threshold value T.
2. Any point (x, y) in the image at which f(x, y) > T is called an object point; any point at which f(x, y) <= T is called a background point.
3. The segmented image g(x, y) is given by

   g(x, y) = 1 if f(x, y) > T
   g(x, y) = 0 if f(x, y) <= T


Here, in the histogram, the region greater than T corresponds to the object and the region less than T to the background.
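The rule can be sketched on a small 3×3 image (the same values as the global-thresholding example later in this unit):

```python
import numpy as np

def threshold(f, T):
    """g(x, y) = 1 where f(x, y) > T (object), 0 otherwise (background)."""
    return (f > T).astype(np.uint8)

f = np.array([[5, 3, 9],
              [2, 1, 7],
              [8, 4, 2]])
print(threshold(f, T=5))
# [[0 0 1]
#  [0 0 1]
#  [1 0 0]]
```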

The histogram below shows a thresholding problem involving three dominant modes, for example two objects on a dark background.

Here, multiple thresholding classifies a point (x, y) as belonging to the background if f(x, y) <= T1, to one object class if T1 < f(x, y) <= T2, and to the other object class if f(x, y) > T2.


According to how the threshold value is chosen, we can classify thresholding segmentation into the following categories:

Global Thresholding:
When a single constant threshold value is applied for both object and background, it is called global thresholding. In this method, you replace the image's pixels with either white or black.

If the intensity of a pixel at a particular position is less than the threshold value, you replace it with black; if it is higher than the threshold, you replace it with white.

Procedure for Global thresholding

1. Select an initial threshold value T (e.g. the average intensity of the image).
2. Segment the image using T. This produces two groups: G1, containing values > T, and G2, containing values <= T.
3. Compute the average gray-level values μ1 and μ2 for the pixels in regions G1 and G2.
4. Compute the new threshold value
   T = ½(μ1 + μ2)
5. Repeat steps 2 to 4 until T is the same in successive iterations.

Example: Find the Global Threshold value of given image

5 3 9
2 1 7
8 4 2

Solution:

Calculate the threshold value T0 by taking average of all the pixel value

T0 = (5+3+9+2+1+7+8+4+2)/9


T0 = 4.56 ≈ 5

Segment the image using T = 5, we get

G1 = {9,7,8}

Calculate the mean value of G1

μ1 = (9+7+8)/3

μ1 = 8

G2 = {5,3,2,1,4,2}

Calculate the mean value of G2

μ2 = (5+3+2+1+4+2)/6

= 2.83

μ2 = 3

Now, the new value of T say T1

T1= ½( μ1+ μ2)

T1= ½ (8+3)

T1 = 5.5 ≈ 6

Here the threshold values in successive iterations are different.

So, segment the image again using the new threshold value T1 = 6:

G1 = {9,7,8}

μ1 = 8

G2 = {5,3,2,1,4,2}

μ2= 3


Now, the new value of T, say T2:

T2 = ½ (8 + 3)

T2 = 5.5 ≈ 6

The threshold value is now the same in successive iterations, so the final value of T is 6, which is the global threshold value.
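The iteration can be sketched in code. To reproduce the hand computation above, the sketch rounds T and the group means at each step exactly as the worked example does:

```python
import numpy as np

def global_threshold(img):
    """Iterative global thresholding (steps 1-5 above). T and the group
    means are rounded at each step, matching the hand computation."""
    T = round(img.mean())                    # initial T: rounded average
    while True:
        mu1 = round(img[img > T].mean())     # mean of G1 (values > T)
        mu2 = round(img[img <= T].mean())    # mean of G2 (values <= T)
        T_new = round((mu1 + mu2) / 2)
        if T_new == T:                       # converged: same T twice
            return T
        T = T_new

img = np.array([5, 3, 9, 2, 1, 7, 8, 4, 2])
print(global_threshold(img))  # 6
```

Without the rounding, the iteration converges to a nearby fractional threshold; the rounding is kept only to match the example.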

Local or Regional Thresholding

When the value of T changes over an image, it is called variable thresholding. The terms local or regional thresholding are sometimes used to denote variable thresholding in which the value of T at any point (x, y) in the image depends on properties of a neighborhood of (x, y).

For example: The average intensity of the pixels in the neighborhood.

Adaptive Thresholding
If T depends on the spatial coordinates (x, y) themselves, then the variable thresholding is called dynamic or adaptive thresholding.

Having one constant threshold value might not be a suitable approach to take with
every image. Different images have different backgrounds and conditions which
affect their properties.

Thus, instead of using one constant threshold value for performing segmentation
on the entire image, you can keep the threshold value variable. In this technique,
you’ll keep different threshold values for different sections of an image.

This method works well with images that have varying lighting conditions. You’ll
need to use an algorithm that segments the image into smaller sections and
calculates the threshold value for each of them.

Procedure for Adaptive thresholding

1. Divide the original image into different regions.
2. Apply the global thresholding method to each region separately.
3. Merge the thresholded regions back into a single image.
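A minimal sketch of these three steps, using each tile's mean intensity as its local threshold (one simple choice among many):

```python
import numpy as np

def adaptive_threshold(img, block=2):
    """Steps 1-3 above: split the image into block x block tiles and
    threshold each tile by its own mean intensity."""
    out = np.zeros_like(img, dtype=np.uint8)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            tile = img[r:r+block, c:c+block]
            out[r:r+block, c:c+block] = (tile > tile.mean()).astype(np.uint8)
    return out

# Left half dimly lit, right half brightly lit: one global T would miss
# the faint structure on the left, but per-tile thresholds recover it
img = np.array([[1, 3, 60, 90],
                [1, 3, 60, 90],
                [1, 3, 60, 90],
                [1, 3, 60, 90]])
print(adaptive_threshold(img, block=2))
# every row: [0 1 0 1] -- each tile finds its own threshold
```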


Region-Based Segmentation
Region-based segmentation algorithms divide the image into sections with similar features. These regions are simply groups of pixels, and the algorithm finds them by first locating a seed point, which could be a small section or a large portion of the input image.

After finding the seed points, a region-based segmentation algorithm would either
add more pixels to them or shrink them, so it can merge them with other seed
points.

Based on these two methods, we can classify region-based segmentation into the
following categories:

Region Growing
In this method, you start with a small set of pixels and then start iteratively
merging more pixels according to particular similarity conditions. A region
growing algorithm would pick an arbitrary seed pixel in the image, compare it with
the neighbor pixels and start increasing the region by finding matches to the seed
point.

When a particular region can’t grow further, the algorithm will pick another seed
pixel which might not belong to any existing region. One region can have too
many attributes causing it to take over most of the image. To avoid such an error,
region growing algorithms grow multiple regions at the same time.

You should use region growing algorithms for images that have a lot of noise as
the noise would make it difficult to find edges or use thresholding algorithms.

Algorithm for Region Growing

1. Choose a seed point


2. Check the condition

If |seed point – pixel value| <=T

Add pixel to seed point region


Else

Leave as it is

3. Repeat step 2 for all pixels

Example: Apply region growing on following image with seed point at (2, 2) and
threshold value as 2.

0 1 2 0
2 5 6 1
1 4 7 8
0 9 5 1
Solution:

Here the seed point value is 7, at location (2, 2).

Threshold value (T) =2

Therefore the condition is

|seed point – pixel value| <=2

The pixel values that satisfy the condition are {5, 6, 7, 8, 9}.

Now, let region A (denoted by 1) contain the pixels that satisfy the condition, and region B (denoted by 0) contain the other values.


0 0 0 0
0 1 1 0
0 0 1 1
0 1 1 0
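The algorithm can be sketched with an explicit connectivity check. The worked example labels every qualifying pixel; growing only pixels 4-connected to the seed happens to give the same result here:

```python
from collections import deque

def region_grow(img, seed, T):
    """Grow a region from `seed` (row, col): repeatedly add 4-connected
    neighbors whose intensity differs from the seed value by at most T."""
    h, w = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region = [[0] * w for _ in range(h)]
    region[seed[0]][seed[1]] = 1
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not region[nr][nc] \
                    and abs(img[nr][nc] - seed_val) <= T:
                region[nr][nc] = 1
                q.append((nr, nc))
    return region

img = [[0, 1, 2, 0],
       [2, 5, 6, 1],
       [1, 4, 7, 8],
       [0, 9, 5, 1]]
for row in region_grow(img, seed=(2, 2), T=2):
    print(row)
# matches the example: [0,0,0,0] / [0,1,1,0] / [0,0,1,1] / [0,1,1,0]
```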

Region Splitting and Merging


As the name suggests, a region splitting and merging method performs two actions together: splitting and merging portions of the image.

This method is also called divide and conquer.

It first splits the image into regions that have similar attributes, then merges adjacent portions that are similar to one another. In region splitting, the algorithm considers the entire image, while in region growing it focuses on a particular point.

The method divides the image into different portions and then matches them according to predetermined conditions. Algorithms that perform this task are also called split-merge algorithms.

Quad tree representation of splitting


Algorithm for Region Splitting

1. Select the max and min intensity values from the given region
2. Check the condition
if ((max – min) > T)
split the region into 4 equal regions
else
leave it as it is
3. Apply step 2 to all regions

Algorithm for Region Merging

1. Select the max and min intensity values from neighboring regions
2. Check the condition
if ((max1 – min2) <= T && (max2 – min1) <= T)
merge those regions
else
leave them as they are
3. Apply step 2 to all neighboring regions
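The splitting step can be sketched as a recursive quadtree. The image from the example below is not reproduced here, so a hypothetical 4×4 image is used:

```python
def split_regions(img, r0, c0, size, T, leaves):
    """Region-splitting step above: quarter a square region while
    (max - min) > T; record the final (row, col, size) leaf regions."""
    vals = [img[r][c] for r in range(r0, r0 + size)
                      for c in range(c0, c0 + size)]
    if size > 1 and max(vals) - min(vals) > T:
        half = size // 2
        for dr in (0, half):           # recurse into the four quadrants
            for dc in (0, half):
                split_regions(img, r0 + dr, c0 + dc, half, T, leaves)
    else:
        leaves.append((r0, c0, size))  # uniform enough: keep as one region

# A hypothetical 4x4 image: the whole image violates (max - min) <= 3,
# but each 2x2 quadrant satisfies it
img = [[5, 6, 7, 7],
       [6, 7, 7, 7],
       [2, 2, 7, 7],
       [1, 1, 7, 7]]
leaves = []
split_regions(img, 0, 0, 4, T=3, leaves=leaves)
print(leaves)  # four 2x2 quadrants: no quadrant needs further splitting
```

The merging step would then compare neighboring leaves with the condition in the merging algorithm above.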

Example: Apply split and merge in given image. The threshold value is 3.


Solution:

Here, given T = 3

Max pixel value = 7

Min pixel value = 0

Now, check the condition (max – min) > T:

(7 – 0) > 3

So, split the image into 4 regions.

Now, apply the same rule to all the regions A, B, C and D.


Now, merging

Check the condition ((max1 – min2) <= T && (max2 – min1) <= T )

Here, for regions A and B1:

((7 – 5) <= 3 && (7 – 4) <= 3)

Both conditions are satisfied, so these two regions are merged.

The same rule is applied to all the regions.

In the resulting image we can see two regions: one shaded and one un-shaded.

End of Unit-6
