GNR602 Lectures 9-11: Texture Segmentation Methods
Slot 13
February 2023
IIT Bombay
Concept of Texture
• Texture is an important visual cue
• What does texture mean? Formal approach or
precise definition of texture does not exist!
• Texture discrimination techniques are for the
most part ad hoc.
Concept of Texture
• Perception of texture is dependent on the spatial
organization of gray level or color variations.
• Manmade features have a repetitive pattern, where
a basic pattern or primitive is replicated over a region
• Large variation within the pattern leads to a textured
appearance, while flat regions lead to a smooth
appearance
Sample Textures
[Figure slides: sample texture images (source: www.pepfx.net) and textures from remotely sensed images]
What is Texture?
• A feature used to partition images into regions of
interest and to classify those regions
• Spatial arrangement of colours or intensities in an
image
• Characterized by the spatial distribution of intensity
levels in a neighbourhood
• A repeating pattern of local variations in image
intensity
• An area attribute, not defined at a point
Notion of Texture
• Suppose an image has a 50% black and 50% white
distribution of pixels.
• Three different images can have the same intensity
distribution, yet very different textures.
Composition of Texture
• Made up of texture primitives, called texels.
• Can be described as fine, coarse, grained, smooth,
etc.
• Tone is based on pixel intensity properties in the
texel, while structure represents the spatial
relationship between texels.
• If texels are small and tonal differences between
texels are large, a fine texture results.
• If texels are large and consist of several pixels, a
coarse texture results.
Notion of Texture
• Statistical methods are particularly useful when
the texture primitives are small, resulting in
microtextures.
• When the size of the texture primitive is large,
first determine the shape and properties of the
basic primitive and the rules which govern the
placement of these primitives, forming
macrotextures.
GNR602 Lectures 9-11, B. Krishna Mohan, IIT Bombay
Description/Definition of Texture
• Non-local property, characteristic of region
more important than its size
• Repeating patterns of local variations in image
intensity which are too fine to be
distinguished as separated objects at the
observed resolution
Definition of Texture
• There are three approaches to describing what
texture is:
• Structural : texture is a set of primitive texels in some
regular or repeated relationship.
• Statistical : texture is a quantitative measure of the
arrangement of intensities in a region.
This set of measurements is called a feature vector.
• Modeling : texture modeling techniques involve
constructing models to specify textures.
Texture Analysis
• Two primary issues in texture analysis:
- texture classification
- texture segmentation
Texture Classification
• Texture classification is concerned with
identifying a given textured region from a
given set of texture classes.
• Each of these regions has unique texture
characteristics.
• Statistical methods are extensively used.
Texture Segmentation
• Texture segmentation is concerned with
automatically determining the boundaries
between various texture regions in an image.
• Texture segmentation also results in regions
homogenous with respect to texture property
Variance
• In textured areas both high and low intensity pixels can be found
• The variance of pixel intensities over an area will be higher for textured areas than for non-textured areas
• A variance image (e.g., as in ERDAS software) can be used to represent texture
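The variance image described above can be sketched as a sliding-window computation. This is a minimal illustration, not the ERDAS implementation; the function name and window size are illustrative, and it uses the identity Var(x) = E[x²] − (E[x])².

```python
# Sketch of a variance texture image (illustrative names and window size).
import numpy as np
from scipy.ndimage import uniform_filter

def variance_image(img, win=5):
    """Local variance of `img` over a win x win sliding window."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size=win)           # local mean
    mean_sq = uniform_filter(img * img, size=win)  # local mean of squares
    return mean_sq - mean * mean                   # Var = E[x^2] - (E[x])^2

# A flat region has near-zero variance; a textured region does not.
flat = np.full((16, 16), 100.0)
textured = np.indices((16, 16)).sum(axis=0) % 2 * 100.0  # checkerboard
print(variance_image(flat).max())      # ~0 for the flat patch
```

High values in the resulting image mark textured areas, low values mark smooth ones, which is exactly how the variance image represents texture.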
Directionality of Texture
• Texture is a strongly directional feature
• e.g., horizontal stripes and vertical stripes are clearly perceived separately
• Some texture features can provide directional information
• Features like edges per unit area or variance cannot handle texture orientation
Construction of GLCM
• A co-occurrence matrix is a two-dimensional array, P, in which
both the rows and the columns represent a set of possible
image values.
Definition of GLCM
• For a displacement vector d = (dr, dc), the GLCM is defined by:
Pd[i, j] = number of pixel pairs ((r, c), (r + dr, c + dc)) with I(r, c) = i and I(r + dr, c + dc) = j
Example
• This count is entered in the ith row and jth column of the matrix Pd[i, j]
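The pair-counting definition above can be sketched directly in code. The function name, the example image, and the choice of four gray levels are illustrative; libraries such as scikit-image also offer a ready-made `graycomatrix`, if I recall its API correctly.

```python
# Minimal sketch of GLCM construction for displacement d = (dr, dc).
import numpy as np

def glcm(img, d=(0, 1), levels=4):
    """P[i, j] = count of pixel pairs with I(r, c) = i and I(r+dr, c+dc) = j."""
    dr, dc = d
    P = np.zeros((levels, levels), dtype=np.int64)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1  # enter count at row i, column j
    return P

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, d=(0, 1))   # horizontal right-neighbour pairs
print(P)
```

Because the displacement d has a direction, the GLCM is one of the texture features that captures orientation, unlike plain variance.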
Normalized GLCM
• The elements of Pd[i, j] can be normalized by dividing
each entry by the total number of pixel pairs.
Angular Second Moment (ASM)
• ASM = Σ_{i=1..K} Σ_{j=1..K} [Pd(i, j) / R]²
• R is a normalizing factor
• ASM is large when only very few gray level pairs are present in
the textured image
• K is the number of gray levels
Contrast (CON)
• CON = Σ_{i=1..K} Σ_{j=1..K} (i − j)² Pd(i, j) / R
Entropy (ENT)
• ENT = Σ_{i=1..K} Σ_{j=1..K} P[i, j] ln(1 / P[i, j])
• ENT emphasises many different
co-occurrences
• P(i,j) is the normalized co-occurrence matrix,
each entry indicating probability of occurrence
of that gray level combination
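The three features can be computed from any co-occurrence matrix in a few lines. This is a sketch, assuming an unnormalized integer-count matrix as input; the function name is illustrative.

```python
# ASM, contrast and entropy from an (unnormalized) co-occurrence matrix P.
import numpy as np

def glcm_features(P):
    R = P.sum()                        # normalizing factor: total number of pairs
    p = P / R                          # normalized GLCM (probabilities)
    K = P.shape[0]
    i, j = np.indices((K, K))
    asm = np.sum(p ** 2)               # large when few gray-level pairs dominate
    con = np.sum((i - j) ** 2 * p)     # weighs each pair by (i - j)^2
    nz = p[p > 0]                      # skip zero entries to avoid log(0)
    ent = np.sum(nz * np.log(1.0 / nz))  # high for many different co-occurrences
    return asm, con, ent

# Extreme case: a single gray-level pair -> maximal ASM, zero contrast/entropy
P_one_pair = np.zeros((4, 4)); P_one_pair[1, 1] = 12
asm, con, ent = glcm_features(P_one_pair)
print(asm, con, ent)   # 1.0 0.0 0.0
```

The extreme case matches the slide's remark: when only very few gray level pairs are present, ASM is large, while entropy (which emphasises many different co-occurrences) is small.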
[Figure slides: input image, IDM (Inverse Difference Moment) feature image, and classified image of Mumbai, with the water class highlighted]
Redundant Computations
• When texture window moves by 1 pixel right,
– The first column moves out of the computation
– The last column enters the computation
– Many pixel pairs remain unchanged
Efficiency Considerations
• Incremental Adjustments
– Deduct the pairs formed with elements of first column
– Add pairs formed with elements of last column
– New matrix is ready
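The incremental adjustment above can be sketched for a horizontal displacement d = (0, 1): only pairs touching the departing first column or the arriving last column change. The function names, window size and brute-force checker are illustrative, not from the lecture.

```python
# Sketch of the O(w) incremental GLCM update for a window shift of 1 pixel right.
import numpy as np

def glcm_window(img, r0, c0, w, levels):
    """Brute-force GLCM for d = (0, 1) over a w x w window (for checking)."""
    P = np.zeros((levels, levels), dtype=np.int64)
    for r in range(r0, r0 + w):
        for c in range(c0, c0 + w - 1):
            P[img[r, c], img[r, c + 1]] += 1
    return P

def shift_right(P, img, r0, c0, w):
    """Update P in place when the window moves from column c0 to c0 + 1."""
    for r in range(r0, r0 + w):
        P[img[r, c0], img[r, c0 + 1]] -= 1          # deduct pairs of first column
        P[img[r, c0 + w - 1], img[r, c0 + w]] += 1  # add pairs of the new column
    return P

rng = np.random.default_rng(0)
img = rng.integers(0, 4, size=(8, 8))
P = glcm_window(img, 0, 0, 5, 4)   # window at columns 0..4
shift_right(P, img, 0, 0, 5)       # now matches the window at columns 1..5
```

All other pairs remain unchanged, so the update costs O(w) instead of the O(w²) of full recomputation.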
Efficiency Considerations
• Direct computation of features
– Examine each feature
– Make modifications to the feature directly instead of to the
GLCM
• ASM = Σ_{i=1..K} Σ_{j=1..K} [Pd(i, j) / R]²
Fast Computation
• ASM and CON can easily be adjusted for changes in
GLCM due to window shift
• ENT and IDM may be difficult
Texture segmentation using filter banks proceeds in three stages:
Stage 1: Filter Banks → Stage 2: Nonlinearity → Stage 3: Energy Computation
Gabor Filters
• The frequency spectrum of the image is divided into
radial and angular ranges such that there are N_r × N_θ
windows in the frequency domain
• Retaining each such window, the inverse transform is
computed to generate a bandpass filtered input
image
• This process is based on the early processing of
visual information in the human visual cortex
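A small Gabor filter bank can be sketched as follows. The lecture builds the bank by windowing the frequency spectrum; the spatial-domain kernels below are an equivalent-in-spirit illustration, and all parameter values (frequencies, orientations, kernel size) are assumptions for the example.

```python
# Sketch of a Gabor filter bank: N_r radial frequencies x N_theta orientations.
import numpy as np

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real part of a Gabor kernel tuned to `freq` cycles/pixel at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotate coordinates
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian envelope
    return envelope * np.cos(2 * np.pi * freq * xr)     # modulated cosine

# 3 radial frequencies x 4 orientations -> 12 bandpass filters
bank = [gabor_kernel(f, t)
        for f in (0.1, 0.2, 0.4)                  # radial frequencies
        for t in np.arange(4) * np.pi / 4]        # 0, 45, 90, 135 degrees
print(len(bank))   # 12 filters
```

Convolving the input image with each kernel yields one bandpass filtered image per (frequency, orientation) window, mirroring the inverse-transform step described above.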
[Figure: frequency-domain lobes of the filter bank]
Nonlinear transformation
• The inverse Fourier transformed images are
passed through a nonlinear function, similar
to the way neural computation happens in
the human visual system
• A typical nonlinearity is the hyperbolic tangent
function:
tanh(αt) = (1 − exp(−2αt)) / (1 + exp(−2αt))
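Stages 2 and 3 can be sketched as below: the tanh nonlinearity applied pointwise to each filtered image, followed by local energy (a windowed average of the absolute response) as the texture feature. The gain α, the energy window size, and the toy input are assumptions for the example.

```python
# Sketch of stages 2 (nonlinearity) and 3 (energy computation).
import numpy as np
from scipy.ndimage import uniform_filter

def tanh_nonlinearity(x, alpha=0.25):
    """tanh(alpha*t) = (1 - exp(-2*alpha*t)) / (1 + exp(-2*alpha*t))"""
    return (1 - np.exp(-2 * alpha * x)) / (1 + np.exp(-2 * alpha * x))

def energy(response, win=9):
    """Stage 3: local average of the absolute nonlinear response."""
    return uniform_filter(np.abs(response), size=win)

# Toy stand-in for one bandpass filtered image from stage 1
filtered = np.sin(np.linspace(0, 20, 64))[None, :] * np.ones((64, 1))
feat = energy(tanh_nonlinearity(filtered))
print(feat.shape)   # one texture feature image per filter, here (64, 64)
```

Running every filter-bank output through these two stages gives one feature image per (frequency, orientation) channel; pixels are then segmented in this feature space.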
An improved version can be found in: "Integrating Region and Edge Information for
Texture Segmentation …", Image and Vision Computing, Vol. 26, No. 8, pp.
1106-1117, August 2008.