Chapter 2 - Texture Analysis

This document discusses texture analysis techniques for renal tumors. It describes how texture analysis uses mathematical parameters to evaluate pixel distributions and characterize textures in medical images. Various segmentation and feature extraction methods are discussed, including thresholding, region growing, watershed segmentation, and statistical/transform techniques. Tables provide examples of segmentation techniques used in previous renal tumor research and pros and cons of different segmentation methods.


2.7 Texture Analysis of Renal Tumor

Texture analysis is the pixel-scale quantitative evaluation of repetitive morphological patterns that cannot be assessed visually by humans. It refers to the appearance, structure, and arrangement of the parts of an object within an image. In clinical practice, texture analysis may offer a solution to the difficulty radiologists face in distinguishing benign from malignant solid renal masses. A two-dimensional digital image consists of small rectangular blocks known as pixels (picture elements), whereas a three-dimensional digital image consists of small volume blocks called voxels (volume elements). Each element is represented by a set of spatial coordinates together with a value indicating the grey-level intensity of that pixel or voxel.

Using mathematical parameters computed from the distribution of pixels, texture analysis evaluates the position and intensity of signal features, which helps to characterize the texture type and the underlying structure or pattern of the object shown in the image. Various techniques are used to evaluate the inter-relationships of pixels for texture analysis, including statistical, transform, structural, and model-based methods.

2.7.1 Image Segmentation

Segmentation is the primary step by which the required information is drawn from the data for further processing. Image segmentation can be described as the segregation of pixels of interest for effective processing; its main goal is to isolate the relevant regions for subsequent analysis. A Region of Interest (ROI) is a community of boundary-defined pixels that may take various shapes, such as circles, ellipses, polygons, or irregular shapes. In image analysis, segmentation is an essential process because it paves the way for further image processing (Arslan et al., 2014). If image segmentation is performed efficiently, the later stages of image analysis become easier. For high-standard automatic image analysis, image segmentation provides definite and useful data.

Several segmentation techniques can be used, including thresholding, region growing, region splitting, region merging, detection of boundary discontinuities, and watershed segmentation. Table 2.2 shows the pros and cons of different segmentation techniques, and Table 2.3 shows the segmentation techniques used by previous researchers to improve their results.

Table 2.2 Pros and cons of different segmentation methods (Padhi et al., 2019)

Active Contour Models
  Pros: Effective in conforming to object contours; robustness against noise.
  Cons: Dependent upon initialization parameters; high time and computational complexity.

Edge Detection
  Pros: Easy to implement; low complexity.
  Cons: Not suitable for all images; weakly bounded edges.

Thresholding
  Pros: Easy to implement; moderate efficacy.
  Cons: Ineffective when the contrast between foreground and background is low; difficulty in deciding optimal value(s) for dividing regions.

Region Growing
  Pros: Robustness against noise.
  Cons: High time complexity; wrong regions may be identified.

Watershed Transformation
  Pros: Closed edges denoting clear-cut object boundaries.
  Cons: Over-segmentation; wrong regions identified; high time complexity.

Clustering
  Pros: K-means produces clear clusters and is easy to implement; fuzzy c-means allows flexible region membership.
  Cons: Relatively moderate to high computational complexity; the number of clusters is fixed once the value of k is set.
Table 2.3 Implementation of various segmentation techniques

Ciecholewski, 2017 (Active Contour Model): Compared an edge-based active contour model using an inflation/deflation force with a damping coefficient (EM), a geometric active contour model (GAC), and an active contour without edges (ACWE). The qualitative and quantitative results obtained were best for the EM model.

Al-Shahmasneh et al., 2020 (Active Contour Model): Experimental results on complex kidney images using the proposed fractional Mittag-Leffler function for energy minimization show that the proposed model outperforms existing models in terms of sensitivity, accuracy, Jaccard index, and Dice coefficient.

Al-Najdawi et al., 2015 (Edge Detection): The accuracy obtained using this segmentation is 90.7%, with a combination of CLAHE and median filtering as image enhancement.

Qayyum et al., 2016 (Thresholding): Otsu thresholding was adopted. Over 300 images were categorized as good segmentations, while 150 images were successfully segmented without over-segmenting the breast tissue.

Wang et al., 2018 (Region Growing): A conventional region-growing algorithm was applied to the Gaussian-constraint image to define the mass region. The AUC achieved was 0.806±0.025.

Nayak et al., 2019 (Watershed): A morphological watershed algorithm was developed. The reported experimental results are competitive with existing methods in the literature, with increased computational efficiency.

Reddy et al., 2018 (Clustering): Segmentation using spatial fuzzy c-means (SFCM) was more precise when combined with K-means.

Many kinds of image segmentation algorithms now exist, each suited to images with certain characteristics, and they can be categorized into two groups. The first group comprises region-based algorithms, which mainly use the similarity of certain feature attributes within an image area to segment different regions; representative methods are threshold segmentation and region growing. The second group comprises edge-detection-based algorithms, which exploit differences in features between regions, using the concept of the gradient to detect edges and delineate region boundaries; these are mainly based on the Canny operator and Sobel operator algorithms.
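To make the threshold-segmentation idea concrete, the following is a minimal sketch of Otsu's method in numpy. It is illustrative only; the function name and the toy bimodal "image" are made up for this example, and this is not the segmentation pipeline used in this research.

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Return the grey level that maximizes between-class variance (Otsu)."""
    hist, bin_edges = np.histogram(image, bins=n_bins)
    p = hist / hist.sum()                      # grey-level probabilities
    omega = np.cumsum(p)                       # background class probability
    mu = np.cumsum(p * np.arange(n_bins))      # cumulative mean (bin index units)
    mu_total = mu[-1]
    # between-class variance for every candidate threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    k = np.nanargmax(sigma_b)
    return bin_edges[k + 1]                    # threshold in image units

# toy bimodal "image": dark background around 50, bright foreground around 200
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(50, 10, 5000), rng.normal(200, 10, 5000)])
t = otsu_threshold(img)
mask = img > t   # binary segmentation
```

Because the two intensity populations are well separated, the chosen threshold falls between the modes and the mask isolates roughly the brighter half of the pixels.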

With the development of theories in related fields in recent years, relevant theories have also been applied to image segmentation, such as statistical theory, neural network theory, fuzzy theory, wavelet transform theory, morphological theory, and many more, considering image segmentation from new perspectives.

Recent research has proposed the concept of integrated segmentation, which combines two or more segmentation algorithms. The segmentation effect of an integrated algorithm is better than that of a single algorithm, so the mixing of varied image segmentation algorithms is also a vital direction of research. Additionally, the medical image itself has features of boundary blur, uncertainty, and excessive detail.

In this research, ……
Figure 3.14: Region of Interest (ROI) delineation in an RCC patient: a) CT scan image of the kidney, b) segmented image, c) segmentation in the coronal plane, d) segmentation in the sagittal plane.

2.7.2 Feature Extraction

Feature extraction is one of the major steps in texture analysis (M. Andrzej et al., 1988). As mentioned in 2.7, the forms of texture analysis are categorized into structural, model-based, statistical, and transform methods. Table 2.1 describes each category.

Table 2.1 Texture analysis methods

Structural methods: Represent texture using well-defined primitives; for example, a square object is represented in terms of the straight lines or primitives that form its border.

Model-based methods: Represent texture in an image using sophisticated mathematical models (such as fractal or stochastic models); the model parameters are estimated and used for the image analysis.

Statistical methods: Represent texture using properties governing the distribution and relationships of grey-level values in the image.

Transform methods: Represent texture in a different space, such as the frequency or the scale space; these methods are based on the Fourier, Gabor, or wavelet transform.

In this research, statistical approaches were implemented for further analysis. Four types of image features, including shape, intensity, and texture features, were extracted from the segmented tumor ROIs.

Figure 2.12 Flowchart of the most often encountered radiomics features (Parekh et al., 2016)
Previous researchers have implemented texture and shape features in their work. Table 2.4 shows the implementation of texture and shape features in their respective fields.

Table 2.4 Implementation of texture and shape features in previous research

Rasyid et al., 2018 (Histogram and GLCM features): Feature values of the left and right breast from normal breast thermograms tend to be symmetric, whereas abnormal breast thermograms show significant differences in feature values between the two sides of the breast.

Mall et al., 2019 (GLCM features): Twelve GLCM texture features were extracted and further used for abnormality detection in X-ray images.

Seal et al., 2018 (GLCM features): Thirteen Haralick features were extracted from the GLCMs of abnormal and normal lesions, which were then employed to build two probabilistic models using LR and LDA and a predictive model using MLP to determine the probability of liver cancer. The best accuracy achieved was 96.67%, using logistic regression.

Htay et al., 2018 (Histogram and GLCM features): First-order statistical features and GLCM were adopted for feature extraction, with a k-NN classifier, for an early-stage breast cancer detection system. The best accuracy achieved was 92%.

Souza et al., 2017 (Shape features): A novel approach for breast cancer classification using section convolutions and shape-distribution descriptors achieved 92.15% accuracy, 91.40% sensitivity, and 92.90% specificity.
2.7.2.1 First Order Statistics

In this study, first-order texture analysis, a histogram-based approach, was used to characterize texture. The frequency of occurrence of each individual grey-level intensity value is counted to form the image histogram, from which first-order statistical information about the image is obtained. Several texture features can be derived from the histogram statistics; this is a simple approach using standard descriptors (e.g., mean, variance, standard deviation, skewness, and kurtosis) to portray the data (Kocak et al., 2019). Table 3.1 shows the description and the computational formula for every histogram-based texture feature included.

Table 3.1 Histogram-based texture features: explanation and computational formula (X denotes the set of Np grey values in the ROI, x̄ its mean, and p(i) the normalized histogram).

Interquartile Range: P75 − P25, where P25 and P75 are the 25th and 75th percentiles of the image array, respectively.

Skewness: Measures the asymmetry of the distribution of values about the mean; μ3 / σ³.

Uniformity: A measure of the homogeneity of the image array, where greater uniformity implies greater homogeneity or a smaller range of discrete intensity values; Σi p(i)².

Median: The median grey-level intensity within the ROI.

Energy: A measure of the magnitude of voxel values in an image; a larger value implies a greater sum of the squares of these values; Σ x².

Robust Mean Absolute Deviation: The mean distance of all intensity values from the mean, calculated on the subset of the image array with grey levels between, or equal to, the 10th and 90th percentiles.

Mean Absolute Deviation: The mean distance of all intensity values from the mean value of the image array; (1/Np) Σ |x − x̄|.

Total Energy: The value of the Energy feature scaled by the volume of the voxel in cubic mm.

Maximum: The maximum grey-level intensity within the ROI.

Root Mean Squared: The square root of the mean of all the squared intensity values; √((1/Np) Σ x²).

90Percentile: The 90th percentile of X.

Minimum: The minimum grey-level intensity within the ROI.

Entropy: Specifies the uncertainty/randomness in the image values; it measures the average amount of information required to encode the image values; −Σi p(i) log2 p(i).

Range: The range of grey values in the ROI; max(X) − min(X).

Variance: A measure of the spread of the distribution about the mean; (1/Np) Σ (x − x̄)².

10Percentile: The 10th percentile of X.

Kurtosis: A measure of the 'peakedness' of the distribution of values in the image ROI; μ4 / σ⁴.

Mean: The average grey-level intensity within the ROI; (1/Np) Σ x.
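Several of the descriptors in Table 3.1 can be computed directly from the grey values and their histogram. The sketch below is illustrative (plain numpy rather than the radiomics software itself); the function name and the choice of 32 histogram bins are assumptions made for this example.

```python
import numpy as np

def first_order_features(roi, n_bins=32):
    """Histogram-based (first-order) texture features of a grey-level ROI."""
    x = np.asarray(roi, dtype=float).ravel()
    mu, sigma = x.mean(), x.std()
    hist, _ = np.histogram(x, bins=n_bins)
    p = hist / hist.sum()                    # normalized histogram
    p_nz = p[p > 0]                          # drop empty bins for the log
    return {
        "mean": mu,
        "variance": sigma ** 2,
        "skewness": np.mean((x - mu) ** 3) / sigma ** 3,
        "kurtosis": np.mean((x - mu) ** 4) / sigma ** 4,   # Gaussian -> ~3
        "energy": np.sum(x ** 2),
        "entropy": -np.sum(p_nz * np.log2(p_nz)),
        "uniformity": np.sum(p ** 2),
        "range": x.max() - x.min(),
        "p10": np.percentile(x, 10),
        "p90": np.percentile(x, 90),
    }
```

For a symmetric distribution the skewness is near zero, and this (non-excess) kurtosis is near 3 for Gaussian data, which is a quick sanity check on the implementation.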

2.7.2.2 Second-Order Statistical Texture Analysis

First-order statistics do not provide sufficient information, since textures that appear different to the human eye can share the same histogram while differing in the arrangement of neighbouring pixels (Crook, 1937; Ramola, Shakya and Van Pham, 2020). Second-order statistics capture the relationship between a pixel of interest (POI) and its neighbouring pixels, and therefore provide better information than first-order statistics, particularly about the spatial grey-level distribution of the image.

The second-order statistical features extracted in this research are the Grey Level Co-occurrence Matrix (GLCM) features. The GLCM (Haralick et al., 1973) is a widely used and powerful statistical analysis approach for extracting information from an image (Nikoo et al., 2011; Kumar et al., 2008). The GLCM is calculated from the second-order joint conditional probability density function P(i, j : d, θ). Based on Figure 3.3, P(i, j : d, θ) indicates the probability that two neighbouring pixels with grey levels i and j occur at a distance d and a direction at a certain angle θ. This results in a matrix of dimensions equal to the number of grey levels in the image, for each distance and orientation (d, θ).

Figure 3.3 Spatial relationships of pixels defined by offsets, where d is the distance from the pixel of interest

Thus, the parameters involved in computing the GLCM are the number of grey levels Ng, the distance between pixels d, and the angle θ. The distance d indicates the separation between pairs of pixels. Like the distance, the direction of the analysis is another important parameter; common directions are 0°, 45°, 90°, and 135°. Figure 3.4 shows how the GLCM is computed.

Figure 3.4 GLCM computation process

The matrix on the left side indicates the input image, whereas matrix C on the right side is the computed GLCM. As shown in the figure, element (1,1) takes the value 1 because there is only one occurrence in the input image where two horizontally adjacent pixels hold the values 1 and 1, respectively. Table 3.2 shows the description and the computational formula for every GLCM feature included.

Table 3.2 GLCM features included in this research

The GLCM features included are: Joint Average, Sum Average, Joint Entropy, Cluster Shade, Maximum Probability, Idmn, Joint Energy, Contrast, Difference Entropy, Inverse Variance, Difference Variance, Idn, Idm, Correlation, Autocorrelation, Sum Entropy, MCC, Sum of Squares, Cluster Prominence, Imc2, Imc1, Difference Average, and Id.

Contrast, a measure of the local variations present in a volume, is computed as

f2 = Σ (n = 0 to Ng−1) n² [ Σi Σj p(i, j) ],  with |i − j| = n
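The GLCM computation described above can be sketched as follows. The input matrix here is a small made-up example (not the image shown in Figure 3.4), and the contrast function implements the |i − j|² weighting of the normalized matrix; this is an illustration, not the software used in this research.

```python
import numpy as np

def glcm(image, levels, d=1, theta=0):
    """Grey-level co-occurrence matrix for offset (d, theta in degrees)."""
    angle = np.deg2rad(theta)
    dr = int(round(-d * np.sin(angle)))   # row offset
    dc = int(round(d * np.cos(angle)))    # column offset
    C = np.zeros((levels, levels), dtype=int)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                C[image[r, c], image[r2, c2]] += 1   # count the pair (i, j)
    return C

def contrast(C):
    """Haralick contrast f2 from a co-occurrence matrix."""
    p = C / C.sum()                       # normalize to probabilities
    i, j = np.indices(p.shape)
    return np.sum((i - j) ** 2 * p)

# toy image with grey levels 1..8; subtract 1 to index levels 0..7
img = np.array([[1, 1, 5, 6, 8],
                [2, 3, 5, 7, 1],
                [4, 5, 7, 1, 2],
                [8, 5, 1, 2, 5]])
C = glcm(img - 1, levels=8, d=1, theta=0)   # horizontal neighbours
```

In this toy image the pair (1, 1) occurs horizontally exactly once, so the corresponding GLCM entry is 1, mirroring the element-(1,1) behaviour described for Figure 3.4.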

2.7.2.3 Higher-Order Statistical Texture Analysis

In image interpretation, higher-order statistics do not directly provide information about the spatial parameters; instead, these parameters are obtained through texture classification (Aggarwal and Agrawal, 2012). One example of a higher-order statistic is the grey-level run length method (GLRLM), which contains information on the run of a particular grey level, or grey-level range, in a particular direction (Galloway, 1975).

The higher-order statistical method included in this research is the grey-level run-length matrix (GLRLM), a matrix from which texture features can be extracted for texture analysis. A grey-level run is a line of consecutive pixels in a certain direction with the same intensity value; the number of such pixels defines the run length, and the number of occurrences is called the run-length value. In this work only seven GLRLM features are extracted: Short Run Emphasis (SRE), Long Run Emphasis (LRE), Gray Level Non-Uniformity (GLN), Run Length Non-Uniformity (RLN), Run Percentage (RP), Low Gray Level Run Emphasis (LGLRE), and High Gray Level Run Emphasis (HGLRE).

The GLRLM feature set comprises: Short Run Low Gray Level Emphasis, Gray Level Variance, Low Gray Level Run Emphasis, Gray Level Non-Uniformity Normalized, Run Variance, Gray Level Non-Uniformity, Long Run Emphasis, Short Run High Gray Level Emphasis, Run Length Non-Uniformity, Short Run Emphasis, Long Run High Gray Level Emphasis, Run Percentage, Long Run Low Gray Level Emphasis, Run Entropy, High Gray Level Run Emphasis, and Run Length Non-Uniformity Normalized.
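The run-length idea can be sketched for the horizontal direction. This toy implementation and the SRE/LRE formulas follow Galloway's standard definitions (SRE = (1/Nr) Σj r(j)/j², LRE = (1/Nr) Σj r(j)·j²); the function names and the tiny test image are made up for illustration.

```python
import numpy as np

def glrlm_horizontal(image, levels):
    """Grey-level run-length matrix for horizontal (0 degree) runs.
    Entry [g, r-1] counts runs of grey level g with run length r."""
    max_run = image.shape[1]
    M = np.zeros((levels, max_run), dtype=int)
    for row in image:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1              # extend the current run
            else:
                M[run_val, run_len - 1] += 1
                run_val, run_len = v, 1   # start a new run
        M[run_val, run_len - 1] += 1      # close the last run in the row
    return M

def sre_lre(M):
    """Short Run Emphasis and Long Run Emphasis from a GLRLM."""
    n_runs = M.sum()
    j = np.arange(1, M.shape[1] + 1)      # possible run lengths
    r_j = M.sum(axis=0)                   # run-length distribution
    sre = np.sum(r_j / j ** 2) / n_runs
    lre = np.sum(r_j * j ** 2) / n_runs
    return sre, lre

img = np.array([[0, 0, 1, 1],
                [2, 2, 2, 2],
                [0, 1, 0, 1]])
M = glrlm_horizontal(img, levels=3)
```

The test image has two runs of length 2, one run of length 4, and four runs of length 1, so SRE is well below 1 and LRE well above 1, as expected for a mix of short and long runs.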

2.7.2.4 Shape Features

Shape is known to be one of the important features in pattern recognition. Nevertheless, describing shape is a very challenging task, since it is difficult to identify relevant shape characteristics and to quantify the resemblance between shapes that look alike. Table 3.3 shows the description and the computational formula for every shape feature included.

The shape features included are: Voxel Volume, Maximum 3D Diameter, Mesh Volume, Major Axis Length, Sphericity, Least Axis Length, Elongation, Surface-to-Volume Ratio, Maximum 2D Diameter (Slice), Flatness, Surface Area, Minor Axis Length, Maximum 2D Diameter (Column), and Maximum 2D Diameter (Row).
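A few of the axis-based shape features can be sketched from a binary tumour mask via PCA of the voxel coordinates. The formulas follow a common radiomics convention (axis length = 4√λ; elongation = √(λminor/λmajor); flatness = √(λleast/λmajor)), which is an assumption here rather than this thesis's stated pipeline, and the box-shaped mask is a hypothetical example.

```python
import numpy as np

def axis_shape_features(mask):
    """Axis-based shape features from a 3D binary mask via PCA of voxel coords."""
    coords = np.argwhere(mask)                        # N x 3 voxel coordinates
    centered = coords - coords.mean(axis=0)
    cov = np.cov(centered, rowvar=False)              # 3x3 covariance matrix
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # major, minor, least
    major, minor, least = eigvals
    return {
        "major_axis_length": 4 * np.sqrt(major),
        "minor_axis_length": 4 * np.sqrt(minor),
        "least_axis_length": 4 * np.sqrt(least),
        "elongation": np.sqrt(minor / major),
        "flatness": np.sqrt(least / major),
        "voxel_volume": coords.shape[0],  # multiply by voxel size (mm^3) for physical volume
    }

# elongated 20 x 10 x 5 box as a stand-in for a tumour mask
mask = np.zeros((30, 30, 30), dtype=bool)
mask[5:25, 10:20, 12:17] = True
f = axis_shape_features(mask)
```

For this box the three principal axes align with the box edges, so elongation ≈ 0.5 (the 10-voxel side versus the 20-voxel side) and flatness is smaller still, matching the intuition that both are 1 for a sphere and decrease as the shape stretches.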
2.8 Feature Selection

Feature selection is important for reducing dimensionality to lower computational costs, for reducing noise to improve classification accuracy, and for selecting interpretable features that can help identify and monitor the phenomenon under study (Ding et al., 2005). Figure 2.16 shows several feature selection approaches; this grouping is based on how the feature selection search is combined with the construction of the classification engine (Jain et al., 2000).

In this research, the ANOVA method was utilized for feature selection. ANOVA is a variance test used to determine the separability of individual features between two classes and to further reduce the dimensionality.
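The ANOVA scoring of a single feature can be sketched as a one-way F-statistic: the ratio of between-class to within-class variance, with higher values indicating better class separability. The function and variable names below are illustrative; in practice a library routine (e.g. scikit-learn's f_classif) would typically be used.

```python
import numpy as np

def anova_f(feature, labels):
    """One-way ANOVA F-statistic for one feature across class labels."""
    groups = [feature[labels == c] for c in np.unique(labels)]
    n_total = feature.size
    k = len(groups)
    grand_mean = feature.mean()
    # between-group and within-group sums of squares
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# a feature that separates the two classes scores high; noise scores low
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 100)
good = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
bad = rng.normal(0, 1, 200)
```

Ranking all features by this F-value and keeping the top-scoring ones is one simple way ANOVA reduces dimensionality before classification.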

Table… shows the findings from previous research.


2.9 Classification

Classification is a method of consolidating data into appropriate groups for efficient use. An organized data categorization scheme makes it far easier to locate and retrieve important data. In medical research, data classification is essential for diagnosing and categorizing illnesses. A number of classifiers are now in wide use; each performs best on different data sets and produces distinct accuracy. Table 2.6 shows previous research on the implementation of various types of classifiers.

Table 2.6 Implementation of various types of classifiers in previous research

Sim et al., 2020 (Decision Tree, Logistic Regression, Adaptive Boosting, Random Forest): Applied decision tree (DT), logistic regression (LR), bagging, random forest (RF), and adaptive boosting (AdaBoost) methods to two feature sets. Results show that the accuracy of the RF and AdaBoost models outperforms all other classifiers.

Hamim et al., 2020 (Logistic Regression, ANN, SVM, FC.0-C5.0): Compared several classifiers: logistic regression, artificial neural network (ANN), support vector machine (SVM), and FC.0-C5.0. Results show that FC.0-C5.0 gives the best result, with over 93% accuracy, whereas the other classifiers give accuracy above 80%.

V. Jackins et al., 2021 (Random Forest and Naive Bayes): Applied RF and NB. The accuracy of the RF model for the three diseases is greater than that of the NB classifier.
