Texture Classification Based On Symbolic Data Analysis
P_{\theta}(i, j) = \Pr\left( I(p_1) = i,\; I(p_2) = j \mid p_1 - p_2 = d \right)    (1)
where Pr denotes probability and p_1 and p_2 are positions in the gray-scale image I. In this
work, we use a set of offsets sweeping through 180 degrees (i.e., 0, 45, 90, and 135
degrees), obtaining four GLCM matrices. The algorithm proceeds as follows:
TEXTURE DESCRIPTOR ALGORITHM SCHEMA
1: for s = 1 to S do {for each image Img_s}
2: Compute the GLCMs P_θ of image Img_s using the angles θ = {0°, 45°, 90°, 135°}.
3: end for
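For illustration, the descriptor step can be sketched with scikit-image as follows. The function graycomatrix (greycomatrix in older releases), the symmetric and normalized options, the unit pixel distance, and the synthetic patch are assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np
from skimage.feature import graycomatrix  # 'greycomatrix' in older scikit-image releases


def glcm_stack(img, distance=1, levels=256):
    """Compute the four normalized GLCMs P_theta for theta = 0, 45, 90, 135 degrees."""
    angles = [0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    # graycomatrix returns an array of shape (levels, levels, n_distances, n_angles)
    glcm = graycomatrix(img, distances=[distance], angles=angles,
                        levels=levels, symmetric=True, normed=True)
    return glcm[:, :, 0, :]  # shape (levels, levels, 4): one matrix per angle


# Illustrative usage on a synthetic 8-bit "texture" patch
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
P = glcm_stack(patch)
print(P.shape)  # (256, 256, 4)
```

The four matrices obtained this way are the input of the SDA module described next.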
2.2 SDA Module
This module transforms the GLCM matrices P_θ into symbolic interval data: for each cell (i, j), we
extract the minimum and maximum probabilities over all angles θ. This way, we create
a new variable PI(i, j) that captures the variability of the probability over dif-
ferent values of θ. Problems with choosing [min; max] can arise when these extreme
values are in fact outliers. The extraction of the minimum and maximum probabilities
is computed as:

PI(i, j) = \left[ \min_{\theta} P_{\theta}(i, j), \; \max_{\theta} P_{\theta}(i, j) \right]    (2)
The SDA Module assumes that the interval matrix PI is composed of n items or indi-
viduals (rows) that are described by p interval-type variables (columns). For a given
number n of interval data x_i = [a_i, b_i] (i = 1, ..., n), the extraction of a vector X
from the matrix PI is defined by the following equation:
x_i = [a_i, b_i], \quad i = 1, \ldots, n, where each interval [a_i, b_i] is a cell of the matrix PI    (3)

SDA MODULE ALGORITHM SCHEMA
1: for each image Img_s (s = 1, ..., S) do
2: for i = 1 to I do
3: for j = 1 to J do
4: Compute PI(i, j) by finding the minimum and maximum probabilities over all angles P_θ(i, j) (θ = 0°, 45°, 90°, 135°).
5: end for
6: end for
7: end for
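A minimal sketch of this interval extraction, assuming the per-angle GLCMs are stacked along the last axis and that the vector X of Equation (3) lists the cells of PI in row order; all names here are illustrative.

```python
import numpy as np


def interval_matrix(glcm_angles):
    """Build PI(i, j) = [min_theta P_theta(i, j), max_theta P_theta(i, j)] (Equation 2).

    glcm_angles: array of shape (levels, levels, n_angles) holding one
    normalized GLCM per angle (e.g. 0, 45, 90 and 135 degrees).
    Returns two arrays (lower, upper) with the interval bounds.
    """
    lower = glcm_angles.min(axis=2)
    upper = glcm_angles.max(axis=2)
    return lower, upper


def to_interval_vector(lower, upper):
    """Flatten PI into a vector X of interval data x_i = [a_i, b_i] (row order)."""
    return np.stack([lower.ravel(), upper.ravel()], axis=1)  # shape (levels*levels, 2)


# Illustrative usage with random stand-in "GLCMs"
rng = np.random.default_rng(1)
P = rng.random((8, 8, 4))
P /= P.sum(axis=(0, 1), keepdims=True)  # normalize each angle's matrix
a, b = interval_matrix(P)
X = to_interval_vector(a, b)
print(X.shape)  # (64, 2): one [min, max] interval per GLCM cell
```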
2.3 Clustering Module
This module clusters the interval-valued texture descriptors with a Fuzzy Kohonen Clustering Network adapted to symbolic interval data (IFKCN), which seeks a fuzzy partition of the n patterns into c clusters by minimizing the following clustering criterion:

J = \sum_{i=1}^{c} \sum_{k=1}^{n} (u_{ik})^{m} \, d(x_k, g_i) = \sum_{i=1}^{c} \sum_{k=1}^{n} (u_{ik})^{m} \sum_{j=1}^{p} \left[ (a_k^j - \alpha_i^j)^2 + (b_k^j - \beta_i^j)^2 \right]    (4)

where u_{ik} is the membership degree of pattern k in cluster P_i, m is the weight exponent, and g_i^j = [\alpha_i^j, \beta_i^j] are the interval bounds of prototype g_i.
The algorithm starts from an initial partition and converges when the criterion J reaches
a stationary value representing a local minimum. The IFKCN controls the learning rates and
the size of the neighborhoods by changing the value of the weight exponent with time as follows:

m_t = m_0 - t \, \frac{m_0 - 1}{t_{max}}    (5)
where m_0 is the initial value of the weight exponent (greater than one) and t_max is the itera-
tion limit. The membership degree u_ik (k = 1, ..., n) of each pattern k in each cluster
P_i, minimizing the clustering criterion J under u_ik ≥ 0 and Σ_{i=1}^{c} u_ik = 1, is updated
according to the following expression:
u_{ik,t} = \left[ \sum_{h=1}^{c} \left( \frac{\sum_{j=1}^{p} \left[ (a_k^j - \alpha_{i,t-1}^j)^2 + (b_k^j - \beta_{i,t-1}^j)^2 \right]}{\sum_{j=1}^{p} \left[ (a_k^j - \alpha_{h,t-1}^j)^2 + (b_k^j - \beta_{h,t-1}^j)^2 \right]} \right)^{\frac{2}{m_t - 1}} \right]^{-1} \quad \text{for } i = 1, \ldots, c    (6)
The fuzzified learning rate is defined as:

\eta_{ik,t} = (u_{ik,t})^{m_t}    (7)
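As a small illustration of Equations (5)-(7), the sketch below computes the weight exponent m_t, the membership matrix and the fuzzified learning rates for a given set of interval prototypes; the array shapes and all variable names are assumptions made for the example.

```python
import numpy as np


def memberships_and_rates(A, B, alpha, beta, m0=2.0, t=1, t_max=100):
    """Equations (5)-(7): weight exponent m_t, memberships u_ik,t and rates eta_ik,t.

    A, B        : (n, p) lower/upper bounds of the interval patterns.
    alpha, beta : (c, p) lower/upper bounds of the interval prototypes at step t-1.
    """
    m_t = m0 - t * (m0 - 1.0) / t_max                      # Eq. (5)
    # d[i, k]: squared interval distance between prototype i and pattern k
    d = ((A[None, :, :] - alpha[:, None, :]) ** 2
         + (B[None, :, :] - beta[:, None, :]) ** 2).sum(axis=2)
    d = np.maximum(d, 1e-12)                               # guard against zero distances
    # Eq. (6): u_ik = [ sum_h (d_ik / d_hk)^(2/(m_t-1)) ]^(-1)
    u = 1.0 / ((d[:, None, :] / d[None, :, :]) ** (2.0 / (m_t - 1.0))).sum(axis=1)
    eta = u ** m_t                                         # Eq. (7)
    return m_t, u, eta
```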
The prototype g_i = (g_i^1, ..., g_i^p) of class P_i (i = 1, ..., c), which minimizes the
clustering criterion J, has the bounds of the interval g_i^j = [\alpha_i^j, \beta_i^j] (j = 1, ..., p) updated
according to the following expression:
\alpha_{i,t}^j = \alpha_{i,t-1}^j + \frac{\sum_{k=1}^{n} \eta_{ik,t} \, (a_k^j - \alpha_{i,t-1}^j)}{\sum_{k=1}^{n} \eta_{ik,t}} \quad \text{and} \quad \beta_{i,t}^j = \beta_{i,t-1}^j + \frac{\sum_{k=1}^{n} \eta_{ik,t} \, (b_k^j - \beta_{i,t-1}^j)}{\sum_{k=1}^{n} \eta_{ik,t}}    (8)
IFKCN-FD ALGORITHM SCHEMA
1: Fix the number of clusters c. Select ε > 0. Set t_max and m_0 > 1.
2: Initialize the prototype vectors g_i and the membership degrees u_ik (k = 1, ..., n; i = 1, ..., c) of
pattern k belonging to cluster P_i such that u_ik ≥ 0 and Σ_{i=1}^{c} u_ik = 1.
3: for t = 1 to t_max do
4: for k = 1 to n do
5: Calculate the weight exponent (Equation 5), the fuzzy membership (Equation 6) and the
fuzzified learning rate (Equation 7).
6: Update the bounds of all interval cluster prototypes (Equation 8).
7: Compute E_t = ||v_t − v_{t−1}||^2.
8: if E_t ≤ ε then
9: Stop.
10: else
11: next t.
12: end if
13: end for
14: end for
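For concreteness, the following Python sketch implements one reading of the IFKCN-FD iteration above (Equations 5-8). The random prototype initialization, the numerical floor on the exponent when m_t approaches 1, the interpretation of E_t as the squared displacement of the prototype bounds, and the synthetic data are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np


def ifkcn_fd(A, B, c=3, m0=2.0, t_max=100, eps=1e-6, seed=0):
    """Sketch of the IFKCN-FD iteration on interval data (Equations 5-8).

    A, B : (n, p) arrays holding the lower and upper bounds a_k^j, b_k^j.
    Returns the prototype bounds (alpha, beta) and the final memberships u (c, n).
    """
    rng = np.random.default_rng(seed)
    n, p = A.shape
    idx = rng.choice(n, size=c, replace=False)      # prototypes start at random patterns
    alpha, beta = A[idx].copy(), B[idx].copy()
    u = np.full((c, n), 1.0 / c)

    for t in range(1, t_max + 1):
        m_t = m0 - t * (m0 - 1.0) / t_max                       # Eq. (5)
        expo = 2.0 / max(m_t - 1.0, 0.1)                        # floor keeps the exponent bounded near t_max
        # Squared interval distance d[i, k] between prototype i and pattern k
        d = ((A[None, :, :] - alpha[:, None, :]) ** 2
             + (B[None, :, :] - beta[:, None, :]) ** 2).sum(axis=2)
        d = np.maximum(d, 1e-12)
        u = 1.0 / ((d[:, None, :] / d[None, :, :]) ** expo).sum(axis=1)   # Eq. (6)
        eta = u ** m_t                                          # fuzzified learning rate, Eq. (7)
        # Prototype update, Eq. (8)
        prev = np.concatenate([alpha.ravel(), beta.ravel()])
        w = eta.sum(axis=1, keepdims=True)                      # (c, 1)
        alpha = alpha + (eta @ A - w * alpha) / w
        beta = beta + (eta @ B - w * beta) / w
        # Stopping test on the prototype displacement (one reading of E_t)
        if np.sum((np.concatenate([alpha.ravel(), beta.ravel()]) - prev) ** 2) <= eps:
            break
    return alpha, beta, u


# Illustrative usage on synthetic interval data with three groups
rng = np.random.default_rng(1)
lower = np.vstack([center + 0.1 * rng.random((20, 4)) for center in rng.random((3, 4)) * 3])
upper = lower + 0.2 * rng.random(lower.shape)
alpha, beta, u = ifkcn_fd(lower, upper, c=3, t_max=50)
print(u.argmax(axis=0))  # crisp assignment of each pattern to a cluster
```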
2.4 Classification Module
This module receives a pre-processed image query and returns to the user the class
considered to be the most similar to the query. After the training phase, it is possible
to use the IFKCN to construct a classifier in which each prototype represents one class
type. If labelled data are available, this information can be used to assign each prototype
a label. Each IFKCN prototype is labelled by a vote among the labels of the input
data vectors mapped to it, keeping only the most frequent label. Finally, the class label of each
original data vector is the label of the corresponding Best Matching Unit (BMU) [8].
A BMU is the winning node of the IFKCN, that is, the prototype most similar to the
query. The search procedure considers at least the first BMU; more than one BMU
may have to be considered in order to classify.
CLASSIFICATION ALGORITHM SCHEMA
1: Compare x_j = (x_j^1, ..., x_j^p) with all the node vectors m_d (d = 1, ..., M) of the IFKCN
using the Euclidean distance.
2: Obtain an ordered list of all BMUs.
3: Determine the first BMU in the IFKCN.
4: Extract the label associated with the selected BMU(s).
5: if the label data are not available then
6: Determine the next BMU in the IFKCN and repeat step 4.
7: end if
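A possible realization of this labelling-and-search scheme is sketched below; the distance on interval bounds, the use of -1 for unlabelled prototypes, and all function names are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np


def label_prototypes(A, B, labels, alpha, beta, n_classes):
    """Assign each IFKCN prototype the most frequent label among the training
    patterns for which it is the Best Matching Unit (BMU)."""
    d = ((A[None, :, :] - alpha[:, None, :]) ** 2
         + (B[None, :, :] - beta[:, None, :]) ** 2).sum(axis=2)   # (c, n)
    bmu = d.argmin(axis=0)                                        # BMU of every training pattern
    proto_labels = np.full(alpha.shape[0], -1)
    for i in range(alpha.shape[0]):
        hits = labels[bmu == i]
        if hits.size:                                             # leave -1 if no votes
            proto_labels[i] = np.bincount(hits, minlength=n_classes).argmax()
    return proto_labels


def classify(a_query, b_query, alpha, beta, proto_labels):
    """Return the label of the first BMU that carries a label; fall back to the
    next-closest prototypes when the nearest one is unlabelled."""
    d = ((a_query - alpha) ** 2 + (b_query - beta) ** 2).sum(axis=1)
    for i in np.argsort(d):                                       # ordered list of BMUs
        if proto_labels[i] != -1:
            return proto_labels[i]
    return -1


# Illustrative usage with random interval data and labels
rng = np.random.default_rng(4)
A = rng.random((60, 5)); B = A + 0.1 * rng.random((60, 5))
labels = rng.integers(0, 3, size=60)
alpha = rng.random((6, 5)); beta = alpha + 0.1                    # stand-in prototypes
proto_labels = label_prototypes(A, B, labels, alpha, beta, n_classes=3)
print(classify(A[0], B[0], alpha, beta, proto_labels))
```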
3 Experimental Results
In order to assess the performance of the proposed hybrid approach, experiments with
the Brodatz database [9] were carried out. In the experiments, each Brodatz texture
constitutes a separate class. Each texture has 640 × 640 pixels, with 8 bits/pixel.
Each texture was partitioned into 32 × 32 subimages, yielding 400 non-overlapping texture
samples per class. The samples were separated into two disjoint sets, one for training
and the other for testing the classifier. This corpus is the same used by Li et al. [10].
The evaluation is based on the correct classification rate (CCR) [10]. These mea-
surements are estimated in the framework of a Monte Carlo experiment with 30 random
partitions of the training and test sets. This approach is compared with several classi-
fiers from Li et al. [10].
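The patch extraction and the random train/test split described above could be set up along the following lines; the synthetic stand-in image and the helper names are assumptions, and loading the actual Brodatz textures is left out.

```python
import numpy as np


def split_patches(texture, patch=32):
    """Cut a 640x640 texture into non-overlapping 32x32 patches (20 x 20 = 400)."""
    h, w = texture.shape
    rows, cols = h // patch, w // patch
    return (texture[:rows * patch, :cols * patch]
            .reshape(rows, patch, cols, patch)
            .swapaxes(1, 2)
            .reshape(rows * cols, patch, patch))


def train_test_split(n_samples, fraction, rng):
    """Random index split for one Monte Carlo partition (e.g. fraction=0.05 -> 20 train)."""
    n_train = int(round(fraction * n_samples))
    perm = rng.permutation(n_samples)
    return perm[:n_train], perm[n_train:]


rng = np.random.default_rng(3)
texture = rng.integers(0, 256, size=(640, 640), dtype=np.uint8)  # stand-in for a Brodatz image
samples = split_patches(texture)
print(samples.shape)                  # (400, 32, 32)
train_idx, test_idx = train_test_split(len(samples), fraction=0.05, rng=rng)
print(len(train_idx), len(test_idx))  # 20 380
```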
In this experiment, the effect of the training set size on the classifier accuracy was
then assessed. A small fraction (from 1.25% to 10%) of the 400 subimages is used
for training the classifiers, while the rest is used for testing. The number of training samples
in each class was set to 1.25% (5 samples), 2.5% (10 samples), 3.75% (15 samples),
5% (20 samples), 6.25% (25 samples), 7.5% (30 samples), 8.75% (35 samples) and
10% (40 samples). For the proposed method, the number of clusters c was set to 30
(the same as the number of classes in the database). Figure 1 summarizes the results of the pro-
posed method, along with the results from [10] for the single and fused SVM classifiers, the
Bayes classifiers using Bayes distance and Mahalanobis distance, and the LVQ classi-
fier. From this figure, we can observe the superiority of the proposed classifier in
comparison with the other ones.
Fig. 1: Texture classification CCR rates
4 Conclusions
In this paper, an approach for texture-based image classification using gray-level co-
occurrence matrices (GLCM) and the FKCN for Symbolic Interval Data (IFKCN)
was presented. To show the usefulness of the proposed methodology, an application with a
benchmark data set was considered. The proposed classifier was compared with several
classifiers according to the classification error rate. The results demonstrated that our
method outperformed the other ones.
References
[1] Anil K. Jain and Farshid Farrokhnia. Unsupervised texture segmentation using Gabor filters. Pattern
Recognition, 24(12):1167-1186, 1991.
[2] J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture
and object categories: A comprehensive study. International Journal of Computer Vision, 73(2):213-238, 2007.
[3] Dimitrios Charalampidis and Takis Kasparis. Wavelet-based rotational invariant roughness features for
texture classification and segmentation. IEEE Transactions on Image Processing, 11:825-837, 2002.
[4] A. Khotanzad and Y. H. Hong. Invariant image recognition by Zernike moments. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 12(5):489-497, 1990.
[5] Robert M. Haralick, K. Shanmugam, and Its'Hak Dinstein. Textural features for image classification.
IEEE Transactions on Systems, Man and Cybernetics, 3(6):610-621, 1973.
[6] Hans-Hermann Bock and Edwin Diday. Analysis of Symbolic Data: Exploratory Methods for Extract-
ing Statistical Information from Complex Data. Springer, 2000.
[7] James C. Bezdek, Eric Chen-Kuo Tsao, and Nikhil R. Pal. Fuzzy Kohonen clustering networks. In Proc.
of the First IEEE Conference on Fuzzy Systems, San Diego, USA, 1992.
[8] T. Kohonen. Self-Organizing Maps. Springer-Verlag, 3rd edition, 2001.
[9] P. Brodatz. Textures: A Photographic Album for Artists and Designers. Dover Publications, 1966.
[10] Shutao Li, James T. Kwok, Hailong Zhu, and Yaonan Wang. Texture classification using the support
vector machines. Pattern Recognition, 36(12):2883-2893, 2003.