
Computer and Electrical Engineering (2022)

Contents lists available at ScienceDirect

Computer and Electrical Engineering


journal homepage: www.elsevier.com/locate/jvci

Texture and material recognition with multi-scale ternary and septenary patterns

Elmokhtar Rachdi a, Issam El khadiri a, Youssef El merabet a,∗, Youssef Rhazi b, Cyril Meurie c

a Laboratoire SETIME, Département de Physique, Faculté des Sciences, Université Ibn Tofail, BP 133, Kénitra, 14000, Morocco
b Electrical Engineering Department, Faculty of Sciences and Technology, Beni-Mellal, Morocco
c Univ Gustave Eiffel, COSYS-LEOST, F-59650 Villeneuve d'Ascq, France

ARTICLE INFO

Keywords: Texture recognition, texture descriptors, LBP, feature extraction, directional topologies

ABSTRACT

Inspired by LBP and its variants, this paper introduces a novel local feature extraction operator for texture classification, referred to as Multi-scale Ternary and Septenary Pattern (MTSP). MTSP is a histogram-based feature representation composed of two single-scale encoders, STP and SSP (single-scale ternary and septenary patterns, respectively), designed according to a novel set-theory-based pattern encoding scheme that integrates the concepts of both the LQP and LTP operators. The essence of STP and SSP is to compute several virtual pixels from various local and global image statistics and to progressively encode interactions between local and non-local pixels by examining directional information and differential excitation according to relationships between adjacent pixels rearranged in a variety of spatial arrangements. Unlike many parametric state-of-the-art texture operators that perform thresholding with static thresholds, MTSP incorporates dynamic thresholds estimated automatically. The MTSP descriptor captures detailed image information as faithfully as possible through the complementary texture information generated by fusing the STP and SSP encoders. Experimental results show that MTSP ensures reliable performance stability over ten texture datasets and against several recent representative methods. In addition, the performance of MTSP is further validated statistically via the Wilcoxon signed rank test, demonstrating that MTSP is a good candidate for texture modeling.

© 2022 Elsevier B. V. All rights reserved.

1. Introduction

In the field of texture analysis, the classification of textures is increasingly seen as a serious problem. It plays a major role in a wide range of applications, such as face presentation attack detection, image classification, writer identification, image retrieval, object recognition, image matching, gender classification, facial classification, etc. [3, 5, 32]. However, in the real world, textures vary in scale, illumination, rotation and affine variations as imaging conditions change.

In texture analysis, extracting powerful features for texture categorization is still a challenge. Over the years, numerous advanced approaches have been developed in the literature, with excellent surveys given in [7]. Over the past decades, local feature extraction techniques have been designed and have demonstrated great success in the field of texture analysis. The primary benefits of these texture descriptors come from the fact that they only depend on a relatively modest amount of training data and from their straightforward design [21].

∗ Corresponding author. Tel.: +0-000-000-0000; fax: +0-000-000-0000. E-mail: [email protected] (Youssef El merabet), [email protected] (Cyril Meurie).
Among the local texture descriptors, LBP (local binary patterns) [22] has become one of the most remarkable and attractive texture descriptors and has attracted great interest for more than a decade. LBP is highly regarded due to its distinctive merits, including ease of training with a small amount of data, implementation simplicity, suitability for high-class texture problems with real-time applications thanks to its relatively fast computation, invariance to monotonic illumination variations, etc. [38]. Despite these advantages, LBP also has many limitations [44], such as: (1) it is sensitive to noise since it is based only on comparisons between local pixels; (2) it disregards the image's comprehensive spatial information in favor of local texture elements; (3) it is not invariant to image rotation. Several attempts have been made to overcome the shortcomings of LBP, and as a result many modifications and extensions built on LBP have been designed in recent years [13, 14, 15, 16]. The authors in [19] provided thorough tests evaluating the performance of various state-of-the-art texture operators in palmprint and face recognition problems, respectively. To overcome the sensitivity of LBP to central pixel noise, the local ternary pattern (LTP) [41] was introduced. In LTP, by assigning a threshold to the central pixel, the descriptor is quantized into three levels (-1, 0 and 1) and decomposed into two upper and lower descriptors. In [33], the authors presented the local directional ternary pattern (LDTP) operator, which encodes both directional and contrast information using the concepts of the LDP and LTP operators. More recently, Shih et al. [23] designed the synchronized rotation local ternary pattern (SRLTP) operator for image classification. SRLTP improves the LTP operator by applying an additional procedure to the extracted upper and lower LTPs, which are then encoded into uniform and rotation-invariant pattern histograms, respectively. In [17], the multi kernel based local binary pattern (MKLBP) descriptor is proposed for texture recognition. The construction process of MKLBP integrates ternary, quaternary and signum feature extraction operators into the same encoding scheme. The authors in [1] proposed the corner rhombus shape LBP (CRSLBP) descriptor for texture classification. As an extension of the LBP operator, CRSLBP, in addition to using a single parameter (radius) and a selected even block, takes magnitude and sign information into account to threshold four center pixels. This permits encoding relationships between the centers, the neighbors of the centers, and the neighbors themselves. In [2], Zheng et al. proposed the circumferential local ternary pattern (CLTP) for anti-counterfeiting pattern identification. CLTP classifies each triplet of pixels, composed of two circumferentially adjacent pixels and the central pixel in each 3 × 3 square neighborhood, into falling, rising and stable structures. The local information is encoded using the LTP concept.

LBP-like techniques have dominated the best positions among local feature algorithms for more than a decade. The need to develop discriminative local image feature operators no longer needs justification, and the emergence of new local hand-crafted methods in pattern recognition is still ongoing, e.g., center lop-sided local binary patterns (CLS-LBP) [6], quaternionic local angular binary pattern (QLABP) [24], local concave-and-convex micro-structure pattern (LCCMSP) [44], multi-direction local binary pattern (MDLBP) [30], scale-selective and noise-robust extended local binary pattern (SNELBP) [25], multi level directional cross binary patterns (MLD-CBP) [27], Petersen graph multi-orientation based multi-scale ternary pattern (PGMO-MSTP) [32], oriented star sampling structure based multi-scale ternary pattern (O3S-MTP) [40], orthogonal difference-local binary pattern (OD-LBP) [31], directional neighborhood topologies based multi-scale quinary pattern (DNT-MQP) [38], quaternionic extended local binary pattern (QxLBP) [26], adaptively binarizing magnitude vector (ABMV) [20], and so on.

Handcrafted texture methods appear to be progressively replaced by CNN-based methods [50]. However, the main criticism of CNNs comes from the observation that they require expensive model learning on massive data to achieve high recognition accuracy, at the cost of computation time that is very high compared to hand-crafted features. In 2017, Liu et al. [8] evaluated a large number of LBP variants and compared them to some deep texture methods. Their findings revealed that the best overall performance was obtained by their designed handcrafted descriptor. Yang et al. [9] have shown that handcrafted methods are efficient because they build directly on human knowledge. Song et al. [10] indicate that basic deep features lack some robustness to rotation and illumination changes, while their designed handcrafted texture descriptor has great advantages in this regard. On the other side, Huang et al. [11] demonstrated how a structured and reliable local descriptor can enhance deep learning's remarkable capacity to extract more discriminating features. Similar comments are highlighted in [12].

In summary, and in light of the above findings, the need to design a robust handcrafted texture method with high discriminant power no longer has to be proved. On the other hand, despite the promising results of LBP and its extensions and modifications, an alternative solution to strengthen their power of discrimination for a better representation of salient local texture structure is still crucial. In this paper, to better address the limitations of local feature descriptors, and in particular LBP-like algorithms, and thus to further improve their performance while keeping their easiness and efficiency, we develop a computationally and conceptually simple yet powerful local texture descriptor, named multi-scale ternary and septenary patterns (MTSP), for image texture understanding and analysis. The idea behind the MTSP method is to compute the feature representation using different neighbourhood topologies, in order to catch comprehensive spatial information from neighbouring pixels in multiple directions and blocks and also to characterize the spatial relationship and the appearance of a given pixel intensity. The MTSP operator is obtained as a concatenation of two single-scale descriptors, STP (Single-scale Ternary Pattern) and SSP (Single-scale Septenary Pattern), where information extraction is carried out according to a compact encoding scheme based on set theory, integrating both the concepts of LQP- and LTP-like texture methods to provide more discriminative texture information. MTSP has the advantage of being training free and conceptually much easier to implement.
Furthermore, the major advantage of the proposed MTSP model lies in its flexibility, given by its adaptive thresholding mechanism that uses dynamic thresholds estimated automatically inside each local compact neighborhood. In this context, the four main contributions of this paper are as follows:

• Two single-scale feature descriptors, called single-scale ternary pattern (STP) and single-scale septenary pattern (SSP), are designed based on a novel set-theory-based pattern encoding scheme. It extends the concepts of both the LTP and LQP operators using multiple oriented blocks and direction-based neighborhood topologies, which are more suitable for texture modeling than a vast number of state-of-the-art methods.

• The single-scale STP and SSP operators are combined into a single feature vector to construct the distinctive MTSP method, which ought to be more effective and more reliable.

• Unlike the vast number of existing parametric methods which use static thresholds, the construction process of MTSP incorporates dynamic thresholds which are estimated via an automatic mechanism.

• We provide a thorough comparison on ten challenging datasets and show that MTSP achieves superior or competitive performance against recent powerful existing texture methods.

The structure of the rest of this paper is as follows. Section 2 briefly presents some typical state-of-the-art texture methods. Section 3 describes in detail the proposed MTSP texture descriptor. Comprehensive experiments and comparative evaluation are given in Section 4. Section 5 presents the implementation details used to compare the processing time of all evaluated methods. Section 6 summarizes the study and presents some future research directions.

2. Background: LBP, LTP and LQP

2.1. Basic local binary pattern (LBP)

The popular texture operator LBP, which has been demonstrated to be a very effective and computationally simple texture descriptor for monochromatic images, was introduced by Ojala et al. [22]. For each pixel in the image I of size M × N, an LBP value is computed by comparing its intensity value with those of its surrounding neighbors in each 3 × 3 grayscale image patch. Formally, given a central pixel I_c in a 3 × 3 square neighborhood, its LBP label is generated based on the kernel function LBP(·) (cf. Eq. (1)):

LBP(I_c) = \sum_{p=0}^{P-1} \vartheta(I_p - I_c) \times 2^p    (1)

where I_p (p ∈ {0, 1, ..., P−1}) denotes the p-th surrounding pixel and P (P = 8) is the number of surrounding pixels. ϑ is defined as follows (cf. Eq. (2)):

\vartheta(a) = \begin{cases} 1 & \text{if } a \geq 0 \\ 0 & \text{otherwise} \end{cases}    (2)

2.2. Local ternary pattern (LTP)

The authors in [34] extended the traditional LBP to a three-value encoding scheme referred to as LTP, in which the two conventional binary codes (0 and 1) are extended to ternary codes (-1, 0 and 1). LTP uses a constant threshold value τ, specified by the user, to compare the central pixel with its neighbouring pixels. The 3-valued function ϕ(·) is given as follows (cf. Eq. (3)):

\varphi(I_p, I_c, \tau) = \begin{cases} +1 & \text{if } I_p \geq I_c + \tau \\ 0 & \text{if } |I_p - I_c| < \tau \\ -1 & \text{if } I_p \leq I_c - \tau \end{cases}    (3)

Consequently, by using the function ϕ(·), the local ternary pattern upper (LTP_U) and local ternary pattern lower (LTP_L) codes are computed as follows (cf. Eqs. (4) and (5)):

LTP_L(I_c) = \sum_{p=0}^{7} 2^p \, \vartheta(I_c - I_p - \tau)    (4)

LTP_U(I_c) = \sum_{p=0}^{7} 2^p \, \vartheta(I_p - I_c - \tau)    (5)

The final LTP code generated for each pixel of the image is the combination of the two LTP_U and LTP_L texture features.

2.3. Local quinary patterns (LQP)

Nanni et al. [36], inspired by LTP, extended the LBP method to a five-value encoding technique referred to as LQP, in which the two conventional binary codes (0 and 1) are extended to quinary codes (-2, -1, 0, 1 and 2) based on two constant thresholds τ1 and τ2. The 5-valued function υ(·) is given as follows (cf. Eq. (6)):

\upsilon(I_p, I_c, \tau_1, \tau_2) = \begin{cases} +2 & \text{if } I_p \geq I_c + \tau_2 \\ +1 & \text{if } I_c + \tau_1 \leq I_p < I_c + \tau_2 \\ 0 & \text{if } I_c - \tau_1 \leq I_p < I_c + \tau_1 \\ -1 & \text{if } I_c - \tau_2 \leq I_p < I_c - \tau_1 \\ -2 & \text{otherwise} \end{cases}    (6)

where τ1 and τ2 are two constant thresholds. Subsequently, the obtained LQP is split into four binary patterns using the b_c(x) function with c ∈ {2, 1, 0, −1, −2}. Finally, the histograms produced from the binary patterns are concatenated.

b_c(x) = \begin{cases} 1 & \text{if } x = c \\ 0 & \text{otherwise} \end{cases}    (7)
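To make the three baseline encodings above concrete, the following minimal sketch computes the LBP, LTP and LQP codes of a single 3 × 3 patch with NumPy, following Eqs. (1)-(7). The neighbor ordering, the example patch and the example threshold values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative 3x3 grayscale patch; neighbors are taken clockwise from the top-left corner.
patch = np.array([[52, 60, 61],
                  [55, 57, 63],
                  [49, 58, 66]], dtype=float)
center = patch[1, 1]
neighbors = patch[[0, 0, 0, 1, 2, 2, 2, 1], [0, 1, 2, 2, 2, 1, 0, 0]]  # assumed order I_0..I_7
weights = 2 ** np.arange(8)

# Eqs. (1)-(2): basic LBP.
lbp = int(np.sum((neighbors - center >= 0) * weights))

# Eqs. (3)-(5): LTP with a user-defined constant threshold tau,
# split into upper and lower binary patterns.
tau = 3.0
ltp_upper = int(np.sum((neighbors >= center + tau) * weights))
ltp_lower = int(np.sum((neighbors <= center - tau) * weights))

# Eqs. (6)-(7): LQP with two constant thresholds tau1 < tau2,
# then one binary plane b_c per non-zero quinary level (four planes, as commonly done).
tau1, tau2 = 2.0, 5.0
quinary = np.select(
    [neighbors >= center + tau2,
     neighbors >= center + tau1,
     neighbors >= center - tau1,
     neighbors >= center - tau2],
    [2, 1, 0, -1],
    default=-2)
lqp_planes = {c: int(np.sum((quinary == c) * weights)) for c in (2, 1, -1, -2)}

print(lbp, ltp_upper, ltp_lower, lqp_planes)
```

Over a full image, each per-pixel code is accumulated into a histogram per binary plane and the histograms are concatenated, which is also the general scheme followed by MTSP below.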
3. Multi-scale Ternary and Septenary Patterns (MTSP)

MTSP is derived from the relationship between the local neighboring pixels and their central pixel in each 3 × 3 patch by combining the concepts of LQP- and LTP-like texture methods in the same compact encoding strategy, increasing the accuracy of the texture modeling and leading to more promising outcomes. The essence of MTSP is to carry out pixel sampling and pattern encoding in the most relevant directions present in local microstructures. The construction process of the proposed MTSP descriptor involves the following steps.

3.1. Pattern sampling

It is worth mentioning that the MTSP operator considers a unit-distance radius, since the nearest surrounding pixels keep the most discriminative information for local texture modeling while retaining a lower feature size and computational cost. Thus, the whole 3 × 3 square neighborhood is chosen as the spatial micro-structure to build the MTSP descriptor, which conveys valuable information between the adjacent neighbouring pixels. Given a central pixel I_c and its 8 surrounding pixels {I_0, I_1, ..., I_7}, and based on the commonly used assumption that texture information is derived from local spatial fluctuations in pixel intensities and orientation, MTSP considers multiple oriented blocks and direction-based neighborhood topologies to encode the relationships and interactions between neighbouring pixels. As illustrated in Figures 1 and 2, the pixels are sampled around the central pixel in a variety of spatial arrangements according to their angular position to catch more comprehensive spatial information from the surrounding pixels. On the one hand, as shown in the first row of Figure 1, the central pixel I_c is rearranged each time with two pixels which are symmetrically located around it at all possible orientations, i.e. horizontal (0°), forward slant (45°), vertical (90°) and backward slant (135°), while the second row shows that I_c is sampled each time with four pixels. On the other hand, it can be inferred from Figure 2 that the 3 × 3 square neighborhood is divided into four 2 × 2 blocks where I_c is sampled with three pixels in each direction.

Fig. 1. Different template-directional neighborhood topologies.

Fig. 2. Different template-block neighborhood topologies.

It is appropriate to indicate that both the mean grey value and the median value are commonly used statistical metrics for texture modeling and analysis. In light of this, and with the intention of generating a code that is impervious to noise and more resistant to lighting changes, various kinds of median and mean values are used as virtual pixels in the construction process of MTSP. Mathematical definitions of these values are given by the following equations (cf. Eqs. 8 to 13):

D_k = \frac{1}{3}(I_k + I_c + I_{k+4})    (8)

\tilde{D}_k = \frac{1}{4}(I_k + 2I_c + I_{k+4})    (9)

where D_k and \tilde{D}_k represent, respectively, the mean of (I_k, I_c, I_{k+4}) and the mean of ((I_k, I_c), (I_c, I_{k+4})), with k ∈ {0, 1, 2, 3}.

K_0 = \frac{1}{6}(I_0 + I_4 + 2I_c + I_2 + I_6)    (10)

K_1 = \frac{1}{6}(I_1 + I_5 + 2I_c + I_3 + I_7)    (11)

B_k = \frac{1}{4}(I_{2k} + 2I_{2k+1} + I_{2k+2})    (12)

\tilde{B}_k = \frac{1}{4}(I_{2k} + 2I_c + I_{2k+2})    (13)

where B_k and \tilde{B}_k represent, respectively, the mean of ((I_{2k}, I_{2k+1}), (I_{2k+1}, I_{2k+2})) and the mean of ((I_{2k}, I_c), (I_c, I_{2k+2})), with k ∈ {0, 1, 2, 3}.

We consider (D_k, \tilde{D}_k) and (B_k, \tilde{B}_k) as virtual neighboring pixels of the central pixel I_c and then compute their local means and medians as shown in the following equations (cf. Eqs. 14 to 19):

m_D = \frac{1}{9}\left(I_c + \sum_{k=0}^{3}(D_k + \tilde{D}_k)\right)    (14)

m_B = \frac{1}{9}\left(I_c + \sum_{k=0}^{3}(B_k + \tilde{B}_k)\right)    (15)

m_I = \frac{\sum_{\alpha_1=1}^{M}\sum_{\alpha_2=1}^{N} I(\alpha_1, \alpha_2)}{M \times N}    (16)

\hat{m}_D = \mathrm{median}(D)    (17)

\hat{m}_B = \mathrm{median}(B)    (18)

\hat{m}_I = \mathrm{median}(I)    (19)

where B and D are the sets of B_{k; k∈{0,1,2,3}} and D_{k; k∈{0,1,2,3}} (cf. Eqs. 8 and 12), respectively.
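The short NumPy sketch below computes the virtual pixels of Eqs. (8)-(13) and the local statistics of Eqs. (14)-(19) for one 3 × 3 patch. The clockwise neighbor ordering and the wrap-around of the index 2k+2 to I_0 when k = 3 are assumptions made for illustration; m_I and \hat{m}_I are shown on the patch only, whereas the paper defines them on the whole image.

```python
import numpy as np

def virtual_pixels(patch):
    """Virtual pixels and local statistics of a 3x3 patch (Eqs. 8-19, sketched)."""
    c = patch[1, 1]
    # I_0..I_7, assumed clockwise from the top-left corner.
    I = patch[[0, 0, 0, 1, 2, 2, 2, 1], [0, 1, 2, 2, 2, 1, 0, 0]].astype(float)

    k = np.arange(4)
    D  = (I[k] + c + I[k + 4]) / 3.0                               # Eq. (8)
    Dt = (I[k] + 2 * c + I[k + 4]) / 4.0                           # Eq. (9)
    K0 = (I[0] + I[4] + 2 * c + I[2] + I[6]) / 6.0                 # Eq. (10)
    K1 = (I[1] + I[5] + 2 * c + I[3] + I[7]) / 6.0                 # Eq. (11)
    B  = (I[2 * k] + 2 * I[2 * k + 1] + I[(2 * k + 2) % 8]) / 4.0  # Eq. (12)
    Bt = (I[2 * k] + 2 * c + I[(2 * k + 2) % 8]) / 4.0             # Eq. (13)

    m_D = (c + np.sum(D + Dt)) / 9.0                               # Eq. (14)
    m_B = (c + np.sum(B + Bt)) / 9.0                               # Eq. (15)
    m_hat_D, m_hat_B = np.median(D), np.median(B)                  # Eqs. (17)-(18)
    return dict(D=D, Dt=Dt, B=B, Bt=Bt, K0=K0, K1=K1,
                m_D=m_D, m_B=m_B, m_hat_D=m_hat_D, m_hat_B=m_hat_B)

patch = np.array([[52, 60, 61], [55, 57, 63], [49, 58, 66]], dtype=float)
stats = virtual_pixels(patch)
m_I, m_hat_I = patch.mean(), np.median(patch)   # Eqs. (16) and (19), image-level in the paper
print({key: np.round(val, 2) for key, val in stats.items()})
```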
3.2. Coding scheme

First of all, in order to establish relationships between pixels and different sets of pixels, we use set theory and in particular Venn diagrams, i.e., diagrammatic representations of the different possible relationships between different sets of a finite number of elements. Let us consider the following definitions:

Definition 1. A set is an unordered collection of objects, called members or elements of the set. A real interval is a nonempty set of real numbers A = [a, b] = {x | a ≤ x ≤ b}, where a is called the infimum and b the supremum.

Definition 2. Let A and B be two sets. The set containing those elements in both A and B is the intersection of the sets A and B, denoted A ∩ B.

Definition 3. Let E be a set and A a subset of E. The complement of A in E is the set {x | x ∈ E and x ∉ A}. We denote it C_E A, E \ A, A^C or Ā.

Accordingly, by considering the neighbouring pixels and the virtual pixels as well as their mean and median values, we construct six sampling sets denoted SS_{i; i∈{1,...,6}} (cf. Eqs. 20 to 25):

SS_1 = {D_0, D_1, I_4, I_5, I_6, I_7}    (20)

SS_2 = {D_2, D_3, I_0, I_1, I_2, I_3}    (21)

SS_3 = {m_I, K_0, I_1, I_3, I_5, I_7}    (22)

SS_4 = {\hat{m}_I, K_1, \tilde{B}_{k; k∈{0,..,3}}}    (23)

SS_5 = { \frac{\min(m_D, m_B)}{2}, \frac{\min(\hat{m}_D, \hat{m}_B)}{2}, \min(\tilde{D}_k, \tilde{B}_k)_{k∈{0,..,3}} }    (24)

SS_6 = { \frac{\max(m_D, m_B)}{2}, \frac{\max(\hat{m}_D, \hat{m}_B)}{2}, \max(\tilde{D}_k, \tilde{B}_k)_{k∈{0,..,3}} }    (25)

To visualize the set operations, we define two kinds of Venn diagrams, i.e., Venn diagrams in lower and upper modes, as schematized in Figures 3 and 4, respectively.

Fig. 3. A schematic image of the lower Venn diagrams mode.

Fig. 4. A schematic image of the upper Venn diagrams mode.

Given the defined six sampling sets SS_{i; i∈{1,...,6}}, an ensemble of sets of pixel relationships based on three dynamic threshold values τ1, τ2 and τ3 is constructed according to both the upper and lower Venn diagram modes as well as the three definitions given above (Definition 1 to Definition 3). Denoted as A, they are expressed as follows (cf. Eqs. 26 to 39):

A^U_1(x) = { x ∈ SS_1 | x ≥ I_c − τ1 }    (26)
A^U_2(y) = { y ∈ SS_2 | y ≥ I_c + τ1 }    (27)
A^U(x, y) = A^U_1(x) ∩ A^U_2(y)    (28)
A^L_1(x) = { x ∈ SS_1 | x ≤ I_c + τ1 }    (29)
A^L_2(y) = { y ∈ SS_2 | y ≤ I_c − τ1 }    (30)
A^L(x, y) = A^L_1(x) ∩ A^L_2(y)    (31)
A^U_3(x) = { x ∈ SS_3 | x ≥ I_c + τ2 }    (32)
A^U_4(y) = { y ∈ SS_4 | y ≥ I_c − τ3 }    (33)
A^U_5(z) = { z ∈ SS_5 | z ≥ I_c + τ1 }    (34)
A^U(x, y, z) = A^U_3(x) ∩ A^U_4(y) ∩ A^U_5(z)    (35)
A^L_3(x) = { x ∈ SS_3 | x ≤ I_c − τ2 }    (36)
A^L_4(y) = { y ∈ SS_4 | y ≤ I_c + τ3 }    (37)
A^L_5(z) = { z ∈ SS_6 | z ≤ I_c − τ1 }    (38)
A^L(x, y, z) = A^L_3(x) ∩ A^L_4(y) ∩ A^L_5(z)    (39)

The local texture relationship between each point within the established six sampling sets SS_{i; i∈{1,...,6}} and the central pixel is encoded using three- and seven-valued coding schemes (hence the names ternary and septenary, respectively), according to the ensemble of sets of pixel relationships based on the three dynamic threshold values τ1, τ2 and τ3. Note that τ1, τ2 and τ3 are employed to reduce the influence of outside factors, such as noise, that disturb the patterns. We design a texture operator called Multi-scale Ternary and Septenary Pattern (MTSP) in order to clearly extract the comprehensive micro-structure features contained in these relationships. Conceptually, MTSP is composed of a Single-scale Ternary Pattern (STP) and a Single-scale Septenary Pattern (SSP), defined as follows.
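As a bridge to the next subsections, the sketch below builds the six sampling sets of Eqs. (20)-(25) and the upper/lower membership sets of Eqs. (26)-(39) for one patch, reusing the virtual-pixel helper sketched earlier. The thresholds τ1, τ2, τ3 are taken as inputs here (their automatic estimation is described in Section 3.4), and the element ordering inside each set is an illustrative assumption.

```python
import numpy as np

def sampling_sets(patch, stats, m_I, m_hat_I):
    """Sampling sets SS1..SS6 of Eqs. (20)-(25); `stats` comes from virtual_pixels()."""
    I = patch[[0, 0, 0, 1, 2, 2, 2, 1], [0, 1, 2, 2, 2, 1, 0, 0]].astype(float)
    D, Dt, B, Bt = stats["D"], stats["Dt"], stats["B"], stats["Bt"]
    SS1 = np.array([D[0], D[1], I[4], I[5], I[6], I[7]])
    SS2 = np.array([D[2], D[3], I[0], I[1], I[2], I[3]])
    SS3 = np.array([m_I, stats["K0"], I[1], I[3], I[5], I[7]])
    SS4 = np.concatenate(([m_hat_I, stats["K1"]], Bt))
    SS5 = np.concatenate(([min(stats["m_D"], stats["m_B"]) / 2,
                           min(stats["m_hat_D"], stats["m_hat_B"]) / 2],
                          np.minimum(Dt, Bt)))
    SS6 = np.concatenate(([max(stats["m_D"], stats["m_B"]) / 2,
                           max(stats["m_hat_D"], stats["m_hat_B"]) / 2],
                          np.maximum(Dt, Bt)))
    return SS1, SS2, SS3, SS4, SS5, SS6

def membership_masks(Ic, SS, tau1, tau2, tau3):
    """Boolean masks for the upper/lower sets of Eqs. (26)-(39)."""
    SS1, SS2, SS3, SS4, SS5, SS6 = SS
    upper2 = (SS1 >= Ic - tau1) & (SS2 >= Ic + tau1)                       # A^U(x, y), Eqs. (26)-(28)
    lower2 = (SS1 <= Ic + tau1) & (SS2 <= Ic - tau1)                       # A^L(x, y), Eqs. (29)-(31)
    upper3 = (SS3 >= Ic + tau2) & (SS4 >= Ic - tau3) & (SS5 >= Ic + tau1)  # A^U(x, y, z), Eq. (35)
    lower3 = (SS3 <= Ic - tau2) & (SS4 <= Ic + tau3) & (SS6 <= Ic - tau1)  # A^L(x, y, z), Eq. (39)
    return upper2, lower2, upper3, lower3
```

Note that the element-wise conjunction used here reads each A set as a per-position test over six aligned values, which is one convenient way to turn the set intersections of Eqs. (28), (31), (35) and (39) into code.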
3.2.1. Single-scale Ternary Pattern (STP)

By employing a three-valued coding scheme (i.e., the concept of LTP-like methods), we design the Single-scale Ternary Pattern (STP) to encode the relationship between the central pixel and the points of the sampling sets SS_1 and SS_2. The exploited indicator function φ_pattern(·), which converts each couple relationship into ternary form, is built on the function φ(·) given by Eq. (40):

\phi(\alpha) = \begin{cases} +1 & \text{if } \alpha \in A^U(x, y) \\ 0 & \text{if } \alpha \in A^U(x, y) \cap A^L(x, y) \\ -1 & \text{if } \alpha \in A^L(x, y) \end{cases}    (40)

The local information is then encoded using the following encoder, noted STP and expressed as follows:

STP_{pattern}(I_c) = \sum_{p=0}^{5} \phi_{pattern}(I_p) \times 2^p    (41)

where

\phi_{pattern}(x) = \begin{cases} 1 & \text{if } \phi(x) = pattern, \; pattern \in \{1, 2\} \\ 0 & \text{otherwise} \end{cases}    (42)

3.2.2. Single-scale Septenary Pattern (SSP)

Similarly to LQP, which extended the LTP method to a five-value encoding technique, and in order to capture more comprehensive features, we propose to encode the relationship between the points within the remaining sampling sets SS_{i; i∈{3,..,6}} and the central pixel using a seven-value encoding scheme based on the three dynamic threshold values (τ1, τ2 and τ3). Called Single-scale Septenary Pattern (SSP), it can capture discriminant micro-structure information from the perspective of the established ensemble of sets of pixel relationships. SSP employs the indicator ψ_pattern(·), built on the seven-valued function ψ(·) given by Eq. (43):

\psi(\alpha) = \begin{cases}
+3 & \text{if } \alpha \in A^U_3(x) \cap A^U_4(y) \cap \bar{A}^U(x, y, z) \\
+2 & \text{if } \alpha \in A^U_4(x) \cap A^U_5(y) \cap \bar{A}^U(x, y, z) \\
+1 & \text{if } \alpha \in A^U_3(x) \cap A^U_5(y) \cap \bar{A}^U(x, y, z) \\
-1 & \text{if } \alpha \in A^L_3(x) \cap A^L_5(y) \cap \bar{A}^L(x, y, z) \\
-2 & \text{if } \alpha \in A^L_4(x) \cap A^L_5(y) \cap \bar{A}^L(x, y, z) \\
-3 & \text{if } \alpha \in A^L_3(x) \cap A^L_4(y) \cap \bar{A}^L(x, y, z) \\
0 & \text{otherwise}
\end{cases}    (43)

where \bar{A}^U(x, y, z) (resp. \bar{A}^L(x, y, z)) is the complement of A^U(x, y, z) (resp. A^L(x, y, z)) as defined in Definition 3. The local information is then encoded using the following encoder, noted SSP and expressed as follows:

SSP_{pattern}(I_c) = \sum_{p=0}^{5} \psi_{pattern}(I_p) \times 2^p    (44)

where pattern ∈ {1, 2, 3, 4, 5, 6} indexes six binary patterns obtained by taking into account the lower-positive, lower-negative, upper-positive, upper-negative, middle-positive and middle-negative components, denoted ψ(+3), ψ(+2), ψ(+1), ψ(−1), ψ(−2) and ψ(−3), respectively. It is worth noting that SSP encodes a texture image in seven channels but gives six bit patterns. The resultant image encoded by SSP is divided into six bit patterns using the ψ_pattern function defined by Eq. (45):

\psi_{pattern}(x) = \begin{cases} 1 & \text{if } \psi(x) = pattern, \; pattern \in \{1, 2, 3, 4, 5, 6\} \\ 0 & \text{otherwise} \end{cases}    (45)

3.3. Features extraction

At this step, two code maps are generated for each grayscale image using the STP and SSP encoders. Two histograms are then produced from these two code maps using Eqs. (46) and (47):

h_{STP}(k) = \sum_{I_c \in I} \hat{\delta}(STP(I_c), k)    (46)

h_{SSP}(k) = \sum_{I_c \in I} \hat{\delta}(SSP(I_c), k)    (47)

where k ∈ [0, 2^6] indexes the STP and SSP patterns and \hat{\delta}(·) is the Kronecker delta function given by Eq. (48):

\hat{\delta}(\alpha, \beta) = \begin{cases} 1 & \text{if } \alpha = \beta \\ 0 & \text{otherwise} \end{cases}    (48)

Considering the fact that several texture features have different capabilities to describe images, and in order to take advantage of their performances, the trend towards integrating them into a single row feature vector seems to be the best way forward. To accomplish such a task and to capture more salient texture features, a multi-scale fusion operation is used to generate a novel hybrid texture description model. The obtained texture operator, which is the fusion of the STP and SSP operators, is called multi-scale ternary and septenary pattern (MTSP). It is expected to be more powerful, as it leverages the improved discrimination power and expressiveness of the STP and SSP operators via their complementary information. The generated single feature vector of the multi-scale analysis is expressed as follows:

h_{MTSP} = \langle h_{STP}, h_{SSP} \rangle    (49)

where \langle \cdot \rangle is the concatenation operator.
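Under the same illustrative assumptions as the previous sketches, the fragment below turns the ternary and septenary decisions for one patch into the bit patterns of Eqs. (41) and (44) and accumulates them into the concatenated histogram of Eqs. (46)-(49). The exact mapping between the values of φ(·)/ψ(·) and the pattern indices of Eqs. (42) and (45) is not fully spelled out in the extracted text, so one plane per non-zero level is used here as a reasonable reading.

```python
import numpy as np

WEIGHTS6 = 2 ** np.arange(6)

def stp_codes(upper2, lower2):
    """STP bit patterns (Eq. 41): one 6-bit code per ternary level (+1 and -1)."""
    return int(np.sum(upper2 * WEIGHTS6)), int(np.sum(lower2 * WEIGHTS6))

def ssp_codes(SS, Ic, tau1, tau2, tau3, upper3, lower3):
    """SSP bit patterns (Eq. 44): one 6-bit code per septenary level (+-1, +-2, +-3)."""
    SS1, SS2, SS3, SS4, SS5, SS6 = SS
    not_u, not_l = ~upper3, ~lower3                      # complements of Eqs. (35)/(39)
    levels = {
        +3: (SS3 >= Ic + tau2) & (SS4 >= Ic - tau3) & not_u,
        +2: (SS4 >= Ic - tau3) & (SS5 >= Ic + tau1) & not_u,
        +1: (SS3 >= Ic + tau2) & (SS5 >= Ic + tau1) & not_u,
        -1: (SS3 <= Ic - tau2) & (SS6 <= Ic - tau1) & not_l,
        -2: (SS4 <= Ic + tau3) & (SS6 <= Ic - tau1) & not_l,
        -3: (SS3 <= Ic - tau2) & (SS4 <= Ic + tau3) & not_l,
    }
    return {lv: int(np.sum(mask * WEIGHTS6)) for lv, mask in levels.items()}

def accumulate(h_stp, h_ssp, stp, ssp):
    """Histogram accumulation (Eqs. 46-47); h_stp and h_ssp each have 2**6 bins."""
    for code in stp:
        h_stp[code] += 1
    for code in ssp.values():
        h_ssp[code] += 1

# Per image, after visiting every pixel:
# h_mtsp = np.concatenate([h_stp, h_ssp])   # Eq. (49)
```

How the per-level codes are pooled or concatenated determines the final dimensionality (Table 3 reports 256 bins for MTSP); the sketch above pools them into a single 64-bin histogram per encoder only for brevity.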
3.4. Dynamic thresholds

In order to realize a good trade-off between classification accuracy and computational efficiency, we define the three parameters τ1, τ2 and τ3 of MTSP locally and dynamically. Given a 3 × 3 square neighborhood, we first calculate the neighbor-to-center differences to form a difference vector dv_{3×3} (cf. Eq. 50). After that, the means of all positive and negative difference values (i.e., dv^{mean+}_{3×3} and dv^{mean−}_{3×3}, respectively) are produced (cf. Eqs. 51 and 52):

dv_{3×3} = \left[ \frac{1}{2}(I_0 - 3 I_c), \frac{1}{2}(I_1 - 3 I_c), \ldots, \frac{1}{2}(I_7 - 3 I_c) \right]    (50)

dv^{mean+}_{3×3} = \frac{1}{pv} \sum_{k=1}^{pv} dv^{+}_k    (51)

dv^{mean-}_{3×3} = \frac{1}{nv} \sum_{k=1}^{nv} |dv^{-}_k|    (52)

where dv^{+}_k and dv^{-}_k are, respectively, the positive (i.e., \frac{1}{2}(I_k - 3 I_c) ≥ 0) and negative (i.e., \frac{1}{2}(I_k - 3 I_c) < 0) difference values in the dv_{3×3} set, nv is the number of dv^{-}_k elements and pv is the number of dv^{+}_k elements (pv + nv = P). Finally, the parameters τ1, τ2 and τ3 are calculated using the following equations (cf. Eqs. 53 to 55):

\tau_1 = \frac{|dv^{mean+}_{3×3} - dv^{mean-}_{3×3}|}{\max(dv^{mean+}_{3×3}, dv^{mean-}_{3×3})}    (53)

\tau_2 = \frac{dv^{mean+}_{3×3} + dv^{mean-}_{3×3}}{\min(dv^{mean+}_{3×3}, dv^{mean-}_{3×3})}    (54)

\tau_3 = \frac{1}{2}(\tau_1 + \tau_2)    (55)

Algorithm 1 illustrates the pseudo-code of the designed MTSP descriptor.

Algorithm 1: Computing the MTSP method
Require: I ← grayscale texture image I_{M×N}.
Output: h_{MTSP} ← the multi-scale histogram feature.
1: Obtain the median of the grey-scale values \hat{m}_I of the whole image I_{M×N} using Eq. 19.
2: Obtain the average global grey level m_I of the whole image I_{M×N} using Eq. 16.
3: for each central pixel I_c of I_{M×N} do
4:   Consider a grayscale image patch of dimension 3 × 3 around I_c.
5:   Obtain the differences between the central pixel and its surrounding pixels dv_{3×3}, and then obtain the dynamic thresholds τ1, τ2 and τ3 using Eqs. 53, 54 and 55, respectively.
6:   Obtain the local means of the virtual pixels (D_k, \tilde{D}_k) and (B_k, \tilde{B}_k) using Eqs. 14 and 15.
7:   Obtain the local medians of the virtual pixels (D_k, \tilde{D}_k) and (B_k, \tilde{B}_k) using Eqs. 17 and 18.
8:   Obtain (using Eqs. 41 and 44, respectively):
     • STP_{pattern}(I_c) ← the single-scale ternary pattern (STP) based on the indicator function φ_pattern(·) and associated with the set-of-pixels relationships A^U(x, y) and A^L(x, y).
     • SSP_{pattern}(I_c) ← the single-scale septenary pattern (SSP) based on the indicator function ψ_pattern(·) and associated with the set-of-pixels relationships A^U_3(x), A^U_4(x), A^U_5(x), A^L_3(x), A^L_4(x), A^L_5(x), \bar{A}^U(x, y, z) and \bar{A}^L(x, y, z).
9: end for
10: Obtain (using Eqs. 46 and 47, respectively):
     • h_{STP}(k) ← histogram feature of the STP_{pattern}(I_c) code map.
     • h_{SSP}(k) ← histogram feature of the SSP_{pattern}(I_c) code map.
11: Obtain the multi-scale histogram feature h_{MTSP} using Eq. 49.
12: return h_{MTSP}

4. Experimental results and discussion

In this section, the efficiency and performance of the proposed MTSP descriptor are extensively evaluated on several publicly available texture datasets and compared to 19 recent state-of-the-art methods using a series of experiments (cf. Table 2). The experiments herein were conducted under the split-sample validation protocol, where half of the samples are randomly chosen for training and the rest of the samples are used for testing. The 1-NN technique with the L1 city-block distance is used for classification. Let us stress that the classification experiments are repeated over 100 random splits to avoid any bias resulting from the database partition, and the estimated accuracies are reported as averaged results. In the following, the considered texture databases and the impact of the feature combination process of the two STP and SSP encoders are first presented, and the findings of the experiments are then discussed.

4.1. Texture datasets

To further verify the capabilities and performance stability of MTSP, extensive experiments are carried out on ten well-known datasets: JerryWu, KTH-TIPS2b, BonnBTF, MBT, Kylberg, Brodatz, VisTex, CUReTgrey and KTH-TIPS (the same datasets used in [38]), and STex. These datasets were selected to cover different representative characteristics in terms of the number of images and classes, and specific challenges. It can be inferred from Table 1 that each database presents specific challenges in terms of translation, scale, view angle, rotation, illumination, etc., thus allowing the evaluation of the performance of the tested existing methods as well as the designed descriptor against these factors.

4.2. Impact of the feature combination process

As pointed out previously, the combination of different feature extraction methods is suitable for exploiting their advantages simultaneously, with the objective of strengthening the discriminant power of the produced encoder and promoting the overall classification results. MTSP is composed of two encoders, namely STP and SSP, and in order to highlight the impact of the combination process, we evaluate the MTSP performance compared to its STP and SSP components applied alone. Figure 5 illustrates the comparison results obtained using the KTH-TIPS2b, VisTex, MBT, CUReT and STex datasets. It can be inferred from Figure 5 that neither of the two encoders always gives the best score against the other, i.e., there are some datasets where STP achieves the best accuracies against SSP and vice versa. Remarkably, we can notice that MTSP, as a hybrid texture description model which extracts complementary texture information from its combined components, significantly improves the classification results of both STP and SSP encoders applied alone, thus giving good reasons for their combination.

4.3. Comparative assessment of performance

4.3.1. Experiment #1: Investigation of performance stability

Table 2 summarizes the achieved average classification accuracies (i.e., over 100 splits) of each tested method on each dataset, as well as the GAP (global average performance) over all the ten used datasets.
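For reproducibility of the protocol just described, the following sketch shows one way to compute such averaged accuracies: repeated 50/50 splits, a 1-NN classifier with the L1 (city-block) distance, and accuracy averaged over the splits. The `extract_mtsp` function and the data arrays are placeholders for whatever descriptor and dataset are being evaluated; they are not part of the paper's code.

```python
import numpy as np

def one_nn_l1_accuracy(train_x, train_y, test_x, test_y):
    """1-NN classification with the L1 (city-block) distance."""
    # Pairwise L1 distances between test and training histograms.
    d = np.abs(test_x[:, None, :] - train_x[None, :, :]).sum(axis=2)
    pred = train_y[np.argmin(d, axis=1)]
    return float(np.mean(pred == test_y))

def split_sample_protocol(features, labels, n_splits=100, seed=0):
    """Average accuracy over random 50/50 train/test splits (per-class split assumed)."""
    rng, accs = np.random.default_rng(seed), []
    for _ in range(n_splits):
        train_idx, test_idx = [], []
        for c in np.unique(labels):
            idx = rng.permutation(np.where(labels == c)[0])
            half = len(idx) // 2
            train_idx.extend(idx[:half])
            test_idx.extend(idx[half:])
        tr, te = np.array(train_idx), np.array(test_idx)
        accs.append(one_nn_l1_accuracy(features[tr], labels[tr],
                                       features[te], labels[te]))
    return float(np.mean(accs))

# features = np.stack([extract_mtsp(img) for img in images])   # hypothetical descriptor call
# print(split_sample_protocol(features, labels))
```

Whether the 50/50 split is stratified per class, as assumed here, is not stated explicitly; the averaging over 100 random splits and the 1-NN/L1 classifier follow the protocol description above.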

No. Name Classes Total samples Challenges


1 Jerry Wu 39 156 Images captured under different imaging direction, surface rotation and illumination direction.
2 Bonn BTF 10 160 Images captured under varying illumination and viewing angle.
3 Brodatz 13 208 Images are not-corrected and acquired with a lack of intraclass variations and without controlled conditions.
4 KTH-TIPS 10 40 Images captured under three poses, nine illumination conditions and nine scales.
5 KTH-TIPS2b 11 176 Images captured under pose, scale, rotation and illumination changes.
6 VisTex 167 2672 Images captured under real world conditions.
7 MBT 154 2464 Images in high spatial resolution, which are common in areas such as remote sensing and astronomy.
8 Kylberg 28 4480 Images are corrected for aberration, vignetting and lens distortion and captured under controlled conditions.
9 CUReT 61 5612 Images with photometric and geometric properties as variations in illumination, viewing angle and rotation.
10 STex 476 7616 Images representing scenes, materials and objects, such as leather, buildings, bark, metal, flowers, etc.

Table 1. Image databases considered in this work. The Table illustrates the properties of each database, including the variety of samples in view point,
rotation, illumination changes, scale, the number of classes, etc.

Descriptor Reference Year 1 2 3 4 5 6 7 8 9 10 GAP Rank


MTSP this paper - 100 100 100 100 95.61 80.93 87.96 99.71 94.98 88.66 94.766 1
DNT-MQP [38] 2021 99.88 99.25 100 100 95.12 78.22 86.91 99.86 95.32 87.8 94.238 2
FLNIP [49] 2020 97.91 95.42 100 100 93.78 78.42 86.92 99.14 93.45 85.51 93.055 3
RALBGC [37] 2018 97.51 98.8 100 100 93.39 77.89 86.22 99.09 93.31 83.77 92.998 4
ARCSLBP [28] 2018 99.53 99.17 99.88 100 93.23 75.76 83.23 99.8 94.03 83.25 92.788 5
MNTCDP [16] 2018 100 100 100 100 90.93 79.53 78.95 98.48 92.4 86.27 92.656 6
ILQP [47] 2019 98.03 98.72 100 100 93.39 75.39 85.8 98.33 91.03 83.21 92.39 7
LETRIST [39] 2018 100 100 99.95 100 94 70.16 80.12 99.88 97.08 81.41 92.258 8
KLBP [17] 2019 97.49 97.34 100 100 91.24 76.36 86.78 98.6 91.12 82.49 92.142 9
LQP [36] 2010 97.72 97.34 100 100 90.55 76.58 86.76 98.28 91.21 82.35 92.079 10
LDTP [33] 2018 98.23 99.88 100 100 90.47 76.76 82.97 97.74 91.54 82.97 92.056 11
LOOP [45] 2018 96.86 98.78 99.97 100 85.81 73.25 83.99 97.75 90.61 77.81 90.483 12
LDEBP [4] 2019 98.87 92.96 99.96 100 87.3 71.26 84.15 98.8 89.33 82.11 90.474 13
LDZP [35] 2019 96.72 96.85 100 99.75 88.53 68.62 84.18 92.5 84.84 71.07 88.306 14
LGONBP [42] 2021 99.35 99.28 99.84 100 85.51 54.62 72.31 99.85 97.21 69.51 87.748 15
LNIP [29] 2019 96.86 77.86 99.73 98.45 82.97 72.42 83.99 96.55 85.88 78.29 87.3 16
DC [46] 2018 94.95 93.36 99.48 100 79.78 58.6 56.24 97.92 80.67 51.94 81.294 17
EULLTP [48] 2019 92.58 93.4 99.38 93.9 78.63 53.53 71.75 94.87 73.35 58.26 80.965 18
LDENP [51] 2018 88.5 88.29 95.4 88.8 65.8 48.93 60.4 87.13 64.61 43.43 73.129 19
QxLBP [26] 2021 - - - - - - 70.05 - - 74.03 72.04 20

Table 2. Overall accuracy by method and texture dataset. The last column represents the GAP of each descriptor over all the considered databases. The
best method over each dataset is highlighted in green, the three worst in red. As a color texture operator, QxLBP is assessed only on color texture databases
(it is tested with pyramid level L=3 as it gives the highest scores).

Fig. 5. Accuracies obtained with the single-scale image descriptors as well as their combination.

Concerning the results shown in Table 2, one can notice that some handcrafted descriptors, LDENP, DC and EULLTP, emerge in the majority of cases as the three weakest methods of the panel (highlighted in red) on almost all the used databases. The majority of the tested methods, with the obvious exception of those previously mentioned, produce promising classification results on the KTH-TIPS database (dataset 4 in Table 2), with scores exceeding 97%. Moreover, some methods like DNT-MQP, LETRIST, MNTCDP, LDEBP, etc., as well as the proposed MTSP, reach a score of 100%, leaving potentially no room for improvement. The same observation can be made for the Brodatz dataset (database 3 in Table 2), where MTSP as well as various evaluated existing methods are able to accurately discern between all classes. The performance of all the tested methods degrades considerably on the VisTex and MBT datasets (datasets 6 and 7 in Table 2), where the achieved scores do not exceed 81% and 88%, respectively (obtained with the proposed method). The overall performance may be improved if more complicated machine learning algorithms, such as the extended nearest neighbor (ENN) and SVM, are used instead of the 1-NN classifier.

Moreover, none of the existing methods achieves good performance on all the used datasets. Indeed, it is important to note that obtaining good results on certain datasets does not necessarily ensure satisfactory classification results on the others. LGONBP clearly illustrates this behavior: it realizes outstanding scores on six databases out of ten, but not on the KTH-TIPS2b (dataset 5), VisTex (dataset 6), MBT (dataset 7) and STex (dataset 10 in Table 2) datasets. Indeed, its scores decline dramatically compared to the MTSP operator (i.e., the top-1 descriptor). The same observation is valid for several other texture operators such as LETRIST, LQP, EULLTP, DC and so on.

It can also be inferred from Table 2 that MTSP offers satisfactory classification results and positions itself as the best texture operator, as it works meaningfully better for eight datasets out of ten: Jerry Wu, Bonn BTF, Brodatz, KTH-TIPS, KTH-TIPS2b, VisTex, MBT and STex (databases 1 to 7 and 10 in Table 2), with a score higher than 80.93%. It obtains the best rank, with a score equal to 94.76%, when we consider the global average performance (GAP) against all the existing descriptors. Furthermore, it is among the top 5 and top 4 texture operators on the Kylberg and CUReT datasets, respectively.
Nevertheless, let us mention that when MTSP does not realize the highest overall accuracies, it yields an interesting competitive score compared to the one realized by the top-1 method. Taking the Kylberg database as an example (database 8 in Table 2), MTSP is ranked at the 5th best position (i.e., MTSP has the fifth-highest accuracy), but it still reaches a score of 99.71%, which is seen as a very satisfying result as it is very close to that of the texture operator ranked at the first position, which reaches 99.88%. Remarkably, MTSP provides superior scores against QxLBP, which was originally conceived for color image representation, on the MBT and STex color texture databases. QxLBP, like a large number of color image descriptors, shows a tendency to be more sensitive to resolution and illumination. Additionally, it typically either ignores spatial correlations between pixels in the image or gives them less weight [38].

It is interesting to note that the satisfactory results realized on KTH-TIPS2b and Jerry Wu indicate that MTSP can tolerate a certain degree of rotation variation. Good performance on these two datasets indicates that MTSP shows reasonable tolerance to rotation when compared to LETRIST, which was originally designed for rotation-invariant texture description. In particular, MTSP gives 95.61% on KTH-TIPS2b vs. 90.08% for LETRIST, indicating a performance improvement of about 5.53%. Furthermore, the good scores of MTSP (above 94.98%) on the Jerry Wu, Bonn BTF, KTH-TIPS2b, KTH-TIPS, CUReT and Kylberg datasets indicate that MTSP has good tolerance to illumination changes. The significant accuracy (100%) obtained on KTH-TIPS indicates that MTSP also has good tolerance to scale changes.

Considering the findings above, it can be concluded that the developed handcrafted MTSP method, despite its small feature vector length (2^8 codes), is relatively efficient. The proposed method ranks first, with scores that are relatively high and stable against the 19 evaluated existing methods on almost all of the 10 texture databases used. These findings indicate that the combination of the STP and SSP features better describes the characteristics of texture images, thus helping to construct a descriptor that works well on various texture databases.

4.3.2. Experiment #2: Statistical significance of the achieved results in terms of accuracy improvement

The purpose of this section is to further prove statistically the performances realized by MTSP vs. the existing evaluated methods, by employing the ranking procedure based on the Wilcoxon signed rank test introduced in [44]. The algorithm is applied to all the pairwise combinations of the 19 evaluated texture operators, including MTSP, on the ten tested databases. Table 3 shows the reached ranking results according to the normalized number of victories (number of wins / (number of used databases × (number of evaluated methods − 1))) realized by each descriptor on all the considered databases. Figure 6 illustrates the produced classification results in the form of a scatter plot: the X axis is the dimension of the feature vectors (on a log2 scale), while the Y axis is the normalized number of victories reached by each evaluated texture operator.

Fig. 6. Ranking results on all the used databases. Blue and orange dots indicate, respectively, the evaluated state-of-the-art methods and the proposed method.

It emerges from both Table 3 and Figure 6 that the conclusions that can be drawn from the analysis of the realized results are coherent with those drawn previously in Section 4.3.1. Indeed, these results reinforce the conclusion that the combination of STP and SSP is capable of representing local texture well, which allows the construction of a texture descriptor that is clearly the most effective among all the other methods. In particular, the normalized number of victories realized by MTSP is 0.788, vs. 0.738 with DNT-MQP (top 2nd), vs. 0.600 with FLNIP, vs. 0.588 with MNTCDP, vs. 0.572 with LETRIST, etc. As mentioned previously, MTSP is based on the LQP concept. Hence, if we consider the performance of LQP as the benchmark, MTSP gives about 71.73% improvement over the ten used databases.

Ranking (1-NN)  Texture descriptor  Victories/comparisons  Dimension
1   MTSP      0.788   256
2   DNT-MQP   0.738   384
3   FLNIP     0.600   1024
4   MNTCDP    0.588   2048
5   LETRIST   0.572   413
6   ARCSLBP   0.566   256
7   RALBGC    0.561   1022
8   ILQP      0.483   1024
9   LDTP      0.477   1022
10  KLBP      0.466   1280
11  LQP       0.461   1024
12  LGONBP    0.427   1404
13  LDEBP     0.366   64
14  LOOP      0.311   256
15  LDZP      0.261   354
16  LNIP      0.216   512
17  DC        0.122   225
18  EULLTP    0.077   32
19  LDENP     0.011   15

Table 3. Ranking results obtained using the Wilcoxon-based ranking test according to the normalized number of victories reached by each evaluated texture operator on all the employed databases.
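As an illustration of how such a victory-based ranking can be computed, the sketch below runs a Wilcoxon signed rank test on the per-split accuracies of every ordered pair of methods and counts a victory when one method is significantly better on a dataset. The exact pairing, significance level and victory rule used in the ranking procedure of [44] may differ; this is only a plausible reading of the normalized-victories formula quoted above.

```python
import numpy as np
from scipy.stats import wilcoxon

def normalized_victories(acc, alpha=0.05):
    """acc: dict method -> array of per-split accuracies, shape (n_datasets, n_splits)."""
    methods = list(acc)
    n_datasets = next(iter(acc.values())).shape[0]
    wins = {m: 0 for m in methods}
    for a in methods:
        for b in methods:
            if a == b:
                continue
            for d in range(n_datasets):
                x, y = acc[a][d], acc[b][d]
                if np.allclose(x, y):
                    continue  # wilcoxon is undefined when all differences are zero
                # One-sided test: is method a significantly better than b on dataset d?
                stat, p = wilcoxon(x, y, alternative="greater")
                if p < alpha:
                    wins[a] += 1
    denom = n_datasets * (len(methods) - 1)
    return {m: wins[m] / denom for m in methods}
```

With 19 compared methods and ten databases, the denominator of the quoted formula is 10 × 18 = 180, so a normalized score of 0.788 for MTSP corresponds to roughly 142 significant pairwise wins.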
5. Implementation and Reproducible Research

The experiments herein were carried out on a laptop equipped with a 2.10 GHz Core i7 CPU and 8 GB of RAM, running the Ubuntu 14.04 (Trusty) operating system. The evaluated methods have been implemented in MATLAB R2013a. Figure 7 illustrates the processing time (in minutes) over the 2464 samples of the MBT dataset (dataset 7 in Table 1), including the computation of feature extraction, distance calculation and 1-NN classification, for all the evaluated methods. It is clear that the designed MTSP texture operator makes the best compromise between computational cost and classification performance.

Fig. 7. The processing time (in minutes) of the evaluated methods.

6. Conclusion

In this paper, we have proposed an efficient feature descriptor referred to as Multi-scale Ternary and Septenary Pattern (MTSP), based on multiple oriented blocks and direction-based neighborhood topologies as well as set theory. MTSP combines the concepts of both LQP- and LTP-like descriptors in the same compact encoding scheme to encode the interactions between pixels within a 3 × 3 grayscale image patch.
The capabilities and performance stability of MTSP have been evaluated on ten challenging texture databases using the 1-NN classifier against 19 recent and advanced state-of-the-art texture operators. The MTSP descriptor showed considerable performance over all the used databases, indicating that it better describes the characteristics of texture images. In future work, we plan to test other sophisticated classifiers with the aim of increasing the classification rate.

References

[1] Al Saidi, I., Rziza, M., Debayle, J. (2022). A New LBP Variant: Corner Rhombus Shape LBP (CRSLBP). Journal of Imaging, 8(7), 200.
[2] Zheng, Z., Xu, B., Ju, J., Guo, Z., You, C., Lei, Q., Zhang, Q. (2022). Circumferential Local Ternary Pattern: New and Efficient Feature Descriptors for Anti-Counterfeiting Pattern Identification. IEEE Transactions on Information Forensics and Security, 17, 970-981.
[3] Li, L., Xia, Z., Wu, J., Yang, L., Han, H. (2022). Face presentation attack detection based on optical flow and texture analysis. Journal of King Saud University-Computer and Information Sciences, 34(4), 1455-1467.
[4] Sucharitha, G., Senapati, R. K. (2019). Biomedical image retrieval by using local directional edge binary patterns and Zernike moments. Multimedia Tools and Applications, 1-18.
[5] Bahram, T. (2022). A texture-based approach for offline writer identification. Journal of King Saud University-Computer and Information Sciences.
[6] Dawood, H., Saleem, S., Hassan, F., Javed, A. (2022). A robust voice spoofing detection system using novel CLS-LBP features and LSTM. Journal of King Saud University-Computer and Information Sciences.
[7] Liu, L., Chen, J., Fieguth, P., Zhao, G., Chellappa, R., Pietikainen, M. (2019). From BoW to CNN: Two decades of texture representation for texture classification. International Journal of Computer Vision, 127(1), 74-109.
[8] Liu, L., Fieguth, P., Guo, Y., Wang, X., Pietikäinen, M. (2017). Local binary features for texture classification: Taxonomy and experimental study. Pattern Recognition, 62, 135-160.
[9] Yang, W., Zhang, X., Li, J. (2020). A local multiple patterns feature descriptor for face recognition. Neurocomputing, 373, 109-122.
[10] Song, T., Feng, J., Luo, L., Gao, C., Li, H. (2020). Robust texture description using local grouped order pattern and non-local binary pattern. IEEE Transactions on Circuits and Systems for Video Technology, 31(1), 189-202.
[11] Huang, W., Yin, H. (2017). Robust face recognition with structural binary gradient patterns. Pattern Recognition, 68, 126-140.
[12] Karczmarek, P., Kiersztyn, A., Pedrycz, W., Dolecki, M. (2017). An application of chain code-based local descriptor and its extension to face recognition. Pattern Recognition, 65, 26-34.
[13] Guo, Z., Li, Q., You, J., Zhang, D., Liu, W. (2012). Local directional derivative pattern for rotation invariant texture classification. Neural Computing and Applications, 21(8), 1893-1904.
[14] Guo, Z., Zhang, L., Zhang, D. (2010). A completed modeling of local binary pattern operator for texture classification. IEEE Transactions on Image Processing, 19(6), 1657-1663.
[15] Fernandez, A., Alvarez, M. X., Bianconi, F. (2011). Image classification with binary gradient contours. Optics and Lasers in Engineering, 49(9-10), 1177-1184.
[16] Kas, M., Ruichek, Y., Messoussi, R. (2018). Mixed neighborhood topology cross decoded patterns for image-based face recognition. Expert Systems with Applications, 114, 119-142.
[17] Tuncer, T., Dogan, S. (2020). Pyramid and multi kernel based local binary pattern for texture recognition. Journal of Ambient Intelligence and Humanized Computing, 11(3), 1241-1252.
[18] Chakraborty, S., Singh, S. K., Chakraborty, P. (2018). Centre symmetric quadruple pattern: A novel descriptor for facial image recognition and retrieval. Pattern Recognition Letters, 115, 50-58.
[19] El Idrissi, A., El merabet, Y., Ruichek, Y. (2020). Palmprint recognition using state-of-the-art local texture descriptors: a comparative study. IET Biometrics, 9(4), 143-153.
[20] Hu, S., Pan, Z., Dong, J., Ren, X. (2022). A Novel Adaptively Binarizing Magnitude Vector Method in Local Binary Pattern Based Framework for Texture Classification. IEEE Signal Processing Letters, 29, 852-856.
[21] Bhattacharjee, D., Roy, H. (2021). Pattern of local gravitational force (PLGF): A novel local image descriptor. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(2), 595-607.
[22] Ojala, T., Pietikainen, M., Harwood, D. (1996). A comparative study of texture measures with classification based on feature distributions. Pattern Recognition, 29(1), 51-59.
[23] Shih, H. C., Cheng, H. Y., Fu, J. C. (2020). Image classification using synchronized rotation local ternary pattern. IEEE Sensors Journal, 20(3), 1656-1663.
[24] Lan, R., Lu, H., Zhou, Y., Liu, Z., Luo, X. (2019). An LBP encoding scheme jointly using quaternionic representation and angular information. Neural Computing and Applications, 1-7.
[25] Luo, Q., Su, J., Yang, C., Silven, O., Liu, L. (2022). Scale-Selective and Noise-Robust Extended Local Binary Pattern for Texture Classification. Pattern Recognition, 108901.
[26] Song, T., Xin, L., Gao, C., Zhang, T., Huang, Y. (2021). Quaternionic extended local binary pattern with adaptive structural pyramid pooling for color image representation. Pattern Recognition, 115, 107891.
[27] Kas, M., Ruichek, Y., Messoussi, R. (2020). Multi level directional cross binary patterns: new handcrafted descriptor for SVM-based texture classification. Engineering Applications of Artificial Intelligence, 94, 103743.
[28] Ruichek, Y. (2019). Attractive-and-repulsive center-symmetric local binary patterns for texture classification. Engineering Applications of Artificial Intelligence, 78, 158-172.

[29] Banerjee, P., Bhunia, A. K., Bhattacharyya, A., Roy, P. P., Murala, S. (2018). Local Neighborhood Intensity Pattern: A new texture feature descriptor for image retrieval. Expert Systems with Applications, 113, 100-115.
[30] Liu, J., Chen, Y., Sun, S. (2019). A novel local texture feature extraction
method called multi-direction local binary pattern. Multimedia Tools and
Applications, 78(13), 18735-18750
[31] Karanwal, S., Diwakar, M. (2021). OD-LBP: Orthogonal difference-
local binary pattern for Face Recognition. Digital Signal Processing, 110,
102948.
[32] El Khadiri, I., El merabet, Y., Tarawneh, A. S., Ruichek, Y., Chetverikov,
D., Touahni, R., Hassanat, A. B. (2021). Petersen Graph Multi-
Orientation Based Multi-Scale Ternary Pattern (PGMO-MSTP): An Effi-
cient Descriptor for Texture and Material Recognition. IEEE Transactions
on Image Processing, 30, 4571-4586.
[33] El-khadiri, I., Chahi, A., El-merabet, Y., Ruichek, Y., Touahni, R. (2018).
Local directional ternary pattern: A new texture descriptor for texture
classification. Computer Vision and Image Understanding, 169, 14-27.
[34] Song, T., Li, H., Meng, F., Wu, Q., Luo, B. (2015). Exploring space-frequency co-occurrences via local quantized patterns for texture representation. Pattern Recognition, 48(8), 2621-2632.
[35] Roy, S. K., Chanda, B., Chaudhuri, B. B., Banerjee, S., Ghosh, D. K.,
Dubey, S. R. (2018). Local directional ZigZag pattern: A rotation invari-
ant descriptor for texture classification. Pattern Recognition Letters, 108,
23-30.
[36] Nanni, L., Lumini, A., Brahnam, S.: Local binary patterns variants as
texture descriptors for medical image analysis. Artif. Intell. Med. 49(2),
117-125 (2010)
[37] El Khadiri, I., Kas, M., El Merabet, Y., Ruichek, Y., Touahni, R. (2018).
Repulsive-and-attractive local binary gradient contours: New and efficient
feature descriptors for texture classification. Information Sciences, 467,
634-653.
[38] Rachdi, E., El merabet, Y., Akhtar, Z., Messoussi, R. (2020). Directional
neighborhood topologies based multi-scale quinary pattern for texture
classification. IEEE Access, 8, 212233-212246..
[39] T. Song, H. Li, F. Meng, Q. Wu, J. Cai, Letrist: locally encoded transform
feature histogram for rotation-invariant texture classification, IEEE Trans.
Circuits Syst. Video Technol. (2017).
[40] I. El khadiri, El merabet, Y., Ruichek, Y., D. Chetverikov, and R. Touahni.
”O3S-MTP: Oriented star sampling structure based multi-scale ternary
pattern for texture classification.” Signal Processing: Image Communica-
tion (2020): 115830.
[41] Tan, X., Triggs, B. (2010). Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Transactions on Image Processing, 19(6), 1635-1650.
[42] Song, T., Feng, J., Luo, L., Gao, C., Li, H. (2020). Robust texture descrip-
tion using local grouped order pattern and non-local binary pattern. IEEE
Trans. Circuits Syst. Video Technol. 31(1), 189-202
[43] S. Chakraborty, S. K. Singh, and P. Chakraborty, “Local quadru-
ple pattern:A novel descriptor for facial image recognition and re-
trieval,”Comput.Electr. Eng., vol. 62, pp. 92–104, Aug. 2017
[44] Ruichek, Y. (2018). Local concave-and-convex micro-structure patterns
for texture classification. Pattern Recognition, 76, 303-322.
[45] Chakraborti, T., McCane, B., Mills, S., Pal, U. (2018). LOOP Descriptor:
Local Optimal-Oriented Pattern. IEEE Signal Processing Letters, 25(5),
635-639.
[46] Ouslimani, F., Ouslimani, A., Ameur, Z. (2019). Rotation-invariant fea-
tures based on directional coding for texture classification. Neural Com-
puting and Applications, 31(10), 6393-6400.
[47] Armi, L., Fekri-Ershad, S. (2019). Texture image Classification based
on improved local Quinary patterns. Multimedia Tools and Applications,
78(14), 18995-19018.
[48] Kabbai, L., Abdellaoui, M., Douik, A. (2019). Image classification by
combining local and global features. The Visual Computer, 35(5), 679-
693.
[49] Ghose, S., Das, A., Bhunia, A.K. et al. Fractional Local Neighborhood
Intensity Pattern for Image Retrieval using Genetic Algorithm. Multimed
Tools Appl 79, 18527–18552 (2020).
[50] Shu, Xin, et al. ”Using global information to refine local patterns for tex-
ture representation and classification.” Pattern Recognition 131 (2022):
108843.
[51] Pillai, A., Soundrapandiyan, R., Satapathy, S., Satapathy, S. C., Jung, K. H., Krishnan, R. (2018). Local diagonal extrema number pattern: a new feature descriptor for face recognition. Future Generation Computer Systems, 81, 297-306.