Pattern Descriptors Orientation and MAP Firefly Algorithm Based Brain Pathology Classification Using Hybridized Machine Learning Algorithm
ABSTRACT Magnetic Resonance Imaging (MRI) is a significant technique used to diagnose brain abnormalities at early stages. This paper proposes a novel method to classify brain abnormalities (tumor and stroke) in MRI images using a hybridized machine learning algorithm. The proposed methodology includes feature extraction (texture, intensity, and shape), feature selection, and classification. The texture features are extracted by proposing a neoteric directional-based quantized extrema pattern. The intensity features are extracted by proposing a clustering-based wavelet transform. The shape-based extraction is performed using conventional shape descriptors. A Maximum A Posteriori (MAP) based firefly algorithm is proposed for feature selection. Finally, a hybridized support vector based random forest classifier is used for the classification. The MRI brain tumor and stroke images are detected and categorized into four classes: high-grade tumor, low-grade tumor, acute stroke, and sub-acute stroke. In addition, three different regions are identified in tumor detection, namely the edema and the tumor (necrotic and non-enhancing) regions. The accuracy of the proposed method is analyzed using various performance metrics in comparison with a few state-of-the-art classification methods. The proposed methodology achieves a reliable accuracy of 88.3% for classifying brain tumor cases and 99.2% for brain stroke classification. The proposed HSVFC attains the best F-score of 0.91 and the lowest FPR of 0.06 for brain tumor classification; likewise, it attains the best F-score of 0.99 and an FPR of 0.0 for brain stroke classification. In the experimental analysis, the mean accuracies of the different classifiers for categorizing MRI brain tumors are 76.55%, 49.24%, 65.12%, 74.36%, 69.25%, and 55.61% for HSVFC, SVM, FFNN, DC, ResNet-18, and KNN, respectively. Similarly, in identifying MRI brain stroke, the average accuracies for HSVFC, SVM, FFNN, DC, ResNet-18, and KNN are 98.17%, 53.40%, 85.8%, 87.5%, 70.06%, and 61.24%, respectively.
INDEX TERMS Neoteric directional quantized extrema pattern, optimal MAP firefly algorithm, edema,
necrotic and non-enhancing region.
I. INTRODUCTION
In recent decades, research on automatic tumor and stroke diagnosis has increased. The feature representation
plays a salient role in high-level medical tasks like classification [1]. A tumor occurs due to the growth of unwanted cells in an uncontrolled manner. A primary brain tumor starts from the brain and has different characteristics such as size, shape, location, and image intensities [2]. A stroke happens due to a sudden interruption in the blood supply of the brain [3]. Early detection of these diseases facilitates an earlier diagnosis and increases the probability of the individual's survival. According to the World Health Organization (WHO), 15 million people across the world suffer from stroke annually. Of these, 5 million die, and another 5 million are permanently disabled. The incidence rate of stroke per year in India is between 145-154 per 100,000 individuals, while that of central nervous system (CNS) tumors ranges from 5 to 10 per 100,000 population with an increasing trend. In the United States, an estimated 23,890 adults (13,590 men and 10,300 women) were diagnosed with primary brain tumors in 2020 alone.

The above-mentioned diseases are conventionally diagnosed through medical imaging modalities such as computed tomography (CT), MRI, ultrasound scan, and positron emission tomography (PET). Among these imaging methods, MRI captures the inner parts of the brain more accurately than the other methods, enabling an accurate diagnosis [4]. Magnetic resonance based diffusion and perfusion analyses are more sensitive for the detection of tumors and stroke, especially in earlier stages. The precision of MRI in detecting brain tumor and stroke is attributed to the clearer vision of brain tissues obtained with the help of magnetic fields and radio waves.

In clinical routine, the diagnosis of brain tumor and stroke employs different MRI sequences such as T1-weighted (T1-w), T1-weighted with contrast enhancement (T1-wc), T2-weighted (T2-w), Proton Density-weighted (PD-w), and Fluid-Attenuated Inversion Recovery (FLAIR). T1-w is the most commonly utilized sequence for a brain tumor and is used for simple annotation of healthy tissues. The borders of the brain tumor can be highlighted using T1-wc, which helps distinguish the active cell region from the necrotic core regions easily. The edema region can be made brighter by using the T2-w sequence. FLAIR is a highly effective sequence that helps to separate the edema region from Cerebro-Spinal Fluid (CSF) [5]. In the case of stroke, Diffusion-Weighted Imaging (DWI) is mostly used for detecting acute stroke. It analyses the biological tissue structure, which depends on the motion of water molecules at the microscopic level [6]. Using these MRI sequences, the tasks of feature extraction, feature selection, and classification are applied to autonomously diagnose the abnormalities in the brain.

The rest of this paper is structured as follows: A detailed description of the related work on brain tumor and stroke detection and classification is provided in section II. The procedure and description of the proposed technique are explained in section III. The comparative results of the proposed technique with traditional approaches are depicted in section IV. Lastly, section V concludes the proficiency of the proposed approach.

II. RELATED WORK
Recent years have seen a sharp increase in machine learning (ML) applications in medical image analysis [7]–[17]. Out of this vast literature, particularly, in [18], the researchers proposed an improved local derivative pattern for feature extraction. Here, the local derivative pattern variation method is used to extract the diagonal directional pattern features for brain pathology detection. Classification is carried out by a k-nearest neighbour, a convolutional neural network (CNN), and a support vector machine (SVM). The performance of that system is limited due to its time complexity. Maier et al. [19] presented an article comparing nine different classification techniques using a multi-parametric MRI dataset. High-level machine learning algorithms like Convolutional Neural Networks (CNN) and Random Decision Forests (RDF) produced significant results in classification, with 77% accuracy for CNN and 82% for RDF. The limitations of this work are the challenges in hyperparameter tuning and the higher time complexity in training the features. A new amalgam technique in a computer-aided diagnosis (CAD) system for the detection of abnormality in MRI brain images is proposed in [20]. After pre-processing, the features are extracted using the Gabor filter and the Walsh-Hadamard transform (WHT). Finally, SVM is used for the classification of an abnormality like a tumor. The major demerit of this system is its poor energy compaction. In [21] the researchers utilized wavelet texture features along with several machine learning algorithms. Intensity, neighborhood information, intensity difference, and wavelet-based texture features are extracted and applied on multi-modality MRI images with numerous classifiers. The utilization of the wavelet-based texture feature with a random forest classifier maximizes the classification accuracy to a rate of 81% for low-grade tumors and 85% for high-grade tumors. Automatic diagnosis and detection of stroke in DWI images are presented in [22]. The rule-based classification approach is chosen because of its simplicity and its ability to classify stroke lesions. However, the rule-based method performs poorly in terms of prediction quality. The analysis of DWI and FLAIR MRI images for the detection of acute stroke is presented in [23]. Three machine learning algorithms, namely SVM, logistic regression, and random forest, were utilized to detect an acute stroke. Though the performance of the machine learning algorithms exceeds human perception, they fail to detect stroke patients with small infarctions.

A new automated method to differentiate different cancer diseases from MRI images has been proposed in [24]. Geometrical properties such as shape, texture, and intensity are used for classification, and the results are validated on a local and a publicly available dataset with different cross-validations on the feature set. A novel pattern descriptor referred to as a directional local quantized extrema pattern for image retrieval and indexing is presented in [25].
The standard local binary patterns and local ternary patterns encode a greyscale relationship. This technique uses ternary patterns from a Horizontal–Vertical–Diagonal–Anti-diagonal structure to encode more spatial structural data and obtain better retrieval, but the assortment of features in a localized direction limits the classifier performance. Jothi et al. [26] utilized a Tolerance Rough Set Firefly-based Quick Reduct (TRSFFQR) feature selection algorithm for brain tumor detection. The shape, intensity, and texture-based features are extracted from segmented images of the MRI brain. TRS and the Firefly Algorithm (FA) are utilized for selecting the imperious features of a brain tumor. The results obtained from this work show that the TRS firefly-based quick reduct algorithm effectively selects the useful features with substantial quality, but the network convergence speed is slow when compared to conventional algorithms.

Automated segmentation of brain tumors in multimodal MRI images is proposed in [27]. A Fully Convolutional Neural Network (FCNN) and hand-designed features are used for the classification of MRI images. Also, random forest classification is used to classify the MRI data. This hybrid approach of machine learning feature extraction using FCNN and the proposed texture features offers better results, but it is spatially invariant to the input data. A region-based Active Contour Method (ACM) for segmentation and classification using an artificial neural network based Levenberg-Marquardt algorithm is proposed in [28]. The process of texture and shape feature extraction helps achieve accurate classification. The combination of the classifier with ACM segmentation offers high accuracy (93.74%), good sensitivity (90.98%), and specificity (87.47%) measures, combined with a few shortcomings such as the inability to segregate discontinuous objects. Machhale et al. [29] presented an intellectual classification system for identifying the status of brain images by combining the SVM and KNN classifier approaches. Even though the results of this hybrid approach give 98% accuracy, it requires a longer time to predict the status of the brain images if the database is larger. Gupta et al. [30] proposed a classification model for MRI-FLAIR images to detect a brain stroke. DWT is utilized for extracting the features and Principal Component Analysis (PCA) is employed for selecting the optimal features. However, the classification model achieves only 88% accuracy and suffers from high time complexity. An innovative system for detecting and classifying brain tumors is described in [31]. In this system, rough set theory is developed for the process of feature extraction, and the Particle Swarm Optimization Neural Network (PSO-NN) is utilized for classifying the abnormalities in the MRI brain images. However, a poor convergence rate limits the performance of the classifier. Griffis et al. [32] presented an approach for the automatic identification of stroke lesions by using naive Bayes classification in T1-w MRI images. The major drawback of this approach is that it is sensitive to indirect lesions, which makes detection difficult. A quantitative apparent diffusion coefficient along with the relaxation time T2 is used in [33] to characterize the contrast-enhancing brain tumor region and the region of peritumoral edema. From this, the potential value and relationship of both the in vivo quantization of the apparent diffusion coefficient and the T2 relaxation times are investigated to characterize the cellularity of brain tumors and tumor-related edema. An intra-voxel assessment in magnetic resonance imaging is presented in [34]. This novel technique involves a combination of various acquisitions known as intravoxel analysis, which has been utilized in the evaluation of spin-spin relaxation and the identification of multiple tissues. In this technique, an exceedingly small number of clinical data sets is used for evaluation, which is the major disadvantage of this system.

The brain intracranial hemorrhage classification using a synergic deep learning model is presented in [35]. Pre-processing is initially done using a Gabor filter, and grab-cut based segmentation is used to identify the affected portion. Then the synergic deep learning model is utilized for extracting the features and a softmax layer is used for classification. In [36], the various components of evolutionary algorithms, including the fitness function, parent selection, population, and crossover operators, for the feature selection process have been discussed in detail. The fitness function forms the basis for the selection of features, which opens the door for improving the classifier performance by representing the task to be solved in an evolutionary context. The classification of abnormal cervical cells using a transfer learning approach is depicted in [37]. ResNet50, VGG19, Inception V3, and SqueezeNet are utilized for abnormal cell classification. An accuracy of 97.89% is attained from ResNet50 in combination with a random forest classifier for predicting the cancerous cervical cells.

III. PROPOSED METHODOLOGY
This present work aims to classify brain tumors and stroke in MRI images using hybridized machine learning algorithms. The major contributions of the proposed work are highlighted below:
• Intensity feature extraction by proposing the Intensity-based Clustering Wavelet Transform (ICWT).
• Feature selection by proposing the Maximum A Posteriori (MAP) based Firefly Algorithm (FFA).
• Image classification by proposing the Hybridized Support Vector based Forest Classifier (HSVFC).
• Texture feature extraction by proposing the Neoteric Directional based Quantized Extrema Pattern (NDQEP).
The flow diagram of the proposed methodology is shown in Figure 1.

This hybrid machine learning framework is developed to obtain a robust model capable of handling MRI scans with hypo- and hyper-intense sequences. In some MRI sequences, the affected lesions appear hypo-intense, which makes it difficult for the radiologists to distinguish between the abnormal and the normal regions. Hence image fusion is utilized to make the lesions hyper-intense in order to get better visual perception. Further, the dimension of the affected region is not predicted precisely in MRI, so image segmentation is performed to predict the exact boundary of the abnormal region.
FIGURE 1. Flow diagram of the proposed methodology. Legend: NDQEP - Neoteric Directional based Quantized Extrema Pattern; GLCM - Gray Level Co-occurrence Matrix; ICWT - Intensity-based Clustering Wavelet Transform; MAP - Maximum A Posteriori; FFA - Firefly Algorithm; HSVFC - Hybridized Support Vector based Forest Classifier.
(Image fusion and segmentation were carried out in the previous phase of this work, which is reported in [38].) The texture feature extraction is then carried out in the present work to extract information from the image at various angles and to identify the various structures of the image. Intensity feature extraction is also performed in this work, because the identification of different grades of tumor and stroke is based on the pixel intensity. The selection of the best features is necessary to make the classification process easier, and it helps the radiologist to accurately diagnose the types of tumor and stroke.

Initially, the input segmented image is taken from the earlier work, which includes image fusion and segmentation [38]. The image fusion is performed using the Gradient-based Discrete Wavelet Transform (GDWT). In the fusion process, two input images are decomposed into low-frequency and high-frequency sub-bands using the Discrete Haar Wavelet Transform, and gradient-based fusion rules are applied to the corresponding sub-bands. After fusing the images, segmentation is performed based on the Intensity Factorized Threshold (IFT) technique. Histogram equalization is performed on the fused image, and matrix factorization is done to estimate the threshold for segmenting the affected region.
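As an illustration of the wavelet-fusion step recapped above, the following minimal MATLAB sketch decomposes two registered slices with the Haar wavelet and recombines them with a simple averaging/maximum rule. It is only a sketch: the gradient-based rule of the GDWT in [38] is not reproduced, the Wavelet Toolbox functions dwt2/idwt2 are assumed to be available, and the image variables are synthetic placeholders.

% Minimal Haar-wavelet fusion sketch (requires the Wavelet Toolbox).
% img1/img2 stand for two registered MRI slices of the same patient.
img1 = double(255 * rand(256));      % placeholder for, e.g., a T2-w slice
img2 = double(255 * rand(256));      % placeholder for, e.g., a FLAIR slice

% One-level Haar decomposition into approximation and detail sub-bands.
[a1, h1, v1, d1] = dwt2(img1, 'haar');
[a2, h2, v2, d2] = dwt2(img2, 'haar');

% Simple fusion rule: average the low-frequency sub-bands and keep the
% larger-magnitude high-frequency coefficients (a stand-in for the
% gradient-based rule used in [38]).
aF = (a1 + a2) / 2;
hF = h1 .* (abs(h1) >= abs(h2)) + h2 .* (abs(h1) < abs(h2));
vF = v1 .* (abs(v1) >= abs(v2)) + v2 .* (abs(v1) < abs(v2));
dF = d1 .* (abs(d1) >= abs(d2)) + d2 .* (abs(d1) < abs(d2));

fused = idwt2(aF, hF, vF, dF, 'haar');   % reconstruct the fused slice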
A. TEXTURE FEATURE EXTRACTION USING NEOTERIC DIRECTIONAL BASED QUANTIZED EXTREMA PATTERN (NDQEP)
Initially, the segmented images obtained from [38] are taken as the input for texture extraction. Each image is then separated into overlapping patterns in matrix form (Ip) with 7 × 7 dimensions (i.e., 49 pixels) to extract the spatial relation between any pair of neighbors in a local region along the given directions, by varying the limit variables in each row and column index (i and j) from 1 to m−7 and 1 to n−7, respectively:

Ip = lim_{i→1 to (m−7), j→1 to (n−7)} Seg_ima[a, b]        (1)

Here, 'a' indicates the indices from the ith row to the (i+6)th row, 'b' denotes the indices from the jth column to the (j+6)th column, Seg_ima is the segmented image obtained from our earlier work [38], m and n represent the total number of rows and columns of the segmented image (i.e., 256), and [.] indicates the matrix representation.

The Local Direction Extrema Values (V) in all directions are computed with dimension 7 × 7 by subtracting the center pixel from each neighbourhood pixel in the overlapping pattern. V is represented by

V[x, y] = lim_{x→1 to 7, y→1 to 7} (Ip[x, y] − Ic)        (2)

where x and y represent the scalar values for accessing the row and column indices of the formed pattern matrix, and Ic is the center pixel of the pattern matrix. For computing the upper and lower patterns, the threshold value (th) is calculated by taking the median of the obtained LDEV results.
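A compact MATLAB sketch of Eqs. (1)–(2) and the median threshold is given below; the segmented slice is replaced by random data so that the snippet is self-contained, and the variable names simply follow the text.

% Sketch of the 7x7 overlapping pattern extraction (Eq. (1)), the local
% direction extrema values (Eq. (2)) and the median threshold th.
Segima = 255 * rand(256);                 % stand-in for the segmented slice
[m, n] = size(Segima);
th = zeros(m - 7, n - 7);                 % one threshold per window position
for i = 1:m - 7
    for j = 1:n - 7
        Ip = Segima(i:i + 6, j:j + 6);    % 7x7 overlapping pattern, Eq. (1)
        Ic = Ip(4, 4);                    % center pixel of the pattern
        V  = Ip - Ic;                     % local direction extrema, Eq. (2)
        th(i, j) = median(V(:));          % threshold from the LDEV results
    end
end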
Also, from the LDEV results, the multi-directional pixel values are estimated in all the different directions for extracting the upper and lower binary patterns, which are shown in figure 2. Here the horizontal (0° and 180°), vertical (90° and 270°), diagonal (45°, 135°, 225°, 315°), and anti-diagonal (30°, 60°, 120°, 150°, 210°, 240°, 300°, and 330°) directions are considered. Using the function f(P1, P2, P3) of the multi-directional pixel values, the upper and lower pattern pixel values are given by

Upper_pattern = 1, Lower_pattern = 0    if P1 < Ic and P2 < Ic − th
Upper_pattern = 0, Lower_pattern = 1    if P1 > Ic and P2 > Ic
Upper_pattern = 0, Lower_pattern = 0    otherwise        (4)
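The ternary rule of Eq. (4) can be written as a small helper, sketched below for a single pair of directional neighbors; the function name is illustrative and not taken from the paper's code.

function [upper, lower] = quantizedPattern(P1, P2, Ic, th)
% Upper/lower pattern bits for one pair of directional neighbors (Eq. (4)).
upper = 0;
lower = 0;
if P1 < Ic && P2 < Ic - th        % both neighbors darker than the center
    upper = 1;
elseif P1 > Ic && P2 > Ic         % both neighbors brighter than the center
    lower = 1;
end                               % otherwise both bits remain 0
end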
In the ICWT, C is the number of coefficients and K is the number of pixels, which is 2. For predicting the high-intensity pixels, the cluster distance is then calculated.

In the MAP-based firefly algorithm, f_d is the current firefly position, f_s is the previous firefly position, LI_d represents the light intensity of the current firefly, α is the adjusting parameter (i.e., 0.2), and ε = 2.220 × 10⁻¹⁶ is a constant. β is the estimation of attractiveness at distance r and is given as β = I_0 e^(−γr²).
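Since the position-update equation itself does not survive in this section, the following sketch shows a generic firefly move that is consistent with the parameters just described (α = 0.2, machine-epsilon constant ε, attractiveness β = I_0 e^(−γr²)). It illustrates the standard firefly step only, not the full MAP-based selection objective; all variable values are illustrative.

% Generic firefly position update consistent with the parameters above.
I0       = 1.0;                 % base attractiveness
gamma    = 1.0;                 % light absorption coefficient
alphaAdj = 0.2;                 % adjusting (randomization) parameter
epsln    = 2.220e-16;           % small constant, avoids a zero distance

fs = rand(1, 10);               % previous position of one firefly (10 features)
fb = rand(1, 10);               % position of a brighter firefly attracting it

r    = norm(fs - fb) + epsln;               % distance between the two fireflies
beta = I0 * exp(-gamma * r^2);              % attractiveness at distance r
fd   = fs + beta * (fb - fs) ...            % move towards the brighter firefly
          + alphaAdj * (rand(1, 10) - 0.5); % random exploration term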
F. CLASSIFICATION APPROACH USING HYBRIDIZED SUPPORT VECTOR BASED FOREST CLASSIFIER (HSVFC)
Classification is the process of categorizing the images based on the trained features. Generally, the level or grade of the tumor and stroke can be determined by applying a classification technique [43]. Here, the classification is performed based on the combination of a Support Vector Machine (SVM) and the Random Forest classifier, which is represented in figure 3. The dataset is comprised of different levels of tumor images (High-Grade and Low-Grade) and various types of stroke images (Sub-Acute and Acute), based on which binary labeling (L) is done separately for tumor (High-Grade: 1; Low-Grade: 0) and stroke (Sub-Acute: 1; Acute: 0) images.

The total number of classes (CL) is predicted by considering the distinct values in the formed label (L) without repetitions (CL = 2 for tumor and CL = 2 for stroke). Initially, the SVM (binary) classifier is trained with the dataset features and the formed label (L), from which the weight vector is calculated using Lagrange multipliers and is given by

W_k(t_f) = lim_{k→1 to CL, t_f→1 to FD} Σ_{o=1}^{I_Tr} α(o) · L(o) · Train_fea[o, t_f]        (14)

here, W_k is the weight vector of each class, α is a vector of Lagrange multipliers, which varies from 0 to 0.5 over I_Tr, I_Tr depicts the total number of tumor or stroke images (that are to be trained), Train_fea signifies the input dataset features in matrix form, and FD denotes the indices from 1 to the total number of extracted features (i.e., 61). After this, the bias, which separates the two classes, is estimated as

B_k(t_f) = lim_{k→1 to CL, t_f→1 to FD} (1/I_Tr) Σ_{o=1}^{I_Tr} (L(o) − Train_fea[o, t_f] · W_k(t_f))        (15)
where B_k is the bias vector of each class with the dimension of the extracted features. From the estimated bias and weight vector, the support vectors (SV), which help to determine the appropriate margins between the two classes, are formed as

SV_k(t_f) = lim_{t_f→1 to FD} (−B_k(t_f)) / W_k(t_f)        (16)

The train features and the test features are updated using hyperparameters such as the scale factor (S2) and the shift (S1) obtained from the formed support vectors, and are denoted as

Train_fea[o, t_f] = lim_{o→1 to I_Tr, t_f→1 to FD} (S2(t_f) · Train_fea[o, t_f] + S1(t_f))        (17)

Test_fea[u, t_f] = lim_{u→1 to I_Te, t_f→1 to FD} (S2(t_f) · Test_fea[u, t_f] + S1(t_f))        (18)

where I_Te represents the total number of tumor or stroke images that are to be tested, Test_fea indicates the features of the testing images, S1 is calculated as the mean of the support vectors, and S2 is calculated as the inverse of the standard deviation of the support vectors. The updated train features and the formed label (L) are given as input to the random forest algorithm for regression tree formation. In the RF algorithm, bootstrapping is done initially, in which Q features are selected randomly from the given updated train features for forming the decision trees. The Gini index (G) is considered as the splitting criterion at each node of the tree and is given by

Gini_p = 1 − Σ_{ab=1}^{CL} (P_ab)²        (19)

where P_ab represents the probability of the ab-th class among the Q features associated with the parent node (p), P_ab = M_ab / N, M_ab indicates the number of features in the ab-th class among the Q features, and N signifies the total number of Q features. Similarly, the splitting criterion for the left and the right nodes of the parent node is given by

Gini_split = (N1/N) · Gini_t1 + (N2/N) · Gini_t2        (20)

where N1 and N2 imply the number of features at nodes t1 and t2, i.e., the left and right nodes. The Gini index value must be minimum, and it ranges from 0 to 1. The node splitting in the tree continues until the value of the Gini index is zero. The tree formation is repeated for the given two classes. Moreover, the hyperparameters used for the RF classifier include 'out-of-bag error prediction (OOBPred)' and 'minleafsize' (the minimum number of observations per leaf). The condition for OOBPred is 'ON' and the value of 'minleafsize' is 1. Thus, the RF classifier is trained with the updated train features to classify the updated test features. After tree formation, the votes f at each formed decision tree are calculated as

f̂_k(u) = lim_{k→1 to CL, u→1 to I_Te, t_f→1 to FD} {f_k(Test_fea[u, t_f])}        (21)

where f_k denotes the decision tree at each class, and

class(u) = lim_{u→1 to I_Te} arg max_{k→1 to CL} (f̂_k(u))        (22)

Equation (22) represents the predicted class (high-grade, low-grade, sub-acute, or acute) for all the given test features. It is obtained by selecting the class label that has the maximum vote value.
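The train-then-vote flow described above can be sketched in MATLAB as follows, using fitcsvm and TreeBagger from the Statistics and Machine Learning Toolbox as stand-ins for the hand-derived weight, bias, and voting equations (14)–(22). The support-vector based shift/scale of Eqs. (17)–(18) is applied explicitly; the data, the number of trees, and all variable names are illustrative assumptions rather than the paper's released implementation.

% Hedged sketch of the hybridized SVM + random-forest flow (Eqs. 14-22).
rng(1);
Xtrain = rand(200, 61);  ytrain = randi([0 1], 200, 1);   % 61 features, binary labels
Xtest  = rand(50, 61);

% Linear SVM: its weight vector and bias play the role of Eqs. (14)-(15).
svmMdl = fitcsvm(Xtrain, ytrain, 'KernelFunction', 'linear');
w = svmMdl.Beta;                        % weight vector (kept for reference)
b = svmMdl.Bias;                        % bias separating the two classes

% Shift/scale derived from the support vectors, as in Eqs. (17)-(18).
sv = svmMdl.SupportVectors;
s1 = mean(sv, 1);                       % shift = mean of the support vectors
s2 = 1 ./ (std(sv, 0, 1) + eps);        % scale = inverse standard deviation
Xtrain_u = Xtrain .* s2 + s1;           % updated train features
Xtest_u  = Xtest  .* s2 + s1;           % updated test features

% Random forest on the updated features, with the stated hyperparameters.
rfMdl = TreeBagger(50, Xtrain_u, ytrain, 'Method', 'classification', ...
                   'OOBPrediction', 'on', 'MinLeafSize', 1);
pred = predict(rfMdl, Xtest_u);         % majority vote over the trees, Eq. (22)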
IV. EXPERIMENTAL RESULTS AND DISCUSSION
In this work, MRI brain images from the BRATS (Brain Tumor Segmentation) 2013 [44] and ISLES (Ischemic Stroke Lesion Segmentation) [45] databases, having an image resolution of 96 dpi, are used for brain pathology classification. The MRI brain tumor image dataset is obtained from the BRATS 2013 challenge.¹ The MRI brain stroke image dataset is obtained from the ISLES 2015 challenge.²

The tumor dataset consists of low-grade and high-grade patients, whereas the stroke dataset includes sub-acute and acute patients. Each low-grade and high-grade tumor patient has the MRI image sequences T2-w, T1-w, T1-c, and FLAIR; each sub-acute stroke patient has the sequences T2-w, T1-w, DWI, and FLAIR; whereas each acute stroke patient has the sequences T2-w, T1-w, and DWI. In this present work, 1100 image samples (or patients) are considered for each MRI sequence (among which 600 samples are tumor-affected images and 500 samples are stroke-affected images), i.e., around 4,150 images are taken for experimentation. The Ground Truth (GT) images are taken from the same BRATS 2013 and ISLES 2015 databases and are used for the calculation of the performance metrics.

As for computing, we have used a desktop personal computer with an Intel Core i5-6200U CPU running at 2.40 GHz and 4 GB of RAM, operating on the Windows 10 platform. MATLAB 2019b is used to implement the proposed methodology.

A. GRAPHICAL USER INTERFACE (GUI) IMPLEMENTATION
The experimental results are obtained by implementing a GUI in the MATLAB 2019b platform. The following steps were involved:
Step 1. Selection of the input image.
Step 2. Pathology detection as tumor or stroke using the KNN algorithm [38].
Step 3. Segmentation of the image using the IFT algorithm. It is carried out both for individual MRI sequences and for fused images, which are obtained using the GDWT algorithm [38].
Step 4. Feature extraction using NDQEP, GLCM, shape-based extraction, and ICWT. Feature selection for the ICWT features using MFFA.
Step 5. Classification of MRI images as high-grade or low-grade tumor, and sub-acute or acute stroke, using HSVFC.

¹https://www.smir.ch/BRATS/Start2013
²https://www.smir.ch/ISLES/Start2015
FIGURE 4. Input MRI sequences and classification output for brain tumor image of: (A) patient no.1, slice no.95; (B) patient no.14, slice no.95; (C) patient
no.4, slice no.109; (D) patient no.6, slice no.143; and (E) patient no.10, slice no.77.
Step 6. Performance analysis of different classifiers to prove the efficiency of the proposed classifier.

The classification results for the MRI brain tumor images of patient no. 1, slice no. 95, and patient no. 14, slice no. 95 are shown in figure 4(A) and figure 4(B). The proposed HSVFC algorithm appropriately classifies the given MRI input as a high-grade tumor; besides, three different affected regions are identified, namely edema, enhancing core tumor, and necrotic tumor core, based on the segmented result. Figure 4(C) depicts the high-grade tumor classification result of patient no. 4, slice no. 109. Here, two tumors are predicted with two different regions, edema and enhancing core tumor. Figure 4(D) and figure 4(E) represent the low-grade tumor classification results of patient no. 6, slice no. 143 and patient no. 10, slice no. 77. Edema and enhancing tumor core are identified as in the ground-truth images and highlighted with different colors.

FIGURE 5. Input MRI sequences and classification output for brain stroke image of: (A) patient no.8, slice no.96; (B) patient no.15, slice no.81; (C) patient no.16, slice no.46; and (D) patient no.8, slice no.42.

For patient no. 8, slice no. 96, patient no. 15, slice no. 81, patient no. 16, slice no. 46, and patient no. 8, slice no. 42, the proposed classifier pertinently classifies the given MRI stroke input as sub-acute and acute cases, as shown in figure 5(A), 5(B), 5(C), and 5(D). The sub-acute stroke region is identified from the segmented output of the DWI and FLAIR fused MRI, whereas the acute stroke is identified from the segmented result of the DWI MRI, and it is emphasized with green color.

In this work, we have tested the classifiers with five different training and testing ratios, namely 50-50, 60-40, 70-30, 80-20, and 90-10, and the accuracy for all these ratios is presented in this section. The summary statistics of the extracted features and their distributions over the different classes are depicted in Table 1. Table 2 illustrates the accuracy of different features when the train-test ratio is 70-30.
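The five train-test ratios mentioned above can be generated with a simple hold-out split such as the sketch below; the feature matrix and labels are synthetic placeholders and the training/scoring of the classifier is left to the reader.

% Sketch: hold-out splits for the five train-test ratios used in the experiments.
rng(0);
X = rand(1100, 61);                      % placeholder feature matrix (61 features)
y = randi([0 1], 1100, 1);               % placeholder binary labels (e.g., high vs low grade)
trainFractions = [0.5 0.6 0.7 0.8 0.9];  % 50-50, 60-40, 70-30, 80-20, 90-10

for tf = trainFractions
    idx      = randperm(size(X, 1));
    nTrain   = round(tf * numel(idx));
    trainIdx = idx(1:nTrain);
    testIdx  = idx(nTrain + 1:end);
    Xtr = X(trainIdx, :);  ytr = y(trainIdx);
    Xte = X(testIdx, :);   yte = y(testIdx);
    % ... train HSVFC (or any baseline classifier) on (Xtr, ytr) and score on (Xte, yte)
end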
FIGURE 8. Performance measures of various classifiers in brain high-grade tumor detection.
FIGURE 9. Performance measures of various classifiers in brain low-grade tumor detection.
FIGURE 10. Performance measures of various classifiers in MRI brain sub-acute stroke detection.
FIGURE 11. Performance measures of various classifiers in MRI brain acute stroke detection.
From the tabulated results, it is understood that better accuracy is observed for the proposed HSVFC and a low value (0.05) for FFNN. The ResNet-18 classifier performs better than the KNN, DC, FFNN, and SVM classifiers, but its performance is observed to be inferior to the proposed HSVFC.

The comparative analysis of the various classifiers for MRI brain sub-acute stroke is depicted in figure 10. The best recall (1.00) and F-score (0.98) are achieved by the proposed classifier, whereas SVM has the least recall (0.5) and KNN has the poorest F-score (0.55). The FPR is zero for SVM and 0.02 for HSVFC. The highest Jaccard index value (0.97) is achieved by the proposed HSVFC. The measures of the DC and FFNN classifiers concord with each other. Likewise, the performance of the KNN classifier and the ResNet-18 model go hand in hand.

The measures for MRI brain acute stroke are depicted in figure 11. For the DC and FFNN algorithms, the performance metrics are in close agreement with each other. The least FPR of 0.02, a high Jaccard index of 0.97, and a high F-score of 0.98 are obtained for the proposed HSVFC. The F-score and Jaccard index look similar for the KNN and ResNet-18 algorithms.

C. RESULT ANALYSIS OF PROPOSED HSVFC FOR 60-40 AS TRAIN AND TEST RATIO
When testing the proposed HSVFC with 60% training and 40% testing data, the F-score varies from 0.31 to 0.81 over all the classifiers in brain tumor detection, as shown in figure 12. The highest F-score (0.81) is accomplished by the proposed classifier; besides, its recall (0.75) and precision (0.98) rates are high. The utmost Jaccard index (0.68) is obtained for the proposed HSVFC. The false-positive rate (0.23) is low for HSVFC and high (0.66) for KNN. For DC and FFNN, the FPR values concord with each other. The F-score of ResNet-18 is 18% lower compared to HSVFC.

In figure 13, a recall value of 0.79, a precision rate of 0.12, an F-score of 0.50, an FPR of 0.23, and a Jaccard index of 0.33 are attained by HSVFC in low-grade tumor detection. Next to the proposed HSVFC, the ResNet-18 method performs well.
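For reference, the performance measures quoted throughout this section (precision, recall, F-score, FPR, and the Jaccard index) follow directly from the confusion-matrix counts, as in the short sketch below; the counts used are illustrative values only.

% Performance measures from confusion-matrix counts (illustrative values).
TP = 73;  FN = 2;  FP = 0;  TN = 75;         % e.g., one binary stroke split

precision = TP / (TP + FP);
recall    = TP / (TP + FN);                  % also called sensitivity
fscore    = 2 * precision * recall / (precision + recall);
fpr       = FP / (FP + TN);                  % false-positive rate
jaccard   = TP / (TP + FP + FN);             % Jaccard (overlap) index
accuracy  = (TP + TN) / (TP + TN + FP + FN);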
FIGURE 12. Performance measures of various classifiers in MRI brain high-grade tumor detection.
FIGURE 13. Performance measures of various classifiers in MRI brain low-grade tumor detection.
FIGURE 14. Performance measures of various classifiers in MRI brain sub-acute stroke detection.
FIGURE 15. Performance measures of various classifiers in MRI brain sub-acute stroke detection.
TABLE 4. Confusion matrix of HSVFC for stroke detection (70-30 as train-test ratio). TP: True Positive; TN: True Negative; FP: False Positive; FN: False Negative.

Of the 500 stroke images, 250 images belong to the Sub-Acute (SA) class and the remaining 250 images are Acute (A). In Table 4, 70% of the images are considered for training, which means that 175 SA and 175 acute stroke images are used for training. For the testing process, 30% of the stroke images are considered, which implies that 75 SA and 75 acute stroke images are taken for testing. The results in the confusion matrix show that, of the 75 SA stroke images, around 73 images are predicted correctly (i.e., as positive cases), and among the 75 acute stroke images, all 75 are predicted correctly.

In figure 16, for high-grade tumor class detection, a high value of the Area Under the Curve (AUC = 0.95) is obtained for the proposed HSVFC when the train-test ratio is 70-30. Similarly, for sub-acute stroke detection, a high value of the Area Under the Curve (AUC = 0.99) is obtained for the proposed HSVFC when the train-test ratio is 50-50 (figure 17). The proposed work is not suitable for large databases (i.e., for multiple images with different pathological conditions), which is considered its major limitation.

FIGURE 17. ROC of HSVFC for MRI brain sub-acute stroke detection.

CONFLICT OF INTEREST
The authors declare no conflict of interest.
REFERENCES
[1] Y. Xu, Z. Jia, Y. Ai, F. Zhang, M. Lai, I. Eric, and C. Chang, "Deep convolutional activation features for large scale brain tumor histopathology image classification and segmentation," in Proc. ICASSP, Apr. 2015, pp. 947–951.
[2] G. Praveen and A. Agrawal, "Hybrid approach for brain tumor detection and classification in magnetic resonance images," in Proc. CCIS, 2015, pp. 162–166.
[3] J. Hu, W.-S. Pang, J. Han, K. Zhang, J.-Z. Zhang, and L.-D. Chen, "Gualou Guizhi decoction reverses brain damage with cerebral ischemic stroke, multi-component directed multi-target to screen calcium-overload inhibitors using combination of molecular docking and protein–protein docking," J. Enzyme Inhibition Medicinal Chem., vol. 33, no. 1, pp. 115–125, Jan. 2018.
[4] A. M. Hasan, H. A. Jalab, F. Meziane, H. Kahtan, and A. S. Al-Ahmad, "Combining deep and handcrafted image features for MRI brain scan classification," IEEE Access, vol. 7, pp. 79959–79967, 2019.
[5] R. A. Zeineldin, M. E. Karar, J. Coburger, C. R. Wirtz, and O. Burgert, "DeepSeg: Deep neural network framework for automatic brain tumor segmentation using magnetic resonance FLAIR images," Int. J. Comput. Assist. Radiol. Surg., vol. 15, no. 6, pp. 909–920, Jun. 2020.
[6] M. A. Queiroz, M. Hüllner, F. Kuhn, G. Huber, C. Meerwein, S. Kollias, G. von Schulthess, and P. Veit-Haibach, "Use of diffusion-weighted imaging (DWI) in PET/MRI for head and neck cancer evaluation," Eur. J. Nucl. Med. Mol. Imag., vol. 41, no. 12, pp. 2212–2221, Dec. 2014.
[7] L. Ali, A. Hussain, J. Li, A. Shah, U. Sudhakr, M. Mahmud, U. Zakir, X. Yan, B. Luo, and M. Rajak, "Intelligent image processing techniques for cancer progression detection, recognition and prediction in the human liver," in Proc. CICARE, 2014, pp. 25–31.
[8] M. Mahmud, M. S. Kaiser, A. Hussain, and S. Vassanelli, "Applications of deep learning and reinforcement learning to biological data," IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 6, pp. 2063–2079, Jun. 2018.
[9] H. M. Ali, M. S. Kaiser, and M. Mahmud, "Application of convolutional neural network in segmenting brain regions from MRI data," in Brain Informatics. Cham, Switzerland: Springer, 2019, pp. 136–146.
[10] M. B. T. Noor, N. Z. Zenia, M. S. Kaiser, M. Mahmud, and S. Al Mamun, "Detecting neurodegenerative disease from MRI: A brief review on a deep learning perspective," in Brain Informatics. Cham, Switzerland: Springer, 2019, pp. 115–125.
[11] M. Mahmud, M. S. Kaiser, and A. Hussain, "Deep learning in mining biological data," Cogn. Comput., vol. 13, no. 1, pp. 1–33, Jan. 2021.
[12] M. B. T. Noor, N. Z. Zenia, M. S. Kaiser, S. A. Mamun, and M. Mahmud, "Application of deep learning in detecting neurological disorders from magnetic resonance images: A survey on the detection of Alzheimer's disease, Parkinson's disease and schizophrenia," Brain Informat., vol. 7, no. 1, pp. 1–21, Dec. 2020.
[13] V. M. Aradhya, M. Mahmud, D. Guru, B. Agarwal, and M. S. Kaiser, "One-shot cluster-based approach for the detection of COVID-19 from chest X-ray images," Cogn. Comput., vol. 13, no. 4, pp. 873–881, Jul. 2021.
[14] N. Dey, V. Rajinikanth, S. J. Fong, M. S. Kaiser, and M. Mahmud, "Social group optimization–assisted Kapur's entropy and morphological segmentation for automated detection of COVID-19 infection from computed tomography images," Cogn. Comput., vol. 12, no. 5, pp. 1011–1023, 2020.
[15] Y. Miah, C. N. E. Prima, S. J. Seema, M. Mahmud, and M. S. Kaiser, "Performance comparison of machine learning techniques in identifying dementia from open access clinical datasets," in Proc. ICACIn, 2021, pp. 79–89, doi: 10.1007/s12559-021-09848-3.
[16] A. K. Singh, A. Kumar, M. Mahmud, M. S. Kaiser, and A. Kishore, "COVID-19 infection detection from chest X-ray images using hybrid social group optimization and support vector classifier," Cognit. Comput., pp. 1–13, Mar. 2021, doi: 10.1007/s12559-021-09848-3.
[17] R. V. Tali, S. Borra, and M. Mahmud, "Detection and classification of leukocytes in blood smear images: State of the art and challenges," Int. J. Ambient Comput. Intell., vol. 12, no. 2, pp. 111–139, Apr. 2021.
[18] N. A. B. Mary and D. Dharma, "Coral reef image classification employing improved LDP for feature extraction," J. Vis. Commun. Image Represent., vol. 49, pp. 225–242, Nov. 2017.
[19] O. Maier, C. Schröder, N. D. Forkert, T. Martinetz, and H. Handels, "Classifiers for ischemic stroke lesion segmentation: A comparison study," PLoS ONE, vol. 10, no. 12, Dec. 2015, Art. no. e0145118.
[20] B. Deepa and M. Sumithra, "A new amalgam technique in CAD system for detection of abnormality in MRI brain images," Adv. Nat. Appl. Sci., vol. 10, no. 4, pp. 95–104, 2016.
[21] K. Usman and K. Rajpoot, "Brain tumor classification from multi-modality MRI using wavelets and machine learning," Pattern Anal. Appl., vol. 20, no. 3, pp. 871–881, 2017.
[22] N. M. Saad, N. Noor, A. Abdullah, S. Muda, A. Muda, and N. A. Rahman, "Automated stroke lesion detection and diagnosis system," in Proc. Int. Multiconf. Eng. Comput. Sci., vol. 1, 2017, pp. 1–6.
[23] H. Lee, E.-J. Lee, S. Ham, H.-B. Lee, J. S. Lee, S. U. Kwon, J. S. Kim, N. Kim, and D.-W. Kang, "Machine learning approach to identify stroke within 4.5 hours," Stroke, vol. 51, no. 3, pp. 860–866, Mar. 2020.
[24] J. Amin, M. Sharif, M. Yasmin, and S. L. Fernandes, "A distinctive approach in brain tumor detection and classification using MRI," Pattern Recognit. Lett., vol. 139, pp. 118–127, Nov. 2020.
[25] G. Deep, L. Kaur, and S. Gupta, "Directional local ternary quantized extrema pattern: A new descriptor for biomedical image indexing and retrieval," Eng. Sci. Technol., Int. J., vol. 19, no. 4, pp. 1895–1909, Dec. 2016.
[26] G. Jothi, "Hybrid tolerance rough set–firefly based supervised feature selection for MRI brain tumor image classification," Appl. Soft Comput., vol. 46, pp. 639–651, Sep. 2016.
[27] M. Soltaninejad, L. Zhang, T. Lambrou, G. Yang, N. Allinson, and X. Ye, "MRI brain tumor segmentation and patient survival prediction using random forests and fully convolutional networks," in Proc. Int. MICCAI Brainlesion Workshop, 2017, pp. 204–215.
[28] A. Shenbagarajan, V. Ramalingam, C. Balasubramanian, and S. Palanivel, "Tumor diagnosis in MRI brain image using ACM segmentation and ANN-LM classification techniques," Indian J. Sci. Technol., vol. 9, no. 1, pp. 1–12, Feb. 2016.
[29] K. Machhale, H. B. Nandpuru, V. Kapur, and L. Kosta, "MRI brain cancer classification using hybrid classifier (SVM-KNN)," in Proc. ICIC, 2015, pp. 60–65.
[30] T. Gupta, T. K. Gandhi, R. K. Gupta, and B. K. Panigrahi, "Classification of patients with tumor using MR FLAIR images," Pattern Recognit. Lett., vol. 139, pp. 112–117, Nov. 2020.
[31] T. Rajesh, R. S. M. Malar, and M. R. Geetha, "Brain tumor detection using optimisation classification based on rough set theory," Cluster Comput., vol. 22, no. S6, pp. 13853–13859, Nov. 2019.
[32] J. C. Griffis, J. B. Allendorfer, and J. P. Szaflarski, "Voxel-based Gaussian naïve Bayes classification of ischemic stroke lesions in individual T1-weighted MRI scans," J. Neurosci. Methods, vol. 257, pp. 97–108, Jan. 2016.
[33] J. Oh, S. Cha, A. H. Aiken, E. T. Han, J. C. Crane, J. A. Stainsby, G. A. Wright, W. P. Dillon, and S. J. Nelson, "Quantitative apparent diffusion coefficients and T2 relaxation times in characterizing contrast enhancing brain tumors and regions of peritumoral edema," J. Magn. Reson. Imag., vol. 21, no. 6, pp. 701–708, 2005.
[34] M. Ambrosanio, F. Baselice, G. Ferraioli, F. Lenti, and V. Pascazio, "Intra voxel analysis in magnetic resonance imaging," Magn. Reson. Imag., vol. 37, pp. 70–80, Apr. 2017.
[35] C. S. S. Anupama, M. Sivaram, E. L. Lydia, D. Gupta, and K. Shankar, "Synergic deep learning model–based automated detection and classification of brain intracranial hemorrhage images in wearable networks," Pers. Ubiquitous Comput., pp. 1–10, Nov. 2020, doi: 10.1007/s00779-020-01492-2.
[36] A. Nayyar, S. Garg, D. Gupta, and A. Khanna, "Evolutionary computation: Theory and algorithms," in Advances in Swarm Intelligence for Optimizing Problems in Computer Science. Boca Raton, FL, USA: CRC Press, 2018, pp. 1–26.
[37] A. Khamparia, D. Gupta, V. H. C. de Albuquerque, A. K. Sangaiah, and R. H. Jhaveri, "Internet of health things-driven deep learning system for detection and classification of cervical cells using transfer learning," J. Supercomput., vol. 76, pp. 8590–8608, Jan. 2020.
[38] B. Deepa and M. G. Sumithra, "An intensity factorized thresholding based segmentation technique with gradient discrete wavelet fusion for diagnosing stroke and tumor in brain MRI," Multidimensional Syst. Signal Process., vol. 30, no. 4, pp. 2081–2112, Oct. 2019.
[39] R. Usha and K. Perumal, "SVM classification of brain images from MRI scans using morphological transformation and GLCM texture features," Int. J. Comput. Syst. Eng., vol. 5, no. 1, pp. 18–23, 2019.
[40] K. Devi, P. Gupta, D. Grover, and A. Dhindsa, "An effective texture feature extraction approach for iris recognition system," in Proc. ICACCA, 2016, pp. 1–5.
[41] W. Ayadi, W. Elhamzi, I. Charfi, B. Ouni, and M. Atri, "Brain MRI classification using discrete wavelet transform and bag-of-words," in Proc. IC_ASET, 2018, pp. 45–49.
[42] M. A. Kabir, "Automatic brain tumor detection and feature extraction from MRI image," Glob. Sci. J., vol. 8, no. 4, pp. 695–711, 2020.
[43] M. W. Nadeem, M. A. A. Ghamdi, M. Hussain, M. A. Khan, K. M. Khan, S. H. Almotiri, and S. A. Butt, "Brain tumor analysis empowered with deep learning: A review, taxonomy, and future challenges," Brain Sci., vol. 10, no. 2, p. 118, Feb. 2020.
[44] B. H. Menze et al., "The multimodal brain tumor image segmentation benchmark (BRATS)," IEEE Trans. Med. Imag., vol. 34, no. 10, pp. 1993–2024, Oct. 2015.
[45] O. Maier, B. H. Menze, J. von der Gablentz, L. Häni, M. P. Heinrich, M. Liebrand, S. Winzeck, A. Basit, P. Bentley, and L. Chen, "ISLES 2015—A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI," Med. Image Anal., vol. 35, pp. 250–269, Jan. 2017.
B. DEEPA received the B.E. degree in electronics and communication engineering and the M.E. degree in communication engineering from the Bannari Amman Institute of Technology, Sathyamangalam, Erode, India, in 2009 and 2011, respectively. She is currently pursuing the Ph.D. degree in information and communication engineering with Anna University, Chennai. She is also working as an Assistant Professor with the Department of Electronics and Communication Engineering, Jayaram College of Engineering and Technology, Tiruchirappalli, India. She has published 12 technical articles in refereed journals and 16 research articles in national and international conferences in India and abroad. Her research interests include signal/image processing, biomedical engineering, and wireless communications.

M. MURUGAPPAN (Senior Member, IEEE) received the Ph.D. degree in mechatronic engineering from Universiti Malaysia Perlis, Malaysia, in 2010. Since February 2016, he has been an Associate Professor with the Department of Electronics and Communication Engineering, Kuwait College of Science and Technology (KCST) (Private University), Kuwait. He has gained more than ten years of post-Ph.D. teaching and research experience in different countries (India, Malaysia, and Kuwait). Recently, he has been included in the top 2% of scientists in the world in experimental psychology and artificial intelligence by Stanford University researchers. He has published more than 110 research articles in peer-reviewed conference proceedings, journals, and book chapters. He has a maximum citation count of 3900, an H-index of 33, and an i10-index of 66 (Ref: Google Scholar citations). He secured nearly $2.5 million in research grants from the Government of Malaysia and the Kuwait Foundation for the Advancement of Sciences (KFAS), Kuwait, for continuing his research work, and has successfully guided 14 postgraduate students (9 Ph.D. and 5 M.Sc.). His research interests include affective computing, the Internet of Things (IoT), the Internet of Medical Things (IoMT), cognitive neuroscience, brain–computer interfaces, neuromarketing, medical image processing, machine learning, and artificial intelligence. He has received several research awards, medals, and certificates for excellent publications and research products. He is also serving as the Chair for Educational Activities in the IEEE Kuwait Section. He is serving as an Editorial Board Member for PLoS ONE, the Journal of Medical Imaging and Health Informatics, and the International Journal of Cognitive Informatics.

MUFTI MAHMUD (Senior Member, IEEE) received the Ph.D. degree in information engineering from the University of Padova, Italy, in 2011. He is currently working as an Associate Professor of computer science with Nottingham Trent University, U.K., with over 18 years of experience in industry and academia in India, Bangladesh, Italy, Belgium, and the U.K. He is an expert in computational intelligence, applied data analysis, and big data technologies, with a keen focus on healthcare applications. He has published over 150 peer-reviewed articles and papers in leading journals and conferences, and has (co-)edited five volumes and many journal special issues in those domains. He has secured research grants totaling approximately $3.3 million. He has supervised two postdoctoral researchers and over 50 research students (Ph.D., master's, and bachelor's). He is also a Senior Member of the ACM, a Professional Member of the British Computer Society, and a Fellow of the Higher Education Academy, U.K. He was a recipient of the Vice-Chancellor's Outstanding Research Award 2020 at NTU and the Marie Curie Postdoctoral Fellowship in 2013. From 2020 to 2021, he was the Vice-Chair of the Intelligent System Application and Brain Informatics Technical Committees of the IEEE Computational Intelligence Society (CIS), a member of the IEEE CIS Task Force on Intelligence Systems for Health, an Advisor of the IEEE R8 Humanitarian Activities Subcommittee, the Publications Chair of the IEEE UK and Ireland Industry Applications Chapter, and the Project Liaison Officer of the IEEE UK and Ireland SIGHT Committee. He has also served as the Coordinating Chair of the Local Organization of IEEE-WCCI2020; the General Chair of BI2020, BI2021, and AII2021; and the Program Chair of IEEE-CICARE2020 and 2021. He serves as a Section Editor (Big Data Analytics) for the Cognitive Computation journal, an Associate Editor for the Frontiers in Neuroscience and Big Data Analytics journals, and a Regional Editor (Europe) for the Brain Informatics journal.