
TELKOMNIKA Telecommunication Computing Electronics and Control
Vol. 21, No. 2, April 2023, pp. 374~381
ISSN: 1693-6930, DOI: 10.12928/TELKOMNIKA.v21i2.23352

Ensemble learning approach for multi-class classification of Alzheimer’s stages using magnetic resonance imaging

Ambily Francis1,2, Immanuel Alex Pandian1

1Department of Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Tamilnadu, India
2Department of Electronics and Communication Engineering, Sahrdaya College of Engineering and Technology, Kerala, India

Article Info

Article history:
Received Feb 14, 2022
Revised Nov 07, 2022
Accepted Nov 26, 2022

Keywords:
Alzheimer’s disease
Convolutional neural network
Ensemble learning
Mild cognitive impairment non-convertible
Mild cognitive impairment convertible
Pre-trained models

ABSTRACT

Alzheimer’s disease (AD) is a gradually progressing, irreversible neurodegenerative disorder. Mild cognitive impairment convertible (MCIc) is the clinical forerunner of AD. Precise diagnosis of MCIc is essential for effective treatments that reduce the progression rate of the disease. The other cognitive states included in this study are mild cognitive impairment non-convertible (MCInc) and cognitively normal (CN). MCInc is a stage in which aged people suffer from memory problems, but the stage will not lead to AD. The classification between MCIc and MCInc is crucial for the early detection of AD. In this work, an algorithm is proposed which concatenates the output layers of the Xception, InceptionV3, and MobileNet pre-trained models. The algorithm is tested on baseline T1-weighted structural magnetic resonance imaging (MRI) images obtained from the Alzheimer’s disease neuroimaging initiative database. The proposed algorithm provided a multi-class classification accuracy of 85%. It also gave an accuracy of 85% for classifying MCIc vs MCInc, 94% for classifying AD vs CN, and 92% for classifying MCIc vs CN. The results show that the proposed algorithm outperforms other state-of-the-art methods for multi-class classification and for the classification between MCIc and MCInc.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Ambily Francis
Department of Electronics and Communication Engineering
Karunya Institute of Technology and Sciences, Coimbatore, Tamilnadu, India
Email: [email protected]

1. INTRODUCTION
Alzheimer’s disease (AD) is a brain shrinkage disorder whose prime mark is memory loss. AD worsens progressively over the years; it affects the daily activities of a human being and eventually leads to death. The four cognitive states of the human brain are cognitively normal (CN), mild cognitive impairment convertible (MCIc), mild cognitive impairment non-convertible (MCInc), and AD. AD cannot be cured completely, but the shrinking rate can be reduced if the disease is detected at its early stage, MCIc.
Conventional clinical examinations with different imaging modalities fail to detect AD at its early stage, MCIc. Advanced image processing techniques have to be applied to distinguish MCIc from MCInc and AD. The state-of-the-art methods employ various image processing techniques for the early diagnosis of AD, including diagnosis using hand-crafted features and deep learning models. Deep learning models outperform most of the methods based on hand-crafted features, because the difficulty of processing high-dimensional hand-crafted features makes those methods inferior to deep learning models. The rest of this section discusses significant deep learning algorithms for the early detection of AD.


Researchers have put forward many significant works for the early detection of AD using deep learning algorithms. A latent feature representation using a stacked autoencoder is proposed by Suk and Shen [1]. The algorithm considers low-level features such as gray matter tissue volumes, which improve diagnostic accuracy for the early detection of AD. An algorithm based on a 3D convolutional neural network and a sparse autoencoder is proposed by Payan and Montana [2]; it improves the three-class (AD vs CN vs MCI) classification accuracy. In [3], 2D convolutional neural networks and a sparse autoencoder are used to classify AD, CN, and MCI. Compared with the algorithm of Payan and Montana [2], that of Gupta et al. [3] is simpler because it uses 2D convolution, but it does not exploit 3D spatial information, whereas Payan and Montana [2] utilize the 3D spatial information of magnetic resonance imaging (MRI) images. Valliani and Soni [4] claim that non-biomedical pre-trained models such as ResNet [5] learn cross-domain features that enable the model to extract significant low-level features from MRI images and improve classification accuracy. Their work also demonstrates the efficiency of performing data augmentation before learning.
The algorithm proposed by McCrackin [6] generates 3D multi-channel feature maps based on Voxception-ResNet for the classification between AD and CN. Data augmentation is performed before generating the feature maps, and the algorithm is implemented on diffusion MRI images. As in [4], the work in [7] also uses a non-biomedical pre-trained model, the visual geometry group network (VGG-16), to learn cross-domain features and increase accuracy; it proposes a mathematical model based on transfer learning with VGG-16 and achieves remarkable three-class classification accuracy. The ensemble-based algorithm proposed by Pan et al. [8] combines features from the sagittal, coronal, and transverse slices of MRI images, with data augmentation performed to avoid over-fitting. Two-stage ensemble learning is implemented in this algorithm: in the first stage, three base classifiers ensemble the sagittal, coronal, and transverse slices separately; in the second stage, another base classifier ensembles the three-axis slices. Each base classifier consists of six convolution layers, and the outputs from the multiple base classifiers are combined to improve the classification accuracy.
The algorithm proposed by Islam and Zhang [9] is an ensemble of three slightly different deep convolutional neural networks. Each individual model has four basic operational layers: 1) convolution, 2) batch normalization, 3) rectified linear unit, and 4) pooling. The model focuses on four-class classification, while the majority of works focus on either binary or three-class classification. Here, data augmentation is also performed to expand the dataset. In [10], end-to-end learning of a CNN-based model is implemented for three-class classification; the features are learned directly from the raw data without expert intervention, and the input data is transformed into a lower-dimensional space using a convolutional autoencoder. In [11], a convolutional neural network integrates features from MRI and positron emission tomography (PET) images of the hippocampal area for the detection of AD, with the hippocampal area selected as the region of interest (ROI). Since different modalities are combined, the algorithm provides decent results for the classification of AD vs CN, MCIc vs CN, and MCIc vs MCInc.
The algorithm proposed by Sun et al. [12] is an efficient dual-functional 3D convolutional neural network for three-class classification and accurate bilateral hippocampus segmentation; accurate hippocampus segmentation is advantageous for increasing classification accuracy. The algorithm uses V-Net convolutional blocks with a bottleneck architecture to reduce the scale of the network while maintaining segmentation accuracy. The review by Al-Shoukry et al. [13] lists and analyzes recent works in the field of early detection of AD using deep learning algorithms, and points out that prediction of AD at an early stage deserves much more attention than the diagnosis of AD. The algorithm proposed by Ju et al. [14] works with functional MRI images along with medical information including age, gender, and genetic information; a stacked autoencoder is used to train the deep neural network on functional MRI time-series data or correlation-coefficient data. Wen et al. [15] have reviewed numerous algorithms based on convolutional neural networks (CNN) and MRI in the field of early detection of AD, and also propose an open framework for reproducible evaluation. In [16], a 3D local directional pattern is implemented which computes the orientation around each voxel; the algorithm shows low sensitivity to illumination and noise.
In Shao et al. [17], multi-kernel support vector machines and hypergraph-based regularization were used to combine shared features from multiple modalities. According to the findings, the method provides higher classification accuracy than previous multi-modality techniques; its primary flaw is that all hyperedge weights are set to 1 without distinguishing between different hyperedges. In [18], a support vector machine classifier and wavelets, together with the Gabor filter and Gaussian of local descriptors, are employed as tools for feature extraction; three separate support vector machines (SVM), each trained with a different feature descriptor, are combined in the system. In [19], clinical and texture characteristics are used to identify the transition stage, MCIc. The key benefit is that MCI and AD are classified using the texture of the entire brain MRI. A feature-selection approach that makes use of a multivariate general linear model is suggested in [20]; the modest intensity fluctuations from CN to MCIc are captured with the general linear model, and multivariate adaptive regression splines are used as the classifier.


High classification accuracy was achieved in [21] by using texture features derived from an elliptical neighbourhood, although at a significant computational expense. In Maguolo et al. [22], different activation functions used in convolutional neural networks for medical applications are compared. In [23] and [24], the efficiency of ensembling pre-trained networks for medical applications is demonstrated. The algorithm of Liu et al. [25] is based on a deep CNN that learns both hippocampus segmentation features and classification features using a 3D DenseNet. Many of the papers listed do not specifically address the distinction between MCIc and MCInc [26], and the accuracy of the works that have attempted this classification has not exceeded 70%. Multi-class classification and MCIc vs MCInc classification are therefore the main focus of this work.

2. METHOD
In this work, a deep learning algorithm for the multi-class classification of AD is presented. The proposed algorithm is based on an ensemble of pre-trained models. The block diagram of the proposed system is shown in Figure 1. It consists of four parts: 1) separating the middle slice from the MRI image, 2) normalization, 3) augmentation, and 4) the ensemble model.
In this work, data are taken from the Alzheimer’s disease neuroimaging initiative (ADNI) database. According to the ADNI central database acquisition protocol, a three-dimensional sagittal T1-weighted image sequence with 1.2 mm slice thickness at 1.5 T field strength is acquired. The age group and the number of samples of each category used in this study are given in Table 1. There are 54 cognitively normal subjects, 52 mild cognitive impairment non-convertible subjects, 58 mild cognitive impairment convertible subjects, and 72 Alzheimer’s disease subjects.

Table 1. Description of MRI images used in this study

Category   Number of subjects   Age (years, mean ± SD)
CN         54                   74.12 ± 3.48
MCInc      52                   75.36 ± 2.58
MCIc       58                   76.89 ± 3.65
AD         72                   75.89 ± 3.68

Figure 1. Workflow of the proposed model

Initially, the MRI database is reduced to the middle slices of the 3D MRI images. The middle slice contains the most significant data, while the other slices may carry redundant information. The separated middle slices then undergo a normalization operation. A slice of an MRI image is composed of pixels with values between 0 and 255. Normalization downscales the original pixel values to the range [0, 1], which makes the images contribute more equally to the overall loss. Otherwise, an image with a higher pixel range would produce a greater loss and require a lower learning rate, while an image with a lower pixel range would require a higher learning rate.
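To make this preprocessing concrete, the following is a minimal sketch of the slice-selection and normalization steps, assuming the nibabel library for reading NIFTI volumes; the slice axis and helper names are illustrative, not taken from the paper.

```python
# Minimal sketch of the preprocessing described above (assumed helpers, not the
# authors' code): pick the middle slice of a 3D NIFTI volume and rescale it to [0, 1].
import numpy as np
import nibabel as nib  # assumed library for reading NIFTI files

def extract_middle_slice(nifti_path, axis=0):
    """Load a 3D T1-weighted volume and return its middle slice along the given axis."""
    volume = nib.load(nifti_path).get_fdata()   # e.g. 121x145x121 voxels
    middle_index = volume.shape[axis] // 2      # index of the middle slice
    return np.take(volume, middle_index, axis=axis)

def normalize(slice_2d):
    """Rescale pixel values to [0, 1]; for 8-bit slices this equals dividing by 255."""
    slice_2d = slice_2d.astype(np.float32)
    return (slice_2d - slice_2d.min()) / (slice_2d.max() - slice_2d.min() + 1e-8)
```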
Data augmentation is performed to ensure a larger supply of training, testing, and validation images and to avoid overfitting. Image augmentation enlarges the dataset by building revised versions of the existing images, which provides a large amount of variation and ultimately increases the capacity of the model to generalize to new images. Data augmentation consists of four operations for each image: flipping left to right, flipping up and down, rotation, and insertion of randomized noise, as sketched below.
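The snippet below sketches these four operations with NumPy and SciPy; the 45° rotation angle follows Figure 2, while the noise level is an assumed value.

```python
# Sketch of the four augmentation operations (flip left-right, flip up-down,
# rotate by 45 degrees, add random noise); noise_sigma is an assumed parameter.
import numpy as np
from scipy.ndimage import rotate

def augment(image, noise_sigma=0.02):
    """Return the four augmented variants of a normalized 2D slice."""
    flipped_lr = np.fliplr(image)                        # flip left to right
    flipped_ud = np.flipud(image)                        # flip up and down
    rotated = rotate(image, angle=45, reshape=False)     # rotate by 45 degrees
    noisy = np.clip(image + np.random.normal(0.0, noise_sigma, image.shape), 0.0, 1.0)
    return [flipped_lr, flipped_ud, rotated, noisy]
```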
Samples of the augmented images are given in Figure 2. The augmented images are classified using the proposed ensemble model, which is made up of three pre-trained networks: Xception, InceptionV3, and MobileNet. The detailed network architecture is given in Figure 3. The three pre-trained models are trained individually, and the outputs of all models are connected to a concatenation layer. After the concatenation layer, a dense layer with 1024 units is added, followed by a dense layer with a single output and sigmoid activation for binary classification, or a dense layer with four outputs and softmax activation for multi-class classification.
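A minimal Keras sketch of this ensemble head follows. Only the concatenation of the three pre-trained backbones and the dense layers are taken from the text; the input size, ImageNet weights, global-average pooling, and the ReLU activation on the 1024-unit layer are assumptions.

```python
# Sketch of the ensemble: Xception, InceptionV3, and MobileNet outputs concatenated,
# followed by a 1024-unit dense layer and a sigmoid (binary) or softmax (multi-class) head.
# Input size, ImageNet weights, pooling, and the ReLU activation are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import Xception, InceptionV3, MobileNet

def build_ensemble(input_shape=(224, 224, 3), num_classes=4):
    inputs = layers.Input(shape=input_shape)
    backbones = [
        Xception(include_top=False, weights="imagenet", pooling="avg"),
        InceptionV3(include_top=False, weights="imagenet", pooling="avg"),
        MobileNet(include_top=False, weights="imagenet", pooling="avg"),
    ]
    # Each backbone yields a pooled feature vector; the three vectors are concatenated.
    features = layers.Concatenate()([backbone(inputs) for backbone in backbones])
    x = layers.Dense(1024, activation="relu")(features)
    if num_classes == 2:
        outputs = layers.Dense(1, activation="sigmoid")(x)            # binary head
    else:
        outputs = layers.Dense(num_classes, activation="softmax")(x)  # multi-class head
    return Model(inputs, outputs)
```

Calling build_ensemble(num_classes=4) gives the four-class variant used for AD vs CN vs MCIc vs MCInc, while num_classes=2 gives the binary head described above.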


Figure 2. MRI image augmentation (from left to right: original image, flipping left to right, flipping up and
down, rotation with 45° and noisy image)

The InceptionV3, MobileNet, and Xception networks operate based on separable convolution layers [27], [28]. Spatial separable convolution and depth-wise separable convolution are the two types of separable convolution. Spatial separable convolution primarily deals with the spatial dimensions of the image and kernel: it divides a kernel into two smaller kernels, which reduces the number of multiplications and thus the system complexity, so the network runs faster than with normal convolution. However, not all kernels can be divided uniformly into smaller kernels, so spatial separable convolution is not commonly used in deep learning algorithms. Depth-wise separable convolution can work with kernels that cannot be factorized uniformly; it deals with both the spatial dimension and the depth dimension, where depth indicates the number of channels of the image and each channel is a particular interpretation of the image. Depth-wise separable convolution splits the operation into a depth-wise convolution and a point-wise convolution. The depth-wise convolution is performed on the image without changing its depth. The point-wise convolution uses a unit-size kernel that iterates through every single point, with the kernel depth equal to the image depth. Lower computation time and diverse feature maps are the advantages of depth-wise separable convolution. The main concern about depth-wise separable convolution is that it reduces the number of parameters in a convolution, but the depth multiplier can be set accordingly to increase the number of parameters in the network so that it learns more about the characteristics of different images. The sketch below illustrates this reduction in parameter count.
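The following snippet, a rough illustration with arbitrary layer sizes, shows how much a depth-wise separable convolution reduces the parameter count relative to a standard convolution, and how the depth_multiplier argument in Keras raises it again.

```python
# Compare parameter counts of a standard convolution, a depth-wise separable
# convolution, and a depth-wise separable convolution with depth_multiplier=2.
# The 64x64x32 input and 64 output filters are arbitrary illustration values.
import tensorflow as tf
from tensorflow.keras import layers

variants = {
    "standard conv": layers.Conv2D(64, kernel_size=3, padding="same"),
    "separable conv": layers.SeparableConv2D(64, kernel_size=3, padding="same"),
    "separable conv, depth_multiplier=2": layers.SeparableConv2D(
        64, kernel_size=3, padding="same", depth_multiplier=2),
}
for name, layer in variants.items():
    model = tf.keras.Sequential([tf.keras.Input(shape=(64, 64, 32)), layer])
    print(f"{name}: {model.count_params()} parameters")
# Expected counts: 18,496 (standard) vs 2,400 (separable) vs 4,736 (depth_multiplier=2).
```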

Figure 3. Architecture of ensemble model

3. RESULTS AND DISCUSSION


In this study, 3D brain MRI images with a size of 121×145×121 voxels are used as the input for the proposed model. The 3D MRI images are in Neuroimaging Informatics Technology Initiative (NIFTI) format. The middle slice is extracted using the Python tool med2image, which converts medical images from NIFTI format into joint photographic experts group (JPEG) format. Due to the small dataset used in this study, the images are augmented by random nonlinear transformation, rotation, and flipping; the training images and test images are augmented separately. The model is trained in the Google Colab workstation environment and implemented with the Keras and TensorFlow deep learning frameworks. The model is trained from scratch until it converges. To achieve fast convergence, a fixed learning rate of 0.01 is set, and a stochastic gradient descent algorithm is used as the optimizer to update the weight parameters. 10% of the training images are taken as validation images, and the cross-entropy loss function is used to update the weights.
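A sketch of this training configuration is shown below; the SGD optimizer, 0.01 learning rate, cross-entropy loss, 10% validation split, and 100 epochs come from the text and Figure 7, while the batch size and the random placeholder data are assumptions.

```python
# Training configuration sketch: SGD with a fixed learning rate of 0.01,
# cross-entropy loss, 10% validation split, 100 epochs. The batch size and the
# random placeholder data below are assumptions, not values from the paper.
import numpy as np
import tensorflow as tf

x_train = np.random.rand(64, 224, 224, 3).astype("float32")              # placeholder slices
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 4, 64), 4)  # placeholder labels

model = build_ensemble(num_classes=4)  # ensemble sketch defined earlier
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(x_train, y_train, validation_split=0.1,
                    epochs=100, batch_size=32)
```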


The proposed algorithm is evaluated by precision, recall (sensitivity), accuracy, and F1 score.
The formulas of these four measures are given in (1), (2), (3), and (4) respectively.
$\mathrm{Precision} = \frac{TP}{TP + FP}$  (1)

$\mathrm{Recall} = \frac{TP}{TP + FN}$  (2)

$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$  (3)

$\mathrm{F1\ score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$  (4)

True-positive (TP), false-positive (FP), true-negative (TN), and false-negative (FN) denote the counts of the corresponding classification outcomes. TP means that the model predicts 1 and the true value is 1. TN means that the model predicts 0 and the true value is 0. FP means that the model predicts 1 while the true value is 0. FN means that the model predicts 0 while the true value is 1.
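As an illustration, the four measures in (1)-(4) can be computed from predicted and true labels with scikit-learn (an assumed dependency); the label vectors below are made-up examples, not results from the paper.

```python
# Example computation of precision, recall, accuracy, and F1 score as defined
# in (1)-(4); the label vectors are illustrative only.
from sklearn.metrics import precision_score, recall_score, accuracy_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # example ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # example model predictions

print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("Recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("Accuracy: ", accuracy_score(y_true, y_pred))   # (TP + TN) / all samples
print("F1 score: ", f1_score(y_true, y_pred))         # 2*P*R / (P + R)
```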
The receiver operating characteristics (ROC) of AD vs CN, MCIc vs CN, and MCIc vs MCInc are given in Figure 4, Figure 5, and Figure 6. A ROC curve is a graph that plots two parameters, the true positive rate in (5) and the false positive rate in (6), and shows the performance of the classification model at different classification thresholds. The area under the ROC curve is the two-dimensional area under the entire curve from (0, 0) to (1, 1) and takes a value between 0 and 1; the values 0 and 1 indicate that all the predictions of the model are wrong or correct, respectively. In this work, the area under the ROC curve is 0.99, 0.98, and 0.94 for AD vs CN, MCIc vs CN, and MCIc vs MCInc respectively.
$\mathrm{TPR\ (Recall)} = \frac{TP}{TP + FN}$  (5)

$\mathrm{FPR} = \frac{FP}{FP + TN}$  (6)
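A short sketch of how the ROC points and the area under the curve can be obtained for one binary task from the model's predicted scores, again assuming scikit-learn; the labels and scores shown are made-up values.

```python
# ROC sketch for a binary task (e.g. MCIc vs MCInc): compute FPR/TPR at each
# threshold and the area under the curve; labels and scores are illustrative.
from sklearn.metrics import roc_curve, roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 1, 0]                   # example binary labels
y_score = [0.1, 0.4, 0.8, 0.7, 0.3, 0.9, 0.6, 0.2]  # example predicted probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)   # points of the ROC curve
print("Area under ROC curve:", roc_auc_score(y_true, y_score))
```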

Figure 4. Receiver operating characteristics of AD vs CN

Figure 5. Receiver operating characteristics of MCIc vs CN

Figure 6. Receiver operating characteristics of MCIc vs MCInc


The multi-class classification performance of the Xception, InceptionV3, and MobileNet models is reported in Table 2, Table 3, and Table 4 respectively. The results show that the proposed ensemble model provides the best classification performance. Table 5 shows the accuracy of binary classification for MCIc vs MCInc, AD vs CN, and MCIc vs CN; the results indicate the significance of this algorithm for MCIc vs MCInc classification. The multi-class classification accuracy of the proposed ensemble model is 85%. The multi-class classification accuracy of Xception, InceptionV3, MobileNet, and the proposed ensemble model is given in Table 6. The precision, recall, and F1 score of each output class of the proposed ensemble model are given in Table 7. The training and test accuracy over 100 epochs of multi-class classification are given in Figure 7.
The experimental analysis shows that the proposed algorithm achieves good results in both binary and multi-class classification. Furthermore, the proposed model is compared with state-of-the-art methods in Table 8. Multi-class classification on the ADNI dataset is addressed by very few works, and multi-class classification accuracies reported on datasets other than ADNI are not included in the comparison. Many works address the binary classification AD vs CN, but in the context of early detection, the MCIc vs CN and MCIc vs MCInc classifications are the significant ones. As mentioned earlier, a better MCIc vs MCInc classification accuracy is very promising for the early detection of AD. With the use of different layers of separable convolution and normal convolution, the information necessary to distinguish MCIc and MCInc can be learned by the model, while the separable convolution layers keep the computational complexity of the algorithm low. The results indicate that the proposed model is efficient for both binary and multi-class classification.

Figure 7. Training and test accuracy of multi-class classification

Table 2. Multi-class classification performance of Xception
Class    Precision   Recall   F1 score
AD       0.77        0.93     0.84
MCIc     0.77        0.55     0.64
MCInc    0.68        0.74     0.71
CN       0.89        0.72     0.80

Table 3. Multi-class classification performance of InceptionV3
Class    Precision   Recall   F1 score
AD       0.85        0.91     0.88
MCIc     0.66        0.69     0.68
MCInc    0.86        0.65     0.74
CN       0.86        0.87     0.87

Table 4. Multi-class classification performance of MobileNet
Class    Precision   Recall   F1 score
AD       0.85        0.82     0.84
MCIc     0.58        0.71     0.64
MCInc    0.77        0.65     0.71
CN       0.84        0.83     0.84

Table 5. Comparison of binary classification accuracy (%)
Model         AD vs CN   MCIc vs CN   MCIc vs MCInc
Xception      90         89           75
InceptionV3   92         90           56
MobileNet     89         90           71
Ensemble      94         92           85

Table 6. Comparison of multi-class classification accuracy (%)
Model         AD vs CN vs MCIc vs MCInc
Xception      78
InceptionV3   81
MobileNet     78
Ensemble      85

Table 7. Multi-class classification performance of proposed ensemble model
Class    Precision   Recall   F1 score
AD       0.95        0.86     0.90
MCIc     0.65        0.81     0.72
MCInc    0.76        0.70     0.73
CN       0.86        0.89     0.88


Table 8. Comparison of classification accuracy (%) of the proposed model and state-of-the-art methods


Methods AD vs CN MCIc vs CN MCIc vs MCInc
[1] 95.9 - 75.8
[4] 81.3 - -
[8] 84 79 62
[11] 90.10 87.46 76.90
[17] 92.51 82.53 75.48
[25] 88.90 - -
Proposed method 94 92 85

4. CONCLUSION
In this work, an ensemble model is proposed to improve the accuracy of binary and multi-class classification of AD stages. The Xception, InceptionV3, and MobileNet models are concatenated to obtain the new ensemble model. The ensemble model succeeds in improving the classification accuracy of MCIc vs MCInc. Since MCIc is the early stage of AD, MCIc vs MCInc is a crucial classification in the context of early detection of AD. To the best of our knowledge, few works have reported an MCIc vs MCInc classification accuracy of more than 70%; in this work, an MCIc vs MCInc classification accuracy of 85% is obtained. While the majority of existing research works focus on binary classification, this model also provides a significant improvement for multi-class classification. The proposed algorithm can be very beneficial for early-stage diagnosis of AD. The algorithm was tested on the MRI images in the ADNI database and can also be tested on other classification problems in medical image processing.

ACKNOWLEDGEMENTS
Data used in the preparation of this article were obtained from the Alzheimer’s disease neuroimaging initiative (ADNI) database (http://adni.loni.usc.edu).

REFERENCES
[1] H. -I. Suk and D. Shen, “Deep learning-based feature representation for AD/MCI classification,” International conference on
medical image computing and computer-assisted intervention, 2013, pp. 583-590, doi: 10.1007/978-3-642-40763-5_72.
[2] A. Payan and G. Montana, “Predicting Alzheimer's disease: a neuroimaging study with 3D convolutional neural networks,”
arXiv preprint arXiv:1502.02506, 2015, doi: 10.48550/arXiv.1502.02506.
[3] A. Gupta, M. S. Ayhan, and A. S. Maida, “Natural image bases to represent neuroimaging data,” International conference on
machine learning, 2013, pp. 987-994. [Online]. Available: http://proceedings.mlr.press/v28/gupta13b.pdf
[4] A. Valliani and A. Soni, “Deep residual nets for improved Alzheimer's diagnosis,” in Proceedings of the 8th ACM International
Conference on Bioinformatics, Computational Biology, and Health Informatics, 2017, doi: 10.1145/3107411.3108224.
[5] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on
computer vision and pattern recognition, 2016, pp. 770-778. [Online]. Available: https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf
[6] L. McCrackin, “Early detection of Alzheimer’s disease using deep learning,” in Canadian Conference on Artificial Intelligence,
2018, pp. 355-359, doi: 10.1007/978-3-319-89656-4_40.
[7] R. Jain, N. Jain, A. Aggarwal, and D. J. Hemanth, “Convolutional neural network based Alzheimer’s disease classification from
magnetic resonance brain images,” Cognitive Systems Research, vol. 57, pp. 147-159, 2019, doi: 10.1016/j.cogsys.2018.12.015.
[8] D. Pan, A. Zeng, L. Jia, Y. Huang, T. Frizzell, and X. Song, “Early detection of Alzheimer’s disease using magnetic resonance
imaging: a novel approach combining convolutional neural networks and ensemble learning,” Frontiers in neuroscience, vol. 14,
2020, doi: 10.3389/fnins.2020.00259.
[9] J. Islam and Y. Zhang, “Brain MRI analysis for Alzheimer’s disease diagnosis using an ensemble system of deep convolutional
neural networks,” Brain informatics, vol. 5, no. 2, 2018, doi: 10.1186/s40708-018-0080-3.
[10] K. Oh, Y. -C. Chung, K. W. Kim, W. -S. Kim, and I. -S. Oh, “Classification and visualization of Alzheimer’s disease using volumetric
convolutional neural network and transfer learning,” Scientific Reports, vol. 9, 2019, doi: 10.1038/s41598-019-54548-6.
[11] Y. Huang, J. Xu, Y. Zhou, T. Tong, and X. Zhuang, “Diagnosis of Alzheimer’s disease via multi-modality 3D convolutional
neural network,” Frontiers in neuroscience, vol. 13, 2019, doi: 10.3389/fnins.2019.00509.
[12] J. Sun, S. Yan, C. Song, and B. Han, “Dual-functional neural network for bilateral hippocampi segmentation and diagnosis of Alzheimer’s
disease,” International Journal of Computer Assisted Radiology and Surgery, vol. 15, pp. 445-455, 2020, doi: 10.1007/s11548-019-
02106-w.
[13] S. Al-Shoukry, T. H. Rassem and N. M. Makbol, “Alzheimer’s Diseases Detection by Using Deep Learning Algorithms: A Mini-
Review,” in IEEE Access, vol. 8, pp. 77131-77141, 2020, doi: 10.1109/ACCESS.2020.2989396.
[14] R. Ju, C. Hu, P. Zhou, and Q. Li, “Early Diagnosis of Alzheimer's Disease Based on Resting-State Brain Networks and Deep
Learning,” in IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 16, no. 1, pp. 244-257, 2019,
doi: 10.1109/TCBB.2017.2776910.
[15] J. Wen et al., “Convolutional neural networks for classification of Alzheimer's disease: Overview and reproducible evaluation,”
Medical image analysis, vol. 63, 2020, doi: 10.1016/j.media.2020.101694.
[16] S. Yan, C. Song, and B. Zheng, “3D local directional patterns for early diagnosis of Alzheimer's disease,” The Journal of
Engineering, vol. 2019, no. 14, pp. 530-535, 2019, doi: 10.1049/joe.2018.9412.
[17] W. Shao, Y. Peng, C. Zu, M. Wang, and D. Zhang, “Hypergraph based multi-task feature selection for multimodal classification of
Alzheimer's disease,” Computerized Medical Imaging and Graphics, vol. 80, 2020, doi: 10.1016/j.compmedimag.2019.101663.
[18] L. Nanni, S. Brahnam, C. Salvatore, and I. Castiglioni, “Texture descriptors and voxels for the early diagnosis of Alzheimer’s disease,” Artificial intelligence in medicine, vol. 97, pp. 19-26, 2019, doi: 10.1016/j.artmed.2019.05.003.
[19] C. C. Luk et al., “Alzheimer's disease: 3-dimensional MRI texture for prediction of conversion from mild cognitive impairment,”
Alzheimer's & Dementia: Diagnosis, Assessment & Disease Monitoring, vol. 10, pp. 755-763, 2018, doi: 10.1016/j.dadm.2018.09.002.
[20] A. Çevik, G. -W. Weber, B. M. Eyüboğlu, and K. K. Oğuz, “Voxel-MARS: a method for early detection of Alzheimer’s disease by
classification of structural brain MRI,” Annals of Operations Research, vol. 258, pp. 31-57, 2017, doi: 10.1007/s10479-017-2405-7.
[21] L. Nanni, S. Brahnam, and G. Maguolo, “Data augmentation for building an ensemble of convolutional neural networks,” in
Innovation in Medicine and Healthcare Systems, and Multimedia, 2019, vol. 145, pp. 61-69, doi: 10.1007/978-981-13-8566-7_6.
[22] G. Maguolo, L. Nanni, and S. Ghidoni, “Ensemble of convolutional neural networks trained with different activation functions,”
Expert Systems with Applications, vol. 166, 2021, doi: 10.1016/j.eswa.2020.114048.
[23] L. Nanni, S. Ghidoni, and S. Brahnam, “Ensemble of convolutional neural networks for bioimage classification,” Applied
Computing and Informatics, vol. 17, no. 1, pp. 19-35, 2021, doi: 10.1016/j.aci.2018.06.002.
[24] A. Francis and I. A. Pandian, “Early detection of Alzheimer’s disease using ensemble of pre-trained models,” 2021 International
Conference on Artificial Intelligence and Smart Systems (ICAIS), 2021, pp. 692-696, doi: 10.1109/ICAIS50930.2021.9395988.
[25] M. Liu et al., “A multi-model deep convolutional neural network for automatic hippocampus segmentation and classification in
Alzheimer’s disease,” Neuroimage, vol. 208, 2020, doi: 10.1016/j.neuroimage.2019.116459.
[26] A. Francis and I. A. Pandian, “Review on Local Feature Descriptors for Early Detection of Alzheimer’s Disease,” 2018 International
Conference on Circuits and Systems in Digital Enterprise Technology (ICCSDET), 2018, pp. 1-5, doi: 10.1109/ICCSDET.2018.8821115.
[27] F. Chollet, “Xception: Deep Learning with Depthwise Separable Convolutions,” 2017 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2017, pp. 1800-1807, doi: 10.1109/CVPR.2017.195.
[28] A. G. Howard et al., “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint
arXiv:1704.04861, 2017, doi: 10.48550/arXiv.1704.04861.

BIOGRAPHIES OF AUTHORS

Ambily Francis is currently a research scholar at Karunya Institute of Technology and Sciences, Tamil Nadu, India. She is working as an Assistant Professor at Sahrdaya College of Engineering and Technology, Kerala, India. She obtained her Bachelor's degree in Electronics and Communication Engineering from Mahatma Gandhi University in 2008 and subsequently her Master's degree in signal processing from the same university in 2012. She has contributed much of her expertise to areas related to medical image processing using deep learning. Ambily Francis is an active member of ISTE and IAENG. She can be contacted at email: [email protected].

Immanuel Alex Pandian is currently working as an Associate Professor at Karunya Institute of Technology and Sciences, Tamil Nadu, India and has almost 18 years of experience in engineering education. He obtained his PhD degree in Electronics and Communication from Karunya Institute of Technology and Sciences, Tamil Nadu, India in 2014. He received his BE in Electronics and Communication Engineering from Madurai Kamaraj University and his Master's in Applied Electronics from Anna University in 1998 and 2005 respectively. His areas of interest are mainly image processing and machine learning. He has published more than 40 research papers in refereed international journals and conferences. He can be contacted at email: [email protected].
