Medical Image Classification For Alzheimer's Using A Deep Learning Approach
*Correspondence: [email protected]
1 Computer Science & Engineering, U.I.E.T, Panjab University SSG Regional Centre, Hoshiarpur, Punjab, India
2 Information Technology, U.I.E.T, Panjab University SSG Regional Centre, Hoshiarpur, Punjab, India
Abstract
Medical image categorization is essential for a variety of medical assessment and education functions. The purpose of medical image classification is to organize medical images into useful categories for illness diagnosis or study, making it one of the most pressing problems in the field of image recognition. Traditional methods, however, have plateaued in their effectiveness. Additionally, a substantial amount of time and energy is required when employing them to extract and choose
categorization features. Alzheimer’s disease is one of the most frequent sources of
dementia in elderly patients. Metabolic diseases affect a huge population worldwide; hence, there is vast scope for applying machine learning to find treatments for these diseases. As a relatively new machine learning technique, deep neural
networks have shown great promise for a variety of categorization problems. In this
research, a model for diagnosing and tracking the development of Alzheimer’s disease
that is both accurate and easy to interpret has been developed. By following the developed procedure, medical professionals can reach decisions with solid justification. Early diagnosis utilizing these machine learning algorithms has the potential
to minimize mortality rates associated with Alzheimer’s disease. This research work has
developed a convolutional neural network using a shallow convolution layer to identify
Alzheimer’s disease in medical image patches. The overall accuracy of the proposed classifier is around 98%, which is greater than that of the most popular existing approaches.
Keywords: Machine learning, Medical image classification, Deep learning,
Convolutional neural network, Alzheimer’s disease detection, OTSU
Introduction
Medical imaging is the process of photographing the inside of the body so that scientists,
doctors, and clinicians may research, treat, and observe how the body’s tissues function.
This procedure attempts to identify and resolve the issue. It generates a database of how
organs should typically appear and function, making it simple to identify issues. Further-
more, this procedure employs both organic and radiological imaging techniques, includ-
ing X-rays and gamma rays, sonography, magnetic imaging, scopes, thermal imaging,
and isotope imaging.
There are several different methods for tracking what is happening inside the body and where. These approaches have a number of issues compared to modalities that create
Bamber and Vishvakarma Journal of Engineering and Applied Science (2023) 70:54 Page 2 of 18
pictures. Every year, billions of images are taken throughout the world for various diagnostic purposes. Approximately half of them employ modalities that use ionizing radiation. Medical imaging allows doctors to see inside the body without having to cut it open. These images are created using a combination of fast computers and mathematical methods for converting energy into signals. These signals represent the functioning of various tissues in the body and are mapped into digital images.
The use of a computer to process images is referred to as “medical image processing.” Acquiring an image, storing it, displaying it, and transmitting it are all part of this operation. MRI and CT scans enable doctors to assess how well a treatment is working and make changes as needed [1]. Medical imaging is essential for
monitoring the progression of a long-term disease. Medical imaging gives patients bet-
ter, more complete care because it provides clinicians with more information. Medical
image processing involves the use and analysis of 3D picture collections of the human
body. These records, which are frequently collected via a computed tomography (CT) or
magnetic resonance imaging (MRI) scanner, are used to diagnose diseases, plan medical
operations like surgery, and perform research.
Medical imaging is becoming increasingly crucial for early illness detection, diagnosis, and treatment as patients demand faster and more precise results. The resolution and precision of medical imaging improve as physics, electrical engineering, and computer science advance and additional imaging modalities become available; as a result, many more medical images are produced than ever before.
Imaging modalities such as X-ray, CT, MRI, PET/CT, and ultrasound are often employed
in hospitals and clinics these days. Both obtaining a precise diagnosis and deciding on
the best treatment strategy rely greatly on accurate interpretation of medical images.
However, because image interpretation depends heavily on each doctor's own subjective judgment and level of experience, the results can vary greatly. In recent years, major advances in the precision of image classification, target identification, and image segmentation have emerged from the spread of massive annotated natural-image datasets and the advent of deep learning for computer vision. Many studies of early disease identification and diagnosis
have used supervised learning.
Ciresan et al. [2] analyzed medical images using deep neural networks, which considerably aided skin cancer classification, breast cancer identification, and brain tumor segmentation. Hinton later improved the deep convolutional neural network and applied it to medical image analysis. One of the most important
uses of this technology is deep learning. Deep learning is a type of artificial intelligence designed to learn and make decisions in ways loosely analogous to humans. Typically, the system is supplied with hundreds or even thousands of input data points to accelerate and improve training; it begins by training on all the data provided to it. Many classical machine learning methods have also been used to classify images, and machine learning (ML) may still be superior in some respects, but deep learning systems are now widely used for image recognition. Machine vision has its own image categorization setting. This technology can recognize images of
persons, items, places, activities, and text. Image categorization can be done extremely successfully when artificial intelligence (AI) and machine vision technologies are used in combination in software.
The primary purpose of image classification is to organize all images into meaning-
ful groups or sectors. Grouping objects is simple for humans but difficult for machines.
It differs from simply identifying and labeling a single object, since an image may contain patterns that are hard to isolate. Image categorization technology can be used to drive a car, control a robot, or obtain information remotely. It remains hard work, and improving it requires better tools. For a long time, image categorization has
been a major issue in machine vision. The photographs in each challenge class are significantly distinct in terms of color, size, context, and shape. Sufficient numbers of annotated training images are required, and producing them takes time and money [3]. There are several categorization systems, each with its own
set of benefits and drawbacks. Even the most advanced learning algorithm will have difficulty overcoming the limitations of supervised learning. Artificial neural networks, genetic algorithms, rule induction, decision trees, statistical and pattern recognition methods, k-nearest neighbors, naive Bayes classifiers, and discriminant analysis are all examples of machine learning approaches. This research mainly focuses on the
description of a classification model that may be used in conjunction with deep learn-
ing to address image classification issues. The benefits and drawbacks of this strategy
are discussed in this study. The document is structured as follows: the second part of the paper focuses on evaluating the classifier; the next section explains the testing of the proposed model; and findings and recommendations for further research are presented in the final section.
Review of literature
In the recent past, deep learning techniques have been increasingly applied to Alzheimer's disease (AD) classification using multimodal brain imaging data. Several
studies have proposed improved deep convolutional neural networks (CNNs) for the
classification of Alzheimer’s disease, taking advantage of the rich information available
from multiple imaging modalities. For example, Zhang et al. [4] proposed improved
CNNs for classification of Alzheimer’s disease using multimodal data of brain imaging,
while Dey et al. [5] developed a hybrid deep learning (DL) framework for early stage
diagnosis and detection of Alzheimer’s disease. Kumar et al. [6] conducted a systematic
review of deep learning-based medical image classification for diagnosis of Alzheimer’s
disease, and Liu et al. [7] proposed multi-modality cascaded convolutional networks for Alzheimer's disease prediction and detection. Past studies have investigated the use of deep learning for Alzheimer's disease diagnosis with magnetic resonance imaging (MRI) and positron emission tomography (PET) images. For
instance, Liu et al. [8] proposed a multimodal deep learning approach for diagnosis of
Alzheimer’s disease using MRI and PET images. Furthermore, Ma and Liu [9] developed
multi-scale attention-guided capsule networks for the classification of Alzheimer’s dis-
ease. Zhang et al. [10] proposed dual-pathway convolutional neural network for joint
learning of MRI and PET images for diagnosis of Alzheimer’s disease, while Yan et al.
[11] proposed a multi-scale feature fusion network for Alzheimer's disease classification using MRI data. Additionally, deep CNNs and other deep learning techniques such
as generative adversarial networks (GANs) and stacked deep polynomial networks have
also been explored for Alzheimer’s disease diagnosis. Li et al. [12] proposed a multi-
modal fusion approach with GANs for Alzheimer’s disease diagnosis, and Liu et al. [13]
developed stacked multimodal deep polynomial networks for the diagnosis of Alzheimer's disease. Furthermore, studies have investigated the fusion of brain imaging data
from multiple modalities, such as MRI and PET, for classification of Alzheimer’s disease.
Huang et al. [14] proposed a multimodal fusion approach for the classification of Alzheimer's disease using brain imaging data. Zhang et al. [10] developed a dual-pathway CNN for combined learning of MRI and PET images. However, one of the main
challenges in using DL for classification of medical images including Alzheimer’s disease
is the limited availability of labeled training samples. Hai Tang [15] demonstrated the use of semi-supervised learning for image classification, a technique that requires only a modest amount of pathologically tagged image data to train the network model; on this data, the proposed neural network surpassed the CNN and other baseline models. Wang and Dong [16] likewise addressed the shortage of labeled training samples and showed that fine-tuning can substantially enhance the classification performance of DL models, particularly for liver lesions, when training samples are limited. Satya Eswari Jujjavarapu [14] researched
different applications, such as identifying the type of cancer in a medical image or extracting and selecting features, and used these applications to explain how to evaluate different machine learning methods fairly. Nadar [17] studied patient
hyperspectral images to create an algorithm based on DL for an automated, computer-
aided oral cancer diagnosis system. Yi Zhou [18] provided a collaborative learning tech-
nique that blends semi-supervised learning with a mechanism to enhance illness grading
and lesion segmentation. After developing early predictions of lesion maps for a large
amount of image-level annotated data, it constructs a lesion-aware disease grading
model. This enhances the accuracy with which it classifies illness severity. Hossam H.
Sultan [19] presented a CNN-based DL model for brain tumor classification utilizing
two publicly accessible datasets. The best overall accuracy of the proposed network structure was 96.13% on one dataset and 98.7% on the other. The results showed that the
model can be used to distinguish between types of brain tumors. Laleh Armi [20] reviewed well-known combinatorial methods and listed the pros and cons of popular image texture descriptors, comparing the surveyed methods on their discriminative power, computational complexity, and robustness to noise, rotation, and other distortions. Standard classifiers for texture images are also briefly discussed. Dongyun Lin [21] described a
novel deep neural network design that uses transfer learning to recognize microscopic images. Based on experiments on the 2D-Hela and PAP-smear datasets, it can be concluded that the proposed network structure outperforms networks based on features from a single CNN as well as several standard classification methods. Allugunti [22] compared DL with more traditional non-parametric machine learning approaches and demonstrated that a CNN-based approach performs better than the state-of-the-art methods currently employed for precise diagnosis. Abbas and Aaraji [23] investigated
how MRI brain scans and sections of those pictures are used to build and evaluate vari-
ous deep learning architectures. After processing these photographs, it became simpler
to distinguish between AD and CN (cognitively normal) in four distinct patterns. In
terms of prediction, the ResNet architecture fared best (90.83% for raw brain images and 93.50% for processed images). Ali Mohammad Alqudah [24] showed that an
SVM classifier assigns each image to a “good” or “bad” group. By examining the histopathology as a whole, the system can be used to localize cancer in the body. Overall, the proposed method achieved 91.12% accuracy, 85.2% sensitivity, and 94.01% specificity, exceeding previously reported systems. Afshar and Mohammadi [25] utilized CapsNets to categorize brain cancers: they implemented a new architecture to improve classification accuracy, analyzed the over-fitting problem of CapsNets by applying them to a real-world dataset of MRI images, tested whether CapsNets fit better on whole-brain imaging or only the segmented tumor, and developed a unique visualization paradigm for CapsNets output.
Their findings imply that the suggested approach outperforms CNNs in determining the type of brain tumor a patient has. Jianliang Gao [26] evaluated a classifier on T1-weighted MRI data from the Autism Brain Imaging Data Exchange I (ABIDE I) using tenfold cross-validation. Based on the assessment results, the proposed solution for autism spectrum disorder/total communication (ASD/TC) classification achieves an accuracy of 90.39% and an AUC (area under the curve) of 0.9738, outperforming a range of cutting-edge ASD/TC classification algorithms. Monika Bansal [27] enhanced the performance of image classification
by fusing deep features extracted with a well-known deep CNN (VGG19) with a wide
variety of manually designed feature extraction methods. Compared to other classifiers and algorithms developed by different authors, the random forest classifier was generally the most accurate (93.73%) (Fig. 1).
Methods
A high-level diagram of our proposed method for categorizing and collecting images is given in Fig. 2. The chart depicts the five main stages of the proposed procedure.
Dataset
This MRI scanner data is made freely available to the scientific community via the open
access series of imaging studies (OASIS). OASIS has made available MRI datasets from
a wide range of subjects; cross-sectional OASIS-1 and longitudinal OASIS-2 have since
been used in numerous studies. The OASIS-3 database is an upgrade to previous ver-
sions. There are 1098 subjects in total, ranging in age from 42 to 95; 609 of them are cognitively normal, whereas the other 489 are at various stages of cognitive decline. More than two thousand MRI images
were used to construct the OASIS-3 dataset, which included structural and functional
characteristics. Figure 3 displays the results of the dataset consisting of four types of MR
images. Dataset is also available on Kaggle.
Pre‑processing
Pre-processing of images precedes the feature extraction and image recognition steps. Whatever image-acquisition method is used, the resulting images never fully live up to expectations: image noise, a blurred focal plane, and interference from unwanted objects are all examples of such drawbacks. Pre-processing techniques vary depending on the final use of the image. The pre-processing phase of this paper consists of two procedures: resizing and segmentation. All images are resized to a width of 196 and a height of 196 pixels; afterwards, the image segmentation technique is applied to the pre-processed images (Fig. 4).
Step 1: Image preparation: Prepare image for thresholding. Convert the image to
grayscale and apply pre-processing, such as noise reduction and contrast enhance-
ment.
Step 2: Histogram calculation: Compute the histogram of the grayscale image. The histogram represents the distribution of pixel intensities; it is a plot of the number of pixels at each intensity level.
Step 3: Threshold iteration: For every candidate threshold, divide the histogram into two classes (pixels below the threshold and pixels at or above it) and compute each class's probability (weight) and mean intensity.
Step 4: Threshold selection: Choose the threshold that maximizes the between-class variance, i.e., the product of the two class weights times the squared difference of the class means; this is the OTSU criterion.
Step 5: Image segmentation: Once the threshold is selected, segment the image into background and foreground regions on the basis of the threshold value. Pixels with intensities above the threshold are classified as foreground, while pixels with intensities below the threshold are classified as background.
The same thresholding procedure has been applied to all images. We found that the OTSU method is excellent for images with bimodal intensity distributions, i.e., images whose histograms show two distinct peaks representing the two classes of pixels. OTSU automatically finds the threshold that best separates these two classes and gave us accurate image segmentation (Figs. 6, 7, and 8).
not computationally feasible. These days, neural networks with tens or even hundreds of layers are being designed and built. Such systems are referred to as “deep neural networks,” indicating that the network has many levels and is therefore quite complex.
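As a concrete illustration of the operation such convolution layers perform, the following NumPy sketch applies a bank of filters to a grayscale patch (a hypothetical example with assumed filter values; in a real network the filter weights are learned during training, and frameworks implement this far more efficiently):

```python
import numpy as np

def conv2d(patch, kernels, stride=1):
    """Valid-mode 2D convolution (cross-correlation, as in CNN frameworks) of one
    grayscale patch with a bank of kernels, followed by a ReLU activation.
    patch: (H, W); kernels: (n, k, k). Returns feature maps (n, H-k+1, W-k+1)."""
    n, k, _ = kernels.shape
    H, W = patch.shape
    out_h = (H - k) // stride + 1
    out_w = (W - k) // stride + 1
    out = np.zeros((n, out_h, out_w))
    for f in range(n):                       # one feature map per filter
        for i in range(out_h):
            for j in range(out_w):
                window = patch[i*stride:i*stride+k, j*stride:j*stride+k]
                out[f, i, j] = np.sum(window * kernels[f])
    return np.maximum(out, 0)                # ReLU, as typically follows a conv layer
```

A shallow convolution layer of the kind used in this work is simply a small number of such filter banks applied to the 196×196 input patches, followed by pooling and a classifier head.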
Measurements
In the field of machine learning, classification challenges are frequently validated by
specificity, precision, accuracy, F1-score, recall, and other metrics, particularly to quan-
tify the performance of a system or to understand the generalization potential of models.
• The total number of correct predictions (TP + TN) divided by the total number of samples (TP + TN + FP + FN) yields the accuracy.

Accuracy = (TP + TN) / (TP + TN + FP + FN)  (1)

• Precision is calculated by dividing the number of true positives (TP) by the total number of positive predictions (TP + FP).

Precision = TP / (TP + FP)  (2)

• F1-score. It is a single indicator that combines precision and recall. Its maximum and minimum values are 1 and 0, respectively.

F1 = 2TP / (2TP + FP + FN)  (3)

• Recall is the fraction of actual positives that are correctly identified.

Recall = TP / (TP + FN)  (4)
TP True positive
TN True negative
FP False positive
FN False negative
Commonly, a confusion matrix is used to assess the classifier’s efficacy. This table
is used to display the true classes, the classes that the classifier predicted, and the
types of errors that the classifier produced. Figure 10 depicts a confusion matrix for
the diagnosis of Alzheimer’s. According to the confusion matrix, the following are the
four different types of terminologies used:
• A true positive (TP) occurs when the classifier predicts the positive class and the sample is indeed positive.
• A true negative (TN) occurs when the classifier predicts the negative class and the sample is indeed negative.
• A false positive (FP) occurs when the classifier predicts the positive class for a sample that is actually negative.
• A false negative (FN) occurs when the classifier predicts the negative class for a sample that is actually positive.
Among the most crucial performance indicators for a classification model are the
ones derived from the confusion matrix itself: accuracy, precision, and recall. The
result of our model is shown in the classification report in Table 5.
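The metrics of Eqs. (1)–(4) can be computed directly from the confusion-matrix counts. The following sketch (plain Python, with hypothetical label lists for illustration) shows the full path from predictions to the metrics reported above:

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Count TP, TN, FP, FN for a binary problem, treating `positive` as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp, tn, fp, fn

def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 as in Eqs. (1)-(4), guarding against zero division."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
```

For multi-class problems such as the four-class setting in this work, the same counts are computed per class (one-vs-rest) and then averaged, which is how a classification report like Table 5 is typically produced.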
Future scope
DL in AD research is continuously being enhanced for improved performance and transparency. Research on the diagnostic categorization of AD using deep learning is moving away from hybrid approaches and towards models that purely employ deep learning algorithms. Obtaining sufficient, accurate, and well-balanced brain imaging data on Alzheimer's disease is one of the key obstacles, and strategies still need to be developed to incorporate entirely different forms of data into a deep learning network. Knowing that good, noise-free data is a key obstacle, we propose the following approaches for future work:
All of the above approaches could potentially open up new avenues for AD prediction
and classification.
Limitations
The suggested technique does have certain limitations that may need to be addressed.
Conclusions
In recent years, deep learning has emerged as a central force in the automation of our daily lives, bringing about significant improvements over traditional AI algorithms. Given the significance of classifying medical images and the unique difficulty posed by the small dataset of Alzheimer's disease images, the proposed work investigated the application of CNN-based classification to a small MRI image dataset and assessed its performance. For the best outcomes, using deep learning is the foremost requirement. If the convolution layer is unfrozen, the features in question can learn from the new data; hence, this unique characteristic is a crucial element in achieving better precision. Compared to standard SVM, NB, and CNN models, the suggested methodology outperforms them all.
Abbreviations
AD Alzheimer’s disease
MRI Magnetic resonance imaging
CT Computed tomography
PET Positron emission tomography
ML Machine learning
AI Artificial intelligence
CNN Convolutional neural networks
DL Deep learning
GAN Generative adversarial networks
CN Cognitively normal
ABIDE I Autism Brain Imaging Data Exchange I
AUC Area under the curve
ASD Autism spectrum disorder
OASIS Open Access Series of Imaging Studies
OTSU Otsu's method of thresholding by maximum between-class variance
ANN Artificial neural network
Acknowledgements
Not applicable.
Authors’ contributions
SSB carefully reviewed and edited the paper; TV performed literature review and wrote the paper. All authors have read
and approved the manuscript.
Funding
Not applicable.
Declarations
Competing interests
The authors declare that they have no competing interests.
References
1. Alqahtani T et al. (2019) Research in medical imaging using image processing techniques. Available: https://www.researchgate.net/publication/334201999_Research_in_Medical_Imaging_Using_Image_Processing_Techniques
2. Ciresan D et al. (2012) Deep neural networks segment neuronal membranes in electron microscopy images. Available: https://proceedings.neurips.cc/paper/2012/hash/459a4ddcb586f24efd9395aa7662bc7c-Abstract.html
3. I. H. Sarker, “Machine learning: algorithms, real-world applications and research directions,”
4. Zhang L, Peng Y, Sun J (2022) Improved deep convolutional neural networks for Alzheimer’s disease classification
using multimodal brain imaging data. J Med Imaging 9(2):024501. https://doi.org/10.1117/1.JMI.9.2.024501
5. Dey S, Maity S, Mondal S, Dey N (2021) Early diagnosis of Alzheimer’s disease using a hybrid deep learning frame-
work. J Med Syst 45(11):135. https://doi.org/10.1007/s10916-021-01875-6
6. Kumar A, Gupta A, Khanna A, Khanna P (2021) Deep learning-based medical image classification for Alzheimer’s
disease diagnosis: a systematic review. J Healthcare Eng 2021:6646234. https://doi.org/10.1155/2021/6646234
7. Liu C, Liu Y, Shen D (2020) Multi-modality cascaded convolutional networks for Alzheimer’s disease prediction.
NeuroImage 215:116806. https://doi.org/10.1016/j.neuroimage.2020.116806
8. Liu J, Zhang J, Luo S (2022) A multi-modal deep learning approach for Alzheimer’s disease diagnosis using MRI and
PET images. J Med Image Anal 78:102210. https://doi.org/10.1016/j.media.2021.102210
9. Ma J, Liu W (2022) Multi-scale attention-guided capsule networks for Alzheimer’s disease classification. J Neurocom-
puting 510:167–180. https://doi.org/10.1016/j.neucom.2021.07.049
10. Zhang L, Zhang Y, Wang Y (2021) Joint learning of MRI and PET images for Alzheimer’s disease diagnosis using a
dual-pathway convolutional neural network. J Pattern Recog 116:107985. https://doi.org/10.1016/j.patcog.2021.
107985
11. Yan R, Han J, Huang L (2021) Multi-scale feature fusion network for Alzheimer’s disease classification. IEEE Transact
Med Imag 41(9):2566–2577. https://doi.org/10.1109/TMI.2021.3059265
12. Li Y, Chen C, Liu M (2020) Multi-modal fusion with generative adversarial networks for Alzheimer’s disease diagnosis.
Comput Med Imag Graphics 82:101679. https://doi.org/10.1016/j.compmedimag.2020.101679
13. Liu M, Cheng D, Yan w (2019) Multi-modal neuroimaging feature learning with multimodal stacked deep
polynomial networks for diagnosis of Alzheimer’s disease. J Neuroinform 17(3):393–404. https://doi.org/10.1007/
s12021-019-09409-x
14. Huang L, Yang H, Wang L (2018) Multi-modal fusion of brain imaging data for Alzheimer’s disease classification. J
Brain Imag Behav 12(4):1244–1255. https://doi.org/10.1007/s11682-017-9799-9
15. Tang H et al. (2020) Research on medical image classification based on machine learning. Available: https://ieeexplore.ieee.org/document/9091175
16. Wang W, Liang D et al (2020) Medical image classification using deep learning. Available: https://www.researchgate.net/publication/337362482_Medical_Image_Classification_Using_Deep_Learning
17. Nadar ERS et al. (2019) Computer-assisted medical image classification for early diagnosis of oral cancer employing deep learning algorithms. Available: https://link.springer.com/article/10.1007/s00432-018-02834-7
18. Zhou Y et al. (2019) Collaborative learning of semi-supervised segmentation and classification for medical images. Available: https://openaccess.thecvf.com/content_CVPR_2019/html/Zhou_Collaborative_Learning_of_Semi-Supervised_Segmentation_and_Classification_for_Medical_Images_CVPR_2019_paper.html
19. Sultan HH et al. (2020) Multi-classification of brain tumor images using deep neural network. Available: https://ieeexplore.ieee.org/document/8723045
20. Armi L, Fekri-Ershad S (2019) Texture image analysis and texture classification methods - a review. Available: https://arxiv.org/abs/1904.06554
21. Lin D et al. (2018) Deep CNNs for microscopic image classification by exploiting transfer learning and feature concatenation. Available: https://www.researchgate.net/publication/324961229_Deep_CNNs_for_microscopic_image_classification_by_exploiting_transfer_learning_and_feature_concatenation
22. Allugunti VR (2021) A machine learning model for skin disease classification using convolution neural network. Available: https://www.researchgate.net/publication/361228242_A_machine_learning_model_for_skin_disease_classification_using_convolution_neural_network
23. Abbas HH, Aaraji ZSh (2022) Automatic classification of Alzheimer's disease using brain MRI data and deep convolutional neural networks. Available: https://arxiv.org/ftp/arxiv/papers/2204/2204.00068.pdf
24. Alqudah AM et al. (2019) Sliding window based support vector machine system for classification of breast cancer using histopathological microscopic images. Available: https://www.tandfonline.com/DOI/abs/10.1080/03772063.2019.1583610?journalCode=tijr20
25. Afshar P, Mohammadi A et al. (2018) Brain tumor type classification via capsule networks. Available: https://arxiv.org/pdf/1802.10200.pdf
26. Gao J et al. (2018) Classification of autism spectrum disorder by combining brain connectivity and deep neural network classifier. Available: https://www.sciencedirect.com/science/article/abs/pii/S0925231218306234
27. Bansal M et al. (2021) Transfer learning for image classification using VGG19: Caltech-101 image data set. Available: https://doi.org/10.1007/s12652-021-03488-z
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.