Article
Brain Tumor Classification Using Conditional Segmentation
with Residual Network and Attention Approach by Extreme
Gradient Boost
Arshad Hashmi * and Ahmed Hamza Osman
Department of Information Systems, Faculty of Computing and Information Technology in Rabigh (FCITR),
King Abdulaziz University, Jeddah 21911, Saudi Arabia
* Correspondence: [email protected]
Abstract: A brain tumor is an uncontrolled growth of tissue in the brain, which is a dangerous
condition for the human body. For later prognosis and treatment planning, the accurate
segmentation and categorization of cancers are crucial. Radiologists need an automated approach to
identify brain tumors, since manual identification is an error-prone and time-consuming operation. This work proposes
conditional deep learning for brain tumor segmentation, residual network-based classification, and
overall survival prediction using structural multimodal magnetic resonance images (MRI). First, we
propose conditional random field and convolution network-based segmentation, which identifies
non-overlapped patches. These patches need minimal time to identify the tumor. If they overlap,
the errors increase. The second part of this paper proposes residual network-based feature mapping
with XG-Boost-based learning. In the second part, the main emphasis is on feature mapping in
nonlinear space with residual features, since residual features reduce the chance of information loss,
and nonlinear space mapping provides efficient tumor information. Feature mapping learned
by XG-Boost improves structure-based learning and increases class-wise accuracy. The
experiment uses two datasets: one with two classes (cancer and non-cancer) and the other with three
classes (meningioma, glioma, pituitary). The performance on both improves significantly compared
to existing approaches. The main objective of this research work is to improve segmentation
and its impact on classification performance parameters, which is achieved through the conditional
random field and residual network. As a result, two-class accuracy improves by 3.4% and three-class
accuracy improves by 2.3%, using only a small convolution network. We therefore conclude that
better segmentation with fewer resources improves the results of brain tumor classification.

Keywords: magnetic resonance imaging; CNN segmentation; patches; CRF; XGBoost; machine
learning; medical image processing; attention mechanism

Citation: Hashmi, A.; Osman, A.H. Brain Tumor Classification Using Conditional Segmentation
with Residual Network and Attention Approach by Extreme Gradient Boost. Appl. Sci. 2022, 12,
10791. https://doi.org/10.3390/app122110791

Academic Editors: Jose-Maria Buades-Rubio and Antoni Jaume-i-Capó

Received: 16 September 2022; Accepted: 14 October 2022; Published: 25 October 2022
and Fisher kernel function improve the accuracy in comparison to deep learning techniques
such as the Convolution Neural Network (CNN), the Multiscale Convolution Neural Network [8],
and CNN with a Genetic Algorithm (GA) [9].
The goal of this study is to examine how the various GBM failure patterns affect the
results of second surgery. Overall survival (OS) and post-recurrence survival (PRS) were
examined according to clinical factors in a prospective cohort of patients with recurrent
GBM. One of the clinical features that was taken into account was the pattern of recurrence.
Calculations based on the Kaplan–Meier method were used to determine survival curves;
the log-rank test was applied in order to evaluate the differences in survival rates between
the various curves. Patients who experienced a local recurrence had a longer OS than
those whose recurrence was non-local: 24.1 months compared to 18.2 months
(P = 0.015, HR = 1.856 (1.130–3.010)). This benefit was more pronounced in patients
whose cancer recurred locally (P = 0.002 with HR = 0.212 (95% CI: 0.081–0.552) and
P = 0.029 with HR = 0.522 (95% CI: 0.289–0.942)). Patients who have recurrent GBM and
are candidates for a second operation
may have a different prognosis depending on the pattern of their recurrence [6]. Conditional
deep learning is proposed here for brain tumor segmentation, residual network-based
classification, and overall survival prediction using structural multimodal magnetic
resonance images (MRI). First, we devise a convolution network-based segmentation
method and then use a conditional random field to locate patches that do not overlap.
Non-overlapping patches require minimal time to identify the tumor; if patches overlap,
the errors increase. The second half of this paper proposes residual network-based feature
mapping with XG-Boost-based learning, in which the mapping of characteristics [10]
is discussed.
knowledge [15]. Deep learning-based methods such as classification, reconstruction [16], and
even segmentation [17] have lately become popular in medical image analysis. The feature
extraction method used by [18] was the VGG-16 model: a feature map was input into an
LSTM recurrent neural network to classify brain cancers. The authors claim that pre-trained
CNN models for feature extraction outperform GoogleNet [5], ResNet [19], and AlexNet [20].
The limitations of 2D CNNs for brain tumor MRI categorization have recently been
investigated [8]. White matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) may
all be recognized using a voxelwise residual network (VoxResNet) built on a 3D CNN-based
architecture [8]. Training results with XG-Boost depend on optimizing the structure
and lowering complexity.
1.2. Motivation
Using Machine Learning (ML) and Deep Learning (DL) approaches, many researchers
have worked on the Brain Tumor Classification (BTC) and developed numerous algorithms
to analyze MRI scans in order to extract numerous potential features from the dataset [5].
Brain tumor early detection is the primary motivation of this research and analysis. The
automatic segmentation and categorization of medical images are critical in diagnosing,
predicting tumor progression, and therapy of brain tumors. Early detection of a brain
tumor results in a more rapid response to treatment, which helps to increase patients'
survival rate [19]. Manually locating and classifying brain cancers in large medical image
collections takes considerable effort and time. The proposed approach therefore uses an
efficient deep learning model to improve segmentation with a conditional random field,
which smooths the segmentation boundary edge and improves feature mapping.
2. Related Work
This work aims to increase the performance of brain tumor classification using MRI
images. This section presents the recent works related to brain tumor classification using
various machine-learning techniques, as shown in Table 1.
Research Gap
• Feature selection reduces the available information, so feature selection and extraction
increase false positives [10].
• Segmentation is an essential source of features, so optimized segmentation is needed for
adequate classification accuracy [8].
• A standard classifier does not reduce the overlap of features, which increases the
false-positive rate [21].
3. Proposed Methodology
This section discusses the proposed approach to classify the brain tumor MRI images
into meningioma, glioma, and pituitary using conditional segmentation by CNN-CRF and
Resnet-XG Boost techniques. Figure 1 demonstrates the basic structure of the segmentation
by CNN-CRF, whereas Figure 2 illustrates the classification and feature mapping of the
Resnet-XG Boost framework.
3.1. Dataset
Clinical situations typically allow the acquisition of only a limited number of brain MRI
slices, each with a significant slice gap. With such little data, creating a 3D model is
challenging. As a result, the suggested method is founded on 2D slices gathered from
233 patients between 2005 and 2010 at Nanfang Hospital in Guangzhou and Tianjin
Medical University, China [24]. Meningiomas (708 slices), pituitary tumors (930 slices),
and gliomas (1426 slices) are among the 3064 slices in this dataset's shared perspectives
(coronal, sagittal, and axial). Additionally, this dataset has 5-fold cross-validation indices.
Using this information, 80% (2452) of the images are used for training, while 20%
(612 images) are used to test performance. The process is carried out five times. The
pixels of the images are 0.49 by 0.49 mm² in size, with an in-plane resolution of 512 by
512. The space between the slices is 1 mm, and the slice thickness is 6 mm. The perimeter
of the tumor was meticulously mapped out by three highly qualified radiologists. Each
slice in the dataset is associated with an information structure, which typically contains
the patient's ID (pid); the tumor type label, which is 1 for meningioma, 2 for glioma, and
3 for pituitary tumor; the coordinates vector (x, y) of points belonging to the tumor
border; and the tumor mask (Tij), a binary image in which tumor positions take the
value 1 and healthy ones the value 0. The pairs of labels and features form the
cornerstone of the training method. Data augmentation via an elastic transform [25]
has also been used to reduce overfitting in the neural networks during training; an
instance of this transformation is shown in Figure 3. The data augmentation strategy
brings the total number of training images for each fold iteration up to 4904 from the
previous 2452. For the two-class dataset, classes 1 and 2 were combined as class 1, and
class 3 was relabeled as class 2 for binary classification.
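The per-slice record described above can be sketched as a small data structure. The class name, field names, and example mask below are illustrative assumptions, not the dataset's actual storage format:

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical container mirroring the per-slice record described above:
# patient id (pid), tumor type label (1 = meningioma, 2 = glioma,
# 3 = pituitary), tumor-border coordinates, and a binary tumor mask
# in which tumor positions take the value 1.
@dataclass
class TumorSlice:
    pid: str
    label: int            # 1, 2, or 3
    border: np.ndarray    # (N, 2) array of (x, y) border points
    mask: np.ndarray      # binary image, same size as the slice

    def tumor_area_pixels(self) -> int:
        # Tumor area is the count of mask pixels equal to 1.
        return int(self.mask.sum())

# Example: a 512 x 512 slice with a small square tumor region.
mask = np.zeros((512, 512), dtype=np.uint8)
mask[100:110, 200:210] = 1
s = TumorSlice(pid="P001", label=2, border=np.argwhere(mask), mask=mask)
print(s.tumor_area_pixels())  # -> 100
```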
3.2. Preprocessing
The preprocessing is done in two stages. The first step reduces noise in the MRI images
using the Z-score, which normalizes pixel intensities to zero mean and unit variance and
thereby suppresses regions where the variance is high. Because the MRI image database
contains images of different sizes, they must be resized to a normalized 256 by 256. By
augmenting images, we can increase the number of images in the dataset and improve the
class imbalance. This was accomplished through two transformation methods: the first
rotated images by 90 degrees, and the second flipped images vertically; in both datasets,
augmentation was applied to produce three times the original number of images.
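The two preprocessing steps above can be sketched as follows; the synthetic test image and the helper names are illustrative assumptions:

```python
import numpy as np

def zscore_normalize(img: np.ndarray) -> np.ndarray:
    # Z-score normalization: subtract the mean and divide by the standard
    # deviation, giving pixel intensities zero mean and unit variance.
    return (img - img.mean()) / img.std()

def augment(img: np.ndarray) -> list:
    # The two transformations described above: a 90-degree rotation and a
    # vertical flip, each yielding an extra training image.
    return [np.rot90(img), np.flipud(img)]

rng = np.random.default_rng(0)
img = rng.normal(loc=120.0, scale=30.0, size=(256, 256))  # synthetic "slice"
norm = zscore_normalize(img)
print(abs(float(norm.mean())) < 1e-9, abs(float(norm.std()) - 1.0) < 1e-9)  # -> True True
print(len(augment(norm)))  # -> 2
```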
3.3. Segmentation
Figure 1 shows the segmentation flow. CNN is used for tumor segmentation because the
tumor size is so small that other approaches do not work correctly. CNN does not use the
entire image for training; instead, it is trained using patches extracted at random from
images. During training, images are divided into patches with target categories assigned
to their center points. The method is commonly used in biomedical image processing,
such as for skin cancer and mammography images [18]. The patch-based training strategy
requires more storage and time than training on full-sized images, but it is preferable for
tumor segmentation because tumor areas are small and full-sized images would produce
false positives. CNN outperforms other methods, but it has a weakness that this paper
circumvents: CNN segmentation completes with rough edges and low contrast, which
sometimes misses the tumor boundary and increases false positives. Current segmentation
techniques also disregard overlapping patch boundaries, and the overlap of patches
increases the detection and localization error of tumors. The proposed method uses CNN
segmentation to map patches, while a condition-based conditional random field refines
the overlapping boundaries. This improves detection accuracy by enhancing segmentation,
feature mapping, and classification.
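A minimal sketch of non-overlapping patch extraction, assuming a fixed grid rather than the paper's sampling around labeled center points:

```python
import numpy as np

def extract_patches(img: np.ndarray, size: int) -> list:
    # Split the image into non-overlapping size x size patches. This is a
    # simplified grid sketch; the paper samples patches around labeled
    # center points rather than on a fixed grid.
    h, w = img.shape
    return [img[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)    # stride == size,
            for c in range(0, w - size + 1, size)]   # so patches cannot overlap

img = np.arange(256 * 256, dtype=np.float32).reshape(256, 256)
patches = extract_patches(img, 32)
print(len(patches))      # -> 64 (an 8 x 8 grid of 32 x 32 patches)
print(patches[0].shape)  # -> (32, 32)
```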
E(x) = Σ_i φ_u(x_i) + Σ_{i,j} φ_p(x_i, x_j)    (1)

where φ_u(x) is the unary potential, computed for every pixel as

φ_u(x_i) = −log p(x_i)    (2)
with a convolution network using four convolution layers with 0.2 dropout, combining
the features in a SoftMax activation; after the CNN, the segmentation edge is found. Some
of these edges are sharp, so the process smooths them with a conditional random field,
shown in the smoothing-edge block, which refines the edges by an efficient threshold
defined dynamically by the CRF block. The other blocks show the input, intermediate
edges, and output. The CNN input is 3 by 32, followed by a first Conv layer of 32 by 32, a
second Conv layer of 32 by 32, a third Conv layer of 32 by 64 by 15, a fourth Conv + pool
layer of 64 by 15, and then a flattened layer of 64 by 64. Figure 2 shows the classification
process on the segmented images from Figure 1. We then apply the CNN variant
Resnet-50, which is divided into five blocks: a first block of 7 by 7 by 64, a second block of
1 by 1 by 64, a third block of 1 by 1 by 128, a fourth block of 1 by 1 by 1024, and a fifth
block of 1 by 1 by 2048. From these features, we used the XG-Boost block to build a
prediction model and then tested it on various analytical performance parameters.
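The argument for residual features, that the skip connection prevents loss of information, can be illustrated with a toy residual block; the weights and dimensions below are illustrative, not Resnet-50's:

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

def residual_block(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    # Skeleton of a residual block: the input is added back to the
    # transformed features (identity shortcut), so the signal survives
    # even when the learned transformation carries little information.
    out = relu(x @ w1) @ w2
    return relu(out + x)

rng = np.random.default_rng(1)
x = rng.normal(size=(1, 8))
zero = np.zeros((8, 8))
# With all-zero weights the block reduces to relu(x): the shortcut alone
# carries the input forward, which is the "no loss of information" point.
print(np.allclose(residual_block(x, zero, zero), relu(x)))  # -> True
```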
Figure 3 illustrates the three classes of brain tumor detection across four phases:
prediction (first image, left), segmentation (second image, right), the attention
mechanism (third image, left), and residual activation (fourth image, right).
3.6. XG-Boost (XGB)
XG-Boost combines sets of classification and regression trees, making it an ensemble
model of classification and regression trees. XGB [2] is used for supervised learning
problems and enhances patterns based on structure. A representation of the XG-Boost
prediction model may be seen in Figure 2. XG-Boost improves on gradient boosting by
modifying the structure of its base learner trees.
Algorithm 2: Residual-Boost.
Input: Segmented images
Output: Classification of brain tumor
2.1 For every segmented image
2.2   M(x) ← pixel matrix
2.3   F_v^(x) = Resnet50(M(x))
2.4   Initialize XG-Boost modeling
        F_0(x) = argmin_γ Σ_{i=1}^{n} L(f_v^(x), γ)    (5)
2.5 For every instance and leaf node (N and L, respectively)
        ε(F_0(x)) = γL + (1/2) λ Σ_{i=1}^{L} w_i²    (6)
2.6 For each leaf
        W_i = −(Σ_{i∈L_i} g_i) / (Σ_{i∈N_i} h_i + γ)    (7)
2.7 Make classifier model α_2 ← (W_i, θ)
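Equation (7), the leaf-weight rule used by XG-Boost, can be checked numerically with a toy example; the squared-error loss and the values below are illustrative assumptions:

```python
import numpy as np

def leaf_weight(g: np.ndarray, h: np.ndarray, reg: float) -> float:
    # Optimal leaf weight under the second-order loss approximation,
    # matching Equation (7): w = -sum(g) / (sum(h) + reg), where g and h
    # are per-instance gradients and Hessians of the loss.
    return float(-g.sum() / (h.sum() + reg))

# Toy check with squared-error loss, where g_i = pred_i - y_i and h_i = 1.
y = np.array([1.0, 1.0, 0.0])
pred = np.zeros(3)
g = pred - y        # gradients: [-1, -1, 0]
h = np.ones(3)      # Hessians:  [ 1,  1, 1]
print(leaf_weight(g, h, reg=1.0))  # -> 0.5
```

The regularization term in the denominator shrinks leaf weights, which is how XG-Boost controls model complexity as described above.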
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (8)

In Equation (8), TP refers to samples that have been accurately categorized as positive,
while TN refers to those that have been correctly classified as negative. FP represents
negative samples that have been wrongly classified as positive, and FN represents positive
samples that have been wrongly classified as negative.
Recall: It is used to measure the percentage of real positive instances that the algorithm
forecasts accurately. Equation (9) is used in the computation of the recall metric.

Recall = TP / (TP + FN)    (9)
F-Score: It is used as a means of expressing the harmonic mean of precision and recall. It
only counts samples that have been positively identified as pituitary, meningioma, or
glioma. The F-Score is determined by applying Equation (10) to the data.

F-Score = 2TP / (2TP + FP + FN)    (10)
Precision: It is used to measure how many of the cases predicted as positive actually
turned out to be positive.

Precision = TP / (TP + FP)    (11)
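Equations (8) through (11) can be computed directly from the four counts; the counts below are illustrative values, not the paper's results:

```python
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    # Direct implementation of Equations (8)-(11).
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),   # Eq. (8)
        "recall": tp / (tp + fn),                      # Eq. (9)
        "f_score": 2 * tp / (2 * tp + fp + fn),        # Eq. (10)
        "precision": tp / (tp + fp),                   # Eq. (11)
    }

m = metrics(tp=90, tn=85, fp=10, fn=15)
print(round(m["accuracy"], 3))   # -> 0.875
print(round(m["f_score"], 3))    # -> 0.878
```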
Actual \ Predicted    Meningioma    Glioma    Pituitary
Meningioma            100           5         1
Glioma                3             197       0
Pituitary             6             0         300
Actual \ Predicted    Meningioma    Glioma    Pituitary
Meningioma            2000          51        10
Glioma                58            1500      0
Pituitary             36            0         1200
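From the first confusion matrix above (the 612-slice test set, assuming rows are actual classes and columns are predictions), the overall accuracy and per-class recall follow directly:

```python
import numpy as np

# Test-set confusion matrix from the table above, assuming rows are actual
# classes and columns are predictions (meningioma, glioma, pituitary).
cm = np.array([[100,   5,   1],
               [  3, 197,   0],
               [  6,   0, 300]])

accuracy = np.trace(cm) / cm.sum()               # correct / all 612 slices
per_class_recall = np.diag(cm) / cm.sum(axis=1)  # Eq. (9) applied per class
print(round(float(accuracy), 4))                 # -> 0.9755
print([round(float(r), 3) for r in per_class_recall])  # -> [0.943, 0.985, 0.98]
```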
To validate the proposed technique and its impact on detection accuracy, we compared it
with the work of other researchers: approaches using SVM and the Fisher kernel achieve
detection accuracy ranging from 91% to 94%, while the accuracy of CNN and CNN-based
approaches ranges from 81% to 97%. By contrast, the proposed CRF-Resnet50 method
reaches a maximum accuracy of 99.56%. The proposed method is also more efficient:
existing approaches have O(n²) complexity, whereas the proposed approach runs in
O(N log N). Existing approaches use differential CNNs for classifier tuning, which raises
complexity to O(n²). The proposed method improved segmentation while avoiding the
use of a complex network.
In existing approaches, segmentation is performed without regard to overlapping patch
boundaries; the overlap of information increases the error of tumor detection and
localization. The proposed CNN segmentation approach maps into an infinite nonlinear
space, while the conditional random field extracts a condition-based finite space. This
enhances detection accuracy by improving segmentation, feature mapping, and classification.
5. Conclusions
This study presents an accurate and automated brain tumor categorization method
that requires little preprocessing. Deep Residual Learning is used to extract features from
brain MRI images in the proposed system. The proposed method focuses on efficient
segmentation by using a CNN-CRF-based approach to reduce overlapping edges. In this
approach, the CRF provides condition-based decisions on segmented boundaries, and the
overlap of small tumors with other patches is reduced through non-overlapping borders.
As a result, non-overlapping patches improve cancer detection, enable efficient feature
mapping by the residual network, and support learning by an XG-Boost-based ensemble
classifier. The experiments employ two datasets: one with two classes (cancer and
non-cancer) and the other with three classes (meningioma, glioma, pituitary). We applied
the proposed CNN-CRF-Resnet approach to both datasets, used 10-fold cross-validation,
and analyzed the performance metrics. For the two classes, we achieved an accuracy of
99.89%, precision of 98.12%, recall of 98.45%, and F-score of 98.1%. The three-class
experiments achieved an accuracy of 99.56%, precision of 98.56%, recall of 98.12%, and
F-score of 98.45%. The augmented approach improved results by increasing the instances
of all classes, and the three classes all showed the same increase in performance.
Compared to existing approaches, the two-class and three-class approaches significantly
reduce resource usage and computation. The proposed research aims to improve the
handling of overlap between patches and its effect on the efficiency of feature mapping
by the residual network. In addition, the proposed work contributes learning and
attention mechanisms for enhancing learning and features, respectively.
Author Contributions: Conceptualization, A.H.; Data Curation, A.H.; Formal Analysis, A.H.,
A.H.O.; Acquisition, A.H.; Methodology, A.H., A.H.O.; Resources, A.H., A.H.O.; Validation, A.H.,
A.H.O.; Investigation, A.H.O.; Visualization, A.H., A.H.O.; Software, A.H.; Project Administration,
A.H.O.; Project Supervision, A.H.; Writing Original Draft, A.H.; Writing Review and Editing,
A.H.O. All authors have read and agreed to the published version of the manuscript.
Funding: This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz
University, Jeddah, under grant no (G: 383-830-1442).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: This project was funded by the Deanship of Scientific Research (DSR) at King
Abdulaziz University, Jeddah, under grant no. (G: 383-830-1442). The authors, therefore, acknowledge
with thanks DSR for technical and financial support.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design
of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or
in the decision to publish the results.
References
1. Pasqualetti, F.; Montemurro, N.; Desideri, I.; Loi, M.; Giannini, N.; Gadducci, G.; Malfatti, G.; Cantarella, M.; Gonnelli, A.;
Montrone, S.; et al. Impact of recurrence pattern in patients undergoing a second surgery for recurrent glioblastoma. Acta Neurol.
Belg. 2022, 122, 441–446. [CrossRef] [PubMed]
2. Díaz-Pernas, F.J.; Martínez-Zarzuela, M.; Antón-Rodríguez, M.; González-Ortega, D. A deep learning approach for brain tumor
classification and segmentation using a multiscale convolutional neural network. Healthcare 2021, 9, 153. [CrossRef] [PubMed]
3. Montemurro, N.; Fanelli, G.N.; Scatena, C.; Ortenzi, V.; Pasqualetti, F.; Mazzanti, C.M.; Morganti, R.; Paiar, F.; Naccarato, A.G.;
Perrini, P. Surgical outcome and molecular pattern characterization of recurrent glioblastoma multiforme: A single-center
retrospective series. Clin. Neurol. Neurosurg. 2021, 207, 106735–106735. [CrossRef] [PubMed]
4. Usha, R.; K. SVM classification of brain images from MRI scans using morphological transformation and GLCM texture features.
Int. J. Comput. Syst. Eng. 2019, 5, 18–23. [CrossRef]
5. Khan, M.A.; Ashraf, I.; Alhaisoni, M.; Damaševičius, R.; Scherer, R.; Rehman, A.; Bukhari, S.A.C. Multimodal brain tumor
classification using deep learning and robust feature selection: A machine learning application for radiologists. Diagnostics 2020,
10, 565–565. [CrossRef] [PubMed]
6. Mathew, A.R.; Anto, P.B. Tumor detection and classification of MRI brain image using wavelet transform and SVM. In Proceedings
of the 2017 International Conference on Signal Processing and Communication (ICSPC), Coimbatore, India, 27–28 July 2017;
pp. 75–78.
7. Hamiane, M.; Saeed, F. SVM classification of MRI brain images for computer-assisted diagnosis. Int. J. Electr. Comput. Eng. 2017,
7, 2555–2555. [CrossRef]
8. Mohan, G.; Subashini, M.M. MRI based medical image analysis: Survey on brain tumor grade classification. Biomed. Signal
Process. Control 2018, 39, 139–161. [CrossRef]
9. Wadhwa, A.; Bhardwaj, A.; Verma, V.S. A review on brain tumor segmentation of MRI images. Magn. Reson. Imaging 2019,
61, 247–259. [CrossRef] [PubMed]
10. Gumaei, A.; Hassan, M.M.; Hassan, R.; Alelaiwi, A.; Fortino, G. A Hybrid Feature Extraction Method with Regularized Extreme
Learning Machine for Brain Tumor Classification. IEEE Access 2019, 7, 36266–36273. [CrossRef]
11. Amin, J.; Sharif, M.; Yasmin, M.; Fernandes, S.L. A distinctive approach in brain tumor detection and classification using MRI.
Pattern Recognit. Lett. 2020, 139, 118–127. [CrossRef]
12. Sharif, M.; Tanvir, U.; Munir, E.U.; Khan, M.A.M. Brain tumor segmentation and classification by improved binomial thresholding
and multi-features selection. J. Ambient. Intell. Humaniz. Comput. 2018, 1–20. [CrossRef]
13. Swati, Z.N.K.; Zhao, Q.; Kabir, M.; Ali, F.; Ali, Z.; Ahmed, S.; Lu, J. Brain tumor classification for MR images using transfer
learning and finetuning. Comput. Med. Imaging Graph. 2019, 75, 34–46. [CrossRef] [PubMed]
14. Saha, C.; Hossain, M.F. MRI brain tumor images classification using K-means clustering, NSCT and SVM. In Proceedings of the
2017 4th IEEE Uttar Pradesh Section International Conference on Electrical, Computer and Electronics (UPCON), Mathura, India,
26–28 October 2017; pp. 329–333.
15. Mukambika, P.S.; Rani, K.U. Segmentation and classification of MRI brain tumor. Int. Res. J. Eng. Technol. IRJET 2017, 4, 683–688.
16. Bhanumathi, V.; Sangeetha, R. CNN Based Training and Classification of MRI Brain Images. In Proceedings of the 2019 5th
International Conference on Advanced Computing & Communication Systems (ICACCS), Coimbatore, India, 15–16 March 2019;
pp. 129–133.
17. Boustani, A.E.; Aatila, M.; Bachari, E.E.; Oirrak, A. MRI brain images classification using convolutional neural networks. In
International Conference on Advanced Intelligent Systems for Sustainable Development; Springer: Cham, Switzerland, 2019; pp. 308–320.
18. Paul, J.S.; Plassard, A.J.; Landman, B.A.; Fabbri, D. Deep learning for brain tumor classification. In Medical Imaging 2017:
Biomedical Applications in Molecular, Structural, and Functional Imaging; SPIE: Bellingham, WA, USA, 2017; Volume 10137.
19. Pashaei, A.; Sajedi, H.; Jazayeri, N. Brain Tumor Classification via Convolutional Neural Network and Extreme Learning
Machines. In Proceedings of the 8th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad,
Iran, 25–26 October 2018; pp. 314–319.
20. Abiwinanda, N.; Hanif, M.; Hesaputra, S.T.; Handayani, A.; Mengko, T.R. Brain tumor classification using convolutional neural
network. In World Congress on Medical Physics and Biomedical Engineering; Springer: Singapore, 2018; pp. 183–189.
21. Kader, I.A.E.; Xu, G.; Shuai, Z.; Saminu, S.; Javaid, I.; Salim Ahmad, I. Differential Deep Convolutional Neural
Network Model for Brain Tumor Classification. Brain Sci. 2021, 11, 8001442. [CrossRef] [PubMed]
22. Mzoughi, H.; Njeh, I.; Wali, A.; Slima, M.B.; Benhamida, A.; Mhiri, C.; Mahfoudhe, K.B. Deep Multi-Scale 3D Convolutional
Neural Network (CNN) for MRI Gliomas Brain Tumor Classification. J. Digit. Imaging 2020, 33, 7522155. [CrossRef] [PubMed]
23. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R. The
multimodal brain tumor image segmentation benchmark (brats). IEEE Trans. Med. Imaging 2014, 34, 1993–2024. [CrossRef]
[PubMed]
24. Cheng, J.; Huang, W.; Cao, S.; Yang, R.; Yang, W.; Yun, Z.; Wang, Z.; Feng, Q. Enhanced performance of brain tumor classification
via tumor region augmentation and partition. PLoS ONE 2015, 10, e0140381.
25. Ge, C.; Gu, I.Y.; Jakola, A.S.; Yang, J. Deep Learning and Multi-Sensor Fusion for Glioma Classification Using Multistream 2D
Convolutional Networks. In Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and
Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 5894–5897.
26. Cheng, J.; Yang, W.; Huang, M.; Huang, W.; Jiang, J.; Zhou, Y.; Yang, R.; Zhao, J.; Feng, Q.; Chen, W. Retrieval of brain tumors by
adaptive spatial pooling and Fisher vector representation. PLoS ONE 2016, 11, e0157112.
27. Sultan, H.H.; Salem, N.M.; Al-Atabany, W. Multi-classification of brain tumor images using deep neural network. IEEE Access
2019, 7, 69215–69225. [CrossRef]
28. Anaraki, A.K.; Ayati, M.; Kazemi, F. Magnetic resonance imaging-based brain tumor grades classification and grading via
convolutional neural networks and genetic algorithms. Biocybern. Biomed. Eng. 2019, 39, 63–74. [CrossRef]