Impact of Data Augmentation on Brain Tumor Detection Using Different YOLO Versions Models
Abstract: Brain tumors are widely recognized as one of the world's deadliest and most disabling diseases. Every year, thousands of people die as a result of brain tumors caused by the rapid growth of tumor cells. Saving the lives of tens of thousands of people worldwide therefore requires rapid investigation and automatic identification of brain tumors. In this paper, we propose a new methodology for detecting brain tumors. The designed framework assesses the application of cutting-edge YOLO models, namely YOLOv3, YOLOv5n, YOLOv5s, YOLOv5m, YOLOv5l, YOLOv5x, and YOLOv7, with varying weights and data augmentation on a dataset of 7382 samples from three distinct MRI orientations: axial, coronal, and sagittal. Several data augmentation techniques were employed to minimize detector sensitivity while increasing detection accuracy. In addition, the Adam and Stochastic Gradient Descent (SGD) optimizers were compared. We aim to find the ideal network weight and MRI orientation for detecting brain cancers. The results show that, with an IoU of 0.5, the axial orientation had the highest detection accuracy, with an average mAP of 97.33%. Furthermore, SGD surpasses the Adam optimizer by more than 20% mAP, and YOLOv5n, YOLOv5s, YOLOv5x, and YOLOv3 achieve an mAP of more than 95%, surpassing the other models. It was also observed that the YOLOv5 and YOLOv3 models are more sensitive to data augmentation than the YOLOv7 model.
parts of the body and spread to the brain [4]. Primary malignant tumors are more likely to kill the patient. Gliomas, pituitary tumors, and meningiomas are the three most common forms of brain tumors diagnosed. A meningioma is a tumor that develops in the meninges, the thin membranes (or tissues) surrounding and protecting the brain and spinal cord. Gliomas begin in the brain's glial cells. Tumors of the pituitary gland can develop when cells in the pituitary gland, located close to the brain, grow uncontrollably. A brain tumor is one of the diseases that can take someone's life most quickly, so prompt identification and treatment are necessary to save lives. Machine learning (ML) algorithms could automatically diagnose individuals with brain tumors and classify them into specific groupings to address this issue. However, because of the wide range of sizes, shapes, and intensities these tumors can take, classifying brain cancers into meningioma, pituitary, and glioma tumors is challenging [11]. Furthermore, meningiomas, pituitary tumors, and gliomas account for the great majority of occurrences of brain cancer [37].

Furthermore, the high resolution of brain MRI allows for an in-depth analysis of the brain's anatomy. As a result, magnetic resonance imaging (MRI) images substantially impact the automatic interpretation of medical images [18, 21, 36, 49]. Researchers heavily rely on MRI technology when detecting and evaluating brain cancers, and they have recently created many new automated algorithms for detecting and classifying brain cancers in MRI data. Traditional machine learning algorithms, such as Multi-Layer Perceptron (MLP) and SVM classifiers, are extensively used for brain tumor identification [30]. Deep learning (DL) [1] is a subfield of machine learning that builds a feature hierarchy by using low-level features to build high-level features.

Technological progress has enabled digital image processing to spread to fields including photogrammetry, remote sensing, and computer vision [8]. When an image is processed digitally, it undergoes a series of transformations that allow us to convert it to a digital format and extract valuable data. Computer vision applications of deep learning for digital image processing have expanded to include a wide range of tasks, from face recognition [2] to object detection and classification [3]. Remote sensing and photogrammetric images are ideal for deep learning-based object detection techniques, and deep learning approaches to object detection perform better with larger datasets and more robust models. Significant advances in object detection have been made thanks to R-CNNs and other region-based approaches [10]. Two main types of convolutional neural networks are used for object identification: two-stage networks and single-stage networks. Two-stage CNNs include R-CNN [14], Faster R-CNN [34], and R-FCN [12]. Two-stage methods are less efficient than one-stage methods, although they eventually yield strong results. One example of a single-stage strategy employed in this research is the "You Only Look Once" (YOLO) methodology. To locate objects with distinct bounding boxes in space, YOLO frames the task as a regression problem. This formulation allows YOLO to yield results faster than competing two-stage object identification algorithms [38].

DWI is an essential technique for diagnosing these malignant growths because it can show how brain tumors interfere with the normal free diffusion of water inside tissues [45]. Because of the numerous features that can be extracted from MRI images, these images are a true goldmine of information that can be utilized to classify tumors. Learning how to describe data opens the stage for DL to eventually apply that knowledge to form inferences and carry out actions.

DL methods are used to classify diagnostic imaging studies, and DL-based approaches are useful across various domains and specialties [5]. A large amount of training data is required for DL algorithms to perform successfully. In recent years, there has been an increase in the acceptability of DL techniques in general and the prominence of the CNN model in particular.

Some potentially fatal diseases have been difficult to diagnose, but recent advances in Computer-Aided Diagnosis (CAD) have eliminated that problem for many people. This technology makes rapid and accurate identification by medical equipment possible, allowing doctors to extend a patient's life or improve their quality of life [9].

Further, potent new tools called Deep Convolutional Neural Networks (DCNNs) were produced by fusing ML and computer vision. These state-of-the-art models have successfully addressed CAD issues such as recognition, classification, segmentation, and detection [29]. Figure 1 shows the process of detecting brain tumors using deep learning methods.

However, DCNNs form the basis of most existing CAD systems for identifying and locating brain and breast cancers, and these systems typically perform poorly across platforms and require a lot of computing power [16]. Most DCNNs are restricted by the inability of lightweight classification models to pinpoint the precise site of the tumor [23]. Comparatively, a segmentation model can utilize a mask to detect the damaged area and identify the tumor, but it has greater processing costs [28].
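To make the single-stage workflow described above concrete, the following minimal Python sketch loads a pretrained YOLOv5 checkpoint through the public Ultralytics torch.hub interface and runs one forward pass. The model name, input image, and output layout follow the Ultralytics YOLOv5 documentation and are illustrative assumptions, not part of this paper's pipeline.

```python
# Minimal single-stage detection sketch (assumes the ultralytics/yolov5 hub API).
import torch

# Load a small pretrained YOLOv5 model; weights are downloaded on first use.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# One forward pass maps the whole image to boxes, scores, and classes:
# detection is solved as a single regression, not a propose-then-classify pipeline.
results = model("mri_slice.jpg")  # hypothetical input image

# Each row: x1, y1, x2, y2, confidence, class index.
print(results.xyxy[0])
```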
The key contributions are as follows:

1. Several modern and older detection models exist. In this paper, we propose a new methodology for comparing and evaluating the effectiveness of the most recent models for early brain cancer detection, such as YOLOv3, YOLOv5, and YOLOv7. The objective is to highlight the most effective models that can be used to identify various illnesses in the future.
2. Data quality is critical when utilizing supervised learning because of the scarcity of labeled datasets. We apply a mixture of data augmentation techniques to expand the number of dataset components, lower the sensitivity of detection models, and improve their performance in the early detection of brain cancer on other datasets.
3. Brain cancer may be diagnosed utilizing images with three distinct axes: sagittal, coronal, and axial. As a result, in this study, we use three datasets that cover all distinct axes and emphasize the image dimensions with the best discriminatory strength for contemporary detection algorithms.
4. To our knowledge, the YOLOv7 model is the most recent detection model. As a result, we examine the utilization of the most well-known optimizers, such as Adam and Stochastic Gradient Descent (SGD), in terms of performance and detection speed based on a set of loss functions and evaluation metrics.

The rest of the paper is structured as follows: section 2 shows the state-of-the-art related works. Section 3 illustrates the proposed methodology for brain tumor detection, dataset description and analysis, and employed detection models. Section 4 shows the evaluation metrics. Section 5 discusses the experimental results. Finally, the conclusion and future directions are shown in section 6.
outdoor objects with a precision of 100% and a recall of 95.3%. They also stated that YOLOx outperforms YOLOv5 regarding detection performance and speed.

Other research evaluated the evolution and improvement of various YOLO algorithm variants over time. In [19] a new framework was proposed to compare the efficiency of YOLOv3 with YOLOv5. The framework is intended to detect apples, and a collection of 878 images with varying resolutions describing near and distant views of apples was employed. The testing results show that YOLOv5 surpasses YOLOv3 in recall, with 97.8%, compared to other models such as Faster-RCNN (81.4% recall) and DaSNet-v2 (86.8%). YOLOv5 generates better outcomes in terms of precision, recall, and F-measure. In [27] a methodology for detecting landing locations for autonomous flying systems was proposed. The developed framework analyzes the usage of several YOLO versions, such as YOLOv3, YOLOv4, and YOLOv5, to identify landing sweet spots to decrease flight system failure and boost safety. A collection of 11268 satellite images, called DOTA, with up to 20,000x20,000 image resolution and 15 labels was employed. The findings demonstrate that YOLOv5 with large network weights outperforms the others, with a precision of 70%, a recall of 61%, and a mean average precision of 63%. Furthermore, YOLOv4 exceeds YOLOv3 with a recall of 57% and a mean average precision of 60%.

The detection of indoor and outdoor objects is helping to track down pandemic infections such as coronavirus, and numerous studies have employed YOLO to recognize face masks. In [40] a real-time monitoring system that identifies face masks for the COVID-19 pandemic was proposed using YOLOv5. Closed-circuit television footage is supplied to the established framework, and a dataset of 3,846 images with masked and unmasked labels was created. Several approaches were used to augment the data, including Gaussian and motion blur. In addition, stacked ResNet-50, which incorporates transfer learning, was deployed. With a testing accuracy of 87% and a precision of 71%, stacked ResNet-50 surpasses comparable models such as ResNet-50 and a plain Convolutional Neural Network. In [47] a new face detection method based on YOLOv5 was developed, in which ShuffleCANet is used as a new backbone YOLO layer. The AIZOO dataset was employed, which contains 7959 images of faces and masked faces. For image splicing and arrangement, the mosaic approach was used, and images were processed and scaled to 640 x 640. With a mean average precision of 95.2%, the proposed framework with YOLOv5 surpasses existing models such as YOLOv3. Furthermore, the proposed methodology with modified ShuffleCANet exceeds the original YOLOv5 findings by 0.58% in precision.

Other studies used YOLOv5 for indoor and outdoor fire and smoke detection. In [50], swin-YOLOv5, a new framework that improves on the original YOLOv5 architecture by improving feature extraction, was proposed. The developed framework can acceptably detect fire and smoke; swin-YOLOv5's fundamental concept is to use a transformer across three heads. A dataset of 16,503 images of two target classes was employed for comparison, and seven hyperparameters were fine-tuned. According to the data, swin-YOLOv5 outperforms the original by a 0.7% mean average precision enhancement at an IoU of 0.5 and a 4.5% enhancement at IoU 0.5 to 0.95. In [43] an enhanced version of YOLOv5 was developed, incorporating dynamic anchor learning through the K-means++ algorithm. The approach seeks to limit fire damage by improving detection speed and performance. Furthermore, several loss functions such as CIoU and GIoU were applied to three distinct YOLOv5 models: YOLOv5 small, YOLOv5 medium, and YOLOv5 large. A self-created dataset of 4815 images was exposed to a synthetic system to increase the data size to 20,000. According to the findings, the modified model outperforms the original YOLOv5 by 4.4% mean average precision. It was also discovered that YOLOv5 performs better using the CIoU loss function, with a recall of 78% and a mean average precision of 87%.

Since the launch of YOLOv5 with its many models, several studies have employed the YOLO algorithm to improve Internet of Things (IoT) technology. In [6], YOLOv5 was used to create a new framework to improve IoT devices with limited memory and computing power, and the YOLOv4 model was utilized to compare experiment findings. Two separate datasets, including automobile license plates, were integrated into a single dataset of 5991 distinct 640x640-pixel images to boost data size. Transfer learning was used to build the model using YOLOv5 with a small weight network and the Microsoft COCO dataset. Furthermore, Long Short-Term Memory (LSTM) based on an OCR engine was applied for automobile plate identification. The findings demonstrate that YOLOv5s outperforms YOLOv4 with an mAP of 87% across 100 epochs.

In the context of medical data, YOLOv5 has shown an improvement in diagnosing cancer status. The study in [25] proposed a new methodology to improve the use of YOLOv5 for breast cancer detection. The intended work is assessed using four YOLOv5 weight models: YOLOv5 small, YOLOv5 medium, YOLOv5 large, and YOLOv5 x-large. The CBIS-DDSM dataset, which contains 10239 distinct 1000x2000-pixel images, was employed; it specifies whether the breast cancer is benign or malignant. The testing findings demonstrate that the modified YOLOv5x outperforms the small, medium, and large weights, with a Matthews Correlation Coefficient (MCC) value of 93.6%.
In addition, the proposed improved version of YOLOv5m was compared to other models such as YOLOv3 and Faster RCNN; it was discovered that the modified YOLOv5m outperforms YOLOv3 and Faster RCNN with an accuracy of 96.5% and an mAP of 96%. In [26] a framework for detecting brain tumors via transfer learning was developed. The proposed methodology employs the tiny YOLOv4 model for training and the YOLOv3 detection unit. The model was trained on the Microsoft Common Objects in Context (COCO) dataset together with a gathered dataset of 3064 magnetic resonance images at 512x512 resolution covering different tumor regions, namely coronal, axial, and sagittal, using transfer learning. With a mean average precision of 0.9314, the findings show that the fine-tuned tiny YOLOv4 model with transfer learning surpasses the others. The study in [35] used five distinct YOLOv5 models with transfer learning for identifying malignant brain tumors, including the nano, small, medium, large, and x-large models. The proposed framework uses the Brats 2021 dataset, which contains 2,000 instances with 8000 scans at 240x240 resolution and three distinct kinds and locations, namely T1, T2, and Flair. In addition, the Microsoft COCO dataset was utilized to train the model using transfer learning. With a mean average precision of 0.912, the findings demonstrate that the YOLOv5 x-large model outperforms the others. In this paper, we propose a new framework for detecting brain tumors using three distinct YOLO models, YOLOv3, YOLOv5, and YOLOv7, with different data augmentation techniques. Table 1 summarizes the state-of-the-art related works.
Table 1. Summary of the state-of-the-art research.

Paper | Dataset | Task | YOLO version | Findings
[41] | Newly constructed dataset with 11,367 samples and Pascal-VOC2012 at 640 x 640 | Indoor object detection | YOLOv5 | Best average accuracy of 93.9 at an intersection over union of 0.9.
[46] | Full 360-degree images with a resolution of 2048 x 128 collected by Lidar sensors | Indoor and outdoor object detection | YOLOx and YOLOv5 | YOLOx outperforms others with a precision of 100% and a recall of 95.3%.
[19] | Apple dataset with 878 images of different resolutions | Detecting apple fruit | YOLOv3 and YOLOv5 | YOLOv5 outperforms YOLOv3 with a recall of 97.8%.
[27] | DOTA dataset with 11268 satellite images of 20,000 x 20,000 resolution and 15 target classes | Detecting landing sweet spots | YOLOv3, YOLOv4, and YOLOv5 | YOLOv5 shows the best improvement, with a precision of 70% and a recall of 61%.
[40] | A dataset of 3,846 face mask images collected by CCTVs | Detecting face masks | YOLOv5 | Stacked ResNet-50 outperforms others with a testing accuracy of 87% and a precision of 71%.
[47] | AIZOO face mask detection dataset with 7959 images of 640 x 640 | Detecting face masks | YOLOv3 and YOLOv5 | ShuffleCANet as the backbone layer outperforms others with a mean average precision of 95.2%.
[50] | Dataset of 16,503 images of two target classes | Fire and smoke detection | YOLOv5 | Swin-YOLOv5 outperforms others with a 0.7% mAP improvement at an IoU of 0.5.
[43] | Self-built dataset of 4815 images of fires and smoke | Fire detection | YOLOv5 | The improved YOLOv5 using K-means++ outperforms others by 4.4% mAP.
[6] | Google images, Microsoft COCO, and Indian number plates dataset | Automobile plate detection | YOLOv4 and YOLOv5 | YOLOv5s outperforms others with an mAP of 87%.
[25] | CBIS-DDSM dataset with 10239 distinct 1000 x 2000 pixel breast cancer images | Breast cancer detection | YOLOv5 and YOLOv3 | YOLOv5x outperforms other models with an MCC of 93.6%; it also outperforms YOLOv3 with an accuracy of 96.5% and an mAP of 96%.
[26] | Microsoft COCO dataset and a collected dataset of 3064 brain cancer MRI images of 512 x 512 | Brain tumor detection | Tiny YOLOv4 and YOLOv3 | Tiny YOLOv4 with transfer learning shows the best results with an mAP of 93.14%.
[35] | Microsoft COCO and Brats 2021 datasets with 240 x 240 brain cancer MRI images | Brain tumor detection | YOLOv5 | The YOLOv5 x-large model shows the best results with an mAP of 91.2%.
3. Proposed Methodology for Brain Tumor Detection

This section demonstrates the proposed framework for detecting brain tumors using various YOLO models such as YOLOv3, YOLOv7, and YOLOv5 with different weights and data augmentation. The detection process for brain tumors of variable sizes and dimensions may be evaluated using various metrics regarding accuracy and loss functions. However, the weight and size of the neural networks have a significant influence on detection accuracy and speed, particularly in the case of low-light magnetic resonance images. In this paper, we propose a novel framework for evaluating the usage of several YOLO models in order to find the optimal model for brain tumor detection, as shown in Figure 2. The proposed methodology contrasts traditional YOLO models such as YOLOv3 with cutting-edge models such as YOLOv7 and YOLOv5. We also test the YOLOv5 model with various network sizes, including the nano, small, medium, large, and x-large networks. Data augmentation, on the other hand, improves the model's effectiveness by increasing the number of training samples. We apply several data augmentation techniques, such as flipping images and bounding boxes horizontally and vertically, which minimizes model sensitivity to varied orientations.
Furthermore, the framework employs three distinct MRI imaging orientations for brain tumors: axial, coronal, and sagittal. Our objective is to find the ideal orientation for detecting cancer tumors. We maintain 20% of each dataset for model testing while evaluating multiple models. In addition, we compute several evaluation metrics for model comparisons, such as precision, recall, mean average precision, intersection over union, and three loss functions. Also, we compare the different YOLO models with prior models, such as the faster region-based convolutional neural networks for object detection.
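As a simple illustration of the 80/20 evaluation protocol described above, the following Python sketch holds out 20% of a dataset for testing; the file layout and random seed are illustrative assumptions, not details from the published pipeline.

```python
# Hold out 20% of each orientation's images for testing (illustrative sketch).
import random
from pathlib import Path

def split_dataset(image_dir: str, test_fraction: float = 0.2, seed: int = 0):
    """Shuffle image paths reproducibly and return (train, test) lists."""
    paths = sorted(Path(image_dir).glob("*.jpg"))  # hypothetical folder layout
    random.Random(seed).shuffle(paths)
    n_test = int(len(paths) * test_fraction)
    return paths[n_test:], paths[:n_test]

train_files, test_files = split_dataset("axial/images")  # hypothetical folder
print(len(train_files), "train /", len(test_files), "test")
```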
3.1. Dataset Collection and Image Processing

Magnetic Resonance Imaging (MRI) is a powerful diagnostic tool widely employed in medical imaging, capable of capturing detailed images of the body's internal structures from various orientations. The three primary orientations utilized in MRI are axial, coronal, and sagittal, each providing a unique perspective that enables healthcare professionals to comprehensively evaluate and diagnose a range of conditions, including brain tumors.

The axial orientation offers cross-sectional views perpendicular to the body's long axis, spanning from the top of the head to the bottom. These axial MRI images are invaluable for visualizing intricate details of the brain, spinal cord, and abdominal organs. Conversely, the coronal orientation presents a frontal view of the body, with the imaging plane perpendicular to the axial plane. Coronal MRI scans facilitate thorough assessments of the brain, eyes, facial structures, spinal cord, and abdominal organs.

Furthermore, the sagittal orientation provides a side view of the body, with the imaging plane parallel to its long axis. Sagittal MRI images are instrumental in examining the brain, spinal cord, and pelvic region, and in evaluating the integrity of various muscles and tendons. By incorporating these diverse MRI orientations into their study, researchers can comprehensively evaluate the performance of the YOLO models in detecting brain tumors from multiple vantage points, potentially enhancing the overall accuracy and robustness of the detection system.

In this work, we use an amassed dataset initially developed to identify malignancies in the brain using a variety of MRI orientations. The dataset is available online via the Kaggle repository at https://www.kaggle.com/datasets/davidbroberts/brain-tumor-object-detection-datasets. As shown in Figure 3, the dataset consists of three unique subsets representing three possible brain tumor orientations, namely axial, coronal, and sagittal, with two labels, tumor and non-tumor, and it includes 1218 images of varying resolutions. All Exchangeable Image File Format (EXIF) rotations were ignored during data preparation, and pixels were normalized. All images were also resized to 416x416. Data analysis reveals that the axial dataset has 18 missing labels and the coronal dataset contains one missing label; RoboFlow, an online platform, was used to manage the missing classes and ground truth bounding boxes.

Figure 3. A sample of the different brain cancer MRI orientations and dataset labels: a) axial, b) sagittal, c) coronal.
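The resize-and-normalize step described above can be sketched in a few lines of Python; the use of Pillow and NumPy here, and the division by 255, are common-practice assumptions rather than details stated in the paper.

```python
# Resize an MRI slice to the 416x416 network input and normalize pixels to [0, 1].
import numpy as np
from PIL import Image

def preprocess(path: str, size: int = 416) -> np.ndarray:
    """Load an image, resize it to size x size, and scale pixels to [0, 1]."""
    img = Image.open(path).convert("RGB").resize((size, size))
    return np.asarray(img, dtype=np.float32) / 255.0  # assumed normalization

x = preprocess("axial/images/sample_001.jpg")  # hypothetical file
print(x.shape)  # (416, 416, 3)
```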
On the other hand, data augmentation approaches were used to reduce the models' sensitivity to different orientations. We flip the images and their bounding boxes in the horizontal and vertical directions to increase the number of data samples. The description and labels for the dataset are shown in Table 2. We maintain 20% of each dataset for testing the alternative models.
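A minimal sketch of the flip augmentation follows; it assumes YOLO-format boxes (normalized x-center, y-center, width, height), which is the usual label convention for these models but is not spelled out in the paper.

```python
# Flip an image and its YOLO-format boxes (x_center, y_center, w, h in [0, 1]).
import numpy as np

def flip(image: np.ndarray, boxes: np.ndarray, horizontal: bool = True):
    """Return the flipped image and the boxes mirrored about the same axis."""
    boxes = boxes.copy()
    if horizontal:
        image = np.fliplr(image)
        boxes[:, 0] = 1.0 - boxes[:, 0]  # mirror x-center
    else:
        image = np.flipud(image)
        boxes[:, 1] = 1.0 - boxes[:, 1]  # mirror y-center
    return image, boxes

img = np.zeros((416, 416, 3), dtype=np.float32)       # dummy image
labels = np.array([[0.30, 0.40, 0.20, 0.10]])         # one dummy tumor box
h_img, h_labels = flip(img, labels, horizontal=True)  # each flip adds a new sample
v_img, v_labels = flip(img, labels, horizontal=False)
```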
False Positive (FP): the number of non-tumors misclassified as tumors.
False Negative (FN): the number of tumors misclassified as non-tumors.

The following metrics were computed and derived based on the confusion matrix:

\text{Mean average precision } (mAP) = \frac{1}{n} \sum_{k=1}^{n} \text{Average precision of class } k \quad (1)

\text{Precision} = \frac{\text{True Positive}}{\text{True Positive} + \text{False Positive}} \quad (2)

\text{Recall} = \frac{\text{True Positive}}{\text{True Positive} + \text{False Negative}} \quad (3)

\text{Intersection over union } (IoU) = \frac{\text{Overlapped area between the predicted and ground truth boxes}}{\text{Area of union}} \quad (4)

We also employ three loss functions for minimization and evaluation: the bounding box regression score (loss), which may be used to assess non-overlapping bounding boxes [20]; the class probability score, which may be used to determine how well a bounding box matches the class of an item [33]; and the objectness score (confidence score/GIoU), which may be used to assess the likelihood of a certain object being in a grid cell [44].

\text{Objectness score (confidence)} = \text{Probability}(\text{object}) \times IoU_{\text{Predicted}}^{\text{Truth}} \quad (5)

\text{Bounding box regression loss} = \text{Mean squared error}(x^{\text{predicted}}, x^{\text{truth}}) \quad (6)

\text{Class probability score} = \text{Probability}(\text{class}_i \mid \text{object}) \quad (7)

where:

\text{Probability}(\text{object}) = \begin{cases} 1, & \text{there is an object} \\ 0, & \text{there is no object} \end{cases} \quad (8)

\text{Probability}(\text{class}_i) = \text{the probability that the object belongs to class}_i \quad (9)
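To illustrate equations (2)-(5), the short Python sketch below computes the IoU of one predicted/ground-truth box pair and precision/recall from confusion-matrix counts; the corner-format boxes and the example numbers are illustrative assumptions, not values from the paper.

```python
# Worked example for equations (2)-(5): IoU, precision, recall, objectness.
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)  # overlap / area of union, eq. (4)

pred, truth = (10, 10, 60, 60), (20, 20, 70, 70)
print(iou(pred, truth))           # ~0.47

tp, fp, fn = 90, 10, 5            # illustrative confusion-matrix counts
print(tp / (tp + fp))             # precision, eq. (2)
print(tp / (tp + fn))             # recall, eq. (3)
print(1 * iou(pred, truth))       # objectness when an object is present, eq. (5)
```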
5. Experimental Results and Discussion

In cancer detection, supervised learning and detection models are used. The type and quality of the data are thought to have the most significant influence on detection operations. However, detection models need labeled datasets, which are costly and necessitate expert identification of illnesses to minimize confusion with symptoms of other conditions. As a result, we employ a variety of data augmentation approaches to boost the number of training samples and obtain reliable detection results. On the other hand, brain tumor detection is a specific and unique case in which brain cancers may be discovered using a collection of magnetic resonance images of varying dimensions. Therefore, detecting tumors alone is not considered sufficient; instead, it is necessary to concentrate research on the image dimension with the highest discriminatory power for automated detection, highlighting its advantages and limitations while taking medical considerations into account.

This section shows the experimental findings of comparing several YOLO models for identifying brain cancers. Our objective is to find the optimum YOLO model and MRI orientation for tumor detection in terms of accuracy and performance. However, image processing and classification are known to have high equipment requirements, such as large amounts of RAM and a powerful GPU. The experiment setup and device qualifications are shown in Table 4.

Table 4. Experiment setup and simulation device qualifications.

Device specification   Description
Processor              Intel(R) Core i7, 10th generation
RAM                    8 GB
Operating system       Windows x64
CPU clock              1.50 GHz
GPU                    NVIDIA GeForce MX230

For hyperparameter tuning, the YOLO models comprise around 29 distinct parameters in total. As shown in Table 5, we set up twelve parameters, including the loss gain functions, learning rates, optimizers, and IoU threshold. All images were resized to 416x416 as the input image size for all models. Due to the low-weight networks, such as the nano and small models, we increased the number of epochs to 100 iterations in YOLOv5 to improve detection results. To make the comparison more accurate, we set all other YOLOv5 models, including the medium, large, and x-large, to 100 epochs as well.

Table 5. Hyperparameter tuning and data augmentation processing.

Parameter                    YOLOv3      YOLOv5      YOLOv7
Initial learning rate (lr0)  0.01        0.01        0.01
Final learning rate (lrf)    0.1         0.01        0.1
Momentum                     0.937       0.937       0.937
Box loss gain                0.05        0.05        0.05
Classification loss gain     0.5         0.5         0.3
Objectness loss gain         1.0         1.0         0.7
IoU training threshold       0.2         0.2         0.2
Optimizer                    SGD         SGD         SGD/Adam
Anchors per output layer     6.14        6.14        6.02
Image input size             416 x 416   416 x 416   416 x 416
Batches                      16          16          16
Epochs                       50          100         60
Data augmentation            For images (flip horizontally and vertically); for bounding boxes (flip horizontally and vertically)

In contrast to YOLOv5, we chose 50 epochs to train the YOLOv3 model and 60 epochs to train the YOLOv7 model. All models were set up with 16 batches, compatible with the small chosen learning rates and the device qualifications in terms of RAM and GPU discussed in Table 4. For clarity, only four images are fed into the model at once in each iteration. We use the SGD optimizer in the YOLOv3 and YOLOv5 models. However, to our knowledge, the Adam optimizer performs worse than the SGD optimizer even though the Adam optimizer converges faster.
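As a rough illustration of how the Table 5 configuration maps onto a training run, the Python launcher below invokes the Ultralytics YOLOv5 train.py script with the image size, batch size, and epoch count used for the YOLOv5 models; the dataset YAML name and the checkpoint choice are hypothetical.

```python
# Launch a YOLOv5 training run mirroring Table 5 (sketch; assumes the
# ultralytics/yolov5 repository is checked out and its requirements installed).
import subprocess

subprocess.run([
    "python", "train.py",
    "--img", "416",                # 416 x 416 input size for all models
    "--batch", "16",               # batch size from Table 5
    "--epochs", "100",             # 100 epochs for the YOLOv5 weights
    "--data", "brain_tumor.yaml",  # hypothetical dataset config (paths, 2 classes)
    "--weights", "yolov5s.pt",     # e.g., the small YOLOv5 checkpoint
], check=True)
```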
Before data augmentation, YOLOv3 has the highest detection outcomes, with an mAP of 78.2% at an IoU of 0.5. Furthermore, the YOLOv5 x-large and medium weights rank second in terms of mean average precision. YOLOv7, as with the axial orientation dataset, performs the worst, with an mAP of 54% at an IoU of 0.5. Also, compared to the axial orientation dataset and the other detection models, YOLOv7 is the least affected by data augmentation, with improvements ranging from 1 to 3% across the assessment metrics.

Table 8. Tumor detection results over the coronal dataset before data augmentation with different YOLO weights.

Model     Precision   Recall    mAP 50    mAP 50-95   Obj loss    Cls loss    Box loss
YOLOv5n   0.65431     0.67047   0.6923    0.46631     0.0041891   0.031565    0.024857
YOLOv5s   0.5559      0.79502   0.68246   0.4927      0.0040685   0.042924    0.023766
YOLOv5m   0.67332     0.74448   0.71691   0.53649     0.0041634   0.038446    0.021936
YOLOv5l   0.57298     0.73733   0.6436    0.48649     0.0045528   0.053788    0.02338
YOLOv5x   0.61021     0.82697   0.71946   0.5315      0.0045794   0.043106    0.023826
YOLOv3    0.68968     0.79627   0.78289   0.59829     0.0044065   0.032529    0.02301
YOLOv7    0.4808      0.8538    0.5405    0.4063      0.004962    0.02106     0.08383
Average   0.59715     0.79231   0.68097   0.50863     0.0044554   0.038642    0.033291

Table 9. Tumor detection results over the coronal dataset following data augmentation with different YOLO weights.

Model     Precision   Recall    mAP 50    mAP 50-95   Obj loss    Cls loss    Box loss
YOLOv5n   0.97445     0.9856    0.99283   0.88498     0.0023926   0.0001683   0.010644
YOLOv5s   0.9994      0.98735   0.99359   0.92035     0.0019996   0.0002368   0.0089049
YOLOv5m   0.99415     0.9835    0.99185   0.92973     0.0017931   0.0003926   0.0079115
YOLOv5l   0.99303     0.99028   0.99109   0.93772     0.0017671   0.0005378   0.0074157
YOLOv5x   0.9934      0.98055   0.99053   0.94204     0.0017561   0.0001419   0.0068884
YOLOv3    0.994       0.994     0.991     0.926       0.001877    0.0009482   0.0085525
YOLOv7    0.5115      0.8996    0.5564    0.4532      0.003962    0.02854     0.06537
Average   0.91425     0.97255   0.91908   0.85151     0.0021925   0.0051329   0.0175072

Nonetheless, when comparing the data augmentation outcomes, we find that the increased data affects the YOLO models, resulting in improved results, which is unsurprising given that data augmentation methods lower detector sensitivity. The results reveal that detection results improve by 0.317 in average precision, 0.239 in average mAP at an IoU of 0.5, and 0.343 in average mAP at IoU 0.5 to 0.95. As with the axial dataset, the YOLOv5 small weight has the highest detection results after data augmentation, with an mAP of 99.3%, demonstrating that the small weight benefits most from the data increase. However, as shown in Figure 9, all models achieve significant detection accuracy over the coronal orientation. Compared to the axial orientation, YOLOv5n, YOLOv5s, YOLOv5x, and YOLOv3 have the highest detection stability.

Figure 9. A sample of improved brain tumor detection results and accuracy over the coronal dataset following augmentation.
Tables 10 and 11 show the tumor detection accuracy and performance of the YOLO models before and after data augmentation for the sagittal orientation dataset. Following data augmentation, detection accuracy improves by 0.434 in average precision, 0.388 in average mAP at IoU=0.5, and 0.207 in average recall. Furthermore, we found that the YOLOv5 nano weight surpasses the others with an mAP of 96.7% at an IoU of 0.5. It is worth noting that, as with the coronal orientation dataset, increasing the network weight significantly impacts detection accuracy; however, building a large-weight network does not necessarily improve detection outcomes. As a result, it is critical to identify the relationship between data augmentation and detector weights. YOLOv3, on the other hand, has a high positive sensitivity to data augmentation, with precision increased by 59.8%, mAP by 42.9% at an IoU of 0.5, and the classification loss gain reduced by 0.077. YOLOv7 is the least susceptible to the data increase, yet it improves its outcomes by 0.064 compared to the coronal orientation. Furthermore, compared to the axial and coronal orientations, the sagittal dataset has the lowest detection accuracy, with an average mAP of 89.9%. However, as shown in Figure 10, all models exhibit significant detection performance and accuracy.
Table 10. Tumor detection results over the sagittal dataset before data augmentation with different YOLO weights.

Model     Precision   Recall    mAP 50    mAP 50-95   Obj loss    Cls loss    Box loss
YOLOv5n   0.48354     0.83131   0.55339   0.35948     0.0051868   0.028463    0.0299
YOLOv5s   0.46457     0.85612   0.54414   0.3966      0.0042979   0.028271    0.027036
YOLOv5m   0.49976     0.78951   0.55119   0.3871      0.0044313   0.057906    0.027464
YOLOv5l   0.42547     0.57456   0.46628   0.34213     0.0053369   0.072441    0.028687
YOLOv5x   0.46981     0.57894   0.49138   0.34809     0.0049407   0.071224    0.030903
YOLOv3    0.36036     0.86259   0.52743   0.36598     0.0047536   0.083155    0.031451
YOLOv7    0.4622      0.7664    0.4912    0.3465      0.004968    0.01662     0.08737
Average   0.44703     0.73802   0.51194   0.3644      0.0047881   0.054936    0.038819

Table 11. Tumor detection results over the sagittal dataset following data augmentation with different YOLO weights.

Model     Precision   Recall    mAP 50    mAP 50-95   Obj loss    Cls loss    Box loss
YOLOv5n   0.94046     0.94527   0.96799   0.85926     0.0026652   0.0056824   0.013165
YOLOv5s   0.96164     0.94119   0.95512   0.8765      0.002439    0.0069707   0.01133
YOLOv5m   0.95574     0.9523    0.95932   0.8893      0.002121    0.00835     0.010057
YOLOv5l   0.95606     0.94729   0.9502    0.89003     0.0021369   0.0085233   0.0094977
YOLOv5x   0.95589     0.94631   0.95412   0.9         0.0020497   0.0083765   0.0083264
YOLOv3    0.95817     0.92698   0.9566    0.87865     0.002192    0.0061747   0.0106
YOLOv7    0.5014      0.9595    0.6201    0.552       0.002978    0.021       0.06583
Average   0.88148     0.9456    0.89924   0.83108     0.0023194   0.0098992   0.0192735
Figure 10. A sample of improved brain tumor detection results and accuracy over the sagittal dataset following data augmentation.
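The averaged improvements quoted above can be reproduced directly from the Average rows of Tables 10 and 11, as the short Python check below shows (values copied from the tables; differences with the reported 0.434/0.207/0.388 are only rounding).

```python
# Recompute the sagittal before/after augmentation deltas from Tables 10 and 11.
before = {"precision": 0.44703, "recall": 0.73802, "map50": 0.51194}  # Table 10, Average row
after  = {"precision": 0.88148, "recall": 0.94560, "map50": 0.89924}  # Table 11, Average row

for metric in before:
    delta = after[metric] - before[metric]
    print(f"{metric}: +{delta:.3f}")
# Prints roughly: precision +0.434, recall +0.208, map50 +0.387
```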
For medical considerations, it should be noted that the objective of employing the MRI layers is to cover all of the critical components by which cancers may be diagnosed precisely and clearly, so that the tumor's location, size, and kind may be determined. The study's findings revealed a considerable improvement in the accuracy of identifying brain cancers utilizing magnetic resonance imaging in all dimensions. However, utilizing axial images yielded the greatest detection results. This is owing to the nature of the axial dimensions, centered on the X and Y points compared to the others; furthermore, the axial view provides precise coverage of the right and left sides of the brain, providing more recognizable data patterns.

Several points highlight the state of the art of this study compared to other studies. First, while reviewing a variety of internet datasets for brain tumor detection, we discovered that many of the data lacked labeling. As a result, in this work, we apply a series of data augmentation approaches to enhance the training set and lessen the sensitivity of the detection models in future detection operations using any data set other than the one used in this study. This may also be utilized to construct detection models that can learn from unlabeled data sets by discovering methods to enhance the data at the start and then transferring this knowledge to specialized categorization procedures. Second, in this work, we employ all axes of magnetic resonance imaging, including axial, sagittal, and coronal, to find the ideal image dimensions that may be used to achieve the maximum discriminatory accuracy in detection. Third, we compare the most recent detection models, such as YOLOv5, YOLOv7, and YOLOv3, to find the most accurate model. Fourth, with the introduction of novel detection models such as YOLOv7, evaluating the model's performance using well-known optimizers such as Adam and SGD is useful. Finally, we analyze the performance of several YOLO models using a comprehensive set of evaluation metrics to demonstrate the detection speed, performance, and detection error.

Concerning the study's limitations, it should be highlighted that altering the parameters of the detection models may change results from one trial to the next, resulting in erroneous detection findings. Furthermore, the quality of the images used to diagnose brain tumors significantly affects detection accuracy; since we used images at a resolution of 416x416 for this study, enlarging an image could result in a lower-quality image. We also discovered that all YOLO models are susceptible to data augmentation strategies, with the YOLOv7 model being the least affected. Finally, this study did not address micro-tumors; it was restricted to the early detection of cancers with two primary categories (tumor and no tumor). This is owing to a paucity of annotated online data sets, particularly for detection tasks, whose annotation procedure is quite costly and necessitates using radiologists.

In short, the results show that the YOLOv5 and YOLOv3 models are more sensitive to data augmentation than the YOLOv7 model. In addition, we show that the axial orientation yields higher tumor detection accuracy than the other orientations.
However, based on the statistical results, large-weight models are more likely to memorize data samples than to uncover data patterns. As a result, YOLOv7 performs the worst compared to the others, with the number of identified labels (classes) in each epoch ranging from 0 to 7. Nonetheless, all models revealed significant detection accuracy.

6. Conclusions

The developed methodology evaluates the performance of state-of-the-art YOLO models on a dataset of 7382 samples from three different MRI orientations (axial, coronal, and sagittal) using different weights and degrees of data augmentation. Many data augmentation approaches were used to reduce detector sensitivity and enhance detection accuracy. Furthermore, a comparison was made between the Adam and SGD optimizers. We aimed to determine the optimal network weight and MRI orientation for detecting brain tumors with MRI. With an IoU of 0.5, the results show that the average mAP for the axial orientation is 97.33 percent.

Additionally, SGD outperforms the Adam optimizer by over 20% mAP. In addition, YOLOv5n, YOLOv5s, YOLOv5x, and YOLOv3 were discovered to have an mAP greater than 95%. Furthermore, the YOLOv5 and YOLOv3 models were more sensitive to data augmentation than the YOLOv7 model. The proposed framework for brain tumor diagnosis has a moderate computational cost and a small space requirement; as a result, it is capable of running on most systems.

Acknowledgement

The authors would like to thank the Deanship of Scientific Research at Shaqra University for supporting this research.

References

[1] Abiwinanda N., Hanif M., Hesaputra S., Handayani A., and Mengko T., "Brain Tumor Classification Using Convolutional Neural Network," in Proceedings of the World Congress on Medical Physics and Biomedical Engineering, Prague, pp. 183-189, 2019. DOI: 10.1007/978-981-10-9035-6_33
[2] Atik M. and Duran Z., "Deep Learning-Based 3D Face Recognition Using Derived Features from Point Cloud," in Proceedings of the 3rd International Conference on Smart City Applications, Karabuk, pp. 797-808, 2020.
[3] Atik S. and Ipbuker C., "Integrating Convolutional Neural Network and Multiresolution Segmentation for Land Cover and Land Use Mapping Using Satellite Imagery," Applied Science, vol. 11, no. 12, p. 5551, 2021. https://doi.org/10.3390/app11125551
[4] Badran E., Mahmoud E., and Hamdy N., "An Algorithm for Detecting Brain Tumors in MRI Images," in Proceedings of the International Conference on Computer Engineering and Systems, Cairo, pp. 368-373, 2010.
[5] Bakator M. and Radosav D., "Deep Learning and Medical Diagnosis: A Review of Literature," Multimodal Technologies and Interaction, vol. 2, no. 3, pp. 47, 2018. https://doi.org/10.3390/mti2030047
[6] Batra P., Hussain I., Abdul Ahad M., Casalino G., and Alam M., "A Novel Memory and Time-Efficient ALPR System Based on YOLOv5," Sensors, vol. 22, no. 14, pp. 5283, 2022. https://doi.org/10.3390/s22145283
[7] Bayram A., Gurkan C., Budak A., and Karataş H., "A Detection and Prediction Model Based on Deep Learning Assisted by Explainable Artificial Intelligence for Kidney Diseases," European Journal of Science and Technology, no. 40, pp. 67-74, 2022. DOI: 10.31590/ejosat.1171777
[8] Cepni S., Atik M., and Duran Z., "Vehicle Detection Using Different Deep Learning Algorithms from Image Sequence," Baltic Journal of Modern Computing, vol. 8, no. 2, pp. 347-358, 2020. DOI: 10.22364/bjmc.2020.8.2.10
[9] Chan H., Hadjiiski L., and Samala R., "Computer-Aided Diagnosis in the Era of Deep Learning," Medical Physics, vol. 47, no. 5, pp. e218-e227, 2020. DOI: 10.1002/mp.13764
[10] Chen C., Liu M., Tuzel O., and Xiao J., "R-CNN for Small Object Detection," in Proceedings of the Asian Conference on Computer Vision, Taipei, pp. 214-230, 2016.
[11] Cheng J., Huang W., Cao S., Yang R., Yang W., Yun Z., Wang Z., and Feng Q., "Enhanced Performance of Brain Tumor Classification Via Tumor Region Augmentation and Partition," PLoS One, vol. 10, no. 10, p. e0140381, 2015. DOI: 10.1371/journal.pone.0140381
[12] Dai J., Li Y., He K., and Sun J., "R-FCN: Object Detection Via Region-Based Fully Convolutional Networks," in Proceedings of the 30th Conference on Neural Information Processing Systems, Barcelona, vol. 29, 2016.
[13] Gupta A., Ramanath R., Shi J., and Keerthi S., "Adam vs. SGD: Closing the Generalization Gap on Image Classification," in Proceedings of the 13th Annual Workshop on Optimization for Machine Learning, 2021.
[14] He K., Gkioxari G., Dollár P., and Girshick R., "Mask R-CNN," in Proceedings of the IEEE International Conference on Computer Vision, Venice, pp. 2961-2969, 2017. https://doi.org/10.48550/arXiv.1703.06870
[15] Henderson P. and Ferrari V., "End-to-End Training of Object Class Detectors for Mean Average Precision," in Proceedings of the Asian Conference on Computer Vision, Taipei, pp. 198-213, 2016. https://doi.org/10.48550/arXiv.1607.03476
[16] Huang X., Yue X., Xu Z., and Chen Y., "Integrating General and Specific Priors into Deep Convolutional Neural Networks for Bladder Tumor Segmentation," in Proceedings of the International Joint Conference on Neural Networks, Shenzhen, pp. 1-8, 2021. DOI: 10.1109/IJCNN52387.2021.9533813
[17] Kavitha R., Chitra L., and Kanaga L., "Brain Tumor Segmentation Using Genetic Algorithm with SVM Classifier," International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, vol. 5, no. 3, pp. 1468-1471, 2016. DOI: 10.15662/IJAREEIE.2016.0503043
[18] Khambhata K. and Panchal S., "Multiclass Classification of Brain Tumor in MR Images," International Journal of Innovative Research in Computer and Communication Engineering, vol. 4, no. 5, pp. 8982-8992, 2016.
[19] Kuznetsova A., Maleva T., and Soloviev V., Cyber-Physical Systems: Modelling and Intelligent Control, Springer, 2021. https://doi.org/10.1007/978-3-030-66077-2_28
[20] Lee S., Kwak S., and Cho M., "Universal Bounding Box Regression and Its Applications," in Proceedings of the Asian Conference on Computer Vision, Perth, pp. 373-387, 2018. https://doi.org/10.1007/978-3-030-20876-9_24
[21] Litjens G., Kooi T., Bejnordi B., Setio A., Ciompi F., Ghafoorian M., Laak J., Ginneken B., and Sánchez C., "A Survey on Deep Learning in Medical Image Analysis," Medical Image Analysis, vol. 42, pp. 60-88, 2017. https://doi.org/10.1016/j.media.2017.07.005
[22] Logeswari T. and Karnan M., "An Improved Implementation of Brain Tumor Detection Using Segmentation Based on Soft Computing," Journal of Cancer Research and Experimental Oncology, vol. 2, no. 1, pp. 006-014, 2010.
[23] Lundervold A. and Lundervold A., "An Overview of Deep Learning in Medical Imaging Focusing on MRI," Zeitschrift für Medizinische Physik, vol. 29, no. 2, pp. 102-127, 2019. https://doi.org/10.1016/j.zemedi.2018.11.002
[24] Magnuska Z., Theek B., Darguzyte M., Palmowski M., and Stickeler E., "Influence of the Computer-Aided Decision Support System Design on Ultrasound-Based Breast Cancer Classification," Cancers, vol. 14, no. 2, pp. 277, 2022. DOI: 10.3390/cancers14020277
[25] Mohiyuddin A., Basharat A., Ghani U., Peter V., and Abbas S., "Breast Tumor Detection and Classification in Mammogram Images Using Modified YOLOv5 Network," Computational and Mathematical Methods in Medicine, vol. 2022, 2022. DOI: 10.1155/2022/1359019
[26] Montalbo F., "A Computer-Aided Diagnosis of Brain Tumors Using a Fine-Tuned YOLO-Based Model with Transfer Learning," KSII Transactions on Internet and Information Systems, vol. 14, no. 12, pp. 4816-4834, 2020. DOI: 10.3837/tiis.2020.12.011
[27] Nepal U. and Eslamiat H., "Comparing YOLOv3, YOLOv4 and YOLOv5 for Autonomous Landing Spot Detection in Faulty UAVs," Sensors, vol. 22, no. 2, pp. 464, 2022. https://doi.org/10.3390/s22020464
[28] Nogales A., Garcia-Tejedor A., Monge D., Vara J., and Antón C., "A Survey of Deep Learning Models in Medical Therapeutic Areas," Artificial Intelligence in Medicine, vol. 112, pp. 102020, 2021. DOI: 10.1016/j.artmed.2021.102020
[29] Oza P., Sharma P., Patel S., and Kumar P., "Deep Convolutional Neural Networks for Computer-Aided Breast Cancer Diagnostic: A Survey," Neural Computing and Applications, vol. 34, no. 6, pp. 1-22, 2022. DOI: 10.1007/s00521-021-06804-y
[30] Pan Y., Huang W., Lin Z., Zhu W., Zhou J., Wong J., and Ding Z., "Brain Tumor Grading Based on Neural Networks and Convolutional Neural Networks," in Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Milan, pp. 699-702, 2015. DOI: 10.1109/EMBC.2015.7318458
[31] Rahman M. and Wang Y., "Optimizing Intersection-Over-Union in Deep Neural Networks for Image Segmentation," in Proceedings of the International Symposium on Visual Computing, pp. 234-244, 2016.
[32] Redmon J. and Farhadi A., "YOLOv3: An Incremental Improvement," 2018.
[33] Redmon J., Divvala S., Girshick R., and Farhadi A., "You Only Look Once: Unified, Real-Time Object Detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, pp. 779-788, 2016. DOI: 10.1109/CVPR.2016.91
[34] Ren S., He K., Girshick R., and Sun J., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," Advances in Neural Information Processing Systems, vol. 28, 2015.
[35] Shelatkar T., Urvashi D., Shorfuzzaman M., Alsufyani A., and Lakshmanna K., "Diagnosis of Brain Tumor Using Light Weight Deep Learning Model with Fine-Tuning Approach," Computational and Mathematical Methods in Medicine, vol. 2022, 2022. https://doi.org/10.1155/2022/2858845
[36] Singh L., Chetty G., and Sharma D., "A Novel Machine Learning Approach for Detecting the Brain Abnormalities from MRI Structural Images,"