0% found this document useful (0 votes)
21 views17 pages

Impact of Data Augmentation On Brain Tumor Detection Using Different YOLO Versions Models

Uploaded by

34 ROOPA M
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
21 views17 pages

Impact of Data Augmentation On Brain Tumor Detection Using Different YOLO Versions Models

Uploaded by

34 ROOPA M
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 17

466 The International Arab Journal of Information Technology, Vol. 21, No.

3, May 2024

Impact of Data-Augmentation on Brain Tumor


Detection Using Different YOLO Versions Models
Abdelraouf Ishtaiwi Ahmad Al-Qerem Yazan Alsmadi
Ali Ali
Faculty of Information Faculty of Information Faculty of Information
Faculty of Engineering, Al-Ahliyya
Technology, University of Technology, Zarqa Technology, Zarqa
Amman University, Jordan
Petra, Jordan University, Jordan University, Jordan
[email protected]
[email protected] [email protected] Y. [email protected]
Amjad Aldweesh Mohammad Alauthman Omar Alzubi Shadi Nashwan
College of Computing and Faculty of Information Computer Engineering Faculty of Information
Information Technology, Technology, University of Petra, Department, Umm Al-Qura Technology, Middle East
Shaqra University, Saudi Jordan University, Saudi Arabia University, Jordan
Arabia [email protected] [email protected] [email protected]
[email protected]
Musab Al-Zghoul Someah Alangari
Awad Ramadan
Faculty of Information Technology College of Computing and Information
College of Computing in Al-
Isra University, Jordan Technology, Shaqra University, Saudi
Qunfudah, Umm Al-Qura University,
[email protected] Arabia
Saudi Arabia
[email protected]
[email protected]

Abstract: Brain tumors are widely recognized as one of the world's worst and most disabling diseases. Every year, thousands
of people die as a result of brain tumors caused by the rapid growth of tumor cells. As a result, saving the lives of tens of
thousands of people worldwide needs speedy investigation and automatic identification of brain tumors. In this paper, we
propose a new methodology for detecting brain tumors. The designed framework assesses the application of cutting-edge YOLO
models such as YOLOv3, YOLO v5n, YOLO v5s, YOLO v5m, YOLOv5l, YOLOv5x, and YOLOv7 with varying weights and data
augmentation on a dataset of 7382 samples from three distinct MRI orientations, namely, axial, coronal, and sagittal. Several
data augmentation techniques were also employed to minimize detector sensitivity while increasing detection accuracy. In
addition, the Adam and Stochastic Gradient Descent (SGD) optimizers were compared. We aim to find the ideal network weight
and MRI orientation for detecting brain cancers. The results show that with an IoU of 0.5, axial orientation had the highest
detection accuracy with an average mAP of 97.33%. Furthermore, SGD surpasses Adam optimizer by more than 20% mAP.
Also, it was found that YOLO 5n, YOLOv5s, YOLOv5x, and YOLOv3 surpass others by more than 95% mAP. Besides that, it
was observed that the YOLOv5 and YOLOv3 models are more sensitive to data augmentation than the YOLOv7 model.

Keywords: Data-augmentation, objects detection, brain tumor, yolov7, computer vision.

Received February 22, 2024; accepted May 5, 2024


https://doi.org/10.34028/iajit/21/3/10

1. Introduction brain tumors. When a brain tumor grows, it can affect a


person's personality, their way of thinking, and their
The human brain is the principal organ of the human ability to do just about anything else.
nervous system and the command-and-control center Brain tumors are classified into two types: those that
for all activities necessary to maintain a healthy, normal are not cancerous and are known as benign and those
life. The brain receives impulses or stimuli from the that are cancerous (called malignant). Brain tumors
many sensory organs, analyzes them, and responds considered benign grow slowly and are not malignant.
accordingly. As a result of unchecked cell division or This type of tumor is less dangerous since it cannot
mutations, abnormal clusters of brain cells are spread to other body parts.
generated, eventually leading to the development of a Malignant tumors, on the other hand, are cancerous
brain tumor. Not only can these cells be harmful to growths that are distributed rapidly. Malignant tumors
healthy tissue, but they may also disrupt normal brain are more prevalent. Furthermore, carcinogenic tumors
function [17, 22]. are classified into two types: primary malignant tumors
Vomiting, Headaches, cognitive difficulties, that start in the brain and spread to other body regions,
personality changes, vision and speech impairments, and secondary malignant tumors that develop in other
and nausea and vomiting are all common symptoms of
Impact of Data-Augmentation on Brain Tumor Detection Using Different YOLO ... 467

parts of the body and spread to the brain [4]. Primary [12]. Two-stage methods are less efficient than one-
malignant tumors are more likely to kill the patient. stage methods, although they yield results eventually.
Gliomas, pituitary tumors, and meningiomas are the One example of a single-stage strategy employed in the
three most common forms of brain tumors diagnosed. A research is the "You Only Look Once" (YOLO)
meningioma is a tumor that develops in the meninges, methodology. To locate objects with distinct bounding
the thin membranes (or tissues) surrounding and boxes in space, YOLO frames the task as a regression
protecting the brain and spinal cord. Gliomas begin in problem. How YOLO approaches the problem allows it
the brain's glial cells, where they also begin. Tumors of to yield results faster than competing two-stage item
the pituitary gland can develop when cells in the identification algorithms [38].
pituitary gland, located close to the brain, grow DWI is an essential technique for diagnosing these
uncontrollably. A brain tumor is one of the diseases that malignant growths because it may show how brain
can take someone's life the most quickly. Despite the tumors interfere with the normal free diffusion of water
presence of brain tumors, prompt identification and inside tissues [45]. DWI can reveal how brain tumors
treatment are necessary to save lives. Machine learning obstruct the usual free diffusion of water within tissues.
(ML) algorithms could automatically diagnose Because of the numerous features that can be
individuals with brain tumors and classify them into extracted from MRI images, these images are a true
specific groupings to address this issue. However, goldmine of information that can be utilized to classify
because of the wide range of sizes, shapes, and tumors. Learning how to describe data opens the stage
intensities these tumors can take, classifying brain for DL to eventually apply that knowledge to form
cancers into meningioma, pituitary, and glioma tumors inferences and carry out actions.
is more challenging [11]. This is because these tumors DL methods are used to classify diagnostic imaging
can take any of these shapes. Furthermore, studies. However, DL-based approaches are useful in
meningiomas, pituitary tumors, and gliomas account for various domains and specialties [5]. A large amount of
the great majority of occurrences of brain cancer [37]. training data is required for DL algorithms to perform
Furthermore, the great resolution of brain MRI successfully. In recent years, there has been an increase
allows for an in-depth analysis of the brain's anatomy. in the acceptability of DL techniques in general and the
As a result, magnetic resonance imaging (MRI) images prominence of the CNN model in particular.
substantially impact the automatic interpretation of Some potentially fatal diseases have been difficult to
medical images [18, 21, 36, 49]. Researchers heavily diagnose, but recent advances in Computer-Aided
rely on MRI technology when detecting and evaluating Diagnosis (CAD) have eliminated that problem for
brain cancers. Researchers have recently created many many people CAD. This technology makes rapid and
new automated algorithms for detecting and classifying accurate identification by medical equipment possible,
brain cancers in MRI data. Traditional machine learning allowing doctors to extend a patient's life or improve
algorithms, such as Multi-Layer Perceptron (MLP) and their quality of life [9].
SVM classifiers, are extensively used for brain tumor Further, potent new tools called Deep Convolutional
identification [30]. Deep learning (DL) [1] is a subfield Neural Networks (DCNN) were produced by fusing ML
of machine learning that builds a feature hierarchy by and CV. State-of-the-art models like Deep
using low-level features to build high-level features. Convolutional Neural Networks (DCNNs) have
Technological progress has enabled digital image successfully challenged CAD issues like recognition,
processing to spread to fields including classification, segmentation, and detection [29]. Figure
photogrammetry, remote sensing, and computer vision 1 shows the process of detecting brain tumors using
[8]. When an image is processed digitally, it undergoes deep learning methods.
a series of transformations that allow us to convert it to However, DCNNs form the basis of most existing
a digital format and extract valuable data. Computer CAD systems for identifying and detecting brain
vision applications of deep learning for digital image cancers. Unfortunately, these systems are not very
processing have expanded to include a wide range of powerful and perform poorly across most platforms
tasks, from face recognition [2] to object detection and [16]. Lighter classification models are not as effective
classification [3]. Remote sensing and photogrammetric for most DCNNs because they cannot pinpoint the
images are ideal for using deep learning-based object tumor's location. The processing costs of a segmentation
detection techniques. Deep learning approaches to model, which uses a mask to detect the damaged area
object detection can perform better with larger datasets and identify the tumor, are greater. However, most
and more robust models. Significant advances in object existing CAD systems for identifying and locating brain
detection have been made thanks to R-CNNs and other and breast cancers rely on deep convolutional neural
region-based approaches [10]. Two main types of two- networks. However, these systems typically perform
stage convolutional neural networks are used for object poorly across platforms and require a lot of computing
identification, and they are the single-stage networks. power. Most DCNNs are restricted by the inability of
There are a few different types of two-stage CNNs, lightweight classification models to pinpoint the precise
including R-CNN [14], Faster R-CNN [34], and R-FCN site of the tumor [23]. Comparatively, a segmentation
468 The International Arab Journal of Information Technology, Vol. 21, No. 3, May 2024

model can utilize a mask to detect the damaged area and As a result, in this study, we use three datasets that
identify the tumor, but this model has greater processing describe all distinct axes. Also, to emphasize the image
costs [28]. dimensions with the best discriminatory strength of
The key contributions are as follows: contemporary detection algorithms.
1. Several modern and ancient detection models exist. 4. To our knowledge, the YOLOv7 model is the most
recent detection model. As a result, we explain the
In this paper, we propose a new methodology for
utilization of the most well-known optimizers, such
comparing and evaluating the effectiveness of the most
as Adam and Stochastic Gradient Descent (SGD), in
recent models of early brain cancer detection, such as
terms of performance and detection speed rate based
YOLOv3, YOLOv5, and YOLOv7. The objective is to
on a set of loss functions and evaluation metrics.
highlight the most effective models that can be used to
identify various illnesses in the future. The rest of the paper is structured as follows: section 2
shows the state-of-the-art related works. Section 3
2. Data quality is critical when utilizing supervised
illustrates the proposed methodology for brain tumor
learning because of the scarcity of labeled datasets.
detection, dataset description and analysis, and
We apply a mixture of data augmentation techniques to employed detection models. Section 4 shows the
expand the number of dataset components, lower the evaluation metrics. Section 5 discusses the experimental
sensitivity of detection models, and improve their results. Finally, the conclusion and future directions are
performance in the early detection of brain cancer shown in section 6.
utilizing other datasets.
3. Brain cancer may be diagnosed utilizing images with
three distinct axes such as sagittal, coronal, and axial.

Figure 1. Brain tumor detection process using deep learning approaches.

2. Related Works dataset, Pascal-VOC2012, was used in the experiment.


Also, the YOLOv5 improvement includes decoupling
Detecting different objects in indoor, outdoor, and the head's layer to improve detection accuracy and
medical images is difficult since objects can be visible performance. However, utilizing a resolution of 640 by
in low light. As a result, the pixel colors of the 640 pixels. The proposed framework's findings were
photographs are more closely associated with dark hues, compared to eleven earlier models that used YOLO in
notably black. For many years, prior contributions have various versions. The results of the tests reveal that the
used various ways to improve the detection process of model can obtain an average accuracy of 93.9 at an
medical diseases. However, a new version of the You Intersection Over Union (IOU) of 0.9. In the study in
Only Look Once (YOLO) algorithm that builds deep [28], a real-time experiment was conducted to identify
learning algorithms for object recognition and detection indoor and outdoor objects by building an engineering
was presented in 2020. YOLOv5 shows better detection system that utilizes camera sensors such as OS1-64 and
accuracy and performance compared to previous OS0-128 that apply to the Lidar device. However, the
versions. In [41], a novel framework for detecting primary contribution is ended by employing full 360-
interior occupancy objects has been suggested. Using degree images with a resolution of 2048x128. The
the anchor-free approach for parameter decrease and developed system compares the performance of
VariFocal loss for data balancing, the designed FasterR-CNN, MaskRCNN, YOLOx, and YOLOv5.
framework optimizes the utilization of YOLOv5. Sensor images define four interior and outdoor target
Furthermore, a newly constructed dataset with classes: a person, a bike, a chair, and a car. The findings
11,367 samples partitioned into training, testing, and demonstrate that YOLOx outperforms others,
validation sets was presented. In addition, a well-known successfully detecting over 80% of the interior and
Impact of Data-Augmentation on Brain Tumor Detection Using Different YOLO ... 469

outdoor objects with a precision of 100% and a recall of Other studies used YOLOv5 for indoor and outdoor
95.3%. They also stated that YOLOx outperforms fire and smoke detection. In [50], swin-YOLOv5, a new
YOLOv5 regarding detection performance and speed. framework that improves on the original YOLOv5
Other research evaluated the evolution and architecture by improving feature extraction, was
improvement of various YOLO algorithm variants proposed. The developed framework can acceptably
throughout time. In [19] a new framework was proposed detect fire and smoke. Swin-fundamental YOLOv5's
to compare the efficiency of YOLOv3 with YOLOv5. concept is to use a transformer across three headers.
The framework is intended to detect Apple items. However, a dataset of 16,503 images of two target
However, a collection of 878 images with varying classes was employed for comparison. In addition,
image resolutions describing a near and distant view at seven hyperparameters were fine-tuned. According to
apples was employed. The testing results show that the data, swin-YOLOv5 outperforms the original by
YOLOv5 surpasses YOLOv3 in recall, with 97.8%, 0.7% mean average precision enhancement at 0.5 and
compared to other models such as Faster-RCNN (81.4% 4.5% mean average precision enhancement at 0.5 to
recall) and DaSNet-v2 (86.8%). YOLOv5 generates 0.95. In [43] an enhanced version of YOLOv5 was
better outcomes in terms of precision, recall, and F- developed, incorporating dynamic anchor learning
measure. In [27] a methodology for detecting landing through the K-means++ algorithm. The approach
locations for autonomous flying systems was proposed. developed seeks to limit fire damage by improving
The developed frame analyzes the usage of several detection speed and performance. Furthermore, several
YOLO versions, such as YOLOv3, YOLOv4, and loss functions such as CIOU and GIOU were applied to
YOLOv5, to identify landing sweet spots to decrease three distinct YOLOv5 models: YOLOv5 small,
flight system failure and boost safety. However, a YOLOv5 medium, and YOLOv5 large. However, a self-
collection of 11268 satellite images, called the DOTA, created dataset of 4815 images was exposed to a
with up to 20,000x20,000 image resolution and 15 synthetic system to increase the data size to 20,000.
labels, was employed. The findings demonstrate that According to the findings, the modified model
YOLOv5 with large network weights outperforms outperforms the original YOLOv5 by 4.4% mean
others, with a precision of 70%, recall of 61%, and mean average precision. It was also discovered that YOLOv5
average precision of 63%. Furthermore, YOLOv4 performs better using the CIoU loss function, with a
exceeds YOLOv3 with a recall of 57% and a mean recall of 78% and a mean average precision of 87%.
average accuracy of 60%. Since the launch of YOLOv5 with many models,
The detection of indoor and outdoor objects is several studies have employed the YOLO algorithm to
helping to track down pandemic infections such as improve Internet of Things (IoT) technology. In [6],
coronavirus. However, numerous research employed YOLOv5 was used to create a new framework to
YOLO to recognize face masks. In [40] a real-time improve IoT devices with limited memory and
monitoring system that identifies face masks for the computing power. The YOLOv4 model was also
COVID-19 pandemic was proposed using YOLOv5. utilized to compare experiment findings. However, two
Closed-circuit television footage is supplied into the separate datasets, including automobile license plates,
established framework. However, a dataset of 3,846 were integrated into a single dataset to boost data size.
images with masked and unmasked labels was created. The data set includes 5991 distinct 640x640-pixel
Several approaches were used to supplement the data, images. Transfer learning was used to build the model
including Gaussian and motion blur. In addition, using YOLOv5 with a tiny weight network and the
stacked ResNet-50, which incorporates transfer Microsoft COCO dataset. Furthermore, Long Short-
learning, was deployed. With a testing accuracy of 87% Term Memory (LSTM) based on the OCR engine was
and a precision of 71%, stacked ResNet-50 surpasses applied for automobile plate identification. The findings
other comparable models such as ResNet-50 and demonstrate that YOLOv5s outperform YOLOv4 with
Convolutional Neural Network. In [47] a new face an mAP of 87% across 100 epochs.
detection method based on YOLOv5 has been In the context of medical data, YOLOv5 has shown
developed. ShuffleCANet is used as a new backbone an improvement in diagnosing cancer status. The study
YOLO layer in the designed system. in [25] proposed a new methodology to improve the use
Furthermore, the AIZOO dataset was employed, of YOLOv5 for breast cancer detection. The intended
which contains 7959 images of a face and a masked work is assessed using four YOLOv5 weight models:
face. For image splicing and arrangement, the mosaic YOLOv5 small, YOLOv5 medium, YOLOv5 large, and
approach was used. Additionally, images were YOLOv5 x-large. However, the CBIS-DDSM dataset,
processed and scaled to 640 × 640. With a mean average which contains 10239 distinct 1000 x2000 pixel images,
accuracy of 95.2%, the proposed framework with was employed. It specifies if the breast cancer is benign
YOLOv5 surpasses existing models such as YOLOv3. or malignant. The testing findings demonstrate that
Furthermore, the proposed methodology with modified modified YOLOv5x outperforms the small, medium,
ShuffleCANet exceeds the original YOLOv5 findings and large weights, with a Matthews Correlation
by 0.58% in precision. Coefficient (MCC) value of 93.6%. In addition, the
470 The International Arab Journal of Information Technology, Vol. 21, No. 3, May 2024

proposed improved version of YOLOv5m was used five distinct YOLOv5 models with transfer
compared to other models such as YOLOv3 and faster learning for identifying malignant brain tumors,
RCNN. It was discovered that modified YOLOv5m including nano, small, medium, large, and x-large
outperforms YOLOv3 and faster RCNN with an models. The proposed framework uses the Brats 2021
accuracy of 96.5% and mAP of 96%. In [26] a dataset, which contains 2,000 instances with 8000 scans
framework for detecting brain tumors via transfer at 240x240 resolution and three distinct kinds and
learning is being developed. The proposed methodology locations, namely, T1, T2, and Flair. In addition, the
employs the tiny YOLOv4 model for training and the Microsoft COCO dataset was utilized to train the model
YOLOv3 detection unit. However, the model was using transfer learning. With a mean average precision
trained on the Microsoft Common Objects in Context of 0.912, the findings demonstrate that the YOLOv5 x-
(COCO) dataset with another gathered dataset that large model outperforms the others. In this paper, we
depicts 3064 magnetic resonance images with different propose a new framework for detecting brain tumors
regions for cancer tumors of 512x512 resolution, such using the three distinct YOLO models such as YOLOv3,
as coronal, axial, and sagittal, using transfer learning. YOLOv5, and YOLOv7 with different data
With a mean average precision of 0.9314, the findings augmentation techniques. Table 1 summarizes the state-
show that the fine-tuned small YOLOv4 model with of-the-art related works.
transfer learning surpasses others. The study in [35]
Table 1. Summary of the state-of-art research.
Paper Dataset Task YOLO version Findings
Newly constructed dataset with 11,367 samples and Indoor object The best average accuracy of 93.9 at an intersection over union of
[41] YOLOv5
Pascal-VOC2012 with 640 x 640. detection. 0.9.
Full 360-degree images with a resolution of 2048 x 12 Indoor and outdoor YOLOx outperforms others with a precision of 100% and a recall
[46] YOLOx and YOLOv5
were 8 collected by Lidar sensors. objects detection. of 95.3%.
Apple dataset with 878 images of different YOLOv3 and
[19] Detecting apple fruit. YOLOv5 outperforms YOLOv3 with a recall of 97.8%.
resolutions. YOLOv5
DOTA dataset with 11268 satellite images of 20,000 x Detecting landing YOLOv3, YOLOv4, YOLOv5 shows better results improvement with a precision of
[27]
20,000 resolution and 15 target classes. sweet spots. and YOLOv5 70% and recall of 61%.
A dataset of 3,846 images of face masks was collected Stacked ResNet-50 outperforms others with a testing accuracy of
[40] Detecting face masks. YOLOv5
by CCTVs. 87% and a precision of 71%.
AIZOO dataset for face mask detection with 7959 YOLOv3, and ShuffleCANet as the backbone layer outperforms others with a
[47] Detecting face masks.
images of 640 x 640. YOLOv5 mean average accuracy of 95.2%.
Fire and smoke Swin-YOLOv5 outperforms others with an mAP of 0.7
[50] Dataset of 16,503 images of two target classes. YOLOv5
detection. improvements at an IOU of 0.5.
The improved model of YOLOv5 using K-means++ outperforms
[43] Self-build dataset of 4815 images of fires and smoke. Fire detection. YOLOv5
others by 4.4% mAP.
Google images, Microsoft COCO, and Indian number Automobile plate YOLOv4 and
[6] YOLOv5s outperforms others with an mAP of 87%.
plates dataset detection. YOLOv5
YOLOv5x outperforms other models with an MCC of 93.6%.
The CBIS-DDSM dataset with 10239 distinct 1000 x Breast cancer YOLOv5 and
[25] Also, it outperforms YOLOv3 with an accuracy of 96.5% and an
2000 pixel images of breast cancer. detection. YOLOv3
mAP of 96%.
Microsoft COCO dataset and a collected dataset of Tiny YOLOv4 and Tiny YOLOv4 with transfer learning shows the best results with
[26] Brain tumor detection.
3064 brain cancer MRI images of 512 x 512. YOLOv3 an mAP of 93.14%.
Microsoft COCO and the Brats 2021 datasets with YOLOv5 x-large model shows the best results with an mAP of
[35] Brain tumor detection. YOLOv5
240 x 240 brain cancer MRI images. 91.2%.

3. Proposed Methodology for Brain Tumor evaluating the usage of several YOLO models in order
Detection to find the optimal model for brain tumor detection, as
shown in Figure 2. The proposed methodology contrasts
This section demonstrates the proposed framework for traditional YOLO models such as YOLOv3 with
detecting brain tumors using various YOLO models cutting-edge models such as YOLOv7 and YOLOv5.
such as YOLOv3, YOLOv7, and YOLOv5 with We also test the YOLOv5 model with various network
different weights and data augmentation. The detection sizes, including nano, small, medium, large, and x-large
process of various brain tumors of variable sizes and networks. Data augmentation, on the other hand,
dimensions may be evaluated using various metrics improves the model's effectiveness by increasing the
regarding accuracy and loss functions. However, the number of training samples. However, we apply several
weight and size of the neural networks have a significant data augmentation techniques for improvement, such as
influence on detection accuracy and speed, particularly image and bounding box flipping horizontally and
in the situation of low-light magnetic resonance images. vertically, which minimizes model sensitivity to varied
In this paper, we propose a novel framework for orientations.
Impact of Data-Augmentation on Brain Tumor Detection Using Different YOLO ... 471

Figure 2. An Overview of the proposed framework for brain tumors detection.

Furthermore, it employs three distinct MRI imaging enhancing the overall accuracy and robustness of the
positions for brain tumors: axial, coronal, and sagittal. detection system.
Our objective is to find the ideal posture for detecting In this work, we use an amassed dataset initially
cancer tumors. We maintain 20% of each dataset for developed to identify malignancies in the brain using a
model testing while evaluating multiple models. In variety of MRI orientations. The dataset is available
addition, we compute several evaluation metrics for online via the Kaggle repository at
model comparisons, such as precision, recall, mean https://www.kaggle.com/datasets/davidbroberts/brain-
average precision, intersection over union, and three tumor-object-detection-datasets. However, as shown in
loss functions. Also, we compare different YOLO Figure 3, the dataset consists of three unique datasets
models with prior models, such as the faster region- representing three possible brain tumor orientations,
based convolutional neural networks for object namely axial, coronal, and sagittal, with two labels,
detection. tumor and non-tumor. It also includes 1218 images of
varying resolutions. All Exchangeable Image File
3.1. Dataset Collection and Image Processing Format )EXIF( rotations were ignored during data
Magnetic Resonance Imaging (MRI) is a powerful diagnostic
preparation, and pixels were normalized. All images
tool widely employed in medical imaging, capable of were also resized to 416×416. However, data analysis
capturing detailed images of the body's internal structures reveals that the axial dataset has 18 missing labels, and
from various orientations. The three primary orientations the coronal dataset contains one missing label.
utilized in MRI are axial, coronal, and sagittal, each providing RoboFlow, an online platform, was used to manage
a unique perspective that enables healthcare professionals to missing classes and ground truth bounding boxes.
comprehensively evaluate and diagnose a range of conditions,
including brain tumors.
The axial orientation offers cross-sectional views
perpendicular to the body's long axis, spanning from the
top of the head to the bottom. These axial MRI images
are invaluable for visualizing intricate details of the
brain, spinal cord, and abdominal organs. Conversely,
the coronal orientation presents a frontal view of the
body, with the imaging plane perpendicular to the axial
plane. Coronal MRI scans facilitate thorough
assessments of the brain, eyes, facial structures, spinal
cord, and abdominal organs. a) Axial. b) Sagittal. c) Coronal.

Furthermore, the sagittal orientation provides a side Figure 3. Illustrates a sample of different brain cancer MRI
view of the body, with the imaging plane parallel to its orientations and datasets labels.
long axis. Sagittal MRI images are instrumental in
On the other hand, data augmentation approaches
examining the brain, spinal cord, pelvic region, and
were used to reduce the models' sensitivity to different
evaluating the integrity of various muscles and tendons.
orientations. We flip images and employ bounding
By incorporating these diverse MRI orientations into
boxes in horizontal and vertical directions to increase
their study, researchers can comprehensively evaluate
the number of data samples. The description and labels
the performance of the YOLO models in detecting brain
for the dataset are shown in Table 2. However, we
tumors from multiple vantage points, potentially
maintain 20% of each dataset for testing for alternative
472 The International Arab Journal of Information Technology, Vol. 21, No. 3, May 2024

model comparisons by applying data augmentation


techniques. The images were boosted three times more
with 7382 training and testing samples. Table 3 shows
the distribution of datasets following data augmentation.

Table 2. Datasets distribution following data augmentation.


Datasets labels Axial Sagittal Coronal Total
Tumor 1754 1023 1275 4052
Not tumor 1210 983 1137 3330
Total images 2964 2006 2412 7382
Total annotations 3141 2154 2566 7861
Average image size 0.17 megapixel 0.17 megapixel0.17 megapixel ----
Median image ratio 416 x 416 416 x 416 416 x 416 ----

Table 3. Dataset description and analysis following data-


augmentation.
Evaluation sets Axial Sagittal Coronal Total
Training set (80%) 2721 1844 2215 6780
Testing set (20%) 243 162 197 602 Figure 4. Illustration of YOLOv5 architecture.
Total 2964 2006 2412 7382

3.3. YOLOv3 Model


3.2. YOLOv5 Model
In 2018, Redmon and Farhadi [32] came up with the
YOLOv5 [39, 48] was originally launched in May 2020. idea for a new YOLO version, which they called
An object detection algorithm finds objects by looking YOLOv3 The revised version has an inference time of
at images all at once. The backbone, neck, and head 22 milliseconds (ms) and a mean average precision of
layers are represented in Figure 4. All the layers are 28.2%. It does this by applying dimension clusters to the
completely conventional networks. To begin, the problem of predicting ground truth bounding boxes for
backbone layer is utilized to extract significant and anchor boxes. However, due to softmax's (the network
discriminative information from incoming images. In classification layer's) poor performance, YOLOv3 uses
this YOLO version, the cross-stage partial network the logistic regression function to minimize the
CSPNet is employed as the basic learner in the confidence score. The confidence score is the
backbone layer to extract features. Second, the neck probability of a certain item in a particular grid cell.
layer is utilized for feature pyramid creation, which aids Compared to YOLOv5, it employs the darknet-53 as the
in detecting the same objects with varying sizes and backbone layer, adding a more convolutional layer. In
placements. YOLOv5, on the other hand, generates a the neck layer, YOLOv5 extracts features using the path
features pyramid using the path aggregation network aggregation network. In light of the results, Redmon
(PANet). Finally, the head layer, the YOLO layer, is stated that the YOLOv3 detection model is faster and
employed for object recognition and prediction. It more accurate than other detection models, such as the
creates a vector containing the target class probability YOLOv2 and Single-Shot Detector models (SSD). The
and bounding boxes. However, bounding boxes define architecture of the YOLOv3 network is shown in Figure
object coordination regarding x and y cross points, 5. In modern times, YOLOv3 continues to be an
height, and width. This layer generates many bounding effective detection model in various contexts. YOLOv3
boxes to improve detection accuracy and performance was used by Magnuska et al. [24] to detect breast cancer
by computing the area of overlapping boxes. tumors. According to the findings, the performance of
Intersection over Union (IoU) may, on the other hand, YOLOv3 is superior to that of Viola-Jones in terms of
be computed to identify the best bounding overlapping the intersection over union measure. In addition, various
boxes [31]. In this paper, we choose YOLOv5 because variants, such as tiny-YOLOv3, were derived from
of its speed, performance, and accuracy. However, the YOLOv3 and used as the basis for their creation. In their
critical differences between YOLOv5 and the prior experimental investigation, Zhang et al. [51] advocated
versions are as follows: using a K-means cluster as a technique for enhancing
1. It employs CSPDarknet53 at the backbone layer. tiny YOLOv3 in order to raise the accuracy of
2. It employs the PANet in the neck layer. pedestrian identification.
3. It employs logistic and binary cross-loss functions.
4. It recognizes near and remote objects in the same
input image.
Impact of Data-Augmentation on Brain Tumor Detection Using Different YOLO ... 473

Aggregation Network (E-ELAN) is used as the


backbone layer of the system. The E-ELAN was
developed to provide a detection system that is both
more accurate and faster. However, in contrast to earlier
networks, such as the first iteration of the ELAN and
CSPVoVNet, the E-ELAN adds three extra components
to the training layer. These components are known as
shuffle, merge, and expand, respectively. One of the
fundamental tenets of YOLOv7 is to enhance detection
accuracy and performance while simultaneously
decreasing the number of parameters and the amount of
processing required.
On the other hand, YOLOv7 employs not one but two
Figure 5. Illustration of YOLOv3 Architecture. head layers, namely the lead head and the auxiliary
head, for the detection layer. These two layers interact
3.4. YOLOv7 Model with one another to provide a more accurate
representation of the correlation and distribution of the
A new version of YOLO, known as YOLOv7, was
data. On the other hand, the authors' experimental
developed by Wang et al. [42] in July 2022. Compared
findings demonstrate that YOLOv7 performs better than
to its predecessors, the YOLOv7 model is quicker and
other models such as tiny YOLOv4, YOLOv4, and
more precise. In terms of real-time object identification
YOLOR. In modern times, YOLOv7 is beneficial in
with a mean average precision that ranges from 51.4%
diagnosing several medical conditions. In the
to 56.8%. However, the architecture of YOLOv7 is
experimental investigation that Bayram et al.
inspired by the original YOLOv4 and scaled versions of
[7] conducted on diagnosing kidney disorders, the
that design. The YOLOv7 architect is shown in Figure
authors found that YOLOv7 has the highest mean
6. In addition, the new Extended Efficient Layer
average accuracy of 85% at IoU of 50%.

Figure 6. Illustration of YOLOv7 architecture.

correctly and erroneously categorized objects. Precision


4. Evaluation Metrics
is the proportion of correct predictions (True positive)
We employ three evaluation metrics for the YOLO relative to the total number of correct samples (True
models: precision, recall, and mean Average Precision positive +false positive). The recall metric quantifies the
(mAP). We aim to determine the ideal YOLO network proportion of accurate predictions (True positives)
size and weight with the highest accuracy. The mAP relative to the total number of relevant samples across
metric, on the other hand, analyzes the effectiveness of all labels. A confusion matrix has four main qualities
object detection by calculating the mean of the average that may be used to construct alternative assessment
precision of the entire data class concerning the metrics:
Intersection over Union (IoU) value [15]. The mAP
 True Positive (TP): indicates the number of brain
value is derived from the confusion matrix, IoU,
precision, and recall. Based on the ground truth tumors appropriately diagnosed.
bounding box, the confusion matrix summarizes the  True Negative (TN): number of accurately identified
classification and detection outcomes regarding non-tumors.
474 The International Arab Journal of Information Technology, Vol. 21, No. 3, May 2024

 False Positive (FP): the number of misclassified non- This section shows the experimental findings of
tumors that are tumors. comparing several YOLO models for identifying brain
 False Negative (FN): the number of tumors cancers. Our objective is to find the optimum YOLO
misclassified as non-tumors. model and MRI orientation for tumor detection in terms
of accuracy and performance. However, image
The following metrics were computed and derived based on
processing and classification are known to have high
the confusion matrix:
equipment requirements, such as large amounts of RAM
𝑀𝑒𝑎𝑛 𝑎𝑣𝑒𝑟𝑎𝑔𝑒 𝑝𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛 (𝑚𝐴𝑃) = (1) and a powerful GPU. The experiment setup and device
1
∑𝑘=𝑛
𝑘=1 𝐴𝑣𝑒𝑟𝑒𝑎𝑔𝑒 𝑝𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛 𝑜𝑓 𝑐𝑙𝑎𝑠𝑠 𝑘 qualification are shown in Table 4.
𝑛
𝑇𝑟𝑢𝑒 𝑃𝑜𝑠𝑖𝑡𝑖𝑣𝑒 (2)
𝑃𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛 = Table 4. Experiment setup and simulation device qualifications.
𝑇𝑟𝑢𝑒 𝑃𝑜𝑠𝑖𝑡𝑖𝑣𝑒 + 𝐹𝑎𝑙𝑠𝑒 𝑃𝑜𝑠𝑖𝑡𝑖𝑣𝑒
𝑇𝑟𝑢𝑒 𝑃𝑜𝑠𝑖𝑡𝑖𝑣𝑒 Device Specification Description
𝑅𝑒𝑐𝑎𝑙𝑙 = (3)
𝑇𝑟𝑢𝑒 𝑃𝑜𝑠𝑖𝑡𝑖𝑣𝑒+𝐹𝑎𝑙𝑠𝑒 𝑁𝑒𝑔𝑎𝑡𝑖𝑣𝑒 Processor Intel(R) Core i7 10𝑡ℎ generation.
RAM 8 Gigabits.
𝐼𝑛𝑡𝑒𝑟𝑠𝑒𝑐𝑡𝑖𝑜𝑛 𝑜𝑣𝑒𝑟 𝑢𝑛𝑖𝑜𝑛 (𝐼𝑜𝑈) =
Operating system Windows x64
𝑂𝑣𝑒𝑟𝑙𝑎𝑝𝑒𝑑 𝑎𝑟𝑒𝑎 𝑏𝑒𝑡𝑤𝑒𝑒𝑛 𝑡ℎ𝑒 𝑝𝑟𝑒𝑑𝑖𝑐𝑡𝑒𝑑 𝑎𝑛𝑑 𝑔𝑟𝑜𝑢𝑛𝑑 𝑡𝑟𝑢𝑡ℎ 𝑏𝑜𝑥𝑒𝑠 (4) CPU 1.50 GHz
𝐴𝑟𝑒𝑎 𝑜𝑓 𝑢𝑛𝑖𝑜𝑛
GPU NVIDIA GeForce MX230
We also employ three loss functions for minimization
and evaluation, including the bounding box regression For hyperparameters tuning, YOLO models
score (loss), which may be used to assess non- comprise around 29 distinct parameters in total.
overlapping bounding boxes [20]. The class probability However, as shown in Table 5, we set up twelve
score may be used to determine how well a bounding box parameters. The parameters are loss gain functions,
matches the class of an item [33]. The objectness score learning rates, optimizers, and IoU threshold. All
(confidence score/GIoU) may be used to assess the images were resized to 416×416 for all models as input
likelihood of a certain object being in a grid cell [44]. image size. However, due to low-weight networks such
𝑂𝑏𝑗𝑒𝑐𝑡𝑛𝑒𝑠𝑠 𝑠𝑐𝑜𝑟𝑒 (𝑐𝑜𝑛𝑓𝑖𝑑𝑒𝑛𝑐𝑒) = as Nano and small models, we increase the number of
𝑇𝑟𝑢𝑡ℎ
𝑃𝑟𝑜𝑝𝑎𝑏𝑖𝑙𝑖𝑡𝑦(𝑜𝑏𝑗𝑒𝑐𝑡) ∗ 𝐼𝑜𝑈𝑃𝑟𝑒𝑑𝑖𝑐𝑡𝑒𝑑 )
(5) epochs to 100 iterations in YOLOv5 for detection
𝐵𝑜𝑢𝑛𝑑𝑖𝑛𝑔 𝑏𝑜𝑥 𝑟𝑒𝑔𝑟𝑒𝑠𝑠𝑖𝑜𝑛 𝑙𝑜𝑠𝑠 =
results improvement. To make the comparison more
(6)
𝑀𝑒𝑎𝑛 𝑠𝑞𝑢𝑎𝑟𝑒𝑑 𝑒𝑟𝑟𝑜𝑟 (𝑥 𝑝𝑟𝑒𝑑𝑖𝑐𝑡𝑒𝑑 , 𝑥 𝑡𝑟𝑢𝑡ℎ ) accurate, we set all other YOLOv5 models, including
medium, large, and x-large, to 100 epochs.
𝐶𝑙𝑎𝑠𝑠 𝑝𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑦 𝑠𝑐𝑜𝑟𝑒 = 𝑃𝑟𝑜𝑝𝑎𝑏𝑖𝑙𝑖𝑡𝑦(𝑐𝑙𝑎𝑠𝑠𝑖 |𝑜𝑏𝑗𝑒𝑐𝑡) (7)
Where: Table 5. Hyperparameter tuning and data-augmentation
1, 𝑇ℎ𝑒𝑟𝑒 𝑖𝑠 𝑛𝑜 𝑜𝑏𝑗𝑒𝑐𝑡 processing.
𝑃𝑟𝑜𝑝𝑎𝑏𝑖𝑙𝑖𝑡𝑦(𝑜𝑏𝑗𝑒𝑐𝑡) = { (8)
0, 𝑇ℎ𝑒𝑟𝑒 𝑖𝑠 𝑜𝑏𝑗𝑒𝑐𝑡 Detection Models
Parameters
YOLOv3 YOLOv5 YOLOv7
𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑦(𝑐𝑙𝑎𝑠𝑠𝑖 ) = (9) Initial learning rate 0.01 0.01 0.01
𝑇ℎ𝑒 𝑝𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑦 𝑜𝑓 𝑜𝑏𝑗𝑒𝑐𝑡 𝑡ℎ𝑎𝑡 𝑏𝑒𝑙𝑜𝑛𝑔 𝑡𝑜 𝑐𝑙𝑎𝑠𝑠𝑖 (lr0)
Final learning rate 0.1 0.01 0.1
(lrf)
5. Experimental Results and Discussion Momentum 0.937 0.937 0.937
Box loss gain 0.05 0.05 0.05
In cancer detection, supervised learning and detection Classification loss gain 0.5 0.5 0.3
models are used. The type and quality of data are Objectness loss gain 1.0 1.0 0.7
thought to have the most significant influence on IoU training threshold 0.2 0.2 0.2
Optimizer SGD SGD SGD/Adam
detecting operations. However, detection models need Anchors per output 6.14 6.14 6.02
labeled datasets, which are costly and necessitate expert layer
identification of illnesses to minimize confusion with Image input size 416 x 416 416 x 416 416 x 416
Batches 16 16 16
symptoms or other conditions. As a result, we employ a Epochs 50 100 60
variety of data augmentation approaches to boost the Data-Augmentation For images (Flip horizontally and vertically). For
number of training components to get reliable detection bounding boxes (Flip horizontally and vertically)
results. On the other hand, brain tumor detection
procedures have a specific and unique instance in which In contrast to YOLOv5, we chose 50 epochs to train
brain cancers may be discovered using a collection of the YOLOv3 model and 60 epochs to train the YOLOv7
magnetic resonance images of varying dimensions. model. However, all models were set up to 16 batches
Therefore, detecting tumors alone is not considered compatible with the small chosen learning rates and
sufficient; instead, it is necessary to concentrate and device qualifications in terms of RAM and GPU, as
conduct research on the side of the images with the discussed in Table 4. For clarity, only four images will
highest discriminatory power of the automated be fed at once into the model in each iteration. We use
detection compilations, highlighting their advantages the SGD optimizer for optimizers in YOLOv3 and
and limitations while considering medical YOLOv5 models. However, to our knowledge, the
considerations. Adam optimizer performs worse than the SGD
optimizer even though the Adam optimizer converges
Impact of Data-Augmentation on Brain Tumor Detection Using Different YOLO ... 475

faster [13]. Therefore, we compare the state-of-the-art


YOLOv7 model using two different optimizers, SGD
and Adam optimizer.
The efficiency of several YOLO models for tumor
detection across each dataset is discussed separately in
the following sections. Tables 6 and 7 show the findings a) YOLOv5n. b) YOLOv5s. c) YOLOv5m. d) YOLOv5l.

of brain tumor detection over the axial orientation using


different YOLO models before and after data
augmentation. Also, to emphasize the improvement in
outcomes, the average was calculated. However, when
comparing the outcomes before and after data e) YOLOv5x. f) YOLOv3. g) YOLOv7.
augmentation. It was discovered that there was a result Figure 7. Shows a sample of improved brain tumor detection results
enhancement with 0.432 average precision, 0.242 and accuracy over the axial dataset following data-augmentation.
average recall, 0.431 average mAP at IoU of 0.5, and
0.503 average mAP at IoU of 0.5 to 0.95. Furthermore, For the YOLOv7 optimizers’ comparison, it was
YOLOv3 surpasses others with an mAP of 62.4% at discovered that YOLOv7 does not perform well when
IoU of 0.5 and a precision of 63.1% before data utilizing the Adam optimizer, most likely owing to
augmentation. YOLOv7 also has the lowest mAP of numerous placed labels (number of detected labels).
23.5% at IoU of 0.5 and precision of 32.4%. However, However, the Adam optimizer only recognized labels
YOLOv7 has the highest recall rate of 82.4% compared ranging from 0 to 7 in each epoch. As a result, YOLOv7
to other models. Data augmentation revealed that the yields 40 to 56% mAP. In SGD, identified labels rose
YOLOv5 x-large surpasses others, with an mAP of from 7 to 29 in each epoch, indicating improved
99.5% at IoU of 0.5, a precision of 99%, and an mAP of outcomes. Therefore, we choose the SGD optimizer for
93% for IoU of 0.5 to 0.95. YOLOv7, on the other hand, the YOLOv7 detection model. Figure 8 shows the
has inferior detection outcomes with 0.027 results of comparing the Adam and SGD
classification loss gain and 0.66 box loss gain. As a optimizers using YOLOv7.
result, the YOLOv3 and all YOLOv5 models
outperform the cutting-edge YOLOv7 model on the
axial dataset. Also, in contrast to YOLOv3 and
YOLOv5, it was found that YOLOv7 is more affected
by data augmentation. However, as shown in Figure 7,
the YOLOv3 and YOLOv5 models have a high tumor
detection accuracy ranging from 90% to 100%.
YOlOv7, on the other hand, achieves detection rates
ranging from 30% to 90% across all samples, despite
occasional misclassifications.
Table 6. Shows tumor detection results over the axial dataset before
augmentation with different YOL weights.
Model Precision Recall mAP 50mAP 50-95 Obj loss Cls loss Box loss a) Adam optimizer performance.
YOLO v5n 0.61746 0.70329 0.61684 0.43355 0.00524180.034826 0.025097
YOLO v5s 0.60834 0.68293 0.59927 0.43204 0.00548910.036905 0.024193
YOLO v5m 0.51929 0.73614 0.56837 0.40287 0.00541190.045909 0.023632
YOLO v5l 0.51097 0.69296 0.49397 0.35077 0.005594 0.058388 0.022582
YOLO v5x 0.56775 0.68737 0.58189 0.41925 0.00540350.044001 0.022247
YOLO v3 0.63109 0.69901 0.62469 0.4512 0.00528220.037102 0.02407
YOLO v7 0.3242 0.8242 0.3869 0.2358 0.005279 0.01698 0.08777
Average 0.52694 0.72044 0.54252 0.38199 0.00541 0.039881 0.034082

Table 7. Shows tumor detection results over the axial dataset


following data-augmentation with different YOLO weights.
Model Precision Recall mAP mAP 50- Obj loss Cls loss Box loss
50 95
YOLO 0.99076 0.98204 0.99009 0.8639 0.00250820.0000361 0.012857
v5n b) SGD optimizer performance.
YOLO v5s 0.99898 0.99401 0.99497 0.90149 0.00217930.0000289 0.010736
YOLO 0.99938 0.99401 0.99485 0.91338 0.00197840.00002560.0092766 Figure 8. Compares adam and SGD optimizers performance using
v5m YOLOv7 Model over axial dataset.
YOLO v5l 0.99932 0.99401 0.99494 0.92649 0.00201090.00002540.0086069
YOLO 0.999 0.997 0.995 0.930 0.0081 0.0018 0.000021
v5x
Tables 8 and 9 show the tumor detection accuracy
YOLO v3 0.986 0.994 0.994 0.903 0.0021 0.00008 0.0105 and performance for the coronal orientation dataset
YOLO v7 0.7657 0.8007 0.8661 0.731 0.003841 0.02762 0.06685 using YOLO models before and after data
Average 0.95806 0.96229 0.97331 0.88423 0.0033683 0.00493 0.0176651
augmentation. However, compared to others before data
476 The International Arab Journal of Information Technology, Vol. 21, No. 3, May 2024

augmentation, YOLOv3 has the highest detection Table 8. Shows tumor detection results over the coronal dataset
outcomes with an mAP of 78.2% at IoU of 0.5. before data-augmentation with different YOLO weights.
Furthermore, YOLOv5 x-large and medium weights mAP mAP 50-
Model Precision Recall Obj loss Cls loss Box loss
50 95
rank second in terms of mean average precision. YOLO v5n 0.65431 0.67047 0.6923 0.46631 0.00418910.031565 0.024857
YOLOv7, like the axial orientation dataset, performs the YOLO v5s 0.5559 0.79502 0.68246 0.4927 0.00406850.042924 0.023766
YOLO v5m 0.67332 0.74448 0.71691 0.53649 0.00416340.038446 0.021936
worst, with a mAP of 54% and an IoU of 50%. Also, YOLO v5l 0.57298 0.73733 0.6436 0.48649 0.00455280.053788 0.02338
compared to the axial orientation dataset and other YOLO v5x 0.61021 0.82697 0.71946 0.5315 0.00457940.043106 0.023826
YOLO v3 0.68968 0.79627 0.78289 0.59829 0.00440650.032529 0.02301
detection models, YOLOv7 is the least affected by data YOLO v7 0.4808 0.8538 0.5405 0.4063 0.004962 0.02106 0.08383
augmentation, with results enhancement ranging from 1 Average 0.59715 0.79231 0.68097 0.50863 0.00445540.038642 0.033291
to 3% of total assessment metrics.
Table 9. Shows tumor detection results over the coronal dataset
Nonetheless, when comparing data augmentation following data-augmentation with different YOLO weights.
outcomes, we find that increased data affects YOLO
Model Precision Recall mAP mAP 50- Obj loss Cls loss Box loss
models, resulting in improved results, which is 50 95
unsurprising given that data augmentation methods YOLO 0.97445 0.9856 0.99283 0.88498 0.00239260.0001683 0.010644
v5n
lower detector sensitivity. The results reveal that YOLO v5s 0.9994 0.98735 0.99359 0.92035 0.00199960.00023680.0089049
detection results improve by 0.317 average precision, YOLO 0.99415 0.9835 0.99185 0.92973 0.00179310.00039260.0079115
0.239 average mAP at IoU of 0.5, and 0.343 average v5m
YOLO v5l 0.99303 0.99028 0.99109 0.93772 0.00176710.00053780.0074157
mAP at IoU of 0.5 to 0.95. Similarly to the axial dataset, YOLO 0.9934 0.98055 0.99053 0.94204 0.00175610.00014190.0068884
the YOLOv5 small weight has the highest detection v5x
YOLO v3 0.994 0.994 0.991 0.926 0.001877 0.00094820.0085525
results after data augmentation, with an mAP of 99.3%, YOLO v7 0.5115 0.8996 0.5564 0.4532 0.003962 0.02854 0.06537
demonstrating that the small weight is the most Average 0.91425 0.97255 0.91908 0.85151 0.00219250.00513290.0175072
favorably affected by data increase. However, as shown
in Figure 9, all models have significant detection
accuracy over coronal orientation. Compared to axial
orientation, YOLOv5n, YOLOv5s, YOLOv5x, and
YOLOv3 have the highest detection stability.

a) YOLOv5n. b) YOLOv5s. c) YOLOv5m. d) YOLOv5l. e) YOLOv5x.

f) YOLOv3. g) YOLOv7.

Figure 9. Shows a sample of improved brain tumor detection results and accuracy over the coronal dataset following augmentation.

Tables 10 and 11 show the tumor detection accuracy detector weights. YOLOv3, on the other hand, has a
and performance using YOLO models before and after high positive sensitivity to data augmentation, with
data augmentation for the sagittal orientation dataset. precision increased by 59.8%, mAP by 42.9% at IoU of
However, following data augmentation, detection 0.5, and 0.077 classification loss gain reduction.
accuracy improves by 0.434 average precision, 0.388 YOLOv7 is the least susceptible to data increase, yet it
average mAP at IoU=0.5, and 0.207 average recall. improves outcomes by 0.064 compared to coronal
Furthermore, we found that the YOLOv5 nano weight orientation. Furthermore, as compared to the axial and
surpasses others with an mAP of 96.7% at an IoU of coronal orientations, the sagittal dataset has the lowest
50%. It is worth noting that, like with the coronal detection accuracy, with an average mAP of 89.9%.
orientation dataset, increasing network weight However, as shown in Figure 10, all models exhibit
significantly impacts detection accuracy. However, significant detection performance and accuracy.
building a large weight network does not necessarily
improve detection outcomes. As a result, it is critical to
identify the relationship between data augmentation and
Impact of Data-Augmentation on Brain Tumor Detection Using Different YOLO ... 477

Table 10. Shows tumor detection results over the sagittal dataset Table 11. Shows tumor detection results over the sagittal dataset
before data-augmentation with different yolo weights. following data-augmentation with different yolo weights.
Model Precision Recall mAP 50mAP 50-95 Obj loss Cls loss Box loss mAP mAP 50-
YOLO v5n 0.48354 0.83131 0.55339 0.35948 0.00518680.028463 0.0299 Model Precision Recall Obj loss Cls loss Box loss
50 95
YOLO v5s 0.46457 0.85612 0.54414 0.3966 0.00429790.0282710.027036 YOLO v5n 0.94046 0.94527 0.96799 0.85926 0.00266520.0056824 0.013165
YOLO v5m 0.49976 0.78951 0.55119 0.3871 0.00443130.0579060.027464 YOLO v5s 0.96164 0.94119 0.95512 0.8765 0.002439 0.0069707 0.01133
YOLO v5l 0.42547 0.57456 0.46628 0.34213 0.00533690.0724410.028687 YOLO
0.95574 0.9523 0.95932 0.8893 0.002121 0.00835 0.010057
YOLO v5x 0.46981 0.57894 0.49138 0.34809 0.00494070.0712240.030903 v5m
YOLO v3 0.36036 0.86259 0.52743 0.36598 0.00475360.0831550.031451 YOLO v5l 0.95606 0.94729 0.9502 0.89003 0.00213690.00852330.0094977
YOLO v7 0.4622 0.7664 0.4912 0.3465 0.004968 0.01662 0.08737 YOLO v5x 0.95589 0.94631 0.95412 0.9 0.00204970.00837650.0083264
Average 0.44703 0.73802 0.51194 0.3644 0.00478810.0549360.038819 YOLO v3 0.95817 0.92698 0.9566 0.87865 0.002192 0.0061747 0.0106
YOLO v7 0.5014 0.9595 0.6201 0.552 0.002978 0.021 0.06583
Average 0.88148 0.9456 0.89924 0.83108 0.00231940.00989920.0192735

a) YOLOv5n. b) YOLOv5s. c) YOLOv5m. d) YOLOv5l. e) YOLOv5x.

f) YOLOv3. g) YOLOv7.

Figure 10. Shows a sample of improved brain tumor detection results and accuracy over the sagittal dataset following data-augmentation.

For medical considerations, it should be noted that Third, we compare the most recent detection models,
the objective of employing MRI layers is to cover all of such as YOLOv5, YOLOv7, and YOLOv3, to find the
the critical components by which cancers may be most accurate model. Fourth, with the introduction of
diagnosed precisely and clearly, where the tumor's novel detection models such as YOLOv7, evaluating the
location, size, and kind may be determined. On the other model's performance using well-known optimizers such
hand, the study's findings revealed a considerable as Adam and SGD is useful. Finally, we analyze the
improvement in the accuracy of identifying brain performance of several YOLO models using a
cancers utilizing magnetic resonance imaging in all comprehensive set of evaluation metrics to demonstrate
dimensions. However, utilizing axial images yielded the detection speed, performance, and detection error.
greatest detection results. This is owing to the nature of Concerning the study's limitations, it should be
the axis' dimensions, centered on the X and Y points highlighted that altering the parameters of detection
compared to others. Furthermore, it provides an upper models may change from one trial to the next, resulting
and precise coverage of the right and left sides of the in erroneous detection findings. Furthermore, the
brain, providing more recognized data patterns. quality of the images used to diagnose brain tumors
To highlight the state-of-the-art of the study significantly enhances detection accuracy. Since we
compared to other studies. First, while reviewing a used images at a resolution of 416*416 for this study,
variety of internet datasets for brain tumor detection, we enlarging the image could result in a lower-quality
discovered that many of the data lacked labeling. As a image. Furthermore, we discovered that all YOLO
result, in this work, we apply a series of data models are susceptible to data augmentation strategies,
augmentation approaches to enhance the training set and with the YOLOv7 model being the least affected.
lessen the sensitivity of detection models from future Finally, this study did not discover any micro-tumor.
detection operations using any other data set than the The study was restricted to the early detection of cancers
one used in the study. Also, this may be utilized to with two primary categories (tumor and no tumor). This
construct detection models that can learn from is owing to a paucity of online addressable data sets,
unlabeled data sets by discovering methods to enhance particularly for detection operations, whose addressing
the data at the start and then transferring this knowledge procedure is quite costly and necessitates using
to specialized categorization procedures. Second, in this radiologists for addressing operations.
work, we employ all axes of magnetic resonance In short, the results show that the YOLOv5 and
imaging, including axial, sagittal, and coronal, to find YOLOv3 models are more sensitive to data
the ideal image dimensions that may be used to achieve augmentation than the YOLOv7 model. In addition, we
the maximum discriminatory accuracy in detection. show that the axial orientation has higher tumor
478 The International Arab Journal of Information Technology, Vol. 21, No. 3, May 2024

detection accuracy than the other orientations. Applied Science, vol. 11, no. 12, p. 5551, 2021.
However, based on the statistical results, large-weight https://doi.org/10.3390/app11125551
models are more likely to recognize data samples than [4] Badran E., Mahmoud E., and Hamdy N., “An
uncover data patterns. As a result, YOLOv7 performs Algorithm for Detecting Brain Tumors in MRI
the worst compared to others, in which the number of Images,” in Proceedings of the International
identified labels (Classes) in each epoch ranged from 0 Conference on Computer Engineering and
to 7. Nonetheless, all models revealed significant Systems, Cairo, pp. 368-373, 2010.
detection accuracy. [5] Bakator M., and Radosav D., “Deep Learning and
Medical Diagnosis: A Review of Literature,”
6. Conclusions Multimodal Technology Interact, vol. 2, no. 3, pp.
47, 2018. https://doi.org/10.3390/mti2030047
The developed methodology evaluates the performance [6] Batra P., Hussain I., Abdul Ahad M., Casalino G.,
of state-of-the-art YOLO models on a dataset of 7382 and Alam M., “A Novel Memory and Time-
samples from three different MRI orientations (axial, Efficient ALPR System Based on YOLOv5,”
coronal, and sagittal) using different weights and Sensors, vol. 22, no. 14, pp. 5283, 2022.
degrees of data augmentation. Many data augmentation https://doi.org/10.3390/s22145283
approaches were used to reduce detector sensitivity and [7] Bayram A., Gurkan C., Budak A., and Karataş H.,
enhance detection accuracy. Furthermore, a comparison “A Detection and Prediction Model Based on
was made between the Adam and SGD optimizers. We Deep Learning Assisted by Explainable Artificial
need to determine the optimal network weight and MRI Intelligence for Kidney Diseases,” European
orientation to detect brain tumors with MRI. With an Journal of Science and Technology, no. 40, pp. 67-
IoU of 0.5, the results show that the average mAP for 74, 2022. DOI: 10.31590/ejosat.1171777
axial orientation is 97.33 percent. [8] Cepni S., Atik M., and Duran Z., “Vehicle
Additionally, SGD outperforms Adam optimizer by Detection Using Different Deep Learning
over 20% mAP. In addition, YOLOv5n, YOLOv5s, Algorithms from Image Sequence,” Baltic
YOLOv5x, and YOLOv3 were discovered to have a Journal Modern Computing, vol. 8, no. 2, pp. 347-
mAP of greater than 95%. Furthermore, the YOLOv5 358, 2020. DOI:10.22364/bjmc.2020.8.2.10
and YOLOv3 models were more sensitive to data [9] Chan H., Hadjiiski L., and Samala R., “Computer‐
augmentation than the YOLOv7 model. Using the aided Diagnosis in the Era of Deep Learning,”
proposed framework for brain tumor diagnosis has a Medical Physics, vol. 47, no. 5, pp. e218-e227,
moderate computational cost and a small space 2020. DOI: 10.1002/mp.13764
requirement; as a result, it is capable of running on most [10] Chen C., Liu M., Tuzel O., and Xiao J., “R-CNN
systems. for Small Object Detection,” in Proceedings of the
Asian Conference on Computer Vision, Taipei pp.
Acknowledgement 214-230, 2016.
[11] Cheng J., Huang W., Cao S., Yang R., Yang W.,
The authors would like to thank the Deanship of Yun Z., Wang Z., and Feng Q., “Enhanced
Scientific Research at Shaqra University for supporting Performance of Brain Tumor Classification Via
this research. Tumor Region Augmentation and Partition,”
PLoS One, vol. 10, no. 10, p. e0140381,
References 2015. 10.1371/journal.pone.0140381
[1] Abiwinanda N., Hanif M., Hesaputra S., [12] Dai J., Li Y., He K., and Sun J., “R-FCN: Object
Handayani A., and Mengko T., “Brain tumor Detection Via Region-Based Fully Convolutional
Classification Using Convolutional Neural Networks,” in Proceedings of the 30th Conference
Network,” in Proceedings of the World congress on Neural Information Processing Systems,
on Medical Physics and Biomedical Engineering, Barcelona, vol. 29, 2016.
Prague, pp. 183-189, 2019. DOI: 10.1007/978- [13] Gupta A., Ramanath R., Shi J., and Keerthi S.,
981-10-9035-6_33 “Adam vs. SGD: Closing the Generalization Gap
[2] Atik M. and Duran Z., “Deep learning-based 3d on Image Classification,” in Proceedings of the
Face Recognition Using Derived Features from 13th Annual Workshop on Optimization for
Point Cloud,” The Proceedings of the 3rd Machine Learning, 2021.
International Conference on Smart City [14] He K., Gkioxari G., Dollár P., and Girshick R.,
Applications, Karabukp, pp. 797-808, 2020. “Mask R-CNN,” in Proceedings of the IEEE
[3] Atik S. and Ipbuker C., “Integrating International Conference on Computer Vision,
Convolutional Neural Network and Venice, pp. 2961-2969, 2017.
Multiresolution Segmentation for Land Cover and https://doi.org/10.48550/arXiv.1703.06870
Land Use Mapping Using Satellite Imagery,” [15] Henderson P. and Ferrari V., “End-to-end
Training of Object Class Detectors for Mean
Impact of Data-Augmentation on Brain Tumor Detection Using Different YOLO ... 479

Average Precision,” in Proceedings of the Asian Mathematical Methods in Medicine, vol. 2022,
Conference on Computer Vision, Taipei, pp. 198- 2022. doi: 10.1155/2022/1359019
213, 2016. [26] Montalbo F., “A Computer-Aided Diagnosis of
https://doi.org/10.48550/arXiv.1607.03476 Brain Tumors Using a Fine-Tuned YOLO-Based
[16] Huang X., Yue X., Xu Z., and Chen Y., Model with Transfer Learning,” KSII
“Integrating General and Specific Priors into Deep Transactions on Internet and Information
Convolutional Neural Networks for Bladder Systems, vol. 14, no. 12, pp. 4816-4834, 2020.
Tumor Segmentation,” in Proceedings of the DOI: 10.3837/tiis.2020.12.011
International Joint Conference on Neural [27] Nepal U. and Eslamiat H., “Comparing YOLOv3,
Networks, Shenzhen, pp. 1-8, 2021. doi: YOLOv4 and YOLOv5 for Autonomous Landing
10.1109/IJCNN52387.2021.9533813 Spot Detection in Faulty UAVs,” Sensors, vol. 22,
[17] Kavitha R., Chitra L., and Kanaga L., “Brain no. 2, pp. 464, 2022.
Tumor Segmentation Using Genetic Algorithm https://doi.org/10.3390/s22020464
with SVM Classifier,” International Journal of [28] Nogales A., Garcia-Tejedor A., Monge D., Vara
Advanced Research in Electrical, Electronics and J., and Antón C., “A Survey of Deep Learning
Instrumentation Engineering, vol. 5, no. 3, pp. Models in Medical Therapeutic Areas,” Artificial
1468-1471, 2016. Intelligence in Medicine, vol. 112, pp. 102020,
DOI:10.15662/IJAREEIE.2016.0503043 2021. doi: 10.1016/j.artmed.2021.102020.
[18] Khambhata K. and Panchal S., “Multiclass [29] Oza P., Sharma P., Patel S., Kumar P., “Deep
Classification of Brain Tumor in MR Images,” Convolutional Neural Networks for Computer-
International Journal of Innovative Research in Aided Breast Cancer Diagnostic: A Survey,”
Computer and Communication Engineering, vol. Neural Computing and Applications, vol. 34, no.
4, no. 5, pp. 8982-8992, 2016. 6, pp. 1-22, 2022. DOI: 10.1007/s00521-021-
[19] Kuznetsova A., Maleva T., and Soloviev V., 06804-y
Cyber-Physical Systems: Modelling and [30] Pan Y., Huang W., Lin Z., Zhu W., Zhou J., Wong
Intelligent Control, Springer, 2021. J., Ding Z., “Brain Tumor Grading Based on
https://doi.org/10.1007/978-3-030-66077-2_28 Neural Networks and Convolutional Neural
[20] Lee S., Kwak S., and Cho M., “Universal Networks,” in Proceedings of the 37th Annual
Bounding Box Regression and Its Applications,” International Conference of the IEEE Engineering
in Proceedings of the Asian Conference on in Medicine and Biology Society, Milan, pp. 699-
Computer Vision, Perth, pp. 373-387, 2018. 702, 2015. DOI: 10.1109/EMBC.2015.7318458
https://doi.org/10.1007/978-3-030-20876-9_24 [31] Rahman M. and Wang Y., “Optimizing
[21] Litjens G., Kooi T., Bejnordi B., Setio A., Ciompi Intersection-Over-Union in Deep Neural
F., Ghafoorian M., Laak J., Ginneken B., and Networks for Image Segmentation,” International
Sánchez C., “A Survey on Deep Learning in Symposium on Visual Computing, pp. 234-244,
Medical Image Analysis,” Medical Image 2016.
Analysis, vol. 42, pp. 60-88, 2017. [32] Redmon J. and Farhadi A., “Yolov3: An
https://doi.org/10.1016/j.media.2017.07.005 Incremental Improvement,” 2018.
[22] Logeswari T. Karnan M., “An Improved [33] Redmon J., Divvala S., Girshick R., and Farhadi
Implementation of Brain Tumor Detection Using A., “You Only Look Once: Unified, Real-Time
Segmentation Based on Soft Computing,” Journal Object Detection,” in Proceedings of the IEEE
of Cancer Research and Experimental Oncology, Conference on Computer Vision and Pattern
vol. 2, no. 1, pp. 006-014, 2010. Recognition, Las Vegas, pp. 779-788, 2016. doi:
[23] Lundervold A. and Lundervold A., “An Overview 10.1109/CVPR.2016.91
of Deep Learning in Medical Imaging Focusing on [34] Ren S., He K., Girshick R., and SunJ., “Faster r-
MRI,” Zeitschrift Für Medizinische Physik, vol. CNN: Towards Real-Time Object Detection with
29, no. 2, pp. 102-127, 2019. Region Proposal Networks,” Advances in Neural
https://doi.org/10.1016/j.zemedi.2018.11.002 Information Processing Systems, vol. 28, 2015.
[24] Magnuska Z., Theek B., Darguzyte M., [35] Shelatkar T., Urvashi D., Shorfuzzaman M.,
Palmowski M., and Stickeler E., “Influence of the Alsufyani A., and Lakshmanna K., “Diagnosis of
Computer-Aided Decision Support System Brain Tumor Using Light Weight Deep Learning
Design on Ultrasound-Based Breast Cancer Model with Fine-Tuning Approach,”
Classification,” Cancers, vol. 14, no. 2, pp. 277, Computational and Mathematical Methods in
2022. DOI: 10.3390/cancers14020277 Medicine, vol. 2022, 2022.
[25] Mohiyuddin A., Basharat A., Ghani U., Peter V., https://doi.org/10.1155/2022/2858845
and Abbas S., “Breast Tumor Detection and [36] Singh L., Chetty G., and Sharma D., “A Novel
Classification in Mammogram Images Using Machine Learning Approach for Detecting the
Modified YOLOv5 network,” Computational and Brain Abnormalities from Mri Structural Images,”
480 The International Arab Journal of Information Technology, Vol. 21, No. 3, May 2024

in Proceedings of IAPR International Conference Deep-Learning Detection and Segmentation


On Pattern Recognition in Bioinformatics, Tokyo, Models with Images from a Lidar as a Camera
2012, pp. 94-105. Sensor,” arXiv Preprint, vol.
[37] Swati Z., Zhao Q., Kabir M., Ali F., Ali Z., Ahmed arXiv:2203.04064v1, pp. 1-6, 2022.
S., and Lu J., “Content-Based Brain Tumor https://arxiv.org/pdf/2203.04064
Retrieval for MR Images Using Transfer [47] Xu S., Guo Z., Liu Y., Fan J., and Liu X., “An
Learning,” IEEE Access, vol. 7, pp. 17809-17822, Improved Lightweight YOLOV5 Model Based on
2019. 10.1109/ACCESS.2019.2892455 Attention Mechanism for Face Mask Detection,”
[38] Tian L., Thalmann N., Thalmann D., Fang Z., and in Proceedings of the International Conference on
Zheng J., “Object grasping of humanoid robot Artificial Neural Networks, Bristol, pp. 531-543,
based on YOLO,” in Proceedings of the Computer 2022. https://doi.org/10.1007/978-3-031-15934-
Graphics International Conference, Calgary 3_44
2019, pp. 476-482: Springer. [48] Yan B., Fan P., Lei X., Liu Z., and Yang F., “A
[39] Vengaloor R. and Muralidhar R., “Deep Learning Real-Time Apple Targets Detection Method for
Based Feature Discriminability Boosted Picking Robot Based on Improved YOLOv5,”
Concurrent Metal Surface Defect Detection Remote Sensing, vol. 13, no. 9, pp. 1619, 2021.
System Using YOLOv-5s-FRN,” The https://doi.org/10.3390/rs13091619
International Arab Journal of Information [49] Zacharaki E ., Wang S., Chawla S., Yoo D., Wolf
Technology, vol. 21, no. 1, pp. 94-106, 2024. R., Melhem E., and Davatzikos C., “Classification
https://doi.org/10.34028/iajit/21/1/9 Of Brain Tumor Type and Grade Using MRI
[40] Walia I., Kumar D., Sharma K., Hemanth J., and Texture and Shape in A Machine Learning
Popescu D., “An Integrated Approach for Scheme,” Magn Reson Med, vol. 62, no. 6, pp.
Monitoring Social Distancing and Face Mask 1609-1618, 2009. doi: 10.1002/mrm.22147.
Detection Using Stacked ResNet-50 and [50] Zhang S., Zhang F., Ding Y., Li Y., “Swin-
YOLOv5,” Electronics, vol. 10, no. 23, pp. 2996, YOLOv5: Research and Application of Fire and
2021.https://doi.org/10.3390/electronics1023299 Smoke Detection Algorithm Based on YOLOv5,”
6 Computer Intelligence Neuroscience, vol. 2022,
[41] Wang C., Zhang Y., Zhou Y., Sun S., Zhang H., 2022. doi: 10.1155/2022/6081680
and Wang Y., “Automatic Detection of Indoor [51] Zhang Y., Shen Y., and Zhang J., “An Improved
Occupancy Based on Improved YOLOv5 model,” Tiny-YOLOv3 Pedestrian Detection Algorithm,”
Neural Computing and Applications, vol. 23, pp. Optik, vol. 183, pp. 17-23, 2019.
2575-2599, 2023. https://doi.org/10.1007/s00521- DOI:10.1016/j.ijleo.2019.02.038
022-07730-3
[42] Wang C., Bochkovskiy A., and Liao H., Abdelraouf Ishtaiwi is a highly
“YOLOv7: Trainable Bag-Of-Freebies Sets New accomplished academic with over 22
State-of-The-Art for Real-Time Object years of experience in teaching and
Detectors,” 2023 IEEE/CVF Conference on research in the field of Artificial
Computer in Proceedings of the Vision and Intelligence (AI). He earned his
Pattern Recognition, Vancouver, 2022. Master's degree in AI from Griffith
[43] Wang Z., Wu L., Li T., and Shi P., “A Smoke University, Brisbane in 2001,
Detection Model Based on Improved YOLOv5,” followed by a Ph.D. in the same field in 2007. Dr.
Mathematics, vol. 10, no. 7, pp. 1190, 2022. Ishtaiwi's academic career has been dedicated to
https://doi.org/10.3390/math10071190 advancing the field of AI through exceptional research
[44] Wenkel S., Alhazmi K., Liiv T., Alrshoud S., and and teaching skills. His expertise in the field has led to
Simon M., “Confidence Score: The Forgotten numerous top scientific contributions, including
Dimension of Object Detection Performance groundbreaking research on machine learning, local
Evaluation,” Sensors, vol. 21, no. 13, pp. 4350, search algorithms, and optimization methods.
2021. https://doi.org/10.3390/s21134350 Throughout his career, Dr. Ishtaiwi has published more
[45] White N., McDonald C., Farid N., Kuperman J., than 23 research papers in highly regarded academic
Kesari S., and Dale A., “Improved Conspicuity journals, demonstrating his significant impact on the
and Delineation of High-Grade Primary and field of AI. His research has been widely cited and has
Metastatic Brain Tumors Using “Restriction received recognition from the academic community for
Spectrum Imaging”: Quantitative Comparison its innovative approach to AI. In addition to his research
with High B-Value DWI and ADC,” American accomplishments, Dr. Ishtaiwi is an experienced teacher
Journal of Neuroradiology, vol. 34, no. 5, pp. 958- and mentor. He has taught a wide range of courses in
964, 2013. AI, including advanced topics in machine learning and
[46] Xianjia Y., Salimpour S., Queralta J., and optimization. His dedication to teaching has earned him
Westerlund T., “Analyzing General-Purpose accolades from his students and colleagues alike.
Impact of Data-Augmentation on Brain Tumor Detection Using Different YOLO ... 481

Ali Ali is an assistant professor at Amjad Aldweesh is a computer


Communications and Computer assistant professor interested in the
Engineering Department, Faculty of Blockchain and Smart contracts
Engineering, Al-Ahliyya Amman technology as well as cyber security.
University. He received a PhD in Amjad has a Bachelor degree in
computer and communications computer science. He has a MSc
engineering from the University of degree in advanced computer science
Huddersfield, UK in 2021. His primary research and security from the University of Manchester with
interests are in the analysis of communication system distinction. Amjad is the second in the UK and the first
reliability using complex modelling techniques, in the middle east to have a PhD in the Blockchain and
network security, machine learning optimization Smart contracts technology from Newcastle University.
techniques as well as approaches to WLAN
optimization. Mohammad Alauthman Received
PhD degree from Northumbria
Ahmad Al-Qerem graduated in University Newcastle, UK in 2016.
applied mathematics and MSc in He received a B.Sc. degree in
Computer Science at the Jordan Computer Science from Hashemite
University of Science and University, Jordan, in 2002, and
Technology and Jordan University in received M.Sc. degrees in Computer
1997 and 2002, respectively. After Science from Amman Arab University, Jordan, in 2004.
that, he was appointed as full-time Currently, he is Assistant Professor and senior lecturer
lecturer at the Zarqa University. He was a visiting at Department of Information Security, Petra
professor at Princess Sumaya University for University, Jordan. His main research areas cyber-
Technology (PSUT). He obtained a PhD from security, Cyber Forensics, advanced machine learning
Loughborough University, UK. His research interests and data science applications.
are in performance and analytical modeling, mobile
computing environments, protocol engineering, Omar Alzoubi is an Assistant
communication networks, transition to IPv6, machine Professor of Computer Engineering
learning and transaction processing. He has published at the University of Umm Al-Qura in
several papers in various areas of computer science. Saudi Arabia. With a focus on
Currently, he has a full academic post as a full professor Computer Engineering, Dr. Alzoubi
at computer science department at Zarqa University- brings a wealth of expertise to his
Jordan. role. He is committed to advancing
the field through both his research and teaching
Yazan Al Smadi is a seasoned software engineering endeavors. As a respected academic, Dr. Alzoubi is
expert, holding a Master's degree in Software dedicated to nurturing the next generation of computer
Engineering from Zarqa University and a Bachelor of engineers, guiding students to excel in their studies and
Science degree in Computer Science from Al Balqaa research pursuits. His contributions to the university
University. With a passion for technology and community are invaluable as he works towards
innovation, Yazan has honed his skills and expertise in furthering knowledge and innovation in the field of
the field of software development. Computer Engineering.
Throughout his academic journey, Yazan demonstrated
exceptional dedication and a keen understanding of
computer science principles. His academic
achievements served as a solid foundation for his
professional endeavors. Currently serving as a software
development consultant in the private sector, Yazan
leverages his extensive knowledge and experience to
provide invaluable insights and solutions to various
technological challenges. His role involves
collaborating with teams, analyzing complex problems,
and implementing innovative software solutions
tailored to meet the specific needs of clients
482 The International Arab Journal of Information Technology, Vol. 21, No. 3, May 2024

Shadi Nashwan was born in Musab Al-Zghoul, PhD, received


Amman, Jordan, in 1978. He his doctorate in Computer Science
received the B.Sc. degree in from Don State Technical University.
computer science from Al-Azhar Currently, he serves as an Assistant
University, Palestine, in 2001, the Professor at Isra University's
M.Sc. degree in computer science Information Technology College.
from University of Jordan, Jordan, in His research interests encompass
2003, and the Ph.D. degree in computer and network machine learning, data mining, and IoT, and he is
security from Anglia Ruskin University, U.K., in 2009. enthusiastic about collaborating with both national and
From 2009 to 2010, he was an Assistant Professor with international research teams. With a rich academic
the Department of Computer Science, Al-Zaytoonah background, Dr. Al-Zghoul spent three years at Zarqa
University, Jordan. At the end of 2010, he became an University in Jordan and eleven years at Umm Al-Qura
Assistant Professor with the Computer Science University in Saudi Arabia. He holds a patent in Russia
Department, Jouf University, Saudi Arabia. In 2018, he since 2008 for a “Software Benchmark for Studying
was promoted to the position of associate professor in Caching Algorithms.” Over the past five years, his
cybersecurity. In 2022, he was promoted to the position publications have centered on classification and NLP in
of full professor in cybersecurity. Currently, he has been Islamic Research.
appointed as a head of cybersecurity and Software
Engineering departments at Middle East University, Someah Alangari is an Assistant Professor, at College
Jordan, in 2023. He has published several articles in the of Science and Humanities Dawadmi at Shaqra
area of authentication protocol, recovery techniques, University, Saudi Arabia. Dr. Someah has got her Ph.D
analytic model, and mobility management. His research in Computer Science at University of Southampton in
interests include authentication protocol for mobile the UK. She received her master's degree in Software
networks, security of healthcare system, security Engineering at University of Southampton in the UK,
Agriculture system and security of wireless networks, and BSc in Computer Science at King Saud University
such as NFC, RFID, WSNs, and WMSNs. in Saudi Arabia. Her research interests include ML ,
software engineering, and information systems. Dr.
Awad Ramadan is a dedicated Someah has over 10 years of working experience in the
academician with a strong academic sector.
commitment to excellence in
education and administration. He
holds the position of Lecturer in the
Computer Science Department at the
College of Computing in Al-
Qunfudah, Umm Al-Qura University, Saudi Arabia, a
role he has served in since 2007. Additionally, he plays
a vital role as a member of the Academic Oversight
Committee, ensuring quality standards in education
since 2021. With a wealth of experience and a proactive
approach, Awad Mohamed Ramadan continues to make
significant contributions to the academic community at
Umm Al-Qura University.

You might also like