
Received 17 March 2023, accepted 25 May 2023, date of publication 30 May 2023, date of current version 8 June 2023.

Digital Object Identifier 10.1109/ACCESS.2023.3281529

Brain Tumor Classification Using Hybrid Single Image Super-Resolution Technique With ResNext101_32 × 8d and VGG19 Pre-Trained Models
SAEED MOHSEN1,2, ANAS M. ALI3,4, EL-SAYED M. EL-RABAIE4, AHMED ELKASEER5,6, STEFFEN G. SCHOLZ5,7, AND ASHRAF MOHAMED ALI HASSAN8


1 Department of Electronics and Communications Engineering, Al-Madinah Higher Institute for Engineering and Technology, Giza 12947, Egypt
2 Department of Artificial Intelligence Engineering, Faculty of Computer Science and Engineering, King Salman International University (KSIU), El Tor, South
Sinai 46511, Egypt
3 Robotics and Internet of Things, Prince Sultan University, Riyadh 12435, Saudi Arabia
4 Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
5 Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, 76344 Karlsruhe, Germany
6 Department of Production Engineering and Mechanical Design, Faculty of Engineering, Port Said University, Port Fuad 42526, Egypt
7 College of Engineering, Future Manufacturing Research Institute, Swansea University, SA1 8EN Swansea, U.K.
8 Department of Electronics and Communications Engineering, October High Institute for Engineering and Technology, 6th of October City 12596, Egypt

Corresponding author: Ahmed Elkaseer ([email protected])


This work was supported in part by the Karlsruhe Nano Micro Facility (KNMFi, www.knmf.kit.edu), a Helmholtz Research Infrastructure
at Karlsruhe Institute of Technology (KIT, www.kit.edu) and in part by the Helmholtz Research Programme MSE (Materials Systems
Engineering) at KIT. The authors would like to acknowledge the support provided by the KIT-Publication Fund of the Karlsruhe Institute
of Technology.

ABSTRACT High-quality images acquired from medical devices can be utilized to aid diagnosis and
detection of various diseases. However, such images can be very expensive to acquire and difficult to
store, and the process of diagnosis can consume significant time. Automatic diagnosis based on artificial
intelligence (AI) techniques can contribute significantly to overcoming the cost and time issues. Pre-trained
deep learning models can present an effective solution to medical image classification. In this paper,
we propose two such models, ResNext101_32 × 8d and VGG19, to classify two types of brain tumor: pituitary
and glioma. The proposed models are applied to a dataset consisting of 1,800 MRI images comprising
two classes of diagnoses: glioma tumor and pituitary tumor. A single-image super-resolution (SISR)
technique is applied to the MRI images to enhance their basic features, enabling the proposed
models to capture particular aspects of the MRI images and assisting the training process of the models. These
models are implemented using PyTorch and TensorFlow frameworks with hyper-parameter tuning, and data
augmentation. Experimentally, the receiver operating characteristic curve (ROCC), the error matrix, Precision,
and Recall are used to analyze the performance of the proposed models. Results obtained demonstrate that
VGG19 and ResNext101_32 × 8d achieved testing accuracies of 99.98% and 100%, and loss rates of
0.0120 and 0.108, respectively. The F1-score, Precision, Recall, and the area under the ROC for VGG19
were 99.89%, 99.90%, 99.89%, and 100%, respectively, while for the ResNext101_32 × 8d they were all
100%. The proposed models, when applied to MRI images, provide a quick and accurate approach to
distinguishing between patients with pituitary and glioma tumors, and could aid doctors and radiologists in
the screening of patients with brain tumors.

INDEX TERMS Single image super-resolution, visual geometry group (VGG)-19, ResNext101_32 × 8d,
brain tumor classification, magnetic resonance imaging (MRI), medical image analysis.
The associate editor coordinating the review of this manuscript and approving it for publication was Jiankang Zhang.

I. INTRODUCTION
An important issue in medicine is classification of brain tumors [1], [2] to decide on treatment type. This is a
challenging issue because tumor cells are heterogeneous by nature. Doctors, of course, are important in the diagnosis of this disease, but would benefit from a helpful tool to aid rapid diagnosis [3]. Today, diagnostic systems based on computer-aided technology provide an effective means for diagnosing brain tumors via magnetic resonance imaging (MRI) [4]. MRI is the most common technique used for classifying brain tumors, because of its high image quality [5]. Artificial intelligence (AI) is also being considered as a key enabler to assist in resolving issues around brain tumor classification. In particular, the development of high-performance deep learning models (DLMs) with high levels of accuracy would be a significant step towards a fast, high-precision method for detection and diagnosis of brain tumors in patients.
Machine learning and deep learning models are being utilized to classify and diagnose brain tumors [6], but the low accuracy of existing models is a challenge that needs to be addressed [7]. Deep convolutional neural networks (CNNs) are playing an effective role in detecting the presence of brain tumors and diagnosing tumor type via medical imaging using MRI, which has been used to successfully detect patients with brain tumors. However, a disadvantage of these networks is the long time required to train the models [8]. Low accuracy can be significantly improved by using pre-trained DLMs, such as VGG [9], DenseNet [10], ResNet [11], GoogLeNet [12], AlexNet [13], MobileNet [14], and EfficientNet [15], which can also reduce training time. Speech recognition [16], language modeling [17], human activity recognition [18], and image processing [19] are already making good use of such models. Pre-trained models are more convenient since they require less training time. However, achieving a sufficiently high accuracy in classifying brain tumors remains a significant challenge.
This paper reports on implementing ResNext101_32 × 8d and VGG19 models to enhance the accuracy of detection and classification of brain tumors using hyper-parameter optimization. These two models are based on a transfer learning process, and have the advantage of architectural simplicity, which can reduce computational cost (training time). The models were tested and trained on a dataset available in the Kaggle repository. In addition, a data augmentation technique was employed on the dataset used in order to overcome the deficiency of available medical images.
The primary contributions of this paper are:
• Implementation of robust ResNext101_32 × 8d and VGG19 models to distinguish between glioma and pituitary brain images/cases for fast and more accurate diagnoses;
• Using ResNext101_32 × 8d and VGG19 with hyper-parameter optimization to successfully achieve substantially greater test accuracy on a graphics processing unit (GPU);
• Applying a single image super-resolution (SISR) technique for the two models based on a generative adversarial network (GAN) algorithm to improve the resolution of the MRI images;
• Evaluating the performance of the models with alternative evaluation measures based on an assessment of the quality of images in a dataset of 1,800 MRI images of the brain;
• Improving VGG19 accuracy by using k-fold cross-validation;
• Comparing the results achieved by the proposed models for the MRI image classification with different state-of-the-art models.
The remainder of this paper is as follows: Section II presents a brief review of similar work. Section III introduces the theoretical background to transfer learning. Section IV presents the proposed DLMs, dataset description, and performance metrics utilized. Section V presents the experimental results. The results are discussed in Section VI. Section VII offers conclusions and suggestions for further work.

II. RELATED WORK
The literature contains reports of a number of DL models for classifying brain tumors using MRI images of the brain [20], [21], [22], [23], but a satisfactory level of accuracy has not always been achieved. In [20], Afshar et al. introduced a capsule neural network method to classify brain tumor diseases. This model was applied on a 3,064-image dataset and had an 86.56% accuracy. In [21], the same authors proposed a CNN model enhanced by a genetic algorithm (GA) to recognize brain tumors and applied the model to a dataset of 600 different images, with the best accuracy achieved being 94.2%. Saxena et al. [22] applied three models, ResNet-50, Inception-V3, and VGG-16, to a set of just over 250 images, divided into 183 for training, 50 for validation, and 20 for testing. The respective accuracies for Inception-V3, VGG-16, and ResNet-50 were 55%, 90%, and 95%. Zhou et al. [23] presented a combined DenseNet-Long Short-Term Memory model for identifying brain tumors and tested the model on a set of data comprising 708 meningioma, 930 pituitary, and 1,426 glioma patients. The model achieved a 92.13% accuracy.
Researchers [24], [25], [26], [27], and [28] presented machine and deep learning architectures to detect brain tumors in patients. In [24], Cheng et al. introduced an SVM model and attained a 91.28% accuracy with a dataset of 1,426 glioma and 930 pituitary images. A model using a CNN architecture was applied to a dataset of 3,064 MRI images [25] and attained 84.19% accuracy. Kaplan et al. [26] applied a k-nearest neighbor (KNN) model with an nLBP feature extraction approach, and achieved 95.56% accuracy. Pashaei et al. [27] implemented a CNN model with an extreme learning machine (ELM) method and trained it using 70% of a dataset that contained 3,064 brain tumor cases. The model was then used to assess the other 30% and attained an accuracy of 93.68%. In [28], Zacharaki et al. proposed SVM-KNN models for brain tumor classification, which
were applied to a dataset of 102 MRI images, achieving an overall accuracy of 85%.
Kurup et al. [29] utilized the architecture of a CapsNet model based on three classes to predict the presence of brain tumors. It applied capsule neural networks to a set of data comprising 3,170 images, with an accuracy of 92.6%. In [30], Das et al. applied a CNN for brain tumor detection with a dataset containing three classes of 1,426 glioma, 708 meningioma, and 930 pituitary cases. The accuracy and precision achieved were 94.39% and 93.33%, respectively. Ullah et al. [31] proposed to classify brain tumors using MRI images via an artificial neural network (ANN), and attained a testing accuracy of 95.80%. In [32], Huang et al. used a CNN to classify brain tumors and attained an overall accuracy of 95.49%. In [33], Kalaiselvi et al. diagnosed brain tumors using a CNN with 96.00% accuracy. Li et al. [34] used a hidden Markov model (HMM) for real-time classification of brain tumors and achieved an accuracy of 96.88%. Noreen et al. [35] introduced two models, Xception and Inception-V3, and used 3,064 images to achieve accuracies of 93.79% and 94.34%, respectively.
Rehman et al. [36] classified microscopic brain tumors using a 3D CNN. This model was applied to the BraTS datasets, using the k-fold cross-validation method, achieving a 96.67% accuracy. In [37], Rehman et al. implemented GoogLeNet, VGGNet, and AlexNet models, utilized 3,064 images, and classified three types of tumors. Sharif et al. [38] presented an InceptionV3 model for brain tumor recognition, with an achieved 93.7% accuracy. In [39], Muzammil et al. proposed a multimodal image fusion algorithm for diagnoses using MRI images. It was applied with a convolutional sparse coding method, which used the entropy theorem to assess the performance of the algorithm. Maqsood et al. [40] investigated brain tumor detection using U-NET CNN and fuzzy logic algorithms, but did not quote a success rate. This literature review shows that, despite significant research into detecting the presence of glioma or pituitary tumors in the brain via MRI images, more accurate methods are still needed.

III. BACKGROUND FOR TRANSFER LEARNING
Transfer learning is a machine learning technique whereby a model trained for one purpose is subsequently used for another. The weights from the pre-trained model are applied as a starting point for training using a different dataset for a different issue. A significant advantage of transfer learning is a faster training/learning process. Another advantage is achieving higher performance with less data due to the weights obtained from the previous training being used. A higher performance can be achieved by the addition of a fully connected layer to the existing model.
Transfer learning is usually associated with relatively small datasets, such as biomedical images. If a DLM is trained ab initio, the training process requires a large amount of data and time. Thus, it is convenient to utilize a pre-trained model and fine-tune its performance to accelerate the training process. There are many successful pre-trained CNN models used in the classification of medical images, including ResNext101_32 × 8d and VGG19 [41], which were pre-trained using the ImageNet dataset, which consists of 1.3 × 10^6 RGB images of 224 × 224 pixels with 1000 classes. It is proposed to use VGG19 and ResNext101_32 × 8d as pre-trained DLMs for transfer learning to classify brain tumor types.

IV. METHODOLOGY
ResNext101_32 × 8d and VGG19 models are employed here to classify two categories of brain tumors and were applied to a dataset consisting of 1,800 MRI brain images. These models were chosen due to their robust performance and their suitability for processing spatial data [42].

A. THE PROPOSED SISR TECHNIQUE WITH RESNEXT101_32 × 8d
In this work, before classification by the ResNext101_32 × 8d model, a SISR technique was applied to the chosen MRI brain tumor images. Fig. 1 illustrates the architecture of the SISR technique with the ResNext101_32 × 8d model. The SISR is based on a GAN algorithm to produce high-resolution images. The SISR consists of two phases: the first is the generator, and the second is the discriminator. The generator comprises an input layer with a shape of 64 × 64 × 3 and a kernel size of 3, followed by an up-sampling block containing a convolutional layer and a Parametric Rectified Linear Unit (PReLU) layer. This is followed by a residual block, repeated sixteen times, which comprises a convolutional layer, a Batch-normalization layer, a PReLU layer, another convolutional layer, another Batch-normalization layer, and an Add layer. These are followed by three layers: convolutional, Batch-normalization, and Add, followed by two blocks of Convolutional, Lambda, and PReLU layers. Finally, the output layer is activated by a sigmoid activation function. Every convolutional layer (conv-layer) has 3 × 3 kernels and 64 filters. The discriminator model comprises an input layer, a convolutional layer, and a ReLU layer, followed by seven repeated blocks containing a convolutional layer, a Batch-normalization layer, and a ReLU layer; this is followed by a block that includes a flatten layer, a dense layer, and a ReLU layer, and this block is repeated three times. Next, there is a dense layer and an output layer with a sigmoid activation function. The dimensions of the high-resolution (HR) images are 256 × 256 × 3, and the low-resolution (LR) images are generated from the high-resolution images. The LR images have dimensions of 64 × 64 × 3, so the HR dimensions are divided by a factor of 4 in order to obtain the dimensions of the LR images. For the SISR technique, only 1,700 images were used from the three thousand MRI images available, due to the generator model used in the training process being slow. The images were normalized to the range from −1 to 1. 1,550,659 trainable parameters were available for the generator model-based SISR technique.
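To make the generator structure described above concrete, the following is a minimal PyTorch sketch of one residual block of the kind listed (conv → batch-norm → PReLU → conv → batch-norm → add) and of deriving the 64 × 64 × 3 LR inputs from the 256 × 256 × 3 HR images by a factor of 4. It is an illustrative sketch under stated assumptions, not the authors' exact implementation; in particular, the bicubic downscaling mode is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """One generator residual block: conv -> BN -> PReLU -> conv -> BN -> add."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.prelu = nn.PReLU()
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = self.prelu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return x + out                                   # the "Add" layer

def make_lr(hr: torch.Tensor, factor: int = 4) -> torch.Tensor:
    """Generate a 64x64 low-resolution input from a 256x256 high-resolution image."""
    return F.interpolate(hr, scale_factor=1.0 / factor,
                         mode="bicubic", align_corners=False)  # downscaling mode is an assumption

hr = torch.rand(1, 3, 256, 256) * 2 - 1                  # images normalized to [-1, 1]
lr = make_lr(hr)                                         # shape: (1, 3, 64, 64)
blocks = nn.Sequential(*[ResidualBlock(64) for _ in range(16)])  # sixteen repetitions
```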


FIGURE 1. The workflow of the proposed SISR technique with ResNext101_32 × 8d model.

The ResNext101_32 × 8d model consists of 344 layers, including 104 batch normalization layers, 104 conv-layers, 100 ReLU layers, 33 bottleneck layers, a single max-pooling layer (MPL), a single adaptive average layer, and one linear layer. The input MRI images have equal width and height of 224 pixels. A binary cross-entropy loss function was used with the ResNext101_32 × 8d to estimate the difference between predicted and true values, with the loss function calculated using Eq. (1), with y the true output label, ŷ the predicted label, and N the number of classes. An Adam algorithm was used as an optimizer. A batch size of 16 was used to train the ResNext101_32 × 8d model, with 10 training epochs and a 0.0001 learning rate. The overall trainable parameters for the ResNext101_32 × 8d model were 86,746,434.

loss = -\left[ y\log\hat{y} + (1 - y)\log(1 - \hat{y}) \right]   (1)

B. THE PROPOSED SISR TECHNIQUE WITH VISUAL GEOMETRY GROUP (VGG)-19
In this work, a SISR technique was also used on the MRI brain tumor images before classification by VGG19. Fig. 2 displays the architecture of the proposed VGG19 model, which comprises 19 layers: three fully connected layers and sixteen 2-D conv-layers, each of which is followed by a 2-D MPL. Training VGG19 takes less time than other pre-trained models while also having high classification accuracy.

FIGURE 2. The proposed VGG19 model architecture.

As previously, the input MRI images have equal width and height of 224 pixels. First, a 2-D convolutional layer was applied separately to each input image, with a ReLU activation function to extract spatial features. This layer has 64 filters and a kernel with a 3 × 3 matrix shape, followed by another convolutional layer with 64 filters with a ReLU function. To make the convolution output less complex, an MPL with a 2 × 2 matrix carried out a downsampling procedure.
Third, there were two conv-layers having 128 filters, a kernel with a 3 × 3 matrix, and utilizing a ReLU function. These added layers enable the VGG19 to discern higher-level features that might have been missed in the previous conv-layers. Fourth, an MPL with a 2 × 2 pool size is followed by four 2-D conv-layers having a configuration of 256 filters and a kernel with a 3 × 3 matrix, followed by an MPL and then four 2-D conv-layers having a configuration of 512 filters
with an MPL. Fifth, four more 2-D conv-layers having a configuration of 1024 filters, followed by an MPL. Sixth, two fully connected layers are configured with 4096 neurons and a ReLU activation function, followed by a fully connected layer with 1000 neurons. Finally, the output is reduced to just two classes by the application of a softmax activation function.
The difference between predicted and true values for the VGG19 was obtained using a binary cross-entropy loss function, with the loss function calculated using Eq. (1), with N the number of classes, y the true output label, and ŷ the predicted label. Again, an Adam algorithm was used as an optimizer [43]. It was found that a batch size of 30 with 50 training epochs was best to train the VGG19 model for classification of brain tumors. The total number of trainable parameters for this model was 2,325,568. Table 1 presents a summary of the layers for the VGG19 model.

TABLE 1. The summary of the layers of the VGG19 model.
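As a concrete illustration of this transfer-learning setup, the sketch below fine-tunes a pre-trained VGG19 in PyTorch for the two tumor classes with binary cross-entropy and the Adam optimizer. It is a minimal sketch under assumptions, not the authors' exact configuration: the frozen feature extractor, the single-logit head, and the learning rate are illustrative choices, and they do not reproduce the trainable-parameter count reported above.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load VGG19 pre-trained on ImageNet (transfer-learning starting point).
model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor (assumption made for illustration).
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final fully connected layer: one logit for glioma vs. pituitary.
model.classifier[6] = nn.Linear(4096, 1)

criterion = nn.BCEWithLogitsLoss()                       # binary cross-entropy, as in Eq. (1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_one_epoch(loader, device="cuda"):
    """One pass over the training loader (batch size 30 and 50 epochs in the paper)."""
    model.to(device).train()
    for images, labels in loader:                        # images: (B, 3, 224, 224)
        images = images.to(device)
        labels = labels.float().unsqueeze(1).to(device)  # 0 = glioma, 1 = pituitary
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```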
Fig. 3 shows the workflow for the proposed VGG19 model.
There are six steps: the first was uploading the MRI image dataset, which was separated into images for testing
and training. The second step was pre-processing the MRI
and training. The second step was pre-processing the MRI
images i.e., image normalization. The third step was to define
the number of training epochs. The fourth was the training
of the model using the designated MRI images via a fitting
function. The fifth step was to test the prediction capacity of
the VGG19 model using the MRI test images. The final step
was to evaluate the performance of the model using different
metrics on the MRI test images.
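A minimal sketch of this six-step workflow is given below, assuming the images are stored in per-class folders (the path is hypothetical) and using torchvision utilities; the authors' actual loading code is not given in the paper, so this is only an illustration of the pipeline.

```python
import torch
from torchvision import datasets, transforms
from torch.utils.data import DataLoader, random_split

# Steps 1-2: load the MRI images and pre-process them (resize + rescale to [0, 1]).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),            # rescales pixel values from [0, 255] to [0, 1]
])
dataset = datasets.ImageFolder("brain_mri/", transform=preprocess)  # hypothetical folder layout

# Split into training and test images (75% / 25%, as used for VGG19).
n_train = int(0.75 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
train_loader = DataLoader(train_set, batch_size=30, shuffle=True)
test_loader = DataLoader(test_set, batch_size=30)

# Steps 3-6: define the number of epochs, fit the model, then predict and evaluate
# on the held-out test images (training loop as sketched earlier).
EPOCHS = 50
```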

C. DATASET DESCRIPTION
Fig. 4 shows four of the MRI brain images from the
dataset supplied by the Kaggle repository and used for
classification [44]. There were 1,800 MRI brain images of
two classes, 900 glioma tumors and 900 pituitary tumors.
FIGURE 3. The workflow of the VGG19.

FIGURE 4. Four samples of brain MRI images from the dataset used: (a) and (c) Glioma Tumors, (b) and (d) Pituitary Tumors.

Each image was resized into 224 × 224 pixels, and then normalized by rescaling the pixels from [0, 255] to [0, 1]. To minimize over-fitting, for VGG19, three data augmentation techniques were used to increase the original
dataset of glioma and pituitary images: rotation, width shift, and height shift. Each image was randomly rotated by 10°, with shifts in the width by up to 0.1 and shifts in the height by up to 0.1. Thus, the number of images in the dataset was increased by a factor of three.
Three data augmentation techniques were also used for the ResNext101_32 × 8d model: rotation, horizontal flip, and vertical flip. Each image was randomly rotated by 45°, with horizontal or vertical flips applied with a probability of 0.5.
The datasets were divided: the training sets were 75% for the VGG19 model and 85% for the ResNext101_32 × 8d model. Thus, the corresponding test sets used to assess the two models were, respectively, 25% and 15% of the datasets.

D. EVALUATION METRICS
In this work, several well-known evaluation metrics were used to analyze model performance [45], [46]: Accuracy, Precision, Recall, F1-score, and the area under the receiver operating characteristic curve (ROCC). The ROCC is a means of comparing the accuracy of different classification models, to demonstrate the ability of a test to correctly identify those images with a tumor. The ROCC is a graph of the True Positive Rate (TPR, the images that were correctly diagnosed as having a tumor as a proportion of all images that did show a tumor) against the False Positive Rate (FPR, the images that were incorrectly diagnosed as having a tumor as a proportion of all images that did not show a tumor). The ranges for both TPR and FPR are between 0.0 and 1.0.
If TP = the number of diseased samples correctly identified, TN = the number of healthy samples correctly identified, FN = the number of samples that were diseased but falsely diagnosed as healthy, and FP = the number of samples that were healthy and falsely diagnosed as diseased, then the total number of images in the dataset is (TP + TN + FP + FN), and the total number correctly identified is (TP + TN).
Accuracy of the model is the ratio of images accurately identified to the total number of images, see Eq. (2). Precision is the ratio of the number of images correctly diagnosed in a particular class, e.g., TP, to the total number of images diagnosed as in that class (TP + FP), see Eq. (3). Recall, or sensitivity, is the ratio of the number of images correctly diagnosed (TP) to the total number of correctly identified MRI images of both classes (TP + FN), see Eq. (4). It is the probability of a positive test if the patient has a glioma tumor. F1-score, or balanced F1-measure, is the harmonic mean of the Recall and Precision weighted by a factor of 2, see Eq. (5). The F1-score includes both FN and FP, so it can sometimes be a more useful metric than Accuracy. AUC is the area under the ROCC, see Eq. (6). Note: 0 ≤ AUC ≤ 1, with higher values of AUC implying a model can successfully differentiate between different classes of MRI images. It follows that a model with a larger area under the ROCC is more accurate than a model with a smaller area.

Accuracy = \frac{TP + TN}{TP + FP + TN + FN}   (2)

Precision = \frac{TP}{TP + FP}   (3)


Recall = \frac{TP}{TP + FN}   (4)

F1\text{-}Measure = 2 \times \frac{Precision \times Recall}{Precision + Recall}   (5)

AUC = \int_{0}^{1} TPR \, d(FPR)   (6)

In this work, the metrics used to evaluate the performance of the SISR technique based on the GAN algorithm are [47]: MSE (mean squared error), MS-SSIM (multiscale structural similarity index measure), PSNR (peak signal-to-noise ratio), and SSIM (structural similarity index measure) [48], [49], [50], [51], [52]. These metrics are calculated using Eqs. (7), (8), (9), and (10).

MSE = \frac{1}{n}\sum_{k=1}^{n}\big(i_r(k) - i_y(k)\big)^2   (7)

PSNR = 10\log_{10}\frac{MAX_i^2}{MSE}   (8)

SSIM = \frac{(2\mu_{i_r}\mu_{i_y} + c_1)(2\sigma_{i_r i_y} + c_2)}{(\mu_{i_r}^2 + \mu_{i_y}^2 + c_1)(\sigma_{i_r}^2 + \sigma_{i_y}^2 + c_2)}   (9)

MS\text{-}SSIM = \frac{1}{nm}\sum_{p=0}^{n-1}\sum_{j=0}^{m-1} SSIM   (10)

V. EXPERIMENTAL RESULTS
Python in a Google Colab environment, with a P100 GPU and 25 GB of RAM, was used to implement the proposed models.

FIGURE 5. Training and validation accuracy curves for VGG19.

Fig. 5 presents the training/learning and validation accuracies obtained for VGG19. The blue line symbolizes the training accuracy, which increases with increase in the number of epochs and approaches 100% after 50 epochs. The brown curve shows the validation accuracy, commencing at 97.56% and rising to 99.89% after 50 epochs. The training stopped at 50 epochs because the learning curve started to overfit. The number of training epochs was tuned for the highest training/validation accuracy.

FIGURE 6. Training and validation loss curves for VGG19.

Fig. 6 shows the training/validation loss curves for the VGG19 model, where a score of 0.0 would indicate perfect learning with no mistakes. Both losses continuously decreased as the number of epochs increased, with the training loss reaching 0.0030 after 50 epochs, and the validation loss commencing at 0.110 and declining to 0.0120.

FIGURE 7. The error matrix for VGG19.

Figs. 7 and 8 present the error and normalized error matrices obtained from use of the VGG19. This matrix is used to evaluate model performance when classifying two classes, here using the MRI test dataset and comparing the predicted/true class outputs. The dark purple blocks on the matrix in Fig. 7 represent classification accuracy, while the values outside the blocks represent error values. Here, the error matrix shows, respectively, 129 and 141 true positives for the two classes of tumor, glioma and pituitary. The normalized error matrix for VGG19 shows a classification accuracy of 1.0 (100%) for the glioma and pituitary classes, with zero classification error in both cases. The VGG19 model performance contained no errors. The total number of glioma and pituitary tumor images taken for the testing error matrix is 270.

FIGURE 8. The normalized error matrix for VGG19.

FIGURE 9. Histograms of Glioma and Pituitary images.
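Using the counts reported above (129 and 141 correctly classified test images out of 270, with no misclassifications), the metrics defined in Eqs. (2)-(5) can be computed directly. The short sketch below treats glioma as the positive class and is only a worked illustration of the formulas, not the authors' evaluation code.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int):
    accuracy = (tp + tn) / (tp + fp + tn + fn)                        # Eq. (2)
    precision = tp / (tp + fp) if (tp + fp) else 0.0                  # Eq. (3)
    recall = tp / (tp + fn) if (tp + fn) else 0.0                     # Eq. (4)
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)                           # Eq. (5)
    return accuracy, precision, recall, f1

# VGG19 test error matrix: 129 glioma and 141 pituitary images, all correct.
# With glioma as the positive class: TP = 129, TN = 141, FP = FN = 0.
print(classification_metrics(tp=129, tn=141, fp=0, fn=0))
# -> (1.0, 1.0, 1.0, 1.0), i.e. 100% for every metric
```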

TABLE 2. Classification of precision, recall, and F1-score for the VGG19 model.

FIGURE 10. Precision-recall curves for VGG19.

Table 2 presents the F1-score, Precision, and Recall for VGG19, by which to assess its performance for the dataset utilized. The values for F1-score, Precision, and Recall for the glioma and pituitary classes were, respectively, 99.92% and 99.86%, 99.07% and 99.74%, and 99.78% and 99.00%. The macro-average is determined by computing an evaluation metric independently for each class and then taking the mean. For the F1-score, Precision, and Recall, the respective macro-averages were 99.89%, 99.90%, and 99.89%. We note, see Table 2, that the corresponding weighted averages had the same values.
Fig. 9 shows histograms for the glioma and pituitary images for VGG19. Precision-recall curves and ROCCs are presented in Figs. 10 and 11. Precision-recall curves present the precision rate as a function of the recall rate. Fig. 10 shows the precision-recall curves for VGG19 for both the glioma and pituitary classes. The values of the areas under the precision-recall curves for both classes are 1.00 or 100%, giving corresponding macro-average values for both precision and recall of 1.00. Fig. 11 shows the ROCC for VGG19, and it can be seen that the values of the areas for both classes are 1.00 or 100%, meaning the macro-average ROCC area is also 1.00. These results imply that VGG19 doesn't cause errors.

FIGURE 11. ROC curves for VGG19.
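The ROCC areas discussed above can be reproduced from predicted scores by sweeping a decision threshold and integrating TPR over FPR as in Eq. (6). The following NumPy sketch is a generic illustration (a library routine such as scikit-learn's roc_curve could equally be used) and assumes both classes are present in the labels; it is not the authors' plotting code.

```python
import numpy as np

def roc_auc(scores: np.ndarray, labels: np.ndarray):
    """Return (FPR, TPR) points over all thresholds and AUC = integral of TPR d(FPR)."""
    order = np.argsort(-scores)                   # sort predictions by decreasing score
    labels = labels[order].astype(bool)
    tpr = np.concatenate(([0.0], np.cumsum(labels) / labels.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(~labels) / (~labels).sum()))
    auc = float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))  # trapezoidal rule, Eq. (6)
    return fpr, tpr, auc

# Toy example: perfectly separated classes give AUC = 1.0, as reported for VGG19.
scores = np.array([0.95, 0.90, 0.80, 0.20, 0.10, 0.05])
labels = np.array([1, 1, 1, 0, 0, 0])
print(roc_auc(scores, labels)[2])                 # -> 1.0
```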


TABLE 3. Performance of the VGG19 model for different k-fold cross-validations.

Table 3 presents performance data for the proposed VGG19 model using k-fold cross-validation, where k represents the number of equal partitions into which the data is divided. Here k = 5, with one partition for validation and four for training. The model was trained five times using the different partitions, each time with an epoch number of 50. The average loss rate and accuracy rate for the model were 0.0113 and 99.35%, respectively. Therefore, VGG19 achieved a high performance when using 5-fold cross-validation, avoiding bias in the results by using a suitable allocation of test and training datasets.

FIGURE 12. Training and validation accuracy curves for ResNext101_32 × 8d.

Training and validation accuracy for the ResNext101_32 × 8d model are shown in Fig. 12. The training accuracy increased with increase in the number of epochs and achieved 98.88% after 10 epochs. The validation accuracy commenced at 93.75% and increased to 99.60% after 10 epochs.

FIGURE 13. Training and validation loss curves for ResNext101_32 × 8d.

Fig. 13 shows the loss rate curves for ResNext101_32 × 8d using the validation and training data sets. The numerical value of the loss rate diminished as the number of epochs grew; after 10 epochs the value for the training data set had reached 0.0289, and 0.0121 for the validation data set.

FIGURE 14. The error matrix for ResNext101_32 × 8d.

Fig. 14 shows the error matrix obtained using ResNext101_32 × 8d. The dark purple blocks show the classification accuracy. Here, the error matrix shows, respectively, 128 and 142 true positives for the two classes of tumor: glioma and pituitary. The normalized error matrix for ResNext101_32 × 8d is illustrated in Fig. 15. It has a classification accuracy of 1.0 (100%) for the glioma and pituitary classes, with zero classification errors.
Table 4 presents the F1-score, Precision, and Recall metrics for ResNext101_32 × 8d, by which to assess its relative performance for the dataset used. For both the glioma and pituitary classes, the F1-score, Precision, and Recall were all 100%, as were the macro-averages and weighted averages. Fig. 16 presents the precision-recall curves for ResNext101_32 × 8d for both glioma and pituitary. The values of the areas under the precision-recall curves for both classes are 1.00 or 100%. Therefore, the macro-average precision-recall curve area is 1.00. Fig. 17 shows the ROCCs for ResNext101_32 × 8d, and, again, the values of the areas under
the curves for both classes are 1.00 or 100%, indicating that ResNext101_32 × 8d doesn't generate errors.

FIGURE 15. The normalized error matrix for ResNext101_32 × 8d.

TABLE 4. Classification of precision, recall, and F1-score for the proposed ResNext101_32 × 8d model.

FIGURE 16. Precision-recall curves for ResNext101_32 × 8d.

FIGURE 17. ROC curves for ResNext101_32 × 8d.

Figs. 18, 19, and 20 show the results for the SISR technique. Fig. 18(a), Fig. 19(a), and Fig. 20(a) illustrate, respectively, a high-resolution, a low-resolution, and a super-resolution image. Figs. 18(b), 19(b), and 20(b) show the histograms for these images. It is clear that there are differences in the resolution of the images.

VI. DISCUSSION
The results presented above show that the ResNext101_32 × 8d and VGG19 models have a very high accuracy with a low loss rate when trained and tested. Precision-recall curves, error matrices, and ROCCs demonstrated that the proposed models can accurately classify brain tumors.
The performance of both models in terms of their training and validation accuracy curves is shown in Figs. 5 and 12. Figs. 6 and 13 show that the model loss rates reached 0.0120 and 0.0108 for VGG19 and ResNext101_32 × 8d, respectively. Figs. 7, 8, 14, and 15 present the error rate distribution for the two classes and show that the models perform well. Figs. 18, 19, and 20 show the image quality obtained using the SISR process.

TABLE 5. Comparison of the accuracy of proposed and previous models.

Table 5 is a comparison of the accuracy achieved by the VGG19 and ResNext101_32 × 8d models with previously published results. We see that the testing accuracies of the proposed models are noticeably higher than those achieved by the models listed in references [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], [33], [34], and [35].


FIGURE 18. Results of high-resolution images: (a) High-Resolution MRI image, (b) Histogram of high-resolution image.

FIGURE 19. Results of low-resolution images: (a) Low-Resolution MRI image, (b) Histogram of low-resolution image.
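The resolution differences visible in Figs. 18, 19, and 20 can be quantified with the MSE and PSNR measures defined in Eqs. (7) and (8); a windowed SSIM/MS-SSIM computation would normally come from a library implementation and is omitted here. The sketch below is only an illustration of the two formulas, with hypothetical image variables in the usage comment.

```python
import numpy as np

def mse(reference: np.ndarray, test: np.ndarray) -> float:
    """Eq. (7): mean squared error between a reference image and a test image."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, test: np.ndarray, max_value: float = 255.0) -> float:
    """Eq. (8): peak signal-to-noise ratio in dB (higher means closer to the reference)."""
    error = mse(reference, test)
    return float("inf") if error == 0 else 10.0 * np.log10(max_value ** 2 / error)

# Hypothetical usage with an original HR image and its super-resolved reconstruction:
# print(psnr(hr_image, sr_image))   # e.g. around 29 dB, as reported for this SISR model
```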

The higher performance achieved is due to the batch and kernel sizes, the fine-tuning of the models' hyper-parameters, the loss and activation functions, optimizer type, pool size, number of neurons utilized in the conv-layers, and number of training epochs.
For VGG19, setting the number of training epochs to 50 and the batch size to 30, and using a softmax activation function with kernel and pool sizes of the convolutional and max-pooling layers adjusted to 3 × 3 and 2 × 2 filters, respectively, VGG19 achieved a test accuracy of 99.89%. However, when the batch size was changed to 64, the training epochs reconfigured to 40, with a sigmoid activation function, and the kernel and pool sizes of the convolutional and max-pooling layers set to 5 × 5 and 3 × 3 filters, respectively, the VGG19 testing accuracy reached only 95.78%. We conclude that the parametric settings can significantly enhance the results.
A GridSearchCV method was used to automatically compute the optimum values of the hyper-parameters to ensure the models achieved optimal performance.
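The hyper-parameter tuning described above can be organized as an exhaustive grid search. The sketch below only enumerates candidate combinations with scikit-learn's ParameterGrid and calls a hypothetical train_and_evaluate() helper; the authors' exact GridSearchCV wiring is not shown in the paper, so this is an assumed, simplified outline.

```python
from sklearn.model_selection import ParameterGrid

# Candidate values drawn from the settings discussed above.
grid = ParameterGrid({
    "batch_size": [16, 30, 64],
    "epochs": [10, 40, 50],
    "kernel_size": [3, 5],
    "activation": ["softmax", "sigmoid"],
})

best_accuracy, best_params = 0.0, None
for params in grid:
    # train_and_evaluate() is a hypothetical helper that trains the model with these
    # settings and returns its test accuracy; replace it with the real training routine.
    accuracy = train_and_evaluate(**params)
    if accuracy > best_accuracy:
        best_accuracy, best_params = accuracy, params

print(best_params, best_accuracy)
```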


FIGURE 20. Results of super-resolution images: (a) Super-Resolution MRI image, (b) Histogram of super-resolution image.

VII. CONCLUSION
This paper reports the application of ResNext101_32 × 8d and VGG19 DLMs to classify patients with glioma and pituitary tumors based on brain MRI images. The models were trained and assessed using a dataset of 900 each of glioma and pituitary images. In addition, a single image super-resolution (SISR) technique was applied to the MRI images to improve their resolution before classification using ResNext101_32 × 8d and VGG19. The SISR is based on a GAN algorithm and was evaluated using MS-SSIM, PSNR, and SSIM metrics. The MS-SSIM was 96.39%, the PSNR was 29.30 dB, and the SSIM rate was 0.847. Experimental assessment of the accuracy of the VGG19 and ResNext101_32 × 8d models showed the accuracy realized was 99.89% and 100%, respectively, with respective test loss rates of 0.0120 and 0.0108.
The error matrix, F1-score, Precision, Recall, area under the precision-recall curve, and the ROCC have been presented and the models' performances evaluated. The VGG19 model's F1-score was 99.89%, its precision score was 99.90%, and the achieved recall was 99.89%. The areas under the corresponding precision-recall curves for the VGG19, for both glioma and pituitary tumors, were 100%. The area under the ROCC is 100% for both classes for the VGG19. The ResNext101_32 × 8d model's F1-score, precision, and recall were all 100%. The achieved areas under the ROC and precision-recall curves were 100% for both classes, glioma tumor and pituitary tumor. Models such as these assist specialist doctors by providing a fast identification of patients with brain tumors, which makes these models useful tools for rapid screening and providing support for medical diagnoses.
The hyper-parameters of both ResNext101_32 × 8d and VGG19, i.e., batch and kernel sizes, loss and activation functions, optimizer type, pool size, number of neurons used in the conv-layers, and number of training epochs, were found to substantially impact the accuracy of the results. Best performance depends on achieving the optimal settings for these parameters.
The obtained results demonstrate that the pre-trained ResNext101_32 × 8d and VGG19 models achieved high performance when classifying brain tumors. To assess the performance of VGG19 in terms of testing accuracy and loss, 5-fold cross-validation was used. Both the ResNext101_32 × 8d and VGG19 models can be applied to MRI medical images to speed up diagnosis for the benefit of both patients and doctors.
Future work should include applying the models proposed here to more brain MRI images, possibly also adding other pre-trained DLMs, such as ResNet-18 and AlexNet, to the utilized dataset.

REFERENCES
[1] H. H. Sultan, N. M. Salem, and W. Al-Atabany, "Multi-classification of brain tumor images using deep neural network," IEEE Access, vol. 7, pp. 69215–69225, 2019.
[2] M. B. Naceur, M. Akil, R. Saouli, and R. Kachouri, "Fully automatic brain tumor segmentation with deep learning-based selective attention using overlapping patches and multi-class weighted cross-entropy," Med. Image Anal., vol. 63, Jul. 2020, Art. no. 101692.
[3] J. Paul and T. S. Sivarani, "RETRACTED ARTICLE: Computer aided diagnosis of brain tumor using novel classification techniques," J. Ambient Intell. Humanized Comput., vol. 12, no. 7, pp. 7499–7509, Jul. 2021.


[4] S. Bauer, R. Wiest, L.-P. Nolte, and M. Reyes, "A survey of MRI-based medical image analysis for brain tumor studies," Phys. Med. Biol., vol. 58, no. 13, pp. R97–R129, Jul. 2013.
[5] M. Pantoja, M. Weyrich, and G. Fernández-Escribano, "Acceleration of MRI analysis using multicore and manycore paradigms," J. Supercomput., vol. 76, no. 11, pp. 8679–8690, Nov. 2020.
[6] Q. T. Ostrom, H. Gittleman, G. Truitt, A. Boscia, C. Kruchko, and J. S. Barnholtz-Sloan, "CBTRUS statistical report: Primary brain and other central nervous system tumors diagnosed in the United States in 2014–2018," Neuro Oncol., vol. 23, no. 12, pp. iii1–iii105, 2021.
[7] S. Bakas et al., "Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment and overall survival prediction in the BRATS challenge," 2018, arXiv:1811.02629.
[8] L. Zhang and H. Schaeffer, "Forward stability of ResNet and its variants," J. Math. Imag. Vis., vol. 62, no. 3, pp. 328–351, Apr. 2020.
[9] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in Proc. 3rd Int. Conf. Learn. Represent., San Diego, CA, USA, May 2015.
[10] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 2261–2269.
[11] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 770–778.
[12] P. Aswathy, Siddhartha, and D. Mishra, "Deep GoogLeNet features for visual object tracking," in Proc. IEEE 13th Int. Conf. Ind. Inf. Syst. (ICIIS), Dec. 2018, pp. 60–66.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Commun. ACM, vol. 60, no. 6, pp. 84–90, May 2017.
[14] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2009, pp. 248–255.
[15] M. Tan and Q. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks," in Proc. 36th Int. Conf. Mach. Learn., Mach. Learn. Res., 2019, pp. 6105–6114. [Online]. Available: http://proceedings.mlr.press/v97/tan19a.html
[16] D. Yu and L. Deng, Automatic Speech Recognition. Cham, Switzerland: Springer, 2016.
[17] P. Klosowski, "Deep learning for natural language processing and language modelling," in Proc. Signal Process., Algorithms, Archit., Arrangements, Appl. (SPA), Sep. 2018, pp. 223–228.
[18] S. Mohsen, A. Elkaseer, and S. G. Scholz, "Industry 4.0-oriented deep learning models for human activity recognition," IEEE Access, vol. 9, pp. 150508–150521, 2021.
[19] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2015, pp. 1–9.
[20] P. Afshar, A. Mohammadi, and K. N. Plataniotis, "Brain tumor type classification via capsule networks," in Proc. 25th IEEE Int. Conf. Image Process. (ICIP), Oct. 2018, pp. 3129–3133.
[21] A. K. Anaraki, M. Ayati, and F. Kazemi, "Magnetic resonance imaging-based brain tumor grades classification and grading via convolutional neural networks and genetic algorithms," Biocybern. Biomed. Eng., vol. 39, no. 1, pp. 63–74, Jan. 2019.
[22] P. Saxena, A. Maheshwari, and S. Maheshwari, "Predictive modeling of brain tumor: A deep learning approach," in Innovations in Computational Intelligence and Computer Vision, vol. 1189, M. K. Sharma, V. S. Dhaka, T. Perumal, N. Dey, and J. M. R. S. Tavares, Eds. Singapore: Springer, 2021, pp. 275–285.
[23] Y. Zhou, Z. Li, H. Zhu, C. Chen, M. Gao, K. Xu, and J. Xu, "Holistic brain tumor screening and classification based on DenseNet and recurrent neural network," in Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. Cham, Switzerland: Springer, 2019, pp. 208–217.
[24] J. Cheng, W. Huang, S. Cao, R. Yang, W. Yang, Z. Yun, Z. Wang, and Q. Feng, "Correction: Enhanced performance of brain tumor classification via tumor region augmentation and partition," PLoS ONE, vol. 10, no. 12, Dec. 2015, Art. no. e0144479.
[25] N. Abiwinanda, M. Hanif, S. Hesaputra, A. Handayani, and T. Mengko, "Brain tumor classification using convolutional neural network," in World Congress on Medical Physics and Biomedical Engineering 2018. Singapore: Springer, 2018, pp. 183–189.
[26] K. Kaplan, Y. Kaya, M. Kuncan, and H. M. Ertunç, "Brain tumor classification using modified local binary patterns (LBP) feature extraction methods," Med. Hypotheses, vol. 139, Jun. 2020, Art. no. 109696.
[27] A. Pashaei, H. Sajedi, and N. Jazayeri, "Brain tumor classification via convolutional neural network and extreme learning machines," in Proc. 8th Int. Conf. Comput. Knowl. Eng. (ICCKE), Oct. 2018, pp. 314–319.
[28] E. I. Zacharaki, S. Wang, S. Chawla, D. Soo Yoo, R. Wolf, E. R. Melhem, and C. Davatzikos, "Classification of brain tumor type and grade using MRI texture and shape in a machine learning scheme," Magn. Reson. Med., vol. 62, no. 6, pp. 1609–1618, Dec. 2009.
[29] R. V. Kurup, V. Sowmya, and K. P. Soman, "Effect of data pre-processing on brain tumor classification using CapsuleNet," in Proc. Int. Conf. Intell. Comput. Commun. Technol., Berlin, Germany: Springer, 2019, pp. 110–119.
[30] S. Das, O. F. M. R. R. Aranya, and N. N. Labiba, "Brain tumor classification using convolutional neural network," in Proc. 1st Int. Conf. Adv. Sci., Eng. Robot. Technol. (ICASERT), 2019, pp. 1–5.
[31] Z. Ullah, M. U. Farooq, S.-H. Lee, and D. An, "A hybrid image enhancement based brain MRI images classification technique," Med. Hypotheses, vol. 143, Oct. 2020, Art. no. 109922.
[32] Z. Huang, X. Du, L. Chen, Y. Li, M. Liu, Y. Chou, and L. Jin, "Convolutional neural network based on complex networks for brain tumor image classification with a modified activation function," IEEE Access, vol. 8, pp. 89281–89290, 2020.
[33] T. Kalaiselvi, S. T. Padmapriya, P. Sriramakrishnan, and K. Somasundaram, "Deriving tumor detection models using convolutional neural networks from MRI of human brain scans," Int. J. Inf. Technol., vol. 12, no. 2, pp. 403–408, Jun. 2020.
[34] G. Li, J. Sun, Y. Song, J. Qu, Z. Zhu, and M. R. Khosravi, "Real-time classification of brain tumors in MRI images with a convolutional operator-based hidden Markov model," J. Real-Time Image Process., vol. 18, no. 4, pp. 1207–1219, Aug. 2021.
[35] N. Noreen, S. Palaniappan, A. Qayyum, I. Ahmad, and M. O. Alassafi, "Brain tumor classification based on fine-tuned models and the ensemble method," Comput., Mater. Continua, vol. 67, no. 3, pp. 3967–3982, 2021.
[36] A. Rehman, M. A. Khan, T. Saba, Z. Mehmood, U. Tariq, and N. Ayesha, "Microscopic brain tumor detection and classification using 3D CNN and feature selection architecture," Microsc. Res. Technique, vol. 84, no. 1, pp. 133–149, Jan. 2021.
[37] A. Rehman, S. Naz, M. I. Razzak, F. Akram, and M. Imran, "A deep learning-based framework for automatic brain tumors classification using transfer learning," Circuits, Syst., Signal Process., vol. 39, no. 2, pp. 757–775, Feb. 2020.
[38] M. I. Sharif, J. P. Li, M. A. Khan, and M. A. Saleem, "Active deep neural network features selection for segmentation and recognition of brain tumors using MRI images," Pattern Recognit. Lett., vol. 129, pp. 181–189, Jan. 2020.
[39] S. R. Muzammil, S. Maqsood, S. Haider, and R. Damaševičius, "CSID: A novel multimodal image fusion algorithm for enhanced clinical diagnosis," Diagnostics, vol. 10, no. 11, p. 904, Nov. 2020.
[40] S. Maqsood, R. Damasevicius, and F. M. Shah, "An efficient approach for the detection of brain tumor using fuzzy logic and U-NET CNN classification," in Proc. Int. Conf. Comput. Sci. Appl., vol. 12953. Cham, Switzerland: Springer, 2021, pp. 105–118.
[41] H. Samma, S. A. Suandi, N. A. Ismail, S. Sulaiman, and L. L. Ping, "Evolving pre-trained CNN using two-layers optimizer for road damage detection from drone images," IEEE Access, vol. 9, pp. 158215–158226, 2021.
[42] J. A. Gamble and J. Huang, "Convolutional neural network for human activity recognition and identification," in Proc. IEEE Int. Syst. Conf. (SysCon), Aug. 2020, pp. 1–7.
[43] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," 2014, arXiv:1412.6980.
[44] (2021). Brain-Tumor-Datasets. Kaggle Repository. Accessed: Dec. 2021. [Online]. Available: https://www.kaggle.com/drsaeedmohsen/braintumordatasets
[45] R. Vinayakumar, M. Alazab, K. P. Soman, P. Poornachandran, A. Al-Nemrat, and S. Venkatraman, "Deep learning approach for intelligent intrusion detection system," IEEE Access, vol. 7, pp. 41525–41550, 2019.
[46] S. Mohsen, A. Elkaseer, and S. G. Scholz, "Human activity recognition using k-nearest neighbor machine learning algorithm," in Proc. 8th Int. Conf. Sustain. Design Manuf. (KES-SDM), 2021, pp. 304–313.


[47] W. El-Shafai, E. M. Mohamed, M. Zeghid, A. M. Ali, and M. H. Aly, "Hybrid single image super-resolution algorithm for medical images," Comput., Mater. Continua, vol. 72, no. 3, pp. 4879–4896, 2022.
[48] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
[49] W. Lin and C.-C. J. Kuo, "Perceptual visual quality metrics: A survey," J. Vis. Commun. Image Represent., vol. 22, no. 4, pp. 297–312, May 2011.
[50] Z. Wang, E. Simoncelli, and A. Bovik, "Multiscale structural similarity for image quality assessment," Signals, Syst. Comput., vol. 2, no. 3, pp. 1398–1402, 2003.
[51] W. El-Shafai, S. A. El-Nabi, E.-S. M. El-Rabaie, A. M. Ali, N. F. Soliman, A. D. Algarni, and F. E. Abd El-Samie, "Efficient deep-learning-based autoencoder denoising approach for medical image diagnosis," Comput., Mater. Continua, vol. 70, no. 3, pp. 6107–6125, 2022.
[52] W. El-Shafai, A. A. Mahmoud, A. M. Ali, E.-S. M. El-Rabaie, T. E. Taha, O. F. Zahran, A. S. El-Fishawy, N. F. Soliman, A. A. Alhussan, and F. E. Abd El-Samie, "Deep CNN model for multimodal medical image denoising," Comput., Mater. Continua, vol. 73, no. 2, pp. 3795–3814, 2022.

SAEED MOHSEN received the B.Sc. degree (Hons.) in electronics engineering and electrical communications from the Thebes Higher Institute, Cairo, Egypt, in 2013, and the M.Sc. and Ph.D. degrees in electrical engineering from Ain Shams University, Cairo, in 2016 and 2020, respectively. He is currently an Assistant Professor with the Al-Madinah Higher Institute for Engineering and Technology, Giza, Egypt. He has conducted intensive research on applications of artificial intelligence (AI), such as deep learning and machine learning. He has published a number of papers in specialized international conferences and peer-reviewed periodicals. His current research interests include biomedical engineering, wearable devices, energy harvesting, analog electronics, and the Internet of Things (IoT).

ANAS M. ALI was born in Alexandria, Egypt. He received the B.Sc. degree (Hons.) in electronics and communication engineering from the Alexandria Higher Institute of Engineering and Technology (AIET), Alexandria, Egypt, in 2016, and the M.Sc. degree in communications and electronics engineering from Menoufia University, in 2021. He is currently a Research Assistant with the Robotics and Internet-of-Things Laboratory, Prince Sultan University, Riyadh, Saudi Arabia. His research interests include image and video signal processing, medical image processing, the Internet of Things, medical diagnoses applications, image and video magnification, artificial intelligence for signal processing algorithms, deep learning in signal processing, modulation identification and classification, and image restoration.

EL-SAYED M. EL-RABAIE received the B.Sc. degree (Hons.) in radio communications from Tanta University, Tanta, Egypt, in 1976, the M.Sc. degree in communication systems from Menoufia University, Menouf, Egypt, in 1981, and the Ph.D. degree in microwave device engineering from the Queen's University of Belfast, Belfast, U.K., in 1986. In his doctoral research, he constructed a computer-aided design (CAD) package used in nonlinear circuit simulations based on the harmonic balance techniques. Until February 1989, he was a Postdoctoral Fellow with the Department of Electronic Engineering, Queen's University of Belfast. He was invited as a Research Fellow with the College of Engineering and Technology, Northern Arizona University, Flagstaff, in 1992; and a Visiting Professor with Ecole Polytechnique de Montréal, Montreal, QC, Canada, in 1994. He is a Reviewer of Quality Assurance and Accreditation of Egyptian Higher Education and was a member of the Scientific Committee for the promotion of professors and assistant professors, from 2019 to 2022. He has authored or coauthored more than 500 papers and 21 text books. His current research interests include device characterization, digital communication systems, and digital image processing. He acts as a reviewer and a member of the editorial board of several scientific journals.

AHMED ELKASEER received the Ph.D. degree from the Cardiff School of Engineering, Cardiff University, U.K., in 2011. He is a Senior Research Fellow with the Institute for Automation and Applied Informatics (IAI), Karlsruhe Institute of Technology (KIT), Germany. He has more than 20 years of research experience in advanced manufacturing technologies. He has been working on different EC and EPSRC funded research projects. His work entails performing experimental and laboratory work, modeling, simulation, and optimization-based studies of mechanical, EDM, and laser processing of advanced materials on conventional and micro scales, with a recent emphasis on additive and smart manufacturing and Industry 4.0 applications. His studies have led to several publications in the area of conventional and advanced micro- and nano-manufacturing technologies. He serves as a guest editor, an associate editor, an editorial board member, and a reviewer for a number of journals. He was a scientific committee chair and a program committee member of a number of international conferences.

STEFFEN G. SCHOLZ is the Head of the Process Optimization, Information Management and Applications Research Team, Institute for Automation and Applied Informatics (IAI), Karlsruhe Institute of Technology. He is also an Honorary Professor with Swansea University, U.K., and an Adjunct Professor with the Vellore Institute of Technology, India. He is also the Principal Investigator in the Helmholtz funded long-term programs, such as Digital System Integration and Printed Materials and Systems. He has more than 22 years of experience in the field of system integration and automation, sustainable flexible production, polymer micro- and nano-replication, and process optimization and control, with a recent emphasis on additive manufacturing and Industry 4.0 applications. In addition to pursuing and leading research, he is very active with knowledge transfer to industry. He has been involved in over 30 national and international projects. He has won in excess of 20M EUR in research grants, in which he has acted as a coordinator and/or a principal investigator. His academic output includes more than 150 technical papers and five books. He is the chair of different international conferences with the scope of advanced and sustainable manufacturing technologies.

ASHRAF MOHAMED ALI HASSAN was born in Giza, in 1979. He received the B.Sc. (Hons.), M.Sc., and Ph.D. degrees in electrical engineering from Cairo University, in 2002, 2005, and 2009, respectively. He is an Associate Professor with the October High Institute for Engineering and Technology, 6th of October, Egypt. He has published more than 25 international papers. He received a certificate of appreciation for the supervision, with high performance and lasting contribution, of the graduation project titled "Vertical Handover Implementation and Application," which is the first on the level of the Egyptian universities. His research interests include digital signal processing and synthesis of electronic circuits. He was awarded the title of Associate Professor of electronics and communication engineering by the Supreme Council of Universities, Egypt, in 2019.
