Measurement: Sensors
Takowa Rahman, MD Saiful Islam
journal homepage: www.sciencedirect.com/journal/measurement-sensors
A R T I C L E I N F O

Keywords:
Brain tumor detection
Data augmentation
Parallel convolutional neural network
ReLU activation function
Softmax function

A B S T R A C T

Convolutional neural networks (CNNs) are widely used to classify brain tumors with high accuracy. Because a CNN collects features indiscriminately, without distinguishing local from global features, and is prone to overfitting, this research proposes a novel parallel deep convolutional neural network (PDCNN) topology that extracts both global and local features from two parallel stages and deals with the overfitting problem by using a dropout regularizer alongside batch normalization. To begin, the input images are resized and converted to grayscale, which helps to reduce complexity. After that, data augmentation is used to enlarge the datasets. Combining two simultaneous deep convolutional neural networks with two different window sizes provides the benefits of parallel pathways, allowing the model to learn both local and global information. Three types of MRI datasets are used to determine the effectiveness of the proposed method: the binary tumor identification dataset-I, the Figshare dataset-II, and the multiclass Kaggle dataset-III yield accuracies of 97.33%, 97.60%, and 98.12%, respectively. The proposed structure is not only accurate but also efficient, as it extracts both low-level and high-level features, improving results compared with state-of-the-art techniques.
* Corresponding author.
E-mail address: [email protected] (M.S. Islam).
https://doi.org/10.1016/j.measen.2023.100694
Received 22 October 2022; Received in revised form 18 December 2022; Accepted 2 February 2023
Available online 6 February 2023
2665-9174/© 2023 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
A parallel architecture of two CNNs is presented; Fig. 2 shows a block diagram of the PDCNN design. The sequence of events within the proposed structure is as follows. The input layer of the PDCNN receives brain MRI images, which are preprocessed to reduce computing complexity: for training purposes, the input images of varying heights and widths are resized to 32 × 32 pixels and converted to grayscale, which further reduces complexity. Data augmentation is then used to create new images from the existing ones. To train the proposed network, the dataset is separated into training and validation sets. The PDCNN structure, which comprises local, global, merging, and output paths, is then used to classify the input images. In the output path, the softmax function carries out the brain tumor categorization.
3.1. MRI dataset

Three different public datasets of brain MRI images are used in this research. The first, a publicly available binary-class brain MRI dataset obtained from the Kaggle platform, is referred to in this paper as dataset-I [9]. It contains 253 brain MRI images: 98 tumor and 155 non-tumor cases. This study also uses the Figshare dataset of brain MRI images from 233 patients [12], acquired at two Chinese hospitals (Nanfang Hospital and General Hospital). It contains 3064 MRI scans of the brain (1426 glioma tumors, 708 meningioma tumors, and 930 pituitary tumors) and is identified as dataset-II. Finally, another dataset obtained from the Kaggle website [13] includes 826, 822, 395, and 827 brain MRI images of glioma tumor, meningioma tumor, no tumor, and pituitary tumor, respectively; in this study it is referred to as dataset-III. Table 1 displays the different types of brain MRI images found across all datasets.
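The paper reports that the implementation is written in MATLAB (Section 4) but does not list data-handling code. As an illustration only, a minimal sketch of loading one of these datasets and splitting it for training and validation, assuming a folder-per-class layout and the hypothetical folder name dataset_III, might look like this:

```matlab
% Sketch: load an MRI dataset organized as one subfolder per class and
% split it into training/validation sets. Folder names are assumptions.
imds = imageDatastore('dataset_III', ...
    'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');          % class labels taken from subfolder names

% 90:10 training-testing split, one of the ratios evaluated in the paper
[imdsTrain, imdsVal] = splitEachLabel(imds, 0.9, 'randomized');

countEachLabel(imdsTrain)                    % inspect the class balance
```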
3.2. Preprocessing data

Data preprocessing is a technique for cleaning data and preparing it for use in a machine-learning model, improving the model's accuracy and efficiency. The brain images in the MRI datasets do not all have the same width, height, or size; to provide uniformity for training, all images are scaled to 32 × 32 pixels. These input images are also converted to grayscale, which helps to reduce complexity.
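A minimal MATLAB sketch of this resize-and-grayscale step is shown below; preprocessMRI is a hypothetical helper name, not a function from the paper.

```matlab
% Sketch of the described preprocessing: convert to grayscale and resize to 32 x 32.
% Save as preprocessMRI.m so it can be reused elsewhere.
function Iout = preprocessMRI(Iin)
    if size(Iin, 3) == 3
        Iin = rgb2gray(Iin);                 % drop colour channels to reduce complexity
    end
    Iout = imresize(Iin, [32 32]);           % uniform 32 x 32 input size for the network
end
```

The helper can also be applied lazily to a whole datastore, e.g. tdsTrain = transform(imdsTrain, @preprocessMRI).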
3.3. Data augmentation

At this stage, data augmentation is used to expand the amount of available data by modifying the original images, because deep learning requires a large amount of data to learn from. Augmenting the data can improve the effectiveness of the classification results. Rotation, scaling, translation, and filtering are all operations that can be applied to images; the filtering procedure is used as the augmentation in this article. Noise present in MRI brain images constitutes inappropriate information and results in a poor recognition rate, so the noise and unwanted areas must be reduced to provide valuable information. The high-frequency noise seen in MRI images is generally minimized using a filtering process. The anisotropic diffusion filter is a technique for autonomous noise suppression that maintains image edges: it can remove noise from digital images while avoiding edge blurring. Table 2 displays the performance of MRI brain images after applying an anisotropic diffusion filter.
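The paper does not give the filter settings. A hedged MATLAB sketch of such an anisotropic-diffusion step, assuming the Image Processing Toolbox function imdiffusefilt and illustrative parameter values, is:

```matlab
% Sketch: anisotropic diffusion filtering as the augmentation step.
% The file name and the filter parameters are illustrative, not from the paper.
I = imread('example_mri.png');               % hypothetical input image
I = preprocessMRI(I);                        % resize + grayscale (see the Section 3.2 sketch)

% Anisotropic diffusion smooths homogeneous regions while preserving edges.
Ifiltered = imdiffusefilt(I, ...
    'NumberOfIterations', 5, ...
    'GradientThreshold', 10);

montage({I, Ifiltered})                      % compare original and filtered images
```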
3.4. Developed parallel deep convolutional neural network design

Feature extraction and categorization are the two components of a standard CNN. CNN models are built from input, convolutional, pooling, fully connected, and classification layers, as shown in Fig. 3. The fully connected and classification layers are used for classification, whereas the features are obtained by the convolutional and pooling layers.
CNNs have been widely used in image and video recognition and classification in recent years. A CNN can automatically extract image features that are global, local, or both from the input images.
This research proposes a new network topology for brain tumor detection and characterization that consists of two deep convolutional neural networks operating simultaneously, as shown in Fig. 4. The PDCNN structure used to classify the input images comprises local, global, merging, and output paths. The local and global pathways acquire local and global features, respectively. In the local path, the convolutional layers use a modest 5 × 5 pixel window size to capture low-level information within the images. The convolutional layers of the global pathway, on the other hand, use large 12 × 12 pixel filters. In each path, a max-pooling layer is used after every convolutional layer to downsample the convolutional output. The two pathways are joined by a fusion layer, which creates a single cascaded path that continues to the final output. In the merging path, a batch normalization layer precedes a ReLU layer, which is followed by two fully connected layers coupled to a dropout layer. To deal with the overfitting problem, a dropout of 0.3 is applied in the initial layers of the model and is decreased as the network grows deeper. In the output path, the softmax function carries out the brain tumor categorization.
In the PDCNN framework, both global and local features obtained from the two parallel stages are incorporated. Dropout is a regularization strategy for preventing overfitting in training data.
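Table 3 lists the layer details. Purely as an illustration, a compact MATLAB sketch of a two-path graph in this spirit could be assembled as follows; the filter counts, pooling sizes, fully connected width, layer names, and the use of depth concatenation for the fusion layer are assumptions, not the paper's exact configuration.

```matlab
% Sketch of a two-path graph: a 5 x 5 local path and a 12 x 12 global path,
% fused and followed by batch normalization, ReLU, two fully connected
% layers, dropout and softmax.
numClasses = 4;                              % e.g. dataset-III: glioma, meningioma, no tumor, pituitary

lgraph = layerGraph(imageInputLayer([32 32 1], 'Name', 'input'));

localPath = [
    convolution2dLayer(5, 16, 'Padding', 'same', 'Name', 'conv_local')
    maxPooling2dLayer(2, 'Stride', 2, 'Name', 'pool_local')];
globalPath = [
    convolution2dLayer(12, 16, 'Padding', 'same', 'Name', 'conv_global')
    maxPooling2dLayer(2, 'Stride', 2, 'Name', 'pool_global')];

mergePath = [
    depthConcatenationLayer(2, 'Name', 'fusion')   % fusion layer, here as depth concatenation (assumption)
    batchNormalizationLayer('Name', 'bn')
    reluLayer('Name', 'relu')
    fullyConnectedLayer(64, 'Name', 'fc1')
    dropoutLayer(0.3, 'Name', 'drop')              % dropout regularizer against overfitting
    fullyConnectedLayer(numClasses, 'Name', 'fc2')
    softmaxLayer('Name', 'softmax')
    classificationLayer('Name', 'output')];

lgraph = addLayers(lgraph, localPath);
lgraph = addLayers(lgraph, globalPath);
lgraph = addLayers(lgraph, mergePath);
lgraph = connectLayers(lgraph, 'input', 'conv_local');
lgraph = connectLayers(lgraph, 'input', 'conv_global');
lgraph = connectLayers(lgraph, 'pool_local', 'fusion/in1');
lgraph = connectLayers(lgraph, 'pool_global', 'fusion/in2');
% analyzeNetwork(lgraph)                     % optional: visualize the assembled graph
```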
The layers used in the proposed PDCNN framework are discussed below.
Table 1
Categorization of MRI brain images: No Tumor, Glioma Tumor, Meningioma Tumor, Pituitary Tumor.
Table 2
Performance of MRI brain images after using an anisotropic diffusion filter: No Tumor, Glioma Tumor, Meningioma Tumor, Pituitary Tumor.
Fig. 4. The PDCNN model contains four stages in its overall structure: local, global, merging, and output stages in order to categorize the input image.
4. Implementation and evaluation

The PDCNN's implementation code is executed in MATLAB. The computer has a 3.2 GHz Intel Core i5 processor, 8 GB of RAM, and a Windows operating system installed.

4.1. Classification results

The proposed architecture of two simultaneous deep CNNs for tumor identification and categorization is validated using three datasets: dataset-I [9], dataset-II [12], and dataset-III [13]. As shown in Figs. 5–7, multiple training-testing ratios are evaluated for the proposed design: 90:10, 80:20, 70:30, 60:40, and 50:50.

The accuracy of brain tumor classification using the PDCNN model for multiple ratios is shown in Fig. 5 for the binary dataset-I. The number of epochs, iterations, elapsed time, and accuracy are recorded for each ratio. At a 90:10 training-testing ratio, the maximum accuracy achieved is 96.00%, obtained after 90 epochs and 90 iterations with a total execution time of 361 s. Changing the training-testing ratio changes the accuracy. The accuracy of the PDCNN model improves after applying augmentation to the binary classification dataset-I: at the 90:10 ratio, it climbs from 96.00% to 97.33% with data augmentation. The bold numbers represent the best outcome.

The performance of brain tumor categorization using the PDCNN model for several ratios on the multi-class Figshare dataset-II is shown in Fig. 6. The number of epochs, iterations, elapsed time, and accuracy are recorded for each ratio. At a 90:10 training-testing ratio, the maximum accuracy achieved is 96.10%, reached after 55 epochs and 1155 iterations with a total execution time of 4169 s. Changing the training-testing ratio changes the accuracy. The accuracy of the PDCNN model improves after adding augmentation to the multi-class Figshare dataset-II, rising to 97.60% at the 90:10 training-testing ratio. The bold numbers represent the best outcome.

Fig. 7 shows the efficiency of brain tumor classification using the PDCNN model over several ratios for the multi-class Kaggle dataset-III. The number of epochs, iterations, elapsed time, and accuracy are recorded for each ratio. At an 80:20 training-testing ratio, the greatest accuracy achieved is 95.60%, obtained after 50 epochs and 800 iterations with a total execution time of 3465 s. Changing the training-testing ratio changes the accuracy. The accuracy of the PDCNN model improves after adding augmentation to the multi-class Kaggle dataset-III, rising from 95.60% to 98.12% at the 80:20 training-testing ratio. The bold numbers represent the best outcome.
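The paper states that the implementation runs in MATLAB but does not reproduce the training script. A minimal sketch of how such a run could be set up, reusing the datastores and layer graph from the earlier sketches and with all hyperparameters assumed rather than taken from the paper, is:

```matlab
% Sketch only: train the assembled two-path graph on the prepared datastores.
% Solver, learning rate schedule, epoch count and batch size are assumptions.
augimdsTrain = augmentedImageDatastore([32 32], imdsTrain, 'ColorPreprocessing', 'rgb2gray');
augimdsVal   = augmentedImageDatastore([32 32], imdsVal,   'ColorPreprocessing', 'rgb2gray');

options = trainingOptions('sgdm', ...
    'MaxEpochs', 90, ...                     % e.g. dataset-I is reported to run for 90 epochs
    'MiniBatchSize', 32, ...
    'ValidationData', augimdsVal, ...
    'Verbose', false, ...
    'Plots', 'training-progress');

net = trainNetwork(augimdsTrain, lgraph, options);

predLabels = classify(net, augimdsVal);      % predicted classes for the held-out images
accuracy   = mean(predLabels == imdsVal.Labels);
```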
Table 3
Details of the proposed PDCNN model.
Layer No.   Layer Type   Properties   Activations   Learnables   Total Learnables
Fig. 5. Visualization of PDCNN accuracy on multiple training and testing ratios for binary brain tumor classification dataset-I.
Fig. 6. Visualization of PDCNN accuracy on multiple training and testing ratios for multi-class Figshare dataset-II.
Fig. 7. Visualization of PDCNN accuracy on multiple training and testing ratios for multi-class Kaggle dataset-III.
Fig. 9. Performance parameters comparison of accuracy, precision, recall and F1-score with individual models using Figshare dataset-II containing 3 types of brain tumor images.
Fig. 11. Training-testing performance vs. error rate for binary brain tumor classification dataset-I.
When augmentation is applied to the PDCNN model's input dataset, the error rate is lowered even more and the model's performance is significantly improved.
Fig. 13 shows that the CNN model has the greatest error rate on the original dataset. The PDCNN model is used to improve on the CNN model; compared to the CNN model, the proposed PDCNN model has a lower error rate. When augmentation is applied to the PDCNN model's input dataset, the error rate is lowered even more and the model's performance is significantly improved.
5. Discussion
Fig. 14. Confusion matrix of PDCNN using binary classification dataset-I: (a) confusion matrix of the original dataset-based classification; (b) confusion matrix of the original dataset with augmentation-based classification.
The bold numbers represent the best outcome for all three types of datasets.
Figs. 8–10 indicate that the PDCNN model outperforms the standard CNN model in terms of accuracy, precision, recall, and F1-score. When augmentation is used on the three types of datasets to improve the performance of the proposed PDCNN model, the performance metrics improve even more.
The CNN model trained on the original dataset has the maximum error rate, as seen in Figs. 11–13. The PDCNN model is used to improve on the CNN model; compared to the CNN model, the proposed PDCNN model has a lower error rate. When augmentation is applied to the PDCNN model's input dataset, the error rate is lowered even more and the model's performance is improved.
The results in Figs. 14–16 show that augmentation improves the accuracy of the proposed PDCNN model when compared to the original images.

Table 5
Suggested tumor type categorization outcomes using multiclass Figshare dataset-II.

Method   Original Dataset II   Augmented Dataset II   Accuracy (%)   Error (%)   Time (s)   Kappa
CNN      ✓                                            95.80          4.20        833        0.933
CNN                            ✓                      97.90          2.10        1627       0.967
PDCNN    ✓                                            96.10          3.90        4169       0.938
PDCNN                          ✓                      97.60          2.40        5803       0.962

Table 6
Suggested tumor type categorization outcomes using multiclass Kaggle dataset-III.

Method   Original Dataset III   Augmented Dataset III   Accuracy (%)   Error (%)   Time (s)   Kappa
CNN      ✓                                              94.10          5.90        629        0.919
CNN                             ✓                       97.70          2.30        783        0.968
PDCNN    ✓                                              95.60          4.40        3465       0.940
PDCNN                           ✓                       98.12          1.88        6430       0.974

Fig. 15. Confusion matrix of PDCNN using Figshare dataset-II: (a) confusion matrix of the original dataset-based classification; (b) confusion matrix of the original dataset with augmentation-based classification.
Fig. 16. Confusion matrix of PDCNN using Kaggle dataset-III: (a) confusion matrix of the original dataset-based classification; (b) confusion matrix of the original dataset with augmentation-based classification.
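The measures reported in Tables 5 and 6 and visualized in the confusion matrices of Figs. 14–16 (accuracy, error rate, and Cohen's kappa, plus the precision, recall, and F1-score comparisons of Figs. 8–10) can all be derived from a confusion matrix. A minimal MATLAB sketch of that computation, assuming categorical vectors trueLabels and predLabels from a trained model, is:

```matlab
% Sketch: deriving the reported measures from a confusion matrix.
% trueLabels and predLabels are assumed categorical vectors.
C = confusionmat(trueLabels, predLabels);    % rows: true class, columns: predicted class

accuracy  = sum(diag(C)) / sum(C(:));        % overall accuracy; error rate = 1 - accuracy
precision = diag(C) ./ sum(C, 1)';           % per-class precision
recall    = diag(C) ./ sum(C, 2);            % per-class recall
f1        = 2 * (precision .* recall) ./ (precision + recall);

% Cohen's kappa: agreement beyond chance
pe    = sum(sum(C, 1)' .* sum(C, 2)) / sum(C(:))^2;   % expected agreement by chance
kappa = (accuracy - pe) / (1 - pe);
```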
[10] R. Yamashita, M. Nishio, R.K.G. Do, K. Togashi, Convolutional neural networks: an overview and application in radiology, Insights into Imaging, 2018, pp. 611–629.
[11] Z.A. Sejuti, M.S. Islam, An efficient method to classify brain tumor using CNN and SVM, in: 2021 International Conference on Information and Communication Technology for Sustainable Development (ICICT4SD), 2021, pp. 259–263.
[12] J. Cheng, Brain tumor dataset, Figshare, 2017. https://figshare.com/articles/brain tumor dataset/1512427.
[13] Brain Tumor MRI Dataset, Kaggle.
[14] M.A. Habib, Brain Tumor Detection Using Convolutional Neural Networks, Medium, 25 November 2021.
[15] S. Irsheidat, R. Duwairi, Brain tumor detection using artificial convolutional neural networks, in: 2020 11th International Conference on Information and Communication Systems (ICICS), 2020, pp. 197–203.
[16] A. Anil, A. Raj, H.A. Sarma, N. Chandran R, P.L. Deepa, Brain tumor detection from brain MRI using deep learning, Int. J. Innovat. Res. Appl. Sci. Eng. (IJIRASE) 3 (2) (August 2019) 68–73.
[17] M. Sajjad, S. Khan, K. Muhammad, W. Wu, A. Ullah, S.W. Baik, Multi-grade brain tumor classification using deep CNN with extensive data augmentation, J. Comput. Sci. 30 (2019) 174–182.
[18] P. Afshar, K.N. Plataniotis, A. Mohammadi, Capsule networks for brain tumor classification based on MRI images and coarse tumor boundaries, in: 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2019, pp. 1368–1372.
[19] M.K. Abd-Ellah, A.I. Awad, H.F.A. Hamed, A.A.M. Khalaf, Parallel deep CNN structure for glioma detection and classification via brain MRI images, in: 2019 31st International Conference on Microelectronics (ICM), 2019, pp. 304–307.
[20] A.E. Minarno, M.H.C. Mandiri, Y. Munarko, Hariyady, Convolutional neural network with hyperparameter tuning for brain tumor classification, Kinetik: Game Technol. Inf. Syst. Comput. Network, Comput. Electron. Control 6 (2) (May 2021) 127–132.
[21] P. Rajak, A.S. Jangde, G.P. Gupta, Towards design of brain tumor detection framework using deep transfer learning techniques, in: Convergence of Big Data Technologies and Computational Intelligent Techniques, IGI Global, 2023, pp. 90–103.
[22] S.U. Habiba, M.K. Islam, L. Nahar, F. Tasnim, M.S. Hossain, K. Andersson, Brain-DeepNet: a deep learning based classifier for brain tumor detection and classification, in: International Conference on Intelligent Computing & Optimization, Springer, 2023, pp. 550–560.
[23] A.K. Budati, R.B. Katta, An automated brain tumor detection and classification from MRI images using machine learning techniques with IoT, Environ. Dev. Sustain. 24 (9) (2022) 10570–10584.
[24] R. Vankdothu, M.A. Hameed, Brain tumor MRI images identification and classification based on the recurrent convolutional neural network, Measurement: Sensors 24 (December 2022).
[25] H.H. Sultan, N.M. Salim, W. Al-Atabany, Multi-classification of brain tumor images using deep neural network, IEEE Access (2019) 69215–69225.
[26] D. Lamrani, B. Cherradi, O.E. Gannour, M.A. Bouqentar, L. Bahatti, Brain tumor detection using MRI images and convolutional neural network, Int. J. Adv. Comput. Sci. Appl. 13 (7) (2022).
[27] A. Nayan, A.N. Mozumder, M.R. Haque, F.H. Sifat, K.R. Mahmud, A.K.A. Azad, M.G. Kibria, A deep learning approach for brain tumor detection using magnetic resonance imaging, Int. J. Electr. Comput. Eng. 13 (1) (February 2023) 1039–1047.
[29] S. Priyansh, A. Maheshwari, S. Maheshwari, Predictive modeling of brain tumor: a deep learning approach, Innovations Comput. Intell. Comput. Vis. 3 (2) (2021) 275–285.
[30] T. Rahman, M.S. Islam, MRI brain tumor classification using deep convolutional neural network, in: 2022 International Conference on Innovations in Science, Engineering and Technology (ICISET), Chittagong, Bangladesh, 2022, pp. 451–456.