
Computers in Biology and Medicine 135 (2021) 104575

Contents lists available at ScienceDirect

Computers in Biology and Medicine


journal homepage: www.elsevier.com/locate/compbiomed

Transfer learning-based approach for detecting COVID-19 ailment in lung CT scan

Vinay Arora a, Eddie Yin-Kwee Ng b,*, Rohan Singh Leekha c, Medhavi Darshan d, Arshdeep Singh e

a Computer Science & Engineering Department, Thapar Institute of Engineering and Technology, Patiala, Punjab, India
b School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore
c IT-App Development/Maintenance, Concentrix, Gurugram, India
d Department of Mathematics, Kamala Nehru College, University of Delhi, Delhi, India
e Wipro Limited, India

A R T I C L E  I N F O

Keywords: COVID-19; CT scan; MobileNet; Transfer learning; Pandemic; Deep learning

A B S T R A C T

This research work aims to identify COVID-19 through deep learning models using lung CT-scan images. In order to enhance lung CT scan efficiency, a super-resolution residual dense neural network was applied. The experimentation has been carried out using benchmark datasets, viz. SARS-COV-2 CT-Scan and COVID-CT. To mark the improved CT scans as positive or negative for COVID-19, existing pre-trained models such as XceptionNet, MobileNet, InceptionV3, DenseNet, ResNet50, and VGG16 (Visual Geometry Group) have been used. Taking the CT scans to super resolution with a residual dense neural network in the pre-processing step improved the accuracy, F1 score, precision, and recall of the proposed model. On the COVID-CT and SARS-COV-2 CT-Scan datasets, the MobileNet model provided a precision of 94.12% and 100% respectively.

1. Introduction

The World Health Organization (WHO) got the first update related to Corona virus disease 2019 (COVID-19) on December 31, 2019. On January 30, 2020, WHO announced the COVID-19 spread as a global health emergency. Corona virus is a zoonotic virus, which means it began in animals before spreading to humans; transmission to humans occurred after contact with infected animals. Corona virus can transmit from one person to another through respiratory droplets when a person exhales, coughs, sneezes, or chats with others [1]. It is also believed that the virus may have transferred from bats to other species, such as snakes or pangolins, and then to humans. Multiple COVID-19 complications leading to liver problems, pneumonia, respiratory failure, cardiovascular diseases, septic shock, etc. have been prompted by a condition called cytokine release syndrome, or a cytokine storm. This occurs when an infection triggers the immune system to leak inflammatory proteins known as cytokines into the bloodstream, which can damage tissues and organs.

Fig. 1 highlights the various methods used for testing corona virus: molecular, serological, and scanning. Molecular tests look for signs of an infection which is still active. Swabbing the back of the throat for a sample is normally done using a cotton swab, and a polymerase chain reaction (PCR) test is performed on the sample; this test looks for the virus's genetic material. A PCR test confirms a COVID-19 diagnosis after detecting two unique SARS-CoV-2 genes. Only active cases of COVID-19 can be detected through molecular tests; they cannot tell whether someone has recovered from the infection. Serological testing can identify antibodies generated by the body to fight the virus. Normally, a blood sample is needed for a serological test, and such a test is particularly helpful in identifying infections with mild to no symptoms [2]. Anyone who has recovered from COVID-19 possesses these antibodies, which can be found in blood and tissues all over the body. Apart from swab and serological tests, organs and structures of the chest can be imaged using X-rays (radiography) or computed tomography (CT) scans [3]. A chest CT scan, a common imaging approach for detecting pneumonia, is a rapid and painless procedure. New research indicates that the sensitivity of CT for COVID-19 infection is 98%, which is greater than the 71% reported for Reverse Transcription Polymerase Chain Reaction (RT-PCR) testing. COVID-19 classification based on thorax CT, however, necessitates the involvement of a radiology specialist and incurs a lot of time. Thus, automated processing of thorax CT images is desirable in order to help medical specialists save their precious time. This automation may also assist in the avoidance of medical delays.

* Corresponding author.
E-mail addresses: [email protected] (V. Arora), [email protected] (E.Y.-K. Ng), [email protected] (R.S. Leekha), [email protected] (M. Darshan), [email protected] (A. Singh).

https://doi.org/10.1016/j.compbiomed.2021.104575
Received 6 April 2021; Received in revised form 8 June 2021; Accepted 9 June 2021
Available online 12 June 2021
0010-4825/© 2021 Elsevier Ltd. All rights reserved.

Fig. 1. Methods used to test the subject for COVID-19 as positive or negative.

Table 1
Datasets used in various studies showing their Sensitivity (Se)/Specificity (Sp)/Accuracy (Acc).

| S. No. | Author(s) | Dataset | Dataset Source | Se/Sp/Acc |
|---|---|---|---|---|
| 1 | Panwar et al. [4] | SARS-COV-2 CT | https://www.kaggle.com/plameneduardo/sarscov2-ctscan-dataset | 94.04/95.84/95.00 |
| 2 | Jaiswal et al. [11] | SARS-COV-2 CT | same as above | 96.29/96.21/96.25 |
| 3 | Singh et al. [13] | SARS-COV-2 CT | same as above | 90.5/90.5/93 |
| 4 | Alshazly et al. [19] | SARS-COV-2 CT | same as above | 99.10/99.60/99.40 |
| 5 | Silva et al. [20] | SARS-COV-2 CT | same as above | 98.80/NA/98.99 |
| 6 | D.A. Ragab and O. Attallah [22] | SARS-COV-2 CT | same as above | 99/NA/99 |
| 7 | Mishra et al. [7] | COVID-CT | https://github.com/UCSD-AI4H/COVID-CT | 88.00/90.00/88.00 |
| 8 | E. Matsuyama [10] | COVID-CT | same as above | 90.40/93.30/92.20 |
| 9 | Loey et al. [12] | COVID-CT | same as above | 77.60/87.62/82.91 |
| 10 | C. Do and L. Vu [15] | COVID-CT | same as above | NA/NA/85.00 |
| 11 | He et al. [17] | COVID-CT | same as above | NA/NA/86.00 |
| 12 | Sakagianni et al. [18] | COVID-CT | same as above | 90.20/86.10/88.30 |
| 13 | Ewen et al. [21] | COVID-CT | same as above | NA/NA/86.21 |
| 14 | Wang et al. [16] | Not available/disclosed | Not available/disclosed | 79.35/71.43/85.00 |
| 15 | Li et al. [14] | Data collected from 6 different hospitals | — | 90.00/95.00/NA |
| 16 | Ni et al. [5] | Data collected from three Chinese hospitals | — | 100/NA/NA |
| 17 | Xiao et al. [6] | Data collected from patients of the People's Hospital of Honghu | — | NA/NA/81.90 |
| 18 | Ko et al. [8] | COVID-19 Database | https://www.sirm.org/en/category/articles/covid-19-database/ | 99.50/100/99.80 |
| 19 | Song et al. [9] | Images of COVID-19 positive and negative pneumonia patients | https://data.mendeley.com/datasets/kk6y7nnbfs/1 | 80.00/75.00/NA |

2. Literature review

Panwar et al. [4] conducted a binary image classification study to detect COVID-19 patients. A fine-tuned VGG model was used to classify the input images, and the researchers used the Grad-CAM technique to apply a color visualization approach that makes the proposed deep learning model more explainable and interpretable. Ni et al. [5] developed a deep learning algorithm for lesion detection, segmentation, and localization; with chest CT images, the proposed approach was validated on 14,435 participants. During the outbreak, the algorithm was checked on a non-overlapping dataset of 96 reported COVID-19 patients in three hospitals across China. Xiao et al. [6] built and validated a deep learning-based model (ResNet34) using residual convolutional neural networks and multiple instance learning.

Deep CNN-based image classification strategies were evaluated by Mishra et al. [7] in order to distinguish COVID-19 cases from chest CT scan images. Ko et al. [8] created the fast-track COVID-19 classification network (FCONet), a 2-D deep learning system to diagnose COVID-19 from a single chest CT image. As a backbone, FCONet was built using transfer learning over pre-trained deep learning models, viz. VGG16, Inception-v3, Xception, or ResNet-50. Song et al. [9] developed a large-scale bi-directional generative adversarial network (BigBiGAN) architecture to construct an end-to-end representation learning system; the researchers extracted semantic features from CT images and fed them to a linear classifier. ResNet-50, a CNN-based model using chest CT to distinguish COVID-19 from non-COVID-19, was proposed by E. Matsuyama [10]; without clipping any parts of the image, the CNN model was fed the wavelet coefficients of the whole picture as input. To identify patients having COVID-19, Jaiswal et al. [11] provided the DenseNet201-based deep transfer learning (DTL) model; the proposed model extracted features through its ImageNet-trained weights and a convolutional neural structure. Loey et al. [12] chose five distinct deep convolutional neural network-based models, viz. AlexNet, VGGNet19, VGGNet16, ResNet50, and GoogleNet, to identify Coronavirus-infected persons using chest CT radiograph digital images. The authors used CGAN and traditional data augmentations to create additional images to help in the identification of COVID-19. The COVID-19 patients were classified using CNN by Singh et al. [13]; the multi-objective differential evolution (MODE) algorithm was used to tune the initial parameters of the CNN.


Fig. 2. Steps followed for classifying lung CT into normal and abnormal for COVID-19.

Fig. 3. Architecture of the residual dense network (RDN).

Fig. 4. Internals of the residual dense block (RDB).

Using visual features extracted through volumetric chest CT scans, Li et al. [14] developed a deep learning model called COVNet to detect COVID-19. To assess the model's robustness, CT scans of community-acquired pneumonia (CAP) as well as other non-pneumonia anomalies were considered; the area under the receiver operating characteristic (ROC) curve, specificity, and sensitivity were applied to test the proposed model's diagnostic efficiency. Do and Vu [15] investigated the VGG-16, Inception-V3, VGG19, Xception, InceptionResNet-V2, DenseNet-169, DenseNet121, and DenseNet201 pre-trained models; overfitting was avoided by dropout and instance augmentation during training. Wang et al. [16] employed a two-step transfer learning methodology to identify CT scans as positive or negative. In the first phase, the researchers gathered CT scans of 4106 lung cancer patients who had undergone a CT scan and had their epidermal growth factor receptor (EGFR) gene sequenced; using this large CT-EGFR training data collection, the DL system picked up features that could represent the relationship between a chest CT image and micro-level functional abnormalities in the lungs. The scientists then employed a large, geographically diversified COVID-19 dataset (n = 1266) to train and validate the DL system's diagnostic and prognostic results.

He et al. [17] suggested the Self-Trans method for classifying COVID-19 instances, which incorporated transfer learning and contrastive self-supervised learning to learn efficient as well as unbiased feature representations while reducing the possibility of overfitting. To design, validate, train, and test the model architecture, Sakagianni et al. [18] used Google AutoML Cloud Vision; the underlying framework that enables this platform to find the best way to train deep learning models is called Neural Architecture Search (NAS). To achieve the best performance, Alshazly et al. [19] used advanced deep network architectures and suggested a transfer learning strategy that utilized custom-sized inputs optimized for each deep architecture. Silva et al. [20] proposed Efficient CovidNet, a model for detecting COVID-19 patterns in CT images that involves a voting-based approach and cross-dataset analysis. On a small COVID-19 CT scan dataset, Ewen et al. [21] evaluated the usefulness of self-supervised classification. Ragab and Attallah [22] introduced a novel CAD method called FUSI-CAD, centered on the fusion of many distinct CNN architectures with handcrafted features such as the grey level co-occurrence matrix (GLCM) and the discrete wavelet transform (DWT). The datasets utilized by these researchers, as well as the results obtained for the various assessment criteria, are summarized in Table 1.

The approach proposed here has achieved an accuracy score of 94.12% on the COVID-CT dataset, and an accuracy of 100% on the SARS-COV-2 CT-Scan dataset, with the MobileNet model. Although the accuracy scores claimed by Panwar et al. [4], Jaiswal et al. [11], Alshazly et al. [19], and Ko et al. [8] are 95.00%, 96.25%, 99.40%, and 99.80% respectively, the findings produced in [4,11,19] and [8] still have some gaps. The work in [4] suffers from the flaw that it stops learning after a few epochs, limiting the resulting neural network to the class of semi-linear models and preventing it from developing complex non-linear structures.

The authors of [11] missed the realistic scenario where poorly contrasted CT scan images could lead to misclassification; the researchers did not pre-process the images with respect to quality enhancement. The models in [8] were validated using a segment of the testing dataset; as a result, the training and testing datasets were derived from the same sources, which is likely to raise concerns about generalizability and overfitting. The authors of [19] have not provided proper reasoning for the specific values of parameters such as the number of epochs and the customized size of the input image.

However, in the work proposed here, CT images were enhanced during the pre-processing phase before being fed to the deep learning models for classification. By converting each normal CT image into its corresponding super-resolution image, the major concern over compromised image quality has been settled. Overfitting has been avoided through augmentation. This investigation also ceased model training at the point when the validation loss began to rise, which helps resolve the problem of early stopping.

Fig. 5. (a) Lung CT image without super resolution, and (b) CT obtained after super resolution.



3. Proposed methodology

The processes in this section are described to assess whether a COVID-19 lung CT scan is positive or negative. Fig. 2 presents a general block diagram of the research.

Under most circumstances, the spatial resolution in CT pictures is not perfect because of restrictions in CT machine hardware configurations. Pre-processing through the use of the RDNN allows the spatial resolution of CT images to be increased, resulting in lower system costs and complexity. Variation in CT picture resolution can also be caused by changes in radiation dosage and slice thickness, which might make radiologists' diagnoses questionable. As a result, boosting clarity and crispness in low-quality CT scans is highly desired. The use of linear and non-linear functions in classical SR approaches results in jagged edges and blurring, which introduces unwanted noise into the data. Deep learning algorithms have been successful in mitigating these anomalies by extracting high-dimensional and non-linear information from CT scans, resulting in improved SR CT pictures. Due to its use of hierarchical features via dense feature fusion, the RDN can recover sharper and cleaner edges when compared to other deep learning models [23–25].

3.1. Image dataset

SARS-COV-2 CT [26] and COVID-CT [27] have been used as the benchmark datasets for the transfer learning models in this research work. Each dataset was divided into two sections, taking 80% of the total for training and 20% for testing. The COVID-CT dataset contains 349 COVID-19 CT images from 216 patients, along with 463 non-COVID-19 CTs. The SARS-COV-2 CT dataset includes 2482 CT scan images, with 1252 scans from SARS-CoV-2 infected patients; the non-COVID-19 subjects account for the remaining 1230 CT scans.
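As a concrete illustration of this 80/20 partitioning, the sketch below shows one plausible way to split such a dataset; the folder layout, the helper name load_image_paths, and the random seed are illustrative assumptions, not details taken from the authors' pipeline.

```python
# Hypothetical sketch: stratified 80/20 split of a CT-scan dataset, assuming
# images live under <DATA_ROOT>/COVID and <DATA_ROOT>/NonCOVID as in the
# public COVID-CT repository.
import os
from glob import glob
from sklearn.model_selection import train_test_split

DATA_ROOT = "COVID-CT"  # assumed local folder name

def load_image_paths(root):
    """Collect (path, label) pairs; label 1 = COVID-positive, 0 = negative."""
    positives = glob(os.path.join(root, "COVID", "*.png"))
    negatives = glob(os.path.join(root, "NonCOVID", "*.png"))
    return positives + negatives, [1] * len(positives) + [0] * len(negatives)

paths, labels = load_image_paths(DATA_ROOT)
# Stratify so both classes keep the same ratio in the two partitions.
train_paths, test_paths, train_y, test_y = train_test_split(
    paths, labels, test_size=0.20, stratify=labels, random_state=42)
print(len(train_paths), "training images /", len(test_paths), "testing images")
```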

3.2. Pre-processing through RDNN

Fig. 6. Basic architecture and customization in ResNet50.

Medical imaging modalities capture both anatomical and functional details of the human body. Resolution limits, on the other hand, often reduce the diagnostic value of medical images. The primary medical imaging modalities, including computerized tomography (CT), magnetic resonance imaging (MRI), functional MRI (fMRI), and positron emission tomography (PET), can be enhanced with Super Resolution (SR). The purpose of SR is to improve the resolution of medical images while preserving true isotropic 3-D structure.

For SR, extremely deep convolutional neural networks (CNNs) have been used, but the reported ones have not fully exploited the hierarchical features of the source low-resolution (LR) images, resulting in poor performance. In this study, the residual dense network (RDN) was employed to address the LR problem. The RDN uses residual dense blocks (RDBs), in which densely connected convolutional layers extract a large number of local features. The RDB's local feature fusion is then utilized to learn more effective features from the previous and current local features in an adaptive manner. Global feature fusion has been used to learn global hierarchical features holistically on top of the dense local features.

Fig. 3 depicts the RDN's internals, which are divided into four sections: residual dense blocks (RDBs), the shallow feature extraction net (SFENet), the up-sampling net (UPNet), and dense feature fusion (DFF). $I_{LR}$ denotes the low-resolution lung CT, while $I_{HR}$ represents the high-resolution lung CT obtained by the RDN. To extract the shallow features, two convolutional layers were used. The first convolutional layer takes the LR input and extracts the features $F_{-1}$ [Equation (1)]:

$$F_{-1} = H_{SFE1}(I_{LR}) \tag{1}$$


Fig. 7. Basic architecture and customization in Xception.

where $H_{SFE1}(\cdot)$ denotes a convolution operation. The shallow feature was thus extracted, and global residual learning was performed using $F_{-1}$. Equation (2) follows as:

$$F_0 = H_{SFE2}(F_{-1}) \tag{2}$$

where $H_{SFE2}(\cdot)$ denotes the convolution process of the second shallow feature extraction layer, whose output is used as the input to the residual dense blocks. In the case of $D$ residual dense blocks, the output $F_d$ of the $d$-th RDB can be obtained by Equations (3) and (4):

$$F_d = H_{RDB,d}(F_{d-1}) \tag{3}$$

$$= H_{RDB,d}\big(H_{RDB,d-1}(\cdots H_{RDB,1}(F_0)\cdots)\big) \tag{4}$$

where $H_{RDB,d}$ denotes the operations of the $d$-th RDB, an amalgamation of convolution and rectified linear unit (ReLU) layers. $F_d$ can be taken as a local feature, as it has been produced by the $d$-th RDB by fully utilizing each convolutional layer within the block. Dense feature fusion (DFF), involving global residual learning (GRL) and global feature fusion (GFF), was used to extract hierarchical features from the collection of RDBs. Fig. 4 displays the architecture of the RDB, where DFF makes full use of the features from all the preceding layers and can be represented as Equation (5):

$$F_{DF} = H_{DFF}(F_{-1}, F_0, F_1, \ldots, F_D) \tag{5}$$

where $F_{DF}$ denotes the DFF output feature maps obtained through the composite function $H_{DFF}$. In the HR space, the up-sampling net (UPNet) is stacked after the extraction of global and local features in the LR space. Equation (6) shows the final output obtained from the RDN:

$$I_{SR} = H_{RDN}(I_{LR}) \tag{6}$$

The prime variables used in the RDN approach are as follows: $D$ – the number of RDBs; $C$ – the number of inner (convolutional) layers in an RDB; $G$ – the number of feature maps per convolutional layer inside an RDB; and $G_0$ – the number of feature maps for the convolutional layers outside the RDBs. The model was trained on the DIV2K [28] dataset taking the values $C = 6$, $D = 20$, $G = 64$, and $G_0 = 64$. From each low-resolution CT image, augmented patches of size 32 × 32 were used, taking 86 epochs and a batch size of 1000. Fig. 5 shows sample lung CT images taken randomly from the benchmark datasets under consideration, together with the images obtained after employing the image super-resolution module on them.
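To make Equations (3)–(5) concrete, the following minimal Keras sketch builds a single residual dense block with C = 6 densely connected Conv–ReLU layers of G = 64 feature maps, 1 × 1 local feature fusion, and a local residual connection. It follows the generic RDB design of Zhang et al. [24] under the parameter values quoted above; the kernel size and the assumption that the block input already carries 64 channels are ours, not the authors'.

```python
# Minimal sketch of one residual dense block (RDB), after Zhang et al. [24].
# growth_rate corresponds to G and num_layers to C in the text; the block
# input is assumed to carry G0 = 64 channels so the residual Add is valid.
from tensorflow.keras import layers

def residual_dense_block(x_in, growth_rate=64, num_layers=6):
    """Maps F_{d-1} (x_in) to F_d as in Equations (3)-(5)."""
    features = [x_in]
    for _ in range(num_layers):
        # Each inner Conv-ReLU layer sees the concatenation of the block
        # input and all previous layer outputs (dense connectivity).
        x = layers.Concatenate()(features) if len(features) > 1 else x_in
        x = layers.Conv2D(growth_rate, 3, padding="same", activation="relu")(x)
        features.append(x)
    # Local feature fusion: a 1x1 convolution over all concatenated features.
    fused = layers.Conv2D(growth_rate, 1, padding="same")(
        layers.Concatenate()(features))
    # Local residual learning: add the block input back in.
    return layers.Add()([x_in, fused])
```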
3.3. Image augmentation

A large dataset enables deep learning to categorize effectively and successfully, but it is not always possible to have a vast dataset. In the domains of machine learning and deep learning, the data size can be raised to enhance classification accuracy, and data augmentation approaches increase the algorithm's learning and network capability significantly. Because of their respective shortcomings, texture-, color-, and geometric-based data augmentation techniques are not equally common: while these strategies provide data diversity, they have limitations such as higher memory consumption, longer training time, and the costs associated with the conversion measurement. To supplement the lung CT images in the current investigation, only geometric alterations were performed. The count of images in each dataset becomes roughly three times larger after executing the data augmentation. A randomly generated number has been used to rotate the lung CT images counter-clockwise by between 0° and 359°.
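A possible implementation of the geometric augmentation described above, sketched with Keras' ImageDataGenerator, is given below; everything other than the rotation range (the directory name, target size, and batch size) is an assumption for illustration.

```python
# Sketch of the rotation-only geometric augmentation described above. Note
# that rotation_range samples an angle uniformly from [-r, +r]; restricting
# strictly to counter-clockwise turns would need a custom
# preprocessing_function instead.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(rotation_range=359, fill_mode="nearest")

# flow_from_directory yields augmented batches on the fly during training;
# the directory layout and sizes below are assumptions.
train_gen = augmenter.flow_from_directory(
    "data/train",            # one sub-folder per class
    target_size=(224, 224),  # input size expected by the pre-trained models
    batch_size=32,
    class_mode="binary",
    shuffle=True)
```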


Fig. 8. Basic architecture and customization in VGG16.

3.4. Transfer learning models for classification

Transfer learning is the process of moving the variables of a neural network trained on one dataset and task to a new data repository and task. Several deep neural networks trained on natural images share a peculiar property: in the first layers, the model learns features that seem to be universal, in the sense that they can be applied to a wide range of datasets and tasks. The network's final layers then gradually transition from general to task-specific features. Transfer learning can be an imposing method for enabling training without overfitting when the target dataset is a fraction of the size of the base dataset [29,30]. Here, in this work, DenseNet121, MobileNet, VGG16, ResNet50, InceptionV3, and XceptionNet have been used as the base models, pre-trained on the ImageNet dataset for object detection. ImageNet is a publicly available dataset of 1.28 million natural images divided into 1,000 categories. Python 3.6, Scikit-Learn 0.20.4, Keras 2.3.1, and TensorFlow 1.15.2 have been used to deploy the proposed methods. All the tests were run on a computer with an Intel Core i7 8th-generation processor running at 1.9 GHz, with 8 GB of RAM, Intel UHD Graphics 620, and Windows 10 installed. Figs. 6–10 highlight the basic architectures of the transfer learning models and the customization that has been finally deployed to obtain the classification results in this work.
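The sketch below illustrates this setup for one of the base models: ImageNet-pretrained MobileNet with its convolutional base frozen and a small binary head attached. The head design and optimizer are assumptions, since the paper's exact customization is shown only in Figs. 6–10.

```python
# Sketch: MobileNet pre-trained on ImageNet, adapted for binary COVID-19
# CT classification by freezing the base and training a new head.
import tensorflow as tf
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import MobileNet

base = MobileNet(weights="imagenet", include_top=False,
                 input_shape=(224, 224, 3))
base.trainable = False  # keep the general-purpose ImageNet features fixed

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.5)(x)                      # assumed regularization
out = layers.Dense(1, activation="sigmoid")(x)  # COVID-19 positive/negative

model = Model(base.input, out)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.summary()
```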
4. Results and experiments

Here, in this research work, COVID-19 positive patients who have been correctly categorized as COVID-19 positive are denoted by true positives (TP), while false positives (FP) are normal subjects who have been incorrectly labelled as corona positive. True negatives (TN) denote the COVID-19 negative subjects who have been recognized correctly as negative. Finally, COVID-19 positive patients misclassified as negative are denoted by false negatives (FN). To assess the reliability of the proposed approach, the F1 score, Accuracy, Precision, and Recall have been taken as the evaluation metrics.


Fig. 9. Basic architecture and customization in InceptionV3.


Fig. 10. Basic architecture and customization in (a) MobileNet, and (b) DenseNet.

Table 2
Accuracy, Recall, Precision and F1 Score obtained by executing MobileNet, DenseNet121, ResNet50, VGG16, InceptionV3 and XceptionNet on the COVID-CT dataset without employing any image super resolution operation.

| Model Name | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| MobileNet | 86.60 | 95.70 | 85.04 | 90.02 |
| DenseNet121 | 93.00 | 93.00 | 94.10 | 92.50 |
| ResNet50 | 82.60 | 91.90 | 71.40 | 79.70 |
| VGG16 | 86.60 | 93.80 | 86.70 | 90.09 |
| InceptionV3 | 89.30 | 92.80 | 89.00 | 90.09 |
| XceptionNet | 85.30 | 90.20 | 85.90 | 88.04 |

Table 3
Accuracy, Recall, Precision and F1 Score obtained by executing MobileNet, DenseNet121, ResNet50, VGG16, InceptionV3 and XceptionNet on the COVID-CT dataset after employing the super resolution operation.

| Model Name | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| MobileNet | 94.12 | 96.11 | 96.11 | 96.11 |
| DenseNet121 | 88.24 | 92.36 | 84.72 | 88.36 |
| ResNet50 | 73.53 | 75.49 | 70.11 | 72.60 |
| VGG16 | 85.29 | 83.38 | 93.37 | 87.54 |
| InceptionV3 | 94.10 | 96.53 | 92.82 | 94.57 |
| XceptionNet | 85.29 | 87.28 | 85.79 | 86.52 |

Accuracy represents the total number of correctly identified cases, and can be formulated as per Equation (7):

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{7}$$

When the dataset is imbalanced, the precision-recall pair serves as a useful metric for judging performance. Precision is a measure of outcome relevancy in information retrieval, while recall is a measure of how often genuinely valid items are retrieved. Equations (8) and (9) represent Precision and Recall (Sensitivity) respectively:

$$\text{Precision} = \frac{TP}{TP + FP} \tag{8}$$

$$\text{Sensitivity} = \frac{TP}{TP + FN} \tag{9}$$

Also, the F1 score is considered more informative than the Accuracy score, especially when the dataset is not balanced. It is the harmonic mean of Precision and Recall, and can be evaluated as given in Equation (10):

$$F1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{10}$$
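The four measures of Equations (7)–(10) can be reproduced with a small helper such as the one below; it is a self-contained illustration with made-up counts, not the authors' evaluation script.

```python
# Compute Equations (7)-(10) directly from confusion-matrix counts.
def classification_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)          # Eq. (7)
    precision = tp / (tp + fp)                          # Eq. (8)
    recall = tp / (tp + fn)                             # Eq. (9), sensitivity
    f1 = 2 * precision * recall / (precision + recall)  # Eq. (10)
    return accuracy, precision, recall, f1

# Example with made-up counts:
print(classification_metrics(tp=96, tn=90, fp=4, fn=4))
```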


Fig. 11. Plots of various evaluation parameters, viz. (a) Accuracy, (b) Recall, (c) Precision and (d) F1 Score as obtained from ResNet, VGG16, Xception, MobileNet, DenseNet and InceptionV3 using the COVID-CT dataset (with super resolution).

To prove the role of super resolution in the proposed methodology, all the transfer learning models were first executed on the CT images taken from COVID-CT [27] and SARS-COV-2 CT [26] without performing any image super resolution. The results obtained for Accuracy, Precision, Recall and F1 Score are exhibited in Table 2. Table 3 presents the results on the same evaluation metrics, but after performing image super resolution during the pre-processing phase. The values obtained after deploying super resolution were better than in the scenario where no super resolution was employed.

For the COVID-CT dataset (after super resolution), Fig. 11 plots the parameters Accuracy, Recall, Precision and F1 Score over 25 epochs for the various models under consideration.

Similarly, for the SARS-COV-2 CT dataset, the results obtained for all the selected parameters are presented in Table 4, while Table 5 provides the results evaluated on the same parameters after performing the image super resolution. Again, the values obtained after deploying super resolution were better than in the scenario where no super resolution was employed.

For the SARS-COV-2 CT dataset (after super resolution), Fig. 12 plots the parameters Accuracy, Recall, Precision and F1 Score over 25 epochs for the various models under consideration.

Table 4
Accuracy, Recall, Precision and F1 Score obtained by executing MobileNet, DenseNet121, ResNet50, VGG16, InceptionV3 and XceptionNet on the SARS-COV-2 CT dataset without employing any image super resolution operation.

| Model Name | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| MobileNet | 98.00 | 98.40 | 98.00 | 98.20 |
| DenseNet121 | 98.00 | 98.50 | 94.90 | 96.60 |
| ResNet50 | 93.50 | 93.60 | 95.30 | 93.50 |
| VGG16 | 98.00 | 98.00 | 99.40 | 99.20 |
| InceptionV3 | 95.10 | 97.80 | 93.80 | 95.70 |
| XceptionNet | 95.10 | 95.20 | 96.60 | 95.90 |

Table 5
Accuracy, Recall, Precision and F1 Score obtained by executing MobileNet, DenseNet121, ResNet50, VGG16, InceptionV3 and XceptionNet on the SARS-COV-2 CT dataset after employing the super resolution operation.

| Model Name | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| MobileNet | 100.00 | 100.00 | 100.00 | 100.00 |
| DenseNet121 | 100.00 | 100.00 | 100.00 | 100.00 |
| ResNet50 | 97.59 | 97.57 | 97.57 | 97.50 |
| VGG16 | 100.00 | 100.00 | 100.00 | 100.00 |
| InceptionV3 | 100.00 | 100.00 | 100.00 | 100.00 |
| XceptionNet | 98.80 | 96.68 | 100.00 | 98.30 |
2 CT-Scan, for the MobileNet model, the sensitivity scores were found to
all the transfer learning models were first executed on the CT images
be 96.11% and 100% respectively; precision scores were 96.11% and


Fig. 12. Plots of various evaluation parameters, viz. (a) Accuracy, (b) Recall, (c) Precision and (d) F1 Score as obtained from ResNet, VGG16, Xception, MobileNet, DenseNet, and InceptionV3 using the SARS-COV-2 CT dataset (with super resolution).

5. Conclusion and future scope

CT scans were taken from two benchmark datasets for this research study, and all the images were enhanced in the pre-processing phase using super-resolution deep neural networks. Transfer learning models were then employed to label the images as positive or negative for COVID-19. The MobileNet model produced better results as compared to its peer models. On the benchmark datasets, viz. COVID-CT and SARS-COV-2 CT-Scan, for the MobileNet model the sensitivity scores were found to be 96.11% and 100% respectively; the precision scores were 96.11% and 100% respectively; the F1 scores were recorded as 96.11% and 100% respectively; and the accuracy was to the tune of 94.12% and 100% respectively. The proposed work can be customized further by stacking hybrid pre-trained algorithms.

Author contributions

Conceptualization, Vinay Arora and Eddie Yin-Kwee Ng; Data curation, Rohan Leekha and Medhavi Darshan; Formal analysis, Eddie Yin-Kwee Ng; Investigation, Vinay Arora; Methodology, Rohan Singh Leekha and Medhavi Darshan; Project administration, Eddie Yin-Kwee Ng and Vinay Arora; Validation, Eddie Yin-Kwee Ng, Medhavi Darshan, Arshdeep Singh; Visualization, Vinay Arora, Rohan Leekha and Eddie Yin-Kwee Ng; Writing – original draft, Vinay Arora, Medhavi Darshan; Writing – review & editing, Vinay Arora and Rohan Singh Leekha.

Funding statement

No funding has been received.

Ethical compliance

Research experiments conducted in this article with animals or humans were approved by the Ethical Committee and responsible authorities of our research organization(s), following all guidelines, regulations, and legal and ethical standards as required for humans or animals.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] A. Tomar, N. Gupta, Prediction for the spread of COVID-19 in India and effectiveness of preventive measures, Sci. Total Environ. 728 (2020) 138762, https://doi.org/10.1016/j.scitotenv.2020.138762.
[2] S. Kushwaha, S. Bahl, A.K. Bagha, K.S. Parmar, M. Javaid, A. Haleem, R.P. Singh, Significant applications of machine learning for COVID-19 pandemic, J. Industrial Integr. Manag. 5 (2020), https://doi.org/10.1142/S2424862220500268.
[3] Y. Jiang, D. Guo, C. Li, T. Chen, R. Li, High-resolution CT features of the COVID-19 infection in Nanchong City: initial and follow-up changes among different clinical types, Radiol. Infect. Dis. 7 (2020) 71–77, https://doi.org/10.1016/j.jrid.2020.05.001.
[4] H. Panwar, P. Gupta, M.K. Siddiqui, R. Morales-Menendez, P. Bhardwaj, V. Singh, A deep learning and grad-CAM based color visualization approach for fast detection of COVID-19 cases using chest X-ray and CT-Scan images, Chaos, Solit. Fractals 140 (2020) 110190, https://doi.org/10.1016/j.chaos.2020.110190.
[5] Q. Ni, Z.Y. Sun, L. Qi, W. Chen, Y. Yang, L. Wang, X. Zhang, L. Yang, Y. Fang, Z. Xing, Z. Zhou, Y. Yu, G.M. Lu, L.J. Zhang, A deep learning approach to characterize 2019 coronavirus disease (COVID-19) pneumonia in chest CT images, Eur. Radiol. 30 (2020) 6517–6527, https://doi.org/10.1007/s00330-020-07044-9.
[6] L.S. Xiao, P. Li, F. Sun, Y. Zhang, C. Xu, H. Zhu, F.Q. Cai, Y.L. He, W.F. Zhang, S.-C. Ma, C. Hu, M. Gong, L. Liu, W. Shi, H. Zhu, Development and validation of a deep learning-based model using computed tomography imaging for predicting disease severity of Coronavirus disease 2019, Front. Bioeng. Biotechnol. 8 (2020) 898, https://doi.org/10.3389/fbioe.2020.00898.
[7] A.K. Mishra, S.K. Das, P. Roy, S. Bandyopadhyay, Identifying COVID19 from chest CT images: a deep convolutional neural networks based approach, J. Healthcare Eng. 2020 (2020), https://doi.org/10.1155/2020/8843664.
[8] H. Ko, H. Chung, W.S. Kang, K.W. Kim, Y. Shin, S.J. Kang, J.H. Lee, Y.J. Kim, N.Y. Kim, H. Jung, J. Lee, COVID-19 pneumonia diagnosis using a simple 2D deep learning framework with a single chest CT image: model development and validation, J. Med. Internet Res. 22 (2020) e19569, https://doi.org/10.2196/19569.
[9] J. Song, H. Wang, Y. Liu, W. Wu, G. Dai, Z. Wu, P. Zhu, W. Zhang, K.W. Yeom, K. Deng, End-to-end automatic differentiation of the coronavirus disease 2019 (COVID-19) from viral pneumonia based on chest CT, Eur. J. Nucl. Med. Mol. Imag. 47 (2020) 2516–2524, https://doi.org/10.1007/s00259-021-05267-6.
[10] E. Matsuyama, A deep learning interpretable model for novel coronavirus disease (COVID-19) screening with chest CT images, J. Biomed. Sci. Eng. 13 (2020) 140, https://doi.org/10.4236/jbise.2020.137014.
[11] A. Jaiswal, N. Gianchandani, D. Singh, V. Kumar, M. Kaur, Classification of the COVID-19 infected patients using DenseNet201 based deep transfer learning, J. Biomol. Struct. Dyn. (2020) 1–8, https://doi.org/10.1080/07391102.2020.1788642.
[12] M. Loey, G. Manogaran, N.E.M. Khalifa, A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images, Neural Comput. Appl. (2020) 1–13, https://doi.org/10.1007/s00521-020-05437-x.
[13] D. Singh, V. Kumar, Vaishali, M. Kaur, Classification of COVID-19 patients from chest CT images using multi-objective differential evolution–based convolutional neural networks, Eur. J. Clin. Microbiol. Infect. Dis. 39 (2020) 1379–1389, https://doi.org/10.1007/s10096-020-03901-z.
[14] L. Li, L. Qin, Z. Xu, Y. Yin, X. Wang, B. Kong, J. Bai, Y. Lu, Z. Fang, Q. Song, K. Cao, D. Liu, G. Wang, Q. Xu, X. Fang, S. Zhang, J. Xia, J. Xia, Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: evaluation of the diagnostic accuracy, Radiology 296 (2020) E65–E71, https://doi.org/10.1148/radiol.2020200905.
[15] C. Do, L. Vu, An approach for recognizing COVID-19 cases using convolutional neural networks applied to CT scan images, in: Applications of Digital Image Processing XLIII, International Society for Optics and Photonics, 2020, 1151034, https://doi.org/10.1117/12.2576276.
[16] S. Wang, Y. Zha, W. Li, Q. Wu, X. Li, M. Niu, M. Wang, X. Qiu, H. Li, H. Yu, W. Gong, Y. Bai, L. Li, Y. Zhu, L. Wang, J. Tian, A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis, Eur. Respir. J. 56 (2020), https://doi.org/10.1183/13993003.00775-2020.
[17] X. He, X. Yang, S. Zhang, J. Zhao, Y. Zhang, E. Xing, P. Xie, Sample-efficient deep learning for COVID-19 diagnosis based on CT scans, medRxiv (2020), https://doi.org/10.1101/2020.04.13.20063941.
[18] A. Sakagianni, G. Feretzakis, D. Kalles, C. Koufopoulou, Setting up an easy-to-use machine learning pipeline for medical decision support: a case study for COVID-19 diagnosis based on deep learning with CT scans, in: The Importance of Health Informatics in Public Health during a Pandemic, Stud. Health Technol. Inform. 272 (2020) 13, https://doi.org/10.3233/shti200481.
[19] H. Alshazly, C. Linse, E. Barth, T. Martinetz, Explainable COVID-19 detection using chest CT scans and deep learning, Sensors 21 (2021) 455, https://doi.org/10.3390/s21020455.
[20] P. Silva, E. Luz, G. Silva, G. Moreira, R. Silva, D. Lucio, D. Menotti, COVID-19 detection in CT images with deep learning: a voting-based scheme and cross-datasets analysis, Inform. Med. Unlocked 20 (2020) 100427, https://doi.org/10.1016/j.imu.2020.100427.
[21] N. Ewen, N. Khan, Targeted self supervision for classification on a small COVID-19 CT scan dataset, arXiv preprint arXiv:2011.10188, 2020.
[22] D.A. Ragab, O. Attallah, FUSI-CAD: Coronavirus (COVID-19) diagnosis based on the fusion of CNNs and handcrafted features, PeerJ Comput. Sci. 6 (2020) e306, https://doi.org/10.7717/peerj-cs.306.
[23] Z. Zhang, S. Yu, W. Qin, X. Liang, Y. Xie, G. Cao, CT super resolution via zero shot learning, arXiv preprint arXiv:2012.08943, 2020.
[24] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, Y. Fu, Residual dense network for image super-resolution, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, Utah, June 18–22, 2018, pp. 2472–2481.
[25] M. Li, S. Shen, W. Gao, W. Hsu, J. Cong, Computed tomography image enhancement using 3D convolutional neural network, in: D. Stoyanov, et al. (Eds.), Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, DLMIA 2018, ML-CDS 2018, Lecture Notes in Computer Science, vol. 11045, Springer, Cham, 2018, https://doi.org/10.1007/978-3-030-00889-5_33.
[26] Dataset SARS-COV-2 CT, https://www.kaggle.com/plameneduardo/sarscov2-ctscan-dataset, as accessed in November 2020.
[27] Dataset COVID-CT, https://github.com/UCSD-AI4H/COVID-CT, as accessed on 15th December 2020.
[28] DIV2K Dataset, https://data.vision.ee.ethz.ch/cvl/DIV2K/, as accessed on 25th December 2020.
[29] A. Sufian, A. Ghosh, A.S. Sadiq, F. Smarandache, A survey on deep transfer learning to edge computing for mitigating the COVID-19 pandemic, J. Syst. Architect. 108 (2020) 101830, https://doi.org/10.1016/j.sysarc.2020.101830.
[30] T. Zhou, H. Lu, Z. Yang, S. Qiu, B. Huo, Y. Dong, The ensemble deep learning model for novel COVID-19 on CT images, Appl. Soft Comput. 98 (2021) 106885, https://doi.org/10.1016/j.asoc.2020.106885.
