Transfer Learning Based Approach For Detecting COV - 2021 - Computers in Biology
Keywords: COVID-19; CT Scan; MobileNet; Transfer learning; Pandemic; Deep learning

Abstract

This research work aims to identify COVID-19 through deep learning models using lung CT-scan images. To enhance the lung CT scans, a residual dense neural network was applied for super resolution. The experimentation has been carried out using benchmark datasets, namely SARS-COV-2 CT-Scan and COVID-CT. To mark a super-resolved CT scan as COVID-19 positive or negative, existing pre-trained models such as XceptionNet, MobileNet, InceptionV3, DenseNet, ResNet50, and VGG (Visual Geometry Group) 16 have been used. Applying super resolution with the residual dense neural network in the pre-processing step improved the accuracy, F1 score, precision, and recall of the proposed model. On the COVID-CT and SARS-COV-2 CT-Scan datasets, the MobileNet model provided a precision of 94.12% and 100%, respectively.
1. Introduction

The World Health Organization (WHO) received the first report of Coronavirus disease 2019 (COVID-19) on December 31, 2019. On January 30, 2020, WHO declared the spread of COVID-19 a global health emergency. Coronavirus is a zoonotic virus, which means it began in animals before spreading to humans; transmission to humans occurred after contact with infected animals. Coronavirus can pass from one person to another through respiratory droplets when a person exhales, coughs, sneezes, or talks with others [1]. It is also believed that the virus may have transferred from bats to other species, such as snakes or pangolins, and then to humans.

Multiple COVID-19 complications leading to liver problems, pneumonia, respiratory failure, cardiovascular diseases, septic shock, etc. have been prompted by a condition called cytokine release syndrome, or a cytokine storm. This occurs when an infection prompts the immune system to flood the bloodstream with inflammatory proteins known as cytokines, which can damage tissues and organs.

Fig. 1 highlights the various methods used for testing coronavirus: molecular, serological, and scanning. Molecular tests look for signs of an infection that is still active. A sample is normally collected by swabbing the back of the throat with a cotton swab, and a polymerase chain reaction (PCR) test is performed on it; this test detects the virus's genetic material. A PCR test confirms a COVID-19 diagnosis after detecting two unique SARS-CoV-2 genes. Only active cases of COVID-19 can be detected through molecular tests; they cannot tell whether someone has recovered from the infection. Serological testing can identify antibodies generated by the body to fight the virus. Normally, a blood sample is needed for a serological test, and such a test is particularly helpful in identifying infections with mild to no symptoms [2]. Anyone who has recovered from COVID-19 possesses these antibodies, which can be found in blood and tissues throughout the body. Apart from swab and serological tests, organs and structures of the chest can be imaged using X-rays (radiography) or computed tomography (CT) scans [3]. A chest CT scan, a common imaging approach for detecting pneumonia, is a rapid and painless procedure. Recent research indicates that the sensitivity of CT for COVID-19 infection is 98%, which is greater than the 71% reported for Reverse Transcription Polymerase Chain Reaction (RT-PCR) testing. The COVID-19 classification based on the thorax CT necessitates the
Table 1. Datasets used in various studies showing their Sensitivity (Se)/Specificity (Sp)/Accuracy (Acc); columns: S. No., Author(s), Dataset, Dataset Source, Se/Sp/Acc.
Fig. 2. Steps followed for classifying lung CT into normal and abnormal for
COVID-19.
3. Proposed methodology
where H_{RDB,d} denotes the operations of the d-th RDB; H_{RDB,d} is an amalgamation of convolution and rectified linear unit (ReLU) layers. F_d can be taken as a local feature, as it is produced by the d-th RDB by fully utilizing each convolutional layer within the block. Dense feature fusion (DFF), involving global residual learning (GRL) and global feature fusion (GFF), was used to extract hierarchical features from the collection of RDBs. Fig. 4 displays the architecture of the RDB, where DFF makes full use of the features from all the preceding layers and can be represented as Equation (5):

F_{DF} = H_{DFF}(F_{-1}, F_0, F_1, …, F_D)    (5)

where F_{DF} denotes the DFF output feature maps obtained through the composite function H_{DFF}.

In the high-resolution (HR) space, an up-sampling net (UPNet) was stacked after extracting the global and local features in the low-resolution (LR) space. Equation (6) shows the output obtained from the RDN:

3.3. Image augmentation

Deep learning models can be trained to classify effectively only when a large dataset is available, but a vast dataset is not always obtainable. In machine learning and deep learning, the data size can therefore be raised artificially to enhance classification accuracy, and data augmentation approaches increase the algorithm's learning and network capability significantly. Because of their respective shortcomings, texture-, color-, and geometry-based data augmentation techniques are not equally common: while augmentation provides data diversity, it has limitations such as additional memory, training time, and costs associated with the conversion measurements. To supplement the lung CT images in the current investigation, only geometric alterations were performed. The count of images in the dataset becomes roughly three times larger after executing data augmentation; a randomly generated number was used to rotate the lung CT images counter-clockwise by 0° to 359°.
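The rotation-only augmentation described above can be sketched as follows. This is a minimal illustration, not the authors' code: the directory layout, helper name, and the choice of two extra rotated copies per slice (so that the dataset roughly triples) are assumptions made only for the example.

```python
# Minimal sketch of the geometric (rotation-only) augmentation of Section 3.3.
# Assumption: each CT slice gets two extra randomly rotated copies, roughly
# tripling the dataset; paths and file extensions are illustrative.
import random
from pathlib import Path
from PIL import Image

def augment_with_rotations(src_dir: str, dst_dir: str, copies_per_image: int = 2) -> None:
    """Rotate each lung CT slice counter-clockwise by a random angle in [0, 359] degrees."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for img_path in sorted(src.glob("*.png")):
        image = Image.open(img_path)
        image.save(dst / img_path.name)                 # keep the original slice
        for k in range(copies_per_image):
            angle = random.randint(0, 359)              # random rotation angle in degrees
            rotated = image.rotate(angle)               # PIL rotates counter-clockwise
            rotated.save(dst / f"{img_path.stem}_rot{k}_{angle}{img_path.suffix}")

# Example: augment_with_rotations("data/covid_ct/train", "data/covid_ct/train_aug")
```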
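For completeness, the residual dense block and the dense feature fusion of Equation (5) (Section 3.2, Fig. 4) can be sketched in Keras roughly as follows, after the RDN design of Zhang et al. [24]. The block depth, growth rate, channel count, and x2 upscaling factor are assumptions for illustration; the paper does not report these values in the excerpt.

```python
# Hedged Keras sketch of a residual dense block (RDB), dense feature fusion (DFF)
# with global residual learning, and a pixel-shuffle UPNet, per the RDN design [24].
import tensorflow as tf
from tensorflow.keras import layers

def residual_dense_block(x, growth: int = 32, n_convs: int = 4):
    """Densely connected Conv+ReLU layers, local feature fusion, local residual."""
    feats = [x]
    for _ in range(n_convs):
        inp = feats[0] if len(feats) == 1 else layers.Concatenate()(feats)
        feats.append(layers.Conv2D(growth, 3, padding="same", activation="relu")(inp))
    fused = layers.Conv2D(x.shape[-1], 1, padding="same")(layers.Concatenate()(feats))
    return layers.Add()([x, fused])                       # local residual learning

def build_rdn(scale: int = 2, channels: int = 64, n_blocks: int = 3):
    lr = layers.Input(shape=(None, None, 1))              # low-resolution CT slice
    f_minus1 = layers.Conv2D(channels, 3, padding="same")(lr)    # shallow feature F_-1
    f0 = layers.Conv2D(channels, 3, padding="same")(f_minus1)    # F_0, input to first RDB
    rdb_out, x = [], f0
    for _ in range(n_blocks):                             # F_1 ... F_D
        x = residual_dense_block(x)
        rdb_out.append(x)
    gff = layers.Conv2D(channels, 1, padding="same")(layers.Concatenate()(rdb_out))
    gff = layers.Conv2D(channels, 3, padding="same")(gff)         # global feature fusion
    f_df = layers.Add()([f_minus1, gff])                           # global residual learning
    up = layers.Conv2D(channels * scale * scale, 3, padding="same")(f_df)
    up = layers.Lambda(lambda t: tf.nn.depth_to_space(t, scale))(up)  # UPNet (pixel shuffle)
    sr = layers.Conv2D(1, 3, padding="same")(up)                   # super-resolved slice
    return tf.keras.Model(lr, sr)
```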
3.4. Transfer learning models for classification

Transfer learning is the process of moving the variables of a neural network trained on one dataset and task to a new data repository and task. Several deep neural networks trained on natural images share a peculiar property: in the first layers, the model learns features that appear to be universal, in the sense that they can be applied to a wide range of datasets and tasks, while the network's final layers gradually transition from general to task-specific features. Transfer learning can be an effective method for enabling training without overfitting when the target dataset is a fraction of the size of the base dataset [29,30]. Here, DenseNet121, MobileNet, VGG16, ResNet50, InceptionV3, and XceptionNet have been used as the base models, pre-trained on the ImageNet dataset for object recognition. ImageNet is a publicly available dataset of 1.28 million natural images divided into 1,000 categories. Python 3.6, Scikit-Learn 0.20.4, Keras 2.3.1, and TensorFlow 1.15.2 have been used to deploy the proposed methods. All the tests were run on a computer with an Intel Core i7 8th-generation processor running at 1.9 GHz, with 8 GB of RAM, Intel UHD Graphics 620, and Windows 10 installed.

Figs. 6–10 highlight the basic architecture of the transfer learning models and the customization finally deployed to obtain the classification results in this work.

4. Results and experiments

In this research work, COVID-19 positive patients who have been correctly categorized as COVID-19 positive are denoted by true positive (TP), while false positive (FP) denotes subjects who are normal but have been incorrectly labelled as corona positive. True negative (TN) denotes the COVID-19 negative subjects who have been recognized correctly as negative. Finally, COVID-19 positive patients misclassified as negative are denoted by false negative (FN). To assess the reliability of the proposed approach, F1 score, accuracy, precision, and recall have been taken as evaluation metrics.

Accuracy represents the proportion of correctly identified cases among all cases evaluated.
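In terms of TP, TN, FP, and FN as introduced above, the standard definitions of these metrics are as follows (the equation numbering used in the original is not reproduced in this excerpt):

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)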
Fig. 10. Basic architecture and customization in (a) MobileNet, and (b) DenseNet.
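As a rough illustration of the kind of customization summarized in Fig. 10(a), a pre-trained MobileNet backbone can be re-headed for the binary COVID-19/normal task along the following lines. The input size, head layers, optimizer, and frozen-layer policy are assumptions for the sketch; the figure itself, and hence the authors' exact configuration, is not reproduced here.

```python
# Hedged sketch: MobileNet pre-trained on ImageNet, adapted for binary
# COVID-19 vs. normal classification of (super-resolved) lung CT slices.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet

def build_covid_classifier(input_shape=(224, 224, 3)) -> tf.keras.Model:
    base = MobileNet(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False                           # keep ImageNet features fixed initially
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(128, activation="relu")(x)      # assumed custom head
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(1, activation="sigmoid")(x)   # probability of COVID-19 positive
    model = models.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_covid_classifier()
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=20)
```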
Table 2. Accuracy, Recall, Precision and F1 Score obtained by executing MobileNet, DenseNet121, ResNet50, VGG16, InceptionV3 and XceptionNet on the COVID-CT dataset without employing any image super resolution operation.

Model Name     Accuracy   Recall   Precision   F1 Score
MobileNet      86.60      95.70    85.04       90.02
DenseNet121    93.00      93.00    94.10       92.50
ResNet50       82.60      91.90    71.40       79.70
VGG16          86.60      93.80    86.70       90.09
InceptionV3    89.30      92.80    89.00       90.09
XceptionNet    85.30      90.20    85.90       88.04

Table 3. Accuracy, Recall, Precision and F1 Score obtained by executing MobileNet, DenseNet121, ResNet50, VGG16, InceptionV3 and XceptionNet on the COVID-CT dataset employing the super resolution operation.

Model Name     Accuracy   Recall   Precision   F1 Score
MobileNet      94.12      96.11    96.11       96.11
DenseNet121    88.24      92.36    84.72       88.36
ResNet50       73.53      75.49    70.11       72.60
VGG16          85.29      83.38    93.37       87.54
InceptionV3    94.10      96.53    92.82       94.57
XceptionNet    85.29      87.28    85.79       86.52
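Since Scikit-Learn is part of the reported software stack, metrics of the kind tabulated above can be computed from the true and predicted labels roughly as follows. The variable names, the 0.5 decision threshold, and the percentage scaling are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: computing Accuracy, Recall, Precision and F1 Score (as in
# Tables 2 and 3) from model outputs with scikit-learn. y_true holds the test
# labels (1 = COVID-19 positive) and y_prob the sigmoid outputs of a model.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate(y_true: np.ndarray, y_prob: np.ndarray, threshold: float = 0.5) -> dict:
    y_pred = (y_prob >= threshold).astype(int)       # binarize predicted probabilities
    return {
        "Accuracy":  100 * accuracy_score(y_true, y_pred),
        "Recall":    100 * recall_score(y_true, y_pred),
        "Precision": 100 * precision_score(y_true, y_pred),
        "F1 Score":  100 * f1_score(y_true, y_pred),
    }
```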
Fig. 11. Plots of various evaluation parameters, viz. (a) Accuracy, (b) Recall, (c) Precision and (d) F1 Score as obtained from ResNet, VGG16, Xception, MobileNet, DenseNet and InceptionV3 using COVID-CT-Dataset (with super resolution).
Fig. 12. Plots of various evaluation parameters, viz. (a) Accuracy, (b) Recall, (c) Precision and (d) F1 Score as obtained from ResNet, VGG16, Xception, MobileNet,
DenseNet, and InceptionV3 using SARS-COV-2 CT Dataset (with super resolution).
100% respectively; F1 scores were recorded as 96.11% and 100%, respectively; and accuracy was to the tune of 94.12% and 100%, respectively. The proposed work can be customized further by stacking hybrid pre-trained algorithms.

Author contributions

Conceptualization, Vinay Arora and Eddie Yin-Kwee Ng; Data curation, Rohan Leekha and Medhavi Darshan; Formal analysis, Eddie Yin-Kwee Ng; Investigation, Vinay Arora; Methodology, Rohan Singh Leekha and Medhavi Darshan; Project administration, Eddie Yin-Kwee Ng and Vinay Arora; Validation, Eddie Yin-Kwee Ng, Medhavi Darshan, Arshdeep Singh; Visualization, Vinay Arora, Rohan Leekha and Eddie Yin-Kwee Ng; Writing – original draft, Vinay Arora, Medhavi Darshan; Writing – review & editing, Vinay Arora and Rohan Singh Leekha.

Funding statement

No funding has been received.

Ethical compliance

Research experiments conducted in this article with animals or humans were approved by the Ethical Committee and responsible authorities of our research organization(s) following all guidelines, regulations, legal, and ethical standards as required for humans or animals.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] A. Tomar, N. Gupta, Prediction for the spread of COVID-19 in India and effectiveness of preventive measures, Sci. Total Environ. 728 (2020) 138762, https://doi.org/10.1016/j.scitotenv.2020.138762.
[2] S. Kushwaha, S. Bahl, A.K. Bagha, K.S. Parmar, M. Javaid, A. Haleem, R.P. Singh, Significant applications of machine learning for COVID-19 pandemic, J. Industrial Integr. Manag. 5 (2020), https://doi.org/10.1142/S2424862220500268.
[3] Y. Jiang, D. Guo, C. Li, T. Chen, R. Li, High-resolution CT features of the COVID-19 infection in Nanchong City: initial and follow-up changes among different clinical types, Radiol. Infect. Dis. 7 (2020) 71–77, https://doi.org/10.1016/j.jrid.2020.05.001.
[4] H. Panwar, P. Gupta, M.K. Siddiqui, R. Morales-Menendez, P. Bhardwaj, V. Singh, A deep learning and grad-CAM based color visualization approach for fast detection of COVID-19 cases using chest X-ray and CT-Scan images, Chaos, Solit. Fractals 140 (2020) 110190, https://doi.org/10.1016/j.chaos.2020.110190.
[5] Q. Ni, Z.Y. Sun, L. Qi, W. Chen, Y. Yang, L. Wang, X. Zhang, L. Yang, Y. Fang, Z. Xing, Z. Zhou, Y. Yu, G.M. Lu, L.J. Zhang, A deep learning approach to characterize 2019 coronavirus disease (COVID-19) pneumonia in chest CT images, Eur. Radiol. 30 (2020) 6517–6527, https://doi.org/10.1007/s00330-020-07044-9.
[6] L.S. Xiao, P. Li, F. Sun, Y. Zhang, C. Xu, H. Zhu, F.Q. Cai, Y.L. He, W.F. Zhang, S.-C. Ma, C. Hu, M. Gong, L. Liu, W. Shi, H. Zhu, Development and validation of a deep learning-based model using computed tomography imaging for predicting disease severity of Coronavirus disease 2019, Front. Bioeng. Biotechnol. 8 (2020) 898, https://doi.org/10.3389/fbioe.2020.00898.
[7] A.K. Mishra, S.K. Das, P. Roy, S. Bandyopadhyay, Identifying COVID19 from chest CT images: a deep convolutional neural networks based approach, J. Healthcare Eng. 2020 (2020), https://doi.org/10.1155/2020/8843664.
[8] H. Ko, H. Chung, W.S. Kang, K.W. Kim, Y. Shin, S.J. Kang, J.H. Lee, Y.J. Kim, N.Y. Kim, H. Jung, J. Lee, COVID-19 pneumonia diagnosis using a simple 2D deep learning framework with a single chest CT image: model development and validation, J. Med. Internet Res. 22 (2020) e19569, https://doi.org/10.2196/19569.
[9] J. Song, H. Wang, Y. Liu, W. Wu, G. Dai, Z. Wu, P. Zhu, W. Zhang, K.W. Yeom, K. Deng, End-to-end automatic differentiation of the coronavirus disease 2019 (COVID-19) from viral pneumonia based on chest CT, Eur. J. Nucl. Med. Mol. Imag. 47 (2020) 2516–2524, https://doi.org/10.1007/s00259-021-05267-6.
[10] E. Matsuyama, A deep learning interpretable model for novel coronavirus disease (COVID-19) screening with chest CT images, J. Biomed. Sci. Eng. 13 (2020) 140, https://doi.org/10.4236/jbise.2020.137014.
[11] A. Jaiswal, N. Gianchandani, D. Singh, V. Kumar, M. Kaur, Classification of the COVID-19 infected patients using DenseNet201 based deep transfer learning, J. Biomol. Struct. Dyn. (2020) 1–8, https://doi.org/10.1080/07391102.2020.1788642.
[12] M. Loey, G. Manogaran, N.E.M. Khalifa, A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images, Neural Comput. Appl. (2020) 1–13, https://doi.org/10.1007/s00521-020-05437-x.
[13] D. Singh, V. Kumar, Vaishali, M. Kaur, Classification of COVID-19 patients from chest CT images using multi-objective differential evolution-based convolutional neural networks, Eur. J. Clin. Microbiol. Infect. Dis. 39 (2020) 1379–1389, https://doi.org/10.1007/s10096-020-03901-z.
[14] L. Li, L. Qin, Z. Xu, Y. Yin, X. Wang, B. Kong, J. Bai, Y. Lu, Z. Fang, Q. Song, K. Cao, D. Liu, G. Wang, Q. Xu, X. Fang, S. Zhang, J. Xia, J. Xia, Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: evaluation of the diagnostic accuracy, Radiology 296 (2020) E65–E71, https://doi.org/10.1148/radiol.2020200905.
[15] C. Do, L. Vu, An approach for recognizing COVID-19 cases using convolutional neural networks applied to CT scan images, in: Applications of Digital Image Processing XLIII, International Society for Optics and Photonics, 2020, 1151034, https://doi.org/10.1117/12.2576276.
[16] S. Wang, Y. Zha, W. Li, Q. Wu, X. Li, M. Niu, M. Wang, X. Qiu, H. Li, H. Yu, W. Gong, Y. Bai, L. Li, Y. Zhu, L. Wang, J. Tian, A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis, Eur. Respir. J. 56 (2020), https://doi.org/10.1183/13993003.00775-2020.
[17] X. He, X. Yang, S. Zhang, J. Zhao, Y. Zhang, E. Xing, P. Xie, Sample-efficient deep learning for COVID-19 diagnosis based on CT scans, medRxiv, 2020, https://doi.org/10.1101/2020.04.13.20063941.
[18] A. Sakagianni, G. Feretzakis, D. Kalles, C. Koufopoulou, Setting up an easy-to-use machine learning pipeline for medical decision support: a case study for COVID-19 diagnosis based on deep learning with CT scans, Importance Health Informat. Publ. Health during a Pandemic 272 (2020) 13, https://doi.org/10.3233/shti200481.
[19] H. Alshazly, C. Linse, E. Barth, T. Martinetz, Explainable COVID-19 detection using chest CT scans and deep learning, Sensors 21 (2021) 455, https://doi.org/10.3390/s21020455.
[20] P. Silva, E. Luz, G. Silva, G. Moreira, R. Silva, D. Lucio, D. Menotti, COVID-19 detection in CT images with deep learning: a voting-based scheme and cross-datasets analysis, Inform. Med. Unlocked 20 (2020) 100427, https://doi.org/10.1016/j.imu.2020.100427.
[21] N. Ewen, N. Khan, Targeted self supervision for classification on a small COVID-19 CT scan dataset, 2020, arXiv preprint arXiv:2011.10188.
[22] D.A. Ragab, O. Attallah, FUSI-CAD: Coronavirus (COVID-19) diagnosis based on the fusion of CNNs and handcrafted features, PeerJ Comput. Sci. 6 (2020) e306, https://doi.org/10.7717/peerj-cs.306.
[23] Z. Zhang, S. Yu, W. Qin, X. Liang, Y. Xie, G. Cao, CT super resolution via zero shot learning, 2020, arXiv preprint arXiv:2012.08943.
[24] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, Y. Fu, Residual dense network for image super-resolution, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, Utah, June 18-22, 2018, pp. 2472–2481.
[25] M. Li, S. Shen, W. Gao, W. Hsu, J. Cong, Computed tomography image enhancement using 3D convolutional neural network, in: D. Stoyanov, et al. (Eds.), Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. DLMIA 2018, ML-CDS 2018, Lecture Notes in Computer Science, vol. 11045, Springer, Cham, 2018, https://doi.org/10.1007/978-3-030-00889-5_33.
[26] Dataset SARS-COV-2 CT. https://www.kaggle.com/plameneduardo/sarscov2-ctscan-dataset, accessed November 2020.
[27] Dataset COVID-CT. https://github.com/UCSD-AI4H/COVID-CT, accessed 15th December 2020.
[28] DIV2K Dataset. https://data.vision.ee.ethz.ch/cvl/DIV2K/, accessed 25th December 2020.
[29] A. Sufian, A. Ghosh, A.S. Sadiq, F. Smarandache, A survey on deep transfer learning to edge computing for mitigating the COVID-19 pandemic, J. Syst. Architect. 108 (2020) 101830, https://doi.org/10.1016/j.sysarc.2020.101830.
[30] T. Zhou, H. Lu, Z. Yang, S. Qiu, B. Huo, Y. Dong, The ensemble deep learning model for novel COVID-19 on CT images, Appl. Soft Comput. 98 (2021) 106885, https://doi.org/10.1016/j.asoc.2020.106885.