Abstract
Computer-aided diagnosis (CAD) systems are considered a powerful tool for physicians to support the identification of the novel Coronavirus Disease 2019 (COVID-19) using medical imaging modalities. Therefore, this article proposes a new framework of cascaded deep learning classifiers to enhance the performance of these CAD systems for highly suspected COVID-19 and pneumonia diseases in X-ray images. Our proposed deep learning framework constitutes two major advancements. First, the complicated multi-label classification of X-ray images has been simplified into a series of binary classifiers, one for each tested health status, mimicking the clinical situation of diagnosing potential diseases for a patient one at a time. Second, the cascaded architecture of COVID-19 and pneumonia classifiers is flexible enough to use different fine-tuned deep learning models simultaneously, achieving the best performance in confirming infected cases. This study includes eleven pre-trained convolutional neural network models, such as the Visual Geometry Group network (VGG) and the Residual Neural Network (ResNet). They have been successfully tested and evaluated on a public X-ray image dataset covering normal cases and three diseased cases. The results of the proposed cascaded classifiers showed that the VGG16, ResNet50V2, and Dense Convolutional Network (DenseNet169) models achieved the best detection accuracy for COVID-19, viral (Non-COVID-19) pneumonia, and bacterial pneumonia images, respectively. Furthermore, the performance of our cascaded deep learning classifiers is superior to the multi-label classification methods of COVID-19 and pneumonia diseases reported in previous studies. Therefore, the proposed deep learning framework presents a good option to be applied in the clinical routine to assist the diagnostic procedures of COVID-19 infection.
Introduction
Coronavirus Disease 2019 (COVID-19) initiated a pandemic in December 2019 in the city of Wuhan, China, causing a Public Health Emergency of International Concern (PHEIC) [1]. COVID-19 was named by the World Health Organization (WHO) as a novel infectious disease, and it belongs to the Coronaviruses (CoV), a group of perilous viruses [2, 3]. In some cases it results in a critical respiratory condition similar to Severe Acute Respiratory Syndrome (SARS-CoV), leading to breathing failure and eventually death. Recently, situation report no. 74 of the WHO announced that the risk assessment of COVID-19 was very high at the global level on 3 April 2020 [4, 5]. At that date, the total number of cases had reached 972,303 confirmed COVID-19 patients and 50,322 deaths worldwide. In addition, other common lung infections such as viral and bacterial pneumonia lead to thousands of deaths every year [6]. These pneumonia diseases cause an infection of one or both lungs, with the formation of pus and other fluids in the air sacs. Symptoms of viral pneumonia occur gradually and are usually mild, whereas bacterial pneumonia is more severe, especially among children [7]. This type of pneumonia can affect many lobes of the lung.
The gold standard for diagnosing common pneumonia diseases and Coronaviruses is the reverse transcription polymerase chain reaction (RT-PCR) assay of the sputum [8]. However, RT-PCR tests have shown high false-negative rates when confirming positive COVID-19 cases. Alternatively, radiological examinations using chest X-ray and computed tomography (CT) scans are now being used to identify the health status of infected patients, including children and pregnant women [9, 10], regardless of the potential side effects of ionizing radiation exposure. CT imaging presents an effective method for screening, diagnosis, and progress assessment of patients with COVID-19 [11]. Nevertheless, clinical studies demonstrated that a positive chest X-ray may obviate the need for CT scans and reduce the clinical burden on CT suites during this pandemic outbreak [12, 13]. The American College of Radiology (ACR) recommended the utilization of portable chest radiography to minimize the risk of Coronavirus infection, because the decontamination of CT rooms after scanning COVID-19 patients may interrupt this radiological service [14]. Also, chest CT screening requires a higher radiation dose to scan patients and results in relatively expensive hospital bills [15]. In contrast, conventional X-ray machines are widely available and portable in hospitals and clinical centers, giving a quick scan of the patient's lungs as two-dimensional (2D) images. Therefore, chest X-ray scans present the first tool for clinicians to confirm positive COVID-19 cases [10, 16]. In this paper, we focus only on enhancing the performance of chest X-ray scans for confirming patients with highly suspected COVID-19 or other pneumonia diseases, namely viral (Non-COVID-19) or bacterial infections.
However, X-ray images are still contrast-limited because of the low exposure dose to the patients, leading to difficulties in diagnosing soft tissues or diseased areas in the patient's thorax [15, 17]. Computer-aided diagnosis (CAD) systems present a practical solution to overcome these limitations of chest X-rays and to assist radiologists in automatically detecting potential diseases in low-contrast X-ray images [18, 19]. CAD systems combine advanced computer technologies with recent image processing algorithms to perform interventional tasks, e.g., tumor segmentation and 3D visualization of vital organs [20, 21]. Nowadays, artificial intelligence (AI) is widely applied to advance the diagnostic performance of many CAD systems for various medical applications, such as brain tumor classification or segmentation [22, 23], minimally invasive aortic valve implantation [17], and detection of pulmonary diseases [24, 25]. Recently, deep learning approaches have become the most advanced methods in the field of AI. They can learn patterns and features from labeled (or annotated) data and thus automatically perform specific tasks based on previous training, such as human sentiment classification [26] and computer vision applications in surgery [27]. Convolutional neural networks (CNNs) present a major branch of deep learning techniques used in many computer vision and sensitive medical applications in recent years [28]. CNNs have been used to analyze single- and multi-modal medical images in different applications of radiology, e.g., breast cancer classification and lung nodule detection [29, 30].
Related work
Automated diagnosis of COVID-19 and chest diseases has been investigated using medical CT and X-ray imaging modalities. Recent studies [31, 32] demonstrated that chest CT images and deep learning models play an important role in identifying and segmenting COVID-19 infections successfully. However, this study focuses only on the utilization of chest X-ray images as a first tool to detect positive COVID-19 patients and other pneumonia diseases, as presented in the following previous studies. Automatic classification of lung diseases in X-ray images was proposed for tuberculosis screening [33], detection of consolidation in case of increased lung density [34], and critical pneumonia diseases [35]. However, identifying the infectious status of the novel COVID-19 in chest X-rays is still an emerging research topic that has appeared in only a few peer-reviewed articles. Deep learning classifiers such as the pre-trained InceptionV3 model have been used to confirm the presence of COVID-19 infection [36, 37]. Adding other pneumonia diseases to the proposed CNN models improved the classification accuracy above 90.0%, as presented in [38, 39]. Drop-weight-based Bayesian CNNs were used to verify a high correlation of the prediction uncertainty with the prediction accuracy of COVID-19 in X-ray images [40]. Ozturk et al. [41] proposed the DarkCovidNet model as a single- and multi-label classifier for COVID-19 and pneumonia diseases in X-ray radiographs, based on the real-time object detection algorithm You Only Look Once (YOLO). The DarkCovidNet model achieved an accuracy of 98.08% for the binary classification task and 87.02% for the multi-class classification task. A threefold deep learning method has been proposed to perform top-down binary classification of whether the patient is healthy or affected by a pulmonary disease, including the COVID-19 case, with an accuracy of 97.0% [42]. This method can also provide a visualization of infected areas in X-ray images based on the Visual Geometry Group (VGG16) model. Another deep CNN model, called CoroNet, was proposed for multi-class classification of COVID-19 and pulmonary diseases [43]. It was developed based on the architecture of the pre-trained Xception model [44], resulting in an overall accuracy of 89.6%. Shaban et al. [45] introduced a new detection strategy for positive COVID-19 patients based on a hybrid selection method of the best image features and an enhanced K-nearest neighbor (EKNN) classifier. This COVID-19 detection strategy was tested on chest CT images only. A deep feature extractor of a CNN model and Bayesian optimization have been used for detecting COVID-19 infection in chest X-ray images [46]. The extracted features were exploited as input variables to machine learning algorithms such as the support vector machine (SVM). The SVM classifier achieved an accuracy of 98.7% in verifying positive COVID-19 patients.
Contributions of this study
This study aims at proposing a new deep learning framework as a radiological tool to support clinicians in the automated diagnosis of COVID-19 and pneumonia diseases using chest X-rays. Our preliminary work (COVIDX-Net) [36] introduced deep learning models to confirm only positive or negative COVID-19 cases. In this article, we present a new version of our deep learning framework that constitutes the following advancements:
- Developing a new cascaded form of deep learning image classifiers for confirming COVID-19, viral, and bacterial pneumonia diseases.
- Using multiple selective and reliable deep learning models to achieve the best classification performance for pulmonary diseases in X-ray images.
- Proposing an architecture of cascaded classifiers that deploys different deep learning models and thus obtains better performance than the multi-label classifiers of previous studies.
- Conducting extensive tests and evaluation of eleven deep learning models on a public X-ray dataset to verify the best performance of the proposed classifiers for detecting COVID-19 and other pneumonia diseases.
- Demonstrating the feasibility of applying our final recommended classifiers to enhance image-guided diagnosis of COVID-19, viral pneumonia, and bacterial pneumonia infections during the pandemic.
The remainder of this article is divided into the following sections. “Materials and methods” describes the workflow of our proposed deep learning framework to automatically detect COVID-19, viral pneumonia, and bacterial pneumonia in 2D radiographic images. The next section presents the experimental results and evaluation of all tested image classifiers. The conclusions and outlook of this study are given in the last section.
Materials and methods
Dataset
In this study, we used the public dataset of X-ray images including positive COVID-19 and other pneumonia cases collected by Dr. Cohen at the University of Montreal [47]. The dataset contains 306 X-ray images in four classes: 79 normal cases, 69 positive COVID-19 images, 79 images of viral (Non-COVID-19) pneumonia, and 79 bacterial pneumonia cases. Table 1 illustrates the distribution of X-ray images for training, validation, and testing of our proposed deep learning framework. The size of the tested images ranges from 1088 × 688 to 2567 × 2190 pixels. The main features of positive COVID-19 in chest X-rays are ground-glass opacification (GGO) and occasional consolidation in bilateral patchy areas [10].
Description of proposed cascaded image classifiers
We propose a new deep learning framework for detecting COVID-19, viral, and bacterial pneumonia infections using chest X-ray scans. Figure 1 shows the workflow of our proposed framework, which includes eleven different pre-trained CNN models, namely VGG16 [48], VGG19, Xception [44], Dense Convolutional Network (DenseNet121) [49], DenseNet169, DenseNet201, Residual Neural Network (ResNet50V2) [50], ResNet101V2, ResNet152V2, MobileNet [51], and MobileNetV2. In this study, the hyperparameter values of all deep learning models are fixed and listed in Table 2. As shown in Fig. 1, the proposed cascaded deep learning framework is composed of three main steps to perform automated diagnosis of COVID-19 and pneumonia diseases in X-ray scans as follows:
Step #1: Preprocessing
All images of the tested X-ray dataset are loaded and scaled to a constant size of 150 × 150 pixels for the next processing steps in the pipeline of the deep learning framework. Although the tested images are noise-free, it is possible to apply a non-linear noise cancellation tool such as morphological filters or the perceptual adaptation of the image (PAI) to increase the quality of the input X-ray images [52, 53]. The labels of the X-ray image data are binarized using one-hot encoding [54] to identify the positive cases of COVID-19, viral pneumonia, or bacterial pneumonia. Normal cases present negative results of the pathogenic diagnosis here, and the patient may still have a pulmonary disease not defined in this study.
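A minimal sketch of this preprocessing step is given below, assuming the X-ray images are organized in one folder per class; the folder layout and the helper function are illustrative and not part of the original implementation:

```python
import os
import cv2  # OpenCV for image loading and resizing
import numpy as np
from sklearn.preprocessing import LabelBinarizer

IMG_SIZE = (150, 150)  # constant input size used in this study

def load_xray_dataset(root_dir):
    """Load X-ray images from per-class subfolders and one-hot encode the labels.

    `root_dir` is assumed to contain one subfolder per class, e.g.
    'normal', 'covid19', 'viral_pneumonia', 'bacterial_pneumonia' (names assumed).
    """
    images, labels = [], []
    for class_name in sorted(os.listdir(root_dir)):
        class_dir = os.path.join(root_dir, class_name)
        if not os.path.isdir(class_dir):
            continue
        for file_name in os.listdir(class_dir):
            img = cv2.imread(os.path.join(class_dir, file_name))
            if img is None:
                continue  # skip unreadable files
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)    # X-rays as 3-channel RGB
            img = cv2.resize(img, IMG_SIZE)               # scale to 150 x 150 pixels
            images.append(img.astype("float32") / 255.0)  # normalize pixel values
            labels.append(class_name)
    X = np.array(images)
    y = LabelBinarizer().fit_transform(labels)            # one-hot encoded labels
    return X, y
```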
Step #2: Training and validation of the selected model
Any pre-trained deep learning model can be manually selected by the user for fine-tuning. To overcome the limited size of the X-ray image data, the top layers of all base models have been truncated and replaced with new fully connected layers to accomplish the fine-tuning task of all pre-trained deep learning models, as depicted in Fig. 1. The preprocessed X-ray dataset is split 85–15 because of the relatively small size of the dataset. Therefore, 15% of the dataset, approximately 10–12 images per class, is used for testing all classifiers. The remaining images are used to randomly construct the training and validation sets. Accuracy and loss metrics are then applied to evaluate the performance of both the training and validation of the trained model during 50 epochs for each selected deep image classifier.
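The following sketch illustrates this fine-tuning step for one backbone (VGG16 is shown); the 85–15 split, the SGD optimizer, the 50 epochs, and the batch size of 7 follow the settings reported in this study, while the size of the new fully connected layer, the learning rate, and the frozen base are illustrative assumptions:

```python
from sklearn.model_selection import train_test_split
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD

def build_finetuned_classifier(input_shape=(150, 150, 3)):
    """Truncate the top of a pre-trained backbone and attach a new binary head."""
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False                     # keep ImageNet features (illustrative choice)
    x = Flatten()(base.output)
    x = Dense(64, activation="relu")(x)        # new fully connected layer (size assumed)
    out = Dense(1, activation="sigmoid")(x)    # binary output: target disease vs. not
    model = Model(base.input, out)
    model.compile(optimizer=SGD(learning_rate=1e-3),  # SGD as in the study; rate assumed
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# X, y: preprocessed images and binary labels from Step #1
# X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.15)
# model = build_finetuned_classifier()
# history = model.fit(X_trainval, y_trainval, validation_split=0.1,
#                     epochs=50, batch_size=7)
```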
Step #3: Multi-stage chest disease classification
Our proposed deep learning framework is a multi-stage classification process in which the priority is to confirm or reject positive COVID-19 cases. If the result of the first classifier is negative, the image data is tested sequentially by the next two cascaded image classifiers of viral and bacterial pneumonia, as shown in Fig. 1. This study assumes that each patient has at most a single disease. At the end of the proposed workflow, the resulting performance of all tested models is quantitatively analyzed based on the classification evaluation metrics, as presented in the next section.
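A minimal sketch of this cascaded inference logic is given below, assuming three already fine-tuned binary models; the function name and the 0.5 decision threshold are illustrative:

```python
import numpy as np

def classify_cascade(image, covid_model, viral_model, bacterial_model, threshold=0.5):
    """Run the cascaded binary classifiers on one preprocessed 150 x 150 RGB image.

    The COVID-19 classifier is applied first; only negative cases are passed
    to the viral and then the bacterial pneumonia classifiers.
    """
    x = np.expand_dims(image, axis=0)  # add batch dimension

    if covid_model.predict(x)[0, 0] >= threshold:
        return "COVID-19"
    if viral_model.predict(x)[0, 0] >= threshold:
        return "Viral (Non-COVID-19) pneumonia"
    if bacterial_model.predict(x)[0, 0] >= threshold:
        return "Bacterial pneumonia"
    return "Normal (no disease defined in this study)"
```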
Classification performance metrics
In this study, the classification performance of the deep learning models was evaluated to quantify the results of diagnosing COVID-19 and the two other pneumonia diseases in X-ray images. Different classification metrics have been applied as follows. Table 3 illustrates a basic 2 × 2 confusion matrix, which is estimated based on hypothesis testing and cross validation [55]. Comparing the true labels of the tested images with the predicted results of a deep classifier, the four expected outcomes of the confusion matrix are defined as follows: true positive (TP) is the number of correctly diagnosed chest diseases; true negative (TN) is the number of correctly identified healthy cases; false positive (FP) is a test result that incorrectly indicates the presence of a chest disease when the disease is not present, while false negative (FN) is the opposite error, in which the test result fails to indicate an existing chest disease. Five basic performance metrics of the deep image classifiers are derived from the outcomes of the confusion matrix as follows.
Accuracy is the most important metric for evaluating the performance of each proposed classifier, as represented in (1). It is the number of correctly classified images divided by the total number of X-ray images in the dataset. The precision, recall (or sensitivity), specificity, and F1-score are also computed by the formulae given below in (2)–(5):
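The equations themselves did not survive in this version of the text; the following block reproduces (1)–(5) using the standard definitions of these metrics in terms of TP, TN, FP, and FN, which match the description above:

```latex
\begin{align*}
\text{Accuracy}             &= \frac{TP + TN}{TP + TN + FP + FN} \tag{1}\\
\text{Precision}            &= \frac{TP}{TP + FP} \tag{2}\\
\text{Recall (Sensitivity)} &= \frac{TP}{TP + FN} \tag{3}\\
\text{Specificity}          &= \frac{TN}{TN + FP} \tag{4}\\
\text{F1-score}             &= \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{5}
\end{align*}
```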
Experiments
Experimental setup
All X-ray images were converted to RGB format with a fixed size of 150 × 150 pixels. The cascaded deep learning classifiers were implemented using the free and open-source Anaconda Navigator with the Scientific Python Development Environment (Spyder v3.3.6), including the Keras package with TensorFlow [56], on a PC with an Intel(R) Core(TM) i7 2.2 GHz processor. The deep learning classifiers were run on a 4 GB NVIDIA GTX1050Ti graphical processing unit (GPU) with 16 GB of RAM.
Performance evaluation of cascaded classifiers
About 263 X-ray images of the dataset, including normal cases, COVID-19, and pneumonia diseases, were randomly chosen for the training and validation phase. In this study, the proposed CNN models were trained using the hyperparameter values illustrated above in Table 2. The number of epochs and the batch size are set to 50 and 7, respectively, to accomplish the targeted convergence with few iterations, avoiding a possible degradation problem. All deep neural networks are trained using the Stochastic Gradient Descent (SGD) optimizer because of its fast running time and accurate convergence. Data augmentation was exploited to compensate for the imbalanced dataset and to avoid overfitting during training [34]. Therefore, the training set of X-ray images was randomly flipped horizontally and vertically, rotated up to ± 15°, and shifted by ± 20% of the height and width ranges. The total number of X-ray images was thereby increased to 3325 images for each clinical case to enhance the training performance of the proposed deep learning classifiers, as illustrated in Table 4.
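A minimal sketch of this augmentation configuration using the Keras ImageDataGenerator is shown below; the flip, rotation, and shift ranges follow the text, while the generator-based training call is illustrative:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation settings stated in the text: horizontal/vertical flips,
# rotations up to +/- 15 degrees, and +/- 20% width/height shifts.
train_augmenter = ImageDataGenerator(
    horizontal_flip=True,
    vertical_flip=True,
    rotation_range=15,
    width_shift_range=0.2,
    height_shift_range=0.2,
)

# X_train, y_train: preprocessed training images and binary labels from Step #1
# model.fit(train_augmenter.flow(X_train, y_train, batch_size=7), epochs=50)
```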
Figure 2 shows the graphical evaluation of all proposed X-ray image classifiers with the four metrics of precision, recall, specificity, and F1-score given in (2)–(5). For confirming positive COVID-19 cases, VGG16 and ResNet50V2 achieved the best scores for the above metrics, while significant misclassification appeared with the DenseNet169 and MobileNetV2 classifiers. The ResNet101V2 and Xception models could not successfully detect viral pneumonia in the tested X-ray images, but the ResNet50V2 classifier showed the best performance in identifying this kind of pneumonia infection. The best classification scores for both bacterial pneumonia and normal cases were achieved by the DenseNet169 classifier.
Comparative computational times, including training and testing times (in seconds), of all tested image classifiers are depicted in Fig. 3. The training times of the proposed deep learning models range approximately from 12.0 min for the MobileNetV2 classifier to 173.0 min for the ResNet152V2 classifier. The main advantage of using either the MobileNet or the MobileNetV2 classifier is clearly verified by the shortest computational times, in the range of 12.0–35.5 min for the training phase and less than 2 s for testing all cases. The testing times of the deep learning models did not exceed 33.0 s (for the ResNet152V2 classifier), as shown in Fig. 3.
Figure 4 shows the comparative testing accuracy and the values of the area under the curve (AUC) for all tested cascaded image classifiers. The best scores of both accuracy and AUC, above 90% for all diseased and normal cases, were achieved by ResNet50V2. In contrast, the Xception classifier produced the worst accuracy and AUC scores, below 70%. The MobileNet and MobileNetV2 classifiers showed average values of 55.0–92.0% for classification accuracy and AUC, but these models can be re-optimized to improve their performance, considering their efficient computational times for confirming COVID-19 infection on smart devices.
Moreover, Table 5 gives another comparison between the characteristics of each deployed deep learning model and the corresponding classification accuracy for all tested cases. Although the MobileNetV2 model has the lowest total number of parameters, approximately 2.34 × 10⁶, it could not be trained well enough to handle the features of chest X-ray images. Consequently, the classification accuracy did not exceed 75% for the MobileNetV2 classifier. On the other hand, the VGG16 and VGG19 models are fully trained and achieve a good accuracy, above 90.0%, for the two cases of COVID-19 and bacterial pneumonia, but they failed to detect the viral pneumonia disease correctly. The highest total number of network parameters, about 58.5 × 10⁶, is recorded for the ResNet152V2 model, which could not classify the bacterial pneumonia disease accurately. The ResNet50V2 model, with minimal non-trainable parameters, achieved the most accurate classification results for the COVID-19 and viral pneumonia diseases. The DenseNet169 model has a relatively high number of non-trainable parameters, about 0.16 × 10⁶, but it can identify the case of bacterial pneumonia disease successfully, as illustrated in Table 5.
Furthermore, comparative accuracy scores of previous methods in the literature and our proposed cascaded classifiers are reported in Table 6. These deep learning classifiers have been evaluated on the same X-ray dataset [47]. It is evident that our proposed method achieved the best accuracy of 99.9% for multi-label classification of COVID-19 and pneumonia diseases, outperforming the methods of previous studies. The concatenation of different deep learning models in a cascaded form presents a powerful and unique advantage of this study, because the user can select the best-performing deep learning model to classify each diseased case separately instead of one complex classifier for all pulmonary diseases.
Final recommended image classifiers
The above evaluation results demonstrate that our proposed deep learning framework can be finalized as a multi-stage X-ray image classification, as depicted in Figs. 5 and 6. The VGG16 and ResNet50V2 models showed the best performance, with a classification accuracy of 99.9% for identifying COVID-19 cases. However, the VGG16 classifier is faster than the ResNet50V2 classifier in training and testing times, as shown in Fig. 3. Also, the structure of VGG16 is simpler than that of the ResNet50V2 model, as summarized in Table 5. Therefore, it is selected as the first stage of the classification procedure for detecting Coronavirus infection in X-ray images. In the second stage of viral (Non-COVID-19) pneumonia classification, the ResNet50V2 model achieved the best performance (see Fig. 4). Finally, the DenseNet169 model is selected to detect the bacterial pneumonia disease because of its performance, which is superior to the other deep learning models, as reported in Table 5. Also, Fig. 6 shows the resulting confusion matrix and the accuracy/loss curves, confirming the outstanding performance of our recommended deep image classifiers in this study.
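As a usage example, the three recommended models could be plugged into the cascaded inference sketched earlier; the saved-model file names and the classify_cascade helper are illustrative, not the authors' released code:

```python
from tensorflow.keras.models import load_model

# Fine-tuned binary classifiers saved after Step #2 (file names assumed)
covid_clf = load_model("vgg16_covid19.h5")              # stage 1: COVID-19 vs. not
viral_clf = load_model("resnet50v2_viral.h5")           # stage 2: viral pneumonia vs. not
bacterial_clf = load_model("densenet169_bacterial.h5")  # stage 3: bacterial pneumonia vs. not

# `image` is a preprocessed 150 x 150 x 3 array from Step #1
# diagnosis = classify_cascade(image, covid_clf, viral_clf, bacterial_clf)
```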
Developing a generic X-ray and/or CT image classifier to assist radiologists during the diagnostic procedures of COVID-19 and pulmonary diseases is still challenging. Many previous studies proposed effective deep learning models to perform this crucial task on many image datasets, but they did not offer a practical option for the manual supervision of clinicians to customize these classifiers according to their clinical environment. Our proposed framework provides this manual option to obtain recommended deep learning classifiers, as shown in Figs. 1 and 6. It can easily be re-trained on a new image dataset, adding or selecting other cascaded deep learning models to precisely identify COVID-19 infections and pulmonary diseases in chest X-ray images.
Conclusions and outlook
This study presented a new framework for the automated computer-aided diagnosis of COVID-19, viral, and bacterial pneumonia in chest X-rays, based on three cascaded deep learning classifiers. Compared to previous studies, the proposed cascaded classifiers achieved promising results in confirming positive COVID-19 cases using the VGG16 model. Also, the ResNet50V2 and DenseNet169 models identified the viral and bacterial pneumonia diseases successfully, as shown above in Fig. 6. Hence, we are currently working on the clinical implementation of our deep learning framework as a new CAD system in the diagnostic protocol for potential COVID-19 patients with other pneumonia diseases using the cost-effective X-ray imaging modality.
Furthermore, automated segmentation of COVID-19 infections in chest X-ray scans is the main prospect of this research work. This segmentation task would significantly assist clinicians in following up the disease progression in the lungs of infected patients, as described in [57]. To satisfy the security and privacy requirements for transmitting medical images over general communication networks [58], securing COVID-19 patient data will also be considered in the next version of our deep learning framework.
References
Rodriguez-Morales AJ, Cardona-Ospina JA, Gutiérrez-Ocampo E, Villamizar-Peña R, Holguin-Rivera Y, Escalera-Antezana JP, Alvarado-Arnez LE, Bonilla-Aldana DK, Franco-Paredes C, Henao-Martinez AF, Paniz-Mondolfi A, Lagos-Grisales GJ, Ramírez-Vallejo E, Suárez JA, Zambrano LI, Villamil-Gómez WE, Balbin-Ramon GJ, Rabaan AA, Harapan H, Dhama K, Nishiura H, Kataoka H, Ahmad T, Sah R (2020) Clinical, laboratory and imaging features of COVID-19: a systematic review and meta-analysis. Travel Med Infect Dis. https://doi.org/10.1016/j.tmaid.2020.101623
Paules CI, Marston HD, Fauci AS (2020) Coronavirus infections—more than just the common cold. JAMA 323(8):707–708. https://doi.org/10.1001/jama.2020.0757
Sohrabi C, Alsafi Z, O’Neill N, Khan M, Kerwan A, Al-Jabir A, Iosifidis C, Agha R (2020) World Health Organization declares global emergency: a review of the 2019 novel coronavirus (COVID-19). Int J Surg 76:71–76. https://doi.org/10.1016/j.ijsu.2020.02.034
World Health Organization (WHO), Coronavirus disease 2019 (COVID-19) Situation Report-74. https://www.who.int/docs/defaultsource/coronaviruse/situation-reports/20200403-sitrep-74-covid-19-mp.pdf. Accessed 1 Sept 2020
Reyad O (2020) Novel Coronavirus COVID-19 Strike on Arab Countries and Territories: A Situation Report I. arXiv:2003.09501 [cs.CY]
Lim YK, Kweon OJ, Kim HR, Kim T-H, Lee M-K (2019) Impact of bacterial and viral coinfection in community-acquired pneumonia in adults. Diagn Microbiol Infect Dis 94(1):50–54. https://doi.org/10.1016/j.diagmicrobio.2018.11.014
Scott JA, Brooks WA, Peiris JS, Holtzman D, Mulholland EK (2008) Pneumonia research to reduce childhood mortality in the developing world. J Clin Investig 118(4):1291–1300. https://doi.org/10.1172/JCI33947
Huang P, Liu T, Huang L, Liu H, Lei M, Xu W, Hu X, Chen J, Liu B (2020) Use of chest CT in combination with negative RT-PCR assay for the 2019 novel coronavirus but high clinical suspicion. Radiology 295(1):22–23. https://doi.org/10.1148/radiol.2020200330
Liu H, Liu F, Li J, Zhang T, Wang D, Lan W (2020) Clinical and CT imaging features of the COVID-19 pneumonia: focus on pregnant women and children. J Infect. https://doi.org/10.1016/j.jinf.2020.03.007
Ng M-Y, Lee EY, Yang J, Yang F, Li X, Wang H, Lui MM-S, Lo CS-Y, Leung B, Khong P-L, Hui CK-M, Yuen K-Y, Kuo MD (2020) Imaging profile of the COVID-19 infection: radiologic findings and literature review. Radiol Cardiothorac Imaging 2(1):e200034. https://doi.org/10.1148/ryct.2020200034
Ding J, Liu Y, Fu H, Gao J, Zhao X, Zheng J, Sun W, Ma X, Feng J, Liang P, Wu A, Liu J, Wang Y, Geng P, Chen Y, Li H (2020) Experience on radiological examinations and infection prevention for COVID-19 in radiology department. Radiol Infect Dis. https://doi.org/10.1016/j.jrid.2020.03.006
Jacobi A, Chung M, Bernheim A, Eber C (2020) Portable chest X-ray in coronavirus disease-19 (COVID-19): a pictorial review. Clin Imaging 64:35–42. https://doi.org/10.1016/j.clinimag.2020.04.001
Wong HYF, Lam HYS, Fong AH-T, Leung ST, Chin TW-Y, Lo CSY, Lui MM-S, Lee JCY, Chiu KW-H, Chung TW-H, Lee EYP, Wan EYF, Hung IFN, Lam TPW, Kuo MD, Ng M-Y (2020) Frequency and distribution of chest radiographic findings in patients positive for COVID-19. Radiology 296(2):E72–E78. https://doi.org/10.1148/radiol.2020201160
ACR recommendations for the use of chest radiography and computed tomography (CT) for suspected COVID-19 infection|American College of Radiology (2020). https://www.acr.org/Advocacy-and-Economics/ACR-Position-Statements/Recommendations-for-Chest-Radiography-and-CT-for-Suspected-COVID19-Infection. Accessed 9 Sept 2020
Kroft LJM, van der Velden L, Girón IH, Roelofs JJH, de Roos A, Geleijns J (2019) Added value of ultra–low-dose computed tomography, dose equivalent to chest X-ray radiography, for diagnosing chest pathology. J Thorac Imaging 34(3):179–186. https://doi.org/10.1097/rti.0000000000000404
Chen N, Zhou M, Dong X, Qu J, Gong F, Han Y, Qiu Y, Wang J, Liu Y, Wei Y, Xia J, Yu T, Zhang X, Zhang L (2020) Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study. Lancet 395(10223):507–513. https://doi.org/10.1016/S0140-6736(20)30211-7
Karar ME, Merk DR, Falk V, Burgert O (2016) A simple and accurate method for computer-aided transapical aortic valve replacement. Comput Med Imaging Graph 50:31–41. https://doi.org/10.1016/j.compmedimag.2014.09.005
Merk DR, Karar ME, Chalopin C, Holzhey D, Falk V, Mohr FW, Burgert O (2011) Image-guided transapical aortic valve implantation: sensorless tracking of stenotic valve landmarks in live fluoroscopic images. Innovations (Phila) 6(4):231–236. https://doi.org/10.1097/IMI.0b013e31822c6a77
Messerli M, Kluckert T, Knitel M, Rengier F, Warschkow R, Alkadhi H, Leschka S, Wildermuth S, Bauer RW (2016) Computer-aided detection (CAD) of solid pulmonary nodules in chest x-ray equivalent ultralow dose chest CT—first in vivo results at dose levels of 0.13 mSv. Eur J Radiol 85(12):2217–2224. https://doi.org/10.1016/j.ejrad.2016.10.006
He T, Hu J, Song Y, Guo J, Yi Z (2020) Multi-task learning for the segmentation of organs at risk with label dependence. Med Image Anal 61:101666. https://doi.org/10.1016/j.media.2020.101666
Hannan R, Free M, Arora V, Harle R, Harvie P (2020) Accuracy of computer navigation in total knee arthroplasty: a prospective computed tomography-based study. Med Eng Phys. https://doi.org/10.1016/j.medengphy.2020.02.003
Ghassemi N, Shoeibi A, Rouhani M (2020) Deep neural network with generative adversarial networks pre-training for brain tumor classification based on MR images. Biomed Signal Process Control 57:101678. https://doi.org/10.1016/j.bspc.2019.101678
Zeineldin RA, Karar ME, Coburger J, Wirtz CR, Burgert O (2020) DeepSeg: deep neural network framework for automatic brain tumor segmentation using magnetic resonance FLAIR images. Int J Comput Assist Radiol Surg 15(6):909–920. https://doi.org/10.1007/s11548-020-02186-z
Liang CH, Liu YC, Wu MT, Garcia-Castro F, Alberich-Bayarri A, Wu FZ (2020) Identifying pulmonary nodules or masses on chest radiography using deep learning: external validation and strategies to improve clinical practice. Clin Radiol 75(1):38–45. https://doi.org/10.1016/j.crad.2019.08.005
Hussein S, Kandel P, Bolan CW, Wallace MB, Bagci U (2019) Lung and pancreatic tumor characterization in the deep learning era: novel supervised and unsupervised learning approaches. IEEE Trans Med Imaging 38(8):1777–1787. https://doi.org/10.1109/TMI.2019.2894349
Ali MNY, Sarowar MG, Rahman ML, Chaki J, Dey N, Tavares JMRS (2019) Adam deep learning with SOM for human sentiment classification. IJACI 10:92–116. https://doi.org/10.4018/IJACI.2019070106
Hashimoto DA, Rosman G, Rus D, Meireles OR (2018) Artificial intelligence in surgery: promises and perils. Ann Surg 268(1):70–76. https://doi.org/10.1097/SLA.0000000000002693
Bernal J, Kushibar K, Asfaw DS, Valverde S, Oliver A, Martí R, Lladó X (2019) Deep convolutional neural networks for brain image analysis on magnetic resonance imaging: a review. Artif Intell Med 95:64–81. https://doi.org/10.1016/j.artmed.2018.08.008
Nguyen PT, Nguyen TT, Nguyen NC, Le TT (2019) Multiclass breast cancer classification using convolutional neural network. In: 2019 International symposium on electrical and electronics engineering (ISEE), 10–12 Oct 2019, pp 130–134. https://doi.org/10.1109/isee2.2019.8920916
Monkam P, Qi S, Ma H, Gao W, Yao Y, Qian W (2019) Detection and classification of pulmonary nodules using convolutional neural networks: a survey. IEEE Access 7:78075–78091. https://doi.org/10.1109/ACCESS.2019.2920980
Shinde GR, Kalamkar AB, Mahalle PN, Dey N, Chaki J, Hassanien AE (2020) Forecasting models for coronavirus disease (COVID-19): a survey of the state-of-the-art. SN Comput Sci 1(4):197. https://doi.org/10.1007/s42979-020-00209-9
Dey N, Rajinikanth V, Fong SJ, Kaiser MS, Mahmud M (2020) Social Group Optimization-assisted Kapur’s entropy and morphological segmentation for automated detection of COVID-19 infection from computed tomography images. Cogn Comput. https://doi.org/10.1007/s12559-020-09751-3
Rastoder E, Shaker SB, Naqibullah M, Wille MMW, Lund M, Wilcke JT, Seersholm N, Jensen SG (2019) Chest X-ray findings in tuberculosis patients identified by passive and active case finding: a retrospective study. J Clin Tuberc Other Mycobact Dis 14:26–30. https://doi.org/10.1016/j.jctube.2019.01.003
Behzadi-khormouji H, Rostami H, Salehi S, Derakhshande-Rishehri T, Masoumi M, Salemi S, Keshavarz A, Gholamrezanezhad A, Assadi M, Batouli A (2020) Deep learning, reusable and problem-based architectures for detection of consolidation on chest X-ray images. Comput Methods Programs Biomed 185:105162. https://doi.org/10.1016/j.cmpb.2019.105162
Jaiswal AK, Tiwari P, Kumar S, Gupta D, Khanna A, Rodrigues JJPC (2019) Identifying pneumonia in chest X-rays: a deep learning approach. Measurement 145:511–518. https://doi.org/10.1016/j.measurement.2019.05.076
Hemdan EE-D, Shouman MA, Karar ME (2020) COVIDX-Net: a framework of deep learning classifiers to diagnose COVID-19 in X-ray images. arXiv:2003.11055 [eess.IV]
Narin A, Kaya C, Pamuk Z (2020) Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. arXiv:2003.10849 [eess.IV]
Apostolopoulos ID, Mpesiana TA (2020) Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys Eng Sci Med. https://doi.org/10.1007/s13246-020-00865-4
Wang L, Wong A (2020) COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. arXiv:2003.09871 [eess.IV]
Ghoshal B, Tucker A (2020) Estimating uncertainty and interpretability in deep learning for coronavirus (COVID-19) detection. arXiv:2003.10769 [eess.IV]
Ozturk T, Talo M, Yildirim EA, Baloglu UB, Yildirim O, Rajendra Acharya U (2020) Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput Biol Med 121:103792. https://doi.org/10.1016/j.compbiomed.2020.103792
Brunese L, Mercaldo F, Reginelli A, Santone A (2020) Explainable deep learning for pulmonary disease and coronavirus COVID-19 detection from X-rays. Comput Methods Programs Biomed 196:105608. https://doi.org/10.1016/j.cmpb.2020.105608
Khan AI, Shah JL, Bhat MM (2020) CoroNet: a deep neural network for detection and diagnosis of COVID-19 from chest X-ray images. Comput Methods Programs Biomed 196:105581. https://doi.org/10.1016/j.cmpb.2020.105581
Chollet F (2017) Xception: deep learning with depthwise separable convolutions. In: Proceedings—30th IEEE conference on computer vision and pattern recognition, CVPR 2017, vol 2017-January. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/cvpr.2017.195
Shaban WM, Rabie AH, Saleh AI, Abo-Elsoud MA (2020) A new COVID-19 Patients Detection Strategy (CPDS) based on hybrid feature selection and enhanced KNN classifier. Knowl Based Syst 205:106270. https://doi.org/10.1016/j.knosys.2020.106270
Nour M, Cömert Z, Polat K (2020) A novel medical diagnosis model for COVID-19 infection detection based on deep features and Bayesian optimization. Appl Soft Comput. https://doi.org/10.1016/j.asoc.2020.106580
Cohen JP, Morrison P, Dao L (2020) COVID-19 image data collection. arXiv:2003.11597 [eess.IV]
Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 [cs.CV]
Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. In: Proceedings—30th IEEE conference on computer vision and pattern recognition, CVPR 2017, vol 2017-January. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/cvpr.2017.243
He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE computer society conference on computer vision and pattern recognition 2016-December, pp 770–778. https://doi.org/10.1109/cvpr.2016.90
Sandler M, Howard A, Zhu M, Zhmoginov A, Chen LC (2018) MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE computer society conference on computer vision and pattern recognition, pp 4510–4520. https://doi.org/10.1109/cvpr.2018.00474
Khosravy M, Gupta N, Marina N, Sethi IK, Asharif MR (2017) Perceptual adaptation of image based on Chevreul–Mach bands visual phenomenon. IEEE Signal Process Lett 24(5):594–598. https://doi.org/10.1109/LSP.2017.2679608
Sedaaghi MH, Daj R, Khosravi M (2001) Mediated morphological filters. In: Proceedings 2001 international conference on image processing (Cat. No.01CH37205), 7–10 Oct 2001, vol 693, pp 692–695. https://doi.org/10.1109/icip.2001.958213
Harris SL, Harris DM (2016) 3—Sequential logic design. In: Harris SL, Harris DM (eds) Digital design and computer architecture. Morgan Kaufmann, Boston, pp 108–171. https://doi.org/10.1016/B978-0-12-800056-4.00003-0
Sokolova M, Lapalme G (2009) A systematic analysis of performance measures for classification tasks. Inf Process Manag 45(4):427–437. https://doi.org/10.1016/j.ipm.2009.03.002
Gulli A, Kapoor A, Pal S (2019) Deep learning with TensorFlow 2 and Keras: regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API, 2nd edn. Packt Publishing, Birmingham
Oh Y, Park S, Ye JC (2020) Deep learning COVID-19 features on CXR using limited training data sets. IEEE Trans Med Imaging 39(8):2688–2700. https://doi.org/10.1109/TMI.2020.2993291
Reyad O, Hamed K, Karar ME (2020) Hash-enhanced elliptic curve bit-string generator for medical image encryption. J Intell Fuzzy Syst. https://doi.org/10.3233/jifs-201146 (preprint)
Acknowledgements
We would like to thank the editors and anonymous reviewers for their valuable comments and suggestions to enhance the readability of this manuscript.
Funding
No funding source.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest to disclose.
Ethical consent
The authors did not perform any experiment on animals or patients for this study.
Code availability
The code is available upon request to the corresponding author, Dr. Karar ([email protected]).