Diagnostics 11 01182 v2
Article
Automated Radiology Alert System for Pneumothorax
Detection on Chest Radiographs Improves Efficiency and
Diagnostic Performance
Cheng-Yi Kao 1,† , Chiao-Yun Lin 2,† , Cheng-Chen Chao 1 , Han-Sheng Huang 1 , Hsing-Yu Lee 1 , Chia-Ming Chang 1 ,
Kang Sung 1 , Ting-Rong Chen 1 , Po-Chang Chiang 1 , Li-Ting Huang 1 , Bow Wang 1 , Yi-Sheng Liu 1 ,
Jung-Hsien Chiang 3 , Chien-Kuo Wang 1 and Yi-Shan Tsai 1, *
1 Department of Medical Imaging, National Cheng Kung University Hospital, College of Medicine,
National Cheng Kung University, No. 1 University Road, Tainan 704, Taiwan;
[email protected] (C.-Y.K.); [email protected] (C.-C.C.);
[email protected] (H.-S.H.); [email protected] (H.-Y.L.);
[email protected] (C.-M.C.); [email protected] (K.S.);
[email protected] (T.-R.C.); [email protected] (P.-C.C.);
[email protected] (L.-T.H.); [email protected] (B.W.);
[email protected] (Y.-S.L.); [email protected] (C.-K.W.)
2 Department of Medical Imaging, E-DA Hospital, I-Shou University, No. 1 Yida Road, Jiaosu Village,
Yanchao District, Kaohsiung 824, Taiwan; [email protected]
3 Department of Computer Science and Information Engineering, College of Electrical Engineering and
Computer Science, National Cheng Kung University, No. 1 University Road, Tainan 704, Taiwan;
[email protected]
* Correspondence: [email protected]; Tel.: +886-6-2766108; Fax: +886-6-2766608
† Cheng-Yi Kao and Chiao-Yun Lin equally contributed to this article.

Citation: Kao, C.-Y.; Lin, C.-Y.; Chao, C.-C.; Huang, H.-S.; Lee, H.-Y.; Chang, C.-M.; Sung, K.; Chen, T.-R.; Chiang, P.-C.; Huang, L.-T.; et al. Automated Radiology Alert System for Pneumothorax Detection on Chest Radiographs Improves Efficiency and Diagnostic Performance. Diagnostics 2021, 11, 1182. https://doi.org/10.3390/diagnostics11071182

Academic Editor: Cesar A. Moran

Received: 13 May 2021; Accepted: 26 June 2021; Published: 29 June 2021

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Abstract: We aimed to set up an Automated Radiology Alert System (ARAS) for the detection of pneumothorax in chest radiographs by a deep learning model, and to compare its efficiency and diagnostic performance with the existing Manual Radiology Alert System (MRAS) at the tertiary medical center. This study retrospectively collected 1235 chest radiographs with pneumothorax labeling from 2013 to 2019, and 337 chest radiographs with negative findings in 2019 were separated into training and validation datasets for the deep learning model of ARAS. The efficiency before and after using the model was compared in terms of alert time and report time. During parallel running of the two systems from September to October 2020, chest radiographs prospectively acquired in the emergency department with age more than 6 years served as the testing dataset for comparison of diagnostic performance. The efficiency was improved after using the model, with mean alert time improving from 8.45 min to 0.69 min and the mean report time from 2.81 days to 1.59 days. The comparison of the diagnostic performance of both systems using 3739 chest radiographs acquired during parallel running showed that the ARAS was better than the MRAS as assessed in terms of sensitivity (recall), area under receiver operating characteristic curve, and F1 score (0.837 vs. 0.256, 0.914 vs. 0.628, and 0.754 vs. 0.407, respectively), but worse in terms of positive predictive value (PPV) (precision) (0.686 vs. 1.000). This study had successfully designed a deep learning model for pneumothorax detection on chest radiographs and set up an ARAS with improved efficiency and overall diagnostic performance.
As for negative cases, another retrospective search of the PACS and RIS for chest radiographs for health examination with negative findings in 2019 was performed, and 337 chest radiographs were identified. The positive cases with segmented areas of pneumothorax and negative cases were shuffled and separated into an 80%/20% split for training and validation datasets for the deep learning model. The flow chart of the data acquisition for the deep learning model for pneumothorax detection on chest radiographs is shown in Figure 1.
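The shuffle-and-split step can be sketched as follows. This is illustrative only: a straight 80% cut per class gives counts slightly different from the 979/278 training and 256/59 validation cases reported in Section 3.1, and the function name and seed are assumptions.

```python
import random

def stratified_split(positives, negatives, train_frac=0.8, seed=42):
    """Shuffle each class separately and split into training/validation
    subsets, mirroring the roughly 80%/20% division described above."""
    rng = random.Random(seed)
    splits = {"train": [], "val": []}
    for cases in (positives, negatives):
        cases = list(cases)
        rng.shuffle(cases)
        cut = int(len(cases) * train_frac)
        splits["train"].extend(cases[:cut])
        splits["val"].extend(cases[cut:])
    return splits

# The study's case counts: 1235 positive and 337 negative radiographs.
splits = stratified_split(range(1235), range(1235, 1235 + 337))
```

Splitting each class separately keeps the positive/negative ratio comparable between the training and validation subsets.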
2.2. Deep Learning Model

To achieve the goal of detecting pneumothorax, a well-designed model was proposed based on U-Net with ResNet34 as an encoder [10,11]. Utilizing multi-level features to generate discriminative pyramidal representations was crucial to detection performance. Our model utilized balanced feature pyramids in skip connections and integrated multi-level contextual features in the decoder [12]. The overall pneumothorax detection network and the pipeline and heatmap visualization of the balanced feature pyramid module are shown in Figure 2. The design of the deep learning model is described in Appendix A.
Figure 2. The overall pneumothorax detection network and the pipeline and heatmap visualization of the balanced feature pyramid module.
2.3. Manual and Automated Radiology Alert Systems

Both the MRAS and ARAS for pneumothorax were applied to the Emergency Department (ED). In the MRAS, the radiologic technologists in charge of taking the chest radiographs would alert the ED physicians via Short Message System (SMS) and then leave notes in the RIS notifying the reporting radiologist if pneumothorax was newly detected. Newly detected pneumothorax was defined as those without prior radiographs showing pneumothorax and without pigtail or chest tube drainage. In the ARAS, the deep learning model described previously would alert the ED physicians via SMS and leave notes in the RIS notifying the reporting radiologist if pneumothorax was detected. The SMS alerts were also sent to duty radiology residents for confirmation. The flow charts of both systems are shown in Figure 3.
Figure 3. The flow charts of the manual radiology alert system and the automated radiology alert system. Abbreviations: MRAS, Manual Radiology Alert System; ARAS, Automated Radiology Alert System; SMS, Short Message System; RIS, Radiology Information System.
2.4. Efficiency of Deep Learning Model

From 21 July 2015 to 13 May 2019, the alerts sent from the MRAS for pneumothorax on chest radiographs acquired in the ED were retrospectively collected. After the ARAS went online, from 1 September 2020 to 31 October 2020, all chest radiographs acquired in the ED of our institution were subjected to the ARAS for pneumothorax, and the alerts sent from the ARAS in this period were collected. In both systems, the image upload time was assigned as the starting point, and the alert time and the report time were recorded. The mean alert time and mean report time of both alert systems were compared.
2.5. Diagnostic Performance during Parallel Running of the Two Systems

From 1 September 2020 to 31 October 2020, a parallel running strategy of both MRAS and ARAS for pneumothorax was implemented at our institution. Both systems operated simultaneously and independently. The radiologic technologist in the MRAS did not know the detection result of the ARAS, and the deep learning model in the ARAS had no additional input other than the chest radiograph itself. Chest radiographs acquired in the ED from patients aged more than 6 years during this period were prospectively used as the testing dataset for both systems. These radiographs were reviewed by a 4th year radiology resident and a radiologist with more than 10 years of working experience and classified as negative, small, moderate, or large pneumothorax. Since the MRAS was activated only when a pneumothorax was newly detected, whereas every pneumothorax detection activated the ARAS, an adjustment was needed to compare the two systems on an equal basis: an alert sent from the MRAS on one of the serial radiographs with pneumothorax was interpreted as a positive detection for all of these radiographs. The confusion matrices of both MRAS and ARAS were then obtained. The false-positive cases of the ARAS were also reviewed, and the incorrectly predicted areas of the deep learning model and possible undesirable conditions were recorded.
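The equal-basis rule for serial radiographs can be made concrete with a small sketch; the exam and patient identifiers and the record layout below are hypothetical, not taken from the study data.

```python
def mras_detections(radiographs, alerts):
    """An MRAS alert on one of a patient's serial pneumothorax radiographs is
    interpreted as a positive detection for all of those radiographs."""
    alerted_patients = {r["patient"] for r in radiographs
                        if r["exam"] in alerts and r["pneumothorax"]}
    return {r["exam"]: r["pneumothorax"] and r["patient"] in alerted_patients
            for r in radiographs}

# Hypothetical exam and patient identifiers.
radiographs = [
    {"exam": "A1", "patient": "P1", "pneumothorax": True},
    {"exam": "A2", "patient": "P1", "pneumothorax": True},   # follow-up of P1
    {"exam": "B1", "patient": "P2", "pneumothorax": True},
]
detections = mras_detections(radiographs, alerts={"A1"})
# A1 and A2 both count as MRAS positives; B1 remains a missed detection.
```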
3. Results
3.1. Deep Learning Model
In this study, 1235 positive cases with segmented areas of pneumothorax and 337 neg-
ative cases were shuffled and separated for training and validation datasets for the deep
learning model. The training datasets contained 979 positive cases and 278 negative cases,
and the validation dataset contained 256 positive cases and 59 negative cases. A deep
learning model based on U-Net with balanced feature pyramid modules for pneumothorax
detection was built using the training and validation datasets stated above. Then, the model
was used to set up the new ARAS. Examples of true-positive detections of pneumothorax
are shown in Figure 4.
Figure 4. Examples of true-positive detections of large (A), moderate (B), and small (C) pneumothorax. The predicted areas are shown in purple.
Table 1. Confusion matrices of the Manual Radiology Alert System and Automated Radiology Alert System for pneumothorax.

                                   MRAS                  ARAS
Ground Truth                  Positive  Negative    Positive  Negative    Total
Positive       Small              2        11           8         5         13
               Moderate           3        17          14         6         20
               Large             17        36          50         3         53
               All positive      22        64          72        14         86
Negative                          0      3653          33      3620       3653
Total                            22      3717         105      3634       3739

Note: MRAS = Manual Radiology Alert System, ARAS = Automated Radiology Alert System.
Table 2. Diagnostic performance of Manual Radiology Alert System and Automated Radiology Alert
System for pneumothorax.
MRAS ARAS
Sensitivity (Recall) 0.256 (0.168–0.361) 0.837 (0.742–0.908)
Specificity 1.000 (0.999–1.000) 0.991 (0.987–0.994)
PPV (Precision) 1.000 (1.000–1.000) 0.686 (0.605–0.756)
NPV 0.983 (0.981–0.985) 0.996 (0.994–0.998)
Accuracy 0.983 (0.978–0.987) 0.987 (0.983–0.991)
AUC 0.628 (0.612–0.643) 0.914 (0.905–0.923)
F1 score 0.407 (0.391–0.423) 0.754 (0.740–0.768)
Note: Data are reported as value with 95% confidence interval in parentheses. MRAS = Manual Radiology Alert
System, ARAS = Automated Radiology Alert System, PPV = positive predictive value, NPV = negative predictive
value, AUC = area under receiver operating characteristic curve.
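The point estimates in Table 2 follow directly from the pooled 2×2 totals in Table 1 (86 positive and 3653 negative radiographs). The sketch below recomputes them; the AUC and the confidence intervals require the underlying model scores and resampling, so they are not reproduced here.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Point estimates of the Table 2 metrics from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)                    # recall
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                            # precision
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "PPV": ppv, "NPV": npv, "accuracy": accuracy, "F1": f1}

# 2x2 totals from Table 1, all pneumothorax sizes pooled.
mras = diagnostic_metrics(tp=22, fp=0, fn=64, tn=3653)
aras = diagnostic_metrics(tp=72, fp=33, fn=14, tn=3620)
```

Rounding these values to three decimals reproduces the MRAS and ARAS columns of Table 2.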
A review of the 33 false-positive cases with the predicted areas of the deep learning model was performed. The incorrectly predicted areas included 23 at the lung apex along the ribs, aortic arch, or thickened pleura; 3 at the lung base along the heart border or ribs; 3 with bullae; 2 along the skin fold; 1 detecting gastric gas; and 1 along the chest tube. The possible undesirable conditions identified included eight with foreign bodies (metallic implants, chest tubes, or central venous catheters), one with poor positioning, two with poor exposure settings, and one with severe lung fibrosis. Examples of false-positive detections of pneumothorax are shown in Figure 5.
Figure 5. Examples of false-positive detections of pneumothorax with predicted area along ribs (A), heart border (B), and bullae (C). The predicted areas are shown in purple.
4. Discussion
In this study, the efficiency of detecting pneumothorax was improved after using the
deep learning model. The mean alert time was significantly shorter in the ARAS than in the
MRAS. In the MRAS, the radiologic technologists in charge could see the chest radiographs
firsthand even before the images were uploaded. However, the high workload distracted the technologists, leaving little time to view each radiograph and delaying both pneumothorax detection and the subsequent alert activation. In the ARAS, the deep learning model could
“view” the radiographs almost immediately after image upload and send alerts soon after
pneumothorax detections without interference by the working environment. The ARAS
with a mean alert time of 0.69 min (or 41.4 s) with a maximum of 2.20 min (or 132.0 s)
provided warnings in a constantly efficient manner.
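The timing bookkeeping behind these figures is simple to express. The sketch below is illustrative, with hypothetical record values; the alert time is measured from the image upload time, the common starting point defined in Section 2.4.

```python
from datetime import datetime

def elapsed_minutes(start, end):
    """Elapsed time in minutes between two timestamps."""
    return (end - start).total_seconds() / 60.0

# Hypothetical alert records; the image upload time is the common
# starting point for both systems, as defined in Section 2.4.
records = [
    {"upload": datetime(2020, 9, 1, 10, 0, 0), "alert": datetime(2020, 9, 1, 10, 0, 30)},
    {"upload": datetime(2020, 9, 1, 11, 0, 0), "alert": datetime(2020, 9, 1, 11, 1, 0)},
]
alert_minutes = [elapsed_minutes(r["upload"], r["alert"]) for r in records]
mean_alert_minutes = sum(alert_minutes) / len(alert_minutes)  # 0.75 min for these two records
```

The study's reported means (0.69 min for the ARAS vs. 8.45 min for the MRAS) are averages of exactly this quantity over all alerted radiographs.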
In both MRAS and ARAS, the reporting radiologists received the notes in the RIS about
pneumothorax detections and then completed the reports with high priority. However,
the mean report time was significantly shorter in the ARAS than in the MRAS in this
study, which might result from the following two factors. Firstly, shorter mean alert time
in the ARAS than in the MRAS implied leaving notes in the RIS earlier for the reporting
radiologists. Secondly, the reporting radiologist received not only RIS notes from the deep
learning model, but also confirmation messages sent from duty radiology residents in the
ARAS. This reconfirmation process might have given the reporting radiologists stronger
suggestions to complete the reports earlier.
During the parallel running of both systems, the sensitivity (recall) was significantly
lower in the MRAS than in the ARAS (0.256 vs. 0.837). High workload causing distraction
in the MRAS might have resulted in low sensitivity (recall), while the ARAS was not
interfered by the working environment. Both MRAS and ARAS demonstrated specificity,
NPV, and accuracy greater than 0.98, with minor or no difference statistically. As for the
overall performance, the ARAS outperformed the MRAS in terms of AUC and F1 score.
The performance of the deep learning model implemented in the ARAS, with the sensitivity
(recall) of 0.837, the specificity of 0.991, the accuracy of 0.987, and AUC of 0.914, was good
and comparable to previous studies [6,8,13–16].
During parallel running of both systems, the PPV (precision) was significantly higher
in the MRAS than in the ARAS (1.000 vs. 0.686). A review of the false-positive cases
showed that 78.8% (26/33) of the cases had incorrectly detected normal anatomical border
(ribs, aortic arch, thickened apical pleura, or heart border), while other “stripes” (skin folds
or chest tube), and lucency (bullae or gastric gas) were also misleading. The performance
of the model might be even worse with foreign bodies, poor positioning, or poor exposure
settings. A high false-positive rate of the ARAS posed a major problem of clinical use.
Frequent false alarms would have caused the “crying wolf” phenomenon, and the ED
physicians would be less willing to pay attention to the alerts. A confirmation process
by the duty radiology resident was added to the system to compensate for this issue and
would remain necessary until further improvement of the deep learning model.
This study aimed to design an alert system for potential emergencies, especially with
large pneumothorax as indication for chest tube drainage, while small pneumothorax is
often treated conservatively. In this study, only moderate to large pneumothorax was
subjected to training, while small pneumothorax was excluded. The same criteria were
used in a previous study [8]. In the testing dataset, some cases of small pneumothorax
were still detected by the ARAS, with a smaller extent of the pneumothorax associated
with lower sensitivity (recall). The sensitivity (recall) of the ARAS for moderate and large
pneumothorax reached 0.877 (64/73).
This study excluded patients with an age of less than 6 years. This consideration was
mainly due to the epidemiology of pneumothorax. The earliest peak in the age distribution
of spontaneous pneumothorax is 15 to 20 years [17–19]. Spontaneous pneumothorax
is extremely rare in preschoolers. On the other hand, preschool patients are often not
cooperative with instructions to position and hold breath, leading to the poor image quality
of the radiographs and subsequent misinterpretation of the deep learning model. Excluding
patients with age less than 6 years allowed the ARAS to focus on the population at risk.
Several issues resulted in the limitations of this study. First of all, the ground truths of
radiographs in both training and validation datasets were based on the interpretation of
radiologists. Misclassification was still possible even after reviewing process. Secondly,
the datasets for training, validation, and testing were relatively small compared to other
studies [6,8,15,16,20]. Expansion of the datasets from our institution or using public datasets
should be considered. Thirdly, the deep learning model tended to be interfered with by
foreign bodies, poor positioning, and poor exposure settings, which were not uncommon
in daily practice. Training of the model for chest radiographs with these undesirable
conditions might solve this problem. Finally, the ARAS still needed a confirmation process
due to the high false-positive rate. Further improvement of the deep learning model was
crucial to make this system fully “automated”.
5. Conclusions
This study has successfully designed a deep learning model for the detection of
pneumothorax on chest radiographs and set up an ARAS. The efficiency of detecting
pneumothorax was improved after using the deep learning model. During the parallel
running of both systems, the diagnostic performance of the ARAS was better than that of
the MRAS in terms of sensitivity (recall), AUC, and F1 score, but worse in terms of PPV
(precision).
Appendix A
The U-Net architecture has been widely used in medical imaging tasks as it shows
promising results [10]. U-Net uses the concept of skip connections and joins convolutional layers with pooling layers and up-sampling layers to create contractive and expansive
paths. Skip connections are implemented to leverage intermediate feature maps and merge
contractive and expansive features. Different from U-Net, which integrates multi-level
features using the lateral connection, we strengthen the multi-level features using the same
deeply integrated balanced semantic features. Each resolution in the pyramid obtains equal
information from others, thus balancing the information flow and making the features more discriminative. Specifically, we first resized the multi-level features (C2, C3, C4, C5) to an intermediate size. Then the balanced semantic features are obtained by averaging as
follows:
C = \frac{1}{L} \sum_{i=l_{min}}^{l_{max}} \hat{C}_i    (A1)
where L denotes the number of multi-level features, lmin and lmax denote the indices of
involved lowest and highest levels, and Ĉi denotes respective rescaled features. The
obtained features are then rescaled using the reverse procedure to strengthen the original
features.
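Equation (A1) and the rescale-back step can be sketched in a few lines of NumPy. This is an illustrative reading only: nearest-neighbor resizing stands in for whatever interpolation the model actually uses, and a residual addition stands in for "strengthen the original features".

```python
import numpy as np

def resize_nn(feat, size):
    """Nearest-neighbor resize of a (C, H, W) feature map to (C, size, size)."""
    c, h, w = feat.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return feat[:, rows[:, None], cols[None, :]]

def balance_features(features, mid_size):
    """Eq. (A1): rescale the multi-level features to an intermediate size and
    average them, then rescale the balanced feature back to each level."""
    resized = [resize_nn(f, mid_size) for f in features]
    balanced = sum(resized) / len(resized)      # C = (1/L) * sum_i C_hat_i
    # Reverse procedure: residual addition of the rescaled balanced feature.
    return [f + resize_nn(balanced, f.shape[-1]) for f in features]

# A C2..C5-like pyramid of square feature maps.
pyramid = [np.ones((4, s, s)) for s in (64, 32, 16, 8)]
strengthened = balance_features(pyramid, mid_size=32)
```

Because every level receives the same averaged feature, each resolution in the pyramid obtains equal information from the others, as described above.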
We further implemented an attention module, the Convolutional Block Attention Module (CBAM), in each stage of the encoder to improve the representations of interest [21]. CBAM is a lightweight and general module that can be plugged into any CNN architecture with negligible overhead. It applies two attention modules consecutively. The first attention module is applied channel-wise, to select the features that are independent of the spatial ones. The second attention module is applied along the spatial dimensions to select the features that are more relevant to each other and independent of the channels. Both modules generate attention maps, which are then multiplied with the input feature map for adaptive feature refinement.
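A minimal sketch of the two-stage attention for a single (C, H, W) feature map follows; it is a simplification, with random weights standing in for learned parameters and a plain average replacing the 7×7 convolution that the original CBAM applies over the concatenated mean/max maps.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam(x, w1, w2):
    """Simplified CBAM for a (C, H, W) feature map: channel attention from
    average-/max-pooled descriptors through a shared two-layer MLP (w1, w2),
    then spatial attention from channel-wise mean/max maps."""
    # Channel attention: shared MLP over global average- and max-pooled descriptors.
    avg_desc = x.mean(axis=(1, 2))
    max_desc = x.max(axis=(1, 2))
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)          # ReLU between layers
    channel_att = sigmoid(mlp(avg_desc) + mlp(max_desc))  # shape (C,)
    x = x * channel_att[:, None, None]
    # Spatial attention on the channel-refined map (average replaces the 7x7 conv).
    spatial_att = sigmoid((x.mean(axis=0) + x.max(axis=0)) / 2.0)  # shape (H, W)
    return x * spatial_att[None, :, :]

rng = np.random.default_rng(0)
channels, reduced = 8, 4
w1 = rng.normal(size=(reduced, channels))   # channel-reduction layer
w2 = rng.normal(size=(channels, reduced))   # channel-restoration layer
refined = cbam(rng.normal(size=(channels, 16, 16)), w1, w2)
```

Applying the channel map first and the spatial map second matches the sequential arrangement described above.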
References
1. Raoof, S.; Feigin, D.; Sung, A.; Raoof, S.; Irugulpati, L.; Rosenow, E.C. Interpretation of Plain Chest Roentgenogram. Chest 2012,
141, 545–558. [CrossRef] [PubMed]
2. Yarmus, L.; Feller-Kopman, D. Pneumothorax in the Critically Ill Patient. Chest 2012, 141, 1098–1105. [CrossRef] [PubMed]
3. Seow, A.; Kazerooni, E.A.; Pernicano, P.G.; Neary, M. Comparison of upright inspiratory and expiratory chest radiographs for
detecting pneumothoraces. Am. J. Roentgenol. 1996, 166, 313–316. [CrossRef] [PubMed]
4. Thomsen, L.; Natho, O.; Feigen, U.; Schulz, U.; Kivelitz, D. Value of Digital Radiography in Expiration in Detection of Pneumothorax. RoFo Fortschr. Geb. Rontgenstrahlen Bildgeb. Verfahr. 2013, 186, 267–273. [CrossRef]
5. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks
on Weakly-Supervised Classification and Localization of Common Thorax Diseases. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; Institute of Electrical and Electronics
Engineers (IEEE): New York, NY, USA; pp. 2097–2106.
6. Hwang, E.J.; Park, S.; Jin, K.-N.; Kim, J.I.; Choi, S.Y.; Lee, J.H.; Goo, J.M.; Aum, J.; Yim, J.-J.; Cohen, J.G.; et al. Development and
Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs. JAMA
Netw. Open 2019, 2, e191095. [CrossRef] [PubMed]
7. Prevedello, L.M.; Erdal, B.S.; Ryu, J.L.; Little, K.J.; Demirer, M.; Qian, S.; White, R.D. Automated Critical Test Findings Identification
and Online Notification System Using Artificial Intelligence in Imaging. Radiology 2017, 285, 923–931. [CrossRef] [PubMed]
8. Taylor, A.G.; Mielke, C.; Mongan, J. Automated detection of moderate and large pneumothorax on frontal chest X-rays using
deep convolutional neural networks: A retrospective study. PLoS Med. 2018, 15, e1002697. [CrossRef]
9. Wada, K. Labelme: Image Polygonal Annotation with Python, GitHub. 2016. Available online: https://github.com/wkentaro/
labelme (accessed on 25 February 2020).
10. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image
Computing and Computer-Assisted Intervention–MICCAI 2015; Springer: Cham, Switzerland, 2015; pp. 234–241.
11. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on
Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
12. Pang, J.; Chen, K.; Shi, J.; Feng, H.; Ouyang, W.; Lin, D. Libra R-CNN: Towards Balanced Learning for Object Detection. In
Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA,
15–20 June 2019; Institute of Electrical and Electronics Engineers (IEEE): New York, NY, USA, 2019; pp. 821–830.
13. Park, S.; Lee, S.M.; Kim, N.; Choe, J.; Cho, Y.; Do, K.-H.; Seo, J.B. Application of deep learning–based computer-aided detection
system: Detecting pneumothorax on chest radiograph after biopsy. Eur. Radiol. 2019, 29, 5341–5348. [CrossRef] [PubMed]
14. Hwang, E.J.; Hong, J.H.; Lee, K.H.; Kim, J.I.; Nam, J.G.; Kim, D.S.; Choi, H.; Yoo, S.J.; Goo, J.M.; Park, C.M. Deep learning
algorithm for surveillance of pneumothorax after lung biopsy: A multicenter diagnostic cohort study. Eur. Radiol. 2020, 30,
3660–3671. [CrossRef] [PubMed]
15. Park, S.; Lee, S.M.; Lee, K.H.; Jung, K.-H.; Bae, W.; Choe, J.; Seo, J.B. Deep learning-based detection system for multiclass lesions
on chest radiographs: Comparison with observer readings. Eur. Radiol. 2019, 30, 1359–1368. [CrossRef] [PubMed]
16. Majkowska, A.; Mittal, S.; Steiner, D.F.; Reicher, J.J.; McKinney, S.M.; Duggan, G.E.; Eswaran, K.; Chen, P.-H.C.; Liu, Y.; Kalidindi,
S.R.; et al. Chest Radiograph Interpretation with Deep Learning Models: Assessment with Radiologist-adjudicated Reference
Standards and Population-adjusted Evaluation. Radiology 2020, 294, 421–431. [CrossRef] [PubMed]
17. Bobbio, A.; Dechartres, A.; Bouam, S.; Damotte, D.; Rabbat, A.; Régnard, J.-F.; Roche, N.; Alifano, M. Epidemiology of spontaneous
pneumothorax: Gender-related differences. Thorax 2015, 70, 653–658. [CrossRef]
18. Hiyama, N.; Sasabuchi, Y.; Jo, T.; Hirata, T.; Osuga, Y.; Nakajima, J.; Yasunaga, H. The three peaks in age distribution of females
with pneumothorax: A nationwide database study in Japan. Eur. J. Cardio Thorac. Surg. 2018, 54, 572–578. [CrossRef]
19. Kim, D.; Jung, B.; Jang, B.-H.; Chung, S.-H.; Lee, Y.J.; Ha, I.-H. Epidemiology and medical service use for spontaneous pneumothorax: A 12-year study using nationwide cohort data in Korea. BMJ Open 2019, 9. [CrossRef]
20. Tolkachev, A.; Sirazitdinov, I.; Kholiavchenko, M.; Mustafaev, T.; Ibragimov, B. Deep Learning for Diagnosis and Segmentation of
Pneumothorax: The Results on the Kaggle Competition and Validation against Radiologists. IEEE J. Biomed. Health Inform. 2021,
25, 1660–1672. [CrossRef] [PubMed]
21. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference
on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; Institute of Electrical and Electronics Engineers (IEEE):
New York, NY, USA, 2018; pp. 3–19.