Objective. The aim of this study was to investigate whether a deep learning object detection technique can automatically detect
and classify radiolucent lesions in the mandible on panoramic radiographs.
Study Design. Panoramic radiographs of patients with mandibular radiolucent lesions of 10 mm or greater, including amelo-
blastomas, odontogenic keratocysts, dentigerous cysts, radicular cysts, and simple bone cysts, were included. Lesion labels,
including region of interest coordinates, were created in text format. In total, 210 training images and labels were imported
into the deep learning GPU training system (DIGITS). A learning model was created using the deep neural network DetectNet.
Two testing data sets (testing 1 and 2) were applied to the learning model. Similarities and differences between the prediction
and ground-truth images were evaluated using Intersection over Union (IoU). Sensitivity and false-positive rate per image
were calculated using an IoU threshold of 0.6. The detection performance for each disease was assessed using multiclass
learning.
Results. Sensitivity was 0.88 for both testing 1 and 2. The false-positive rate per image was 0.00 for testing 1 and 0.04 for testing 2.
The best combination of detection and classification sensitivity occurred with dentigerous cysts.
Conclusions. Radiolucent lesions of the mandible can be detected with high sensitivity using deep learning. (Oral Surg Oral Med
Oral Pathol Oral Radiol 2019;128:424-430)
Patients who visit dental clinics often undergo panoramic radiography, and the resulting radiographs often show incidental lesions in areas other than that of the patient's main complaint.1,2 Techniques for computer-aided diagnosis of lesions on panoramic radiographs have advanced, and they now include such tasks as the prediction of osteoporosis3,4 and the diagnosis of bisphosphonate-related osteonecrosis5 and maxillary sinusitis.6

Recently, deep convolutional neural networks (CNNs) have been developed and applied in many medical imaging fields, where they have accomplished various tasks, including image classification,7-9 object detection,10-15 and semantic segmentation.16-18 Within our research group, we have achieved high accuracy in the diagnosis of maxillary sinusitis19 and of extra roots in mandibular first molars20 by using a deep CNN applied to panoramic radiographs. However, although methods for object detection in medical imaging have been progressing rapidly, object detection on panoramic radiographs can still be challenging.10-15 One such method, DetectNet, is a deep CNN for object detection using the NVIDIA DIGITS deep learning training system. It performs feature extraction and prediction of object classes with the use of bounding boxes per grid square.15,21-23

The purpose of this study was to evaluate the performance of a deep learning object detection technique for the automatic detection and classification of mandibular radiolucent lesions on panoramic radiographs. Targeting only a single lesion cannot provide the amount of data required for the deep learning procedures. Therefore, 5 radiolucent lesions (ameloblastomas, odontogenic keratocysts, dentigerous cysts, radicular cysts, and simple bone cysts) that occur relatively frequently in the mandible were targeted in this investigation.

Statement of Clinical Relevance
Radiolucent lesions of the mandible could be automatically detected on panoramic radiography with high performance using deep learning. The results of this study will be useful for diagnostic support in panoramic radiography.

MATERIALS AND METHODS
This study was approved by the ethics committee of our university (No. 496) and was performed in accordance with the tenets of the Declaration of Helsinki.

Patients
Patients were selected from the imaging database of our institution and had been imaged between 2005 and 2018. The digital panoramic radiographs of all patients were obtained using a Veraview Epocs system (J. Morita Mfg Corp., Kyoto, Japan). The standard parameters included a tube voltage of 75 kV, tube current of 9 mA, and acquisition time of 16 seconds.

The inclusion criteria were presence of a radiolucent lesion in the mandible and histopathologic verification of the diagnosis. All lesions were at least 10 mm in diameter. Panoramic radiographs of 210 patients were identified for use in the training data set. The sites of lesions were the molar region (molar or molar and ramus) in 142 cases (67.6%); the premolar region (premolar, premolar and molar, or premolar and molar and ramus) in 34 cases (16.2%); and the anterior region (anterior, anterior and premolar, or anterior and premolar and molar) in 34 cases (16.2%). The histopathologic diagnoses included 31 ameloblastomas, 33 odontogenic keratocysts, 66 dentigerous cysts, 68 radicular cysts, and 12 simple bone cysts (Table I). The study included 128 men and 82 women (age range 15-85 years; median age 45 years).

For validation purposes, 2 data sets were prepared. The "Testing 1" data set was used for the purpose of evaluating the performance of the learning model with abundant data. It included panoramic radiographs of 50 of the patients in the training data set. The histopathologic diagnoses included 7 ameloblastomas, 8 odontogenic keratocysts, 16 dentigerous cysts, 16 radicular cysts, and 3 simple bone cysts (see Table I). This data set included 32 men and 18 women (age range 18-87 years; median age 49 years). The "Testing 2" data set was used for the purpose of evaluating performance with completely new data and consisted of panoramic radiographs from 25 different patients. This newly collected data set included 3 ameloblastomas, 6 odontogenic keratocysts, 8 dentigerous cysts, 7 radicular cysts, and 1 simple bone cyst (see Table I). This data set included 18 men and 7 women (age range 19-84 years; median age 49 years).
Preparation of the imaging data sets
The digital panoramic radiographs were downloaded in the bitmap format (.BMP) from the image database of our hospital. A pretrained "DetectNet" for panoramic radiographs was set to an image resolution of 900 × 900 pixels. Therefore, all images were cropped to a size of 900 × 900 pixels for use in the training and testing data sets.
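A minimal sketch of this cropping step, assuming Pillow; the folder names and the centered crop window are illustrative assumptions, because the paper does not state how the 900 × 900 region was positioned:

from pathlib import Path
from PIL import Image

SRC = Path("panoramic_bmp")       # hypothetical folder of downloaded .BMP files
DST = Path("cropped_900x900")     # hypothetical output folder
DST.mkdir(exist_ok=True)

for bmp in SRC.glob("*.bmp"):
    img = Image.open(bmp)
    w, h = img.size
    # Center the 900 x 900 window on the image (an assumption; any
    # window that keeps the mandible in view would serve).
    left = max((w - 900) // 2, 0)
    top = max((h - 900) // 2, 0)
    img.crop((left, top, left + 900, top + 900)).save(DST / bmp.name)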
Annotation procedure
A single radiologist set the regions of interest (ROIs) by using arbitrarily sized rectangles to include each lesion and recorded the upper left (x1, y1) and lower right (x2, y2) coordinates of the ROIs with the use of ImageJ software (National Institutes of Health, Bethesda, MD) (Figure 1). The ROIs ranged from 64 to 450 pixels (median 112 pixels) in width and 70 to 360 pixels (median 122 pixels) in height. The label for each lesion, which included these coordinates, was created in text form for each item of the imaging data.
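The paper states only that each label was a text file containing the ROI coordinates. A minimal sketch of such a label writer follows; the 15-field KITTI-style line is the layout the DIGITS object detection workflow typically ingests, so the exact field order and the class naming should be treated as assumptions:

def write_label(txt_path, lesions):
    """lesions: list of (class_name, x1, y1, x2, y2) tuples from ImageJ."""
    with open(txt_path, "w") as f:
        for name, x1, y1, x2, y2 in lesions:
            # class, truncated, occluded, alpha, bounding box (x1 y1 x2 y2),
            # then 7 unused KITTI fields zeroed out
            f.write(f"{name} 0.0 0 0.0 {x1} {y1} {x2} {y2} "
                    "0.0 0.0 0.0 0.0 0.0 0.0 0.0\n")

# Hypothetical example: one lesion ROI for one cropped radiograph.
write_label("case_001.txt", [("lesion", 412, 530, 524, 652)])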
Deep learning procedure
The deep learning system was implemented on an NVIDIA GeForce GTX 11 GB-GPU workstation (NVIDIA Corp., Santa Clara, CA) with 128 GB of memory. The training procedures for object detection were performed with use of the deep CNN DetectNet architecture (Figure 2), implemented with the NVIDIA DIGITS (https://developer.nvidia.com/digits) version 5.0 library on the Caffe framework. DetectNet is a neural network created independently by NVIDIA; it has 5 main parts: (1) data ingest and augmentation; (2) a fully convolutional network (FCN); (3) loss function measurement; (4) bounding box clustering; and (5) mean average precision calculation.15 DIGITS was used to train the network. We used the adaptive moment estimation (Adam) solver, with 0.0001 as the base learning rate.

The training images and labels data sets, as well as the testing (validation) images and labels data sets, were imported into the DIGITS library. The training processes were conducted for 500 epochs, and a learning model was obtained. The number of epochs was determined in such a manner that the accuracy of the model became high and stable and the error in prediction using training data approached zero.

Testing procedure
Two kinds of testing image data sets were applied to the learning model, and the predictions of radiolucent lesions of the mandible were obtained. When the presence of radiolucent lesions was predicted, red-colored bounding boxes were superimposed over the panoramic radiographs.

Evaluation methods
The similarities and differences between predicted images and ground-truth images were evaluated using Intersection over Union (IoU) for each patient. IoU is the most popular evaluation metric used in object detection benchmarks:

IoU = S(P ∩ G) / S(P ∪ G),

where S is the area, P is the predicted bounding box in which the learning model predicted the presence of a lesion, and G is the ground-truth bounding box that actually had a lesion. IoU is, therefore, the ratio of the area where the 2 boxes overlap (S[P ∩ G]) to the total combined area of the 2 boxes (S[P ∪ G]). The larger the value of IoU, the more accurate the prediction. In this study, the IoU threshold for determining whether the lesions could be detected was set at 0.6.11 That is, when the IoU was 0.6 or more, lesion detection was considered correctly predicted.
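The formula transcribes directly into code. A minimal sketch, with boxes given as (x1, y1, x2, y2) corner coordinates as recorded during annotation (an illustration, not code from the study):

def iou(p, g):
    """Intersection over Union of predicted box p and ground-truth box g."""
    ix1, iy1 = max(p[0], g[0]), max(p[1], g[1])    # intersection corners
    ix2, iy2 = min(p[2], g[2]), min(p[3], g[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)  # S(P intersect G)
    union = ((p[2] - p[0]) * (p[3] - p[1])
             + (g[2] - g[0]) * (g[3] - g[1]) - inter)  # S(P union G)
    return inter / union

# A detection counts as correct when iou(p, g) >= 0.6, the threshold used here.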
For the evaluation of prediction performance of lesion detection, the sensitivity and false-positive rate per image were calculated. Sensitivity was defined as the proportion of regions correctly predicted by the model among those that actually existed. False-positive rate per image was defined as the proportion of incorrect detections occurring per image.
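Under these definitions, both metrics can be sketched as follows, reusing the iou() helper above; the per-image bookkeeping of predictions and ground truths is an assumption, not a detail given in the paper:

def evaluate(images, threshold=0.6):
    """images: list of (predicted_boxes, ground_truth_boxes) per radiograph."""
    tp = fn = fp = 0
    for preds, truths in images:
        for g in truths:
            if any(iou(p, g) >= threshold for p in preds):
                tp += 1          # lesion correctly detected
            else:
                fn += 1          # lesion missed
        # Predictions that overlap no actual lesion are false positives.
        fp += sum(all(iou(p, g) < threshold for g in truths) for p in preds)
    sensitivity = tp / (tp + fn)           # correctly found lesions
    fp_per_image = fp / len(images)        # incorrect detections per image
    return sensitivity, fp_per_image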
Fig. 2. Workflow of DetectNet. Training processes (top row): Data layers ingest 3 training images and labels, and transformer
layers perform inline data augmentation. A pre-trained fully convolutional network (FCN) performs feature extraction and predic-
tion of object classes and bounding boxes per grid square. Loss functions simultaneously measure the error in the 2 tasks of pre-
dicting the object coverage (L2 loss) and object bounding box corners per grid square (L1 loss). Testing processes (bottom row):
A clustering function produces the final set of predicted bounding boxes during the testing processes. The predicted bounding box
is the area in which the learning model predicts the presence of a lesion. When the presence of a radiolucent lesion is predicted,
the red-colored bounding boxes are superimposed over the panoramic radiographs. The threshold value was set at 0.6.
Comparison of the detection and classification for each lesion
Multiclass learning was performed by using the learning model obtained earlier as a pretrained model. Ameloblastomas were defined as class 1, odontogenic keratocysts as class 2, dentigerous cysts as class 3, and radicular cysts as class 4.

The number of simple bone cysts was small; therefore, they were excluded from multiclass learning of detection and classification. The training procedure was performed for 500 epochs, and the detection and classification performance were determined using the testing 1 data set. For classification of lesions, the predicted bounding boxes of class 1 (ameloblastomas) were displayed in red, those of class 2 (odontogenic keratocysts) in light blue, those of class 3 (dentigerous cysts) in green, and those of class 4 (radicular cysts) in purple.
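A minimal sketch of this display convention, assuming Pillow for drawing; the exact color values and the rendering library are implementation choices, not details from the paper:

from PIL import Image, ImageDraw

CLASS_COLORS = {
    1: "red",         # ameloblastomas
    2: "lightblue",   # odontogenic keratocysts
    3: "green",       # dentigerous cysts
    4: "purple",      # radicular cysts
}

def draw_predictions(image_path, detections):
    """detections: list of (class_id, x1, y1, x2, y2) predicted boxes."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for cls, x1, y1, x2, y2 in detections:
        draw.rectangle((x1, y1, x2, y2), outline=CLASS_COLORS[cls], width=3)
    return img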
RESULTS
Time taken for the deep learning procedure
The time taken to import the training and validation image and label data sets into the DIGITS library was 13 seconds. It took 3 hours for the 500-epoch training processes to achieve a learning model. Each testing procedure took 13 seconds.

Detection performance
The results using the 50-image testing 1 data set and the 25-image testing 2 data set are shown in Table II. The sensitivity for detection of the mandibular radiolucent lesions was 0.88 with both data sets. A sensitivity of 0.88 indicated that the learning model could correctly predict the presence of lesions in 88% of the actual lesions. The false-positive rate per image was 0.00 for the testing 1 data set and 0.04 for the testing 2 data set. A false-positive rate per image of 0.04 indicated that the learning model incorrectly predicted the presence of lesions at nonlesional areas in 1 of 25 images from the testing 2 data set.

Table II. Performance in the detection of radiolucent lesions using testing 1 and testing 2 data sets

                      Sensitivity   False-positive rate per image
Testing 1 data set    0.88          0.00
Testing 2 data set    0.88          0.04

Examples of the results of radiolucent lesion detection
Examples of the successful detection of lesions are shown in Figure 3. Large lesions were well detected when their borders were well-defined (see Figure 3A). A dentigerous cyst was well detected not only by its well-defined border but also by its positional information, as shown in Figure 3B. In the failed example shown in Figure 4, a small lesion with a poorly-defined border and faint radiolucency was not detected.

Comparison of detection and classification for each lesion
The results of the multiclass object detection are shown in Table III. The term detection sensitivity refers to the rate at which the learning model correctly detected the lesion as a result of multiclass learning. The term classification sensitivity refers to the rate at which the learning model correctly classified the lesion. The detection sensitivity and classification sensitivity were 0.71 and 0.60, respectively, for ameloblastomas; 1.00 and 0.13 for odontogenic keratocysts; 0.88 and 0.82 for dentigerous cysts; and 0.81 and 0.77 for radicular cysts. All odontogenic keratocysts were detected, but the classification sensitivity was very low (Figure 5). Ultimately, the best detection and classification performance was achieved for dentigerous cysts.

Table III. Performance in the detection and classification of each type of lesion using the testing 1 data set

                          Detection sensitivity   Classification sensitivity
Ameloblastomas            0.71                    0.60
Odontogenic keratocysts   1.00                    0.13
Dentigerous cysts         0.88                    0.82
Radicular cysts           0.81                    0.77
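To make the distinction between the two metrics concrete, a sketch under the same assumptions as the earlier helpers: a lesion counts toward detection sensitivity when any predicted box overlaps it at IoU of 0.6 or more, and toward classification sensitivity when an overlapping box also carries the correct class:

def multiclass_sensitivities(records, threshold=0.6):
    """records: list of (true_class, true_box, predictions), where
    predictions is a list of (predicted_class, predicted_box)."""
    detected = classified = 0
    for true_cls, true_box, preds in records:
        overlapping = [cls for cls, box in preds
                       if iou(box, true_box) >= threshold]
        if overlapping:
            detected += 1            # counts toward detection sensitivity
            if true_cls in overlapping:
                classified += 1      # counts toward classification sensitivity
    return detected / len(records), classified / len(records)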
DISCUSSION
Deep learning technologies can be applied to the interpretation of medical images in a number of ways, including classification,7-9 object detection,10-15 and semantic segmentation.16-18

However, few studies have described the application of deep learning to the interpretation of panoramic radiographs frequently exposed in dental clinics. De Tobel et al. evaluated deep learning for determining the developmental stage of mandibular third molars on panoramic radiographs,24 and our previous study demonstrated the diagnosis of maxillary sinusitis on panoramic radiographs using deep learning, with a high accuracy of 87.5%.19 We also examined whether extra roots of mandibular first molars can be diagnosed using deep learning applied to panoramic radiographs and reported a high accuracy of 87.4%.20 There are also reports evaluating deep learning for the classification of osteoporosis.25,26

The main advantage of panoramic radiography is that it allows for the detection of a variety of lesions, such as those of teeth, jaws, temporomandibular joints, maxillary sinuses, and salivary stones.1,2 For busy and/or inexperienced dentists, automatic detection of lesions is considered clinically more useful than image classification because it can contribute to reducing oversight of lesions other than the patient's main complaint. However, there are few reports on deep learning for lesion detection on panoramic radiographs.
Fig. 3. Examples of successful lesion detections. A, A 36-year-old man with an ameloblastoma. This large lesion was well
detected by a well-defined border. B, A 61-year-old man with a dentigerous cyst. This lesion was well detected by a combination
of a well-defined border and position information.
The most important part of a deep learning system is the convolutional neural network (CNN). There are many open source CNN object detection architectures, including region-based CNN (R-CNN), You Only Look Once (YOLO), and DetectNet.10-15,21

The most frequently investigated CNN is Faster R-CNN, which is a derivative of R-CNN.10-13 In Faster R-CNN, feature maps are first extracted from the input image. These maps are then passed through a region proposal network, which returns object proposals. Finally, these maps are classified, and the bounding boxes are predicted.10-13 Faster R-CNN yields better detection accuracies compared with other networks but has higher computational time costs.13
Fig. 4. Example of a failed lesion detection. A 56-year-old man with a radicular cyst. The lesion was situated in the left mandibular canine and first premolar region (arrow), but this small lesion with poorly-defined borders and faint radiolucency could not be detected.
Al-Masni et al. reported using YOLO,14 which divides the image into grids and performs object recognition for each grid. Compared with R-CNN, YOLO has somewhat lower accuracy for object detection but a faster processing speed.14

DetectNet is a neural network created independently by NVIDIA. It has 5 main parts: (1) data ingest and augmentation; (2) FCN; (3) loss function measurement; (4) bounding box clustering; and (5) mean average precision calculation.15,21-23 Yu et al.21 were able to detect broadleaf weeds with a high performance rate of 99% with the use of DetectNet, with a reported image processing speed of 42 ms. Suleymanova et al.15 reported a high positive correlation between manual quantification and the use of DetectNet in counting the numbers of astrocytes in the brains of rats.

The full CNN of DetectNet is basically the same as that of GoogLeNet, differing only in that the input, last pooling, and output layers are removed.15 To significantly speed up the training process, the pretrained GoogLeNet is usually applied.15 Because the CNN contains millions of parameters, training is insufficient with low numbers of annotated images.13 Therefore, the pretrained CNN usually offers optimal performance for object detection.11

This study demonstrated high detection performance, with sensitivity of nearly 90% for radiolucent lesions. Dentigerous cysts, with both high detection and classification sensitivities, were the best-detected lesions, probably because of their preferred sites; these cysts are most frequently found in the mandibular third molar region. In the cases of failed detection, lesions could not be detected because of their small size, poorly-defined borders, and/or faintly radiolucent appearance. Small lesions provide less information for discrimination of their classes, and therefore, more training data will be needed to successfully detect such lesions.13 In regions with few texture features, lesions may still not be identified.11 The number of individual lesion samples was small in the multiclass studies in the present investigation, so further study will be needed to draw a conclusion.

There are some limitations to this study. Deep learning requires a large amount of labeled training data.11 If more training data were to be provided, the models have the potential to become significantly more accurate.10 External testing data would be necessary to judge the generalizability of the CNN.13 We did not use external data in this study. The use of the same images in the training and testing 1 data sets may have improved the sensitivity results. Therefore, the testing 2 data set contained completely different lesions. The fact that sensitivity and false-positive rates were virtually identical for both testing data sets confirmed the beneficial effects of the CNN. For training object detection-based CNNs, a process of manually annotating individual lesions is necessary.21 There are few publicly available pixel-level annotated data sets,10 and this manual process is time consuming and labor intensive.21 Last, we only used annotations from one expert radiologist, and, therefore, we could not evaluate interobserver variability.13
Fig. 5. Example of a successful lesion detection but failed classification after multiclass learning. A 53-year-old man with an
odontogenic keratocyst. A lesion in the right third molar region was detected but misclassified as a dentigerous cyst.
In the future, semantic segmentation will be necessary, and if fully automated segmentation is shown to be fast, reproducible, and user-friendly, it will be applied in clinical practice.18 This study was limited to mandibular radiolucent lesions, so further studies to consider maxillary lesions are needed.

CONCLUSIONS
Our study confirmed that a deep learning system using DIGITS and DetectNet had high values of detection and classification sensitivity in the detection of radiolucent lesions of the mandible. In the future, we will establish a faster and more accurate system that uses a large amount of labeled external training data.

ACKNOWLEDGMENT
We thank Karl Embleton, PhD, from Edanz Group (www.edanzediting.com/ac) for editing a draft of this manuscript.

REFERENCES
1. Ribeiro A, Keat R, Khalid S, et al. Prevalence of calcifications in soft tissues visible on a dental pantomogram: a retrospective analysis. J Stomatol Oral Maxillofac Surg. 2018;119:369-374.
2. Resnick CM, Dentino KM, Garza R, Padwa BL. A management strategy for idiopathic bone cavities of the jaws. J Oral Maxillofac Surg. 2016;74:1153-1158.
3. Muramatsu C, Horiba K, Hayashi T, et al. Quantitative assessment of mandibular cortical erosion on dental panoramic radiographs for screening osteoporosis. Int J Comput Assist Radiol Surg. 2016;11:2021-2032.
4. Nakamoto T, Taguchi A, Ohtsuka M, et al. A computer-aided diagnosis system to screen for osteoporosis using dental panoramic radiographs. Dentomaxillofac Radiol. 2008;37:274-281.
5. Demiralp KÖ, Kurşun-Çakmak ES, Bayrak S, Akbulut N, Atakan C, Orhan K. Trabecular structure designation using fractal analysis technique on panoramic radiographs of patients with bisphosphonate intake: a preliminary study. Oral Radiol. 2019;35:23-28.
6. Ohashi Y, Ariji Y, Katsumata A, et al. Utilization of computer-aided detection system in diagnosing unilateral maxillary sinusitis on panoramic radiographs. Dentomaxillofac Radiol. 2016;45:20150419.
7. Perek S, Kiryati N, Zimmerman-Moreno G, Sklair-Levy M, Konen E, Mayer A. Classification of contrast-enhanced spectral mammography (CESM) images. Int J Comput Assist Radiol Surg. 2019;14:249-257.
8. Yasaka K, Akai H, Abe O, Kiryu S. Deep learning with convolutional neural network for differentiation of liver masses at dynamic contrast enhanced CT: a preliminary study. Radiology. 2018;286:887-896.
9. Wang X, Yang W, Weinreb J, et al. Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning. Sci Rep. 2017;7:15415.
10. Ribli D, Horvath A, Unger Z, Pollner P, Csabai I. Detecting and classifying lesions in mammograms with deep learning. Sci Rep. 2018;8:4165.
11. Li H, Weng J, Shi Y, et al. An improved deep learning approach for detection of thyroid papillary cancer in ultrasound images. Sci Rep. 2018;8:6600.
12. Onieva JO, Serrano GG, Young TP, Washko GR, Carbayo MJL, Estepar RSJ. Multiorgan structures detection using deep convolutional neural networks. Proc SPIE Int Soc Opt Eng. 2018:10574. pii: 1057428.
13. Koitka S, Demircioglu A, Kim MS, Friedrich CM, Nensa F. Ossification area localization in pediatric hand radiographs using deep neural networks for object detection. PLoS One. 2018;13:e0207496.
14. Al-Masni MA, Al-Antari MA, Park JM, et al. Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system. Comput Methods Programs Biomed. 2018;157:85-94.
15. Suleymanova I, Balassa T, Tripathi S, et al. A deep convolutional neural network approach for astrocyte detection. Sci Rep. 2018;8:12878.
16. Fu M, Wu W, Hong X, et al. Hierarchical combinatorial deep learning architecture for pancreas segmentation of medical computed tomography cancer images. BMC Syst Biol. 2018;12:56.
17. Jackson P, Hardcastle N, Dawe N, Kron T, Hofman MS, Hicks RJ. Deep learning renal segmentation for fully automated radiation dose estimation in unsealed source therapy. Front Oncol. 2018;8:215.
18. Blanc-Durand P, Van Der Gucht A, Schaefer N, Itti E, Prior JO. Automatic lesion detection and segmentation of 18F-FET PET in gliomas: a full 3D U-Net convolutional neural network study. PLoS One. 2018;13:e0195798.
19. Murata M, Ariji Y, Ohashi Y, et al. Deep-learning classification using convolutional neural network for evaluation of maxillary sinusitis on panoramic radiography [e-pub ahead of print]. Oral Radiol. doi:10.1007/s11282-018-0363-7, accessed Dec 11, 2018.
20. Hiraiwa T, Ariji Y, Fukuda M, et al. A deep-learning artificial intelligence system for assessment of root morphology of the mandibular first molar on panoramic radiography. Dentomaxillofac Radiol. 2018;47:20180218.
21. Yu J, Sharpe SM, Schumann AW, Boyd NS. Detection of broadleaf weeds growing in turfgrass with convolutional neural networks [e-pub ahead of print]. Pest Manag Sci. doi:10.1002/ps.5349, accessed Jan 22, 2019.
22. Rajpura PS, Bojinov H, Hegde RS. Object detection using deep CNNs trained on synthetic images. Ithaca, NY: Cornell University; 2017. Available at: https://arxiv.org/abs/1706.06782.
23. Tao A, Barker J, Sarathy S. DetectNet: deep neural network for object detection in DIGITS. 2016. Available at: https://devblogs.nvidia.com/parallelforall/detectnet-deep-neural-network-object-detection-digits/.
24. De Tobel J, Radesh P, Vandermeulen D, Thevissen PW. An automated technique to stage lower third molar development on panoramic radiographs for age estimation: a pilot study. J Forensic Odontostomatol. 2017;2:42-54.
25. Chu P, Bo C, Liang X, et al. Using octuplet Siamese network for osteoporosis analysis on dental panoramic radiographs. Conf Proc IEEE Eng Med Biol Soc. 2018;2018:2579-2582.
26. Lee JS, Adhikari S, Liu L, Jeong HG, Kim H, Yoon SJ. Osteoporosis detection in panoramic radiographs using a deep convolutional neural network-based computer-assisted diagnosis system: a preliminary study [e-pub ahead of print]. Dentomaxillofac Radiol. doi:10.1259/dmfr.20170344, accessed Jul 13, 2018.

Reprint requests:
Yoshiko Ariji
Department of Oral and Maxillofacial Radiology
Aichi-Gakuin University School of Dentistry
2-11 Suemori-dori, Chikusa-ku
Nagoya 464-8651, Japan
[email protected]