AI and Orthodontics
Abstract
Objective Intra-oral scans and gypsum cast scans (OS) are widely used in orthodontics, prosthetics, implantology, and orthognathic surgery to plan patient-specific treatments, which require teeth segmentations with high accuracy and resolution. Manual teeth segmentation, the gold standard up until now, is time-consuming, tedious, and observer-dependent. This study aims to develop an automated teeth segmentation and labeling system using deep learning.
Material and methods As a reference, 1750 OS were manually segmented and labeled. A deep-learning approach based on PointCNN and 3D U-Net in combination with a rule-based heuristic algorithm and a combinatorial search algorithm was trained and validated on 1400 OS. Subsequently, the trained algorithm was applied to a test set consisting of 350 OS. The intersection over union (IoU), as a measure of accuracy, was calculated to quantify the degree of similarity between the annotated ground truth and the model predictions.
Results The model achieved accurate teeth segmentations with a mean IoU score of 0.915. The FDI labels of the teeth were predicted with a mean accuracy of 0.894. The optical inspection showed excellent position agreements between the automatically and manually segmented teeth components. Minor flaws were mostly seen at the edges.
Conclusion The proposed method forms a promising foundation for time-effective and observer-independent teeth
segmentation and labeling on intra-oral scans.
Clinical significance Deep learning may assist clinicians in virtual treatment planning in orthodontics, prosthetics,
implantology, and orthognathic surgery. The impact of using such models in clinical practice should be explored.
Keywords Deep learning, Artificial intelligence, Intra-oral scan, Computer-assisted planning, Digital imaging
† Marcel Hanisch and Tong Xi contributed equally to this work.
* Correspondence: Tabea Flügge, [email protected]
1 Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
2 Department of Artificial Intelligence, Radboud University, Nijmegen, the Netherlands
3 Department of Oral and Maxillofacial Surgery, Universitätsklinikum Münster, Münster, Germany
4 Promaton Co. Ltd, 1076 GR Amsterdam, The Netherlands
5 MIT Computer Science & Artificial Intelligence Laboratory, 32 Vassar St, Cambridge, MA 02139, USA
6 Department of Radiology, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
7 Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Hindenburgdamm 30, 12203 Berlin, Germany
$$
f_1(E) \;=\; \cdots \;+\; \lambda_{12}\left(\sum_{n}\sum_{c\in\mathrm{UpperJaw}} e_{nc}\,\mathbb{1}\!\left[\mathrm{COM}_z^{\,n} < \mathrm{COM}_z^{\,\odot}\right] \;+\; \sum_{n}\sum_{c\in\mathrm{LowerJaw}} e_{nc}\,\mathbb{1}\!\left[\mathrm{COM}_z^{\,\odot} < \mathrm{COM}_z^{\,n}\right]\right) \;+\; \lambda_{13}\sum_{n}\max\!\left(1-\sum_{c} e_{nc},\, 0\right),
$$

where $f_{11}$ wished to have all FDI numbers assigned to a unique object, $f_{12}$ aimed to have detections assigned to the right jaw, and $f_{13}$ reduced the count of unassigned detections to a minimum. The $\lambda$'s were weights, set at $\lambda_{12} = 0.1$ and $\lambda_{13} = 0.01$. For the second stage, a permutated space of $E_1$ was explored where the assigned detections remained assigned in each jaw while having a possible permutation of FDI numbers (i.e., $\sum_c e_{nc}$ stays constant $\forall n$). This step encourages the FDI numbers to become sorted.
In the second stage,

$$
E_2 \;=\; \operatorname*{arg\,min}_{E\,\in\,\mathrm{Permutation}(E_1)} f_2(E)
$$

is minimized, where

$$
f_2(E) \;=\; \sum_{n_1,n_2}\,\sum_{c_1,c_2\in\mathrm{UpperJaw}} e_{n_1 c_1}\, e_{n_2 c_2}\,\mathbb{1}\!\left[\left(\mathrm{COM}_x^{\,n_1} > \mathrm{COM}_x^{\,n_2}\right) \oplus \left(c_1 > c_2\right)\right] \;+\; \sum_{n_1,n_2}\,\sum_{c_1,c_2\in\mathrm{LowerJaw}} e_{n_1 c_1}\, e_{n_2 c_2}\,\mathbb{1}\!\left[\left(\mathrm{COM}_x^{\,n_1} > \mathrm{COM}_x^{\,n_2}\right) \oplus \left(c_1 > c_2\right)\right].
$$

In the formula, $\mathrm{COM}_x$ ($x$ went from left to right for the patient) was enforced to grow monotonically with the FDI numbers within each jaw.
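To make the two-stage search concrete, the following is a minimal NumPy sketch of the cost terms and the permutation stage. The study's code is not published, so all naming here is ours: E is the binary assignment matrix, com_x and com_z hold the detection centers of mass, z_split is a hypothetical plane separating the jaws, and upper/lower are the FDI slot column indices per jaw.

```python
import numpy as np
from itertools import permutations

def stage1_penalties(E, com_z, z_split, upper, lower, lam12=0.1, lam13=0.01):
    # f12: upper-jaw slots should lie above the separating plane and
    # lower-jaw slots below it (the indicator terms of f1 above).
    f12 = (E[:, upper] * (com_z < z_split)[:, None]).sum() \
        + (E[:, lower] * (com_z > z_split)[:, None]).sum()
    # f13: one unit of cost per detection left without an FDI number.
    f13 = np.maximum(1.0 - E.sum(axis=1), 0.0).sum()
    return lam12 * f12 + lam13 * f13

def f2(E, com_x, jaws):
    # Count pairs within a jaw whose left-right order (COM_x) disagrees
    # with their FDI order -- the XOR indicator in f2 above.
    cost = 0
    for cols in jaws:                    # jaws = [upper_cols, lower_cols]
        n, c = np.nonzero(E[:, cols])    # assigned detections in this jaw
        for i in range(len(n)):
            for j in range(len(n)):
                cost += int((com_x[n[i]] > com_x[n[j]]) != (c[i] > c[j]))
    return cost

def stage2(E1, com_x, jaws):
    # Permute the FDI numbers within each jaw (which detections are assigned
    # stays fixed) and keep the cheapest ordering. Brute force is shown for
    # clarity only; sorting the assigned detections by COM_x reaches the
    # same minimum directly.
    best, best_cost = E1.copy(), f2(E1, com_x, jaws)
    for cols in jaws:
        cols = np.asarray(cols)
        n, c = np.nonzero(best[:, cols])
        for perm in permutations(c):
            cand = best.copy()
            cand[n, cols[c]] = 0                  # clear current slots
            cand[n, cols[np.array(perm)]] = 1     # apply permuted slots
            cost = f2(cand, com_x, jaws)
            if cost < best_cost:
                best, best_cost = cand, cost
    return best
```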
The detection module was trained over 180 epochs with a learning rate decay of 0.8, while the segmentation module was trained for 50 epochs with a learning rate decay of 1. The applied batch size was one for the detection module with 30,000 vertices and three for the segmentation module with 8192 vertices. Weight decay of 0.0001 and early stopping were applied for both modules. Both modules used the Adam optimizer at a learning rate of 0.001. No momentum or gradient clipping were applied. The binary cross-entropy loss function was applied for the segmentation module. The detection module used a multi-task loss function consisting of binary cross-entropy loss and IoU loss.
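For orientation, these hyperparameters translate into roughly the following PyTorch setup. This is a hypothetical reading of the paragraph above: the actual PointCNN detection module and 3D U-Net segmentation module are not public, so tiny stand-in networks are used purely to make the configuration runnable.

```python
import torch
from torch import nn

det_net = nn.Linear(3, 32)  # stand-in for the PointCNN-based detection module
seg_net = nn.Linear(3, 1)   # stand-in for the 3D U-Net segmentation module

# Adam at lr 0.001 with weight decay 0.0001 for both modules; no momentum
# tuning or gradient clipping.
det_opt = torch.optim.Adam(det_net.parameters(), lr=1e-3, weight_decay=1e-4)
seg_opt = torch.optim.Adam(seg_net.parameters(), lr=1e-3, weight_decay=1e-4)

# Per-epoch learning-rate decay: 0.8 for detection, 1.0 (constant) for
# segmentation.
det_sched = torch.optim.lr_scheduler.ExponentialLR(det_opt, gamma=0.8)
seg_sched = torch.optim.lr_scheduler.ExponentialLR(seg_opt, gamma=1.0)

bce = nn.BCEWithLogitsLoss()  # binary cross-entropy term for both modules

def detection_loss(cls_logits, cls_targets, box_iou):
    # Multi-task detection loss: binary cross-entropy on the class output
    # plus an IoU loss (1 - IoU) on the predicted bounding boxes.
    return bce(cls_logits, cls_targets) + (1.0 - box_iou).mean()

# Reported schedule: detection trained for 180 epochs at batch size 1
# (30,000 vertices per scan); segmentation for 50 epochs at batch size 3
# (8192 vertices per sample); early stopping applied to both modules.
```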
Table 1 Precision, recall, and Intersection over Union (IoU) of the detections (columns: Tooth, Precision, Recall, IoU_BoundingBox)

Table 2 Accuracy, precision, recall, and Intersection over Union (IoU) of the OS segmentations (columns: Tooth, Accuracy, Precision, Recall, IoU_Mask)
Fig. 2 Overview of mandibular teeth segmentations; left: manual segmentation; middle: automatic segmentation; right: overlay; one of the two detection errors is illustrated
Fig. 3 Overview of maxillary teeth segmentations; left: manual segmentation; middle: automatic segmentation; right: overlay
Table 3 Accuracy of the FDI numeration

Tooth  Accuracy    Tooth  Accuracy
11     0.944       31     0.850
12     0.943       32     0.879
13     0.944       33     0.892
14     0.947       34     0.898
15     0.945       35     0.931
16     0.902       36     0.849
17     0.797       37     0.843
18     0.800       38     1.000
21     0.938       41     0.847
22     0.938       42     0.884
23     0.944       43     0.916
24     0.913       44     0.918
25     0.926       45     0.941
26     0.871       46     0.905
27     0.873       47     0.914
28     0.600       48     0.667

… treatment evaluation after appointments [18]. Particularly, 3D treatment planning can be time-consuming and laborious, but with the help of automated assistance, it can become more time-efficient, leading to a more cost-effective 3D treatment planning process [6]. In this study, the researchers evaluated the performance of a deep learning model for automating 3D teeth detection, segmentation, and FDI labeling on 3D meshes.

In dentistry, different studies have applied deep learning models for segmentation on 3D meshes [6, 20–23]. Lian et al. introduced a mesh-based graph neural network for teeth segmentation with an F1-score of 0.981 [23]. Zhao et al. used a graph attentional convolution network with a local spatial augmentation module for segmentation and achieved a mean IoU of 0.871 [22]. Zanjani et al. proposed a volumetric anchor-based region proposal network for teeth point cloud detection and segmentation with a mean IoU of 0.98 [21]. Cui et al. applied a two-stage network architecture for tooth centroid extraction using a distance-aware voting scheme and segmentation with an F1-score of 0.942 [20]. Similarly, Hao et al. proposed a two-module approach: the segmentation module generated a fine-grained segmentation, whereas the canary module autocorrected the segmentation based on confidence evaluation. Hao et al. reported a mean IoU of 0.936 and 0.942 for mandibular and maxillary teeth, respectively [6]. The number of studies reporting the classification and semantic labeling accuracies of each tooth is still limited [18, 19]. Tian et al. employed a 3D CNN using a sparse voxel octree for teeth classification with an accuracy of 0.881 [18]. Ma et al. proposed a deep learning network to predict the semantic label of each 3D tooth model based on spatial relationship features. The proposed SRF-Net achieved a classification accuracy of 0.9386 [19].

It is important to recognize that the performance of deep learning models relies heavily on factors such as the dataset, hyperparameters, and architecture involved [8]. One key obstacle to reproducing and validating previous results is the restricted accessibility of the datasets used, stemming from privacy concerns. Furthermore, the considerable variation in training and test set sizes across different studies makes it difficult to draw direct comparisons. The lack of clarity regarding data representativeness further compounds the issue.

Moreover, attempting to reproduce complex computational pipelines based solely on textual descriptions without access to the source code becomes a subjective and challenging task [31]. The inadequate description of training pipelines, essential hyperparameters, and current software dependencies undermines the transparency and reproducibility of earlier findings. Given these limitations, it is essential to approach any direct comparison with previous segmentation and labeling results with caution [5].

Even though previous studies achieved remarkable results, the models are regarded as black boxes lacking explicit declarative knowledge representation. Generating the underlying explanatory structures is essential in the medical domain to provide clinicians with a transparent, understandable, and explainable system [29]. The current study made the results re-traceable on demand using a hierarchical three-step plug-and-play pipeline, which allows clinicians to verify the immediate results of each module before proceeding further. In case the detection module fails to detect a tooth, the clinician can correct the mistake immediately and proceed to the subsequent module. This stop-and-go approach ensures an efficient workflow while maintaining high precision and explainability. Another advantage of this plug-and-play pipeline is the interchangeability of the different modules: the detection and segmentation modules can be exchanged with alternative model architectures without much difficulty.
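The stop-and-go idea can be pictured as a thin wrapper around the three modules. This is an illustrative sketch with hypothetical callables, not the authors' implementation: detect, segment, and label stand for the three modules, and review is a clinician-facing hook that displays an intermediate result and returns it, possibly corrected.

```python
# Illustrative sketch of the stop-and-go pipeline (our own, hypothetical).
def run_pipeline(scan, detect, segment, label, review):
    detections = review("detection", detect(scan))              # step 1
    masks = review("segmentation", segment(scan, detections))   # step 2
    fdi_labels = review("labeling", label(masks))               # step 3
    return fdi_labels

# A trivial review hook that accepts every intermediate result unchanged:
accept_all = lambda stage, result: result
```

Because each module only sees the (possibly corrected) output of the previous one, swapping in an alternative detector or segmenter leaves the rest of the pipeline untouched.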
Vinayahalingam et al. BMC Oral Health (2023) 23:643 Page 8 of 9
Fig. 4 Confusion matrices show the agreement between actual and predicted classes to indicate labeling accuracy; brighter cells signify a higher class agreement. The left and right matrices display the model performance in the maxilla and mandible, respectively
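As a reading aid for Table 3 and Fig. 4, per-tooth accuracy can be derived from a confusion matrix by one common convention (diagonal count over row total); a small sketch with toy numbers, not the study's data:

```python
import numpy as np

# C[i, j] counts teeth of true class i predicted as class j.
def per_class_accuracy(C: np.ndarray) -> np.ndarray:
    return np.diag(C) / C.sum(axis=1)

C = np.array([[18, 2],
              [1, 19]])          # toy 2-class example
print(per_class_accuracy(C))    # [0.9  0.95]
```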
The segmentation IoU scores ranged between 0.792 and 0.948. Furthermore, each tooth was classified and labeled with an accuracy between 0.600 and 1.000. The lowest segmentation and labeling accuracies were seen for the third molars. Hierarchical concatenation of different deep learning models and post-processing heuristics has the disadvantage that errors in the different modules are cumulative: inaccuracies in the detection module will affect the segmentation module and the FDI labeling algorithm. However, this shortcoming can be neglected if the pipeline is used interactively with clinicians.
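For reference, the mask IoU reported here can be computed per tooth from boolean vertex labels; a minimal sketch under our own naming (the study's evaluation code is not public):

```python
import numpy as np

def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    # pred/truth: boolean masks over the scan's vertices for a single tooth.
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return np.logical_and(pred, truth).sum() / union

pred = np.array([True, True, True, False])
truth = np.array([True, True, False, False])
print(mask_iou(pred, truth))  # 2/3, i.e. about 0.667
```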
Although our proposed model has achieved clinically applicable results, it has some limitations. Wisdom teeth, supernumerary teeth, or crowded teeth impede the segmentation and labeling accuracies. Most failure cases are related to rare or complicated dental morphologies [6, 7, 18–20]. Without real-world integration, deep learning models are bound to the limits of the training set and validation set. Furthermore, extensive model comparisons are required to choose the optimal model architectures for the respective modules (e.g., Point-RCNN for the detection module). Future studies should focus on further automation of 3D treatment planning steps, such as automated crown design and automated alignment of intra-oral scans and cone-beam computed tomography.

The proposed model is currently clinically used for orthodontic treatment planning. The constant error reductions and adaptations to real-world cases will further enhance the current model. The successful implementation of this approach in daily clinical practice will also further reduce the risks of limited robustness, generalizability, and reproducibility.

Conclusion
In conclusion, our proposed method achieved accurate teeth segmentations with a mean IoU score of 0.915. The FDI labels of the teeth were predicted with a mean accuracy of 0.894. This forms a promising foundation for time-effective and observer-independent teeth segmentation and labeling on intra-oral scans.
Acknowledgements
None.

Authors' contributions
Shankeeth Vinayahalingam: Conceptualization, Method, Investigation, Formal Analysis, Software, Funding acquisition, Writing – original draft. Steven Kempers: Validation, Visualization, Data curation, Writing – review & editing. Julian Schoep: Software, Method, Formal Analysis, Writing – review & editing. Tzu-Ming Harry Hsu: Software, Method, Formal Analysis, Writing – review & editing. David Anssari Moin: Investigation, Validation, Resources, Project administration, Funding acquisition, Supervision, Writing – review & editing. Bram van Ginneken: Investigation, Validation, Supervision, Writing – review & editing. Tabea Flügge: Investigation, Validation, Supervision, Writing – review & editing. Marcel Hanisch: Investigation, Validation, Supervision, Writing – review & editing. Tong Xi: Investigation, Validation, Supervision, Writing – review & editing.
Funding
Open Access funding enabled and organized by Projekt DEAL. This research is partially funded by the Radboud AI for Health collaboration between Radboud University and Radboudumc, and the Innovation Center for Artificial Intelligence (ICAI).

Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Declarations

Ethics approval and consent to participate
This study was conducted in accordance with the code of ethics of the World Medical Association (Declaration of Helsinki) and the ICH-GCP. The approval of this study was granted by the Commissie Mensgebonden Onderzoek Radboudumc, Nijmegen, The Netherlands, which also approved that informed consent was not required as all image data were anonymized and de-identified before analysis (decision no. 2021–13253).

Consent for publication
Not applicable.

Competing interests
The authors declare no competing interests.

Received: 7 April 2023 Accepted: 26 August 2023
References
1. Mangano F, Gandolfi A, Luongo G, Logozzo S. Intraoral scanners in dentistry: a review of the current literature. BMC Oral Health. 2017;17(1):149.
2. Jheon AH, Oberoi S, Solem RC, Kapila S. Moving towards precision orthodontics: an evolving paradigm shift in the planning and delivery of customized orthodontic therapy. Orthod Craniofac Res. 2017;20:106–13.
3. Baan F, Bruggink R, Nijsink J, Maal TJJ, Ongkosuwito EM. Fusion of intra-oral scans in cone-beam computed tomography scans. Clin Oral Investig. 2021;25(1):77–85.
4. Stokbro K, Aagaard E, Torkov P, Bell RB, Thygesen T. Virtual planning in orthognathic surgery. Int J Oral Maxillofac Surg. 2014;43(8):957–65.
5. Vinayahalingam S, Goey RS, Kempers S, Schoep J, Cherici T, Moin DA, et al. Automated chart filing on panoramic radiographs using deep learning. J Dent. 2021;115:103864.
6. Hao J, Liao W, Zhang YL, Peng J, Zhao Z, Chen Z, et al. Toward clinically applicable 3-dimensional tooth segmentation via deep learning. J Dent Res. 2022;101(3):304–11.
7. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88.
8. Vinayahalingam S, Kempers S, Limon L, Deibel D, Maal T, Hanisch M, et al. Classification of caries in third molars on panoramic radiographs using deep learning. Sci Rep. 2021;11(1):12609.
9. Krois J, Ekert T, Meinhold L, Golla T, Kharbot B, Wittemeier A, et al. Deep learning for the radiographic detection of periodontal bone loss. Sci Rep. 2019;9(1):8495.
10. Lee J-H, Kim D-H, Jeong S-N. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis. 2020;26(1):152–8.
11. Fu Q, Chen Y, Li Z, Jing Q, Hu C, Liu H, et al. A deep learning algorithm for detection of oral cavity squamous cell carcinoma from photographic images: a retrospective study. EClinicalMedicine. 2020;27:100558.
12. Schwendicke F, Rossi JG, Göstemeyer G, Elhennawy K, Cantu AG, Gaudin R, et al. Cost-effectiveness of artificial intelligence for proximal caries detection. J Dent Res. 2021;100(4):369–76.
13. Qu Y, Lin Z, Yang Z, Lin H, Huang X, Gu L. Machine learning models for prognosis prediction in endodontic microsurgery. J Dent. 2022;118:103947.
14. Yoo JH, Yeom HG, Shin W, Yun JP, Lee JH, Jeong SH, et al. Deep learning based prediction of extraction difficulty for mandibular third molars. Sci Rep. 2021;11(1):1954.
15. Yu HJ, Cho SR, Kim MJ, Kim WH, Kim JW, Choi J. Automated skeletal classification with lateral cephalometry based on artificial intelligence. J Dent Res. 2020;99(3):249–56.
16. Ter Horst R, van Weert H, Loonen T, Berge S, Vinayahalingam S, Baan F, et al. Three-dimensional virtual planning in mandibular advancement surgery: soft tissue prediction based on deep learning. J Craniomaxillofac Surg. 2021;49(9):775–82.
17. Lahoud P, EzEldeen M, Beznik T, Willems H, Leite A, Van Gerven A, et al. Artificial intelligence for fast and accurate 3-dimensional tooth segmentation on cone-beam computed tomography. J Endod. 2021;47(5):827–35.
18. Tian S, Dai N, Zhang B, Yuan F, Yu Q, Cheng X. Automatic classification and segmentation of teeth on 3D dental model using hierarchical deep learning networks. IEEE Access. 2019;7:84817–28.
19. Ma Q, Wei GS, Zhou YF, Pan X, Xin SQ, Wang WP. SRF-Net: spatial relationship feature network for tooth point cloud classification. Comput Graph Forum. 2020;39(7):267–77.
20. Cui ZM, Li CJ, Chen NL, Wei GD, Chen RN, Zhou YF, et al. TSegNet: an efficient and accurate tooth segmentation network on 3D dental model. Med Image Anal. 2021;69:101949.
21. Zanjani FG, Pourtaherian A, Zinger S, Moin DA, Claessen F, Cherici T, et al. Mask-MCNet: tooth instance segmentation in 3D point clouds of intra-oral scans. Neurocomputing. 2021;453:286–98.
22. Zhao Y, Zhang LM, Yang CS, Tan YY, Liu Y, Li PC, et al. 3D dental model segmentation with graph attentional convolution network. Pattern Recogn Lett. 2021;152:79–85.
23. Lian C, Wang L, Wu TH, Wang F, Yap PT, Ko CC, et al. Deep multi-scale mesh feature learning for automated labeling of raw dental surfaces from 3D intraoral scanners. IEEE Trans Med Imaging. 2020;39(7):2440–50.
24. Poon AIF, Sung JJY. Opening the black box of AI-medicine. J Gastroenterol Hepatol. 2021;36(3):581–4.
25. Amann J, Blasimme A, Vayena E, Frey D, Madai VI, Precise4Q Consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak. 2020;20(1):1–9.
26. Kempers S, van Lierop P, Hsu TH, Moin DA, Berge S, Ghaeminia H, et al. Positional assessment of lower third molar and mandibular canal using explainable artificial intelligence. J Dent. 2023;133:104519.
27. Li YY, Bu R, Sun MC, Wu W, Di XH, Chen BQ. PointCNN: convolution on X-transformed points. Adv Neural Inf Process Syst. 2018;31.
28. Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2016.
29. Chen YW, Stanley K, Att W. Artificial intelligence in dentistry: current applications and future perspectives. Quintessence Int. 2020;51(3):248–57.
30. Haibe-Kains B, Adam GA, Hosny A, Khodakarami F, Massive Analysis Quality Control (MAQC) Society Board of Directors, Waldron L, et al. Transparency and reproducibility in artificial intelligence. Nature. 2020;586(7829):E14–E16.
31. Holzinger A, Biemann C, Pattichis CS, Kell DB. What do we need to build explainable AI systems for the medical domain? arXiv preprint arXiv:1712.09923. 2017.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.