1
Yacout YM, Eid FY, Tageldin MA, Kassem HE. Evaluation of the accuracy of automated tooth segmentation of intraoral scans using artificial intelligence-based software packages. Am J Orthod Dentofacial Orthop 2024;166:282-291.e1. PMID: 38904564. DOI: 10.1016/j.ajodo.2024.05.015.
Abstract
INTRODUCTION The accuracy of tooth segmentation in intraoral scans is crucial for performing virtual setups and appliance fabrication. Hence, the objective of this study was to estimate and compare the accuracy of automated tooth segmentation generated by the artificial intelligence of dentOne software (DIORCO Co, Ltd, Yongin, South Korea) and Medit Ortho Simulation software (Medit Corp, Seoul, South Korea). METHODS Twelve maxillary and mandibular pretreatment dental scan sets comprising 286 teeth were collected for this investigation from the archives of the Department of Orthodontics, Faculty of Dentistry, Alexandria University. The scans were imported as standard tessellation language files into both dentOne and Medit Ortho Simulation software. Automatic segmentation was run on each software. The number of successfully segmented teeth vs failed segmentations was recorded to determine the success rate of automated segmentation of each program. Evaluation of success and/or failure was based on the software's identification of the teeth and the quality of the segmentation. The mesiodistal tooth width measurements after segmentation using both tested software programs were compared with those measured on the unsegmented scan using Meshmixer software (Autodesk, San Rafael, Calif). The unsegmented scans served as the reference standard. RESULTS A total of 288 teeth were examined. Successful identification rates were 99% and 98.3% for Medit and dentOne, respectively. Success rates of segmenting the lingual surfaces of incisors were significantly higher in Medit than in dentOne (93.7% vs 66.7%, respectively; P <0.001). DentOne overestimated the mesiodistal width of canines (0.11 mm, P = 0.032), premolars (0.22 mm, P < 0.001), and molars (0.14 mm, P = 0.043) compared with the reference standard, whereas Medit overestimated the mesiodistal width of premolars only (0.13 mm, P = 0.006). 
Bland-Altman plots showed that the limits of agreement for mesiodistal tooth width exceeded 0.2 mm between each software package and the reference standard. CONCLUSIONS Both artificial intelligence segmentation software packages demonstrated acceptable accuracy in tooth segmentation, although dentOne needs improvement in segmenting the lingual surfaces of incisors. Both programs tended to overestimate the mesiodistal widths of segmented teeth, particularly the premolars. Artificial intelligence segmentation therefore needs to be manually adjusted by the operator to ensure accuracy; even then, the problem of proximal surface reconstruction by the software remains unsolved.
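For context, Bland-Altman agreement limits such as the 0.2 mm bound discussed above are the mean paired difference (bias) ± 1.96 standard deviations of the differences. A minimal sketch with hypothetical width measurements (not the study's data):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias (mean difference) and 95% limits of agreement for paired measurements."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical mesiodistal widths (mm): software-segmented vs. reference scan
software = [7.12, 6.48, 7.90, 6.55, 10.21]
reference = [7.00, 6.40, 7.75, 6.60, 10.05]
bias, (lo, hi) = bland_altman_limits(software, reference)
print(f"bias={bias:.3f} mm, LoA=({lo:.3f}, {hi:.3f}) mm")
```

If the limits of agreement fall outside a clinically acceptable bound (here, 0.2 mm), the two measurement methods cannot be used interchangeably even when the bias itself is small.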
Affiliation(s)
- Yomna M Yacout
- Department of Orthodontics, Faculty of Dentistry, Alexandria University, Alexandria, Egypt
- Farah Y Eid
- Department of Orthodontics, Faculty of Dentistry, Alexandria University, Alexandria, Egypt
- Mostafa A Tageldin
- Department of Orthodontics, Faculty of Dentistry, Alexandria University, Alexandria, Egypt
- Hassan E Kassem
- Department of Orthodontics, Faculty of Dentistry, Alexandria University, Alexandria, Egypt
2
Chen H, Qu Z, Tian Y, Jiang N, Qin Y, Gao J, Zhang R, Ma Y, Jin Z, Zhai G. A cross-temporal multimodal fusion system based on deep learning for orthodontic monitoring. Comput Biol Med 2024;180:109025. PMID: 39159544. DOI: 10.1016/j.compbiomed.2024.109025.
Abstract
INTRODUCTION In the treatment of malocclusion, continuous monitoring of the three-dimensional relationship between dental roots and the surrounding alveolar bone is essential for preventing complications from orthodontic procedures. Cone-beam computed tomography (CBCT) provides detailed root and bone data, but its high radiation dose limits its frequent use, necessitating an alternative for ongoing monitoring. OBJECTIVES We aimed to develop a deep learning-based cross-temporal multimodal image fusion system for acquiring root and jawbone information without additional radiation, enhancing the ability of orthodontists to monitor risk. METHODS Utilizing CBCT and intraoral scans (IOSs) as cross-temporal modalities, we integrated deep learning with multimodal fusion technologies to develop a system that includes a CBCT segmentation model for teeth and jawbones. This model incorporates a dynamic kernel prior model, resolution restoration, and an IOS segmentation network optimized for dense point clouds. Additionally, a coarse-to-fine registration module was developed. This system facilitates the integration of IOS and CBCT images across varying spatial and temporal dimensions, enabling the comprehensive reconstruction of root and jawbone information throughout the orthodontic treatment process. RESULTS The experimental results demonstrate that our system not only maintains the original high resolution but also delivers outstanding segmentation performance on external testing datasets for CBCT and IOSs: the CBCT model achieved Dice coefficients of 94.1% and 94.4% for teeth and jawbones, respectively, and the IOS model achieved a Dice coefficient of 91.7%. Additionally, in real-world registration, the system achieved an average distance error (ADE) of 0.43 mm for teeth and 0.52 mm for jawbones, significantly reducing the processing time.
CONCLUSION We developed the first deep learning-based cross-temporal multimodal fusion system, addressing the critical challenge of continuous risk monitoring in orthodontic treatments without additional radiation exposure. We hope that this study will catalyze transformative advancements in risk management strategies and treatment modalities, fundamentally reshaping the landscape of future orthodontic practice.
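The Dice coefficients reported above quantify volumetric overlap between predicted and reference segmentations. A minimal sketch over binary numpy masks (illustrative only; the paper's CBCT volumes and mesh labels are far richer):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

# Two toy 4x4 "slices": the prediction misses part of the structure and adds a stray voxel
gt = np.zeros((4, 4), int); gt[1:3, 1:3] = 1                       # 4 voxels
pred = np.zeros((4, 4), int); pred[1:3, 1:2] = 1; pred[0, 0] = 1   # 3 voxels, 2 overlapping
print(dice(pred, gt))
```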
Affiliation(s)
- Haiwen Chen
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Clinical Research Center for Oral Diseases, Department of Orthodontics, School of Stomatology, The Fourth Military Medical University, Xi'an, 710032, China
- Zhiyuan Qu
- Institute of Image Communication and Network Engineering, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200011, China
- Yuan Tian
- Institute of Image Communication and Network Engineering, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200011, China
- Ning Jiang
- Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai, 200030, China
- Yuan Qin
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Clinical Research Center for Oral Diseases, Department of Orthodontics, School of Stomatology, The Fourth Military Medical University, Xi'an, 710032, China
- Jie Gao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Clinical Research Center for Oral Diseases, Department of Orthodontics, School of Stomatology, The Fourth Military Medical University, Xi'an, 710032, China
- Ruoyan Zhang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Clinical Research Center for Oral Diseases, Department of Orthodontics, School of Stomatology, The Fourth Military Medical University, Xi'an, 710032, China
- Yanning Ma
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Clinical Research Center for Oral Diseases, Department of Orthodontics, School of Stomatology, The Fourth Military Medical University, Xi'an, 710032, China
- Zuolin Jin
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Clinical Research Center for Oral Diseases, Department of Orthodontics, School of Stomatology, The Fourth Military Medical University, Xi'an, 710032, China
- Guangtao Zhai
- Institute of Image Communication and Network Engineering, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200011, China
3
van Nistelrooij N, Maier E, Bronkhorst H, Crins L, Xi T, Loomans BAC, Vinayahalingam S. Automated monitoring of tooth wear progression using AI on intraoral scans. J Dent 2024;150:105323. PMID: 39197530. DOI: 10.1016/j.jdent.2024.105323.
Abstract
OBJECTIVES This study aimed to develop and evaluate a fully automated method for visualizing and measuring tooth wear progression using pairs of intraoral scans (IOSs) in comparison with a manual protocol. METHODS Eight patients with severe tooth wear progression were retrospectively included, with IOSs taken at baseline and 1-year, 3-year, and 5-year follow-ups. For alignment, the automated method segmented the arch into separate teeth in the IOSs. Tooth pair registration selected tooth surfaces that were likely unaffected by tooth wear and performed point set registration on the selected surfaces. Maximum tooth profile losses from baseline to each follow-up were determined based on signed distances using the manual 3D Wear Analysis (3DWA) protocol and the automated method. The automated method was evaluated against the 3DWA protocol by comparing tooth segmentations with the Dice-Sørensen coefficient (DSC) and intersection over union (IoU). The tooth profile loss measurements were compared with regression and Bland-Altman plots. Additionally, the relationship between the time interval and the measurement differences between the two methods was shown. RESULTS The automated method completed within two minutes. It was very effective for tooth instance segmentation (826 teeth, DSC = 0.947, IoU = 0.907), and a correlation of 0.932 was observed for agreement on tooth profile loss measurements (516 tooth pairs, mean difference = 0.021 mm, 95% confidence interval = [-0.085, 0.138] mm). The variability in measurement differences increased for larger time intervals. CONCLUSIONS The proposed automated method for monitoring tooth wear progression was faster than, and not clinically significantly different in accuracy from, a manual protocol for full-arch IOSs. CLINICAL SIGNIFICANCE General practitioners and patients can benefit from the visualization of tooth wear, allowing quantifiable and standardized decisions concerning therapy requirements of worn teeth.
The proposed method for tooth wear monitoring decreased the time required to less than two minutes compared with the manual approach, which took at least two hours.
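The profile-loss measurements above come from signed distances between registered baseline and follow-up surfaces. A rough numpy sketch of the idea on a toy point cloud (the function name, brute-force nearest-neighbour search, and data are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def max_profile_loss(baseline, followup, normals):
    """For each baseline point, signed distance to the nearest follow-up point
    along the baseline surface normal; wear appears as negative values
    (material removed), and the maximum profile loss is the most negative one."""
    # pairwise squared distances (fine for toy-sized clouds; real scans need a KD-tree)
    d2 = ((baseline[:, None, :] - followup[None, :, :]) ** 2).sum(-1)
    nn = followup[d2.argmin(axis=1)]
    signed = np.einsum("ij,ij->i", nn - baseline, normals)  # row-wise dot products
    return float(-signed.min())

# Toy flat occlusal patch at baseline; the follow-up lost 0.3 mm at one point
xy = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
baseline = np.c_[xy, np.zeros(4)]
followup = baseline.copy(); followup[2, 2] = -0.3   # wear facet
normals = np.tile([0.0, 0.0, 1.0], (4, 1))          # flat patch: +z normals
print(max_profile_loss(baseline, followup, normals))
```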
Affiliation(s)
- Niels van Nistelrooij
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, 6525 GA Nijmegen, the Netherlands; Department of Oral and Maxillofacial Surgery, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany
- Eva Maier
- Department of Operative Dentistry and Periodontology, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nuremberg, Maximiliansplatz 2, 91054 Erlangen, Germany; Department of Dentistry, Research Institute for Medical Innovation, Radboud University Medical Center, Philips van Leydenlaan 25, 6525 EX Nijmegen, the Netherlands
- Hilde Bronkhorst
- Department of Dentistry, Research Institute for Medical Innovation, Radboud University Medical Center, Philips van Leydenlaan 25, 6525 EX Nijmegen, the Netherlands
- Luuk Crins
- Department of Dentistry, Research Institute for Medical Innovation, Radboud University Medical Center, Philips van Leydenlaan 25, 6525 EX Nijmegen, the Netherlands
- Tong Xi
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, 6525 GA Nijmegen, the Netherlands
- Bas A C Loomans
- Department of Dentistry, Research Institute for Medical Innovation, Radboud University Medical Center, Philips van Leydenlaan 25, 6525 EX Nijmegen, the Netherlands
- Shankeeth Vinayahalingam
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, 6525 GA Nijmegen, the Netherlands
4
Yoon K, Kim JY, Kim SJ, Huh JK, Kim JW, Choi J. Multi-class segmentation of temporomandibular joint using ensemble deep learning. Sci Rep 2024;14:18990. PMID: 39160234. PMCID: PMC11333466. DOI: 10.1038/s41598-024-69814-5.
Abstract
Temporomandibular joint disorders are prevalent causes of orofacial discomfort. Diagnosis predominantly relies on assessing the configuration and positions of temporomandibular joint components in magnetic resonance images. The complex anatomy of the temporomandibular joint, coupled with variability in magnetic resonance image quality, often hinders an accurate diagnosis. To surmount this challenge, we developed deep learning models tailored to the automatic segmentation of temporomandibular joint components, including the temporal bone, disc, and condyle. These models underwent rigorous training and validation utilizing a dataset of 3693 magnetic resonance images from 542 patients. Upon evaluation, our ensemble model, which combines five individual models, yielded average Dice similarity coefficients of 0.867, 0.733, 0.904, and 0.952 for the temporal bone, disc, condyle, and background classes during internal testing. In the external validation, the average Dice similarity coefficient values for the temporal bone, disc, condyle, and background were 0.720, 0.604, 0.800, and 0.869, respectively. When applied in a clinical setting, these artificial intelligence-augmented tools enhanced the diagnostic accuracy of physicians, especially when discerning between temporomandibular joint anterior disc displacement and osteoarthritis. In essence, automated temporomandibular joint segmentation by our deep learning approach stands as a promising aid in refining the diagnosis and treatment of temporomandibular joint disorders.
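A segmentation ensemble such as the five-model combination above is commonly realized by averaging the models' per-class probability maps before taking the per-pixel argmax; since the abstract does not spell out its fusion rule, averaging here is an assumption:

```python
import numpy as np

def ensemble_argmax(prob_maps):
    """Fuse per-model softmax outputs of shape (C, H, W) by averaging over
    models, then pick the highest-probability class for each pixel."""
    return np.mean(prob_maps, axis=0).argmax(axis=0)

# Three toy models, 2 classes (background / condyle), one 1x2 "image"
m1 = np.array([[[0.9, 0.2]], [[0.1, 0.8]]])
m2 = np.array([[[0.6, 0.4]], [[0.4, 0.6]]])
m3 = np.array([[[0.8, 0.7]], [[0.2, 0.3]]])
labels = ensemble_argmax([m1, m2, m3])
print(labels)  # per-pixel class indices
```

Averaging dampens individual-model errors: model 3 alone would call the second pixel background, but the averaged probabilities still favour the condyle class there.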
Affiliation(s)
- Kyubaek Yoon
- Department of Artificial Intelligence and Software, Ewha Womans University, Seoul, South Korea
- Jae-Young Kim
- Department of Oral and Maxillofacial Surgery, Gangnam Severance Hospital, Yonsei University College of Dentistry, Seoul, Republic of Korea
- Sun-Jong Kim
- Department of Oral and Maxillofacial Surgery, School of Medicine, College of Medicine, Ewha Womans University, Anyangcheon-Ro 1071, Yangcheon-Gu, Seoul, 158-710, South Korea
- Jong-Ki Huh
- Department of Oral and Maxillofacial Surgery, Gangnam Severance Hospital, Yonsei University College of Dentistry, Seoul, Republic of Korea
- Jin-Woo Kim
- Department of Oral and Maxillofacial Surgery, School of Medicine, College of Medicine, Ewha Womans University, Anyangcheon-Ro 1071, Yangcheon-Gu, Seoul, 158-710, South Korea
- Jongeun Choi
- Department of Mobility Systems Engineering, School of Mechanical Engineering, Yonsei University, 50 Yonsei Ro, Seodaemun Gu, Seoul, 03722, South Korea
5
Zhu Y, Zhang L, Liu S, Wen A, Gao Z, Qin Q, Gao L, Zhao Y, Wang Y. Automatic three-dimensional facial symmetry reference plane construction based on facial planar reflective symmetry net. J Dent 2024;147:105043. PMID: 38735469. DOI: 10.1016/j.jdent.2024.105043.
Abstract
OBJECTIVES Three-dimensional (3D) facial symmetry analysis is based on the 3D symmetry reference plane (SRP). Artificial intelligence (AI) is widely used in the dental and oral sciences. This study developed a novel deep learning model called the facial planar reflective symmetry net (FPRS-Net) to automatically construct an SRP and established a method for defining a 3D point-cloud region of interest (ROI) and high-dimensional feature computations suitable for this network model. METHODS Overall, 240 patients were enrolled. The deep learning model was trained and predicted using 200 samples, and its clinical suitability was evaluated with 40 samples. Four FPRS-Net models were prepared using supervised and unsupervised learning approaches based on full facial and ROI data (FPRS-NetS, FPRS-NetSR, FPRS-NetU, and FPRS-NetUR). These models were trained on 160 3D facial datasets, validated on 20 cases, and tested on another 20 cases. The model predictions were evaluated using an additional 40 clinical 3D facial datasets by comparing the mean square error between the SRP parameters predicted by the four FPRS-Net models and the ground-truth plane. The clinical suitability of the FPRS-Net models was evaluated by measuring the angle error between the predicted and ground-truth planes; experts evaluated the predicted SRPs of the four FPRS-Net models using the visual analogue scale (VAS) method. RESULTS The FPRS-NetSR and FPRS-NetU models achieved average angle errors of 0.84° and 0.99° in predicting the 3D facial SRP, respectively, with VAS values of >8. Using the four FPRS-Net models to create an SRP in 40 cases of 3D facial data required <4 s. CONCLUSIONS Our study demonstrated a new solution for automatically constructing oral clinical 3D facial SRPs.
CLINICAL SIGNIFICANCE This study proposes a novel deep learning algorithm (FPRS-Net) to construct a symmetry reference plane that can reduce workload, shorten the time required for digital design, reduce dependence on expert experience, and improve therapeutic efficiency and effectiveness in dental clinics.
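The angle error used above to compare a predicted SRP with the ground-truth plane is the angle between the two planes' normal vectors, folded to [0°, 90°] because a normal's sign is arbitrary. A small sketch with hypothetical normals:

```python
import numpy as np

def plane_angle_deg(n1, n2):
    """Angle in degrees between two planes given their normal vectors."""
    n1 = np.asarray(n1, float) / np.linalg.norm(n1)
    n2 = np.asarray(n2, float) / np.linalg.norm(n2)
    cos = abs(np.dot(n1, n2))  # abs(): normal orientation is arbitrary
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# A predicted normal tilted 1 degree away from a vertical ground-truth normal
gt_normal = [0.0, 0.0, 1.0]
pred_normal = [0.0, np.sin(np.radians(1)), np.cos(np.radians(1))]
print(plane_angle_deg(gt_normal, pred_normal))  # ~1 degree
```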
Affiliation(s)
- Yujia Zhu
- Center of Digital Dentistry/Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Center for Stomatology, Beijing, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, China; Beijing Key Laboratory of Digital Stomatology, Beijing, China; NHC Key Laboratory of Digital Stomatology, Beijing, China
- Lingxiao Zhang
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Shuzhi Liu
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Aonan Wen
- Center of Digital Dentistry/Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Center for Stomatology, Beijing, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, China; Beijing Key Laboratory of Digital Stomatology, Beijing, China; NHC Key Laboratory of Digital Stomatology, Beijing, China
- Zixiang Gao
- Center of Digital Dentistry/Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Center for Stomatology, Beijing, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, China; Beijing Key Laboratory of Digital Stomatology, Beijing, China; NHC Key Laboratory of Digital Stomatology, Beijing, China
- Qingzhao Qin
- Center of Digital Dentistry/Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Center for Stomatology, Beijing, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, China; Beijing Key Laboratory of Digital Stomatology, Beijing, China; NHC Key Laboratory of Digital Stomatology, Beijing, China
- Lin Gao
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Yijiao Zhao
- Center of Digital Dentistry/Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Center for Stomatology, Beijing, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, China; Beijing Key Laboratory of Digital Stomatology, Beijing, China; NHC Key Laboratory of Digital Stomatology, Beijing, China
- Yong Wang
- Center of Digital Dentistry/Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Center for Stomatology, Beijing, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, China; Beijing Key Laboratory of Digital Stomatology, Beijing, China; NHC Key Laboratory of Digital Stomatology, Beijing, China
6
Wang X, Alqahtani KA, Van den Bogaert T, Shujaat S, Jacobs R, Shaheen E. Convolutional neural network for automated tooth segmentation on intraoral scans. BMC Oral Health 2024;24:804. PMID: 39014389. PMCID: PMC11250967. DOI: 10.1186/s12903-024-04582-2.
Abstract
BACKGROUND Tooth segmentation on intraoral scan (IOS) data is a prerequisite for clinical applications in digital workflows. Current state-of-the-art methods lack the robustness to handle variability in dental conditions. This study aims to propose and evaluate the performance of a convolutional neural network (CNN) model for automatic tooth segmentation on IOS images. METHODS A dataset of 761 IOS images (380 upper jaws, 381 lower jaws) was acquired using an intraoral scanner. The inclusion criteria included a full set of permanent teeth, teeth with orthodontic brackets, and partially edentulous dentition. A multi-step 3D U-Net pipeline was designed for automated tooth segmentation on IOS images. The model's performance was assessed in terms of time and accuracy. Additionally, the model was deployed on an online cloud-based platform, where a separate subsample of 18 IOS images was used to test the clinical applicability of the model by comparing three modes of segmentation: automated artificial intelligence-driven (A-AI), refined (R-AI), and semi-automatic (SA) segmentation. RESULTS The average time for automated segmentation was 31.7 ± 8.1 s per jaw. The CNN model achieved an Intersection over Union (IoU) score of 91%, with the full set of teeth achieving the highest performance and the partially edentulous group scoring the lowest. In terms of clinical applicability, SA took an average of 860.4 s per case, whereas R-AI showed a 2.6-fold decrease in time (328.5 s). Furthermore, R-AI offered higher performance and reliability compared to SA, regardless of the dentition group. CONCLUSIONS The 3D U-Net pipeline was accurate, efficient, and consistent for automatic tooth segmentation on IOS images. The online cloud-based platform could serve as a viable alternative for IOS segmentation.
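The IoU score reported above measures overlap between predicted and reference tooth labels; for binary masks (a simplification of the per-tooth mesh labels used in the study) it reads:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union of two binary masks.
    Related to the Dice coefficient D by IoU = D / (2 - D)."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

pred = np.array([1, 1, 0, 0], bool)
gt = np.array([1, 0, 1, 0], bool)
print(iou(pred, gt))  # 1 shared element out of 3 in the union
```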
Affiliation(s)
- Xiaotong Wang
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital of Harbin Medical University, Youzheng Street 23, Nangang, Harbin, 150001, China
- Khalid Ayidh Alqahtani
- Department of Oral and Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, Sattam Bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia
- Tom Van den Bogaert
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Sohaib Shujaat
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, 14611, Saudi Arabia
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Department of Oral and Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, Sattam Bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Eman Shaheen
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Department of Dental Medicine, Karolinska Institutet, Solnavägen 1, 171 77 Stockholm, Sweden
7
Yoon K, Jeong HM, Kim JW, Park JH, Choi J. AI-based dental caries and tooth number detection in intraoral photos: Model development and performance evaluation. J Dent 2024;141:104821. PMID: 38145804. DOI: 10.1016/j.jdent.2023.104821.
Abstract
OBJECTIVES In this study, we aimed to integrate tooth number recognition and caries detection in full intraoral photographic images using a cascade region-based deep convolutional neural network (R-CNN) model to facilitate the practical application of artificial intelligence (AI)-driven automatic caries detection in clinical practice. METHODS Our dataset comprised 24,578 images, encompassing 4787 upper occlusal, 4347 lower occlusal, 5230 right lateral, 5010 left lateral, and 5204 frontal views. In each intraoral image, tooth numbers and, when present, dental caries, including their location and stage, were annotated using bounding boxes. A cascade R-CNN model was used for dental caries detection and tooth number recognition within intraoral images. RESULTS For tooth number recognition, the model achieved an average mean average precision (mAP) score of 0.880. In the task of dental caries detection, the model's average mAP score was 0.769, with individual scores spanning from 0.695 to 0.893. CONCLUSIONS The primary objective of integrating tooth number recognition and caries detection within full intraoral photographic images has been achieved by our deep learning model. The model's training on comprehensive intraoral datasets has demonstrated its potential for seamless clinical application. CLINICAL SIGNIFICANCE This research holds clinical significance by achieving AI-driven automatic integration of tooth number recognition and caries detection in full intraoral images where multiple teeth are visible. It has the potential to promote the practical application of AI in real-life and clinical settings.
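The mAP scores above are built from precision-recall curves in which detections are matched to annotated boxes greedily, in confidence order, at an IoU threshold. A minimal sketch of that matching step (hypothetical boxes; a full mAP computation would sweep confidence thresholds and average over classes):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_recall(dets, gts, iou_thr=0.5):
    """Greedy confidence-ordered matching: each ground-truth box can be
    claimed by at most one detection with IoU >= iou_thr."""
    dets = sorted(dets, key=lambda d: -d[0])  # (score, box), best first
    claimed, tp = set(), 0
    for _, box in dets:
        best = max(((box_iou(box, g), i) for i, g in enumerate(gts)
                    if i not in claimed), default=(0, -1))
        if best[0] >= iou_thr:
            claimed.add(best[1]); tp += 1
    return tp / len(dets), tp / len(gts)

# Two annotated teeth, three detections (one far-off false positive)
gts = [(0, 0, 10, 10), (20, 0, 30, 10)]
dets = [(0.9, (1, 0, 10, 10)), (0.8, (20, 1, 30, 10)), (0.3, (50, 50, 60, 60))]
p, r = precision_recall(dets, gts)
print(p, r)
```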
Affiliation(s)
- Kyubaek Yoon
- School of Mechanical Engineering, Yonsei University, Seoul, South Korea
- Hye-Min Jeong
- Department of Artificial Intelligence Convergence, Ewha Womans University, Seoul, South Korea
- Jin-Woo Kim
- Department of Oral and Maxillofacial Surgery, College of Medicine, Ewha Womans University, Seoul, South Korea
- Jung-Hyun Park
- Department of Oral and Maxillofacial Surgery, College of Medicine, Ewha Womans University, Seoul, South Korea
- Jongeun Choi
- School of Mechanical Engineering, Yonsei University, Seoul, South Korea
8
Dot G, Gajny L, Ducret M. [The challenges of artificial intelligence in odontology]. Med Sci (Paris) 2024;40:79-84. PMID: 38299907. DOI: 10.1051/medsci/2023199.
Abstract
Artificial intelligence has numerous potential applications in dentistry, as these algorithms aim to improve the efficiency and safety of several clinical situations. While the first commercial solutions are being proposed, most of these algorithms have not been sufficiently validated for clinical use. This article describes the challenges surrounding the development of these new tools, to help clinicians keep a critical eye on this technology.
Affiliation(s)
- Gauthier Dot
- UFR odontologie, université Paris Cité, Paris, France; AP-HP, hôpital Pitié-Salpêtrière, service de médecine bucco-dentaire, Paris, France; Institut de biomécanique humaine Georges Charpak, école nationale supérieure d'Arts et Métiers, Paris, France
- Laurent Gajny
- Institut de biomécanique humaine Georges Charpak, école nationale supérieure d'Arts et Métiers, Paris, France
- Maxime Ducret
- Faculté d'odontologie, université Claude Bernard Lyon 1, hospices civils de Lyon, Lyon, France
9
Eliades T, Panayi N, Papageorgiou SN. From biomimetics to smart materials and 3D technology: Applications in orthodontic bonding, debonding, and appliance design or fabrication. Jpn Dent Sci Rev 2023;59:403-411. PMID: 38022388. PMCID: PMC10665594. DOI: 10.1016/j.jdsr.2023.10.005.
Abstract
This review covers aspects of orthodontic materials, appliance fabrication, and bonding, crossing scientific fields and presenting recent advances in science and technology. Its purpose is to familiarize the reader with developments on these issues, indicate possible future applications of such pioneering approaches, and report the current status in orthodontics. The first section of this review covers: shape-memory polymer wires; several misconceptions arising from the recent introduction of novel three-dimensional (3D)-printed aligners (mistakenly termed shape-memory polymers only because they present a certain degree of rebound capacity, as most non-stiff alloys or polymers do); frictionless surfaces enabling resistance-less sliding; self-healing materials for effective handling of fractured plastic or ceramic brackets; self-cleaning materials to minimize microbial attachment and plaque build-up on orthodontic appliances; elastomers with reduced force relaxation and extended stretching capacity to address inadequate force application during wire engagement in the bracket slot; biomimetic (non-etching-mediated) adhesive attachment based on the model of the gecko and the mussel; and command-debond adhesives as options for atraumatic debonding. The second section deals with the recent and largely unsubstantiated application of 3D-printed alloys and polymers in orthodontics, covering aspects of planning, material fabrication, and appliance design.
Affiliation(s)
- Theodore Eliades: Clinic of Orthodontics and Pediatric Dentistry, Center of Dental Medicine, University of Zurich, Zurich, Switzerland
- Nearchos Panayi: Clinic of Orthodontics and Pediatric Dentistry, Center of Dental Medicine, University of Zurich, Zurich, Switzerland; European University Cyprus, School of Dentistry, Nicosia, Cyprus
- Spyridon N. Papageorgiou: Clinic of Orthodontics and Pediatric Dentistry, Center of Dental Medicine, University of Zurich, Zurich, Switzerland

10
Yu JH, Kim JH, Liu J, Mangal U, Ahn HK, Cha JY. Reliability and time-based efficiency of artificial intelligence-based automatic digital model analysis system. Eur J Orthod 2023; 45:712-721. [PMID: 37418746] [DOI: 10.1093/ejo/cjad032]
Abstract
OBJECTIVES To compare the reliability, reproducibility, and time-based efficiency of automatic digital (AD) and manual digital (MD) model analyses using intraoral scan models. MATERIAL AND METHODS Two examiners analysed 26 intraoral scanner records using the MD and AD methods for orthodontic model analysis. Tooth size reproducibility was confirmed using a Bland-Altman plot. The Wilcoxon signed-rank test was conducted to compare the model analysis parameters (tooth size, sum of 12 teeth, Bolton analysis, arch width, arch perimeter, arch length discrepancy, and overjet/overbite) for each method, as well as the time taken for model analysis. RESULTS The MD group exhibited a relatively larger spread of 95% limits of agreement than the AD group. The standard deviations of repeated tooth measurements were 0.15 mm (MD group) and 0.08 mm (AD group). The mean difference values for the sum of 12 teeth (1.80-2.38 mm) and arch perimeter (1.42-3.23 mm) in the AD group were significantly (P < 0.001) larger than those in the MD group. Differences in arch width, Bolton ratios, and overjet/overbite were clinically insignificant. The overall mean times required for the measurements were 8.62 min and 0.56 min for the MD and AD groups, respectively. LIMITATIONS Validation results may vary in different clinical cases because the evaluation was limited to mild-to-moderate crowding in the complete dentition. CONCLUSIONS Significant differences were observed between the AD and MD groups. The AD method produced reproducible analyses in a considerably reduced timeframe, but with measurements that differed significantly from the MD method. Therefore, AD analysis should not be used interchangeably with MD analysis.
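The Bland-Altman agreement analysis used in this abstract can be sketched in a few lines; this is a minimal illustration with invented paired tooth-width measurements, not the study's data:

```python
from statistics import mean, stdev

def bland_altman_limits(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical repeated mesiodistal tooth widths (mm) from two methods
manual    = [8.52, 7.10, 6.95, 9.84, 7.33]
automatic = [8.47, 7.18, 6.90, 9.80, 7.41]
bias, lo, hi = bland_altman_limits(manual, automatic)
```

A narrow band between `lo` and `hi` is what the abstract means by a "smaller spread of 95% agreement limits" for the AD group.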
Affiliation(s)
- Jae-Hun Yu: Department of Orthodontics, Institute of Craniofacial Deformity, Yonsei University College of Dentistry, Seoul, Korea; BK21 FOUR Project, Yonsei University College of Dentistry, Seoul, Korea
- Ji-Hoi Kim: Department of Orthodontics, Institute of Craniofacial Deformity, Yonsei University College of Dentistry, Seoul, Korea; BK21 FOUR Project, Yonsei University College of Dentistry, Seoul, Korea
- Jing Liu: Department of Orthodontics, Institute of Craniofacial Deformity, Yonsei University College of Dentistry, Seoul, Korea
- Utkarsh Mangal: Department of Orthodontics, Institute of Craniofacial Deformity, Yonsei University College of Dentistry, Seoul, Korea
- Hee-Kap Ahn: Department of Computer Science and Engineering, Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Republic of Korea
- Jung-Yul Cha: Department of Orthodontics, Institute of Craniofacial Deformity, Yonsei University College of Dentistry, Seoul, Korea; BK21 FOUR Project, Yonsei University College of Dentistry, Seoul, Korea; Institute for Innovation in Digital Healthcare, Yonsei University, Seoul, Korea

11
Almalki SA, Alsubai S, Alqahtani A, Alenazi AA. Denoised encoder-based residual U-net for precise teeth image segmentation and damage prediction on panoramic radiographs. J Dent 2023; 137:104651. [PMID: 37553029] [DOI: 10.1016/j.jdent.2023.104651]
Abstract
OBJECTIVES This research performs teeth segmentation on panoramic radiograph images using a denoised encoder-based residual U-Net model, which enhances segmentation and adapts to predictions on different and new data, making the proposed model more robust and assisting in the accurate identification of damage to individual teeth. METHODS Segmentation starts with pre-processing the Tufts dataset, resizing the images to avoid computational complexity. Tooth defects are then predicted with the denoised encoder block in the residual U-Net model, in which a modified identity block in the encoder section provides finer segmentation of specific image regions and optimal feature identification. The denoised block aids in handling noisy ground-truth images effectively. RESULTS The proposed module achieved mean Dice and mean IoU values of 98.90075 and 98.74147, respectively. CONCLUSIONS The proposed AI-enabled model permitted precise segmentation of teeth in the Tufts dental dataset despite the presence of dense dental fillings and regardless of tooth type. CLINICAL SIGNIFICANCE The proposed model is pivotal for improved dental diagnostics, offering precise identification of dental anomalies. This could improve clinical dental practice by facilitating more accurate treatment and safer examination processes with lower radiation exposure, thus enhancing overall patient care.
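The Dice and IoU scores reported above are standard overlap metrics; a minimal sketch over toy flattened masks (indices of foreground pixels, invented for illustration):

```python
def dice_iou(pred, truth):
    """Dice coefficient and IoU for two sets of foreground pixel indices."""
    pred, truth = set(pred), set(truth)
    inter = len(pred & truth)
    union = len(pred | truth)
    dice = 2 * inter / (len(pred) + len(truth))
    iou = inter / union
    return dice, iou

# Toy masks standing in for a predicted vs ground-truth tooth region
dice, iou = dice_iou({1, 2, 3, 4}, {2, 3, 4, 5})
```

Dice always dominates IoU for partial overlap (Dice = 2·IoU/(1+IoU)), which is why papers often report both.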
Affiliation(s)
- Sultan A Almalki: Department of Preventive Dental Sciences, College of Dentistry, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Shtwai Alsubai: Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Abdullah Alqahtani: Department of Software Engineering, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Adel A Alenazi: Department of Oral and Maxillofacial Surgery and Diagnostic Science, College of Dentistry, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia

12
Liu J, Hao J, Lin H, Pan W, Yang J, Feng Y, Wang G, Li J, Jin Z, Zhao Z, Liu Z. Deep learning-enabled 3D multimodal fusion of cone-beam CT and intraoral mesh scans for clinically applicable tooth-bone reconstruction. Patterns (N Y) 2023; 4:100825. [PMID: 37720330] [PMCID: PMC10499902] [DOI: 10.1016/j.patter.2023.100825]
Abstract
High-fidelity three-dimensional (3D) models of tooth-bone structures are valuable for virtual dental treatment planning; however, they require integrating data from cone-beam computed tomography (CBCT) and intraoral scans (IOS) using methods that are either error-prone or time-consuming. Hence, this study presents Deep Dental Multimodal Fusion (DDMF), an automatic multimodal framework that reconstructs 3D tooth-bone structures using CBCT and IOS. Specifically, the DDMF framework comprises CBCT and IOS segmentation modules as well as a multimodal reconstruction module with novel pixel representation learning architectures, prior knowledge-guided losses, and geometry-based 3D fusion techniques. Experiments on real-world large-scale datasets revealed that DDMF achieved superior segmentation performance on CBCT and IOS, achieving a 0.17 mm average symmetric surface distance (ASSD) for 3D fusion with a substantial processing time reduction. Additionally, clinical applicability studies have demonstrated DDMF's potential for accurately simulating tooth-bone structures throughout the orthodontic treatment process.
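The average symmetric surface distance (ASSD) used above to score 3D fusion can be sketched as a symmetric nearest-neighbour average; a brute-force toy version on two invented point sets (real implementations work on dense mesh surfaces):

```python
from math import dist

def assd(points_a, points_b):
    """Average symmetric surface distance between two 3-D point sets."""
    def mean_nn(src, dst):
        # Mean distance from each point in src to its nearest point in dst
        return sum(min(dist(p, q) for q in dst) for p in src) / len(src)
    return (mean_nn(points_a, points_b) + mean_nn(points_b, points_a)) / 2

# Toy "surfaces": two unit squares offset by 0.1 mm along x
a = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
b = [(0.1, 0, 0), (1.1, 0, 0), (0.1, 1, 0), (1.1, 1, 0)]
surface_dist = assd(a, b)
```

Averaging in both directions keeps the metric symmetric, so neither surface is privileged as the reference.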
Affiliation(s)
- Jiaxiang Liu: Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Hangzhou 310000, China; Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining 314400, China; College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, China
- Jin Hao: State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China; Harvard School of Dental Medicine, Harvard University, Boston, MA 02115, USA
- Hangzheng Lin: Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining 314400, China
- Wei Pan: OPT Machine Vision Tech Co., Ltd., Tokyo 135-0064, Japan
- Jianfei Yang: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Yang Feng: Angelalign Inc., Shanghai 200433, China
- Gaoang Wang: Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining 314400, China
- Jin Li: Department of Stomatology, The First Affiliated Hospital of Shenzhen University, Shenzhen Second People’s Hospital, Shenzhen 518025, China
- Zuolin Jin: Department of Orthodontics, School of Stomatology, Air Force Medical University, Xi’an 710032, China
- Zhihe Zhao: State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Zuozhu Liu: Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Hangzhou 310000, China; Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining 314400, China

13
Vinayahalingam S, Kempers S, Schoep J, Hsu TMH, Moin DA, van Ginneken B, Flügge T, Hanisch M, Xi T. Intra-oral scan segmentation using deep learning. BMC Oral Health 2023; 23:643. [PMID: 37670290] [PMCID: PMC10481506] [DOI: 10.1186/s12903-023-03362-8]
Abstract
OBJECTIVE Intra-oral scans and gypsum cast scans (OS) are widely used in orthodontics, prosthetics, implantology, and orthognathic surgery to plan patient-specific treatments, which require teeth segmentations with high accuracy and resolution. Manual teeth segmentation, the gold standard up until now, is time-consuming, tedious, and observer-dependent. This study aims to develop an automated teeth segmentation and labeling system using deep learning. MATERIAL AND METHODS As a reference, 1750 OS were manually segmented and labeled. A deep-learning approach based on PointCNN and 3D U-net in combination with a rule-based heuristic algorithm and a combinatorial search algorithm was trained and validated on 1400 OS. Subsequently, the trained algorithm was applied to a test set consisting of 350 OS. The intersection over union (IoU), as a measure of accuracy, was calculated to quantify the degree of similarity between the annotated ground truth and the model predictions. RESULTS The model achieved accurate teeth segmentations with a mean IoU score of 0.915. The FDI labels of the teeth were predicted with a mean accuracy of 0.894. The optical inspection showed excellent position agreements between the automatically and manually segmented teeth components. Minor flaws were mostly seen at the edges. CONCLUSION The proposed method forms a promising foundation for time-effective and observer-independent teeth segmentation and labeling on intra-oral scans. CLINICAL SIGNIFICANCE Deep learning may assist clinicians in virtual treatment planning in orthodontics, prosthetics, implantology, and orthognathic surgery. The impact of using such models in clinical practice should be explored.
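The mean IoU reported above is averaged over tooth classes; a minimal per-class mean-IoU sketch over toy per-point labels (the FDI labels and values are invented for illustration, not the study's data):

```python
def mean_iou(pred_labels, true_labels):
    """Mean IoU across classes, given parallel per-point label lists."""
    classes = set(true_labels) | set(pred_labels)
    ious = []
    for c in classes:
        p = {i for i, l in enumerate(pred_labels) if l == c}
        t = {i for i, l in enumerate(true_labels) if l == c}
        if p | t:
            ious.append(len(p & t) / len(p | t))
    return sum(ious) / len(ious)

# Toy per-point labels (0 = gingiva, 11/21 = FDI central incisors)
truth = [0, 0, 11, 11, 21, 21]
pred  = [0, 0, 11, 11, 21, 0]
miou = mean_iou(pred, truth)
```

Averaging per class rather than per point keeps small teeth from being swamped by the much larger gingiva region.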
Affiliation(s)
- Shankeeth Vinayahalingam: Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands; Department of Artificial Intelligence, Radboud University, Nijmegen, the Netherlands; Department of Oral and Maxillofacial Surgery, Universitätsklinikum Münster, Münster, Germany
- Steven Kempers: Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands; Department of Artificial Intelligence, Radboud University, Nijmegen, the Netherlands
- Julian Schoep: Promaton Co. Ltd, 1076 GR, Amsterdam, The Netherlands
- Tzu-Ming Harry Hsu: MIT Computer Science & Artificial Intelligence Laboratory, 32 Vassar St, Cambridge, MA, 02139, USA
- Bram van Ginneken: Department of Radiology, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
- Tabea Flügge: Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Hindenburgdamm 30, 12203, Berlin, Germany
- Marcel Hanisch: Department of Oral and Maxillofacial Surgery, Universitätsklinikum Münster, Münster, Germany; Promaton Co. Ltd, 1076 GR, Amsterdam, The Netherlands
- Tong Xi: Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands

14
Liu Z, He X, Wang H, Xiong H, Zhang Y, Wang G, Hao J, Feng Y, Zhu F, Hu H. Hierarchical Self-Supervised Learning for 3D Tooth Segmentation in Intra-Oral Mesh Scans. IEEE Trans Med Imaging 2023; 42:467-480. [PMID: 36378797] [DOI: 10.1109/tmi.2022.3222388]
Abstract
Accurately delineating individual teeth and the gingiva in three-dimensional (3D) intraoral scan (IOS) mesh data plays a pivotal role in many digital dental applications, e.g., orthodontics. Recent research shows that deep learning-based methods can achieve promising results for 3D tooth segmentation; however, most rely on high-quality labeled datasets, which are usually of small scale because annotating IOS meshes requires intensive human effort. In this paper, we propose a novel self-supervised learning framework, named STSNet, to boost the performance of 3D tooth segmentation by leveraging large-scale unlabeled IOS data. The framework follows two-stage training, i.e., pre-training and fine-tuning. In pre-training, three hierarchical contrastive losses, at the point, region, and cross levels, are proposed for unsupervised representation learning on a set of predefined matched points from different augmented views. The pre-trained segmentation backbone is further fine-tuned in a supervised manner with a small number of labeled IOS meshes. With the same amount of annotated samples, our method achieves an mIoU of 89.88%, significantly outperforming the supervised counterparts. The performance gain becomes more remarkable when only a small number of labeled samples are available. Furthermore, STSNet achieves better performance with only 40% of the annotated samples compared with fully supervised baselines. To the best of our knowledge, this is the first attempt at unsupervised pre-training for 3D tooth segmentation, demonstrating strong potential for reducing the human effort needed for annotation and verification.
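The point-level contrastive objective described above is typically an InfoNCE-style loss over matched points from two augmented views; a deliberately tiny pure-Python sketch (2-D toy features, invented values; the paper's actual losses and architecture differ):

```python
from math import exp, log, sqrt

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    return num / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def info_nce(anchors, positives, tau=0.1):
    """Each anchor point should match its own counterpart in the other view."""
    loss = 0.0
    for i, a in enumerate(anchors):
        sims = [exp(cosine(a, p) / tau) for p in positives]
        loss += -log(sims[i] / sum(sims))
    return loss / len(anchors)

# Toy per-point features from two augmented views of the same scan
view1 = [(1.0, 0.0), (0.0, 1.0)]
view2 = [(0.9, 0.1), (0.1, 0.9)]
loss = info_nce(view1, view2)
```

When matched points stay similar across augmentations the loss is near zero; mismatched pairings drive it up, which is what shapes the representation without labels.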
15
Pan F, Liu J, Cen Y, Chen Y, Cai R, Zhao Z, Liao W, Wang J. Accuracy of RGB-D camera-based and stereophotogrammetric facial scanners: a comparative study. J Dent 2022; 127:104302. [PMID: 36152954] [DOI: 10.1016/j.jdent.2022.104302]
Abstract
OBJECTIVES This study aimed to evaluate and compare the accuracy and inter-operator reliability of a low-cost red-green-blue-depth (RGB-D) camera-based facial scanner (Bellus3D Arc7) with a stereophotogrammetry facial scanner (3dMD) and to explore the possibility of the former as a clinical substitute for the latter. METHODS A mannequin head was selected as the research object. In the RGB-D camera-based facial scanner group, the head was scanned five times using the Bellus3D Arc7, and the outcome data of each scan were imported into CAD software (MeshLab) to reconstruct three-dimensional (3D) facial photographs. In the stereophotogrammetry facial scanner group, the mannequin head was scanned with the 3dMD system. Selected parameters were measured directly on the reconstructed 3D virtual faces using CAD software. The same parameters were then measured directly on the mannequin head by direct anthropometry (DA), which served as the gold standard for later comparison. The accuracy of the facial scanners was evaluated in terms of trueness and precision. Trueness was evaluated by comparing the measurement results of the two groups with each other and with DA using equivalence tests and average absolute deviations, while precision and inter-operator reliability were assessed using the intraclass correlation coefficient (ICC). A 3D facial mesh deviation between the two groups was also calculated for further reference using 3D metrology software (GOM inspect pro). RESULTS In terms of trueness, the average absolute deviations between the RGB-D camera-based and stereophotogrammetry facial scanners, between the RGB-D camera-based facial scanner and DA, and between the stereophotogrammetry facial scanner and DA were statistically equivalent at 0.50±0.27 mm, 0.61±0.42 mm, and 0.28±0.14 mm, respectively. Equivalence test results confirmed that their equivalence was within clinical requirements (<1 mm). The ICC for each parameter was approximately 0.999 in terms of precision and inter-operator reliability. A 3D facial mesh analysis suggested that the deviation between the two groups was 0.37±0.01 mm. CONCLUSIONS For facial scanners, an accuracy of <1 mm is commonly considered clinically acceptable. Both the RGB-D camera-based and stereophotogrammetry facial scanners in this study showed acceptable trueness, high precision, and inter-operator reliability. CLINICAL SIGNIFICANCE The low-cost RGB-D camera-based facial scanner showed clinically acceptable trueness, high precision, and inter-operator reliability; thus, it could be an eligible clinical substitute for traditional stereophotogrammetry.
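The trueness metric above, average absolute deviation checked against the 1 mm clinical margin, can be sketched directly; the distances below are invented, not the study's measurements:

```python
def avg_abs_deviation(measured, reference):
    """Trueness: mean absolute deviation from the reference method (mm)."""
    return sum(abs(m - r) for m, r in zip(measured, reference)) / len(measured)

# Hypothetical facial distances (mm): scanner vs direct anthropometry
scanner = [32.4, 51.1, 40.8, 65.2]
direct  = [32.0, 51.5, 40.5, 65.9]
aad = avg_abs_deviation(scanner, direct)
clinically_acceptable = aad < 1.0  # the <1 mm threshold cited in the study
```

A full equivalence test (e.g. TOST) would additionally require the confidence interval of the deviation, not just its mean, to fall inside the ±1 mm margin.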
Affiliation(s)
- Fangwei Pan: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Prosthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Jialing Liu: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Yueyan Cen: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Ye Chen: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Ruilie Cai: Department of Epidemiology and Biostatistics, Arnold School of Public Health, University of South Carolina, South Carolina, United States
- Zhihe Zhao: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Wen Liao: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Jian Wang: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Prosthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China

16
Wu J, Zhang M, Yang D, Wei F, Xiao N, Shi L, Liu H, Shang P. Clinical tooth segmentation based on local enhancement. Front Mol Biosci 2022; 9:932348. [PMID: 36304923] [PMCID: PMC9592892] [DOI: 10.3389/fmolb.2022.932348]
Abstract
The arrangement of human teeth is difficult to assess accurately with the naked eye, and dental caries in children is particularly hard to detect. Cone-beam computed tomography (CBCT) is used as an auxiliary method to measure patients' teeth, including children's. However, this process requires subjective and irreproducible manual measurements, which cost dentists considerable time and effort. Therefore, a fast and accurate tooth segmentation algorithm that can replace the repeated calculations and annotations of manual segmentation has tremendous clinical significance. This study proposes a local contextual enhancement model for clinical dental CBCT images. The local enhancement model, which is better suited to dental CBCT images, is derived from an analysis of existing contextual models and is then fused into an encoder-decoder framework for dental CBCT images. Finally, extensive experiments are conducted to validate the method.
Affiliation(s)
- Jipeng Wu: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ming Zhang: Department of Pediatrics, Zhongshan Hospital Xiamen University, Xiamen, China
- Delong Yang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Burn Surgery, The First People’s Hospital of Foshan, Foshan, China
- Feng Wei: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Naian Xiao: Department of Neurology, The First Affiliated Hospital of Xiamen University, Xiamen, China
- Lei Shi: Dental Medicine Center, The Second Clinical Medical College of Jinan University, Shenzhen People’s Hospital, Shenzhen, China
- Huifeng Liu: Dental Medicine Center, The Second Clinical Medical College of Jinan University, Shenzhen People’s Hospital, Shenzhen, China
- Peng Shang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China

17
Dot G, Schouman T, Chang S, Rafflenbeul F, Kerbrat A, Rouch P, Gajny L. Automatic 3-Dimensional Cephalometric Landmarking via Deep Learning. J Dent Res 2022; 101:1380-1387. [PMID: 35982646] [DOI: 10.1177/00220345221112333]
Abstract
The increasing use of 3-dimensional (3D) imaging by orthodontists and maxillofacial surgeons to assess complex dentofacial deformities and plan orthognathic surgeries implies a critical need for 3D cephalometric analysis. Although promising methods were suggested to localize 3D landmarks automatically, concerns about robustness and generalizability restrain their clinical use. Consequently, highly trained operators remain needed to perform manual landmarking. In this retrospective diagnostic study, we aimed to train and evaluate a deep learning (DL) pipeline based on SpatialConfiguration-Net for automatic localization of 3D cephalometric landmarks on computed tomography (CT) scans. A retrospective sample of consecutive presurgical CT scans was randomly distributed between a training/validation set (n = 160) and a test set (n = 38). The reference data consisted of 33 landmarks, manually localized once by 1 operator (n = 178) or twice by 3 operators (n = 20, test set only). After inference on the test set, 1 CT scan showed "very low" confidence level predictions; we excluded it from the overall analysis but still assessed and discussed the corresponding results. The model performance was evaluated by comparing the predictions with the reference data; the outcome set included localization accuracy, cephalometric measurements, and comparison to manual landmarking reproducibility. On the hold-out test set, the mean localization error was 1.0 ± 1.3 mm, while success detection rates for 2.0, 2.5, and 3.0 mm were 90.4%, 93.6%, and 95.4%, respectively. Mean errors were -0.3 ± 1.3° and -0.1 ± 0.7 mm for angular and linear measurements, respectively. When compared to manual reproducibility, the measurements were within the Bland-Altman 95% limits of agreement for 91.9% and 71.8% of skeletal and dentoalveolar variables, respectively.
To conclude, while our DL method still requires improvement, it provided highly accurate 3D landmark localization on a challenging test set, with a reliability for skeletal evaluation on par with what clinicians obtain.
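The evaluation above uses mean localization error plus success detection rates (SDR) at distance thresholds; a minimal sketch on invented landmark coordinates (not the study's data):

```python
from math import dist

def localization_metrics(pred, truth, thresholds=(2.0, 2.5, 3.0)):
    """Mean 3-D landmark error (mm) and success detection rates per threshold."""
    errors = [dist(p, t) for p, t in zip(pred, truth)]
    mean_err = sum(errors) / len(errors)
    sdr = {th: sum(e <= th for e in errors) / len(errors) for th in thresholds}
    return mean_err, sdr

# Toy landmark predictions vs manual reference (mm coordinates)
truth = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
pred  = [(0.5, 0, 0), (10, 1.0, 0), (0, 12.6, 0), (0, 0, 10.2)]
mean_err, sdr = localization_metrics(pred, truth)
```

Reporting SDR alongside the mean error distinguishes a model with many small misses from one whose average is inflated by a few gross outliers.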
Affiliation(s)
- G Dot: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France; Universite Paris Cite, AP-HP, Hopital Pitie-Salpetriere, Service de Medecine Bucco-Dentaire, Paris, France
- T Schouman: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France; Medecine Sorbonne Universite, AP-HP, Hopital Pitie-Salpetriere, Service de Chirurgie Maxillo-Faciale, Paris, France
- S Chang: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
- F Rafflenbeul: Department of Dentofacial Orthopedics, Faculty of Dental Surgery, Strasbourg University, Strasbourg, France
- A Kerbrat: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
- P Rouch: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
- L Gajny: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France

18
Liu D, Tian Y, Zhang Y, Gelernter J, Wang X. Heterogeneous data fusion and loss function design for tooth point cloud segmentation. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07379-y]
19
Tooth Defect Segmentation in 3D Mesh Scans Using Deep Learning. Artif Intell 2022. [DOI: 10.1007/978-3-031-20503-3_15]