1. Mangano FG, Yang KR, Lerner H, Admakin O, Mangano C. Artificial intelligence and mixed reality for dental implant planning: A technical note. Clin Implant Dent Relat Res 2024. PMID: 38940681. DOI: 10.1111/cid.13357.
Abstract
AIM The aim of this work is to present a new protocol for implant surgical planning that combines artificial intelligence (AI) and mixed reality (MR). METHODS The protocol involves the acquisition of three-dimensional (3D) patient data through intraoral scanning (IOS) and cone beam computed tomography (CBCT). These data are loaded into AI software, which automatically segments and aligns the patient's 3D models. The 3D models are then loaded into MR software and used for planning implant surgery through holography. The files are exported and used to design surgical guides via open-source software; the guides are 3D printed and used to prepare the implant sites through static computer-assisted implant surgery (s-CAIS). The case is finalized prosthetically through a fully digital protocol. The accuracy of implant positioning is verified by comparing the planned position with the actual position of the implants after surgery. RESULTS As a proof of principle, the present protocol seems to be reliable and efficient when used for planning simple cases of s-CAIS in partially edentulous patients. The clinician can plan the implants in an authentic 3D environment without using any radiology-guided surgery software. The precision of implant placement seems clinically acceptable, with minor deviations. CONCLUSIONS The present study suggests that AI and MR technologies can be successfully used in s-CAIS for authentic 3D planning. Further clinical studies are needed to validate this protocol.
Affiliation(s)
- Francesco Guido Mangano
- Department of Pediatric Preventive Dentistry and Orthodontics, Sechenov First State Medical University, Moscow, Russia
- Henriette Lerner
- Academic Teaching and Research Institution of Johann Wolfgang Goethe University, Frankfurt, Germany
- Oleg Admakin
- Department of Pediatric Preventive Dentistry and Orthodontics, Sechenov First State Medical University, Moscow, Russia
2. Rieder M, Remschmidt B, Gsaxner C, Gaessler J, Payer M, Zemann W, Wallner J. Augmented Reality-Guided Extraction of Fully Impacted Lower Third Molars Based on Maxillofacial CBCT Scans. Bioengineering (Basel) 2024; 11:625. PMID: 38927861. PMCID: PMC11200966. DOI: 10.3390/bioengineering11060625.
Abstract
(1) Background: This study aimed to integrate an augmented reality (AR) image-guided surgery (IGS) system, based on preoperative cone beam computed tomography (CBCT) scans, into clinical practice. (2) Methods: In preclinical and clinical surgical setups, an AR-guided visualization system based on Microsoft's HoloLens 2 was assessed for complex lower third molar (LTM) extractions. The system's potential intraoperative feasibility and usability are described first. Preparation and operating times for each procedure were measured, and the system's usability was assessed using the System Usability Scale (SUS). (3) Results: A total of six LTMs (n = 6) were analyzed, two extracted from human cadaver head specimens (n = 2) and four from clinical patients (n = 4). The average preparation time was 166 ± 44 s, while the operation time averaged 21 ± 5.9 min. The overall mean SUS score was 79.1 ± 9.3. When analyzed separately, the usability score categorized the AR-guidance system as "good" in clinical patients and "best imaginable" in human cadaver head procedures. (4) Conclusions: This translational study analyzed the first successful and functionally stable application of the HoloLens technology for complex LTM extraction in clinical patients. Further research is needed to refine the technology's integration into clinical practice to improve patient outcomes.
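The SUS score reported above follows the standard Brooke scoring formula: ten items rated 1 to 5, where odd-numbered (positively worded) items contribute (rating − 1) and even-numbered (negatively worded) items contribute (5 − rating), and the sum is scaled by 2.5 onto a 0 to 100 scale. A minimal sketch of that generic formula (not study-specific code; the example ratings are hypothetical):

```python
def sus_score(ratings):
    """Score one SUS questionnaire: ratings is a list of 10 responses (1-5), item 1 first."""
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    total = 0
    for i, r in enumerate(ratings, start=1):
        # odd items are positively worded, even items negatively worded
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale 0-40 raw sum onto 0-100

# Hypothetical questionnaire from one AR-guided procedure
score = sus_score([5, 2, 4, 1, 4, 2, 5, 2, 4, 2])
print(score)  # → 82.5
```

A study-level mean such as the 79.1 ± 9.3 above would then be the average of these per-procedure scores.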
Affiliation(s)
- Marcus Rieder
- Division of Oral and Maxillofacial Surgery, Department of Dental Medicine and Oral Health, Medical University of Graz, 8036 Graz, Austria
- Bernhard Remschmidt
- Division of Oral and Maxillofacial Surgery, Department of Dental Medicine and Oral Health, Medical University of Graz, 8036 Graz, Austria
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria
- Jan Gaessler
- Division of Oral and Maxillofacial Surgery, Department of Dental Medicine and Oral Health, Medical University of Graz, 8036 Graz, Austria
- Michael Payer
- Division of Oral Surgery and Orthodontics, Department of Dental Medicine and Oral Health, Medical University of Graz, 8010 Graz, Austria
- Wolfgang Zemann
- Division of Oral and Maxillofacial Surgery, Department of Dental Medicine and Oral Health, Medical University of Graz, 8036 Graz, Austria
- Juergen Wallner
- Division of Oral and Maxillofacial Surgery, Department of Dental Medicine and Oral Health, Medical University of Graz, 8036 Graz, Austria
3. Lahoud P, Jacobs R, Elahi SA, Ducret M, Lauwers W, van Lenthe GH, Richert R, EzEldeen M. Developing Advanced Patient-Specific In Silico Models: A New Era in Biomechanical Analysis of Tooth Autotransplantation. J Endod 2024; 50:820-826. PMID: 38452866. DOI: 10.1016/j.joen.2024.02.022.
Abstract
INTRODUCTION As personalized medicine advances, there is an escalating need for sophisticated tools to understand complex biomechanical phenomena in clinical research. Recognizing a significant gap, this study pioneers the development of patient-specific in silico models for tooth autotransplantation (TAT), setting a new standard for predictive accuracy and reliability in evaluating TAT outcomes. METHODS Development of the models relied on 6 consecutive cases of young patients (mean age 11.66 ± 0.79 years), all undergoing TAT procedures. The development process involved creating detailed in silico replicas of patient oral structures, focusing on transplanting upper premolars to central incisors. These models underpinned finite element analysis simulations, testing various masticatory and traumatic scenarios. RESULTS The models highlighted critical biomechanical insights. The finite element models indicated homogeneous stress distribution in control teeth, contrasted by shape-dependent stress patterns in transplanted teeth. The surface deviation in the postoperative year for the transplanted elements showed a mean deviation of 0.33 mm (±0.28), significantly higher than their contralateral counterparts at 0.05 mm (±0.04). CONCLUSIONS By developing advanced patient-specific in silico models, we are ushering in a transformative era in TAT research and practice. These models are not just analytical tools; they are predictive instruments capturing patient uniqueness, including anatomical, masticatory, and tissue variables, essential for understanding biomechanical responses in TAT. This foundational work paves the way for future studies, where applying these models to larger cohorts will further validate their predictive capabilities and influence on TAT success parameters.
Affiliation(s)
- Pierre Lahoud
- Department of Oral and Maxillofacial Surgery & Imaging and Pathology, OMFS-IMPATH Research Group, University Hospitals Leuven, KU Leuven, Belgium; Division of Periodontology & Oral Microbiology, Department of Oral Health Sciences, University Hospitals Leuven, KU Leuven, Belgium
- Reinhilde Jacobs
- Department of Oral and Maxillofacial Surgery & Imaging and Pathology, OMFS-IMPATH Research Group, University Hospitals Leuven, KU Leuven, Belgium; Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
- Seyed Ali Elahi
- Department of Movement Sciences, Human Movement Biomechanics Research Group, KU Leuven, Leuven, Belgium; Department of Mechanical Engineering, KU Leuven, Leuven, Belgium
- Maxime Ducret
- Laboratoire de Biologie Tissulaire et Ingénierie thérapeutique, UMR 5305 CNRS/Université Claude Bernard Lyon 1, UMS 3444 BioSciences Gerland-Lyon Sud, Lyon, France; Service d'Odontologie, Hospices Civils de Lyon, Lyon, France
- Wout Lauwers
- Department of Oral and Maxillofacial Surgery & Imaging and Pathology, OMFS-IMPATH Research Group, University Hospitals Leuven, KU Leuven, Belgium
- Raphaël Richert
- Service d'Odontologie, Hospices Civils de Lyon, Lyon, France; Univ Lyon, INSA Lyon, CNRS, LaMCoS, UMR5259, Villeurbanne, France
- Mostafa EzEldeen
- Department of Oral and Maxillofacial Surgery & Imaging and Pathology, OMFS-IMPATH Research Group, University Hospitals Leuven, KU Leuven, Belgium; Department of Oral Health Sciences, KU Leuven and Paediatric Dentistry and Special Dental Care, University Hospitals Leuven, KU Leuven, Leuven, Belgium
4. Jing Q, Dai X, Wang Z, Zhou Y, Shi Y, Yang S, Wang D. Fully automated deep learning model for detecting proximity of mandibular third molar root to inferior alveolar canal using panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 137:671-678. PMID: 38614873. DOI: 10.1016/j.oooo.2024.02.011.
Abstract
OBJECTIVE This study endeavored to develop a novel, fully automated deep-learning model to determine the topographic relationship between mandibular third molar (MM3) roots and the inferior alveolar canal (IAC) using panoramic radiographs (PRs). STUDY DESIGN A total of 1570 eligible subjects with MM3s who had paired PR and cone beam computed tomography (CBCT) from January 2019 to December 2020 were retrospectively collected and randomly grouped into training (80%), validation (10%), and testing (10%) cohorts. The spatial relationship of MM3/IAC was assessed by CBCT and set as the ground truth. MM3-IACnet, a modified deep learning network based on YOLOv5 (You Only Look Once), was trained to detect MM3/IAC proximity using PR. Its diagnostic performance was further compared with that of dentists, AlexNet, GoogleNet, VGG-16, ResNet-50, and YOLOv5 in another independent cohort of 100 high-risk MM3s, defined as roots overlapping with the IAC on PR. RESULTS MM3-IACnet performed best in predicting MM3/IAC proximity, as evidenced by the highest accuracy (0.885), precision (0.899), and area under the curve value (0.95), together with the shortest processing time compared with the other models. Moreover, MM3-IACnet outperformed the other models in MM3/IAC risk prediction in high-risk cases. CONCLUSION The MM3-IACnet model can assist clinicians in MM3 risk assessment and treatment planning by detecting the MM3/IAC topographic relationship on PRs.
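The accuracy and precision figures above are standard confusion-matrix metrics for a binary classifier (here, MM3 root in contact with the IAC or not). A minimal sketch of how they are computed (not the study's code; the labels below are made up for illustration):

```python
def accuracy_precision(y_true, y_pred):
    """Return (accuracy, precision) for binary labels (1 = true MM3/IAC contact)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    accuracy = (tp + tn) / len(y_true)                 # fraction of all cases correct
    precision = tp / (tp + fp) if (tp + fp) else 0.0   # fraction of positive calls correct
    return accuracy, precision

# Hypothetical ground truth (from CBCT) and model predictions for 10 high-risk cases
y_true = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1, 1, 0]
acc, prec = accuracy_precision(y_true, y_pred)
print(f"accuracy={acc:.2f}, precision={prec:.3f}")  # → accuracy=0.80, precision=0.833
```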
Affiliation(s)
- Qiuping Jing
- Department of Oral and Maxillofacial Surgery, Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing, China; Jiangsu Province Key Laboratory of Oral Disease, Nanjing Medical University, Jiangsu, China
- Xiubin Dai
- School of Geographic and Biologic Information, Nanjing University of Posts and Telecommunications, Nanjing, China; Smart Health Big Data Analysis and Location Services Engineering Research Center of Jiangsu Province, Nanjing University of Posts and Telecommunications, Nanjing, China
- Zhifan Wang
- Department of Oral and Maxillofacial Surgery, Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing, China; Jiangsu Province Key Laboratory of Oral Disease, Nanjing Medical University, Jiangsu, China
- Yanqi Zhou
- School of Geographic and Biologic Information, Nanjing University of Posts and Telecommunications, Nanjing, China; Smart Health Big Data Analysis and Location Services Engineering Research Center of Jiangsu Province, Nanjing University of Posts and Telecommunications, Nanjing, China
- Yijin Shi
- Department of Oral and Maxillofacial Surgery, Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing, China; Jiangsu Province Key Laboratory of Oral Disease, Nanjing Medical University, Jiangsu, China
- Shengjun Yang
- Department of Oral and Maxillofacial Surgery, Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing, China; Jiangsu Province Key Laboratory of Oral Disease, Nanjing Medical University, Jiangsu, China
- Dongmiao Wang
- Department of Oral and Maxillofacial Surgery, Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing, China; Jiangsu Province Key Laboratory of Oral Disease, Nanjing Medical University, Jiangsu, China; Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Jiangsu, China
5. Elsonbaty S, Elgarba BM, Fontenele RC, Swaity A, Jacobs R. Novel AI-based tool for primary tooth segmentation on CBCT using convolutional neural networks: A validation study. Int J Paediatr Dent 2024. PMID: 38769619. DOI: 10.1111/ipd.13204.
Abstract
BACKGROUND Primary teeth segmentation on cone beam computed tomography (CBCT) scans is essential for paediatric treatment planning. Conventional methods, however, are time-consuming and necessitate advanced expertise. AIM The aim of this study was to validate an artificial intelligence (AI) cloud-based platform for automated segmentation (AS) of primary teeth on CBCT. Its accuracy, time efficiency, and consistency were compared with manual segmentation (MS). DESIGN A dataset comprising 402 primary teeth (37 CBCT scans) was retrospectively retrieved from two CBCT devices. Primary teeth were manually segmented using a cloud-based platform, representing the ground truth, whereas AS was performed on the same platform. To assess the AI tool's performance, voxel- and surface-based metrics were employed to compare the MS and AS methods. Additionally, segmentation time was recorded for each method, and the intra-class correlation coefficient (ICC) assessed consistency between them. RESULTS AS segmented primary teeth with high accuracy (98 ± 1%) and a high Dice similarity coefficient (DSC; 95 ± 2%). Moreover, it was 35 times faster than the manual approach, with an average time of 24 s. Both MS and AS demonstrated excellent consistency (ICC = 0.99 and 1, respectively). CONCLUSION The platform demonstrated expert-level accuracy and time-efficient, consistent segmentation of primary teeth on CBCT scans, serving treatment planning in children.
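The Dice similarity coefficient used above is a voxel-overlap measure between two binary segmentation masks: twice the intersection divided by the sum of the two mask volumes. An illustrative sketch (not the platform's implementation; the masks below are synthetic):

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary voxel masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Synthetic example: a 6x6x6 "tooth" where the automated mask misses one slice
manual = np.zeros((10, 10, 10), dtype=bool)
manual[2:8, 2:8, 2:8] = True        # ground truth: 216 voxels
auto = manual.copy()
auto[2, :, :] = False               # automated result misses 36 voxels
print(round(dice(manual, auto), 4))  # → 0.9091
```

A per-dataset figure such as the 95 ± 2% above would be the mean and spread of this score over all segmented teeth.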
Affiliation(s)
- Sara Elsonbaty
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Egyptian Ministry of Health and Population, Cairo, Egypt
- Bahaaeldeen M Elgarba
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Department of Prosthodontics, Faculty of Dentistry, Tanta University, Tanta, Egypt
- Rocharles Cavalcante Fontenele
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Abdullah Swaity
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- King Hussein Medical Center, Jordanian Royal Medical Services, Amman, Jordan
- Reinhilde Jacobs
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
6. Revilla-León M, Gómez-Polo M, Sailer I, Kois JC, Rokhshad R. An overview of artificial intelligence based applications for assisting digital data acquisition and implant planning procedures. J Esthet Restor Dent 2024. PMID: 38757761. DOI: 10.1111/jerd.13249.
Abstract
OBJECTIVES To provide an overview of current artificial intelligence (AI) based applications for assisting digital data acquisition and implant planning procedures. OVERVIEW A review is provided of the main AI-based applications integrated into digital data acquisition technologies (facial scanners (FS), intraoral scanners (IOSs), cone beam computed tomography (CBCT) devices, and jaw trackers) and computer-aided static implant planning programs. CONCLUSIONS The main AI-based application integrated into some FS programs is the automatic alignment of facial and intraoral scans for virtual patient integration. The AI-based applications integrated into IOS programs include scan cleaning, scanning assistance, and automatic alignment of the implant scan body with its corresponding CAD object while scanning. The AI-based applications most frequently integrated into the programs of CBCT units involve positioning assistance, noise and artifact reduction, structure identification and segmentation, airway analysis, and alignment of facial, intraoral, and CBCT scans. Some computer-aided static implant planning programs include integration of the patient's digital files; identification, labeling, and segmentation of anatomical structures; mandibular nerve tracing; automatic implant placement; and surgical implant guide design.
Affiliation(s)
- Marta Revilla-León
- Department of Restorative Dentistry, School of Dentistry, University of Washington, Seattle, Washington, USA
- Research and Digital Dentistry, Kois Center, Seattle, Washington, USA
- Department of Prosthodontics, School of Dental Medicine, Tufts University, Boston, Massachusetts, USA
- Miguel Gómez-Polo
- Department of Conservative Dentistry and Prosthodontics, Complutense University of Madrid, Madrid, Spain
- Advanced in Implant-Prosthodontics, School of Dentistry, Complutense University of Madrid, Madrid, Spain
- Irena Sailer
- Fixed Prosthodontics and Biomaterials, University Clinic of Dental Medicine, University of Geneva, Geneva, Switzerland
- John C Kois
- Kois Center, Seattle, Washington, USA
- Department of Restorative Dentistry, University of Washington, Seattle, Washington, USA
- Private Practice, Seattle, Washington, USA
- Rata Rokhshad
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany
7. Ni FD, Xu ZN, Liu MQ, Zhang MJ, Li S, Bai HL, Ding P, Fu KY. Towards clinically applicable automated mandibular canal segmentation on CBCT. J Dent 2024; 144:104931. PMID: 38458378. DOI: 10.1016/j.jdent.2024.104931.
Abstract
OBJECTIVES To develop a deep learning-based system for precise, robust, and fully automated segmentation of the mandibular canal on cone beam computed tomography (CBCT) images. METHODS The system was developed on 536 CBCT scans (training set: 376, validation set: 80, testing set: 80) from one center and validated on an external dataset of 89 CBCT scans from 3 centers. Each scan was annotated using a multi-stage annotation method and refined by oral and maxillofacial radiologists. We proposed a three-step strategy for the mandibular canal segmentation: extraction of the region of interest based on 2D U-Net, global segmentation of the mandibular canal, and segmentation refinement based on 3D U-Net. RESULTS The system consistently achieved accurate mandibular canal segmentation in the internal set (Dice similarity coefficient [DSC], 0.952; intersection over union [IoU], 0.912; average symmetric surface distance [ASSD], 0.046 mm; 95% Hausdorff distance [HD95], 0.325 mm) and the external set (DSC, 0.960; IoU, 0.924; ASSD, 0.040 mm; HD95, 0.288 mm). CONCLUSIONS These results demonstrated the potential clinical application of this AI system in facilitating clinical workflows related to mandibular canal localization. CLINICAL SIGNIFICANCE Accurate delineation of the mandibular canal on CBCT images is critical for implant placement, mandibular third molar extraction, and orthognathic surgery. This AI system enables accurate segmentation across different models, which could contribute to more efficient and precise dental automation systems.
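For a single pair of masks, the DSC and IoU reported above are algebraically linked by IoU = DSC / (2 − DSC), so each can be derived from the other; published per-dataset means can deviate slightly from this identity because the metrics are averaged over cases, not derived from one another. A minimal illustrative sketch on synthetic masks (not the study's pipeline):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def iou(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# Synthetic "prediction" vs. ground truth with 5% of voxels flipped
rng = np.random.default_rng(0)
pred = rng.random((32, 32, 32)) > 0.5
gt = pred.copy()
flip = rng.random(pred.shape) < 0.05
gt[flip] = ~gt[flip]

d, j = dice(gt, pred), iou(gt, pred)
print(d, j, d / (2.0 - d))  # last two values coincide for a single case
```

Surface metrics such as ASSD and HD95 instead measure boundary distances and need a surface extraction step, which is omitted here.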
Affiliation(s)
- Fang-Duan Ni
- Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China
- Mu-Qing Liu
- Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China
- Min-Juan Zhang
- Second Dental Center, Peking University Hospital of Stomatology, Beijing 100101, China
- Shu Li
- Department of Stomatology, Beijing Hospital, Beijing 100005, China
- Kai-Yuan Fu
- Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China
8. Liu Z, Yang D, Zhang M, Liu G, Zhang Q, Li X. Inferior Alveolar Nerve Canal Segmentation on CBCT Using U-Net with Frequency Attentions. Bioengineering (Basel) 2024; 11:354. PMID: 38671776. PMCID: PMC11048269. DOI: 10.3390/bioengineering11040354.
Abstract
Accurate inferior alveolar nerve (IAN) canal segmentation has been considered a crucial task in dentistry. Failing to accurately identify the position of the IAN canal may lead to nerve injury during dental procedures. While IAN canals can be detected from dental cone beam computed tomography, they are usually difficult for dentists to precisely identify, as the canals are thin, small, and span many slices. This paper focuses on improving accuracy in segmenting the IAN canals. By integrating our proposed frequency-domain attention mechanism into UNet, the proposed frequency attention UNet (FAUNet) achieves 75.55% and 81.35% in the Dice and surface Dice coefficients, respectively, much higher than those of competing methods, while adding only 224 parameters to the classical UNet. Compared to the classical UNet, the proposed FAUNet achieves a 2.39% and 2.82% gain in the Dice coefficient and the surface Dice coefficient, respectively. The potential advantage of developing attention in the frequency domain is also discussed, revealing that frequency-domain attention mechanisms can achieve better performance than their spatial-domain counterparts.
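The general idea of frequency-domain attention can be sketched as follows: features are moved to the frequency domain with an FFT, reweighted by a small set of learnable per-frequency parameters (which is why so few parameters are added), transformed back, and squashed into a gating map. This is a conceptual sketch only, not FAUNet's exact architecture, and the weights here are fixed rather than learned:

```python
import numpy as np

def frequency_attention(x: np.ndarray, freq_weights: np.ndarray) -> np.ndarray:
    """x: (H, W) feature map; freq_weights: (H, W) per-frequency weights."""
    spec = np.fft.fft2(x)                  # to frequency domain
    spec = spec * freq_weights             # per-frequency reweighting (the learnable part)
    attn = np.real(np.fft.ifft2(spec))     # back to the spatial domain
    gate = 1.0 / (1.0 + np.exp(-attn))     # sigmoid -> attention map in (0, 1)
    return x * gate                        # gate the input features

rng = np.random.default_rng(42)
feat = rng.standard_normal((8, 8))
w = np.ones((8, 8))                        # identity-like init: gate = sigmoid(feat)
out = frequency_attention(feat, w)
print(out.shape)
```

Because the FFT mixes all spatial positions into every frequency bin, even a tiny per-frequency weight vector can modulate global structure, which is one plausible reason such mechanisms compare favorably with spatial attention at similar parameter counts.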
Affiliation(s)
- Zhiyang Liu
- College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, China
- Tianjin Key Laboratory of Optoelectronic Sensor and Sensing Network Technology, College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, China
- Dong Yang
- College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, China
- Minghao Zhang
- College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, China
- Guohua Liu
- College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, China
- Tianjin Key Laboratory of Optoelectronic Sensor and Sensing Network Technology, College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, China
- Qian Zhang
- School and Hospital of Stomatology, Tianjin Medical University, Tianjin 300070, China
- Xiaonan Li
- School and Hospital of Stomatology, Tianjin Medical University, Tianjin 300070, China
9. Elgarba BM, Fontenele RC, Tarce M, Jacobs R. Artificial intelligence serving pre-surgical digital implant planning: A scoping review. J Dent 2024; 143:104862. PMID: 38336018. DOI: 10.1016/j.jdent.2024.104862.
Abstract
OBJECTIVES To conduct a scoping review focusing on artificial intelligence (AI) applications in presurgical dental implant planning, and to assess the automation degree of clinically available pre-surgical implant planning software. DATA AND SOURCES A systematic electronic literature search was performed in five databases (PubMed, Embase, Web of Science, Cochrane Library, and Scopus), along with exploring gray-literature web-based resources, up to November 2023. English-language studies on AI-driven tools for digital implant planning were included based on an independent evaluation by two reviewers. An assessment of automation steps in dental implant planning software available on the market up to November 2023 was also performed. STUDY SELECTION AND RESULTS From an initial 1,732 studies, 47 met the eligibility criteria. Within this subset, 39 studies focused on AI networks for anatomical landmark-based segmentation, creating virtual patients. Eight studies were dedicated to AI networks for virtual implant placement. Additionally, a total of 12 commonly available implant planning software applications were identified and assessed for their level of automation in pre-surgical digital implant workflows. Notably, only six of these featured at least one fully automated step in the planning software, with none possessing a fully automated implant planning protocol. CONCLUSIONS AI plays a crucial role in achieving accurate, time-efficient, and consistent segmentation of anatomical landmarks, serving the process of virtual patient creation. Additionally, currently available systems for virtual implant placement demonstrate different degrees of automation. It is important to highlight that, as of now, full automation of this process has not been documented nor scientifically validated. CLINICAL SIGNIFICANCE Scientific and clinical validation of AI applications for presurgical dental implant planning is currently scarce. The present review allows the clinician to identify AI-based automation in presurgical dental implant planning and assess the potential underlying scientific validation.
Affiliation(s)
- Bahaaeldeen M Elgarba
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium & Department of Prosthodontics, Faculty of Dentistry, Tanta University, 31511 Tanta, Egypt
- Rocharles Cavalcante Fontenele
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium
- Mihai Tarce
- Division of Periodontology & Implant Dentistry, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China & Periodontology and Oral Microbiology, Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, Leuven, Belgium
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium & Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
10. Hu F, Chen Z, Wu F. A novel difficult-to-segment samples focusing network for oral CBCT image segmentation. Sci Rep 2024; 14:5068. PMID: 38429362. PMCID: PMC10907706. DOI: 10.1038/s41598-024-55522-7.
Abstract
Using deep learning technology to segment oral CBCT images for clinical diagnosis and treatment is one of the important research directions in the field of clinical dentistry. However, blurred contours and scale differences limit the segmentation accuracy of current methods at the crown edge and the root, making these regions difficult-to-segment samples in the oral CBCT segmentation task. To address these problems, this work proposed a Difficult-to-Segment Focus Network (DSFNet) for segmenting oral CBCT images. The network utilizes a Feature Capturing Module (FCM) to efficiently capture local and long-range features, enhancing feature extraction performance. Additionally, a Multi-Scale Feature Fusion Module (MFFM) is employed to merge multiscale feature information. To further increase the loss contribution of difficult-to-segment samples, a hybrid loss function combining Focal Loss and Dice Loss is proposed. Using the hybrid loss function, DSFNet achieves 91.85% Dice Similarity Coefficient (DSC) and 0.216 mm Average Symmetric Surface Distance (ASSD) in oral CBCT segmentation tasks. Experimental results show that the proposed method is superior to current dental CBCT image segmentation techniques and has real-world applicability.
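A hybrid of Focal Loss and Dice Loss combines two complementary signals: the focal term down-weights easy voxels so hard (difficult-to-segment) voxels dominate the gradient, while the Dice term optimizes overlap directly. A minimal sketch of such a hybrid (the paper's exact formulation and weighting are not reproduced; `alpha` and `gamma` below are illustrative choices):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Binary focal loss; p = predicted foreground probability, y in {0, 1}."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability assigned to the true class
    return np.mean(-((1 - pt) ** gamma) * np.log(pt))  # (1-pt)^gamma down-weights easy voxels

def dice_loss(p, y, eps=1e-7):
    """Soft Dice loss: 1 - Dice overlap between probabilities and labels."""
    inter = np.sum(p * y)
    return 1.0 - (2 * inter + eps) / (np.sum(p) + np.sum(y) + eps)

def hybrid_loss(p, y, alpha=0.5):
    # alpha balances the two terms; the actual ratio is a tunable design choice
    return alpha * focal_loss(p, y) + (1 - alpha) * dice_loss(p, y)

# Hypothetical voxel labels and two candidate predictions
y = np.array([1, 1, 0, 0, 1], dtype=float)
p_good = np.array([0.9, 0.8, 0.1, 0.2, 0.85])
p_bad = np.array([0.4, 0.3, 0.6, 0.7, 0.45])
print(hybrid_loss(p_good, y) < hybrid_loss(p_bad, y))  # better predictions, lower loss
```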
Affiliation(s)
- Fengjun Hu
  - College of Information Science and Technology, Zhejiang Shuren University, Hangzhou, 310015, China
  - Zhejiang-Netherlands Joint Laboratory for Digital Diagnosis and Treatment of Oral Diseases, Zhejiang Shuren University, Hangzhou, 310015, China
- Zeyu Chen
  - Zhejiang-Netherlands Joint Laboratory for Digital Diagnosis and Treatment of Oral Diseases, Zhejiang Shuren University, Hangzhou, 310015, China
- Fan Wu
  - College of Information Science and Technology, Zhejiang Shuren University, Hangzhou, 310015, China
  - Zhejiang-Netherlands Joint Laboratory for Digital Diagnosis and Treatment of Oral Diseases, Zhejiang Shuren University, Hangzhou, 310015, China

11
Nogueira-Reis F, Morgan N, Suryani IR, Tabchoury CPM, Jacobs R. Full virtual patient generated by artificial intelligence-driven integrated segmentation of craniomaxillofacial structures from CBCT images. J Dent 2024; 141:104829. [PMID: 38163456] [DOI: 10.1016/j.jdent.2023.104829] [Citation(s) in RCA: 0] [Received: 07/26/2023] [Revised: 12/13/2023] [Accepted: 12/29/2023] [Indexed: 01/03/2024]
Abstract
OBJECTIVES To assess the performance, time-efficiency, and consistency of a convolutional neural network (CNN)-based automated approach for integrated segmentation of craniomaxillofacial structures, compared with a semi-automated method, for creating a virtual patient from cone beam computed tomography (CBCT) scans. METHODS Thirty CBCT scans were selected. Six craniomaxillofacial structures, encompassing the maxillofacial complex bones, maxillary sinus, dentition, mandible, mandibular canal, and pharyngeal airway space, were segmented on these scans using a semi-automated method and a composite of previously validated CNN-based automated segmentation techniques for the individual structures. A qualitative assessment of the automated segmentation revealed the need for minor refinements, which were corrected manually. These refined segmentations served as a reference for comparing the semi-automated and automated integrated segmentations. RESULTS The majority of minor adjustments with the automated approach involved under-segmentation of sinus mucosal thickening and regions of reduced bone thickness within the maxillofacial complex. The automated and semi-automated approaches required an average of 1.1 min and 48.4 min, respectively. The automated method demonstrated a greater degree of similarity (99.6%) to the reference than the semi-automated approach (88.3%). The standard deviation values for all metrics with the automated approach were low, indicating high consistency. CONCLUSIONS The CNN-driven integrated segmentation approach proved accurate, time-efficient, and consistent for creating a CBCT-derived virtual patient through simultaneous segmentation of craniomaxillofacial structures. CLINICAL RELEVANCE The creation of a virtual orofacial patient using an automated approach could potentially transform personalized digital workflows. This advancement could be particularly beneficial for treatment planning in a variety of dental and maxillofacial specialties.
Affiliation(s)
- Fernanda Nogueira-Reis
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, University of Leuven and Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, KU Leuven, Kapucijnenvoer 7, Leuven 3000, Belgium; Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira 901, Piracicaba, São Paulo 13414-903, Brazil
- Nermin Morgan
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, University of Leuven and Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, KU Leuven, Kapucijnenvoer 7, Leuven 3000, Belgium; Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura, Dakahlia 35516, Egypt
- Isti Rahayu Suryani
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, University of Leuven and Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, KU Leuven, Kapucijnenvoer 7, Leuven 3000, Belgium; Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Cinthia Pereira Machado Tabchoury
  - Department of Biosciences, Division of Biochemistry, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira 901, Piracicaba, São Paulo 13414-903, Brazil
- Reinhilde Jacobs
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, University of Leuven and Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, KU Leuven, Kapucijnenvoer 7, Leuven 3000, Belgium; Department of Dental Medicine, Karolinska Institutet, Box 4064, Huddinge, Stockholm 141 04, Sweden

12
Gong Z, Feng W, Su X, Choi C. System for automatically assessing the likelihood of inferior alveolar nerve injury. Comput Biol Med 2024; 169:107923. [PMID: 38199211] [DOI: 10.1016/j.compbiomed.2024.107923] [Citation(s) in RCA: 0] [Received: 11/20/2023] [Revised: 12/20/2023] [Accepted: 01/01/2024] [Indexed: 01/12/2024]
Abstract
Inferior alveolar nerve (IAN) injury is a severe complication associated with mandibular third molar (MM3) extraction. Consequently, the likelihood of IAN injury must be assessed before performing such an extraction. However, existing deep learning methods for classifying the likelihood of IAN injury that rely on mask images often suffer from limited accuracy and a lack of interpretability. In this paper, we propose an automated system based on panoramic radiographs, featuring a novel segmentation model, SS-TransUnet, and a classification algorithm, CD-IAN injury class. Our objective was to enhance the precision of segmentation of the MM3 and mandibular canal (MC) and the accuracy of classifying the likelihood of IAN injury, ultimately reducing the occurrence of IAN injuries and providing a degree of interpretable foundation for diagnosis. The proposed segmentation model demonstrated 0.9% and 2.6% enhancements in Dice coefficient for the MM3 and MC, accompanied by reductions in the 95% Hausdorff distance, reaching 1.619 and 1.886, respectively. Additionally, our classification algorithm achieved an accuracy of 0.846, surpassing existing deep learning-based models by 3.8%, confirming the effectiveness of our system.
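The 95% Hausdorff distance used above replaces the maximum boundary-to-boundary distance with its 95th percentile, so a handful of outlier voxels cannot dominate the score. A brute-force NumPy sketch over explicit boundary points (illustrative only, suited to small point sets; not the paper's implementation):

```python
import numpy as np

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between two point sets.

    a, b: (N, D) arrays of boundary coordinates. For each point we take the
    distance to the nearest point in the other set, pool both directions,
    and return the 95th percentile of the pooled distances.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    d_ab = d.min(axis=1)  # each point of a -> nearest point of b
    d_ba = d.min(axis=0)  # each point of b -> nearest point of a
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# Two 2D contours offset by a uniform 0.5-unit shift.
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + np.array([0.0, 0.5])
print(hd95(a, b))  # 0.5 for a uniform 0.5-unit shift
```

For real masks the boundary points would first be extracted from the binary volumes, and a KD-tree would replace the quadratic distance matrix.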
Affiliation(s)
- Ziyang Gong
  - Department of Computer Engineering, Gachon University, Seongnam-si, 13120, Republic of Korea
- Weikang Feng
  - College of Information Science and Engineering, Hohai University, Changzhou, 213000, China
- Xin Su
  - College of Information Science and Engineering, Hohai University, Changzhou, 213000, China
- Chang Choi
  - Department of Computer Engineering, Gachon University, Seongnam-si, 13120, Republic of Korea

13
Swaity A, Elgarba BM, Morgan N, Ali S, Shujaat S, Borsci E, Chilvarquer I, Jacobs R. Deep learning driven segmentation of maxillary impacted canine on cone beam computed tomography images. Sci Rep 2024; 14:369. [PMID: 38172136] [PMCID: PMC10764895] [DOI: 10.1038/s41598-023-49613-0] [Citation(s) in RCA: 0] [Received: 07/19/2023] [Accepted: 12/10/2023] [Indexed: 01/05/2024]
Abstract
The process of creating virtual models of dentomaxillofacial structures through three-dimensional segmentation is a crucial component of most digital dental workflows. This process is typically performed using manual or semi-automated approaches, which can be time-consuming and subject to observer bias. The aim of this study was to train and assess the performance of a convolutional neural network (CNN)-based online cloud platform for automated segmentation of maxillary impacted canines on CBCT images. A total of 100 CBCT images with maxillary canine impactions were randomly allocated into two groups: a training set (n = 50) and a testing set (n = 50). The training set was used to train the CNN model, and the testing set was employed to evaluate model performance. Both tasks were performed on an online cloud-based platform, 'Virtual patient creator' (Relu, Leuven, Belgium). Performance was assessed using voxel- and surface-based comparisons between the automated and semi-automated ground-truth segmentations. In addition, the time required for segmentation was calculated. The automated tool showed high performance for segmenting impacted canines, with a Dice similarity coefficient of 0.99 ± 0.02. Moreover, it was 24 times faster than the semi-automated approach. The proposed CNN model achieved fast, consistent, and precise segmentation of maxillary impacted canines.
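The voxel-based comparison here reduces to the Dice similarity coefficient: twice the intersection of the two binary masks divided by the sum of their volumes. A minimal NumPy sketch (illustrative only, not the platform's implementation):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Voxel-wise Dice similarity coefficient between two binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 3D volumes: the automated mask disagrees with ground truth on one voxel.
gt = np.zeros((10, 10, 10), dtype=bool)
gt[2:8, 2:8, 2:8] = True          # 216 foreground voxels
auto = gt.copy()
auto[2, 2, 2] = False             # one missed voxel
print(round(dice_coefficient(auto, gt), 4))  # 2*215/(215+216) ≈ 0.9977
```

A DSC of 0.99, as reported, therefore corresponds to near-total voxel overlap with the semi-automated reference.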
Affiliation(s)
- Abdullah Swaity
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
  - Prosthodontic Department, King Hussein Medical Center, Jordanian Royal Medical Services, Amman, Jordan
- Bahaaeldeen M Elgarba
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
  - Department of Prosthodontics, Tanta University, Tanta, Egypt
- Nermin Morgan
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
  - Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura, Egypt
- Saleem Ali
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
  - Restorative Dentistry Department, King Hussein Medical Center, Jordanian Royal Medical Services, Amman, Jordan
- Sohaib Shujaat
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
  - King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud Bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, Kingdom of Saudi Arabia
- Elena Borsci
  - Oral Diagnostic Clinic, Karolinska Institute, Stockholm, Sweden
- Israel Chilvarquer
  - Department of Oral Radiology, School of Dentistry, University of São Paulo (USP), São Paulo, Brazil
- Reinhilde Jacobs
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
  - Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden

14
Picoli FF, Fontenele RC, Van der Cruyssen F, Ahmadzai I, Trigeminal Nerve Injuries Research Group, Politis C, Silva MAG, Jacobs R. Risk assessment of inferior alveolar nerve injury after wisdom tooth removal using 3D AI-driven models: A within-patient study. J Dent 2023; 139:104765. [PMID: 38353315] [DOI: 10.1016/j.jdent.2023.104765] [Citation(s) in RCA: 0] [Received: 07/20/2023] [Revised: 10/10/2023] [Accepted: 10/26/2023] [Indexed: 02/16/2024]
Abstract
OBJECTIVE To compare a three-dimensional (3D) artificial intelligence (AI)- driven model with panoramic radiography (PANO) and cone-beam computed tomography (CBCT) in assessing the risk of inferior alveolar nerve (IAN) injury after mandibular wisdom tooth (M3M) removal through a within-patient controlled trial. METHODS From a database of 6,010 patients undergoing M3M surgery, 25 patients met the inclusion criteria of bilateral M3M removal with postoperative unilateral IAN injury. In this within-patient controlled trial, preoperative PANO and CBCT images were available, while 3D-AI models of the mandibular canal and teeth were generated from the CBCT images using the Virtual Patient Creator AI platform (Relu BV, Leuven, Belgium). Five examiners, who were blinded to surgical outcomes, assessed the imaging modalities and assigned scores indicating the risk level of IAN injury (high, medium, or low risk). Sensitivity, specificity, and area under receiver operating curve (AUC) for IAN risk assessment were calculated for each imaging modality. RESULTS For IAN injury risk assessment after M3M removal, sensitivity was 0.87 for 3D-AI, 0.89 for CBCT versus 0.73 for PANO. Furthermore, the AUC and specificity values were 0.63 and 0.39 for 3D-AI, 0.58 and 0.28 for CBCT, and 0.57 and 0.41 for PANO, respectively. There was no statistically significant difference (p>0.05) among the imaging modalities for any diagnostic parameters. CONCLUSION This within-patient controlled trial study revealed that risk assessment for IAN injury after M3M removal was rather similar for 3D-AI, PANO, and CBCT, with a sensitivity for injury prediction reaching up to 0.87 for 3D-AI and 0.89 for CBCT. CLINICAL SIGNIFICANCE This within-patient trial is pioneering in exploring the application of 3D AI-driven models for assessing IAN injury risk after M3M removal. The present results indicate that AI-powered 3D models based on CBCT might facilitate IAN risk assessment of M3M removal.
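Sensitivity, specificity, and AUC as used above follow from the examiners' graded risk scores and the observed injuries; the rank (Mann-Whitney) form of the AUC is the probability that a randomly chosen injured side received a higher risk score than a non-injured one. An illustrative sketch with invented toy data (the threshold, scores, and variable names are our assumptions, not the study's data):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)."""
    y_true = np.asarray(y_true, bool)
    y_pred = np.asarray(y_pred, bool)
    tp = np.sum(y_pred & y_true)
    tn = np.sum(~y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)
    fp = np.sum(y_pred & ~y_true)
    return tp / (tp + fn), tn / (tn + fp)

def auc_rank(y_true, scores):
    """AUC via the Mann-Whitney formulation: P(random positive outscores
    random negative), with ties counted as half a win."""
    y_true = np.asarray(y_true, bool)
    s = np.asarray(scores)
    pos, neg = s[y_true], s[~y_true]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins) / (len(pos) * len(neg))

# Toy data: 1 = IAN injury occurred; risk scores 2/1/0 = high/medium/low.
y_true = [1, 1, 1, 0, 0, 0]
scores = [2, 2, 1, 1, 0, 0]
sens, spec = sensitivity_specificity(y_true, [s >= 1 for s in scores])
print(sens, spec, auc_rank(y_true, scores))
```

Dichotomizing the three-level risk score (here at "medium or high") is what turns the graded readings into the sensitivity/specificity pair; the AUC uses the full ordering.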
Affiliation(s)
- Fernando Fortes Picoli
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, University of Leuven and Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, KU Leuven, Kapucijnenvoer 7, 3000, Leuven, Belgium; School of Dentistry, Federal University of Goiás, Goiânia, GO, Brazil
- Rocharles Cavalcante Fontenele
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, University of Leuven and Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, KU Leuven, Kapucijnenvoer 7, 3000, Leuven, Belgium; Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, Sao Paulo, Brazil
- Frederic Van der Cruyssen
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, University of Leuven and Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, KU Leuven, Kapucijnenvoer 7, 3000, Leuven, Belgium; Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Iraj Ahmadzai
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, University of Leuven and Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, KU Leuven, Kapucijnenvoer 7, 3000, Leuven, Belgium
- Constantinus Politis
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, University of Leuven and Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, KU Leuven, Kapucijnenvoer 7, 3000, Leuven, Belgium; Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Reinhilde Jacobs
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, University of Leuven and Department of Oral & Maxillofacial Surgery, University Hospitals Leuven, KU Leuven, Kapucijnenvoer 7, 3000, Leuven, Belgium; Department of Dental Medicine, Karolinska Institutet, Stockholm, Sweden

15
Tao B, Xu J, Gao J, He S, Jiang S, Wang F, Chen X, Wu Y. Deep learning-based automatic segmentation of bone graft material after maxillary sinus augmentation. Clin Oral Implants Res 2023. [PMID: 38033189] [DOI: 10.1111/clr.14221] [Citation(s) in RCA: 0] [Received: 08/11/2023] [Revised: 11/14/2023] [Accepted: 11/15/2023] [Indexed: 12/02/2023]
Abstract
OBJECTIVES To investigate the accuracy and reliability of deep learning for automatic segmentation of graft material after maxillary sinus augmentation (SA) from cone-beam computed tomography (CBCT) images. MATERIALS AND METHODS One hundred paired CBCT scans (a preoperative scan and a postoperative scan) were collected and randomly allocated to training (n = 82) and testing (n = 18) subsets. The ground truths of the graft materials were labeled jointly by three observers (two experienced surgeons and a computer engineer). A deep learning model comprising a 3D V-Net and a 3D Attention V-Net was developed. The overall performance of the model was assessed on the testing data set. Accuracy and inference time were compared between model-driven and manual segmentation (by two surgeons with 3 years of experience in dental implant surgery) on 10 CBCT scans from the test samples. RESULTS The deep learning model achieved a Dice coefficient (Dice) of 90.36 ± 2.53%, a 95% Hausdorff distance (HD) of 1.59 ± 0.82 mm, and an average surface distance (ASD) of 0.38 ± 0.11 mm. The proposed model needed only 7.2 s, while a surgeon took 19.15 min on average to complete a segmentation task. The overall performance of the model was significantly superior to that of the surgeons. CONCLUSIONS The proposed deep learning model yielded more accurate and efficient automatic segmentation of graft material after SA than the two surgeons. The proposed model could facilitate a powerful system for volumetric change evaluation, dental implant planning, and digital dentistry.
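The average surface distance (ASD) quoted above is the symmetric mean nearest-neighbour distance between the two segmentation surfaces. A brute-force NumPy sketch over explicit surface points (illustrative only; a real pipeline would extract boundary voxels from the masks and scale by voxel spacing to get millimetres):

```python
import numpy as np

def average_surface_distance(a, b):
    """Symmetric average surface distance between two surface point sets
    a, b of shape (N, 3): mean nearest-neighbour distance in both directions."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return float(0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean()))

# Toy surfaces of a graft segmentation, shifted by 0.4 mm along z.
a = np.array([[x, y, 0.0] for x in range(3) for y in range(3)], dtype=float)
b = a + np.array([0.0, 0.0, 0.4])
print(average_surface_distance(a, b))  # ≈ 0.4 for a uniform 0.4 mm shift
```

Unlike the 95% Hausdorff distance, the ASD averages over all boundary points, so it reflects typical rather than near-worst-case disagreement.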
Affiliation(s)
- Baoxin Tao
  - Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
  - College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
  - National Center for Stomatology, Shanghai, China
  - National Clinical Research Center for Oral Diseases, Shanghai, China
  - Shanghai Key Laboratory of Stomatology, Shanghai, China
  - Shanghai Research Institute of Stomatology, Shanghai, China
- Jiangchang Xu
  - Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jie Gao
  - Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
  - College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
  - National Center for Stomatology, Shanghai, China
  - National Clinical Research Center for Oral Diseases, Shanghai, China
  - Shanghai Key Laboratory of Stomatology, Shanghai, China
  - Shanghai Research Institute of Stomatology, Shanghai, China
- Shamin He
  - Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
  - College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
  - National Center for Stomatology, Shanghai, China
  - National Clinical Research Center for Oral Diseases, Shanghai, China
  - Shanghai Key Laboratory of Stomatology, Shanghai, China
  - Shanghai Research Institute of Stomatology, Shanghai, China
- Shuanglin Jiang
  - Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Feng Wang
  - Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
  - College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
  - National Center for Stomatology, Shanghai, China
  - National Clinical Research Center for Oral Diseases, Shanghai, China
  - Shanghai Key Laboratory of Stomatology, Shanghai, China
  - Shanghai Research Institute of Stomatology, Shanghai, China
- Xiaojun Chen
  - Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiqun Wu
  - Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
  - College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
  - National Center for Stomatology, Shanghai, China
  - National Clinical Research Center for Oral Diseases, Shanghai, China
  - Shanghai Key Laboratory of Stomatology, Shanghai, China
  - Shanghai Research Institute of Stomatology, Shanghai, China

16
Lv J, Zhang L, Xu J, Li W, Li G, Zhou H. Automatic segmentation of mandibular canal using transformer based neural networks. Front Bioeng Biotechnol 2023; 11:1302524. [PMID: 38047288] [PMCID: PMC10693337] [DOI: 10.3389/fbioe.2023.1302524] [Citation(s) in RCA: 0] [Received: 09/26/2023] [Accepted: 11/01/2023] [Indexed: 12/05/2023]
Abstract
Accurate 3D localization of the mandibular canal is crucial for the success of digitally assisted dental surgeries. Damage to the mandibular canal may result in severe consequences for the patient, including acute pain, numbness, or even facial paralysis. As such, the development of a fast, stable, and highly precise method for mandibular canal segmentation is paramount for enhancing the success rate of dental surgical procedures. Nonetheless, the task of mandibular canal segmentation is fraught with challenges, including a severe imbalance between positive and negative samples and indistinct boundaries, which often compromise the completeness of existing segmentation methods. To surmount these challenges, we propose an innovative, fully automated segmentation approach for the mandibular canal. Our methodology employs a Transformer architecture in conjunction with cl-Dice loss to ensure that the model concentrates on the connectivity of the mandibular canal. Additionally, we introduce a pixel-level feature fusion technique to bolster the model's sensitivity to fine-grained details of the canal structure. To tackle the issue of sample imbalance and vague boundaries, we implement a strategy founded on mandibular foramen localization to isolate the maximally connected domain of the mandibular canal. Furthermore, a contrast enhancement technique is employed for pre-processing the raw data. We also adopt a Deep Label Fusion strategy for pre-training on synthetic datasets, which substantially elevates the model's performance. Empirical evaluations on a publicly accessible mandibular canal dataset reveal superior performance metrics: a Dice score of 0.844, a cl-Dice score of 0.961, an IoU of 0.731, and an HD95 of 2.947 mm. These results not only validate the efficacy of our approach but also establish its state-of-the-art performance on the public mandibular canal dataset.
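As a consistency check, Dice and IoU are linked by the set identity Dice = 2·IoU/(1 + IoU), equivalently IoU = Dice/(2 − Dice), which holds for any pair of sets; the reported 0.844 and 0.731 satisfy it to rounding error. A quick check in Python:

```python
def dice_from_iou(iou):
    """Dice = 2*IoU / (1 + IoU), valid for any pair of sets."""
    return 2.0 * iou / (1.0 + iou)

def iou_from_dice(dice):
    """Inverse relation: IoU = Dice / (2 - Dice)."""
    return dice / (2.0 - dice)

# Cross-check against the values reported in the abstract.
print(round(dice_from_iou(0.731), 3))  # ≈ 0.845, matching the reported Dice of 0.844
print(round(iou_from_dice(0.844), 3))  # ≈ 0.730, matching the reported IoU of 0.731
```

Because the two metrics are monotonically related, reporting both adds no independent information, but the identity is a handy sanity check on published numbers.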
Affiliation(s)
- Wang Li
  - School of Pharmacy and Bioengineering, Chongqing University of Technology, Chongqing, China

17
Cameron AB, Abdelhamid HMHAS, George R. CBCT Segmentation and Additive Manufacturing for the Management of Root Canals with Ledges: A Case Report and Technique. J Endod 2023; 49:1570-1575. [PMID: 37582414] [DOI: 10.1016/j.joen.2023.08.002] [Citation(s) in RCA: 0] [Received: 06/19/2023] [Revised: 08/06/2023] [Accepted: 08/06/2023] [Indexed: 08/17/2023]
Abstract
Cone-beam computed tomography (CBCT) assessment of a ledge could be useful to a clinician; however, using this information effectively during a treatment procedure can be challenging. Advanced additive manufacturing technologies, combined with semi-automated segmentation of root canals, can help simulate the ledge and assist in the management of these iatrogenic complications. A patient presented after unsuccessful root canal treatment with a ledge on the left mandibular first molar. A CBCT scan was taken, and the images were imported into segmentation software (Mimics, Materialise). The canal was isolated, and segmentation was performed along with the other structures of the tooth. A 3-dimensional digital model of the internal structures of the canal was used to design a mock-up, which was additively manufactured. This was used as a preclinical guide to simulate the procedure, precurve the file, and manage the canal. This novel technique, using virtual modeling from CBCT data after ledge formation, allowed successful and quick management of a tooth with ledges.
Affiliation(s)
- Andrew B Cameron
  - School of Medicine and Dentistry, Griffith University, Gold Coast, Australia; Menzies Health Institute Queensland Disability & Rehabilitation Center, Gold Coast, Australia
- Roy George
  - School of Medicine and Dentistry, Griffith University, Gold Coast, Australia

18
Zhang L, Li W, Lv J, Xu J, Zhou H, Li G, Ai K. Advancements in oral and maxillofacial surgery medical images segmentation techniques: An overview. J Dent 2023; 138:104727. [PMID: 37769934] [DOI: 10.1016/j.jdent.2023.104727] [Citation(s) in RCA: 0] [Received: 08/07/2023] [Revised: 09/12/2023] [Accepted: 09/25/2023] [Indexed: 10/03/2023]
Abstract
OBJECTIVES This article reviews recent advances in computer-aided segmentation methods for oral and maxillofacial surgery and describes the advantages and limitations of these methods. The objective is to provide an invaluable resource for precise therapy and surgical planning in oral and maxillofacial surgery. STUDY SELECTION, DATA AND SOURCES This review includes full-text articles and conference proceedings reporting the application of segmentation methods in the field of oral and maxillofacial surgery. The research focuses on three aspects: tooth detection and segmentation, mandibular canal segmentation, and alveolar bone segmentation. The most commonly used imaging technique is CBCT, followed by conventional CT and orthopantomography. A systematic electronic database search was performed up to July 2023 (Medline via PubMed, IEEE Xplore, ArXiv, and Google Scholar were searched). RESULTS These segmentation methods can be divided into two main categories: traditional image processing and machine learning (including deep learning). Performance testing on datasets of images labeled by medical professionals shows that these methods perform similarly to dentists' annotations, confirming their effectiveness. However, no studies have evaluated their practical application value. CONCLUSION Segmentation methods (particularly deep learning methods) have demonstrated unprecedented performance, while inherent challenges remain, including the scarcity and inconsistency of datasets, visible artifacts in images, unbalanced data distribution, and the "black box" nature of the models. CLINICAL SIGNIFICANCE Accurate image segmentation is critical for precise treatment and surgical planning in oral and maxillofacial surgery. This review aims to facilitate more accurate and effective surgical treatment planning among dental researchers.
Affiliation(s)
- Lang Zhang
  - School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Wang Li
  - School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Jinxun Lv
  - School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Jiajie Xu
  - School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Hengyu Zhou
  - School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Gen Li
  - School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Keqi Ai
  - Department of Radiology, Xinqiao Hospital, Army Medical University, Chongqing 400037, China

19
Merken K, Monnens J, Marshall N, Nuyts J, Brasil DM, Santaella GM, Politis C, Jacobs R, Bosmans H. Development and validation of a 3D anthropomorphic phantom for dental CBCT imaging research. Med Phys 2023; 50:6714-6736. [PMID: 37602774] [DOI: 10.1002/mp.16661] [Citation(s) in RCA: 0] [Received: 01/05/2023] [Revised: 07/17/2023] [Accepted: 07/17/2023] [Indexed: 08/22/2023]
Abstract
BACKGROUND Optimization of dental cone beam computed tomography (CBCT) imaging is still at a preliminary stage and should be addressed using task-based methods. Dedicated models containing relevant clinical tasks for image quality studies have yet to be developed. PURPOSE To present a methodology to develop and validate a virtual adult anthropomorphic voxel phantom for use in task-based image quality optimization studies in dental CBCT imaging research, focusing on root fracture (RF) detection tasks in the presence of metal artefacts. METHODS The phantom was developed from a CBCT scan with an isotropic voxel size of 0.2 mm, from which the main dental structures, mandible, and maxilla were segmented. The missing large anatomical structures, including the spine, skull, and remaining soft tissues, were segmented from a lower-resolution full-skull scan. Anatomical abnormalities were absent in the areas of interest. Fine dental structures that could not be segmented, owing to the limited resolution and noise in the clinical data, were modelled using a priori anatomical knowledge; the model resolution of the teeth was therefore increased to 0.05 mm. Models of RFs, as well as of the dental restorations creating the artefacts, were developed and could be inserted into the phantom in any desired configuration. Simulated CBCT images of the models were generated using a newly developed multi-resolution simulation framework that incorporated the geometry, beam quality, noise, and spatial resolution characteristics of a real dental CBCT scanner. Ray-tracing and Monte Carlo techniques were used to create the projection images, which were reconstructed using the classical FDK algorithm. Validation of the models was assessed by measurements of different tooth lengths, the pulp volume, and the mandible, and comparison with reference values. Additionally, the simulated images were used in a reader study in which two oral radiologists scored the realism of the model's normal anatomy, as well as of the modelled RFs and restorations. RESULTS A model of an adult head, as well as models of RFs and different types of dental restorations, were created. Anatomical measurements were consistent with ranges reported in the literature. For the tooth length measurements, deviations from the mean reference values were less than 20%; in 77% of all measurements, deviations were within 10.1%. The pulp volume and mandible measurements were within one standard deviation of the reference values. Regarding the normal anatomy, both readers considered the realism of the dental structures to be good. Background structures received a lower realism score owing to the lack of sufficiently detailed trabecular bone structure, which was expected but not the focus of this study. All modelled RFs were scored at least adequate by at least one of the readers, both in appearance and position. The realism of the modelled restorations was considered good. CONCLUSIONS A methodology was proposed to develop and validate an anthropomorphic voxel phantom for image quality optimization studies in dental CBCT imaging, with a main focus on RF detection tasks. The methodology can be extended further to create more models representative of the clinical population.
Affiliation(s)
- Karen Merken
- Department of Imaging and Pathology, Division of Medical Physics & Quality Assessment, KU Leuven, Leuven, Belgium
- Janne Monnens
- Department of Imaging and Pathology, Division of Medical Physics & Quality Assessment, KU Leuven, Leuven, Belgium
- Nicholas Marshall
- Department of Imaging and Pathology, Division of Medical Physics & Quality Assessment, KU Leuven, Leuven, Belgium
- Johan Nuyts
- Department of Imaging and Pathology, Division of Nuclear Medicine & Molecular Imaging, KU Leuven, Leuven, Belgium
- Danieli Moura Brasil
- Department of Diagnosis and Oral Health, School of Dentistry, University of Louisville, Louisville, Kentucky, USA
- Gustavo Machado Santaella
- Department of Diagnosis and Oral Health, School of Dentistry, University of Louisville, Louisville, Kentucky, USA
- Constantinus Politis
- Department of Imaging and Pathology, Division of Oral and Maxillofacial Surgery, KU Leuven, Leuven, Belgium
- Reinhilde Jacobs
- Department of Imaging and Pathology, Division of Oral and Maxillofacial Surgery, KU Leuven, Leuven, Belgium
- Department of Dental Medicine, Karolinska Institutet, Huddinge, Sweden
- Hilde Bosmans
- Department of Imaging and Pathology, Division of Medical Physics & Quality Assessment, KU Leuven, Leuven, Belgium
20
Jindanil T, Marinho-Vieira LE, de-Azevedo-Vaz SL, Jacobs R. A unique artificial intelligence-based tool for automated CBCT segmentation of mandibular incisive canal. Dentomaxillofac Radiol 2023; 52:20230321. [PMID: 37870152] [DOI: 10.1259/dmfr.20230321]
Abstract
OBJECTIVES To develop and validate a novel artificial intelligence (AI) tool for automated segmentation of the mandibular incisive canal on cone beam computed tomography (CBCT) scans. METHODS After ethical approval, a dataset of 200 CBCT scans was selected and categorized into training (160), validation (20), and test (20) sets. CBCT scans were imported into Virtual Patient Creator, and the ground truth for training and validation was manually segmented by three oral radiologists in multiplanar reconstructions. Intra- and interobserver analysis of human segmentation variability was performed on 20% of the dataset. Segmentations were imported into Mimics for standardization, and the resulting files were imported into 3-Matic for analysis using surface- and voxel-based methods. Evaluation metrics involved time efficiency and analysis metrics including the Dice similarity coefficient (DSC), intersection over union (IoU), root mean square error (RMSE), precision, recall, accuracy, and consistency. These values were calculated for AI-based segmentation and refined-AI segmentation, each compared to manual segmentation. RESULTS Average times for AI-based segmentation, refined-AI segmentation, and manual segmentation were 00:10, 08:09, and 47:18, respectively (a 284-fold time reduction). AI-based segmentation showed mean values of DSC 0.873, IoU 0.775, RMSE 0.256 mm, precision 0.837, and recall 0.890, while refined-AI segmentation provided DSC 0.876, IoU 0.781, RMSE 0.267 mm, precision 0.852, and recall 0.902, with an accuracy of 0.998 for both methods. Consistency was 1 for AI-based segmentation and 0.910 for manual segmentation. CONCLUSIONS An innovative AI tool for automated segmentation of the mandibular incisive canal on CBCT scans proved to be accurate, time efficient, and highly consistent, supporting presurgical planning.
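The overlap metrics reported above (DSC and IoU) compare a predicted binary mask against a ground-truth mask. As a generic illustration of how they are computed (not the study's implementation), assuming simple NumPy boolean masks:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

def iou(pred, truth):
    """Intersection over Union (Jaccard index) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

# Toy 2D example: two 4x4 squares overlapping in a 2x2 region
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True
print(dice_coefficient(a, b))  # 2*4 / (16+16) = 0.25
print(iou(a, b))               # 4 / 28 ≈ 0.143
```

The same formulas apply voxel-wise in 3D; DSC weights the intersection twice, which is why it is always at least as large as IoU for the same pair of masks.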
Affiliation(s)
- Thanatchaporn Jindanil
- Department of Imaging and Pathology, Faculty of Medicine, OMFS-IMPATH Research Group, KU Leuven, Leuven, Belgium
- Luiz Eduardo Marinho-Vieira
- Department of Imaging and Pathology, Faculty of Medicine, OMFS-IMPATH Research Group, KU Leuven, Leuven, Belgium
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas, Piracicaba, Brazil
- Reinhilde Jacobs
- Department of Imaging and Pathology, Faculty of Medicine, OMFS-IMPATH Research Group, KU Leuven, Leuven, Belgium
- Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
21
Chun SY, Kang YH, Yang S, Kang SR, Lee SJ, Kim JM, Kim JE, Huh KH, Lee SS, Heo MS, Yi WJ. Automatic classification of 3D positional relationship between mandibular third molar and inferior alveolar canal using a distance-aware network. BMC Oral Health 2023; 23:794. [PMID: 37880603] [PMCID: PMC10598947] [DOI: 10.1186/s12903-023-03496-9]
Abstract
The purpose of this study was to automatically classify the three-dimensional (3D) positional relationship between an impacted mandibular third molar (M3) and the inferior alveolar canal (MC) using a distance-aware network in cone-beam CT (CBCT) images. We developed a network consisting of cascaded stages of segmentation and classification for the buccal-lingual relationship between the M3 and the MC. The M3 and the MC were simultaneously segmented using Dense121 U-Net in the segmentation stage, and their buccal-lingual relationship was automatically classified using a 3D distance-aware network with the multichannel inputs of the original CBCT image and the signed distance map (SDM) generated from the segmentation in the classification stage. The Dense121 U-Net achieved the highest average precision of 0.87, 0.96, and 0.94 in the segmentation of the M3, the MC, and both together, respectively. The 3D distance-aware classification network of the Dense121 U-Net with the input of both the CBCT image and the SDM showed the highest performance of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve, each of which had a value of 1.00. The SDM generated from the segmentation mask significantly contributed to increasing the accuracy of the classification network. The proposed distance-aware network demonstrated high accuracy in the automatic classification of the 3D positional relationship between the M3 and the MC by learning anatomical and geometrical information from the CBCT images.
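The signed distance map (SDM) used as a network input above encodes, for every voxel, its distance to the segmented structure, with opposite signs inside and outside. A minimal sketch of one common way to build an SDM from a binary mask, using SciPy's Euclidean distance transform (an illustrative assumption; the paper's exact construction may differ):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask, spacing=(1.0, 1.0, 1.0)):
    """Signed Euclidean distance map of a binary segmentation mask:
    negative inside the structure, positive outside, near zero at the
    boundary. `spacing` is the voxel size (e.g. in mm) along each axis."""
    mask = mask.astype(bool)
    dist_outside = distance_transform_edt(~mask, sampling=spacing)
    dist_inside = distance_transform_edt(mask, sampling=spacing)
    return dist_outside - dist_inside

# Tiny example: a single foreground voxel in a 5x5x5 volume
mask = np.zeros((5, 5, 5), dtype=bool)
mask[2, 2, 2] = True
sdm = signed_distance_map(mask)
print(sdm[2, 2, 2])  # negative (inside the structure)
print(sdm[0, 2, 2])  # 2.0 (two voxels away from the structure)
```

Concatenating such a map with the intensity image as a second input channel gives the classifier explicit geometric information about how far each voxel lies from the segmented structures.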
Affiliation(s)
- So-Young Chun
- Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, South Korea
- Yun-Hui Kang
- Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, Seoul, South Korea
- Su Yang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Se-Ryong Kang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Jun-Min Kim
- Department of Electronics and Information Engineering, Hansung University, Seoul, South Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Sam-Sun Lee
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Won-Jin Yi
- Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, South Korea
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
22
Elgarba BM, Van Aelst S, Swaity A, Morgan N, Shujaat S, Jacobs R. Deep learning-based segmentation of dental implants on cone-beam computed tomography images: A validation study. J Dent 2023; 137:104639. [PMID: 37517787] [DOI: 10.1016/j.jdent.2023.104639]
Abstract
OBJECTIVES To train and validate a cloud-based convolutional neural network (CNN) model for automated segmentation (AS) of dental implant and attached prosthetic crown on cone-beam computed tomography (CBCT) images. METHODS A total dataset of 280 maxillomandibular jawbone CBCT scans was acquired from patients who underwent implant placement with or without coronal restoration. The dataset was randomly divided into three subsets: training set (n = 225), validation set (n = 25) and testing set (n = 30). A CNN model was developed and trained using expert-based semi-automated segmentation (SS) of the implant and attached prosthetic crown as the ground truth. The performance of AS was assessed by comparing with SS and manually corrected automated segmentation referred to as refined-automated segmentation (R-AS). Evaluation metrics included timing, voxel-wise comparison based on confusion matrix and 3D surface differences. RESULTS The average time required for AS was 60 times faster (<30 s) than the SS approach. The CNN model was highly effective in segmenting dental implants both with and without coronal restoration, achieving a high dice similarity coefficient score of 0.92±0.02 and 0.91±0.03, respectively. Moreover, the root mean square deviation values were also found to be low (implant only: 0.08±0.09 mm, implant+restoration: 0.11±0.07 mm) when compared with R-AS, implying high AI segmentation accuracy. CONCLUSIONS The proposed cloud-based deep learning tool demonstrated high performance and time-efficient segmentation of implants on CBCT images. CLINICAL SIGNIFICANCE AI-based segmentation of implants and prosthetic crowns can minimize the negative impact of artifacts and enhance the generalizability of creating dental virtual models. Furthermore, incorporating the suggested tool into existing CNN models specialized for segmenting anatomical structures can improve pre-surgical planning for implants and post-operative assessment of peri‑implant bone levels.
Affiliation(s)
- Bahaaeldeen M Elgarba
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Department of Prosthodontics, Faculty of Dentistry, Tanta University, 31511 Tanta, Egypt
- Stijn Van Aelst
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Abdullah Swaity
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Prosthodontic Department, King Hussein Medical Center, Royal Medical Services, Amman, Jordan
- Nermin Morgan
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura, Egypt
- Sohaib Shujaat
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, Kingdom of Saudi Arabia
- Reinhilde Jacobs
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
23
Bonny T, Al Nassan W, Obaideen K, Al Mallahi MN, Mohammad Y, El-damanhoury HM. Contemporary Role and Applications of Artificial Intelligence in Dentistry. F1000Res 2023; 12:1179. [PMID: 37942018] [PMCID: PMC10630586] [DOI: 10.12688/f1000research.140204.1]
Abstract
Artificial intelligence (AI) technologies significantly impact various sectors, including healthcare, engineering, the sciences, and smart cities. AI has the potential to improve the quality of patient care and treatment outcomes while minimizing the risk of human error. AI is transforming dentistry just as it is revolutionizing other sectors: it is used to diagnose dental diseases and provide treatment recommendations, and dental professionals increasingly rely on it to assist in diagnosis, clinical decision-making, treatment planning, and prognosis prediction across ten dental specialties. One of the most significant advantages of AI in dentistry is its ability to analyze vast amounts of data quickly and accurately, providing dental professionals with valuable insights that enhance their decision-making. The purpose of this paper is to identify the AI algorithms that have been frequently used in dentistry and assess how well they perform in terms of diagnosis, clinical decision-making, treatment, and prognosis prediction in ten dental specialties: dental public health, endodontics, oral and maxillofacial surgery, oral medicine and pathology, oral and maxillofacial radiology, orthodontics and dentofacial orthopedics, pediatric dentistry, periodontics, prosthodontics, and digital dentistry in general. We also discuss the pros and cons of using AI across these specialties. Finally, we present the limitations of AI in dentistry that leave it incapable of replacing dental personnel; dentists should consider AI a complementary benefit and not a threat.
Affiliation(s)
- Talal Bonny
- Department of Computer Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Wafaa Al Nassan
- Department of Computer Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Khaled Obaideen
- Sustainable Energy and Power Systems Research Centre, RISE, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Maryam Nooman Al Mallahi
- Department of Mechanical and Aerospace Engineering, United Arab Emirates University, Al Ain City, Abu Dhabi, 27272, United Arab Emirates
- Yara Mohammad
- College of Engineering and Information Technology, Ajman University, Ajman, United Arab Emirates
- Hatem M. El-damanhoury
- Department of Preventive and Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, 27272, United Arab Emirates
24
Lin X, Xin W, Huang J, Jing Y, Liu P, Han J, Ji J. Accurate mandibular canal segmentation of dental CBCT using a two-stage 3D-UNet based segmentation framework. BMC Oral Health 2023; 23:551. [PMID: 37563606] [PMCID: PMC10416403] [DOI: 10.1186/s12903-023-03279-2]
Abstract
OBJECTIVES The objective of this study was to develop a deep learning (DL) model for fast and accurate mandibular canal (MC) segmentation on cone beam computed tomography (CBCT). METHODS A total of 220 CBCT scans from dentate subjects needing oral surgery were used in this study. The segmentation ground truth was annotated and reviewed by two senior dentists. All patients were randomly split into a training dataset (n = 132), a validation dataset (n = 44), and a test dataset (n = 44). We proposed a two-stage 3D-UNet based segmentation framework for automated MC segmentation on CBCT. The Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (95% HD) were used as the evaluation metrics for the segmentation model. RESULTS The two-stage 3D-UNet model successfully segmented the MC on CBCT images. In the test dataset, the mean DSC was 0.875 ± 0.045 and the mean 95% HD was 0.442 ± 0.379. CONCLUSIONS This automatic DL method may aid in the detection of the MC and assist dental practitioners in setting up treatment plans for oral surgery involving the MC.
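The 95% Hausdorff distance reported alongside the DSC measures boundary agreement while discarding the worst 5% of surface distances, making it robust to outlier voxels. A generic sketch over two point sets of boundary coordinates (illustrative only, not the study's code; a brute-force pairwise version suitable for small point sets):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point
    sets of shape (N, 3), e.g. boundary voxel coordinates (in mm) of a
    predicted and a ground-truth segmentation."""
    # Pairwise Euclidean distances, shape (len(A), len(B))
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point of A to its nearest point of B
    d_ba = d.min(axis=0)  # each point of B to its nearest point of A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(hd95(pts, pts + np.array([0.0, 0.3, 0.0])))  # ≈ 0.3
```

For full-resolution CBCT masks the pairwise matrix becomes large, so production implementations typically use distance transforms or KD-trees instead, but the metric itself is the one shown.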
Affiliation(s)
- Xi Lin
- Clinic of Stomatology of the Shantou University Medical College, No. 22, Xinling Road, Shantou, Guangdong, China
- Weini Xin
- Clinic of Stomatology of the Shantou University Medical College, No. 22, Xinling Road, Shantou, Guangdong, China
- Department of Stomatology of Shantou University Medical College, No. 22, Xinling Road, Shantou, Guangdong, China
- Jingna Huang
- Clinic of Stomatology of the Shantou University Medical College, No. 22, Xinling Road, Shantou, Guangdong, China
- Yang Jing
- Huiying Medical Technology Co., Ltd, Room A206, B2, Dongsheng Science and Technology Park, Haidian District, Beijing, China
- Pengfei Liu
- Huiying Medical Technology Co., Ltd, Room A206, B2, Dongsheng Science and Technology Park, Haidian District, Beijing, China
- Jingdan Han
- Huiying Medical Technology Co., Ltd, Room A206, B2, Dongsheng Science and Technology Park, Haidian District, Beijing, China
- Jie Ji
- Network and Information Center, Shantou University, No. 243, University Road, Shantou, Guangdong, China
25
Oliveira-Santos N, Jacobs R, Picoli FF, Lahoud P, Niclaes L, Groppo FC. Automated segmentation of the mandibular canal and its anterior loop by deep learning. Sci Rep 2023; 13:10819. [PMID: 37402784] [DOI: 10.1038/s41598-023-37798-3]
Abstract
Accurate mandibular canal (MC) detection is crucial to avoid nerve injury during surgical procedures. Moreover, the anatomic complexity of the interforaminal region requires a precise delineation of anatomical variations such as the anterior loop (AL). Therefore, CBCT-based presurgical planning is recommended, even though anatomical variations and lack of MC cortication make canal delineation challenging. To overcome these limitations, artificial intelligence (AI) may aid presurgical MC delineation. In the present study, we aimed to train and validate an AI-driven tool capable of performing accurate segmentation of the MC even in the presence of anatomical variations such as the AL. Results achieved high accuracy metrics, with a global accuracy of 0.997 for the MC both with and without an AL. The anterior and middle sections of the MC, where most surgical interventions are performed, presented the most accurate segmentation compared to the posterior section. The AI-driven tool thus provided accurate segmentation of the mandibular canal, even in the presence of anatomical variation such as an anterior loop. The presently validated dedicated AI tool may aid clinicians in automating the segmentation of neurovascular canals and their anatomical variations, and may significantly contribute to presurgical planning for dental implant placement, especially in the interforaminal region.
Affiliation(s)
- Nicolly Oliveira-Santos
- OMFS IMPATH Research Group, Department of Imaging and Pathology, KU Leuven and University Hospitals Leuven, UZ Campus St Rafael, Leuven, Belgium
- Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, São Paulo, Brazil
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, KU Leuven and University Hospitals Leuven, UZ Campus St Rafael, Leuven, Belgium
- Department of Dental Medicine, Karolinska Institutet, Stockholm, Sweden
- Fernando Fortes Picoli
- OMFS IMPATH Research Group, Department of Imaging and Pathology, KU Leuven and University Hospitals Leuven, UZ Campus St Rafael, Leuven, Belgium
- Department of Stomatology and Oral Radiology, Dental School, Federal University of Goiás, Goiânia, Goiás, Brazil
- Pierre Lahoud
- OMFS IMPATH Research Group, Department of Imaging and Pathology, KU Leuven and University Hospitals Leuven, UZ Campus St Rafael, Leuven, Belgium
- Liselot Niclaes
- OMFS IMPATH Research Group, Department of Imaging and Pathology, KU Leuven and University Hospitals Leuven, UZ Campus St Rafael, Leuven, Belgium
- Francisco Carlos Groppo
- Department of Biosciences, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, São Paulo, Brazil
26
Zhao H, Chen J, Yun Z, Feng Q, Zhong L, Yang W. Whole mandibular canal segmentation using transformed dental CBCT volume in Frenet frame. Heliyon 2023; 9:e17651. [PMID: 37449128] [PMCID: PMC10336514] [DOI: 10.1016/j.heliyon.2023.e17651]
Abstract
Accurate segmentation of the mandibular canal is essential in dental implant and maxillofacial surgery, as it can help prevent nerve or vascular damage inside the mandibular canal. Achieving this is challenging because of the low contrast in CBCT scans and the small scale of mandibular canal areas. Several innovative methods have been proposed for mandibular canal segmentation with positive performance. However, most of these methods segment the mandibular canal based on sliding patches, which may adversely affect the morphological integrity of the tubular structure. In this study, we propose whole mandibular canal segmentation using a transformed dental CBCT volume in the Frenet frame. Considering the connectivity of the mandibular canal, we transform the CBCT volume to obtain a sub-volume containing the whole mandibular canal, based on the Frenet frame, to ensure complete 3D structural information. Moreover, to further improve segmentation performance, we use clDice to guarantee the integrity of the mandibular canal structure. Experimental results on our CBCT dataset show that integrating the proposed transformed volume in the Frenet frame into other state-of-the-art methods achieves a 0.5%–12.1% improvement in Dice performance. Our proposed method achieves impressive results, with a Dice value of 0.865 (±0.035) and a clDice value of 0.971 (±0.020), suggesting that it can segment the mandibular canal with superior performance.
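The Frenet frame underlying the volume transformation above is the moving (tangent, normal, binormal) coordinate frame along a space curve such as the canal centreline. A minimal discrete sketch with NumPy (illustrative only; the paper's full transformation pipeline is more involved than this):

```python
import numpy as np

def frenet_frames(centerline):
    """Discrete Frenet frames along an (N, 3) polyline of curve points,
    e.g. a mandibular canal centreline: unit tangent T, unit normal N
    (defined where curvature is non-zero), and binormal B = T x N."""
    d1 = np.gradient(centerline, axis=0)        # first derivative of the curve
    T = d1 / np.linalg.norm(d1, axis=1, keepdims=True)
    dT = np.gradient(T, axis=0)                 # derivative of the tangent
    n = np.linalg.norm(dT, axis=1, keepdims=True)
    N = np.divide(dT, n, out=np.zeros_like(dT), where=n > 1e-12)
    B = np.cross(T, N)
    return T, N, B

# Quarter circle of radius 1 in the xy-plane: B should point along +z
s = np.linspace(0, np.pi / 2, 50)
curve = np.stack([np.cos(s), np.sin(s), np.zeros_like(s)], axis=1)
T, N, B = frenet_frames(curve)
```

Resampling the CBCT volume in cross-sections spanned by (N, B) at each centreline point straightens the curved canal into a sub-volume whose long axis follows T, which is what allows the whole tubular structure to be processed at once instead of in sliding patches.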
Affiliation(s)
- Huanmiao Zhao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Junhua Chen
- Stomatology Hospital of Guangzhou Medical University, Guangzhou, 510140, China
- Zhaoqiang Yun
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Liming Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
27
Fan W, Zhang J, Wang N, Li J, Hu L. The Application of Deep Learning on CBCT in Dentistry. Diagnostics (Basel) 2023; 13:2056. [PMID: 37370951] [DOI: 10.3390/diagnostics13122056]
Abstract
Cone beam computed tomography (CBCT) has become an essential tool in modern dentistry, allowing dentists to analyze the relationship between teeth and the surrounding tissues. However, traditional manual analysis can be time-consuming, and its accuracy depends on the user's proficiency. To address these limitations, deep learning (DL) systems have been integrated into CBCT analysis to improve accuracy and efficiency. Numerous DL models have been developed for tasks such as automatic diagnosis, segmentation, and classification of teeth, the inferior alveolar nerve, bone, and the airway, as well as preoperative planning. All research articles summarized were retrieved from PubMed, IEEE, Google Scholar, and Web of Science up to December 2022. Many studies have demonstrated that the application of DL technology to CBCT examination in dentistry has achieved significant progress, with accuracy in radiological image analysis reaching the level of clinicians, although in some fields accuracy still needs to be improved. Furthermore, ethical issues and differences between CBCT devices may hinder its widespread use. DL models have the potential to be used clinically as medical decision-making aids, and the combination of DL and CBCT can greatly reduce the workload of image reading. This review provides an up-to-date overview of current applications of DL to CBCT images in dentistry, highlighting its potential and suggesting directions for future research.
Affiliation(s)
- Wenjie Fan
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Jiaqi Zhang
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Nan Wang
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Jia Li
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Li Hu
- Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
28
Tao B, Yu X, Wang W, Wang H, Chen X, Wang F, Wu Y. A deep learning-based automatic segmentation of zygomatic bones from cone-beam computed tomography images: A proof of concept. J Dent 2023:104582. [PMID: 37321334] [DOI: 10.1016/j.jdent.2023.104582]
Abstract
OBJECTIVES To investigate the efficiency and accuracy of a deep learning-based automatic segmentation method for zygomatic bones from cone-beam computed tomography (CBCT) images. METHODS One hundred thirty CBCT scans were included and randomly divided into three subsets (training, validation, and test) in a 6:2:2 ratio. A deep learning-based model was developed, and it included a classification network and a segmentation network, where an edge supervision module was added to increase the attention of the edges of zygomatic bones. Attention maps were generated by the Grad-CAM and Guided Grad-CAM algorithms to improve the interpretability of the model. The performance of the model was then compared with that of four dentists on 10 CBCT scans from the test dataset. A p value <.05 was considered statistically significant. RESULTS The accuracy of the classification network was 99.64%. The Dice coefficient (Dice) of the deep learning-based model for the test dataset was 92.34 ± 2.04%, the average surface distance (ASD) was 0.1 ± 0.15 mm, and the 95% Hausdorff distance (HD) was 0.98 ± 0.42 mm. The model required 17.03 seconds on average to segment zygomatic bones, whereas this task took 49.3 minutes for dentists to complete. The Dice score of the model for the 10 CBCT scans was 93.2 ± 1.3%, while that of the dentists was 90.37 ± 3.32%. CONCLUSIONS The proposed deep learning-based model could segment zygomatic bones with high accuracy and efficiency compared with those of dentists. CLINICAL SIGNIFICANCE The proposed automatic segmentation model for zygomatic bone could generate an accurate 3D model for the preoperative digital planning of zygoma reconstruction, orbital surgery, zygomatic implant surgery, and orthodontics.
Affiliation(s)
- Baoxin Tao
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Xinbo Yu
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Wenying Wang
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Haowei Wang
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Room 805, Dongchuan Road 800, Minhang District, Shanghai, 200240, China
- Feng Wang
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Yiqun Wu
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
| |
Collapse
29
Abesi F, Maleki M, Zamani M. Diagnostic performance of artificial intelligence using cone-beam computed tomography imaging of the oral and maxillofacial region: A scoping review and meta-analysis. Imaging Sci Dent 2023; 53:101-108. [PMID: 37405196] [PMCID: PMC10315225] [DOI: 10.5624/isd.20220224]
Abstract
PURPOSE The aim of this study was to conduct a scoping review and meta-analysis to provide overall estimates of the recall and precision of artificial intelligence for detection and segmentation using oral and maxillofacial cone-beam computed tomography (CBCT) scans. MATERIALS AND METHODS A literature search was done in Embase, PubMed, and Scopus through October 31, 2022 to identify studies that reported the recall and precision values of artificial intelligence systems using oral and maxillofacial CBCT images for the automatic detection or segmentation of anatomical landmarks or pathological lesions. Recall (sensitivity) indicates the percentage of certain structures that are correctly detected. Precision (positive predictive value) indicates the percentage of accurately identified structures out of all detected structures. The performance values were extracted and pooled, and the estimates were presented with 95% confidence intervals (CIs). RESULTS In total, 12 eligible studies were finally included. The overall pooled recall for artificial intelligence was 0.91 (95% CI: 0.87-0.94). In a subgroup analysis, the pooled recall was 0.88 (95% CI: 0.77-0.94) for detection and 0.92 (95% CI: 0.87-0.96) for segmentation. The overall pooled precision for artificial intelligence was 0.93 (95% CI: 0.88-0.95). A subgroup analysis showed that the pooled precision value was 0.90 (95% CI: 0.77-0.96) for detection and 0.94 (95% CI: 0.89-0.97) for segmentation. CONCLUSION Excellent performance was found for artificial intelligence using oral and maxillofacial CBCT images.
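Recall and precision as defined in this abstract reduce to true/false positive and false negative counts. A minimal sketch on toy binary detection masks (names and toy data are illustrative, not from the review):

```python
import numpy as np

def recall_precision(pred: np.ndarray, truth: np.ndarray) -> tuple:
    """Recall = TP/(TP+FN); precision = TP/(TP+FP), on binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # correctly detected structures
    fn = np.logical_and(~pred, truth).sum()   # missed structures
    fp = np.logical_and(pred, ~truth).sum()   # spurious detections
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    return float(recall), float(precision)

truth = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
pred  = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
print(recall_precision(pred, truth))  # (0.75, 0.75)
```

The pooled estimates in the meta-analysis are then weighted combinations of such per-study values, typically under a random-effects model.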
Affiliation(s)
- Farida Abesi: Department of Oral and Maxillofacial Radiology, Dental Faculty, Babol University of Medical Sciences, Babol, Iran
- Mahla Maleki, Mohammad Zamani: Student Research Committee, Babol University of Medical Sciences, Babol, Iran
30
Mangano FG, Admakin O, Lerner H, Mangano C. Artificial Intelligence and Augmented Reality for Guided Implant Surgery Planning: a Proof of Concept. J Dent 2023; 133:104485. [PMID: 36965859] [DOI: 10.1016/j.jdent.2023.104485]
Abstract
PURPOSE To present a novel protocol for authentic three-dimensional (3D) planning of dental implants, using artificial intelligence (AI) and augmented reality (AR). METHODS The novel protocol consists of (1) 3D data acquisition with an intraoral scanner (IOS) and cone-beam computed tomography (CBCT); (2) application of AI for CBCT segmentation to obtain standard tessellation language (STL) models and automatic alignment with IOS models; (3) loading of selected STL models within the AR system and surgical planning with holograms; (4) surgical guide design with open-source computer-assisted design (CAD) software; and (5) surgery on the patient. RESULTS This novel protocol is effective and time-efficient when used for planning simple cases of static guided implant surgery in the partially edentulous patient. The clinician can plan the implants in an authentic 3D environment, without using any radiological guided surgery software. The precision of implant placement appears clinically acceptable, with minor deviations. CONCLUSIONS AI and AR technologies can be successfully used for authentic 3D planning of guided implant surgery and may replace conventional guided surgery software. However, further clinical studies are needed to validate this protocol. STATEMENT OF CLINICAL RELEVANCE The combined use of AI and AR may change the perspectives of modern guided implant surgery, enabling authentic 3D planning that may replace conventional guided surgery software.
Affiliation(s)
- Francesco Guido Mangano: Department of Pediatric, Preventive Dentistry and Orthodontics, Sechenov First State Medical University, Moscow, Russian Federation; Honorary Professor in Restorative Dental Sciences, Faculty of Dentistry, The University of Hong Kong, China
- Oleg Admakin: Department of Pediatric, Preventive Dentistry and Orthodontics, Sechenov First State Medical University, Moscow, Russian Federation
- Henriette Lerner: Academic Teaching and Research Institution of Johann Wolfgang Goethe University, Frankfurt, Germany
|
31
|
Novel method for augmented reality guided endodontics: an in vitro study. J Dent 2023; 132:104476. [PMID: 36905949] [DOI: 10.1016/j.jdent.2023.104476]
Abstract
OBJECTIVE The aim of this study was to evaluate the accuracy of a novel augmented reality (AR) method for guided access cavity preparation in endodontics, tested on 3D-printed jaws. METHODS Two operators with different levels of experience in endodontics prepared pre-planned, virtually guided access cavities using a novel markerless AR system developed by some of the authors, on three sets of jaw models printed with a 3D printer (Objet Connex 350, Stratasys) and mounted on a phantom. After treatment, a post-operative high-resolution CBCT scan (NewTom VGI Evo, Cefla) was taken of each model and registered to the pre-operative model. All access cavities were then digitally reconstructed by filling the cavity area using 3D medical software (3-Matic 15.0, Materialise). For the anterior teeth and premolars, the deviations at the coronal and apical entry points as well as the angular deviation of the access cavity were compared with the virtual plan. For the molars, the deviation at the coronal entry point was compared with the virtual plan. Additionally, the surface area of all access cavities at the entry point was measured and compared with the virtual plan. Descriptive statistics were computed for each parameter, with 95% confidence intervals. RESULTS A total of 90 access cavities were drilled to a depth of 4 mm inside the tooth. The mean deviation for the anterior teeth and premolars was 0.51 mm at the entry point and 0.77 mm at the apical point, with a mean angular deviation of 8.5° and a mean surface overlap of 57%. The mean deviation for the molars at the entry point was 0.63 mm, with a mean surface overlap of 82%. CONCLUSION The use of AR as a digital guide for endodontic access cavity drilling on different teeth showed promising results and might have potential for clinical use. However, further development and research may be needed before in vivo validation to overcome the limitations of the study.
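The accuracy measures used here (point deviation at entry and apex, angular deviation between planned and drilled axes) are standard vector computations. A minimal NumPy sketch under assumed coordinates in mm; the function name and the example points are hypothetical, not taken from the study:

```python
import numpy as np

def implant_deviations(planned_entry, planned_apex, actual_entry, actual_apex):
    """Entry/apex point deviations (mm) and angular deviation (degrees)
    between a planned and an actual drill axis, given 3D points in mm."""
    p_e, p_a = np.asarray(planned_entry, float), np.asarray(planned_apex, float)
    a_e, a_a = np.asarray(actual_entry, float), np.asarray(actual_apex, float)
    entry_dev = np.linalg.norm(a_e - p_e)          # Euclidean distance at entry
    apex_dev = np.linalg.norm(a_a - p_a)           # Euclidean distance at apex
    v1 = (p_a - p_e) / np.linalg.norm(p_a - p_e)   # planned axis (unit vector)
    v2 = (a_a - a_e) / np.linalg.norm(a_a - a_e)   # actual axis (unit vector)
    angle = np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0)))
    return entry_dev, apex_dev, angle

# Planned axis straight down 4 mm; actual axis shifted 0.5 mm and tilted.
dev = implant_deviations([0, 0, 0], [0, 0, -4], [0.5, 0, 0], [1.0, 0, -4])
print(dev)
```

The `np.clip` guards the `arccos` against floating-point values marginally outside [-1, 1] for near-parallel axes.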
32
Vinayahalingam S, Berends B, Baan F, Moin DA, van Luijn R, Bergé S, Xi T. Deep learning for automated segmentation of the temporomandibular joint. J Dent 2023; 132:104475. [PMID: 36870441] [DOI: 10.1016/j.jdent.2023.104475]
Abstract
OBJECTIVE Quantitative analysis of the volume and shape of the temporomandibular joint (TMJ) using cone-beam computed tomography (CBCT) requires accurate segmentation of the mandibular condyles and the glenoid fossae. This study aimed to develop and validate an automated segmentation tool based on a deep learning algorithm for accurate 3D reconstruction of the TMJ. MATERIALS AND METHODS A three-step deep learning approach based on 3D U-Nets was developed to segment the condyles and glenoid fossae on CBCT datasets. Three 3D U-Nets were utilized for region of interest (ROI) determination, bone segmentation, and TMJ classification. The AI-based algorithm was trained and validated on 154 manually segmented CBCT images. Two independent observers and the AI algorithm segmented the TMJs of a test set of 8 CBCTs. The segmentation time was recorded, and accuracy metrics (intersection over union, Dice, etc.) were calculated to quantify the degree of similarity between the manual segmentations (ground truth) and the outputs of the AI models. RESULTS The AI segmentation achieved an intersection over union (IoU) of 0.955 and 0.935 for the condyles and glenoid fossae, respectively. The IoU values of the two independent observers for manual condyle segmentation were 0.895 and 0.928, respectively (p<0.05). The mean time required for the AI segmentation was 3.6 s (SD 0.9), whereas the two observers needed 378.9 s (SD 204.9) and 571.6 s (SD 257.4), respectively (p<0.001). CONCLUSION The AI-based automated segmentation tool segmented the mandibular condyles and glenoid fossae with high accuracy, speed, and consistency. Potentially limited robustness and generalizability are risks that cannot be ruled out, as the algorithms were trained on scans from orthognathic surgery patients derived from a single type of CBCT scanner.
CLINICAL SIGNIFICANCE The incorporation of the AI-based segmentation tool into diagnostic software could facilitate 3D qualitative and quantitative analysis of TMJs in a clinical setting, particularly for the diagnosis of TMJ disorders and longitudinal follow-up.
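The IoU (Jaccard index) used in this study is closely related to the Dice coefficient used elsewhere in this reference list: Dice = 2J/(1+J). A minimal sketch on toy masks (names and toy data are illustrative, not from the paper):

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union (Jaccard index) of two binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    union = np.logical_or(pred, truth).sum()
    inter = np.logical_and(pred, truth).sum()
    return inter / union if union else 1.0

a = np.array([1, 1, 1, 0, 0], dtype=bool)
b = np.array([0, 1, 1, 1, 0], dtype=bool)
j = iou(a, b)        # intersection 2 / union 4 = 0.5
d = 2 * j / (1 + j)  # equivalent Dice via Dice = 2J/(1+J)
print(j, d)
```

Because of this monotonic relationship, IoU and Dice rank segmentations identically; IoU simply penalizes partial overlap more heavily.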
Affiliation(s)
- Shankeeth Vinayahalingam, Rik van Luijn, Stefaan Bergé, Tong Xi: Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, P.O. Box 9101, Postal number 590, Nijmegen, HB 6500, The Netherlands
- Bo Berends, Frank Baan: Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, P.O. Box 9101, Postal number 590, Nijmegen, HB 6500, The Netherlands; Radboudumc 3DLab, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
|
33
|
Nogueira-Reis F, Morgan N, Nomidis S, Van Gerven A, Oliveira-Santos N, Jacobs R, Tabchoury CPM. Three-dimensional maxillary virtual patient creation by convolutional neural network-based segmentation on cone-beam computed tomography images. Clin Oral Investig 2023; 27:1133-1141. [PMID: 36114907] [PMCID: PMC9985582] [DOI: 10.1007/s00784-022-04708-2]
Abstract
OBJECTIVE To qualitatively and quantitatively assess the integrated segmentation of three convolutional neural network (CNN) models for the creation of a maxillary virtual patient (MVP) from cone-beam computed tomography (CBCT) images. MATERIALS AND METHODS A dataset of 40 CBCT scans acquired with different scanning parameters was selected. Three previously validated individual CNN models were integrated to achieve a combined segmentation of the maxillary complex, maxillary sinuses, and upper dentition. Two experts performed a qualitative assessment, scoring the integrated segmentations from 0 to 10 based on the number of refinements required. Furthermore, the experts executed the refinements, allowing performance comparison between the integrated automated segmentation (AS) and refined segmentation (RS) models. Inter-observer consistency of the refinements and the time needed to create a full-resolution automatic segmentation were calculated. RESULTS Of the dataset, 85% scored 7-10, and 15% scored 3-6. The average time required for automated segmentation was 1.7 min. Performance metrics indicated an excellent overlap between automatic and refined segmentation, with a Dice similarity coefficient (DSC) of 99.3%. High inter-observer consistency of refinements was observed, with a 95% Hausdorff distance (HD) of 0.045 mm. CONCLUSION The integrated CNN models proved to be fast and accurate in creating the MVP, with strong inter-observer consistency of the refinements. CLINICAL RELEVANCE The simultaneous automated segmentation of these structures could act as a valuable tool in clinical orthodontics, implant rehabilitation, and any oral or maxillofacial surgical procedure where visualization of the MVP and its relationship with surrounding structures is necessary for an accurate diagnosis and patient-specific treatment planning.
Affiliation(s)
- Fernanda Nogueira-Reis, Nicolly Oliveira-Santos: Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira 901, Piracicaba, São Paulo, 13414-903, Brazil; OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, 3000, Leuven, Belgium
- Nermin Morgan: OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, 3000, Leuven, Belgium; Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura, 35516, Dakahlia, Egypt
- Reinhilde Jacobs: OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, 3000, Leuven, Belgium; Department of Dental Medicine, Karolinska Institutet, Box 4064, 141 04, Huddinge, Stockholm, Sweden
- Cinthia Pereira Machado Tabchoury: Department of Biosciences, Division of Biochemistry, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira 901, Piracicaba, São Paulo, 13414-903, Brazil
|
34
|
Synergy between artificial intelligence and precision medicine for computer-assisted oral and maxillofacial surgical planning. Clin Oral Investig 2023; 27:897-906. [PMID: 36323803] [DOI: 10.1007/s00784-022-04706-4]
Abstract
OBJECTIVES The aim of this review was to investigate the application of artificial intelligence (AI) in maxillofacial computer-assisted surgical planning (CASP) workflows, with a discussion of limitations and possible future directions. MATERIALS AND METHODS An in-depth search of the literature was undertaken to review articles concerned with the application of AI for the segmentation, multimodal image registration, virtual surgical planning (VSP), and three-dimensional (3D) printing steps of maxillofacial CASP workflows. RESULTS The existing AI models were trained to address individual steps of CASP, and no single intelligent workflow was found encompassing all steps of the planning process. Segmentation of dentomaxillofacial tissue from computed tomography (CT)/cone-beam CT imaging was the most commonly explored area applicable in a clinical setting. Nevertheless, a lack of generalizability was the main issue, as the majority of models were trained on data derived from a single device and imaging protocol and might not offer similar performance on other devices. In relation to registration, VSP, and 3D printing, the scarcity of adequately heterogeneous data limits the automation of these tasks. CONCLUSION The synergy between AI and CASP workflows has the potential to improve planning precision and efficacy. However, future studies with big data are needed before this emerging technology finds application in a real clinical setting. CLINICAL RELEVANCE The implementation of AI models in maxillofacial CASP workflows could minimize the surgeon's workload and increase the efficiency and consistency of the planning process, while enhancing patient-specific predictability.
35
Bonaldi L, Pretto A, Pirri C, Uccheddu F, Fontanella CG, Stecco C. Deep Learning-Based Medical Images Segmentation of Musculoskeletal Anatomical Structures: A Survey of Bottlenecks and Strategies. Bioengineering (Basel) 2023; 10:137. [PMID: 36829631] [PMCID: PMC9952222] [DOI: 10.3390/bioengineering10020137]
Abstract
By leveraging recent developments in artificial intelligence algorithms, several medical sectors have benefited from automatic tools that segment anatomical structures from bioimages. Segmentation of the musculoskeletal system is key for studying alterations in anatomical tissue and supporting medical interventions. The clinical use of such tools requires an understanding of the proper methods for interpreting data and evaluating their performance. The current systematic review aims to present the common bottlenecks in the analysis of musculoskeletal structures (e.g., small sample size, data inhomogeneity) and the strategies different authors have used to address them. A search was performed in the PUBMED database with the following keywords: deep learning, musculoskeletal system, segmentation. A total of 140 articles published up until February 2022 were obtained and analyzed according to the PRISMA framework in terms of anatomical structures, bioimaging techniques, pre-/post-processing operations, training/validation/testing subset creation, network architecture, loss functions, performance indicators, and so on. Several common trends emerged from this survey; however, the different methods need to be compared and discussed based on each specific case study (anatomical region, medical imaging acquisition setting, study population, etc.). These findings can be used to guide clinicians (as end users) to better understand the potential benefits and limitations of these tools.
Affiliation(s)
- Lorenza Bonaldi: Department of Civil, Environmental and Architectural Engineering, University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Andrea Pretto: Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Carmelo Pirri: Department of Neuroscience, University of Padova, Via A. Gabelli 65, 35121 Padova, Italy
- Francesca Uccheddu: Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy; Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Chiara Giulia Fontanella: Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy; Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy; Correspondence: Tel.: +39-049-8276754
- Carla Stecco: Department of Neuroscience, University of Padova, Via A. Gabelli 65, 35121 Padova, Italy; Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
|
36
|
Usman M, Rehman A, Saleem AM, Jawaid R, Byon SS, Kim SH, Lee BD, Heo MS, Shin YG. Dual-Stage Deeply Supervised Attention-Based Convolutional Neural Networks for Mandibular Canal Segmentation in CBCT Scans. Sensors (Basel) 2022; 22:9877. [PMID: 36560251] [PMCID: PMC9785834] [DOI: 10.3390/s22249877]
Abstract
Accurate segmentation of the mandibular canals in the lower jaw is important in dental implantology. Medical experts manually determine the implant position and dimensions from 3D CT images to avoid damaging the mandibular nerve inside the canal. In this paper, we propose a novel dual-stage deep learning-based scheme for automatic segmentation of the mandibular canal. In particular, we first enhance the CBCT scans by employing a novel histogram-based dynamic windowing scheme, which improves the visibility of the mandibular canals. After enhancement, we designed a 3D deeply supervised attention U-Net architecture for localizing the volumes of interest (VOIs) that contain the mandibular canals (i.e., left and right canals). Finally, we employed the multi-scale input residual U-Net (MSiR-UNet) architecture to accurately segment the mandibular canals within the VOIs. The proposed method has been rigorously evaluated on 500 CBCT scans from our dataset and 15 CBCT scans from a public dataset. The results demonstrate that our technique improves on the existing performance of mandibular canal segmentation to a clinically acceptable range. Moreover, it is robust to variation in CBCT field of view.
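The paper's histogram-based dynamic windowing scheme is its own contribution and is not reproduced here; as a generic stand-in, intensity windowing can be sketched with per-scan percentile bounds. The function name, percentile choices, and synthetic data below are all assumptions for illustration:

```python
import numpy as np

def percentile_window(volume: np.ndarray, lo_pct=5, hi_pct=99) -> np.ndarray:
    """Rescale voxel intensities into [0, 1] using per-scan percentile
    bounds, clipping outliers - a generic stand-in for adaptive windowing."""
    lo, hi = np.percentile(volume, [lo_pct, hi_pct])
    windowed = np.clip(volume, lo, hi)  # suppress extreme outlier voxels
    if hi <= lo:
        return np.zeros_like(volume, dtype=float)
    return (windowed - lo) / (hi - lo)

rng = np.random.default_rng(0)
scan = rng.normal(1000.0, 300.0, size=(8, 8, 8))  # synthetic CBCT-like volume
out = percentile_window(scan)
print(out.min(), out.max())  # 0.0 1.0
```

Deriving the bounds from each scan's own histogram, rather than from fixed Hounsfield-style limits, is what makes such a scheme "dynamic" across scanners and exposure settings.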
Affiliation(s)
- Muhammad Usman: Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co., Ltd., Seoul 06524, Republic of Korea; Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea
- Azka Rehman, Amal Muhammad Saleem, Shi-Sub Byon, Sung-Hyun Kim: Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co., Ltd., Seoul 06524, Republic of Korea
- Rabeea Jawaid, Byoung-Dai Lee: Division of AI and Computer Engineering, Kyonggi University, Suwon 16227, Republic of Korea
- Min-Suk Heo: Department of Oral and Maxillofacial Radiology, School of Dentistry, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea
- Yeong-Gil Shin: Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea
|
37
|
Ducret M, Mörch CM, Karteva T, Fisher J, Schwendicke F. Artificial intelligence for sustainable oral healthcare. J Dent 2022; 127:104344. [PMID: 36273625] [DOI: 10.1016/j.jdent.2022.104344]
Abstract
OBJECTIVES Oral health is grounded in the United Nations (UN) 2030 Agenda for Sustainable Development and its 17 Sustainable Development Goals (SDGs), in particular SDG 3 (Ensure healthy lives and promote well-being for all at all ages). The World Health Organization (WHO) Global Strategy on Oral Health calls for prioritizing environmentally sustainable and less invasive oral health care, and planetary health. Artificial intelligence (AI) has the potential to power the next generation of oral health services and care; however, its relationship with the broader UN and WHO concepts of sustainability remains poorly defined and articulated. We review the double-edged relationships between AI and oral health, to suggest actions that promote a sustainable deployment of AI for oral health. DATA Concepts regarding AI, sustainability, and sustainable development were identified and defined. Several double-edged relationships between AI and the SDGs were examined for the field of oral health. SOURCES Medline and international declarations of the WHO, the UN, and the World Dental Federation (FDI) were screened. STUDY SELECTION On the one hand, AI may reduce transportation and optimize care delivery (SDG 3 "Good Health and Well-Being", SDG 13 "Climate Action"), and increase accessibility of services and reduce inequality (SDG 10 "Reduced Inequalities", SDG 4 "Quality Education"). On the other hand, the deployment, implementation, and maintenance of AI require significant resources (SDG 12 "Responsible Consumption and Production"), and the costs of AI may aggravate inequalities. Also, AI may be biased, reinforcing inequalities (SDG 10) and discrimination (SDG 5), and may violate principles of security, privacy, and confidentiality of personal information (SDG 16). CONCLUSIONS Systematic assessment of the positive impact and adverse effects of AI on sustainable oral health may help to foster the former and curb the latter based on evidence.
CLINICAL SIGNIFICANCE If sustainability imperatives are actively taken into consideration, the community of oral health professionals should then employ AI for improving effectiveness, efficiency, and safety of oral healthcare; strengthen oral health surveillance; foster education and accessibility of care; ensure fairness, transparency and governance of AI for oral health; develop legislation and infrastructure to expand the use of digital health technologies including AI.
Affiliation(s)
- Maxime Ducret: Institut de Biologie et Chimie des Protéines, Laboratoire de Biologie Tissulaire et Ingénierie Thérapeutique, UMR 5305 CNRS, Université Lyon 1, Lyon, France; Faculté d'Odontologie, Université Lyon 1, Lyon, France; Hospices Civils de Lyon, Centre de soins Dentaires, Lyon, France
- Carl-Maria Mörch: FARI - AI for the Common Good Institute, Free University of Brussels, Brussels, Belgium
- Teodora Karteva: Department of Operative Dentistry and Endodontics, Medical University of Plovdiv, Plovdiv, Bulgaria
- Julian Fisher, Falk Schwendicke: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin, Berlin, Germany
|
38
|
Baseri Saadi S, Moreno-Rabié C, van den Wyngaert T, Jacobs R. Convolutional neural network for automated classification of osteonecrosis and related mandibular trabecular patterns. Bone Rep 2022; 17:101632. [DOI: 10.1016/j.bonr.2022.101632]
39
Huang Z, Zheng H, Huang J, Yang Y, Wu Y, Ge L, Wang L. The Construction and Evaluation of a Multi-Task Convolutional Neural Network for a Cone-Beam Computed-Tomography-Based Assessment of Implant Stability. Diagnostics (Basel) 2022; 12:2673. [PMID: 36359516] [PMCID: PMC9689694] [DOI: 10.3390/diagnostics12112673]
Abstract
Objectives: Assessing implant stability is integral to dental implant therapy. This study aimed to construct a multi-task cascade convolutional neural network to evaluate implant stability using cone-beam computed tomography (CBCT). Methods: A dataset of 779 implant coronal section images was obtained from CBCT scans, and matching clinical information was used for the training and test datasets. We developed a multi-task cascade network based on CBCT to assess implant stability. We used the MobileNetV2-DeepLabV3+ semantic segmentation network, combined with an image-processing algorithm incorporating prior knowledge, to generate the volume of interest (VOI) that was eventually used for ResNet-50 classification of implant stability. The performance of the multi-task cascade network was evaluated on a test set by comparison against the implant stability quotient (ISQ) measured using an Osstell device. Results: The cascade network established in this study showed good predictive performance for implant stability classification. The binary, ternary, and quaternary ISQ classification test-set accuracies were 96.13%, 95.33%, and 92.90%, with mean precisions of 96.20%, 95.33%, and 93.71%, respectively. In addition, the cascade network evaluated each implant's stability in only 3.76 s, indicating high efficiency. Conclusions: To our knowledge, this is the first study to present a CBCT-based deep learning approach to assess implant stability. The multi-task cascade network accomplishes a series of tasks related to implant denture segmentation, VOI extraction, and implant stability classification, and shows good concordance with the ISQ.
Collapse
Affiliation(s)
- Zelun Huang
- Guangzhou Key Laboratory of Basic and Applied Research of Oral Regenerative Medicine, Guangdong Engineering Research Center of Oral Restoration and Reconstruction, Affiliated Stomatology Hospital of Guangzhou Medical University, Guangzhou 510182, China
- Haoran Zheng
- Department of Chemical & Materials Engineering, University of Auckland, Auckland 1010, New Zealand
- Junqiang Huang
- Department of Stomatology, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China
- Yang Yang
- Guangzhou Key Laboratory of Basic and Applied Research of Oral Regenerative Medicine, Guangdong Engineering Research Center of Oral Restoration and Reconstruction, Affiliated Stomatology Hospital of Guangzhou Medical University, Guangzhou 510182, China
- Yupeng Wu
- Guangzhou Key Laboratory of Basic and Applied Research of Oral Regenerative Medicine, Guangdong Engineering Research Center of Oral Restoration and Reconstruction, Affiliated Stomatology Hospital of Guangzhou Medical University, Guangzhou 510182, China
- Linhu Ge
- Guangzhou Key Laboratory of Basic and Applied Research of Oral Regenerative Medicine, Guangdong Engineering Research Center of Oral Restoration and Reconstruction, Affiliated Stomatology Hospital of Guangzhou Medical University, Guangzhou 510182, China
- Liping Wang
- Guangzhou Key Laboratory of Basic and Applied Research of Oral Regenerative Medicine, Guangdong Engineering Research Center of Oral Restoration and Reconstruction, Affiliated Stomatology Hospital of Guangzhou Medical University, Guangzhou 510182, China
|
40
|
Comparison of deep learning segmentation and multigrader-annotated mandibular canals of multicenter CBCT scans. Sci Rep 2022; 12:18598. [PMID: 36329051 PMCID: PMC9633839 DOI: 10.1038/s41598-022-20605-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Accepted: 09/15/2022] [Indexed: 11/06/2022] Open
Abstract
A deep learning approach has been demonstrated to automatically segment the bilateral mandibular canals from CBCT scans, yet systematic studies of its clinical and technical validation are scarce. To validate the mandibular canal localization accuracy of a deep learning system (DLS), we trained it with 982 CBCT scans and evaluated it using 150 scans from five scanners, drawn from clinical-workflow patients of European and Southeast Asian institutes and annotated by four radiologists. The interobserver variability was compared to the variability between the DLS and the radiologists. In addition, the generalisation of the DLS to CBCT scans from scanners not used in the training data was examined to evaluate its out-of-distribution performance. The DLS showed statistically significantly (p < 0.001) lower variability relative to the radiologists (0.74 mm) than the interobserver variability (0.77 mm), and generalised to new devices with 0.63 mm, 0.67 mm, and 0.87 mm (p < 0.001). Against the radiologists' consensus segmentation, used as a gold standard, the DLS showed a symmetric mean curve distance of 0.39 mm, which was statistically significantly lower (p < 0.001) than those of the individual radiologists, with values of 0.62 mm, 0.55 mm, 0.47 mm, and 0.42 mm. These results show promise towards integration of the DLS into the clinical workflow to reduce time-consuming and labour-intensive manual tasks in implantology.
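The symmetric mean curve distance used here to score canal segmentations can be illustrated on sampled centerline points. A minimal NumPy sketch, assuming the metric averages closest-point distances in both directions between the two curves (the paper's exact definition may differ in detail):

```python
import numpy as np

def symmetric_mean_curve_distance(curve_a, curve_b):
    """Mean closest-point distance from A to B, averaged with B to A.

    Illustrative definition on discretely sampled curves; curve_a and
    curve_b are (N, d) and (M, d) arrays of point coordinates in mm.
    """
    # Pairwise Euclidean distances between all sampled points of both curves
    d = np.linalg.norm(curve_a[:, None, :] - curve_b[None, :, :], axis=-1)
    # Average the two directed mean closest-point distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Two parallel polylines 1 mm apart
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
dist = symmetric_mean_curve_distance(a, b)
```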
|
41
|
Bonfanti-Gris M, Garcia-Cañas A, Alonso-Calvo R, Salido Rodriguez-Manzaneque MP, Pradies Ramiro G. Evaluation of an Artificial Intelligence web-based software to detect and classify dental structures and treatments in panoramic radiographs. J Dent 2022; 126:104301. [PMID: 36150430 DOI: 10.1016/j.jdent.2022.104301] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2022] [Revised: 09/13/2022] [Accepted: 09/15/2022] [Indexed: 11/18/2022] Open
Abstract
OBJECTIVES To evaluate the diagnostic reliability of a web-based Artificial Intelligence program in the detection and classification of dental structures and treatments present on panoramic radiographs. METHODS A total of 300 orthopantomographies (OPG) were randomly selected for this study. First, the images were visually evaluated by two calibrated operators with radiodiagnostic experience who, after consensus, established the "ground truth". The operators' findings on the radiographs were collected and classified as follows: metal restorations (MR), resin-based restorations (RR), endodontic treatment (ET), crowns (C), and implants (I). The orthopantomographies were then anonymously uploaded and automatically analyzed by the web-based software (Denti.Ai). Results were then stored, and a statistical analysis was performed by comparing them with the ground truth in terms of Sensitivity (S), Specificity (E), Positive Predictive Value (PPV), and Negative Predictive Value (NPV), later represented as the area under the Receiver Operating Characteristic (ROC) curve (AUC). RESULTS Diagnostic metrics obtained for each study variable were as follows: (MR) S=85.48%, E=87.50%, PPV=82.8%, NPV=42.51%, AUC=0.869; (RR) S=41.11%, E=93.30%, PPV=90.24%, NPV=87.50%, AUC=0.672; (ET) S=91.9%, E=100%, PPV=100%, NPV=94.62%, AUC=0.960; (C) S=89.53%, E=95.79%, PPV=89.53%, NPV=95.79%, AUC=0.927; (I) S, E, PPV, NPV=100%, AUC=1.000. CONCLUSIONS Findings suggest that the web-based Artificial Intelligence software performs well in the detection of implants, crowns, metal fillings, and endodontic treatments, but is less accurate in the classification of dental structures and resin-based restorations. CLINICAL SIGNIFICANCE General diagnostic and treatment decisions using orthopantomographies can be improved by using web-based artificial intelligence tools, reducing subjectivity and oversights by the clinician.
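The reported S, E, PPV, and NPV values all derive from a per-finding 2×2 confusion matrix against the ground truth. A minimal sketch of those formulas with hypothetical counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from 2x2 confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # S: true positives among actual positives
        "specificity": tn / (tn + fp),   # E: true negatives among actual negatives
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for one finding type, purely for illustration
m = diagnostic_metrics(tp=90, fp=20, fn=10, tn=80)
```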
Affiliation(s)
- Monica Bonfanti-Gris
- Department of Conservative and Prosthetic Dentistry, Faculty of Dentistry, Complutense University of Madrid. Plaza Ramón y Cajal, s/n. 28040 Madrid, Spain
- Angel Garcia-Cañas
- Department of Conservative and Prosthetic Dentistry, Faculty of Dentistry, Complutense University of Madrid. Plaza Ramón y Cajal, s/n. 28040 Madrid, Spain
- Raul Alonso-Calvo
- Department of Informatics Systems and Languages, Faculty of Software Engineering, Polytechnic University of Madrid. Campus Montegancedo s/n, Boadilla del Monte. 28660 Madrid, Spain
- Maria Paz Salido Rodriguez-Manzaneque
- Department of Conservative and Prosthetic Dentistry, Faculty of Dentistry, Complutense University of Madrid. Plaza Ramón y Cajal, s/n. 28040 Madrid, Spain
- Guillermo Pradies Ramiro
- Department of Conservative and Prosthetic Dentistry, Faculty of Dentistry, Complutense University of Madrid. Plaza Ramón y Cajal, s/n. 28040 Madrid, Spain
|
42
|
Kivovics M, Pénzes D, Moldvai J, Mijiritsky E, Németh O. A custom-made removable appliance for the decompression of odontogenic cysts fabricated using a digital workflow. J Dent 2022; 126:104295. [PMID: 36116543 DOI: 10.1016/j.jdent.2022.104295] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Revised: 08/29/2022] [Accepted: 09/14/2022] [Indexed: 11/25/2022] Open
Abstract
OBJECTIVES This case series aimed to assess the feasibility of a custom-made decompression appliance fabricated using a digital workflow to decompress odontogenic cysts. Additionally, the treated cysts were assessed for volumetric changes. METHODS A three-dimensional (3D) reconstruction software (CoDiagnostiX version 10.4) was used to obtain the master cast STL (Standard Tessellation Language) file by placing a customized virtual implant to create a recess for the tube of the decompression device. The decompression appliance was planned using Dental Wings Open Software (DWOS). Following rapid prototyping, the tube of the appliance was perforated using round burs. In cases where the appliances were designed to replace teeth, denture teeth were added using the conventional workflow. The appliances were delivered on the day of the cystostomy. Following decompression, cyst enucleation was performed. Cyst volume was assessed by manual segmentation of pre- and post-operative cone-beam computed tomography (CBCT) reconstructions, using slice-by-slice boundary drawing with a scissors tool in the 3DSlicer 4.10.2 software. The percentage of volume reduction was calculated as follows: volume reduction/pre-operative volume × 100. RESULTS Six odontogenic cysts in six patients (5 male, 1 female; age 40 years, range: 15-49 years), with pre- and post-operative cyst volumes of 5597 ± 3983 mm³ and 2330 ± 1860 mm³, respectively (p < 0.05), were treated. The percentage of volume reduction was 58.84 ± 13.22% following a 6-month decompression period. CONCLUSIONS The digital workflow described in this case series enables the delivery of decompression appliances at the time of cystostomy, effectively reducing the volume of odontogenic cysts. The resulting bone formation established a safe zone around the anatomical landmarks; therefore, complications involving these landmarks can be avoided during enucleation surgery.
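The volume-reduction formula quoted in the methods is straightforward to apply. A minimal sketch; note that applying it to the reported mean volumes gives roughly 58.4%, close to but not identical to the 58.84% figure, which is the mean of per-patient reductions:

```python
def percent_volume_reduction(pre_volume_mm3, post_volume_mm3):
    """Percentage of volume reduction: volume reduction / pre-operative volume x 100."""
    return (pre_volume_mm3 - post_volume_mm3) / pre_volume_mm3 * 100

# Applied to the reported mean pre- and post-operative volumes (mm^3)
reduction = percent_volume_reduction(5597, 2330)
```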
Affiliation(s)
- Márton Kivovics
- Department of Community Dentistry, Semmelweis University, Szentkirályi utca 40. 1088 Budapest, Hungary.
- Dorottya Pénzes
- Department of Community Dentistry, Semmelweis University, Szentkirályi utca 40. 1088 Budapest, Hungary.
- Júlia Moldvai
- Department of Community Dentistry, Semmelweis University, Szentkirályi utca 40. 1088 Budapest, Hungary.
- Eitan Mijiritsky
- Department of Otolaryngology, Head and Neck Surgery and Maxillofacial Surgery, Tel-Aviv Sourasky Medical Center, Sackler School of Medicine, Tel-Aviv University, Tel Aviv 64239, Israel; Goldschleger School of Dental Medicine, Sackler School of Medicine, Tel-Aviv University, Tel Aviv 39040, Israel.
- Orsolya Németh
- Department of Community Dentistry, Semmelweis University, Szentkirályi utca 40. 1088 Budapest, Hungary.
|
43
|
Automated detection and labelling of teeth and small edentulous regions on Cone-Beam Computed Tomography using Convolutional Neural Networks. J Dent 2022; 122:104139. [DOI: 10.1016/j.jdent.2022.104139] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Revised: 04/04/2022] [Accepted: 04/20/2022] [Indexed: 12/30/2022] Open
|