1
Semerci ZM, Yardımcı S. Empowering Modern Dentistry: The Impact of Artificial Intelligence on Patient Care and Clinical Decision Making. Diagnostics (Basel) 2024; 14:1260. PMID: 38928675; PMCID: PMC11202919; DOI: 10.3390/diagnostics14121260.
Abstract
Advancements in artificial intelligence (AI) are poised to catalyze a transformative shift across diverse dental disciplines including endodontics, oral radiology, orthodontics, pediatric dentistry, periodontology, prosthodontics, and restorative dentistry. This narrative review delineates the burgeoning role of AI in enhancing diagnostic precision, streamlining treatment planning, and potentially unveiling innovative therapeutic modalities, thereby elevating patient care standards. Recent analyses corroborate the superiority of AI-assisted methodologies over conventional techniques, affirming their capacity for personalization, accuracy, and efficiency in dental care. Central to these AI applications are convolutional neural networks and deep learning models, which have demonstrated efficacy in diagnosis, prognosis, and therapeutic decision making, in some instances surpassing traditional methods in complex cases. Despite these advancements, the integration of AI into clinical practice is accompanied by challenges, such as data security concerns, the demand for transparency in AI-generated outcomes, and the imperative for ongoing validation to establish the reliability and applicability of AI tools. This review underscores the prospective benefits of AI in dental practice, envisioning AI not as a replacement for dental professionals but as an adjunctive tool that fortifies the dental profession. While AI heralds improvements in diagnostics, treatment planning, and personalized care, ethical and practical considerations must be meticulously navigated to ensure responsible development of AI in dentistry.
Affiliation(s)
- Zeliha Merve Semerci
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Akdeniz University, Antalya 07070, Turkey
2
Elgarba BM, Fontenele RC, Tarce M, Jacobs R. Artificial intelligence serving pre-surgical digital implant planning: A scoping review. J Dent 2024; 143:104862. PMID: 38336018; DOI: 10.1016/j.jdent.2024.104862.
Abstract
OBJECTIVES To conduct a scoping review focusing on artificial intelligence (AI) applications in presurgical dental implant planning. Additionally, to assess the automation degree of clinically available pre-surgical implant planning software. DATA AND SOURCES A systematic electronic literature search was performed in five databases (PubMed, Embase, Web of Science, Cochrane Library, and Scopus), along with exploring gray literature web-based resources until November 2023. English-language studies on AI-driven tools for digital implant planning were included based on an independent evaluation by two reviewers. An assessment of automation steps in dental implant planning software available on the market up to November 2023 was also performed. STUDY SELECTION AND RESULTS From an initial 1,732 studies, 47 met eligibility criteria. Within this subset, 39 studies focused on AI networks for anatomical landmark-based segmentation, creating virtual patients. Eight studies were dedicated to AI networks for virtual implant placement. Additionally, a total of 12 commonly available implant planning software applications were identified and assessed for their level of automation in pre-surgical digital implant workflows. Notably, only six of these featured at least one fully automated step in the planning software, with none possessing a fully automated implant planning protocol. CONCLUSIONS AI plays a crucial role in achieving accurate, time-efficient, and consistent segmentation of anatomical landmarks, serving the process of virtual patient creation. Additionally, currently available systems for virtual implant placement demonstrate different degrees of automation. It is important to highlight that, as of now, full automation of this process has not been documented nor scientifically validated. CLINICAL SIGNIFICANCE Scientific and clinical validation of AI applications for presurgical dental implant planning is currently scarce. The present review allows the clinician to identify AI-based automation in presurgical dental implant planning and assess the potential underlying scientific validation.
Affiliation(s)
- Bahaaeldeen M Elgarba
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium & Department of Prosthodontics, Faculty of Dentistry, Tanta University, 31511 Tanta, Egypt
- Rocharles Cavalcante Fontenele
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium
- Mihai Tarce
- Division of Periodontology & Implant Dentistry, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China & Periodontology and Oral Microbiology, Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, Leuven, Belgium
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium & Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
3
Kolenbrander ID, Maspero M, Hendriksen AA, Pollitt R, van der Voort van Zyp JRN, van den Berg CAT, Pluim JPW, van Eijnatten MAJM. Deep-learning-based joint rigid and deformable contour propagation for magnetic resonance imaging-guided prostate radiotherapy. Med Phys 2024; 51:2367-2377. PMID: 38408022; DOI: 10.1002/mp.17000.
Abstract
BACKGROUND Deep learning-based unsupervised image registration has recently been proposed, promising fast registration. However, it has yet to be adopted in the online adaptive magnetic resonance imaging-guided radiotherapy (MRgRT) workflow. PURPOSE In this paper, we design an unsupervised, joint rigid, and deformable registration framework for contour propagation in MRgRT of prostate cancer. METHODS Three-dimensional pelvic T2-weighted MRIs of 143 prostate cancer patients undergoing radiotherapy were collected and divided into 110, 13, and 20 patients for training, validation, and testing. We designed a framework using convolutional neural networks (CNNs) for rigid and deformable registration. We selected the deformable registration network architecture among U-Net, MS-D Net, and LapIRN and optimized the training strategy (end-to-end vs. sequential). The framework was compared against an iterative baseline registration. We evaluated registration accuracy (the Dice and Hausdorff distance of the prostate and bladder contours), structural similarity index, and folding percentage to compare the methods. We also evaluated the framework's robustness to rigid and elastic deformations and bias field perturbations. RESULTS The end-to-end trained framework comprising LapIRN for the deformable component achieved the best median (interquartile range) prostate and bladder Dice of 0.89 (0.85-0.91) and 0.86 (0.80-0.91), respectively. This accuracy was comparable to the iterative baseline registration: prostate and bladder Dice of 0.91 (0.88-0.93) and 0.86 (0.80-0.92). The best models complete rigid and deformable registration in 0.002 (0.0005) and 0.74 (0.43) s (Nvidia Tesla V100-PCIe 32 GB GPU), respectively. We found that the models are robust to translations up to 52 mm, rotations up to 15°, elastic deformations up to 40 mm, and bias fields. CONCLUSIONS Our proposed unsupervised, deep learning-based registration framework can perform rigid and deformable registration in less than a second with contour propagation accuracy comparable with iterative registration.
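A note on the evaluation metrics above: the Dice overlap and Hausdorff distance between propagated and reference contours can be computed directly from binary masks. The sketch below (NumPy/SciPy) is illustrative only; the voxel spacing, the mask-based (rather than surface-based) Hausdorff definition, and the function names are assumptions, not code from the cited study.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice overlap between two boolean masks (e.g., propagated vs. reference prostate contour)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric, mask-based Hausdorff distance in mm via Euclidean distance transforms."""
    a, b = a.astype(bool), b.astype(bool)
    dist_to_a = distance_transform_edt(~a, sampling=spacing)  # distance of every voxel to mask a
    dist_to_b = distance_transform_edt(~b, sampling=spacing)  # distance of every voxel to mask b
    return max(dist_to_a[b].max(), dist_to_b[a].max())        # assumes both masks are non-empty
```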
Affiliation(s)
- Iris D Kolenbrander
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Eindhoven Artificial Intelligence Systems Institute, Eindhoven University of Technology, Eindhoven, The Netherlands
- Matteo Maspero
- Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands
- Allard A Hendriksen
- Computational Imaging, Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
- Ryan Pollitt
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Cornelis A T van den Berg
- Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands
- Josien P W Pluim
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Eindhoven Artificial Intelligence Systems Institute, Eindhoven University of Technology, Eindhoven, The Netherlands
- Maureen A J M van Eijnatten
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Eindhoven Artificial Intelligence Systems Institute, Eindhoven University of Technology, Eindhoven, The Netherlands
4
Chen X, Ma N, Xu T, Xu C. Deep learning-based tooth segmentation methods in medical imaging: A review. Proc Inst Mech Eng H 2024; 238:115-131. PMID: 38314788; DOI: 10.1177/09544119231217603.
Abstract
Deep learning approaches for tooth segmentation employ convolutional neural networks (CNNs) or Transformers to derive tooth feature maps from extensive training datasets. Tooth segmentation serves as a critical prerequisite for clinical dental analysis and surgical procedures, enabling dentists to comprehensively assess oral conditions and subsequently diagnose pathologies. Over the past decade, deep learning has experienced significant advancements, with researchers introducing efficient models such as U-Net, Mask R-CNN, and Segmentation Transformer (SETR). Building upon these frameworks, scholars have proposed numerous enhancement and optimization modules to attain superior tooth segmentation performance. This paper discusses the deep learning methods of tooth segmentation on dental panoramic radiographs (DPRs), cone-beam computed tomography (CBCT) images, intraoral scan (IOS) models, and others. Finally, we outline performance-enhancing techniques and suggest potential avenues for ongoing research. Numerous challenges remain, including data annotation and model generalization limitations. This paper offers insights for future tooth segmentation studies, potentially facilitating broader clinical adoption.
Affiliation(s)
- Xiaokang Chen
- Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing, China
- Nan Ma
- Faculty of Information and Technology, Beijing University of Technology, Beijing, China
- Engineering Research Center of Intelligence Perception and Autonomous Control, Ministry of Education, Beijing University of Technology, Beijing, China
- Tongkai Xu
- Department of General Dentistry II, Peking University School and Hospital of Stomatology, Beijing, China
- Cheng Xu
- Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing, China
5
Huang J, Farpour N, Yang BJ, Mupparapu M, Lure F, Li J, Yan H, Setzer FC. Uncertainty-based Active Learning by Bayesian U-Net for Multi-label Cone-beam CT Segmentation. J Endod 2024; 50:220-228. PMID: 37979653; PMCID: PMC10842728; DOI: 10.1016/j.joen.2023.11.002.
Abstract
INTRODUCTION Training of Artificial Intelligence (AI) for biomedical image analysis depends on large annotated datasets. This study assessed the efficacy of Active Learning (AL) strategies training AI models for accurate multilabel segmentation and detection of periapical lesions in cone-beam CTs (CBCTs) using a limited dataset. METHODS Limited field-of-view CBCT volumes (n = 20) were segmented by clinicians (clinician segmentation [CS]) and Bayesian U-Net-based AL strategies. Two AL functions, Bayesian Active Learning by Disagreement [BALD] and Max_Entropy [ME], were used for multilabel segmentation ("Lesion"-"Tooth Structure"-"Bone"-"Restorative Materials"-"Background"), and compared to a non-AL benchmark Bayesian U-Net function. The training-to-testing set ratio was 4:1. Comparisons between the AL and Bayesian U-Net functions versus CS were made by evaluating the segmentation accuracy with the Dice indices and lesion detection accuracy. The Kruskal-Wallis test was used to assess statistically significant differences. RESULTS The final training set contained 26 images. After 8 AL iterations, lesion detection sensitivity was 84.0% for BALD, 76.0% for ME, and 32.0% for Bayesian U-Net, which was significantly different (P < .0001; H = 16.989). The mean Dice index for all labels was 0.680 ± 0.155 for Bayesian U-Net and 0.703 ± 0.166 for ME after eight AL iterations, compared to 0.601 ± 0.267 for Bayesian U-Net over the mean of all iterations. The Dice index for "Lesion" was 0.504 for BALD and 0.501 for ME after 8 AL iterations, and at a maximum 0.288 for Bayesian U-Net. CONCLUSIONS Both AL strategies based on uncertainty quantification from Bayesian U-Net BALD, and ME, provided improved segmentation and lesion detection accuracy for CBCTs. AL may contribute to reducing extensive labeling needs for training AI algorithms for biomedical image analysis in dentistry.
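The two acquisition functions named above (BALD and Max_Entropy) score candidate volumes by the uncertainty of a Monte Carlo dropout ("Bayesian") U-Net. A minimal sketch of how these scores are commonly derived from T stochastic forward passes is shown below; the array layout and the image-level averaging are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def acquisition_scores(probs, eps=1e-12):
    """probs: array of shape (T, C, ...) with softmax outputs of T MC-dropout passes over C labels.
    Returns per-voxel Max_Entropy and BALD maps; their mean can rank images for the next AL round."""
    mean_p = probs.mean(axis=0)                                            # predictive distribution
    max_entropy = -(mean_p * np.log(mean_p + eps)).sum(axis=0)             # entropy of the mean
    expected_entropy = -(probs * np.log(probs + eps)).sum(axis=1).mean(axis=0)
    bald = max_entropy - expected_entropy                                  # mutual information
    return max_entropy, bald
```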
Affiliation(s)
- Jiayu Huang
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, Arizona
- Nazbanoo Farpour
- Department of Endodontics, University of Pennsylvania, Philadelphia, Pennsylvania
- Bingjian J Yang
- Department of Endodontics, University of Pennsylvania, Philadelphia, Pennsylvania
- Muralidhar Mupparapu
- Department of Oral Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Fleming Lure
- MS Technologies Corporation, Rockville, Maryland
- Jing Li
- School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia
- Hao Yan
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, Arizona
- Frank C Setzer
- Department of Endodontics, University of Pennsylvania, Philadelphia, Pennsylvania
6
Elgarba BM, Van Aelst S, Swaity A, Morgan N, Shujaat S, Jacobs R. Deep learning-based segmentation of dental implants on cone-beam computed tomography images: A validation study. J Dent 2023; 137:104639. PMID: 37517787; DOI: 10.1016/j.jdent.2023.104639.
Abstract
OBJECTIVES To train and validate a cloud-based convolutional neural network (CNN) model for automated segmentation (AS) of dental implant and attached prosthetic crown on cone-beam computed tomography (CBCT) images. METHODS A total dataset of 280 maxillomandibular jawbone CBCT scans was acquired from patients who underwent implant placement with or without coronal restoration. The dataset was randomly divided into three subsets: training set (n = 225), validation set (n = 25) and testing set (n = 30). A CNN model was developed and trained using expert-based semi-automated segmentation (SS) of the implant and attached prosthetic crown as the ground truth. The performance of AS was assessed by comparing with SS and manually corrected automated segmentation referred to as refined-automated segmentation (R-AS). Evaluation metrics included timing, voxel-wise comparison based on confusion matrix and 3D surface differences. RESULTS The average time required for AS was 60 times faster (<30 s) than the SS approach. The CNN model was highly effective in segmenting dental implants both with and without coronal restoration, achieving a high dice similarity coefficient score of 0.92±0.02 and 0.91±0.03, respectively. Moreover, the root mean square deviation values were also found to be low (implant only: 0.08±0.09 mm, implant+restoration: 0.11±0.07 mm) when compared with R-AS, implying high AI segmentation accuracy. CONCLUSIONS The proposed cloud-based deep learning tool demonstrated high performance and time-efficient segmentation of implants on CBCT images. CLINICAL SIGNIFICANCE AI-based segmentation of implants and prosthetic crowns can minimize the negative impact of artifacts and enhance the generalizability of creating dental virtual models. Furthermore, incorporating the suggested tool into existing CNN models specialized for segmenting anatomical structures can improve pre-surgical planning for implants and post-operative assessment of peri‑implant bone levels.
Affiliation(s)
- Bahaaeldeen M Elgarba
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Department of Prosthodontics, Faculty of Dentistry, Tanta University, 31511 Tanta, Egypt
- Stijn Van Aelst
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Abdullah Swaity
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Prosthodontic Department, King Hussein Medical Center, Royal Medical Services, Amman, Jordan
- Nermin Morgan
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura, Egypt
- Sohaib Shujaat
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, Kingdom of Saudi Arabia
- Reinhilde Jacobs
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
7
Orhan K, Aktuna Belgin C, Manulis D, Golitsyna M, Bayrak S, Aksoy S, Sanders A, Önder M, Ezhov M, Shamshiev M, Gusarev M, Shlenskii V. Determining the reliability of diagnosis and treatment using artificial intelligence software with panoramic radiographs. Imaging Sci Dent 2023; 53:199-208. PMID: 37799743; PMCID: PMC10548159; DOI: 10.5624/isd.20230109.
Abstract
Purpose The objective of this study was to evaluate the accuracy and effectiveness of an artificial intelligence (AI) program in identifying dental conditions using panoramic radiographs (PRs), as well as to assess the appropriateness of its treatment recommendations. Materials and Methods PRs from 100 patients (representing 4497 teeth) with known clinical examination findings were randomly selected from a university database. Three dentomaxillofacial radiologists and the Diagnocat AI software evaluated these PRs. The evaluations were focused on various dental conditions and treatments, including canal filling, caries, cast post and core, dental calculus, fillings, furcation lesions, implants, lack of interproximal tooth contact, open margins, overhangs, periapical lesions, periodontal bone loss, short fillings, voids in root fillings, overfillings, pontics, root fragments, impacted teeth, artificial crowns, missing teeth, and healthy teeth. Results The AI demonstrated almost perfect agreement (exceeding 0.81) in most of the assessments when compared to the ground truth. The sensitivity was very high (above 0.8) for the evaluation of healthy teeth, artificial crowns, dental calculus, missing teeth, fillings, lack of interproximal contact, periodontal bone loss, and implants. However, the sensitivity was low for the assessment of caries, periapical lesions, pontics, voids in the root canal, and overhangs. Conclusion Despite the limitations of this study, the synthesized data suggest that AI-based decision support systems can serve as a valuable tool in detecting dental conditions when used with PRs for clinical dental applications.
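"Almost perfect agreement (exceeding 0.81)" follows the usual Landis-Koch interpretation of the kappa statistic. As a purely illustrative sketch (not the study's own statistics pipeline), per-condition agreement and sensitivity could be computed from tooth-level AI and ground-truth labels as follows:

```python
import numpy as np

def kappa_and_sensitivity(ai, truth):
    """ai, truth: boolean arrays over teeth for one condition (e.g., 'filling present')."""
    ai, truth = ai.astype(bool), truth.astype(bool)
    tp = np.sum(ai & truth); tn = np.sum(~ai & ~truth)
    fp = np.sum(ai & ~truth); fn = np.sum(~ai & truth)
    n = tp + tn + fp + fn
    p_observed = (tp + tn) / n                                          # raw agreement
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (p_observed - p_chance) / (1 - p_chance) if p_chance < 1 else 1.0
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    return kappa, sensitivity
```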
Affiliation(s)
- Kaan Orhan
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
- Ceren Aktuna Belgin
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Hatay Mustafa Kemal University, Hatay, Turkey
- Seval Bayrak
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Abant İzzet Baysal University, Bolu, Turkey
- Secil Aksoy
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Merve Önder
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
8
Yang S, Lee SJ, Yoo JY, Kang SR, Kim JM, Kim JE, Huh KH, Lee SS, Heo MS, Yang HJ, Yi WJ. V2-Net: An Attention-guided Volumetric Regression Network for Tooth Landmark Localization on CT Images with Metal Artifacts. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. PMID: 38083381; DOI: 10.1109/embc40787.2023.10340891.
Abstract
For virtual surgical planning in orthognathic surgery, marking tooth landmarks on CT images is an important procedure. However, the manual localization procedure of tooth landmarks is time-consuming, labor-intensive, and requires expert knowledge. Also, direct and automatic tooth landmark localization on CT images is difficult because of the lower resolution and metal artifacts of dental images. The purpose of this study was to propose an attention-guided volumetric regression network (V2-Net) for accurate tooth landmark localization on CT images with metal artifacts and lower resolution. V2-Net has an attention-guided network architecture using a coarse-to-fine-attention mechanism that guided the 3D probability distribution of tooth landmark locations within anatomical structures from the coarse V-Net to the fine V-Net for more focus on tooth landmarks. In addition, we combined attention-guided learning and a 3D attention module with optimal Pseudo Huber loss to improve the localization accuracy. Our results show that the proposed method achieves state-of-the-art accuracy of 0.85 ± 0.40 mm in terms of mean radial error, outperforming previous studies. In ablation studies, we observed that the proposed attention-guided learning and a 3D attention module improved the accuracy of tooth landmark localization in CT images of lower resolution and metal artifacts. Furthermore, our method achieved 97.92% in terms of the success detection rate within the clinically accepted accuracy range of 2.0 mm.
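The two endpoint metrics quoted here, mean radial error (MRE) and success detection rate (SDR) within 2.0 mm, are simple functions of the Euclidean distance between predicted and reference landmark coordinates. The sketch below assumes landmark coordinates are already expressed in millimetres; it is not the authors' evaluation code.

```python
import numpy as np

def mre_and_sdr(pred_mm, gt_mm, tol_mm=2.0):
    """pred_mm, gt_mm: arrays of shape (N, 3) with landmark coordinates in millimetres."""
    radial_errors = np.linalg.norm(pred_mm - gt_mm, axis=1)   # per-landmark Euclidean error
    mre = radial_errors.mean()                                # mean radial error (mm)
    sdr = 100.0 * (radial_errors <= tol_mm).mean()            # success detection rate (%)
    return mre, sdr
```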
9
Synergy between artificial intelligence and precision medicine for computer-assisted oral and maxillofacial surgical planning. Clin Oral Investig 2023; 27:897-906. PMID: 36323803; DOI: 10.1007/s00784-022-04706-4.
Abstract
OBJECTIVES The aim of this review was to investigate the application of artificial intelligence (AI) in maxillofacial computer-assisted surgical planning (CASP) workflows with the discussion of limitations and possible future directions. MATERIALS AND METHODS An in-depth search of the literature was undertaken to review articles concerned with the application of AI for segmentation, multimodal image registration, virtual surgical planning (VSP), and three-dimensional (3D) printing steps of the maxillofacial CASP workflows. RESULTS The existing AI models were trained to address individual steps of CASP, and no single intelligent workflow was found encompassing all steps of the planning process. Segmentation of dentomaxillofacial tissue from computed tomography (CT)/cone-beam CT imaging was the most commonly explored area which could be applicable in a clinical setting. Nevertheless, a lack of generalizability was the main issue, as the majority of models were trained with the data derived from a single device and imaging protocol which might not offer similar performance when considering other devices. In relation to registration, VSP and 3D printing, the presence of inadequate heterogeneous data limits the automatization of these tasks. CONCLUSION The synergy between AI and CASP workflows has the potential to improve the planning precision and efficacy. However, there is a need for future studies with big data before the emergent technology finds application in a real clinical setting. CLINICAL RELEVANCE The implementation of AI models in maxillofacial CASP workflows could minimize a surgeon's workload and increase efficiency and consistency of the planning process, meanwhile enhancing the patient-specific predictability.
10
Machine Learning in Dentistry: A Scoping Review. J Clin Med 2023; 12:937. PMID: 36769585; PMCID: PMC9918184; DOI: 10.3390/jcm12030937.
Abstract
Machine learning (ML) is being increasingly employed in dental research and application. We aimed to systematically compile studies using ML in dentistry and assess their methodological quality, including the risk of bias and reporting standards. We evaluated studies employing ML in dentistry published from 1 January 2015 to 31 May 2021 on MEDLINE, IEEE Xplore, and arXiv. We assessed publication trends and the distribution of ML tasks (classification, object detection, semantic segmentation, instance segmentation, and generation) in different clinical fields. We appraised the risk of bias and adherence to reporting standards, using the QUADAS-2 and TRIPOD checklists, respectively. Out of 183 identified studies, 168 were included, focusing on various ML tasks and employing a broad range of ML models, input data, data sources, strategies to generate reference tests, and performance metrics. Classification tasks were most common. Forty-two different metrics were used to evaluate model performances, with accuracy, sensitivity, precision, and intersection-over-union being the most common. We observed considerable risk of bias and moderate adherence to reporting standards which hampers replication of results. A minimum (core) set of outcome and outcome metrics is necessary to facilitate comparisons across studies.
11
Hao G, Roberts EJ, Chavez T, Zhao Z, Holman EA, Yanxon H, Green A, Krishnan H, Ushizima D, McReynolds D, Schwarz N, Zwart PH, Hexemer A, Parkinson DY. Deploying Machine Learning Based Segmentation for Scientific Imaging Analysis at Synchrotron Facilities. IS&T International Symposium on Electronic Imaging 2023; 35:IPAS-290. PMID: 38130938; PMCID: PMC10735246; DOI: 10.2352/ei.2023.35.9.ipas-290.
Abstract
Scientific user facilities present a unique set of challenges for image processing due to the large volume of data generated from experiments and simulations. Furthermore, developing and implementing algorithms for real-time processing and analysis while correcting for any artifacts or distortions in images remains a complex task, given the computational requirements of the processing algorithms. In a collaborative effort across multiple Department of Energy national laboratories, the "MLExchange" project is focused on addressing these challenges. MLExchange is a Machine Learning framework deploying interactive web interfaces to enhance and accelerate data analysis. The platform allows users to easily upload, visualize, label, and train networks. The resulting models can be deployed on real data while both results and models could be shared with the scientists. The MLExchange web-based application for image segmentation allows for training, testing, and evaluating multiple machine learning models on hand-labeled tomography data. This environment provides users with an intuitive interface for segmenting images using a variety of machine learning algorithms and deep-learning neural networks. Additionally, these tools have the potential to overcome limitations in traditional image segmentation techniques, particularly for complex and low-contrast images.
Affiliation(s)
- Guanhua Hao
- Advanced Light Source (ALS), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Eric J. Roberts
- Center for Advanced Mathematics for Energy Research Applications (CAMERA), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Molecular Biophysics and Integrated Bioimaging (MBIB), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Tanny Chavez
- Advanced Light Source (ALS), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Zhuowen Zhao
- Advanced Light Source (ALS), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Elizabeth A. Holman
- Advanced Light Source (ALS), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Howard Yanxon
- Advanced Photon Source (APS), Argonne National Laboratory; Lemont, IL 60439
- Adam Green
- Advanced Light Source (ALS), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Harinarayan Krishnan
- Advanced Light Source (ALS), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Center for Advanced Mathematics for Energy Research Applications (CAMERA), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Daniela Ushizima
- Center for Advanced Mathematics for Energy Research Applications (CAMERA), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Computational Research Division (CRD), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Dylan McReynolds
- Advanced Light Source (ALS), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Nicholas Schwarz
- Advanced Photon Source (APS), Argonne National Laboratory; Lemont, IL 60439
- Petrus H. Zwart
- Center for Advanced Mathematics for Energy Research Applications (CAMERA), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Molecular Biophysics and Integrated Bioimaging (MBIB), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Alexander Hexemer
- Advanced Light Source (ALS), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
- Dilworth Y. Parkinson
- Advanced Light Source (ALS), Lawrence Berkeley National Laboratory; Berkeley, CA 94720
12
Revilla-León M, Gómez-Polo M, Vyas S, Barmak AB, Özcan M, Att W, Krishnamurthy VR. Artificial intelligence applications in restorative dentistry: A systematic review. J Prosthet Dent 2022; 128:867-875. PMID: 33840515; DOI: 10.1016/j.prosdent.2021.02.010.
Abstract
STATEMENT OF PROBLEM Artificial intelligence (AI) applications are increasing in restorative procedures. However, the current development and performance of AI in restorative dentistry applications has not yet been systematically documented and analyzed. PURPOSE The purpose of this systematic review was to identify and evaluate the ability of AI models in restorative dentistry to diagnose dental caries and vertical tooth fracture, detect tooth preparation margins, and predict restoration failure. MATERIAL AND METHODS An electronic systematic review was performed in 5 databases: MEDLINE/PubMed, EMBASE, Web of Science, Cochrane, and Scopus. A manual search was also conducted. Studies with AI models were selected based on 4 criteria: diagnosis of dental caries, diagnosis of vertical tooth fracture, detection of the tooth preparation finishing line, and prediction of restoration failure. Two investigators independently evaluated the quality assessment of the studies by applying the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Quasi-Experimental Studies (nonrandomized experimental studies). A third investigator was consulted to resolve lack of consensus. RESULTS A total of 34 articles were included in the review: 29 studies included AI techniques for the diagnosis of dental caries or the elaboration of caries and postsensitivity prediction models, 2 for the diagnosis of vertical tooth fracture, 1 for the tooth preparation finishing line location, and 2 for the prediction of restoration failure. Among the studies reviewed, the AI models tested obtained a caries diagnosis accuracy ranging from 76% to 88.3%, sensitivity ranging from 73% to 90%, and specificity ranging from 61.5% to 93%. The caries prediction accuracy among the studies ranged from 83.6% to 97.1%. The studies reported an accuracy for the vertical tooth fracture diagnosis ranging from 88.3% to 95.7%. The article using AI models to locate the finishing line reported an accuracy ranging from 90.6% to 97.4%. CONCLUSIONS AI models have the potential to provide a powerful tool for assisting in the diagnosis of caries and vertical tooth fracture, detecting the tooth preparation margin, and predicting restoration failure. However, the dental applications of AI models are still in development. Further studies are required to assess the clinical performance of AI models in restorative dentistry.
Affiliation(s)
- Marta Revilla-León
- Assistant Professor and Assistant Program Director AEGD Residency, Department of Comprehensive Dentistry, College of Dentistry, Texas A&M University, Dallas, Texas; Affiliate Faculty Graduate Prosthodontics, Department of Restorative Dentistry, School of Dentistry, University of Washington, Seattle, Wash; Researcher at Revilla Research Center, Madrid, Spain
- Miguel Gómez-Polo
- Associate Professor, Department of Conservative Dentistry and Prosthodontics, School of Dentistry, Complutense University of Madrid, Madrid, Spain
- Shantanu Vyas
- Graduate Research Assistant, J. Mike Walker '66 Department of Mechanical Engineering, Texas A&M University, Dallas, Texas
- Abdul Basir Barmak
- Assistant Professor Clinical Research and Biostatistics, Eastman Institute of Oral Health, University of Rochester Medical Center, Rochester, NY
- Mutlu Özcan
- Professor and Head, Division of Dental Biomaterials, Clinic for Reconstructive Dentistry, Center for Dental and Oral Medicine, University of Zürich, Zürich, Switzerland
- Wael Att
- Professor and Chair, Department of Prosthodontics, Tufts University School of Dental Medicine, Boston, Mass
- Vinayak R Krishnamurthy
- Assistant Professor, J. Mike Walker '66 Department of Mechanical Engineering, Texas A&M University, College Station, Texas
13
Minnema J, Ernst A, van Eijnatten M, Pauwels R, Forouzanfar T, Batenburg KJ, Wolff J. A review on the application of deep learning for CT reconstruction, bone segmentation and surgical planning in oral and maxillofacial surgery. Dentomaxillofac Radiol 2022; 51:20210437. PMID: 35532946; PMCID: PMC9522976; DOI: 10.1259/dmfr.20210437.
Abstract
Computer-assisted surgery (CAS) allows clinicians to personalize treatments and surgical interventions and has therefore become an increasingly popular treatment modality in maxillofacial surgery. The current maxillofacial CAS consists of three main steps: (1) CT image reconstruction, (2) bone segmentation, and (3) surgical planning. However, each of these three steps can introduce errors that can heavily affect the treatment outcome. As a consequence, tedious and time-consuming manual post-processing is often necessary to ensure that each step is performed adequately. One way to overcome this issue is by developing and implementing neural networks (NNs) within the maxillofacial CAS workflow. These learning algorithms can be trained to perform specific tasks without the need for explicitly defined rules. In recent years, an extremely large number of novel NN approaches have been proposed for a wide variety of applications, which makes it a difficult task to keep up with all relevant developments. This study therefore aimed to summarize and review all relevant NN approaches applied for CT image reconstruction, bone segmentation, and surgical planning. After full text screening, 76 publications were identified: 32 focusing on CT image reconstruction, 33 focusing on bone segmentation and 11 focusing on surgical planning. Generally, convolutional NNs were most widely used in the identified studies, although the multilayer perceptron was most commonly applied in surgical planning tasks. Moreover, the drawbacks of current approaches and promising research avenues are discussed.
Affiliation(s)
- Jordi Minnema
- Department of Oral and Maxillofacial Surgery/Pathology, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, 3D Innovationlab, Amsterdam Movement Sciences, Amsterdam, The Netherlands
- Anne Ernst
- Institute for Medical Systems Biology, University Hospital Hamburg-Eppendorf, Hamburg, Germany
- Maureen van Eijnatten
- Department of Oral and Maxillofacial Surgery/Pathology, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, 3D Innovationlab, Amsterdam Movement Sciences, Amsterdam, The Netherlands
- Ruben Pauwels
- Aarhus Institute of Advanced Studies, Aarhus University, Aarhus, Denmark
- Tymour Forouzanfar
- Department of Oral and Maxillofacial Surgery/Pathology, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, 3D Innovationlab, Amsterdam Movement Sciences, Amsterdam, The Netherlands
- Kees Joost Batenburg
- Department of Oral and Maxillofacial Surgery/Pathology, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, 3D Innovationlab, Amsterdam Movement Sciences, Amsterdam, The Netherlands
- Jan Wolff
- Department of Dentistry and Oral Health, Aarhus University, Vennelyst Boulevard, Aarhus, Denmark
14
Verykokou S, Ioannidis C, Angelopoulos C. Evaluation of 3D Modeling Workflows Using Dental CBCT Data for Periodontal Regenerative Treatment. J Pers Med 2022; 12:1355. PMID: 36143140; PMCID: PMC9503221; DOI: 10.3390/jpm12091355.
Abstract
Cone beam computed tomography (CBCT) is now widely used in dentistry, and its use in the treatment of periodontal diseases has already been addressed in the international literature. At the same time, advanced segmentation methods have been introduced in state-of-the-art medical imaging software, and well-established automated techniques for 3D mesh cleaning are available in 3D model editing software. However, simple thresholding approaches for 3D modeling of the oral cavity from CBCT data do not yield accurate results, and little research has applied more specialized, semi-automated thresholding to dental CBCT images using existing software packages. This article aims to fill that gap by using CBCT data and existing software tools for 3D modeling of the hard tissues of the oral cavity of patients with periodontitis, for the purpose of designing and printing 3D scaffolds for periodontal regeneration. In this context, segmentation and 3D modeling workflows are evaluated on dental CBCT data from a patient with periodontitis, the 3D models of the teeth and alveolar bone produced by the most satisfactory experiments are compared, and an optimal, efficient methodology is discussed for creating 3D models of teeth and alveolar bone, particularly as the basis for bioabsorbable 3D-printed scaffolds for personalized treatment of periodontitis.
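For reference, the simple global-thresholding workflow discussed above can be written in a few lines with NumPy and scikit-image, producing a surface mesh that could be exported for 3D printing. The threshold value and voxel spacing below are placeholder assumptions; real CBCT data generally needs the more careful, region-specific segmentation that the article evaluates.

```python
import numpy as np
from skimage import measure

def threshold_to_mesh(volume, threshold, spacing=(0.3, 0.3, 0.3)):
    """Crude hard-tissue segmentation by a global threshold, followed by marching cubes."""
    mask = volume >= threshold                                   # teeth/alveolar bone vs. background
    verts, faces, normals, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing)     # spacing in mm per voxel
    return verts, faces, normals                                 # e.g., write to STL afterwards
```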
Affiliation(s)
- Styliani Verykokou
- Laboratory of Photogrammetry, School of Rural, Surveying and Geoinformatics Engineering, National Technical University of Athens, 15780 Athens, Greece
- Charalabos Ioannidis
- Laboratory of Photogrammetry, School of Rural, Surveying and Geoinformatics Engineering, National Technical University of Athens, 15780 Athens, Greece
- Christos Angelopoulos
- Department of Oral Diagnosis and Radiology, School of Dentistry, National and Kapodistrian University of Athens, 11527 Athens, Greece
15
Orhan K, Shamshiev M, Ezhov M, Plaksin A, Kurbanova A, Ünsal G, Gusarev M, Golitsyna M, Aksoy S, Mısırlı M, Rasmussen F, Shumilov E, Sanders A. AI-based automatic segmentation of craniomaxillofacial anatomy from CBCT scans for automatic detection of pharyngeal airway evaluations in OSA patients. Sci Rep 2022; 12:11863. PMID: 35831451; PMCID: PMC9279304; DOI: 10.1038/s41598-022-15920-1.
Abstract
This study aimed to generate and validate an automatic detection algorithm for the pharyngeal airway on CBCT data using AI software (Diagnocat, DC) that also provides a measurement method. The second aim was to validate the newly developed artificial intelligence system in comparison to commercially available software for 3D CBCT evaluation. A convolutional neural network-based machine learning algorithm was used for the segmentation of the pharyngeal airways in OSA and non-OSA patients. Radiologists used semi-automatic software to manually determine the airway, and their measurements were compared with those of the AI. OSA patients were classified into minimal, mild, moderate, and severe groups, and the mean airway volumes of the groups were compared. The narrowest points of the airway (mm), the field of the airway (mm²), and the volume of the airway (cc) of both OSA and non-OSA patients were also compared. There was no statistically significant difference between the manual technique and Diagnocat measurements in any group (p > 0.05). Inter-class correlation coefficients were 0.954 for manual and automatic segmentation, 0.956 for Diagnocat and automatic segmentation, and 0.972 for Diagnocat and manual segmentation. Although there was no statistically significant difference in total airway volume measurements between the manual, automatic, and DC measurements in non-OSA and OSA patients, we evaluated the output images to understand why the mean value for the total airway was higher in the DC measurement. The DC algorithm also measures the epiglottis volume and the posterior nasal aperture volume because of the low soft-tissue contrast in CBCT images, which leads to higher values in airway volume measurement.
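The airway volume (cc) and cross-sectional measurements compared in this study derive directly from the voxel-wise segmentation. A minimal sketch of the volume and narrowest-slice computation is given below; the axis ordering and voxel spacing are assumptions, not details from the paper.

```python
import numpy as np

def airway_measurements(mask, spacing_mm=(0.4, 0.4, 0.4)):
    """mask: boolean airway segmentation ordered (axial slice, row, column); spacing in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))
    volume_cc = mask.sum() * voxel_mm3 / 1000.0                            # 1 cc = 1000 mm^3
    pixel_mm2 = spacing_mm[1] * spacing_mm[2]
    slice_areas = mask.reshape(mask.shape[0], -1).sum(axis=1) * pixel_mm2  # area per axial slice
    narrowest_mm2 = slice_areas[slice_areas > 0].min()                     # smallest non-empty cross-section
    return volume_cc, narrowest_mm2
```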
Affiliation(s)
- Kaan Orhan
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey; Medical Design Application and Research Center (MEDITAM), Ankara University, Ankara, Turkey; Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, Lublin, Poland
- Aida Kurbanova
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Gürkan Ünsal
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus; Research Center of Experimental Health Science (DESAM), Near East University, Nicosia, Cyprus
- Seçil Aksoy
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Melis Mısırlı
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Finn Rasmussen
- Internal Medicine Department Lunge Section, SVS Esbjerg, Esbjerg, Denmark; Life Lung Health Center, Nicosia, Cyprus
16
Dot G, Schouman T, Dubois G, Rouch P, Gajny L. Fully automatic segmentation of craniomaxillofacial CT scans for computer-assisted orthognathic surgery planning using the nnU-Net framework. Eur Radiol 2022; 32:3639-3648. PMID: 35037088; DOI: 10.1007/s00330-021-08455-y.
Abstract
OBJECTIVES To evaluate the performance of the nnU-Net open-source deep learning framework for automatic multi-task segmentation of craniomaxillofacial (CMF) structures in CT scans obtained for computer-assisted orthognathic surgery. METHODS Four hundred and fifty-three consecutive patients having undergone high-resolution CT scans before orthognathic surgery were randomly distributed among a training/validation cohort (n = 300) and a testing cohort (n = 153). The ground truth segmentations were generated by 2 operators following an industry-certified procedure for use in computer-assisted surgical planning and personalized implant manufacturing. Model performance was assessed by comparing model predictions with ground truth segmentations. Examination of 45 CT scans by an industry expert provided additional evaluation. The model's generalizability was tested on a publicly available dataset of 10 CT scans with ground truth segmentation of the mandible. RESULTS In the test cohort, mean volumetric Dice similarity coefficient (vDSC) and surface Dice similarity coefficient at 1 mm (sDSC) were 0.96 and 0.97 for the upper skull, 0.94 and 0.98 for the mandible, 0.95 and 0.99 for the upper teeth, 0.94 and 0.99 for the lower teeth, and 0.82 and 0.98 for the mandibular canal. Industry expert segmentation approval rates were 93% for the mandible, 89% for the mandibular canal, 82% for the upper skull, 69% for the upper teeth, and 58% for the lower teeth. CONCLUSION While additional efforts are required for the segmentation of dental apices, our results demonstrated the model's reliability in terms of fully automatic segmentation of preoperative orthognathic CT scans. KEY POINTS • The nnU-Net deep learning framework can be trained out-of-the-box to provide robust fully automatic multi-task segmentation of CT scans performed for computer-assisted orthognathic surgery planning. • The clinical viability of the trained nnU-Net model is shown on a challenging test dataset of 153 CT scans randomly selected from clinical practice, showing metallic artifacts and diverse anatomical deformities. • Commonly used biomedical segmentation evaluation metrics (volumetric and surface Dice similarity coefficient) do not always match industry expert evaluation in the case of more demanding clinical applications.
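Unlike the volumetric Dice, the surface Dice at 1 mm (sDSC) reported here measures the fraction of each structure's boundary lying within a tolerance of the other structure's boundary. One common voxel-based approximation using distance transforms is sketched below; it is illustrative and not the evaluation code used in the cited study.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_dice(a, b, tol_mm=1.0, spacing=(1.0, 1.0, 1.0)):
    """Surface Dice between boolean masks a and b at tolerance tol_mm (voxel-surface approximation)."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~binary_erosion(a)                               # boundary voxels of a
    surf_b = b & ~binary_erosion(b)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing) # distance to a's surface
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing) # distance to b's surface
    close = (dist_to_b[surf_a] <= tol_mm).sum() + (dist_to_a[surf_b] <= tol_mm).sum()
    return close / (surf_a.sum() + surf_b.sum())
```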
Affiliation(s)
- Gauthier Dot
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; Universite de Paris, AP-HP, Hopital Pitie-Salpetriere, Service d'Odontologie, Paris, France
- Thomas Schouman
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; Medecine Sorbonne Universite, AP-HP, Hopital Pitie-Salpetriere, Service de Chirurgie Maxillo-Faciale, Paris, France
- Guillaume Dubois
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; Materialise, Malakoff, France
- Philippe Rouch
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; EPF-Graduate School of Engineering, Sceaux, France
- Laurent Gajny
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France
17
Aljabri M, Aljameel SS, Min-Allah N, Alhuthayfi J, Alghamdi L, Alduhailan N, Alfehaid R, Alqarawi R, Alhareky M, Shahin SY, Al Turki W. Canine impaction classification from panoramic dental radiographic images using deep learning models. Informatics in Medicine Unlocked 2022. DOI: 10.1016/j.imu.2022.100918.
18
Zhang Y, Ding SG, Gong XC, Yuan XX, Lin JF, Chen Q, Li JG. Generating synthesized computed tomography from CBCT using a conditional generative adversarial network for head and neck cancer patients. Technol Cancer Res Treat 2022; 21:15330338221085358. PMID: 35262422; PMCID: PMC8918752; DOI: 10.1177/15330338221085358.
Abstract
Purpose: To overcome the imaging artifacts and Hounsfield unit inaccuracy limitations of cone-beam computed tomography, a conditional generative adversarial network is proposed to synthesize high-quality computed tomography-like images from cone-beam computed tomography images. Methods: A total of 120 paired cone-beam computed tomography and computed tomography scans of patients with head and neck cancer who were treated between January 2019 and December 2020 were retrospectively collected; the scans of 90 patients were assembled into training and validation datasets, and the scans of 30 patients were used in testing datasets. The proposed method integrates a U-Net backbone architecture with residual blocks into a conditional generative adversarial network framework to learn a mapping from cone-beam computed tomography images to paired planning computed tomography images. The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio were used to assess the performance of this method compared with U-Net and CycleGAN. Results: The synthesized computed tomography images produced by the conditional generative adversarial network were visually similar to planning computed tomography images. The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio calculated from test images generated by the conditional generative adversarial network were all significantly different from those of CycleGAN and U-Net. The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio values between the synthesized computed tomography and the reference computed tomography were 16.75 ± 11.07 Hounsfield unit, 58.15 ± 28.64 Hounsfield unit, 0.92 ± 0.04, and 30.58 ± 3.86 dB in conditional generative adversarial network, 20.66 ± 12.15 Hounsfield unit, 66.53 ± 29.73 Hounsfield unit, 0.90 ± 0.05, and 29.29 ± 3.49 dB in CycleGAN, and 16.82 ± 10.99 Hounsfield unit, 58.68 ± 28.34 Hounsfield unit, 0.92 ± 0.04, and 30.48 ± 3.83 dB in U-Net, respectively. Conclusions: The synthesized computed tomography generated from the cone-beam computed tomography-based conditional generative adversarial network method has accurate computed tomography numbers while keeping the same anatomical structure as cone-beam computed tomography. It can be used effectively for quantitative applications in radiotherapy.
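The four image-similarity metrics used in this comparison (MAE, RMSE, SSIM, PSNR) can be computed between a synthesized CT and the registered planning CT roughly as sketched below; the HU data range passed to SSIM/PSNR is an assumption that would have to match the study's preprocessing.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sct_metrics(sct_hu, ct_hu, data_range=2000.0):
    """Voxel-wise similarity between a synthesized CT and the reference planning CT (both in HU)."""
    sct = sct_hu.astype(np.float64)
    ct = ct_hu.astype(np.float64)
    diff = sct - ct
    mae = np.abs(diff).mean()
    rmse = np.sqrt((diff ** 2).mean())
    ssim = structural_similarity(ct, sct, data_range=data_range)
    psnr = peak_signal_noise_ratio(ct, sct, data_range=data_range)
    return mae, rmse, ssim, psnr
```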
Collapse
Affiliation(s)
- Yun Zhang
- Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, Jiangxi, People’s Republic of China
| | - Sheng-gou Ding
- Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, Jiangxi, People’s Republic of China
| | - Xiao-chang Gong
- Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, Jiangxi, People’s Republic of China
| | - Xing-xing Yuan
- Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, Jiangxi, People’s Republic of China
| | - Jia-fan Lin
- Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, Jiangxi, People’s Republic of China
| | - Qi Chen
- MedMind Technology Co. Ltd, Beijing, People’s Republic of China
| | - Jin-gao Li
- Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, Jiangxi, People’s Republic of China
- Key Laboratory of Personalized Diagnosis and Treatment of Nasopharyngeal Carcinoma Nanchang, Jiangxi, People’s Republic of China
- Medical College of Nanchang University, Nanchang, Jiangxi, People’s Republic of China
- Jin-gao Li, Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, Jiangxi 330029, People’s Republic of China.
| |
Collapse
|
19
|
Putra RH, Doi C, Yoda N, Astuti ER, Sasaki K. Current applications and development of artificial intelligence for digital dental radiography. Dentomaxillofac Radiol 2022; 51:20210197. [PMID: 34233515 PMCID: PMC8693331 DOI: 10.1259/dmfr.20210197] [Citation(s) in RCA: 37] [Impact Index Per Article: 18.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023] Open
Abstract
In the last few years, artificial intelligence (AI) research has been developing rapidly in the field of dental and maxillofacial radiology. Dental radiography, which is commonly used in daily practice, provides a rich resource for AI development and has attracted many researchers to develop applications for various purposes. This study reviewed the applicability of AI to dental radiography on the basis of current studies. Online searches of the PubMed and IEEE Xplore databases, up to December 2020, and subsequent manual searches were performed. We then categorized the applications of AI according to the following purposes: diagnosis of dental caries, periapical pathologies, and periodontal bone loss; cyst and tumor classification; cephalometric analysis; screening of osteoporosis; tooth recognition and forensic odontology; dental implant system recognition; and image quality enhancement. Current developments in AI methodology for each of these applications were subsequently discussed. Although most of the reviewed studies demonstrated the great potential of AI applications for dental radiography, further development is still needed before implementation in clinical routine because of several challenges and limitations, such as a lack of dataset size justification and unstandardized reporting formats. Considering the current limitations and challenges, future AI research in dental radiography should follow standardized reporting formats in order to align research designs and enhance the impact of AI development globally.
Collapse
Affiliation(s)
| | - Chiaki Doi
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
| | - Nobuhiro Yoda
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
| | - Eha Renwi Astuti
- Department of Dentomaxillofacial Radiology, Faculty of Dental Medicine, Universitas Airlangga, Jl. Mayjen Prof. Dr. Moestopo no 47, Surabaya, Indonesia
| | - Keiichi Sasaki
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
| |
Collapse
|
20
|
Bührer M, Xu H, Hendriksen AA, Büchi FN, Eller J, Stampanoni M, Marone F. Deep learning based classification of dynamic processes in time-resolved X-ray tomographic microscopy. Sci Rep 2021; 11:24174. [PMID: 34921184 PMCID: PMC8683503 DOI: 10.1038/s41598-021-03546-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Accepted: 12/03/2021] [Indexed: 11/12/2022] Open
Abstract
Time-resolved X-ray tomographic microscopy is an invaluable technique to investigate dynamic processes in 3D for extended time periods. Because of the limited signal-to-noise ratio caused by the short exposure times and sparse angular sampling frequency, obtaining quantitative information through post-processing remains challenging and requires intensive manual labor. This severely limits the accessible experimental parameter space and so prevents full exploitation of the capabilities of dedicated time-resolved X-ray tomographic stations. Though automatic approaches, often exploiting iterative reconstruction methods, are currently being developed, the required computational costs typically remain high. Here, we propose a highly efficient reconstruction and classification pipeline (SIRT-FBP-MS-D-DIFF) that combines an algebraic filter approximation and machine learning to significantly reduce the computational time. The dynamic features are reconstructed by standard filtered back-projection with an algebraic filter to approximate iterative reconstruction quality in a computationally efficient manner. The raw reconstructions are post-processed with a trained convolutional neural network to extract the dynamic features from the low signal-to-noise ratio reconstructions in a fully automatic manner. The capabilities of the proposed pipeline are demonstrated on three different dynamic fuel cell datasets, one exploited for training and two for testing without network retraining. The proposed approach enables automatic processing of several hundred datasets in a single day on a single GPU node readily available at most institutions, thus extending the possibilities of future dynamic X-ray tomographic investigations.
Collapse
Affiliation(s)
- Minna Bührer
- Swiss Light Source, Paul Scherrer Institut, Forschungsstrasse 111, 5232, Villigen, Aargau, Switzerland.,Institute for Biomedical Engineering, University and ETH Zürich, 8092, Zurich, Zürich, Switzerland
| | - Hong Xu
- Electrochemistry Laboratory, Paul Scherrer Institut, Forschungsstrasse 111, 5232, Villigen, Aargau, Switzerland
| | - Allard A Hendriksen
- Centrum Wiskunde & Informatica, Science Park 123, 1098 XG, Amsterdam, The Netherlands
| | - Felix N Büchi
- Electrochemistry Laboratory, Paul Scherrer Institut, Forschungsstrasse 111, 5232, Villigen, Aargau, Switzerland
| | - Jens Eller
- Electrochemistry Laboratory, Paul Scherrer Institut, Forschungsstrasse 111, 5232, Villigen, Aargau, Switzerland
| | - Marco Stampanoni
- Swiss Light Source, Paul Scherrer Institut, Forschungsstrasse 111, 5232, Villigen, Aargau, Switzerland.,Institute for Biomedical Engineering, University and ETH Zürich, 8092, Zurich, Zürich, Switzerland
| | - Federica Marone
- Swiss Light Source, Paul Scherrer Institut, Forschungsstrasse 111, 5232, Villigen, Aargau, Switzerland.
| |
Collapse
|
21
|
Carrillo-Perez F, Pecho OE, Morales JC, Paravina RD, Della Bona A, Ghinea R, Pulgar R, Pérez MDM, Herrera LJ. Applications of artificial intelligence in dentistry: A comprehensive review. J ESTHET RESTOR DENT 2021; 34:259-280. [PMID: 34842324 DOI: 10.1111/jerd.12844] [Citation(s) in RCA: 46] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Revised: 09/30/2021] [Accepted: 11/09/2021] [Indexed: 12/25/2022]
Abstract
OBJECTIVE To perform a comprehensive review of the use of artificial intelligence (AI) and machine learning (ML) in dentistry, providing the community with broad insight into the advances that these technologies and tools have produced, paying special attention to the area of esthetic dentistry and color research. MATERIALS AND METHODS The comprehensive review was conducted in the MEDLINE/PubMed, Web of Science, and Scopus databases for papers published in English in the last 20 years. RESULTS Out of 3871 eligible papers, 120 were included for final appraisal. Study methodologies included deep learning (DL; n = 76), fuzzy logic (FL; n = 12), and other ML techniques (n = 32), which were mainly applied to disease identification, image segmentation, image correction, and biomimetic color analysis and modeling. CONCLUSIONS The reviewed studies have reported outstanding results in the design of high-performance decision support systems for the aforementioned areas. The future of digital dentistry lies in the design of integrated approaches that provide personalized treatments to patients. In addition, esthetic dentistry can benefit from those advances by developing models that allow a complete characterization of tooth color, enhancing the accuracy of dental restorations. CLINICAL SIGNIFICANCE The use of AI and ML has an increasing impact on the dental profession and is complementing the development of digital technologies and tools, with wide application in treatment planning and esthetic dentistry procedures.
Collapse
Affiliation(s)
- Francisco Carrillo-Perez
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
| | - Oscar E Pecho
- Post-Graduate Program in Dentistry, Dental School, University of Passo Fundo, Passo Fundo, Brazil
| | - Juan Carlos Morales
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
| | - Rade D Paravina
- Department of Restorative Dentistry and Prosthodontics, School of Dentistry, University of Texas Health Science Center at Houston, Houston, Texas, USA
| | - Alvaro Della Bona
- Post-Graduate Program in Dentistry, Dental School, University of Passo Fundo, Passo Fundo, Brazil
| | - Razvan Ghinea
- Department of Optics, Faculty of Science, University of Granada, Granada, Spain
| | - Rosa Pulgar
- Department of Stomatology, Campus Cartuja, University of Granada, Granada, Spain
| | - María Del Mar Pérez
- Department of Optics, Faculty of Science, University of Granada, Granada, Spain
| | - Luis Javier Herrera
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
| |
Collapse
|
22
|
Liu Q, Deng H, Lian C, Chen X, Xiao D, Ma L, Chen X, Kuang T, Gateno J, Yap PT, Xia JJ. SkullEngine: A Multi-Stage CNN Framework for Collaborative CBCT Image Segmentation and Landmark Detection. MACHINE LEARNING IN MEDICAL IMAGING. MLMI (WORKSHOP) 2021; 12966:606-614. [PMID: 34964046 DOI: 10.1007/978-3-030-87589-3_62] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
Accurate bone segmentation and landmark detection are two essential preparation tasks in computer-aided surgical planning for patients with craniomaxillofacial (CMF) deformities. Surgeons typically have to complete the two tasks manually, spending ~12 hours for each set of CBCT or ~5 hours for CT. To tackle these problems, we propose a multi-stage coarse-to-fine CNN-based framework, called SkullEngine, for high-resolution segmentation and large-scale landmark detection through a collaborative, integrated, and scalable JSD model and three segmentation and landmark detection refinement models. We evaluated our framework on a clinical dataset consisting of 170 CBCT/CT images for the task of segmenting 2 bones (midface and mandible) and detecting 175 clinically common landmarks on bones, teeth, and soft tissues. Experimental results show that SkullEngine significantly improves segmentation quality, especially in regions where the bone is thin. In addition, SkullEngine efficiently and accurately detects all 175 landmarks. Both tasks were completed simultaneously within 3 minutes, for both CBCT and CT, with high segmentation quality. Currently, SkullEngine has been integrated into a clinical workflow to further evaluate its clinical efficiency.
Collapse
Affiliation(s)
- Qin Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
| | - Han Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX, USA
| | - Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
| | - Xiaoyang Chen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
| | - Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
| | - Lei Ma
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
| | - Xu Chen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
| | - Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX, USA
| | - Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX, USA
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
| | - James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX, USA
| |
Collapse
|
23
|
Bagis N, Kurt MH, Evli C, Camgoz M, Atakan C, Peker Ozturk H, Orhan K. Evaluation of a metal artifact reduction algorithm and an adaptive image noise optimization filter in the estimation of peri-implant fenestration defects using cone beam computed tomography: an in-vitro study. Oral Radiol 2021; 38:325-335. [PMID: 34387842 DOI: 10.1007/s11282-021-00561-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2021] [Accepted: 08/03/2021] [Indexed: 11/30/2022]
Abstract
OBJECTIVE The aim of this study is to assess the effects of metal artifact reduction (MAR) and adaptive image noise optimization (AINO) filters in CBCT imaging on the detection accuracy of artificially created fenestration defects in proximity to titanium and zirconium implants in sheep jaws. METHODS Six zirconium and 10 titanium implants were placed in the mandibles of three sheep, and artificial defects were created. All images were obtained with a standard voxel size (0.150 mm3) and with 4 scan modes during CBCT scanning: (1) without MAR/without AINO; (2) with MAR/without AINO; (3) without MAR/with AINO; and (4) with MAR/with AINO. A total of 60 CBCT scans were produced. RESULTS For all types of implants, intra- and inter-observer kappa values were highest for the MAR filter. The scan mode with the MAR filter was found to have the highest area under the curve (AUC), whereas the scan mode without both the MAR and AINO filters had the lowest AUC values, with statistical significance (p ≤ 0.05). Titanium implants had higher AUC values than zirconium implants (p ≤ 0.05). CONCLUSION Both the MAR module and the AINO filter enhance the accuracy of detection of peri-implant fenestrations; however, the use of the MAR filter alone can be recommended for the detection of peri-implant fenestrations.
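As a rough illustration of the AUC analysis described above, the sketch below computes receiver operating characteristic AUC values from hypothetical observer confidence ratings for defect presence. The data and variable names are invented for demonstration and do not reproduce the study's measurements.

```python
from sklearn.metrics import roc_auc_score

# Hypothetical data: 1 = fenestration defect truly present, 0 = absent.
# Scores are observer confidence ratings (e.g., on a 1-5 scale) for each scan.
ground_truth     = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
scores_with_mar  = [5, 2, 4, 5, 1, 2, 4, 3, 5, 1]   # readings on scans reconstructed with MAR
scores_no_filter = [4, 3, 3, 4, 2, 3, 3, 3, 4, 2]   # readings without MAR/AINO

print("AUC with MAR:       ", roc_auc_score(ground_truth, scores_with_mar))
print("AUC without filters:", roc_auc_score(ground_truth, scores_no_filter))
```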
Collapse
Affiliation(s)
- Nilsun Bagis
- Department of Periodontology, Faculty of Dentistry, Ankara University, Ankara, Turkey
| | - Mehmet Hakan Kurt
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey.
| | - Cengiz Evli
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
| | - Melike Camgoz
- Department of Periodontology, Faculty of Dentistry, Gazi University, Ankara, Turkey
| | - Cemal Atakan
- Department of Statistics, Faculty of Science, Ankara University, Ankara, Turkey
| | - Hilal Peker Ozturk
- Department of Dentomaxillofacial Radiology, Gulhane Faculty of Dentistry, University of Health Sciences, Ankara, Turkey
| | - Kaan Orhan
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey.,Ankara University Medical Design Application and Research Center (MEDITAM), Ankara, Turkey
| |
Collapse
|
24
|
Minnema J, Wolff J, Koivisto J, Lucka F, Batenburg KJ, Forouzanfar T, van Eijnatten M. Comparison of convolutional neural network training strategies for cone-beam CT image segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 207:106192. [PMID: 34062493 DOI: 10.1016/j.cmpb.2021.106192] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Accepted: 05/11/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Over the past decade, convolutional neural networks (CNNs) have revolutionized the field of medical image segmentation. Prompted by the developments in computational resources and the availability of large datasets, a wide variety of different two-dimensional (2D) and three-dimensional (3D) CNN training strategies have been proposed. However, a systematic comparison of the impact of these strategies on the image segmentation performance is still lacking. Therefore, this study aimed to compare eight different CNN training strategies, namely 2D (axial, sagittal and coronal slices), 2.5D (3 and 5 adjacent slices), majority voting, randomly oriented 2D cross-sections and 3D patches. METHODS These eight strategies were used to train a U-Net and an MS-D network for the segmentation of simulated cone-beam computed tomography (CBCT) images comprising randomly-placed non-overlapping cylinders and experimental CBCT images of anthropomorphic phantom heads. The resulting segmentation performances were quantitatively compared by calculating Dice similarity coefficients. In addition, all segmented and gold standard experimental CBCT images were converted into virtual 3D models and compared using orientation-based surface comparisons. RESULTS The CNN training strategy that generally resulted in the best performances on both simulated and experimental CBCT images was majority voting. When employing 2D training strategies, the segmentation performance can be optimized by training on image slices that are perpendicular to the predominant orientation of the anatomical structure of interest. Such spatial features should be taken into account when choosing or developing novel CNN training strategies for medical image segmentation. CONCLUSIONS The results of this study will help clinicians and engineers to choose the most-suited CNN training strategy for CBCT image segmentation.
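To make the comparison of training strategies above more concrete, here is a minimal sketch of the majority-voting idea together with the Dice similarity coefficient used to score segmentations. It assumes binary prediction volumes from three orientation-specific 2D networks; the masks are random stand-ins and the helper names are hypothetical, not the study's code.

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum())

def majority_vote(predictions):
    """Fuse binary predictions, e.g. from axial, sagittal and coronal 2D networks."""
    stacked = np.stack(predictions, axis=0)              # (n_models, z, y, x)
    return stacked.sum(axis=0) > (stacked.shape[0] / 2)  # foreground if most models agree

# Random stand-in masks: a "gold standard" and three noisy per-orientation predictions.
rng = np.random.default_rng(1)
gold = rng.random((16, 32, 32)) > 0.5
predictions = [np.logical_xor(gold, rng.random(gold.shape) > 0.9) for _ in range(3)]

fused = majority_vote(predictions)
print("Dice per orientation:", [round(dice_coefficient(p, gold), 3) for p in predictions])
print("Dice after majority voting:", round(dice_coefficient(fused, gold), 3))
```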
Collapse
Affiliation(s)
- Jordi Minnema
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovationlab, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, Amsterdam Movement Sciences, Amsterdam 1081 HV, the Netherlands.
| | - Jan Wolff
- Fraunhofer Research Institution for Additive Manufacturing Technologies IAPT, Am Schleusengraben 13, Hamburg 21029, Germany; Department of Oral and Maxillofacial Surgery, Division for Regenerative Orofacial Medicine, University Hospital Hamburg-Eppendorf, Hamburg 20246, Germany; Department of Dentistry and Oral Health, Aarhus University, Vennelyst Boulevard 9, DK-8000 Aarhus C, Denmark
| | - Juha Koivisto
- Department of Physics, University of Helsinki, Helsinki 20560, Finland
| | - Felix Lucka
- Centrum Wiskunde & Informatica (CWI), Amsterdam 1090 GB, the Netherlands; University College London, London WC1E 6BT, United Kingdom
| | | | - Tymour Forouzanfar
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovationlab, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, Amsterdam Movement Sciences, Amsterdam 1081 HV, the Netherlands
| | | |
Collapse
|
25
|
Minnema J, van Eijnatten M, der Sarkissian H, Doyle S, Koivisto J, Wolff J, Forouzanfar T, Lucka F, Batenburg KJ. Efficient high cone-angle artifact reduction in circular cone-beam CT using deep learning with geometry-aware dimension reduction. Phys Med Biol 2021; 66. [PMID: 34107467 DOI: 10.1088/1361-6560/ac09a1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Accepted: 06/09/2021] [Indexed: 11/11/2022]
Abstract
High cone-angle artifacts (HCAAs) appear frequently in circular cone-beam computed tomography (CBCT) images and can heavily affect diagnosis and treatment planning. To reduce HCAAs in CBCT scans, we propose a novel deep learning approach that reduces the three-dimensional (3D) nature of HCAAs to two-dimensional (2D) problems in an efficient way. Specifically, we exploit the relationship between HCAAs and the rotational scanning geometry by training a convolutional neural network (CNN) using image slices that were radially sampled from CBCT scans. We evaluated this novel approach using a dataset of input CBCT scans affected by HCAAs and high-quality artifact-free target CBCT scans. Two different CNN architectures were employed, namely U-Net and a mixed-scale dense CNN (MS-D Net). The artifact reduction performance of the proposed approach was compared to that of a Cartesian slice-based artifact reduction deep learning approach in which a CNN was trained to remove the HCAAs from Cartesian slices. In addition, all processed CBCT scans were segmented to investigate the impact of HCAAs reduction on the quality of CBCT image segmentation. We demonstrate that the proposed deep learning approach with geometry-aware dimension reduction greatly reduces HCAAs in CBCT scans and outperforms the Cartesian slice-based deep learning approach. Moreover, the proposed artifact reduction approach markedly improves the accuracy of the subsequent segmentation task compared to the Cartesian slice-based workflow.
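The geometry-aware dimension reduction described above rests on sampling 2D slices that all contain the scanner rotation axis. A minimal sketch of such radial sampling, assuming the rotation axis coincides with the z-axis through the volume centre and using SciPy's ndimage.rotate, is given below; it illustrates the idea rather than the authors' pipeline.

```python
import numpy as np
from scipy.ndimage import rotate

def radial_slices(volume, n_angles=8):
    """Sample 2D slices that all contain the (assumed) z rotation axis through the volume centre."""
    slices = []
    for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        # Rotate about the z-axis (rotation in the y-x plane), then take the central x-z plane.
        rotated = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
        slices.append(rotated[:, rotated.shape[1] // 2, :])
    return slices

# Stand-in CBCT volume indexed as (z, y, x).
volume = np.random.default_rng(2).random((64, 96, 96))
for s in radial_slices(volume, n_angles=4):
    print(s.shape)  # each radial slice spans the full cone angle along z
```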
Collapse
Affiliation(s)
- Jordi Minnema
- Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovationlab, Amsterdam Movement Sciences, 1081 HV Amsterdam, The Netherlands
| | - Maureen van Eijnatten
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven, The Netherlands.,Centrum Wiskunde & Informatica (CWI), 1090 GB Amsterdam, The Netherlands
| | | | - Shannon Doyle
- Centrum Wiskunde & Informatica (CWI), 1090 GB Amsterdam, The Netherlands
| | - Juha Koivisto
- Department of Physics, University of Helsinki, Gustaf Hällsströmin katu 2, FI-00560, Helsinki, Finland
| | - Jan Wolff
- Department of Oral and Maxillofacial Surgery, Division for Regenerative Orofacial Medicine, University Hospital Hamburg-Eppendorf, D-20246 Hamburg, Germany.,Fraunhofer Research Institution for Additive Manufacturing Technologies IAPT, Am Schleusengraben 13, D-21029 Hamburg, Germany.,Department of Dentistry and Oral Health, Aarhus University, Vennelyst Boulevard 9, DK-8000 Aarhus C, Denmark
| | - Tymour Forouzanfar
- Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovationlab, Amsterdam Movement Sciences, 1081 HV Amsterdam, The Netherlands
| | - Felix Lucka
- Centrum Wiskunde & Informatica (CWI), 1090 GB Amsterdam, The Netherlands.,Centre for Medical Image Computing, University College London, WC1E 6BT London, United Kingdom
| | - Kees Joost Batenburg
- Centrum Wiskunde & Informatica (CWI), 1090 GB Amsterdam, The Netherlands.,Leiden Institute of Advanced Computer Science (LIACS), Leiden University, 2333 CA Leiden, The Netherlands
| |
Collapse
|
26
|
Qiu B, Guo J, Kraeima J, Glas HH, Zhang W, Borra RJH, Witjes MJH, van Ooijen PMA. Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography. J Pers Med 2021; 11:jpm11060492. [PMID: 34072714 PMCID: PMC8229770 DOI: 10.3390/jpm11060492] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2021] [Revised: 05/26/2021] [Accepted: 05/28/2021] [Indexed: 12/24/2022] Open
Abstract
Purpose: Classic encoder–decoder-based convolutional neural network (EDCNN) approaches cannot accurately segment detailed anatomical structures of the mandible in computed tomography (CT), for instance, the condyles and coronoids of the mandible, which are often affected by noise and metal artifacts. The main reason is that EDCNN approaches ignore the anatomical connectivity of the organs. In this paper, we propose a novel CNN-based 3D mandible segmentation approach that has the ability to accurately segment detailed anatomical structures. Methods: Unlike classic EDCNNs, which need to slice or crop the whole CT scan into 2D slices or 3D patches during the segmentation process, our proposed approach can perform mandible segmentation on complete 3D CT scans. The proposed method, RCNNSeg, adopts the structure of recurrent neural networks to form a directed acyclic graph in order to enable recurrent connections between adjacent nodes to retain their connectivity. Each node then functions as a classic EDCNN to segment a single slice in the CT scan. Our proposed approach can perform 3D mandible segmentation on sequential data of varying lengths and does not incur a large computational cost. The proposed RCNNSeg was evaluated on 109 head and neck CT scans from a local dataset and 40 scans from the PDDCA public dataset. The final accuracy of the proposed RCNNSeg was evaluated by calculating the Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and 95% Hausdorff distance (95HD) between the reference standard and the automated segmentation. Results: The proposed RCNNSeg outperforms the EDCNN-based approaches on both datasets and yields superior quantitative and qualitative performances when compared to the state-of-the-art approaches on the PDDCA dataset. The proposed RCNNSeg generated the most accurate segmentations with an average DSC of 97.48%, ASD of 0.2170 mm, and 95HD of 2.6562 mm on 109 CT scans, and an average DSC of 95.10%, ASD of 0.1367 mm, and 95HD of 1.3560 mm on the PDDCA dataset. Conclusions: The proposed RCNNSeg method generated more accurate automated segmentations than the other classic EDCNN segmentation techniques in terms of quantitative and qualitative evaluation. The proposed RCNNSeg has potential for automatic mandible segmentation by learning spatially structured information.
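For readers unfamiliar with the evaluation metrics quoted above, the sketch below shows one common way to compute the Dice similarity coefficient, average symmetric surface distance and 95% Hausdorff distance from two binary masks using distance transforms. The masks and voxel spacing are toy values, and this is a generic illustration, not the RCNNSeg evaluation code.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_distances(mask_a, mask_b, spacing):
    """Distances (in mm) from the surface voxels of mask_a to the surface of mask_b."""
    surf_a = mask_a & ~binary_erosion(mask_a)
    surf_b = mask_b & ~binary_erosion(mask_b)
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)  # distance map to b's surface
    return dist_to_b[surf_a]

def dsc(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def asd_and_hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    d = np.concatenate([surface_distances(a, b, spacing), surface_distances(b, a, spacing)])
    return d.mean(), np.percentile(d, 95)

# Toy reference and automated masks (real use: mandible segmentations on a CT grid).
reference = np.zeros((32, 64, 64), dtype=bool)
reference[8:24, 16:48, 16:48] = True
automated = np.zeros_like(reference)
automated[9:25, 17:49, 15:47] = True

print("DSC:", round(dsc(automated, reference), 4))
print("ASD and 95HD (mm):", asd_and_hd95(automated, reference, spacing=(0.4, 0.4, 0.4)))
```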
Collapse
Affiliation(s)
- Bingjiang Qiu
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ Groningen, The Netherlands; (B.Q.); (J.K.); (H.H.G.); (M.J.H.W.)
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ Groningen, The Netherlands;
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ Groningen, The Netherlands
| | - Jiapan Guo
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ Groningen, The Netherlands;
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ Groningen, The Netherlands
- Correspondence:
| | - Joep Kraeima
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ Groningen, The Netherlands; (B.Q.); (J.K.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ Groningen, The Netherlands
| | - Haye Hendrik Glas
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ Groningen, The Netherlands; (B.Q.); (J.K.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ Groningen, The Netherlands
| | - Weichuan Zhang
- Institute for Integrated and Intelligent System, Griffith University, Nathan, QLD 4111, Australia;
- CSIRO Data61, Epping, NSW 1710, Australia
| | - Ronald J. H. Borra
- Medical Imaging Center (MIC), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ Groningen, The Netherlands;
| | - Max Johannes Hendrikus Witjes
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ Groningen, The Netherlands; (B.Q.); (J.K.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ Groningen, The Netherlands
| | - Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ Groningen, The Netherlands;
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ Groningen, The Netherlands
| |
Collapse
|
27
|
Ren R, Luo H, Su C, Yao Y, Liao W. Machine learning in dental, oral and craniofacial imaging: a review of recent progress. PeerJ 2021; 9:e11451. [PMID: 34046262 PMCID: PMC8136280 DOI: 10.7717/peerj.11451] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Accepted: 04/22/2021] [Indexed: 02/05/2023] Open
Abstract
Artificial intelligence has been emerging as an increasingly important aspect of our daily lives and is widely applied in medical science. One major application of artificial intelligence in medical science is medical imaging. With the advancement of technology and medical imaging facilities, many machine learning models, a major component of artificial intelligence, are applied in medical diagnosis and treatment. The popularity of convolutional neural networks in dental, oral and craniofacial imaging is growing, as they are continually being applied to a broader spectrum of scientific studies. Our manuscript reviews the fundamental principles and rationales behind machine learning, and summarizes its research progress and its recent applications specifically in dental, oral and craniofacial imaging. It also reviews the problems that remain to be resolved and evaluates the prospects for future development of this field of scientific study.
Collapse
Affiliation(s)
- Ruiyang Ren
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China School of Stomatology, Sichuan University, Chengdu, Sichuan, China
| | - Haozhe Luo
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
| | - Chongying Su
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China School of Stomatology, Sichuan University, Chengdu, Sichuan, China
| | - Yang Yao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Implantology, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
| | - Wen Liao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Department of Orthodontics, Osaka Dental University, Hirakata, Osaka, Japan
| |
Collapse
|
28
|
Qiu B, van der Wel H, Kraeima J, Hendrik Glas H, Guo J, Borra RJH, Witjes MJH, van Ooijen PMA. Robust and Accurate Mandible Segmentation on Dental CBCT Scans Affected by Metal Artifacts Using a Prior Shape Model. J Pers Med 2021; 11:364. [PMID: 34062762 PMCID: PMC8147374 DOI: 10.3390/jpm11050364] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2021] [Revised: 04/26/2021] [Accepted: 04/27/2021] [Indexed: 12/17/2022] Open
Abstract
Accurate mandible segmentation is important in maxillofacial surgery for guiding clinical diagnosis and treatment and for developing appropriate surgical plans. In particular, cone-beam computed tomography (CBCT) images containing metal parts, such as those used in oral and maxillofacial surgery (OMFS), are often degraded by metal artifacts, such as weak and blurred boundaries, caused by high-attenuation materials and the low radiation dose used in image acquisition. To overcome this problem, this paper proposes a novel deep learning-based approach (SASeg) for automated mandible segmentation that incorporates overall anatomical knowledge of the mandible. SASeg utilizes a prior shape feature extractor (PSFE) module based on a mean mandible shape, and recurrent connections that maintain the continuous structure of the mandible. The effectiveness of the proposed network is substantiated on a dental CBCT dataset from orthodontic treatment comprising 59 patients. The experiments show that the proposed SASeg can easily be used to improve prediction accuracy on a dental CBCT dataset corrupted by metal artifacts. In addition, the experimental results on the PDDCA dataset demonstrate that, compared with state-of-the-art mandible segmentation models, our proposed SASeg achieves better segmentation performance.
Collapse
Affiliation(s)
- Bingjiang Qiu
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (H.H.G.); (M.J.H.W.)
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (J.G.); (P.M.A.v.O.)
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Hylke van der Wel
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Joep Kraeima
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Haye Hendrik Glas
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Jiapan Guo
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (J.G.); (P.M.A.v.O.)
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Ronald J. H. Borra
- Medical Imaging Center (MIC), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands;
| | - Max Johannes Hendrikus Witjes
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (J.G.); (P.M.A.v.O.)
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| |
Collapse
|
29
|
Wang H, Minnema J, Batenburg KJ, Forouzanfar T, Hu FJ, Wu G. Multiclass CBCT Image Segmentation for Orthodontics with Deep Learning. J Dent Res 2021; 100:943-949. [PMID: 33783247 PMCID: PMC8293763 DOI: 10.1177/00220345211005338] [Citation(s) in RCA: 49] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Accurate segmentation of the jaw (i.e., mandible and maxilla) and the teeth in cone beam computed tomography (CBCT) scans is essential for orthodontic diagnosis and treatment planning. Although various (semi)automated methods have been proposed to segment the jaw or the teeth, there is still a lack of fully automated segmentation methods that can simultaneously segment both anatomic structures in CBCT scans (i.e., multiclass segmentation). In this study, we aimed to train and validate a mixed-scale dense (MS-D) convolutional neural network for multiclass segmentation of the jaw, the teeth, and the background in CBCT scans. Thirty CBCT scans were obtained from patients who had undergone orthodontic treatment. Gold standard segmentation labels were manually created by 4 dentists. As a benchmark, we also evaluated MS-D networks that segmented the jaw or the teeth (i.e., binary segmentation). All segmented CBCT scans were converted to virtual 3-dimensional (3D) models. The segmentation performance of all trained MS-D networks was assessed by the Dice similarity coefficient and surface deviation. The CBCT scans segmented by the MS-D network demonstrated a large overlap with the gold standard segmentations (Dice similarity coefficient: 0.934 ± 0.019, jaw; 0.945 ± 0.021, teeth). The MS-D network–based 3D models of the jaw and the teeth showed minor surface deviations when compared with the corresponding gold standard 3D models (0.390 ± 0.093 mm, jaw; 0.204 ± 0.061 mm, teeth). The MS-D network took approximately 25 s to segment 1 CBCT scan, whereas manual segmentation took about 5 h. This study showed that multiclass segmentation of jaw and teeth was accurate and its performance was comparable to binary segmentation. The MS-D network trained for multiclass segmentation would therefore make patient-specific orthodontic treatment more feasible by strongly reducing the time required to segment multiple anatomic structures in CBCT scans.
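The conversion of multiclass segmentations into virtual 3D models mentioned above is typically done by extracting one binary mask per class and running a surface-extraction algorithm such as marching cubes. A minimal sketch under those assumptions follows; the label map, class codes and voxel spacing are invented for illustration and are not the study's data.

```python
import numpy as np
from skimage.measure import marching_cubes

# Hypothetical multiclass label map: 0 = background, 1 = jaw, 2 = teeth (toy geometry).
labels = np.zeros((48, 64, 64), dtype=np.uint8)
labels[10:40, 10:54, 10:54] = 1
labels[15:25, 20:44, 20:44] = 2

voxel_spacing = (0.4, 0.4, 0.4)  # mm; an assumed isotropic CBCT spacing

for class_id, name in [(1, "jaw"), (2, "teeth")]:
    mask = (labels == class_id).astype(np.float32)
    # Extract a triangle mesh of the class surface; verts/faces can be exported as an STL model.
    verts, faces, _, _ = marching_cubes(mask, level=0.5, spacing=voxel_spacing)
    print(f"{name}: {len(verts)} vertices, {len(faces)} triangles")
```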
Collapse
Affiliation(s)
- H Wang
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, Amsterdam UMC, Academic Centre for Dentistry Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
| | - J Minnema
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, Amsterdam UMC, Academic Centre for Dentistry Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
| | - K J Batenburg
- Centrum Wiskunde and Informatica, Amsterdam, the Netherlands
| | - T Forouzanfar
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, Amsterdam UMC, Academic Centre for Dentistry Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
| | - F J Hu
- Institute of Information Technology, Zhejiang Shuren University, Hangzhou, China
| | - G Wu
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, Amsterdam UMC, Academic Centre for Dentistry Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands.,Department of Oral Implantology and Prosthetic Dentistry, Academic Centre for Dentistry Amsterdam, University of Amsterdam and Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
| |
Collapse
|
30
|
Wang X, Pastewait M, Wu TH, Lian C, Tejera B, Lee YT, Lin FC, Wang L, Shen D, Li S, Ko CC. 3D morphometric quantification of maxillae and defects for patients with unilateral cleft palate via deep learning-based CBCT image auto-segmentation. Orthod Craniofac Res 2021; 24 Suppl 2:108-116. [PMID: 33711187 DOI: 10.1111/ocr.12482] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2020] [Revised: 01/25/2021] [Accepted: 03/01/2021] [Indexed: 12/01/2022]
Abstract
OBJECTIVE This study aimed to quantify the 3D asymmetry of the maxilla in patients with unilateral cleft lip and palate (UCP) and investigate the defect factors responsible for the variability of the maxilla on the cleft side using a deep-learning-based CBCT image segmentation protocol. SETTING AND SAMPLE POPULATION Cone beam computed tomography (CBCT) images of 60 patients with UCP were acquired. The samples in this study consisted of 39 males and 21 females, with a mean age of 11.52 years (SD = 3.27 years; range of 8-18 years). MATERIALS AND METHODS The deep-learning-based protocol was used to segment the maxilla and defect initially, followed by manual refinement. Paired t-tests were performed to characterize the maxillary asymmetry. A multiple linear regression was carried out to investigate the relationship between the defect parameters and those of the cleft side of the maxilla. RESULTS The cleft side of the maxilla demonstrated a significant decrease in maxillary volume and length as well as alveolar length, anterior width, posterior width, anterior height and posterior height. A significant increase in maxillary anterior width was demonstrated on the cleft side of the maxilla. There was a close relationship between the defect parameters and those of the cleft side of the maxilla. CONCLUSIONS Based on the 3D volumetric segmentations, significant hypoplasia of the maxilla on the cleft side existed in the pyriform aperture and alveolar crest area near the defect. The defect structures appeared to contribute to the variability of the maxilla on the cleft side.
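As a small illustration of the paired statistical comparison described above, the sketch below runs a paired t-test on hypothetical cleft-side versus non-cleft-side maxillary volumes; the numbers are invented and do not correspond to the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical paired maxillary volumes (mm^3): cleft side vs. non-cleft side per patient.
cleft_side    = np.array([10850.0, 11230.0, 9875.0, 10420.0, 11010.0, 9960.0])
noncleft_side = np.array([11480.0, 11890.0, 10310.0, 11050.0, 11620.0, 10540.0])

t_stat, p_value = ttest_rel(cleft_side, noncleft_side)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```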
Collapse
Affiliation(s)
- Xiaoyu Wang
- Department of Orthodontics, Beijing Stomatological Hospital, Capital Medical University, Beijing, China.,Department of Stomatology, Beijing Tian Tan Hospital, Capital Medical University, Beijing, China
| | | | - Tai-Hsien Wu
- Division of Orthodontics, College of Dentistry, The Ohio State University, Columbus, OH, USA
| | - Chunfeng Lian
- School of Mathematics and Statistics, Xi'an Jiaotong University, Shaanxi, China
| | - Beatriz Tejera
- Orthodontics, Nova Southeastern University, Ft. Lauderdale, FL, USA
| | - Yan-Ting Lee
- Oral and Craniofacial Health Sciences Research, Adams School of Dentistry, University of North Carolina, Chapel Hill, NC, USA
| | - Feng-Chang Lin
- Department of Biostatistics, University of North Carolina, Chapel Hill, NC, USA
| | - Li Wang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
| | - Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.,School of Biomedical Engineering, ShanghaiTech University, Shanghai, China.,Department of Artificial Intelligence, Korea University, Seoul, Korea
| | - Song Li
- Department of Orthodontics, Beijing Stomatological Hospital, Capital Medical University, Beijing, China
| | - Ching-Chang Ko
- Division of Orthodontics, College of Dentistry, The Ohio State University, Columbus, OH, USA
| |
Collapse
|
31
|
Trelenberg-Stoll V, Drescher D, Wolf M, Becker K. Automated tooth segmentation as an innovative tool to assess 3D-tooth movement and root resorption in rodents. Head Face Med 2021; 17:3. [PMID: 33531044 PMCID: PMC7856769 DOI: 10.1186/s13005-020-00254-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2020] [Accepted: 12/21/2020] [Indexed: 01/22/2023] Open
Abstract
BACKGROUND Orthodontic root resorptions are frequently investigated in small animals, and micro-computed tomography (μCT) enables volumetric comparisons. However, due to overlapping histograms of dentine and bone, accurate quantification of root resorption is challenging. The present study aimed (i) to validate a novel automated approach for tooth segmentation (ATS), (ii) to show that matching of contralateral teeth is suitable for assessing orthodontic tooth movement (OTM) and root resorption (RR), and (iii) to apply the novel approach in an animal trial involving orthodontic tooth movement. METHODS The oral apparatus of three female mice was scanned with μCT. The first molars of each jaw and animal were segmented using ATS (test) and manually (control), and contralateral volumes were compared. Agreement in root volumes and time efficiency were assessed for method validation. In another n = 14 animals, the left first upper molar was protracted for 11 days at 0.5 N, whereas the contralateral molar served as control. Following ATS, OTM and RR were estimated. RESULTS ATS was significantly more time-efficient than the manual approach (81% faster, P < 0.01) and accurate (volume differences: -0.01 ± 0.04 mm3), and contralateral roots had comparable volumes. Protracted molars had significantly lower root volumes (P = 0.03), whereas the amount of OTM showed no linear association with RR (P > 0.05). CONCLUSIONS Within the limits of the study, it was demonstrated that the combination of ATS and registration of contralateral jaws enables measurement of OTM and associated RR in μCT scans.
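Volumetric comparison of segmented roots in μCT, as performed above, reduces to counting labelled voxels and multiplying by the voxel volume. The sketch below illustrates this under the assumption of isotropic voxels; the masks and voxel size are stand-ins, not the study's data.

```python
import numpy as np

def root_volume(mask, voxel_size_mm=0.01):
    """Root volume in mm^3 from a binary segmentation mask, assuming isotropic voxels."""
    return mask.sum() * voxel_size_mm ** 3

rng = np.random.default_rng(4)
control_mask = rng.random((200, 200, 200)) > 0.995    # stand-in for a segmented control root
moved_mask   = rng.random((200, 200, 200)) > 0.9955   # stand-in for the protracted (resorbed) root

print("control root volume (mm^3):   ", round(root_volume(control_mask), 4))
print("protracted root volume (mm^3):", round(root_volume(moved_mask), 4))
print("estimated resorption (mm^3):  ", round(root_volume(control_mask) - root_volume(moved_mask), 4))
```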
Collapse
Affiliation(s)
| | - Dieter Drescher
- Department of Orthodontics, Universitätsklinikum Düsseldorf, Düsseldorf, Germany
| | - Michael Wolf
- Department of Orthodontics, Universitätsklinikum RWTH Aachen, Aachen, Germany
| | - Kathrin Becker
- Department of Orthodontics, Universitätsklinikum Düsseldorf, Düsseldorf, Germany.,Department of Oral Surgery and Implantology, Goethe University, Frankfurt am Main, Germany.
| |
Collapse
|
32
|
Robles M, Carew RM, Morgan RM, Rando C. A step-by-step method for producing 3D crania models from CT data. FORENSIC IMAGING 2020. [DOI: 10.1016/j.fri.2020.200404] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
|
33
|
Kraeima J, Glas HH, Merema BBJ, Vissink A, Spijkervet FKL, Witjes MJH. Three-dimensional virtual surgical planning in the oncologic treatment of the mandible. Oral Dis 2020; 27:14-20. [PMID: 32881177 DOI: 10.1111/odi.13631] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2019] [Revised: 07/30/2020] [Accepted: 08/22/2020] [Indexed: 02/04/2023]
Abstract
OBJECTIVES In the surgical removal of oral squamous cell carcinomas, resection of mandibular bone is frequently part of the treatment. Nowadays, such resections frequently include the application of 3D virtual surgical planning (VSP) and guided surgery techniques. In this paper, current methods for 3D VSP, leads for optimisation of the workflow, and the patient-specific application of guides and implants are reviewed. RECENT FINDINGS Current methods for 3D VSP enable multi-modality fusion of images. This fusion of images is not restricted to a specific software package or workflow. New strategies for 3D VSP in Oral and Maxillofacial Surgery include finite element analysis, deep learning and advanced augmented reality techniques. These strategies aim to improve the treatment in terms of accuracy, predictability and safety. CONCLUSIONS Application of the discussed novel technologies and strategies will improve the accuracy and safety of mandibular resection and reconstruction planning. Accurate, easy-to-use, safe and efficient three-dimensional VSP can be applied to every patient with a malignancy requiring resection of the mandible.
Collapse
Affiliation(s)
- Joep Kraeima
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Haye H Glas
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Bram Barteld Jan Merema
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Arjan Vissink
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Fred K L Spijkervet
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Max J H Witjes
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| |
Collapse
|
34
|
Hung K, Yeung AWK, Tanaka R, Bornstein MM. Current Applications, Opportunities, and Limitations of AI for 3D Imaging in Dental Research and Practice. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2020; 17:ijerph17124424. [PMID: 32575560 PMCID: PMC7345758 DOI: 10.3390/ijerph17124424] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/19/2020] [Revised: 06/12/2020] [Accepted: 06/16/2020] [Indexed: 12/15/2022]
Abstract
The increasing use of three-dimensional (3D) imaging techniques in dental medicine has boosted the development and use of artificial intelligence (AI) systems for various clinical problems. Cone beam computed tomography (CBCT) and intraoral/facial scans are potential sources of image data to develop 3D image-based AI systems for automated diagnosis, treatment planning, and prediction of treatment outcome. This review focuses on current developments and performance of AI for 3D imaging in dentomaxillofacial radiology (DMFR) as well as intraoral and facial scanning. In DMFR, machine learning-based algorithms proposed in the literature focus on three main applications, including automated diagnosis of dental and maxillofacial diseases, localization of anatomical landmarks for orthodontic and orthognathic treatment planning, and general improvement of image quality. Automatic recognition of teeth and diagnosis of facial deformations using AI systems based on intraoral and facial scanning will very likely be a field of increased interest in the future. The review is aimed at providing dental practitioners and interested colleagues in healthcare with a comprehensive understanding of the current trend of AI developments in the field of 3D imaging in dental medicine.
Collapse
Affiliation(s)
- Kuofeng Hung
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong 999077, China; (K.H.); (A.W.K.Y.); (R.T.)
| | - Andy Wai Kan Yeung
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong 999077, China; (K.H.); (A.W.K.Y.); (R.T.)
| | - Ray Tanaka
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong 999077, China; (K.H.); (A.W.K.Y.); (R.T.)
| | - Michael M. Bornstein
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong 999077, China; (K.H.); (A.W.K.Y.); (R.T.)
- Department of Oral Health & Medicine, University Center for Dental Medicine Basel UZB, University of Basel, 4058 Basel, Switzerland
- Correspondence: ; Tel.: +41-(0)61-267-25-45
| |
Collapse
|