1
Liu W, Li X, Liu C, Gao G, Xiong Y, Zhu T, Zeng W, Guo J, Tang W. Automatic classification and segmentation of multiclass jaw lesions in cone-beam CT using deep learning. Dentomaxillofac Radiol 2024; 53:439-446. [PMID: 38937280] [DOI: 10.1093/dmfr/twae028]
Abstract
OBJECTIVES To develop and validate a modified deep learning (DL) model based on nnU-Net for classifying and segmenting five-class jaw lesions using cone-beam CT (CBCT). METHODS A total of 368 CBCT scans (37 168 slices) were used to train a multi-class segmentation model. The data underwent manual annotation by two oral and maxillofacial surgeons (OMSs) to serve as ground truth. Sensitivity, specificity, precision, F1-score, and accuracy were used to evaluate the classification performance of the model and of doctors with or without artificial intelligence assistance. The Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and segmentation time were used to evaluate the segmentation performance of the model. RESULTS The model achieved the dual task of classifying and segmenting jaw lesions in CBCT. For classification, the sensitivity, specificity, precision, and accuracy of the model were 0.871, 0.974, 0.874, and 0.891, respectively, surpassing oral and maxillofacial radiologists (OMFRs) and OMSs and approaching the performance of a specialist. With the model's assistance, the classification performance of OMFRs and OMSs improved, particularly for odontogenic keratocyst (OKC) and ameloblastoma (AM), with F1-score improvements ranging from 6.2% to 12.7%. For segmentation, the DSC was 87.2% and the ASSD was 1.359 mm. The model's average segmentation time was 40 ± 9.9 s, compared with 25 ± 7.2 min for OMSs. CONCLUSIONS The proposed DL model accurately and efficiently classified and segmented five classes of jaw lesions using CBCT. In addition, it could assist doctors in improving classification accuracy and segmentation efficiency, particularly in distinguishing easily confused lesions (e.g., AM and OKC).
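The per-class metrics quoted in this abstract (sensitivity, specificity, precision, F1-score, accuracy) all derive from confusion-matrix counts. A minimal sketch, using invented counts rather than the study's data:

```python
# Per-class classification metrics from confusion-matrix counts.
# The counts below are invented for illustration only.

def classification_metrics(tp, fp, tn, fn):
    """Return sensitivity, specificity, precision, F1-score, and accuracy."""
    sensitivity = tp / (tp + fn)          # also called recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, precision, f1, accuracy

# Toy counts for one lesion class in a multi-class task:
sens, spec, prec, f1, acc = classification_metrics(tp=87, fp=13, tn=380, fn=13)
```

In a five-class task these are computed one-vs-rest per class and then averaged or reported per lesion type, as in the F1-score improvements quoted above.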
Affiliation(s)
- Wei Liu
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Xiang Li
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, China
- Chang Liu
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Ge Gao
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Yutao Xiong
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Tao Zhu
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Wei Zeng
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Jixiang Guo
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, China
- Wei Tang
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
2
Macrì M, D’Albis V, D’Albis G, Forte M, Capodiferro S, Favia G, Alrashadah AO, García VDF, Festa F. The Role and Applications of Artificial Intelligence in Dental Implant Planning: A Systematic Review. Bioengineering (Basel) 2024; 11:778. [PMID: 39199736] [PMCID: PMC11351972] [DOI: 10.3390/bioengineering11080778]
Abstract
Artificial intelligence (AI) is revolutionizing dentistry, offering new opportunities to improve the precision and efficiency of implantology. This literature review aims to evaluate the current evidence on the use of AI in implant planning assessment. The analysis was conducted through the PubMed and Scopus search engines, using a combination of relevant keywords, including "artificial intelligence implantology", "AI implant planning", "AI dental implant", and "implantology artificial intelligence". Selected articles were carefully reviewed to identify studies reporting data on the effectiveness of AI in implant planning. The results of the literature review indicate a growing interest in the application of AI in implant planning, with evidence suggesting an improvement in precision and predictability compared with traditional methods. The findings of the included studies summarize the latest AI developments in implant planning, demonstrating its application to the automated detection of bone, the maxillary sinus, neural structures, and teeth. However, some disadvantages were also identified, including the need for high-quality training data and the lack of standardization in protocols. In conclusion, the use of AI in implant planning presents promising prospects for improving clinical outcomes and optimizing patient management. However, further research is needed to fully understand its potential and to address the challenges associated with its implementation in clinical practice.
Affiliation(s)
- Monica Macrì
- Department of Innovative Technologies in Medicine & Dentistry, University “G. D’Annunzio” of Chieti-Pescara, 66100 Chieti, Italy
- Vincenzo D’Albis
- Department of Innovative Technologies in Medicine & Dentistry, University “G. D’Annunzio” of Chieti-Pescara, 66100 Chieti, Italy
- Giuseppe D’Albis
- Department of Interdisciplinary Medicine, University of Bari Aldo Moro, 70121 Bari, Italy
- Marta Forte
- Department of Interdisciplinary Medicine, University of Bari Aldo Moro, 70121 Bari, Italy
- Saverio Capodiferro
- Department of Interdisciplinary Medicine, University of Bari Aldo Moro, 70121 Bari, Italy
- Gianfranco Favia
- Department of Interdisciplinary Medicine, University of Bari Aldo Moro, 70121 Bari, Italy
- Victor Diaz-Flores García
- Department of Pre-Clinical Dentistry, School of Biomedical Sciences, Universidad Europea de Madrid, Villaviciosa de Odón, 28670 Madrid, Spain
- Felice Festa
- Department of Innovative Technologies in Medicine & Dentistry, University “G. D’Annunzio” of Chieti-Pescara, 66100 Chieti, Italy
3
Assiri HA, Hameed MS, Alqarni A, Dawasaz AA, Arem SA, Assiri KI. Artificial Intelligence Application in a Case of Mandibular Third Molar Impaction: A Systematic Review of the Literature. J Clin Med 2024; 13:4431. [PMID: 39124697] [PMCID: PMC11313288] [DOI: 10.3390/jcm13154431]
Abstract
Objective: This systematic review aims to summarize the evidence on the use and applicability of AI in impacted mandibular third molars. Methods: Searches were performed in the following databases: PubMed, Scopus, and Google Scholar. The study protocol is registered at the International Platform of Registered Systematic Review and Meta-analysis Protocols (INPLASY202460081). The retrieved articles were subjected to an exhaustive review based on the inclusion and exclusion criteria of the study. Articles on the use of AI for diagnosis, treatment, and treatment planning in patients with impacted mandibular third molars were included. Results: Twenty-one articles were selected and evaluated using the Scottish Intercollegiate Guidelines Network (SIGN) evidence quality scale. Most of the analyzed studies dealt with using AI to determine the relationship between the mandibular canal and the impacted mandibular third molar. The average quality of the articles included in this review was 2+, which indicated that the level of evidence, according to the SIGN protocol, was B. Conclusions: Compared to human observers, AI models have demonstrated decent performance in determining the morphology, anatomy, and relationship of the impaction with the inferior alveolar nerve canal. However, eruption prediction and other emerging applications of AI models are still at an early stage of development. Additional studies estimating eruption in mixed and permanent dentition are warranted to establish a comprehensive model for identifying, diagnosing, and predicting third molar eruptions and determining the treatment outcomes in the case of impacted teeth. This will help clinicians make better decisions and achieve better treatment outcomes.
Affiliation(s)
- Hassan Ahmed Assiri
- Department of Diagnostic Science and Oral Biology, College of Dentistry, King Khalid University, P.O. Box 960, Abha City 61421, Saudi Arabia
4
Alrashed S, Dutra V, Chu TMG, Yang CC, Lin WS. Influence of exposure protocol, voxel size, and artifact removal algorithm on the trueness of segmentation utilizing an artificial-intelligence-based system. J Prosthodont 2024; 33:574-583. [PMID: 38305665] [DOI: 10.1111/jopr.13827]
Abstract
PURPOSE To evaluate the effects of exposure protocol, voxel sizes, and artifact removal algorithms on the trueness of segmentation in various mandible regions using an artificial intelligence (AI)-based system. MATERIALS AND METHODS Eleven dry human mandibles were scanned using a cone beam computed tomography (CBCT) scanner under differing exposure protocols (standard and ultra-low), voxel sizes (0.15 mm, 0.3 mm, and 0.45 mm), and with or without an artifact removal algorithm. The resulting datasets were segmented using an AI-based system, exported as 3D models, and compared to reference files derived from a white-light laboratory scanner. Deviation measurement was performed using a computer-aided design (CAD) program and recorded as root mean square (RMS). The RMS values were used as a representation of the trueness of the AI-segmented 3D models. A 4-way ANOVA was used to assess the impact of voxel size, exposure protocol, artifact removal algorithm, and location on RMS values (α = 0.05). RESULTS Significant effects were found with voxel size (p < 0.001) and location (p < 0.001), but not with exposure protocol (p = 0.259) or artifact removal algorithm (p = 0.752). Standard exposure groups had significantly lower RMS values than the ultra-low exposure groups in the mandible body with 0.3 mm (p = 0.014) or 0.45 mm (p < 0.001) voxel sizes, the symphysis with a 0.45 mm voxel size (p = 0.011), and the whole mandible with a 0.45 mm voxel size (p = 0.001). Exposure protocol did not affect RMS values at the teeth and alveolar bone (p = 0.544), mandible angles (p = 0.380), condyles (p = 0.114), and coronoids (p = 0.806). CONCLUSION This study informs optimal exposure protocol and voxel size choices in CBCT imaging for accurate AI-based automatic segmentation with minimal radiation. The artifact removal algorithm did not influence the trueness of AI segmentation. When using an ultra-low exposure protocol to minimize patient radiation exposure in AI segmentations, a voxel size of 0.15 mm is recommended, while a voxel size of 0.45 mm should be avoided.
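The trueness metric in this study, the root mean square (RMS) of surface deviations between the AI-segmented model and the reference scan, can be sketched as follows; the deviation values are invented, not measured data:

```python
import numpy as np

def rms_deviation(deviations_mm):
    """Root mean square of signed point-to-surface deviations (mm),
    as reported by CAD comparison programs."""
    d = np.asarray(deviations_mm, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

# Invented per-point deviations between a segmented and a reference mesh:
rms = rms_deviation([0.10, -0.20, 0.05, -0.15])
```

Because the deviations are squared before averaging, RMS penalizes large local errors more than a plain mean absolute deviation would, which is why it is favored as a trueness measure.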
Affiliation(s)
- Safa Alrashed
- Oral Biology PhD program in the College of Dentistry, Division of Restorative and Prosthetic Dentistry, The Ohio State University, Columbus, Ohio, USA
- Vinicius Dutra
- Department of Oral Pathology, Medicine, and Radiology, Indiana University School of Dentistry, Indianapolis, Indiana, USA
- Tien-Min G Chu
- Department of Biomedical Sciences and Comprehensive Care, Indiana University School of Dentistry, Indianapolis, Indiana, USA
- Chao-Chieh Yang
- Department of Prosthodontics, Indiana University School of Dentistry, Indianapolis, Indiana, USA
- Advanced Education Program in Prosthodontics, Department of Prosthodontics, Indiana University School of Dentistry, Indianapolis, Indiana, USA
- Wei-Shao Lin
- Department of Prosthodontics, Indiana University School of Dentistry, Indianapolis, Indiana, USA
- Advanced Education Program in Prosthodontics, Department of Prosthodontics, Indiana University School of Dentistry, Indianapolis, Indiana, USA
5
Park D, Park EA, Jeong B, Lee W. A comparative analysis of deep learning-based location-adaptive threshold method software against other commercially available software. Int J Cardiovasc Imaging 2024; 40:1269-1281. [PMID: 38634943] [PMCID: PMC11213768] [DOI: 10.1007/s10554-024-03099-7]
Abstract
Automatic segmentation of the coronary artery using coronary computed tomography angiography (CCTA) images can facilitate several analyses related to coronary artery disease (CAD). Accurate segmentation of the lumen or plaque region is one of the most important factors. This study aimed to analyze the performance of the coronary artery segmentation of a software platform with a deep learning-based location-adaptive threshold method (DL-LATM) against commercially available software platforms using CCTA. The dataset from intravascular ultrasound (IVUS) of 26 vessel segments from 19 patients was used as the gold standard to evaluate the performance of each software platform. Statistical analyses (Pearson correlation coefficient [PCC], intraclass correlation coefficient [ICC], and Bland-Altman plot) were conducted for the lumen or plaque parameters by comparing the dataset of each software platform with IVUS. The software platform with DL-LATM showed the bias closest to zero for detecting lumen volume (mean difference = -9.1 mm3, 95% confidence interval [CI] = -18.6 to 0.4 mm3) or area (mean difference = -0.72 mm2, 95% CI = -0.80 to -0.64 mm2) with the highest PCC and ICC. Moreover, lumen or plaque area in the stenotic region was analyzed. The software platform with DL-LATM showed the bias closest to zero for detecting lumen (mean difference = -0.07 mm2, 95% CI = -0.16 to 0.02 mm2) or plaque area (mean difference = 1.70 mm2, 95% CI = 1.37 to 2.03 mm2) in the stenotic region with a significantly higher correlation coefficient than other commercially available software platforms (p < 0.001). These results show that the software platform with DL-LATM has the potential to serve as an aid for CAD evaluation.
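The Bland-Altman quantities reported here (bias, i.e. the mean difference between two methods, with its 95% confidence interval) can be sketched as below. The arrays are invented toy measurements, not the IVUS data:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias (mean difference), 95% CI of the bias, and 95% limits of
    agreement between paired measurements from two methods."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))              # sample SD of the differences
    half_ci = 1.96 * sd / np.sqrt(diff.size)  # CI of the mean difference
    ci = (bias - half_ci, bias + half_ci)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # limits of agreement
    return bias, ci, loa

# Toy lumen-area measurements (mm^2) from two methods:
bias, ci, loa = bland_altman([10.0, 12.0, 11.0, 13.0], [9.0, 11.0, 12.0, 12.0])
```

A bias close to zero with a narrow CI, as reported for the DL-LATM platform, indicates close agreement with the gold standard on average.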
Affiliation(s)
- Daebeom Park
- Department of Clinical Medical Sciences, Seoul National University College of Medicine, Seoul, Korea
- Eun-Ah Park
- Department of Radiology, Seoul National University Hospital, Seoul, Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Baren Jeong
- Department of Radiology, Seoul National University Hospital, Seoul, Korea
- Whal Lee
- Department of Radiology, Seoul National University Hospital, Seoul, Korea.
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea.
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Korea.
- Department of Clinical Medical Sciences, Seoul National University College of Medicine, Seoul, Korea.
6
Ni FD, Xu ZN, Liu MQ, Zhang MJ, Li S, Bai HL, Ding P, Fu KY. Towards clinically applicable automated mandibular canal segmentation on CBCT. J Dent 2024; 144:104931. [PMID: 38458378] [DOI: 10.1016/j.jdent.2024.104931]
Abstract
OBJECTIVES To develop a deep learning-based system for precise, robust, and fully automated segmentation of the mandibular canal on cone beam computed tomography (CBCT) images. METHODS The system was developed on 536 CBCT scans (training set: 376, validation set: 80, testing set: 80) from one center and validated on an external dataset of 89 CBCT scans from 3 centers. Each scan was annotated using a multi-stage annotation method and refined by oral and maxillofacial radiologists. We proposed a three-step strategy for the mandibular canal segmentation: extraction of the region of interest based on 2D U-Net, global segmentation of the mandibular canal, and segmentation refinement based on 3D U-Net. RESULTS The system consistently achieved accurate mandibular canal segmentation in the internal set (Dice similarity coefficient [DSC], 0.952; intersection over union [IoU], 0.912; average symmetric surface distance [ASSD], 0.046 mm; 95% Hausdorff distance [HD95], 0.325 mm) and the external set (DSC, 0.960; IoU, 0.924; ASSD, 0.040 mm; HD95, 0.288 mm). CONCLUSIONS These results demonstrated the potential clinical application of this AI system in facilitating clinical workflows related to mandibular canal localization. CLINICAL SIGNIFICANCE Accurate delineation of the mandibular canal on CBCT images is critical for implant placement, mandibular third molar extraction, and orthognathic surgery. This AI system enables accurate segmentation across different models, which could contribute to more efficient and precise dental automation systems.
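The overlap metrics used in this and several other studies above, the Dice similarity coefficient (DSC) and intersection over union (IoU), compare a predicted binary mask against the ground-truth mask. A minimal sketch with toy 1-D masks (the real inputs are 3-D CBCT voxel volumes):

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice similarity coefficient and intersection-over-union
    for two binary segmentation masks of equal shape."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return float(dsc), float(iou)

# Toy 1-D masks; a perfect segmentation gives DSC = IoU = 1.0:
dsc, iou = dice_iou([1, 1, 0, 0], [1, 0, 1, 0])
```

DSC and IoU measure the same overlap on different scales (DSC = 2·IoU/(1+IoU)), which is why the study's DSC of 0.952 pairs with an IoU of 0.912.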
Affiliation(s)
- Fang-Duan Ni
- Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China
- Mu-Qing Liu
- Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China.
- Min-Juan Zhang
- Second Dental Center, Peking University Hospital of Stomatology, Beijing 100101, China
- Shu Li
- Department of Stomatology, Beijing Hospital, Beijing 100005, China
- Kai-Yuan Fu
- Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China.
7
Elgarba BM, Fontenele RC, Tarce M, Jacobs R. Artificial intelligence serving pre-surgical digital implant planning: A scoping review. J Dent 2024; 143:104862. [PMID: 38336018] [DOI: 10.1016/j.jdent.2024.104862]
Abstract
OBJECTIVES To conduct a scoping review focusing on artificial intelligence (AI) applications in presurgical dental implant planning. Additionally, to assess the automation degree of clinically available pre-surgical implant planning software. DATA AND SOURCES A systematic electronic literature search was performed in five databases (PubMed, Embase, Web of Science, Cochrane Library, and Scopus), along with exploring gray literature web-based resources until November 2023. English-language studies on AI-driven tools for digital implant planning were included based on an independent evaluation by two reviewers. An assessment of automation steps in dental implant planning software available on the market up to November 2023 was also performed. STUDY SELECTION AND RESULTS From an initial 1,732 studies, 47 met eligibility criteria. Within this subset, 39 studies focused on AI networks for anatomical landmark-based segmentation, creating virtual patients. Eight studies were dedicated to AI networks for virtual implant placement. Additionally, a total of 12 commonly available implant planning software applications were identified and assessed for their level of automation in pre-surgical digital implant workflows. Notably, only six of these featured at least one fully automated step in the planning software, with none possessing a fully automated implant planning protocol. CONCLUSIONS AI plays a crucial role in achieving accurate, time-efficient, and consistent segmentation of anatomical landmarks, serving the process of virtual patient creation. Additionally, currently available systems for virtual implant placement demonstrate different degrees of automation. It is important to highlight that, as of now, full automation of this process has not been documented nor scientifically validated. CLINICAL SIGNIFICANCE Scientific and clinical validation of AI applications for presurgical dental implant planning is currently scarce. The present review allows the clinician to identify AI-based automation in presurgical dental implant planning and assess the potential underlying scientific validation.
Affiliation(s)
- Bahaaeldeen M Elgarba
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium & Department of Prosthodontics, Faculty of Dentistry, Tanta University, 31511 Tanta, Egypt.
- Rocharles Cavalcante Fontenele
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium
- Mihai Tarce
- Division of Periodontology & Implant Dentistry, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China & Periodontology and Oral Microbiology, Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, Leuven, Belgium
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium & Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
8
Kise Y, Kuwada C, Mori M, Fukuda M, Ariji Y, Ariji E. Deep learning system for distinguishing between nasopalatine duct cysts and radicular cysts arising in the midline region of the anterior maxilla on panoramic radiographs. Imaging Sci Dent 2024; 54:33-41. [PMID: 38571775] [PMCID: PMC10985522] [DOI: 10.5624/isd.20230169]
Abstract
Purpose The aims of this study were to create a deep learning model to distinguish between nasopalatine duct cysts (NDCs), radicular cysts, and no-lesion (normal) cases in the midline region of the anterior maxilla on panoramic radiographs and to compare its performance with that of dental residents. Materials and Methods One hundred patients with a confirmed diagnosis of NDC (53 men, 47 women; average age, 44.6±16.5 years), 100 with radicular cysts (49 men, 51 women; average age, 47.5±16.4 years), and 100 normal controls (56 men, 44 women; average age, 34.4±14.6 years) were enrolled in this study. Cases were randomly assigned to the training dataset (80%) and the test dataset (20%). Then, 20% of the training data were randomly assigned as validation data. A learning model was created using a customized DetectNet built in Digits version 5.0 (NVIDIA, Santa Clara, USA). The performance of the deep learning system was assessed and compared with that of two dental residents. Results The performance of the deep learning system was superior to that of the dental residents except for the recall of radicular cysts. The areas under the curve (AUCs) for NDCs and radicular cysts in the deep learning system were significantly higher than those of the dental residents. The results for the dental residents revealed a significant difference in AUC between NDCs and the normal group. Conclusion The deep learning system showed superior performance in detecting NDCs and radicular cysts and in distinguishing these lesions from normal cases.
Affiliation(s)
- Yoshitaka Kise
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Chiaki Kuwada
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Mizuho Mori
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Motoki Fukuda
- Department of Oral Radiology, School of Dentistry, Osaka Dental University, Osaka, Japan
- Yoshiko Ariji
- Department of Oral Radiology, School of Dentistry, Osaka Dental University, Osaka, Japan
- Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
9
Lamy J, Taoutel R, Chamoun R, Akar J, Niederer S, Mojibian H, Huber S, Baldassarre LA, Meadows J, Peters DC. Atrial fibrosis by cardiac MRI is a correlate for atrial stiffness in patients with atrial fibrillation. Int J Cardiovasc Imaging 2024; 40:107-117. [PMID: 37857929] [PMCID: PMC11378145] [DOI: 10.1007/s10554-023-02968-x]
Abstract
A relationship between left atrial strain and pressure has been demonstrated in many studies, but not in an atrial fibrillation (AF) cohort. In this work, we hypothesized that elevated left atrial (LA) tissue fibrosis might mediate and confound the LA strain vs. pressure relationship, resulting instead in a relationship between LA fibrosis and stiffness index (mean LA pressure/LA reservoir strain). Sixty-seven patients with AF underwent a standard cardiac MR exam including long-axis cine views (2- and 4-chamber) and a free-breathing high-resolution three-dimensional late gadolinium enhancement (LGE) of the atrium (N = 41), within 30 days prior to AF ablation, at which procedure invasive mean left atrial pressure (LAP) was measured. LV and LA volumes, EF, and a comprehensive set of LA strains (strain, strain rates, and strain timings during the atrial reservoir, conduit, and active phases, i.e. active atrial contraction) were measured, and LA fibrosis content (LGE (ml)) was assessed from 3D LGE volumes. LA LGE was well correlated to atrial stiffness index overall (R = 0.59, p < 0.001), and among patient subgroups. Pressure was only correlated to maximal LA volume (R = 0.32) and the time to peak reservoir strain rate (R = 0.32) (both p < 0.01), among all functional measurements. LA reservoir strain was strongly correlated with LAEF (R = 0.95, p < 0.001) and LA minimum volume (R = 0.82, p < 0.001). In our AF cohort, pressure is correlated to maximum LA volume and time to peak reservoir strain. LA pressure/LA reservoir strain, a metric of stiffness, correlates with LA fibrosis (LA LGE), reflecting Hooke's law.
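The stiffness index defined above (mean LA pressure divided by reservoir strain) and the Pearson R values it was correlated against can be sketched as follows; all numbers are invented toy values, not the cohort's measurements:

```python
import numpy as np

def stiffness_index(mean_lap, reservoir_strain):
    """LA stiffness index: mean LA pressure (mmHg) / reservoir strain (%)."""
    return mean_lap / reservoir_strain

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

si = stiffness_index(12.0, 24.0)                   # toy pressure and strain
r = pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])    # perfectly correlated toy data
```

A higher index (higher pressure for a given strain) indicates a stiffer atrium, which is the quantity the study correlates with LGE-measured fibrosis.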
Affiliation(s)
- Jérôme Lamy
- Department of Radiology and Biomedical Imaging, Yale Magnetic Resonance Research Center, Yale University, 300 Cedar St, TAC N117, PO Box 208043, New Haven, CT, 06520, USA
- Roy Taoutel
- Department of Medicine, Cardiovascular Division, Yale University, New Haven, CT, USA
- Romy Chamoun
- Department of Medicine, Cardiovascular Division, Yale University, New Haven, CT, USA
- Joseph Akar
- Department of Medicine, Cardiovascular Division, Yale University, New Haven, CT, USA
- Hamid Mojibian
- Department of Radiology and Biomedical Imaging, Yale Magnetic Resonance Research Center, Yale University, 300 Cedar St, TAC N117, PO Box 208043, New Haven, CT, 06520, USA
- Steffen Huber
- Department of Radiology and Biomedical Imaging, Yale Magnetic Resonance Research Center, Yale University, 300 Cedar St, TAC N117, PO Box 208043, New Haven, CT, 06520, USA
- Lauren A Baldassarre
- Department of Medicine, Cardiovascular Division, Yale University, New Haven, CT, USA
- Judith Meadows
- Department of Medicine, Cardiovascular Division, Yale University, New Haven, CT, USA
- Dana C Peters
- Department of Radiology and Biomedical Imaging, Yale Magnetic Resonance Research Center, Yale University, 300 Cedar St, TAC N117, PO Box 208043, New Haven, CT, 06520, USA.
10
Lv J, Zhang L, Xu J, Li W, Li G, Zhou H. Automatic segmentation of mandibular canal using transformer based neural networks. Front Bioeng Biotechnol 2023; 11:1302524. [PMID: 38047288] [PMCID: PMC10693337] [DOI: 10.3389/fbioe.2023.1302524]
Abstract
Accurate 3D localization of the mandibular canal is crucial for the success of digitally-assisted dental surgeries. Damage to the mandibular canal may result in severe consequences for the patient, including acute pain, numbness, or even facial paralysis. As such, the development of a fast, stable, and highly precise method for mandibular canal segmentation is paramount for enhancing the success rate of dental surgical procedures. Nonetheless, the task of mandibular canal segmentation is fraught with challenges, including a severe imbalance between positive and negative samples and indistinct boundaries, which often compromise the completeness of existing segmentation methods. To surmount these challenges, we propose an innovative, fully automated segmentation approach for the mandibular canal. Our methodology employs a Transformer architecture in conjunction with clDice loss to ensure that the model concentrates on the connectivity of the mandibular canal. Additionally, we introduce a pixel-level feature fusion technique to bolster the model's sensitivity to fine-grained details of the canal structure. To tackle the issue of sample imbalance and vague boundaries, we implement a strategy founded on mandibular foramen localization to isolate the maximally connected domain of the mandibular canal. Furthermore, a contrast enhancement technique is employed for pre-processing the raw data. We also adopt a Deep Label Fusion strategy for pre-training on synthetic datasets, which substantially elevates the model's performance. Empirical evaluations on a publicly accessible mandibular canal dataset reveal superior performance metrics: a Dice score of 0.844, a clDice score of 0.961, an IoU of 0.731, and an HD95 of 2.947 mm. These results not only validate the efficacy of our approach but also establish its state-of-the-art performance on the public mandibular canal dataset.
Affiliation(s)
- Wang Li
- School of Pharmacy and Bioengineering, Chongqing University of Technology, Chongqing, China
11
Zhang L, Li W, Lv J, Xu J, Zhou H, Li G, Ai K. Advancements in oral and maxillofacial surgery medical images segmentation techniques: An overview. J Dent 2023; 138:104727. [PMID: 37769934 DOI: 10.1016/j.jdent.2023.104727] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2023] [Revised: 09/12/2023] [Accepted: 09/25/2023] [Indexed: 10/03/2023] Open
Abstract
OBJECTIVES This article reviews recent advances in computer-aided segmentation methods for oral and maxillofacial surgery and describes the advantages and limitations of these methods. The objective is to provide an invaluable resource for precise therapy and surgical planning in oral and maxillofacial surgery. STUDY SELECTION, DATA AND SOURCES This review includes full-text articles and conference proceedings reporting the application of segmentation methods in the field of oral and maxillofacial surgery. The research focuses on three aspects: tooth detection and segmentation, mandibular canal segmentation, and alveolar bone segmentation. The most commonly used imaging technique is CBCT, followed by conventional CT and orthopantomography. A systematic electronic database search was performed up to July 2023 (Medline via PubMed, IEEE Xplore, ArXiv, and Google Scholar were searched). RESULTS These segmentation methods can be mainly divided into two categories: traditional image processing and machine learning (including deep learning). Performance testing on datasets of images labeled by medical professionals shows that these methods perform similarly to dentists' annotations, confirming their effectiveness. However, no studies have evaluated their practical application value. CONCLUSION Segmentation methods (particularly deep learning methods) have demonstrated unprecedented performance, while inherent challenges remain, including the scarcity and inconsistency of datasets, visible artifacts in images, unbalanced data distribution, and the "black box" nature of the models. CLINICAL SIGNIFICANCE Accurate image segmentation is critical for precise treatment and surgical planning in oral and maxillofacial surgery. This review aims to facilitate more accurate and effective surgical treatment planning among dental researchers.
Affiliation(s)
- Lang Zhang
- School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Wang Li
- School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Jinxun Lv
- School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Jiajie Xu
- School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Hengyu Zhou
- School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Gen Li
- School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Keqi Ai
- Department of Radiology, Xinqiao Hospital, Army Medical University, Chongqing 400037, China
12
Jindanil T, Marinho-Vieira LE, de-Azevedo-Vaz SL, Jacobs R. A unique artificial intelligence-based tool for automated CBCT segmentation of mandibular incisive canal. Dentomaxillofac Radiol 2023; 52:20230321. [PMID: 37870152 DOI: 10.1259/dmfr.20230321] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2023] Open
Abstract
OBJECTIVES To develop and validate a novel artificial intelligence (AI) tool for automated segmentation of the mandibular incisive canal on cone beam computed tomography (CBCT) scans. METHODS After ethical approval, a data set of 200 CBCT scans was selected and categorized into training (160), validation (20), and test (20) sets. CBCT scans were imported into Virtual Patient Creator, and ground truth for training and validation was manually segmented by three oral radiologists in multiplanar reconstructions. Intra- and interobserver analysis of human segmentation variability was performed on 20% of the data set. Segmentations were imported into Mimics for standardization, and the resulting files were imported into 3-Matic for analysis using surface- and voxel-based methods. Evaluation metrics involved time efficiency and analysis metrics including the Dice similarity coefficient (DSC), intersection over union (IoU), root mean square error (RMSE), precision, recall, accuracy, and consistency. These values were calculated for AI-based segmentation and refined-AI segmentation, each compared to manual segmentation. RESULTS Average times for AI-based segmentation, refined-AI segmentation, and manual segmentation were 00:10, 08:09, and 47:18 (min:s), respectively, a 284-fold time reduction for the AI-based method. AI-based segmentation showed mean values of DSC 0.873, IoU 0.775, RMSE 0.256 mm, precision 0.837, and recall 0.890, while refined-AI segmentation provided DSC 0.876, IoU 0.781, RMSE 0.267 mm, precision 0.852, and recall 0.902, with an accuracy of 0.998 for both methods. Consistency was 1 for AI-based segmentation and 0.910 for manual segmentation. CONCLUSIONS An innovative AI tool for automated segmentation of the mandibular incisive canal on CBCT scans proved to be accurate, time efficient, and highly consistent, serving pre-surgical planning.
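For reference, voxel-wise precision, recall, and accuracy of the kind quoted in such validation studies can be computed from two binary masks as follows (an illustrative numpy sketch, not the study's 3-Matic pipeline):

```python
import numpy as np

def voxel_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Voxel-wise precision, recall, and accuracy for two binary masks
    of the same shape (prediction vs. ground truth)."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    return {
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

Note that for a small structure such as the incisive canal, accuracy is dominated by the overwhelming number of true-negative background voxels (hence values like 0.998), which is why overlap measures such as DSC and IoU are reported alongside it.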
Affiliation(s)
- Thanatchaporn Jindanil
- Department of Imaging and Pathology, Faculty of Medicine, OMFS-IMPATH Research Group, KU Leuven, Leuven, Belgium
- Luiz Eduardo Marinho-Vieira
- Department of Imaging and Pathology, Faculty of Medicine, OMFS-IMPATH Research Group, KU Leuven, Leuven, Belgium
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas, Piracicaba, Brazil
- Reinhilde Jacobs
- Department of Imaging and Pathology, Faculty of Medicine, OMFS-IMPATH Research Group, KU Leuven, Leuven, Belgium
- Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
13
Chun SY, Kang YH, Yang S, Kang SR, Lee SJ, Kim JM, Kim JE, Huh KH, Lee SS, Heo MS, Yi WJ. Automatic classification of 3D positional relationship between mandibular third molar and inferior alveolar canal using a distance-aware network. BMC Oral Health 2023; 23:794. [PMID: 37880603 PMCID: PMC10598947 DOI: 10.1186/s12903-023-03496-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Accepted: 10/05/2023] [Indexed: 10/27/2023] Open
Abstract
The purpose of this study was to automatically classify the three-dimensional (3D) positional relationship between an impacted mandibular third molar (M3) and the inferior alveolar canal (MC) using a distance-aware network in cone-beam CT (CBCT) images. We developed a network consisting of cascaded stages of segmentation and classification for the buccal-lingual relationship between the M3 and the MC. The M3 and the MC were simultaneously segmented using Dense121 U-Net in the segmentation stage, and their buccal-lingual relationship was automatically classified using a 3D distance-aware network with the multichannel inputs of the original CBCT image and the signed distance map (SDM) generated from the segmentation in the classification stage. The Dense121 U-Net achieved the highest average precision of 0.87, 0.96, and 0.94 in the segmentation of the M3, the MC, and both together, respectively. The 3D distance-aware classification network of the Dense121 U-Net with the input of both the CBCT image and the SDM showed the highest performance of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve, each of which had a value of 1.00. The SDM generated from the segmentation mask significantly contributed to increasing the accuracy of the classification network. The proposed distance-aware network demonstrated high accuracy in the automatic classification of the 3D positional relationship between the M3 and the MC by learning anatomical and geometrical information from the CBCT images.
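The signed distance map (SDM) used above as an extra input channel can be derived from a binary segmentation mask with two Euclidean distance transforms. A minimal sketch, assuming the common convention of negative values inside the structure and positive outside (the authors' sign convention may differ):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Signed Euclidean distance map for a binary mask: negative inside
    the structure, positive outside, with magnitude equal to the
    distance to the nearest voxel of the opposite class."""
    inside = distance_transform_edt(mask)    # distance to background, inside
    outside = distance_transform_edt(~mask)  # distance to foreground, outside
    return outside - inside
```

Feeding such a map alongside the raw image gives the classifier explicit geometric information (how far each voxel lies from the M3 and MC surfaces), which is the "distance-aware" idea the abstract credits for the accuracy gain.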
Affiliation(s)
- So-Young Chun
- Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, South Korea
- Yun-Hui Kang
- Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, Seoul, South Korea
- Su Yang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Se-Ryong Kang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Jun-Min Kim
- Department of Electronics and Information Engineering, Hansung University, Seoul, South Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Sam-Sun Lee
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Won-Jin Yi
- Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, South Korea
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
14
Bağ İ, Bilgir E, Bayrakdar İŞ, Baydar O, Atak FM, Çelik Ö, Orhan K. An artificial intelligence study: automatic description of anatomic landmarks on panoramic radiographs in the pediatric population. BMC Oral Health 2023; 23:764. [PMID: 37848870 PMCID: PMC10583406 DOI: 10.1186/s12903-023-03532-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2023] [Accepted: 10/11/2023] [Indexed: 10/19/2023] Open
Abstract
BACKGROUND Panoramic radiographs, in which anatomic landmarks can be observed, are used to detect cases closely related to pediatric dentistry. The purpose of the study is to investigate the success and reliability of artificial intelligence in detecting maxillary and mandibular anatomic structures observed on panoramic radiographs in children. METHODS A total of 981 mixed images of pediatric patients were labelled for 9 different pediatric anatomic landmarks, including the maxillary sinus, orbita, mandibular canal, mental foramen, foramen mandible, incisura mandible, articular eminence, and condylar and coronoid processes; training was carried out for 500 epochs using 2D convolutional neural network (CNN) architectures, and PyTorch-implemented YOLO-v5 models were produced. The success of the AI models' predictions was tested on a 10% test data set. RESULTS A total of 14,804 labels were made, comprising maxillary sinus (1922), orbita (1944), mandibular canal (1879), mental foramen (884), foramen mandible (1885), incisura mandible (1922), articular eminence (1645), and condylar (1733) and coronoid (990) processes. The most successful F1 scores were obtained for orbita (1), incisura mandible (0.99), maxillary sinus (0.98), and mandibular canal (0.97). The best sensitivity values were obtained for orbita, maxillary sinus, mandibular canal, incisura mandible, and the condylar process; the worst sensitivity values were obtained for the mental foramen (0.92) and articular eminence (0.92). CONCLUSIONS The regular and standardized labelling, the relatively larger areas, and the success of the YOLO-v5 algorithm contributed to these successful results. Automatic segmentation of these structures will save time for physicians in clinical diagnosis and will increase the visibility of pathologies related to these structures and the awareness of physicians.
Affiliation(s)
- İrem Bağ
- Department of Pediatric Dentistry, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- Elif Bilgir
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- İbrahim Şevki Bayrakdar
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- Oğuzhan Baydar
- Dentomaxillofacial Radiology Specialist, Faculty of Dentistry, Ege University, İzmir, Turkey
- Fatih Mehmet Atak
- Department of Computer Engineering, The Faculty of Engineering, Boğaziçi University, İstanbul, Turkey
- Özer Çelik
- Department of Mathematics-Computer, Eskisehir Osmangazi University Faculty of Science, Eskisehir, Turkey
- Kaan Orhan
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
15
Altalhi AM, Alharbi FS, Alhodaithy MA, Almarshedy BS, Al-Saaib MY, Al Jfshar RM, Aljohani AS, Alshareef AH, Muhayya M, Al-Harbi NH. The Impact of Artificial Intelligence on Dental Implantology: A Narrative Review. Cureus 2023; 15:e47941. [PMID: 38034167 PMCID: PMC10685062 DOI: 10.7759/cureus.47941] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/30/2023] [Indexed: 12/02/2023] Open
Abstract
Implant dentistry has witnessed a transformative shift with the integration of artificial intelligence (AI) technologies. This article explores the role of AI in implant dentistry, emphasizing its impact on diagnostics, treatment planning, and patient outcomes. AI-driven image analysis and deep learning algorithms enhance the precision of implant placement, reducing risks and optimizing aesthetics. Moreover, AI-driven data analytics provide valuable insights into patient-specific treatment strategies, improving overall success rates. As AI continues to evolve, it promises to reshape the landscape of implant dentistry and usher in an era of personalized and efficient oral healthcare.
Affiliation(s)
- Adeeb H Alshareef
- Dentistry, King Saud Bin Abdulaziz University for Health Sciences, Riyadh, SAU
16
Moufti MA, Trabulsi N, Ghousheh M, Fattal T, Ashira A, Danishvar S. Developing an Artificial Intelligence Solution to Autosegment the Edentulous Mandibular Bone for Implant Planning. Eur J Dent 2023; 17:1330-1337. [PMID: 37172946 PMCID: PMC10756774 DOI: 10.1055/s-0043-1764425] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/15/2023] Open
Abstract
OBJECTIVE Dental implants are considered the optimum solution to replace missing teeth and restore the mouth's function and aesthetics. Surgical planning of the implant position is critical to avoid damage to vital anatomical structures; however, manual measurement of the edentulous (toothless) bone on cone beam computed tomography (CBCT) images is time-consuming and subject to human error. An automated process has the potential to reduce human errors and save time and costs. This study developed an artificial intelligence (AI) solution to identify and delineate edentulous alveolar bone on CBCT images before implant placement. MATERIALS AND METHODS After obtaining ethical approval, CBCT images were extracted from the database of the University Dental Hospital Sharjah based on predefined selection criteria. Manual segmentation of the edentulous span was done by three operators using ITK-SNAP software. A supervised machine learning approach was undertaken to develop a segmentation model on a "U-Net" convolutional neural network (CNN) in the Medical Open Network for Artificial Intelligence (MONAI) framework. Out of the 43 labeled cases, 33 were utilized to train the model, and 10 were used for testing the model's performance. STATISTICAL ANALYSIS The degree of 3D spatial overlap between the segmentation made by human investigators and the model's segmentation was measured by the dice similarity coefficient (DSC). RESULTS The sample consisted mainly of lower molars and premolars. DSC yielded an average value of 0.89 for training and 0.78 for testing. Unilateral edentulous areas, comprising 75% of the sample, resulted in a better DSC (0.91) than bilateral cases (0.73). CONCLUSION Segmentation of the edentulous spans on CBCT images was successfully conducted by machine learning with good accuracy compared to manual segmentation. Unlike traditional AI object-detection models that identify objects present in an image, this model identifies missing objects. Finally, challenges in data collection and labeling are discussed, together with an outlook at the prospective stages of a larger project for a complete AI solution for automated implant planning.
Affiliation(s)
- Mohammad Adel Moufti
- Department of Preventive and Restorative Dentistry, University of Sharjah, United Arab Emirates
- Nuha Trabulsi
- Department of Preventive and Restorative Dentistry, University of Sharjah, United Arab Emirates
- Marah Ghousheh
- Department of Preventive and Restorative Dentistry, University of Sharjah, United Arab Emirates
- Tala Fattal
- Department of Preventive and Restorative Dentistry, University of Sharjah, United Arab Emirates
- Ali Ashira
- Department of Preventive and Restorative Dentistry, University of Sharjah, United Arab Emirates
17
Ozsari S, Güzel MS, Yılmaz D, Kamburoğlu K. A Comprehensive Review of Artificial Intelligence Based Algorithms Regarding Temporomandibular Joint Related Diseases. Diagnostics (Basel) 2023; 13:2700. [PMID: 37627959 PMCID: PMC10453523 DOI: 10.3390/diagnostics13162700] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Revised: 08/13/2023] [Accepted: 08/16/2023] [Indexed: 08/27/2023] Open
Abstract
Today, with rapid advances in technology, computer-based studies and Artificial Intelligence (AI) approaches are finding their place in every field, especially in the medical sector, where they attract great attention. The Temporomandibular Joint (TMJ) stands as the most intricate joint within the human body, and diseases related to this joint are quite common. In this paper, we reviewed studies that utilize AI-based algorithms and computer-aided programs for investigating the TMJ and TMJ-related diseases. We conducted a literature search on Google Scholar, Web of Science, and PubMed without any time constraints and exclusively selected English articles. Moreover, we examined the references of papers directly related to the subject matter. As a result of the survey, a total of 66 articles within the defined scope were assessed. These selected papers were distributed across various areas, with 11 focusing on segmentation, 3 on Juvenile Idiopathic Arthritis (JIA), 10 on TMJ Osteoarthritis (OA), 21 on Temporomandibular Joint Disorders (TMD), 6 on decision support systems, 10 reviews, and 5 on sound studies. The observed trend indicates a growing interest in artificial intelligence algorithms, suggesting that the number of studies in this field will likely continue to expand in the future.
Affiliation(s)
- Sifa Ozsari
- Department of Computer Engineering, Ankara University, 06830 Ankara, Turkey
- Mehmet Serdar Güzel
- Department of Computer Engineering, Ankara University, 06830 Ankara, Turkey
- Dilek Yılmaz
- Faculty of Dentistry, Baskent University, 06490 Ankara, Turkey
- Kıvanç Kamburoğlu
- Department of Dentomaxillofacial Radiology, Ankara University, 06560 Ankara, Turkey
18
Lin X, Xin W, Huang J, Jing Y, Liu P, Han J, Ji J. Accurate mandibular canal segmentation of dental CBCT using a two-stage 3D-UNet based segmentation framework. BMC Oral Health 2023; 23:551. [PMID: 37563606 PMCID: PMC10416403 DOI: 10.1186/s12903-023-03279-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2022] [Accepted: 08/02/2023] [Indexed: 08/12/2023] Open
Abstract
OBJECTIVES The objective of this study is to develop a deep learning (DL) model for fast and accurate mandibular canal (MC) segmentation on cone beam computed tomography (CBCT). METHODS A total of 220 CBCT scans from dentate subjects needing oral surgery were used in this study. The segmentation ground truth was annotated and reviewed by two senior dentists. All patients were randomly split into a training dataset (n = 132), a validation dataset (n = 44), and a test dataset (n = 44). We proposed a two-stage 3D-UNet based segmentation framework for automated MC segmentation on CBCT. The Dice similarity coefficient (DSC) and 95% Hausdorff distance (95% HD) were used as the evaluation metrics for the segmentation model. RESULTS The two-stage 3D-UNet model successfully segmented the MC on CBCT images. In the test dataset, the mean DSC was 0.875 ± 0.045 and the mean 95% HD was 0.442 ± 0.379. CONCLUSIONS This automatic DL method may aid in the detection of the MC and assist dental practitioners in setting up treatment plans for oral surgery involving the MC.
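The 95% Hausdorff distance (95% HD) used as an evaluation metric here can be sketched for two sets of surface points as follows (brute-force numpy illustration; production code would typically use a KD-tree or a dedicated medical-imaging library):

```python
import numpy as np

def hd95(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two point
    sets of shape (n, d) and (m, d), e.g. surface voxel coordinates.
    Using the 95th percentile instead of the maximum makes the metric
    robust to a few outlier points."""
    # Pairwise Euclidean distances, shape (n, m).
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # nearest-neighbour distance from each a-point
    b_to_a = d.min(axis=0)  # nearest-neighbour distance from each b-point
    return float(np.percentile(np.concatenate([a_to_b, b_to_a]), 95))
```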
Affiliation(s)
- Xi Lin
- Clinic of Stomatology of the Shantou University Medical College, No. 22, Xinling Road, Shantou, Guangdong, China
- Weini Xin
- Clinic of Stomatology of the Shantou University Medical College, No. 22, Xinling Road, Shantou, Guangdong, China
- Department of Stomatology of Shantou University Medical College, No. 22, Xinling Road, Shantou, Guangdong, China
- Jingna Huang
- Clinic of Stomatology of the Shantou University Medical College, No. 22, Xinling Road, Shantou, Guangdong, China
- Yang Jing
- Huiying Medical Technology Co., Ltd, Room A206, B2, Dongsheng Science and Technology Park, Haidian District, Beijing, China
- Pengfei Liu
- Huiying Medical Technology Co., Ltd, Room A206, B2, Dongsheng Science and Technology Park, Haidian District, Beijing, China
- Jingdan Han
- Huiying Medical Technology Co., Ltd, Room A206, B2, Dongsheng Science and Technology Park, Haidian District, Beijing, China
- Jie Ji
- Network and Information Center, Shantou University, No. 243, University Road, Shantou, Guangdong, China
19
Oliveira-Santos N, Jacobs R, Picoli FF, Lahoud P, Niclaes L, Groppo FC. Automated segmentation of the mandibular canal and its anterior loop by deep learning. Sci Rep 2023; 13:10819. [PMID: 37402784 DOI: 10.1038/s41598-023-37798-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Accepted: 06/28/2023] [Indexed: 07/06/2023] Open
Abstract
Accurate mandibular canal (MC) detection is crucial to avoid nerve injury during surgical procedures. Moreover, the anatomic complexity of the interforaminal region requires a precise delineation of anatomical variations such as the anterior loop (AL). Therefore, CBCT-based presurgical planning is recommended, even though anatomical variations and lack of MC cortication make canal delineation challenging. To overcome these limitations, artificial intelligence (AI) may aid presurgical MC delineation. In the present study, we aim to train and validate an AI-driven tool capable of performing accurate segmentation of the MC even in the presence of anatomical variation such as AL. Results achieved high accuracy metrics, with 0.997 of global accuracy for both MC with and without AL. The anterior and middle sections of the MC, where most surgical interventions are performed, presented the most accurate segmentation compared to the posterior section. The AI-driven tool provided accurate segmentation of the mandibular canal, even in the presence of anatomical variation such as an anterior loop. Thus, the presently validated dedicated AI tool may aid clinicians in automating the segmentation of neurovascular canals and their anatomical variations. It may significantly contribute to presurgical planning for dental implant placement, especially in the interforaminal region.
Affiliation(s)
- Nicolly Oliveira-Santos
- OMFS IMPATH Research Group, Department of Imaging and Pathology, KU Leuven and University Hospitals Leuven, UZ Campus St Rafael, Leuven, Belgium
- Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, São Paulo, Brazil
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, KU Leuven and University Hospitals Leuven, UZ Campus St Rafael, Leuven, Belgium
- Department of Dental Medicine, Karolinska Institutet, Stockholm, Sweden
- Fernando Fortes Picoli
- OMFS IMPATH Research Group, Department of Imaging and Pathology, KU Leuven and University Hospitals Leuven, UZ Campus St Rafael, Leuven, Belgium
- Department of Stomatology and Oral Radiology, Dental School, Federal University of Goiás, Goiânia, Goiás, Brazil
- Pierre Lahoud
- OMFS IMPATH Research Group, Department of Imaging and Pathology, KU Leuven and University Hospitals Leuven, UZ Campus St Rafael, Leuven, Belgium
- Liselot Niclaes
- OMFS IMPATH Research Group, Department of Imaging and Pathology, KU Leuven and University Hospitals Leuven, UZ Campus St Rafael, Leuven, Belgium
- Francisco Carlos Groppo
- Department of Biosciences, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, São Paulo, Brazil
20
Papasratorn D, Pornprasertsuk-Damrongsri S, Yuma S, Weerawanich W. Investigation of the best effective fold of data augmentation for training deep learning models for recognition of contiguity between mandibular third molar and inferior alveolar canal on panoramic radiographs. Clin Oral Investig 2023; 27:3759-3769. [PMID: 37043029 PMCID: PMC10329615 DOI: 10.1007/s00784-023-04992-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2022] [Accepted: 03/28/2023] [Indexed: 04/13/2023]
Abstract
OBJECTIVES This study aimed to train deep learning models to recognize contiguity between the mandibular third molar (M3M) and the inferior alveolar canal using panoramic radiographs, and to investigate the most effective fold of data augmentation. MATERIALS AND METHODS A total of 1800 cropped M3M images were classified evenly into contact and no-contact groups; the contact group was confirmed with CBCT images. Models were trained from three pretrained models: AlexNet, VGG-16, and GoogLeNet. Each pretrained model was trained with the original cropped panoramic radiographs; the training images were then increased fivefold, tenfold, 15-fold, and 20-fold using data augmentation to train additional models. The area under the receiver operating characteristic curve (AUC) of the 15 models was evaluated. RESULTS All models recognized contiguity with AUCs from 0.951 to 0.996. Tenfold augmentation showed the highest AUC in all pretrained models; however, no significant difference from the other folds was found. VGG-16 showed the best performance among pretrained models trained at the same fold of augmentation. Data augmentation provided statistically significant improvement in the performance of the AlexNet and GoogLeNet models, while VGG-16 remained unchanged. CONCLUSIONS Based on our images, all models performed efficiently with high AUCs, particularly VGG-16. Tenfold augmentation showed the highest AUC for all pretrained models. VGG-16 showed promising potential when trained with only original images. CLINICAL RELEVANCE Tenfold augmentation may help improve deep learning models' performance. The variety of original data and the accuracy of labels are essential to train a high-performance model.
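The k-fold augmentation studied above simply multiplies the training set by generating k−1 transformed copies of each image. A toy sketch with random flips and 90-degree rotations on square images (illustrative only; the study's actual augmentation operations are not specified in this abstract):

```python
import numpy as np

def augment_fold(images, fold, seed=0):
    """Expand a list of square 2D image arrays `fold`-fold by appending
    fold-1 randomly flipped/rotated copies of each original image."""
    rng = np.random.default_rng(seed)
    out = list(images)  # keep the originals
    for img in images:
        for _ in range(fold - 1):
            aug = np.rot90(img, k=int(rng.integers(0, 4)))
            if rng.random() < 0.5:
                aug = np.flip(aug, axis=1)  # horizontal flip
            out.append(aug)
    return out
```

With 1800 original images, tenfold augmentation of this kind would yield 18 000 training images; the study's finding is that gains from larger folds plateau, so more augmentation is not automatically better.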
Affiliation(s)
- Dhanaporn Papasratorn
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Mahidol University, 6, Yothi Road, Ratchathewi District, Bangkok, 10400 Thailand
- Suchaya Pornprasertsuk-Damrongsri
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Mahidol University, 6, Yothi Road, Ratchathewi District, Bangkok, 10400 Thailand
- Suraphong Yuma
- Department of Physics, Faculty of Science, Mahidol University, 272 Rama VI Road, Ratchathewi District, Bangkok, 10400 Thailand
- Warangkana Weerawanich
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Mahidol University, 6, Yothi Road, Ratchathewi District, Bangkok, 10400 Thailand
21
Zhao H, Chen J, Yun Z, Feng Q, Zhong L, Yang W. Whole mandibular canal segmentation using transformed dental CBCT volume in Frenet frame. Heliyon 2023; 9:e17651. [PMID: 37449128 PMCID: PMC10336514 DOI: 10.1016/j.heliyon.2023.e17651] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 05/29/2023] [Accepted: 06/24/2023] [Indexed: 07/18/2023] Open
Abstract
Accurate segmentation of the mandibular canal is essential in dental implant and maxillofacial surgery, which can help prevent nerve or vascular damage inside the mandibular canal. Achieving this is challenging because of the low contrast in CBCT scans and the small scales of mandibular canal areas. Several innovative methods have been proposed for mandibular canal segmentation with positive performance. However, most of these methods segment the mandibular canal based on sliding patches, which may adversely affect the morphological integrity of the tubular structure. In this study, we propose whole mandibular canal segmentation using transformed dental CBCT volume in the Frenet frame. Considering the connectivity of the mandibular canal, we propose to transform the CBCT volume to obtain a sub-volume containing the whole mandibular canal based on the Frenet frame to ensure complete 3D structural information. Moreover, to further improve the performance of mandibular canal segmentation, we use clDice to guarantee the integrity of the mandibular canal structure and segment the mandibular canal. Experimental results on our CBCT dataset show that integrating the proposed transformed volume in the Frenet frame into other state-of-the-art methods achieves a 0.5%∼12.1% improvement in Dice performance. Our proposed method can achieve impressive results with a Dice value of 0.865 (±0.035), and a clDice value of 0.971 (±0.020), suggesting that our method can segment the mandibular canal with superior performance.
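A discrete Frenet frame along the canal centerline, of the kind used above to transform the CBCT volume, can be estimated from a sampled 3D curve by finite differences (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def frenet_frames(curve: np.ndarray):
    """Approximate Frenet frames (tangent T, normal N, binormal B) at
    each point of an (n, 3) polyline using finite differences."""
    t = np.gradient(curve, axis=0)                 # unnormalized tangents
    t = t / np.linalg.norm(t, axis=1, keepdims=True)
    dt = np.gradient(t, axis=0)                    # derivative of tangent
    # Normal: remove the tangential component, then normalize (guarding
    # against straight segments where the curvature vanishes).
    n = dt - (dt * t).sum(axis=1, keepdims=True) * t
    norms = np.linalg.norm(n, axis=1, keepdims=True)
    n = np.divide(n, norms, out=np.zeros_like(n), where=norms > 1e-12)
    b = np.cross(t, n)                             # binormal
    return t, n, b
```

Resampling the volume in the (N, B) plane at each centerline point straightens the canal into a sub-volume, which is what lets the whole structure be segmented at once instead of patch by patch.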
Affiliation(s)
- Huanmiao Zhao: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
- Junhua Chen: Stomatology Hospital of Guangzhou Medical University, Guangzhou 510140, China
- Zhaoqiang Yun: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
- Qianjin Feng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
- Liming Zhong: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
- Wei Yang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
22
Fan W, Zhang J, Wang N, Li J, Hu L. The Application of Deep Learning on CBCT in Dentistry. Diagnostics (Basel) 2023; 13:2056. PMID: 37370951; DOI: 10.3390/diagnostics13122056.
Abstract
Cone beam computed tomography (CBCT) has become an essential tool in modern dentistry, allowing dentists to analyze the relationship between teeth and the surrounding tissues. However, traditional manual analysis can be time-consuming, and its accuracy depends on the user's proficiency. To address these limitations, deep learning (DL) systems have been integrated into CBCT analysis to improve accuracy and efficiency. Numerous DL models have been developed for tasks such as automatic diagnosis, segmentation, and classification of teeth, the inferior alveolar nerve, bone, and the airway, as well as preoperative planning. All research articles summarized were drawn from PubMed, IEEE, Google Scholar, and Web of Science up to December 2022. Many studies have demonstrated that the application of deep learning to CBCT examination in dentistry has made significant progress, and its accuracy in radiological image analysis has reached the level of clinicians; in some fields, however, its accuracy still needs to be improved. Furthermore, ethical issues and differences among CBCT devices may hinder its widespread use. DL models have the potential to be used clinically as medical decision-making aids, and the combination of DL and CBCT can greatly reduce the workload of image reading. This review provides an up-to-date overview of current applications of DL to CBCT images in dentistry, highlighting its potential and suggesting directions for future research.
Affiliation(s)
- Wenjie Fan, Jiaqi Zhang, Nan Wang, Jia Li, Li Hu: Department of Stomatology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
23
Ileșan RR, Beyer M, Kunz C, Thieringer FM. Comparison of Artificial Intelligence-Based Applications for Mandible Segmentation: From Established Platforms to In-House-Developed Software. Bioengineering (Basel) 2023; 10:604. PMID: 37237673; PMCID: PMC10215609; DOI: 10.3390/bioengineering10050604.
Abstract
Medical image segmentation, whether performed semi-automatically or manually, is labor-intensive, subjective, and requires specialized personnel. Fully automated segmentation has recently gained importance owing to better-designed and better-understood CNNs. Considering this, we decided to develop our own in-house segmentation software and compare it to the systems of established companies, an inexperienced user, and an expert as ground truth. The companies included in the study offer cloud-based options that perform accurately in clinical routine (Dice similarity coefficient of 0.912 to 0.949), with average segmentation times ranging from 3'54″ to 85'54″. Our in-house model achieved an accuracy of 94.24% compared with the best-performing software and had the shortest mean segmentation time of 2'03″. During the study, developing in-house segmentation software gave us a glimpse into the strenuous work that companies face when offering clinically relevant solutions. All the problems encountered were discussed with the companies and solved, so both parties benefited from the experience. In doing so, we demonstrated that fully automated segmentation needs further research and collaboration between academia and the private sector to achieve full acceptance in clinical routines.
Affiliation(s)
- Robert R. Ileșan: Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, 4031 Basel, Switzerland
- Michel Beyer: Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, 4031 Basel, Switzerland; Medical Additive Manufacturing Research Group (Swiss MAM), Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Christoph Kunz: Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, 4031 Basel, Switzerland
- Florian M. Thieringer: Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, 4031 Basel, Switzerland; Medical Additive Manufacturing Research Group (Swiss MAM), Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
24
Gardiyanoğlu E, Ünsal G, Akkaya N, Aksoy S, Orhan K. Automatic Segmentation of Teeth, Crown-Bridge Restorations, Dental Implants, Restorative Fillings, Dental Caries, Residual Roots, and Root Canal Fillings on Orthopantomographs: Convenience and Pitfalls. Diagnostics (Basel) 2023; 13:1487. PMID: 37189586; DOI: 10.3390/diagnostics13081487.
Abstract
BACKGROUND The aim of our study is to provide successful automatic segmentation of various objects on orthopantomographs (OPGs). METHODS 8138 OPGs obtained from the archives of the Department of Dentomaxillofacial Radiology were included. OPGs were converted into PNGs and transferred to the segmentation tool's database. All teeth, crown-bridge restorations, dental implants, composite-amalgam fillings, dental caries, residual roots, and root canal fillings were manually segmented by two experts using the manual drawing semantic segmentation technique. RESULTS The intra-class correlation coefficient (ICC) for both inter- and intra-observer manual segmentation was excellent (ICC > 0.75): the intra-observer ICC was 0.994, and the inter-observer reliability was 0.989. No significant difference was detected among observers (p = 0.947). The calculated DSC and accuracy values across all OPGs were 0.85 and 0.95 for tooth segmentation, 0.88 and 0.99 for dental caries, 0.87 and 0.99 for dental restorations, 0.93 and 0.99 for crown-bridge restorations, 0.94 and 0.99 for dental implants, 0.78 and 0.99 for root canal fillings, and 0.78 and 0.99 for residual roots, respectively. CONCLUSIONS Thanks to faster, automated diagnosis on 2D as well as 3D dental images, dentists will achieve higher diagnosis rates in a shorter time, even without excluding cases.
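A note on the two numbers reported per object class above: pixel-wise DSC and accuracy answer different questions, and on images dominated by background, accuracy can sit near 1 even when DSC is modest (cf. residual roots: DSC 0.78, accuracy 0.99). A minimal sketch:

```python
import numpy as np

def confusion_counts(pred, gt):
    """Pixel-wise TP/FP/FN/TN for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    return tp, fp, fn, tn

def dsc_and_accuracy(pred, gt):
    """DSC ignores true-negative background; accuracy is dominated by it."""
    tp, fp, fn, tn = confusion_counts(pred, gt)
    dsc = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    acc = (tp + tn) / (tp + fp + fn + tn)
    return dsc, acc
```

Because the background true negatives enter only the accuracy, a small object segmented imperfectly on a large radiograph still yields accuracy close to 1.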
Affiliation(s)
- Emel Gardiyanoğlu: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
- Gürkan Ünsal: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus; DESAM Institute, Near East University, 99138 Nicosia, Cyprus
- Nurullah Akkaya: Department of Computer Engineering, Applied Artificial Intelligence Research Centre, Near East University, 99138 Nicosia, Cyprus
- Seçil Aksoy: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
- Kaan Orhan: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, 06560 Ankara, Turkey
25
Vinayahalingam S, Berends B, Baan F, Moin DA, van Luijn R, Bergé S, Xi T. Deep learning for automated segmentation of the temporomandibular joint. J Dent 2023; 132:104475. PMID: 36870441; DOI: 10.1016/j.jdent.2023.104475.
Abstract
OBJECTIVE Quantitative analysis of the volume and shape of the temporomandibular joint (TMJ) using cone-beam computed tomography (CBCT) requires accurate segmentation of the mandibular condyles and the glenoid fossae. This study aimed to develop and validate an automated segmentation tool based on a deep learning algorithm for accurate 3D reconstruction of the TMJ. MATERIALS AND METHODS A three-step deep-learning approach based on a 3D U-Net was developed to segment the condyles and glenoid fossae on CBCT datasets. Three 3D U-Nets were used for region of interest (ROI) determination, bone segmentation, and TMJ classification. The AI-based algorithm was trained and validated on 154 manually segmented CBCT images. Two independent observers and the AI algorithm segmented the TMJs of a test set of 8 CBCTs. The time required for segmentation and the accuracy metrics (intersection over union, Dice, etc.) were calculated to quantify the similarity between the manual segmentations (ground truth) and the outputs of the AI models. RESULTS The AI segmentation achieved an intersection over union (IoU) of 0.955 and 0.935 for the condyles and glenoid fossa, respectively. The IoUs of the two independent observers for manual condyle segmentation were 0.895 and 0.928, respectively (p<0.05). The mean time required for the AI segmentation was 3.6 s (SD 0.9), whereas the two observers needed 378.9 s (SD 204.9) and 571.6 s (SD 257.4), respectively (p<0.001). CONCLUSION The AI-based automated segmentation tool segmented the mandibular condyles and glenoid fossae with high accuracy, speed, and consistency. Potentially limited robustness and generalizability are risks that cannot be ruled out, as the algorithms were trained on scans from orthognathic surgery patients acquired with just one type of CBCT scanner.
CLINICAL SIGNIFICANCE The incorporation of the AI-based segmentation tool into diagnostic software could facilitate 3D qualitative and quantitative analysis of TMJs in a clinical setting, particularly for the diagnosis of TMJ disorders and longitudinal follow-up.
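The IoU values above relate directly to the Dice scores used by other entries in this list; a small sketch of the metric and the Dice/IoU identity:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union (Jaccard index) of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # two empty masks agree perfectly
    return np.logical_and(pred, gt).sum() / union

def dice_from_iou(j):
    """Dice and IoU are monotonically related: Dice = 2*IoU / (1 + IoU)."""
    return 2 * j / (1 + j)
```

By this identity, the reported condyle IoU of 0.955 corresponds to a Dice of about 0.977, which helps when comparing studies that report different overlap metrics.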
Affiliation(s)
- Shankeeth Vinayahalingam: Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, P.O. Box 9101, Postal number 590, Nijmegen, HB 6500, The Netherlands
- Bo Berends: Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands; Radboudumc 3DLab, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
- Frank Baan: Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands; Radboudumc 3DLab, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
- Rik van Luijn: Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
- Stefaan Bergé: Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
- Tong Xi: Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
26
Arsiwala-Scheppach LT, Chaurasia A, Müller A, Krois J, Schwendicke F. Machine Learning in Dentistry: A Scoping Review. J Clin Med 2023; 12:937. PMID: 36769585; PMCID: PMC9918184; DOI: 10.3390/jcm12030937.
Abstract
Machine learning (ML) is being increasingly employed in dental research and application. We aimed to systematically compile studies using ML in dentistry and assess their methodological quality, including the risk of bias and reporting standards. We evaluated studies employing ML in dentistry published from 1 January 2015 to 31 May 2021 on MEDLINE, IEEE Xplore, and arXiv. We assessed publication trends and the distribution of ML tasks (classification, object detection, semantic segmentation, instance segmentation, and generation) across clinical fields. We appraised the risk of bias and adherence to reporting standards using the QUADAS-2 and TRIPOD checklists, respectively. Of 183 identified studies, 168 were included, covering a variety of ML tasks and employing a broad range of ML models, input data, data sources, strategies to generate reference tests, and performance metrics. Classification tasks were most common. Forty-two different metrics were used to evaluate model performance, with accuracy, sensitivity, precision, and intersection-over-union being the most common. We observed a considerable risk of bias and moderate adherence to reporting standards, which hampers replication of results. A minimum (core) set of outcomes and outcome metrics is necessary to facilitate comparisons across studies.
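Most of the common metrics this review tallies (accuracy, sensitivity, precision, and the F1-score used elsewhere in this list) derive from the same binary confusion-matrix counts; a compact sketch:

```python
def classification_metrics(tp, fp, fn, tn):
    """Common evaluation metrics from binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # recall / true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    precision = tp / (tp + fp)              # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1, "accuracy": accuracy}
```

Reporting this core set alongside raw counts would make the cross-study comparisons the authors call for considerably easier.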
Affiliation(s)
- Lubaina T. Arsiwala-Scheppach: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
- Akhilanand Chaurasia: ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland; Department of Oral Medicine and Radiology, King George’s Medical University, Lucknow 226003, India
- Anne Müller: Pharmacovigilance Institute (Pharmakovigilanz- und Beratungszentrum, PVZ) for Embryotoxicology, Institute of Clinical Pharmacology and Toxicology, Charité—Universitätsmedizin Berlin, 13353 Berlin, Germany
- Joachim Krois: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
- Falk Schwendicke: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
27
Usman M, Rehman A, Saleem AM, Jawaid R, Byon SS, Kim SH, Lee BD, Heo MS, Shin YG. Dual-Stage Deeply Supervised Attention-Based Convolutional Neural Networks for Mandibular Canal Segmentation in CBCT Scans. Sensors (Basel) 2022; 22:9877. PMID: 36560251; PMCID: PMC9785834; DOI: 10.3390/s22249877.
Abstract
Accurate segmentation of mandibular canals in lower jaws is important in dental implantology. Medical experts manually determine the implant position and dimensions from 3D CT images to avoid damaging the mandibular nerve inside the canal. In this paper, we propose a novel dual-stage deep learning-based scheme for automatic segmentation of the mandibular canal. In particular, we first enhance the CBCT scans by employing a novel histogram-based dynamic windowing scheme, which improves the visibility of the mandibular canals. After enhancement, we designed a 3D deeply supervised attention U-Net architecture for localizing the volumes of interest (VOIs) that contain the mandibular canals (i.e., the left and right canals). Finally, we employed a multi-scale input residual U-Net (MSiR-UNet) architecture to accurately segment the mandibular canals within the VOIs. The proposed method was rigorously evaluated on 500 CBCT scans from our dataset and 15 CBCT scans from a public dataset. The results demonstrate that our technique improves on the existing performance of mandibular canal segmentation to a clinically acceptable range. Moreover, it is robust to different CBCT fields of view.
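The enhancement stage rests on intensity windowing. The paper's histogram-based dynamic windowing scheme is not reproduced here; the sketch below shows the general idea under an assumed percentile-based choice of window bounds:

```python
import numpy as np

def dynamic_window(volume, lo_pct=0.5, hi_pct=99.5):
    """Clip a CBCT volume to a window derived from its own intensity
    histogram (here: percentiles) and rescale to [0, 1].

    The percentile bounds are illustrative; the paper's exact
    histogram-based scheme is not reproduced here."""
    lo, hi = np.percentile(volume, [lo_pct, hi_pct])
    windowed = np.clip(volume, lo, hi)
    return (windowed - lo) / max(hi - lo, 1e-8)
```

Deriving the window per scan, rather than using fixed bounds, keeps the canal visible across scanners whose raw intensity ranges differ.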
Affiliation(s)
- Muhammad Usman: Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co., Ltd., Seoul 06524, Republic of Korea; Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea
- Azka Rehman: Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co., Ltd., Seoul 06524, Republic of Korea
- Amal Muhammad Saleem: Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co., Ltd., Seoul 06524, Republic of Korea
- Rabeea Jawaid: Division of AI and Computer Engineering, Kyonggi University, Suwon 16227, Republic of Korea
- Shi-Sub Byon: Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co., Ltd., Seoul 06524, Republic of Korea
- Sung-Hyun Kim: Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co., Ltd., Seoul 06524, Republic of Korea
- Byoung-Dai Lee: Division of AI and Computer Engineering, Kyonggi University, Suwon 16227, Republic of Korea
- Min-Suk Heo: Department of Oral and Maxillofacial Radiology, School of Dentistry, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea
- Yeong-Gil Shin: Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea
28
Al-Sarem M, Al-Asali M, Alqutaibi AY, Saeed F. Enhanced Tooth Region Detection Using Pretrained Deep Learning Models. Int J Environ Res Public Health 2022; 19:15414. PMID: 36430133; PMCID: PMC9692549; DOI: 10.3390/ijerph192215414.
Abstract
The rapid development of artificial intelligence (AI) has led to the emergence of many new technologies in the healthcare industry. In dentistry, a patient's panoramic radiographic or cone beam computed tomography (CBCT) images are used for implant placement planning to find the correct implant position and eliminate surgical risks. This study aims to develop a deep learning-based model that detects the position of missing teeth on a dataset segmented from CBCT images. Five hundred CBCT images were included in this study. After preprocessing, the datasets were randomized and divided into 70% training, 20% validation, and 10% test data. A total of six pretrained convolutional neural network (CNN) models were used in this study: AlexNet, VGG16, VGG19, ResNet50, DenseNet169, and MobileNetV3. In addition, the proposed models were tested with and without applying the segmentation technique. For the normal teeth class, the precision of the proposed pretrained DL models was above 0.90; the experimental results showed the superiority of DenseNet169, with a precision of 0.98, while MobileNetV3, VGG19, ResNet50, VGG16, and AlexNet obtained precisions of 0.95, 0.94, 0.94, 0.93, and 0.92, respectively. The DenseNet169 model performed well at the different stages of CBCT-based detection and classification, with a segmentation accuracy of 93.3% and classification of missing tooth regions with an accuracy of 89%. As a result, this model may represent a promising time-saving tool for dental implantologists and a significant step toward automated dental implant planning.
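The randomized 70%/20%/10% split described above can be sketched in a few lines (the seed and function name are illustrative, not from the paper):

```python
import numpy as np

def split_dataset(n_items, seed=0):
    """Shuffle item indices and split 70% / 20% / 10% into
    train / validation / test, as described for the 500 CBCT images."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_items)
    n_train = int(0.7 * n_items)
    n_val = int(0.2 * n_items)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

Shuffling before splitting matters here: consecutive scans from one archive often share acquisition settings, and an unshuffled split would let those settings leak between partitions.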
Affiliation(s)
- Mohammed Al-Sarem: College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia; Department of Computer Science, Sheba Region University, Marib 14400, Yemen
- Mohammed Al-Asali: College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- Ahmed Yaseen Alqutaibi: Department of Prosthodontics and Implant Dentistry, College of Dentistry, Taibah University, Al Madinah 41311, Saudi Arabia; Department of Prosthodontics, College of Dentistry, Ibb University, Ibb 70270, Yemen
- Faisal Saeed: College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia; DAAI Research Group, Department of Computing and Data Science, School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7XG, UK
29
Comparison of deep learning segmentation and multigrader-annotated mandibular canals of multicenter CBCT scans. Sci Rep 2022; 12:18598. PMID: 36329051; PMCID: PMC9633839; DOI: 10.1038/s41598-022-20605-w.
Abstract
A deep learning approach has been demonstrated to automatically segment the bilateral mandibular canals from CBCT scans, yet systematic studies of its clinical and technical validation are scarce. To validate the mandibular canal localization accuracy of a deep learning system (DLS), we trained it with 982 CBCT scans and evaluated it using 150 scans from five scanners, drawn from clinical-workflow patients of European and Southeast Asian institutes and annotated by four radiologists. The interobserver variability was compared with the variability between the DLS and the radiologists. In addition, the generalisation of the DLS to CBCT scans from scanners not present in the training data was examined to evaluate its out-of-distribution performance. The DLS showed statistically significantly lower variability against the radiologists (0.74 mm) than the interobserver variability (0.77 mm; p < 0.001), and generalised to new devices with 0.63 mm, 0.67 mm, and 0.87 mm (p < 0.001). Against the radiologists' consensus segmentation, used as a gold standard, the DLS showed a symmetric mean curve distance of 0.39 mm, statistically significantly lower (p < 0.001) than those of the individual radiologists, whose values were 0.62 mm, 0.55 mm, 0.47 mm, and 0.42 mm. These results show promise for the integration of the DLS into clinical workflows to reduce time-consuming and labour-intensive manual tasks in implantology.
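The symmetric mean curve distance used here compares annotated canal centerlines in both directions; a simplified point-sampling sketch (a stand-in under assumed conventions, not the study's exact implementation):

```python
import numpy as np

def symmetric_mean_curve_distance(curve_a, curve_b):
    """Average the two directed mean nearest-neighbour distances between
    two curves sampled as (N, 3) point arrays, in the same units as the
    input coordinates (mm for the distances reported above)."""
    a = np.asarray(curve_a, dtype=float)
    b = np.asarray(curve_b, dtype=float)
    # Pairwise Euclidean distances between all sample points.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Mean nearest-neighbour distance in each direction, then average.
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Symmetrising matters: a short curve lying on a longer one scores zero in one direction only, and averaging both directions penalises such partial coverage.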
30
Analysis of Deep Learning Techniques for Dental Informatics: A Systematic Literature Review. Healthcare (Basel) 2022; 10:1892. PMID: 36292339; PMCID: PMC9602147; DOI: 10.3390/healthcare10101892.
Abstract
Within the ever-growing healthcare industry, dental informatics is a burgeoning field of study. One of the major obstacles to the health care system’s transformation is obtaining knowledge and insightful data from complex, high-dimensional, and diverse sources. Modern biomedical research, for instance, has seen an increase in the use of complex, heterogeneous, poorly documented, and generally unstructured electronic health records, imaging, sensor data, and text. Even after many current techniques were applied to extract more robust and useful features from these data, certain restrictions remained. The most recent breakthroughs in deep learning provide new, effective paradigms for building end-to-end learning models from complex data. Therefore, the current study examines the most recent research on the use of deep learning techniques for dental informatics problems and recommends building comprehensive, meaningful, and interpretable structures that might benefit the healthcare industry. We also draw attention to some drawbacks, note the need for better technique development, and offer new perspectives on this exciting development in the field.
31
Canal-Net for automatic and robust 3D segmentation of mandibular canals in CBCT images using a continuity-aware contextual network. Sci Rep 2022; 12:13460. PMID: 35931733; PMCID: PMC9356068; DOI: 10.1038/s41598-022-17341-6.
Abstract
The purpose of this study was to propose a continuity-aware contextual network (Canal-Net) for the automatic and robust 3D segmentation of the mandibular canal (MC) with consistently high accuracy throughout the entire MC volume in cone-beam CT (CBCT) images. The Canal-Net was designed on the basis of a 3D U-Net with bidirectional convolutional long short-term memory (ConvLSTM) under a multi-task learning framework. Specifically, the Canal-Net learned the 3D anatomical context of the MC by incorporating spatio-temporal features from the ConvLSTM and, complementarily, the structural continuity of the overall MC volume through multi-planar projection losses. The Canal-Net showed higher segmentation accuracy in 2D and 3D performance metrics (p < 0.05) and, in particular, a significant improvement in Dice similarity coefficient scores and mean curve distance (p < 0.05) throughout the entire MC volume compared with other popular deep learning networks. As a result, the Canal-Net achieved consistently accurate 3D segmentations of the entire MC despite areas of low visibility caused by the unclear and ambiguous cortical bone layer. The Canal-Net thus demonstrated automatic and robust 3D segmentation of the entire MC volume by improving the structural continuity and boundary details of the MC in CBCT images.
32
Orhan K, Shamshiev M, Ezhov M, Plaksin A, Kurbanova A, Ünsal G, Gusarev M, Golitsyna M, Aksoy S, Mısırlı M, Rasmussen F, Shumilov E, Sanders A. AI-based automatic segmentation of craniomaxillofacial anatomy from CBCT scans for automatic detection of pharyngeal airway evaluations in OSA patients. Sci Rep 2022; 12:11863. PMID: 35831451; PMCID: PMC9279304; DOI: 10.1038/s41598-022-15920-1.
Abstract
This study aims to generate and validate an automatic detection algorithm for the pharyngeal airway on CBCT data using AI software (Diagnocat), which also provides a measurement method. The second aim is to validate the newly developed artificial intelligence system against commercially available software for 3D CBCT evaluation. A convolutional neural network-based machine learning algorithm was used to segment the pharyngeal airways of OSA and non-OSA patients. Radiologists used semi-automatic software to manually determine the airway, and their measurements were compared with those of the AI. OSA patients were classified into minimal, mild, moderate, and severe groups, and the mean airway volumes of the groups were compared. The narrowest points of the airway (mm), the cross-sectional area of the airway (mm2), and the volume of the airway (cc) of both OSA and non-OSA patients were also compared. There was no statistically significant difference between the manual technique and the Diagnocat measurements in any group (p > 0.05). Inter-class correlation coefficients were 0.954 for manual and automatic segmentation, 0.956 for Diagnocat and automatic segmentation, and 0.972 for Diagnocat and manual segmentation. Although there was no statistically significant difference in total airway volume between the manual, automatic, and Diagnocat measurements in non-OSA and OSA patients, we evaluated the output images to understand why the mean value for the total airway was higher in the Diagnocat measurements. The Diagnocat algorithm also measures the epiglottis volume and the posterior nasal aperture volume because of the low soft-tissue contrast in CBCT images, which leads to higher airway volume values.
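The airway volumes (cc) compared above follow directly from a binary segmentation mask and the scan's voxel spacing; a minimal sketch with illustrative names:

```python
import numpy as np

def airway_volume_cc(mask, spacing_mm):
    """Volume of a binary segmentation in cubic centimetres.

    mask       : 3D boolean array (True = airway voxel)
    spacing_mm : (dz, dy, dx) voxel spacing in millimetres
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0  # 1 cc = 1000 mm^3
```

This also illustrates the discrepancy the authors traced: any extra structures swept into the mask (here, the epiglottis and posterior nasal aperture) add their voxel counts directly to the reported volume.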
Affiliation(s)
- Kaan Orhan: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey; Medical Design Application and Research Center (MEDITAM), Ankara University, Ankara, Turkey; Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, Lublin, Poland
- Aida Kurbanova: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Gürkan Ünsal: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus; Research Center of Experimental Health Science (DESAM), Near East University, Nicosia, Cyprus
- Seçil Aksoy: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Melis Mısırlı: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Finn Rasmussen: Internal Medicine Department, Lunge Section, SVS Esbjerg, Esbjerg, Denmark; Life Lung Health Center, Nicosia, Cyprus
33. Setzer FC, Kratchman SI. Present Status and Future Directions - Surgical Endodontics. Int Endod J 2022; 55(Suppl 4):1020-1058. [PMID: 35670053] [DOI: 10.1111/iej.13783]
Abstract
Endodontic surgery encompasses several procedures for the treatment of teeth with a history of failed root canal treatment, such as root-end surgery, crown- and root resections, surgical perforation repair, and intentional replantation. Endodontic microsurgery is the evolution of the traditional apicoectomy techniques and incorporates high magnification, ultrasonic root-end preparation and root-end filling with biocompatible filling materials. Modern endodontic surgery uses the dental operating microscope, incorporates cone-beam computed tomography (CBCT) for preoperative diagnosis and treatment planning, and has adopted piezoelectric approaches to osteotomy and root manipulation. Crown- and root resection techniques have benefitted from the same technological advances. This review focuses on the current state of root-end surgery by comparing the techniques and materials applied during endodontic microsurgery to the most widely used earlier methods and materials. The most recent additions to the clinical protocol and technical improvements are discussed, and an outlook on future directions is given. While non-surgical retreatment remains the first choice to address most cases with a history of endodontic failure, modern endodontic microsurgery has become a predictable and minimally invasive alternative for the retention of natural teeth.
Affiliation(s)
- F C Setzer
- Department of Endodontics, School of Dental Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- S I Kratchman
- Department of Endodontics, School of Dental Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
34. Fahd A, Temerek AT, Kenawy SM. Validation of different protocols of inferior alveolar canal tracing using cone beam computed tomography (CBCT). Dentomaxillofac Radiol 2022; 51:20220016. [PMID: 35230870] [PMCID: PMC9499204] [DOI: 10.1259/dmfr.20220016]
Abstract
OBJECTIVES The objective of this study was to evaluate, compare, and validate different protocols of inferior alveolar canal tracing. METHODS Sixty DICOM files with a total of 80 inferior alveolar canals were retrieved and imported into a third-party software in which all proposed protocols could be performed. Initially, the inferior alveolar canal was traced jointly by two oral and maxillofacial radiologists on cone beam CT cross-sectional images and taken as the baseline for subsequent comparisons. An oral and maxillofacial surgeon then performed the proposed protocols, color-coding each differently before the radiologists compared them with the baseline canal on a 5-point scale. RESULTS No single protocol was successful in all cases, not even the cross-sectional protocol. In the present study, the hybrid protocol was the most accurate, while the automatic protocol was the least accurate. CONCLUSIONS The hybrid protocol was reliable and showed the highest number of successful applications, followed by the commonly used cross-sectional protocol. Dental practitioners should be aware of the application of multiple protocols and their pros and cons, as no single protocol was successful in all cases. Applying the same protocols to a larger sample using different cone beam CT and multislice CT machines with different exposure parameters is recommended.
Affiliation(s)
- Ali Fahd
- Lecturer of Diagnostic Science and Oral & Maxillofacial Radiology, Faculty of Dentistry, Sinai University, Kantara, Egypt
- Ahmed Talaat Temerek
- Associate Professor of Oral and Maxillofacial Surgery and Head of the Oral and Maxillofacial Surgery Department, Faculty of Oral and Dental Medicine, South Valley University, Qena, Egypt
- Sarah Mohammed Kenawy
- Lecturer of Oral and Maxillofacial Radiology, Faculty of Dentistry, Cairo University, Giza, Egypt
35. Choi E, Lee S, Jeong E, Shin S, Park H, Youm S, Son Y, Pang K. Artificial intelligence in positioning between mandibular third molar and inferior alveolar nerve on panoramic radiography. Sci Rep 2022; 12:2456. [PMID: 35165342] [PMCID: PMC8844031] [DOI: 10.1038/s41598-022-06483-2]
Abstract
Determining the exact positional relationship between the mandibular third molar (M3) and the inferior alveolar nerve (IAN) is important for surgical extractions. Panoramic radiography is the most common dental imaging test. The purposes of this study were to develop an artificial intelligence (AI) model to determine two positional relationships (true contact and bucco-lingual position) between M3 and IAN when they overlapped in panoramic radiographs, and to compare its performance with that of oral and maxillofacial surgery (OMFS) specialists. A total of 571 panoramic images of M3 from 394 patients were used in this study. Among the images, 202 were classified as true contact, 246 as intimate, 61 as IAN buccal position, and 62 as IAN lingual position. A deep convolutional neural network model with ResNet-50 architecture was trained for each task. We randomly split the dataset into 75% for training and validation and 25% for testing. Model performance was superior for bucco-lingual position determination (accuracy 0.76, precision 0.83, recall 0.67, and F1 score 0.73) compared with true contact position determination (accuracy 0.63, precision 0.62, recall 0.63, and F1 score 0.61). The AI exhibited much higher accuracy than the OMFS specialists in both position determinations. In determining true contact position, OMFS specialists demonstrated an accuracy of 52.68% to 69.64%, while the AI showed an accuracy of 72.32%. In determining bucco-lingual position, OMFS specialists showed an accuracy of 32.26% to 48.39%, and the AI showed an accuracy of 80.65%. Moreover, Cohen's kappa showed a substantial level of agreement for the AI (0.61) and poor agreement for the OMFS specialists in bucco-lingual position determination. Determining the positional relationship between M3 and IAN is possible using AI, especially for bucco-lingual positioning. The model could be used to support clinicians in the decision-making process for M3 treatment.
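Cohen's kappa, used above to contrast the AI's agreement (0.61) with the specialists', corrects raw agreement for the agreement expected by chance. A minimal sketch of the standard formula with toy label vectors (not the study's code):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: rater agreement corrected for chance agreement."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    labels = np.union1d(y_true, y_pred)
    p_obs = np.mean(y_true == y_pred)                           # observed agreement
    p_chance = sum(np.mean(y_true == c) * np.mean(y_pred == c)  # chance agreement
                   for c in labels)
    return (p_obs - p_chance) / (1.0 - p_chance)

print(cohens_kappa([0, 1, 0, 1], [0, 1, 0, 1]))  # 1.0 (perfect agreement)
print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0]))  # 0.5
```

Values around 0.61, as reported for the AI, fall in the conventional "substantial agreement" band (0.61-0.80).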
Affiliation(s)
- Eunhye Choi
- Department of Oral Medicine and Oral Diagnosis, School of Dentistry, Seoul National University, 101, Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Soohong Lee
- Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
- Eunjae Jeong
- Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
- Seokwon Shin
- Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
- Hyunwoo Park
- Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
- Sekyoung Youm
- Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
- Youngdoo Son
- Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
- KangMi Pang
- Department of Oral and Maxillofacial Surgery, Seoul National University Dental Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
36. Dot G, Schouman T, Dubois G, Rouch P, Gajny L. Fully automatic segmentation of craniomaxillofacial CT scans for computer-assisted orthognathic surgery planning using the nnU-Net framework. Eur Radiol 2022; 32:3639-3648. [PMID: 35037088] [DOI: 10.1007/s00330-021-08455-y]
Abstract
OBJECTIVES To evaluate the performance of the nnU-Net open-source deep learning framework for automatic multi-task segmentation of craniomaxillofacial (CMF) structures in CT scans obtained for computer-assisted orthognathic surgery. METHODS Four hundred and fifty-three consecutive patients having undergone high-resolution CT scans before orthognathic surgery were randomly distributed among a training/validation cohort (n = 300) and a testing cohort (n = 153). The ground truth segmentations were generated by 2 operators following an industry-certified procedure for use in computer-assisted surgical planning and personalized implant manufacturing. Model performance was assessed by comparing model predictions with ground truth segmentations. Examination of 45 CT scans by an industry expert provided additional evaluation. The model's generalizability was tested on a publicly available dataset of 10 CT scans with ground truth segmentation of the mandible. RESULTS In the test cohort, mean volumetric Dice similarity coefficient (vDSC) and surface Dice similarity coefficient at 1 mm (sDSC) were 0.96 and 0.97 for the upper skull, 0.94 and 0.98 for the mandible, 0.95 and 0.99 for the upper teeth, 0.94 and 0.99 for the lower teeth, and 0.82 and 0.98 for the mandibular canal. Industry expert segmentation approval rates were 93% for the mandible, 89% for the mandibular canal, 82% for the upper skull, 69% for the upper teeth, and 58% for the lower teeth. CONCLUSION While additional efforts are required for the segmentation of dental apices, our results demonstrated the model's reliability in terms of fully automatic segmentation of preoperative orthognathic CT scans. KEY POINTS
• The nnU-Net deep learning framework can be trained out-of-the-box to provide robust fully automatic multi-task segmentation of CT scans performed for computer-assisted orthognathic surgery planning.
• The clinical viability of the trained nnU-Net model is shown on a challenging test dataset of 153 CT scans randomly selected from clinical practice, showing metallic artifacts and diverse anatomical deformities.
• Commonly used biomedical segmentation evaluation metrics (volumetric and surface Dice similarity coefficient) do not always match industry expert evaluation in the case of more demanding clinical applications.
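The two Dice variants reported above measure different things: volumetric DSC scores voxel overlap, while surface DSC at 1 mm scores how much of each boundary lies within tolerance of the other boundary. A rough voxel-based sketch of both (standard surface-DSC implementations use mesh-based surfaces and exact scan spacing; the cubes, spacing, and tolerance here are assumptions):

```python
import numpy as np
from scipy import ndimage

def volumetric_dice(a, b):
    """Voxel-overlap Dice of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_dice(a, b, spacing=(1.0, 1.0, 1.0), tol=1.0):
    """Fraction of boundary voxels lying within `tol` mm of the other boundary."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    # Distance (mm) from every voxel to the nearest boundary voxel of each mask
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    close_a = (dist_to_b[surf_a] <= tol).sum()
    close_b = (dist_to_a[surf_b] <= tol).sum()
    return (close_a + close_b) / (surf_a.sum() + surf_b.sum())

# Two 12-voxel cubes, one shifted by a single 1 mm voxel:
a = np.zeros((24, 24, 24), dtype=bool); a[4:16, 4:16, 4:16] = True
b = np.zeros((24, 24, 24), dtype=bool); b[5:17, 4:16, 4:16] = True
print(round(float(volumetric_dice(a, b)), 3), float(surface_dice(a, b)))  # 0.917 1.0
```

The one-voxel shift illustrates why sDSC can be near-perfect (0.98) while vDSC is lower (0.82) for a thin structure like the mandibular canal: small boundary offsets cost much volumetric overlap but stay within the 1 mm tolerance.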
Affiliation(s)
- Gauthier Dot
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; Universite de Paris, AP-HP, Hopital Pitie-Salpetriere, Service d'Odontologie, Paris, France
- Thomas Schouman
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; Medecine Sorbonne Universite, AP-HP, Hopital Pitie-Salpetriere, Service de Chirurgie Maxillo-Faciale, Paris, France
- Guillaume Dubois
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; Materialise, Malakoff, France
- Philippe Rouch
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France; EPF-Graduate School of Engineering, Sceaux, France
- Laurent Gajny
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France
37. Automated segmentation of articular disc of the temporomandibular joint on magnetic resonance images using deep learning. Sci Rep 2022; 12:221. [PMID: 34997167] [PMCID: PMC8741780] [DOI: 10.1038/s41598-021-04354-w]
Abstract
Temporomandibular disorders are typically accompanied by a number of clinical manifestations that involve pain and dysfunction of the masticatory muscles and temporomandibular joint. The most important subgroup of articular abnormalities in patients with temporomandibular disorders comprises patients with different forms of articular disc displacement and deformation. Here, we propose a fully automated articular disc detection and segmentation system to support the diagnosis of temporomandibular disorder on magnetic resonance imaging. This system uses deep learning-based semantic segmentation approaches. The study included a total of 217 magnetic resonance images from 10 patients with anterior displacement of the articular disc and 10 healthy control subjects with normal articular discs. These images were used to evaluate three deep learning-based semantic segmentation approaches: our proposed convolutional neural network encoder-decoder named 3DiscNet (Detection for Displaced articular DISC using convolutional neural NETwork), U-Net, and SegNet-Basic. Of the three algorithms, 3DiscNet and SegNet-Basic showed comparably good metrics (Dice coefficient, sensitivity, and positive predictive value). This study provides a proof-of-concept for a fully automated deep learning-based segmentation methodology for articular discs on magnetic resonance images, with promising initial results indicating that the method could potentially be used in clinical practice for the assessment of temporomandibular disorders.
38. Issa J, Olszewski R, Dyszkiewicz-Konwińska M. The Effectiveness of Semi-Automated and Fully Automatic Segmentation for Inferior Alveolar Canal Localization on CBCT Scans: A Systematic Review. Int J Environ Res Public Health 2022; 19:560. [PMID: 35010820] [PMCID: PMC8744855] [DOI: 10.3390/ijerph19010560]
Abstract
This systematic review aims to identify the available semi-automatic and fully automatic algorithms for inferior alveolar canal localization and to present their diagnostic accuracy. Articles related to inferior alveolar nerve/canal localization using methods based on artificial intelligence (semi-automated and fully automated) were collected electronically from five databases (PubMed, Medline, Web of Science, Cochrane, and Scopus). Two independent reviewers screened the titles and abstracts of the collected records, stored in EndNote X7, against the inclusion criteria. The included articles were then critically appraised to assess study quality using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Of the 990 initially collected articles, seven studies were included after deduplication and screening against the exclusion criteria. In total, 1288 human cone-beam computed tomography (CBCT) scans were investigated for inferior alveolar canal localization using different algorithms, and the results were compared with manual tracing performed by experts in the field. The reported values for the diagnostic accuracy of the algorithms were extracted. A wide range of testing measures was implemented in the analyzed studies, while some of the expected indexes were still missing from the results. Future studies should consider the new artificial intelligence guidelines to ensure proper methodology, reporting, results, and validation.
Affiliation(s)
- Julien Issa
- Department of Biomaterials and Experimental Dentistry, Poznań University of Medical Sciences, Bukowska 70, 60-812 Poznań, Poland
- Raphael Olszewski
- Department of Oral and Maxillofacial Surgery, Cliniques Universitaires Saint Luc, UCLouvain, Av. Hippocrate 10, 1200 Brussels, Belgium
- Oral and Maxillofacial Surgery Research Lab (OMFS Lab), NMSK, Institut de Recherche Experimentale et Clinique, UCLouvain, Louvain-la-Neuve, 1348 Brussels, Belgium
- Marta Dyszkiewicz-Konwińska
- Department of Biomaterials and Experimental Dentistry, Poznań University of Medical Sciences, Bukowska 70, 60-812 Poznań, Poland
39. Nozawa M, Ito H, Ariji Y, Fukuda M, Igarashi C, Nishiyama M, Ogi N, Katsumata A, Kobayashi K, Ariji E. Automatic segmentation of the temporomandibular joint disc on magnetic resonance images using a deep learning technique. Dentomaxillofac Radiol 2022; 51:20210185. [PMID: 34347537] [PMCID: PMC8693319] [DOI: 10.1259/dmfr.20210185]
Abstract
OBJECTIVES The aims of the present study were to construct a deep learning model for automatic segmentation of the temporomandibular joint (TMJ) disc on magnetic resonance (MR) images and to evaluate its performance on internal and external test data. METHODS In total, 1200 MR images of closed and open mouth positions in patients with temporomandibular disorder (TMD) were collected from two hospitals (Hospitals A and B). The training and validation data comprised 1000 images from Hospital A, which were used to create a segmentation model. Performance was evaluated using 200 images from Hospital A (internal validity test) and 200 images from Hospital B (external validity test). RESULTS Although recall (sensitivity) was lower on the Hospital B data than on the Hospital A data, both were above 80%. Precision (positive predictive value) was lower on the Hospital A test data for the position of anterior disc displacement. According to the intra-articular TMD classification, the proportion of accurately assigned TMJs was higher for images from Hospital A than for images from Hospital B. CONCLUSION The segmentation deep learning model created in this study may be useful for identifying disc positions on MR images.
Affiliation(s)
- Hirokazu Ito
- Department of Oral and Maxillofacial Radiology, Tsurumi University School of Dentistry, Yokohama, Japan
- Yoshiko Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Motoki Fukuda
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Chinami Igarashi
- Department of Oral and Maxillofacial Radiology, Tsurumi University School of Dentistry, Yokohama, Japan
- Masako Nishiyama
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Nobumi Ogi
- Department of Oral and Maxillofacial Surgery, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Akitoshi Katsumata
- Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan
- Kaoru Kobayashi
- Department of Oral and Maxillofacial Radiology, Tsurumi University School of Dentistry, Yokohama, Japan
- Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
40. Putra RH, Doi C, Yoda N, Astuti ER, Sasaki K. Current applications and development of artificial intelligence for digital dental radiography. Dentomaxillofac Radiol 2022; 51:20210197. [PMID: 34233515] [PMCID: PMC8693331] [DOI: 10.1259/dmfr.20210197]
Abstract
In the last few years, artificial intelligence (AI) research has been developing rapidly in the field of dental and maxillofacial radiology. Dental radiography, which is commonly used in daily practice, provides an incredibly rich resource for AI development and has attracted many researchers to develop applications for various purposes. This study reviewed the applicability of AI to dental radiography based on current studies. Online searches of the PubMed and IEEE Xplore databases, up to December 2020, and subsequent manual searches were performed. We then categorized the applications of AI according to similarity of purpose: diagnosis of dental caries, periapical pathologies, and periodontal bone loss; cyst and tumor classification; cephalometric analysis; screening of osteoporosis; tooth recognition and forensic odontology; dental implant system recognition; and image quality enhancement. Current development of AI methodology in each of these applications is subsequently discussed. Although most of the reviewed studies demonstrated great potential for AI application in dental radiography, further development is still needed before implementation in clinical routine, owing to several challenges and limitations, such as a lack of dataset size justification and unstandardized reporting formats. Considering these limitations and challenges, future AI research in dental radiography should follow standardized reporting formats in order to align research designs and enhance the impact of AI development globally.
Affiliation(s)
- Chiaki Doi
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
- Nobuhiro Yoda
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
- Eha Renwi Astuti
- Department of Dentomaxillofacial Radiology, Faculty of Dental Medicine, Universitas Airlangga, Jl. Mayjen Prof. Dr. Moestopo no 47, Surabaya, Indonesia
- Keiichi Sasaki
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
41. Lim HK, Jung SK, Kim SH, Cho Y, Song IS. Deep semi-supervised learning for automatic segmentation of inferior alveolar nerve using a convolutional neural network. BMC Oral Health 2021; 21:630. [PMID: 34876105] [PMCID: PMC8650351] [DOI: 10.1186/s12903-021-01983-5]
Abstract
Background The inferior alveolar nerve (IAN) innervates and regulates the sensation of the mandibular teeth and lower lip. The position of the IAN should be monitored prior to surgery. Therefore, a study using artificial intelligence (AI) was planned to image and track the position of the IAN automatically, for quicker and safer surgery. Methods A total of 138 cone-beam computed tomography datasets (internal: 98, external: 40) collected from multiple centers (three hospitals) were used in the study. A customized 3D nnU-Net was used for image segmentation. Active learning, consisting of three steps, was carried out in iterations on 83 datasets, with cumulative additions after each step. Subsequently, the accuracy of the model for IAN segmentation was evaluated on 50 datasets. The accuracy, derived from the Dice similarity coefficient (DSC), and the segmentation time were compared across the learning steps. In addition, visual scoring was used to comparatively evaluate manual and automatic segmentation. Results Over the learning steps, the DSC gradually increased from 0.48 ± 0.11 to 0.50 ± 0.11, and finally to 0.58 ± 0.08. The DSC for the external dataset was 0.49 ± 0.12. The times required for segmentation were 124.8, 143.4, and 86.4 s, showing a large decrease at the final stage. In visual scoring, the accuracy of manual segmentation was found to be higher than that of automatic segmentation. Conclusions The deep active learning framework can serve as a fast, accurate, and robust clinical tool for demarcating IAN location.
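An active-learning loop like the one described repeatedly asks experts to annotate the cases the current model is least sure about. The abstract does not state the selection rule used; a common generic choice, sketched here with hypothetical softmax outputs, is predictive-entropy sampling:

```python
import numpy as np

def select_for_annotation(probs, k):
    """Pick the k most uncertain unlabeled cases by predictive entropy.

    probs: (n_cases, n_classes) array of softmax outputs from the current model.
    Returns indices of the k cases to send for expert annotation next.
    """
    eps = 1e-12  # guard against log(0)
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    return np.argsort(entropy)[::-1][:k]  # highest entropy first

# Three mock cases: confident, maximally uncertain, moderately confident
probs = np.array([[0.98, 0.02],
                  [0.50, 0.50],
                  [0.80, 0.20]])
print(select_for_annotation(probs, 2))  # [1 2]
```

The selected cases are annotated, added to the training pool, and the model is retrained, which matches the cumulative three-step iteration the study describes.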
Affiliation(s)
- Ho-Kyung Lim
- Department of Oral and Maxillofacial Surgery, Korea University Guro Hospital, 148, Gurodong-ro, Guro-gu, Seoul, 08308, Republic of Korea
- Seok-Ki Jung
- Department of Orthodontics, Korea University Guro Hospital, 148, Gurodong-ro, Guro-gu, Seoul, 08308, Republic of Korea
- Seung-Hyun Kim
- Department of Medical Humanities, Korea University College of Medicine, 46, Gaeunsa 2-gil, Seongbuk-gu, Seoul, 02842, Republic of Korea
- Yongwon Cho
- Department of Radiology and AI Center, Korea University College of Medicine, Korea University Anam Hospital, 73, Goryeodae-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea
- In-Seok Song
- Department of Oral and Maxillofacial Surgery, Korea University Anam Hospital, 73, Goryeodae-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea
42. Carrillo-Perez F, Pecho OE, Morales JC, Paravina RD, Della Bona A, Ghinea R, Pulgar R, Pérez MDM, Herrera LJ. Applications of artificial intelligence in dentistry: A comprehensive review. J Esthet Restor Dent 2021; 34:259-280. [PMID: 34842324] [DOI: 10.1111/jerd.12844]
Abstract
OBJECTIVE To perform a comprehensive review of the use of artificial intelligence (AI) and machine learning (ML) in dentistry, providing the community with a broad insight on the different advances that these technologies and tools have produced, paying special attention to the area of esthetic dentistry and color research. MATERIALS AND METHODS The comprehensive review was conducted in MEDLINE/PubMed, Web of Science, and Scopus databases, for papers published in English language in the last 20 years. RESULTS Out of 3871 eligible papers, 120 were included for final appraisal. Study methodologies included deep learning (DL; n = 76), fuzzy logic (FL; n = 12), and other ML techniques (n = 32), which were mainly applied to disease identification, image segmentation, image correction, and biomimetic color analysis and modeling. CONCLUSIONS The insight provided by the present work has reported outstanding results in the design of high-performance decision support systems for the aforementioned areas. The future of digital dentistry goes through the design of integrated approaches providing personalized treatments to patients. In addition, esthetic dentistry can benefit from those advances by developing models allowing a complete characterization of tooth color, enhancing the accuracy of dental restorations. CLINICAL SIGNIFICANCE The use of AI and ML has an increasing impact on the dental profession and is complementing the development of digital technologies and tools, with a wide application in treatment planning and esthetic dentistry procedures.
Affiliation(s)
- Francisco Carrillo-Perez
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
- Oscar E Pecho
- Post-Graduate Program in Dentistry, Dental School, University of Passo Fundo, Passo Fundo, Brazil
- Juan Carlos Morales
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
- Rade D Paravina
- Department of Restorative Dentistry and Prosthodontics, School of Dentistry, University of Texas Health Science Center at Houston, Houston, Texas, USA
- Alvaro Della Bona
- Post-Graduate Program in Dentistry, Dental School, University of Passo Fundo, Passo Fundo, Brazil
- Razvan Ghinea
- Department of Optics, Faculty of Science, University of Granada, Granada, Spain
- Rosa Pulgar
- Department of Stomatology, Campus Cartuja, University of Granada, Granada, Spain
- María Del Mar Pérez
- Department of Optics, Faculty of Science, University of Granada, Granada, Spain
- Luis Javier Herrera
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
43. Lahoud P, Diels S, Niclaes L, Van Aelst S, Willems H, Van Gerven A, Quirynen M, Jacobs R. Development and validation of a novel artificial intelligence driven tool for accurate mandibular canal segmentation on CBCT. J Dent 2021; 116:103891. [PMID: 34780873] [DOI: 10.1016/j.jdent.2021.103891]
Abstract
OBJECTIVES The objective of this study was to develop and validate a novel artificial intelligence driven tool for fast and accurate mandibular canal segmentation on cone beam computed tomography (CBCT). METHODS A total of 235 CBCT scans from dentate subjects needing oral surgery were used, allowing for development, training, and validation of a deep learning algorithm for automated mandibular canal (MC) segmentation on CBCT. The shape, diameter, and direction of the MC were adjusted on all CBCT slices using a voxel-wise approach. Validation was then performed on a random set of 30 CBCT scans, previously unseen by the algorithm, in which voxel-level annotations allowed assessment of all MC segmentations. RESULTS Primary results show successful implementation of the AI algorithm for segmentation of the MC, with a mean IoU of 0.636 (± 0.081), a median IoU of 0.639 (± 0.081), and a mean Dice similarity coefficient of 0.774 (± 0.062). Precision, recall, and accuracy had mean values of 0.782 (± 0.121), 0.792 (± 0.108), and 0.99 (± 7.64 × 10⁻⁵), respectively. The total time for automated AI segmentation was 21.26 s (± 2.79), which is 107 times faster than accurate manual segmentation. CONCLUSIONS This study demonstrates a novel, fast, and accurate AI-driven module for MC segmentation on CBCT. CLINICAL SIGNIFICANCE Given the importance of adequate pre-operative mandibular canal assessment, artificial intelligence could help relieve practitioners of the delicate and time-consuming task of manually tracing and segmenting this structure, helping prevent peri- and post-operative neurovascular complications.
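IoU, Dice, precision, and recall, all reported above, are computed from the same voxel counts, and Dice is always the higher of the two overlap scores (Dice = 2·IoU / (1 + IoU)). A generic sketch on toy 1-D masks (not the authors' validation code):

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Voxel-wise IoU, Dice, precision, and recall for binary segmentations."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # predicted and true
    fp = np.logical_and(pred, ~gt).sum()  # predicted but not true
    fn = np.logical_and(~pred, gt).sum()  # true but missed
    return {
        "iou":       tp / (tp + fp + fn),
        "dice":      2 * tp / (2 * tp + fp + fn),
        "precision": tp / (tp + fp),
        "recall":    tp / (tp + fn),
    }

pred = np.array([1, 1, 1, 0, 0, 0], dtype=bool)
gt   = np.array([1, 1, 0, 1, 0, 0], dtype=bool)
m = overlap_metrics(pred, gt)
print(m["iou"], round(float(m["dice"]), 3))  # 0.5 0.667
```

This relation is why the study's mean Dice (0.774) sits above its mean IoU (0.636) on the same segmentations.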
Collapse
Affiliation(s)
- Pierre Lahoud
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Belgium; Department of Oral Health Sciences, Periodontology and Oral Microbiology, University Hospitals of Leuven, Belgium.
- Liselot Niclaes
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Belgium
- Stijn Van Aelst
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Belgium
- Marc Quirynen
- Department of Oral Health Sciences, Periodontology and Oral Microbiology, University Hospitals of Leuven, Belgium
- Reinhilde Jacobs
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Belgium; Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
44
Wei X, Wang Y. Inferior alveolar canal segmentation based on cone-beam computed tomography. Med Phys 2021; 48:7074-7088. [PMID: 34628674 DOI: 10.1002/mp.15274] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Revised: 09/19/2021] [Accepted: 09/21/2021] [Indexed: 12/26/2022] Open
Abstract
PURPOSE The shape and position of the inferior alveolar canal (IAC) are analyzed to effectively reduce the risk of iatrogenic injury based on cone-beam computed tomography (CBCT). To assist dental clinicians to make better use of the IAC information, we propose an IAC segmentation method based on CBCT images. METHODS In this paper, CBCT images are first preprocessed by Hounsfield unit (HU) value clipping and gray normalization. Second, based on multi-plane reconstruction (MPR) and curved surface reconstruction, the curved MPR image sets are generated from a smooth dental arch curve with a sampling distance of 1.00 pixels. Then, the K-means clustering algorithm is used to cluster the texture parameters of the gray level-gradient co-occurrence matrix enhanced by the gradient directions to improve the image contrast of the IAC. Finally, the IAC edges are roughly segmented by the 2D line-tracking method and smoothed by a fourth-order polynomial to obtain the final segmentation result. RESULTS Twenty-one real clinical dental CBCT datasets were used to test the proposed method. The manual segmentation results of two specialized dental clinicians were used as quantitative evaluation criteria. The Dice similarity index (DSI), average symmetric surface distance (ASSD), and mean curve distance (MCD) of the left IAC are 0.93 (SD = 0.01), 0.16 mm (SD = 0.05 mm), and 1.59 mm (SD = 0.25 mm), respectively; the DSI, ASSD, and MCD of the right IAC are 0.93 (SD = 0.02), 0.16 mm (SD = 0.05 mm), and 1.60 mm (SD = 0.30 mm), respectively. CONCLUSIONS The proposed method provides an effective image enhancement and segmentation solution to analyze the shape and position of the IAC. Experimental results show that the relationships between the IAC and other structures can be accurately reflected in the panoramic images without superimposition and geometric distortion, and the smooth edges of the IAC can be segmented.
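The preprocessing step above (HU value clipping followed by gray normalization) amounts to windowing the intensities and rescaling them to a fixed range. A minimal sketch; the clipping window below is an illustrative choice, not the paper's exact values:

```python
def preprocess_hu(slice_hu, lo=-1000.0, hi=3000.0):
    """Clip HU values to [lo, hi], then linearly rescale to [0, 1].
    The [-1000, 3000] window is illustrative, not from the study."""
    return [(min(max(v, lo), hi) - lo) / (hi - lo) for v in slice_hu]

# Out-of-window values saturate at 0.0 and 1.0
normalized = preprocess_hu([-2000, 0, 500, 4000])  # [0.0, 0.25, 0.375, 1.0]
```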
Affiliation(s)
- Xueqiong Wei
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
- Yuanjun Wang
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
45
Sherwood AA, Sherwood AI, Setzer FC, K SD, Shamili JV, John C, Schwendicke F. A Deep Learning Approach to Segment and Classify C-Shaped Canal Morphologies in Mandibular Second Molars Using Cone-beam Computed Tomography. J Endod 2021; 47:1907-1916. [PMID: 34563507 DOI: 10.1016/j.joen.2021.09.009] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Revised: 09/12/2021] [Accepted: 09/14/2021] [Indexed: 01/11/2023]
Abstract
INTRODUCTION The identification of C-shaped root canal anatomy on radiographic images affects clinical decision making and treatment. The aims of this study were to develop a deep learning (DL) model to classify C-shaped canal anatomy in mandibular second molars from cone-beam computed tomographic (CBCT) volumes and to compare the performance of 3 different architectures. METHODS U-Net, residual U-Net, and Xception U-Net architectures were used for image segmentation and classification of C-shaped anatomies. Model training and validation were performed on 100 of a total of 135 available limited field of view CBCT images containing mandibular molars with C-shaped anatomy. Thirty-five CBCT images were used for testing. Voxel-matching accuracy of the automated labeling of the C-shaped anatomy was assessed with the Dice index. The mean sensitivity of predicting the correct C-shape subcategory was calculated based on detection accuracy. One-way analysis of variance and post hoc Tukey honestly significant difference tests were used for statistical evaluation. RESULTS The mean Dice coefficients were 0.768 ± 0.0349 for Xception U-Net, 0.736 ± 0.0297 for residual U-Net, and 0.660 ± 0.0354 for U-Net on the test data set. The performance of the 3 models was significantly different overall (analysis of variance, P = .000779). Both Xception U-Net (Q = 7.23, P = .00070) and residual U-Net (Q = 5.09, P = .00951) performed significantly better than U-Net (post hoc Tukey honestly significant difference test). The mean sensitivity values were 0.786 ± 0.0378 for Xception U-Net, 0.746 ± 0.0391 for residual U-Net, and 0.720 ± 0.0495 for U-Net. The mean positive predictive values were 77.6% ± 0.1998% for U-Net, 78.2% ± 0.1971% for residual U-Net, and 80.0% ± 0.1098% for Xception U-Net. The addition of contrast-limited adaptive histogram equalization improved overall architecture efficacy by a mean of 4.6% (P < .0001).
CONCLUSIONS DL may aid in the detection and classification of C-shaped canal anatomy.
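The statistical evaluation above begins with a one-way ANOVA across the three architectures. The F statistic it tests is the ratio of between-group to within-group mean squares; a self-contained sketch on toy score groups (the data are illustrative, not the study's):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over k groups:
    between-group mean square divided by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three toy score groups; the third clearly differs from the first two
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [8, 9, 10]])  # 43.0
```

A significant F (as in the study's P = .000779) only says the group means differ somewhere, which is why the post hoc Tukey test is then needed to locate the pairwise differences.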
Affiliation(s)
- Adithya A Sherwood
- Mahatma Montessori Matriculation Higher Secondary School, Madurai, Tamil Nadu, India
- Anand I Sherwood
- Department of Conservative Dentistry and Endodontics, CSI College of Dental Sciences, Madurai, Tamil Nadu, India.
- Frank C Setzer
- Department of Endodontics, School of Dental Medicine, University of Pennsylvania, Philadelphia, Pennsylvania.
- Sheela Devi K
- Mahatma Montessori Matriculation Higher Secondary School, Madurai, Tamil Nadu, India
- Jasmin V Shamili
- Department of Conservative Dentistry and Endodontics, CSI College of Dental Sciences, Madurai, Tamil Nadu, India
- Caroline John
- Department of Computer Science, Hal Marcus College of Science and Engineering, University of West Florida, Pensacola, Florida
- Falk Schwendicke
- Department of Oral Diagnostics, Charité - Universitätsmedizin Berlin, Berlin, Germany
46
Lo Giudice A, Ronsivalle V, Spampinato C, Leonardi R. Fully automatic segmentation of the mandible based on convolutional neural networks (CNNs). Orthod Craniofac Res 2021; 24 Suppl 2:100-107. [PMID: 34553817 DOI: 10.1111/ocr.12536] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Revised: 05/15/2021] [Accepted: 06/10/2021] [Indexed: 12/31/2022]
Abstract
OBJECTIVES To evaluate the accuracy of an automatic deep learning-based method for fully automatic segmentation of the mandible from CBCTs. SETTING AND SAMPLE POPULATION CBCT-derived mandible fully automatic segmentation. METHODS Forty CBCT scans from healthy patients (20 females and 20 males, mean age 23.37 ± 3.34) were collected, and manual mandible segmentation was carried out using Mimics software. Twenty CBCT scans were randomly selected and used for training the artificial intelligence model file. The remaining 20 CBCT segmentation masks were used to test the accuracy of the CNN automatic method by comparing the segmentation volumes of the 3D models obtained with automatic and manual segmentations. The accuracy of the CNN-based method was also assessed using the Dice score coefficient (DSC) and the surface-to-surface matching technique. The intraclass correlation coefficient (ICC) and Dahlberg's formula were used, respectively, to test the intra-observer reliability and method error. An independent Student's t test was used for between-groups volumetric comparison. RESULTS Measurements were highly correlated, with an ICC value of 0.937, while the method error was 0.24 mm³. A difference of 0.71 (± 0.49) cm³ was found between the methodologies, but it was not statistically significant (P > .05). The matching percentage detected was 90.35% (± 1.88%) (tolerance 0.5 mm) and 96.32% (± 1.97%) (tolerance 1.0 mm). The differences, measured as DSC in percentage, between the assessments done with both methods were, respectively, 2.8% and 3.1%. CONCLUSION The tested deep learning CNN-based technology is accurate and performs as well as an experienced image reader but at much higher speed, which is of significant clinical relevance.
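The method error above is computed with Dahlberg's formula, sqrt(Σd²/2n) over n paired repeat measurements. A minimal sketch (the measurement values below are hypothetical):

```python
import math

def dahlberg_error(first, second):
    """Dahlberg's method error for paired repeat measurements:
    sqrt(sum of squared differences / 2n)."""
    diffs_sq = sum((a - b) ** 2 for a, b in zip(first, second))
    return math.sqrt(diffs_sq / (2 * len(first)))

# Two hypothetical repeat measurement sessions on three volumes
err = dahlberg_error([10.0, 12.0, 11.5], [10.2, 11.8, 11.5])
```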
Affiliation(s)
- Antonino Lo Giudice
- Department of Orthodontics, School of Dentistry, University of Catania, Catania, Italy
- Vincenzo Ronsivalle
- Department of Orthodontics, School of Dentistry, University of Catania, Catania, Italy
- Concetto Spampinato
- Department of Computer and Telecommunications Engineering, University of Catania, Catania, Italy
- Rosalia Leonardi
- Department of Orthodontics, School of Dentistry, University of Catania, Catania, Italy
47
Verhelst PJ, Smolders A, Beznik T, Meewis J, Vandemeulebroucke A, Shaheen E, Van Gerven A, Willems H, Politis C, Jacobs R. Layered deep learning for automatic mandibular segmentation in cone-beam computed tomography. J Dent 2021; 114:103786. [PMID: 34425172 DOI: 10.1016/j.jdent.2021.103786] [Citation(s) in RCA: 64] [Impact Index Per Article: 21.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Revised: 08/05/2021] [Accepted: 08/16/2021] [Indexed: 12/23/2022] Open
Abstract
OBJECTIVE To develop and validate a layered deep learning algorithm which automatically creates three-dimensional (3D) surface models of the human mandible out of cone-beam computed tomography (CBCT) imaging. MATERIALS & METHODS Two convolutional networks using a 3D U-Net architecture were combined and deployed in a cloud-based artificial intelligence (AI) model. The AI model was trained in two phases and iteratively improved to optimize the segmentation result using 160 anonymized full skull CBCT scans of orthognathic surgery patients (70 preoperative scans and 90 postoperative scans). The final AI model was then tested by assessing timing, consistency, and accuracy on a separate testing dataset of 15 pre- and 15 postoperative full skull CBCT scans. The AI model was compared to user-refined AI segmentations (RAI) and to semi-automatic segmentation (SA), which is the current clinical standard. The time needed for segmentation was measured in seconds. Intra- and inter-operator consistency were assessed to check if the segmentation protocols delivered reproducible results. The following consistency metrics were used: intersection over union (IoU), Dice similarity coefficient (DSC), Hausdorff distance (HD), absolute volume difference and root mean square (RMS) distance. To evaluate the match of the AI and RAI results to those of the SA method, their accuracy was measured using IoU, DSC, HD, absolute volume difference and RMS distance. RESULTS On average, SA took 1218.4 s. RAI showed a significant drop (p < 0.0001) in timing to 456.5 s (2.7-fold decrease). The AI method only took 17 s (71.3-fold decrease). The average intra-operator IoU for RAI was 99.5% compared to 96.9% for SA. For inter-operator consistency, RAI scored an IoU of 99.6% compared to 94.6% for SA. The AI method was always consistent by default. In both the intra- and inter-operator consistency assessments, RAI outperformed SA on all metrics, indicative of better consistency.
With SA as the ground truth, AI and RAI scored an IoU of 94.6% and 94.4%, respectively. All accuracy metrics were similar for AI and RAI, meaning that both methods produce 3D models that closely match those produced by SA. CONCLUSION A layered 3D U-Net architecture deep learning algorithm, with and without additional user refinements, improves time-efficiency, reduces operator error, and provides excellent accuracy when benchmarked against the clinical standard. CLINICAL SIGNIFICANCE Semi-automatic segmentation in CBCT imaging is time-consuming and allows user-induced errors. Layered convolutional neural networks using a 3D U-Net architecture allow direct segmentation of high-resolution CBCT images. This approach creates 3D mandibular models in a more time-efficient and consistent way. It is accurate when benchmarked to semi-automatic segmentation.
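Among the distance metrics listed above, the Hausdorff distance (HD) is the largest nearest-neighbour distance between two surfaces, taken in both directions. A brute-force sketch on small point sets (the points below are illustrative; real use would sample the 3D mesh surfaces):

```python
import math

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets:
    the largest nearest-neighbour distance in either direction."""
    def directed(xs, ys):
        return max(min(math.dist(x, y) for y in ys) for x in xs)
    return max(directed(a, b), directed(b, a))

# Two tiny illustrative "surfaces" one unit apart
hd = hausdorff_distance([(0, 0), (1, 0)], [(0, 1), (1, 1)])  # 1.0
```

Unlike volume-overlap metrics such as IoU or DSC, HD is sensitive to a single outlying point, which is why segmentation studies usually report both families of metrics.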
Affiliation(s)
- Pieter-Jan Verhelst
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium; OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium.
- Jeroen Meewis
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium; OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium
- Arne Vandemeulebroucke
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium
- Eman Shaheen
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium; OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium
- Constantinus Politis
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium; OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium
- Reinhilde Jacobs
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium; OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium; Department of Dental Medicine, Karolinska Institutet, Box 4064, 141 04 Huddinge, Sweden
48
Deep learning-based evaluation of the relationship between mandibular third molar and mandibular canal on CBCT. Clin Oral Investig 2021; 26:981-991. [PMID: 34312683 DOI: 10.1007/s00784-021-04082-5] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Accepted: 07/13/2021] [Indexed: 10/20/2022]
Abstract
OBJECTIVES The objective of our study was to develop and validate a deep learning approach based on convolutional neural networks (CNNs) for automatic detection of the mandibular third molar (M3) and the mandibular canal (MC) and evaluation of the relationship between them on CBCT. MATERIALS AND METHODS A dataset of 254 CBCT scans with annotations by radiologists was used for training, validation, and testing. The proposed approach consisted of two modules: (1) detection and pixel-wise segmentation of M3 and MC based on U-Nets; (2) M3-MC relation classification based on ResNet-34. The performances were evaluated with the test set. The classification performance of our approach was compared with that of two residents in oral and maxillofacial radiology. RESULTS For segmentation performance, the M3 had a mean Dice similarity coefficient (mDSC) of 0.9730 and a mean intersection over union (mIoU) of 0.9606; the MC had a mDSC of 0.9248 and a mIoU of 0.9003. The classification models achieved a mean sensitivity of 90.2%, a mean specificity of 95.0%, and a mean accuracy of 93.3%, which was on par with the residents. CONCLUSIONS Our approach based on CNNs demonstrated an encouraging performance for the automatic detection and evaluation of the M3 and MC on CBCT. CLINICAL RELEVANCE An automated approach based on CNNs for detection and evaluation of M3 and MC on CBCT has been established, which can be utilized to improve diagnostic efficiency and facilitate the precision diagnosis and treatment of M3.
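The classification metrics above (sensitivity, specificity, accuracy) all derive from the four confusion-matrix counts. A minimal sketch with illustrative counts (not the study's confusion matrix):

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)               # true-positive rate
    specificity = tn / (tn + fp)               # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts for one relation class
sens, spec, acc = classification_metrics(tp=45, fp=3, tn=57, fn=5)
# sens = 0.9, spec = 0.95, acc ≈ 0.927
```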
49
Coronary artery segmentation under class imbalance using a U-Net based architecture on computed tomography angiography images. Sci Rep 2021; 11:14493. [PMID: 34262118 PMCID: PMC8280179 DOI: 10.1038/s41598-021-93889-z] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2020] [Accepted: 05/27/2021] [Indexed: 11/12/2022] Open
Abstract
Coronary artery disease is caused primarily by vessel narrowing. Extraction of the coronary artery area from images is the preferred procedure for diagnosing coronary diseases. In this study, a U-Net-based network architecture, 3D Dense-U-Net, was adopted to perform fully automatic segmentation of the coronary artery. The network was applied to 474 coronary computed tomography (CT) angiography scans performed at Wanfang Hospital, Taiwan. Of these, 10% were used for testing. The CT scans were divided into patches of 16 original high-resolution slices, with slices overlapped between patches to take advantage of surrounding imaging information. However, the imbalance between foreground and background presents a challenge in segmenting smaller objects such as coronary arteries. The network was optimized and achieved a promising result when the focal loss concept was adopted. To evaluate the accuracy of the automatic segmentation approach, the Dice similarity coefficient (DSC) was calculated and the results were compared against an existing clinical tool using the subjective ratings of three experienced radiologists. The results show that the proposed approach can achieve a DSC of 0.9691, which is significantly higher than that of other studies using a deep learning approach. In the main trunk, the results of automatic segmentation agree with those of the clinical tool, and they were significantly better in some small branches. In this study, the automatic segmentation tool showed high performance in detecting coronary lumen vessels, thereby providing potential power in assisting clinical diagnosis.
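The focal loss mentioned above addresses foreground/background imbalance by down-weighting well-classified voxels. A per-voxel sketch of the standard binary form (gamma and alpha are the commonly used defaults, not necessarily this study's settings):

```python
import math

def binary_focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single prediction: the (1 - pt)**gamma
    factor shrinks the loss of easy examples so the rare foreground
    class dominates training. p is the predicted foreground
    probability, y the 0/1 label."""
    pt = p if y == 1 else 1.0 - p
    weight = alpha if y == 1 else 1.0 - alpha
    return -weight * (1.0 - pt) ** gamma * math.log(pt)

# A confident correct prediction contributes far less than a poor one
easy = binary_focal_loss(0.9, 1)
hard = binary_focal_loss(0.1, 1)
```

With gamma = 0 and alpha = 0.5 this reduces to (half of) ordinary binary cross-entropy; increasing gamma sharpens the down-weighting of easy background voxels.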
50
Jung SK, Lim HK, Lee S, Cho Y, Song IS. Deep Active Learning for Automatic Segmentation of Maxillary Sinus Lesions Using a Convolutional Neural Network. Diagnostics (Basel) 2021; 11:688. [PMID: 33921353 PMCID: PMC8070431 DOI: 10.3390/diagnostics11040688] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Revised: 04/08/2021] [Accepted: 04/09/2021] [Indexed: 11/21/2022] Open
Abstract
The aim of this study was to segment the maxillary sinus into maxillary bone, air, and lesion, and to evaluate the accuracy of the segmentation by comparison with expert results. We randomly selected 83 cases for deep active learning. Our active learning framework consists of three steps. This framework adds new volumes per step to improve the performance of the model with limited training datasets, while inferring automatically using the model trained in the previous step. We determined the effect of active learning on dental cone-beam computed tomography (CBCT) volumes with our customized 3D nnU-Net in all three steps. The Dice similarity coefficients (DSCs) at each stage for air were 0.920 ± 0.17, 0.925 ± 0.16, and 0.930 ± 0.16, respectively. The DSCs at each stage for the lesion were 0.770 ± 0.18, 0.750 ± 0.19, and 0.760 ± 0.18, respectively. The time consumed by the convolutional neural network (CNN)-assisted, manually modified segmentation decreased by approximately 493.2 s for 30 scans in the second step, and by approximately 362.7 s for 76 scans in the last step. In conclusion, this study demonstrates that a deep active learning framework can alleviate annotation efforts and costs by efficiently training on limited CBCT datasets.
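The staged loop described above can be sketched as pool bookkeeping: each round, the current model auto-segments a batch of new volumes, an expert corrects them, and they join the training set. A toy sketch only; no real model is trained, and the pool sizes below are illustrative, not the study's:

```python
def active_learning_rounds(n_labeled, n_unlabeled, rounds, batch):
    """Toy sketch of a staged active-learning loop: each round promotes
    `batch` auto-segmented, expert-corrected volumes from the unlabeled
    pool into the training set and records the growing training size."""
    sizes = []
    for _ in range(rounds):
        take = min(batch, n_unlabeled)   # auto-segment with the current model
        n_unlabeled -= take              # expert corrects the predictions...
        n_labeled += take                # ...and they join the training pool
        sizes.append(n_labeled)
    return sizes

# Start with 7 annotated scans, promote 10 corrected scans per step
sizes = active_learning_rounds(n_labeled=7, n_unlabeled=30, rounds=3, batch=10)
# [17, 27, 37]
```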
Affiliation(s)
- Seok-Ki Jung
- Department of Orthodontics, Korea University Guro Hospital, Seoul 08308, Korea;
- Ho-Kyung Lim
- Department of Oral and Maxillofacial Surgery, Korea University Guro Hospital, Seoul 08308, Korea;
- Seungjun Lee
- Department of Oral and Maxillofacial Surgery, Korea University Anam Hospital, Seoul 02841, Korea;
- Yongwon Cho
- Department of Radiology, Korea University Anam Hospital, Seoul 02841, Korea
- In-Seok Song
- Department of Oral and Maxillofacial Surgery, Korea University Anam Hospital, Seoul 02841, Korea;