1
Yasin ET, Erturk M, Tassoker M, Koklu M. Automatic mandibular third molar and mandibular canal relationship determination based on deep learning models for preoperative risk reduction. Clin Oral Investig 2025;29:203. PMID: 40128451; PMCID: PMC11933192; DOI: 10.1007/s00784-025-06285-6.
Abstract
OBJECTIVES: This study explores the application of deep learning models for classifying the spatial relationship between mandibular third molars and the mandibular canal on cone-beam computed tomography images. Accurate classification of this relationship is essential for preoperative planning, as improper assessment can lead to complications such as inferior alveolar nerve injury during extraction.
MATERIALS AND METHODS: A dataset of 305 cone-beam computed tomography scans, categorized into three classes (not contacted, nearly contacted, and contacted), was meticulously annotated and validated by maxillofacial radiology experts to ensure reliability. Multiple state-of-the-art convolutional neural networks, including MobileNet, Xception, and DenseNet201, were trained and evaluated, and their performance metrics were analysed.
RESULTS: MobileNet achieved the highest overall performance, with an accuracy of 99.44%. Xception and DenseNet201 also demonstrated strong classification capabilities, with accuracies of 98.74% and 98.73%, respectively.
CONCLUSIONS: These results highlight the potential of deep learning models to automate, and to improve the accuracy and consistency of, classification of the relationship between mandibular third molars and the mandibular canal.
CLINICAL RELEVANCE: Integrating such systems into clinical workflows could enhance surgical risk assessment, streamline diagnostics, and reduce reliance on manual analysis, particularly in resource-constrained settings. This study advances the use of artificial intelligence in dental imaging, offering a promising avenue for safer and more efficient surgical planning.
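Accuracy figures such as the 99.44% reported above come from a confusion matrix over the three classes. A minimal sketch of computing overall accuracy and per-class precision/recall (the counts below are invented for illustration, not the study's data):

```python
# Toy 3x3 confusion matrix for the three classes used in the study
# (rows = true class, columns = predicted class); counts are invented.
CLASSES = ["not contacted", "nearly contacted", "contacted"]
CM = [
    [98, 1, 1],   # true: not contacted
    [2, 95, 3],   # true: nearly contacted
    [0, 2, 98],   # true: contacted
]

def accuracy(cm):
    """Fraction of all cases that fall on the matrix diagonal."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

def precision_recall(cm, i):
    """Precision and recall for class i (column sum vs. row sum)."""
    tp = cm[i][i]
    predicted_i = sum(cm[r][i] for r in range(len(cm)))  # column sum
    actual_i = sum(cm[i])                                # row sum
    return tp / predicted_i, tp / actual_i

acc = accuracy(CM)              # (98+95+98)/300 = 0.97
p, r = precision_recall(CM, 2)  # "contacted": 98/102 and 98/100
```

The same bookkeeping generalizes to any number of classes; deep learning frameworks report these values, but the underlying arithmetic is only row and column sums.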
Affiliation(s)
- Elham Tahsin Yasin
- Graduate School of Natural and Applied Sciences, Department of Computer Engineering, Faculty of Technology, Selcuk University, Konya, Türkiye
- Mediha Erturk
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Necmettin Erbakan University, Konya, Türkiye
- Melek Tassoker
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Necmettin Erbakan University, Konya, Türkiye
- Murat Koklu
- Department of Computer Engineering, Faculty of Technology, Selcuk University, Konya, Türkiye.
2
Kayadibi İ, Köse U, Güraksın GE, Çetin B. An AI-assisted explainable mTMCNN architecture for detection of mandibular third molar presence from panoramic radiography. Int J Med Inform 2025;195:105724. PMID: 39626596; DOI: 10.1016/j.ijmedinf.2024.105724.
Abstract
OBJECTIVE: This study aimed to design and systematically evaluate an architecture, the Explainable Mandibular Third Molar Convolutional Neural Network (E-mTMCNN), for detecting the presence of mandibular third molars (m-M3) in panoramic radiography (PR). The proposed architecture seeks to improve the accuracy of early detection and support clinical decision-making and treatment planning in dentistry.
METHODS: A new dataset, the Mandibular Third Molar (m-TM) dataset, was developed through expert labeling of raw PR images from the UESB dataset and made publicly accessible to support further research. Image preprocessing techniques, including Gaussian filtering, gamma correction, and data augmentation, were applied to improve image quality. Several deep learning (DL) convolutional neural network (CNN) architectures were trained and validated using transfer learning (TL). Among these, the E-mTMCNN, built on the GoogLeNet architecture, achieved the highest performance. To make the model's decision-making process transparent, Local Interpretable Model-Agnostic Explanations (LIME) were integrated as an eXplainable Artificial Intelligence (XAI) approach. Clinical reliability and applicability were assessed through a survey of specialized dentists using a decision support system based on the E-mTMCNN.
RESULTS: The E-mTMCNN achieved a classification accuracy of 87.02%, with a sensitivity of 75%, specificity of 94.73%, precision of 77.68%, F1 score of 75.51%, and area under the curve (AUC) of 87.01%. LIME provided visual explanations of the model's decision-making rationale, reinforcing the robustness of the proposed architecture, and the expert survey indicated high clinical acceptance and confidence in the system's reliability.
CONCLUSION: The findings demonstrate that the E-mTMCNN effectively detects the presence of m-M3 in PRs, outperforming current state-of-the-art methods. The architecture shows considerable potential for integration into computer-aided diagnostic systems, advancing early detection and improving the precision of treatment planning in dental practice.
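Of the preprocessing steps named above, gamma correction is simple to sketch. A minimal NumPy version, illustrative only (the study's actual parameters are not reported; Gaussian filtering would typically come from `scipy.ndimage.gaussian_filter`):

```python
import numpy as np

def gamma_correct(img_u8: np.ndarray, gamma: float) -> np.ndarray:
    """Apply gamma correction to an 8-bit grayscale image.

    gamma < 1 brightens mid-tones; gamma > 1 darkens them.
    Black (0) and white (255) are left unchanged.
    """
    normalized = img_u8.astype(np.float64) / 255.0
    corrected = np.power(normalized, gamma)
    return np.clip(corrected * 255.0, 0, 255).astype(np.uint8)

# A flat mid-gray test patch: gamma 0.5 brightens it (64 -> 127).
patch = np.full((4, 4), 64, dtype=np.uint8)
brightened = gamma_correct(patch, 0.5)
```

In a radiograph pipeline this would be applied before augmentation, since gamma changes intensity statistics that augmentations then vary further.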
Affiliation(s)
- İsmail Kayadibi
- Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Suleyman Demirel University, Isparta, Turkey; Department of Management Information Systems, Faculty of Economic and Administrative Sciences, Afyon Kocatepe University, Afyonkarahisar, Turkey.
- Utku Köse
- Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Suleyman Demirel University, Isparta, Turkey.
- Gür Emre Güraksın
- Department of Computer Engineering, Faculty of Engineering, Afyon Kocatepe University, Afyonkarahisar, Turkey.
- Bilgün Çetin
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Selcuk University, Konya, Turkey.
3
Akdoğan S, Öziç MÜ, Tassoker M. Development of an AI-Supported Clinical Tool for Assessing Mandibular Third Molar Tooth Extraction Difficulty Using Panoramic Radiographs and YOLO11 Sub-Models. Diagnostics (Basel) 2025;15:462. PMID: 40002613; PMCID: PMC11853743; DOI: 10.3390/diagnostics15040462.
Abstract
Background/Objective: This study aimed to develop an AI-supported clinical tool to evaluate the difficulty of mandibular third molar extractions from panoramic radiographs.
Methods: A dataset of 2000 panoramic radiographs collected between 2023 and 2024 was annotated by an oral radiologist using bounding boxes. YOLO11 sub-models were trained and tested for three scenarios derived from the Pederson index criteria: Winter angulation, and the Pell and Gregory ramus relationship (class) and depth (level). For each scenario, 80% of the data was used for training, 10% for validation, and 10% for testing. Model performance was assessed using precision, recall, F1 score, and mean Average Precision (mAP).
Results: All YOLO11 sub-models (nano, small, medium, large, extra-large) showed high accuracy and similar behavior across scenarios. For calculating the Pederson index, the optimal sub-models were nano for Winter angulation (average training mAP@0.50 = 0.963; testing mAP@0.50 = 0.975), nano for Pell and Gregory class (average training mAP@0.50 = 0.979; testing mAP@0.50 = 0.965), and medium for Pell and Gregory level (average training mAP@0.50 = 0.977; testing mAP@0.50 = 0.989). The three scenarios were run consecutively on panoramic images, and the resulting scores yielded slightly difficult, moderately difficult, and very difficult Pederson indexes. Evaluated against an oral radiologist, the AI system determined the Pederson index with 97.00% precision, 94.55% recall, and a 95.76% F1 score.
Conclusions: The YOLO11-supported clinical tool demonstrated high accuracy and reliability in assessing mandibular third molar extraction difficulty on panoramic radiographs. The models were integrated into a GUI for clinical use, offering dentists a simple tool for estimating extraction difficulty and improving decision-making and patient management.
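The Pederson index referenced above sums scores for Winter angulation and the Pell and Gregory class and level. A minimal sketch using the commonly cited score tables; the exact point values and cut-offs below are an assumption from the general literature, not taken from this study:

```python
# Commonly cited Pederson score tables (assumed, not from the study itself).
ANGULATION = {"mesioangular": 1, "horizontal": 2, "transverse": 2,
              "vertical": 3, "distoangular": 4}      # Winter classification
DEPTH = {"A": 1, "B": 2, "C": 3}                     # Pell & Gregory level
RAMUS = {"I": 1, "II": 2, "III": 3}                  # Pell & Gregory class

def pederson(angulation: str, depth: str, ramus: str) -> tuple:
    """Sum the three component scores and map to a difficulty label."""
    score = ANGULATION[angulation] + DEPTH[depth] + RAMUS[ramus]
    if score <= 4:
        label = "slightly difficult"
    elif score <= 7:
        label = "moderately difficult"
    else:
        label = "very difficult"
    return score, label

# e.g. mesioangular, level A, class I -> (3, "slightly difficult")
```

In the study's pipeline, the three YOLO11 sub-models would each supply one of these categorical inputs, and a rule like this one would produce the final index.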
Affiliation(s)
- Serap Akdoğan
- Department of Biomedical Engineering, Faculty of Technology, Pamukkale University, Denizli 20160, Türkiye
- Muhammet Üsame Öziç
- Department of Biomedical Engineering, Faculty of Technology, Pamukkale University, Denizli 20160, Türkiye
- Melek Tassoker
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Necmettin Erbakan University, Konya 42090, Türkiye
4
Fernandes FA, Ge M, Chaltikyan G, Gerdes MW, Omlin CW. Preparing for downstream tasks in artificial intelligence for dental radiology: a baseline performance comparison of deep learning models. Dentomaxillofac Radiol 2025;54:149-162. PMID: 39563402; PMCID: PMC11784916; DOI: 10.1093/dmfr/twae056.
Abstract
OBJECTIVES: To compare the performance of the convolutional neural network (CNN) with the vision transformer (ViT) and the gated multilayer perceptron (gMLP) in the classification of radiographic images of dental structures.
METHODS: Retrospectively collected two-dimensional images derived from cone beam computed tomographic volumes were used to train CNN, ViT, and gMLP architectures as classifiers for four cases: the radiographic appearance of the maxillary sinuses, maxillary and mandibular incisors, the presence or absence of the mental foramen, and the positional relationship of the mandibular third molar to the inferior alveolar nerve canal. Performance metrics (sensitivity, specificity, precision, accuracy, and F1 score) and the areas under the receiver operating characteristic and precision-recall curves (AUC) were calculated.
RESULTS: The ViT, with an accuracy of 0.74-0.98, performed on par with the CNN model (accuracy 0.71-0.99) in all tasks, and for certain tasks the ViT outperformed the CNN. The gMLP displayed marginally lower performance (accuracy 0.65-0.98) than the CNN and ViT. Across the four cases, the AUCs ranged from 0.77 to 1.00 (CNN), 0.80 to 1.00 (ViT), and 0.73 to 1.00 (gMLP).
CONCLUSIONS: The ViT and gMLP performed comparably with the CNN (the current state of the art), although for certain tasks their performance differed significantly from the CNN's. These task-dependent differences indicate that the capabilities of the different architectures may be leveraged selectively.
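The AUCs reported above have a simple probabilistic reading: the chance that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A minimal, library-free sketch of that computation (toy scores, not the study's outputs):

```python
def auc(pos_scores, neg_scores):
    """ROC AUC as the normalized Mann-Whitney U statistic.

    Counts positive/negative score pairs where the positive wins;
    a tie contributes half a win.
    """
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy example: three of the four pairs are correctly ordered -> AUC 0.75
example = auc([0.9, 0.8], [0.3, 0.85])
```

This pairwise form is O(n*m); production metric libraries compute the same quantity from sorted ranks, but the value is identical.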
Affiliation(s)
- Fara A Fernandes
- Department of Information and Communication Technology, University of Agder (UiA), 4879 Grimstad, Norway
- Faculty European Campus Rottal-Inn, Deggendorf Institute of Technology (DIT), 84347 Pfarrkirchen, Germany
- Mouzhi Ge
- Faculty European Campus Rottal-Inn, Deggendorf Institute of Technology (DIT), 84347 Pfarrkirchen, Germany
- Georgi Chaltikyan
- Faculty European Campus Rottal-Inn, Deggendorf Institute of Technology (DIT), 84347 Pfarrkirchen, Germany
- Martin W Gerdes
- Department of Information and Communication Technology, University of Agder (UiA), 4879 Grimstad, Norway
- Christian W Omlin
- Department of Information and Communication Technology, University of Agder (UiA), 4879 Grimstad, Norway
5
Trachoo V, Taetragool U, Pianchoopat P, Sukitporn-Udom C, Morakrant N, Warin K. Deep Learning for Predicting the Difficulty Level of Removing the Impacted Mandibular Third Molar. Int Dent J 2025;75:144-150. PMID: 39043529; PMCID: PMC11806308; DOI: 10.1016/j.identj.2024.06.021.
Abstract
BACKGROUND: Preoperative assessment of the impacted mandibular third molar (LM3) on a panoramic radiograph is important for surgical planning. The aim of this study was to develop and evaluate a computer-aided, visualisation-based deep learning (DL) system that uses a panoramic radiograph to predict the difficulty of surgically removing an impacted LM3.
METHODS: The study retrospectively included 1367 LM3 images from 784 patients who presented to the University Dental Hospital from 2021 to 2023. The difficulty of surgically removing impacted LM3s was assessed with a newly developed DL system that seamlessly integrates three distinct DL models: ResNet101V2 performed binary classification to identify impacted LM3s in panoramic radiographs, RetinaNet detected the precise location of the impacted LM3, and a Vision Transformer performed multiclass image classification to grade the difficulty of removing the detected LM3.
RESULTS: The ResNet101V2 model achieved a classification accuracy of 0.8671. The RetinaNet model demonstrated exceptional detection performance, with a mean average precision of 0.9928. The Vision Transformer model delivered an average accuracy of 0.7899 in predicting removal difficulty.
CONCLUSIONS: The three-phase, computer-aided, visualisation-based DL system performed very well in predicting the difficulty of surgically removing an impacted LM3 from panoramic radiographs.
Affiliation(s)
- Vorapat Trachoo
- Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand
- Unchalisa Taetragool
- Department of Computer Engineering, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand
- Ploypapas Pianchoopat
- Department of Computer Engineering, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand
- Chatchapon Sukitporn-Udom
- Department of Computer Engineering, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand
- Narapathra Morakrant
- Department of Computer Engineering, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand
- Kritsasith Warin
- Faculty of Dentistry, Thammasat University, Pathum Thani, Thailand.
6
Soltani P, Sohrabniya F, Mohammad-Rahimi H, Mehdizadeh M, Mohammadreza Mousavi S, Moaddabi A, Mohammadmahdi Mousavi S, Spagnuolo G, Yavari A, Schwendicke F. A two-stage deep-learning model for determination of the contact of mandibular third molars with the mandibular canal on panoramic radiographs. BMC Oral Health 2024;24:1373. PMID: 39538183; PMCID: PMC11562527; DOI: 10.1186/s12903-024-04850-1.
Abstract
OBJECTIVES: This study aimed to assess the accuracy of a two-stage deep learning (DL) model for (1) detecting mandibular third molars (MTMs) and the mandibular canal (MC) and (2) classifying the anatomical relationship between these structures (contact/no contact) on panoramic radiographs.
METHODS: MTMs and MCs were labeled on panoramic radiographs by a calibrated examiner using bounding boxes, each containing the MTM and MC on one side. The relationship of MTMs with the MC was assessed on CBCT scans by two independent examiners, blinded to the condition of the MTM and MC on the corresponding panoramic image, and dichotomized as contact/no contact. Data were split into training, validation, and testing sets at a ratio of 80:10:10. Faster R-CNN was used to detect MTMs and MCs, and ResNeXt to classify their relationship. AP50 and AP75 were used as detection outcomes, and accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic curve (AUROC) were used to assess classification performance. The models were trained and validated in Python with the PyTorch framework.
RESULTS: Three hundred eighty-seven panoramic radiographs were evaluated; MTMs were present bilaterally on 232 and unilaterally on 155. In total, 619 images containing MTMs and MCs were collected. AP50 and AP75 for detecting MTMs and MCs were 0.99 and 0.90, respectively. Classification accuracy, recall, specificity, F1 score, precision, and AUROC values were 0.85, 0.85, 0.93, 0.84, 0.86, and 0.91, respectively.
CONCLUSION: DL can detect MTMs and MCs and accurately assess their anatomical relationship on panoramic radiographs.
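The 80:10:10 split described above is a routine preprocessing step. A minimal sketch with a fixed seed for reproducibility (the file names are hypothetical stand-ins, not the study's data):

```python
import random

def split_dataset(items, seed=42, train=0.8, val=0.1):
    """Shuffle a copy of the items and split train/validation/test."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# Hypothetical radiograph identifiers, stand-ins for the study's 619 images.
ids = [f"pan_{i:03d}.png" for i in range(100)]
train_set, val_set, test_set = split_dataset(ids)  # 80 / 10 / 10 items
```

Splitting by image identifier (rather than by augmented sample) is what keeps the test set disjoint from anything the model saw during training.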
Affiliation(s)
- Parisa Soltani
- Department of Oral and Maxillofacial Radiology, Dental Implants Research Center, Dental Research Institute, School of Dentistry, Isfahan University of Medical Sciences, Isfahan, Iran
- Department of Neurosciences, Reproductive and Odontostomatological Sciences, University of Naples "Federico II", Naples, Italy
- Fatemeh Sohrabniya
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Federal Republic of Germany
- Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Federal Republic of Germany
- Mojdeh Mehdizadeh
- Department of Oral and Maxillofacial Radiology, Dental Implants Research Center, Dental Research Institute, School of Dentistry, Isfahan University of Medical Sciences, Isfahan, Iran
- Amirhossein Moaddabi
- Department of Oral and Maxillofacial Surgery, Dental Research Center, Mazandaran University of Medical Sciences, Sari, Iran.
- Gianrico Spagnuolo
- Department of Neurosciences, Reproductive and Odontostomatological Sciences, University of Naples "Federico II", Naples, Italy
- Amirmohammad Yavari
- Students Research Committee, School of Dentistry, Isfahan University of Medical Sciences, Isfahan, Iran
- Falk Schwendicke
- Clinic for Conservative Dentistry and Periodontology, University Hospital of the Ludwig-Maximilians-University Munich, Munich, Germany
7
Russo D, Mariani P, Bifulco L, Ferrara S, Cicciù M, Laino L. Three-dimensional Morphometric Analysis of the Effectiveness of Kinesio Taping on Postoperative Discomfort Following Mandibular Third Molar Surgery: A Prospective Randomized Split-mouth Study. J Craniofac Surg 2024 (online ahead of print). PMID: 39730117; DOI: 10.1097/scs.0000000000010756.
Abstract
This study investigates the efficacy of Kinesio taping (KT) in reducing postoperative discomfort, including edema, trismus, and pain, following mandibular third molar extraction. A prospective randomized split-mouth design was employed, involving 7 patients with impacted mandibular third molars. KT was applied immediately postsurgery, and outcomes were assessed on the third and seventh postoperative days using a Visual Analog Scale (VAS) for pain, 3D morphometric analysis for swelling, and caliper measurements for trismus. Results showed significant reductions in pain, swelling, and trismus on the KT-treated side compared with the control side. The most notable differences were observed on day 7, where KT demonstrated superior effectiveness in alleviating symptoms. The control group showed improvement over time, but the KT-treated group experienced faster and more pronounced recovery. In conclusion, KT proved to be a safe and effective method for improving postoperative recovery following mandibular third molar surgery, offering a low-cost, accessible option to enhance patient comfort and quality of life.
Affiliation(s)
- Diana Russo
- Multidisciplinary Department of Medical-Surgical and Odontostomatological Specialties, University of Campania "Luigi Vanvitelli", Naples
- Pierluigi Mariani
- Multidisciplinary Department of Medical-Surgical and Odontostomatological Specialties, University of Campania "Luigi Vanvitelli", Naples
- Luca Bifulco
- Multidisciplinary Department of Medical-Surgical and Odontostomatological Specialties, University of Campania "Luigi Vanvitelli", Naples
- Simone Ferrara
- Multidisciplinary Department of Medical-Surgical and Odontostomatological Specialties, University of Campania "Luigi Vanvitelli", Naples
- Marco Cicciù
- Department of Medical-Surgical and Surgical Specialties, University of Catania, Catania, Italy
- Luigi Laino
- Multidisciplinary Department of Medical-Surgical and Odontostomatological Specialties, University of Campania "Luigi Vanvitelli", Naples
8
Qiu P, Cao R, Li Z, Huang J, Zhang H, Zhang X. Applications of artificial intelligence for surgical extraction in stomatology: a systematic review. Oral Surg Oral Med Oral Pathol Oral Radiol 2024;138:346-361. PMID: 38834501; DOI: 10.1016/j.oooo.2024.05.002.
Abstract
OBJECTIVES: Artificial intelligence (AI) has been used extensively in stomatology over the past several years. This study aimed to evaluate the effectiveness of AI-based models in the procedure, assessment, and treatment planning of surgical extraction.
STUDY DESIGN: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, a comprehensive search was conducted on the Web of Science, PubMed/MEDLINE, Embase, and Scopus databases, covering English-language publications up to September 2023. Two reviewers performed study selection and data extraction independently. Only original research studies utilizing AI in surgical extraction in stomatology were included. The Cochrane risk of bias tool for randomized trials (RoB 2) was used for quality assessment of the selected literature.
RESULTS: From 2,336 retrieved references, 35 studies were deemed eligible. Of these, 28 reported the pioneering role of AI in segmentation, classification, and detection, aligning with clinical needs; the other 7 suggested promising results in tooth extraction decision-making, though further model refinement and validation are required.
CONCLUSIONS: The integration of AI into surgical extraction in stomatology has progressed significantly, enhancing decision-making accuracy. Combining and comparing algorithmic outcomes across studies will be essential for determining optimal clinical applications in the future.
Affiliation(s)
- Piaopiao Qiu
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Rongkai Cao
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Zhaoyang Li
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Jiaqi Huang
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Huasheng Zhang
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Xueming Zhang
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China.
9
Torul D, Akpinar H, Bayrakdar IS, Celik O, Orhan K. Prediction of extraction difficulty for impacted maxillary third molars with deep learning approach. J Stomatol Oral Maxillofac Surg 2024;125:101817. PMID: 38458545; DOI: 10.1016/j.jormas.2024.101817.
Abstract
OBJECTIVE: The aim of this study was to determine whether a deep learning (DL) model can predict the surgical difficulty of an impacted maxillary third molar from panoramic images before surgery.
MATERIALS AND METHODS: The dataset consists of 708 panoramic radiographs of patients who presented to the Oral and Maxillofacial Surgery Clinic for various reasons. The difficulty of each maxillary third molar was scored on the panoramic images based on depth (V), angulation (H), relation to the maxillary sinus (S), and relation to the ramus (R). The YOLOv5x architecture was used to perform automatic segmentation and classification. To prevent images used in training from reappearing at test time, the dataset was subdivided into 80% training, 10% validation, and 10% test groups.
RESULTS: The impacted upper third molar segmentation model showed the best performance, with sensitivity, precision, and F1 score of 0.9705, 0.9428, and 0.9565, respectively. The S-model had lower sensitivity, precision, and F1 score than the other models, at 0.8974, 0.6194, and 0.7329, respectively.
CONCLUSION: The results showed that the proposed DL model could be effective for predicting the surgical difficulty of an impacted maxillary third molar from panoramic radiographs, and this approach might serve as a decision support mechanism for clinicians in the peri-surgical period.
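The reported F1 scores can be cross-checked against the reported sensitivity (recall) and precision, since F1 is simply their harmonic mean. A quick verification using the figures above:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Segmentation model: precision 0.9428, sensitivity 0.9705 -> ~0.9565
seg_f1 = f1(0.9428, 0.9705)
# S-model: precision 0.6194, sensitivity 0.8974 -> ~0.7329
s_f1 = f1(0.6194, 0.8974)
```

Both values agree with the abstract's F1 scores to within rounding of the published inputs, which is a useful sanity check when reading reported metrics.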
Affiliation(s)
- Damla Torul
- Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Ordu University, Ordu 52200, Turkey.
- Hasan Akpinar
- Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Afyonkarahisar Health Sciences University, Afyon, Turkey
- Ibrahim Sevki Bayrakdar
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskisehir, Turkey
- Ozer Celik
- Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
- Kaan Orhan
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
10
Lin TJ, Mao YC, Lin YJ, Liang CH, He YQ, Hsu YC, Chen SL, Chen TY, Chen CA, Li KC, Abu PAR. Evaluation of the Alveolar Crest and Cemento-Enamel Junction in Periodontitis Using Object Detection on Periapical Radiographs. Diagnostics (Basel) 2024;14:1687. PMID: 39125563; PMCID: PMC11312231; DOI: 10.3390/diagnostics14151687.
Abstract
The severity of periodontitis can be analyzed by measuring the level of the alveolar crest (ALC) and the bone loss between the alveolar bone and the cemento-enamel junction (CEJ). However, dentists must manually mark symptoms on periapical radiographs (PAs) to assess bone loss, a process that is both time-consuming and prone to error. This study proposes a new method that contributes to disease evaluation and reduces errors. First, periodontitis-specific image enhancement methods are employed to improve PA image quality. Single teeth are then accurately extracted from PA images by object detection, with a maximum accuracy of 97.01%. An instance segmentation model developed in this study accurately extracts regions of interest, generating masks for tooth bone and tooth crown with accuracies of 93.48% and 96.95%. Finally, a novel detection algorithm automatically marks the CEJ and ALC of symptomatic teeth, enabling dentists to assess bone loss severity faster and more accurately. The PA image database used in this study was provided by Chang Gung Medical Center, Taiwan (IRB number 02002030B0). The techniques developed in this research significantly reduce the time required for dental diagnosis and enhance healthcare quality.
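Object-detection accuracy figures like the 97.01% above are typically scored by the overlap (intersection over union, IoU) between predicted and ground-truth bounding boxes. A minimal, generic sketch (not the study's implementation):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union > 0 else 0.0

# Two partially overlapping boxes: intersection 25, union 175 -> 1/7
overlap = iou((0, 0, 10, 10), (5, 5, 15, 15))
```

A detection is usually counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5, which is what the "@0.50" in mAP@0.50 refers to in the other studies listed here.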
Affiliation(s)
- Tai-Jung Lin
- Department of Periodontics, Division of Dentistry, Taoyuan Chang Gung Memorial Hospital, Taoyuan City 333423, Taiwan
- Yi-Cheng Mao
- Department of Operative Dentistry, Taoyuan Chang Gung Memorial Hospital, Taoyuan City 333423, Taiwan
- Yuan-Jin Lin
- Department of Program on Semiconductor Manufacturing Technology, Academy of Innovative Semiconductor and Sustainable Manufacturing, National Cheng Kung University, Tainan City 701401, Taiwan
- Chin-Hao Liang
- Department of Electronic Engineering, Chung Yuan Christian University, Taoyuan City 320234, Taiwan
- Yi-Qing He
- Department of Electronic Engineering, Chung Yuan Christian University, Taoyuan City 320234, Taiwan
- Yun-Chen Hsu
- Department of Electronic Engineering, Chung Yuan Christian University, Taoyuan City 320234, Taiwan
- Shih-Lun Chen
- Department of Electronic Engineering, Chung Yuan Christian University, Taoyuan City 320234, Taiwan
- Tsung-Yi Chen
- Department of Electronic Engineering, Feng Chia University, Taichung City 407301, Taiwan
- Chiung-An Chen
- Department of Electrical Engineering, Ming Chi University of Technology, New Taipei City 243303, Taiwan
- Kuo-Chen Li
- Department of Information Management, Chung Yuan Christian University, Taoyuan City 320317, Taiwan
- Patricia Angela R. Abu
- Ateneo Laboratory for Intelligent Visual Environments, Department of Information Systems and Computer Science, Ateneo de Manila University, Quezon City 1108, Philippines
11
Zirek T, Öziç MÜ, Tassoker M. AI-Driven localization of all impacted teeth and prediction of winter angulation for third molars on panoramic radiographs: Clinical user interface design. Comput Biol Med 2024;178:108755. PMID: 38897151; DOI: 10.1016/j.compbiomed.2024.108755.
Abstract
PURPOSE Impacted teeth are teeth that fail to erupt into their normal position, remaining trapped under the gums or within the jawbone past the expected eruption time. This study aims to detect all impacted teeth and to classify impacted third molars according to the Winter method with an artificial intelligence model on panoramic radiographs. METHODS In this study, 1197 panoramic radiographs from the dentistry faculty database were collected for all impacted teeth, and 1000 panoramic radiographs were collected for Winter classification. After pre-processing, the datasets were doubled in size with data augmentation. Both datasets were randomly divided into 80% training, 10% validation, and 10% testing. After transfer learning and fine-tuning, both datasets were trained with YOLOv8, a high-performance deep learning object-detection model, and detection of impacted teeth was carried out. The results were evaluated with precision, recall, mAP, and F1-score performance metrics. A graphical user interface was designed for clinical use with the artificial intelligence weights obtained from training. RESULTS For the detection of impacted third molars according to the Winter classification, the average precision, average recall, and average F1 score were 0.972, 0.967, and 0.969, respectively. For the detection of all impacted teeth, they were 0.991, 0.995, and 0.993, respectively. CONCLUSION According to these results, the artificial intelligence-based YOLOv8 deep learning model successfully detected all impacted teeth and classified impacted third molars according to the Winter classification system.
Affiliation(s)
- Taha Zirek
- Necmettin Erbakan University, Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Konya, Turkey
- Muhammet Üsame Öziç
- Pamukkale University, Faculty of Technology, Department of Biomedical Engineering, Denizli, Turkey
- Melek Tassoker
- Necmettin Erbakan University, Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Konya, Turkey
|
12
|
Assiri HA, Hameed MS, Alqarni A, Dawasaz AA, Arem SA, Assiri KI. Artificial Intelligence Application in a Case of Mandibular Third Molar Impaction: A Systematic Review of the Literature. J Clin Med 2024; 13:4431. [PMID: 39124697 PMCID: PMC11313288 DOI: 10.3390/jcm13154431] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2024] [Revised: 07/01/2024] [Accepted: 07/02/2024] [Indexed: 08/12/2024] Open
Abstract
Objective: This systematic review aims to summarize the evidence on the use and applicability of AI in impacted mandibular third molars. Methods: Searches were performed in the following databases: PubMed, Scopus, and Google Scholar. The study protocol is registered at the International Platform of Registered Systematic Review and Meta-analysis Protocols (INPLASY202460081). The retrieved articles were subjected to an exhaustive review based on the inclusion and exclusion criteria for the study. Articles on the use of AI for diagnosis, treatment, and treatment planning in patients with impacted mandibular third molars were included. Results: Twenty-one articles were selected and evaluated using the Scottish Intercollegiate Guidelines Network (SIGN) evidence quality scale. Most of the analyzed studies dealt with using AI to determine the relationship between the mandibular canal and the impacted mandibular third molar. The average quality of the articles included in this review was 2+, which indicated that the level of evidence, according to the SIGN protocol, was B. Conclusions: Compared to human observers, AI models have demonstrated decent performance in determining the morphology, anatomy, and relationship of the impaction with the inferior alveolar nerve canal. However, the prediction of eruptions and future horizons of AI models are still in the early developmental stages. Additional studies estimating the eruption in mixed and permanent dentition are warranted to establish a comprehensive model for identifying, diagnosing, and predicting third molar eruptions and determining the treatment outcomes in the case of impacted teeth. This will help clinicians make better decisions and achieve better treatment outcomes.
Affiliation(s)
- Hassan Ahmed Assiri
- Department of Diagnostic Science and Oral Biology, College of Dentistry, King Khalid University, P.O. Box 960, Abha City 61421, Saudi Arabia
|
13
|
Karkehabadi H, Khoshbin E, Ghasemi N, Mahavi A, Mohammad-Rahimi H, Sadr S. Deep learning for determining the difficulty of endodontic treatment: a pilot study. BMC Oral Health 2024; 24:574. [PMID: 38760686 PMCID: PMC11102254 DOI: 10.1186/s12903-024-04235-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Accepted: 04/08/2024] [Indexed: 05/19/2024] Open
Abstract
BACKGROUND To develop and validate a deep learning model for automated assessment of endodontic case difficulty from periapical radiographs. METHODS A dataset of 1,386 periapical radiographs was compiled from two clinical sites. Two dentists and two endodontists annotated the radiographs for difficulty using the "simple assessment" criteria from the American Association of Endodontists' case difficulty assessment form in the Endocase application. A classification task labeled cases as "easy" or "hard", while regression predicted overall difficulty scores. Convolutional neural networks (VGG16, ResNet18, ResNet50, ResNeXt50, and Inception v2) were used, with a baseline model trained via transfer learning from ImageNet weights. Other models were pre-trained using self-supervised contrastive learning (BYOL, SimCLR, MoCo, and DINO) on 20,295 unlabeled dental radiographs to learn representations without manual labels. Both approaches were evaluated using 10-fold cross-validation, with performance compared to seven human examiners (three general dentists and four endodontists) on a hold-out test set. RESULTS The baseline VGG16 model attained 87.62% accuracy in classifying difficulty. Self-supervised pretraining did not improve performance. The regression model predicted overall difficulty scores with an error of ±3.21 points. All models outperformed the human raters, whose inter-examiner reliability was poor. CONCLUSION This pilot study demonstrated the feasibility of automated endodontic difficulty assessment via deep learning models.
Affiliation(s)
- Hamed Karkehabadi
- Department of Endodontics, Dental School, Hamadan University of Medical Sciences, Hamadan, Iran
- Department of Endodontics, Dental Research Center, Hamadan University of Medical Sciences, Hamadan, Iran
- Elham Khoshbin
- Department of Endodontics, Dental School, Hamadan University of Medical Sciences, Hamadan, Iran
- Nikoo Ghasemi
- Faculty of Dentistry, Zanjan University of Medical Sciences, Zanjan, Iran
- Amal Mahavi
- Department of Endodontics, Dental School, Hamadan University of Medical Sciences, Hamadan, Iran
- Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany
- Soroush Sadr
- Department of Endodontics, Dental School, Hamadan University of Medical Sciences, Hamadan, Iran
- Dental School, Hamadan University of Medical Sciences, Shahid Fahmideh Street, PO Box 6517838677, Hamadan, Iran
|
14
|
Carvalho J, Lotz M, Rubi L, Unger S, Pfister T, Buhmann J, Stadlinger B. Preinterventional Third-Molar Assessment Using Robust Machine Learning. J Dent Res 2023; 102:1452-1459. [PMID: 37944556 PMCID: PMC10683342 DOI: 10.1177/00220345231200786] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2023] Open
Abstract
Machine learning (ML) models, especially deep neural networks, are increasingly being used for the analysis of medical images and as a supporting tool for clinical decision-making. In this study, we propose an artificial intelligence system to facilitate dental decision-making for the removal of mandibular third molars (M3M) based on 2-dimensional orthopantomograms and the risk assessment of such a procedure. A total of 4,516 panoramic radiographic images collected at the Center of Dental Medicine at the University of Zurich, Switzerland, were used for training the ML model. After image preparation and preprocessing, a spatially dependent U-Net was employed to detect and retrieve the region of the M3M and inferior alveolar nerve (IAN). Image patches identified to contain a M3M were automatically processed by a deep neural network for the classification of M3M superimposition over the IAN (task 1) and M3M root development (task 2). A control evaluation set of 120 images, collected from a different data source than the training data and labeled by 5 dental practitioners, was leveraged to reliably evaluate model performance. By 10-fold cross-validation, we achieved accuracy values of 0.94 and 0.93 for the M3M-IAN superimposition task and the M3M root development task, respectively, and accuracies of 0.9 and 0.87 when evaluated on the control data set, using a ResNet-101 trained in a semisupervised fashion. Matthews correlation coefficient values of 0.82 and 0.75 for task 1 and task 2, evaluated on the control data set, indicate robust generalization of our model. Depending on the different label combinations of task 1 and task 2, we propose a diagnostic table that suggests whether additional imaging via 3-dimensional cone-beam computed tomography is advisable. Ultimately, computer-aided decision-making tools benefit clinical practice by enabling efficient and risk-reduced decision-making and by supporting less experienced practitioners before the surgical removal of the M3M.
Affiliation(s)
- J.S. Carvalho
- ETH Zurich, Department of Computer Science, Zurich, Switzerland
- ETH AI Center, Zurich, Switzerland
- M. Lotz
- University of Zurich, Center for Dental Medicine, Zurich, Switzerland
- L. Rubi
- ETH Zurich, Department of Computer Science, Zurich, Switzerland
- S. Unger
- University of Zurich, Center for Dental Medicine, Zurich, Switzerland
- T. Pfister
- University of Zurich, Center for Dental Medicine, Zurich, Switzerland
- J.M. Buhmann
- ETH Zurich, Department of Computer Science, Zurich, Switzerland
- ETH AI Center, Zurich, Switzerland
- B. Stadlinger
- University of Zurich, Center for Dental Medicine, Zurich, Switzerland
- ETH AI Center, Zurich, Switzerland
|
15
|
Al-Sarem M, Al-Asali M, Alqutaibi AY, Saeed F. Enhanced Tooth Region Detection Using Pretrained Deep Learning Models. Int J Environ Res Public Health 2022; 19:15414. [PMID: 36430133 PMCID: PMC9692549 DOI: 10.3390/ijerph192215414] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 11/16/2022] [Accepted: 11/17/2022] [Indexed: 06/15/2023]
Abstract
The rapid development of artificial intelligence (AI) has led to the emergence of many new technologies in the healthcare industry. In dentistry, a patient's panoramic radiographic or cone-beam computed tomography (CBCT) images are used in implant placement planning to find the correct implant position and eliminate surgical risks. This study aims to develop a deep learning-based model that detects the position of missing teeth in a dataset segmented from CBCT images. Five hundred CBCT images were included in this study. After preprocessing, the datasets were randomized and divided into 70% training, 20% validation, and 10% test data. A total of six pretrained convolutional neural network (CNN) models were used in this study: AlexNet, VGG16, VGG19, ResNet50, DenseNet169, and MobileNetV3. In addition, the proposed models were tested with and without the segmentation technique. For the normal-teeth class, the precision of all the pretrained models was above 0.90. The experimental results showed the superiority of DenseNet169, with a precision of 0.98; MobileNetV3, VGG19, ResNet50, VGG16, and AlexNet obtained precisions of 0.95, 0.94, 0.94, 0.93, and 0.92, respectively. The DenseNet169 model performed well across the stages of CBCT-based detection and classification, with a segmentation accuracy of 93.3% and a missing-tooth-region classification accuracy of 89%. As a result, this model may represent a promising time-saving tool for dental implantologists and a significant step toward automated dental implant planning.
Affiliation(s)
- Mohammed Al-Sarem
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- Department of Computer Science, Sheba Region University, Marib 14400, Yemen
- Mohammed Al-Asali
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- Ahmed Yaseen Alqutaibi
- Department of Prosthodontics and Implant Dentistry, College of Dentistry, Taibah University, Al Madinah 41311, Saudi Arabia
- Department of Prosthodontics, College of Dentistry, Ibb University, Ibb 70270, Yemen
- Faisal Saeed
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- DAAI Research Group, Department of Computing and Data Science, School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7XG, UK
|