1. Zirek T, Öziç MÜ, Tassoker M. AI-driven localization of all impacted teeth and prediction of Winter angulation for third molars on panoramic radiographs: clinical user interface design. Comput Biol Med 2024;178:108755. [PMID: 38897151] [DOI: 10.1016/j.compbiomed.2024.108755]
Abstract
PURPOSE Impacted teeth are teeth that remain within the gums or jawbone and fail to reach their normal position even after their expected eruption time. This study aims to detect all impacted teeth and to classify impacted third molars according to the Winter method on panoramic radiographs using an artificial intelligence model. METHODS In this study, 1197 panoramic radiographs from the dentistry faculty database were collected for all impacted teeth, and 1000 panoramic radiographs were collected for Winter classification. After pre-processing, the number of images was doubled with data augmentation. Both datasets were randomly divided into 80% training, 10% validation, and 10% testing sets. After transfer learning and fine-tuning, both datasets were trained with YOLOv8, a high-performance deep learning algorithm, and detection of impacted teeth was carried out. The results were evaluated with the precision, recall, mAP, and F1-score performance metrics. A graphical user interface was designed for clinical use with the model weights obtained from training. RESULTS For the detection of impacted third molars according to the Winter classification, the average precision, average recall, and average F1 score were 0.972, 0.967, and 0.969, respectively. For the detection of all impacted teeth, they were 0.991, 0.995, and 0.993, respectively. CONCLUSION The artificial intelligence-based YOLOv8 deep learning model successfully detected all impacted teeth and classified impacted third molars according to the Winter classification system.
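The precision, recall, and F1 scores reported throughout these studies are derived from simple detection counts. As a minimal illustrative sketch (not the authors' code; the counts below are hypothetical, chosen only to show the mechanics):

```python
# Illustrative sketch: precision, recall, and F1 from detection counts.
# tp/fp/fn = true positives, false positives, false negatives.

def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Compute precision, recall, and F1 for a single detection class."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical counts for one class of impacted tooth.
m = detection_metrics(tp=95, fp=3, fn=2)
print({k: round(v, 3) for k, v in m.items()})
```

F1 is the harmonic mean of precision and recall, so it penalizes a model that trades one for the other, which is why these papers report all three together.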
Affiliation(s)
- Taha Zirek
- Necmettin Erbakan University, Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Konya, Turkey
- Muhammet Üsame Öziç
- Pamukkale University, Faculty of Technology, Department of Biomedical Engineering, Denizli, Turkey
- Melek Tassoker
- Necmettin Erbakan University, Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Konya, Turkey
2. Trachoo V, Taetragool U, Pianchoopat P, Sukitporn-Udom C, Morakrant N, Warin K. Deep learning for predicting the difficulty level of removing the impacted mandibular third molar. Int Dent J 2024:S0020-6539(24)00193-X. [PMID: 39043529] [DOI: 10.1016/j.identj.2024.06.021]
Abstract
BACKGROUND Preoperative assessment of the impacted mandibular third molar (LM3) on a panoramic radiograph is important in surgical planning. The aim of this study was to develop and evaluate a computer-aided, visualisation-based deep learning (DL) system that uses a panoramic radiograph to predict the difficulty level of surgical removal of an impacted LM3. METHODS The study retrospectively included 1367 LM3 images from 784 patients who presented to the University Dental Hospital from 2021 to 2023. The difficulty level of surgically removing impacted LM3s was assessed via our newly developed DL system, which seamlessly integrates 3 distinct DL models: ResNet101V2 handles binary classification for identifying impacted LM3s in panoramic radiographs, RetinaNet detects the precise location of the impacted LM3, and a Vision Transformer performs multiclass image classification to evaluate the difficulty level of removing the detected impacted LM3. RESULTS The ResNet101V2 model achieved a classification accuracy of 0.8671. The RetinaNet model demonstrated exceptional detection performance, with a mean average precision of 0.9928. The Vision Transformer model delivered an average accuracy of 0.7899 in predicting removal difficulty levels. CONCLUSIONS The 3-phase computer-aided, visualisation-based DL system performed very well in predicting the difficulty level of surgically removing an impacted LM3 from panoramic radiographs.
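The three-stage flow this abstract describes (screen for an impacted LM3, locate it, grade removal difficulty) amounts to a conditional pipeline. A sketch of that control flow, with hypothetical stand-in callables rather than the authors' actual models:

```python
# Sketch of a 3-stage assessment pipeline: classify -> detect -> grade.
# The stage functions are placeholders; in the study above they would be
# ResNet101V2 (binary screen), RetinaNet (localisation), and a Vision
# Transformer (difficulty grading).
from typing import Callable, Optional


def assess_difficulty(
    image,
    is_impacted: Callable,  # stage 1: binary classifier
    locate: Callable,       # stage 2: detector returning a cropped region
    grade: Callable,        # stage 3: multiclass difficulty classifier
) -> Optional[str]:
    """Return a difficulty grade, or None when no impacted LM3 is found."""
    if not is_impacted(image):
        return None          # later stages are skipped entirely
    crop = locate(image)
    return grade(crop)


# Toy stand-ins to exercise the control flow.
result = assess_difficulty("pano", lambda x: True, lambda x: x, lambda c: "moderate")
assert result == "moderate"
```

Chaining the stages this way means the overall system's accuracy is bounded by each stage in turn, which is why the abstract reports a separate metric per model.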
Affiliation(s)
- Vorapat Trachoo
- Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand
- Unchalisa Taetragool
- Department of Computer Engineering, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand
- Ploypapas Pianchoopat
- Department of Computer Engineering, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand
- Chatchapon Sukitporn-Udom
- Department of Computer Engineering, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand
- Narapathra Morakrant
- Department of Computer Engineering, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand
- Kritsasith Warin
- Faculty of Dentistry, Thammasat University, Pathum Thani, Thailand
3. Karkehabadi H, Khoshbin E, Ghasemi N, Mahavi A, Mohammad-Rahimi H, Sadr S. Deep learning for determining the difficulty of endodontic treatment: a pilot study. BMC Oral Health 2024;24:574. [PMID: 38760686] [PMCID: PMC11102254] [DOI: 10.1186/s12903-024-04235-4]
Abstract
BACKGROUND To develop and validate a deep learning model for automated assessment of endodontic case difficulty from periapical radiographs. METHODS A dataset of 1,386 periapical radiographs was compiled from two clinical sites. Two dentists and two endodontists annotated the radiographs for difficulty using the "simple assessment" criteria from the American Association of Endodontists' case difficulty assessment form in the Endocase application. A classification task labeled cases as "easy" or "hard", while a regression task predicted overall difficulty scores. Convolutional neural networks (VGG16, ResNet18, ResNet50, ResNext50, and Inception v2) were used, with a baseline model trained via transfer learning from ImageNet weights. Other models were pre-trained using self-supervised contrastive learning (BYOL, SimCLR, MoCo, and DINO) on 20,295 unlabeled dental radiographs to learn representations without manual labels. All models were evaluated using 10-fold cross-validation, with performance compared to seven human examiners (three general dentists and four endodontists) on a hold-out test set. RESULTS The baseline VGG16 model attained 87.62% accuracy in classifying difficulty. Self-supervised pretraining did not improve performance. The regression model predicted scores with an error of ±3.21 points. All models outperformed the human raters, whose inter-examiner reliability was poor. CONCLUSION This pilot study demonstrated the feasibility of automated endodontic difficulty assessment via deep learning models.
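The 10-fold cross-validation used above partitions the dataset into 10 disjoint folds, so every image is validated exactly once. A pure-Python sketch of the index bookkeeping (illustrative only, not the authors' pipeline; 1,386 matches the study's dataset size):

```python
# Minimal sketch of k-fold cross-validation index splitting.
import random


def k_fold_indices(n_samples: int, k: int = 10, seed: int = 0):
    """Yield (train_idx, val_idx) pairs; each sample is validated exactly once."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)           # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]      # round-robin keeps fold sizes balanced
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val


splits = list(k_fold_indices(1386, k=10))
# Validation folds are disjoint and together cover the whole dataset.
assert sorted(j for _, val in splits for j in val) == list(range(1386))
```

Reporting the mean metric across the k held-out folds, as the study does, gives a less optimistic estimate than a single train/test split.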
Affiliation(s)
- Hamed Karkehabadi
- Department of Endodontics, Dental School, Hamadan University of Medical Sciences, Hamadan, Iran
- Department of Endodontics, Dental Research Center, Hamadan University of Medical Sciences, Hamadan, Iran
- Elham Khoshbin
- Department of Endodontics, Dental School, Hamadan University of Medical Sciences, Hamadan, Iran
- Nikoo Ghasemi
- Faculty of Dentistry, Zanjan University of Medical Sciences, Zanjan, Iran
- Amal Mahavi
- Department of Endodontics, Dental School, Hamadan University of Medical Sciences, Hamadan, Iran
- Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany
- Soroush Sadr
- Department of Endodontics, Dental School, Hamadan University of Medical Sciences, Hamadan, Iran
- Dental School, Hamadan University of Medical Sciences, Shahid Fahmideh Street, PO Box 6517838677, Hamadan, Iran
4. Qiu P, Cao R, Li Z, Huang J, Zhang H, Zhang X. Applications of artificial intelligence for surgical extraction in stomatology: a systematic review. Oral Surg Oral Med Oral Pathol Oral Radiol 2024:S2212-4403(24)00285-2. [PMID: 38834501] [DOI: 10.1016/j.oooo.2024.05.002]
Abstract
OBJECTIVES Artificial intelligence (AI) has been used extensively in stomatology over the past several years. This study aimed to evaluate the effectiveness of AI-based models in the procedure, assessment, and treatment planning of surgical extraction. STUDY DESIGN Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, a comprehensive search was conducted on the Web of Science, PubMed/MEDLINE, Embase, and Scopus databases, covering English-language publications up to September 2023. Two reviewers performed study selection and data extraction independently. Only original research studies utilizing AI in surgical extraction in stomatology were included. The Cochrane risk-of-bias tool for randomized trials (RoB 2) was used for quality assessment of the selected literature. RESULTS Of 2,336 retrieved references, 35 studies were deemed eligible. Among them, 28 studies reported the pioneering role of AI in segmentation, classification, and detection, aligning with clinical needs. The other 7 studies suggested promising results in tooth-extraction decision-making, but further model refinement and validation were required. CONCLUSIONS Integration of AI in surgical extraction in stomatology has progressed significantly, enhancing decision-making accuracy. Combining and comparing algorithmic outcomes across studies is essential for determining optimal clinical applications in the future.
Affiliation(s)
- Piaopiao Qiu
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Rongkai Cao
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Zhaoyang Li
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Jiaqi Huang
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Huasheng Zhang
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Xueming Zhang
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
5. Torul D, Akpinar H, Bayrakdar IS, Celik O, Orhan K. Prediction of extraction difficulty for impacted maxillary third molars with deep learning approach. J Stomatol Oral Maxillofac Surg 2024:101817. [PMID: 38458545] [DOI: 10.1016/j.jormas.2024.101817]
Abstract
OBJECTIVE The aim of this study is to determine whether a deep learning (DL) model can predict the surgical difficulty of an impacted maxillary third molar from panoramic images before surgery. MATERIALS AND METHODS The dataset consists of 708 panoramic radiographs of patients who presented to the Oral and Maxillofacial Surgery Clinic for various reasons. The difficulty of each maxillary third molar was scored on panoramic images based on depth (V), angulation (H), relation with the maxillary sinus (S), and relation with the ramus (R). The YOLOv5x architecture was used to perform automatic segmentation and classification. To prevent images that participated in training from being re-tested, the dataset was subdivided into 80% training, 10% validation, and 10% test groups. RESULTS The impacted upper third molar segmentation model showed the best performance, with sensitivity, precision, and F1 score of 0.9705, 0.9428, and 0.9565, respectively. The S-model had lower sensitivity, precision, and F1 score than the other models, at 0.8974, 0.6194, and 0.7329, respectively. CONCLUSION The results showed that the proposed DL model could be effective for predicting the surgical difficulty of an impacted maxillary third molar using panoramic radiographs, and this approach might serve as a decision-support mechanism for clinicians in the peri-surgical period.
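Several of the studies in this list, including this one, use the same 80/10/10 train/validation/test protocol. A sketch of such a split, guaranteeing that no image lands in more than one subset (illustrative only; 708 matches this study's dataset size):

```python
# Sketch of an 80/10/10 train/validation/test split with disjoint subsets.
import random


def split_dataset(items, seed: int = 42):
    """Randomly partition items into ~80% train, ~10% validation, rest test."""
    items = list(items)
    random.Random(seed).shuffle(items)   # fixed seed keeps the split reproducible
    n = len(items)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])


train, val, test = split_dataset(range(708))  # 708 radiographs, as in the study
```

Because the slices are taken from one shuffled list, the three subsets are disjoint by construction, which is exactly the "no re-testing of training images" property the abstract emphasizes.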
Affiliation(s)
- Damla Torul
- Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Ordu University, Ordu 52200, Turkey
- Hasan Akpinar
- Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Afyonkarahisar Health Sciences University, Afyon, Turkey
- Ibrahim Sevki Bayrakdar
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskisehir, Turkey
- Ozer Celik
- Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
- Kaan Orhan
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
6. Carvalho J, Lotz M, Rubi L, Unger S, Pfister T, Buhmann J, Stadlinger B. Preinterventional third-molar assessment using robust machine learning. J Dent Res 2023;102:1452-1459. [PMID: 37944556] [PMCID: PMC10683342] [DOI: 10.1177/00220345231200786]
Abstract
Machine learning (ML) models, especially deep neural networks, are increasingly being used for the analysis of medical images and as a supporting tool for clinical decision-making. In this study, we propose an artificial intelligence system to facilitate dental decision-making for the removal of mandibular third molars (M3M) based on 2-dimensional orthopantomograms and the risk assessment of such a procedure. A total of 4,516 panoramic radiographic images collected at the Center of Dental Medicine at the University of Zurich, Switzerland, were used for training the ML model. After image preparation and preprocessing, a spatially dependent U-Net was employed to detect and retrieve the region of the M3M and inferior alveolar nerve (IAN). Image patches identified as containing an M3M were automatically processed by a deep neural network for the classification of M3M superimposition over the IAN (task 1) and M3M root development (task 2). A control evaluation set of 120 images, collected from a different data source than the training data and labeled by 5 dental practitioners, was used to reliably evaluate model performance. By 10-fold cross-validation, we achieved accuracy values of 0.94 and 0.93 for the M3M-IAN superimposition task and the M3M root development task, respectively, and accuracies of 0.90 and 0.87 when evaluated on the control data set, using a ResNet-101 trained in a semisupervised fashion. Matthews correlation coefficient values of 0.82 and 0.75 for task 1 and task 2 on the control data set indicate robust generalization of our model. Depending on the label combinations of task 1 and task 2, we propose a diagnostic table that suggests whether additional imaging via 3-dimensional cone-beam computed tomography is advisable. Ultimately, computer-aided decision-making tools benefit clinical practice by enabling efficient, risk-reduced decision-making and by supporting less experienced practitioners before the surgical removal of the M3M.
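Unlike plain accuracy, the Matthews correlation coefficient (MCC) reported above summarizes all four cells of a binary confusion matrix and stays near zero for chance-level predictors even on imbalanced data. A sketch of the computation (not the authors' code):

```python
# Matthews correlation coefficient from a binary confusion matrix:
# MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
import math


def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Return the MCC in [-1, 1]; 0.0 when any marginal count is zero."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0


# Perfect prediction yields 1.0; chance-level prediction yields 0.0.
assert mcc(tp=50, tn=50, fp=0, fn=0) == 1.0
assert mcc(tp=25, tn=25, fp=25, fn=25) == 0.0
```

An MCC of 0.82, as reported for task 1 on the control set, therefore indicates strong agreement beyond chance rather than accuracy inflated by class imbalance.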
Affiliation(s)
- J.S. Carvalho
- ETH Zurich, Department of Computer Science, Zurich, Switzerland
- ETH AI Center, Zurich, Switzerland
- M. Lotz
- University of Zurich, Center for Dental Medicine, Zurich, Switzerland
- L. Rubi
- ETH Zurich, Department of Computer Science, Zurich, Switzerland
- S. Unger
- University of Zurich, Center for Dental Medicine, Zurich, Switzerland
- T. Pfister
- University of Zurich, Center for Dental Medicine, Zurich, Switzerland
- J.M. Buhmann
- ETH Zurich, Department of Computer Science, Zurich, Switzerland
- ETH AI Center, Zurich, Switzerland
- B. Stadlinger
- University of Zurich, Center for Dental Medicine, Zurich, Switzerland
- ETH AI Center, Zurich, Switzerland
7. Al-Sarem M, Al-Asali M, Alqutaibi AY, Saeed F. Enhanced tooth region detection using pretrained deep learning models. Int J Environ Res Public Health 2022;19:15414. [PMID: 36430133] [PMCID: PMC9692549] [DOI: 10.3390/ijerph192215414]
Abstract
The rapid development of artificial intelligence (AI) has led to the emergence of many new technologies in the healthcare industry. In dentistry, a patient's panoramic radiographic or cone beam computed tomography (CBCT) images are used in implant placement planning to find the correct implant position and eliminate surgical risks. This study aims to develop a deep learning-based model that detects the positions of missing teeth on a dataset segmented from CBCT images. Five hundred CBCT images were included in this study. After preprocessing, the datasets were randomized and divided into 70% training, 20% validation, and 10% test data. Six pretrained convolutional neural network (CNN) models were used: AlexNet, VGG16, VGG19, ResNet50, DenseNet169, and MobileNetV3. In addition, the proposed models were tested with and without the segmentation technique. For the normal-teeth class, the precision of all the proposed pretrained DL models was above 0.90. The experimental results showed the superiority of DenseNet169, with a precision of 0.98; MobileNetV3, VGG19, ResNet50, VGG16, and AlexNet obtained precisions of 0.95, 0.94, 0.94, 0.93, and 0.92, respectively. The DenseNet169 model performed well at the different stages of CBCT-based detection and classification, with a segmentation accuracy of 93.3% and a missing-tooth-region classification accuracy of 89%. As a result, this model may represent a promising, time-saving tool for dental implantologists and a significant step toward automated dental implant planning.
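The per-backbone precisions quoted above lend themselves to a simple programmatic comparison; a sketch of ranking the pretrained models by their reported precision (values taken directly from the abstract):

```python
# Reported normal-teeth-class precisions per pretrained backbone (from the abstract).
precisions = {
    "DenseNet169": 0.98,
    "MobileNetV3": 0.95,
    "VGG19": 0.94,
    "ResNet50": 0.94,
    "VGG16": 0.93,
    "AlexNet": 0.92,
}

# Rank backbones from best to worst precision.
ranking = sorted(precisions, key=precisions.get, reverse=True)
best = ranking[0]
print(best, precisions[best])
```

Note that precision alone does not settle the comparison; the study also weighs segmentation accuracy (93.3%) and missing-tooth-region classification accuracy (89%) before favoring DenseNet169.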
Affiliation(s)
- Mohammed Al-Sarem
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- Department of Computer Science, Sheba Region University, Marib 14400, Yemen
- Mohammed Al-Asali
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- Ahmed Yaseen Alqutaibi
- Department of Prosthodontics and Implant Dentistry, College of Dentistry, Taibah University, Al Madinah 41311, Saudi Arabia
- Department of Prosthodontics, College of Dentistry, Ibb University, Ibb 70270, Yemen
- Faisal Saeed
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- DAAI Research Group, Department of Computing and Data Science, School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7XG, UK