1. Qiu P, Cao R, Li Z, Huang J, Zhang H, Zhang X. Applications of artificial intelligence for surgical extraction in stomatology: a systematic review. Oral Surg Oral Med Oral Pathol Oral Radiol 2024;138:346-361. PMID: 38834501. DOI: 10.1016/j.oooo.2024.05.002.
Abstract
OBJECTIVES Artificial intelligence (AI) has been extensively used in the field of stomatology over the past several years. This study aimed to evaluate the effectiveness of AI-based models in the procedure, assessment, and treatment planning of surgical extraction. STUDY DESIGN Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, a comprehensive search was conducted on the Web of Science, PubMed/MEDLINE, Embase, and Scopus databases, covering English publications up to September 2023. Two reviewers performed the study selection and data extraction independently. Only original research studies utilizing AI in surgical extraction in stomatology were included. The Cochrane risk of bias tool for randomized trials (RoB 2) was used to assess the quality of the selected literature. RESULTS Of 2,336 retrieved references, 35 studies were deemed eligible. Among them, 28 studies reported the pioneering role of AI in segmentation, classification, and detection, aligning with clinical needs; the remaining 7 studies suggested promising results in tooth extraction decision-making, but further model refinement and validation were required. CONCLUSIONS Integration of AI in surgical extraction in stomatology has progressed significantly, enhancing decision-making accuracy. Combining and comparing algorithmic outcomes across studies is essential for determining optimal clinical applications in the future.
Affiliation(s)
- Piaopiao Qiu, Rongkai Cao, Zhaoyang Li, Jiaqi Huang, Huasheng Zhang, Xueming Zhang: Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
2. Torul D, Akpinar H, Bayrakdar IS, Celik O, Orhan K. Prediction of extraction difficulty for impacted maxillary third molars with deep learning approach. J Stomatol Oral Maxillofac Surg 2024;125:101817. PMID: 38458545. DOI: 10.1016/j.jormas.2024.101817.
Abstract
OBJECTIVE The aim of this study was to determine whether a deep learning (DL) model can predict the surgical difficulty of an impacted maxillary third molar from panoramic images before surgery. MATERIALS AND METHODS The dataset consisted of 708 panoramic radiographs of patients who presented to the Oral and Maxillofacial Surgery Clinic for various reasons. The difficulty of each maxillary third molar was scored based on depth (V), angulation (H), relation with the maxillary sinus (S), and relation with the ramus (R) on panoramic images. The YOLOv5x architecture was used to perform automatic segmentation and classification. To prevent images used in training from being re-tested, the dataset was subdivided into 80% training, 10% validation, and 10% test groups. RESULTS The impacted upper third molar segmentation model showed the best performance, with sensitivity, precision, and F1 score of 0.9705, 0.9428, and 0.9565, respectively. The S-model had lower sensitivity, precision, and F1 score than the other models, at 0.8974, 0.6194, and 0.7329, respectively. CONCLUSION The results showed that the proposed DL model could be effective for predicting the surgical difficulty of an impacted maxillary third molar from panoramic radiographs, and this approach might serve as a decision support mechanism for clinicians in the peri-surgical period.
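The 80/10/10 subdivision described above can be sketched as follows. This is an illustrative standard-library sketch, not the authors' code; the helper name and the fixed seed are assumptions.

```python
import random

def split_dataset(items, train=0.8, val=0.1, seed=42):
    # Shuffle once with a fixed seed, then cut, so no image
    # can appear in more than one of the three subsets.
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 708 radiographs, as in the study
train_set, val_set, test_set = split_dataset(range(708))
print(len(train_set), len(val_set), len(test_set))  # 566 70 72
```

Keeping the split disjoint is exactly the point the abstract raises: a test image that also participated in training would inflate the reported metrics.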
Affiliation(s)
- Damla Torul: Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Ordu University, Ordu 52200, Turkey
- Hasan Akpinar: Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Afyonkarahisar Health Sciences University, Afyon, Turkey
- Ibrahim Sevki Bayrakdar: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskisehir, Turkey
- Ozer Celik: Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
- Kaan Orhan: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
3. Mun SB, Kim J, Kim YJ, Seo MS, Kim BC, Kim KG. Deep learning-based prediction of indication for cracked tooth extraction using panoramic radiography. BMC Oral Health 2024;24:952. PMID: 39152384. PMCID: PMC11328441. DOI: 10.1186/s12903-024-04721-9.
Abstract
BACKGROUND We aimed to determine the feasibility of deep learning-based prediction of the indication for cracked tooth extraction using panoramic radiography. METHODS Panoramic radiographs of 418 teeth (group 1: 209 normal teeth; group 2: 209 cracked teeth) were used for training and testing a deep learning model. We evaluated the performance of the cracked tooth diagnosis model for individual teeth using InceptionV3, ResNet50, and EfficientNetB0. The model underwent fivefold cross-validation, with the 418 data instances divided into training, validation, and test sets at a ratio of 3:1:1. RESULTS The sensitivity, specificity, accuracy, and F1 score of the deep learning models were 90.43-94.26%, 52.63-60.77%, 72.01-75.84%, and 76.36-79.00%, respectively. CONCLUSION The indication for cracked tooth extraction can be predicted to a certain extent by a deep learning model using panoramic radiography.
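The four reported metrics are standard functions of the 2x2 confusion matrix. A minimal sketch with the generic formulas (not the authors' code), taking the cracked class as positive and using hypothetical toy counts:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy, and F1 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # recall on cracked teeth
    specificity = tn / (tn + fp)                  # recall on normal teeth
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f1

# Toy counts (illustrative only, not study data)
sens, spec, acc, f1 = classification_metrics(tp=9, fp=4, tn=6, fn=1)
```

The pattern in the reported ranges, high sensitivity but much lower specificity, means the models rarely miss a cracked tooth but flag many normal teeth as cracked; F1 summarizes that trade-off via precision and recall.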
Affiliation(s)
- Sae Byeol Mun: Department of Health Sciences and Technology, GAIHST, Gachon University, Incheon, 21999, Republic of Korea
- Jeseong Kim: Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon, 35233, Republic of Korea
- Young Jae Kim: Gachon Biomedical & Convergence Institute, Gachon University Gil Medical Center, Incheon, 21565, Republic of Korea
- Min-Seock Seo: Department of Conservative Dentistry, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon, 35233, Republic of Korea
- Bong Chul Kim: Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon, 35233, Republic of Korea
- Kwang Gi Kim: Department of Biomedical Engineering, College of IT Convergence, Gachon University, Gyeonggi-do, Republic of Korea; KMAIN, Seongnam, Republic of Korea
4. Dong F, Yan J, Zhang X, Zhang Y, Liu D, Pan X, Xue L, Liu Y. Artificial intelligence-based predictive model for guidance on treatment strategy selection in oral and maxillofacial surgery. Heliyon 2024;10:e35742. PMID: 39170321. PMCID: PMC11336844. DOI: 10.1016/j.heliyon.2024.e35742.
Abstract
Application of deep learning (DL) and machine learning (ML) is rapidly increasing in the medical field. DL is gaining significance for medical image analysis, particularly in oral and maxillofacial surgery. Owing to its ability to accurately identify and categorize both diseased and normal soft- and hard-tissue structures, DL has high application potential in the diagnosis and treatment of tumors and in orthognathic surgery. Moreover, DL and ML can be used to develop prediction models that aid surgeons in assessing prognosis by analyzing the patient's medical history, imaging data, and surgical records, developing more effective treatment strategies, selecting appropriate surgical modalities, and evaluating the risk of postoperative complications. Such prediction models can play a crucial role in the selection of treatment strategies for oral and maxillofacial surgery. Their practical application can improve the utilization of medical staff, increase treatment accuracy and efficiency, reduce surgical risks, and provide an enhanced treatment experience to patients. However, DL and ML face limitations, such as data drift, unstable model results, and fragile social trust. With the advancement of social concepts and technologies, the use of these models in oral and maxillofacial surgery is anticipated to become more comprehensive and extensive.
Affiliation(s)
- Fanqiao Dong: School of Stomatology, China Medical University, Shenyang, China
- Jingjing Yan: Hospital of Stomatology, China Medical University, Shenyang, China
- Xiyue Zhang, Yikun Zhang, Di Liu, Xiyun Pan: School of Stomatology, China Medical University, Shenyang, China
- Lei Xue: School of Stomatology, China Medical University, Shenyang, China; Hospital of Stomatology, China Medical University, Shenyang, China
- Yu Liu: First Affiliated Hospital of Jinzhou Medical University, Jinzhou, China
5. Fernández-Martín U, Lisbona-González MJ, Vallecillo-Rivas M, Mallo-Magariños M, Herrera-Briones FJ. Effect of Preoperative Administration of Dexamethasone vs. Methylprednisolone in Surgical Extraction of Impacted Lower Third Molars: Randomized Controlled Clinical Trial. J Clin Med 2024;13:4614. PMID: 39200756. PMCID: PMC11355648. DOI: 10.3390/jcm13164614.
Abstract
Background/Objectives: Glucocorticoids are drugs that are increasingly used in oral surgery to reduce trismus, inflammation, and postoperative pain, three frequent complications after the surgical extraction of impacted lower third molars. The aim of this study was to compare the effect of 8 mg dexamethasone versus 40 mg methylprednisolone in the prevention of postoperative complications after third molar surgery. Methods: A randomized double-blind clinical trial was conducted following CONSORT guidelines. In detail, 84 patients were included in the study, who randomly received a single preoperative submucosal dose of dexamethasone (8 mg) or methylprednisolone (40 mg). The primary outcomes analyzed were trismus, inflammation, and postoperative pain. Measurements were performed at baseline (0 h), 3 h, 7 h, 24 h, 48 h, and on the 7th day using a Visual Analog Scale (VAS), a Verbal Rating Scale (VRS), and the Gabka-Matsumura method. Results: Dexamethasone reduced trismus, inflammation, and postoperative pain significantly more effectively than methylprednisolone. Conclusions: Preoperative submucosal administration of 8 mg dexamethasone is effective and safe in reducing the severity of postoperative complications following surgical extraction of impacted lower third molars.
Affiliation(s)
- Unai Fernández-Martín: Department of Oral Surgery and Implant Dentistry, School of Dentistry, Colegio Máximo de Cartuja s/n, University of Granada, 18071 Granada, Spain
- María Jesús Lisbona-González: Department of Oral Surgery and Implant Dentistry, School of Dentistry, Colegio Máximo de Cartuja s/n, University of Granada, 18071 Granada, Spain; Faculty of Dentistry, Colegio Máximo de Cartuja s/n, University of Granada, 18071 Granada, Spain
- Marta Vallecillo-Rivas: Department of Oral Surgery and Implant Dentistry, School of Dentistry, Colegio Máximo de Cartuja s/n, University of Granada, 18071 Granada, Spain; Faculty of Dentistry, Colegio Máximo de Cartuja s/n, University of Granada, 18071 Granada, Spain
- Manuel Mallo-Magariños: Department of Oral Surgery and Implant Dentistry, School of Dentistry, Colegio Máximo de Cartuja s/n, University of Granada, 18071 Granada, Spain
- Francisco Javier Herrera-Briones: Department of Oral Surgery and Implant Dentistry, School of Dentistry, Colegio Máximo de Cartuja s/n, University of Granada, 18071 Granada, Spain; Faculty of Dentistry, Colegio Máximo de Cartuja s/n, University of Granada, 18071 Granada, Spain
6. Zirek T, Öziç MÜ, Tassoker M. AI-driven localization of all impacted teeth and prediction of Winter angulation for third molars on panoramic radiographs: clinical user interface design. Comput Biol Med 2024;178:108755. PMID: 38897151. DOI: 10.1016/j.compbiomed.2024.108755.
Abstract
PURPOSE Impacted teeth are teeth that fail to reach their normal position in the arch by the expected eruption time and remain under the gums or jawbone. This study aims to detect all impacted teeth and to classify impacted third molars according to the Winter method with an artificial intelligence model on panoramic radiographs. METHODS In this study, 1197 panoramic radiographs from the dentistry faculty database were collected for all impacted teeth, and 1000 panoramic radiographs were collected for Winter classification. After pre-processing, the number of images was doubled with data augmentation. Both datasets were randomly divided into 80% training, 10% validation, and 10% testing. After transfer learning and fine-tuning, models were trained on the two datasets with YOLOv8, a high-performance deep learning algorithm, to detect impacted teeth. The results were evaluated with precision, recall, mAP, and F1-score performance metrics. A graphical user interface was designed for clinical use with the model weights obtained from training. RESULTS For the detection of impacted third molars according to the Winter classification, the average precision, average recall, and average F1 score were 0.972, 0.967, and 0.969, respectively. For the detection of all impacted teeth, they were 0.991, 0.995, and 0.993, respectively. CONCLUSION The results show that the YOLOv8-based deep learning model successfully detected all impacted teeth and classified impacted third molars according to the Winter system.
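Winter's method classifies a third molar by the angle between its long axis and that of the adjacent second molar. A sketch of the mapping using commonly quoted cut-offs (the exact angle ranges used in this study are an assumption here):

```python
def winter_class(angle_deg):
    """Winter category from the inter-axis angle in degrees
    (positive = mesial tilt). Cut-offs are commonly quoted values,
    not taken from the study itself."""
    if abs(angle_deg) <= 10:
        return "vertical"
    if 10 < angle_deg < 80:
        return "mesioangular"
    if 80 <= angle_deg <= 100:
        return "horizontal"
    if -80 < angle_deg < -10:
        return "distoangular"
    return "other/inverted"
```

A detector such as the one described would output a category label per third molar directly; a rule like this is how the ground-truth labels are typically derived from measured angles.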
Affiliation(s)
- Taha Zirek: Necmettin Erbakan University, Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Konya, Turkey
- Muhammet Üsame Öziç: Pamukkale University, Faculty of Technology, Department of Biomedical Engineering, Denizli, Turkey
- Melek Tassoker: Necmettin Erbakan University, Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Konya, Turkey
7. Assiri HA, Hameed MS, Alqarni A, Dawasaz AA, Arem SA, Assiri KI. Artificial Intelligence Application in a Case of Mandibular Third Molar Impaction: A Systematic Review of the Literature. J Clin Med 2024;13:4431. PMID: 39124697. PMCID: PMC11313288. DOI: 10.3390/jcm13154431.
Abstract
Objective: This systematic review aims to summarize the evidence on the use and applicability of AI in impacted mandibular third molars. Methods: Searches were performed in the following databases: PubMed, Scopus, and Google Scholar. The study protocol is registered at the International Platform of Registered Systematic Review and Meta-analysis Protocols (INPLASY202460081). The retrieved articles were subjected to an exhaustive review based on the inclusion and exclusion criteria for the study. Articles on the use of AI for diagnosis, treatment, and treatment planning in patients with impacted mandibular third molars were included. Results: Twenty-one articles were selected and evaluated using the Scottish Intercollegiate Guidelines Network (SIGN) evidence quality scale. Most of the analyzed studies dealt with using AI to determine the relationship between the mandibular canal and the impacted mandibular third molar. The average quality of the articles included in this review was 2+, which indicated that the level of evidence, according to the SIGN protocol, was B. Conclusions: Compared to human observers, AI models have demonstrated decent performance in determining the morphology, anatomy, and relationship of the impaction with the inferior alveolar nerve canal. However, the prediction of eruptions and future horizons of AI models are still in the early developmental stages. Additional studies estimating the eruption in mixed and permanent dentition are warranted to establish a comprehensive model for identifying, diagnosing, and predicting third molar eruptions and determining the treatment outcomes in the case of impacted teeth. This will help clinicians make better decisions and achieve better treatment outcomes.
Affiliation(s)
- Hassan Ahmed Assiri (affiliation shared by M.S.H., A.A., A.A.D., S.A.A., and K.I.A.): Department of Diagnostic Science and Oral Biology, College of Dentistry, King Khalid University, P.O. Box 960, Abha City 61421, Saudi Arabia
8. Trachoo V, Taetragool U, Pianchoopat P, Sukitporn-Udom C, Morakrant N, Warin K. Deep Learning for Predicting the Difficulty Level of Removing the Impacted Mandibular Third Molar. Int Dent J 2024:S0020-6539(24)00193-X. PMID: 39043529. DOI: 10.1016/j.identj.2024.06.021.
Abstract
BACKGROUND Preoperative assessment of the impacted mandibular third molar (LM3) on a panoramic radiograph is important in surgical planning. The aim of this study was to develop and evaluate a computer-aided, visualisation-based deep learning (DL) system that uses a panoramic radiograph to predict the difficulty level of surgical removal of an impacted LM3. METHODS The study retrospectively collected 1367 LM3 images from 784 patients who presented to the University Dental Hospital from 2021-2023. The difficulty level of surgically removing impacted LM3s was assessed via a newly developed DL system that integrates 3 distinct DL models: ResNet101V2 handles binary classification for identifying impacted LM3s in panoramic radiographs, RetinaNet detects the precise location of each impacted LM3, and a Vision Transformer performs multiclass image classification to grade the difficulty of removing the detected LM3. RESULTS The ResNet101V2 model achieved a classification accuracy of 0.8671. The RetinaNet model demonstrated exceptional detection performance, with a mean average precision of 0.9928. The Vision Transformer model delivered an average accuracy of 0.7899 in predicting removal difficulty levels. CONCLUSIONS The 3-phase, computer-aided, visualisation-based DL system performed very well in predicting the difficulty level of surgically removing an impacted LM3 from panoramic radiographs.
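The three-stage design can be sketched as a simple cascade. The function and its model arguments below are hypothetical stand-ins for the three described networks, not the authors' implementation:

```python
def assess_radiograph(image, classifier, detector, grader):
    """Stage 1: is an impacted LM3 present on the radiograph?
    Stage 2: locate each impacted LM3.
    Stage 3: grade removal difficulty per detected tooth."""
    if not classifier(image):          # e.g. a ResNet-style binary screen
        return []                      # no impacted LM3 found: nothing to grade
    findings = []
    for box in detector(image):        # e.g. RetinaNet-style bounding boxes
        crop = (image, box)            # in practice: crop the image to the box
        findings.append({"box": box, "difficulty": grader(crop)})
    return findings
```

Cascading lets each model solve a narrower task: the grader only ever sees regions the detector has already localised, which is the usual motivation for this kind of staged design.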
Affiliation(s)
- Vorapat Trachoo: Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand
- Unchalisa Taetragool, Ploypapas Pianchoopat, Chatchapon Sukitporn-Udom, Narapathra Morakrant: Department of Computer Engineering, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand
- Kritsasith Warin: Faculty of Dentistry, Thammasat University, Pathum Thani, Thailand
9. Motmaen I, Xie K, Schönbrunn L, Berens J, Grunert K, Plum AM, Raufeisen J, Ferreira A, Hermans A, Egger J, Hölzle F, Truhn D, Puladi B. Insights into Predicting Tooth Extraction from Panoramic Dental Images: Artificial Intelligence vs. Dentists. Clin Oral Investig 2024;28:381. PMID: 38886242. PMCID: PMC11182848. DOI: 10.1007/s00784-024-05781-5.
Abstract
OBJECTIVES Tooth extraction is one of the most frequently performed medical procedures. The indication is based on the combination of clinical and radiological examination and individual patient parameters and should be made with great care. However, determining whether a tooth should be extracted is not always a straightforward decision. Moreover, visual and cognitive pitfalls in the analysis of radiographs may lead to incorrect decisions. Artificial intelligence (AI) could be used as a decision support tool to provide a score of tooth extractability. MATERIAL AND METHODS Using 26,956 single-tooth images from 1,184 panoramic radiographs (PANs), we trained a ResNet50 network to classify teeth as either extraction-worthy or preservable. For this purpose, teeth were cropped with different margins from PANs and annotated. The usefulness of the AI-based classification, as well as that of dentists, was evaluated on a test dataset. In addition, the explainability of the best AI model was visualized via class activation mapping using CAMERAS. RESULTS The ROC-AUC of the best AI model for discriminating teeth worthy of preservation was 0.901 with a 2% crop margin on dental images. In contrast, the average ROC-AUC for dentists was only 0.797. With a tooth extraction prevalence of 19.1%, the AI model's PR-AUC was 0.749, while the dentist evaluation only reached 0.589. CONCLUSION AI models outperform dentists/specialists in predicting tooth extraction based solely on X-ray images, and AI performance improves with increasing contextual information. CLINICAL RELEVANCE AI could help monitor at-risk teeth and reduce errors in indications for extractions.
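ROC-AUC can be read as the probability that a randomly chosen extraction-worthy tooth scores higher than a randomly chosen preservable one; a sketch of that rank formulation follows (illustrative, not the authors' evaluation code). The PR-AUC figures are worth reading against their baseline: a random classifier's PR-AUC roughly equals the positive prevalence, so 0.749 against a 19.1% prevalence is far above chance.

```python
def roc_auc(pos_scores, neg_scores):
    """Probability that a positive outranks a negative (ties count half),
    i.e. the Mann-Whitney U statistic divided by n_pos * n_neg."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores
               for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Tiny toy example (hypothetical scores, not study data)
auc = roc_auc(pos_scores=[0.9, 0.8], neg_scores=[0.1, 0.8])
```

Because this formulation only compares ranks, ROC-AUC is insensitive to class prevalence, which is precisely why the abstract also reports PR-AUC for the imbalanced extraction task.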
Affiliation(s)
- Ila Motmaen: Department of Oral and Maxillofacial Surgery, University Hospital Knappschaftskrankenhaus Bochum, 44892, Bochum, Germany
- Kunpeng Xie, Leon Schönbrunn, Jeff Berens, Kim Grunert, Anna Maria Plum, Johannes Raufeisen: Department of Oral and Maxillofacial Surgery and Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- André Ferreira: Department of Oral and Maxillofacial Surgery and Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany; Centre Algoritmi / LASI, University of Minho, 4710-057, Braga, Portugal; Institute for Artificial Intelligence in Medicine, Essen University Hospital, 45147, Essen, Germany
- Alexander Hermans: Visual Computing Institute, Computer Science and Natural Sciences, RWTH Aachen University, 52074, Aachen, Germany; Department of Diagnostic and Interventional Radiology, RWTH Aachen University, 52074, Aachen, Germany
- Jan Egger: Institute for Artificial Intelligence in Medicine, Essen University Hospital, 45147, Essen, Germany
- Frank Hölzle: Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Daniel Truhn: Department of Diagnostic and Interventional Radiology, RWTH Aachen University, 52074, Aachen, Germany
- Behrus Puladi: Department of Oral and Maxillofacial Surgery and Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
10. Karobari MI, Suryawanshi H, Patil SR. Revolutionizing oral and maxillofacial surgery: ChatGPT's impact on decision support, patient communication, and continuing education. Int J Surg 2024;110:3143-3145. PMID: 38446838. PMCID: PMC11175733. DOI: 10.1097/js9.0000000000001286.
Affiliation(s)
- Mohmed Isaqali Karobari: Department of Restorative Dentistry and Endodontics, Faculty of Dentistry, University of Puthisastra, Phnom Penh, Cambodia; Dental Research Unit, Center for Global Health Research, Saveetha Medical College and Hospital, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu, India
- Hema Suryawanshi: Department of Oral Pathology and Microbiology, Chhattisgarh Dental College and Research Institute, India
- Santosh R. Patil: Department of Oral Medicine and Radiology, Chhattisgarh Dental College and Research Institute, India
11. Karkehabadi H, Khoshbin E, Ghasemi N, Mahavi A, Mohammad-Rahimi H, Sadr S. Deep learning for determining the difficulty of endodontic treatment: a pilot study. BMC Oral Health 2024;24:574. PMID: 38760686. PMCID: PMC11102254. DOI: 10.1186/s12903-024-04235-4.
Abstract
BACKGROUND To develop and validate a deep learning model for automated assessment of endodontic case difficulty from periapical radiographs. METHODS A dataset of 1,386 periapical radiographs was compiled from two clinical sites. Two dentists and two endodontists annotated the radiographs for difficulty using the "simple assessment" criteria from the American Association of Endodontists' case difficulty assessment form in the Endocase application. A classification task labeled cases as "easy" or "hard", while regression predicted overall difficulty scores. Convolutional neural networks (VGG16, ResNet18, ResNet50, ResNeXt50, and Inception v2) were used, with a baseline model trained via transfer learning from ImageNet weights. Other models were pre-trained using self-supervised contrastive learning (BYOL, SimCLR, MoCo, and DINO) on 20,295 unlabeled dental radiographs to learn representations without manual labels. All models were evaluated using 10-fold cross-validation, with performance compared to that of seven human examiners (three general dentists and four endodontists) on a hold-out test set. RESULTS The baseline VGG16 model attained 87.62% accuracy in classifying difficulty. Self-supervised pretraining did not improve performance. Regression predicted difficulty scores with an error of ±3.21 points. All models outperformed the human raters, whose inter-examiner reliability was poor. CONCLUSION This pilot study demonstrated the feasibility of automated endodontic difficulty assessment via deep learning models.
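The 10-fold protocol can be sketched as an index generator in which every radiograph serves as held-out data exactly once. An illustrative standard-library sketch, not the authors' pipeline; the seed is an assumption:

```python
import random

def kfold_splits(n_samples, k=10, seed=0):
    """Yield (train, test) index lists for k-fold cross-validation:
    each sample lands in the test fold of exactly one split."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]      # k near-equal disjoint folds
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```

Averaging a metric over the k held-out folds gives a less split-dependent estimate than a single train/test cut, which matters for a dataset of this modest size.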
Affiliation(s)
- Hamed Karkehabadi: Department of Endodontics, Dental School, Hamadan University of Medical Sciences, Hamadan, Iran; Department of Endodontics, Dental Research Center, Hamadan University of Medical Sciences, Hamadan, Iran
- Elham Khoshbin: Department of Endodontics, Dental School, Hamadan University of Medical Sciences, Hamadan, Iran
- Nikoo Ghasemi: Faculty of Dentistry, Zanjan University of Medical Sciences, Zanjan, Iran
- Amal Mahavi: Department of Endodontics, Dental School, Hamadan University of Medical Sciences, Hamadan, Iran
- Hossein Mohammad-Rahimi: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Federal Republic of Germany
- Soroush Sadr: Department of Endodontics, Dental School, Hamadan University of Medical Sciences, Shahid Fahmideh Street, PO Box 6517838677, Hamadan, Iran
|
12
|
Jeong H, Han SS, Jung HI, Lee W, Jeon KJ. Perceptions and attitudes of dental students and dentists in South Korea toward artificial intelligence: a subgroup analysis based on professional seniority. BMC Med Educ 2024; 24:430. [PMID: 38649951 PMCID: PMC11034023 DOI: 10.1186/s12909-024-05441-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Accepted: 04/17/2024] [Indexed: 04/25/2024]
Abstract
BACKGROUND This study explored dental students' and dentists' perceptions of and attitudes toward artificial intelligence (AI) and analyzed differences according to professional seniority. METHODS From September to November 2022, online surveys using Google Forms were conducted at 2 dental colleges and on 2 dental websites. The questionnaire consisted of general information (8 or 10 items) and participants' perceptions, confidence, predictions, and perceived future prospects regarding AI (17 items). A multivariate logistic regression analysis was performed on 4 questions representing perceptions and attitudes toward AI to identify highly influential factors according to respondents' position, age, sex, residence, and self-reported level of knowledge about AI. Participants were reclassified into 2 subgroups based on students' years in school and 4 subgroups based on dentists' years of experience. The chi-square test or Fisher's exact test was used to determine differences between dental students and dentists, and between subgroups, for all 17 questions. RESULTS The study included 120 dental students and 96 dentists. Participants with a high level of AI knowledge were more likely to be interested in AI than those with a moderate or low level (adjusted OR 24.345, p < 0.001). Most dental students (60.8%) and dentists (67.7%) predicted that dental AI would complement human limitations. Dental students responded that they would actively use AI in almost all cases (40.8%), whereas dentists responded that they would use AI only when necessary (44.8%). Dentists with 11-20 years of experience were the most likely to disagree that AI could outperform skilled dentists (50.0%), and respondents with longer careers more often endorsed the need for AI education in schools.
CONCLUSIONS Level of knowledge about AI emerged as the factor most strongly influencing perceptions and attitudes toward AI, with both dental students and dentists holding similar views on the potential of AI as an auxiliary tool. However, students' and dentists' willingness to use AI differed. Although dentists varied in their confidence in the abilities of AI, all recognized the need for education on AI. AI adoption is becoming a reality in dentistry, which requires proper awareness, proper use, and comprehensive AI education.
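As a reminder of what the odds ratio above expresses, here is a toy crude-OR calculation for a 2x2 table of a binary exposure (high self-reported AI knowledge) versus a binary outcome (interest in AI). The counts are invented; the study's adjusted OR of 24.345 came from multivariate logistic regression, which additionally controls for position, age, sex, and residence.

```python
# Crude odds ratio from a 2x2 contingency table (illustrative counts only).

def odds_ratio(a, b, c, d):
    """Crude OR for a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    return (a * d) / (b * c)

print(odds_ratio(20, 5, 10, 25))  # -> 10.0 (invented counts)
```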
Affiliation(s)
- Hui Jeong
  - Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, South Korea
- Sang-Sun Han
  - Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, South Korea
- Hoi-In Jung
  - Department of Preventive Dentistry & Public Oral Health, Yonsei University College of Dentistry, Seoul, South Korea
- Wan Lee
  - Department of Oral and Maxillofacial Radiology, Wonkwang University College of Dentistry, Iksan, South Korea
- Kug Jin Jeon
  - Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, South Korea

13
Faadiya AN, Widyaningrum R, Arindra PK, Diba SF. The diagnostic performance of impacted third molars in the mandible: A review of deep learning on panoramic radiographs. Saudi Dent J 2024; 36:404-412. [PMID: 38525176 PMCID: PMC10960107 DOI: 10.1016/j.sdentj.2023.11.025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2023] [Revised: 11/21/2023] [Accepted: 11/23/2023] [Indexed: 03/26/2024] Open
Abstract
Background The mandibular third molar is prone to impaction, resulting in its inability to erupt into the oral cavity. Radiographic examination is required to support the odontectomy of impacted teeth. With advances in artificial intelligence (AI) technology, computer-aided diagnosis based on deep learning is emerging in medicine and dentistry. This review describes the performance and prospects of deep learning for the detection, classification, and evaluation of third molar-mandibular canal relationships on panoramic radiographs. Methods This work was conducted using three databases: PubMed, Google Scholar, and Science Direct. Following the literature selection, 49 articles were reviewed, with the 12 main articles discussed in detail. Results Several deep learning models are currently used for segmentation and classification of third molar impaction, with or without the combination of other techniques. Deep learning has demonstrated significant diagnostic performance in identifying mandibular impacted third molars (ITM) on panoramic radiographs, with an accuracy ranging from 78.91% to 90.23%. Meanwhile, the accuracy of deep learning in determining the relationship between ITM and the mandibular canal (MC) ranges from 72.32% to 99%. Conclusion Deep learning-based AI with high performance for the detection, classification, and evaluation of the relationship of ITM to the MC on panoramic radiographs has been developed over the past decade. However, deep learning models must be improved using large datasets, and their diagnostic performance should be evaluated in line with medical diagnostic test protocols. Future studies involving collaboration among oral radiologists, clinicians, and computer scientists are required to identify AI development models that are accurate, efficient, and applicable to clinical services.
Affiliation(s)
- Amalia Nur Faadiya
  - Dental Medicine Study Program, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Rini Widyaningrum
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Pingky Krisna Arindra
  - Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Silviana Farrah Diba
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta, Indonesia

14
Hartoonian S, Hosseini M, Yousefi I, Mahdian M, Ghazizadeh Ahsaie M. Applications of artificial intelligence in dentomaxillofacial imaging-a systematic review. Oral Surg Oral Med Oral Pathol Oral Radiol 2024:S2212-4403(23)01566-3. [PMID: 38637235 DOI: 10.1016/j.oooo.2023.12.790] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 12/02/2023] [Accepted: 12/22/2023] [Indexed: 04/20/2024]
Abstract
BACKGROUND Artificial intelligence (AI) technology has been increasingly developed in oral and maxillofacial imaging. The aim of this systematic review was to assess the applications and performance of developed algorithms in different dentomaxillofacial imaging modalities. STUDY DESIGN A systematic search of the PubMed and Scopus databases was performed. The search strategy combined the following keywords: "Artificial Intelligence," "Machine Learning," "Deep Learning," "Neural Networks," "Head and Neck Imaging," and "Maxillofacial Imaging." Full-text screening and data extraction were conducted independently by two reviewers; any disagreement was resolved by discussion. The risk of bias was assessed by one reviewer and validated by another. RESULTS The search returned a total of 3,392 articles. After careful evaluation of the titles, abstracts, and full texts, a total of 194 articles were included. Most studies focused on AI applications for tooth and implant classification and identification, 3-dimensional cephalometric landmark detection, lesion detection (periapical, jaws, and bone), and osteoporosis detection. CONCLUSION Despite their limitations, the AI models showed promising results. Further studies are needed to explore specific applications and real-world scenarios before these models can be confidently integrated into dental practice.
Affiliation(s)
- Serlie Hartoonian
  - School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Matine Hosseini
  - School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Iman Yousefi
  - School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mina Mahdian
  - Department of Prosthodontics and Digital Technology, Stony Brook University School of Dental Medicine, Stony Brook, NY, USA
- Mitra Ghazizadeh Ahsaie
  - Department of Oral and Maxillofacial Radiology, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran

15
Al-Haj Husain A, Stadlinger B, Winklhofer S, Bosshard FA, Schmidt V, Valdec S. Imaging in Third Molar Surgery: A Clinical Update. J Clin Med 2023; 12:7688. [PMID: 38137758 PMCID: PMC10744030 DOI: 10.3390/jcm12247688] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2023] [Revised: 12/08/2023] [Accepted: 12/13/2023] [Indexed: 12/24/2023] Open
Abstract
Third molar surgery is one of the most common surgical procedures performed in oral and maxillofacial surgery. Considering the patient's young age and the often-elective nature of the procedure, a comprehensive preoperative evaluation of the surgical site, relying heavily on preoperative imaging, is key to accurate diagnostic work-up, evidence-based clinical decision making, and, when appropriate, indication-specific surgical planning. Given the rapid development of dental imaging, the aim of this article is to provide a comprehensive, up-to-date clinical overview of perioperative imaging techniques in third molar surgery, ranging from panoramic radiography to emerging technologies such as photon-counting computed tomography and magnetic resonance imaging. Each modality's advantages, limitations, and recent improvements are evaluated, highlighting its role in treatment planning, complication prevention, and postoperative follow-up. The integration of recent technological advances, including artificial intelligence and machine learning in biomedical imaging, coupled with a thorough preoperative clinical evaluation, marks another step toward personalized dentistry in high-risk third molar surgery. This approach enables minimally invasive surgery while reducing inefficiencies and risks by incorporating additional imaging modality- and patient-specific parameters, potentially facilitating and improving patient management.
Affiliation(s)
- Adib Al-Haj Husain
  - Clinic of Cranio-Maxillofacial and Oral Surgery, Center of Dental Medicine, University of Zurich, 8032 Zurich, Switzerland
  - Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, 8091 Zurich, Switzerland
- Bernd Stadlinger
  - Clinic of Cranio-Maxillofacial and Oral Surgery, Center of Dental Medicine, University of Zurich, 8032 Zurich, Switzerland
- Fabienne A. Bosshard
  - Clinic of Cranio-Maxillofacial and Oral Surgery, Center of Dental Medicine, University of Zurich, 8032 Zurich, Switzerland
- Valérie Schmidt
  - Clinic of Cranio-Maxillofacial and Oral Surgery, Center of Dental Medicine, University of Zurich, 8032 Zurich, Switzerland
- Silvio Valdec
  - Clinic of Cranio-Maxillofacial and Oral Surgery, Center of Dental Medicine, University of Zurich, 8032 Zurich, Switzerland

16
Miragall MF, Knoedler S, Kauke-Navarro M, Saadoun R, Grabenhorst A, Grill FD, Ritschl LM, Fichter AM, Safi AF, Knoedler L. Face the Future-Artificial Intelligence in Oral and Maxillofacial Surgery. J Clin Med 2023; 12:6843. [PMID: 37959310 PMCID: PMC10649053 DOI: 10.3390/jcm12216843] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2023] [Revised: 10/24/2023] [Accepted: 10/28/2023] [Indexed: 11/15/2023] Open
Abstract
Artificial intelligence (AI) has emerged as a versatile health-technology tool, revolutionizing medical services through predictive, preventative, individualized, and participatory approaches. AI encompasses computational concepts such as machine learning, deep learning, and neural networks. It also offers a broad platform for improving preoperative planning, intraoperative workflow, and postoperative patient outcomes in oral and maxillofacial surgery (OMFS). The purpose of this review is to present a comprehensive summary of the existing scientific knowledge. The authors reviewed English-language PubMed/MEDLINE and Embase papers from inception to 1 December 2022. The search terms were (1) "OMFS" OR "oral and maxillofacial" OR "oral and maxillofacial surgery" OR "oral surgery" AND (2) "AI" OR "artificial intelligence," with the search format tailored to each database's syntax. The reference list of each retrieved article and systematic review was examined for additional pertinent material. According to the literature, AI is already being used in certain areas of OMFS, such as radiographic image quality improvement, diagnosis of cysts and tumors, and localization of cephalometric landmarks. With additional research, practitioners in numerous disciplines may receive further assistance in preoperative planning, intraoperative screening, and postoperative monitoring. Overall, AI carries promising potential to advance OMFS and generate novel solutions to persistent clinical challenges. Herein, this review provides a comprehensive summary of AI in OMFS and sheds light on future research efforts. The advanced analysis of complex medical imaging data can support surgeons in preoperative assessments, virtual surgical simulations, and individualized treatment strategies.
AI can also assist surgeons during intraoperative decision-making by offering immediate feedback and guidance to enhance surgical accuracy and reduce complication rates, for instance by predicting the risk of bleeding.
Affiliation(s)
- Maximilian F. Miragall
  - Department of Oral and Maxillofacial Surgery, University Hospital Regensburg, 93053 Regensburg, Germany
  - Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Samuel Knoedler
  - Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT 06510, USA
- Martin Kauke-Navarro
  - Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT 06510, USA
- Rakan Saadoun
  - Department of Plastic Surgery, University of Pittsburgh, Pittsburgh, PA 15261, USA
- Alex Grabenhorst
  - Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Florian D. Grill
  - Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Lucas M. Ritschl
  - Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Andreas M. Fichter
  - Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Ali-Farid Safi
  - Craniologicum, Center for Cranio-Maxillo-Facial Surgery, 3011 Bern, Switzerland
  - Faculty of Medicine, University of Bern, 3010 Bern, Switzerland
- Leonard Knoedler
  - Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT 06510, USA
  - Department of Plastic, Hand and Reconstructive Surgery, University Hospital Regensburg, 93053 Regensburg, Germany

17
Vinayahalingam S, Kempers S, Schoep J, Hsu TMH, Moin DA, van Ginneken B, Flügge T, Hanisch M, Xi T. Intra-oral scan segmentation using deep learning. BMC Oral Health 2023; 23:643. [PMID: 37670290 PMCID: PMC10481506 DOI: 10.1186/s12903-023-03362-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Accepted: 08/26/2023] [Indexed: 09/07/2023] Open
Abstract
OBJECTIVE Intra-oral scans and gypsum cast scans (OS) are widely used in orthodontics, prosthetics, implantology, and orthognathic surgery to plan patient-specific treatments, which require tooth segmentation with high accuracy and resolution. Manual tooth segmentation, the gold standard up to now, is time-consuming, tedious, and observer-dependent. This study aimed to develop an automated tooth segmentation and labeling system using deep learning. MATERIAL AND METHODS As a reference, 1750 OS were manually segmented and labeled. A deep-learning approach based on PointCNN and 3D U-Net, in combination with a rule-based heuristic algorithm and a combinatorial search algorithm, was trained and validated on 1400 OS. Subsequently, the trained algorithm was applied to a test set of 350 OS. The intersection over union (IoU), as a measure of accuracy, was calculated to quantify the degree of similarity between the annotated ground truth and the model predictions. RESULTS The model achieved accurate tooth segmentations with a mean IoU score of 0.915. The FDI labels of the teeth were predicted with a mean accuracy of 0.894. Optical inspection showed excellent positional agreement between the automatically and manually segmented tooth components; minor flaws were mostly seen at the edges. CONCLUSION The proposed method forms a promising foundation for time-effective and observer-independent tooth segmentation and labeling on intra-oral scans. CLINICAL SIGNIFICANCE Deep learning may assist clinicians in virtual treatment planning in orthodontics, prosthetics, implantology, and orthognathic surgery. The impact of using such models in clinical practice should be explored.
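The intersection-over-union measure behind the mean score of 0.915 reported above can be sketched as follows. Segmentations are represented here as sets of element indices (for example, mesh faces assigned to one tooth); the representation and values are illustrative, not the study's data.

```python
# Minimal IoU sketch: |intersection| / |union| of predicted vs. ground-truth
# segmentations, each given as a collection of element indices.

def iou(pred, truth):
    """IoU between a predicted and a ground-truth segmentation."""
    pred, truth = set(pred), set(truth)
    union = pred | truth
    if not union:
        return 1.0  # both empty: treat as perfect agreement
    return len(pred & truth) / len(union)

print(iou([1, 2, 3], [2, 3, 4]))  # -> 0.5
```

A mean IoU like the one quoted would be the average of this score over all annotated teeth in the test set.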
Affiliation(s)
- Shankeeth Vinayahalingam
  - Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
  - Department of Artificial Intelligence, Radboud University, Nijmegen, the Netherlands
  - Department of Oral and Maxillofacial Surgery, Universitätsklinikum Münster, Münster, Germany
- Steven Kempers
  - Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
  - Department of Artificial Intelligence, Radboud University, Nijmegen, the Netherlands
- Julian Schoep
  - Promaton Co. Ltd, 1076 GR, Amsterdam, The Netherlands
- Tzu-Ming Harry Hsu
  - MIT Computer Science & Artificial Intelligence Laboratory, 32 Vassar St, Cambridge, MA, 02139, USA
- Bram van Ginneken
  - Department of Radiology, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
- Tabea Flügge
  - Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Hindenburgdamm 30, 12203, Berlin, Germany
- Marcel Hanisch
  - Department of Oral and Maxillofacial Surgery, Universitätsklinikum Münster, Münster, Germany
  - Promaton Co. Ltd, 1076 GR, Amsterdam, The Netherlands
- Tong Xi
  - Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands

18
Kim JY, Kahm SH, Yoo S, Bae SM, Kang JE, Lee SH. The efficacy of supervised learning and semi-supervised learning in diagnosis of impacted third molar on panoramic radiographs through artificial intelligence model. Dentomaxillofac Radiol 2023; 52:20230030. [PMID: 37192043 PMCID: PMC10461259 DOI: 10.1259/dmfr.20230030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2023] [Revised: 03/27/2023] [Accepted: 04/18/2023] [Indexed: 05/18/2023] Open
Abstract
OBJECTIVES The aim of the study was to evaluate the efficacy of traditional supervised learning (SL) and semi-supervised learning (SSL) in the classification of mandibular third molars (Mn3s) on panoramic images, comparing the simplicity of the preprocessing step and the resulting performance of the two approaches. METHODS A total of 1,625 cropped Mn3 images from 1,000 panoramic images were labeled for classification of the depth of impaction (D class), the spatial relation with the adjacent second molar (S class), and the relationship with the inferior alveolar nerve canal (N class). WideResNet (WRN) was applied as the SL model and LaplaceNet (LN) as the SSL model. RESULTS In the WRN model, 300 labeled images for the D and S classes and 360 labeled images for the N class were used for training and validation. In the LN model, only 40 labeled images per class were used for learning. The F1 scores for the D, S, and N classes were 0.87, 0.87, and 0.83 in the WRN model and 0.84, 0.94, and 0.80 in the LN model, respectively. CONCLUSIONS These results confirmed that the LN model, applied as SSL and using only a small number of labeled images, achieved prediction accuracy comparable to that of the WRN model trained with SL.
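The per-class F1 scores quoted above combine precision and recall. A minimal sketch of that computation, on invented labels and predictions rather than the study's data:

```python
# F1 score for one class: harmonic mean of precision and recall, computed
# from true/false positives and false negatives on paired label lists.

def f1_score(y_true, y_pred, positive):
    """F1 for the given positive class over paired true/predicted labels."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1_score([1, 1, 0, 0], [1, 0, 0, 1], positive=1))  # -> 0.5
```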
Affiliation(s)
- Ji-Youn Kim
  - Division of Oral & Maxillofacial Surgery, Department of Dentistry, St. Vincent’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Se Hoon Kahm
  - Department of Dentistry, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Seok Yoo
  - AI Business Headquarters, Unidocs Inc., Seoul, South Korea
- Soo-Mi Bae
  - Department of Artificial Intelligence, Graduate School, Korea University, Seoul, South Korea
- Sang Hwa Lee
  - Department of Dentistry, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea

19
Antonelli A, Barone S, Bennardo F, Giudice A. Three-dimensional facial swelling evaluation of pre-operative single-dose of prednisone in third molar surgery: a split-mouth randomized controlled trial. BMC Oral Health 2023; 23:614. [PMID: 37653378 PMCID: PMC10468892 DOI: 10.1186/s12903-023-03334-y] [Citation(s) in RCA: 18] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Accepted: 08/18/2023] [Indexed: 09/02/2023] Open
Abstract
BACKGROUND Facial swelling, pain, and trismus are the most common postoperative sequelae after mandibular third molar (M3M) surgery. Corticosteroids are the drugs most commonly used to reduce the severity of inflammatory symptoms after M3M surgery. This study aimed to evaluate the effect of a single pre-operative dose of prednisone on pain, trismus, and swelling after M3M surgery. METHODS This study was designed as a split-mouth randomized, controlled, triple-blind trial with two treatment groups, prednisone (PG) and control (CG). All parameters were assessed before extraction (T0), two days after surgery (T1), and seven days after surgery (T2). Three-dimensional evaluation of facial swelling was performed with the Bellus 3D Face App. A visual analogue scale (VAS) was used to assess pain. The maximum incisal distance was recorded with a calibrated ruler to evaluate trismus. The Shapiro-Wilk test was used to evaluate the normal distribution of each variable. To compare the two study groups, a two-tailed Student t-test was used for normally distributed variables. The level of significance was set at α = 0.05. Statistical analysis was conducted using STATA software (STATA 11, StataCorp, College Station, TX). RESULTS Thirty-two patients were recruited, with a mean age of 23.6 ± 3.7 years and a male-to-female ratio of 1:3. A total of 64 M3Ms (32 right and 32 left) were randomly assigned to PG or CG. Mean surgery time was 15.6 ± 3.7 min, without a statistically significant difference between the groups. At T1, PG showed significantly lower facial swelling than CG (PG: 3.3 ± 2.1 mm; CG: 4.2 ± 1.7 mm; p = 0.02). Similar results were recorded one week after surgery (PG: 1.2 ± 1.2; CG: 2.1 ± 1.3; p = 0.0005). All patients reported a decrease in facial swelling from T1 to T2, without differences between the two groups.
At T1, the maximum buccal opening was significantly reduced compared to T0, with no difference between PG (35.6 ± 8.2 mm) and CG (33.7 ± 7.3 mm) (p > 0.05). Similar results were reported one week after surgery (PG: 33.2 ± 14.4 mm; CG: 33.7 ± 13.1 mm; p > 0.05). PG showed significantly lower pain values than CG at both T1 (PG: 3.1 ± 1.5; CG: 4.6 ± 1.8; p = 0.0006) and T2 (PG: 1.0 ± 0.8; CG: 1.9 ± 1.4; p = 0.0063). CONCLUSION Our results showed that pre-operative low-dose prednisone administration could reduce postoperative sequelae, improving patient comfort after M3M surgery and reducing facial swelling two days and one week after the procedure. TRIAL REGISTRATION www.clinicaltrials.gov, NCT05830747, retrospectively registered on 26/04/2023.
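The group comparisons above rely on a two-tailed Student t-test. As a hedged, pure-Python sketch of the underlying statistic (the authors used STATA, and a p-value additionally requires the t distribution with n_a + n_b - 2 degrees of freedom), with invented sample values:

```python
# Equal-variance (Student) two-sample t statistic with pooled variance.
from math import sqrt

def pooled_t(a, b):
    """Two-sample Student t statistic for samples a and b."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance of b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))
```

Note that a split-mouth design yields paired observations, for which a paired t-test would also be a natural choice; the abstract reports the Student t-test, so that form is sketched here.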
Affiliation(s)
- Alessandro Antonelli
  - Department of Health Sciences, School of Dentistry, Magna Graecia University of Catanzaro, Viale Europa, 88100, Catanzaro, Italy
- Selene Barone
  - Department of Health Sciences, School of Dentistry, Magna Graecia University of Catanzaro, Viale Europa, 88100, Catanzaro, Italy
- Francesco Bennardo
  - Department of Health Sciences, School of Dentistry, Magna Graecia University of Catanzaro, Viale Europa, 88100, Catanzaro, Italy
- Amerigo Giudice
  - Department of Health Sciences, School of Dentistry, Magna Graecia University of Catanzaro, Viale Europa, 88100, Catanzaro, Italy
  - Department of Health Sciences, Oral Surgery Residency Training Program Director, Dean of the School of Dentistry, Magna Graecia University of Catanzaro, Catanzaro, Italy

20
Lindahl O, Ventä I. Level of difficulty of tooth extractions among roughly 100,000 procedures in primary care. Clin Oral Investig 2023; 27:4513-4520. [PMID: 37231272 PMCID: PMC10415519 DOI: 10.1007/s00784-023-05073-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Accepted: 05/17/2023] [Indexed: 05/27/2023]
Abstract
OBJECTIVES The study examined treatment codes of extracted teeth and aimed to assess the degree of difficulty of all tooth extractions. MATERIALS AND METHODS Retrospective data on treatment codes of all tooth extractions during a two-year period were obtained from the patient register of primary oral healthcare of the City of Helsinki, Finland. Prevalence, indication, and method of extraction appeared in the treatment codes (EBA codes). Degree of difficulty was determined from the method and classified as non-operative or operative, and as routine or demanding. Statistics included frequencies, percentages, and the χ2 test. RESULTS The total number of extraction procedures was 97,276, comprising 121,342 extracted teeth. The most frequent procedure was routine extraction of a tooth with forceps (55%, n = 53,642). The main reason for extraction was caries (27%, n = 20,889). Of the extractions, 79% (n = 76,435) were non-operative, 13% (n = 12,819) operative, and 8% (n = 8,022) multiple extractions in one visit. Level of difficulty was distributed as routine non-operative (63%), demanding non-operative (15%), routine operative (12%), demanding operative (2%), and multiple extractions (8%). CONCLUSIONS Two-thirds of all tooth extractions in primary care were relatively simple; however, 29% of procedures were classified as demanding. CLINICAL RELEVANCE As earlier methods for assessing level of difficulty were aimed at third molars alone, an analysis was presented for all tooth extractions. This approach may be useful for research purposes, and the profile of tooth extractions and their difficulty level may also be practical for decision-makers in primary care.
Affiliation(s)
- Oona Lindahl
  - Department of Oral and Maxillofacial Diseases, Faculty of Medicine, University of Helsinki, P.O. Box 41, 00014, Helsinki, Finland
- Irja Ventä
  - Department of Oral and Maxillofacial Diseases, Faculty of Medicine, University of Helsinki, P.O. Box 41, 00014, Helsinki, Finland

21
Qu Y, Wen Y, Chen M, Guo K, Huang X, Gu L. Predicting case difficulty in endodontic microsurgery using machine learning algorithms. J Dent 2023; 133:104522. [PMID: 37080531 DOI: 10.1016/j.jdent.2023.104522] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2023] [Revised: 04/09/2023] [Accepted: 04/17/2023] [Indexed: 04/22/2023] Open
Abstract
OBJECTIVES The study aimed to develop and validate machine learning models for case difficulty prediction in endodontic microsurgery, assisting clinicians in preoperative analysis. METHODS The cone-beam computed tomographic images were collected from 261 patients with 341 teeth and used for radiographic examination and measurement. Through linear regression (LR), support vector regression (SVR), and extreme gradient boosting (XGBoost) algorithms, four models were established according to different loss functions, including the L1-loss LR model, L2-loss LR model, SVR model and XGBoost model. Five-fold cross-validation was applied in model training and validation. Explained variance score (EVS), coefficient of determination (R2), mean absolute error (MAE), mean squared error (MSE) and median absolute error (MedAE) were calculated to evaluate the prediction performance. RESULTS The MAE, MSE and MedAE values of the XGBoost model were the lowest, which were 0.1010, 0.0391 and 0.0235, respectively. The EVS and R2 values of the XGBoost model were the highest, which were 0.7885 and 0.7967, respectively. The factors used to predict the case difficulty in endodontic microsurgery were ordered according to their relative importance, including lesion size, the distance between apex and adjacent important anatomical structures, root filling density, root apex diameter, root resorption, tooth type, tooth length, root filling length, root canal curvature and the number of root canals. CONCLUSIONS The XGBoost model outperformed the LR and SVR models on all evaluation metrics, which can assist clinicians in preoperative analysis. The relative feature importance provides a reference to develop the scoring system for case difficulty assessment in endodontic microsurgery. CLINICAL SIGNIFICANCE Preoperative case assessment is a crucial step to identify potential risks and make referral decisions. 
Machine learning models for case difficulty prediction in endodontic microsurgery can assist clinicians in preoperative analysis efficiently and accurately.
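The five evaluation metrics named above (MAE, MSE, MedAE, EVS, and R2) follow their standard definitions; the sketch below is illustrative only, not the study's code, and computes them in plain Python:

```python
from statistics import mean, median, pvariance

def regression_metrics(y_true, y_pred):
    """Compute the five metrics reported in the study:
    MAE, MSE, MedAE, explained variance score (EVS), and R2."""
    errors = [t - p for t, p in zip(y_true, y_pred)]
    abs_errors = [abs(e) for e in errors]
    mae = mean(abs_errors)                       # mean absolute error
    mse = mean(e * e for e in errors)            # mean squared error
    medae = median(abs_errors)                   # median absolute error
    evs = 1 - pvariance(errors) / pvariance(y_true)
    y_bar = mean(y_true)
    r2 = 1 - sum(e * e for e in errors) / sum((t - y_bar) ** 2 for t in y_true)
    return {"MAE": mae, "MSE": mse, "MedAE": medae, "EVS": evs, "R2": r2}
```

EVS differs from R2 in that it ignores any constant bias in the errors, which is why the two can diverge when a model is systematically offset.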
Affiliation(s)
- Yang Qu
- Hospital of Stomatology, Guanghua School of Stomatology, Sun Yat-sen University, Guangzhou, China; Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
- Yiting Wen
- Hospital of Stomatology, Guanghua School of Stomatology, Sun Yat-sen University, Guangzhou, China; Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
- Ming Chen
- South China University of Technology, Guangzhou, China
- Kailing Guo
- South China University of Technology, Guangzhou, China
- Xiangya Huang
- Hospital of Stomatology, Guanghua School of Stomatology, Sun Yat-sen University, Guangzhou, China; Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
- Lisha Gu
- Hospital of Stomatology, Guanghua School of Stomatology, Sun Yat-sen University, Guangzhou, China; Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China

22
Kempers S, van Lierop P, Hsu TMH, Moin DA, Bergé S, Ghaeminia H, Xi T, Vinayahalingam S. Positional assessment of lower third molar and mandibular canal using explainable artificial intelligence. J Dent 2023; 133:104519. [PMID: 37061117] [DOI: 10.1016/j.jdent.2023.104519]
Abstract
OBJECTIVE The aim of this study was to automatically assess the positional relationship between lower third molars (M3i) and the mandibular canal (MC) on panoramic radiographs (PRs). MATERIAL AND METHODS A total of 1444 M3s were manually annotated and labeled on 863 PRs as a reference. A deep-learning approach based on MobileNet-V2, in combination with a skeletonization algorithm and a signed distance method, was trained and validated on 733 PRs with 1227 M3s to classify the positional relationship between M3i and MC into three categories. Subsequently, the trained algorithm was applied to a test set consisting of 130 PRs (217 M3s). Accuracy, precision, sensitivity, specificity, negative predictive value, and F1-score were calculated. RESULTS The proposed method achieved a weighted accuracy of 0.951, precision of 0.943, sensitivity of 0.941, specificity of 0.800, negative predictive value of 0.865, and an F1-score of 0.938. CONCLUSION AI-enhanced assessment of PRs can objectively, accurately, and reproducibly determine the positional relationship between M3i and MC. CLINICAL SIGNIFICANCE The use of such an explainable AI system can assist clinicians in the intuitive positional assessment of lower third molars and mandibular canals. Further research is required to automatically assess the risk of alveolar nerve injury on panoramic radiographs.
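The classification metrics reported above are all derived from confusion-matrix counts. As a generic illustration (not the study's code), for one class treated as "positive" against the rest:

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard metrics from confusion-matrix counts
    (tp/fp/fn/tn = true/false positives and negatives)."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)                  # recall / true-positive rate
    specificity = tn / (tn + fp)                  # true-negative rate
    npv = tn / (tn + fn)                          # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"precision": precision, "sensitivity": sensitivity,
            "specificity": specificity, "npv": npv,
            "accuracy": accuracy, "f1": f1}
```

For the three-category task described above, such per-class values would be combined into the weighted averages the study reports; the sketch covers only the one-class-versus-rest case.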
Affiliation(s)
- Steven Kempers
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands; Department of Artificial Intelligence, Radboud University, Nijmegen, the Netherlands
- Pieter van Lierop
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
- Tzu-Ming Harry Hsu
- MIT Computer Science & Artificial Intelligence Laboratory, 32 Vassar St, Cambridge, MA 02139, United States
- Stefaan Bergé
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
- Hossein Ghaeminia
- Department of Oral and Maxillofacial Surgery, Rijnstate Hospital, Arnhem, the Netherlands
- Tong Xi
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
- Shankeeth Vinayahalingam
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands; Department of Artificial Intelligence, Radboud University, Nijmegen, the Netherlands; Department of Oral and Maxillofacial Surgery, Universitätsklinikum Münster, Münster, Germany

23
Oh S, Kim YJ, Kim J, Jung JH, Lim HJ, Kim BC, Kim KG. Deep learning-based prediction of osseointegration for dental implant using plain radiography. BMC Oral Health 2023; 23:208. [PMID: 37031221] [PMCID: PMC10082489] [DOI: 10.1186/s12903-023-02921-3]
Abstract
BACKGROUND In this study, we investigated whether deep learning-based prediction of osseointegration of dental implants using plain radiography is possible. METHODS Panoramic and periapical radiographs of 580 patients (1,206 dental implants) were used to train and test a deep learning model. Group 1 (338 patients, 591 dental implants) included implants that were radiographed immediately after implant placement, that is, when osseointegration had not yet occurred. Group 2 (242 patients, 615 dental implants) included implants radiographed after confirming successful osseointegration. A dataset was extracted using random sampling and was composed of training, validation, and test sets. For osseointegration prediction, we employed seven different deep learning models. Each deep learning model was built by performing the experiment 10 times. For each experiment, the dataset was randomly separated in a 60:20:20 ratio. For model evaluation, the specificity, sensitivity, accuracy, and area under the receiver operating characteristic curve (AUROC) of the models were calculated. RESULTS The mean specificity, sensitivity, and accuracy of the deep learning models were 0.780-0.857, 0.811-0.833, and 0.799-0.836, respectively. Furthermore, the mean AUROC values ranged from 0.890 to 0.922. The best model yielded an accuracy of 0.896, and the worst model yielded an accuracy of 0.702. CONCLUSION This study found that osseointegration of dental implants can be predicted to some extent through deep learning using plain radiography. This is expected to complement the evaluation methods of dental implant osseointegration that are currently widely used.
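AUROC, reported above, has an equivalent rank-based (Mann-Whitney) definition: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, with ties counted as half. A minimal sketch of that computation (not the study's code):

```python
def auroc(labels, scores):
    """AUROC via the rank (Mann-Whitney) formulation: fraction of
    positive/negative pairs in which the positive outscores the negative,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise form is O(P x N) and is practical for evaluation-sized datasets; production metric libraries compute the same value from sorted ranks.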
Affiliation(s)
- Seok Oh
- Gil Medical Center, Department of Biomedical Engineering, Gachon University College of Medicine, Incheon, 21565, Korea
- Young Jae Kim
- Gil Medical Center, Department of Biomedical Engineering, Gachon University College of Medicine, Incheon, 21565, Korea
- Jeseong Kim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon, 35233, Korea
- Joon Hyeok Jung
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon, 35233, Korea
- Hun Jun Lim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon, 35233, Korea
- Bong Chul Kim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon, 35233, Korea
- Kwang Gi Kim
- Gil Medical Center, Department of Biomedical Engineering, Gachon University College of Medicine, Incheon, 21565, Korea

24
Mohammad-Rahimi H, Rokhshad R, Bencharit S, Krois J, Schwendicke F. Deep learning: A primer for dentists and dental researchers. J Dent 2023; 130:104430. [PMID: 36682721] [DOI: 10.1016/j.jdent.2023.104430]
Abstract
OBJECTIVES Despite deep learning's wide adoption in dental artificial intelligence (AI) research, researchers from other dental fields and, even more so, dental professionals may find it challenging to understand and interpret deep learning studies, their employed methods, and their outcomes. The objective of this primer is to explain the basic concepts of deep learning, lay out the commonly used terms, and describe different deep learning approaches, their methods, and outcomes. METHODS This primer draws on recent review studies and medical primers as well as state-of-the-art research on AI and deep learning. RESULTS A basic understanding of deep learning models and of the various approaches to deep learning is presented, together with an overview of data management strategies for deep learning projects, including data collection, data curation, data annotation, and data preprocessing. Additionally, we provide a step-by-step guide for completing a real-world project. CONCLUSION Researchers and clinicians can benefit from this study by gaining insight into deep learning. It can be used to critically appraise existing work or to plan new deep learning projects. CLINICAL SIGNIFICANCE This study may be useful to dental researchers and professionals who are assessing and appraising deep learning studies within the field of dentistry.
Affiliation(s)
- Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Federal Republic of Germany
- Rata Rokhshad
- Department of Medicine, Section of Endocrinology, Nutrition, and Diabetes, Vitamin D, Boston University Medical Center, Boston, MA, USA
- Sompop Bencharit
- Department of Oral and Craniofacial Molecular Biology, Philips Institute for Oral Health Research, School of Dentistry, and Department of Biomedical Engineering, College of Engineering, Virginia Commonwealth University, Richmond, VA 23298, USA
- Joachim Krois
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Federal Republic of Germany
- Falk Schwendicke
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Federal Republic of Germany; Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Aßmannshauser Str. 4-6, Berlin 14197, Federal Republic of Germany

25
Automatic machine learning-based classification of mandibular third molar impaction status. J Oral Maxillofac Surg Med Pathol 2023. [DOI: 10.1016/j.ajoms.2022.12.010]
26
Hung KF, Yeung AWK, Bornstein MM, Schwendicke F. Personalized dental medicine, artificial intelligence, and their relevance for dentomaxillofacial imaging. Dentomaxillofac Radiol 2023; 52:20220335. [PMID: 36472627] [PMCID: PMC9793453] [DOI: 10.1259/dmfr.20220335]
Abstract
Personalized medicine refers to the tailoring of diagnostics and therapeutics to individuals based on one's biological, social, and behavioral characteristics. While personalized dental medicine is still far from being a reality, advanced artificial intelligence (AI) technologies with improved data analytic approaches are expected to integrate diverse data from the individual, setting, and system levels, which may facilitate a deeper understanding of the interaction of these multilevel data and therefore bring us closer to more personalized, predictive, preventive, and participatory dentistry, also known as P4 dentistry. In the field of dentomaxillofacial imaging, a wide range of AI applications, including several commercially available software options, have been proposed to assist dentists in the diagnosis and treatment planning of various dentomaxillofacial diseases, with performance similar or even superior to that of specialists. Notably, the impact of these dental AI applications on treatment decision, clinical and patient-reported outcomes, and cost-effectiveness has so far been assessed sparsely. Such information should be further investigated in future studies to provide patients, providers, and healthcare organizers a clearer picture of the true usefulness of AI in daily dental practice.
Affiliation(s)
- Kuo Feng Hung
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Andy Wai Kan Yeung
- Division of Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Michael M. Bornstein
- Department of Oral Health & Medicine, University Center for Dental Medicine Basel UZB, University of Basel, Basel, Switzerland
- Falk Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, Berlin, Germany

27
Kwon D, Ahn J, Kim CS, Kang DO, Paeng JY. A deep learning model based on concatenation approach to predict the time to extract a mandibular third molar tooth. BMC Oral Health 2022; 22:571. [PMID: 36476146] [PMCID: PMC9730580] [DOI: 10.1186/s12903-022-02614-3]
Abstract
BACKGROUND The time required for tooth extraction is the most important factor to consider before surgery. The purpose of this study was to create a practical predictive model for assessing the time to extract the mandibular third molar tooth using deep learning. The accuracy of the model was evaluated by comparing the extraction time predicted by deep learning with the actual time required for extraction. METHODS A total of 724 panoramic X-ray images and clinical data were used for artificial intelligence (AI) prediction of extraction time. Clinical data such as age, sex, maximum mouth opening, body weight, height, the time from the start of incision to the start of suture, and surgeon's experience were recorded. Data augmentation and weight balancing were used to improve the learning ability of the AI models. Extraction time predicted by the concatenated AI model was compared with the actual extraction time. RESULTS The final combined (CNN + MLP) model achieved an R value of 0.8315, an R-squared value of 0.6839, a p-value of less than 0.0001, and a mean absolute error (MAE) of 2.95 min on the test dataset. CONCLUSIONS Our proposed model for predicting the time to extract the mandibular third molar tooth performs well, with high accuracy, in clinical practice.
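The "concatenation approach" of the title refers to fusing image-derived features with tabular clinical variables (age, sex, mouth opening, and so on) into a single vector before the final regression output. A toy sketch of that fusion step, with hypothetical feature values and weights rather than the study's trained network:

```python
def fused_prediction(image_features, clinical_features, weights, bias):
    """Late fusion by concatenation: image features (e.g. from a CNN) and
    clinical features are joined into one vector, then mapped to a single
    predicted value by a linear output layer."""
    x = list(image_features) + list(clinical_features)  # the concatenation step
    assert len(x) == len(weights)
    return sum(w * v for w, v in zip(weights, x)) + bias

# hypothetical example: 3 image-derived features + 2 clinical features
t = fused_prediction([0.2, 0.5, 0.1], [1.0, 0.0],
                     weights=[1, 2, 3, 4, 5], bias=0.5)
```

In the actual study, the weights of such an output layer would be learned jointly with both feature extractors; the sketch shows only the shape of the fusion.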
Affiliation(s)
- Dohyun Kwon
- Department of Oral and Maxillofacial Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Irwon-Dong, Gangnam-Gu, Seoul, Republic of Korea
- Jaemyung Ahn
- Department of Oral and Maxillofacial Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Irwon-Dong, Gangnam-Gu, Seoul, Republic of Korea
- Chang-Soo Kim
- Department of Oral and Maxillofacial Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Irwon-Dong, Gangnam-Gu, Seoul, Republic of Korea
- Dong ohk Kang
- Department of Oral and Maxillofacial Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Irwon-Dong, Gangnam-Gu, Seoul, Republic of Korea
- Jun-Young Paeng
- Department of Oral and Maxillofacial Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Irwon-Dong, Gangnam-Gu, Seoul, Republic of Korea

28
Takebe K, Imai T, Kubota S, Nishimoto A, Amekawa S, Uzawa N. Deep learning model for the automated evaluation of contact between the lower third molar and inferior alveolar nerve on panoramic radiography. J Dent Sci 2022. [DOI: 10.1016/j.jds.2022.12.008]
29
Ariji Y, Mori M, Fukuda M, Katsumata A, Ariji E. Automatic visualization of the mandibular canal in relation to an impacted mandibular third molar on panoramic radiographs using deep learning segmentation and transfer learning techniques. Oral Surg Oral Med Oral Pathol Oral Radiol 2022; 134:749-757. [PMID: 36229373] [DOI: 10.1016/j.oooo.2022.05.014]
Abstract
OBJECTIVE The aim of this study was to create and assess a deep learning model using segmentation and transfer learning methods to visualize the proximity of the mandibular canal to an impacted third molar on panoramic radiographs. STUDY DESIGN Panoramic radiographs containing the mandibular canal and an impacted third molar were collected from 2 hospitals (Hospitals A and B). A total of 3200 areas were used for creating and evaluating the learning models. A source model was created using the data from Hospital A, transferred in simulation to Hospital B, and trained using various amounts of data from Hospital B to create target models. The same data were then applied to the target models to calculate the Dice coefficient, Jaccard index, and sensitivity. RESULTS The performance of target models trained using 200 or more data sets was equivalent to that of the source model tested using data obtained from the same hospital (Hospital A). CONCLUSIONS Sufficiently qualified models could delineate the mandibular canal in relation to an impacted third molar on panoramic radiographs using a segmentation technique. Transfer learning appears to be an effective method for creating such models using a relatively small number of data sets.
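The Dice coefficient and Jaccard index used above are standard overlap measures between a predicted and a reference segmentation mask; a minimal sketch (assuming flattened 0/1 masks, not the study's implementation):

```python
def overlap_metrics(pred_mask, true_mask):
    """Dice coefficient and Jaccard index for two binary masks,
    given as equal-length 0/1 sequences (e.g. flattened images)."""
    pred = {i for i, v in enumerate(pred_mask) if v}
    true = {i for i, v in enumerate(true_mask) if v}
    inter = len(pred & true)
    dice = 2 * inter / (len(pred) + len(true))   # 2|A∩B| / (|A| + |B|)
    jaccard = inter / len(pred | true)           # |A∩B| / |A∪B|
    return dice, jaccard
```

The two metrics are monotonically related (Jaccard = Dice / (2 - Dice)), so they rank models identically; sensitivity adds the recall side of the comparison.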
Affiliation(s)
- Yoshiko Ariji
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan; Department of Oral Radiology, Osaka Dental University, School of Dentistry, Osaka, Japan
- Mizuho Mori
- Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan
- Motoki Fukuda
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Akitoshi Katsumata
- Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan
- Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan

30
Al-Sarem M, Al-Asali M, Alqutaibi AY, Saeed F. Enhanced tooth region detection using pretrained deep learning models. Int J Environ Res Public Health 2022; 19:15414. [PMID: 36430133] [PMCID: PMC9692549] [DOI: 10.3390/ijerph192215414]
Abstract
The rapid development of artificial intelligence (AI) has led to the emergence of many new technologies in the healthcare industry. In dentistry, the patient's panoramic radiographic or cone beam computed tomography (CBCT) images are used for implant placement planning to find the correct implant position and eliminate surgical risks. This study aimed to develop a deep learning-based model that detects the position of missing teeth in a dataset segmented from CBCT images. Five hundred CBCT images were included in this study. After preprocessing, the datasets were randomized and divided into 70% training, 20% validation, and 10% test data. A total of six pretrained convolutional neural network (CNN) models were used in this study: AlexNet, VGG16, VGG19, ResNet50, DenseNet169, and MobileNetV3. In addition, the proposed models were tested with and without applying the segmentation technique. For the normal teeth class, the precision of the proposed pretrained DL models was above 0.90, and the experimental results showed the superiority of DenseNet169, with a precision of 0.98; the remaining models, MobileNetV3, VGG19, ResNet50, VGG16, and AlexNet, obtained precisions of 0.95, 0.94, 0.94, 0.93, and 0.92, respectively. The DenseNet169 model performed well at the different stages of CBCT-based detection and classification, with a segmentation accuracy of 93.3% and an accuracy of 89% in classifying missing tooth regions. As a result, the use of this model may represent a promising time-saving tool for dental implantologists and a significant step toward automated dental implant planning.
Affiliation(s)
- Mohammed Al-Sarem
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia; Department of Computer Science, Sheba Region University, Marib 14400, Yemen
- Mohammed Al-Asali
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- Ahmed Yaseen Alqutaibi
- Department of Prosthodontics and Implant Dentistry, College of Dentistry, Taibah University, Al Madinah 41311, Saudi Arabia; Department of Prosthodontics, College of Dentistry, Ibb University, Ibb 70270, Yemen
- Faisal Saeed
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia; DAAI Research Group, Department of Computing and Data Science, School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7XG, UK

31
Jeong SH, Woo MW, Shin DS, Yeom HG, Lim HJ, Kim BC, Yun JP. Three-dimensional postoperative results prediction for orthognathic surgery through deep learning-based alignment network. J Pers Med 2022; 12:998. [PMID: 35743782] [PMCID: PMC9225553] [DOI: 10.3390/jpm12060998]
Abstract
To date, for the diagnosis of dentofacial dysmorphosis, we have relied almost entirely on reference points, planes, and angles. This is time consuming, and it is also greatly influenced by the skill level of the practitioner. To solve this problem, we wanted to know whether deep neural networks could predict postoperative results of orthognathic surgery without relying on reference points, planes, and angles. We used three-dimensional point cloud data of the skulls of 269 patients. The proposed method has two main stages for prediction. In step 1, the skull is divided into six parts through the segmentation network. In step 2, three-dimensional transformation parameters are predicted through the alignment network. The ground truth values of the transformation parameters are calculated through the iterative closest point (ICP) algorithm, which aligns each preoperative part of the skull to the corresponding postoperative part. We compare PointNet, PointNet++, and PointConv as the feature extractor of the alignment network. Moreover, we design a new loss function, which considers the distance error of the transformed points, for better accuracy. The accuracy, mean intersection over union (mIoU), and dice coefficient (DC) of the first segmentation network, which divides the upper and lower parts of the skull, were 0.9998, 0.9994, and 0.9998, respectively. For the second segmentation network, which divides the lower part of the skull into 5 parts, they were 0.9949, 0.9900, and 0.9949, respectively. The mean absolute errors of the transverse, anterior-posterior, and vertical distances of part 2 (maxilla) were 0.765 mm, 1.455 mm, and 1.392 mm, respectively. For part 3 (mandible), they were 1.069 mm, 1.831 mm, and 1.375 mm, respectively, and for part 4 (chin), they were 1.913 mm, 2.340 mm, and 1.257 mm, respectively. With this approach, postoperative results can be predicted by simply entering the point cloud data from computed tomography.
Affiliation(s)
- Seung Hyun Jeong
- Advanced Mechatronics R&D Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea
- Min Woo Woo
- Advanced Mechatronics R&D Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea; School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Korea
- Dong Sun Shin
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea
- Han Gyeol Yeom
- Department of Oral and Maxillofacial Radiology, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea
- Hun Jun Lim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea
- Bong Chul Kim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea
- Jong Pil Yun
- Advanced Mechatronics R&D Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea; KITECH School, University of Science and Technology, Daejeon 34113, Korea

32
Rasteau S, Ernenwein D, Savoldelli C, Bouletreau P. Artificial intelligence for oral and maxillo-facial surgery: A narrative review. J Stomatol Oral Maxillofac Surg 2022; 123:276-282. [PMID: 35091121] [DOI: 10.1016/j.jormas.2022.01.010]
Abstract
Artificial Intelligence (AI) is a set of technologies that simulate human cognition in order to address a specific problem. The improvement in computing speed, the exponential production and the routine collection of data have led to the rapid development of AI in the health sector. In this review, we propose to provide surgeons with the essential technical elements to help them understand the possibilities offered by AI and to review the current applications of AI for oral and maxillofacial surgery (OMFS). The review of the literature reveals a real research boom of AI in all fields in OMFS. The algorithms used are related to machine learning, with a strong representation of the convolutional neural networks specific to deep learning. The complex architecture of these networks gives them the capacity to extract and process the elementary characteristics of an image, and they are therefore particularly used for diagnostic purposes on medical imagery or facial photography. We identified representative articles dealing with AI algorithms providing assistance in diagnosis, therapeutic decision, preoperative planning, or prediction and evaluation of the outcomes. Thanks to their learning, classification, prediction and detection capabilities, AI algorithms complement human skills while limiting their imperfections. However, these algorithms should be subject to rigorous clinical evaluation, and ethical reflection on data protection should be systematically conducted.
Affiliation(s)
- Simon Rasteau
- Maxillo-Facial Surgery, Facial Plastic Surgery, Stomatology and Oral Surgery, Hospices Civils de Lyon, Lyon-Sud Hospital - Claude-Bernard Lyon 1 University, 165 Chemin du Grand-Revoyet, Pierre-Bénite 69310, France
- Didier Ernenwein
- Department of Pediatric Oral & Maxillofacial & Plastic Surgery, Children's Hospital Robert-Debré, Paris-Diderot University, Paris, France
- Charles Savoldelli
- University Institute of the Face and Neck, Côte d'Azur University, Nice University Hospital, 31 Avenue de Valombrose, Nice 06100, France
- Pierre Bouletreau
- Maxillo-Facial Surgery, Facial Plastic Surgery, Stomatology and Oral Surgery, Hospices Civils de Lyon, Lyon-Sud Hospital - Claude-Bernard Lyon 1 University, 165 Chemin du Grand-Revoyet, Pierre-Bénite 69310, France

33
Potential and impact of artificial intelligence algorithms in dento-maxillofacial radiology. Clin Oral Investig 2022; 26:5535-5555. [PMID: 35438326] [DOI: 10.1007/s00784-022-04477-y]
Abstract
OBJECTIVES Novel artificial intelligence (AI) learning algorithms in dento-maxillofacial radiology (DMFR) are continuously being developed and improved using advanced convolutional neural networks. This review provides an overview of the potential and impact of AI algorithms in DMFR. MATERIALS AND METHODS A narrative review was conducted on the literature on AI algorithms in DMFR. RESULTS In the field of DMFR, AI algorithms were mainly proposed for (1) automated detection of dental caries, periapical pathologies, root fracture, periodontal/peri-implant bone loss, and maxillofacial cysts/tumors; (2) classification of mandibular third molars, skeletal malocclusion, and dental implant systems; (3) localization of cephalometric landmarks; and (4) improvement of image quality. Data insufficiency, overfitting, and the lack of interpretability are the main issues in the development and use of image-based AI algorithms. Several strategies have been suggested to address these issues, such as data augmentation, transfer learning, semi-supervised training, few-shot learning, and gradient-weighted class activation mapping. CONCLUSIONS Further integration of relevant AI algorithms into one fully automatic end-to-end intelligent system for possible multi-disciplinary applications is very likely to be a field of increased interest in the future. CLINICAL RELEVANCE This review provides dental practitioners and researchers with a comprehensive understanding of the current development, performance, issues, and prospects of image-based AI algorithms in DMFR.
34
Lingual bone thickness in the apical region of the horizontal mandibular third molar: A cross-sectional study in young Japanese. PLoS One 2022; 17:e0263094. [PMID: 35077519] [PMCID: PMC8789189] [DOI: 10.1371/journal.pone.0263094]
Abstract
Background Perforation of the lingual plate in the apical region of mandibular third molars increases the risk of aberration and migration of the root tip and the risk of lingual nerve injury. The aim of this study was to analyze anatomical information, including relationships between the apical region of horizontally impacted mandibular third molars and lingual plates, in young Japanese patients. Methods Japanese patients with horizontally impacted third molars who underwent CT examination as a preoperative assessment for mandibular third molar extraction were included in this study, and anatomical characteristics in the apical region of the right mandibular third molar were analyzed. Results A total of 121 patients were included based on the inclusion and exclusion criteria of this study. The mean and standard deviation of the bone thickness on the lingual side of the mandibular third molar in the apical region was 1.5 ± 1.6 mm, and the absence of lingual cortical bone in the apical region, namely "perforation", was observed in 44 patients. The statistical analysis revealed the following predictors of cases with perforation: gender, age, and the available space evaluated by the Pell and Gregory classification. Conclusions This study clarified that perforation is sometimes observed in young Japanese patients and that its predictors are gender, age, and the available space evaluated by the Pell and Gregory classification.
|
35
|
Choi E, Lee S, Jeong E, Shin S, Park H, Youm S, Son Y, Pang K. Artificial intelligence in positioning between mandibular third molar and inferior alveolar nerve on panoramic radiography. Sci Rep 2022; 12:2456. [PMID: 35165342 PMCID: PMC8844031 DOI: 10.1038/s41598-022-06483-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Accepted: 01/06/2022] [Indexed: 11/09/2022] Open
Abstract
Determining the exact positional relationship between the mandibular third molar (M3) and the inferior alveolar nerve (IAN) is important for surgical extractions. Panoramic radiography is the most common dental imaging test. The purposes of this study were to develop an artificial intelligence (AI) model to determine two positional relationships (true contact and bucco-lingual position) between the M3 and the IAN when they overlapped in panoramic radiographs, and to compare its performance with that of oral and maxillofacial surgery (OMFS) specialists. A total of 571 panoramic images of M3s from 394 patients were used for this study. Among the images, 202 were classified as true contact, 246 as intimate, 61 as IAN buccal position, and 62 as IAN lingual position. A deep convolutional neural network model with the ResNet-50 architecture was trained for each task. We randomly split the dataset into 75% for training and validation and 25% for testing. Model performance was superior in bucco-lingual position determination (accuracy 0.76, precision 0.83, recall 0.67, and F1 score 0.73) to true contact position determination (accuracy 0.63, precision 0.62, recall 0.63, and F1 score 0.61). The AI exhibited much higher accuracy in both position determinations than the OMFS specialists. In determining true contact position, OMFS specialists demonstrated an accuracy of 52.68% to 69.64%, while the AI showed an accuracy of 72.32%. In determining bucco-lingual position, OMFS specialists showed an accuracy of 32.26% to 48.39%, and the AI showed an accuracy of 80.65%. Moreover, Cohen's kappa exhibited a substantial level of agreement for the AI (0.61) and poor agreement for the OMFS specialists in bucco-lingual position determination. Determining the positional relationship between the M3 and the IAN is possible using AI, especially for bucco-lingual positioning. The model could be used to support clinicians in the decision-making process for M3 treatment.
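The Cohen's kappa reported here (0.61 for the AI, a substantial level of agreement) is chance-corrected agreement between two label sequences. A minimal illustrative implementation, not taken from the study:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    # Observed agreement: fraction of items both raters labeled identically.
    po = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    pe = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in labels)
    if pe == 1:
        return 1.0  # both raters always use the same single label
    return (po - pe) / (1 - pe)

print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0]))  # 0.5
```

On the usual Landis-Koch scale, 0.61-0.80 is "substantial" agreement, which is how the study characterizes the AI's score.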
Affiliation(s)
- Eunhye Choi: Department of Oral Medicine and Oral Diagnosis, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Soohong Lee: Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
- Eunjae Jeong: Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
- Seokwon Shin: Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
- Hyunwoo Park: Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
- Sekyoung Youm: Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
- Youngdoo Son: Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
- KangMi Pang: Department of Oral and Maxillofacial Surgery, Seoul National University Dental Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
|
36
|
Evaluation of multi-task learning in deep learning-based positioning classification of mandibular third molars. Sci Rep 2022; 12:684. [PMID: 35027629 PMCID: PMC8758752 DOI: 10.1038/s41598-021-04603-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Accepted: 12/21/2021] [Indexed: 01/18/2023] Open
Abstract
The Pell and Gregory and Winter's classifications are frequently used to classify mandibular third molars and are crucial for safe tooth extraction. This study aimed to evaluate the classification accuracy of convolutional neural network (CNN) deep learning models using cropped panoramic radiographs based on these classifications. We compared the diagnostic accuracy of single-task and multi-task learning after labeling 1330 images of mandibular third molars from digital radiographs taken at the Department of Oral and Maxillofacial Surgery at a general hospital (2014-2021). The mandibular third molar classifications were analyzed using a VGG-16 CNN model. We statistically evaluated performance metrics [accuracy, precision, recall, F1 score, and area under the curve (AUC)] for each prediction. We found that single-task learning was superior to multi-task learning for all metrics (all p < 0.05), with large effect sizes and low p-values. Recall and F1 scores for position classification showed medium effect sizes in both single- and multi-task learning. To our knowledge, this is the first deep learning study to compare single-task and multi-task learning for the classification of mandibular third molars. Our results demonstrated the efficacy of implementing the Pell and Gregory and Winter's classifications as specific respective tasks.
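The precision, recall, and F1 metrics compared above follow the standard one-vs-rest definitions for a given class. A minimal sketch (the `prf1` helper is hypothetical, not the study's code):

```python
def prf1(y_true, y_pred, positive):
    """Precision, recall and F1 for one class, treated one-vs-rest."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)  # true positives
    fp = sum(t != positive and p == positive for t, p in pairs)  # false positives
    fn = sum(t == positive and p != positive for t, p in pairs)  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = prf1(["A", "A", "B", "B"], ["A", "B", "B", "B"], positive="B")
print(round(f, 2))  # 0.8
```

For a multi-class scheme such as the Pell and Gregory positions, these per-class values are typically macro-averaged across classes.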
|
37
|
Automated segmentation of articular disc of the temporomandibular joint on magnetic resonance images using deep learning. Sci Rep 2022; 12:221. [PMID: 34997167 PMCID: PMC8741780 DOI: 10.1038/s41598-021-04354-w] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Accepted: 12/20/2021] [Indexed: 02/06/2023] Open
Abstract
Temporomandibular disorders are typically accompanied by a number of clinical manifestations that involve pain and dysfunction of the masticatory muscles and temporomandibular joint. The most important subgroup of articular abnormalities in patients with temporomandibular disorders includes patients with different forms of articular disc displacement and deformation. Here, we propose a fully automated articular disc detection and segmentation system to support the diagnosis of temporomandibular disorder on magnetic resonance imaging. This system uses deep learning-based semantic segmentation approaches. The study included a total of 217 magnetic resonance images from 10 patients with anterior displacement of the articular disc and 10 healthy control subjects with normal articular discs. These images were used to evaluate three deep learning-based semantic segmentation approaches: our proposed convolutional neural network encoder-decoder named 3DiscNet (Detection for Displaced articular DISC using convolutional neural NETwork), U-Net, and SegNet-Basic. Of the three algorithms, 3DiscNet and SegNet-Basic showed comparably good metrics (Dice coefficient, sensitivity, and positive predictive value). This study provides a proof-of-concept for a fully automated deep learning-based segmentation methodology for articular discs on magnetic resonance images, and obtained promising initial results, indicating that the method could potentially be used in clinical practice for the assessment of temporomandibular disorders.
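The Dice coefficient, sensitivity, and positive predictive value used above to compare the three segmentation networks can all be computed from flattened binary masks. An illustrative sketch, not the authors' implementation:

```python
def overlap_counts(pred, truth):
    """True positives and mask sizes for two flat binary masks."""
    tp = sum(p and t for p, t in zip(pred, truth))
    return tp, sum(pred), sum(truth)

def dice(pred, truth):
    """Dice similarity coefficient: 2*TP / (|pred| + |truth|)."""
    tp, n_pred, n_truth = overlap_counts(pred, truth)
    return 2 * tp / (n_pred + n_truth)

def sensitivity(pred, truth):
    """Fraction of true disc pixels that were segmented: TP / |truth|."""
    tp, _, n_truth = overlap_counts(pred, truth)
    return tp / n_truth

def ppv(pred, truth):
    """Fraction of segmented pixels that are truly disc: TP / |pred|."""
    tp, n_pred, _ = overlap_counts(pred, truth)
    return tp / n_pred

print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```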
|
38
|
Automated Prediction of Extraction Difficulty and Inferior Alveolar Nerve Injury for Mandibular Third Molar Using a Deep Neural Network. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12010475] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Extraction of mandibular third molars is a common procedure in oral and maxillofacial surgery. Few studies simultaneously predict the extraction difficulty of the mandibular third molar and the complications that may occur. Thus, we propose a method of automatically detecting mandibular third molars in panoramic radiographic images and predicting the extraction difficulty and the likelihood of inferior alveolar nerve (IAN) injury. Our dataset consists of 4903 panoramic radiographic images acquired from various dental hospitals. Seven dentists annotated detection and classification labels. The detection model locates the mandibular third molar in the panoramic radiographic image. The region of interest (ROI), which includes the detected mandibular third molar, adjacent teeth, and the IAN, is cropped from the panoramic radiographic image. The classification models use the ROI as input to predict the extraction difficulty and the likelihood of IAN injury. The achieved detection performance was 99.0% mAP at an intersection over union (IoU) threshold of 0.5. In addition, we achieved 83.5% accuracy for the prediction of extraction difficulty and 81.1% accuracy for the prediction of the likelihood of IAN injury. We demonstrated that a deep learning method can support the diagnosis for extracting the mandibular third molar.
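Detection at "mAP at IoU 0.5" counts a predicted box as correct when it overlaps the ground-truth box with intersection over union of at least 0.5. A minimal box-IoU sketch (hypothetical helper, not the study's code):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# Two unit-height boxes of width 2 overlapping by half.
print(round(box_iou((0, 0, 2, 2), (1, 0, 3, 2)), 4))  # 0.3333
```

At this IoU the detection would be scored as a miss under the 0.5 threshold; mAP then averages precision over recall levels (and here over the single M3 class).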
|
39
|
Deep Learning-Based Prediction of Paresthesia after Third Molar Extraction: A Preliminary Study. Diagnostics (Basel) 2021; 11:diagnostics11091572. [PMID: 34573914 PMCID: PMC8469771 DOI: 10.3390/diagnostics11091572] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 08/25/2021] [Accepted: 08/28/2021] [Indexed: 01/04/2023] Open
Abstract
The purpose of this study was to determine whether convolutional neural networks (CNNs) can predict paresthesia of the inferior alveolar nerve using panoramic radiographic images before extraction of the mandibular third molar. The dataset consisted of a total of 300 preoperative panoramic radiographic images of patients scheduled for mandibular third molar extraction. A total of 100 images of patients who had paresthesia after tooth extraction were classified as Group 1, and 200 images of patients without paresthesia were classified as Group 2. The dataset was randomly divided into a training and validation set (n = 150 [50%]) and a test set (n = 150 [50%]). The SSD300 and ResNet-18 CNN architectures were used for deep learning. The average accuracy, sensitivity, specificity, and area under the curve were 0.827, 0.84, 0.82, and 0.917, respectively. This study revealed that CNNs can assist in the prediction of paresthesia of the inferior alveolar nerve after third molar extraction using panoramic radiographic images.
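A random 50/50 split like the one described can be sketched as follows (illustrative only; the seed and helper name are assumptions, not details from the study):

```python
import random

def split_dataset(items, test_fraction=0.5, seed=42):
    """Shuffle a dataset and split it into train+validation and test sets."""
    items = list(items)
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    rng.shuffle(items)
    n_test = int(len(items) * test_fraction)
    return items[n_test:], items[:n_test]

# 300 image IDs -> 150 train/validation, 150 test, with no overlap.
train_val, test_set = split_dataset(range(300))
print(len(train_val), len(test_set))  # 150 150
```

With a 1:2 class imbalance such as the 100/200 grouping here, a stratified split (shuffling each group separately) would keep the class ratio equal across the two halves.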
|
40
|
Deep learning-based evaluation of the relationship between mandibular third molar and mandibular canal on CBCT. Clin Oral Investig 2021; 26:981-991. [PMID: 34312683 DOI: 10.1007/s00784-021-04082-5] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Accepted: 07/13/2021] [Indexed: 10/20/2022]
Abstract
OBJECTIVES The objective of our study was to develop and validate a deep learning approach based on convolutional neural networks (CNNs) for automatic detection of the mandibular third molar (M3) and the mandibular canal (MC) and evaluation of the relationship between them on CBCT. MATERIALS AND METHODS A dataset of 254 CBCT scans with annotations by radiologists was used for training, validation, and testing. The proposed approach consisted of two modules: (1) detection and pixel-wise segmentation of the M3 and MC based on U-Nets; (2) M3-MC relation classification based on ResNet-34. Performance was evaluated on the test set. The classification performance of our approach was compared with that of two residents in oral and maxillofacial radiology. RESULTS For segmentation performance, the M3 had a mean Dice similarity coefficient (mDSC) of 0.9730 and a mean intersection over union (mIoU) of 0.9606; the MC had an mDSC of 0.9248 and an mIoU of 0.9003. The classification models achieved a mean sensitivity of 90.2%, a mean specificity of 95.0%, and a mean accuracy of 93.3%, which was on par with the residents. CONCLUSIONS Our approach based on CNNs demonstrated an encouraging performance for the automatic detection and evaluation of the M3 and MC on CBCT. Clinical relevance An automated approach based on CNNs for detection and evaluation of the M3 and MC on CBCT has been established, which can be utilized to improve diagnostic efficiency and facilitate the precision diagnosis and treatment of the M3.
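For any single segmentation, the two overlap metrics reported here are algebraically linked: DSC = 2*IoU / (1 + IoU). Note that the identity holds per case but not for means averaged over cases, so the reported mDSC and mIoU values need not satisfy it exactly. An illustrative check (hypothetical helper names):

```python
def dsc_from_iou(iou):
    """DSC from IoU for one segmentation: with I = TP/(TP+FP+FN),
    DSC = 2*TP/(2*TP+FP+FN) = 2*I/(1+I)."""
    return 2 * iou / (1 + iou)

def iou_from_dsc(dsc):
    """Inverse relation: IoU = DSC / (2 - DSC)."""
    return dsc / (2 - dsc)

# A half-overlapping segmentation: IoU 0.5 corresponds to DSC 2/3.
print(round(dsc_from_iou(0.5), 4))  # 0.6667
```

Because the mapping is monotone, ranking methods by DSC or by IoU gives the same order on any single case.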
|
41
|
Classification of caries in third molars on panoramic radiographs using deep learning. Sci Rep 2021; 11:12609. [PMID: 34131266 PMCID: PMC8206082 DOI: 10.1038/s41598-021-92121-2] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Accepted: 05/25/2021] [Indexed: 11/15/2022] Open
Abstract
The objective of this study is to assess the classification accuracy of dental caries on panoramic radiographs using deep-learning algorithms. A convolutional neural network (CNN) based on MobileNet V2 was trained on a reference dataset consisting of 400 cropped panoramic images for the classification of carious lesions in mandibular and maxillary third molars. For this pilot study, the trained MobileNet V2 was applied to a test set consisting of 100 cropped panoramic radiographs (PRs). The classification accuracy and the area under the curve (AUC) were calculated. The proposed method achieved an accuracy of 0.87, a sensitivity of 0.86, a specificity of 0.88, and an AUC of 0.90 for the classification of carious lesions of third molars on PRs. A high accuracy was achieved in caries classification in third molars based on the MobileNet V2 algorithm as presented. This is beneficial for the further development of a deep-learning-based automated third molar removal assessment in the future.
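The reported AUC of 0.90 can be read as the probability that a randomly chosen carious image receives a higher model score than a randomly chosen non-carious one. A minimal rank-based sketch of that interpretation (not the study's code):

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of (positive, negative) score pairs the
    positive wins; ties count as half a win (Mann-Whitney formulation)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Scores for 2 carious and 2 caries-free crops: 3 of 4 pairs are ordered correctly.
print(auc([0.9, 0.8], [0.7, 0.85]))  # 0.75
```

Unlike accuracy, this value is independent of the classification threshold, which is why AUC is reported alongside sensitivity and specificity.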
|
42
|
Jeong SH, Yun JP, Yeom HG, Kim HK, Kim BC. Deep-Learning-Based Detection of Cranio-Spinal Differences between Skeletal Classification Using Cephalometric Radiography. Diagnostics (Basel) 2021; 11:591. [PMID: 33806132 PMCID: PMC8064489 DOI: 10.3390/diagnostics11040591] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 03/10/2021] [Accepted: 03/22/2021] [Indexed: 12/22/2022] Open
Abstract
The aim of this study was to reveal cranio-spinal differences between skeletal classes using convolutional neural networks (CNNs). Transverse and longitudinal cephalometric images of 832 patients (365 males and 467 females) were used for training and testing of the CNNs. Labeling was performed such that the jawbone was sufficiently masked, while the parts other than the jawbone were minimally masked. DenseNet was used as the feature extractor. Five random-sampling cross-validations were performed for two datasets. The average and maximum accuracies of the five cross-validations were 90.43% and 92.54% for test 1 (evaluation of the entire posterior-anterior (PA) and lateral cephalometric images) and 88.17% and 88.70% for test 2 (evaluation of the PA and lateral cephalometric images with the mandible obscured). In this study, we found that even when the jawbones of class I (normal mandible), class II (retrognathism), and class III (prognathism) are masked, their identification is possible through deep learning applied only to the cranio-spinal area. This suggests that cranio-spinal differences exist between the classes.
Affiliation(s)
- Seung Hyun Jeong: Safety System Research Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea
- Jong Pil Yun: Safety System Research Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea
- Han-Gyeol Yeom: Department of Oral and Maxillofacial Radiology, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon 35233, Korea
- Hwi Kang Kim: Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon 35233, Korea
- Bong Chul Kim: Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon 35233, Korea
|