1
Vilcapoma P, Parra Meléndez D, Fernández A, Vásconez IN, Hillmann NC, Gatica G, Vásconez JP. Comparison of Faster R-CNN, YOLO, and SSD for Third Molar Angle Detection in Dental Panoramic X-rays. Sensors (Basel) 2024; 24:6053. [PMID: 39338799] [PMCID: PMC11435645] [DOI: 10.3390/s24186053]
Abstract
The use of artificial intelligence (AI) algorithms has gained importance for dental applications in recent years. Analyzing AI information from different sensor data such as images or panoramic radiographs (panoramic X-rays) can help to improve medical decisions and achieve early diagnosis of different dental pathologies. In particular, the use of deep learning (DL) techniques based on convolutional neural networks (CNNs) has obtained promising results in image-based dental applications, in which approaches based on classification, detection, and segmentation are being studied with growing interest. However, several challenges remain, such as data quality and quantity, the variability among categories, and the analysis of the possible bias and variance associated with each dataset distribution. This study aims to compare the performance of three deep learning object detection models (Faster R-CNN, YOLO V2, and SSD) using different ResNet architectures (ResNet-18, ResNet-50, and ResNet-101) as feature extractors for detecting and classifying third molar angles in panoramic X-rays according to Winter's classification criterion. Each object detection architecture was trained, calibrated, validated, and tested with each of the three feature-extraction CNNs (ResNet-18, ResNet-50, and ResNet-101), the networks that best fit our dataset distribution. Based on these detection networks, we detected four categories of third molar angulation in panoramic X-rays using Winter's classification criterion, which characterizes the third molar's position relative to the second molar's longitudinal axis. The detected categories are distoangular, vertical, mesioangular, and horizontal. For training, we used a total of 644 panoramic X-rays. The results on the testing dataset reached up to 99% mean average accuracy, with YOLO V2 proving the most effective at solving the third molar angle detection problem. These results demonstrate that the use of CNNs for object detection in panoramic radiographs represents a promising solution in dental applications.
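Winter's criterion, as applied in this study, classifies the third molar by the angle between its long axis and the long axis of the adjacent second molar. A minimal sketch of that angular rule is given below; the numeric cut-offs and the sign convention are illustrative assumptions, not the thresholds used by the authors.

```python
import numpy as np

# Illustrative cut-offs only; published Winter thresholds vary slightly between studies.
def winter_category(m3_axis, m2_axis):
    """Classify a lower third molar by the signed angle (degrees) between its long
    axis and the long axis of the adjacent second molar."""
    m3 = np.asarray(m3_axis, dtype=float)
    m2 = np.asarray(m2_axis, dtype=float)
    cross = m2[0] * m3[1] - m2[1] * m3[0]          # z-component of the 2D cross product
    angle = np.degrees(np.arctan2(cross, np.dot(m2, m3)))
    if abs(angle) <= 10:
        return "vertical"
    if abs(angle) >= 80:
        return "horizontal"
    return "mesioangular" if angle > 0 else "distoangular"

print(winter_category((-0.5, 1.0), (0.0, 1.0)))    # ~+27 degrees under this convention -> mesioangular
```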
Affiliation(s)
- Piero Vilcapoma
- Faculty of Engineering, Universidad Andres Bello, Santiago 7500735, Chile
- Alejandra Fernández
- Laboratorio de Odontología Traslacional, Facultad de Odontología, UNAB, Santiago 7591538, Chile
- Ingrid Nicole Vásconez
- Centro de Biotecnología Daniel Alkalay Lowitt, Universidad Técnica Federico Santa María, Valparaiso 2390136, Chile
- Nicolás Corona Hillmann
- Laboratorio de Odontología Traslacional, Facultad de Odontología, UNAB, Santiago 7591538, Chile
- Gustavo Gatica
- Faculty of Engineering, Universidad Andres Bello, Santiago 7500735, Chile
- Juan Pablo Vásconez
- Energy Transformation Center, Faculty of Engineering, Universidad Andres Bello, Santiago 7500971, Chile
2
Lin J, Liu J, Liu Z, Fu W, Cai H. Effect of concentrated growth factor on wound healing, side effects, and postoperative complications following third molar surgery. J Stomatol Oral Maxillofac Surg 2024:102031. [PMID: 39236786] [DOI: 10.1016/j.jormas.2024.102031]
Abstract
BACKGROUND Third molar surgery often results in postoperative complications such as pain, trismus, and facial swelling due to surgical trauma. Concentrated Growth Factor (CGF), a third-generation platelet concentrate, is believed to enhance wound healing due to its rich content of growth factors and fibrin. METHODS This systematic review followed PRISMA guidelines and included a search of PubMed, Embase, and Cochrane Library up to April 18, 2024. Randomized controlled trials involving CGF-treated versus non-CGF-treated patients undergoing third molar surgery were included. Risk of bias was assessed using the Cochrane Collaboration RoB 2.0. RESULTS Ten studies were included. CGF significantly improved wound healing, with enhanced soft and hard tissue recovery. Pain relief was notable on postoperative days 3 and 7, although results varied. CGF reduced facial swelling significantly on days 3 and 7 post-surgery. Trismus outcomes were mixed, with some studies reporting significant alleviation and others showing no advantage. CGF showed potential in reducing dry socket incidence, though evidence was not robust. CONCLUSIONS CGF appears to promote wound healing and reduce postoperative complications such as pain and swelling after third molar surgery. However, its effects on trismus and dry socket incidence remain controversial. Further research with standardized measures is needed to confirm these findings.
Affiliation(s)
- Jingwen Lin
- Department of Pharmacy, Fujian Medical University Union Hospital, Fuzhou, Fujian, People's Republic of China; The School of Pharmacy, Fujian Medical University, Fuzhou, Fujian, People's Republic of China
- Jiaming Liu
- College of Stomatology, Xinjiang Medical University, Xinjiang, People's Republic of China
- Zhexuan Liu
- The School of Pharmacy, Fujian Medical University, Fuzhou, Fujian, People's Republic of China
- Wu Fu
- Department of Pharmacy, Fujian Medical University Union Hospital, Fuzhou, Fujian, People's Republic of China; The School of Pharmacy, Fujian Medical University, Fuzhou, Fujian, People's Republic of China
- Hongfu Cai
- Department of Pharmacy, Fujian Medical University Union Hospital, Fuzhou, Fujian, People's Republic of China.
3
Qiu P, Cao R, Li Z, Huang J, Zhang H, Zhang X. Applications of artificial intelligence for surgical extraction in stomatology: a systematic review. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:346-361. [PMID: 38834501] [DOI: 10.1016/j.oooo.2024.05.002]
Abstract
OBJECTIVES Artificial intelligence (AI) has been extensively used in the field of stomatology over the past several years. This study aimed to evaluate the effectiveness of AI-based models in the procedure, assessment, and treatment planning of surgical extraction. STUDY DESIGN Following Preferred Reporting Items for Systematic Reviews and Meta-analyses guidelines, a comprehensive search was conducted on the Web of Science, PubMed/MEDLINE, Embase, and Scopus databases, covering English publications up to September 2023. Two reviewers performed the study selection and data extraction independently. Only original research studies utilizing AI in surgical extraction in stomatology were included. The Cochrane risk of bias tool for randomized trials (RoB 2) was selected to perform the quality assessment of the selected literature. RESULTS From 2,336 retrieved references, 35 studies were deemed eligible. Among them, 28 studies reported the pioneering role of AI in segmentation, classification, and detection, aligning with clinical needs. In addition, another 7 studies suggested promising results in tooth extraction decision-making, but further model refinement and validation were required. CONCLUSIONS Integration of AI in surgical extraction in stomatology has significantly progressed, enhancing decision-making accuracy. Combining and comparing algorithmic outcomes across studies is essential for determining optimal clinical applications in the future.
Affiliation(s)
- Piaopiao Qiu
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Rongkai Cao
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Zhaoyang Li
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Jiaqi Huang
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Huasheng Zhang
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China
- Xueming Zhang
- Department of Oral and Maxillofacial Surgery, Shanghai Engineering Research Center of Tooth Restoration and Regeneration & Tongji Research Institute of Stomatology, Stomatological Hospital and Dental School, Tongji University, Shanghai, China.
4
Fang X, Zhang S, Wei Z, Wang K, Yang G, Li C, Han M, Du M. Automatic detection of the third molar and mandibular canal on panoramic radiographs based on deep learning. J Stomatol Oral Maxillofac Surg 2024; 125:101946. [PMID: 38857691] [DOI: 10.1016/j.jormas.2024.101946]
Abstract
PURPOSE This study aims to develop a deep learning framework for the automatic detection of the positional relationship between the mandibular third molar (M3) and the mandibular canal (MC) on panoramic radiographs (PRs), to assist doctors in assessing and planning appropriate surgical interventions. METHODS Datasets D1 and D2 were obtained by collecting 253 PRs from a hospital and 197 PRs from online platforms. The RPIFormer model proposed in this study was trained and validated on D1 to create a segmentation model. The CycleGAN model was trained and validated on both D1 and D2 to develop an image enhancement model. Ultimately, the segmentation and enhancement models were integrated with an object detection model to create a fully automated framework for M3 and MC detection in PRs. Experimental evaluation included calculating the Dice coefficient, IoU, Recall, and Precision during the process. RESULTS The RPIFormer model proposed in this study achieved an average Dice coefficient of 92.56% for segmenting M3 and MC, representing a 3.06% improvement over the previous best study. The deep learning framework developed in this research enables automatic detection of M3 and MC in PRs without manual cropping, demonstrating superior detection accuracy and generalization capability. CONCLUSION The framework developed in this study can be applied to PRs captured in different hospitals without the need for model fine-tuning. This feature is significant for aiding doctors in accurately assessing the spatial relationship between M3 and MC, thereby determining the optimal treatment plan to ensure patients' oral health and surgical safety.
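The Dice coefficient reported above is the standard overlap measure between a predicted and a reference mask; a minimal reference implementation for binary masks (not the authors' code) is:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|A intersect B| / (|A| + |B|) for two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Identical masks give 1.0; disjoint masks give 0.0.
```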
Affiliation(s)
- Xinle Fang
- School of Information Science and Engineering, Shandong University, Qingdao, China
- Shengben Zhang
- Department of Implantology, School and Hospital of Stomatology, Cheeloo College of Medicine, Shandong University, Jinan, China
- Zhiyuan Wei
- School of Information Science and Engineering, Shandong University, Qingdao, China
- Kaixin Wang
- School of Information Science and Engineering, Shandong University, Qingdao, China
- Guanghui Yang
- School of Information Science and Engineering, Shandong University, Qingdao, China
- Chengliang Li
- School of Information Science and Engineering, Shandong University, Qingdao, China
- Min Han
- School of Information Science and Engineering, Shandong University, Qingdao, China.
- Mi Du
- Department of Implantology, School and Hospital of Stomatology, Cheeloo College of Medicine, Shandong University, Jinan, China; Shandong Key Laboratory of Oral Tissue Regeneration, Jinan, China; Shandong Engineering Laboratory for Dental Materials and Oral Tissue Regeneration, Jinan, China; Shandong Provincial Clinical Research Center for Oral Diseases, Jinan, China.
5
Zirek T, Öziç MÜ, Tassoker M. AI-Driven localization of all impacted teeth and prediction of Winter angulation for third molars on panoramic radiographs: Clinical user interface design. Comput Biol Med 2024; 178:108755. [PMID: 38897151] [DOI: 10.1016/j.compbiomed.2024.108755]
Abstract
PURPOSE Impacted teeth are teeth that remain under the gums or jawbone and cannot reach their normal position even though it is time for them to erupt. This study aims to detect all impacted teeth and to classify impacted third molars according to the Winter method with an artificial intelligence model on panoramic radiographs. METHODS In this study, 1197 panoramic radiographs from the dentistry faculty database were collected for all impacted teeth, and 1000 panoramic radiographs were collected for Winter classification. Several pre-processing methods were performed and the images were doubled with data augmentation. Both datasets were randomly divided into 80% training, 10% validation, and 10% testing. After transfer learning and fine-tuning processes, the two datasets were trained with the YOLOv8 deep learning algorithm, a high-performance artificial intelligence model, and the detection of impacted teeth was carried out. The results were evaluated with precision, recall, mAP, and F1-score performance metrics. A graphical user interface was designed for clinical use with the artificial intelligence weights obtained as a result of the training. RESULTS For the detection of impacted third molars according to the Winter classification, the average precision, average recall, and average F1 score were 0.972, 0.967, and 0.969, respectively. For the detection of all impacted teeth, the average precision, average recall, and average F1 score were 0.991, 0.995, and 0.993, respectively. CONCLUSION According to the results, the artificial intelligence-based YOLOv8 deep learning model successfully detected all impacted teeth and impacted third molars according to the Winter classification system.
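For context, a YOLOv8 detector of this kind is typically trained and queried through the Ultralytics API along the following lines; the dataset YAML, weights file, and hyperparameters below are placeholders rather than the authors' actual configuration.

```python
from ultralytics import YOLO

# Placeholder dataset config: a YAML listing train/val image folders and the class names
# (e.g. the four Winter angulation classes or a single "impacted tooth" class).
model = YOLO("yolov8n.pt")                      # start from pretrained weights (transfer learning)
model.train(data="impacted_teeth.yaml", epochs=100, imgsz=640)

metrics = model.val()                           # precision, recall, mAP on the validation split
results = model.predict("panoramic_example.png", conf=0.25)
for box in results[0].boxes:                    # predicted bounding boxes with class ids and scores
    print(box.cls, box.conf, box.xyxy)
```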
Affiliation(s)
- Taha Zirek
- Necmettin Erbakan University, Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Konya, Turkey
- Muhammet Üsame Öziç
- Pamukkale University, Faculty of Technology, Department of Biomedical Engineering, Denizli, Turkey
- Melek Tassoker
- Necmettin Erbakan University, Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Konya, Turkey.
6
Ruiz-Roca J, Rodríguez-Molinero J, Javaloyes-Vicente P, Pereira-Lopes O, Gay-Escoda C. Use of CBCT and panoramic radiography in the prediction of alterations in sensitivity of the inferior alveolar nerve in third molars: A retrospective cross-sectional study. Saudi Dent J 2024; 36:1105-1110. [PMID: 39176156] [PMCID: PMC11337963] [DOI: 10.1016/j.sdentj.2024.06.001]
Abstract
Objectives We investigated which type of orthopantomography (OPG) was best able to predict neurological alterations of the inferior alveolar nerve (IAN) during extraction of a lower third molar (3M). Methods We analysed cone beam computed tomographies (CBCTs) performed at a private dental clinic in Cartagena, Spain over five consecutive years. The CBCTs, together with their corresponding OPGs, had been prescribed for the surgical extraction of a lower 3M. Results We analysed a total of 342 CBCTs and their corresponding OPGs. After the risk of changes in IAN sensitivity was explained, 37 patients refused to undergo surgical extraction. The incidence of sensitivity alterations in the 332 dental extractions was 62 (19%): 44 were paraesthesias of the IAN, and 18 were associated with darkening of the root and interruption of the cortical line. Conclusion When an OPG revealed darkening of the root and interruption of the cortical line, the risk of contact between the lower 3M and the IAN, and therefore the probability of changes in IAN sensitivity, increased by over three-fold.
Affiliation(s)
- J.A. Ruiz-Roca
- Faculty of Dentistry, Department of Dermatology, Stomatology and Radiology, University of Murcia, Spain
- J.A. Rodríguez-Molinero
- Faculty of Health Sciences, Department of Nursery and Stomatology, IDIBO Research Group, Rey Juan Carlos University, Alcorcón, Madrid, Spain
- P. Javaloyes-Vicente
- Faculty of Dentistry, Department of Dermatology, Stomatology and Radiology, University of Murcia, Spain
- O. Pereira-Lopes
- Faculty of Health Sciences, Department of Oral Medicine and Oral Surgery, University Fernando Pessoa, Oporto, Portugal
- C. Gay-Escoda
- Chairman and Professor of Oral and Maxillofacial Surgery, Faculty of Dentistry, University of Barcelona, Spain
- Coordinator/Researcher at the IDIBELL (Bellvitge Biomedical Research Institute), Barcelona, Spain
7
Assiri HA, Hameed MS, Alqarni A, Dawasaz AA, Arem SA, Assiri KI. Artificial Intelligence Application in a Case of Mandibular Third Molar Impaction: A Systematic Review of the Literature. J Clin Med 2024; 13:4431. [PMID: 39124697] [PMCID: PMC11313288] [DOI: 10.3390/jcm13154431]
Abstract
Objective: This systematic review aims to summarize the evidence on the use and applicability of AI in impacted mandibular third molars. Methods: Searches were performed in the following databases: PubMed, Scopus, and Google Scholar. The study protocol is registered at the International Platform of Registered Systematic Review and Meta-analysis Protocols (INPLASY202460081). The retrieved articles were subjected to an exhaustive review based on the inclusion and exclusion criteria for the study. Articles on the use of AI for diagnosis, treatment, and treatment planning in patients with impacted mandibular third molars were included. Results: Twenty-one articles were selected and evaluated using the Scottish Intercollegiate Guidelines Network (SIGN) evidence quality scale. Most of the analyzed studies dealt with using AI to determine the relationship between the mandibular canal and the impacted mandibular third molar. The average quality of the articles included in this review was 2+, which indicated that the level of evidence, according to the SIGN protocol, was B. Conclusions: Compared to human observers, AI models have demonstrated decent performance in determining the morphology, anatomy, and relationship of the impaction with the inferior alveolar nerve canal. However, the prediction of eruptions and future horizons of AI models are still in the early developmental stages. Additional studies estimating the eruption in mixed and permanent dentition are warranted to establish a comprehensive model for identifying, diagnosing, and predicting third molar eruptions and determining the treatment outcomes in the case of impacted teeth. This will help clinicians make better decisions and achieve better treatment outcomes.
Affiliation(s)
- Hassan Ahmed Assiri
- Department of Diagnostic Science and Oral Biology, College of Dentistry, King Khalid University, P.O. Box 960, Abha City 61421, Saudi Arabia; (M.S.H.); (A.A.); (A.A.D.); (S.A.A.); (K.I.A.)
8
Faadiya AN, Widyaningrum R, Arindra PK, Diba SF. The diagnostic performance of impacted third molars in the mandible: A review of deep learning on panoramic radiographs. Saudi Dent J 2024; 36:404-412. [PMID: 38525176] [PMCID: PMC10960107] [DOI: 10.1016/j.sdentj.2023.11.025]
Abstract
Background The mandibular third molar is prone to impaction, resulting in its inability to erupt into the oral cavity. Radiographic examination is required to support the odontectomy of impacted teeth. The use of computer-aided diagnosis based on deep learning is emerging in the fields of medicine and dentistry with the advancement of artificial intelligence (AI) technology. This review describes the performance and prospects of deep learning for the detection, classification, and evaluation of third molar-mandibular canal relationships on panoramic radiographs. Methods This work was conducted using three databases: PubMed, Google Scholar, and Science Direct. Following the literature selection, 49 articles were reviewed, with the 12 main articles discussed in this review. Results Several deep learning models are currently used for segmentation and classification of third molar impaction, with or without the combination of other techniques. Deep learning has demonstrated significant diagnostic performance in identifying mandibular impacted third molars (ITM) on panoramic radiographs, with an accuracy range of 78.91% to 90.23%. Meanwhile, the accuracy of deep learning in determining the relationship between ITM and the mandibular canal (MC) ranges from 72.32% to 99%. Conclusion Deep learning-based AI with high performance for the detection, classification, and evaluation of the relationship of ITM to the MC using panoramic radiographs has been developed over the past decade. However, deep learning must be improved using large datasets, and the evaluation of diagnostic performance for deep learning models should be aligned with medical diagnostic test protocols. Future studies involving collaboration among oral radiologists, clinicians, and computer scientists are required to identify appropriate AI development models that are accurate, efficient, and applicable to clinical services.
Affiliation(s)
- Amalia Nur Faadiya
- Dental Medicine Study Program, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Rini Widyaningrum
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Pingky Krisna Arindra
- Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Silviana Farrah Diba
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta, Indonesia
9
Gong Z, Feng W, Su X, Choi C. System for automatically assessing the likelihood of inferior alveolar nerve injury. Comput Biol Med 2024; 169:107923. [PMID: 38199211] [DOI: 10.1016/j.compbiomed.2024.107923]
Abstract
Inferior alveolar nerve (IAN) injury is a severe complication associated with mandibular third molar (MM3) extraction. Consequently, the likelihood of IAN injury must be assessed before performing such an extraction. However, existing deep learning methods for classifying the likelihood of IAN injury that rely on mask images often suffer from limited accuracy and lack of interpretability. In this paper, we propose an automated system based on panoramic radiographs, featuring a novel segmentation model, SS-TransUnet, and a classification algorithm, CD-IAN injury class. Our objective was to enhance the precision of segmentation of the MM3 and the mandibular canal (MC) and the classification accuracy of the likelihood of IAN injury, ultimately reducing the occurrence of IAN injuries and providing a degree of interpretable foundation for diagnosis. The proposed segmentation model demonstrated a 0.9% and 2.6% enhancement in Dice coefficient for the MM3 and MC, accompanied by a reduction in the 95% Hausdorff distance, reaching 1.619 and 1.886, respectively. Additionally, our classification algorithm achieved an accuracy of 0.846, surpassing deep learning-based models by 3.8%, confirming the effectiveness of our system.
Affiliation(s)
- Ziyang Gong
- Department of Computer Engineering, Gachon University, Seongnam-si, 13120, Republic of Korea
- Weikang Feng
- College of Information Science and Engineering, Hohai University, Changzhou, 213000, China
- Xin Su
- College of Information Science and Engineering, Hohai University, Changzhou, 213000, China
- Chang Choi
- Department of Computer Engineering, Gachon University, Seongnam-si, 13120, Republic of Korea.
10
Tian Y, Zhang Z, Zhao B, Liu L, Liu X, Feng Y, Tian J, Kou D. Coarse-to-fine prior-guided attention network for multi-structure segmentation on dental panoramic radiographs. Phys Med Biol 2023; 68:215010. [PMID: 37816372] [DOI: 10.1088/1361-6560/ad0218]
Abstract
Objective. Accurate segmentation of various anatomical structures from dental panoramic radiographs is essential for the diagnosis and treatment planning of various diseases in digital dentistry. In this paper, we propose a novel deep learning-based method for accurate and fully automatic segmentation of the maxillary sinus, mandibular condyle, mandibular nerve, alveolar bone and teeth on panoramic radiographs. Approach. A two-stage coarse-to-fine prior-guided segmentation framework is proposed to segment multiple structures on dental panoramic radiographs. In the coarse stage, a multi-label segmentation network is used to generate the coarse segmentation mask, and in the fine-tuning stage, a prior-guided attention network with an encoder-decoder architecture is proposed to precisely predict the mask of each anatomical structure. First, a prior-guided edge fusion module is incorporated into the network at the input of each convolution level of the encoder path to generate edge-enhanced image feature maps. Second, a prior-guided spatial attention module is proposed to guide the network to extract relevant spatial features from foreground regions based on the combination of the prior information and the spatial attention mechanism. Finally, a prior-guided hybrid attention module is integrated at the bottleneck of the network to explore global context from both spatial and category perspectives. Main results. We evaluated the segmentation performance of our method on a testing dataset that contains 150 panoramic radiographs collected from real-world clinical scenarios. The segmentation results indicate that our proposed method achieves more accurate segmentation performance compared with state-of-the-art methods. The average Jaccard scores are 87.91%, 85.25%, 63.94%, 93.46% and 88.96% for the maxillary sinus, mandibular condyle, mandibular nerve, alveolar bone and teeth, respectively. Significance. The proposed method was able to accurately segment multiple structures on panoramic radiographs. This method has the potential to be part of the process of automatic pathology diagnosis from dental panoramic radiographs.
Affiliation(s)
- Yuan Tian
- Angelalign Inc., No. 500 Zhengli Road, Yangpu District, Shanghai, People's Republic of China
- Zhejia Zhang
- Angelalign Inc., No. 500 Zhengli Road, Yangpu District, Shanghai, People's Republic of China
- Bailiang Zhao
- Angelalign Inc., No. 500 Zhengli Road, Yangpu District, Shanghai, People's Republic of China
- Lichao Liu
- Angelalign Inc., No. 500 Zhengli Road, Yangpu District, Shanghai, People's Republic of China
- Xiaolin Liu
- Angelalign Inc., No. 500 Zhengli Road, Yangpu District, Shanghai, People's Republic of China
- Yang Feng
- Angelalign Inc., No. 500 Zhengli Road, Yangpu District, Shanghai, People's Republic of China
- Jie Tian
- Angelalign Inc., No. 500 Zhengli Road, Yangpu District, Shanghai, People's Republic of China
- Dazhi Kou
- Shanghai Supercomputer Center, No. 585 Guoshoujing Road, Pudong New District, Shanghai, People's Republic of China
11
Chun SY, Kang YH, Yang S, Kang SR, Lee SJ, Kim JM, Kim JE, Huh KH, Lee SS, Heo MS, Yi WJ. Automatic classification of 3D positional relationship between mandibular third molar and inferior alveolar canal using a distance-aware network. BMC Oral Health 2023; 23:794. [PMID: 37880603] [PMCID: PMC10598947] [DOI: 10.1186/s12903-023-03496-9]
Abstract
The purpose of this study was to automatically classify the three-dimensional (3D) positional relationship between an impacted mandibular third molar (M3) and the inferior alveolar canal (MC) using a distance-aware network in cone-beam CT (CBCT) images. We developed a network consisting of cascaded stages of segmentation and classification for the buccal-lingual relationship between the M3 and the MC. The M3 and the MC were simultaneously segmented using Dense121 U-Net in the segmentation stage, and their buccal-lingual relationship was automatically classified using a 3D distance-aware network with the multichannel inputs of the original CBCT image and the signed distance map (SDM) generated from the segmentation in the classification stage. The Dense121 U-Net achieved the highest average precision of 0.87, 0.96, and 0.94 in the segmentation of the M3, the MC, and both together, respectively. The 3D distance-aware classification network of the Dense121 U-Net with the input of both the CBCT image and the SDM showed the highest performance of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve, each of which had a value of 1.00. The SDM generated from the segmentation mask significantly contributed to increasing the accuracy of the classification network. The proposed distance-aware network demonstrated high accuracy in the automatic classification of the 3D positional relationship between the M3 and the MC by learning anatomical and geometrical information from the CBCT images.
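The signed distance map (SDM) used here as an extra input channel can be derived from a binary segmentation mask with a Euclidean distance transform. A small sketch follows; the sign convention (negative inside, positive outside) is an assumption and may be inverted in the original work.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Signed Euclidean distance map of a binary mask (2D or 3D):
    negative inside the structure and positive outside."""
    mask = np.asarray(mask, dtype=bool)
    outside = distance_transform_edt(~mask)   # distance of background voxels to the structure
    inside = distance_transform_edt(mask)     # distance of foreground voxels to the background
    return outside - inside
```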
Affiliation(s)
- So-Young Chun
- Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, South Korea
- Yun-Hui Kang
- Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, Seoul, South Korea
- Su Yang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Se-Ryong Kang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Jun-Min Kim
- Department of Electronics and Information Engineering, Hansung University, Seoul, South Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Sam-Sun Lee
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Won-Jin Yi
- Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, South Korea.
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea.
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea.
12
Zhang C, Li M, Luo Z, Xiao R, Li B, Shi J, Zeng C, Sun B, Xu X, Yang H. Deep learning-driven MRI trigeminal nerve segmentation with SEVB-net. Front Neurosci 2023; 17:1265032. [PMID: 37920295] [PMCID: PMC10618361] [DOI: 10.3389/fnins.2023.1265032]
Abstract
Purpose Trigeminal neuralgia (TN) poses significant challenges in its diagnosis and treatment due to the extreme pain it causes. Magnetic resonance imaging (MRI) plays a crucial role in diagnosing TN and understanding its pathogenesis. Manual delineation of the trigeminal nerve in volumetric images is time-consuming and subjective. This study introduces a Squeeze and Excitation with BottleNeck V-Net (SEVB-Net), a novel approach for the automatic segmentation of the trigeminal nerve in three-dimensional T2 MRI volumes. Methods We enrolled 88 patients with trigeminal neuralgia and 99 healthy volunteers, dividing them into training and testing groups. The SEVB-Net was designed for end-to-end training, taking three-dimensional T2 images as input and producing a segmentation volume of the same size. We assessed the performance of the basic V-Net, nnUNet, and SEVB-Net models by calculating the Dice similarity coefficient (DSC), sensitivity, precision, and network complexity. Additionally, we used the Mann-Whitney U test to compare the time required for manual segmentation and automatic segmentation with manual modification. Results In the testing group, the experimental results demonstrated that the proposed method achieved state-of-the-art performance. SEVB-Net combined with the ωDoubleLoss loss function achieved a DSC ranging from 0.6070 to 0.7923. SEVB-Net combined with the ωDoubleLoss method and nnUNet combined with the DoubleLoss method achieved DSC, sensitivity, and precision values exceeding 0.7. However, SEVB-Net significantly reduced the number of parameters (2.20 M), memory consumption (11.41 MB), and model size (17.02 MB), resulting in improved computation and forward time compared with nnUNet. The difference in average time between manual segmentation and automatic segmentation with manual modification for both radiologists was statistically significant (p < 0.001). Conclusion The experimental results demonstrate that the proposed method can automatically segment the root and three main branches of the trigeminal nerve in three-dimensional T2 images. SEVB-Net, compared with the basic V-Net model, showed improved segmentation performance and achieved a level similar to nnUNet. The segmentation volumes of both SEVB-Net and nnUNet aligned with expert annotations, but SEVB-Net was more lightweight.
Affiliation(s)
- Chuan Zhang
- The First Affiliated Hospital, Jinan University, Guangzhou, China
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Man Li
- Shanghai United Imaging Intelligence, Co., Ltd., Shanghai, China
- Zheng Luo
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Ruhui Xiao
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Bing Li
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Jing Shi
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Chen Zeng
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- BaiJinTao Sun
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Xiaoxue Xu
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Hanfeng Yang
- The First Affiliated Hospital, Jinan University, Guangzhou, China
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
13
Bağ İ, Bilgir E, Bayrakdar İŞ, Baydar O, Atak FM, Çelik Ö, Orhan K. An artificial intelligence study: automatic description of anatomic landmarks on panoramic radiographs in the pediatric population. BMC Oral Health 2023; 23:764. [PMID: 37848870] [PMCID: PMC10583406] [DOI: 10.1186/s12903-023-03532-8]
Abstract
BACKGROUND Panoramic radiographs, in which anatomic landmarks can be observed, are used to detect cases closely related to pediatric dentistry. The purpose of the study is to investigate the success and reliability of the detection of maxillary and mandibular anatomic structures observed on panoramic radiographs in children using artificial intelligence. METHODS A total of 981 mixed images of pediatric patients were labelled for 9 different anatomic landmarks: maxillary sinus, orbita, mandibular canal, mental foramen, foramen mandible, incisura mandible, articular eminence, and condylar and coronoid processes. Training was carried out using 2D convolutional neural network (CNN) architectures over 500 training epochs, and PyTorch-implemented YOLO-v5 models were produced. The success rate of the AI model prediction was tested on a 10% test data set. RESULTS A total of 14,804 labels were made, including maxillary sinus (1922), orbita (1944), mandibular canal (1879), mental foramen (884), foramen mandible (1885), incisura mandible (1922), articular eminence (1645), and condylar (1733) and coronoid (990) processes. The most successful F1 scores were obtained for orbita (1), incisura mandible (0.99), maxillary sinus (0.98), and mandibular canal (0.97). The best sensitivity values were obtained for orbita, maxillary sinus, mandibular canal, incisura mandible, and condylar process. The worst sensitivity values were obtained for mental foramen (0.92) and articular eminence (0.92). CONCLUSIONS The regular and standardized labelling, the relatively larger areas, and the success of the YOLO-v5 algorithm contributed to these successful results. Automatic segmentation of these structures will save time for physicians in clinical diagnosis and will increase the visibility of pathologies related to the structures and the awareness of physicians.
Affiliation(s)
- İrem Bağ
- Department of Pediatric Dentistry, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey.
- Elif Bilgir
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- İbrahim Şevki Bayrakdar
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- Oğuzhan Baydar
- Dentomaxillofacial Radiology Specialist, Faculty of Dentistry, Ege University, İzmir, Turkey
- Fatih Mehmet Atak
- Department of Computer Engineering, The Faculty of Engineering, Boğaziçi University, İstanbul, Turkey
- Özer Çelik
- Department of Mathematics-Computer, Eskisehir Osmangazi University Faculty of Science, Eskisehir, Turkey
- Kaan Orhan
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
14
Imak A, Çelebi A, Polat O, Türkoğlu M, Şengür A. ResMIBCU-Net: an encoder-decoder network with residual blocks, modified inverted residual block, and bi-directional ConvLSTM for impacted tooth segmentation in panoramic X-ray images. Oral Radiol 2023; 39:614-628. [PMID: 36920598] [DOI: 10.1007/s11282-023-00677-8]
Abstract
OBJECTIVE An impacted tooth is a common problem that can occur at any age, causing tooth decay, root resorption, and pain in the later stages. In recent years, major advances have been made in medical image segmentation using deep convolutional neural network-based models. In this study, we report on the development of an artificial intelligence system for the automatic identification of impacted teeth from panoramic dental X-ray images. METHODS Among existing networks for medical image segmentation, U-Net architectures are widely implemented. In this article, for dental X-ray image segmentation, blocks and convolutional block structures using inverted residual blocks are upgraded by taking advantage of U-Net's network capacity-intensive connections. At the same time, we propose a method for skip connections in which bi-directional convolutional long short-term memory is used instead of a simple connection. The proposed artificial intelligence model's performance was assessed with accuracy, F1-score, intersection over union, and recall. RESULTS With the proposed method, experimental results of 99.82% accuracy, 91.59% F1-score, 84.48% intersection over union, and 90.71% recall were obtained. CONCLUSION Our findings show that our artificial intelligence system could provide diagnostic support in future clinical practice.
Affiliation(s)
- Andaç Imak
- Department of Electrical and Electronic Engineering, Faculty of Engineering, Munzur University, Tunceli, Turkey.
- Adalet Çelebi
- Oral and Maxillofacial Surgery Department, Faculty of Dentistry, Mersin University, Mersin, Turkey
- Onur Polat
- Department of Computer Engineering, Faculty of Technology, Gazi University, Ankara, Turkey
- Muammer Türkoğlu
- Department of Software Engineering, Faculty of Engineering, Samsun University, Samsun, Turkey
- Abdulkadir Şengür
- Department of Electrical and Electronic Engineering, Faculty of Technology, Firat University, Elazig, Turkey
15
Kim JY, Kahm SH, Yoo S, Bae SM, Kang JE, Lee SH. The efficacy of supervised learning and semi-supervised learning in diagnosis of impacted third molar on panoramic radiographs through artificial intelligence model. Dentomaxillofac Radiol 2023; 52:20230030. [PMID: 37192043] [PMCID: PMC10461259] [DOI: 10.1259/dmfr.20230030]
Abstract
OBJECTIVES The aim of the study was to evaluate the efficacy of traditional supervised learning (SL) and semi-supervised learning (SSL) in the classification of mandibular third molars (Mn3s) on panoramic images. The simplicity of the preprocessing step and the resulting performance of SL and SSL were analyzed. METHODS A total of 1625 cropped Mn3 images from 1000 panoramic images were labeled for classification of the depth of impaction (D class), spatial relation with the adjacent second molar (S class), and relationship with the inferior alveolar nerve canal (N class). For the SL model, WideResNet (WRN) was applied, and for the SSL model, LaplaceNet (LN) was utilized. RESULTS In the WRN model, 300 labeled images for the D and S classes, and 360 labeled images for the N class, were used for training and validation. In the LN model, only 40 labeled images for the D, S, and N classes were used for learning. The F1 scores were 0.87, 0.87, and 0.83 for the D, S, and N classes in the WRN model, and 0.84, 0.94, and 0.80 in the LN model, respectively. CONCLUSIONS These results confirmed that the LN model applied as SSL, even utilizing a small number of labeled images, achieved prediction accuracy comparable to that of the WRN model as SL.
Affiliation(s)
- Ji-Youn Kim
- Division of Oral & Maxillofacial Surgery, Department of Dentistry, St. Vincent’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Se Hoon Kahm
- Department of Dentistry, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Seok Yoo
- AI Business Headquarters, Unidocs Inc., Seoul, South Korea
- Soo-Mi Bae
- Department of Artificial Intelligence, Graduate School, Korea University, Seoul, South Korea
- Sang Hwa Lee
- Department of Dentistry, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
16
Papasratorn D, Pornprasertsuk-Damrongsri S, Yuma S, Weerawanich W. Investigation of the best effective fold of data augmentation for training deep learning models for recognition of contiguity between mandibular third molar and inferior alveolar canal on panoramic radiographs. Clin Oral Investig 2023; 27:3759-3769. [PMID: 37043029] [PMCID: PMC10329615] [DOI: 10.1007/s00784-023-04992-6]
Abstract
OBJECTIVES This study aimed to train deep learning models for recognition of contiguity between the mandibular third molar (M3M) and the inferior alveolar canal using panoramic radiographs and to investigate the best effective fold of data augmentation. MATERIALS AND METHODS A total of 1800 cropped M3M images were classified evenly into contact and no-contact. The contact group was confirmed with CBCT images. The models were trained from three pretrained models: AlexNet, VGG-16, and GoogLeNet. Each pretrained model was trained with the original cropped panoramic radiographs. Then the training images were increased fivefold, tenfold, 15-fold, and 20-fold using data augmentation to train additional models. The area under the receiver operating characteristic curve (AUC) of the 15 models was evaluated. RESULTS All models recognized contiguity with AUCs from 0.951 to 0.996. Ten-fold augmentation showed the highest AUC in all pretrained models; however, no significant difference from the other folds was found. VGG-16 showed the best performance among pretrained models trained at the same fold of augmentation. Data augmentation provided a statistically significant improvement in the performance of the AlexNet and GoogLeNet models, while VGG-16 remained unchanged. CONCLUSIONS Based on our images, all models performed efficiently with high AUC, particularly VGG-16. Ten-fold augmentation showed the highest AUC for all pretrained models. VGG-16 showed promising potential when trained with only original images. CLINICAL RELEVANCE Ten-fold augmentation may help improve deep learning models' performance. The variety of original data and the accuracy of labels are essential to train a high-performance model.
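As a rough illustration of an n-fold augmentation scheme, the sketch below produces n randomly transformed copies of each cropped image; the specific transforms are assumptions and are not the augmentation operations used in the study.

```python
from torchvision import transforms

# Assumed transform set for illustration; the study's actual augmentation operations
# are not specified in the abstract.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(size=224, scale=(0.9, 1.0)),
])

def n_fold_augment(image, fold):
    """Return `fold` randomly augmented copies of one cropped M3M image
    (fold=10 roughly corresponds to the ten-fold setting)."""
    return [augment(image) for _ in range(fold)]
```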
Affiliation(s)
- Dhanaporn Papasratorn
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Mahidol University, 6, Yothi Road, Ratchathewi District, Bangkok, 10400 Thailand
- Suchaya Pornprasertsuk-Damrongsri
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Mahidol University, 6, Yothi Road, Ratchathewi District, Bangkok, 10400 Thailand
- Suraphong Yuma
- Department of Physics, Faculty of Science, Mahidol University, 272 Rama VI Road, Ratchathewi District, Bangkok, 10400 Thailand
- Warangkana Weerawanich
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Mahidol University, 6, Yothi Road, Ratchathewi District, Bangkok, 10400 Thailand
17
Lo Casto A, Spartivento G, Benfante V, Di Raimondo R, Ali M, Di Raimondo D, Tuttolomondo A, Stefano A, Yezzi A, Comelli A. Artificial Intelligence for Classifying the Relationship between Impacted Third Molar and Mandibular Canal on Panoramic Radiographs. Life (Basel) 2023; 13:1441. [PMID: 37511816] [PMCID: PMC10381483] [DOI: 10.3390/life13071441]
Abstract
The purpose of this investigation was to evaluate the diagnostic performance of two convolutional neural networks (CNNs), namely ResNet-152 and VGG-19, in analyzing, on panoramic images, the relationship between the lower third molar (MM3) and the mandibular canal (MC), and to compare this performance with that of an inexperienced observer (a sixth-year dental student). Utilizing the k-fold cross-validation technique, 142 MM3 images, cropped from 83 panoramic images, were split into 80% training and validation data and 20% test data. They were subsequently labeled by an experienced radiologist as the gold standard. In order to compare the diagnostic capabilities of the CNN algorithms and the inexperienced observer, the diagnostic accuracy, sensitivity, specificity, and positive predictive value (PPV) were determined. ResNet-152 achieved a mean sensitivity, specificity, PPV, and accuracy of 84.09%, 94.11%, 92.11%, and 88.86%, respectively. VGG-19 achieved 71.82%, 93.33%, 92.26%, and 85.28% for the same metrics. The dental student's diagnostic performance was 69.60%, 53.00%, 64.85%, and 62.53%, respectively. This work demonstrated the potential use of deep CNN architectures for the identification and evaluation of contact between the MM3 and the MC on panoramic images. In addition, CNNs could be a useful tool to assist inexperienced observers in more accurately identifying contact relationships between the MM3 and the MC on panoramic images.
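A transfer-learning setup of this kind (a pretrained backbone with a new two-class head for contact versus no contact) typically looks like the following in PyTorch; the optimizer, learning rate, and loss are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained ResNet-152 backbone with its ImageNet head replaced by a binary classifier
# (contact / no contact). Hyperparameters are illustrative assumptions.
model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a mini-batch of cropped MM3 images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```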
Affiliation(s)
- Antonio Lo Casto
- Section of Radiological Sciences, Department of Biomedicine, Neuroscience and Advanced Diagnostics, University of Palermo, 90127 Palermo, Italy
- Giacomo Spartivento
- Section of Radiological Sciences, Department of Biomedicine, Neuroscience and Advanced Diagnostics, University of Palermo, 90127 Palermo, Italy
- Viviana Benfante
- Ri.MED Foundation, Via Bandiera 11, 90133 Palermo, Italy
- Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties, Molecular and Clinical Medicine, University of Palermo, 90127 Palermo, Italy
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
- Riccardo Di Raimondo
- Postgraduate Section of Periodontology, Faculty of Odontology, University Complutense, 28040 Madrid, Spain
- Postgraduate Section of Oral Surgery, Periodontology and Implant, University Sur Mississippi, Spain Istitutions, 28040 Madrid, Spain
- Muhammad Ali
- Ri.MED Foundation, Via Bandiera 11, 90133 Palermo, Italy
- Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties, Molecular and Clinical Medicine, University of Palermo, 90127 Palermo, Italy
- Domenico Di Raimondo
- Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties, Molecular and Clinical Medicine, University of Palermo, 90127 Palermo, Italy
- Antonino Tuttolomondo
- Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties, Molecular and Clinical Medicine, University of Palermo, 90127 Palermo, Italy
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
- Anthony Yezzi
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Albert Comelli
- Ri.MED Foundation, Via Bandiera 11, 90133 Palermo, Italy
18
Ayad N, Schwendicke F, Krois J, van den Bosch S, Bergé S, Bohner L, Hanisch M, Vinayahalingam S. Patients' perspectives on the use of artificial intelligence in dentistry: a regional survey. Head Face Med 2023; 19:23. [PMID: 37349791] [PMCID: PMC10288769] [DOI: 10.1186/s13005-023-00368-z]
Abstract
The use of artificial intelligence (AI) in dentistry is rapidly evolving and could play a major role in a variety of dental fields. This study assessed patients' perceptions and expectations regarding AI use in dentistry. An 18-item questionnaire survey focused on demographics, expectancy, accountability, trust, interaction, advantages and disadvantages was responded to by 330 patients; 265 completed questionnaires were included in this study. Frequencies and differences between age groups were analysed using two-sided chi-squared or Fisher's exact tests with Monte Carlo approximation. Patients' perceived top three disadvantages of AI use in dentistry were (1) the impact on workforce needs (37.7%), (2) new challenges in doctor-patient relationships (36.2%) and (3) increased dental care costs (31.7%). Major expected advantages were improved diagnostic confidence (60.8%), time reduction (48.3%) and more personalised and evidence-based disease management (43.0%). Most patients expected AI to be part of the dental workflow in 1-5 (42.3%) or 5-10 (46.8%) years. Older patients (> 35 years) expected higher AI performance standards than younger patients (18-35 years) (p < 0.05). Overall, patients showed a positive attitude towards AI in dentistry. Understanding patients' perceptions may allow professionals to shape AI-driven dentistry in the future.
Affiliation(s)
- Nasim Ayad
- Department of Oral and Maxillofacial Surgery, Hospital University Münster, 48149 Münster, Germany
- Falk Schwendicke
- Department of Oral Diagnostics and Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität Zu Berlin, Aßmannshauser Str. 4-6, 14197 Berlin, Germany
- Joachim Krois
- Department of Oral Diagnostics and Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität Zu Berlin, Aßmannshauser Str. 4-6, 14197 Berlin, Germany
- Stefanie van den Bosch
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, P.O. Box 9101, 6500 HB Nijmegen, the Netherlands
- Stefaan Bergé
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, P.O. Box 9101, 6500 HB Nijmegen, the Netherlands
- Lauren Bohner
- Department of Oral and Maxillofacial Surgery, Hospital University Münster, 48149 Münster, Germany
- Marcel Hanisch
- Department of Oral and Maxillofacial Surgery, Hospital University Münster, 48149 Münster, Germany
- Shankeeth Vinayahalingam
- Department of Oral and Maxillofacial Surgery, Hospital University Münster, 48149 Münster, Germany
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, P.O. Box 9101, 6500 HB Nijmegen, the Netherlands
19
|
Bui R, Iozzino R, Richert R, Roy P, Boussel L, Tafrount C, Ducret M. Artificial Intelligence as a Decision-Making Tool in Forensic Dentistry: A Pilot Study with I3M. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2023; 20:4620. [PMID: 36901630 PMCID: PMC10002153 DOI: 10.3390/ijerph20054620] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Revised: 02/25/2023] [Accepted: 03/02/2023] [Indexed: 06/18/2023]
Abstract
Expert determination of the third molar maturity index (I3M) constitutes one of the most common approaches for dental age estimation. This work aimed to investigate the technical feasibility of creating a decision-making tool based on I3M to support expert decision-making. Methods: The dataset consisted of 456 images from France and Uganda. Two deep learning approaches (Mask R-CNN, U-Net) were compared on mandibular radiographs, leading to a two-part instance segmentation (apical and coronal). Then, two topological data analysis approaches were compared on the inferred mask: one with a deep learning component (TDA-DL), one without (TDA). Regarding mask inference, U-Net achieved a higher accuracy (mean intersection over union, mIoU) of 91.2%, compared to 83.8% for Mask R-CNN. The combination of U-Net with TDA or TDA-DL to compute the I3M score yielded satisfactory results in comparison with a dental forensic expert. The mean ± SD absolute error was 0.04 ± 0.03 for TDA, and 0.06 ± 0.04 for TDA-DL. The Pearson correlation coefficient of the I3M scores between the expert and a U-Net model was 0.93 when combined with TDA and 0.89 with TDA-DL. This pilot study illustrates the potential feasibility of automating an I3M solution that combines a deep learning and a topological approach, with 95% accuracy in comparison with an expert.
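A minimal sketch of the two headline quantities reported above (mask overlap as intersection over union, and the Pearson correlation between expert and automated I3M scores) is given below; the masks and score vectors are synthetic placeholders, not the study's data.

```python
# Sketch of the two headline metrics above: IoU for the inferred masks and the Pearson
# correlation between expert and automated I3M scores. All values are synthetic.
import numpy as np
from scipy.stats import pearsonr

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two binary masks."""
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

pred = np.zeros((64, 64), dtype=bool);  pred[10:40, 10:40] = True
truth = np.zeros((64, 64), dtype=bool); truth[15:45, 12:42] = True
print(f"IoU = {iou(pred, truth):.3f}")

# Comparing already-computed I3M scores (a ratio-based maturity index per tooth).
expert_i3m = np.array([0.12, 0.35, 0.08, 0.55, 0.90])
model_i3m  = np.array([0.10, 0.38, 0.07, 0.50, 0.93])
r, p = pearsonr(expert_i3m, model_i3m)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```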
Collapse
Affiliation(s)
- Romain Bui
- Pôle d’Odontologie, Hospices Civils de Lyon, 69008 Lyon, France
- Faculté d’Odontologie, Université Claude Bernard Lyon 1, Université de Lyon, 69372 Lyon, France
| | - Régis Iozzino
- Pôle d’Odontologie, Hospices Civils de Lyon, 69008 Lyon, France
- Faculté d’Odontologie, Université Claude Bernard Lyon 1, Université de Lyon, 69372 Lyon, France
| | - Raphaël Richert
- Pôle d’Odontologie, Hospices Civils de Lyon, 69008 Lyon, France
- Faculté d’Odontologie, Université Claude Bernard Lyon 1, Université de Lyon, 69372 Lyon, France
| | - Pascal Roy
- Service de Biostatistique—Bioinformatique, Pôle Santé Publique, Hospices Civils de Lyon, 69008 Lyon, France
- Équipe Biostatistique-Santé, Laboratoire de Biométrie et Biologie Évolutive, UMR 5558 CNRS, Université Claude Bernard Lyon 1, Université de Lyon, 69100 Villeurbanne, France
| | - Loïc Boussel
- Department of Radiology, Hôpital de la Croix-Rousse, Hospices Civils de Lyon, 69004 Lyon, France
- CREATIS, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, UMR 5220, U1294, 69100 Villeurbanne, France
| | - Cheraz Tafrount
- Pôle d’Odontologie, Hospices Civils de Lyon, 69008 Lyon, France
- Faculté d’Odontologie, Université Claude Bernard Lyon 1, Université de Lyon, 69372 Lyon, France
| | - Maxime Ducret
- Pôle d’Odontologie, Hospices Civils de Lyon, 69008 Lyon, France
- Faculté d’Odontologie, Université Claude Bernard Lyon 1, Université de Lyon, 69372 Lyon, France
- Institut de Biologie et Chimie des Protéines, Laboratoire de Biologie Tissulaire et Ingénierie Thérapeutique, UMR 5305 CNRS, Université Claude Bernard Lyon 1, 69367 Lyon, France
| |
Collapse
|
20
|
Arsiwala-Scheppach LT, Chaurasia A, Müller A, Krois J, Schwendicke F. Machine Learning in Dentistry: A Scoping Review. J Clin Med 2023; 12:937. [PMID: 36769585 PMCID: PMC9918184 DOI: 10.3390/jcm12030937] [Citation(s) in RCA: 19] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Revised: 01/06/2023] [Accepted: 01/23/2023] [Indexed: 01/27/2023] Open
Abstract
Machine learning (ML) is being increasingly employed in dental research and application. We aimed to systematically compile studies using ML in dentistry and assess their methodological quality, including the risk of bias and reporting standards. We evaluated studies employing ML in dentistry published from 1 January 2015 to 31 May 2021 on MEDLINE, IEEE Xplore, and arXiv. We assessed publication trends and the distribution of ML tasks (classification, object detection, semantic segmentation, instance segmentation, and generation) in different clinical fields. We appraised the risk of bias and adherence to reporting standards, using the QUADAS-2 and TRIPOD checklists, respectively. Out of 183 identified studies, 168 were included, focusing on various ML tasks and employing a broad range of ML models, input data, data sources, strategies to generate reference tests, and performance metrics. Classification tasks were most common. Forty-two different metrics were used to evaluate model performances, with accuracy, sensitivity, precision, and intersection-over-union being the most common. We observed considerable risk of bias and moderate adherence to reporting standards which hampers replication of results. A minimum (core) set of outcome and outcome metrics is necessary to facilitate comparisons across studies.
Collapse
Affiliation(s)
- Lubaina T. Arsiwala-Scheppach
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
| | - Akhilanand Chaurasia
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
- Department of Oral Medicine and Radiology, King George’s Medical University, Lucknow 226003, India
| | - Anne Müller
- Pharmacovigilance Institute (Pharmakovigilanz- und Beratungszentrum, PVZ) for Embryotoxicology, Institute of Clinical Pharmacology and Toxicology, Charité—Universitätsmedizin Berlin, 13353 Berlin, Germany
| | - Joachim Krois
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
| | - Falk Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
| |
Collapse
|
21
|
Automatic Machine Learning-based Classification of Mandibular Third Molar Impaction Status. JOURNAL OF ORAL AND MAXILLOFACIAL SURGERY, MEDICINE, AND PATHOLOGY 2023. [DOI: 10.1016/j.ajoms.2022.12.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/14/2023]
|
22
|
Ariji Y, Mori M, Fukuda M, Katsumata A, Ariji E. Automatic visualization of the mandibular canal in relation to an impacted mandibular third molar on panoramic radiographs using deep learning segmentation and transfer learning techniques. Oral Surg Oral Med Oral Pathol Oral Radiol 2022; 134:749-757. [PMID: 36229373 DOI: 10.1016/j.oooo.2022.05.014] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 04/01/2022] [Accepted: 05/31/2022] [Indexed: 12/13/2022]
Abstract
OBJECTIVE The aim of this study was to create and assess a deep learning model using segmentation and transfer learning methods to visualize the proximity of the mandibular canal to an impacted third molar on panoramic radiographs. STUDY DESIGN The panoramic radiographs containing the mandibular canal and impacted third molar were collected from 2 hospitals (Hospitals A and B). A total of 3200 areas were used for creating and evaluating learning models. A source model was created using the data from Hospital A, simulatively transferred to Hospital B, and trained using various amounts of data from Hospital B to create target models. The same data were then applied to the target models to calculate the Dice coefficient, Jaccard index, and sensitivity. RESULTS The performance of target models trained using 200 or more data sets was equivalent to that of the source model tested using data obtained from the same hospital (Hospital A). CONCLUSIONS Sufficiently qualified models could delineate the mandibular canal in relation to an impacted third molar on panoramic radiographs using a segmentation technique. Transfer learning appears to be an effective method for creating such models using a relatively small number of data sets.
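The overlap metrics named above can be computed directly from binary masks; a minimal NumPy sketch, with synthetic masks standing in for the model output and the reference annotation, is:

```python
# Minimal sketch of the Dice coefficient, Jaccard index, and sensitivity used above to
# score mandibular-canal segmentation. Masks are synthetic stand-ins.
import numpy as np

def dice(pred, truth):
    tp = np.logical_and(pred, truth).sum()
    return 2 * tp / (pred.sum() + truth.sum())

def jaccard(pred, truth):
    tp = np.logical_and(pred, truth).sum()
    return tp / np.logical_or(pred, truth).sum()

def sensitivity(pred, truth):
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fn)

pred  = np.zeros((100, 100), dtype=bool); pred[20:60, 30:50] = True
truth = np.zeros((100, 100), dtype=bool); truth[25:65, 30:52] = True

print(f"Dice = {dice(pred, truth):.3f}, Jaccard = {jaccard(pred, truth):.3f}, "
      f"sensitivity = {sensitivity(pred, truth):.3f}")
```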
Collapse
Affiliation(s)
- Yoshiko Ariji
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan; Department of Oral Radiology, Osaka Dental University, School of Dentistry, Osaka, Japan
| | - Mizuho Mori
- Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan
| | - Motoki Fukuda
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
| | - Akitoshi Katsumata
- Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan
| | - Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan.
| |
Collapse
|
23
|
Miloglu O, Guller MT, Tosun ZT. The Use of Artificial Intelligence in Dentistry Practices. Eurasian J Med 2022; 54:34-42. [PMID: 36655443 PMCID: PMC11163356 DOI: 10.5152/eurasianjmed.2022.22301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2022] [Accepted: 11/30/2022] [Indexed: 01/19/2023] Open
Abstract
Artificial intelligence can be defined as "understanding human thinking and trying to develop computer processes that will produce a similar structure." Thus, it is an attempt by a programmed computer to think. According to a broader definition, artificial intelligence is a computer equipped with human intelligence-specific capacities such as acquiring information, perceiving, seeing, thinking, and making decisions. Quality demands in dental treatments have constantly been increasing in recent years. In parallel with this, using image-based methods and multimedia-supported explanation systems on the computer is becoming widespread to evaluate the available information. The use of artificial intelligence in dentistry will greatly contribute to the reduction of treatment times and the effort spent by the dentist, reduce the need for a specialist dentist, and give a new perspective to how dentistry is practiced. In this review, we aim to survey the studies conducted with artificial intelligence in dentistry and to inform dentists about the existence of this new technology.
Collapse
Affiliation(s)
- Ozkan Miloglu
- Department of Oral, Dental and Maxillofacial Radiology, Atatürk University Faculty of Dentistry, Erzurum, Turkey
| | - Mustafa Taha Guller
- Department of Dentistry Services, Oral and Dental Health Program, Binali Yıldırım University Vocational School of Health Services, Erzincan, Turkey
| | - Zeynep Turanli Tosun
- Department of Oral, Dental and Maxillofacial Radiology, Atatürk University Faculty of Dentistry, Erzurum, Turkey
| |
Collapse
|
24
|
Comparison of detection performance of soft tissue calcifications using artificial intelligence in panoramic radiography. Sci Rep 2022; 12:19115. [PMID: 36352043 PMCID: PMC9646809 DOI: 10.1038/s41598-022-22595-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Accepted: 10/17/2022] [Indexed: 11/11/2022] Open
Abstract
In the dental field, artificial intelligence (AI) has largely been limited to teeth and periodontal disease and used for diagnostic assistance or data analysis, and little research has been conducted in actual clinical situations. We therefore created an environment similar to actual clinical practice and conducted research by selecting three soft tissue diseases (carotid artery calcification, lymph node calcification, and sialolith) that are difficult for general dentists to detect. In this study, accuracy and reading time were evaluated using panoramic images and AI. A total of 20,000 panoramic images including the three diseases were used to develop and train a Fast R-CNN model. To compare the performance of the developed model, two oral and maxillofacial radiologists (OMRs) and two general dentists (GDs) read 352 images (excluding the panoramic images used in development) for soft tissue calcification diagnosis. On the first visit, the observers read the images without AI; on the second visit, the same observers used AI to read the same images. The diagnostic accuracy and specificity of the AI for soft tissue calcification ranged from 0.727 to 0.926 and from 0.171 to 1.000, respectively, whereas the sensitivity for lymph node calcification and sialolith was low, at 0.250 and 0.188, respectively. The reading time with AI increased in the GD group (619 to 1049) and decreased in the OMR group (1347 to 1372). In addition, reading scores increased in both groups (GD from 11.4 to 39.8 and OMR from 3.4 to 10.8). Using AI, although the detection sensitivity for sialolith and lymph node calcification was lower than that for carotid artery calcification, the total reading time of the OMR specialists was reduced and the GDs' reading accuracy was improved. The AI used in this study helped to improve the diagnostic accuracy of the GD group, who were not familiar with soft tissue calcification diagnosis, but more datasets are needed to improve the detection performance for the two diseases for which the AI showed low sensitivity.
Collapse
|
25
|
Ariji Y, Gotoh M, Fukuda M, Watanabe S, Nagao T, Katsumata A, Ariji E. A preliminary deep learning study on automatic segmentation of contrast-enhanced bolus in videofluorography of swallowing. Sci Rep 2022; 12:18754. [PMID: 36335226 PMCID: PMC9637105 DOI: 10.1038/s41598-022-21530-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Accepted: 09/28/2022] [Indexed: 11/07/2022] Open
Abstract
Although videofluorography (VFG) is an effective tool for evaluating swallowing functions, its accurate evaluation requires considerable time and effort. This study aimed to create a deep learning model for automated bolus segmentation on VFG images of patients with healthy swallowing and dysphagia using the artificial intelligence deep learning segmentation method, and to assess the performance of the method. VFG recordings of 72 swallows from 12 patients were continuously converted into 15 static images per second. In total, 3910 images were arbitrarily assigned to the training, validation, test 1, and test 2 datasets. In the training and validation datasets, images of colored bolus areas were prepared, along with original images. Using a U-Net neural network, a trained model was created after 500 epochs of training. The test datasets were applied to the trained model, and the performances of automatic segmentation (Jaccard index, Sørensen-Dice coefficient, and sensitivity) were calculated. All performance values for the segmentation of the test 1 and 2 datasets were high, exceeding 0.9. Using an artificial intelligence deep learning segmentation method, we automatically segmented the bolus areas on VFG images; our method exhibited high performance. This model also allowed assessment of aspiration and laryngeal invasion.
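The frame-extraction step described above (continuous conversion to 15 static images per second) could look roughly like the OpenCV sketch below; the file names and output folder are hypothetical placeholders.

```python
# Rough sketch of converting a videofluorography clip into static frames at ~15 images
# per second, as described above. File names are placeholders; requires opencv-python.
import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("vfg_swallow_001.mp4")         # hypothetical input clip
native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
step = max(int(round(native_fps / 15.0)), 1)          # keep roughly 15 frames per second

index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % step == 0:
        cv2.imwrite(f"frames/vfg_001_{saved:05d}.png", frame)
        saved += 1
    index += 1
cap.release()
print(f"saved {saved} frames")
```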
Collapse
Affiliation(s)
- Yoshiko Ariji
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651 Japan
- Department of Oral Radiology, School of Dentistry, Osaka Dental University, Osaka, Japan
| | - Masakazu Gotoh
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651 Japan
| | - Motoki Fukuda
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651 Japan
| | - Satoshi Watanabe
- Department of Maxillofacial Surgery, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
| | - Toru Nagao
- Department of Maxillofacial Surgery, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
| | - Akitoshi Katsumata
- Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan
| | - Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651 Japan
| |
Collapse
|
26
|
Sheng C, Wang L, Huang Z, Wang T, Guo Y, Hou W, Xu L, Wang J, Yan X. Transformer-Based Deep Learning Network for Tooth Segmentation on Panoramic Radiographs. JOURNAL OF SYSTEMS SCIENCE AND COMPLEXITY 2022; 36:257-272. [PMID: 36258771 PMCID: PMC9561331 DOI: 10.1007/s11424-022-2057-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Revised: 03/23/2022] [Indexed: 05/28/2023]
Abstract
Panoramic radiographs can assist dentists in quickly evaluating patients' overall oral health status. The accurate detection and localization of tooth tissue on panoramic radiographs is the first step in identifying pathology and also plays a key role in an automatic diagnosis system. However, the evaluation of panoramic radiographs depends on the clinical experience and knowledge of the dentist, and the interpretation of panoramic radiographs might lead to misdiagnosis. Therefore, it is of great significance to use artificial intelligence to segment teeth on panoramic radiographs. In this study, SWin-Unet, a transformer-based U-shaped encoder-decoder architecture with skip connections, is introduced to perform panoramic radiograph segmentation. To evaluate the tooth segmentation performance of SWin-Unet, the PLAGH-BH dataset is introduced for research purposes. The performance is evaluated by F1 score, mean intersection over union (IoU), and accuracy (Acc). Compared with U-Net, Link-Net and FPN baselines, SWin-Unet performs much better on the PLAGH-BH tooth segmentation dataset. These results indicate that SWin-Unet is more feasible for panoramic radiograph segmentation and is valuable for potential clinical application.
Collapse
Affiliation(s)
- Chen Sheng
- Medical School of Chinese PLA, Beijing, 100853 China
- Department of Stomatology, the first Medical Centre, Chinese PLA General Hospital, Beijing, 100853 China
| | - Lin Wang
- Medical School of Chinese PLA, Beijing, 100853 China
- Department of Stomatology, the first Medical Centre, Chinese PLA General Hospital, Beijing, 100853 China
- Beihang University, Beijing, 100191 China
| | - Zhenhuan Huang
- Medical School of Chinese PLA, Beijing, 100853 China
- Department of Stomatology, the first Medical Centre, Chinese PLA General Hospital, Beijing, 100853 China
- Beihang University, Beijing, 100191 China
| | - Tian Wang
- Medical School of Chinese PLA, Beijing, 100853 China
- Department of Stomatology, the first Medical Centre, Chinese PLA General Hospital, Beijing, 100853 China
- Beihang University, Beijing, 100191 China
| | - Yalin Guo
- Medical School of Chinese PLA, Beijing, 100853 China
- Department of Stomatology, the first Medical Centre, Chinese PLA General Hospital, Beijing, 100853 China
- Beihang University, Beijing, 100191 China
| | - Wenjie Hou
- Medical School of Chinese PLA, Beijing, 100853 China
- Department of Stomatology, the first Medical Centre, Chinese PLA General Hospital, Beijing, 100853 China
- Beihang University, Beijing, 100191 China
| | - Laiqing Xu
- Medical School of Chinese PLA, Beijing, 100853 China
- Department of Stomatology, the first Medical Centre, Chinese PLA General Hospital, Beijing, 100853 China
- Beihang University, Beijing, 100191 China
| | - Jiazhu Wang
- Medical School of Chinese PLA, Beijing, 100853 China
- Department of Stomatology, the first Medical Centre, Chinese PLA General Hospital, Beijing, 100853 China
- Beihang University, Beijing, 100191 China
| | - Xue Yan
- Medical School of Chinese PLA, Beijing, 100853 China
- Department of Stomatology, the first Medical Centre, Chinese PLA General Hospital, Beijing, 100853 China
- Beihang University, Beijing, 100191 China
| |
Collapse
|
27
|
Bayrakdar IS, Orhan K, Akarsu S, Çelik Ö, Atasoy S, Pekince A, Yasa Y, Bilgir E, Sağlam H, Aslan AF, Odabaş A. Deep-learning approach for caries detection and segmentation on dental bitewing radiographs. Oral Radiol 2022; 38:468-479. [PMID: 34807344 DOI: 10.1007/s11282-021-00577-9] [Citation(s) in RCA: 36] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Accepted: 11/09/2021] [Indexed: 02/05/2023]
Abstract
OBJECTIVES The aim of this study is to propose an automatic caries detection and segmentation model based on convolutional neural network (CNN) algorithms for dental bitewing radiographs, using VGG-16 and U-Net architectures, and to evaluate the clinical performance of the model compared to a human observer. METHODS A total of 621 anonymized bitewing radiographs were used to develop the Artificial Intelligence (AI) system (CranioCatch, Eskisehir, Turkey) for the detection and segmentation of caries lesions. The radiographs were obtained from the Radiology Archive of the Department of Oral and Maxillofacial Radiology of the Faculty of Dentistry of Ordu University. VGG-16 and U-Net models implemented with PyTorch were used for the detection and segmentation of caries lesions, respectively. RESULTS The sensitivity, precision, and F-measure rates for caries detection and caries segmentation were 0.84 and 0.81; 0.84 and 0.86; and 0.84 and 0.84, respectively. When compared with 5 experienced observers on an external radiographic dataset, the AI models showed superiority to assistant specialists. CONCLUSION CNN-based AI algorithms have the potential to detect and segment dental caries accurately and effectively in bitewing radiographs. AI algorithms based on the deep-learning method have the potential to assist clinicians in routine clinical practice by quickly and reliably detecting tooth caries. The use of these algorithms in clinical practice can provide an important benefit to physicians as a clinical decision support system in dentistry.
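As a hedged illustration of the classification half of such a pipeline, the sketch below fine-tunes a torchvision VGG-16 head for a two-class caries decision; the class count, learning rate, and dummy batch are assumptions, not the authors' configuration, and a recent torchvision (>= 0.13) is assumed.

```python
# Minimal PyTorch sketch of a VGG-16-based caries classifier head, in the spirit of the
# VGG-16 / U-Net pairing described above. Hyperparameters and data are placeholders.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2                                  # caries vs. no caries (assumed)
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative optimization step on a dummy batch of bitewing crops.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 1, 0])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy-batch loss: {loss.item():.3f}")
```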
Collapse
Affiliation(s)
- Ibrahim Sevki Bayrakdar
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, 26240, Eskisehir, Turkey.
- Eskisehir Osmangazi University Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir, Turkey.
| | - Kaan Orhan
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
- Ankara University Medical Design Application and Research Center (MEDITAM), Ankara, Turkey
| | - Serdar Akarsu
- Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
| | - Özer Çelik
- Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
- Ankara University Medical Design Application and Research Center (MEDITAM), Ankara, Turkey
| | - Samet Atasoy
- Department of Restorative Dentistry, Faculty of Dentistry, Ordu University, Ordu, Turkey
| | - Adem Pekince
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Karabuk University, Karabuk, Turkey
| | - Yasin Yasa
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ordu University, Ordu, Turkey
| | - Elif Bilgir
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, 26240, Eskisehir, Turkey
| | - Hande Sağlam
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, 26240, Eskisehir, Turkey
| | - Ahmet Faruk Aslan
- Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
| | - Alper Odabaş
- Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey
| |
Collapse
|
28
|
Kim J, Hwang JJ, Jeong T, Cho BH, Shin J. Deep learning-based identification of mesiodens using automatic maxillary anterior region estimation in panoramic radiography of children. Dentomaxillofac Radiol 2022; 51:20210528. [PMID: 35731733 PMCID: PMC9522977 DOI: 10.1259/dmfr.20210528] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2021] [Revised: 06/06/2022] [Accepted: 06/11/2022] [Indexed: 11/05/2022] Open
Abstract
OBJECTIVES The purpose of this study is to develop and evaluate the performance of a model that automatically sets a region of interest (ROI) and diagnoses mesiodens in panoramic radiographs of growing children using deep learning technology. METHODS Out of 988 panoramic radiographs, 489 patients with mesiodens were classified as an experimental group, and 499 patients without mesiodens were classified as a control group. This study consists of two networks. The first network (DeeplabV3plus) is a segmentation model that uses the posterior molar space to set the ROI in the maxillary anterior region with the mesiodens in the panoramic radiograph. The second network (Inception-ResNet-v2) is a classification model that uses cropped maxillary anterior teeth to determine the presence of mesiodens. The data were divided into five groups and cross-validated. Deep learning models were created and trained using Inception-ResNet-v2. The performance of the segmentation network was evaluated using accuracy, Intersection over Union (IoU), and MeanBFscore. The overall network performance was evaluated using accuracy, precision, recall, and F1-score. RESULTS Segmentation accuracy using the posterior molar space in panoramic radiographs was 0.839, with an IoU of 0.762 and a MeanBFscore of 0.907. The mean values of accuracy, precision, recall, F1-score, and area under the curve for the diagnosis of mesiodens using automatic segmentation were 0.971, 0.971, 0.971, 0.971, and 0.971, respectively. CONCLUSIONS The diagnostic performance of the deep learning system using posterior molar space on the panoramic radiograph was sufficiently useful. The results of the deep learning system confirmed the possibility of complete automation of the classification of mesiodens.
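The five-group cross-validation mentioned above could be organized as in the sketch below; `evaluate_fold` is a hypothetical stand-in for training and testing the two-stage model on one split.

```python
# Sketch of five-fold cross-validation over the 988 radiographs described above.
# evaluate_fold is a hypothetical placeholder for the actual training/testing routine.
import numpy as np
from sklearn.model_selection import KFold

indices = np.arange(988)                         # 489 mesiodens + 499 controls, as reported

def evaluate_fold(train_idx, test_idx) -> float:
    """Placeholder: train DeepLabV3+ / Inception-ResNet-v2 on train_idx, score on test_idx."""
    rng = np.random.default_rng(len(test_idx))
    return float(rng.uniform(0.95, 0.99))        # dummy accuracy

kfold = KFold(n_splits=5, shuffle=True, random_state=42)
scores = [evaluate_fold(tr, te) for tr, te in kfold.split(indices)]
print(f"per-fold accuracy: {np.round(scores, 3)}")
print(f"mean accuracy:     {np.mean(scores):.3f}")
```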
Collapse
Affiliation(s)
- Jihoon Kim
- Department of Pediatric Dentistry, School of Dentistry, Pusan National University, Yangsan, South Korea
| | - Jae Joon Hwang
- Department of Pediatric Dentistry, School of Dentistry, Pusan National University, Yangsan, South Korea
| | - Taesung Jeong
- Department of Pediatric Dentistry, School of Dentistry, Pusan National University, Yangsan, South Korea
| | - Bong-Hae Cho
- Department of Pediatric Dentistry, School of Dentistry, Pusan National University, Yangsan, South Korea
| | - Jonghyun Shin
- Department of Pediatric Dentistry, School of Dentistry, Pusan National University, Yangsan, South Korea
| |
Collapse
|
29
|
A Fused Deep Learning Architecture for the Detection of the Relationship between the Mandibular Third Molar and the Mandibular Canal. Diagnostics (Basel) 2022; 12:diagnostics12082018. [PMID: 36010368 PMCID: PMC9407570 DOI: 10.3390/diagnostics12082018] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Revised: 08/17/2022] [Accepted: 08/18/2022] [Indexed: 12/01/2022] Open
Abstract
The study aimed to generate a fused deep learning algorithm that detects and classifies the relationship between the mandibular third molar and the mandibular canal on orthopantomographs. Radiographs (n = 1880) were randomly selected from the hospital archive. Two dentomaxillofacial radiologists annotated the data via MATLAB and classified them into four groups according to the overlap of the root of the mandibular third molar and the mandibular canal. Each radiograph was segmented using a U-Net-like architecture. The segmented images were classified by AlexNet. Accuracy, the weighted intersection over union score, the Dice coefficient, specificity, sensitivity, and area under the curve metrics were used to quantify the performance of the models. Also, three dental practitioners were asked to classify the same test data, and their success rate was assessed using the intraclass correlation coefficient. The segmentation network achieved a global accuracy of 0.99 and a weighted intersection over union score of 0.98, and the average Dice score over all images was 0.91. The classification network achieved an accuracy of 0.80, per-class sensitivities of 0.74, 0.83, 0.86, and 0.67, per-class specificities of 0.92, 0.95, 0.88, and 0.96, and an AUC of 0.85. The most successful dental practitioner achieved a success rate of 0.79. The fused segmentation and classification networks produced encouraging results. The final model achieved almost the same classification performance as the dental practitioners. Better diagnostic accuracy of the combined artificial intelligence tools may help to improve the prediction of risk factors, especially for recognizing such anatomical variations.
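Per-class sensitivity and specificity of the kind reported above can be read directly off a 4-class confusion matrix; a sketch with an invented matrix follows.

```python
# Sketch of per-class sensitivity and specificity from a 4-class confusion matrix
# (the four molar/canal overlap groups). The matrix below is invented, not study data.
import numpy as np

cm = np.array([[37,  5,  4,  4],     # rows: true class, columns: predicted class
               [ 4, 50,  4,  2],
               [ 3,  3, 43,  1],
               [ 5,  2,  3, 20]])

total = cm.sum()
for k in range(cm.shape[0]):
    tp = cm[k, k]
    fn = cm[k, :].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = total - tp - fn - fp
    print(f"class {k}: sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```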
Collapse
|
30
|
Canal-Net for automatic and robust 3D segmentation of mandibular canals in CBCT images using a continuity-aware contextual network. Sci Rep 2022; 12:13460. [PMID: 35931733 PMCID: PMC9356068 DOI: 10.1038/s41598-022-17341-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Accepted: 07/25/2022] [Indexed: 02/01/2023] Open
Abstract
The purpose of this study was to propose a continuity-aware contextual network (Canal-Net) for the automatic and robust 3D segmentation of the mandibular canal (MC) with high consistent accuracy throughout the entire MC volume in cone-beam CT (CBCT) images. The Canal-Net was designed based on a 3D U-Net with bidirectional convolutional long short-term memory (ConvLSTM) under a multi-task learning framework. Specifically, the Canal-Net learned the 3D anatomical context information of the MC by incorporating spatio-temporal features from ConvLSTM, and also the structural continuity of the overall MC volume under a multi-task learning framework using multi-planar projection losses complementally. The Canal-Net showed higher segmentation accuracies in 2D and 3D performance metrics (p < 0.05), and especially, a significant improvement in Dice similarity coefficient scores and mean curve distance (p < 0.05) throughout the entire MC volume compared to other popular deep learning networks. As a result, the Canal-Net achieved high consistent accuracy in 3D segmentations of the entire MC in spite of the areas of low visibility by the unclear and ambiguous cortical bone layer. Therefore, the Canal-Net demonstrated the automatic and robust 3D segmentation of the entire MC volume by improving structural continuity and boundary details of the MC in CBCT images.
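One plausible reading of the multi-planar projection loss mentioned above is to project the predicted and reference volumes onto the three orthogonal planes and average a soft Dice loss over the projections; the PyTorch sketch below follows that reading and is not the authors' exact formulation.

```python
# Hedged sketch of a multi-planar projection loss: max-project the 3D prediction and
# reference onto the three orthogonal planes and average a soft Dice loss. This is one
# plausible reading of the idea above, not the authors' exact formulation.
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def multiplanar_projection_loss(pred_vol, target_vol):
    """pred_vol, target_vol: (D, H, W) tensors with values in [0, 1]."""
    losses = [soft_dice_loss(pred_vol.amax(dim=axis), target_vol.amax(dim=axis))
              for axis in range(3)]              # axial, coronal, sagittal projections
    return torch.stack(losses).mean()

pred = torch.rand(32, 64, 64)                    # dummy predicted probabilities
target = (torch.rand(32, 64, 64) > 0.9).float()  # dummy binary reference mask
print(f"projection loss: {multiplanar_projection_loss(pred, target).item():.3f}")
```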
Collapse
|
31
|
Artificial intelligence-aided detection of ectopic eruption of maxillary first molars based on panoramic radiography. J Dent 2022; 125:104239. [PMID: 35863549 DOI: 10.1016/j.jdent.2022.104239] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2021] [Revised: 07/13/2022] [Accepted: 07/17/2022] [Indexed: 02/08/2023] Open
Abstract
OBJECTIVES Ectopic eruption (EE) of maxillary permanent first molars (PFMs) is among the most frequent ectopic eruptions and leads to premature loss of adjacent primary second molars, impaction of premolars, and a decrease in dental arch length. Apart from oral manifestations such as delayed eruption and discoloration of PFMs, panoramic radiography can reveal EE of maxillary PFMs as well. Identifying eruption anomalies in radiographs can be strongly experience-dependent, which led us to develop an automatic model that can aid dentists in this task and allow timelier interventions. METHODS Panoramic X-ray images from 1480 patients aged 4-9 years were used to train an auto-screening model. Another 100 panoramic images were used to validate and test the model. RESULTS The positive and negative predictive values of this auto-screening system were 0.86 and 0.88, respectively, with a specificity of 0.90 and a sensitivity of 0.86. Using the model to aid dentists in detecting EE on the 100 panoramic images led to higher sensitivity and specificity than when three experienced pediatric dentists detected EE manually. CONCLUSIONS The deep learning-based automatic screening system is useful and promising for the detection of EE of maxillary PFMs, with relatively high specificity. However, deep learning is not completely accurate in the detection of EE. To minimize the effect of possible false-negative diagnoses, regular follow-ups and re-evaluation are required. CLINICAL SIGNIFICANCE Identification of EE through a semi-automatic screening model can improve the efficacy and accuracy of clinical diagnosis compared to human experts alone. This method may allow earlier detection and timelier intervention and management.
Collapse
|
32
|
A Few-Shot Dental Object Detection Method Based on a Priori Knowledge Transfer. Symmetry (Basel) 2022. [DOI: 10.3390/sym14061129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
With the continuous improvement in oral health awareness, people's demand for oral health diagnosis has also increased. Dental object detection is a key step in automated dental diagnosis; however, because of the particularity of medical data, researchers usually cannot obtain sufficient medical data. Therefore, this study proposes a dental object detection method for small-size datasets based on tooth semantics, structural information feature extraction, and a priori knowledge migration, called a segmentation, points, segmentation, and classification network (SPSC-NET). In the region-of-interest extraction method, the SPSC-NET converts the dental X-ray image into an a priori knowledge information image composed of the edges of the teeth and the semantic segmentation image; the network structure used to extract the a priori knowledge information is a symmetric structure, which then generates the key points of the object instances. Next, it uses the key points of the object instances (i.e., the dental semantic segmentation image and the dental edge image) to obtain the object instance image (i.e., the positioning of the teeth). Using 10 training images, the test precision and recall of the tooth object center point of the SPSC-NET method were between 99% and 100%. In the classification method, the SPSC-NET identified the single-instance segmentation image generated by migrating the dental object area, the edge image, and the semantic segmentation image as a priori knowledge. Using the same deep neural network classification model, classification with a priori knowledge was 20% more accurate than the ordinary classification methods. For the overall object detection performance indicators, the SPSC-NET's average precision (AP) value was more than 92%, which is better than that of the transfer-based faster region-based convolutional neural network (Faster-RCNN) object detection model; moreover, its AP and mean intersection-over-union (mIoU) were 14.72% and 19.68% better than those of the transfer-based Faster-RCNN model, respectively.
Collapse
|
33
|
Revilla-León M, Gohil A, Barmak AB, Zandinejad A, Raigrodski AJ. Best-Fit Algorithm Influences on Virtual Casts' Alignment Discrepancies. J Prosthodont 2022; 32:331-339. [PMID: 35524587 DOI: 10.1111/jopr.13537] [Citation(s) in RCA: 27] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Accepted: 04/23/2022] [Indexed: 11/27/2022] Open
Abstract
PURPOSE To measure the influence of best-fit (BF) algorithms (entire dataset, 3 or 6 points landmark-based, or section-based BF) on virtual casts and their alignment discrepancies. MATERIAL AND METHODS A mandibular typodont was obtained and digitized by using an industrial scanner (GOM Atos Q 3D 12M). A control mesh was acquired. The typodont was digitized by using an intraoral scanner (TRIOS 4). Based on the alignment procedures, four groups were created: BF of the entire dataset (BF group), landmark-based BF using 3 reference points (LBF-3 group), or 6 reference points (LBF-6 group), and section-based BF (SBF group). The root mean square (RMS) error was calculated. One-way ANOVA and post-hoc pairwise multi-comparison Tukey were used to analyze the data (α = .05). RESULTS Significant RMS error mean value differences were found across the groups (P<.001). Tukey test revealed significant RMS error mean value differences between the BF and LBF-3 groups (P = .022), BF and LBF-6 groups (P<.001), LB-3 and LB-6 groups (P<.001), LBF-3 and SBF groups (P<.001), and LBF-6 and SBF groups (P<.001). The LBF-6 group had the lowest trueness, while SBF and BF groups obtained the highest trueness values. Furthermore, significant SD differences were revealed across the groups tested (P<.001). Tukey test revealed significant SD differences between the BF and LBF-6 groups (P<.001), LBF-3 and LB-6 groups (P<.001), LBF-3 and SBF groups (P = .004), and LBF-6 and SBF groups (P<.001). The BF and SBF groups showed equal and highest precision, while the LBF-6 group had the lowest precision. CONCLUSIONS The best-fit algorithms tested influenced the virtual casts' alignment discrepancy. Entire dataset or section-based best-fit algorithms obtained the highest virtual casts' alignment trueness and precision compared with the landmark-based method.
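For intuition, the sketch below performs an "entire dataset" style best-fit of two corresponding point sets with the Kabsch (SVD) method and then reports the RMS error used above to quantify alignment discrepancy; the point clouds are synthetic, not scanned casts.

```python
# Sketch of a rigid best-fit alignment (Kabsch / SVD) of two corresponding point sets,
# followed by the RMS error used above as the discrepancy measure. Points are synthetic.
import numpy as np

def best_fit_transform(src, dst):
    """Rotation R and translation t minimizing ||(src @ R.T + t) - dst||."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 3))             # stand-in for control-mesh vertices
angle = np.deg2rad(5.0)
rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                [np.sin(angle),  np.cos(angle), 0.0],
                [0.0, 0.0, 1.0]])
scan = reference @ rot.T + [0.3, -0.1, 0.05] + rng.normal(scale=0.01, size=(500, 3))

R, t = best_fit_transform(scan, reference)
aligned = scan @ R.T + t
rms = np.sqrt(np.mean(np.sum((aligned - reference) ** 2, axis=1)))
print(f"RMS error after best-fit alignment: {rms:.4f}")
```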
Collapse
Affiliation(s)
- Marta Revilla-León
- Affiliate Assistant Professor, Graduate Prosthodontics, Department of Restorative Dentistry, School of Dentistry, University of Washington, Seattle, Wash; Director of Research and Digital Dentistry, Kois Center, Seattle, Wash; and Adjunct Professor, Department of Prosthodontics, Tufts University, Boston, MA, USA
| | - Aishwa Gohil
- Undergraduate student, College of Dentistry, Texas A&M University, Dallas, TX, USA
| | - Abdul B Barmak
- Assistant Professor Clinical Research and Biostatistics, Eastman Institute of Oral Health, University of Rochester Medical Center, Rochester, NY, USA
| | - Amirali Zandinejad
- Associate Professor and Program Director AEGD Residency, Comprehensive Dentistry Department, College of Dentistry, Texas A&M University, Dallas, TX, USA
| | - Ariel J Raigrodski
- Private Practice, Lynnwood, Wash and Affiliate Professor, Department of Restorative Dentistry, School of Dentistry, University of Washington, Seattle, USA
| |
Collapse
|
34
|
Celik ME. Deep Learning Based Detection Tool for Impacted Mandibular Third Molar Teeth. Diagnostics (Basel) 2022; 12:diagnostics12040942. [PMID: 35453990 PMCID: PMC9025752 DOI: 10.3390/diagnostics12040942] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2022] [Revised: 03/29/2022] [Accepted: 04/08/2022] [Indexed: 12/01/2022] Open
Abstract
Impacted third molars are a common issue across all ages, possibly causing tooth decay, root resorption, and pain. This study was aimed at developing a computer-assisted detection system based on deep convolutional neural networks for the detection of impacted third molars using different architectures and at evaluating the potential usefulness and accuracy of the proposed solutions on panoramic radiographs. A total of 440 panoramic radiographs from 300 patients were randomly divided. As a two-stage technique, Faster R-CNN with ResNet50, AlexNet, and VGG16 as backbones was used, along with the one-stage technique YOLOv3. Faster R-CNN, as a detector, yielded a mAP@0.5 of 0.91 with the ResNet50 backbone, while VGG16 and AlexNet showed slightly lower performance: 0.87 and 0.86, respectively. The other detector, YOLOv3, provided the highest detection efficacy with a mAP@0.5 of 0.96. Recall and precision were 0.93 and 0.88, respectively, which supported its high performance. Considering the findings from the different architectures, the proposed one-stage detector YOLOv3 had excellent performance for impacted mandibular third molar detection on panoramic radiographs. These promising results showed that diagnostic tools based on state-of-the-art deep learning models were reliable and robust for clinical decision-making.
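A hedged torchvision sketch of the two-stage detector named above (Faster R-CNN with a ResNet-50 FPN backbone), reduced to one "impacted third molar" class plus background, is shown below; the image, box, and training details are placeholders rather than the paper's configuration, and a recent torchvision is assumed.

```python
# Sketch of a Faster R-CNN (ResNet-50 FPN backbone) detector with a single foreground
# class, in the spirit of the study above. All data and settings here are placeholders.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2    # background + impacted mandibular third molar
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Dummy training-mode forward pass: one pseudo-panoramic image with one annotated box.
model.train()
image = torch.rand(3, 512, 1024)
target = {"boxes": torch.tensor([[600.0, 300.0, 700.0, 380.0]]),
          "labels": torch.tensor([1])}
loss_dict = model([image], [target])
print({name: round(loss.item(), 3) for name, loss in loss_dict.items()})
```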
Collapse
Affiliation(s)
- Mahmut Emin Celik
- Department of Electrical Electronics Engineering, Faculty of Engineering, Gazi University, Eti mah. Yukselis sk. No: 5 Maltepe, Ankara 06570, Turkey
| |
Collapse
|
35
|
Choi E, Lee S, Jeong E, Shin S, Park H, Youm S, Son Y, Pang K. Artificial intelligence in positioning between mandibular third molar and inferior alveolar nerve on panoramic radiography. Sci Rep 2022; 12:2456. [PMID: 35165342 PMCID: PMC8844031 DOI: 10.1038/s41598-022-06483-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Accepted: 01/06/2022] [Indexed: 11/09/2022] Open
Abstract
Determining the exact positional relationship between mandibular third molar (M3) and inferior alveolar nerve (IAN) is important for surgical extractions. Panoramic radiography is the most common dental imaging test. The purposes of this study were to develop an artificial intelligence (AI) model to determine two positional relationships (true contact and bucco-lingual position) between M3 and IAN when they were overlapped in panoramic radiographs and compare its performance with that of oral and maxillofacial surgery (OMFS) specialists. A total of 571 panoramic images of M3 from 394 patients was used for this study. Among the images, 202 were classified as true contact, 246 as intimate, 61 as IAN buccal position, and 62 as IAN lingual position. A deep convolutional neural network model with ResNet-50 architecture was trained for each task. We randomly split the dataset into 75% for training and validation and 25% for testing. Model performance was superior in bucco-lingual position determination (accuracy 0.76, precision 0.83, recall 0.67, and F1 score 0.73) to true contact position determination (accuracy 0.63, precision 0.62, recall 0.63, and F1 score 0.61). AI exhibited much higher accuracy in both position determinations compared to OMFS specialists. In determining true contact position, OMFS specialists demonstrated an accuracy of 52.68% to 69.64%, while the AI showed an accuracy of 72.32%. In determining bucco-lingual position, OMFS specialists showed an accuracy of 32.26% to 48.39%, and the AI showed an accuracy of 80.65%. Moreover, Cohen’s kappa exhibited a substantial level of agreement for the AI (0.61) and poor agreements for OMFS specialists in bucco-lingual position determination. Determining the position relationship between M3 and IAN is possible using AI, especially in bucco-lingual positioning. The model could be used to support clinicians in the decision-making process for M3 treatment.
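The agreement statistic quoted above (Cohen's kappa) can be computed directly from paired labels; a tiny sketch with made-up bucco-lingual calls follows.

```python
# Sketch of Cohen's kappa between reference and predicted bucco-lingual positions.
# The labels below are invented examples, not the study's data.
from sklearn.metrics import accuracy_score, cohen_kappa_score

reference = ["buccal", "lingual", "buccal", "buccal", "lingual", "lingual", "buccal", "lingual"]
predicted = ["buccal", "lingual", "buccal", "lingual", "lingual", "lingual", "buccal", "buccal"]

print(f"accuracy      = {accuracy_score(reference, predicted):.2f}")
print(f"Cohen's kappa = {cohen_kappa_score(reference, predicted):.2f}")
```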
Collapse
Affiliation(s)
- Eunhye Choi
- Department of Oral Medicine and Oral Diagnosis, School of Dentistry, Seoul National University, 101, Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
| | - Soohong Lee
- Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
| | - Eunjae Jeong
- Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
| | - Seokwon Shin
- Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
| | - Hyunwoo Park
- Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
| | - Sekyoung Youm
- Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea
| | - Youngdoo Son
- Department of Industrial and Systems Engineering, Dongguk University - Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul, 04620, Republic of Korea.
| | - KangMi Pang
- Department of Oral and Maxillofacial Surgery, Seoul National University Dental Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea.
| |
Collapse
|
36
|
Ngoc VTN, Viet DH, Tuan TM, Hai PV, Thang NP, Tuyen DN, Son LH. VNU-diagnosis: A novel medical system based on deep learning for diagnosis of periapical inflammation from X-Rays images. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-213299] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Periapical inflammation (PI) is one of the most common diseases in adults, arising as a complication of endodontitis or dental trauma, with consequences for quality of life such as tiredness and signs of infection. Specifically, patients with severe PI often experience tiredness and high fever accompanied by signs of infection such as dry lips, dirty tongue, and lymph node reaction in the area under the jaw. In X-ray images, PI can be recognized by vague boundaries with signs of periapical ligament extensions. It is necessary to design a computerized diagnosis system based on deep learning models to support clinicians in the diagnosis of PI from X-ray images. In this paper, we propose a new medical system called VNU for the diagnosis of PI from X-ray images. The VNU system uses deep learning to classify whether an X-ray image shows PI or not. The Residual Neural Network (ResNet) with 34 layers is utilized with proper data augmentation and learning algorithms. The system is designed based on a 7-layer enterprise architecture (User, Business, Application, Data, Technology, Infrastructure, and Security). It is used by both clinicians and IT operators. The system has been validated on real data from Hanoi Medical University, Vietnam, consisting of 900 images with PI and 500 normal images. Two validation scenarios, namely hyperparameter selection and performance comparison with other CNN-based deep learning models, were performed. The experiments showed that the proposed system performs better than the others in terms of sensitivity and specificity, with values of 96.70% and 93.87%, respectively. The system is deployed on a web interface that offers flexibility for clinicians in diagnosis and training.
Collapse
Affiliation(s)
| | - Do Hoang Viet
- School of Odonto Stomatology, Hanoi Medical University, Hanoi, Vietnam
| | - Tran Manh Tuan
- Faculty of Computer Science and Engineering, Thuyloi University, Hanoi, Vietnam
| | - Pham Van Hai
- School of Information and Communication Technology, Hanoi University of Science and Technology, Hanoi, Vietnam
| | - Nguyen Phu Thang
- School of Odonto Stomatology, Hanoi Medical University, Hanoi, Vietnam
| | - Do Ngoc Tuyen
- School of Information and Communication Technology, Hanoi University of Science and Technology, Hanoi, Vietnam
| | - Le Hoang Son
- VNU Information Technology Institute, Vietnam National University, Hanoi, Vietnam
| |
Collapse
|
37
|
Artificial Intelligence: A New Diagnostic Software in Dentistry: A Preliminary Performance Diagnostic Study. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:ijerph19031728. [PMID: 35162751 PMCID: PMC8835112 DOI: 10.3390/ijerph19031728] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/14/2021] [Revised: 12/18/2021] [Accepted: 01/27/2022] [Indexed: 02/01/2023]
Abstract
Background: Artificial intelligence (AI) has taken hold in public health because more and more people are looking to make a diagnosis using technology that allows them to work faster and more accurately, reducing costs and the number of medical errors. Methods: In the present study, 120 panoramic X-rays (OPGs) were randomly selected from the Department of Oral and Maxillofacial Sciences of Sapienza University of Rome, Italy. The OPGs were acquired and analyzed using Apox, which takes a panoramic X-ray and automatically returns the dental formula, the presence of dental implants, prosthetic crowns, fillings, and root remnants. A descriptive analysis was performed presenting the categorical variables as absolute and relative frequencies. Results: In total, the number of true positive (TP) values was 2,195 (19.06%); true negative (TN), 8,908 (77.34%); false positive (FP), 132 (1.15%); and false negative (FN), 283 (2.46%). The overall sensitivity was 0.89, while the overall specificity was 0.98. Conclusions: The present study shows the latest achievements in dentistry, analyzing the application and credibility of a new diagnostic method to improve the work of dentists and the patients' care.
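The reported sensitivity and specificity follow from the raw counts above; a quick arithmetic check:

```python
# Quick check that the reported sensitivity and specificity follow from the raw counts.
tp, tn, fp, fn = 2195, 8908, 132, 283

sensitivity = tp / (tp + fn)    # 2195 / 2478 ≈ 0.886, consistent with the reported 0.89
specificity = tn / (tn + fp)    # 8908 / 9040 ≈ 0.985, consistent with the reported 0.98
print(f"sensitivity = {sensitivity:.3f}, specificity = {specificity:.3f}")
```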
Collapse
|
38
|
Automated Prediction of Extraction Difficulty and Inferior Alveolar Nerve Injury for Mandibular Third Molar Using a Deep Neural Network. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12010475] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Extraction of mandibular third molars is a common procedure in oral and maxillofacial surgery. There are studies that simultaneously predict the extraction difficulty of the mandibular third molar and the complications that may occur. Here, we propose a method for automatically detecting mandibular third molars in panoramic radiographic images and predicting the extraction difficulty and the likelihood of inferior alveolar nerve (IAN) injury. Our dataset consists of 4903 panoramic radiographic images acquired from various dental hospitals. Seven dentists annotated detection and classification labels. The detection model locates the mandibular third molar in the panoramic radiographic image. The region of interest (ROI), which includes the detected mandibular third molar, adjacent teeth, and the IAN, is cropped from the panoramic radiographic image. The classification models use the ROI as input to predict the extraction difficulty and the likelihood of IAN injury. The achieved detection performance was 99.0% mAP at an intersection over union (IoU) threshold of 0.5. In addition, we achieved an 83.5% accuracy for the prediction of extraction difficulty and an 81.1% accuracy for the prediction of the likelihood of IAN injury. We demonstrated that a deep learning method can support the diagnostic decisions involved in extracting the mandibular third molar.
Collapse
|
39
|
Kaya E, Gunec HG, Aydin KC, Urkmez ES, Duranay R, Ates HF. A deep learning approach to permanent tooth germ detection on pediatric panoramic radiographs. Imaging Sci Dent 2022; 52:275-281. [PMID: 36238699 PMCID: PMC9530294 DOI: 10.5624/isd.20220050] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 05/19/2022] [Accepted: 06/01/2022] [Indexed: 12/01/2022] Open
Abstract
Purpose The aim of this study was to assess the performance of a deep learning system for permanent tooth germ detection on pediatric panoramic radiographs. Materials and Methods In total, 4518 anonymized panoramic radiographs of children between 5 and 13 years of age were collected. YOLOv4, a convolutional neural network (CNN)-based object detection model, was used to automatically detect permanent tooth germs. Panoramic images of children, annotated in LabelImg, were used to train and test the YOLOv4 algorithm. True-positive, false-positive, and false-negative rates were calculated. A confusion matrix was used to evaluate the performance of the model. Results The YOLOv4 model, which detected permanent tooth germs on pediatric panoramic radiographs, provided an average precision value of 94.16% and an F1 value of 0.90, indicating a high level of performance. The average YOLOv4 inference time was 90 ms. Conclusion The detection of permanent tooth germs on pediatric panoramic X-rays using a deep learning-based approach may facilitate the early diagnosis of tooth deficiency or supernumerary teeth and help dental practitioners find more accurate treatment options while saving time and effort.
Collapse
Affiliation(s)
- Emine Kaya
- Department of Pediatric Dentistry, Faculty of Dentistry, Istanbul Okan University, Istanbul, Turkey
| | - Huseyin Gurkan Gunec
- Department of Endodontics, Faculty of Dentistry, Atlas University, Istanbul, Turkey
| | - Kader Cesur Aydin
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Istanbul Medipol University, Istanbul, Turkey
| | | | - Recep Duranay
- Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Atlas University, Istanbul, Turkey
| | - Hasan Fehmi Ates
- Department of Computer Engineering, School of Engineering and Natural Sciences, Istanbul Medipol University, Istanbul, Turkey
| |
Collapse
|
40
|
Putra RH, Doi C, Yoda N, Astuti ER, Sasaki K. Current applications and development of artificial intelligence for digital dental radiography. Dentomaxillofac Radiol 2022; 51:20210197. [PMID: 34233515 PMCID: PMC8693331 DOI: 10.1259/dmfr.20210197] [Citation(s) in RCA: 45] [Impact Index Per Article: 22.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023] Open
Abstract
In the last few years, artificial intelligence (AI) research has been rapidly developing and emerging in the field of dental and maxillofacial radiology. Dental radiography, which is commonly used in daily practice, provides an incredibly rich resource for AI development and has attracted many researchers to develop applications for various purposes. This study reviewed the applicability of AI to dental radiography based on current studies. Online searches on PubMed and IEEE Xplore databases, up to December 2020, and subsequent manual searches were performed. Then, we categorized the applications of AI according to the similarity of their purposes: diagnosis of dental caries, periapical pathologies, and periodontal bone loss; cyst and tumor classification; cephalometric analysis; screening of osteoporosis; tooth recognition and forensic odontology; dental implant system recognition; and image quality enhancement. Current developments in AI methodology for each of the aforementioned applications were subsequently discussed. Although most of the reviewed studies demonstrated a great potential of AI application for dental radiography, further development is still needed before implementation in clinical routine due to several challenges and limitations, such as a lack of dataset size justification and unstandardized reporting formats. Considering the current limitations and challenges, future AI research in dental radiography should follow standardized reporting formats in order to align the research designs and enhance the impact of AI development globally.
Collapse
Affiliation(s)
| | - Chiaki Doi
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
| | - Nobuhiro Yoda
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
| | - Eha Renwi Astuti
- Department of Dentomaxillofacial Radiology, Faculty of Dental Medicine, Universitas Airlangga, Jl. Mayjen Prof. Dr. Moestopo no 47, Surabaya, Indonesia
| | - Keiichi Sasaki
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
| |
Collapse
|
41
|
Lim HK, Jung SK, Kim SH, Cho Y, Song IS. Deep semi-supervised learning for automatic segmentation of inferior alveolar nerve using a convolutional neural network. BMC Oral Health 2021; 21:630. [PMID: 34876105 PMCID: PMC8650351 DOI: 10.1186/s12903-021-01983-5] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2021] [Accepted: 11/22/2021] [Indexed: 11/10/2022] Open
Abstract
Background The inferior alveolar nerve (IAN) provides sensation to the mandibular teeth and lower lip, so its position should be identified prior to surgery. This study therefore used artificial intelligence (AI) to automatically image and track the position of the IAN for quicker and safer surgery. Methods A total of 138 cone-beam computed tomography datasets (internal: 98, external: 40) collected from three hospitals were used in the study. A customized 3D nnU-Net was used for image segmentation. Active learning, consisting of three steps, was carried out iteratively on 83 datasets, with data added cumulatively after each step. The accuracy of the model for IAN segmentation was then evaluated using 50 datasets. Accuracy, assessed with the Dice similarity coefficient (DSC), and segmentation time were compared across the learning steps. In addition, visual scoring was used to compare manual and automatic segmentation. Results Over the three learning steps, the DSC increased from 0.48 ± 0.11 to 0.50 ± 0.11 and then to 0.58 ± 0.08. The DSC for the external dataset was 0.49 ± 0.12. The segmentation times were 124.8, 143.4, and 86.4 s, showing a large decrease at the final stage. In visual scoring, manual segmentation was rated more accurate than automatic segmentation. Conclusions The deep active learning framework can serve as a fast, accurate, and robust clinical tool for demarcating the location of the IAN.
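For reference, the Dice similarity coefficient reported above can be computed from two binary segmentation masks as in the following minimal NumPy sketch; the array names, shapes, and random masks are illustrative assumptions, not data from the study.

    import numpy as np

    def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
        """DSC = 2|A intersect B| / (|A| + |B|) for boolean masks of identical shape."""
        pred = pred.astype(bool)
        truth = truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

    # Hypothetical 3D masks (e.g., a cropped CBCT volume) for illustration.
    rng = np.random.default_rng(0)
    pred_mask = rng.random((64, 64, 64)) > 0.5
    truth_mask = rng.random((64, 64, 64)) > 0.5
    print(round(dice_coefficient(pred_mask, truth_mask), 3))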
Collapse
Affiliation(s)
- Ho-Kyung Lim
- Department of Oral and Maxillofacial Surgery, Korea University Guro Hospital, 148, Gurodong-ro, Guro-gu, Seoul, 08308, Republic of Korea
| | - Seok-Ki Jung
- Department of Orthodontics, Korea University Guro Hospital, 148, Gurodong-ro, Guro-gu, Seoul, 08308, Republic of Korea
| | - Seung-Hyun Kim
- Department of Medical Humanities, Korea University College of Medicine, 46, Gaeunsa 2-gil, Seongbuk-gu, Seoul, 02842, Republic of Korea
| | - Yongwon Cho
- Department of Radiology and AI Center, Korea University College of Medicine, Korea University Anam Hospital, 73, Goryeodae-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea.
| | - In-Seok Song
- Department of Oral and Maxillofacial Surgery, Korea University Anam Hospital, 73, Goryeodae-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea.
| |
Collapse
|
42
|
Carrillo-Perez F, Pecho OE, Morales JC, Paravina RD, Della Bona A, Ghinea R, Pulgar R, Pérez MDM, Herrera LJ. Applications of artificial intelligence in dentistry: A comprehensive review. J ESTHET RESTOR DENT 2021; 34:259-280. [PMID: 34842324 DOI: 10.1111/jerd.12844] [Citation(s) in RCA: 53] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Revised: 09/30/2021] [Accepted: 11/09/2021] [Indexed: 12/25/2022]
Abstract
OBJECTIVE To perform a comprehensive review of the use of artificial intelligence (AI) and machine learning (ML) in dentistry, providing the community with broad insight into the advances that these technologies and tools have produced, with special attention to the area of esthetic dentistry and color research. MATERIALS AND METHODS The comprehensive review was conducted in the MEDLINE/PubMed, Web of Science, and Scopus databases for papers published in English in the last 20 years. RESULTS Out of 3871 eligible papers, 120 were included for final appraisal. Study methodologies included deep learning (DL; n = 76), fuzzy logic (FL; n = 12), and other ML techniques (n = 32), which were mainly applied to disease identification, image segmentation, image correction, and biomimetic color analysis and modeling. CONCLUSIONS The reviewed studies report outstanding results in the design of high-performance decision support systems for the aforementioned areas. The future of digital dentistry lies in the design of integrated approaches providing personalized treatments to patients. In addition, esthetic dentistry can benefit from these advances by developing models that allow a complete characterization of tooth color, enhancing the accuracy of dental restorations. CLINICAL SIGNIFICANCE The use of AI and ML has an increasing impact on the dental profession and is complementing the development of digital technologies and tools, with wide application in treatment planning and esthetic dentistry procedures.
Collapse
Affiliation(s)
- Francisco Carrillo-Perez
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
| | - Oscar E Pecho
- Post-Graduate Program in Dentistry, Dental School, University of Passo Fundo, Passo Fundo, Brazil
| | - Juan Carlos Morales
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
| | - Rade D Paravina
- Department of Restorative Dentistry and Prosthodontics, School of Dentistry, University of Texas Health Science Center at Houston, Houston, Texas, USA
| | - Alvaro Della Bona
- Post-Graduate Program in Dentistry, Dental School, University of Passo Fundo, Passo Fundo, Brazil
| | - Razvan Ghinea
- Department of Optics, Faculty of Science, University of Granada, Granada, Spain
| | - Rosa Pulgar
- Department of Stomatology, Campus Cartuja, University of Granada, Granada, Spain
| | - María Del Mar Pérez
- Department of Optics, Faculty of Science, University of Granada, Granada, Spain
| | - Luis Javier Herrera
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
| |
Collapse
|
43
|
Vinayahalingam S, Goey RS, Kempers S, Schoep J, Cherici T, Moin DA, Hanisch M. Automated chart filing on panoramic radiographs using deep learning. J Dent 2021; 115:103864. [PMID: 34715247 DOI: 10.1016/j.jdent.2021.103864] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Revised: 10/19/2021] [Accepted: 10/24/2021] [Indexed: 12/11/2022] Open
Abstract
OBJECTIVE The aim of this study was to automatically detect, segment, and label teeth, crowns, fillings, root canal fillings, implants, and root remnants on panoramic radiographs (PRs). MATERIAL AND METHODS As a reference, 2000 PRs were manually annotated and labeled. A deep-learning approach based on Mask R-CNN with a ResNet-50 backbone, combined with a rule-based heuristic algorithm and a combinatorial search algorithm, was trained and validated on 1800 PRs. Subsequently, the trained algorithm was applied to a test set of 200 PRs. F1 scores, as a measure of accuracy, were calculated to quantify the agreement between the annotated ground truth and the model predictions; the F1 score is the harmonic mean of precision (positive predictive value) and recall (sensitivity). RESULTS The proposed method achieved F1 scores of up to 0.993, 0.952, and 0.97 for detection, segmentation, and labeling, respectively. CONCLUSION The proposed method forms a promising foundation for the further development of automatic chart filing on PRs. CLINICAL SIGNIFICANCE Deep learning may assist clinicians in summarizing the radiological findings on panoramic radiographs. The impact of using such models in clinical practice should be explored.
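As a rough illustration of the kind of instance-segmentation backbone the abstract describes, the sketch below instantiates a Mask R-CNN with a ResNet-50 FPN backbone via torchvision and configures it for a hypothetical set of dental object classes; the class list, dummy image, and random weights are assumptions for illustration, not the authors' implementation, and their rule-based and combinatorial post-processing steps are omitted.

    import torch
    from torchvision.models.detection import maskrcnn_resnet50_fpn

    # Hypothetical label map: background plus six dental object types.
    CLASSES = ["background", "tooth", "crown", "filling",
               "root_canal_filling", "implant", "root_remnant"]

    # Randomly initialized model; in practice it would be trained on annotated PRs.
    model = maskrcnn_resnet50_fpn(num_classes=len(CLASSES))
    model.eval()

    # Dummy grayscale panoramic radiograph replicated to 3 channels.
    image = torch.rand(3, 512, 1024)

    with torch.no_grad():
        prediction = model([image])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'

    print(prediction["boxes"].shape, prediction["masks"].shape)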
Collapse
Affiliation(s)
- Shankeeth Vinayahalingam
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, P.O. Box 9101, 6500 HB, Nijmegen, the Netherlands; Department of Artificial Intelligence, Radboud University, Nijmegen, the Netherlands; Department of Oral and Maxillofacial Surgery, Universitätsklinikum Münster, Münster, Germany.
| | - Ru-Shan Goey
- Promaton Co. Ltd., Amsterdam 1076 GR, the Netherlands
| | - Steven Kempers
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, P.O. Box 9101, 6500 HB, Nijmegen, the Netherlands; Department of Artificial Intelligence, Radboud University, Nijmegen, the Netherlands
| | - Julian Schoep
- Promaton Co. Ltd., Amsterdam 1076 GR, the Netherlands
| | - Teo Cherici
- Promaton Co. Ltd., Amsterdam 1076 GR, the Netherlands
| | | | - Marcel Hanisch
- Promaton Co. Ltd., Amsterdam 1076 GR, the Netherlands; Department of Oral and Maxillofacial Surgery, Universitätsklinikum Münster, Münster, Germany
| |
Collapse
|
44
|
Fang D, Li D, Li C, Yang W, Xiao F, Long Z. Efficacy and Safety of Concentrated Growth Factor Fibrin on the Extraction of Mandibular Third Molars: A Prospective, Randomized, Double-Blind Controlled Clinical Study. J Oral Maxillofac Surg 2021; 80:700-708. [PMID: 34801470 DOI: 10.1016/j.joms.2021.10.005] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2021] [Revised: 10/04/2021] [Accepted: 10/04/2021] [Indexed: 11/17/2022]
Abstract
PURPOSE To investigate the efficacy and safety of concentrated growth factor fibrin (CGF) for the extraction of mandibular third molars. PATIENTS AND METHODS This was a randomized, double-blind, controlled clinical study. Patients who underwent mandibular impacted tooth extraction were randomly divided into 2 groups. In the CGF group, CGF gel was placed in the extraction fossa; in the control group, the fossa was filled with serum. The visual analogue scale (VAS) pain score, reductions in swelling and trismus, incidence of postoperative dry socket, distal periodontal probing depth and bone regeneration of the second molar, and bone mineral density (BMD) of the extraction fossa at 24 weeks were evaluated. RESULTS One hundred eighteen patients were enrolled in this study. There was no significant difference in baseline clinical characteristics between the 2 groups. The pain score of the CGF group was significantly lower than that of the control group at 2, 24, and 48 hours after the operation. There was no significant difference in the reduction of swelling or trismus between the 2 groups. There were no cases of dry socket in the CGF group and 3 cases in the control group. Distal periodontal probing depth and bone regeneration of the second molar were better in sockets implanted with CGF than in those that healed naturally (P < .05). Bone mineral density increased significantly in both groups at 24 weeks and also differed significantly between the groups (P < .05). CONCLUSION CGF can effectively reduce pain after tooth extraction and help prevent dry socket, and it can promote distal periodontal tissue healing and bone healing in the extraction socket.
Collapse
Affiliation(s)
- Dongdong Fang
- Associate Chief of Doctor, Department of Oral and Maxillofacial Surgery, The Second Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
| | - Dan Li
- Attending Doctor, Department of Scientific Research, The Second Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
| | - Chengjing Li
- Attending Doctor, Department of Oral and maxillofacial surgery, The Second Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
| | - Wenyu Yang
- Attending Doctor, Department of Oral and maxillofacial surgery, The Second Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
| | - Feng Xiao
- Attending Doctor, Department of Oral and maxillofacial surgery, The Second Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
| | - Zhangbiao Long
- Associate Professor, Department of Hematology, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China.
| |
Collapse
|
45
|
Artificial Intelligence Model to Detect Real Contact Relationship between Mandibular Third Molars and Inferior Alveolar Nerve Based on Panoramic Radiographs. Diagnostics (Basel) 2021; 11:diagnostics11091664. [PMID: 34574005 PMCID: PMC8465495 DOI: 10.3390/diagnostics11091664] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Revised: 09/01/2021] [Accepted: 09/08/2021] [Indexed: 11/17/2022] Open
Abstract
This study aimed to develop a novel detection model for automatically assessing the real contact relationship between mandibular third molars (MM3s) and the inferior alveolar nerve (IAN) on panoramic radiographs processed with deep learning networks, thereby minimizing pseudo-contact interference and reducing the need for cone beam computed tomography (CBCT). A deep-learning approach based on YOLOv4, named MM3-IANnet, was applied to oral panoramic radiographs for the first time. The relationship between MM3s and the IAN observed on CBCT served as the ground truth for the real contact relationship. Accuracy metrics were calculated to compare the performance of the MM3-IANnet, dentists, and a cooperative approach combining dentists and the MM3-IANnet. Compared with detection by dentists (AP = 76.45%) or the MM3-IANnet alone (AP = 83.02%), the cooperative dentist-MM3-IANnet approach yielded the highest average precision (AP = 88.06%). In conclusion, the MM3-IANnet detection model is an encouraging artificial intelligence approach that might assist dentists in detecting the real contact relationship between MM3s and the IAN on panoramic radiographs.
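The average precision values quoted above are typically computed by matching predicted boxes to ground-truth boxes via their intersection over union (IoU); a minimal sketch of that box-overlap computation is shown below, where the box coordinates and the 0.5 matching threshold are generic assumptions rather than specifics from the study.

    def box_iou(a, b):
        """IoU of two boxes given as (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter) if inter else 0.0

    # Hypothetical predicted vs. annotated MM3-IAN contact regions.
    pred_box = (120.0, 340.0, 210.0, 420.0)
    gt_box = (130.0, 350.0, 215.0, 430.0)
    iou = box_iou(pred_box, gt_box)
    print(iou, "match" if iou >= 0.5 else "no match")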
Collapse
|
46
|
Deep Learning-Based Prediction of Paresthesia after Third Molar Extraction: A Preliminary Study. Diagnostics (Basel) 2021; 11:diagnostics11091572. [PMID: 34573914 PMCID: PMC8469771 DOI: 10.3390/diagnostics11091572] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 08/25/2021] [Accepted: 08/28/2021] [Indexed: 01/04/2023] Open
Abstract
The purpose of this study was to determine whether convolutional neural networks (CNNs) can predict paresthesia of the inferior alveolar nerve from panoramic radiographic images taken before extraction of the mandibular third molar. The dataset consisted of 300 preoperative panoramic radiographic images of patients scheduled for mandibular third molar extraction. The 100 images from patients who developed paresthesia after tooth extraction were classified as Group 1, and the 200 images from patients without paresthesia as Group 2. The dataset was randomly divided into a training and validation set (n = 150 [50%]) and a test set (n = 150 [50%]). The SSD300 and ResNet-18 CNN architectures were used for deep learning. The average accuracy, sensitivity, specificity, and area under the curve were 0.827, 0.84, 0.82, and 0.917, respectively. This study showed that CNNs can assist in predicting paresthesia of the inferior alveolar nerve after third molar extraction from panoramic radiographic images.
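The reported accuracy, sensitivity, specificity, and area under the curve can be reproduced from per-image labels and predicted probabilities with standard scikit-learn utilities, as in the sketch below; the label and score arrays are illustrative placeholders rather than data from the study.

    import numpy as np
    from sklearn.metrics import confusion_matrix, roc_auc_score

    # Hypothetical test-set outputs: 1 = paresthesia, 0 = no paresthesia.
    y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])
    y_prob = np.array([0.91, 0.12, 0.40, 0.77, 0.08, 0.66, 0.55, 0.21, 0.83, 0.30])
    y_pred = (y_prob >= 0.5).astype(int)

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall for the paresthesia class
    specificity = tn / (tn + fp)
    auc = roc_auc_score(y_true, y_prob)

    print(accuracy, sensitivity, specificity, round(auc, 3))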
Collapse
|
47
|
The wisdom behind the third molars removal: A prospective study of 106 cases. Ann Med Surg (Lond) 2021; 68:102639. [PMID: 34386230 PMCID: PMC8346357 DOI: 10.1016/j.amsu.2021.102639] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Revised: 07/26/2021] [Accepted: 07/26/2021] [Indexed: 11/23/2022] Open
Abstract
Purpose This paper aims to evaluate decision-making for wisdom tooth (third molar) extraction and the epidemiological profile of the targeted population. Materials and method This was a prospective study of 106 patients treated between January 1, 2020 and January 1, 2021 at our institution, the August 20, 1953 Specialist Hospital, a referral center. Patients were divided into 2 groups according to whether the decision to remove the wisdom teeth was justified by scientific evidence. Results There was no statistically significant difference between the groups regarding sex (P = 0.478), educational level (P = 0.718), or working status (P = 0.606). Furthermore, there was no statistically significant difference between the groups regarding general co-morbidity (P = 1.00) or oral history (P = 0.28). The mean age of the sample was 32.12 years (SD = 11.337 years, range = 17–70 years, median = 30 years). Only 28% of the third molars were surgically extracted. Group (I) included 81 patients whose third molar removal was justified, and Group (II) included 25 patients whose third molar removal was unjustified. Group (I) comprised 30 men and 51 women with a mean age of 30 years; Group (II) comprised 7 men and 18 women with a mean age of 27 years. The assessment of surgical outcomes (operating time, blood loss, hospital stay) showed no difference between the groups. Discussion Monitoring asymptomatic wisdom teeth appears to be an appropriate strategy. Regarding retention versus prophylactic extraction of asymptomatic wisdom teeth, decision-making should be based on the best evidence combined with clinical experience. In our study population, 76.4% of extractions had a justified indication; extraction was considered unjustified when it was performed prophylactically or on asymptomatic, non-pathological third molars without scientific evidence. Conclusion This perpetually debated subject requires dental health authorities to keep their guidance up to date by evaluating new conservative procedures. Third molars are a major focus of interest in dentistry. Premature indications and wrong decision-making for tooth extraction have resulted in many healthy teeth being sacrificed, and debate continues about the best strategies for the management of wisdom teeth.
Collapse
|
48
|
Deep learning-based evaluation of the relationship between mandibular third molar and mandibular canal on CBCT. Clin Oral Investig 2021; 26:981-991. [PMID: 34312683 DOI: 10.1007/s00784-021-04082-5] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Accepted: 07/13/2021] [Indexed: 10/20/2022]
Abstract
OBJECTIVES The objective of our study was to develop and validate a deep learning approach based on convolutional neural networks (CNNs) for automatic detection of the mandibular third molar (M3) and the mandibular canal (MC) and evaluation of the relationship between them on CBCT. MATERIALS AND METHODS A dataset of 254 CBCT scans annotated by radiologists was used for training, validation, and testing. The proposed approach consisted of two modules: (1) detection and pixel-wise segmentation of the M3 and MC based on U-Nets; and (2) M3-MC relation classification based on ResNet-34. Performance was evaluated on the test set, and the classification performance of the approach was compared with that of two residents in oral and maxillofacial radiology. RESULTS For segmentation, the M3 had a mean Dice similarity coefficient (mDSC) of 0.9730 and a mean intersection over union (mIoU) of 0.9606; the MC had an mDSC of 0.9248 and an mIoU of 0.9003. The classification models achieved a mean sensitivity of 90.2%, a mean specificity of 95.0%, and a mean accuracy of 93.3%, which was on par with the residents. CONCLUSIONS Our CNN-based approach demonstrated encouraging performance for the automatic detection and evaluation of the M3 and MC on CBCT. Clinical relevance An automated CNN-based approach for detection and evaluation of the M3 and MC on CBCT has been established, which can be used to improve diagnostic efficiency and facilitate precise diagnosis and treatment of the M3.
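The second module described above can be approximated by adapting an off-the-shelf ResNet-34 to the relation-classification task, as in the sketch below; the two-class contact/no-contact label set, input size, and random input are assumptions for illustration and not the authors' exact configuration.

    import torch
    from torchvision.models import resnet34

    # Assumed label set for the M3-MC relation; the paper's exact classes may differ.
    RELATION_CLASSES = ["no_contact", "contact"]

    model = resnet34(num_classes=len(RELATION_CLASSES))  # randomly initialized
    model.eval()

    # Dummy 3-channel crop around the M3-MC region (e.g., a resampled CBCT slice).
    crop = torch.rand(1, 3, 224, 224)

    with torch.no_grad():
        logits = model(crop)
        probs = torch.softmax(logits, dim=1)

    print(RELATION_CLASSES[int(probs.argmax(dim=1))], probs.squeeze().tolist())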
Collapse
|
49
|
Kim D, Choi J, Ahn S, Park E. A smart home dental care system: integration of deep learning, image sensors, and mobile controller. JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING 2021; 14:1123-1131. [PMID: 34249170 PMCID: PMC8259098 DOI: 10.1007/s12652-021-03366-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 06/22/2021] [Indexed: 06/13/2023]
Abstract
UNLABELLED In this study, a home dental care system consisting of an oral image acquisition device and deep learning models for images of maxillary and mandibular teeth is proposed. The presented method not only classifies tooth diseases but also determines whether professional dental treatment is needed (need for professional dental treatment, NPDT). A specially designed oral image acquisition device was developed to capture images of the maxillary and mandibular teeth. The two classification tasks, tooth disease and NPDT, were evaluated using 610 compounded and 5251 tooth images annotated by an experienced dentist holding a Doctor of Dental Surgery and another dentist holding a Doctor of Dental Medicine. The proposed system achieved accuracies greater than 96% and 89% for tooth disease and NPDT classification, respectively. Based on these results, we believe that the proposed system will allow users to manage their dental health effectively by detecting tooth diseases and providing information on the need for dental treatment. SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s12652-021-03366-8.
Collapse
Affiliation(s)
- Dogun Kim
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Republic of Korea
| | - Jaeho Choi
- Department of Dental Biomaterials Science, Dental Research Institute, Seoul National University, Seoul, Republic of Korea
| | - Sangyoon Ahn
- School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, USA
| | - Eunil Park
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Republic of Korea
- Department of Interaction Science, Sungkyunkwan University, Seoul, Republic of Korea
- Raon Data, Seoul, Republic of Korea
| |
Collapse
|
50
|
Classification of caries in third molars on panoramic radiographs using deep learning. Sci Rep 2021; 11:12609. [PMID: 34131266 PMCID: PMC8206082 DOI: 10.1038/s41598-021-92121-2] [Citation(s) in RCA: 42] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Accepted: 05/25/2021] [Indexed: 11/15/2022] Open
Abstract
The objective of this study was to assess the classification accuracy of dental caries on panoramic radiographs using deep-learning algorithms. A convolutional neural network (CNN) based on MobileNet V2 was trained on a reference dataset of 400 cropped panoramic images to classify carious lesions in mandibular and maxillary third molars. For this pilot study, the trained MobileNet V2 model was applied to a test set of 100 cropped panoramic radiographs. Classification accuracy and the area under the curve (AUC) were calculated. The proposed method achieved an accuracy of 0.87, a sensitivity of 0.86, a specificity of 0.88, and an AUC of 0.90 for the classification of carious lesions in third molars on panoramic radiographs. The presented MobileNet V2 algorithm achieved high accuracy in third molar caries classification, which is beneficial for the further development of deep-learning based automated third molar removal assessment in the future.
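A hedged sketch of how a MobileNet V2 backbone can be adapted for such a two-class (carious vs. non-carious) crop classifier is shown below; the replaced classification head, input size, and dummy input follow common torchvision practice and are assumptions, not details reported in the abstract.

    import torch
    import torch.nn as nn
    from torchvision.models import mobilenet_v2

    NUM_CLASSES = 2  # assumed: carious vs. non-carious third molar crop

    model = mobilenet_v2()  # randomly initialized; would be trained on cropped radiographs
    # Replace the final linear layer so the classifier outputs two logits.
    model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)
    model.eval()

    # Dummy cropped third-molar region resized to the usual 224 x 224 input.
    crop = torch.rand(1, 3, 224, 224)

    with torch.no_grad():
        probs = torch.softmax(model(crop), dim=1)

    print(probs.squeeze().tolist())  # e.g., [p(no caries), p(caries)]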
Collapse
|