1
Pul U, Schwendicke F. Artificial intelligence for detecting periapical radiolucencies: A systematic review and meta-analysis. J Dent 2024;147:105104. PMID: 38851523. DOI: 10.1016/j.jdent.2024.105104.
Abstract
OBJECTIVES Dentists' diagnostic accuracy in detecting periapical radiolucency varies considerably. This systematic review and meta-analysis aimed to investigate the accuracy of artificial intelligence (AI) for detecting periapical radiolucency. DATA Studies reporting diagnostic accuracy and utilizing AI for periapical radiolucency detection, published until November 2023, were eligible for inclusion. Meta-analysis was conducted using the online MetaDTA Tool to calculate pooled sensitivity and specificity. Risk of bias was evaluated using QUADAS-2. SOURCES A comprehensive search was conducted in PubMed/MEDLINE, ScienceDirect, and Institute of Electrical and Electronics Engineers (IEEE) Xplore databases. STUDY SELECTION We identified 210 articles, of which 24 met the criteria for inclusion in the review. All but one study used one type of convolutional neural network. The body of evidence comes with an overall unclear to high risk of bias and several applicability concerns. Four of the twenty-four studies were included in a meta-analysis. AI showed a pooled sensitivity and specificity of 0.94 (95% CI = 0.90-0.96) and 0.96 (95% CI = 0.91-0.98), respectively. CONCLUSIONS AI demonstrated high specificity and sensitivity for detecting periapical radiolucencies. However, the current landscape suggests a need for diverse study designs beyond traditional diagnostic accuracy studies. Prospective real-life randomized controlled trials using heterogeneous data are needed to demonstrate the true value of AI. CLINICAL SIGNIFICANCE Artificial intelligence tools seem to have the potential to support detecting periapical radiolucencies on imagery. Notably, nearly all studies did not test fully fledged software systems but measured the mere accuracy of AI models in diagnostic accuracy studies.
The true value of currently available AI-based software for lesion detection on both 2D and 3D radiographs remains uncertain.
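The pooled 0.94/0.96 figures above are meta-analytic estimates built from per-study 2x2 counts. As a minimal sketch of the per-study inputs such a pooling starts from (the counts below are hypothetical, and a simple Wilson interval is used rather than the bivariate random-effects model MetaDTA actually fits):

```python
import math

def wilson_ci(successes: int, total: int, z: float = 1.96):
    """Wilson score 95% CI for a proportion (e.g., a per-study sensitivity)."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - half, centre + half

# Hypothetical 2x2 counts from one diagnostic-accuracy study:
tp, fn, tn, fp = 94, 6, 96, 4
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity = {sensitivity:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A meta-analysis then pools these per-study pairs while modelling the sensitivity-specificity correlation, which is what the bivariate model adds over this single-study sketch.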
Affiliation(s)
- Utku Pul
- University for Digital Technologies in Medicine and Dentistry, Wiltz, Luxembourg
- Falk Schwendicke
- Conservative Dentistry and Periodontology, LMU Klinikum, Goethestr. 70, Munich 80336, Germany.
2
Fung E, Patel D, Tatum S. Artificial intelligence in maxillofacial and facial plastic and reconstructive surgery. Curr Opin Otolaryngol Head Neck Surg 2024;32:257-262. PMID: 38837245. DOI: 10.1097/moo.0000000000000983.
Abstract
PURPOSE OF REVIEW To provide a current review of artificial intelligence and its subtypes in maxillofacial and facial plastic surgery including a discussion of implications and ethical concerns. RECENT FINDINGS Artificial intelligence has gained popularity in recent years due to technological advancements. The current literature has begun to explore the use of artificial intelligence in various medical fields, but there is limited contribution to maxillofacial and facial plastic surgery due to the wide variance in anatomical facial features as well as subjective influences. In this review article, we found artificial intelligence's roles, so far, are to automatically update patient records, produce 3D models for preoperative planning, perform cephalometric analyses, and provide diagnostic evaluation of oropharyngeal malignancies. SUMMARY Artificial intelligence has solidified a role in maxillofacial and facial plastic surgery within the past few years. As high-quality databases expand with more patients, the role for artificial intelligence to assist in more complicated and unique cases becomes apparent. Despite its potential, ethical questions have been raised that should be noted as artificial intelligence continues to thrive. These questions include concerns such as compromise of the physician-patient relationship and healthcare justice.
Affiliation(s)
- Sherard Tatum
- Department of Otolaryngology
- Department of Pediatrics, SUNY Upstate Medical University, Syracuse, New York, USA
3
Huang YS, Iakubovskii P, Lim LZ, Mol A, Tyndall DA. Evaluation of deep learning for detecting intraosseous jaw lesions in cone beam computed tomography volumes. Oral Surg Oral Med Oral Pathol Oral Radiol 2024;138:173-183. PMID: 38155015. DOI: 10.1016/j.oooo.2023.09.011.
Abstract
OBJECTIVE The study aim was to develop and assess the performance of a deep learning (DL) algorithm in the detection of radiolucent intraosseous jaw lesions in cone beam computed tomography (CBCT) volumes. STUDY DESIGN A total of 290 CBCT volumes from more than 12 different scanners were acquired. Fields of view ranged from 6 × 6 × 6 cm to 18 × 18 × 16 cm. CBCT volumes contained either zero or at least one biopsy-confirmed intraosseous lesion. 80 volumes with no intraosseous lesions were included as controls and were not annotated. 210 volumes with intraosseous lesions were manually annotated using ITK-Snap 3.8.0. 150 volumes (10 control, 140 positive) were presented to the DL software for training. Validation was performed using 60 volumes (30 control, 30 positive). Testing was performed using the remaining 80 volumes (40 control, 40 positive). RESULTS The DL algorithm obtained an adjusted sensitivity by case, specificity by case, positive predictive value by case, and negative predictive value by case of 0.975, 0.825, 0.848, and 0.971, respectively. CONCLUSIONS A DL algorithm showed moderate success at lesion detection in their correct locations, as well as recognition of lesion shape and extent. This study demonstrated the potential of DL methods for intraosseous lesion detection in CBCT volumes.
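The four case-level metrics reported above all derive from a 2x2 confusion matrix. A minimal sketch follows; the counts are hypothetical, chosen only so the rates land near the values reported for the 80-case test set:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Case-level diagnostic accuracy metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for an 80-case test set (40 positive, 40 control):
m = diagnostic_metrics(tp=39, fp=7, fn=1, tn=33)
print({k: round(v, 3) for k, v in m.items()})
```

Note how PPV and NPV, unlike sensitivity and specificity, shift with the prevalence of positives in the test set, which is why the 50/50 case split matters when reading these numbers.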
Affiliation(s)
- Yiing-Shiuan Huang
- Oral and Maxillofacial Radiology, Adams School of Dentistry, University of North Carolina, Chapel Hill, NC, USA.
- Li Zhen Lim
- Oral and Maxillofacial Radiology, Adams School of Dentistry, University of North Carolina, Chapel Hill, NC, USA; Discipline of Oral and Maxillofacial Surgery, Faculty of Dentistry, National University of Singapore, Singapore
- André Mol
- Oral and Maxillofacial Radiology, Adams School of Dentistry, University of North Carolina, Chapel Hill, NC, USA
- Donald A Tyndall
- Oral and Maxillofacial Radiology, Adams School of Dentistry, University of North Carolina, Chapel Hill, NC, USA
4
Shi YJ, Li JP, Wang Y, Ma RH, Wang YL, Guo Y, Li G. Deep learning in the diagnosis for cystic lesions of the jaws: a review of recent progress. Dentomaxillofac Radiol 2024;53:271-280. PMID: 38814810. PMCID: PMC11211683. DOI: 10.1093/dmfr/twae022.
Abstract
Cystic lesions of the gnathic bones present challenges in differential diagnosis. In recent years, artificial intelligence (AI) represented by deep learning (DL) has rapidly developed and emerged in the field of dental and maxillofacial radiology (DMFR). Dental radiography provides a rich resource for the study of diagnostic analysis methods for cystic lesions of the jaws and has attracted many researchers. The aim of the current study was to investigate the diagnostic performance of DL for cystic lesions of the jaws. Online searches were done on Google Scholar, PubMed, and IEEE Xplore databases, up to September 2023, with subsequent manual screening for confirmation. The initial search yielded 1862 titles, and 44 studies were ultimately included. All studies used DL methods or tools for the identification of a variable number of maxillofacial cysts. The performance of algorithms with different models varies. Although most of the reviewed studies demonstrated that DL methods have better discriminative performance than clinicians, further development is still needed before routine clinical implementation due to several challenges and limitations such as lack of model interpretability, multicentre data validation, etc. Considering the current limitations and challenges, future studies for the differential diagnosis of cystic lesions of the jaws should follow actual clinical diagnostic scenarios to coordinate study design and enhance the impact of AI in the diagnosis of oral and maxillofacial diseases.
Affiliation(s)
- Yu-Jie Shi
- School of Electronics and Information Engineering, Beijing Jiaotong University, Beijing, 100044, China
- Ju-Peng Li
- School of Electronics and Information Engineering, Beijing Jiaotong University, Beijing, 100044, China
- Yue Wang
- School of Electronics and Information Engineering, Beijing Jiaotong University, Beijing, 100044, China
- Ruo-Han Ma
- Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology, Beijing, 100081, China
- Yan-Lin Wang
- Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology, Beijing, 100081, China
- Yong Guo
- School of Electronics and Information Engineering, Beijing Jiaotong University, Beijing, 100044, China
- Gang Li
- Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology, Beijing, 100081, China
5
Jiang X, Zheng H, Yuan Z, Lan K, Wu Y. HIMS-Net: Horizontal-vertical interaction and multiple side-outputs network for cyst segmentation in jaw images. Math Biosci Eng 2024;21:4036-4055. PMID: 38549317. DOI: 10.3934/mbe.2024178.
Abstract
Jaw cysts are mainly caused by abnormal tooth development, chronic oral inflammation, or jaw damage, which may lead to facial swelling, deformity, tooth loss, and other symptoms. Due to the diversity and complexity of cyst images, deep-learning algorithms still face many difficulties and challenges. In response to these problems, we present a horizontal-vertical interaction and multiple side-outputs network for cyst segmentation in jaw images. First, the horizontal-vertical interaction mechanism facilitates complex communication paths in the vertical and horizontal dimensions, and it has the ability to capture a wide range of context dependencies. Second, the feature-fused unit is introduced to adjust the network's receptive field, which enhances the ability of acquiring multi-scale context information. Third, the multiple side-outputs strategy intelligently combines feature maps to generate more accurate and detailed change maps. Finally, experiments were carried out on the self-established jaw cyst dataset and compared with different specialist physicians to evaluate its clinical usability. The results indicate that the Matthews correlation coefficient (MCC), Dice, and Jaccard scores of HIMS-Net were 93.61%, 93.66%, and 88.10%, respectively, performance that may contribute to rapid and accurate diagnosis in clinical practice.
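All three scores reported above are computable from the overlap between a predicted and a ground-truth segmentation mask. A minimal sketch with toy masks (illustrative only, not the HIMS-Net implementation):

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, gt: np.ndarray):
    """Dice, Jaccard, and Matthews correlation coefficient for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tn = np.sum(~pred & ~gt)
    dice = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / (tp + fp + fn)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return dice, jaccard, mcc

# Toy 4x4 masks (hypothetical): prediction misses one ground-truth pixel.
gt = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
pred = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]])
dice, jaccard, mcc = overlap_metrics(pred, gt)
print(round(dice, 3), round(jaccard, 3), round(mcc, 3))
```

Unlike Dice and Jaccard, MCC also rewards correct background pixels (true negatives), which is why the three numbers for the same model differ.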
Affiliation(s)
- Xiaoliang Jiang
- College of Mechanical Engineering, Quzhou University, Quzhou 324000, China
- Huixia Zheng
- Department of Stomatology, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People's Hospital, Quzhou 324000, China
- Zhenfei Yuan
- Department of Stomatology, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People's Hospital, Quzhou 324000, China
- Kun Lan
- College of Mechanical Engineering, Quzhou University, Quzhou 324000, China
- Yaoyang Wu
- Department of Computer and Information Science, University of Macau, Macau 999078, China
6
Xu L, Qiu K, Li K, Ying G, Huang X, Zhu X. Automatic segmentation of ameloblastoma on CT images using deep learning with limited data. BMC Oral Health 2024;24:55. PMID: 38195496. PMCID: PMC10775495. DOI: 10.1186/s12903-023-03587-7.
Abstract
BACKGROUND Ameloblastoma, a common benign tumor found in the jaw bone, necessitates accurate localization and segmentation for effective diagnosis and treatment. However, the traditional manual segmentation method is plagued with inefficiencies and drawbacks. Hence, the implementation of an AI-based automatic segmentation approach is crucial to enhance clinical diagnosis and treatment procedures. METHODS We collected CT images from 79 patients diagnosed with ameloblastoma and employed a deep learning neural network model for training and testing purposes. Specifically, we utilized the Mask R-CNN neural network structure and implemented image preprocessing and enhancement techniques. During the testing phase, cross-validation methods were employed for evaluation, and the experimental results were verified using an external validation set. Finally, we obtained an additional dataset comprising 200 CT images of ameloblastoma from a different dental center to evaluate the model's generalization performance. RESULTS During extensive testing and evaluation, our model successfully demonstrated the capability to automatically segment ameloblastoma. The DICE index achieved an impressive value of 0.874. Moreover, when the IoU threshold ranged from 0.5 to 0.95, the model's AP was 0.741. For a specific IoU threshold of 0.5, the model achieved an AP of 0.914, and for another IoU threshold of 0.75, the AP was 0.826. Our validation using external data confirms the model's strong generalization performance. CONCLUSION In this study, we successfully applied a neural network model based on deep learning that effectively performs automatic segmentation of ameloblastoma. The proposed method offers notable advantages in terms of efficiency, accuracy, and speed, rendering it a promising tool for clinical diagnosis and treatment.
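Dice (reported as 0.874 above) and the IoU used for the AP thresholds measure the same overlap on different scales and interconvert exactly as IoU = D / (2 - D). A minimal sketch with toy masks (an illustration of the metrics, not the study's Mask R-CNN pipeline):

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray):
    """Dice and IoU for binary masks; the two satisfy IoU = D / (2 - D)."""
    inter = np.sum(pred.astype(bool) & gt.astype(bool))
    dice = 2 * inter / (pred.sum() + gt.sum())
    iou = inter / (pred.sum() + gt.sum() - inter)
    return dice, iou

# Two hypothetical 4x4-pixel squares, offset by one row:
pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1
gt = np.zeros((8, 8), dtype=int);   gt[3:7, 2:6] = 1
dice, iou = dice_and_iou(pred, gt)
assert abs(iou - dice / (2 - dice)) < 1e-12  # identity between the two scores
print(round(float(dice), 3), round(float(iou), 3))
```

By this identity, a Dice of 0.874 corresponds to an IoU of roughly 0.776, i.e., above the 0.75 threshold at which the study reports an AP of 0.826.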
Affiliation(s)
- Liang Xu
- The First Affiliated Hospital of Fujian Medical University, Fuzhou, China
- Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Kaixi Qiu
- Fuzhou First General Hospital, Fuzhou, China
- Kaiwang Li
- School of Aeronautics and Astronautics, Tsinghua University, Beijing, China
- Ge Ying
- Jianning County General Hospital, Fuzhou, China
- Xiaohong Huang
- The First Affiliated Hospital of Fujian Medical University, Fuzhou, China.
- Xiaofeng Zhu
- The First Affiliated Hospital of Fujian Medical University, Fuzhou, China.
- Department of Stomatology, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, China.
7
Farajollahi M, Safarian MS, Hatami M, Esmaeil Nejad A, Peters OA. Applying artificial intelligence to detect and analyse oral and maxillofacial bone loss-A scoping review. Aust Endod J 2023;49:720-734. PMID: 37439465. DOI: 10.1111/aej.12775.
Abstract
Radiographic evaluation of bone changes is one of the main tools in the diagnosis of many oral and maxillofacial diseases. However, this approach to assessment has limitations in accuracy, inconsistency and comparatively low diagnostic efficiency. Recently, artificial intelligence (AI)-based algorithms like deep learning networks have been introduced as a solution to overcome these challenges. Based on recent studies, AI can improve the detection accuracy of an expert clinician for periapical pathology, periodontal diseases and their prognostication, as well as peri-implant bone loss. Also, AI has been successfully used to detect and diagnose oral and maxillofacial lesions with a high predictive value. This study aims to review the current evidence on artificial intelligence applications in the detection and analysis of bone loss in the oral and maxillofacial regions.
Affiliation(s)
- Mehran Farajollahi
- Iranian Center for Endodontic Research, Research Institute of Dental Sciences, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mohammad Sadegh Safarian
- Iranian Center for Endodontic Research, Research Institute of Dental Sciences, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Masoud Hatami
- Iranian Center for Endodontic Research, Research Institute of Dental Sciences, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Azadeh Esmaeil Nejad
- Iranian Center for Endodontic Research, Research Institute of Dental Sciences, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ove A Peters
- School of Dentistry, The University of Queensland, Herston, Queensland, Australia
8
Yeshua T, Ladyzhensky S, Abu-Nasser A, Abdalla-Aslan R, Boharon T, Itzhak-Pur A, Alexander A, Chaurasia A, Cohen A, Sosna J, Leichter I, Nadler C. Deep learning for detection and 3D segmentation of maxillofacial bone lesions in cone beam CT. Eur Radiol 2023;33:7507-7518. PMID: 37191921. DOI: 10.1007/s00330-023-09726-6.
Abstract
OBJECTIVES To develop an automated deep-learning algorithm for detection and 3D segmentation of incidental bone lesions in maxillofacial CBCT scans. METHODS The dataset included 82 cone beam CT (CBCT) scans, 41 with histologically confirmed benign bone lesions (BL) and 41 control scans (without lesions), obtained using three CBCT devices with diverse imaging protocols. Lesions were marked in all axial slices by experienced maxillofacial radiologists. All cases were divided into sub-datasets: training (20,214 axial images), validation (4530 axial images), and testing (6795 axial images). A Mask-RCNN algorithm segmented the bone lesions in each axial slice. Analysis of sequential slices was used for improving the Mask-RCNN performance and classifying each CBCT scan as containing bone lesions or not. Finally, the algorithm generated 3D segmentations of the lesions and calculated their volumes. RESULTS The algorithm correctly classified all CBCT cases as containing bone lesions or not, with an accuracy of 100%. The algorithm detected the bone lesion in axial images with high sensitivity (95.9%) and high precision (98.9%) with an average Dice coefficient of 83.5%. CONCLUSIONS The developed algorithm detected and segmented bone lesions in CBCT scans with high accuracy and may serve as a computerized tool for detecting incidental bone lesions in CBCT imaging. CLINICAL RELEVANCE Our novel deep-learning algorithm detects incidental hypodense bone lesions in cone beam CT scans, using various imaging devices and protocols. This algorithm may reduce patients' morbidity and mortality, particularly since currently, cone beam CT interpretation is not always performed. KEY POINTS • A deep learning algorithm was developed for automatic detection and 3D segmentation of various maxillofacial bone lesions in CBCT scans, irrespective of the CBCT device or the scanning protocol.
• The developed algorithm can detect incidental jaw lesions with high accuracy, generates a 3D segmentation of the lesion, and calculates the lesion volume.
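The "analysis of sequential slices" step above is described only at a high level. One plausible, purely hypothetical heuristic in that spirit (not the authors' actual rule) is to call a scan lesion-positive only when per-slice detections persist across several consecutive axial slices, since a real 3D lesion spans neighbouring slices while single-slice hits are more likely false positives:

```python
def scan_has_lesion(slice_flags, min_run: int = 3) -> bool:
    """Classify a scan as lesion-positive only if per-slice detections
    persist over at least `min_run` consecutive axial slices."""
    run = 0
    for flag in slice_flags:
        run = run + 1 if flag else 0  # extend or reset the current streak
        if run >= min_run:
            return True
    return False

# Per-slice detector output (hypothetical): isolated hits read as noise,
# a sustained run reads as a real lesion.
assert not scan_has_lesion([0, 1, 0, 0, 1, 0, 1, 0])
assert scan_has_lesion([0, 0, 1, 1, 1, 1, 0, 0])
```

The `min_run` threshold would trade per-scan sensitivity against specificity and would need tuning on validation data.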
Affiliation(s)
- Talia Yeshua
- Department of Applied Physics, The Jerusalem College of Technology, Jerusalem, Israel
- Shmuel Ladyzhensky
- Department of Applied Physics, The Jerusalem College of Technology, Jerusalem, Israel
- Amal Abu-Nasser
- Oral Maxillofacial Imaging, Department of Oral Medicine, Sedation and Imaging, Faculty of Dental Medicine, Hadassah Medical Center, Hebrew University of Jerusalem, Jerusalem, Israel
- Ragda Abdalla-Aslan
- Department of Oral Medicine, Sedation and Imaging, Faculty of Dental Medicine, Hadassah Medical Center, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Oral and Maxillofacial Surgery, Rambam Health Care Campus, Haifa, Israel
- Tami Boharon
- Department of Software Engineering, The Jerusalem College of Technology, Jerusalem, Israel
- Avital Itzhak-Pur
- Department of Software Engineering, The Jerusalem College of Technology, Jerusalem, Israel
- Asher Alexander
- Department of Software Engineering, The Jerusalem College of Technology, Jerusalem, Israel
- Akhilanand Chaurasia
- Department of Oral Medicine and Radiology, King George's Medical University, Lucknow, India
- Adir Cohen
- Department of Oral and Maxillofacial Surgery, Faculty of Dental Medicine, Hadassah Medical Center, Hebrew University of Jerusalem, Jerusalem, Israel
- Jacob Sosna
- Department of Radiology, Faculty of Medicine, Hadassah Medical Center, Hebrew University of Jerusalem, Jerusalem, Israel
- Isaac Leichter
- Department of Applied Physics, The Jerusalem College of Technology, Jerusalem, Israel
- Department of Radiology, Faculty of Medicine, Hadassah Medical Center, Hebrew University of Jerusalem, Jerusalem, Israel
- Chen Nadler
- Department of Oral Medicine, Sedation and Imaging, Faculty of Dental Medicine, Hadassah Medical Center, Hebrew University of Jerusalem, Jerusalem, Israel.
9
Miragall MF, Knoedler S, Kauke-Navarro M, Saadoun R, Grabenhorst A, Grill FD, Ritschl LM, Fichter AM, Safi AF, Knoedler L. Face the Future-Artificial Intelligence in Oral and Maxillofacial Surgery. J Clin Med 2023;12:6843. PMID: 37959310. PMCID: PMC10649053. DOI: 10.3390/jcm12216843.
Abstract
Artificial intelligence (AI) has emerged as a versatile health-technology tool revolutionizing medical services through the implementation of predictive, preventative, individualized, and participatory approaches. AI encompasses different computational concepts such as machine learning, deep learning techniques, and neural networks. AI also presents a broad platform for improving preoperative planning, intraoperative workflow, and postoperative patient outcomes in the field of oral and maxillofacial surgery (OMFS). The purpose of this review is to present a comprehensive summary of the existing scientific knowledge. The authors thoroughly reviewed English-language PubMed/MEDLINE and Embase papers from their establishment to 1 December 2022. The search terms were (1) "OMFS" OR "oral and maxillofacial" OR "oral and maxillofacial surgery" OR "oral surgery" AND (2) "AI" OR "artificial intelligence". The search format was tailored to each database's syntax. To find pertinent material, each retrieved article and systematic review's reference list was thoroughly examined. According to the literature, AI is already being used in certain areas of OMFS, such as radiographic image quality improvement, diagnosis of cysts and tumors, and localization of cephalometric landmarks. Through additional research, it may be possible to provide practitioners in numerous disciplines with additional assistance to enhance preoperative planning, intraoperative screening, and postoperative monitoring. Overall, AI carries promising potential to advance the field of OMFS and generate novel solution possibilities for persisting clinical challenges. Herein, this review provides a comprehensive summary of AI in OMFS and sheds light on future research efforts. Further, the advanced analysis of complex medical imaging data can support surgeons in preoperative assessments, virtual surgical simulations, and individualized treatment strategies. 
AI also assists surgeons during intraoperative decision-making by offering immediate feedback and guidance to enhance surgical accuracy and reduce complication rates, for instance by predicting the risk of bleeding.
Affiliation(s)
- Maximilian F. Miragall
- Department of Oral and Maxillofacial Surgery, University Hospital Regensburg, 93053 Regensburg, Germany
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Samuel Knoedler
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT 06510, USA
- Martin Kauke-Navarro
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT 06510, USA
- Rakan Saadoun
- Department of Plastic Surgery, University of Pittsburgh, Pittsburgh, PA 15261, USA
- Alex Grabenhorst
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Florian D. Grill
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Lucas M. Ritschl
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Andreas M. Fichter
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Ali-Farid Safi
- Craniologicum, Center for Cranio-Maxillo-Facial Surgery, 3011 Bern, Switzerland
- Faculty of Medicine, University of Bern, 3010 Bern, Switzerland
- Leonard Knoedler
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT 06510, USA
- Department of Plastic, Hand and Reconstructive Surgery, University Hospital Regensburg, 93053 Regensburg, Germany
10
Zhang H, Zhu L, Zhang Q, Wang Y, Song A. Online view enhancement for exploration inside medical volumetric data using virtual reality. Comput Biol Med 2023;163:107217. PMID: 37450968. DOI: 10.1016/j.compbiomed.2023.107217.
Abstract
BACKGROUND AND OBJECTIVE Medical image visualization is an essential tool for conveying anatomical information. Ray-casting-based volume rendering is commonly used for generating visualizations of raw medical images. However, exposing a target area inside the skin often requires manual tuning of transfer functions or segmentation of original images, as preset parameters in volume rendering may not work well for arbitrary scanned data. This process is tedious and unnatural. To address this issue, we propose a volume visualization system that enhances the view inside the skin, enabling flexible exploration of medical volumetric data using virtual reality. METHODS In our proposed system, we design a virtual reality interface that allows users to walk inside the data. We introduce a view-dependent occlusion weakening method based on geodesic distance transform to support this interaction. By combining these methods, we develop a virtual reality system with intuitive interactions, facilitating online view enhancement for medical data exploration and annotation inside the volume. RESULTS Our rendering results demonstrate that the proposed occlusion weakening method effectively weakens obstacles while preserving the target area. Furthermore, comparative analysis with other alternative solutions highlights the advantages of our method in virtual reality. We conducted user studies to evaluate our system, including area annotation and line drawing tasks. The results showed that our method with enhanced views achieved 47.73% and 35.29% higher accuracy compared to the group with traditional volume rendering. Additionally, subjective feedback from medical experts further supported the effectiveness of the designed interactions in virtual reality. CONCLUSIONS We successfully address the occlusion problems in the exploration of medical volumetric data within a virtual reality environment. 
Our system allows for flexible integration of scanned medical volumes without requiring extensive manual preprocessing. The results of our user studies demonstrate the feasibility and effectiveness of walk-in interaction for medical data exploration.
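The occlusion-weakening method above is built on a geodesic distance transform. As a much-simplified, hypothetical sketch of that building block (2D, 4-connected, unit costs; the paper works view-dependently on 3D volumes), a breadth-first search gives the geodesic distance that must route around blocked cells, in contrast to straight-line distance:

```python
from collections import deque

def geodesic_distance(grid, start):
    """4-connected geodesic distance transform: BFS distance from `start`
    that must route around blocked cells (grid value 1). Unreachable or
    blocked cells keep distance None."""
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]
    dist[start[0]][start[1]] = 0
    q = deque([start])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

# A wall (1s) forces the geodesic path around it:
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
d = geodesic_distance(grid, (0, 0))
print(d[2][0])   # → 6, while the straight-line (Manhattan) distance is 2
```

In an occlusion-weakening scheme, such a distance field could drive opacity: voxels geodesically far from the viewpoint-to-target path are faded, while the target region is preserved.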
Affiliation(s)
- Hongkun Zhang
- State Key Laboratory of Digital Medical Engineering, Jiangsu Key Lab of Remote Measurement and Control, School of Instrument Science and Engineering, Southeast University, Nanjing, Jiangsu, PR China
- Lifeng Zhu
- State Key Laboratory of Digital Medical Engineering, Jiangsu Key Lab of Remote Measurement and Control, School of Instrument Science and Engineering, Southeast University, Nanjing, Jiangsu, PR China.
- Yunhai Wang
- Department of Computer Science, Shandong University, Shandong, PR China
- Aiguo Song
- State Key Laboratory of Digital Medical Engineering, Jiangsu Key Lab of Remote Measurement and Control, School of Instrument Science and Engineering, Southeast University, Nanjing, Jiangsu, PR China
11
Rauniyar S, Jena S, Sahoo N, Mohanty P, Dash BP. Artificial Intelligence and Machine Learning for Automated Cephalometric Landmark Identification: A Meta-Analysis Previewed by a Systematic Review. Cureus 2023;15:e40934. PMID: 37496553. PMCID: PMC10368300. DOI: 10.7759/cureus.40934.
Abstract
Digital dentistry has become an integral part of our practice today, with artificial intelligence (AI) playing the predominant role. The present systematic review was intended to detect the accuracy of landmarks identified cephalometrically using machine learning and artificial intelligence and compare the same with the manual tracing (MT) group. According to the PRISMA-DTA guidelines, a scoping evaluation of the articles was performed. Electronic databases including DOAJ, PubMed, Scopus, Google Scholar, and Embase were searched from January 2001 to November 2022. Inclusion and exclusion criteria were applied, and 13 articles were studied in detail. Six full-text articles were further excluded (three articles did not provide a comparison between manual tracing and AI for cephalometric landmark detection, and three full-text articles were systematic reviews and meta-analyses). Finally, seven articles were found appropriate to be included in this review. The outcome of this systematic review has led to the conclusion that AI, when employed for cephalometric landmark detection, has shown extremely positive and promising results as compared to manual tracing.
Affiliation(s)
- Sabita Rauniyar
- Orthodontics and Dentofacial Orthopaedics, Kalinga Institute of Dental Science, Bhubaneswar, IND
- Sanghamitra Jena
- Department of Orthodontics and Dentofacial Orthopaedics, Kalinga Institute of Dental Sciences, Kalinga Institute of Industrial Technology (KIIT) (Deemed to be University), Bhubaneswar, IND
- Nivedita Sahoo
- Department of Orthodontics and Dentofacial Orthopaedics, Kalinga Institute of Dental Sciences, Kalinga Institute of Industrial Technology (KIIT) (Deemed to be University), Bhubaneswar, IND
- Pritam Mohanty
- Department of Orthodontics, Kalinga Institute of Dental Sciences, Odisha, IND
- Bhagabati P Dash
- Department of Orthodontics and Dentofacial Orthopaedics, Kalinga Institute of Dental Sciences, Kalinga Institute of Industrial Technology (KIIT) (Deemed to be University), Bhubaneswar, IND
12
Kolarkodi SH, Alotaibi KZ. Artificial Intelligence in Diagnosis of Oral Diseases: A Systematic Review. J Contemp Dent Pract 2023; 24:61-68. [PMID: 37189014 DOI: 10.5005/jp-journals-10024-3465]
Abstract
AIM To understand the role of artificial intelligence (AI) in oral radiology and its applications. BACKGROUND Over the last two decades, the field of AI has undergone phenomenal progression and expansion. Artificial intelligence applications have taken up new roles in dentistry, such as digitized data acquisition, machine learning, and diagnostic applications. MATERIALS AND METHODS All research papers outlining the population, intervention, control, and outcomes (PICO) questions were searched for in the PubMed, ERIC, Embase, and CINAHL databases for the last 10 years on 1 January 2023. Two authors independently reviewed the titles and abstracts of the selected studies, and any discrepancy between the two review authors was handled by a third reviewer. Two independent investigators evaluated all the included studies for quality assessment using the modified tool for the quality assessment of diagnostic accuracy studies (QUADAS-2). REVIEW RESULTS After the removal of duplicates and screening of titles and abstracts, 18 full texts were agreed upon for further evaluation, of which 14 that met the inclusion criteria were included in this review. The application of artificial intelligence models has primarily been reported on osteoporosis diagnosis, classification/segmentation of maxillofacial cysts and/or tumors, and alveolar bone resorption. Overall study quality was deemed to be high for two (14%) studies, moderate for six (43%) studies, and low for another six (43%) studies. CONCLUSION The use of AI for patient diagnosis and clinical decision-making can be accomplished with relative ease, and the technology should be regarded as a reliable modality for potential future applications in oral diagnosis.
Affiliation(s)
- Shaul Hameed Kolarkodi
- Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, Qassim University, Buraydah, Saudi Arabia, Phone: +96 6533653299
- Khalid Zabin Alotaibi
- Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, Qassim University, Buraydah, Saudi Arabia
13
Kirnbauer B, Hadzic A, Jakse N, Bischof H, Stern D. Automatic Detection of Periapical Osteolytic Lesions on CBCT Using Deep CNNs. J Endod 2022; 48:1434-1440. [PMID: 35952897 DOI: 10.1016/j.joen.2022.07.013]
Abstract
INTRODUCTION Cone beam computed tomography (CBCT) is an essential diagnostic tool in oral radiology. Radiolucent periapical lesions (PALs) represent the most frequent jaw lesions. However, the description, interpretation, and documentation of radiological findings, especially incidental findings, are time-consuming and resource-intensive, requiring a high degree of expertise. To improve quality, dentists may use artificial intelligence in the form of deep learning tools. This study was conducted to develop and validate a deep convolutional neural network for the automated detection of osteolytic PALs in CBCT datasets. METHODS CBCT datasets from routine clinical operations (maxilla, mandible, or both) performed from January to October 2020 were retrospectively screened and selected. A two-step approach was used for automatic PAL detection. First, tooth localization and identification were performed using the SpatialConfiguration-Net based on heatmap regression. Second, binary segmentation of lesions was performed using a modified U-Net architecture. A total of 144 CBCT images were used to train and test the networks. The method was evaluated using the four-fold cross-validation technique. RESULTS The success detection rate of the tooth localization network ranged between 72.6% and 97.3%, whereas the sensitivity and specificity values of lesion detection were 97.1% and 88.0%, respectively. CONCLUSIONS Although PALs showed variations in appearance, size, and shape in the CBCT dataset, and a high imbalance existed between teeth with and without PALs, the proposed fully automated method provided excellent results compared with related literature.
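Binary segmentation studies of this kind usually score voxel overlap with the Dice similarity coefficient. A minimal sketch on toy binary masks (the masks are illustrative, not data from the study):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 voxel labels."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks agree perfectly.
    return 2 * intersection / total if total else 1.0

pred = [1, 1, 0, 0, 1]  # predicted lesion voxels
gt   = [1, 0, 0, 1, 1]  # ground-truth lesion voxels
print(dice(pred, gt))   # 2*2 / (3+3) = 2/3
```

In a real 3-D pipeline the masks would be flattened voxel arrays of the segmented CBCT volume.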
Affiliation(s)
- Barbara Kirnbauer
- Department of Dental Medicine and Oral Health, Division of Oral Surgery and Orthodontics, Medical University of Graz, Billrothgasse 4, A-8010 Graz, Austria.
- Arnela Hadzic
- Institute for Computer Vision and Graphics, Graz University of Technology, Inffeldgasse 16, A-8010 Graz, Austria
- Norbert Jakse
- Department of Dental Medicine and Oral Health, Division of Oral Surgery and Orthodontics, Medical University of Graz, Billrothgasse 4, A-8010 Graz, Austria
- Horst Bischof
- Institute for Computer Vision and Graphics, Graz University of Technology, Inffeldgasse 16, A-8010 Graz, Austria
- Darko Stern
- Institute for Computer Vision and Graphics, Graz University of Technology, Inffeldgasse 16, A-8010 Graz, Austria
14
Putra RH, Doi C, Yoda N, Astuti ER, Sasaki K. Current applications and development of artificial intelligence for digital dental radiography. Dentomaxillofac Radiol 2022; 51:20210197. [PMID: 34233515 PMCID: PMC8693331 DOI: 10.1259/dmfr.20210197]
Abstract
In the last few years, artificial intelligence (AI) research has been rapidly developing and emerging in the field of dental and maxillofacial radiology. Dental radiography, which is commonly used in daily practice, provides an incredibly rich resource for AI development and has attracted many researchers to develop applications for various purposes. This study reviewed the applicability of AI for dental radiography based on current studies. Online searches of the PubMed and IEEE Xplore databases, up to December 2020, and subsequent manual searches were performed. The applications of AI were then categorized according to similarity of purpose: diagnosis of dental caries, periapical pathologies, and periodontal bone loss; cyst and tumor classification; cephalometric analysis; screening of osteoporosis; tooth recognition and forensic odontology; dental implant system recognition; and image quality enhancement. Current development of AI methodology in each of these applications was subsequently discussed. Although most of the reviewed studies demonstrated the great potential of AI applications for dental radiography, further development is still needed before implementation in clinical routine due to several challenges and limitations, such as a lack of dataset-size justification and unstandardized reporting formats. Considering the current limitations and challenges, future AI research in dental radiography should follow standardized reporting formats in order to align research designs and enhance the impact of AI development globally.
Affiliation(s)
- Chiaki Doi
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
- Nobuhiro Yoda
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
- Eha Renwi Astuti
- Department of Dentomaxillofacial Radiology, Faculty of Dental Medicine, Universitas Airlangga, Jl. Mayjen Prof. Dr. Moestopo no 47, Surabaya, Indonesia
- Keiichi Sasaki
- Division of Advanced Prosthetic Dentistry, Tohoku University Graduate School of Dentistry, 4–1 Seiryo-machi, Sendai, Japan
15
Sherwood AA, Sherwood AI, Setzer FC, K SD, Shamili JV, John C, Schwendicke F. A Deep Learning Approach to Segment and Classify C-Shaped Canal Morphologies in Mandibular Second Molars Using Cone-beam Computed Tomography. J Endod 2021; 47:1907-1916. [PMID: 34563507 DOI: 10.1016/j.joen.2021.09.009]
Abstract
INTRODUCTION The identification of C-shaped root canal anatomy on radiographic images affects clinical decision making and treatment. The aims of this study were to develop a deep learning (DL) model to classify C-shaped canal anatomy in mandibular second molars from cone-beam computed tomographic (CBCT) volumes and to compare the performance of 3 different architectures. METHODS U-Net, residual U-Net, and Xception U-Net architectures were used for image segmentation and classification of C-shaped anatomies. Model training and validation were performed on 100 of a total of 135 available limited field of view CBCT images containing mandibular molars with C-shaped anatomy. Thirty-five CBCT images were used for testing. Voxel-matching accuracy of the automated labeling of the C-shaped anatomy was assessed with the Dice index. The mean sensitivity of predicting the correct C-shape subcategory was calculated based on detection accuracy. One-way analysis of variance and post hoc Tukey honestly significant difference tests were used for statistical evaluation. RESULTS The mean Dice coefficients were 0.768 ± 0.0349 for Xception U-Net, 0.736 ± 0.0297 for residual U-Net, and 0.660 ± 0.0354 for U-Net on the test data set. The performance of the 3 models was significantly different overall (analysis of variance, P = .000779). Both Xception U-Net (Q = 7.23, P = .00070) and residual U-Net (Q = 5.09, P = .00951) performed significantly better than U-Net (post hoc Tukey honestly significant difference test). The mean sensitivity values were 0.786 ± 0.0378 for Xception U-Net, 0.746 ± 0.0391 for residual U-Net, and 0.720 ± 0.0495 for U-Net. The mean positive predictive values were 77.6% ± 0.1998% for U-Net, 78.2% ± 0.1971% for residual U-Net, and 80.0% ± 0.1098% for Xception U-Net. The addition of contrast-limited adaptive histogram equalization improved overall architecture efficacy by a mean of 4.6% (P < .0001). CONCLUSIONS DL may aid in the detection and classification of C-shaped canal anatomy.
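The overall comparison above uses a one-way ANOVA across the three architectures' Dice scores. A stdlib-only sketch of the F statistic; the per-fold Dice values below are hypothetical placeholders, not the study's data:

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across k groups of observations."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical per-fold Dice scores for the three architectures:
xception = [0.77, 0.78, 0.76, 0.77]
residual = [0.74, 0.73, 0.74, 0.75]
plain    = [0.66, 0.65, 0.67, 0.66]
print(one_way_anova_f(xception, residual, plain))
```

The F value would then be compared against the F distribution with (k-1, n-k) degrees of freedom, followed by a post hoc Tukey HSD test for pairwise differences.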
Affiliation(s)
- Adithya A Sherwood
- Mahatma Montessori Matriculation Higher Secondary School, Madurai, Tamil Nadu, India
- Anand I Sherwood
- Department of Conservative Dentistry and Endodontics, CSI College of Dental Sciences, Madurai, Tamil Nadu, India.
- Frank C Setzer
- Department of Endodontics, School of Dental Medicine, University of Pennsylvania, Philadelphia, Pennsylvania.
- Sheela Devi K
- Mahatma Montessori Matriculation Higher Secondary School, Madurai, Tamil Nadu, India
- Jasmin V Shamili
- Department of Conservative Dentistry and Endodontics, CSI College of Dental Sciences, Madurai, Tamil Nadu, India
- Caroline John
- Department of Computer Science, Hal Marcus College of Science and Engineering, University of West Florida, Pensacola, Florida
- Falk Schwendicke
- Department of Oral Diagnostics, Charité - Universitätsmedizin Berlin, Berlin, Germany
16
Mohammad-Rahimi H, Nadimi M, Rohban MH, Shamsoddin E, Lee VY, Motamedian SR. Machine learning and orthodontics, current trends and the future opportunities: A scoping review. Am J Orthod Dentofacial Orthop 2021; 160:170-192.e4. [PMID: 34103190 DOI: 10.1016/j.ajodo.2021.02.013]
Abstract
INTRODUCTION In recent years, artificial intelligence (AI) has been applied in various ways in medicine and dentistry. Advancements in AI technology show promising results in the practice of orthodontics. This scoping review aimed to investigate the effectiveness of AI-based models employed in orthodontic landmark detection, diagnosis, and treatment planning. METHODS A precise search of electronic databases was conducted, including PubMed, Google Scholar, Scopus, and Embase (English publications from January 2010 to July 2020). The Quality Assessment and Diagnostic Accuracy Tool 2 (QUADAS-2) was used to assess the quality of the articles included in this review. RESULTS After applying inclusion and exclusion criteria, 49 articles were included in the final review. AI technology has achieved state-of-the-art results in various orthodontic applications, including automated landmark detection on lateral cephalograms and photographic images, determination of cervical vertebral maturation degree, skeletal classification, orthodontic tooth-extraction decisions, prediction of the need for orthodontic treatment or orthognathic surgery, and assessment of facial attractiveness. Most of the AI models used in these applications are based on artificial neural networks. CONCLUSIONS AI can help orthodontists save time and provide accuracy comparable to trained dentists in diagnostic assessments and prognostic predictions. These systems aim to boost performance and enhance the quality of care in orthodontics. However, based on current studies, the most promising applications were cephalometric landmark detection, skeletal classification, and decision making on tooth extractions.
Affiliation(s)
- Mohadeseh Nadimi
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Erfan Shamsoddin
- National Institute for Medical Research Development, Tehran, Iran
- Saeed Reza Motamedian
- Department of Orthodontics, School of Dentistry, & Dentofacial Deformities Research Center, Research Institute of Dental Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
17
Reliability and accuracy of automatic segmentation of mandibular 3D models on linear measurements. Clin Oral Investig 2021; 25:6335-6346. [PMID: 33954849 DOI: 10.1007/s00784-021-03934-4]
Abstract
OBJECTIVE To evaluate whether automatic segmentation of mandibular three-dimensional (3D) models is reliable and accurate. MATERIALS AND METHODS Eight dry mandibles with eight silica markers were scanned in the i-CAT Classic device (Imaging Sciences International). Automatic segmentation was performed using nine standard preset thresholds in the Dolphin software (Dolphin Imaging & Management Solutions). Three observers each made eight linear measurements twice on the mandibular 3D models. Another observer likewise made physical measurements twice on the dry mandibles. Reliability and accuracy were evaluated with intraclass correlation coefficients (ICCs), Dahlberg's formula, Bland-Altman analyses, and changing bias with regression analyses. RESULTS Inter-observer and intra-observer ICCs and Dahlberg's error were ≥ 0.75 and ≤ 1.0 mm, respectively, for all measurements. Agreement between mandibular 3D models and physical measurements ranged from -0.37 to 0.91 mm. CONCLUSIONS Linear measurements made on mandibular 3D models obtained using standard preset thresholds are reliable and accurate. However, additional studies are necessary to confirm this hypothesis for clinical applications. CLINICAL RELEVANCE Since 3D models are useful for diagnostics and surgical planning, it is necessary to determine whether linear measurements made on 3D models obtained by automatic segmentation are sufficiently reliable and accurate.
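The Bland-Altman analysis used above reduces to a bias (mean paired difference) and 95% limits of agreement. A minimal sketch with hypothetical measurements, not the study's data:

```python
import statistics

def bland_altman(m1, m2):
    """Bland-Altman bias and 95% limits of agreement between two sets of
    paired measurements of the same objects."""
    diffs = [a - b for a, b in zip(m1, m2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical linear measurements (mm): 3D model vs. physical calliper.
model    = [30.1, 41.9, 25.3, 33.0, 28.7]
physical = [30.0, 42.2, 25.1, 33.4, 28.5]
bias, lo, hi = bland_altman(model, physical)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```

Plotting each pair's difference against its mean, with horizontal lines at the bias and the two limits, gives the familiar Bland-Altman plot and reveals any changing bias across the measurement range.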
18
A Knowledge-Based Modality-Independent Technique for Concurrent Thigh Muscle Segmentation: Applicable to CT and MR Images. J Digit Imaging 2020; 33:1122-1135. [PMID: 32588159 DOI: 10.1007/s10278-020-00354-w]
Abstract
The mass of the lower extremity muscles is a clinically significant metric. Manual segmentation of these muscles is a time-consuming task. Most segmentation methods for the thigh muscles are based on statistical models and atlases, which require manually segmented datasets. The goal of this work is automatic segmentation of the thigh muscles with only one initial segmented slice. A new automatic method is proposed for concurrent segmentation of individual thigh muscles using a hybrid level-set method and anatomical information about the muscles. In the proposed method, the muscle regions are extracted by the Fast and Robust Fuzzy C-Means Clustering (FRFCM) method, and a contour is then determined for each muscle, which changes according to the muscle's shape variation along its length. The anatomical information is used to control the contour variations and to refine the final boundaries. The method was validated on 22 CT datasets. The average Dice similarity coefficients (DSC) of the method for individual muscle segmentation with one and two initial slices were 89.29 ± 2.59 (%) and 91.77 ± 1.87 (%), respectively, and the average symmetric surface distances (ASSDs) were 0.93 ± 0.29 mm and 0.64 ± 0.18 mm. Furthermore, applied to ten MRI datasets, the average DSC and ASSD for muscles were 90.9 ± 2.61 (%) and 0.71 ± 0.33 mm, respectively. The quantitative and intuitive results of the proposed method show its effectiveness in segmenting both large and small muscles in CT and MR images. The computation time is lower than in previous works, and the method does not need any training datasets.
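The ASSD metric reported above averages, symmetrically in both directions, each surface point's distance to the nearest point on the other surface. A minimal brute-force sketch on toy 2-D point sets (the same formula applies to 3-D surface voxels):

```python
import math

def assd(surface_a, surface_b):
    """Average symmetric surface distance between two surfaces,
    each given as a set of boundary points."""
    def mean_min_dist(src, dst):
        # For each point in src, distance to its nearest neighbour in dst.
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return (mean_min_dist(surface_a, surface_b)
            + mean_min_dist(surface_b, surface_a)) / 2

# Two parallel line segments, one unit apart:
a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 1), (1, 1), (2, 1)]
print(assd(a, b))  # 1.0 (every point is exactly one unit from the other surface)
```

Real implementations replace the brute-force nearest-neighbour search with a distance transform or k-d tree for speed.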
19
Hung K, Yeung AWK, Tanaka R, Bornstein MM. Current Applications, Opportunities, and Limitations of AI for 3D Imaging in Dental Research and Practice. Int J Environ Res Public Health 2020; 17:4424. [PMID: 32575560 PMCID: PMC7345758 DOI: 10.3390/ijerph17124424]
Abstract
The increasing use of three-dimensional (3D) imaging techniques in dental medicine has boosted the development and use of artificial intelligence (AI) systems for various clinical problems. Cone beam computed tomography (CBCT) and intraoral/facial scans are potential sources of image data to develop 3D image-based AI systems for automated diagnosis, treatment planning, and prediction of treatment outcome. This review focuses on current developments and performance of AI for 3D imaging in dentomaxillofacial radiology (DMFR) as well as intraoral and facial scanning. In DMFR, machine learning-based algorithms proposed in the literature focus on three main applications, including automated diagnosis of dental and maxillofacial diseases, localization of anatomical landmarks for orthodontic and orthognathic treatment planning, and general improvement of image quality. Automatic recognition of teeth and diagnosis of facial deformations using AI systems based on intraoral and facial scanning will very likely be a field of increased interest in the future. The review is aimed at providing dental practitioners and interested colleagues in healthcare with a comprehensive understanding of the current trend of AI developments in the field of 3D imaging in dental medicine.
Affiliation(s)
- Kuofeng Hung
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong 999077, China; (K.H.); (A.W.K.Y.); (R.T.)
- Andy Wai Kan Yeung
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong 999077, China; (K.H.); (A.W.K.Y.); (R.T.)
- Ray Tanaka
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong 999077, China; (K.H.); (A.W.K.Y.); (R.T.)
- Michael M. Bornstein
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong 999077, China; (K.H.); (A.W.K.Y.); (R.T.)
- Department of Oral Health & Medicine, University Center for Dental Medicine Basel UZB, University of Basel, 4058 Basel, Switzerland
- Correspondence: Tel.: +41-(0)61-267-25-45
20
Setzer FC, Shi KJ, Zhang Z, Yan H, Yoon H, Mupparapu M, Li J. Artificial Intelligence for the Computer-aided Detection of Periapical Lesions in Cone-beam Computed Tomographic Images. J Endod 2020; 46:987-993. [PMID: 32402466 DOI: 10.1016/j.joen.2020.03.025]
Abstract
INTRODUCTION The aim of this study was to use a deep learning (DL) algorithm for the automated segmentation of cone-beam computed tomographic (CBCT) images and the detection of periapical lesions. METHODS Limited field of view CBCT volumes (n = 20) containing 61 roots with and without lesions were segmented by clinicians versus using the DL approach based on a U-Net architecture. Segmentation labeled each voxel as 1 of 5 categories: "lesion" (periapical lesion), "tooth structure," "bone," "restorative materials," and "background." Repeated splits of all images into a training set and a validation set based on 5-fold cross-validation were performed using deep learning segmentation (DLS), and the results were averaged. DLS versus clinician-dependent segmentation was assessed by dichotomized lesion detection accuracy, evaluating sensitivity, specificity, positive predictive value, and negative predictive value, and by voxel-matching accuracy using the DICE index for each of the 5 labels. RESULTS DLS lesion detection accuracy was 0.93, with specificity of 0.88, positive predictive value of 0.87, and negative predictive value of 0.93. The overall cumulative DICE indexes for the individual labels were lesion = 0.52, tooth structure = 0.74, bone = 0.78, restorative materials = 0.58, and background = 0.95. The cumulative DICE index for all actual true lesions was 0.67. CONCLUSIONS This DL algorithm, trained in a limited CBCT environment, showed excellent lesion detection accuracy. Overall voxel-matching accuracy may benefit from enhanced versions of artificial intelligence.
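The dichotomized detection metrics reported above all derive from a single 2x2 confusion matrix over roots with and without lesions. A minimal sketch with hypothetical counts, chosen only for illustration and not the study's actual tally:

```python
def detection_metrics(tp, fp, fn, tn):
    """Dichotomized detection metrics from a 2x2 confusion matrix:
    tp/fp/fn/tn = true positives, false positives, false negatives, true negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # recall on roots with lesions
        "specificity": tn / (tn + fp),  # recall on roots without lesions
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical root-level counts (the study reports 61 roots in total):
print(detection_metrics(tp=26, fp=4, fn=2, tn=29))
```

Sweeping the detection threshold over such matrices is what produces the ROC curves that diagnostic accuracy meta-analyses pool.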
Affiliation(s)
- Frank C Setzer
- Department of Endodontics, School of Dental Medicine, University of Pennsylvania, Philadelphia, Pennsylvania.
- Katherine J Shi
- Private Practice, University of Pennsylvania, Philadelphia, Pennsylvania
- Zhiyang Zhang
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, Arizona
- Hao Yan
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, Arizona
- Hyunsoo Yoon
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, Arizona
- Mel Mupparapu
- Department of Oral Medicine, School of Dental Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Jing Li
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, Arizona
21
Silva VKS, Vieira WA, Bernardino ÍM, Travençolo BAN, Bittencourt MAV, Blumenberg C, Paranhos LR, Galvão HC. Accuracy of computer-assisted image analysis in the diagnosis of maxillofacial radiolucent lesions: A systematic review and meta-analysis. Dentomaxillofac Radiol 2019; 49:20190204. [PMID: 31709811 DOI: 10.1259/dmfr.20190204]
Abstract
OBJECTIVES This study aimed to search for scientific evidence concerning the accuracy of computer-assisted analysis for diagnosing maxillofacial radiolucent lesions. METHODS A systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-analyses Protocols statement, considering 10 databases, including the gray literature. The protocol was registered at the International Prospective Register of Systematic Reviews (CRD42018089945). The population, intervention, comparison, and outcome strategy was used to define the eligibility criteria, and only diagnostic test studies were included. Their risk of bias was assessed by the Joanna Briggs Institute Critical Appraisal tool. A random-effects model meta-analysis was performed, and heterogeneity among the included studies was estimated using the I² statistic. The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) tool assessed the quality of evidence and strength of recommendation across included studies. RESULTS Out of 715 identified citations, four papers, published between 2009 and 2017, fulfilled the criteria and were included in this systematic review. A total of 191 lesions, classified as periapical granuloma and cyst, dentigerous cyst, or keratocystic odontogenic tumor, were analyzed. All selected articles scored low risk of bias. The pooled accuracy estimate, regardless of the classification method used, was 88.75% (95% CI = 85.19-92.30). The heterogeneity test reached moderate values (I² = 57.89%). According to the GRADE tool, the analyzed outcome was classified as having a low level of certainty. CONCLUSIONS The overall evaluation showed all studies presented high accuracy rates of computer-aided diagnosis systems in classifying radiolucent maxillofacial lesions compared to histopathological biopsy. However, due to the moderate heterogeneity found among the studies included in this meta-analysis, a pragmatic recommendation about the use of computer-assisted analysis is not possible.
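The I² heterogeneity statistic reported in meta-analyses like this one derives from Cochran's Q under inverse-variance weighting. A minimal sketch; the study estimates and variances below are hypothetical, not the four included papers' values:

```python
def i_squared(estimates, variances):
    """Cochran's Q and the I² heterogeneity statistic (in %) for k study
    estimates with their within-study variances (inverse-variance weights)."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical accuracy estimates (%) and variances from four studies:
q, i2 = i_squared([85.0, 92.0, 88.0, 90.0], [4.0, 2.5, 3.0, 3.5])
print(round(q, 2), round(i2, 1))
```

By the usual rule of thumb, I² near 25% suggests low heterogeneity, near 50% moderate (as in the 57.89% reported above), and near 75% high.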
Affiliation(s)
- Virginia K S Silva
- Department of Dentistry, Postgraduate Program in Health Sciences, Federal University of Rio Grande do Norte, Natal, Rio Grande do Norte, Brazil
- Walbert A Vieira
- Postgraduate Program in Dentistry, Endodontics Division, Piracicaba Dental School, State University of Campinas, Piracicaba, São Paulo, Brazil
- Ítalo M Bernardino
- Department of Dentistry, Postgraduate Program in Dentistry, State University of Paraíba, Campina Grande, Paraíba, Brazil
- Bruno A N Travençolo
- Center for Exact Sciences and Technology, School of Computing, Federal University of Uberlândia, Uberlândia, Minas Gerais, Brazil
- Marcos A V Bittencourt
- Department of Pediatric and Community Dentistry, School of Dentistry, Federal University of Bahia, Salvador, Bahia, Brazil
- Cauane Blumenberg
- Department of Social Medicine, Postgraduate Program in Epidemiology, Federal University of Pelotas, Pelotas, Rio Grande do Sul, Brazil
- Luiz R Paranhos
- Department of Preventive and Community Dentistry, School of Dentistry, Federal University of Uberlândia, Uberlândia, Minas Gerais, Brazil
- Hebel C Galvão
- Department of Dentistry, Federal University of Rio Grande do Norte, Natal, Rio Grande do Norte, Brazil
22
Hung K, Montalvao C, Tanaka R, Kawai T, Bornstein MM. The use and performance of artificial intelligence applications in dental and maxillofacial radiology: A systematic review. Dentomaxillofac Radiol 2019; 49:20190107. [PMID: 31386555 DOI: 10.1259/dmfr.20190107]
Abstract
OBJECTIVES To investigate the current clinical applications and diagnostic performance of artificial intelligence (AI) in dental and maxillofacial radiology (DMFR). METHODS Studies using applications related to DMFR to develop or implement AI models were sought by searching five electronic databases and four selected core journals in the field of DMFR. The customized assessment criteria based on QUADAS-2 were adapted for quality analysis of the studies included. RESULTS The initial electronic search yielded 1862 titles, and 50 studies were eventually included. Most studies focused on AI applications for an automated localization of cephalometric landmarks, diagnosis of osteoporosis, classification/segmentation of maxillofacial cysts and/or tumors, and identification of periodontitis/periapical disease. The performance of AI models varies among different algorithms. CONCLUSION The AI models proposed in the studies included exhibited wide clinical applications in DMFR. Nevertheless, it is still necessary to further verify the reliability and applicability of the AI models prior to transferring these models into clinical practice.
Affiliation(s)
- Kuofeng Hung
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Carla Montalvao
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Ray Tanaka
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Taisuke Kawai
- Department of Oral and Maxillofacial Radiology, School of Life Dentistry at Tokyo, Nippon Dental University, Tokyo, Japan
- Michael M Bornstein
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
23
Martinelli-Kläy CP, Martinelli CR, Martinelli C, Macedo HR, Lombardi T. Unusual Imaging Features of Dentigerous Cyst: A Case Report. Dent J (Basel) 2019; 7:dj7030076. [PMID: 31374841 PMCID: PMC6784467 DOI: 10.3390/dj7030076]
Abstract
Dentigerous cysts (DC) are cystic lesions radiographically represented by a well-defined unilocular radiolucent area involving the crown of an impacted tooth. We present an unusual radiographic feature of a dentigerous cyst related to the impacted mandibular right second molar in a 16-year-old patient, whose multilocular appearance on panoramic radiography suggested an ameloblastoma or odontogenic keratocyst (OKC). Multi-slice computed tomography (MSCT), however, revealed a unilocular lesion without septations, with an attenuation coefficient of 3.9 to 22.9 HU, suggesting a cystic lesion. Because of its extension, marsupialization was performed together with histopathological analysis of the removed fragment, which suggested a dentigerous cyst. Nine months later, the lesion had reduced in size and was totally excised; the impacted mandibular right second molar was also extracted. Histopathological examination confirmed the diagnosis of a dentigerous cyst. One year later, panoramic radiography showed complete mandibular bone healing. Large dentigerous cysts can sometimes mimic other, more aggressive pathologies. Precise diagnosis is important since DC, OKC, and ameloblastoma require different treatments; histological examination is therefore essential to establish a definitive diagnosis. In our case, MSCT and tissue attenuation coefficient analysis helped guide the diagnosis and management of the dentigerous cyst.
Affiliation(s)
- Carla Patrícia Martinelli-Kläy
- Laboratory of Oral & Maxillofacial Pathology, Oral Medicine and Oral and Maxillofacial Pathology Unit, Division of Oral Maxillofacial Surgery, Department of Surgery, Geneva University Hospitals, University of Geneva, 1211 Geneva, Switzerland
- Centre for Diagnosis and Treatment of Oral Diseases, Ribeirão Preto 14025-250, Brazil
- Celso Martinelli
- Centre for Diagnosis and Treatment of Oral Diseases, Ribeirão Preto 14025-250, Brazil
- Tommaso Lombardi
- Laboratory of Oral & Maxillofacial Pathology, Oral Medicine and Oral and Maxillofacial Pathology Unit, Division of Oral Maxillofacial Surgery, Department of Surgery, Geneva University Hospitals, University of Geneva, 1211 Geneva, Switzerland
24
A novel image-based retrieval system for characterization of maxillofacial lesions in cone beam CT images. Int J Comput Assist Radiol Surg 2019; 14:785-796. [DOI: 10.1007/s11548-019-01946-w]
25
Charlesworth JM, Davidson MA. Undermining a common language: smartphone applications for eye emergencies. Medical Devices: Evidence and Research 2019; 12:21-40. [PMID: 30697086] [PMCID: PMC6339640] [DOI: 10.2147/mder.s186529]
Abstract
Background Emergency room physicians are frequently called upon to assess eye injuries and vision problems in the absence of specialized ophthalmologic equipment. Technological applications that can be used on mobile devices are only now becoming available. Objective To review the literature for evidence of the clinical effectiveness of smartphone applications for visual acuity assessment marketed by two providers (Google Play and iTunes). Methods The websites of the two mobile technology vendors in Canada and Ireland were searched on three separate occasions using the terms "eye", "ocular", "ophthalmology", "optometry", "vision", and "visual assessment" to determine which applications were currently available. Four medical databases (Cochrane, Embase, PubMed, Medline) were subsequently searched with the same terms AND mobile OR smart phone for papers in English published from 2010 to 2017. Results A total of 5,024 Canadian and 2,571 Irish applications were initially identified; after screening, 44 were retained. Twelve relevant articles were identified from the health literature. After screening, only one validation study referred to one of the identified applications, and it only partially validated that application for clinical use. Conclusion Mobile device applications in their current state are not suitable for emergency room ophthalmologic assessment, because systematic validation is lacking.
Affiliation(s)
- Jennifer M Charlesworth
- School of Medicine, National University of Ireland, Galway, Ireland
- AM Charlesworth & Associates Science and Technology Consultants, Ottawa, ON, Canada
- Myriam A Davidson
- AM Charlesworth & Associates Science and Technology Consultants, Ottawa, ON, Canada
26
A pilot study for segmentation of pharyngeal and sino-nasal airway subregions by automatic contour initialization. Int J Comput Assist Radiol Surg 2017; 12:1877-1893. [PMID: 28755036] [DOI: 10.1007/s11548-017-1650-1]
Abstract
PURPOSE The objective of the present study was to put forward a novel automatic segmentation algorithm for pharyngeal and sino-nasal airway subregions on 3D CBCT imaging datasets. METHODS A fully automatic segmentation of sino-nasal and pharyngeal airway subregions was implemented in the MATLAB programming environment. The novelty of the algorithm is the automatic initialization of contours in the upper airway subregions. The algorithm combines boundary definitions of the human anatomy and shape constraints with automatic contour initialization, giving it the potential to enhance utility at the clinical level. After initialization, five segmentation techniques were used to test the robustness of the automatic initialization: Chan-Vese level set (CVL), localized Chan-Vese level set (LCVL), Bhattacharya distance level set (BDL), Grow Cut (GC), and Sparse Field method (SFM). RESULTS Precision and F-score were greater than 80% for all regions with all five segmentation methods. High precision and low recall were observed with the BDL and GC techniques, indicating under-segmentation; low precision and high recall were observed with the CVL and SFM methods, indicating over-segmentation. The largest F-score values were observed with the SFM method for all subregions. The minimum F-score was observed for the naso-ethmoidal and sphenoidal air sinus regions, whereas the maximum F-score was observed in the maxillary air sinus region; contour initialization was accordingly more accurate for the maxillary air sinuses than for the sphenoidal and naso-ethmoidal regions. CONCLUSION The overall F-score was greater than 80% for all airway subregions with all five segmentation techniques, indicating accurate contour initialization. The robustness of the algorithm still needs to be tested on severely deformed cases and on patients of different races and ethnicities for it to gain global acceptance in the dental radiology workflow.
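The precision/recall/F-score pattern described in the results (high precision with low recall indicating under-segmentation, the reverse indicating over-segmentation) can be illustrated with a minimal sketch. The function and toy voxel masks below are hypothetical, not the study's code or data:

```python
# Minimal sketch: precision, recall, and F-score for a predicted binary
# segmentation mask against a ground-truth mask. The masks are illustrative
# 0/1 voxel labels, flattened to 1D for simplicity.

def segmentation_scores(pred, truth):
    """Return (precision, recall, F-score) for two equal-length 0/1 sequences."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)          # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)      # false positives
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)      # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f = (2 * precision * recall / (precision + recall)
         if (precision + recall) else 0.0)
    return precision, recall, f

# Toy example of over-segmentation: the prediction covers all true voxels
# plus two extra, so recall is perfect (1.0) but precision drops (0.6).
truth = [0, 0, 1, 1, 1, 0, 0, 0]
pred  = [0, 1, 1, 1, 1, 1, 0, 0]
precision, recall, f_score = segmentation_scores(pred, truth)
```

The same counts computed per subregion, averaged over cases, would yield summary figures comparable to the percentages the abstract reports.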
27
Abdolali F, Zoroofi RA, Otake Y, Sato Y. Automated classification of maxillofacial cysts in cone beam CT images using contourlet transformation and Spherical Harmonics. Comput Methods Programs Biomed 2017; 139:197-207. [PMID: 28187891] [DOI: 10.1016/j.cmpb.2016.10.024]
Abstract
BACKGROUND AND OBJECTIVE Accurate detection of maxillofacial cysts is an essential step for diagnosis, monitoring, and planning therapeutic intervention. Cysts vary widely in size and shape, and existing detection methods yield poor results; customizing automatic detection systems to reach sufficient accuracy for clinical practice is highly challenging. For this purpose, integrating engineering knowledge into efficient feature extraction is essential. METHODS This paper presents a novel framework for maxillofacial cyst detection using a hybrid methodology based on surface and texture information. The proposed approach consists of three main steps: first, each cystic lesion is segmented with high accuracy; in the second and third steps, feature extraction and classification are performed. Contourlet and SPHARM coefficients are used as texture and shape features and fed into the classifier. Two classifiers are compared in this study: a support vector machine and sparse discriminant analysis. SPHARM coefficients are generally estimated by the iterative residual fitting (IRF) algorithm, which is based on stepwise regression. To improve the accuracy of the IRF estimation, a method based on extra orthogonalization is employed to reduce linear dependency. We used a ground-truth dataset of cone beam CT images from 96 patients, covering three maxillofacial cyst categories: radicular cyst, dentigerous cyst, and keratocystic odontogenic tumor. RESULTS Using orthogonalized SPHARM, the residual sum of squares is decreased, leading to a more accurate estimation. Results are analyzed in terms of statistical measures such as specificity, sensitivity, positive predictive value, and negative predictive value. A classification rate of 96.48% is achieved using sparse discriminant analysis with orthogonalized SPHARM features, improving classification accuracy by at least 8.94% with respect to conventional features. CONCLUSIONS This study demonstrated that the proposed methodology can improve computer-assisted diagnosis (CAD) performance by incorporating more discriminative features. Orthogonalized SPHARM is promising for computerized cyst detection and may have a significant impact on future CAD systems.
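The statistical measures this abstract reports (sensitivity, specificity, positive and negative predictive value) all derive directly from confusion-matrix counts. A minimal sketch with illustrative counts, not the study's data:

```python
# Minimal sketch: standard diagnostic-accuracy measures computed from the
# four confusion-matrix counts of a binary (e.g. one-vs-rest) classifier.

def diagnostic_measures(tp, fp, tn, fn):
    """Return (sensitivity, specificity, PPV, NPV) from confusion counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Illustrative counts for one cyst category evaluated one-vs-rest.
sens, spec, ppv, npv = diagnostic_measures(tp=45, fp=3, tn=40, fn=8)
```

For a three-class problem such as the one described, these measures would typically be computed per class and then summarized.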
Affiliation(s)
- Fatemeh Abdolali
- Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Reza Aghaeizadeh Zoroofi
- Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Yoshito Otake
- Graduate School of Information Science, Nara Institute of Science and Technology (NAIST), Nara, Japan
- Yoshinobu Sato
- Graduate School of Information Science, Nara Institute of Science and Technology (NAIST), Nara, Japan
28
Automatic segmentation of mandibular canal in cone beam CT images using conditional statistical shape model and fast marching. Int J Comput Assist Radiol Surg 2016; 12:581-593. [DOI: 10.1007/s11548-016-1484-2]