1. Hartoonian S, Hosseini M, Yousefi I, Mahdian M, Ghazizadeh Ahsaie M. Applications of artificial intelligence in dentomaxillofacial imaging: a systematic review. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:641-655. [PMID: 38637235 DOI: 10.1016/j.oooo.2023.12.790]
Abstract
BACKGROUND Artificial intelligence (AI) technology has been increasingly developed in oral and maxillofacial imaging. The aim of this systematic review was to assess the applications and performance of the developed algorithms in different dentomaxillofacial imaging modalities. STUDY DESIGN A systematic search of PubMed and Scopus databases was performed. The search strategy was set as a combination of the following keywords: "Artificial Intelligence," "Machine Learning," "Deep Learning," "Neural Networks," "Head and Neck Imaging," and "Maxillofacial Imaging." Full-text screening and data extraction were conducted independently by two reviewers; any disagreement was resolved by discussion. The risk of bias was assessed by one reviewer and validated by another. RESULTS The search returned a total of 3,392 articles. After careful evaluation of the titles, abstracts, and full texts, a total of 194 articles were included. Most studies focused on AI applications for tooth and implant classification and identification, 3-dimensional cephalometric landmark detection, lesion detection (periapical, jaws, and bone), and osteoporosis detection. CONCLUSION Despite the AI models' limitations, they showed promising results. Further studies are needed to explore specific applications and real-world scenarios before confidently integrating these models into dental practice.
Affiliation(s)
- Serlie Hartoonian, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Matine Hosseini, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Iman Yousefi, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mina Mahdian, Department of Prosthodontics and Digital Technology, Stony Brook University School of Dental Medicine, Stony Brook University, Stony Brook, NY, USA
- Mitra Ghazizadeh Ahsaie, Department of Oral and Maxillofacial Radiology, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
2. Zhou W, Lu X, Zhao D, Jiang M, Fan L, Zhang W, Li F, Wang D, Yin W, Liu X. A dual-labeled dataset and fusion model for automatic teeth segmentation, numbering, and state assessment on panoramic radiographs. BMC Oral Health 2024; 24:1201. [PMID: 39385212 PMCID: PMC11465503 DOI: 10.1186/s12903-024-04984-2]
Abstract
BACKGROUND Recently, deep learning has been increasingly applied in the field of dentistry. The aim of this study is to develop a model for the automatic segmentation, numbering, and state assessment of teeth on panoramic radiographs. METHODS We created a dual-labeled dataset on panoramic radiographs for training, incorporating both numbering and state labels. We then developed a fusion model that combines a YOLOv9-e instance segmentation model with an EfficientNetv2-l classification model. The instance segmentation model is used for tooth segmentation and numbering, whereas the classification model is used for state evaluation. The final prediction results integrate tooth position, numbering, and state information. The model's output includes result visualization and automatic report generation. RESULTS Precision, Recall, mAP50 (mean average precision at an IoU threshold of 0.50), and mAP50-95 for the tooth instance segmentation task are 0.989, 0.955, 0.975, and 0.840, respectively. Precision, Recall, Specificity, and F1 Score for the tooth classification task are 0.943, 0.933, 0.985, and 0.936, respectively. CONCLUSIONS This fusion model is the first to integrate automatic dental segmentation, numbering, and state assessment. It provides highly accurate results, including detailed visualizations and automated report generation.
Affiliation(s)
- Wenbo Zhou, Department of Stomatology, China-Japan Union Hospital of Jilin University, 126 Xiantai Street, Changchun, China
- Xin Lu, School of Electrical and Computer Engineering, The University of Sydney, Darlington, NSW 2008, Australia
- Dan Zhao, Wuxi Stomatology Hospital, 6 Jiankang Road, Wuxi, China
- Meng Jiang, Wuxi Stomatology Hospital, 6 Jiankang Road, Wuxi, China
- Linlin Fan, Department of Pediatric Dentistry, Wuxi Stomatology Hospital, 6 Jiankang Road, Wuxi, China
- Weihang Zhang, Department of Stomatology, People's Hospital of Zhengzhou, 33 Huanghe Road, Zhengzhou, China
- Fenglin Li, Hospital of Stomatology of Jilin University, 1500 Qinghua Road, Changchun, China
- Dezhou Wang, Department of Stomatology, China-Japan Union Hospital of Jilin University, 126 Xiantai Street, Changchun, China
- Weihuang Yin, Department of Stomatology, China-Japan Union Hospital of Jilin University, 126 Xiantai Street, Changchun, China
- Xin Liu, Department of Stomatology, China-Japan Union Hospital of Jilin University, 126 Xiantai Street, Changchun, China
3. Morita D, Kawarazaki A, Soufi M, Otake Y, Sato Y, Numajiri T. Automatic detection of midfacial fractures in facial bone CT images using deep learning-based object detection models. J Stomatol Oral Maxillofac Surg 2024; 125:101914. [PMID: 38750725 DOI: 10.1016/j.jormas.2024.101914]
Abstract
BACKGROUND Midfacial fractures are among the most frequent facial fractures. Surgery is recommended within 2 weeks of injury, but this time frame is often extended because the fracture is missed on diagnostic imaging in the busy emergency medicine setting. Using deep learning technology, which has progressed markedly in various fields, we attempted to develop a system for the automatic detection of midfacial fractures. The purpose of this study was to use this system to diagnose fractures accurately and rapidly, with the intention of benefiting both patients and emergency room physicians. METHODS One hundred computed tomography images that included midfacial fractures (e.g., maxillary, zygomatic, nasal, and orbital fractures) were prepared. In each axial image, the fracture area was surrounded by a rectangular region to create the annotation data. Eighty images were randomly classified as the training dataset (3736 slices) and 20 as the validation dataset (883 slices). Training and validation were performed using Single Shot MultiBox Detector (SSD) and version 8 of You Only Look Once (YOLOv8), which are object detection algorithms. RESULTS The performance indicators for SSD and YOLOv8, respectively, were: precision, 0.872 and 0.871; recall, 0.823 and 0.775; F1 score, 0.846 and 0.820; average precision, 0.899 and 0.769. CONCLUSIONS The use of deep learning techniques allowed the automatic detection of midfacial fractures with good accuracy and high speed. The system developed in this study is promising for automated detection of midfacial fractures and may provide a quick and accurate solution for emergency medical care and other settings.
Affiliation(s)
- Daiki Morita, Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan; Department of Plastic and Reconstructive Surgery, Tokai University School of Medicine, Kanagawa, Japan
- Ayako Kawarazaki, Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Mazen Soufi, Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Yoshito Otake, Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Yoshinobu Sato, Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Toshiaki Numajiri, Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
4. Sillmann YM, Monteiro JLGC, Eber P, Baggio AMP, Peacock ZS, Guastaldi FPS. Empowering surgeons: will artificial intelligence change oral and maxillofacial surgery? Int J Oral Maxillofac Surg 2024:S0901-5027(24)00369-2. [PMID: 39341693 DOI: 10.1016/j.ijom.2024.09.004]
Abstract
Artificial Intelligence (AI) can enhance the precision and efficiency of diagnostics and treatments in oral and maxillofacial surgery (OMS), leveraging advanced computational technologies to mimic intelligent human behaviors. The study aimed to examine the current state of AI in the OMS literature and highlight the urgent need for further research to optimize AI integration in clinical practice and enhance patient outcomes. A scoping review of journals related to OMS was conducted, focusing on OMS-related AI applications. PubMed was searched using the terms "artificial intelligence", "convolutional networks", "neural networks", "machine learning", "deep learning", and "automation". Ninety articles were analyzed and classified into the following subcategories: pathology, orthognathic surgery, facial trauma, temporomandibular joint disorders, dentoalveolar surgery, dental implants, craniofacial deformities, reconstructive surgery, aesthetic surgery, and complications. There was a significant increase in AI-related studies published after 2019, which accounted for 95.6% of the total reviewed. This surge in research reflects growing interest in AI and its potential in OMS. Among the studies, the primary uses of AI in OMS were in pathology (e.g., lesion detection, lymph node metastasis detection) and orthognathic surgery (e.g., surgical planning through facial bone segmentation). The studies predominantly employed convolutional neural networks (CNNs) and artificial neural networks (ANNs) for classification tasks, potentially improving clinical outcomes.
Affiliation(s)
- Y M Sillmann, Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA
- J L G C Monteiro, Wellman Center for Photomedicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- P Eber, Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA
- A M P Baggio, Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA
- Z S Peacock, Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA
- F P S Guastaldi, Division of Oral and Maxillofacial Surgery, Massachusetts General Hospital, and Department of Oral and Maxillofacial Surgery, Harvard School of Dental Medicine, Boston, MA, USA
5. Lu W, Yu X, Li Y, Cao Y, Chen Y, Hua F. Artificial Intelligence-Related Dental Research: Bibliometric and Altmetric Analysis. Int Dent J 2024:S0020-6539(24)01415-1. [PMID: 39266401 DOI: 10.1016/j.identj.2024.08.004]
Abstract
BACKGROUND Recent years have witnessed an explosive surge in dental research related to artificial intelligence (AI). These applications have optimised dental workflows, demonstrating significant clinical importance. Understanding the current landscape and trends of this topic is crucial for both clinicians and researchers to utilise and advance this technology. However, a comprehensive scientometric study regarding this field had yet to be performed. METHODS A literature search was conducted in the Web of Science Core Collection database to identify eligible "research articles" and "reviews." Literature screening and exclusion were performed by two investigators. Thereafter, VOSviewer was utilised in co-occurrence analysis and CiteSpace in co-citation analysis. The R package Bibliometrix was employed to automatically calculate scientific impacts, determining the core authors and journals. Altmetric data were described narratively and supplemented with Spearman correlation analysis. RESULTS A total of 1558 research publications were included. During the past 5 years, AI-related dental publications drastically increased in number, from 36 to 581. Diagnostics and Scientific Reports published the most articles, whereas Journal of Dental Research received the highest number of citations per article. China, the US, and South Korea emerged as the most prolific countries, whilst Germany received the highest number of citations per article (23.29). Charité Universitätsmedizin Berlin was the institution with the highest number of publications and citations per article (29.16). Altmetric Attention Score was correlated with News Mentions (P < .001), and significant associations were observed amongst Dimension Citations, Mendeley Readers, and Web of Science Citations (P < .001). CONCLUSIONS The number of publications on AI-related dental research has been rising rapidly and may continue its upward trend. China, the US, South Korea, and Germany have promoted the progress of AI-related dental research. Disease diagnosis, orthodontic applications, and morphology segmentation were current hotspots. Attention mechanism, explainable AI, multimodal data fusion, and AI-generated text assistants necessitate future research and exploration.
Affiliation(s)
- Wei Lu, State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Xueqian Yu, State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China; Library, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Yueyang Li, Wuhan Children's Hospital (Wuhan Maternal and Child Healthcare Hospital), Tongji Medical College, Huazhong University of Science & Technology, Wuhan, China
- Yi Cao, School of Electronic Information, Wuhan University, Wuhan, China
- Yanning Chen, Restorative Dental Sciences, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Fang Hua, State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China; Center for Evidence-Based Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China; Center for Orthodontics and Pediatric Dentistry at Optics Valley Branch, School & Hospital of Stomatology, Wuhan University, Wuhan, China; Division of Dentistry, School of Medical Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
6. Yari A, Fasih P, Hosseini Hooshiar M, Goodarzi A, Fattahi SF. Detection and classification of mandibular fractures in panoramic radiography using artificial intelligence. Dentomaxillofac Radiol 2024; 53:363-371. [PMID: 38652576 PMCID: PMC11358630 DOI: 10.1093/dmfr/twae018]
Abstract
OBJECTIVES This study evaluated the performance of the YOLOv5 deep learning model in detecting different mandibular fracture types in panoramic images. METHODS The dataset of panoramic radiographs with mandibular fractures was divided into training, validation, and testing sets, with 60%, 20%, and 20% of the images, respectively. An equal number of control images without fractures were also distributed among the datasets. The YOLOv5 algorithm was trained to detect six mandibular fracture types based on anatomical location, including symphysis, body, angle, ramus, condylar neck, and condylar head. Performance metrics of accuracy, precision, sensitivity (recall), specificity, dice coefficient (F1 score), and area under the curve (AUC) were calculated for each class. RESULTS A total of 498 panoramic images containing 673 fractures were collected. The accuracy was highest in detecting body (96.21%) and symphysis (95.87%) fractures, and was lowest in angle (90.51%) fractures. The highest and lowest precision values were observed in detecting symphysis (95.45%) and condylar head (63.16%) fractures, respectively. The sensitivity was highest in body (96.67%) fractures and was lowest in condylar head (80.00%) and condylar neck (81.25%) fractures. The highest specificity was noted in symphysis (98.96%), body (96.08%), and ramus (96.04%) fractures. The dice coefficient and AUC were highest in detecting body fractures (0.921 and 0.942, respectively), and were lowest in detecting condylar head fractures (0.706 and 0.812, respectively). CONCLUSION The trained algorithm achieved promising results in detecting most fracture types, particularly in the body and symphysis regions, indicating the potential of machine learning as a diagnostic aid for clinicians.
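The per-class metrics reported in studies like this one all derive from confusion-matrix counts. A minimal sketch of those definitions, using hypothetical counts rather than any data from the study:

```python
# Standard binary detection metrics from confusion-matrix counts:
# precision, sensitivity (recall), specificity, F1 (dice), accuracy.
# The counts passed in below are hypothetical, for illustration only.

def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute per-class metrics from true/false positive/negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                  # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)  # dice coefficient
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1, "accuracy": accuracy}

# Hypothetical counts for a single fracture class.
m = detection_metrics(tp=58, fp=3, tn=98, fn=2)
print({k: round(v, 3) for k, v in m.items()})
```

Note that F1 and the dice coefficient coincide for binary detection, which is why the abstract can report them as one metric.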
Affiliation(s)
- Amir Yari, Department of Oral and Maxillofacial Surgery, School of Dentistry, Kashan University of Medical Sciences, Kashan, 8715973474, Iran
- Paniz Fasih, Department of Prosthodontics, School of Dentistry, Kashan University of Medical Sciences, Kashan, 8715973474, Iran
- Mohammad Hosseini Hooshiar, Department of Periodontics, School of Dentistry, Tehran University of Medical Sciences, Tehran, 1439955991, Iran
- Ali Goodarzi, Department of Oral and Maxillofacial Surgery, School of Dentistry, Shiraz University of Medical Sciences, Shiraz, 7195615878, Iran
- Seyedeh Farnaz Fattahi, Department of Prosthodontics, School of Dentistry, Shiraz University of Medical Sciences, Shiraz, 7195615878, Iran
7. Kutbi M. Artificial Intelligence-Based Applications for Bone Fracture Detection Using Medical Images: A Systematic Review. Diagnostics (Basel) 2024; 14:1879. [PMID: 39272664 PMCID: PMC11394268 DOI: 10.3390/diagnostics14171879]
Abstract
Artificial intelligence (AI) is making notable advancements in the medical field, particularly in bone fracture detection. This systematic review compiles and assesses existing research on AI applications aimed at identifying bone fractures through medical imaging, encompassing studies from 2010 to 2023. It evaluates the performance of various AI models, such as convolutional neural networks (CNNs), in diagnosing bone fractures, highlighting their superior accuracy, sensitivity, and specificity compared to traditional diagnostic methods. Furthermore, the review explores the integration of advanced imaging techniques like 3D CT and MRI with AI algorithms, which has led to enhanced diagnostic accuracy and improved patient outcomes. The potential of Generative AI and Large Language Models (LLMs), such as OpenAI's GPT, to enhance diagnostic processes through synthetic data generation, comprehensive report creation, and clinical scenario simulation is also discussed. The review underscores the transformative impact of AI on diagnostic workflows and patient care, while also identifying research gaps and suggesting future research directions to enhance data quality, model robustness, and ethical considerations.
Affiliation(s)
- Mohammed Kutbi, College of Computing and Informatics, Saudi Electronic University, Riyadh 13316, Saudi Arabia
8. Mao J, Du Y, Xue J, Hu J, Mai Q, Zhou T, Zhou Z. Automated detection and classification of mandibular fractures on multislice spiral computed tomography using modified convolutional neural networks. Oral Surg Oral Med Oral Pathol Oral Radiol 2024:S2212-4403(24)00404-8. [PMID: 39384413 DOI: 10.1016/j.oooo.2024.07.010]
Abstract
OBJECTIVE To evaluate the performance of convolutional neural networks (CNNs) for the automated detection and classification of mandibular fractures on multislice spiral computed tomography (MSCT). STUDY DESIGN MSCT data from 361 patients with mandibular fractures were retrospectively collected. Two experienced maxillofacial surgeons annotated the images as ground truth. Fractures were detected utilizing the following models: YOLOv3, YOLOv4, Faster R-CNN, CenterNet, and YOLOv5-TRS. Fracture sites were classified by the following models: AlexNet, GoogLeNet, ResNet50, original DenseNet-121, and modified DenseNet-121. The performance was evaluated for accuracy, sensitivity, specificity, and area under the curve (AUC). AUC values were compared using the Z-test and P values <.05 were considered to be statistically significant. RESULTS Of all of the detection models, YOLOv5-TRS obtained the greatest mean accuracy (96.68%). Among all of the fracture subregions, body fractures were the most reliably detected (with accuracies of 88.59%-99.01%). For classification models, the AUCs for body fractures were higher than those of condyle and angle fractures, and they were all above 0.75, with the highest AUC at 0.903. Modified DenseNet-121 had the best overall classification performance with a mean AUC of 0.814. CONCLUSIONS The modified CNN-based models demonstrated high reliability for the diagnosis of mandibular fractures on MSCT.
Affiliation(s)
- Jingjing Mao, Ningxia Medical University, Ningxia Key Laboratory of Oral Disease Research, Yinchuan, P.R. China
- Yuhu Du, College of Computer Science and Engineering, North Minzu University, Yinchuan, P.R. China
- Jiawen Xue, Ningxia Medical University, Ningxia Key Laboratory of Oral Disease Research, Yinchuan, P.R. China
- Jingjing Hu, Department of Oral and Maxillofacial Surgery, Guyuan People's Hospital, Guyuan, P.R. China
- Qian Mai, Department of Stomatology, The First People's Hospital of Yinchuan, Yinchuan, P.R. China
- Tao Zhou, College of Computer Science and Engineering, North Minzu University, Yinchuan, P.R. China
- Zhongwei Zhou, Department of Oral and Maxillofacial Surgery, General Hospital of Ningxia Medical University, Yinchuan, P.R. China; Institution of Medical Sciences, General Hospital of Ningxia Medical University, Yinchuan, P.R. China
9. van Nistelrooij N, Schitter S, van Lierop P, Ghoul KE, König D, Hanisch M, Tel A, Xi T, Thiem DGE, Smeets R, Dubois L, Flügge T, van Ginneken B, Bergé S, Vinayahalingam S. Detecting Mandible Fractures in CBCT Scans Using a 3-Stage Neural Network. J Dent Res 2024:220345241256618. [PMID: 38910411 DOI: 10.1177/00220345241256618]
Abstract
After nasal bone fractures, fractures of the mandible are the most frequently encountered injuries of the facial skeleton. Accurate identification of fracture locations is critical for effectively managing these injuries. To address this need, JawFracNet, an innovative artificial intelligence method, has been developed to enable automated detection of mandibular fractures in cone-beam computed tomography (CBCT) scans. JawFracNet employs a 3-stage neural network model that processes 3-dimensional patches from a CBCT scan. Stage 1 predicts a segmentation mask of the mandible in a patch, which is subsequently used in stage 2 to predict a segmentation of the fractures and in stage 3 to classify whether the patch contains any fracture. The final output of JawFracNet is the fracture segmentation of the entire scan, obtained by aggregating and unifying voxel-level and patch-level predictions. A total of 164 CBCT scans without mandibular fractures and 171 CBCT scans with mandibular fractures were included in this study. Evaluation of JawFracNet demonstrated a precision of 0.978 and a sensitivity of 0.956 in detecting mandibular fractures. The current study proposes the first benchmark for mandibular fracture detection in CBCT scans. Straightforward replication is promoted by publicly sharing the code and providing access to JawFracNet on grand-challenge.org.
Affiliation(s)
- N van Nistelrooij, Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Nijmegen, The Netherlands; Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Berlin, Germany
- S Schitter, Department of Oral and Maxillofacial Surgery, Division of Regenerative, Orofacial Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- P van Lierop, Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
- K El Ghoul, Department of Oral and Maxillofacial Surgery, Erasmus Medical Center, Rotterdam, The Netherlands
- D König, Department of Oral and Maxillofacial Surgery, Division of Regenerative, Orofacial Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- M Hanisch, Department of Oral and Maxillofacial Surgery, Universitätsklinikum Münster, Münster, Germany
- A Tel, Clinic of Maxillofacial Surgery, Head-Neck and NeuroScience Department, University Hospital of Udine, Udine, Italy
- T Xi, Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
- D G E Thiem, Department of Oral and Maxillofacial Surgery, Facial Plastic Surgery, University Medical Centre Mainz, Mainz, Germany
- R Smeets, Department of Oral and Maxillofacial Surgery, Division of Regenerative, Orofacial Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- L Dubois, Department of Oral and Maxillofacial Surgery, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- T Flügge, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Berlin, Germany
- B van Ginneken, Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- S Bergé, Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
- S Vinayahalingam, Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
10. Ni FD, Xu ZN, Liu MQ, Zhang MJ, Li S, Bai HL, Ding P, Fu KY. Towards clinically applicable automated mandibular canal segmentation on CBCT. J Dent 2024; 144:104931. [PMID: 38458378 DOI: 10.1016/j.jdent.2024.104931]
Abstract
OBJECTIVES To develop a deep learning-based system for precise, robust, and fully automated segmentation of the mandibular canal on cone beam computed tomography (CBCT) images. METHODS The system was developed on 536 CBCT scans (training set: 376, validation set: 80, testing set: 80) from one center and validated on an external dataset of 89 CBCT scans from 3 centers. Each scan was annotated using a multi-stage annotation method and refined by oral and maxillofacial radiologists. We proposed a three-step strategy for the mandibular canal segmentation: extraction of the region of interest based on 2D U-Net, global segmentation of the mandibular canal, and segmentation refinement based on 3D U-Net. RESULTS The system consistently achieved accurate mandibular canal segmentation in the internal set (Dice similarity coefficient [DSC], 0.952; intersection over union [IoU], 0.912; average symmetric surface distance [ASSD], 0.046 mm; 95% Hausdorff distance [HD95], 0.325 mm) and the external set (DSC, 0.960; IoU, 0.924; ASSD, 0.040 mm; HD95, 0.288 mm). CONCLUSIONS These results demonstrated the potential clinical application of this AI system in facilitating clinical workflows related to mandibular canal localization. CLINICAL SIGNIFICANCE Accurate delineation of the mandibular canal on CBCT images is critical for implant placement, mandibular third molar extraction, and orthognathic surgery. This AI system enables accurate segmentation across different models, which could contribute to more efficient and precise dental automation systems.
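The overlap metrics reported above (DSC and IoU) compare a predicted segmentation mask against the ground truth. A minimal illustrative sketch using voxel-coordinate sets, with toy masks that are not data from the study:

```python
# Dice similarity coefficient (DSC) and intersection-over-union (IoU)
# between two binary segmentation masks, represented as sets of voxel
# coordinates. Toy masks for illustration only.

def dice(a: set, b: set) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def iou(a: set, b: set) -> float:
    """IoU = |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

# Toy example: two partially overlapping runs of canal voxels.
pred = {(0, 0, z) for z in range(10)}       # predicted canal voxels
truth = {(0, 0, z) for z in range(2, 12)}   # ground-truth canal voxels
print(round(dice(pred, truth), 3), round(iou(pred, truth), 3))
```

The surface-distance metrics in the abstract (ASSD, HD95) additionally measure how far the boundaries of the two masks lie apart, which overlap ratios alone do not capture.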
Affiliation(s)
- Fang-Duan Ni, Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China
- Mu-Qing Liu, Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China
- Min-Juan Zhang, Second Dental Center, Peking University Hospital of Stomatology, Beijing 100101, China
- Shu Li, Department of Stomatology, Beijing Hospital, Beijing 100005, China
- Kai-Yuan Fu, Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China
11
|
Bhatnagar A, Kekatpure AL, Velagala VR, Kekatpure A. A Review on the Use of Artificial Intelligence in Fracture Detection. Cureus 2024; 16:e58364. [PMID: 38756254 PMCID: PMC11097122 DOI: 10.7759/cureus.58364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2023] [Accepted: 04/16/2024] [Indexed: 05/18/2024] Open
Abstract
Artificial intelligence (AI) simulates intelligent behavior using computers with minimal human intervention. Recent advances in AI, especially deep learning, have made significant progress in perceptual tasks, enabling computers to comprehend complicated input more accurately. Fractures affect people of all ages worldwide. Overlooked fractures on radiographs taken in the emergency room are one of the most prevalent causes of inaccurate diagnosis and medical lawsuits, with miss rates ranging from 2% to 9%. The workforce will soon be under a great deal of strain due to the growing demand for fracture detection on multiple imaging modalities. A dearth of radiologists, driven by hiring delays and a significant percentage of radiologists nearing retirement, worsens this rise in demand. Additionally, the process of interpreting diagnostic images can be challenging and tedious. Integrating orthopedic radiodiagnosis with AI presents a promising solution to these problems. There has recently been a noticeable rise in the application of deep learning techniques, namely convolutional neural networks (CNNs), in medical imaging. In orthopedic trauma, CNNs have been documented to operate at the proficiency of expert orthopedic surgeons and radiologists in identifying and categorizing fractures, and they can analyze vast amounts of data at a rate that surpasses human observation. In this review, we discuss the use of deep learning methods in fracture detection and classification, the integration of AI with various imaging modalities, and the benefits and disadvantages of integrating AI with radiodiagnostics.
Affiliation(s)
- Aayushi Bhatnagar
- Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Aditya L Kekatpure
- Orthopedic Surgery, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Vivek R Velagala
- Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Aashay Kekatpure
- Orthopedic Surgery, Narendra Kumar Prasadrao Salve Institute of Medical Sciences and Research, Nagpur, IND
12
Gontarz M, Bargiel J, Gąsiorowski K, Marecik T, Szczurowski P, Zapała J, Wyszyńska-Pawelec G. "Air Sign" in Misdiagnosed Mandibular Fractures Based on CT and CBCT Evaluation. Diagnostics (Basel) 2024; 14:362. [PMID: 38396403 PMCID: PMC10888197 DOI: 10.3390/diagnostics14040362] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2024] [Revised: 01/29/2024] [Accepted: 02/05/2024] [Indexed: 02/25/2024] Open
Abstract
BACKGROUND Diagnostic errors constitute one of the reasons for the improper and often delayed treatment of mandibular fractures. The aim of this study was to present a series of cases involving undiagnosed concomitant secondary fractures of the mandibular body during preoperative diagnostics. Additionally, this study aimed to describe the "air sign" as an indirect indicator of a mandibular body fracture. METHODS A retrospective analysis of CT/CBCT scans conducted before surgery was performed on patients misdiagnosed with a mandibular body fracture within a one-year period. RESULTS Among the 75 patients who underwent surgical treatment for mandibular fractures, mandibular body fractures were missed in 3 cases (4%) before surgery. The analysis of CT/CBCT before surgery revealed the presence of an air collection, termed the "air sign", in the soft tissue adjacent to each misdiagnosed fracture of the mandibular body. CONCLUSIONS The "air sign" on a CT/CBCT scan may serve as an additional indirect indication of a fracture in the mandibular body. Its presence should prompt the surgeon to conduct a more thorough clinical examination of the patient under general anesthesia after completing the ORIF procedure in order to rule out additional fractures.
Affiliation(s)
- Michał Gontarz
- Department of Cranio-Maxillofacial Surgery, Jagiellonian University Medical College, 30-688 Cracow, Poland; (J.B.); (K.G.); (T.M.); (P.S.); (J.Z.); (G.W.-P.)
13
Pham TD, Holmes SB, Coulthard P. A review on artificial intelligence for the diagnosis of fractures in facial trauma imaging. Front Artif Intell 2024; 6:1278529. [PMID: 38249794 PMCID: PMC10797131 DOI: 10.3389/frai.2023.1278529] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2023] [Accepted: 12/11/2023] [Indexed: 01/23/2024] Open
Abstract
Patients with facial trauma may suffer injuries such as broken bones, bleeding, swelling, bruising, lacerations, burns, and facial deformity. Common causes of facial-bone fractures are road accidents, violence, and sports injuries. Surgery is needed if radiological findings indicate that the patient would otherwise lose normal function or be left with facial deformity. Although image reading by radiologists is useful for evaluating suspected facial fractures, human-based diagnostics faces certain challenges. Artificial intelligence (AI) is making a quantum leap in radiology, producing significant improvements in reporting and workflows. Here, an updated literature review is presented on the impact of AI in facial trauma, with special reference to fracture detection in radiology. The purpose is to gain insight into current developments and the demand for future research in facial trauma. This review also discusses limitations to be overcome and important open issues for investigation in order to make AI applications to facial trauma more effective and realistic in practical settings. The publications selected for review were chosen based on their clinical significance, journal metrics, and journal indexing.
Affiliation(s)
- Tuan D. Pham
- Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
14
Morita D, Kawarazaki A, Koimizu J, Tsujiko S, Soufi M, Otake Y, Sato Y, Numajiri T. Automatic orbital segmentation using deep learning-based 2D U-net and accuracy evaluation: A retrospective study. J Craniomaxillofac Surg 2023; 51:609-613. [PMID: 37813770 DOI: 10.1016/j.jcms.2023.09.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2023] [Revised: 05/25/2023] [Accepted: 09/05/2023] [Indexed: 10/11/2023] Open
Abstract
The purpose of this study was to verify whether the accuracy of automatic segmentation (AS) of computed tomography (CT) images of fractured orbits using deep learning (DL) is sufficient for clinical application. In the surgery of orbital fractures, many methods have been reported to create a 3D anatomical model for use as a reference. However, because the orbit bone is thin and complex, creating a segmentation model for 3D printing is complicated and time-consuming. Here, the training of DL was performed using U-Net as the DL model, and the AS output was validated with Dice coefficients and average symmetry surface distance (ASSD). In addition, the AS output was 3D printed and evaluated for accuracy by four surgeons, each with over 15 years of clinical experience. One hundred twenty-five CT images were prepared, and manual orbital segmentation was performed in all cases. Ten orbital fracture cases were randomly selected as validation data, and the remaining 115 were set as training data. AS was successful in all cases, with good accuracy: Dice, 0.860 ± 0.033 (mean ± SD); ASSD, 0.713 ± 0.212 mm. In evaluating AS accuracy, the expert surgeons generally considered that it could be used for surgical support without further modification. The orbital AS algorithm developed using DL in this study is extremely accurate and can create 3D models rapidly at low cost, potentially enabling safer and more accurate surgeries.
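The ASSD figure quoted above averages nearest-neighbour distances between the two segmentation surfaces in both directions; a minimal sketch, assuming each surface is given as a list of (x, y, z) points in millimetres (hypothetical helper, not the study's code):

```python
import math

def assd(surface_a, surface_b):
    """Average symmetric surface distance between two surfaces,
    each given as a list of (x, y, z) points in mm."""
    def avg_min_dist(src, dst):
        # mean distance from each point of src to its nearest point in dst
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (avg_min_dist(surface_a, surface_b) +
                  avg_min_dist(surface_b, surface_a))

a = [(0, 0, 0), (1, 0, 0)]
b = [(0, 0, 1), (1, 0, 1)]  # same surface shifted 1 mm along z
print(assd(a, b))  # 1.0
```

On this scale, the reported ASSD of 0.713 mm means the predicted orbital surface lies, on average, well under a millimetre from the manual annotation.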
Affiliation(s)
- Daiki Morita
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Ayako Kawarazaki
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Jungen Koimizu
- Department of Plastic and Reconstructive Surgery, Omihachiman Community Medical Center, Shiga, Japan
- Shoko Tsujiko
- Department of Plastic and Reconstructive Surgery, Saiseikai Shigaken Hospital, Shiga, Japan
- Mazen Soufi
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Yoshito Otake
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Yoshinobu Sato
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Toshiaki Numajiri
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
15
Diba SF, Sari DCR, Supriatna Y, Ardiyanto I, Bintoro BS. Artificial intelligence in detecting dentomaxillofacial fractures in diagnostic imaging: a scoping review protocol. BMJ Open 2023; 13:e071324. [PMID: 37553193 PMCID: PMC10414106 DOI: 10.1136/bmjopen-2022-071324] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/24/2022] [Accepted: 07/18/2023] [Indexed: 08/10/2023] Open
Abstract
INTRODUCTION The dentomaxillofacial (DMF) area, which includes the teeth, maxilla, mandible, zygomaticum, orbits and midface, plays a crucial role in the maintenance of physiological functions despite its susceptibility to fractures, which are mostly caused by mechanical trauma. As a diagnostic tool, radiographic imaging helps clinicians establish a diagnosis and determine a treatment plan; however, human factors in image interpretation can result in missed detection of fractures. Therefore, artificial intelligence (AI) computing systems with the potential to help detect abnormalities on radiographic images are currently being developed. This scoping review summarises the literature and assesses the current status of AI in DMF fracture detection in diagnostic imaging. METHODS AND ANALYSIS This proposed scoping review will be conducted using the framework of Arksey and O'Malley, with each step incorporating the recommendations of Levac et al. Relevant keywords based on the research questions will be used to search PubMed, Science Direct, Scopus, the Cochrane Library, SpringerLink, the Institute of Electrical and Electronics Engineers, and ProQuest databases. Included studies will be those published in English between 1 January 2000 and 30 June 2023. Two independent reviewers will screen titles and abstracts, followed by full-text screening and data extraction, which will comprise three components: research study characteristics, comparator, and AI characteristics. ETHICS AND DISSEMINATION This study does not require ethical approval because it analyses primary research articles. The research findings will be distributed through international conferences and peer-reviewed publications.
Affiliation(s)
- Silviana Farrah Diba
- Doctorate Program of Medical and Health Science, Gadjah Mada University Faculty of Medicine Public Health and Nursing, Yogyakarta, Indonesia
- Department of Dentomaxillofacial Radiology, Gadjah Mada University Faculty of Dentistry, Yogyakarta, Indonesia
- Dwi Cahyani Ratna Sari
- Department of Anatomy, Gadjah Mada University Faculty of Medicine Public Health and Nursing, Yogyakarta, Indonesia
- Yana Supriatna
- Department of Radiology, Gadjah Mada University Faculty of Medicine Public Health and Nursing, Yogyakarta, Indonesia
- Radiological Installation, Public Hospital Dr Sardjito, Yogyakarta, Indonesia
- Igi Ardiyanto
- Department of Electrical Engineering and Information Technology, Gadjah Mada University Faculty of Engineering, Yogyakarta, Indonesia
- Bagas Suryo Bintoro
- Department of Health Behaviour, Environment, and Social Medicine, Gadjah Mada University Faculty of Medicine Public Health and Nursing, Yogyakarta, Indonesia
- Center of Health Behavior and Promotion, Gadjah Mada University Faculty of Medicine Public Health and Nursing, Yogyakarta, Indonesia
16
Kim T, Moon NH, Goh TS, Jung ID. Detection of incomplete atypical femoral fracture on anteroposterior radiographs via explainable artificial intelligence. Sci Rep 2023; 13:10415. [PMID: 37369833 PMCID: PMC10300092 DOI: 10.1038/s41598-023-37560-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Accepted: 06/23/2023] [Indexed: 06/29/2023] Open
Abstract
One of the key aspects of the diagnosis and treatment of atypical femoral fractures is the early detection of incomplete fractures and the prevention of their progression to complete fractures. However, an incomplete atypical femoral fracture can be misdiagnosed as a normal lesion by both primary care physicians and orthopedic surgeons, and expert consultation is needed for accurate diagnosis. To overcome this limitation, we developed a transfer learning-based ensemble model to detect and localize fractures. A total of 1050 radiographs, including 100 incomplete fractures, were preprocessed by applying a Sobel filter. Six models (EfficientNet B5, B6, B7, DenseNet 121, MobileNet V1, and V2) were selected for transfer learning. We then composed two ensemble models: the first based on the three models with the highest accuracy, and the second based on the five models with the highest accuracy. The ensemble of the three most accurate models achieved the highest area under the curve (AUC), 0.998. This study demonstrates that an ensemble of transfer-learning-based models can accurately classify and detect fractures, even in an imbalanced dataset. This artificial intelligence (AI)-assisted diagnostic application could support decision-making and reduce the workload of clinicians with its high speed and accuracy.
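The top-k ensembling described above can be sketched as simple soft voting over the most accurate backbones; the probabilities and validation accuracies below are hypothetical, and the paper's exact combination rule may differ:

```python
def ensemble_probs(model_probs, model_accs, k=3):
    """Average the predicted fracture probabilities of the k models
    with the highest validation accuracy (soft-voting ensemble)."""
    ranked = sorted(zip(model_accs, model_probs), reverse=True)[:k]
    probs = [p for _, p in ranked]
    return [sum(col) / len(col) for col in zip(*probs)]

# Hypothetical per-image probabilities from 5 backbones on 2 radiographs
probs = [[0.9, 0.2], [0.8, 0.3], [0.7, 0.1], [0.4, 0.6], [0.5, 0.5]]
accs = [0.95, 0.93, 0.90, 0.80, 0.85]
print(ensemble_probs(probs, accs))  # averages the three most accurate models
```

Soft voting tends to smooth out the idiosyncratic errors of individual backbones, which is one plausible reason the three-model ensemble reached the highest AUC.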
Affiliation(s)
- Taekyeong Kim
- Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, Republic of Korea
- Nam Hoon Moon
- Department of Orthopaedic Surgery, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Busan, 49241, Republic of Korea
- Tae Sik Goh
- Department of Orthopaedic Surgery, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Busan, 49241, Republic of Korea
- Im Doo Jung
- Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, Republic of Korea
17
Tong Y, Jie B, Wang X, Xu Z, Ding P, He Y. Is Convolutional Neural Network Accurate for Automatic Detection of Zygomatic Fractures on Computed Tomography? J Oral Maxillofac Surg 2023:S0278-2391(23)00393-2. [PMID: 37217163 DOI: 10.1016/j.joms.2023.04.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Revised: 03/29/2023] [Accepted: 04/23/2023] [Indexed: 05/24/2023]
Abstract
PURPOSE Zygomatic fractures involve complex anatomical structures of the mid-face, and their diagnosis can be challenging and labor-consuming. This research aimed to evaluate the performance of an automatic algorithm for detecting zygomatic fractures based on a convolutional neural network (CNN) on spiral computed tomography (CT). MATERIALS AND METHODS We designed a cross-sectional retrospective diagnostic trial study. Clinical records and CT scans of patients with zygomatic fractures were reviewed. The sample consisted of patients with positive or negative zygomatic fracture status treated at Peking University School of Stomatology from 2013 to 2019. All CT samples were randomly divided into training, validation, and test sets at a ratio of 6:2:2. All CT scans were viewed and annotated by three experienced maxillofacial surgeons, serving as the gold standard. The algorithm consisted of two modules: (1) segmentation of the zygomatic region on CT based on U-Net, a type of CNN; and (2) fracture detection based on Deep Residual Network 34 (ResNet-34). The region segmentation model was used first to detect and extract the zygomatic region, and the detection model was then used to determine the fracture status. The Dice coefficient was used to evaluate the performance of the segmentation algorithm; sensitivity and specificity were used to assess the performance of the detection model. The covariates included age, gender, duration of injury, and the etiology of fractures. RESULTS A total of 379 patients with an average age of 35.43 ± 12.74 years were included: 203 without fractures and 176 with fractures, comprising 220 zygomatic fracture sites (44 patients had bilateral fractures). The Dice coefficients between the zygomatic region segmentation model and the manually labeled gold standard were 0.9337 (coronal plane) and 0.9269 (sagittal plane). The sensitivity and specificity of the fracture detection model were both 100% (p > .05). CONCLUSION The performance of the CNN-based algorithm was not statistically different from that of the gold standard (manual diagnosis) for zygomatic fracture detection, supporting its potential clinical application.
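The sensitivity and specificity reported for the detection model follow the standard confusion-matrix definitions; a quick sketch with hypothetical test-set counts:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: no false negatives and no false positives
sens, spec = sensitivity_specificity(tp=35, fn=0, tn=40, fp=0)
print(sens, spec)  # 1.0 1.0 - every fracture and every non-fracture correct
```

Reported values of 100% for both measures imply zero false negatives and zero false positives on the held-out test set.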
Affiliation(s)
- Yanhang Tong
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China
- Bimeng Jie
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China
- Xuebing Wang
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China
- Yang He
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China
18
Maxillofacial fracture detection and classification in computed tomography images using convolutional neural network-based models. Sci Rep 2023; 13:3434. [PMID: 36859660 PMCID: PMC9978019 DOI: 10.1038/s41598-023-30640-w] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2022] [Accepted: 02/27/2023] [Indexed: 03/03/2023] Open
Abstract
The purpose of this study was to evaluate the performance of convolutional neural network-based models for the detection and classification of maxillofacial fractures in computed tomography (CT) maxillofacial bone window images. A total of 3407 CT images, 2407 of which contained maxillofacial fractures, were retrospectively obtained from the regional trauma center from 2016 to 2020. Multiclass image classification models were created by using DenseNet-169 and ResNet-152. Multiclass object detection models were created by using faster R-CNN and YOLOv5. DenseNet-169 and ResNet-152 were trained to classify maxillofacial fractures into frontal, midface, mandibular and no fracture classes. Faster R-CNN and YOLOv5 were trained to automate the placement of bounding boxes to specifically detect fracture lines in each fracture class. The performance of each model was evaluated on an independent test dataset. The overall accuracy of the best multiclass classification model, DenseNet-169, was 0.70. The mean average precision of the best multiclass detection model, faster R-CNN, was 0.78. In conclusion, DenseNet-169 and faster R-CNN have potential for the detection and classification of maxillofacial fractures in CT images.
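The mean-average-precision evaluation of the detection models (faster R-CNN, YOLOv5) rests on bounding-box IoU between predicted and annotated fracture boxes; a minimal sketch with hypothetical boxes:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(box_iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 ≈ 0.333
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a fixed threshold (commonly 0.5), and average precision is then computed per fracture class.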
19
Hung KF, Yeung AWK, Bornstein MM, Schwendicke F. Personalized dental medicine, artificial intelligence, and their relevance for dentomaxillofacial imaging. Dentomaxillofac Radiol 2023; 52:20220335. [PMID: 36472627 PMCID: PMC9793453 DOI: 10.1259/dmfr.20220335] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2022] [Revised: 11/08/2022] [Accepted: 11/11/2022] [Indexed: 12/12/2022] Open
Abstract
Personalized medicine refers to the tailoring of diagnostics and therapeutics to individuals based on one's biological, social, and behavioral characteristics. While personalized dental medicine is still far from being a reality, advanced artificial intelligence (AI) technologies with improved data analytic approaches are expected to integrate diverse data from the individual, setting, and system levels, which may facilitate a deeper understanding of the interaction of these multilevel data and therefore bring us closer to more personalized, predictive, preventive, and participatory dentistry, also known as P4 dentistry. In the field of dentomaxillofacial imaging, a wide range of AI applications, including several commercially available software options, have been proposed to assist dentists in the diagnosis and treatment planning of various dentomaxillofacial diseases, with performance similar or even superior to that of specialists. Notably, the impact of these dental AI applications on treatment decision, clinical and patient-reported outcomes, and cost-effectiveness has so far been assessed sparsely. Such information should be further investigated in future studies to provide patients, providers, and healthcare organizers a clearer picture of the true usefulness of AI in daily dental practice.
Affiliation(s)
- Kuo Feng Hung
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Andy Wai Kan Yeung
- Division of Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Michael M. Bornstein
- Department of Oral Health & Medicine, University Center for Dental Medicine Basel UZB, University of Basel, Basel, Switzerland
- Falk Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, Berlin, Germany
20
Hung KF, Ai QYH, Wong LM, Yeung AWK, Li DTS, Leung YY. Current Applications of Deep Learning and Radiomics on CT and CBCT for Maxillofacial Diseases. Diagnostics (Basel) 2022; 13:110. [PMID: 36611402 PMCID: PMC9818323 DOI: 10.3390/diagnostics13010110] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 12/23/2022] [Accepted: 12/24/2022] [Indexed: 12/31/2022] Open
Abstract
The increasing use of computed tomography (CT) and cone beam computed tomography (CBCT) in oral and maxillofacial imaging has driven the development of deep learning and radiomics applications to assist clinicians in early diagnosis, accurate prognosis prediction, and efficient treatment planning of maxillofacial diseases. This narrative review aimed to provide an up-to-date overview of the current applications of deep learning and radiomics on CT and CBCT for the diagnosis and management of maxillofacial diseases. Based on current evidence, a wide range of deep learning models on CT/CBCT images have been developed for automatic diagnosis, segmentation, and classification of jaw cysts and tumors, cervical lymph node metastasis, salivary gland diseases, temporomandibular (TMJ) disorders, maxillary sinus pathologies, mandibular fractures, and dentomaxillofacial deformities, while CT-/CBCT-derived radiomics applications mainly focused on occult lymph node metastasis in patients with oral cancer, malignant salivary gland tumors, and TMJ osteoarthritis. Most of these models showed high performance, and some of them even outperformed human experts. The models with performance on par with human experts have the potential to serve as clinically practicable tools to achieve the earliest possible diagnosis and treatment, leading to a more precise and personalized approach for the management of maxillofacial diseases. Challenges and issues, including the lack of the generalizability and explainability of deep learning models and the uncertainty in the reproducibility and stability of radiomic features, should be overcome to gain the trust of patients, providers, and healthcare organizers for daily clinical use of these models.
Affiliation(s)
- Kuo Feng Hung
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Qi Yong H. Ai
- Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Lun M. Wong
- Imaging and Interventional Radiology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Andy Wai Kan Yeung
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Dion Tik Shun Li
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Yiu Yan Leung
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
21
Artificial Intelligence (AI) for Fracture Diagnosis: An Overview of Current Products and Considerations for Clinical Adoption, From the AJR Special Series on AI Applications. AJR Am J Roentgenol 2022; 219:869-878. [PMID: 35731103 DOI: 10.2214/ajr.22.27873] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Fractures are common injuries that can be difficult to diagnose, with missed fractures accounting for most misdiagnoses in the emergency department. Artificial intelligence (AI) and, specifically, deep learning have shown a strong ability to accurately detect fractures and augment the performance of radiologists in proof-of-concept research settings. Although the number of real-world AI products available for clinical use continues to increase, guidance for practicing radiologists in the adoption of this new technology is limited. This review describes how AI and deep learning algorithms can help radiologists to better diagnose fractures. The article also provides an overview of commercially available U.S. FDA-cleared AI tools for fracture detection as well as considerations for the clinical adoption of these tools by radiology practices.
22
Son DM, Yoon YA, Kwon HJ, Lee SH. Combined Deep Learning Techniques for Mandibular Fracture Diagnosis Assistance. Life (Basel) 2022; 12:1711. [PMID: 36362866 PMCID: PMC9697461 DOI: 10.3390/life12111711] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Revised: 10/19/2022] [Accepted: 10/25/2022] [Indexed: 04/05/2024] Open
Abstract
Mandibular fractures are the most common fractures in dentistry. Since diagnosing a mandibular fracture is difficult when only panoramic radiographic images are used, most doctors use cone beam computed tomography (CBCT) to identify the fracture location. In this study, a combined deep learning technique using YOLO and U-Net was applied as an auxiliary diagnostic method to detect the location of mandibular fractures from panoramic images alone, without CBCT. In a previous study, mandibular fracture diagnosis was performed using YOLO alone; the YOLOv4-based diagnosis module achieved a precision score of approximately 97%, indicating almost no misdiagnosis. However, fractures in the symphysis, body, angle, and ramus tend to be distributed in the middle of the mandible, and owing to irregular fracture types and overlapping location information, the recall score was only approximately 79%, leaving a considerable number of fractures undetected; in many cases, even fractures clearly visible to the human eye were missed. To overcome this shortcoming, the number of undiagnosed fractures can be reduced by combining the U-Net and YOLOv4 learning modules. Because U-Net performs semantic segmentation, it is advantageous for fractures spread over a wide area. Consequently, undiagnosed cases in the middle of the mandible, where YOLO was weak, were partly recovered by the U-Net module. The precision score of the combined module was 95%, similar to that of the previous method, and the recall score improved to 87% as the number of undiagnosed cases was reduced. This study improves the performance of a deep learning method for mandibular fracture diagnosis, and as an auxiliary diagnostic tool it is anticipated to assist dentists in making diagnoses.
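The YOLO-plus-U-Net combination amounts to a recall-oriented union of two detectors' outputs; one way to sketch such a merge (hypothetical helper names, not the authors' code) is to keep all YOLO boxes and add any U-Net-derived box that does not duplicate an existing detection:

```python
def _iou(a, b):
    # IoU of two axis-aligned boxes (x1, y1, x2, y2)
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def merge_detections(yolo_boxes, unet_boxes, iou_thr=0.5):
    """Keep all YOLO boxes, then add U-Net-derived boxes that do not
    overlap an existing detection (a simple recall-oriented union)."""
    merged = list(yolo_boxes)
    for b in unet_boxes:
        if all(_iou(b, m) < iou_thr for m in merged):
            merged.append(b)
    return merged

yolo = [(10, 10, 30, 30)]
unet = [(12, 11, 29, 31), (50, 50, 70, 70)]  # first duplicates the YOLO box
print(merge_detections(yolo, unet))  # [(10, 10, 30, 30), (50, 50, 70, 70)]
```

A union of this kind can only add detections, which is consistent with the reported recall gain (79% to 87%) at a small cost in precision (97% to 95%).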
Affiliation(s)
- Dong-Min Son
- School of Electronic and Electrical Engineering, Kyungpook National University, 80 Daehakro, Buk-gu, Daegu 41566, Korea
- Yeong-Ah Yoon
- School of Dentistry, Kyungpook National University, 2177 Dalgubeol-daero, Jung-gu, Daegu 41940, Korea
- Hyuk-Ju Kwon
- School of Electronic and Electrical Engineering, Kyungpook National University, 80 Daehakro, Buk-gu, Daegu 41566, Korea
- Sung-Hak Lee
- School of Electronic and Electrical Engineering, Kyungpook National University, 80 Daehakro, Buk-gu, Daegu 41566, Korea
23
Meena T, Roy S. Bone Fracture Detection Using Deep Supervised Learning from Radiological Images: A Paradigm Shift. Diagnostics (Basel) 2022; 12:diagnostics12102420. [PMID: 36292109 PMCID: PMC9600559 DOI: 10.3390/diagnostics12102420] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Revised: 10/04/2022] [Accepted: 10/05/2022] [Indexed: 01/16/2023] Open
Abstract
Bone diseases are common and can result in various musculoskeletal conditions (MC). An estimated 1.71 billion people suffer from musculoskeletal problems worldwide. Femoral neck injuries, knee osteoarthritis, and fractures in general are very common bone diseases, and their incidence is expected to double in the next 30 years. Proper and timely diagnosis and treatment of fracture patients are therefore crucial. Yet missed fractures remain a common diagnostic failure in accident and emergency settings, causing complications and delays in patients’ treatment and care. Artificial intelligence (AI), and more specifically deep learning (DL), is now receiving significant attention as a way to assist radiologists in bone fracture detection, and DL can be widely applied in medical image analysis. Several studies in traumatology and orthopaedics have shown the use and potential of DL in diagnosing fractures and diseases from radiographs. In this systematic review, we provide an overview of the use of DL in bone imaging to help radiologists detect various abnormalities, particularly fractures. We also discuss the challenges faced by DL-based methods and the future of DL in bone imaging.
Collapse
|
24
|
Artificial Intelligence in Orthopedic Radiography Analysis: A Narrative Review. Diagnostics (Basel) 2022; 12:diagnostics12092235. [PMID: 36140636 PMCID: PMC9498096 DOI: 10.3390/diagnostics12092235] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Revised: 09/12/2022] [Accepted: 09/13/2022] [Indexed: 11/17/2022] Open
Abstract
Artificial intelligence (AI) in medicine is a rapidly growing field. In orthopedics, clinical implementations of AI have not yet reached their full potential. Deep learning algorithms have shown promising results on computed radiographs for fracture detection, classification of osteoarthritis (OA), bone age assessment, and automated measurements of the lower extremities. Studies comparing the performance of AI with that of trained human readers often report equal or better results, although human validation remains indispensable at current standards. The objective of this narrative review is to give an overview of AI in medicine and summarize its current applications in orthopedic radiography imaging. Owing to heterogeneous AI software and study designs, it is difficult to compare results across studies. To produce more homogeneous studies, open-source access to AI software code and a consensus on study design should be pursued.
Collapse
|
25
|
Application Value of the CT Scan 3D Reconstruction Technique in Maxillofacial Fracture Patients. EVIDENCE-BASED COMPLEMENTARY AND ALTERNATIVE MEDICINE 2022; 2022:1643434. [PMID: 35845575 PMCID: PMC9283051 DOI: 10.1155/2022/1643434] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Revised: 05/09/2022] [Accepted: 05/17/2022] [Indexed: 11/17/2022]
Abstract
Purpose The aim of the study was to explore the application value of computerized tomography (CT) scan 3D reconstruction technology in maxillofacial fracture patients. Methods A total of 80 maxillofacial fracture patients who underwent surgical treatment in Shijiazhuang People's Hospital from January 2019 to January 2020 were enrolled. All of them received 128-slice spiral CT scans before surgery, and the images were subjected to multiplanar reconstruction (MPR) and volume reconstruction (VR). Results A total of 181 fractures were found in the 80 patients with maxillofacial fractures. The detection rates of axial CT, MPR, and VR were 77.90% (141/181), 93.92% (170/181), and 97.79% (177/181), respectively, and the differences among these methods were statistically significant. Taking the findings of surgical anatomy as the gold standard, the sensitivity of MPR and VR for the diagnosis of maxillofacial fractures was 90.06% (163/170) and 95.56% (174/177), respectively, with no significant difference between them. Conclusion CT scan 3D reconstruction technology has high application value in the clinical diagnosis and treatment of maxillofacial fracture patients.
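The detection rates reported above are simple proportions of the 181 surgically confirmed fractures. A minimal sketch reproducing the stated percentages from the published counts:

```python
def rate(hits, total):
    """Fraction of fractures identified (detection rate against a known total)."""
    if total <= 0:
        raise ValueError("total must be positive")
    return hits / total

# Counts reported in the abstract, out of 181 surgically confirmed fractures:
axial_ct = rate(141, 181)  # axial CT
mpr = rate(170, 181)       # multiplanar reconstruction
vr = rate(177, 181)        # volume reconstruction
```

Checking these against the abstract: 141/181 ≈ 77.90%, 170/181 ≈ 93.92%, and 177/181 ≈ 97.79%, matching the reported detection rates.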
Collapse
|