1
Hendrickx J, Gracea RS, Vanheers M, Winderickx N, Preda F, Shujaat S, Jacobs R. Can artificial intelligence-driven cephalometric analysis replace manual tracing? A systematic review and meta-analysis. Eur J Orthod 2024; 46:cjae029. PMID: 38895901; PMCID: PMC11185929; DOI: 10.1093/ejo/cjae029.
Abstract
OBJECTIVES This systematic review and meta-analysis aimed to investigate the accuracy and efficiency of artificial intelligence (AI)-driven automated landmark detection for cephalometric analysis on two-dimensional (2D) lateral cephalograms and three-dimensional (3D) cone-beam computed tomographic (CBCT) images. SEARCH METHODS An electronic search was conducted in the following databases: PubMed, Web of Science, Embase, and grey literature, with the search timeline extending to January 2024. SELECTION CRITERIA Studies that employed AI for 2D or 3D cephalometric landmark detection were included. DATA COLLECTION AND ANALYSIS The selection of studies, data extraction, and quality assessment of the included studies were performed independently by two reviewers. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. A meta-analysis was conducted to evaluate the accuracy of 2D landmark identification based on both mean radial error and standard error. RESULTS Following the removal of duplicates, title and abstract screening, and full-text reading, 34 publications were selected. Amongst these, 27 studies evaluated the accuracy of AI-driven automated landmarking on 2D lateral cephalograms, while 7 studies involved 3D-CBCT images. A meta-analysis, based on the success detection rate of landmark placement on 2D images, revealed that the error was below the clinically acceptable threshold of 2 mm (1.39 mm; 95% confidence interval: 0.85-1.92 mm). For 3D images, a meta-analysis could not be conducted due to significant heterogeneity amongst the study designs. However, qualitative synthesis indicated that the mean error of landmark detection on 3D images ranged from 1.0 to 5.8 mm. Both automated 2D and 3D landmarking proved to be time-efficient, taking less than 1 min. Most studies exhibited a high risk of bias in data selection (n = 27) and reference standard (n = 29).
CONCLUSION The performance of AI-driven cephalometric landmark detection on both 2D cephalograms and 3D-CBCT images showed potential in terms of accuracy and time efficiency. However, the generalizability and robustness of these AI systems could benefit from further improvement. REGISTRATION PROSPERO: CRD42022328800.
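Two accuracy metrics recur throughout this listing: the mean radial error between predicted and reference landmarks, and the success detection rate at a clinical threshold such as 2 mm. A minimal sketch of both, using invented coordinates rather than data from any of the studies:

```python
import math

def radial_error(pred, ref):
    """Euclidean distance between a predicted and a reference landmark (2D or 3D)."""
    return math.dist(pred, ref)

def mean_radial_error(preds, refs):
    """Average radial error over all landmark pairs, in the coordinate unit (e.g. mm)."""
    errors = [radial_error(p, r) for p, r in zip(preds, refs)]
    return sum(errors) / len(errors)

def success_detection_rate(preds, refs, threshold_mm=2.0):
    """Fraction of landmarks whose radial error falls within the clinical threshold."""
    errors = [radial_error(p, r) for p, r in zip(preds, refs)]
    return sum(e <= threshold_mm for e in errors) / len(errors)

# Illustrative landmark coordinates in mm (not from any study in this list)
preds = [(10.0, 20.0), (31.5, 40.0), (55.0, 63.0)]
refs  = [(10.5, 20.0), (30.0, 40.0), (55.0, 60.0)]
print(mean_radial_error(preds, refs))       # mean of the errors 0.5, 1.5, 3.0 mm
print(success_detection_rate(preds, refs))  # 2 of 3 landmarks within 2 mm
```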
Affiliation(s)
Julie Hendrickx
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
Rellyca Sola Gracea
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia
Michiel Vanheers
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
Nicolas Winderickx
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
Flavia Preda
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
Sohaib Shujaat
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh 14611, Kingdom of Saudi Arabia
Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Department of Dental Medicine, Karolinska Institutet, 141 04 Stockholm, Sweden
2
Sahlsten J, Järnstedt J, Jaskari J, Naukkarinen H, Mahasantipiya P, Charuakkra A, Vasankari K, Hietanen A, Sundqvist O, Lehtinen A, Kaski K. Deep learning for 3D cephalometric landmarking with heterogeneous multi-center CBCT dataset. PLoS One 2024; 19:e0305947. PMID: 38917161; PMCID: PMC11198780; DOI: 10.1371/journal.pone.0305947.
Abstract
Cephalometric analysis is a critically important and common procedure prior to orthodontic treatment and orthognathic surgery. Recently, deep learning approaches have been proposed for automatic 3D cephalometric analysis based on landmarking from CBCT scans. However, these approaches have relied on uniform datasets from a single center or imaging device, without considering patient ethnicity. In addition, previous works have considered a limited number of clinically relevant cephalometric landmarks, and the approaches were computationally infeasible; both impair integration into the clinical workflow. Here our aim is to analyze the clinical applicability of a lightweight deep learning neural network for fast localization of 46 clinically significant cephalometric landmarks with multi-center, multi-ethnic, and multi-device data consisting of 309 CBCT scans from Finnish and Thai patients. The localization performance of our approach resulted in a mean distance of 1.99 ± 1.55 mm for the Finnish cohort and 1.96 ± 1.25 mm for the Thai cohort. This performance was within the clinically acceptable range (≤ 2 mm) for 61.7% and 64.3% of the landmarks in the Finnish and Thai cohorts, respectively. Furthermore, the estimated landmarks were used to measure cephalometric characteristics successfully, i.e., with ≤ 2 mm or ≤ 2° error, on 85.9% of the Finnish and 74.4% of the Thai cases. Between the two patient cohorts, 33 of the landmarks and all cephalometric characteristics showed no statistically significant difference, as measured by the Mann-Whitney U test with Benjamini-Hochberg correction. Moreover, our method is computationally light, providing predictions with a mean duration of 0.77 s and 2.27 s with single-machine GPU and CPU computing, respectively. Our findings advocate for the inclusion of this method in clinical settings based on its technical feasibility and robustness across varied clinical datasets.
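The multiple-comparison step mentioned above, Benjamini-Hochberg correction across many landmark-wise tests, is a standard procedure; a minimal sketch follows, with invented p-values rather than the study's data:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return booleans marking which hypotheses are rejected under the
    Benjamini-Hochberg false-discovery-rate procedure."""
    m = len(p_values)
    # Sort p-values ascending, remembering their original positions
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            k_max = rank
    # Reject every hypothesis whose rank is at most k_max
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            rejected[idx] = True
    return rejected

# Illustrative p-values from a batch of hypothetical landmark comparisons
pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(pvals))  # only the two smallest survive at alpha = 0.05
```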
Affiliation(s)
Jaakko Sahlsten
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
Jorma Järnstedt
- Department of Radiology, Tampere University Hospital, Wellbeing Services County of Pirkanmaa, Tampere, Finland
- Faculty of Medicine and Health Technology, University of Tampere, Tampere, Finland
Joel Jaskari
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
Phattaranant Mahasantipiya
- Department of Oral Biology and Diagnostic Sciences, Faculty of Dentistry, Chiang Mai University, Chiang Mai, Thailand
- Division of Oral and Maxillofacial Radiology, Department of Oral Biology and Diagnostic Sciences, Faculty of Dentistry, Chiang Mai University, Chiang Mai, Thailand
Arnon Charuakkra
- Department of Oral Biology and Diagnostic Sciences, Faculty of Dentistry, Chiang Mai University, Chiang Mai, Thailand
- Division of Oral and Maxillofacial Radiology, Department of Oral Biology and Diagnostic Sciences, Faculty of Dentistry, Chiang Mai University, Chiang Mai, Thailand
Krista Vasankari
- Department of Oral Diseases, Tampere University Hospital, Tampere, Finland
Antti Lehtinen
- Department of Radiology, Tampere University Hospital, Wellbeing Services County of Pirkanmaa, Tampere, Finland
- Faculty of Medicine and Health Technology, University of Tampere, Tampere, Finland
Kimmo Kaski
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- The Alan Turing Institute, British Library, London, United Kingdom
3
Wu J, Ma Q, Zhou X, Wei Y, Liu Z, Kang H. Segmentation and quantitative analysis of optical coherence tomography (OCT) images of laser burned skin based on deep learning. Biomed Phys Eng Express 2024; 10:045026. PMID: 38718764; DOI: 10.1088/2057-1976/ad488f.
Abstract
Evaluation of skin recovery is an important step in the treatment of burns. However, conventional methods only observe the surface of the skin and cannot quantify the injury volume. Optical coherence tomography (OCT) is a non-invasive, non-contact, real-time technique. Swept-source OCT uses near-infrared light and analyzes the intensity of the light echo at different depths to generate images from optical interference signals. To quantify the dynamic recovery of skin burns over time, laser-induced skin burns in mice were evaluated using deep learning of swept-source OCT images. A laser-induced mouse skin thermal injury model was established in thirty Kunming mice, and OCT images of normal and burned areas of mouse skin were acquired at day 0, day 1, day 3, day 7, and day 14 after laser irradiation. This resulted in 7000 normal and 1400 burn B-scan images, which were divided into training, validation, and test sets at an 8:1.5:0.5 ratio for the normal data and 8:1:1 for the burn data. Normal images were manually annotated, and the deep learning U-Net model (verified against PSPNet and HRNet models) was used to segment the skin into three layers: the dermal epidermal layer, subcutaneous fat layer, and muscle layer. For the burn images, the models were trained to segment just the damaged area. Three-dimensional reconstruction technology was then used to reconstruct the damaged tissue and calculate the damaged tissue volume. The average IoU value and f-score of the normal tissue layer U-Net segmentation model were 0.876 and 0.934, respectively. The IoU value of the burn area segmentation model reached 0.907, and its f-score reached 0.951. Compared with manual labeling, the U-Net model was faster and more accurate for skin stratification. OCT and U-Net segmentation can provide rapid and accurate analysis of tissue changes and clinical guidance in the treatment of burns.
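The IoU and f-score reported above are standard overlap metrics for segmentation masks; a minimal sketch on toy binary masks (not OCT data):

```python
def iou_and_fscore(pred_mask, true_mask):
    """Intersection-over-union and f-score (Dice) for two binary masks,
    given as flat sequences of 0/1 values of equal length."""
    tp = sum(1 for p, t in zip(pred_mask, true_mask) if p and t)
    fp = sum(1 for p, t in zip(pred_mask, true_mask) if p and not t)
    fn = sum(1 for p, t in zip(pred_mask, true_mask) if t and not p)
    union = tp + fp + fn
    iou = tp / union if union else 1.0            # empty masks agree perfectly
    fscore = 2 * tp / (2 * tp + fp + fn) if union else 1.0
    return iou, fscore

# Tiny illustrative masks (flattened), not real OCT data
pred = [1, 1, 0, 1, 0, 0]
true = [1, 0, 0, 1, 1, 0]
iou, f = iou_and_fscore(pred, true)
print(iou, f)  # overlap 2, union 4 -> IoU 0.5; Dice 4/6
```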
Affiliation(s)
Jingyuan Wu
- Beijing Institute of Radiation Medicine, Beijing 100850, People's Republic of China
- College of Life Sciences, Hebei University, Baoding, Hebei 071002, People's Republic of China
Qiong Ma
- Beijing Institute of Radiation Medicine, Beijing 100850, People's Republic of China
Xun Zhou
- Beijing Institute of Radiation Medicine, Beijing 100850, People's Republic of China
Yu Wei
- Beijing Institute of Radiation Medicine, Beijing 100850, People's Republic of China
- College of Life Sciences, Hebei University, Baoding, Hebei 071002, People's Republic of China
Zhibo Liu
- Beijing Institute of Radiation Medicine, Beijing 100850, People's Republic of China
Hongxiang Kang
- Beijing Institute of Radiation Medicine, Beijing 100850, People's Republic of China
4
Kapila S, Vora SR, Rengasamy Venugopalan S, Elnagar MH, Akyalcin S. Connecting the dots towards precision orthodontics. Orthod Craniofac Res 2023; 26 Suppl 1:8-19. PMID: 37968678; DOI: 10.1111/ocr.12725.
Abstract
Precision orthodontics entails the use of personalized clinical, biological, social and environmental knowledge of each patient for deep individualized clinical phenotyping and diagnosis combined with the delivery of care using advanced customized devices, technologies and biologics. From its historical origins as a mechanotherapy- and materials-driven profession, the most recent advances in orthodontics in the past three decades have been propelled by technological innovations including volumetric and surface 3D imaging and printing, advances in software that facilitate the derivation of diagnostic details, enhanced personalization of treatment plans and fabrication of custom appliances. Still, the use of these diagnostic and therapeutic technologies is largely phenotype driven, focusing mainly on facial/skeletal morphology and tooth positions. Future advances in orthodontics will involve comprehensive understanding of an individual's biology through omics, a field of biology that involves large-scale rapid analyses of DNA, mRNA, proteins and other biological regulators from a cell, tissue or organism. Such understanding will define individual biological attributes that will impact diagnosis, treatment decisions, risk assessment and prognostics of therapy. Equally important are the advances in artificial intelligence (AI) and machine learning, and their applications in orthodontics. AI approaches are already being used and validated for diagnostic purposes such as landmark identification, cephalometric tracings, diagnosis of pathologies and facial phenotyping from radiographs and/or photographs. Other areas for future discoveries and utilization of AI will include clinical decision support, precision orthodontics, payer decisions and risk prediction.
The synergies between deep 3D phenotyping and advances in materials, omics and AI will propel the technological and omics era towards achieving the goal of delivering optimized and predictable precision orthodontics.
Affiliation(s)
Sunil Kapila
- Strategic Initiatives and Operations, UCLA School of Dentistry, Los Angeles, California, USA
Siddharth R Vora
- Oral Health Sciences, University of British Columbia, Vancouver, British Columbia, Canada
Mohammed H Elnagar
- Department of Orthodontics, College of Dentistry, University of Illinois Chicago, Chicago, Illinois, USA
Sercan Akyalcin
- Department of Developmental Biology, Harvard School of Dental Medicine, Boston, Massachusetts, USA
5
Miragall MF, Knoedler S, Kauke-Navarro M, Saadoun R, Grabenhorst A, Grill FD, Ritschl LM, Fichter AM, Safi AF, Knoedler L. Face the Future-Artificial Intelligence in Oral and Maxillofacial Surgery. J Clin Med 2023; 12:6843. PMID: 37959310; PMCID: PMC10649053; DOI: 10.3390/jcm12216843.
Abstract
Artificial intelligence (AI) has emerged as a versatile health-technology tool revolutionizing medical services through the implementation of predictive, preventative, individualized, and participatory approaches. AI encompasses different computational concepts such as machine learning, deep learning techniques, and neural networks. AI also presents a broad platform for improving preoperative planning, intraoperative workflow, and postoperative patient outcomes in the field of oral and maxillofacial surgery (OMFS). The purpose of this review is to present a comprehensive summary of the existing scientific knowledge. The authors thoroughly reviewed English-language PubMed/MEDLINE and Embase papers from the databases' inception to 1 December 2022. The search terms were (1) "OMFS" OR "oral and maxillofacial" OR "oral and maxillofacial surgery" OR "oral surgery" AND (2) "AI" OR "artificial intelligence". The search format was tailored to each database's syntax. To find pertinent material, the reference list of each retrieved article and systematic review was thoroughly examined. According to the literature, AI is already being used in certain areas of OMFS, such as radiographic image quality improvement, diagnosis of cysts and tumors, and localization of cephalometric landmarks. Through additional research, it may be possible to provide practitioners in numerous disciplines with additional assistance to enhance preoperative planning, intraoperative screening, and postoperative monitoring. Overall, AI carries promising potential to advance the field of OMFS and generate novel solution possibilities for persisting clinical challenges. Herein, this review provides a comprehensive summary of AI in OMFS and sheds light on future research efforts. Further, the advanced analysis of complex medical imaging data can support surgeons in preoperative assessments, virtual surgical simulations, and individualized treatment strategies.
AI also assists surgeons during intraoperative decision-making by offering immediate feedback and guidance to enhance surgical accuracy and reduce complication rates, for instance by predicting the risk of bleeding.
Affiliation(s)
Maximilian F. Miragall
- Department of Oral and Maxillofacial Surgery, University Hospital Regensburg, 93053 Regensburg, Germany
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
Samuel Knoedler
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT 06510, USA
Martin Kauke-Navarro
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT 06510, USA
Rakan Saadoun
- Department of Plastic Surgery, University of Pittsburgh, Pittsburgh, PA 15261, USA
Alex Grabenhorst
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
Florian D. Grill
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
Lucas M. Ritschl
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
Andreas M. Fichter
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, 81675 Munich, Germany
Ali-Farid Safi
- Craniologicum, Center for Cranio-Maxillo-Facial Surgery, 3011 Bern, Switzerland
- Faculty of Medicine, University of Bern, 3010 Bern, Switzerland
Leonard Knoedler
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT 06510, USA
- Department of Plastic, Hand and Reconstructive Surgery, University Hospital Regensburg, 93053 Regensburg, Germany
6
Liu J, Zhang C, Shan Z. Application of Artificial Intelligence in Orthodontics: Current State and Future Perspectives. Healthcare (Basel) 2023; 11:2760. PMID: 37893833; PMCID: PMC10606213; DOI: 10.3390/healthcare11202760.
Abstract
In recent years, there has been a notable emergence of artificial intelligence (AI) as a transformative force in multiple domains, including orthodontics. This review aims to provide a comprehensive overview of the present state of AI applications in orthodontics, which can be categorized into the following domains: (1) diagnosis, including cephalometric analysis, dental analysis, facial analysis, skeletal-maturation-stage determination and upper-airway obstruction assessment; (2) treatment planning, including decision making for extractions and orthognathic surgery, and treatment outcome prediction; and (3) clinical practice, including practice guidance, remote care, and clinical documentation. We have witnessed a broadening of the application of AI in orthodontics, accompanied by advancements in its performance. Additionally, this review outlines the existing limitations within the field and offers future perspectives.
Affiliation(s)
Junqi Liu
- Division of Paediatric Dentistry and Orthodontics, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
Chengfei Zhang
- Division of Restorative Dental Sciences, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
Zhiyi Shan
- Division of Paediatric Dentistry and Orthodontics, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
7
Yao K, Xie Y, Xia L, Wei S, Yu W, Shen G. The Reliability of Three-Dimensional Landmark-Based Craniomaxillofacial and Airway Cephalometric Analysis. Diagnostics (Basel) 2023; 13:2360. PMID: 37510103; PMCID: PMC10377994; DOI: 10.3390/diagnostics13142360.
Abstract
Cephalometric analysis is a standard diagnostic tool in orthodontics and craniofacial surgery. Today, as conventional 2D cephalometry is limited and susceptible to analysis bias, a more reliable and user-friendly three-dimensional system that includes hard tissue, soft tissue, and airways is needed in clinical practice. We launched our study to develop such a system based on CT data and landmarks. This study aims to determine whether the data labeled through our process are highly qualified and whether the soft tissue and airway data derived from CT scans are reliable. We enrolled 15 patients (seven males, eight females, 26.47 ± 3.44 years old) diagnosed with either non-syndromic dento-maxillofacial deformities or OSDB in this study to evaluate the intra- and inter-examiner reliability of our system. A total of 126 landmarks were adopted and divided into five sets by region: 28 cranial points, 25 mandibular points, 20 teeth points, 48 soft tissue points, and 6 airway points. All the landmarks were labeled by two experienced clinical practitioners, each of whom labeled all the data twice, at least one month apart. Furthermore, 78 parameters of three sets were calculated in this study: 42 skeletal parameters (23 angular and 19 linear), 27 soft tissue parameters (9 angular and 18 linear), and 9 upper airway parameters (2 linear, 4 areal, and 3 voluminal). The intraclass correlation coefficient (ICC) was used to evaluate the inter-examiner and intra-examiner reliability of landmark coordinate values and measurement parameters. The overwhelming majority of the landmarks showed excellent intra- and inter-examiner reliability. Among skeletal parameters, angular parameters indicated better reliability, while linear parameters performed better among soft tissue parameters. The intra- and inter-examiner ICCs of airway parameters indicated excellent reliability.
In summary, the data labeled through our process are qualified, and the soft tissue and airway data derived from CT scans are reliable. Landmarks that are not commonly used in clinical practice may require additional attention during labeling, as they are prone to poor reliability. Measurement parameters with values close to 0 tend to have low reliability. We believe this three-dimensional cephalometric system can reach clinical application.
Affiliation(s)
Kan Yao
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai 200011, China
- National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, Shanghai 200011, China
Yilun Xie
- Department of Stomatology, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
Liang Xia
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai 200011, China
- National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, Shanghai 200011, China
Silong Wei
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai 200011, China
- National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, Shanghai 200011, China
Wenwen Yu
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai 200011, China
- National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, Shanghai 200011, China
Guofang Shen
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai 200011, China
- National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, Shanghai 200011, China
8
de Queiroz Tavares Borges Mesquita G, Vieira WA, Vidigal MTC, Travençolo BAN, Beaini TL, Spin-Neto R, Paranhos LR, de Brito Júnior RB. Artificial Intelligence for Detecting Cephalometric Landmarks: A Systematic Review and Meta-analysis. J Digit Imaging 2023; 36:1158-1179. PMID: 36604364; PMCID: PMC10287619; DOI: 10.1007/s10278-022-00766-w.
Abstract
Using computer vision through artificial intelligence (AI) is one of the main technological advances in dentistry. However, the existing literature on the practical application of AI for detecting cephalometric landmarks of orthodontic interest in digital images is heterogeneous, and there is no consensus regarding accuracy and precision. Thus, this review evaluated the use of artificial intelligence for detecting cephalometric landmarks in digital imaging examinations and compared it to manual annotation of landmarks. An electronic search was performed in nine databases to find studies that analyzed the detection of cephalometric landmarks in digital imaging examinations with AI and manual landmarking. Two reviewers selected the studies, extracted the data, and assessed the risk of bias using QUADAS-2. Random-effects meta-analyses determined the agreement and precision of AI compared to manual detection at a 95% confidence interval. The electronic search located 7410 studies, of which 40 were included. Only three studies presented a low risk of bias for all domains evaluated. The meta-analysis showed AI agreement rates of 79% (95% CI: 76-82%, I2 = 99%) and 90% (95% CI: 87-92%, I2 = 99%) for the thresholds of 2 and 3 mm, respectively, with a mean divergence of 2.05 (95% CI: 1.41-2.69, I2 = 10%) compared to manual landmarking. The menton cephalometric landmark showed the lowest divergence between the two methods (SMD = 1.17; 95% CI: 0.82-1.53; I2 = 0%). Based on very low certainty of evidence, the application of AI was promising for automatically detecting cephalometric landmarks, but further studies should focus on testing its strength and validity in different samples.
Affiliation(s)
Walbert A Vieira
- Department of Restorative Dentistry, Endodontics Division, School of Dentistry of Piracicaba, State University of Campinas, Piracicaba, São Paulo, Brazil
Thiago Leite Beaini
- Department of Preventive and Community Dentistry, School of Dentistry, Federal University of Uberlândia, Campus Umuarama Av. Pará, 1720, Bloco 2G, sala 1, 38405-320, Uberlândia, Minas Gerais, Brazil
Rubens Spin-Neto
- Department of Dentistry and Oral Health, Section for Oral Radiology, Aarhus University, Aarhus C, Denmark
Luiz Renato Paranhos
- Department of Preventive and Community Dentistry, School of Dentistry, Federal University of Uberlândia, Campus Umuarama Av. Pará, 1720, Bloco 2G, sala 1, 38405-320, Uberlândia, Minas Gerais, Brazil
9
Serafin M, Baldini B, Cabitza F, Carrafiello G, Baselli G, Del Fabbro M, Sforza C, Caprioglio A, Tartaglia GM. Accuracy of automated 3D cephalometric landmarks by deep learning algorithms: systematic review and meta-analysis. Radiol Med 2023; 128:544-555. PMID: 37093337; PMCID: PMC10181977; DOI: 10.1007/s11547-023-01629-2.
Abstract
OBJECTIVES The aim of the present systematic review and meta-analysis is to assess the accuracy of automated landmarking using deep learning in comparison with manual tracing for cephalometric analysis of 3D medical images. METHODS PubMed/Medline, IEEE Xplore, Scopus and ArXiv electronic databases were searched. Selection criteria were: ex vivo and in vivo volumetric data images suitable for 3D landmarking (Problem), a minimum of five automated landmarkings performed by a deep learning method (Intervention), manual landmarking (Comparison), and mean accuracy, in mm, between manual and automated landmarking (Outcome). QUADAS-2 was adapted for quality analysis. Meta-analysis was performed on studies that reported, as outcomes, mean values and standard deviations of the difference (error) between manual and automated landmarking. Linear regression plots were used to analyze correlations between mean accuracy and year of publication. RESULTS The initial electronic screening yielded 252 papers published between 2020 and 2022. A total of 15 studies were included for the qualitative synthesis, whereas 11 studies were used for the meta-analysis. The overall random-effects model revealed a mean value of 2.44 mm, with high heterogeneity (I2 = 98.13%, τ2 = 1.018, p-value < 0.001); risk of bias was high due to issues in several domains per study. Meta-regression indicated a significant relation between mean error and year of publication (p-value = 0.012). CONCLUSION Deep learning algorithms showed excellent accuracy for automated 3D cephalometric landmarking. In the last two years, promising algorithms have been developed, and landmark annotation accuracy has improved.
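The random-effects pooling reported above (a pooled mean with a τ² heterogeneity estimate) is commonly computed with the DerSimonian-Laird estimator; this sketch assumes that method, which the abstract does not name explicitly, and uses invented study values rather than the review's data:

```python
def dersimonian_laird(means, ses):
    """Pool per-study means under a DerSimonian-Laird random-effects model.
    Returns the pooled mean and the between-study variance tau^2."""
    w = [1 / se**2 for se in ses]                               # fixed-effect weights
    fixed = sum(wi * m for wi, m in zip(w, means)) / sum(w)     # fixed-effect mean
    q = sum(wi * (m - fixed) ** 2 for wi, m in zip(w, means))   # Cochran's Q
    df = len(means) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                               # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]                   # random-effects weights
    pooled = sum(wi * m for wi, m in zip(w_re, means)) / sum(w_re)
    return pooled, tau2

# Illustrative per-study mean errors (mm) and standard errors, not from the review
pooled, tau2 = dersimonian_laird([1.8, 2.4, 3.1], [0.3, 0.5, 0.4])
print(pooled, tau2)
```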
Affiliation(s)
- Marco Serafin
- Department of Biomedical Sciences for Health, University of Milan, Via Mangiagalli 31, 20133, Milan, Italy
- Benedetta Baldini
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Ponzio 34/5, 20133, Milan, Italy
- Federico Cabitza
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, Viale Sarca 336, 20126, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Belgioioso 173, 20157, Milan, Italy
- Gianpaolo Carrafiello
- Department of Oncology and Hematology-Oncology, University of Milan, Via Sforza 35, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Giuseppe Baselli
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Ponzio 34/5, 20133, Milan, Italy
- Massimo Del Fabbro
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Chiarella Sforza
- Department of Biomedical Sciences for Health, University of Milan, Via Mangiagalli 31, 20133, Milan, Italy
- Alberto Caprioglio
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
- Gianluca M Tartaglia
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Via della Commenda 10, 20122, Milan, Italy
- Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Via Sforza 35, 20122, Milan, Italy
|
10
|
Pei Y, Mu L, Xu C, Li Q, Sen G, Sun B, Li X, Li X. Learning-based landmark detection in pelvis x-rays with attention mechanism: data from the osteoarthritis initiative. Biomed Phys Eng Express 2023; 9. [PMID: 36070671 DOI: 10.1088/2057-1976/ac8ffa] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2022] [Accepted: 09/07/2022] [Indexed: 01/07/2023]
Abstract
Patients with developmental dysplasia of the hip can be affected by this condition throughout their lifetime. The problem is difficult for radiologists to detect on x-rays because of abrasion of the anatomical structures, so the relevant landmarks should be located automatically and precisely. In this paper, we propose an attention mechanism that combines multi-dimensional information on the basis of separating the spatial dimension. The proposed mechanism decouples the spatial dimension into a width-channel dimension and a height-channel dimension via 1D pooling operations along the height and width of the spatial dimension. Non-local means operations are then performed to capture the correlation between long-range pixels in the width-channel dimension, as well as in the height-channel dimension, at different resolutions. The proposed attention modules are inserted into the skip connections of U-Net to form a novel landmark detection architecture. This method was trained and evaluated through five-fold cross-validation on an open-source dataset of 524 pelvis x-rays, each containing eight pelvic landmarks, and achieved excellent performance compared to other landmark detection models. The average point-to-point errors of U-Net, HR-Net, CE-Net, and the proposed network were 3.5651 mm, 3.6118 mm, 3.3914 mm, and 3.1350 mm, respectively, indicating that the proposed method has the highest detection accuracy. Furthermore, an annotated open-source pelvis dataset is released for open research.
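The decoupling step described above can be illustrated with a toy sketch. This pure-Python example (an illustration of the general idea, not the authors' network code) shows 1D average pooling of an H × W map into width and height profiles, followed by softmax-normalised dot-product affinities standing in for the non-local operation:

```python
import math

def pool_decouple(fmap):
    """Decouple an H x W feature map into width and height profiles via
    1D average pooling along each spatial axis."""
    h, w = len(fmap), len(fmap[0])
    width_profile = [sum(fmap[i][j] for i in range(h)) / h for j in range(w)]
    height_profile = [sum(fmap[i][j] for j in range(w)) / w for i in range(h)]
    return width_profile, height_profile

def nonlocal_weights(profile):
    """Softmax-normalised dot-product affinities between all position pairs,
    a stand-in for the non-local operation on one decoupled dimension."""
    logits = [[p * q for q in profile] for p in profile]
    out = []
    for row in logits:
        m = max(row)                      # subtract max for numerical stability
        e = [math.exp(x - m) for x in row]
        s = sum(e)
        out.append([x / s for x in e])
    return out

fmap = [[1.0, 2.0, 3.0],
        [4.0, 5.0, 6.0]]
wp, hp = pool_decouple(fmap)
weights = nonlocal_weights(wp)
print(wp, hp)  # → [2.5, 3.5, 4.5] [2.0, 5.0]
```

In the actual network these affinities would re-weight features per channel; here they merely show how long-range positions interact after the spatial dimension is split.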
Affiliation(s)
- Yun Pei
- State Key Laboratory of Integrated Optoelectronics, College of Electronic Science and Engineering, Jilin University, Changchun, 130012, People's Republic of China
- Lin Mu
- Department of Radiology, The First Hospital of Jilin University, Changchun, 130021, People's Republic of China
- Chuanxin Xu
- School of Electrical Engineering and Computer, Jilin Jianzhu University, Changchun, 130118, People's Republic of China
- Qiang Li
- Department of Orthopedics, The Second Hospital of Jilin University, Changchun, 130041, People's Republic of China
- Gan Sen
- Department of Medical Engineering and Technology, Xinjiang Medical University, Urumqi, 830011, People's Republic of China
- Bin Sun
- Department of Medical Engineering and Technology, Xinjiang Medical University, Urumqi, 830011, People's Republic of China
- Xiuying Li
- State Key Laboratory of Integrated Optoelectronics, College of Electronic Science and Engineering, Jilin University, Changchun, 130012, People's Republic of China
- Xueyan Li
- State Key Laboratory of Integrated Optoelectronics, College of Electronic Science and Engineering, Jilin University, Changchun, 130012, People's Republic of China
- Peng Cheng Laboratory, Shenzhen, 518000, People's Republic of China
|
11
|
A semi-supervised learning approach for automated 3D cephalometric landmark identification using computed tomography. PLoS One 2022; 17:e0275114. [PMID: 36170279 PMCID: PMC9518928 DOI: 10.1371/journal.pone.0275114] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Accepted: 09/11/2022] [Indexed: 11/30/2022] Open
Abstract
Identification of the 3D cephalometric landmarks that serve as a proxy for the shape of the human skull is the fundamental step in cephalometric analysis. Since manual landmarking on 3D computed tomography (CT) images is a cumbersome task even for trained experts, an automatic 3D landmark detection system is in great need. Recently, automatic landmarking of 2D cephalograms using deep learning (DL) has achieved great success, but 3D landmarking for more than 80 landmarks has not yet reached a satisfactory level, because of factors hindering machine learning such as the high dimensionality of the input data and the limited amount of training data due to ethical restrictions on the use of medical data. This paper presents a semi-supervised DL method for 3D landmarking that takes advantage of an anonymized landmark dataset from which the paired CT data have been removed. The proposed method first detects a small number of easy-to-find reference landmarks, then uses them to provide a rough estimate of all landmarks by utilizing the low-dimensional representation learned by a variational autoencoder (VAE); the anonymized landmark dataset is used for training the VAE. Finally, coarse-to-fine detection is applied within the small bounding box provided by the rough estimate, using separate strategies suitable for the mandible and the cranium. For mandibular landmarks, a patch-based 3D CNN is applied to the segmented image of the mandible (separated from the maxilla), in order to capture 3D morphological features of the mandible associated with the landmarks; the 6 landmarks around the condyle are detected all at once rather than one by one, because they are closely related to each other. For cranial landmarks, the VAE-based latent representation is again used for more accurate annotation. In our experiments, the proposed method achieved a mean detection error of 2.88 mm for 90 landmarks using only 15 paired training scans.
|
12
|
Dot G, Schouman T, Chang S, Rafflenbeul F, Kerbrat A, Rouch P, Gajny L. Automatic 3-Dimensional Cephalometric Landmarking via Deep Learning. J Dent Res 2022; 101:1380-1387. [PMID: 35982646 DOI: 10.1177/00220345221112333] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
The increasing use of 3-dimensional (3D) imaging by orthodontists and maxillofacial surgeons to assess complex dentofacial deformities and plan orthognathic surgeries implies a critical need for 3D cephalometric analysis. Although promising methods have been suggested to localize 3D landmarks automatically, concerns about robustness and generalizability restrain their clinical use, so highly trained operators remain needed to perform manual landmarking. In this retrospective diagnostic study, we aimed to train and evaluate a deep learning (DL) pipeline based on SpatialConfiguration-Net for automatic localization of 3D cephalometric landmarks on computed tomography (CT) scans. A retrospective sample of consecutive presurgical CT scans was randomly distributed between a training/validation set (n = 160) and a test set (n = 38). The reference data consisted of 33 landmarks, manually localized once by one operator (n = 178) or twice by three operators (n = 20, test set only). After inference on the test set, one CT scan showed "very low" confidence level predictions; we excluded it from the overall analysis but still assessed and discussed the corresponding results. Model performance was evaluated by comparing the predictions with the reference data; the outcome set included localization accuracy, cephalometric measurements, and comparison to manual landmarking reproducibility. On the hold-out test set, the mean localization error was 1.0 ± 1.3 mm, while success detection rates for 2.0, 2.5, and 3.0 mm were 90.4%, 93.6%, and 95.4%, respectively. Mean errors were -0.3 ± 1.3° and -0.1 ± 0.7 mm for angular and linear measurements, respectively. When compared to manual reproducibility, the measurements were within the Bland-Altman 95% limits of agreement for 91.9% of skeletal and 71.8% of dentoalveolar variables.
To conclude, while our DL method still requires improvement, it provided highly accurate 3D landmark localization on a challenging test set, with a reliability for skeletal evaluation on par with what clinicians obtain.
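The mean localization error and success detection rates (SDR) reported above are the standard metrics for automated landmarking. A minimal sketch, using hypothetical landmark coordinates, shows how they are computed from predicted and reference positions:

```python
import math

def radial_errors(pred, ref):
    """Euclidean distance between each predicted and reference 3D landmark (mm)."""
    return [math.dist(p, r) for p, r in zip(pred, ref)]

def success_detection_rate(errors, threshold_mm):
    """Fraction of landmarks whose error falls within the given threshold."""
    return sum(e <= threshold_mm for e in errors) / len(errors)

# Hypothetical predicted vs. reference landmark coordinates (mm)
ref  = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 5.0)]
pred = [(0.5, 0.0, 0.0), (10.0, 1.5, 0.0), (0.0, 10.0, 8.5)]
errs = radial_errors(pred, ref)
print([round(e, 2) for e in errs])        # → [0.5, 1.5, 3.5]
print(success_detection_rate(errs, 2.0))  # 2 of 3 landmarks within 2 mm
```

A clinically acceptable result is usually defined as an error below 2 mm, which is why SDR is typically reported at the 2.0, 2.5, and 3.0 mm thresholds.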
Affiliation(s)
- G Dot
- Institut de Biomécanique Humaine Georges Charpak, Arts et Métiers Institute of Technology, Paris, France
- Université Paris Cité, AP-HP, Hôpital Pitié-Salpêtrière, Service de Médecine Bucco-Dentaire, Paris, France
- T Schouman
- Institut de Biomécanique Humaine Georges Charpak, Arts et Métiers Institute of Technology, Paris, France
- Médecine Sorbonne Université, AP-HP, Hôpital Pitié-Salpêtrière, Service de Chirurgie Maxillo-Faciale, Paris, France
- S Chang
- Institut de Biomécanique Humaine Georges Charpak, Arts et Métiers Institute of Technology, Paris, France
- F Rafflenbeul
- Department of Dentofacial Orthopedics, Faculty of Dental Surgery, Strasbourg University, Strasbourg, France
- A Kerbrat
- Institut de Biomécanique Humaine Georges Charpak, Arts et Métiers Institute of Technology, Paris, France
- P Rouch
- Institut de Biomécanique Humaine Georges Charpak, Arts et Métiers Institute of Technology, Paris, France
- L Gajny
- Institut de Biomécanique Humaine Georges Charpak, Arts et Métiers Institute of Technology, Paris, France
|
13
|
Ryu SM, Shin K, Shin SW, Lee SH, Seo SM, Cheon SU, Ryu SA, Kim JS, Ji S, Kim N. Automated landmark identification for diagnosis of the deformity using a cascade convolutional neural network (FlatNet) on weight-bearing lateral radiographs of the foot. Comput Biol Med 2022; 148:105914. [DOI: 10.1016/j.compbiomed.2022.105914] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 07/08/2022] [Accepted: 07/23/2022] [Indexed: 11/15/2022]
|
14
|
Chen R, Ma Y, Chen N, Liu L, Cui Z, Lin Y, Wang W. Structure-Aware Long Short-Term Memory Network for 3D Cephalometric Landmark Detection. IEEE Transactions on Medical Imaging 2022; 41:1791-1801. [PMID: 35130151 DOI: 10.1109/tmi.2022.3149281] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Detecting 3D landmarks on cone-beam computed tomography (CBCT) is crucial to assessing and quantifying anatomical abnormalities in 3D cephalometric analysis. However, current methods are time-consuming and suffer from large biases in landmark localization, leading to unreliable diagnosis results. In this work, we propose a novel Structure-Aware Long Short-Term Memory framework (SA-LSTM) for efficient and accurate 3D landmark detection. To reduce the computational burden, SA-LSTM is designed in two stages. It first locates the coarse landmarks via heatmap regression on a down-sampled CBCT volume and then progressively refines the landmarks by attentive offset regression using multi-resolution cropped patches. To boost accuracy, SA-LSTM captures global-local dependence among the cropped patches via self-attention. Specifically, a novel graph attention module implicitly encodes the landmarks' global structure to rationalize the predicted positions. Moreover, a novel attention-gated module recursively filters irrelevant local features and maintains high-confidence local predictions for aggregating the final result. Experiments conducted on an in-house dataset and a public dataset show that our method outperforms state-of-the-art methods, achieving 1.64 mm and 2.37 mm average errors, respectively. Furthermore, our method is very efficient, taking only 0.5 seconds to infer a whole CBCT volume of resolution 768×768×576.
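The two-stage coarse-to-fine scheme can be caricatured in a few lines. In this sketch (an illustration of the general strategy, not the SA-LSTM implementation), the coarse stage is an argmax over a down-sampled 2D heatmap and the refinement stage is a regressed offset, supplied here directly for illustration:

```python
def coarse_to_fine(heatmap_coarse, scale, offset):
    """Two-stage localisation: argmax on a down-sampled heatmap gives a coarse
    cell, which is mapped back to full resolution and corrected by a regressed
    offset (in a real model the offset comes from a refinement network)."""
    best = max(
        ((i, j) for i in range(len(heatmap_coarse))
                for j in range(len(heatmap_coarse[0]))),
        key=lambda ij: heatmap_coarse[ij[0]][ij[1]],
    )
    coarse_full = (best[0] * scale, best[1] * scale)  # back to full resolution
    return (coarse_full[0] + offset[0], coarse_full[1] + offset[1])

heat = [[0.1, 0.2],
        [0.9, 0.3]]  # peak at cell (1, 0)
print(coarse_to_fine(heat, scale=4, offset=(1.0, -0.5)))  # → (5.0, -0.5)
```

Running the expensive model only on a down-sampled volume, and reserving full resolution for small patches around candidate points, is what keeps inference in the sub-second range despite the large input.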
|
15
|
Alshamrani K, Alshamrani H, Alqahtani FF, Alshehri AH. Automation of Cephalometrics Using Machine Learning Methods. Computational Intelligence and Neuroscience 2022; 2022:3061154. [PMID: 35774443 PMCID: PMC9239774 DOI: 10.1155/2022/3061154] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Revised: 05/17/2022] [Accepted: 05/26/2022] [Indexed: 11/18/2022]
Abstract
Cephalometry is a medical test that can detect problems of the teeth, skeleton, or appearance. A tracing is constructed from lines drawn on a lateral radiograph of the patient's face, outlining the soft and hard structures (skin and bone, respectively). Once the tracing is complete, specific cephalometric locations and characteristic lines and angles are marked to perform the actual examination. This study proposes employing machine learning models to automate cephalometry. These models can recognise cephalometric locations in X-ray images, allowing the computational portion of the analysis to be completed faster. To map an input image to a probability map, the models combine an autoencoder architecture with convolutional neural networks and Inception layers. When several of these architectures were compared, all performed well on this task.
Affiliation(s)
- Khalaf Alshamrani
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran, Saudi Arabia
- Hassan Alshamrani
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran, Saudi Arabia
- F. F. Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran, Saudi Arabia
- Ali H. Alshehri
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran, Saudi Arabia
|
16
|
Jeong SH, Woo MW, Shin DS, Yeom HG, Lim HJ, Kim BC, Yun JP. Three-Dimensional Postoperative Results Prediction for Orthognathic Surgery through Deep Learning-Based Alignment Network. J Pers Med 2022; 12:998. [PMID: 35743782 PMCID: PMC9225553 DOI: 10.3390/jpm12060998] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 06/16/2022] [Accepted: 06/17/2022] [Indexed: 12/13/2022] Open
Abstract
To date, the diagnosis of dentofacial dysmorphosis has relied almost entirely on reference points, planes, and angles. This is time-consuming and is greatly influenced by the skill level of the practitioner. To address this problem, we investigated whether deep neural networks could predict the postoperative results of orthognathic surgery without relying on reference points, planes, and angles. We used three-dimensional point cloud data of the skulls of 269 patients. The proposed method has two main stages. In step 1, the skull is divided into six parts by a segmentation network. In step 2, three-dimensional transformation parameters are predicted by an alignment network. The ground-truth transformation parameters are calculated through iterative closest point (ICP) registration, which aligns each preoperative part of the skull to the corresponding postoperative part. We compared PointNet, PointNet++, and PointConv as feature extractors for the alignment network, and designed a new loss function that considers the distance error of the transformed points for better accuracy. The accuracy, mean intersection over union (mIoU), and Dice coefficient (DC) of the first segmentation network, which divides the skull into upper and lower parts, were 0.9998, 0.9994, and 0.9998, respectively; for the second segmentation network, which divides the lower part of the skull into five parts, they were 0.9949, 0.9900, and 0.9949. The mean absolute errors of the transverse, anterior-posterior, and vertical distances were 0.765 mm, 1.455 mm, and 1.392 mm for part 2 (maxilla); 1.069 mm, 1.831 mm, and 1.375 mm for part 3 (mandible); and 1.913 mm, 2.340 mm, and 1.257 mm for part 4 (chin). With this approach, postoperative results can be predicted simply by entering point cloud data from computed tomography.
Affiliation(s)
- Seung Hyun Jeong
- Advanced Mechatronics R&D Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea
- Min Woo Woo
- Advanced Mechatronics R&D Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea
- School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Korea
- Dong Sun Shin
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea
- Han Gyeol Yeom
- Department of Oral and Maxillofacial Radiology, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea
- Hun Jun Lim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea
- Bong Chul Kim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea
- Jong Pil Yun
- Advanced Mechatronics R&D Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea
- KITECH School, University of Science and Technology, Daejeon 34113, Korea
|
17
|
Ma Q, Kobayashi E, Fan B, Hara K, Nakagawa K, Masamune K, Sakuma I, Suenaga H. Machine‐learning‐based approach for predicting postoperative skeletal changes for orthognathic surgical planning. Int J Med Robot 2022; 18:e2379. [DOI: 10.1002/rcs.2379] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Revised: 01/11/2022] [Accepted: 01/12/2022] [Indexed: 11/07/2022]
Affiliation(s)
- Qingchuan Ma
- Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- School of Engineering Medicine, Beihang University, Beijing, China
- Etsuko Kobayashi
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Bowen Fan
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Kazuaki Hara
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Keiichi Nakagawa
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Ken Masamune
- Institute of Advanced BioMedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
- Ichiro Sakuma
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Hideyuki Suenaga
- Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
|
18
|
Kordon F, Maier A, Swartman B, Privalov M, El Barbari JS, Kunze H. Multi-Stage Platform for (Semi-)Automatic Planning in Reconstructive Orthopedic Surgery. J Imaging 2022; 8:jimaging8040108. [PMID: 35448235 PMCID: PMC9027971 DOI: 10.3390/jimaging8040108] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 04/05/2022] [Accepted: 04/08/2022] [Indexed: 01/11/2023] Open
Abstract
Intricate lesions of the musculoskeletal system require reconstructive orthopedic surgery to restore correct biomechanics. Careful pre-operative planning of the surgical steps on 2D image data is an essential tool for increasing the precision and safety of these operations. However, the plan's effectiveness in the intra-operative workflow is challenged by unpredictable patient and device positioning and complex registration protocols. Here, we develop and analyze a multi-stage algorithm that combines deep learning-based anatomical feature detection and geometric post-processing to enable accurate pre- and intra-operative surgical planning on 2D X-ray images. The algorithm allows granular control over each element of the planning geometry, enabling real-time adjustments directly in the operating room (OR). Evaluating the method on three ligament reconstruction tasks of the knee joint, we found high spatial precision in drilling point localization (ε < 2.9 mm) and low angulation errors for k-wire instrumentation (ε < 0.75°) on 38 diagnostic radiographs. Comparable precision was demonstrated in 15 complex intra-operative trauma cases with strong implant overlap and multi-anatomy exposure. Furthermore, we found that the diverse feature detection tasks can be solved efficiently with a multi-task network topology, improving precision over the single-task case. Our platform will help overcome the limitations of current clinical practice and foster surgical plan generation and adjustment directly in the OR, ultimately motivating the development of novel 2D planning guidelines.
Affiliation(s)
- Florian Kordon
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nuremberg, 91058 Erlangen, Germany
- Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander University Erlangen-Nuremberg, 91052 Erlangen, Germany
- Advanced Therapies, Siemens Healthcare GmbH, 91031 Forchheim, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nuremberg, 91058 Erlangen, Germany
- Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander University Erlangen-Nuremberg, 91052 Erlangen, Germany
- Benedict Swartman
- Department for Trauma and Orthopaedic Surgery, BG Trauma Center Ludwigshafen, 67071 Ludwigshafen, Germany
- Maxim Privalov
- Department for Trauma and Orthopaedic Surgery, BG Trauma Center Ludwigshafen, 67071 Ludwigshafen, Germany
- Jan Siad El Barbari
- Department for Trauma and Orthopaedic Surgery, BG Trauma Center Ludwigshafen, 67071 Ludwigshafen, Germany
- Holger Kunze
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nuremberg, 91058 Erlangen, Germany
- Advanced Therapies, Siemens Healthcare GmbH, 91031 Forchheim, Germany
|
19
|
Thurzo A, Kosnáčová HS, Kurilová V, Kosmeľ S, Beňuš R, Moravanský N, Kováč P, Kuracinová KM, Palkovič M, Varga I. Use of Advanced Artificial Intelligence in Forensic Medicine, Forensic Anthropology and Clinical Anatomy. Healthcare (Basel) 2021; 9:1545. [PMID: 34828590 PMCID: PMC8619074 DOI: 10.3390/healthcare9111545] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Revised: 11/10/2021] [Accepted: 11/10/2021] [Indexed: 12/11/2022] Open
Abstract
Three-dimensional convolutional neural networks (3D CNNs) are potent artificial intelligence (AI) tools for image processing and recognition, using deep learning to perform generative and descriptive tasks. Compared to its predecessors, the advantage of a CNN is that it automatically detects the important features without any human supervision. A 3D CNN extracts features in three dimensions, where the input is a 3D volume or a sequence of 2D pictures, e.g., slices in a cone-beam computed tomography (CBCT) scan. The main aim was to foster interdisciplinary cooperation between forensic medical experts and deep learning engineers, with an emphasis on engaging clinical forensic experts who may have only basic knowledge of advanced artificial intelligence techniques but are interested in implementing them to advance forensic research. This paper introduces a novel workflow for 3D CNN analysis of full-head CBCT scans. The authors review current methods and design customized 3D CNN applications for forensic research from five perspectives: (1) sex determination, (2) biological age estimation, (3) 3D cephalometric landmark annotation, (4) growth vector prediction, and (5) facial soft-tissue estimation from the skull and vice versa. In conclusion, 3D CNN applications can be a watershed moment for forensic medicine, leading to unprecedented improvement of forensic analysis workflows based on 3D neural networks.
Affiliation(s)
- Andrej Thurzo
- Department of Stomatology and Maxillofacial Surgery, Faculty of Medicine, Comenius University in Bratislava, 81250 Bratislava, Slovakia
- Department of Simulation and Virtual Medical Education, Faculty of Medicine, Comenius University, Sasinkova 4, 81272 Bratislava, Slovakia
- forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia
- Helena Svobodová Kosnáčová
- Department of Simulation and Virtual Medical Education, Faculty of Medicine, Comenius University, Sasinkova 4, 81272 Bratislava, Slovakia
- Department of Genetics, Cancer Research Institute, Biomedical Research Center, Slovak Academy of Sciences, Dúbravská Cesta 9, 84505 Bratislava, Slovakia
- Veronika Kurilová
- Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 81219 Bratislava, Slovakia
- Silvester Kosmeľ
- Deep Learning Engineering Department at Cognexa, Faculty of Informatics and Information Technologies, Slovak University of Technology, Ilkovičova 2, 84216 Bratislava, Slovakia
- Radoslav Beňuš
- forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia
- Department of Anthropology, Faculty of Natural Sciences, Comenius University in Bratislava, Mlynská dolina Ilkovičova 6, 84215 Bratislava, Slovakia
- Norbert Moravanský
- forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia
- Institute of Forensic Medicine, Faculty of Medicine, Comenius University in Bratislava, Sasinkova 4, 81108 Bratislava, Slovakia
- Peter Kováč
- forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia
- Department of Criminal Law and Criminology, Faculty of Law, Trnava University, Kollárova 10, 91701 Trnava, Slovakia
- Kristína Mikuš Kuracinová
- Institute of Pathological Anatomy, Faculty of Medicine, Comenius University in Bratislava, Sasinkova 4, 81108 Bratislava, Slovakia
- Michal Palkovič
- Institute of Pathological Anatomy, Faculty of Medicine, Comenius University in Bratislava, Sasinkova 4, 81108 Bratislava, Slovakia
- Forensic Medicine and Pathological Anatomy Department, Health Care Surveillance Authority (HCSA), Sasinkova 4, 81108 Bratislava, Slovakia
- Ivan Varga
- Institute of Histology and Embryology, Faculty of Medicine, Comenius University in Bratislava, 81372 Bratislava, Slovakia
|
20
|
3D cephalometric landmark detection by multiple stage deep reinforcement learning. Sci Rep 2021; 11:17509. [PMID: 34471202 PMCID: PMC8410904 DOI: 10.1038/s41598-021-97116-7] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2021] [Accepted: 08/17/2021] [Indexed: 11/09/2022] Open
Abstract
The lengthy time needed for manual landmarking has delayed the widespread adoption of three-dimensional (3D) cephalometry. Here we propose an automatic 3D cephalometric annotation system based on multi-stage deep reinforcement learning (DRL) and volume-rendered imaging. This system considers the geometrical characteristics of landmarks and simulates the sequential decision process underlying the landmarking patterns of human professionals. It consists mainly of constructing an appropriate two-dimensional cutaway or 3D model view, then implementing single-stage DRL with gradient-based boundary estimation or multi-stage DRL to determine the 3D coordinates of target landmarks. The system shows detection accuracy and stability sufficient for direct clinical application, with a low level of detection error and low inter-individual variation (1.96 ± 0.78 mm). Moreover, it requires no additional segmentation or 3D mesh-object construction steps for landmark detection. We believe these features will enable fast-track cephalometric analysis and planning, and we expect the system to achieve greater accuracy as larger CT datasets become available for training and testing.
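The sequential decision process mentioned above can be caricatured by a toy stand-in (not the paper's DRL agent): a greedy agent that repeatedly steps one unit along the axis with the largest remaining error, illustrating how landmarking can be framed as a sequence of discrete moves toward a target voxel:

```python
def greedy_walk(start, target, max_steps=200):
    """Toy stand-in for a DRL landmarking agent: at each step, move one unit
    along the 3D axis with the largest remaining error until the target voxel
    is reached (a trained agent would choose moves from learned image features)."""
    pos = list(start)
    steps = 0
    while pos != list(target) and steps < max_steps:
        axis = max(range(3), key=lambda a: abs(target[a] - pos[a]))
        pos[axis] += 1 if target[axis] > pos[axis] else -1
        steps += 1
    return tuple(pos), steps

pos, steps = greedy_walk((0, 0, 0), (3, -2, 1))
print(pos, steps)  # → (3, -2, 1) 6
```

Each step reduces the Manhattan distance by one, so the walk takes exactly the sum of the per-axis errors; a DRL agent learns an analogous move policy without being given the target.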
|
21
|
Schwendicke F, Chaurasia A, Arsiwala L, Lee JH, Elhennawy K, Jost-Brinkmann PG, Demarco F, Krois J. Deep learning for cephalometric landmark detection: systematic review and meta-analysis. Clin Oral Investig 2021; 25:4299-4309. [PMID: 34046742 PMCID: PMC8310492 DOI: 10.1007/s00784-021-03990-w] [Citation(s) in RCA: 53] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Accepted: 05/14/2021] [Indexed: 10/31/2022]
Abstract
OBJECTIVES Deep learning (DL) has been increasingly employed for automated landmark detection, e.g., for cephalometric purposes. We performed a systematic review and meta-analysis to assess the accuracy of and underlying evidence for DL for cephalometric landmark detection on 2-D and 3-D radiographs. METHODS Diagnostic accuracy studies published in 2015-2020 in Medline/Embase/IEEE/arXiv and employing DL for cephalometric landmark detection were identified and extracted by two independent reviewers. Random-effects meta-analysis, subgroup analysis, and meta-regression were performed, and study quality was assessed using QUADAS-2. The review was registered (PROSPERO no. 227498). DATA From 321 identified records, 19 studies (published 2017-2020) were included, all employing convolutional neural networks, mainly on 2-D lateral radiographs (n = 15), using data from publicly available datasets (n = 12) and testing the detection of a mean of 30 (SD: 25; range: 7-93) landmarks. The reference test was established by two experts (n = 11), one expert (n = 4), three experts (n = 3), or a set of annotators (n = 1). Risk of bias was high, and applicability concerns were detected for most studies, mainly regarding the data selection and reference test conduct. Landmark prediction error centered around the 2-mm error threshold (mean: -0.581 mm; 95% CI: -1.264 to 0.102 mm). The proportion of landmarks detected within this 2-mm threshold was 0.799 (0.770 to 0.824). CONCLUSIONS DL shows relatively high accuracy for detecting landmarks on cephalometric imagery. The overall body of evidence is consistent but suffers from a high risk of bias. Demonstrating the robustness and generalizability of DL for landmark detection is needed. CLINICAL SIGNIFICANCE Existing DL models show consistent and largely high accuracy for the automated detection of cephalometric landmarks. The majority of studies so far focused on 2-D imagery; data on 3-D imagery are sparse, but promising.
Future studies should focus on demonstrating generalizability, robustness, and clinical usefulness of DL for this objective.
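The two summary metrics pooled in this meta-analysis, the mean radial error and the proportion of landmarks within the 2-mm threshold (often reported as the success detection rate), can be illustrated with a minimal, self-contained sketch. The coordinates below are hypothetical, not data from any included study:

```python
import math

def mean_radial_error(preds, truths):
    """Mean Euclidean distance (mm) between predicted and reference landmarks."""
    dists = [math.dist(p, t) for p, t in zip(preds, truths)]
    return sum(dists) / len(dists)

def success_detection_rate(preds, truths, threshold_mm=2.0):
    """Proportion of landmarks predicted within the clinical threshold."""
    dists = [math.dist(p, t) for p, t in zip(preds, truths)]
    return sum(d <= threshold_mm for d in dists) / len(dists)

# Hypothetical predictions vs. expert annotations (2-D coordinates in mm)
preds  = [(10.0, 20.0), (35.5, 41.0), (60.0, 12.0), (22.0, 30.0)]
truths = [(10.5, 20.0), (36.0, 42.5), (63.0, 12.0), (22.0, 30.4)]
print(round(mean_radial_error(preds, truths), 2))   # 1.37
print(success_detection_rate(preds, truths))        # 0.75
```

The same `math.dist` call handles 3-D coordinate triples unchanged, which is how the equivalent metric is defined for CBCT landmarking.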
Affiliation(s)
- Falk Schwendicke: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Berlin, Germany; Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany
- Akhilanand Chaurasia: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany; Department of Oral Medicine and Radiology, King George's Medical University, Lucknow, India
- Lubaina Arsiwala: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Jae-Hong Lee: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany; Department of Periodontology, Daejeon Dental Hospital, Institute of Wonkwang Dental Research, Wonkwang University College of Dentistry, Daejeon, Korea
- Karim Elhennawy: Department of Orthodontics, Dentofacial Orthopedics and Pedodontics, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Paul-Georg Jost-Brinkmann: Department of Orthodontics, Dentofacial Orthopedics and Pedodontics, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Flavio Demarco: Post-Graduate Program in Epidemiology, Federal University of Pelotas, Pelotas, Brazil
- Joachim Krois: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Berlin, Germany; Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany
22
Robotic Applications in Orthodontics: Changing the Face of Contemporary Clinical Care. Biomed Res Int 2021; 2021:9954615. [PMID: 34222490] [PMCID: PMC8225419] [DOI: 10.1155/2021/9954615]
Abstract
The last decade (2010-2021) has witnessed the evolution of robotic applications in orthodontics. This review scopes and analyzes published orthodontic literature in eight different domains: (1) robotic dental assistants; (2) robotics in diagnosis and simulation of orthodontic problems; (3) robotics in orthodontic patient education, teaching, and training; (4) wire bending and customized appliance robotics; (5) nanorobots/microrobots for acceleration of tooth movement and for remote monitoring; (6) robotics in maxillofacial surgeries and implant placement; (7) automated aligner production robotics; and (8) TMD rehabilitative robotics. A total of 1,150 records were searched, of which 124 potentially relevant articles were retrieved in full. 87 studies met the selection criteria following screening and were included in the scoping review. The review found that studies pertaining to arch wire bending and customized appliance robots, simulative robots for diagnosis, and surgical robots have been important areas of research in the last decade (32%, 22%, and 16%). Rehabilitative robots and nanorobots are quite promising and have been considerably reported in the orthodontic literature (13%, 9%). On the other hand, assistive robots, automated aligner production robots, and patient robots need more scientific data to be gathered in the future (1%, 1%, and 6%). Technological readiness of different robotic applications in orthodontics was further assessed. The presented eight domains of robotic technologies were assigned to an estimated technological readiness level according to the information given in the publications. Wire bending robots, TMD robots, nanorobots, and aligner production robots have reached the highest levels of technological readiness: 9; diagnostic robots and patient robots reached level 7, whereas surgical robots and assistive robots reached lower levels of readiness: 4 and 3, respectively.
23
Ren R, Luo H, Su C, Yao Y, Liao W. Machine learning in dental, oral and craniofacial imaging: a review of recent progress. PeerJ 2021; 9:e11451. [PMID: 34046262] [PMCID: PMC8136280] [DOI: 10.7717/peerj.11451]
Abstract
Artificial intelligence has become an increasingly important part of daily life and is widely applied in medical science, with medical imaging as one of its major applications. Machine learning, a major component of artificial intelligence, is increasingly applied to medical diagnosis and treatment as technology and medical imaging facilities advance. The popularity of convolutional neural networks in dental, oral and craniofacial imaging is growing, as they are continually applied to a broader spectrum of scientific studies. This review covers the fundamental principles and rationale behind machine learning, summarizes its research progress and recent applications specifically in dental, oral and craniofacial imaging, discusses the problems that remain to be resolved, and evaluates the prospects for future development of this field.
Affiliation(s)
- Ruiyang Ren: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China School of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Haozhe Luo: School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Chongying Su: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China School of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Yang Yao: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Implantology, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Wen Liao: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China; Department of Orthodontics, Osaka Dental University, Hirakata, Osaka, Japan
24
Shin W, Yeom HG, Lee GH, Yun JP, Jeong SH, Lee JH, Kim HK, Kim BC. Deep learning based prediction of necessity for orthognathic surgery of skeletal malocclusion using cephalogram in Korean individuals. BMC Oral Health 2021; 21:130. [PMID: 33736627] [PMCID: PMC7977585] [DOI: 10.1186/s12903-021-01513-3]
Abstract
BACKGROUND Posteroanterior and lateral cephalograms have been widely used for evaluating the necessity of orthognathic surgery. The purpose of this study was to develop a deep learning network to automatically predict the need for orthognathic surgery using cephalograms. METHODS The cephalograms of 840 patients (Class II: 244, Class III: 447, facial asymmetry: 149) presenting with dentofacial dysmorphosis and/or malocclusion were included. Patients who did not require orthognathic surgery were classified as Group I (622 patients; Class II: 221, Class III: 312, facial asymmetry: 89). Group II (218 patients; Class II: 23, Class III: 135, facial asymmetry: 60) comprised the cases requiring surgery. A dataset was extracted using random sampling and composed of training, validation, and test sets in a 4:1:5 ratio. PyTorch was used as the framework for the experiment. RESULTS 394 of the 413 test samples were correctly classified. The accuracy, sensitivity, and specificity were 0.954, 0.844, and 0.993, respectively. CONCLUSION A convolutional neural network can determine the need for orthognathic surgery with relative accuracy when using cephalograms.
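The accuracy, sensitivity, and specificity reported here follow directly from a binary confusion matrix. A minimal sketch; the counts below are hypothetical values chosen to be approximately consistent with the reported figures, not taken from the paper:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard binary-classification metrics from confusion-matrix counts."""
    total = tp + fn + tn + fp
    return {
        "accuracy": (tp + tn) / total,   # correct decisions overall
        "sensitivity": tp / (tp + fn),   # surgery cases correctly flagged
        "specificity": tn / (tn + fp),   # non-surgery cases correctly cleared
    }

# Hypothetical split of the 413 test cephalograms (394 classified correctly)
m = diagnostic_metrics(tp=91, fn=17, tn=303, fp=2)
print({k: round(v, 3) for k, v in m.items()})
```

Note that with far more non-surgery than surgery cases in the test set, a high overall accuracy is driven mainly by specificity, which is why the paper reports all three metrics rather than accuracy alone.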
Affiliation(s)
- WooSang Shin: Safety System Research Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan, Korea; School of Electronics Engineering, College of IT Engineering, Kyungpook National University, Daegu, Korea
- Han-Gyeol Yeom: Department of Oral and Maxillofacial Radiology, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon, Korea
- Ga Hyung Lee: Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon, Korea
- Jong Pil Yun: Safety System Research Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan, Korea
- Seung Hyun Jeong: Safety System Research Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan, Korea
- Jong Hyun Lee: Safety System Research Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan, Korea; School of Electronics Engineering, College of IT Engineering, Kyungpook National University, Daegu, Korea
- Hwi Kang Kim: Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon, Korea
- Bong Chul Kim: Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon, Korea
25
Oh K, Oh IS, Le VNT, Lee DW. Deep Anatomical Context Feature Learning for Cephalometric Landmark Detection. IEEE J Biomed Health Inform 2021; 25:806-817. [PMID: 32750939] [DOI: 10.1109/jbhi.2020.3002582]
Abstract
In the past decade, anatomical context features have been widely used for cephalometric landmark detection, and significant progress is still being made. However, most existing methods rely on handcrafted graphical models rather than incorporating anatomical context during training, leading to suboptimal performance. In this study, we present a novel framework that allows a convolutional neural network (CNN) to learn richer anatomical context features during training. Our key idea consists of the Local Feature Perturbator (LFP) and the Anatomical Context loss (AC loss). When training the CNN, the LFP perturbs a cephalometric image based on the prior anatomical distribution, forcing the CNN to attend to relevant features more globally. The AC loss then helps the CNN learn the anatomical context based on spatial relationships between the landmarks. The experimental results demonstrate that the proposed framework makes the CNN learn richer anatomical representations, leading to increased performance. In performance comparisons, the proposed scheme outperforms state-of-the-art methods on the ISBI 2015 Cephalometric X-ray Image Analysis Challenge.
26
Lachinov D, Getmanskaya A, Turlapov V. Cephalometric Landmark Regression with Convolutional Neural Networks on 3D Computed Tomography Data. Pattern Recognit Image Anal 2020. [DOI: 10.1134/s1054661820030165]
27
Yun HS, Jang TJ, Lee SM, Lee SH, Seo JK. Learning-based local-to-global landmark annotation for automatic 3D cephalometry. Phys Med Biol 2020; 65:085018. [PMID: 32101805] [DOI: 10.1088/1361-6560/ab7a71]
Abstract
The annotation of three-dimensional (3D) cephalometric landmarks in 3D computed tomography (CT) has become an essential part of cephalometric analysis, which is used for diagnosis, surgical planning, and treatment evaluation. Automating 3D landmarking with high precision remains challenging due to the limited availability of training data and the high computational burden. This paper addresses these challenges by proposing a hierarchical deep-learning method consisting of four stages: 1) a basic landmark annotator for 3D skull pose normalization, 2) a deep-learning-based coarse-to-fine landmark annotator on the midsagittal plane, 3) a low-dimensional representation of the full set of landmarks using a variational autoencoder (VAE), and 4) a local-to-global landmark annotator. The VAE enables two-dimensional-image-based learning of 3D morphological features and similarity/dissimilarity representation learning over the concatenated vectors of cephalometric landmarks. The proposed method achieves an average 3D point-to-point error of 3.63 mm for 93 cephalometric landmarks using a small number of training CT datasets. Notably, the VAE captures variations in craniofacial structural characteristics.
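The 3D point-to-point error used as the evaluation metric here is simply the Euclidean distance between each predicted landmark and its reference position, averaged over all landmarks. A minimal sketch with hypothetical coordinates (mm), not data from the study:

```python
import math

def mean_point_to_point_error(pred, truth):
    """Average 3D Euclidean distance (mm) between predicted and reference landmarks."""
    return sum(math.dist(p, t) for p, t in zip(pred, truth)) / len(pred)

# Two hypothetical landmark positions, coordinates in mm
pred  = [(12.0, 5.0, 30.0), (40.0, 22.0, 18.0)]
truth = [(12.0, 8.0, 34.0), (40.0, 22.0, 20.0)]
print(mean_point_to_point_error(pred, truth))  # 3.5
```

In the paper's setting this average would be taken over all 93 landmarks per scan, then across the test scans.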
Affiliation(s)
- Hye Sun Yun: Department of Computational Science and Engineering, Yonsei University, Seoul, Republic of Korea
28
Ma Q, Kobayashi E, Fan B, Nakagawa K, Sakuma I, Masamune K, Suenaga H. Automatic 3D landmarking model using patch-based deep neural networks for CT image of oral and maxillofacial surgery. Int J Med Robot 2020; 16:e2093. [PMID: 32065718] [DOI: 10.1002/rcs.2093]
Abstract
BACKGROUND Manual landmarking is time-consuming and requires a high level of expertise. Although some algorithm-based landmarking methods have been proposed, they lack flexibility and may be susceptible to data diversity. METHODS The CT images of 66 patients who underwent oral and maxillofacial surgery (OMS) were landmarked manually in MIMICS. The CT slices were then exported as images for recreating the 3D volume. The coordinate data of the landmarks were further processed in MATLAB using principal component analysis (PCA). A patch-based deep neural network model with a three-layer convolutional neural network (CNN) was trained to detect landmarks in CT images. RESULTS The evaluation experiment showed that this CNN model could complete landmarking automatically in an average processing time of 37.871 seconds with an average error of 5.785 mm. CONCLUSION This study shows promising potential to relieve the workload of the surgeon and reduce the dependence on human experience for OMS landmarking.
Affiliation(s)
- Qingchuan Ma: Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- Etsuko Kobayashi: Institute of Advanced BioMedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
- Bowen Fan: Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Keiichi Nakagawa: Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Ichiro Sakuma: Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Ken Masamune: Institute of Advanced BioMedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
- Hideyuki Suenaga: Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan