1
Kapila S, Vora SR, Rengasamy Venugopalan S, Elnagar MH, Akyalcin S. Connecting the dots towards precision orthodontics. Orthod Craniofac Res 2023;26(Suppl 1):8-19. PMID: 37968678. DOI: 10.1111/ocr.12725.
Abstract
Precision orthodontics entails the use of personalized clinical, biological, social and environmental knowledge of each patient for deep, individualized clinical phenotyping and diagnosis, combined with the delivery of care using advanced customized devices, technologies and biologics. From its historical origins as a mechanotherapy- and materials-driven profession, orthodontics has in the past three decades been propelled by technological innovations including volumetric and surface 3D imaging and printing, and by advances in software that facilitate the derivation of diagnostic details, enhanced personalization of treatment plans and the fabrication of custom appliances. Still, the use of these diagnostic and therapeutic technologies is largely phenotype-driven, focusing mainly on facial/skeletal morphology and tooth positions. Future advances in orthodontics will involve a comprehensive understanding of an individual's biology through omics, a field of biology that involves large-scale, rapid analyses of DNA, mRNA, proteins and other biological regulators from a cell, tissue or organism. Such understanding will define individual biological attributes that will impact diagnosis, treatment decisions, risk assessment and the prognosis of therapy. Equally important are the advances in artificial intelligence (AI) and machine learning, and their applications in orthodontics. AI is already being used to validate approaches for diagnostic purposes such as landmark identification, cephalometric tracing, diagnosis of pathologies and facial phenotyping from radiographs and/or photographs. Other areas for future discovery and utilization of AI include clinical decision support, precision orthodontics, payer decisions and risk prediction.
The synergies between deep 3D phenotyping and advances in materials, omics and AI will propel the technological and omics era towards achieving the goal of delivering optimized and predictable precision orthodontics.
Affiliation(s)
- Sunil Kapila
- Strategic Initiatives and Operations, UCLA School of Dentistry, Los Angeles, California, USA
- Siddharth R Vora
- Oral Health Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Mohammed H Elnagar
- Department of Orthodontics, College of Dentistry, University of Illinois Chicago, Chicago, Illinois, USA
- Sercan Akyalcin
- Department of Developmental Biology, Harvard School of Dental Medicine, Boston, Massachusetts, USA
2
Weitz J, Grabenhorst A, Singer H, Niu M, Grill FD, Kamreh D, Claßen CAS, Wolff KD, Ritschl LM. Mandibular reconstructions with free fibula flap using standardized partially adjustable cutting guides or CAD/CAM technique: a three- and two-dimensional comparison. Front Oncol 2023;13:1167071. PMID: 37228490. PMCID: PMC10203950. DOI: 10.3389/fonc.2023.1167071.
Abstract
Background: Mandibular reconstruction with the fibula free flap (FFF) is performed freehand, CAD/CAM-assisted, or by using partially adjustable resection/reconstruction aids. The two latter options represent the contemporary reconstructive solutions of the past decade. The purpose of this study was to compare both auxiliary techniques with regard to feasibility, accuracy, and operative parameters. Methods and materials: The first twenty consecutive patients requiring a mandibular reconstruction (within angle-to-angle) with the FFF using the partially adjustable resection aids between January 2017 and December 2019 at our department were included. Additionally, matching CAD/CAM FFF cases were used as the control group in this cross-sectional study. Medical records and general information (sex, age, indication for surgery, extent of resection, number of segments, duration of surgery, and ischemia time) were analyzed. In addition, the pre- and postoperative Digital Imaging and Communications in Medicine data of the mandibles were converted to standard tessellation language (.stl) files. Conventional measurements (six horizontal distances (A-F) and temporomandibular joint (TMJ) spaces) and the root mean square error (RMSE) for three-dimensional analysis were measured and calculated. Results: In total, 40 patients were enrolled (20:20). Overall operation time, ischemia time, and the interval from the start of ischemia until the end of the operation showed no significant differences. No significant differences between the two groups were revealed in the conventional measurements of distances (A-D) and TMJ spaces. The Δ differences for distance F (between the mandibular foramina) and the right medial joint space were significantly lower in the ReconGuide group. The RMSE analysis of the two groups showed no significant difference (p=0.925), with an overall median RMSE of 3.1 mm (2.2-3.7) in the CAD/CAM group and 2.9 mm (2.2-3.8) in the ReconGuide group.
Conclusions The reconstructive surgeon can achieve comparable postoperative results regardless of technique, which may favor the ReconGuide use in mandibular angle-to-angle reconstruction over the CAD/CAM technique because of less preoperative planning time and lower costs per case.
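For readers unfamiliar with the RMSE metric used in the three-dimensional analysis above, a minimal sketch of a root mean square error over paired 3D coordinates (illustrative only; the study's actual registration and surface-sampling pipeline is not described here):

```python
import math

def rmse(points_a, points_b):
    """Root mean square error between paired 3D coordinates, e.g. matched
    points sampled from planned vs. postoperative mandible models."""
    if len(points_a) != len(points_b):
        raise ValueError("point lists must be paired one-to-one")
    # Squared Euclidean distance for each corresponding point pair.
    squared = [math.dist(p, q) ** 2 for p, q in zip(points_a, points_b)]
    return math.sqrt(sum(squared) / len(squared))
```

In practice the two models must first be registered in a common coordinate system; the metric itself is then a single pass over corresponding points.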
Affiliation(s)
- Jochen Weitz
- Department of Oral and Maxillofacial Surgery, Josefinum, Augsburg and Private Practice Oral and Maxillofacial Surgery im Pferseepark, Augsburg, Germany
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Munich, Germany
- Alex Grabenhorst
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Munich, Germany
- Hannes Singer
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Munich, Germany
- Minli Niu
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Munich, Germany
- Florian D. Grill
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Munich, Germany
- Daniel Kamreh
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Munich, Germany
- Carolina A. S. Claßen
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Munich, Germany
- Department of Oral and Maxillofacial Surgery, School of Medicine, University of Saarland, Homburg, Saar, Germany
- Klaus-Dietrich Wolff
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Munich, Germany
- Lucas M. Ritschl
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Munich, Germany
3
Arsiwala-Scheppach LT, Chaurasia A, Müller A, Krois J, Schwendicke F. Machine Learning in Dentistry: A Scoping Review. J Clin Med 2023;12:937. PMID: 36769585. PMCID: PMC9918184. DOI: 10.3390/jcm12030937.
Abstract
Machine learning (ML) is increasingly employed in dental research and application. We aimed to systematically compile studies using ML in dentistry and to assess their methodological quality, including risk of bias and reporting standards. We evaluated studies employing ML in dentistry published from 1 January 2015 to 31 May 2021 on MEDLINE, IEEE Xplore, and arXiv. We assessed publication trends and the distribution of ML tasks (classification, object detection, semantic segmentation, instance segmentation, and generation) across clinical fields. We appraised the risk of bias and adherence to reporting standards using the QUADAS-2 and TRIPOD checklists, respectively. Of 183 identified studies, 168 were included, focusing on various ML tasks and employing a broad range of ML models, input data, data sources, strategies to generate reference tests, and performance metrics. Classification tasks were most common. Forty-two different metrics were used to evaluate model performance, with accuracy, sensitivity, precision, and intersection-over-union being the most common. We observed a considerable risk of bias and moderate adherence to reporting standards, which hampers replication of results. A minimum (core) set of outcomes and outcome metrics is necessary to facilitate comparisons across studies.
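As context for the intersection-over-union metric named above, a minimal sketch on flattened binary segmentation masks (a generic illustration, not code from any of the reviewed studies):

```python
def iou(mask_a, mask_b):
    """Intersection-over-union (Jaccard index) of two binary masks,
    given as equal-length flat sequences of 0/1 values."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    # Convention: two empty masks agree perfectly.
    return inter / union if union else 1.0
```

For image arrays the masks are simply flattened first; the metric ranges from 0 (no overlap) to 1 (identical masks).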
Affiliation(s)
- Lubaina T. Arsiwala-Scheppach
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
- Akhilanand Chaurasia
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
- Department of Oral Medicine and Radiology, King George’s Medical University, Lucknow 226003, India
- Anne Müller
- Pharmacovigilance Institute (Pharmakovigilanz- und Beratungszentrum, PVZ) for Embryotoxicology, Institute of Clinical Pharmacology and Toxicology, Charité—Universitätsmedizin Berlin, 13353 Berlin, Germany
- Joachim Krois
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
- Falk Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
4
Tsolakis IA, Tsolakis AI, Elshebiny T, Matthaios S, Palomo JM. Comparing a Fully Automated Cephalometric Tracing Method to a Manual Tracing Method for Orthodontic Diagnosis. J Clin Med 2022;11:6854. PMID: 36431331. PMCID: PMC9693212. DOI: 10.3390/jcm11226854.
Abstract
Background: This study aimed to compare an automated cephalometric analysis, based on the latest deep learning method for automatically identifying cephalometric landmarks, with a manual tracing method using broadly accepted cephalometric software. Methods: A total of 100 cephalometric X-rays taken using a CS8100SC cephalostat were collected from a private practice. The X-rays were taken at the maximum image size (18 × 24 cm lateral image). All cephalometric X-rays were first traced manually using the Dolphin 3D Imaging program, version 11.0, and then automatically, using the artificial intelligence CS imaging V8 software. The American Board of Orthodontics analysis and the European Board of Orthodontics analysis were used for the cephalometric measurements. This resulted in the identification of 16 cephalometric landmarks, used for 16 angular and 2 linear measurements. Results: All measurements showed high reproducibility, with high intra-class reliability (>0.97). The two methods showed strong agreement, with an ICC range of 0.70-0.92. Mean values of the SNA, SNB, ANB, SN-MP, U1-SN, L1-NB, SNPg, ANPg, SN/ANS-PNS, SN/GoGn, U1/ANS-PNS, L1-APg, U1-NA, and L1-GoGn measurements showed no significant differences between the two methods (p > 0.0027), while the mean values of FMA, L1-MP, ANS-PNS/GoGn, and U1-L1 were statistically significantly different (p < 0.0027). Conclusions: The automatic cephalometric tracing method using the CS imaging V8 software is reliable and accurate for all cephalometric measurements.
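For reference, the intra-class correlation used above to quantify method agreement can be illustrated with a two-way random-effects, single-measure ICC(2,1) computed from the standard ANOVA mean squares. This is a generic sketch of one common ICC variant, not necessarily the exact formulation used in the study:

```python
def icc2_1(scores):
    """ICC(2,1): scores is an n-subjects x k-raters table of measurements."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    # Sums of squares for rows (subjects), columns (raters), and residual.
    ssr = k * sum((m - grand) ** 2 for m in row_means)
    ssc = n * sum((m - grand) ** 2 for m in col_means)
    sst = sum((x - grand) ** 2 for row in scores for x in row)
    sse = sst - ssr - ssc
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Here each "rater" would be one tracing method (manual or automated) and each "subject" one cephalometric measurement series; values near 1 indicate strong agreement.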
Affiliation(s)
- Ioannis A. Tsolakis
- Department of Orthodontics, School of Dentistry, Aristotle University of Thessaloniki, 541 24 Thessaloniki, Greece
- Correspondence:
- Apostolos I. Tsolakis
- Department of Orthodontics, School of Dentistry, National and Kapodistrian University of Athens, 157 72 Athens, Greece
- Department of Orthodontics, School of Dental Medicine, Case Western Reserve University, Cleveland, OH 44106, USA
- Tarek Elshebiny
- Department of Orthodontics, School of Dental Medicine, Case Western Reserve University, Cleveland, OH 44106, USA
- Stefanos Matthaios
- Department of Orthodontics, School of Dental Medicine, Case Western Reserve University, Cleveland, OH 44106, USA
- J. Martin Palomo
- Department of Orthodontics, School of Dental Medicine, Case Western Reserve University, Cleveland, OH 44106, USA
5
Luo D, Zeng W, Chen J, Tang W. Deep Learning for Automatic Image Segmentation in Stomatology and Its Clinical Application. Front Med Technol 2021;3:767836. PMID: 35047964. PMCID: PMC8757832. DOI: 10.3389/fmedt.2021.767836.
Abstract
Deep learning has become an active research topic in the field of medical image analysis. In particular, great advances have been made in segmentation performance for the automatic segmentation of stomatological images. In this paper, we systematically reviewed the recent literature on deep learning-based segmentation methods for stomatological images and their clinical applications, categorized them into different tasks, and analyzed their advantages and disadvantages. The main categories that we explored were the data sources, backbone network, and task formulation. We categorized data sources into panoramic radiography, dental X-rays, cone-beam computed tomography, multi-slice spiral computed tomography, and intraoral scan images. For the backbone network, we distinguished methods based on convolutional neural networks from those based on transformers. We divided task formulations into semantic segmentation and instance segmentation tasks. Toward the end of the paper, we discussed the challenges and provided several directions for further research on the automatic segmentation of stomatological images.
Affiliation(s)
- Wei Tang
- The State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China College of Stomatology, Sichuan University, Chengdu, China
6
Ritschl LM, Kilbertus P, Grill FD, Schwarz M, Weitz J, Nieberler M, Wolff KD, Fichter AM. In-House, Open-Source 3D-Software-Based, CAD/CAM-Planned Mandibular Reconstructions in 20 Consecutive Free Fibula Flap Cases: An Explorative Cross-Sectional Study With Three-Dimensional Performance Analysis. Front Oncol 2021;11:731336. PMID: 34631563. PMCID: PMC8498593. DOI: 10.3389/fonc.2021.731336.
Abstract
Background: Mandibular reconstruction is conventionally performed freehand, CAD/CAM-assisted, or by using partially adjustable resection aids. CAD/CAM-assisted reconstructions are usually done in cooperation with osteosynthesis manufacturers, which entails additional costs and longer lead times. The purpose of this study was to analyze an in-house, open-source software-based solution for virtual planning. Methods and materials: All consecutive cases between January 2019 and April 2021 that underwent in-house, software-based (Blender) mandibular reconstruction with a free fibula flap (FFF) were included in this cross-sectional study. The pre- and postoperative Digital Imaging and Communications in Medicine (DICOM) data were converted to standard tessellation language (STL) files. In addition to documenting general information (sex, age, indication for surgery, extent of resection, number of segments, duration of surgery, and ischemia time), conventional measurements and three-dimensional analysis methods (root mean square error [RMSE], mean surface distance [MSD], and Hausdorff distance [HD]) were used. Results: Twenty consecutive cases were enrolled. Three-dimensional analysis of the preoperative and virtually planned neomandible models yielded a median RMSE of 1.4 (0.4-7.2), MSD of 0.3 (-0.1-2.9), and HD of 0.7 (0.1-3.1). Three-dimensional comparison of the preoperative and postoperative models showed a median RMSE of 2.2 (1.5-11.1), MSD of 0.5 (-0.6-6.1), and HD of 1.5 (1.1-6.5); the differences were significant for RMSE (p < 0.001) and HD (p < 0.001) but not for MSD (p = 0.554). Three-dimensional analysis of the virtual and postoperative models had a median RMSE of 2.3 (1.3-10.7), MSD of -0.1 (-1.0-5.6), and HD of 1.7 (0.1-5.9). Conclusions: Open-source software-based in-house planning is a feasible, inexpensive, and fast method that enables accurate reconstructions. Additionally, it is excellent for teaching purposes.
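The Hausdorff distance reported above captures the worst-case disagreement between two surfaces. A brute-force sketch over sampled surface points (illustrative only; production pipelines typically use mesh-processing libraries with spatial indexing rather than this O(n·m) scan):

```python
import math

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets:
    the largest nearest-neighbor distance in either direction."""
    def directed(src, dst):
        # For each source point, distance to its nearest destination point;
        # the directed Hausdorff distance is the worst of these.
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))
```

Because it takes a maximum, a single outlier point (e.g., a segmentation artifact) dominates the value, which is why it is usually reported alongside mean-based metrics such as RMSE or MSD.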
Affiliation(s)
- Lucas M Ritschl
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Klinikum rechts der Isar, Munich, Germany
- Paul Kilbertus
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Klinikum rechts der Isar, Munich, Germany
- Florian D Grill
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Klinikum rechts der Isar, Munich, Germany
- Matthias Schwarz
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Klinikum rechts der Isar, Munich, Germany
- Jochen Weitz
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Klinikum rechts der Isar, Munich, Germany
- Department of Oral and Maxillofacial Surgery, Josefinum, Augsburg and Private Practice Oral and Maxillofacial Surgery im Pferseepark, Augsburg, Germany
- Markus Nieberler
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Klinikum rechts der Isar, Munich, Germany
- Klaus-Dietrich Wolff
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Klinikum rechts der Isar, Munich, Germany
- Andreas M Fichter
- Department of Oral and Maxillofacial Surgery, School of Medicine, Technical University of Munich, Klinikum rechts der Isar, Munich, Germany
7
Qiu B, van der Wel H, Kraeima J, Glas HH, Guo J, Borra RJH, Witjes MJH, van Ooijen PMA. Automatic Segmentation of Mandible from Conventional Methods to Deep Learning: A Review. J Pers Med 2021;11:629. PMID: 34357096. PMCID: PMC8307673. DOI: 10.3390/jpm11070629.
Abstract
Medical imaging techniques, such as (cone beam) computed tomography and magnetic resonance imaging, have proven to be a valuable component of oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in building a personalized 3D digital mandible model for 3D printing and OMFS treatment planning. Segmented mandible structures are used to effectively visualize mandible volumes and to quantitatively evaluate particular mandible properties. However, mandible segmentation is challenging for both clinicians and researchers, due to complex structures and highly attenuating materials, such as tooth fillings or metal implants, which easily lead to high noise and strong artifacts during scanning. Moreover, the size and shape of the mandible vary considerably between individuals. Mandible segmentation is therefore a tedious and time-consuming task that requires adequate training to be performed properly. With the advancement of computer vision approaches, researchers have developed several algorithms over the last two decades to automatically segment the mandible. The objective of this review was to present the available fully and semi-automatic mandible segmentation methods published in scientific articles, and to give clinicians and researchers in this field a clear account of these advancements to help develop novel automatic methods for clinical applications.
Affiliation(s)
- Bingjiang Qiu
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Hylke van der Wel
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Joep Kraeima
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Haye Hendrik Glas
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Jiapan Guo
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Ronald J. H. Borra
- Medical Imaging Center (MIC), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Max Johannes Hendrikus Witjes
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
8
Qiu B, Guo J, Kraeima J, Glas HH, Zhang W, Borra RJH, Witjes MJH, van Ooijen PMA. Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography. J Pers Med 2021;11:492. PMID: 34072714. PMCID: PMC8229770. DOI: 10.3390/jpm11060492.
Abstract
Purpose: Classic encoder–decoder-based convolutional neural network (EDCNN) approaches cannot accurately segment detailed anatomical structures of the mandible in computed tomography (CT), for instance, the condyles and coronoid processes, which are often affected by noise and metal artifacts. The main reason is that EDCNN approaches ignore the anatomical connectivity of the organs. In this paper, we propose a novel CNN-based 3D mandible segmentation approach that can accurately segment detailed anatomical structures. Methods: Unlike classic EDCNNs, which need to slice or crop the whole CT scan into 2D slices or 3D patches during segmentation, our proposed approach performs mandible segmentation on complete 3D CT scans. The proposed method, RCNNSeg, adopts the structure of recurrent neural networks to form a directed acyclic graph, enabling recurrent connections between adjacent nodes to retain their connectivity. Each node then functions as a classic EDCNN that segments a single slice of the CT scan. Our approach can perform 3D mandible segmentation on sequential data of any length and does not incur a large computational cost. The proposed RCNNSeg was evaluated on 109 head and neck CT scans from a local dataset and 40 scans from the public PDDCA dataset. Accuracy was evaluated by calculating the Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and 95% Hausdorff distance (95HD) between the reference standard and the automated segmentation. Results: The proposed RCNNSeg outperforms the EDCNN-based approaches on both datasets and yields superior quantitative and qualitative performance compared with state-of-the-art approaches on the PDDCA dataset. RCNNSeg generated the most accurate segmentations, with an average DSC of 97.48%, ASD of 0.2170 mm, and 95HD of 2.6562 mm on the 109 CT scans, and an average DSC of 95.10%, ASD of 0.1367 mm, and 95HD of 1.3560 mm on the PDDCA dataset. Conclusions: The proposed RCNNSeg generated more accurate automated segmentations than the other classic EDCNN segmentation techniques in both quantitative and qualitative evaluation, and shows potential for automatic mandible segmentation by learning spatially structured information.
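Of the evaluation metrics above, the average symmetric surface distance (ASD) can be sketched as the mean nearest-neighbor distance taken in both directions between sampled boundary points. This is an illustrative brute-force version; actual evaluations typically extract voxel surfaces and use optimized libraries:

```python
import math

def assd(points_a, points_b):
    """Average symmetric surface distance between two point sets sampled
    from the reference and automated segmentation surfaces."""
    # Nearest-neighbor distance from every point of A to the set B, and back.
    d_ab = [min(math.dist(p, q) for q in points_b) for p in points_a]
    d_ba = [min(math.dist(p, q) for q in points_a) for p in points_b]
    # Average over all distances in both directions.
    return (sum(d_ab) + sum(d_ba)) / (len(d_ab) + len(d_ba))
```

Unlike the Hausdorff distance, this averages over all boundary points, so isolated outliers contribute only proportionally to the score.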
Affiliation(s)
- Bingjiang Qiu
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Jiapan Guo
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Correspondence:
- Joep Kraeima
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Haye Hendrik Glas
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Weichuan Zhang
- Institute for Integrated and Intelligent System, Griffith University, Nathan, QLD 4111, Australia
- CSIRO Data61, Epping, NSW 1710, Australia
- Ronald J. H. Borra
- Medical Imaging Center (MIC), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Max Johannes Hendrikus Witjes
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
9
Liu L, Wolterink JM, Brune C, Veldhuis RNJ. Anatomy-aided deep learning for medical image segmentation: a review. Phys Med Biol 2021;66. PMID: 33906186. DOI: 10.1088/1361-6560/abfbf4.
Abstract
Deep learning (DL) has become widely used for medical image segmentation in recent years. Despite these advances, however, there are still problems for which DL-based segmentation fails. Recently, some DL approaches have achieved breakthroughs by using anatomical information, which is the crucial cue in manual segmentation. In this paper, we provide a review of anatomy-aided DL for medical image segmentation, systematically summarizing the categories of anatomical information and the corresponding representation methods. We address known and potentially solvable challenges in anatomy-aided DL and present a categorized methodological overview of using anatomical information with DL, drawn from over 70 papers. Finally, we discuss the strengths and limitations of current anatomy-aided DL approaches and suggest potential future work.
Collapse
Affiliation(s)
- Lu Liu
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands; Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Jelmer M Wolterink
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Christoph Brune
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Raymond N J Veldhuis
- Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
10
Wang H, Minnema J, Batenburg KJ, Forouzanfar T, Hu FJ, Wu G. Multiclass CBCT Image Segmentation for Orthodontics with Deep Learning. J Dent Res 2021; 100:943-949. [PMID: 33783247] [PMCID: PMC8293763] [DOI: 10.1177/00220345211005338]
Abstract
Accurate segmentation of the jaw (i.e., mandible and maxilla) and the teeth in cone beam computed tomography (CBCT) scans is essential for orthodontic diagnosis and treatment planning. Although various (semi)automated methods have been proposed to segment the jaw or the teeth, there is still a lack of fully automated segmentation methods that can simultaneously segment both anatomic structures in CBCT scans (i.e., multiclass segmentation). In this study, we aimed to train and validate a mixed-scale dense (MS-D) convolutional neural network for multiclass segmentation of the jaw, the teeth, and the background in CBCT scans. Thirty CBCT scans were obtained from patients who had undergone orthodontic treatment. Gold standard segmentation labels were manually created by 4 dentists. As a benchmark, we also evaluated MS-D networks that segmented the jaw or the teeth (i.e., binary segmentation). All segmented CBCT scans were converted to virtual 3-dimensional (3D) models. The segmentation performance of all trained MS-D networks was assessed by the Dice similarity coefficient and surface deviation. The CBCT scans segmented by the MS-D network demonstrated a large overlap with the gold standard segmentations (Dice similarity coefficient: 0.934 ± 0.019, jaw; 0.945 ± 0.021, teeth). The MS-D network–based 3D models of the jaw and the teeth showed minor surface deviations when compared with the corresponding gold standard 3D models (0.390 ± 0.093 mm, jaw; 0.204 ± 0.061 mm, teeth). The MS-D network took approximately 25 s to segment 1 CBCT scan, whereas manual segmentation took about 5 h. This study showed that multiclass segmentation of jaw and teeth was accurate and its performance was comparable to binary segmentation. The MS-D network trained for multiclass segmentation would therefore make patient-specific orthodontic treatment more feasible by strongly reducing the time required to segment multiple anatomic structures in CBCT scans.
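The Dice similarity coefficients reported above (0.934 for the jaw, 0.945 for the teeth) quantify volumetric overlap between a predicted mask and the gold-standard mask. A minimal NumPy sketch of the metric itself, on a toy 2D example (illustrative only; this is not the MS-D segmentation pipeline, and the masks are hypothetical):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy example: ground truth covers 4 pixels, prediction covers 6, overlap is 4,
# so Dice = 2*4 / (4+6) = 0.8
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True
print(round(dice(pred, gt), 2))  # 0.8
```

A Dice of 1.0 means identical masks; values near 0.93-0.95, as reported for the MS-D network, indicate very close agreement with the manual gold standard.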
Affiliation(s)
- H Wang
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, Amsterdam UMC, Academic Centre for Dentistry Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- J Minnema
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, Amsterdam UMC, Academic Centre for Dentistry Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- K J Batenburg
- Centrum Wiskunde and Informatica, Amsterdam, the Netherlands
- T Forouzanfar
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, Amsterdam UMC, Academic Centre for Dentistry Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- F J Hu
- Institute of Information Technology, Zhejiang Shuren University, Hangzhou, China
- G Wu
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, Amsterdam UMC, Academic Centre for Dentistry Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Department of Oral Implantology and Prosthetic Dentistry, Academic Centre for Dentistry Amsterdam, University of Amsterdam and Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
11
Hwang JJ, Jung YH, Cho BH, Heo MS. An overview of deep learning in the field of dentistry. Imaging Sci Dent 2019; 49:1-7. [PMID: 30941282] [PMCID: PMC6444007] [DOI: 10.5624/isd.2019.49.1.1]
Abstract
Purpose: Artificial intelligence (AI), represented by deep learning, can be applied to real-life problems across all sectors of society, including the medical and dental fields. The purpose of this study is to review articles in which deep learning was applied to the field of oral and maxillofacial radiology. Materials and Methods: A systematic review was performed using the PubMed, Scopus, and IEEE Xplore databases to identify English-language articles using deep learning. The variables extracted from 25 articles included network architecture, amount of training data, evaluation results, pros and cons, study object, and imaging modality. Results: Convolutional neural networks (CNNs) were used as the main network component. The number of published papers and the size of training datasets tended to increase, covering various fields of dentistry. Conclusion: Public dental datasets need to be constructed, and data standardization is necessary for the clinical application of deep learning in the dental field.
Affiliation(s)
- Jae-Joon Hwang
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Dental Research Institute, Yangsan, Korea
- Yun-Hoa Jung
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Dental Research Institute, Yangsan, Korea
- Bong-Hae Cho
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Dental Research Institute, Yangsan, Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea