1
Mayer C, Pepe A, Hossain S, Karner B, Arnreiter M, Kleesiek J, Schmid J, Janisch M, Deutschmann H, Fuchsjäger M, Zimpfer D, Egger J, Mächler H. Type B Aortic Dissection CTA Collection with True and False Lumen Expert Annotations for the Development of AI-based Algorithms. Sci Data 2024; 11:596. PMID: 38844767; PMCID: PMC11156948; DOI: 10.1038/s41597-024-03284-2.
Abstract
Aortic dissections (ADs) are serious conditions of the main artery of the human body, in which a tear in the inner layer of the aortic wall leads to the formation of a new blood flow channel, termed the false lumen. ADs affecting the aorta distal to the left subclavian artery are classified as Stanford type B aortic dissections (type B AD). Type B AD is linked to substantial morbidity and mortality; however, the course of the disease is often unpredictable for the individual patient. Computed tomography angiography (CTA) is the gold standard for the diagnosis of type B AD. To advance the tools available for the analysis of CTA scans, we provide a CTA collection of 40 type B AD cases from clinical routine with corresponding expert segmentations of the true and false lumina. Segmented CTA scans may aid clinicians in decision making, especially if the process can be fully automated. The data collection is therefore meant to be used to develop, train and test algorithms.
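Segmentation collections of this kind are typically used to train and score models against the expert masks; the standard overlap metric for that comparison is the Dice similarity coefficient. A minimal NumPy sketch (the toy masks below are illustrative, not taken from the dataset):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy example: a 4x4 "slice" with partially overlapping lumen masks
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 1, 0],
                  [1, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(dice_coefficient(pred, truth))  # 2*4/(4+6) = 0.8
```

The same function applies unchanged to 3D volumes, since the reductions operate over all axes.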
Affiliation(s)
- Christian Mayer
- Division of Cardiac Surgery, Department of Surgery, Medical University of Graz, Auenbruggerplatz 29, 8036, Graz, Austria
- Antonio Pepe
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Inffeldgasse 16/II, 8010, Graz, Austria
- Sophie Hossain
- Division of Cardiac Surgery, Department of Surgery, Medical University of Graz, Auenbruggerplatz 29, 8036, Graz, Austria
- Barbara Karner
- Division of Cardiac Surgery, Department of Surgery, Medical University of Graz, Auenbruggerplatz 29, 8036, Graz, Austria
- Melanie Arnreiter
- Division of Cardiac Surgery, Department of Surgery, Medical University of Graz, Auenbruggerplatz 29, 8036, Graz, Austria
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), AI-guided Therapies (AIT), Essen University Hospital (AöR), Girardetstraße 2, 45131, Essen, Germany
- Johannes Schmid
- Division of General Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 9, 8036, Graz, Austria
- Michael Janisch
- Division of General Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 9, 8036, Graz, Austria
- Hannes Deutschmann
- Division of General Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 9, 8036, Graz, Austria
- Michael Fuchsjäger
- Division of General Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 9, 8036, Graz, Austria
- Daniel Zimpfer
- Division of Cardiac Surgery, Department of Surgery, Medical University of Graz, Auenbruggerplatz 29, 8036, Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Inffeldgasse 16/II, 8010, Graz, Austria
- Institute for Artificial Intelligence in Medicine (IKIM), AI-guided Therapies (AIT), Essen University Hospital (AöR), Girardetstraße 2, 45131, Essen, Germany
- Heinrich Mächler
- Division of Cardiac Surgery, Department of Surgery, Medical University of Graz, Auenbruggerplatz 29, 8036, Graz, Austria
2
Qi Z, Jin H, Xu X, Wang Q, Gan Z, Xiong R, Zhang S, Liu M, Wang J, Ding X, Chen X, Zhang J, Nimsky C, Bopp MHA. Head model dataset for mixed reality navigation in neurosurgical interventions for intracranial lesions. Sci Data 2024; 11:538. PMID: 38796526; PMCID: PMC11127921; DOI: 10.1038/s41597-024-03385-y.
Abstract
Mixed reality navigation (MRN) technology is emerging as an increasingly significant topic in neurosurgery. MRN enables neurosurgeons to "see through" the head with an interactive, hybrid visualization environment that merges virtual- and physical-world elements. Offering immersive, intuitive, and reliable guidance for preoperative and intraoperative intervention on intracranial lesions, MRN shows potential as an economical and user-friendly alternative to standard neuronavigation systems. However, the clinical research and development of MRN systems face two obstacles: recruiting a sufficient number of patients within a limited timeframe is difficult, and low-cost, commercially available, medically significant head phantoms are equally hard to acquire. To surmount these obstacles and accelerate the development of novel MRN systems, this study presents a dataset designed for MRN system development and testing in neurosurgery. It includes CT and MRI data from 19 patients with intracranial lesions, together with derived 3D models of anatomical structures and validation references. The models are available in Wavefront object (OBJ) and Stereolithography (STL) formats, supporting the creation and assessment of neurosurgical MRN applications.
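Wavefront OBJ is a plain-text format, so the released models can be inspected with a few lines of standard-library Python. The tiny parser below is an illustrative sketch (not part of the dataset's tooling) that extracts vertex coordinates and face indices:

```python
def parse_obj(text: str):
    """Parse vertices ('v x y z') and faces ('f i j k ...') from Wavefront OBJ text."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == "f":
            # Face entries may look like 'i', 'i/t' or 'i/t/n'; keep the vertex index only
            faces.append(tuple(int(p.split("/")[0]) for p in parts[1:]))
    return vertices, faces

# A single triangle as a minimal OBJ document
obj_text = """
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""
vertices, faces = parse_obj(obj_text)
print(len(vertices), len(faces))  # 3 1
```

For real meshes, a library such as trimesh handles OBJ and STL (including binary STL) more robustly; the sketch only illustrates the file layout.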
Affiliation(s)
- Ziyu Qi
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043, Marburg, Germany
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Haitao Jin
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- NCO School, Army Medical University, 050081, Shijiazhuang, China
- Xinghua Xu
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Qun Wang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Zhichao Gan
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Ruochu Xiong
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Department of Neurosurgery, Division of Medicine, Graduate School of Medical Sciences, Kanazawa University, Takara-machi 13-1, 920-8641, Kanazawa, Ishikawa, Japan
- Shiyu Zhang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Minghang Liu
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Jingyue Wang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Xinyu Ding
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Medical School of Chinese PLA General Hospital, 100853, Beijing, China
- Xiaolei Chen
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Jiashu Zhang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, 100853, Beijing, China
- Christopher Nimsky
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043, Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043, Marburg, Germany
- Miriam H A Bopp
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043, Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043, Marburg, Germany
3
Gupta P, Heffter T, Zubair M, Hsu IC, Burdette EC, Diederich CJ. Treatment Planning Strategies for Interstitial Ultrasound Ablation of Prostate Cancer. IEEE Open J Eng Med Biol 2024; 5:362-375. PMID: 38899026; PMCID: PMC11186654; DOI: 10.1109/ojemb.2024.3397965.
Abstract
PURPOSE To develop patient-specific 3D models using Finite-Difference Time-Domain (FDTD) simulations and pre-treatment planning tools for the selective thermal ablation of prostate cancer with interstitial ultrasound, integrated with an FDA 510(k)-cleared catheter-based interstitial ultrasound applicator and delivery system. METHODS A generalized 3D "prostate" model was developed to generate temperature and thermal dose profiles for different applicator operating parameters and anticipated perfusion ranges. A priori planning, based upon these pre-calculated lethal thermal dose and iso-temperature clouds, was devised for iterative device selection and positioning. Full 3D patient-specific anatomic modeling of the actual placement of single or multiple applicators to conformally ablate target regions can be applied, with optional integrated pilot-point temperature-based feedback control and urethral/rectal cooling. These numerical models were verified against previously reported ex-vivo experimental results obtained in soft tissues. RESULTS For generic prostate tissue, 360 treatment schemes were simulated based on the number of transducers (1-4), applied power (8-20 W/cm²), heating time (5, 7.5, 10 min), and blood perfusion (0, 2.5, 5 kg/m³/s) using forward treatment modeling. Selectable ablation zones ranged from 0.8-3.0 cm and 0.8-5.3 cm in the radial and axial directions, respectively. 3D patient-specific thermal treatment modeling for 12 cases of T2/T3 prostate disease demonstrated the applicability of the workflow and technique for focal, quadrant and hemi-gland ablation. A temperature threshold (e.g., Tthres = 52 °C) at the treatment margin, emulating placement of invasive temperature sensing, can be applied for pilot-point feedback control to improve the conformality of thermal ablation. Alternatively, binary power control (e.g., Treg = 45 °C) can be applied to regulate the applied power level and hold the surrounding temperature at a safe maximum threshold for the set heating time. CONCLUSIONS Prostate-specific simulations of interstitial ultrasound applicators were used to generate a library of thermal-dose distributions to visually optimize and set applicator positioning and directivity during a priori pre-procedure treatment planning. Anatomic 3D forward treatment planning in patient-specific models, along with optional temperature-based feedback control, demonstrated single- and multi-applicator implant strategies to effectively ablate focal disease while affording protection of normal tissues.
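The lethal thermal dose used in such planning is conventionally quantified as cumulative equivalent minutes at 43 °C (CEM43, the Sapareto-Dewey formulation). The sketch below computes it from a sampled temperature history; it is a generic illustration of the standard formula, not the authors' FDTD code:

```python
def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 degC for a temperature series
    sampled every dt_min minutes (Sapareto-Dewey thermal dose)."""
    dose = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25  # standard R values above/below 43 degC
        dose += dt_min * r ** (43.0 - t)
    return dose

# 10 minutes held at exactly 43 degC -> 10 equivalent minutes
print(cem43([43.0] * 10, dt_min=1.0))  # 10.0
# 5 minutes at 46 degC -> 5 * 0.5**(-3) = 40 equivalent minutes
print(cem43([46.0] * 5, dt_min=1.0))  # 40.0
```

A commonly used lethal threshold in soft tissue is 240 CEM43; simulated dose clouds at that level delineate the predicted ablation zone.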
Affiliation(s)
- Pragya Gupta
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA 94115, USA
- Muhammad Zubair
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA 94305, USA
- I-Chow Hsu
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA 94115, USA
- Chris J. Diederich
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA 94115, USA
4
Ferreira A, Li J, Pomykala KL, Kleesiek J, Alves V, Egger J. GAN-based generation of realistic 3D volumetric data: A systematic review and taxonomy. Med Image Anal 2024; 93:103100. PMID: 38340545; DOI: 10.1016/j.media.2024.103100.
Abstract
With the massive proliferation of data-driven algorithms such as deep learning-based approaches, the availability of high-quality data is of great interest. Volumetric data are very important in medicine, with uses ranging from disease diagnosis to therapy monitoring. When a dataset is sufficiently large, models can be trained to help doctors with these tasks. Unfortunately, there are scenarios in which large amounts of data are unavailable; rare diseases and privacy issues, for example, can restrict data availability. In non-medical fields, the high cost of obtaining enough high-quality data can also be a concern. A solution to these problems can be the generation of realistic synthetic data using Generative Adversarial Networks (GANs). Such mechanisms are a valuable asset, especially in healthcare, where the data must be of good quality, realistic, and free of privacy issues; accordingly, most publications on volumetric GANs are within the medical domain. In this review, we summarize works that generate realistic volumetric synthetic data using GANs, outlining GAN-based methods in these areas with common architectures, loss functions and evaluation metrics, including their advantages and disadvantages. We present a novel taxonomy, evaluations, challenges, and research opportunities to provide a holistic overview of the current state of volumetric GANs.
Affiliation(s)
- André Ferreira
- Center Algoritmi/LASI, University of Minho, Braga, 4710-057, Portugal; Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, 52074 Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, 52074 Aachen, Germany
- Jianning Li
- Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany
- Kelsey L Pomykala
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, Essen, 45147, Germany; Department of Physics, TU Dortmund University, Otto-Hahn-Straße 4, 44227 Dortmund, Germany
- Victor Alves
- Center Algoritmi/LASI, University of Minho, Braga, 4710-057, Portugal
- Jan Egger
- Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria
5
Quispe-Enriquez OC, Valero-Lanzuela JJ, Lerma JL. Smartphone Photogrammetric Assessment for Head Measurements. Sensors (Basel) 2023; 23:9008. PMID: 37960704; PMCID: PMC10648760; DOI: 10.3390/s23219008.
Abstract
The assessment of cranial deformation is relevant in the medical care of infants, especially in paediatric neurosurgery and paediatrics. To address this demand, the smartphone-based solution PhotoMeDAS has been developed, harnessing mobile devices to create three-dimensional (3D) models of infants' heads and, from them, automatic cranial deformation reports. It is therefore crucial to examine the accuracy achievable with different mobile devices under similar conditions, so that prospective users can consider this aspect when using the smartphone-based solution. This study compares the linear accuracy obtained from three smartphone models (Samsung Galaxy S22 Ultra, S22, and S22+). Twelve measurements are taken with each mobile device using a coded cap on a head mannequin. For processing, three different bundle adjustment implementations are tested with and without self-calibration. After photogrammetric processing, the 3D coordinates are obtained. Spatially distributed distances across the head are compared between PhotoMeDAS and ground truth established with a Creaform ACADEMIA 50 white-light 3D scanner. With a homogeneous scale factor for all the smartphones, the average accuracy is -1.15 ± 0.53 mm for the S22, 0.95 ± 0.40 mm for the S22+, and -1.8 ± 0.45 mm for the S22 Ultra. Notably, a substantial improvement is achieved regardless of whether the scale factor is introduced per device.
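The per-device figures quoted above are signed mean ± standard deviation of the distance differences against the scanner ground truth. That summary statistic can be reproduced with a short, generic sketch (the measurement values below are made up for illustration, not the study's data):

```python
import statistics

def accuracy_summary(measured_mm, truth_mm):
    """Signed mean and sample standard deviation of (measured - ground truth), in mm."""
    diffs = [m - t for m, t in zip(measured_mm, truth_mm)]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Illustrative head distances (mm): twelve measurements vs. a scanner reference
measured = [100.2, 99.1, 101.0, 100.5, 99.8, 100.1,
            99.5, 100.9, 100.0, 99.7, 100.4, 99.9]
truth = [100.0] * 12
mean_err, sd_err = accuracy_summary(measured, truth)
print(f"{mean_err:+.2f} ± {sd_err:.2f} mm")
```

The sign of the mean error distinguishes systematic under- from over-estimation, which is why the study reports negative values for two of the three devices.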
Affiliation(s)
- Omar C. Quispe-Enriquez
- Photogrammetry and Laser Scanner Research Group (GIFLE), Department of Cartographic Engineering, Geodesy and Photogrammetry, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
6
Egger J, Gsaxner C, Pepe A, Pomykala KL, Jonske F, Kurz M, Li J, Kleesiek J. Medical deep learning-A systematic meta-review. Comput Methods Programs Biomed 2022; 221:106874. PMID: 35588660; DOI: 10.1016/j.cmpb.2022.106874.
Abstract
Deep learning has remarkably impacted several scientific disciplines over the last few years. In image processing and analysis, for example, deep learning algorithms have been able to outperform other cutting-edge methods. Deep learning has also delivered state-of-the-art results in tasks like autonomous driving, outclassing previous attempts, and there are even instances where deep learning has outperformed humans, for example in object recognition and gaming. Deep learning is likewise showing vast potential in the medical domain. With the collection of large quantities of patient records and data, and a trend towards personalized treatments, there is a great need for automated and reliable processing and analysis of health information. Patient data are collected not only in clinical centers, like hospitals and private practices, but also by mobile healthcare apps and websites. The abundance of collected patient data and the recent growth of the deep learning field have resulted in a large increase in research efforts. In Q2/2020, the search engine PubMed already returned over 11,000 results for the search term 'deep learning', and around 90% of these publications were from the preceding three years. However, even though PubMed represents the largest search engine in the medical field, it does not cover all medical-related publications. Hence, a complete overview of the field of 'medical deep learning' is almost impossible to obtain, and acquiring a full overview of medical sub-fields is becoming increasingly difficult. Nevertheless, several review and survey articles about medical deep learning have been published within the last few years. They focus, in general, on specific medical scenarios, like the analysis of medical images containing specific pathologies. With these surveys as a foundation, the aim of this article is to provide the first high-level, systematic meta-review of medical deep learning surveys.
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Kelsey L Pomykala
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Frederic Jonske
- Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Manuel Kurz
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147 Essen, Germany
7
Coatings Functionalization via Laser versus Other Deposition Techniques for Medical Applications: A Comparative Review. Coatings 2022. DOI: 10.3390/coatings12010071.
Abstract
The development of new biological devices in response to market demands requires continuous efforts to improve product functionalization, based upon an expansion of the materials used and their fabrication techniques. One viable solution consists of a functionalization substrate covered by layers deposited via an appropriate technique. Laser techniques ensure enhanced coating adherence to the substrate and improved biological characteristics without compromising the mechanical properties of the functionalized medical device. This is a review of the main laser techniques involved. We mainly refer to pulsed laser deposition, matrix-assisted techniques, and laser simple and double writing, versus other well-known deposition methods such as magnetron sputtering, 3D bioprinting, inkjet printing, extrusion, solenoid, fused-deposition modeling, plasma spray (PS), and dip coating. All these techniques can be extended to surface functionalization, changing the local morphology, chemistry, and crystal structure that govern the biomaterial's behavior in the chosen application. Laser surface functionalization is strictly controlled within a confined area, delivering a large amount of energy concisely. Laser-deposit performances are presented in comparison with reported data obtained by other techniques.
8
Bori E, Pancani S, Vigliotta S, Innocenti B. Validation and accuracy evaluation of automatic segmentation for knee joint pre-planning. Knee 2021; 33:275-281. PMID: 34739958; DOI: 10.1016/j.knee.2021.10.016.
Abstract
BACKGROUND Proper use of three-dimensional (3D) models generated from medical imaging data in clinical preoperative planning, training and consultation rests on the proven accuracy with which they replicate the patient's anatomy. This study therefore investigated the dimensional accuracy of 3D reconstructions of the knee joint generated from computed tomography scans via automatic segmentation, comparing them with 3D models generated through manual segmentation. METHODS Three unpaired, fresh-frozen right legs were investigated. Three-dimensional models of the femur and the tibia of each leg were manually segmented using commercial software and compared in terms of geometrical accuracy with the 3D models automatically segmented using proprietary software. Bony landmarks were identified and used to calculate clinically relevant distances: femoral epicondylar distance; posterior femoral epicondylar distance; femoral trochlear groove length; and tibial knee center tubercle distance (TKCTD). Pearson's correlation coefficient and Bland-Altman plots were used to evaluate the level of agreement between the measured distances. RESULTS Differences between parameters measured on manually and automatically segmented 3D models were below 1 mm (range: -0.06 to 0.72 mm), except for TKCTD (between 1.00 and 1.40 mm in two specimens). In addition, there was a significant, strong correlation between measurements. CONCLUSIONS The results obtained are comparable to those reported in previous studies investigating the accuracy of 3D bone reconstruction. Automatic segmentation techniques can be used to quickly reconstruct reliable 3D models of bone anatomy, and these results may help this technology spread in preoperative and operative settings, where it has shown considerable potential.
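Agreement between paired manual and automatic measurements of this kind is conventionally assessed with Pearson's r together with Bland-Altman bias and 95% limits of agreement; both reduce to a few lines of NumPy. The values below are illustrative, not the study's data:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Mean difference (bias) and 95% limits of agreement between paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

manual = [52.1, 48.3, 50.7, 49.9, 51.2, 47.8]   # e.g. epicondylar distances, mm
auto   = [52.4, 48.1, 51.2, 50.3, 51.0, 48.5]
bias, lo, hi = bland_altman_limits(auto, manual)
r = np.corrcoef(manual, auto)[0, 1]  # Pearson's correlation coefficient
print(f"bias {bias:+.2f} mm, LoA [{lo:.2f}, {hi:.2f}], r = {r:.3f}")
```

The Bland-Altman plot itself simply scatters the pairwise differences against the pairwise means, with horizontal lines at the bias and the two limits.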
Affiliation(s)
- Edoardo Bori
- BEAMS Department, Université Libre de Bruxelles, Bruxelles, Belgium
9
Gsaxner C, Pepe A, Li J, Ibrahimpasic U, Wallner J, Schmalstieg D, Egger J. Augmented Reality for Head and Neck Carcinoma Imaging: Description and Feasibility of an Instant Calibration, Markerless Approach. Comput Methods Programs Biomed 2021; 200:105854. PMID: 33261944; DOI: 10.1016/j.cmpb.2020.105854.
Abstract
BACKGROUND AND OBJECTIVE Augmented reality (AR) can help to overcome current limitations in computer-assisted head and neck surgery by granting "X-ray vision" to physicians. Still, the acceptance of AR in clinical applications is limited by technical and clinical challenges. We aim to demonstrate the benefit of a marker-free, instant-calibration AR system for head and neck cancer imaging, which we hypothesize to be acceptable and practical for clinical use. METHODS We implemented a novel AR system for visualization of medical image data registered with the head or face of the patient prior to intervention. Our system allows the localization of head and neck carcinoma in relation to the outer anatomy. It does not require markers or stationary infrastructure, provides instant calibration, and allows 2D and 3D multi-modal visualization for head and neck surgery planning via an AR head-mounted display. We evaluated our system in a pre-clinical user study with eleven medical experts. RESULTS Medical experts rated our application with a System Usability Scale score of 74.8 ± 15.9, which signifies above-average, good usability and clinical acceptance. Physicians needed an average of 12.7 ± 6.6 minutes of training before they could navigate the application without assistance. CONCLUSIONS Our AR system is characterized by a slim and easy setup, short training time, and high usability and acceptance. It therefore presents a promising, novel tool for visualizing head and neck cancer imaging and pre-surgically localizing target structures.
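The usability figure quoted above is a System Usability Scale (SUS) score, computed from ten 1-5 Likert responses with the standard rule: odd items contribute (response − 1), even items (5 − response), and the sum is scaled by 2.5 to give a 0-100 score. A minimal sketch with made-up responses:

```python
def sus_score(responses):
    """System Usability Scale score (0-100) from ten 1-5 Likert responses, item 1 first."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A fully neutral questionnaire (all 3s) scores exactly 50
print(sus_score([3] * 10))  # 50.0
```

Scores above roughly 68 are commonly interpreted as above-average usability, which is the reading applied to the 74.8 reported here.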
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5, 8036 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria
- Jianning Li
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria
- Una Ibrahimpasic
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Jürgen Wallner
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5, 8036 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria; Department of Cranio-Maxillofacial Surgery, AZ Monica Hospital Antwerp and Antwerp University Hospital, Antwerp, Belgium
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5, 8036 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria
10
Lungu AJ, Swinkels W, Claesen L, Tu P, Egger J, Chen X. A review on the applications of virtual reality, augmented reality and mixed reality in surgical simulation: an extension to different kinds of surgery. Expert Rev Med Devices 2020; 18:47-62. PMID: 33283563; DOI: 10.1080/17434440.2021.1860750.
Abstract
Background: Research proves that the apprenticeship model, which is the gold standard for training surgical residents, is obsolete. For that reason, there is a continuing effort toward the development of high-fidelity surgical simulators to replace the apprenticeship model. Applying Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) in surgical simulators increases their fidelity, level of immersion and overall experience. Areas covered: The objective of this review is to provide a comprehensive overview of the application of VR, AR and MR in distinct surgical disciplines, including maxillofacial surgery and neurosurgery. Current developments in these areas, as well as potential future directions, are discussed. Expert opinion: The key components for incorporating VR into surgical simulators are visual and haptic rendering. These components ensure that the user is completely immersed in the virtual environment and can interact with it as in the physical world. The key components for the application of AR and MR in surgical simulators are the tracking system and visual rendering. The advantages of these surgical simulators include the ability to perform user evaluations and to increase the training frequency of surgical residents.
Affiliation(s)
- Abel J Lungu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Wout Swinkels
- Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Luc Claesen
- Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Puxun Tu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Graz, Austria; The Laboratory of Computer Algorithms for Medicine, Medical University of Graz, Graz, Austria
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China