1. Yu J, Sang Z, Ren Z, Xu Q, Wang Y, Liao H. Initial implementation of surgical guide design utilizing digital medicine for lateral orbital decompression surgery. J Craniomaxillofac Surg 2024;52:432-437. [PMID: 38448333] [DOI: 10.1016/j.jcms.2024.02.025]
Abstract
This study aimed to assess the feasibility of utilizing a surgical guide, designed through digital medical technology, in lateral orbital decompression surgery. METHODS: In total, 18 patients with thyroid-associated ophthalmopathy (TAO), who underwent orbital balance decompression surgery at the Affiliated Eye Hospital of Nanchang University between September 2018 and August 2022, were included. Orbital CT scanning was performed on all patients with TAO, and Mimics 21.0 software was used to reconstruct a three-dimensional model of the orbit from the CT data. The osteotomy guide plate for lateral orbital decompression surgery was designed using 3-matic 13.0 software, adhering to the criteria of surgical effectiveness and safety. The surgical positioning guide was designed using Geomagic Wrap 21.0. Once printed, the surgical guide was sterilized with low-temperature plasma and applied during surgery. RESULTS: Of the nine patients treated using a surgical navigation system, three experienced cerebrospinal fluid leakage complications during the procedure, and two exhibited inadequate bone removal along the lateral wall. In contrast, among the nine patients treated with surgical guides, no intraoperative cerebrospinal fluid leakage or evidence of insufficient lateral wall bone removal was observed, a statistically significant difference between the two cohorts (p = 0.046). Postoperative improvements were notable in best-corrected visual acuity (BCVA) and exophthalmos for patients with extremely severe TAO. CONCLUSION: The surgical guide, designed with digital medical technology, was shown to be an effective and safe auxiliary tool in lateral orbital decompression surgery. It not only helps reduce the incidence of intraoperative complications, but also enhances the accuracy and safety of surgery. These improvements offer robust support for continued exploration of this approach in clinical practice.
Affiliation(s)
- Jinhai Yu
- School of Optometry, Jiangxi Medical College, Nanchang University, China; Jiangxi Research Institute of Ophthalmology and Visual Science, China; Jiangxi Provincial Key Laboratory for Ophthalmology, China
- Zexi Sang
- School of Optometry, Jiangxi Medical College, Nanchang University, China; Jiangxi Research Institute of Ophthalmology and Visual Science, China; Jiangxi Provincial Key Laboratory for Ophthalmology, China
- Zhangjun Ren
- School of Optometry, Jiangxi Medical College, Nanchang University, China; Jiangxi Research Institute of Ophthalmology and Visual Science, China; Jiangxi Provincial Key Laboratory for Ophthalmology, China
- Qihua Xu
- The Affiliated Eye Hospital, Jiangxi Medical College, Nanchang University, China; Jiangxi Clinical Research Center for Ophthalmic Disease, China
- Yaohua Wang
- The Affiliated Eye Hospital, Jiangxi Medical College, Nanchang University, China; Jiangxi Clinical Research Center for Ophthalmic Disease, China
- Hongfei Liao
- The Affiliated Eye Hospital, Jiangxi Medical College, Nanchang University, China; Jiangxi Clinical Research Center for Ophthalmic Disease, China
2. Jeyaraman M, Jeyaraman N, Ramasubramanian S, Nallakumarasamy A, Shyam A. Revolutionizing Orthopedic Surgery: The Integration of Holographic Technology. J Orthop Case Rep 2024;14:5-9. [PMID: 38560302] [PMCID: PMC10976551] [DOI: 10.13107/jocr.2024.v14.i03.4266]
Abstract
Orthopedic surgery, traditionally reliant on 2D imaging tools such as X-rays and magnetic resonance imaging (MRI), is undergoing a revolutionary change with the introduction of holographic technology. Initially a concept from science fiction, used in entertainment and data representation, holography now offers groundbreaking applications in medicine, especially in orthopedics. Conceived by Dennis Gabor in 1948, holographic imaging has evolved significantly, providing real-time, three-dimensional visualizations of human anatomy, thereby aiding surgeons in complex procedures [1]. This technology enhances surgical precision through high-resolution, interactive representations of patient-specific anatomical structures, leading to more accurate planning and less invasive surgeries, which are crucial for better patient outcomes [2, 3]. This integration signifies a paradigm shift in surgical practice, equipping surgeons to visualize bones, joints, and tissues with unprecedented detail and immersion, similar to moving from radiographs to 3D computed tomography (CT) scans but with the added benefits of interactivity and real-time manipulation. However, challenges exist, including the cost of the technology, the learning curve for professionals, extensive training requirements, and maintaining patient safety and medical standards in stringent regulatory environments [4]. This editorial provides an overview of the transformative potential of holographic technology in orthopedic surgery, discussing its historical evolution, current applications, challenges, and prospects, and emphasizing the need for cautious optimism and sustainable integration to enhance patient care and surgical outcomes.
Affiliation(s)
- Madhan Jeyaraman
- Department of Orthopaedics, ACS Medical College and Hospital, Dr MGR Educational and Research Institute, Chennai, Tamil Nadu, India
- Naveen Jeyaraman
- Department of Orthopaedics, ACS Medical College and Hospital, Dr MGR Educational and Research Institute, Chennai, Tamil Nadu, India
- Swaminathan Ramasubramanian
- Department of Orthopaedics, Government Medical College, Omandurar Government Estate, Chennai, Tamil Nadu, India
- Arulkumar Nallakumarasamy
- Department of Orthopaedics, Jawaharlal Institute of Postgraduate Medical Education and Research (JIPMER), Karaikal, Puducherry, India
- Ashok Shyam
- Department of Orthopaedics, Sancheti Institute for Orthopaedics and Rehabilitation, Pune, Maharashtra, India
3. Morley CT, Arreola DM, Qian L, Lynn AL, Veigulis ZP, Osborne TF. Mixed Reality Surgical Navigation System: Positional Accuracy Based on Food and Drug Administration Standard. Surg Innov 2024;31:48-57. [PMID: 38019844] [PMCID: PMC10773158] [DOI: 10.1177/15533506231217620]
Abstract
BACKGROUND: Computer-assisted surgical navigation systems are designed to improve outcomes by providing clinicians with procedural guidance information. The use of new technologies, such as mixed reality, offers the potential for more intuitive, efficient, and accurate procedural guidance. The goal of this study was to assess the positional accuracy and consistency of a clinical mixed reality system that utilizes commercially available wireless head-mounted displays (HMDs), custom software, and localization instruments. METHODS: Independent teams using second-generation Microsoft HoloLens hardware, Medivis SurgicalAR software, and localization instruments tested the accuracy of the combined system at different institutions, times, and locations. The ASTM F2554-18 consensus standard for computer-assisted surgical systems, as recognized by the U.S. FDA, was used to measure performance; 288 tests were performed. RESULTS: The system demonstrated consistent results, with an average accuracy better than one millimeter (0.75 ± 0.37 mm SD). CONCLUSION: Independently acquired positional tracking accuracies exceeded those of conventional in-market surgical navigation tracking systems and FDA standards. Importantly, this performance was achieved at two different institutions, using an international testing standard, with a system that included a commercially available off-the-shelf wireless head-mounted display and software.
Affiliation(s)
- David M. Arreola
- US Department of Veterans Affairs, Palo Alto Healthcare System, Palo Alto, CA, USA
- Zachary P. Veigulis
- US Department of Veterans Affairs, Palo Alto Healthcare System, Palo Alto, CA, USA
- Department of Business Analytics, Tippie College of Business, University of Iowa, Iowa City, IA, USA
- Thomas F. Osborne
- US Department of Veterans Affairs, Palo Alto Healthcare System, Palo Alto, CA, USA
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
4. Kim SK, Lee Y, Hwang HR, Park SY. 3D human anatomy augmentation over a mannequin for the training of nursing skills. Technol Health Care 2024;32:1523-1533. [PMID: 37781830] [DOI: 10.3233/thc-230586]
Abstract
BACKGROUND: An in-depth understanding of human anatomy is the foundation for safety in nursing practice. Augmented reality is an emerging technology that can be used for integrative learning in nursing education. OBJECTIVE: The study aimed to develop a human anatomy-based skill training system and pilot test its usability and feasibility. METHODS: Twenty-seven nursing students participated in 3D anatomy-based skill training for intramuscular injection and Levin tube feeding using the HoloLens 2. Various user interfaces, including pictures, videos, animation graphics, and annotation boxes, assisted users with a comprehensive understanding of the step-by-step procedures for these techniques. A one-group pre-post test was conducted to observe changes in skill performance competency, usability, and learning satisfaction. RESULTS: After study participation, a statistically significant improvement in skill performance competency (p < 0.05) was observed. The usability results showed that students were satisfied with the usefulness of the program (9.55 ± 0.49) and scored highly for the intention to participate in other educational programs (9.62 ± 0.59). A high level of learning satisfaction was achieved (9.55 ± 0.49), with positive responses on fostering students' engagement and excitement in the application of cutting-edge technology. CONCLUSION: The 3D anatomy-based nursing skill training demonstrated good potential to improve learning outcomes and facilitate engagement in self-directed practice. It can be integrated into undergraduate nursing education as an assistant teaching tool, contributing to the combination of knowledge and practice.
Affiliation(s)
- Sun Kyung Kim
- Department of Nursing, Mokpo National University, Jeonnam, Korea
- Department of Biomedicine, Health and Life Convergence Sciences, BK21 Four, Mokpo National University, Jeonnam, Korea
- Biomedical and Healthcare Research Institute, Mokpo National University, Jeonnam, Korea
- Youngho Lee
- Department of Computer Engineering, Mokpo National University, Jeonnam, Korea
- Hye Ri Hwang
- Department of Nursing, Mokpo National University, Jeonnam, Korea
- Su Yeon Park
- Department of Nursing, Mokpo National University, Jeonnam, Korea
5. Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023;23:9872. [PMID: 38139718] [PMCID: PMC10748263] [DOI: 10.3390/s23249872]
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved functionality breakthroughs. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Further equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could improve their performance in IGS severalfold. The goal of this narrative review is to organize the key components of IGS in the aspects of medical image processing and visualization with new perspectives and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes the basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented, mixed, and virtual reality (AR/MR/VR) are enhancing performance in IGS. Further, we hope that this survey will shed some light on the future of IGS in the face of challenges and opportunities for the research directions of medical image processing and visualization.
Affiliation(s)
- Zhefan Lin
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Chen Lei
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
6. Park TY, Koh H, Lee W, Park SH, Chang WS, Kim H. Real-Time Acoustic Simulation Framework for tFUS: A Feasibility Study Using Navigation System. Neuroimage 2023;282:120411. [PMID: 37844771] [DOI: 10.1016/j.neuroimage.2023.120411]
Abstract
Transcranial focused ultrasound (tFUS), in which acoustic energy is focused on a small region in the brain through the skull, is a non-invasive therapeutic method with high spatial resolution and depth penetration. Image-guided navigation has been widely utilized to visualize the location of the acoustic focus in the cranial cavity. However, such systems are often inaccurate because of the significant aberrations caused by the skull. Therefore, acoustic simulations using numerical solvers have been widely adopted to compensate for this inaccuracy. Although simulation can predict the intracranial acoustic pressure field, real-time application during tFUS treatment is almost impossible due to the high computational cost. In this study, we propose a neural network-based real-time acoustic simulation framework and test its feasibility by implementing a simulation-guided navigation (SGN) system. Real-time acoustic simulation is performed using a 3D conditional generative adversarial network (3D-cGAN) model featuring residual blocks and multiple loss functions. This network was trained on data from a conventional numerical acoustic simulation program (k-Wave). The SGN system is then implemented by integrating real-time acoustic simulation with a conventional image-guided navigation system. The proposed system can provide simulation results at a frame rate of 5 Hz (i.e., about 0.2 s per frame), including all processing times. In numerical validation (3D-cGAN vs. k-Wave), the average peak intracranial pressure error was 6.8 ± 5.5%, and the average acoustic focus position error was 5.3 ± 7.7 mm. In experimental validation using a skull phantom (3D-cGAN vs. actual measurement), the average peak intracranial pressure error was 4.5%, and the average acoustic focus position error was 6.6 mm. These results demonstrate that the SGN system can predict the intracranial acoustic field according to transducer placement in real time.
Affiliation(s)
- Tae Young Park
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul 02792, Republic of Korea; Division of Bio-Medical Science and Technology, KIST School, Korea University of Science and Technology, Seoul 02792, Republic of Korea
- Heekyung Koh
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul 02792, Republic of Korea
- Wonhye Lee
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul 02792, Republic of Korea; Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- So Hee Park
- Department of Neurosurgery, Yeungnam University Medical Center, Daegu 42415, Republic of Korea
- Won Seok Chang
- Department of Neurosurgery, Brain Research Institute, Yonsei University College of Medicine, Seoul 04527, Republic of Korea
- Hyungmin Kim
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul 02792, Republic of Korea; Division of Bio-Medical Science and Technology, KIST School, Korea University of Science and Technology, Seoul 02792, Republic of Korea
7. Csernátony Z, Manó S, Szabó D, Soósné Horváth H, Kovács ÁÉ, Csámer L. Acetabular Revision with McMinn Cup: Development and Application of a Patient-Specific Targeting Device. Bioengineering (Basel) 2023;10:1095. [PMID: 37760197] [PMCID: PMC10526046] [DOI: 10.3390/bioengineering10091095]
Abstract
BACKGROUND: Surgery for severe periacetabular bone defects (Paprosky ≥ 2B) is a major challenge in current practice. Although solutions are available for this serious clinical problem, they all have their disadvantages as well as their advantages. An alternative method of reconstructing such extensive defects was the use of a cup with a stem to solve these revision situations. As the instrumentation offered is typically designed for scenarios where a significant bone defect is not present, our unique technique was developed for implantation in cases where reference points are missing. Our hypothesis was that a targeting device designed based on the CT scan of a patient's pelvis could facilitate the safe insertion of the guiding wire. METHODS: Briefly, our surgical solution consists of a two-step operation. If periacetabular bone loss was found to be more significant during revision surgery, all implants were removed, and two titanium marker screws were percutaneously inserted into the anterior iliac crest. Next, applying the metal artifact removal (MAR) algorithm, a CT scan of the pelvis was performed. Based on that, the dimensions and positioning of the cup to be inserted were determined, and a patient-specific 3D-printed targeting device made of biocompatible material was created to safely insert the guidewire, which is essential to the implantation process. RESULTS: In this study, the medical, engineering, and technical tasks related to the design, the surgical technique, and experiences from 17 surgical cases between February 2018 and July 2021 are reported. There were no surgical complications in any case. The implant had to be removed for septic reasons (independent of the technique) in a single case, consistent with the septic statistics for this type of surgery. There was no perforation of the linea terminalis of the pelvis due to the guiding method. Wound healing was uneventful, and the implant was fixed securely. Following rehabilitation, the joints were able to bear weight again. After one to four years of follow-up, patient satisfaction was high, and gait function improved markedly in all cases. CONCLUSIONS: Our results show that CT-based virtual surgical planning and, based on it, the use of a patient-specific 3D-printed aiming device is a reliable method for major hip surgeries with significant bone loss. This technique has also made it possible to perform these operations with minimal X-ray exposure.
Affiliation(s)
- Zoltán Csernátony
- Department of Orthopaedics and Traumatology, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Laboratory of Biomechanics, Department of Orthopaedics and Traumatology, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Sándor Manó
- Laboratory of Biomechanics, Department of Orthopaedics and Traumatology, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Dániel Szabó
- Department of Orthopaedics and Traumatology, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Hajnalka Soósné Horváth
- Laboratory of Biomechanics, Department of Orthopaedics and Traumatology, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Ágnes Éva Kovács
- Laboratory of Biomechanics, Department of Orthopaedics and Traumatology, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Loránd Csámer
- Laboratory of Biomechanics, Department of Orthopaedics and Traumatology, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
8. Hatzl J, Böckler D, Hartmann N, Meisenbacher K, Rengier F, Bruckner T, Uhl C. Mixed reality for the assessment of aortoiliac anatomy in patients with abdominal aortic aneurysm prior to open and endovascular repair: Feasibility and interobserver agreement. Vascular 2023;31:644-653. [PMID: 35404720] [DOI: 10.1177/17085381221081324]
Abstract
OBJECTIVES: The objective was to evaluate the feasibility and interobserver agreement of a Mixed Reality Viewer (MRV; Brainlab AG, Germany) in the assessment of aortoiliac vascular anatomy in abdominal aortic aneurysm (AAA) patients. METHODS: Fifty preoperative computed tomography angiographies (CTAs) of AAA patients were included. CTAs were assessed in a mixed reality (MR) environment with respect to aortoiliac anatomy according to a standardized protocol by two experienced observers. Additionally, all CTAs were independently assessed with the same protocol by the same observers using a conventional DICOM viewer (CV; GE Centricity PACS RA1000 Workstation, GE, United States) on a two-dimensional screen with multi-planar reconstructions. The protocol included four sets of items: calcification, dilatation, patency, and tortuosity, as well as the number of lumbar and renal arteries. Interobserver agreement (IA, Cohen's kappa, κ) was calculated for every item set. RESULTS: All CTAs could successfully be displayed in the MRV (100%). The MRV demonstrated equal or better IA in the assessment of anterior and posterior calcification (κMRV: 0.68 and 0.61; κCV: 0.33 and 0.45, respectively) as well as tortuosity (κMRV: 0.60, κCV: 0.48) and dilatation (κMRV: 0.68, κCV: 0.67). The CV demonstrated better IA in the assessment of patency (κMRV: 0.74, κCV: 0.93). The CV also identified significantly more lumbar arteries (CV: 379, MRV: 239, p < 0.01). CONCLUSIONS: The MRV is a feasible imaging viewing technology for clinical routine. Future efforts should aim at improving hologram quality and enabling accurate registration of the hologram with the physical patient.
Affiliation(s)
- Johannes Hatzl
- Department of Vascular and Endovascular Surgery, University Hospital Heidelberg, Heidelberg, Germany
- Dittmar Böckler
- Department of Vascular and Endovascular Surgery, University Hospital Heidelberg, Heidelberg, Germany
- Niklas Hartmann
- Department of Vascular and Endovascular Surgery, University Hospital Heidelberg, Heidelberg, Germany
- Katrin Meisenbacher
- Department of Vascular and Endovascular Surgery, University Hospital Heidelberg, Heidelberg, Germany
- Fabian Rengier
- Clinic for Diagnostic and Interventional Radiology, University Hospital Heidelberg, Heidelberg, Germany
- Thomas Bruckner
- Institute of Medical Biometry and Informatics (IMBI), Heidelberg University, Heidelberg, Germany
- Christian Uhl
- Department of Vascular and Endovascular Surgery, University Hospital Heidelberg, Heidelberg, Germany
9. Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023;85:102757. [PMID: 36706637] [DOI: 10.1016/j.media.2023.102757]
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main player in the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 until the year 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy including use case, technical methodology for registration and tracking, data sources, visualization, as well as validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interaction, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, yet only a few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature and paving the way for novel, innovative directions and translation into the medical routine.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
10. Zhang J, Wang C, Li X, Fu S, Gu W, Shi Z. Application of mixed reality technology in talocalcaneal coalition resection. Front Surg 2023;9:1084365. [PMID: 36684274] [PMCID: PMC9852772] [DOI: 10.3389/fsurg.2022.1084365]
Abstract
Objectives: With positive outcomes recorded, mixed reality (MR) technology has lately become popular in orthopedic surgery. However, few studies specifically address the utility of MR in talocalcaneal coalition (TCC) resection. The goal of this retrospective study is to assess clinical outcomes and examine the feasibility of using MR to assist TCC resection. Methods: Six consecutive patients with TCC diagnosed by computed tomography (CT), in whom nonoperative therapy had failed and who underwent MR-assisted TCC resection, were included in this study from March 2021 to December 2021. The feasibility and accuracy of TCC resection were assessed by postoperative radiography. The American Orthopaedic Foot & Ankle Society (AOFAS) hindfoot score and visual analog scale (VAS) score were used to assess recovery and pain level pre- and postoperation. Results: By superimposing the holographic model on the actual anatomy of the TCC using the MR system, the surgeon could accurately resect the TCC according to the preoperatively determined range. Additionally, no intraoperative X-ray imaging was necessary. Mean follow-up was 10.3 months, with a minimum of 6 months. There was a significant difference between the preoperative AOFAS score of 53.4 ± 3.8 and the 6-month follow-up AOFAS score of 97.3 ± 2.2 (p < 0.05), as well as between the preoperative VAS score of 8.1 ± 0.7 and the 6-month follow-up VAS score of 1.7 ± 0.4 (p < 0.05). All patients had clinical subtalar mobility without stiffness following surgery. Conclusion: The application of MR technology during TCC resection is practicable, effective, and radiation-free, giving surgeons satisfactory support.
Affiliation(s)
- Jieyuan Zhang
- Department of Orthopedic Surgery, Shanghai Sixth People’s Hospital, Shanghai, China
- Cheng Wang
- Department of Orthopedic Surgery, Shanghai Sixth People’s Hospital, Shanghai, China
- Xueqian Li
- Department of Orthopedic Surgery, Shanghai Sixth People’s Hospital, Shanghai, China
- Shaoling Fu
- Department of Orthopedic Surgery, Shanghai Sixth People’s Hospital, Shanghai, China
- Wenqi Gu
- Department of Orthopedic Surgery, Shanghai Sixth People’s Hospital, Shanghai, China; Department of Orthopedic Surgery, Shanghai Sixth People’s Hospital East Campus, Shanghai, China
- Zhongmin Shi
- Department of Orthopedic Surgery, Shanghai Sixth People’s Hospital, Shanghai, China
11
Killeen BD, Winter J, Gu W, Martin-Gomez A, Taylor RH, Osgood G, Unberath M. Mixed Reality Interfaces for Achieving Desired Views with Robotic X-ray Systems. Comput Methods Biomech Biomed Eng Imaging Vis 2022; 11:1130-1135. [PMID: 37555199 PMCID: PMC10406465 DOI: 10.1080/21681163.2022.2154272] [Received: 10/18/2022] [Accepted: 11/19/2022] [Indexed: 12/14/2022]
Abstract
Robotic X-ray C-arm imaging systems can precisely achieve any position and orientation relative to the patient. Communicating to the system, however, exactly which pose corresponds to a desired view is challenging. Currently, these systems are operated by the surgeon using joysticks, but this interaction paradigm is not necessarily effective because users may be unable to actuate more than a single axis of the system efficiently at a time. Moreover, novel robotic imaging systems, such as the Brainlab Loop-X, allow independent source and detector movements, adding even more complexity. To address this challenge, we consider complementary interfaces through which the surgeon can command robotic X-ray systems effectively. Specifically, we consider three interaction paradigms: (1) the use of a pointer to specify the principal ray of the desired view relative to the anatomy, (2) the same pointer combined with a mixed reality environment that synchronously renders digitally reconstructed radiographs from the tool's pose, and (3) the same mixed reality environment but with a virtual X-ray source in place of the pointer. Initial human-in-the-loop evaluation with an attending trauma surgeon indicates that mixed reality interfaces for robotic X-ray system control are promising and may substantially reduce the number of X-ray images acquired solely during "fluoro hunting" for the desired view or standard plane.
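The first interaction paradigm above, where a pointer specifies the principal ray relative to the anatomy, in its most reduced form amounts to placing the source and detector center on a line through the target. The following is a minimal geometric sketch only (the function name and distances are hypothetical, not from the paper; a real robotic C-arm additionally handles joint limits, collision avoidance, and calibration):

```python
import numpy as np

def carm_pose_from_ray(target, ray_dir, src_dist=600.0, det_dist=400.0):
    """Place the X-ray source and detector center on the desired principal
    ray through an anatomical target point (all units mm).

    A sketch under stated assumptions: the pose is fully determined by the
    ray, ignoring detector roll, joint limits, and collision constraints.
    """
    d = np.asarray(ray_dir, dtype=float)
    d = d / np.linalg.norm(d)                 # unit principal-ray direction
    target = np.asarray(target, dtype=float)
    source = target - src_dist * d            # source upstream of the target
    detector = target + det_dist * d          # detector downstream
    return source, detector
```

For example, a pointer aligned with the patient's anteroposterior axis at the isocenter would yield a source 600 mm behind and a detector 400 mm in front of the target along that axis.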
Affiliation(s)
- Benjamin D Killeen: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Jonas Winter: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Wenhao Gu: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Martin-Gomez: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Russell H Taylor: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Greg Osgood: Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Mathias Unberath: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
12
Zhou Z, Yang Z, Jiang S, Zhuo J, Zhu T, Ma S. Surgical Navigation System for Hypertensive Intracerebral Hemorrhage Based on Mixed Reality. J Digit Imaging 2022; 35:1530-1543. [PMID: 35819536 PMCID: PMC9712880 DOI: 10.1007/s10278-022-00676-x] [Received: 11/15/2021] [Revised: 06/24/2022] [Accepted: 06/28/2022] [Indexed: 10/17/2022] Open
Abstract
Hypertensive intracerebral hemorrhage (HICH) is an intracerebral bleeding disease that affects 2.5 per 10,000 people worldwide each year. An effective way to treat this disease is puncture through the dura with a brain puncture drill and tube; the accuracy of the insertion determines the quality of the surgery. In recent decades, surgical navigation systems have been widely used to improve the accuracy of surgery and minimize risks. Augmented reality- and mixed reality-based surgical navigation is a promising new technology for clinical use, aiming to improve the safety and accuracy of the operation. In this study, we present a novel multimodal mixed reality navigation system for HICH surgery in which medical images and virtual anatomical structures are aligned intraoperatively with the actual structures of the patient in a head-mounted device and adjusted in real time as the patient, under local anesthesia, moves; this approach helps the surgeon perform intraoperative navigation intuitively. A novel registration method registers the holographic space and serves as an intraoperative optical tracker, and a calibration method for the HICH surgical tools tracks the tools in real time. Phantom experiments revealed a mean registration error of 1.03 mm and an average time consumption of 12.9 min. In clinical use, the registration error was 1.94 mm and the time consumption was 14.2 min, showing that the system is sufficiently accurate and effective for clinical application.
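Registration errors like those reported above (1.03 mm on phantoms, 1.94 mm clinically) are conventionally obtained after a point-based rigid registration between the image space and the patient or holographic space. As an illustration of the general principle only (the paper describes its own, more involved registration method; the function names here are hypothetical), a least-squares rigid fit via SVD, the Kabsch algorithm, can be sketched as:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t such that dst ~ R @ src + t
    (Kabsch algorithm, no scaling). src and dst are corresponding Nx3 points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)         # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def registration_rms(src, dst, R, t):
    """RMS residual over the fitted correspondence points (same units as input)."""
    resid = (R @ np.asarray(src, float).T).T + t - np.asarray(dst, float)
    return float(np.sqrt((resid ** 2).sum(axis=1).mean()))
```

On noise-free correspondences the residual is essentially zero; the millimeter-scale errors in practice come from fiducial localization noise, tracking error, and tissue motion.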
Affiliation(s)
- Zeyang Zhou: School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Zhiyong Yang: School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Shan Jiang: School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Jie Zhuo: Department of Neurosurgery, Huanhu Hospital, Tianjin, 300350, China
- Tao Zhu: School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Shixing Ma: School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
13
Advances and Innovations in Ablative Head and Neck Oncologic Surgery Using Mixed Reality Technologies in Personalized Medicine. J Clin Med 2022; 11:jcm11164767. [PMID: 36013006 PMCID: PMC9410374 DOI: 10.3390/jcm11164767] [Received: 07/29/2022] [Revised: 08/10/2022] [Accepted: 08/12/2022] [Indexed: 11/17/2022] Open
Abstract
The benefit of computer-assisted planning in head and neck ablative and reconstructive surgery has been extensively documented over the last decade, and the approach has been shown to make surgical procedures safer. In the treatment of head and neck cancer, computer-assisted surgery can be used to visualize and estimate the location and extent of the tumor mass. Some software tools now even allow the structures of interest to be visualized in a mixed reality environment. However, the precise integration of mixed reality systems into daily clinical routine remains a challenge. To date, this technology is not yet fully integrated into clinical settings such as the tumor board, surgical planning for head and neck tumors, or medical and surgical education. As a consequence, the handling of these systems is still experimental, and decision-making based on the presented data is not yet widely practiced. The aim of this paper is to present a novel, user-friendly 3D planning and mixed reality software package and its potential application in ablative and reconstructive head and neck surgery.
14
Doughty M, Ghugre NR, Wright GA. Augmenting Performance: A Systematic Review of Optical See-Through Head-Mounted Displays in Surgery. J Imaging 2022; 8:jimaging8070203. [PMID: 35877647 PMCID: PMC9318659 DOI: 10.3390/jimaging8070203] [Received: 06/24/2022] [Revised: 07/15/2022] [Accepted: 07/18/2022] [Indexed: 02/01/2023] Open
Abstract
We conducted a systematic review of recent literature to understand the current challenges in the use of optical see-through head-mounted displays (OST-HMDs) for augmented reality (AR) assisted surgery. Using Google Scholar, 57 relevant articles from 1 January 2021 through 18 March 2022 were identified. Selected articles were then categorized based on a taxonomy that described the required components of an effective AR-based navigation system: data, processing, overlay, view, and validation. Our findings indicated a focus on orthopedic (n=20) and maxillofacial surgeries (n=8). For preoperative input data, computed tomography (CT) (n=34), and surface rendered models (n=39) were most commonly used to represent image information. Virtual content was commonly directly superimposed with the target site (n=47); this was achieved by surface tracking of fiducials (n=30), external tracking (n=16), or manual placement (n=11). Microsoft HoloLens devices (n=24 in 2021, n=7 in 2022) were the most frequently used OST-HMDs; gestures and/or voice (n=32) served as the preferred interaction paradigm. Though promising system accuracy in the order of 2–5 mm has been demonstrated in phantom models, several human factors and technical challenges—perception, ease of use, context, interaction, and occlusion—remain to be addressed prior to widespread adoption of OST-HMD led surgical navigation.
Affiliation(s)
- Mitchell Doughty (corresponding author): Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada; Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Nilesh R. Ghugre: Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada; Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Graham A. Wright: Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada; Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
15
Grunbeck IA, Teatini A, Kumar RP, Elle OJ, Wiig O. Evaluation and Comparison of Target Registration Error in Active and Passive Optical Tracking Systems. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:3476-3480. [PMID: 36085841 DOI: 10.1109/embc48229.2022.9871591] [Indexed: 06/15/2023]
Abstract
Optical tracking systems combined with imaging modalities such as computed tomography and magnetic resonance imaging are important parts of image guided surgery systems. By determining the location and orientation of surgical tools relative to a patient's reference system, tracking systems assist surgeons during the planning and execution of image guided procedures. Therefore, knowledge of the tracking system-induced error is of great importance. To this end, this study compared one passive and two active optical tracking systems in terms of their Target Registration Error. Two experiments were performed to measure the systems' accuracy, testing the impact of factors such as the size of the measuring volume, the length of surgical instruments, and environmental conditions, with orthopedic procedures in mind. In the performed experiments, the active systems achieved significantly higher accuracy than the tested passive system, with overall accuracies of 0.063 mm (SD = 0.025) and 0.259 mm (SD = 0.152), respectively.
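Target Registration Error, the metric compared in this study, is by definition measured at clinically relevant points that were not used to compute the registration, which distinguishes it from the fiducial residual. A generic sketch of the evaluation step, given an already estimated rigid transform (illustrative only; the function name is hypothetical and the paper's experimental protocol is more involved):

```python
import numpy as np

def target_registration_error(targets_src, targets_dst, R, t):
    """Per-target distance (same units as the input, e.g. mm) between the
    registered target points and their ground-truth positions.

    R (3x3 rotation) and t (3-vector) map source space to destination space.
    The targets must NOT be the fiducials used to estimate R and t,
    otherwise this measures fiducial residual, not TRE.
    """
    mapped = (R @ np.asarray(targets_src, float).T).T + t
    return np.linalg.norm(mapped - np.asarray(targets_dst, float), axis=1)
```

Reporting the mean and standard deviation of this per-target distance over repeated measurements yields accuracy figures of the form quoted above.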
16
Chytas D, Nikolaou VS. Mixed reality for visualization of orthopedic surgical anatomy. World J Orthop 2021; 12:727-731. [PMID: 34754828 PMCID: PMC8554346 DOI: 10.5312/wjo.v12.i10.727] [Received: 03/18/2021] [Revised: 06/16/2021] [Accepted: 08/30/2021] [Indexed: 02/06/2023] Open
Abstract
In the modern era, preoperative planning is substantially facilitated by artificial reality technologies, which permit a better understanding of patient anatomy, thus increasing the safety and accuracy of surgical interventions. In the field of orthopedic surgery, this increase in safety and accuracy improves treatment quality and patient outcomes. Artificial reality technologies, which include virtual reality (VR), augmented reality (AR), and mixed reality (MR), use digital images obtained from computed tomography or magnetic resonance imaging. VR replaces the user's physical environment with one that is computer generated. AR and MR have been defined as technologies that fuse the physical with the virtual environment, enabling the user to interact with both physical and virtual objects. MR has further been defined as a technology that, in contrast to AR, enables users to perceive the depth and perspective of the virtual models. We aimed to shed light on the role MR can play in the visualization of orthopedic surgical anatomy. The literature suggests that MR could be a valuable tool in orthopedic surgeons' hands for visualization of the anatomy. However, we note that confusion exists in the literature concerning the characteristics of MR. A clearer description of MR is therefore needed in orthopedic research, so that the potential of this technology can be understood more deeply.
Affiliation(s)
- Dimitrios Chytas: Department of Physiotherapy, University of Peloponnese, Sparta 23100, Greece
- Vasileios S Nikolaou: 2nd Department of Orthopedics, National and Kapodistrian University of Athens, Athens 15124, Greece
17
Augmented Reality, Virtual Reality and Artificial Intelligence in Orthopedic Surgery: A Systematic Review. Appl Sci (Basel) 2021. [DOI: 10.3390/app11073253] [Indexed: 12/26/2022]
Abstract
Background: The application of virtual and augmented reality technologies to orthopaedic surgery training and practice aims to increase the safety and accuracy of procedures and to reduce complications and costs. The purpose of this systematic review is to summarise the present literature on this topic while providing a detailed analysis of current flaws and benefits. Methods: A comprehensive search of the PubMed, Cochrane, CINAHL, and Embase databases was conducted from inception to February 2021. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were used to improve the reporting of the review. The Cochrane Risk of Bias Tool and the Methodological Index for Non-Randomized Studies (MINORS) were used to assess the quality and potential bias of the included randomized and non-randomized controlled trials, respectively. Results: Virtual reality has proven revolutionary for both resident training and preoperative planning. Thanks to augmented reality, orthopaedic surgeons can carry out procedures faster and more accurately, improving overall safety. Artificial intelligence (AI) is a promising technology with vast potential, but its use in orthopaedic surgery is currently limited to preoperative diagnosis. Conclusions: Extended reality technologies have the potential to reform orthopaedic training and practice, providing an opportunity for unidirectional growth towards a patient-centred approach.