1
Gadodia G, Evans M, Weunski C, Ho A, Cargill A, Martin C. Evaluation of an augmented reality navigational guidance platform for percutaneous procedures in a cadaver model. J Med Imaging (Bellingham) 2024; 11:062602. PMID: 38370135; PMCID: PMC10868591; DOI: 10.1117/1.jmi.11.6.062602.
Abstract
Purpose: The objective of this study is to evaluate the accuracy of an augmented reality (AR) navigational guidance system designed to improve visualization, guidance, and accuracy during percutaneous needle-based procedures, including biopsies and ablations. Approach: Using the HoloLens 2, the system registers and projects 3D CT-based models of segmented anatomy along with live ultrasound, fused with electromagnetically tracked instruments including ultrasound probes and needles, giving the operator comprehensive stereoscopic visualization for intraoperative planning and navigation during procedures. Tracked needles were guided to targets implanted in a cadaveric model using the system. Image fusion registration error (IFRE), the multimodality error measured as the post-registration distance between a corresponding point measured in the stereoscopic CT and tracked ultrasound coordinate systems, and target registration error (TRE), the Euclidean distance between needle tip and target after needle placement, were measured as registration and targeting accuracy metrics. A t-distribution was used for statistical analysis. Results: Three operators performed 36 total needle passes, 18 to measure IFRE and 18 to measure TRE on four targets. The average depth of each needle pass was 8.4 cm from skin to target center. Mean IFRE was 4.4 mm (H0: μ = 5 mm, p < 0.05). Mean TRE was 2.3 mm (H0: μ = 5 mm, p < 0.00001). Conclusions: The study demonstrated high registration and targeting accuracy of this AR navigational guidance system in percutaneous, needle-based procedures, suggesting the potential to improve clinical performance in percutaneous procedures such as ablations and biopsies.
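The targeting result above rests on a one-sample t-test of mean error against a 5 mm clinical threshold. As an illustration of that style of analysis (with made-up error values, not the study's data), a minimal sketch in Python:

```python
import math
from statistics import mean, stdev

def one_sample_t(samples, mu0):
    """t statistic for H0: mu = mu0; compare against a t-distribution
    with len(samples) - 1 degrees of freedom."""
    n = len(samples)
    return (mean(samples) - mu0) / (stdev(samples) / math.sqrt(n))

# Hypothetical target registration errors in mm (illustrative only)
tre_mm = [2.1, 2.5, 1.9, 2.4, 2.2, 2.7, 2.0, 2.6]
t_stat = one_sample_t(tre_mm, 5.0)
# A large negative t supports rejecting H0 that the mean error is 5 mm
```

With tight errors well below the threshold, the statistic is strongly negative, matching the very small p-values reported.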
Affiliation(s)
- Gaurav Gadodia
- VIR Chicago, Interventional Radiology, Chicago, Illinois, United States
- Crew Weunski
- MediView XR, Inc., Cleveland, Ohio, United States
- Amy Ho
- MediView XR, Inc., Cleveland, Ohio, United States
- Adam Cargill
- MediView XR, Inc., Cleveland, Ohio, United States
- Charles Martin
- Cleveland Clinic, Diagnostic Radiology, Interventional Radiology, Cleveland, Ohio, United States
2
Li H, Yan W, Zhao J, Ji Y, Qian L, Ding H, Zhao Z, Wang G. Navigate biopsy with ultrasound under augmented reality device: Towards higher system performance. Comput Biol Med 2024; 174:108453. PMID: 38636327; DOI: 10.1016/j.compbiomed.2024.108453.
Abstract
PURPOSE: Biopsies play a crucial role in determining the classification and staging of tumors. Ultrasound is frequently used in this procedure to provide real-time anatomical information. Using augmented reality (AR), surgeons can visualize ultrasound data and spatial navigation information seamlessly integrated with real tissues, facilitating faster and more precise biopsy operations. METHODS: We developed an augmented reality biopsy navigation system characterized by low display latency and high accuracy. Ultrasound data is read by an image capture card and streamed to Unity via network communication. In Unity, navigation information is rendered and transmitted to the HoloLens 2 device using holographic remoting. Concurrently, a retro-reflective tool tracking method is implemented on the HoloLens 2, enabling simultaneous tracking of the ultrasound probe and biopsy needle. Distinct navigation information is provided during in-plane and out-of-plane puncture. To evaluate the effectiveness of our system, we conducted a study involving ten participants, assessing puncture accuracy and biopsy time in comparison to traditional methods. RESULTS: Ultrasound images were streamed from the ultrasound device to the augmented reality headset with 122.49 ± 11.61 ms latency, of which only 16.22 ± 11.25 ms elapsed after data acquisition from the image capture card. Navigation accuracy reached 1.23 ± 0.68 mm in the image plane and 0.95 ± 0.70 mm outside the image plane, within a depth range of 200 mm. Remarkably, use of our system led to 98% and 95% success rates in out-of-plane and in-plane biopsy, respectively, among ten participants with little ultrasound experience. CONCLUSION: This paper introduces an AR-based ultrasound biopsy navigation system characterized by high navigation accuracy and minimal latency. The system provides distinct visualization content during in-plane and out-of-plane operations according to their different characteristics. The use case study in this paper showed that our system can help young surgeons perform biopsies faster and more accurately.
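Latency figures like those above are typically obtained by timestamping each frame at acquisition and again at display, then summarizing the per-frame differences. A minimal sketch (with hypothetical timestamps, not the paper's measurements):

```python
from statistics import mean, stdev

def latency_stats_ms(capture_ts, display_ts):
    """Mean and standard deviation of per-frame display latency, in ms."""
    lat = [d - c for c, d in zip(capture_ts, display_ts)]
    return mean(lat), stdev(lat)

# Hypothetical per-frame timestamps in ms (illustrative only)
capture = [0.0, 33.3, 66.7, 100.0]
display = [121.0, 155.5, 186.0, 226.0]
mean_ms, sd_ms = latency_stats_ms(capture, display)
```

The same pairing approach lets a system report latency for each pipeline stage separately, as the abstract does for the segment after the capture card.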
Affiliation(s)
- Haowei Li
- Biomedical Engineering, Tsinghua University, Shuang Qing Road, Beijing 100084, China
- Wenqing Yan
- School of Medicine, Tsinghua University, Shuang Qing Road, Beijing 100084, China
- Jiasheng Zhao
- Biomedical Engineering, Tsinghua University, Shuang Qing Road, Beijing 100084, China
- Yuqi Ji
- School of Medicine, Tsinghua University, Shuang Qing Road, Beijing 100084, China
- Long Qian
- Medivis Inc., 920 Broadway, New York, NY 10010, USA
- Hui Ding
- Biomedical Engineering, Tsinghua University, Shuang Qing Road, Beijing 100084, China
- Zhe Zhao
- School of Clinical Medicine, Tsinghua University, Shuang Qing Road, Beijing 100084, China; Orthopedics & Sports Medicine Center, Beijing Tsinghua Changgung Hospital, Li Tang Road, Beijing 100043, China
- Guangzhi Wang
- Biomedical Engineering, Tsinghua University, Shuang Qing Road, Beijing 100084, China
3
Abstract
BACKGROUND: In recent years, numerous innovative yet challenging surgeries, such as minimally invasive procedures, have introduced an overwhelming amount of new technology, increasing surgeons' cognitive load and potentially diluting their attention. Cognitive support technologies (CSTs) have been developed to reduce surgeons' cognitive load and minimize errors. Despite high demand, the field still lacks a systematic review. METHODS: PubMed, Web of Science, and IEEE Xplore were searched for literature published up to May 21, 2021. Studies aimed at reducing surgeons' cognitive load were included. Studies containing an experimental trial with real patients and real surgeons were prioritized, although phantom and animal studies were also included. Major outcomes assessed included surgical error, anatomical localization accuracy, total procedural time, and patient outcome. RESULTS: A total of 37 studies were included. Overall, implementations of CSTs showed better surgical performance than traditional methods. Most studies reported decreased error rates and increased efficiency. In terms of accuracy, most CSTs exceeded 90% accuracy in identifying anatomical markers, with an error margin below 5 mm. Most studies reported a decrease in surgical time, although some differences were not statistically significant. DISCUSSION: CSTs have been shown to reduce surgeons' mental workload. However, the limited ergonomic design of current CSTs has hindered their widespread use in the clinical setting. Overall, more clinical data from actual patients is needed to provide concrete evidence before ubiquitous implementation of CSTs.
Affiliation(s)
- Zhong Shi Zhang
- Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Yun Wu
- Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Bin Zheng
- Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
4
Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023; 23:9872. PMID: 38139718; PMCID: PMC10748263; DOI: 10.3390/s23249872.
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved functionality breakthroughs. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Further equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could enhance their performance in IGS severalfold. The goal of this narrative review is to organize the key components of IGS in the aspects of medical image processing and visualization with new perspectives and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes the basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented/mixed/virtual reality (AR/MR/VR) are enhancing performance in IGS. Further, we hope that this survey will shed some light on the future of IGS in the face of challenges and opportunities for the research directions of medical image processing and visualization.
Affiliation(s)
- Zhefan Lin
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Chen Lei
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
5
Saruwatari MS, Nguyen TN, Talari HF, Matisoff AJ, Sharma KV, Donoho KG, Basu S, Dwivedi P, Bost JE, Shekhar R. Assessing the Effect of Augmented Reality on Procedural Outcomes During Ultrasound-Guided Vascular Access. Ultrasound Med Biol 2023; 49:2346-2353. PMID: 37573178; PMCID: PMC10658651; DOI: 10.1016/j.ultrasmedbio.2023.07.011.
Abstract
OBJECTIVE Augmented reality devices are increasingly accepted in health care, though most applications involve education and pre-operative planning. A novel augmented reality ultrasound application, HoloUS, was developed for the Microsoft HoloLens 2 to project real-time ultrasound images directly into the user's field of view. In this work, we assessed the effect of using HoloUS on vascular access procedural outcomes. METHODS A single-center user study was completed with participants with (N = 22) and without (N = 12) experience performing ultrasound-guided vascular access. Users completed a venipuncture and aspiration task a total of four times: three times on study day 1, and once on study day 2 between 2 and 4 weeks later. Users were randomized to use conventional ultrasound during either their first or second task and the HoloUS application at all other times. Task completion time, numbers of needle re-directions, head adjustments and needle visualization rates were recorded. RESULTS For expert users, task completion time was significantly faster using HoloUS (11.5 s, interquartile range [IQR] = 6.5-23.5 s vs. 18.5 s, IQR = 11.0-36.5 s; p = 0.04). The number of head adjustments was significantly lower using the HoloUS app (1.0, IQR = 0.0-1.0 vs. 3.0, IQR = 1.0-5.0; p < 0.0001). No significant differences were identified in other measured outcomes. CONCLUSION This is the first investigation of augmented reality-based ultrasound-guided vascular access using the second-generation HoloLens. It demonstrates equivalent procedural efficiency and accuracy, with favorable usability, ergonomics and user independence when compared with traditional ultrasound techniques.
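Task completion times like those above are summarized as median and interquartile range (IQR) rather than mean, since completion times are typically right-skewed. A quick sketch of that summary with hypothetical values (not the study's data):

```python
from statistics import median, quantiles

# Hypothetical task completion times in seconds (illustrative only)
times_s = [6.5, 9.0, 11.5, 15.0, 23.5]

med = median(times_s)
q1, _, q3 = quantiles(times_s, n=4)  # quartiles; default 'exclusive' method
iqr_bounds = (q1, q3)                # report as "median (IQR = q1-q3)"
```

Reporting median with IQR bounds keeps a single slow outlier trial from dominating the comparison between display modes.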
Affiliation(s)
- Michele S Saruwatari
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA; Department of Surgery, MedStar Georgetown University Hospital and Washington Hospital Center, Washington, DC, USA
- Hadi Fooladi Talari
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- Andrew J Matisoff
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA; George Washington University School of Medicine and Health Sciences, Washington, DC, USA
- Karun V Sharma
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA; George Washington University School of Medicine and Health Sciences, Washington, DC, USA
- Kelsey G Donoho
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA; George Washington University School of Medicine and Health Sciences, Washington, DC, USA
- Sonali Basu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA; George Washington University School of Medicine and Health Sciences, Washington, DC, USA
- Pallavi Dwivedi
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- James E Bost
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA; George Washington University School of Medicine and Health Sciences, Washington, DC, USA
- Raj Shekhar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA; IGI Technologies, Silver Spring, MD, USA; George Washington University School of Medicine and Health Sciences, Washington, DC, USA
6
Shabir D, Anjum A, Hamza H, Padhan J, Al-Ansari A, Yaacoub E, Mohammed A, Navkar NV. Development and Evaluation of a Mixed-Reality Tele-ultrasound System. Ultrasound Med Biol 2023; 49:1867-1874. PMID: 37263893; DOI: 10.1016/j.ultrasmedbio.2023.04.017.
Abstract
OBJECTIVE: The objective of this feasibility study was to develop and assess a tele-ultrasound system that enables an expert sonographer (situated at a remote site) to provide real-time guidance to an operator (situated at the imaging site) using a mixed-reality environment. METHODS: An architecture along with the operational workflow of the system was designed, and a prototype was developed that provides guidance in the form of audiovisual cues. The visual cues comprise holograms (of the ultrasound images and ultrasound probe) rendered to the operator using a head-mounted display device. The position and orientation of the ultrasound probe's hologram are remotely controlled by the expert sonographer and guide the placement of a physical ultrasound probe at the imaging site. The developed prototype was evaluated for its performance on a network. In addition, a user study (with 12 participants) was conducted to assess the operator's ability to align the probe under different guidance modes. RESULTS: The network evaluation revealed that the view of the imaging site and the ultrasound images were transferred to the remote site in 233 ± 42 and 158 ± 38 ms, respectively. The expert sonographer was able to transfer data on the position and orientation of the ultrasound probe's hologram to the imaging site in 78 ± 13 ms. The user study indicated that the audiovisual cues are sufficient for an operator to position and orient a physical probe for accurate depiction of the targeted tissue (p < 0.001). The translational and rotational probe placement errors were 1.4 ± 0.6 mm and 5.4 ± 2.2°, respectively. CONCLUSION: The work illustrates the feasibility of using a mixed-reality environment for effective communication between an expert sonographer (ultrasound physician) and an operator. Further studies are required to determine its applicability in a clinical setting during tele-ultrasound.
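Translational placement error of the kind reported above is the Euclidean distance between the guided hologram pose and the achieved physical probe position. A minimal sketch (with hypothetical coordinates, not the study's data):

```python
import math

def translational_error_mm(target, actual):
    """Euclidean distance between two 3D points, in mm."""
    return math.dist(target, actual)

# Hypothetical probe tip positions in mm (illustrative only)
err = translational_error_mm((10.0, 20.0, 30.0), (11.0, 22.0, 32.0))
```

Rotational error would be computed analogously from the angle between target and achieved probe orientations.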
Affiliation(s)
- Dehlela Shabir
- Department of Surgery, Hamad Medical Corporation, Doha, Qatar
- Arshak Anjum
- Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Hawa Hamza
- Department of Surgery, Hamad Medical Corporation, Doha, Qatar
- Elias Yaacoub
- Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Amr Mohammed
- Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Nikhil V Navkar
- Department of Surgery, Hamad Medical Corporation, Doha, Qatar
7
Katayama M, Mitsuno D, Ueda K. Clinical Application to Improve the "Depth Perception Problem" by Combining Augmented Reality and a 3D Printing Model. Plast Reconstr Surg Glob Open 2023; 11:e5071. PMID: 37361506; PMCID: PMC10289554; DOI: 10.1097/gox.0000000000005071.
Abstract
In our experience with intraoperative evaluation and educational application of augmented reality technology, an illusion of depth has been a major problem. To improve this depth perception problem, we conducted two experiments combining various three-dimensional models and holograms and varying the observation angles using an augmented reality device. Methods: In experiment 1, when observing holograms projected on the surface layer of the model (bone model) or holograms projected on a layer deeper than the model (body surface model), the observer's first impression regarding which model made it easier to understand positional relationships was investigated. In experiment 2, to achieve a more quantitative evaluation, the observer was asked to measure the distance between two specific points on the surface and deep layers from two angles in each of the above combinations. Statistical analysis was performed on the measurement error for this distance. Results: In experiment 1, the three-dimensional positional relationships were easier to understand in the bone model than in the body surface model. In experiment 2, there was little difference in the measurement error under either condition, and the error was not large enough to cause a misunderstanding of the depth relationship between the surface and deep layers. Conclusions: Any combination can be used for preoperative examinations and anatomical study purposes. In particular, projecting holograms on a deep model, or observing positional relationships not only from the operator's viewpoint but also from multiple other angles, is more desirable because it reduces confusion caused by the depth perception problem and improves understanding of anatomy.
Affiliation(s)
- Misato Katayama
- Department of Plastic and Reconstructive Surgery, Osaka Medical and Pharmaceutical University, Takatsuki City, Osaka, Japan
- Daisuke Mitsuno
- Department of Plastic and Reconstructive Surgery, Osaka Medical and Pharmaceutical University, Takatsuki City, Osaka, Japan
- Koichi Ueda
- Department of Plastic and Reconstructive Surgery, Osaka Medical and Pharmaceutical University, Takatsuki City, Osaka, Japan
8
Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. PMID: 36706637; DOI: 10.1016/j.media.2023.102757.
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main player in the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 through 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy including use case, technical methodology for registration and tracking, data sources, visualization as well as validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interactions, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, yet only a few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, to pave the way for novel, innovative directions and translation into routine clinical practice.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
9
Baashar Y, Alkawsi G, Wan Ahmad WN, Alomari MA, Alhussian H, Tiong SK. Towards Wearable Augmented Reality in Healthcare: A Comparative Survey and Analysis of Head-Mounted Displays. Int J Environ Res Public Health 2023; 20:3940. PMID: 36900951; PMCID: PMC10002206; DOI: 10.3390/ijerph20053940.
Abstract
Head-mounted displays (HMDs) have the potential to greatly impact the surgical field by maintaining sterile conditions in healthcare environments. Google Glass (GG) and Microsoft HoloLens (MH) are examples of optical HMDs. In this comparative survey related to wearable augmented reality (AR) technology in the medical field, we examine the current developments in wearable AR technology, as well as the medical aspects, with a specific emphasis on smart glasses and HoloLens. The authors searched recent articles (between 2017 and 2022) in the PubMed, Web of Science, Scopus, and ScienceDirect databases and a total of 37 relevant studies were considered for this analysis. The selected studies were divided into two main groups; 15 of the studies (around 41%) focused on smart glasses (e.g., Google Glass) and 22 (59%) focused on Microsoft HoloLens. Google Glass was used in various surgical specialities and preoperative settings, namely dermatology visits and nursing skill training. Moreover, Microsoft HoloLens was used in telepresence applications and holographic navigation of shoulder and gait impairment rehabilitation, among others. However, some limitations were associated with their use, such as low battery life, limited memory size, and possible ocular pain. Promising results were obtained by different studies regarding the feasibility, usability, and acceptability of using both Google Glass and Microsoft HoloLens in patient-centric settings as well as medical education and training. Further work and development of rigorous research designs are required to evaluate the efficacy and cost-effectiveness of wearable AR devices in the future.
Affiliation(s)
- Yahia Baashar
- Faculty of Computing and Informatics, Universiti Malaysia Sabah (UMS), Labuan 87000, Malaysia
- Gamal Alkawsi
- Institute of Sustainable Energy (ISE), Universiti Tenaga Nasional, Kajang 43000, Malaysia
- Faculty of Computer Science and Information Systems, Thamar University, Thamar 87246, Yemen
- Mohammad Ahmed Alomari
- Institute of Informatics and Computing in Energy, Universiti Tenaga Nasional (UNITEN), Kajang 43000, Malaysia
- Hitham Alhussian
- Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia
- Sieh Kiong Tiong
- Institute of Sustainable Energy (ISE), Universiti Tenaga Nasional, Kajang 43000, Malaysia
10
von Haxthausen F, Rüger C, Sieren MM, Kloeckner R, Ernst F. Augmenting Image-Guided Procedures through In Situ Visualization of 3D Ultrasound via a Head-Mounted Display. Sensors (Basel) 2023; 23:2168. PMID: 36850766; PMCID: PMC9961663; DOI: 10.3390/s23042168.
Abstract
Medical ultrasound (US) is a commonly used modality for image-guided procedures. Recent research systems providing an in situ visualization of 2D US images via an augmented reality (AR) head-mounted display (HMD) were shown to be advantageous over conventional imaging through reduced task completion times and improved accuracy. In this work, we continue in the direction of recent developments by describing the first AR HMD application visualizing real-time volumetric (3D) US in situ for guiding vascular punctures. We evaluated the application on a technical level as well as in a mixed-methods user study with a qualitative prestudy and a quantitative main study, simulating a vascular puncture. Participants completed the puncture task significantly faster when using the 3D US AR mode compared to 2D US AR, with a decrease of 28.4% in time. However, no significant differences were observed regarding the success rate of vascular puncture (2D US AR: 50% vs. 3D US AR: 72%). On the technical side, the system offers a low latency of 49.90 ± 12.92 ms and a satisfactory frame rate of 60 Hz. Our work shows the feasibility of a system that visualizes real-time 3D US data via an AR HMD, and our experiments show, furthermore, that this may offer additional benefits in US-guided tasks (i.e., reduced task completion time) over 2D US images viewed in AR by offering a vivid volumetric visualization.
Affiliation(s)
- Felix von Haxthausen
- Institute for Robotics and Cognitive Systems, University of Lübeck, 23562 Lübeck, Germany
- Christoph Rüger
- Department of Surgery, Campus Charité Mitte, Campus Virchow-Klinikum, Experimental Surgery, Charité–Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, Berlin Institute of Health, 10117 Berlin, Germany
- Malte Maria Sieren
- Department of Radiology and Nuclear Medicine, University Hospital Schleswig-Holstein Campus Lübeck, 23569 Lübeck, Germany
- Institute of Interventional Radiology, University Hospital Schleswig-Holstein Campus Lübeck, 23569 Lübeck, Germany
- Roman Kloeckner
- Institute of Interventional Radiology, University Hospital Schleswig-Holstein Campus Lübeck, 23569 Lübeck, Germany
- Floris Ernst
- Institute for Robotics and Cognitive Systems, University of Lübeck, 23562 Lübeck, Germany
11
A multimodal user interface for touchless control of robotic ultrasound. Int J Comput Assist Radiol Surg 2022. PMID: 36565368; PMCID: PMC10363039; DOI: 10.1007/s11548-022-02810-0.
Abstract
PURPOSE: Past research has investigated and developed robotic ultrasound systems. In this context, interfaces that allow interaction with the robotic system are of paramount importance. Few researchers have addressed the issue of developing non-tactile interaction approaches, although these could be beneficial for maintaining sterility during medical procedures. Interaction could be supported by multimodality, which has the potential to enable intuitive and natural interaction. To assess the feasibility of multimodal interaction for non-tactile control of a co-located robotic ultrasound system, a novel human-robot interaction concept was developed. METHODS: The medical use case of needle-based interventions under hybrid computed tomography and ultrasound imaging was analyzed by interviewing four radiologists. From the resulting workflow, interaction tasks were derived which include human-robot interaction. Based on this, characteristics of a multimodal, touchless human-robot interface were elaborated, suitable interaction modalities were identified, and a corresponding interface was developed, which was thereafter evaluated in a user study with eight participants. RESULTS: The implemented interface includes voice commands combined with hand gesture control for discrete control and navigation interaction of the robotic US probe, respectively. The interaction concept was evaluated by the users via a quantitative questionnaire, which indicated average usability. Qualitative analysis of interview results revealed user satisfaction with the implemented interaction methods and potential improvements to the system. CONCLUSION: A multimodal, touchless interaction concept for a robotic US system, for the use case of needle-based procedures in interventional radiology, was developed, incorporating combined voice and hand gesture control. Future steps will include the integration of a solution for the missing haptic feedback and the evaluation of its clinical suitability.
Collapse
|
12
|
Abstract
Augmented reality (AR) is an innovative system that enhances the real world by superimposing virtual objects on reality. The aim of this study was to analyze the application of AR in medicine and which of its technical solutions are the most used. We carried out a scoping review of the articles published between 2019 and February 2022. The initial search yielded a total of 2649 articles. After applying filters, removing duplicates and screening, we included 34 articles in our analysis. The analysis of the articles highlighted that AR has been traditionally and mainly used in orthopedics in addition to maxillofacial surgery and oncology. Regarding the display application in AR, the Microsoft HoloLens Optical Viewer is the most used method. Moreover, for the tracking and registration phases, the marker-based method with a rigid registration remains the most used system. Overall, the results of this study suggested that AR is an innovative technology with numerous advantages, finding applications in several new surgery domains. Considering the available data, it is not possible to clearly identify all the fields of application and the best technologies regarding AR.
Collapse
|
13
|
von Haxthausen F, Moreta-Martinez R, Pose Díez de la Lastra A, Pascau J, Ernst F. UltrARsound: in situ visualization of live ultrasound images using HoloLens 2. Int J Comput Assist Radiol Surg 2022; 17:2081-2091. [PMID: 35776399 PMCID: PMC9515035 DOI: 10.1007/s11548-022-02695-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Accepted: 05/31/2022] [Indexed: 11/24/2022]
Abstract
Purpose Augmented Reality (AR) has the potential to simplify ultrasound (US) examinations which usually require a skilled and experienced sonographer to mentally align narrow 2D cross-sectional US images in the 3D anatomy of the patient. This work describes and evaluates a novel approach to track retroreflective spheres attached to the US probe using an inside-out technique with the AR glasses HoloLens 2. Finally, live US images are displayed in situ on the imaged anatomy. Methods The Unity application UltrARsound performs spatial tracking of the US probe and attached retroreflective markers using the depth camera integrated into the AR glasses—thus eliminating the need for an external tracking system. Additionally, a Kalman filter is implemented to improve the noisy measurements of the camera. US images are streamed wirelessly via the PLUS toolkit to HoloLens 2. The technical evaluation comprises static and dynamic tracking accuracy, frequency and latency of displayed images. Results Tracking is performed with a median accuracy of 1.98 mm/1.81\documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$^\circ $$\end{document}∘ for the static setting when using the Kalman filter. In a dynamic scenario, the median error was 2.81 mm/1.70\documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$^\circ $$\end{document}∘. The tracking frequency is currently limited to 20 Hz. 83% of the displayed US images had a latency lower than 16 ms. Conclusions In this work, we showed that spatial tracking of retroreflective spheres with the depth camera of HoloLens 2 is feasible, achieving a promising accuracy for in situ visualization of live US images. For tracking, no additional hardware nor modifications to HoloLens 2 are required making it a cheap and easy-to-use approach. Moreover, a minimal latency of displayed images enables a real-time perception for the sonographer.
Collapse
Affiliation(s)
- Felix von Haxthausen
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911, Leganés, Spain. .,Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Schleswig-Holstein, Germany.
| | - Rafael Moreta-Martinez
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911, Leganés, Spain
| | - Alicia Pose Díez de la Lastra
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911, Leganés, Spain
| | - Javier Pascau
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911, Leganés, Spain
| | - Floris Ernst
- Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Schleswig-Holstein, Germany
| |
Collapse
|
14
|
HoloUS: Augmented reality visualization of live ultrasound images using HoloLens for ultrasound-guided procedures. Int J Comput Assist Radiol Surg 2021; 17:385-391. [PMID: 34817764 DOI: 10.1007/s11548-021-02526-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Accepted: 10/20/2021] [Indexed: 10/19/2022]
Abstract
PURPOSE Microsoft HoloLens is a pair of augmented reality (AR) smart glasses that could improve the intraprocedural visualization of ultrasound-guided procedures. With the wearable HoloLens headset, an ultrasound image can be virtually rendered and registered with the ultrasound transducer and placed directly in the practitioner's field of view. METHODS A custom application, called HoloUS, was developed using the HoloLens and a portable ultrasound machine connected through a wireless network. A custom 3D-printed case with an AR-pattern for the ultrasound transducer permitted ultrasound image tracking and registration. Voice controls on the HoloLens supported the scaling and movement of the ultrasound image as desired. The ultrasound images were streamed and displayed in real-time. A user study was performed to assess the effectiveness of the HoloLens as an alternative display platform. Novices and experts were timed on tasks involving targeting simulated vessels using a needle in a custom phantom. RESULTS Technical characterization of the HoloUS app was conducted using frame rate, tracking accuracy, and latency as performance metrics. The app ran at 25 frames/s, had an 80-ms latency, and could track the transducer with an average reprojection error of 0.0435 pixels. With AR visualization, the novices' times improved by 17% but the experts' times decreased slightly by 5%, which may reflect the experts' training and experience bias. CONCLUSION The HoloUS application was found to enhance user experience and simplify hand-eye coordination. By eliminating the need to alternately observe the patient and the ultrasound images presented on a separate monitor, the proposed AR application has the potential to improve efficiency and effectiveness of ultrasound-guided procedures.
Collapse
|
15
|
Amiras D, Hurkxkens TJ, Figueroa D, Pratt PJ, Pitrola B, Watura C, Rostampour S, Shimshon GJ, Hamady M. Augmented reality simulator for CT-guided interventions. Eur Radiol 2021; 31:8897-8902. [PMID: 34109488 PMCID: PMC8589738 DOI: 10.1007/s00330-021-08043-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2020] [Revised: 03/18/2021] [Accepted: 05/04/2021] [Indexed: 01/20/2023]
Abstract
Introduction CT-guided interventions are taught using a mentored approach on real patients. It is well established that simulation is a valuable training tool in medicine. This project assessed the feasibility and acceptance of replicating a CT-guided intervention using a bespoke software application with an augmented reality head-mounted display (ARHMD). Methods A virtual patient was generated using a CT dataset obtained from The Cancer Imaging Archive. A surface mesh of a virtual patient was projected into the field-of-view of the operator. ChArUco markers, placed on both the needle and agar jelly phantom, were tracked using RGB cameras built into the ARHMD. A virtual CT slice simulating the needle position was generated on voice command. The application was trialled by senior interventional radiologists and trainee radiologists with a structured questionnaire evaluating face validity and technical aspects. Results Sixteen users trialled the application and feedback was received from all. Eleven felt the accuracy and realism was adequate for training and twelve felt more confident about their CT biopsy skills after this training session. Discussion The study showed the feasibility of simulating a CT-guided procedure with augmented reality and that this could be used as a training tool. Key Points • Simulating a CT-guided procedure using augmented reality is possible. • The simulator developed could be an effective training tool for clinical practical skills. • Complexity of cases can be tailored to address the training level demands.
Collapse
Affiliation(s)
- D Amiras
- Imaging Department, Imperial College Healthcare NHS Trust, London, UK.
- Department of Surgery, Imperial College London, London, UK.
| | - T J Hurkxkens
- Digital Learning Hub, Imperial College London, London, UK
| | - D Figueroa
- Digital Learning Hub, Imperial College London, London, UK
| | - P J Pratt
- Digital Learning Hub, Imperial College London, London, UK
- Imperial College London, London, UK
- Medical iSight Corporation, New York, USA
| | - B Pitrola
- Imaging Department, Imperial College Healthcare NHS Trust, London, UK
| | - C Watura
- Imaging Department, Imperial College Healthcare NHS Trust, London, UK
| | - S Rostampour
- Imaging Department, Imperial College Healthcare NHS Trust, London, UK
| | | | - M Hamady
- Imaging Department, Imperial College Healthcare NHS Trust, London, UK
- Department of Surgery, Imperial College London, London, UK
| |
Collapse
|
16
|
Lareyre F, Chaudhuri A, Adam C, Carrier M, Mialhe C, Raffort J. Applications of Head-Mounted Displays and Smart Glasses in Vascular Surgery. Ann Vasc Surg 2021; 75:497-512. [PMID: 33823254 DOI: 10.1016/j.avsg.2021.02.033] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Revised: 02/22/2021] [Accepted: 02/25/2021] [Indexed: 12/11/2022]
Abstract
OBJECTIVES Advances in virtual, augmented and mixed reality have led to the development of wearable technologies including head mounted displays (HMD) and smart glasses. While there is a growing interest on their potential applications in health, only a few studies have addressed so far their use in vascular surgery. The aim of this review was to summarize the fundamental notions associated with these technologies and to discuss potential applications and current limits for their use in vascular surgery. METHODS A comprehensive literature review was performed to introduce the fundamental concepts and provide an overview of applications of HMD and smart glasses in surgery. RESULTS HMD and smart glasses demonstrated a potential interest for the education of surgeons including anatomical teaching, surgical training, teaching and telementoring. Applications for pre-surgical planning have been developed in general and cardiac surgery and could be transposed for a use in vascular surgery. The use of wearable technologies in the operating room has also been investigated in both general and cardiovascular surgery and demonstrated its potential interest for image-guided surgery and data collection. CONCLUSION Studies performed so far represent a proof of concept of the interest of HMD and smart glasses in vascular surgery for education of surgeons and for surgical practice. Although these technologies exhibited encouraging results for applications in vascular surgery, technical improvements and further clinical research in large series are required before hoping using them in daily clinical practice.
Collapse
Affiliation(s)
- Fabien Lareyre
- Department of Vascular Surgery, Hospital of Antibes-Juan-les-Pins, France; Université Côte d'Azur, CHU, Inserm U1065, C3M, Nice, France.
| | - Arindam Chaudhuri
- Bedfordshire-Milton Keynes Vascular Centre, Bedfordshire Hospitals NHS Foundation Trust, Bedford, UK
| | - Cédric Adam
- Laboratory of Applied Mathematics and Computer Science (MICS), CentraleSupélec, Université Paris-Saclay, France
| | - Marion Carrier
- Laboratory of Applied Mathematics and Computer Science (MICS), CentraleSupélec, Université Paris-Saclay, France
| | - Claude Mialhe
- Cardiovascular Surgery Unit, Cardio Thoracic Centre of Monaco, Monaco
| | - Juliette Raffort
- Université Côte d'Azur, CHU, Inserm U1065, C3M, Nice, France; Clinical Chemistry Laboratory, University Hospital of Nice, France
| |
Collapse
|
17
|
Using virtual 3D-models in surgical planning: workflow of an immersive virtual reality application in liver surgery. Langenbecks Arch Surg 2021; 406:911-915. [PMID: 33710462 PMCID: PMC8106601 DOI: 10.1007/s00423-021-02127-7] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2020] [Accepted: 02/08/2021] [Indexed: 12/21/2022]
Abstract
Purpose Three-dimensional (3D) surgical planning is widely accepted in liver surgery. Currently, the 3D reconstructions are usually presented as 3D PDF data on regular monitors. 3D-printed liver models are sometimes used for education and planning. Methods We developed an immersive virtual reality (VR) application that enables the presentation of preoperative 3D models. The 3D reconstructions are exported as STL files and easily imported into the application, which creates the virtual model automatically. The presentation is possible in “OpenVR”-ready VR headsets. To interact with the 3D liver model, VR controllers are used. Scaling is possible, as well as changing the opacity from invisible over transparent to fully opaque. In addition, the surgeon can draw potential resection lines on the surface of the liver. All these functions can be used in a single or multi-user mode. Results Five highly experienced HPB surgeons of our department evaluated the VR application after using it for the very first time and considered it helpful according to the “System Usability Scale” (SUS) with a score of 76.6%. Especially with the subitem “necessary learning effort,” it was shown that the application is easy to use. Conclusion We introduce an immersive, interactive presentation of medical volume data for preoperative 3D liver surgery planning. The application is easy to use and may have advantages over 3D PDF and 3D print in preoperative liver surgery planning. Prospective trials are needed to evaluate the optimal presentation mode of 3D liver models. Supplementary Information The online version contains supplementary material available at 10.1007/s00423-021-02127-7.
Collapse
|
18
|
Abstract
Current developments in the field of extended reality (XR) could prove useful in the optimization of surgical workflows, time effectiveness and postoperative outcome. Although still primarily a subject of research, the state of XR technologies is rapidly improving and approaching feasibility for a broad clinical application. Surgical fields of application of XR technologies are currently primarily training, preoperative planning and intraoperative assistance. For all three areas, products already exist (some clinically approved) and technical feasibility studies have been conducted. In teaching, the use of XR can already be assessed as fundamentally practical and meaningful but still needs to be evaluated in large multicenter studies. In preoperative planning XR can also offer advantages, although technical limitations often impede routine use; however, for cases of intraoperative use informative evaluation studies are mostly lacking, so that an assessment is not yet possible in a meaningful way. Furthermore, there is a lack of assessments regarding cost-effectiveness in all three areas. The XR technologies enable proven advantages in surgical workflows despite the lack of high-quality evaluation with respect to the practical and clinical use of XR. New concepts for effective interaction with XR media also need to be developed. In the future, further research progress and technical developments in the field can be expected.
Collapse
Affiliation(s)
- Christoph Rüger
- Chirurgische Klinik, Campus Charité Mitte|Campus Virchow-Klinikum, Experimentelle Chirurgie, Charité - Universitätsmedizin Berlin, Augustenburger Platz 1, 13353, Berlin, Deutschland
| | - Simon Moosburner
- Chirurgische Klinik, Campus Charité Mitte|Campus Virchow-Klinikum, Experimentelle Chirurgie, Charité - Universitätsmedizin Berlin, Augustenburger Platz 1, 13353, Berlin, Deutschland
| | - Igor M Sauer
- Chirurgische Klinik, Campus Charité Mitte|Campus Virchow-Klinikum, Experimentelle Chirurgie, Charité - Universitätsmedizin Berlin, Augustenburger Platz 1, 13353, Berlin, Deutschland.
- Matters of Activity. Image Space Material, Berlin, Deutschland.
| |
Collapse
|