1. Hamza H, Al-Ansari A, Navkar NV. Technologies Used for Telementoring in Open Surgery: A Scoping Review. Telemed J E Health 2024; 30:1810-1824. [PMID: 38546446] [DOI: 10.1089/tmj.2023.0669]
Abstract
Background: Telementoring technologies enable a remote mentor to guide a mentee in real time during surgical procedures. This addresses challenges such as lack of expertise and limited surgical training/education opportunities in remote locations. This review aims to provide a comprehensive account of these technologies tailored for open surgery. Methods: A comprehensive scoping review of the scientific literature was conducted using the PubMed, ScienceDirect, ACM Digital Library, and IEEE Xplore databases. Broad and inclusive searches were performed to identify articles reporting telementoring or teleguidance technologies in open surgery. Results: Screening of the search results yielded 43 articles describing surgical telementoring for the open approach. The studies were categorized based on the type of open surgery (surgical specialty, surgical procedure, and stage of clinical trial), the telementoring technology used (information transferred between mentor and mentee, devices used for rendering the information), and assessment of the technology (experience level of mentor and mentee, study design, and assessment criteria). The majority of the telementoring technologies focused on trauma-related surgeries, and mixed reality headsets were commonly used for rendering information (telestrations, surgical tools, or hand gestures) to the mentee. These technologies were primarily assessed on high-fidelity synthetic phantoms. Conclusions: Despite longer operative times, these telementoring technologies demonstrated clinical viability during open surgeries through improved performance and confidence of the mentee. In general, the use of immersive devices and annotations appears promising, although further clinical trials will be required to thoroughly assess their benefits.
Affiliation(s)
- Hawa Hamza
- Department of Surgery, Hamad Medical Corporation, Doha, Qatar
- Nikhil V Navkar
- Department of Surgery, Hamad Medical Corporation, Doha, Qatar
2. Franson D, Dupuis A, Gulani V, Griswold M, Seiberlich N. A System for Real-Time, Online Mixed-Reality Visualization of Cardiac Magnetic Resonance Images. J Imaging 2021; 7:274. [PMID: 34940741] [PMCID: PMC8709155] [DOI: 10.3390/jimaging7120274]
Abstract
Image-guided cardiovascular interventions are rapidly evolving procedures that necessitate imaging systems capable of rapid data acquisition and low-latency image reconstruction and visualization. Compared to alternative modalities, Magnetic Resonance Imaging (MRI) is attractive for guidance in complex interventional settings thanks to excellent soft tissue contrast and large fields-of-view without exposure to ionizing radiation. However, most clinically deployed MRI sequences and visualization pipelines exhibit poor latency characteristics, and spatial integration of complex anatomy and device orientation can be challenging on conventional 2D displays. This work demonstrates a proof-of-concept system linking real-time cardiac MR image acquisition, online low-latency reconstruction, and a stereoscopic display to support further development in real-time MR-guided intervention. Data are acquired using an undersampled, radial trajectory and reconstructed via parallelized through-time radial generalized autocalibrating partially parallel acquisition (GRAPPA) implemented on graphics processing units. Images are rendered for display in a stereoscopic mixed-reality head-mounted display. The system is successfully tested by imaging standard cardiac views in healthy volunteers. Datasets comprising one slice (46 ms), two slices (92 ms), and three slices (138 ms) are collected, with the acquisition time of each listed in parentheses. Images are displayed with latencies of 42 ms/frame or less for all three conditions. Volumetric data are acquired at one volume per heartbeat with acquisition times of 467 ms and 588 ms when 8 and 12 partitions are acquired, respectively. Volumes are displayed with a latency of 286 ms or less. The faster-than-acquisition latencies for both planar and volumetric display enable real-time 3D visualization of the heart.
Affiliation(s)
- Dominique Franson
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Andrew Dupuis
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Vikas Gulani
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Mark Griswold
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Department of Radiology, Case Western Reserve University, Cleveland, OH 44106, USA
- Nicole Seiberlich
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
3. Kwan AC, Salto G, Cheng S, Ouyang D. Artificial Intelligence in Computer Vision: Cardiac MRI and Multimodality Imaging Segmentation. Curr Cardiovasc Risk Rep 2021; 15:18. [PMID: 35693045] [PMCID: PMC9187294] [DOI: 10.1007/s12170-021-00678-4]
Abstract
Purpose of Review Anatomical segmentation has played a major role within clinical cardiology. Novel techniques through artificial intelligence-based computer vision have revolutionized this process through both automation and novel applications. This review discusses the history and clinical context of cardiac segmentation to provide a framework for a survey of recent manuscripts in artificial intelligence and cardiac segmentation. We aim to clarify for the reader the clinical question of "Why do we segment?" in order to understand the question of "Where is current research, and where should it be?" Recent Findings There has been increasing research in cardiac segmentation in recent years. Segmentation models are most frequently based on a U-Net structure. Multiple innovations have been added in terms of pre-processing or connection to analysis pipelines. Cardiac MRI is the most frequently segmented modality, due in part to the presence of publicly available, moderately sized, computer vision competition datasets. Further progress in data availability, model explanation, and clinical integration is being pursued. Summary The task of cardiac anatomical segmentation has experienced massive strides forward within the past five years due to convolutional neural networks. These advances provide a basis for streamlining image analysis, and a foundation for further analysis by both computer and human systems. While technical advances are clear, clinical benefit remains nascent. Novel approaches may improve measurement precision by decreasing inter-reader variability and also appear to have the potential for larger-reaching effects in the future within integrated analysis pipelines.
Affiliation(s)
- Alan C Kwan
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA
- Gerran Salto
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA
- Division of Cardiovascular Medicine, Brigham and Women's Hospital, Boston, MA
- Framingham Heart Study, Framingham, MA
- Susan Cheng
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA
- Division of Cardiovascular Medicine, Brigham and Women's Hospital, Boston, MA
- Framingham Heart Study, Framingham, MA
- David Ouyang
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA
4. Morales Mojica CM, Velazco-Garcia JD, Pappas EP, Birbilis TA, Becker A, Leiss EL, Webb A, Seimenis I, Tsekos NV. A Holographic Augmented Reality Interface for Visualizing of MRI Data and Planning of Neurosurgical Procedures. J Digit Imaging 2021; 34:1014-1025. [PMID: 34027587] [DOI: 10.1007/s10278-020-00412-3]
Abstract
The recent introduction of wireless head-mounted displays (HMDs) promises to enhance 3D image visualization by immersing the user into 3D morphology. This work introduces a prototype holographic augmented reality (HAR) interface for the 3D visualization of magnetic resonance imaging (MRI) data for planning neurosurgical procedures. The computational platform generates a HAR scene that fuses pre-operative MRI sets, segmented anatomical structures, and a tubular tool for planning an access path to the targeted pathology. The operator can manipulate the presented images and segmented structures and perform path-planning using voice and gestures. On the fly, the software uses predefined forbidden regions to prevent the operator from harming vital structures. In silico studies using the platform with a HoloLens HMD assessed its functionality as well as the computational load and memory usage for different tasks. A preliminary qualitative evaluation revealed that holographic visualization of high-resolution 3D MRI data offers an intuitive and interactive perspective of the complex brain vasculature and anatomical structures. This initial work suggests that immersive experiences may be an unparalleled tool for planning neurosurgical procedures.
Affiliation(s)
- Cristina M Morales Mojica
- MRI Lab, Department of Computer Science, University of Houston, 4800 Calhoun Road PGH 501, Houston, TX, USA
- Jose D Velazco-Garcia
- MRI Lab, Department of Computer Science, University of Houston, 4800 Calhoun Road PGH 501, Houston, TX, USA
- Eleftherios P Pappas
- Medical Physics Laboratory, School of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Aaron Becker
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX, USA
- Ernst L Leiss
- MRI Lab, Department of Computer Science, University of Houston, 4800 Calhoun Road PGH 501, Houston, TX, USA
- Andrew Webb
- C.J. Gorter Center for High Field MRI, Leiden University Medical Center, Leiden, Netherlands
- Ioannis Seimenis
- Medical Physics Laboratory, School of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Nikolaos V Tsekos
- MRI Lab, Department of Computer Science, University of Houston, 4800 Calhoun Road PGH 501, Houston, TX, USA
5. Velazco-Garcia JD, Shah DJ, Leiss EL, Tsekos NV. A modular and scalable computational framework for interactive immersion into imaging data with a holographic augmented reality interface. Comput Methods Programs Biomed 2021; 198:105779. [PMID: 33045556] [DOI: 10.1016/j.cmpb.2020.105779]
Abstract
BACKGROUND AND OBJECTIVE Modern imaging scanners produce an ever-growing body of 3D/4D multimodal data requiring image analytics and visualization of fused images, segmentations, and information. For the latter, augmented reality (AR) with head-mounted displays (HMDs) has shown potential. This work describes a framework (FI3D) for interactive immersion with data, integration of image processing and analytics, and rendering and fusion with an AR interface. METHODS The FI3D was designed and endowed with modules to communicate with peripherals, including imaging scanners and HMDs, and to provide computational power for data acquisition and processing. The core of FI3D is deployed to a dedicated computational unit that performs the computationally demanding processes in real time, and the HMD is used as a display output peripheral and an input peripheral through gestures and voice commands. FI3D offers dedicated, user-made processing and analysis modules, which users can customize and optimize for a particular workflow while incorporating current or future libraries. RESULTS The FI3D framework was used to develop a workflow for processing, rendering, and visualization of CINE MRI cardiac sets. In this version, the data were loaded from a remote database, and the endocardium and epicardium of the left ventricle (LV) were segmented using a machine learning model and transmitted to a HoloLens HMD to be visualized in 4D. Performance results show that the system is capable of maintaining an image stream of one image per second at a resolution of 512 × 512. It can also modify visual properties of the holograms at one update per 16 milliseconds (62.5 Hz) while providing enough resources for the segmentation and surface reconstruction tasks without hindering the HMD. CONCLUSIONS We provide a system design and framework to be used as a foundation for medical applications that benefit from AR visualization, removing several technical challenges from the development pipeline.
Affiliation(s)
- Jose D Velazco-Garcia
- MRI Lab, Dept. of CS, University of Houston, 4800 Calhoun Road PGH 501, Houston, TX, USA.
- Dipan J Shah
- Cardiovascular MRI Lab, Houston Methodist DeBakey Heart and Vascular Center, 6550 Fannin St., Smith Tower - Suite 1801, Houston, USA
- Ernst L Leiss
- MRI Lab, Dept. of CS, University of Houston, 4800 Calhoun Road PGH 501, Houston, TX, USA
- Nikolaos V Tsekos
- MRI Lab, Dept. of CS, University of Houston, 4800 Calhoun Road PGH 501, Houston, TX, USA
6. Velazco-Garcia JD, Navkar NV, Balakrishnan S, Abi-Nahed J, Al-Rumaihi K, Darweesh A, Al-Ansari A, Christoforou EG, Karkoub M, Leiss EL, Tsiamyrtzis P, Tsekos NV. End-user evaluation of software-generated intervention planning environment for transrectal magnetic resonance-guided prostate biopsies. Int J Med Robot 2020; 17:1-12. [DOI: 10.1002/rcs.2179]
Affiliation(s)
- Adham Darweesh
- Department of Clinical Imaging, Hamad Medical Corporation, Doha, Qatar
- Mansour Karkoub
- Department of Mechanical Engineering, Texas A&M University at Qatar, Doha, Qatar
- Ernst L. Leiss
- Department of Computer Science, University of Houston, Houston, Texas, USA